AI Brain Fry
2026 is the year you'll burn out
AI is supposed to save time, but it makes me feel both productive and behind at the same time.
I recently watched a video about the disconnect between the expected benefits of AI and employees’ actual experiences. I agree with it. I’m not saying there are no benefits to AI; there definitely are. But the expectation is different from reality, and that gap is widest between those who understand AI only in theory and actual AI practitioners. The video notes that many workers report increased anxiety and mental fatigue because they are burdened with the time-consuming work of correcting AI errors and teaching themselves complex new AI tools without adequate company support. Expectations at work have quietly expanded, and people have absorbed that expansion without it ever being explicitly acknowledged.
No one tells you your job responsibilities are bigger because of AI. It shows up in smaller ways: an expectation that you’re learning it, experimenting with it, integrating it into your workflow, and staying current as the tools keep changing. None of that is written down as additional scope, but it accumulates, and much of it happens outside clearly defined work hours. Is this any different from the technological tools that came before it? Yes, because AI is changing so quickly that companies cannot keep up with formal learning and development programs, and the people who rise to the top in wielding it effectively are the ones who go out of their way, outside work hours, to learn and build things with AI.
Once you actually sit down to use these tools, the experience doesn’t quite match the narrative of straightforward efficiency gains. You start with something simple. AI helps you get to a first pass quickly, and that part works well. It can summarize, synthesize, and generate something that looks coherent or even functional. There’s a moment where it feels like this is working exactly as intended.
Then you try to expand beyond summarization or using AI as a chatbot, and you realize its output is not quite right. Not wrong enough to throw away, but not right enough to trust. So you adjust. You refine inputs, add context, rethink structure. Then you start questioning whether the issue is the prompt, the system around it, or the way you’re organizing information. If you have any background in ML, this step is almost automatic. You start thinking in terms of evals, feedback loops, and better scaffolding.
Each of those paths makes sense, and the friction to explore them is low enough that you don’t have to choose, so you don’t. Instead, you go deeper into all the options, all the ways you can optimize, and you end up on what feel like side quests instead of burning down the main quest line. What looks like efficiency turns into expansion. Sometimes you complete the task faster, but more likely you’re widening the scope of what you’re attempting within the same block of time. By the end, you’ve produced more, but it’s not clear you’ve made proportional progress on the original goal.
There’s research starting to reflect this more clearly. A study out of UC Berkeley found that AI didn’t reduce work in practice. It “consistently intensified” it, with people taking on more tasks and extending work into hours that weren’t previously considered work time. As someone who wrote about the rise of agentic AI over a year ago, and tinkered with context engineering and multi-agent systems before they registered for most people at work, I think this maps closely to how it feels in practice. And if you don’t feel this yet, you might soon.
It’s not just the volume; the nature of the work shifts. You’re no longer just producing output; you’re constantly evaluating it. You’re reading, checking, second-guessing, deciding whether something that looks correct actually holds up. Researchers have started calling this AI brain fry: mental fatigue that comes specifically from supervising rather than doing, from the overhead of managing AI output rather than the work itself.
Not everyone experiences this the same way, and the research accounts for that. When AI is used to replace repetitive tasks, burnout actually decreases. When teams have organized AI integration rather than leaving individuals to figure it out alone, mental strain is measurably lower. The people who seem fine are often in environments where someone made deliberate choices about how AI gets used and what it’s for. The people who aren’t fine are often in the opposite situation: high oversight, unclear expectations, and an implicit message that AI means you should now be doing more. When organizations don’t communicate clearly about AI’s role, the study found, mental fatigue scores were 12% higher. That ambiguity alone is a stressor. So if someone in your life doesn’t feel this, it might not be about how much they use AI. It might be about where they work, or whether anyone communicated what the mandate actually means for their specific role.
What makes this harder is that there isn’t stable ground yet. Tooling changes constantly, models are continually being updated, and best practices are still forming. Most people are learning alongside their actual jobs rather than within dedicated time. In larger organizations, you add constraints around tooling, access, and team-level priorities. The result is a lot of parallel experimentation without much shared infrastructure.
AI is framed as leverage, but it also raises expectations. If you can do more, then the baseline shifts. And if everyone is being told the same thing, it starts to feel like a quiet competition to show you’re keeping up.
There’s also a reluctance to say any of this out loud. When I do talk about it, I don’t get pushback. I get private messages from people saying they feel the same way but don’t want to be seen as resistant or behind. And nobody feels good about admitting they’re behind, given the economic pressure of an industry where yearly layoffs are the norm; people keep chasing a bar that keeps moving, because no one wants to end up part of a permanent underclass.
A lot of this learning is happening outside of formal work: nights, weekends, in between everything else. It’s framed as personal growth, but it’s also directly tied to company priorities. If AI adoption matters at the company level, then the cost of learning and integrating it shouldn’t fall entirely on individuals to figure out on their own time. There’s reporting showing that employees already feel AI is increasing their workload and that expectations are rising faster than support.
I still use AI. There are things it’s good at, and it does enable new kinds of work. But the day-to-day experience of using it feels both productive and exhausting at the same time. A lot of us are spending our “spare time” learning, experimenting, and trying to stay relevant, and for what? To generate shareholder value? If you’re restricted in which tools you can use, then maybe yes.
If any of this feels familiar, you’re not imagining it. The least we can do is call it what it is.