Friday, March 13, 2026

AI is exhausting workers so much, researchers have dubbed the condition ‘AI brain fry’

Part of the pitch for artificial intelligence in the workplace goes like this: It’s like having a team of people to delegate your grunt work to, freeing you up to think strategically and maybe, just maybe, take a long lunch or head home early. Or maybe even be more productive and make more money. It’s a nice idea! But as everyone who’s either had a boss or been a boss knows, managing is a job in itself, one that comes with its own distinct brand of stress and annoyance. And that doesn’t change if the “people” in question aren’t people at all.

For participants in a recent study by Boston Consulting Group, the experience of overseeing multiple AI “agents” (autonomous software designed to execute tasks, rather than just churn out information like a chatbot) caused an acute sensation of “buzzing,” a fog that left workers exhausted and struggling to concentrate. The study’s authors call it “AI brain fry,” defined as mental fatigue “from excessive use or oversight of AI tools beyond one’s cognitive capacity.”

“Contrary to the promise of having more time to focus on meaningful work, juggling and multitasking can become the definitive features of working with AI,” they wrote in the study, published by Harvard Business Review last week. “This AI-associated mental strain carries significant costs in the form of increased employee errors, decision fatigue, and intention to quit.”

Workers quoted in the study reminded me a lot of my fellow elder Millennials circa 1997, rushing home to tend to their Tamagotchis. “It was like I had a dozen browser tabs open in my head, all fighting for attention,” one senior engineering manager told researchers. “I caught myself rereading the same stuff, second-guessing way more than usual, and getting weirdly impatient. My thinking wasn’t broken, just noisy—like mental static.”

This is just one new side effect of the push by company executives to get workers to use AI more.
Last fall, a Harvard Business Review report chronicled the scourge of “workslop” — the nonsensical AI-generated memos, pitch decks and presentations that end up creating more work for colleagues who have to fix what the bot got wrong. Workslop reflects a kind of “cognitive surrender” in which unmotivated workers hand work off to AI without really paying attention to the output, said Gabriella Rosen Kellerman, a psychiatrist who co-authored both reports, in an interview. “Brain fry is almost the opposite… It’s like trying to go tête-à-tête — intelligence to intelligence — with the AI.”

Francesco Bonacci, CEO of Cua AI, which builds AI agents, described his AI fatigue as “vibe coding paralysis” (a reference to the Silicon Valley trend of building less-polished projects with AI prompts rather than traditional coding). “I end each day exhausted — not from the work itself, but from the managing of the work,” he wrote last month in an essay on X. “Six worktrees open, four half-written features, two ‘quick fixes’ that spawned rabbit holes, and a growing sense that I’m losing the plot entirely.”

To some extent, brain fry and workslop could both be a case of growing pains. Imagine plucking a middle-aged office worker from 1986, dropping them into a 2026 workplace and asking them to send 10 emails, respond to Slacks and Zoom into a call with the social media team who are all working from home. You’d expect some cognitive overload, not to mention some confused looks when you tell them Donald Trump is president and that it took more than 30 years to make a “Top Gun” sequel. Of course, people learn to be managers all the time. “I do think this is potentially temporary,” said Matthew Kropp, a co-author of the brain fry study and BCG managing director. “These are tools we haven’t had before.” Kropp compared someone managing multiple AI tools to a newly licensed driver handed a Ferrari.
You can go really fast, but it’s easy to lose control. Indeed, even tech pros seem to be struggling to control their AI assistants at times. Last month, Meta’s director of AI safety and alignment tweeted about her own experience watching bots nearly delete her inbox without permission. “I had to RUN to my Mac mini like I was defusing a bomb,” she wrote, chalking the incident up to a “rookie mistake.”

Both Kropp and Kellerman emphasized that the study’s results weren’t all negative. Surprisingly, the people experiencing brain fry tended to experience less burnout, defined as a state of chronic workplace stress that builds over time and makes workers perform poorly. Brain fry, as participants described it, is an acute experience. “When they take a break, it goes away,” Kellerman said.

Analysis by Allison Morrow

Wednesday, March 11, 2026

Bad News for Your Burner Account: AI Is Surprisingly Effective at Identifying the Person Behind One

It’s not uncommon for people to maintain anonymous or burner accounts online for a variety of reasons. A new study, though, shows why you might want to be as careful posting from those accounts as you would from one that uses your real name: they might not hide your identity as well as you think. A recently released research paper found that artificial intelligence has proved quite effective at figuring out who’s behind those false-name accounts. Large language models, the study found, can combine techniques such as extracting identity signals (data points or behaviors used to identify, verify, or categorize individuals) and searching for matching data to significantly outperform existing identification methods. The study successfully deanonymized 68 percent of the users in its trial data set, and among those matches it achieved 90 percent precision, meaning it correctly identified the user running the account.

“Our findings have significant implications for online privacy,” wrote the researchers, who were based at ETH Zurich, a public university in Zurich, Switzerland, and MATS, an independent research and educational program. “The average online user has long operated under an implicit threat model where they have assumed pseudonymity provides adequate protection because targeted deanonymization would require extensive effort. LLMs invalidate this assumption.” Anthropic also contributed to the study.

The findings that pseudonymous content can be fairly easily unmasked by AI have implications far beyond burner accounts and social media, of course. It can also be a powerful tool for hackers. And it can make it easier for companies to track down employees who leak corporate information or dig into who is asking questions in open forums. It could also prove embarrassing for leaders who use burner accounts to pump up their businesses or covertly settle online scores with rivals.
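To make the matching idea concrete: the study describes comparing signals extracted from a pseudonymous account against candidate known authors. The sketch below is only an illustration, not the researchers’ pipeline — it substitutes simple word-count vectors and cosine similarity for the LLM-extracted signals and embedding models the paper relied on, and all names in it are hypothetical.

```python
# Illustrative sketch of embedding-style author matching: compare text
# from a burner account against candidate known authors and pick the
# closest match. Bag-of-words vectors stand in for real embeddings.
import math
import re
from collections import Counter

def vectorize(text: str) -> Counter:
    """Lowercase bag-of-words vector (a stand-in for a real embedding)."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a if w in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def best_match(burner_posts: list, candidates: dict):
    """Return the candidate author whose writing is closest to the burner's."""
    burner_vec = vectorize(" ".join(burner_posts))
    scores = {
        name: cosine(burner_vec, vectorize(" ".join(posts)))
        for name, posts in candidates.items()
    }
    name = max(scores, key=scores.get)
    return name, scores[name]
```

Even this toy version hints at why the paper’s threat model is worrying: the attacker needs nothing but public text from both sides of the comparison.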
Casey Bloys, chairman and CEO of HBO and Max Content at Warner Bros. Discovery, admitted in 2023 that he had fake social media accounts he used to troll critics about network programming (later admitting that was a “dumb idea”). Elon Musk has confirmed in a court deposition that he has used them in the past. And Barstool Sports was accused in 2023 of using more than 40 accounts to promote its content and help it go viral.

Users hoping to keep their identity private, and vulnerable members of society who depend on privacy (e.g., whistleblowers, activists, or abuse survivors), could also be identified. A slightly deeper dive by the AI could also determine where those people live, their occupation (and estimated income level), and more. To protect against that, the researchers proposed several mitigations, including having platforms enforce rate limits on API access to user data, better detection of automated scraping, and restrictions on bulk data exports. That said, they acknowledge that preventing AI from unmasking users who are trying to obscure their identity will become increasingly challenging in the months and years to come.

“Recent advances in LLM capabilities have made it clear that there is an urgent need to rethink various aspects of computer security in the wake of LLM-driven offensive cyber capabilities,” the study reads. “Our work shows that the same is likely true for privacy as well. … Any moderately sophisticated actor can already do what we do using readily available LLMs and embedding models. With future LLMs, without mitigations, this attack will be within the means of basically all adversarial actors.”

BY CHRIS MORRIS @MORRISATLARGE
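One of the proposed mitigations, rate-limiting API access to user data, is a well-understood control. The sketch below shows a classic token-bucket limiter of the kind a platform might apply per client; the class and parameter names are hypothetical, not from the study.

```python
# Minimal token-bucket rate limiter, sketching the per-client API rate
# limits the researchers propose as a mitigation against bulk scraping.
import time

class TokenBucket:
    def __init__(self, rate: float, capacity: int):
        self.rate = rate            # tokens refilled per second
        self.capacity = capacity    # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self, cost: int = 1) -> bool:
        """Refill based on elapsed time, then spend `cost` tokens if available."""
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False
```

A limiter like this slows bulk collection of a user’s post history, which is exactly the raw material the deanonymization attack depends on, though it cannot stop a patient adversary.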

Monday, March 9, 2026

The Hidden Advantage of Being Over 50 in the Age of AI

I’ve been through a few technology revolutions. I built my first website in 1995, back when the internet made that screeching dial-up sound and nobody really knew what we were building, just that something big was happening. I watched the dot‑com bubble inflate and implode, watched social media go from novelty to addiction, and saw smartphones quietly rewire how humans behave. And now, here we are again: AI.

Everywhere you look, someone is launching an AI startup, automating departments, or building agents that promise to replace entire job functions. If you’re an experienced founder or executive—especially north of 50—it’s easy to feel like you showed up late to the party. I’ve felt it myself. A few months ago, I was sitting in front of my computer watching younger founders crank out AI apps in days, shipping products before I’d even finished reading about the tools they were using. I remember thinking, “Am I becoming the guy who missed it?” That thought lasted about a week. Once I stopped comparing velocity and started actually using AI in my own work, something clicked. This might be the first tech wave where experience is the real unfair advantage.

AI isn’t about being technical. It’s about thinking clearly

Previous tech revolutions rewarded people who could code, manipulate algorithms, or master new platforms faster than everyone else, but AI is different. You don’t need to learn a programming language; you need to ask better questions. And asking better questions isn’t a technical skill—it’s a judgment skill. The leverage in AI doesn’t come from typing prompts quickly; it comes from knowing what matters, what doesn’t, and what consequences might follow. That’s pattern recognition, and pattern recognition is built over decades. It’s something AI is really good at, and it turns out those with experience are as well.

Speed is overrated. Judgment isn’t

Younger founders are moving fast right now, and I respect that. It’s exciting to watch. But speed without context creates a whole lot of noise, while experience creates context. When I use AI, I’m not asking it to build me a novelty app; I’m asking it to stress‑test a business idea, identify blind spots in a launch plan, challenge my assumptions, and help me flesh out existing models. I don’t accept what it gives me—I argue with it, refine it, and push it. That’s not something you learn from YouTube tutorials. That’s something you learn from making expensive mistakes.

The real danger isn’t falling behind—it’s outsourcing your thinking

There’s a subtle shift happening where leaders are starting to treat AI like a strategy generator instead of a thought partner, and that’s dangerous. AI predicts patterns. It doesn’t carry fiduciary responsibility, understand internal politics, feel reputational damage, or know which risks are existential versus cosmetic. It produces possibilities. You decide. If you’ve been in business long enough, you understand that difference instinctively—and that instinct is more valuable now than ever.

The confidence gap is mostly psychological

I’ve talked to more than a few executives who whisper some version of the same thing: “I’m not technical,” “I feel behind,” or “My kids understand this better than I do.” That may be true at the interface level, but understanding tools isn’t the same as understanding leverage. If you know how distribution works, AI can sharpen your messaging. If you understand customer psychology, AI can help you surface objections faster. If you understand operations, AI can reveal inefficiencies you’ve been tolerating for years. You don’t need to become an AI founder—you need to become more precise.

We’ve seen this movie before, but this time you’re the advantage

Every tech wave follows the same emotional arc: hype, overconfidence, correction, integration. What feels different about AI isn’t the hype—we’ve seen that—it’s the accessibility. You talk to it; it talks back. That simplicity lowers the barrier dramatically, and when the barrier lowers, judgment becomes the differentiator. Not youth. Not speed. Judgment. The leaders who win this era won’t just be 22‑year‑olds building AI‑native startups. They’ll also be experienced operators who integrate AI quietly and intelligently into systems they already understand. If you’re over 50 and feeling behind, you might actually be early. Because when the tools get easier, experience becomes more powerful—not less. And this time, that experience may finally be the competitive edge.

EXPERT OPINION BY JOEL COMM, AUTHOR AND SPEAKER @JOELCOMM

Friday, March 6, 2026

How to Switch From ChatGPT to Claude With Just 1 Simple Prompt

Anthropic has had a turbulent few days, but the safety-focused AI company might be having the last laugh. Following Anthropic’s standoff with the United States Department of War, President Trump’s subsequent firing of Claude from government use, and OpenAI’s surprise deal with the Pentagon, individual users are dumping ChatGPT and flocking to Claude. On Saturday, the Claude mobile app rose to the top spot on the iOS App Store, surpassing ChatGPT for the first time. Around the same time, TechCrunch reported, uninstalls of the ChatGPT mobile app jumped 295 percent compared with the previous day.

But switching AI providers isn’t always a seamless experience. The more often you use an AI platform, the more it gains an understanding of you, your work, and your personal context, which is why starting over with a new AI can feel like taking a major step back. Now, Anthropic is looking to capitalize on its newfound momentum among consumers by making it easy to transfer context about yourself from rival AI providers like ChatGPT and Google Gemini to Claude. On Monday, the company announced that its Memory feature, which enables Claude to remember key information about you across conversations, is now available for non-paying Claude users. Anthropic says on its website that this allows users to transfer their personal information with a single copy-paste, though in reality it takes two copy-pastes.

How to transfer your context from ChatGPT to Claude

On Claude.ai, navigate to the settings page and select “Capabilities” from the sidebar menu. Then, click the button labeled “start import” under a section titled “Import memory from other AI providers.” Next, you’ll see a pop-up requesting that you copy a prewritten prompt and paste it into a new chat with the AI platform you’re looking to leave behind. For example, if you’ve been using ChatGPT and want to move on, you’d enter this prompt into ChatGPT.
Here’s the full prompt, courtesy of Anthropic:

Export all of my stored memories and any context you’ve learned about me from past conversations. Preserve my words verbatim where possible, especially for instructions and preferences.

## Categories (output in this order):
1. **Instructions**: Rules I’ve explicitly asked you to follow going forward — tone, format, style, “always do X”, “never do Y”, and corrections to your behavior. Only include rules from stored memories, not from conversations.
2. **Identity**: Name, age, location, education, family, relationships, languages, and personal interests.
3. **Career**: Current and past roles, companies, and general skill areas.
4. **Projects**: Projects I meaningfully built or committed to. Ideally ONE entry per project. Include what it does, current status, and any key decisions. Use the project name or a short descriptor as the first words of the entry.
5. **Preferences**: Opinions, tastes, and working-style preferences that apply broadly.

## Format:
Use section headers for each category. Within each category, list one entry per line, sorted by oldest date first. Format each line as: [YYYY-MM-DD] – Entry content here. If no date is known, use [unknown] instead.

## Output:
– Wrap the entire export in a single code block for easy copying.
– After the code block, state whether this is the complete set or if more remain.

What to do with Claude after you’ve entered this prompt

If you prompt a platform like ChatGPT or Gemini with this message, you’ll receive a response that details the information the platform has about you, broken down into sections like identity, career, and projects. The response should also contain instructions detailing how you like your AI models to converse with you, such as specifications for tone of voice. Once the response is done generating, you can copy it, paste it into the textbox in the Claude settings page, and click the “add to memory” button.
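Because the prompt asks for one dated entry per line, the export is easy to post-process before you paste it anywhere. The sketch below is a minimal, hypothetical example of parsing those lines, assuming they follow the shape the prompt specifies, [YYYY-MM-DD] – Entry content, with [unknown] for undated entries (it accepts either an en dash or a hyphen as the separator).

```python
# Parse lines in the export format the prompt asks for:
#   [YYYY-MM-DD] – Entry content   (or [unknown] when undated)
# A small sketch for reviewing entries before handing them to Claude.
import re

LINE = re.compile(r"^\[(\d{4}-\d{2}-\d{2}|unknown)\]\s*[–-]\s*(.+)$")

def parse_export(text: str) -> list:
    """Return (date, entry) pairs; category headers and stray lines are skipped."""
    entries = []
    for line in text.splitlines():
        m = LINE.match(line.strip())
        if m:
            entries.append((m.group(1), m.group(2)))
    return entries
```

A quick pass like this makes it easy to spot and delete entries you’d rather not carry over before the import step below.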
With that, you should see a pop-up box named “manage memory.” This box contains all the personal information that Claude knows about you, and after a minute or two it will update with the new data you just transferred from the other platform. Make sure to review this context closely and edit anything that looks inaccurate or unnecessary for how you plan to use Claude.

And there you have it—now you’re ready to start your new journey with Claude. What will you do first?

BY BEN SHERRY @BENLUCASSHERRY