Friday, May 1, 2026

Duolingo’s AI U-Turn Is a Warning for Other Companies

At many big companies these days, finding ways to use AI to do your job better isn't a suggestion. It's a requirement. As The Wall Street Journal recently reported, "From small startups to giants including Amazon.com, Alphabet's Google, and Meta Platforms, tech companies are measuring [AI use] with an eye on productivity gains and in certain cases factoring it into performance reviews." Given the industry's mad dash to realize AI's potential productivity gains and stay ahead of the competition, leaders' eagerness to have employees embrace AI makes sense. But is tracking and scoring AI usage in performance reviews the best way to go about it? The experience of learning app Duolingo, along with some fascinating recent research, suggests companies should think carefully about how they evaluate employees' AI use. The potential for unpleasant and unintended consequences is high.

Duolingo's AI U-turn

Duolingo embraced AI early and enthusiastically, stirring controversy. So on a recent episode of the Silicon Valley Girl podcast, host Marina Mogilko wanted to dig into the details of the company's AI push. She asked CEO Luis von Ahn to explain how Duolingo tracks and evaluates AI use as part of the performance review process.

But von Ahn pushed back against the premise of the question. "For a while, it was part of performance reviews. We decided not to do that," he clarified.

Why the change of heart? "I sent a memo to the company that said, 'Part of your performance review is going to be usage of AI.' And we found that people were … kind of asking, 'Do you just want us to use AI for AI's sake?'" he explained.

The focus on maximizing AI use over maximizing AI benefits wasn't what Duolingo was after. Von Ahn changed course. "We said, 'No, look, the most important thing in your performance is that you are doing whatever your job is as well as possible.' A lot of times AI can help you with that, but if it can't, I'm not going to force you to do that," he said.
"We backtracked from that because it felt like, rather than being held accountable for the actual outcome, we're trying to just push something that in some cases did not fit."

Beware workslop

Duolingo discovered that forced, performative AI use wasn't actually benefiting anyone. Instead, it was creating AI showpieces for employees to cite when performance review season rolled around, crowding out other, more impactful work in the process. Credit to management for recognizing the problem and reversing course.

But is this just the experience of one particular company? Or are other leaders likely to discover, as von Ahn did, that forcing AI usage creates time-wasting, resource-consuming distraction?

Recent research from Stanford University and coaching platform BetterUp suggests the problems that cropped up at Duolingo are a danger more managers need to consider. And the researchers gave that danger a catchy name: workslop.

You may have heard the word, because it ricocheted around the internet once the researchers coined it. That instant popularity probably reflects how many of us recognized the widespread problem it describes: low-quality, AI-assisted output that forces others to spend time understanding, processing, and fixing it.

Just how widespread is workslop? In an initial study, the researchers crunched the numbers and came up with a startling estimate. "Employees reported spending an average of one hour and 56 minutes dealing with each instance of workslop. Based on participants' estimates of time spent, as well as on their self-reported salary, we find that these workslop incidents carry an invisible tax of $186 per month. For an organization of 10,000 workers, given the estimated prevalence of workslop (41 percent), this yields more than $9 million per year in lost productivity," the researchers wrote in Harvard Business Review.

How Duolingo accidentally encouraged workslop

Using AI to cut cognitive corners or impress the boss costs companies millions a year.
It also annoys workers tremendously. And leaders, the researchers discovered in a follow-up study, are often guilty of accidentally making the problem worse with AI mandates like the one originally instituted at Duolingo.

"Many leaders are facing pressure to make responsible investment decisions about AI in the face of uncertainty and macroeconomic pressures," the researchers wrote in a second Harvard Business Review article. "In response, leaders are using a blunt strategy, mandating that employees use AI broadly and quickly."

The predictable result of these hastily conceived AI mandates isn't tech-driven productivity gains. It's more workslop, more wasted time, and more frustrated employees.

Better ways to get employees to use AI

Bosses thinking of following the lead of tech giants like Meta and using brute force to compel teams to use AI more should take Duolingo's experience as a warning. Nearly everyone agrees that AI will ultimately have huge upsides for businesses; the stakes are high, and the pressure on leadership is real. But rushing out blanket AI mandates has serious downsides.

So what should leaders do instead of announcing one day that workers will be evaluated on their AI use at their next performance review and hoping for the best? In their second article, the researchers lay out a handful of suggestions. They include creating an atmosphere of trust where people can discuss their AI experiments honestly, warts and all, and investing in training and knowledge-sharing initiatives between employees. Some companies might even consider creating an "AI collaboration architect" position to help employees figure out the best ways to deploy AI.

EXPERT OPINION BY JESSICA STILLMAN @ENTRYLEVELREBEL
