Friday, May 1, 2026

Duolingo’s AI U-Turn Is a Warning for Other Companies

At many big companies these days, finding ways to use AI to do your job better isn’t a suggestion. It’s a requirement. As The Wall Street Journal recently reported, “From small startups to giants including Amazon.com, Alphabet, Google, and Meta Platforms, tech companies are measuring [AI use] with an eye on productivity gains and in certain cases factoring it into performance reviews.”

Given the industry’s mad dash to realize the potential productivity gains of AI and keep ahead of the competition, leaders’ desperation to have employees embrace AI makes sense. But is tracking and scoring AI usage in performance reviews the best way to go about it? The experience of learning app Duolingo, as well as some fascinating recent research, suggests companies should think carefully about how they evaluate employees’ AI use. The potential for unpleasant and unintended consequences is high.

Duolingo’s AI U-turn

Duolingo embraced AI early and enthusiastically, stirring controversy. So on a recent episode of the Silicon Valley Girl podcast, host Marina Mogilko wanted to dig into the details of the company’s AI push. She asked CEO Luis von Ahn to explain how Duolingo tracks and evaluates AI use as part of the performance review process. But von Ahn pushed back against the premise of the question. “For a while, it was part of performance reviews. We decided not to do that,” he clarified.

Why the change of heart? “I sent a memo to the company that said, ‘Part of your performance review is going to be usage of AI.’ And we found that people were … kind of asking, ‘Do you just want us to use AI for AI’s sake?’” he explained.

The focus on maximizing AI use over maximizing AI benefits wasn’t what Duolingo was after. Von Ahn changed course. “We said, ‘No, look, the most important thing in your performance is that you are doing whatever your job is as well as possible.’ A lot of times AI can help you with that, but if it can’t, I’m not going to force you to do that,” he said.
“We backtracked from that because it felt like, rather than being held accountable for the actual outcome, we’re trying to just push something that in some cases did not fit.”

Beware workslop

Duolingo discovered that forced, performative AI use wasn’t actually benefiting anyone. Instead, it was creating AI showpieces to cite when performance review season rolled around again, and crowding out other, more impactful work in the process. Credit to management for recognizing the problem and reversing course. But is this the experience of just one particular company? Or are other leaders likely to discover, as von Ahn did, that forcing AI usage creates time-wasting, resource-consuming distraction?

Recent research from Stanford University and coaching platform BetterUp suggests the problems that cropped up at Duolingo are a danger more managers need to consider. And the researchers gave that danger a catchy name: workslop. You may have heard the word, because it ricocheted around the internet once the researchers coined it. That instant popularity probably reflects how many of us recognized the widespread problem it describes: low-quality, AI-assisted output that forces others to spend time understanding, processing, and fixing it.

Just how widespread is workslop? In an initial study, the researchers crunched some numbers and came up with a startling estimate. “Employees reported spending an average of one hour and 56 minutes dealing with each instance of workslop. Based on participants’ estimates of time spent, as well as on their self-reported salary, we find that these workslop incidents carry an invisible tax of $186 per month. For an organization of 10,000 workers, given the estimated prevalence of workslop (41 percent), this yields more than $9 million per year in lost productivity,” the researchers wrote in Harvard Business Review.

How Duolingo accidentally encouraged workslop

Using AI to cut cognitive corners and/or impress the boss costs companies millions a year.
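The researchers’ headline figure is easy to sanity-check. A minimal sketch (variable names are mine; the inputs come from the quoted study, reading the $186 monthly tax as applying to the 41 percent of workers affected) reproduces the arithmetic:

```python
# Sanity check of the Stanford/BetterUp "workslop" estimate quoted above.
# Study inputs: a $186-per-month invisible tax per affected worker and
# 41 percent prevalence across a 10,000-person organization.
MONTHLY_TAX_USD = 186      # self-reported cost per affected employee
PREVALENCE = 0.41          # share of workers encountering workslop
HEADCOUNT = 10_000

affected_workers = HEADCOUNT * PREVALENCE
annual_cost = affected_workers * MONTHLY_TAX_USD * 12
print(f"${annual_cost:,.0f} per year")  # → $9,151,200 per year
```

The result lands just above the “more than $9 million per year” the researchers report.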
It also annoys workers tremendously. And leaders, the researchers discovered in a subsequent study, are often guilty of accidentally making the problem worse with AI mandates like the one originally instituted at Duolingo. “Many leaders are facing pressure to make responsible investment decisions about AI in the face of uncertainty and macroeconomic pressures,” the researchers wrote in a second HBR article. “In response, leaders are using a blunt strategy, mandating that employees use AI broadly and quickly.”

The predictable result of these less-than-well-thought-out AI mandates isn’t tech-driven productivity gains. It’s more workslop, more wasted time, and more frustrated employees.

Better ways to get employees to use AI

Bosses thinking of following the lead of tech giants like Meta and using brute force to compel teams to use AI more should take Duolingo’s experience as a warning. Everyone agrees that AI will ultimately have huge upsides for businesses. The stakes are high, and the pressure on leadership is real. But rushing out blanket AI mandates has serious downsides.

So what should leaders do instead of announcing one day that workers will be evaluated on their AI use at their next performance review and hoping for the best? In their second HBR article, the researchers lay out a handful of suggestions. They include creating an atmosphere of trust where people can discuss their AI experiments honestly, warts and all, and investing in training and knowledge-sharing initiatives between employees. Some companies might even consider creating an “AI collaboration architect” position to help employees figure out the best ways to deploy AI.

EXPERT OPINION BY JESSICA STILLMAN @ENTRYLEVELREBEL

Wednesday, April 29, 2026

5 Lessons From an AI Startup That’s Quietly Disrupting a $30 Billion Industry

I’ve spent years writing about how entrepreneurs can leverage AI in their businesses and the non-obvious ways AI is changing the game. But I’ve been lucky enough to spend the last two decades surrounded by entrepreneurs who look at massive industries and ask one simple question: Why does this still work this way?

My friend Trevor Sumner is one of those entrepreneurs. Trevor is the CEO of an AI company that’s shaking up the consumer market research industry—a space worth more than $30 billion that, until recently, still relied heavily on the same methods it used before the internet existed. Think focus groups, quarterly surveys, and PowerPoint decks that arrived months after the question was asked. As I’ve written before, your network is often worth more than your startup—and it was through my network that I first connected with Trevor and learned about what he’s building.

Trevor’s company uses AI to analyze millions of real consumer signals online—social conversations, reviews, search behavior—and turns them into the kind of insights that used to take months and cost a fortune. And the company is growing fast: revenue up significantly, team quadrupled in a year, working with major global brands across 30-plus countries.

But here’s what I find most interesting. The lessons from Trevor’s journey aren’t just about market research. They’re a blueprint for any founder trying to build a company in an industry being disrupted by AI. And let’s be honest—that’s almost every industry right now. Here are the five lessons that stood out to me.

1. Find the industry still running on fax machines

Every industry ripe for disruption has a tell: the output is genuinely valuable, but the process is stuck in a different era. In market research, major brands desperately need consumer insights to make billion-dollar decisions. But the way those insights were generated hadn’t fundamentally changed in decades. Surveys designed before TikTok—or even the internet—existed.
Reports delivered months after the question was asked. I see this pattern everywhere. When I was building Likeable Media in 2007, the advertising industry was still spending the majority of budgets on TV and print while consumers were spending their time on social media. The gap between how an industry operates and how the world actually works—that’s where the opportunity lives.

The lesson: Look for industries where the process is visibly broken but the need is undeniable. That gap is where AI creates the most dramatic ROI.

2. Don’t sell AI—sell the outcome AI makes possible

This one is huge, and I see founders get it wrong all the time. Nobody signs a contract because they’re excited about your algorithm. They sign because you can deliver a result they couldn’t get before—faster, cheaper, or more reliably. Trevor told me that when his team pitches major brands, AI is never the headline. The headline is: What if you could understand what millions of consumers actually think about your brand—in real time, instead of waiting three months for a survey?

The moment you make AI the hero of your pitch, you’ve invited a procurement committee to debate whether AI is ready, safe, or overhyped. When the outcome is the hero, the conversation shifts to: Can you deliver this result? That’s a much better meeting.

I think about this with my own ventures. When Carrie and I built Likeable Media, we didn’t sell “social media management.” We sold the ability to turn your customers into your marketing department. The technology was the how. The outcome was the why.

The lesson: Position the result, not the technology. AI is how you do it. The outcome is why they buy.

3. Your first five clients should scare you a little

Trevor’s company didn’t start by landing small, safe clients to cut their teeth. They went straight after some of the biggest consumer brands in the world—and they did it before they’d even raised outside funding or built a formal sales team. That’s not recklessness.
That’s strategy. I learned this lesson the hard way. Early in Likeable Media’s life, we spent too long working with small accounts that were easy to manage but didn’t push us to be better. It wasn’t until we landed bigger clients that our product, our team, and our confidence leveled up. Big logos validate your product, compress future sales cycles, and set your pricing floor permanently higher.

The lesson: Don’t wait until you feel ready. Punch up. Your first five clients should stretch you and push your vision forward.

4. Context beats capability in a disrupted market

Here’s something that keeps coming up in every AI-disrupted industry I watch: incumbents fight back by slapping the word “AI” onto their existing products. Traditional research firms are rebranding legacy tools as “AI-powered,” creating confusion for buyers who can’t tell the difference between a company built on AI and one that just bolted AI onto the side.

But here’s what separates the winners from the noise: deep domain expertise. Anyone can access powerful AI models these days. Not everyone understands the problem well enough to apply AI in a way that actually matters. Trevor’s co-founders spent decades inside the world’s biggest consumer brands. They know how brand equity works, how category dynamics shift, and what a CMO actually needs on their desk Monday morning. That kind of context can’t be replicated by fine-tuning a model.

I see this as the single biggest differentiator for AI startups right now. The founders who win won’t necessarily have the most powerful technology. They’ll be the ones who understand their buyer’s world better than anyone else.

The lesson: Anyone can access powerful AI. Not everyone understands the problem well enough to apply it. Domain expertise is your moat.

5. Build for the transition, not just the transformation

This is the lesson I think most founders miss entirely.
Enterprise clients aren’t going to abandon their existing tools and processes overnight—no matter how much better your solution is. Trevor’s company was designed to complement existing workflows first and replace them over time. They even provide playbooks for managing the internal transition—helping their clients navigate change management and stakeholder buy-in. That patience, counterintuitively, accelerated their adoption.

I think about this with my own ventures, too. When you’re building something that asks people to change how they operate, you can’t just show up with a better mousetrap and expect everyone to switch. You have to earn the transition by meeting people where they are.

The lesson: The boldest disruption often wins by moving slowly enough for the buyer to say “yes.”

The AI gold rush is real, but the founders who win won’t just be the ones with the most powerful models. They’ll be the ones who found the broken process, led with the outcome, punched up early, earned domain trust, and respected the buyer’s journey. That’s the playbook. And from what I’ve seen, it works.

EXPERT OPINION BY DAVE KERPEN, CEO, KERPEN VENTURES @DAVEKERPEN

Monday, April 27, 2026

Shadow AI: Silicon Valley’s New Productivity Secret Is Also a Massive Liability

Your employees are most likely using shadow AI. It’s a scary-sounding name for a relatively common practice, but one that could have real consequences for your business. First, it helps to understand what shadow AI actually is, before unpacking strategies to prevent your employees from using it—and potentially exposing your company to reputational damage, litigation, or even financial losses.

According to Rick Holland, a cybersecurity expert and chief information security officer at AI-native cybersecurity firm Cyera, “shadow” refers to unsanctioned use of technology in the workplace. That can include software, hardware, or AI tools. “It’s the use of a technology that the IT function, the business, the CTO is unaware of,” Holland says. “You don’t know who’s using it. You don’t know who has access to it. You don’t know the data that is being used.” If, for example, Microsoft Copilot is your company’s sanctioned chatbot, employees who turn to ChatGPT for work help are using shadow AI.

A November report from cybersecurity firm UpGuard found that more than 80 percent of some 1,500 workers surveyed across the U.S., U.K., and other countries use unapproved AI tools at work—about half of them regularly. Even cybersecurity professionals aren’t immune: an even higher proportion, some 90 percent, admitted to using shadow AI.

The first step in addressing shadow AI is recognizing employee motivations. And more often than not, those motivations are not nefarious, Holland says. As AI has swept the business world, workers are under pressure to be increasingly productive and fluent in new technologies—otherwise, they risk becoming redundant in a future experts warn will be defined by AI. Not to mention, they’re likely discovering that new, AI-enhanced tools are making their lives easier, and IT departments often don’t move quickly enough to support them. “They’re trying to do their jobs, and they may have found a better, faster way to do it,” Holland says.
“We all need to be learning AI right now, because it’s disrupting every vertical that’s out there.” “So I always start off [assuming] best intentions when having conversations around shadow AI,” he adds.

But even actions undertaken with the best intentions can have serious consequences. Shadow AI can threaten a business in a few different ways.

Regulatory violations

Employees who are using unapproved tools may unintentionally expose data that is governed by regulations like HIPAA, resulting in fines. This can be anything from patient health data and payment information to data on private citizens that is covered by the General Data Protection Regulation (GDPR) in Europe. “When you put [data] into Claude or OpenAI or Grok or whatever, you can’t get that information out, and it’s training on your data,” Holland says. “There’s a potential that someone else could query the frontier model and then get that information back.”

IP loss and reputational damage

Although it is relatively easy to quantify the consequences of data leaks that run afoul of regulations like HIPAA, more insidious—and perhaps more damaging in the long term—are leaks of sensitive or proprietary data. In other words, your company’s intellectual property. Consider the following scenarios: An employee at a soft drink company uses a free transcription app during a meeting and accidentally shares the company’s secret ingredient with an LLM. The trade secret is now stored by an unapproved third party outside of the company’s control, risking exposure of that data in the event of a security breach. Or a pharmaceutical employee working on marketing materials feeds data about an in-development drug into AI shortly before a patent filing. Disclosing IP before securing legal protections can jeopardize a company’s patent rights.

“Regulated data that gets fines, that may or may not set your business back,” Holland says.
“But if you lost your secret sauce—whatever your secret sauce is—and a competitor was able to find it, that could have very strategic implications to you long term.”

New vectors for attack

Any time new software is used in a corporate setting, Holland says, it represents a new “attack surface” that bad actors can use to infiltrate a system. If an IT department doesn’t have visibility into the software employees are using, it can’t guarantee that it’s safe. Just look at the recent LiteLLM supply chain attack, which was designed to steal all sorts of login credentials. It started with a tool called Trivy, an open-source security scanner that is reportedly used by major companies. After Trivy was infected, the malware was able to spread to any project that depended on it, including LiteLLM. If IT departments are not aware that workers are using such tools, they won’t have a chance to familiarize themselves with what those tools are built on and look out for telltale signs of an attack.

Unauthorized access

AI agents have opened a whole new world of concerns for IT departments. Because agents are designed to act autonomously, their access to data must be strictly policed. One high-profile recent example, documented in a now-viral post on X, was when a Meta AI researcher asked OpenClaw to clean up her inbox and instead, she says, it went on a “speedrun,” deleting her emails en masse. “I had to RUN to my Mac mini like I was defusing a bomb,” she wrote, noting that the agent did not respond to commands from her phone. A more serious example: if a hypothetical company didn’t restrict what internal data an agent could access, then in response to a staff or customer query it could pull sensitive information such as executive compensation or details of forthcoming, but not yet disclosed, M&A activity.
Holland emphasizes that it is crucial for companies to discover and identify their data and who has access to it in order to secure it, which is one of the key services his firm, Cyera, provides. “Our historical nature of over-providing data and access is going to come back to get us. Agents are people pleasers,” Holland says. “That’s why you have to lock them down and [limit] what they can access.”

Start with visibility—and resist blocking tools

Given all the possible threats from shadow AI, how can IT departments ensure their employees are only using safe and approved products? The answer, according to Jeff Pollard, a vice president and principal analyst at global market research and advisory firm Forrester, is to start with understanding—and that means resisting the urge to block access to AI tools.

“Trying to block or ban shadow AI is rarely effective because there are so many different ways to access and get to AI, so if you block it on an endpoint or from a browser on an employee’s workstation, they just pick up their phone and then they use it there,” he says. “The other problem with blocking is that you do lose the insight that you would get out of what the employee is trying to do and why they’re trying to do it.”

Pollard, who helps companies navigate the secure enterprise adoption of AI, whether that’s tools like Microsoft Copilot or vibe-coding tools like Cursor and Replit, recommends working separately with different departments to ascertain what types of tools they need, and setting policies accordingly. The types of AI tools a finance department uses usually won’t look anything like the ones marketing or customer service teams use, which is why one-size-fits-all policies rarely work. Understanding how and why employees are using unsanctioned AI can help inform IT departments about what kinds of tools employees really need as they search for safe alternatives.

Define the approval process

Transparency is key.
Pollard says it is important to spell out for staff how and why different programs are approved—or not. That opens the door for employees to submit requests for new approvals, and it also educates them about why certain tools or software are not considered safe. “It’s about co-creation, because ultimately, from a security perspective, you are coaching the organization on risk acceptance, but the organization itself has to accept that risk,” he says.

Holland adds that establishing a governance model meant to “work at the speed of AI” is crucial. That means establishing an AI governance committee with staff who understand AI, new technology, and data and information security. Those experts, he says, should be charged with cultivating a culture of communication in which different departments feel comfortable discussing their technological needs, and the tools that may address them, without fear of punishment.

Know when to bring in legal

Pollard agrees with Holland’s assertion that most employees don’t intend to cause harm when using shadow AI. That’s why education is so important, although Pollard notes that “ignorance is no excuse.” He says many policy violations are a training issue, although a scenario in which violations are widespread could necessitate institutional introspection to determine whether policies are actually working for employees. If a company has done its best to establish workable policies and educate its employees about them, and someone still knowingly violates them, then it might be time for a call to legal.

“I will tell you that CISOs don’t want to be the ‘Department of No’ anymore,” Pollard says. “When you’re looking for someone to come in and be the heavy hitter to say, ‘Shut this down,’ pull on legal shirt sleeves, because they’ll absolutely come in and help you out.”

Important reminders

When constructing corporate policies, it can be difficult to keep everyone happy.
But one way to alleviate this tension is to move quickly and remain adaptable, given the pace of AI development. “You’re never going to have every single platform covered—there are just too many of them. So you do have to sort of accept that you’re going to have to adapt. You’re going to learn about a new platform all the time,” Pollard says. “You can’t leave a policy, or set it and forget it.”

And although verifying that a tool is safe to use can be labor intensive, there are a few broad recommendations to keep in mind. Companies often prefer AI models that are hosted domestically; for U.S. companies, Pollard says, that can mean avoiding Chinese models (and even European companies may avoid models hosted in the U.S.). He adds that securing enterprise contracts is paramount, because a contract sets expectations and offers legal recourse in case those expectations are not met.

“The consumer-grade aspect of this is certainly the one that’s the most problematic, where someone goes directly to Cursor as an individual, or goes directly to Copilot as an individual and buys it,” he says. “That’s definitely what you want to try to crack down on, but that’s also the way a lot of these tools are introduced to an enterprise environment. So in that scenario, it’s about trying to work with as many as you reasonably can to accommodate what different employees need.”

BY CHLOE AIELLO @CHLOBO_ILO

Friday, April 24, 2026

Anthropic’s Claude Opus 4.7 Is Here, and It’s Already Outperforming Gemini 3.1 Pro and GPT-5

Anthropic has just released its latest AI model, Claude Opus 4.7. The company claims it handles complex, long-running tasks with greater rigor and consistency than its predecessor, follows instructions more precisely, and can verify its own outputs before delivering a response. In short, Anthropic says the new model is a real-world productivity booster.

The release comes shortly after Anthropic launched Claude Opus 4.6 in February. Opus 4.7 is, however, “less broadly capable” than the company’s most recent offering, Claude Mythos Preview, which Anthropic has no plans to release to the general public at this time. The company says that effort is aimed at understanding how models of that caliber could eventually be deployed at scale.

How Does Opus 4.7 Compare?

According to The Next Web, the most striking gains are in software engineering: on SWE-bench Pro, an AI evaluation benchmark, Opus 4.7 scored 64.3 percent—up from 53.4 percent for Opus 4.6 and ahead of both GPT-5.4 at 57.7 percent and Gemini 3.1 Pro at 54.2 percent.

Opus 4.7 Token Usage

Users upgrading from Opus 4.6 should note two changes that affect token usage. An updated tokenizer improves how the model processes text but can increase token counts by roughly 1.0 to 1.35 times, depending on content type. The model also thinks more deeply at higher effort levels, particularly in later turns of agentic tasks, which boosts reliability on complex problems but produces more output tokens.

Opus 4.7 Safety

Additionally, Anthropic says Opus 4.7 carries a safety profile comparable to its predecessor’s, with evaluations showing low rates of deception, sycophancy, and susceptibility to misuse. The company’s alignment assessment concluded that the model is “largely well-aligned and trustworthy, though not without room for improvement.” “We are releasing Opus 4.7 with safeguards that automatically detect and block requests that indicate prohibited or high-risk cybersecurity uses,” Anthropic said in a release.
“What we learn from the real-world deployment of these safeguards will help us work towards our eventual goal of a broad release of Mythos-class models.”

This commitment to transparency around safety is central to how Anthropic has positioned itself since its founding in 2021. Anthropic has spent much of its existence cultivating a reputation as a more safety-focused alternative. CEO Dario Amodei previously served as vice president of research at OpenAI before leaving to co-found Anthropic alongside his sister Daniela Amodei and other former OpenAI employees who shared his concerns that the company was not taking AI safety seriously enough. “We’re under an incredible amount of commercial pressure and make it even harder for ourselves because we have all this safety stuff we do that I think we do more than other companies,” Amodei said on Dwarkesh Patel’s podcast.

The upgrade represents a step forward across the capabilities that matter most to Claude’s users. “The model does not win every benchmark against every competitor, but it wins convincingly on the ones most directly tied to real-world productivity,” The Next Web said.

BY AMAYA NICHOLE