Wednesday, April 29, 2026

5 Lessons From an AI Startup That’s Quietly Disrupting a $30 Billion Industry

I’ve spent years writing about how entrepreneurs can leverage AI in their businesses and the non-obvious ways AI is changing the game. But I’ve been lucky enough to spend the last two decades surrounded by entrepreneurs who look at massive industries and ask one simple question: Why does this still work this way?

My friend Trevor Sumner is one of those entrepreneurs. Trevor is the CEO of an AI company that’s shaking up the consumer market research industry—a space worth more than $30 billion that, until recently, still relied heavily on the same methods it used before the internet existed. Think focus groups, quarterly surveys, and PowerPoint decks that arrived months after the question was asked.

As I’ve written before, your network is often worth more than your startup—and it was through my network that I first connected with Trevor and learned about what he’s building. Trevor’s company uses AI to analyze millions of real consumer signals online—social conversations, reviews, search behavior—and turns them into the kind of insights that used to take months and cost a fortune. And they’re growing fast: revenue up significantly, team quadrupled in a year, working with major global brands across 30-plus countries.

But here’s what I find most interesting. The lessons from Trevor’s journey aren’t just about market research. They’re a blueprint for any founder trying to build a company in an industry being disrupted by AI. And let’s be honest—that’s almost every industry right now. Here are the five lessons that stood out to me.

1. Find the industry still running on fax machines

Every industry ripe for disruption has a tell: the output is genuinely valuable, but the process is stuck in a different era. In market research, major brands desperately need consumer insights to make billion-dollar decisions. But the way those insights were generated hadn’t fundamentally changed in decades. Surveys designed before TikTok—or even the internet—existed. Reports delivered months after the question was asked.

I see this pattern everywhere. When I was building Likeable Media in 2007, the advertising industry was still spending the majority of budgets on TV and print while consumers were spending their time on social media. The gap between how an industry operates and how the world actually works—that’s where the opportunity lives.

The lesson: Look for industries where the process is visibly broken but the need is undeniable. That gap is where AI creates the most dramatic ROI.

2. Don’t sell AI—sell the outcome AI makes possible

This one is huge, and I see founders get it wrong all the time. Nobody signs a contract because they’re excited about your algorithm. They sign because you can deliver a result they couldn’t get before—faster, cheaper, or more reliably.

Trevor told me that when his team pitches major brands, AI is never the headline. The headline is: What if you could understand what millions of consumers actually think about your brand—in real time, instead of waiting three months for a survey?

The moment you make AI the hero of your pitch, you’ve invited a procurement committee to debate whether AI is ready, safe, or overhyped. When the outcome is the hero, the conversation shifts to: Can you deliver this result? That’s a much better meeting.

I think about this with my own ventures. When Carrie and I built Likeable Media, we didn’t sell “social media management.” We sold the ability to turn your customers into your marketing department. The technology was the how. The outcome was the why.
The lesson: Position the result, not the technology. AI is how you do it. The outcome is why they buy.

3. Your first five clients should scare you a little

Trevor’s company didn’t start by landing small, safe clients to cut their teeth. They went straight after some of the biggest consumer brands in the world—and they did it before they’d even raised outside funding or built a formal sales team. That’s not recklessness. That’s strategy.

I learned this lesson the hard way. Early in Likeable Media’s life, we spent too long working with small accounts that were easy to manage but didn’t push us to be better. It wasn’t until we landed bigger clients that our product, our team, and our confidence leveled up. Big logos validate your product, compress future sales cycles, and set your pricing floor permanently higher.

The lesson: Don’t wait until you feel ready. Punch up. Your first five clients should stretch you and push your vision forward.

4. Context beats capability in a disrupted market

Here’s something that keeps coming up in every AI-disrupted industry I watch: incumbents fight back by slapping the word “AI” onto their existing products. Traditional research firms are rebranding legacy tools as “AI-powered,” creating confusion for buyers who can’t tell the difference between a company built on AI and one that just bolted AI onto the side.

But here’s what separates the winners from the noise: deep domain expertise. Anyone can access powerful AI models these days. Not everyone understands the problem well enough to apply AI in a way that actually matters. Trevor’s co-founders spent decades inside the world’s biggest consumer brands. They know how brand equity works, how category dynamics shift, what a CMO actually needs on their desk Monday morning. That kind of context can’t be replicated by fine-tuning a model.

I see this as the single biggest differentiator for AI startups right now. The founders who win won’t necessarily have the most powerful technology. They’ll be the ones who understand their buyer’s world better than anyone else.

The lesson: Anyone can access powerful AI. Not everyone understands the problem well enough to apply it. Domain expertise is your moat.

5. Build for the transition, not just the transformation

This is the lesson I think most founders miss entirely. Enterprise clients aren’t going to abandon their existing tools and processes overnight—no matter how much better your solution is. Trevor’s company was designed to complement existing workflows first, and replace them over time. They even provide playbooks for managing the internal transition—helping their clients navigate change management and stakeholder buy-in. That patience, counterintuitively, accelerated their adoption.

I think about this with my own ventures too. When you’re building something that asks people to change how they operate, you can’t just show up with a better mousetrap and expect everyone to switch. You have to earn the transition by meeting people where they are.

The lesson: The boldest disruption often wins by moving slowly enough for the buyer to say “yes.”

The AI gold rush is real, but the founders who win won’t just be the ones with the most powerful models. They’ll be the ones who found the broken process, led with the outcome, punched up early, earned domain trust, and respected the buyer’s journey.

That’s the playbook. And from what I’ve seen, it works.

EXPERT OPINION BY DAVE KERPEN, CEO, KERPEN VENTURES @DAVEKERPEN

Monday, April 27, 2026

Shadow AI: Silicon Valley’s New Productivity Secret Is Also a Massive Liability

Your employees are most likely using shadow AI. It’s a scary-sounding name for a relatively common practice, but one that could have real consequences for your business. First, it helps to understand what shadow AI actually is, before unpacking strategies to prevent your employees from using it—and potentially exposing your company to reputational damage, litigation, or even financial losses.

According to Rick Holland, a cybersecurity expert and chief information security officer at AI-native cybersecurity firm Cyera, “shadow” refers to unsanctioned use of technology in the workplace. That can include software, hardware, or AI tools. “It’s the use of a technology that the IT function, the business, the CTO is unaware of,” Holland says. “You don’t know who’s using it. You don’t know who has access to it. You don’t know the data that is being used.” If, for example, Microsoft Copilot is your company’s sanctioned chatbot, employees who turn to ChatGPT for work help are using shadow AI.

A November report from cybersecurity firm UpGuard found that more than 80 percent of some 1,500 workers surveyed across the U.S., U.K., and other countries use unapproved AI tools at work—about half of them regularly. Even cybersecurity professionals aren’t immune, with an even higher proportion, some 90 percent, admitting to using shadow AI.

The first step in addressing shadow AI is recognizing employee motivations. And more often than not, those motivations are not nefarious, Holland says. As AI has swept the business world, workers are under pressure to be increasingly productive and fluent in new technologies—otherwise, they risk becoming redundant in a future that experts warn will be defined by AI. Not to mention, they’re likely discovering that new, AI-enhanced tools are making their lives easier, and IT departments often don’t move quickly enough to support them.

“They’re trying to do their jobs, and they may have found a better, faster way to do it,” Holland says. “We all need to be learning AI right now, because it’s disrupting every vertical that’s out there.” “So I always start off [assuming] best intentions when having conversations around shadow AI,” he adds.

But even actions undertaken with the best intentions can have serious consequences. Shadow AI can threaten a business in a few different ways.

Regulatory violations

Employees who are using unapproved tools may unintentionally expose data that is governed by regulations like HIPAA, resulting in fines. This can be anything from patient health data and payment information to data on private citizens that is covered by the General Data Protection Regulation (GDPR) in Europe. “When you put [data] into Claude or OpenAI or Grok or whatever, you can’t get that information out, and it’s training on your data,” Holland says. “There’s a potential that someone else could query the frontier model and then get that information back.”

IP loss & reputational damage

Although it is relatively easy to quantify the consequences of data leaks that run afoul of regulations like HIPAA, more insidious—and perhaps more damaging in the long term—are leaks of sensitive or proprietary data. In other words, your company’s intellectual property. Consider the following scenarios: An employee at a soft drink company uses a free transcription app during a meeting and accidentally shares the company’s secret ingredient with an LLM.
That means the trade secret is now stored by an unapproved third party outside of the company’s control, risking exposure of that data in the event of a security breach. Or a pharmaceutical employee working on marketing materials feeds data about a new drug in development into an AI tool shortly before a patent filing. Disclosing IP before securing legal protections can potentially jeopardize a company’s patent rights.

“Regulated data that gets fines, that may or may not set your business back,” Holland says. “But if you lost your secret sauce—whatever your secret sauce is—and a competitor was able to find it, that could have very strategic implications to you long term.”

New vectors for attack

Any time new software is used in a corporate setting, Holland says, it represents a new “attack surface” that bad actors can use to infiltrate a system. If an IT department doesn’t have visibility into the software employees are using, they can’t guarantee that it’s safe. Just look at the recent LiteLLM supply chain attack, which was designed to steal all sorts of login credentials. It all started with a tool called Trivy, an open-source security scanner that is reportedly used by major companies. After Trivy was infected, the malware was able to spread to any project that depended on Trivy, including LiteLLM. If IT departments aren’t aware that workers are using those tools, they don’t have a chance to familiarize themselves with what those tools are built on and look out for telltale signs of an attack.

Unauthorized access

AI agents have opened a whole new world of concerns for IT departments. As agents are designed to act autonomously, their access to data must be strictly policed. One high-profile recent example, documented in a now-viral post on X, was when a Meta AI researcher asked OpenClaw to clean up her inbox, and instead, she says, it went on a “speedrun” deleting her emails en masse. “I had to RUN to my Mac mini like I was defusing a bomb,” she wrote, noting that the agent did not respond to commands from her phone.

A more serious example would be if a hypothetical company didn’t restrict what internal data an agent could access. In response to a staff or customer query, it could pull sensitive information such as executive compensation or information related to forthcoming, but not yet disclosed, M&A. Holland emphasized that it is crucial that companies discover and identify their data and who has access to it, in order to secure it, which is one of the key services his firm, Cyera, provides. “Our historical nature of over-providing data and access is going to come back to get us. Agents are people pleasers,” Holland says. “That’s why you have to lock them down and what they can access.”

Start with visibility—and resist blocking tools

Given all the possible threats from shadow AI, how can IT departments ensure their employees are only using safe and approved products? The answer, according to Jeff Pollard, a vice president and principal analyst at global market research and advisory firm Forrester, is to start with understanding—and that means resisting the urge to block access to AI tools. “Trying to block or ban shadow AI is rarely effective because there are so many different ways to access and get to AI, so if you block it on an endpoint or from a browser on an employee’s workstation, they just pick up their phone and then they use it there,” he says.
“The other problem with blocking is that you do lose the insight that you would get out of what the employee is trying to do and why they’re trying to do it.”

Pollard, who helps companies navigate securing the enterprise adoption of AI, whether that be tools like Microsoft Copilot or vibe coding tools like Cursor and Replit, recommends working separately with different departments to ascertain what types of tools they need, and setting policies accordingly. The types of AI tools a finance department uses usually won’t look anything like the ones marketing or customer service teams use, which is why one-size-fits-all policies rarely work. Understanding how and why employees are using unsanctioned AI can help IT departments figure out what kinds of tools employees really need as they search for safe alternatives.

Define the approval process

Transparency is key. Pollard says it is important to spell out for staff how and why different programs are approved—or not. That opens the door for employees to submit requests for new approvals, and also educates them about why certain tools or software are not considered safe. “It’s about co-creation, because ultimately, from a security perspective, you are coaching the organization on risk acceptance, but the organization itself has to accept that risk,” he says.

Holland adds that establishing a governance model that is meant to “work at the speed of AI” is crucial. And that means establishing an AI governance committee with staff who understand AI, new technology, and data and information security. Those experts, he says, should be charged with cultivating a culture of communication in which different departments feel comfortable discussing their technological needs, and tools that may address them, without fear of punishment.

Know when to bring in legal

Pollard agrees with Holland’s assertion that most employees don’t intend to cause harm when using shadow AI. That’s why education is so important, although Pollard notes that “ignorance is no excuse.” He says many policy violations are a training issue, although a scenario in which violations are widespread could necessitate institutional introspection to determine whether policies are actually working for employees. In the event that companies have done their best to both establish workable policies and educate their employees about them, and someone still knowingly violates them, then it might be time for a call to legal.

“I will tell you that CISOs don’t want to be the ‘Department of No’ anymore,” Pollard says. “When you’re looking for someone to come in and be the heavy hitter to say, ‘Shut this down,’ pull on legal shirt sleeves, because they’ll absolutely come in and help you out.”

Important reminders

When constructing corporate policies, it can be difficult to keep everyone happy. But one way to alleviate this tension is to move quickly and remain adaptable, given the pace of AI development. “You’re never going to have every single platform covered—there are just too many of them. So you do have to sort of accept that you’re going to have to adapt. You’re going to learn about a new platform all the time,” Pollard says. “You can’t leave a policy, or set it and forget it.” And although verifying that a tool is safe to use can be labor-intensive, there are a few broad recommendations to keep in mind.
Companies often prefer to choose AI models that are hosted domestically, and Pollard says that can mean U.S. companies avoiding Chinese models (or even European companies avoiding models hosted in the U.S.). And he adds that securing enterprise contracts is paramount, because it sets expectations and offers legal recourse in case those expectations are not met.

“The consumer grade aspect of this is certainly the one that’s the most problematic, where someone goes directly to Cursor as an individual, or goes directly to Copilot as an individual and buys it,” he says. “That’s definitely what you want to try to crack down on, but that’s also the way a lot of these tools are introduced to an enterprise environment. So in that scenario, it’s about trying to work with as many as you reasonably can to accommodate what different employees need.”

BY CHLOE AIELLO @CHLOBO_ILO

Friday, April 24, 2026

Anthropic’s Claude Opus 4.7 Is Here, and It’s Already Outperforming Gemini 3.1 Pro and GPT-5

Anthropic just released its latest AI model, Claude Opus 4.7. The company claims it handles complex, long-running tasks with greater rigor and consistency than its predecessor, follows instructions more precisely, and can verify its own outputs before delivering a response. In short, Anthropic says the new model is a real-world productivity booster.

The release comes shortly after Anthropic launched Claude Opus 4.6 in February. Opus 4.7 is “less broadly capable” than the company’s most recent offering, Claude Mythos Preview, but at this time Anthropic has no plans to release Claude Mythos Preview to the general public. It says the effort is aimed at understanding how models of that caliber could eventually be deployed at scale.

How Does Opus 4.7 Compare?

According to The Next Web, the most striking gains are in software engineering: on SWE-bench Pro, an AI evaluation benchmark, Opus 4.7 scored 64.3 percent—up from 53.4 percent on Opus 4.6 and ahead of both GPT-5.4 at 57.7 percent and Gemini 3.1 Pro at 54.2 percent.

Opus 4.7 Token Usage

Users upgrading from Opus 4.6 should note two changes that affect token usage. An updated tokenizer improves how the model processes text but can increase token counts by roughly 1.0 to 1.35 times depending on content type. The model also thinks more deeply at higher effort levels, particularly in later turns of agentic tasks, which boosts reliability on complex problems but produces more output tokens.

Opus 4.7 Safety

Additionally, Anthropic says Opus 4.7 carries a safety profile comparable to its predecessor, with evaluations showing low rates of deception, sycophancy, and susceptibility to misuse. The company’s alignment assessment concluded that the model is “largely well-aligned and trustworthy, though not without room for improvement.”

“We are releasing Opus 4.7 with safeguards that automatically detect and block requests that indicate prohibited or high-risk cybersecurity uses,” Anthropic said in a release. “What we learn from the real-world deployment of these safeguards will help us work towards our eventual goal of a broad release of Mythos-class models.”

This commitment to transparency around safety is central to how Anthropic has positioned itself since its founding in 2021. Anthropic has spent much of its existence cultivating a reputation as a more safety-focused alternative to its rivals. CEO Dario Amodei previously served as vice president of research at OpenAI before leaving to co-found Anthropic alongside his sister Daniela Amodei and other former OpenAI employees who shared his concerns that the company was not taking AI safety seriously enough. “We’re under an incredible amount of commercial pressure and make it even harder for ourselves because we have all this safety stuff we do that I think we do more than other companies,” Amodei said on podcaster Dwarkesh Patel’s podcast.

The upgrade represents a step forward across the capabilities that matter most to Claude’s users. “The model does not win every benchmark against every competitor, but it wins convincingly on the ones most directly tied to real-world productivity,” The Next Web said.

BY AMAYA NICHOLE
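To put the tokenizer note above in concrete terms, here is a minimal back-of-the-envelope sketch in Python. The 1.35 multiplier is the top of the range cited above; the per-token prices and request sizes are purely illustrative assumptions, not Anthropic’s actual pricing.

# Hypothetical sketch: how a 1.0x-1.35x tokenizer change (the range cited above)
# could affect per-request cost. Prices and request sizes are made-up examples,
# not Anthropic's actual pricing.

def estimate_cost(input_tokens, output_tokens, usd_per_m_input, usd_per_m_output, multiplier=1.0):
    """Estimated USD cost of one request after scaling token counts by the multiplier."""
    return (input_tokens * multiplier * usd_per_m_input +
            output_tokens * multiplier * usd_per_m_output) / 1_000_000

# Example: a 20,000-token prompt with a 2,000-token reply, at illustrative rates
# of $5 per million input tokens and $25 per million output tokens.
baseline = estimate_cost(20_000, 2_000, 5.0, 25.0, multiplier=1.0)
worst_case = estimate_cost(20_000, 2_000, 5.0, 25.0, multiplier=1.35)
print(f"baseline:   ${baseline:.3f} per request")
print(f"worst case: ${worst_case:.3f} per request (+{worst_case / baseline - 1:.0%})")

The same scaling applies to the extra output tokens produced by deeper thinking at higher effort levels; a team budgeting for an upgrade would plug in its own prices and typical request sizes.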

Wednesday, April 22, 2026

5 ways your doctor may be using AI chatbots — and why it matters

Millions of Americans are turning to AI chatbots for health answers. Doctors are, too. But the ways doctors are incorporating AI chatbots into their practice are surprising.

Specialized medical AI chatbots have quickly become a go-to source for many doctors and trainees. The CEO of one of these medical chatbot companies recently claimed that more than 100 million Americans were treated by a doctor who used their platform last year. Popular chatbots like OpenAI’s ChatGPT don’t meet the bar for doctors, who say these platforms aren’t always accurate or up to date with the latest guidance. OpenAI’s usage policies state that users are not allowed to use its services for “tailored advice” without consulting a licensed health professional.

“ChatGPT is like your crazy uncle,” said Dr. Ida Sim, a professor at the University of California, San Francisco, who studies how to use data and technology to improve health care. The edge, Sim says, is that medical chatbots are less prone to sycophancy and more likely to ground answers in peer-reviewed research and clinical guidelines. That’s why she says the uptake has been “tremendous.”

The most common use case

Millions of research papers are published every year — and keeping up with them all is impossible. “You’d need like 18 hours a day to stay up to date,” said Dr. Jared Dashevsky, a resident physician at the Icahn School of Medicine at Mount Sinai. But doctors are expected to stay current on new research and guidelines to maintain their licenses. Many say they now use medical chatbots as a reference tool to help them stay updated.

Rather than pulling information from the entire internet, specialized medical chatbots actively search medical literature, says Dr. Jonathan H. Chen, an associate professor at Stanford Medicine who leads his health system’s efforts to integrate AI into medical education. That workflow provides doctors with more accurate answers that summarize and link to important papers and guidelines. Dashevsky, who writes about AI, says these features are especially helpful for trainees working long hours.

Uploading patient records to AI bots

Some health systems have adopted AI chatbots to improve patient care, promising doctors safety and privacy protections. But many doctors use unauthorized chatbots known as shadow AI, according to doctors CNN spoke with. Some of these shadow AI tools also advertise HIPAA compliance features. HIPAA is a federal law that requires certain organizations that maintain identifiable health information — such as hospitals and insurers — to protect it from being disclosed without patient consent.

Language used by shadow AI tools has led some doctors to believe that it’s safe to upload protected health information onto chatbots in exchange for more tailored answers. But Iliana Peters, a health care lawyer at the law firm Polsinelli who previously led HIPAA enforcement for the US Department of Health and Human Services, says that assumption is inaccurate. “‘HIPAA compliance’ is not an accurate term to use by any company,” Peters said, explaining that the phrase should be used only by government regulators.

Despite that, Dr. Carolyn Kaufman — a resident physician at Stanford Medicine — and other doctors say that patient information is making its way into unauthorized chatbots, potentially opening the door to new ways of commodifying patient data. “Data is money,” Kaufman said, noting that she has never uploaded HIPAA-protected information onto an unapproved chatbot.
“If we’re just freely uploading those data into certain websites, then that’s obviously a risk for the individual patient and for the institution, as well.”

Drafting AI-generated notes

AI chatbots have also stepped in to help doctors draft summaries of patient visits and long hospital stays. These notes are viewable on online patient portals and help doctors track a patient’s course and communicate plans across the care team. “It’s probably safer to have artificial intelligence review a hospital course and know everything happened, versus you as a human — with limited time, jumping between note to note — trying to put the pieces together,” Dashevsky said, arguing that although concerns over AI accuracy are valid, human-based summaries may also miss key details.

Writing letters to insurance companies

Administrative work can take up nearly nine hours a week for the average doctor, and the time doctors spend on insurance-related tasks costs an estimated $26.7 billion each year. A feature that Dashevsky says has been a “game-changer” is chatbot-authored letters to insurance companies for prior authorizations and other correspondence, allowing him to field patient requests more quickly. “I would have to figure out who this patient is, write the letter myself and review it. It took so much time,” he said. “Now, AI will produce for you a really good letter.”

Creating a list of possible diagnoses

When patients come to doctors with concerns, physicians have to figure out how to help them. Part of that process is considering a range of possible diagnoses. Many medical students and trainees use AI chatbots to help build that list, and some doctors beyond training use the feature, too. “From a med student perspective … you’re seeing a lot of things for the first time,” said Evan Patel, a fourth-year medical student at Rush University Medical College. “AI chatbots sort of help orient me to what possibilities it could be.” Kaufman says the bots provide the most accurate list when she includes every data point linked to patients, like lab results and imaging findings.

What patients need to know

All eight doctors and trainees CNN spoke with say they regularly use medical AI chatbots. And most have a positive outlook, viewing these tools as a way to offload certain cognitive and administrative tasks. But patient privacy concerns are valid, the doctors say.

Five questions to ask your doctor

1. How are you using AI chatbots to augment my care?
2. What types of AI chatbots do you use, and have they been approved by the health system?
3. Is any of my personal health information being entered into AI tools, and how is it protected?
4. How do you check that the information from AI chatbots is accurate?
5. Do you usually agree with the information from AI chatbots, or do you find yourself questioning it?

As with any AI tool, Kaufman says, errors happen and information can be inaccurate. When she consults peers for second opinions, she says, they “almost never agree” with the AI chatbot’s answer. “People treat AI like it’s magic,” Chen said. “It’s not magic. It can’t just do anything you want.” He added: “You ask the same question 10 times, and it’ll give you 10 different answers.” That variability, Chen argues, highlights some of the surface-level limitations.

Medicine operates on three layers, Sim says: workflows, knowledge and expertise. AI is transforming the first two. But that last layer — core to the care patients receive — is harder to replicate and may be what matters most.
“If we just apply guidelines, then replace us,” Sim said. “It’s where you take the knowledge and apply it to an evolving set of conditions in the context of your life. That’s what medicine is. It’s in the context of people’s lives. And these machines don’t do that.”

By Michal Ruprecht