Monday, April 27, 2026

Shadow AI: Silicon Valley’s New Productivity Secret Is Also a Massive Liability

Your employees are most likely using shadow AI. It’s a scary-sounding name for a relatively common practice, but one that could have real consequences for your business. First, it helps to understand what shadow AI actually is, before unpacking strategies to prevent your employees from using it—and potentially exposing your company to reputational damage, litigation, or even financial losses.

According to Rick Holland, a cybersecurity expert and chief information security officer at AI-native cybersecurity firm Cyera, “shadow” refers to unsanctioned use of technology in the workplace. That can include software, hardware, or AI tools. “It’s the use of a technology that the IT function, the business, the CTO is unaware of,” Holland says. “You don’t know who’s using it. You don’t know who has access to it. You don’t know the data that is being used.” If, for example, Microsoft Copilot is your company’s sanctioned chatbot, employees who turn to ChatGPT for work help are using shadow AI.

A November report from cybersecurity firm UpGuard found that more than 80 percent of some 1,500 workers surveyed across the U.S., U.K., and other countries use unapproved AI tools at work—about half of them regularly. Even cybersecurity professionals aren’t immune, with an even higher proportion, some 90 percent, admitting to using shadow AI.

The first step in addressing shadow AI is recognizing employee motivations. And more often than not, those motivations are not nefarious, Holland says. As AI has swept the business world, workers are under pressure to be increasingly productive and fluent in new technologies—otherwise, they risk becoming redundant in a future experts warn will be defined by AI. Not to mention, they’re likely discovering that new, AI-enhanced tools are making their lives easier, and IT departments often don’t move quickly enough to support them. “They’re trying to do their jobs, and they may have found a better, faster way to do it,” Holland says. 
“We all need to be learning AI right now, because it’s disrupting every vertical that’s out there.” “So I always start off [assuming] best intentions when having conversations around shadow AI,” he adds. But even actions undertaken with the best intentions can have serious consequences. Shadow AI can threaten a business in a few different ways.

Regulatory violations

Employees who are using unapproved tools may unintentionally expose data that is governed by regulations like HIPAA, resulting in fines. This can be anything from patient health data and payment information to data on private citizens that is covered by the General Data Protection Regulation (GDPR) in Europe. “When you put [data] into Claude or OpenAI or Grok or whatever, you can’t get that information out, and it’s training on your data,” Holland says. “There’s a potential that someone else could query the frontier model and then get that information back.”

IP loss & reputational damage

Although it is relatively easy to quantify the consequences of data leaks that run afoul of regulations like HIPAA, more insidious—and perhaps more damaging in the long term—are leaks of sensitive or proprietary data. In other words, your company’s intellectual property. Consider the following scenarios: An employee at a soft drink company uses a free transcription app during a meeting and accidentally shares the company’s secret ingredient with an LLM. That means the trade secret is now stored by an unapproved third party outside the company’s control, risking exposure of that data in the event of a security breach. Or, a pharmaceutical employee working on marketing materials feeds data about a new drug in development into AI shortly before a patent filing. Disclosing IP before securing legal protections can jeopardize a company’s patent rights. “Regulated data that gets fines, that may or may not set your business back,” Holland says. 
“But if you lost your secret sauce—whatever your secret sauce is—and a competitor was able to find it, that could have very strategic implications to you long term.”

New vectors for attack

Any time new software is used in a corporate setting, Holland says, it represents a new “attack surface” that bad actors can use to infiltrate a system. If an IT department doesn’t have visibility into the software employees are using, it can’t guarantee that the software is safe. Just look at the recent LiteLLM supply chain attack, which was designed to steal all sorts of login credentials. It started with a tool called Trivy, an open-source security scanner that is reportedly used by major companies. After Trivy was infected, the malware was able to spread to any project that depended on it, including LiteLLM. If IT departments aren’t aware that workers are using such tools, they don’t have a chance to familiarize themselves with what those tools are built on and look out for telltale signs of an attack.

Unauthorized access

AI agents have opened a whole new world of concerns for IT departments. Because agents are designed to act autonomously, their access to data must be strictly policed. One high-profile recent example, documented in a now-viral post on X, was when a Meta AI researcher asked OpenClaw to clean up her inbox, and instead, she says, it went on a “speedrun” deleting her emails en masse. “I had to RUN to my Mac mini like I was defusing a bomb,” she wrote, noting that the agent did not respond to commands from her phone. A more serious example would be a hypothetical company that didn’t restrict what internal data an agent could access. In response to a staff or customer query, the agent could pull sensitive information such as executive compensation or details of forthcoming, but not yet disclosed, M&A activity. 
Holland emphasized that it is crucial for companies to discover and identify their data and who has access to it in order to secure it, which is one of the key services his firm, Cyera, provides. “Our historical nature of over-providing data and access is going to come back to get us. Agents are people pleasers,” Holland says. “That’s why you have to lock them down and [control] what they can access.”

Start with visibility—and resist blocking tools

Given all the possible threats from shadow AI, how can IT departments ensure their employees are only using safe and approved products? The answer, according to Jeff Pollard, a vice president and principal analyst at global market research and advisory firm Forrester, is to start with understanding—and that means resisting the urge to block access to AI tools. “Trying to block or ban shadow AI is rarely effective because there are so many different ways to access and get to AI, so if you block it on an endpoint or from a browser on an employee’s workstation, they just pick up their phone and then they use it there,” he says. “The other problem with blocking is that you do lose the insight that you would get out of what the employee is trying to do and why they’re trying to do it.”

Pollard, who helps companies navigate securing the enterprise adoption of AI, whether that be tools like Microsoft Copilot or vibe-coding tools like Cursor and Replit, recommends working separately with different departments to ascertain what types of tools they need, and setting policies accordingly. The types of AI tools a finance department uses usually won’t look anything like the ones marketing or customer service teams use, which is why one-size-fits-all policies rarely work. Understanding how and why employees are using unsanctioned AI can help inform IT departments about what kinds of tools employees really need as they search for safe alternatives.

Define the approval process

Transparency is key. 
Pollard says it is important to spell out for staff how and why different programs are approved—or not. That opens the door for employees to submit requests for new approvals, and also educates them about why certain tools or software are not considered safe. “It’s about co-creation, because ultimately, from a security perspective, you are coaching the organization on risk acceptance, but the organization itself has to accept that risk,” he says.

Holland adds that establishing a governance model meant to “work at the speed of AI” is crucial. That means establishing an AI governance committee with staff who understand AI, new technology, and data and information security. Those experts, he says, should be charged with cultivating a culture of communication in which different departments feel comfortable discussing their technological needs, and tools that may address them, without fear of punishment.

Know when to bring in legal

Pollard agrees with Holland’s assertion that most employees don’t intend to cause harm when using shadow AI. That’s why education is so important, although Pollard notes that “ignorance is no excuse.” He says many policy violations are a training issue, although widespread violations could necessitate institutional introspection to determine whether policies are actually working for employees. If a company has done its best to establish workable policies and educate employees about them, and someone still knowingly violates them, then it might be time for a call to legal. “I will tell you that CISOs don’t want to be the ‘Department of No’ anymore,” Pollard says. “When you’re looking for someone to come in and be the heavy hitter to say, ‘Shut this down,’ pull on legal’s shirt sleeves, because they’ll absolutely come in and help you out.”

Important reminders

When constructing corporate policies, it can be difficult to keep everyone happy. 
But one way to alleviate this tension is to move quickly and remain adaptable, given the pace of AI development. “You’re never going to have every single platform covered—there are just too many of them. So you do have to sort of accept that you’re going to have to adapt. You’re going to learn about a new platform all the time,” Pollard says. “You can’t leave a policy, or set it and forget it.”

And although verifying that a tool is safe to use can be labor intensive, there are a few broad recommendations to keep in mind. Companies often prefer to choose AI models that are hosted domestically, and Pollard says that can mean U.S. companies avoiding Chinese models (or even European companies avoiding models hosted in the U.S.). And he adds that securing enterprise contracts is paramount, because it sets expectations and offers legal recourse in case those expectations are not met.

“The consumer grade aspect of this is certainly the one that’s the most problematic, where someone goes directly to Cursor as an individual, or goes directly to Copilot as an individual and buys it,” he says. “That’s definitely what you want to try to crack down on, but that’s also the way a lot of these tools are introduced to an enterprise environment. So in that scenario, it’s about trying to work with as many as you reasonably can to accommodate what different employees need.”

BY CHLOE AIELLO @CHLOBO_ILO

Friday, April 24, 2026

Anthropic’s Claude Opus 4.7 Is Here, and It’s Already Outperforming Gemini 3.1 Pro and GPT-5

Anthropic has just released its latest AI model, Claude Opus 4.7. The company claims it handles complex, long-running tasks with greater rigor and consistency than its predecessor, follows instructions more precisely, and can verify its own outputs before delivering a response. In short, Anthropic says the new model is a real-world productivity booster.

The release comes shortly after Anthropic launched Claude Opus 4.6 in February. Opus 4.7 is also “less broadly capable” than the company’s most recent offering, Claude Mythos Preview, which Anthropic has no plans to release to the general public at this time. It says that effort is aimed at understanding how models of that caliber could eventually be deployed at scale.

How Does Opus 4.7 Compare?

According to The Next Web, the most striking gains are in software engineering: on SWE-bench Pro, an AI evaluation benchmark, Opus 4.7 scored 64.3 percent—up from 53.4 percent for Opus 4.6 and ahead of both GPT-5.4 at 57.7 percent and Gemini 3.1 Pro at 54.2 percent.

Opus 4.7 Token Usage

Users upgrading from Opus 4.6 should note two changes that affect token usage. An updated tokenizer improves how the model processes text but can increase token counts by roughly 1.0 to 1.35 times, depending on content type. The model also thinks more deeply at higher effort levels, particularly in later turns of agentic tasks, which boosts reliability on complex problems but produces more output tokens.

Opus 4.7 Safety

Additionally, Anthropic says Opus 4.7 carries a safety profile comparable to its predecessor’s, with evaluations showing low rates of deception, sycophancy, and susceptibility to misuse. The company’s alignment assessment concluded that the model is “largely well-aligned and trustworthy, though not without room for improvement.” “We are releasing Opus 4.7 with safeguards that automatically detect and block requests that indicate prohibited or high-risk cybersecurity uses,” Anthropic said in a release. 
“What we learn from the real-world deployment of these safeguards will help us work towards our eventual goal of a broad release of Mythos-class models.”

This commitment to transparency around safety is central to how Anthropic has positioned itself since its founding in 2021. The company has spent much of its existence cultivating a reputation as a more safety-focused alternative. CEO Dario Amodei previously served as vice president of research at OpenAI before leaving to co-found Anthropic alongside his sister Daniela Amodei and other former OpenAI employees who shared his concerns that the company was not taking AI safety seriously enough. “We’re under an incredible amount of commercial pressure and make it even harder for ourselves because we have all this safety stuff we do that I think we do more than other companies,” Amodei said on Dwarkesh Patel’s podcast.

The upgrade represents a step forward across the capabilities that matter most to Claude’s users. “The model does not win every benchmark against every competitor, but it wins convincingly on the ones most directly tied to real-world productivity,” The Next Web said.

BY AMAYA NICHOLE
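For teams budgeting API usage, the reported 1.0–1.35x tokenizer multiplier can be turned into a rough cost projection. The following is a hypothetical sketch only: the multiplier range comes from the article above, but the token volumes and per-million-token prices are made-up illustration values, not Anthropic’s published pricing.

```python
# Rough sketch: projecting monthly spend after a tokenizer change inflates
# token counts by some multiplier (the article reports roughly 1.0–1.35x).
# All dollar figures and volumes here are hypothetical examples.

def projected_monthly_cost(tokens_per_month: int,
                           price_per_million: float,
                           tokenizer_multiplier: float) -> float:
    """Return projected monthly spend in dollars after applying a
    tokenizer multiplier to the raw monthly token volume."""
    effective_tokens = tokens_per_month * tokenizer_multiplier
    return effective_tokens / 1_000_000 * price_per_million

# Example: 50M tokens/month at a hypothetical $15 per million tokens.
baseline = projected_monthly_cost(50_000_000, 15.0, 1.0)    # old tokenizer
worst_case = projected_monthly_cost(50_000_000, 15.0, 1.35)  # upper-bound estimate

print(f"baseline: ${baseline:.2f}, worst case: ${worst_case:.2f}")
# → baseline: $750.00, worst case: $1012.50
```

The takeaway is simply that a 1.35x token multiplier translates directly into a 35 percent cost increase at constant per-token pricing, before accounting for the extra output tokens from deeper thinking at higher effort levels.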

Wednesday, April 22, 2026

5 ways your doctor may be using AI chatbots — and why it matters

Millions of Americans are turning to AI chatbots for health answers. Doctors are, too. But the ways doctors are incorporating AI chatbots into their practice are surprising.

Specialized medical AI chatbots have quickly become a go-to source for many doctors and trainees. The CEO of one of these medical chatbot companies recently claimed that more than 100 million Americans were treated by a doctor who used their platform last year. Popular chatbots like OpenAI’s ChatGPT don’t meet the bar for doctors, who say these platforms aren’t always accurate or up to date with the latest guidance. OpenAI’s usage policies state that users are not allowed to use its services for “tailored advice” without consulting a licensed health professional.

“ChatGPT is like your crazy uncle,” said Dr. Ida Sim, a professor at the University of California, San Francisco, who studies how to use data and technology to improve health care. The edge, Sim says, is that medical chatbots are less prone to sycophancy and more likely to ground answers in peer-reviewed research and clinical guidelines. That’s why she says the uptake has been “tremendous.”

The most common use case

Millions of research papers are published every year — and keeping up with them all is impossible. “You’d need like 18 hours a day to stay up to date,” said Dr. Jared Dashevsky, a resident physician at the Icahn School of Medicine at Mount Sinai. But doctors are expected to stay current on new research and guidelines to maintain their licenses. Many say they now use medical chatbots as a reference tool to help them stay updated. Rather than pulling information from the entire internet, specialized medical chatbots actively search medical literature, says Dr. Jonathan H. Chen, an associate professor at Stanford Medicine who leads his health system’s efforts to integrate AI into medical education. That workflow provides doctors with more accurate answers that summarize and link to important papers and guidelines. 
Dashevsky, who writes about AI, says these features are especially helpful for trainees working long hours.

Uploading patient records to AI bots

Some health systems have adopted AI chatbots to improve patient care, promising doctors safety and privacy protections. But many doctors use unauthorized chatbots called shadow AIs, according to doctors CNN spoke with. Some of these shadow AIs also advertise HIPAA compliance features. HIPAA is a federal law that requires certain organizations that maintain identifiable health information — such as hospitals and insurers — to protect it from being disclosed without patient consent. Language used by shadow AIs has led some doctors to believe that it’s safe to upload protected health information onto chatbots in exchange for more tailored answers. But Iliana Peters, a health care lawyer at the law firm Polsinelli who previously led HIPAA enforcement for the US Department of Health and Human Services, says that assumption is inaccurate. “‘HIPAA compliance’ is not an accurate term to use by any company,” Peters said, explaining that the phrase should be used only by government regulators.

Despite that, Dr. Carolyn Kaufman — a resident physician at Stanford Medicine — and other doctors say that patient information is making its way into unauthorized chatbots, potentially opening the door to new ways of commodifying patient data. “Data is money,” Kaufman said, noting that she has never uploaded HIPAA-protected information onto an unapproved chatbot. “If we’re just freely uploading those data into certain websites, then that’s obviously a risk for the individual patient and for the institution, as well.”

Drafting AI-generated notes

AI chatbots have also stepped in to help doctors draft summaries of patient visits and long hospital stays. These notes are viewable on online patient portals and help doctors track a patient’s course and communicate plans across the care team. 
“It’s probably safer to have artificial intelligence review a hospital course and know everything that happened, versus you as a human — with limited time, jumping from note to note — trying to put the pieces together,” Dashevsky said, arguing that although concerns over AI accuracy are valid, human-written summaries may also miss key details.

Writing letters to insurance companies

Administrative work can take up nearly nine hours a week for the average doctor, and the time doctors spend on insurance-related tasks costs an estimated $26.7 billion each year. A feature that Dashevsky says has been a “game-changer” is chatbot-authored letters to insurance companies for prior authorizations and other correspondence, allowing him to field patient requests more quickly. “I would have to figure out who this patient is, write the letter myself and review it. It took so much time,” he said. “Now, AI will produce for you a really good letter.”

Creating a list of possible diagnoses

When patients come to doctors with concerns, physicians have to figure out how to help them. Part of that process is considering a range of possible diagnoses. Many medical students and trainees use AI chatbots to help build that list, and some doctors beyond training use the feature, too. “From a med student perspective … you’re seeing a lot of things for the first time,” said Evan Patel, a fourth-year medical student at Rush University Medical College. “AI chatbots sort of help orient me to what possibilities it could be.” Kaufman says the bots provide the most accurate list when she includes every data point linked to patients, like lab results and imaging findings.

What patients need to know

All eight doctors and trainees CNN spoke with say they regularly use medical AI chatbots. And most have a positive outlook, viewing these tools as a way to offload certain cognitive and administrative tasks. But patient privacy concerns are valid, the doctors say. 
Five questions to ask your doctor

1. How are you using AI chatbots to augment my care?
2. What types of AI chatbots do you use, and have they been approved by the health system?
3. Is any of my personal health information being entered into AI tools, and how is it protected?
4. How do you check that the information from AI chatbots is accurate?
5. Do you usually agree with the information from AI chatbots, or do you find yourself questioning it?

As with any AI tool, Kaufman says, errors happen and information can be inaccurate. When she consults peers for second opinions, she says, they “almost never agree” with the AI chatbot’s answer. “People treat AI like it’s magic,” Chen said. “It’s not magic. It can’t just do anything you want.” He added: “You ask the same question 10 times, and it’ll give you 10 different answers.” That variability, Chen argues, highlights some of the surface-level limitations.

Medicine operates on three layers, Sim says: workflows, knowledge and expertise. AI is transforming the first two. But that last layer — core to the care patients receive — is harder to replicate and may be what matters most. “If we just apply guidelines, then replace us,” Sim said. “It’s where you take the knowledge and apply it to an evolving set of conditions in the context of your life. That’s what medicine is. It’s in the context of people’s lives. And these machines don’t do that.”

By Michal Ruprecht

Monday, April 20, 2026

The Pareto Principle Is How AI Actually Takes Jobs

Are you afraid of losing your job? That question might sound silly at first, but over the past several years, the specter of losing one’s job has risen to horror-movie jump-scare proportions. It’s not just you. Everyone who has a job is deathly afraid of losing it. I hear this daily, in comments on my articles, in my consulting work, on social media, even among friends. No one is immune.

Why? Well, there are a lot of reasons, but one might be the constant drone from big tech and the press, both of which have spent a lot of the past four years telling us that AI is coming to take our jobs and, with this new strain of zombie mutant AI, no one is immune. Is that true? Well, I’ve spent a lot of time working with AI, and I’ve also spent 15 years telling people why AI shouldn’t be coming for their jobs. I think I can connect the dots here. And they’re sobering. But someone has to tell you the truth.

When AI Strikes, It’s Slow, Then Quick

The first fact I can give you is that, despite the current conventional wisdom, AI has put, and will continue to put, blue-collar jobs at risk far more frequently than white-collar or knowledge-worker jobs. The increased threat to blue-collar jobs exists for a couple of reasons, but it mostly has to do with the Pareto principle at work within automation, and automation is still the bulk of what AI is being used for in this cycle — whether that’s automating content, automating conversations, or, more broadly, automating sets of tasks that just follow basic instructions. That last one is what’s led to the onslaught of machines taking blue-collar jobs as far back as the 1980s. It’s still happening today; it’s just being overshadowed by all this new white-collar carnage. While maybe not as easy and inexpensive to get started with as today’s AI, replacing physical tasks with machines ultimately offers far more coverage and better results than replacing knowledge tasks. 
The more repetitive the task and the less knowledge required to execute it, like spot welding, the more that job is suited for automation. As computer processing has become more powerful, robots can now make choices, and even appear to understand the nuances of more complex tasks, like arc welding. Or driving. Keep that in mind. Because the second fact I can give you is that when AI comes for your job, it happens slowly, with lots of warning, then real fast, without warning.

Self-Driving Tech Is Happening Fast

Right now, the AI-job-taking threat that’s easiest to spot is that of taxi driver. Self-driving tech has been around since the 1950s, and experimental forms of passenger cars were being developed in the 1970s. At that point, with self-driving vehicles screaming down the Autobahn in the 1980s, the writing was on the wall for driving as a profession, no? Well, it’s 2026, so why didn’t it happen? It did. It just started very slowly.

I just got back from lovely Tempe, Arizona, which is kind of the epicenter of self-driving cars and those little delivery robots — and Starship, please, please, please send me one, I promise to take care of it and feed it and walk it every day. Waymo self-driving cars are, no exaggeration, everywhere. By my third day in Tempe, a Waymo brushed past me as I was getting out of my car and I didn’t think twice about it. Later, an empty one cut me off trying to change lanes, which happened a dozen more times that day with cars driven by humans. I know. Nice anecdote, bro. But it’s not just me. Waymo ridership is skyrocketing, especially over the past two years and significantly over the past few months. My daughter and her friends take them exclusively now. No one talks about their uniqueness or their ubiquity. They’re cheaper to run, they can run 24/7, and they may eventually become safer than human drivers, if they aren’t already, which is debatable. 
Uber and Lyft are not immune. Despite all the advancements those companies brought to the taxi experience, from lowering costs to cashless payments to accurate(ish) timing, as the human-driver side of the equation went from cheap rideshare side gig to expensive full-time taxi-driver job — an evolution those companies always knew was a risk but could never mitigate — the wheels were already in motion to eliminate that cost from the system. So when driverless comes, it’s coming for everyone. And it’ll come without warning, because we’ve already been warned. For 40 years.

What Are the AI-Job-Taking Warning Signs?

Again, the conventional wisdom here is to ask yourself how valuable you are to the company. But I think that metric is more of a requirement, less of an indicator. Whether what you do could be replaced, by a human or otherwise, is another issue. So we need to start with the baseline that everyone reading this is valuable to their company. Because the real metric is the company, and in a greater sense, what that company does. Apologies to Cory Doctorow, but whether AI will replace your job has a lot to do with your company’s place in what I call the Enshittification Scale. Not to make a thing out of it, but we see this in every aspect of our lives, as both consumers and business people. The scale goes, in order of AI-dethroning threat:

1. I Love This Product
2. They Used to Be Awesome
3. They’re the Lesser of All Evils
4. They’re the Only Safe Bet
5. I Hate Them

And it’s actually more about the use case — what the company does — than the company itself. The thing is, you don’t have to get all the way to the bottom of the scale for AI or tech to start taking jobs. When the emotion about the product, love or hate, switches from the product itself to the use case, then it’s time for the asteroid to come in and take out all the dinosaurs.

Policy Just Delays the Inevitable

And when that time comes, no amount of entrenchment will make an incumbent industry or sector safe. 
That entrenchment usually starts with external protection: first policies, then unions, then laws. What these end up doing, in too many cases, is prevent the natural evolution of the use case and the growth of the person executing it, stunting that growth until neither can be saved and the knowledge that actually lifts the person above the AI job-taker is made irrelevant. Like I said, it happened when laws were passed to designate Uber and Lyft drivers as employees and not contractors. You’re seeing it in fast food right now. Minimum wage goes up, then in come the self-service order kiosks.

I remember when we were all complaining about the lack of customer service at a McDonald’s. Now we’re all pissed off if the machine is busted and we have to talk to someone to get our Big Mac when we could have just punched it into an app. Think about it. It was actually the company’s decision to remove the knowledge behind customer service from the employees delivering it that turned those “McJobs” into a series of button-punching tasks. So what about those button-punching white-collar McJobs? Well …

Evolve or Die?

When it comes to knowledge and skills, “evolve or die” — as much as I hate that phrase — is hard to argue with. But here’s the thing. That evolve-or-die situation is not happening in tech, not as fast as they’d have you think. Techies are surprisingly fast to evolve, and one of the reasons these layoffs are so painful is that most techies haven’t had a problem adopting these new AI technologies. The problem is that leadership and management have always seen AI as a replacement out of the box, regardless of the company’s place on the Enshittification Scale, and so everybody is doing layoffs because AI gives them a way out. Hell, if it’s AI’s fault, then that might just “fix the glitch” and make the company great again. It won’t. The Enshittification Scale rarely, if ever, moves in reverse. So the warning signs are there if you take a look around you. 
Is your tech company still a tech company? Or is it a task factory? Are you being protected? Or are you adding value? If you can answer those questions, the warning signs should be easy to spot, and you should think about getting out before things happen quickly.

If this resonated, please join my email list, a rebel alliance of professionals from all walks of life who want a unique take on tech, business, and the future of both.

EXPERT OPINION BY JOE PROCOPIO, FOUNDER, JOEPROCOPIO.COM @JPROCO