Monday, April 27, 2026

Shadow AI: Silicon Valley’s New Productivity Secret Is Also a Massive Liability

Your employees are most likely using shadow AI. It’s a scary-sounding name for a relatively common practice, but one that could have real consequences for your business. First, it helps to understand what shadow AI actually is, before unpacking strategies to prevent your employees from using it—and potentially exposing your company to reputational damage, litigation, or even financial losses.

According to Rick Holland, a cybersecurity expert and chief information security officer at AI-native cybersecurity firm Cyera, “shadow” refers to unsanctioned use of technology in the workplace. That can include software, hardware, or AI tools.

“It’s the use of a technology that the IT function, the business, the CTO is unaware of,” Holland says. “You don’t know who’s using it. You don’t know who has access to it. You don’t know the data that is being used.”

If, for example, Microsoft Copilot is your company’s sanctioned chatbot, employees who turn to ChatGPT for work help are using shadow AI.

A November report from cybersecurity firm UpGuard found that more than 80 percent of some 1,500 workers surveyed across the U.S., U.K., and other countries use unapproved AI tools at work—about half of them regularly. Even cybersecurity professionals aren’t immune, with an even higher proportion, some 90 percent, admitting to using shadow AI.

The first step in addressing shadow AI is recognizing employee motivations. And more often than not, those motivations are not nefarious, Holland says. As AI has swept the business world, workers are under pressure to be increasingly productive and be fluent in new technologies—otherwise, they could risk becoming redundant in a future experts warn will be defined by AI. Not to mention, they’re likely discovering that new, AI-enhanced tools are making their lives easier, and IT departments often don’t move quickly enough to support them.

“They’re trying to do their jobs, and they may have found a better, faster way to do it,” Holland says. “We all need to be learning AI right now, because it’s disrupting every vertical that’s out there.”

“So I always start off [assuming] best intentions when having conversations around shadow AI,” he adds.

But even actions undertaken with the best intentions can have serious consequences. Shadow AI can threaten a business in a few different ways.

Regulatory violations

Employees who are using unapproved tools may unintentionally expose data that is governed by regulations like HIPAA, resulting in fines. This can be anything from patient health data and payment information to data on private citizens that is covered by the General Data Protection Regulation (GDPR) in Europe.

“When you put [data] into Claude or OpenAI or Grok or whatever, you can’t get that information out, and it’s training on your data,” Holland says. “There’s a potential that someone else could query the frontier model and then get that information back.”
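What guarding against that kind of exposure looks like will vary from company to company, but one common building block is a pre-submission filter that strips obvious regulated identifiers before a prompt ever leaves the corporate boundary. The sketch below is a minimal illustration of that idea in Python, not a description of any vendor’s product; the regex patterns, function names, and sample prompt are all assumptions made for demonstration.

import re

# Illustrative patterns only; a production control would use far broader
# detection (document classification, named-entity recognition, file-type rules).
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text: str) -> tuple[str, list[str]]:
    """Replace likely regulated identifiers with placeholders and report what was caught."""
    findings = []
    for label, pattern in PATTERNS.items():
        if pattern.search(text):
            findings.append(label)
            text = pattern.sub(f"[REDACTED-{label.upper()}]", text)
    return text, findings

def prepare_prompt(text: str) -> str:
    """Run redaction before a prompt leaves the corporate boundary; log anything caught."""
    cleaned, findings = redact(text)
    if findings:
        print(f"identifiers redacted before submission: {findings}")
    return cleaned  # only the cleaned text would be handed to the external model

if __name__ == "__main__":
    raw = "Summarize this claim for patient jane.doe@example.com, SSN 123-45-6789."
    print(prepare_prompt(raw))

A real control would go much further, but even this much turns an invisible leak into a logged, reviewable event.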
IP loss & reputational damage

Although it is relatively easy to quantify the consequences of data leaks that run afoul of regulations like HIPAA, more insidious—and perhaps more damaging in the long term—are leaks of sensitive or proprietary data. In other words, your company’s intellectual property.

Consider the following scenarios: An employee at a soft drink company uses a free transcription app during a meeting and accidentally shares the company’s secret ingredient with an LLM. That means the trade secret is now stored by an unapproved third party outside of the company’s control, risking exposure of that data in the event of a security breach. Or, a pharmaceutical employee working on marketing materials feeds data into AI about a new drug in development shortly before a patent filing. Disclosing IP before securing legal protections can potentially jeopardize a company’s patent rights.

“Regulated data that gets fines, that may or may not set your business back,” Holland says. “But if you lost your secret sauce—whatever your secret sauce is—and a competitor was able to find it, that could have very strategic implications to you long term.”

New vectors for attack

Any time new software is used in a corporate setting, Holland says, it represents a new “attack surface” that bad actors can use to infiltrate a system. If an IT department doesn’t have visibility into the software employees are using, it can’t guarantee that the software is safe.

Just look at the recent LiteLLM supply chain attack, which was designed to steal all sorts of login credentials. It all started with a tool called Trivy, an open-source security scanner that is reportedly used by major companies. After Trivy was infected, the malware was able to spread to any project that depended on Trivy, including LiteLLM. If IT departments are not aware that workers are using such tools, they won’t have a chance to familiarize themselves with what those tools are built on and look out for telltale signs of an attack.
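One way an IT function can build that familiarity is to treat a tool’s dependencies as something to inventory and compare against what has actually been reviewed, so an unvetted package or an unexpected version change stands out. The following is a rough sketch of that idea in Python; the allowlisted package names and versions are placeholders, and in practice this job is usually handled by lockfiles, hash pinning, or dedicated scanners rather than a one-off script.

from importlib.metadata import distributions

# Hypothetical allowlist maintained by the security/IT function after review.
# Package names and versions here are placeholders for illustration only.
APPROVED_VERSIONS = {
    "requests": "2.31.0",
    "litellm": "1.40.0",
}

def audit_environment() -> list[str]:
    """Flag installed packages that are unreviewed or drift from their reviewed versions."""
    findings = []
    for dist in distributions():
        name = (dist.metadata["Name"] or "").lower()
        approved = APPROVED_VERSIONS.get(name)
        if approved is None:
            findings.append(f"UNREVIEWED: {name}=={dist.version}")
        elif dist.version != approved:
            findings.append(f"VERSION DRIFT: {name}=={dist.version} (reviewed: {approved})")
    return findings

if __name__ == "__main__":
    for finding in audit_environment():
        print(finding)

The mechanics matter less than the underlying question: does the IT function know what a tool is built on, and would it notice when that changes?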
Unauthorized access

AI agents have opened a whole new world of concerns for IT departments. Because agents are designed to act autonomously, their access to data must be strictly policed. One high-profile recent example, documented in a now-viral post on X, came when a Meta AI researcher asked OpenClaw to clean up her inbox and, she says, it instead went on a “speedrun” deleting her emails en masse. “I had to RUN to my Mac mini like I was defusing a bomb,” she wrote, noting that the agent did not respond to commands from her phone.

A more serious example would be a hypothetical company that didn’t restrict what internal data an agent could access. In response to a staff or customer query, the agent could pull sensitive information such as executive compensation or details of forthcoming, but not yet disclosed, M&A activity.

Holland emphasized that it is crucial for companies to discover and identify their data and who has access to it, in order to secure it, which is one of the key services his firm, Cyera, provides. “Our historical nature of over-providing data and access is going to come back to get us. Agents are people pleasers,” Holland says. “That’s why you have to lock them down and what they can access.”
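What “locking them down” looks like differs across agent platforms, but the underlying pattern is usually a deny-by-default check in front of every piece of data an agent tries to touch. Here is a minimal sketch of that pattern in Python; the role names, resource names, and permission table are invented for illustration and are not drawn from any particular agent framework.

# Deny-by-default access policy for agents; roles and resources are hypothetical.
AGENT_PERMISSIONS = {
    "support_agent": {"knowledge_base", "ticket_history"},
    "inbox_assistant": {"read_email"},  # note: no delete or send permission
}

class AccessDenied(Exception):
    pass

def authorize(agent_role: str, resource: str) -> None:
    """Raise unless the agent's role is explicitly allowed to touch the resource."""
    if resource not in AGENT_PERMISSIONS.get(agent_role, set()):
        raise AccessDenied(f"{agent_role} is not permitted to access {resource}")

def fetch_for_agent(agent_role: str, resource: str) -> str:
    """Gate every retrieval an agent makes through the same authorization check."""
    authorize(agent_role, resource)
    return f"{agent_role} granted read access to {resource}"

if __name__ == "__main__":
    print(fetch_for_agent("support_agent", "knowledge_base"))       # allowed
    try:
        fetch_for_agent("support_agent", "executive_compensation")  # denied by default
    except AccessDenied as err:
        print(f"blocked: {err}")

The point is simply that an agent never inherits broad access by default; anything not explicitly granted is refused.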
Start with visibility—and resist blocking tools

Given all the possible threats from shadow AI, how can IT departments ensure their employees are only using safe and approved products? The answer, according to Jeff Pollard, a vice president and principal analyst at global market research and advisory firm Forrester, is to start with understanding—and that means resisting the urge to block access to AI tools.

“Trying to block or ban shadow AI is rarely effective because there are so many different ways to access and get to AI, so if you block it on an endpoint or from a browser on an employee’s workstation, they just pick up their phone and then they use it there,” he says. “The other problem with blocking is that you do lose the insight that you would get out of what the employee is trying to do and why they’re trying to do it.”

Pollard, who helps companies navigate secure enterprise adoption of AI, whether that means tools like Microsoft Copilot or vibe coding tools like Cursor and Replit, recommends working separately with different departments to ascertain what types of tools they need, and setting policies accordingly. The types of AI tools a finance department uses usually won’t look anything like the ones marketing or customer service teams use, which is why one-size-fits-all policies rarely work. Understanding how and why employees are using unsanctioned AI can help IT departments learn what kinds of tools employees really need as they search for safe alternatives.

Define the approval process

Transparency is key. Pollard says it is important to spell out for staff how and why different programs are approved—or not. That opens the door for employees to submit requests for new approvals, and it also educates them about why certain tools or software are not considered safe. “It’s about co-creation, because ultimately, from a security perspective, you are coaching the organization on risk acceptance, but the organization itself has to accept that risk,” he says.

Holland adds that establishing a governance model meant to “work at the speed of AI” is crucial. That means creating an AI governance committee with staff who understand AI, new technology, and data and information security. Those experts, he says, should be charged with cultivating a culture of communication in which different departments feel comfortable discussing their technological needs, and the tools that may address them, without fear of punishment.

Know when to bring in legal

Pollard agrees with Holland’s assertion that most employees don’t intend to cause harm when using shadow AI. That’s why education is so important, although Pollard notes that “ignorance is no excuse.” He says many policy violations are a training issue, although a scenario in which violations are widespread could necessitate institutional introspection to determine whether policies are actually working for employees.

If a company has done its best to establish workable policies and educate employees about them, and someone still knowingly violates them, then it might be time for a call to legal. “I will tell you that CISOs don’t want to be the ‘Department of No’ anymore,” Pollard says. “When you’re looking for someone to come in and be the heavy hitter to say, ‘Shut this down,’ pull on legal shirt sleeves, because they’ll absolutely come in and help you out.”

Important reminders

When constructing corporate policies, it can be difficult to keep everyone happy. But one way to alleviate this tension is to move quickly and remain adaptable, given the pace of AI development. “You’re never going to have every single platform covered—there are just too many of them. So you do have to sort of accept that you’re going to have to adapt. You’re going to learn about a new platform all the time,” Pollard says. “You can’t leave a policy, or set it and forget it.”

And although verifying that a tool is safe to use can be labor intensive, there are a few broad recommendations to keep in mind. Companies often prefer to choose AI models that are hosted domestically, and Pollard says that can mean U.S. companies avoiding Chinese models (or even European companies avoiding models hosted in the U.S.). And he adds that securing enterprise contracts is paramount, because they set expectations and offer legal recourse in case those expectations are not met.

“The consumer grade aspect of this is certainly the one that’s the most problematic, where someone goes directly to Cursor as an individual, or goes directly to Copilot as an individual and buys it,” he says. “That’s definitely what you want to try to crack down on, but that’s also the way a lot of these tools are introduced to an enterprise environment. So in that scenario, it’s about trying to work with as many as you reasonably can to accommodate what different employees need.”
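One lightweight way to keep a policy from becoming “set it and forget it” is to treat the approved-tool list as data with a review date attached, so stale entries and consumer-grade purchases surface automatically. The sketch below illustrates that idea in Python; the tool entries, hosting labels, and 90-day review cadence are assumptions made for demonstration, not recommendations.

from datetime import date, timedelta

# Hypothetical registry entries; the fields mirror the questions raised above:
# where is the tool hosted, is there an enterprise contract, when was it last reviewed?
APPROVED_TOOLS = [
    {"name": "Microsoft Copilot", "hosting": "US", "enterprise_contract": True,
     "last_reviewed": date(2026, 1, 15)},
    {"name": "Cursor", "hosting": "US", "enterprise_contract": False,
     "last_reviewed": date(2025, 9, 1)},
]

REVIEW_INTERVAL = timedelta(days=90)  # assumed cadence; use whatever the governance committee sets

def needs_attention(today: date) -> list[str]:
    """Flag tools that are overdue for review or still on consumer-grade terms."""
    findings = []
    for tool in APPROVED_TOOLS:
        if today - tool["last_reviewed"] > REVIEW_INTERVAL:
            findings.append(f"{tool['name']}: review overdue")
        if not tool["enterprise_contract"]:
            findings.append(f"{tool['name']}: no enterprise contract in place")
    return findings

if __name__ == "__main__":
    for finding in needs_attention(date.today()):
        print(finding)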

BY CHLOE AIELLO @CHLOBO_ILO