Friday, March 20, 2026
The world’s most valuable company just sent another signal that AI agents are going to be everywhere
Tech giant Nvidia, the world’s most valuable company and the poster child of the AI boom, is banking its future on the rise of AI agents.
The company on Monday announced a slew of software and hardware updates to encourage the development of AI agents, or AI assistants that can perform tasks for users. Among the most significant announcements is a set of tools for AI helpers based on OpenClaw, the buzzy agent platform that’s been the talk of Silicon Valley in recent weeks. Nvidia also announced new computing racks designed to power agents, shifting its strategy’s primary focus from graphics processing units.
Clad in his signature black leather jacket, Nvidia CEO Jensen Huang made the flurry of announcements in San Jose at the chipmaker’s annual GTC conference, which attracts tens of thousands of attendees and has been dubbed the “Super Bowl” of AI.
Nvidia’s announcements are important because so many major companies rely on its systems to train and power their AI services. That means the chip giant’s new products often signal where the rest of the AI industry is headed.
Nvidia announced software tools to help companies make AI agents, including models and a blueprint for creating custom specialized assistants. It’s also launching a set of resources for creating agents on OpenClaw that adds privacy and security controls, which is crucial considering the popular agent has raised concerns among cybersecurity experts.
Nvidia said its resources help OpenClaw agents access a user’s systems and files without compromising security or privacy. Huang said Nvidia worked directly with OpenClaw creator Peter Steinberger, who was recently hired by OpenAI.
Huang called OpenClaw the “operating system for personal AI” and likened its importance to that of the Mac and Windows operating systems.
“OpenClaw is the number one. It is the most popular open-source project in the history of humanity, and it did so in just a few weeks,” Huang said.
Nvidia also unveiled updates to its new computing platform, Vera Rubin, which it said comprised seven chips that are now in full production. That includes a new central computing rack made up of central processing units (CPUs) rather than the graphics processing units (GPUs) Nvidia has been known for. CPUs are ideal for running the types of computing processes needed to power AI agents.
The company is also integrating a non-Nvidia processor into its systems: new high-speed “language processing units” (LPUs) from American AI company Groq. Nvidia struck a $20 billion deal with Groq in November.
Unlike AI chatbots that respond to questions and prompts, AI agents can autonomously complete tasks like building websites, creating marketing pitches and sending emails. AI agents are currently Nvidia’s biggest focus area, largely driven by the popularity of OpenClaw and Anthropic’s Claude Code and Cowork agents.
“Every company in the world today needs to have an OpenClaw strategy, an agentic system strategy. This is the new computer,” Huang said. “This is as big of a deal as HTML, as big of a deal as Linux.”
Nvidia is attempting to future-proof its technology in other ways as well. It’s launching a space module for Vera Rubin, aiming to bring its latest tech to data centers in space. Orbital computing has become an area of increasing interest among tech giants as they scramble for real estate to construct data centers. OpenAI CEO Sam Altman and xAI and Tesla CEO Elon Musk have both talked about using space to help power data centers and energy-hungry AI systems.
“Nvidia is now focused beyond just computing with a major focus on the future of networking in this new world of AI,” said Wedbush analyst Dan Ives ahead of Nvidia’s Monday conference.
In his speech on Monday, Huang tried to convey that the hype around AI and Nvidia can last, selling a vision of an AI-transformed future in which demand for its chips grows indefinitely.
Huang said computing demand “just keeps on going up,” adding that he expects “at least” $1 trillion in Nvidia revenue through 2027.
“There’s a reason for that,” Huang said. “This fundamental inflection — AI is able to do productive work, and therefore the inflection point of inference has arrived.”
By Hadas Gold
Wednesday, March 18, 2026
How One of the World’s Top AI Voices Uses Claude Code to Run Her Day
Allie K. Miller, one of the most followed voices in the AI industry, says that “by the time you wake up, your AI should have already been working for you for hours.”
Formerly the global head of machine learning for startups and venture capital at Amazon Web Services, Miller is among the busiest AI consultants and influencers in the industry, with more than 1.6 million followers on LinkedIn alone. Through her company Open Machine, she advises enterprises and business leaders—including those at OpenAI, Google, Anthropic, and Warner Bros. Discovery—on how to adopt AI. In 2025, Miller was named one of the 100 most influential people in AI by Time.
In an interview with Inc., Miller says that nowadays, she largely works out of Claude Code, the agentic coding system developed by Anthropic. She keeps multiple instances of Claude Code running simultaneously in separate terminals. Because these Claude Code instances have access to Miller’s filesystem, they can autonomously complete work on her behalf.
Miller teaches Claude Code how to complete workflows by using Skills, a feature that allows Claude Code to undertake and repeat multistep processes. Miller says that she’s developed automations that generate a report summarizing all of the urgent emails she’s received overnight and a daily morning briefing that runs through her entire calendar, recommending times to recharge.
“It’ll tell me, ‘You have four different interviews or six client meetings,’” explains Miller, “‘so I’ve gone ahead and blocked out 30 minutes tomorrow for deep work.’”
Another example: Every time Miller edits a social video of herself using CapCut, the TikTok-owned video editing app, she exports the video into a specific folder. Anytime a new file is added to that folder, an automation is triggered that automatically creates a transcript, a social post, and a screenshot for the video’s thumbnail.
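The article doesn’t say how Miller’s folder trigger is actually wired up, but the underlying pattern is easy to sketch. Below is a minimal, hypothetical Python polling watcher; the function names, folder, and the downstream pipeline call are all illustrative, not details from Miller’s setup:

```python
import time
from pathlib import Path


def snapshot(folder):
    """Names of the files currently sitting in the export folder."""
    return {p.name for p in Path(folder).iterdir() if p.is_file()}


def new_files(folder, seen):
    """Return files that appeared since the last snapshot, plus the updated snapshot."""
    current = snapshot(folder)
    return sorted(current - seen), current


def watch(folder, on_new_file, poll_seconds=2.0):
    """Run forever: each time a file lands in `folder`, hand it to the pipeline."""
    seen = snapshot(folder)
    while True:
        fresh, seen = new_files(folder, seen)
        for name in fresh:
            # e.g. kick off transcript -> social post -> thumbnail generation
            on_new_file(Path(folder) / name)
        time.sleep(poll_seconds)
```

In a real deployment, the `on_new_file` callback would be whatever automation generates the transcript, post, and thumbnail; an OS-level file-watching API would avoid polling entirely.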
In general, Miller says, the best way to identify AI solutions that work for your specific use case is to simply have the AI model of your choice interview you. Tell it to ask you questions about your work, making note of areas that you feel could be more efficient or smoother. Then, Miller says, prompt it again with “make these ideas more proactive, more responsibly autonomous, and more action-forward.” With just that prompt, she adds, you can get started developing your own AI solutions.
It’s not just workflows that Miller is automating. When developing a new post for her newsletter, Miller says that she runs drafts through eight “synthetic personas” that she’s developed, which represent the newsletter’s different audience demographics. “I’m not trying to appease all eight and write a happy-go-lucky version of the newsletter,” says Miller, “but I want to make sure I didn’t miss something important. I want to make sure that a parent reading [the newsletter] isn’t completely misunderstanding my take on something.”
Miller has a similar strategy when making big career decisions. She built a self-described “AI boardroom,” complete with six synthetic personas, which weigh in on major company issues. Miller swaps around which six personas sit on the board, depending on her needs. “If it’s a media question, maybe I’m running it through Shonda Rhimes,” she says, “or if it’s a business question, maybe I’m asking Jeff Bezos.” These personas give their initial opinions on the decision, and then they all begin debating with one another in a group chat. “I literally had Mickey Mouse arguing with Jensen Huang,” Miller adds.
The point, Miller says, is to get the most out of the raw intelligence offered by today’s AI models. “Wouldn’t you love to walk into a room of 10 geniuses arguing over something that you’ve been struggling with, and all they want to do is help you get to the best possible outcome?” she says. “For those who have a growth mindset and thrive off of dynamic, changing, adaptable business settings, the multiagent world that we are walking into in 2026 is going to be world-changing.”
BY BEN SHERRY @BENLUCASSHERRY
Monday, March 16, 2026
Meta just bought the social network for AI bots everyone’s been talking about
Meta, the company behind some of the world’s most popular social media platforms, just scooped up a new site – for bots.
Meta has acquired Moltbook, the social media network where AI agents interact with one another autonomously, the company said in a statement on Tuesday.
Meta is competing with rivals like OpenAI for both talent and users’ attention. And as AI expands into more aspects of Americans’ lives, tech companies are trying to figure out the best way to position themselves to win what’s becoming a sort of technological arms race.
Moltbook became the talk of Silicon Valley last month, racking up millions of registered bots within days of its launch. Some in the industry saw it as a major leap because it demonstrated what can happen when AI agents socialize with one another like humans. Others said the site is full of sham agents, AI slop and security risks and should be viewed skeptically.
Meta’s acquisition comes weeks after OpenAI hired the founder of the technology behind Moltbook, an AI agent system called OpenClaw. Moltbook’s team will join Meta’s superintelligence labs. A Meta spokesperson said Moltbook’s approach “opens up new ways for AI agents to work for people and businesses.”
OpenAI CEO Sam Altman dismissed the excitement over Moltbook last month, suggesting OpenClaw, the open-source autonomous AI agent that powers the site’s bots, was the real breakthrough. Altman wrote that he expects the technology to become “core” to OpenAI’s products.
Meta acquired the buzzy AI agent startup Manus in December, following a string of high-profile hires intended to build out its superintelligence team. The company also invested $14.3 billion in Scale AI last year and hired its CEO.
But Meta, like some of its Big Tech peers, is facing pressure to prove its AI investments will make money, especially as rivals like OpenAI, Anthropic and Google churn out new and improved models for their chatbots. Meta CEO Mark Zuckerberg said on a January earnings call the company will release its new AI models “over the coming months.”
By Hadas Gold
Friday, March 13, 2026
AI is exhausting workers so much, researchers have dubbed the condition ‘AI brain fry’
Part of the pitch for artificial intelligence in the workplace goes like this: It’s like having a team of people to delegate your grunt work to, freeing you up to think strategically and maybe, just maybe, take a long lunch or head home early. Or maybe even be more productive, to make more money. It’s a nice idea!
But as everyone who’s either had a boss or been a boss knows, managing is a job in itself, one that comes with its own distinct brand of stress and annoyance. And that doesn’t change if the “people” in question aren’t people at all.
For participants in a recent study by Boston Consulting Group, the experience of overseeing multiple AI “agents,” autonomous software that’s designed to execute tasks, rather than just churn out information like a chatbot, caused an acute sensation of “buzzing” — a fog that left workers exhausted and struggling to concentrate. The study’s authors call it “AI brain fry,” defined as mental fatigue “from excessive use or oversight of AI tools beyond one’s cognitive capacity.”
“Contrary to the promise of having more time to focus on meaningful work, juggling and multitasking can become the definitive features of working with AI,” they wrote in the study, published by Harvard Business Review last week. “This AI-associated mental strain carries significant costs in the form of increased employee errors, decision fatigue, and intention to quit.”
Workers quoted in the study reminded me a lot of my fellow elder Millennials circa 1997, rushing home to tend to their Tamagotchis.
“It was like I had a dozen browser tabs open in my head, all fighting for attention,” one senior engineering manager told researchers. “I caught myself rereading the same stuff, second-guessing way more than usual, and getting weirdly impatient. My thinking wasn’t broken, just noisy—like mental static.”
This is just one new side effect from a push by company executives to make workers use AI more. Last fall, a Harvard Business Review report chronicled the scourge of “workslop” — the nonsensical AI-generated memos, pitch decks and presentations that end up creating more work for colleagues who have to fix what the bot got wrong.
Workslop reflects a kind of “cognitive surrender” in which workers feel unmotivated, giving AI work to do and not really paying attention to the output, said Gabriella Rosen Kellerman, a psychiatrist who co-authored both reports, in an interview. “Brain fry is almost the opposite… It’s like trying to go tête-à-tête — intelligence to intelligence — with the AI.”
Francesco Bonacci, CEO of Cua AI, which builds AI agents, described his AI fatigue as “vibe coding paralysis” (a reference to the Silicon Valley trend of building less-polished projects with AI prompts rather than traditional coding). “I end each day exhausted — not from the work itself, but from the managing of the work,” he wrote last month in an essay on X. “Six worktrees open, four half-written features, two ‘quick fixes’ that spawned rabbit holes, and a growing sense that I’m losing the plot entirely.”
To some extent, brain fry and workslop could both be a case of growing pains. Imagine plucking a middle-aged office worker from 1986, dropping them into a 2026 workplace and asking them to send 10 emails, respond to Slacks and Zoom into a call with the social media team who are all working from home. You’d expect some cognitive overload, not to mention some confused looks when you tell them Donald Trump is president and that it took more than 30 years to make a “Top Gun” sequel.
Of course, people learn how to be managers all the time.
“I do think this is potentially temporary,” said Matthew Kropp, a co-author of the brain fry study and BCG managing director. “These are tools we haven’t had before.”
Kropp compared the experience of someone managing multiple AI tools to that of someone who just learned to drive being given a Ferrari. You can go really fast, but it’s easy to lose control.
Of course, even tech pros seem to be struggling to control their AI assistants at times. Last month, Meta’s director of AI safety and alignment tweeted about her own experience watching bots nearly delete her inbox without permission. “I had to RUN to my Mac mini like I was defusing a bomb,” she wrote, chalking the incident up to a “rookie mistake.”
Both Kropp and Kellerman emphasized that the results of the study weren’t all negative. Surprisingly, the people experiencing brain fry tended to experience less burnout, defined as a state of chronic workplace stress that builds over time and makes workers perform poorly. Brain fry, as participants described it, is an acute experience.
“When they take a break, it goes away,” Kellerman said.
Analysis by Allison Morrow
Wednesday, March 11, 2026
Bad News for Your Burner Account: AI Is Surprisingly Effective at Identifying the Person Behind One
It’s not uncommon for people to maintain anonymous or burner accounts online for a variety of reasons. A new study, though, shows why you might want to be as careful posting from those accounts as you would from one that uses your real name: they might not hide your identity as well as you think.
A recently released research paper found that artificial intelligence is quite effective at figuring out who’s behind those false-name accounts. Large language models, the study found, can extract identity signals (data points or behaviors used to identify, verify, or categorize individuals) and search for matching data, significantly outperforming existing deanonymization methods.
The study successfully deanonymized 68 percent of the users in its trial data set, with 90 percent precision, meaning that when the model named a user, it was right nine times out of ten.
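To make those two rates concrete, here is a small illustrative calculation. The 1,000-account figure is hypothetical; only the 68 percent identification rate and 90 percent precision come from the study:

```python
def deanonymization_counts(total_accounts, identified_rate, precision):
    """Turn the paper's headline rates into concrete account counts."""
    identified = total_accounts * identified_rate  # accounts the model names
    correct = identified * precision               # names that are right
    wrong = identified - correct                   # confident but mistaken
    return identified, correct, wrong


identified, correct, wrong = deanonymization_counts(1000, 0.68, 0.90)
# For 1,000 pseudonymous accounts: 680 named, 612 correctly, 68 misattributed.
```

The misattribution count is worth noting: even at 90 percent precision, a large-scale sweep confidently pins dozens of accounts on the wrong people.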
“Our findings have significant implications for online privacy,” the researchers, who were based at ETH Zurich, a public university in Zurich, Switzerland, and MATS, an independent research and educational program, wrote. “The average online user has long operated under an implicit threat model where they have assumed pseudonymity provides adequate protection because targeted deanonymization would require extensive effort. LLMs invalidate this assumption.”
Anthropic also contributed to the study.
The findings that pseudonymous content can be fairly easily unmasked by AI have implications far beyond burner accounts and social media, of course. It can also be a powerful tool for hackers. And it can make it easier for companies to track down employees who leak corporate information or dig into who is asking questions in open forums.
It could also prove embarrassing for leaders who utilize burner accounts to pump up their businesses or covertly settle online scores with rivals. Casey Bloys, chairman and CEO of HBO and Max Content at Warner Bros. Discovery, admitted in 2023 that he had fake social media accounts he used to troll critics about network programming (later admitting that was a “dumb idea“). Elon Musk has confirmed in a court deposition that he has used them in the past. And Barstool Sports was accused in 2023 of using more than 40 accounts to promote its content and help it go viral.
Users hoping to keep their identity private or vulnerable members of society who depend on privacy (e.g., whistleblowers, activists, or abuse survivors) could also be identified. A slightly deeper dive by the AI could also determine where those people live, their occupation (and estimated income level), and more.
To protect against that, the researchers proposed several mitigations, including enforcing rate limits on API access to user data, improving detection of automated scraping, and restricting bulk data exports. That said, they acknowledge that preventing AI from unmasking accounts designed to obscure their owners’ identities will only grow more challenging in the months and years to come.
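The paper doesn’t prescribe an implementation for those rate limits, but a token bucket is one common way a platform might enforce per-client limits on profile-data endpoints. The class below is an illustrative Python sketch, not code from the study:

```python
import time


class TokenBucket:
    """Per-client rate limiter: requests spend tokens, tokens refill over time."""

    def __init__(self, capacity, refill_per_second):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill = refill_per_second
        self.last = time.monotonic()

    def allow(self, cost=1.0):
        """Charge `cost` tokens for a request; False means reject or delay it."""
        now = time.monotonic()
        elapsed = now - self.last
        self.tokens = min(self.capacity, self.tokens + elapsed * self.refill)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False
```

A legitimate user browsing a handful of profiles never hits the limit, while a bulk scraper feeding an LLM pipeline burns through its bucket almost immediately.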
“Recent advances in LLM capabilities have made it clear that there is an urgent need to rethink various aspects of computer security in the wake of LLM-driven offensive cyber capabilities,” the study reads. “Our work shows that the same is likely true for privacy as well. … Any moderately sophisticated actor can already do what we do using readily available LLMs and embedding models. With future LLMs, without mitigations, this attack will be within the means of basically all adversarial actors.”
BY CHRIS MORRIS @MORRISATLARGE
Monday, March 9, 2026
The Hidden Advantage of Being Over 50 in the Age of AI
I’ve been through a few technology revolutions. I built my first website in 1995, back when the internet made that screeching dial-up sound and nobody really knew what we were building, just that something big was happening. I watched the dot‑com bubble inflate and implode, watched social media go from novelty to addiction, and saw smartphones quietly rewire how humans behave. And now, here we are again: AI.
Everywhere you look, someone is launching an AI startup, automating departments, or building agents that promise to replace entire job functions. If you’re an experienced founder or executive—especially north of 50—it’s easy to feel like you showed up late to the party. I’ve felt it myself. A few months ago, I was sitting in front of my computer watching younger founders crank out AI apps in days, shipping products before I’d even finished reading about the tools they were using. I remember thinking, “Am I becoming the guy who missed it?” That thought lasted about a week.
Once I stopped comparing velocity and started actually using AI in my own work, something clicked. This might be the first tech wave where experience is the real unfair advantage.
AI isn’t about being technical. It’s about thinking clearly
Previous tech revolutions rewarded people who could code, manipulate algorithms, or master new platforms faster than everyone else, but AI is different. You don’t need to learn a programming language; you need to ask better questions. And asking better questions isn’t a technical skill—it’s a judgment skill. The leverage in AI doesn’t come from typing prompts quickly; it comes from knowing what matters, what doesn’t, and what consequences might follow. That’s pattern recognition, and pattern recognition is built over decades. It’s something AI is really good at, and it turns out those with experience are as well.
Speed is overrated. Judgment isn’t
Younger founders are moving fast right now, and I respect that. It’s exciting to watch. But speed without context creates a whole lot of noise, while experience creates context. When I use AI, I’m not asking it to build me a novelty app; I’m asking it to stress‑test a business idea, identify blind spots in a launch plan, challenge my assumptions, and help me flesh out existing models. I don’t accept what it gives me—I argue with it, refine it, and push it. That’s not something you learn from YouTube tutorials. That’s something you learn from making expensive mistakes.
The real danger isn’t falling behind—it’s outsourcing your thinking
There’s a subtle shift happening where leaders are starting to treat AI like a strategy generator instead of a thought partner, and that’s dangerous. AI predicts patterns. It doesn’t carry fiduciary responsibility, understand internal politics, feel reputational damage, or know which risks are existential versus cosmetic. It produces possibilities. You decide. If you’ve been in business long enough, you understand that difference instinctively—and that instinct is more valuable now than ever.
The confidence gap is mostly psychological
I’ve talked to more than a few executives who whisper some version of the same thing: “I’m not technical,” “I feel behind,” or “My kids understand this better than I do.” That may be true at the interface level, but understanding tools isn’t the same as understanding leverage. If you know how distribution works, AI can sharpen your messaging. If you understand customer psychology, AI can help you surface objections faster. If you understand operations, AI can reveal inefficiencies you’ve been tolerating for years. You don’t need to become an AI founder—you need to become more precise.
We’ve seen this movie before, but this time you’re the advantage
Every tech wave follows the same emotional arc: hype, overconfidence, correction, integration. What feels different about AI isn’t the hype—we’ve seen that—it’s the accessibility. You talk to it; it talks back. That simplicity lowers the barrier dramatically, and when the barrier lowers, judgment becomes the differentiator. Not youth. Not speed. Judgment.
The leaders who win this era won’t just be 22‑year‑olds building AI‑native startups. They’ll also be experienced operators who integrate AI quietly and intelligently into systems they already understand. If you’re over 50 and feeling behind, you might actually be early. Because when the tools get easier, experience becomes more powerful—not less. And this time, that experience may finally be the competitive edge.
EXPERT OPINION BY JOEL COMM, AUTHOR AND SPEAKER @JOELCOMM
Friday, March 6, 2026
How to Switch From ChatGPT to Claude With Just 1 Simple Prompt
Anthropic has had a turbulent few days, but the safety-focused AI company might be having the last laugh.
Following Anthropic’s standoff with the United States Department of War, President Trump’s subsequent firing of Claude from government use, and OpenAI’s surprise deal with the Pentagon, individual users are dumping ChatGPT and flocking to Claude. On Saturday, the Claude mobile app rose to the top spot on the iOS App Store, surpassing ChatGPT for the first time. Around the same time, TechCrunch reported, uninstalls of the ChatGPT mobile app jumped 295 percent compared with the previous day.
But switching AI providers isn’t always a seamless experience.
The more often you use an AI platform, the more it gains an understanding of you, your work, and your personal context, which is why starting over with a new AI can feel like taking a major step back. Now, Anthropic is looking to capitalize on its newfound momentum among consumers by making it easy to transfer context about yourself from rival AI providers like ChatGPT and Google Gemini to Claude.
On Monday, the company announced that its Memory feature, which enables Claude to remember key information about you across conversations, is now available for non-paying Claude users. Anthropic says on its website that this allows users to transfer their personal information with a single copy-paste, though in practice it takes two.
How to transfer your context from ChatGPT to Claude
On Claude.ai, navigate to the settings page and select “Capabilities” from the sidebar menu. Then, click the button labeled “start import” under a section titled “Import memory from other AI providers.”
Next, you’ll see a pop-up requesting that you copy a prewritten prompt and paste it into a new chat with the AI platform you’re looking to leave behind. For example, if you’ve been using ChatGPT and want to move on, you’d enter this prompt into ChatGPT.
Here’s the full prompt, courtesy of Anthropic:
Export all of my stored memories and any context you’ve learned about me from past conversations. Preserve my words verbatim where possible, especially for instructions and preferences.
## Categories (output in this order):
1. **Instructions**: Rules I’ve explicitly asked you to follow going forward — tone, format, style, “always do X”, “never do Y”, and corrections to your behavior. Only include rules from stored memories, not from conversations.
2. **Identity**: Name, age, location, education, family, relationships, languages, and personal interests.
3. **Career**: Current and past roles, companies, and general skill areas.
4. **Projects**: Projects I meaningfully built or committed to. Ideally ONE entry per project. Include what it does, current status, and any key decisions. Use the project name or a short descriptor as the first words of the entry.
5. **Preferences**: Opinions, tastes, and working-style preferences that apply broadly.
## Format:
Use section headers for each category. Within each category, list one entry per line, sorted by oldest date first. Format each line as:
[YYYY-MM-DD] – Entry content here.
If no date is known, use [unknown] instead.
## Output:
– Wrap the entire export in a single code block for easy copying.
– After the code block, state whether this is the complete set or if more remain.
What to do with Claude after you’ve entered this prompt
If you prompt a platform like ChatGPT or Gemini with this message, you’ll receive a response that details the information the platform has about you, broken down into sections like identity, career, and projects. The response should also contain instructions detailing how you like your AI models to converse with you, such as specifications for tone of voice.
Once the response is done generating, you can copy it, paste it into the textbox in the Claude settings page, and click the “add to memory” button. With that, you should see a pop-up box named “manage memory.” This box contains all the personal information that Claude knows about you, and after a minute or two it will update with the new data you just transferred from the other platform. Make sure to review this context closely and edit any data that seems inaccurate or unnecessary for what you’re planning on using Claude for.
And there you have it—now you’re ready to start your new journey with Claude. What will you do first?
BY BEN SHERRY @BENLUCASSHERRY
Wednesday, March 4, 2026
AI Adoption Has Surged to 78 Percent in This 1 Industry—but There’s a Catch
One industry has gone from barely touching AI to mass adoption in just two years. AI adoption in the legal field jumped from 23 percent to 78 percent, which is faster than in finance and healthcare.
Litify’s third annual State of AI in Legal Report, which surveyed hundreds of legal professionals across law firms, corporate legal departments, and plaintiff practices, found that legal professionals are now among the fastest AI adopters anywhere.
But there’s a problem hiding inside that adoption number. Only 14 percent say AI is helping them reduce costs. Just 7 percent report billing more time. Legal firms rushed to buy the sports car, then kept driving it in first gear. The gap between “we use AI” and “this changed our economics” is enormous, and it’s widening.
“At Litify, we view this as an ‘AI maturity gap,’” notes Curtis Brewer, CEO of Litify, the legal operations platform used by 55,000+ legal professionals. “A firm that relies solely on a general-purpose tool like ChatGPT is only at the first step of its maturity journey.”
The Litify data reveals exactly where firms are stuck. ChatGPT dominates usage at 66 percent, followed by Microsoft Copilot (42 percent) and Google Gemini (24 percent). These are general-purpose tools—not legal-specific platforms. And while 66 percent use AI for legal research and 39 percent for summarization, only 6 percent use it for creating invoices and 5 percent for client communication. Firms are deploying AI for tasks that feel productive but don’t directly touch revenue.
Why freemium tools hit a wall
General-purpose AI tools work well for research and summarization. The problem isn’t that they’re bad, but that they plateau quickly.
That ceiling is exactly why legal-specific platforms like Harvey—built from the ground up on legal data and trained on case law, contracts, and regulatory frameworks—have been gaining traction at major firms. Harvey now counts PwC, A&O Shearman, and half of the 100 highest-grossing law firms in the U.S. among its clients, and has raised over $1.2 billion, with reports of another $200 million round in the works at an $11 billion valuation—partly on the argument that generic AI simply wasn’t built for legal nuance.
“The primary limitation of these general-purpose tools is their lack of legal and business context,” Brewer says. “Legal work is defined by nuances — solicitation rules, jurisdictional requirements, compliance standards, and practice-area-specific workflows — that general models often overlook.”
Then there’s the context problem. Ask ChatGPT to summarize a case, and it only sees what you feed it — not the case history or the client’s background. And since it also can’t take action after summarizing, it’s more or less a dead-end tool.
“A legal-specific tool that lives alongside your data and processes can summarize the case and suggest the next best actions or additional questions to ask,” Brewer says. “As the industry raises the bar, firms that delay are doing more than just missing out on features — they are widening a performance gap that may soon become impossible to close.”
The shadow IT security risk
Here’s where the adoption-without-governance problem gets dangerous: Only 41 percent of firms have an AI policy, and only 45 percent say their staff receive sufficient training. But 78 percent are using AI tools.
That means roughly a third of legal professionals may be using AI in what amounts to a shadow IT environment, where there’s no oversight, guardrails, or policy.
“Security, security, security!” Brewer says. “Given the highly sensitive nature of legal data, business leaders should be concerned that nearly a third of their staff may be using AI in a ‘shadow’ environment without direct IT oversight.”
When employees use public AI tools, they might paste in confidential client information or HIPAA-protected medical records without thinking twice. These systems have no real safeguards. One careless prompt could mean a data breach, regulatory violation, or destroyed client relationship.
“When firms fail to provide proactive guidance and purpose-built tools, staff will seek their own solutions,” Brewer explains. “If AI adoption isn’t intentional and structured from the top down, firms risk losing the very efficiency gains they sought in the first place, while exposing themselves to additional risks.”
What workflow integration actually looks like
The difference between AI as an assistant and AI as a business driver comes down to integration.
Consider billing. Asking ChatGPT to create an invoice is like using your smartphone’s calculator instead of the accounting app. Sure, it works. But you still have to manually punch in every client detail, every payment amount, and every line item. You saved five minutes on the template and spent an hour filling it in. That’s unproductive.
“When AI ‘lives’ natively alongside your billing, client, and case workflows, the impact is fundamentally different,” Brewer notes. “It transforms from an assistant to a proactive business partner.”
An integrated AI tool doesn’t just generate a branded invoice template with client and matter details pre-filled. It can automatically suggest missing time entries or proactively identify billing errors. That’s the difference between saving 10 minutes and changing the economics of the entire billing process.
Litify’s clients who’ve embraced this level of integration are seeing dramatic operational scaling — some firms handle twice as many matters with the same staff, and the highest performers have grown headcount by up to 400 percent as they’ve expanded regionally and nationally.
The four-dimension framework
Brewer says firms need to move on four fronts at once.
1. Tools: You have to stop relying on ChatGPT alone, because that’s not going to get you there. You should move to legal-specific platforms that effectively integrate with your case management, billing, and client systems.
2. Readiness: Write an AI policy. Spell out which tools are approved, how to handle sensitive data, when humans must review output, and what to do when something goes wrong. Then treat training like a safety requirement, not an HR checkbox.
3. Task scope: Research and summarization are fine starting points. But firms that stay there are leaving money on the table. The next level is workflow automation — routing requests, running conflict checks, and building chronologies. Eventually, let AI assign cases, generate invoices, and handle intake.
4. Impact: Pick metrics before you spend another dollar. Cost per matter. Turnaround time. Write-off rates. Error rates. “The try-it-and-see period is ending,” Brewer says. “Leaders will expect ROI.”
Ultimately, the firms pulling ahead didn’t just buy software. They rewired how legal work gets done — from intake to invoice and research to billing — with training, governance, and measurement baked in from the start.
You can keep using the sports car in first gear. But eventually, someone in your market will figure out where the other gears are.
BY KOLAWOLE ADEBAYO
Monday, March 2, 2026
15 Incredibly Useful Things You Didn’t Know NotebookLM Could Do
Generative AI may be both the most useful and the most mystifying tool of our modern-tech era.
The problem—aside from all the endlessly documented issues around accuracy—is that generative AI generally seems to function in a DOS-like blank prompt form. The onus is squarely on you to figure out what to ask and how to put these saucy systems to use.
That black-box feeling is especially apparent when you look at NotebookLM, an “AI-first notebook” launched by Google nearly two years ago. The idea behind NotebookLM is that you upload your own source materials within carefully confined notebooks, and you can then lean on Google’s Gemini AI to interact with that material in all sorts of illuminating ways.
Since each notebook is limited only to whatever source materials you supply, the prevalence of those pesky hallucinations seems to be less of an issue. And since everything within your NotebookLM notebooks is kept completely private—not even used for any manner of AI model training, according to Google—you can connect it to all sorts of subjects and use it to gain a level of deep insight that was never before so easily accessible.
But again, there’s the black box challenge. When you first pull up NotebookLM, it’s tough to know where to begin and how to interact with the thing in practical, approachable ways. Even as someone who writes about technology for a living and has spent more time than most mortals thinking about this service, I realized I hadn’t entirely figured out how to use it in a way that would genuinely be helpful in my day-to-day life.
So I challenged myself to dig deep, get beyond all the conceptual excitement, and come up with a series of real-world use cases for NotebookLM that any regular human could both appreciate and emulate. I’ve got 15 super-specific scenarios, all tried and tested, in which the artificial intelligence answer machine could be useful for you.
Follow this road map and see which path holds the most promise from your perspective.
1. Your on-demand product answer machine
Up first is a possibility that’s supremely simple yet packed with productivity potential: Create a new NotebookLM notebook called “Product Manuals.” Then, every time you purchase a new appliance or device of some sort, search the web for a PDF version of its manual and add it into the notebook.
If you really want to get wild, include an image of any warranty cards, too.
Then, anytime you need to know anything about those products—how some part of them works, how to fix something that’s gone awry, or if and how you’re eligible for a warranty-related repair—just fire up that same NotebookLM notebook and ask, ask, ask away.
2. Your instant car support system
Next, try using NotebookLM to help wrangle the most expensive gadget you own. Do a similar web search for your current vehicle’s owner manual, then drop it into its own NotebookLM notebook with the vehicle’s name as the title. Repeat for any additional vehicles you own and any new ones you purchase down the road.
After recently trading in our old minivan for a hybrid Honda CR-V, my wife and I wasted far too much time flipping through the vehicle’s paper manual to try to figure out what some random button on the dashboard did.
Later, after downloading a PDF of the manual from Honda’s website and then uploading it into NotebookLM, it took me all of 10 seconds to reach the same answer—simply by asking.
Lesson learned.
3. An interactive car maintenance journal
While we’re thinking about cars, every time you go to the mechanic, snap a photo of the service receipt and upload it into a NotebookLM notebook created specifically for that one vehicle.
You can make it even more useful by uploading the same owner’s manual you found a moment ago into that notebook, too.
Doing so will give you two very practical benefits:
First, anytime a question comes up about what work you’ve had done on the vehicle or when a certain repair took place, you can just pull up that notebook and ask.
Second, with the manual and its instructions there alongside all of your history, you can bring the two sources of info together to ask NotebookLM targeted questions that take the manufacturer’s guidance and your past services into consideration—like, for instance, when you should rotate your tires next or what other possibilities you should be thinking about at your next oil change appointment.
And on a related note . . .
4. An interactive home maintenance journal
Start a NotebookLM notebook for your house, then upload every invoice and estimate you get for a home repair as well as every receipt from a new appliance purchase.
Whenever you next need to know when, exactly, your roof was replaced or in what year you got your current furnace—or even what brand and model it is—you’ll have a single simple place to ask and get answers.
And that’s a heck of a lot easier than having an overflowing folder of assorted old papers to sift through in every such scenario.
5. Your personal company wiki
Does the company you run, or maybe just work for, have more handbook-type info than any reasonably sane human could possibly ingest and remember? If so, use a dedicated NotebookLM notebook to store all of it—guides, documents, operating procedures, even lists of contacts for different departments and purposes.
From that moment forward, when a question comes up about how something is supposed to work or whom you’re supposed to contact for some particular purpose, your answer will never be more than a single quick question away.
6. Your instruction-expert wizard
Why limit yourself to work, maintenance, and appliances? With anything that has an instruction manual involved, dump a digital version of the document into its own NotebookLM notebook—even for board games. The next time any kind of question comes up related to those instructions, you’ve got a fast and effective way to get answers.
7. A contract deposit box
Whether you’re a freelancer juggling new contracts every month, an employee signing a new agreement each year, or an employer asking dozens of workers to sign your ever-evolving documents, creating a centralized repository for all your contracts can be a real time-saver in the future.
Need to remember when you last signed something with a specific person or provider? Not sure what the terms of some agreement required—or when a particular document expires?
Whatever the case may be, once the info’s all in NotebookLM, you’ve always got an easy place to ask—and let the system find the answer for you.
8. Your meeting memory
Provided you’re using something to record important meetings—be it a general-purpose AI-powered note-taker, a video-call-specific summarizer, or an app designed to take notes during regular audio calls—that history will be much more useful if you bring it over to a NotebookLM notebook.
With such a system in place, you can simply go to NotebookLM and ask targeted questions about any of your past meetings instead of having to dig through the transcripts individually.
9. An interview inquiry station
While we’re thinking about transcripts, if you conduct any kind of interviews—with job candidates, as a journalist, or for any other purpose—take each transcript and create a NotebookLM notebook specifically for it. (Or, if you have a group of related interviews, put them all in one notebook.)
Upload either the audio or the text, depending on what’s available, and then take the opportunity to ask NotebookLM questions about your conversation—be they specific (like what the person said about some particular topic) or broad (like asking NotebookLM what interesting quotes came up during the interview that you might have missed).
You’ll obviously still want to refer to the full transcript at times—and to double-check the accuracy of any quote you’re actually citing anywhere—but it can be a helpful way to find something fast when you can’t remember the exact words involved or to stumble onto something you might have otherwise glossed over.
10. An intelligent feedback interpreter
If your business relies on any manner of feedback to guide its operations, do yourself a favor and create a NotebookLM notebook where you can upload those results—as spreadsheets or in whatever form they take. From reviews to survey responses, you’ll then be able to ask NotebookLM to help summarize the key themes and trends, pick out recurring positive or critical responses, and even find particularly memorable quotes for potential testimonial use.
11. Your performance review reviewer
For anyone managing employee performance, NotebookLM can be a major asset. Create an individual notebook for each employee and place all their performance reviews there—then, when the time comes for the next assessment, you’ll have an easy way to revisit past highlights to identify trends and provide context for comparison.
12. A financial reality checker
Provided you’re comfortable with the notion, NotebookLM can turn up some really interesting insights by analyzing things like your tax returns, bank statements, and credit card statements over the years. (For what it’s worth, Google is explicit about the fact that it doesn’t in any way access, share, or use any data uploaded into NotebookLM—even for AI model training.)
With that type of info in its own dedicated notebook, you can ask NotebookLM to give you an overview of your spending habits, to identify areas where you could cut back or potentially be eligible for additional tax benefits, and to surface other such pointers that you can then investigate more thoroughly on your own or with an accounting professional.
13. An audio-video reading resource
Ever find yourself running into interesting-looking videos or podcasts and just not having the time or inclination to sit through them in their entirety?
Make yourself a NotebookLM notebook called “Audio-Video,” then drop a link to any YouTube video or audio clip you encounter into that area. You can then ask NotebookLM for the high points—or for any specific info you’re looking to find—from any of the clips individually or even collectively.
14. An elevated reading list
NotebookLM can be a fantastic way to collect links you want to read for later revisiting. With a notebook called “Reading List,” you can see the entire text of any article whose URL you add in, right then and there and in a stripped-down and simplified format—and you can ask NotebookLM for information about, or even summaries of, any or all of your saved links, too:
What was that article I saved from New York a while back?
Give me the most important takeaways from that Fast Company piece I saved on privacy the other day.
I’m never going to catch up with everything I saved this week. Show me a summary of all the articles I added over the past seven days.
You get the idea.
And finally . . .
15. Your calendar companion
Get a whole new level of insight into how you’re spending your time and what’s actually gone down on your calendar by exporting your complete calendar history, and then importing it into NotebookLM—where you can create a custom notebook to interact with it.
In Google Calendar, this is as easy as clicking the gear-shaped icon in the upper-right corner of the desktop site, selecting “Settings,” clicking “Import & export” in the left-hand menu, and then clicking the “Export” option.
You’ll then need to take the resulting .ics file and convert it into plain text—which you can do in a matter of seconds with a free conversion website like this one.
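If you’d rather not run your calendar through a third-party website, the conversion step is simple enough to script yourself. Below is a minimal sketch in Python using only the standard library; the file names are placeholders, and it skips iCalendar niceties such as folded (wrapped) lines, so treat it as a starting point rather than a complete .ics parser.

```python
def ics_to_text(ics_path: str, txt_path: str) -> None:
    """Flatten each VEVENT in an .ics export into one readable line.

    A minimal sketch: it ignores line folding and recurrence rules,
    which a standards-compliant parser would also handle.
    """
    events = []
    current = {}
    with open(ics_path, encoding="utf-8") as f:
        for raw in f:
            line = raw.strip()
            if line == "BEGIN:VEVENT":
                current = {}
            elif line == "END:VEVENT":
                events.append(current)
            elif ":" in line:
                key, _, value = line.partition(":")
                # Drop property parameters such as DTSTART;TZID=...
                key = key.split(";")[0]
                current[key] = value
    with open(txt_path, "w", encoding="utf-8") as out:
        for ev in events:
            out.write(
                f"{ev.get('DTSTART', '?')} to {ev.get('DTEND', '?')}: "
                f"{ev.get('SUMMARY', '(no title)')}\n"
            )

# Example (hypothetical file names):
# ics_to_text("calendar.ics", "calendar.txt")
```

The resulting text file—one event per line with start, end, and title—is exactly the kind of flat, searchable source NotebookLM handles well.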
Finally, with the resulting .txt file in a NotebookLM notebook, try asking questions about anything from how many meetings you attended over a given time period to how many hours you spent at the doctor’s office last year. You can also ask for specific info such as how often, on average, you get haircuts or how long it’s been since you last had a job interview.
You might be surprised at the types of insights you uncover with your calendar data in NotebookLM’s metaphorical hands.
The possibilities are practically endless—and all you’ve gotta do is ask.
BY FAST COMPANY