IMPACT
...building a unique and dynamic generation.
Friday, March 20, 2026
The world’s most valuable company just sent another signal that AI agents are going to be everywhere
Tech giant Nvidia, the world’s most valuable company and the poster child of the AI boom, is banking its future on the rise of AI agents.
The company on Monday announced a slew of software and hardware updates to encourage the development of AI agents, or AI assistants that can perform tasks for users. Among the most significant announcements is a set of tools for AI helpers based on OpenClaw, the buzzy agent platform that's been the talk of Silicon Valley in recent weeks. Nvidia also announced new computing racks designed to power agents, shifting the company's primary strategic focus away from graphics processing units.
Clad in his signature black leather jacket, Nvidia CEO Jensen Huang made the flurry of announcements in San Jose at the chipmaker's annual GTC conference, which attracts tens of thousands of attendees and has been dubbed the "Super Bowl" of AI.
Nvidia's announcements are important because so many major companies rely on its systems to train and power their AI services. This means the chip giant's new products often signal where technology is headed for companies across the AI industry.
Nvidia announced software tools to help companies make AI agents, including models and a blueprint for creating custom specialized assistants. It’s also launching a set of resources for creating agents on OpenClaw that adds privacy and security controls, which is crucial considering the popular agent has raised concerns among cybersecurity experts.
Nvidia said its resources help OpenClaw agents access systems and files without compromising security or privacy. Huang said Nvidia worked directly with OpenClaw creator Peter Steinberger, who was recently hired by OpenAI.
Huang called OpenClaw the "operating system for personal AI" and likened its importance to that of the Mac and Windows operating systems.
“OpenClaw is the number one. It is the most popular open-source project in the history of humanity, and it did so in just a few weeks,” Huang said.
Nvidia also unveiled updates to its new computing platform, Vera Rubin, which it said comprised seven chips that are now in full production. That includes a new central computing rack made up of central processing units (CPUs) rather than the graphics processing units (GPUs) Nvidia has been known for. CPUs are ideal for running the types of computing processes needed to power AI agents.
The company is also integrating a non-Nvidia processor into its systems: new high-speed "language processing units" (LPUs) from American AI company Groq. Nvidia struck a $20 billion deal with Groq in November.
Unlike AI chatbots that respond to questions and prompts, AI agents can autonomously complete tasks like building websites, creating marketing pitches and sending emails. AI agents are currently Nvidia’s biggest focus area, largely driven by the popularity of OpenClaw and Anthropic’s Claude Code and Cowork agents.
“Every company in the world today needs to have an OpenClaw strategy, an agentic system strategy. This is the new computer,” Huang said. “This is as big of a deal as HTML, as big of a deal as Linux.”
Nvidia is attempting to future-proof its technology in other ways as well. It's launching a space module for Vera Rubin, aiming to bring its latest tech to data centers in space. Space-based computing has become an area of increasing interest among tech giants as they scramble for real estate to construct data centers. OpenAI CEO Sam Altman and xAI and Tesla CEO Elon Musk have both talked about using space to help power data centers and energy-hungry AI systems.
“Nvidia is now focused beyond just computing with a major focus on the future of networking in this new world of AI,” said Wedbush analyst Dan Ives ahead of Nvidia’s Monday conference.
In his speech on Monday, Huang tried to convey that the hype around AI and Nvidia can last, selling a vision of an AI-transformed future where demand for their chips grows indefinitely.
Huang said computing demand "just keeps on going up," adding that he expects "at least" $1 trillion in Nvidia revenue through 2027.
“There’s a reason for that,” Huang said. “This fundamental inflection — AI is able to do productive work, and therefore the inflection point of inference has arrived.”
By Hadas Gold
Wednesday, March 18, 2026
How One of the World’s Top AI Voices Uses Claude Code to Run Her Day
Allie K. Miller, one of the most followed voices in the AI industry, says that “by the time you wake up, your AI should have already been working for you for hours.”
Formerly the global head of machine learning for startups and venture capital at Amazon Web Services, Miller is among the busiest AI consultants and influencers in the industry, with more than 1.6 million followers on LinkedIn alone. Through her company Open Machine, she advises enterprises and business leaders—including those at OpenAI, Google, Anthropic, and Warner Bros. Discovery—on how to adopt AI. In 2025, Miller was named one of the 100 most influential people in AI by Time.
In an interview with Inc., Miller says that nowadays, she largely works out of Claude Code, the agentic coding system developed by Anthropic. She keeps multiple instances of Claude Code running simultaneously in separate terminals. Because these Claude Code instances have access to Miller’s filesystem, they can autonomously complete work on her behalf.
Miller teaches Claude Code how to complete workflows by using Skills, a feature that allows Claude Code to undertake and repeat multistep processes. Miller says that she’s developed automations that generate a report summarizing all of the urgent emails she’s received overnight and a daily morning briefing that runs through her entire calendar, recommending times to recharge.
“It’ll tell me, ‘You have four different interviews or six client meetings,’” explains Miller, “‘so I’ve gone ahead and blocked out 30 minutes tomorrow for deep work.’”
Another example: Every time Miller edits a social video of herself using CapCut, the TikTok-owned video editing app, she exports the video into a specific folder. Anytime a new file is added to that folder, an automation is triggered that automatically creates a transcript, a social post, and a screenshot for the video’s thumbnail.
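The folder-triggered automation Miller describes follows a common pattern: watch a directory, and run processing steps whenever a new file appears. The article doesn't specify her implementation, so the sketch below is a minimal, hypothetical version in Python using stdlib polling; the `process_video` steps are placeholders standing in for the transcription, drafting, and screenshot tools her setup would actually call.

```python
import time
from pathlib import Path

def process_video(path: Path) -> dict:
    # Placeholder for the real work: transcribing the video, drafting
    # a social post, and grabbing a thumbnail frame. Here we only
    # return the output filenames that such a pipeline might produce.
    return {
        "transcript": f"{path.stem}.txt",
        "post": f"{path.stem}_post.md",
        "thumbnail": f"{path.stem}.png",
    }

def watch_folder(folder: Path, seen: set, once: bool = False):
    """Poll the export folder and process any .mp4 files not yet seen."""
    results = []
    while True:
        for video in sorted(folder.glob("*.mp4")):
            if video.name not in seen:
                seen.add(video.name)
                results.append(process_video(video))
        if once:  # single pass, useful for testing
            return results
        time.sleep(5)  # poll interval; an OS file-watcher would avoid this
```

A production version would typically use an OS-level file-watching library rather than polling, but the trigger-then-pipeline structure is the same.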
In general, Miller says, the best way to identify AI solutions that work for your specific use case is to simply have the AI model of your choice interview you. Tell it to ask you questions about your work, making note of areas that you feel could be more efficient or smoother. Then, Miller says, prompt it again with “make these ideas more proactive, more responsibly autonomous, and more action-forward.” With just that prompt, she adds, you can get started developing your own AI solutions.
It’s not just workflows that Miller is automating. When developing a new post for her newsletter, Miller says that she runs drafts through eight “synthetic personas” that she’s developed, which represent the newsletter’s different audience demographics. “I’m not trying to appease all eight and write a happy-go-lucky version of the newsletter,” says Miller, “but I want to make sure I didn’t miss something important. I want to make sure that a parent reading [the newsletter] isn’t completely misunderstanding my take on something.”
Miller has a similar strategy when making big career decisions. She built a self-described “AI boardroom,” complete with six synthetic personas, which weigh in on major company issues. Miller swaps around which six personas sit on the board, depending on her needs. “If it’s a media question, maybe I’m running it through Shonda Rhimes,” she says, “or if it’s a business question, maybe I’m asking Jeff Bezos.” These personas give their initial opinions on the decision, and then they all begin debating with one another in a group chat. “I literally had Mickey Mouse arguing with Jensen Huang,” Miller adds.
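Miller's "AI boardroom" amounts to a simple multi-agent loop: each persona gives an initial opinion, then every persona reacts to the shared transcript for one or more debate rounds. Her actual tooling isn't described, so this is a minimal sketch of that structure; `ask_model` is a hypothetical stand-in for a real LLM API call (each persona's identity would go in the system prompt) and is stubbed here so the loop is runnable.

```python
def ask_model(persona_prompt: str, transcript: str) -> str:
    # Hypothetical stub: a real implementation would send the persona
    # prompt plus the transcript to an LLM and return its reply.
    persona = persona_prompt.split(":")[0]
    return f"[{persona}] responds, having read {len(transcript)} chars of debate"

def boardroom(personas: list, question: str, rounds: int = 2) -> list:
    """Run a multi-persona debate over a shared transcript."""
    transcript = [f"Question: {question}"]
    # Round 1: each persona gives its initial, independent opinion.
    for p in personas:
        transcript.append(ask_model(f"{p}: give your initial take",
                                    "\n".join(transcript)))
    # Later rounds: personas debate, each seeing everything said so far.
    for _ in range(rounds - 1):
        for p in personas:
            transcript.append(ask_model(f"{p}: respond to the others",
                                        "\n".join(transcript)))
    return transcript
```

Because every reply is appended to one shared transcript, later speakers react to earlier ones, which is what produces the group-chat-style argument Miller describes.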
The point, Miller says, is to get the most out of the raw intelligence offered by today’s AI models. “Wouldn’t you love to walk into a room of 10 geniuses arguing over something that you’ve been struggling with, and all they want to do is help you get to the best possible outcome?” she says. “For those who have a growth mindset and thrive off of dynamic, changing, adaptable business settings, the multiagent world that we are walking into in 2026 is going to be world-changing.”
By Ben Sherry (@benlucassherry)
Monday, March 16, 2026
Meta just bought the social network for AI bots everyone’s been talking about
Meta, the company behind some of the world’s most popular social media platforms, just scooped up a new site – for bots.
Meta has acquired Moltbook, the social media network where AI agents interact with one another autonomously, the company said in a statement on Tuesday.
Meta is competing with rivals like OpenAI for both talent and users’ attention. And as AI expands into more aspects of Americans’ lives, tech companies are trying to figure out the best way to position themselves to win what’s becoming a sort of technological arms race.
Moltbook became the talk of Silicon Valley last month, racking up millions of registered bots within days of its launch. Some in the industry saw it as a major leap because it demonstrated what can happen when AI agents socialize with one another like humans. Others said the site is full of sham agents, AI slop and security risks and should be viewed skeptically.
Meta's acquisition comes weeks after OpenAI hired the creator of the technology behind Moltbook, an AI agent system called OpenClaw. Moltbook's team will join Meta's Superintelligence Labs. A Meta spokesperson said Moltbook's approach "opens up new ways for AI agents to work for people and businesses."
OpenAI CEO Sam Altman dismissed the excitement over Moltbook last month, suggesting OpenClaw, the open-source autonomous AI agent that powers the site’s bots, was the real breakthrough. Altman wrote that he expects the technology to become “core” to OpenAI’s products.
Meta acquired the buzzy AI agent startup Manus in December, following a string of high-profile hires intended to build out its superintelligence team. The company also invested $14.3 billion in Scale AI last year and hired its CEO.
But Meta, like some of its Big Tech peers, is facing pressure to prove its AI investments will make money, especially as rivals like OpenAI, Anthropic and Google churn out new and improved models for their chatbots. Meta CEO Mark Zuckerberg said on a January earnings call the company will release its new AI models “over the coming months.”
By Hadas Gold
Friday, March 13, 2026
AI is exhausting workers so much, researchers have dubbed the condition ‘AI brain fry’
Part of the pitch for artificial intelligence in the workplace goes like this: It's like having a team of people to delegate your grunt work to, freeing you up to think strategically and maybe, just maybe, take a long lunch or head home early. Or maybe even to be more productive and make more money. It's a nice idea!
But as everyone who’s either had a boss or been a boss knows, managing is a job in itself, one that comes with its own distinct brand of stress and annoyance. And that doesn’t change if the “people” in question aren’t people at all.
For participants in a recent study by Boston Consulting Group, the experience of overseeing multiple AI “agents,” autonomous software that’s designed to execute tasks, rather than just churn out information like a chatbot, caused an acute sensation of “buzzing” — a fog that left workers exhausted and struggling to concentrate. The study’s authors call it “AI brain fry,” defined as mental fatigue “from excessive use or oversight of AI tools beyond one’s cognitive capacity.”
“Contrary to the promise of having more time to focus on meaningful work, juggling and multitasking can become the definitive features of working with AI,” they wrote in the study, published by Harvard Business Review last week. “This AI-associated mental strain carries significant costs in the form of increased employee errors, decision fatigue, and intention to quit.”
Workers quoted in the study reminded me a lot of my fellow elder Millennials circa 1997, rushing home to tend to their Tamagotchis.
“It was like I had a dozen browser tabs open in my head, all fighting for attention,” one senior engineering manager told researchers. “I caught myself rereading the same stuff, second-guessing way more than usual, and getting weirdly impatient. My thinking wasn’t broken, just noisy—like mental static.”
This is just one new side effect from a push by company executives to make workers use AI more. Last fall, a Harvard Business Review report chronicled the scourge of “workslop” — the nonsensical AI-generated memos, pitch decks and presentations that end up creating more work for colleagues who have to fix what the bot got wrong.
Workslop reflects a kind of “cognitive surrender” in which workers feel unmotivated, giving AI work to do and not really paying attention to the output, said Gabriella Rosen Kellerman, a psychiatrist who co-authored both reports, in an interview. “Brain fry is almost the opposite… It’s like trying to go tête-à-tête — intelligence to intelligence — with the AI.”
Francesco Bonacci, CEO of Cua AI, which builds AI agents, described his AI fatigue as “vibe coding paralysis” (a reference to the Silicon Valley trend of building less-polished projects with AI prompts rather than traditional coding). “I end each day exhausted — not from the work itself, but from the managing of the work,” he wrote last month in an essay on X. “Six worktrees open, four half-written features, two ‘quick fixes’ that spawned rabbit holes, and a growing sense that I’m losing the plot entirely.”
To some extent, brain fry and workslop could both be a case of growing pains. Imagine plucking a middle-aged office worker from 1986, dropping them into a 2026 workplace and asking them to send 10 emails, respond to Slacks and Zoom into a call with the social media team who are all working from home. You’d expect some cognitive overload, not to mention some confused looks when you tell them Donald Trump is president and that it took more than 30 years to make a “Top Gun” sequel.
Of course, people learn how to be managers all the time.
“I do think this is potentially temporary,” said Matthew Kropp, a co-author of the brain fry study and BCG managing director. “These are tools we haven’t had before.”
Kropp compared the experience of someone managing multiple AI tools to that of someone who just learned to drive being given a Ferrari. You can go really fast, but it’s easy to lose control.
Of course, even tech pros seem to be struggling to control their AI assistants at times. Last month, Meta’s director of AI safety and alignment tweeted about her own experience watching bots nearly delete her inbox without permission. “I had to RUN to my Mac mini like I was defusing a bomb,” she wrote, chalking the incident up to a “rookie mistake.”
Both Kropp and Kellerman emphasized that the results of the study weren't all negative. Surprisingly, the people experiencing brain fry tended to experience less burnout, defined as a state of chronic workplace stress that builds over time and makes workers perform poorly. Brain fry, as participants described it, is an acute experience.
“When they take a break, it goes away,” Kellerman said.
Analysis by Allison Morrow