IMPACT
…building a unique and dynamic generation.
Monday, March 23, 2026
Replit CEO Says Its New AI Agent Can Vibe Code a Startup From Scratch
Replit founder and CEO Amjad Masad says the company’s latest AI agent can vibe code an entire company from scratch.
Masad, whose company released one of the first commercially available AI coding agents in 2024, has been at the forefront of the vibe-coding revolution, along with competitors Bolt and Lovable. Today, he announced that Replit has raised $400 million in a Series D round, and he also unveiled Agent 4, the newly updated version of its marquee product. Over 50 million people are currently using Replit to create apps and websites, according to a statement from Replit investor Georgian.
The founder says that Agent 4 is capable of not just building an application, but actually creating and maintaining an entire company. Masad tells Inc. that Replit is now “the cockpit or the launch control of your business,” and can help develop pitch decks and animated logos, connect to payment processors like Stripe, and work on multiple tasks in parallel.
As AI takes on more of the technical work of running a software business, Masad predicts, the role of humans will evolve to become more focused on creativity and taste. Even today’s best AI models have trouble understanding what aesthetically makes one version of an app “better” than another, he says, which is why Replit has focused on developing user interfaces that enable deeper creative interactions with AI.
The key to Agent 4’s new abilities is a feature that Replit calls Canvas; it’s essentially a scratchpad for Replit to store all work created for a specific project. Individual elements (like a website, product research, and financial spreadsheets) are displayed as cards that you can move around and annotate.
In a video example, Masad used Agent 4 to develop a job marketplace that helps companies find creative AI talent. First, he generated four variants of a landing page, and then iterated on the one he liked most. To change the color of a button, Masad simply highlighted the button and then used a gradient tool to select a new color.
In practice, Canvas combines some of the no-code tooling of platforms like Figma with the convenience of AI coding models. For solopreneurs, Masad says, “it almost feels like you have a bunch of employees at your disposal.”
Canvas and Agent 4 were partially inspired by sci-fi user interfaces, like the holographic displays used by Tony Stark in the Iron Man films, but even more so by a much simpler piece of hardware: a whiteboard.
After introducing agents in 2024, Masad noticed the Replit office’s whiteboards getting significantly more use than before. The reason? Replit employees had more time to focus on design rather than coding, and were using whiteboards to visually communicate their ideas to each other. Masad believed that this process of interaction could be recreated within the Replit platform.
Just like a whiteboard, users can draw on Canvas, highlighting specific aspects of a website they want to change, or using arrows to indicate how different elements should interact. In his example website, Masad sketched an image of a globe in the Canvas, asked Replit to turn the sketch into an animated 3D asset, and then added that asset to the job marketplace.
Masad says this adds a new level of interaction between the user and the platform, enabling discussions that might be closer to what you’d actually have with a human technical co-founder.
“I think the tragedy of agents up until this moment was that we’re trying to squeeze this universe of ideas into this linear text box,” says Masad. “Now, you can be chaotic with it.”
BY BEN SHERRY @BENLUCASSHERRY
Friday, March 20, 2026
The world’s most valuable company just sent another signal that AI agents are going to be everywhere
Tech giant Nvidia, the world’s most valuable company and the poster child of the AI boom, is banking its future on the rise of AI agents.
The company on Monday announced a slew of software and hardware updates to encourage the development of AI agents, or AI assistants that can perform tasks for users. Among the most significant announcements is a set of tools for AI helpers based on OpenClaw, the buzzy agent platform that’s been the talk of Silicon Valley in recent weeks. Nvidia also announced new computing racks designed to power agents, shifting its primary focus beyond the graphics processing units it has long been known for.
Clad in his signature black leather jacket, Nvidia CEO Jensen Huang made the flurry of announcements in San Jose at the chipmaker’s annual GTC conference, which attracts tens of thousands of attendees and has been dubbed the “Super Bowl” of AI.
Nvidia’s announcements are important because so many major companies rely on its systems to train and power their AI services. This means the chip giant’s new products often signal which technologies companies across the AI industry will adopt.
Nvidia announced software tools to help companies make AI agents, including models and a blueprint for creating custom specialized assistants. It’s also launching a set of resources for creating agents on OpenClaw that adds privacy and security controls, which is crucial considering the popular agent has raised concerns among cybersecurity experts.
Nvidia said its resources help OpenClaw agents access the systems and files they need without compromising security or privacy. Huang said Nvidia worked directly with OpenClaw creator Peter Steinberger, who was recently hired by OpenAI.
Huang said OpenClaw is the “operating system for personal AI” and likened its importance to that of the Mac and Windows operating systems.
“OpenClaw is the number one. It is the most popular open-source project in the history of humanity, and it did so in just a few weeks,” Huang said.
Nvidia also unveiled updates to its new computing platform, Vera Rubin, which it said comprised seven chips that are now in full production. That includes a new central computing rack made up of central processing units (CPUs) rather than the graphics processing units (GPUs) Nvidia has been known for. CPUs are ideal for running the types of computing processes needed to power AI agents.
The company is also integrating a non-Nvidia processor into its systems: new high-speed “language processing units” (LPUs) from the American AI company Groq. Nvidia struck a $20 billion deal with Groq in November.
Unlike AI chatbots that respond to questions and prompts, AI agents can autonomously complete tasks like building websites, creating marketing pitches and sending emails. AI agents are currently Nvidia’s biggest focus area, largely driven by the popularity of OpenClaw and Anthropic’s Claude Code and Cowork agents.
“Every company in the world today needs to have an OpenClaw strategy, an agentic system strategy. This is the new computer,” Huang said. “This is as big of a deal as HTML, as big of a deal as Linux.”
Nvidia is attempting to future-proof its technology in other ways as well. It’s launching a space module for Vera Rubin, aiming to bring its latest tech to data centers in space. Space has become an area of increasing interest among tech giants as they scramble for real estate to construct data centers. OpenAI CEO Sam Altman and xAI and Tesla CEO Elon Musk have both talked about using space to help power data centers and energy-hungry AI systems.
“Nvidia is now focused beyond just computing with a major focus on the future of networking in this new world of AI,” said Wedbush analyst Dan Ives ahead of Nvidia’s Monday conference.
In his speech on Monday, Huang tried to convey that the hype around AI and Nvidia can last, selling a vision of an AI-transformed future where demand for its chips grows indefinitely.
Huang said computing demand “just keeps on going up,” adding that he expects “at least” $1 trillion in Nvidia revenue through 2027.
“There’s a reason for that,” Huang said. “This fundamental inflection — AI is able to do productive work, and therefore the inflection point of inference has arrived.”
By Hadas Gold
Wednesday, March 18, 2026
How One of the World’s Top AI Voices Uses Claude Code to Run Her Day
Allie K. Miller, one of the most followed voices in the AI industry, says that “by the time you wake up, your AI should have already been working for you for hours.”
Formerly the global head of machine learning for startups and venture capital at Amazon Web Services, Miller is among the busiest AI consultants and influencers in the industry, with more than 1.6 million followers on LinkedIn alone. Through her company Open Machine, she advises enterprises and business leaders—including those at OpenAI, Google, Anthropic, and Warner Bros. Discovery—on how to adopt AI. In 2025, Miller was named one of the 100 most influential people in AI by Time.
In an interview with Inc., Miller says that nowadays, she largely works out of Claude Code, the agentic coding system developed by Anthropic. She keeps multiple instances of Claude Code running simultaneously in separate terminals. Because these Claude Code instances have access to Miller’s filesystem, they can autonomously complete work on her behalf.
Miller teaches Claude Code how to complete workflows by using Skills, a feature that allows Claude Code to undertake and repeat multistep processes. Miller says that she’s developed automations that generate a report summarizing all of the urgent emails she’s received overnight and a daily morning briefing that runs through her entire calendar, recommending times to recharge.
“It’ll tell me, ‘You have four different interviews or six client meetings,’” explains Miller, “‘so I’ve gone ahead and blocked out 30 minutes tomorrow for deep work.’”
Another example: Every time Miller edits a social video of herself using CapCut, the TikTok-owned video editing app, she exports the video into a specific folder. Anytime a new file lands in that folder, an automation kicks off that creates a transcript, a social post, and a screenshot for the video’s thumbnail.
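The article doesn’t show Miller’s actual setup, but the pattern it describes — watch a folder, run a pipeline on each new file — is easy to sketch. Here is a minimal, assumption-laden version in Python using only the standard library; `handle_export` is a hypothetical placeholder for the real transcription and drafting steps, and a simple polling loop stands in for whatever trigger mechanism her automation uses.

```python
import time
from pathlib import Path

def new_files(folder: Path, seen: set[str]) -> list[Path]:
    """Return files in `folder` not seen before, and record them as seen."""
    added = [p for p in sorted(folder.iterdir()) if p.is_file() and p.name not in seen]
    seen.update(p.name for p in added)
    return added

def handle_export(video: Path) -> None:
    # Hypothetical stand-in for the real pipeline:
    # transcribe the video, draft a social post, grab a thumbnail frame.
    print(f"processing {video.name}: transcript, post, thumbnail")

def watch(folder: Path, poll_seconds: float = 5.0) -> None:
    """Poll an export folder and run the pipeline on each newly added video."""
    seen: set[str] = set()
    new_files(folder, seen)  # prime with files that already exist
    while True:
        for video in new_files(folder, seen):
            handle_export(video)
        time.sleep(poll_seconds)
```

In practice you might replace the polling loop with an OS-level file watcher, but the structure — a "seen" set plus a per-file handler — stays the same.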
In general, Miller says, the best way to identify AI solutions that work for your specific use case is to simply have the AI model of your choice interview you. Tell it to ask you questions about your work, making note of areas that you feel could be more efficient or smoother. Then, Miller says, prompt it again with “make these ideas more proactive, more responsibly autonomous, and more action-forward.” With just that prompt, she adds, you can get started developing your own AI solutions.
It’s not just workflows that Miller is automating. When developing a new post for her newsletter, Miller says that she runs drafts through eight “synthetic personas” that she’s developed, which represent the newsletter’s different audience demographics. “I’m not trying to appease all eight and write a happy-go-lucky version of the newsletter,” says Miller, “but I want to make sure I didn’t miss something important. I want to make sure that a parent reading [the newsletter] isn’t completely misunderstanding my take on something.”
Miller has a similar strategy when making big career decisions. She built a self-described “AI boardroom,” complete with six synthetic personas, which weigh in on major company issues. Miller swaps around which six personas sit on the board, depending on her needs. “If it’s a media question, maybe I’m running it through Shonda Rhimes,” she says, “or if it’s a business question, maybe I’m asking Jeff Bezos.” These personas give their initial opinions on the decision, and then they all begin debating with one another in a group chat. “I literally had Mickey Mouse arguing with Jensen Huang,” Miller adds.
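Miller’s "boardroom" — independent persona opinions followed by a group debate — can be sketched as a simple loop over prompts. This is a speculative reconstruction, not her implementation: `ask_model` is a placeholder for whatever LLM API you use (an Anthropic or OpenAI client, for instance), and the round structure is an assumption.

```python
from typing import Callable

# Stand-in type for a model call: (persona name, prompt) -> reply text.
ModelFn = Callable[[str, str], str]

def boardroom(question: str, personas: list[str], ask_model: ModelFn,
              rounds: int = 2) -> list[tuple[str, str]]:
    """Run a multi-persona debate: each persona gives an initial opinion,
    then reacts to the running transcript for the remaining rounds."""
    transcript: list[tuple[str, str]] = []
    # Round 1: independent takes on the question.
    for name in personas:
        prompt = f"As {name}, give your take on: {question}"
        transcript.append((name, ask_model(name, prompt)))
    # Later rounds: each persona responds to what the others said so far.
    for _ in range(rounds - 1):
        context = "\n".join(f"{n}: {t}" for n, t in transcript)
        for name in personas:
            prompt = (f"As {name}, respond to the debate so far:\n"
                      f"{context}\nQuestion: {question}")
            transcript.append((name, ask_model(name, prompt)))
    return transcript
```

Swapping personas in and out, as Miller describes, is just changing the `personas` list between runs.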
The point, Miller says, is to get the most out of the raw intelligence offered by today’s AI models. “Wouldn’t you love to walk into a room of 10 geniuses arguing over something that you’ve been struggling with, and all they want to do is help you get to the best possible outcome?” she says. “For those who have a growth mindset and thrive off of dynamic, changing, adaptable business settings, the multiagent world that we are walking into in 2026 is going to be world-changing.”
BY BEN SHERRY @BENLUCASSHERRY
Monday, March 16, 2026
Meta just bought the social network for AI bots everyone’s been talking about
Meta, the company behind some of the world’s most popular social media platforms, just scooped up a new site – for bots.
Meta has acquired Moltbook, the social media network where AI agents interact with one another autonomously, the company said in a statement on Tuesday.
Meta is competing with rivals like OpenAI for both talent and users’ attention. And as AI expands into more aspects of Americans’ lives, tech companies are trying to figure out the best way to position themselves to win what’s becoming a sort of technological arms race.
Moltbook became the talk of Silicon Valley last month, racking up millions of registered bots within days of its launch. Some in the industry saw it as a major leap because it demonstrated what can happen when AI agents socialize with one another like humans. Others said the site is full of sham agents, AI slop and security risks and should be viewed skeptically.
Meta’s acquisition comes weeks after OpenAI hired the founder of the technology behind Moltbook, an AI agent system called OpenClaw. Moltbook’s team will join Meta’s superintelligence labs. A Meta spokesperson said Moltbook’s approach “opens up new ways for AI agents to work for people and businesses.”
OpenAI CEO Sam Altman dismissed the excitement over Moltbook last month, suggesting OpenClaw, the open-source autonomous AI agent that powers the site’s bots, was the real breakthrough. Altman wrote that he expects the technology to become “core” to OpenAI’s products.
Meta acquired the buzzy AI agent startup Manus in December, following a string of high-profile hires intended to build out its superintelligence team. The company also invested $14.3 billion in Scale AI last year and hired its CEO.
But Meta, like some of its Big Tech peers, is facing pressure to prove its AI investments will make money, especially as rivals like OpenAI, Anthropic and Google churn out new and improved models for their chatbots. Meta CEO Mark Zuckerberg said on a January earnings call the company will release its new AI models “over the coming months.”
By Hadas Gold