Wednesday, March 25, 2026

With the MacBook Neo, Apple Made the Perfect AI Computer

A lot of the conversation about the MacBook Neo has been about whether the compromises Apple made in order to sell a Mac for under $600 left you with a computer that isn’t actually able to do anything useful. Of course, it doesn’t take long to realize that the Neo is, in fact, more than capable of handling most of the computer things people who are inclined to buy this particular Mac might need it to do.

What that conversation seems to have missed is that the Neo is perfectly equipped to do the only thing tech companies seem to think anyone cares about: AI. You can argue whether that’s actually true, but there’s no question that the Neo is one of the most interesting computers in the age of AI computing.

To be clear, the MacBook Neo does come with compromises. I’m not going to go through all of them now, partly because I wrote about them when I reviewed the Neo, but also because none of those compromises matters to what makes it a great computer for AI.

It’s not that other Macs are less capable. There is, however, something magical about the idea that a $600 entry-level Mac is as capable as a $4,000 MacBook Pro or a $6,000 Mac Studio when it comes to the most intensive computing any of us do today. That, of course, is because most AI computing happens in the cloud, not on your computer. The limiting factor isn’t memory, storage, or how fast your processor is. No, the limiting factor is how well you’re able to get your AI tool of choice to understand what you want. Oh, and I guess the speed of your internet connection.

That means a MacBook Neo, with an A18 Pro, 8GB of memory, and a 256GB or 512GB SSD, will be just fine to run the Mac ChatGPT app or run Gemini in Safari. And that changes what your laptop actually needs to be. I don’t know that Apple had that specific thought when it made the MacBook Neo.
Maybe Apple just wanted to make a low-cost, entry-level MacBook that would appeal to people who wouldn’t otherwise buy a Mac. Either way, it ended up making what might be the most accessible AI-first computer yet. With the MacBook Neo, a high school student, freelancer, or small business owner can now own hardware that gives them full access to the best AI tools in the world.

Interestingly, this isn’t exactly how Apple has framed the marketing. In fact, Apple isn’t shy about marketing the MacBook Pro as the laptop for AI. The new M5 Pro and M5 Max chips, Apple says, deliver up to 4x faster LLM prompt processing than the previous generation. The MacBook Pro, in Apple’s words, is built for “AI researchers and developers to train custom models locally.”

I’m not arguing that isn’t a real use case. But I think we can all agree it’s a very narrow one that most people don’t understand or care about. Training models locally or running 30-billion-parameter LLMs on-device matters enormously to a specific kind of user, and is completely irrelevant to almost everyone else.

The average person using AI doesn’t need to run a model. The average user just wants to talk to one. When you ask Claude to help you rewrite an email, ask ChatGPT to explain something complicated, or use Gemini to summarize a document, none of that requires local inference. The model lives somewhere else. The compute happens in the cloud. Your laptop is basically just a keyboard and a screen for a computer that does the work for you.

The MacBook Pro is a remarkable machine for people who need what it does. But positioning it as the computer for the AI era implies that on-device model training is how most people will use AI. It isn’t. It’s how a small number of highly technical users will use AI: the same people who were already buying MacBook Pros anyway. For everyone else, the question was never whether their laptop could run a model.
It was whether their laptop could get out of the way while someone else’s computers did. For $599, Apple may just have given us the computer that answers that question.

EXPERT OPINION BY JASON ATEN, TECH COLUMNIST @JASONATEN

Monday, March 23, 2026

Replit CEO Says Its New AI Agent Can Vibe Code a Startup From Scratch

Replit founder and CEO Amjad Masad says the company’s latest AI agent can vibe code an entire company from scratch. Masad, whose company released one of the first commercially available AI coding agents in 2024, has been at the forefront of the vibe-coding revolution, along with competitors Bolt and Lovable. Today, he announced that Replit has raised $400 million in a Series D round, and he also unveiled Agent 4, the newly updated version of its marquee product. Over 50 million people are currently using Replit to create apps and websites, according to a statement from Replit investor Georgian.

The founder says that Agent 4 is capable of not just building an application, but actually creating and maintaining an entire company. Masad tells Inc. that Replit is now “the cockpit or the launch control of your business,” and can help develop pitch decks and animated logos, connect to payment processors like Stripe, and work on multiple tasks in parallel.

As AI takes on more of the technical work of running a software business, Masad predicts, the role of humans will evolve to become more focused on creativity and taste. Even today’s best AI models have trouble understanding what aesthetically makes one version of an app “better” than another, he says, which is why Replit has focused on developing user interfaces that enable deeper creative interactions with AI.

The key to Agent 4’s new abilities is a feature that Replit calls Canvas; it’s essentially a scratchpad for Replit to store all work created for a specific project. Individual elements (like a website, product research, and financial spreadsheets) are displayed as cards that you can move around and annotate. In a video example, Masad used Agent 4 to develop a job marketplace that helps companies find creative AI talent. First, he generated four variants of a landing page, and then iterated on the one he liked most.
To change the color of a button, Masad simply highlighted the button and then used a gradient tool to select a new color. In practice, Canvas combines some of the no-code tooling of platforms like Figma with the convenience of AI coding models. For solopreneurs, Masad says, “it almost feels like you have a bunch of employees at your disposal.”

Canvas and Agent 4 were partially inspired by sci-fi user interfaces, like the holographic displays used by Tony Stark in the Iron Man films, but even more so by a much simpler piece of hardware: a whiteboard. After introducing agents in 2024, Masad noticed the Replit office’s whiteboards getting significantly more use than previously. The reason? Replit employees had more time to focus on design rather than coding, and were using whiteboards to visually communicate their ideas to each other. Masad believed that this process of interaction could be recreated within the Replit platform.

Just like a whiteboard, users can draw on Canvas, highlighting specific aspects of a website they want to change, or using arrows to indicate how different elements should interact. In his example website, Masad sketched an image of a globe in the Canvas, asked Replit to turn the sketch into an animated 3D asset, and then added that asset to the job marketplace. Masad says this adds a new level of interaction between the user and the platform, enabling discussions that might be closer to what you’d actually have with a human technical co-founder.

“I think the tragedy of agents up until this moment was that we’re trying to squeeze this universe of ideas into this linear text box,” says Masad. “Now, you can be chaotic with it.”

BY BEN SHERRY @BENLUCASSHERRY

Friday, March 20, 2026

The world’s most valuable company just sent another signal that AI agents are going to be everywhere

Tech giant Nvidia, the world’s most valuable company and the poster child of the AI boom, is banking its future on the rise of AI agents. The company on Monday announced a slew of software and hardware updates to encourage the development of AI agents: AI assistants that can perform tasks for users.

Among the most significant announcements is a set of tools for AI helpers based on OpenClaw, the buzzy agent platform that’s been the talk of Silicon Valley in recent weeks. Nvidia also announced new computing racks designed to power agents, shifting its strategy’s primary focus away from graphics processing units.

Clad in his signature black leather jacket, Nvidia CEO Jensen Huang made the flurry of announcements in San Jose at the chipmaker’s annual GTC conference, which attracts tens of thousands of attendees and has been dubbed the “Super Bowl” of AI. Nvidia’s announcements are important because so many major companies rely on its systems to train and power their AI services, which means the chip giant’s new products often signal where the broader AI industry is headed.

Nvidia announced software tools to help companies make AI agents, including models and a blueprint for creating custom specialized assistants. It’s also launching a set of resources for creating agents on OpenClaw that adds privacy and security controls, which is crucial considering the popular agent has raised concerns among cybersecurity experts. Nvidia said its resources help OpenClaw agents access systems and files without compromising security or privacy. Huang said Nvidia worked directly with OpenClaw creator Peter Steinberger, who was recently hired by OpenAI.

Huang called OpenClaw the “operating system for personal AI” and likened its importance to that of the Mac and Windows operating systems. “OpenClaw is the number one. It is the most popular open-source project in the history of humanity, and it did so in just a few weeks,” Huang said.
Nvidia also unveiled updates to its new computing platform, Vera Rubin, which it said comprises seven chips that are now in full production. That includes a new central computing rack made up of central processing units (CPUs) rather than the graphics processing units (GPUs) Nvidia has been known for. CPUs are well suited to running the types of computing processes needed to power AI agents. The company is also integrating a non-Nvidia processor into its systems: new high-speed “language processing units” (LPUs) from American AI company Groq, with which Nvidia struck a $20 billion deal in November.

Unlike AI chatbots that respond to questions and prompts, AI agents can autonomously complete tasks like building websites, creating marketing pitches, and sending emails. AI agents are currently Nvidia’s biggest focus area, largely driven by the popularity of OpenClaw and Anthropic’s Claude Code and Cowork agents. “Every company in the world today needs to have an OpenClaw strategy, an agentic system strategy. This is the new computer,” Huang said. “This is as big of a deal as HTML, as big of a deal as Linux.”

Nvidia is attempting to future-proof its technology in other ways as well. It’s launching a space module for Vera Rubin, aiming to bring its latest tech to data centers in space. Space has become a growing area of interest among tech giants as they scramble for real estate on which to construct data centers; OpenAI CEO Sam Altman and xAI and Tesla CEO Elon Musk have both talked about using space to help power data centers and energy-hungry AI systems.

“Nvidia is now focused beyond just computing, with a major focus on the future of networking in this new world of AI,” said Wedbush analyst Dan Ives ahead of Nvidia’s Monday conference. In his speech on Monday, Huang tried to convey that the hype around AI and Nvidia can last, selling a vision of an AI-transformed future in which demand for its chips grows indefinitely.
Huang said computing demand “just keeps on going up,” adding that he expects “at least” $1 trillion in Nvidia revenue through 2027. “There’s a reason for that,” Huang said. “This fundamental inflection — AI is able to do productive work, and therefore the inflection point of inference has arrived.”

By Hadas Gold

Wednesday, March 18, 2026

How One of the World’s Top AI Voices Uses Claude Code to Run Her Day

Allie K. Miller, one of the most followed voices in the AI industry, says that “by the time you wake up, your AI should have already been working for you for hours.”

Formerly the global head of machine learning for startups and venture capital at Amazon Web Services, Miller is among the busiest AI consultants and influencers in the industry, with more than 1.6 million followers on LinkedIn alone. Through her company Open Machine, she advises enterprises and business leaders, including those at OpenAI, Google, Anthropic, and Warner Bros. Discovery, on how to adopt AI. In 2025, Miller was named one of the 100 most influential people in AI by Time.

In an interview with Inc., Miller says that nowadays, she largely works out of Claude Code, the agentic coding system developed by Anthropic. She keeps multiple instances of Claude Code running simultaneously in separate terminals. Because these Claude Code instances have access to Miller’s filesystem, they can autonomously complete work on her behalf.

Miller teaches Claude Code how to complete workflows by using Skills, a feature that allows Claude Code to undertake and repeat multistep processes. Miller says that she’s developed automations that generate a report summarizing all of the urgent emails she’s received overnight and a daily morning briefing that runs through her entire calendar, recommending times to recharge. “It’ll tell me, ‘You have four different interviews or six client meetings,’” explains Miller, “‘so I’ve gone ahead and blocked out 30 minutes tomorrow for deep work.’”

Another example: Every time Miller edits a social video of herself using CapCut, the TikTok-owned video editing app, she exports the video into a specific folder. Anytime a new file is added to that folder, an automation is triggered that automatically creates a transcript, a social post, and a screenshot for the video’s thumbnail.
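The folder-trigger pattern Miller describes is simple to sketch yourself. Below is a minimal Python version under stated assumptions: the export path, the `.mp4` filter, and the `process_export` placeholder are all illustrative, not Miller’s actual setup (which runs through Claude Code Skills rather than a hand-rolled script).

```python
import time
from pathlib import Path

EXPORT_DIR = Path("~/Movies/CapCut Exports").expanduser()  # illustrative path

def find_new_exports(folder: Path, seen: set[str]) -> list[str]:
    """Return names of video files that have appeared since the last check."""
    current = {p.name for p in folder.glob("*.mp4")}
    new = sorted(current - seen)
    seen.update(new)
    return new

def process_export(name: str) -> None:
    """Placeholder for the real work: transcript, social post, thumbnail."""
    print(f"new export detected: {name}")

def watch(folder: Path, interval: float = 5.0) -> None:
    """Poll the export folder and hand each new file to process_export."""
    seen: set[str] = set()
    find_new_exports(folder, seen)  # prime with files already present
    while True:
        for name in find_new_exports(folder, seen):
            process_export(name)
        time.sleep(interval)
```

In a production setup, an event-based watcher (such as the third-party watchdog library) would replace the polling loop; the point is only that the trigger logic itself is a few lines.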
In general, Miller says, the best way to identify AI solutions that work for your specific use case is to simply have the AI model of your choice interview you. Tell it to ask you questions about your work, making note of areas that you feel could be more efficient or smoother. Then, Miller says, prompt it again with “make these ideas more proactive, more responsibly autonomous, and more action-forward.” With just that prompt, she adds, you can get started developing your own AI solutions.

It’s not just workflows that Miller is automating. When developing a new post for her newsletter, Miller says that she runs drafts through eight “synthetic personas” that she’s developed, which represent the newsletter’s different audience demographics. “I’m not trying to appease all eight and write a happy-go-lucky version of the newsletter,” says Miller, “but I want to make sure I didn’t miss something important. I want to make sure that a parent reading [the newsletter] isn’t completely misunderstanding my take on something.”

Miller has a similar strategy when making big career decisions. She built a self-described “AI boardroom,” complete with six synthetic personas, which weigh in on major company issues. Miller swaps around which six personas sit on the board, depending on her needs. “If it’s a media question, maybe I’m running it through Shonda Rhimes,” she says, “or if it’s a business question, maybe I’m asking Jeff Bezos.” These personas give their initial opinions on the decision, and then they all begin debating with one another in a group chat. “I literally had Mickey Mouse arguing with Jensen Huang,” Miller adds.

The point, Miller says, is to get the most out of the raw intelligence offered by today’s AI models. “Wouldn’t you love to walk into a room of 10 geniuses arguing over something that you’ve been struggling with, and all they want to do is help you get to the best possible outcome?” she says.
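The persona-review step is easy to approximate with any chat model. A rough sketch, assuming hypothetical persona descriptions (Miller’s actual eight personas are not public): build one critique prompt per persona, collect the replies, then feed them all back as a “group chat” for the debate round.

```python
def persona_prompt(persona: str, draft: str) -> str:
    """One critique request per synthetic persona."""
    return (
        f"You are {persona}. Read the newsletter draft below and point out "
        f"anything you would misread, dislike, or want clarified.\n\n{draft}"
    )

def debate_prompt(critiques: dict[str, str]) -> str:
    """Second round: combine all critiques and ask the model to continue the debate."""
    transcript = "\n".join(f"{name}: {text}" for name, text in critiques.items())
    return (
        "The following reviewers disagree about a draft. Continue their debate "
        "and end with the three changes most worth making.\n\n" + transcript
    )

# Illustrative personas, not Miller's real ones.
personas = [
    "a busy parent skimming on a phone",
    "a skeptical enterprise CIO",
    "a first-time AI user",
]
```

Each `persona_prompt` result would be sent to the model independently; `debate_prompt` then takes the dictionary of persona replies as its input.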
“For those who have a growth mindset and thrive off of dynamic, changing, adaptable business settings, the multiagent world that we are walking into in 2026 is going to be world-changing.”

BY BEN SHERRY @BENLUCASSHERRY