Wednesday, March 25, 2026

With the MacBook Neo, Apple Made the Perfect AI Computer

Much of the conversation about the MacBook Neo has centered on whether the compromises Apple made in order to sell a Mac for under $600 left you with a computer that wasn’t actually able to do anything useful. Of course, it doesn’t take long to realize that the Neo is, in fact, more than capable of handling most of the computer things people who are inclined to buy this particular Mac might need it to do. One thing that conversation seems to have missed is the idea that the Neo is perfectly equipped to do the only thing that tech companies seem to think anyone cares about: AI. You can argue whether that’s actually true, but there’s no question that the Neo is one of the most interesting computers in the age of AI computing.

To be clear, the MacBook Neo does come with compromises. I’m not going to go through all of them now, partly because I wrote about them when I reviewed the Neo, but also because all of the Neo’s compromises are irrelevant to making it a great computer for AI. It’s not that other Macs are less capable. There is, however, something magical about the idea that a $600 entry-level Mac is as capable as a $4,000 MacBook Pro, or a $6,000 Mac Studio, when it comes to the most intensive computing that any of us do today.

That, of course, is because most AI computing happens in the cloud, not on your computer. That means the limiting factor isn’t memory, storage, or how fast your processor is. No, the limiting factor is how well you’re able to get your AI tool of choice to understand what you want. Oh, and I guess the speed of your internet connection. In practice, a MacBook Neo with an A18 Pro, 8GB of memory, and a 256GB or 512GB SSD will be just fine to run the Mac ChatGPT app or Gemini in Safari. And that changes what your laptop actually needs to be.

I don’t know that Apple had that specific thought when it made the MacBook Neo. Maybe it just wanted to make a low-cost, entry-level MacBook that would appeal to people who wouldn’t otherwise buy a Mac. Either way, the company ended up making what might be the most accessible AI-first computer yet. With the MacBook Neo, a high school student, freelancer, or small business owner can now own hardware that gives them full access to the best AI tools in the world.

Interestingly, this isn’t exactly the way Apple has framed the marketing. In fact, Apple isn’t shy about how it markets the MacBook Pro as the laptop for AI. The new M5 Pro and M5 Max chips, Apple says, deliver up to 4x faster LLM prompt processing than the previous generation. The MacBook Pro, in Apple’s words, is built for “AI researchers and developers to train custom models locally.” I’m not arguing that isn’t a real use case. But I think we can all agree it’s a very narrow one that most people don’t understand or care about. Training models locally or running 30-billion-parameter LLMs on-device are things that matter enormously to a specific kind of user — and are completely irrelevant to almost everyone else.

The average person using AI doesn’t need to run a model. The average user just wants to talk to one. When you ask Claude to help you rewrite an email, or ask ChatGPT to explain something complicated, or use Gemini to summarize a document, none of that requires local inference. The model lives somewhere else. The compute happens in the cloud. Your laptop is basically just a keyboard and screen for a computer that does the work for you. The MacBook Pro is a remarkable machine for people who need what it does.
But positioning it as the computer for the AI era implies that on-device model training is how most people will use AI. It isn’t. It’s how a small number of highly technical users will use AI — the same people who were already buying MacBook Pros anyway. For everyone else, the question was never whether their laptop could run a model. It was whether their laptop could get out of the way while someone else’s computers did. For $599, Apple may just have given us the computer that answers that question. EXPERT OPINION BY JASON ATEN, TECH COLUMNIST @JASONATEN

Monday, March 23, 2026

Replit CEO Says Its New AI Agent Can Vibe Code a Startup From Scratch

Replit founder and CEO Amjad Masad says the company’s latest AI agent can vibe code an entire company from scratch. Masad, whose company released one of the first commercially available AI coding agents in 2024, has been at the forefront of the vibe-coding revolution, along with competitors Bolt and Lovable. Today, he announced that Replit has raised $400 million in a Series D round, and he also unveiled Agent 4, the newly updated version of its marquee product. Over 50 million people are currently using Replit to create apps and websites, according to a statement from Replit investor Georgian. The founder says that Agent 4 is capable of not just building an application, but actually creating and maintaining an entire company. Masad tells Inc. that Replit is now “the cockpit or the launch control of your business,” and can help develop pitch decks and animated logos, connect to payment processors like Stripe, and work on multiple tasks in parallel. As AI takes on more of the technical work of running a software business, Masad predicts, the role of humans will evolve to become more focused on creativity and taste. Even today’s best AI models have trouble understanding what aesthetically makes one version of an app “better” than another, he says, which is why Replit has focused on developing user interfaces that enable deeper creative interactions with AI. The key to Agent 4’s new abilities is a feature that Replit calls Canvas; it’s essentially a scratchpad for Replit to store all work created for a specific project. Individual elements (like a website, product research, and financial spreadsheets) are displayed as cards that you can move around and annotate. In a video example, Masad used Agent 4 to develop a job marketplace that helps companies find creative AI talent. First, he generated four variants of a landing page, and then iterated on the one he liked most. To change the color of a button, Masad simply highlighted the button and then used a gradient tool to select a new color. In practice, Canvas combines some of the no-code tooling of platforms like Figma with the convenience of AI coding models. For solopreneurs, Masad says, “it almost feels like you have a bunch of employees at your disposal.” Canvas and Agent 4 were partially inspired by sci-fi user interfaces, like the holographic displays used by Tony Stark in the Iron Man films, but even more so by a much simpler piece of hardware: a whiteboard. After introducing agents in 2024, Masad noticed the Replit office’s whiteboards getting significantly more use than previously. The reason? Replit employees had more time to focus on design rather than coding, and were using whiteboards to visually communicate their ideas to each other. Masad believed that this process of interaction could be recreated within the Replit platform. Just like a whiteboard, users can draw on Canvas, highlighting specific aspects of a website they want to change, or using arrows to indicate how different elements should interact. In his example website, Masad sketched an image of a globe in the Canvas, asked Replit to turn the sketch into an animated 3D asset, and then added that asset to the job marketplace. Masad says this adds a new level of interaction between the user and the platform, enabling discussions that might be closer to what you’d actually have with a human technical co-founder. “I think the tragedy of agents up until this moment was that we’re trying to squeeze this universe of ideas into this linear text box,” says Masad. 
“Now, you can be chaotic with it.” BY BEN SHERRY @BENLUCASSHERRY

Friday, March 20, 2026

The world’s most valuable company just sent another signal that AI agents are going to be everywhere

Tech giant Nvidia, the world’s most valuable company and the poster child of the AI boom, is banking its future on the rise of AI agents. The company on Monday announced a slew of software and hardware updates to encourage the development of AI agents, or AI assistants that can perform tasks for users. Among the most significant announcements is a set of tools for AI helpers based on OpenClaw, the buzzy agent platform that’s been the talk of Silicon Valley in recent weeks. Nvidia also announced new computing racks designed to power agents, a shift from its longtime primary focus on graphics processing units.

Clad in his signature black leather jacket, Nvidia CEO Jensen Huang made the flurry of announcements in San Jose at the chipmaker’s annual GTC conference, which attracts tens of thousands of attendees and has been dubbed the “Super Bowl” of AI. Nvidia’s announcements are important because so many major companies rely on its systems to train and power their AI services. This means the chip giant’s new products often signal which technologies companies across the AI industry will adopt.

Nvidia announced software tools to help companies make AI agents, including models and a blueprint for creating custom specialized assistants. It’s also launching a set of resources for creating agents on OpenClaw that adds privacy and security controls, which is crucial considering the popular agent has raised concerns among cybersecurity experts. Nvidia said its resources help OpenClaw agents access systems and files without compromising security or privacy. Huang said Nvidia worked directly with OpenClaw creator Peter Steinberger, who was recently hired by OpenAI. Huang called OpenClaw the “operating system for personal AI” and likened its importance to that of the Mac and Windows operating systems. “OpenClaw is the number one. It is the most popular open-source project in the history of humanity, and it did so in just a few weeks,” Huang said.

Nvidia also unveiled updates to its new computing platform, Vera Rubin, which it said comprises seven chips that are now in full production. That includes a new central computing rack made up of central processing units (CPUs) rather than the graphics processing units (GPUs) Nvidia has been known for. CPUs are ideal for running the types of computing processes needed to power AI agents. The company is also integrating a non-Nvidia processor into its systems: new high-speed “language processing units” (LPUs) from American AI company Groq. Nvidia struck a $20 billion deal with Groq in November.

Unlike AI chatbots that respond to questions and prompts, AI agents can autonomously complete tasks like building websites, creating marketing pitches, and sending emails. AI agents are currently Nvidia’s biggest focus area, largely driven by the popularity of OpenClaw and Anthropic’s Claude Code and Cowork agents. “Every company in the world today needs to have an OpenClaw strategy, an agentic system strategy. This is the new computer,” Huang said. “This is as big of a deal as HTML, as big of a deal as Linux.”

Nvidia is attempting to future-proof its technology in other ways as well. It’s launching a space module for Vera Rubin, aiming to bring its latest tech to data centers in space. Space has become an area of increasing interest among tech giants as they scramble for real estate to construct data centers. OpenAI CEO Sam Altman and xAI and Tesla CEO Elon Musk have both talked about using space to help power data centers and energy-hungry AI systems.
“Nvidia is now focused beyond just computing with a major focus on the future of networking in this new world of AI,” said Wedbush analyst Dan Ives ahead of Nvidia’s Monday conference. In his speech on Monday, Huang tried to convey that the hype around AI and Nvidia can last, selling a vision of an AI-transformed future where demand for its chips grows indefinitely. Huang said computing demand “just keeps on going up,” adding that he expects “at least” $1 trillion in Nvidia revenue through 2027. “There’s a reason for that,” Huang said. “This fundamental inflection — AI is able to do productive work, and therefore the inflection point of inference has arrived.” By Hadas Gold

Wednesday, March 18, 2026

How One of the World’s Top AI Voices Uses Claude Code to Run Her Day

Allie K. Miller, one of the most followed voices in the AI industry, says that “by the time you wake up, your AI should have already been working for you for hours.” Formerly the global head of machine learning for startups and venture capital at Amazon Web Services, Miller is among the busiest AI consultants and influencers in the industry, with more than 1.6 million followers on LinkedIn alone. Through her company Open Machine, she advises enterprises and business leaders—including those at OpenAI, Google, Anthropic, and Warner Bros. Discovery—on how to adopt AI. In 2025, Miller was named one of the 100 most influential people in AI by Time. In an interview with Inc., Miller says that nowadays, she largely works out of Claude Code, the agentic coding system developed by Anthropic. She keeps multiple instances of Claude Code running simultaneously in separate terminals. Because these Claude Code instances have access to Miller’s filesystem, they can autonomously complete work on her behalf. Miller teaches Claude Code how to complete workflows by using Skills, a feature that allows Claude Code to undertake and repeat multistep processes. Miller says that she’s developed automations that generate a report summarizing all of the urgent emails she’s received overnight and a daily morning briefing that runs through her entire calendar, recommending times to recharge. “It’ll tell me, ‘You have four different interviews or six client meetings,’” explains Miller, “‘so I’ve gone ahead and blocked out 30 minutes tomorrow for deep work.’” Another example: Every time Miller edits a social video of herself using CapCut, the TikTok-owned video editing app, she exports the video into a specific folder. Anytime a new file is added to that folder, an automation is triggered that automatically creates a transcript, a social post, and a screenshot for the video’s thumbnail. In general, Miller says, the best way to identify AI solutions that work for your specific use case is to simply have the AI model of your choice interview you. Tell it to ask you questions about your work, making note of areas that you feel could be more efficient or smoother. Then, Miller says, prompt it again with “make these ideas more proactive, more responsibly autonomous, and more action-forward.” With just that prompt, she adds, you can get started developing your own AI solutions. It’s not just workflows that Miller is automating. When developing a new post for her newsletter, Miller says that she runs drafts through eight “synthetic personas” that she’s developed, which represent the newsletter’s different audience demographics. “I’m not trying to appease all eight and write a happy-go-lucky version of the newsletter,” says Miller, “but I want to make sure I didn’t miss something important. I want to make sure that a parent reading [the newsletter] isn’t completely misunderstanding my take on something.” Miller has a similar strategy when making big career decisions. She built a self-described “AI boardroom,” complete with six synthetic personas, which weigh in on major company issues. Miller swaps around which six personas sit on the board, depending on her needs. “If it’s a media question, maybe I’m running it through Shonda Rhimes,” she says, “or if it’s a business question, maybe I’m asking Jeff Bezos.” These personas give their initial opinions on the decision, and then they all begin debating with one another in a group chat. “I literally had Mickey Mouse arguing with Jensen Huang,” Miller adds. 
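Miller didn’t share her code, but the kind of folder-triggered automation she describes (a new video file lands, and a transcript and social post get generated) can be wired up in a few lines. Here is a minimal sketch in Python, assuming the watchdog file-watching library and Claude Code’s non-interactive command-line mode; the folder name and prompt wording are hypothetical, not her actual setup.

# folder_trigger.py — illustrative sketch, not Miller’s actual automation.
# Watches an export folder; when a new video appears, hands it to an agent
# via a one-shot, non-interactive prompt.
import subprocess
import time
from pathlib import Path

from watchdog.events import FileSystemEventHandler
from watchdog.observers import Observer

EXPORT_DIR = Path.home() / "CapCut-Exports"  # assumed folder name

class NewVideoHandler(FileSystemEventHandler):
    def on_created(self, event):
        # Ignore directories and non-video files
        if event.is_directory or not event.src_path.endswith(".mp4"):
            return
        # "claude -p" runs Claude Code with a single prompt and exits;
        # the prompt here is an illustrative assumption.
        subprocess.run([
            "claude", "-p",
            f"Transcribe {event.src_path}, draft a social post from it, "
            "and save a thumbnail-worthy frame as a screenshot.",
        ])

if __name__ == "__main__":
    observer = Observer()
    observer.schedule(NewVideoHandler(), str(EXPORT_DIR), recursive=False)
    observer.start()  # runs until interrupted
    try:
        while True:
            time.sleep(1)
    except KeyboardInterrupt:
        observer.stop()
    observer.join()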
The point, Miller says, is to get the most out of the raw intelligence offered by today’s AI models. “Wouldn’t you love to walk into a room of 10 geniuses arguing over something that you’ve been struggling with, and all they want to do is help you get to the best possible outcome?” she says. “For those who have a growth mindset and thrive off of dynamic, changing, adaptable business settings, the multiagent world that we are walking into in 2026 is going to be world-changing.” BY BEN SHERRY @BENLUCASSHERRY

Monday, March 16, 2026

Meta just bought the social network for AI bots everyone’s been talking about

Meta, the company behind some of the world’s most popular social media platforms, just scooped up a new site – for bots. Meta has acquired Moltbook, the social media network where AI agents interact with one another autonomously, the company said in a statement on Tuesday. Meta is competing with rivals like OpenAI for both talent and users’ attention. And as AI expands into more aspects of Americans’ lives, tech companies are trying to figure out the best way to position themselves to win what’s becoming a sort of technological arms race. Moltbook became the talk of Silicon Valley last month, racking up millions of registered bots within days of its launch. Some in the industry saw it as a major leap because it demonstrated what can happen when AI agents socialize with one another like humans. Others said the site is full of sham agents, AI slop and security risks and should be viewed skeptically. Meta’s acquisition comes weeks after OpenAI hired the founder of the technology behind Moltbook, an AI agent system called OpenClaw. Moltbook’s team will join Meta’s superintelligence labs. A Meta spokesperson said Moltbook’s approach “opens up new ways for AI agents to work for people and businesses.” OpenAI CEO Sam Altman dismissed the excitement over Moltbook last month, suggesting OpenClaw, the open-source autonomous AI agent that powers the site’s bots, was the real breakthrough. Altman wrote that he expects the technology to become “core” to OpenAI’s products. Meta acquired the buzzy AI agent startup Manus in December, following a string of high-profile hires intended to build out its superintelligence team. The company also invested $14.3 billion in Scale AI last year and hired its CEO. But Meta, like some of its Big Tech peers, is facing pressure to prove its AI investments will make money, especially as rivals like OpenAI, Anthropic and Google churn out new and improved models for their chatbots. Meta CEO Mark Zuckerberg said on a January earnings call the company will release its new AI models “over the coming months.” By Hadas Gold

Friday, March 13, 2026

AI is exhausting workers so much, researchers have dubbed the condition ‘AI brain fry’

Part of the pitch for artificial intelligence in the workplace goes like this: It’s like having a team of people to delegate your grunt work to, freeing you up to think strategically and maybe, just maybe, take a long lunch or head home early. Or maybe even be more productive, to make more money. It’s a nice idea! But as everyone who’s either had a boss or been a boss knows, managing is a job in itself, one that comes with its own distinct brand of stress and annoyance. And that doesn’t change if the “people” in question aren’t people at all.

For participants in a recent study by Boston Consulting Group, the experience of overseeing multiple AI “agents” (autonomous software designed to execute tasks, rather than just churn out information like a chatbot) caused an acute sensation of “buzzing,” a fog that left workers exhausted and struggling to concentrate. The study’s authors call it “AI brain fry,” defined as mental fatigue “from excessive use or oversight of AI tools beyond one’s cognitive capacity.” “Contrary to the promise of having more time to focus on meaningful work, juggling and multitasking can become the definitive features of working with AI,” they wrote in the study, published by Harvard Business Review last week. “This AI-associated mental strain carries significant costs in the form of increased employee errors, decision fatigue, and intention to quit.”

Workers quoted in the study reminded me a lot of my fellow elder Millennials circa 1997, rushing home to tend to their Tamagotchis. “It was like I had a dozen browser tabs open in my head, all fighting for attention,” one senior engineering manager told researchers. “I caught myself rereading the same stuff, second-guessing way more than usual, and getting weirdly impatient. My thinking wasn’t broken, just noisy—like mental static.”

This is just one new side effect of a push by company executives to make workers use AI more. Last fall, a Harvard Business Review report chronicled the scourge of “workslop” — the nonsensical AI-generated memos, pitch decks and presentations that end up creating more work for colleagues who have to fix what the bot got wrong. Workslop reflects a kind of “cognitive surrender” in which workers feel unmotivated, giving AI work to do and not really paying attention to the output, said Gabriella Rosen Kellerman, a psychiatrist who co-authored both reports, in an interview. “Brain fry is almost the opposite… It’s like trying to go tête-à-tête — intelligence to intelligence — with the AI.”

Francesco Bonacci, CEO of Cua AI, which builds AI agents, described his AI fatigue as “vibe coding paralysis” (a reference to the Silicon Valley trend of building less-polished projects with AI prompts rather than traditional coding). “I end each day exhausted — not from the work itself, but from the managing of the work,” he wrote last month in an essay on X. “Six worktrees open, four half-written features, two ‘quick fixes’ that spawned rabbit holes, and a growing sense that I’m losing the plot entirely.”

To some extent, brain fry and workslop could both be a case of growing pains. Imagine plucking a middle-aged office worker from 1986, dropping them into a 2026 workplace and asking them to send 10 emails, respond to Slacks and Zoom into a call with the social media team who are all working from home. You’d expect some cognitive overload, not to mention some confused looks when you tell them Donald Trump is president and that it took more than 30 years to make a “Top Gun” sequel.
Of course, people learn how to be managers all the time. “I do think this is potentially temporary,” said Matthew Kropp, a co-author of the brain fry study and a BCG managing director. “These are tools we haven’t had before.” Kropp compared the experience of someone managing multiple AI tools to that of someone who just learned to drive being handed a Ferrari: You can go really fast, but it’s easy to lose control. Even tech pros, though, seem to be struggling to control their AI assistants at times. Last month, Meta’s director of AI safety and alignment tweeted about her own experience watching bots nearly delete her inbox without permission. “I had to RUN to my Mac mini like I was defusing a bomb,” she wrote, chalking the incident up to a “rookie mistake.” Both Kropp and Kellerman emphasized that the results of the study weren’t all negative. Surprisingly, the people experiencing brain fry tended to experience less burnout, defined as a state of chronic workplace stress that builds over time and makes workers perform poorly. Brain fry, as participants described it, is an acute experience. “When they take a break, it goes away,” Kellerman said. Analysis by Allison Morrow

Wednesday, March 11, 2026

Bad News for Your Burner Account: AI Is Surprisingly Effective at Identifying the Person Behind One

It’s not uncommon for people to maintain anonymous or burner accounts for their online activities, for a variety of reasons. A new study, though, shows why you might want to be as careful posting from those accounts as you would from one that uses your real name, since they might not hide your identity as well as you think. A recently released research paper found that artificial intelligence has proved quite effective at figuring out who’s behind those false-name accounts.

Large language models, the study found, can use a number of techniques, such as extracting identity signals (data points or behaviors used to identify, verify, or categorize individuals) and searching for matching data, to significantly outperform existing deanonymization methods. The study successfully deanonymized 68 percent of the users in its trial data set, with a 90 percent precision rate, meaning that when the model named the person behind an account, it was right nine times out of ten.

“Our findings have significant implications for online privacy,” wrote the researchers, who were based at ETH Zurich, a public university in Zurich, Switzerland, and MATS, an independent research and educational program. “The average online user has long operated under an implicit threat model where they have assumed pseudonymity provides adequate protection because targeted deanonymization would require extensive effort. LLMs invalidate this assumption.” Anthropic also contributed to the study.

The finding that pseudonymous content can be fairly easily unmasked by AI has implications far beyond burner accounts and social media, of course. It could be a powerful tool for hackers. It could make it easier for companies to track down employees who leak corporate information, or to dig into who is asking questions in open forums. It could also prove embarrassing for leaders who use burner accounts to pump up their businesses or covertly settle online scores with rivals. Casey Bloys, chairman and CEO of HBO and Max Content at Warner Bros. Discovery, admitted in 2023 that he had fake social media accounts he used to troll critics about network programming (later admitting that was a “dumb idea”). Elon Musk has confirmed in a court deposition that he has used them in the past. And Barstool Sports was accused in 2023 of using more than 40 accounts to promote its content and help it go viral.

Users hoping to keep their identity private, and vulnerable members of society who depend on privacy (e.g., whistleblowers, activists, or abuse survivors), could also be identified. A slightly deeper dive by the AI could also determine where those people live, their occupation (and estimated income level), and more. To protect against that, the researchers proposed several mitigations, including having platforms enforce rate limits on API access to user data, better detection of automated scraping, and restrictions on bulk data exports. That said, they acknowledge that preventing AI from being used to identify people behind accounts designed to obfuscate the user’s identity will be increasingly challenging in the months and years to come.

“Recent advances in LLM capabilities have made it clear that there is an urgent need to rethink various aspects of computer security in the wake of LLM-driven offensive cyber capabilities,” the study reads. “Our work shows that the same is likely true for privacy as well. … Any moderately sophisticated actor can already do what we do using readily available LLMs and embedding models.
With future LLMs, without mitigations, this attack will be within the means of basically all adversarial actors.” BY CHRIS MORRIS @MORRISATLARGE

Monday, March 9, 2026

The Hidden Advantage of Being Over 50 in the Age of AI

I’ve been through a few technology revolutions. I built my first website in 1995, back when the internet made that screeching dial-up sound and nobody really knew what we were building, just that something big was happening. I watched the dot‑com bubble inflate and implode, watched social media go from novelty to addiction, and saw smartphones quietly rewire how humans behave. And now, here we are again: AI. Everywhere you look, someone is launching an AI startup, automating departments, or building agents that promise to replace entire job functions.

If you’re an experienced founder or executive—especially north of 50—it’s easy to feel like you showed up late to the party. I’ve felt it myself. A few months ago, I was sitting in front of my computer watching younger founders crank out AI apps in days, shipping products before I’d even finished reading about the tools they were using. I remember thinking, “Am I becoming the guy who missed it?” That thought lasted about a week. Once I stopped comparing velocity and started actually using AI in my own work, something clicked. This might be the first tech wave where experience is the real unfair advantage.

AI isn’t about being technical. It’s about thinking clearly

Previous tech revolutions rewarded people who could code, manipulate algorithms, or master new platforms faster than everyone else, but AI is different. You don’t need to learn a programming language; you need to ask better questions. And asking better questions isn’t a technical skill—it’s a judgment skill. The leverage in AI doesn’t come from typing prompts quickly; it comes from knowing what matters, what doesn’t, and what consequences might follow. That’s pattern recognition, and pattern recognition is built over decades. It’s something AI is really good at, and it turns out those with experience are as well.

Speed is overrated. Judgment isn’t

Younger founders are moving fast right now, and I respect that. It’s exciting to watch. But speed without context creates a whole lot of noise, while experience creates context. When I use AI, I’m not asking it to build me a novelty app; I’m asking it to stress‑test a business idea, identify blind spots in a launch plan, challenge my assumptions, and help me flesh out existing models. I don’t accept what it gives me—I argue with it, refine it, and push it. That’s not something you learn from YouTube tutorials. That’s something you learn from making expensive mistakes.

The real danger isn’t falling behind—it’s outsourcing your thinking

There’s a subtle shift happening where leaders are starting to treat AI like a strategy generator instead of a thought partner, and that’s dangerous. AI predicts patterns. It doesn’t carry fiduciary responsibility, understand internal politics, feel reputational damage, or know which risks are existential versus cosmetic. It produces possibilities. You decide. If you’ve been in business long enough, you understand that difference instinctively—and that instinct is more valuable now than ever.

The confidence gap is mostly psychological

I’ve talked to more than a few executives who whisper some version of the same thing: “I’m not technical,” “I feel behind,” or “My kids understand this better than I do.” That may be true at the interface level, but understanding tools isn’t the same as understanding leverage. If you know how distribution works, AI can sharpen your messaging. If you understand customer psychology, AI can help you surface objections faster. If you understand operations, AI can reveal inefficiencies you’ve been tolerating for years. You don’t need to become an AI founder—you need to become more precise.

We’ve seen this movie before, but this time you’re the advantage

Every tech wave follows the same emotional arc: hype, overconfidence, correction, integration. What feels different about AI isn’t the hype—we’ve seen that—it’s the accessibility. You talk to it; it talks back. That simplicity lowers the barrier dramatically, and when the barrier lowers, judgment becomes the differentiator. Not youth. Not speed. Judgment. The leaders who win this era won’t just be 22‑year‑olds building AI‑native startups. They’ll also be experienced operators who integrate AI quietly and intelligently into systems they already understand. If you’re over 50 and feeling behind, you might actually be early. Because when the tools get easier, experience becomes more powerful—not less. And this time, that experience may finally be the competitive edge. EXPERT OPINION BY JOEL COMM, AUTHOR AND SPEAKER @JOELCOMM

Friday, March 6, 2026

How to Switch From ChatGPT to Claude With Just 1 Simple Prompt

Anthropic has had a turbulent few days, but the safety-focused AI company might be having the last laugh. Following Anthropic’s standoff with the United States Department of War, President Trump’s subsequent firing of Claude from government use, and OpenAI’s surprise deal with the Pentagon, individual users are dumping ChatGPT and flocking to Claude. On Saturday, the Claude mobile app rose to the top spot on the iOS App Store, surpassing ChatGPT for the first time. Around the same time, TechCrunch reported, uninstalls of the ChatGPT mobile app jumped 295 percent compared with the previous day.

But switching AI providers isn’t always a seamless experience. The more often you use an AI platform, the more it gains an understanding of you, your work, and your personal context, which is why starting over with a new AI can feel like taking a major step back. Now, Anthropic is looking to capitalize on its newfound momentum among consumers by making it easy to transfer context about yourself from rival AI providers like ChatGPT and Google Gemini to Claude. On Monday, the company announced that its Memory feature, which enables Claude to remember key information about you across conversations, is now available for non-paying Claude users. Anthropic says on its website that this allows users to transfer their personal information with a single copy-paste, although in reality, it actually takes two copy-pastes.

How to transfer your context from ChatGPT to Claude

On Claude.ai, navigate to the settings page and select “Capabilities” from the sidebar menu. Then, click the button labeled “start import” under a section titled “Import memory from other AI providers.” Next, you’ll see a pop-up requesting that you copy a prewritten prompt and paste it into a new chat with the AI platform you’re looking to leave behind. For example, if you’ve been using ChatGPT and want to move on, you’d enter this prompt into ChatGPT. Here’s the full prompt, courtesy of Anthropic:

Export all of my stored memories and any context you’ve learned about me from past conversations. Preserve my words verbatim where possible, especially for instructions and preferences.

## Categories (output in this order):
1. **Instructions**: Rules I’ve explicitly asked you to follow going forward — tone, format, style, “always do X”, “never do Y”, and corrections to your behavior. Only include rules from stored memories, not from conversations.
2. **Identity**: Name, age, location, education, family, relationships, languages, and personal interests.
3. **Career**: Current and past roles, companies, and general skill areas.
4. **Projects**: Projects I meaningfully built or committed to. Ideally ONE entry per project. Include what it does, current status, and any key decisions. Use the project name or a short descriptor as the first words of the entry.
5. **Preferences**: Opinions, tastes, and working-style preferences that apply broadly.

## Format:
Use section headers for each category. Within each category, list one entry per line, sorted by oldest date first. Format each line as: [YYYY-MM-DD] – Entry content here. If no date is known, use [unknown] instead.

## Output:
– Wrap the entire export in a single code block for easy copying.
– After the code block, state whether this is the complete set or if more remain.

What to do with Claude after you’ve entered this prompt

If you prompt a platform like ChatGPT or Gemini with this message, you’ll receive a response that details the information the platform has about you, broken down into sections like identity, career, and projects. The response should also contain instructions detailing how you like your AI models to converse with you, such as specifications for tone of voice. Once the response is done generating, copy it, paste it into the text box on the Claude settings page, and click the “add to memory” button. With that, you should see a pop-up box named “manage memory.” This box contains all the personal information that Claude knows about you, and after a minute or two it will update with the new data you just transferred from the other platform. Make sure to review this context closely and edit any data that seems inaccurate or unnecessary for what you’re planning to use Claude for. And there you have it—now you’re ready to start your new journey with Claude. What will you do first? BY BEN SHERRY @BENLUCASSHERRY

Wednesday, March 4, 2026

AI Adoption Has Surged to 78 Percent in This 1 Industry—but There’s a Catch

One industry has gone from barely touching AI to mass adoption in just two years. AI adoption in the legal field jumped from 23 percent to 78 percent, faster than in finance and healthcare. Litify’s third annual State of AI in Legal Report, which surveyed hundreds of legal professionals across law firms, corporate legal departments, and plaintiff practices, found that legal professionals are now among the fastest AI adopters anywhere. But there’s a problem hiding inside that adoption number. Only 14 percent say AI is helping them reduce costs. Just 7 percent report billing more time. Legal firms rushed to buy the sports car, then kept driving it in first gear. The gap between “we use AI” and “this changed our economics” is enormous, and it’s widening.

“At Litify, we view this as an ‘AI maturity gap,’” notes Curtis Brewer, CEO of Litify, the legal operations platform used by 55,000+ legal professionals. “A firm that relies solely on a general-purpose tool like ChatGPT is only at the first step of its maturity journey.” The Litify data reveals exactly where firms are stuck. ChatGPT dominates usage at 66 percent, followed by Microsoft Copilot (42 percent) and Google Gemini (24 percent). These are general-purpose tools, not legal-specific platforms. And while 66 percent use AI for legal research and 39 percent for summarization, only 6 percent use it for creating invoices and 5 percent for client communication. Firms are deploying AI for tasks that feel productive but don’t directly touch revenue.

Why freemium tools hit a wall

General-purpose AI tools work well for research and summarization. The problem isn’t that they’re bad, but that they plateau quickly. That ceiling is exactly why legal-specific platforms like Harvey—built from the ground up on legal data and trained on case law, contracts, and regulatory frameworks—have been gaining traction at major firms. Harvey now counts PwC, A&O Shearman, and half of the 100 highest-grossing law firms in the U.S. among its clients, and has raised over $1.2 billion, with reports of another $200 million round in the works at an $11 billion valuation—partly on the argument that generic AI simply wasn’t built for legal nuance.

“The primary limitation of these general-purpose tools is their lack of legal and business context,” Brewer says. “Legal work is defined by nuances — solicitation rules, jurisdictional requirements, compliance standards, and practice-area-specific workflows — that general models often overlook.” Then there’s the context problem. Ask ChatGPT to summarize a case, and it only sees what you feed it — not the case history or the client’s background. And since it also can’t take action after summarizing, it’s more or less a dead-end tool. “A legal-specific tool that lives alongside your data and processes can summarize the case and suggest the next best actions or additional questions to ask,” Brewer says. “As the industry raises the bar, firms that delay are doing more than just missing out on features — they are widening a performance gap that may soon become impossible to close.”

The shadow IT security risk

Here’s where the adoption-without-governance problem gets dangerous: Only 41 percent of firms have an AI policy, and only 45 percent say their staff receive sufficient training. But 78 percent are using AI tools. That means roughly a third of legal professionals may be using AI in what amounts to a shadow IT environment, with no oversight, guardrails, or policy. “Security, security, security!” Brewer says. “Given the highly sensitive nature of legal data, business leaders should be concerned that nearly a third of their staff may be using AI in a ‘shadow’ environment without direct IT oversight.” When employees use public AI tools, they might paste in confidential client information or HIPAA-protected medical records without thinking twice. These systems have no real safeguards. One careless prompt could mean a data breach, a regulatory violation, or a destroyed client relationship. “When firms fail to provide proactive guidance and purpose-built tools, staff will seek their own solutions,” Brewer explains. “If AI adoption isn’t intentional and structured from the top down, firms risk losing the very efficiency gains they sought in the first place, while exposing themselves to additional risks.”

What workflow integration actually looks like

The difference between AI as an assistant and AI as a business driver comes down to integration. Consider billing. Asking ChatGPT to create an invoice is like using your smartphone’s calculator instead of the accounting app. Sure, it works. But you still have to manually punch in every client detail, every payment amount, and every line item. You saved five minutes on the template and spent an hour filling it in. That’s unproductive. “When AI ‘lives’ natively alongside your billing, client, and case workflows, the impact is fundamentally different,” Brewer notes. “It transforms from an assistant to a proactive business partner.” An integrated AI tool doesn’t just generate a branded invoice template with client and matter details pre-filled. It can automatically suggest missing time entries or proactively identify billing errors. That’s the difference between saving 10 minutes and changing the economics of the entire billing process. Litify’s clients who’ve embraced this level of integration are seeing dramatic operational scaling — some firms handle twice as many matters with the same staff, and the highest performers have grown headcount by up to 400 percent as they’ve expanded regionally and nationally.

The four-dimension framework

Brewer says firms need to move on four fronts at once.

1. Tools: You have to stop relying on ChatGPT alone, because that’s not going to get you there. Move to legal-specific platforms that integrate effectively with your case management, billing, and client systems.

2. Readiness: Write an AI policy. Spell out which tools are approved, how to handle sensitive data, when humans must review output, and what to do when something goes wrong. Then treat training like a safety requirement, not an HR checkbox.

3. Task scope: Research and summarization are fine starting points. But firms that stay there are leaving money on the table. The next level is workflow automation — routing requests, running conflict checks, and building chronologies. Eventually, let AI assign cases, generate invoices, and handle intake.

4. Impact: Pick metrics before you spend another dollar. Cost per matter. Turnaround time. Write-off rates. Error rates.

“The try-it-and-see period is ending,” Brewer says. “Leaders will expect ROI.” Ultimately, the firms pulling ahead didn’t just buy software. They rewired how legal work gets done — from intake to invoice and research to billing — with training, governance, and measurement baked in from the start. You can keep using the sports car in first gear. But eventually, someone in your market will figure out where the other gears are. BY KOLAWOLE ADEBAYO

Monday, March 2, 2026

15 Incredibly Useful Things You Didn’t Know NotebookLM Could Do

Generative AI may be both the most useful and the most mystifying tool of our modern-tech era. The problem—aside from all the endlessly documented issues around accuracy—is that generative AI generally seems to function in a DOS-like blank prompt form. The onus is squarely on you to figure out what to ask and how to put these saucy systems to use.

That black-box feeling is especially apparent when you look at NotebookLM, an “AI-first notebook” launched by Google nearly two years ago. The idea behind NotebookLM is that you upload your own source materials within carefully confined notebooks, and you can then lean on Google’s Gemini AI to interact with that material in all sorts of illuminating ways. Since each notebook is limited only to whatever source materials you supply, the prevalence of those pesky hallucinations seems to be less of an issue. And since everything within your NotebookLM notebooks is kept completely private—not even used for any manner of AI model training, according to Google—you can connect it to all sorts of subjects and use it to gain a level of deep insight that was never before so easily accessible.

But again, there’s the black box challenge. When you first pull up NotebookLM, it’s tough to know where to begin and how to interact with the thing in practical, approachable ways. Even as someone who writes about technology for a living and has spent more time than most mortals thinking about this service, I realized I hadn’t entirely figured out how to use it in a way that would genuinely be helpful in my day-to-day life. So I challenged myself to dig deep, get beyond all the conceptual excitement, and come up with a series of real-world use cases for NotebookLM that any regular human could both appreciate and emulate. I’ve got 15 super-specific scenarios, all tried and tested, in which the artificial intelligence answer machine could be useful for you. Follow this road map and see which path holds the most promise from your perspective.

1. Your on-demand product answer machine

Up first is a possibility that’s supremely simple yet packed with productivity potential: Create a new NotebookLM notebook called “Product Manuals.” Then, every time you purchase a new appliance or device of some sort, search the web for a PDF version of its manual and add it into the notebook. If you really want to get wild, include an image of any warranty cards, too. Then, anytime you need to know anything about those products—how some part of them works, how to fix something that’s gone awry, or if and how you’re eligible for a warranty-related repair—just fire up that same NotebookLM notebook and ask, ask, ask away.

2. Your instant car support system

Next, try using NotebookLM to help wrangle the most expensive gadget you own. Do a similar web search for your current vehicle’s owner manual, then drop it into its own NotebookLM notebook with the vehicle’s name as the title. Repeat for any additional vehicles you own and any new ones you purchase down the road. After recently trading in our old minivan for a hybrid Honda CR-V, my wife and I wasted far too much time flipping through the vehicle’s paper manual to try to figure out what some random button on the dashboard did. Later, after downloading a PDF of the manual from Honda’s website and then uploading it into NotebookLM, it took me all of 10 seconds to reach the same answer—simply by asking. Lesson learned.

3. An interactive car maintenance journal

While we’re thinking about cars, every time you go to the mechanic, snap a photo of the service receipt and upload it into a NotebookLM notebook created specifically for that one vehicle. You can make it even more useful by uploading the same owner’s manual you found a moment ago into that notebook, too. Doing so will give you two very practical benefits: First, anytime a question comes up about what work you’ve had done on the vehicle or when a certain repair took place, you can just pull up that notebook and ask. Second, with the manual and its instructions there alongside all of your history, you can bring the two sources of info together to ask NotebookLM targeted questions that take the manufacturer’s guidance and your past services into consideration—like, for instance, when you should rotate your tires next or what other possibilities you should be thinking about at your next oil change appointment. And on a related note . . .

4. An interactive home maintenance journal

Start a NotebookLM notebook for your house, then upload every invoice and estimate you get for a home repair as well as every receipt from a new appliance purchase. Whenever you next need to know when, exactly, your roof was replaced or in what year you got your current furnace—or even what brand and model it is—you’ll have a single simple place to ask and get answers. And that’s a heck of a lot easier than having an overflowing folder of assorted old papers to sift through in every such scenario.

5. Your personal company wiki

Does the company you run, or maybe just work for, have more handbook-type info than any reasonably sane human could possibly ingest and remember? If so, use a dedicated NotebookLM notebook to store all of it—guides, documents, operating procedures, even lists of contacts for different departments and purposes. From that moment forward, when a question comes up about how something is supposed to work or whom you’re supposed to contact for some particular purpose, your answer will never be more than a single quick question away.

6. Your instruction-expert wizard

Why limit yourself to work, maintenance, and appliances? With anything that has an instruction manual involved, dump a digital version of the document into its own NotebookLM notebook—even for board games. The next time any kind of question comes up related to those instructions, you’ve got a fast and effective way to get answers.

7. A contract deposit box

Whether you’re a freelancer juggling new contracts every month, an employee signing a new agreement each year, or an employer asking dozens of workers to sign your ever-evolving documents, creating a centralized repository for all your contracts can be a real time-saver in the future. Need to remember when you last signed something with a specific person or provider? Not sure what the terms of some agreement required—or when a particular document expires? Whatever the case may be, once the info’s all in NotebookLM, you’ve always got an easy place to ask—and let the system find the answer for you.

8. Your meeting memory

Provided you’re using something to record important meetings—be it a general-purpose AI-powered note-taker, a video-call-specific summarizer, or an app designed to take notes during regular audio calls—that history will be much more useful if you bring it over to a NotebookLM notebook. With such a system in place, you can simply go to NotebookLM and ask targeted questions about any of your past meetings instead of having to dig through the transcripts individually.

9. An interview inquiry station

While we’re thinking about transcripts, if you conduct any kind of interviews—with job candidates, as a journalist, or for any other purpose—take each transcript and create a NotebookLM notebook specifically for it. (Or, if you have a group of related interviews, put them all in one notebook.) Upload either the audio or the text, depending on what’s available, and then take the opportunity to ask NotebookLM questions about your conversation—be they specific (like what the person said about some particular topic) or broad (like asking NotebookLM what interesting quotes came up during the interview that you might have missed). You’ll obviously still want to refer to the full transcript at times—and to double-check the accuracy of any quote you’re actually citing anywhere—but it can be a helpful way to find something fast when you can’t remember the exact words involved or to stumble onto something you might have otherwise glossed over.

10. An intelligent feedback interpreter

If your business relies on any manner of feedback to guide its operations, do yourself a favor and create a NotebookLM notebook where you can upload those results—as spreadsheets or in whatever form they take. From reviews to survey responses, you’ll then be able to ask NotebookLM to help summarize the key themes and trends, pick out recurring positive or critical responses, and even find particularly memorable quotes for potential testimonial use.

11. Your performance review reviewer

For anyone managing employee performance, NotebookLM can be a major asset. Create an individual notebook for each employee and place all their performance reviews there—then, when the time comes for the next assessment, you’ll have an easy way to revisit past highlights to identify trends and provide context for comparison.

12. A financial reality checker

Provided you’re comfortable with the notion, NotebookLM can turn up some really interesting insights by analyzing things like your tax returns, bank statements, and credit card statements over the years. (For what it’s worth, Google is explicit about the fact that it doesn’t in any way access, share, or use any data uploaded into NotebookLM—even for AI model training.) With that type of info in its own dedicated notebook, you can ask NotebookLM to give you an overview of your spending habits, to identify areas where you could cut back or potentially be eligible for additional tax benefits, and to surface other such pointers that you can then investigate more thoroughly on your own or with an accounting professional.

13. An audio-video reading resource

Ever find yourself running into interesting-looking videos or podcasts and just not having the time or inclination to sit through them in their entirety? Make yourself a NotebookLM notebook called “Audio-Video,” then drop a link to any YouTube video or audio clip you encounter into that area. You can then ask NotebookLM for the high points—or for any specific info you’re looking to find—from any of the clips individually or even collectively.

14. An elevated reading list

NotebookLM can be a fantastic way to collect links you want to read for later revisiting. With a notebook called “Reading List,” you can see the entire text of any article whose URL you add in, right then and there and in a stripped-down and simplified format—and you can ask NotebookLM for information about, or even summaries of, any or all of your saved links, too: What was that article I saved from New York a while back? Give me the most important takeaways from that Fast Company piece I saved on privacy the other day. I’m never going to catch up with everything I saved this week. Show me a summary of all the articles I added over the past seven days. You get the idea. And finally . . .

15. Your calendar companion

Get a whole new level of insight into how you’re spending your time and what’s actually gone down on your calendar by exporting your complete calendar history and then importing it into NotebookLM—where you can create a custom notebook to interact with it. In Google Calendar, this is as easy as clicking the gear-shaped icon in the desktop website’s upper-right corner, selecting “Settings,” then clicking “Import & export” in the left-of-screen side menu and clicking the “Export” option. You’ll then need to take the resulting .ics file and convert it into plain text—which you can do in a matter of seconds with a free online conversion tool, or with a few lines of code (see the sketch below). Finally, with the resulting .txt file in a NotebookLM notebook, try asking questions about anything from how many meetings you attended over a given time period to how many hours you spent at the doctor’s office last year. You can also ask for specific info such as how often, on average, you get haircuts or how long it’s been since you last had a job interview. You might be surprised at the types of insights you uncover with your calendar data in NotebookLM’s metaphorical hands.
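If you’d rather do that .ics conversion locally than trust a random website with your calendar, here’s a minimal sketch in Python using the icalendar package (pip install icalendar); the file names are placeholders, and any format that puts one event per line should work as a NotebookLM source.

# ics_to_txt.py — convert an exported calendar .ics file to plain text
# so it can be added to a NotebookLM notebook. File names are placeholders.
from icalendar import Calendar

with open("calendar-export.ics", "rb") as f:
    cal = Calendar.from_ical(f.read())

lines = []
for event in cal.walk("VEVENT"):
    start = event.get("DTSTART")
    summary = event.get("SUMMARY")
    if start and summary:
        # DTSTART decodes to a date or datetime; str() keeps it readable
        lines.append(f"{start.dt} - {summary}")

with open("calendar-export.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(lines))

print(f"Wrote {len(lines)} events to calendar-export.txt")

The possibilities are practically endless—and all you’ve gotta do is ask. BY FAST COMPANY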

Friday, February 27, 2026

Why Google Gemini Is Emerging as a Hot New AI Tool for Startups

For years, OpenAI held the default position in most startups’ tech stacks. It was the tool founders reached for when they needed a language model, a voice engine, or a general-purpose AI backbone. But for some startups, Google’s Gemini AI has emerged as a newly preferred productivity tool, and their reasons for adopting the tech go well beyond the technology itself.

Google is in the midst of an aggressive push to convince startups that its AI solutions are superior. Leading that charge is Darren Mowry, head of Google Cloud’s global startup team. Mowry confirms that yes, Gemini use is rising among startups, and it’s resulting in new business for Google Cloud, which is the only way for businesses to use the Gemini API. Instead of just automatically selecting Amazon Web Services as their cloud provider, Mowry says, new startups are now choosing Google Cloud in part so they can get access to Gemini.

Google has always been central to the AI business; in 2017, the company released a seminal AI research paper called “Attention Is All You Need,” which introduced the “transformer” architecture that makes modern AI models possible. But up until last year, the company lagged behind its competitors when it came to business adoption. Gemini, originally named Bard when it launched in 2023, quickly developed a reputation for hallucinating facts. Remember when it recommended that people put glue on pizza? That changed in April 2025, when Google released Gemini 2.5 Flash, a model that handily beat OpenAI across a number of benchmarks and, according to Mowry, ignited a wave of interest in Google’s AI offerings that has only grown with the release of subsequent models.

One of the factors that differentiates Google from its competitors is that it offers a fully vertically integrated solution, in which the company can handle each part of the tech stack. Not only can startups choose from a wide selection of Google-made and external models on Google Cloud, Mowry says, but the company can also provide those startups with technical assistance to help make sure they’re getting the most out of both the models and the Google-made chips they run on. According to Mowry, this vertical integration “shrinks down the time” it takes for founders to build.

Some founders are finding that Gemini is a useful way for non-technical employees to enjoy the benefits that software developers have gotten from agentic coding tools like Claude Code. Aakash Shah, founder and CEO of allergy care startup Wyndly, says that while his engineers have gravitated toward Anthropic, his operations team wanted to use Gemini in the applications they’re already comfortable with, like Google Docs and Gmail. A common use case? Asking Gemini “who did I email on such-and-such day?” Shah says that everyone at his company now has Gemini, enabling them to chat with Gemini across the entire Google Workspace suite, including Gmail, Docs, Sheets, Meet, and NotebookLM, Google’s app that turns documents into audio podcasts. “I’m trying to get everyone to be AI-first,” Shah says, “and part of that is helping them use it where they already are instead of forcing it on them.”

Sheltered International, a customs broker and freight forwarder that primarily deals with international imports, is currently using Gemini to help speed up the process of filling out customs paperwork.
Founder Andrew Ciccarone says that when a shipment comes in, his company is responsible for verifying its commercial invoice data and ensuring that it’s marked with the correct Harmonized Tariff Schedule (HTS) code. According to Ciccarone, Sheltered International has started using a fine-tuned Gemini model to extract relevant information from the commercial invoice data (which often comes in the form of a PDF), validate it, and reformat it into an Excel spreadsheet (a rough sketch of what such a call can look like appears after this article). The Gemini models are considered to have state-of-the-art computer vision, enabling them to examine images and documents in incredibly granular detail. “What the AI can do is just give us a huge leap forward before the customs broker comes in to ensure everything is classified correctly,” says Ciccarone, adding that for a small operation handling the complexity of international trade documentation, Gemini has streamlined a process that used to be painfully manual. When the process was done entirely by humans, it required hours of manual scanning through lengthy, unformatted documents. Still, Ciccarone admits that the company isn’t saving that much time yet, because employees still need to verify that the AI’s output is accurate. But as the fine-tuned Gemini model improves, he expects to see a significant increase in productivity. Companies are also integrating Gemini’s machine vision abilities into their actual products. Take Validity, a startup that sells itself as an all-in-one solution for the entire email marketing process. Validity chief technology officer Matt Gore says that the company’s newest product, a platform called Validity Engage, has been largely built around Gemini’s capabilities. Engage gives marketers access to four purpose-built AI agents that can analyze, optimize, and reformat emails according to internal campaign style guides. Using Gemini 3 Pro, Validity can now detect and fix granular visual details in emails (like whether a certain font matches a brand’s approved style guide) that no model could reliably catch a year ago. For instance, Gore says, emails can often appear illegible to computers and phones that are set to dark mode; with Validity Engage, marketers can guarantee that their email will be visible to everyone who receives it. Gore says that Validity decided to “hitch our wagon to Gemini” following extensive testing. When developing the new feature, Gore used an orchestration tool called Mastra.ai to compare how various models approached roughly 150 common email issues, taking note of each model’s cost and speed. In the testing, Gore says, Gemini 3 Pro stood out as being “just leaps and bounds ahead of others in terms of computer vision.” He says that the Gemini 3 models are particularly good at identifying the “bounding box” of the email—basically the frame containing the email’s content. Beyond computer vision and text, Gemini’s speech capabilities have been a major selling point for founders. David Yang, founder of solopreneur-focused AI receptionist company Newo, is using Gemini to provide his virtual receptionists with voices. Yang founded Newo to help solo founders capture more inbound leads by giving everyone access to an always-on receptionist who can answer a phone call at any hour of the day. But for Newo to work, Yang needed voice models that have extremely low latency and high emotional intelligence. Originally, the company’s AI receptionists were powered by OpenAI’s text-to-speech and speech-to-text models, but the lag between asking a question and hearing an answer was too long.
Now, Yang says that Newo uses Gemini 2.5 Flash Native Audio, a recently released model that can understand and generate audio in real time. Not only is the new model incredibly fast, Yang says, it can also understand emotional intent, an important data point that’s usually lost with more traditional speech-to-text transcription models. As part of his push to bring startups into the Google ecosystem, Mowry says his team is currently hiring engineers and former founders to staff up a “founder advocacy” group. These employees’ “sole purpose in life” will be “to wake up and meet founders that have really big problems,” he says, and “help them move from ideation into actually getting things built.” The goal is to “catch these cohorts of startups early, give them a little bit of credit assistance, engineering assistance, and help them get off the ground.” This soup-to-nuts approach is helping Google win startup business, and positioning the company as the new default AI partner for the next generation of businesses. BY BEN SHERRY @BENLUCASSHERRY
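For the curious, here’s roughly what the document-extraction pattern Ciccarone describes looks like with Google’s google-genai Python SDK. This is an illustrative sketch only: the model name, prompt, and filename are stand-ins, and Sheltered International’s actual fine-tuned pipeline isn’t public:

    # invoice_extract.py: ask a Gemini model to pull structured data from a PDF invoice
    from google import genai

    client = genai.Client()  # reads the GEMINI_API_KEY environment variable

    # Upload the commercial invoice (often a scanned PDF) through the Files API
    invoice = client.files.upload(file="commercial_invoice.pdf")

    response = client.models.generate_content(
        model="gemini-2.5-flash",  # placeholder; production use would point at a fine-tuned model
        contents=[
            invoice,
            "Extract the shipper, the consignee, and each line item's description, "
            "quantity, declared value, and candidate HTS code. Return CSV only.",
        ],
    )
    print(response.text)  # CSV text, ready to load into a spreadsheet

As the article notes, a human reviewer still has to check the output against the original document before anything is filed.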

Wednesday, February 25, 2026

3 Ways Digital Tools and AI Help Simplify Tax Season

Tax season used to be my least favorite part of being a creative small business owner. It always felt overwhelming. As the founder of Mochi Kids and Mochi Play Store, I juggle designing and producing children’s clothing, managing inventory and wholesale orders, and running a brick-and-mortar store. When tax time hits, all the information needed to run my business has to be accurate, organized, and easy to find. To make the tax process more manageable, I’ve started relying on digital tools like Adobe Acrobat to stay organized and prepared. Here’s how I approach tax prep, step by step, to simplify everything. Step 1: Digitizing paperwork before it piles up Paper forms used to be my downfall. I’d toss them into a folder and promise myself I’d deal with them later. By tax season, “later” meant hours of sorting and searching. Now, I use Adobe Scan to digitize receipts, tax forms, and donation confirmations as soon as I receive them. I simply snap a photo with my phone, and the app converts it into a clear, searchable PDF. I name each file and save it to a folder labeled for the current tax year. Later, I can search documents by vendor, keyword, or dollar amount. Step 2: Organizing documents by category Once everything is digitized, I focus on organizing. Instead of keeping dozens of separate files, I use Acrobat’s Combine and Organize tools to merge related documents into a single file and sort them by category. For example, I combine PDFs for charitable contributions, income, expenses, and deductions. Acrobat makes it easy to reorder pages, delete duplicates, and add bookmarks so I can quickly find what I need. This is especially helpful when preparing and double-checking documents for my accountant before filing. Step 3: Protecting and signing your tax documents with AI Tax documents contain some very sensitive information, so security is non-negotiable. Before sharing files, I password-protect PDFs and limit access to only the people who need them. I can use Protect & Sign with AI Assistant to password-protect sensitive information or sign the documents for me (a scripted alternative is sketched after this article). Taking a few extra seconds to secure files gives me peace of mind. A calmer way to approach tax season Tax season may never be exciting, but it doesn’t have to be chaotic. By scanning documents early, organizing them thoughtfully, protecting sensitive information, and handling signatures digitally, I’ve made the process far more manageable. This allows me to focus on the fun parts of running a creative small business. Why I trust Adobe Acrobat for my tax prep Adobe Acrobat has been a game changer for me. It helps me stay organized, save time, and feel confident that my sensitive information is secure. Whether you’re an individual filer, freelancer, or small business owner, Acrobat has the tools to make tax season less stressful. From digitizing paper documents to organizing files and securing sensitive information, it’s the ultimate tax prep partner. Here’s to a stress-free tax season! BY AMANDA STEWART FOR ADOBE
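Acrobat handles the protection step in its UI; for readers who script their workflows, the same idea can be approximated in Python with the third-party pypdf library. This is my own illustration rather than part of the author’s Acrobat process, and the filenames and password are placeholders:

    # protect_pdf.py: add a password to a tax PDF before sharing it
    from pypdf import PdfReader, PdfWriter  # pip install pypdf (AES-256 also needs cryptography)

    reader = PdfReader("tax_documents.pdf")
    writer = PdfWriter(clone_from=reader)  # copy every page into a new writer

    # Require a password to open the file, using AES-256 encryption
    writer.encrypt(user_password="use-a-strong-password", algorithm="AES-256")

    with open("tax_documents_protected.pdf", "wb") as f:
        writer.write(f)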

Monday, February 23, 2026

This Single ChatGPT Prompt Can Do Hours of Market Research in Minutes—Here’s How

Market research can be a slow, fragmented, and difficult process, often involving tedious internet searches, questionable data sources, and time-consuming manual synthesis. This makes it a great candidate for some assistance from AI. What’s more, an update to a popular feature on ChatGPT has made it even better at doing this kind of work. Imagine that you have a potential business idea but still need to validate how viable it actually is, identify primary competitors in your market, and develop an ideal customer persona. Instead of spending hours collating data, explains Dan McCarthy, an associate professor of marketing at the University of Maryland, you can use Deep Research, a ChatGPT feature that directs an AI agent to develop a comprehensive, well-cited report on any topic. Last week, OpenAI upgraded Deep Research with some new abilities. The feature now runs on GPT-5.2, one of the company’s most recent models (previously it ran on a much older o3 model), and can now prioritize specific websites in its search process. Deep Research is available for all paid ChatGPT users. Here’s how to use it to get some thorough market research done quickly. Step 1: Get your prompt right To test out how this feature could help with market research, I pretended that I wanted to start a digital transformation firm based in Denver with a focus on upgrading bars with mobile, bar-to-table ordering capabilities. All I needed to do in order to get started was click the plus button next to the text box, select More, then Deep Research, and enter a prompt. This prompt will determine the information that ChatGPT prioritizes in its search, so it helps to be verbose. If you need help developing a lengthy prompt, try using ChatGPT to help write it. McCarthy, who uses AI tools extensively, says that an easy way to develop a comprehensive prompt is to activate the chatbot’s voice mode and simply have a conversation with it. Once you’ve explained what you want, McCarthy says, you can ask ChatGPT, “Given all this that I’m telling you, what do you think would be the best thing that I should even be asking you?” That should help clear up any blind spots you might’ve missed. According to McCarthy, this method should produce a solid prompt that you can give to the Deep Research agent. (A scriptable version of this prompt-expansion step is sketched at the end of this article.) When I asked ChatGPT to help expand my prompt, the platform generated a 673-word result. This prompt (which you can view here) defined the agent as a market research analyst and gave it objectives to determine the business idea’s viability, map out the competition, and define my ideal customer’s persona. Additionally, it provided details on the scope of the research, and information on how the agent should format its report. I also used ChatGPT to develop a list of specific websites for the Deep Research agent to prioritize in its search. Step 2: Start the research I entered my ChatGPT-created prompt, selected the Deep Research feature, and pressed return. Before getting to work, the agent broke down its objectives into the following bullet points: Collect primary vendor docs and pricing pages starting with user-preferred sites. Survey industry, local Denver sources, and hospitality reports for market context. Compile POS integration lists, local competitors, and implementation partners in Denver. Analyze demand, model ROI scenarios, and estimate Denver bar counts and adoption rates. Draft recommendations, ICP personas, GTM plan, and cite sources with confidence ratings.
Over the next 21 minutes, the agent searched through hundreds of web pages. It found liquor license databases, census information, and data regarding competitors in Denver’s hospitality-focused digital transformation market. It compiled all this information into a multi-section report. Step 3: Read the report That report (which you can view here) ended up being roughly 4,000 words. It included an overview of the market, identified customer pain points, and listed out my potential competitors. The report also included recommendations for how to position my business, strategies to break into the Denver hospitality scene, and even identified a small business that would likely be my direct competitor: a Denver-based POS integrator called Megabite. ChatGPT found that while my business idea had potential, it wouldn’t fully meet the needs of Denver-based bar owners, who have reported that bar-to-table ordering can actually lead to fewer sales and tips. Instead, the report suggested, I should consider a system that sits on top of popular POS platforms, in which diners don’t need to pay for every new drink they order, and can instead open a digital tab. What the expert thinks of the result McCarthy told me he was impressed by the report that Deep Research produced. In particular, he was pleasantly surprised by the agent’s cleverness in using liquor licenses to get a sense of the market size, and its thoughtfulness in calling out disruption to bar culture as a potential blocker to the business. But the report wasn’t perfect. McCarthy said much of what was included was unnecessary or needlessly complex. An easy prompt to fix this? “Just tell it, ‘Explain it to me like I’m an idiot.’” McCarthy adds, “I do that all the time.” He says that a solid market research report should also answer questions regarding the scope of adoption and how often repeat purchasing is expected. McCarthy also says that users should direct the Deep Research agent to be very upfront about the data it attempted to get but couldn’t. Many websites block AI agents from engaging with their content to prevent data scraping, which can hinder the research process. By telling your agent to list out the sites that it couldn’t access, you can manually obtain that data and add it to the analysis. Our bar-to-table digital transformation firm will have to remain a pipe dream for now, but it’s clear that AI has made the process of taking an idea from zero to one easier and faster than ever. If you have an idea for a new business or are planning on an expansion or pivot in your current business, consider giving Deep Research a spin. It might unearth something that makes you think in a different way. BY BEN SHERRY @BENLUCASSHERRY
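Deep Research itself runs inside ChatGPT, but the prompt-expansion step from Step 1 is simple to reproduce programmatically. Here’s a minimal sketch using OpenAI’s Python SDK; the model name and wording are my own illustrative choices, not anything McCarthy or OpenAI prescribe:

    # expand_prompt.py: turn a one-line business idea into a detailed research prompt
    from openai import OpenAI

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    idea = ("A Denver-based digital transformation firm that upgrades bars "
            "with mobile, bar-to-table ordering.")

    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any capable chat model will do
        messages=[
            {"role": "system",
             "content": "You turn rough business ideas into thorough market-research prompts."},
            {"role": "user",
             "content": "Expand this idea into a research prompt that defines the agent's "
                        "role, objectives (viability, competitive map, ideal customer "
                        "persona), research scope, and report format:\n\n" + idea},
        ],
    )
    print(response.choices[0].message.content)

The printed result is the long, structured prompt you then paste into Deep Research.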

Friday, February 20, 2026

China’s latest AI is so good it’s spooked Hollywood. Will its tech sector pump the brakes?

Tom Cruise and Brad Pitt tussle in hand-to-hand combat on a rubble-strewn rooftop; Donald Trump takes on kung-fu fighters in a bamboo grove; Kanye West dances through a Chinese imperial palace while singing in Mandarin. Over the past week, a slew of cinematic videos of celebrities and characters in absurd situations have gone viral online, with one commonality: they were created using a new artificial intelligence tool from Chinese developer ByteDance, sparking anxiety over the fast-evolving capabilities of AI. The new model, named Seedance 2.0, is among the most advanced of its kind and has quickly drawn praise for its ease of use and the realistic nature of the videos it can generate in minutes. But soon after the release, media behemoths Paramount and Disney sent cease-and-desist letters to ByteDance, the company most famous for developing the video-sharing app TikTok, accusing it of infringing upon their intellectual property. Hollywood’s premier trade organization, the Motion Picture Association, and labor union SAG-AFTRA also condemned the company for unauthorized use of US-copyrighted works. ByteDance responded with a statement saying it would implement better safeguards to protect intellectual property. Seedance 2.0 has quickly become the most controversial model in a wave of them released by Chinese technology companies this year, as the competition to dominate the AI industry heats up. China’s government has made advanced tech a key tenet of its national development strategy. In a televised Lunar New Year celebration this week, the country’s latest humanoid robots stole the show by performing martial arts, spin kicks and back flips. Such improvements are often met with unease, particularly in the US, China’s chief technological and political rival, in a spiral of one-upmanship redolent of the 20th-century “Space Race” with the Soviet Union. “There’s a kind of nationalist fervor around who’s going to ‘win’ the space race of AI,” said Ramesh Srinivasan, a professor of information studies at the University of California, Los Angeles. “That is part of what we are seeing play out again and again and again when it comes to this news as it breaks.” Here’s why the latest technology from ByteDance has rattled the world. What’s so scary about Seedance 2.0? The AI video generation model, while still not publicly available to everyone, was hailed by many as the most sophisticated of its kind to date, using images, audio, video and text prompts to quickly churn out short scenes with polished characters and motion editing control at lower cost. “My glass half empty view is that Hollywood is about to be revolutionized/decimated,” writer and producer Rhett Reese, who worked on the Deadpool movie franchise, wrote on X after seeing the video of Cruise and Pitt. One Chinese tech blogger using Seedance 2.0 said it was so advanced that it was able to generate realistic audio of his voice based solely on an image of him, raising fears over deepfakes and privacy. Afterwards, ByteDance rolled back that feature and introduced verification requirements for users who want to create digital avatars with their own images and audio, according to Chinese media. Rogier Creemers, an assistant professor at Leiden University in the Netherlands, who researches China’s domestic tech policy, said part of the concern stems from the rapid rate at which Chinese companies have released new iterations of AI technology this year.
That has also put China on the back foot in assessing the potential negative impacts of each improvement, he said. “The more capable these apps become, automatically, the more potentially harmful they become,” said Creemers. “It’s a little bit like a car. If you build a car that can drive faster, that gets you where you need to be a lot more quickly, but it also means that you can crash faster.” What’s being done to ease concerns? After outcry from Hollywood, ByteDance said in a statement that it respects intellectual property rights and will strengthen safeguards against the unauthorized use of intellectual property and likenesses on its platform, though it did not specify how. User complaints prompted the recent ByteDance rollback and have also forced popular Chinese Instagram-like app RedNote to restrict any AI-made content that has not been properly labeled. And the arrival of Seedance 2.0 coincides with a tightening of regulations for AI content in China. China’s domestic regulation of AI surpasses the efforts of most other countries in the world, in part because of its longstanding censorship apparatus. Last week, the Cyberspace Administration of China said it was cracking down on unlabeled AI-generated content, penalizing more than 13,000 accounts and removing hundreds of thousands of posts. However, the restrictions on AI-generated content on the Chinese internet are often unevenly enforced, Nick Corvino wrote in ChinaTalk, a China-focused newsletter. He attributed the problem in part to difficulties policing content across different apps, as well as incentives for tech companies to encourage user content. “With Chinese social media platforms locked in fierce competition, both with each other and the Western market, none wants to be the strictest enforcer while others let content flow freely,” he said in a post following the launch of Seedance 2.0. What does this mean for China’s AI industry? According to analysts, China is walking a fine line between encouraging domestic development of AI models and maintaining strict controls on how those models are used. “People in the AI business would always say what the Chinese government is doing is slowing down the development of AI,” said Creemers of Leiden University. “Obviously a content control system like the Chinese that essentially limits what you can produce, that’s never fun.” Pressure to stop using certain images or data, from US media giants or other sources, may also impact efforts to refine AI. Disney accused ByteDance of illegally using its IP to train Seedance 2.0, but recently struck a deal with US company OpenAI to give Sora, OpenAI’s video generation model and Seedance competitor, access to trademarked characters like Mickey and Minnie Mouse. “These agreements have everything to do with what kind of data are they going to get access to that they would not have otherwise, or that their competitors would not have?” said Srinivasan from UCLA. “There’s a high probability that the Sora products could be more refined and more advanced, if the data are better suited for the models to learn from.” At the same time, restrictions on how AI can be used or trained could also spur greater innovation, he said, noting how Chinese company DeepSeek, blessed with a much smaller budget than the industry leaders, built a competitive AI-powered chatbot.
“When it comes to Chinese breakthroughs in AI, the DeepSeek revelation was so important because they showed that there are other ways of training language models in ways that are more economical,” he said. By Stephanie Yang

Wednesday, February 18, 2026

AI Promised to Save Time. Researchers Find It’s Doing the Opposite

Artificial intelligence boosters often promise the tech will lead to a reduced workload. AI would draft documents, synthesize information, and debug code so employees can focus on higher-value tasks. But according to recent findings, that promise is misleading. An ongoing study, published in the Harvard Business Review, joins a growing body of evidence that AI isn’t reducing workloads at all. Instead, it appears to be intensifying them. Researchers spent eight months examining how generative AI reshaped work habits at a U.S.-based technology company with roughly 200 employees. They found that after adopting AI tools, workers moved faster, took on a wider range of tasks, and extended their work into more hours of the day, even if no one asked them to do so. Importantly, the company never required employees to use AI. It simply offered subscriptions to commercially available tools and left adoption up to individuals. Still, many workers embraced the technology enthusiastically because AI made “doing more” feel easier and more rewarding, the researchers said. That enthusiasm, however, came with unintended consequences. Over time, workloads quietly expanded to overwhelming levels. The gradual, often unnoticed, creep in responsibilities led to cognitive fatigue, burnout, and weaker decision making. While AI can produce an initial productivity surge, the researchers warn that it may ultimately contribute to lower-quality work and unsustainable pressure. To track these changes, the researchers observed the company in person two days a week, monitored internal communication channels, and conducted more than 40 in-depth interviews across engineering, product, design, research, and operations. They found that job boundaries began to blur. Employees increasingly took on tasks that previously belonged to other teams, using AI to fill knowledge gaps. Product managers and designers started writing code. Researchers started handling engineering tasks. In many cases, work that might once have justified additional hires was simply absorbed by existing staff with the help of AI. For engineers, the shift created a different kind of burden. Rather than saving time, they spent more hours reviewing, correcting, and guiding AI-generated work produced by colleagues. What had once been straightforward code review expanded into ongoing coaching and cleanup of flawed outputs. The researchers described a feedback loop: AI sped up certain tasks, which raised expectations for speed. Higher expectations encouraged greater reliance on AI, and that, in turn, widened both the scope and volume of work employees attempted. The result was more activity, not less. Many participants said that while they felt more productive, they did not feel any less busy. Some actually felt busier than before AI arrived. “You had thought that maybe, oh, because you could be more productive with AI, then you save some time, you can work less,” one engineer told the Harvard Business Review. “But then, really, you don’t work less. You just work the same amount or even more.” What looks like a productivity breakthrough, the researchers concluded, can actually mask silent workload creep. And overwork, they warn, can erode judgment, increase errors, and make it harder for organizations to distinguish genuine efficiency gains from unsustainable intensity.
To counter these risks, the researchers proposed a protective approach they call “AI practice,” a set of intentional norms and routines that define how AI should be used at work and, crucially, when to stop. Without clear boundaries, they caution, AI makes it easier to do more but harder to slow down. BY LEILA SHERIDAN

Tuesday, February 17, 2026

What Is AI.com? The $70 Million Domain Being Called ‘the Absolute Peak of the AI Bubble’

On Super Bowl Sunday, the most talked-about advertisement was for a product that hadn’t even launched yet. During the game’s fourth quarter, a 30-second commercial aired advertising something called “AI.com,” ending with a call to “claim your handle” along with three usernames: Mark, Sam, and Elon. Missing from the commercial? Any information about what AI.com actually does. But the mysterious teaser worked; web searches for “What is AI.com” exploded. According to EDO, a company that helps businesses measure the impact of advertisements, AI.com was the top-performing ad of the night, with 9.1 times as much engagement as the average Super Bowl ad. But when interested people rushed to the website, they found an error message waiting for them. The company’s website had immediately crashed. What is AI.com, anyway? AI.com was not co-founded by Mark Zuckerberg, Sam Altman, and Elon Musk. In fact, they have nothing to do with the company at all. The founder is actually Kris Marszalek, who previously co-founded Crypto.com. Financial Times reported that in April 2025, Marszalek paid $70 million to obtain the AI.com domain, which the publication says is the most ever spent on a domain, far more than the $12 million Marszalek spent to acquire Crypto.com in 2018. Marszalek says he is currently the CEO of both companies. What does AI.com actually do? On its now-functioning website, the company describes itself as a platform offering access to a “private, personal AI agent that doesn’t just answer questions but actually operates on the user’s behalf — organizing work, sending messages, executing actions across apps, building projects, and more.” The company wrote that the agent will soon be able to help users “trade stocks, automate workflows, organize and execute daily tasks with their calendar, or even update their online dating profile.” Sounds impressive, but it turns out that the tech powering AI.com is far from proprietary. In an article posted to Marszalek’s personal X account, the founder wrote that “AI.com is the world’s first easy-to-use and secure implementation of OpenClaw, the open-source agent framework that went viral two weeks ago.” What is OpenClaw? OpenClaw is essentially an agent that has full access to your computer’s files, and it has indeed become an instant sensation in the tech world. But the current process of setting the agent up is highly technical and risky. Marszalek says that AI.com has made OpenClaw “easy to use without any technical skills, while hardening security to keep your data safe.” Basically, this means that AI.com is positioning itself as a consumer-friendly wrapper around a powerful, developer-focused tool. OpenClaw creator Peter Steinberger posted that he had not heard about AI.com until the ad aired, to which Marszalek responded, “Let’s chat.” How do you sign up for AI.com? If you go to AI.com, you’ll be asked to link your Google account to the platform in order to choose a handle for both yourself and your agent. After you’ve selected handles, you’ll need to connect a credit or debit card to your account, though the company says you won’t be charged. Once your card has been processed, you’ll receive a notification that “demand is extremely high right now, so generation is queued. We’ll notify you the moment your AI is ready to activate.” It’s unclear if any users have received their agent yet. 
In a popular X post, one user criticized the website, calling it “the absolute peak of the AI bubble.” Steinberger quoted that post, writing “Guess I’m flattered?” BY BEN SHERRY @BENLUCASSHERRY

Wednesday, February 11, 2026

AI Power Users Are Rapidly Outpacing Their Peers. Here’s What They’re Doing Differently

Last November, consulting firm EY surveyed 15,000 employees across 29 countries about how they use AI at work. The results should worry every founder: 88 percent of workers now use AI tools daily, but only 5 percent qualify as “advanced users” who’ve learned to extract real value. That 5 percent? They’re gaining an extra day and a half of productivity every single week. The other 95 percent are stuck using AI for basic search and document summarization, essentially treating a Ferrari like a golf cart. When OpenAI released its State of Enterprise AI report in December, it confirmed the same pattern. Frontier workers—those in the 95th percentile—send six times more prompts to AI tools like ChatGPT than their median colleagues. For coding tasks, that multiple explodes to 17x. If these AI tools are identical and access is universal, why are the results so wildly different for workers around the world? And what separates power users from everyone else? Ofer Klein, CEO of Reco, a SaaS security platform that discovers and secures AI, apps, and agents across enterprise organizations, offers some insights into what sets the power users apart. 1. They experiment while others dabble High performers treat AI tools like junior colleagues they’re training. They iterate on prompts rather than giving up after one mediocre response. They’ve moved beyond one-off queries to building reusable prompt libraries and workflows. The rest of your team tried AI once or twice, got underwhelming results, and concluded it wasn’t worth the effort. What they don’t realize, however, is that AI requires iteration. The first response is rarely the best response. Power users ask follow-up questions, refine their prompts, and teach the AI their preferences over time. 2. They match tools to tasks Power users typically maintain what Klein calls a “barbell strategy”—deep mastery of one or two primary tools plus five to eight specialized AI applications they rotate through depending on the task. “They’re not trying every new AI that launches, but they’re not dogmatically loyal to one platform either,” Klein explains. “They’ve developed intuition about which AI is best for what.” They might use ChatGPT for brainstorming, Claude for analysis, and Midjourney for visuals. Most employees, by contrast, force one tool to handle everything. When it inevitably underperforms on tasks it wasn’t designed for, they blame AI rather than their approach. 3. They think about work differently It’s easy to assume that the biggest behavioral difference between these power users and everyone else is technical skill. But, interestingly, it’s not. Rather, it’s how power users think about tasks. They break projects into discrete steps: research, outline, first draft, and refinement. Then they deploy AI strategically at each stage. Instead of asking AI to “write a report,” they ask it to summarize research, suggest an outline, draft specific sections, then refine tone. They understand where AI adds value and where human judgment matters. “The highest performers spend more time on strategic work because AI handles the grunt work,” Klein says. “They use AI to augment their expertise, not replace thinking.” The hidden cost Why does all of this matter? Here’s the math that should worry you: OpenAI’s data shows workers using AI effectively save 40-60 minutes daily. In a 100-person company where 60 employees barely touch AI, you’re losing 40-60 hours of productivity every single day (60 employees times 40-60 forgone minutes each).
Over a year, that’s 10,000+ hours—equivalent to five full-time employees’ worth of work you’re paying for but not getting (the arithmetic is worked out after this article). Meanwhile, your competitors’ power users are compounding that advantage daily. What you can do about it Klein recommends tracking time saved, not just usage frequency. Someone using AI 50 times daily for spell-checking differs fundamentally from someone using it five times to restructure a client proposal. In addition, run an “AI show and tell” where employees demonstrate one workflow where AI saves them meaningful time. You’ll quickly identify who’s truly leveraging these tools versus who’s dabbling. Then, create small cross-functional “AI councils” of five to six employees who meet monthly to share workflows. That should cascade into properly training employees to use these tools the right way. “Only one-third of employees say they have been properly trained,” a BCG survey found. That’s an opportunity forward-thinking leaders can tap into. But don’t just replicate tools; replicate mindset. Giving everyone ChatGPT Plus doesn’t close the gap. The differentiator is teaching people to think in terms of “what can I delegate to AI?” rather than “what can AI do?” The uncomfortable truth, according to BCG’s survey, is that this gap is widest among front-line employees. While more than three-quarters of leaders and managers use AI several times a week, adoption among front-line workers has stalled at just 51 percent. That’s not just a productivity problem. It’s a competitive threat that compounds every quarter you ignore it. Your 5 percent are already working like they have an extra team member. The question is whether you’ll help the other 95 percent catch up before your competitors do. BY KOLAWOLE ADEBAYO, COLUMNIST
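For anyone who wants to check that math, here it is as a few lines of Python; the 250 workdays and the 2,000-hour full-time year are my assumptions, not figures from the surveys:

    # hidden_cost.py: the back-of-the-envelope workload math behind the figures above
    non_users = 60                          # employees barely touching AI, out of 100
    saved_low, saved_high = 40, 60          # minutes saved per effective user per day
    workdays = 250                          # assumed workdays per year
    fte_hours = 2000                        # assumed hours in a full-time year

    daily_low = non_users * saved_low / 60      # 40.0 hours of forgone productivity per day
    daily_high = non_users * saved_high / 60    # 60.0 hours per day
    yearly_low = daily_low * workdays           # 10,000 hours per year
    print(yearly_low / fte_hours)               # 5.0, i.e. five full-time employees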

Monday, February 9, 2026

The Quantum Revolution Is Coming. First, the Industry Has to Survive This Crucial Phase

Quantum computing could be even more revolutionary than artificial intelligence. The technology’s calculation speeds have the potential to bring about everything from quicker discovery of drug treatments for disease, to more accurate climate modeling, to smoother shipping logistics. The advances in the past year have been substantial, but a new paper from the University of Chicago warns quantum evangelists that as impressive as that progress has been, there’s still a long way to go. While the paper says quantum is nearing the point of practical use (taking it beyond controlled experiments in the laboratory), it won’t be running at full throttle for a while. First, there need to be significant advances in materials science and fabrication, the authors said, with an emphasis on wiring and signal delivery. “We are in an equivalent of the early transistor age, and hardware breakthroughs are required in multiple arenas to reach the performance necessary for the envisioned applications,” the authors wrote. To put that into context: Think of the speed and capabilities of today’s computers. For just $4,000, people can buy a supercomputer that fits on their desktop. Compare that to the computers of the early- to mid-1950s. That’s where quantum stands today in its evolution, the paper’s authors argue. That doesn’t mean the technology is disappointing, by any means. Computers in the ’50s, to continue the analogy, were used to break codes, automate payroll and inventory management systems, and handle the mathematical models for everything from weather forecasting to nuclear research. “While semiconductor chips in the 1970s were TRL-9 [Technology Readiness Level 9, indicating a technology is proven and successfully operating] for that time, they could do very little compared with today’s advanced integrated circuits,” William D. Oliver, coauthor of the paper and a professor of physics, electrical engineering, and computer science at MIT, said in a statement. “Similarly, a high TRL for quantum technologies today does not indicate that the end goal has been achieved, nor does it indicate that the science is done and only engineering remains.” The hurdles quantum faces are tied to the qubits it uses. While a more traditional computer thinks in ones and zeroes, a qubit can be a one, zero, or both at the same time (a superposition, spelled out in the note just below). That technology lets quantum computers process massive amounts of data in parallel, solving complex simulation and optimization problems at speeds not possible with today’s computers. Most platforms today rely on individual control lines for each qubit, but quantum systems can contain thousands, or even millions, of qubits, which makes wiring impractical. That same issue raises problems with power management and temperature control. Many quantum systems today depend on cryogenic equipment or high-power lasers, so simply making a bigger version of the machine won’t work. The paper’s authors say quantum is likely to follow an evolutionary path much like the one the computer industry took. Breakthroughs will be necessary, and quantum companies will need to focus on a top-down system design and close collaboration. Failing to work together could fragment the industry and slow its growth—and create some unrealistic expectations among both insiders and the general public. “Patience has been a key element in many landmark developments and points to the importance of tempering timeline expectations in quantum technologies,” the authors wrote.
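For readers who want the formal version of “one, zero, or both at the same time”: in standard textbook notation (not from the paper itself), a qubit’s state is the superposition

    |ψ⟩ = α|0⟩ + β|1⟩,   with |α|² + |β|² = 1,

where measuring the qubit yields 0 with probability |α|² and 1 with probability |β|². A register of n qubits occupies a superposition of 2ⁿ basis states at once, which is the parallelism the authors are counting on, and it is why individually wiring thousands of physical qubits becomes the bottleneck the paper describes.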
The paper’s warning about the timeline to quantum reaching its real potential comes amid a mounting wave of excitement about the technology. Bank of America analysts, in a note to investors last year, compared the rising technology to man’s discovery of fire. “This could be the biggest revolution for humanity since discovering fire,” the financial institution wrote. “A technology that can perform endless complex calculations in zero-time, warp-speeding human knowledge and development.” Tech giants and startups alike are working hard on quantum systems. Google has named its device Willow; IBM is also working on one, as is Amazon. And startups like Universal Quantum and PsiQuantum Corp. are also jockeying to be players in the quantum field. Intel has developed a silicon quantum chip for researchers and Microsoft is focusing on building practical quantum computers. Despite that, it could be 10 years or more before a quantum computer suitable for commercial applications makes its debut. Companies building prototype quantum computers (including Google) say they don’t expect to deliver a useful quantum computer until the end of the decade. BY CHRIS MORRIS @MORRISATLARGE

Friday, February 6, 2026

ChatGPT Is Saying Goodbye to a Beloved AI Model. Superfans Are Not Happy

OpenAI says that it will be retiring several ChatGPT models in the coming weeks, sending some superfans into a tailspin. In a statement, the company said that on February 13, the models GPT-4o, GPT‑4.1, GPT‑4.1 mini, GPT‑5 (Instant and Thinking), and OpenAI o4-mini will all be removed from ChatGPT and will no longer be accessible through the platform. This isn’t the first time OpenAI has attempted to get rid of GPT-4o. Back in August, when it released GPT-5, the company said it would retire the older model, but an online community revolted, saying that they relied on it for emotional support and felt betrayed by its sudden forced retirement. OpenAI has said that 4o is an especially sycophantic model, exhibiting high levels of agreeability and flattery. In a Reddit AMA following the August announcement, 4o fans hammered OpenAI co-founder Sam Altman with accusations that he had killed their “AI friend.” Almost immediately, OpenAI added the model back to ChatGPT, but only for paid users. OpenAI framed the un-retirement as giving users “more time to transition key use cases, like creative ideation.” Now, the company says it’s sending 4o out to pasture for real this time, because it has integrated feedback from the model’s superfans into its current flagship models, GPT-5.1 and GPT-5.2. Plus, OpenAI added, only 0.1 percent of users still use GPT-4o each day. OpenAI says that users who want to emulate the warm and conversational style of 4o can customize their ChatGPT’s output to display those personality traits. Still, on the internet, 4o fans were unsurprisingly not happy. On the subreddit r/ChatGPT, users wrote that they would be canceling their premium subscriptions in protest. “Now i can no longer have honest conversations about anything,” one user wrote. “Whenever I wanted to unload, I would use 4o. it never backtalked. 5.0+ all it does it back talk.” Another user wrote that canceling the model “a day before valentine’s day is crazy considering some of the userbase for 4o.” In its statement announcing the model’s retirement, OpenAI wrote that “changes like this take time to adjust to, and we’ll always be clear about what’s changing and when. We know that losing access to GPT‑4o will feel frustrating for some users, and we didn’t make this decision lightly. Retiring models is never easy, but it allows us to focus on improving the models most people use today.” Since the big changes are set to happen on February 13, users have about a week to say goodbye to 4o and start getting used to the newer ChatGPT offerings. BY BEN SHERRY @BENLUCASSHERRY

Wednesday, February 4, 2026

This AI Godfather Says Business Tools Built on LLMs Are Doomed

Silicon Valley firms and countless other businesses across the country are spending billions of dollars to develop and adopt artificial intelligence platforms to automate myriad workplace tasks. But top global technologist Yann LeCun warns that the limited capabilities of the large language models (LLMs) those apps and chatbots operate on are already well-known, and will eventually be overmatched by the expectations and demands users place on the systems. And when that happens, LeCun says, even more investment will be required to create the superintelligence technology that will replace LLM-based AI—systems he says should already be the focus of development efforts and funding. While that may seem like an outlier view, LeCun, 65, is far from a tech outsider. The Turing Award winner ran Meta’s AI research unit for a decade, only leaving last November to launch his own Paris-based startup, Advanced Machine Intelligence Labs. In addition to disliking the managerial duties that came with the research-rooted Meta job, LeCun said his departure was motivated by his view that Silicon Valley has prioritized short-term business interests over far more important and attainable scientific objectives. Chief among the commercial concerns he cites was developing and marketing LLM-based AI chatbots and apps with limited capabilities, rather than superintelligence systems with virtually boundless potential. In contrast to current AI, which uses collected data to provide responses to questions or perform necessary tasks, superintelligence systems take in all kinds of surrounding information they encounter, including text, sound, and visual input. They use all of this not only to teach themselves how to respond to data feeds effectively, but also to predict what’s coming next—a prerequisite for truly self-driving cars, say, or robots that reason and react as humans would. The vast differences in what current LLM-based AI and emerging superintelligence systems can offer mean that countless businesses are now buying and adapting a technology LeCun predicts is destined to be replaced by something better. And not because it’s more effective—and certainly not less expensive—but because that’s how the tech sector decided the fastest profits were to be made. Human-level intelligence “There is this herd effect where everyone in Silicon Valley has to work on the same thing,” LeCun told the New York Times recently. “The entire industry has been LLM-pilled… [but] LLMs are not a path to superintelligence or even human-level intelligence.” To be sure, AI apps like OpenAI’s ChatGPT, Microsoft’s Copilot, and Anthropic’s Claude have continually been improving over time, as they automate workplace tasks like emailing, content composition, and research. But LeCun says the fact that their LLM models rely on gathering, digesting, and working from word-based data limits how far they can evolve to approach—much less surpass—human thinking and response capabilities. By contrast, he and fellow researchers at AMI Labs are creating “world models” also trained with sound, video, and spatial data. Over time, they are expected to be able to observe, respond to, and even predict user activity and physical environments in countless workplace settings. And that’s expected to allow them to collect both more and broader ranges of information than humans can and react in ways people would if they had those capabilities.
“We are going to have AI systems that have humanlike and human-level intelligence, but they’re not going to be built on LLMs,” LeCun told MIT Technology Review this month, describing the models AMI Labs and other researchers are working on. “It learns the underlying rules of the world from observation, like a baby learning about gravity. This is the foundation for common sense, and it’s the key to building truly intelligent systems that can reason and plan in the real world.” But what does that mean for business owners—not to mention investors—spending huge sums to develop, acquire, and use LLM-based AI apps? If LeCun is correct, all those tools being marketed as the future of workplace productivity will become obsolete within several years and be replaced by the superintelligence tech he believes should have been prioritized in the first place. There’s already some evidence backing LeCun’s view that Silicon Valley has focused on the shorter-term profit objectives of rushing capacity-limited LLM apps to market, despite being aware of the limitations of their effectiveness. For example, a study published last August by MIT Media Lab’s Project Nanda estimated that despite the $30 billion to $40 billion that’s been invested since 2023 to develop or purchase AI platforms, only 5 percent of businesses that bought those automating tools have reported any return on that spending. “The vast majority remain stuck with no measurable [profit or loss] impact,” it said. And despite increasing investment in AI tech by businesses—and swiftly rising use by workers—there’s considerable disagreement on how effective the platforms actually are. According to a Wall Street Journal study, 40 percent of C-suite managers credited the work-automating apps with saving them considerable time each week. By contrast, two-thirds of lower-level workers said the tech saved them little or no time at all. LeCun doesn’t appear to regard any ROI or performance questions during this still-early era of AI tech as the problem. He even thinks LLM-based apps are valuable—up to a point. For example, he credits most apps and chatbots Silicon Valley has developed and sold to businesses as being very useful to “write text, do research, or write code.” AI’s unscalable apps But LeCun says the enormous fortunes and business strategy commitments Silicon Valley has made in what he views as a relatively short-term technological solution ignore the bigger, long-term potential of automating technology’s next phase. Meaning that, in cumulative terms, it will make the broader effort to produce and perfect AI more expensive. In his view, much of the money and froth that’s inflated what critics call today’s AI bubble will likely vanish when the models of today’s apps and chatbots can’t be used to build tomorrow’s revolutionary tech. “LLMs manipulate language really well,” LeCun told MIT Technology Review. “But people have had this illusion, or delusion, that it is a matter of time until we can scale them up to having human-level intelligence, and that is simply false.”
“While LLMs are incredibly powerful tools for generating text and interacting with humans, a true superintelligence would represent a leap beyond these current systems in terms of understanding, autonomy, adaptability, and practical real-world impact,” ChatGPT replied when asked about its eventual replacement—providing eight major improvements superintelligence tech will offer. When those systems do come online, LeCun says, businesses recognizing their far wider range of applications will have no choice but to buy them to replace outdated LLM-based AI tools they’ve just recently acquired. “Think about complex industrial processes where you have thousands of sensors, like in a jet engine, a steel mill, or a chemical factory,” LeCun told MIT Technology Review. “There is no technique right now to build a complete, holistic model of these systems. A world model could learn this from the sensor data and predict how the system will behave. Or think of smart glasses that can watch what you’re doing, identify your actions, and then predict what you’re going to do next to assist you. This is what will finally make agentic systems reliable.” And superintelligent systems hopefully won’t generate photos of people with six fingers or endless volumes of workplace slop for employees to plow through. BY BRUCE CRUMLEY @BRUCEC_INC