Monday, October 27, 2025

Far From Silicon Valley, This Founder’s Data Center Business Is Building the Future of AI

There’s a common refrain among business strategists that it’s better to be a pickaxe salesman than a gold prospector—or, in other words, that the best way to capitalize on a gold rush is to sell tools to the people hoping to strike it rich, rather than trying to hit paydirt yourself. With artificial intelligence currently enjoying a boom of its own—one that some big names have even called a bubble—plenty of companies are stepping in to sell the AI equivalent of pickaxes, such as backend infrastructure and computational power. That includes big-name ventures such as CoreWeave, the AI cloud company that IPO’d earlier this year and counts Microsoft, IBM and OpenAI among its clientele. CoreWeave claimed the No. 45 spot on this year’s Inc. 5000 list of the fastest-growing private companies in America, and is one of the AI boom’s biggest winners so far. But beating it out on that same list was another AI infrastructure company, the Chicago-based Introl, which came in at No. 14 on the list just a few years after its founding. Introl founder and CEO Ryan Puckett launched the company in 2021 while between jobs and looking to work as a freelance project manager. Introl helps set up GPUs, or graphics processing units: the computer chips that train and run modern AI models. A former low-voltage cable technician, Puckett has since built the company up to impressive scale: it’s grown annual revenue nearly 10,000 percent over the last three years, and Puckett says domestic revenue was about $38 million last year. All of that growth is bootstrapped, a feat the CEO attributes to “managing cashflow effectively and efficiently”—as well as, at least initially, “a lot of credit card debt.” And though Introl was born in Dallas, Puckett moved it to Chicago a few years in; he’d lived in the Windy City during his early 20s, and wanted to go back. “There’s not a better city in the country,” he says of Chicago. “There was no other thought in my mind to build it anywhere else.” Blake Crosley, Introl’s CTO, says the company has deployed “up to 100,000 GPU units in a data center.” Each one needs multiple connections, he adds, requiring lots and lots of fiber optic cable; the company says it has run more than 40,000 miles of the stuff in all. “We don’t actually own or operate the data centers,” Crosley explains. “We basically help design, like, what does it look like to actually get that set up in the space? Once the racks are in place, how are we going to actually connect everything together?” This work, he adds, is known as “rack and stack.” Installation is followed by testing and quality control. NDAs limit Introl’s ability to disclose specific client names, but the company says it has around 45 to 50 full-time employees, plus over 1,000 subcontractors. The startup deploys that workforce to data centers around the country and the planet. Those data centers are so big, Puckett says, that people get around them in golf carts and measure their footprints in terms of how many Costcos could fit inside. Speed to market is the CEO’s biggest challenge, he tells Inc. Companies will sometimes give Introl barely a week’s notice to get people on-site to a data center, he says, and it can sometimes be hard to find enough hotel space to house all those staffers—who will sometimes number in the hundreds for a job—especially in the small towns where many data centers go up.
“In a lot of cases, because they are trying to get things online so quickly…certain specific sections of [the data centers] are being built while you’re in a different part,” Puckett says. “It’s a constant flow of trucks coming in, dropping off pallets of cables.” AI is big business right now, but if the fervor starts to die down, demand for the underlying hardware could follow suit. “I’m not 100 percent sure what our pivot would be [if], say, GPU deployments just kind of fell off the face of the earth,” Puckett says, although Introl’s focus could shift toward maintenance. Right now, he estimates, 70 percent of the company’s work involves new installations, while the other 30 percent has to do with maintaining pre-existing sites. For now, though, the company is feeling good about where things are headed. “Obviously there’s a lot of talk about [an] AI bubble and stuff like that,” says Crosley, the CTO. “The players are huge, and the money that’s flowing is even bigger. But from a user perspective, on the side of utilizing AI, I can only see things expanding faster in the total adoption and usage.” BY BRIAN CONTRERAS @_B_CONTRERAS_

Friday, October 24, 2025

Here’s How LinkedIn Co-Founder Reid Hoffman Says AI Needs to Be Regulated

Regulation can be good for technology, so long as it’s done thoughtfully, according to LinkedIn co-founder, investor, and AI enthusiast Reid Hoffman. Speaking on the heels of a pitch event in San Francisco called Entrepreneurs First Demo Day, he compared AI regulation to seatbelts in vehicles. “Seatbelts are a good thing, relative to the fact that regulatory stuff can have a positive impact on society, technology evolution. Now doing it smart in the right way is important,” he tells Inc. “You don’t try to solve everything before you get on the road. You get on the road and then solve it as you go,” he adds. His voice joins a chorus of others from big names in tech speaking up about how much regulation they support—or, in the case of legendary investor Marc Andreessen and companies like Meta, how little. Hoffman sits on the board of Entrepreneurs First, an international talent investment firm that hosts incubator-style programs and related annual pitch competitions. Those events are called Demo Days, and the most recent took place in San Francisco on Wednesday. Hoffman joined EF’s board after leading a significant round of investment in the company in 2017 as a partner at venture capital firm Greylock Partners. Hoffman was not on the ground at Demo Day this year, but another big name in tech was: Anthropic co-founder Jack Clark was the keynote speaker, in conversation with Entrepreneurs First CEO Alice Bentinck. Just a few days prior, Clark had made waves for commentary he gave at The Curve conference in Berkeley, California, and later published in essay form in his newsletter. He compared AI to a “mysterious creature” of humanity’s own creation. He said he was optimistic about its potential as well as appropriately afraid of it, especially if AI’s goals are not absolutely aligned with humanity’s. Finally, he emphasized the need for conversations with a broad swathe of society to help craft a “policy solution.” “There will surely be some crisis,” Clark notes in his blog. “We must be ready to meet that moment both with policy ideas, and with a pre-existing transparency regime which has been built by listening and responding to people.” In response to the post, U.S. AI and crypto czar David Sacks accused Anthropic of fearmongering. Hoffman’s take, which he wrote about in his recent book, is by no means anti-regulation, but it does differ somewhat from Clark’s. “In the book that I published in January, Superagency, part of what I was arguing for within AI is iterative deployment and development,” he tells Inc. “We do the regulatory thing, but we do it in response to what we can actually see versus imagination of what [could] happen,” he adds. AI has never been more topical, especially among aspiring entrepreneurs. This week at Demo Day in San Francisco, founders from 20 different startups pitched more than 200 tech investors, among them big-name firms like a16z, Khosla Ventures, Paladin Capital, Insight Partners and Engine Ventures, in hopes of landing as much as $7 million in seed funding. It represented the culmination of some six months of work the founders had put in during Entrepreneurs First’s incubator-style program. On the lips of most of those entrepreneurs was AI. “The majority of the companies that were pitching yesterday—85 to 90 percent—are all using AI in some way. Some of them are building novel AI models, others are creating wrappers or scaffolding around existing AI models,” says Bentinck.
“If you look at what early-stage investors want to put capital behind, they see this enormous opportunity in the new AI economy.” Originally founded in London, Entrepreneurs First started off as a nonprofit in 2011 before becoming, in 2015, the investment vehicle it is today. The company expanded overseas to offer programming in San Francisco at the start of 2024, and continues to run cohorts across Europe, India and the U.S. Entrepreneurs First functions something like an incubator, although Bentinck says EF thinks of itself more as a “talent investing studio.” It searches out individuals, usually with technical backgrounds, who also possess certain qualities related to pacing, productivity, determination, and even aggression, Bentinck says—qualities that alert EF that these individuals may outperform their peers. EF then guides them through the process of building a startup, including helping them ideate if they don’t already have an idea and introducing them to potential co-founders. “We find exceptional individuals, pre-team, pre-idea, pre-company. Really all that we’re looking for is their entrepreneurial potential and then we run them through a process that helps them build a startup from scratch,” Bentinck says. The group that pitched this week included the top-tier companies from EF’s European and U.S. programs. Each of these teams had been selected by EF and received $250,000 in pre-seed investment in exchange for 8 percent equity. “That’s the culmination of EF and we then send them off into the wild to build enormous companies,” Bentinck says. BY CHLOE AIELLO @CHLOBO_ILO

Wednesday, October 22, 2025

How AI Can Make You a Better Negotiator: A Step-by-Step Guide

Earlier this year, Jennifer Barnes received an email from a client in financial distress, asking to renegotiate their contract. As the founder and CEO of Optima Office, an outsourced HR and business services company based in San Diego, Barnes had received messages like this before. She’s been in business since 2018, growing her 100-employee company to around $18 million in revenue and earning a place on the Inc. 5000. There isn’t much she hasn’t seen. From experience, Barnes knew negotiating was going to burn the better part of an hour. First, she’d have to read the whole exchange. Then she’d think about how to respond. After that, she’d have to spend a lot of time writing and editing the response. She’d have to be diplomatic and keep her own emotions in check, she says: “Clients can be really unreasonable when they’re very low on funds.” This time, however, instead of working through it on her own, Barnes popped the email into her paid version of the AI chatbot Claude and asked it for a three-point summary of the client’s demands. She then uploaded a brief synopsis of her perspective on the situation, did a light edit and hit send. Total time to craft the message that solved the problem? Five minutes. Negotiating makes up much of the work of growing a company. Whether it’s with clients, suppliers, investors, joint venture partners, contractors, or employees, as an entrepreneur it can feel like you’re constantly either preparing for, actually doing, or managing the results of a negotiation. All of that is intellectually and emotionally demanding, says Emily DeJeu, a professor at Carnegie Mellon University’s Tepper School of Business who teaches classes on negotiation. “Even in our textual exchanges, negotiation is happening as much with our guts as with our brains.” But with the advent of AI, using your own brain unassisted has become a bit passé. If sensitive, time-consuming tasks like negotiating can be even partially off-loaded without inadvertently blowing up your company, well, that’s pretty compelling. On the other hand, a recent MIT report found that 95 percent of companies are getting literally zero return on their generative AI investment. But it doesn’t have to be that way. By deploying today’s AI tools to prepare for negotiation—with a keen understanding of their current limits—you can leverage them to your advantage right now. In this Premium article you will learn: a step-by-step guide for incorporating AI into the negotiation process; best practices for combining human intuition with artificial intelligence; and how many hours a week you can expect to save by using AI. When an hour takes five minutes For growing companies, there’s a wide array of AI tools, from business-function-specific programs like Salesforce’s Einstein to generally available large language models like ChatGPT or Perplexity. All of these AI tools can analyze data and generate cogent text or other forms of output. Between the time savings and the added bonus of strictly controlling her tone, which might have had a sarcastic edge, Barnes says she quickly found herself relying on Claude for her frequent negotiation tasks: to research, prepare, and learn from prior wins. “It saves me about eight hours a week,” she says. “It’s like having another executive on the team.” Of course, that extra executive is sometimes fallible. “It makes mistakes. Don’t get me wrong,” she says. Generative AI systems are known to hallucinate, or produce inaccurate and even fabricated results.
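Barnes’s workflow (paste the thread in, ask for a three-point summary of the demands, layer in your own position, then edit before sending) is simple enough to script if a team would rather run it outside the chat window. Below is a minimal sketch using Anthropic’s Python SDK; the model name, prompt wording, and helper function are illustrative assumptions rather than details of Optima Office’s setup, and the draft still needs the human review Barnes insists on.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment


def summarize_and_draft(client_email: str, our_position: str) -> str:
    """Ask Claude for a three-point summary of the client's demands,
    then a diplomatic draft reply that a human will edit before sending."""
    message = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder; use whatever model your plan includes
        max_tokens=1024,
        messages=[
            {
                "role": "user",
                "content": (
                    "Summarize the client's demands in three bullet points, then draft "
                    "a calm, diplomatic reply that reflects our position.\n\n"
                    f"CLIENT EMAIL:\n{client_email}\n\nOUR POSITION:\n{our_position}"
                ),
            }
        ],
    )
    return message.content[0].text


# Usage: the output is a draft for review, not something to send automatically.
print(summarize_and_draft("We need to cut our monthly retainer in half...",
                          "We can reduce scope, but not the hourly rate."))
```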
But AI seems just as confident when it’s wrong as when it’s right. That’s why Barnes says she would “absolutely not” allow an AI to send a message on her behalf without reviewing it first. When I asked Claude for a comment, it agreed: “People getting real value are using me as a tool they control, not as a replacement for judgment. The moment anyone treats me as a peer negotiator rather than a research assistant, things get questionable.” Machine processing power versus human nuance Even if AI never made a mistake, there are serious questions about how well machines can accomplish the extremely human task of negotiating a complicated deal, says DeJeu, who is also hosting a conference on how businesses can use generative AI later this year. Artificial intelligence lacks the ability to read human emotion, something the negotiators she’s trained often cite as an advantage. DeJeu, however, disagrees. “Emotion has a distinct, powerful role to play in persuasion,” she argues. “Negotiation is one of the most human-touch-necessary communication scenarios. It’s nothing but nuance.” Today’s tools, she says, aren’t capable of the key basic task of accurately reading a room. DeJeu acknowledges that not every negotiation is that deep. Not every supplier contract needs an analysis of subtle body language. And she certainly sees many benefits of using an AI tool to research and synthesize information ahead of a negotiation. “It can make you a little more nimble,” she says. DeJeu specifically finds using voice-enabled AI for rehearsing negotiations to be beneficial. She suggests preparing 10 minutes for every 10 seconds of talking time in a negotiation. It’s hard to imagine a human critique partner enduring that without diminishing returns, but AI is tireless. Agents of the Future Of course, AI tools are rapidly evolving. Building on LLMs are the more eye-catching AI-powered agents—also known as agentic AI—which are capable of self-direction. “AI agents act with autonomy and authority to find and negotiate deals with suppliers at scale,” explains Kaspar Korjus, CEO of Pactum AI. If technology continues its progress, AI agents will eventually handle every step of negotiation, from first contact to final contract and delivery. While going totally hands-off is not an available option for most small to midsize companies today, corporate giants have been working on this for a while. For example, way back in 2021, Walmart worked with Pactum AI to create a pilot to handle certain supplier negotiations with AI chatbots. That led to a wider deployment in 2022—which you may recall is the year when ChatGPT first launched its public version. Now that ChatGPT has more than warmed up the general audience—the latest estimates for this one platform alone are 700 million users worldwide—agentic AI will only become more common. Nearly three-quarters (72 percent) of chief procurement officers surveyed by Gartner say that AI tools like these are their top technology priority over the next five years. However, the next five years are not the next five minutes. “It’s still early days—we really don’t see any process that is fully agentic,” says Sesh Iyer, North America chair of BCG X, the tech build-and-design unit of Boston Consulting Group. For one thing, agentic AIs are still flummoxed by unexpected things, and that can make them take strange, if not brazen, shortcuts.
For example, in a Carnegie Mellon study of a simulated company, researchers found that when an AI agent couldn’t find a particular person it needed to contact to complete a task, it just renamed another user. Overall, the top-performing AI agent successfully completed only 24 percent of its tasks. But make no mistake—the tech is evolving fast, says Iyer: “We always overestimate what new technology will do in the short term and underestimate what it will do in the long term.” A practical playbook To avoid these problems of estimation in either direction, Iyer suggests starting with what you have right now. It’s easy enough to incorporate AI into negotiation prep with the tech your business probably already uses. Test it out with background research, brainstorming arguments and counterarguments, and rehearsal via the voice function. But even at the most basic level of AI usage, it’s best to check in with your legal and IT security teams, since there are heavy privacy implications for both. When you’re using LLMs to their fullest for negotiation, you’re “allowing an incredible amount of access to information, including emails and calendars,” says Cameron Powell, co-founder of DeepLaw, a legal consultancy that uses AI tools for negotiation on behalf of its clients. Sharing this information can raise questions about confidentiality, liability, and intellectual property. When AI has proved its mettle and security to your satisfaction, the next step is using it to conduct deep analysis of your current contracts, sales, and negotiation processes. This could mean using AI to review your less-used suppliers that might be costing you more than they’re worth. You could also use AI to create side-by-side comparisons of your competitors or to provide a deep analysis of your current contracts to see what advantages you may be leaving on the table. AI can help manage tasks that are too time-consuming or unwieldy to otherwise manage closely. Eventually, you can move to testing semi-autonomous AI agents on repetitive or less nuanced negotiating tasks. Wherever you are in the process of integrating this new tool, Iyer suggests moving deliberately, and with a sharp eye toward integrating AI into your and your employees’ workflows. That MIT report that found little return on generative AI investment attributed much of the problem to enthusiastic focus on the gee-whiz technology itself, at the expense of truly considering how to make the best use of it today. “Focus on things that truly matter to your business,” urges Iyer. “Don’t try to do a thousand things at once.” BY ALISON J. STEIN

Tuesday, October 21, 2025

Meta’s Bold Strategy to Beat OpenAI Starts With These 8 AI Innovators

OpenAI might be the center of the AI development world these days, but the competition has been heating up for quite a while. And few competitors are bankrolled on the same level as Meta. With a market capitalization of more than $1.75 trillion and a CEO who’s not afraid to spend heavily, Meta has been on a hiring spree in the AI world for months, poaching top-tier talent from a variety of competitors. It appeared recently that the wave of high-profile (and high-dollar) recruitments was coming to an end. In August, Meta quietly announced a freeze on hiring after adding roughly 50 AI researchers and engineers. This month, though, two more big names have joined the Meta roster. While Meta might have a gap to close with its AI rivals, the company has assembled an all-star team to catch up and move forward. Here are some of the most notable experts to come on board. Andrew Tulloch, co-founder of Thinking Machines Lab Tulloch partnered with OpenAI’s former chief technology officer Mira Murati to launch Thinking Machines Lab in February of this year. Now he’s returning to his roots. Considered a leading researcher in the AI field, Tulloch previously spent 11 years at Meta, leaving in 2023 to join OpenAI, then departing with Murati. Meta founder Mark Zuckerberg has been chasing Tulloch for a while, reportedly making an offer with a $1.5 billion compensation package at one point, which Tulloch rejected. (Meta has called the description of the offer “inaccurate and ridiculous.”) There’s no word on what Tulloch was offered that made him decide to move. Ke Yang, Senior Director of Machine Learning at Apple Yang, who was appointed to lead Apple’s AI-driven web search effort just weeks ago, is another big October Meta hire. At Apple, his team (Answers, Knowledge and Information, or AKI) was working to make Siri more ChatGPT-like by pulling information from the web, making his departure one of Meta’s most notable poachings. Meta convinced him to come over after recruiting several of his colleagues. Shengjia Zhao, co-creator of OpenAI’s ChatGPT Zhao joined Meta in June to serve as chief scientist of Meta Superintelligence Labs. Beyond co-creating ChatGPT, he also played a role in building GPT-4 and led synthetic data at OpenAI for a stint. “Shengjia has already pioneered several breakthroughs including a new scaling paradigm and distinguished himself as a leader in the field,” Zuckerberg wrote in a social media post in July. “I’m looking forward to working closely with him to advance his scientific vision.” Daniel Gross, co-founder of Safe Superintelligence As it did with Murati’s Thinking Machines Lab, Meta tried to acquire Safe Superintelligence, the AI startup co-founded by OpenAI’s former chief scientist, Ilya Sutskever. When that offer was rejected, Zuckerberg began looking for talent, luring co-founder and CEO Gross in June. Gross is working on AI products for Meta’s superintelligence group. By joining Meta, he’s reunited with former GitHub CEO Nat Friedman, with whom he once created the venture fund NFDG. Ruoming Pang, Apple’s head of AI models Pang was one of the first high-profile departures from Apple to Meta, making the jump in July. At the time, he was Apple’s top executive overseeing AI models and had been with the company since 2021. While there, he helped develop the large language model that powers Apple Intelligence and other AI features, such as email and webpage summaries.
Matt Deitke, co-founder of Vercept Vercept is a startup that’s attempting to build AI agents that use other software to autonomously perform tasks, something that caught Zuckerberg’s attention. Deitke proved hard to lure, though. He reportedly turned down a $125 million, four-year offer, but a direct appeal by Zuckerberg (and a reported doubling of that offer) convinced him to make the move (with the blessing of his peers). Kiana Ehsani, his co-founder and CEO, announced his departure on social media, joking, “We look forward to joining Matt on his private island next year.” Alexandr Wang, founder and CEO of Scale AI Wang left his startup to join Meta after the social media company made a $14.3 billion investment in Scale AI (without any voting power in the company). “As you’ve probably gathered from recent news, opportunities of this magnitude often come at a cost,” Wang wrote in a memo to staff. “In this instance, that cost is my departure.” Wang joined Meta’s superintelligence unit. Scale made its name by helping companies like OpenAI, Google and Microsoft prepare data used to train AI models. Meta was already one of its biggest customers. Nat Friedman, former CEO of GitHub Friedman was already a part of Meta’s Advisory Group before he was brought on full-time. That external advisory council provides guidance on technology and product development. Now, he’s working with Wang to run the superintelligence unit. Friedman was previously CEO of GitHub, a cloud-based platform that hosts code for software development. Most recently, he was a board member at the AI investment firm he started with Safe Superintelligence’s Gross. As for what Zuck is going to do with all this talent, the sky’s the limit, but there’s some catching up to do first. Meta’s Llama large language models haven’t quite matched those of OpenAI or Google, but with Meta’s gargantuan user base (3.4 billion people use one of the company’s apps each day), Meta’s AI could still be one of the most widely used in the years to come. BY CHRIS MORRIS @MORRISATLARGE

Friday, October 17, 2025

This Report Says AI Stole 17,000 Jobs This Year. The DOGE Effect Is Much Worse

AI evangelists continue to insist that AI is improving workers’ efficiency and thus business productivity, freeing up staff from mundane duties to do more meaningful work. Not as many boosters are cheering the fact that it’s just as easy for companies that have gone all in on the new technology to cut labor costs by eliminating people’s jobs. According to a new report, thousands of jobs have already gone from the job market this year as AI has assumed those duties instead, and fully 7,000 of the losses happened in September alone. All of this may feed into your thinking about rolling out AI at your own company. The data, from Chicago-based executive outplacement firm Challenger, Gray & Christmas, attributes 17,375 job losses to adoption of AI tech since the start of 2025. Most of these cuts were made public in the second half of the year, industry news site HRDive reports. The numbers are dramatic, especially since a similar report from Challenger in July said that among some 20,000 jobs lost to “automation” in the first half of the year, only 75 were directly connected to AI. Andy Challenger, senior vice president at the firm, told CFODive at the time that the suspicion was that many more jobs were actually lost to AI. “We do see companies using the term ‘technological update’ more often than we have over the past decade, so our suspicion is that some of the AI job cuts that are likely happening are falling into that category,” Challenger said then, also noting that some firms were being careful because they “don’t want press on it.” In the new report, Challenger noted that it’s mainly tech firms that are “undergoing incredible disruption” because of AI. Challenger also backed up many earlier reports by noting that the buzzy, controversial tech is “not only costing jobs, but also making it difficult to land positions, particularly for entry-level engineers.” HRDive notes that losses at Salesforce may account for many of those AI-related job cuts in recent months, with Salesforce CEO Marc Benioff noting in August that customer service staff numbers were slashed by about 4,000 after AI agents took on some customer handling duties. The interesting wrinkle here is that Salesforce is one of the big tech names that is pivoting aggressively and openly to adopting AI tech, and is even selling it to its customers with the promise that agent-based AIs can save them money. Benioff in early 2025 also said “my message to CEOs right now is that we are the last generation to manage only humans.” In his vision for future company leadership, managers will be steering both AIs and humans through their day-to-day operations. While 17,000 jobs lost to AI sounds like a lot, it’s dwarfed by other causes, the Challenger report shows. DOGE-related actions are the “leading reason for job cut announcements in 2025,” the report notes, with 293,753 planned layoffs connected to DOGE activities, including reductions to federal workforce numbers and the cutting of contractor deals. Nearly 21,000 more jobs have been lost as part of what Challenger’s report calls “DOGE Downstream Impact,” where funding cuts have hit nonprofits that depend on federal grants. Traditional market and general economic concerns drove another 208,227 cuts in 2025, the report also notes. This means DOGE and the typical workings of the economy are responsible for around 30 times as many job losses as AI.
But it would be unreasonable to assume AI’s body count won’t rise, considering Big Tech’s push to get AI into the workplace while developing increasingly capable AI tools that can handle human jobs. And while Challenger notes that tech-centric firms are bearing the brunt of AI-related job cuts right now, it would be sensible to guess that other industries will soon follow. What’s the takeaway for your company? Primarily that it may be a good idea to reassure your staff that if you’re rolling out AI tools to streamline operations, you’re not actually planning on downsizing your workforce. “AI won’t be stealing anyone’s job here” is a strong message that will build your team’s trust, assuming that this is actually the case. Another side effect may be a glut of workers in the job marketplace. Since many job seekers are using AI tools to boost their hunt for new employment, you may actually see many more applicants than before for open positions at your company, and your HR team may be quickly overburdened. BY KIT EATON @KITEATON

Wednesday, October 15, 2025

This Brooklyn-Based AI Company Just Raised $2 Billion to Compete With DeepSeek

A Brooklyn startup just raised $2 billion to build a rival to DeepSeek, the Chinese AI company. Called Reflection AI, the company is now valued at about $8 billion, up some 15-fold from last March, when it announced $130 million in funding. The company is less than two years old. Reflection, which launched in March 2024, originally aimed to build a “superintelligent autonomous coding system,” and use that as a jumping-off point. Now, it is working on building an open alternative to the types of closed frontier models that giants like OpenAI are developing. In other words, Reflection wants to be the U.S. answer to China’s DeepSeek. “AI is becoming the technology layer that everything else runs on top of,” Reflection noted in a blog post about the funding. “But the frontier is currently concentrated in closed labs. If this continues, a handful of entities will control the capital, compute, and talent required to build AI, creating a runaway dynamic that locks everyone else out.” U.S. AI and crypto czar David Sacks praised Reflection on Thursday. “It’s great to see more American open-source AI models. A meaningful segment of the global market will prefer the cost, customizability, and control that open source offers. We want the U.S. to win this category too,” he posted on social media platform X. Aside from remaining globally competitive, Reflection says there are numerous benefits to frontier open intelligence, including safety, transparency, and accountability. (Frontier in this case refers to the most advanced, large-scale LLMs, like those currently in development behind closed doors at companies like OpenAI.) But it also flags the potential for misuse. High-profile players in the space, like OpenAI’s Sam Altman, have publicly fretted about bad actors weaponizing AI; another concern is that others in the space are not putting in place adequate safeguards—even as Altman pushes to avoid regulation. OpenAI has since announced it is working on its own open model. “We believe the answer to AI safety is not ‘security through obscurity’ but rigorous science conducted in the open, where the global research community can contribute to solutions rather than a handful of companies making decisions behind closed doors,” Reflection’s blog says. The startup has spent the past year assembling a crack team of experts who have “pioneered breakthroughs including PaLM, Gemini, AlphaGo, AlphaCode, and AlphaProof, and contributed to ChatGPT and Character AI, among many others.” Its founders, Misha Laskin and Ioannis Antonoglou, worked on DeepMind’s Gemini and go-playing AI AlphaGo, respectively. The company also noted that it developed a large language model and “reinforcement learning platform capable of training massive mixture-of-experts (MoE) models at frontier scale.” TechCrunch reported that MoE models are a type of architecture that powers these super-advanced frontier LLMs. “We saw the effectiveness of our approach firsthand when we applied it to the critical domain of autonomous coding. With this milestone unlocked, we’re now bringing these methods to general agentic reasoning,” the blog states. Reflection also stated it has come up with a commercial model that will allow the company to sustain itself while developing frontier models. It aims to release its first model early next year, TechCrunch reported. BY CHLOE AIELLO @CHLOBO_ILO
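The mixture-of-experts architecture mentioned above is worth a quick illustration: instead of running every parameter on every token, a small router sends each token to only a few specialist sub-networks, which is what lets MoE models grow very large without a proportional increase in compute per token. The toy PyTorch layer below shows the routing idea at miniature scale; it is purely illustrative and has nothing to do with Reflection’s actual codebase.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyMoELayer(nn.Module):
    """Toy mixture-of-experts layer: a router scores experts per token and only
    the top-k experts run for each token. Frontier MoE models apply the same
    idea at vastly larger scale."""

    def __init__(self, d_model: int = 64, num_experts: int = 8, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, num_experts)  # one score per expert, per token
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(), nn.Linear(4 * d_model, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (tokens, d_model)
        weights = F.softmax(self.router(x), dim=-1)       # (tokens, num_experts)
        topk_w, topk_idx = weights.topk(self.top_k, dim=-1)
        topk_w = topk_w / topk_w.sum(dim=-1, keepdim=True)  # renormalize kept weights
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = topk_idx[:, slot] == e             # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += topk_w[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out


# Route 10 example "tokens" through the layer.
layer = TinyMoELayer()
print(layer(torch.randn(10, 64)).shape)  # torch.Size([10, 64])
```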

Monday, October 13, 2025

This Robotics Startup Just Emerged From Stealth With $300 Million to Create an ‘AI Scientist’

A new AI startup created by OpenAI and Google DeepMind alumni has emerged from stealth with $300 million in funding from some of the biggest names in tech. The company, called Periodic Labs, says it is fully dedicated to accelerating scientific discovery with AI. According to a New York Times story on Tuesday, the company was cofounded by Liam Fedus, one of the original creators of ChatGPT, and Ekin Dogus Cubuk, who led some of Google DeepMind’s materials and chemistry research teams. Cubuk’s team discovered 2.2 million new inorganic crystals, according to TechCrunch. The founders met while they were both working at Google, and connected over a shared desire to use large language models (LLMs) to advance the study of physics and chemistry. Unlike their former employers, both of which have in recent weeks released products that use AI to generate short-form videos, Periodic Labs will focus entirely on using AI to test physics simulations, the founders say. To do this, Periodic will open up a laboratory in Menlo Park that uses robots, powered by large language models, to run scientific experiments. According to Periodic’s website, its “goal is to create an AI scientist.” Robots will handle the same kind of physical experiments conducted by human scientists, but at a scale that’s humanly impossible. In an example given by the Times, an AI scientist “might run thousands of experiments in which it combines various powders and other materials in an effort to create a new kind of superconductor, which could be used to build all sorts of new electrical equipment.” Unlike humans, robots don’t need to eat, sleep, or take breaks from work, so they can run experiments for much longer. Eventually, according to the Times, the robots could learn which factors lead to success when trying to prove a hypothesis, and use that knowledge to improve their work. The founders think this process could result in breakthroughs for multiple industries, including semiconductors. That potential was enough for Fedus and Cubuk to raise a $300 million seed round, led by a16z along with NVIDIA, Jeff Bezos, former Google CEO Eric Schmidt, and Google chief scientist Jeff Dean. Over 20 top Silicon Valley researchers have left their jobs to join Periodic. In a video interview with a16z released alongside news of the fundraise, Fedus said that Periodic’s ideal customers are engineers and researchers in advanced industries like space, defense, and semiconductors. These engineers and researchers “don’t really have particularly good tools,” said Fedus, “and that is our opportunity. These are massive R&D budgets.” BY BEN SHERRY @BENLUCASSHERRY

Friday, October 10, 2025

Walmart’s CEO Just Gave a Sobering Prediction About AI. The Time to Prepare Is Now

Doug McMillon, as the CEO of Walmart, runs the largest private employer in the United States. When he talks about the future of work, it isn’t theory—it’s the lived reality of millions of families. In fact, more than 2.1 million people around the world get a paycheck from Walmart. That’s why it matters that, speaking at a workforce conference in Bentonville, Arkansas, last week, Walmart’s CEO didn’t mince words about artificial intelligence. “It’s very clear that AI is going to change literally every job,” McMillon said, according to The Wall Street Journal. “Maybe there’s a job in the world that AI won’t change, but I haven’t thought of it.” Look, a lot of people have predicted that AI will change the way we work in the future. For that matter, people are predicting that AI will change the way we do pretty much everything. It’s already changing the way we look for and process information. And it’s having a real impact on creative work, from generating ideas to editing photos. But this is different. This isn’t some kind of edge case where AI is doing something that benefits niche work. This is a sober assessment from someone who thinks about the livelihoods of millions of people, from truck drivers to warehouse workers and store managers. So far, much of the AI conversation around work has been about replacing humans with robots or computers capable of doing everything from menial tasks to coding. The pitch is that companies will save extraordinary costs as humans are replaced with AI that can do more work, faster, and cheaper. The fear among many employees is that automation will come for knowledge work the same way robots came for manufacturing. McMillon’s warning is different: AI isn’t confined to Silicon Valley jobs. It’s coming for the retail floor, the supply chain, the back office, and the call center. For example, AI can already predict what items a store will sell and when, automatically adjusting orders. That doesn’t eliminate the need for employees—but it will definitely change what their job looks like. McMillon also made another point: Walmart’s overall head count will likely stay flat, even as its revenue grows. That—if you think about it—isn’t just surprising, it’s incredibly revealing. The assumption is that AI equals fewer jobs. Instead, Walmart expects them to be different. To make that happen, the company is mapping which roles will shrink, which will grow, and which will stay stable. The strategy is to invest in reskilling so workers can move into the new jobs AI creates. “We’ve got to create the opportunity for everybody to make it to the other side,” McMillon said. This is the part of the warning many leaders ignore. Pretending AI won’t affect your workforce is irresponsible. Pretending AI only means job cuts is short-sighted. The challenge is to figure out what your workforce looks like and what you need to do to make the transition. There are a few reasons that Walmart’s perspective matters. The obvious one is because it’s the largest private employer in the world. It is the company that, single-handedly, affects the greatest number of people when it makes a change to its workforce. That’s why AI isn’t just a technology problem; it’s a leadership problem. It’s one thing for McMillon to say “AI will change every job.” It’s another thing to commit that Walmart will still employ millions of people, even if the jobs look different. He’s saying the responsibility to guide workers through change rests squarely on leaders’ shoulders. 
That’s a message worth hearing far beyond the company’s Bentonville headquarters. AI is often pitched as a productivity story. That’s true, but the bigger story is about people. Technology that changes “literally every job” also changes lives, families, and communities. The ripple effect is enormous when you’re a company the size of Walmart. By the way, Walmart isn’t perfect, but its approach offers a model. Instead of framing AI as cost-cutting, it’s framing AI as a transformation challenge. That may seem like semantics, but reframing the conversation makes all the difference between a fearful workforce and a resilient one. McMillon’s prediction is sobering precisely because it’s credible. He isn’t selling software or trying to impress investors. He’s planning for how millions of his own employees will navigate the AI future. If you’re leading a business—whether that’s 20 people or 20,000—the message is pretty clear. AI is going to change every job. Your job is to be thinking hard about what that means for your company. It means thinking about how it will impact your people and coming up with a plan. It seems like almost everyone agrees that AI will change almost everything about the way we all work. The only question is whether you’ll help your people prepare or leave them to figure it out on their own. By then, it will be too late. That’s why every leader should start now. EXPERT OPINION BY JASON ATEN, TECH COLUMNIST @JASONATEN

Wednesday, October 8, 2025

OpenAI wants to build the next era of the web, and it’s shelling out billions to do it

OpenAI was an artificial intelligence research lab little known outside of Silicon Valley before ChatGPT debuted in November 2022. Three years later, OpenAI has become synonymous with the AI boom, making it the envy of its tech peers and thrusting CEO Sam Altman into President Donald Trump’s orbit. ChatGPT writes apps, plans trips and browses the web on users’ behalf. And OpenAI is making inroads into shopping, entertainment, education and government services — laying out plans at its developer conference on Monday to become more of a platform than a basic app. As its software spills into more areas of online life, OpenAI is shelling out billions to become a leading player in the physical infrastructure for the AI future. In its latest major deal, announced on Monday, OpenAI said it will invest in 6 gigawatts of data center capacity powered by AMD chips. That deal follows similar agreements with Nvidia and Oracle. In some ways, OpenAI’s expansion is circular — it needs new applications to bring in the money to fund its massive computing power. And it needs even more computing resources to power those new tools. OpenAI’s rapid expansion comes against a challenging backdrop. Tech companies are competing fiercely to build the most powerful AI models, but some investors worry the market is in a bubble. What’s more, OpenAI is competing with tech giants such as Meta that already have vast tech ecosystems to help them expand and earn money from their AI tech. And OpenAI, which is not yet profitable, needs to find a way to continue raking in huge amounts of cash to fund its future endeavors. OpenAI did not respond to a request for comment on this story. ChatGPT: More than just a chatbot Google, Amazon and Meta laid the groundwork for the modern web by popularizing search engines, e-commerce and social media. OpenAI could do the same for the AI era by adding new capabilities to ChatGPT, which now has 800 million weekly active users, according to Altman. OpenAI wants users to get things done online without ever having to leave ChatGPT, which could one day put the app at the core of how people use technology, much like Apple’s iOS or Google’s Android system. Soon ChatGPT will be able to create user playlists directly on Spotify or browse apartment listings on Zillow right from chats, OpenAI announced on Monday. In late September, OpenAI launched a tool called Instant Checkout that lets users buy certain items directly through ChatGPT. ChatGPT also now has a study mode, which tailors prompts and responses for students using the tool for schoolwork. And its new Sora 2 app is challenging Meta and TikTok with a scrollable feed of AI-generated short-form videos. OpenAI could even challenge the most prominent device in consumers’ daily lives: the smartphone. The company is collaborating with former Apple design chief Jony Ive on a new AI hardware product, though details are slim. (OpenAI’s peers like Google and Meta are chasing hardware markets by releasing smart glasses with built-in AI assistants.) OpenAI’s trajectory mirrors the rise of Google parent Alphabet, which built its business around indexing the web and now has a foothold in everything from consumer tech devices to health research. Thomas Thiele, an AI expert at management consulting group Arthur D. Little, said he sees similarities between the two companies. Google “has become this very broad corporation that has an inevitable footprint in everything we see on the internet,” Thiele said.
“OpenAI is also aiming for a much bigger footprint.” Billions on data centers But scaling up those AI efforts means investing heavily in the sprawling data centers and infrastructure necessary to power them. OpenAI is shelling out billions of dollars to build a massive physical footprint, with plans for AI data centers across the United States and around the world. “We need as much computing power as we can possibly get,” OpenAI President Greg Brockman told CNBC on Monday. In January, the company announced a partnership with Oracle and SoftBank to invest up to $500 billion in a company called Stargate to build more AI infrastructure in the United States. The group’s first project, a one-million-square-foot data center, is already under construction in Abilene, Texas, with additional sites planned in Texas, New Mexico and the Midwest. OpenAI agreed in July to pay Oracle another $300 billion over five years to develop additional data center capacity for Stargate. Last month, OpenAI said it would buy enough Nvidia AI chips to power 10 gigawatts of data center capacity in exchange for a $100 billion investment from the chipmaker. And while analysts expect Nvidia — the undisputed leader in AI chips — to remain OpenAI’s core infrastructure partner, the ChatGPT maker is now also hedging its bets with its AMD deal. OpenAI has also signed onto partnerships to build out AI infrastructure abroad, including in the United Kingdom and United Arab Emirates. OpenAI’s aggressive expansion could be critical to keep up with rivals like Meta, Microsoft and Google that have spent decades building their digital ecosystems, said Daniel Keum, an associate professor at Columbia Business School. Google, for example, has the advantage of plugging its AI into popular services like Gmail and Google Docs. “ChatGPT is great right now, but it’s not ChatGPT versus Copilot. It’s ChatGPT versus the Microsoft bundle,” said Keum. So for OpenAI, working with chipmakers to maintain the most advanced large language models could give it a leg up, he said. But to carry out its ambitious infrastructure plans, OpenAI needs to continue bringing in a whole lot of cash. The company is reportedly valued at $500 billion. But it’s still far from profitable; it posted an operating loss of $7.8 billion in the first half of 2025 and is still ramping up data center spending, according to a report from tech news site The Information. It’s unclear whether OpenAI’s bid to turn ChatGPT into an all-encompassing platform will put it on the path to profitability. William Lee, a corporate investor at SuRo Capital, sees it as a “chicken-or-the-egg” problem, he said in an interview with CNN. Demand may be hard to gauge ahead of time, but the more OpenAI customizes ChatGPT for tasks like shopping and schoolwork, the more people could use it for those activities. It’s a strategy that has worked for the tech giants of today — spend aggressively to make your technology essential to millions of users’ lives, figure out how to make money from them later. OpenAI is clearly betting that it will pay off again. “AI revenue is growing faster than, I think, almost any product in history,” Brockman told Bloomberg. “At the end of the day, the reason this compute power is so important, is so worthwhile for everyone to build, is because the revenue ultimately will be there.”

Monday, October 6, 2025

Need a Social Media Influencer for Your Brand? There’s an AI for That

Influencer marketing campaigns can be powerful tools, helping brands acquire new customers, make product launches go viral, and even drive growth in down markets. But identifying which content creators you should partner with is often a time-consuming and complicated task. With the launch of its new AI-powered creator discovery tool, influencer marketing platform Superfiliate aims to change that. The Venice, California-based company, which was founded in 2021 by Anders Bill and Andy Cloyd, uses first-party data from Meta to match businesses with influencers, according to a press release. “Instead of manually scrolling through platforms hoping to find creators who might work, brands can now leverage the same recommendation intelligence that makes Netflix, Spotify, and even your Instagram Explore page so effective,” CEO Cloyd said in the press release. Marketers can use the tool to search for specific kinds of content creators, conduct research on their content style and past brand partnerships, determine if they’re brand safe, and contact them directly through email. A promotional video by Superfiliate, for example, shows that a brand-side user can type something like “Find me home decor creators with 50K+ followers” into the tool’s search bar to surface several viable options. They can also upload a creator’s social media handle and find several influencers with similar niches and followings, per the video. There are, of course, plenty of tools already on the market that help brands search for creators, such as those by CreatorIQ and Grin. What makes Superfiliate’s stand out, according to Bill, who now serves as the company’s chief product officer, is its direct partnership with Meta. The social media giant’s participation “enables platform-native infrastructure that’s fundamentally more accurate and compliant than scraping-based approaches,” he said in the press release. While Superfiliate doesn’t publicly share its prices, Cloyd previously told Inc. that his company charges brands for the use of its platform plus an additional fee based on either their total ad spend or the upside of sales they generate through Superfiliate. If a business wants to use only specific parts of the platform, Superfiliate charges a flat rate instead. BY ANNABEL BURBA @ANNIEBURBA

Friday, October 3, 2025

This Company Says 1 New AI Feature Can Handle 20 Hours of Work in Seconds

Popular wedding planning platform The Knot has released an update to its mobile app that uses AI to streamline the process of finding local vendors. In a press release, the company said that the new update “cuts over 20 hours of planning work to just seconds.” The reimagined “planning experience,” as The Knot calls it, allows couples to browse through thousands of photos of weddings to create a vision board. By clicking an icon, users can activate a new feature called “make it yours,” which scans the image and then searches through The Knot’s database of venues and vendors to find similar options that “fit your vibe, budget, and location.” Christine Brown, The Knot’s VP of product, says that the company built this new AI feature entirely in-house, rather than relying on AI models from external providers like OpenAI or Anthropic. To create the feature, Brown says the company trained its own models on “more than a million images accessible on The Knot.” To test its effectiveness, The Knot ran a two-month pilot in which thousands of couples were given early access to the tool. As an example of how the new feature can help amateur wedding planners save time, Brown pointed to one of the most time-consuming aspects of throwing a wedding: picking a venue. Brown’s team estimated that most couples take roughly six weeks to pick a venue, devoting 3.5 hours per week to the search. That adds up to 21 hours of total searching time, which Brown says can now be reduced to minutes thanks to this new tool. The Knot says that this update is just the first step in a larger push to introduce AI-powered wedding planning features. As for what’s next, Brown says the company is building AI tools to help both couples and professional wedding planners and vendors. One of those tools is an AI-assisted email reply feature that helps vendors convert more leads into bookings. “We see AI as a powerful force to support the planning journey,” Brown says, “helping couples and vendors save time, while still keeping personalization and human touch at the heart of the wedding experience.” BY BEN SHERRY @BENLUCASSHERRY
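The Knot has not published how “make it yours” works internally, but the general pattern it describes (embed an inspiration photo, then rank vendors by visual similarity subject to budget and location filters) is straightforward to sketch. The snippet below is a hypothetical illustration of that pattern using random vectors in place of a real image-embedding model; none of the field names or numbers come from The Knot.

```python
import numpy as np

# Illustrative only: "embedding" stands in for the output of an image model
# run over an inspiration photo or a vendor's portfolio.

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def rank_vendors(inspiration_vec: np.ndarray, vendors: list[dict],
                 budget: float, location: str, top_n: int = 5) -> list[dict]:
    """Filter vendors by budget and location, then rank by visual similarity."""
    candidates = [v for v in vendors
                  if v["price"] <= budget and v["location"] == location]
    candidates.sort(key=lambda v: cosine_similarity(inspiration_vec, v["embedding"]),
                    reverse=True)
    return candidates[:top_n]


# Toy data: 512-dim random vectors standing in for learned image embeddings.
rng = np.random.default_rng(0)
vendors = [{"name": f"venue_{i}", "price": rng.integers(3_000, 20_000),
            "location": "Austin", "embedding": rng.standard_normal(512)}
           for i in range(100)]
matches = rank_vendors(rng.standard_normal(512), vendors, budget=12_000, location="Austin")
print([m["name"] for m in matches])
```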

Wednesday, October 1, 2025

Microsoft Is Adding Anthropic’s Claude to Its AI Tools. Here’s What It Can Do for Businesses

Microsoft is expanding the lineup of AI models used to power 365 Copilot, its workplace-focused AI service. The move is a sign that Microsoft is actively working to lessen its reliance on OpenAI’s models after investing over $10 billion in the company. In its blog post announcing the news, Microsoft said that while 365 Copilot will continue to be primarily powered by OpenAI’s models, users will now be able to harness Anthropic’s models in two specific ways. One is in Researcher, a 365 Copilot feature that searches the internet and analyzes internal data like emails, Teams chats, and files in order to conduct deep research. Normally, Researcher runs on models developed by OpenAI, but 365 Copilot customers will now have the option of using Claude Opus 4.1 (Anthropic’s most advanced model currently available) instead. Microsoft said that Opus 4.1 in Researcher could be used to accomplish tasks like “building a detailed go-to-market strategy, analyzing emerging product trends, or creating a comprehensive quarterly report.” The other method for using Claude in 365 Copilot is within Copilot Studio, a feature that enables users to build customized AI agents that can automate workflows. Users will now be able to easily select Claude Opus 4.1 or Claude Sonnet 4 (Anthropic’s mid-sized model) when creating agents. Microsoft says users will even be able to orchestrate whole teams of agents, all powered by different AI models, to work in tandem in order to accomplish tasks. Workplaces with Microsoft 365 Copilot licenses can now use Claude in Researcher and Copilot Studio, but only if opted in by an administrator. Microsoft wrote that “this is just the beginning,” and that users should stay tuned for Anthropic models to “bring even more powerful experiences to Microsoft 365 Copilot.” Microsoft is also reportedly working on an AI marketplace for news and media publishers, according to Axios. The marketplace would enable publishers to sell their content to AI companies, which would in turn use that content to train their new AI models. Axios reported that Microsoft discussed plans for the marketplace at its invite-only Partner Summit in Monaco. BY BEN SHERRY @BENLUCASSHERRY
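Copilot Studio is a low-code product, so the agent-orchestration feature described above is configured in Microsoft’s interface rather than written by hand. Still, the underlying idea (several agents, each bound to a different model, passing work along a pipeline) can be sketched generically. Everything in the snippet below is a stand-in: the stub call_model function, the model names, and the agent roles are assumptions for illustration, not Microsoft’s or Anthropic’s APIs.

```python
# Purely illustrative: a tiny "team of agents on different models" pipeline.

def call_model(model: str, prompt: str) -> str:
    # Stub: in a real system this would hit whichever model endpoint the agent uses.
    return f"[{model}] draft response to: {prompt[:40]}..."


AGENTS = {
    "researcher": {"model": "claude-opus-4-1", "role": "Gather facts and sources."},
    "writer":     {"model": "gpt-default",     "role": "Turn the research into a report."},
    "reviewer":   {"model": "claude-sonnet-4", "role": "Check the report for errors."},
}


def run_pipeline(task: str) -> str:
    """Each agent works on the previous agent's output, each on its own model."""
    context = task
    for name, agent in AGENTS.items():
        prompt = f"{agent['role']}\n\nInput:\n{context}"
        context = call_model(agent["model"], prompt)
        print(f"{name} ({agent['model']}) finished")
    return context


print(run_pipeline("Build a quarterly report on emerging product trends."))
```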

Monday, September 29, 2025

Anthropic’s Claude AI Has 1 Killer Use Case, According to New Data

Software engineering is the overwhelming favorite use case for Claude, Anthropic’s AI model, according to a new report published by the company. The report, the third in a series tracking AI’s economic effects, also breaks down how enterprises are using Anthropic’s AI models. The takeaway? Enterprises are heavily focused on using Claude to automate tasks. The report, titled “Uneven Geographic and Enterprise AI Adoption,” found that 36 percent of sampled conversations on Claude.ai, Anthropic’s ChatGPT-like platform for chatting with Claude, are centered on providing software development assistance. That makes it by far the AI model’s most popular use case. It should come as no surprise, then, that software developers working on applications are Claude’s heaviest users, making up 5.2 percent of all usage. The other top Claude.ai uses, according to Anthropic’s data, include providing assistance with writing, acting as a virtual tutor, conducting research, and supplying financial guidance and investment assistance. The report also tracked how enterprises are using Claude’s API, which enables developers to integrate Claude into their products and software applications. The data shows that businesses are largely using Claude to automate tasks, rather than using it as a learning tool or a collaborator. Anthropic says this shouldn’t come as a surprise, because the API naturally lends itself to automation. “Businesses provide context,” the company explained, “Claude executes the task, and the output flows directly to end users or downstream systems.” As with Claude.ai, according to the report, software development is by far the most popular use for enterprises using the Claude API, with computer and mathematical tasks accounting for just under half of all API traffic. More specifically, 6.1 percent of all Claude API use is for resolving technical issues and workflow problems in software development; 6 percent is for debugging and developing front-end code and components for web applications; 5.2 percent for developing or managing professional business software; and 4.9 percent for troubleshooting and optimizing software. In the report, Anthropic wrote that code generation tasks dominate API traffic “because they hit a sweet spot where model capabilities excel, deployment barriers are minimal, and employees can adopt the new technology quickly.” But coding isn’t the only way that enterprises are using Claude. Ten percent of API usage comes in the form of office and administrative tasks, 7 percent is for science tasks, 4 percent is for sales and marketing tasks, and 3 percent is for business and financial operations. The report also examined how the cost of using Claude to handle specific tasks correlates with usage amounts. According to the data, tasks typical of computer and mathematical jobs, like coding and data analysis, cost over 50 percent more than sales-related tasks, but still dominate overall use of the tech. This, according to the company, “suggests that cost plays an immaterial role in shaping patterns of enterprise AI deployment.” Rather than focusing on costs, Anthropic postulated, “businesses likely prioritize use in domains where model capabilities are strong and where Claude-powered automation generates enough economic value in excess of the API cost.” The report also revealed how each state in the U.S. typically uses Claude (specifically Claude.ai).
Unsurprisingly, California (where Anthropic is based) is far and away the biggest Claude user, accounting for 25.3 percent of total use. Other states with heavy Claude usage include New York (9.3 percent), Texas (6.7 percent), and Virginia (4 percent). BY BEN SHERRY @BENLUCASSHERRY
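To make the automation pattern in Anthropic’s report concrete, here is a minimal sketch of the kind of API call it describes, using Anthropic’s Python SDK: the business supplies context, Claude executes a narrow task, and the output can flow straight into a downstream system. The model string, prompt, and surrounding names are illustrative assumptions, not details taken from the report.

    # pip install anthropic
    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    # Context provided by the business: a stack trace pulled from an internal log.
    stack_trace = (
        "TypeError: cannot unpack non-iterable NoneType object\n"
        "  File 'billing.py', line 88, in close_invoice"
    )

    # Claude executes the task; the reply could be posted to a ticketing system.
    response = client.messages.create(
        model="claude-opus-4-1",  # illustrative model name
        max_tokens=1024,
        system="You are a debugging assistant. Explain the likely root cause and propose a fix.",
        messages=[{"role": "user", "content": stack_trace}],
    )
    print(response.content[0].text)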

Friday, September 26, 2025

Gen-Z AI Founders Are Merging Work and Life in These 3 Ways

Young AI founders in San Francisco are upending preconceived notions about Gen Z’s approach to work-life balance. In a recent Wall Street Journal article, founders ages 18 to 32 described a lifestyle entirely structured around their companies. These founders and their grind-first mindsets are in stark contrast to a 2024 Deloitte survey, which found that while 36 percent of Gen Z respondents consider work to be central to their identity, 25 percent consider work-life balance the top factor in choosing an employer. Far from quiet quitting, these founders are working seven-day weeks, living in their offices, and eating only for sustenance. And they couldn’t be happier, at least according to the Journal’s reporting. Here’s what these young founders are doing to win in the AI era. Living in the office: Several young founders interviewed by the Journal claimed to be working constantly. Marty Kausas, a 28-year-old founder building an AI startup called Pylon, said he had recently worked three 92-hour weeks in a row. And Nico Laqua, a 25-year-old cofounder of AI-powered insurance startup Corgi, said that he lives in his office and typically spends “every waking hour” working (doggedly, perhaps) on his company. He claims to only hire people willing to work seven days a week. Indeed, Corgi is currently hiring a chef to provide the team with breakfast, lunch, and dinner seven days a week. Blowing up the work-life balance: Even when they’re not physically in the office, these founders are reportedly almost always advancing their business interests in some way. Recent social activities for Kausas include attending a hackathon and taking a bike ride with a fellow founder. Emily Yuan, a Corgi co-founder, told reporters that she and her founder friends spend their free time discussing funding rounds while exercising and going to saunas. De-centralizing food: Another common theme among the interviewed founders is their attitudes toward food and meals. Kausas told the Journal that he eats pre-packaged breakfasts and lunches from nutrition and supplements company Blueprint, because “the workday is more efficient if he doesn’t have to think about food.” Haseab Ullah, founder of an AI customer support chatbot, also claimed to have a utilitarian approach to eating. Usually, his only meal of the day is an Uber Eats-delivered treat, a tactic that he said helps him “save time and avoid cooking.” (A young person using Uber Eats to avoid cooking may not be a shocker, but using it to source every meal sounds more extreme.) Michelle Fang, an event planner at VC firm Headline, told the Journal that many founder-focused get-togethers in San Francisco don’t even serve alcohol, both because it is “out of fashion in the San Francisco crowd,” and because many founders “aren’t old enough to drink” yet. BY BEN SHERRY @BENLUCASSHERRY

Thursday, September 25, 2025

OpenAI Introduces GPT-5-Codex, an AI Model Built Just for Coding

OpenAI has announced its newest model, GPT-5-Codex. The new model has been optimized for agentic coding in OpenAI’s suite of AI-powered software engineering tools, which is called Codex. This year, AI programs that can write and edit software have emerged as the most lucrative use case for AI, propelling multiple companies to huge revenue increases. These tools are being used both by professional developers to make their work more efficient, and by casual vibe coders, who lack the technical skill to create websites and apps. The Sam Altman-led company claims that by training this new AI model on real-world engineering tasks, it can outperform the default model. In a benchmark that compared that model and GPT-5-Codex’s ability to refactor code (essentially reorganizing and cleaning up code), GPT-5-Codex scored nearly 20 percent higher than the default model, which is simply called GPT-5. GPT-5-Codex is also said to be a strong independent worker. It can work autonomously on software for long stretches of time. According to a press release, OpenAI has seen the model “work independently for more than seven hours at a time on large, complex tasks, iterating on its implementation, fixing test failures, and ultimately delivering a successful implementation.” The new model could also help alleviate one of the most notable pain points of vibe coding: bad code. Many software developers have remarked that much of their time working with AI-assisted code editors is spent cleaning up the AI’s code, which isn’t always as thoughtfully written as a human expert’s would be. But OpenAI says that GPT-5-Codex has been “trained specifically for conducting code reviews and finding critical flaws.” In practice, the company says, this means GPT-5-Codex will review an entire codebase to identify flaws and autonomously test apps to find errors. OpenAI says that Codex currently handles “the vast majority” of proposed changes to code being written by OpenAI staffers, “catching hundreds of issues every day—often before a human review begins.” But even with its improved code review abilities, OpenAI still recommends using Codex as an additional reviewer; it says in a press release that it is “not a replacement for human reviews.” Unlike the normal version of GPT-5, GPT-5-Codex won’t be immediately available via API, and OpenAI recommends only using the model for coding tasks in Codex-supported environments. In addition, Codex is coming to mobile devices for the first time. Previously, to access Codex, you’d either need to use ChatGPT on a desktop computer or invoke Codex in an IDE (integrated development environment) like VSCode or Cursor. Now, Codex will be accessible in the ChatGPT iOS app, enabling easier coding on the go. Codex, and GPT-5-Codex, is available across all of ChatGPT’s paid tiers, with $20-per-month ChatGPT Plus members getting enough access to “cover a few focused coding sessions each week.” Meanwhile, $200-per-month ChatGPT Pro members will get enough to “support a full workweek across multiple projects.” Companies that pay for ChatGPT’s SMB-focused Business plan can purchase credits to give their developers more access to Codex, while larger companies with ChatGPT’s Enterprise plan get a shared credit pool. 
In OpenAI’s press release, engineers and tech leads at companies including Cisco, Duolingo, Ramp, Vanta, and Virgin Atlantic praised Codex’s utility, but it remains to be seen if GPT-5-Codex can help OpenAI take market share away from Anthropic, whose similar Claude Code product has proved very popular with professional and casual software developers. BY BEN SHERRY @BENLUCASSHERRY
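Since GPT-5-Codex itself is confined to Codex surfaces for now, developers who want an API-based workflow would call the general-purpose GPT-5 model instead. The snippet below is a minimal, illustrative sketch of a refactoring request using OpenAI’s Python SDK; the model string and the prompt are assumptions, not code from OpenAI’s announcement.

    # pip install openai
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    snippet = (
        "def squares_of_evens(xs):\n"
        "    r = []\n"
        "    for x in xs:\n"
        "        if x % 2 == 0:\n"
        "            r.append(x * x)\n"
        "    return r\n"
    )

    # Ask the default GPT-5 model (not GPT-5-Codex, which isn't exposed via the API
    # at launch, per OpenAI) to review and refactor the function without changing behavior.
    completion = client.chat.completions.create(
        model="gpt-5",  # illustrative model name
        messages=[
            {"role": "system", "content": "You are a careful code reviewer. Refactor for clarity; keep behavior identical."},
            {"role": "user", "content": snippet},
        ],
    )
    print(completion.choices[0].message.content)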

Monday, September 22, 2025

‘I Feel Like a Better Manager’: Execs Share How AI Transforms How They Lead

It hasn’t taken long for business leaders to discover that AI can help them manage people, and they are using it in ways that executives likely couldn’t have dreamed of several years ago—from reimagining how an org chart works to using AI to help them write a tricky email. We spoke to five CEOs—and one chief human resources officer—to learn how they are harnessing AI to help get the most out of their people. They are: Arvind Jain, CEO of Glean, an AI enterprise search platform that has 1,000 employees and was most recently valued at $7.2 billion, according to PitchBook; Stacy Spikes, CEO of MoviePass, a subscription-based movie ticketing service, which recently announced a $100 million capital investment; Aakash Shah, CEO of Wyndly, a startup focusing on personalized and modern allergy treatments, which ranked No. 333 on the Inc. 5000 this year; Renata Black, CEO of EBY, a membership-based women’s intimate apparel company that has raised more than $18 million, according to PitchBook; Ashley Kirkwood, CEO of Speak Your Way to Cash, a sales and speaking training organization; and Ali Bebo, chief human resources officer of Pearson, a U.K.-based education and testing services company. Throughout this process, these leaders are finding that AI is pushing their employees to reach beyond what they thought they were capable of—like setting benchmarks, preparing for one-on-ones, and improving reports before they reach their managers. And it’s giving CEOs a lot to consider when it comes to how they run their workplace. The technology, says Spikes, is “going to create more than it will take away.” 1. Supercharge the org chart Jain, who started Glean in 2019, sees himself as a facilitator. That means he has to make sure every task gets assigned to the right person or group—and for him, an old-fashioned org chart just isn’t good enough. “It gets obsolete quickly, because the world is changing so fast,” he says. That’s why his company, which makes AI tools designed to help businesses find answers and automate workflows, has created a kind of living org chart with AI. Jain says that when he has an idea, he doesn’t have time to waste working out which person or team has capacity to take it on. Instead, he wants to get straight to the person or people who can best work on it with him. Glean’s AI examines employees’ work and contributions in real time, mapping core competencies in a way a traditional org chart can’t. The AI is “constantly observing on any given subject matter who are the top voices, who are the ones who are answering the most questions in Slack or in Teams, who are the ones who are writing authoritative documents on that,” Jain says. He adds that he uses this tool every day to help Glean move fast on new ideas and keep projects on track. Pearson’s Bebo says that employees use an in-house AI agent called CARA that can answer questions about their role and ways that they can excel or get promoted: “She is what I would describe as our people’s friend as they think about navigating their career here,” Bebo says. CARA is designed to act as an enabler, helping both employees and managers be more effective and understand where they are in relation to their job expectations and goals. “We don’t want to have AI replace managers, but we really want to think about how it helps our managers even perform better,” she says. One way Spikes sees AI transforming his workforce at MoviePass is by creating more opportunities for the people he has—and for tomorrow’s hires.
“I’m finding that it is overall increasing how you’re going to use people, not decreasing how you’re going to use them,” he says. “I think that’s the beauty of this emerging technology.” 2. Transform meetings from status updates into deep conversations Shah, Wyndly co-founder and CEO, uses AI to better prepare for his one-on-ones, especially with his executive team and co-founder, Manan Shah, his cousin. He sees it as akin to the culinary technique of “mise en place,” where chefs prep everything they need before they turn on the heat. “If we can get everything prepped before we’re ready to jump into the work, it makes the work both more fulfilling but also just more effective,” he says, adding that it also creates room for more interpersonal connections between him and his direct reports. “I think that’s what the difference maker is between a good and a bad leader, at least for me, is whenever I’ve been able to spend more time on the interpersonal stuff, I found that I feel like a better manager,” Shah says. At intimate apparel retailer EBY, CEO Black says the entire company is mandated to use AI to help optimize reports and analysis before presenting anything to her. Black makes them show her their original plan and how they optimized it using AI. As a result, her people have more clarity into what they are doing and how to achieve their full potential, she says. “AI allows them to present information in a much clearer way that allows them to be more confident in what they’re presenting,” she says. In turn, she is able to give them better feedback. 3. Power up performance reviews and employee evaluations Bebo, who joined Pearson in 2021 to assist in its culture and business transformation, says that AI agents are embedded in the performance reviews at the company. But while managers are still doing the employee evaluations, the AI can help both employees and leaders craft sharper and more articulate reviews and self-assessments so that every single one “sounds like Pearson.” The agents aren’t mandatory, but for managers who do use them, Bebo reports that they have sped up the performance review process and helped them deliver meaningful feedback to their employees. Glean takes this approach one step further in its performance reviews, says CEO Jain. While managers and employees use AI prompts to help them write their assessments like at Pearson, Glean also uses AI to collect and analyze each employee’s contribution to the company, enabling managers to have a complete, clear, and—crucially—objective record of everything the employee did during the review period. That combats biases and favoritism, Jain says, but it also means he doesn’t forget or overlook any of his employees’ achievements or sticking points. “The conversation shifts from getting on the same page to, actually, we are already on the same page, and this is now a time to solve problems that you run into so that you can become better, you can grow as an employee,” he says. 4. Use AI prompts to get the best responses from your people Spikes, who co-founded MoviePass in 2011, left in 2018, and then returned to save the company in 2021, says he started using AI prompts with his teams to challenge them to think differently about how they are tackling business challenges or new projects. “That curiosity helps speed up the team,” he says, mentioning that some projects that used to take weeks now take as little as a couple of hours. 
What that gives him as a manager is not just a faster outcome, but also more opportunities for iterations and feedback, leading to a better outcome. “You get much more of a response loop that you just didn’t have before,” Spikes says. Jain also uses AI prompts with his Glean executive team, asking them to set business goals each week. Then he uses a custom-built AI that helps him track progress and gather insights on each of those goals. He says that gives him a “deep understanding” of precisely where there was forward momentum and where there were slowdowns or blocks. And Wyndly CEO Shah says his business is moving to a similar model. When people do their daily check-in, they are prompted to think about how what they are doing is aligning with the business’s goals and to preempt what questions Shah might have for them based on what they report. That way, he says, “everyone’s speaking the same language.” 5. Let AI be a thought partner Knowing what to say and how to say it is crucial to getting a CEO’s message and vision across to their employees, and AI can act like the ultimate comms specialist and thought partner to do just that. “Anytime I have to write a very complicated email, I just press play. I tell it exactly what I want to say, and then I say, polish this up, and then make it super short and punchy. And it gives me a really strong response,” EBY’s Black says. Speak Your Way to Cash CEO Kirkwood, who published a book with the same name as her company in 2021, agrees that AI can help take the edge off otherwise potentially tense interactions with staff. “If I have to have a difficult conversation, it’s helpful for me to have a script,” she says. “That way I can have it quickly, succinctly, get in and out, and not open up any legal liabilities.” AI can also help temper hard-to-hear feedback so that your employees get the message without getting over-anxious, Black says, adding that because she has a very direct style of communication, AI can help soften her tone without losing impact. “That’s like the AI coaching me on my leadership skills,” she says. Wyndly’s Shah puts it another way: When he wants to send out a company-wide message at Wyndly, AI is a strategy for getting over “blank-page syndrome.” And at Pearson, some of the company’s executives have created digital twins that act as “thought partners,” helping them role-play different conversations and strengthen their arguments, Bebo says. “Think of AI as your friend and a partner,” she says. “It doesn’t replace your owning and delivering and making sure you’re sending the right message. It’s just sharpening the conversation.” BY CLAIRE CAMERON, FREELANCE WRITER

Saturday, September 20, 2025

Elon Musk’s xAI Makes a Strategic Acquisition of X (Formerly Twitter)

Elon Musk just pulled off his most strategic move yet, and no one saw it coming. His AI company, xAI, just acquired X (formerly Twitter) in a massive $33 billion deal. On the surface, it looks like just another corporate shuffle. But in reality, Musk may have just outmaneuvered the entire AI industry. Here’s why this changes everything: X is valued at $33 billion; xAI is now worth a staggering $80 billion; and the deal is an all-stock transaction, excluding $12 billion in X’s debt. At first glance, it seems like Musk took a loss—after all, he originally paid $44 billion for Twitter. But this move isn’t about social media. It’s about something far more valuable: data. The real reason Musk bought Twitter: Back in 2022, people were confused. Why would the world’s richest man, known for building rockets and electric cars, want a struggling social media platform? Now, the answer is clear: Twitter (now X) was never just a social media company—it was a massive, real-time data engine. With 600 million active users generating a constant stream of conversations, opinions, and real-world events, X is a goldmine for training AI models. And that’s exactly what xAI needs to take on OpenAI, Anthropic, and Google. The timing is no coincidence: Just a few months ago, xAI secured a $6 billion funding round at a $24 billion valuation. Now, after this acquisition, its valuation has skyrocketed to $80 billion—outpacing even OpenAI’s growth. Why does this matter? Most AI companies struggle to get high-quality, real-world data. Their models rely on stale, pre-existing datasets that don’t reflect real-time human behavior. But xAI now has something its competitors don’t: a live firehose of human interaction. This means more human-like AI models, a competitive edge in real-time applications, and the ability to train AI on the most up-to-date information available anywhere. What happens next? This merger isn’t just about an AI assistant inside X. It’s the foundation for something much bigger. First, AI-driven content and conversations: expect smarter content recommendations that understand not just what you like, but why you like it. AI-generated insights, real-time fact-checking, and even automated dispute resolution could change how people engage online. Second, X becomes more than social media: this could push X toward becoming a full-fledged “everything app”—integrating AI-powered tools for content creation, virtual assistants, and even education. Third, regulatory strategy is at play: by structuring the deal as xAI acquiring X (instead of the other way around), Musk positions this as an AI-driven initiative rather than a social media consolidation—potentially avoiding regulatory roadblocks. The bottom line: this isn’t just another tech merger. It’s a calculated move that positions xAI as a major player in AI, while using X’s data to supercharge its models. Musk isn’t just competing with OpenAI, Google, and Anthropic. He’s changing the game entirely.

Monday, September 15, 2025

Anthropic Says This AI Tool Can Now Create and Edit Documents

Anthropic’s Claude AI has been updated with the ability to create and edit files, including PDFs, Excel spreadsheets, Word documents, Google docs, and more. Anthropic made the announcement on its blog, explaining that the new features live on its consumer-facing platform, Claude.ai. Until now, the platform could analyze files, but couldn’t create or manipulate them. (Claude.ai is basically Anthropic’s version of ChatGPT.) In a video detailing how the new feature works, a user asks Claude to help them analyze revenue data for their small food truck fleet and package the findings in a Google doc. After the user uploads a few CSV files containing the data, Claude performs its analysis, creates a series of data visualizations, and puts it all together in a handy DOCX file that can either be downloaded or opened directly in Google Drive. “Whether you need a customer segmentation analysis, sales forecasting, or budget tracking,” Anthropic wrote in its blog, “Claude handles the technical work and produces the files you need.” To create files, Claude uses what Anthropic refers to as a “private computer environment,” in which the AI model can write code and run programs. This is similar to ChatGPT’s recently announced agent mode, which gives the AI platform access to a virtual browser that it can use to navigate the internet. These features, which involve giving an AI model access to additional tools, are referred to as agentic capabilities. The company advises starting “with straightforward tasks like data cleaning or simple reports,” and then working up to “complex projects like financial models once you’re comfortable with how Claude handles files.” Currently, when users ask Claude to create a document or spreadsheet, the model opens a window called an Artifact, which is essentially an interactive block of content. Prior to the release of these new features, if you were to ask for a document, Claude would create a document Artifact. If you asked for a spreadsheet, it would create an interactive spreadsheet Artifact. Now, instead of keeping those Artifacts contained within chats, users can download and use their AI-created files. Anthropic says that file creation is currently available for workplace-based Claude Team and Enterprise users, and Claude Max subscribers, who pay $200 per month to the company. Claude Pro users, who pay $20 per month, will get access to the feature “in the coming weeks.” BY BEN SHERRY @BENLUCASSHERRY

Friday, September 12, 2025

The Best AI Success Stories Are Sitting on Hard Drives and Have 1 User

I had coffee with my favorite CTO yesterday and he told me about his new AI app. It’s basically a CTO-in-a-box. And it’s awesome. And he’s the only one using it. And it’s going to stay that way. Despite my trying to persuade him otherwise. One of the reasons there’s so little proof of the value of AI is that the best, most useful, most ingenious apps actually never leave the creator’s hard drive. In fact, once my friend pointed out what he was doing, I myself realized that most of what I’ve created with AI is available only to me on my hard drive, and moreover, that’s definitely where my best stuff is. In fact, it seems like most of the better “AI apps” aren’t even primarily AI, but AI being implemented, like my CTO friend implemented it, to unlock automation and unstructured data — and ultimately narrative output — in a way that couldn’t be done before. So why is this happening? The Genius of CTO-in-a-Box: I’m probably overhyping this because he’s my buddy and he kindly listens to a lot of my BS before it gets to you folks, but my CTO friend’s CTO-in-a-box isn’t anything to eff with. He and I worked shoulder-to-shoulder for years, and together we developed some amazing little features, a few apps, and the tech backbone of a multimillion-dollar business. I say “we” but all I did was dream stuff up with him, vet it, and MVP it out, after which he and his brilliant team coded it. And they got it right the first time every time, and he usually added his own flair to surprise me with some technical trick no one would ever notice but made what we were doing 10 times better under the hood. He left that company not long after I did, and despite my trying to wrangle him into what I was doing, he took another job to come in and do a technical turnaround on a private equity-purchased startup that had tons of potential but was stagnating. He hadn’t done anything like a turnaround before and I had just finished one. We have coffee every two weeks and so our conversations turned to the science of the turnaround. Then he disappeared for a month, and when we got back together, yesterday, he shocked the hell out of me. “Basically, what I did was take every bit of data, company data, sales data, all the code, all the documentation — they had a lot of ‘stuff’ [his air quotes] just sitting in directories and databases,” he told me. “I slammed it all into a vector database, wrote some code, integrated Claude Code to build some agents and totally write the front end, and now the LLM is like my personal assistant.” He’s underselling it. I know this because of the example he gave me. Builders Gonna Build: “We had a sudden spike in resources, so I asked it what was going on, and it brought me to the right section of code that was the problem and hypothesized why, and I fixed it in 30 seconds,” he said. And then he made me jealous. “Oh, it also does all my weekly status reports and my standup agenda and all the reporting I have to do for the ELT and the board,” he continued. “I don’t let it send emails, but it’ll create the draft for me to review with the summary and a link to the report.” “Tell me you built it so anyone can use it,” I said. “Of course,” he responded. “I mean, not for all the outliers, but yeah you could start over and import new data, it knows what it’s getting and what to do with it.” “Tell me it’s self-perpetuating with new data it creates on its own,” I said, “like those email summaries and reports.” He just smiled. “Dude,” I said and threw my hands up. “It’s a CTO-in-a-box.
Let me at it.” “No,” he laughed. “It’s staying on my hard drive.” “But you built it like a product.” “Because that’s how I roll.” Then he took a smug sip of his mocha whatever and I couldn’t even be mad at him. Don’t Be So Quick to Write Off AI: I say this as the guy who can’t stop writing off AI. Nah, I’ve been disparaging how we’ve been selling AI for years now, having been building it since 2010, and, in a nascent sense, as far back as 2000. But each time I’ve firebombed today’s AI hype in public, especially generative AI — because that’s the “AI” everyone is familiar with and what 95 percent of people are talking about when they say “AI” — I’ve prefaced my flaming with how amazing the technology actually can be when you know what you’re doing. In the hands of my CTO friend, amazing doesn’t even begin to describe what you can do. For the record, he’s on the uppermost subscription level of at least five different providers, a four-figure-a-month bill footed by his private equity overlords. And he’s aware that he will be squeezed soon. In fact, he said openly, “I got on the gravy train while the platforms are loss-leading.” They’ll price him out, and that’s another reason not to build a public product around it. He doesn’t know the true economics. Do What the CTOs Are Doing: Of course, I asked my CTO friend to send me his documentation, because of course he documented it, and I’m building something around content and creators that could use its own CTO-in-a-box. And that got me thinking. Right now, all the coding I’ve done with the AI and the agents and such, it’s all sitting on my hard drive, and like my friend, I’ve built it like a product but I’m the only user in the credentials table. But unlike my friend, I built it like a product because I am indeed thinking of packaging it and selling it as a product down the road. If I could just stop writing for a while and get my brain on it for more than five minutes. Which, in today’s world, actually gets a lot of Claude coding done. It’s the peer review that takes time, if you get me. If I’ve got advice, it’s this: If you want to build something with AI, find the people who are doing amazing things on their hard drive — facing real challenges, solving real problems, and not just leveraging AI to jump on the gravy train. Buy them a mocha whatever and ask them what they’re doing and how they’re doing it. Because the more my CTO friend spoke, the more my vision was clouded by dollar signs. The problem is that for every story like his I hear 100 more stories about chatbot wrappers and unstructured data parsers being sold like they’re magic. Those aren’t being funded anymore, finally. That opens the door for people to wring real value and usage out of this AI nonsense. If you’re a fan of real value and usage, jump on my email list. I try to talk about that as much as possible, whether that’s AI or tech or something else. EXPERT OPINION BY JOE PROCOPIO, FOUNDER, JOEPROCOPIO.COM @JPROCO
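His actual build isn’t public, but the pattern he describes (dump internal documents into a vector store, retrieve the relevant passages, and let an LLM answer against them) is easy to sketch. Everything below is hypothetical and illustrative: the documents, the embedding model, and the Claude model name are stand-ins, not details from his system.

    # pip install sentence-transformers anthropic numpy
    import numpy as np
    import anthropic
    from sentence_transformers import SentenceTransformer

    # Stand-in "company data": in the real pattern this would be code, docs,
    # sales data, postmortems, and so on, chunked into passages.
    docs = [
        "Deploy runbook: worker autoscaling limits live in infra/scaling.yaml.",
        "Postmortem 2024-11: a resource spike was caused by an unbounded retry loop in sync.py.",
        "Board template: weekly status covers uptime, burn rate, and roadmap slippage.",
    ]

    embedder = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative embedding model
    doc_vecs = embedder.encode(docs, normalize_embeddings=True)

    def retrieve(question, k=2):
        """Return the k passages most similar to the question (cosine similarity)."""
        q = embedder.encode([question], normalize_embeddings=True)[0]
        scores = doc_vecs @ q
        return [docs[i] for i in np.argsort(scores)[::-1][:k]]

    question = "We just saw a sudden spike in resource usage. Where should I look first?"
    context = "\n".join(retrieve(question))

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    reply = client.messages.create(
        model="claude-sonnet-4-20250514",  # illustrative model name
        max_tokens=512,
        messages=[{"role": "user", "content": f"Company context:\n{context}\n\nQuestion: {question}"}],
    )
    print(reply.content[0].text)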

Wednesday, September 10, 2025

Mark Cuban Has 2 Words for People Who Don’t Want to Learn AI

Emma Grede, Skims founding partner and occasional guest Shark on Shark Tank, was never an AI skeptic, exactly. In 2023, she offered a cash bonus to her staff for finding creative ways to use AI in their work. But she herself was mostly just using ChatGPT as an occasional replacement for Google search. “I’m using AI like a 42-year-old woman,” she joked in a recent Fortune interview. Then she had former Shark Mark Cuban on her podcast. Turns out the billionaire founder and former Mavs owner has strong words — two, to be exact — for people like Grede who are dragging their feet on experimenting with AI. Talking to Cuban was enough to convince Grede to change her approach. She started Googling classes on AI and downloading AI apps immediately. The episode “gave me a new urgency around how I use AI,” she told Fortune. “He gave me a kick.” It might be just the kick you need too. Not learning AI? Mark Cuban says “you’re f***ed.” On her podcast, Grede didn’t ask Cuban about AI. She asked him about how to get started with a business idea. But the billionaire entrepreneur insisted that now, there’s no difference between going from idea to execution and utilizing AI. You need the latter to do the former fast and well. “The first thing you have to do is learn AI,” Cuban responded. “Whether it’s ChatGPT, Gemini, Perplexity, Claude, you’ve got to spend tons and tons and tons of time just learning how it works and how to ask it questions.” Noodling around with new tools and asking various AI models questions is how Cuban is spending his time at the moment. And he has no patience for founders and others in business who aren’t doing the same. “What do you say to someone who is like, ‘I don’t like AI. I don’t want any more technology in my life’?” Grede asked. Cuban’s answer was short, punchy, and profane: “You’re f***ed.” Is Mark Cuban right? Cuban went on to explain that the current moment is much like his early career at the dawn of the internet age. New, hugely disruptive technology is rolling out at an incredible rate. Those who don’t run to keep up are going to end up as roadkill. Saying you don’t want to use AI, he says, “is like people saying back in the day, I don’t want to use the PC. I don’t want to use the internet. I don’t need a cellphone, Wi-Fi.” Those businesses died. Is he right in making the comparison? He’s certainly correct that those around you are adopting AI at a rate equal to or greater than the rate at which the internet took off. Harvard researchers have compared recent data on AI usage to government data on the uptake of new technology at the turn of the millennium. They found that people are adopting AI more quickly today than they adopted the internet back then. “The usage rate [for AI] … is actually higher than both personal computers and the internet at the same stage in their product cycles,” the trio of researchers explained to The Harvard Gazette. No one can predict the future. And the breathlessness of some discussions of AI certainly suggests that the hype will exceed the reality in plenty of areas. We may yet witness an AI “trough of disillusionment” or even a crash. But the numbers strongly suggest that Mark Cuban is on to something when he says that ignoring AI is just not a viable option. What happened to businesses that ignored the internet? “If you were to go back to 1984 and tell people, ‘Hey, there’s this new thing called the personal computer. I have a crystal ball.
Twenty years from now, everybody’s going to have one of these and every single new technological development and every single new product is going to be using it as the base.’ Knowing that now, what would you do differently?” the Harvard researchers ask. “You could make billions and billions of dollars,” they add. According to their data, they say, “it sure looks like generative AI is going to be on that scale,” and “the spoils will go to people who can figure out how to harness it first and best.” How to get started with AI: If you’re convinced, how do you start learning AI? Playing around with new tools and technologies as Cuban suggests is certainly a good first step. Elsewhere, Cuban — along with other tech icons like Tim Cook and Bill Gates — has outlined specific ways he’s using AI, which could give you additional ideas. Other AI experts have advice as well. Nvidia CEO Jensen Huang has talked on multiple occasions about how he’s personally experimenting with AI. OpenAI president Greg Brockman has offered advice on honing your AI prompting skills. No one knows exactly how the AI revolution will play out, or even the best way to start to prepare. But even the skeptics should probably heed Mark Cuban’s words and admit that AI is going to change the world. If you stick your head in the sand, you’re doomed. Better start experimenting today so you can be prepared however this thing plays out. EXPERT OPINION BY JESSICA STILLMAN @ENTRYLEVELREBEL

Monday, September 8, 2025

Is the AI Bubble Too Big to Fail?

On Wednesday, analysts bemoaned Nvidia’s lackluster Q2 earnings. The company posted a 56 percent gain in sales, its smallest in more than two years, despite the chipmaker’s positioning as one of the biggest winners of the AI boom. The company’s inability to live up to its expectations has reignited fears of an AI bubble on the precipice of rupture. Despite Silicon Valley throwing hundreds of billions of dollars into its most speculative gamble yet, the revolutionary promises, and more important, profits, of AI have yet to materialize. OpenAI is expected to lose money this year, even as its revenue exceeds a projected $20 billion. Meta’s CFO told investors, “We don’t expect that the genAI work is going to be a meaningful driver of revenue this year or next year,” despite the company dropping upwards of $70 billion on its AI investments this year. A recent MIT study found that U.S. companies have invested between $30 billion and $40 billion into generative AI tools but are seeing “zero return” from AI agents. Some fear that all of this could presage a collapse bigger than the dot-com bust of the early 2000s. As Apollo Global Management’s chief economist warned in a recent investor’s note, big tech firms are driving the market with valuations more bloated than they were in the 1990s. This would be scary for big tech companies—except many of them, according to several researchers who spoke to Inc., are already too big to fail, thanks to how closely the industry has become intertwined with our economy and government. The leading AI companies believe “the only way for this technology to exist is to be as big as possible, and the only way for it to get better is to throw more money at it,” says Catherine Bracy, CEO of the policy and research organization Tech Equity. That need for money and investment has spurred an industry lobbying blitz, pushing everyone from OpenAI CEO Sam Altman to VCs like Andreessen Horowitz into the halls of Congress over the past couple of years. Just earlier this week, The Wall Street Journal reported that Andreessen Horowitz and OpenAI are behind a nascent lobbying campaign through a super PAC network that’s already amassed $100 million to elect AI-friendly candidates. Those beltway relationships appear to be paying off. Currently, more than 30 states offer tax incentives for data center construction. But the booming growth of the industry has been enormously costly, largely owing to the vast amounts of energy needed to run large language models. The Trump administration’s AI Action Plan frames the industry’s growth as essential to “human flourishing” in the U.S. and the country’s continued geopolitical dominance. “We’re now locked into a particular version of the market and the future where all roads lead to big tech,” says Amba Kak, co-executive director of the AI Now Institute, which studies AI development and policy. Indeed, the success of major stock indexes—and perhaps your 401(k)—is resting on the continued growth of AI: Meta, Amazon, and the chipmakers Nvidia and Broadcom have accounted for 60 percent of the S&P 500’s returns this year. But ultimately, in the event of a market reckoning, it’s likely that the biggest companies would remain relatively unscathed. “AI is too big to fail in the United States, both because of how intertwined it has become with the government, and also because of how much AI investment is propping up the stock market and the entire economy,” says Daron Acemoglu, an economist at MIT. 
When the bubble pops, it’s likely going to be the smallest AI businesses, those riding the AI hype train with products based on existing LLMs, that’ll get wiped out in an eventual rupture. “Those little companies are not going to get bailed out,” he argues. Hardware companies like Nvidia or big tech firms, with diverse revenue streams, are likely to be better insulated from the potential fallout of the bubble popping. As Timnit Gebru, a former Google AI researcher and founder of the Distributed AI Research Institute, puts it, a chipmaker like Nvidia is essentially just selling shovels during a gold rush. “Shovels are still useful with or without the gold rush,” she says. BY SAM BLUM @SAMMBLUM

Friday, September 5, 2025

Why Google’s New AI Image Generator Could Give OpenAI a Run for Its Money

Google just dropped a major update for its AI image generation tech, enabling anyone to generate images with more accurate results. In a blog post, Google revealed Gemini 2.5 Flash Image (also called nano-banana), its latest and greatest AI model for generating and editing images. Google says the new model gives users the ability to blend multiple images into a single image, maintain character consistency across multiple generations, and make more granular tweaks to specific parts of an image. One of the model’s new features is the ability to maintain character consistency, meaning that if you create a specific look for an AI-generated character, the character will maintain that look each time you generate a new image featuring them. “You can now place the same character into different environments,” Google wrote, “showcase a single product from multiple angles in new settings, or generate consistent brand assets, all while preserving the subject.” Gemini 2.5 Flash Image can also make more granular edits to images, like blurring a background and changing the color of an item of clothing. Another major feature is the ability to fuse multiple images into a single image. Google says this could let people place an object into a room or restyle an environment with a new color scheme or texture. To demonstrate, Google built a demo in which users can upload a picture of a room, upload images of products that they’d like to see in the room, and then drag the product image to the specific place where they want it to appear in the room. It’s not difficult to imagine people using this feature to see how a new appliance or piece of furniture will look in their home before committing to a purchase. Google also says that Gemini 2.5 Flash Image is particularly adept at sticking to visual templates, such as real estate listing cards, uniform employee badges, and trading cards. This kind of feature could also be used to create thumbnails for YouTube videos. Gemini 2.5 Flash Image actually debuted on the website LMArena last week under the codename nano-banana. LMArena is a platform for evaluating an AI’s performance against other AIs, and big artificial intelligence companies often submit their new models to the site before publicly revealing them. Also of note is Gemini 2.5 Flash Image’s API price. According to Google, the model is priced at $30 per one million output tokens. In comparison, OpenAI’s image-generation API costs $40 per one million output tokens, making Google’s offering significantly cheaper. The new model can be used in the Gemini app and in Google AI Studio. BY BEN SHERRY @BENLUCASSHERRY
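For developers who want to try the model programmatically rather than through the Gemini app or AI Studio, the sketch below shows roughly what a call looks like with Google’s genai Python SDK. The model identifier, prompt, and output handling are illustrative assumptions based on the SDK’s general pattern, not code from Google’s announcement.

    # pip install google-genai
    from google import genai

    client = genai.Client()  # reads GEMINI_API_KEY from the environment

    # Ask the image model (codename nano-banana) for a single generated image.
    response = client.models.generate_content(
        model="gemini-2.5-flash-image-preview",  # illustrative model name
        contents="A photoreal product shot of a ceramic mug on a walnut desk",
    )

    # Image bytes come back as inline data on the response parts; save the first one.
    for part in response.candidates[0].content.parts:
        if getattr(part, "inline_data", None) is not None:
            with open("mug.png", "wb") as f:
                f.write(part.inline_data.data)
            break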