Friday, September 12, 2025
The Best AI Success Stories Are Sitting on Hard Drives and Have 1 User
I had coffee with my favorite CTO yesterday and he told me about his new AI app. It’s basically a CTO-in-a-box.
And it’s awesome.
And he’s the only one using it.
And it’s going to stay that way.
Despite my trying to persuade him otherwise.
One of the reasons there’s so little proof of the value of AI is that the best, most useful, most ingenious apps actually never leave the creator’s hard drive. In fact, once my friend pointed out what he was doing, I myself realized that most of what I’ve created with AI is available only to me on my hard drive, and moreover, that’s definitely where my best stuff is.
Indeed, it seems like most of the better “AI apps” aren’t even primarily AI, but AI implemented, the way my CTO friend implemented it, to unlock automation and unstructured data — and ultimately narrative output — in a way that couldn’t be done before.
So why is this happening?
The Genius of CTO-in-a-Box
I’m probably overhyping this because he’s my buddy and he kindly listens to a lot of my BS before it gets to you folks, but my CTO friend’s CTO-in-a-box isn’t anything to eff with.
He and I worked shoulder-to-shoulder for years, and together we developed some amazing little features, a few apps, and the tech backbone of a multimillion-dollar business. I say “we” but all I did was dream stuff up with him, vet it, and MVP it out, after which he and his brilliant team coded it. And they got it right the first time every time, and he usually added his own flair to surprise me with some technical trick no one would ever notice but made what we were doing 10 times better under the hood.
He left that company not long after I did, and despite my trying to wrangle him into what I was doing, he took another job: coming in to do a technical turnaround on a private-equity-purchased startup that had tons of potential but was stagnating.
He hadn’t done anything like a turnaround before, and I had just finished one. We have coffee every two weeks, so our conversations turned to the science of the turnaround. Then he disappeared for a month, and when we got back together yesterday, he shocked the hell out of me.
“Basically, what I did was take every bit of data, company data, sales data, all the code, all the documentation — they had a lot of ‘stuff’ [his air quotes] just sitting in directories and databases,” he told me. “I slammed it all into a vector database, wrote some code, integrated Claude Code to build some agents and totally write the front end, and now the LLM is like my personal assistant.”
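What he describes is, in outline, a standard retrieval-augmented generation (RAG) pipeline: embed the documents, store the vectors, retrieve the closest matches for a question, and hand them to the LLM. His actual implementation isn’t public, so what follows is only a toy sketch of that general pattern; the bag-of-words “embedding,” the `ToyVectorStore` class, and the stubbed-out LLM step are all stand-ins for a real embedding model, vector database, and Claude integration.

```python
import math
from collections import Counter


def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real pipeline would call an
    # embedding model and store dense vectors in a vector database.
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(
        sum(v * v for v in b.values())
    )
    return dot / norm if norm else 0.0


class ToyVectorStore:
    """Holds (text, vector) pairs and returns the closest texts to a query."""

    def __init__(self) -> None:
        self.docs: list[tuple[str, Counter]] = []

    def add(self, text: str) -> None:
        self.docs.append((text, embed(text)))

    def search(self, query: str, k: int = 3) -> list[str]:
        q = embed(query)
        ranked = sorted(self.docs, key=lambda d: cosine(q, d[1]), reverse=True)
        return [text for text, _ in ranked[:k]]


def answer(query: str, store: ToyVectorStore) -> str:
    # A real assistant would send this assembled prompt to an LLM;
    # here we just return it so the retrieval step stays visible.
    context = "\n".join(store.search(query))
    return f"Context:\n{context}\n\nQuestion: {query}"
```

Asking the store about a resource spike surfaces the document about the offending module first, which is the whole trick: the model never has to hold the entire codebase in its head, it just gets the relevant slice for each question.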
He’s underselling it. I know this because of the example he gave me.
Builders Gonna Build
“We had a sudden spike in resources, so I asked it what was going on, and it brought me to the right section of code that was the problem and hypothesized why, and I fixed it in 30 seconds,” he said.
And then he made me jealous.
“Oh, it also does all my weekly status reports and my standup agenda and all the reporting I have to do for the ELT and the board,” he continued. “I don’t let it send emails, but it’ll create the draft for me to review with the summary and a link to the report.”
“Tell me you built it so anyone can use it,” I said.
“Of course,” he responded. “I mean, not for all the outliers, but yeah you could start over and import new data, it knows what it’s getting and what to do with it.”
“Tell me it’s self-perpetuating with new data it creates on its own,” I said, “like those email summaries and reports.”
He just smiled.
“Dude,” I said and threw my hands up. “It’s a CTO-in-a-box. Let me at it.”
“No,” he laughed. “It’s staying on my hard drive.”
“But you built it like a product.”
“Because that’s how I roll.”
Then he took a smug sip of his mocha whatever, and I couldn’t even be mad at him.
Don’t Be So Quick to Write Off AI
I say this as the guy who can’t stop writing off AI.
Nah, I’ve been disparaging how we’ve been selling AI for years now, having been building it since 2010, and, in a nascent sense, as far back as 2000. But each time I’ve firebombed today’s AI hype in public, especially generative AI — because that’s the “AI” everyone is familiar with and what 95 percent of people are talking about when they say “AI” — I’ve prefaced my flaming with how amazing the technology actually can be when you know what you’re doing.
In the hands of my CTO friend, amazing doesn’t even begin to describe what you can do.
For the record, he’s on the uppermost subscription level of at least five different providers, a four-figure-a-month bill footed by his private equity overlords. And he’s aware that he will be squeezed soon.
In fact, he said openly, “I got on the gravy train while the platforms are loss-leading.”
They’ll price him out, and that’s another reason not to build a public product around it. He doesn’t know the true economics.
Do What the CTOs Are Doing
Of course, I asked my CTO friend to send me his documentation, because of course he documented it, and I’m building something around content and creators that could use its own CTO-in-a-box. And that got me thinking. Right now, all the coding I’ve done with the AI and the agents and such, it’s all sitting on my hard drive, and like my friend, I’ve built it like a product but I’m the only user in the credentials table.
But unlike my friend, I built it like a product because I am indeed thinking of packaging it and selling it as a product down the road. If I could just stop writing for a while and get my brain on it for more than five minutes.
Which, in today’s world, actually gets a lot of Claude coding done. It’s the peer review that takes time, if you get me.
If I’ve got advice, it’s this. If you want to build something with AI, find the people who are doing amazing things on their hard drive — facing real challenges, solving real problems, and not just leveraging AI to jump on the gravy train.
Buy them a mocha whatever and ask them what they’re doing and how they’re doing it. Because the more my CTO friend spoke, the more my vision was clouded by dollar signs. The problem is that for every story like his I hear 100 more stories about chatbot wrappers and unstructured data parsers being sold like they’re magic.
Those aren’t being funded anymore, finally. That opens the door for people to wring real value and usage out of this AI nonsense.
If you’re a fan of real value and usage, jump on my email list. I try to talk about that as much as possible, whether that’s AI or tech or something else.
EXPERT OPINION BY JOE PROCOPIO, FOUNDER, JOEPROCOPIO.COM @JPROCO
Wednesday, September 10, 2025
Mark Cuban Has 2 Words for People Who Don’t Want to Learn AI
Skims founding partner and sometime visiting Shark Tank Shark Emma Grede was never an AI skeptic, exactly. In 2023, she offered a cash bonus to her staff for finding creative ways to use AI in their work. But she herself was mostly just using ChatGPT as an occasional replacement for Google search.
“I’m using AI like a 42-year-old woman,” she joked in a recent Fortune interview. Then she had former Shark Mark Cuban on her podcast.
Turns out the billionaire founder and former Mavs owner has strong words — two, to be exact — for people like Grede who are dragging their feet on experimenting with AI.
Talking to Cuban was enough to convince Grede to change her approach. She started Googling classes on AI and downloading AI apps immediately. The episode “gave me a new urgency around how I use AI,” she told Fortune. “He gave me a kick.”
It might be just the kick you need too.
Not learning AI? Mark Cuban says “you’re f***ed”
On her podcast, Grede didn’t ask Cuban about AI. She asked him how to get started with a business idea. But the billionaire entrepreneur insisted that these days, going from idea to execution is inseparable from using AI. You need the latter to do the former fast and well.
“The first thing you have to do is learn AI,” Cuban responded. “Whether it’s ChatGPT, Gemini, Perplexity, Claude, you’ve got to spend tons and tons and tons of time just learning how it works and how to ask it questions.”
Noodling around with new tools and asking various AI models questions is how Cuban is spending his time at the moment. And he has no patience for founders and others in business who aren’t doing the same.
“What do you say to someone who is like, ‘I don’t like AI. I don’t want any more technology in my life’?” Grede asked. Cuban’s answer was short, punchy, and profane: “You’re f***ed.”
Is Mark Cuban right?
Cuban went on to explain that the current moment is much like his early career at the dawn of the internet age. New, hugely disruptive technology is rolling out at an incredible rate. Those who don’t run to keep up are going to end up as roadkill.
Saying you don’t want to use AI, he says, “is like people saying back in the day, I don’t want to use the PC. I don’t want to use the internet. I don’t need a cellphone, Wi-Fi.” Those businesses died.
Is he right in making the comparison? He’s certainly correct that those around you are adopting AI at a rate equal to or greater than the rate at which the internet took off.
Harvard researchers have compared recent data on AI usage to government data on the uptake of new technology at the turn of the millennium. They found more people are using AI more quickly these days than people started adopting the internet back then.
“The usage rate [for AI] … is actually higher than both personal computers and the internet at the same stage in their product cycles,” the trio of researchers explained to The Harvard Gazette.
No one can predict the future. And the breathlessness of some discussions of AI certainly suggests that the hype will exceed the reality in plenty of areas. We may yet witness an AI “trough of disillusionment” or even a crash. But the numbers strongly suggest that Mark Cuban is on to something when he says that ignoring AI is just not a viable option.
What happened to businesses that ignored the internet?
“If you were to go back to 1984 and tell people, ‘Hey, there’s this new thing called the personal computer. I have a crystal ball. Twenty years from now, everybody’s going to have one of these and every single new technological development and every single new product is going to be using it as the base.’ Knowing that now, what would you do differently?” the Harvard researchers ask.
“You could make billions and billions of dollars,” they add.
According to their data, they say, “it sure looks like generative AI is going to be on that scale,” and “the spoils will go to people who can figure out how to harness it first and best.”
How to get started with AI
If you’re convinced, how do you start learning AI? Playing around with new tools and technologies as Cuban suggests is certainly a good first step. Elsewhere, Cuban — along with other tech icons like Tim Cook and Bill Gates — has outlined specific ways he’s using AI, which could give you additional ideas.
Other AI experts have advice as well. Nvidia CEO Jensen Huang has talked on multiple occasions about how he’s personally experimenting with AI. OpenAI president Greg Brockman has offered advice on honing your AI prompting skills.
No one knows exactly how the AI revolution will play out, or even the best way to start to prepare. But even the skeptics should probably heed Mark Cuban’s words and admit that AI is going to change the world.
If you stick your head in the sand, you’re doomed. Better start experimenting today so you can be prepared however this thing plays out.
EXPERT OPINION BY JESSICA STILLMAN @ENTRYLEVELREBEL
Monday, September 8, 2025
Is the AI Bubble Too Big to Fail?
On Wednesday, analysts bemoaned Nvidia’s lackluster Q2 earnings. The company posted a 56 percent gain in sales, its smallest in more than two years, despite the chipmaker’s positioning as one of the biggest winners of the AI boom. The company’s inability to live up to expectations has reignited fears of an AI bubble on the precipice of rupture.
Despite Silicon Valley throwing hundreds of billions of dollars into its most speculative gamble yet, the revolutionary promises, and more important, profits, of AI have yet to materialize. OpenAI is expected to lose money this year, even as its projected revenue exceeds $20 billion. Meta’s CFO told investors, “We don’t expect that the genAI work is going to be a meaningful driver of revenue this year or next year,” despite the company dropping upwards of $70 billion on its AI investments this year. A recent MIT study found that U.S. companies have invested between $30 billion and $40 billion into generative AI tools but are seeing “zero return” from AI agents.
Some fear that all of this could presage a collapse bigger than the dot-com bust of the early 2000s. As Apollo Global Management’s chief economist warned in a recent investor’s note, big tech firms are driving the market with valuations more bloated than they were in the 1990s. This would be scary for big tech companies—except many of them, according to several researchers who spoke to Inc., are already too big to fail, thanks to how closely the industry has become intertwined with our economy and government.
The leading AI companies believe “the only way for this technology to exist is to be as big as possible, and the only way for it to get better is to throw more money at it,” says Catherine Bracy, CEO of the policy and research organization Tech Equity. That need for money and investment has spurred an industry lobbying blitz, pushing everyone from OpenAI CEO Sam Altman to VCs like Andreessen Horowitz into the halls of Congress over the past couple of years. Just earlier this week, The Wall Street Journal reported that Andreessen Horowitz and OpenAI are behind a nascent lobbying campaign through a super PAC network that’s already amassed $100 million to elect AI-friendly candidates.
Those Beltway relationships appear to be paying off. Currently, more than 30 states offer tax incentives for data center construction. But the booming growth of the industry has been enormously costly, largely owing to the vast amounts of energy needed to run large language models.
The Trump administration’s AI Action Plan frames the industry’s growth as essential to “human flourishing” in the U.S. and the country’s continued geopolitical dominance.
“We’re now locked into a particular version of the market and the future where all roads lead to big tech,” says Amba Kak, co-executive director of the AI Now Institute, which studies AI development and policy. Indeed, the success of major stock indexes—and perhaps your 401(k)—is resting on the continued growth of AI: Meta, Amazon, and the chipmakers Nvidia and Broadcom have accounted for 60 percent of the S&P 500’s returns this year.
But ultimately, in the event of a market reckoning, it’s likely that the biggest companies would remain relatively unscathed. “AI is too big to fail in the United States, both because of how intertwined it has become with the government, and also because of how much AI investment is propping up the stock market and the entire economy,” says Daron Acemoglu, an economist at MIT. When the bubble pops, it’s likely to be the smallest AI businesses, those riding the AI hype train with products based on existing LLMs, that get wiped out. “Those little companies are not going to get bailed out,” he argues.
Hardware companies like Nvidia or big tech firms, with diverse revenue streams, are likely to be better insulated from the potential fallout of the bubble popping. As Timnit Gebru, a former Google AI researcher and founder of the Distributed AI Research Institute, puts it, a chipmaker like Nvidia is essentially just selling shovels during a gold rush. “Shovels are still useful with or without the gold rush,” she says.
BY SAM BLUM @SAMMBLUM
Friday, September 5, 2025
Why Google’s New AI Image Generator Could Give OpenAI a Run for Its Money
Google just dropped a major update for its AI image generation tech, enabling anyone to generate images that more accurately match their prompts.
In a blog post, Google revealed Gemini 2.5 Flash Image (also called nano-banana), its latest and greatest AI model for generating and editing images. Google says the new model gives users the ability to blend multiple images into a single image, maintain character consistency across multiple generations, and make more granular tweaks to specific parts of an image.
One of the model’s new features is the ability to maintain character consistency, meaning that if you create a specific look for an AI-generated character, the character will maintain that look each time you generate a new image featuring them. “You can now place the same character into different environments,” Google wrote, “showcase a single product from multiple angles in new settings, or generate consistent brand assets, all while preserving the subject.”
Gemini 2.5 Flash Image can also make more granular edits to images, like blurring a background or changing the color of an item of clothing.
Another major feature is the ability to fuse multiple images into a single image. Google says this could let people place an object into a room or to restyle an environment with a new color scheme or texture. To demonstrate, Google built a demo in which users can upload a picture of a room, upload images of products that they’d like to see in the room, and then drag the product image to the specific place where they want it to appear in the room. It’s not difficult to imagine people using this feature to see how a new appliance or piece of furniture will look in their home before committing to a purchase.
Google also says that Gemini 2.5 Flash Image is particularly adept at sticking to visual templates, such as real estate listing cards, uniform employee badges, and trading cards. This kind of feature could also be used to create thumbnails for YouTube videos.
Gemini 2.5 Flash Image actually debuted on the website LMArena last week under the codename nano-banana. LMArena is a platform for evaluating an AI model’s performance against other models, and big artificial intelligence companies often submit their new models to the site before publicly revealing them.
Also of note is Gemini 2.5 Flash Image’s API price. According to Google, the model is priced at $30 per one million output tokens. In comparison, OpenAI’s image-generation API costs $40 per one million output tokens, making Google’s offering significantly cheaper.
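At those list prices, the gap compounds with volume. A quick back-of-the-envelope check, using only the per-million-token prices reported above (the monthly workload here is an invented example, not a usage figure from either company):

```python
GOOGLE_PER_M = 30.0  # Gemini 2.5 Flash Image, $ per 1M output tokens (reported)
OPENAI_PER_M = 40.0  # OpenAI image-generation API, $ per 1M output tokens (reported)


def cost(output_tokens: int, price_per_million: float) -> float:
    """Dollar cost for a given number of output tokens at a flat list price."""
    return output_tokens / 1_000_000 * price_per_million


# A hypothetical workload of 50 million output tokens in a month:
monthly_tokens = 50_000_000
savings = cost(monthly_tokens, OPENAI_PER_M) - cost(monthly_tokens, GOOGLE_PER_M)
discount = 1 - GOOGLE_PER_M / OPENAI_PER_M  # fraction cheaper per token
```

On those list prices Google comes in 25 percent cheaper per output token, so the hypothetical workload above would save $500 a month.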
The new model can be used in the Gemini app and in Google AI Studio.
BY BEN SHERRY @BENLUCASSHERRY
Wednesday, September 3, 2025
Mark Cuban Says Young People Should Learn This Crucial AI Skill
Legendary investor Mark Cuban has some advice for college students looking to break into the red-hot AI industry: become an AI integrator.
During a livestreamed interview on TBPN (the Technology Business Programming Network), Cuban told hosts John Coogan and Jordi Hays that young people in college should learn everything they can about how to integrate AI within corporations, particularly within small to medium-size businesses.
Cuban claimed that “every single company” needs professionals with AI implementation skills because there currently aren’t any intuitive ways for corporations to integrate AI into their work. “There are 33 million companies in this country,” Cuban said, and only a select few have dedicated AI budgets or keep AI experts on payroll. But these companies will still need to adapt for the AI era.
Cuban likened this issue to how he started his career as an entrepreneur. “When I was 24,” Cuban said, “I was walking into companies who had never seen a PC before in their lives and explaining to them the value.” Cuban said he would meet with the owners of these companies and present them with customized plans that used computers to fulfill their specific business needs.
“This is where kids coming out of college are really gonna have a unique opportunity,” said Cuban. Students spending their senior years “learning the difference between Sora and Veo [two popular AI video-generation tools],” or learning how to customize an AI model, will be able to walk into any business and identify clear areas where AI implementation would meaningfully impact their operations.
TBPN co-host Coogan agreed with Cuban’s take, and added that he and Hays hired two interns this summer “because they just built products. Instead of saying, ‘Here’s what I can do,’ they just showed us. They took a day and just built something.”
Meanwhile, trying to work at one of the big tech companies with a computer science degree is “probably not the right way to go,” says Cuban. Instead, he says, “go into any other company that has no idea about AI but needs it to compete. There’ll be more jobs than people for a long, long time.”
BY BEN SHERRY @BENLUCASSHERRY
Monday, September 1, 2025
Why Companies Are Offering Young Workers With AI Skills 6-Figure Salaries
While the entry-level job market on the whole is still hurting, recent graduates who possess AI skills are finding sizable demand for their services. And starting salaries can reach hundreds of thousands of dollars per year.
A new report by hiring firm Burtch Works finds that the starting salary of AI-skilled workers with zero to three years of work experience now averages $131,139—a 12 percent jump from the year prior. Data scientists with the same level of limited experience are averaging $109,545 a year.
Compensation levels vary slightly by industry, the report found, but the mean salary for all covered industries with zero to three years’ experience was in the six-figure range. Health care/pharma is currently paying the most to AI-fluent workers, with a mean salary of $123,804. Consulting and tech are at a virtual tie at the bottom of the list, at roughly $104,500.
“AI professionals still command a 9 to 13 percent cash premium over data scientists. The gap is widest where scarce [generative AI] expertise adds the most value,” Burtch Works wrote in its report. “If you’re seeking a job in AI and data science, quantify your genAI successes to demonstrate your skills in action [and] reference market data during salary negotiations.”
The current demand for AI knowledge is unprecedented. Job search site Indeed earlier this year said the number of postings for generative AI-related jobs had tripled between January 2024 and January 2025. That followed a 75X increase from April 2022 to April 2024.
New college graduates are not just digital natives, they’re often AI natives, having grown up with early versions of the technology and learning as it has evolved. That can make them a more natural fit for AI-themed jobs than more experienced workers, who may be more resistant to adopting the technology, in part because of fears it will make their jobs irrelevant.
That has led to a bidding war for AI-savvy graduates. OpenAI is reportedly offering a base salary of $167,000, with more than $80,000 in stock options, to entry-level workers, bringing its average compensation to $248,000, according to Levels.fyi, a compensation-data provider. Scale AI reportedly has a total starting compensation package average of $185,000, and Databricks is offering $235,000. Within a couple of years, those numbers nearly double, per the Levels.fyi data.
Several dozen users of Levels.fyi have claimed to have received offers of over $1 million from AI companies, with some of them having less than a decade of experience.
At the same time, the number of AI job openings has soared. A study released in January by job tracking firm LinkUp and the University of Maryland found that from the beginning of 2018 to the end of 2024, the number of overall job openings was down 17 percent and total IT job openings fell by 27 percent. AI job openings, however, saw a 68 percent increase.
Demand for AI skills has become so intense that many hiring managers say they would consider bringing aboard an inexperienced worker with AI expertise over a more experienced employee without it. And 66 percent of those managers said they wouldn’t hire someone who lacked AI skills, according to the 2024 Annual Work Trend Index by Microsoft and LinkedIn.
BY CHRIS MORRIS @MORRISATLARGE
Friday, August 29, 2025
How to Get Your Money’s Worth on Workplace AI Tools
Critics and skeptics of artificial intelligence technologies have repeatedly denounced the rising buzz the platforms have generated over the past few years, often deriding it as unfounded hype that ignores apps’ current productivity limitations. Now, a new study from MIT largely supports those doubters, finding that a whopping 95 percent of businesses that have adopted AI have thus far gotten zero return on their investment.
That was the headline takeaway from a report by MIT Media Lab’s Project NANDA, which was based on survey results and face-to-face interviews with hundreds of senior U.S. business leaders and employees. Despite the study’s estimate that companies have spent $30 billion to $40 billion developing or purchasing AI platforms in the past two years alone, it said only 5 percent of those firms have reported any return on that investment. “The vast majority remain stuck with no measurable (profit or loss) impact,” it said.
Similarly, only two of the eight sectors examined — technology, and media and telecom — reflected any significant changes based on the use or performance of AI.
“The outcomes are so starkly divided across both buyers (enterprises, mid-market, SMBs) and builders (startups, vendors, consultancies) that we call it the GenAI Divide,” the report’s authors wrote. “The core barrier to scaling is not infrastructure, regulation, or talent. It is learning. Most GenAI systems do not retain feedback, adapt to context, or improve over time.”
Why AI is falling short
Business executives participating in NANDA’s The GenAI Divide: State of AI in Business 2025 report offered two main reasons the tech has fallen far short of expectations so far. On the development side, the report said only 5 percent of tools designed to fulfill specific company needs or business functions ever reach production. The rest remain stranded on the shoals of ambitious ideas that never sail beyond the drawing board, despite developer promises that they’re speeding toward completion.
“We’ve seen dozens of demos this year,” said one unidentified chief information officer during a NANDA interview. “Maybe one or two are genuinely useful. The rest are wrappers or science projects.”
That, in turn, means many companies are instead using more generalist AI tools like ChatGPT or Copilot. While those tend to be effective at automating repetitive workplace grunt chores like research, text composition, or marketing work, they fail to generate significant increases in key metrics like productivity, customer acquisition, or profits.
As a result, study respondents said most of the previous and current excitement over AI has not been matched by the revolutionary results that its boosters say it will deliver.
“The hype on LinkedIn says everything has changed, but in our operations, nothing fundamental has shifted,” said one midmarket chief operating officer quoted in the study. “We’re processing some contracts faster, but that’s all that has changed.”
The study identified two additional divides in AI use by businesses.
The first was that more than 80 percent of organizations have tested or piloted apps, with about half of those saying those are now being used in workplaces regularly. Startups, small companies, and midmarket businesses were found to be the fastest in that transition.
But the vast majority of that experimentation and integration involved platforms like ChatGPT or other general-purpose AI bots. While those do often help increase individual employee productivity on certain tasks, study participants said, those gains tend to plateau fairly fast because the apps can’t push them any further.
“ChatGPT’s very limitations reveal the core issue behind the GenAI Divide: it forgets context, doesn’t learn, and can’t evolve,” the study said. As a result, human employees still need to oversee the tech’s results and pursue myriad business objectives that apps can’t.
But survey participants who faulted the limitations of general apps were even harsher with AI created for and tailored to their companies or specific business applications.
“The same users were overwhelmingly skeptical of custom or vendor-pitched AI tools, describing them as brittle, overengineered, or misaligned with actual workflows,” the report said. “They expect systems that integrate with existing processes and improve over time. Vendors meeting these expectations are securing multi-million-dollar deployments within months.”
How to make AI fit your business
So how can employers adopt AI into their operations without winding up on the wrong side of the divide?
For starters, the NANDA study urges companies to build their own AI platforms whenever possible. Those apps should be tailored to a company’s particular business needs, which should enable them to deliver better outcomes than generalist tools. When necessary, employers can turn to outside providers to design solutions for their specific uses.
The report’s authors also advise businesses to allow managers, and even team leaders, to decide the best ways of deploying apps to get the desired results, rather than having the tech department designate a one-size-fits-all use. Over time, executives should also base evolving AI deployment on where it is creating the most profitable gains.
“The highest-performing organizations report measurable savings from reduced (business process outsourcing) spending and external agency use, particularly in back-office operations,” the authors wrote. “Others cite improved customer retention and sales conversion through automated outreach and intelligent follow-up systems.”
And finally, employers should build whatever AI platforms they ultimately assemble on tech capable of fully integrating the information it acquires during use and of continually evolving and improving with that experience.
“Stop investing in static tools that require constant prompting, [and] start partnering with vendors who offer custom systems, and focus on workflow integration over flashy demos,” the report concludes. “The GenAI Divide is not permanent, but crossing it requires fundamentally different choices about technology, partnerships, and organizational design.”
On the bright side
Did researchers find any positive aspects to AI’s mega-hype and mini-results so far? Perhaps — at least for employees worried about the tech taking over their jobs.
The study determined layoffs linked to AI deployment have been minimal so far, and usually concentrated in companies that have been deploying the tech most. Perhaps unsurprisingly, those firms were often found to be subcontractors handling marketing, communications, and customer service support for other businesses — outsourced work that employers may in the future decide to handle in-house using their own apps.
BY BRUCE CRUMLEY @BRUCEC_INC
Wednesday, August 27, 2025
Sam Altman Admits the AI Bubble Is Here
In an interview with reporters from multiple publications on Thursday night, OpenAI CEO Sam Altman said he believes the AI sector has entered the territory of a financial bubble.
The AI sector has exploded since 2022, largely based on the growth of Altman’s company and its flagship product, ChatGPT. Economists and tech critics have argued recently that the billions of dollars in venture investment in AI companies, and the crush of startups jumping on the AI bandwagon, are reminiscent of the dot-com bubble and crash of the late 1990s.
Altman made the same analogy in his interview with reporters. “When bubbles happen, smart people get overexcited about a kernel of truth,” Altman said, according to The Verge. “If you look at most of the bubbles in history, like the tech bubble, there was a real thing. Tech was really important. The internet was a really big deal. People got overexcited.”
He added that “someone is going to lose a phenomenal amount of money. We don’t know who, and a lot of people are going to make a phenomenal amount of money.”
Earlier this week, OpenAI released its newest model, GPT-5, to some negative reviews. The CEO had initially promised the model would offer “PhD-level intelligence” in most tasks. But the issue for many people came down to its tone: Users claimed GPT-5 has a terser and colder temperament than its predecessor, GPT-4o.
Altman’s admission that AI is over-valued and in a bubble is significant. The CEO has served as one of the industry’s biggest boosters since the launch of ChatGPT in 2022.
But Altman had hinted that the writing was on the wall last week, when he told CNBC that the term Artificial General Intelligence (AGI)—a milestone for researchers that involves AI that’s equal to or better than humans at most tasks—isn’t a “super useful term.”
AGI, he said, is an over-used term that has lost its meaning. “I think the point of all of this is it doesn’t really matter and it’s just this continuing exponential of model capability that we’ll rely on for more and more things,” he told CNBC.
Recent reports indicate OpenAI is valued at $300 billion and approaching $20 billion in annual recurring revenue this year. Despite those impressive numbers, the company has yet to turn a profit, the CEO recently confirmed to CNBC. One factor is that the computational power required to run Large Language Models built by OpenAI and its competitors is notoriously expensive.
Warnings of an AI bubble bursting are not new. In 2023, the venture capitalist Jason Corsello, CEO and general partner of Acadian Ventures, told Inc: “This area of A.I. is somewhat overhyped. It’s over-invested, it’s overvalued. When you’re seeing seed-stage companies raise between $100 million and $150 million with nothing more than a pitch deck, that’s a bit concerning.”
OpenAI, for its part, is currently seeking a $500 billion valuation through a tender offer for current and former employees.
BY SAM BLUM @SAMMBLUM
Monday, August 25, 2025
Anthropic Is Making It Easier to Learn How to Code
Anthropic’s Claude is getting a side gig as a tutor. The company has launched new modes for its two consumer-facing platforms, Claude.ai and Claude Code. The modes will enable Claude to not just answer questions and write code, but also structure its outputs to teach users through a process Anthropic refers to as guided discovery.
The company originally released a learning mode in April, but it was available only to university students and faculty with Claude for Education memberships. In late July, OpenAI released a similar feature for ChatGPT called study mode.
Using Claude.ai, Anthropic’s ChatGPT-like website and mobile app for casually interacting with its AI models, users will be able to enable the learning mode by selecting it from a dropdown menu of various styles. According to Anthropic, Claude will use a “Socratic approach” to guide users through challenging concepts instead of immediately giving answers. If you’re a student using Claude to help with your homework or studying, this could be a useful feature.
Beyond that option, Claude Code, Anthropic’s tool for software development with AI, will feature two learning modes. Anthropic’s models are famed for their coding ability, and have given rise to a generation of startups pioneering a new method of software engineering called vibe coding. These new learning modes in Claude Code are designed to help developers learn more about the fundamentals of software engineering while building applications with Claude.
The first new mode in Claude Code is called Explanatory. When people use it, Claude will explain why it made certain decisions while coding, recreating the dynamic of a senior developer narrating their thought process to a junior developer.
When in the second mode, referred to as Learning, Claude will deliberately leave key sections of the code undone and direct human developers to fill in those sections themselves. Once a user fills in a section, Claude will review the code and give feedback.
People can put both Claude Code learning modes to work by updating Claude Code, running /output-styles in the terminal, and selecting among the Default, Explanatory, and Learning styles.
BY BEN SHERRY @BENLUCASSHERRY
Friday, August 22, 2025
As AI Agents Fill the Workplace, Their Human Colleagues Stay Wary
As we wait for AI’s promised evolution into artificial general intelligence (AGI), which would offer capabilities on par with human workers, the most sophisticated AI tools on the market are AI agents. These semi-autonomous systems can make certain decisions on their own and even carry out actions usually performed by people in a digital environment. In January, OpenAI’s Sam Altman said AI agents could transform the workplace in 2025. With the year more than half over, is he right? New data from business leaders says maybe yes. But workers? They don’t trust ‘em.
A survey of employees from around the world by California-based HR software firm Workday found upbeat results when it came to how workers feel about using buzzy AI agent tech. Amazingly, three-quarters of the survey respondents said they felt comfortable interacting with AI agents at work, news site ZDNet reported. That’s a really high comfort level with what is very much a breakthrough innovation.
The numbers tell a very different story when it comes to taking orders from an AI agent, however. Only 30 percent of respondents said they’d be comfortable being bossed by a digital “colleague.” Just 24 percent of people felt okay with the idea of running agents inside a company without a human monitoring the situation. ZDNet noted parallels between this outcome and recent research from Stanford University which found a certain level of trust of AI agents, but only for very basic tasks.
In June, two researchers from leading AI firm Anthropic warned that they could foresee a future where AIs make decisions and employees have to blindly follow them as a kind of “meat robot.” The fact that so few people would follow an agent’s instructions without at least applying a smidge of critical thinking should be reassuring.
Trusting the tools of the workplace is critical: we’ve all used the “good” printer in the office when we needed an urgent copy of an important report, rather than relying on the nearby one. It seems the same is true of AI agents. People are happy to embrace them for simple tasks, but are much more wary about following critical decisions made by an AI tool.
But trust builds over time, and Workday’s data found evidence of this in attitudes toward AI: the more employees work with agents, the more they trust the systems’ outputs. Part of this trust may come from the fact that 90 percent of the survey respondents said they felt AI agents would boost productivity. Any tool that’s that useful can’t be bad, can it?
But even here, it seems workers are already quite savvy about the risks of AI systems. Many respondents worried that overreliance on AI tools could erode their own critical thinking, make the workplace less centered on human interaction, and tempt managers to raise their demands on the back of the productivity boost.
Another worry that may play into employee trust is that the new technology will steal workers’ jobs. This fear may be borne out, as indicated by a recent report showing that the advertising industry is shedding entry-level workers. People aged 20 to 24 now hold 6.5 percent of all jobs in the industry, down from 10.5 percent in 2019. AI’s role in this decline can’t be ignored, industry news site AdWeek contends.
Why should you care about this?
Because you may have rolled out agent-based AI tools to your workforce, and then sat back — confident in your employees’ ability to make the most of this smart tech, and reap the benefits of all that extra productivity. The reality may be slightly different. It may be worth running an audit of how comfortable your workers are with this tech, and also educating them about how you would actually like them to use these AI agents. Reassuring them that you won’t replace them with a pile of silicon chips may also be a good idea.
BY KIT EATON @KITEATON
Wednesday, August 20, 2025
BUILDING AI FACTORIES
Imagine a place where innovation meets industrialization, where AI is not just a concept but a reality, where raw data is transformed into actionable intelligence at lightning speed.
That’s an AI factory, an environment designed to manage the entire AI lifecycle, from data pipelines and model training to inference and real-time insights. With purpose-built infrastructure, integrated tools, scalable operations, and unparalleled AI expertise, the AI factory can revolutionize the way you harness the power of artificial intelligence.
Think of it like a traditional factory, but instead of producing physical goods, it creates value and intelligence from data. AI factories take in raw data, process it through AI models, and output actionable intelligence, predictions, or new AI solutions. The journey from data to intelligence is streamlined, efficient, and groundbreaking.
The result? Faster innovation, operational efficiency, scalability, and greater control over data and business outcomes.
Why do you need an AI factory? Because operationalizing AI can be challenging.
As organizations embrace AI’s transformative potential, they face a range of complexities inherent in fully operationalizing AI. These challenges include:
— Complex AI workloads: Managing diverse and resource-intensive AI workloads can overwhelm existing infrastructure, leading to inefficiencies and delays.
— Need for multitenancy: Efficiently managing multiple tenants and their resources is complex and resource-intensive, leading to potential conflicts and inefficiencies.
— High costs of cloud AI: The expenses associated with deploying AI solutions in the cloud can be prohibitive, impacting budget and ROI.
AI is iterative, and models can degrade over time due to data drift, changing customer behavior, and environmental shifts. To maintain relevance and performance, a high-performing AI factory infrastructure is essential for retraining models, conducting simulations, monitoring inference quality, and managing deployment pipelines for continuous improvements.
Monday, August 18, 2025
The Vibe-Coding Companies and Founders to Watch in 2025
In a blog post published in early January, OpenAI CEO Sam Altman opined that in 2025, the first AI agents would enter the workforce and materially change the output of companies. Eight months into the year, it’s arguable that he’s been proven correct.
That’s because AI agents are the key element behind the explosive rise of vibe coding, a term coined in February 2025 by famed AI researcher and OpenAI cofounder Andrej Karpathy to refer to the act of writing and editing code with assistance from an AI system. Karpathy posted on X that “there’s a new kind of coding I call ‘vibe coding’, where you fully give in to the vibes, embrace exponentials, and forget that the code even exists.”
Karpathy’s post ushered in a new chapter in the AI era, with experienced developers adopting AI coding tools en masse (a recent survey from Stack Overflow found that 80 percent of developers now use AI tools in their workflows) and programming neophytes creating their first apps. This interest has sent several startups’ revenues into the stratosphere, in some cases 10xing revenue in a matter of months.
And that attention isn’t just coming from customers, but investors too. This year, companies in the vibe coding space have raised billions from venture capital firms. Unlike many of the anticipated use cases for AI, vibe coding is already fundamentally changing how people interact with computers, and as such has become a focal point for the AI revolution.
These are the companies and people shaping the world of vibe coding:
Anthropic
Anthropic has played a crucial role in the success of the vibe coding industry. The Dario Amodei-led company’s Claude AI models have proven supremely adept at handling programming tasks, and are the preferred models for many of the major players in the vibe coding space.
In addition to powering other companies’ platforms, Anthropic also produces multiple vibe-coding applications of its own. The first is Artifacts: interactive applications that can be created in Claude.ai, Anthropic’s consumer-facing platform for using its AI models. Artifacts are only meant to serve as prototypes, and Anthropic warns that software created with Artifacts is not production-ready.
Anthropic’s other vibe-coding product, Claude Code, is a program that connects directly to a user’s computer terminal, and is meant for developing full-fledged pieces of software that can be deployed and used by many people.
Anysphere
Anysphere, the organization behind the wildly popular AI-powered code editor Cursor, refers to itself as the fastest-growing startup in history. In early June 2025, Anysphere announced that it had raised $900 million at a $9.9 billion valuation. That’s a big jump from just six months earlier in January, when the startup raised $105 million at a $2.5 billion valuation.
Anysphere was founded in 2022 and released Cursor in late 2023. Within 12 months of its launch, according to Bloomberg, Cursor was bringing in over $100 million in annual recurring revenue. Cursor uses AI models from other companies to power its code editor, and is reportedly one of Anthropic’s top customers. Like Cognition’s Devin, Cursor is known for helping software developers achieve a coding flow state.
Cognition and Windsurf
Cognition was one of the first companies to get into the world of AI-powered coding. The company was founded by a group of young, award-winning coders who developed a powerful software development assistant called Devin in late 2023. Devin was among the first AI-powered applications to be capable of developing an entire piece of software with nothing but a prompt to get it started.
Since Devin was revealed in early 2024, Cognition has raised hundreds of millions of dollars, and is now reportedly in talks to raise over $300 million from investors at a $10 billion valuation.
And now, Cognition is the owner of a former rival: Windsurf.
Windsurf was originally founded as Codeium in 2021 by a pair of MIT graduates who wanted to make GPU workloads more efficient to process, but the company pivoted in 2022 after witnessing the rise of generative AI tools like OpenAI’s Dall-E. The company found success in shipping AI-powered coding extensions, and then in November 2024 released the Windsurf Editor, a virtual development environment with agentic AI built in. Windsurf immediately took off with experienced coders, who enjoyed the editor’s ability to put engineers in a kind of flow state, in which they can seamlessly work in tandem to quickly create new software.
In May 2025, OpenAI reportedly had made a deal to acquire Windsurf for $3 billion, but that deal fell apart due to stipulations in OpenAI’s deal with Microsoft. Once the exclusive negotiating period had ended, Google quickly swiped the CEO and dozens of top employees. After a frantic weekend of dealmaking, Cognition announced that it would buy Windsurf, but keep it as a separate entity, with all remaining employees sticking around for the transition.
Jack Dorsey
The founder of Twitter and Block CEO has also been getting in on the vibe coding fun. Last month, Dorsey announced that he had used a Block-developed AI agent to create a new app, called Bitchat, that puts a twist on traditional social media.
Bitchat is a peer-to-peer messaging app that uses Bluetooth to enable wireless messaging without needing an internet connection. The app essentially uses the web of connections made by Bluetooth-capable devices to create a working network. However, people quickly started pointing out potential flaws in Bitchat’s design, potentially due to its vibe-coded nature.
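The mesh idea is straightforward to sketch: each device rebroadcasts any message it hasn’t seen before to its Bluetooth neighbors, so a message can hop across devices that were never directly in range of each other. Below is a toy simulation of that flood-relay behavior; it is illustrative only, not Bitchat’s actual protocol, and all names in it are invented.

```python
# Toy flood-relay simulation: a message reaches a device that is only
# indirectly connected, the core idea behind a Bluetooth mesh chat app.

class Device:
    def __init__(self, name):
        self.name = name
        self.neighbors = []   # devices currently in radio range
        self.seen = set()     # message ids already handled (prevents loops)
        self.inbox = []

    def receive(self, msg_id, text):
        if msg_id in self.seen:
            return            # already relayed; drop to avoid echo storms
        self.seen.add(msg_id)
        self.inbox.append(text)
        for peer in self.neighbors:
            peer.receive(msg_id, text)   # rebroadcast to everyone in range

def link(a, b):
    a.neighbors.append(b)
    b.neighbors.append(a)

# A chain: alice <-> bob <-> carol. Alice and Carol are out of range of
# each other, but Bob relays for them, so no internet is needed.
alice, bob, carol = Device("alice"), Device("bob"), Device("carol")
link(alice, bob)
link(bob, carol)

alice.receive("msg-1", "hello mesh")
print(carol.inbox)   # ['hello mesh']
```

The `seen` set is the important detail: without it, two linked devices would rebroadcast the same message to each other forever.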
Lovable
Hailing from Sweden, Lovable is one of the few European stars of the AI revolution. Founded in 2023, Lovable is a platform that enables people of any skill level to create fully functioning websites from natural-language prompts. According to Lovable CEO and co-founder Anton Osika, Lovable is meant to be “the last piece of software that anyone has to write.”
In late July, Lovable announced that it had passed $100 million in annual recurring revenue, only eight months after making its first $1 million. This, according to the company, makes Lovable the actual fastest-growing startup in the world, outpacing Cursor. The company also announced that over 100,000 projects are now being built on Lovable each day.
Microsoft
Microsoft’s GitHub Copilot is an AI-powered coding assistant built in tandem with ChatGPT creator OpenAI. Like Cursor, GitHub Copilot allows users to choose between various models to handle specific coding challenges and is designed for professional developers rather than novices. Still, Microsoft has been adding more agentic capabilities recently, enabling Copilot to handle more coding tasks by itself, rather than just editing small snippets at a time.
On Microsoft’s most recent quarterly earnings call, CEO Satya Nadella shared that GitHub Copilot had hit over 20 million lifetime users, up from 15 million in April, and is now being used by 90 percent of the Fortune 100.
Replit
Replit is one of the older startups on this list—it was founded way back in 2016. Originally, Replit was Repl.it, a cloud-based coding environment that could be accessed from anywhere. In essence, Repl.it was like Google Docs for coding, a web-based app that enabled multiple people to collaborate on a coding project at once.
In September 2024, Replit released Replit Agent, a new feature that enabled users to describe an application or piece of software in natural language, and then send an AI agent off to plan, code, and deploy the app. Replit Agent was an instant hit, and was so successful that it fundamentally changed the trajectory of the company. Once focused on catering to professional and skilled coders, Replit is now fully embracing the casual audience. In a January interview with Semafor, Replit CEO Amjad Masad even said that “we don’t care about professional coders anymore.”
In the roughly nine months since Replit Agent launched, Masad says Replit’s annual recurring revenue has exploded from $10 million to $100 million.
Theo Browne
A former software development engineer for Twitch, Theo Browne has emerged as one of the most notable influencers in the fast-moving world of vibe coding. Browne releases multiple YouTube videos per week in which he gives his perspective on the latest AI headlines and experiments with new vibe coding platforms. Browne’s most popular videos include tutorials, tier lists, and comparisons between popular tools. Browne is also the founder of Ping Labs, a startup developing AI-powered tools.
BY BEN SHERRY @BENLUCASSHERRY
Friday, August 15, 2025
North Korean Hackers Are Using AI to Get Jobs at U.S. Companies and Steal Data
Cyberattacks are getting faster, stealthier, and more sophisticated—in part because cybercriminals are using generative AI.
“We see more threat actors using generative AI as part of their tool chest, and some of those threat actors are using it more effectively than others,” says Adam Meyers, head of counter adversary operations at CrowdStrike.
The cybersecurity tech company released its 2025 Threat Hunting report on Monday. It detailed, among other findings, that adversaries are weaponizing genAI to accelerate and scale attacks—and North Korea has emerged as “the most GenAI-proficient adversary.”
Within the past 12 months alone, CrowdStrike investigated more than 320 incidents in which operators associated with North Korea fraudulently obtained remote jobs at various companies. That represents a jump of about 220 percent year-over-year. The report suggests operatives used genAI tools “at every stage of the hiring and employment process” to automate their actions in the job search through the interview process, and eventually to maintain employment.
“They use it to create resumes and to create LinkedIn personas that look like attractive candidates you would want to hire. They use generative AI to answer questions during interviews, and they use deep fake technology as well during those interviews to hide who they are,” Meyers says. “Once they get hired, they use that to write code to allow them to hold 10, 15, 20, or more jobs at a time.”
In late July, an Arizona woman, Christina Chapman, was sentenced to eight years in prison for her role in assisting North Korean workers in securing jobs at more than 300 U.S. companies; that generated an estimated $17 million in “illicit revenue,” according to the Department of Justice. In late 2023, some 90 laptops were seized from her home.
North Korean fraudsters, however, aren’t the only threat facing businesses, academic institutions, and government agencies.
“We’re seeing more adversary activity every single day,” Meyers says. “There are more and more threat actors engaging in this, and it’s not just criminals or hacktivists. We’re also seeing more nation states.”
Although North Korea’s attacks may be among the most attention-grabbing, Meyers says “China is probably the number-one threat out there for any Western organization.” In the past year, CrowdStrike noted a 40 percent jump in cloud intrusions that it attributed to China-related adversaries. Cloud intrusions overall jumped about 136 percent in the first half of 2025, versus all of the previous year, according to the report.
Although the tech industry is the most targeted industry overall, Chinese adversaries substantially ramped up attacks on the telecom sector within the past year, according to the report.
“The telecommunications sector is a high-value target for nation-state adversaries, providing access to subscriber and organizational data that supports their intelligence collection and counterintelligence efforts,” the report states.
As technology becomes more sophisticated, it may seem overwhelming for organizations trying to keep attackers at bay. Meyers counseled individuals on security teams to make use of those very same tools that bad actors are using to fight back.
“Generative AI was being used by these threat actors, but it could also be used by the good guys to have more effective defenses,” he says. “We have that capability in some of [CrowdStrike’s] products, but you can use generative AI to kind of scale up those capabilities within the security team.”
He also recommended organizations be proactive, rather than reactive to threats.
“If you wait for bad stuff to show itself, it’s going to be too late,” he says. “Probably one of the biggest takeaways is that you need to have threat hunting.”
Just over a year ago, a CrowdStrike update precipitated what has since been called one of history’s biggest IT failures. A buggy security update caused Windows devices to crash, affecting a broad swathe of companies in banking, health care, and aviation, among others. Delta Air Lines was notably affected and is suing CrowdStrike, alleging the outage caused as many as 7,000 flight cancellations and as much as $550 million in lost revenue and other expenses, Reuters reported.
BY CHLOE AIELLO @CHLOBO_ILO
Wednesday, August 13, 2025
This Female-Led AI Company Helps Fix Manufacturing Problems in Real Time—or Before They Happen
SixSense is using AI to shore up semiconductor production—and the female-founded startup just raised $8.5 million to do it.
SixSense is developing “factories that think” to bring what it calls “intelligent automation” to the incredibly complex and important semiconductor industry, according to its website. What this means in practice is that the company’s AI platform leverages data to catch issues early, improve output, and increase control over production.
The Singapore-based SixSense was co-founded in 2018 by CEO Akanksha Jagwani and CTO Avni Agarwal. With a background in mechanical engineering, Jagwani leads business development and efforts to partner with semiconductor fabrication plants to deploy SixSense’s AI. Major semiconductor makers including GlobalFoundries and JCET already use SixSense’s technology, according to TechCrunch. Agarwal leverages her background in computer engineering to lead the company’s tech and product vision.
“We’re already working with fabs in Singapore, Malaysia, Taiwan, and Israel, and are now expanding into the U.S.,” Agarwal told TechCrunch.
SixSense is based in Singapore, but in the U.S., at least, there is still a significant disparity in VC funding for women-led companies. According to data from Pitchbook, women-only teams secured roughly 2 percent of VC deal value in 2024, whereas companies with both a female and male co-founder secured about 22 percent that year.
There are also signs that women are advancing at VC firms themselves. Women now occupy close to 19 percent of leading investor roles at firms across the U.S., The Wall Street Journal reported. At “mega venture firms,” which manage $3 billion or more, only about a dozen managing partners are women.
SixSense’s latest round of funding brings its total to about $12 million, TechCrunch reported. Peak XV’s Surge seed platform led the round with participation from Alpha Intelligence Capital, FEBE, and more, according to TechCrunch.
BY CHLOE AIELLO @CHLOBO_ILO
Saturday, August 9, 2025
OpenAI launches GPT-5 as AI race accelerates
OpenAI has launched its GPT-5 artificial intelligence model, the highly anticipated latest installment of a technology that has helped transform global business and culture.
OpenAI's GPT models are the AI technology that powers the popular ChatGPT chatbot, and GPT-5 will be available to all 700 million ChatGPT users, OpenAI said.
The big question is whether the company that kicked off the generative AI frenzy will be capable of continuing to drive significant technological advancements that attract enterprise-level users to justify the enormous sums of money it is investing to fuel these developments.
The release comes at a critical time for the AI industry. The world's biggest AI developers - Alphabet, Meta, Amazon and Microsoft, which backs OpenAI - have dramatically increased capital expenditures to pay for AI data centers, nourishing investor hopes for great returns. These four companies expect to spend nearly $400bn (€342bn) this fiscal year in total.
OpenAI is now in early discussions to allow employees to cash out at a $500bn (€428bn) valuation, a huge step-up from its current $300bn (€257bn) valuation. Top AI researchers now command $100m (€85m) signing bonuses.
"So far, business spending on AI has been pretty weak, while consumer spending on AI has been fairly robust because people love to chat with ChatGPT," said economics writer Noah Smith.
"But the consumer spending on AI just isn't going to be nearly enough to justify all the money that is being spent on AI data centres," he added.
OpenAI is emphasizing GPT-5's enterprise prowess. In addition to software development, the company said GPT-5 excels in writing, health-related queries, and finance.
"GPT-5 is really the first time that I think one of our mainline models has felt like you can ask a legitimate expert, a PhD-level expert, anything," OpenAI CEO Sam Altman said at a press briefing.
"One of the coolest things it can do is write you good instantaneous software. This idea of software on demand is going to be one of the defining features of the GPT-5 era," he added.
In demos yesterday, OpenAI showed how GPT-5 could be used to create entire working pieces of software based on written text prompts, commonly known as "vibe coding".
One key measure of success is whether the step up from GPT-4 to GPT-5 is on par with the research lab's previous improvements.
Two early reviewers said that while the new model impressed them with its ability to code and solve science and math problems, they believe the leap from GPT-4 to GPT-5 was not as large as OpenAI’s prior improvements.
Even if the improvements are large, GPT-5 is not advanced enough to wholesale replace humans. Mr Altman said that GPT-5 still lacks the ability to learn on its own, a key component to enabling AI to match human abilities.
On his popular AI podcast, Dwarkesh Patel compared current AI to teaching a child to play a saxophone by reading notes from the last student.
"A student takes one attempt," he said. "The moment they make a mistake, you send them away and write detailed instructions about what went wrong. The next student reads your notes and tries to play Charlie Parker cold. When they fail, you refine the instructions for the next student. This just wouldn't work," he said.
More thinking
Nearly three years ago, ChatGPT introduced the world to generative AI, dazzling users with its ability to write humanlike prose and poetry, quickly becoming one of the fastest growing apps ever.
In March 2023, OpenAI followed up ChatGPT with the release of GPT-4, a large language model that made huge leaps forward in intelligence.
While GPT-3.5, an earlier version, received a bar exam score in the bottom 10%, GPT-4 passed the simulated bar exam in the top 10%.
GPT-4's leap was based on more compute power and data, and the company was hoping that "scaling up" in a similar way would consistently lead to improved AI models.
But OpenAI ran into issues scaling up. One problem was the data wall the company ran into, and OpenAI's former chief scientist Ilya Sutskever said last year that while processing power was growing, the amount of data was not.
He was referring to the fact that large language models are trained on massive datasets scraped from the entire internet, and AI labs have no other comparable troves of human-generated text to draw on.
Apart from the lack of data, another problem was that “training runs” for large models are more likely to suffer hardware-induced failures given how complicated the systems are, and researchers may not know a model’s eventual performance until the end of the run, which can take months.
At the same time, OpenAI discovered another route to smarter AI, called “test-time compute”: having the model spend more time and compute power “thinking” about each question, allowing it to solve challenging tasks such as math or complex operations that demand advanced reasoning and decision-making.
GPT-5 acts as a router, meaning if a user asks GPT-5 a particularly hard problem, it will use test-time compute to answer the question.
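Conceptually, a router like this is just a dispatcher: cheap, fast answers for easy prompts, and a slower path that spends extra “thinking” compute on hard ones. The sketch below illustrates the pattern only; the difficulty heuristic and function names are invented for illustration and are not OpenAI’s implementation.

```python
# Minimal sketch of a model router: easy prompts go to a fast model,
# hard prompts to a slower path that spends extra "test-time compute"
# before answering. Purely illustrative; all names are hypothetical.

HARD_HINTS = ("prove", "step by step", "integral", "optimize", "debug")

def looks_hard(prompt: str) -> bool:
    # Stand-in for a learned difficulty classifier.
    p = prompt.lower()
    return len(p) > 200 or any(hint in p for hint in HARD_HINTS)

def fast_model(prompt: str) -> str:
    return f"[fast] answer to: {prompt}"

def reasoning_model(prompt: str, thinking_steps: int = 8) -> str:
    # The loop stands in for spending more compute "thinking."
    notes = [f"step {i}" for i in range(thinking_steps)]
    return f"[reasoned over {len(notes)} steps] answer to: {prompt}"

def route(prompt: str) -> str:
    return reasoning_model(prompt) if looks_hard(prompt) else fast_model(prompt)

print(route("What's the capital of France?"))       # fast path
print(route("Prove that sqrt(2) is irrational."))   # reasoning path
```

The design trade-off is the one the article describes: the reasoning path answers harder questions better, but it costs more per query, so the router sends it only the prompts that seem to need it.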
This is the first time the general public will have access to OpenAI's test-time compute technology, something that Altman said is important to the company's mission to build AI that benefits all of humanity.
Mr Altman believes the current investment in AI is still inadequate.
"We need to build a lot more infrastructure globally to have AI locally available in all these markets," he said.
Friday, August 8, 2025
AI Can Do a Lot—but Still Seems Totally Stumped by Sudoku
Artificial intelligence chatbots can whip up the code for a website in just a few seconds and summarize the important parts of a 90-minute meeting in moments. But how trustworthy is the technology? High-profile examples of AI hallucinating or gaslighting users have made some people understandably wary. But a group of researchers at the University of Colorado Boulder has come up with an interesting way to test the trustworthiness of the technology: by playing Sudoku.
The researchers gave AI models 2,300 six-by-six Sudokus (which are simpler than the nine-by-nine grids most humans play). They then set the AIs loose, asking five different models to solve them all, and then asking the models to explain their answers.
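Part of what makes Sudoku a clean benchmark is that checking a solution is mechanical: every row, every column, and every 2x3 box must contain the digits 1 through 6 exactly once. A small checker of the kind answers could be scored against might look like this (a sketch, not the study’s actual code):

```python
# Validity check for a solved 6x6 Sudoku: each row, each column, and
# each 2x3 box must contain the digits 1-6 exactly once.

def valid_6x6(grid):
    want = set(range(1, 7))
    rows = grid
    cols = [[grid[r][c] for r in range(6)] for c in range(6)]
    boxes = [
        [grid[r][c] for r in range(br, br + 2) for c in range(bc, bc + 3)]
        for br in range(0, 6, 2) for bc in range(0, 6, 3)
    ]
    # Every unit must be exactly the set {1, ..., 6}.
    return all(set(unit) == want for unit in rows + cols + boxes)

solved = [
    [1, 2, 3, 4, 5, 6],
    [4, 5, 6, 1, 2, 3],
    [2, 3, 1, 5, 6, 4],
    [5, 6, 4, 2, 3, 1],
    [3, 1, 2, 6, 4, 5],
    [6, 4, 5, 3, 1, 2],
]
print(valid_6x6(solved))   # True
```

An AI’s *answer* can be verified this cheaply even when its *explanation* cannot, which is exactly the gap the study probes.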
The AI struggled a bit with the puzzles themselves. ChatGPT’s o1 model, for instance, solved only 65 percent of the puzzles correctly; that’s an older model that was state of the art two years ago (the company introduced o4-mini in April). Other AI systems did even worse.
Nobody’s perfect, not even a machine, but things got really interesting when the researchers asked the AI platforms to explain how they chose their answers.
“Sometimes, the AI explanations made up facts,” said Ashutosh Trivedi, a co-author of the study and associate professor of computer science at CU Boulder, in a statement. “So it might say, ‘There cannot be a two here because there’s already a two in the same row,’ but that wasn’t the case.”
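The kind of fabricated justification Trivedi describes is easy to verify mechanically. A minimal sketch — not the CU Boulder team’s actual methodology; the grid and helper are invented for illustration:

```python
# Hedged sketch (not the CU Boulder team's methodology): mechanically verify a
# model's stated justification -- e.g. "there's already a 2 in this row" --
# against the actual grid. The grid and helper below are invented examples.
def claim_holds(grid, row, value):
    """True if `value` already appears in row `row` (0 marks an empty cell)."""
    return value in grid[row]

grid = [
    [2, 0, 0, 0, 0, 0],  # row 0 really does contain a 2
    [0, 0, 0, 0, 0, 0],  # row 1 does not
]
print(claim_holds(grid, 0, 2))  # True: the explanation checks out
print(claim_holds(grid, 1, 2))  # False: the "2 in this row" was fabricated
```

Because Sudoku constraints are fully checkable, every sentence of a model’s explanation can be graded as true or false against the board — which is exactly what makes the puzzle a useful trust benchmark.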
One of the AIs, when asked about Sudoku, answered the question by giving an unprompted weather forecast. “At that point, the AI had gone berserk and was completely confused,” said study co-author Fabio Somenzi, professor in the Department of Electrical, Computer, and Energy Engineering.
The hallucinations and glitches, the authors note, “underscore significant challenges that must be addressed before LLMs can become effective partners in human-AI collaborative decision-making.”
The o1 model from OpenAI was especially bad at explaining its actions, despite vastly outpacing the other AI models with the puzzles. (The others, the study says, were “not currently capable” of solving six-by-six Sudoku puzzles.) Researchers said its answers failed to justify moves, used the wrong basic terminology, and poorly articulated the path it had taken to solve the puzzle.
On a broader scale, the public’s trust in AI has a long way to go. A study by KPMG found that just 41 percent of people are willing to trust AI, even when they’re eager to see its benefits. The World Economic Forum, meanwhile, says trust will shape outcomes in the AI-powered economy, while McKinsey, in March of this year, reported 78 percent of organizations use AI in at least one business function.
The Sudoku study was less about whether artificial intelligence could solve the puzzle and more a logic exercise. The focus was to gain insight into how AI systems think. A better understanding of how AI thinks could ultimately improve people’s trust levels and ensure that the results the AI spits out, whether it’s computer code or something to do with your finances, are more reliable.
“Puzzles are fun, but they’re also a microcosm for studying the decision-making process in machine learning,” said Somenzi. “If you have AI prepare your taxes, you want to be able to explain to the IRS why the AI wrote what it wrote.”
BY CHRIS MORRIS @MORRISATLARGE
Wednesday, August 6, 2025
A Harvard Professor Says This Is How AI Will Shake Up White-Collar Work
Last week, a report from Microsoft — one of the companies most aggressively pushing AI tools out into the world — suggested the top 40 jobs AI will most likely take over in the coming years, as well as the 40 jobs most resistant to the AI invasion. You probably won’t be surprised that telemarketers and translators are at high risk, while more practical roles like nursing assistants and embalmers are at low risk. But in a new report, Christopher Stanton, an associate professor of business administration at Harvard Business School, explained how quickly AI might upset many more white-collar jobs than some people think, and he also worries that there may not be much we can do to stop it.
Stanton’s research covers the impact of AI in the workplace, so he knows what he’s talking about, and some of the statistics and opinions he voiced should concern pretty much every leader of any size company. When you look at “the tasks workers in white-collar work can do and what we think AI is capable of,” he explained to The Harvard Gazette, the “overlap impacts about 35 percent of the tasks that we see in labor market data.” Essentially, Stanton thinks that the suite of AI tools that’s already accessible to businesses could replace a human worker in about one in every three tasks typical in the office. Whether companies actually are choosing to do that is an open question, however.
Stanton also set out an optimistic case for AI replacing human workers, using his own job as professor as a model. Optimistically, he thinks companies may choose to use AI to automate some jobs and thus “free up people to concentrate on different aspects of a job.” As a professor, you might see “20 percent or 30 percent of the tasks that a professor could do being done by AI, but the other 80 percent or 70 percent are things that might be complementary to what an AI might produce,” he said. Here his words echo numerous other AI proponents’ promises about AI.
But when it comes to keeping AI evolution on track — and the expansion of AI has been “probably some of the fastest-diffusing technology around,” Stanton said — this expert has a darker idea. While he admits the jury is still out on whether AI will displace people from whole classes of jobs or not, he does worry that it might upset the entire job market, with many middle-class Americans suddenly out of work, leading to impacts on society. Stanton said he felt politicians “will have a very limited ability to do anything here unless it’s through subsidies or tax policy,” because “anything that you would do to prop up employment, you’ll see a competitor who is more nimble and with a lower cost who doesn’t have that same legacy labor stack probably outcompete people dynamically.”
Stanton’s words resonate strongly with the ongoing mainstream debate about the impact of AI, and in particular with actions by Amazon’s CEO Andy Jassy. In a memo to staff recently, Jassy gave a leadership master class about how not to talk about AI, bungling the news that AI would indeed be taking people’s jobs at the retail and internet giant so badly that it triggered emotional staff pushback on Amazon’s internal Slack discussion system, with some workers demanding senior leadership positions should also be under the same AI threat.
But last week Jassy took a different tone when he addressed the matter during Amazon’s earnings call, Fortune reported. After reiterating that AI is going to “change very substantially the way we work,” he softened his stance and instead suggested that AI will “make all our teammates’ jobs more enjoyable” since it’ll free them from many “rote” procedures that couldn’t previously be automated.
Saying AI will make jobs more “enjoyable” is an interesting turn of phrase, and it does echo recent research by global tech giant HP. The company’s study found seven in 10 workers who use AI say it can make their jobs easier, which may correlate with lower stress and also boosted happiness (which translates to better productivity).
But there’s a strong undercurrent to all this research. Stanton, Jassy, and other experts argue that AI will take people’s jobs away … but for the remaining staff, it may make their days smoother.
Why should you care about this? If you’re busy planning out how your company will leverage AI tech, the way you explain the initiative to your workers matters. Honest words about how AI will help with their daily tasks, along with a promise not to overburden them with more work now that they’re benefiting from AI assistance, are probably a good idea.
BY KIT EATON @KITEATON
Monday, August 4, 2025
The 1 Big Mistake Companies Are Making by Adding AI to Customer Service
Jim Eckes says that when he founded telecom business TieTechnology in 2006, customer service was “truly a miserable experience.” When artificial intelligence tools started popping up nearly two decades later, it seemed like business owners could finally solve this issue. But AI has, in many cases, made customer service even worse, according to Eckes.
“You can’t really slap an AI tag on [the] terrible experience that customers are having right now, take out the human element by adding a bot—because that’s going to further infuriate the customer—and then try to call it progress,” he says.
Customer surveys support this. Research by advisory firm Gartner, for example, found that 64 percent of consumers don’t want businesses to use AI in customer service. More than half of the 5,700-plus survey respondents said that if they discovered a company was planning to implement AI in this sector, they’d consider “switching to a competitor.”
The problem, Eckes says, is that business owners are “skipping right to an AI solution” before addressing the root of their issue, “which is the customer relationship.” Recently, he adds, “customers have been treated so poorly due to cost-cutting, outsourcing, inadequate phone systems training, and—in our opinion, most importantly—the lack of CRM integration.” TieTechnology helps businesses connect their phone systems to their customer relationship management systems.
Replacing human customer service agents with AI chatbots won’t fix this, according to Eckes, because at the end of the day, customers are still going to pick up the phone. Nearly two-thirds—61 percent—of customers prefer to complete customer service tasks by calling, messaging, or meeting with a human, according to Qualtrics’ 2025 State of the Contact Center report.
What customers “truly want,” Eckes says, “is a personalized experience.” That’s why he’s betting that solutions that can “successfully marry” customer data, customer experience, and phone systems “are going to be the winners in this AI race.”
BY ANNABEL BURBA @ANNIEBURBA
Friday, August 1, 2025
How Tech’s AI Boom Could Drive Up Small Business Costs
It’s broadly known that the rapid development and scaling of tech like cloud computing, artificial intelligence (AI), cryptocurrency mining, and even video streaming consume vast amounts of energy. Yet few small business owners — or heads of private households — are aware of how much their own electricity bills are increasing because of the big data centers that large companies use to run their power-guzzling platforms. As that enormous demand surges, the prices of dwindling power supplies for everyone else also soar.
That expensive electrical link between small and large businesses became clearer this month, when regional grid operator PJM Interconnection announced the results of its annual auction to secure sufficient power capacity over the next year. The company said bids it received were 22 percent higher than in 2024 — a considerable increase, but nowhere near the 833 percent price surge it reported in 2023. But it still means companies and households in Washington D.C. and the 13 Midwest and Atlantic states PJM serves will likely see 1.5 percent to 5 percent jolts in their power bills over the next year, based on the calculations PJM uses to pass along cost increases.
That may not sound like an enormous one-time rise, but it will come atop the 6 percent increase in U.S. electricity prices between January and June of this year, according to the Bureau of Labor Statistics. Worse still, those climbing energy costs — which President Donald Trump vowed to cut in half during his first year in office — are likely to continue increasing in coming years.
The reason? In announcing the recent auction results, PJM executive vice president Stu Bresler explained, “the majority of the demand increase you saw was large loads and data center additions.” Neither the proliferation of those facilities, nor the enormous energy needs they have, are likely to abate any time soon.
New data centers are being built almost constantly these days, enabling tech companies to prepare their energy-voracious AI apps to play bigger and increasingly diversified roles in business and life. As that buildout continues, it will stoke demand for electricity, and with it prices, as supply and capacity reach their limits. It will also test the abilities of already overtaxed and in some cases antiquated electrical grids to keep up.
“It literally tells you we are out of generation,” Sean Kelly, chief executive officer of power forecasting firm Amperon Holdings Inc., told Bloomberg after the auction. “It’s good for traders, it’s good for asset owners, it is not good for consumers.”
That raw deal for households and small business owners now facing higher energy prices turns out to be a pretty sweet one for Google, Microsoft, Meta, Amazon, and other tech companies whose data centers are consuming all that electricity. As the sector pursues AI development and expansion, many of those corporations are building even more of the processing facilities, often with encouragement from local authorities.
For example, Virginia is already home to “Data Center Alley.” The construction of those 596-and-counting facilities near Ashburn was supported by the state, which granted their builders and users an exemption from sales tax on all computing equipment used in them.
Ohio, which PJM also serves, offers a similar tax exemption to encourage data center building. Many other states and localities across the U.S. are similarly bidding for the investments and jobs that big tech companies provide when building new centers.
The problem is, those additional processing complexes — which are often as big as a football field, and also consume enormous volumes of water for cooling — are already testing the capacity limits of the nation’s aging, struggling electricity grid. The ongoing spread of AI and its increasing power demands will make that challenge exponentially harder.
According to the World Economic Forum, energy consumption of emerging AI alone “is doubling roughly every 100 days.” That’s expected to grow fourfold as the tech transitions from developmental to operational phases. That additional draw on the grid may wind up creating shortage trouble for many businesses and households — which between 2012 and 2022 suffered a 20 percent rise in power outages. Just as bad, the duration of those blackouts increased 46 percent over the same period.
A recent Goldman Sachs study estimated AI will add 160 percent to generally rising U.S. energy demand through 2032. Earlier studies forecast annual growth of tech’s electricity consumption at between 13 and 15 percent through the end of the decade.
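The headline growth figures are just compound doubling. A quick sketch of the arithmetic — the helper function is illustrative, and the 100-day doubling period is the WEF figure quoted above:

```python
# Illustrative compound-growth arithmetic behind the headline figures.
# The 100-day doubling period is the World Economic Forum estimate above.
def growth_factor(doubling_period_days: float, elapsed_days: float) -> float:
    """Growth multiple after `elapsed_days` at a fixed doubling period."""
    return 2 ** (elapsed_days / doubling_period_days)

print(growth_factor(100, 200))            # 4.0 -- the "fourfold" figure in ~200 days
print(round(growth_factor(100, 365), 1))  # ~12.6x if that pace held a full year
```

The steepness is the point: at a 100-day doubling period, even a short delay in adding generation capacity compounds into a large shortfall.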
That all comes as the Trump administration is moving to extend the lives of aging fossil fuel power plants, many of which are already functioning at maximum capacity. At the same time, he’s ended tax breaks and other incentives to companies developing or using solar, wind, and other renewable sources that could have taken up some of the slack.
As a result, the combination of limited power facilities, an increasingly creaky grid, and the proliferation of AI applications and data centers enabling their use may translate into years of higher electricity bills for small business owners and households.
Contributors to social platform Reddit’s subreddit on energy aren’t waiting for that to happen, and have begun protesting Big Tech sucking up already barely sufficient electricity supplies.
“AI power needs must be split from consumer and commercial needs and the Microsoft, Meta, Googles of the world need to pay for those (both setting up the infrastructure and operating it),” redditor nspy1011 said in response to the PJM auction. “Enough with privatizing profits and socializing costs/losses.”
“Americans are going to get screwed because Trump stopped Biden’s tax credits for solar panel factories and wind turbine factories,” lamented Franklin_le_Tanklin. “Anyone who can’t survive off the grid with solar and battery is going to get destroyed in High electricity rates.”
Some commentators, however, expect tech companies — and U.S. business ingenuity — to fix the same problem they’re now creating.
“As the technology improve electricity demand will become more reasonable,” said Future_Helicopter970. “The hardware powering AI is performing faster every year, improving at a faster rate than Moore’s Law. These performance gains have been measured by outside groups, so it’s not just marketing hype. Specialized AI chips are doing better than general purpose chips.”
BY BRUCE CRUMLEY @BRUCEC_INC
Wednesday, July 30, 2025
‘Quiet Power’: How Taiwan Semiconductor Drives AI Innovation, From Apple to Nvidia
What does it take to adapt to the new business dynamics of AI innovation, let alone dominate them? In this article, paid subscribers will learn this through a case study from the semiconductor business, which includes:
How Taiwan Semiconductor Manufacturing Company (TSMC), hardly a household name, has powered many top AI players—from Apple to Nvidia.
A better understanding of borderless innovation, the notion that globalization requires open collaboration in order to be successful.
How the world of AI puts “closed” innovation systems, such as Intel’s semiconductor chip business, at a major competitive disadvantage.
Why Taiwan’s culture of innovation may be better aligned with the future than America’s—and what needs to change.
Lessons on how to future-proof your business by amassing “quiet power” in your industry.
Late one evening in 2010, in his Taipei home, Morris Chang topped off the wine in his guest’s glass. Across from him sat Jeff Williams, Apple’s chief operating officer, who had flown in with a proposal that was as audacious as it was simple.
Williams got straight to the point: We want to move the iPhone’s chipmaking to TSMC, but on a production line so advanced it existed only on paper.
TSMC had just poured billions into perfecting its current 28‑nanometer process, circuits roughly one‑ten‑thousandth the width of a human hair. Apple was asking for 20 nm, an even tighter scale that wasn’t on TSMC’s roadmap.
Saying yes meant undertaking a frantic, high-stakes race to build new capacity from scratch. But Morris Chang did not flinch. He listened calmly as Williams spoke. (By Chang’s own count, the Apple exec did 80% of the talking that night.)
Chang later told his team that “missing Apple would cost us far more,” as he authorized a crash program to build the new production line. It was a gamble of nearly half of TSMC’s cash reserves: a $9 billion investment, with 6,000 people working around the clock to deliver in a record 11 months.
The Taiwan Semiconductor Manufacturing Co. logo at the company’s campus in Hsinchu, Taiwan, on Tuesday, July 16, 2024. Photo: An Rong Xu/Bloomberg via Getty Images
“I bet the company, but I didn’t think I would lose,” he said.
That single decision, to go all-in for Apple, would rewire the entire semiconductor industry. It changed everything.
Fifteen years later, at 9:32 AM on July 9, 2025, the unthinkable flashed across trading desks worldwide: NVIDIA — a company once known only for its video-game graphics chips — had just dethroned Apple as the world’s most valuable company, with a staggering $4 trillion market cap.
On Wall Street, traders whooped and headlines blared. Half a planet away in Taiwan, inside a humming TSMC fab, engineers in cleanroom suits stayed focused on their monitors. No applause, no champagne, just the steady whir of machines laying down atoms on silicon wafers, the building blocks of AI innovation. They didn’t need to cheer. The milestone had been engineered long ago.
TSMC quietly added more market cap than Intel ever lost—here’s how. By July 2025, Taiwan Semiconductor Manufacturing Co. (TSMC) had grown into a trillion-dollar colossus itself, firmly in the world’s top ten by market value, well ahead of stalwarts like JPMorgan, Walmart, and Visa.
Intel, once the chip industry’s lodestar, was worth barely a tenth of that. All of this was the direct result of a paradigm that TSMC had been forging for more than a decade.
Nvidia co-founder and CEO Jensen Huang announces that Nvidia will help build Taiwan’s first “AI supercomputer” with TSMC and FOXCONN as he delivers the first keynote speech of Computex 2025 at the Taipei Music Center in Taipei on May 19, 2025. Photo: I-Hwa Cheng/AFP via Getty Images
In an age where politicians clamor to bring manufacturing semiconductors “back home,” the truth is more complicated, and more inspiring. TSMC isn’t just a factory. It’s Nvidia’s factory. And Apple’s. And Qualcomm’s. And AMD’s. It’s the silent partner behind every AI boom headline. The ghost in the machine. The engine inside the engine.
This is the story of how making microchips became a team sport. How the old model of one company doing it all, like Intel’s proud in-house empire, was outpaced by a new era of openness, partnership, and focus.
It’s the story of how the tech world was unbundled. How one kingdom fell, and another rose in its place. Because the biggest breakthroughs didn’t come from working in isolation. They came from borderless collaboration.
It challenges the very notion of what a company should be. It reveals a future-ready strategy that no business, in any industry, can afford to ignore.
I. The Fortress of Solitude: Intel’s Gilded Semiconductor Cage (1968–2005)
When Gordon Moore and Robert Noyce founded Intel in 1968, they fused physics brilliance with manufacturing might under one roof. The model was singular and uncompromising: one team, one mission. Design engineers sat just meters away from fabrication experts. Problems were solved over cafeteria coffee. Secrets never leaked. Everything — from transistor layout to atomic-level etching — stayed in-house.
Intel founders Gordon Moore and Robert Noyce in 1970. Photo: Intel Free Press
By the 1990s, Intel owned the PC era. Its microprocessors powered over 90% of the world’s personal computers. “Intel Inside” became a consumer-facing brand, not just a sticker on a laptop but a seal of dominance.
The strategy was simple yet profound: own the whole stack. Intel controlled every stage of semiconductor production.
It poured billions into R&D.
It hired the sharpest minds in the Valley.
It relentlessly pushed Moore’s Law (doubling transistor density every two years).
And it worked.
Competitors like AMD survived on scraps. Intel’s vertical integration made it a semiconductor juggernaut. It was the Roman Empire of tech: self-sufficient, all-knowing, seemingly invincible. By the year 2000, Intel’s market cap peaked around $500 billion, more than the GDP of Sweden. CEO Andy Grove’s mantra, “Only the paranoid survive,” became gospel in boardrooms and business schools alike.
Intel’s Santa Clara campus wasn’t just a workplace; it was a fortress. But then it became a cage.
II. The Great Unbundling (2005–2016)
In 2007, Steve Jobs made an offer to Intel to power the first iPhone. We’re building a new kind of phone. Want in?
Intel’s CEO at the time, Paul Otellini, did the math. The chips would be low-margin. The volumes looked small. Intel was printing money with PC processors.
Apple Computer CEO Steve Jobs (left) and Intel’s former CEO Paul Otellini in 2006. Photo: Getty Images
Otellini said no. “We didn’t think it would be high volume.” That single misjudgment would haunt Intel for years. Apple turned instead to ARM-based chips. That’s why Jeff Williams flew across the Pacific to see Morris Chang.
When Intel hesitated, TSMC listened. And with that, the old semiconductor empire began to crack.
The architect of this new world wasn’t your typical hotshot founder in a hoodie. Morris Chang was 55 when he returned to Taiwan, an elder statesman in an industry infatuated with youth.
Armed with an MIT PhD and 25 battle-hardened years at Texas Instruments (TI), Chang had seen it all. At TI, he had championed a radical strategy known as “ahead of the cost curve”: sell chips below their current cost to lock in future demand. Audacious. Borderline reckless. But it worked.
Chang drove down costs faster than competitors could react. His goal was to “sow despair in the minds of my opponents.” TI’s fabs ran at full tilt. The semiconductor division boomed.
But by the early ’80s, TI’s focus had shifted to consumer electronics. Chang was passed over for CEO. So at 55, he left the U.S. and returned to Taiwan. He was recruited by the government to do something few thought possible: build a national tech industry from scratch.
The Veteran with a Radical Idea
Taiwan Semiconductor Manufacturing Company (TSMC) founder Morris Chang. Photo: Getty Images
Chang had watched too many brilliant engineers fail to launch semiconductor startups because they couldn’t afford their own fabs. Capital outlays often ran over a billion dollars, even back then.
So he flipped the model.
Chang launched Taiwan Semiconductor Manufacturing Company (TSMC) in 1987 with a pledge: We will never compete with our customers.
TSMC would make chips and chips only. It wouldn’t design them. It wouldn’t release rival products. It would be a pure-play foundry, like a printing press for silicon. He also received zero equity as a founder. Every penny of his eventual $3 billion net worth came from buying shares with his own salary.
The genius of the model lay in its ability to pool risk. By serving hundreds of customers — Apple, Nvidia, AMD, Qualcomm, and more — TSMC could keep its multi-billion-dollar fabs running near full capacity all the time. One customer’s flop would be offset by another’s blockbuster. TSMC didn’t need to predict the winners; it just had to be the best at serving all of them. Rather than bet on which chip would succeed, TSMC would bet on all of them.
Unlike Intel’s walled garden, TSMC’s model was built on radical openness:
Manufacture only others’ designs.
Share process secrets and tools with partners.
Pool demand across competitors, filling fabs around the clock.
That was the Great Unbundling, trading fortress-like empires like Intel for a sprawling, open ecosystem led by TSMC. And it set the table for AI innovation today.
III. A Day in the Life of the Borderless Chip
To understand how this borderless chip empire actually works — how a company in California can build the most advanced hardware on Earth without ever touching silicon — you have to meet a chip designer. Let’s call her Anna.
Anna is a senior engineer at a fabless company like Apple or Nvidia. Her task: design a next-generation AI accelerator. Her challenge: cram more computing power into a smaller chip that draws less energy, and ship it before competitors even start thinking about it.
In 2025, Anna’s most important collaborators aren’t down the hall. They’re 13 time zones away, inside a fabrication facility she’ll likely never visit. That’s why Anna’s workspace isn’t a lab bench or a cleanroom. It’s a virtual cockpit.
There, she operates a suite of high-powered digital tools — streamed securely from TSMC’s servers in Hsinchu — forming a kind of chip design metaverse. Among them:
1. The Rulebook (Design Rules)
Every chip designer at TSMC must follow a massive list of do’s and don’ts. These rules cover tiny details, like how close wires can be, or how much electricity each part can handle. If you break just one rule, the chip might not work at all.
This rulebook is like a secret recipe, capturing everything TSMC has learned about making world-class chips at the forefront of AI innovation.
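The spirit of those rules can be shown with a toy check. A real TSMC design-rule deck runs to thousands of rules; the 24 nm spacing value and the helper below are invented purely for illustration:

```python
# Toy illustration of a design-rule check (DRC). Real decks contain thousands
# of rules; the spacing value and this helper are invented for illustration.
MIN_SPACING_NM = 24  # hypothetical minimum metal-to-metal spacing

def spacing_violations(wire_positions_nm):
    """Return index pairs of neighboring wires spaced closer than the rule."""
    xs = sorted(wire_positions_nm)
    return [(i, i + 1) for i in range(len(xs) - 1)
            if xs[i + 1] - xs[i] < MIN_SPACING_NM]

print(spacing_violations([0, 30, 48, 90]))  # [(1, 2)]: wires at 30 and 48 nm
```

A single flagged pair like this is exactly the “break just one rule and the chip might not work” failure mode the rulebook exists to prevent.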
2. The LEGO Bricks (Standard Cells)
Instead of building every tiny part from scratch, designers use ready-made building blocks. These blocks—like memory bits or logic gates—are tested, reliable, and designed to fit together perfectly.
It’s like building something complex out of LEGO: faster, safer, and way less likely to break.
3. The Crystal Ball (Simulation Models)
Before a chip is ever built, designers use powerful software to predict exactly how it will perform. They can see how fast it will run, how much energy it will use, and how hot it might get.
It’s like taking the chip for a test drive, without having to build it first. Because if something goes wrong after it’s built, fixing it can cost millions.
It’s past midnight. Anna rubs her eyes as lines of code blur on the screen. A simulation error blinks red. Her stomach knots. One more problem to solve before dawn.
She tweaks a tiny circuit parameter, heart pounding. Then the red alert flickers to green. Anna exhales. A small smile cuts through the fatigue.
The Unsung Heroes of Automation
TSMC didn’t create the chip design metaverse on its own. Since 2008, its Open Innovation Platform (OIP) has been the glue binding together a powerful alliance of key players:
IP core providers like ARM, who license ready-made building blocks used in chips: CPU cores, graphics engines, and communication controllers.
EDA (Electronic Design Automation) vendors like Synopsys and Cadence, who provide the software tools that help engineers design and test chips with billions of tiny components.
EDAs? Let’s pause. No story of unbundling is complete without these unsung heroes.
Back in the mid-1980s, a team led by Aart de Geus spun out of General Electric to found Synopsys. Their breakthrough? Logic synthesis. A software tool that could take a high-level description of a chip’s intended behavior and automatically generate an optimized circuit layout.
At first, customers were skeptical. Could a machine really do better than human engineers? But the results were undeniable: Designs that once took months could now be synthesized in weeks, often with fewer errors.
Meanwhile, a startup called ECAD (which later merged into Cadence) developed tools that could automatically verify chip layouts with blazing speed. Together, these tools democratized chip design.
Welcome to the Great Library of Taiwan
Today, through TSMC’s OIP, players like Synopsys, Cadence, and IP vendors start working together years before a new chip process is even ready. So by the time engineers like Anna show up at TSMC:
The design tools are already fine-tuned and certified for the latest tech.
The simulation flows have been road-tested to catch expensive mistakes.
More than 60,000 plug-and-play chip components are available, all proven to work in real silicon.
It’s like stepping into the Great Library of Taiwan, where every book, every IP block, is guaranteed to function flawlessly at the 3-nanometer scale. (Fun fact: blowing a 3nm transistor up to the size of a marble is roughly a five-million-fold magnification; magnify a regular marble by the same factor and it would stretch about 75 kilometers, wider than most cities.)
Even the heaviest simulations now run seamlessly in the cloud — on AWS or Azure — thanks to TSMC’s Virtual Design Environment (VDE). Once, this kind of computing firepower was reserved for the likes of Intel. But now, it’s available to anyone with ambition, and a login.
In this new AI ecosystem, then, who does what work? Or put another way, what’s work like for Anna? At its best, it’s as if she’s working inside TSMC without ever leaving California. This isn’t outsourcing. It’s deep entanglement. And it’s likely the very future texture of how business operates.
IV. The Great Semiconductor Reckoning
Intel’s decline wasn’t a dramatic collapse. It was a slow, grinding erosion.
First, it missed the smartphone wave entirely. Then came a misfire: betting big on its own low-power Atom chips that never caught on. Meanwhile, its greatest strength — manufacturing — began to falter. The once-reliable cadence of process improvements slipped. Intel’s long-promised 10nm node arrived years late.
In the semiconductor world, a “node” refers to a manufacturing standard. The smaller the number (like 5nm or 3nm), the more powerful and efficient the chip is, because you can squeeze more transistors into it.
By contrast, TSMC advanced in lockstep with its partners, moving like a steady metronome.
7nm in 2018, powering Apple’s A12 and Huawei’s Kirin chips.
5nm by 2020, for Apple’s A14 and the first M1 Macs.
3nm by 2023, arriving on time, on target.
By the time Washington realized America’s chip supply rested on an island just 100 miles from China, it was too late. The geopolitical alarm bell rang. The response was massive: $6.6 billion in CHIPS Act grants and another $5 billion in loans to lure TSMC into building fabs in Arizona.
Building factories is easy. But replicating excellence? That’s hard.
Taiwan’s 24/7 Discipline Meets America’s 9‑to‑5
By 2024, TSMC’s U.S. operations were deep in the red, posting a staggering NT$14.3 billion loss (roughly USD $440 million). The Arizona fab, once hailed as a symbol of industrial revival, had become TSMC’s most costly site.
The problem wasn’t technology. The chips scheduled to come out in 2025 were still world-class. The problem was culture.
The precision required in semiconductor fabrication defies comprehension. Extreme ultraviolet (EUV) machines fire lasers at droplets of molten tin 50,000 times per second, hitting each one with precision exceeding that of the calculations that guided the Apollo moon landings.
But the cultural misalignment was harder to control than any beam.
Inside the Arizona project, a clash of norms played out. American engineers chafed at what they saw as rigid, counterproductive hierarchies. Some found the environment “prison-like.” Decisions flowed strictly top-down. Overnight shifts and 12-hour days weren’t just common; they were expected. What Taiwanese leadership saw as discipline, U.S. staff saw as dysfunction.
Taiwanese managers, meanwhile, were dismayed by what they perceived as a lack of “dedication and obedience.” American engineers seemed overly fixated on work-life balance, unwilling to push through the kind of relentless all-hands grind that TSMC’s culture had long normalized.
In Taiwan, manufacturing is treated with the urgency of a national mission. In Arizona, it felt like just another job.
Morris Chang didn’t sugarcoat it. He called America’s semiconductor manufacturing push “a very expensive exercise in futility.” Taiwan’s edge, he argued, wasn’t just cheaper costs. It was something far more difficult to replicate: a 30-year compounding advantage of talent, culture, and ecosystem alignment. Every layer — from suppliers and universities to shift workers — was finely tuned for one thing: building the best chips on Earth.
That kind of excellence doesn’t come from a simple blueprint. And it doesn’t copy-paste.
V. The AI Innovation Ecosystem is the New Empire
In 1987, Morris Chang was nobody’s first choice. Passed over, pushed aside, sent to an island most executives couldn’t find on a map.
But Chang understood something his rivals didn’t. TSMC’s borderless empire — a sprawling, unbundled, collaborative kingdom — operates not by domination, but by enablement.
Nvidia’s trillion-dollar rise. Apple’s in-house silicon. AMD’s comeback. None of it would have been possible on the old, closed playing field.
In the 21st century, the most defensible advantage isn’t owning factories or hoarding expertise. It’s building a gravitational field, so strong that the best companies in the world choose to orbit around you.
The moat? Trust. TSMC’s radical openness lets partners innovate fearlessly. Risks are pooled. Knowledge flows freely. The question isn’t “What can we do alone?” but “What can we unleash together?”
The Quiet Power of Being Everyone’s Future
“Sitting in Hsinchu, being in the foundry business,” Morris Chang once said, “I actually see a lot of things before they actually happen.”
When Qualcomm abruptly shifted orders to TSMC from IBM in the late ’90s, Chang didn’t need a formal memo. “IBM Semiconductor is in trouble,” he thought. And he was right.
This is the quiet power of being the enabler, not the enforcer. TSMC sees the future first, not by fortune-telling, but because everyone’s future runs through it.
The next time someone tells you to build walls, hoard your advantages, and trust no one… remember the 55-year-old engineer who gave away his secrets, and ended up shaping the future of high tech.
Because the future never belongs to the paranoid or the possessive. It belongs to the cross-border collaborators—those who reject border walls and build gravitational wells instead.
EXPERT OPINION BY HOWARD YU @HOWARDHYU
Monday, July 28, 2025
Why This AI Influencer Earns 40 Times More Than Its Human Counterparts
Artificial intelligence-powered influencer Lu from Magalu has shared 74 sponsored Instagram posts over the past year, paid for by brands like Netflix and Hugo Boss, according to a recent report by video editing platform Kapwing. As the face of Brazilian retail platform Magazine Luíza, Lu boasts 8 million followers on Instagram and 7.4 million on TikTok.
Human accounts of this size command sponsored-post rates of more than $34,000, according to Kapwing’s estimate. If Lu earns that much per post, she likely made upwards of $2.5 million from May 2024 to May 2025. The average human influencer, by comparison, earns just $65,245 per year, nearly 40 times less, according to ZipRecruiter.
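For readers who want to sanity-check the “nearly 40 times” claim, here is a quick back-of-the-envelope calculation using the article’s own figures (the per-post rate and post count are Kapwing’s estimates, not audited numbers):

```python
# Figures as reported in the article (estimates, not audited earnings).
RATE_PER_POST = 34_000       # Kapwing's estimated sponsored-post rate (USD)
SPONSORED_POSTS = 74         # Lu's posts from May 2024 to May 2025
HUMAN_AVG_INCOME = 65_245    # ZipRecruiter's average annual influencer income (USD)

# Upper-bound estimate of Lu's yearly sponsored-post earnings.
lu_earnings = RATE_PER_POST * SPONSORED_POSTS
multiple = lu_earnings / HUMAN_AVG_INCOME

print(f"Lu's estimated earnings: ${lu_earnings:,}")        # $2,516,000
print(f"Multiple of average human income: {multiple:.1f}x") # 38.6x
```

The product comes to about $2.5 million, and dividing by the ZipRecruiter average gives roughly 38.6, which the article rounds to “nearly 40 times.”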
Magazine Luíza, also known as Magalu, created Lu’s persona in 2003, Kapwing says. Six years later, the retailer started posting YouTube videos that showed her — a somewhat realistic-looking, animated brunette woman — unboxing goods and giving product reviews. More recently, Magalu has pushed to turn Lu into a full-fledged social-media personality, tapping advertising giant Ogilvy to lead the transformation.
Aline Izo, the São Paulo-based company’s senior manager of marketing, told The Observer in 2021 that “in Brazil, Lu is not a sales gimmick” but “an influencer in the true sense of the word.” She emphasized Lu’s influence: “When she takes a stand on something—for example on bringing awareness to domestic abuse or standing up and advocating for LGBT rights—people pay attention.”
Independent AI influencer Lil Miquela is the second highest-earning virtual content creator on Instagram, according to Kapwing. Miquela’s account has 2.4 million followers, but the influencer is leagues behind Lu in terms of earnings: Kapwing estimates she made only about $74,000 from May 2024 to May 2025.
This disparity is likely due to the number of sponsored posts each account shared over that period. According to Kapwing, Lu posts four times as many ads as “any other virtual influencer among the top ten earners” it identified.
BY ANNABEL BURBA @ANNIEBURBA