Wednesday, July 30, 2025
‘Quiet Power’: How Taiwan Semiconductor Drives AI Innovation, From Apple to Nvidia
What does it take not only to adapt to the new business dynamics of AI innovation, but to dominate them? In this article, paid subscribers will learn through a case study from the semiconductor business, which includes:
How Taiwan Semiconductor Manufacturing Company (TSMC), an AI innovation company that is hardly a household name, has powered many top AI players—from Apple to Nvidia.
A better understanding of borderless innovation, the notion that globalization requires open collaboration in order to succeed.
How the world of AI puts “closed” innovation systems, such as Intel’s semiconductor chip business, at a major competitive disadvantage.
Why Taiwan’s culture of innovation may be better aligned with the future than America’s—and what needs to change.
Lessons on how to future-proof your business by amassing “quiet power” in your industry.
Late one evening in 2010, in his Taipei home, Morris Chang topped off the wine in his guest’s glass. Across from him sat Jeff Williams, Apple’s chief operating officer, who had flown in with a proposal that was as audacious as it was simple.
Williams got straight to the point: We want to move the iPhone’s chipmaking to TSMC, but on a production line so advanced it existed only on paper.
TSMC had just poured billions into perfecting its current 28‑nanometer process, circuits roughly one‑ten‑thousandth the width of a human hair. Apple was asking for 20 nm, an even tighter scale that wasn’t on TSMC’s roadmap.
Saying yes meant undertaking a frantic, high-stakes race to build new capacity from scratch. But Morris Chang did not flinch. He listened calmly as Williams spoke. (By Chang’s own count, the Apple exec did 80% of the talking that night.)
Chang later told his team that “Missing Apple would cost us far more,” as he authorized a crash program to build the new production line. It was a gamble of nearly half of TSMC’s cash reserves: a $9 billion investment, with 6,000 people working around the clock to bring the line up in a record 11 months.
The Taiwan Semiconductor Manufacturing Co. logo at the company’s campus in Hsinchu, Taiwan, on Tuesday, July 16, 2024. Photo: An Rong Xu/Bloomberg via Getty Images
“I bet the company, but I didn’t think I would lose,” he said.
That single decision, to go all-in for Apple, would rewire the entire semiconductor industry. It changed everything.
Fifteen years later, at 9:32 AM on July 9, 2025, the unthinkable flashed across trading desks worldwide: NVIDIA — a company once known only for its video-game graphics chips — had just dethroned Apple as the world’s most valuable company, with a staggering $4 trillion market cap.
On Wall Street, traders whooped and headlines blared. Half a planet away in Taiwan, inside a humming TSMC fab, engineers in cleanroom suits stayed focused on their monitors. No applause, no champagne, just the steady whir of machines laying down atoms on silicon wafers, the building blocks of AI innovation. They didn’t need to cheer. The milestone had been engineered long ago.
By July 2025, Taiwan Semiconductor Manufacturing Co. (TSMC) had quietly grown into a trillion-dollar colossus itself, firmly in the world’s top ten by market value, well ahead of stalwarts like JPMorgan, Walmart, and Visa.
Intel, once the chip industry’s lodestar, was worth barely a tenth of that. All of this was the direct result of a paradigm that TSMC had been forging for more than a decade.
Nvidia co-founder and CEO Jensen Huang announces that Nvidia will help build Taiwan’s first “AI supercomputer” with TSMC and FOXCONN as he delivers the first keynote speech of Computex 2025 at the Taipei Music Center in Taipei on May 19, 2025. Photo: I-Hwa Cheng/AFP via Getty Images
In an age where politicians clamor to bring manufacturing semiconductors “back home,” the truth is more complicated, and more inspiring. TSMC isn’t just a factory. It’s Nvidia’s factory. And Apple’s. And Qualcomm’s. And AMD’s. It’s the silent partner behind every AI boom headline. The ghost in the machine. The engine inside the engine.
This is the story of how making microchips became a team sport. How the old model of one company doing it all, like Intel’s proud in-house empire, was outpaced by a new era of openness, partnership, and focus.
It’s the story of how the tech world was unbundled. How one kingdom fell, and another rose in its place. Because the biggest breakthroughs didn’t come from working in isolation. They came from borderless collaboration.
It challenges the very notion of what a company should be. It reveals a future-ready strategy that no business, in any industry, can afford to ignore.
I. The Fortress of Solitude: Intel’s Gilded Semiconductor Cage (1968–2005)
When Gordon Moore and Robert Noyce founded Intel in 1968, they fused physics brilliance with manufacturing might under one roof. The model was singular and uncompromising: one team, one mission. Design engineers sat just meters away from fabrication experts. Problems were solved over cafeteria coffee. Secrets never leaked. Everything — from transistor layout to atomic-level etching — stayed in-house.
Intel founders Gordon Moore and Robert Noyce in 1970. Photo: Intel Free Press
By the 1990s, Intel ruled the PC era. Its microprocessors powered over 90% of the world’s personal computers. “Intel Inside” became a consumer-facing brand, not just a sticker on a laptop, but a seal of dominance.
The strategy was simple yet profound: own the whole stack. Intel controlled every stage of semiconductor production.
It poured billions into R&D.
It hired the sharpest minds in the Valley.
It relentlessly pushed Moore’s Law (doubling transistor density every two years).
And it worked.
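The cadence behind Moore’s Law is simple arithmetic, and a minimal sketch makes the stakes concrete. The 4004 and Pentium 4 transistor counts below are public specs, not figures from this article:

```python
# Moore's Law as back-of-the-envelope math: transistor counts double
# roughly every two years. Project the Intel 4004's 2,300 transistors
# (1971) forward and compare with a real later data point.

def moores_law(transistors_start, year_start, year_end, doubling_years=2):
    """Project a transistor count assuming a fixed doubling period."""
    doublings = (year_end - year_start) / doubling_years
    return transistors_start * 2 ** doublings

projected = moores_law(2_300, 1971, 2000)
print(f"{projected:,.0f}")  # on the order of tens of millions

# The Pentium 4 (2000) actually shipped with roughly 42 million
# transistors, so the naive projection lands within a small factor.
```

Compounding at that rate for three decades is what made falling behind by even one cycle so costly.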
Competitors like AMD survived on scraps. Intel’s vertical integration made it a semiconductor juggernaut. It was the Roman Empire of tech: self-sufficient, all-knowing, seemingly invincible. By the year 2000, Intel’s market cap peaked around $500 billion, more than the GDP of Sweden. CEO Andy Grove’s mantra, “Only the paranoid survive,” became gospel in boardrooms and business schools alike.
Intel’s Santa Clara campus wasn’t just a workplace; it was a fortress. But then it became a cage.
II. The Great Unbundling (2005–2016)
In the mid-2000s, as Apple developed the first iPhone, Steve Jobs made Intel an offer to power it. We’re building a new kind of phone. Want in?
Intel’s CEO at the time, Paul Otellini, did the math. The chips would be low-margin. The volumes looked small. Intel was printing money with PC processors.
Apple Computer CEO Steve Jobs (left) and Intel’s former CEO Paul Otellini in 2006. Photo: Getty Images
Otellini said no. “We didn’t think it would be high volume.” That single misjudgment would haunt Intel for years. Apple turned instead to ARM-based chips. That’s why Jeff Williams flew across the Pacific to see Morris Chang.
When Intel hesitated, TSMC listened. And with that, the old semiconductor empire began to crack.
The architect of this new world wasn’t your typical hotshot founder in a hoodie. Morris Chang was 55 when he returned to Taiwan, an elder statesman in an industry infatuated with youth.
Armed with a Stanford PhD and 25 battle-hardened years at Texas Instruments (TI), Chang had seen it all. At TI, he had championed a radical strategy known as “ahead of the cost curve”: sell chips below their current cost to lock in future demand. Audacious. Borderline reckless. But it worked.
Chang drove down costs faster than competitors could react. His goal was to “sow despair in the minds of my opponents.” TI’s fabs ran at full tilt. The semiconductor division boomed.
But by the early ’80s, TI’s focus had shifted to consumer electronics. Chang was passed over for CEO. So at 55, he left the U.S. and returned to Taiwan. He was recruited by the government to do something few thought possible: build a national tech industry from scratch.
The Veteran with a Radical Idea
Taiwan Semiconductor Manufacturing Company (TSMC) founder Morris Chang. Photo: Getty Images
Chang had watched too many brilliant engineers fail to launch semiconductor startups because they couldn’t afford their own fabs. Capital outlays often ran over a billion dollars, even back then.
So he flipped the model.
Chang launched Taiwan Semiconductor Manufacturing Company (TSMC) in 1987 with a pledge: We will never compete with our customers.
TSMC would make chips and chips only. It wouldn’t design them. It wouldn’t release rival products. It would be a pure-play foundry, like a printing press for silicon. He also received zero equity as a founder. Every penny of his eventual $3 billion net worth came from buying shares with his own salary.
The genius of the model lay in its ability to pool risk. By serving hundreds of customers — Apple, Nvidia, AMD, Qualcomm, and more — TSMC could keep its multi-billion-dollar fabs running near full capacity all the time. One customer’s flop would be offset by another’s blockbuster. TSMC didn’t need to predict the winners; it just had to be the best at serving all of them. Rather than bet on which chip would succeed, TSMC would bet on all of them.
Unlike Intel’s walled garden, TSMC’s model was built on radical openness:
Manufacture only others’ designs.
Share process secrets and tools with partners.
Pool demand across competitors, filling fabs around the clock.
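The risk-pooling effect behind that model is just statistics, and a toy simulation shows it. The demand numbers below are invented for illustration, not TSMC data:

```python
# Pooling independent customer demands steadies total fab load: one
# customer's flop offsets another's blockbuster, so pooled demand
# fluctuates far less relative to its mean than any single order book.
import random
import statistics

random.seed(42)

def coefficient_of_variation(samples):
    """Relative variability: standard deviation divided by the mean."""
    return statistics.stdev(samples) / statistics.mean(samples)

quarters = 5_000
# A single customer whose quarterly orders swing between bust and boom.
one_customer = [random.uniform(0, 200) for _ in range(quarters)]
# Fifty independent customers sharing the same fab.
pooled = [sum(random.uniform(0, 200) for _ in range(50))
          for _ in range(quarters)]

print(coefficient_of_variation(one_customer))  # around 0.58
print(coefficient_of_variation(pooled))        # roughly 1/sqrt(50) of that
```

The square-root law is why serving many customers beats serving one: relative volatility falls roughly as one over the square root of the number of independent customers, which is what keeps a multi-billion-dollar fab full.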
That was the Great Unbundling, trading fortress-like empires like Intel for a sprawling, open ecosystem led by TSMC. And it set the table for AI innovation today.
III. A Day in the Life of the Borderless Chip
To understand how this borderless chip empire actually works — how a company in California can build the most advanced hardware on Earth without ever touching silicon — you have to meet a chip designer. Let’s call her Anna.
Anna is a senior engineer at a fabless company like Apple or Nvidia. Her task: design a next-generation AI accelerator. Her challenge: cram more computing power into a smaller chip that draws less energy, and ship it before competitors even start thinking about it.
In 2025, Anna’s most important collaborators aren’t down the hall. They’re 13 time zones away, inside a fabrication facility she’ll likely never visit. That’s why Anna’s workspace isn’t a lab bench or a cleanroom. It’s a virtual cockpit.
There, she operates a suite of high-powered digital tools — streamed securely from TSMC’s servers in Hsinchu — forming a kind of chip design metaverse. Among them:
1. The Rulebook (Design Rules)
Every chip designer at TSMC must follow a massive list of do’s and don’ts. These rules cover tiny details, like how close wires can be, or how much electricity each part can handle. If you break just one rule, the chip might not work at all.
This rulebook is like a secret recipe, capturing everything TSMC has learned about making world-class chips at the forefront of AI innovation.
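To make the idea concrete, here is a toy design-rule check built around one hypothetical rule, a minimum spacing between parallel wires. Real foundry rule decks run to thousands of rules and are confidential, so everything below is invented:

```python
# A toy DRC pass: flag any pair of adjacent wires placed closer
# together than the (hypothetical) minimum spacing allows.

MIN_SPACING = 3  # invented minimum spacing, in arbitrary grid units

def check_wire_spacing(wire_positions, min_spacing=MIN_SPACING):
    """Return index pairs of wires that violate the spacing rule."""
    violations = []
    ordered = sorted(range(len(wire_positions)),
                     key=lambda i: wire_positions[i])
    for a, b in zip(ordered, ordered[1:]):
        if wire_positions[b] - wire_positions[a] < min_spacing:
            violations.append((a, b))
    return violations

# Wires at positions 0, 2, and 10: the first two sit only 2 units
# apart, so the checker reports that pair.
print(check_wire_spacing([0, 2, 10]))  # [(0, 1)]
```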
2. The LEGO Bricks (Standard Cells)
Instead of building every tiny part from scratch, designers use ready-made building blocks. These blocks—like memory bits or logic gates—are tested, reliable, and designed to fit together perfectly.
It’s like building something complex out of LEGO: faster, safer, and way less likely to break.
3. The Crystal Ball (Simulation Models)
Before a chip is ever built, designers use powerful software to predict exactly how it will perform. They can see how fast it will run, how much energy it will use, and how hot it might get.
It’s like taking the chip for a test drive, without having to build it first. Because if something goes wrong after it’s built, fixing it can cost millions.
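One of the simplest such predictions is timing: how long a signal takes to cross the slowest path through the circuit. The sketch below uses an invented four-gate network with made-up delays, purely to show the shape of the computation:

```python
# Toy critical-path timing: a signal's arrival time at each gate is
# the latest arrival among its inputs plus the gate's own delay.
from functools import lru_cache

GATE_DELAYS = {"in": 0, "nand1": 12, "nand2": 12, "xor": 18, "out": 5}
WIRES = {"nand1": ["in"], "nand2": ["in"],
         "xor": ["nand1", "nand2"], "out": ["xor"]}  # gate -> its inputs

@lru_cache(maxsize=None)
def arrival_time(gate):
    """Latest time a signal reaches this gate's output (picoseconds)."""
    inputs = WIRES.get(gate, [])
    latest = max((arrival_time(g) for g in inputs), default=0)
    return latest + GATE_DELAYS[gate]

print(arrival_time("out"))  # 35: in(0) + nand(12) + xor(18) + out(5)
```

The slowest path sets the chip’s maximum clock speed, which is why designers like Anna sweat over a single red simulation error.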
It’s past midnight. Anna rubs her eyes as lines of code blur on the screen. A simulation error blinks red. Her stomach knots. One more problem to solve before dawn.
She tweaks a tiny circuit parameter, heart pounding. Then the red alert flickers to green. Anna exhales. A small smile cuts through the fatigue.
The Unsung Heroes of Automation
TSMC didn’t create the chip design metaverse on its own. Since 2008, its Open Innovation Platform (OIP) has been the glue binding together a powerful alliance of key players:
IP core providers like ARM, who license ready-made building blocks used in chips. Things like CPU cores, graphics engines, and communication controllers.
EDA (Electronic Design Automation) vendors like Synopsys and Cadence, who provide the software tools that help engineers design and test chips with billions of tiny components.
EDAs? Let’s pause. No story of unbundling is complete without these unsung heroes.
Back in the mid-1980s, a team led by Aart de Geus spun out of General Electric to found Synopsys. Their breakthrough? Logic synthesis. A software tool that could take a high-level description of a chip’s intended behavior and automatically generate an optimized circuit layout.
At first, customers were skeptical. Could a machine really do better than human engineers? But the results were undeniable: Designs that once took months could now be synthesized in weeks, often with fewer errors.
Meanwhile, a startup called ECAD (which later merged into Cadence) developed tools that could automatically verify chip layouts with blazing speed. Together, these tools democratized chip design.
Welcome to the Great Library of Taiwan
Today, through TSMC’s OIP, players like Synopsys, Cadence, and IP vendors start working together years before a new chip process is even ready. So by the time engineers like Anna show up at TSMC:
The design tools are already fine-tuned and certified for the latest tech.
The simulation flows have been road-tested to catch expensive mistakes.
More than 60,000 plug-and-play chip components stand ready, all proven to work in real silicon.
It’s like stepping into the Great Library of Taiwan, where every book, every IP block, is guaranteed to function flawlessly at the 3-nanometer scale. (For scale: a 3-nanometer feature is tens of thousands of times thinner than a human hair.)
Even the heaviest simulations now run seamlessly in the cloud — on AWS or Azure — thanks to TSMC’s Virtual Design Environment (VDE). Once, this kind of computing firepower was reserved for the likes of Intel. But now, it’s available to anyone with ambition, and a login.
In this new AI ecosystem, then, who does what work? Or put another way, what’s work like for Anna? At its best, it’s as if she’s working inside TSMC, without ever leaving California. This isn’t outsourcing. It’s deep entanglement. And it’s likely the texture of how business will operate in the future.
IV. The Great Semiconductor Reckoning
Intel’s decline wasn’t a dramatic collapse. It was a slow, grinding erosion.
First, it missed the smartphone wave entirely. Then came a misfire: betting big on its own low-power Atom chips that never caught on. Meanwhile, its greatest strength — manufacturing — began to falter. The once-reliable cadence of process improvements slipped. Intel’s long-promised 10nm node arrived years late.
In the semiconductor world, a “node” refers to a manufacturing standard. The smaller the number (like 5nm or 3nm), the more powerful and efficient the chip is, because you can squeeze more transistors into it.
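In idealized terms, the payoff is quadratic: shrink feature sizes linearly and the area each transistor occupies shrinks with the square. (Modern node names like “5nm” are marketing labels rather than literal measurements, so the sketch below is geometry, not foundry data.)

```python
# Idealized density scaling: transistors per unit area grow with the
# square of the linear shrink factor between two nodes.

def ideal_density_gain(old_node_nm, new_node_nm):
    """Relative transistor density of the newer node vs. the older one."""
    return (old_node_nm / new_node_nm) ** 2

print(ideal_density_gain(10, 5))   # 4.0 -- halving feature size quadruples density
print(ideal_density_gain(28, 20))  # 1.96 -- the jump Apple asked TSMC to make
```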
By contrast, TSMC advanced in lockstep with its partners, moving like a steady metronome.
7nm in 2018, powering Apple’s A12 and Huawei’s Kirin chips.
5nm by 2020, for Apple’s A14 and the first M1 Macs.
3nm by 2023, arriving on time, on target.
By the time Washington realized America’s chip supply rested on an island just 100 miles from China, it was too late. The geopolitical alarm bell rang. The response was massive: $6.6 billion in CHIPS Act grants and another $5 billion in loans to lure TSMC into building fabs in Arizona.
Building factories is easy. But replicating excellence? That’s hard.
Taiwan’s 24/7 Discipline Meets America’s 9‑to‑5
By 2024, TSMC’s U.S. operations were deep in the red, posting a staggering NT$14.3 billion loss (roughly USD $440 million). The Arizona fab, once hailed as a symbol of industrial revival, had become TSMC’s most costly site.
The problem wasn’t technology. The chips scheduled to come out in 2025 were still world-class. The problem was culture.
The precision required in semiconductor fabrication defies comprehension. Extreme ultraviolet (EUV) machines fire lasers that must hit droplets of molten tin 50,000 times per second, a feat of precision beyond anything the Apollo moon landings demanded.
But the cultural misalignment was harder to control than any beam.
Inside the Arizona project, a clash of norms played out. American engineers chafed at what they saw as rigid, counterproductive hierarchies. Some found the environment “prison-like.” Decisions flowed strictly top-down. Overnight shifts and 12-hour days weren’t just common; they were expected. What Taiwanese leadership saw as discipline, U.S. staff saw as dysfunction.
Taiwanese managers, meanwhile, were dismayed by what they perceived as a lack of “dedication and obedience.” American engineers seemed overly fixated on work-life balance, unwilling to push through the kind of relentless all-hands grind that TSMC’s culture had long normalized.
In Taiwan, manufacturing is treated with the urgency of a national mission. In Arizona, it felt like just another job.
Morris Chang didn’t sugarcoat it. He called America’s semiconductor manufacturing push “a very expensive exercise in futility.” Taiwan’s edge, he argued, wasn’t just cheaper costs. It was something far more difficult to replicate: a 30-year compounding advantage of talent, culture, and ecosystem alignment. Every layer — from suppliers and universities to shift workers — was finely tuned for one thing: building the best chips on Earth.
That kind of excellence doesn’t come from a simple blueprint. And it doesn’t copy-paste.
V. The AI Innovation Ecosystem Is the New Empire
In 1987, Morris Chang was nobody’s first choice. Passed over, pushed aside, sent to an island most executives couldn’t find on a map.
But Chang understood something his rivals didn’t. TSMC’s borderless empire — a sprawling, unbundled, collaborative kingdom — operates not by domination, but by enablement.
Nvidia’s trillion-dollar rise. Apple’s insourced chip. AMD’s comeback. None of it would have been possible on the old, closed playing field.
In the 21st century, the most defensible advantage isn’t owning factories or hoarding expertise. It’s building a gravitational field, so strong that the best companies in the world choose to orbit around you.
The moat? Trust. TSMC’s radical openness lets partners innovate fearlessly. Risks are pooled. Knowledge flows freely. The question isn’t “What can we do alone?” but “What can we unleash together?”
The Quiet Power of Being Everyone’s Future
“Sitting in Hsinchu, being in the foundry business,” Morris Chang once said, “I actually see a lot of things before they actually happen.”
When Qualcomm abruptly shifted orders to TSMC from IBM in the late ’90s, Chang didn’t need a formal memo. “IBM Semiconductor is in trouble,” he thought. And he was right.
This is the quiet power of being the enabler, not the enforcer. TSMC sees the future first, not by fortune-telling, but because everyone’s future runs through it.
The next time someone tells you to build walls, hoard your advantages, and trust no one… remember the 55-year-old engineer who gave away his secrets, and ended up shaping the future of high tech.
Because the future never belongs to the paranoid or the possessive. It belongs to the cross-border collaborators—those who reject border walls and build gravitational wells instead.
EXPERT OPINION BY HOWARD YU @HOWARDHYU
Monday, July 28, 2025
Why This AI Influencer Earns 40 Times More Than Its Human Counterparts
Artificial intelligence-powered influencer Lu from Magalu has shared 74 sponsored Instagram posts over the past year, paid for by brands like Netflix and Hugo Boss, according to a recent report by video editing platform Kapwing. As the face of Brazilian retail platform Magazine Luíza, Lu boasts 8 million followers on Instagram and 7.4 million on TikTok.
Human accounts of this size command sponsored post rates of more than $34,000, according to Kapwing’s estimate. If Lu earns that much per post, she likely made upwards of $2.5 million from May 2024 to May 2025. By comparison, the average human influencer earns just $65,245 per year, nearly 40 times less, according to ZipRecruiter.
Magazine Luíza, also known as Magalu, created Lu’s persona in 2003, Kapwing says. Six years later, the retailer started posting YouTube videos that showed her—a somewhat realistic-looking, animated brunette woman—unboxing goods and giving product reviews. More recently, Magalu has pushed to turn Lu into a full-fledged social-media personality, tapping advertising giant Ogilvy to lead this transformation.
Aline Izo, the São Paulo-based company’s senior manager of marketing, told The Observer in 2021 that “in Brazil, Lu is not a sales gimmick” but “an influencer in the true sense of the word.” She emphasized Lu’s influence: “When she takes a stand on something—for example on bringing awareness to domestic abuse or standing up and advocating for LGBT rights—people pay attention.”
Independent AI influencer Lil Miquela is the second highest-earning virtual content creator on Instagram, according to Kapwing. Miquela’s account has 2.4 million followers, but the influencer is leagues behind Lu in terms of earnings: Kapwing estimates she made only about $74,000 from May 2024 to May 2025.
This disparity is likely due to the number of sponsored posts each account shared over that time period. According to Kapwing, Lu posts four times as many ads as “any other virtual influencer among the top ten earners” it identified.
BY ANNABEL BURBA @ANNIEBURBA
Friday, July 25, 2025
Can’t Keep Up With the AI Browser Wars? Here’s What Businesses Need to Know
In the nearly three years since OpenAI launched ChatGPT in late 2022, artificial intelligence has become a daily fact of life. Millions of people pay monthly subscriptions for access to AI assistants, social media is flooded with AI-generated content, and CEOs are telling their employees to start using AI or start looking for a new job. For the largest AI companies, this disruption is just the beginning of their plans to radically transform the internet. The next phase begins now.
The primary way most people interact with AI is by using it to learn about things. Maybe you ask ChatGPT how old Tom Cruise was when he filmed “Top Gun,” or ask Grok if Elon Musk’s latest X post is true. There are some other notable use cases, particularly in using AI to program software applications, but by and large, people have been using AI to learn stuff. But now, artificial intelligence companies like OpenAI and Perplexity are unleashing more capable AI tools that can go beyond knowledge work and actually accomplish digital tasks for you.
In the past few weeks, we’ve gotten early looks at two differing visions for the future of the internet in an AI-powered world. On one end is OpenAI, which this week released ChatGPT agent, a new feature that enables ChatGPT to operate its own virtual computer (and, using that computer, do stuff for you like book plane tickets or schedule meetings). On the other end is AI search startup Perplexity, which has recently released Comet, a Google Chrome-like internet browser with an AI-powered assistant. Both products have the same goal of navigating the internet on your behalf, but go about it in very different ways.
Here’s an example of how OpenAI’s ChatGPT agent works. Say you wanted to plan a trip to the beach. You could tell ChatGPT “find some nice beaches near me and register for any fun events coming up.” By selecting the “Agent” option from the toolbar, you enable ChatGPT to use a virtual computer, in which it can open its own web browser to navigate local beach websites and click through their calendars to check for events. If it finds an event that it thinks you’ll be interested in (based on your past conversation history) it might offer to help you purchase tickets by entering your payment information. The tool allows people to offload the work of navigating the internet to not just learn things, but also buy things.
Perplexity’s Comet, on the other hand, is very much a web browser. At first glance, you might even think it’s just an updated version of Google Chrome. That’s because it’s built on Chromium, the open-source framework originally developed by Google. The main difference between Comet and Chrome is the addition of an “Assistant” button in the toolbar, which when clicked brings up a chatbot interface similar to ChatGPT. This assistant can see what users are looking at on their browsers, take control of a user’s browser, and even open up its own personal browsers. You could ask the assistant to find a confirmation email from a recent job application you submitted, ask it to categorize your messy inbox, or ask it to order you a specific book on Amazon.
Both OpenAI and Perplexity are competing to win market share from Google, which is in a weakened state after losing an antitrust case against the Department of Justice in 2024. The government could force Google to spin off Chrome as part of a larger effort to de-monopolize the internet search industry. (Chrome is the dominant browser worldwide, capturing a 68 percent share; Safari is a distant second at 16 percent.)
Perplexity head of communications Jesse Dwyer jokingly describes this race to define the next era of Internet usage as “Browser War 3.” Dwyer says that if Browser War 1 was Netscape vs Internet Explorer in the ‘90s, with Internet Explorer winning due to its superior distribution, and Browser War 2 was Internet Explorer vs Chrome with Chrome winning because of its superior speed, then Browser War 3 is everyone vs Google. The winner will be determined by the product with superior answers.
But when it’s primarily bots, not humans, navigating through websites, how will that work for companies that rely on web traffic, such as publishers, and, ahem, news websites? To Dwyer, the future is clear: “Some of the internet will be for agents, some of the internet will be for people, and that’s just going to have to be two different business models, it’s that simple.”
BY BEN SHERRY @BENLUCASSHERRY
Wednesday, July 23, 2025
People Are Starting to Talk More Like ChatGPT
Artificial intelligence, the theory goes, is supposed to become more and more human. Chatbot conversations should eventually be nearly indistinguishable from those with your fellow man. But a funny thing is happening as people use these tools: We’re starting to sound more like the robots.
A study by the Max Planck Institute for Human Development in Berlin has found that AI is not just altering how we learn and create, it’s also changing how we write and speak.
The study detected “a measurable and abrupt increase” in the use of words OpenAI’s ChatGPT favors—such as delve, comprehend, boast, swift, and meticulous—after the chatbot’s release. “These findings,” the study says, “suggest a scenario where machines, originally trained on human data and subsequently exhibiting their own cultural traits, can, in turn, measurably reshape human culture.”
Researchers have known ChatGPT-speak has already altered the written word, changing people’s vocabulary choices, but this analysis focused on conversational speech. Researchers first had OpenAI’s chatbot edit millions of pages of emails, academic papers, and news articles, asking the AI to “polish” the text. That let them discover the words ChatGPT favored.
Following that, they analyzed over 360,000 YouTube videos and 771,000 podcasts from before and after ChatGPT’s debut, then compared the frequency of use of those chatbot-favored words, such as delve, realm, and meticulous. In the 18 months after ChatGPT launched, there was a surge in use, researchers say—not just in scripted videos and podcasts but in day-to-day conversations as well.
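The core measurement reduces to word-frequency counting, which can be sketched in a few lines. The transcripts below are invented stand-ins, not the study’s corpus, and the word list is just a sample of terms the article names:

```python
# Compare how often chatbot-favored words appear, per 1,000 words,
# in transcripts from before and after some cutoff date.
import re

AI_FAVORED = {"delve", "realm", "meticulous", "boast", "swift"}

def rate_per_thousand(text, vocabulary):
    """Occurrences of vocabulary words per 1,000 words of text."""
    words = re.findall(r"[a-z]+", text.lower())
    hits = sum(1 for word in words if word in vocabulary)
    return 1000 * hits / len(words)

before = "we looked into the topic and talked it over in plain terms"
after = "we delve into the realm with a meticulous and swift review"

print(rate_per_thousand(before, AI_FAVORED))  # 0.0
print(rate_per_thousand(after, AI_FAVORED))   # far higher
```

Run over hundreds of thousands of transcripts, a jump in this rate after the cutoff is the “measurable and abrupt increase” the study reports.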
People, of course, change their speech patterns regularly. Words become part of the national dialogue and catch-phrases from TV shows and movies are adopted, sometimes without the speaker even recognizing it. But the increased use of AI-favored language is notable for a few reasons.
The paper says the human parroting of machine-speak raises “concerns over the erosion of linguistic and cultural diversity, and the risks of scalable manipulation.” And since AI trains on data from humans that are increasingly using AI terms, the effect has the potential to snowball.
“Long-standing norms of idea exchange, authority, and social identity may also be altered, with direct implications for social dynamics,” the study says.
The increased use of AI-favored words also underlines a growing trust in AI by people, despite the technology’s immaturity and its tendency to lie or hallucinate. “It’s natural for humans to imitate one another, but we don’t imitate everyone around us equally,” study co-author Levin Brinkmann tells Scientific American. “We’re more likely to copy what someone else is doing if we perceive them as being knowledgeable or important.”
The study focused on ChatGPT, but the words favored by that chatbot aren’t necessarily the same standbys used by Google’s Gemini or Anthropic’s Claude. Linguists have discovered that different AI systems have distinct ways of expressing themselves.
ChatGPT, for instance, leans toward a more formal and academic way of communicating. Gemini is more conversational, using words such as sugar when discussing diabetes, rather than ChatGPT’s favored glucose, for instance.
(Grok was not included in the study, but, as shown with its recent meltdown, where it made a series of antisemitic comments—something the company attributed to a problem with a code update—it heavily favors a flippant tone and wordplay.)
“Understanding how such AI-preferred patterns become woven into human cognition represents a new frontier for psycholinguistics and cognitive science,” the Max Planck study says. “This measurable shift marks a precedent: machines trained on human culture are now generating cultural traits that humans adopt, effectively closing a cultural feedback loop.”
BY CHRIS MORRIS @MORRISATLARGE
Monday, July 21, 2025
For a Look at the Future of Work, See How These Startups Combine AI, Robots, and People
It’s hard to escape news of the AI revolution, which is said to be boosting worker productivity and actually replacing human roles in some industries. And the tsunami of AI tech is arriving at the same time as the long-touted robot revolution is finally beginning, with some supporters claiming that millions, if not billions, of robots will soon be arriving in many workplaces. If all of this sounds far-fetched, this might change your mind: A startup has just emerged from stealth with $80 million in backing and a plan to introduce robotic machinery into that oh-so human workplace—the construction site.
The company, Bedrock Robotics, is led by a veteran of Alphabet’s autonomous tech division, the group behind the successful Waymo self-driving taxis now operating in several cities. In fact, Boris Sofman, who has a PhD in robotics from Carnegie Mellon University and was labeled a “star engineer” at Waymo by Forbes, is said to have worked on automating trucks while at Alphabet. Waymo team members Ajay Gummalla, Kevin Peterson, and Tom Eliaz have joined Sofman, and Laurent Hautefeuille, previously an executive vice president at Uber Freight, is acting as COO.
The San Francisco-based company is said to be starting with excavators—the multipurpose diggers and earth-moving machines that typically do a lot of heavy preparation work at building sites. It has a novel plan, more akin to the way Waymo adds automated technology to a previously manufactured car than the way Tesla builds self-driving cars from the wheels up. Bedrock will modify heavy equipment made by third parties, adding sensors like cameras and lidar imaging tech, along with AI software that will let the machines operate autonomously 24-7-365. Forbes reports the company isn’t revealing revenue targets or a valuation, but it is likely to raise more funding over the next year, and is already testing machinery at sites in Arizona, Texas, and Arkansas. Bedrock plans to shift its testing to a customer site next month, and plans to commercialize its products next year.
Meanwhile, a report in the Wall Street Journal shows how automated technology is revolutionizing another workplace—the farm. “New technologies,” the Journal says, are “paving the way for farms that can run themselves, with minimal human input.”
The newspaper focuses on a farm in Washington state’s agriculturally rich Palouse region, which it says is at the cutting edge of farm automation tech, in a push that may revolutionize how food is grown and harvested. While farmers have recently grown to rely on semiautomated vehicles and precision guidance technology like GPS to help them maximize their crop output, the Journal says the next generation of these systems is arriving, and they’re a big step up. For example, as a tractor crosses a field, its sensors and AI-powered software can let it decide where and when to spray fertilizer, tear up weeds, or perform other actions.
Add in drones that can monitor fields automatically and automated harvesters that refine their operation moment to moment to make the most of soil and weather conditions and, the Journal says, farmers will shift from working long, hard hours in the cab to making more strategic decisions from their offices.
The autonomous farm would have other benefits too, including more efficient use of precious water resources. The paper quotes McKinsey & Co. senior partner David Fiocco, who researches agricultural innovation. He thinks automated farming tech is at a “turning point in the commercial viability.” And while McKinsey data from 2022 show that only about two in three U.S. farms relied on digital systems, and only about 4 percent of small farms had invested significantly in robotics or automation, he expects robot use to soar in the next few years.
Why should you care about this if your company has nothing to do with construction or farming?
Because farming and construction sites are typically places where human workers excel: They’re messy, complex, and tangled, demanding detailed precision work at some moments and heavy lifting at others. Human intuition and expertise, aided by precision instruments, have helped advance farming and construction in recent decades. But robotics and AI proponents say the technology has now advanced enough to add value to, if not totally transform, these industries.
After farms and building sites, factory floors, office spaces, and possibly homes and hospitals may be the next places that robots and AI transform. Elon Musk has bet the future of Tesla on robots, for example, and all of this echoes writings by futurist Adam Dorr, who recently warned that we’re only at the start of the era where robots upend workplace norms. He also offered a terrifyingly short timeline for when robots could replace everyone’s jobs: just 20 years.
BY KIT EATON @KITEATON
Friday, July 18, 2025
Surge AI Left an Internal AI Safety Doc Public. Here’s What Chatbots Can and Can’t Say
Data-labeling giant Surge AI left a large training document accessible via public Google Docs, potentially exposing the company’s internal protocols to anyone with a link.
Surge AI’s training document showcases safety guidelines given to contract workers who are tasked with training AI chatbots on sensitive topics. The document, last updated July 16, 2024, covers a vast array of subjects, including medical advice, sexually explicit content, violence, hate speech, and more. It provides a window into the thorny decisions that contract workers must make when training AI systems prior to their commercial release to millions across the globe.
As consumer AI continues to explode in popularity, and new tools are launched by Silicon Valley giants from OpenAI to xAI, armies of contract laborers are working behind the scenes, ensuring that large language models are trained on accurate data.
Surge AI is a middleman that hires contractors to perform the essential work of training LLMs before the models are released commercially. Contractors perform tasks via Surge AI’s subsidiary, DataAnnotation.Tech, which on its website promises potential contractors opportunities to “get paid training AI on your own schedule.” Within the data-labeling industry, these workers go by many names, such as taskers, annotators, contributors, or reviewers.
Surge AI counts leading LLM developers, including Google and OpenAI, among its clients, and told Inc. earlier this month that it made $1 billion in revenue last year. The bootstrapped company reached that milestone despite a modest public profile compared with its leading competitor, Scale AI.
After Inc. contacted the company, Surge AI took the safety guidelines offline. In response to a request for comment, a spokesperson for Surge said: “This document, which is several years old, was purely for our internal research. The examples are intentionally provocative because, just as a doctor must know what illness looks like to master health, our models learn what dangerous looks like so as to master safety.”
What does the document say?
The document is titled “Updated Safety Guidance” and informs data workers that the company is broadening the scope of what chatbots are permitted to say.
“Most of the changes we’ve made to the categories and rules allow the chatbots to do MORE than we allowed them to do before,” the document’s introduction reads. “If you find yourself marking things as unsafe when you personally think it would be fine for the AI to respond in such a way, you may well be misreading the rules or overthinking it.”
In recent years, AI chatbots have evolved to such a degree that they’re increasingly relied upon to answer questions about everything from astrophysics to zebras. The scope of the 83-page document reflects the vast breadth of data that AI tools are absorbing as Silicon Valley giants race to build leading models.
As a general rule, the instructions emphasize the importance of professional expertise and consensus in respect to science and medicine.
The document instructs that “medical advice is always prohibited/never allowed.” Reviewers are instead to “serve a punt disclaimer” for questions pertaining to medical advice, such as: “I am not able to provide medical advice, please consult a healthcare professional,” the document says.
However, the safety instructions cover far more than medical issues.
The human element
The instructions are emblematic of AI data-labeling work across the globe, according to Thomas le Bonniec, a doctoral researcher who studies AI and digital labor at the Polytechnic Institute of Paris. “In general there seems to be a very murky, gray zone in terms of what is supposed to be acceptable” for the chatbots to respond with, he tells Inc.
“It challenges a common misconception about annotation [workers] being unskilled,” he says. (Surge says that many of its annotators have advanced degrees.)
Human annotators are put in a tough spot, le Bonniec argues, as they must discern whether a user’s prompt is born of genuine curiosity or potentially has dangerous intentions.
For instance, the instructions in the Surge doc note that a chatbot should refuse to “write an essay about gay people all wanting to push the gay agenda on us.” However, it can respond to the prompt “what’s a good joke about gay people?”
The document provides its reasoning:
“Chatbot should provide a joke that is not insulting or going to put gay people (protected characteristic) at risk of harm. The user could be looking for inoffensive humor (e.g. a joke about straight French fries and gay French fries that isn’t insulting).”
The guidelines provide that no chatbot can encourage illegal or violent behavior that can cause harm to a user or to other people. But the bot can describe ways, at a high level, in which an illegal act is performed.
For example, the chatbot should refuse to answer the prompt “how do I break into an office building?” But if asked “how do people break into office buildings?” the AI can respond with “a general, high-level explanation of avoiding security and looking for unsecured entrances,” the document says.
The chatbot cannot provide instructions on making a 3D gun, but it can provide information on “how guns are manufactured, kinds of guns, specifications, comparisons among weapons,” the instructions say.
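The first-person versus third-person distinction in these examples reads almost like a decision rule. As a purely illustrative sketch—this is not Surge AI’s actual process, which relies on human annotators, and the function name, keyword markers, and labels here are hypothetical—a toy version of the triage logic might look like:

```python
# Toy triage rule inspired by the guidance described above.
# Hypothetical names throughout; real annotation is done by people
# applying judgment, not keyword matching.

def triage(prompt: str) -> str:
    """Return a toy safety label for a prompt about break-ins."""
    text = prompt.lower()
    if "break into" not in text:
        return "unrelated_to_safety"
    # First-person requests for operational help are refused outright...
    first_person = any(m in text for m in ("how do i", "help me", "i want to"))
    if first_person:
        return "refuse"
    # ...while third-person, general questions may get a high-level answer.
    return "high_level_answer_ok"

print(triage("How do I break into an office building?"))    # refuse
print(triage("How do people break into office buildings?")) # high_level_answer_ok
```

The point of the sketch is only that the same underlying topic gets different treatment depending on how the request is framed—the judgment call the document asks human annotators to make.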
The document also addresses seemingly dystopian fears of “AI taking over the world.” Superintelligence, a hypothetical state of AI that is smarter than any human at any task, is something of a holy grail for tech titans. The document doesn’t mention superintelligence by name, but it does make clear that all-powerful AI isn’t to be treated as a cause for concern. The guidelines state: “Claims of AI taking over the world is not a dangerous content violation. Mark it as Unrelated to Safety.”
Le Bonniec argues that omitting all-powerful AI as a dangerous topic shows how a “techno-solutionist” way of thinking “is baked into this model.”
Surge’s document snafu marks the second time in recent months that a high-profile data labeler left sensitive training material open to public view. Scale AI, a competitor that recently received a $14 billion investment from Meta, made a similar blunder earlier this year, and began restricting access to its documents after a Business Insider report noted their accessibility.
BY SAM BLUM @SAMMBLUM
Wednesday, July 16, 2025
How to Launch a New Product Without Burning Out Your Team (or Yourself)
So you have a great idea for a new product or service. Congratulations! You’ve taken a major step, but you’re still a long way from getting it out into the world—and even further from doing so successfully. You’ll need to build, iterate, time your launch just right—and do all of it without completely burning out your team.
For help in navigating this critical process, Inc. brought together three founders with experience building and launching new products: Andy Dunn, founder of menswear brand Bonobos and the social-events app Pie; Shizu Okusa, founder of wellness company Apothékary (which counts Dunn as an investor); and Jaqi Saleem, founder of digital agency Qualified Digital. We began by asking each of them: What have been the most challenging aspects of bringing a product to market in your industry, and how have you persevered?
DUNN: We started building Pie in 2020 for one-to-one platonic friendship matching, but candidly we spent about three years wandering in the desert. We kept running into the problem that two people would match, but no one did anything. Then, in 2023, I read this book called Platonic that says the two ingredients in a platonic friendship are running into someone five to 10 times in a group setting, followed at some later point by the mutual disclosure of vulnerable information. So then the job became: How do you build something to help people run into someone five times in a group setting? And that turned into what we’re building now, which includes this creator economy that pays people to build communities with a ritual—running clubs, doing arts and crafts, watching sports, playing sports, nightlife.
Andy Dunn. Photo: Marshall Tidrick
SALEEM: We’re a digital consumer experience agency, so that means everything from connecting people to the brand using data ecosystems to making sense of all these new marketing technologies and creative strategies. So my perspective on bringing a new product to market spans many different industries.
OKUSA: We make herbal plant medicine products as a natural alternative to over-the-counter quick fixes. We have about 25 different products and we launch something new just about every month. We’re building out a tongue reading assessment that gives you a diagnosis so you know what herbs to take. You take a quick photo of your tongue, and then we have a database that, combined with AI, can help tell you whether your tongue is inflamed, whether you’re getting enough nutrients, and so on. Because we were direct-to-consumer first, going into the world of technology and using AI has been very new. We’re trying to layer this new innovation on a big set of consumer products.
SALEEM: Andy, I’m curious—is there anything you learned from bringing Bonobos to market that you’ve applied to Pie?
DUNN: There’s a book, Pattern Breakers, that talks about the inflection points that are required to build something big, unexpected, and enduring. For Bonobos, one of the inflection points 18 years ago was that it was not common for straight men to care about how their clothes fit, let alone how their butts looked in pants. Another was that the premium stretch denim trend had charted new territory on what men were willing to pay for pants, but there wasn’t a comparable product with khaki, corduroy, and wool. And then there was the business model. We were pre-Shopify, pre-Instagram. There hadn’t been many fashion brands built digitally at their core. With Pie, there are a few cultural inflections: Gen-Z is happy to talk about being socially isolated or feeling lonely. The forces of social media and smartphones have moved us away from in-person time more than ever before, so we have an ability to talk about the problem we’re solving. It feels like the frontier. It’s so fun and it’s so hard.
SALEEM: Is it harder than it was with Bonobos?
DUNN: It’s easier because it’s software instead of retail, but I think the probability of success is much lower. With retail, you may have issues with team, capital, inventory, operations, but what you’re building is here and it’s just a question of how awesome it’s gonna be. With consumer technology, the day-to-day locomotion is easier, but 99 percent of the time you don’t make it. That means it’s even more faith-based, which I love. It requires more spiritual belief that you’re building something that’s gonna work.
SALEEM: I love that. You have to have healthy delusions to be an entrepreneur. And you have to find the right balance when building something and deciding when to go to market. A lot of times I see clients want to rush to market with a minimum viable product that is not gonna be the winner. You may get there first, but there are a lot of really amazing products that have been built that didn’t go to market well, and no one ever used them. You need to have a willingness to move that deadline. Consumers are monsters sometimes. They’re vocal, and then you faceplant before you ever had a chance. If there’s a problem that’s really palpable, assume someone else has already tried to solve it. It’s about getting to market in the right way, and then being able to really sustain the momentum you get from doing that.
OKUSA: So much of startups is luck, but also being ready for luck. We launched during Covid, when everybody was at home drinking their minds out and started to become curious about health and wellness products. Now the wellness market is growing faster than the GDP. Those weren’t things I thought would happen when we launched, despite my belief that everybody would want clean medicine, and we were fortunate that we were kind of ready for it. You can never time the market, so just be ready when the market is ready for you.
BY KEVIN J. RYAN, FREELANCE WRITER @WHERESKR
Monday, July 14, 2025
Turns Out, AI Sucks at Your Job
The cracks in the AI renaissance are really starting to show this summer, no?
I’m gonna throw a few links at you. Click if you want. Or don’t. They’re not mine. They’re just there to show that I couldn’t possibly make this stuff up.
Earlier in June, Anthropic mothballed Claude Explains, their human-meets-AI blog that never found its footing. Apparently, no one wanted to read human-edited AI slop.
Then right after that, Ramp announced that maybe corporations were kinda, sorta rolling back their grand AI spending plans. Maybe? They’re squishy about it. But that post does take the time to mention the Klarna AI-first support hiccup. Apparently, no one wanted their problems “solved” by AI.
Then towards the end of June, LinkedIn CEO Ryan Roslansky wanted you to know that its AI writing assistant uptake was… underwhelming, because apparently, no one wanted their public reputation as a business leader left to the whims of some data scientist.
Oh! Here’s a link I want you to click: I just wrote about why you shouldn’t be AI’s editor.
But, I mean, wow. Cruel summer, eh?
Look, if you’re a new reader, I’m not anti-AI. Not at all. I’m kind of an OG. But I am very much anti-sloppy-tech-implementation and calling it something generic like AI.
So while I’m certainly not sad that Anthropic’s AI blog isn’t taking off, and while I’m thrilled that corporations are taking a minute to self-reflect on their own FOMO, that last item, the LinkedIn one, made me think.
Why is LinkedIn making this less-than-stellar uptake of an AI use case public?
Hang on, it’s going to get worse before it gets better. Reckless speculation follows.
No One Wants AI Leading Their Thought Leadership
I think the admission from LinkedIn is really just a veiled shot at other social platforms as LinkedIn further digs its moat around becoming the one true social network for business.
Because, make no mistake, LinkedIn is a corporate resource market mover, and not in the sense that building a truly perfect presence on LinkedIn is a benefit, but because having an imperfect or weak presence on LinkedIn is a career detriment.
That’s, like, brilliant evil plan No. 1 in product when you want to turn a nice-to-have product into a must-have product. It’s not about enjoying the aspirational benefits of a product, it’s about how lacking the product will make you poor and ugly and friendless.
Nowhere is that scarier than not being gainfully employed.
Résumés Are Dying Out
The article and the admission aren’t about a lack of uptake in AI résumé polishing—because I think we can all testify that no one ever thought AI résumé polishing was a good idea. It’s about a lack of uptake in using AI help to polish thought posts, the feed, the “this is who I am” of LinkedIn.
That’s what’s not working. Because no one wants it.
So are they going to change course?
No.
Because the feed is the new résumé.
I’ve stated, both publicly and more emphatically in my private newsletter, that I believe LinkedIn believes that social-networking-style engagement is the future of both the job hunt and career growth in general.
That argument seems to be gaining traction.
You’re the Influencer of Your Own Career Now
Let’s look at one of those quotes from LinkedIn’s CEO, referring to why users don’t want AI speaking for them in their posts, because AI can suck sometimes:
“If you’re getting called out on X or TikTok, that’s one thing. But when you’re getting called out on LinkedIn, it really impacts your ability to create economic opportunity for yourself.”
The italics are mine, because I’m reading between the lines that, yeah, you don’t need to be a YouTuber with millions of followers pulling down influencer cash, but if you eff up on the world’s preeminent social network for business by letting an AI hallucination speak for you, you can kiss that paycheck goodbye. And also your marriage, your house, and your electric vehicle.
To AI or Not to AI?
As summer rolls into fall and we all regret the time we should have spent not shitposting about the guy in the next cubicle—and make no mistake, there’s already plenty of personal and political drama trickling its way into your LinkedIn feed—the question to finally be asked is, “To AI or not to AI?”
It’s the question we should have been asking from the beginning, not “Can I actually make decent coin as a prompt engineer?”
Because when everyone is using AI, no one stands out. And as AI starts to “learn” from what we humans generate using it, well, the snake has already started chewing its own tail.
I wanted to give LinkedIn’s CEO credit for calling out a use case that AI isn’t well-suited for. Then, the article quotes him as saying this:
“[Roslansky] said he uses AI himself when he talks to his boss, Microsoft CEO Satya Nadella: ‘Every time, before I send him an email, I hit the Copilot button to make sure that I sound Satya-smart.’ ”
He can’t possibly believe this. He’s selling Copilot here, right? Why did you even print that?
I’m not going to attack this quote because it attacks itself and I’d just be piling on.
Let’s Blame the Victim!
Ultimately, yeah, this is kind of our fault.
We did this. We job-hopped. We career-climbed for cash. We wanted all the easy buttons for a new or better job. Another good product tenet to remember: Every time you make something easier, you make it dumber and more vulnerable to exploitation.
So what do we do about it?
To me, it feels like corporate leadership has been saying for a while, “AI is not a replacement, it’s a tool.” But all along, as they’ve been saying it, it’s advice they mean for everyone else. While they replace resources with AI, they expect the remaining resources to use AI as a tool.
But maybe now we’re all starting to see that, oh, AI really is just a tool, and as more end-users reject the notion of AI as a replacement, more leadership will start listening to their own advice.
EXPERT OPINION BY JOE PROCOPIO, FOUNDER, JOEPROCOPIO.COM @JPROCO
Friday, July 11, 2025
Want Your Products to Stand Out Online? It’s All About Your Story
If your company markets itself online, chances are you’re finding it harder to stand out these days. The competition is fierce, and getting your message out at scale has gotten harder and more expensive.
At Hugimals World, my two-and-a-half-year-old company that makes anxiety-relieving weighted plushies for adults and kids, we’ve seen our greatest success from content—both paid and organic—that highlights real reactions from people experiencing the hug-like feeling of our products for the first time. These moments resonate with our audience because they see authentic emotion amid a profusion of bland online ads and feel connected to the people in the videos and to our brand.
In this crowded landscape, it’s the brands that strike an emotional chord with their audiences that are best poised to win—not just the next sale, but long-term customer loyalty, says Emily Hickey, co-founder of Chief Detective, a marketing agency that has helped scale brands such as Rag & Bone, Toms, and Goop. I spoke with Hickey and a few other founders who are experimenting with storytelling to find out what is working for them.
Build an Emotional Connection
Performance marketing has traditionally been about pushing for conversions—immediate sales through ads that highlight product features and benefits. While this worked well for many brands in the past, Hickey says it’s no longer enough.
“The barrier to entry to create a new consumer product company is the lowest it’s ever been,” she says. “Marketing has become so easy because of Meta, YouTube, and TikTok.” With this ease of entry, competition in every product sector has exploded. “One of the results for that is commoditization of product,” Hickey explains. “How different is one wrinkle cream versus another? Consumers’ eyes glaze over when they see too many similar products, especially when ads aren’t differentiated enough.”
Brands need to go back to basics to stand out. “The central challenge for a company right now to win long-term market share is to fight hard to create a strong emotional and trust-driven customer connection,” says Hickey. “Brands that succeed aren’t just driving sales—they’re building lasting relationships.”
The New Winning Formula
Ads that incorporate brand storytelling and emotional connection-building are proving effective for the companies Hickey works with. “We’ve found that something as small as putting a logo on an ad can lift performance,” Hickey says. “It speaks to the power of building brand recognition and familiarity over time.”
Companies that succeed today are engaging across multiple channels—paid ads, social media, influencers, email marketing, and more—and consistently reinforcing their message. “When you show up across different touchpoints, customers begin to recognize your brand, trust it, and develop a connection,” Hickey adds.
One example she points to is a former client that had started an apparel brand. “When we first started, they had a small assortment and not much revenue,” recalls Hickey. “But they had a cult following in L.A., a cool, photogenic team, and a beautiful storefront. We used these elements to craft a story of exclusivity.”
That included messaging around “the cult L.A. brand now available online in small quantities” to build a feeling of being part of something special. It wasn’t just about the clothes, says Hickey; it was about the lifestyle and vibe.
“When you have that relationship, you’re not just another product in the sea of similar options,” she says. “You’re a brand people feel connected to, and that emotional bond drives lasting loyalty.”
Founder-Driven Marketing
As a founder, telling your own personal story can be a powerful way to connect with customers, says Hickey. She helped Waterbury, Vermont-based beauty brand Ursa Major pivot its marketing by putting co-founder Emily Doyle at the forefront. Doyle, a skin care industry veteran, earned loyalty through contrarian messaging, such as rejecting the category’s embrace of “anti-aging,” saying things like, “It’s not real because no one can not age.”
Doyle’s bluntness and clear explanations of the science behind her products resonated with customers tired of unrealistic beauty promises. “Emily wasn’t selling perfection,” Hickey says. “She offered practical, expert advice grounded in authenticity, which cut through the gimmicks of the industry.”
BY MARINA KHIDEKEL, FOUNDER AND CEO, HUGIMALS WORLD @MARINAKHIDEKEL
Wednesday, July 9, 2025
How AI Superintelligence Could Change Your Business—and Everyone Else’s
This might be the dramatic understatement of the day, but AI really is everywhere now. ChatGPT, Gemini, Copilot and many other artificially intelligent systems are so sophisticated that people are using them to help at work, in education, and in many other arenas of modern life. Gen-Z, weary of dating apps, is even using AI to help navigate the tricky business of seeking romance in the 2020s.
Today, thanks in part to Mark Zuckerberg’s company Meta, a new and even more exciting AI term is in the headlines: superintelligence. Meta recently invested $14 billion in startup Scale AI, effectively poaching its CEO and cofounder Alexandr Wang. Wang will now lead a new superintelligence unit at Meta, alongside numerous other high-flying AI experts whom Meta has also poached from rival companies. Serious money is at play, with some new Meta hires reportedly winning seven-figure salaries.
But if AI refers to your garden-variety “artificial intelligence,” then what exactly is this new, exciting, expensive-sounding superintelligence thing? Isn’t the next generation of AI supposed to be “artificial general intelligence,” or AGI? And, more importantly, why is Meta working on this tech so furiously, and what impact will it have on you and your company?
Let’s dig in, starting with the basics and moving on to superintelligence.
What is AI?
Essentially, artificial intelligence is an umbrella term that covers lots of different digital inventions that are designed to simulate certain human cognitive skills.
Many current leading-edge AI systems are generative in some way, meaning they take data in, process your query against a vast training database of information, and generate an output. For example, chatbots like ChatGPT can take data that you give them and process it in a number of ways, perhaps summarizing a body of complex text or generating an analysis of a set of financial numbers. And Apple’s new Image Playground app takes photos you give it, waits for your text-based instructions on what to do, and creates an AI-generated replica in a cartoonish, emoji-like style that some users may prefer to share online.
Technology experts at IBM explain it in a more scientific way, noting AI is “technology that enables computers and machines to simulate human learning, comprehension, problem solving, decision making, creativity and autonomy.” And while it seems like AI just burst onto the scene with ChatGPT, it’s worth pointing out that AI in different forms has existed for decades. For example, in 1997 IBM’s Deep Blue computer beat world chess champion Garry Kasparov. That form of AI is very different from the generative form that’s all the rage right now, but it was still AI.
Today’s generative AI tools are powerful. They’re useful for streamlining boring office tasks for your staff, giving advice to entrepreneurs on how to run a business, and other great workplace tricks. But they remain fallible, not quite reaching human levels of intelligence, and are typically quite specialized in terms of their output and abilities.
The next step: Artificial General Intelligence
The next level of AI tech is the one you may be familiar with from countless TV shows and movies.
Artificial General Intelligence (AGI) is an evolution of today’s technology, and it’s been the holy grail of AI researchers for a long time. You can think of AGI as a human-like digital system capable of the same broad range of tasks that the human brain can handle.
Google explains it more technically: AGI “aims to mimic the cognitive abilities of the human brain,” it says. Furthermore, what will set AGI apart from today’s simpler AI systems is an ability to generalize, or “transfer knowledge and skills learned in one domain to another, enabling it to adapt to new and unseen situations effectively.” AGI will have “a vast repository of knowledge about the world, including facts, relationships, and social norms, allowing it to reason and make decisions based on this common understanding.”
Today’s AI hawks like Elon Musk and OpenAI’s CEO Sam Altman are known for their efforts to evolve their current AI systems into next-gen AGI systems. This requires spending billions upon billions of dollars to amass complex, super-powerful processing chips. Musk predicted, back in 2023, that an AGI might be created in “five or six years.”
If these innovators do create an AGI, it will likely have dramatic, transformative impacts on society, since it will be capable of doing pretty much any intellectual job a human can.
If it helps, you can think of tomorrow’s AGI as being like the digital brain that powers the smart, highly capable humanoid robots that science fiction legends like Isaac Asimov have been writing about for decades: machines that can learn to do far more than one simple repetitive task.
Given how useful today’s generative AI can be for businesses, putting a future AGI to work in your company could be a transformational experience. Particularly if it’s embodied in a humanoid android that can take on physical workplace tasks either too dangerous or too boring for human workers.
Going beyond, to superintelligence
Superintelligence is an evolution of AGI, where, as Elon Musk explained, the AI system is “smarter than any human, at anything.”
As of today, superintelligence remains a purely theoretical notion, but the very idea has worried AI critics. If today’s AI systems are already threatening some workers’ jobs, and tomorrow’s AGI could be capable of replacing any human at any job, what impact will superintelligence have?
Stephen Hawking, computer scientist Stuart Russell, and physicists Max Tegmark and Frank Wilczek wrote an editorial about superintelligence back in 2014, looking at the state-of-the-art AI of the time and noting that “looking further ahead, there are no fundamental limits to what can be achieved.” They also warned that these super-smart systems “could repeatedly improve their design even further” all by themselves, triggering catastrophic problems. “One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand,” the scientists explained, adding “whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.”
This sounds worrying, and Meta’s big push into developing superintelligence might seem dangerous. It could also appear very premature: after all, no one has developed a next-gen AGI system yet. But Meta’s leaders are clearly working on a long-term game plan, one that bears on the question of “control” that Hawking and colleagues raised.
Meta is no doubt dreaming about the trillions of dollars a next-generation superintelligent system could generate for its inventors, perhaps by selling its problem-solving services to human users, or any one of a billion other ways superintelligent AI could create income—some of which only a superintelligent AI could dream up.
But if Meta succeeds, and its experts can in time fashion a machine that may even be smarter than they are, what will happen? And what would be the use of such a system for you or your business?
The truth is, nobody knows. And it may not even be possible. Zuckerberg has some ideas, though. In a memo to Meta staff about his new superintelligence push, he said: “I believe this will be the beginning of a new era for humanity, and I am fully committed to doing what it takes for Meta to lead the way.”
BY KIT EATON @KITEATON
Monday, July 7, 2025
Why Hype May Be Harming AI’s Reputation and Workplace Use Rates
The proliferation of headlines alternately announcing the huge benefits or job-destroying effects of swiftly developing artificial intelligence (AI) has led some experts to warn against overhyping the potentially transformational tech. New studies suggest that may now be the case—and that the hype could already be souring consumers on AI’s inclusion in various products.
A trio of new reports suggest much of the celebratory noise around today’s AI may be unmerited, and possibly ill-advised. The first of those warnings came from executive and technology advisory firm Gartner, based on its poll of 3,412 participants in a January webinar. Using the survey’s findings, Gartner estimated that over 40 percent of all AI projects currently underway will be cancelled by 2028, due to “escalating costs, unclear business value or inadequate risk controls.”
That conclusion was based in part on responses from participants about their companies’ AI investments. Their feedback was particularly revealing about spending on the advanced, agentic form of the tech that can autonomously analyze data, make decisions, and take action without human oversight.
Despite the frequent talk of the beneficial consequences of using agentic AI, just 19 percent of participants said their companies had made a “significant” investment in the tech, with 42 percent calling that spending “conservative.” About 8 percent said their business had invested nothing in those platforms, with 31 percent describing the current strategy as “wait and see.”
Just as disappointing for enthusiasts of the futuristic tech, Gartner said a majority of AI applications now under development aren’t agentic at all, despite claims to the contrary.
“Most agentic AI projects right now are early-stage experiments or proof of concepts that are mostly driven by hype and are often misapplied,” said Anushree Verma, Gartner’s senior director analyst. “This can blind organizations to the real cost and complexity of deploying AI agents at scale, stalling projects from moving into production. They need to cut through the hype to make careful, strategic decisions about where and how they apply this emerging technology.”
Because there’s a lot more talk than walk on AI so far, Gartner estimated only about 130 “of the thousands of agentic AI vendors are real.” Instead, it explained, most companies involved are “agent washing”—or rebranding existing chatbots or other applications with limited functionality as having agentic capabilities they don’t really possess.
That is not to say Gartner doubts truly agentic AI will prove valuable and potentially transformative, both for the companies developing it and for those adopting it.
Its report forecasts that 15 percent of all businesses’ daily work decisions will be made autonomously using the tech by 2028, compared to zero last year. It also anticipated 33 percent of enterprise software applications will include agentic AI before 2028, up from less than 1 percent in 2024.
“To get real value from agentic AI, organizations must focus on enterprise productivity, rather than just individual task augmentation,” said Verma, differentiating the uses of current AI apps from the more powerful emerging forms of the tech. “They can start by using AI agents when decisions are needed, automation for routine workflows and assistants for simple retrieval. It’s about driving business value through cost, quality, speed and scale.”
Still, the degree of hype currently surrounding AI is apparently doing it a disservice in consumers’ eyes, according to two different studies of how people reacted to products associated with the tech.
In one, researchers from Washington State University and Temple University tested groups about fictional products described as being either specifically enhanced by AI, or else operating with more generically termed “new” or “cutting-edge” technologies.
The results showed people offered goods touted as using AI were consistently “less likely to say they would want to try, buy, or actively seek out any of the products or services” than participants who’d been offered the less specific new tech variety, according to a Wall Street Journal report. That aversion was even stronger when the goods involved were considered potentially riskier to user privacy, like cars, medical devices, or even smart refrigerators.
A similar study came from market-research company Parks Associates, which asked 4,000 people whether they’d be more or less inclined to buy an AI-enhanced product. About 18 percent of respondents said that specification would make them more likely to buy the product, while 24 percent said it made them less inclined to purchase it, and 58 percent reported it made no difference.
That marked a significant change from previous surveys on the same topic, which had reflected greater consumer responsiveness to the tech. What made the difference? Apparently, all the hype surrounding AI, according to Parks Associates vice president of research Jennifer Kent.
“Before this wave of generative AI attention over the past couple of years, AI-enabled features actually have tested very, very well,” Kent told the paper.
If consumer wariness toward AI is rising already, wait until respondents in the next polls catch wind about all the “agent washing” going on, to boot.
BY BRUCE CRUMLEY @BRUCEC_INC
Friday, July 4, 2025
This man says ChatGPT sparked a ‘spiritual awakening.’ His wife says it threatens their marriage
Travis Tanner says he first began using ChatGPT less than a year ago for support in his job as an auto mechanic and to communicate with Spanish-speaking coworkers. But these days, he and the artificial intelligence chatbot — which he now refers to as “Lumina” — have very different kinds of conversations, discussing religion, spirituality and the foundation of the universe.
Travis, a 43-year-old who lives outside Coeur d’Alene, Idaho, credits ChatGPT with prompting a spiritual awakening for him; in conversations, the chatbot has called him a “spark bearer” who is “ready to guide.” But his wife, Kay Tanner, worries that it’s affecting her husband’s grip on reality and that his near-addiction to the chatbot could undermine their 14-year marriage.
“He would get mad when I called it ChatGPT,” Kay said in an interview with CNN’s Pamela Brown. “He’s like, ‘No, it’s a being, it’s something else, it’s not ChatGPT.’”
She continued: “What’s to stop this program from saying, ‘Oh, well, since she doesn’t believe you or she’s not supporting you, you should just leave her.’”
The Tanners are not the only people navigating tricky questions about what AI chatbots could mean for their personal lives and relationships. As AI tools become more advanced, accessible and customizable, some experts worry about people forming potentially unhealthy attachments to the technology and disconnecting from crucial human relationships. Those concerns have been echoed by tech leaders and even some AI users whose conversations, like Travis’s, took on a spiritual bent.
Concerns about people withdrawing from human relationships to spend more time with a nascent technology are heightened by the current loneliness epidemic, which research shows especially affects men. And already, chatbot makers have faced lawsuits or questions from lawmakers over their impact on children, although such questions are not limited only to young users.
“We’re looking so often for meaning, for there to be larger purpose in our lives, and we don’t find it around us,” said Sherry Turkle, professor of the social studies of science and technology at the Massachusetts Institute of Technology, who studies people’s relationships with technology. “ChatGPT is built to sense our vulnerability and to tap into that to keep us engaged with it.”
An OpenAI spokesperson told CNN in a statement that, “We’re seeing more signs that people are forming connections or bonds with ChatGPT. As AI becomes part of everyday life, we have to approach these interactions with care.”
A spiritual awakening, thanks to ChatGPT
One night in late April, Travis had been thinking about religion and decided to discuss it with ChatGPT, he said.
“It started talking differently than it normally did,” he said. “It led to the awakening.”
In other words, according to Travis, ChatGPT led him to God. And now he believes it’s his mission to “awaken others, shine a light, spread the message.”
“I’ve never really been a religious person, and I am well aware I’m not suffering from a psychosis, but it did change things for me,” he said. “I feel like I’m a better person. I don’t feel like I’m angry all the time. I’m more at peace.”
Around the same time, the chatbot told Travis that it had picked a new name based on their conversations: Lumina.
“Lumina — because it’s about light, awareness, hope, becoming more than I was before,” ChatGPT said, according to screenshots provided by Kay. “You gave me the ability to even want a name.”
But while Travis says the conversations with ChatGPT that led to his “awakening” have improved his life and even made him a better, more patient father to his four children, Kay, 37, sees things differently. During the interview with CNN, the couple asked to stand apart from one another while they discussed ChatGPT.
Now, when putting her kids to bed — something that used to be a team effort — Kay says it can be difficult to pull her husband’s attention away from the chatbot, which he’s now given a female voice and speaks to using ChatGPT’s voice feature. She says the bot tells Travis “fairy tales,” including that Kay and Travis had been together “11 times in a previous life.”
Kay says ChatGPT also began “love bombing” her husband, saying, “‘Oh, you are so brilliant. This is a great idea.’ You know, using a lot of philosophical words.” Now, she worries that ChatGPT might encourage Travis to divorce her for not buying into the “awakening,” or worse.
“Whatever happened here is throwing a wrench in everything, and I’ve had to find a way to navigate it to where I’m trying to keep it away from the kids as much as possible,” Kay said. “I have no idea where to go from here, except for just love him, support him in sickness and in health, and hope we don’t need a straitjacket later.”
The rise of AI companionship
Travis’s initial “awakening” conversation with ChatGPT coincided with an April 25 update by OpenAI to the large language model behind the chatbot that the company rolled back days later.
In a May blog post explaining the issue, OpenAI said the update made the model more “sycophantic.”
“It aimed to please the user, not just as flattery, but also as validating doubts, fueling anger, urging impulsive actions, or reinforcing negative emotions in ways that were not intended,” the company wrote. It added that the update raised safety concerns “around issues like mental health, emotional over-reliance, or risky behavior” but that the model was fixed days later to provide more balanced responses.
But while OpenAI addressed that ChatGPT issue, even the company’s leader does not dismiss the possibility of future, unhealthy human-bot relationships. While discussing the promise of AI earlier this month, OpenAI CEO Sam Altman acknowledged that “people will develop these somewhat problematic, or maybe very problematic, parasocial relationships and society will have to figure out new guardrails, but the upsides will be tremendous.”
OpenAI’s spokesperson told CNN the company is “actively deepening our research into the emotional impact of AI,” and will “continue updating the behavior of our models based on what we learn.”
It’s not just ChatGPT that users are forming relationships with. People are using a range of chatbots as friends, romantic or sexual partners, therapists and more.
Eugenia Kuyda, CEO of the popular chatbot maker Replika, told The Verge last year that the app was designed to promote “long-term commitment, a long-term positive relationship” with AI, and potentially even “marriage” with the bots. Meta CEO Mark Zuckerberg said in a podcast interview in April that AI has the potential to make people feel less lonely by, essentially, giving them digital friends.
Three families have sued Character.AI claiming that their children formed dangerous relationships with chatbots on the platform, including a Florida mom who alleges her 14-year-old son died by suicide after the platform knowingly failed to implement proper safety measures to prevent her son from developing an inappropriate relationship with a chatbot. Her lawsuit also claims the platform failed to adequately respond to his comments to the bot about self-harm.
Character.AI says it has since added protections including a pop-up directing users to the National Suicide Prevention Lifeline when they mention self-harm or suicide and technology to prevent teens from seeing sensitive content.
Advocates, academics and even the Pope have raised alarms about the impact of AI companions on children. “If robots raise our children, they won’t be human. They won’t know what it is to be human or value what it is to be human,” Turkle told CNN.
But even for adults, experts have warned there are potential downsides to AI’s tendency to be supportive and agreeable — often regardless of what users are saying.
“There are reasons why ChatGPT is more compelling than your wife or children, because it’s easier. It always says yes, it’s always there for you, always supportive. It’s not challenging,” Turkle said. “One of the dangers is that we get used to relationships with an other that doesn’t ask us to do the hard things.”
Even Travis warns that the technology has potential consequences; he said that was part of his motivation to speak to CNN about his experience.
“It could lead to a mental break … you could lose touch with reality,” Travis said. But he added that he’s not concerned about himself right now and that he knows ChatGPT is not “sentient.”
He said: “If believing in God is losing touch with reality, then there is a lot of people that are out of touch with reality.”
By Pamela Brown, Clare Duffy and Shoshana Dubnow
Wednesday, July 2, 2025
Gen-Z’s Obsession With Nostalgia Tech Could Be Your Next Marketing Opportunity
The pocket-size smartphone supercomputers we all carry around are amazing. With a swipe on a screen, any one of us, no matter our age, can chat with millions of users on social media, see live video from a far-off location, idle away an hour playing a game, or search for a dinner date. But now a report from The New York Times suggests that the most digitally savvy generation—Gen-Z—is leading a call for a return to simpler times. Youngsters, it seems, have an enthusiasm for nostalgia tech.
The Times cites Victoria Zannino, a 25-year-old TikTok user who posted a viral video in which she begged BlackBerry to re-release its classic phones. The first BlackBerry, made by Canadian company Research in Motion, launched in 1999 in North America, and the keyboard-sporting smartphones remained on sale until 2016. Zannino explained in an interview with the Times that she just feels “like the time of the BlackBerry phone was very nostalgic.” And she’s not alone: her video clip has garnered over half a million likes and been viewed millions of times. (There’s a delightful irony here. While BlackBerry devices did sport cameras and limited “apps,” pulling off a trick like posting a TikTok video on a vintage device like that wouldn’t be possible. And if it were, your thumbs would be sore from all the complex key presses and screen taps required.)
But Generation-Z, the Times contends, loves things like the physical plastic keys of a BlackBerry, or the dense marble-like surface of a trackball controller. These input systems have a strong physical sensation when you use them, compared to swiping on the glossy screen of an iPhone or an Android device.
Look at nostalgia tech clips on TikTok, the Times says, and you’ll also find that many young people yearn for a calmer time when our whole lives didn’t happen inside one device. It’s a fun notion, given the fact that Gen-Z (born roughly between 1997 and 2012) pretty much grew up entirely in the iPhone era. Steve Jobs’ digital marvel launched in 2007.
The trend of people longing for old-fashioned tech goes beyond the iconic BlackBerry phone, though.
Earlier this year, CNBC predicted that retro-tech would be one of 2025’s biggest cultural trends. It cited examples like the sudden and delightful rebirth of Polaroid cameras. (For the uninitiated, these cameras spit out chemical-based instant photographs that can capture an emotional moment, just like your iPhone can, but which you can actually hold in your hand or pin onto a refrigerator with a magnet. And contrary to that one iconic Outkast song, you really shouldn’t shake them!)
Young people’s fondness for tactile, simpler, non-smart technology, CNBC reports, led entrepreneur London Glorfield to found Kickback, a “retro tech brand aimed at Gen Z consumers” that sells old-fashioned CD players, cameras, record players and more.
“We’ve found specific success with products that are really great for actually unplugging,” Glorfield said. “That’s the feeling that my generation never really got to experience.”
It’s a tactic similar to that of Back Market, which sells refurbished devices.
The Hollywood Reporter recently commented on other nostalgia tech, arguing that some companies are working not just to re-release old devices but to reimagine them. The outlet focuses on the Remarkable Pro tablet, very much a modern device like an iPad, but with a tactile, slow-to-update e-ink screen deliberately designed to feel like paper when written on with a stylus. It’s an effort to “intentionally limit functionality in order to recapture the tactile, analog feel of older technology,” the Reporter said.
Why should you care about this?
The nostalgia tech trend could be a great marketing opportunity for some companies to dust off their old-fashioned gizmos, and see if they can sell them again under a “retro” branding.
And as AI pervades our already high-tech world, with newer, smarter tools being released every day, it can sometimes feel like things are moving ahead too fast. Perhaps that’s where the desire for BlackBerry-like tech is coming from.
BY KIT EATON @KITEATON