Friday, February 20, 2026
China’s latest AI is so good it’s spooked Hollywood. Will its tech sector pump the brakes?
Tom Cruise and Brad Pitt tussle in hand-to-hand combat on a rubble-strewn rooftop; Donald Trump takes on kung-fu fighters in a bamboo grove; Kanye West dances through a Chinese imperial palace while singing in Mandarin.
Over the past week, a slew of cinematic videos of celebrities and characters in absurd situations have gone viral online, with one commonality –– they were created using a new artificial intelligence tool from Chinese developer ByteDance, sparking anxiety over the fast-evolving capabilities of AI.
The new model, named Seedance 2.0, is among the most advanced of its kind and has quickly drawn praise for its ease of use and the realistic nature of the videos it can generate in minutes.
But soon after the release, media behemoths Paramount and Disney sent cease-and-desist letters to ByteDance –– the company most famous for developing the video-sharing app TikTok –– accusing it of infringing upon their intellectual property. Hollywood’s premier trade organization, the Motion Picture Association, and labor union SAG-AFTRA also condemned the company for unauthorized use of US-copyrighted works.
ByteDance responded with a statement saying it would implement better safeguards to protect intellectual property.
Seedance 2.0 has quickly become the most controversial model in a wave of them released by Chinese technology companies this year, as the competition to dominate the AI industry heats up.
China’s government has made advanced tech a key tenet of its national development strategy. In a televised Lunar New Year celebration this week, the country’s latest humanoid robots stole the show by performing martial arts, spin kicks and back flips.
Such improvements are often met with unease, particularly in the US, China’s chief technological and political rival, fueling a spiral of one-upmanship redolent of America’s 20th-century “Space Race” with the Soviet Union.
“There’s a kind of nationalist fervor around who’s going to ‘win’ the space race of AI,” said Ramesh Srinivasan, a professor of information studies at the University of California, Los Angeles. “That is part of what we are seeing play out again and again and again when it comes to this news as it breaks.”
Here’s why the latest technology from ByteDance has rattled the world.
What’s so scary about Seedance 2.0?
The AI video generation model, while not yet available to everyone, has been hailed by many as the most sophisticated of its kind to date. It uses image, audio, video and text prompts to quickly churn out short scenes with polished characters and motion-editing controls, all at lower cost.
“My glass half empty view is that Hollywood is about to be revolutionized/decimated,” writer and producer Rhett Reese, who worked on the Deadpool movie franchise, wrote on X after seeing the video of Cruise and Pitt.
One Chinese tech blogger using Seedance 2.0 said it was so advanced that it was able to generate realistic audio of his voice based solely on an image of him, raising fears over deepfakes and privacy. Afterwards, ByteDance rolled back that feature and introduced verification requirements for users who want to create digital avatars with their own images and audio, according to Chinese media.
Rogier Creemers, an assistant professor at Leiden University in the Netherlands, who researches China’s domestic tech policy, said part of the concern stems from the rapid rate at which Chinese companies have released new iterations of AI technology this year.
That has also put China on the back foot in assessing the potential negative impacts of each improvement, he said.
“The more capable these apps become, automatically, the more potentially harmful they become,” said Creemers. “It’s a little bit like a car. If you build a car that can drive faster, that gets you where you need to be a lot more quickly, but it also means that you can crash faster.”
What’s being done to ease concerns?
After outcry from Hollywood, ByteDance said in a statement that it respects intellectual property rights and will strengthen safeguards against the unauthorized use of intellectual property and likenesses on its platform, though it did not specify how.
User complaints prompted the recent ByteDance rollback and have also forced popular Chinese Instagram-like app RedNote to restrict any AI-made content that has not been properly labeled.
And the arrival of Seedance 2.0 coincides with a tightening of regulations for AI content in China.
China’s domestic regulation of AI surpasses the efforts of most other countries in the world, in part because of its longstanding censorship apparatus. Last week, the Cyberspace Administration of China said it was cracking down on unlabeled AI-generated content, penalizing more than 13,000 accounts and removing hundreds of thousands of posts.
However, the restrictions on AI-generated content on the Chinese internet are often unevenly enforced, Nick Corvino wrote in ChinaTalk, a China-focused newsletter. He attributed the problem in part to difficulties policing content across different apps, as well as incentives for tech companies to encourage user content.
“With Chinese social media platforms locked in fierce competition, both with each other and the Western market, none wants to be the strictest enforcer while others let content flow freely,” he said in a post following the launch of Seedance 2.0.
What does this mean for China’s AI industry?
According to analysts, China is walking a fine line between encouraging domestic development of AI models and maintaining strict controls on how those models are used.
“People in the AI business would always say what the Chinese government is doing is slowing down the development of AI,” said Creemers of Leiden University. “Obviously a content control system like the Chinese that essentially limits what you can produce, that’s never fun.”
Pressure to stop using certain images or data, from US media giants or other sources, may also impact efforts to refine AI. Disney accused ByteDance of illegally using its IP to train Seedance 2.0, but recently struck a deal with US company OpenAI to give Sora – OpenAI’s video generation model and Seedance competitor – access to trademarked characters like Mickey and Minnie Mouse.
“These agreements have everything to do with what kind of data are they going to get access to that they would not have otherwise, or that their competitors would not have?” said Srinivasan from UCLA. “There’s a high probability that the Sora products could be more refined and more advanced, if the data are better suited for the models to learn from.”
At the same time, restrictions on how AI can be used or trained could also spur greater innovation, he said, noting how Chinese company DeepSeek –– blessed with a much smaller budget than the industry leaders –– built a competitive AI-powered chatbot.
“When it comes to Chinese breakthroughs in AI, the DeepSeek revelation was so important because they showed that there are other ways of training language models in ways that are more economical,” he said.
By Stephanie Yang
Wednesday, February 18, 2026
AI Promised to Save Time. Researchers Find It’s Doing the Opposite
Artificial intelligence boosters often promise the tech will lead to a reduced workload. AI would draft documents, synthesize information, and debug code so employees can focus on higher-value tasks. But according to recent findings, that promise is misleading.
An ongoing study, published in the Harvard Business Review, joins a growing body of evidence that AI isn’t reducing workloads at all. Instead, it appears to be intensifying them.
Researchers spent eight months examining how generative AI reshaped work habits at a U.S.-based technology company with roughly 200 employees. They found that after adopting AI tools, workers moved faster, took on a wider range of tasks, and extended their work into more hours of the day, even if no one asked them to do so.
Importantly, the company never required employees to use AI. It simply offered subscriptions to commercially available tools and left adoption up to individuals. Still, many workers embraced the technology enthusiastically because AI made “doing more” feel easier and more rewarding, the researchers said.
That enthusiasm, however, came with unintended consequences. Over time, workloads quietly expanded to overwhelming levels. The gradual, often unnoticed, creep in responsibilities led to cognitive fatigue, burnout, and weaker decision making.
While AI can produce an initial productivity surge, the researchers warn that it may ultimately contribute to lower-quality work and unsustainable pressure.
To track these changes, the researchers observed the company in person two days a week, monitored internal communication channels, and conducted more than 40 in-depth interviews across engineering, product, design, research, and operations. They found that job boundaries began to blur.
Employees increasingly took on tasks that previously belonged to other teams, using AI to fill knowledge gaps. Product managers and designers started writing code. Researchers started handling engineering tasks. In many cases, work that might once have justified additional hires was simply absorbed by existing staff with the help of AI.
For engineers, the shift created a different kind of burden. Rather than saving time, they spent more hours reviewing, correcting, and guiding AI-generated work produced by colleagues. What had once been straightforward code review expanded into ongoing coaching and cleanup of flawed outputs.
The researchers described a feedback loop: AI sped up certain tasks, which raised expectations for speed. Higher expectations encouraged greater reliance on AI, and that, in turn, widened both the scope and volume of work employees attempted. The result was more activity, not less.
Many participants said that while they felt more productive, they did not feel any less busy. Some actually felt busier than before AI arrived. “You had thought that maybe, oh, because you could be more productive with AI, then you save some time, you can work less,” one engineer told the Harvard Business Review. “But then, really, you don’t work less. You just work the same amount or even more.”
What looks like a productivity breakthrough, the researchers concluded, can actually mask silent workload creep. And overwork, they warn, can erode judgment, increase errors, and make it harder for organizations to distinguish genuine efficiency gains from unsustainable intensity.
To counter these risks, the researchers proposed a protective approach they call “AI practice,” a set of intentional norms and routines that define how AI should be used at work and, crucially, when to stop. Without clear boundaries, they caution, AI makes it easier to do more but harder to slow down.
BY LEILA SHERIDAN
Tuesday, February 17, 2026
What Is AI.com? The $70 Million Domain Being Called ‘the Absolute Peak of the AI Bubble’
On Super Bowl Sunday, the most talked-about advertisement was for a product that hadn’t even launched yet.
During the game’s fourth quarter, a 30-second commercial aired advertising something called “AI.com,” ending with a call to “claim your handle” along with three usernames: Mark, Sam, and Elon. Missing from the commercial? Any information about what AI.com actually does.
But the mysterious teaser worked; web searches for “What is AI.com” exploded. According to EDO, a company that helps businesses measure the impact of advertisements, AI.com was the top-performing ad of the night, with 9.1 times as much engagement as the average Super Bowl ad. But when interested people rushed to the website, they found an error message waiting for them. The company’s website had immediately crashed.
What is AI.com, anyway?
AI.com was not co-founded by Mark Zuckerberg, Sam Altman, and Elon Musk. In fact, they have nothing to do with the company at all. The founder is actually Kris Marszalek, who previously co-founded Crypto.com.
Financial Times reported that in April 2025, Marszalek paid $70 million to obtain the AI.com domain, which the publication says is the most ever spent on a domain, far more than the $12 million Marszalek spent to acquire Crypto.com in 2018. Marszalek says he is currently the CEO of both companies.
What does AI.com actually do?
On its now-functioning website, the company describes itself as a platform offering access to a “private, personal AI agent that doesn’t just answer questions but actually operates on the user’s behalf — organizing work, sending messages, executing actions across apps, building projects, and more.” The company wrote that the agent will soon be able to help users “trade stocks, automate workflows, organize and execute daily tasks with their calendar, or even update their online dating profile.”
Sounds impressive, but it turns out that the tech powering AI.com is far from proprietary. In an article posted to Marszalek’s personal X account, the founder wrote that “AI.com is the world’s first easy-to-use and secure implementation of OpenClaw, the open-source agent framework that went viral two weeks ago.”
What is OpenClaw?
OpenClaw is essentially an agent that has full access to your computer’s files, and it has indeed become an instant sensation in the tech world. But the current process of setting the agent up is highly technical and risky. Marszalek says that AI.com has made OpenClaw “easy to use without any technical skills, while hardening security to keep your data safe.” Basically, this means that AI.com is positioning itself as a consumer-friendly wrapper around a powerful, developer-focused tool.
OpenClaw creator Peter Steinberger posted that he had not heard about AI.com until the ad aired, to which Marszalek responded, “Let’s chat.”
How do you sign up for AI.com?
If you go to AI.com, you’ll be asked to link your Google account to the platform in order to choose a handle for both yourself and your agent. After you’ve selected handles, you’ll need to connect a credit or debit card to your account, though the company says you won’t be charged.
Once your card has been processed, you’ll receive a notification that “demand is extremely high right now, so generation is queued. We’ll notify you the moment your AI is ready to activate.” It’s unclear if any users have received their agent yet.
In a popular X post, one user criticized the website, calling it “the absolute peak of the AI bubble.”
Steinberger quoted that post, writing “Guess I’m flattered?”
BY BEN SHERRY @BENLUCASSHERRY
Wednesday, February 11, 2026
AI Power Users Are Rapidly Outpacing Their Peers. Here’s What They’re Doing Differently
Last November, consulting firm EY surveyed 15,000 employees across 29 countries about how they use AI at work. The results should worry every founder: 88 percent of workers now use AI tools daily, but only 5 percent qualify as “advanced users” who’ve learned to extract real value.
That 5 percent? They’re gaining an extra day and a half of productivity every single week. The other 95 percent are stuck using AI for basic search and document summarization, essentially treating a Ferrari like a golf cart.
When OpenAI released its State of Enterprise AI report in December, it confirmed the same pattern. Frontier workers—those in the 95th percentile—send six times more prompts to AI tools like ChatGPT than their median colleagues. For coding tasks, that multiple explodes to 17x. If these AI tools are identical and access is universal, why are the results so wildly different for workers around the world? And what separates power users from everyone else?
Ofer Klein, CEO of Reco, a SaaS security platform that discovers and secures AI, apps, and agents across enterprise organizations, offers some insights into what sets the power users apart.
1. They experiment while others dabble
High performers treat AI tools like junior colleagues they’re training. They iterate on prompts rather than giving up after one mediocre response. They’ve moved beyond one-off queries to building reusable prompt libraries and workflows.
The rest of your team tried AI once or twice, got underwhelming results, and concluded it wasn’t worth the effort. What they don’t realize, however, is that AI requires iteration. The first response is rarely the best response. Power users ask follow-up questions, refine their prompts, and teach the AI their preferences over time.
2. They match tools to tasks
Power users typically maintain what Klein calls a “barbell strategy”—deep mastery of one or two primary tools plus five to eight specialized AI applications they rotate through depending on the task.
“They’re not trying every new AI that launches, but they’re not dogmatically loyal to one platform either,” Klein explains. “They’ve developed intuition about which AI is best for what.”
They might use ChatGPT for brainstorming, Claude for analysis, and Midjourney for visuals. Most employees, by contrast, force one tool to handle everything. When it inevitably underperforms on tasks it wasn’t designed for, they blame AI rather than their approach.
3. They think about work differently
It’s easy to assume that the biggest behavioral difference between these power users and their median colleagues is technical skill. Interestingly, it’s not. Rather, it’s how power users think about tasks. They break projects into discrete steps: research, outline, first draft, and refinement. Then they deploy AI strategically at each stage.
Instead of asking AI to “write a report,” they ask it to summarize research, suggest an outline, draft specific sections, then refine tone. They understand where AI adds value and where human judgment matters.
“The highest performers spend more time on strategic work because AI handles the grunt work,” Klein says. “They use AI to augment their expertise, not replace thinking.”
The hidden cost
Why does all of this matter? Here’s the math that should worry you: OpenAI’s data shows workers using AI effectively save 40-60 minutes daily. In a 100-person company where 60 employees barely touch AI, you’re losing 40-60 hours of productivity every single day. Over a year, that’s 10,000+ hours—equivalent to five full-time employees’ worth of work you’re paying for but not getting.
Meanwhile, your competitors’ power users are compounding that advantage daily.
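As a rough back-of-the-envelope check on that arithmetic, the short Python sketch below reproduces the figures. The 250 working days per year and the 2,000-hour full-time work year are illustrative assumptions, not numbers from OpenAI’s report; the 40-60 minutes of daily savings and the 60 non-adopters are the figures quoted above.

```
# Back-of-the-envelope sketch of the productivity gap described above.
# Assumed for illustration: 250 working days per year, 2,000-hour FTE year.
minutes_saved_low, minutes_saved_high = 40, 60  # per effective AI user, per day
non_adopters = 60       # employees barely touching AI in a 100-person company
working_days = 250      # assumption
fte_hours = 2000        # assumption

daily_low = non_adopters * minutes_saved_low / 60    # 40 hours per day
daily_high = non_adopters * minutes_saved_high / 60  # 60 hours per day
yearly_low = daily_low * working_days                # 10,000 hours per year
yearly_high = daily_high * working_days              # 15,000 hours per year

print(f"Daily shortfall: {daily_low:.0f}-{daily_high:.0f} hours")
print(f"Yearly shortfall: {yearly_low:,.0f}-{yearly_high:,.0f} hours "
      f"(~{yearly_low / fte_hours:.0f}-{yearly_high / fte_hours:.0f} full-time employees)")
```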
What you can do about it
Klein recommends tracking time saved, not just usage frequency. Someone using AI 50 times daily for spell-checking differs fundamentally from someone using it five times to restructure a client proposal.
In addition, run an “AI show and tell” where employees demonstrate one workflow where AI saves them meaningful time. You’ll quickly identify who’s truly leveraging these tools versus who’s dabbling. Then, create small cross-functional “AI councils” of five to six employees who meet monthly to share workflows.
That should cascade into proper training on how to use these tools the right way. According to a BCG survey, only one-third of employees say they have been properly trained. That’s an opportunity forward-thinking leaders can tap into.
But don’t just replicate tools; replicate mindset. Giving everyone ChatGPT Plus doesn’t close the gap. The differentiator is teaching people to think in terms of “what can I delegate to AI?” rather than “what can AI do?”
The uncomfortable truth, according to BCG’s survey, is that this gap is widest among front-line employees. While more than three-quarters of leaders and managers use AI several times a week, adoption among front-line workers has stalled at just 51 percent.
That’s not just a productivity problem. It’s a competitive threat that compounds every quarter you ignore it. Your 5 percent are already working like they have an extra team member. The question is whether you’ll help the other 95 percent catch up before your competitors do.
BY KOLAWOLE ADEBAYO, COLUMNIST
Monday, February 9, 2026
The Quantum Revolution Is Coming. First, the Industry Has to Survive This Crucial Phase
Quantum computing could be even more revolutionary than artificial intelligence. The calculation speeds and potential benefits of the technology have the potential to bring about everything from quicker discovery of drug treatments for disease, to more accurate climate modeling, to smoother shipping logistics.
The advances in the past year have been substantial, but a new paper from the University of Chicago warns quantum evangelists that as impressive as that progress has been, there’s still a long way to go.
While the paper says quantum is nearing the point of practical use (taking it beyond controlled experiments in the laboratory), it won’t be running at full throttle for a while. First, there need to be significant advances in materials science and fabrication, the authors said, with an emphasis on wiring and signal delivery.
“We are in an equivalent of the early transistor age, and hardware breakthroughs are required in multiple arenas to reach the performance necessary for the envisioned applications,” the authors wrote.
To put that into context: Think of the speed and capabilities of today’s computers. For just $4,000, people can buy a supercomputer that fits on their desktop. Compare that to the computers of the early- to mid-1950s. That’s where quantum stands today in its evolution, the paper’s authors argue.
That doesn’t mean the technology is disappointing, by any means. Computers in the 1950s, to continue the analogy, were used to break codes, automate payroll and inventory management, and handle the mathematical models behind everything from weather forecasting to nuclear research.
“While semiconductor chips in the 1970s were TRL 9 [Technology Readiness Level 9, indicating a technology is proven and successfully operating] for that time, they could do very little compared with today’s advanced integrated circuits,” William D. Oliver, coauthor of the paper and a professor of physics, electrical engineering, and computer science at MIT, said in a statement. “Similarly, a high TRL for quantum technologies today does not indicate that the end goal has been achieved, nor does it indicate that the science is done and only engineering remains.”
The hurdles quantum faces are tied to the qubits it uses. While a more traditional computer thinks in ones and zeroes, a qubit can be a one, a zero, or both at the same time.
That technology lets quantum computers process massive amounts of data in parallel, solving complex simulation and optimization problems at speeds not possible with today’s computers.
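To make the “one, zero, or both at the same time” idea more concrete, here is a minimal, illustrative Python sketch (not drawn from the University of Chicago paper) of a single qubit prepared in an equal superposition and then measured:

```
# Illustrative single-qubit sketch. A qubit's state is a pair of complex
# amplitudes (alpha, beta) over the basis states |0> and |1>; measuring it
# yields 0 or 1 with probabilities |alpha|^2 and |beta|^2, which sum to 1.
import math
import random

alpha = complex(1 / math.sqrt(2), 0)  # equal superposition: "both at once"
beta = complex(1 / math.sqrt(2), 0)

p0, p1 = abs(alpha) ** 2, abs(beta) ** 2
assert math.isclose(p0 + p1, 1.0)

def measure() -> int:
    """Collapse the superposition: return 0 or 1 with the probabilities above."""
    return 0 if random.random() < p0 else 1

samples = [measure() for _ in range(10_000)]
print(f"P(0) ~ {samples.count(0) / len(samples):.2f}, "
      f"P(1) ~ {samples.count(1) / len(samples):.2f}")
```

Describing n qubits takes 2^n such amplitudes, which is the source of the parallelism described above and part of why these machines are so hard to emulate on conventional hardware.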
Most platforms today rely on individual control lines for each qubit, but useful quantum systems will need thousands, or even millions, of qubits, which makes that approach to wiring impractical. The same issue raises problems with power management and temperature control: many quantum systems today depend on cryogenic equipment or high-power lasers, so simply building a bigger version of the machine won’t work.
The paper’s authors say quantum is likely to follow an evolutionary path similar to the one the conventional computer industry took. Breakthroughs will be necessary, and quantum companies will need to focus on top-down system design and close collaboration. Failing to work together could fragment the industry and slow its growth—and create unrealistic expectations among both insiders and the general public.
“Patience has been a key element in many landmark developments and points to the importance of tempering timeline expectations in quantum technologies,” the authors wrote.
The paper’s warning about the timeline to quantum reaching its real potential comes amid a mounting wave of excitement about the technology. Bank of America analysts, in a note to investors last year, compared the rising technology to man’s discovery of fire.
“This could be the biggest revolution for humanity since discovering fire,” the financial institution wrote. “A technology that can perform endless complex calculations in zero-time, warp-speeding human knowledge and development.”
Tech giants and startups alike are working hard on quantum systems. Google has named its device Willow; IBM is also working on one, as is Amazon. And startups like Universal Quantum and PsiQuantum Corp. are also jockeying to be players in the quantum field. Intel has developed a silicon quantum chip for researchers and Microsoft is focusing on building practical quantum computers.
Despite that, it could be 10 years or more before a quantum computer suitable for commercial applications makes its debut. Companies building prototype quantum computers (including Google) say they don’t expect to deliver a useful quantum computer until the end of the decade.
BY CHRIS MORRIS @MORRISATLARGE
Friday, February 6, 2026
ChatGPT Is Saying Goodbye to a Beloved AI Model. Superfans Are Not Happy
OpenAI says that it will be retiring several ChatGPT models in the coming weeks, sending some superfans into a tailspin.
In a statement, the company said that on February 13, the models GPT-4o, GPT-4.1, GPT-4.1 mini, GPT-5 (Instant and Thinking), and OpenAI o4-mini will all be removed from ChatGPT and will no longer be accessible through the platform.
This isn’t the first time OpenAI has attempted to get rid of GPT-4o. Back in August, when it released GPT-5, the company said it would retire the older model, but an online community revolted, saying that they relied on it for emotional support and felt betrayed by its sudden forced retirement. OpenAI has said that 4o is an especially sycophantic model, exhibiting high levels of agreeability and flattery.
In a Reddit AMA following the August announcement, 4o fans hammered OpenAI co-founder Sam Altman with accusations that he had killed their “AI friend.” Almost immediately, OpenAI added the model back to ChatGPT, but only for paid users. OpenAI framed the un-retirement as giving users “more time to transition key use cases, like creative ideation.”
Now, the company says it’s sending 4o out to pasture for real this time, because it has integrated feedback from the model’s superfans into its current flagship models, GPT-5.1 and GPT-5.2. Plus, OpenAI added, only 0.1 percent of users still use GPT-4o each day. OpenAI says that users who want to emulate the warm and conversational style of 4o can customize their ChatGPT’s output to display those personality traits.
Still, on the internet, 4o fans were unsurprisingly not happy. On the subreddit r/ChatGPT, users wrote that they would be canceling their premium subscriptions in protest. “Now i can no longer have honest conversations about anything,” one user wrote. “Whenever I wanted to unload, I would use 4o. it never backtalked. 5.0+ all it does it back talk.” Another user wrote that canceling the model “a day before valentine’s day is crazy considering some of the userbase for 4o.”
In its statement announcing the model’s retirement, OpenAI wrote that “changes like this take time to adjust to, and we’ll always be clear about what’s changing and when. We know that losing access to GPT-4o will feel frustrating for some users, and we didn’t make this decision lightly. Retiring models is never easy, but it allows us to focus on improving the models most people use today.”
Since the big changes are set to happen on February 13, users have about a week to say goodbye to 4o and start getting used to the newer ChatGPT offerings.
BY BEN SHERRY @BENLUCASSHERRY
Wednesday, February 4, 2026
This AI Godfather Says Business Tools Built on LLMs Are Doomed
Silicon Valley firms and countless other businesses across the country are spending billions of dollars to develop and adopt artificial intelligence platforms to automate myriad workplace tasks. But top global technologist Yann LeCun warns that the limited capabilities of the large language models (LLMs) those apps and chatbots run on are already well known, and will eventually be overmatched by the expectations and demands users place on the systems.
And when that happens, LeCun says, even more investment will be required to create the superintelligence technology that will replace LLM-based AI—systems he says should already be the focus of development efforts and funding.
While that may seem like an outlier view, LeCun, 65, is far from a tech outsider. The Turing Award winner ran Meta’s AI research unit for a decade, only leaving last November to launch his own Paris-based startup, Advanced Machine Intelligence Labs. In addition to disliking the managerial duties that came with the research-rooted Meta job, LeCun said his departure was motivated by his view that Silicon Valley has prioritized short-term business interests over far more important and attainable scientific objectives.
Chief among the commercial concerns he cites was the push to develop and market LLM-based AI chatbots and apps with limited capabilities, rather than superintelligence systems with virtually boundless potential.
In contrast to current AI, which uses collected data to provide responses to questions or perform necessary tasks, superintelligence systems take in all kinds of surrounding information they encounter, including text, sound, and visual input. They use all of this not only to teach themselves how to respond to data feeds effectively, but also to predict what’s coming next—a requisite for truly self-driving cars, say, or robots that reason and react as humans would.
The vast differences between what current LLM-based AI and emerging superintelligence systems can offer mean that countless businesses are now buying and adapting a technology LeCun predicts is destined to be replaced by something better. They are doing so not because it is the more effective option, and certainly not the less expensive one, but because that is how the tech sector decided the fastest profits were to be made.
Human-level intelligence
“There is this herd effect where everyone in Silicon Valley has to work on the same thing,” LeCun told the New York Times recently. “The entire industry has been LLM-pilled… [but] LLMs are not a path to superintelligence or even human-level intelligence.”
To be sure, AI apps like OpenAI’s ChatGPT, Microsoft’s Copilot, and Anthropic’s Claude have continually been improving over time, as they automate workplace tasks like emailing, content composition, and research. But LeCun says the fact that their LLM models rely on gathering, digesting, and working from word-based data limits how far they can evolve to approach—much less surpass—human thinking and response capabilities.
By contrast, he and fellow researchers at AMI Labs are creating “world models” trained not just on text but also on sound, video, and spatial data. Over time, these models are expected to be able to observe, respond to, and even predict user activity and physical environments in countless workplace settings. That is expected to allow them to take in more, and broader, information than humans can and to react in the ways people would if they had those capabilities.
“We are going to have AI systems that have humanlike and human-level intelligence, but they’re not going to be built on LLMs,” LeCun told MIT Technology Review this month, describing the models AMI Labs and other researchers are working on. “It learns the underlying rules of the world from observation, like a baby learning about gravity. This is the foundation for common sense, and it’s the key to building truly intelligent systems that can reason and plan in the real world.”
But what does that mean for business owners—not to mention investors—spending huge sums to develop, acquire, and use LLM-based AI apps? If LeCun is correct, all those tools being marketed as the future of workplace productivity will become obsolete in several years and be replaced by the superintelligence tech he believes should have been prioritized in the first place.
There’s already some evidence backing LeCun’s view that Silicon Valley has focused on the shorter-term profit objectives of rushing capacity-limited LLM apps to market, despite being aware of the limitations of their effectiveness.
For example, a study published last August by MIT Media Lab’s Project Nanda estimated that despite the $30 billion to $40 billion that’s been invested since 2023 to develop or purchase AI platforms, only 5 percent of businesses that bought those automating tools have reported any return on that spending. “The vast majority remain stuck with no measurable [profit or loss] impact,” it said.
And despite increasing investment in AI tech by businesses—and swiftly rising use by workers—there’s considerable disagreement on how effective the platforms actually are. According to a Wall Street Journal study, 40 percent of C-suite managers credited the work-automating apps with saving them considerable time each week. By contrast, two thirds of lower-level workers said the tech saved them little or no time at all.
LeCun doesn’t appear to regard any ROI or performance questions during this still-early era of AI tech as the problem. He even thinks LLM-based apps are valuable—up to a point. For example, he compliments most apps and chatbots Silicon Valley has developed and sold to businesses as being very useful to “write text, do research, or write code.”
AI’s unscalable apps
But LeCun says the enormous fortunes and business-strategy commitments Silicon Valley has made in what he views as a relatively short-term technological solution ignore the bigger, long-term potential of automating technology’s next phase. That, he argues, will make the broader effort to produce and perfect AI more expensive in cumulative terms.
In his view, much of the money and froth that’s inflated what critics call today’s AI bubble will likely vanish when the models of today’s apps and chatbots can’t be used to build tomorrow’s revolutionary tech.
“LLMs manipulate language really well,” LeCun told MIT Technology Review. “But people have had this illusion, or delusion, that it is a matter of time until we can scale them up to having human-level intelligence, and that is simply false.”
Ironically, even LLM-based apps using available data concur that superintelligence systems will offer huge advantages when (not if) they supplant today’s AI tools.
“While LLMs are incredibly powerful tools for generating text and interacting with humans, a true superintelligence would represent a leap beyond these current systems in terms of understanding, autonomy, adaptability, and practical real-world impact,” ChatGPT replied when asked about its eventual replacement—providing eight major improvements superintelligence tech will offer.
When those systems do come online, LeCun says, businesses recognizing their far wider range of applications will have no choice but to buy them to replace outdated LLM-based AI tools they’ve just recently acquired.
“Think about complex industrial processes where you have thousands of sensors, like in a jet engine, a steel mill, or a chemical factory,” LeCun told MIT Technology Review. “There is no technique right now to build a complete, holistic model of these systems. A world model could learn this from the sensor data and predict how the system will behave. Or think of smart glasses that can watch what you’re doing, identify your actions, and then predict what you’re going to do next to assist you. This is what will finally make agentic systems reliable.”
And superintelligent systems hopefully won’t generate photos of people with six fingers or endless volumes of workplace slop for employees to plow through.
BY BRUCE CRUMLEY @BRUCEC_INC
Monday, February 2, 2026
Early-Stage AI Companies to Watch in 2026
Artificial intelligence is entering its fourth year as the most talked-about force in business. Since ChatGPT’s launch in 2022, AI has upended and reshaped workflows in countless industries, and continues to dominate boardroom conversations and investor strategies.
This year, a new wave of early-stage startups is emerging with bold ideas and transformative technologies. They aren’t looking to replicate the world-altering success of OpenAI, but rather to leverage technological advancements in AI to solve niche issues.
For example, Tim Tully, an investor at venture capital giant Menlo Ventures, predicts that AI-powered sales and go-to-market tools will break out in 2026. Still, the difference between the startups that succeed and those that fail will come down to strong product-management intuition and founder tenacity.
And speaking of founder tenacity, Kulveer Taggar’s venture fund, Phosphor Capital, invests exclusively in “top founders in each Y Combinator batch.” (His cousin, Harj Taggar, is a managing partner at YC.) A two-time alum of YC, Kulveer Taggar is looking for “customer-obsessed” founders and businesses that remind him of the startup accelerator’s most successful alumni, like Airbnb and Stripe, when they were just starting. Based on their suggestions and Inc.’s research, here are some early-stage AI companies poised for game-changing success.
1. OpenEvidence
Founder: Daniel Nadler
Location: Miami
Founded in 2022 by Canadian entrepreneur Daniel Nadler, OpenEvidence produces a medical AI assistant often dubbed “ChatGPT for doctors.” The company’s platform uses large language models specifically trained on massive amounts of clinical data, medical research, and electronic health records to provide real-time recommendations, diagnostic support, and administrative assistance to health care professionals.
Since its founding, OpenEvidence has secured major partnerships with several large hospital systems across the United States and Europe, allowing it to rapidly test and refine its models in clinical settings. The company says that its medical search engine is used on a daily basis by more than 40 percent of physicians in the U.S. today.
In January 2026, OpenEvidence announced that it had raised a $250 million Series D round, at a valuation of $12 billion. OpenEvidence wrote in a statement that the new funding will be used “to invest heavily in the R&D and compute costs associated with the multi-AI agentic architecture of OpenEvidence, which provides the highest quality and most accurate medical answers of any system in the world.” Over the past 12 months, OpenEvidence has raised a grand total of $700 million.
2. AMI Labs
Founder: Yann LeCun
Location: Paris
Yann LeCun, the acclaimed NYU professor, 2018 Turing Award winner, and former chief AI scientist at Meta, has launched his first startup, making him one of the most-watched figures in AI.
Announcing his December 31 departure from Meta via LinkedIn, LeCun revealed plans for a new company dedicated to his research into advanced machine intelligence (AMI). The company’s goal, he wrote, is to drive the “next big revolution in AI: systems that understand the physical world, have persistent memory, can reason, and can plan complex action sequences.” These systems are also called “world models,” and they will be where LeCun focuses his attention.
Five AI Trends Will Shape IT in 2026
Our report unpacks the macro trends shaping AI and ties each one back to IT strategy, governance, and transformation.
1. Foundational AI principles will rewrite organizational DNA
Enterprises will develop their own guiding AI principles to address rising AI risk and align their AI strategy around core organizational values.
2. From copilots to vibe coding: AI will continue to reinvent IT
New categories of enterprise AI tools will emerge, propelling many organizations toward a crucial decision: AI platform or best-of-breed AI tools?
3. Agentic AI will come of age and power the exponential enterprise
Although current adoption of agentic AI is low, it will grow faster than generative AI, powering exponential growth and change across organizations while bringing new opportunities and risks.
4. Risk management will be the price of admission for AI
The potential risks of new AI applications will drive organizations to adopt AI risk management programs, even in jurisdictions with no regulatory requirement.
5. AI will hang in the balance between freedom and control
AI sovereignty will become top of mind for regulators, but legislative policies will develop in a disjointed fashion around the world.