IMPACT
…building a unique and dynamic generation.
Friday, February 20, 2026
China’s latest AI is so good it’s spooked Hollywood. Will its tech sector pump the brakes?
Tom Cruise and Brad Pitt tussle in hand-to-hand combat on a rubble-strewn rooftop; Donald Trump takes on kung-fu fighters in a bamboo grove; Kanye West dances through a Chinese imperial palace while singing in Mandarin.
Over the past week, a slew of cinematic videos of celebrities and characters in absurd situations have gone viral online, with one commonality –– they were created using a new artificial intelligence tool from Chinese developer ByteDance, sparking anxiety over the fast-evolving capabilities of AI.
The new model, named Seedance 2.0, is among the most advanced of its kind and has quickly drawn praise for its ease of use and the realistic nature of the videos it can generate in minutes.
But soon after the release, media behemoths Paramount and Disney sent cease-and-desist letters to ByteDance –– the company most famous for developing the video-sharing app TikTok –– accusing it of infringing upon their intellectual property. Hollywood’s premier trade organization, the Motion Picture Association, and labor union SAG-AFTRA also condemned the company for unauthorized use of US-copyrighted works.
ByteDance responded with a statement saying it would implement better safeguards to protect intellectual property.
Seedance 2.0 has quickly become the most controversial model in a wave of them released by Chinese technology companies this year, as the competition to dominate the AI industry heats up.
China’s government has made advanced tech a key tenet of its national development strategy. In a televised Lunar New Year celebration this week, the country’s latest humanoid robots stole the show by performing martial arts, spin kicks and back flips.
Such improvements are often met with unease, particularly in the US, China’s chief technological and political rival, in a spiral of one-upmanship redolent of the US’s 20th-century “Space Race” with the Soviet Union.
“There’s a kind of nationalist fervor around who’s going to ‘win’ the space race of AI,” said Ramesh Srinivasan, a professor of information studies at the University of California, Los Angeles. “That is part of what we are seeing play out again and again and again when it comes to this news as it breaks.”
Here’s why the latest technology from ByteDance has rattled the world.
What’s so scary about Seedance 2.0?
The AI video generation model, while still not publicly available to everyone, was hailed by many as the most sophisticated of its kind to date, using images, audio, video and text prompts to quickly churn out short scenes with polished characters and motion editing control at lower cost.
“My glass half empty view is that Hollywood is about to be revolutionized/decimated,” writer and producer Rhett Reese, who worked on the Deadpool movie franchise, wrote on X after seeing the video of Cruise and Pitt.
One Chinese tech blogger using Seedance 2.0 said it was so advanced that it was able to generate realistic audio of his voice based solely on an image of him, raising fears over deepfakes and privacy. Afterwards, ByteDance rolled back that feature and introduced verification requirements for users who want to create digital avatars with their own images and audio, according to Chinese media.
Rogier Creemers, an assistant professor at Leiden University in the Netherlands, who researches China’s domestic tech policy, said part of the concern stems from the rapid rate at which Chinese companies have released new iterations of AI technology this year.
That has also put China on the back foot in assessing the potential negative impacts of each improvement, he said.
“The more capable these apps become, automatically, the more potentially harmful they become,” said Creemers. “It’s a little bit like a car. If you build a car that can drive faster, that gets you where you need to be a lot more quickly, but it also means that you can crash faster.”
What’s being done to ease concerns?
After outcry from Hollywood, ByteDance said in a statement that it respects intellectual property rights and will strengthen safeguards against the unauthorized use of intellectual property and likenesses on its platform, though it did not specify how.
User complaints prompted the recent ByteDance rollback and have also forced popular Chinese Instagram-like app RedNote to restrict any AI-made content that has not been properly labeled.
And the arrival of Seedance 2.0 coincides with a tightening of regulations for AI content in China.
China’s domestic regulation of AI surpasses the efforts of most other countries in the world, in part because of its longstanding censorship apparatus. Last week, the Cyberspace Administration of China said it was cracking down on unlabeled AI-generated content, penalizing more than 13,000 accounts and removing hundreds of thousands of posts.
However, the restrictions on AI-generated content on the Chinese internet are often unevenly enforced, Nick Corvino wrote in ChinaTalk, a China-focused newsletter. He attributed the problem in part to difficulties policing content across different apps, as well as incentives for tech companies to encourage user content.
“With Chinese social media platforms locked in fierce competition, both with each other and the Western market, none wants to be the strictest enforcer while others let content flow freely,” he said in a post following the launch of Seedance 2.0.
What does this mean for China’s AI industry?
According to analysts, China is walking a fine line between encouraging domestic development of AI models and maintaining strict controls on how those models are used.
“People in the AI business would always say what the Chinese government is doing is slowing down the development of AI,” said Creemers of Leiden University. “Obviously a content control system like the Chinese that essentially limits what you can produce, that’s never fun.”
Pressure to stop using certain images or data, from US media giants or other sources, may also impact efforts to refine AI. Disney accused ByteDance of illegally using its IP to train Seedance 2.0, but recently struck a deal with US company OpenAI to give Sora – OpenAI’s video generation model and Seedance competitor – access to trademarked characters like Mickey and Minnie Mouse.
“These agreements have everything to do with what kind of data are they going to get access to that they would not have otherwise, or that their competitors would not have?” said Srinivasan from UCLA. “There’s a high probability that the Sora products could be more refined and more advanced, if the data are better suited for the models to learn from.”
At the same time, restrictions on how AI can be used or trained could also spur greater innovation, he said, noting how Chinese company DeepSeek –– blessed with a much smaller budget than the industry leaders –– built a competitive AI-powered chatbot.
“When it comes to Chinese breakthroughs in AI, the DeepSeek revelation was so important because they showed that there are other ways of training language models in ways that are more economical,” he said.
By Stephanie Yang
Wednesday, February 18, 2026
AI Promised to Save Time. Researchers Find It’s Doing the Opposite
Artificial intelligence boosters often promise the tech will lead to a reduced workload. AI would draft documents, synthesize information, and debug code so employees can focus on higher-value tasks. But according to recent findings, that promise is misleading.
An ongoing study, published in the Harvard Business Review, joins a growing body of evidence that AI isn’t reducing workloads at all. Instead, it appears to be intensifying them.
Researchers spent eight months examining how generative AI reshaped work habits at a U.S.-based technology company with roughly 200 employees. They found that after adopting AI tools, workers moved faster, took on a wider range of tasks, and extended their work into more hours of the day, even if no one asked them to do so.
Importantly, the company never required employees to use AI. It simply offered subscriptions to commercially available tools and left adoption up to individuals. Still, many workers embraced the technology enthusiastically because AI made “doing more” feel easier and more rewarding, the researchers said.
That enthusiasm, however, came with unintended consequences. Over time, workloads quietly expanded to overwhelming levels. The gradual, often unnoticed, creep in responsibilities led to cognitive fatigue, burnout, and weaker decision making.
While AI can produce an initial productivity surge, the researchers warn that it may ultimately contribute to lower-quality work and unsustainable pressure.
To track these changes, the researchers observed the company in person two days a week, monitored internal communication channels, and conducted more than 40 in-depth interviews across engineering, product, design, research, and operations. They found that job boundaries began to blur.
Employees increasingly took on tasks that previously belonged to other teams, using AI to fill knowledge gaps. Product managers and designers started writing code. Researchers started handling engineering tasks. In many cases, work that might once have justified additional hires was simply absorbed by existing staff with the help of AI.
For engineers, the shift created a different kind of burden. Rather than saving time, they spent more hours reviewing, correcting, and guiding AI-generated work produced by colleagues. What had once been straightforward code review expanded into ongoing coaching and cleanup of flawed outputs.
The researchers described a feedback loop: AI sped up certain tasks, which raised expectations for speed. Higher expectations encouraged greater reliance on AI, and that, in turn, widened both the scope and volume of work employees attempted. The result was more activity, not less.
Many participants said that while they felt more productive, they did not feel any less busy. Some actually felt busier than before AI arrived. “You had thought that maybe, oh, because you could be more productive with AI, then you save some time, you can work less,” one engineer told the Harvard Business Review. “But then, really, you don’t work less. You just work the same amount or even more.”
What looks like a productivity breakthrough, the researchers concluded, can actually mask silent workload creep. And overwork, they warn, can erode judgment, increase errors, and make it harder for organizations to distinguish genuine efficiency gains from unsustainable intensity.
To counter these risks, the researchers proposed a protective approach they call “AI practice,” a set of intentional norms and routines that define how AI should be used at work and, crucially, when to stop. Without clear boundaries, they caution, AI makes it easier to do more but harder to slow down.
BY LEILA SHERIDAN
Tuesday, February 17, 2026
What Is AI.com? The $70 Million Domain Being Called ‘the Absolute Peak of the AI Bubble’
On Super Bowl Sunday, the most talked-about advertisement was for a product that hadn’t even launched yet.
During the game’s fourth quarter, a 30-second commercial aired advertising something called “AI.com,” ending with a call to “claim your handle” along with three usernames: Mark, Sam, and Elon. Missing from the commercial? Any information about what AI.com actually does.
But the mysterious teaser worked; web searches for “What is AI.com” exploded. According to EDO, a company that helps businesses measure the impact of advertisements, AI.com was the top-performing ad of the night, with 9.1 times as much engagement as the average Super Bowl ad. But when interested people rushed to the website, they found an error message waiting for them. The company’s website had immediately crashed.
What is AI.com, anyway?
AI.com was not co-founded by Mark Zuckerberg, Sam Altman, and Elon Musk. In fact, they have nothing to do with the company at all. The founder is actually Kris Marszalek, who previously co-founded Crypto.com.
The Financial Times reported that in April 2025, Marszalek paid $70 million to obtain the AI.com domain, which the publication says is the most ever spent on a domain name, far more than the $12 million Marszalek spent to acquire Crypto.com in 2018. Marszalek says he is currently the CEO of both companies.
What does AI.com actually do?
On its now-functioning website, the company describes itself as a platform offering access to a “private, personal AI agent that doesn’t just answer questions but actually operates on the user’s behalf — organizing work, sending messages, executing actions across apps, building projects, and more.” The company wrote that the agent will soon be able to help users “trade stocks, automate workflows, organize and execute daily tasks with their calendar, or even update their online dating profile.”
Sounds impressive, but it turns out that the tech powering AI.com is far from proprietary. In an article posted to Marszalek’s personal X account, the founder wrote that “AI.com is the world’s first easy-to-use and secure implementation of OpenClaw, the open-source agent framework that went viral two weeks ago.”
What is OpenClaw?
OpenClaw is essentially an agent that has full access to your computer’s files, and it has indeed become an instant sensation in the tech world. But the current process of setting the agent up is highly technical and risky. Marszalek says that AI.com has made OpenClaw “easy to use without any technical skills, while hardening security to keep your data safe.” Basically, this means that AI.com is positioning itself as a consumer-friendly wrapper around a powerful, developer-focused tool.
OpenClaw creator Peter Steinberger posted that he had not heard about AI.com until the ad aired, to which Marszalek responded, “Let’s chat.”
How do you sign up for AI.com?
If you go to AI.com, you’ll be asked to link your Google account to the platform in order to choose a handle for both yourself and your agent. After you’ve selected handles, you’ll need to connect a credit or debit card to your account, though the company says you won’t be charged.
Once your card has been processed, you’ll receive a notification that “demand is extremely high right now, so generation is queued. We’ll notify you the moment your AI is ready to activate.” It’s unclear if any users have received their agent yet.
In a popular X post, one user criticized the website, calling it “the absolute peak of the AI bubble.”
Steinberger quoted that post, writing “Guess I’m flattered?”
BY BEN SHERRY @BENLUCASSHERRY
Wednesday, February 11, 2026
AI Power Users Are Rapidly Outpacing Their Peers. Here’s What They’re Doing Differently
Last November, consulting firm EY surveyed 15,000 employees across 29 countries about how they use AI at work. The results should worry every founder: 88 percent of workers now use AI tools daily, but only 5 percent qualify as “advanced users” who’ve learned to extract real value.
That 5 percent? They’re gaining an extra day and a half of productivity every single week. The other 95 percent are stuck using AI for basic search and document summarization, essentially treating a Ferrari like a golf cart.
When OpenAI released its State of Enterprise AI report in December, it confirmed the same pattern. Frontier workers—those in the 95th percentile—send six times more prompts to AI tools like ChatGPT than their median colleagues. For coding tasks, that gap explodes to 17x. If these AI tools are identical and access is universal, why are the results so wildly different for workers around the world? And what separates power users from everyone else?
Ofer Klein, CEO of Reco, a SaaS security platform that discovers and secures AI, apps, and agents across enterprise organizations, offers some insights into what sets the power users apart.
1. They experiment while others dabble
High performers treat AI tools like junior colleagues they’re training. They iterate on prompts rather than giving up after one mediocre response. They’ve moved beyond one-off queries to building reusable prompt libraries and workflows.
The rest of your team tried AI once or twice, got underwhelming results, and concluded it wasn’t worth the effort. What they don’t realize, however, is that AI requires iteration. The first response is rarely the best response. Power users ask follow-up questions, refine their prompts, and teach the AI their preferences over time.
2. They match tools to tasks
Power users typically maintain what Klein calls a “barbell strategy”—deep mastery of one or two primary tools plus five to eight specialized AI applications they rotate through depending on the task.
“They’re not trying every new AI that launches, but they’re not dogmatically loyal to one platform either,” Klein explains. “They’ve developed intuition about which AI is best for what.”
They might use ChatGPT for brainstorming, Claude for analysis, and Midjourney for visuals. Most employees, by contrast, force one tool to handle everything. When it inevitably underperforms on tasks it wasn’t designed for, they blame AI rather than their approach.
3. They think about work differently
It’s easy to assume that the biggest behavioral difference between these power users and everyone else is technical skill. But, interestingly, it’s not. Rather, it’s how power users think about tasks. They break projects into discrete steps: research, outline, first draft, and refinement. Then they deploy AI strategically at each stage.
Instead of asking AI to “write a report,” they ask it to summarize research, suggest an outline, draft specific sections, then refine tone. They understand where AI adds value and where human judgment matters.
“The highest performers spend more time on strategic work because AI handles the grunt work,” Klein says. “They use AI to augment their expertise, not replace thinking.”
The hidden cost
Why does all of this matter? Here’s the math that should worry you: OpenAI’s data shows workers using AI effectively save 40-60 minutes daily. In a 100-person company where 60 employees barely touch AI, you’re losing 40-60 hours of productivity every single day. Over a year, that’s 10,000+ hours—equivalent to five full-time employees’ worth of work you’re paying for but not getting.
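The article’s back-of-envelope arithmetic can be checked with a short script. The 250 working days per year and 2,000-hour full-time equivalent are standard assumptions, not figures from the article:

```python
# Check the workload math: 60 employees each missing out on 40-60
# minutes of daily savings, scaled to a year and to FTE equivalents.
# Assumed (not from the article): 250 workdays/year, 2,000 h/year per FTE.

NON_USERS = 60            # employees barely touching AI
WORKDAYS_PER_YEAR = 250   # assumption
FTE_HOURS_PER_YEAR = 2000 # assumption

for minutes_saved in (40, 60):
    daily_hours_lost = NON_USERS * minutes_saved / 60
    yearly_hours_lost = daily_hours_lost * WORKDAYS_PER_YEAR
    ftes_lost = yearly_hours_lost / FTE_HOURS_PER_YEAR
    print(f"{minutes_saved} min/day -> {daily_hours_lost:.0f} h/day, "
          f"{yearly_hours_lost:,.0f} h/year (~{ftes_lost:.1f} FTEs)")
```

At the low end this reproduces the figures cited: 40 hours lost per day, 10,000 hours per year, and five full-time employees’ worth of work; the 60-minute case pushes the annual loss to 15,000 hours.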
Meanwhile, your competitors’ power users are compounding that advantage daily.
What you can do about it
Klein recommends tracking time saved, not just usage frequency. Someone using AI 50 times daily for spell-checking differs fundamentally from someone using it five times to restructure a client proposal.
In addition, run an “AI show and tell” where employees demonstrate one workflow where AI saves them meaningful time. You’ll quickly identify who’s truly leveraging these tools versus who’s dabbling. Then, create small cross-functional “AI councils” of five to six employees who meet monthly to share workflows.
That should cascade into proper training of employees on how to use these tools the right way. A BCG survey found that only one-third of employees say they have been properly trained. That’s an opportunity forward-thinking leaders can tap into.
But don’t just replicate tools; replicate mindset. Giving everyone ChatGPT Plus doesn’t close the gap. The differentiator is teaching people to think in terms of “what can I delegate to AI?” rather than “what can AI do?”
The uncomfortable truth, according to BCG’s survey, is that this gap is widest among front-line employees. While more than three-quarters of leaders and managers use AI several times a week, adoption among front-line workers has stalled at just 51 percent.
That’s not just a productivity problem. It’s a competitive threat that compounds every quarter you ignore it. Your 5 percent are already working like they have an extra team member. The question is whether you’ll help the other 95 percent catch up before your competitors do.
BY KOLAWOLE ADEBAYO, COLUMNIST