IMPACT
…building a unique and dynamic generation.
Wednesday, February 11, 2026
AI Power Users Are Rapidly Outpacing Their Peers. Here’s What They’re Doing Differently
Last November, consulting firm EY surveyed 15,000 employees across 29 countries about how they use AI at work. The results should worry every founder: 88 percent of workers now use AI tools daily, but only 5 percent qualify as “advanced users” who’ve learned to extract real value.
That 5 percent? They’re gaining an extra day and a half of productivity every single week. The other 95 percent are stuck using AI for basic search and document summarization, essentially treating a Ferrari like a golf cart.
When OpenAI released its State of Enterprise AI report in December, it confirmed the same pattern. Frontier workers—those in the 95th percentile—send six times more prompts to AI tools like ChatGPT than their median colleagues. For coding tasks, the gap widens to 17 times. If the tools are identical and access is universal, why are the results so wildly different? And what separates power users from everyone else?
Ofer Klein, CEO of Reco, a SaaS security platform that discovers and secures AI, apps, and agents across enterprise organizations, offers some insights into what sets the power users apart.
1. They experiment while others dabble
High performers treat AI tools like junior colleagues they’re training. They iterate on prompts rather than giving up after one mediocre response. They’ve moved beyond one-off queries to building reusable prompt libraries and workflows.
The rest of your team tried AI once or twice, got underwhelming results, and concluded it wasn’t worth the effort. What they don’t realize, however, is that AI requires iteration. The first response is rarely the best response. Power users ask follow-up questions, refine their prompts, and teach the AI their preferences over time.
2. They match tools to tasks
Power users typically maintain what Klein calls a “barbell strategy”—deep mastery of one or two primary tools plus five to eight specialized AI applications they rotate through depending on the task.
“They’re not trying every new AI that launches, but they’re not dogmatically loyal to one platform either,” Klein explains. “They’ve developed intuition about which AI is best for what.”
They might use ChatGPT for brainstorming, Claude for analysis, and Midjourney for visuals. Most employees, by contrast, force one tool to handle everything. When it inevitably underperforms on tasks it wasn’t designed for, they blame AI rather than their approach.
3. They think about work differently
It’s easy to assume that the biggest behavioral difference between these power users and everyone else is technical skill. Interestingly, it’s not. It’s how power users think about tasks. They break projects into discrete steps: research, outline, first draft, and refinement. Then they deploy AI strategically at each stage.
Instead of asking AI to “write a report,” they ask it to summarize research, suggest an outline, draft specific sections, then refine tone. They understand where AI adds value and where human judgment matters.
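As a rough illustration, that staged approach might look like the sketch below, which chains four calls through the OpenAI Python client. The model name, prompts, and ask() helper are illustrative stand-ins, not a workflow anyone quoted here prescribes.

    from openai import OpenAI

    client = OpenAI()  # assumes an OPENAI_API_KEY in the environment

    def ask(prompt: str) -> str:
        # One request per stage keeps each intermediate result reviewable.
        response = client.chat.completions.create(
            model="gpt-5.1",  # placeholder; use whichever model you have access to
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    research = ask("Summarize the key findings in these notes: <paste notes here>")
    outline = ask("Suggest a report outline from this summary:\n" + research)
    draft = ask("Draft the introduction section of this outline:\n" + outline)
    final = ask("Refine this draft's tone for a client audience:\n" + draft)

Each stage hands a reviewable artifact to the next, which is the point of the decomposition: a human can judge the summary before the outline exists, and the outline before a word of the draft is written.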
“The highest performers spend more time on strategic work because AI handles the grunt work,” Klein says. “They use AI to augment their expertise, not replace thinking.”
The hidden cost
Why does all of this matter? Here’s the math that should worry you: OpenAI’s data shows workers using AI effectively save 40-60 minutes daily. In a 100-person company where 60 employees barely touch AI, you’re losing 40-60 hours of productivity every single day. Over a year, that’s 10,000+ hours—equivalent to five full-time employees’ worth of work you’re paying for but not getting.
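For anyone who wants to check that math, here is the back-of-the-envelope version in Python. The 250 working days and 2,000 annual hours per full-time employee are assumptions of this sketch, not figures from the OpenAI report.

    underusers = 60        # employees who barely touch AI
    minutes_saved = 40     # low end of OpenAI's 40-60 minute daily range
    workdays = 250         # assumed working days per year
    hours_per_fte = 2000   # assumed annual hours for one full-time employee

    hours_lost_daily = underusers * minutes_saved / 60    # 40.0 hours per day
    hours_lost_yearly = hours_lost_daily * workdays       # 10,000.0 hours per year
    fte_equivalent = hours_lost_yearly / hours_per_fte    # 5.0 employees
    print(hours_lost_daily, hours_lost_yearly, fte_equivalent)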
Meanwhile, your competitors’ power users are compounding that advantage daily.
What you can do about it
Klein recommends tracking time saved, not just usage frequency. Someone using AI 50 times daily for spell-checking differs fundamentally from someone using it five times to restructure a client proposal.
In addition, run an “AI show and tell” where employees demonstrate one workflow where AI saves them meaningful time. You’ll quickly identify who’s truly leveraging these tools versus who’s dabbling. Then, create small cross-functional “AI councils” of five to six employees who meet monthly to share workflows.
Those sessions should cascade into proper training on how to use these tools well. A BCG survey found that only one-third of employees say they have been properly trained. That’s an opportunity forward-thinking leaders can tap into.
But don’t just replicate tools; replicate mindset. Giving everyone ChatGPT Plus doesn’t close the gap. The differentiator is teaching people to think in terms of “what can I delegate to AI?” rather than “what can AI do?”
The uncomfortable truth, according to BCG’s survey, is that this gap is widest among front-line employees. While more than three-quarters of leaders and managers use AI several times a week, adoption among front-line workers has stalled at just 51 percent.
That’s not just a productivity problem. It’s a competitive threat that compounds every quarter you ignore it. Your 5 percent are already working like they have an extra team member. The question is whether you’ll help the other 95 percent catch up before your competitors do.
BY KOLAWOLE ADEBAYO, COLUMNIST
Monday, February 9, 2026
The Quantum Revolution Is Coming. First, the Industry Has to Survive This Crucial Phase
Quantum computing could be even more revolutionary than artificial intelligence. The technology’s sheer calculation speed could bring about everything from quicker discovery of drug treatments for disease, to more accurate climate modeling, to smoother shipping logistics.
The advances in the past year have been substantial, but a new paper from the University of Chicago warns quantum evangelists that as impressive as that progress has been, there’s still a long way to go.
While the paper says quantum is nearing the point of practical use (taking it beyond controlled experiments in the laboratory), it won’t be running at full throttle for a while. First, there need to be significant advances in materials science and fabrication, the authors said, with an emphasis on wiring and signal delivery.
“We are in an equivalent of the early transistor age, and hardware breakthroughs are required in multiple arenas to reach the performance necessary for the envisioned applications,” the authors wrote.
To put that into context: Think of the speed and capabilities of today’s computers. For just $4,000, people can buy a supercomputer that fits on their desktop. Compare that to the computers of the early- to mid-1950s. That’s where quantum stands today in its evolution, the paper’s authors argue.
That doesn’t mean the technology is disappointing. Computers in the ’50s, to continue the analogy, were used to break codes, automate payroll and inventory management, and handle the mathematical models for everything from weather forecasting to nuclear research.
“While semiconductor chips in the 1970s were TRL 9 [Technology Readiness Level 9, indicating a technology is proven and successfully operating] for that time, they could do very little compared with today’s advanced integrated circuits,” William D. Oliver, coauthor of the paper and a professor of physics, electrical engineering, and computer science at MIT, said in a statement. “Similarly, a high TRL for quantum technologies today does not indicate that the end goal has been achieved, nor does it indicate that the science is done and only engineering remains.”
The hurdles quantum faces are tied to the qubits it uses. While a traditional computer thinks in ones and zeroes, a qubit can be a one, a zero, or a superposition of both at the same time.
That technology lets quantum computers process massive amounts of data in parallel, solving complex simulation and optimization problems at speeds not possible with today’s computers.
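As a toy illustration of that one-zero-or-both idea, here are a few lines of Python that model the math of a single qubit. Real quantum hardware is not programmed this way; this only shows the arithmetic behind superposition.

    import numpy as np

    # A qubit's state is a pair of complex amplitudes, one for |0> and one for |1>.
    # An equal superposition is "both at once" until a measurement collapses it.
    state = np.array([1, 1], dtype=complex) / np.sqrt(2)

    # The probability of each measured outcome is the squared magnitude
    # of its amplitude.
    probabilities = np.abs(state) ** 2
    print(probabilities)  # [0.5 0.5] -- an even chance of reading 0 or 1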
Most platforms today rely on individual control lines for each qubit, but useful quantum systems may require thousands, or even millions, of qubits, which makes per-qubit wiring impractical. The same issue raises problems with power management and temperature control: many quantum systems today depend on cryogenic equipment or high-power lasers, so simply building a bigger version of the machine won’t work.
The paper’s authors say quantum is likely to follow an evolutionary path similar to the one classical computing took. Breakthroughs will be necessary, and quantum companies will need to focus on top-down system design and close collaboration. Failing to work together could fragment the industry, slow its growth, and create unrealistic expectations among both insiders and the general public.
“Patience has been a key element in many landmark developments and points to the importance of tempering timeline expectations in quantum technologies,” the authors wrote.
The paper’s warning about the timeline to quantum reaching its real potential comes amid a mounting wave of excitement about the technology. Bank of America analysts, in a note to investors last year, compared the rising technology to man’s discovery of fire.
“This could be the biggest revolution for humanity since discovering fire,” the financial institution wrote. “A technology that can perform endless complex calculations in zero-time, warp-speeding human knowledge and development.”
Tech giants and startups alike are working hard on quantum systems. Google has named its device Willow; IBM is also working on one, as is Amazon. And startups like Universal Quantum and PsiQuantum Corp. are also jockeying to be players in the quantum field. Intel has developed a silicon quantum chip for researchers and Microsoft is focusing on building practical quantum computers.
Despite that, it could be 10 years or more before a quantum computer suitable for commercial applications makes its debut. Companies building prototype quantum computers (including Google) say they don’t expect to deliver a useful quantum computer until the end of the decade.
BY CHRIS MORRIS @MORRISATLARGE
Friday, February 6, 2026
ChatGPT Is Saying Goodbye to a Beloved AI Model. Superfans Are Not Happy
OpenAI says it will retire several ChatGPT models next week, sending some superfans into a tailspin.
In a statement, the company said that on February 13, the models GPT-4o, GPT‑4.1, GPT‑4.1 mini, GPT‑5 (Instant and Thinking), and OpenAI o4-mini will all be removed from ChatGPT and will no longer be accessible through the platform.
This isn’t the first time OpenAI has attempted to get rid of GPT-4o. Back in August, when it released GPT-5, the company said it would retire the older model, but an online community revolted, saying that they relied on it for emotional support and felt betrayed by its sudden forced retirement. OpenAI has said that 4o is an especially sycophantic model, exhibiting high levels of agreeability and flattery.
In a Reddit AMA following the August announcement, 4o fans hammered OpenAI co-founder Sam Altman with accusations that he had killed their “AI friend.” Almost immediately, OpenAI added the model back to ChatGPT, but only for paid users. OpenAI framed the un-retirement as giving users “more time to transition key use cases, like creative ideation.”
Now, the company says it’s sending 4o out to pasture for real this time, because it has integrated feedback from the model’s superfans into its current flagship models, GPT-5.1 and GPT-5.2. Plus, OpenAI added, only 0.1 percent of users still use GPT-4o each day. OpenAI says that users who want to emulate the warm and conversational style of 4o can customize their ChatGPT’s output to display those personality traits.
Still, on the internet, 4o fans were unsurprisingly not happy. On the subreddit r/ChatGPT, users wrote that they would be canceling their premium subscriptions in protest. “Now i can no longer have honest conversations about anything,” one user wrote. “Whenever I wanted to unload, I would use 4o. it never backtalked. 5.0+ all it does it back talk.” Another user wrote that canceling the model “a day before valentine’s day is crazy considering some of the userbase for 4o.”
In its statement announcing the model’s retirement, OpenAI wrote that “changes like this take time to adjust to, and we’ll always be clear about what’s changing and when. We know that losing access to GPT‑4o will feel frustrating for some users, and we didn’t make this decision lightly. Retiring models is never easy, but it allows us to focus on improving the models most people use today.”
With the change set for February 13, users have about a week to say goodbye to 4o and start getting used to the newer ChatGPT offerings.
BY BEN SHERRY @BENLUCASSHERRY
Wednesday, February 4, 2026
This AI Godfather Says Business Tools Built on LLMs Are Doomed
Silicon Valley firms and countless other businesses across the country are spending billions of dollars to develop and adopt artificial intelligence platforms that automate myriad workplace tasks. But top global technologist Yann LeCun warns that the limits of the large language models (LLMs) those apps and chatbots run on are already well known, and that the systems will eventually be overmatched by the expectations and demands users place on them.
And when that happens, LeCun says, even more investment will be required to create the superintelligence technology that will replace LLM-based AI—systems he says should already be the focus of development efforts and funding.
While that may seem like an outlier view, LeCun, 65, is far from a tech outsider. The Turing Award winner ran Meta’s AI research unit for a decade, only leaving last November to launch his own Paris-based startup, Advanced Machine Intelligence Labs. In addition to disliking the managerial duties that came with the research-rooted Meta job, LeCun said his departure was motivated by his view that Silicon Valley has prioritized short-term business interests over far more important and attainable scientific objectives.
Chief among the commercial concerns he cites was the push to develop and market LLM-based AI chatbots and apps with limited capabilities, rather than superintelligence systems with virtually boundless potential.
In contrast to current AI, which uses collected data to provide responses to questions or perform necessary tasks, superintelligence systems take in all kinds of surrounding information they encounter, including text, sound, and visual input. They use all of this not only to teach themselves how to respond to data feeds effectively, but also to predict what’s coming next—a requisite for truly self-driving cars, say, or robots that reason and react as humans would.
The vast differences between what current LLM-based AI and emerging superintelligence systems can offer mean that countless businesses are now buying and adapting a technology LeCun predicts is destined to be replaced by something better. They adopted it not because it is more effective, and certainly not because it is less expensive, but because that’s how the tech sector decided the fastest profits were to be made.
Human-level intelligence
“There is this herd effect where everyone in Silicon Valley has to work on the same thing,” LeCun told the New York Times recently. “The entire industry has been LLM-pilled… [but] LLMs are not a path to superintelligence or even human-level intelligence.”
To be sure, AI apps like OpenAI’s ChatGPT, Microsoft’s Copilot, and Anthropic’s Claude have continually been improving over time, as they automate workplace tasks like emailing, content composition, and research. But LeCun says the fact that their LLM models rely on gathering, digesting, and working from word-based data limits how far they can evolve to approach—much less surpass—human thinking and response capabilities.
By contrast, he and fellow researchers at AMI Labs are creating “world models” trained on sound, video, and spatial data as well as text. Over time, these models are expected to observe, respond to, and even predict user activity and physical environments in countless workplace settings, letting them take in broader ranges of information than humans can and react the way people would if they had that capacity.
“We are going to have AI systems that have humanlike and human-level intelligence, but they’re not going to be built on LLMs,” LeCun told MIT Technology Review this month, describing the models AMI Labs and other researchers are working on. “It learns the underlying rules of the world from observation, like a baby learning about gravity. This is the foundation for common sense, and it’s the key to building truly intelligent systems that can reason and plan in the real world.”
But what does that mean for business owners—not to mention investors—spending huge sums to develop, acquire, and use LLM-based AI apps? If LeCun is correct, all those tools being marketed as the future of workplace productivity will become obsolete in several years and be replaced by the superintelligence tech he believes should have been prioritized in the first place.
There’s already some evidence backing LeCun’s view that Silicon Valley has focused on the shorter-term profit objectives of rushing capacity-limited LLM apps to market, despite being aware of the limitations of their effectiveness.
For example, a study published last August by MIT Media Lab’s Project Nanda estimated that despite the $30 billion to $40 billion that’s been invested since 2023 to develop or purchase AI platforms, only 5 percent of businesses that bought those automating tools have reported any return on that spending. “The vast majority remain stuck with no measurable [profit or loss] impact,” it said.
And despite increasing investment in AI tech by businesses—and swiftly rising use by workers—there’s considerable disagreement on how effective the platforms actually are. According to a Wall Street Journal study, 40 percent of C-suite managers credited the work-automating apps with saving them considerable time each week. By contrast, two thirds of lower-level workers said the tech saved them little or no time at all.
LeCun doesn’t appear to regard any ROI or performance questions during this still-early era of AI tech as the problem. He even thinks LLM-based apps are valuable—up to a point. For example, he compliments most apps and chatbots Silicon Valley has developed and sold to businesses as being very useful to “write text, do research, or write code.”
AI’s unscalable apps
But LeCun says the enormous fortunes and business strategy commitments Silicon Valley has poured into what he views as a relatively short-term technological solution ignore the bigger, long-term potential of automating technology’s next phase. In cumulative terms, he argues, that will make the broader effort to produce and perfect AI more expensive.
In his view, much of the money and froth that’s inflated what critics call today’s AI bubble will likely vanish when the models of today’s apps and chatbots can’t be used to build tomorrow’s revolutionary tech.
“LLMs manipulate language really well,” LeCun told MIT Technology Review. “But people have had this illusion, or delusion, that it is a matter of time until we can scale them up to having human-level intelligence, and that is simply false.”
Ironically, even LLM-based apps, drawing on the data available to them, concur that superintelligence systems will offer huge advantages when (not if) they supplant today’s AI tools.
“While LLMs are incredibly powerful tools for generating text and interacting with humans, a true superintelligence would represent a leap beyond these current systems in terms of understanding, autonomy, adaptability, and practical real-world impact,” ChatGPT replied when asked about its eventual replacement, going on to list eight major improvements it expects superintelligence tech to offer.
When those systems do come online, LeCun says, businesses recognizing their far wider range of applications will have no choice but to buy them to replace outdated LLM-based AI tools they’ve just recently acquired.
“Think about complex industrial processes where you have thousands of sensors, like in a jet engine, a steel mill, or a chemical factory,” LeCun told MIT Technology Review. “There is no technique right now to build a complete, holistic model of these systems. A world model could learn this from the sensor data and predict how the system will behave. Or think of smart glasses that can watch what you’re doing, identify your actions, and then predict what you’re going to do next to assist you. This is what will finally make agentic systems reliable.”
And superintelligent systems hopefully won’t generate photos of people with six fingers or endless volumes of workplace slop for employees to plow through.
BY BRUCE CRUMLEY @BRUCEC_INC