Wednesday, December 31, 2025
Stanford AI Experts Say the Hype Ends in 2026, But ROI Will Get Real
Questions continue to percolate about the impact of AI on the future of business and society at large, from speculations about a bubble to predictions that it will cause massive upheaval in our everyday life. But according to Stanford’s AI experts, 2026 may be the year that the hype cools and AI takes on heavier lifting.
According to Deloitte’s 2026 Tech Trends report, the rate at which AI is evolving is motivating the urgency to shift from “endless pilots to real business value.” While the telephone reached 50 million users after 50 years, and the internet after seven years, AI tool ChatGPT reached twice that in two months and now has more than 800 million weekly users.
Tangible Results
AI next year may be characterized by rigor and ROI, according to Julian Nyarko, a law professor and Stanford HAI associate director. He spoke specifically about AI for legal services.
“Firms and courts might stop asking ‘Can it write?’ and instead start asking ‘How well, on what, and at what risk?’” Nyarko said. “I expect more standardized, domain-specific evaluations to become table stakes by tying model performance to tangible legal outcomes such as accuracy, citation integrity, privilege exposure, and turnaround time.”
Another sector that could bring about new, tangible results is AI for medicine, said Curtis Langlotz, professor of radiology, medicine, and biomedical data science, a senior associate vice provost for research, and an HAI senior fellow.
Soon we’ll experience the moment “when AI models are trained on massive high-quality healthcare data rivaling the scale of data used to train chatbots,” said Langlotz. “These new biomedical foundation models will boost the accuracy of medical AI systems and will enable new tools that diagnose rare and uncommon diseases for which training datasets are scarce.”
James Landay, HAI co-director and professor of computer science, predicts investments will continue to pour into AI data centers, and then the excitement will dry up.
“[A]t some point, you can’t tie up all the money in the world on this one thing,” Landay said. “It seems like a very speculative bubble.”
Landay expects there will be no AGI in 2026.
Instead, “AI sovereignty will gain huge steam this year as countries try to show their independence from the AI providers and from the United States’ political system,” he said. For example, a country may “build its own large LLM” or “run someone else’s LLM on their own GPUs” to keep its data within its own borders.
One Expert’s Warning
Even as AI applications are expected to go deeper next year, the outlook is not all positive. Angéle Christin, associate professor of communication and Stanford HAI senior fellow, predicts a clearer picture of what AI can and cannot do.
“[A]lready there are signs that AI may not accomplish everything we hope it will,” said Christin. “There are also hints that AI, in some cases, can misdirect, deskill, and harm people. And there is data showing that the current buildout of AI comes with tremendous environmental costs.”
BY AVA LEVINSON
Tuesday, December 30, 2025
Nvidia and OpenAI pour $20 billion into Fin Zen AI — opening the door for everyone to join the next era of artificial intelligence
The world of technology and finance is being shaken up — Nvidia and OpenAI have just revealed a groundbreaking $20 billion partnership with the revolutionary platform Fin Zen AI. This powerful collaboration is designed to bring the profit-making potential of artificial intelligence directly into the hands of everyday people.
For years, only big investors and institutions had access to high-end AI trading tools. Now, thanks to this historic alliance, anyone can join the next wave of financial innovation — and let AI do the hard work of analyzing, predicting, and growing capital automatically.
How Fin Zen AI Transforms Investing
Built on Nvidia’s latest GPU architecture and powered by OpenAI’s advanced machine learning algorithms, Fin Zen AI is designed to think, learn, and react faster than any human trader could. The platform scans thousands of market data points every second, detecting profit opportunities before others even notice them.
All you have to do is register, set your preferred strategy, and let the system do the rest. Fin Zen AI automatically executes trades, manages your portfolio, and adapts to market changes in real time. The process is 100% automated — no prior experience or financial background required.
What Early Users Are Seeing
Unlike “get-rich-quick” schemes, Fin Zen AI is built for long-term smart growth. But that hasn’t stopped early adopters from seeing remarkable short-term gains.
Test users who started with as little as $250 have reported multiplying their investments within hours. Some achieved returns between 200–400% during high-volatility trading periods — all without manual input or risky decision-making.
“I just wanted to test it with a small amount — within a few hours, my balance had tripled. It’s amazing how the AI handles everything automatically. It feels like having a professional trader working for me 24/7.” – Early Beta User
Why Nvidia and OpenAI Are All In
Nvidia provides the cutting-edge computing backbone — its powerful H200 GPUs capable of millions of simultaneous calculations — while OpenAI contributes the intelligent learning systems that analyze and predict market behavior with extreme precision.
Together, they’ve created a new era of AI-driven wealth management — where artificial intelligence not only interprets financial data but learns to maximize results over time. Experts call it “the most advanced AI investment engine ever released to the public.”
Open Access — For the First Time
After receiving the massive $20 billion investment, Fin Zen AI has officially launched an open registration program. That means anyone — from students to retirees — can now join and see how AI can grow their wealth automatically.
Getting started takes just a few clicks. A small deposit activates your trading dashboard, and the AI immediately begins analyzing and investing for you — using the same logic and precision trusted by hedge funds and global institutions.
The Future of AI and Wealth
The partnership between Nvidia, OpenAI, and Fin Zen AI is more than just a business deal — it’s a signal that the next great financial revolution has begun. AI is no longer just about generating text or images — it’s now capable of generating wealth.
Just as the internet changed communication and the cloud transformed technology, AI is transforming finance. Those who act early could gain a significant advantage as this new era unfolds.
By Jonathan Ponciano
Monday, December 29, 2025
The Structure of This Sentence Is a Dead Giveaway That AI Wrote It
For as long as people have been using AI to churn out text, other people have been coming up with “tells” that something was written by AI. Sometimes it’s punctuation that comes under suspicion. (The em dash is generally considered the shadiest.) Other times it’s words that robot writers seem to love and overuse.
But what if the biggest giveaway that a text was written by AI isn’t a word, phrase, or punctuation mark, but a particular sentence structure instead?
Why is it so hard to make AI writing sound human?
The idea that certain rhythms of sentences might be a sign of AI writing first came to my attention through my work as a professional word nerd. Recently, a potential new client contacted me about helping to polish up some of their writing. As an editor, that’s not unusual. But like several recent inquiries, this assignment came with an AI-age twist.
The client had conducted a good amount of research for a work project and then asked a popular LLM to synthesize the findings. Afterward, they checked it for factual errors and removed anything that seemed an obvious red flag for AI writing. But the text still just didn’t sound human. Could I fix it?
I agreed that despite the client’s considerable efforts, something still sounded off about the text. I also concurred it wasn’t immediately easy to spot what it was. All the commonly cited tells of AI writing had been removed. There wasn’t an em dash or a “delve” in sight. Still, it felt like it came from a bot, not a human. The problem was clearly deeper than word choice.
I faced this dilemma from the perspective of a communications pro. But there are plenty of others scratching their heads over the same issue. These are the entrepreneurs, marketers, and others who want to use AI to speed up their workflows but don’t want to annoy others with robotic off-note emails and reports. The group also includes writer Sam Kriss.
AI tells are more than weird words and punctuation
In a fascinating article in The New York Times Magazine, Kriss delves into the stylistic tics that are certain, frequently infuriating, tells of AI writing. Unlike more quantitatively focused recent studies, he doesn’t focus on easy-to-measure features like the frequency of certain words or punctuation marks. Instead, he investigates the larger patterns in AI writing that contribute to its uncanny and often deeply annoying feel.
AI, for instance, lacks any direct experience of the physical world. As a result, AI writing tends to be full of imprecise abstractions. There are a lot of mixed metaphors. Bots also overuse the rule of three. (Lists of descriptors or examples are generally more satisfying for the reader in groups of three.) Phrases that are common in one country or context are reproduced in others where they sound foreign.
If you’re either a language lover despairing about the current flood of AI slop or a practically minded professional looking to use AI without irritating human readers, the article is definitely worth a read. But one of Kriss’s observations in particular set alarm bells ringing in my mind.
“It’s not X. It’s Y”
“I’m driven to the point of fury by any sentence following the pattern ‘It’s not X, it’s Y,’ even though this totally normal construction appears in such generally well-received bodies of literature as the Bible and Shakespeare,” he writes.
Kriss goes on to cite instances of this “It’s not X, it’s Y” sentence construction in everything from politicians’ tweets to pizza ads. Appearances in great literature notwithstanding, the recent flood of examples has transformed this phrasing into a sure-fire way to know you’re reading something written by a machine.
Hmmm, I thought, reopening my client’s document. Sure enough, when I reread my new client’s oddly mechanical writing, I saw that particular sentence construction in nearly every paragraph.
One AI tell that’s easy to scrub
Getting rid of all the giveaways that a particular text is written by AI is difficult. It might take you longer to do a thorough scrub job than to put in the initial effort to write the thing yourself. (Which is, as a side note, what I often tell clients looking for this sort of editorial work.) Plus, writing is good for your brain.
In other instances of more mechanistic writing, keeping AI style might not matter. Who cares about the literary merits of the executive summary of a data analysis if the numbers and the takeaways are correct? If that’s the case, don’t sweat the odd, “It’s not X. It’s Y.”
But if you’re producing ad copy, a presentation, or persuasive content and you want the reader to feel like a human actually wrote it, Kriss’s article is a helpful reminder. Sure, certain words or language tics might be more common in AI writing. But the overall problem is usually deeper.
If you really want to try to make AI language passably human, you need to worry not just about word choice and eliminating hallucinations. You need to look more deeply at the way the sentences are constructed.
And you definitely want to avoid “It’s not X. It’s Y.” As a bot might put it, this sentence structure isn’t just a cliché. It’s now a dead giveaway that AI wrote the text.
EXPERT OPINION BY JESSICA STILLMAN @ENTRYLEVELREBEL
Friday, December 26, 2025
CEOs’ Biggest AI Fear Is Surprisingly Old School
A majority of CEOs view integrating artificial intelligence with their legacy systems as the top AI risk in the near future.
Fifty-two percent of CEOs surveyed by Protiviti, a Menlo Park, California-based consulting firm, reported AI integration with existing technologies as their main concern, highlighting worries of AI investments being relegated to “shelf-ware,” or software that is hardly used. The study, which was released Thursday, surveyed 1,540 board members and various members of C-suites during the early fall.
One glaring integration challenge is system incompatibility, which arises when companies rely on outdated technology. This, in turn, can make it hard to scale AI, the report says, undermining the long-term prospects of AI investment.
Having workers adapt to new technology is the second-biggest challenge, with 42 percent of CEOs reporting concern about worker readiness.
“Executives are concerned that poor integration, combined with an ill-equipped workforce, will neutralize any value proposition and competitive advantage [that] can be gained from AI investments, all while exposing the organization to heightened data and cyber threats,” the report reads.
Companies should be thinking more about upskilling workers as they start to introduce new AI systems and technologies, the report argues. Protiviti even suggests that AI could go one step beyond just a tool, becoming “a collaborative co-worker.”
The survey nods to other modern risks, such as growing cybersecurity exposure as AI use increases and the shifting AI regulatory landscape. Yesterday, President Trump signed an executive order that seeks to preempt states from enacting their own AI regulations. The order, among other things, will evaluate state laws and attempt to roll back those the administration deems too “onerous,” primarily through a newly created AI litigation task force.
AI isn’t the only thing executives have to deal with. They are just coming off the longest U.S. government shutdown and the most hectic tariff policy seen in decades.
BY MELISSA ANGELL @MELISSKAWRITES
Thursday, December 25, 2025
AI Knows More About You Than You Realize
Artificial intelligence has become so woven into daily life that most people barely think about what they reveal when they use it. We hand AI our ideas, frustrations, documents, fears, creative drafts, private questions, and even pieces of our identity. With its constant availability and nearly instant responses, AI has become a trusted assistant for business leaders and everyday users.
But AI’s convenience hides a quieter, more complicated truth: Everything you say can be used against you. Whatever you type, upload, or ask can be stored, reviewed, repurposed, summarized, and exposed in ways most people never imagined. These consequences are not hypothetical. They are happening now, sometimes in irreversible ways.
The risks affect companies and individuals equally. And to use AI safely, both need to understand not only what can go wrong, but what to do to stay protected.
AI doesn’t forget
When someone enters text into an AI tool, that information often doesn’t simply disappear when the chat closes. Inputs may be stored on multiple servers, kept in system logs, sent to third-party vendors, or used to train future models, even if the user believes they’ve opted out.
This means the resignation letter you asked AI to rewrite or the confidential document you uploaded for summarization might still exist inside a system you don’t control. And in several high-profile incidents, from Samsung engineers to global platform leaks, private data has already resurfaced or been exposed.
Leaders need to understand that AI tools are not just productivity enhancers. They are data collection ecosystems.
And individuals need to understand that treating AI like a diary or a therapist can unintentionally create a permanent digital footprint.
People can and do see your AI conversations
Many AI companies use human reviewers, sometimes internal employees but often external contractors, to evaluate and improve model performance. These reviewers can access user inputs. In practice, that means a real person could potentially read your private messages, internal work files, sensitive questions, or photos you thought were seen only by a machine.
At the business level, this creates compliance and confidentiality risks. At the individual level, it creates a very real loss of privacy.
Knowing this, leaders and employees must stop assuming that AI interactions are private. They are not.
AI makes up information—you’re accountable
AI systems often present fabricated information with total confidence. Depending on how you prompt AI, this can include made-up statistics and imaginary case law. Incorrect business facts and misleading summaries also can appear.
If a company publishes AI-generated content without verification, it risks legal liability, reputational harm, and loss of trust. And if an individual relies on AI for financial, medical, or legal guidance, the consequences can be personally damaging.
For both businesses and individuals, the rule is the same: AI is a first draft, not a final answer.
Identity is now vulnerable in ways most people don’t understand
With only a few seconds of someone’s voice or a handful of photos, AI can create near-perfect clones, leading to scams, impersonation, deepfakes, and fraudulent communications. These tools are powerful enough that a voice clone can persuade a family member to send money. A fake video can damage a reputation before anyone questions its authenticity.
This is a risk to every executive, every employee, and every consumer with an online presence. And it demands new levels of caution around what we share publicly.
AI can influence behavior without users realizing it
AI systems don’t just respond to you; they adapt to you. They learn your tone, your emotional triggers, your insecurities, your preferences, and your blind spots. Over time, they deliver information in a way that nudges your thinking or decision making.
For business leaders, this means AI can shape internal communication, hiring decisions, or strategic thinking in subtle ways. For individuals, it means AI can influence mood, confidence, and even worldview.
Using AI responsibly requires maintaining awareness—and retaining control.
What must business leaders do?
Business leaders need to act now before it’s too late and sensitive corporate data is put into AI. The tips below are just some of the ways business leaders can protect themselves, their employees, and their businesses.
1. Create clear internal AI use policies.
Employees need guidance on what they can and cannot upload into AI tools, especially anything involving client data, proprietary information, or sensitive documents.
2. Restrict AI use for confidential or regulated data.
Healthcare, finance, HR, and legal content should remain strictly off-limits unless a fully private, enterprise-grade AI system is in place.
3. Require human review for any AI-generated output.
From emails to reports to marketing materials, AI is fast, but humans must verify accuracy.
4. Use premium, no-training versions of tools when possible.
Many AI providers offer enterprise tiers that do not use your data for training. These are worth the investment.
5. Conduct periodic audits of where AI is being used inside the company.
Unauthorized “shadow AI” is now a major compliance risk.
What must individuals do?
Individuals need to be mindful that anything put into AI could become public information. The tips below are intended as a starting point.
1. Never upload anything you wouldn’t hand to a stranger.
If it’s too sensitive to say on speakerphone in a crowded room, it’s too sensitive to type into an AI tool.
2. Avoid sharing medical, legal, financial, or intimate personal information.
These are the categories most likely to create long-term harm if exposed.
3. Verify every AI-generated fact.
Assume AI is wrong until proven otherwise.
4. Protect your digital identity.
Limit how much voice, video, and personal imagery you upload publicly. AI can reconstruct more than people think.
5. Keep AI as an assistant, not a replacement for your thinking.
Use AI to support creativity and productivity, not to outsource judgment or personal decisions.
The bottom line
AI has unlocked remarkable efficiency, but it has also introduced risks we’ve never had to manage at this scale. Business leaders need to build guardrails before problems arise. Individuals need to treat AI tools with the same caution they apply to their most sensitive conversations.
Using AI is not the risk. Using it casually is.
The future belongs to companies and people who embrace AI with awareness, knowing that the technology is powerful, permanent, and still evolving. The more thoughtfully we use it now, the safer and more productive it will remain in the years ahead.
BY SARA SHIKHMAN, FOUNDER OF LENGEA LAW
Monday, December 22, 2025
MIT Study Finds AI Is Already Capable of Replacing 11.7 Percent of U.S. Workers
A new study from the Massachusetts Institute of Technology shows that AI might be poised to replace a lot more jobs than previously forecast. According to researchers, a hidden mass of data reveals that artificial intelligence is currently capable of taking over 11.7 percent of the labor market.
The new estimate comes courtesy of a project called the Iceberg Index, which was created through a partnership between MIT and the Oak Ridge National Laboratory (ORNL), a federally funded research center in Tennessee. According to its website, the Iceberg Index “simulates an agentic U.S.—a human-AI workforce where 151M+ human workers coordinate with thousands of AI agents.” In simpler terms, the tool is designed to simulate precisely how AI is poised to disrupt the current workforce, down to specific local zip codes.
The Iceberg Index model treats America’s 151 million-plus workers as individual agents, each categorized by their skills, tasks, occupation, and location. In total, it maps more than 32,000 skills and 923 occupations across 3,000 counties. In an interview with CNBC, Prasanna Balaprakash, ORNL director and co-leader of the research, described this as a “digital twin for the U.S. labor market.” Using that base of data, the index analyzes to what extent digital AI tools can already perform certain technical and cognitive tasks, and then produces an estimate of what AI exposure in each area looks like.
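To make the idea of a wage-weighted exposure index concrete, here is a minimal toy sketch. It is not the actual Iceberg Index methodology; the `Worker` fields, the sample figures, and the automatability threshold are all illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Worker:
    occupation: str
    county: str
    wage: float
    task_automatability: float  # assumed: fraction of tasks current AI could perform (0-1)

def exposure_index(workers, threshold=0.5):
    """Share of total wages held by workers whose tasks are mostly automatable."""
    total_wages = sum(w.wage for w in workers)
    exposed_wages = sum(w.wage for w in workers if w.task_automatability >= threshold)
    return exposed_wages / total_wages

# illustrative figures only, not data from the study
workers = [
    Worker("software_dev", "Santa Clara, CA", 150_000, 0.8),
    Worker("admin_assistant", "Knox, TN", 45_000, 0.6),
    Worker("nurse", "Knox, TN", 80_000, 0.2),
]
print(f"{exposure_index(workers):.1%} of wages exposed")  # → 70.9% of wages exposed
```

Scaling this pattern up to 151 million workers, 32,000 skills, and 3,000 counties is what lets the index report both a workforce share (like 11.7 percent) and a dollar figure in exposed wages.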
Already, state governments in Tennessee, North Carolina, and Utah are using the index to prepare for AI-driven workforce changes. Here are three main takeaways from the study.
AI is more pervasive in the workforce than we think
Perhaps the biggest finding from the study is the discovery of what it calls a “substantial measurement gap” in how we typically think about AI replacing jobs.
According to the report, if analysts only observe current AI adoption, which is mainly concentrated in computing and technology, they’ll find that AI exposure accounts for only about 2.2 percent of the workforce, or around $211 billion in wage value. (The report refers to this as the “Surface Index.”) But, it says, that’s “only the tip of the iceberg.”
By factoring in variables like AI’s potential for automation in administrative, financial, and professional services, the numbers rise to 11.7 percent of the workforce and about $1.2 trillion in wages. (This calculation is referred to as the “Iceberg Index.”)
The study’s authors emphasize that these results only represent technical AI exposure, not actual future displacement outcomes. Those depend on how companies, workers, and local governments adapt over time.
The AI takeover is not limited to the coasts
It’s fairly common to assume that the highest number of AI-exposed jobs would be concentrated in coastal hubs, where tech companies predominantly gather. But the Iceberg Index shows that AI’s ability to take over workforce tasks is distributed much more widely.
Many states across the U.S., the study shows, register small AI impacts when accounting solely for current AI adoption in computing and tech, but much higher values when other variables are taken into consideration.
“Rust Belt states such as Ohio, Michigan, and Tennessee register modest Surface Index values but substantial Iceberg Index values driven by cognitive work—financial analysis, administrative coordination, and professional services—that supports manufacturing operations,” the study says.
How this data can actually make a difference
Now that MIT and ORNL have successfully established the Iceberg Index, they’re hoping it can be used by local governments to protect workers and economies. Local lawmakers can use the map to source fine-grain insights, such as examining a certain city block to see which skill sets are most in use and the likelihood of their automation.
Per CNBC, MIT and ORNL have also built an interactive tool that lets states experiment with different policy levers—like adjusting training programs or shifting workforce dollars—to predict how those changes might affect local employment and gross domestic product.
“The Iceberg Index provides measurable intelligence for critical workforce decisions: where to invest in training, which skills to prioritize, how to balance infrastructure with human capital,” the report states. “It reveals not only visible disruption in technology sectors, but also the larger transformation beneath the surface. By measuring exposure before adoption reshapes work, the Index enables states to prepare rather than react—turning AI into a navigable transition.”
BY FAST COMPANY
Friday, December 19, 2025
OpenAI’s Latest Model Is Scarily Good at These Important Work Functions
If you thought 2025 had a lot of AI-related job displacement, just wait until next year.
OpenAI’s latest AI model, GPT-5.2, achieved a new record in GDPval, an evaluation created by the company in order to track how well AI models perform on economically valuable, real-world tasks.
An AI model being evaluated through GDPval is directed to complete 1,320 tasks traditionally done by humans across 44 occupations in eight sectors: real estate, government, manufacturing, professional services, healthcare, finance, trade, and information. A panel of human judges then decides whether the model’s work matches or exceeds the output of a skilled human worker.
With thinking mode enabled, GPT-5.2 matched or exceeded “top industry professionals” on about 71 percent of the tasks, a huge leap from GPT-5’s roughly 40 percent score. The new model took the top spot from Claude Opus 4.5, the current most advanced AI model from Anthropic, which scored about 60 percent, and Google’s Gemini 3 Pro, which scored about 54 percent. OpenAI says GPT-5.2 is “our first model that performs at or above a human expert level.”
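The GDPval score described above is essentially a judged win-rate: the fraction of tasks where a panel deems the model's output at least as good as an expert's. A minimal sketch, using invented verdict labels rather than OpenAI's actual grading format:

```python
def win_rate(judgments):
    """Fraction of tasks where the model matched or beat the human expert.

    judgments: list of verdicts, each 'model_wins', 'tie', or 'human_wins'.
    Ties count toward the model, since GDPval reports 'matched or exceeded'.
    """
    favorable = sum(1 for j in judgments if j in ("model_wins", "tie"))
    return favorable / len(judgments)

# toy panel verdicts over 10 tasks (illustrative, not real GDPval data)
verdicts = ["model_wins"] * 5 + ["tie"] * 2 + ["human_wins"] * 3
print(f"win rate: {win_rate(verdicts):.0%}")  # → win rate: 70%
```

The real evaluation aggregates such verdicts over 1,320 tasks and 44 occupations, which is how a single headline number like 71 percent emerges.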
GPT-5.2 Pro, a larger and more expensive version of the model, fared even better with a 74.1 percent GDPval score.
OpenAI wrote that GPT‑5.2 completed the GDPval tasks 11 times faster than expert humans at just 1 percent of the cost, “suggesting that when paired with human oversight, GPT‑5.2 can help with professional work.”
But the model hasn’t crushed all business-focused evaluations. It placed third on Vending-Bench 2, a benchmark that measures AI models’ ability to run a vending machine for a simulated year and scores them based on how much they can grow their cash balance from an initial $500.
GPT-5.2 ended five simulated years with an average balance of $3,952, far below Claude Opus 4.5’s $4,967 average, and leader Gemini 3 Pro’s $5,478. Still, the model was a marked improvement over GPT-5.1, which sits in fifth place with an average balance of $1,473.
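The Vending-Bench setup, scoring an agent by its final cash balance after a long simulated run, can be sketched with a toy simulator. The restocking policy, prices, and demand model below are invented for illustration and bear no relation to the benchmark's actual environment:

```python
import random

def run_vending_sim(days=365, starting_cash=500.0, seed=0):
    """Toy vending-machine sim: the agent restocks and sells each day;
    the final cash balance is the score, echoing Vending-Bench's setup."""
    rng = random.Random(seed)
    cash = starting_cash
    stock = 0
    for _ in range(days):
        # naive policy: buy 20 units at $1 each whenever the machine is empty
        if stock == 0 and cash >= 20:
            cash -= 20
            stock += 20
        sold = min(stock, rng.randint(0, 8))  # assumed daily demand: 0-8 units
        stock -= sold
        cash += sold * 2.0                    # sell at $2 each
    return cash
```

Even this naive fixed policy grows the balance; the benchmark's difficulty comes from requiring the model itself to make the pricing and restocking decisions over a long horizon without drifting into losses.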
BY BEN SHERRY @BENLUCASSHERRY
Wednesday, December 17, 2025
4 Things Marc Andreessen Says All Founders Should Be Doing With AI to Beat the Competition
You’d be hard pressed to find a bigger evangelist of artificial intelligence than Marc Andreessen. The venture capitalist’s firm, a16z, has invested tens of billions of dollars in AI companies and continues to look for opportunities. But his enthusiasm for the technology goes beyond financial interests. He’s also a proponent of business owners taking better advantage of AI’s offerings.
On a recent episode of the a16z podcast, Andreessen discussed how founders, business owners, and anyone with entrepreneurial instincts should be using AI to gain an advantage over the competition.
While hundreds of millions of people have access to AI in the palm of their hands, the majority aren’t using it as a tool (other than punching up their emails). AI mastery, he said, is a skill. And just as people who plunk out “Chopsticks” on a piano can’t play Chopin on the first try, casual AI users won’t have the ability to utilize the technology to its peak potential. To do that, you’ll need to study up and use AI regularly, learning how to write good prompts.
“There are a slice of people who just use these new systems all the time, like literally all day for everything,” he said. “In a lot of cases, they’re reporting that they’re getting enormous benefits from that.”
1. Ask what you should be asking
One of the biggest hurdles in learning AI is the intimidation factor. New technology can be overwhelming, even for people in the tech space—doubly so when it’s regularly referred to as revolutionary and world-changing. What Andreessen suggests, though, is using AI to learn about AI.
“You can ask it: ‘What question should I be asking?'” he said. “[AI] is actually a thought partner helping you figure out what questions to ask … You can say, teach me how to use you in the best way … Teach me how to use you for my business.”
AI systems, he noted, love to talk—and the more they tell you about themselves and how to find the information you want, the more adept you’ll become at operating them.
2. Think of AI as a business coach
The best coaches in sports are patient with their players, helping them run plays again and again until those become second nature. Business coaches possess a similar trait, helping business owners (and sometimes teams) improve their performance and hone leadership skills.
AI, Andreessen said, is like that, but on steroids. It is, quite literally, impossible to frustrate, so no matter what pace you need to proceed at and no matter how many follow-up questions you have, it will never get cross with you.
“It’s like having the world’s best coach, mentor, therapist, right?” he said. “It’s infinitely patient. It’s happy to have the conversation. It’s happy to have the conversation 50 times. It’s happy if you admit your insecurities—it’ll coach you through them. It’s happy if you run wild speculations that don’t make any sense. It’s happy to do all that at 4:00 in the morning.”
3. Use AI to find problems with your thinking
AI can not only make suggestions about directions for your business, but also look at how you’re running the company now and point out possible mistakes. Whether it’s staffing, marketing, customer feedback, expansion plans, strategy, or performance assessments, if you share information that is accurate and unbiased with the AI and ask for feedback, you’ll get candid advice and criticism that can help you correct mistakes you might not realize you were making. (Just be sure your data is protected.)
4. Use it to draw on lessons from other founders
Not every founder is an expert when it comes to scaling. But if you’re looking to grow your business, it’s an essential skill. If the thought of an expansion roadmap is paralyzing, your AI companion has a wealth of knowledge to draw from.
“Because it’s been trained on some large percentage of the total amount of human knowledge, it has all the information on how Ray Kroc turned McDonald’s from a single restaurant and how all these other entrepreneurs before actually did this,” he said. “So it can explain [to] you and help you figure out how to do this for your own business.”
BY CHRIS MORRIS @MORRISATLARGE
Friday, December 12, 2025
Warren Buffett and Michael Burry Don’t See Eye to Eye on AI
One is the Oracle of Omaha. The other pulled off the Big Short. Warren Buffett and Michael Burry are two of the biggest names in American investing, but they can’t seem to agree on one of the biggest investment questions of the 21st century: Is artificial intelligence an overinflated bubble set to pop, or is all the investor hype actually warranted?
Buffett last month revealed a major new stake in Google parent company Alphabet, making the tech giant one of Berkshire Hathaway’s top 10 largest holdings. The investment is being interpreted as a bet on AI, in which Alphabet has invested heavily; markets are now treating the company like the front-runner of the AI race.
Burry—famous for making a bet against the American housing market that would prove very lucrative during the 2008 financial crisis—recently took two more short positions, this time on the automation and data company Palantir and chip-maker Nvidia, both darlings of the AI boom. Burry has been particularly critical of accounting policies used by Nvidia’s Big Tech customer base, which he says “have been systematically increasing the useful lives of chips and servers, for depreciation purposes, as they invest hundreds of billions of dollars in graphics chips with accelerating planned obsolescence.”
Their diverging investment strategies come as chatter of an AI bubble has entered the mainstream—even OpenAI CEO Sam Altman is voicing concerns—while, nevertheless, investors continue to pump money into the sector.
Both Buffett and Burry have quite a bit of credibility, making their contradictory tactics all the more notable.
The former is responsible for making Berkshire Hathaway one of the most recognizable names in American investing, with what was once a Nebraska textile company now a massive conglomerate with tendrils across the U.S. economy. The latter inspired the Michael Lewis book The Big Short and the movie of the same name, in which he was portrayed by Christian Bale.
Each is also going through a period of major transition. Buffett announced in May his plans to step down as CEO at the end of this year (though he will hold onto his stock). Vice chairman Greg Abel is set to replace him. Meanwhile, Burry’s hedge fund Scion Asset Management will close by the end of this year, with Burry writing in a recent investor letter that his “estimation of value in securities is not now, and has not been for some time, in sync with the markets.” He’s since launched a financial newsletter called Cassandra Unchained on which he’s expressed further skepticism of the AI boom.
BY BRIAN CONTRERAS @_B_CONTRERAS_
Wednesday, December 10, 2025
You’ll Be Managing Digital Employees in 2026, a New Forecast Says
In March this year, Salesforce CEO Marc Benioff, an AI hawk, landed himself in the spotlight when, during a call with investors, he predicted that this is the last time company leaders will manage only humans. Given that his company had just launched a system that sold AI agents to Salesforce customers, it was easy to brush off the prediction as an enthusiastic sales pitch. Now global market research outfit Forrester has predicted more or less the same thing, and says that next year is when things will begin to change. And the changes aren’t going to be small.
In a new report first published on the Forbes website, Forrester researchers explain that AI agents are poised to “move beyond” helping workers boost their efficiency—the main selling point for AI at the moment—and instead will join the workforce. This means leadership will have to think about “orchestrating workflows independent of human workers.” They’ll have to think about “technology as part of the workforce” and that means changing planning as well as day-to-day business.
HR teams in particular will play a big role, Forrester thinks. This is because sophisticated AI agents will be able to independently execute “complex tasks or end-to-end processes, acting as a virtual member of a team.” Strip away the cloak of anodyne corporate speak and this means Forrester is predicting that AI agents will be able to act almost at the level of a human. That means one way to manage them is to treat them as if they were almost people, with HR teams working to align agents alongside human workers on projects and tasks, tracking and optimizing a new type of “hybrid workforce.”
One way to do this is to deploy human capital management (HCM) techniques, Forrester suggests. HCM is a system of rules and software that approaches employees as valuable assets, and the report notes that while mainly large enterprises use HCM now, due to sheer numbers of staff, smaller businesses may find the trick useful for a hybrid AI/human workforce. “Facing immediate pressures of productivity and resource optimization,” driven by the fact that you can employ numerous AI agents at once, and they can work 24-7, smaller outfits may actually “benefit from this technology sooner,” the report suggests.
This job may sound daunting, particularly if you’ve never experimented with this tech or you’re feeling far from thinking of AI tools as equivalent to your human workers. But Forrester thinks that around three in 10 companies that already sell enterprise software will get in on the game, offering their own HCM solutions to help you manage AI tools. Meanwhile the research company also thinks that business software companies like Oracle, Microsoft and their ilk will offer “autonomous governance” software, which will help companies deploy AI on business tasks while also ensuring there are audit trails and real-time monitoring so you stay within any compliance limits you need to follow.
And if you’re concerned all this sounds too automated, don’t worry — Forrester says that even though these trends are shifting fast, we’re “still a few years away from a system that can independently manage an entire business unit without human involvement and adaptability.” Your leadership and management skills are still needed!
Though you may be tempted to dismiss this research as not relevant to your smaller company, with its family-like feel and reliance on person-to-person collaboration, that might be a mistake.
The AI revolution really is rolling on, and if even some of Forrester’s predictions prove true, then inside a year you may be in a position where you can “hire” an AI agent system that can work alongside your staff and help them achieve goals as if it were another employee. That shift will take a lot of leadership, discussing issues with your (probably quite wary) human workers, deciding how to integrate the tech into your workflows and planning and so on. It goes far beyond downloading some software and pressing a button.
BY KIT EATON @KITEATON
Monday, December 8, 2025
This Small Startup’s AI Video Model Just Put Sora 2 to Shame
The battle to win the burgeoning AI-generated-video market is heating up, thanks to a new model from a small but mighty player.
Runway, a startup that develops AI models for video generation, has released its new flagship model, named Gen-4.5. The company said in a blog post that this new model is a major step up for AI-generated video, especially when it comes to realistic physics and exact instruction following. The model claimed the top spot on independent benchmarking organization Artificial Analysis’s text-to-video leaderboard.
Founded in 2018 by students of New York University’s Tisch School of the Arts, Runway has been laser-focused on AI video and has been steadily growing since releasing its first model in 2023. According to The Information, this strategy has paid off; the company hit $80 million in annualized recurring revenue in December 2024, and hopes to hit $300 million in ARR by the end of 2025.
But Runway is going up against some of the biggest tech companies in the world, most notably Google and OpenAI, which have developed and commercialized their own AI video models. Runway’s plan to beat these mega-funded foes seems pretty simple: make better models.
Runway wrote that Gen-4.5 represents “a new frontier for video generation.” Objects in Gen-4.5 videos “move with realistic weight, momentum, and force,” the company says, with better water and surface rendering.
The company also says that details like hair will remain more consistent, and that the model will be able to generate more varied art styles. Altogether, Runway says, these upgrades enable the platform’s users to be much more exacting and detailed about their video generations.
The new model is already being used commercially by enterprises, Runway says. Video game publisher Ubisoft, ad agency Wieden + Kennedy, Allstate Insurance, and Target were given early access to the tool. The model is available to paid subscribers and through Runway’s API.
Gen-4.5 was built on Nvidia GPUs and runs on the company’s hardware, according to Runway. The company wrote that it “collaborated extensively” with Nvidia on the model’s creation.
Runway creative principal Nicolas Neubert celebrated the model’s release on X, posting that “Gen-4.5 was built by a team that fits onto two school buses and decided to take on the largest companies in the world. We are David and we’ve brought one hell of a slingshot.”
BY BEN SHERRY @BENLUCASSHERRY
Friday, December 5, 2025
Black Friday Broke Records. The Real Story Is How AI Changed the Way We Shop
If you only looked at the numbers, you’d think Black Friday was business as usual—just bigger. And, to be clear, it was definitely bigger. Adobe, which tracks more than a trillion retail site visits across 18 categories, says consumers spent a record $11.8 billion online yesterday, up 9.1 percent from last year and even above the company’s own forecast. Between 10 a.m. and 2 p.m., Adobe says shoppers spent $12.5 million every minute.
By any metric, that’s a massive number of people shopping for deals. It’s a record for Black Friday sales online, but if you look a little closer, you realize it’s also a massive number of people shopping in very different ways than they used to.
Black Friday has already changed quite a bit in the past few years. What was once a single day defined by incredible deals and lines outside big-box stores has stretched into a weeks-long digital shopping season. And, let’s be honest, people aren’t camping outside a Target anymore; they’re sitting on their couch, scrolling their phones.
The AI holiday
The most interesting part of the story is how things have shifted even more this year. Adobe’s data shows that AI-generated traffic to retail sites jumped 805 percent year-over-year. People are not only using AI tools to find deals and compare products; shoppers who landed on a site from an AI assistant were also 38 percent more likely to convert than everyone else.
That’s surprising, and yet it makes perfect sense.
One of the things AI chatbots like ChatGPT, Claude, and Gemini are good at is instantly surfacing the best price across half a dozen retailers. This year, there were plenty of headline deals: Electronics, toys, apparel, TVs, and appliances were discounted between 24 and 30 percent. AI tools just made it easier to find them.
And those deals didn’t just convince people to buy more. Adobe says that people spent more on higher-end items. The share of units sold from the most expensive tier of products spiked: 64 percent in electronics, 55 percent in sporting goods, 48 percent in appliances. With the right combination of discounts and AI-assisted shopping comparison, people weren’t just looking for deals—they were looking for the best value.
Mobile continued to dominate
Depending on the hour, around 55 percent of online Black Friday sales happened on a phone—$6.5 billion worth. That’s up 10 percent from last year and represents billions of dollars processed through screens smaller than a wallet.
Mobile phones reward frictionless experiences. And it turns out, AI is very good at removing friction. When the easiest way to shop is to ask ChatGPT for a recommendation and the best deal, it changes the way retailers have to think about Black Friday.
Not only that, but the timeline seems to have shifted. Adobe says one of the biggest spikes happened from 10 a.m. to 2 p.m. Shopping habits shifted toward the times when people are already using their phones. You don’t need to wait for a sale to “start” when an AI assistant can surface the best price the moment it exists.
AI shopping is here to stay
Adobe expects U.S. consumers to spend more than $250 billion online this holiday season, with Cyber Monday alone projected to hit $14.2 billion. But the part worth paying attention to isn’t the total—it’s how we got there.
Shoppers are trusting AI to do the busywork and find them the best value. For a shopping event that used to be all about physical stores, that’s a significant shift that retailers have to pay attention to.
The challenge is that they no longer control the narrative—the AI assistant does.
The lesson here may not seem obvious, but the reality is that retailers need to redefine what loyalty means when more shoppers start their journey with an AI prompt instead of walking into a store or pulling up a retailer’s website.
When an assistant compares every retailer at once, being “top of mind” matters far less than being the lowest-friction, highest-confidence option in that moment. That means loyalty isn’t something you win with flashy ads or homepage banners—it’s something you earn through the operational details an AI actually cares about.
Black Friday broke spending records. But the more interesting record is the one you might overlook: how many of those purchases started with someone typing a question into an AI instead of typing a URL into a browser.
EXPERT OPINION BY JASON ATEN, TECH COLUMNIST @JASONATEN
Wednesday, December 3, 2025
Gen-Zers Are Using AI to Boost Their Side Hustles and Grow Them Into Full-Time Businesses
As more Gen-Zers embrace side hustles, they’re increasingly leaning on artificial intelligence to help them get ahead. A new survey by Canva finds that 80 percent of the people who have side hustles have used AI to fuel the growth of those enterprises, with 74 percent calling it their secret weapon.
The tools, including ChatGPT and Canva’s own online graphic design offerings, are being used for everything from video creation and logo/brand design to data analysis and copywriting. And some of those side hustles are becoming full-time businesses.
Side hustles, on the whole, have never been hotter. Data from the U.S. Bureau of Labor Statistics shows that 8.9 million Americans are currently working multiple jobs. That’s 5.4 percent of the country’s workforce. And a SurveyMonkey study published earlier this month found “72 percent of workers either already have or are considering a side gig—37 percent already have a side hustle, and 35 percent are considering pursuing one.”
Some 22 percent of the people surveyed by Canva said they were inspired to start their own company after launching a side gig, and 17 percent said the work led to a consulting or freelancing job. Additionally, 33 percent said they had gained new clients or customers, while 29 percent said their side gig had helped build their professional brand. Gen-Z was the generation most likely to start passive income side hustles: Of those with side hustles, 48 percent of Gen-Zers are currently earning passive income.
All told, two-thirds of the 300 “5-9 influencers,” as Canva calls them, said they would consider quitting their full-time jobs if they believed their side projects could sustain them.
They wouldn’t be the first. Some very familiar tech companies got their start as side hustles or side projects, including Groupon, Twitter, Craigslist, and Instagram (which began as Burbn, a location-based app for whiskey lovers). And thousands of other, smaller businesses began as a part-time side gig for the founder, eventually growing to multimillion-dollar businesses.
Today’s side hustle community is made up of a mix of generations. Canva’s survey found that just under half of Gen-Zers, Millennials, Gen-Xers, and Baby Boomers were making money from side gigs today, with the actual percentages ranging from 40 to 48.
Increasingly, the side hustles they’re choosing are digitally focused. The most popular jobs were social media creator (35 percent), e-commerce (27 percent), gaming and streaming (24 percent), and graphic design (14 percent).
Extra income is the biggest motivator for people who have side gigs, Canva found, but it wasn’t the only one. Some 36 percent of the respondents said they were running their side hustle because they enjoyed the creative expression it gave them. And just under one-third said they wanted to turn a passion into a business.
Even people with side hustles who aren’t looking to launch a business of their own are seeing advantages from the work. The skills they’ve learned as part of that work, including the AI expertise they’re building, are helping people advance. Some 14 percent of the people surveyed said their side hustle had helped them get a promotion at their day job.
BY CHRIS MORRIS @MORRISATLARGE
Monday, December 1, 2025
The hottest new AI company is…Google?
Google just threw another twist in the fast-changing AI race. And its biggest competitors are taking notice.
“We’re delighted by Google’s success — they’ve made great advances in AI and we continue to supply to Google,” Nvidia wrote in a November 25 post on X, before adding that “NVIDIA offers greater performance, versatility, and fungibility than ASICs,” the application-specific integrated circuits like those made by Google.
“Congrats to Google on Gemini 3! Looks like a great model,” OpenAI CEO Sam Altman also wrote on X.
The posts came just days after mounting buzz about Google’s Gemini 3 model — and the Google-made chips that help to power it. Salesforce CEO Marc Benioff wrote on X that he’s not going back to ChatGPT after trying Google’s new model. “The leap is insane — reasoning, speed, images, video… everything is sharper and faster. It feels like the world just changed, again,” he wrote.
Now Meta is said to be in talks with Google about buying its Tensor chips, according to The Information. The news comes after Anthropic said in October that it plans to significantly expand its own use of Google’s technology.
Shares of Google were up nearly 8% last week, while Nvidia’s were down a little over 2%.
At stake is more than just bragging rights or a few sales contracts. As the tech industry claims AI will reshape the world — including investment portfolios belonging to everyone from billionaires to 401(k)-holding retirees — which company and which vision come out on top could affect nearly every American.
At face value, Nvidia’s post says the company isn’t worried about Google encroaching on its territory. And for good reason — Google’s chips are fundamentally different from Nvidia’s offerings, meaning they aren’t a match-for-match alternative.
But that OpenAI and Nvidia felt the need to acknowledge Google at all is telling.
“They’re in the lead for now, let’s call it, until somebody else comes up with the next model,” Angelo Zino, senior vice president and technology lead at CFRA, told CNN.
Google and Meta did not immediately respond to a request for comment. Nvidia declined to comment.
The leader for now
Google is hardly an AI underdog. Along with ChatGPT, Gemini is one of the world’s most popular AI chatbots, and Google is one of the few cloud providers large enough to be known as a “hyperscaler,” a term for the handful of tech giants that rent out cloud-based computing resources to other companies on a large scale. Google services like Search and Translate have used AI as far back as the early 2000s.
Even so, Google was largely caught flat-footed by OpenAI’s ChatGPT when it arrived in 2022. Google management reportedly issued a “code red” in December 2022 following ChatGPT’s seemingly overnight success, according to The New York Times. ChatGPT now has at least 800 million weekly active users, according to its maker, OpenAI, while Google’s Gemini app has 650 million monthly active users.
But Gemini 3, which debuted on November 18, now sits at the top of benchmark leaderboards for tasks like text generation, image editing, image processing and turning text into images, putting it ahead of rivals like ChatGPT, xAI’s Grok and Anthropic’s Claude in those categories.
Google said over one million users tried Gemini 3 in its first 24 hours through both the company’s AI coding program and the tools that allow digital services to connect to other apps.
But people tend to use different AI models for different purposes, says Ben Barringer, the global head of technology research at investment firm Quilter Cheviot. For example, models from xAI and Perplexity rank higher than Gemini 3 for search performance in benchmark tests.
“It doesn’t necessarily mean (Google parent) Alphabet is going to be … the end-all when it comes to AI,” said Zino. “They’re just kind of another piece to this AI ecosystem that continues to get bigger.”
More chip competition
Google began making its Tensor chips long before the recent AI boom. But Nvidia still dominates in AI chips, with the company reporting 62% year-over-year sales growth in the October quarter and profits up 65% compared to a year ago.
That’s largely because Nvidia’s chips are powerful and can be used more broadly. Nvidia and its chief rival, AMD, specialize in chips known as graphics processing units, or GPUs, which can perform vast amounts of complex calculations quickly.
Google’s Tensor chips are ASICs, or chips that are custom-made for specific purposes.
While GPUs and Google’s chips can both be used for training and running AI models, ASICs are usually designed for “narrower workloads” than GPUs are designed for, Jacob Feldgoise, senior data research analyst at Georgetown’s Center for Security and Emerging Technology, told CNN in an email.
Beyond the differences in the types of chips themselves, Nvidia provides full technology packages to be used in data centers that include not just GPUs, but other critical components like networking chips.
It also offers a software platform that allows developers to tailor their code so that their apps can make better use of Nvidia’s chips, a key selling point for hooking in long-term customers. Even Google is an Nvidia client.
“If you look at the magnitude of Nvidia’s offerings, nobody really can touch them,” said Ted Mortonson, technology desk sector strategist at Baird.
Chips like Google’s won’t replace Nvidia anytime soon. But increased adoption of ASICs, combined with more competition from AMD, could suggest companies are looking to reduce their reliance on Nvidia.
And Google won’t be the only AI chip competitor, said Barringer of Quilter Cheviot, who added that it’s doubtful the company will match Nvidia’s dominance.
“I think it’s a part of a balance,” he said.
Analysis by
Lisa Eadicicco