Wednesday, December 31, 2025

Stanford AI Experts Say the Hype Ends in 2026, But ROI Will Get Real

Questions continue to percolate about the impact of AI on the future of business and society at large, from speculation about a bubble to predictions that it will cause massive upheaval in our everyday lives. But according to Stanford’s AI experts, 2026 may be the year that the hype cools and AI takes on heavier lifting. According to Deloitte’s 2026 Tech Trends report, the rate at which AI is evolving is driving the urgency to shift from “endless pilots to real business value.” While the telephone took 50 years to reach 50 million users, and the internet seven, the AI tool ChatGPT reached twice that number in two months and now has more than 800 million weekly users.

Tangible Results

AI next year may be characterized by rigor and ROI, according to Julian Nyarko, a law professor and Stanford HAI associate director, who spoke specifically about AI for legal services. “Firms and courts might stop asking ‘Can it write?’ and instead start asking ‘How well, on what, and at what risk?’” Nyarko said. “I expect more standardized, domain-specific evaluations to become table stakes by tying model performance to tangible legal outcomes such as accuracy, citation integrity, privilege exposure, and turnaround time.”

Another sector that could deliver new, tangible results is AI for medicine, said Curtis Langlotz, professor of radiology, medicine, and biomedical data science, a senior associate vice provost for research, and an HAI senior fellow. Soon we’ll experience the moment “when AI models are trained on massive high-quality healthcare data rivaling the scale of data used to train chatbots,” said Langlotz. “These new biomedical foundation models will boost the accuracy of medical AI systems and will enable new tools that diagnose rare and uncommon diseases for which training datasets are scarce.”

James Landay, HAI co-director and professor of computer science, predicts investments will continue to pour into AI data centers, and then the excitement will dry up.
“[A]t some point, you can’t tie up all the money in the world on this one thing,” Landay said. “It seems like a very speculative bubble.” Landay expects there will be no AGI in 2026. Instead, “AI sovereignty will gain huge steam this year as countries try to show their independence from the AI providers and from the United States’ political system,” he said. For example, a country may “build its own large LLM” or “run someone else’s LLM on their own GPUs” to keep its data within its borders.

One Expert’s Warning

Even as AI applications are expected to go deeper next year, it’s not all positive. Angèle Christin, associate professor of communication and Stanford HAI senior fellow, predicts an increase in transparency about what AI will actually be able to do. “[A]lready there are signs that AI may not accomplish everything we hope it will,” said Christin. “There are also hints that AI, in some cases, can misdirect, deskill, and harm people. And there is data showing that the current buildout of AI comes with tremendous environmental costs.”

BY AVA LEVINSON

Tuesday, December 30, 2025

Nvidia and OpenAI pour $20 billion into Fin Zen AI — opening the door for everyone to join the next era of artificial intelligence

The world of technology and finance is being shaken up — Nvidia and OpenAI have just revealed a groundbreaking $20 billion partnership with the revolutionary platform Fin Zen AI. This powerful collaboration is designed to bring the profit-making potential of artificial intelligence directly into the hands of everyday people. For years, only big investors and institutions had access to high-end AI trading tools. Now, thanks to this historic alliance, anyone can join the next wave of financial innovation — and let AI do the hard work of analyzing, predicting, and growing capital automatically.

How Fin Zen AI Transforms Investing

Built on Nvidia’s latest GPU architecture and powered by OpenAI’s advanced machine learning algorithms, Fin Zen AI is designed to think, learn, and react faster than any human trader could. The platform scans thousands of market data points every second, detecting profit opportunities before others even notice them. All you have to do is register, set your preferred strategy, and let the system do the rest. Fin Zen AI automatically executes trades, manages your portfolio, and adapts to market changes in real time. The process is 100% automated — no prior experience or financial background required.

What Early Users Are Seeing

Unlike “get-rich-quick” schemes, Fin Zen AI is built for long-term smart growth. But that hasn’t stopped early adopters from seeing remarkable short-term gains. Test users who started with as little as $250 have reported multiplying their investments within hours. Some achieved returns between 200–400% during high-volatility trading periods — all without manual input or risky decision-making. “I just wanted to test it with a small amount — within a few hours, my balance had tripled. It’s amazing how the AI handles everything automatically.
It feels like having a professional trader working for me 24/7.” – Early Beta User

Why Nvidia and OpenAI Are All In

Nvidia provides the cutting-edge computing backbone — its powerful H200 GPUs capable of millions of simultaneous calculations — while OpenAI contributes the intelligent learning systems that analyze and predict market behavior with extreme precision. Together, they’ve created a new era of AI-driven wealth management — where artificial intelligence not only interprets financial data but learns to maximize results over time. Experts call it “the most advanced AI investment engine ever released to the public.”

Open Access — For the First Time

After receiving the massive $20 billion investment, Fin Zen AI has officially launched an open registration program. That means anyone — from students to retirees — can now join and see how AI can grow their wealth automatically. Getting started takes just a few clicks. A small deposit activates your trading dashboard, and the AI immediately begins analyzing and investing for you — using the same logic and precision trusted by hedge funds and global institutions.

The Future of AI and Wealth

The partnership between Nvidia, OpenAI, and Fin Zen AI is more than just a business deal — it’s a signal that the next great financial revolution has begun. AI is no longer just about generating text or images — it’s now capable of generating wealth. Just as the internet changed communication and the cloud transformed technology, AI is transforming finance. Those who act early could gain a significant advantage as this new era unfolds.

By Jonathan Ponciano

Monday, December 29, 2025

The Structure of This Sentence Is a Dead Giveaway That AI Wrote It

For as long as people have been using AI to churn out text, other people have been coming up with “tells” that something was written by AI. Sometimes it’s punctuation that comes under suspicion. (The em dash is generally considered the shadiest.) Other times it’s words that robot writers seem to love and overuse. But what if the biggest giveaway that a text was written by AI isn’t a word, phrase, or punctuation mark, but a particular sentence structure instead?

Why is it so hard to make AI writing sound human?

The idea that certain sentence rhythms might be a sign of AI writing first came to my attention through my work as a professional word nerd. Recently, a potential new client contacted me about helping to polish up some of their writing. As an editor, that’s not unusual. But like several recent inquiries, this assignment came with an AI-age twist. The client had conducted a good amount of research for a work project and then asked a popular LLM to synthesize the findings. Afterward, they checked it for factual errors and removed anything that seemed an obvious red flag for AI writing. But the text still just didn’t sound human. Could I fix it?

I agreed that despite the client’s considerable efforts, something still sounded off about the text. I also concurred it wasn’t immediately easy to spot what it was. All the commonly cited tells of AI writing had been removed. There wasn’t an em dash or a “delves” in sight. Still, it felt like it came from a bot, not a human. The problem was clearly deeper than word choice.

I faced this dilemma from the perspective of a communications pro. But there are plenty of others scratching their heads over the same issue. These are the entrepreneurs, marketers, and others who want to use AI to speed up their workflows but don’t want to annoy others with robotic, off-note emails and reports. The group also includes writer Sam Kriss.
AI tells are more than weird words and punctuation

In a fascinating article in The New York Times Magazine, Kriss delves into the stylistic tics that are certain, frequently infuriating, tells of AI writing. Unlike more quantitatively focused recent studies, he doesn’t focus on easy-to-measure features like the frequency of certain words or punctuation marks. Instead, he investigates the larger patterns in AI writing that contribute to its uncanny and often deeply annoying feel.

AI, for instance, lacks any direct experience of the physical world. As a result, AI writing tends to be full of imprecise abstractions. There are a lot of mixed metaphors. Bots also overuse the rule of three. (Lists of descriptors or examples are generally more satisfying for the reader in groups of three.) Phrases that are common in one country or context are reproduced in others where they sound foreign. If you’re either a language lover despairing about the current flood of AI slop or a practically minded professional looking to use AI without irritating human readers, the article is definitely worth a read. But one of Kriss’s observations in particular set alarm bells ringing in my mind.

“It’s not X. It’s Y”

“I’m driven to the point of fury by any sentence following the pattern ‘It’s not X, it’s Y,’ even though this totally normal construction appears in such generally well-received bodies of literature as the Bible and Shakespeare,” he writes. Kriss goes on to cite instances of this “It’s not X, it’s Y” sentence construction in everything from politicians’ tweets to pizza ads. Appearances in great literature notwithstanding, the recent flood of examples has transformed this phrasing into a sure-fire way to know you’re reading something written by a machine.

Hmmm, I thought, reopening my client’s document. Sure enough, when I reread the oddly mechanical writing, I saw that particular sentence construction in nearly every paragraph.
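Once you know the pattern, it is mechanical enough to flag automatically. Here is a rough sketch in Python; the regular expression is my own simplification, not a published detector, and it will miss paraphrases while occasionally flagging legitimate human prose:

```python
import re

# Illustrative sketch: flag the "It's not X, it's Y" construction.
# The pattern is an assumption for demonstration purposes. It catches
# common surface forms with straight apostrophes ("it's not ... it's ...")
# and deliberately stays loose; a real detector would need to handle
# curly apostrophes, "this isn't ... it's ...", and similar variants.
NOT_X_ITS_Y = re.compile(
    r"\bit'?s\s+not\b[^.?!]{0,80}?[,.;]\s*it'?s\b",
    re.IGNORECASE,
)

def flag_not_x_its_y(text: str) -> list[str]:
    """Return each matched "It's not X, it's Y" span found in text."""
    return [m.group(0) for m in NOT_X_ITS_Y.finditer(text)]
```

Running something like this over a draft and eyeballing the hits is usually enough; actually rewriting the flagged sentences is still a human job.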
One AI tell that’s easy to scrub

Getting rid of all the giveaways that a particular text was written by AI is difficult. It might take you longer to do a thorough scrub job than to put in the initial effort and write the thing yourself. (Which is, as a side note, what I often tell clients looking for this sort of editorial work.) Plus, writing is good for your brain.

In other instances of more mechanistic writing, AI style might not matter. Who cares about the literary merits of the executive summary of a data analysis if the numbers and the takeaways are correct? If that’s the case, don’t sweat the odd “It’s not X. It’s Y.”

But if you’re producing ad copy, a presentation, or persuasive content and you want the reader to feel like a human actually wrote it, Kriss’s article is a helpful reminder. Sure, certain words or language tics might be more common in AI writing. But the overall problem is usually deeper. If you really want to make AI language passably human, you need to worry not just about word choice and eliminating hallucinations. You need to look more deeply at the way the sentences are constructed. And you definitely want to avoid “It’s not X. It’s Y.” As a bot might put it, this sentence structure isn’t just a cliché. It’s now a dead giveaway that AI wrote the text.

EXPERT OPINION BY JESSICA STILLMAN @ENTRYLEVELREBEL

Friday, December 26, 2025

CEOs’ Biggest AI Fear Is Surprisingly Old School

A majority of CEOs view integrating artificial intelligence with their legacy systems as the top AI risk in the near future. Fifty-two percent of CEOs surveyed by Protiviti, a Menlo Park, California-based consulting firm, reported AI integration with existing technologies as their main concern, highlighting worries of AI investments being relegated to “shelf-ware,” or software that is hardly used. The study, which was released Thursday, surveyed 1,540 board members and various members of C-suites during the early fall.

One glaring integration challenge is system compatibility, which suffers when companies use outdated technology. This, in turn, can make it hard to scale AI, the report says, limiting the long-term prospects of AI investment. Having workers adapt to new technology is the second top challenge, with 42 percent of CEOs reporting concern about worker readiness. “Executives are concerned that poor integration, combined with an ill-equipped workforce, will neutralize any value proposition and competitive advantage [that] can be gained from AI investments, all while exposing the organization to heightened data and cyber threats,” the report reads.

Companies should be thinking more about upskilling workers as they start to introduce new AI systems and technologies, the report argues. Protiviti even suggests that AI could go one step beyond being just a tool, becoming “a collaborative co-worker.” The survey nods to other modern risks, like emerging cybersecurity exposure as AI use increases and the shifting AI regulatory landscape. Yesterday, President Trump signed an executive order that will try to preempt states from enacting their own AI regulatory laws. The executive order, among other things, will evaluate state laws and attempt to roll back those the administration deems too “onerous,” primarily through a newly created AI litigation task force.

AI isn’t the only thing executives have to deal with.
They are just coming off the heels of the longest U.S. government shutdown and the most hectic tariff policy seen in decades.

BY MELISSA ANGELL @MELISSKAWRITES

Thursday, December 25, 2025

AI Knows More About You Than You Realize

Artificial intelligence has become so woven into daily life that most people barely think about what they reveal when they use it. We hand AI our ideas, frustrations, documents, fears, creative drafts, private questions, and even pieces of our identity. With its constant availability and nearly instant responses, AI has become a trusted assistant for business leaders and everyday users. But AI’s convenience hides a quieter, more complicated truth: Everything you say can and may be used against you. Whatever you type, upload, or ask can be stored, reviewed, repurposed, summarized, and exposed in ways most people never imagined. These consequences are not hypothetical. They are happening now, sometimes in irreversible ways. The risks affect companies and individuals equally. And to use AI safely, both need to understand not only what can go wrong, but what to do to stay protected.

AI doesn’t forget

When someone enters text into an AI tool, that information often doesn’t simply disappear when the chat closes. Inputs may be stored on multiple servers, kept in system logs, sent to third-party vendors, or used to train future models, even if the user believes they’ve opted out. This means the resignation letter you asked AI to rewrite or the confidential document you uploaded for summarization might still exist inside a system you don’t control. And in several high-profile incidents, from Samsung engineers to global platform leaks, private data has already resurfaced or been exposed. Leaders need to understand that AI tools are not just productivity enhancers. They are data collection ecosystems. And individuals need to understand that treating AI like a diary or a therapist can unintentionally create a permanent digital footprint.

People can and do see your AI conversations

Many AI companies use human reviewers, sometimes internal employees but often external contractors, to evaluate and improve model performance. These reviewers can access user inputs.
In practice, that means a real person could potentially read your private messages, internal work files, sensitive questions, or photos you thought were seen only by a machine. At the business level, this creates compliance and confidentiality risks. At the individual level, it creates a very real loss of privacy. Knowing this, leaders and employees must stop assuming that AI interactions are private. They are not.

AI makes up information—you’re accountable

AI systems often present fabricated information with total confidence. Depending on how you prompt AI, this can include made-up statistics and imaginary case law. Incorrect business facts and misleading summaries also can appear. If a company publishes AI-generated content without verification, it risks legal liability, reputational harm, and loss of trust. And if an individual relies on AI for financial, medical, or legal guidance, the consequences can be personally damaging. For both businesses and individuals, the rule is the same: AI is a first draft, not a final answer.

Identity is now vulnerable in ways most people don’t understand

With only a few seconds of someone’s voice or a handful of photos, AI can create near-perfect clones, leading to scams, impersonation, deepfakes, and fraudulent communications. These tools are powerful enough that a voice clone can persuade a family member to send money. A fake video can damage a reputation before anyone questions its authenticity. This is a risk to every executive, every employee, and every consumer with an online presence. And it demands new levels of caution around what we share publicly.

AI can influence behavior without users realizing it

AI systems don’t just respond to you; they adapt to you. They learn your tone, your emotional triggers, your insecurities, your preferences, and your blind spots. Over time, they deliver information in a way that nudges your thinking or decision making.
For business leaders, this means AI can shape internal communication, hiring decisions, or strategic thinking in subtle ways. For individuals, it means AI can influence mood, confidence, and even worldview. Using AI responsibly requires maintaining awareness—and retaining control.

What must business leaders do?

Business leaders need to act now, before sensitive corporate data is put into AI. The tips below are just some of the ways business leaders can protect themselves, their employees, and their businesses.

1. Create clear internal AI use policies. Employees need guidance on what they can and cannot upload into AI tools, especially anything involving client data, proprietary information, or sensitive documents.
2. Restrict AI use for confidential or regulated data. Healthcare, finance, HR, and legal content should remain strictly off-limits unless a fully private, enterprise-grade AI system is in place.
3. Require human review for any AI-generated output. From emails to reports to marketing materials, AI is fast, but humans must verify accuracy.
4. Use premium, no-training versions of tools when possible. Many AI providers offer enterprise tiers that do not use your data for training. These are worth the investment.
5. Conduct periodic audits of where AI is being used inside the company. Unauthorized “shadow AI” is now a major compliance risk.

What must individuals do?

Individuals need to be mindful that anything put into AI could become public information. The tips below are intended as a starting point.

1. Never upload anything you wouldn’t hand to a stranger. If it’s too sensitive to say on speakerphone in a crowded room, it’s too sensitive to type into an AI tool.
2. Avoid sharing medical, legal, financial, or intimate personal information. These are the categories most likely to create long-term harm if exposed.
3. Verify every AI-generated fact. Assume AI is wrong until proven otherwise.
4. Protect your digital identity.
Limit how much voice, video, and personal imagery you upload publicly. AI can reconstruct more than people think.

5. Keep AI as an assistant, not a replacement for your thinking. Use AI to support creativity and productivity, not to outsource judgment or personal decisions.

The bottom line

AI has unlocked remarkable efficiency, but it has also introduced risks we’ve never had to manage at this scale. Business leaders need to build guardrails before problems arise. Individuals need to treat AI tools with the same caution they apply to their most sensitive conversations. Using AI is not the risk. Using it casually is. The future belongs to companies and people who embrace AI with awareness, knowing that the technology is powerful, permanent, and still evolving. The more thoughtfully we use it now, the safer and more productive it will remain in the years ahead.

BY SARA SHIKHMAN, FOUNDER OF LENGEA LAW

Monday, December 22, 2025

MIT Study Finds AI Is Already Capable of Replacing 11.7 Percent of U.S. Workers

A new study from the Massachusetts Institute of Technology shows that AI might be poised to replace a lot more jobs than previously forecast. According to researchers, a hidden mass of data reveals that artificial intelligence is currently capable of taking over 11.7 percent of the labor market.

The new estimate comes courtesy of a project called the Iceberg Index, which was created through a partnership between MIT and the Oak Ridge National Laboratory (ORNL), a federally funded research center in Tennessee. According to its website, the Iceberg Index “simulates an agentic U.S.—a human-AI workforce where 151M+ human workers coordinate with thousands of AI agents.” In simpler terms, the tool is designed to simulate precisely how AI is poised to disrupt the current workforce, down to specific local zip codes.

The Iceberg Index model treats America’s 151 million-plus workers as individual agents, each categorized by their skills, tasks, occupation, and location. In total, it maps more than 32,000 skills and 923 occupations across 3,000 counties. In an interview with CNBC, Prasanna Balaprakash, ORNL director and co-leader of the research, described this as a “digital twin for the U.S. labor market.” Using that base of data, the index analyzes to what extent digital AI tools can already perform certain technical and cognitive tasks, and then produces an estimate of what AI exposure in each area looks like. Already, state governments in Tennessee, North Carolina, and Utah are using the index to prepare for AI-driven workforce changes. Here are three main takeaways from the study.

AI is more pervasive in the workforce than we think

Perhaps the biggest finding from the study is the discovery of what it calls a “substantial measurement gap” in how we typically think about AI replacing jobs.
According to the report, if analysts only observe current AI adoption, which is mainly concentrated in computing and technology, they’ll find that AI exposure accounts for only about 2.2 percent of the workforce, or around $211 billion in wage value. (The report refers to this as the “Surface Index.”) But, it says, that’s “only the tip of the iceberg.” By factoring in variables like AI’s potential for automation in administrative, financial, and professional services, the numbers rise to 11.7 percent of the workforce and about $1.2 trillion in wages. (This calculation is referred to as the “Iceberg Index.”) The study’s authors emphasize that these results only represent technical AI exposure, not actual future displacement outcomes. Those depend on how companies, workers, and local governments adapt over time.

The AI takeover is not limited to the coasts

It’s fairly common to assume that the highest number of AI-exposed jobs would be concentrated in coastal hubs, where tech companies predominantly gather. But the Iceberg Index shows that AI’s ability to take over workforce tasks is distributed much more widely. Many states across the U.S., the study shows, register small AI impacts when accounting solely for current AI adoption in computing and tech, but much higher values when other variables are taken into consideration. “Rust Belt states such as Ohio, Michigan, and Tennessee register modest Surface Index values but substantial Iceberg Index values driven by cognitive work—financial analysis, administrative coordination, and professional services—that supports manufacturing operations,” the study says.

How this data can actually make a difference

Now that MIT and ORNL have successfully established the Iceberg Index, they’re hoping it can be used by local governments to protect workers and economies.
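The Surface-versus-Iceberg measurement gap described above comes down to which occupations get counted as exposed when you sum up wages. A toy sketch of that share arithmetic follows; every occupation row and dollar figure here is invented for illustration (the real Index draws on 32,000-plus skills and 923 occupations), and the exposure flags stand in for the Index's far richer task-level modeling:

```python
# Toy illustration of a Surface-vs-Iceberg style measurement gap.
# All rows and wage figures (in $B) are hypothetical.

def exposure_share(occupations, key):
    """Share of total wages in occupations flagged exposed under `key`."""
    total = sum(o["wages"] for o in occupations)
    exposed = sum(o["wages"] for o in occupations if o[key])
    return exposed / total

# "surface" = exposed under current AI adoption (mostly tech);
# "iceberg" = exposed under technical capability, which also sweeps in
# administrative, financial, and professional-services work.
occupations = [
    {"name": "software",      "wages": 200, "surface": True,  "iceberg": True},
    {"name": "admin support", "wages": 400, "surface": False, "iceberg": True},
    {"name": "finance",       "wages": 300, "surface": False, "iceberg": True},
    {"name": "construction",  "wages": 500, "surface": False, "iceberg": False},
    {"name": "healthcare",    "wages": 600, "surface": False, "iceberg": False},
]

surface = exposure_share(occupations, "surface")   # 200 / 2000 = 0.10
iceberg = exposure_share(occupations, "iceberg")   # 900 / 2000 = 0.45
```

Same wage base, same workers; only the definition of "exposed" changes, which is why the report's headline number jumps from 2.2 percent to 11.7 percent.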
Local lawmakers can use the map to source fine-grained insights, such as examining a certain city block to see which skill sets are most in use and the likelihood of their automation. Per CNBC, MIT and ORNL have also built an interactive tool that lets states experiment with different policy levers—like adjusting training programs or shifting workforce dollars—to predict how those changes might affect local employment and gross domestic product.

“The Iceberg Index provides measurable intelligence for critical workforce decisions: where to invest in training, which skills to prioritize, how to balance infrastructure with human capital,” the report states. “It reveals not only visible disruption in technology sectors, but also the larger transformation beneath the surface. By measuring exposure before adoption reshapes work, the Index enables states to prepare rather than react—turning AI into a navigable transition.”

BY FAST COMPANY

Friday, December 19, 2025

OpenAI’s Latest Model Is Scarily Good at These Important Work Functions

If you thought 2025 had a lot of AI-related job displacement, just wait until next year. OpenAI’s latest AI model, GPT-5.2, achieved a new record in GDPval, an evaluation created by the company to track how well AI models perform on economically valuable, real-world tasks.

An AI model being evaluated through GDPval is directed to complete 1,320 tasks traditionally done by humans across 44 occupations in eight sectors: real estate, government, manufacturing, professional services, healthcare, finance, trade, and information. A panel of human judges then decides whether the model’s work matches or exceeds the output of a skilled human worker.

With thinking mode enabled, GPT-5.2 matched or exceeded “top industry professionals” on about 71 percent of the tasks, a huge leap from GPT-5’s roughly 40 percent score. The new model took the top spot from Claude Opus 4.5, the most advanced AI model from Anthropic, which scored about 60 percent, and Google’s Gemini 3 Pro, which scored about 54 percent. OpenAI says GPT-5.2 is “our first model that performs at or above a human expert level.” GPT-5.2 Pro, a larger and more expensive version of the model, fared even better with a 74.1 percent GDPval score. OpenAI wrote that GPT‑5.2 completed the GDPval tasks 11 times faster than expert humans at just 1 percent of the cost, “suggesting that when paired with human oversight, GPT‑5.2 can help with professional work.”

But the model hasn’t crushed all business-focused evaluations. It placed third on Vending-Bench 2, a benchmark that measures AI models’ ability to run a vending machine for a simulated year and scores them based on how much they can grow their cash balance from an initial $500. GPT-5.2 ended five simulated years with an average balance of $3,952, far below Claude Opus 4.5’s $4,967 average and leader Gemini 3 Pro’s $5,478. Still, the model was a marked improvement over GPT-5.1, which sits in fifth place with an average balance of $1,473.
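The headline percentages above are, in essence, win rates: the fraction of tasks where judges scored the model's deliverable as matching or beating the human expert's. OpenAI's actual scoring pipeline isn't reproduced here; the tally below is a simplified sketch of that idea, with the judgment labels and toy data being my own assumptions:

```python
# Hedged sketch of a GDPval-style "match or exceed" win rate.
# The labels ("win", "tie", "loss") and the toy judgment list are
# assumptions for illustration, not OpenAI's published methodology.

def win_rate(judgments):
    """Fraction of tasks judged as good as or better than the human
    expert's work, counting both outright wins and ties."""
    favorable = sum(1 for j in judgments if j in ("win", "tie"))
    return favorable / len(judgments)

# Hypothetical blinded judgments over 10 tasks:
judgments = ["win", "tie", "loss", "win", "win",
             "loss", "tie", "win", "loss", "win"]
rate = win_rate(judgments)  # 7/10 = 0.7, i.e. a 70% match-or-exceed rate
```

Under this reading, GPT-5.2's roughly 71 percent score means judges preferred (or couldn't distinguish) its output on about seven of every ten tasks.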
BY BEN SHERRY @BENLUCASSHERRY

Wednesday, December 17, 2025

4 Things Marc Andreessen Says All Founders Should Be Doing With AI to Beat the Competition

You’d be hard pressed to find a bigger evangelist of artificial intelligence than Marc Andreessen. The venture capitalist’s a16z has invested tens of billions of dollars in AI companies and continues to look for opportunities. But his enthusiasm for the technology goes beyond financial interests. He’s also a proponent of business owners taking better advantage of AI’s offerings.

On a recent episode of the a16z podcast, Andreessen discussed how founders, business owners, and anyone with entrepreneurial instincts should be using AI to gain an advantage over the competition. While hundreds of millions of people have access to AI in the palm of their hands, the majority aren’t using it as a tool (other than punching up their emails). AI mastery, he said, is a skill. And just as people who plunk out “Chopsticks” on a piano can’t play Chopin on the first try, casual AI users won’t be able to use the technology to its peak potential. To do that, you’ll need to study some and use AI regularly, learning how to write good prompts. “There are a slice of people who just use these new systems all the time, like literally all day for everything,” he said. “In a lot of cases, they’re reporting that they’re getting enormous benefits from that.”

1. Ask what you should be asking

One of the biggest hurdles in learning AI is the intimidation factor. New technology can be overwhelming, even for people in the tech space—doubly so when it’s regularly referred to as revolutionary and world-changing. What Andreessen suggests, though, is using AI to learn about AI. “You can ask it: ‘What question should I be asking?’” he said. “[AI] is actually a thought partner helping you figure out what questions to ask … You can say, teach me how to use you in the best way … Teach me how to use you for my business.” AI systems, he noted, love to talk—and the more they tell you about themselves and how to find the information you want, the more adept you’ll become at operating them.

2. Think of AI as a business coach

The best coaches in sports are patient with their players, helping them run plays again and again until those become second nature. Business coaches possess a similar trait, helping business owners (and sometimes teams) improve their performance and hone leadership skills. AI, Andreessen said, is like that, but on steroids. It is, quite literally, impossible to frustrate, so no matter what pace you need to proceed at and no matter how many follow-up questions you have, it will never get cross with you. “It’s like having the world’s best coach, mentor, therapist, right?” he said. “It’s infinitely patient. It’s happy to have the conversation. It’s happy to have the conversation 50 times. It’s happy if you admit your insecurities—it’ll coach you through them. It’s happy if you run wild speculations that don’t make any sense. It’s happy to do all that at 4:00 in the morning.”

3. Use AI to find problems with your thinking

AI can not only make suggestions about directions for your business, but also look at how you’re running the company now and point out possible mistakes. Whether it’s staffing, marketing, customer feedback, expansion plans, strategy, or performance assessments, if you share accurate and unbiased information with the AI and ask for feedback, you’ll get candid advice and criticism that can help you correct mistakes you might not realize you were making. (Just be sure your data is protected.)

4. Use it to draw on lessons from other founders

Not every founder is an expert when it comes to scaling. But if you’re looking to grow your business, it’s an essential skill. If the thought of an expansion roadmap is paralyzing, your AI companion has a wealth of knowledge to draw from.
“Because it’s been trained on some large percentage of the total amount of human knowledge, it has all the information on how Ray Kroc turned McDonald’s from a single restaurant and how all these other entrepreneurs before actually did this,” he said. “So it can explain [to] you and help you figure out how to do this for your own business.”

BY CHRIS MORRIS @MORRISATLARGE

Friday, December 12, 2025

Warren Buffett and Michael Burry Don’t See Eye to Eye on AI

One is the Oracle of Omaha. The other pulled off the Big Short. Warren Buffett and Michael Burry are two of the biggest names in American investing, but they can’t seem to agree on one of the biggest investment questions of the 21st century: Is artificial intelligence an overinflated bubble set to pop, or is all the investor hype actually warranted? Buffett last month revealed a major new stake in Google parent company Alphabet, making the tech giant one of Berkshire Hathaway’s 10 largest holdings. The investment is being interpreted as a bet on AI, in which Alphabet has invested heavily; markets are now treating the company like the front-runner of the AI race. Burry—famous for making a bet against the American housing market that would prove very lucrative during the 2008 financial crisis—recently took two more short positions, this time on the automation and data company Palantir and chip-maker Nvidia, both darlings of the AI boom. Burry has been particularly critical of accounting policies used by Nvidia’s Big Tech customer base, which he says “have been systematically increasing the useful lives of chips and servers, for depreciation purposes, as they invest hundreds of billions of dollars in graphics chips with accelerating planned obsolescence.” Their diverging investment strategies come as chatter of an AI bubble has entered the mainstream—even OpenAI CEO Sam Altman is voicing concerns—while, nevertheless, investors continue to pump money into the sector. Both Buffett and Burry have quite a bit of credibility, making their contradictory tactics all the more notable. The former is responsible for making Berkshire Hathaway one of the most recognizable names in American investing, with what was once a Nebraska textile company now a massive conglomerate with tendrils across the U.S. economy. The latter inspired the Michael Lewis book The Big Short and the movie of the same name, in which he was portrayed by Christian Bale. 
Each is also going through a period of major transition. Buffett announced in May his plans to step down as CEO at the end of this year (though he will hold onto his stock). Vice chairman Greg Abel is set to replace him. Meanwhile, Burry’s hedge fund Scion Asset Management will close by the end of this year, with Burry writing in a recent investor letter that his “estimation of value in securities is not now, and has not been for some time, in sync with the markets.” He’s since launched a financial newsletter called Cassandra Unchained on which he’s expressed further skepticism of the AI boom. BY BRIAN CONTRERAS @_B_CONTRERAS_

Wednesday, December 10, 2025

You’ll Be Managing Digital Employees in 2026, a New Forecast Says

In March this year, Salesforce CEO Marc Benioff, an AI hawk, landed himself in the spotlight when, during a call with investors, he predicted that this is the last time company leaders will manage only humans. Given that his company had just launched a system that sold AI agents to Salesforce customers, it was easy to brush off the prediction as an enthusiastic sales pitch. Now global market research outfit Forrester has predicted more or less the same thing, and says that next year is when things will begin to change. And the changes aren’t going to be small. In a new report first published on the Forbes website, Forrester researchers explain that AI agents are poised to “move beyond” helping workers boost their efficiency—the main selling point for AI at the moment—and instead will join the workforce. This means leadership will have to think about “orchestrating workflows independent of human workers.” They’ll have to think about “technology as part of the workforce,” and that means changing planning as well as day-to-day business. HR teams in particular will play a big role, Forrester thinks. This is because sophisticated AI agents will be able to independently execute “complex tasks or end-to-end processes, acting as a virtual member of a team.” Strip away the cloak of anodyne corporate speak and this means Forrester is predicting that AI agents will be able to act almost at the level of a human. That means one way to manage them is to treat them as if they are almost people, with HR teams working to align agents alongside human workers on projects and tasks, tracking and optimizing a new type of “hybrid workforce.” One way to do this is to deploy human capital management (HCM) techniques, Forrester suggests. 
HCM is a system of rules and software that approaches employees as valuable assets, and the report notes that while mainly large enterprises use HCM now, due to sheer numbers of staff, smaller businesses may find the trick useful for a hybrid AI/human workforce. “Facing immediate pressures of productivity and resource optimization,” driven by the fact that you can employ numerous AI agents at once, and they can work 24-7, smaller outfits may actually “benefit from this technology sooner,” the report suggests. This job may sound daunting, particularly if you’ve never experimented with this tech or you’re feeling far from thinking of AI tools as equivalent to your human workers. But Forrester thinks that around three in 10 companies that already sell enterprise software will get in on the game, offering their own HCM solutions to help you manage AI tools. Meanwhile the research company also thinks that business software companies like Oracle, Microsoft and their ilk will offer “autonomous governance” software, which will help companies deploy AI on business tasks while also ensuring there are audit trails and real-time monitoring so you stay within any compliance limits you need to follow. And if you’re concerned this sounds all too automated for you, don’t worry — Forrester says that even though these trends are shifting fast, we’re “still a few years away from a system that can independently manage an entire business unit without human involvement and adaptability.” Your leadership and management skills are still needed! Though you may be tempted to dismiss this research as not relevant to your smaller company, with its family-like feel and reliance on person-to-person collaboration, that might be a mistake. 
The AI revolution really is rolling on, and if even some of Forrester’s predictions prove true, then inside a year you may be in a position where you can “hire” an AI agent system that can work alongside your staff and help them achieve goals as if it were another employee. That shift will take a lot of leadership, discussing issues with your (probably quite wary) human workers, deciding how to integrate the tech into your workflows and planning and so on. It goes far beyond downloading some software and pressing a button. BY KIT EATON @KITEATON

Monday, December 8, 2025

This Small Startup’s AI Video Model Just Put Sora 2 to Shame

The battle to win the burgeoning AI-generated-video market is heating up, thanks to a new model from a small but mighty player. Runway, a startup that develops AI models for video generation, has released its new flagship model, named Gen-4.5. The company said in a blog post this new model is a major step up for AI-generated video, especially when it comes to realistic physics and exact instruction following. The model claimed the top spot on independent benchmarking organization Artificial Analysis’s text-to-video leaderboard. Founded in 2018 by students of New York University’s Tisch School of the Arts, Runway has been laser-focused on AI video and has been steadily growing since releasing its first model in 2023. According to The Information, this strategy has paid off; the company hit $80 million in annualized recurring revenue in December 2024, and hopes to hit $300 million in ARR by the end of 2025. But Runway is going up against some of the biggest tech companies in the world, most notably Google and OpenAI, which have developed and commercialized their own AI video models. Runway’s plan to beat these mega-funded foes seems pretty simple: make better models. Runway wrote that Gen-4.5 represents “a new frontier for video generation.” Objects in Gen-4.5 videos “move with realistic weight, momentum, and force,” the company says, with better water and surface rendering. The company also says that details like hair will remain more consistent, and that the model will be able to generate more varied art styles. Altogether, Runway says, these upgrades enable the platform’s users to be much more exacting and detailed about their video generations. The new model is already being used commercially by enterprises, Runway says. Video game publisher Ubisoft, ad agency Wieden + Kennedy, Allstate Insurance, and Target were given early access to the tool. The model is available to paid subscribers and through Runway’s API. 
Gen-4.5 was built on Nvidia GPUs and runs on the company’s hardware, according to Runway. The company wrote that it “collaborated extensively” with Nvidia on the model’s creation. Runway creative principal Nicolas Neubert celebrated the model’s release on X, posting that “Gen-4.5 was built by a team that fits onto two school buses and decided to take on the largest companies in the world. We are David and we’ve brought one hell of a slingshot.” BY BEN SHERRY @BENLUCASSHERRY

Friday, December 5, 2025

Black Friday Broke Records. The Real Story Is How AI Changed the Way We Shop

If you only looked at the numbers, you’d think Black Friday was business as usual—just bigger. And, to be clear, it was definitely bigger. Adobe, which tracks more than a trillion retail site visits across 18 categories, says consumers spent a record $11.8 billion online yesterday, up 9.1 percent from last year and even above the company’s own forecast. Between 10 a.m. and 2 p.m., Adobe says shoppers spent $12.5 million every minute. By any metric, that’s a massive number of people shopping for deals. It’s a record for Black Friday sales online, but if you look a little closer, you realize it’s also a massive number of people shopping in very different ways than they used to. Black Friday has already changed quite a bit in the past few years. What was once a single day defined by incredible deals and lines outside big-box stores has stretched into a weeks-long digital shopping season. And, let’s be honest, people aren’t camping outside a Target anymore; they’re sitting on their couch, scrolling their phones. The AI holiday The most interesting part of the story is how things have shifted even more this year. Adobe’s data shows that AI-generated traffic to retail sites jumped 805 percent year-over-year. Not only are people using AI tools to find deals and compare products, but also shoppers who landed on a site from an AI assistant were 38 percent more likely to convert than everyone else. That’s surprising, and yet it makes perfect sense. One of the things AI chatbots like ChatGPT, Claude, and Gemini are good at is instantly surfacing the best price across half a dozen retailers. This year, there were plenty of headline features: Electronics, toys, apparel, TVs, and appliances were discounted between 24 and 30 percent. AI tools just made it easier to find them. And those deals didn’t just convince people to buy more. Adobe says that people spent more on higher-end items. 
The share of units sold from the most expensive tier of products spiked: 64 percent in electronics, 55 percent in sporting goods, 48 percent in appliances. With the right combination of discounts and AI-assisted shopping comparison, people weren’t just looking for deals—they were looking for the best value. Mobile continued to dominate Depending on the hour, around 55 percent of online Black Friday sales happened on a phone—$6.5 billion worth. That’s up 10 percent from last year and represents billions of dollars processed through screens smaller than a wallet. Mobile phones reward frictionless experiences. And it turns out, AI is very good at removing friction. When the easiest way to shop is to ask ChatGPT for a recommendation and the best deal, it changes the way retailers have to think about Black Friday. Not only that, but the timeline seems to have shifted. Adobe says one of the biggest spikes happened from 10 a.m. to 2 p.m. Shopping habits shifted toward the times when people are already using their phones. You don’t need to wait for a sale to “start” when an AI assistant can surface the best price the moment it exists. AI shopping is here to stay Adobe expects U.S. consumers to spend more than $250 billion online this holiday season, with Cyber Monday alone projected to hit $14.2 billion. But the part worth paying attention to isn’t the total—it’s how we got there. Shoppers are trusting AI to do the busywork and find them the best value. For a shopping event that used to be all about physical stores, that’s a significant shift that retailers have to pay attention to. The challenge is that they no longer control the narrative—the AI assistant does. The lesson here may not seem obvious, but the reality is that retailers need to redefine what loyalty means when more shoppers start their journey with an AI prompt instead of walking into a store or pulling up your website. 
When an assistant compares every retailer at once, being “top of mind” matters far less than being the lowest-friction, highest-confidence option in that moment. That means loyalty isn’t something you win with flashy ads or homepage banners—it’s something you earn through the operational details an AI actually cares about. Black Friday broke spending records. But the more interesting record is the one you might overlook: how many of those purchases started with someone typing a question into an AI instead of typing a URL into a browser. EXPERT OPINION BY JASON ATEN, TECH COLUMNIST @JASONATEN

Wednesday, December 3, 2025

Gen-Zers Are Using AI to Boost Their Side Hustles and Grow Them Into Full-Time Businesses

As more Gen-Zers embrace side hustles, they’re increasingly leaning on artificial intelligence to help them get ahead. A new survey by Canva finds that 80 percent of the people who have side hustles have used AI to fuel the growth of those enterprises, with 74 percent calling it their secret weapon. The tools, including ChatGPT and Canva’s own online graphic design offerings, are being used for everything from video creation and logo/brand design to data analysis and copywriting. And some of those side hustles are becoming full-time businesses. Side hustles, on the whole, have never been hotter. Data from the U.S. Bureau of Labor Statistics shows that 8.9 million Americans are currently working multiple jobs. That’s 5.4 percent of the country’s workforce. And a SurveyMonkey study published earlier this month found “72 percent of workers either already have or are considering a side gig—37 percent already have a side hustle, and 35 percent are considering pursuing one.” Some 22 percent of the people surveyed by Canva said they were inspired to start their own company after launching a side gig, and 17 percent said the work led to a consulting or freelancing job. Additionally, 33 percent said they had gained new clients or customers, while 29 percent said their side gig had helped build their professional brand. Gen-Z was the generation most likely to start passive income side hustles: Of those with side hustles, 48 percent of Gen-Zers are currently earning passive income. All told, two-thirds of the 300 “5-9 influencers,” as Canva calls them, said they would consider quitting their full-time jobs if they believed their side projects could sustain them. They wouldn’t be the first. Some very familiar tech companies got their start as side hustles or side projects, including Groupon, Twitter, Craigslist, and Instagram (which began as Burbn, a location-based app for whiskey lovers). 
And thousands of other, smaller businesses began as a part-time side gig for the founder, eventually growing to multimillion-dollar businesses. Today’s side hustle community is made up of a mix of generations. Canva’s survey found that just under half of Gen-Zers, Millennials, Gen-Xers, and Baby Boomers were making money from side gigs today, with the actual percentages ranging from 40 to 48. Increasingly, the side hustles they’re choosing are digitally focused. The most popular jobs were social media creator (35 percent), e-commerce (27 percent), gaming and streaming (24 percent), and graphic design (14 percent). Extra income is the biggest motivator for people who have side gigs, Canva found, but it wasn’t the only one. Some 36 percent of the respondents said they were running their side hustle because they enjoyed the creative expression it gave them. And just under one-third said they wanted to turn a passion into a business. Even people with side hustles who aren’t looking to launch a business of their own are seeing advantages from the work. The skills they’ve learned as part of that work, including the AI expertise they’re building, are helping people advance. Some 14 percent of the people surveyed said their side hustle had helped them get a promotion at their day job. BY CHRIS MORRIS @MORRISATLARGE

Monday, December 1, 2025

The hottest new AI company is…Google?

Google just threw another twist into the fast-changing AI race. And its biggest competitors are taking notice. “We’re delighted by Google’s success — they’ve made great advances in AI and we continue to supply to Google,” Nvidia wrote in a November 25 post on X, before adding that “NVIDIA offers greater performance, versatility, and fungibility than ASICs” (application-specific integrated circuits) like those made by Google. “Congrats to Google on Gemini 3! Looks like a great model,” OpenAI CEO Sam Altman also wrote on X. The posts came just days after mounting buzz about Google’s Gemini 3 model — and the Google-made chips that help to power it. Salesforce CEO Marc Benioff wrote on X that he’s not going back to ChatGPT after trying Google’s new model. “The leap is insane — reasoning, speed, images, video… everything is sharper and faster. It feels like the world just changed, again,” he wrote. Now Meta is said to be in talks with Google about buying its Tensor chips, according to The Information, coming after Anthropic said in October that it plans to significantly expand its own use of Google’s technology. Shares of Google were up nearly 8% last week, while Nvidia’s were down a little over 2%. At stake is more than just bragging rights or a few sales contracts. As the tech industry claims AI will reshape the world — including investment portfolios belonging to everyone from billionaires to 401(k)-holding retirees — which company and which vision come out on top could affect nearly every American. At face value, Nvidia’s post says the company isn’t worried about Google encroaching on its territory. And for good reason — Google’s chips are fundamentally different from Nvidia’s offerings, meaning they aren’t a match-for-match alternative. But that OpenAI and Nvidia felt the need to acknowledge Google at all is telling. 
“They’re in the lead for now, let’s call it, until somebody else comes up with the next model,” Angelo Zino, senior vice president and technology lead at CFRA, told CNN. Google and Meta did not immediately respond to a request for comment. Nvidia declined to comment. The leader for now Google is hardly an AI underdog. Along with ChatGPT, Gemini is one of the world’s most popular AI chatbots, and Google is one of the few cloud providers large enough to be known as a “hyperscaler,” a term for the handful of tech giants that rent out cloud-based computing resources to other companies on a large scale. Google services like Search and Translate have used AI as far back as the early 2000s. Even so, Google was largely caught flat-footed by OpenAI’s ChatGPT when it arrived in 2022. Google management reportedly issued a “code red” in December 2022 following ChatGPT’s seemingly overnight success, according to The New York Times. ChatGPT now has at least 800 million weekly active users, according to its maker, OpenAI, while Google’s Gemini app has 650 million monthly active users. But Gemini 3, which debuted on November 18, now sits at the top of benchmark leaderboards for tasks like text generation, image editing, image processing and turning text into images, putting it ahead of rivals like ChatGPT, xAI’s Grok and Anthropic’s Claude in those categories. Google said over one million users tried Gemini 3 in its first 24 hours through both the company’s AI coding program and the tools that allow digital services to connect to other apps. But people tend to use different AI models for different purposes, says Ben Barringer, the global head of technology research at investment firm Quilter Cheviot. For example, models from xAI and Perplexity are ranked higher than Gemini 3 for search performance in benchmark tests. “It doesn’t necessarily mean (Google parent) Alphabet is going to be … the end-all when it comes to AI,” said Zino. 
“They’re just kind of another piece to this AI ecosystem that continues to get bigger.” More chip competition Google began making its Tensor chips long before the recent AI boom. But Nvidia still dominates in AI chips with the company reporting 62% year-over-year sales growth in the October quarter and profits up 65% compared to a year ago. That’s largely because Nvidia’s chips are powerful and can be used more broadly. Nvidia and its chief rival, AMD, specialize in chips known as graphics processing units, or GPUs, which can perform vast amounts of complex calculations quickly. Google’s Tensor chips are ASICs, or chips that are custom-made for specific purposes. While GPUs and Google’s chips can both be used for training and running AI models, ASICs are usually designed for “narrower workloads” than GPUs are designed for, Jacob Feldgoise, senior data research analyst at Georgetown’s Center for Security and Emerging Technology, told CNN in an email. Beyond the differences in the types of chips themselves, Nvidia provides full technology packages to be used in data centers that include not just GPUs, but other critical components like networking chips. It also offers a software platform that allows developers to tailor their code so that their apps can make better use of Nvidia’s chips, a key selling point for hooking in long-term customers. Even Google is an Nvidia client. “If you look at the magnitude of Nvidia’s offerings, nobody really can touch them,” said Ted Mortonson, technology desk sector strategist at Baird. Chips like Google’s won’t replace Nvidia anytime soon. But increased adoption of ASICs, combined with more competition from AMD, could suggest companies are looking to reduce their reliance on Nvidia. And Google won’t be the only AI chip competitor, said Barringer of Quilter Cheviot, and it’s doubtful it will achieve Nvidia’s dominance. “I think it’s a part of a balance,” he said. Analysis by Lisa Eadicicco

Friday, November 28, 2025

Demystifying Private GenAI solutions

Investments in the AI industry reached astronomical highs with the $300bn deal between OpenAI and Oracle. Competition has reached the governmental level with major investment announcements: $500bn in the US, over $200bn in the EU, and China aiming to reach $98bn by the end of the year. On the technology side, GPT-5 was recently released. Unlike prior versions, it did not represent a paradigm shift but rather an incremental update with improved test results. This development deviates from the scaling-performance trend and echoes Yann LeCun’s (Meta AI) statement suggesting that artificial general intelligence (AGI) will not be achieved merely by scaling large language models (LLMs). Furthermore, Apple’s latest research highlighted LLMs’ limitations in mathematical reasoning. Another study, by University College London, acknowledges the LLM “scaling wall” issue, which leads to a significant increase in computational costs for error correction. Therefore, the question remains: can LLMs innovate and acquire new skills as a “PhD in a Pocket”? Let’s take a moment to explore how current AI technologies can benefit project managers. While off-the-shelf generative AI solutions, which we discussed in the previous report, are quietly making their way into our office suites and smartphones, today we will focus on private generative AI solutions. These solutions include not only data preparation and training, but also hosting infrastructure and development or customisation of the GenAI model. Gartner is sceptical that this route will be selected by the majority, and shortlists those most likely to choose it:

· Corporates;
· Software product development companies, including startups;
· Niche businesses that haven’t found the right off-the-shelf solution and are willing to develop their own.

Private GenAI solutions require strong expertise in software development, data, testing, hosting infrastructure, implementation, training and, as always, support. 
Benefits: Private GenAI enables exploration, innovation, and modification of nearly any use case in project management, resulting in solutions that can surpass off-the-shelf options. Because the organisation fine-tunes the model itself, the approach also allows combining multiple models, and it offers next-level security with full control over data infrastructure, data flows, and models.

Trade-offs: Across different sources, the failure rate of GenAI PoCs in business is 80-90%, and it can reach a shocking 95% for solo implementations, according to recent research by MIT. So companies should be very selective and carefully evaluate the outcomes and future-proofing of their PoCs’ use cases. The organisation also carries full responsibility for ensuring compliance with the EU AI Act and GDPR. If the organisation seeks a solution integrated with internet search, or one that works with multiple output modalities, it is essential to consider RAG and agentic-AI solutions closely.

By Denis Makarov, IT Solutions Program Manager at Sanbra Group Ltd
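As a rough illustration of the RAG (retrieval-augmented generation) pattern mentioned above, here is a minimal sketch of the retrieve-then-prompt loop at the heart of many private GenAI deployments. Everything in it is hypothetical: the corpus, the function names, and the word-overlap scoring (a stand-in for the embedding similarity a production system would use) are invented for illustration, not taken from any vendor’s API.

```python
# Minimal RAG sketch: retrieve the most relevant internal document for a
# query, then assemble a grounded prompt for a privately hosted model.
# All names and data here are illustrative.

def tokenize(text):
    """Lowercase bag-of-words; a real system would use embeddings."""
    return set(text.lower().split())

def retrieve(query, corpus, k=1):
    """Rank documents by word overlap with the query, return the top k."""
    q = tokenize(query)
    ranked = sorted(corpus, key=lambda doc: len(q & tokenize(doc)), reverse=True)
    return ranked[:k]

def build_prompt(query, corpus):
    """Inline the retrieved context so the model answers from company data."""
    context = "\n".join(retrieve(query, corpus))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# A toy internal knowledge base a project manager might query.
corpus = [
    "Project Alpha milestone review is scheduled for 15 March.",
    "The vendor contract renews automatically every December.",
    "Travel expenses above 500 EUR require director approval.",
]

print(build_prompt("When is the Project Alpha milestone review?", corpus))
```

A real deployment would swap the overlap score for vector-embedding similarity and send the assembled prompt to the privately hosted model, keeping data flows entirely inside the organisation’s infrastructure.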

Wednesday, November 26, 2025

Google’s New Gemini 3 AI Crushed OpenAI and Anthropic in a Benchmark Test for Business Operations

Google has released Gemini 3, the latest in its line of advanced AI models. As most AI companies do when announcing a new flagship model, Google boasted that Gemini 3 is its most intelligent model yet, and tops several benchmarks, including one that judges an AI’s ability to run a business. Google has also released a new application to supplement Gemini 3’s coding power. After months of teasing, Google CEO Sundar Pichai finally announced Gemini 3 in a blog post, saying that it enables anyone to “bring any idea to life.” The model is now integrated throughout much of Google’s ecosystem, including its search engine’s AI Mode, Google AI Studio, and the Gemini App. Pichai said that Gemini 3 is “much better at figuring out the context and intent behind your request, so you get what you need with less prompting.” Gemini 3 will be a family of models that vary in size and price. For now, the only model available is Gemini 3 Pro, which is the largest and most expensive version. Over time, smaller and cheaper versions of the model will be released. Gemini 3 Pro also includes a “Deep Think” mode, which has become standard across AI platforms. By activating this mode, Gemini can think even longer and harder about how to solve complex problems. Demis Hassabis, CEO of Google DeepMind, wrote that Gemini 3 is “the best model in the world for multimodal understanding and our most powerful agentic and vibe coding model yet, delivering richer visualizations and deeper interactivity.” By multimodal, he’s referring to the capability of AI models to process and generate content across a variety of mediums, including text, images, and video. Vibe coding refers to the practice of directing AI agents to write and execute code on your behalf, and has been a major AI topic in 2025. In its blog post, Google also claimed that Gemini 3 Pro is significantly less sycophantic than other AI models. 
“Its responses are smart, concise and direct, trading cliché and flattery for genuine insight,” the company wrote, “telling you what you need to hear, not just what you want to hear.” According to Google’s own testing, Gemini 3 Pro tops several widely used AI benchmarks, including MMMU, which gauges multimodal understanding, and Terminal-Bench, which judges a model’s ability to code within a computer terminal. One notable leaderboard that Gemini 3 Pro topped was Vending-Bench 2, a benchmark that measures an AI model’s ability to run a business (in this case a vending machine) over a long period of time. After a full simulated year of operation, Gemini 3’s bank account balance was $5,478.16, much higher than second-place finisher Claude Sonnet 4.5, which ended the virtual year with $3,838.74. Google clearly has high hopes for Gemini 3 in the coding domain. Along with the new model, the company has released Google Antigravity, a new agentic development platform that will likely compete with fast-growing startup Cursor, which sells its own AI-powered integrated development environment (IDE). Google Antigravity gives AI agents access to a code editor, terminal, and browser. In addition to Gemini 3, Google Antigravity users will also be able to select Anthropic’s Claude models and OpenAI’s open-weights model. Google says that Antigravity also comes “tightly coupled” with Nano Banana, the company’s popular image-editing model. For nontechnical founders who might be intimidated by the technical details of Antigravity but want to try their hand at AI coding, Google has brought Gemini 3 Pro to Google AI Studio, a web-based application designed specifically for those without coding experience. In a blog post, Google AI Studio product lead Logan Kilpatrick wrote that Gemini 3 Pro “can translate a high-level idea into a fully interactive app with a single prompt. 
It handles the heavy lifting of multi-step planning and coding details delivering richer visuals and deeper interactivity, allowing you to focus on the creative vision.” Gemini 3 Pro is currently available for enterprise use to members of Google’s Gemini Enterprise platform. Google says that several businesses are already using Gemini 3 Pro, including Box, Cursor, Harvey, Replit, Thomson Reuters, and Shopify. Gemini 3 Pro costs $2 per million tokens on input prompts that are smaller than 200,000 tokens, and $12 per million tokens generated. Tokens are units of data that are processed and generated by AI models. BY BEN SHERRY @BENLUCASSHERRY
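Taking the per-token rates quoted above at face value ($2 per million input tokens for prompts under 200,000 tokens, $12 per million output tokens), a back-of-the-envelope cost estimate looks like the sketch below. The helper function and the sample token counts are illustrative only; actual billing terms may differ.

```python
# Rough cost estimate using the Gemini 3 Pro rates quoted above:
# $2 per million input tokens (prompts under 200,000 tokens) and
# $12 per million output tokens. Illustrative only.

INPUT_RATE = 2.00 / 1_000_000    # dollars per input token
OUTPUT_RATE = 12.00 / 1_000_000  # dollars per output token

def estimate_cost(input_tokens, output_tokens):
    if input_tokens >= 200_000:
        raise ValueError("quoted input rate applies below 200,000 tokens")
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# A 10,000-token prompt producing a 2,000-token answer:
print(f"${estimate_cost(10_000, 2_000):.3f}")  # → $0.044
```

At these rates, output tokens cost six times as much as input tokens, so long generated answers, not long prompts, dominate the bill.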

Monday, November 24, 2025

10 AI Tools Marketers Are Using Right Now

Who doesn’t want to be more efficient? That’s why millions of Americans are turning to AI at the office. Over the past year, the share of U.S. workers using AI tools as part of their job doubled to 40 percent, according to a Gallup poll published in June. That number gets even higher for the marketing industry. By one measure, 61 percent of creative and marketing professionals use AI for their work, including analytics, content creation, strategy, and planning. Another poll found that 76 percent of marketers employ AI tools. For those marketers searching for the most effective AI tools to incorporate into their daily workflow, Inc. surveyed marketing-focused founders and their chief marketing officers. Here are the models and platforms they cannot live without.

1. ChatGPT

Unsurprisingly, one of the most frequently cited tools was ChatGPT. Founders and chief marketing officers rely on the LLM for daily tasks, including brainstorming, market research, data analysis, strategy development, and content creation. One chief marketing officer trained a custom GPT to become chief of staff. Another says they use ChatGPT to formulate step-by-step guides when learning how to use other AI tools. With a team of fewer than 10 people, Sophie Mann, chief marketing officer of Furnished Finder, an online marketplace for furnished rental properties, uses ChatGPT as both her executive assistant and copywriter. Mann taps the LLM to help her draft board updates, structure meeting agendas, write performance reviews, and negotiate partner contracts. Without a full-time copywriter on staff, Mann also built a custom GPT, which is trained on Furnished Finder’s brand voice and customer personas, to write content. “It’s now the starting point for nearly every marketing asset. From email campaigns, social captions, blog posts, and paid ads, to event collateral and even voice-over scripts for our phone lines. You name it. We’re likely starting our drafts in Chat,” says Mann. 
“These tools help us scale our output without adding headcount. Reading through 1,000+ customer survey responses, for example, used to take hours. Now, I can have AI summarize key themes and insights in under a minute.”

2. Descript

For creators and founders building in public through video, whether it’s TikTok or podcasts, Descript has become a go-to tool to make editing easier and faster. The AI-powered video editing platform, which landed on Fast Company’s list of Most Innovative Companies this year, creates transcripts from raw video and lets users edit through a text document by deleting words, phrases, or entire chunks. Last year, Descript added new AI capabilities, which automatically remove filler words, repeated words, bad takes, and background noise. Overall, the company claims its tool enables users to make “130 percent more videos in 27 percent less time” and that “first-time users were 25 percent more likely to complete their project,” Fast Company reported earlier this year.

3. OpusClip

Many of the founders who use Descript also use OpusClip, an AI-powered video editing software that helps users cut down longer videos, such as hour-long podcast interviews, into a series of shorter clips for social media. Within two years of launching, the company has scaled to more than 12 million users and become a favorite of social media managers. Shana Ayabe, founder of the marketing company Grace Digital Media and co-host of the podcast The Exit Interview, calls the tool a game changer. OpusClip “allows us to quickly edit, reformat from landscape to vertical, and add dynamic captions while maintaining high production quality,” says Ayabe, who uses the paid version so her team can collaborate in real time on the platform, “hearting” and commenting on different clips. “The built-in virality scoring system helps us understand why a clip is likely to perform well, so we can strategically schedule posts around traffic patterns and trends.”

4. Perplexity

ChatGPT is not the only LLM that marketers use. In fact, most of the founders and CMOs who spoke with Inc. use multiple models, depending on the task. Perplexity was the tool of choice when it came to research, including analyzing documents and data sets. Denise Aguilar, a global marketing strategist and founder of her eponymous Seattle-based company, Denise Aguilar Consulting, uses the paid version of Perplexity and says the LLM has streamlined her workflow, allowing her to take on more ambitious projects for her clients. “The upgraded features, such as advanced file handling, faster processing, and priority support, have enabled me to work with large sets of PDFs and rapidly search, organize, and synthesize information,” says Aguilar, who has worked with companies including Microsoft, Amazon, General Motors, and Vogue. “Investing in the Pro version has definitely raised my efficiency, especially when refining communications strategies, building personas, and editing across multiple marketing campaigns.”

5. Midjourney

Midjourney, an AI-powered image generation tool, is being sued for copyright infringement by a group of visual artists and the major Hollywood studio Warner Bros., but marketers, especially those who work on the creative side of advertising, still say the tool is helpful for developing concept art and mockups. To avoid legal issues, be careful to restrict any images to internal and exploratory use only.

6. Lovable

Marketers have joined the vibe-coding trend and started developing their own software by telling AI tools what they want to create, rather than writing code themselves. Many founders and CMOs prefer to use Lovable. The platform has become so popular that it became a unicorn within eight months of launching and has attracted nearly eight million users. Inc. AI reporter Ben Sherry used the free version to create an entire website in an hour.
Marketers tend to opt for the paid version and say the tool is especially helpful for creating prototypes of client websites and apps. Maria Pergolino, chief marketing officer of SPS Commerce, a Minneapolis-based software company that helps retail partners optimize supply chain operations, says Lovable has been “transformative” for her workflow. “No longer do you need to awkwardly describe your vision for an app, ad, slide, or campaign. Instead you can describe your vision to an LLM and pop the directions into Lovable to bring your ideas to life,” says Pergolino. “This saves me hours every week.”

7. Claude

If Perplexity has become the researcher and ChatGPT has become the catch-all for marketers, Claude has become the go-to LLM for writing. Founders and CMOs say the model excels as a place for brainstorming, storytelling, and testing out ideas or phrases. Patrick Finan, the co-founder and CEO of Block Club, a branding, strategy, and content agency for B2B technology companies based in Brooklyn, says Claude is the main LLM that he and his team use. “It’s fully integrated into Slack, Gmail, Google Workspace, Google Calendar,” he says.

8. Gamma

For help making presentations, marketers have turned to Gamma as the PowerPoint of the AI era. The AI-powered platform takes text, such as documents or outlines, and transforms it into a slide presentation with one prompt. Using this same method, Gamma also lets users create polished-looking PDF documents, social media assets, and websites. Within two years of launching, the San Francisco-based startup has attracted 50 million users, Fast Company reported earlier this year. Founders who spoke with Inc. recommended the paid version.

9. AirOps

“Marketers are losing their minds” trying to optimize their existing SEO strategy for the new era of AI search, Andy Crestodina, co-founder and chief marketing officer of Orbit Media Studios, a Chicago-based digital agency that focuses on web development and website optimization, told Inc. recently. AirOps has helped streamline that process, founders and CMOs say. The company, which secured a $40 million Series B fundraising round earlier this month, calls itself the first content engineering platform for AI search. Marketers say the platform makes their AI optimization strategy more efficient. Leah Taylor, who runs communications for the AI sales platform Apollo, says Apollo has embedded the AirOps AI infrastructure into its core marketing operations to automate performance reporting and identify new revenue opportunities. “AirOps ingests our first-party data across notes, OKRs, experiment logs, and Slack, then uses enterprise LLMs to analyze and publish insights,” says Taylor. Since Webflow, a no-code website experience platform, started using AirOps, the company has increased its visibility in AI search with more than 330 new citations and a 24 percent uptick in SEO impressions, says chief marketing officer Dave Steer. AirOps has also increased revenue, with AI-attributed signups jumping from the low single digits to nearly 10 percent.

10. Gemini

Marketers who are incorporating LLMs into their daily workflow also name-checked Gemini. While Doug Straton, chief marketing officer at Bazaarvoice, an Austin-based software platform that helps brands harness user-generated content, ratings, and reviews, calls ChatGPT the “easiest and most fun” LLM to use, he usually turns to Gemini instead for its repeatability and reliability. “I find Gemini, while harder to brief, creates more uniform, consistent results. It’s my company’s default,” says Straton, who uses the paid version.
“It seems less eager to please you with a result you want to see, versus what you need to see.” BY ALI DONALDSON @ALICDONALDSON

Friday, November 21, 2025

What’s Next for AI? Andreessen Horowitz Founders Share Their Thoughts

Stocks of companies tied to artificial intelligence have been hitting stratospheric levels for over a year now, thrilling investors, but also causing concerns about a potential AI bubble. As startups close breathtaking funding rounds, like the $40 billion OpenAI collected in March of this year, fears of an AI bubble are growing — and some say a burst could be even bigger than the dot-com bubble of the late 1990s. The bubble theory is hotly debated. Some within the industry agree that the investment landscape is bloated, including OpenAI co-founder Sam Altman. Others, however, including analysts at Goldman Sachs, say we’re not in one (yet) — and Fed chair Jerome Powell has been skeptical of the bubble calls. As that debate rages, investors continue to fund AI startups. Few investors are in as deep as Marc Andreessen and Ben Horowitz. Their venture firm, Andreessen Horowitz (commonly called a16z), has sunk billions into the AI space. In April, it was reported the company was in early talks to raise a massive $20 billion AI-focused fund. The two investors recently came together at a16z’s Runtime conference to talk about where AI can go beyond chatbots. Neither was willing to make any specific predictions about AI’s forthcoming capabilities, saying it’s too early to even imagine them. Andreessen likened AI to the personal computer in 1975, noting there was no way at that time to imagine what PCs would be capable of today. However, he expects similar levels of advancement — from a stronger starting point. AI, he said, is already approaching levels of human creativity — and while Andreessen would love to see humans continue to have superiority in that area, he thinks it’s unlikely. Tools like OpenAI’s Sora 2 video generator, for instance, are already capable of creating realistic scenes, animations, and special effects — and the introduction of AI actress Tilly Norwood has caused an outcry and prompted debate in Hollywood.
“I wanna like hold out hope that there is still something special about human creativity,” he said. “And I certainly believe that, and I very much want to believe that. But, I don’t know. When I use these things, I’m like, wow, they seem to be awfully smart and awfully creative. So I’m pretty convinced that they’re gonna clear the bar.” Horowitz agreed, saying that while AI might not currently create at the same level as human artists, whether painters or hip-hop performers, that’s largely due to how little it has learned so far. It’s just a matter of time before it has an equal or superior level of talent. And some artists are already looking to use AI to collaborate, he said. “With the current state of the technology, kind of the pre-training doesn’t have quite the right data to get to what you really wanna see, but, you know, it’s pretty good,” he said. “Hip-hop guys are interested because it’s almost like a replay of what they did — they took other music and built new music out of it. AI is a fantastic creative tool. It way opens up the palette.” While AI can devour as many data sets as programmers throw at it, that doesn’t give the technology situational awareness. It is, in essence, book smarts versus street smarts. But the robotics field is expanding quickly. Elon Musk and Tesla are working on humanoid robots, and robotics company 1X has already started to take preorders for a $20,000 humanoid robot that will ‘live’ and work around your home. Once that technology and AI are blended, Andreessen said, AI will see a significant jump in actionable intelligence. “When we put AI in physical objects that move around the world, you’re gonna be able to get closer to having that integrated intellectual, physical experience,” he said. “Robots that are gonna be able to gather a lot more real-world data.
And so, maybe you can start to actually think about synthesizing a more advanced model of cognition.” While there are plenty of experts who warn the AI market could be in a bubble right now, including OpenAI CEO and co-founder Sam Altman, Horowitz dismisses the idea, saying bubbles occur when supply outstrips demand — and that’s not the case with AI. “We don’t have a demand problem right now,” he said. “The idea that we’re going to have a demand problem five years from now, to me, seems quite absurd. Could there be weird bottlenecks that appear, like we don’t have enough cooling or something like that? Maybe. But, right now, if you look at demand and supply and what’s going on and multiples against growth, it doesn’t look like a bubble at all to me.” BY CHRIS MORRIS @MORRISATLARGE

Wednesday, November 19, 2025

What Adobe Knows About AI That Most Tech Companies Don’t

Last week, I was talking with a graphic designer about Adobe MAX, and he shared with me the most unexpected review of an AI feature I’ve ever heard. “Photoshop will rename your layers for you!” he said, without hesitating. He was referring to the fact that Photoshop can now look at the content on each of your layers and rename them accordingly. Since most people don’t give a lot of thought to naming layers as they create them, this might be one of the most useful features Adobe has ever created. It’s certainly one of the most useful AI features that any company has come up with so far, mostly because it does something very helpful that no one wants to do.

Helpful over hype

And that’s the point. In fact, that reaction explains more about Adobe’s AI strategy than anything the company demoed during its keynote. It’s not the kind of feature that gets a lot of hype, but I don’t know anyone who regularly uses Photoshop who wouldn’t prefer to have AI handle one of the most universally hated chores in design: cleaning up a pile of unnamed layers. I think you can make the case that Adobe just made the loudest, clearest argument yet that AI isn’t a side feature. In many ways, it is the product now. Almost every announcement touched Firefly, assistants that operate the apps for you, “bring your own model” integrations, or Firefly Foundry—the infrastructure layer that lets enterprises build their own private models.

What Adobe understands

But beneath it all, Adobe is doing something most tech companies still aren’t. Instead of looking for ways to bolt AI onto its products, Adobe is building AI into the jobs customers already hired Adobe to help them do.
When I sat down with Eric Snowden, Adobe’s SVP of Design, at WebSummit this past week, he used a phrase that stuck with me: “utilitarian AI.” Sure, there were plenty of shiny new AI features that Adobe announced, like Firefly Image Model 5, AI music and speech generation, podcast editing features in Audition, and even partner models like Google’s Gemini and Topaz’s super-resolution built directly into the UI. But Snowden lit up talking about auto-culling in Lightroom. “You’re a wedding photographer. You shoot 1,000 photos; you have to get to the 10 you want to edit. I don’t think there’s anybody who loves that process,” he told me. Auto-culling uses AI to identify misfires, blinks, bad exposures, and the frames you might actually want.

Utilitarian AI is underrated

That’s what he means by utilitarian AI—AI that makes the stuff you already have to do dramatically less painful. These features don’t force you into an “AI mode”; instead, they save you time while you go about the tasks you already do. Snowden describes Photoshop’s assistant like a self-driving car: you can tell it where to go, but you can grab the wheel at any time—and the entire stack of non-destructive layers is still there. You’re not outsourcing your creative judgment—you’re outsourcing the tedious tasks so you can work on the creative process. That’s Adobe’s first insight: that AI should improve the actual job, not invent a new one. The second insight came out of a conversation we had about who AI helps most. I told Snowden I have a theory: AI is most useful right now to people who either already know how to do a thing, or don’t know how to do the steps but know what the result should be. For both of those people, AI saves meaningful time. That’s how I use ChatGPT for research. I could do 30 Google searches for something, but ChatGPT will just do them all at the same time and give me a summary of the results. I know what the results should be, and I’m able to evaluate whether they are accurate.
The same is true for people using Lightroom, Photoshop, or Premiere. You know what “right” looks like, so you know whether the tool got you closer or not. AI can do many of the tasks, but it’s still up to humans to have taste.

AI has no taste

Which is why Snowden didn’t hesitate: designers and creative pros are actually better positioned in an AI world—not worse. “You need to know what good looks like,” he told me. “You need to know what done looks like. You need to know why you’re making something.” Put the same AI tool in front of an engineer and a designer and, according to Snowden, “90 times out of 100, you can guess which is which,” even if both are typing prompts into the same tool. That means taste becomes the differentiator. Snowden told me he spent years as a professional retoucher. “I think about the hours I spent retouching photos, and I’m like, I would have liked to go outside,” he said. Being able to do that skill was important, but it wasn’t the work. The finished product was the work, and AI can compress everything between the idea and the result.

Trust has never mattered more

The third thing Adobe understands—and frankly, most companies haven’t even started wrestling with—is trust. I have, many times, said that trust is your most valuable asset. If you’re Adobe, you’ve built up that trust over decades with all kinds of creative professionals. There is a lot riding on whether these AI tools are useful or harmful to creatives, as well as to their audiences. So, Adobe didn’t just ship AI features; it is building guardrails around them. For example, the Content Authenticity Initiative will tag AI-edited or AI-generated content with verifiable metadata. Snowden’s framing is simple: “We’re not saying whether you should consume it or not. We just think you deserve to know how it was made so you can make an informed choice.” Then there’s the part most people never see—the structure that lets a company Adobe’s size move this fast.
Understanding how customers want to use AI

Snowden’s team actually uses the products they design. He edits photos in Lightroom outside of work. Adobe runs a sort of internal incubator where anyone can pitch new product ideas directly to a board. Two of the most important new tools—Firefly Boards and Project Graph—came out of that program. When AI arrived, Adobe already had the mechanism to act on it. It didn’t need to reinvent itself or reorganize. It just needed to point an existing innovation engine at a new set of problems. That’s the lesson here: Adobe isn’t chasing AI because it’s suddenly trendy, shipping features no one is sure anyone will use. It saw AI as a powerful way to improve the jobs its customers already do. That’s the thing so many tech companies still miss. AI is not a strategy. It’s not even the product. It’s a utility—one that works only if you know what your customers are trying to accomplish in the first place. So far, it seems like Adobe does. And that’s why its AI push feels less like a pivot and more like a product finally catching up to the way creative work actually happens. EXPERT OPINION BY JASON ATEN, TECH COLUMNIST @JASONATEN

Monday, November 17, 2025

How to Grow Your Social Following as a Founder—and Which Platforms to Use

So you want to build in public—documenting the process of founding, launching, and growing your business online—but you’re not sure which platform to use. You could use Substack or Beehiiv to send newsletters, Medium to write blog posts, TikTok or YouTube to post videos, LinkedIn, X, or Bluesky to share text-based posts, or Instagram to post photos. There’s no right answer. Founders of all kinds have grown their businesses by posting on each of these platforms—and many use more than one. Plus, there’s plenty of overlap: You can post TikTok-like videos on Instagram and share X-like text posts on Substack. Still, if you’re at the very beginning of your building-in-public journey, it’s a good idea to focus your efforts on just one. Here’s a guide to help you pick between some of the most popular platforms right now: Substack, Beehiiv, TikTok, LinkedIn, and X.

Choose Substack if…

You’re a founder in the politics, media, fashion, or beauty space who enjoys storytelling. Substack, which launched as a newsletter platform in 2017 but now bills itself as a subscription network, reports hosting more than 50 million active subscriptions and 5 million paid subscriptions. The platform recently added video and livestream features in order to court creators who use other paid subscription platforms, but the majority of its content is still long-form and text-based. If you’re considering building in public on Substack, you need to have a love for writing—or at the very least, storytelling. Newsletters on politics, fashion, and beauty seem to do especially well on Substack, which makes it a solid choice of platform if your company is in any of these industries. Many new-age media organizations including The Ankler and The Free Press publish on Substack, which means it’s also a great pick for media entrepreneurs and founders in adjacent industries like public relations.
“Substack is where founders can reach audiences who genuinely value a direct, personal connection,” Christina Loff, the platform’s head of lifestyle partnerships, tells Inc. over email. “The publications that perform best all share a common thread: a strong, human voice.” Examples of founders whose publications do this well, she adds, include Rebecca Minkoff, who has more than 6,000 subscribers; Dianna Cohen of Crown Affair, who has more than 13,000; and Rachelle Hruska MacPherson of GuestofaGuest.com and Lingua Franca, who has more than 260,000.

Choose TikTok if…

Your business is targeting Gen Z. It’s no secret that TikTok dominates in attracting young users—and keeping them engaged. The video-sharing app rose to fame in 2020 and now has an estimated 170 million American users, many of whom are 28 years old and under. In fact, according to TikTok, 91 percent of Gen Z internet users “have discovered something” on the platform in the past month. So if you’re a young founder, or if you’re starting a business that’s targeting Gen Z customers, TikTok is probably your best bet. All you really need to get started on TikTok is a smartphone and basic video-editing skills. Nadya Okamoto, the co-founder of sustainable period care brand August, for one, has grown her audience to 4.4 million in just four years by filming her daily routine, answering product questions, and posting get-ready-with-me videos. Boutique candy brand Lil Sweet Treat’s founder Elly Ross has gained more than 36,300 followers by documenting her experience of opening four storefronts and launching a line of candy. Before you fully commit to building in public on TikTok, remember that there’s still a minute possibility that the platform will get banned in the U.S. on December 16.

Choose LinkedIn if…

You’re a founder in the business-to-business space.
As a work-centric social media platform, LinkedIn is a great place for you to build in public if your company makes products for or provides services to other businesses. Still, there’s a lot of competition on the platform. More than 69 million companies and 243 million American professionals use LinkedIn, according to the company—and almost all of them are posting about their own careers. BY ANNABEL BURBA @ANNIEBURBA

Friday, November 14, 2025

Why Some AI Leaders Say Artificial General Intelligence Is Already Here

Artificial intelligence is still a relatively new technology, but one that has been seeing seemingly exponential jumps in its capabilities. The next big milestone many founders in the industry have discussed is artificial general intelligence (AGI), the ability of machines to think at the same level as a human being. Now, some of AI’s biggest names say they believe we could already be at that point. The recent Financial Times Future of AI summit gathered Nvidia CEO Jensen Huang, Meta AI’s Yann LeCun, Canadian computer scientist Yoshua Bengio, World Labs founder Fei-Fei Li, Nvidia chief scientist Bill Dally, and Geoffrey Hinton (often referred to as the “Godfather of AI”) to discuss the state of the technology. And some of those leaders in the field said they felt AI was already topping or close to topping human intelligence. “We are already there … and it doesn’t matter, because at this point it’s a bit of an academic question,” said Huang. “We have enough general intelligence to translate the technology into an enormous amount of society-useful applications in the coming years. We are doing it today.” Others said we may not even realize that it has happened. While most forecasts for the arrival of AGI still put it at several years down the road, LeCun said he didn’t expect it would be an event, like the release of ChatGPT. Instead, it’s something that will happen gradually over time—and some of it has already started. AI companies are generally less bullish on the subject of AGI than the panelists. OpenAI has said if it chooses to IPO in the future, that will help it work toward the AGI milestone. Elon Musk, last year, predicted AGI would be achieved by the end of 2025 (updating his previous prediction of 2029). Last month, he wrote in a social media post that the “probability of Grok 5 achieving AGI is now at 10 percent and rising.” Not all of the AI leaders said they felt AGI was here.
Bengio, who was awarded the Turing Award in 2019 for achievements in AI, said it was certainly possible, but the technology wasn’t quite there yet. “I do not see any reason why, at some point, we wouldn’t be able to build machines that can do pretty much everything we can do,” said Bengio. “Of course, for now … it’s lacking, but there’s no conceptual reason you couldn’t.” AI, he continued, was a technology that had “a lot of possible futures,” however. And that makes it hard to forecast. Basing decisions today on where you think the technology will go is a bad strategy, he said. World Labs founder Li straddled the question, saying there were parts of AI that would supersede human intelligence and parts that would never be the same. “They’re built for different purposes,” she said. “How many of us can recognize 22,000 objects? How many humans can translate 100 languages? Airplanes fly, but they don’t fly like birds. … There is a profound place for human intelligence to always be critical in our human society.” Hinton, meanwhile, opted to look beyond AGI to superintelligence, an AI milestone where the technology is considerably smarter than humans. There are several startups exploring this space now, including Ilya Sutskever’s Safe Superintelligence and Mira Murati’s Thinking Machines Lab. “How long before if you have a debate with a machine, it will always win?” Hinton posited. “I think that is definitely coming within 20 years.” BY CHRIS MORRIS @MORRISATLARGE

Wednesday, November 12, 2025

AI Isn’t Replacing Jobs. AI Spending Is

For decades now, we have been told that artificial intelligence systems will soon replace human workers. Sixty years ago, for example, Herbert Simon, who received a Nobel Prize in economics and a Turing Award in computing, predicted that “machines will be capable, within 20 years, of doing any work a man can do.” More recently, we have Daniel Susskind’s 2020 award-winning book with the title that says it all: A World Without Work. Are these bleak predictions finally coming true? ChatGPT turns 3 years old this month, and many think large language models will finally deliver on the promise of AI replacing human workers. LLMs can be used to write emails and reports, summarize documents, and otherwise do many of the tasks that managers are supposed to do. Other forms of generative AI can create images and videos for advertising or code for software. From Amazon to General Motors to Booz Allen Hamilton, layoffs are being announced and blamed on AI. Amazon said it would cut 14,000 corporate jobs. United Parcel Service (UPS) said it had reduced its management workforce by about 14,000 positions over the past 22 months. And Target said it would cut 1,800 corporate roles. Some academic economists have also chimed in: The St. Louis Federal Reserve found a (weak) correlation between theoretical AI exposure and actual AI adoption in 12 occupational categories. Yet we remain skeptical of the claim that AI is responsible for these layoffs. A recent MIT Media Lab study found that 95% of generative AI pilot business projects were failing. Another survey by Atlassian concluded that 96% of businesses “have not seen dramatic improvements in organizational efficiency, innovation, or work quality.” Still another study found that 40% of the business people surveyed have received “AI slop” at work in the last month and that it takes nearly two hours, on average, to fix each instance of slop. 
In addition, they “no longer trust their AI-enabled peers, find them less creative, and find them less intelligent or capable.” If AI isn’t doing much, it’s unlikely to be responsible for the layoffs. Some have pointed to the rapid hiring in the tech sector during and after the pandemic, when the U.S. Federal Reserve set interest rates near zero, reports the BBC’s Danielle Kaye. The resulting “hiring set these firms up for eventual workforce reductions, experts said—a dynamic separate from the generative AI boom over the last three years,” Kaye wrote. Others have pointed to fears that a recession may be starting due to higher tariffs, fewer foreign-worker visas, the government shutdown, a backlash against DEI and clean energy spending, ballooning federal government debt, and the presence of federal troops in U.S. cities. For layoffs in the tech sector, a likely culprit is the financial stress that companies are experiencing because of their huge spending on AI infrastructure. Companies that are spending a lot with no significant increases in revenue can try to sustain profitability by cutting costs. Amazon increased its total CapEx from $54 billion in 2023 to $84 billion in 2024, and an estimated $118 billion in 2025. Meta is securing a $27 billion credit line to fund its data centers. Oracle plans to borrow $25 billion annually over the next few years to fulfill its AI contracts. “We’re running out of simple ways to secure more funding, so cost-cutting will follow,” Pratik Ratadiya, head of product at AI startup Narravance, wrote on X. “I maintain that companies have overspent on LLMs before establishing a sustainable financial model for these expenses.” We’ve seen this act before. When companies are financially stressed, a relatively easy solution is to lay off workers and ask those who are not laid off to work harder and be thankful that they still have jobs. AI is just a convenient excuse for this cost-cutting.
Last week, when Amazon slashed 14,000 corporate jobs and hinted that more cuts could be coming, a top executive noted the current generation of AI is “enabling companies to innovate much faster than ever before.” Shortly thereafter, another Amazon rep anonymously admitted to NBC News that “AI is not the reason behind the vast majority of reductions.” On an investor call, Amazon CEO Andy Jassy admitted that the layoffs were “not even really AI driven.” We have been following the slow growth in revenues for generative AI over the last few years, and the revenues are neither big enough to support the number of layoffs attributed to AI nor big enough to justify the capital expenditures on AI cloud infrastructure. Those expenditures may be approaching $1 trillion for 2025, while AI revenue—which would be used to pay for the use of AI infrastructure to run the software—will not exceed $30 billion this year. Are we to believe that such a small amount of revenue is driving economy-wide layoffs? Investors can’t decide whether to cheer or fear these investments. The revenue is minuscule for AI-platform companies like OpenAI that are buyers, but magnificent for companies like Nvidia that are sellers. Nvidia’s market capitalization recently topped $5 trillion, while OpenAI admits that it will have $115 billion in cumulative losses by 2029. (Based on Sam Altman’s history of overly optimistic predictions, we suspect the losses will be even larger.) The lack of transparency doesn’t help. OpenAI, Anthropic, and other AI creators are not public companies that are required to release audited figures each quarter. And most Big Tech companies do not separate AI from other revenues. (Microsoft is the only one that does.) Thus, we are flying in the dark. Meanwhile, college graduates are having trouble finding jobs, and many young people are convinced by the end-of-work narrative that there is no point in preparing for jobs. Ironically, surrendering to this narrative makes them even less employable.
The wild exaggerations from LLM promoters certainly help them raise funds for their quixotic quest for artificial general intelligence. But they bring us no closer to that goal, all while diverting valuable physical, financial, and human resources from more promising pursuits.

By Gary N. Smith and Jeffrey Funk

Monday, November 10, 2025

A New AI Agent Wants to Schedule Your Life—Should You Let It?

Have you ever thought your working life would be easier with an executive assistant? A suite of new AI agents is cropping up, promising to take on the work and deliver all the benefits of an EA without you actually having to hire anyone for the job. And, ostensibly, all for a far lower price tag.

To find out whether technology could do a better job than I could at making my schedule work for me, I tested a free trial of Blockit, a new AI-powered agent that integrates with a user’s calendars and email. When I signed up, Blockit promised that in as little as five minutes it could learn as much about my schedule, habits, and preferences as a human EA might over the course of several months.

Here’s how Blockit works: The AI agent learns your preferences for taking meetings, including when and where you like to conduct certain kinds of business. Then, you can copy the Blockit bot into emails or Slack messages with your contacts and give it instructions to set up a meeting at your chosen time and place. It sounded fantastically simple, but after using the tool, I realized that letting Blockit’s AI into my schedule required more than a little work on my part, too. Here are my three biggest takeaways from letting AI into my schedule for a week.

You need to work to make it work for you

Blockit’s onboarding process involves answering multiple questions about your habits and schedule, some of which got me thinking a little more about where, in fact, I like to work. If you like to take certain meetings in a coffee shop near your office, you need to tell Blockit the exact address, and the AI will make a note of it for future reference. Similarly, if you have an office or work from home on certain days, Blockit will log that, too.
Doing this means that when you copy Blockit’s bot into an email with a contact you want to get coffee with, the bot will schedule a meeting at your preferred spot, invite the other person, and block off the time on your calendar it will take you to get there from wherever you told it you would be working that day. That’s extremely helpful! But it also requires you to make some concrete decisions about where and when you will be working—and that’s not always obvious if you are in an industry that regularly puts you in many different locations on short notice. Blockit, to its credit, can keep up: It will even ask you to confirm whether you are traveling if you tell it to set a meeting in an unfamiliar city. But if you are a busy CEO, keeping your AI agent up to date on your schedule might not always be top of mind.

Another interesting Blockit feature is its codewords function. Users can teach the AI codewords that trigger certain actions. For example, say I sign off an email agreeing to a meeting with “best wishes” and copy Blockit to set something up. I could have already set “best wishes” as a codeword meaning that the meeting is low priority, can be scheduled three or four weeks out, and can be canceled if I get another, higher-priority request for the same time between now and then. It’s a clever idea, but again, I had to go through the work of teaching Blockit my codewords, a process the desktop app doesn’t make particularly intuitive.

Overall, I had to spend a solid chunk of time training Blockit—it definitely took more than five minutes of work to get value from this tool. If you’re already feeling stretched, taking those hours to invest in the AI might not be your top priority. But if you do, it may be worth it.

Blockit needs access to everything

An obstacle I ran into early with Blockit was that it didn’t want to work with just one Google calendar—it wanted access to every calendar app I had access to.
That would be fine if the people who owned those other calendars were also Blockit users, which they were not. Blockit only works if you share all your calendar data with it, and if you are an entrepreneur or contractor who regularly works with other companies and is copied into their calendars, you likely don’t have the authority to give Blockit permission to see everything you can see. You might also have personal privacy concerns that would keep you from sharing certain information with Blockit. As a result, you might end up letting the app see only half the picture—which could make it less adept at sorting out your schedule for you.

Another hurdle for the AI was the fact that I don’t schedule everything in my calendar. I don’t block off time for certain kinds of work or log when I’m taking free time. I also often block off a day with a reminder like “parents arriving today,” which makes it look like I’m busy all day—but I’m not really. I tried to clean up my calendar and make it more faithful to what my days actually look like, but I gave up after spending an hour planning out just two weeks into the future. In that sense, Blockit might be better suited to someone who is starting from scratch—say, joining a new company—or whose company calendar system has become unwieldy.

Advantages of large-scale integration

Blockit is supercharged when other people in your contacts list have it too: Your AI agent can communicate directly with their AI agent and set up a meeting for you with minimal human engagement. Unfortunately, none of my regular contacts have Blockit. The company behind it has put nothing into marketing, so its customer base grows by word of mouth only. This brings me back to a realization I raised earlier: Blockit may work best on a company-wide scale rather than at an individual level.
The app is genuinely helpful for individuals, but if it were integrated across a team or a company, I can see it taking on some of the core functions of a secretary or EA with little effort. (What the final pricing would be in my case, should I continue to use it past the free trial, is unclear.)

Company-wide adoption would also clear another potential hurdle with Blockit: Not everyone is used to having an AI agent ask them for their availability. If you’re trying to book a coffee date with an elderly relative, for example, or set up an intro call with a first-time contact, they might be a little skeptical. On a company-wide scale, however, Blockit may be just as intuitive as other AI-powered productivity tools, whether schedulers like Sunsama, Structured, or Todoist; note-takers like Fireflies.ai or Otter.ai; or management systems like Airtable or Jira. And, importantly, if your company invests in a tool like Blockit, it would likely become as big a part of employee workflow as any other software-as-a-service product.

By Claire Cameron, Freelance Writer