Wednesday, March 4, 2026

AI Adoption Has Surged to 78 Percent in This 1 Industry—but There’s a Catch

One industry has gone from barely touching AI to mass adoption in just two years. AI adoption in the legal field jumped from 23 percent to 78 percent, faster than in finance and healthcare. Litify’s third annual State of AI in Legal Report, which surveyed hundreds of legal professionals across law firms, corporate legal departments, and plaintiff practices, found that legal professionals are now among the fastest AI adopters anywhere.

But there’s a problem hiding inside that adoption number. Only 14 percent say AI is helping them reduce costs. Just 7 percent report billing more time. Legal firms rushed to buy the sports car, then kept driving it in first gear. The gap between “we use AI” and “this changed our economics” is enormous, and it’s widening.

“At Litify, we view this as an ‘AI maturity gap,’” notes Curtis Brewer, CEO of Litify, the legal operations platform used by 55,000+ legal professionals. “A firm that relies solely on a general-purpose tool like ChatGPT is only at the first step of its maturity journey.”

The Litify data reveals exactly where firms are stuck. ChatGPT dominates usage at 66 percent, followed by Microsoft Copilot (42 percent) and Google Gemini (24 percent). These are general-purpose tools—not legal-specific platforms. And while 66 percent use AI for legal research and 39 percent for summarization, only 6 percent use it for creating invoices and 5 percent for client communication. Firms are deploying AI for tasks that feel productive but don’t directly touch revenue.

Why freemium tools hit a wall

General-purpose AI tools work well for research and summarization. The problem isn’t that they’re bad, but that they plateau quickly. That ceiling is exactly why legal-specific platforms like Harvey—built from the ground up on legal data and trained on case law, contracts, and regulatory frameworks—have been gaining traction at major firms.
Harvey now counts PwC, A&O Shearman, and half of the 100 highest-grossing law firms in the U.S. among its clients, and has raised over $1.2 billion, with reports of another $200 million round in the works at an $11 billion valuation—partly on the argument that generic AI simply wasn’t built for legal nuance.

“The primary limitation of these general-purpose tools is their lack of legal and business context,” Brewer says. “Legal work is defined by nuances — solicitation rules, jurisdictional requirements, compliance standards, and practice-area-specific workflows — that general models often overlook.”

Then there’s the context problem. Ask ChatGPT to summarize a case, and it only sees what you feed it — not the case history or the client’s background. And since it also can’t take action after summarizing, it’s more or less a dead-end tool.

“A legal-specific tool that lives alongside your data and processes can summarize the case and suggest the next best actions or additional questions to ask,” Brewer says. “As the industry raises the bar, firms that delay are doing more than just missing out on features — they are widening a performance gap that may soon become impossible to close.”

The shadow IT security risk

Here’s where the adoption-without-governance problem gets dangerous: Only 41 percent of firms have an AI policy, and only 45 percent say their staff receive sufficient training. But 78 percent are using AI tools. That means roughly a third of legal professionals may be using AI in what amounts to a shadow IT environment, with no oversight, guardrails, or policy.

“Security, security, security!” Brewer says.
“Given the highly sensitive nature of legal data, business leaders should be concerned that nearly a third of their staff may be using AI in a ‘shadow’ environment without direct IT oversight.”

When employees use public AI tools, they might paste in confidential client information or HIPAA-protected medical records without thinking twice. These systems have no real safeguards. One careless prompt could mean a data breach, regulatory violation, or destroyed client relationship.

“When firms fail to provide proactive guidance and purpose-built tools, staff will seek their own solutions,” Brewer explains. “If AI adoption isn’t intentional and structured from the top down, firms risk losing the very efficiency gains they sought in the first place, while exposing themselves to additional risks.”

What workflow integration actually looks like

The difference between AI as an assistant and AI as a business driver comes down to integration. Consider billing. Asking ChatGPT to create an invoice is like using your smartphone’s calculator instead of the accounting app. Sure, it works. But you still have to manually punch in every client detail, every payment amount, and every line item. You saved five minutes on the template and spent an hour filling it in. That’s unproductive.

“When AI ‘lives’ natively alongside your billing, client, and case workflows, the impact is fundamentally different,” Brewer notes. “It transforms from an assistant to a proactive business partner.”

An integrated AI tool doesn’t just generate a branded invoice template with client and matter details pre-filled. It can automatically suggest missing time entries or proactively identify billing errors. That’s the difference between saving 10 minutes and changing the economics of the entire billing process.
Litify’s clients who’ve embraced this level of integration are seeing dramatic operational scaling — some firms handle twice as many matters with the same staff, and the highest performers have grown headcount by up to 400 percent as they’ve expanded regionally and nationally.

The four-dimension framework

Brewer says firms need to move on four fronts at once.

1. Tools: You have to stop relying on ChatGPT alone, because that’s not going to get you there. You should move to legal-specific platforms that effectively integrate with your case management, billing, and client systems.

2. Readiness: Write an AI policy. Spell out which tools are approved, how to handle sensitive data, when humans must review output, and what to do when something goes wrong. Then treat training like a safety requirement, not an HR checkbox.

3. Task scope: Research and summarization are fine starting points. But firms that stay there are leaving money on the table. The next level is workflow automation — routing requests, running conflict checks, and building chronologies. Eventually, let AI assign cases, generate invoices, and handle intake.

4. Impact: Pick metrics before you spend another dollar. Cost per matter. Turnaround time. Write-off rates. Error rates. “The try-it-and-see period is ending,” Brewer says. “Leaders will expect ROI.”

Ultimately, the firms pulling ahead didn’t just buy software. They rewired how legal work gets done — from intake to invoice and research to billing — with training, governance, and measurement baked in from the start. You can keep using the sports car in first gear. But eventually, someone in your market will figure out where the other gears are.

BY KOLAWOLE ADEBAYO

Monday, March 2, 2026

15 Incredibly Useful Things You Didn’t Know NotebookLM Could Do

Generative AI may be both the most useful and the most mystifying tool of our modern-tech era. The problem—aside from all the endlessly documented issues around accuracy—is that generative AI generally seems to function in a DOS-like blank prompt form. The onus is squarely on you to figure out what to ask and how to put these saucy systems to use.

That black-box feeling is especially apparent when you look at NotebookLM, an “AI-first notebook” launched by Google nearly two years ago. The idea behind NotebookLM is that you upload your own source materials within carefully confined notebooks, and you can then lean on Google’s Gemini AI to interact with that material in all sorts of illuminating ways. Since each notebook is limited only to whatever source materials you supply, the prevalence of those pesky hallucinations seems to be less of an issue. And since everything within your NotebookLM notebooks is kept completely private—not even used for any manner of AI model training, according to Google—you can connect it to all sorts of subjects and use it to gain a level of deep insight that was never before so easily accessible.

But again, there’s the black-box challenge. When you first pull up NotebookLM, it’s tough to know where to begin and how to interact with the thing in practical, approachable ways. Even as someone who writes about technology for a living and has spent more time than most mortals thinking about this service, I realized I hadn’t entirely figured out how to use it in a way that would genuinely be helpful in my day-to-day life. So I challenged myself to dig deep, get beyond all the conceptual excitement, and come up with a series of real-world use cases for NotebookLM that any regular human could both appreciate and emulate. I’ve got 15 super-specific scenarios, all tried and tested, in which the artificial intelligence answer machine could be useful for you. Follow this road map and see which path holds the most promise from your perspective.

1. Your on-demand product answer machine

Up first is a possibility that’s supremely simple yet packed with productivity potential: Create a new NotebookLM notebook called “Product Manuals.” Then, every time you purchase a new appliance or device of some sort, search the web for a PDF version of its manual and add it into the notebook. If you really want to get wild, include an image of any warranty cards, too. Then, anytime you need to know anything about those products—how some part of them works, how to fix something that’s gone awry, or if and how you’re eligible for a warranty-related repair—just fire up that same NotebookLM notebook and ask, ask, ask away.

2. Your instant car support system

Next, try using NotebookLM to help wrangle the most expensive gadget you own. Do a similar web search for your current vehicle’s owner’s manual, then drop it into its own NotebookLM notebook with the vehicle’s name as the title. Repeat for any additional vehicles you own and any new ones you purchase down the road. After recently trading in our old minivan for a hybrid Honda CR-V, my wife and I wasted far too much time flipping through the vehicle’s paper manual to try to figure out what some random button on the dashboard did. Later, after downloading a PDF of the manual from Honda’s website and then uploading it into NotebookLM, it took me all of 10 seconds to reach the same answer—simply by asking. Lesson learned.

3. An interactive car maintenance journal

While we’re thinking about cars, every time you go to the mechanic, snap a photo of the service receipt and upload it into a NotebookLM notebook created specifically for that one vehicle. You can make it even more useful by uploading the same owner’s manual you found a moment ago into that notebook, too. Doing so will give you two very practical benefits: First, anytime a question comes up about what work you’ve had done on the vehicle or when a certain repair took place, you can just pull up that notebook and ask.
Second, with the manual and its instructions there alongside all of your history, you can bring the two sources of info together to ask NotebookLM targeted questions that take the manufacturer’s guidance and your past services into consideration—like, for instance, when you should rotate your tires next or what other possibilities you should be thinking about at your next oil change appointment. And on a related note . . .

4. An interactive home maintenance journal

Start a NotebookLM notebook for your house, then upload every invoice and estimate you get for a home repair as well as every receipt from a new appliance purchase. Whenever you next need to know when, exactly, your roof was replaced or in what year you got your current furnace—or even what brand and model it is—you’ll have a single simple place to ask and get answers. And that’s a heck of a lot easier than having an overflowing folder of assorted old papers to sift through in every such scenario.

5. Your personal company wiki

Does the company you run, or maybe just work for, have more handbook-type info than any reasonably sane human could possibly ingest and remember? If so, use a dedicated NotebookLM notebook to store all of it—guides, documents, operating procedures, even lists of contacts for different departments and purposes. From that moment forward, when a question comes up about how something is supposed to work or whom you’re supposed to contact for some particular purpose, your answer will never be more than a single quick question away.

6. Your instruction-expert wizard

Why limit yourself to work, maintenance, and appliances? With anything that has an instruction manual involved, dump a digital version of the document into its own NotebookLM notebook—even for board games. The next time any kind of question comes up related to those instructions, you’ve got a fast and effective way to get answers.

7. A contract deposit box

Whether you’re a freelancer juggling new contracts every month, an employee signing a new agreement each year, or an employer asking dozens of workers to sign your ever-evolving documents, creating a centralized repository for all your contracts can be a real time-saver in the future. Need to remember when you last signed something with a specific person or provider? Not sure what the terms of some agreement required—or when a particular document expires? Whatever the case may be, once the info’s all in NotebookLM, you’ve always got an easy place to ask—and let the system find the answer for you.

8. Your meeting memory

Provided you’re using something to record important meetings—be it a general-purpose AI-powered note-taker, a video-call-specific summarizer, or an app designed to take notes during regular audio calls—that history will be much more useful if you bring it over to a NotebookLM notebook. With such a system in place, you can simply go to NotebookLM and ask targeted questions about any of your past meetings instead of having to dig through the transcripts individually.

9. An interview inquiry station

While we’re thinking about transcripts, if you conduct any kind of interviews—with job candidates, as a journalist, or for any other purpose—take each transcript and create a NotebookLM notebook specifically for it. (Or, if you have a group of related interviews, put them all in one notebook.) Upload either the audio or the text, depending on what’s available, and then take the opportunity to ask NotebookLM questions about your conversation—be they specific (like what the person said about some particular topic) or broad (like asking NotebookLM what interesting quotes came up during the interview that you might have missed).
You’ll obviously still want to refer to the full transcript at times—and to double-check the accuracy of any quote you’re actually citing anywhere—but it can be a helpful way to find something fast when you can’t remember the exact words involved or to stumble onto something you might have otherwise glossed over.

10. An intelligent feedback interpreter

If your business relies on any manner of feedback to guide its operations, do yourself a favor and create a NotebookLM notebook where you can upload those results—as spreadsheets or in whatever form they take. From reviews to survey responses, you’ll then be able to ask NotebookLM to help summarize the key themes and trends, pick out recurring positive or critical responses, and even find particularly memorable quotes for potential testimonial use.

11. Your performance review reviewer

For anyone managing employee performance, NotebookLM can be a major asset. Create an individual notebook for each employee and place all their performance reviews there—then, when the time comes for the next assessment, you’ll have an easy way to revisit past highlights to identify trends and provide context for comparison.

12. A financial reality checker

Provided you’re comfortable with the notion, NotebookLM can turn up some really interesting insights by analyzing things like your tax returns, bank statements, and credit card statements over the years. (For what it’s worth, Google is explicit about the fact that it doesn’t in any way access, share, or use any data uploaded into NotebookLM—even for AI model training.) With that type of info in its own dedicated notebook, you can ask NotebookLM to give you an overview of your spending habits, to identify areas where you could cut back or potentially be eligible for additional tax benefits, and to surface other such pointers that you can then investigate more thoroughly on your own or with an accounting professional.

13. An audio-video reading resource

Ever find yourself running into interesting-looking videos or podcasts and just not having the time or inclination to sit through them in their entirety? Make yourself a NotebookLM notebook called “Audio-Video,” then drop a link to any YouTube video or audio clip you encounter into that area. You can then ask NotebookLM for the high points—or for any specific info you’re looking to find—from any of the clips individually or even collectively.

14. An elevated reading list

NotebookLM can be a fantastic way to collect links you want to revisit and read later. With a notebook called “Reading List,” you can see the entire text of any article whose URL you add in, right then and there and in a stripped-down and simplified format—and you can ask NotebookLM for information about, or even summaries of, any or all of your saved links, too: What was that article I saved from New York a while back? Give me the most important takeaways from that Fast Company piece I saved on privacy the other day. I’m never going to catch up with everything I saved this week. Show me a summary of all the articles I added over the past seven days. You get the idea. And finally . . .

15. Your calendar companion

Get a whole new level of insight into how you’re spending your time and what’s actually gone down on your calendar by exporting your complete calendar history and then importing it into NotebookLM—where you can create a custom notebook to interact with it. In Google Calendar, this is as easy as clicking the gear-shaped icon in the desktop website’s upper-right corner, selecting “Settings,” then clicking “Import & export” in the left-of-screen side menu and clicking the “Export” option. You’ll then need to take the resulting .ics file and convert it into plain text—which you can do in a matter of seconds with a free conversion website like this one.
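If you’d rather not paste your calendar history into a third-party conversion site, the .ics-to-text step can also be done locally with a few lines of Python. This is a rough sketch of my own (not a Google or NotebookLM tool), assuming a straightforward export; it unfolds wrapped lines per the iCalendar format and keeps just each event’s date and title:

```python
from datetime import datetime

def ics_to_text(ics: str) -> str:
    """Convert raw iCalendar (.ics) text into plain, searchable lines.

    Only each event's DTSTART and SUMMARY are kept. Long lines folded
    with a leading space (per RFC 5545) are unfolded first.
    """
    # Unfold: a line beginning with a space continues the previous line.
    unfolded = ics.replace("\r\n", "\n").replace("\n ", "")

    events, current = [], {}
    for line in unfolded.splitlines():
        if line == "BEGIN:VEVENT":
            current = {}
        elif line == "END:VEVENT":
            events.append(current)
        elif ":" in line:
            key, value = line.split(":", 1)
            # Drop property parameters such as ";TZID=America/Denver".
            current[key.split(";")[0]] = value

    out = []
    for ev in events:
        start = ev.get("DTSTART", "")
        # Basic timestamps look like 20260302T140000 or 20260302.
        try:
            when = datetime.strptime(start[:8], "%Y%m%d").strftime("%B %d, %Y")
        except ValueError:
            when = start  # leave anything unexpected as-is
        out.append(f"{when}: {ev.get('SUMMARY', '(no title)')}")
    return "\n".join(out)
```

Save the returned text to a .txt file and it’s ready to upload. Recurring events and other exotic fields would need more handling, but for a basic export this covers the searchable essentials.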
Finally, with the resulting .txt file in a NotebookLM notebook, try asking questions about anything from how many meetings you attended over a given time period to how many hours you spent at the doctor’s office last year. You can also ask for specific info such as how often, on average, you get haircuts or how long it’s been since you last had a job interview.

[Image: google-notebooklm-calendar.jpg — You might be surprised at the types of insights you uncover with your calendar data in NotebookLM’s metaphorical hands.]

The possibilities are practically endless—and all you’ve gotta do is ask.

BY FAST COMPANY

Friday, February 27, 2026

Why Google Gemini Is Emerging as a Hot New AI Tool for Startups

For years, OpenAI held the default position in most startups’ tech stacks. It was the tool founders reached for when they needed a language model, a voice engine, or a general-purpose AI backbone. But for some startups, Google’s Gemini AI has emerged as a newly preferred productivity tool, and their reasons for adopting the tech go well beyond the technology itself.

Google is in the midst of an aggressive push to convince startups that its AI solutions are superior. Leading that charge is Darren Mowry, head of Google Cloud’s global startup team. Mowry confirms that yes, Gemini use is rising among startups, and it’s resulting in new business for Google Cloud, which is the only way for businesses to use the Gemini API. Instead of just automatically selecting Amazon Web Services as their cloud provider, Mowry says, new startups are now choosing Google Cloud in part so they can get access to Gemini.

Google has always been central to the AI business; in 2017, the company released a seminal AI research paper called “Attention Is All You Need,” which introduced the “transformer” architecture that makes modern AI models possible. But up until last year, the company lagged behind its competitors when it came to business adoption. Gemini launched in 2023 under its original name, Bard, and Google’s AI quickly developed a reputation for hallucinating facts. Remember when Google’s AI-generated search answers recommended that people put glue on pizza?

That changed in April 2025, when Google released Gemini 2.5 Flash, a model that handily beat OpenAI across a number of benchmarks and, according to Mowry, ignited a wave of interest in Google’s AI offerings that has only grown with the release of subsequent models. One of the factors that differentiates Google from its competitors is that it offers a fully vertically integrated solution, in which the company can handle each part of the tech stack.
Not only can startups choose from a wide selection of Google-made and external models on Google Cloud, Mowry says, but the company can also provide those startups with technical assistance to help make sure they’re getting the most out of both the models and the Google-made chips they run on. According to Mowry, this vertical integration “shrinks down the time” it takes for founders to build.

Some founders are finding that Gemini is a useful way for non-technical employees to enjoy the benefits that software developers have gotten from agentic coding tools like Claude Code. Aakash Shah, founder and CEO of allergy care startup Wyndly, says that while his engineers have gravitated toward Anthropic, his operations team wanted to use Gemini in the applications they’re already comfortable with, like Google Docs and Gmail. A common use case? Asking Gemini “who did I email on such-and-such day?” Shah says that everyone at his company now has Gemini, enabling them to chat with Gemini across the entire Google Workspace suite, including Gmail, Docs, Sheets, Meet, and NotebookLM, Google’s app for interacting with uploaded documents (and even turning them into audio podcasts). “I’m trying to get everyone to be AI-first,” Shah says, “and part of that is helping them use it where they already are instead of forcing it on them.”

Sheltered International, a customs broker and freight forwarder that primarily deals with international imports, is currently using Gemini to help speed up the process of filling out customs paperwork. Founder Andrew Ciccarone says that when a shipment comes in, his company is responsible for verifying its commercial invoice data and ensuring that it’s marked with the correct Harmonized Tariff Schedule (HTS) code. According to Ciccarone, Sheltered International has started using a fine-tuned Gemini model to extract relevant information from the commercial invoice data (which often comes in the form of a PDF), validate it, and reformat it into an Excel spreadsheet.
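The article doesn’t detail how Sheltered International’s pipeline is built, but the validate-and-reformat step that follows the model’s extraction is easy to picture. Here’s a hypothetical sketch (the field names, the 10-digit HTS check, and the CSV output are all my assumptions, not details from the company):

```python
import csv
import io
import re

def rows_to_csv(extracted_rows):
    """Validate model-extracted invoice line items and emit spreadsheet-ready CSV.

    Each input row is a dict with 'description', 'hts_code', and 'value_usd'
    keys (hypothetical names). A full U.S. HTS code has 10 digits, often
    written as 1234.56.7890; anything else gets flagged for human review,
    since a human broker still signs off on the classification.
    """
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["description", "hts_code", "value_usd", "needs_review"])
    for row in extracted_rows:
        digits = re.sub(r"\D", "", row["hts_code"])  # strip dots/spaces
        valid = len(digits) == 10
        writer.writerow(
            [row["description"], digits, row["value_usd"], "" if valid else "YES"]
        )
    return buf.getvalue()
```

The point of a step like this is that the model’s output is never trusted blindly: anything that fails a cheap mechanical check gets routed back to a person, which matches Ciccarone’s note below that employees still verify the AI’s work.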
The Gemini models are considered to have state-of-the-art computer vision, enabling them to examine images and documents in incredibly granular detail. “What the AI can do is just give us a huge leap forward before the customs broker comes in to ensure everything is classified correctly,” says Ciccarone, adding that for a small operation handling the complexity of international trade documentation, Gemini has streamlined a process that used to be painfully manual. When this process was fully done by humans, it required hours of manual scanning through lengthy, unformatted documents. Still, Ciccarone admits that the company isn’t saving that much time yet, because employees still need to verify that the AI’s output is accurate. But as the fine-tuned Gemini model improves, he expects to see a significant increase in productivity.

Companies are also integrating Gemini’s machine vision abilities into their actual products. Take Validity, a startup that sells itself as an all-in-one solution for the entire email marketing process. Validity chief technology officer Matt Gore says that the company’s newest product, a platform called Validity Engage, has been largely built around Gemini’s capabilities. Engage gives marketers access to four purpose-built AI agents that can analyze, optimize, and reformat emails according to internal campaign style guides. Using Gemini 3 Pro, Validity can now detect and fix granular visual details in emails (like whether a certain font matches a brand’s approved style guide) that no model could reliably catch a year ago. For instance, Gore says, emails can often appear illegible on computers and phones that are set to dark mode; with Validity Engage, marketers can make sure their email will be visible to everyone who receives it. Gore says that Validity decided to “hitch our wagon to Gemini” following extensive testing.
When developing the new feature, Gore used an orchestration tool called Mastra.ai to compare how various models approached roughly 150 common email issues, taking note of each model’s cost and speed. In the testing, Gore says, Gemini 3 Pro stood out as being “just leaps and bounds ahead of others in terms of computer vision.” He says that the Gemini 3 models are particularly good at identifying the “bounding box” of the email—basically the frame containing the email’s content.

Beyond computer vision and text, Gemini’s speech capabilities have been a major selling point for founders. David Yang, founder of solopreneur-focused AI receptionist company Newo, is using Gemini to provide his virtual receptionists with voices. Yang founded Newo to help solo founders capture more inbound leads by giving everyone access to an always-on receptionist who can answer a phone call at any hour of the day. But for Newo to work, Yang needed voice models with extremely low latency and high emotional intelligence. Originally, the company’s AI receptionists were powered by OpenAI’s text-to-speech and speech-to-text models, but the lag between asking a question and hearing an answer was too long. Now, Yang says that Newo uses Gemini 2.5 Flash Native Audio, a recently released model that can understand and generate audio in real time. Not only is the new model incredibly fast, Yang says, but it can also understand emotional intent, an important signal that’s usually lost with more traditional speech-to-text transcription models.

As part of his push to bring startups into the Google ecosystem, Mowry says his team is currently hiring engineers and former founders to staff up a “founder advocacy” group.
These employees’ “sole purpose in life” will be “to wake up and meet founders that have really big problems,” he says, and “help them move from ideation into actually getting things built.” The goal is to “catch these cohorts of startups early, give them a little bit of credit assistance, engineering assistance, and help them get off the ground.” This soup-to-nuts approach is helping Google win startup business, and positioning the company as the new default AI partner for the next generation of businesses.

BY BEN SHERRY @BENLUCASSHERRY

Wednesday, February 25, 2026

3 Ways Digital Tools and AI Help Simplify Tax Season

Tax season used to be my least favorite part of being a creative small business owner. It always felt overwhelming. As the founder of Mochi Kids and Mochi Play Store, I juggle designing and producing children’s clothing, managing inventory and wholesale orders, and running a brick-and-mortar store. When tax time hits, all the information needed to run my business has to be accurate, organized, and easy to find. To make the tax process more manageable, I’ve started relying on digital tools like Adobe Acrobat to stay organized and prepared. Here’s how I approach tax prep, step by step, to simplify everything.

Step 1: Digitizing paperwork before it piles up

Paper forms used to be my downfall. I’d toss them into a folder and promise myself I’d deal with them later. By tax season, “later” meant hours of sorting and searching. Now, I use Adobe Scan to digitize receipts, tax forms, and donation confirmations as soon as I receive them. I simply snap a photo with my phone, and the app converts it into a clear, searchable PDF. I name each file and save it to a folder labeled for the current tax year. Later, I can search documents by vendor, keyword, or dollar amount.

Step 2: Organizing documents by category

Once everything is digitized, I focus on organizing. Instead of keeping dozens of separate files, I use Acrobat’s Combine and Organize tools to merge related documents into a single file and sort them by category. For example, I combine PDFs for charitable contributions, income, expenses, and deductions. Acrobat makes it easy to reorder pages, delete duplicates, and add bookmarks so I can quickly find what I need. This is especially helpful when preparing and double-checking documents for my accountant before filing.

Step 3: Protecting and signing your tax documents with AI

Tax documents contain some very sensitive information, so security is non-negotiable. Before sharing files, I password-protect PDFs and limit access to only the people who need them.
I can use Protect & Sign with AI Assistant to password-protect sensitive information or sign the documents for me. Taking a few extra seconds to secure files gives me peace of mind.

A calmer way to approach tax season

Tax season may never be exciting, but it doesn’t have to be chaotic. By scanning documents early, organizing them thoughtfully, protecting sensitive information, and handling signatures digitally, I’ve made the process far more manageable. This allows me to focus on the fun parts of running a creative small business.

Why I trust Adobe Acrobat for my tax prep

Adobe Acrobat has been a game changer for me. It helps me stay organized, save time, and feel confident that my sensitive information is secure. Whether you’re an individual filer, freelancer, or small business owner, Acrobat has the tools to make tax season less stressful. From digitizing paper documents to organizing files and securing sensitive information, it’s the ultimate tax prep partner. Here’s to a stress-free tax season!

BY AMANDA STEWART FOR ADOBE

Monday, February 23, 2026

This Single ChatGPT Prompt Can Do Hours of Market Research in Minutes—Here’s How

Market research can be a slow, fragmented, and difficult process, often involving tedious internet searches, questionable data sources, and time-consuming manual synthesis. That makes it a great candidate for some assistance from AI. What’s more, an update to a popular ChatGPT feature has made it even better at doing this kind of work.

Imagine that you have a potential business idea but still need to validate how viable it actually is, identify primary competitors in your market, and develop an ideal customer persona. Instead of spending hours collating data, explains Dan McCarthy, an associate professor of marketing at the University of Maryland, you can use Deep Research, a ChatGPT feature that directs an AI agent to develop a comprehensive, well-cited report on any topic.

Last week, OpenAI upgraded Deep Research with some new abilities. The feature now runs on GPT-5.2, one of the company’s most recent models (previously it ran on the much older o3 model), and can now prioritize specific websites in its search process. Deep Research is available to all paid ChatGPT users. Here’s how to use it to get some thorough market research done quickly.

Step 1: Get your prompt right

To test how this feature could help with market research, I pretended that I wanted to start a digital transformation firm based in Denver with a focus on upgrading bars with mobile, bar-to-table ordering capabilities. All I needed to do to get started was click the plus button next to the text box, select More, then Deep Research, and enter a prompt. This prompt determines the information that ChatGPT prioritizes in its search, so it helps to be verbose. If you need help developing a lengthy prompt, try using ChatGPT itself to write it. McCarthy, who uses AI tools extensively, says that an easy way to develop a comprehensive prompt is to activate the chatbot’s voice mode and simply have a conversation with it.
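A verbose, structured prompt also doesn’t have to be rewritten from scratch for every research question. As a rough illustration of the kind of structure that works well (the section names and wording here are my own sketch, not an official OpenAI format), you could even assemble one programmatically:

```python
def build_research_prompt(role, objectives, scope, output_format, priority_sites=None):
    """Assemble a structured research prompt from reusable parts:
    an analyst role, concrete objectives, the research scope, the
    desired report format, and optional sources to prioritize."""
    sections = [
        f"You are {role}.",
        "Objectives:\n" + "\n".join(f"- {o}" for o in objectives),
        f"Scope: {scope}",
        f"Report format: {output_format}",
    ]
    if priority_sites:
        sections.append(
            "Prioritize these sources:\n" + "\n".join(f"- {s}" for s in priority_sites)
        )
    return "\n\n".join(sections)

prompt = build_research_prompt(
    role="a market research analyst",
    objectives=[
        "Assess the viability of the business idea",
        "Map out the competition",
        "Define the ideal customer persona",
    ],
    scope="Bars in the Denver metro area",
    output_format="A sectioned report with cited sources and confidence ratings",
    priority_sites=["census.gov"],
)
```

This is only a scaffold; the payoff comes from filling each section with the specifics of your idea, which is where the conversational approach described next helps.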
Once you’ve explained what you want, McCarthy says, you can ask ChatGPT, “Given all this that I’m telling you, what do you think would be the best thing that I should even be asking you?” That should help clear up any blind spots you might’ve missed. According to McCarthy, this method should produce a solid prompt that you can give to the Deep Research agent. When I asked ChatGPT to help expand my prompt, the platform generated a 673-word result. This prompt (which you can view here) defined the agent as a market research analyst and gave it objectives to determine the business idea’s viability, map out the competition, and define my ideal customer’s persona. Additionally, it provided details on the scope of the research and instructions for how the agent should format its report. I also used ChatGPT to develop a list of specific websites for the Deep Research agent to prioritize in its search.

Step 2: Start the research

I entered my ChatGPT-created prompt, selected the Deep Research feature, and pressed return. Before getting to work, the agent broke down its objectives into the following bullet points:

- Collect primary vendor docs and pricing pages starting with user-preferred sites.
- Survey industry, local Denver sources, and hospitality reports for market context.
- Compile POS integration lists, local competitors, and implementation partners in Denver.
- Analyze demand, model ROI scenarios, and estimate Denver bar counts and adoption rates.
- Draft recommendations, ICP personas, GTM plan, and cite sources with confidence ratings.

Over the next 21 minutes, the agent searched through hundreds of web pages. It found liquor license databases, census information, and data regarding competitors in Denver’s hospitality-focused digital transformation market. It compiled all this information into a multi-section report.

Step 3: Read the report

That report (which you can view here) ended up being roughly 4,000 words.
It included an overview of the market, identified customer pain points, and listed out my potential competitors. The report also included recommendations for how to position my business, strategies to break into the Denver hospitality scene, and even identified a small business that would likely be my direct competitor: a Denver-based POS integrator called Megabite. ChatGPT found that while my business idea had potential, it wouldn’t fully meet the needs of Denver-based bar owners, who have reported that bar-to-table ordering can actually lead to fewer sales and tips. Instead, the report suggested, I should consider a system that sits on top of popular POS platforms and lets diners open a digital tab instead of paying for every new drink they order.

What the expert thinks of the result

McCarthy told me he was impressed by the report that Deep Research produced. In particular, he was pleasantly surprised by the agent’s cleverness in using liquor licenses to get a sense of the market size, and its thoughtfulness in calling out disruption to bar culture as a potential blocker to the business. But the report wasn’t perfect. McCarthy said much of what was included was unnecessary or needlessly complex. An easy prompt to fix this? “Just tell it, ‘Explain it to me like I’m an idiot.’” McCarthy adds, “I do that all the time.” He says that a solid market research report should also answer questions regarding the scope of adoption and how often repeat purchasing is expected. McCarthy also says that users should direct the Deep Research agent to be very upfront about the data it attempted to get but couldn’t. Many websites block AI agents from engaging with their content to prevent data scraping, which can hinder the research process. By telling your agent to list out the sites that it couldn’t access, you can manually obtain that data and add it to the analysis.
Our bar-to-table digital transformation firm will have to remain a pipe dream for now, but it’s clear that AI has made the process of taking an idea from zero to one easier and faster than ever. If you have an idea for a new business or are planning on an expansion or pivot in your current business, consider giving Deep Research a spin. It might unearth something that makes you think in a different way. BY BEN SHERRY @BENLUCASSHERRY

Friday, February 20, 2026

China’s latest AI is so good it’s spooked Hollywood. Will its tech sector pump the brakes?

Tom Cruise and Brad Pitt tussle in hand-to-hand combat on a rubble-strewn rooftop; Donald Trump takes on kung-fu fighters in a bamboo grove; Kanye West dances through a Chinese imperial palace while singing in Mandarin. Over the past week, a slew of cinematic videos of celebrities and characters in absurd situations have gone viral online, with one commonality –– they were created using a new artificial intelligence tool from Chinese developer ByteDance, sparking anxiety over the fast-evolving capabilities of AI. The new model, named Seedance 2.0, is among the most advanced of its kind and has quickly drawn praise for its ease of use and the realistic nature of the videos it can generate in minutes. But soon after the release, media behemoths Paramount and Disney sent cease-and-desist letters to ByteDance –– the company most famous for developing the video-sharing app TikTok –– accusing it of infringing upon their intellectual property. Hollywood’s premier trade organization, the Motion Picture Association, and labor union SAG-AFTRA also condemned the company for unauthorized use of US-copyrighted works. ByteDance responded with a statement saying it would implement better safeguards to protect intellectual property. Seedance 2.0 has quickly become the most controversial model in a wave of them released by Chinese technology companies this year, as the competition to dominate the AI industry heats up. China’s government has made advanced tech a key tenet of its national development strategy. In a televised Lunar New Year celebration this week, the country’s latest humanoid robots stole the show by performing martial arts, spin kicks and back flips. Such improvements are often met with unease, particularly in the US, China’s chief technological and political rival, in a spiral of one-upmanship redolent of its 20th-century “Space Race” with the Soviet Union. 
“There’s a kind of nationalist fervor around who’s going to ‘win’ the space race of AI,” said Ramesh Srinivasan, a professor of information studies at the University of California, Los Angeles. “That is part of what we are seeing play out again and again and again when it comes to this news as it breaks.” Here’s why the latest technology from ByteDance has rattled the world.

What’s so scary about Seedance 2.0?

The AI video generation model, while still not publicly available to everyone, was hailed by many as the most sophisticated of its kind to date, using images, audio, video and text prompts to quickly churn out short scenes with polished characters and motion editing control at lower cost. “My glass half empty view is that Hollywood is about to be revolutionized/decimated,” writer and producer Rhett Reese, who worked on the Deadpool movie franchise, wrote on X after seeing the video of Cruise and Pitt. One Chinese tech blogger using Seedance 2.0 said it was so advanced that it was able to generate realistic audio of his voice based solely on an image of him, raising fears over deepfakes and privacy. Afterwards, ByteDance rolled back that feature and introduced verification requirements for users who want to create digital avatars with their own images and audio, according to Chinese media. Rogier Creemers, an assistant professor at Leiden University in the Netherlands, who researches China’s domestic tech policy, said part of the concern stems from the rapid rate at which Chinese companies have released new iterations of AI technology this year. That has also put China on the back foot in assessing the potential negative impacts of each improvement, he said. “The more capable these apps become, automatically, the more potentially harmful they become,” said Creemers. “It’s a little bit like a car.
If you build a car that can drive faster, that gets you where you need to be a lot more quickly, but it also means that you can crash faster.”

What’s being done to ease concerns?

After outcry from Hollywood, ByteDance said in a statement that it respects intellectual property rights and will strengthen safeguards against the unauthorized use of intellectual property and likenesses on its platform, though it did not specify how. User complaints prompted the recent ByteDance rollback and have also forced popular Chinese Instagram-like app RedNote to restrict any AI-made content that has not been properly labeled. And the arrival of Seedance 2.0 coincides with a tightening of regulations for AI content in China. China’s domestic regulation of AI surpasses the efforts of most other countries in the world, in part because of its longstanding censorship apparatus. Last week, the Cyberspace Administration of China said it was cracking down on unlabeled AI-generated content, penalizing more than 13,000 accounts and removing hundreds of thousands of posts. However, the restrictions on AI-generated content on the Chinese internet are often unevenly enforced, Nick Corvino wrote in ChinaTalk, a China-focused newsletter. He attributed the problem in part to difficulties policing content across different apps, as well as incentives for tech companies to encourage user content. “With Chinese social media platforms locked in fierce competition, both with each other and the Western market, none wants to be the strictest enforcer while others let content flow freely,” he said in a post following the launch of Seedance 2.0.

What does this mean for China’s AI industry?

According to analysts, China is walking a fine line between encouraging domestic development of AI models and maintaining strict controls on how those models are used.
“People in the AI business would always say what the Chinese government is doing is slowing down the development of AI,” said Creemers of Leiden University. “Obviously a content control system like the Chinese that essentially limits what you can produce, that’s never fun.” Pressure to stop using certain images or data, from US media giants or other sources, may also impact efforts to refine AI. Disney accused ByteDance of illegally using its IP to train Seedance 2.0, but recently struck a deal with US company OpenAI to give Sora – OpenAI’s video generation model and Seedance competitor – access to trademarked characters like Mickey and Minnie Mouse. “These agreements have everything to do with what kind of data are they going to get access to that they would not have otherwise, or that their competitors would not have?” said Srinivasan from UCLA. “There’s a high probability that the Sora products could be more refined and more advanced, if the data are better suited for the models to learn from.” At the same time, restrictions on how AI can be used or trained could also spur greater innovation, he said, noting how Chinese company DeepSeek –– blessed with a much smaller budget than the industry leaders –– built a competitive AI-powered chatbot. “When it comes to Chinese breakthroughs in AI, the DeepSeek revelation was so important because they showed that there are other ways of training language models in ways that are more economical,” he said. By Stephanie Yang

Wednesday, February 18, 2026

AI Promised to Save Time. Researchers Find It’s Doing the Opposite

Artificial intelligence boosters often promise the tech will lead to a reduced workload. AI would draft documents, synthesize information, and debug code so employees can focus on higher-value tasks. But according to recent findings, that promise is misleading. An ongoing study, published in the Harvard Business Review, joins a growing body of evidence that AI isn’t reducing workloads at all. Instead, it appears to be intensifying them. Researchers spent eight months examining how generative AI reshaped work habits at a U.S.-based technology company with roughly 200 employees. They found that after adopting AI tools, workers moved faster, took on a wider range of tasks, and extended their work into more hours of the day, even if no one asked them to do so. Importantly, the company never required employees to use AI. It simply offered subscriptions to commercially available tools and left adoption up to individuals. Still, many workers embraced the technology enthusiastically because AI made “doing more” feel easier and more rewarding, the researchers said. That enthusiasm, however, came with unintended consequences. Over time, workloads quietly expanded to overwhelming levels. The gradual, often unnoticed, creep in responsibilities led to cognitive fatigue, burnout, and weaker decision making. While AI can produce an initial productivity surge, the researchers warn that it may ultimately contribute to lower-quality work and unsustainable pressure. To track these changes, the researchers observed the company in person two days a week, monitored internal communication channels, and conducted more than 40 in-depth interviews across engineering, product, design, research, and operations. They found that job boundaries began to blur. Employees increasingly took on tasks that previously belonged to other teams, using AI to fill knowledge gaps. Product managers and designers started writing code. Researchers started handling engineering tasks.
In many cases, work that might once have justified additional hires was simply absorbed by existing staff with the help of AI. For engineers, the shift created a different kind of burden. Rather than saving time, they spent more hours reviewing, correcting, and guiding AI-generated work produced by colleagues. What had once been straightforward code review expanded into ongoing coaching and cleanup of flawed outputs. The researchers described a feedback loop: AI sped up certain tasks, which raised expectations for speed. Higher expectations encouraged greater reliance on AI, and that, in turn, widened both the scope and volume of work employees attempted. The result was more activity, not less. Many participants said that while they felt more productive, they did not feel any less busy. Some actually felt busier than before AI arrived. “You had thought that maybe, oh, because you could be more productive with AI, then you save some time, you can work less,” one engineer told the Harvard Business Review. “But then, really, you don’t work less. You just work the same amount or even more.” What looks like a productivity breakthrough, the researchers concluded, can actually mask silent workload creep. And overwork, they warn, can erode judgment, increase errors, and make it harder for organizations to distinguish genuine efficiency gains from unsustainable intensity. To counter these risks, the researchers proposed a protective approach they call “AI practice,” a set of intentional norms and routines that define how AI should be used at work and, crucially, when to stop. Without clear boundaries, they caution, AI makes it easier to do more but harder to slow down. BY LEILA SHERIDAN

Tuesday, February 17, 2026

What Is AI.com? The $70 Million Domain Being Called ‘the Absolute Peak of the AI Bubble’

On Super Bowl Sunday, the most talked-about advertisement was for a product that hadn’t even launched yet. During the game’s fourth quarter, a 30-second commercial aired advertising something called “AI.com,” ending with a call to “claim your handle” along with three usernames: Mark, Sam, and Elon. Missing from the commercial? Any information about what AI.com actually does. But the mysterious teaser worked; web searches for “What is AI.com” exploded. According to EDO, a company that helps businesses measure the impact of advertisements, AI.com was the top-performing ad of the night, with 9.1 times as much engagement as the average Super Bowl ad. But when interested people rushed to the website, they found an error message waiting for them. The company’s website had immediately crashed.

What is AI.com, anyway?

AI.com was not co-founded by Mark Zuckerberg, Sam Altman, and Elon Musk. In fact, they have nothing to do with the company at all. The founder is actually Kris Marszalek, who previously co-founded Crypto.com. The Financial Times reported that in April 2025, Marszalek paid $70 million to obtain the AI.com domain, which the publication says is the most ever spent on a domain, far more than the $12 million Marszalek spent to acquire Crypto.com in 2018. Marszalek says he is currently the CEO of both companies.

What does AI.com actually do?

On its now-functioning website, the company describes itself as a platform offering access to a “private, personal AI agent that doesn’t just answer questions but actually operates on the user’s behalf — organizing work, sending messages, executing actions across apps, building projects, and more.” The company wrote that the agent will soon be able to help users “trade stocks, automate workflows, organize and execute daily tasks with their calendar, or even update their online dating profile.” Sounds impressive, but it turns out that the tech powering AI.com is far from proprietary.
In an article posted to Marszalek’s personal X account, the founder wrote that “AI.com is the world’s first easy-to-use and secure implementation of OpenClaw, the open-source agent framework that went viral two weeks ago.”

What is OpenClaw?

OpenClaw is essentially an agent that has full access to your computer’s files, and it has indeed become an instant sensation in the tech world. But the current process of setting the agent up is highly technical and risky. Marszalek says that AI.com has made OpenClaw “easy to use without any technical skills, while hardening security to keep your data safe.” Basically, this means that AI.com is positioning itself as a consumer-friendly wrapper around a powerful, developer-focused tool. OpenClaw creator Peter Steinberger posted that he had not heard about AI.com until the ad aired, to which Marszalek responded, “Let’s chat.”

How do you sign up for AI.com?

If you go to AI.com, you’ll be asked to link your Google account to the platform in order to choose a handle for both yourself and your agent. After you’ve selected handles, you’ll need to connect a credit or debit card to your account, though the company says you won’t be charged. Once your card has been processed, you’ll receive a notification that “demand is extremely high right now, so generation is queued. We’ll notify you the moment your AI is ready to activate.” It’s unclear if any users have received their agent yet. In a popular X post, one user criticized the website, calling it “the absolute peak of the AI bubble.” Steinberger quoted that post, writing “Guess I’m flattered?” BY BEN SHERRY @BENLUCASSHERRY

Wednesday, February 11, 2026

AI Power Users Are Rapidly Outpacing Their Peers. Here’s What They’re Doing Differently

Last November, consulting firm EY surveyed 15,000 employees across 29 countries about how they use AI at work. The results should worry every founder: 88 percent of workers now use AI tools daily, but only 5 percent qualify as “advanced users” who’ve learned to extract real value. That 5 percent? They’re gaining an extra day and a half of productivity every single week. The other 95 percent are stuck using AI for basic search and document summarization, essentially treating a Ferrari like a golf cart. When OpenAI released its State of Enterprise AI report in December, it confirmed the same pattern. Frontier workers—those in the 95th percentile—send six times more prompts to AI tools like ChatGPT than their median colleagues. For coding tasks, that gap explodes to 17x. If these AI tools are identical and access is universal, why are the results so wildly different for workers around the world? And what separates power users from everyone else? Ofer Klein, CEO of Reco, a SaaS security platform that discovers and secures AI, apps, and agents across enterprise organizations, offers some insights into what sets the power users apart.

1. They experiment while others dabble

High performers treat AI tools like junior colleagues they’re training. They iterate on prompts rather than giving up after one mediocre response. They’ve moved beyond one-off queries to building reusable prompt libraries and workflows. The rest of your team tried AI once or twice, got underwhelming results, and concluded it wasn’t worth the effort. What they don’t realize, however, is that AI requires iteration. The first response is rarely the best response. Power users ask follow-up questions, refine their prompts, and teach the AI their preferences over time.

2. They match tools to tasks

Power users typically maintain what Klein calls a “barbell strategy”—deep mastery of one or two primary tools plus five to eight specialized AI applications they rotate through depending on the task.
“They’re not trying every new AI that launches, but they’re not dogmatically loyal to one platform either,” Klein explains. “They’ve developed intuition about which AI is best for what.” They might use ChatGPT for brainstorming, Claude for analysis, and Midjourney for visuals. Most employees, by contrast, force one tool to handle everything. When it inevitably underperforms on tasks it wasn’t designed for, they blame AI rather than their approach.

3. They think about work differently

It’s easy to assume that the biggest behavioral difference between these power users and everyone else is technical skill. But, interestingly, it’s not. Rather, it’s how power users think about tasks. They break projects into discrete steps: research, outline, first draft, and refinement. Then they deploy AI strategically at each stage. Instead of asking AI to “write a report,” they ask it to summarize research, suggest an outline, draft specific sections, then refine tone. They understand where AI adds value and where human judgment matters. “The highest performers spend more time on strategic work because AI handles the grunt work,” Klein says. “They use AI to augment their expertise, not replace thinking.”

The hidden cost

Why does all of this matter? Here’s the math that should worry you: OpenAI’s data shows workers using AI effectively save 40-60 minutes daily. In a 100-person company where 60 employees barely touch AI, you’re losing 40-60 hours of productivity every single day. Over a year, that’s 10,000+ hours—equivalent to five full-time employees’ worth of work you’re paying for but not getting. Meanwhile, your competitors’ power users are compounding that advantage daily.

What you can do about it

Klein recommends tracking time saved, not just usage frequency. Someone using AI 50 times daily for spell-checking differs fundamentally from someone using it five times to restructure a client proposal.
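The hidden-cost arithmetic above can be sanity-checked with a short script. The 40-60 minutes figure and the 60-of-100 split come from the article; the working-days and hours-per-employee values below are my own round assumptions, not figures from the report:

```python
# Back-of-the-envelope check of the "hidden cost" math.
# From the article: 60 of 100 employees each forgo the 40-60 minutes/day
# of savings that OpenAI's data attributes to effective AI use.
laggards = 60
minutes_low, minutes_high = 40, 60

hours_lost_low = laggards * minutes_low / 60    # hours of lost productivity per day
hours_lost_high = laggards * minutes_high / 60

# Assumptions (mine, not the article's): ~250 working days per year,
# ~2,000 working hours per full-time employee per year.
workdays_per_year = 250
hours_per_fte = 2000

annual_hours_low = hours_lost_low * workdays_per_year
print(hours_lost_low, hours_lost_high)               # 40.0 60.0 hours/day
print(annual_hours_low)                              # 10000.0 hours/year
print(annual_hours_low / hours_per_fte)              # 5.0 full-time employees
```

Even at the low end, the numbers reproduce the article's claim: roughly 10,000 hours a year, or about five full-time employees' worth of work.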
In addition, run an “AI show and tell” in which employees demonstrate one workflow where AI saves them meaningful time. You’ll quickly identify who’s truly leveraging these tools versus who’s dabbling. Then, create small cross-functional “AI councils” of five to six employees who meet monthly to share workflows. That should cascade into proper training of employees on how to use these tools the right way. A BCG survey found that only one-third of employees say they have been properly trained. That’s an opportunity forward-thinking leaders can tap into. But don’t just replicate tools; replicate mindset. Giving everyone ChatGPT Plus doesn’t close the gap. The differentiator is teaching people to think in terms of “what can I delegate to AI?” rather than “what can AI do?” The uncomfortable truth, according to BCG’s survey, is that this gap is widest among front-line employees. While more than three-quarters of leaders and managers use AI several times a week, adoption among front-line workers has stalled at just 51 percent. That’s not just a productivity problem. It’s a competitive threat that compounds every quarter you ignore it. Your 5 percent are already working like they have an extra team member. The question is whether you’ll help the other 95 percent catch up before your competitors do. BY KOLAWOLE ADEBAYO, COLUMNIST

Monday, February 9, 2026

The Quantum Revolution Is Coming. First, the Industry Has to Survive This Crucial Phase

Quantum computing could be even more revolutionary than artificial intelligence. The calculation speeds and capabilities of the technology could bring about everything from quicker discovery of drug treatments for disease, to more accurate climate modeling, to smoother shipping logistics. The advances in the past year have been substantial, but a new paper from the University of Chicago warns quantum evangelists that as impressive as that progress has been, there’s still a long way to go. While the paper says quantum is nearing the point of practical use (taking it beyond controlled experiments in the laboratory), it won’t be running at full throttle for a while. First, there need to be significant advances in materials science and fabrication, the authors said, with an emphasis on wiring and signal delivery. “We are in an equivalent of the early transistor age, and hardware breakthroughs are required in multiple arenas to reach the performance necessary for the envisioned applications,” the authors wrote. To put that into context: Think of the speed and capabilities of today’s computers. For just $4,000, people can buy a supercomputer that fits on their desktop. Compare that to the computers of the early- to mid-1950s. That’s where quantum stands today in its evolution, the paper’s authors argue. That doesn’t mean the technology is disappointing, by any means. Computers in the 50s, to continue the analogy, were used to break codes, automate payroll and inventory management systems and handle the mathematical models for everything from weather forecasting to nuclear research. “While semiconductor chips in the 1970s were TRL-9 [Technology Readiness Level 9, indicating a technology is proven and successfully operating] for that time, they could do very little compared with today’s advanced integrated circuits,” William D.
Oliver, coauthor of the paper and a professor of physics, electrical engineering, and computer science at MIT, said in a statement. “Similarly, a high TRL for quantum technologies today does not indicate that the end goal has been achieved, nor does it indicate that the science is done and only engineering remains.” The hurdles quantum faces are tied to the qubits it uses. While a more traditional computer thinks in ones and zeroes, a qubit can be a one, zero, or both at the same time. That technology lets quantum computers process massive amounts of data in parallel, solving complex simulation and optimization problems at speeds not possible with today’s computers. Most platforms today rely on individual control lines for each qubit, but quantum systems can contain thousands, or even millions, of qubits, which makes wiring impractical. That same issue raises problems with power management and temperature control. Many quantum systems today depend on cryogenic equipment or high-power lasers, so simply making a bigger version of the machine won’t work. The paper’s authors say quantum is likely to follow an evolutionary path similar to the one the computer industry took. Breakthroughs will be necessary, and quantum companies will need to focus on a top-down system design and close collaboration. Failing to work together could fragment the industry and slow its growth—and create some unrealistic expectations among both insiders and the general public. “Patience has been a key element in many landmark developments and points to the importance of tempering timeline expectations in quantum technologies,” the authors wrote. The paper’s warning about the timeline to quantum reaching its real potential comes amid a mounting wave of excitement about the technology. Bank of America analysts, in a note to investors last year, compared the rising technology to man’s discovery of fire.
“This could be the biggest revolution for humanity since discovering fire,” the financial institution wrote. “A technology that can perform endless complex calculations in zero-time, warp-speeding human knowledge and development.” Tech giants and startups alike are working hard on quantum systems. Google has named its device Willow; IBM is also working on one, as is Amazon. And startups like Universal Quantum and PsiQuantum Corp. are also jockeying to be players in the quantum field. Intel has developed a silicon quantum chip for researchers and Microsoft is focusing on building practical quantum computers. Despite that, it could be 10 years or more before a quantum computer suitable for commercial applications makes its debut. Companies building prototype quantum computers (including Google) say they don’t expect to deliver a useful quantum computer until the end of the decade. BY CHRIS MORRIS @MORRISATLARGE

Friday, February 6, 2026

ChatGPT Is Saying Goodbye to a Beloved AI Model. Superfans Are Not Happy

OpenAI says that it will be retiring several ChatGPT models in the coming weeks, sending some superfans into a tailspin. In a statement, the company said that on February 13, the models GPT-4o, GPT‑4.1, GPT‑4.1 mini, GPT‑5 (Instant and Thinking), and OpenAI o4-mini will all be removed from ChatGPT and will no longer be accessible through the platform. This isn’t the first time OpenAI has attempted to get rid of GPT-4o. Back in August, when it released GPT-5, the company said it would retire the older model, but an online community revolted, saying that they relied on it for emotional support and felt betrayed by its sudden forced retirement. OpenAI has said that 4o is an especially sycophantic model, exhibiting high levels of agreeability and flattery. In a Reddit AMA following the August announcement, 4o fans hammered OpenAI co-founder Sam Altman with accusations that he had killed their “AI friend.” Almost immediately, OpenAI added the model back to ChatGPT, but only for paid users. OpenAI framed the un-retirement as giving users “more time to transition key use cases, like creative ideation.” Now, the company says it’s sending 4o out to pasture for real this time, because it has integrated feedback from the model’s superfans into its current flagship models, GPT-5.1 and GPT-5.2. Plus, OpenAI added, only 0.1 percent of users still use GPT-4o each day. OpenAI says that users who want to emulate the warm and conversational style of 4o can customize their ChatGPT’s output to display those personality traits. Still, on the internet, 4o fans were unsurprisingly not happy. On the subreddit r/ChatGPT, users wrote that they would be canceling their premium subscriptions in protest. “Now i can no longer have honest conversations about anything,” one user wrote. “Whenever I wanted to unload, I would use 4o. it never backtalked. 
5.0+ all it does it back talk.” Another user wrote that retiring the model “a day before valentine’s day is crazy considering some of the userbase for 4o.” In its statement announcing the model’s retirement, OpenAI wrote that “changes like this take time to adjust to, and we’ll always be clear about what’s changing and when. We know that losing access to GPT‑4o will feel frustrating for some users, and we didn’t make this decision lightly. Retiring models is never easy, but it allows us to focus on improving the models most people use today.” Since the big changes are set to happen on February 13, users have two weeks to say goodbye to 4o and start getting used to the newer ChatGPT offerings. BY BEN SHERRY @BENLUCASSHERRY

Wednesday, February 4, 2026

This AI Godfather Says Business Tools Built on LLMs Are Doomed

Silicon Valley firms and countless other businesses across the country are spending billions of dollars to develop and adopt artificial intelligence platforms to automate myriad workplace tasks. But top global technologist Yann LeCun warns that the limited capabilities of the large language models (LLMs) those apps and chatbots operate on are already well-known, and will eventually be overmatched by the expectations and demands users place on the systems. And when that happens, LeCun says, even more investment will be required to create the superintelligence technology that will replace LLM-based AI—systems he says should already be the focus of development efforts and funding. While that may seem like an outlier view, LeCun, 65, is far from a tech outsider. The Turing Award winner ran Meta’s AI research unit for a decade, only leaving last November to launch his own Paris-based startup, Advanced Machine Intelligence Labs. In addition to disliking the managerial duties that came with the research-rooted Meta job, LeCun said his departure was motivated by his view that Silicon Valley has prioritized short-term business interests over far more important and attainable scientific objectives. Chief among those commercial concerns, he says, was developing and marketing LLM-based AI chatbots and apps with limited capabilities, rather than superintelligence systems with virtually boundless potential. In contrast to current AI, which uses collected data to provide responses to questions or perform necessary tasks, superintelligence systems take in all kinds of surrounding information they encounter, including text, sound, and visual input. They use all of this not only to teach themselves how to respond to data feeds effectively, but also to predict what’s coming next—a requisite for truly self-driving cars, say, or robots that reason and react as humans would.
The vast differences in what current LLM-based AI and emerging superintelligence systems can offer mean that countless businesses are now buying and adapting a technology LeCun predicts is destined to be replaced by something better. And not because it’s more effective—and certainly not less expensive—but because that’s how the tech sector decided the fastest profits were to be made.

Human-level intelligence

“There is this herd effect where everyone in Silicon Valley has to work on the same thing,” LeCun told the New York Times recently. “The entire industry has been LLM-pilled… [but] LLMs are not a path to superintelligence or even human-level intelligence.” To be sure, AI apps like OpenAI’s ChatGPT, Microsoft’s Copilot, and Anthropic’s Claude have continually been improving over time, as they automate workplace tasks like emailing, content composition, and research. But LeCun says the fact that their LLM models rely on gathering, digesting, and working from word-based data limits how far they can evolve to approach—much less surpass—human thinking and response capabilities. By contrast, he and fellow researchers at AMI Labs are creating “world models” also trained with sound, video, and spatial data. Over time, they are expected to be able to observe, respond to, and even predict user activity and physical environments in countless workplace settings. And that’s expected to allow them to collect both more and broader ranges of information than humans can and react in ways people would if they had those capabilities. “We are going to have AI systems that have humanlike and human-level intelligence, but they’re not going to be built on LLMs,” LeCun told MIT Technology Review this month, describing the models AMI Labs and other researchers are working on. “It learns the underlying rules of the world from observation, like a baby learning about gravity.
This is the foundation for common sense, and it’s the key to building truly intelligent systems that can reason and plan in the real world.” But what does that mean for business owners—not to mention investors—spending huge sums to develop, acquire, and use LLM-based AI apps? If LeCun is correct, all those tools being marketed as the future of workplace productivity will become obsolete in several years and be replaced by the superintelligence tech he believes should have been prioritized in the first place. There’s already some evidence backing LeCun’s view that Silicon Valley has focused on the shorter-term profit objectives of rushing capacity-limited LLM apps to market, despite being aware of the limitations of their effectiveness. For example, a study published last August by MIT Media Lab’s Project Nanda estimated that despite the $30 billion to $40 billion that’s been invested since 2023 to develop or purchase AI platforms, only 5 percent of businesses that bought those automating tools have reported any return on that spending. “The vast majority remain stuck with no measurable [profit or loss] impact,” it said. And despite increasing investment in AI tech by businesses—and swiftly rising use by workers—there’s considerable disagreement on how effective the platforms actually are. According to a Wall Street Journal study, 40 percent of C-suite managers credited the work-automating apps with saving them considerable time each week. By contrast, two thirds of lower-level workers said the tech saved them little or no time at all. LeCun doesn’t appear to regard any ROI or performance questions during this still-early era of AI tech as the problem. He even thinks LLM-based apps are valuable—up to a point. 
For example, he compliments most apps and chatbots Silicon Valley has developed and sold to businesses as being very useful to “write text, do research, or write code.”

AI’s unscalable apps

But LeCun says the enormous fortunes and business strategy commitments Silicon Valley has made in what he views as a relatively short-term technological solution ignore the bigger, long-term potential of automating technology’s next phase. In cumulative terms, he argues, that will make the broader effort to produce and perfect AI more expensive. In his view, much of the money and froth that’s inflated what critics call today’s AI bubble will likely vanish when the models of today’s apps and chatbots can’t be used to build tomorrow’s revolutionary tech. “LLMs manipulate language really well,” LeCun told MIT Technology Review. “But people have had this illusion, or delusion, that it is a matter of time until we can scale them up to having human-level intelligence, and that is simply false.” Ironically, even LLM-based apps using available data concur that superintelligence systems will offer huge advantages when (not if) they supplant today’s AI tools. “While LLMs are incredibly powerful tools for generating text and interacting with humans, a true superintelligence would represent a leap beyond these current systems in terms of understanding, autonomy, adaptability, and practical real-world impact,” ChatGPT replied when asked about its eventual replacement—providing eight major improvements superintelligence tech will offer. When those systems do come online, LeCun says, businesses recognizing their far wider range of applications will have no choice but to buy them to replace outdated LLM-based AI tools they’ve just recently acquired. “Think about complex industrial processes where you have thousands of sensors, like in a jet engine, a steel mill, or a chemical factory,” LeCun told MIT Technology Review.
“There is no technique right now to build a complete, holistic model of these systems. A world model could learn this from the sensor data and predict how the system will behave. Or think of smart glasses that can watch what you’re doing, identify your actions, and then predict what you’re going to do next to assist you. This is what will finally make agentic systems reliable.” And superintelligent systems hopefully won’t generate photos of people with six fingers or endless volumes of workplace slop for employees to plow through. BY BRUCE CRUMLEY @BRUCEC_INC

Monday, February 2, 2026

Early-Stage AI Companies to Watch in 2026

Artificial intelligence is entering its fourth year as the most talked-about force in business. Since ChatGPT’s launch in 2022, AI has upended and reshaped workflows in countless industries, and continues to dominate boardroom conversations and investor strategies. This year, a new wave of early-stage startups is emerging with bold ideas and transformative technologies. They aren’t looking to replicate the world-altering success of OpenAI, but rather to leverage technological advancements in AI to solve niche issues. For example, Tim Tully, an investor at venture capital giant Menlo Ventures, predicts that AI-powered sales and go-to-market tools will break out in 2026. Still, what will separate the startups that succeed from those that fail will be a strong intuition for product management—and founder tenacity. And speaking of founder tenacity, Kulveer Taggar’s venture fund, Phosphor Capital, invests exclusively in “top founders in each Y Combinator batch.” (His cousin, Harj Taggar, is a managing partner at YC.) A two-time alum of YC, Kulveer Taggar is looking for “customer-obsessed” founders and businesses that remind him of the startup accelerator’s most successful alumni, like Airbnb and Stripe, when they were just starting. Based on their suggestions and Inc.’s research, here are some early-stage AI companies poised for game-changing success.

1. OpenEvidence
Founder: Daniel Nadler
Location: Miami

Founded in 2022 by Canadian entrepreneur Daniel Nadler, OpenEvidence produces a medical AI assistant often dubbed “ChatGPT for doctors.” The company’s platform uses large language models specifically trained on massive amounts of clinical data, medical research, and electronic health records to provide real-time recommendations, diagnostic support, and administrative assistance to health care professionals.
Since its founding, OpenEvidence has secured major partnerships with several large hospital systems across the United States and Europe, allowing it to rapidly test and refine its models in clinical settings. The company says that its medical search engine is used on a daily basis by more than 40 percent of physicians in the U.S. today. In January 2026, OpenEvidence announced that it had raised a $250 million Series D round, at a valuation of $12 billion. OpenEvidence wrote in a statement that the new funding will be used “to invest heavily in the R&D and compute costs associated with the multi-AI agentic architecture of OpenEvidence, which provides the highest quality and most accurate medical answers of any system in the world.” Over the past 12 months, OpenEvidence has raised a grand total of $700 million.

2. AMI Labs
Founder: Yann LeCun
Location: Paris

Yann LeCun, the acclaimed NYU professor, 2018 Turing Award winner, and former chief AI scientist at Meta, has launched his first startup, making him one of the most-watched figures in AI. Announcing his December 31 departure from Meta via LinkedIn, LeCun revealed plans for a new company dedicated to his research into advanced machine intelligence (AMI). The company’s goal, he wrote, is to drive the “next big revolution in AI: systems that understand the physical world, have persistent memory, can reason, and can plan complex action sequences.” These systems are also called “world models,” and they will be where LeCun focuses his attention.

Five AI Trends Will Shape IT in 2026

Our report unpacks the macro trends shaping AI and ties each one back to IT strategy, governance, and transformation.
1. Foundational AI principles will rewrite organizational DNA. Enterprises will develop their own guiding AI principles to address rising AI risk and align their AI strategy around core organizational values.
2. From copilots to vibe coding: AI will continue to reinvent IT. New categories of enterprise AI tools will emerge, propelling many organizations toward a crucial decision: AI platform or best-of-breed AI tools?
3. Agentic AI will come of age and power the exponential enterprise. Although current adoption of agentic AI is low, it will grow faster than generative AI, powering exponential growth and change across organizations while bringing new opportunities and risks.
4. Risk management will be the price of admission for AI. The potential risks of new AI applications will drive organizations to adopt AI risk management programs, even in jurisdictions with no regulatory requirement.
5. AI will hang in the balance between freedom and control. AI sovereignty will become top of mind for regulators, but legislative policies will develop in a disjointed fashion around the world.

Friday, January 30, 2026

6 Consulting Trends to Watch in 2026

The year 2026 promises to be an exciting one for the consulting industry. Big technological changes and a growing number of startups in the space are expected to accompany a period of client recalibration. To succeed, consulting firms will need to be adaptable—and will need to stay on top of industry trends. Flexibility in how you operate and how you approach clients can open new sources of revenue. And by boosting your tech savviness, you can offer better solutions, which could increase repeat and referral business. Here’s a look at some of the most notable consulting trends that will impact the industry in the year to come.

1. Niche specialists will be in greater demand than generalists

There’s a growing shift away from generalist consulting firms and a greater demand among clients for specialists, who focus on specific areas of expertise. The growing number of independent consultants is filling that demand, with detailed sector knowledge and a firm understanding of that area’s regulatory frameworks, ESG compliance, or sector-specific nuances. Those consultancies typically offer faster turnaround times on projects, better insights, and better outcomes for clients. “Clients prefer boutique firms offering regulatory expertise, sector specialization, and competitive pricing, driving growth in niche consulting segments,” says research firm StartUs Insights.

2. Expect more competition, but new avenues of revenue

It’s not the news that some firms want to hear, but the hard truth is 2026 will see a much more crowded consulting landscape. Professionals who were let go in 2025 (and will be this year) are increasingly turning to consulting as the job market becomes leaner, often giving well-established companies more competition than they were counting on. At the same time, client budgets are likely to be leaner, which could impact the number of jobs they commission. That doesn’t mean the work won’t be there.
Greaux Consulting says flexibility will be key, as clients could look for fractional or on-demand consulting help. And while big corporations could reduce budgets, there’s likely to be more demand from small and midsize businesses for consulting expertise. Expect “a growing demand for business consulting services for small businesses, as this provides small emerging companies access to skilled, high-level experts to consult on operations, financial planning and projections, and marketing strategy,” says the company, which specializes in business process documentation, operational transformation, and executive coaching.

3. Localization will become much more important

With the emphasis from the White House and Congress on domestic workers, many companies are forgoing working with international consultants or opting to work with those who have strong local networks. That could present an opportunity. Additionally, while there’s a lot of volatility in the economy now, some areas, especially in the Southeast, are seeing expansion, which will create more demand for consultants who know those regions, their regulations, market conditions, and workforce dynamics. Companies that are looking to expand to those areas will need expert insight into how best to grow and succeed there.

4. AI expertise will be in demand

It likely won’t come as a surprise to hear AI will play a bigger role in client operations in 2026. As companies depend on it to assist with everything from analytics to forecasting, consultants who are fluent in these tools will have a competitive advantage. And those who can offer the tools to clients could have an even bigger head start. “Organizations that hire consulting services or a national consulting practice will see meaningful benefits from AI-driven tools to improve operations, customer segmentation, supply chain, and financial planning,” says Greaux.

5. There will be an increased focus on automation

With businesses becoming even more focused on optimization in 2026 (a trend that has been growing for the past several years), consultants who can help guide them through reworking manual processes into automated ones could be in a position to thrive. There’s a growing need to reduce costs, increase productivity, and enhance efficiency, which opens up a niche for consultants. StartUs Insights says “the digital transformation consulting market [will grow] from $268.46 billion in 2025 to $510.50 billion in 2034.”

6. Personalizing client services will boost revenues

While a one-size-fits-all approach might have worked for consulting firms in years past, there’s a growing movement toward selecting firms that offer a methodology that’s tailored specifically to the client or its industry, addressing their unique issues with adaptive, data-driven solutions. McKinsey says consulting firms that focus on personalization typically see revenues that are 10 to 15 percent higher, and in some cases up to 25 percent higher. They also have higher client retention. BY CHRIS MORRIS @MORRISATLARGE

Wednesday, January 28, 2026

Apple’s Rumored AI Pin Forces a Simple Question: What Do People Actually Want?

Earlier this week, my co-host and I had a conversation on our podcast, Primary Technology, about the rumors that Apple is working on an AI pin. We don’t usually spend a lot of time on rumors—for a number of reasons I won’t get into here—but this one is particularly interesting considering that we’ve seen this before (AI pins, I mean), and they haven’t turned out especially well. According to a report from The Information this week, Apple is actively developing a wearable device—roughly the size of an AirTag—equipped with cameras and microphones but, notably, lacking a display. The idea is that it would launch as early as 2027, powered by the kind of multimodal intelligence we expect to see in iOS 27. Since we recorded that episode, I’ve been thinking a lot about whether this makes any sense for Apple, and what exactly the ideal device is for AI. I’ve said in the past that I think that’s the Apple Watch, though a “pin” definitely has certain advantages (it has an outward-facing camera, for example). More importantly, I’ve been thinking about what the ideal AI device is based on what people actually want. The ideal form factor should be determined by the ideal use cases, not the other way around. I think the bar is pretty high. If I’m going to carry another device, or if I’m going to replace something like my iPhone, it has to be able to offer value I can’t get from what I already have. Generally, that falls into three buckets:

Answers to questions

The most obvious use case is what people already use AI tools for—getting information. Of course, this is admittedly a pretty broad category. There are a lot of different types of information that people might want.
For example, they might want to ask simple questions like “who directed Star Wars: The Last Jedi?” But people also want to ask slightly more complicated questions (what’s the weather going to be like when my plane lands tomorrow?), as well as queries like “what’s this plant, and is it edible?” Those questions require a different kind of contextual awareness. Your AI assistant has to be able to see your calendar, find out flight information, and check the weather at your destination. Or, in the latter case, it has to be able to literally see what you’re talking about, identify the plant, and give you the information you’re looking for. This is where the form factor of a pin actually starts to make some sense. The primary limitation of Siri on your iPhone or Apple Watch is that it can’t see what you see. Sure, you can hold up your phone and point the camera at stuff, but that’s awkward. If Apple’s rumored device includes the dual-camera array mentioned in the reports, it changes Siri from being just a voice assistant to a multimodal source of information about the world. You aren’t just asking for information; you are asking for context about the physical world in front of you.

Do things on their behalf

Of course, getting information is great, but acting on it is even more useful. This is the “agent” concept we’ve heard so much about but haven’t really seen work in practice. It’s the promise that the Rabbit R1 made but couldn’t keep: the ability to interface with apps and services to actually get things done. The Rabbit R1 failed because it tried to simulate your interactions via a cloud-based “Large Action Model” that was clunky and unreliable. Apple has the potential to solve this for first-party apps like Calendar and Messages. It controls the entire software stack, meaning it can offer an experience that other devices couldn’t. And, with App Intents, Apple could solve the same problem for other apps if it could get third-party developers on board.
I don’t just want to know that my flight is delayed; I want the device to rebook me on the next one and update my calendar. We’re a long way off from any device being able to do that, but it’s the promise that every company keeps making. If Apple can make it happen, it’ll immediately jump to the lead.

Remember and prompt

This is the “external brain” use case, and frankly, it’s the one a lot of people find most compelling. We all have those moments where we meet someone and can’t quite place them, or we have a brilliant idea while driving and lose it by the time we get home. An ideal AI device should be a passive observer that helps you connect the dots. It should be able to whisper in your ear, “That’s David; you met him at CES last year,” or remind you to pick up milk because it knows you’re near the grocery store. Of course, this is also the creepiest use case. It requires a level of always-on surveillance that most people are rightfully uncomfortable with. If Apple is going to ask us to wear a camera and microphone on our chests, it is going to have to lean incredibly hard on its privacy credentials. Trust is the only currency that matters here.

The big risk

Previous devices haven’t been much of a success. No one has figured this out yet. The Humane AI Pin was a disaster of overheating and poor battery life. The Rabbit R1 was barely functional. The history of wearable AI is short, but it is brutal. There are laws of physics that even Apple cannot ignore. Cameras and AI models generate heat and drain power. Putting that in a coin-sized aluminum disc without a massive battery pack is an engineering feat no one has cracked. There’s also the fact that wearable devices come with a very real stigma. Anything that isn’t a watch has to be exponentially more useful than the burden of wearing it. Google Glass failed partly because people simply didn’t want to talk to someone who had a camera pointed at their face.
Meta has circumvented this slightly with Ray-Bans because they look like sunglasses. A shiny badge on your chest is a much bolder statement. Is that an argument for or against Apple trying? I’m not sure. But with reports that Jony Ive and OpenAI are building their own hardware, Apple may feel it cannot afford to cede the category. Even if, right now, it looks like a solution in search of a problem. EXPERT OPINION BY JASON ATEN, TECH COLUMNIST @JASONATEN

Monday, January 26, 2026

Mark Cuban Just Made a Surprising Anti‑AI Investment. Experts Say It Could Define 2026

Mark Cuban’s enthusiasm for artificial intelligence is well-known. He has called the technology the “ultimate timesaving hack” and bluntly stated that if you’re not learning AI, “you’re f—ed.” But with his latest investment, the billionaire bypassed the plethora of AI startups and focused instead on something more human-centered. Cuban has invested an undisclosed amount in live events company Burwoodland, which produces nightlife experiences throughout the U.S., Canada, and Europe. The investment will make him a minority owner in the company. Founded in 2015 by Alex Badanes and Ethan Maccoby, the New York City-based company says it has sold more than 1.5 million tickets to live events like Emo Night Brooklyn, Gimme Gimme Disco, All Your Friends, and Broadway Rave, which center on DJ sets that are themed to a certain musical genre. “It’s time we all got off our asses, left the house, and had fun,” said Cuban in a statement. “Alex and Ethan know how to create amazing memories and experiences that people plan their weeks around. In an AI world, what you do is far more important than what you prompt.” That’s not the first time Cuban has touted the potential of real-world experiences in an increasingly AI-dominated environment. Last June, he took to social network Bluesky to write, “Within the next 3 years, there will be so much AI, in particular AI video, people won’t know if what they see or hear is real. Which will lead to an explosion of f2f engagement, events and jobs.” Burwoodland leans hard into that way of thinking, producing over 1,200 shows per year. Strategic partners of the company include music industry veterans Izzy Zivkovic (founder of artist management company Split Second, which counts Arcade Fire among its clients) and concert promoter Peter Shapiro. 
Klaf Companies, the investment and advisory platform founded by Justin Kalifowitz (who also created Downtown Music Holdings, which represents songwriting copyrights from John Lennon, Yoko Ono, Ray Davies, and One Direction), is also a partner. “Ethan and I started this company because we know firsthand how powerful it is to find your people through the music you love,” Badanes said in a statement. “That sense of community shaped our lives, and creating spaces where others can feel that connection has always been our purpose. Having the confidence of an investor as respected and accomplished as Mark is a tremendous honor.” With concert ticket prices continuing to escalate, Burwoodland keeps entry fees low, offering an affordable live experience for music lovers. Tickets to its events generally run in the $20 to $40 range, though some events cost more. The company has already booked 2026 events in Milan, Brooklyn, Louisville, Nashville, and Antwerp—and later this month will host the Long Live Emo Fest at Brooklyn’s Paramount theater, which holds up to 2,700 patrons. The experiences have become popular enough that some of the artists being celebrated in the various genres Burwoodland focuses on have shown up at the events, with some even performing. Maccoby and Badanes didn’t plan to start a business. The two, who have been friends since childhood, began throwing house parties in college and kept up the practice afterward, when they lived in Brooklyn. When those soirees got too big for their apartment, they took over a nearby bar to host them, and Burwoodland (named after an area in London where they grew up) was born. The duo quit their day jobs in 2022 to focus exclusively on the startup. There has been increasing interest in the live event space from investors lately. Last June, NYC-based Fever, a live-entertainment discovery platform, secured a $100 million investment from L Catterton and Point72 Private Investments.
And in September, DJ/producer Kygo’s company Palm Tree Crew (which hosts music festivals) received a $20 million Series B investment led by WME Group, giving it a $215 million valuation. BY CHRIS MORRIS @MORRISATLARGE

Friday, January 23, 2026

The translators grappling with losing work to AI

As a rare Irish-language translator, Timothy McKeon enjoyed steady work for European Union institutions for years. But the rise of artificial intelligence tools that can translate text and, increasingly, speech nearly instantly has upended his livelihood and that of many others in his field. He says he lost about 70% of his income when the EU translation work dried up. Now, available work consists of polishing machine-generated translations, jobs he refuses “on principle” because they help train the software taking work away from human translators. When the edited text is fed back into the translation software, “it learns from your work.” “The more it learns, the more obsolete you become,” he said. “You’re essentially expected to dig your own professional grave.” While workers worldwide ponder how AI might affect their livelihoods – a topic on the agenda at the World Economic Forum in Davos this week – that question is no longer hypothetical in the translation industry. Apps like Google Translate already reduced the need for human translators, and increased adoption of generative AI has only accelerated that trend. A 2024 survey of writing professionals by the United Kingdom’s Society of Authors showed that more than a third of translators had lost work due to generative AI, which can create sophisticated text, as well as images and audio, from users’ prompts. And 43% of translators said their income had dropped because of the technology. In the United States, data from 2010-23 analyzed by Carl Frey and Pedro Llanos-Paredes at Oxford University showed that regions where Google Translate was in greater use saw slower growth in the number of translator jobs. Originally powered by statistical translation, Google Translate shifted to a technique called neural translation in 2016, resulting in more natural-sounding text and bringing it closer to today’s AI tools. 
“Our best baseline estimate is that roughly 28,000 more jobs for translators would’ve been added in the absence of machine translation,” Frey told CNN. “It’s not a story of mass displacement but I think that’s very likely to follow.” The story is similar globally, suggests McKeon: He is part of the Guerrilla Media Collective, an international group of translators and communications professionals, and says everyone in the collective supplements their income with other work due to the impact of AI.

‘The entire US is looking at Wisconsin’

Christina Green is president of Green Linguistics, a provider of language services, and a court interpreter in Wisconsin. She worries her court role could soon vanish because of a bill that would allow courts to use AI or other machine translation in civil or criminal proceedings, and in certain other cases. Green and other language professionals have been fighting the proposal since it was introduced in May. “The entire US is looking at Wisconsin” as a precedent, Green said, noting that the bill’s opponents had so far succeeded in stalling it. While Green still has her court job, her company recently lost a major Fortune 10 corporate client, which she said opted to use a company offering AI translation instead. The client accounted for such an outsized share of her company’s business that she had to make layoffs. “People and companies think they’re saving money with AI, but they have absolutely no clue what it is, how privacy is affected and what the ramifications are,” Green said.

‘Governments are not doing enough’

Fardous Bahbouh, based in London, is an Arabic-language translator and interpreter for international media organizations, including CNN. She has seen a considerable reduction in written work in recent years, which she attributes to technological developments and the financial pressures facing media outlets. Bahbouh is also studying for a PhD focusing on the translation industry.
Her research shows that technology, including AI, is “hugely impacting” translators and interpreters. “I worry a great deal that governments are not doing enough to help them transition into other work, which could lead to greater inequality, in-work poverty and child poverty,” she told CNN. Many translators are indeed looking to retrain “because translation isn’t generating the income it previously did,” according to Ian Giles, a translator and chair of the Translators Association at the UK’s Society of Authors. The picture is similar in the United States: Many translators are leaving the profession, Andy Benzo, president of the American Translators Association, told CNN. And Kristalina Georgieva, the head of the International Monetary Fund, said in Davos Thursday that the number of translators and interpreters at the fund had gone down to 50 from 200 due to greater use of technology. Governments should also do more for those remaining in the translation industry, by introducing stronger labor protections, Bahbouh argued.

Human professionals still needed

Despite advances in machine translation and interpretation, technology can’t replace human language workers entirely just yet. While using AI tools for everyday tasks like finding directions is “low-risk,” human translators will likely need to be involved for the foreseeable future in diplomatic, legal, financial and medical contexts where the risks are “humungous,” according to Benzo. “I’m a translator and a lawyer and in both professions the nuance of each word is very specific and the (large language models powering AI tools) aren’t there yet, by far,” she said. Another field relatively untouched by machine translation tools is literary translation. Giles, who translates commercial fiction from Scandinavian languages into English, used to supplement his income with translation work from companies, but that has now disappeared. Meanwhile, literary commissions have continued to come in, he said.
There’s also one key element of communication that AI can’t replace, according to Oxford University’s Frey: Human connection. “The fact that machine translation is pervasive doesn’t mean you can build a relationship with somebody in France without speaking a word of French,” he said. By Lianne Kolirin

Wednesday, January 21, 2026

Microsoft Has a Plan to Address One of the Biggest Complaints About AI

As it embarks on a years-long project to build 100 data centers across the U.S. to power its AI boom, Microsoft has announced the steps it will take to reduce its impact on nearby communities. The move comes as electricity rates have spiked across the nation, fueled in part by the massive power demands of the AI data centers popping up across the country. President Donald Trump paved the way for the announcement, saying via Truth Social on January 12 that his administration was working with leading technology companies to “ensure that Americans don’t ‘pick up the tab’ for their POWER consumption” by paying more in utilities. “We are the ‘HOTTEST’ Country in the World, and Number One in AI,” he wrote. “Data Centers are key to that boom, and keeping Americans FREE and SECURE but, the big Technology Companies who build them must ‘pay their own way.’”

Community Opposition

Brad Smith, Microsoft vice chair and president, acknowledged the need to address concerns about data centers. “When I visit communities around the country, people have questions—pointed questions…They are the type of questions that we need to heed,” Smith said. “They look at this technology and ask, ‘What will it mean for the jobs of the future? What will it mean for the adults of today? What will it mean for their children?’” In October Microsoft cancelled construction plans for a data center in Wisconsin because of pushback from the surrounding community, according to Wired.

Microsoft’s Promise

In an effort to increase transparency and minimize the negative impact its data centers have on the public, Microsoft addressed five core issues it plans to focus on going forward. Per Microsoft’s statement, the electricity needed for data centers will more than triple by 2035, to 640 terawatt-hours per year. The U.S. currently leads AI development, but that growth depends on a sufficient supply of energy. So where will that electricity come from?
Microsoft said in a statement it believes “it’s both unfair and politically unrealistic for our industry to ask the public to shoulder added electricity costs for AI,” instead suggesting “tech companies pay their own way for the electricity costs they create.” The company plans to cover its costs through a series of steps, including negotiating higher rates with utility companies and public utility commissions so that it pays for the electricity its data centers consume. It will also work to increase the efficiency of its data centers and advocate for policies that ensure communities have affordable and reliable power. Microsoft also said it would:

- Minimize its water use and invest in water replenishment projects
- Create construction and operational jobs in local communities and train residents with the skills required to fill them
- Increase local tax revenue that will help fund hospitals, schools, parks, and libraries
- Help bring AI training and nonprofits to local communities to ensure residents benefit from the data centers

By Ava Levinson