Friday, January 30, 2026

6 Consulting Trends to Watch in 2026

The year 2026 promises to be an exciting one for the consulting industry. Big technological changes and a growing number of startups in the space are expected to accompany a period of client recalibration. To succeed, consulting firms will need to be adaptable—and will need to stay on top of industry trends. Flexibility in how you operate and how you approach clients can open new sources of revenue. And by boosting your tech savviness, you can offer better solutions, which could increase repeat and referral business. Here’s a look at some of the most notable consulting trends that will impact the industry in the year to come.

1. Niche specialists will be in greater demand than generalists

There’s a growing shift away from generalist consulting firms and a greater demand among clients for specialists, who focus on specific areas of expertise. The growing number of independent consultants is filling that demand, with detailed sector knowledge and a firm understanding of that area’s regulatory frameworks, ESG compliance, or sector-specific nuances. These specialists typically offer faster turnaround times on projects, better insights, and better outcomes for clients. “Clients prefer boutique firms offering regulatory expertise, sector specialization, and competitive pricing, driving growth in niche consulting segments,” says research firm StartUs Insights.

2. Expect more competition, but new avenues of revenue

It’s not the news that some firms want to hear, but the hard truth is 2026 will see a much more crowded consulting landscape. Professionals who were let go in 2025 (and those who will be this year) are increasingly turning to consulting as the job market becomes leaner, often giving well-established companies more competition than they were counting on. At the same time, client budgets are likely to be leaner, which could impact the number of jobs they commission. That doesn’t mean the work won’t be there.
Greaux Consulting says flexibility will be key, as clients could look for fractional or on-demand consulting help. And while big corporations could reduce budgets, there’s likely to be more demand from small and midsize businesses for consulting expertise. Expect “a growing demand for business consulting services for small businesses, as this provides small emerging companies access to skilled, high-level experts to consult on operations, financial planning and projections, and marketing strategy,” says the company, which specializes in business process documentation, operational transformation, and executive coaching.

3. Localization will become much more important

With the emphasis from the White House and Congress on domestic workers, many companies are forgoing working with international consultants or opting to work with those who have strong local networks. That could present an opportunity. Additionally, while there’s a lot of volatility in the economy now, some areas, especially in the Southeast, are seeing expansion, which will create more demand for consultants who know those regions, their regulations, market conditions, and workforce dynamics. Companies that are looking to expand to those areas will need expert insight into how best to grow and succeed there.

4. AI expertise will be in demand

It likely won’t come as a surprise to hear AI will play a bigger role in client operations in 2026. As companies depend on it to assist with everything from analytics to forecasting, consultants who are fluent in these tools will have a competitive advantage. And those who can offer the tools to clients could have an even bigger head start. “Organizations that hire consulting services or a national consulting practice will see meaningful benefits from AI-driven tools to improve operations, customer segmentation, supply chain, and financial planning,” says Greaux.

5. There will be an increased focus on automation

With businesses becoming even more focused on optimization in 2026 (a trend that has been growing for the past several years), consultants who can help guide them through reworking manual processes into automated ones could be in a position to thrive. There’s a growing need to reduce costs, increase productivity, and enhance efficiency, which opens up a niche for consultants. StartUs Insights says “the digital transformation consulting market [will grow] from $268.46 billion in 2025 to $510.50 billion in 2034.”

6. Personalizing client services will boost revenues

While a one-size-fits-all approach might have worked for consulting firms in years past, there’s a growing movement toward selecting firms that offer a methodology that’s tailored specifically to the client or its industry, addressing their unique issues with adaptive, data-driven solutions. McKinsey says consulting firms that focus on personalization typically see revenues that are 10 to 15 percent higher – and, in some cases, up to 25 percent. They also have a higher client retention level.

BY CHRIS MORRIS @MORRISATLARGE

Wednesday, January 28, 2026

Apple’s Rumored AI Pin Forces a Simple Question: What Do People Actually Want?

Earlier this week, my co-host and I had a conversation on our podcast, Primary Technology, about the rumors that Apple is working on an AI pin. We don’t usually spend a lot of time on rumors—for a number of reasons I won’t get into here—but this one is particularly interesting considering that we’ve seen this before (AI pins, I mean), and they haven’t turned out especially well.

According to a report from The Information this week, Apple is actively developing a wearable device—roughly the size of an AirTag—equipped with cameras and microphones but notably lacking a display. The idea is that it would launch as early as 2027, powered by the kind of multimodal intelligence we expect to see in iOS 27.

Since we recorded that episode, I’ve been thinking a lot about whether this makes any sense for Apple, and what exactly the ideal device is for AI. I’ve said in the past that I think that’s the Apple Watch, though a “pin” definitely has certain advantages (it has an outward-facing camera, for example). More importantly, I’ve been thinking about what the ideal AI device is based on what people actually want. The ideal form factor should be determined by the ideal use cases, not the other way around. I think the bar is pretty high. If I’m going to carry another device, or if I’m going to replace something like my iPhone, it has to be able to offer value I can’t get from what I already have. Generally, that falls into three buckets:

Answers to questions

The most obvious use case is what people already use AI tools for—getting information. Of course, this is admittedly a pretty broad category. There are a lot of different types of information that people might want.
For example, they might want to ask simple questions like “who directed Star Wars: The Last Jedi?” But people also want to ask slightly more complicated questions (what’s the weather going to be like when my plane lands tomorrow?), as well as queries like “what’s this plant, and is it edible?” Those questions require a different kind of contextual awareness. Your AI assistant has to be able to see your calendar, find out flight information, and check the weather at your destination. Or, in the latter case, it has to be able to literally see what you’re talking about, identify the plant, and give you the information you’re looking for. This is where the form factor of a pin actually starts to make some sense. The primary limitation of Siri on your iPhone or Apple Watch is that it can’t see what you see. Sure, you can hold up your phone and point the camera at stuff, but that’s awkward. If Apple’s rumored device includes the dual-camera array mentioned in the reports, it changes Siri from being just a voice assistant to a multimodal source of information about the world. You aren’t just asking for information; you are asking for context about the physical world in front of you.

Do things on your behalf

Of course, getting information is great, but acting on it is even more useful. This is the “agent” concept we’ve heard so much about but haven’t really seen work in practice. It’s the promise that the Rabbit R1 made but couldn’t keep: the ability to interface with apps and services to actually get things done. The Rabbit R1 failed because it tried to simulate your interactions via a cloud-based “Large Action Model” that was clunky and unreliable. Apple has the potential to solve this for first-party apps like Calendar and Messages. It controls the entire software stack, meaning it can offer an experience that other devices couldn’t. And, with App Intents, Apple could solve the same problem for other apps if it could get third-party developers on board.
I don’t just want to know that my flight is delayed; I want the device to rebook me on the next one and update my calendar. We’re a long way off from any device being able to do that, but it’s the promise that every company keeps making. If Apple can make it happen, it’ll immediately jump to the lead.

Remember and prompt

This is the “external brain” use case, and frankly, it’s the one a lot of people find most compelling. We all have those moments where we meet someone and can’t quite place them, or we have a brilliant idea while driving and lose it by the time we get home. An ideal AI device should be a passive observer that helps you connect the dots. It should be able to whisper in your ear, “That’s David; you met him at CES last year,” or remind you to pick up milk because it knows you’re near the grocery store. Of course, this is also the creepiest use case. It requires a level of always-on surveillance that most people are rightfully uncomfortable with. If Apple is going to ask us to wear a camera and microphone on our chests, it is going to have to lean incredibly hard on its privacy credentials. Trust is the only currency that matters here.

The big risk

Previous devices haven’t been much of a success. No one has figured this out yet. The Humane AI Pin was a disaster of overheating and poor battery life. The Rabbit R1 was barely functional. The history of wearable AI is short, but it is brutal. There are laws of physics that even Apple cannot ignore. Cameras and AI models generate heat and drain power. Putting that in a coin-sized aluminum disc without a massive battery pack is an engineering feat no one has cracked. There’s also the fact that wearable devices come with a very real stigma. Anything that isn’t a watch has to be exponentially more useful than the burden of wearing it. Google Glass failed partly because people simply didn’t want to talk to someone who had a camera pointed at their face.
Meta has circumvented this slightly with Ray-Bans because they look like sunglasses. A shiny badge on your chest is a much bolder statement. Is that an argument for or against Apple trying? I’m not sure. But with reports that Jony Ive and OpenAI are building their own hardware, Apple may feel it cannot afford to cede the category. Even if, right now, it looks like a solution in search of a problem. EXPERT OPINION BY JASON ATEN, TECH COLUMNIST @JASONATEN

Monday, January 26, 2026

Mark Cuban Just Made a Surprising Anti‑AI Investment. Experts Say It Could Define 2026

Mark Cuban’s enthusiasm for artificial intelligence is well-known. He has called the technology the “ultimate timesaving hack” and bluntly stated that if you’re not learning AI, “you’re f—ed.” But with his latest investment, the billionaire bypassed the plethora of AI startups and focused instead on something more human-centered. Cuban has invested an undisclosed amount in live events company Burwoodland, which produces nightlife experiences throughout the U.S., Canada, and Europe. The investment will make him a minority owner in the company. Founded in 2015 by Alex Badanes and Ethan Maccoby, the New York City-based company says it has sold more than 1.5 million tickets to live events like Emo Night Brooklyn, Gimme Gimme Disco, All Your Friends, and Broadway Rave, which center on DJ sets that are themed to a certain musical genre. “It’s time we all got off our asses, left the house, and had fun,” said Cuban in a statement. “Alex and Ethan know how to create amazing memories and experiences that people plan their weeks around. In an AI world, what you do is far more important than what you prompt.” That’s not the first time Cuban has touted the potential of real-world experiences in an increasingly AI-dominated environment. Last June, he took to social network Bluesky to write, “Within the next 3 years, there will be so much AI, in particular AI video, people won’t know if what they see or hear is real. Which will lead to an explosion of f2f engagement, events and jobs.” Burwoodland leans hard into that way of thinking, producing over 1,200 shows per year. Strategic partners of the company include music industry veterans Izzy Zivkovic (founder of artist management company Split Second, which counts Arcade Fire among its clients) and concert promoter Peter Shapiro. 
Klaf Companies, the investment and advisory platform founded by Justin Kalifowitz (who also created Downtown Music Holdings, which represents songwriting copyrights from John Lennon, Yoko Ono, Ray Davies, and One Direction), is also a partner. “Ethan and I started this company because we know firsthand how powerful it is to find your people through the music you love,” Badanes said in a statement. “That sense of community shaped our lives, and creating spaces where others can feel that connection has always been our purpose. Having the confidence of an investor as respected and accomplished as Mark is a tremendous honor.” With concert ticket prices continuing to escalate, Burwoodland keeps entry fees low, offering a low-cost live experience for music lovers. Tickets to its events generally run in the $20 to $40 range, though some events cost more. The company has already booked 2026 events in Milan, Brooklyn, Louisville, Nashville, and Antwerp—and later this month will host the Long Live Emo Fest at Brooklyn’s Paramount theater, which holds up to 2,700 patrons. The experiences have become popular enough that some of the artists being celebrated in the various genres Burwoodland focuses on have shown up at the events, with some even performing. Maccoby and Badanes didn’t plan to start a business. The two, who have been friends since childhood, began throwing house parties in college and kept up the practice afterward, when they lived in Brooklyn. When those soirees got too big for their apartment, they took over a nearby bar to host them and Burwoodland (named after an area in London where they grew up) was born. The duo quit their day jobs in 2022 to focus exclusively on the startup. There has been increasing interest in the live event space from investors lately. Last June, NYC-based Fever, a live-entertainment discovery platform, secured a $100 million investment from L Catterton and Point72 Private Investments. 
And in September, DJ/producer Kygo’s company Palm Tree Crew (which hosts music festivals) received a $20 million Series B investment led by WME Group, giving it a $215 million valuation. BY CHRIS MORRIS @MORRISATLARGE

Friday, January 23, 2026

The translators grappling with losing work to AI

As a rare Irish-language translator, Timothy McKeon enjoyed steady work for European Union institutions for years. But the rise of artificial intelligence tools that can translate text and, increasingly, speech nearly instantly has upended his livelihood and that of many others in his field. He says he lost about 70% of his income when the EU translation work dried up. Now, available work consists of polishing machine-generated translations, jobs he refuses “on principle” because they help train the software taking work away from human translators. When the edited text is fed back into the translation software, “it learns from your work.” “The more it learns, the more obsolete you become,” he said. “You’re essentially expected to dig your own professional grave.” While workers worldwide ponder how AI might affect their livelihoods – a topic on the agenda at the World Economic Forum in Davos this week – that question is no longer hypothetical in the translation industry. Apps like Google Translate already reduced the need for human translators, and increased adoption of generative AI has only accelerated that trend. A 2024 survey of writing professionals by the United Kingdom’s Society of Authors showed that more than a third of translators had lost work due to generative AI, which can create sophisticated text, as well as images and audio, from users’ prompts. And 43% of translators said their income had dropped because of the technology. In the United States, data from 2010-23 analyzed by Carl Frey and Pedro Llanos-Paredes at Oxford University showed that regions where Google Translate was in greater use saw slower growth in the number of translator jobs. Originally powered by statistical translation, Google Translate shifted to a technique called neural translation in 2016, resulting in more natural-sounding text and bringing it closer to today’s AI tools. 
“Our best baseline estimate is that roughly 28,000 more jobs for translators would’ve been added in the absence of machine translation,” Frey told CNN. “It’s not a story of mass displacement but I think that’s very likely to follow.” The story is similar globally, suggests McKeon: He is part of the Guerrilla Media Collective, an international group of translators and communications professionals, and says everyone in the collective supplements their income with other work due to the impact of AI.

‘The entire US is looking at Wisconsin’

Christina Green is president of Green Linguistics, a provider of language services, and a court interpreter in Wisconsin. She worries her court role could soon vanish because of a bill that would allow courts to use AI or other machine translation in civil or criminal proceedings, and in certain other cases. Green and other language professionals have been fighting the proposal since it was introduced in May. “The entire US is looking at Wisconsin” as a precedent, Green said, noting that the bill’s opponents had so far succeeded in stalling it. While Green still has her court job, her company recently lost a major Fortune 10 corporate client, which she said opted to use a company offering AI translation instead. The client accounted for such an outsized share of her company’s business that she had to make layoffs. “People and companies think they’re saving money with AI, but they have absolutely no clue what it is, how privacy is affected and what the ramifications are,” Green said.

‘Governments are not doing enough’

Fardous Bahbouh, based in London, is an Arabic-language translator and interpreter for international media organizations, including CNN. She has seen a considerable reduction in written work in recent years, which she attributes to technological developments and the financial pressures facing media outlets. Bahbouh is also studying for a PhD focusing on the translation industry.
Her research shows that technology, including AI, is “hugely impacting” translators and interpreters. “I worry a great deal that governments are not doing enough to help them transition into other work, which could lead to greater inequality, in-work poverty and child poverty,” she told CNN. Many translators are indeed looking to retrain “because translation isn’t generating the income it previously did,” according to Ian Giles, a translator and chair of the Translators Association at the UK’s Society of Authors. The picture is similar in the United States: Many translators are leaving the profession, Andy Benzo, president of the American Translators Association, told CNN. And Kristalina Georgieva, the head of the International Monetary Fund, said in Davos Thursday that the number of translators and interpreters at the fund had gone down to 50 from 200 due to greater use of technology. Governments should also do more for those remaining in the translation industry, by introducing stronger labor protections, Bahbouh argued.

Human professionals still needed

Despite advances in machine translation and interpretation, technology can’t replace human language workers entirely just yet. While using AI tools for everyday tasks like finding directions is “low-risk,” human translators will likely need to be involved for the foreseeable future in diplomatic, legal, financial and medical contexts where the risks are “humungous,” according to Benzo. “I’m a translator and a lawyer and in both professions the nuance of each word is very specific and the (large language models powering AI tools) aren’t there yet, by far,” she said. Another field relatively untouched by machine translation tools is literary translation. Giles, who translates commercial fiction from Scandinavian languages into English, used to supplement his income with translation work from companies, but that has now disappeared. Meanwhile, literary commissions have continued to come in, he said.
There’s also one key element of communication that AI can’t replace, according to Oxford University’s Frey: Human connection. “The fact that machine translation is pervasive doesn’t mean you can build a relationship with somebody in France without speaking a word of French,” he said. By Lianne Kolirin

Wednesday, January 21, 2026

Microsoft Has a Plan to Address One of the Biggest Complaints About AI

As it embarks on a years-long project to build 100 data centers across the U.S. to power its AI boom, Microsoft has announced the steps it will take to lower its impact on the communities nearby. The move comes as electricity rates have spiked across the nation, fueled in part by the massive power demands from AI data centers that are popping up across the country. President Donald Trump paved the way for the announcement, saying via Truth Social on January 12 that his administration was working with leading technology companies to “ensure that Americans don’t ‘pick up the tab’ for their POWER consumption” by paying more in utilities. “We are the ‘HOTTEST’ Country in the World, and Number One in AI,” he wrote. “Data Centers are key to that boom, and keeping Americans FREE and SECURE but, the big Technology Companies who build them must ‘pay their own way.’”

Community Opposition

Brad Smith, Microsoft vice chair and president, acknowledged the need to address concerns about data centers. “When I visit communities around the country, people have questions—pointed questions…They are the type of questions that we need to heed,” Smith said. “They look at this technology and ask, ‘What will it mean for the jobs of the future? What will it mean for the adults of today? What will it mean for their children?’” In October Microsoft canceled construction plans for a data center in Wisconsin because of pushback from the surrounding community, according to Wired.

Microsoft’s Promise

In an effort to increase transparency and minimize the negative impact its data centers have on the public, Microsoft addressed five core issues it plans to focus on going forward. Per Microsoft’s statement, the electricity needed for data centers will more than triple by 2035 to 640 terawatt-hours per year. The U.S. is currently leading development in AI, but that growth depends on a sufficient supply of energy. So where will that electricity come from?
Microsoft said in a statement it believes “it’s both unfair and politically unrealistic for our industry to ask the public to shoulder added electricity costs for AI,” instead suggesting “tech companies pay their own way for the electricity costs they create.” The company plans to cover its costs through a series of steps, including negotiating higher rates with utility companies and public commissions to pay for the electricity its data centers consume. It will also work to increase the efficiency of its data centers and advocate for policies that will ensure communities have affordable and reliable power.

Microsoft also said it would:

Minimize its water use and invest in water replenishment projects
Create construction and operational jobs in local communities and train residents with the skills required to fill them
Increase local tax revenue that will help fund hospitals, schools, parks, and libraries
Help bring AI training and nonprofits to local communities to ensure residents benefit from the data centers.

BY AVA LEVINSON

Monday, January 19, 2026

AI Expert Predicted AI Would End Humanity in 2027—Now He’s Changing His Timeline

Daniel Kokotajlo predicted the end of the world would happen in April 2027. In “AI 2027” — a document outlining the impending impacts of AI, published in April 2025 — the former OpenAI employee and several peers announced that by April 2027, unchecked AI development would lead to superintelligence and consequently destroy humanity. The authors, however, are going back on their predictions. Now, Kokotajlo forecasts superintelligence will land in 2034, but he doesn’t know whether and when AI will destroy humanity.

In “AI 2027,” Kokotajlo argued that superintelligence will emerge through “fully autonomous coding,” enabling AI systems to drive their own development. The release of ChatGPT in 2022 accelerated predictions around artificial general intelligence, with some forecasting its arrival within years rather than decades. These predictions attracted widespread attention. Notably, JD Vance, U.S. vice president, reportedly read “AI 2027” and later urged Pope Leo XIV — who underscored AI as a main challenge facing humanity — to provide international leadership to avoid outcomes listed in the document. On the other hand, people like Gary Marcus, emeritus professor of psychology and neural science at New York University, dismissed “AI 2027” as a “work of fiction,” even calling various predictions “pure science fiction mumbo jumbo.”

As researchers and the public alike begin to reckon with “how jagged AI performance is,” AGI timelines are starting to stretch again, according to Malcolm Murray, an AI risk management expert and one of the authors of the “International AI Safety Report.” “For a scenario like ‘AI 2027’ to happen, [AI] would need a lot of more practical skills that are useful in real-world complexities,” Murray said. Still, developing AI models that can train themselves remains a steady goal for leading AI companies.
Sam Altman, OpenAI CEO, set internal goals for “a true automated AI researcher by March of 2028.” However, he’s not entirely confident in the company’s capabilities to develop superintelligence. “We may totally fail at this goal,” he admitted on X, “but given the extraordinary potential impacts we think it is in the public interest to be transparent about this.” And so, superintelligence may still be possible, but when it arrives and what it will be capable of remains far murkier than “AI 2027” once suggested. BY LEILA SHERIDAN

Wednesday, January 14, 2026

Google Just Announced Major AI Changes to Gmail. Here’s What’s Coming

Gmail is getting a major AI upgrade. Get ready for the AI Inbox, which aims to be like a ‘chief of staff in your life.’ Google has announced three new AI-powered features coming to the massively popular email platform. The company has also made a few AI features that were previously exclusive to paid subscribers available for free. The new features provide users with editorial guidance while composing emails, enhance Gmail’s search capabilities, and proactively surface insights through a new experience that the company calls an “AI Inbox.” In an interview, Gmail product lead Blake Barnes tells Inc. that all of the new AI features are designed to be additive, without fundamentally altering the simplicity that has allowed Gmail to thrive for over 20 years. To avoid any kind of disruption, Barnes says, “we made very intentional and specific decisions to extend from features that already exist in a very natural way, but using modern day technology.”

Take Proofreader, one of the new AI features announced today, as an example. According to Barnes, Proofreader is essentially an upgraded version of common spellcheck tools. But unlike those tools, which typically only highlight misspellings and grammatical errors, Proofreader will suggest more editorially minded changes, calling out instances of passive voice, suggesting ways to break up long sentences, and underlining repetitive statements.

Gmail’s search function is also getting an upgrade. Instead of just typing keywords into the search bar, users can now enter full sentences, and in response, the platform will generate an AI Overview, just like the ones that appear at the top of most Google searches nowadays. These overviews are entirely based on information within your Gmail.
According to Barnes, “we’ll scour every email in your inbox, and we’ll give you the answer to your questions right at the top.” Both Proofreader and AI Overviews in Gmail search will be available to paid subscribers of Google’s AI Pro and AI Ultra plans. Unlike those two features, which are evolutions of previously established tools, the AI Inbox is a brand-new experience. Instead of displaying your most recent emails, Barnes says, AI Inbox acts as a kind of “personal, proactive inbox assistant,” periodically scanning your inbox to identify priority emails and then grouping them into either suggested to-dos or topics to catch up on. In an example shown to Inc., an email from a dental office requesting an appointment reschedule was flagged as a to-do, and included information about alternative times in the summary. “It’s almost like you have a chief of staff in your life,” says Barnes, acknowledging the feature’s potential appeal for enterprise customers, where it could help employees to stay on top of their work. While AI Inbox is currently only available to early-access testers, Barnes says it will soon come to Google Workspace paid accounts. Barnes also announced that a few AI-powered Gmail features that previously required a subscription to use are now free for everyone. These features include Smart Reply, a tool that suggests short responses to emails; Help Me Write, a tool that generates and edits text through prompts; and AI Summaries, a feature that condenses and summarizes full email threads. Gmail was able to fully deploy these features without sacrificing quality, according to Barnes, because of efficiency gains achieved by Google teams at the software and hardware level. “They’re not getting a watered-down version,” says Barnes. “They’re getting the best we have.” BY BEN SHERRY @BENLUCASSHERRY

Sunday, January 11, 2026

Fears of an AI bubble were nowhere to be found at the world’s biggest tech show

Las Vegas, NV: Robots took over the floor at the biggest technology show of the year: I watched a towering humanoid robot march forward, spin its head and wave at an excited crowd. Then I almost bumped into a four-legged doglike robot behind me. They’re just a couple of the many robots I encountered this week designed for a range of purposes, from playing chess to performing spinal surgery. These are common occurrences on the Las Vegas Convention Center’s show floor during CES, which wrapped on Friday. Every January, companies from around the world gather to flaunt new technologies, products and services. The show is just as much spectacle as it is substance; many of the most eye-catching wares either haven’t come to fruition (like flying cars) or are wildly expensive and impractical (think TVs that cost tens of thousands of dollars). But CES provides a glimpse into the bets being made by industry giants like Nvidia, Intel, Amazon and Samsung.

AI once again dominated the conference. Companies showed off everything from humanoid robots they claim will staff factories to refrigerators you can open with your voice to the next-generation chips that will power it all. CES, in some ways, turned the Strip into a bubble of its own, shielded from AI skepticism. CNN asked a handful of tech executives at CES about an AI bubble and how it might impact their businesses. Some said their businesses aren’t relevant to the bubble concerns, while others expressed optimism about AI’s potential and said they are focused on building products that show it. “We’re in the earliest stage of what’s possible. So when I hear we’re in a bubble, I’m like… This isn’t a fad,” said Panos Panay, Amazon’s devices and services chief. “It’s not going to pass.”

Growing concerns of an AI bubble

Tech companies poured more than $61 billion into data center investments in 2025, according to S&P Global, fueling concerns that investments may be far outpacing demand.
And investments are only expected to grow, with Goldman Sachs reporting that AI companies are estimated to invest more than $500 billion in capital expenditures this year. Julien Garran, researcher and partner at research firm MacroStrategy Partnership, said in a report last year that the AI bubble is 17 times bigger than the dot-com bubble. Most of the concerns around an AI bubble have centered on investments in data centers built for AI tasks that are too power-hungry for devices like laptops and smartphones to handle alone. Nvidia, the poster child of the AI boom and the company at the center of the bubble debate, announced at CES that the next version of its computing platform that powers those data centers is arriving in the second half of this year. When asked about the AI bubble, executives from chipmakers Intel and Qualcomm pointed to their respective companies’ efforts to improve how computers process AI tasks locally rather than in the cloud. Qualcomm, which makes chips for smartphones and other products, announced last year that it’s expanding into data centers. But that represents a very small part of its business. “As far as we’re concerned, where we operate is not where the bubble conversation exists,” Akash Palkhiwala, Qualcomm’s chief financial officer and chief operating officer, told CNN. Intel is focused on products that are important to its consumers, like chips that boost laptop performance, rather than making a big bet “that takes a lot of investment that may or may not make it,” said its client computing group head, Jim Johnson. CK Kim, executive vice president and head of Samsung’s digital appliances business, said in an interview through an interpreter that it’s not for him to say whether the industry is in an AI bubble. He added that the company is more focused on whether AI is bringing value to consumers.
AI and the hunt for the next big thing What that “value” looks like is exactly what the thousands of exhibitors at CES tried to demonstrate this week. Humanoid robots were a big part of that equation for companies like Nvidia, Intel, Hyundai and Qualcomm, all of which announced new tech to power human-shaped robots. Boston Dynamics and Hyundai debuted Atlas, a humanoid robot developed in partnership with Google’s DeepMind AI division and designed for industrial work like order fulfillment. It’ll be deployed to Google DeepMind and Hyundai’s Robotics Metaplant Applications center in the coming months, and additional customers will adopt it in early 2027. “With one investment, we can explore any application in the world, from industrial use cases to retail use cases to home use cases,” Aya Durbin, who leads Boston Dynamics’ humanoid application product strategy, said in an interview at Hyundai’s booth when asked what’s driving the interest in humanoid robots. (Hyundai owns a controlling stake in Boston Dynamics.) Tech companies have also been chasing the next big product following the smartphone and think AI could be key to finding it. At CES, a wave of companies introduced discreet listening devices that can record conversations or voice notes. These products included AI jewelry from a startup called Nirva, the Index 01 ring from smartwatch maker Pebble and the now Amazon-owned wristband from Bee. Speaking to gadgets is often faster than typing, but Amazon and Nirva also see their devices as another means to gather data that can provide insights about a user’s life, though doing so will surely raise privacy concerns. Business leaders seem to agree that AI is here to stay — even for those like Pete Erickson, CEO of tech events and education company Modev, who said the industry is indeed in a bubble. But Erickson also believes AI is “just a part of our lives” now. “I don’t think it’s going anywhere,” he said. By Lisa Eadicicco

Friday, January 9, 2026

OpenAI May Want Users to Start Interacting With AI in a Different Way

OpenAI recently reorganized several of its teams in order to focus on improving its audio-generation AI models, according to a new report—and these improvements are crucial to OpenAI pulling off one of its most high-profile projects. According to a report from the Information, OpenAI is prioritizing audio AI development because the technology will be at the core of OpenAI’s much-anticipated line of physical devices, created by legendary iPhone designer Jony Ive. OpenAI bought Ive’s design startup, io Products, for $6.5 billion in May 2025, with the explicit goal of creating a new generation of AI-powered devices. In the months since, rumors have circulated that the devices would eschew screens in favor of an audio-based operating system, in the form of a pair of smart glasses or an Amazon Echo-like speaker. OpenAI CEO Sam Altman further added to those rumors during a December 2025 interview with journalist Alex Kantrowitz, in which he said that using a screen would limit OpenAI’s device to “the same way we’ve had graphical user interfaces working for many decades.” During a presentation in summer 2025, according to the Information, researchers working on the devices told OpenAI staff that the company’s initial device will process information about its surroundings through audio and video, and “act like a companion that works alongside its user, proactively giving suggestions to help the user achieve their goals, rather than as a simple conduit to apps and other software.” In practice, the device sounds remarkably similar to the one featured in the 2015 film Her. In that film, humans wear devices that allow their AI companions to see and hear the world, while providing commentary through an earpiece. The first device isn’t expected for at least another year, but there’s reportedly an obstacle between the vision and reality—OpenAI’s audio models aren’t good enough yet. 
Currently, the Information reports, OpenAI researchers believe that their audio models aren’t yet fast or accurate enough at answering user questions. To rectify this, OpenAI has reportedly “unified several engineering, product and research teams around the goal of improving audio models for its future devices.” The Information reported that this push is headed by Kundan Kumar, a researcher previously at Character.ai. Other key figures include product research lead Ben Newhouse and ChatGPT product manager Jackie Shannon. Their efforts have produced a new architecture for audio models that the Information’s sources say are more natural, emotive, and accurate. A new audio model built on this architecture is expected to be released in Q1 of 2026. The other challenge to OpenAI’s audio ambitions? Its current customers barely use ChatGPT’s existing audio features, according to a former OpenAI employee whom the Information spoke with. For OpenAI’s devices to catch on, the company needs to train the AI-using public to embrace audio as an operating system. BY BEN SHERRY @BENLUCASSHERRY

Wednesday, January 7, 2026

Sergey Brin Has Some Advice for Students in the Age of AI

On December 12, Sergey Brin spoke at the Stanford School of Engineering’s centennial and offered advice for students facing AI right now. The co-founder of Google and parent company Alphabet appeared onstage with Stanford president Jonathan Levin and dean Jennifer Widom for the School of Engineering’s 100-year anniversary. Brin received his master’s degree in computer science from Stanford University in 1995, before meeting prospective PhD student Larry Page and founding Google a few years later. When Widom asked if he would recommend a computer science major to current students, Brin said he chose that field because he was passionate about it, which made it a “no-brainer.” Still, he wouldn’t suggest students change their academic plans solely because of AI. “I wouldn’t go off and, like, switch to comparative literature because you think the AI is good at coding,” Brin said. “When the AI writes the code, and just to be honest, sometimes doesn’t work, it’ll make a mistake that’s pretty significant. You know, getting a sentence wrong in your essay about comparative literature isn’t going to really have that consequence. So, it’s honestly easier for AI to do some of the, you know, creative things.” Levin asked Brin more broadly about the advice he had for students who are facing AI today. “The AI we have today is very different from the AI that we had five years ago, or the AI we are going to have in five years,” Brin said. “I think it’s tough to really forecast. I mean, I would for sure use AI to your benefit. There are just so many things that you can do.” He added that he personally “turn[s] to AI all the time now,” whether to help him find gifts for people close to him or to brainstorm new ideas and products. “It doesn’t do it for me, because I’ll typically ask it, ‘Give me five ideas, blah, blah, blah,’ and probably three of them are going to be junk in some way, but I’ll just be able to tell,” he said.
“But two will have some grain of brilliance, or possibly put it in perspective for me or something like that, that I’ll be able to refine and think through my ideas.” BY AVA LEVINSON

Tuesday, January 6, 2026

7 Predictions for 2026, From Coffee-Making Humanoid Robots to AI Helping Treat Disease

At 70 years old, artificial intelligence is just coming into its prime. AI was the most transformative force in technology in 2025, and is also the buzzword on the lips of many futurists, analysts, and investors for the coming year. Fittingly, 2026 will also mark 70 years since the seminal Dartmouth Summer Research Project on AI—the 1956 gathering of scientists that is widely considered to be the event during which AI research as a field was born. Generative AI in particular has moved the sector forward. Even just in the three years since OpenAI released ChatGPT, it has transformed the world of business. But given how quickly AI is moving, it can be difficult to determine what that might mean for the future. Inc. spoke to three futurists with their fingers on the pulse of technology to find out more about what we can expect 2026 to bring. The death of SEO Natural language interactions with chatbots via mobile apps and browsers will all but replace the use of conventional internet search in 2026, Future Today Strategy Group CEO Amy Webb predicts. The days of tabs, links, ads, affiliates, and click-throughs are giving way to “conversation and intent,” Webb noted in an emailed memo. “The blunt reality is that people are getting to the information they’re looking for much, much faster than having to sift through endless pages of search results,” she tells Inc. Webb says this ongoing trend will continue to be transformative for consumers, who can find what they are looking for “faster, easier, better,” thanks to generative AI. For businesses, however, it is likely to pose a problem. “It’s not entirely clear why AI systems are delivering answers to you and in what order,” Webb says.
“All of these companies that have spent money on [SEO] or search engine marketing or making sure they have a strong digital brand and presence—none of that may matter going forward.” Although a handful of companies have already sprung up to provide GEO—or generative engine optimization—services, Webb wonders if they are selling “snake oil.” She says GEO businesses would have to have “significantly more data and access to information on how the models were trained than any of those companies are willing to divulge.” Real-time translation advancements The possibilities that AI presents for translation have been a focus of the field almost since the beginning. John-Clark Levin, who works as the lead researcher for legendary tech futurist and computer scientist Ray Kurzweil, says that the basic science problems have essentially been solved. Next year, he predicts, is the year that AI-powered translation services will overcome the hurdles necessary to integrate into platforms where they are needed most. One real-world example of an app where AI translation is already integrated automatically into messaging is Uber. “I was in Paris earlier this year and found so many more Uber drivers in Paris speak English than I remember,” Levin says. “Then I realized that’s because I’m saying, ‘Good day,’ and they’re reading, ‘Bonjour’ and vice versa.” A transformative use case of this technology is on freelance marketplaces like Upwork. Today, skilled coders in countries like Pakistan face significant language barriers that limit their ability to earn the types of wages that English-speaking IT workers do. But automatic, integrated AI translation could change all that, Levin says. Furthermore, Levin expects to see even more advancements in translation for video chat applications. In 2026, he says, there could be a demonstration of technology that provides real-time voice translation on video chat applications with real-time lip syncing.
That means, for example, that if Levin were speaking in English to an audience in Beijing, they would hear his voice speaking Mandarin, as well as see his lips “making the shapes of Mandarin sounds at the same time in real-time.” Levin says that the impressive technology will likely be too pricey to deploy at scale in 2026. (Akool, which topped the 2025 Inc. 5000 list, offers technology similar to what Levin is talking about.) Authenticity and analog aesthetics Anatola Araba, founder of R3imagine Story Lab, anticipates the preferences of younger generations driving up demand for what she calls “phygital”—a hybrid of physical and digital—experiences. (Araba says R3imagine Story Lab specializes in this type of storytelling that blends physical worlds with digital elements.) Advancements in technology like augmented, virtual and mixed reality, as well as AI, can take this to the next level. Phygital experiences can be a boon for brands looking to drive engagement, she says, while also urging companies to be culturally sensitive when crafting these types of immersive worlds. “In this age of digital overload, we see people craving this real sense of connection with others—especially younger Gen Z audiences that want to be more analog,” she says. Speaking of digital overload, Araba anticipates a surge in the analog aesthetic in marketing and advertising. It’s no secret that Gen Z seems to crave nostalgia, but Araba says the thirst for the analog—think penpals, ripped paper collages, vinyl, and film photography—is also a backlash to the uncanny perfection of AI. She anticipates brands jumping on board with a trend that is already taking over social media platforms like Pinterest. “In marketing or in advertising, that authentic voice is what draws us the most,” she says.
“Similarly in generating brand assets, that feeling of being human or aesthetically analog, even if you use AI to do it, is definitely drawing everyone, especially the younger generation.” AI for health care Levin anticipates AI will continue to be used for drug discovery in 2026. He notes that there is already a drug for a deadly lung disease that was designed end-to-end by AI and successfully completed a phase 2A clinical trial. Although he doesn’t anticipate full Food and Drug Administration approval of that or any AI-designed drug in 2026, he predicts “notable successes in earlier stage trials” as well as tools getting “amazing results” for preclinical work. Webb also anticipates generative AI making a substantial mark on the world of biotech and health care in 2026, through capabilities like DNA and RNA editing, and protein engineering. She calls it “generative biology,” and, like Levin, says she thinks existing tools like Nvidia’s Evo 2 and DeepMind’s AlphaGenome will be used in 2026 to rapidly iterate new drugs, as well as make other discoveries. “It very likely portends new options in how we treat disease, come up with climate resistant vegetables and nuts, and create synthetic organisms,” she says. “It signals that we are going to see the true birth of the bio economy.” Araba predicts that sleep optimization with the assistance of AI and connected devices like Apple Watches and Oura Rings will become a greater area of focus in 2026, building on research showing a strong link between longevity and sleep quality. She also sees an increase in the use of AI for medical note-taking, but cautions that AI can potentially reinforce systemic biases in the medical field. Robots making us coffee Nothing says the future like robots, and Levin and Webb have some ideas about how the field of robotics might evolve in 2026. Levin anticipates 2026 being the year that a humanoid robot could pass Apple co-founder Steve Wozniak’s coffee test.
The coffee test is a challenge adopted from comments Wozniak made in a 2007 interview, and is considered by some to be an alternative to the Turing Test of computer intelligence. To pass the so-called coffee test, a robot must enter an unfamiliar kitchen and make a cup of coffee, which requires not only the ability to walk and move with dexterity, but also the use of computer vision and reasoning to locate ingredients and operate machinery. Webb, however, thinks of humanoid robots as something of a distraction from how robots will really integrate into society, thanks to advancements in physical AI. Webb paints a picture of a scenario she finds “more plausible” in the next few years than a humanoid robot walking into a kitchen to make coffee. She anticipates a cooler-shaped delivery bot, something like the ones already making deliveries in some U.S. cities, unlocking a small robot door in a consumer’s home with a code, entering the kitchen, taking inventory of missing items, creating a list for a person to approve, and then restocking what’s missing. “It is very, very, very important for everybody to decouple ‘robot’ from ‘human-like form factor,’” she says, adding that hinging hopes for robotics on humanoid form factors may mean missing out on other miraculous innovations already underway. The bubble will burst—but it might not matter Are we in an AI bubble? That’s the question on everyone’s minds, including Levin’s. He says it all depends on the time horizon. Amazon, he says, would have been considered a casualty of the dot-com bubble if an investor had bought shares in 1999 and sold them in 2001. But if they held onto those same shares for 15 years, that investor would have substantially won out. “It is more likely than not that there will be a market correction in AI between here and when we finally get to artificial general intelligence, but on the other side of that, there will be enormous value created,” Levin says.
That doesn’t mean there won’t be pain. Levin says what could contribute to a market correction is the peak capability of generative AI remaining far enough ahead of the reliability of various tools and platforms that they are still not widely adopted by businesses. Companies that are likely to be hit the hardest by a market correction are those that build AI wrapper apps, whereas frontier AI labs, particularly those with scale like Google and Meta, will likely be able to “spend through the correction.” In fact, he says, those companies might even welcome a correction. “If I were Google or Meta thinking about the prospects of this, they would almost like to see a correction make it harder for OpenAI and Anthropic to raise money, knowing that they could just spend through it and hopefully get an advantage on the way to AGI,” he says. Finally, a word of warning Although predictions from Araba, Levin, and Webb often look at the positive side of what AI and other technological breakthroughs can mean for society, Levin also sees several potential downsides in the coming year. AI job disruption isn’t just a future concern, he says, it’s something that’s happening now through disinvestment rather than displacement. Although AI today isn’t necessarily powerful or reliable enough to replace human workers in many instances, companies are starting to acknowledge that it one day will be. This is contributing to a trend of disinvestment in certain industries where leaders believe AI may advance more quickly than they can recoup an expensive investment. Two sectors Levin flags as ripe for this type of change are call centers and Hollywood. He points, for example, to Tyler Perry’s 2024 decision to pause an $800 million investment in his Atlanta studio after the release of OpenAI’s video generator tool, Sora. Levin also says that 2026 could be the year when there is a major safety event involving AI.
There could, he says, be a major hack or cyberattack or an incident in which “a deployed LLM is caught scheming against humans.” And as furor builds around AI, he also unfortunately predicts a risk of AI-motivated violence. “There’s enough alarm about AI that the pool of people with violent tendencies and the pool of people who are alarmed enough to lash out at someone or something in the AI space are both growing and will likely start to overlap,” he says. BY CHLOE AIELLO @CHLOBO_ILO

Friday, January 2, 2026

Walmart’s CEO Just Gave a Sobering Prediction About AI. The Time to Prepare Is Now

As the CEO of Walmart, Doug McMillon runs the largest private employer in the United States. When he talks about the future of work, it isn’t theory—it’s the lived reality of millions of families. In fact, more than 2.1 million people around the world get a paycheck from Walmart. That’s why it matters that, speaking at a workforce conference in Bentonville, Arkansas, last week, Walmart’s CEO didn’t mince words about artificial intelligence. “It’s very clear that AI is going to change literally every job,” McMillon said, according to The Wall Street Journal. “Maybe there’s a job in the world that AI won’t change, but I haven’t thought of it.” Look, a lot of people have predicted that AI will change the way we work in the future. For that matter, people are predicting that AI will change the way we do pretty much everything. It’s already changing the way we look for and process information. And it’s having a real impact on creative work, from generating ideas to editing photos. But this is different. This isn’t some kind of edge case where AI is doing something that benefits niche work. This is a sober assessment from someone who thinks about the livelihoods of millions of people, from truck drivers to warehouse workers and store managers. So far, much of the AI conversation around work has been about replacing humans with robots or computers capable of doing everything from menial tasks to coding. The pitch is that companies will save extraordinary costs as humans are replaced with AI that can do more work, faster, and cheaper. The fear among many employees is that automation will come for knowledge work the same way robots came for manufacturing. McMillon’s warning is different: AI isn’t confined to Silicon Valley jobs. It’s coming for the retail floor, the supply chain, the back office, and the call center. For example, AI can already predict what items a store will sell and when, automatically adjusting orders.
That doesn’t eliminate the need for employees—but it will definitely change what their job looks like. McMillon also made another point: Walmart’s overall head count will likely stay flat, even as its revenue grows. That—if you think about it—isn’t just surprising, it’s incredibly revealing. The assumption is that AI equals fewer jobs. Instead, Walmart expects them to be different. To make that happen, the company is mapping which roles will shrink, which will grow, and which will stay stable. The strategy is to invest in reskilling so workers can move into the new jobs AI creates. “We’ve got to create the opportunity for everybody to make it to the other side,” McMillon said. This is the part of the warning many leaders ignore. Pretending AI won’t affect your workforce is irresponsible. Pretending AI only means job cuts is short-sighted. The challenge is to figure out what your workforce looks like and what you need to do to make the transition. There are a few reasons that Walmart’s perspective matters. The obvious one is because it’s the largest private employer in the world. It is the company that, single-handedly, affects the greatest number of people when it makes a change to its workforce. That’s why AI isn’t just a technology problem; it’s a leadership problem. It’s one thing for McMillon to say “AI will change every job.” It’s another thing to commit that Walmart will still employ millions of people, even if the jobs look different. He’s saying the responsibility to guide workers through change rests squarely on leaders’ shoulders. That’s a message worth hearing far beyond the company’s Bentonville headquarters. AI is often pitched as a productivity story. That’s true, but the bigger story is about people. Technology that changes “literally every job” also changes lives, families, and communities. The ripple effect is enormous when you’re a company the size of Walmart. By the way, Walmart isn’t perfect, but its approach offers a model. 
Instead of framing AI as cost-cutting, it’s framing AI as a transformation challenge. That may seem like semantics, but reframing the conversation makes all the difference between a fearful workforce and a resilient one. McMillon’s prediction is sobering precisely because it’s credible. He isn’t selling software or trying to impress investors. He’s planning for how millions of his own employees will navigate the AI future. If you’re leading a business—whether that’s 20 people or 20,000—the message is pretty clear. AI is going to change every job. Your job is to be thinking hard about what that means for your company. It means thinking about how it will impact your people and coming up with a plan. It seems like almost everyone agrees that AI will change almost everything about the way we all work. The only question is whether you’ll help your people prepare or leave them to figure it out on their own. If you wait until the change arrives, it will be too late. That’s why every leader should start now. EXPERT OPINION BY JASON ATEN, TECH COLUMNIST @JASONATEN