Wednesday, January 21, 2026
Microsoft Has a Plan to Address One of the Biggest Complaints About AI
As it embarks on a years-long project to build 100 data centers across the U.S. to power its AI boom, Microsoft has announced the steps it will take to lower its impact on nearby communities. The move comes as electricity rates have spiked nationwide, fueled in part by the massive power demands of the AI data centers popping up across the country.
President Donald Trump paved the way for the announcement, saying via Truth Social on January 12 that his administration was working with leading technology companies to “ensure that Americans don’t ‘pick up the tab’ for their POWER consumption” by paying more in utilities.
“We are the ‘HOTTEST’ Country in the World, and Number One in AI,” he wrote. “Data Centers are key to that boom, and keeping Americans FREE and SECURE but, the big Technology Companies who build them must ‘pay their own way.’”
Community Opposition
Brad Smith, Microsoft vice chair and president, acknowledged the need to address concerns about data centers.
“When I visit communities around the country, people have questions—pointed questions…They are the type of questions that we need to heed,” Smith said. “They look at this technology and ask, ‘What will it mean for the jobs of the future? What will it mean for the adults of today? What will it mean for their children?’”
In October, Microsoft canceled construction plans for a data center in Wisconsin because of pushback from the surrounding community, according to Wired.
Microsoft’s Promise
In an effort to increase transparency and minimize the negative impact its data centers have on the public, Microsoft addressed five core issues it plans to focus on going forward.
Per Microsoft’s statement, the electricity needed for data centers will more than triple by 2035 to 640 terawatt-hours per year. The U.S. is currently leading development in AI, but that growth depends on a sufficient supply of energy. So where will that electricity come from?
Microsoft said in a statement it believes “it’s both unfair and politically unrealistic for our industry to ask the public to shoulder added electricity costs for AI,” instead suggesting “tech companies pay their own way for the electricity costs they create.”
The company plans to cover those costs through a series of steps, including negotiating higher rates with utility companies and public utility commissions to pay for the electricity its data centers consume. It will also work to increase the efficiency of its data centers and advocate for policies that ensure communities have affordable and reliable power.
Microsoft also said it would:
Minimize its water use and invest in water replenishment projects
Create construction and operational jobs in local communities and train residents with the skills required to fill them
Increase local tax revenue that will help fund hospitals, schools, parks, and libraries
Help bring AI training and nonprofits to local communities to ensure residents benefit from the data centers
BY AVA LEVINSON
Monday, January 19, 2026
AI Expert Predicted AI Would End Humanity in 2027—Now He’s Changing His Timeline
Daniel Kokotajlo predicted the end of the world would happen in April 2027. In “AI 2027” — a document outlining the impending impacts of AI, published in April 2025 — the former OpenAI employee and several peers announced that by April 2027, unchecked AI development would lead to superintelligence and consequently destroy humanity.
The authors, however, are walking back their predictions. Kokotajlo now forecasts that superintelligence will land in 2034, but he doesn’t know if, or when, AI will destroy humanity.
In “AI 2027,” Kokotajlo argued that superintelligence would emerge through “fully autonomous coding,” enabling AI systems to drive their own development. The release of ChatGPT in 2022 accelerated predictions around artificial general intelligence, with some forecasting its arrival within years rather than decades.
These predictions attracted widespread attention. Notably, JD Vance, the U.S. vice president, reportedly read “AI 2027” and later urged Pope Leo XIV — who has underscored AI as a main challenge facing humanity — to provide international leadership to avoid the outcomes the document describes. On the other hand, critics like Gary Marcus, professor emeritus of psychology and neural science at New York University, dismissed “AI 2027” as a “work of fiction,” even calling various predictions “pure science fiction mumbo jumbo.”
As researchers and the public alike begin to reckon with “how jagged AI performance is,” AGI timelines are starting to stretch again, according to Malcolm Murray, an AI risk management expert and one of the authors of the “International AI Safety Report.” “For a scenario like ‘AI 2027’ to happen, [AI] would need a lot of more practical skills that are useful in real-world complexities,” Murray said.
Still, developing AI models that can train themselves remains a stated goal of leading AI companies. OpenAI CEO Sam Altman has set an internal goal of building “a true automated AI researcher by March of 2028.”
However, he’s not entirely confident in the company’s ability to develop superintelligence. “We may totally fail at this goal,” he admitted on X, “but given the extraordinary potential impacts we think it is in the public interest to be transparent about this.”
And so, superintelligence may still be possible, but when it arrives and what it will be capable of remains far murkier than “AI 2027” once suggested.
BY LEILA SHERIDAN
Wednesday, January 14, 2026
Google Just Announced Major AI Changes to Gmail. Here’s What’s Coming
Gmail is getting a major AI upgrade. Get ready for the AI Inbox, which aims to be like a “chief of staff in your life.”
Google has announced three new AI-powered features coming to the massively popular email platform. The company has also made a few AI features that were previously exclusive to paid subscribers available for free.
The new features provide users with editorial guidance while composing emails, enhance Gmail’s search capabilities, and proactively surface insights through a new experience that the company calls an “AI Inbox.”
In an interview, Gmail product lead Blake Barnes tells Inc. that all of the new AI features are designed to be additive, without fundamentally altering the simplicity that has allowed Gmail to thrive for over 20 years. To avoid any kind of disruption, Barnes says, “we made very intentional and specific decisions to extend from features that already exist in a very natural way, but using modern day technology.”
Take Proofreader, one of the new AI features announced today, as an example. According to Barnes, Proofreader is essentially an upgraded version of common spellcheck tools. But unlike those tools, which typically only highlight misspellings and grammatical errors, Proofreader will suggest more editorially minded changes, calling out instances of passive voice, suggesting ways to break up long sentences, and underlining repetitive statements.
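As a rough illustration of the kind of checks Barnes is describing, a proofreading layer goes beyond spellcheck by flagging style issues rather than misspellings. The sketch below uses simple regex heuristics; Google hasn’t published how Proofreader works, and a production feature would rely on a language model rather than hand-written rules like these.

```python
import re

# A toy sketch of editorial checks like those described above: passive
# voice, overlong sentences, repeated statements. Illustrative only --
# not Google's Proofreader, whose implementation is not public.

PASSIVE = re.compile(r"\b(is|are|was|were|been|being|be)\s+\w+ed\b", re.IGNORECASE)
MAX_WORDS = 30  # arbitrary threshold for an "overlong" sentence

def review(text: str) -> list[str]:
    notes = []
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    seen = set()
    for s in sentences:
        if PASSIVE.search(s):
            notes.append(f"Possible passive voice: {s!r}")
        if len(s.split()) > MAX_WORDS:
            notes.append(f"Consider breaking up this long sentence: {s!r}")
        if s.lower() in seen:
            notes.append(f"Repeated statement: {s!r}")
        seen.add(s.lower())
    return notes

sample = "The launch was delayed by the vendor. The launch was delayed by the vendor."
print("\n".join(review(sample)))
```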
Gmail’s search function is also getting an upgrade. Instead of just typing keywords into the search bar, users can now enter full sentences, and in response, the platform will generate an AI Overview, just like the ones that appear at the top of most Google searches nowadays. These overviews are entirely based on information within your Gmail. According to Barnes, “we’ll scour every email in your inbox, and we’ll give you the answer to your questions right at the top.”
Both Proofreader and AI Overviews in Gmail search will be available to paid subscribers of Google’s AI Pro and AI Ultra plans.
Unlike those two features, which are evolutions of previously established tools, the AI Inbox is a brand-new experience. Instead of displaying your most recent emails, Barnes says, AI Inbox acts as a kind of “personal, proactive inbox assistant,” periodically scanning your inbox to identify priority emails and then grouping them into either suggested to-dos or topics to catch up on. In an example shown to Inc., an email from a dental office requesting an appointment reschedule was flagged as a to-do, and included information about alternative times in the summary.
“It’s almost like you have a chief of staff in your life,” says Barnes, acknowledging the feature’s potential appeal for enterprise customers, where it could help employees to stay on top of their work. While AI Inbox is currently only available to early-access testers, Barnes says it will soon come to Google Workspace paid accounts.
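To make that triage pattern concrete, here’s a minimal sketch of the scan-and-sort loop Barnes describes: periodically classify incoming messages as to-dos or topics to catch up on. The keyword rules below are hypothetical stand-ins for the model-driven classification Gmail presumably uses; none of this reflects Google’s actual code.

```python
from dataclasses import dataclass

# A toy version of the "scan, classify, group" loop described above.
# The keyword rules are hypothetical stand-ins for illustration only;
# Gmail's AI Inbox presumably uses a language model for this step.

ACTION_WORDS = ("reschedule", "confirm", "rsvp", "sign", "pay")

@dataclass
class Email:
    sender: str
    subject: str
    body: str

def triage(emails: list[Email]) -> dict[str, list[str]]:
    buckets: dict[str, list[str]] = {"to-dos": [], "catch up on": []}
    for e in emails:
        text = f"{e.subject} {e.body}".lower()
        bucket = "to-dos" if any(w in text for w in ACTION_WORDS) else "catch up on"
        buckets[bucket].append(f"{e.sender}: {e.subject}")
    return buckets

inbox = [
    Email("Dental office", "Please reschedule your appointment",
          "We have openings Tuesday at 2 p.m. and Thursday at 10 a.m."),
    Email("Robotics Weekly", "This week in humanoids", "Five stories worth reading."),
]
print(triage(inbox))
```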
Barnes also announced that a few AI-powered Gmail features that previously required a subscription to use are now free for everyone. These features include Smart Reply, a tool that suggests short responses to emails; Help Me Write, a tool that generates and edits text through prompts; and AI Summaries, a feature that condenses and summarizes full email threads.
Gmail was able to fully deploy these features without sacrificing quality, according to Barnes, because of efficiency gains achieved by Google teams at the software and hardware level. “They’re not getting a watered-down version,” says Barnes. “They’re getting the best we have.”
BY BEN SHERRY @BENLUCASSHERRY
Sunday, January 11, 2026
Fears of an AI bubble were nowhere to be found at the world’s biggest tech show
Las Vegas, NV: Robots took over the floor at the biggest technology show of the year: I watched a towering humanoid robot march forward, spin its head and wave at an excited crowd. Then I almost bumped into a four-legged doglike robot behind me.
They’re just a couple of the many robots I encountered this week designed for a range of purposes, from playing chess to performing spinal surgery. These are common occurrences on the Las Vegas Convention Center’s show floor during CES, which wrapped on Friday. Every January, companies from around the world gather to flaunt new technologies, products and services.
The show is just as much spectacle as it is substance; many of the most eye-catching wares either haven’t come to fruition (like flying cars) or are wildly expensive and impractical (think TVs that cost tens of thousands of dollars). But CES provides a glimpse into the bets being made by industry giants like Nvidia, Intel, Amazon and Samsung.
AI once again dominated the conference. Companies showed off everything from humanoid robots they claim will staff factories to refrigerators you can open with your voice to the next-generation chips that will power it all. CES, in some ways, turned the Strip into a bubble of its own, shielded from AI skepticism.
CNN asked a handful of tech executives at CES about an AI bubble and how it might impact their businesses. Some said their businesses aren’t relevant to the bubble concerns, while others expressed optimism about AI’s potential and said they are focused on building products that show it.
“We’re in the earliest stage of what’s possible. So when I hear we’re in a bubble, I’m like… This isn’t a fad,” said Panos Panay, Amazon’s devices and services chief. “It’s not going to pass.”
Growing concerns of an AI bubble
Tech companies poured more than $61 billion into data center investments in 2025, according to S&P Global, fueling concerns that investments may be far outpacing demand.
And investments are only expected to grow, with Goldman Sachs reporting that AI companies are estimated to invest more than $500 billion in capital expenditures this year. Julien Garran, a partner at research firm MacroStrategy Partnership, said in a report last year that the AI bubble is 17 times bigger than the dot-com bubble.
Most of the concerns around an AI bubble have centered on investments in data centers built for AI tasks that are too power-hungry for devices like laptops and smartphones to handle alone. Nvidia, the poster child of the AI boom and the company at the center of the bubble debate, announced at CES that the next version of its computing platform that powers those data centers is arriving in the second half of this year.
When asked about the AI bubble, executives from chipmakers Intel and Qualcomm pointed to their respective companies’ efforts to improve how computers process AI tasks locally rather than in the cloud.
Qualcomm, which makes chips for smartphones and other products, announced last year that it’s expanding into data centers. But that represents a very small part of its business.
“As far as we’re concerned, where we operate is not where the bubble conversation exists,” Akash Palkhiwala, Qualcomm’s chief financial officer and chief operating officer, told CNN.
Intel is focused on products that are important to its consumers, like chips that boost laptop performance, rather than making a big bet “that takes a lot of investment that may or may not make it,” said its client computing group head, Jim Johnson.
CK Kim, executive vice president and head of Samsung’s digital appliances business, said in an interview through an interpreter that it’s not for him to say whether the industry is in an AI bubble. He added that the company is more focused on whether AI is bringing value to consumers.
AI and the hunt for the next big thing
What that “value” looks like is exactly what the thousands of exhibitors at CES tried to demonstrate this week. Humanoid robots were a big part of that equation for companies like Nvidia, Intel, Hyundai and Qualcomm, all of which announced new tech to power human-shaped robots.
Boston Dynamics and Hyundai debuted Atlas, a humanoid robot designed for industrial work like order fulfillment and developed in partnership with Google’s DeepMind AI division. It’ll be deployed to Google DeepMind and Hyundai’s Robotics Metaplant Applications center in the coming months, and additional customers will adopt it in early 2027.
“With one investment, we can explore any application in the world, from industrial use cases to retail use cases to home use cases,” Aya Durbin, who leads Boston Dynamics’ humanoid application product strategy, said in an interview at Hyundai’s booth when asked what’s driving the interest in humanoid robots. (Hyundai owns a controlling stake in Boston Dynamics.)
Tech companies have also been chasing the next big product following the smartphone and think AI could be key to finding it. At CES, a wave of companies introduced discreet listening devices that can record conversations or voice notes. These products included AI jewelry from a startup called Nirva, the Index 01 ring from smartwatch maker Pebble and the now Amazon-owned wristband from Bee.
Speaking to gadgets is often faster than typing, but Amazon and Nirva also see their devices as another means to gather data that can provide insights about a user’s life, though doing so will surely raise privacy concerns.
Business leaders seem to agree that AI is here to stay — even for those like Pete Erickson, CEO of tech events and education company Modev, who said the industry is indeed in a bubble.
But Erickson also believes AI is “just a part of our lives” now.
“I don’t think it’s going anywhere,” he said.
By Lisa Eadicicco
Friday, January 9, 2026
OpenAI May Want Users to Start Interacting With AI in a Different Way
OpenAI recently reorganized several of its teams to focus on improving its audio-generation AI models, according to a new report—and these improvements are crucial to OpenAI pulling off one of its most high-profile projects.
According to a report from the Information, OpenAI is prioritizing audio AI development because the technology will be at the core of OpenAI’s much-anticipated line of physical devices, created by legendary iPhone designer Jony Ive.
OpenAI bought Ive’s design startup, io Products, for $6.5 billion in May 2025, with the explicit goal of creating a new generation of AI-powered devices. In the months since, rumors have circulated that the devices would eschew screens in favor of an audio-based operating system, in the form of a pair of smart glasses or an Amazon Echo-like speaker.
OpenAI CEO Sam Altman further added to those rumors during a December 2025 interview with journalist Alex Kantrowitz, in which he said that using a screen would limit OpenAI’s device to “the same way we’ve had graphical user interfaces working for many decades.”
During a presentation in summer 2025, according to the Information, researchers working on the devices told OpenAI staff that the company’s initial device will process information about its surroundings through audio and video, and “act like a companion that works alongside its user, proactively giving suggestions to help the user achieve their goals, rather than as a simple conduit to apps and other software.”
In practice, the device sounds remarkably similar to the one featured in the 2013 film Her. In that film, humans wear devices that allow their AI companions to see and hear the world, while providing commentary through an earpiece.
The first device isn’t expected for at least another year, but there’s reportedly an obstacle between the vision and reality—OpenAI’s audio models aren’t good enough yet. Currently, the Information reports, OpenAI researchers believe their audio models aren’t fast or accurate enough at answering user questions. To rectify this, OpenAI has reportedly “unified several engineering, product and research teams around the goal of improving audio models for its future devices.”
The Information reported that this push is headed by Kundan Kumar, a researcher previously at Character.ai. Other key figures include product research lead Ben Newhouse and ChatGPT product manager Jackie Shannon. Their efforts have produced a new architecture for audio models that the Information’s sources say is more natural, emotive, and accurate. A new audio model built on this architecture is expected to be released in Q1 of 2026.
The other challenge to OpenAI’s audio ambitions? Current customers barely use ChatGPT’s existing audio features, according to a former OpenAI employee who spoke with the Information. For OpenAI’s devices to catch on, the company needs to train the AI-using public to embrace audio as an operating system.
BY BEN SHERRY @BENLUCASSHERRY
Wednesday, January 7, 2026
Sergey Brin Has Some Advice for Students in the Age of AI
On December 12, Sergey Brin offered advice for students navigating the age of AI. The co-founder of Google and parent company Alphabet appeared onstage with Stanford president Jonathan Levin and dean Jennifer Widom for the School of Engineering’s 100-year anniversary.
Brin received his master’s degree in computer science from Stanford University in 1995, before meeting prospective PhD student Larry Page and founding Google a few years later.
When Widom asked if he would recommend a computer science major to current students, Brin said he chose that field because he was passionate about it, which made it a “no-brainer.” Still, he wouldn’t suggest students change their academic plans solely because of AI.
“I wouldn’t go off and, like, switch to comparative literature because you think the AI is good at coding,” Brin said. “When the AI writes the code, and just to be honest, sometimes doesn’t work, it’ll make a mistake that’s pretty significant. You know, getting a sentence wrong in your essay about comparative literature isn’t going to really have that consequence. So, it’s honestly easier for AI to do some of the, you know, creative things.”
Levin asked Brin more broadly about the advice he had for students who are facing AI today.
“The AI we have today is very different from the AI that we had five years ago, or the AI we are going to have in five years,” Brin said. “I think it’s tough to really forecast. I mean, I would for sure use AI to your benefit. There are just so many things that you can do.”
He added that he personally “turn[s] to AI all the time now,” whether to help him find gifts for people close to him or to brainstorm new ideas and products.
“It doesn’t do it for me, because I’ll typically ask it, ‘Give me five ideas, blah, blah, blah,’ and probably three of them are going to be junk in some way, but I’ll just be able to tell,” he said. “But two will have some grain of brilliance, or possibly put it in perspective for me or something like that, that I’ll be able to refine and think through my ideas.”
BY AVA LEVINSON
Tuesday, January 6, 2026
7 Predictions for 2026, From Coffee-Making Humanoid Robots to AI Helping Treat Disease
At nearly 70 years old, artificial intelligence is just coming into its prime.
AI was the most transformative force in technology in 2025, and is also the buzzword on the lips of many futurists, analysts, and investors for the coming year. Fittingly, 2026 will also mark 70 years since the seminal Dartmouth Summer Research Project on AI—the 1956 gathering of scientists that is widely considered to be the event during which AI research as a field was born.
Generative AI in particular has moved the sector forward. In just the three years since OpenAI released ChatGPT, it has transformed the world of business. But given how quickly AI is moving, it can be difficult to determine what that might mean for the future.
Inc. spoke to three futurists with their fingers on the pulse of technology to find out more about what we can expect 2026 to bring.
The death of SEO
Natural language interactions with chatbots via mobile apps and browsers will all but replace conventional internet search in 2026, Future Today Strategy Group CEO Amy Webb predicts. Gone are the days of tabs, links, ads, affiliates, and click-throughs, she noted in an emailed memo, replaced by “conversation and intent.”
“The blunt reality is that people are getting to the information they’re looking for much, much faster than having to sift through endless pages of search results,” she tells Inc.
Webb says this ongoing trend will continue to be transformative for consumers, who can find what they are looking for “faster, easier, better,” thanks to generative AI. For businesses, however, it is likely to pose a problem.
“It’s not entirely clear why AI systems are delivering answers to you and in what order,” Webb says. “All of these companies that have spent money on [SEO] or search engine marketing or making sure they have a strong digital brand and presence—none of that may matter going forward.”
Although a handful of companies have already sprung up to provide GEO—or generative engine optimization—services, Webb wonders if they are selling “snake oil.” She says GEO businesses would have to have “significantly more data and access to information on how the models were trained than any of those companies are willing to divulge.”
Real-time translation advancements
The possibilities that AI presents for translation have been a focus of the field almost since its beginning. John-Clark Levin, lead researcher for legendary tech futurist and computer scientist Ray Kurzweil, says the basic science problems have essentially been solved. Next year, he predicts, AI-powered translation services will overcome the hurdles to integrating into the platforms where they are needed most. Uber is one real-world example of an app where AI translation is already built into messaging.
“I was in Paris earlier this year and found so many more Uber drivers in Paris speak English than I remember,” Levin says. “Then I realized that’s because I’m saying, ‘Good day,’ and they’re reading, ‘Bonjour’ and vice versa.”
A transformative use case for this technology is on freelance marketplaces like Upwork. Today, skilled coders in countries like Pakistan face significant language barriers that limit their ability to earn the wages that English-speaking IT workers command. But automatic, integrated AI translation could change all that, Levin says.
Furthermore, Levin expects even more advancements in translation for video chat applications. In 2026, he says, there could be a demonstration of technology that provides real-time voice translation in video chat, complete with real-time lip syncing. That means, for example, that if Levin were speaking in English to an audience in Beijing, they would hear his voice speaking Mandarin and see his lips “making the shapes of Mandarin sounds at the same time in real-time.”
Levin says that the impressive technology will likely be too pricey to deploy at scale in 2026. (Akool, which topped the 2025 Inc. 5000 list, offers technology similar to what Levin is talking about.)
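Conceptually, the pipeline Levin describes chains three familiar stages: speech recognition, machine translation, and speech synthesis, with lip-synced video generation layered on top. The sketch below shows the shape of that chain; every function is a hypothetical stub standing in for a real model, not any vendor’s actual API.

```python
# Conceptual shape of the real-time translation chain described above:
# speech -> text -> translated text -> synthesized speech (+ lip sync).
# Every function here is a hypothetical stub, not a real model or API.

def transcribe(audio_chunk: bytes) -> str:
    """Speech-recognition stub: audio in, source-language text out."""
    return "Good day"  # placeholder result

def translate(text: str, target_lang: str) -> str:
    """Machine-translation stub."""
    return {"fr": "Bonjour"}.get(target_lang, text)  # placeholder

def synthesize(text: str) -> bytes:
    """Text-to-speech stub; a real system might clone the speaker's voice."""
    return text.encode()  # placeholder audio

def translate_stream(audio_chunk: bytes, target_lang: str) -> bytes:
    # A real system runs each stage incrementally to keep latency low,
    # and a lip-sync video model would consume the synthesized audio.
    return synthesize(translate(transcribe(audio_chunk), target_lang))

print(translate_stream(b"\x00\x01", "fr"))  # b'Bonjour'
```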
Authenticity and analog aesthetics
Anatola Araba, founder of R3imagine Story Lab, anticipates the preferences of younger generations driving up demand for what she calls “phygital”—a hybrid of physical and digital—experiences. (Araba says R3imagine Story Lab specializes in this type of storytelling, which blends physical worlds with digital elements.) Advancements in technology like augmented, virtual, and mixed reality, as well as AI, can take this to the next level. Phygital experiences can be a boon for brands looking to drive engagement, she says, though she urges companies to be culturally sensitive when crafting these types of immersive worlds.
“In this age of digital overload, we see people craving this real sense of connection with others—especially younger Gen Z audiences that want to be more analog,” she says.
Speaking of digital overload, Araba anticipates a surge in the analog aesthetic in marketing and advertising. It’s no secret that Gen Z craves nostalgia, but Araba says the thirst for the analog—think pen pals, ripped-paper collages, vinyl, and film photography—is also a backlash to the uncanny perfection of AI. She anticipates brands jumping on board with a trend that is already taking over social media platforms like Pinterest.
“In marketing or in advertising, that authentic voice is what draws us the most,” she says. “Similarly in generating brand assets, that feeling of being human or aesthetically analog, even if you use AI to do it, is definitely drawing everyone, especially the younger generation.”
AI for health care
Levin anticipates AI will continue to be used for drug discovery in 2026. He notes that there is already a drug for a deadly lung disease that was designed end-to-end by AI and successfully completed a phase 2A clinical trial. Although he doesn’t anticipate full Food and Drug Administration approval of that or any AI-designed drug in 2026, he predicts “notable successes in earlier stage trials” as well as tools getting “amazing results” for preclinical work.
Webb also anticipates generative AI making a substantial mark on the world of biotech and health care in 2026, through capabilities like DNA and RNA editing, and protein engineering. She calls it “generative biology,” and, like Levin, says she thinks existing tools like Nvidia’s Evo 2 and DeepMind’s AlphaGenome will be used in 2026 to rapidly iterate new drugs, as well as make other discoveries.
“It very likely portends new options in how we treat disease, come up with climate-resistant vegetables and nuts, and create synthetic organisms,” she says. “It signals that we are going to see the true birth of the bioeconomy.”
Araba predicts that sleep optimization with the assistance of AI and connected devices like Apple Watches and Oura Rings will become a greater area of focus in 2026, building on research showing a strong link between longevity and sleep quality. She also sees an increase in the use of AI for medical note taking, but cautions that AI can potentially reinforce systemic biases in the medical field.
Robots making us coffee
Nothing says the future like robots, and Levin and Webb have some ideas about how the field of robotics might evolve in 2026.
Levin anticipates 2026 being the year a humanoid robot could pass Apple co-founder Steve Wozniak’s coffee test. The coffee test is a challenge adapted from comments Wozniak made in a 2007 interview, and is considered by some to be an alternative to the Turing test of machine intelligence. To pass, a robot must enter an unfamiliar kitchen and make a cup of coffee, which requires not only the ability to walk and move with dexterity, but also computer vision and reasoning to locate ingredients and operate machinery.
Webb, however, thinks of humanoid robots as something of a distraction from how robots will really integrate into society, thanks to advancements in physical AI. Webb paints a picture of a scenario she finds “more plausible” in the next few years than a humanoid robot walking into a kitchen to make coffee. She anticipates a cooler-shaped delivery bot, something like the ones already making deliveries in some U.S. cities, unlocking a small robot door in a consumer’s home with a code, entering the kitchen, taking inventory of missing items, creating a list for a person to approve, and then restocking what’s missing.
“It is very, very, very important for everybody to decouple ‘robot’ from ‘human-like form factor,’” she says, adding that hinging hopes for robotics on humanoid form factors may mean missing out on other miraculous innovations already underway.
The bubble will burst—but it might not matter
Are we in an AI bubble? That’s the question on everyone’s minds, including Levin’s. He says it all depends on the time horizon. Amazon, he says, would have been considered a casualty of the dot-com bubble by an investor who bought shares in 1999 and sold them in 2001. But an investor who held those same shares for 15 years would have come out substantially ahead.
“It is more likely than not that there will be a market correction in AI between here and when we finally get to artificial general intelligence, but on the other side of that, there will be enormous value created,” Levin says.
That doesn’t mean there won’t be pain. Levin says a correction could come if the peak capability of generative AI stays far enough ahead of the reliability of everyday tools and platforms that businesses still decline to adopt them widely. Companies likely to be hit hardest by a market correction are those that build AI wrapper apps, whereas frontier AI labs, particularly those with the scale of Google and Meta, will likely be able to “spend through the correction.” In fact, he says, those companies might even welcome one.
“If I were Google or Meta thinking about the prospects of this, they would almost like to see a correction make it harder for OpenAI and Anthropic to raise money, knowing that they could just spend through it and hopefully get an advantage on the way to AGI,” he says.
Finally, a word of warning
Although predictions from Araba, Levin, and Webb often look at the positive side of what AI and other technological breakthroughs can mean for society, Levin also sees several potential downsides in the coming year.
AI job disruption isn’t just a future concern, he says, it’s something that’s happening now through disinvestment rather than displacement. Although AI today isn’t necessarily powerful or reliable enough to replace human workers in many instances, companies are starting to acknowledge that it one day will be. This is contributing to a trend of disinvestment in certain industries where leaders believe AI may advance more quickly than they can recoup an expensive investment.
Two sectors Levin flags as ripe for this type of change are call centers and Hollywood. He pointed, for example, to Tyler Perry’s 2024 decision to pause an $800 million investment in his Atlanta studio after the release of OpenAI’s video-generation tool, Sora.
Levin also says that 2026 could be the year when there is a major safety event involving AI. There could, he says, be a major hack or cyberattack or an incident in which “a deployed LLM is caught scheming against humans.” And as furor builds around AI, he also unfortunately predicts a risk of AI-motivated violence.
“There’s enough alarm about AI that the pool of people with violent tendencies and the pool of people who are alarmed enough to lash out at someone or something in the AI space are both growing and will likely start to overlap,” he says.
BY CHLOE AIELLO @CHLOBO_ILO
Friday, January 2, 2026
Walmart’s CEO Just Gave a Sobering Prediction About AI. The Time to Prepare Is Now
As the CEO of Walmart, Doug McMillon runs the largest private employer in the United States. When he talks about the future of work, it isn’t theory—it’s the lived reality of millions of families. In fact, more than 2.1 million people around the world get a paycheck from Walmart. That’s why it matters that, speaking at a workforce conference in Bentonville, Arkansas, last week, McMillon didn’t mince words about artificial intelligence.
“It’s very clear that AI is going to change literally every job,” McMillon said, according to The Wall Street Journal. “Maybe there’s a job in the world that AI won’t change, but I haven’t thought of it.”
Look, a lot of people have predicted that AI will change the way we work in the future. For that matter, people are predicting that AI will change the way we do pretty much everything. It’s already changing the way we look for and process information. And it’s having a real impact on creative work, from generating ideas to editing photos.
But this is different. This isn’t some edge case where AI benefits only niche work. This is a sober assessment from someone who thinks about the livelihoods of millions of people, from truck drivers to warehouse workers and store managers.
So far, much of the AI conversation around work has been about replacing humans with robots or computers capable of doing everything from menial tasks to coding. The pitch is that companies will save extraordinary costs as humans are replaced with AI that can do more work, faster, and cheaper.
The fear among many employees is that automation will come for knowledge work the same way robots came for manufacturing. McMillon’s warning is different: AI isn’t confined to Silicon Valley jobs. It’s coming for the retail floor, the supply chain, the back office, and the call center.
For example, AI can already predict what items a store will sell and when, automatically adjusting orders. That doesn’t eliminate the need for employees—but it will definitely change what their job looks like.
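At its simplest, that kind of demand prediction can be sketched as a forecast feeding a reorder rule, as below. This is an illustration of the pattern only; real retail systems, Walmart’s included, model seasonality, promotions, weather, and much more.

```python
# A deliberately simple illustration of forecast-driven reordering.
# Not Walmart's system, which is not public; real demand models account
# for seasonality, promotions, holidays, weather, and more.

def forecast_daily_demand(recent_sales: list[int], window: int = 7) -> float:
    """Moving-average forecast of units sold per day."""
    recent = recent_sales[-window:]
    return sum(recent) / len(recent)

def reorder_quantity(on_hand: int, lead_time_days: int,
                     recent_sales: list[int], safety_stock: int = 10) -> int:
    """Units to order so stock covers expected demand until the next delivery."""
    expected = forecast_daily_demand(recent_sales) * lead_time_days
    return max(0, round(expected) + safety_stock - on_hand)

two_weeks = [12, 9, 14, 11, 10, 13, 12, 15, 9, 11, 12, 10, 14, 13]
print(reorder_quantity(on_hand=40, lead_time_days=5, recent_sales=two_weeks))  # 30
```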
McMillon also made another point: Walmart’s overall head count will likely stay flat, even as its revenue grows. That, if you think about it, isn’t just surprising—it’s revealing. The common assumption is that AI means fewer jobs. Instead, Walmart expects the jobs to be different.
To make that happen, the company is mapping which roles will shrink, which will grow, and which will stay stable. The strategy is to invest in reskilling so workers can move into the new jobs AI creates. “We’ve got to create the opportunity for everybody to make it to the other side,” McMillon said.
This is the part of the warning many leaders ignore. Pretending AI won’t affect your workforce is irresponsible. Pretending AI only means job cuts is short-sighted. The challenge is to figure out what your workforce looks like and what you need to do to make the transition.
There are a few reasons Walmart’s perspective matters. The obvious one is that it’s the largest private employer in the world—the single company that affects the greatest number of people when it changes its workforce. That’s why AI isn’t just a technology problem; it’s a leadership problem.
It’s one thing for McMillon to say “AI will change every job.” It’s another thing to commit that Walmart will still employ millions of people, even if the jobs look different. He’s saying the responsibility to guide workers through change rests squarely on leaders’ shoulders. That’s a message worth hearing far beyond the company’s Bentonville headquarters.
AI is often pitched as a productivity story. That’s true, but the bigger story is about people. Technology that changes “literally every job” also changes lives, families, and communities. The ripple effect is enormous when you’re a company the size of Walmart.
By the way, Walmart isn’t perfect, but its approach offers a model. Instead of framing AI as cost-cutting, it’s framing AI as a transformation challenge. That may seem like semantics, but reframing the conversation makes all the difference between a fearful workforce and a resilient one.
McMillon’s prediction is sobering precisely because it’s credible. He isn’t selling software or trying to impress investors. He’s planning for how millions of his own employees will navigate the AI future.
If you’re leading a business—whether that’s 20 people or 20,000—the message is pretty clear. AI is going to change every job. Your job is to be thinking hard about what that means for your company. It means thinking about how it will impact your people and coming up with a plan.
It seems like almost everyone agrees that AI will change almost everything about the way we all work. The only question is whether you’ll help your people prepare or leave them to figure it out on their own, by which point it will be too late. That’s why every leader should start now.
EXPERT OPINION BY JASON ATEN, TECH COLUMNIST @JASONATEN