Wednesday, February 19, 2025
Microsoft Says Workers Are Already Using AI to Lighten Workloads, at Risk to Their Own Brainpower
Eccentric science-fiction author and technophile Douglas Adams once wrote about how tech was taking an effort-saving role in people’s lives: “Dishwashers washed tedious dishes for you, thus saving you the bother of washing them yourself,” he explained, and “video recorders watched tedious television for you” for much the same reason. But we’re in the AI era now, and a new Microsoft study suggests that Adams’s metaphor still applies: AI is able to take on much of that “tedious thinking” for you, saving you all the bother of actually working while at work.
The new study actually warns that some knowledge workers are risking becoming overly reliant on generative AI, and their “problem-solving skills may decline as a result,” technology news site The Register says.
The study acknowledges that people have objected to the impact of various technologies on the human mind since forever—from writing (a fundamental, ancient form of technology) all the way up to the internet. It also agrees that these worries are “not unfounded.” “Used improperly, technologies can and do result in the deterioration of cognitive faculties that ought to be preserved,” the authors write, noting that any type of automation can deprive people of chances to practice using their minds.
For the study, the researchers surveyed 319 knowledge workers who used generative AI at least weekly, asking whether they turn on their brains and apply “critical thinking” when they use tools like ChatGPT or Microsoft Copilot.
The findings were stark. Survey respondents said that when they had high confidence that an AI tool would do well at a particular task, they felt less need to apply their own critical thinking. On the other hand, when a worker had high confidence in their own skills and less in the AI’s, they felt good about putting effort into evaluating the output the AI gave them and then improving it.
AI is redefining how we see work
It all boils down to this: when knowledge workers use AI tools, the way they think about activities like analysis, synthesis, and evaluation of information shifts. When using an AI to help understand something, a worker’s focus moves from gathering information to verifying it; when using an AI for problem solving, the focus moves from carrying out the actual solving process to stewarding the task.
Think of it like this: When aircraft didn’t have autopilots, pilots had to concentrate the whole time on operating the airplane—navigating, controlling, reacting to technical challenges, and feeling the way the wind was blowing. Modern-day jetliner pilots have a very different job. They must be able to fly the plane manually in case of unexpected problems, but minute to minute, what they’re often doing is monitoring the aircraft as it flies itself, making sure it’s doing the right thing.
Microsoft’s new research suggests that when people use an AI to help them solve work tasks, they’re doing the same thing—offloading the boring, slow, or difficult bits of the work to the AI and then managing the AI tool to get the desired output. The worry here is that over time people who used to hone their critical thinking skills all the time at work may lose some of that ability.
One reassuring piece of pro-human info from the survey was that workers in high stakes workplaces or situations (like seeking medical advice from an AI) were conscious of the risk of over-relying on AI outputs that could be problematic, flawed, or flat-out wrong. Those respondents said they used their own thinking skills more.
So what should we do about this? Should you worry that your workforce is going to become dimmer over time, human drudges merely shoveling data mindlessly into and out of an AI system? Not at all. The researchers suggest that one trick would be to design AI tools so they’ve got systems built into them that support worker skill development over the long term, The Register explains.
And AIs should encourage workers to reflect on what’s happening when they’re interacting with AI outputs and even help the workers in this action—essentially keeping their minds focused, not blindly trusting the AI. It’s also possible that, as a good employer, you could give your staff tasks that keep their brains ticking over—ones that don’t need an AI boost.
BY KIT EATON @KITEATON
Monday, February 17, 2025
Why do so many products that don’t seem to need AI integration still feel the need to include it?
The world didn’t ask for an AI-designed shoe. Nor did young parents across the country clamor for AI-enabled baby changing pads. It’s almost certain that your cat doesn’t know the difference between doing its business in a regular litter box versus one that is AI-enhanced.
Yet all of these products exist, because the drumbeat of supposed progress in Silicon Valley is relentless. Super Bowl commercials were a prime example of AI’s societal stranglehold. OpenAI, Salesforce, and Google all aired commercials for their AI products during Sunday’s big game.
How did we get here? We obviously didn’t start out with AI-powered teddy bears and a chatbot that specializes in erectile dysfunction.
For the past couple of years, startups and tech giants alike have anchored themselves to AI hype because it’s been the easiest way to stay relevant or appear like the kind of mold-breaker that can turn the heads of almighty venture capitalists. After all, neither sovereign wealth funds nor institutional investors like stagnation.
It’s been a trickle-down dynamic between AI’s influence on the tech world and consumer products in general. It started with OpenAI: Two years after initially getting bankrolled by Microsoft, the ChatGPT maker is closing a $40 billion investment from SoftBank, the Japanese investment titan. AI euphoria—or contagion—has swept Silicon Valley, and startups have the best chance of getting funding if they develop AI or otherwise integrate it into a product that’s apparently crying out for an overdue disruption—like the humble litter box. Last year, AI-related companies welcomed $100 billion in VC investment globally, which is roughly one-third of the $314 billion spent on all tech startups, a Crunchbase analysis shows.
Undergirding all of this is a belief called “technological determinism,” Arun Sundararajan, a professor of technology, operations, and statistics at New York University, told Inc. last month.
“If technology can do it, then it will happen,” Sundararajan explained. “As soon as the technological capability comes along, somehow, magically, it will enter our reality.”
The latter half of that statement is undeniable. AI is being shoehorned into products that seemingly do not need to be enhanced by machine learning or large language models. Even in the summer of 2023, experts were warning that superfluous products riding the AI wave are eerily reminiscent of the firms that collapsed and caused market chaos in the dot-com crash.
Last summer, Gayle Jennings-O’Byrne, CEO and general partner at VC firm Wocstar, said basically the same thing: “The mindset of VCs, versus the reality of what these business models and companies are going to look like, [is] just going to propel what we’re calling a bubble,” she told Inc.
But inevitably, there will be more expansion, iteration, and pursuit of growth. Despite losing $5 billion last year, OpenAI is eyeing locations for new data centers in 16 different states, and Meta says it will allocate $65 billion for AI development this year. Even though ChatGPT has 200 million monthly users, it’s unclear to some experts whether the product adds anything of value to productivity and economic output. The worry is that consumer tools that can generate images and text on demand are more of a parlor trick than a new form of electricity.
“We know ChatGPT has about 200 million unique monthly users, but the question is how many of them are using it in a way that will lead to significant productivity improvements/cost reductions. I don’t really know the answer to that question,” Daron Acemoglu, an economist at the Massachusetts Institute of Technology, told NPR last October.
BY SAM BLUM @SAMMBLUM
Friday, February 14, 2025
These Are the Jobs AI Will Replace
Question: Do you have a job that could be replaced by AI?
OK, that was a trick question. Everyone’s job could be replaced by AI. That’s how they frightened CEOs into buying it.
Better question: How do you know if your job will be replaced by AI?
Let me answer that this way.
There are two kinds of salespeople in this world: people who are good at selling, and people who are Salesforce Wizards. Similarly, there are two kinds of marketing people: people who are good at marketing, and HubSpot Gurus.
You see where I’m going?
To answer the question, we need to talk about the difference between expendable knowledge workers and irreplaceable knowledgeable workers.
I’m Not Trying to Scare Anyone
There’s actually a little bit of hope to talk about. For once.
See, way back in the olden days—a.k.a., the summer of 2023—I predicted which jobs were most likely to be replaced by the coming Generative AI wave.
TL;DR: I was one of the progenitors of early generative AI, in 2010, building a platform that enterprises like Yahoo and the Associated Press used to write insightful, informative narratives from nothing but raw data. Even back then, we knew that what we were doing would eliminate jobs. But everyone around us was confused as to which jobs our story-writing computers were going to eliminate.
In 2010, we weren’t going to replace journalists or writers, at least not the good ones. Our tech was going to eliminate a new breed of “data scientists,” and only the sketchy ones.
Those data scientists were knowledge workers. They knew how to use databases and SQL and R and Python to get insights out of the data. But it took the journalists, the knowledgeable workers, to make those insights make sense in context for the reader.
I Was Right!
Fast-forward to today and that battle is still going on. The threat has multiplied, of course, but not exponentially, because even today’s agentic AI is certainly not a font of unlimited contextual knowledge.
What I learned back in 2010 and what still holds true today is that technical evolution has a way of calling out the rote-task knowledge workers in any industry. Back then, it was Johnny-come-lately data scientists. Today, it’s Salesforce Wizards and HubSpot Gurus.
And AI does the calling out almost instantaneously, in a way that’s obvious when it’s not hallucinating.
As I said in that 2023 article, it was only a matter of time before early-2023 generative AI was going to hit the knowledge economy, at which point, those rote-task knowledge workers should start worrying about their jobs.
To clarify which jobs were in peril, I believe I used the phrase, “any white-collar, butt-in-a-seat, pixel-pushing, spreadsheet-spelunking job that the influx of data wrought on the workforce.”
But even back in 2010, those jobs were starting to disappear, thanks to the automation that was and still is part and parcel of Big AI (or whatever). Those rote-task jobs were only being used as stepping stones to turn those knowledge workers into knowledgeable workers.
In 2023, I said that most knowledge workers had about five to 10 years before they became obsolete.
I Was Wrong!
Well, we’re only in year three, maybe four, and corporate America seems hell-bent on eliminating the jobs of both knowledge workers and knowledgeable workers and letting God sort it out.
That’s a huge problem. It has everything to do with how AI was sold into the enterprise (i.e., FOMO), and that has been my problem with AI the whole time.
In trying to reach maximum productivity, we just went all-in on the promise of AI and redefined productivity to meet it.
However, as with most overreach cycles in business, I believe that’s finally changing. It might be too little and too late, but the AI bills are starting to come due.
AI Hype Meets Financial Reality
Even back in early 2024, obvious leaks in the AI productivity dream bucket started becoming very public.
For example, this article from a fellow Inc. writer (go team!) digging into a survey from Upwork notes that 96 percent of C-suite executives expect AI to increase productivity, while 77 percent of employees actually using the tools as they exist today experienced decreased productivity.
This isn’t a bright red warning flag or anything, but it does at least show the chasm-like mismatch in expectations versus reality that’s been snowballing over the last year.
Now, in late 2023, I also said that AI was coming for SaaS, and everyone laughed at me again.
Well, I was both right and wrong there too.
I was right about AI replacing SaaS knowledge workers—those solely responsible for knowing how to get useful insights out of platforms like Google Analytics… or Salesforce or HubSpot.
I was wrong in assuming corporate America would respond to this technical evolution sensibly and with caution and care for its employees.
Boy, was I wrong. As I said, companies threw all kinds of babies out with all kinds of bathwater.
To their own detriment.
And here we are.
We’re Not Out of the Woods Yet
We’re not at the end of the AI hype cycle, but I believe we’re beyond peak AI hype.
So the answer, the real answer for how to become irreplaceable, is the same and as simple as it ever was.
Become a knowledgeable worker.
Be unbeatable at what you do. Let AI handle the rote-task drudgery like staring at HubSpot all day.
Because, yes, AI can come up with code or creative work or even make the hiring and firing decisions for you. It just can’t do it completely in context, and the skills that separate great coders and marketers and salespeople and CEOs are the same skills they always were.
Look—it’s not how good you are at AI. It’s how good you are at everything that AI should not be doing. Which is a lot.
OK. So now it’s just a matter of hiring back all those knowledgeable workers we lost—and are still losing—in the AI enterprise coup.
Let’s hope our hiring system isn’t broken beyond all recognition.
EXPERT OPINION BY JOE PROCOPIO, FOUNDER, JOEPROCOPIO.COM @JPROCO
Wednesday, February 12, 2025
The AI Energy Score attempts to quantify the environmental harm of AI models. Here’s how to use it.
Artificial Intelligence is notoriously energy-hungry. Now, a new tool from Salesforce is attempting to quantify the toll AI models are taking on the planet.
“AI models vary dramatically in terms of their environmental impact, from big general models to small domain-specific models,” says Boris Gamazaychikov, head of AI sustainability at Salesforce. “What the AI Energy Score is aiming for is to develop a standardized way to actually start measuring these things.”
AI Energy Score, launched Monday by Salesforce in collaboration with Hugging Face, Cohere, and Carnegie Mellon University, assigns a star rating to the energy efficiency of AI models relative to one another. Ratings are available for 10 distinct tasks, from text and image generation to summarization and automatic speech recognition. A five-star rating is the most energy-efficient; one star is the least. The tool also shares an estimate of the GPU energy consumed per 1,000 queries, in watt-hours.
As of Feb. 10, DistilGPT2, a model developed by Hugging Face that is meant to be a “faster, lighter version of GPT-2,” was the most energy-efficient, consuming 1.31 Wh of energy per 1,000 text-generation queries.
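To put those watt-hour ratings in context, a back-of-the-envelope conversion helps. Here is a minimal sketch (the helper function is illustrative, not part of the AI Energy Score tool; the 1.31 Wh per 1,000 queries figure is DistilGPT2’s score quoted above) for scaling a score up to a larger workload:

```python
def energy_kwh(wh_per_1000_queries: float, num_queries: int) -> float:
    """Convert an AI Energy Score-style rating (Wh per 1,000 queries)
    into total kilowatt-hours for a given query volume."""
    total_wh = wh_per_1000_queries * num_queries / 1000  # total watt-hours
    return total_wh / 1000  # Wh -> kWh

# DistilGPT2's reported 1.31 Wh per 1,000 text-generation queries,
# scaled to one million queries:
print(round(energy_kwh(1.31, 1_000_000), 2))  # → 1.31
```

At that rating, a million queries consume roughly as much electricity as running a laptop for a day or two; less efficient models on the leaderboard score orders of magnitude higher.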
Alongside an interactive landing page with more information, the site also has a label generator that Gamazaychikov hopes developers will use to showcase their scores and drive awareness.
For now, the tool only rates open-source models hosted on Hugging Face, a platform for data scientists, AI developers, and enthusiasts. That includes some models from Meta, Microsoft, Mistral, and StabilityAI. Salesforce also submitted its own open- and closed-source models for rating. Closed-source (proprietary) models, which include many big names such as OpenAI’s GPT-3 and GPT-4, Anthropic’s Claude, and Google’s Gemini, are not readily available. But Gamazaychikov says there is a way for developers of these proprietary models to conduct the analysis securely and then submit it if they so choose.
“By making a really easy to use, clear, standardized approach, we’re reducing any kind of friction between those leading companies being able to not do this,” Gamazaychikov says, adding that the tool was developed using feedback from academia, government officials, and AI companies. “We’re also hoping to increase some pressure, showing that it’s important to disclose this type of information.”
Although Salesforce welcomes individuals to experiment with the tool to get a sense of how their use of AI models could take a toll on the environment, Gamazaychikov says “the primary audience is probably enterprises, or those that are integrating these types of models within their products.” Furthermore, a tool like AI Energy Score could help companies calculate indirect greenhouse gas emissions throughout the value chain.
The launch of AI Energy Score follows Salesforce’s debut last fall of Agentforce, which allows customers to build or customize AI-powered agents that automate tasks or augment employee labor.
Gamazaychikov says Salesforce also hopes to inspire greater change by giving regulators a jumping-off point for assessing the emissions-load of AI. He also envisions promoting the use of smaller, local models at a time when large reasoning models, which could consume even more energy than current large language models, are in development.
A 2024 study from Goldman Sachs found that AI could drive up the energy demand of data centers by 160 percent by the end of the decade, even as the growing global population exerts additional pressures on the world’s energy needs. This comes at a time when leaders are contending with the worsening effects of climate change, prompting big tech companies to look to renewable energy and innovations in nuclear power to satisfy the growing demand.
Want to try it out? Head on over to huggingface.co/AIEnergyScore and give it a whirl.
BY CHLOE AIELLO @CHLOBO_ILO
Monday, February 10, 2025
Are you anxious about AI? Nvidia’s co-founder and CEO has some tips for how to find your feet in this new landscape.
If, during the past few years, you’ve felt like the future is barreling towards you like an oncoming train, you are not alone. The shock (and stock market dip) that accompanied last week’s announcement of a cheaper AI model from Chinese company DeepSeek shows that even experts can be blindsided by how fast tech innovation is happening.
No one can take away that feeling of whiplash completely. Change is happening too fast for that. But if you’re looking for a good guide to help you get a handle on our AI-filled future, entrepreneur Jensen Huang should probably be at the top of your list.
Huang is the CEO of Nvidia, maker of the chips driving the current AI boom. He built Nvidia into a $3 trillion juggernaut by spotting the imminent rise of AI before just about anyone else and betting his company on it. Other tech CEOs fawn over his vision.
What does Huang see for the future? Perhaps more importantly for entrepreneurs, how does he recommend you prepare?
Huang: AI is like the interstate highway system
That was the topic of conversation when Huang appeared on Cleo Abram’s Huge Conversations podcast recently. It’s an hour-long discussion, and if you’re fascinated by AI, then the whole thing is worth a watch. (I’ve embedded the complete interview at the end of this column.)
Perhaps the most immediately actionable insight was Huang’s advice for everyday people wondering how best to prepare themselves for the coming AI revolution.
On the podcast, Huang likens the change to the shift that arrived when the U.S. built the interstate highway system. Fast roads were the essential new technology at the heart of this change, but a whole ecosystem of other possibilities quickly developed around it.
“Suburbs start to be created and distribution of goods from east to west is no longer a concern. All of a sudden, gas stations are cropping up on highways. Fast food restaurants show up. Motels show up, because people are traveling across the state,” Huang says.
AI will be similar. Machines that can do many tasks better and faster than humans will create ripple effects that change many aspects of how we do our jobs and live our lives. How can you try to peek around the corner and get a glimpse of what that might look like? Huang suggests you ask yourself two key questions.
If the drudgery it takes to do my job disappears, what changes?
Some people worry that AI might take away jobs, making many workers superfluous. Huang doesn’t share this fear. He believes human insight and creativity will still be important, but what we spend our time on will be different. AI will kill rote donkey work.
“Suppose that your job continues to be important, but the effort by which you do it went from a week long to almost instantaneous, that the effort and drudgery basically goes to zero. What are the implications of that?” Huang asks.
Imagine you have an AI software programmer in your pocket that can write any software program you dream up. Or consider how it would impact your work if you could describe a rough idea and an AI could quickly produce a prototype for you to interact with.
Innovations like these, Huang insists, shouldn’t make us feel threatened. They should make us feel empowered and excited about all the higher-level thinking and problem solving we’ll be freed to do.
“I think it’s going to be incredibly fun,” he says.
How can I use AI to do my job better now?
If Huang’s first question is designed to get you thinking about what your work might look like 10 years from now, his second nudges you to consider what you can do now to prepare for that future.
Huang tells Abram that he has an AI tutor with him at all times. It’s a practice he recommends to just about everyone. “The knowledge of almost any particular field, the barriers to that understanding have been reduced,” he says. “If there’s one thing I would encourage everybody to do, it’s go get an AI tutor right away.”
But don’t stop there. Huang’s more general point is that the more you experiment with AI now, the better prepared you’ll be to use it to your advantage as it develops. “If I were a student today, the first thing I would do is learn AI,” he declares a bit later in the podcast.
He doesn’t mean learn technical details of the math behind the machines—though if you’re into that, certainly have at it. He means playing around with current tools like ChatGPT and Gemini to get comfortable with how to prompt them effectively.
“Learning to interact with AI is not unlike being someone who is really good at asking questions,” he claims. “Prompting AI is very similar.” It’s a skill that requires honing.
The end goal for everyone should be to begin thinking through how AI can best assist you with your work. “If I were a student today,” Huang continues, “doesn’t matter what field of science I am going to go into or what profession I am, I am going to ask myself, how am I going to use AI to do my job better?”
Other AI experts agree with Huang
Huang’s two questions are a great place to begin if you want to start to get a handle on how AI is going to affect you. But he’s hardly the only expert weighing in.
There is no shortage of books on AI you can read to try and wrap your head around the technology. Other tech leaders, like OpenAI CEO Sam Altman and Bill Gates, have also weighed in on what our AI future may look like and how to prepare.
Even experts are still trying to figure out the future of AI, so don’t feel bad if you’re overwhelmed too. But while technologists are still building the future, they all agree we shouldn’t let anxiety or uncertainty get in the way of experimentation. The time for all of us to start thinking about the future of AI and playing with these tools is now.
EXPERT OPINION BY JESSICA STILLMAN @ENTRYLEVELREBEL
https://www.youtube.com/watch?v=7ARBJQn6QkM
Friday, February 7, 2025
Here’s How OpenAI’s New Deep Research Tool Could Change Your Workplace
OpenAI continues to champion the rise of AI agents, and now may be pushing that promotion to another level. Agents are next-generation AI tools, capable of acting on their own in a digital environment, and they’re potentially much more useful than the question-and-response AI chatbot systems we’re all getting used to. Demonstrating exactly how transformative agents could be, OpenAI has just released a new tool for ChatGPT called Deep Research that seems like it can speedily tackle a critical business task that could normally eat up days or even weeks of a worker’s time: gathering data and synthesizing it into a report.
In an FAQ page explaining what the tool can do, OpenAI explains it’s “perfect for people who do intense knowledge work in areas like finance, science, and law,” or “researchers and discerning shoppers who need thorough, precise, and reliable research.” It’s particularly good at “finding niche, non-intuitive information that would involve multiple steps across numerous websites,” OpenAI says.
It works like this: You ask the tool to look for information on a particular topic, attaching images, files, or extra data like PDFs or spreadsheets to add context and help explain your query—useful in, say, a question about financial information. OpenAI says it will sometimes pop up a form to ask for specific information before it starts gathering data so it can create a “more focused and relevant” answer. The final report is “fully documented with clear citations to sources,” so you can make sure that the information it found is both relevant and correct—streamlining the important step of checking whether the AI has hallucinated the info or if it’s real.
Speaking at an event in Washington D.C. to show off the new tool, OpenAI chief product officer Kevin Weil made some bold claims about the tech, the New York Times reported. It’ll be able to do “complex research tasks that might take a person anywhere from 30 minutes to 30 days,” Weil said, adding that the tool will complete these tasks in maybe five to 30 minutes. It can also search recursively: when one search surfaces other data sources, it follows up and searches those too.
If this sounds a lot like the kind of data-gathering task you might set for an intern or a junior employee when you begin a new program at work or when you encounter a novel problem that’s holding up a big project, then you’re likely thinking along the right lines. These seem to be exactly the sort of use cases that OpenAI has in mind.
Users on Reddit who have used the tool are singing its praises. One highlighted the “time differential between the time it takes to complete its work compared to a human,” noting that “by some OpenAI employee estimates, it seems to be roughly 15x at the moment,” meaning it can complete a research task about 15 times faster than a human.
That raises the question of when Deep Research could become cheaper and more effective to use than tasking an expensive worker to tackle these workplace chores. The Redditor projects how this might play out: If we imagine more advanced AI models that “can perform all the tasks of a lower-skill office job, but complete 3 weeks of work in a single working day,” then it’s quite simple to imagine “the cost of labour rapidly approaching zero as certain job sectors become automated.”
Another user summed it up even more clearly: “Pro user here, just tried out my first Deep Research prompt and holy moly was it good. The insights it provided frankly I think would have taken a person, not just a person, but an absolute expert at least an entire day of straight work and research to put together, probably more.”
This new tool may reignite the “will AI steal my job?” debate, but it also has great potential to transform many office tasks. Since it can speedily perform research, your staff may have more work hours available to actually respond to the data delivered from the research task, versus spending time trawling for info.
BY KIT EATON @KITEATON
Wednesday, February 5, 2025
OpenAI Just Released o3-mini, Its Most Cost-Efficient Model Yet
OpenAI just released o3-mini, a miniature version of its upcoming flagship AI model. The new model is the company’s first “small reasoning model,” capable of using a chain-of-thought process to complete tasks more accurately. The model’s launch, now available both on ChatGPT and through OpenAI’s API, caps off a week that also saw the company strengthen its ties with the United States government in the form of announcements about ChatGPT Gov and a partnership with the U.S. National Laboratories.
In a blog post today, OpenAI shared that it anticipates o3-mini will be particularly useful for tasks involving science, math, and coding. The company’s testing indicates that o3-mini outperforms its predecessor, o1-mini, across several math and coding benchmarks, and in some aspects even outperforms the full o1 model. As with o1, users will be able to determine how much effort o3-mini puts into its reasoning, which could help developers save money when building applications that don’t require full effort.
Subscribers at ChatGPT’s $200 per month Pro tier will get unlimited access to o3-mini, while those who pay $20 for ChatGPT’s Plus tier will be allowed 150 messages to o3-mini per day. Free users will also get a chance to try the model, but it’s unclear for how long. Developers who want to use OpenAI’s API to create new applications with o3-mini will pay $1.10 per million input tokens and $4.40 per million output tokens. (Tokens are the small chunks of text, such as words or word fragments, that an AI model reads and generates.)
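Because API billing scales linearly with token counts, the quoted rates make rough cost math straightforward. Here is a minimal sketch (the helper function is illustrative, not part of OpenAI’s API; the rates are the ones quoted above, and actual billing may differ):

```python
def o3_mini_api_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate o3-mini API cost in USD using the quoted rates:
    $1.10 per 1M input tokens, $4.40 per 1M output tokens."""
    input_rate = 1.10 / 1_000_000   # USD per input token
    output_rate = 4.40 / 1_000_000  # USD per output token
    return input_tokens * input_rate + output_tokens * output_rate

# Example: a 2,000-token prompt producing a 1,000-token reply
print(round(o3_mini_api_cost(2_000, 1_000), 4))  # → 0.0066
```

At those rates, even heavy experimentation stays cheap: a million tokens in and out together costs $5.50, which is why OpenAI is pitching o3-mini as its most cost-efficient reasoning model.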
The model’s launch comes just as OpenAI is wrapping up a big week. On Tuesday, the AI market leader announced ChatGPT Gov, a version of the LLM that’s been tailored “to provide U.S. government agencies with an additional way to access OpenAI’s frontier models.”
In a blog post announcing ChatGPT Gov, OpenAI wrote that government agencies will be able to deploy the system in their own Microsoft Azure cloud environment. “Self-hosting ChatGPT Gov,” according to OpenAI, “enables agencies to more easily manage their own security, privacy, and compliance requirements, such as stringent cybersecurity frameworks.”
The company added that it anticipates this new service will make it easier for government agencies to approve the analysis of “non-public sensitive data” by ChatGPT. ChatGPT Gov operates similarly to ChatGPT Enterprise, OpenAI’s business-focused product.
OpenAI also shared that since 2024, “more than 90,000 users across more than 3,500 US federal, state, and local government agencies have sent over 18 million messages on ChatGPT to support their day-to-day work.” The Air Force Research Laboratory uses the tool for basic administrative support, and the Commonwealth of Pennsylvania is taking part in a pilot program with ChatGPT that OpenAI claims has reduced the time spent on routine tasks by “approximately 105 minutes per day on the days they used it.”
ChatGPT has also been used for some time to enhance research at Los Alamos National Laboratory in New Mexico, the birthplace of the atomic bomb. But a major new deal means OpenAI will soon have an even larger presence there.
The company announced on Thursday that it has agreed to deploy current and future flagship AI models on Venado, a supercomputer in Los Alamos built in collaboration with Nvidia. According to OpenAI’s blog post announcing the deal, the computer was designed to “drive scientific breakthroughs in materials science, renewable energy, astrophysics, and more,” and it will be a shared resource for researchers at Los Alamos, Lawrence Livermore, and Sandia National Labs.
As for how the models will be used, OpenAI says researchers will probe the technology for its potential to identify new approaches for treating and preventing diseases, improve detection of national security threats, and unlock “the full potential of natural resources.” The models will also be used to support Los Alamos’ nuclear security program, but use cases will be carefully decided on an individual basis in consultation with government officials and OpenAI researchers with security clearances.
In a post on LinkedIn, OpenAI national security policy and partnerships lead Katrina Mulligan said that she joined OpenAI “because I believed that some of the most consequential national security decisions of the decade would be made at companies like this and I wanted a seat at that table. Today’s announcement of our partnership with the National Labs to advance the future of science is exactly the kind of game-changing decision I wanted to have a role in making.”
BY BEN SHERRY @BENLUCASSHERRY
Monday, February 3, 2025
Doubling Lifespans and Superintelligence: AI CEOs Are Saying Some Wild Stuff. Is Any of It True?
The AI revolution has been awash in hype for years, but it’s now truly on the cusp of sparking global transformation—if you take recent CEO and investor statements as gospel.
The World Economic Forum in Davos, Switzerland, has offered a platform for leaders of AI’s biggest startups to wax lyrical about the industry’s bright prospects. Anthropic CEO Dario Amodei jumped at the chance to laud AI’s power far beyond chatbots, saying that AI could allow humans to double their lifespans within the next decade.
“If you think about what we might expect humans to accomplish in an area like biology in 100 years, I think a doubling of the human lifespan is not at all crazy. And then if AI is able to accelerate that, we may be able to get that in five to 10 years,” he said at a panel last week called Technology in the World.
Amodei also told reporters that superintelligence, or artificial general intelligence (AGI), is a feasible prospect by 2027. For many AI evangelists, the idea of building AI that possesses more knowledge than the collective sum of humanity is a pet topic and holy grail. OpenAI CEO Sam Altman wrote in September that such a milestone is within the industry’s grasp: “It is possible that we will have superintelligence in a few thousand days,” Altman claimed.
BY SAM BLUM @SAMMBLUM
Friday, January 31, 2025
DeepSeek Is the Wake-Up Call Our AI Overlords Needed
The days when a news story out of the tech sector led the entire day’s news cycle don’t come around as often as they used to. It almost makes me long for the chaotic bitcoin billionaire gold rush nonsense. But when a little Chinese AI company called DeepSeek takes a trillion-dollar chunk out of the U.S. stock market—in what can best be described as a nerd drive-by? Well—that story goes above the fold, to use a term my journalist friends love.
Is it time to panic? Should you panic?
Don’t panic.
Unless you’re Sam Altman. Then you may want to check the cash reserves and ask everybody to work a couple of late nights.
Because DeepSeek, the “little guy,” is coming for the lunches of OpenAI, Anthropic, Google, Microsoft, Nvidia… everyone.
And it doesn’t stop there.
No One Expected the Spanish Inquisition?
No seriously, you’re fine. Just don’t look at your retirement account for a couple weeks.
This is a gut check—and one that was a long time coming, the kind that accompanies any meteoric mainstream adoption of a new technology.
Look, I kid. Sam Altman is never going to read my columns. And I don’t miss the bitcoin FOMO or Tom Brady hawking crypto or NFT bro billionaires. I don’t pine for the days of Yo or Klout or Napster or Flooz.
Yeah, I threw Napster in there. I was right there with you, but stealing songs is stealing songs.
But for real. In a 2025 dominated by Temu and Shein, no one imagined that a Chinese company might just drop in and deliver a just-as-good product for a fraction of the price?
It looks like DeepSeek did just that. And I’m aware that in this same Year of Our Lord 2025, we should definitely wait for triplicate and quadruplicate verification on the veracity of those claims. But holy cow, it raises the question:
Do you really need all that money and all of that hardware and power to build your super-cool chatbot?
I mean, at this point, it looks like DeepSeek did not need all that—pending quadruplicate and maybe quintuplicate independent verification, which will likely never happen because China.
But it raises a deeper question. The one currently keeping Sam and Dario and Sundar and Satya and especially poor Jensen up at night. Man, Jensen alone lost $20 billion in a day.
Oh, here’s a link to a quick primer on the carnage if you need it. Then come back and I’ll ask the question out loud that we’re all asking internally.
What Are We Doing Here?
Or, more bluntly, “Is this AI shit real or what?”
Or, with context, “Exactly where is this ‘future of AI’ that’s either always right around the corner or stuck just behind the AI wall?”
I’ve been telling you what AI is and isn’t for a couple years now, because I started working with this science back in 2010. I’m no AI oracle (is Oracle into AI?), but I know enough to be able to connect the dots.
AI is just a series of “if-this-then-that” (IFTTT) calculations—but really, really fast. Back in 2010, we could scare up enough processing power to make the output faster and slightly more elegant at scale than humans could imagine. And I mean that literally. That’s where IFTTT starts to look like magic.
The processing capabilities that existed 15 years ago meant we were just scratching the surface. Well, that surface is now wide open and we’re approaching the core.
But the core is still IFTTT. Only it’s so fast now that we can’t even comprehend it.
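To make the IFTTT framing concrete, here’s a toy sketch in Python. To be clear, this is an invented illustration, not code from any real AI system: the rules, messages, and routing names are all hypothetical, and actual models replace hand-written conditions like these with billions of learned parameters evaluated at enormous speed.

```python
# Toy sketch of "if-this-then-that" logic at (tiny) scale.
# Each rule pairs a condition with an action; the first match wins.
RULES = [
    (lambda msg: "refund" in msg, "route_to_billing"),
    (lambda msg: "crash" in msg, "route_to_engineering"),
    (lambda msg: True, "route_to_general_inbox"),  # catch-all fallback
]

def route(message: str) -> str:
    """Return the action of the first rule whose condition matches."""
    msg = message.lower()
    for condition, action in RULES:
        if condition(msg):
            return action

print(route("My app keeps CRASHING"))  # prints: route_to_engineering
```

Modern models don’t store explicit rules like these, of course; the point of the analogy is that conditional computation, run fast enough and at large enough scale, starts to look like magic.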
And when you’re selling that into the mainstream, well, you’re gonna get hop-ons. You’re going to get companies that slash their workforces and invest in the promise of a future where machines can increase productivity at a pace that we “can’t imagine.”
I saw a thread on Hacker News where some poor dope was having to justify to his or her leadership why AI hasn’t increased the productivity of their software development by 10x.
Again, in 2025, this might be a troll. But it tracks. And more importantly, it’s laugh-to-keep-from-crying funny.
Don’t Panic
Look, it’s like Hans Gruber said when the cops showed up. This was inevitable, and, as it happens, necessary.
This is going to force everyone involved to take a much needed time-out and reflect on what all that money, all that energy, and all those layoffs are adding up to.
In the meantime, yo, don’t use Chinese software. Don’t be like the kids rebelling against the TikTok ban by giving even more of their data over to an entity known to surreptitiously take it, aggregate it, and use it.
What was it my dad used to say? It’s “cutting off your nose to spite your face.” Yeah, that’s brutal but you’ll remember it. He was awesome like that.
But then also ask yourself, should you really be putting that many eggs into AI’s basket?
This is a pivotal moment in the future of tech. And every moment like this requires a reckoning with overconfidence and hubris to shake out the real and lasting benefits and opportunities.
I’m already seeing social-media posts from scammers giving you step-by-step instructions for a DeepSeek-built trading algorithm guaranteed to game the stock market. I think they just took the crypto template and changed a couple words, maybe using ChatGPT.
Wait for all that noise to die down before you make any decisions. I’ll be following along if you want to hop on my email list.
EXPERT OPINION BY JOE PROCOPIO, FOUNDER, JOEPROCOPIO.COM @JPROCO
Wednesday, January 29, 2025
What is DeepSeek, the Chinese AI startup that shook the tech world?
A surprisingly efficient and powerful Chinese AI model has taken the technology industry by storm. It’s called DeepSeek R1, and it’s rattling nerves on Wall Street.
The new AI model was developed by DeepSeek, a startup that was born just a year ago and has somehow managed a breakthrough that famed tech investor Marc Andreessen has called “AI’s Sputnik moment”: R1 can nearly match the capabilities of its far more famous rivals, including OpenAI’s GPT-4, Meta’s Llama and Google’s Gemini — but at a fraction of the cost.
The company said it had spent just $5.6 million powering its base AI model, compared with the hundreds of millions, if not billions of dollars US companies spend on their AI technologies. That’s even more shocking when considering that the United States has worked for years to restrict the supply of high-power AI chips to China, citing national security concerns. That means DeepSeek was supposedly able to achieve its low-cost model on relatively under-powered AI chips.
What is DeepSeek?
The company, founded in late 2023 by Chinese hedge fund manager Liang Wenfeng, is one of scores of startups that have popped up in recent years seeking big investment to ride the massive AI wave that has taken the tech industry to new heights.
Liang has become the Sam Altman of China — an evangelist for AI technology and investment in new research. His hedge fund, High-Flyer, focuses on AI development.
Like other AI startups, including Anthropic and Perplexity, DeepSeek released various competitive AI models over the past year that have captured some industry attention. Its V3 model raised some awareness about the company, although its content restrictions around sensitive topics about the Chinese government and its leadership sparked doubts about its viability as an industry competitor, the Wall Street Journal reported.
But R1, which came out of nowhere when it was revealed late last year, launched last week and gained significant attention this week when the company revealed to the Journal its shockingly low cost of operation. And it is open-source, which means other companies can test and build upon the model to improve it.
The DeepSeek app has surged on the app store charts, surpassing ChatGPT Monday, and it has been downloaded nearly 2 million times.
Why is DeepSeek such a big deal?
AI is a power-hungry and cost-intensive technology — so much so that America’s most powerful tech leaders are buying up nuclear power companies to provide the necessary electricity for their AI models.
Meta last week said it would spend upward of $65 billion this year on AI development. Sam Altman, CEO of OpenAI, last year said the AI industry would need trillions of dollars in investment to support the development of the in-demand chips needed to power the electricity-hungry data centers that run the sector’s complex models.
So the notion that similar capabilities as America’s most powerful AI models can be achieved for such a small fraction of the cost — and on less capable chips — represents a sea change in the industry’s understanding of how much investment is needed in AI. The technology has many skeptics and opponents, but its advocates promise a bright future: AI will advance the global economy into a new era, they argue, making work more efficient and opening up new capabilities across multiple industries that will pave the way for new research and developments.
Andreessen, a Trump supporter and co-founder of Silicon Valley venture capital firm Andreessen Horowitz, called DeepSeek “one of the most amazing and impressive breakthroughs I’ve ever seen,” in a post on X.
If that potentially world-changing power can be achieved at a significantly reduced cost, it opens up new possibilities — and threats — to the planet.
What does this mean for America?
The United States thought it could sanction its way to dominance in a key technology it believes will help bolster its national security. Just a week before leaving office, former President Joe Biden doubled down on export restrictions on AI computer chips to prevent rivals like China from accessing the advanced technology.
But DeepSeek has called into question that notion, and threatened the aura of invincibility surrounding America’s technology industry. America may have bought itself time with restrictions on chip exports, but its AI lead just shrank dramatically despite those actions.
DeepSeek may show that turning off access to a key technology doesn’t necessarily mean the United States will win. That’s an important message to President Donald Trump as he pursues his isolationist “America First” policy.
Wall Street was alarmed by the development. US stocks were set for a steep selloff Monday morning. Nvidia (NVDA), the leading supplier of AI chips, whose stock more than doubled in each of the past two years, fell 12% in premarket trading. Meta (META) and Alphabet (GOOGL), Google’s parent company, were also down sharply, as were Marvell, Broadcom, Palantir, Oracle and many other tech giants.
Are we really sure this is a big deal?
The industry is taking the company at its word that the cost was so low. No one is really disputing it, but the market freak-out hinges on the truthfulness of a single and relatively unknown company. The company notably didn’t say how much it cost to train its model, leaving out potentially expensive research and development costs. (Still, it probably didn’t spend billions of dollars.)
It’s also far too early to count out American tech innovation and leadership. One achievement, albeit a gobsmacking one, may not be enough to counter years of progress in American AI leadership. And a massive customer shift to a Chinese startup is unlikely.
“The DeepSeek model rollout is leading investors to question the lead that US companies have and how much is being spent and whether that spending will lead to profits (or overspending),” said Keith Lerner, analyst at Truist. “Ultimately, our view, is the required spend for data and such in AI will be significant, and US companies remain leaders.”
Although the cost-saving achievement may be significant, the R1 model is a ChatGPT competitor—a consumer-focused large language model. It hasn’t yet proven it can handle some of the massively ambitious AI capabilities for industries that—for now—still require tremendous infrastructure investments.
“Thanks to its rich talent and capital base, the US remains the most promising ‘home turf’ from which we expect to see the emergence of the first self-improving AI,” said Giuseppe Sette, president of AI market research firm Reflexivity.
By David Goldman
Monday, January 27, 2025
Why OpenAI’s Agent Tool May Be the First AI Gizmo to Improve Your Workplace
Many of us have by now chatted to one of the current generation of smart AI chatbots, like OpenAI’s market-leading ChatGPT, either for fun or for genuine help at work. Office uses include assistance with a tricky coding task, or getting the wording just right on that all important PowerPoint briefing that the CEO wants.
The notable thing about all these interactions is that they’re one way: the AI waits for users to query it before responding. Tech luminaries insist that next-gen “agentic” AIs are different and can actually act with a degree of autonomy on their user’s behalf. Now rumors say that OpenAI’s agent tool, dubbed Operator, may be ready for imminent release. It could be a game changer.
The news comes from Tibor Blaho, a software engineer who news site TechCrunch says has a “reputation for accurately leaking upcoming AI products.” Blaho says he’s found evidence of Operator inside the desktop computer version of OpenAI’s ChatGPT app, as well as hidden, not-yet-public information on OpenAI’s website, including data comparing Operator’s performance to other AI systems.
AI agents are snippets of AI-powered code that can be given the ability to “act” in digital environments. This can mean giving an agent control of a user’s computer, for example, letting it fill in information on a webform or even write code. According to OpenAI’s CEO Sam Altman, agents are the next big thing in AI, and they could totally change the way many office workers spend their day.
Different AI companies have already tried releasing agent-based tools, with Google’s system, for example, being designed to let retailers “operate more efficiently and create more personalized shopping experiences to meet the demands of the AI era,” and Salesforce’s “Agentforce” tool able to act like a sales rep. OpenAI’s entry to the agent marketplace could be far more transformational.
That’s because if an agent can fill in webforms, it could be trusted with some necessary but highly mundane office tasks that eat into workers’ daily hours and potentially impact their ability to make their employers more money. For example, remember when your company fired Steve from accounts—the really useful guy who handled your business travel requests—in the name of efficiency? Yup, it meant you and all the other staff had to spend hours wrestling with confusing forms instead of actually working. An AI agent might be able to do most, if not all, of that form-wrangling for you.
The one question hovering over OpenAI’s plans is how well Operator will actually work, which will in turn affect how much time it may be able to save the average office cubicle dweller. The performance numbers Blaho unearthed on OpenAI’s website suggest Operator isn’t totally reliable yet, depending on the task it’s been asked to do. When tasked with signing up to a cloud services provider and launching a virtual machine (a web-based portal to a cloud-based computer system), Operator succeeded only 60 percent of the time, the data say. When asked to create a Bitcoin wallet, it succeeded just 10 percent of the time.
These are preliminary numbers, and they may change when OpenAI actually does release Operator—which TechCrunch says could happen this month. But they’re an important reminder that, as with other generative AI systems your office may be trying out, AI just can’t be fully trusted right now. Before you make decisive choices based on the AI’s advice, or use any other form of AI output, it’s worth running a fact-checking process to make sure the information is genuine and not “hallucinated.” This advice may be doubly relevant when it comes to letting AI agents actually interact with your company’s computers.
BY KIT EATON @KITEATON
Friday, January 24, 2025
OpenAI CEO Sam Altman Says This Will Be the No.1 Most Valuable Skill in the Age of AI
Ask some of the top minds in the field what the future of artificial intelligence will look like, and you’ll get wildly different answers.
Some talk about super intelligence, AI personal assistants for all, and a world free of want. Others warn of the robot apocalypse. A few even argue that the potential of current AI models is overblown. But what just about everyone can agree on is that sometime quite soon, AI will fundamentally change how we live and work.
How should we entrepreneurs best prepare ourselves (and our kids)? That’s another question experts haven’t been shy about taking a stab at. Many suggest we hone the fundamentally human skills that machines still struggle to replicate—things like adaptability, empathy, and interacting with the physical world.
But when asked for his opinion on a recent episode of Adam Grant’s Re:Thinking podcast, Sam Altman—CEO of OpenAI, the company behind ChatGPT—mentioned a different skill as the most important one to cultivate if you want to thrive in an AI-filled world.
Sam Altman: My kid will never be smarter than AI.
Unsurprisingly for a guy selling AI, Altman agrees with those who see a whole lot of transformative AI in our collective future.
“Eventually, I think the whole economy transforms,” he predicts. But don’t worry too much that a robot will steal everyone’s jobs. “We always find new jobs, even though every time we stare at a new technology, we assume they’re all gonna go away,” he continues.
How to best prepare for this economic transformation is a conversation he has a personal stake in. Altman’s professional and financial future is clearly assured. But he and his husband are expecting a child soon. What skills does he think his future child needs to focus on to thrive in this AI-filled future?
Not intelligence. “My kid is never gonna grow up being smarter than AI,” he tells Grant.
“There will be a kind of ability we still really value, but it will not be raw, intellectual horsepower to the same degree,” Altman believes. So if sheer IQ isn’t the key to future success, what is? “Figuring out what questions to ask will be more important than figuring out the answer,” he says.
And he doesn’t just mean asking AI better questions. “The prompting tricks that a lot of people were using in 2023 are no longer relevant, and some of them are never gonna be necessary again,” Altman claims later in the episode.
Connectors beat collectors?
So what does Altman mean exactly when he says asking questions will be more important than answering them once AI becomes smarter than humans? The answer isn’t 100 percent clear, though Grant takes a stab at summarizing what Altman might be trying to say:
“We used to put a premium on how much knowledge you had collected in your brain, and if you were a fact collector, that made you smart and respected. And now I think it’s much more valuable to be a connector of dots than a collector of facts that if you can synthesize and recognize patterns, you have an edge.”
Back when Altman was in school, the OpenAI CEO responds, teachers tried to ban what they then called “the Google.” The thinking was, if you could just look up facts, then why bother memorizing them? Wouldn’t we all end up intellectually poorer in the long run?
Clearly, the teachers lost this battle. Thanks to the internet, we just learned “how to do more difficult, more impactful, more interesting things,” Altman claims.
He concludes: “I expect AI to be like that too.”
A few questions and a takeaway
Now, looking around at the current moment in global affairs, I think it’s fair to ask whether those ‘90s teachers might have had a point about the internet’s potential effect on our collective intellect. I personally am not sure that facts are in greater rather than lesser supply today than back when I first encountered “the Google.”
Nor am I sure that the tenor of the discussion or the problems we’re solving (or usually not solving) today are on some higher plane of human achievement. A few minutes on Twitter/X can really make you wonder. Though to be fair to Altman, AI is already powering incredible scientific, if not social, breakthroughs.
You can also find defenders of rote memorization who point out that it’s hard to connect dots you don’t recall exist or that you can conceive of only hazily without time-consuming googling.
But putting these objections aside for a moment, Altman is surely right that humans will never beat machines at recalling facts. What research (like this fun study that pitted AI against 4-year-olds) suggests we still excel at is looking at those facts in an unconventional light or pairing them with other unexpected facts, aka asking questions or connecting dots.
The future is creative.
Another word for this very human ability? Creativity. People ask creative questions about what facts mean and how they might fit together in a way that AI (so far) does not.
Which suggests that if Sam Altman wants his future child to thrive in a world of AI — or if any entrepreneur out there is hoping to prepare themselves or their offspring for the world of the future — focusing on exercising your creative muscles is probably one smart way to go.
EXPERT OPINION BY JESSICA STILLMAN @ENTRYLEVELREBEL
Wednesday, January 22, 2025
OpenAI Details How It Would Like AI to Be Regulated
OpenAI, the Sam Altman-led company that ushered in the AI era with the late 2022 release of ChatGPT, has laid out its idealized vision for how the United States government, and the incoming Trump administration, can grow and regulate the country’s burgeoning artificial intelligence industry.
In a document titled “AI In America: OpenAI’s Economic Blueprint,” the company calls for infrastructure investments, collaboration between private AI companies and government agencies, and a light touch of federal regulation, rather than letting each state decide its own rules.
In an introduction, OpenAI vice president of global affairs Chris Lehane compared the current state of the AI industry with the early days of the automotive industry. He pointed out that the United Kingdom’s auto industry was stunted by overregulation, while the United States became the car capital of the world by “merging private-sector vision and innovation with public-sector enlightenment.”
Lehane wrote that the U.S. has another chance to be a leader in a potentially massive market, but only if the government and industry can work together. “In the same way the federal government helped clear the way for the nascent automobile industry to grow, including by preempting a state-by-state tangle of roads and rules,” wrote OpenAI, “it should clear the way for the AI industry’s development of frontier models.”
The context behind this document is that AI companies are increasingly worried that, without any federal regulation, individual states will develop their own rules. In 2024, California governor Gavin Newsom vetoed a bill that would’ve instituted safety requirements for AI models, but did sign a bill mandating the disclosure of training data for AI models.
One solution proposed by OpenAI is to develop pathways for companies that develop large language models to have their models evaluated by a government agency. “In return,” the proposal says, “these companies would receive preemption from state-by-state regulations on the types of risks that the same national security agencies would handle.” (Notably, OpenAI does not propose any sort of government evaluation that AI models would need to undergo in order to be publicly deployed.)
But state governments still have a big role to play in OpenAI’s grand vision. OpenAI proposed that state and local governments create “AI economic zones” in order to speed up the permitting process for building both data centers and power-generating infrastructure like wind farms, solar arrays, and nuclear reactors. The company estimates that there’s “$175 billion in global funds waiting to be invested in AI infrastructure,” and if the United States doesn’t take those funds and build infrastructure here, the Chinese Communist Party will happily step in.
OpenAI is embarking on a nationwide “Innovating for America” initiative to sell its vision at the federal, state, and local level, starting with a Washington, D.C. event on January 30, in which CEO Altman will preview new OpenAI technology and discuss its ability to drive growth.
It still remains to be seen how Trump’s relationship with OpenAI rival Elon Musk will impact the administration’s reaction to these proposals, but one thing is certain: Whatever the Trump administration decides to do with AI regulation will have major repercussions for businesses that build and use AI models.
BY BEN SHERRY @BENLUCASSHERRY
Monday, January 20, 2025
ChatGPT Gets Into the Robot Business, Which Could Change Your Office Routines
It looks like artificial intelligence will soon leave computer screens and get into the physical realm. The “ChatGPT moment for robotics is coming,” said Nvidia co-founder and CEO Jensen Huang. It seems either Huang knew something we didn’t, or he was incredibly prescient because there’s fresh information that the world-leading AI brand OpenAI is very serious about getting into robotics, and its hardware leader Caitlin Kalinowski has even posted job descriptions on X.
In a Q&A session after giving a keynote address at the 2025 Consumer Electronics Show in Las Vegas last week, Huang stirred up a lot of excitement about the future of robotics with dramatic ideas about Nvidia’s future in “physical AI”—which means real-world, tangible AI hardware. A.k.a. robots. Huang even went as far as saying companies should concentrate on developing humanoid robots because they can tackle difficult terrain that would stymie a wheeled machine.
Kalinowski’s job post explained how she was really excited about posting for OpenAI’s first “robotics hardware roles,” including positions for “two very senior tech lead engineering roles” and a technical program manager. The engineers will help the company “design the sensor suite for our robots,” Kalinowski explained, and one will need experience “designing gears, actuators, motors and linkages for robots.” The program manager role will be a “fun, scrappy role to start,” she noted, and it will include work on “standing up our training lab, and keeping us running smoothly as we cycle through our product design phases.”
News site TechCrunch dug into the details of the job postings and found information showing that OpenAI is planning “general purpose” and “adaptive” robots, powered by special AI models the company develops, and that one listing points to plans for developing and producing hardware at “high volume (1M+).”
This news makes it abundantly clear that OpenAI will move speedily into robotics alongside developing its code-based AI products, and try to start building robot hardware sooner rather than later. Kalinowski’s words lend support to the idea OpenAI may be following the startup-style “move fast and break things” mentality that has served other disruptive hardware companies, like SpaceX, so well. And the “general purpose” description tallies nicely with Huang’s call for developing humanoid robots—systems that can maneuver and help out in existing factory or even office workspaces typically designed around the needs of the human body.
Why should we care about this niche bit of news? After all, it’s just a job listing for a handful of engineering posts in one company.
The fact is that OpenAI could already be one of the best-placed companies in the world to develop AI-powered hardware, like a humanoid robot. The AI leader has access to huge computing power, which will be needed to train the robots to move, react to commands, and so on. Thanks to the nature of generative AI system training, it also has access to gargantuan amounts of training data—and it’s easy to imagine plenty of this information could be useful for teaching a robot to understand spoken commands and detect objects automatically using machine vision.
OpenAI’s CEO Sam Altman has also been pushing the notion of AI “agents” as the next big development for AI technology—these are AI systems that can operate autonomously and even make decisions and perform digital actions like filling in forms on websites. This technology will likely translate quite directly into giving real world robots a degree of autonomy—a vital skill if they’re going to work alongside people, who can make surprising decisions to move, speak or perform an action at a moment’s notice.
Traditional robots have long been a feature of industrial engineering, particularly for manufacturing items like cars. But these machines tend to be static, perform only one or two particular tasks, and require very precise positioning of tools and other equipment. Until now, robots have often lacked access to the kind of real-world decision making made possible by the AI revolution. OpenAI also joins the ranks of other companies developing cutting-edge AI-powered robots: Tesla's Optimus is one well-known example, but other companies, like Figure, are also making progress.
So will your next coworker be an android? Tech luminary Peter Diamandis certainly thinks so. Last year he predicted that "millions, then billions of humanoid robots" are coming. OpenAI's ChatGPT technology, along with other AIs, is already helping transform office work. Its entrance into the robotics market is certain to accelerate that process.
BY KIT EATON @KITEATON
Friday, January 17, 2025
According to a new report, many businesses expect to increase their budgets for generative AI in 2025.
If there was one lesson that was extraordinarily clear at this year’s CES, it’s that generative AI is poised to be a massive force for businesses in the coming months and years. You couldn’t walk 10 steps through the Las Vegas Convention Center halls without encountering a new product featuring or promoting artificial intelligence in some form or fashion.
AI spending doesn’t look to be slowing down anytime soon, either. Businesses spent $13.8 billion on genAI in 2024. That’s five times the $2.3 billion spent in 2023, according to data from Menlo Ventures. But determining what to spend on generative AI, whether it’s as a tool to help employees or a feature to include in your product or service, can be a challenge.
Unfortunately, there’s no simple answer—and even some experts say they’re stumped, since AI is still such a new field. KPMG’s latest AI Quarterly Pulse Survey shows that 68 percent of large companies plan to invest between $50 million and $250 million over the next year. And a growing number of leaders are prepared to spend. One year ago, just 45 percent of companies planned investments in that price range.
Among small businesses, the number is, of course, much lower. A report from ServiceDirect found over half of small businesses plan to spend more than $10,000 per year on AI tools.
The amount they plan to spend varies by the size of the company. Some 58 percent of businesses with fewer than 10 employees plan to increase their AI budgets by more than $5,000 over the next 12 to 24 months, while 67 percent of businesses with 10 to 50 employees plan to increase their budgets by that amount or more in the same time frame.
Among companies with more than 50 employees, 77 percent expect to increase their budgets by more than $5,000 on AI solutions in the next 12 to 24 months—on top of their existing heavy investments in the technology.
ROI and pricing
The big question, of course, is what sort of return will businesses see on their investment? The frenzy surrounding the technology has prompted some companies to adopt AI even when the benefits are questionable at best. (At the aforementioned CES, AI was being incorporated into everything from air fryers to plants.)
Only one-third of leaders surveyed by KPMG say they expect to be able to measure the return on their investment in the next six months, with none believing they have yet reached that stage.
“Leaders are putting real dollars behind agents, but with mounting pressure to demonstrate ROI, getting the value story right is critical,” said Steve Chase, vice chair of AI and digital innovation at KPMG, in a statement. “The dynamic nature of AI demands new ways to measure value—beyond the limits of a conventional business case. As leaders work to define the right metrics, those measures must be tightly aligned with the business strategy and should account for the cost of not investing.”
Part of the problem is the cost of generative AI in the first place. James D. Wilton, a former associate partner at McKinsey & Company and founder of Monevate, a pricing and monetization consulting firm, makes the argument that the pricing models AI firms currently use are hurting the industry’s adoption.
There are two types of pricing models used most frequently these days among AI companies: license fees and per-query charges. Neither is ideal, says Wilton. The subscription model assumes a one-size-fits-all approach, but that license fee is often so high that it is unaffordable for smaller users.
Charging per query might make more sense, but users don't get value every time they ask a genAI system a question. It often takes several iterations before the technology gives you the answer you're looking for.
“The challenge is it’s not very value aligned,” says Wilton. “It’s aligned to the costs the AI generators will incur, but that’s not necessarily where the value is for the user.”
One alternative, he says, is outcome-based pricing models, which charge businesses per satisfactory resolution. (Zendesk offers something like this currently, he notes.)
“The more directly you can tie your pricing to the way the product creates value, the lower the ROI you need to give the customer, because the customer will do less work in order to realize the value,” he says.
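Wilton's value-alignment argument can be made concrete with a back-of-the-envelope comparison. All prices and quantities below are hypothetical, chosen only to illustrate the mechanics, not any vendor's real rates:

```python
# Hypothetical comparison of per-query vs. outcome-based pricing for an
# AI support tool. All numbers are illustrative.
QUERIES_PER_RESOLUTION = 4    # assume several iterations per satisfactory answer
PER_QUERY_PRICE = 0.50        # dollars charged for every query, useful or not
PER_RESOLUTION_PRICE = 1.50   # dollars charged only on a satisfactory resolution

def per_query_cost(resolutions: int) -> float:
    # Customer pays for every attempt, including the failed iterations
    return resolutions * QUERIES_PER_RESOLUTION * PER_QUERY_PRICE

def outcome_cost(resolutions: int) -> float:
    # Customer pays only when value is actually delivered
    return resolutions * PER_RESOLUTION_PRICE

print(per_query_cost(1000))  # 2000.0
print(outcome_cost(1000))    # 1500.0
```

Under per-query pricing the bill tracks the vendor's compute costs, failed iterations included; under outcome pricing it tracks resolutions, which is the value the customer is actually buying. That is the alignment Wilton is pointing at.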
BY CHRIS MORRIS @MORRISATLARGE
Wednesday, January 15, 2025
The Most Exciting Tech for Your Business at CES 2025
The annual Consumer Electronics Show is drawing to a close in Las Vegas. Known as the gadget-head’s Super Bowl, CES is both the launchpad for the next big things in tech, as well as a home for plenty of less serious inventions. Flavor-enhancing spoon, anyone?
As usual, this year's CES offered an array of weird and wonderful gadgets and announcements from some of the top leaders in tech. Here is a selection of those that could actually help your business—or just come in handy as you run it.
AI-powered laptops
Sure, you could spring for a $3,000 supercomputer from the AI behemoth itself, Nvidia, but PC-maker Lenovo also announced a suite of AI-powered commercial computers, some of which come at a kinder price point.
Lenovo’s AI-powered commercial laptops, the ThinkPad X9 14 Edition and X9 15 Aura Edition, range in cost from $1,399 to $1,549, whereas its ThinkCentre neo 50q QC desktop starts at just $849. The new ThinkPads come with an AI assistant built in. Lenovo AI Now is based on Meta’s Llama 3.0 LLM, but stores user data locally. It helps users search and summarize documents and retrieve information across various devices, among other things.
The ThinkCentre neo 50q QC is marketed specifically for small and medium-sized businesses, due to its compact form and AI-powered performance. It features Qualcomm’s Snapdragon X chips or Snapdragon X Plus 8-core processors.
A super fast phone charger
Nothing tosses a wrench into a workday like a dead battery. Enter Swippitt, a three-part "Instant Power System" that is meant to fully restore a phone's charge in only two seconds. The system consists of a toaster-sized and toaster-shaped charging hub that contains five charged batteries, a phone case with a smart battery, as well as an app. After inserting a dying phone into the hub, it swaps a used battery out of the case for a charged one. (You'll need to use the system's Link case for everything to work.) The Swippitt offers compatibility for iPhones 14 and above for now, but is expected to launch for Android later in 2025. Here's more on how it works. Add one to your office and no one will complain of a low phone battery again.
A temperature-regulating chair
If you’ve ever dreamed of moving your car’s seat warmer into your office chair—this one’s for you. Razer’s Project Arielle contains both a self-regulating heater that can reach 86 degrees Fahrenheit, and bladeless fan technology that pushes air through the chair’s mesh back, as TechCrunch reported. A panel built into the chair controls the features. Although technically a gaming chair, Project Arielle seems like it could have great applications for long days spent in overly chilly or warm offices—if it ever goes into production. For now, it is still a concept, but think about how quickly it could help you be more productive (or just fall asleep at your desk).
An AI travel agent
Five years after teasing the product during CES 2020, Delta announced its AI-powered Delta Concierge. For now, the concierge service will offer suggestions based on a person’s travel plans with natural language text and voice input, instead of conventional menu selection. In the future, Delta aspires for the assistant to remove some uncertainty from flying by helping to rebook flights in the case of delays or cancellations, navigating unfamiliar airports and even managing transportation after leaving the airport—potentially with its new travel partner, Uber. It will be found in the Fly Delta app.
Smart glasses to get you off your phone
Smart glasses were all the rage at CES. They ran the gamut from simple designs that play with lens color or contain Bluetooth speakers to augmented-reality glasses tricked out with screens and battery packs. Somewhere in the middle are the types of glasses The Verge referred to as "all day companions," which mimic the size and approximate weight of regular glasses but have built-in displays, AI assistants and—in some cases—camera capabilities. These glasses could amp up productivity with notifications available at a glance—or at the very least endow you with the swagger of an early adopter.
A robot bartender for those in-office happy hours
Let’s be real. A robot bartender may not be essential, and it may not improve efficiency, but it could very well take the edge off of the day. Richtech Robotics announced an AI-powered, robot bartender called ADAM. Intended for the hospitality industry, the robot can make more than 50 kinds of drinks from cocktails to coffee drinks and can even chat a little. Not only that, but ADAM is already at work inside a Georgia Walmart, as well as in the Texas Rangers baseball stadium.
BY CHLOE AIELLO, REPORTER @CHLOBO_ILO
Monday, January 13, 2025
As You Bet on AI, Make Sure It’s Not Your Strategy That’s Artificial
There was an axiom we used in venture capital that said that the industry had a memory of only 10 years. Whatever was learned 10 or more years past, in other words—from what to avoid, to what to prioritize—was said to be forgotten, only to be painfully re-learned repeatedly at decade intervals. The origin of the phrase was directly tied to the lesson that the predicted impact of any new innovation is always grossly overstated. You don’t have to have ever worked in venture investing to know this to be true. Just flip back in your memory to see.
An eon ago, the fax machine was claimed to foretell the end of physical mail. It wasn’t, nor years later was email, though it too was unveiled with similar claims. Desktop computing was boasted to be the demise of large data management and mainframe computing. To be sure, desktops, then laptops, then smartphones all dramatically changed how we do what we do; but the cloud and server farms show the prediction to have been over-imagined.
Most artificial intelligence innovations are simply tools.
These tools’ actual impact occurs only in the context of the larger purposes, plans, and strategies they serve. Yet too often we speak of the innovation as strategy itself, as if "using more artificial intelligence in 2025 and beyond" were enough to constitute a strategy. It's not. That's also why this latest iteration of the repeated lesson isn't just about overstatement. It's about confusing tools and tactics with strategy.
In the last year, my research has put me in touch with many of the organizations considered leaders in AI, including in determining its uses in and impact on business. Even those developing the tools feel a sense of marvel at what AI can do. In turn, they’ve spent much of the past few years striving to put AI to work as quickly as possible, in part for its speculated promises, and no doubt too for the assumed rewards AI might bring.
For the AI developers paying close attention, two things give them pause. The first has caught most of them by surprise: AI is proving to have power beyond what even its designers know, and to a degree, none of them can predict nor control fully. In response to that particular awakening, in 2023, a group of leading firms suggested that there should be a collective pause taken to think about the deeper implications of what AI might bring, consider the possible ripple effects that might result, and jointly explore how shared guidelines might be followed.
Some, surprisingly quite a few, were willing to sign on to a formal agreement. Few were prepared to act. The technology’s promises, even unverified, were just too great not to speed ahead, logic be damned.
The cost of confusing AI tools with strategy.
The second thing giving leaders in AI pause was more disturbing in its implications. It was the stark reality that in the rush to embrace AI, an increasing number of organizations suddenly found themselves struggling to answer seemingly simple questions like: What business are we in? So busy were they chasing the tool that they found themselves suddenly having a hard time remembering what business goals, mission, even strategy the tool was there to support. Almost without their noticing, their priorities had become unintentionally inverted, with the tool no longer the supporting mechanism, but instead the dominant focus. It began firm by firm, but the error of putting AI ahead of the strategies it should be supporting has quickly developed into a disturbing, even dangerous trend. As broader evidence, a recent report from consulting firm McKinsey & Company called this out, and warned of the costs of confusing tools and tactics with strategy.
“It’s time for a reset,” McKinsey declared. “The initial enthusiasm and flurry of activity (around AI) is giving way to second thoughts and recalibrations as companies realize that capturing AI’s enormous potential value is harder than expected.” More than just a passing observation, McKinsey was blunt. “With 2024 shaping up to be the year for AI to prove its value,” they wrote, “companies should keep in mind the hard lessons learned … that competitive advantage comes from building organizational and technological capabilities.” It isn’t that AI has no role, it made clear, referring specifically to generative AI. But incorporating it into any organization’s strategy cannot take form as a simple add-on. To leverage AI effectively means “rewiring the business.”
Any experienced leader with a memory stretching beyond any one innovation cycle understands that strategy is an ongoing recalculation—a reconsideration, reconfirmation, and if need be a reorganization—of how all the pieces and parts that give an organization advantage fit together. Including its tools. Thoughtlessly adding anything new or blindly jumping on the latest bandwagon in and of itself yields no lasting advantage. As magical as AI seems right now, it must be part of this larger recalculation. It is not a strategy in and of itself. Without a doubt, AI will bring change (it already has) and advantages, though likely different ones than those predicted. It will not, however, displace the fundamental truth that ongoing success requires far greater strategic thought and effort.
EXPERT OPINION BY LARRY ROBERTSON, FOUNDER, LIGHTHOUSE CONSULTING @LRSPEAKS
Saturday, January 11, 2025
5 AI Tools to Save You Time
Work smarter, not harder. This adage has always rung true for me. In the age of AI, it’s no longer just a good idea. It’s a necessity.
As the CEO of DOXA Talent, a conscious outsourcing company that helps businesses build high-performing, borderless teams, I constantly think about the future of work. Instead of resisting change, I embrace it at every turn.
When AI tools started popping up, it was a no-brainer for me to incorporate them into my daily routine. With my never-ending to-do list and the constant demands that come with running a business, it’s been a complete game-changer.
Leaders who aren’t leveraging AI are missing out on valuable opportunities to save time, optimize brainpower, and elevate themselves and their businesses. Here are some tools that have enabled me to do just that.
Perplexity AI
Whether I want to know about the latest tools for streamlining project management for borderless teams or ways to optimize the onboarding process for new clients, Perplexity AI is my go-to for answering questions.
Like ChatGPT, this research and conversational search engine answers queries using natural language predictive text. What makes it better than GPT-4 is the accurate, up-to-date information and citations it provides. I'm also a big fan of the ability to ask questions via voice or text, which adds even more flexibility to my busy days.
Fathom
There’s a good reason why Fathom is a top-rated AI notetaker. This tool has completely transformed my meetings for the better.
Gone are the days when note-taking during meetings was a necessity. Fathom records, transcribes, highlights, and summarizes key points from Zoom, Google Meet, or Microsoft Teams meetings. It even composes action items afterward, so you don’t have to.
This AI tool has made it possible for me to completely focus on the conversation at hand, allowing me to think more strategically and enhance productivity. Plus, if I ever need to revisit key sections of a call, they’re easy to find and just a click away.
After integrating Fathom into my day-to-day life, the thought of any leader going without it is, well, unfathomable.
Loom AI
If you haven’t jumped on the Loom AI train yet, you’re missing out on one of the easiest and most effective ways to communicate with your team.
Loom in and of itself is a handy tool for sharing video messages that have a more personal touch. Loom AI has made it even easier to effectively communicate with team members with features like AI-generated titles, summaries, and custom messages that can be used when you share Looms. It also auto-assigns action items and removes filler words, which are huge timesavers for me.
With studies showing that video messaging improves information recall by up to 83 percent as compared with text-only messaging, Loom AI is an indispensable tool for clear, effective communication.
Speechify
Read three times faster, remember two times more, and reduce your stress. That's what Speechify promises, and as a CEO constantly on the go, I've found it invaluable.
This text-to-speech AI allows me to listen to any website, document, or book of my choosing. It’s available via mobile, Chrome extension, and desktop app, making it incredibly convenient.
Simply put, if you’re a busy professional, Speechify is a must.
PhantomBuster
Do you find lead generation to be time-consuming and costly? Me, too—but in the age of AI, it doesn’t have to be.
Enter PhantomBuster, the AI tool that’s taking the lead in a new era of lead generation.
Thanks to PhantomBuster, my team and I have been able to cast a wide net on Facebook, Instagram, and LinkedIn, build new relationships on those platforms, and nurture existing ones. It’s never been easier to gather information about potential leads and leverage automation to network with them.
With PhantomBuster, you get more leads with less effort (just like it says on its site). I can’t think of a single CEO who wouldn’t want to scale their business with that model.
The reality is, AI tools aren’t going anywhere. With an expected annual growth rate of 37 percent from 2023 to 2030, they will only continue to transform the way we work and run our businesses. Leaders who embrace this technology now will have a competitive edge over those who resist it.
As for me, I’m going to only continue to add to this list. AI tools don’t just help us keep up. They empower us to get ahead. I plan to take full advantage of that.
EXPERT OPINION BY ENTREPRENEURS' ORGANIZATION @ENTREPRENEURORG
Friday, January 10, 2025
Delta Just Announced Its Plan to Use AI to Solve the Worst Thing About Traveling
On Tuesday at CES, Delta Air Lines kicked off its 100th birthday year with a keynote at Sphere. I guess if you have some stuff you want to announce, packing a few thousand people into a place like Sphere is a good way to do that. Add in some special guests like Viola Davis, Tom Brady, a motorcycle Uber driver, and lots of digital fireworks, and you have a party.
The highlights of the party were a series of announcements the company rolled out, including a new partnership with Uber—replacing Lyft as the airline’s official rideshare partner—as well as a partnership with YouTube that will allow SkyMiles members to watch YouTube Premium for free when signed in to Delta’s in-flight entertainment system. Delta also said it planned to complete its rollout of free Wi-Fi across its global fleet by the end of this year.
One of the more interesting announcements was what the company called Delta Concierge, an AI-powered personal assistant within the Fly Delta app.
“Delta Concierge will serve as a thread across your experience,” said Ed Bastian, Delta’s CEO. The idea is that it will “serve as an AI-powered personal assistant that combines the context of who our customers are and how they like to travel, with the deep knowledge and insights we’ve built as the world’s most reliable airline.”
Initially, Delta Concierge will offer travelers suggestions based on their preferences and their travel plans. It will also allow for natural language text and voice input, making it easier to interact with than finding your way through a selection of menus.
“Delta Concierge will offer features like natural language text and voice input and travel updates such as passport expiration alerts,” the company said. “Future updates will include options such as flight changes.”
That last part is where things really get interesting. By far, the worst thing about travel is uncertainty. Air travel, especially, is full of uncertainty. There are literally millions of moving parts that all have to keep moving in order for you to get where you’re going. Sometimes, one of those moving parts breaks. Sometimes, the weather doesn’t cooperate, or crew members get sick. Sometimes, a software update grounds an entire airline for a few days. When that happens, the ability to simply ask the app to “Find me alternative flights to my destination,” and have it understand all of what that means would be a game-changer.
But, even if everything goes the way it’s supposed to, for a lot of people, there is still a lot of uncertainty—especially if you don’t fly frequently. Having the app proactively let you know how to get to your gate, which security line to use, or the fastest way to get from the airport to your destination is a big deal. I don’t know how much of it is only possible because of AI, but if it works, I also don’t care.
In the example shared in the keynote, Delta Concierge lets a traveler know that traffic is especially bad and suggests they take a Joby air taxi. Of course, you can’t actually do that yet. To be fair, I’m sure Delta could send you that notification when Delta Concierge rolls out, but Joby hasn’t received final regulatory approval. And when it does, you can bet that Delta will get a cut. Joby and Delta have a partnership to bring the air taxi to New York City and Los Angeles.
Delta isn’t commenting on what LLM it is using, and—as for privacy—it says that customers are not automatically opted into the Delta Concierge experience. Additionally, it does say that “customer data will be safeguarded and protected according to our Privacy Policy, industry standards, and best practice.”
Delta says it will begin launching in a “phased approach” this year, but it is yet to be seen what all of this really looks like when it arrives on your devices. A lot of companies have made big promises about how AI is going to change all sorts of products and experiences, and the vast majority of them are so early-stage that it’s not clear if they will ever materialize.
On the other hand, Delta has a pretty good recent track record of keeping these types of promises. Two years ago, the company announced it was bringing fast, free Wi-Fi to all of its planes. During the keynote, Bastian said the airline expects to complete that rollout by the end of this year. In fact, he stated publicly that "many of the features we've shown today will be on our planes this year."
I expect we’ll see Delta rolling out its Concierge this year, though some of the more interesting features are probably further down the road. Delta painted a pretty compelling future of how the airline will use AI to personalize the travel experience. It’s making a pretty big promise, which is risky. On the other hand, if it can solve the worst thing about travel, it seems like a pretty intelligent bet.
EXPERT OPINION BY JASON ATEN, TECH COLUMNIST @JASONATEN
Wednesday, January 8, 2025
Sam Altman Says AI Agents Will Transform the Workforce in 2025
Sam Altman says the two years since the launch of ChatGPT, a period that has catapulted him to fame as the public face of the artificial intelligence industry, have been the most "unpleasant years of my life so far." In a new blog post, Altman reflected on the path he's walked since ChatGPT's November 2022 launch, including what he learned from his very public firing in 2023, and made a big prediction about AI's impact in 2025.
Here are the biggest takeaways from Altman's lengthy January 2025 blog post, titled "Reflections."
Altman’s firing still haunts him
Altman was publicly fired by OpenAI’s board in November 2023, just before ChatGPT’s first birthday. Five days later, he was reinstated as CEO. In the blog post, Altman reveals some personal details regarding the firing, which happened over a video call while he was in Las Vegas.
Looking back, he says the whole event was a “big failure of governance by well-meaning people, myself included,” but one that he believes has made him a more thoughtful leader. Another lesson from the firing? The importance of having a board with diverse viewpoints and experience handling unexpected challenges.
In particular, Altman singled out two figures who he said "went so far above and beyond the call of duty" to rescue him from his brief banishment: Airbnb founder Brian Chesky and venture capitalist Ron Conway. Without going into detail, Altman recalled "being in the foxhole" with Chesky and Conway, who "used their vast networks for everything needed and were able to navigate many complex situations. And I'm sure they did a lot of things I don't know about."
He believes in scaling laws
Altman has long been a believer in scaling laws, a mathematical assumption that the more data a neural network is trained on, the smarter it becomes. In his blog post, he theorized that businesses also have a scaling law: As growth increases, so does turnover.
Acknowledging that OpenAI’s executive team has seen a massive amount of turnover since ChatGPT’s launch, Altman wrote that “startups usually see a lot of turnover at each new major level of scale, and at OpenAI numbers go up by orders of magnitude every few months.” According to Altman, the fracturing of OpenAI’s C-suite, including the departures of chief technology officer Mira Murati and chief scientist Ilya Sutskever, are a natural result of OpenAI’s ascendancy.
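The neural-network scaling law Altman refers to is commonly written as a power law: predicted loss falls smoothly as parameters and training data grow. A minimal sketch, with coefficients loosely based on published "Chinchilla" estimates and used for illustration only (these are not OpenAI's internal numbers):

```python
# Chinchilla-style scaling law: expected loss falls as a power law in
# model parameters (N) and training tokens (D), toward an irreducible
# floor. Coefficients are illustrative approximations of published fits.
def scaling_loss(n_params: float, n_tokens: float) -> float:
    A, B = 406.4, 410.7        # fitted scale constants
    alpha, beta = 0.34, 0.28   # power-law exponents
    irreducible = 1.69         # loss floor no amount of scale removes
    return A / n_params**alpha + B / n_tokens**beta + irreducible

small = scaling_loss(1e9, 1e11)   # 1B params, 100B tokens
large = scaling_loss(1e10, 1e12)  # 10B params, 1T tokens
assert large < small              # more scale, lower (better) loss
```

The point Altman leans on is exactly what the formula encodes: each term shrinks predictably as you add parameters or data, which is why labs keep scaling up.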
OpenAI’s structure, and its future
OpenAI’s leadership reportedly spent much of 2024 determining how to transform its current structure as an entity with a capped for-profit arm and a nonprofit arm into a more conventional moneymaking entity. Altman wrote in his post that he had “no idea we would need such a crazy amount of capital” to develop super-advanced artificial intelligence.
To obtain that kind of capital, OpenAI is planning on converting its for-profit arm into a public benefit corporation. In an official statement released in December, OpenAI wrote that “investors want to back us but, at this scale of capital, need conventional equity and less structural bespokeness.”
A key test of this planned new OpenAI structure will be how the company sells enterprises on AI agents, which are designed to take specific actions and automate workflows. Altman wrote in his blog that 2025 could be the year that AI agents are integrated into the workforce and predicted they would “materially change the output of companies.”
Beyond 2025, OpenAI is turning its aim beyond useful tools to “superintelligence,” super-advanced AI models capable of outperforming humans at nearly any task and ushering in a new era of abundance and prosperity.
“We love our current products,” wrote Altman, “but we are here for the glorious future.”
OpenAI’s naming struggles
Altman is effusive about OpenAI’s capabilities in nearly all areas, save for one notable exception: naming stuff. The company has a history of giving its new AI models and products confusing names like GPT-4, GPT-4o, GPT-4o Mini, o1, and o1 Mini. In July 2024, when announcing GPT-4o Mini, Altman responded to a post on X suggesting that OpenAI needed to revamp its naming scheme with “lol yes we do.”
In his blog post, Altman says that originally, ChatGPT was named Chat With GPT-3.5, adding that OpenAI is “much better at research than we are at naming things.”
Altogether, the post is nearly 2,000 words, so if you don't feel like reading the whole item, you're in luck: When asked to summarize the screed in a single sentence, ChatGPT 4o provided the following: "OpenAI's journey over the past nine years, marked by the launch of ChatGPT and transformative progress in AI development, has been a mix of extraordinary innovation, intense challenges, and a vision for creating beneficial AGI, culminating in a reflection on resilience, gratitude, and the promise of a super-intelligent future."
BY BEN SHERRY @BENLUCASSHERRY
Monday, January 6, 2025
6 Ways the Workplace Will Change in 2025
Nearly five years ago now, the pandemic upended the way millions of people worked seemingly overnight. Now, at the tail-end of 2024, the workplace is still undergoing a number of consequential changes – and in 2025, even more shifts are on the menu.
This was a year marked by several watershed moments for work. One of the largest tech companies in the world, Amazon, announced a five-day return-to-office policy. Companies like Lowe’s, Ford, and Walmart rolled back their DEI efforts. And Donald Trump’s win sparked a flurry of questions about the future of immigrant labor, workplace regulations, and more.
With the new year nearly upon us, here’s how experts expect that work and the workplace will change in 2025 – and how business owners can prepare to greet those evolutions with aplomb.
1. More talent searches will start inside.
In 2024, the labor market continued its cooling trend, rappelling down from its heights around the Great Resignation. Now, with job openings generally slowing and quits well below rates of recent years, small businesses are finding it a bit easier to fill their open positions, according to recent data from the National Federation of Independent Business.
And yet, because many companies had engaged in labor hoarding in 2023 — holding tightly to their talent as the job market softened — many leaders were in an interesting pickle in 2024, says Jeanne MacDonald, CEO of recruitment process outsourcing at Korn Ferry, the global consulting firm.
“They hired a ton of people, spent a lot of money, and then early 2024 was, ‘Well, wait a minute, what are we going to do with all this talent?'” MacDonald says.
As a result, 2024 was the year where MacDonald saw “internal mobility or internal recruiting, more so than external, than we’ve ever seen,” she says. Companies were turning toward their current workforce, evaluating what skills already existed internally and discovering how to leverage them in a new way, she says.
That approach is only going to become more popular in 2025, predicts Andrew McCaskill, a career expert at LinkedIn. Even if hiring opportunities do accelerate again, that’s still an expense, he says, and companies are figuring out that their “next best employee may be my current employee, just moved to another team.”
Thus, companies must also consider how to create internal systems or programs that facilitate this kind of mobility, McCaskill says, such as allowing team members to “raise their hand for stretch assignments” or take “tours of duty in other parts of the business.” This way, company leaders are actively upskilling and preparing team members for internal movement.
2. Managers could get a burnout-busting tool.
One story that didn’t change much over the past two years: managers are still being pushed to the brink of burnout, experts say.
In 2023, a Gartner survey found that the average manager had 51 percent more responsibilities than they could “effectively manage”; this year, a different Gartner survey found that three-quarters of HR leaders say their managers are “overwhelmed by the expansion of their responsibilities.”
Since the pandemic, managers have been on the front lines of new hybrid and remote arrangements, responsible for worker retention, and other key workplace changes, as Inc. has previously reported. And in 2024, managers were still asked to do “more with less” and still feeling limited by a lack of autonomy, says Emily Field, partner at McKinsey.
But this year saw some changes that could make a meaningful difference for managers moving forward, Field says – namely, the use of generative AI. Indeed, by incorporating some of these tools into their managers’ workflows in 2025, leaders have an opportunity to “free up capacity for managers,” Field argues.
Now, the onus is on teams to find exactly which tools would be most helpful in alleviating their managers’ specific pressures in 2025 — and Field says an “experimentation mindset” will serve companies well here. “Let’s test and learn,” she says, “and then let’s refine based on what serves us.”
3. Gen Z will move up the ranks.
Generation Z is now firmly entrenched in the workforce. They represent nearly a fifth of the U.S. labor force and, as of the second quarter of 2024, now outnumber Baby Boomers, according to the U.S. Department of Labor. And in 2025, they’re projected to hit another milestone — next year, about one in 10 managers will be Gen Z, according to a recent report from Glassdoor.
This stands to be an interesting transition, considering a few recent surveys suggesting that Gen Z’s entry into the workplace has been bumpy. Sixty percent of companies in one survey said they’d fired Gen Z employees who’d been hired earlier in 2024. In another, 57 percent of U.S. Gen Z workers surveyed said they were uninterested in becoming middle managers, a trend the report deemed “conscious unbossing.”
And yet, Glassdoor lead economist Daniel Zhao believes that differences between Gen Z and other generations have been overstated. In fact, while Zhao believes that Gen Z’s management style may be different, he says this will have less to do with their generation and more with “what constitutes good leadership right now.”
Namely, “there’s much more emphasis in the last five years on emotional intelligence for leaders and managers,” he says, as well as “much more discussion around employee well-being, setting boundaries, providing clarity.” With those factors at play, “Gen Z is being asked to raise the bar on good leadership,” Zhao adds.
To help them do this, providing manager training will be critical, Zhao says. He also suggests finding ways to give these less experienced workers opportunities to flex their managerial muscles, such as overseeing a project: “That might be a way to give folks the opportunity to get their feet wet … before dropping them into the deep end.”
4. DEI programs will be put further in the crosshairs.
In 2023, the Supreme Court’s decision to strike down affirmative action spurred a wave of lawsuits aimed at diversity, equity, and inclusion programs, and this year brought on an “escalation” of those legal attacks, says David Glasgow, the executive director of the Meltzer Center for Diversity, Inclusion, and Belonging, a research center within New York University’s School of Law.
Trump’s election only poured more “fuel on the fire of anti-DEI backlash,” Glasgow says — and in 2025, he expects to see even more attacks, including at the federal government level.
That said, this flurry of legal activity “doesn’t necessarily mean that the lawsuits are going to be successful,” Glasgow says. Indeed, the Meltzer Center is tracking more than 100 cases, and many of them have been settled or dropped.
Overall, though, the state of DEI remains “uncertain” and “complicated,” says Tory Clarke, co-founder and partner at the New York City-based executive search firm Bridge Partners. Companies that weren’t “fully committed” to DEI work are backing out, she says, while others are worried about being attacked next.
In a survey published earlier this year, Bridge Partners found that 66 percent of companies had increased their DEI investments in the past year, an 11-point drop compared to 2023. But that doesn’t mean companies are giving up: In fact, almost three-quarters of those surveyed with a DEI program already in place said they planned to “increase their commitment to DEI within the next two years” – evidence of companies “going underground” with their commitments to wait out the backlash, Clarke told Inc. at the time.
In the meantime, Clarke says she’s seeing companies move DEI efforts under other areas within their organizations, such as human resources, or changing the language related to those programs. Indeed, more than 50 percent of senior executives in a Conference Board survey this year said they’d made changes to DEI terminology.
In 2025, Clarke expects that corporate DEI work will continue, even if the “players may get rearranged” or the work “may go under the radar.” And, ultimately, companies may need to “reintroduce” their DEI efforts to cut through the noise and backlash, says Arthur Woods, chief business officer at Bridge Partners.
“It will likely need to be embedded and democratized a lot more,” he says.
5. The 4-day workweek will still be a distant dream.
This year, there’s been plenty of hubbub about working arrangements. But it’s clear that workers continue to value flexible arrangements – and many companies are still offering flexibility in various forms. One buzzy option hasn’t caught on widely quite yet, though: the four-day workweek.
In March, Senator Bernie Sanders (I-Vt.) propelled conversation about the four-day workweek to the forefront when he introduced legislation aimed at reducing the standard workweek from 40 to 32 hours over four years. This came after the United Auto Workers union’s unsuccessful 2023 bargaining for a 32-hour workweek, which sparked further conversations about the four-day workweek this year, says Dale Whelehan, CEO of the nonprofit 4 Day Week Global.
But overall, Whelehan senses that adoption of four-day workweeks has slowed. In September 2023, 4 Day Week Global conducted a pilot program in Germany, successfully recruiting 41 organizations to participate; now, it’s launching a pilot in France with only 10 companies on board, he says.
“I think local economies and local politics is playing an influential role in whether businesses perceive they can take a risk on something like this at this moment in time,” Whelehan says. Broader feelings of uncertainty, including leading up to the presidential election, also played a role in American companies sticking to the status quo, Whelehan believes.
“There has been a lot of ongoing discussion, but not necessarily action happening at the state level in the U.S.,” he says.
And yet, many U.S. workers are hopeful this could change — even if it takes a while. In a survey from the job site Monster earlier this year, 46 percent of workers surveyed believed that four-day workweeks would catch on over the next 30 years.
Next year in particular, Whelehan says, pro-flexible work sentiments could grow. As larger companies like Amazon enforce return-to-office mandates, he expects that negative consequences of those pushes will emerge — in retention, burnout, or even climate change concerns, he says — bringing flexibility back to the forefront of conversations about the future of work.
“For no rational reason can I see how the older models of work will win out in 2025,” Whelehan says.
6. AI will make more inroads.
This was the year that AI at work got “real,” according to a report from Microsoft and LinkedIn in May. According to the report, in the prior six months the use of generative AI among global knowledge workers had nearly doubled. About three-quarters are now using it, citing benefits like saved time, increased creativity, and more.
Younger workers and leaders are particularly eager to bring these tools into work, according to multiple reports this year. In fact, beyond boosting their efficiency, younger managers (and wannabe managers) believe AI can help them become better leaders, enhancing their “communication to improve problem solving and facilitate better relationships,” according to a report from Google Workspace.
And AI is poised to play a “big role in 2025,” McCaskill says. “I think that we’re gonna see more and more companies that are hiring for…artificial intelligence understanding and solutions.” At LinkedIn, for instance, participation in AI courses on its learning platform has increased “fivefold year-over-year,” he says.
Still, companies will be thinking about what parts of AI to implement, as well as the “change management structure” associated with those changes, McCaskill adds, setting the stage for a “pace of change” in the workplace that’s “only going to get faster and deeper” next year.
In the face of this rapid change, it’s important that companies be careful about how they approach incorporating AI, says Jessica Burkland, an assistant professor of practice in organizational behavior at Babson College. She recommends making sure that managers as well as employees truly understand the technology. If they don’t, their teams could be “utilizing technology in a way that’s disrupting workflows as opposed to augmenting the workflows,” she says.
AI isn’t the only technology making progress in workplaces. Virtual reality, for instance, has made headway in corporate training programs this year, says Jeremy Bailenson, founding director of Stanford University’s Virtual Human Interaction Lab. According to Bailenson, VR has demonstrated strong capabilities to simulate “really intense and special situations that give you a teachable moment,” like an active shooter drill.
As technologies like AI and VR continue to evolve and permeate workplaces next year, how that’s managed will set companies apart, Burkland says: “These technologies are only as good as their implementation.”
BY SARAH LYNCH