Friday, January 31, 2025
DeepSeek Is the Wake-Up Call Our AI Overlords Needed
The days when a news story out of the tech sector leads the entire day’s news cycle don’t happen as often as they used to. It almost makes me long for the chaotic bitcoin billionaire gold rush nonsense. But when a little Chinese AI company called DeepSeek takes a trillion-dollar chunk out of the U.S. stock market—in what can best be described as a nerd drive-by? Well—that story goes above the fold, to use a term my journalist friends love.
Is it time to panic? Should you panic?
Don’t panic.
Unless you’re Sam Altman. Then you may want to check the cash reserves and ask everybody to work a couple of late nights.
Because DeepSeek, the “little guy,” is coming for the lunches of OpenAI, Anthropic, Google, Microsoft, Nvidia… everyone.
And it doesn’t stop there.
No One Expected the Spanish Inquisition?
No seriously, you’re fine. Just don’t look at your retirement account for a couple weeks.
This is a gut check—and one that was a long time coming, because it accompanies any meteoric mainstream adoption of a new technology.
Look, I kid. Sam Altman is never going to read my columns. And I don’t miss the bitcoin FOMO or Tom Brady hawking crypto or NFT bro billionaires. I don’t pine for the days of Yo or Klout or Napster or Flooz.
Yeah, I threw Napster in there. I was right there with you, but stealing songs is stealing songs.
But for real. In a 2025 dominated by Temu and Shein, no one imagined that a Chinese company might just drop in and deliver a just-as-good product for a fraction of the price?
It looks like DeepSeek did just that. And I’m aware that in this same Year of Our Lord 2025, we should definitely wait for triplicate and quadruplicate verification of those claims. But holy cow, it raises the question:
Do you really need all that money and all of that hardware and power to build your super-cool chatbot?
I mean, at this point, it looks like DeepSeek did not need all that—pending quadruplicate and maybe quintuplicate independent verification, which will likely never happen because China.
But it raises a deeper question. The one currently keeping Sam and Dario and Sundar and Satya and especially poor Jensen up at night. Man, Jensen alone lost $20 billion in a day.
Oh, here’s a link to a quick primer on the carnage if you need it. Then come back and I’ll ask the question out loud that we’re all asking internally.
What Are We Doing Here?
Or, more bluntly, “Is this AI shit real or what?”
Or, with context, “Exactly where is this ‘future of AI’ that’s either always right around the corner or stuck just behind the AI wall?”
I’ve been telling you what AI is and isn’t for a couple years now, because I started working with this science back in 2010. I’m no AI oracle (is Oracle into AI?), but I know enough to be able to connect the dots.
AI is just a series of “if-this-then-that” (IFTTT) calculations—but really, really fast. Back in 2010, we could scare up enough processing power to make the output faster and slightly more elegant at scale than humans could imagine. And I mean that literally. That’s where IFTTT starts to look like magic.
The processing capabilities that existed 15 years ago meant we were just scratching the surface. Well, that surface is now wide open and we’re approaching the core.
But the core is still IFTTT. Only it’s so fast now, we can’t even comprehend it.
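To make that framing concrete, here’s a toy Python sketch of hand-written “if-this-then-that” rules; the rules and messages are invented for illustration. The column’s point is that a modern model effectively evaluates billions of comparable learned checks in milliseconds, far more than any person could write or trace by hand.

# Toy illustration of the "if-this-then-that, but really fast" framing above.
# The rules and example messages are hypothetical.
def route_ticket(message: str) -> str:
    text = message.lower()
    if "invoice" in text or "refund" in text:   # a rule a person could write by hand
        return "billing"
    if "password" in text or "log in" in text:
        return "account"
    return "general"

print([route_ticket(m) for m in (
    "Refund for a duplicate invoice",
    "I can't log in to my account",
    "Love the product, thanks!",
)])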
And when you’re selling that into the mainstream, well, you’re gonna get hop-ons. You’re going to get companies that slash their workforces and invest in the promise of a future where machines can increase productivity at a pace that we “can’t imagine.”
I saw a thread on Hacker News where some poor dope was having to justify to his or her leadership why AI hasn’t increased the productivity of their software development by 10x.
Again, in 2025, this might be a troll. But it tracks. And more importantly, it’s laugh-to-keep-from-crying funny.
Don’t Panic
Look, it’s like Hans Gruber said when the cops showed up. This was inevitable, and, as it happens, necessary.
This is going to force everyone involved to take a much needed time-out and reflect on what all that money, all that energy, and all those layoffs are adding up to.
In the meantime, yo, don’t use Chinese software. Don’t be like the kids rebelling against the TikTok ban by giving even more of their data over to an entity known to surreptitiously take it, aggregate it, and use it.
What was it my dad used to say? It’s “cutting off your nose to spite your face.” Yeah, that’s brutal but you’ll remember it. He was awesome like that.
But then also ask yourself, should you really be putting that many eggs into AI’s basket?
This is a pivotal moment in the future of tech. And every one of these moments requires a reckoning with overconfidence and hubris to shake out the real and lasting benefits and opportunities.
I’m already seeing social-media posts from scammers giving you step-by-step instructions for a DeepSeek-built trading algorithm guaranteed to game the stock market. I think they just took the crypto template and changed a couple words, maybe using ChatGPT.
Wait for all that noise to die down before you make any decisions. I’ll be following along if you want to hop on my email list.
EXPERT OPINION BY JOE PROCOPIO, FOUNDER, JOEPROCOPIO.COM @JPROCO
Wednesday, January 29, 2025
What is DeepSeek, the Chinese AI startup that shook the tech world?
A surprisingly efficient and powerful Chinese AI model has taken the technology industry by storm. It’s called DeepSeek R1, and it’s rattling nerves on Wall Street.
The new AI model was developed by DeepSeek, a startup that was born just a year ago and has somehow managed a breakthrough that famed tech investor Marc Andreessen has called “AI’s Sputnik moment”: R1 can nearly match the capabilities of its far more famous rivals, including OpenAI’s GPT-4, Meta’s Llama and Google’s Gemini — but at a fraction of the cost.
The company said it had spent just $5.6 million powering its base AI model, compared with the hundreds of millions, if not billions of dollars US companies spend on their AI technologies. That’s even more shocking when considering that the United States has worked for years to restrict the supply of high-power AI chips to China, citing national security concerns. That means DeepSeek was supposedly able to achieve its low-cost model on relatively under-powered AI chips.
What is DeepSeek?
The company, founded in late 2023 by Chinese hedge fund manager Liang Wenfeng, is one of scores of startups that have popped up in recent years seeking big investment to ride the massive AI wave that has taken the tech industry to new heights.
Liang has become the Sam Altman of China — an evangelist for AI technology and investment in new research. His hedge fund, High-Flyer, focuses on AI development.
Like other AI startups, including Anthropic and Perplexity, DeepSeek released various competitive AI models over the past year that have captured some industry attention. Its V3 model raised some awareness about the company, although its content restrictions around sensitive topics about the Chinese government and its leadership sparked doubts about its viability as an industry competitor, the Wall Street Journal reported.
But R1, which came out of nowhere when it was revealed late last year, launched last week and gained significant attention this week when the company revealed to the Journal its shockingly low cost of operation. And it is open-source, which means other companies can test and build upon the model to improve it.
The DeepSeek app has surged on the app store charts, surpassing ChatGPT Monday, and it has been downloaded nearly 2 million times.
Why is DeepSeek such a big deal?
AI is a power-hungry and cost-intensive technology — so much so that America’s most powerful tech leaders are buying up nuclear power companies to provide the necessary electricity for their AI models.
Meta last week said it would spend upward of $65 billion this year on AI development. Sam Altman, CEO of OpenAI, last year said the AI industry would need trillions of dollars in investment to support the development of high-in-demand chips needed to power the electricity-hungry data centers that run the sector’s complex models.
So the notion that similar capabilities as America’s most powerful AI models can be achieved for such a small fraction of the cost — and on less capable chips — represents a sea change in the industry’s understanding of how much investment is needed in AI. The technology has many skeptics and opponents, but its advocates promise a bright future: AI will advance the global economy into a new era, they argue, making work more efficient and opening up new capabilities across multiple industries that will pave the way for new research and developments.
Andreessen, a Trump supporter and co-founder of Silicon Valley venture capital firm Andreessen Horowitz, called DeepSeek “one of the most amazing and impressive breakthroughs I’ve ever seen,” in a post on X.
If that potentially world-changing power can be achieved at a significantly reduced cost, it opens up new possibilities — and threats — to the planet.
What does this mean for America?
The United States thought it could sanction its way to dominance in a key technology it believes will help bolster its national security. Just a week before leaving office, former President Joe Biden doubled down on export restrictions on AI computer chips to prevent rivals like China from accessing the advanced technology.
But DeepSeek has called into question that notion, and threatened the aura of invincibility surrounding America’s technology industry. America may have bought itself time with restrictions on chip exports, but its AI lead just shrank dramatically despite those actions.
DeepSeek may show that turning off access to a key technology doesn’t necessarily mean the United States will win. That’s an important message to President Donald Trump as he pursues his isolationist “America First” policy.
Wall Street was alarmed by the development. US stocks were set for a steep selloff Monday morning. Nvidia (NVDA), the leading supplier of AI chips, whose stock more than doubled in each of the past two years, fell 12% in premarket trading. Meta (META) and Alphabet (GOOGL), Google’s parent company, were also down sharply, as were Marvell, Broadcom, Palantir, Oracle and many other tech giants.
Are we really sure this is a big deal?
The industry is taking the company at its word that the cost was so low. No one is really disputing it, but the market freak-out hinges on the truthfulness of a single and relatively unknown company. The company notably didn’t say how much it cost to train its model, leaving out potentially expensive research and development costs. (Still, it probably didn’t spend billions of dollars.)
It’s also far too early to count out American tech innovation and leadership. One achievement, albeit a gobsmacking one, may not be enough to counter years of progress in American AI leadership. And a massive customer shift to a Chinese startup is unlikely.
“The DeepSeek model rollout is leading investors to question the lead that US companies have and how much is being spent and whether that spending will lead to profits (or overspending),” said Keith Lerner, analyst at Truist. “Ultimately, our view, is the required spend for data and such in AI will be significant, and US companies remain leaders.”
Although the cost-saving achievement may be significant, the R1 model is a ChatGPT competitor — a consumer-focused large-language model. It hasn’t yet proven it can handle some of the massively ambitious AI capabilities for industries that — for now — still require tremendous infrastructure investments.
“Thanks to its rich talent and capital base, the US remains the most promising ‘home turf’ from which we expect to see the emergence of the first self-improving AI,” said Giuseppe Sette, president of AI market research firm Reflexivity.
By David Goldman
Monday, January 27, 2025
Why OpenAI’s Agent Tool May Be the First AI Gizmo to Improve Your Workplace
Many of us have by now chatted to one of the current generation of smart AI chatbots, like OpenAI’s market-leading ChatGPT, either for fun or for genuine help at work. Office uses include assistance with a tricky coding task, or getting the wording just right on that all-important PowerPoint briefing that the CEO wants.
The notable thing about all these interactions is that they’re one way: the AI waits for users to query it before responding. Tech luminaries insist that next-gen “agentic” AIs are different and can actually act with a degree of autonomy on their user’s behalf. Now rumors say that OpenAI’s agent tool, dubbed Operator, may be ready for imminent release. It could be a game changer.
The news comes from Tibor Blaho, a software engineer who news site TechCrunch says has a “reputation for accurately leaking upcoming AI products.” Blaho says he’s found evidence of Operator inside the desktop version of OpenAI’s ChatGPT app, as well as information hidden on OpenAI’s public website, including data comparing Operator’s performance to other AI systems.
AI agents are snippets of AI-powered code that can be given the ability to “act” in digital environments. This means giving an agent the ability to control a user’s computer, for example, which means it can fill in information on a webform, or even write code. According to OpenAI’s CEO Sam Altman, agents are the next big thing in AI, and they could totally change the way many office workers spend their day.
Different AI companies have already tried releasing agent-based tools, with Google’s system, for example, being designed to let retailers “operate more efficiently and create more personalized shopping experiences to meet the demands of the AI era,” and Salesforce’s “Agentforce” tool able to act like a sales rep. OpenAI’s entry to the agent marketplace could be far more transformational.
That’s because if an agent can fill in webforms, that means it could be trusted with some necessary but highly mundane office tasks that eat into workers’ daily hours and potentially impact their ability to make their employers more money. For example, remember when your company fired Steve from accounts—the really useful guy who handled your business travel requests—in the name of efficiency? Yup, it meant you and all the other staff had to spend hours wrestling with confusing forms instead of actually working. An AI agent might be able to do most, if not all, of that form-wrangling for you.
The one question hovering over OpenAI’s plans is how well Operator will actually work, which will indirectly impact how much time it may be able to save the average office cubicle dweller. The performance numbers Blaho unearthed on OpenAI’s website suggest Operator isn’t totally reliable yet, depending on the task it’s been asked to do. When tasked with signing up to a cloud services provider and launching a virtual machine (a web-based portal to a cloud-based computer system), Operator succeeded only 60 percent of the time, the data say. When asked to create a Bitcoin wallet, it succeeded just 10 percent of the time.
These are preliminary numbers, and they may change when OpenAI actually does release Operator—which TechCrunch says could happen this month. But they’re an important reminder that, as with other generative AI systems that your office may be trying out, AI just can’t be trusted right now. Before you make decisive choices based on the AI’s advice, or use any other form of AI output, it’s worth running a fact-checking process, to make sure the information is genuine and not “hallucinated” at all. This advice may be doubly relevant when it comes to letting AI agents actually interact with your company’s computers.
BY KIT EATON @KITEATON
Friday, January 24, 2025
OpenAI CEO Sam Altman Says This Will Be the No.1 Most Valuable Skill in the Age of AI
Ask some of the top minds in the field what the future of artificial intelligence will look like, and you’ll get wildly different answers.
Some talk about super intelligence, AI personal assistants for all, and a world free of want. Others warn of the robot apocalypse. A few even argue that the potential of current AI models is overblown. But what just about everyone can agree on is that sometime quite soon, AI will fundamentally change how we live and work.
How should we entrepreneurs best prepare ourselves (and our kids)? That’s another question experts haven’t been shy about taking a stab at. Many suggest we hone the fundamentally human skills that machines still struggle to replicate – things like adaptability, empathy, and interacting with the physical world.
But when asked for his opinion on a recent episode of Adam Grant’s Re:Thinking podcast, Sam Altman – CEO of OpenAI, the company behind ChatGPT – mentioned a different skill as the most important one to cultivate if you want to thrive in an AI-filled world.
Sam Altman: My kid will never be smarter than AI.
Unsurprisingly for a guy selling AI, Altman agrees with those who see a whole lot of transformative AI in our collective future.
“Eventually, I think the whole economy transforms,” he predicts. But don’t worry too much that a robot will steal everyone’s jobs. “We always find new jobs, even though every time we stare at a new technology, we assume they’re all gonna go away,” he continues.
How to best prepare for this economic transformation is a conversation he has a personal stake in. Altman’s professional and financial future is clearly assured. But he and his husband are expecting a child soon. What skills does he think his future child needs to focus on to thrive in this AI-filled future?
Not intelligence. “My kid is never gonna grow up being smarter than AI,” he tells Grant.
“There will be a kind of ability we still really value, but it will not be raw, intellectual horsepower to the same degree,” Altman believes. So if sheer IQ isn’t the key to future success, what is? “Figuring out what questions to ask will be more important than figuring out the answer,” he says.
And he doesn’t just mean asking AI better questions. “The prompting tricks that a lot of people were using in 2023 are no longer relevant, and some of them are never gonna be necessary again,” Altman claims later in the episode.
Connectors beat collectors?
So what does Altman mean exactly when he says asking questions will be more important than answering them once AI becomes smarter than humans? The answer isn’t 100 percent clear, though Grant takes a stab at summarizing what Altman might be trying to say:
“We used to put a premium on how much knowledge you had collected in your brain, and if you were a fact collector, that made you smart and respected. And now I think it’s much more valuable to be a connector of dots than a collector of facts – that if you can synthesize and recognize patterns, you have an edge.”
Back when Altman was in school, the OpenAI CEO responds, teachers tried to ban what they then called “the Google.” The thinking was, if you could just look up facts, then why bother memorizing them? Wouldn’t we all end up intellectually poorer in the long run?
Clearly, the teachers lost this battle. Thanks to the internet, we just learned “how to do more difficult, more impactful, more interesting things,” Altman claims.
He concludes: “I expect AI to be like that too.”
A few questions and a takeaway
Now, looking around at the current moment in global affairs, I think it’s fair to ask whether those ‘90s teachers might have had a point about the internet’s potential effect on our collective intellect. I personally am not sure that facts are in greater rather than lesser supply today than back when I first encountered “the Google.”
Nor am I sure that the tenor of the discussion or the problems we’re solving (or usually not solving) today are on some higher plane of human achievement. A few minutes on Twitter/X can really make you wonder. Though to be fair to Altman, AI is already powering incredible scientific, if not social, breakthroughs.
You can also find defenders of rote memorization who point out that it’s hard to connect dots you don’t recall exist or that you can conceive of only hazily without time-consuming googling.
But putting these objections aside for a moment, Altman is surely right that humans will never beat machines at recalling facts. What research (like this fun study that pitted AI against 4-year-olds) suggests we still excel at is looking at those facts in an unconventional light or pairing them with other unexpected facts, aka asking questions or connecting dots.
The future is creative.
Another word for this very human ability? Creativity. People ask creative questions about what facts mean and how they might fit together in a way that AI (so far) does not.
Which suggests that if Sam Altman wants his future child to thrive in a world of AI — or if any entrepreneur out there is hoping to prepare themselves or their offspring for the world of the future — focusing on exercising your creative muscles is probably one smart way to go.
EXPERT OPINION BY JESSICA STILLMAN @ENTRYLEVELREBEL
Wednesday, January 22, 2025
OpenAI Details How It Would Like AI to Be Regulated
OpenAI, the Sam Altman-led company that ushered in the AI era with the late 2022 release of ChatGPT, has laid out its idealized vision for how the United States government, and the incoming Trump administration, can grow and regulate the country’s burgeoning artificial intelligence industry.
In a document titled “AI In America: OpenAI’s Economic Blueprint,” the company calls for infrastructure investments, collaboration between private AI companies and government agencies, and a light touch of federal regulation, rather than letting each state decide its own rules.
In an introduction, OpenAI vice president of global affairs Chris Lehane compared the current state of the AI industry with the early days of the automotive industry. He pointed out that the United Kingdom’s auto industry was stunted by overregulation, while the United States became the car capital of the world by “merging private-sector vision and innovation with public-sector enlightenment.”
Lehane wrote that the U.S. has another chance to be a leader in a potentially massive market, but only if the government and industry can work together. “In the same way the federal government helped clear the way for the nascent automobile industry to grow, including by preempting a state-by-state tangle of roads and rules,” wrote OpenAI, “it should clear the way for the AI industry’s development of frontier models.”
The context behind this document is that AI companies are increasingly worried that, without any federal regulation, individual states will develop their own rules. In 2024, California governor Gavin Newsom vetoed a bill that would’ve instituted safety requirements for AI models, but did sign a bill mandating the disclosure of training data for AI models.
One solution proposed by OpenAI is to develop pathways for companies that develop large language models to have their models evaluated by a government agency. “In return,” the proposal says, “these companies would receive preemption from state-by-state regulations on the types of risks that the same national security agencies would handle.” (Notably, OpenAI does not propose any sort of government evaluation that AI models would need to undergo in order to be publicly deployed.)
But state governments still have a big role to play in OpenAI’s grand vision. OpenAI proposed that state and local governments create “AI economic zones” in order to speed up the permitting process for building both data centers and power-generating infrastructure like wind farms, solar arrays, and nuclear reactors. The company estimates that there’s “$175 billion in global funds waiting to be invested in AI infrastructure,” and if the United States doesn’t take those funds and build infrastructure here, the Chinese Communist Party will happily step in.
OpenAI is embarking on a nationwide “Innovating for America” initiative to sell its vision at the federal, state, and local level, starting with a Washington, D.C. event on January 30, in which CEO Altman will preview new OpenAI technology and discuss its ability to drive growth.
It still remains to be seen how Trump’s relationship with OpenAI rival Elon Musk will impact the administration’s reaction to these proposals, but one thing is certain: Whatever the Trump administration decides to do with AI regulation will have major repercussions for businesses that build and use AI models.
BY BEN SHERRY @BENLUCASSHERRY
Monday, January 20, 2025
ChatGPT Gets Into the Robot Business, Which Could Change Your Office Routines
It looks like artificial intelligence will soon leave computer screens and get into the physical realm. The “ChatGPT moment for robotics is coming,” said Nvidia co-founder and CEO Jensen Huang. Either Huang knew something we didn’t, or he was incredibly prescient: There’s fresh information that the world-leading AI brand OpenAI is very serious about getting into robotics, and its hardware leader, Caitlin Kalinowski, has even posted job descriptions on X.
In a Q&A session after giving a keynote address at the 2025 Consumer Electronics Show in Las Vegas last week, Huang stirred up a lot of excitement with dramatic ideas about Nvidia’s future in “physical AI”—which means real-world, tangible AI hardware. A.k.a. robots. Huang even went so far as to say companies should concentrate on developing humanoid robots because they can tackle difficult terrain that would stymie a wheeled machine.
Kalinowski’s job post explained how she was really excited about posting for OpenAI’s first “robotics hardware roles,” including positions for “two very senior tech lead engineering roles” and a technical program manager. The engineers will help the company “design the sensor suite for our robots,” Kalinowski explained, and one will need experience “designing gears, actuators, motors and linkages for robots.” The program manager role will be a “fun, scrappy role to start,” she noted, and it will include work on “standing up our training lab, and keeping us running smoothly as we cycle through our product design phases.”
News site TechCrunch dug into the details of the job postings, and found information that shows OpenAI’s planning on “general purpose” and “adaptive” robots, powered by special AI models the company develops, and notes that one listing shows plans for developing and producing hardware at “high volume (1M+).”
This news makes it abundantly clear that OpenAI will move speedily into robotics alongside developing its code-based AI products, and try to start building robot hardware sooner rather than later. Kalinowski’s words lend support to the idea OpenAI may be following the startup-style “move fast and break things” mentality that has served other disruptive hardware companies, like SpaceX, so well. And the “general purpose” description tallies nicely with Huang’s call for developing humanoid robots—systems that can maneuver and help out in existing factory or even office workspaces typically designed around the needs of the human body.
Why should we care about this niche bit of news? After all, it’s just a job listing for a handful of engineering posts in one company.
The fact is that OpenAI could already be one of the best-placed companies in the world to develop AI-powered hardware, like a humanoid robot. The AI leader has access to huge computing power, which will be needed to train the robots to move, react to commands, and so on. Thanks to the nature of generative AI system training, it also has access to gargantuan amounts of training data—and it’s easy to imagine plenty of this information could be useful for teaching a robot to learn spoken commands and detect objects automatically using machine vision.
OpenAI’s CEO Sam Altman has also been pushing the notion of AI “agents” as the next big development for AI technology—these are AI systems that can operate autonomously and even make decisions and perform digital actions like filling in forms on websites. This technology will likely translate quite directly into giving real world robots a degree of autonomy—a vital skill if they’re going to work alongside people, who can make surprising decisions to move, speak or perform an action at a moment’s notice.
Traditional robots are a longtime feature of industrial engineering, particularly for manufacturing items like cars. But these machines tend to be static, perform one or two particular tasks, and require very precise positioning of tools and other equipment. Until now, robots have often lacked access to the kind of real-world decision making made possible by the AI revolution. OpenAI also joins the ranks of other companies developing cutting-edge AI-powered robots: Tesla’s Optimus is one well-known example, but other companies like Figure are also making progress.
So will your next coworker be an android? Tech luminary Peter Diamandis certainly thinks so. Last year he predicted “millions, then billions of humanoid robots” are coming. OpenAI’s ChatGPT technology, along with other AIs, is already helping transform office work. Its entrance into the robotics market is certainly going to accelerate that process.
BY KIT EATON @KITEATON
Friday, January 17, 2025
According to a new report, many businesses expect to increase their budgets for generative AI in 2025.
If there was one lesson that was extraordinarily clear at this year’s CES, it’s that generative AI is poised to be a massive force for businesses in the coming months and years. You couldn’t walk 10 steps through the Las Vegas Convention Center halls without encountering a new product featuring or promoting artificial intelligence in some form or fashion.
AI spending doesn’t look to be slowing down anytime soon, either. Businesses spent $13.8 billion on genAI in 2024. That’s six times the $2.3 billion spent in 2023, according to data from Menlo Ventures. But determining what to spend on generative AI, whether it’s as a tool to help employees or a feature to include in your product or service, can be a challenge.
Unfortunately, there’s no simple answer—and even some experts say they’re stumped, since AI is still such a new field. KPMG’s latest AI Quarterly Pulse Survey shows that 68 percent of large companies plan to invest between $50 million and $250 million over the next year. And a growing number of leaders are prepared to spend. One year ago, just 45 percent of companies planned investments in that price range.
Among small businesses, the number is, of course, much lower. A report from ServiceDirect found over half of small businesses plan to spend more than $10,000 per year on AI tools.
The amount they plan to spend varies by the size of the company. Some 58 percent of businesses with fewer than 10 employees plan to increase their AI budgets by more than $5,000 over the next 12 to 24 months, while 67 percent of businesses with 10 to 50 employees plan to increase their budgets by that amount or more in the same time frame.
Among companies with more than 50 employees, 77 percent expect to increase their budgets by more than $5,000 on AI solutions in the next 12 to 24 months—on top of their existing heavy investments in the technology.
ROI and pricing
The big question, of course, is what sort of return will businesses see on their investment? The frenzy surrounding the technology has prompted some companies to adopt AI even when the benefits are questionable at best. (At the aforementioned CES, AI was being incorporated into everything from air fryers to plants.)
Only one-third of leaders surveyed by KPMG say they expect to be able to measure the return on their investment in the next six months, with none believing they have yet reached that stage.
“Leaders are putting real dollars behind agents, but with mounting pressure to demonstrate ROI, getting the value story right is critical,” said Steve Chase, vice chair of AI and digital innovation at KPMG, in a statement. “The dynamic nature of AI demands new ways to measure value—beyond the limits of a conventional business case. As leaders work to define the right metrics, those measures must be tightly aligned with the business strategy and should account for the cost of not investing.”
Part of the problem is the cost of generative AI in the first place. James D. Wilton, a former associate partner at McKinsey & Company and founder of Monevate, a pricing and monetization consulting firm, makes the argument that the pricing models AI firms currently use are hurting the industry’s adoption.
There are two types of pricing models used most frequently these days among AI companies: license fees and per-query charges. Neither is ideal, says Wilton. The subscription model assumes a one-size-fits-all approach, but that license fee is often so high that it is unaffordable for smaller users.
The charge-per-query might make more sense, but users don’t get value every time they ask a genAI system a question. It often takes several iterations before the technology gives you the answer you’re looking for.
“The challenge is it’s not very value aligned,” says Wilton. “It’s aligned to the costs the AI generators will incur, but that’s not necessarily where the value is for the user.”
One alternative, he says, is outcome-based pricing models, which charge businesses per satisfactory resolution. (Zendesk offers something like this currently, he notes.)
“The more directly you can tie your pricing to the way the product creates value, the lower the ROI you need to give the customer, because the customer will do less work in order to realize the value,” he says.
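For a back-of-the-envelope sense of Wilton’s point, here is a small Python sketch comparing the two models on a hypothetical support workload. Every price, volume, and rate below is invented for illustration; it is not any vendor’s actual rate card.

# Hypothetical comparison of per-query vs. outcome-based AI pricing.
# All figures are made up for illustration only.
tickets = 1_000                  # customer issues handled in a month
queries_per_ticket = 4           # iterations before the AI gives a useful answer
resolution_rate = 0.8            # share of tickets the AI fully resolves
per_query_price = 0.02           # charged on every query, useful or not
per_resolution_price = 0.05      # charged only for a satisfactory resolution

per_query_bill = tickets * queries_per_ticket * per_query_price
outcome_bill = tickets * resolution_rate * per_resolution_price

print(f"Per-query billing:     ${per_query_bill:,.2f}")
print(f"Outcome-based billing: ${outcome_bill:,.2f}")

The point isn’t which number comes out smaller; it’s that the per-query bill grows with every extra iteration the system needs, while the outcome-based bill grows only when the customer actually gets value.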
BY CHRIS MORRIS @MORRISATLARGE
Wednesday, January 15, 2025
The Most Exciting Tech for Your Business at CES 2025
The annual Consumer Electronics Show is drawing to a close in Las Vegas. Known as the gadget-head’s Super Bowl, CES is both the launchpad for the next big things in tech, as well as a home for plenty of less serious inventions. Flavor-enhancing spoon, anyone?
This year’s CES was no different, offering an array of weird and wonderful gadgets and announcements from some of the top leaders in tech. Here is a selection of gadgets that could actually help your business—or just come in handy as you run it.
AI-powered laptops
Sure, you could spring for a $3,000 supercomputer from the AI behemoth itself, Nvidia, but PC-maker Lenovo also announced a suite of AI-powered commercial computers, some of which come at a kinder price point.
Lenovo’s AI-powered commercial laptops, the ThinkPad X9 14 Edition and X9 15 Aura Edition, range in cost from $1,399 to $1,549, whereas its ThinkCentre neo 50q QC desktop starts at just $849. The new ThinkPads come with an AI assistant built in. Lenovo AI Now is based on Meta’s Llama 3.0 LLM, but stores user data locally. It helps users search and summarize documents and retrieve information across various devices, among other things.
The ThinkCentre neo 50q QC is marketed specifically for small and medium-sized businesses, due to its compact form and AI-powered performance. It features Qualcomm’s Snapdragon X chips or Snapdragon X Plus 8-core processors.
A super fast phone charger
Nothing tosses a wrench into a workday like a dead battery. Enter Swippitt, a three-part “Instant Power System” that is meant to fully restore a phone’s charge in only two seconds. The system consists of a toaster-sized and toaster-shaped charging hub that contains five charged batteries, a phone case with a smart battery, and an app. When you insert a dying phone into the hub, it swaps the used battery in the case for a charged one. (You’ll need to use the system’s Link case for everything to work.) The Swippitt offers compatibility with iPhones 14 and above for now, but is expected to launch for Android later in 2025. Here’s more on how it works. Add one to your office and no one will complain of a low phone battery again.
A temperature-regulating chair
If you’ve ever dreamed of moving your car’s seat warmer into your office chair—this one’s for you. Razer’s Project Arielle contains both a self-regulating heater that can reach 86 degrees Fahrenheit and bladeless fan technology that pushes air through the chair’s mesh back, as TechCrunch reported. A panel built into the chair controls the features. Although technically a gaming chair, Project Arielle seems like it could have great applications for long days spent in overly chilly or warm offices—if it ever goes into production. For now, it is still a concept, but think about how quickly it could help you be more productive (or just fall asleep at your desk).
An AI travel agent
Five years after teasing the product during CES 2020, Delta announced its AI-powered Delta Concierge. For now, the concierge service will offer suggestions based on a person’s travel plans with natural language text and voice input, instead of conventional menu selection. In the future, Delta hopes the assistant will remove some uncertainty from flying by helping to rebook flights in the case of delays or cancellations, navigate unfamiliar airports, and even manage transportation after leaving the airport—potentially with its new travel partner, Uber. It will be found in the Fly Delta app.
Smart glasses to get you off your phone
Smart glasses were all the rage at CES. They ran the gamut from simple designs that play with lens color or contain Bluetooth speakers to augmented-reality glasses tricked out with screens and battery packs. Somewhere in the middle are the types of glasses The Verge referred to as “all day companions,” which mimic the size and approximate weight of regular glasses but add built-in displays, AI assistants and—in some cases—camera capabilities. These glasses could amp up productivity with notifications available at a glance—or at the very least endow you with the swagger of an early adopter.
A robot bartender for those in-office happy hours
Let’s be real. A robot bartender may not be essential, and it may not improve efficiency, but it could very well take the edge off of the day. Richtech Robotics announced an AI-powered robot bartender called ADAM. Intended for the hospitality industry, the robot can make more than 50 kinds of drinks, from cocktails to coffee, and can even chat a little. Not only that, but ADAM is already at work inside a Georgia Walmart, as well as in the Texas Rangers baseball stadium.
BY CHLOE AIELLO, REPORTER @CHLOBO_ILO
Monday, January 13, 2025
As You Bet on AI, Make Sure It’s Not Your Strategy That’s Artificial
There was an axiom we used in venture capital that said that the industry had a memory of only 10 years. Whatever was learned 10 or more years past, in other words—from what to avoid, to what to prioritize—was said to be forgotten, only to be painfully re-learned repeatedly at decade intervals. The origin of the phrase was directly tied to the lesson that the predicted impact of any new innovation is always grossly overstated. You don’t have to have ever worked in venture investing to know this to be true. Just flip back in your memory to see.
An eon ago, the fax machine was said to spell the end of physical mail. It didn’t, nor, years later, did email, though it too was unveiled with similar claims. Desktop computing was predicted to be the demise of large-scale data management and mainframe computing. To be sure, desktops, then laptops, then smartphones all dramatically changed how we do what we do; but the cloud and server farms show the prediction to have been over-imagined.
Most artificial intelligence innovations are simply tools.
These tools’ actual impact occurs only in the context of the larger purposes, plans, and strategies they serve. Yet, too often, we speak of the innovation as strategy itself, as if “using more artificial intelligence in 2025 and beyond” is enough to represent a strategy. It’s not. It’s also why this latest version of this repeated lesson isn’t just about overstatement. It’s about confusing tools and tactics with strategy.
In the last year, my research has put me in touch with many of the organizations considered leaders in AI, including in determining its uses in and impact on business. Even those developing the tools feel a sense of marvel at what AI can do. In turn, they’ve spent much of the past few years striving to put AI to work as quickly as possible, in part for its speculated promises, and no doubt too for the assumed rewards AI might bring.
For the AI developers paying close attention, two things give them pause. The first has caught most of them by surprise: AI is proving to have power beyond what even its designers know, and to a degree, none of them can predict nor control fully. In response to that particular awakening, in 2023, a group of leading firms suggested that there should be a collective pause taken to think about the deeper implications of what AI might bring, consider the possible ripple effects that might result, and jointly explore how shared guidelines might be followed.
Some, surprisingly quite a few, were willing to sign on to a formal agreement. Few were prepared to act. The technology’s promises, even unverified, were just too great not to speed ahead, logic be damned.
The cost of confusing AI tools with strategy.
The second thing giving leaders in AI pause was more disturbing in its implications. It was the stark reality that in the rush to embrace AI, an increasing number of organizations suddenly found themselves struggling to answer seemingly simple questions like: What business are we in? So busy were they chasing the tool that they found themselves suddenly having a hard time remembering what business goals, mission, even strategy the tool was there to support. Almost without their noticing, their priorities had become unintentionally inverted, with the tool no longer the supporting mechanism, but instead the dominant focus. It began firm by firm, but the error of putting AI ahead of the strategies it should be supporting has quickly developed into a disturbing, even dangerous trend. As broader evidence, a recent report from consulting firm McKinsey & Company called this out, and warned of the costs of confusing tools and tactics with strategy.
“It’s time for a reset,” McKinsey declared. “The initial enthusiasm and flurry of activity (around AI) is giving way to second thoughts and recalibrations as companies realize that capturing AI’s enormous potential value is harder than expected.” More than just a passing observation, McKinsey was blunt. “With 2024 shaping up to be the year for AI to prove its value,” they wrote, “companies should keep in mind the hard lessons learned … that competitive advantage comes from building organizational and technological capabilities.” It isn’t that AI has no role, it made clear, referring specifically to generative AI. But incorporating it into any organization’s strategy cannot take form as a simple add-on. To leverage AI effectively means “rewiring the business.”
Any experienced leader with a memory stretching beyond any one innovation cycle understands that strategy is an ongoing recalculation—a reconsideration, reconfirmation, and if need be a reorganization—of how all the pieces and parts that give an organization advantage fit together. Including its tools. Thoughtlessly adding anything new or blindly jumping on the latest bandwagon in and of itself yields no lasting advantage. As magical as AI seems right now, it must be part of this larger recalculation. It is not a strategy in and of itself. Without a doubt, AI will bring change–it already has–and advantages, though likely different than those predicted. It will not, however, displace the fundamental truth that ongoing success requires far greater strategic thought and effort.
EXPERT OPINION BY LARRY ROBERTSON, FOUNDER, LIGHTHOUSE CONSULTING @LRSPEAKS
Saturday, January 11, 2025
5 AI Tools to Save You Time
Work smarter, not harder. This adage has always rung true for me. In the age of AI, it’s no longer just a good idea. It’s a necessity.
As the CEO of DOXA Talent, a conscious outsourcing company that helps businesses build high-performing, borderless teams, I constantly think about the future of work. Instead of resisting change, I embrace it at every turn.
When AI tools started popping up, it was a no-brainer for me to incorporate them into my daily routine. With my never-ending to-do list and the constant demands that come with running a business, it’s been a complete game-changer.
Leaders who aren’t leveraging AI are missing out on valuable opportunities to save time, optimize brainpower, and elevate themselves and their businesses. Here are some tools that have enabled me to do just that.
Perplexity AI
Whether I want to know about the latest tools for streamlining project management for borderless teams or ways to optimize the onboarding process for new clients, Perplexity AI is my go-to for answering questions.
Like ChatGPT, this research and conversational search engine answers queries using natural language predictive text. What makes it better than GPT-4 is the accurate, up-to-date information and citations it provides. I’m also a big fan of the ability to ask questions via voice or text, which adds even more flexibility to my busy days.
Fathom
There’s a good reason why Fathom is a top-rated AI notetaker. This tool has completely transformed my meetings for the better.
Gone are the days when note-taking during meetings was a necessity. Fathom records, transcribes, highlights, and summarizes key points from Zoom, Google Meet, or Microsoft Teams meetings. It even composes action items afterward, so you don’t have to.
This AI tool has made it possible for me to completely focus on the conversation at hand, allowing me to think more strategically and enhance productivity. Plus, if I ever need to revisit key sections of a call, they’re easy to find and just a click away.
After integrating Fathom into my day-to-day life, the thought of any leader going without it is, well, unfathomable.
Loom AI
If you haven’t jumped on the Loom AI train yet, you’re missing out on one of the easiest and most effective ways to communicate with your team.
Loom in and of itself is a handy tool for sharing video messages that have a more personal touch. Loom AI has made it even easier to effectively communicate with team members with features like AI-generated titles, summaries, and custom messages that can be used when you share Looms. It also auto-assigns action items and removes filler words, which are huge timesavers for me.
With studies showing that video messaging improves information recall by up to 83 percent as compared with text-only messaging, Loom AI is an indispensable tool for clear, effective communication.
Speechify
Read three times faster, remember two times more, and reduce your stress. That’s what Speechify promises to you, and as a CEO constantly on the go, I’ve found it invaluable.
This text-to-speech AI allows me to listen to any website, document, or book of my choosing. It’s available via mobile, Chrome extension, and desktop app, making it incredibly convenient.
Simply put, if you’re a busy professional, Speechify is a must.
PhantomBuster
Do you find lead generation to be time-consuming and costly? Me, too—but in the age of AI, it doesn’t have to be.
Enter PhantomBuster, the AI tool that’s taking the lead in a new era of lead generation.
Thanks to PhantomBuster, my team and I have been able to cast a wide net on Facebook, Instagram, and LinkedIn, build new relationships on those platforms, and nurture existing ones. It’s never been easier to gather information about potential leads and leverage automation to network with them.
With PhantomBuster, you get more leads with less effort (just like it says on its site). I can’t think of a single CEO who wouldn’t want to scale their business with that model.
The reality is, AI tools aren’t going anywhere. With an expected annual growth rate of 37 percent from 2023 to 2030, they will only continue to transform the way we work and run our businesses. Leaders who embrace this technology now will have a competitive edge over those who resist it.
As for me, I’m going to only continue to add to this list. AI tools don’t just help us keep up. They empower us to get ahead. I plan to take full advantage of that.
EXPERT OPINION BY ENTREPRENEURS' ORGANIZATION @ENTREPRENEURORG
Friday, January 10, 2025
Delta Just Announced Its Plan to Use AI to Solve the Worst Thing About Traveling
On Tuesday at CES, Delta Air Lines kicked off its 100th birthday year with a keynote at Sphere. I guess if you have some stuff you want to announce, packing a few thousand people into a place like Sphere is a good way to do that. Add in some special guests like Viola Davis, Tom Brady, a motorcycle Uber driver, and lots of digital fireworks, and you have a party.
The highlights of the party were a series of announcements the company rolled out, including a new partnership with Uber—replacing Lyft as the airline’s official rideshare partner—as well as a partnership with YouTube that will allow SkyMiles members to watch YouTube Premium for free when signed in to Delta’s in-flight entertainment system. Delta also said it planned to complete its rollout of free Wi-Fi across its global fleet by the end of this year.
One of the more interesting announcements was what the company called Delta Concierge, an AI-powered personal assistant within the Fly Delta app.
“Delta Concierge will serve as a thread across your experience,” said Ed Bastian, Delta’s CEO. The idea is that it will “serve as an AI-powered personal assistant that combines the context of who our customers are and how they like to travel, with the deep knowledge and insights we’ve built as the world’s most reliable airline.”
Initially, Delta Concierge will offer travelers suggestions based on their preferences and their travel plans. It will also allow for natural language text and voice input, making it easier to interact with than finding your way through a selection of menus.
“Delta Concierge will offer features like natural language text and voice input and travel updates such as passport expiration alerts,” the company said. “Future updates will include options such as flight changes.”
That last part is where things really get interesting. By far, the worst thing about travel is uncertainty. Air travel, especially, is full of uncertainty. There are literally millions of moving parts that all have to keep moving in order for you to get where you’re going. Sometimes, one of those moving parts breaks. Sometimes, the weather doesn’t cooperate, or crew members get sick. Sometimes, a software update grounds an entire airline for a few days. When that happens, the ability to simply ask the app to “Find me alternative flights to my destination,” and have it understand all of what that means would be a game-changer.
But, even if everything goes the way it’s supposed to, for a lot of people, there is still a lot of uncertainty—especially if you don’t fly frequently. Having the app proactively let you know how to get to your gate, which security line to use, or the fastest way to get from the airport to your destination is a big deal. I don’t know how much of it is only possible because of AI, but if it works, I also don’t care.
In the example shared in the keynote, Delta Concierge lets a traveler know that traffic is especially bad and suggests they take a Joby air taxi. Of course, you can’t actually do that yet. To be fair, I’m sure Delta could send you that notification when Delta Concierge rolls out, but Joby hasn’t received final regulatory approval. And when it does, you can bet that Delta will get a cut. Joby and Delta have a partnership to bring the air taxi to New York City and Los Angeles.
Delta isn’t commenting on what LLM it is using, and—as for privacy—it says that customers are not automatically opted into the Delta Concierge experience. Additionally, it does say that “customer data will be safeguarded and protected according to our Privacy Policy, industry standards, and best practice.”
Delta says it will begin launching in a “phased approach” this year, but it remains to be seen what all of this really looks like when it arrives on your devices. A lot of companies have made big promises about how AI is going to change all sorts of products and experiences, and the vast majority of them are so early-stage that it’s not clear if they will ever materialize.
On the other hand, Delta has a pretty good recent track record of keeping these types of promises. Two years ago, the company announced it was bringing fast, free Wi-Fi to all of its planes. During the keynote, Bastian said it expects to complete that by the end of this year. In fact, he stated publicly that “many of the features we’ve shown today will be on our planes this year.”
I expect we’ll see Delta rolling out its Concierge this year, though some of the more interesting features are probably further down the road. Delta painted a pretty compelling future of how the airline will use AI to personalize the travel experience. It’s making a pretty big promise, which is risky. On the other hand, if it can solve the worst thing about travel, it seems like a pretty intelligent bet.
EXPERT OPINION BY JASON ATEN, TECH COLUMNIST @JASONATEN
Wednesday, January 8, 2025
Sam Altman Says AI Agents Will Transform the Workforce in 2025
Sam Altman says the two years since the launch of ChatGPT, a period that has catapulted him to fame as the public face of the artificial intelligence industry, have been the most “unpleasant years of my life so far.” In a new blog post, Altman reflected on the path that he’s walked since ChatGPT’s November 2022 start, including what he learned from his very-public firing in 2023, and made a big prediction about AI’s impact in 2025.
Here are the biggest takeaways from Altman’s lengthy January 2025 blog post, titled “Reflections.”
Altman’s firing still haunts him
Altman was publicly fired by OpenAI’s board in November 2023, just before ChatGPT’s first birthday. Five days later, he was reinstated as CEO. In the blog post, Altman reveals some personal details regarding the firing, which happened over a video call while he was in Las Vegas.
Looking back, he says the whole event was a “big failure of governance by well-meaning people, myself included,” but one that he believes has made him a more thoughtful leader. Another lesson from the firing? The importance of having a board with diverse viewpoints and experience handling unexpected challenges.
In particular, Altman singled out two figures who he said “went so far above and beyond the call of duty” to rescue Altman from his brief banishment: Airbnb founder Brian Chesky and venture capitalist Ron Conway. Without going into detail, Altman recalled “being in the foxhole” with Chesky and Conway, who “used their vast networks for everything needed and were able to navigate many complex situations. And I’m sure they did a lot of things I don’t know about.”
He believes in scaling laws
Altman has long been a believer in scaling laws, the empirical observation that the more data a neural network is trained on, the smarter it becomes. In his blog post, he theorized that businesses also have a scaling law: As growth increases, so does turnover.
Acknowledging that OpenAI’s executive team has seen a massive amount of turnover since ChatGPT’s launch, Altman wrote that “startups usually see a lot of turnover at each new major level of scale, and at OpenAI numbers go up by orders of magnitude every few months.” According to Altman, the fracturing of OpenAI’s C-suite, including the departures of chief technology officer Mira Murati and chief scientist Ilya Sutskever, are a natural result of OpenAI’s ascendancy.
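For the technically curious, neural scaling laws are usually expressed as a simple power law: model error falls smoothly and predictably as training data (and compute) grows. The short Python sketch below illustrates the general shape with invented constants; it is not OpenAI’s actual formula or figures.

# Generic power-law shape often used to describe neural scaling laws:
# predicted loss falls smoothly as the amount of training data grows.
# The constants n_c and alpha are invented for illustration only.
def loss_from_data(n_tokens: float, n_c: float = 1e13, alpha: float = 0.095) -> float:
    return (n_c / n_tokens) ** alpha

for n in (1e9, 1e10, 1e11, 1e12):
    print(f"{n:.0e} training tokens -> predicted loss ~ {loss_from_data(n):.2f}")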
OpenAI’s structure, and its future
OpenAI’s leadership reportedly spent much of 2024 determining how to transform its current structure as an entity with a capped for-profit arm and a nonprofit arm into a more conventional moneymaking entity. Altman wrote in his post that he had “no idea we would need such a crazy amount of capital” to develop super-advanced artificial intelligence.
To obtain that kind of capital, OpenAI is planning on converting its for-profit arm into a public benefit corporation. In an official statement released in December, OpenAI wrote that “investors want to back us but, at this scale of capital, need conventional equity and less structural bespokeness.”
A key test of this planned new OpenAI structure will be how the company sells enterprises on AI agents, which are designed to take specific actions and automate workflows. Altman wrote in his blog that 2025 could be the year that AI agents are integrated into the workforce and predicted they would “materially change the output of companies.”
Beyond 2025, OpenAI is turning its aim beyond useful tools to “superintelligence,” super-advanced AI models capable of outperforming humans at nearly any task and ushering in a new era of abundance and prosperity.
“We love our current products,” wrote Altman, “but we are here for the glorious future.”
OpenAI’s naming struggles
Altman is effusive about OpenAI’s capabilities in nearly all areas, save for one notable exception: naming stuff. The company has a history of giving its new AI models and products confusing names like GPT-4, GPT-4o, GPT-4o Mini, o1, and o1 Mini. In July 2024, when announcing GPT-4o Mini, Altman responded to a post on X suggesting that OpenAI needed to revamp its naming scheme with “lol yes we do.”
In his blog post, Altman says that originally, ChatGPT was named Chat With GPT-3.5, adding that OpenAI is “much better at research than we are at naming things.”
Altogether, the post is nearly 2,000 words, so if you don’t feel like reading the whole thing, you’re in luck: When asked to summarize the screed in a single sentence, GPT-4o provided the following: “OpenAI’s journey over the past nine years, marked by the launch of ChatGPT and transformative progress in AI development, has been a mix of extraordinary innovation, intense challenges, and a vision for creating beneficial AGI, culminating in a reflection on resilience, gratitude, and the promise of a super-intelligent future.”
BY BEN SHERRY @BENLUCASSHERRY
Monday, January 6, 2025
6 Ways the Workplace Will Change in 2025
Nearly five years ago, the pandemic upended the way millions of people worked seemingly overnight. Now, at the tail end of 2024, the workplace is still undergoing a number of consequential changes – and in 2025, even more shifts are on the menu.
This was a year marked by several watershed moments for work. One of the largest tech companies in the world, Amazon, announced a five-day return-to-office policy. Companies like Lowe’s, Ford, and Walmart rolled back their DEI efforts. And Donald Trump’s win sparked a flurry of questions about the future of immigrant labor, workplace regulations, and more.
With the new year nearly upon us, here’s how experts expect work and the workplace to change in 2025 – and how business owners can prepare to greet those evolutions with aplomb.
1. More talent searches will start inside.
In 2024, the labor market continued its cooling trend, rappelling down from its heights around the Great Resignation. Now, with job openings generally slowing and quits well below the rates of recent years, small businesses are finding it a bit easier to fill their open positions, according to recent data from the National Federation of Independent Business.
And yet, because many companies had engaged in labor hoarding in 2023 — holding tightly to their talent as the job market softened — many leaders were in an interesting pickle in 2024, says Jeanne MacDonald, CEO of recruitment process outsourcing at Korn Ferry, the global consulting firm.
“They hired a ton of people, spent a lot of money, and then early 2024 was, ‘Well, wait a minute, what are we going to do with all this talent?'” MacDonald says.
As a result, MacDonald says, 2024 was the year she saw “internal mobility or internal recruiting, more so than external, than we’ve ever seen.” Companies turned toward their current workforce, evaluating what skills already existed internally and figuring out how to leverage them in new ways.
That approach is only going to become more popular in 2025, predicts Andrew McCaskill, a career expert at LinkedIn. Even if hiring opportunities do accelerate again, that’s still an expense, he says, and companies are figuring out that their “next best employee may be my current employee, just moved to another team,” he says.
Thus, companies must also consider how to create internal systems or programs that facilitate this kind of mobility, McCaskill says, such as allowing team members to “raise their hand for stretch assignments” or take “tours of duty in other parts of the business,” he says. This way, company leaders are actively upskilling and preparing team members for internal movement.
2. Managers could get a burnout-busting tool.
One story that didn’t change much over the past two years: managers are still being pushed to the brink of burnout, experts say.
In 2023, a Gartner survey found that the average manager had 51 percent more responsibilities than they could “effectively manage”; this year, a different Gartner survey found that three-quarters of HR leaders say their managers are “overwhelmed by the expansion of their responsibilities.”
Since the pandemic, managers have been on the front lines of new hybrid and remote arrangements, worker retention, and other key workplace changes, as Inc. has previously reported. And in 2024, managers were still being asked to do “more with less” and still feeling limited by a lack of autonomy, says Emily Field, a partner at McKinsey.
But this year saw some changes that could make a meaningful difference for managers moving forward, Field says – namely, the use of generative AI. Indeed, by incorporating some of these tools into their managers’ workflows in 2025, leaders have an opportunity to “free up capacity for managers,” Field argues.
Now, the onus is on teams to find exactly which tools would be most helpful in alleviating their managers’ specific pressures in 2025 — and Field says an “experimentation mindset” will serve companies well here. “Let’s test and learn,” she says, “and then let’s refine based on what serves us.”
3. Gen Z will move up the ranks.
Generation Z is now firmly entrenched in the workforce. They represent nearly a fifth of the U.S. labor force and, as of the second quarter of 2024, outnumber Baby Boomers, according to the U.S. Department of Labor. And in 2025, they’re projected to hit another milestone: about one in 10 managers will be Gen Z, according to a recent report from Glassdoor.
This stands to be an interesting transition, considering a few recent surveys suggesting that Gen Z’s entry into the workplace has been bumpy. Sixty percent of companies in one survey said they’d fired Gen Z employees who’d been hired earlier in 2024. In another, 57 percent of U.S. Gen Z workers surveyed said they were uninterested in becoming middle managers, a trend the report deemed “conscious unbossing.”
And yet, Glassdoor lead economist Daniel Zhao believes that differences between Gen Z and other generations have been overstated. In fact, while Zhao believes that Gen Z’s management style may be different, he says this will have less to do with their generation and more with “what constitutes good leadership right now.”
Namely, “there’s much more emphasis in the last five years on emotional intelligence for leaders and managers,” he says, as well as “much more discussion around employee well-being, setting boundaries, providing clarity.” With those factors at play, “Gen Z is being asked to raise the bar on good leadership,” Zhao adds.
To help them do this, providing manager training will be critical, Zhao says. He also suggests finding ways to give these less experienced workers opportunities to flex their managerial muscles, such as overseeing a project: “That might be a way to give folks the opportunity to get their feet wet … before dropping them into the deep end.”
4. DEI programs will be put further in the crosshairs.
In 2023, the Supreme Court’s decision to strike down affirmative action spurred a wave of lawsuits aimed at diversity, equity, and inclusion programs, and this year brought on an “escalation” of those legal attacks, says David Glasgow, the executive director of the Meltzer Center for Diversity, Inclusion, and Belonging, a research center within New York University’s School of Law.
Trump’s election only poured more “fuel on the fire of anti-DEI backlash,” Glasgow says — and in 2025, he expects to see even more attacks, including at the federal government level.
That said, this flurry of legal activity “doesn’t necessarily mean that the lawsuits are going to be successful,” Glasgow says. Indeed, the Meltzer Center is tracking more than 100 cases, and many have already been settled or dropped.
Overall, though, the state of DEI remains “uncertain” and “complicated,” says Tory Clarke, co-founder and partner at the New York City-based executive search firm Bridge Partners. Companies that weren’t “fully committed” to DEI work are backing out, she says, while others are worried about being attacked next.
In a survey published earlier this year, Bridge Partners found that 66 percent of companies had increased their DEI investments in the past year, an 11-point drop compared to 2023. But that doesn’t mean companies are giving up: In fact, almost three-quarters of those surveyed with a DEI program already in place said they planned to “increase their commitment to DEI within the next two years” – evidence of companies “going underground” with their commitments to wait out the backlash, Clarke told Inc. at the time.
In the meantime, Clarke says she’s seeing companies move DEI efforts under other areas of their organizations, such as human resources, or change the language related to those programs. Indeed, more than 50 percent of senior executives in a Conference Board survey this year said they’d made changes to DEI terminology.
In 2025, Clarke expects that corporate DEI work will continue, even if the “players may get rearranged” or the work “may go under the radar.” And, ultimately, companies may need to “reintroduce” their DEI efforts to cut through the noise and backlash, says Arthur Woods, chief business officer at Bridge Partners.
“It will likely need to be embedded and democratized a lot more,” he says.
5. The 4-day workweek will still be a distant dream.
This year, there’s been plenty of hubbub about working arrangements. But it’s clear that workers continue to value flexibility – and many companies are still offering it in various forms. One buzzy option hasn’t caught on widely quite yet, though: the four-day workweek.
In March, Senator Bernie Sanders (I-Vt.) propelled conversation about the four-day workweek to the forefront when he introduced legislation aimed at reducing the standard workweek from 40 to 32 hours over four years. This came after the United Auto Workers union’s unsuccessful 2023 bargaining for a 32-hour workweek, which sparked further conversations about the four-day workweek this year, says Dale Whelehan, CEO of the nonprofit 4 Day Week Global.
But overall, Whelehan senses that adoption of four-day workweeks has slowed. In September 2023, 4 Day Week Global conducted a pilot program in Germany, successfully recruiting 41 organizations to participate; now, it’s launching a pilot in France with only 10 companies on board, he says.
“I think local economies and local politics is playing an influential role in whether businesses perceive they can take a risk on something like this at this moment in time,” Whelehan says. Broader feelings of uncertainty, including leading up to the presidential election, also played a role in American companies sticking to the status quo, Whelehan believes.
“There has been a lot of ongoing discussion, but not necessarily action happening at the state level in the U.S.,” he says.
And yet, many U.S. workers are hopeful this could change — even if it takes a while. In a survey from the job site Monster earlier this year, 46 percent of workers said they believed four-day workweeks would catch on over the next 30 years.
Next year in particular, Whelehan says, pro-flexible work sentiments could grow. As larger companies like Amazon enforce return-to-office mandates, he expects that negative consequences of those pushes will emerge — in retention, burnout, or even climate change concerns, he says — bringing flexibility back to the forefront of conversations about the future of work.
“For no rational reason can I see how the older models of work will win out in 2025,” Whelehan says.
6. AI will make more inroads.
This was the year that AI at work got “real,” according to a May report from Microsoft and LinkedIn, which found that the use of generative AI among global knowledge workers had nearly doubled in the prior six months. About three-quarters of those workers now use it, citing benefits like time saved and increased creativity.
Younger workers and leaders are particularly eager to bring these tools into work, according to multiple reports this year. In fact, beyond boosting their efficiency, younger managers (and wannabe managers) believe AI can help them become better leaders, enhancing their “communication to improve problem solving and facilitate better relationships,” according to a report from Google Workspace.
And AI is poised to play a “big role in 2025,” McCaskill says. “I think that we’re gonna see more and more companies that are hiring for…artificial intelligence understanding and solutions.” At LinkedIn, for instance, participation in AI courses on its learning platform has increased “fivefold year-over-year,” he says.
Still, companies will be thinking about what parts of AI to implement, as well as the “change management structure” associated with those changes, McCaskill adds, setting the stage for a “pace of change” in the workplace that’s “only going to get faster and deeper” next year.
In the face of this rapid change, it’s important that companies be careful about how they approach incorporating AI, says Jessica Burkland, an assistant professor of practice in organizational behavior at Babson College. She recommends making sure that managers as well as employees truly understand the technology. If they don’t, their teams could be “utilizing technology in a way that’s disrupting workflows as opposed to augmenting the workflows,” she says.
AI isn’t the only technology making progress in workplaces. Virtual reality, for instance, has made headway in corporate training programs this year, says Jeremy Bailenson, founding director of Stanford University’s Virtual Human Interaction Lab. According to Bailenson, VR has demonstrated strong capabilities to simulate “really intense and special situations that give you a teachable moment,” like an active shooter drill.
As technologies like AI and VR continue to evolve and permeate workplaces next year, how that’s managed will set companies apart, Burkland says: “These technologies are only as good as their implementation.”
BY SARAH LYNCH
Friday, January 3, 2025
5 Steps That OpenAI Thinks Will Lead to Artificial Intelligence Running a Company
In July 2024, Bloomberg reported that OpenAI had defined five distinct stages of innovation in AI, from rudimentary chatbots to advanced systems capable of doing the work of an entire organization. These stages could inform OpenAI’s future plans as it works toward its ultimate goal of building artificial general intelligence, an AI smart and capable enough to perform all of the same work as a human.
According to the Bloomberg report, OpenAI’s leaders shared the following five stages internally to employees in early July during an all-hands meeting:
Stage 1: “Chatbots, AI with conversational language”
Stage 2: “Reasoners, human-level problem solving”
Stage 3: “Agents, systems that can take actions”
Stage 4: “Innovators, AI that can aid in invention”
Stage 5: “Organizations, AI that can do the work of an organization”
On July 23, the company posted briefly about the topic on X: “We are developing levels to help us and stakeholders categorize and track AI progress. This is a work in progress and we’ll share more soon.”
Olivier Toubia, the Glaubinger Professor of Business at Columbia Business School, believes the five steps more closely resemble a plan to make human workers obsolete than a roadmap to artificial general intelligence. With the exception of reasoning, he says, all of the outlined stages are more focused on business uses than they are on the actual science.
Toubia broke down what entrepreneurs need to know about OpenAI’s five stages:
Stage 1: Chatbots
Bloomberg reported that OpenAI told employees that the company is still currently on the first stage, dubbed Chatbots. This stage is best exemplified by OpenAI’s own ChatGPT, which shocked the world with its ability to converse in natural language when it was released in late 2022. Many organizations are using chatbots to enhance their internal productivity, Toubia says, while others are using the tech to power outward-facing customer service bots.
While these chatbots may seem superhumanly smart at first glance, they’re smoother talkers than they are operators. Chatbots will often make up and present false information with full confidence, and unless they’ve been set up to retrieve information from a business’s own data, they don’t have much commercial utility. Even Sam Altman has referred to the current iteration of ChatGPT as “incredibly dumb.”
Stage 2: Reasoners
OpenAI told employees that it is close to creating AI models that could be classified in its second stage: Reasoners. According to Bloomberg, Reasoners are systems “that can do basic problem-solving tasks as well as a human with a doctorate-level education who doesn’t have access to any tools.”
Reuters also reported in July that OpenAI was at work on a new “reasoning” AI model, code-named Strawberry, focused on capabilities like planning ahead and working through difficult problems in multiple steps. According to Reuters, leaders in the AI space believe that by improving reasoning, their models will be empowered to handle a wide variety of tasks, “from making major scientific discoveries to planning and building new software applications.”
Stage 3: Agents
OpenAI doesn’t believe that innovation in artificial intelligence has reached the Agent stage, which it refers to as “systems that can take actions on a user’s behalf.” Outside of OpenAI, much has been made of the potential value of digital workers who can operate autonomously, but few companies have wholeheartedly embraced the concept of AI Agents. Lattice, a popular HR software provider, recently announced plans to onboard AI Agents directly into a company’s org chart, but scuttled the idea after online backlash.
“From what I understand,” says Toubia, AI Agents “could replace you for a few days when you go on vacation.” Such an Agent would act as a proxy for vacationing employees, picking up the slack, completing simple tasks, and keeping vacationers updated on what happened while they were away.
“I have a bit of a cynical view on this one,” Toubia says. “People will welcome an Agent that’s going to let them go on vacation more often, but based on the next steps, the goal is not just to replace you when you go on vacation, it’s to replace you altogether.”
Stage 4: Innovators
According to Bloomberg, Innovators refers to “AI that can aid in invention.” In some ways, Toubia says, AI Innovators are already here. They’re helping people generate ideas, write code, and create art. “With a bit of guidance,” he says, “you can get ChatGPT to come up with ideas for a new app or a new digital product, and then create code and promotional materials.” Because of this, Toubia predicts that Innovators, as defined by OpenAI, will mostly come in the form of AI systems specifically developed to help prototype, build, and manufacture physical products.
Stage 5: Organizations
In OpenAI’s proposed final stage of artificial intelligence innovation, AI systems will become advanced and smart enough to do the work of an entire organization. Toubia says this should be a wake-up call for managers, who may have previously considered themselves safe from being replaced by AI, adding that even a company’s founders could be considered expendable if the system finds that they’re standing in the way of true efficiency.
Toubia worries that by classifying Organizations as the final step in its “roadmap to intelligence,” OpenAI may be tipping its hand regarding its ambitions. “This really seems to be a roadmap toward taking over the world,” he says, “replacing complete organizations and making humans obsolete in the process.” Going forward, he says, it may be CEOs who need to justify their paychecks.
BY BEN SHERRY @BENLUCASSHERRY
Wednesday, January 1, 2025
AI: This company is making industrial robots more ‘human’
One of the original stereotypes about robots is that their movements are stiff and abrupt, something that endures in the “robot dance” that first became popular in the 1980s.
Robots have since evolved and now exhibit far more human-like qualities, with movements that have become softer and subtler. However, that has been true mostly for humanoid robots, which are a tiny minority compared to the industrial robots that have helped manufacture our goods — such as cars — for decades.
Around 3 million robots work in factories around the world, with about a third of those in the automotive industry, according to an industry body. Now, a company called Micropsi Industries is looking to make even industrial robots more human-like. “We make a control system that allows industrial robots to do things that without our software they couldn’t do,” says Ronnie Vuine, Micropsi’s founder, “which is essentially having hand-eye coordination and adapting to changing conditions in the environment as they do their work in a factory.”
The company’s first product, called MIRAI, uses artificial intelligence (AI) and cameras to train robots to perform tasks that would be impossible via traditional, pre-programmed movements.
Vuine became interested in AI while a student at Berlin’s Humboldt University in the 2000s. “There was a working group that was interested in how machines learn in the real world when there’s no engineer around to tell them what to do, but they just need to sort out and find out what to do to survive. How would you do that? So that’s been our research interest.”
Vuine says that AI was distinctly unfashionable at the time, but when Google purchased the AI company DeepMind in 2014, it showed the team that AI was going mainstream and gave them the motivation they needed to push forward. Micropsi was founded in the same year.
The company is now developing its products for various brands of manufacturing robots. “By far the most advanced industry when it comes to deploying robots at scale is automotive,” Vuine says. “Cars are the most complex artifact we make at scale as humans. We also make planes, and they’re more complex, but we don’t make as many of them. Cars are just the most advanced automation game we play.”
Demonstrating the technology to CNN, MIRAI allowed a robot arm to pick up a thin computer cable dangling from a person’s hand and plug it into a switch — a delicate task that is considered too hard to manually engineer a robot to do. Vuine says MIRAI can teach the robot how to do this in about an hour, with a human tutor. After that, using cameras and lights to see what it’s doing, the robot can perform the task by itself. “The cable, of course, jiggles about, so you can’t fully know and predict where that’s going to be, but the robot will reliably pick it and then insert it,” he says.
This opens up options for automation to carry out tasks previously handled by humans, which could prove especially useful in producing electric cars. “Automotive is moving to electric. There’s much more cables to be plugged in,” says Vuine. “Of course, it’s terribly important in electronics, where you have ribbon cables (to connect to circuit boards). All of these applications couldn’t be done with robots (previously). You would have to use a human, or you couldn’t do it at all, and would need to redesign your product for manufacturability.”
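To make the teach-by-demonstration idea concrete, here is a minimal, hypothetical sketch of the general technique (often called behavior cloning, or learning from demonstration): record camera frames alongside the actions a human tutor produces, then fit a small neural network that maps frames to actions. This is emphatically not Micropsi’s MIRAI code; the network, data shapes, and training setup below are assumptions made up purely for illustration.

# Illustrative sketch of behavior cloning (learning from demonstration).
# NOT Micropsi's MIRAI pipeline; all shapes and settings are assumptions.
import torch
import torch.nn as nn

class PolicyNet(nn.Module):
    """Maps a camera frame to a small robot action vector (e.g., end-effector deltas)."""
    def __init__(self, action_dim: int = 6):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, action_dim)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        return self.head(self.encoder(frames))

def train_from_demonstration(frames: torch.Tensor, actions: torch.Tensor, epochs: int = 10) -> PolicyNet:
    """Fit the policy to (camera frame, human-demonstrated action) pairs."""
    policy = PolicyNet(action_dim=actions.shape[1])
    optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = loss_fn(policy(frames), actions)  # imitate the human tutor
        loss.backward()
        optimizer.step()
    return policy

if __name__ == "__main__":
    # Stand-in data: 100 recorded 64x64 RGB frames and matching 6-DoF actions.
    demo_frames = torch.rand(100, 3, 64, 64)
    demo_actions = torch.rand(100, 6)
    policy = train_from_demonstration(demo_frames, demo_actions)
    # At run time, the trained policy turns a new camera frame into a command.
    print(policy(torch.rand(1, 3, 64, 64)))

The point of the sketch is only the shape of the workflow: a relatively short, human-guided data-collection phase stands in for hand-coded motion programming, which is why a task like plugging in a dangling cable can be taught in roughly an hour rather than engineered over weeks.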
Having recently moved its headquarters from Berlin to San Francisco, the company is now looking to expand from cars to other products, like power tools and white goods, as well as other fields altogether, like logistics. In the future, the system could power humanoid robots, too. “The software that drives the robot would be very much applicable outside a factory, in a service robot that does your dishes,” Vuine says. “In fact, we sometimes do playful demos that show these capabilities.”
The hurdle to that expansion is not the software, he adds, but robots themselves. “Robots are not made of soft material like humans. They’re made of metal, so it really hurts if they hit you. You need to go very slowly, and you need to put lots of safety around and lo and behold, you’ve created a machine that’s too expensive and too cumbersome to actually live in your home. We just haven’t solved that yet.”
By Jacopo Prisco and Evan John, CNN