Monday, June 30, 2025
Intuit’s CEO Says These AI Tools Will Streamline Your Accounting Department
As temperatures in New York City hit record highs this past Tuesday, an event space in Manhattan offered some much-needed respite from the heat—as well as a glimpse at how AI may soon change the face of small business bookkeeping.
Intuit, the tech giant behind the QuickBooks accounting platform, previewed technology that it’s now making public: a suite of AI agents meant to handle some of the behind-the-scenes accounting and bookkeeping work. Each of the four AI tools on display promised to automate a different aspect of business management on QuickBooks—payments, accounting, finance, and customer acquisition—with video walk-throughs showing the software doing everything from reconciling books to suggesting what late fee a business owner should charge to ranking inbound sales leads as “cold,” “warm,” or “hot.”
Payroll, marketing and project management bots are also on their way, according to the company.
It’s always a good idea to take these pre-screened tech demos with a grain of salt, especially when it comes to AI, a technology that can act in surprising and unpredictable ways. Still, Intuit seems all-in on the machine learning revolution, and as it rolls out these agents (a process that, for American customers, starts July 1), it’s worth understanding the company’s vision for the future of accounting.
Inc. spoke with Intuit CEO Sasan Goodarzi about why his company is embracing AI and how he plans to maintain customer privacy and security in the process.
What are you guys launching, and how does it fit into your vision of the role AI will play in bookkeeping?
The biggest thing we’ve learned from our customers is that they’re using way too many apps, spending way too much money and don’t really know what’s going on in their business. And when you think about a business, whether it’s a solopreneur or a several-hundred-million-dollar business, time is everything: so what we are launching is a virtual team of AI agents and AI-enabled human experts that can, in essence, do all the work for our customers while still leaving them in control. We have 12,000 experts that sit on our data and AI platform and are not only making our AI agents better every day, but are actually working with the AI agents to deliver a customer experience.
Is this the first time you’ve integrated agentic AI into one of your products?
This is years in the making in terms of all of the data investments that we’ve made—because AI is useless without data—and all of the data services we’ve built. So the essence of what we’ve launched has been in the works for years. About a year and a half ago, we launched multiple agentic experiences. But this is the first time we’ve launched a broad-based set of agents that can manage everything from leads to cash.
So if I’m using QuickBooks, what does my interfacing with the AI look like?
In very practical terms, we’ll go into their Gmail with their permission—if that’s the source of what they use to communicate—or SMS, and be able to help them manage which leads are hot versus cold. Then they can click on the hot leads and we’ll help them interact with that customer, suggest what they should communicate. We’ll create an estimate for them, and an invoice. So we feed them the most important actions and choices they should make so that it’s forefront—like, following up on invoices, or ‘It’s time to take out a line of credit.’ But they’re always in control. They always see why we’re recommending something, and if they choose to change it, they can.
We’re in a CPA shortage right now. Do you see this as a workaround for having too few human accountants?
In general, with anything that’s very driven by intensive labor, a lot of it will get automated by AI—but I also believe that it will fuel the reallocation of people’s time to things that are more productive. Seventy-five percent of accountants are retiring, and the inflow of accountants has really dried up, so we see this as an opportunity not only to fuel the success of our accounting partners but to automate a lot of the manual work.
People are obviously dealing with a lot of sensitive financial information and data in this system. What is your approach to data privacy and security?
Trust is everything, and so we declared more than 20 years ago a set of data principles that we’ve adhered to ever since. One is that it’s the customer’s data, not ours. Two, we would never sell the data and we would only use the data for the benefit of our customers, which is why the way we have built our platform—both our data models and our AI models—only our Intuit financial large language models get trained by our customers’ data. The data never leaves our four walls. The data is never used to train LLMs, as an example, that are outside our four walls. We also have governance; we have a lot of human spot-checks in terms of how the models are operating to make sure there’s no bias in the models. We have AI models that inspect AI models as well as human experts that inspect the models.
Accuracy is so key in something like accounting, and a lot of people are still concerned about hallucinations with AI. What can you say about how you ensure accuracy with high-stakes bookkeeping and financial decisions?
Everything comes down to accuracy. That’s why what we’ve launched today is such a big deal, because it’s about financial forecasts, accounting, taxes. We’ve invested heavily in our own models. Anytime we detect an anomaly where we believe it’s not accurate, we automatically bring in a human expert to confirm.
A lot of companies are still figuring out how to integrate AI into their workflows. What are some best practices you would encourage small business owners to adopt when implementing AI?
In order for you to grow, you need to digitize your entire business so that everything from lead to cash is digitized. Then, when we show them what our platform can do and the capabilities, which are all data- and AI-driven, that’s where there’s a light switch, which is: ‘Rather than using all these different apps that don’t talk to each other, I can run my business in one place.’ So for a business that we serve, it’s all about having one platform that automates all their workflows, versus the customer automating their own workflows.
Are there any mistakes you see small businesses make when they’re incorporating AI?
Customers are over-digitized today. Four years ago, when I talked to customers, whether they were businesses or accountants, it was: ‘How do they move from Excel, Google Sheets, shoeboxes with receipts to using a platform to run their business?’ Today, they’re using up to 10 apps to run their business—and the larger they are, the more apps they use. They use one app to manage their pipeline; one app for estimating; one app for invoicing; one app for accounting. All their data is trapped in a bunch of different apps.
BY BRIAN CONTRERAS @_B_CONTRERAS_
Friday, June 27, 2025
I Asked ChatGPT a Simple Question. It Literally May Have Saved My Life
As a general rule, one thing you should definitely not do is start typing random symptoms you might be experiencing into Google Search, or, say, an AI chat box. If you’re concerned that something might be wrong, you should probably just go see your doctor.
I use ChatGPT a dozen or so times a day for research and getting answers to general information questions. I ask it about everything from how to fix a chainsaw to explaining company earnings reports. That’s probably why I did the thing you aren’t supposed to do, even though I know better.
It started a few months ago with something I couldn’t quite put my finger on. I’d get short of breath when doing ordinary things like walking up a flight of stairs, mowing the lawn, or carrying a bag chair to the sideline of a soccer game. In May, I was exhausted after walking with my roller carry-on to the gate of a red-eye flight. All of those are things I’ve done hundreds of times, but recently they seemed to take a lot more effort.
For months, I chalked it up to getting older, needing more exercise, or maybe just a busy season catching up with me. However, the symptoms didn’t go away—they got worse. I began coughing more often. Breathing deeply made me wince. And worst of all, I was tired all the time.
Still, I would have put the symptoms into the category of annoying, not alarming. I have long suffered from seasonal allergies, so the trouble taking deep breaths and coughing was easy to play off as just a bad year for tree pollen. I didn’t think it was an emergency. I wasn’t sick in the usual sense—no fever, no sore throat, no runny nose. Just a slow, steady decline. I’d heard stories about long COVID, maybe that was it? Or maybe it was just stress. I kept putting it off.
Until one night when I randomly did the thing I do dozens of times a day: I asked ChatGPT.
“I’ve been having some strange symptoms over the past few months. What is most likely wrong?”
Then I listed what had been bothering me:
an increased shortness of breath when doing routine physical activity like walking up stairs or mowing the lawn
a feeling of congestion in my lungs, resulting in coughing that sometimes brings up material… sometimes that includes a slight reddish tint
extreme fatigue and increased blood pressure.
The AI didn’t hedge. The very first possibility it gave me wasn’t a cold. It wasn’t stress. It wasn’t even anxiety. The top result:
“The combination of progressive shortness of breath, cough with possible blood (hemoptysis), and fatigue, especially when paired with elevated blood pressure, could point toward a cardiopulmonary issue, with congestive heart failure (CHF) or pulmonary hypertension among the leading concerns.”
Then came the most alarming part. “These symptoms warrant medical evaluation without delay,” it wrote. “Please see a cardiologist or primary care doctor right away — or go to urgent care or an ER if the symptoms are getting worse.”
I froze. Heart failure? That felt dramatic. Like something that happens to much older people, or people with a known heart condition. I’m relatively young. I’m not an athlete, but I don’t smoke, and I drink no more than a few glasses of alcohol a year.
Still, the answer rattled me enough that I called my doctor the next morning. After a quick physical exam, they ordered a battery of tests. First, a CT of my lungs to rule out blood clots, then bloodwork, and finally an echocardiogram.
It’s never a good sign when you’re having an ultrasound of your heart and the sonographer stops, gets up from their chair, and says, “I’ll be right back to finish the rest of these images,” before leaving the room. The only reason that happens is because they saw something very bad, and they’re going to get a doctor.
Sure enough, a moment later, the sonographer came back into the room with a cardiac fellow.
“Do you know why your primary doctor ordered this test?” the fellow asked.
“Well, he was worried about blood clots in my lungs, but we ruled that out with the CT,” I replied. “He ordered this test to check for congestive heart failure.”
“Yeah, you have heart failure,” the doctor said.
Specifically, the echo showed that my ejection fraction, a measure of the percentage of blood your heart pumps out with each beat, was around 25 percent. That’s a little less than half of what’s normal. My BNP—a blood marker that rises when the heart is under stress—was dramatically elevated. My heart was struggling to keep up, and my body was sending all the signals. I just didn’t know how to read them.
What’s scary is that I almost didn’t do anything. I almost didn’t ask the question. I almost didn’t listen. I assumed I was just out of shape and had a bad case of allergies. Without that initial answer pushing me to take it seriously, there’s a good chance I’d still be trying to power through it, getting worse without knowing why.
The truth is, heart failure can be subtle in its early stages. The signs aren’t always dramatic. You might not clutch your chest or collapse. You might just feel a little more tired. A little more winded. A little off. But when your heart is struggling to circulate blood effectively, your entire body feels it—and if you ignore it long enough, the consequences can be irreversible.
Google, by the way, suggested I might have bronchitis or tuberculosis. Neither of those is something you should ignore, but I was pretty sure I didn’t have TB, and bronchitis doesn’t usually last months, and often comes with a fever.
ChatGPT, on the other hand, noticed a pattern. It gave me language to describe what was happening. It raised a red flag. And it nudged me to take the whole thing seriously and get checked out.
That’s not just impressive. In my case, it may have been lifesaving.
It’s easy to treat AI chatbots like a novelty or a parlor trick. But the thing they are really good at is taking complex information and synthesizing it in ways that you wouldn’t be able to do on your own. Sometimes, asking the right question at the right time—no matter who, or what, you ask—can change everything.
Apple’s CEO, Tim Cook, has frequently said that he thinks the company’s greatest contribution could be in health care. I think that may be true. I am amazed by the ability of technology to reveal information about ourselves that we could never know or understand on our own. We are incredibly fortunate to live in a time when we have such ready access to heart monitoring and other data sensors built into our smartwatches.
I share this story to encourage you. If your body is telling you something, listen. If something feels off, don’t wait until it becomes a crisis. Technology is not a substitute for medical care, but it can be a powerful tool for insight, direction, and action—especially when you’re stuck in that limbo between “I’m probably fine” and “I might need help.”
Thankfully, modern medicine is incredible, and this is something doctors are well-equipped to treat. I’ve been given more meds than I ever thought I’d be taking at 45 years old, and we’re working on ruling out a few other possible causes of my heart failure (once the insurance company comes around about what’s actually “medically necessary”—but that’s a different story). This is something I’ll be managing for the rest of my life, but thanks to a simple question I asked ChatGPT, I’m hoping it will be a long one.
I reached out to OpenAI for comment on this story, but the company did not immediately respond to my request.
EXPERT OPINION BY JASON ATEN, TECH COLUMNIST @JASONATEN
Wednesday, June 25, 2025
Your Employees Hate These Tasks at Work. They Say AI Can Help
New research commissioned by AI writing tool Grammarly and conducted by Talker Research found nearly half of the workers who responded hate the repetitive office tasks that make up the daily grind. The 44 percent total is no surprise, and you’ve probably had similar thoughts when you have to fill in a travel budget request form for Steve in Accounts—yet again. But it’s the AI era, and workers are increasingly aware that there are tools that can help wipe out this recurring drudgery—and 62 percent of the survey respondents said there are plenty of tasks they’d like to speed up with AI.
The Grammarly study, which involved 2,000 knowledge workers, defined as people who work with computers in some way in the office, showed that certain tasks they have to do interrupt their momentum, getting in the way of productive work. The New York Post reported that respondents undertake 53 of these tasks a week, on average, and that they cost the typical worker some 3.5 hours of meaningful work.
And while over four in 10 people hate carrying out repetitive tasks, there’s a generational spread in the data: 57 percent of Gen Z workers don’t enjoy mundane tasks, while only 42 percent of Gen X feel the same. Anecdotally, this makes sense—Gen Z is used to living online, dealing with the fast-paced, constantly changing digital realm of social media, memes, and a world that never seems to stand still. That’s part of why many reports say that they feel very differently about the typical office job and workplace norms.
The survey also asked what kind of tasks workers would like to use AI for, and how they’d like the experience to be. Fully 35 percent of people wanted AI to help with the tiresome task of drafting an email, and 34 percent said they’d want an AI to help with repetitive tasks like sorting spreadsheet data. Another 33 percent wanted the AI to draft meeting notes so they wouldn’t have to, and 31 percent wanted AI to carry out workflows automatically.
The speed at which AI tools are evolving is a big factor in respondents’ interest in convenience. The data also showed 49 percent of workers would favor a tool that’s easy to use, and 35 percent said they wanted one that’s easy to prompt. Prompting an AI is a skill that can be developed, and it can greatly impact the output of a generative tool like a chatbot, leading to a tighter, more useful result if you properly refine your questions.
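To make that concrete, here is a minimal sketch of what refining a question can look like against a chatbot API. It uses the OpenAI Python client purely for illustration; the model name and both prompts are assumptions, not anything drawn from Grammarly’s survey.

```python
# Illustrative only: comparing a vague prompt with a refined one.
# Assumes the OpenAI Python client (v1.x) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

vague_prompt = "Write an email about the meeting."
refined_prompt = (
    "Write a three-sentence email to my team confirming that Thursday's "
    "budget review has moved to 2 p.m., and ask them to send agenda items by noon."
)

for prompt in (vague_prompt, refined_prompt):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical choice; any chat model works
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)
    print("---")
```

The refined version typically returns something usable on the first try, which is exactly the kind of time savings respondents said they were after.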
This makes sense given that only 38 percent of the respondents’ companies have a preexisting policy on AI use—many companies are lagging on training staff and openly allowing them to use AI to save time. Over 50 percent of respondents said they wished their company was more open to AI use, and Gen Z led this sentiment, with 67 percent feeling this way, compared with just 45 percent of Gen X.
Why should you care about these statistics? They do, after all, simply back up plenty of reports about the rising popularity of AI, and the fact that younger workers are more keen to adopt the technology to relieve them of humdrum office tasks than older workers are—even though, on the whole, roughly half of all workers are keen to use AI to do mundane jobs. The data even back up recent advice from the likes of entrepreneur Mark Cuban, who pointed out that AI is already great at taking on repetitive tasks.
You should care precisely because the survey shows you how your staff could be using AI to free up their day to tackle more productive work. And because if your company is pro-AI, and you really want to reap the full benefits, it would be best for your business if every age cohort in your workforce were as keen to use AI as every other. Grammarly’s data suggest you should put an official AI use policy in place. Then you should train users of every age on how to make the most of what AI can offer, instead of relying on your digitally savvy younger workers to carry out your AI wishes.
BY KIT EATON @KITEATON
Monday, June 23, 2025
Tensions Are Flaring Between Microsoft and OpenAI
Cracks are reportedly forming in the previously solid relationship between OpenAI and Microsoft as the two companies renegotiate the terms of their partnership.
According to new reporting from The Information and The Wall Street Journal, the two companies have been negotiating for more than eight months regarding issues of ownership, profit-sharing, and exclusivity.
Here’s the context: OpenAI is attempting to convert from a nonprofit into a for-profit public benefit corporation. According to The Information, OpenAI wants Microsoft, by far its largest investor, to have only a 33 percent ownership stake in the company. The same report alleges that OpenAI also wants to alter terms that give Microsoft exclusive rights to resell OpenAI’s API in the cloud, and to prevent Microsoft from getting access to AI code editor Windsurf, which OpenAI is currently in the process of acquiring.
In 2019, years before the debut of ChatGPT, Microsoft invested $1 billion in OpenAI as part of a deal that gave the company co-founded by Bill Gates exclusive rights to resell OpenAI’s API in its Azure cloud computing platform. In total, Microsoft has invested over $13 billion in OpenAI. According to a January 2025 blog post, OpenAI and Microsoft’s contract will expire in 2030. Other cloud computing providers, like Google and Amazon, would likely jump at the chance to resell OpenAI’s popular models.
The year 2030 is also when Microsoft will lose exclusive access to OpenAI’s IP, and according to The Information, Microsoft is looking to extend that period of exclusivity. Microsoft uses OpenAI’s tech to power many of its new and AI-upgraded products, and OpenAI’s models serve as the linchpin for Copilot, Microsoft’s lineup of AI-powered personal and work assistants.
In addition, OpenAI is reportedly looking to change the terms of its revenue-sharing deal with Microsoft. Under the current deal, OpenAI will share 20 percent of its revenue with Microsoft until 2030. Now, OpenAI is hoping to reduce that percentage.
As for Windsurf, a platform that uses AI to assist software engineers, The Information reports that leaders at OpenAI are concerned that their relationship with Microsoft could complicate a planned $3 billion acquisition of the company. That’s because Microsoft also owns GitHub Copilot, a major competitor to Windsurf.
According to the Journal, tensions between Microsoft and OpenAI have deepened to the point that OpenAI executives have discussed going to antitrust regulators with accusations that Microsoft has been engaging in anticompetitive behavior. However, OpenAI and Microsoft told The Information in a joint statement that “we have a long-term, productive partnership that has delivered amazing AI tools for everyone. Talks are ongoing and we are optimistic we will continue to build together for years to come.”
BY BEN SHERRY @BENLUCASSHERRY
Saturday, June 21, 2025
5 Ways Entrepreneurs Are Rethinking SEO Amid the Rise of GenAI
Cody Barbo has a folder on his phone filled with AI assistant apps, and at least once a month, the Trust and Will co-founder and CEO asks ChatGPT, Claude, Gemini, Meta AI, Grok, and every other model the same question: What is the easiest, cheapest, and best place to set up my will online?
The entrepreneur wants to see how his San Diego startup ranks against the rest of the estate planning industry when it comes to the answers that large language models are spitting out to hundreds of millions of users.
“We’re competitive, so we’re like, is it us? Is it the OG, LegalZoom? Or is it a new player?” says Barbo, whose company has landed on the Inc. 5000 for the past two years after posting an average three-year growth rate of 1,127 percent. “Our search volume is getting eaten up by these tools.”
There’s no standard term yet for what Barbo is doing, but founders have started calling this practice “search everywhere optimization,” “generative engine optimization,” and “answer engine optimization.” While the acronyms vary, founders agree that finding the SEO equivalent for AI is the future of marketing, as more consumers take their queries to AI agents instead of search engines.
Even though the space is rapidly evolving in real time, founders are uncovering which strategies are most effective at surfacing company mentions in generative AI responses. Here’s what they’ve learned.
Invest in long-form content
Founders say one approach that has really worked so far is investing in long-form editorial content. That’s why Trust and Will has put so much emphasis on educational content on its website, publishing regular articles by industry experts and conducting a 10,000-person study in January. Brand-tracking software company Tracksuit has taken a similar approach, publishing articles about marketing, case studies, and original research.
“Right now, it seems to be long-form content that works,” says Tracksuit co-founder and CEO Connor Archbold. “White papers and actual research, I think, will become important.”
Become the trusted answer
Length is not the only factor to consider when creating editorial content for AI to crawl and cite. Founders recommend leaning into a niche and becoming the go-to source for industry leaders’ frequently asked questions.
“In the era of AI-powered search, visibility means becoming the trusted answer,” says Cassi Janakos Chavez, co-founder and COO of corporate lactation services company and two-time Inc. 5000 honoree Healthy Horizons. “We focus on maintaining our website’s authority with in-depth resources that show we are the leading expert on workplace lactation.”
Prioritize founder-first storytelling
When it comes to optimizing AI search, Tyler Eide, founder and creative director of the Seattle-based design agency Parker Studio, has been advising his clients to lean into founder-first storytelling.
“There’s only so much you can control… so the best thing to do is to control what you can,” says Eide, whose company has worked with Google, Lululemon, and Zappos. That means having his founder clients ask themselves: “How can I get out into as many places as possible with the story that I want people to have?”
Expand your entire digital footprint
If links are the currency of search rankings, brand mentions are the most important factor for determining visibility in AI responses, says Andy Crestodina. The co-founder and chief marketing officer of Orbit Media Studios, a Chicago-based digital agency focused on web development and website optimization, says, “Have the biggest digital footprint. Make sure your brand is everywhere.”
Go on podcasts, participate in webinars, post on LinkedIn, write for every possible website, issue press releases, and repeat your elevator pitch in any video that may get transcribed. Make sure your company is listed on every review site, trade association website, and conceivable directory. Crestodina calls this basic digital public relations.
“Fill the web with your company name and surround it with relevant industry terms. Your new goal is to be in as much AI training data as possible,” says Crestodina, whose company has landed on the Inc. 5000 three times. “Be all over the place.”
Keep testing
Like Barbo, keep experimenting to see if and when your company name comes up after prompting AI agents with questions.
“Best practices are good hypotheses. Everything I just suggested should be tested,” says Crestodina. “Use data to confirm or reject that hypothesis, and then iterate, rinse, repeat. It takes a lot of humility to be good at this.”
BY ALI DONALDSON @ALICDONALDSON
Wednesday, June 18, 2025
Google’s New AI Feature Turns Search Results Into Podcasts
Auditory learners rejoice: Google has introduced an experimental new feature that turns some search results into AI-generated podcasts.
The feature is called Audio Overview, and it can only be accessed by opting in on Google’s Search Labs page. According to Google, Audio Overview allows users to “listen to a concise conversation generated with AI, providing a preview of information from the top search results in response to your query, with links out to the web to explore more.”
Audio Overview is powered by Gemini, Google’s family of AI models, and functions similarly to NotebookLM, Google’s popular app that enables users to upload documents in order to generate a podcast-style conversation between two AI-powered voices. These artificial voices summarize and discuss the uploaded material, and can be directed to structure their conversation in specific formats, such as an audio FAQ or a study guide. Google advertises NotebookLM as “your personalized AI research assistant.”
With Audio Overview, users don’t need to upload documents to NotebookLM in order to generate a podcast. When you type a question into the Google search bar, a widget may appear in the results asking if you’d like to generate an Audio Overview. According to Google, the Audio Overview feature will activate when Google’s systems “determine it might be useful.” When asked “how are small businesses using AI,” the Audio Overview generated a five-minute podcast in which the two AI voices discussed AI’s ability to help people generate content, analyze market trends, and create targeted advertising campaigns.
Audio Overview also lists the websites it used to source its answer, so people can double-check its accuracy. Early users are encouraged to give feedback on the quality of the Audio Overviews as Google fine-tunes the feature.
For entrepreneurs who do their best learning by listening to audiobooks and podcasts, or who spend a lot of time traveling, Audio Overview could be a useful tool for researching and studying a subject while multitasking. To activate Audio Overview, users will need to navigate to Google’s Search Labs page, where the company keeps its experimental new search features, and turn the Audio Overview experience on.
Google says the feature is currently only available with English-speaking voices and only accessible in the United States.
BY BEN SHERRY @BENLUCASSHERRY
Tuesday, June 17, 2025
New Research Finds That AI Is Creating More Jobs and Higher Pay
Opinions on how artificial intelligence will affect employment differ considerably—often radically. The most recent demonstration of the divide arose last week, when Anthropic CEO Dario Amodei warned that AI apps like the kind his company is developing risk pushing joblessness up to 20 percent. That led serial entrepreneur Mark Cuban to counter with the prediction it’s more likely the tech will enable full employment instead. Now, a new study by consultancy PwC comes down somewhere in between those views, albeit far closer to the optimistic Cuban perspective.
In its 2025 AI Jobs Barometer report, PwC said skeptics’ forecasts that AI will unleash employment doom have failed to materialize in the industries already embracing the tech most. It also noted that the initial consequences of AI adoption included higher job creation, increased pay for those new positions, and reduced inequality, as AI allows people without university degrees to take on many tasks formerly reserved for knowledge workers.
“In contrast to worries that AI could cause sharp reductions in the number of jobs available, this year’s findings show jobs are growing in virtually every type of AI-exposed occupation, including highly automatable ones,” said PwC global chief AI officer Joe Atkinson in the report. “AI is amplifying and democratizing expertise, enabling employees to multiply their impact, and focus on higher-level responsibilities.”
Those and other findings in the study may serve to undermine some of the more dire predictions made by AI critics. To create the study, PwC analyzed 800 million job postings and combed through thousands of financial reports of businesses in a variety of sectors. In addition to finding that the companies that had integrated the tech most deeply reported increased job creation, the consultancy found five other ways AI adoption creates positive effects.
For example, wages rose “twice as quickly in those industries most exposed to AI compared to those least exposed,” it said—even for people working in highly automatable roles. One reason for that increased pay was surging productivity, with the companies that embraced AI the most quadrupling their output compared with slower-adopting businesses.
With AI increasingly automating research, writing, accounting, and a lot of other duties previously handled by knowledge workers, employees without college degrees were able to use AI to add those tasks to the skills they were initially hired for. By broadening the range and deepening the value of their work for employers, those people benefited from the average 56 percent wage premium that businesses have accorded people with AI capabilities, PwC said.
In other words, rather than pushing employees out of jobs by automating them, AI has thus far permitted workers to enhance their value to businesses by taking on a wider range of tasks, boosting their productivity, and increasing their pay as they did. But to make that shift, the PwC report said, people—and their employers—had to be both willing and quick to adjust to the demands of the quickly developing tech.
“AI’s rapid advance is not just reshaping industries, but fundamentally altering the workforce and the skills required,” said PwC global workforce leader Pete Brown. “This is not a situation that employers can easily buy their way out of. Even if they can pay the premium required to attract talent with AI skills, those skills can quickly become out of date without investment in the systems to help the workforce learn.”
Of course, skeptics may argue that in examining the initial consequences of a fairly recent and still-developing technology, PwC may have merely captured a positive early phase that could be followed by mass job cuts as companies master AI and automate as many roles as they can. But PwC global chief commercial officer Carol Stubbings argues that history suggests otherwise.
“We know that every time we have an industrial revolution, there are more jobs created than lost,” Stubbings said, with the caveat that employees will need to continually revolutionize their skills to harness the powers of AI. “So the challenge, we believe, is not that there won’t be jobs. It’s that workers need to be prepared to take them.”
BY BRUCE CRUMLEY @BRUCEC_INC
Friday, June 13, 2025
At WWDC, Apple Has to Address Its Two Biggest Pain Points: AI and Developer Relations
Apple’s annual Worldwide Developers Conference (WWDC) has always been more than just a showcase of software updates—it’s a statement of intent. It reveals to the developer community—as well as to the world at large—what Apple thinks is most important about each of its platforms.
But as WWDC 2025 approaches, the stakes feel dramatically higher than usual. With tech giants like Google, Microsoft, and OpenAI sprinting ahead in artificial intelligence, the pressure is on Apple not just to catch up, but to prove it still belongs in the conversation.
What we can expect
This year’s WWDC is shaping up less like a grand unveiling and more like a make-or-break update. Apple already introduced Apple Intelligence at last year’s WWDC, promising privacy-first generative features and a smarter Siri. But in the year since, very little has materialized. The features arrived late, rolled out slowly, and failed to generate much excitement. Many of them—notification summaries, writing tools, and image playgrounds—have been problematic and remain a disappointment compared with the competition.
There are suggestions that we may see updates that make Apple Intelligence slightly more accessible to third-party developers—such as new APIs for summarization, task automation, or suggested replies. And Apple could announce that it’s bringing Google Gemini on board as an option for users in the same way it announced ChatGPT integration last year.
But none of this is expected to move the needle dramatically. What Apple is likely to do is what it always does: emphasize its privacy advantages, show off beautifully controlled demos, and wrap incremental upgrades in the language of elegant design and trustworthiness.
What we won’t see
What’s unlikely to show up is exactly what Apple arguably needs most: a clear leap forward in generative AI. Despite launching Apple Intelligence last year, Apple still doesn’t have a competitive large language model of its own, and there’s no indication it plans to introduce one. Most of the AI work is still being powered by OpenAI behind the scenes, with Apple acting more as a front-end for someone else’s technology than a platform leader.
Also missing: any meaningful upgrade to Siri. For all the talk of AI-enhanced assistants, Siri remains inconsistent, brittle, and far behind the real-time reasoning and multimodal capabilities demonstrated by OpenAI’s GPT-4o or Google’s Gemini Live. There’s no expectation Apple will unveil a Siri that can navigate your device, interact fluidly with apps, or hold a context-rich, back-and-forth conversation.
What’s also off the table is any kind of public roadmap toward Apple building its own foundational models. Unlike Meta, Google, or even smaller players like Anthropic, Apple has remained silent on training its own GPT-class LLMs or competing in the model layer. There’s also no public cloud-based developer model in sight, leaving Apple notably absent from the infrastructure layer of AI.
So while Apple may refresh its AI pitch, expand access to existing features, and try to recapture some excitement, it will still be operating from behind—repackaging existing partnerships and product polish as innovation. The lack of a bold leap forward could once again reinforce the perception that Apple is falling further behind in a race that’s rapidly redefining the future of computing.
Absence of a strategy
The absence of a clear and ambitious AI strategy would be more than a missed opportunity—it could be a strategic blunder. In the past year, Microsoft embedded AI into Windows, Office, and Azure. Google placed AI front and center in Android and Search. OpenAI launched GPT-4o, a multimodal assistant capable of real-time voice conversation, coding, and document analysis. Nvidia became the world’s most valuable chipmaker, and Meta open-sourced increasingly competitive models.
Apple, meanwhile, has remained conspicuously quiet.
That silence, once mistaken for secrecy or caution, now risks being interpreted as stagnation. If Apple shows up at WWDC and doesn’t address its two biggest pain points—AI and developer relations—it will come across as out of step with the rest of the tech industry, and out of touch with one of its most important audiences.
Worse, Apple’s hesitation could threaten its control over the user experience. If the most advanced AI models live inside ChatGPT, Gemini, or Copilot, then users will increasingly rely on those tools across their devices—even on iPhones. The irony? Apple, the company that revolutionized how we interact with devices, could become a middleman in someone else’s AI ecosystem.
The moment of truth
To be fair, Apple’s slow approach is not without logic. It prizes user privacy, and generative AI presents significant privacy risks. Running models on-device rather than in the cloud is a key part of Apple’s identity, and that kind of development takes time.
Still, the market is shifting, and consumer expectations are changing. AI isn’t just a backend feature anymore—it’s becoming the front end of computing. If Apple doesn’t show a compelling AI vision at WWDC, it could find itself boxed out of that future, even if its devices remain dominant in the present.
This WWDC won’t just be a software update; it’s a referendum on whether Apple can still lead in defining the next era of computing. The company that once made “insanely great” products must now convince the world it can make insanely smart ones, too.
Tim Cook and Apple’s executive team are walking a tightrope. If they were to deliver a truly compelling AI story—especially one that respects user privacy and works seamlessly across devices—they could reset the narrative and take a leadership role on their own terms. All signs, however, point to Apple just pressing through without acknowledging what everyone else can already see: the company is far behind and has nothing to show for its efforts to catch up.
EXPERT OPINION BY JASON ATEN, TECH COLUMNIST @JASONATEN
Wednesday, June 11, 2025
Meta Eyes $10 Billion AI Investment
Meta Platforms is reportedly considering a more than $10 billion investment in artificial intelligence startup Scale AI.
The deal is still in talks and could potentially change, Bloomberg News reported on Sunday, citing “people familiar with the matter.”
Scale AI declined to comment and Meta did not return Reuters’ request for comment, the outlet reported.
Scale AI is a data labeling startup backed by Nvidia, Amazon, and Meta. It also helps researchers exchange AI-related information, with users in more than 9,000 municipalities. It was founded in 2016 and was last valued at nearly $14 billion.
Monday, June 9, 2025
Mark Cuban Just Made a Bold Prediction About the Future of AI and Human Interaction
Mark Cuban, billionaire entrepreneur, basketball team owner, pharmaceutical-industry-upsetter, and former regular on the iconic Shark Tank reality TV investment show, knows a thing or two about innovation.
And yesterday, he took to Bluesky to make this bold prediction about AI:
Within the next 3 years, there will be so much AI, in particular AI video, people won’t know if what they see or hear is real. Which will lead to an explosion of f2f engagement, events and jobs.
Those that were in the office will be in the field.
Call it the Milli Vanilli effect
It’s worth taking notice. Cuban’s effectively saying we’ve reached the end of the complex “will AI steal my job?” debate. And the answer is yes, more or less. At the very least, Cuban painted a bold picture about how AI is going to transform society in the next 36 months or so, and that picture includes a dramatic change in where new jobs will be created.
His argument is simple. There is going to be so much AI and it will advance so much, he thinks, that we just won’t be able to resist it. The content that AI systems will generate will also be so convincing, or so useful, that people may more or less give up trying to tell if something is real or not. So society will fall back to relying on genuine face-to-face (f2f, as Cuban abbreviates it) human connections for fun and work—something that AI just can’t replace.
Milli Vanilli, in case you don’t know, was a successful German R&B duo from the late 1980s and early ’90s, selling some 30 million singles internationally. But their fans were shocked when the act’s producer revealed that the two singers, Fab Morvan and Rob Pilatus, were just lip-synching and didn’t actually perform any of the vocals on their hugely popular tracks. Their performance had been so convincing that they won a Grammy Award for Best New Artist in February 1990. (It was subsequently revoked.)
By invoking this musical scandal, Cuban suggests he thinks AI is going to do the same thing—act convincingly like the real thing, while actually being artificial—but pretty much everywhere. This is going to upset many traditional models for how we live our lives, including how we work.
His “those that were in the office will be in the field” line is particularly eye-opening.
This seems to be a prediction that the traditional office job will be more or less dead. That’s one in the eye for the micromanaging company leaders enforcing strict “return to office” rules right now. Instead, Cuban imagines that AI will be so convincing that people won’t rely on phone calls, texts, emails, or even video calls to complete their jobs—and they’ll have to be out and about, traveling, actually meeting clients, customers, contractors, and co-workers in person.
In Cuban’s mind, this transformation will create a whole host of new jobs.
Picture it as a 21st-century equivalent of the arrival of the desktop PC in the workplace—a revolutionary device that abolished the old-fashioned typing pool, and later even many secretarial posts. But it also created a whole new slew of jobs, from programmers to Excel experts to the engineers in company IT departments.
Some commenters replying to Cuban’s post agree. “You’re absolutely right. The digital will become unreliable. We’ll have to get back together in person,” wrote one. Another noted that they already “tell up and coming musicians, performers, actors this all the time. become exceptional—because exceptional, authentic experiences and connections will be more valuable than ever.” But other commenters struck a more skeptical note, pointing out that “AI is scary, but I think if companies start replacing people, riots are going to happen and it won’t happen,” or even suggesting “A bill making any artificial account posing as a human being (bots) illegal is desperately needed, like yesterday.”
Why should you care about this?
For one reason: Cuban’s optimism stands out among much of the much bleaker forecasting about AI’s impact on the world. For example, an MIT professor recently issued a stern warning that we can stop AI from stealing everyone’s job—but only if we act really fast, to prevent AI from steamrolling over everything.
If you can bring some of Cuban’s optimism into your workplace, that might ease workers’ worries about AI. Meanwhile, you should take note of his warning and really plan for some serious re-skilling or up-skilling of your workforce to get them ready for the face-to-face future he predicts. Then you might be better placed to benefit from the AI revolution.
BY KIT EATON @KITEATON
Friday, June 6, 2025
Why This IBM Exec Says AI Adoption Should Be Led by HR
HR is the natural choice to lead company-wide adoption of AI, according to Nickle LaMoreaux, senior vice president and chief human resources officer at IBM, who took to LinkedIn to make her case.
She sat down Monday with LinkedIn chief people officer Teuila Hanson in the social-media platform’s latest episode of Conversations with CHROs, and Inc. got an exclusive first look. The two discussed issues that are keeping HR up at night. LaMoreaux said she believes HR should take the reins on AI adoption because the department is an expert on both skills and culture change.
“AI is about the technology, but it is about a lot more than that. It is about willingness to change how you lead people through the different roles of managers and leaders,” LaMoreaux said.
Although many companies choose to give this responsibility to leaders who deal with new technologies—chief product officers, heads of engineering, line-of-business owners, etc.—LaMoreaux says these professionals are good at adopting tech to complete job-related tasks, but they lack the skills to ensure company-wide adoption.
Hanson points out that when HR is handling benefit enrollment or performance management, they’re always thinking about how these processes will affect employees at different stages of their careers: the applicant, new hire, employees who want to be promoted, managers, and leaders. They need to consider how team dynamics will be affected by these company-wide processes, she said.
“We’re the employee experience function, and so it’s sort of natural for us to be in this space and really think through how the change is going to hold,” Hanson said.
On the subject of change, LaMoreaux said AI created a major culture shift in how IBM measures employee performance. It started with using data to measure skills—ones employees had, new incoming skills, and skills that were becoming irrelevant. Pairing this data with employee business performance initially produced unusual results that frustrated some managers—for example, some top performers were rated as ineligible for promotions because their skills were out of date.
LaMoreaux said it was a “really, really difficult” transition, but managers started “to see first-hand how quickly these jobs were changing,” and that someone “could go from being a top performer one year to an average performer or even a low performer the next year.” But IBM felt it still wasn’t addressing the full picture.
“We started to realize, in this age of AI, even that wasn’t enough, that the world is just moving so quickly that we actually had to start evaluating on this idea of behaviors,” LaMoreaux said.
IBM now measures employees’ capacity for entrepreneurial spirit, curiosity, and being OK with failure while trying new skills; that last measure, LaMoreaux said, is used to gauge resiliency. She added that IBM’s HR team was inspired by how the company’s product and development teams treat feedback as a problem-solving tool, rather than a reflection of employee performance.
“When you get negative feedback, it’s not a bad thing. It means you have this feedback and you can pivot. So we kind of really learned a lot from their culture and ways of working,” LaMoreaux said.
While there’s a lot of public concern over AI’s potential to replace jobs, LaMoreaux says that using it to identify certain behaviors helps guide employees toward their professional goals. She says anyone who has “ambition” and can “pivot with the changes” will be part of creating new jobs that work with AI.
BY KAYLA WEBSTER
Wednesday, June 4, 2025
An AI-Powered Startup Can Now Perform This Important Task Better Than Doctors
A healthtech startup has used OpenAI’s technology to develop a new, customized AI model that it claims can outperform real physicians at generating accurate medical codes and notes.
Ambience Healthcare, a San Francisco-based company that provides healthcare workers with an AI-powered platform, says that its new model is specifically designed to identify ICD-10 codes, which are used to classify diseases and medical conditions for billing and record-keeping purposes. By recording a doctor-patient conversation, the AI model can intuit the conditions being discussed and generate notes with accurate codes.
From corporate giants like Amazon, which recently launched AI tools for medical practitioners, to startups like Suki AI, businesses large and small are creating AI-powered products aimed at helping doctors and clinicians spend less time on administrative work, like writing and filing medical notes. According to a 2024 study published by Google Cloud and The Harris Poll, clinicians spend an average of nearly 28 hours per week on administrative tasks, and 94 percent of them say their administrative workloads prevent them from spending more time helping patients.
In a press release, Ambience chief medical officer William H. Morris said that its platform can be understood “as a scribe that fluently speaks both clinical language and the intricate healthcare billing rulebook from day one.” In addition to helping clinicians finalize their notes faster, he claims, it will also ensure that “revenue cycle teams receive cleaner, more accurate, and audit-ready charts.”
Ambience’s platform also integrates natively with several popular electronic health record providers, such as Epic and Oracle. This means that after clinicians review and sign off on notes made in the platform, they’re immediately added to the patient’s record.
In February 2024, Ambience raised $70 million in a Series B round co-led by OpenAI’s startup fund, and the two firms started collaborating. In partnership with OpenAI’s startup solutions team, Ambience created a dataset of complex clinical cases, all labeled with accurate ICD-10 codes. By fine-tuning an OpenAI model with this high-quality proprietary data, Ambience was able to create a powerful medical coding model.
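For readers curious what that pipeline looks like mechanically, here is a minimal sketch of fine-tuning through OpenAI’s public API. The file name, example format, and base model are illustrative assumptions; Ambience’s actual dataset and training setup are proprietary and not public.

```python
# A minimal fine-tuning sketch using OpenAI's public API (openai v1.x).
# Everything here is illustrative; Ambience's real pipeline is not public.
from openai import OpenAI

client = OpenAI()

# Each line of the JSONL file is one labeled example, e.g. a clinical
# conversation paired with the correct ICD-10 codes as the target output.
training_file = client.files.create(
    file=open("icd10_examples.jsonl", "rb"),  # hypothetical file name
    purpose="fine-tune",
)

# Launch the fine-tuning job on a base model that supports fine-tuning.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-mini-2024-07-18",  # illustrative base-model choice
)
print(job.id, job.status)
```

Once the job completes, the resulting model ID can be called like any other chat model, which is broadly how a labeled proprietary dataset becomes a specialized coding model.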
In a test that pitted Ambience’s new model against 18 board-certified physicians, the company said that “Ambience’s AI demonstrated a 27 percent relative improvement in coding performance compared to the expert physician baseline.” These coding improvements could help medical practitioners ensure that they always get paid for the work they perform, and mitigate the risk of billing errors.
BY BEN SHERRY @BENLUCASSHERRY
Tuesday, June 3, 2025
Here Come the Androids: Millions of Humanoid Robots Could Solve the Factory Labor Crunch
Humanoid robots, a.k.a. androids, could achieve a massive leap in automating many jobs because they fit into the complex work environments we’ve carefully shaped over the years around human bodies and how they work. Add the unarguably cool Star Wars/C-3PO sci-fi angle, and it’s easy to understand why people have tried for years to make these machines a reality. Now a report in industry news site Automotive News shows how wage costs could spur automakers to send more of these robots marching onto factory floors to help fill out their production line capacities—something that’s been difficult due to higher labor costs and worker shortages.
The report cites recent analysis by U.K. consultancy firm IDTechEx predicting that some 1.6 million humanoid robots could be working in the automotive factory sector by 2035. U.S. Bureau of Labor Statistics data show that in the manufacturing sector as a whole in 2024, employers spent $45.29 per hour on average per worker. By contrast, Automotive News says venture capital firm UP.Partners predicts the “labor” cost for humanoid robots would be around $1.29 per hour. That equates to under 3 percent of the cost of human labor—on an hourly basis, at least.
This statistic alone is a compelling reason why carmakers will adopt robots, especially when recent research shows most Americans don’t want to work in factories, even though they approve of the idea of boosting U.S. manufacturing. As the market now stands, factory labor shortages aren’t likely to diminish. But there’s another reason carmakers may be on the front line of adopting humanoid robot workers: They can use their expertise to make and sell robots to others. Automotive News quotes David Kehr, president of humanoid robotics at the Germany-based drivetrain maker Schaeffler, who explained: “We’re a motion technology company, and what do humanoids do? They utilize motion. This is in our DNA.”
This stance echoes many bold predictions made by Tesla CEO Elon Musk, who’s staked the future of his EV company on humanoid robots, rather than cars—and has promised to test Tesla’s Optimus robot on factory floors in very large numbers. Other reports highlighted the presence of Chinese-made humanoid robots at the recent Shanghai Auto Show, with numerous automakers talking about billion-dollar-scale investments, vowing robots will soon actually be put to work in factories.
But robots are expected to mechanically stride into other industries as well, and perhaps sooner than the general public might think.
Industry news site The Robot Report recently noted that Houston-based humanoid robot and AI developer Persona AI had just raised a $27 million funding round to help it develop robots to work in shipyards. The company said its machines would marry the high-precision actions of typical industrial robots with the kind of subtle dexterity that humans possess, making the robots suitable for the rugged and demanding environments of shipbuilding.
It also highlighted worker safety in its plans—essentially moving fragile humans out of the riskiest environments. The company has already signed an agreement with HD Hyundai, the Korea-based manufacturer that holds about 10 percent of the global shipbuilding market. It expects to be ready to sell humanoid robots for shipyard work in about 18 months.
Meanwhile, speaking at the Computex 2025 trade show (which markets itself as the “World’s Largest AI Exhibition”) this week, Nvidia CEO Jensen Huang also painted a bold picture of humans working alongside robots. Describing data centers of the future as “AI factories”—because AI is now infrastructure “just like the internet, just like electricity”—Huang spoke of a future where people work alongside AI agents and robots, and machines fill job gaps left vacant by people.
Why should you care about this?
For one simple reason: Even if your company isn’t directly involved in the manufacturing industry, what’s happening there is a litmus test for almost any workplace in the future. Carmaking is already highly robotic, so the industry is ready to adopt the next generation of robots in humanoid form, and once they’ve demonstrated their value in these scenarios, they’ll be adapted for many other jobs. If you’re building a long-term business plan for your company, you might want to think about how robots could help.
BY KIT EATON @KITEATON
Friday, May 30, 2025
Searching for a Job in AI? Industry Leaders Are Looking for 1 Specific Trait
Demand for professionals to work on AI-related projects is massive. It’s so significant, in fact, that it’s playing a pivotal role in shaping the entire U.S. labor market. That’s according to a recent analysis by enterprise AI company Veritone, which found that 81,298 openings for AI-related jobs were posted in 2024, a 24.5 percent increase over 2023.
That demand isn’t just limited to the engineers who develop AI models. Thousands of companies are hiring salespeople, marketers, and subject matter experts of all kinds to improve their AI offerings, inform the public about them, and sell their new AI products.
If you’re a professional looking to break into the red-hot AI industry, you probably have a few questions: What exactly are managers looking for when hiring for AI-specific projects? How much knowledge should you have about the inner workings of AI?
We asked founders and executives from across the tech world to share their AI hiring secrets, and one word kept surfacing: curiosity.
Nearly everyone we talked to, whether from behemoths like Google or startups like AI legal assistant developer Harvey, said they are actively hiring employees who exhibit an open mind and a willingness to try new things. That’s because, as managers and executives build AI tools, they’re also adopting AI-enhanced workflows, and candidates who have already shown a willingness to experiment with new technologies and concepts are more likely to adapt to the rapid pace of AI development.
Here’s what they had to say.
Will Grannis, CTO, Google Cloud
Google Cloud is responsible for the search giant’s cloud computing business, popular workplace apps, and Gemini, the company’s brand of AI services. In Q4 2024, Google Cloud brought in revenues of nearly $12 billion, a 30-percent increase over Q4 2023.
By the time I’m speaking with a candidate, we already know that they have the fundamentals in place. So at that point I’m trying to uncover their curiosity and passion. Two questions I almost always ask are: 1) Teach me about something you’re currently learning about, at a third-grade level, and 2) If you could wave a magic wand and make any technology instantly appear, what would you conjure and why? By the end of these discussions, I have a pretty good idea if they enjoy and prioritize learning, and I get a bit of a window into where they think technology can have the greatest impact.
When hiring junior AI engineers who may not have large-scale platform and systems backgrounds, I am particularly interested in those who are curious, highly collaborative, and willing to challenge the status quo. This is especially important in a field like AI, where frameworks, tools, methods, and technology are all changing so fast. For more senior engineers, I’m looking for humility, systems understanding that combines customer empathy and platform realities at scale, excitement in coaching and mentoring, and curiosity that drives them to stay hands-on with the latest technologies.
Smita Hashim, Chief Product Officer, Zoom
Zoom is the public video conferencing platform that took the world by storm during the Covid-19 pandemic. It’s now enhancing its services with AI. In Q3 2024, the company reported revenue of $1.17 billion, up 3.6 percent year-over-year.
We’ve found that the most successful hires are those who bring a combination of relevant expertise and a growth mindset. While we expect certain baseline knowledge depending on the role, we invest significantly in ongoing learning and development. We’re particularly interested in candidates who have a strong sense of curiosity and enjoy tracking the fast-moving AI innovations, as well as experience working with various platforms that can help strengthen our interoperability with different systems and services that our customers use.
For engineering roles, we prioritize candidates with strong technical foundations in machine learning and software development, but equally important is their ability to collaborate across teams and understand how AI fits into our broader product strategy. For go-to-market and marketing specialists, we seek individuals who can effectively communicate AI’s value proposition while maintaining a realistic understanding of its capabilities and limitations. We look for candidates who understand AI’s fundamental concepts and implications for business, but we don’t expect everyone to be an AI expert. What’s crucial is their willingness to learn and adapt as the technology evolves.
Ryan Kelly, Chief Communications Officer, Recursion
Recursion is a public biotech company that uses artificial intelligence to advance drug discovery, design, and development. The company generated $26 million in revenue in Q3 2024.
BY BEN SHERRY @BENLUCASSHERRY
Wednesday, May 28, 2025
An Entrepreneur’s Guide to Choosing an AI for Your Business
Picking the right AI platform for your business can be a daunting task. From AI-focused startups like OpenAI and Anthropic to established titans like Google and Microsoft, several companies are offering products that enable whole workforces to take advantage of advanced AI models.
These tech firms say their business-focused platforms can make employees more efficient and decrease time spent on repetitive work, but how can you determine which platform best fits your company’s needs? We asked experts from the worlds of academia, business, and consulting to help figure it out.
Here’s your step-by-step guide to getting started.
First, think about your use case
Arun Chandrasekaran, a Gartner vice president with a specialization in AI, says that before making any decisions regarding an AI solution for your business, you should think deeply about your use case. Consider whether it requires a full SaaS application that can immediately be put to work, or whether it could be handled by a customized AI model, which may take longer to create but could be more valuable.
Chandrasekaran says that if you already have a unique AI use case that could potentially be a differentiating factor for your business, you might want to consider developing your own custom model, a process typically accomplished by using an API. Through this strategy, you can avoid paying pricey subscriptions, but you will need to pay an API fee every time the custom platform is used. You could also use an open-source model like Meta’s Llama, which is technically free to use but requires high-powered GPUs to run locally.
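To make the per-call economics concrete, here is a minimal sketch of the API route using OpenAI’s Python SDK. The model name and prompts are placeholders rather than recommendations, and the same pattern applies to any OpenAI-compatible endpoint.

```python
# Minimal sketch of the API approach using OpenAI's Python SDK.
# The model name is a placeholder; a custom or fine-tuned model would
# have its own ID. Each call is metered by tokens, which is where the
# per-use "API fee" comes from.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": "You are a support agent for Acme Co."},
        {"role": "user", "content": "How do I reset my account password?"},
    ],
)

print(response.choices[0].message.content)
# Token counts drive the per-call cost, so log them to track spend.
print("tokens used:", response.usage.total_tokens)
```

Servers such as vLLM and Ollama expose OpenAI-compatible endpoints, so the same client code can point its base_url at a self-hosted Llama model instead of a paid API.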
Olivier Toubia, a professor of business at Columbia Business School, suggests that entrepreneurs with a clear AI use case consider going for an industry-specific AI platform, like legal AI assistant Harvey or customer service platform Sanas, rather than one created by a major AI lab. These platforms come with tools and models designed to assist specific use cases, making them a worthwhile plug-and-play option.
If you don’t have a laser-focused use case and are just looking for a toolkit your employees can use to experiment and develop new AI workflows, Chandrasekaran says an enterprise AI platform is the way to go. Even then, there are several factors to consider when determining which platform is right for your business.
Consider the data issues
According to Chandrasekaran, understanding a vendor’s data policies should be among your top concerns. First, you need to know where the data used to train the vendor’s AI models was sourced from. A model that’s been indiscriminately trained on data found across the internet is going to be different from a model that’s specifically been trained on United States case law, for example.
You also should confirm that the AI platform of your choice can connect to your major data sources. Popular consumer-level platforms like ChatGPT and Claude let users upload individual files in order to create customized chatbots, and their enterprise offerings enable businesses to directly connect their cloud-based data so employees can leverage them at any time. “If you’re using [AI] in customer service, you may want your AI to be integrated with your Salesforce data,” explains Chandrasekaran, “or if you’re using it for HR functions, you may want to plug it into your Workday system.”
ChatGPT Team, for example, recently introduced a feature called “internal knowledge,” which enables administrators to connect their organizations’ shared Google Drives to the platform and quickly surface or analyze information. And Microsoft 365 Copilot naturally integrates well with organizations that make heavy use of Microsoft apps like SharePoint and Excel.
Just make sure to confirm that your selected platform can actually connect to your data hubs. As of May 2025, only ChatGPT Team users have access to the internal knowledge feature, with enterprise access expected to be added later this summer. Plus, the feature can currently only connect to Google Drive, not other providers like Microsoft SharePoint or Dropbox.
Chandrasekaran says that organizations eager to connect their data sources to AI should check out Glean, a “work AI” platform sporting connections to “hundreds” of data sources, including Google, Microsoft, Slack, Box, and Dropbox.
Be sure to ask about security and access
A key question to ask vendors, according to Falk Gottlob, chief product officer of AI-powered translation platform Smartcat, is whether content processed by the enterprise AI platform will be used to train new models. This is “table stakes,” says Gottlob, and if a vendor won’t commit to not training on your data, it may be a bad partner.
BY BEN SHERRY @BENLUCASSHERRY
Monday, May 26, 2025
With Employee AI Use Rising, So Does the Risk of Business Data Theft
With artificial intelligence (AI) now a widely accessible cutting-edge tool, many businesses are urging their employees to use the free apps to speed and improve their work. But as IT managers who contributed to a recent Reddit thread pointed out, that casual approach creates considerable risks of strategic data loss for companies unless they establish clear and effective rules for how and when workers upload content to those less-than-secure third-party platforms.
Those observations were made in an IT/Managers subreddit thread this month titled, “Copy. Paste. Breach? The Hidden Risks of AI in the Workplace.” In it, participants discussed the thread’s opening question: whether employee use of third-party AI apps “without permission, policies, or oversight” risks leading their companies “sleepwalking into a compliance minefield” of potential data theft. The general answer was affirmative, and then some.
“Too late, you’re already there,” said a contributor called Anthropic_Principles, who then referred to other programs, apps, and even hardware that employees use without companies authorizing them as official business tools. “As with all things Shadow IT related. Shadow IT reflects the unmet IT needs in your organization.”
In explaining that, redditors noted a major irony feeding the problem.
It starts with employees needing AI-powered help to transcribe recordings of meetings, or to boil lengthy documents into summaries or emails. But even though widely used business tech like Microsoft’s Teams and Zoom already contain those capabilities, companies frequently deactivate them to prevent confidential information from being stored on outside servers.
Deprived of those, workers often turn to the same alternative AI bots that bosses urged them to use for less sensitive tasks. Most do so unaware that meeting transcriptions and in-house documents stored on those apps’ servers are nowhere near as safe from hackers as they are in the highly protected systems of corporate partners like Microsoft, whose tools were deactivated on security grounds in the first place.
“This creates a perfect storm: employees need AI-powered summaries to stay productive, but corporate policies often restrict the very tools that could provide them safely,” notes a recent article in unified communications and tech news site UC Today. “So they turn to the path of least resistance—free, public AI tools that offer no data protection guarantees.”
That’s increasingly resulting in expensive and painful self-inflicted damage to businesses.
A recent survey by online risk management and security specialist Mimecast found the rate of data theft, loss, or leaking arising from employees uploading content to third-party platforms has risen by an average of 28 percent per month since 2021. The typical cost to a victimized company is around $15 million per incident, part of the total $10.5 trillion in losses from global cyberattacks forecast this year.
How can businesses escape that circular trap? Contributors to the Reddit thread and UC Today have some suggestions. They’re based on the assumption that employees must and will use AI-driven tools, but that their companies need to ensure they do so safely.
For starters, they say, businesses should approve use of AI applications integrated into programs like Teams or 365 Copilot that they already use for other tasks, and whose security protections are far stronger than those of freestanding online apps. From there, UC Today recommends:
Auditing current AI usage across your company by all levels of employees
Using that to develop clear AI rules that balance security requirements with user productivity objectives
Educating and training employees on the benefits AI can offer them, as well as the risks it represents in various use cases
And finally, monitoring and enforcing those policies with security tools or in-house IT managers to ensure compliance, and to detect any accidental or intentional corner-cutting by staff.
“This is not an IT issue at its heart,” noted subredditor Sad-Contract9994, saying it is more about broader data loss prevention (DLP) efforts. “This is a governance and DLP issue. There should be company policy on when and where company data can be exfiltrated off your systems—to any service at all.”
BY BRUCE CRUMLEY @BRUCEC_INC
Saturday, May 24, 2025
I Tested 5 AI Assistants—and What I Found Was Surprising
Recently, the Washington Post invited me to join a blue-ribbon panel of communication experts for an AI writing experiment. Tech reporter Geoffrey Fowler pitched the idea as an old-fashioned bake-off with a modern twist. He asked us to test five popular AI tools on how well they could write five kinds of difficult work and personal emails.
Why emails?
“It’s one of the first truly useful things AI can do in your life,” says Fowler. “And the skills AI demonstrates in drafting emails also apply to other kinds of writing tasks.”
In total, the panel of judges evaluated 150 emails. While one AI tool was the clear winner, the experiment highlighted the benefits of AI writing and communication assistants—and one big limitation.
Since we were asked to read all the emails blind, we did not know which were written by ChatGPT, Microsoft Copilot, Google Gemini, DeepSeek, or Anthropic’s Claude. Fowler also had us score emails he had written to see if we could distinguish between AI and a human writer.
The best AI writing assistant
The clear winner was Claude.
“On average, Claude’s emails felt more human than the others,” Fowler noted. Another judge, Erica Dhawan, said, “Claude uses precise, respectful language without being overly corporate or impersonal.”
DeepSeek came in second place, followed by Gemini, ChatGPT and, in last place, Copilot. Although Copilot is widely available in Windows, Word, and Outlook, the judges agreed that its emails sounded too much like AI. “Copilot began messages with some variation of the super-generic ‘hope you’re well’ on three of our five tests,” said Fowler.
While Claude won this competition, I later learned that my scores showed a preference for the human-written emails. And that’s because all the AI assistants had one big limitation.
According to Fowler, “Our five judges didn’t always agree on which emails were the best. But they homed in on a core issue you should be aware of while using AI: authenticity. Even if an AI was technically ‘polite’ in its writing, it could still come across as insincere to humans.”
My takeaway:
AI tools are great for outlines, flow, and clarity of argument. But they’re often stilted, formal, and robotic, and they lack personalization, emotion, and empathy.
AI assistants have trouble with creativity because the architecture on which they’re based (large language models) generates content with “syntactic coherence,” an academic term for stringing sentences together that flow naturally and follow grammar rules. But as you know, rules are meant to be broken.
Steve Jobs broke the rules
For example, in 1997 Apple’s Steve Jobs launched one of the most iconic campaigns in marketing history. The company was close to bankruptcy, and needed something to attract attention and stand out.
Apple’s now-famous television ad—nicknamed “the crazy ones”—featured black-and-white portraits of rebels and visionaries such as Bob Dylan, John Lennon, Martin Luther King Jr., and others. The marketing campaign is credited with redefining Apple’s brand identity and helping to save the company from financial ruin.
If the writing had been turned over to AI, it wouldn’t have happened.
How do I know? Claude told me.
“If asked to create a slogan like Apple’s famous campaign in my default mode, I would almost certainly have written ‘Think Differently’ rather than ‘Think Different,'” Claude acknowledges. “My training emphasizes grammatical correctness. The proper adverbial form to modify the verb ‘think’ would be ‘differently,’ and I’d be inclined to follow this established rule.”
Claude says it can analyze why the campaign worked “after the fact … but generating that kind of deliberate grammatical rebellion doesn’t come naturally to me.”
AI doesn’t have a rebellious streak because—breaking news—it’s not human. Some bots might perform better than others at simulating human qualities in their writing samples, but they don’t have the one thing you have: a unique voice built on years of personal experiences and creative insights.
AI is a helper, an assistant. Use it to brainstorm ideas, clarify thoughts, summarize documents, and gather and organize information. Those are all important and time-consuming tasks. But while AI can enhance communication, it shouldn’t replace the communicator.
As more people rely on AI assistants to write emails, resumes, memos, and presentations, there’s a real danger that many people will sound alike—corporate recruiters are already spotting this trend.
But you’re not like everyone else. You have a unique and powerful story to share. Don’t let artificial voices silence your authentic one.
EXPERT OPINION BY CARMINE GALLO, HARVARD INSTRUCTOR, KEYNOTE SPEAKER, AUTHOR, ‘THE BEZOS BLUEPRINT’ @CARMINEGALLO
Wednesday, May 21, 2025
How This AI-Powered Company Is Tapping OpenAI’s New Image Generator
OpenAI’s wildly popular image-generation AI model has only been open to commercial use for a few weeks, but businesses are already saying the new model is a cut above the competition.
In late April, OpenAI made a new model called gpt-image-1 available through the company’s API. The model is the core piece of technology behind ChatGPT’s updated image-generation capabilities (the ones famous for all those Studio Ghibli-style memes). Now, any business can give its applications the same image-generation abilities as ChatGPT.
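For developers, the integration itself is a small amount of code. Here is a minimal sketch using OpenAI’s Python SDK; the prompt is invented, and exact parameter support can vary by SDK version.

```python
# Minimal sketch: generating an image with gpt-image-1 via OpenAI's
# Python SDK. The prompt is illustrative, not from Arcade.
import base64

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

result = client.images.generate(
    model="gpt-image-1",
    prompt="A thick gold ring with a rough, hand-worked texture, studio photo",
    size="1024x1024",
)

# gpt-image-1 returns the image as base64-encoded bytes.
image_bytes = base64.b64decode(result.data[0].b64_json)
with open("ring.png", "wb") as f:
    f.write(image_bytes)
```

A companion images.edit endpoint accepts an existing image plus a text instruction, which is the kind of call a natural-language editing feature like the one described below would rely on.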
That service has been a game changer for Mariam Naficy, founder of AI-powered jewelry and homegoods marketplace Arcade.
Arcade is a first-of-its-kind online marketplace in which people can use AI to generate images of products and then commission independent artisans to turn those images into real objects. Customers generate an image of jewelry, rugs, or pillows, and then Arcade uses its own AI systems to analyze the image, determine the materials needed, select an artisan or maker to produce the piece, and set a price. According to Naficy, over 800,000 products have been designed on Arcade since the platform launched in September 2024.
Naficy, a multi-time Inc. Female Founders honoree, says that her team wanted to add a feature that would allow users to make small edits and adjustments to product images using natural language, and was deep in discussions with a major AI provider to use its image-generation model. “We were testing it out, and we were going to roll it out,” says Naficy, but just as the team finished designing the user experience for the new feature, OpenAI dropped its new image-generation model on April 23.
Naficy says that internal testing made it clear that OpenAI’s new image-generation model was “vastly better” than the rival company’s, and because Arcade already had its UX designed, it was easy for the team to slot in the new OpenAI model. The difference between the two models was “pretty stark in terms of success rates,” says Naficy. The company still uses an AI model produced by one of OpenAI’s competitors to generate the initial images, but switches to gpt-image-1 for the edits.
That quality comes with a cost, though, as gpt-image-1 is more expensive than the rival company’s model. But cost isn’t a major factor to Naficy, who says that Arcade is well-funded enough that using a more expensive model is a worthwhile trade-off if customers are having a great experience while using Arcade. In March, the company raised $25 million in a series A round, bringing its total funding to $42 million.
Here’s how it works in practice: I prompted Arcade to generate an image of “a thick gold wedding band featuring rough handworking,” and was presented with several variants, including one originally priced at $212. I then asked the new OpenAI-powered editor to add a small ruby to the ring, which raised the price to $232.
“I think the model is quite a big unlock for both creators and consumers on our site,” says Naficy, who believes that by enabling users to more easily adjust their creations, more customers will buy AI-generated products, boosting business for Arcade and the artisans on the platform.
Arcade’s business model and use of AI are unique. Other platforms allow users to generate product designs, like AI fashion design startup The New Black, or generate prints to go on clothing, like lingerie brand Adore Me’s AM By You feature; but none allow consumers to generate an AI image and then commission a human artisan to turn that image into a physical product. Naficy previously told Inc. that the most difficult thing about building a business on the cutting edge is that “there’s no playbook to follow,” so she’s using best practices to write her own playbook. For instance, Arcade blocks users from generating products featuring copyrighted materials and IP.
Naficy says that if the new OpenAI model can encourage users to create more product designs, it will be a success for Arcade. “It’s almost like we’re stocking our fish pond with great products,” says Naficy, “the more prompts and edits you make on our site, the more products we create that other people can buy.”
BY BEN SHERRY @BENLUCASSHERRY
Monday, May 19, 2025
An Entrepreneur’s Guide to Every ChatGPT AI Model
ChatGPT is more than just a chatbot—it’s a powerful platform that can be used to make some aspects of your work much easier. But it isn’t one-size-fits-all. The platform lets users choose between several distinct AI models, all with their own strengths, weaknesses, and use cases.
Think of ChatGPT as a toolbox with multiple screwdrivers inside; just like different screws require different screwdrivers, different tasks and challenges require different AI models. But with so many models, nearly all of which have technical names like GPT-4o, o1 pro, and o4-mini-high, it isn’t always clear which models you should use, and when.
Plus, ChatGPT’s lineup of models changes depending on your account status. Some are available to Free users of the app, some to Plus users who pay $20 a month, and some only for high-fee Pro and Enterprise accounts.
Whether you’re using ChatGPT through the Free plan, the Plus plan, the $200 per month Pro plan, the SMB-focused Team plan, or the Enterprise plan to help your business get things done, we’ve created a living guide to ChatGPT’s current lineup of AI models. We’ll run through strengths, weaknesses, and example use cases for each of the models, and give you some points on how to use them in your everyday life. (One note: We’ll only be discussing models available on ChatGPT, not those exclusive to OpenAI’s API, such as GPT 4.1.)
Ready? Let’s dive in.
Available for free
GPT-4o and GPT-4.1 mini
GPT-4o is ChatGPT’s default, flagship model. It’s a jack-of-all-trades but a master of none, and should be used for everyday tasks like brainstorming, summarizing documents, or just throwing ideas around.
GPT-4o was OpenAI’s first multimodal model, meaning it can process and analyze text, audio, images, and video. Imagine you had to miss a meeting at work in which a new business process you developed was introduced. You could upload a recording of the meeting to GPT-4o and ask it to analyze the participants’ reactions to the new process.
Other examples for how to use GPT-4o, according to OpenAI, include drafting follow-up emails and proofreading reports. The model doesn’t top the charts on any major benchmarks, but is fast and reliable, making it a great daily companion. There’s also a smaller, even faster version of the model called GPT-4o-mini, which OpenAI suggests using to handle small tasks that need to be repeated on a large scale.
GPT-4o is available to every ChatGPT user, and is the only model available to users on the free tier. Free users can send GPT-4o a limited number of messages depending on the current traffic on ChatGPT. When free users exceed the allowed number of messages, they’ll be temporarily switched to GPT-4.1 mini, a smaller, faster version of GPT-4.1, a model otherwise exclusive to paid subscribers.
Recently, OpenAI had to roll back an update to GPT-4o that made the model too “sycophantic,” emphasizing the point that these models are constantly changing and evolving.
Available to ChatGPT Plus and ChatGPT Team members
GPT-4.5 (research preview)
Released as a beta-like research preview in February 2025, GPT-4.5 has gained a reputation for being OpenAI’s most “emotionally intelligent” AI model, and was trained on significantly more data than its predecessors. OpenAI says GPT-4.5 is particularly skilled at writing, and should be used for creative projects and to help improve communications between people. Plus, because of its increased training data, GPT-4.5 tends to hallucinate less than other models.
Because of the model’s enhanced ability to pick up on emotional tone, OpenAI has suggested that people use GPT-4.5 like a virtual therapist. At work, you could ask the model to help you figure out how to broach potentially fraught topics like pay and working conditions, or use it to help edit your end-of-week memo to employees.
In your personal life, GPT-4.5 can serve as a neutral third party for you to talk things out with. If you’re just looking for a human-like personality to chat with, this is the model for you.
GPT-4.1
Initially released exclusively for business use through OpenAI’s API, GPT-4.1 is an AI model specifically designed for software developers. It is highly capable at coding and at following complicated instructions, making it a useful companion for professionals who need to maintain codebases or develop applications.
GPT-4.1 is also notable for its large context window. In AI, context window size refers to the number of tokens, or units of information, that a model can process at once. A larger context window means the model can process more data, enabling users to analyze larger datasets.
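Tokens are easy to inspect for yourself with OpenAI’s open-source tiktoken library. A quick sketch, assuming the o200k_base encoding used by GPT-4o-class models (check tiktoken’s model mapping for the exact model you care about):

```python
# Counting tokens with OpenAI's tiktoken library, to see how much of a
# model's context window a piece of text would consume.
import tiktoken

# o200k_base is the encoding used by GPT-4o-class models; whether newer
# models share it is an assumption worth verifying for your model.
enc = tiktoken.get_encoding("o200k_base")

text = "A larger context window lets a model take in more data at once."
tokens = enc.encode(text)

print(f"{len(tokens)} tokens for {len(text)} characters")
# Rough rule of thumb for English text: 3-4 characters per token.
```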
Due to popular demand from software developers, OpenAI added GPT-4.1 to ChatGPT in mid-May 2025.
o3
OpenAI’s o3 model is the company’s top-of-the-line “reasoning model,” meaning that instead of immediately attempting to answer queries, it will take time to reason out the best way to fulfill your request, and will solve the problem over multiple steps. o3 has a kind of inner monologue that it uses to talk with itself about what a user is requesting, and options for accomplishing the task. Then, it can search the internet for that data, analyze the information, and determine whether more steps are required.
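OpenAI hasn’t published o3’s internals, but the behavior described above resembles the familiar reason-search-act loop used by agent frameworks. A purely hypothetical sketch, with stub functions standing in for the model’s hidden machinery:

```python
# Hypothetical sketch of a reasoning loop; o3's actual internals are
# proprietary. The helper functions are illustrative stubs.
from dataclasses import dataclass

@dataclass
class Thought:
    needs_search: bool
    query: str

def think(request: str, notes: list) -> Thought:
    # Stub: a real reasoning model decides, in its hidden chain of
    # thought, whether it has gathered enough information yet.
    return Thought(needs_search=len(notes) < 2, query=request)

def web_search(query: str) -> str:
    return f"(search results for: {query})"  # stub tool call

def compose_answer(request: str, notes: list) -> str:
    return f"Answer to {request!r}, based on {len(notes)} search steps."

def answer_with_reasoning(request: str, max_steps: int = 5) -> str:
    notes = []  # accumulated findings across reasoning steps
    for _ in range(max_steps):
        thought = think(request, notes)
        if not thought.needs_search:
            break
        notes.append(web_search(thought.query))
    return compose_answer(request, notes)

print(answer_with_reasoning("Find the manual for this dishwasher model"))
```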
OpenAI says that you should use o3 to solve tasks that require strategic planning and detailed analysis, like developing a market environment report or converting sales data from an Excel spreadsheet into a forecast graphic. According to AI model evaluation company Vals, o3 is the best-performing AI model for handling tax-related questions and analysis, reaching 79 percent accuracy in the benchmark.
Vals also found that o3 is the top model for replicating the work of an entry-level financial analyst, but even then answered only 48 percent of questions accurately. Outside of work, o3 can act like a powerful web search assistant that will continue searching even when you’re not using the app. Imagine you were attempting to fix an appliance but can’t find the instruction manual—you could take a photo of the machine and have o3 find the manual online for you.
o4-mini and o4-mini-high
o4-mini is the latest edition of OpenAI’s line of smaller, faster reasoning models. Mini models haven’t been trained on the same amount of data as their full-size counterparts, which means they aren’t as useful at answering questions about the world, pop culture, or history, but they make up for that with enhanced capabilities in coding, science, and math.
OpenAI also offers a model called o4-mini-high, which basically just means that the model has been trained to put in a higher level of effort, markedly improving performance. OpenAI says that o4-mini and o4-mini-high should be used for STEM-related queries and programming assistance. If you need to analyze a very large data set in a short amount of time or need to quickly review a code repository, this might be your solution.
Available for ChatGPT Pro and ChatGPT Enterprise members
o1 pro
Only those who have enterprise accounts or subscribe to OpenAI’s $200 per month Pro tier can use o1 pro, a version of OpenAI’s original reasoning model that uses additional compute to think harder.
Like the name suggests, this model is really only for people looking to do highly technical work with ChatGPT, like analyzing and rewriting large amounts of code or creating documentation that needs to contain very specific details and information. For instance, OpenAI says that the model could be used to draft a detailed risk-analysis memo for an EU data-privacy rollout. Be sparing with your use of o1 pro, though, because you only get five requests per month.
Retired models
As OpenAI releases more capable models, it occasionally sunsets older ones that have been replaced by better and cheaper alternatives.
GPT-4
At the end of April 2025, the company announced that it would fully remove GPT-4 from ChatGPT. GPT-4 came out in early 2023 and marked a major step up for AI development, but has since been eclipsed by GPT-4o, which offers the same experience as its predecessor but is faster, cheaper, and more capable.
GPT-4o mini
In mid-May 2025, OpenAI announced that ChatGPT would replace GPT-4o-mini, a smaller, faster version of GPT-4o, with new model GPT-4.1 mini. Previously, free ChatGPT users would be forced to use GPT-4o mini after sending a limited number of messages to GPT-4o, but will now use GPT-4.1 mini.
The future
Finally, one model to look forward to: OpenAI says that it will launch GPT-5, the company’s next flagship model, this summer. According to Sam Altman, GPT-5 will unite OpenAI’s “GPT” and “o” series of models into a single line.
“How about we fix our model naming by this summer,” Altman wrote on X in April, “and everyone gets a few more months to make fun of us (which we very much deserve) until then?”
BY BEN SHERRY @BENLUCASSHERRY
Wednesday, May 14, 2025
Employees Fear the Stigma of Using AI at Work, According to a Study
Business owners who want their employees to integrate artificial intelligence (AI) applications to improve how they do their jobs may face a bigger challenge than their staff learning how to use those tools effectively. A new study indicates many workers hesitate to adopt the tech out of concern that colleagues will disdain them as lazy if they do—and that those negative perceptions are very much a part of today’s workplace.
Researchers in the Management and Organizations area at Duke University’s Fuqua School of Business determined that many employees attach a considerable degree of stigma to colleagues using AI in the workplace. They found that biases against the apps not only dissuade many workers from using them, for fear of being looked down upon by AI-wary colleagues, but also that some people who adopted the tech at their jobs were labeled negatively, which affected their workplace status.
“Individuals who use AI tools face negative judgments about their competence and motivation from others,” the study’s authors wrote. “These judgments manifest as both anticipated and actual social penalties, creating a paradox where productivity-enhancing AI tools can simultaneously improve performance and damage one’s professional reputation.”
The researchers came up with that conclusion after putting 4,400 participants through four different test scenarios.
The first scenario assigned a group of employees an AI tech tool, then asked them to anticipate how colleagues would rate them in terms of laziness, replaceability, competence, and diligence for using the app. They largely expected negative evaluations from coworkers on all four criteria: higher scores than a neutral baseline on laziness and replaceability, and lower ones on competence and diligence.
To compare those answers, a second scenario asked another cohort of employees for their perceptions of first-group members who had used AI, ranking them in terms of laziness, diligence, competence, independence, self-assurance, ambition, and dominance. Their responses were generally as unflattering as members of the initial group had feared, matching those fears with equally dismal evaluations.
Within those two balancing outcomes, researchers found two other important consequences of those biases.
First off, people in the group adopting the AI tool reported the expected stigma undermined their willingness to use the tech, and made them less inclined to report having done so to colleagues or managers. Secondly, the negative attitudes attached to people who’d used AI were so strong that they remained constant despite the age, gender, race, or job title of colleagues who’d adopted it.
“We found that none of these target demographic attributes influences the effect of receiving AI help on perceptions of laziness, diligence, competence, independence, or self-assuredness,” the study’s authors wrote. “This suggests that the social stigmatization of AI use… appears to be a general one.”
The reasons for that negativity aren’t new, the researchers noted.
Claims that innovations meant to make work easier or more efficient—like calculators or e-books—are a cop-out to avoid doing the more demanding underlying tasks have been around for decades.
And questions about whether reliance on those tools will undermine or even destroy users’ abilities to continue performing the original tasks go way, way back. The authors cited the ancient Greek philosopher Plato asking whether people who embraced “a new invention for learning (writing) would ever develop true wisdom.”
But the stigma now attached to AI adopters at work is important for businesses to address.
One experiment found that managers similarly base decisions—including many hiring choices—on their own biases. Many believed that employees or candidates who used AI were somehow lazy, or were trying to gain advantages over their colleagues. In other words, not only are workers harnessing the power of AI being looked down on, but in some cases they get marginalized for their efforts.
A second finding was that the anti-AI prejudices reversed themselves when use of the tech was described as clearly beneficial and productive for complex projects—or even as a logical and smart way to speed and improve daily tasks. Once that utility was explained, resistance vanished.
The contrast suggests business owners who want employees to adopt AI to speed up and improve aspects of their jobs need to create clear, explicit general guidelines for how and when the tech should be used. They’d also be wise to communicate to workers who embrace AI—and those prone to snubbing it—why using those tools is an active effort to enhance results, not a dodge to shirk the work at hand.
But as the study’s results suggest, that shift will require some workplace attitude adjustment.
“This apparent tension between AI’s documented benefits and people’s reluctance to use it raises a critical question,” the authors note. “(A)re people who use AI actually evaluated less favorably than people who receive other forms of assistance at work?”
BY BRUCE CRUMLEY @BRUCEC_INC
Monday, May 12, 2025
This Founder Just Launched an AI Clone of Himself. Should You?
Entrepreneurs are in a constant, never-ending battle with time. Between managing direct reports, training new employees, and growing the business, it can feel like there aren’t enough hours in the day to get everything done. But what if you could offload some of that work to someone you trust implicitly: a version of yourself?
That’s the future imagined by AI clone startup Delphi. The company trains custom AI models on an individual’s writing, along with their appearances in videos and podcasts, to create a digital double that can be texted, called, and even video-chatted with. Delphi currently has roughly 1,600 paying customers.
Tyler Denk, founder of fast-growing newsletter startup Beehiiv, recently released his own Delphi-created AI clone, and made it free for subscribers of his personal newsletter, Big Desk Energy (BDE). Denk wrote in his blog that the AI clone, which he named DenkBot, can converse via text and speech, and “has been trained on everything I’ve ever written, all of my social media posts, every podcast interview I’ve ever done, and a handful of other resources (like Beehiiv support docs).” BDE subscribers could use DenkBot to get advice sourced from Denk’s newsletter without needing to sift through dozens of posts.
“Tyler’s always been a founder that I and a lot of founders in San Francisco look up to,” says Dara Ladjevardian, Delphi’s co-founder and CEO. Ladjevardian connected with Denk on one of Denk’s annual Costa Rica excursions, and the two discovered that their platforms had much in common.
“A lot of our customers are writers who want to scale their expertise in an interactive way,” says Ladjevardian.
Ladjevardian says that one of the company’s top use cases is for business leaders and CEOs. Those execs can upload their memos, talks, and presentations to ensure the AI clone retains their knowledge and voice, and then provide the clone as a resource for employees, capable of helping new hires get familiar with workflows and aligning teams around a central vision.
According to Ladjevardian, Delphi’s AI clones can also help generate business leads. Of course, Ladjevardian has a clone of himself, and it is accessible through Delphi’s website. Ladjevardian says that he gets a notification whenever his clone talks to an engineer with experience in AI, or whenever it talks to someone who identifies as an influencer or coach. This helps him track down potential new employees and customers.
Ladjevardian says that building “heavy hallucination guardrails” has been a priority for Delphi since the company’s 2021 founding, and they are constantly working on tools to ensure AI clones don’t do or say anything their human counterparts wouldn’t.
To that end, Delphi users can customize their AI clone’s “creativity score,” which determines how faithful the clone is to the materials it was trained on. If you give your clone a low creativity score, says Ladjevardian, “it will only say things that it’s trained on.” If somebody asks a low-creativity Delphi clone a question that it can’t answer, the clone notifies the real person, who can then “hop into the conversation and improve that answer.”
On the flip side, you can set your AI clone to be more adaptive, and give it clearance to attempt to answer questions like you would. If a high-creativity Delphi clone gets something wrong, Ladjevardian says, users can hop into the conversation again, to correct faulty answers or provide the clone with additional training data.
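Delphi hasn’t published how the creativity score works under the hood, but the behavior Ladjevardian describes matches a common guardrail pattern: require answers to be grounded in the person’s own material, and escalate to the human when grounding falls short. A purely hypothetical sketch of that logic:

```python
# Hypothetical sketch of a "creativity score" guardrail, based only on
# the behavior described above; Delphi's implementation is not public.
from dataclasses import dataclass

@dataclass
class Answer:
    text: str
    grounding: float  # 0-1: how well the reply is supported by sources

def draft_answer(question: str) -> Answer:
    # Stub: a real system would call an LLM over retrieved excerpts from
    # the person's writing and talks, then score the reply's support.
    return Answer(text=f"(drafted reply to: {question})", grounding=0.4)

def notify_human(question: str) -> None:
    # Stub: a real system would ping the clone's human counterpart.
    print(f"[escalated to human] {question}")

def respond(question: str, creativity: float) -> str:
    answer = draft_answer(question)
    # Low creativity demands strong grounding; high creativity tolerates
    # answers that stray further from the training material.
    if answer.grounding >= 1.0 - creativity:
        return answer.text
    notify_human(question)
    return "I'm not sure; I've flagged this for the real person to answer."

print(respond("What's your advice on pricing?", creativity=0.2))  # escalates
print(respond("What's your advice on pricing?", creativity=0.9))  # answers
```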
Denk, who was featured on the cover of Inc.’s 2024 Best in Business issue, wrote in his blog that the “paradigm of searching and filtering archives of content for answers” will soon give way to tech-based alternatives like Delphi, enabling readers to surface information without searching through dozens of blog posts. Beehiiv hasn’t announced any formal partnership with Delphi, but it’s not difficult to see the appeal for newsletter creators, especially ones specializing in giving advice.
Delphi is far from the only company offering AI cloning services. In late 2024, MasterClass debuted On Call, a platform for talking to AI clones of MasterClass instructors like Mark Cuban and Gordon Ramsay. Even café chain Le Pain Quotidien has developed an AI clone of founder Alain Coumont. Google recently published a paper investigating what happens to AI clones when their human counterparts die, describing such clones as “generative ghosts.”
BY BEN SHERRY @BENLUCASSHERRY
Friday, May 9, 2025
Why AI Won’t Replace Venture Capitalists Any Time Soon
The Wall Street Journal recently reported on a data company using AI to forecast startups’ financial futures, and it caused quite a stir in venture capital circles. After all, that level of in-depth analysis has long been a core advantage afforded to a venture firm’s crack analytics team. With that edge somewhat offloaded to AI, it invites the question: Is it reasonable to think AI will replace VCs entirely?
Not in the foreseeable future.
How do I know? I have been building custom AI models for Salesforce Ventures since joining the team back in 2022 and have seen first-hand how vital the human element is in investment decisions. The models my team builds help inform our investment strategy, making sure we find the right investment opportunities at the right time, and this is complemented by the expertise of our investors.
While AI excels at recognizing patterns from historical data—valuable input for investment decisions—relying on past performance alone in venture is a strategic misstep.
In fact, the most successful venture investors make calculated bets on novel ideas that historical patterns would caution against. Venture deals are based on far more than just the terms of a financial transaction—investors and founders alike consider a wide range of qualitative factors before striking a deal, including communication style, personal chemistry, range of relationships, and much more.
This process requires judgment that transcends algorithms; experienced investors must quickly assess the potential for novel markets and products alike, while ascertaining critical founder qualities that defy quantitative analysis, such as grace under fire, hunger, and maturity.
Venture capital is also unique in that access to critical information is typically asymmetrical. Founders meticulously control what information about their company is publicly available, which can bias training datasets—and subsequent AI models—in substantial and unpredictable ways.
Both founders and investors strategically curate how information appears in the broader market, with databases functioning more as carefully managed signaling platforms. This is especially true for early-stage startups, where data platforms predominantly showcase what founders and investors want the market to perceive rather than providing neutral fact-based insights into actual developments and outcomes.
As with any job, AI is best viewed as a powerful enabler, enhancing efficiency and helping investors focus their limited time on the companies and founders that make the most sense for their investment mandate. At Salesforce Ventures, for example, we focus our AI tools on amplifying investor capabilities and automating repetitive back office work rather than attempting to replace human interaction, connection, common sense, and, ultimately, shrewd judgment.
Our AI models help categorize companies based on their innovations and surface opportunities that align with our investment theses in specific sectors. Yet we never over-rely on these metrics, as models that attempt to predict startup outcomes can make success feel like a foregone conclusion—and short-circuit critical investor engagement and ongoing support.
Critically, AI tools free our investors of spreadsheet analysis and number-crunching that too often get in the way of what our team values most: meeting founders shaping the future. Our investors are better prepared to ask smart questions informed by AI, while gaining more bandwidth to focus on connection and empathy that builds foundational relationships.
Yet human expertise and capabilities remain indispensable. Investors bring holistic evaluation skills to make sense of factors that are difficult to capture in a machine-readable format: the founding team’s social dynamics, the depth of previous experiences, and product-market fit now and in the future. These attributes are difficult to discern even for the most experienced investors; trying to extract these qualitative features for the training dataset of an AI model has been an impossible task so far.
Importantly, these human dynamics that deeply influence the investment process don’t stop once the wire is sent.
The best investors understand that venture is not a passive asset class and work tirelessly to support their founders. While an individual investor can have only so much impact, a critical component of the overall success of an investment decision is the follow-through: creating value for the company wherever possible, through introductions, advice, candid feedback, and support during the inevitable tough times. The best investors create differentiated value for founders through a robust community and privileged access to the resources founders need to scale their businesses.
Those who believe success in venture capital can come from following the public equity market quant playbook—investing based on large-scale data, methodological expertise, toeing ownership thresholds, and repeatable experimentation—fundamentally misunderstand venture as an asset class.
Venture dollars are deployed to back people, who in turn build companies. These decisions are made based on human evaluation, asymmetrical information, and an opinionated view of the future.
That’s not to say fast-improving AI models won’t play an increasingly meaningful role in shaping investment decisions. But as long as human-led companies are raising capital, it’s vital to have another human in the room to make an optimal decision.
EXPERT OPINION BY BRIAN MURPHY