IMPACT
…building a unique and dynamic generation.
Wednesday, November 19, 2025
What Adobe Knows About AI That Most Tech Companies Don’t
Last week, I was talking with a graphic designer about Adobe MAX, and he shared the most unexpected review of an AI feature I’ve ever heard. “Photoshop will rename your layers for you!” he said, without hesitating.
He was referring to Photoshop’s new ability to look at the content on each of your layers and rename them accordingly. Since most people don’t give much thought to naming layers as they create them, this might be one of the most useful features Adobe has ever shipped. It’s certainly one of the most useful AI features any company has come up with so far, mostly because it does something genuinely helpful that no one wants to do themselves.
Helpful over hype
And that’s the point. In fact, that reaction explains more about Adobe’s AI strategy than anything the company demoed during its keynote.
It’s not the kind of feature that gets a lot of hype, but I don’t know anyone who regularly uses Photoshop who wouldn’t prefer to have AI handle one of the most universally hated chores in design: cleaning up a pile of unnamed layers.
I think you can make the case that Adobe just made the loudest, clearest argument yet that AI isn’t a side feature. In many ways, it is the product now. Almost every announcement touched Firefly, assistants that operate the apps for you, “bring your own model” integrations, or Firefly Foundry—the infrastructure layer that lets enterprises build their own private models.
What Adobe understands
But beneath it all, Adobe is doing something most tech companies still aren’t. Instead of looking for ways to bolt AI onto its products, Adobe is building AI into the jobs customers already hired Adobe to help them do.
When I sat down with Eric Snowden, Adobe’s SVP of Design, at WebSummit this past week, he used a phrase that stuck with me: “utilitarian AI.”
Sure, Adobe announced plenty of shiny new AI features, like Firefly Image Model 5, AI music and speech generation, podcast editing features in Audition, and even partner models like Google’s Gemini and Topaz’s super-resolution built directly into the UI.
But Snowden lit up talking about auto-culling in Lightroom.
“You’re a wedding photographer. You shoot 1,000 photos; you have to get to the 10 you want to edit. I don’t think there’s anybody who loves that process,” he told me. Auto-culling uses AI to identify misfires, blinks, bad exposures, and the frames you might actually want.
Utilitarian AI is underrated
That’s what he means by utilitarian AI: AI that makes the stuff you already have to do dramatically less painful. These features don’t force you into an “AI mode”; instead, they save you time while you go about the tasks you already do.
Snowden describes Photoshop’s assistant like a self-driving car: you can tell it where to go, but you can grab the wheel at any time, and the entire stack of non-destructive layers is still there. You’re not outsourcing your creative judgment; you’re outsourcing the tedious tasks so you can focus on the creative process.
That’s Adobe’s first insight: AI should improve the actual job, not invent a new one.
The second insight came out of a conversation we had about who AI helps most. I told Snowden my theory: AI is most useful right now to people who either already know how to do a thing, or who don’t know the steps but know what the result should be. For both groups, AI saves meaningful time.
That’s how I use ChatGPT for research. I could do 30 Google searches for something, but ChatGPT will just do them all at the same time and give me a summary of the results. I know what the results should be, and I’m able to evaluate whether they are accurate.
The same is true for people using Lightroom, Photoshop, or Premiere. You know what “right” looks like, so you know whether the tool got you closer or not. AI can do many of the tasks, but it’s still up to humans to have taste.
AI has no taste
Which is why Snowden didn’t hesitate: designers and creative pros are actually better positioned in an AI world—not worse.
“You need to know what good looks like,” he told me. “You need to know what done looks like. You need to know why you’re making something.” Put the same AI tool in front of an engineer and a designer and, according to Snowden, “90 times out of 100, you can guess which is which,” even if both are typing prompts into the same tool. That means taste becomes the differentiator.
Snowden told me he spent years as a professional retoucher. “I think about the hours I spent retouching photos, and I’m like, I would have liked to go outside,” he said. Having that skill was important, but it wasn’t the work. The finished product was the work, and AI can compress everything between the idea and the result.
Trust has never mattered more
The third thing Adobe understands—and frankly, most companies haven’t even started wrestling with—is trust. I have, many times, said that trust is your most valuable asset. If you’re Adobe, you’ve built up that trust over decades with all kinds of creative professionals. There is a lot riding on whether these AI tools are useful or harmful to creatives, as well as to their audiences.
So Adobe didn’t just ship AI features; it’s also building guardrails around them. For example, the Content Authenticity Initiative will tag AI-edited or AI-generated content with verifiable metadata.
Snowden’s framing is simple: “We’re not saying whether you should consume it or not. We just think you deserve to know how it was made so you can make an informed choice.”
Then there’s the part most people never see—the structure that lets a company Adobe’s size move this fast.
Understanding how customers want to use AI
Snowden’s team actually uses the products they design. He edits photos in Lightroom outside of work. Adobe runs a sort of internal incubator where anyone can pitch new product ideas directly to a board. Two of the most important new tools—Firefly Boards and Project Graph—came out of that program.
When AI arrived, Adobe already had the mechanism to act on it. It didn’t need to reinvent itself or reorganize. It just needed to point an existing innovation engine at a new set of problems.
That’s the lesson here: Adobe isn’t chasing AI because it’s suddenly trendy, shipping features no one is sure how anyone will use. It saw AI as a powerful way to improve the jobs its customers already do.
That’s the thing so many tech companies still miss. AI is not a strategy. It’s not even the product. It’s a utility—one that works only if you know what your customers are trying to accomplish in the first place.
So far, it seems like Adobe does. And that’s why its AI push feels less like a pivot and more like a product finally catching up to the way creative work actually happens.
EXPERT OPINION BY JASON ATEN, TECH COLUMNIST @JASONATEN
Monday, November 17, 2025
How to Grow Your Social Following as a Founder—and Which Platforms to Use
So you want to build in public—documenting the process of founding, launching, and growing your business online—but you’re not sure which platform to use. You could use Substack or Beehiiv to send newsletters, Medium to write blog posts, TikTok or YouTube to post videos, LinkedIn, X, or Bluesky to share text-based posts, or Instagram to post photos.
There’s no right answer. Founders of all kinds have grown their businesses by posting on each of these platforms—and many use more than one. Plus, there’s plenty of overlap: You can post TikTok-like videos on Instagram and share X-like text posts on Substack.
Still, if you’re at the very beginning of your building in public journey, it’s a good idea to focus your efforts on just one. Here’s a guide to help you pick between some of the most popular platforms right now: Substack, Beehiiv, TikTok, LinkedIn, and X.
Choose Substack if…
You’re a founder in the politics, media, fashion, or beauty space who enjoys storytelling.
Substack, which launched as a newsletter platform in 2017 but now bills itself as a subscription network, reports hosting more than 50 million active subscriptions and 5 million paid subscriptions. The platform recently added video and livestream features in order to court creators who use other paid subscription platforms, but the majority of its content is still long-form and text-based. If you’re considering building in public on Substack, you need to have a love for writing—or at the very least, storytelling.
Newsletters on politics, fashion, and beauty seem to do especially well on Substack, which makes it a solid choice of platform if your company is in any of these industries. Many newer media organizations, including The Ankler and The Free Press, publish on Substack, which means it’s also a great pick for media entrepreneurs and founders in adjacent industries like public relations.
“Substack is where founders can reach audiences who genuinely value a direct, personal connection,” Christina Loff, the platform’s head of lifestyle partnerships, tells Inc. over email. “The publications that perform best all share a common thread: a strong, human voice.” Examples of founders whose publications do this well, she adds, include Rebecca Minkoff, who has more than 6,000 subscribers; Dianna Cohen of Crown Affair, who has more than 13,000; and Rachelle Hruska MacPherson of GuestofaGuest.com and Lingua Franca, who has more than 260,000.
Choose TikTok if…
Your business is targeting Gen Z.
It’s no secret that TikTok dominates in attracting young users—and keeping them engaged. The video sharing app rose to fame in 2020 and now has an estimated 170 million American users, many of whom are 28 years old and under. In fact, according to TikTok, 91 percent of Gen Z internet users “have discovered something” on the platform in the past month. So if you’re a young founder, or if you’re starting a business that’s targeting Gen Z customers, TikTok is probably your best bet.
All you really need to get started on TikTok is a smartphone and basic video-editing skills. Nadya Okamoto, the co-founder of sustainable period care brand August, for one, has grown her audience to 4.4 million in just four years by filming her daily routine, answering product questions, and posting get-ready-with-me videos. Boutique candy brand Lil Sweet Treat’s founder Elly Ross has gained more than 36,300 followers by documenting her experience of opening four storefronts and launching a line of candy.
Before you fully commit to building in public on TikTok, remember that there’s still a minute possibility that the platform will get banned in the U.S. on December 16.
Choose LinkedIn if…
You’re a founder in the business-to-business space.
As a work-centric social media platform, LinkedIn is a great place for you to build in public if your company makes products for or provides services to other businesses. Still, there’s a lot of competition on the platform. More than 69 million companies and 243 million American professionals use LinkedIn, according to the company—and almost all of them are posting about their own careers.
BY ANNABEL BURBA @ANNIEBURBA
Friday, November 14, 2025
Why Some AI Leaders Say Artificial General Intelligence Is Already Here
Artificial intelligence is still a relatively new technology, but one that has been making seemingly exponential jumps in capability. The next big milestone many founders in the industry have discussed is artificial general intelligence (AGI), the point at which these machines can think at the same level as a human being. Now, some of AI’s biggest names say they believe we could already be there.
The recent Financial Times Future of AI summit gathered Nvidia CEO Jensen Huang, Meta AI’s Yann LeCun, Canadian computer scientist Yoshua Bengio, World Labs founder Fei-Fei Li, Nvidia chief scientist Bill Dally, and Geoffrey Hinton (often referred to as the “Godfather of AI”) to discuss the state of the technology. And some of those leaders in the field said they felt AI was already topping or close to topping human intelligence.
“We are already there … and it doesn’t matter, because at this point it’s a bit of an academic question,” said Huang. “We have enough general intelligence to translate the technology into an enormous amount of society-useful applications in the coming years. We are doing it today.”
Others said we may not even realize that it has happened. While most forecasts still put the arrival of AGI several years down the road, LeCun said he didn’t expect it to be an event, like the release of ChatGPT. Instead, it’s something that will happen gradually over time, and some of it has already started.
AI companies are generally less bullish on the subject of AGI than the panelists. OpenAI has said if it chooses to IPO in the future, that will help it work toward the AGI milestone. Elon Musk, last year, predicted AGI would be achieved by the end of 2025 (updating his previous prediction of 2029). Last month, he wrote in a social media post that the “probability of Grok 5 achieving AGI is now at 10 percent and rising.”
Not all of the AI leaders said they felt AGI was here. Bengio, who was awarded the Turing Award in 2019 for achievements in AI, said it was certainly possible, but the technology wasn’t quite there yet.
“I do not see any reason why, at some point, we wouldn’t be able to build machines that can do pretty much everything we can do,” said Bengio. “Of course, for now … it’s lacking, but there’s no conceptual reason you couldn’t.”
AI, he continued, is a technology with “a lot of possible futures,” which makes it hard to forecast. Basing decisions today on where you think the technology will go is a bad strategy, he said.
World Labs founder Li straddled the question, saying there were parts of AI that would supersede human intelligence and parts that would never be the same. “They’re built for different purposes,” she said. “How many of us can recognize 22,000 objects? How many humans can translate 100 languages? Airplanes fly, but they don’t fly like birds. … There is a profound place for human intelligence to always be critical in our human society.”
Hinton, meanwhile, opted to look beyond AGI to superintelligence, an AI milestone where the technology is considerably smarter than humans. There are several startups exploring this space now, including Ilya Sutskever’s Safe Superintelligence and Mira Murati’s Thinking Machines Lab.
“How long before if you have a debate with a machine, it will always win?” Hinton posited. “I think that is definitely coming within 20 years.”
BY CHRIS MORRIS @MORRISATLARGE
Wednesday, November 12, 2025
AI Isn’t Replacing Jobs. AI Spending Is
For decades now, we have been told that artificial intelligence systems will soon replace human workers. Sixty years ago, for example, Herbert Simon, who received a Nobel Prize in economics and a Turing Award in computing, predicted that “machines will be capable, within 20 years, of doing any work a man can do.” More recently, we have Daniel Susskind’s 2020 award-winning book with the title that says it all: A World Without Work.
Are these bleak predictions finally coming true? ChatGPT turns 3 years old this month, and many think large language models will finally deliver on the promise of AI replacing human workers. LLMs can be used to write emails and reports, summarize documents, and otherwise do many of the tasks that managers are supposed to do. Other forms of generative AI can create images and videos for advertising or code for software.
From Amazon to General Motors to Booz Allen Hamilton, layoffs are being announced and blamed on AI. Amazon said it would cut 14,000 corporate jobs. United Parcel Service (UPS) said it had reduced its management workforce by about 14,000 positions over the past 22 months. And Target said it would cut 1,800 corporate roles. Some academic economists have also chimed in: The St. Louis Federal Reserve found a (weak) correlation between theoretical AI exposure and actual AI adoption in 12 occupational categories.
Yet we remain skeptical of the claim that AI is responsible for these layoffs. A recent MIT Media Lab study found that 95% of generative AI pilot business projects were failing. Another survey by Atlassian concluded that 96% of businesses “have not seen dramatic improvements in organizational efficiency, innovation, or work quality.” Still another study found that 40% of the business people surveyed have received “AI slop” at work in the last month and that it takes nearly two hours, on average, to fix each instance of slop. In addition, they “no longer trust their AI-enabled peers, find them less creative, and find them less intelligent or capable.”
If AI isn’t doing much, it’s unlikely to be responsible for the layoffs. Some have pointed to the rapid hiring in the tech sector during and after the pandemic when the U.S. Federal Reserve set interest rates near zero, reports the BBC’s Danielle Kaye. The resulting “hiring set these firms up for eventual workforce reductions, experts said—a dynamic separate from the generative AI boom over the last three years,” Kaye wrote.
Others have pointed to fears that a recession may be starting, driven by higher tariffs, fewer foreign-worker visas, the government shutdown, a backlash against DEI and clean-energy spending, ballooning federal government debt, and the presence of federal troops in U.S. cities.
For layoffs in the tech sector, a likely culprit is the financial stress that companies are experiencing because of their huge spending on AI infrastructure. Companies that are spending a lot with no significant increases in revenue can try to sustain profitability by cutting costs. Amazon increased its total CapEx from $54 billion in 2023 to $84 billion in 2024, and an estimated $118 billion in 2025. Meta is securing a $27 billion credit line to fund its data centers. Oracle plans to borrow $25 billion annually over the next few years to fulfill its AI contracts.
“We’re running out of simple ways to secure more funding, so cost-cutting will follow,” Pratik Ratadiya, head of product at AI startup Narravance, wrote on X. “I maintain that companies have overspent on LLMs before establishing a sustainable financial model for these expenses.”
We’ve seen this act before. When companies are financially stressed, a relatively easy solution is to lay off workers and ask those who are not laid off to work harder and be thankful that they still have jobs. AI is just a convenient excuse for this cost-cutting.
Last week, when Amazon slashed 14,000 corporate jobs and hinted that more cuts could be coming, a top executive noted the current generation of AI is “enabling companies to innovate much faster than ever before.” Shortly thereafter, another Amazon rep anonymously admitted to NBC News that “AI is not the reason behind the vast majority of reductions.” On an investor call, Amazon CEO Andy Jassy admitted that the layoffs were “not even really AI driven.”
We have been following the slow growth of generative AI revenue over the last few years, and that revenue is neither big enough to support the number of layoffs attributed to AI nor big enough to justify the capital expenditures on AI cloud infrastructure. Those expenditures may be approaching $1 trillion for 2025, while AI revenue, the money that would actually pay for running software on that infrastructure, will not exceed $30 billion this year. Are we to believe that such a small amount of revenue is driving economy-wide layoffs?
Investors can’t decide whether to cheer or fear these investments. The revenue is minuscule for AI-platform companies like OpenAI that are buyers, but is magnificent for companies like Nvidia that are sellers. Nvidia’s market capitalization recently topped $5 trillion, while OpenAI admits that it will have $115 billion in cumulative losses by 2029. (Based on Sam Altman’s history of overly optimistic predictions, we suspect the losses will be even larger.)
The lack of transparency doesn’t help. OpenAI, Anthropic, and other AI creators are not public companies that are required to release audited figures each quarter. And most Big Tech companies do not separate AI from other revenues. (Microsoft is the only one.) Thus, we are flying in the dark.
Meanwhile, college graduates are having trouble finding jobs, and many young people are convinced by the end-of-work narrative that there is no point in preparing for jobs. Ironically, surrendering to this narrative makes them even less employable.
The wild exaggerations from LLM promoters certainly help them raise funds for their quixotic quest for artificial general intelligence. But it brings us no closer to that goal, all while diverting valuable physical, financial, and human resources from more promising pursuits.
By Gary N. Smith and Jeffrey Funk