Friday, November 21, 2025

What’s Next for AI? Andreessen Horowitz Founders Share Their Thoughts

Stocks of companies tied to artificial intelligence have been hitting stratospheric levels for over a year now, thrilling investors but also raising concerns about a potential AI bubble. As startups close breathtaking funding rounds, like the $40 billion OpenAI collected in March of this year, fears of an AI bubble are growing — and some say a burst could be even bigger than the dot-com bubble of the late 1990s.

The bubble theory is hotly debated. Some within the industry agree that the investment landscape is bloated, including OpenAI co-founder Sam Altman. Others, like analysts at Goldman Sachs, say we’re not in one (yet) — and Fed chair Jerome Powell has been skeptical of the bubble calls.

As that debate rages, investors continue to fund AI startups. Few are in as deep as Marc Andreessen and Ben Horowitz. Their venture firm, Andreessen Horowitz (commonly called a16z), has sunk billions into the AI space. In April, it was reported the company was in early talks to raise a massive $20 billion AI-focused fund. The two investors recently came together at a16z’s Runtime conference to talk about where AI can go beyond chatbots.

Neither was willing to make specific predictions about AI’s forthcoming capabilities, saying it’s too early to even imagine them. Andreessen likened AI to the personal computer in 1975, noting there was no way at that time to imagine what PCs would be capable of today. However, he expects similar levels of advancement — from a stronger starting point.

AI, he said, is already approaching levels of human creativity — and while Andreessen would love to see humans retain superiority in that area, he thinks it’s unlikely. Tools like OpenAI’s Sora 2 video model, for instance, are already capable of creating realistic scenes, animations, and special effects — and the introduction of AI actress Tilly Norwood has caused an outcry and prompted debate in Hollywood.
“I wanna like hold out hope that there is still something special about human creativity,” he said. “And I certainly believe that, and I very much want to believe that. But, I don’t know. When I use these things, I’m like, wow, they seem to be awfully smart and awfully creative. So I’m pretty convinced that they’re gonna clear the bar.”

Horowitz agreed, saying that while AI might not currently create at the same level as human artists, whether painters or hip-hop performers, that’s largely due to how little it has learned so far. It’s just a matter of time before it has an equal or superior level of talent. And some artists are already looking to use AI to collaborate, he said.

“With the current state of the technology, kind of the pre-training doesn’t have quite the right data to get to what you really wanna see, but, you know, it’s pretty good,” he said. “Hip-hop guys are interested because it’s almost like a replay of what they did — they took other music and built new music out of it. AI is a fantastic creative tool. It way opens up the palette.”

While AI can devour as many data sets as programmers throw at it, that doesn’t give the technology situational awareness. It is, in essence, book smarts versus street smarts. But the robotics field is expanding quickly. Elon Musk and Tesla are working on humanoid robots, and robotics company 1X has already started taking preorders for a $20,000 humanoid robot that will ‘live’ and work around your home. Once that technology and AI are blended, Andreessen said, AI will see a significant jump in actionable intelligence.

“When we put AI in physical objects that move around the world, you’re gonna be able to get closer to having that integrated intellectual, physical experience,” he said. “Robots that are gonna be able to gather a lot more real-world data.
And so, maybe you can start to actually think about synthesizing a more advanced model of cognition.”

While there are plenty of experts who warn the AI market could be in a bubble right now, including OpenAI CEO and co-founder Sam Altman, Horowitz dismisses the idea, saying bubbles occur when supply outstrips demand — and that’s not the case with AI.

“We don’t have a demand problem right now,” he said. “The idea that we’re going to have a demand problem five years from now, to me, seems quite absurd. Could there be weird bottlenecks that appear, like we don’t have enough cooling or something like that? Maybe. But, right now, if you look at demand and supply and what’s going on and multiples against growth, it doesn’t look like a bubble at all to me.”

BY CHRIS MORRIS @MORRISATLARGE

Wednesday, November 19, 2025

What Adobe Knows About AI That Most Tech Companies Don’t

Last week, I was talking with a graphic designer about Adobe MAX, and he shared the most unexpected review of an AI feature I’ve ever heard. “Photoshop will rename your layers for you!” he said, without hesitating. The feature he was referring to was that Photoshop can now look at the content on each of your layers and rename them for you. Since most people don’t give a lot of thought to naming layers as they create them, this might be one of the most useful features Adobe has ever created. It’s certainly one of the most useful AI features that any company has come up with so far, mostly because it does something very helpful that no one wants to do.

Helpful over hype

And that’s the point. In fact, that reaction explains more about Adobe’s AI strategy than anything the company demoed during its keynote. It’s not the kind of feature that gets a lot of hype, but I don’t know anyone who regularly uses Photoshop who wouldn’t prefer to have AI handle one of the most universally hated chores in design: cleaning up a pile of unnamed layers. I think you can make the case that Adobe just made the loudest, clearest argument yet that AI isn’t a side feature. In many ways, it is the product now. Almost every announcement touched Firefly, assistants that operate the apps for you, “bring your own model” integrations, or Firefly Foundry—the infrastructure layer that lets enterprises build their own private models.

What Adobe understands

But beneath it all, Adobe is doing something most tech companies still aren’t. Instead of looking for ways to bolt AI onto its products, Adobe is building AI into the jobs customers already hired Adobe to help them do.
When I sat down with Eric Snowden, Adobe’s SVP of Design, at Web Summit this past week, he used a phrase that stuck with me: “utilitarian AI.” Sure, there were plenty of shiny new AI features that Adobe announced, like Firefly Image Model 5, AI music and speech generation, podcast editing features in Audition, and even partner models like Google’s Gemini and Topaz’s super-resolution built directly into the UI. But Snowden lit up talking about auto-culling in Lightroom. “You’re a wedding photographer. You shoot 1,000 photos; you have to get to the 10 you want to edit. I don’t think there’s anybody who loves that process,” he told me. Auto-culling uses AI to identify misfires, blinks, bad exposures, and the frames you might actually want.

Utilitarian AI is underrated

That’s what he means by utilitarian AI—AI that makes the stuff you already have to do dramatically less painful. These features don’t force you into an “AI mode”; instead, they save you time while you go about the tasks you already do. Snowden describes Photoshop’s assistant like a self-driving car: you can tell it where to go, but you can grab the wheel at any time—and the entire stack of non-destructive layers is still there. You’re not outsourcing your creative judgment—you’re outsourcing the tedious tasks so you can work on the creative process. That’s Adobe’s first insight: that AI should improve the actual job, not invent a new one.

The second insight came out of a conversation we had about who AI helps most. I told Snowden I have a theory: AI is most useful right now to people who either already know how to do a thing, or don’t know the steps but know what the result should be. For both of those people, AI saves meaningful time. That’s how I use ChatGPT for research. I could do 30 Google searches for something, but ChatGPT will just do them all at the same time and give me a summary of the results. I know what the results should be, and I’m able to evaluate whether they are accurate.
The same is true for people using Lightroom, Photoshop, or Premiere. You know what “right” looks like, so you know whether the tool got you closer or not. AI can do many of the tasks, but it’s still up to humans to have taste.

AI has no taste

Which is why Snowden didn’t hesitate: designers and creative pros are actually better positioned in an AI world—not worse. “You need to know what good looks like,” he told me. “You need to know what done looks like. You need to know why you’re making something.” Put the same AI tool in front of an engineer and a designer and, according to Snowden, “90 times out of 100, you can guess which is which,” even if both are typing prompts into the same tool. That means taste becomes the differentiator. Snowden told me he spent years as a professional retoucher. “I think about the hours I spent retouching photos, and I’m like, I would have liked to go outside,” he said. Being able to do that skill was important, but it wasn’t the work. The finished product was the work, and AI can compress everything between the idea and the result.

Trust has never mattered more

The third thing Adobe understands—and frankly, most companies haven’t even started wrestling with—is trust. I have, many times, said that trust is your most valuable asset. If you’re Adobe, you’ve built up that trust over decades with all kinds of creative professionals. There is a lot riding on whether these AI tools are useful or harmful to creatives, as well as to their audiences. So Adobe didn’t just ship AI features; it is building guardrails around them. For example, the Content Authenticity Initiative will tag AI-edited or AI-generated content with verifiable metadata. Snowden’s framing is simple: “We’re not saying whether you should consume it or not. We just think you deserve to know how it was made so you can make an informed choice.” Then there’s the part most people never see—the structure that lets a company Adobe’s size move this fast.
Understanding how customers want to use AI

Snowden’s team actually uses the products they design. He edits photos in Lightroom outside of work. Adobe runs a sort of internal incubator where anyone can pitch new product ideas directly to a board. Two of the most important new tools—Firefly Boards and Project Graph—came out of that program. When AI arrived, Adobe already had the mechanism to act on it. It didn’t need to reinvent itself or reorganize. It just needed to point an existing innovation engine at a new set of problems.

That’s the lesson here: Adobe isn’t chasing AI because it’s suddenly trendy, shipping features no one is sure how to use. It saw AI as a powerful way to improve the jobs its customers already do. That’s the thing so many tech companies still miss. AI is not a strategy. It’s not even the product. It’s a utility—one that works only if you know what your customers are trying to accomplish in the first place. So far, it seems like Adobe does. And that’s why its AI push feels less like a pivot and more like a product finally catching up to the way creative work actually happens.

EXPERT OPINION BY JASON ATEN, TECH COLUMNIST @JASONATEN

Monday, November 17, 2025

How to Grow Your Social Following as a Founder—and Which Platforms to Use

So you want to build in public—documenting the process of founding, launching, and growing your business online—but you’re not sure which platform to use. You could use Substack or Beehiiv to send newsletters, Medium to write blog posts, TikTok or YouTube to post videos, LinkedIn, X, or Bluesky to share text-based posts, or Instagram to post photos. There’s no right answer. Founders of all kinds have grown their businesses by posting on each of these platforms—and many use more than one. Plus, there’s plenty of overlap: You can post TikTok-like videos on Instagram and share X-like text posts on Substack. Still, if you’re at the very beginning of your build-in-public journey, it’s a good idea to focus your efforts on just one. Here’s a guide to help you pick between some of the most popular platforms right now: Substack, Beehiiv, TikTok, LinkedIn, and X.

Choose Substack if…

You’re a founder in the politics, media, fashion, or beauty space who enjoys storytelling. Substack, which launched as a newsletter platform in 2017 but now bills itself as a subscription network, reports hosting more than 50 million active subscriptions and 5 million paid subscriptions. The platform recently added video and livestream features in order to court creators who use other paid subscription platforms, but the majority of its content is still long-form and text-based. If you’re considering building in public on Substack, you need to have a love for writing—or at the very least, storytelling. Newsletters on politics, fashion, and beauty seem to do especially well on Substack, which makes it a solid choice of platform if your company is in any of these industries. Many new-age media organizations, including The Ankler and The Free Press, publish on Substack, which means it’s also a great pick for media entrepreneurs and founders in adjacent industries like public relations.
“Substack is where founders can reach audiences who genuinely value a direct, personal connection,” Christina Loff, the platform’s head of lifestyle partnerships, tells Inc. over email. “The publications that perform best all share a common thread: a strong, human voice.” Examples of founders whose publications do this well, she adds, include Rebecca Minkoff, who has more than 6,000 subscribers; Dianna Cohen of Crown Affair, who has more than 13,000; and Rachelle Hruska MacPherson of GuestofaGuest.com and Lingua Franca, who has more than 260,000.

Choose TikTok if…

Your business is targeting Gen Z. It’s no secret that TikTok dominates in attracting young users—and keeping them engaged. The video-sharing app rose to fame in 2020 and now has an estimated 170 million American users, many of whom are 28 years old and under. In fact, according to TikTok, 91 percent of Gen Z internet users “have discovered something” on the platform in the past month. So if you’re a young founder, or if you’re starting a business that’s targeting Gen Z customers, TikTok is probably your best bet. All you really need to get started on TikTok is a smartphone and basic video-editing skills. Nadya Okamoto, the co-founder of sustainable period care brand August, for one, has grown her audience to 4.4 million in just four years by filming her daily routine, answering product questions, and posting get-ready-with-me videos. Boutique candy brand Lil Sweet Treat’s founder Elly Ross has gained more than 36,300 followers by documenting her experience of opening four storefronts and launching a line of candy. Before you fully commit to building in public on TikTok, remember that there’s still a minute possibility that the platform will get banned in the U.S. on December 16.

Choose LinkedIn if…

You’re a founder in the business-to-business space.
As a work-centric social media platform, LinkedIn is a great place for you to build in public if your company makes products for or provides services to other businesses. Still, there’s a lot of competition on the platform. More than 69 million companies and 243 million American professionals use LinkedIn, according to the company—and almost all of them are posting about their own careers. BY ANNABEL BURBA @ANNIEBURBA

Friday, November 14, 2025

Why Some AI Leaders Say Artificial General Intelligence Is Already Here

Artificial intelligence is still a relatively new technology, but one that has been seeing seemingly exponential jumps in its capabilities. The next big milestone many founders in the industry have discussed is artificial general intelligence (AGI), the point at which these machines can think at the same level as a human being. Now, some of AI’s biggest names say they believe we could already be at that point.

The recent Financial Times Future of AI summit gathered Nvidia CEO Jensen Huang, Meta AI’s Yann LeCun, Canadian computer scientist Yoshua Bengio, World Labs founder Fei-Fei Li, Nvidia chief scientist Bill Dally, and Geoffrey Hinton (often referred to as the “Godfather of AI”) to discuss the state of the technology. And some of those leaders in the field said they felt AI was already topping, or close to topping, human intelligence.

“We are already there … and it doesn’t matter, because at this point it’s a bit of an academic question,” said Huang. “We have enough general intelligence to translate the technology into an enormous amount of society-useful applications in the coming years. We are doing it today.”

Others said we may not even realize that it has happened. While most forecasts for the arrival of AGI still put it several years down the road, LeCun said he didn’t expect it would be an event, like the release of ChatGPT. Instead, it’s something that will happen gradually over time—and some of it has already started.

AI companies are generally less bullish on the subject of AGI than the panelists. OpenAI has said that if it chooses to IPO in the future, doing so will help it work toward the AGI milestone. Elon Musk, last year, predicted AGI would be achieved by the end of 2025 (updating his previous prediction of 2029). Last month, he wrote in a social media post that the “probability of Grok 5 achieving AGI is now at 10 percent and rising.”

Not all of the AI leaders said they felt AGI was here.
Bengio, who was awarded the Turing Award in 2019 for achievements in AI, said it was certainly possible, but the technology wasn’t quite there yet. “I do not see any reason why, at some point, we wouldn’t be able to build machines that can do pretty much everything we can do,” said Bengio. “Of course, for now … it’s lacking, but there’s no conceptual reason you couldn’t.” AI, he continued, was a technology that had “a lot of possible futures,” however. And that makes it hard to forecast. Basing decisions today on where you think the technology will go is a bad strategy, he said.

World Labs founder Li straddled the question, saying there were parts of AI that would supersede human intelligence and parts that would never be the same. “They’re built for different purposes,” she said. “How many of us can recognize 22,000 objects? How many humans can translate 100 languages? Airplanes fly, but they don’t fly like birds. … There is a profound place for human intelligence to always be critical in our human society.”

Hinton, meanwhile, opted to look beyond AGI to superintelligence, an AI milestone where the technology is considerably smarter than humans. There are several startups exploring this space now, including Ilya Sutskever’s Safe Superintelligence and Mira Murati’s Thinking Machines Lab. “How long before, if you have a debate with a machine, it will always win?” Hinton posited. “I think that is definitely coming within 20 years.”

BY CHRIS MORRIS @MORRISATLARGE

Wednesday, November 12, 2025

AI Isn’t Replacing Jobs. AI Spending Is

For decades now, we have been told that artificial intelligence systems will soon replace human workers. Sixty years ago, for example, Herbert Simon, who received a Nobel Prize in economics and a Turing Award in computing, predicted that “machines will be capable, within 20 years, of doing any work a man can do.” More recently, we have Daniel Susskind’s 2020 award-winning book with the title that says it all: A World Without Work.

Are these bleak predictions finally coming true? ChatGPT turns 3 years old this month, and many think large language models will finally deliver on the promise of AI replacing human workers. LLMs can be used to write emails and reports, summarize documents, and otherwise do many of the tasks that managers are supposed to do. Other forms of generative AI can create images and videos for advertising or code for software.

From Amazon to General Motors to Booz Allen Hamilton, layoffs are being announced and blamed on AI. Amazon said it would cut 14,000 corporate jobs. United Parcel Service (UPS) said it had reduced its management workforce by about 14,000 positions over the past 22 months. And Target said it would cut 1,800 corporate roles. Some academic economists have also chimed in: The St. Louis Federal Reserve found a (weak) correlation between theoretical AI exposure and actual AI adoption in 12 occupational categories.

Yet we remain skeptical of the claim that AI is responsible for these layoffs. A recent MIT Media Lab study found that 95% of generative AI pilot business projects were failing. Another survey, by Atlassian, concluded that 96% of businesses “have not seen dramatic improvements in organizational efficiency, innovation, or work quality.” Still another study found that 40% of the business people surveyed have received “AI slop” at work in the last month and that it takes nearly two hours, on average, to fix each instance of slop.
In addition, they “no longer trust their AI-enabled peers, find them less creative, and find them less intelligent or capable.” If AI isn’t doing much, it’s unlikely to be responsible for the layoffs.

Some have pointed to the rapid hiring in the tech sector during and after the pandemic, when the U.S. Federal Reserve set interest rates near zero, reports the BBC’s Danielle Kaye. The resulting “hiring set these firms up for eventual workforce reductions, experts said—a dynamic separate from the generative AI boom over the last three years,” Kaye wrote. Others have pointed to fears of an impending recession driven by higher tariffs, fewer foreign-worker visas, the government shutdown, a backlash against DEI and clean energy spending, ballooning federal government debt, and the presence of federal troops in U.S. cities.

For layoffs in the tech sector, a likely culprit is the financial stress that companies are experiencing because of their huge spending on AI infrastructure. Companies that are spending a lot with no significant increases in revenue can try to sustain profitability by cutting costs. Amazon increased its total CapEx from $54 billion in 2023 to $84 billion in 2024, and an estimated $118 billion in 2025. Meta is securing a $27 billion credit line to fund its data centers. Oracle plans to borrow $25 billion annually over the next few years to fulfill its AI contracts.

“We’re running out of simple ways to secure more funding, so cost-cutting will follow,” Pratik Ratadiya, head of product at AI startup Narravance, wrote on X. “I maintain that companies have overspent on LLMs before establishing a sustainable financial model for these expenses.”

We’ve seen this act before. When companies are financially stressed, a relatively easy solution is to lay off workers and ask those who are not laid off to work harder and be thankful that they still have jobs. AI is just a convenient excuse for this cost-cutting.
Last week, when Amazon slashed 14,000 corporate jobs and hinted that more cuts could be coming, a top executive noted the current generation of AI is “enabling companies to innovate much faster than ever before.” Shortly thereafter, another Amazon rep anonymously admitted to NBC News that “AI is not the reason behind the vast majority of reductions.” On an investor call, Amazon CEO Andy Jassy admitted that the layoffs were “not even really AI driven.”

We have been following the slow growth in revenues for generative AI over the last few years, and the revenues are neither big enough to support the number of layoffs attributed to AI nor big enough to justify the capital expenditures on AI cloud infrastructure. Those expenditures may be approaching $1 trillion for 2025, while AI revenue—which would be used to pay for the use of AI infrastructure to run the software—will not exceed $30 billion this year. Are we to believe that such a small amount of revenue is driving economy-wide layoffs?

Investors can’t decide whether to cheer or fear these investments. The revenue is minuscule for AI-platform companies like OpenAI that are buyers, but is magnificent for companies like Nvidia that are sellers. Nvidia’s market capitalization recently topped $5 trillion, while OpenAI admits that it will have $115 billion in cumulative losses by 2029. (Based on Sam Altman’s history of overly optimistic predictions, we suspect the losses will be even larger.)

The lack of transparency doesn’t help. OpenAI, Anthropic, and other AI creators are not public companies that are required to release audited figures each quarter. And most Big Tech companies do not separate AI revenue from other revenues. (Microsoft is the only one that does.) Thus, we are flying in the dark.

Meanwhile, college graduates are having trouble finding jobs, and many young people are convinced by the end-of-work narrative that there is no point in preparing for jobs. Ironically, surrendering to this narrative makes them even less employable.
The wild exaggerations from LLM promoters certainly help them raise funds for their quixotic quest for artificial general intelligence. But it brings us no closer to that goal, all while diverting valuable physical, financial, and human resources from more promising pursuits. By Gary N. Smith and Jeffrey Funk

Monday, November 10, 2025

A New AI Agent Wants to Schedule Your Life—Should You Let It?

Have you ever thought your working life would be easier with an executive assistant? A suite of new AI agents is cropping up, promising to take on the work and deliver all the benefits of having an EA without you actually having to hire anyone for the job. And, ostensibly, all for a far lower price tag.

To find out if technology could do a better job than I could at making my schedule work for me, I tested out a free trial of Blockit, a new AI-powered agent that integrates with a user’s calendars and email. When I signed up for the tool, Blockit promised me that in as little as five minutes it could learn the same amount of information about my schedule, habits, and preferences as a human EA might over the course of several months.

Here’s how Blockit works: The AI agent learns your preferences for taking meetings, including when and where you like to conduct certain kinds of business. Then, you can copy the Blockit bot into emails or Slack messages with your contacts and give it instructions to set up a meeting at your chosen time and place. It sounded fantastically simple, but after using the tool, I realized that letting Blockit’s AI into my schedule required more than a little work on my part, too. Here are my three biggest takeaways from letting AI into my schedule for a week.

You need to work to make it work for you

Blockit’s onboarding process involves answering multiple questions about your habits and schedule, some of which got me thinking a little more about where, in fact, I like to work. If you like to take certain meetings in a coffee shop near your office, for example, you need to tell Blockit the exact address, and the AI will make a note of it for future reference. Similarly, if you have an office or work from home on certain days, Blockit will log that, too.
Doing this means that when you copy Blockit’s bot into an email with a contact you want to get coffee with, the bot will schedule a meeting at your preferred spot, invite the other person to it, and block off the time on your calendar that it will take you to get there from wherever you told it you would be working that day. That’s extremely helpful! But it also requires you to make some concrete decisions about where and when you will be working—and that’s not always totally obvious if you are in an industry that regularly puts you in many different locations on short notice. Blockit, to its credit, can keep up—it will even ask you to confirm whether you are traveling if you tell it to set a meeting in an unfamiliar city. But if you are a busy CEO, keeping your AI agent up to date on your schedule might not always be top of mind.

Another interesting Blockit feature is its codewords function. Users can teach the AI codewords that trigger certain actions. For example, say I sign off an email agreeing to a meeting with “best wishes” and copy Blockit to set something up. I could have already set “best wishes” as a codeword meaning that this meeting is not high priority, can be set three or four weeks out, and can be canceled if I get another, higher-priority request for the same time between now and then. It’s a clever idea, but again, I had to go through the work of teaching Blockit my codewords, a process that the desktop app doesn’t make particularly intuitive.

Overall, I had to spend a solid chunk of time training Blockit—it definitely took more than five minutes of work to get value from this tool. If you’re already feeling stretched, taking those hours to invest in the AI might not be your top priority. But if you do, it may be worth it.

Blockit needs access to everything

An obstacle I ran into early with Blockit was that it didn’t want to work with just one Google calendar—it wanted access to every calendar app I had access to.
That would be fine if the people who owned those other calendars were also Blockit users, which they were not. Blockit only works if you share all your calendar data with it, and if you are an entrepreneur or contractor who works regularly with other companies and are copied into their calendars, you likely don’t have the authority to give Blockit permission to see everything you can see. You might also have some personal privacy concerns that would prevent you from sharing certain information with Blockit. As a result, you might end up letting the app see only half the picture—which could make it less adept at sorting out your schedule for you.

Another hurdle for the AI was the fact that I don’t schedule everything in my calendar. I don’t block off time for certain kinds of work, or log when I’m taking free time. I also often block off a day in my calendar with reminders like “parents arriving today,” and it looks like I’m busy all day—but I’m not really. I tried to clean up my calendar and make it more faithful to what my days actually look like, but I gave up after spending an hour planning out just two weeks into the future. In that sense, Blockit might be better suited to someone who is starting from scratch—say, joining a new company—or whose company calendar system has become unwieldy.

Advantages of large-scale integration

Blockit is supercharged when other people in your contacts list have Blockit, too. Your AI agent can directly communicate with their AI agent and set up a meeting for you with minimal human engagement required. Unfortunately, none of my regular contacts have Blockit. The company behind it has put nothing into marketing it, so its customer base is word-of-mouth only. This brings me back to a realization I raised earlier: Blockit may work best on a company-wide scale rather than on an individual level.
The app is genuinely helpful for individuals, but if it were integrated across a team or a company, I can see it taking on some of the core functions of a secretary or EA with little effort. (What the final pricing would be in my case, should I continue to use it past the free trial, is unclear.) That would also get past another potential hurdle with Blockit: Not everyone is used to having an AI agent ask them for their availability. If you’re trying to book a coffee date with an elderly relative, for example, or set up an intro call with a first-time contact, they might be a little skeptical. On a company-wide scale, however, Blockit may be just as intuitive as other AI-powered productivity tools, whether they be schedulers like Sunsama, Structured, or Todoist; note-takers like Fireflies.ai or Otter.ai; or management systems like Airtable or Jira. And, importantly, if your company invests in a tool like Blockit, it would likely become just as big a part of employee workflows as any other software-as-a-service product.

BY CLAIRE CAMERON, FREELANCE WRITER

Friday, November 7, 2025

Small Businesses Aren’t Seeing the Same AI Gains as Big Corporations. Here’s Why

Companies of all sizes and sectors are moving swiftly to boost productivity by integrating artificial intelligence applications to automate tasks previously performed by employees. But recent reports clash significantly in calculating AI's effects on those workers, and they diverge on whether larger corporations benefit from AI more than smaller businesses. They also raise questions about how far AI should go in replacing humans in a given workplace, and how much machines taking over from employees actually contributes to the results companies seek in using the tech. The first of those inquiries came from Wells Fargo chief equity strategist Ohsung Kwon, who compared changes in revenue generated per worker at big S&P 500 firms. He then made the same calculation for companies on the small-cap Russell 2000 index. Using the 2022 release of OpenAI's ChatGPT as the starting point, Kwon's team determined that the greater scaling abilities of larger corporations allowed them to benefit from the tech's automating capabilities to boost the output, and with it the revenue, of the workers they employed. During the same period, by contrast, it found that productivity in the smaller businesses fell. "While productivity for the S&P 500 has soared 5.5 [percent] since ChatGPT, it's down 12.3 [percent] for the Russell 2000," Kwon wrote in a recent note to clients that was featured in a CNBC report on the differing results of AI adoption in business. "We see other examples of diverging trends in consumer, industrial, and financial markets." But much like today's big news that Amazon is laying off 14,000 corporate employees as it expands its use of AI across the business, Kwon's measurement of productivity gains appears to depend mainly on human workers losing their jobs to the tech.
Even if overall output remains the same or even dips following tech-driven head count reductions, the lower number of total workers, with payroll savings added back into the bottom line, mechanically boosts per-employee revenue figures. In addition to Amazon, the CNBC report lists big companies, including Meta, UPS, Starbucks, Oracle, Microsoft, and Google, that have announced big staff cuts this year. Those were undertaken partly to streamline their structures, but primarily to make way for AI to automate many of the tasks those employees previously performed. That willingness of big companies to sacrifice employees, cut labor costs, and boost revenue by scaling their use of AI appears to explain why they've benefited more from the tech than small-business owners under Kwon's analysis. After all, even entrepreneurs responsive to investor demands for increasing returns tend to be more hesitant than corporate managers about laying off people they often work with closely. Any aversion among company founders to cutting staff as an integral part of AI adoption may well also explain another of Kwon's findings: While the S&P 500 has risen 74 percent since use of AI took off in 2022, the Russell 2000 has increased by just 39 percent, probably reflecting investor views about where the biggest, fastest potential boosts to share prices are. Still, none of that means smaller companies are holding back on introducing the tech to their workplaces or missing out on the productivity gains it can offer. A recent survey of small-business owners in the U.S., Australia, Canada, and the U.K. by Intuit QuickBooks Small Business Insights found nearly 70 percent of respondents used AI on a daily basis, with 75 percent reporting increased productivity as a result. Around 15 percent of participants said adoption of the tech had allowed them to create jobs, with only 5 percent saying they'd cut head counts instead.
Results of a recent study by business consultancy Deloitte also measured successful adoption of AI in ways other than merely reducing head counts and costs. Its Humans x Machines report argues that both big corporations and small businesses that focus primarily on the tech, rather than on the employees who use it, end up with disappointing results. Its survey found that the nearly 60 percent of responding companies that deployed AI first and asked workplace questions about its use and effectiveness later were 1.6 times more likely to report lower returns on their investment than other businesses. Companies with the best outcomes, the report said, are organizations that allowed human resources and other managers to work with staff to identify the most useful kinds of AI applications, train workers to adopt them, and then encourage continued deployment of those tools across the business. The report concluded that the tech will never meet its potential unless business leaders prepare employees beforehand to enable it. "[M]ost organizations are investing heavily in AI, but not enough in the work design needed to unlock its value," said Deloitte U.S. human capital head of research and chief futurist David Mallon in comments about the study. "This shouldn't be an 'either/or' approach—it should be a 'both/and' strategy to maximize value. Organizations that take a technology-first approach struggle to scale, while those that intentionally design roles, workflows, and decision-making to integrate humans and machines are more likely to exceed their ROI expectations." BY BRUCE CRUMLEY @BRUCEC_INC

Wednesday, November 5, 2025

This 1 Skill Is the Most Important for the AI Era, Say Leaders From LinkedIn, Meta, and Box

Artificial intelligence is already redefining the workplace. That was driven home this week by Amazon and Chegg, which both announced substantial layoffs. Amazon plans to cut 14,000 jobs as it invests more in AI, while Chegg said it was laying off 45 percent of its workforce as it confronts what it calls “the new realities of AI.” For workers and business owners, that’s a pair of warning shots highlighting the uncertainty and volatility of the years to come. Box CEO Aaron Levie, LinkedIn chief economic opportunity officer Aneesh Raman, and Clara Shih, Meta’s head of business AI, were recent guests at the Masters of Scale Summit in San Francisco to discuss the rise of AI and the changes it will bring. Entrepreneurs, said Shih, could be the people who are best suited to maximize AI’s productivity, thanks to their ability to pivot quickly and their near-obsessive tracking of what’s up and coming. Asked which skill will be most needed in the AI era, Shih said, “I think entrepreneurship, which is defined by pursuit of opportunity without regard to resource constraint, because the underlying substrate of our resources is continually shifting. And so we have to be constantly… on our toes, literally, just to pay attention to these trends and continually reinvent ourselves and our companies.” Raman expanded on that thinking, saying he believed curiosity would be the most valuable skill, while still namechecking the rest of the five Cs that represent critical soft skills in business leadership: curiosity, compassion, creativity, courage, and communication. “We all have to get better at all of those,” he said. “And then you have the sort of habits of resilience, adaptability, learn how to learn quick, learn how to fail fast.” While this week’s news of layoffs was discouraging, Levie said he’s optimistic about the long-term employment outlook as AI expands. 
Ultimately, he believes, AI will result in more hiring, since businesses can get a higher return on investment from each worker. He points to the evolution of the advertising agency as an analogy. In the 1980s, he said, it could take weeks to draw, print, and scale an ad. The rise of Photoshop slashed that turnaround time. Had people in the '80s known what it could do, they would have feared massive layoffs in their field. The industry, of course, continues to thrive, and companies that previously couldn't afford to advertise found themselves able to, thanks to lower costs. Levie suspects the impact of AI will be much the same, only on a broader scale. The technology, he says, will broaden the playing field, giving companies that lack the budget for many of the experts and tools that larger businesses have a chance to compete at the same level. "Each organization will look different, but… just imagine, let's say, the small businesses that always have a structural disadvantage versus large companies because of their lack of access to talent and resources," he said. "If you imagine them all being weaponized with the same expert lawyer and expert marketing and expert product development and software engineer as any kind of mid- or large-size company, what's going to happen next is you're going to have just a tremendous amount of growth of new organizations emerge with lots more productivity in a number of categories." Additionally, he said, AI is still a work in progress. Despite the AI evangelists heralding the changes it will bring, it has few functional uses at the moment for many businesses. That gives owners and workers time to prepare for the changes to come, learn complementary skills, and become familiar with AI's abilities, and with how to maximize its potential. "The reality is that if you drop AI into today's business process, it's going to actually do very little," Levie said. "It's not world-changing. […] We thought AI would work how we do.
It turns out it might be the case that we have to work how AI does, and we have to be actually in service of the agent to make it most productive.” BY CHRIS MORRIS @MORRISATLARGE

Monday, November 3, 2025

Technologists to AI Cheerleaders: Stop Being So Creepy

These days, AI is definitely near the peak of the hype cycle, the point at which pronouncements about a new technology reach fever pitch. But even allowing for that, CEOs of AI companies and other assorted AI boosters have been saying a lot of creepy and extreme stuff lately. Some of these comments are just overenthusiastic salesmanship. Earlier this year, Anthropic CEO Dario Amodei, for example, predicted that AI would be writing 90 percent of all code within six months. That's clearly not coming true. But other predictions are just scary. Amodei also warned that AI would eliminate 50 percent of white-collar jobs within five years. Fortune has a whole roundup of terrifying remarks from OpenAI boss Sam Altman. Sample comment: "Mitigating the risk of extinction from A.I. should be a global priority." There are also the peculiar cases of AI diehards who see the technology in spiritual terms. One former Google AI engineer started an AI-worshipping church. Elon Musk is admittedly not the most reliable narrator, but according to him, Google co-founder Larry Page wants to create "basically digital god." For everyday nontechnical people, these comments are creepy but also confusing. How much of this should we take seriously? A handful of fascinating recent blog posts and newsletters offer some reassuring answers. The vast majority of those in the trenches building AI view such comments as deeply unhelpful and inaccurate. Those who know the technology best wish everyone would talk about AI like a "normal technology."

What AI technologists think

The first of these reports comes from seasoned entrepreneur and tech industry executive Anil Dash. On his blog recently, he highlighted the huge gap between how AI company bosses and influencers talk about AI and how those building the technology talk about it. Those with technical roles but lower public profiles share "an extraordinary degree of consistency in their feelings about AI," Dash claims.
He sums up their stance like this: "Technologies like LLMs have utility, but the absurd way they've been over-hyped, the fact they're being forced on everyone, and the insistence on ignoring the many valid critiques about them make it very difficult to focus on legitimate uses where they might add value." As evidence, Dash cites his conversations with his many friends and contacts within the industry. He also links to a lengthy paper by Princeton AI experts Arvind Narayanan and Sayash Kapoor, which argues we should treat AI as a normal, if revolutionary, technology like electricity or the internet, not as some "potentially superintelligent entity."

Is not being an AI booster bad for your career?

AI engineers know it is possible to build AI that is not centralized in the hands of a few big companies, that treats the creators of the content used to train these models fairly, and that isn't terrible for the environment. They also understand that a reasonable public discussion about AI is necessary to achieve these aims. But many are afraid to speak out publicly, Dash claims. "Mid-level managers and individual workers who know this is the common-sense view on AI are concerned that simply saying that they think AI is a normal technology like any other, and should be subject to the same critiques and controls, and be viewed with the same skepticism and care, fear for their careers," he writes. On her blog, programmer Gina Trapani seconds this view. She too says that a more reasonable discussion of AI can be bad for career advancement. Her most AI-literate friends are also the people with the most sober view of AI's potential and pitfalls. "The majority of people who work with and in technology hold a moderate view of AI, as any other normal technology with valid use cases and real problems that need to be fixed," she writes. But, she continues, "tech people don't talk about measured AI enough (probably because they want to keep their job)."

Stop being creepy about AI!
The take-home message from both Trapani and Dash is directed at those tasked with explaining AI to the general public. You can feel Dash virtually screaming through his keyboard at the Altmans and Amodeis of the world, and their many imitators, when he writes: "Stop being so goddamn creepy and weird about the technology! It's just tech, everything doesn't have to become some weird religion that you beat people over the head with, or gamble the entire stock market on." "It's creepy to tell people they'll lose their jobs if they don't use AI. It's weird to assume AI critics hate progress and are resisting some inevitable future," Trapani admonishes. But there is a takeaway here for everyday entrepreneurs too. If you worry that AI hype is badly overblown and that discussions of the technology would be more helpful if everyone just calmed down, you are far from alone. The vast majority of AI engineers apparently agree with you. Hopefully, that will empower you to push back against overheated hype and have more level-headed conversations about AI with those in your professional circle. EXPERT OPINION BY JESSICA STILLMAN @ENTRYLEVELREBEL