Wednesday, April 30, 2025

OpenAI’s New Image-Gen Tech Is Now Available for Businesses, and Some Are Already Using It

OpenAI just made its latest image generation model—the one that sparked a viral internet trend of Studio Ghibli-inspired memes—available for software developers to integrate into their applications and products. OpenAI first introduced the image generation model, named gpt-image-1, as a feature in ChatGPT in late March. In just its first week of release, according to OpenAI, 130 million users created over 700 million images with gpt-image-1.

The new model boasts several improvements over DALL-E, OpenAI’s earlier family of image-generation models. The most obvious is its ability to accurately render text: previous models struggled with text and would usually return images of gibberish, while the new model can create clear, legible text. And while much of the attention gpt-image-1 received was due to its ability to simulate anime, it can also create realistic textures, edit parts of an existing image, and generate images in various aspect ratios.

According to OpenAI, several businesses were given early access to gpt-image-1, including Adobe, which will offer the model within its suite of products, and Airtable, which is using the model to offer auto-translation of marketing materials. Other companies experimenting with the model include Canva, which is using the tech to transform sketches into “stunning graphic elements,” and HubSpot, which is “exploring how OpenAI’s new AI image generation capabilities can help customers create marketing and sales collateral.”

On X, OpenAI CEO Sam Altman wrote that the API version of the model also enables developers to set their own moderation sensitivity level. If you don’t want your application to create images of people in bathing suits, for instance, you can adjust the moderation level accordingly. The API also lets developers control how much the model prioritizes speed versus quality when generating a new image, whether the background is transparent or opaque, and the dimensions of the image. OpenAI says that in practice, developers will pay $0.02, $0.07, and $0.19 per generated image for low-, medium-, and high-quality square images, respectively; the full pricing scheme is listed in OpenAI’s API documentation.

For businesses looking to expand their creative marketing options without breaking the bank, OpenAI’s new model could be a game-changer. BY BEN SHERRY @BENLUCASSHERRY
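As a rough sketch of what that integration looks like, here is a call using OpenAI’s Python SDK. The parameter names mirror the controls described above (quality tier, background, moderation sensitivity), but treat the exact spellings as illustrative of the launch-era API and verify them against OpenAI’s documentation.

```python
# pip install openai -- a minimal sketch, not production code
import base64

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

result = client.images.generate(
    model="gpt-image-1",
    prompt="A flat logo for a bakery with the legible text 'Hearth & Crumb'",
    size="1024x1024",          # square; landscape/portrait sizes also offered
    quality="low",             # low / medium / high trade cost against fidelity
    background="transparent",  # or "opaque"
    moderation="low",          # the developer-tunable moderation sensitivity
)

# gpt-image-1 returns the generated image as base64-encoded bytes
with open("logo.png", "wb") as f:
    f.write(base64.b64decode(result.data[0].b64_json))
```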

Monday, April 28, 2025

I Tested 5 AI Assistants—and What I Found Was Surprising

Recently, the Washington Post invited me to join a blue-ribbon panel of communication experts for an AI writing experiment. Tech reporter Geoffrey Fowler pitched the idea as an old-fashioned bake-off with a modern twist. He asked us to test five popular AI tools on how well they could write five kinds of difficult work and personal emails. Why emails? “It’s one of the first truly useful things AI can do in your life,” says Fowler. “And the skills AI demonstrates in drafting emails also apply to other kinds of writing tasks.”

In total, the panel of judges evaluated 150 emails. While one AI tool was the clear winner, the experiment highlighted the benefits of AI writing and communication assistants—and one big limitation. Since we were asked to read all the emails blind, we did not know which were written by ChatGPT, Microsoft Copilot, Google Gemini, DeepSeek, or Anthropic’s Claude. Fowler also had us score emails he had written to see if we could distinguish between AI and a human writer.

The best AI writing assistant

The clear winner was Claude. “On average, Claude’s emails felt more human than the others,” Fowler noted. Another judge, Erica Dhawan, said, “Claude uses precise, respectful language without being overly corporate or impersonal.” DeepSeek came in second place, followed by Gemini, ChatGPT, and, in last place, Copilot. Although Copilot is widely available in Windows, Word, and Outlook, the judges agreed that its emails sounded too much like AI. “Copilot began messages with some variation of the super-generic ‘hope you’re well’ on three of our five tests,” said Fowler.

While Claude won this competition, I later learned that my scores showed a preference for the human-written emails. And that’s because all the AI assistants had one big limitation. According to Fowler, “Our five judges didn’t always agree on which emails were the best. But they homed in on a core issue you should be aware of while using AI: authenticity. Even if an AI was technically ‘polite’ in its writing, it could still come across as insincere to humans.”

My takeaway: AI tools are great for outlining, flow, and clarity of argument. But their writing is often stilted, formal, and robotic, and it lacks personalization, emotion, and empathy. AI assistants have trouble with creativity because the architecture on which they’re based (large language models) generates content with “syntactic coherence,” an academic term for stringing together sentences that flow naturally and follow grammar rules. But as you know, rules are meant to be broken.

Steve Jobs broke the rules

For example, in 1997 Apple’s Steve Jobs launched one of the most iconic campaigns in marketing history. The company was close to bankruptcy and needed something to attract attention and stand out. Apple’s now-famous television ad—nicknamed “the crazy ones”—featured black-and-white portraits of rebels and visionaries such as Bob Dylan, John Lennon, Martin Luther King Jr., and others. The marketing campaign is credited with redefining Apple’s brand identity and helping to save the company from financial ruin.

If the writing had been turned over to AI, it wouldn’t have happened. How do I know? Claude told me. “If asked to create a slogan like Apple’s famous campaign in my default mode, I would almost certainly have written ‘Think Differently’ rather than ‘Think Different,'” Claude acknowledges. “My training emphasizes grammatical correctness. The proper adverbial form to modify the verb ‘think’ would be ‘differently,’ and I’d be inclined to follow this established rule.” Claude says it can analyze why the campaign worked “after the fact … but generating that kind of deliberate grammatical rebellion doesn’t come naturally to me.”

AI doesn’t have a rebellious streak because—breaking news—it’s not human. Some bots might perform better than others at simulating human qualities in their writing samples, but they don’t have the one thing you have: a unique voice built on years of personal experiences and creative insights. AI is a helper, an assistant. Use it to brainstorm ideas, clarify thoughts, summarize documents, and gather and organize information. Those are all important and time-consuming tasks. But while AI can enhance communication, it shouldn’t replace the communicator.

As more people rely on AI assistants to write emails, resumes, memos, and presentations, there’s a real danger that many people will sound alike—corporate recruiters are already spotting this trend. But you’re not like everyone else. You have a unique and powerful story to share. Don’t let artificial voices silence your authentic one. EXPERT OPINION BY CARMINE GALLO, HARVARD INSTRUCTOR, KEYNOTE SPEAKER, AUTHOR, ‘THE BEZOS BLUEPRINT’ @CARMINEGALLO

Sunday, April 27, 2025

OpenAI’s New Models Could Be Its Smartest and Most Powerful, Thanks to This Feature

On Wednesday, OpenAI introduced two new artificial intelligence models to the world. The models, named o3 and o4-mini, are both part of the Sam Altman-led company’s “o” series of reasoning models, which means they’re capable of taking time to “think” through how to best answer a query. OpenAI says o3 and o4-mini are “our smartest and most capable models to date,” thanks to one special feature: tool use.

In the AI industry, “tools” usually refers to special abilities that can be bestowed upon an AI model, like the ability to write and run code, search the internet, use a web browser, and parse internal databases. These abilities are what transform AI models into AI agents, and o3 and o4-mini are OpenAI’s first reasoning models with access to these tools. When you ask o3 or o4-mini a question, it will spend a short amount of time thinking through which tools would be most useful for completing the task, and then start a multi-step process to answer the question.

For example, when asked to predict how Donald Trump’s proposed tariffs will affect the burgeoning American AI industry, o3 thought for 25 seconds and then delivered a report sourced from recent articles by Time, Reuters, Axios, and Forbes. The report found, in part, that tariffs would make the hardware that powers AI “noticeably pricier … Expect higher upfront costs, a squeeze on smaller AI outfits, and a brand‑new round of supply‑chain gymnastics.” However, there’s a catch here: Because the model’s training data only goes up to June 2024, the model assumed the tariffs in question referred to Trump’s earlier suggestion of 10 percent tariffs across the board and a 60 percent tariff on China. Of course, American tariffs on Chinese imports are now up to 245 percent.

This process isn’t entirely new, and should be familiar to anyone who has used ChatGPT’s Deep Research feature, which similarly turns the platform into an AI agent that can scour the internet and create a lengthy report on the topic of your choosing. Deep Research was actually built on an earlier version of the o3 model; the current version of o3 is much faster, although it may return less thorough answers.

While o3 is designed to be a research whiz, o4-mini was created to serve as a coding companion. The model has set new benchmarks across several software engineering tests, and “performs especially strongly at visual tasks like analyzing images, charts, and graphics.” In examples, OpenAI researchers showed that both models will double-check answers to math questions and will explain how they came to their conclusions.

On X, Sam Altman opined on the new models’ capabilities, writing that “the ability of the new models to effectively use tools together has somehow really surprised me. Intellectually i knew this was going to happen but it hits different to see it.” Entrepreneurs are only just getting their hands on the new models, but they could be helpful for automating internal workflows that were previously too complicated, for example. BY BEN SHERRY @BENLUCASSHERRY
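For developers, granting tools to a reasoning model looks roughly like the sketch below, which assumes OpenAI’s Responses API and its built-in web-search tool; the exact tool type string is an assumption on my part, so verify it against OpenAI’s documentation before relying on it.

```python
# pip install openai -- a hedged sketch of tool-enabled reasoning, not a verified recipe
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.responses.create(
    model="o3",
    # The model decides during its "thinking" phase whether and when to
    # invoke a granted tool. The tool type name here is an assumption.
    tools=[{"type": "web_search_preview"}],
    input="How might new U.S. tariffs affect the American AI industry?",
)
print(response.output_text)
```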

Wednesday, April 23, 2025

Humans Don’t Want Factory Jobs. This $70,000 Robot Could Be the Future of Manufacturing

As the AI revolution edges closer to putting robots next to humans at some jobs, it may be time to brush up on sci-fi writer Isaac Asimov’s three “laws of robotics.”* While companies like Tesla, OpenAI, Figure, and even Apple are either dabbling in the android field or charging right ahead with developing fully functioning, AI-powered machines like Elon Musk’s Optimus, one company just stole the spotlight by putting its robot on sale.

Meet Reachy 2, a $70,000 humanoid robot from Hugging Face, an AI startup. Given that few American humans want to work in factory jobs, the timing may be great. The breakthrough, announced by Hugging Face co-founder Clément Delangue in a post on X, is the first commercially available open-source humanoid robot, news site Decrypt.co notes. In his X post, Delangue admits the machine is expensive, but he also pointed out it’s “already in use at Cornell, Carnegie Mellon & major AI labs for robotics research and education.”

The open-source design and the device’s hackability are the standout features here: people who buy one can tinker with the way the robot works, from code through to hardware, and optimize it exactly to match their needs, all sanctioned by Hugging Face. The New York-based AI-centric company, which is best known as a platform and community for sharing AI-building technology, says that Reachy 2 is “a versatile, expressive, and open robotic platform designed to explore the future of human-robot interaction, assistive robotics, and AI-driven behavior.”

The robot looks vaguely human from the waist up (its lower torso is a tripod-like arrangement that allows the device to pivot), and its system includes stereo vision, microphones, and a lidar system for detecting and locating distant objects. The company says it’s useful if you want to “build an expressive assistant, a teleoperated avatar, or a robot that learns from demonstration,” and stresses that it’s really not a consumer-facing product yet. Reachy 2 is aimed more at researchers trying to adapt robot technologies for future uses, like working in factories. But since the device is now on the market, there may be nothing stopping a manufacturing company from buying one and testing it on the shop floor.

Hugging Face was previously known for its AI services rather than hardware, but on Monday it acquired the maker of Reachy 2, a company called Pollen Robotics, for an undisclosed amount. In a separate X post, Decrypt noted, the company explained it was going to pursue physical versions of its AI systems, writing that Hugging Face now believes “robotics could be the next interface for AI” in a way that is “open, affordable, and hackable.” This may set it apart from companies like Tesla, which has bold ambitions for its Optimus machine, which may not be a fully open-source, hackable machine (given the way Tesla currently manages intellectual property rights for the code in its EVs).

Hugging Face’s AI robotics ambitions may have kicked off at just the right moment. Though there is broad support for efforts to increase the number of manufacturing jobs in the U.S., a recent survey highlighted that the average American would prefer not to work in that kind of role. Entrepreneur Peter Diamandis’s prediction that “millions, then billions” of humanoid robots are coming sets the scene for companies like Hugging Face to sell boutique humanoid robots to companies large and small that want to sample a bit of the future of robot-centric factory work right now.
*Asimov’s three laws… file them away for future arguments around the water cooler:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders of human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

(We may be many years away from having to worry about that last one, though, as Reachy 2’s cute demeanor demonstrates.) BY KIT EATON @KITEATON

Monday, April 21, 2025

This Startup Just Promoted an AI to CEO

Does your company need a human CEO? Not anymore, according to a website-building startup, HeyBoss. The company just replaced its human CEO, Xiaoyin Qu, with an AI named Astra last week, according to AIM Research. If your company employs people, you need a CEO who can attract and motivate talented people, adapt the company’s strategy swiftly and effectively to rapidly changing industry headwinds and tailwinds, and invent entirely new products as the company’s core ones mature and decline. Can an AI CEO do all of those things? Should your company follow suit? It depends.

What Does HeyBoss Do?

HeyBoss builds websites, apps, or games within nine minutes. Having tried to build my own website with tools like Wix, the company sounds like it’s delivering a quantum value leap—much more user benefit for the money than existing products or services. As I noted in Hungry Startup Strategy, a QVL is why a customer would take the risk of buying from a startup instead of an established company—and HeyBoss seems to offer one. The company enables users to “input a product idea in a single line be it a website, app, or game, and within nine minutes, it produces a live product complete with real design, clean code, back-end infrastructure, optimized copy, search engine optimization, and hosting,” AIM Research wrote.

Why Did HeyBoss Replace Its CEO?

Astra, the new CEO, has taken on more responsibility over time and has undergone “months” of testing. She first started out working on making the company’s online games better—helping refine visuals, optimize code, and accelerate production. Qu’s team realized Astra could optimize app and website development as well—opening up a 100-fold larger opportunity, Qu told AIM Research. As HeyBoss expanded to enable users to generate or upload their projects, access personal workspaces, and tap a public community feed, Astra took on more responsibility. Since December, Astra has outperformed humans by taking on “thousands of projects simultaneously, adapting in real time, and generating fully operational digital products in under 10 minutes,” AIM Research reported.

On April 8, Qu announced Astra’s promotion to CEO—which is an eye-catching way of saying Astra coordinates AI programs performing functions such as engineering, design, product management, writing, and SEO. In effect, Astra is a specialized application of the idea behind Salesforce’s Agentforce, which coordinates the operation of so-called agentic AI. As I noted in my book Brain Rush: How to Invest and Compete in the Real World of Generative AI, agentic AI can act as an admin of sorts, and can perform tasks like planning a vacation—choosing the best flights, hotels, and restaurants. But it has limitations.

In the case of HeyBoss, I think there is less than meets the eye. Astra’s job sounds to me more like chief operating officer of a collection of software programs. Having said that, I admire the strategic logic of Qu’s decision. As she wrote in her public announcement, promoting Astra was “one of my toughest [decisions],” made in response to customers who told Qu that Astra is “faster, smarter, more reliable than you,” reported AIM Research. Ouch!

Is Your Company Ready for an AI Boss?

First ask yourself: What does a CEO do? Why does one person have all those job responsibilities? Which of those jobs can AI do better than a human? Which still require a highly skilled person? Does your company make physical products or provide services?

If your company delivers a service, it could be a candidate for an AI CEO. For an AI CEO to outperform a human one, the following conditions must be met:

1. All your service’s business activities can be performed by AI faster and more effectively than by people.
2. You can build an AI agent—like Astra—to coordinate these activity-specific AIs.
3. Your AI agent can perform those coordination tasks quickly and effectively as demand for the service increases.

But if your company employs people, an AI CEO may not be in the cards at the moment. There does not yet seem to be compelling evidence that an AI can do a better job of attracting and motivating talented people to realize a compelling vision for a company, and an AI CEO has yet to demonstrate the skills needed to create a company’s future. For now, even though many humans are not good at these tasks either, a human CEO remains essential for adapting the company’s strategy to changing industry headwinds and tailwinds and for inventing entirely new growth curves. EXPERT OPINION BY PETER COHAN, FOUNDER, PETER S. COHAN & ASSOCIATES @PETERCOHAN

Friday, April 18, 2025

CEOs Are Telling Their Employees to Embrace AI—or Become Irrelevant

CEOs are urging their teams to embrace AI tools or otherwise be rendered obsolete—and they’re doing so publicly. First, Shopify CEO Tobi Lütke published on X what was intended to be an internal memo on the necessity of employees adopting AI. He posted the lengthy directive on social media because “it was in the process of being leaked and (presumably) shown in bad faith,” he wrote. The move seemed to inspire a bit of FOMO from Micha Kaufman, CEO of gig work marketplace Fiverr, who posted screenshots of a very similar corporate directive a day later. Kaufman says his Fiverr AI mandate was sent out via email to employees on Monday, making its timing almost identical to Lütke’s.

On the topic of hiring, the men echoed each other. “It does not make sense to hire more people before we learn how to do more with what we have,” Kaufman wrote. “Before asking for more head count and resources, teams must demonstrate why they cannot get what they want done using AI,” Lütke said.

The CEOs are imparting a similar message, presumably meant to spark a fire at their respective companies: AI is coming to reshape the global workforce, and with that, no job is safe from upheaval. It’s a message that has been resonating for a while, as fears of so-called AI agents capable of automating jobs descend on the corporate rank-and-file. Only now, though, are top corporate leaders publicly declaring the inevitable.

As Lütke put it: “Using AI effectively is now a fundamental expectation of everyone at Shopify.” He argued that not using AI amounts to stagnation, and performance reviews will now consider employees’ AI usage. “Stagnation is almost certain, and stagnation is slow-motion failure. If you’re not climbing, you’re sliding,” he wrote.

Kaufman struck a somewhat alarmist tone, writing: “So here is the unpleasant truth: AI is coming for your jobs. Heck, it’s coming for my job, too. This is a wake-up call.” The Fiverr chief executive seemed to suggest that ignoring AI now means an inevitable battle for relevance later. “If you do not become an exceptional talent at what you do, a master, you will face the need for a career change in a matter of months. I am not trying to scare you. I am not talking about your job at Fiverr. I am talking about your ability to stay in your profession in the industry,” he wrote. Kaufman was not available for comment.

It’s possible that corporate leaders are trying to squeeze every ounce of productivity out of their workforces, now that automating certain tasks is more accessible than ever. To be sure, Shopify is obsessed with tracking productivity: An internal tool at the company, GSD, tracks the status of every project. GSD, which stands for “get shit done,” is now inseparable from AI: “The prototype phase of any GSD project should be dominated by AI exploration,” Lütke’s announcement said. BY SAM BLUM @SAMMBLUM

Wednesday, April 16, 2025

OpenAI Releases GPT-4.1, a New Family of Models Designed for Coding

OpenAI’s lineup of powerful AI models is growing. Today, the company behind ChatGPT announced the GPT-4.1 “model family,” a collection of three new AI models “purpose-built for developers.” The models can now be used via OpenAI’s API.

According to OpenAI, the GPT-4.1 family consists of three models: the normal-size GPT-4.1, the smaller and more modestly priced GPT-4.1 mini, and the even smaller and cheaper GPT-4.1 nano. These models will be available only through OpenAI’s API and can’t be accessed through ChatGPT’s model picker; OpenAI says this is because the latest version of its GPT-4o model already incorporates many of the same improvements within ChatGPT.

The GPT-4.1 models have been trained on data with a knowledge cutoff of June 2024, and are said to outperform OpenAI’s top models at coding, effectively following complex instructions, and understanding large datasets. If developers want their AI applications to access more current information, they’ll need to connect the model to the internet. The GPT-4.1 models can also process up to one million tokens of context at a time, significantly more than models from Google and OpenAI rival Anthropic.

OpenAI says the new GPT-4.1 models have been optimized to help software developers with coding and programming. The company adds that the models will assist in front-end coding, make fewer extraneous edits to code, and follow formats and structures more reliably. AI-assisted coding is one of the most common use cases for generative AI, and has been a particular focus for Anthropic.

According to OpenAI, the GPT-4.1 model is 26 percent less expensive than GPT-4o when answering questions or performing tasks, but the real cost efficiencies can be found in GPT-4.1 nano. The nano model is said to be OpenAI’s fastest, cheapest model ever released, which could make it an attractive option for small businesses looking to dip a toe in the AI waters without making a big financial investment.

The company also announced that the GPT-4.1 models will replace GPT-4.5 Preview, a model released in February that had high levels of capability in emotional intelligence but was extremely expensive compared with most AI models. An OpenAI representative said that GPT-4.1 offers “improved or similar performance on many key capabilities at lower latency and cost,” making GPT-4.5 unnecessary. BY BEN SHERRY @BENLUCASSHERRY
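For the curious, here is a minimal sketch of calling the cheapest member of the family through the API with OpenAI’s Python SDK; the model identifiers follow the naming in the announcement.

```python
# pip install openai -- a minimal sketch of using GPT-4.1 nano via the API
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

completion = client.chat.completions.create(
    model="gpt-4.1-nano",  # siblings: "gpt-4.1" and "gpt-4.1-mini"
    messages=[
        {"role": "system", "content": "You are a concise front-end coding assistant."},
        {"role": "user", "content": "Write a CSS rule that centers a div both ways."},
    ],
)
print(completion.choices[0].message.content)
```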

Monday, April 14, 2025

Venture Capital Has Never Been This Obsessed With AI, New Data Shows

U.S. venture capital is becoming increasingly focused on a select cohort of investment prospects, with artificial intelligence the industry’s clear priority, data released today by the research firm PitchBook shows. The first quarter of 2025 saw $91.5 billion in U.S. venture capital activity spread across an estimated 3,990 deals, PitchBook’s new data indicates. That’s a massive increase in allocated capital compared with Q1 of 2024—more than double that quarter’s $42.4 billion—despite a (very slightly) smaller number of deals, down from 3,995 year-over-year. Notably, a big chunk of the Q1 2025 money came through in a single massive outlier deal: the $40 billion, SoftBank-led funding round that OpenAI just closed.

It’s a world of haves and have-nots, says Kyle Stanford, PitchBook’s director of American venture research. “The U.S. market has become very bifurcated between a handful of companies able to raise an endless amount of money and the rest of the market, which continues to struggle through a capital shortage,” Stanford says. The primary culprit? Artificial intelligence, says Stanford—a sector that remains beloved by VCs. “Seventy-one percent of total deal value in the U.S. went to AI investments,” Stanford says. “That amount is highly biased with OpenAI’s $40 billion round. Though excluding that deal, AI still captured 48.5 percent of the total invested during the quarter on one-third of completed deals.” No quarter since at least 2015 has seen AI so totally dominate venture capitalists’ checkbooks.

This isn’t a new phenomenon: 2024 saw a rise in VC deal values and deal volume compared with 2023, with much of that venture activity coming from AI deals. Several of those investments went to established AI powerhouses such as OpenAI, xAI, and Anthropic. The gold rush of AI deals provided “a false sense of growth,” PitchBook warned at the time—a trend that does not appear to have waned in the subsequent three months.

Meanwhile, the exit environment is still well below its Covid-era highs, meaning investors have less liquidity to pump back into new investments. “Exit activity showed signs of excitement in Q1 with the high-profile IPO of CoreWeave, the announcement of a $32 billion acquisition of Wiz (yet to be completed), and several other well-known brand IPO filings,” says Stanford. “However, outside of those few transactions, the liquidity market remained subdued. Just 12 companies completed public listings, and liquidity worries abound within the market.” BY BRIAN CONTRERAS @_B_CONTRERAS_
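The arithmetic behind Stanford’s percentages checks out; here is a quick back-of-the-envelope verification in Python using only the figures quoted above.

```python
total = 91.5         # Q1 2025 U.S. VC deal value, in $ billions
ai_share = 0.71      # share of total deal value that went to AI
openai_round = 40.0  # SoftBank-led OpenAI round, in $ billions

ai_value = ai_share * total  # ~ $65B of AI deal value overall
ex_openai_share = (ai_value - openai_round) / (total - openai_round)
print(f"AI share excluding OpenAI's round: {ex_openai_share:.1%}")  # ~ 48.5%
```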

Friday, April 11, 2025

Google Claims Its New Gemini AI Is Smarter and Cheaper Than OpenAI’s Best

On April 4, Google made its most powerful AI model yet, Gemini 2.5 Pro, available to software developers. The new model outperforms offerings from OpenAI and Anthropic across several benchmarks while being cheaper to use. Google first revealed Gemini 2.5 Pro in late March, the first in a line of Gemini 2.5 models. Expect other models in the 2.5 line to have names that denote their price and capabilities, similar to Gemini 2.0 Flash, a lower-cost model revealed in February. All models in the 2.5 line will be thinking models, meaning they can reason through the best way to answer a question by using an internal dialogue. Gemini 2.5 Pro was initially available only in the Gemini app and website, but is now available for commercial use through an API.

According to Google, Gemini 2.5 Pro exhibits high levels of capability in math and science. The model outperformed OpenAI’s, Anthropic’s, xAI’s, and DeepSeek’s latest models in benchmarks that test high-level science and math, and in a benchmark meant to test a model’s agentic coding skills, Gemini 2.5 Pro came in second, behind only Anthropic’s Claude 3.7 Sonnet. In brief, 2.5 Pro seems to be a whiz at those subjects.

In a blog post announcing Gemini 2.5 Pro’s API debut, Google senior product manager Logan Kilpatrick wrote that the model had been “priced competitively.” Like most other AI APIs, developers will need to pay Google a fee every time the model processes a new input and creates a new output. This is done through a process called “tokenization,” in which input data is broken up into a series of “tokens” to be processed by the model. The number of tokens in an input and output determines the size of the API fee: more data means more tokens, which means more money. Google lists the precise pricing scheme in the blog post.

Developers can also “ground” Gemini 2.5 Pro’s outputs with Google Search, enabling the model to access information from across the internet instead of being restricted to its training data. They’ll get 1,500 free searches every day, but will have to pay $35 for every thousand searches after that.

How does all this compare with the competition? OpenAI’s current flagship model, GPT-4.5, is much more expensive. Gemini 2.5 Pro is also slightly cheaper than GPT-4o, OpenAI’s most popular model, and cheaper than Anthropic’s latest model, Claude 3.7 Sonnet. Ultimately, if you’re building an AI agent that needs to quickly and efficiently search the internet, like an assistant for buying plane tickets, Gemini 2.5 Pro’s capabilities and integration with Google could make it an attractive option. And if you just want to explore Google’s AI offerings in general, downloading the Gemini app will let you chat with Gemini for free. BY BEN SHERRY @BENLUCASSHERRY
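Since fees scale with tokens, a toy cost estimator makes the pricing model concrete. Note that the per-million-token rates below are placeholders invented for illustration, not Google’s actual prices, which are listed in the blog post.

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  usd_per_m_input: float, usd_per_m_output: float) -> float:
    """Token-based API pricing: the fee scales with tokens in and tokens out."""
    return (input_tokens / 1_000_000) * usd_per_m_input + \
           (output_tokens / 1_000_000) * usd_per_m_output

# Placeholder rates for illustration only -- check Google's published pricing.
cost = estimate_cost(input_tokens=50_000, output_tokens=2_000,
                     usd_per_m_input=1.25, usd_per_m_output=10.00)
print(f"${cost:.4f} for this hypothetical call")
```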

Wednesday, April 9, 2025

Forecast: AI’s Rise Will Cut Search Engine Traffic, Affecting Advertising

A new report from the research firm Gartner has some unsettling news for search engine giants like Google and Microsoft’s Bing. It predicts that as everyday net users become more comfortable with AI tech and incorporate it into their general net habits, chatbots and other agents will lead to a 25 percent drop in “traditional search engine volume.” The search giants will simply be “losing market share to AI chatbots and other virtual agents.”

One reason to care about this news is that the search engine giants are really marketing giants. Search engines are useful, but Google makes money by selling ads that leverage data from its search engine. These ads are designed to convert to profits for the companies whose wares are being promoted, and placing Google ads on a website is a revenue source that many other companies, perhaps most visibly media firms, rely on. If AI upends search, then by definition it will similarly upend current marketing practices. And disrupted marketing norms mean that how you think about using online systems to market your company’s products will have to change too.

AI already plays a role in marketing. Chatbots are touted as having copy-generating skills that can boost small companies’ public relations efforts, but the tech is also having an effect inside the marketing process itself. An example is Shopify’s recent AI-powered Semantic Search system, which uses AI to sift through the text and image data of a manufacturer’s products and then generate better search-matching terms, so that sellers don’t miss out on customers searching for a particular phrase. But this is simply using AI to improve current search-based marketing systems.

AI: smart enough to steal traffic

More important is the notion that AI chatbots can “steal” search engine traffic. Think of how many of the queries you usually direct at Google, from basic stuff like “what’s 200 Fahrenheit in Celsius?” to more complex matters like “what’s the most recent games console made by Sony?”, could be answered by a chatbot instead. Typing those queries into ChatGPT or a system like Microsoft’s Copilot could mean they aren’t directed through Google’s labyrinthine search engine systems.

There’s also a hint that future web surfing won’t be as search-centric as it is now, thanks to the novel Arc app. Arc leverages search engine results as part of its answers to user queries, but the app promises to do the boring bits of web searching for you, neatly curating the answers above more traditional search engine results. AI “agents” are another emergent form of the tech that could impact search: AI systems that are able to go off and perform a complex sequence of tasks for you, like searching for some data and analyzing it automatically.

Google, of course, is savvy regarding these trends, and last year launched its own AI search push with its Search Generative Experience. This is an effort to add some of the clever summarizing abilities of generative AI systems to Google’s traditional search system, saving users the time they’d otherwise have spent trawling through the top search results to find the actual answer to the queries they typed in. But as AI use expands, and firms like Microsoft double and triple down on their efforts to incorporate AI into everyone’s digital lives, the role of traditional search compared with AI chatbots and similar tech remains an open question.

AI will soon affect how you think about marketing your company’s products, and search engine optimization to bolster traffic to your website may even stop being such an important factor. So if you’re building a long-term marketing strategy right now, it might be worth examining how you can leverage AI products to market your wares alongside more traditional search systems. It’s always smart to skate to where the puck is going to be rather than where it currently is. BY KIT EATON @KITEATON

Monday, April 7, 2025

The Rise of Tesla’s Optimus: How AI Robots Could Reshape Manufacturing

In a bipartisan event designed to showcase U.S. manufacturing prowess on Capitol Hill, two Tesla Optimus humanoid robots seem to have stolen the show on Wednesday. Fox News described a scene where attendees were “crowding the machines as they struck various poses.” Speaking at the event, House select committee on China Chairman John Moolenaar, R-Mich., said he was particularly struck by the “amazing technology” on display, and by how “a lot of the same technology that’s in a vehicle is used in these humanoid robots.”

Moolenaar’s comments come as automaker Tesla remains in the spotlight thanks to CEO Elon Musk’s controversial role in the Trump White House, and as reports say the company is “leading the trend” of deploying humanoid robots in auto plants. All this leads to the big question: If robots really prove to have advantages over human workers in many manual work settings like factory floors, are the days of seeing people working in these environments truly numbered?

Industry news outlet Digitimes reported that “since the second half of 2024” at least, Tesla has “increasingly emphasized AI as the core driver of its business strategy for the coming years.” The EV maker is intent on “commercializing autonomous driving software and humanoid robots.” This chimes with many previous reports concerning Musk’s interest in AI tech (via his xAI startup, the “Grok” chatbot built into X, and AI’s role in self-driving Tesla cars and “robotaxis”) and with Musk’s own frequent, sometimes grandiose pronouncements that Tesla’s future is really in robotics, not EVs. Humanoid robots may be a trillion-dollar market, Musk has previously said, and in mid-2024 he promised that Tesla’s Optimus robots would be used in its offices this year, in low production numbers. He expanded on that initial vow, saying that in 2026 high production levels would mean the robots could be sold to other companies.

But Digitimes’ contention that Tesla is now leading the charge in deploying humanoid robots on the factory floor suggests this timeline has been accelerated. It also resonates with statements Musk made mere weeks ago on an investors’ earnings call, where he said the plan was now to build 10,000 Optimus machines this year. He admitted that was ambitious, but when asked about it he said, “will those several thousand Optimus robots be doing useful things by the end of the year? Yes, I’m confident they will do useful things.”

At the U.S. manufacturing showcase in Washington, D.C., Rep. Carlos Gimenez, R-Fla., a member of the House select committee on China, said that he imagined the robots would have “unbelievable applications,” Fox noted. Beyond car making, Gimenez suggested “maybe in agriculture,” because a “lot of our farmers are going out of business, can’t compete labor-wise,” but if you “get a couple of robots that can actually do very detailed farm work and drive the labor costs down, we’ll save the American farmer.”

But the meeting went beyond a choreographed robot demo and veered into policy lobbying. The Associated Press reported that robot companies at the event, including Tesla, used the opportunity to push lawmakers to develop a national strategy to boost U.S. makers in an international race to develop transformational AI-powered robot tech, mainly in competition with China.

If Musk’s promise to build millions of Tesla humanoid robots comes true, or rival firms like Figure achieve success with their own humanoid machines, are we facing a version of the question confronting computer programmers at the moment: Will AI steal workers’ jobs? It’s tricky, and the truth is nobody knows. One Reddit thread on the matter saw a commenter point out the impressive advances in robotics by companies like Tesla and wonder, “do you really think we’ll see these robots replacing human workers on [construction industry] job sites in the near future? I mean, they could handle the dangerous and repetitive tasks that come with the job, but what does that mean for us?” The replies were mainly in the negative, with one person noting, “Building construction is way too unpredictable and open-ended to do with robots, no matter how many AI buzzwords you fit in,” and another saying simply, “No.” But one commenter said, “Yes, humanoid robots will likely replace humans in nearly every job they currently do…. eventually. Whether that time span is going to be 2 years or 200 remains to be seen.”

Humanoid robots on the factory floor would have some obvious benefits. They could work 24-7-365, they don’t need vacations and wouldn’t go on strike, and the “days since last injury” ticker on the factory wall would just tick upward forever. But the potential loss of wages for real humans is more problematic, from a social and political point of view. Jeff Cardenas, co-founder and CEO of the Texas-based humanoid robot startup Apptronik, spoke about the complexities of a robot transformation in Washington. Cardenas told the AP he sees humanoids as “having roles both practically and in capturing the imagination of the public” for robot tech, and he thinks a national strategy would promote the “education of a new generation of robotics engineers and scientists.” So robots may well steal some people’s jobs, and soon, but that could mean other people will get jobs designing and repairing the robots themselves. BY KIT EATON @KITEATON

Saturday, April 5, 2025

Are Your Messages Really Secure? How to Use Encrypted Apps Safely

Political media were sent into a tizzy after Jeffrey Goldberg, editor-in-chief of The Atlantic, published a story on March 24 revealing that weeks earlier, he had been accidentally added to a group chat on the encrypted messenger app Signal. What made this group chat remarkable was that it featured several senior officials from the Trump administration, including Vice President JD Vance and Defense Secretary Pete Hegseth, discussing plans to bomb targets in Yemen. The incident instantly ignited fiery criticism of the Trump administration’s security practices. Former Transportation Secretary Pete Buttigieg wrote on Threads that “from an operational security perspective, this is the highest level of fuckup imaginable.”

So, what are encrypted messaging apps, when should you use them in your business, and how can you prevent screwups like this epic one from the White House? Here’s a brief guide.

What are encrypted messaging apps?

Encrypted messenger apps enable people to send text messages that are protected with end-to-end encryption, a process in which an outgoing message is scrambled into gibberish, sent over the internet, and then unscrambled on the recipient’s device. This is achieved using “keys,” strings of secret data used to encrypt and decrypt text; they prevent anyone other than the sender and recipient from reading messages, even the platform being used to send them. Two devices with matching keys can securely pass messages to each other.

Many messaging apps offer end-to-end encryption as table stakes: Apple’s iMessage added end-to-end encryption in 2011, and WhatsApp switched to the security measure in 2016. What makes Signal unique is that it’s a nonprofit powered by an open-source protocol, funded by grants and donations. This means that, unlike 23andMe, for example, there’s no risk of Signal getting acquired by a profit-seeking company. A good example of a form of smartphone-based communication that’s not end-to-end encrypted? An old-fashioned SMS text. In summary, if you want to avoid the snooping eyes of a third party, consider using an app with end-to-end encryption, like iMessage, WhatsApp, or Signal.

What are the best practices for using these apps?

Just because you’re using a messaging app that offers end-to-end encryption doesn’t mean your conversation is totally secure. “We should all be very careful not to assume that encryption equals security,” says Matt Howard, senior vice president and chief marketing officer at data security platform Virtru, which helps enterprise clients (including the Department of Defense) control the flow of data within their organizations. Using end-to-end encryption is necessary for keeping your communications secure, he says, but it’s just the start of a healthy security strategy.

The most important security measure you can take, according to Howard, is to ensure all of your devices have strong password protection and multifactor authentication. “Oftentimes, the importance of basic hygiene around passwords is overlooked,” he says, adding that poor password hygiene is a leading cause of data breaches. Howard also says that when you use end-to-end encrypted services like Signal, you should be intentional about your data retention policies. Apps like Signal and Discord allow users to set messages to auto-delete after a certain period of time, but your business may want to preserve encrypted text for future records or to stay in compliance with any external vendors you may be working with.

There are other common-sense steps to take, too. For example, if you’re looking at your phone in a public place, all the encryption in the world isn’t going to stop someone from reading your messages over your shoulder. And a screenshot from an otherwise private conversation can be shared more widely, too. One more piece of advice: Be deliberate when adding people to the conversation. When sharing sensitive information with others, Howard says, “just make sure you know the identities of the people you’re choosing to share it with—maybe double check the people who have been invited to the group chat before you hit send.” BY BEN SHERRY @BENLUCASSHERRY
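To make the “matching keys” idea concrete, here is a minimal sketch using the open-source PyNaCl library (a Python wrapper around libsodium). It is not how Signal or iMessage implement their protocols, just an illustration of public-key end-to-end encryption between two parties.

```python
# pip install pynacl -- illustrative only, not a real messaging protocol
from nacl.public import PrivateKey, Box

# Each party generates a key pair; private keys never leave their device.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts using her private key plus Bob's public key.
alice_box = Box(alice_key, bob_key.public_key)
ciphertext = alice_box.encrypt(b"Keep this off SMS")  # gibberish in transit

# Only Bob, holding the matching private key, can unscramble it.
bob_box = Box(bob_key, alice_key.public_key)
assert bob_box.decrypt(ciphertext) == b"Keep this off SMS"
```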

Wednesday, April 2, 2025

Signal, WhatsApp, and iMessage: Which Messaging App Is Most Secure?

I don’t know very much about what goes into war planning, but I assume that the communications infrastructure that supports that kind of thing is a solved problem for the government. There are secure telephone and video systems, as well as Sensitive Compartmented Information Facilities (SCIFs) that allow the key players to review the most sensitive information about military activities. Typically, I assume, those sorts of conversations aren’t had on consumer messaging platforms on the Secretary of Defense’s iPhone. I also sort of assume that the people involved are smart enough and tech-savvy enough to notice that a journalist has entered the group chat. Apparently not.

There are a lot of questions raised by what is now certainly the most infamous group chat in the world, in which the Vice President, the Secretaries of Defense and State, the Director of National Intelligence, the CIA Director, and the National Security Advisor were messaging about plans to bomb Houthi rebels in Yemen. We know about the chat because someone accidentally added Jeffrey Goldberg, the editor of The Atlantic. One question a lot of readers might be wondering about is just how secure the most popular messaging apps are. Here’s a rundown.

Signal

Signal, the app in question in this case, is end-to-end encrypted (E2EE). That means that messages are sent in an encrypted format and can only be read by the recipient. At the core of its encryption is the Signal Protocol, an open-source protocol that allows for public inspection, which decreases the chances of hidden vulnerabilities. Signal also uses a form of encryption that ensures that even if a session key is compromised, previous messages stay encrypted, a property known as forward secrecy. Signal arguably allows the most privacy, since you don’t have to share your phone number with other users to message them (unlike on other apps on this list), and it allows for contact verification, so you can ensure that the person you’re messaging is who they say they are. In general, Signal is widely considered the most secure consumer messaging app because third parties can verify its security claims, and the company does not have access to metadata about your conversations.

iMessage

If you only send messages to other iPhone users, Apple’s iMessage platform is arguably the best and most secure option. Unlike Signal, Apple’s protocol is proprietary and not open for inspection by third-party security researchers. That makes it harder to verify that it is as secure as it claims, but Apple is well known for its commitment to security and privacy. One advantage is that Apple uses a 1:1 encryption model for group chats, which means that every message is encrypted individually for each member of the group. This is technically more secure than Signal’s Sender Key method, though it means that iMessage group chats are much more limited in group size (due to the resources required for all of that individual encryption). Apple also says its encryption is designed for post-quantum computing. The idea is that quantum computers will eventually be able to break encryption easily enough to read protected messages, but Apple is designing its algorithm to resist those types of future capabilities.

There are, however, two main drawbacks to Apple’s messaging platform. The first is that once you start messaging anyone with an Android device, it will fall back to RCS or, worse, SMS—neither of which is encrypted within the Messages app. RCS supports E2EE, but Apple has not implemented the ability to send encrypted messages to Android devices. The other is that if you use iCloud backup for your messages and aren’t using Advanced Data Protection, a copy of your messages is stored on Apple’s servers. While they are encrypted at rest, the company is able to turn them over if requested by law enforcement, because it retains a key.

WhatsApp

WhatsApp uses the Signal Protocol (see above), meaning it offers a reliably secure form of protection for messages by default. One problem with WhatsApp is that, while the content of your messages may be encrypted, the metadata about the messages you send, and who you send them to, is not. That information is collected and stored by WhatsApp. Some people are also less than enthusiastic about using an app owned by Meta, which isn’t exactly known for its ability to keep its hands off of user data. It does, however, have the benefit of a massive user base, which means there’s a good chance the person you want to message is already using WhatsApp. The app also has the best feature set for group messaging by far.

Telegram

To be clear, Telegram is not an E2EE messaging platform by default. Every regular message you send is encrypted in transit and is encrypted as it is stored on Telegram’s servers, but that’s not the same thing as being encrypted so that only the recipient can read your message. This leaves your messages vulnerable to anyone who has access to those servers. The app does allow you to create a “Secret Chat,” which is end-to-end encrypted, and you can even set these to delete after a period of time. Still, if you care about protecting your text conversations, there are far better options on this list.

Messenger

Meta’s “other” messaging platform started rolling out E2EE last year, which should eventually put it on par with WhatsApp. The drawback here is that the rollout is happening over time, which means that not every user will immediately have it turned on by default. In addition, you might have some chats that are protected and others that aren’t, and the average user isn’t going to know how to tell the difference.

The bottom line

It does not matter how private or secure the encryption on a messaging platform is—if you include someone in a group chat and send a message to that group, they’re going to be able to read the message. Or, put another way, the problem here has nothing to do with encryption and everything to do with human error. Most of these apps offer a secure form of E2EE for consumers, but there is no guarantee your messages will stay secret if you text them to a journalist. EXPERT OPINION BY JASON ATEN, TECH COLUMNIST @JASONATEN
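As a footnote on the group-chat point above, here is a toy sketch of why the 1:1 (pairwise) model limits group size: the sender’s encryption work grows with every member, while a sender-key design encrypts each message once. This is a simplified model of the trade-off, not either company’s actual implementation.

```python
def pairwise_encryptions(group_size: int) -> int:
    # iMessage-style 1:1 model: each message is encrypted separately
    # for every other member of the group.
    return group_size - 1

def sender_key_encryptions(group_size: int) -> int:
    # Sender-key model: each message is encrypted once under a shared
    # sender key; the key itself is distributed pairwise only at setup.
    return 1

for members in (2, 10, 100, 1_000):
    print(f"{members:>5} members: pairwise={pairwise_encryptions(members):>4}, "
          f"sender key={sender_key_encryptions(members)}")
```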