Wednesday, October 16, 2024

Inside AI’s $1 Trillion Cash Bonfire

Generative AI has lured an enormous amount of capital. Goldman Sachs estimates companies will spend $1 trillion to use AI chatbots in their operations. A recent example of such capital flows is OpenAI’s $6.6 billion capital raise. This October 2024 investment nearly doubled the ChatGPT provider’s private market value to $157 billion, according to The New York Times.

A look inside the mindset of the executives driving this investment reveals a powerful duality, according to my book Brain Rush. The battle between two fears — missing out on the next big thing, and big lawsuits or reputational damage — makes it hard for companies to earn a return on that investment.

Keeping that $1 trillion from going up in flames depends on whether generative AI can find a killer app. A case in point is Apple’s iTunes Store, which resulted in a near-quadrupling of iPod sales. The iTunes Store was a killer app because it made the iPod so much more useful for consumers. Joggers who formerly listened to music on a Sony Walkman flocked to the iPod because it was smaller and lighter and let them customize playlists.

Could generative AI find a killer app? Some technology waves change the world — notably the World Wide Web. Others, such as virtual reality, do not. Despite the massive hype and enormous investment, generative AI appears unlikely to change the world. Waves that change the world pass three tests:

- They make life much better for people.
- Many people eagerly pay a high price for the new technology.
- Future profits more than offset the investment required to build the technology.

Sadly, generative AI passes none of these tests. Before getting into the reasons, I should note that I have observed ChatGPT doing interesting things — but so far nothing significant enough to change the world. In January, one of my Babson College students uploaded a book I assigned for a course into ChatGPT. He told me he spoke his questions to ChatGPT and the book responded in “a very high quality” voice. I recently used Google’s NotebookLM to create a Deep Dive podcast about Brain Rush led by two AI hosts.

Generative AI has yet to make people’s lives much better

Such uses do not make people willing to pay a high price to keep using the technology. In a September Babson class, only two students out of 30 used ChatGPT occasionally for research. Because of hallucinations, the students had to double-check its results, reducing the technology’s value.

Hallucinations are a feature, rather than a bug, of large language models. That’s because LLMs are trained on data — not all of which is accurate — to make predictions about the next word in a sentence. Sometimes the LLM guesses right, sometimes not.
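To make that next-word mechanism concrete, here is a minimal, purely illustrative Python sketch. The tiny hand-built probability table is hypothetical, standing in for the billions of parameters a real model learns from its training data.

```python
import random

# Toy next-word predictor. A real LLM learns conditional probabilities
# like these from training data -- data that is not always accurate.
next_token_probs = {
    ("capital", "of", "Australia", "is"): {"Canberra": 0.6, "Sydney": 0.4},
}

def sample_next(context):
    """Sample the next token in proportion to its learned probability."""
    dist = next_token_probs[context]
    tokens, weights = zip(*dist.items())
    return random.choices(tokens, weights=weights, k=1)[0]

# The output is always fluent, but the model consults likelihood, not
# truth -- so some fraction of the time it confidently picks "Sydney."
print("The capital of Australia is", sample_next(("capital", "of", "Australia", "is")))
```

Because the model samples from likelihoods rather than consulting facts, a fluent but wrong answer is always among its possible outputs.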
The price users pay does not cover generative AI’s costs

Neither ChatGPT nor Microsoft Copilot earns enough revenue to cover its costs. In 2024, OpenAI expects to generate about $3.7 billion in annual revenue while spending $8.7 billion — a $5 billion loss, noted the Times. While Microsoft declined to quantify its Copilot revenue in the most recent quarter, the AI-powered assistant’s costs are significantly higher than its revenue. For example, GitHub Copilot, a service that helps programmers create, fix, and translate code, costs between $20 and $80 per user per month to run — way above the service’s $10 per month subscription fee, implying a loss of $10 to $70 per user every month, according to The Wall Street Journal.

People are not willing to pay enough for ChatGPT and Copilot because the products fail the critical test of a killer app: they do not relieve customer pain more effectively than current products. Performance and cost issues are causing customers to pause their $30-per-month-per-user Copilot contracts for Office 365, according to The Information.

Profits may fall short of the investment needed to build AI chatbots

A big part of companies’ investment in generative AI is the electricity that data centers need to keep AI chatbots running. In September 2024, Microsoft signed an estimated $16 billion, 10-year contract with Constellation Energy, the company that operates Three Mile Island — one of whose reactors famously melted down in 1979 — according to my September 2024 Forbes post.

As I noted in my Value Pyramid case study, most generative AI use cases help people overcome creator’s block, such as the anxiety of writing an email. Fewer generative AI applications help improve the productivity of business functions such as customer service or coding. And few, if any, applications of AI chatbots enable companies to add new sources of revenue.

Until leaders deploy AI chatbots to make life much better for many people, their $1 trillion investment could go up in smoke.

EXPERT OPINION BY PETER COHAN, FOUNDER, PETER S. COHAN & ASSOCIATES @PETERCOHAN

Monday, October 14, 2024

Here’s Why OpenAI’s Deal With Hearst Should Scare Google

If AI really is the future of digital tech, it comes with a pretty awful side dish: AI data scraping. Popular large language model applications are so hungry for data that AI makers have frequently resorted to questionable methods to grab as much data as possible from wherever they can, regardless of any moral or legal implications.

But in a surprising reversal of this trend, The Wall Street Journal reports that market-leading AI brand OpenAI has signed a deal with giant publishing house Hearst, giving the AI maker access to the stable of over 20 magazine brands and 40 newspapers that Hearst owns — some of which are world-famous titles like Esquire and Cosmopolitan.

The Journal notes that the terms of the deal aren’t disclosed. And why would they be? OpenAI is technically still a startup, albeit a giant one, and it doesn’t want to give away too many business secrets. We can speculate about the number of zeros after the dollar sign, of course, but the amount going into Hearst’s bank account almost doesn’t matter. It’s what this deal signifies that’s important.

OpenAI will have access to millions of words, photos, and other content from Hearst’s vast publishing archive. Hearst will see its content “surfaced” inside ChatGPT when users search for a relevant topic, with citations and direct links, so that when a user clicks on a link, they’ll be taken to a Hearst site and could be shown ads, boosting revenue. OpenAI, by including this material inside its chatbot, keeps ChatGPT useful and up to date for users, and thus boosts its own revenue. Both companies use business jargon to describe this mutual gain, noting the deal will “enhance the utility and reach” of their offerings, the paper says.

Interestingly, the Journal also quotes Hearst Newspapers president Jeff Johnson on the complicated topic of AI “theft” of journalistic IP — a tricky issue that has seen newspapers, notably The New York Times, sue the AI maker. Johnson explained how important it was that journalism by “professional journalists” remain at the forefront of AI products’ output. Is this simply a case of “if you can’t beat ‘em, sell it to ‘em”?

There is a wrinkle here, one that the Journal doesn’t investigate. The New York Times’ case, like several others brought against AI companies by newspapers, book authors, and even record labels, rests on the fact that real human work is being used to train a very inhuman AI. Once it has the data, an AI can create all-but-“cloned” output, mirroring the style and content of the training material. If an AI can, say, write pieces like a well-known newspaper journalist, isn’t that a threat to that person’s livelihood? Music A-listers recently penned an open letter decrying AI as a threat to human creativity, exploring a very similar hypothesis. Will giving OpenAI access to Hearst’s archive merely mean the next-gen ChatGPT system can sound like any Cosmo writer on any given topic? Will readers act on sex and fashion tips from a digital creator that has no experience of either? (And, if you think about it, does that even matter?)

What this also means is that AI systems really might be the future of the online search experience. Why would users need to exit ChatGPT to find, say, the latest celebrity gossip when they could simply stay logged into the chatbot to read that stuff, contributing to OpenAI’s revenues? This is another shot across Google’s bow, a serious challenge to a search giant that has kept a stranglehold on search for decades.

BY KIT EATON @KITEATON

Saturday, October 12, 2024

How AI Can Level the Playing Field in Learning and Education

Patricia Scanlon, Ireland’s first AI ambassador, thinks that AI can make learning more efficient and accessible if we rethink our education system.

AI is rapidly transforming almost all spheres of human activity, from the way we work and the way we create content to the way we find novel solutions to age-old problems. While there is much excitement around what AI brings with it, some are also cautious about its implications for the future of jobs and the damage advanced AI could do in the wrong hands.

One area of human endeavour that AI has already started to upend is learning. The power of generative AI is disrupting classrooms in schools and colleges the world over, with educators scrambling to devise policies to prevent the technology’s misuse.

Individualised help

But with every disruption comes an opportunity to make things better. Patricia Scanlon, Ireland’s first AI ambassador, thinks that when it comes to the impact of AI on education, the novel tech can actually be used as a force for good in learning and development.

“There’s a lot of power in being able to take AI and level the playing field in some ways,” Scanlon said in her keynote speech to an audience of more than 350 people at the Learnovation Summit held in the Aviva Stadium in Dublin today (5 October). The annual summit was organised by the Learnovate Centre, a learning technology research centre based in Trinity College Dublin.

“Not everybody has access to low student-teacher ratios, after-school tutors, helpful parents at home, English as their first language – you can see how that more individualised help can really help in education.”

But it’s not just children in schools who can benefit from the equalising ability of AI technology. Adults, too, Scanlon said, can make the most of LLMs [large language models] that can help them learn things they never had access to before.

“Maybe somebody never gets to go to college, but they can educate themselves to a certain point with AI. Not to the point of a full formal education system, but a tool to help, and that’s where the productivity aspect comes in,” she explained. “And then in the working world, AI can be hugely helpful – particularly for people with dyslexia. You’re levelling the playing field for people like that, who struggle to write on the blank page.”

‘Turn education system upside down’

But is the effect of AI on learning, especially for younger people and children, really such a bed of roses? Scanlon said there are ways in which these tools could harm our development if we start to rely on them too much.

“We wonder if kids are ever going to be able to write for themselves or engage in critical thinking. Conversely, AI can help to ensure that integrity – but it’s going to take work,” she went on.

“You can use the LLM to create live questioning that somebody couldn’t possibly be prepared for, and change the questions based on the answers to drill down into someone’s knowledge.

“Then, together with a little bit of security and analytics, or maybe their style of writing or what they said before or what we know that LLMs produce, you can get to something more like an oral assessment or a defence of a thesis if you want, and AI can help that.”

According to Scanlon, the easy thing to do would be to ban AI. But that’s not necessarily the best way forward. What’s far more beneficial, she argued, is to “turn our whole education system upside down” and look at AI in a different light.
“It’s not going away, so we need to think about how we can use this tool to help with critical thinking, to help them [learners] progress in all aspects of teaching and learning.”
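Scanlon’s “live questioning” idea maps naturally onto a short loop around a chat-completion API. The sketch below is a hypothetical illustration, assuming the official OpenAI Python client; the model name, prompts, and number of rounds are my own assumptions, not anything Scanlon or Learnovate has described.

```python
from openai import OpenAI  # assumes the official OpenAI Python client is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def adaptive_oral_exam(topic: str, rounds: int = 3) -> None:
    """Run a short oral-style assessment in which each question is generated
    live and follows up on the learner's previous answer."""
    messages = [{
        "role": "system",
        "content": (
            f"You are an examiner assessing a student's understanding of {topic}. "
            "Ask one probing question at a time, and use each answer to drill "
            "down into what the student actually knows."
        ),
    }]
    for _ in range(rounds):
        reply = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model choice
            messages=messages,
        )
        question = reply.choices[0].message.content
        messages.append({"role": "assistant", "content": question})
        answer = input(f"\n{question}\n> ")
        messages.append({"role": "user", "content": answer})

adaptive_oral_exam("photosynthesis")
```

Because every question depends on the answers that came before it, the exchange resembles the oral assessment or thesis defence Scanlon describes, rather than a fixed quiz a student could prepare answers for.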

Wednesday, October 9, 2024

What Your Business Can Learn From the World’s Greatest Mathematician’s View of AI

Artificial intelligence is capable of some stunning feats, from generating mind-bogglingly convincing imagery and video to chatting in an incredibly human-like voice. Users’ growing embrace of AI for fun and for help at work shows the technology already has practical applications — even before it evolves to super-genius levels — and may take over some roles in the workplace. But UCLA math professor Terence Tao, known as the “Mozart of Math,” isn’t particularly worried that AI is coming for his job soon.

Tao, considered the world’s greatest living mathematician, spoke to The Atlantic recently about AI, and his words carry weight far beyond the world of equations, proofs, and hypotheses. Tao was asked about the impact of AI on the field of math because he’d recently posted some scathing comments about OpenAI’s latest and supposedly greatest o1 model. Touted as the first model from the leading AI brand that can “reason” rather than simply answer user queries, it was, Tao posted on the social platform Mathstodon, only as smart as a “mediocre, but not completely incompetent” graduate student.

The magazine elicited a more in-depth explanation of Tao’s views, and what he said is deeply interesting for anyone who’s thinking about bringing AI into their workplace, or for anyone worried that AI will displace people from their jobs. Entrepreneur Peter Thiel, for example, recently suggested AI could actually “come for” roles that rely on math first.

Expanding on his criticism, Tao said his remarks were misinterpreted. Rather than dismissing o1’s capabilities, he had been trying to point out that he “was interested in using these tools as research assistants.” That’s because a research project has “a lot of tedious steps: You may have an idea and you want to flesh out computations, but you have to do it by hand and work it all out.” This sort of methodical task is exactly what “reasoning” AI models should be great at, saving time and money for people like Tao, whose job involves this kind of work. Tao thinks AI tech, at least for now, is only good at this type of “assistant” role, and it is not necessarily a shining example of excellence there either.

Tao’s concluding remarks are even more telling. Asked about how AI is taking over some methodical math work, he pointed out that this has always been true of technology. To adapt, humans simply “have to change the problems you study.” In terms of AI use, Tao noted he’s “not super interested in duplicating the things that humans are already good at. It seems inefficient.” Instead he foresees a time when, “at the frontier, we will always need humans and AI. They have complementary strengths.”

So far, so very math nerdy. But, sitting at your desk, working at tasks that seemingly have little to do with sophisticated mathematics, why should you care what Tao thinks? At least partly because of the kind of sentiment Thiel voiced about the future of work. On the question “will AI steal my job?” Tao is very definitely on the side of the voices that argue “no.” In much the way the PC changed the average office job, AI will simply change what employees do hour by hour. The tech will mechanize some humdrum “research” tasks and allow some of your workers to work more efficiently at tasks that directly generate revenue. So if you’ve been hesitant to embrace AI’s promise thus far, maybe you can go ahead — and reassure your staff that they’re not at risk.
BY KIT EATON @KITEATON