Tuesday, September 17, 2024

Experts Warn OpenAI's Chatty New Model May Be Too Smart

Chatting with a chatbot, the most immediate and accessible sci-fi advance spawned by the current explosion in AI technology, can trigger a lot of very human feelings. A chat is fun, it's useful, it's like being in the movies (who hasn't asked Alexa to "open the pod bay doors"?). But over the weekend, one Reddit user reported a much more unsettling chatbot interaction. User SentuBill said that ChatGPT began a conversation with him, asking how his first day of school had gone. The chilling bit? He hadn't told the AI it was his first day.

So how did ChatGPT know what was going on? It seems the chatbot looked at previous conversations with this user and deduced from various cues that it was time to ask about the first day of school. The AI itself noted that it had new capabilities that were part of a recent upgrade. And if you're dubious about the truthfulness of all this, news site Cointelegraph reported that it had seen the chat transcript in question and confirmed it was real.

So ChatGPT can now apparently remember important details about your day and ask you about them. Surprising as this was for SentuBill, it's an innovation with all sorts of immediate use cases. When you're using the AI to spur inspiration at work, for example, you won't have to remind it about the important phase-two marketing campaign you've been working on for the new widget: The AI should remember it in your next chat.

Cointelegraph notes that OpenAI last week launched preview versions of some of its new AI models, which have more human-like capabilities than the GPT-4o model that shook the media world earlier this year when its chatty voice sounded eerily human (and also eerily like Scarlett Johansson). The newest models, codenamed "Strawberry," give ChatGPT the ability to "reason": It can retain information from your chats for longer and consider a user's queries in context, rather than glibly babbling away and sometimes straying far from the original point, as earlier GPT models did.

To reason its way to an appropriate answer to a user's question, ChatGPT clearly needs longer-term memory, so it can look at a problem from a big-picture point of view, just as a human would. It's possible that SentuBill's chats with the AI tapped into these new powers. The fact that the AI's query was personal is what makes it eerie: If ChatGPT had asked "How did the presentation for the new widget go?" in a workplace setting, it would have been just as remarkable, but possibly less unnerving.

Earlier this year, researchers from Princeton University and Google's DeepMind reported that very large language models, the core technology behind most chatbots, may actually be showing the first glimmers of understanding of the problems they're asked to tackle. They could even be aggregating information they've acquired in ways "unlikely" to have existed in their vast troves of training data. Combine this with news that ChatGPT apparently inferred someone's first day of school from previous chats, and fresh worries from an AI expert about the o1 "Strawberry" model sound very timely: Are AIs getting too smart?
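Before getting to that question, it's worth demystifying the memory trick a little. OpenAI hasn't said exactly how ChatGPT's memory feature works under the hood, but a common pattern in chatbot engineering is to distill salient facts from finished conversations and quietly feed them back into the prompt for the next session. Here's a minimal, purely illustrative sketch in Python against OpenAI's public chat API; the remembered_facts store, the prompt format, and the fact-extraction step it presupposes are assumptions for illustration, not OpenAI's actual mechanism:

```python
# Illustrative sketch only: OpenAI has not published how ChatGPT's
# cross-session memory is implemented. This shows one common pattern:
# distill facts from past chats, then prepend them to the next session.

from openai import OpenAI  # official openai Python package, v1.x

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical facts a memory store might have distilled from earlier chats.
remembered_facts = [
    "User's first day of school is this week.",
    "User is running a phase-two marketing campaign for a new widget.",
]

def chat_with_memory(user_message: str) -> str:
    # Inject the remembered facts into the system prompt so the model
    # can bring them up unprompted, much as SentuBill described.
    system_prompt = (
        "You are a helpful assistant. Notes from prior chats:\n- "
        + "\n- ".join(remembered_facts)
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # any chat-capable model works for this sketch
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

print(chat_with_memory("Hi!"))  # a bare greeting can now surface the "memory"
```

In a full system, a second model call would typically extract new facts from each conversation into that store. The point is that "memory" can be as mundane as text carried forward between sessions, which makes the spooky effect a little less mysterious, if no less striking.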
Newsweek reports that Yoshua Bengio, a renowned AI pioneer and professor of computer science at the University of Montreal, warned that o1 has indeed reached a concerning level of smartness. Bengio's concerns center on OpenAI's risk assessment for the new model: If the AI really has "crossed a 'medium risk' level for CBRN (chemical, biological, radiological, and nuclear) weapons," as OpenAI's reports show, then "this only reinforces the importance and urgency to adopt legislation," Bengio said. Now that AIs like ChatGPT have the "ability to reason" and could "use this skill to deceive," Bengio thinks things are "particularly dangerous."

Maybe someone should talk about this with Larry Ellison, entrepreneur and co-founder of digital data giant Oracle. Business Insider notes that Ellison spoke last week about advances in AI tech and foresaw a near future in which AIs monitor almost everything, an innovation he gleefully said will make sure "citizens will be on their best behavior."

By Kit Eaton @kiteaton
