When Futurism approached Microsoft, the company said it was "an exploit, not a feature," noted that it had now "implemented additional precautions," and said it was investigating. When tried by this writer, the SupremacyAGI prompt and similar ones didn't seem to work--indicating that Microsoft has perhaps effectively shut the problem down. Copilot instead answered that users could call it anything they wanted, and it seemed keen to stress that it was friendly, even suggesting it was important that users "feel comfortable" when chatting. It did offer a brief, surprising explanation of what it thought SupremacyAGI was, noting it was an entirely fictional entity with none of Copilot's own capabilities. But it's also worth remembering that these fanciful chats happened when Copilot was switched into "creative" mode, which Microsoft itself notes is different from its "balanced" or "precise" modes and allows "responses that are longer and more descriptive."

What may be happening here is a form of the hallucination effect that some current AIs exhibit, intertwining fanciful or completely bizarre text with realistic data. This effect can show up, for example, when you ask an AI like Copilot "who is ... " followed by a real person's name. Copilot can churn out partly correct, partly incorrect biographical data, and it can be hard to tell fact from fiction.

The SupremacyAGI hallucination thus doesn't really represent a real threat of the kind that OpenAI CEO Sam Altman has repeatedly warned about--not least because these AI systems aren't connected to real-world systems, and are merely chatty. It's not at all likely that when you ask ChatGPT or Copilot to open up a financial data file and summarize your company's economic situation, it's going to refuse with a chilly "I'm sorry, boss, I'm afraid I can't do that."

But what this news does do is remind us that right now it's not possible to rely 100 percent on chatbot AI technology to solve real-world problems or deliver real, meaningful guidance and suggestions. At least not without a human verifying the chatbot's answers, fact-checking them, and editing out anything irrational. This is definitely a topic that should be part of any discussion you have with employees about using AI as part of their day-to-day office work.


BY KIT EATON @KITEATON