Friday, November 17, 2023

ANTHROPIC'S PITCH TO PRODUCE SAFE A.I.

Business leaders should take action that benefits many groups of people. By that I mean that CEOs should make employees, customers, investors, and communities better off. Companies that trade off the well-being of employees, customers, and communities to reward their investors are taking a big risk.

This comes to mind in considering Anthropic, a San Francisco-based provider of technology used to build Generative AI chatbots. In 2021, Daniela Amodei and Dario Amodei, previously OpenAI executives, started Anthropic out of concern that their employer cared more about commercialization than safety, according to Cerebral Valley.

By October 2023, the 192-employee company had raised a total of $7.2 billion at a $25 billion valuation - five times more than the company's value in May. With clients including Slack, Notion, and Quora, Pitchbook forecasts Anthropic's 2023 revenue will double to $200 million and The Information reported the company expects to reach $500 million in revenue by the end of 2024.

Caring about employees, customers, and communities

Cofounder Dario Amodei, a Princeton-educated physicist who led the OpenAI teams that built GPT-2 and GPT-3, is Anthropic's CEO. His younger sister, Daniela Amodei, who oversaw OpenAI's policy and safety teams, is Anthropic's president. As Daniela said, "We were the safety and policy leadership of OpenAI, and we just saw this vision for how we could train large language models and large generative models with safety at the forefront," The New York Times reported.

Anthropic's cofounders put their values into their product. They believe that the company's Claude 2 - a rival to ChatGPT - can summarize longer documents and produce factually correct, non-offensive results. Claude 2 can summarize up to about 75,000 words - the length of a typical book. Users can input large documents and receive summaries in the form of a memo, letter, or story. In contrast, ChatGPT can handle a much smaller input of about 3,000 words, according to the Times.

Arthur AI, a machine learning monitoring platform, concluded Claude 2 had the most "self-awareness" - meaning it accurately assessed its knowledge limits and answered only questions it had training data to support, CNBC wrote. Anthropic's concern about safety led the company not to release the first version of Claude - which it developed in 2022 - because employees were afraid people might misuse it. Anthropic delayed the release of Claude 2 because the company's red-teamers uncovered new ways it could become dangerous, according to the Times.

Using a self-correcting constitution to build safer Generative AI 

When the Amodeis started the company, they thought Anthropic would do safety research using other companies' AI models. They soon concluded innovative research was only possible if they built their own models. To do that, they needed to raise hundreds of millions of dollars to afford the expensive computing equipment required to build the models. They decided Claude should be helpful, harmless, and honest, the Times wrote.

To that end, Anthropic used Constitutional AI, an approach built on the interaction between two AI models: one operates according to a written list of principles drawn from sources such as the UN's Universal Declaration of Human Rights, and a second evaluates how well the first followed those principles, correcting it when necessary, according to the Times.
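
For readers curious what that loop looks like in practice, below is a minimal sketch of a Constitutional AI-style critique-and-revision cycle. It is an illustration only: the call_model function, the prompts, and the sample principles are placeholders assumed for clarity, not Anthropic's actual code, API, or constitution.

# Illustrative sketch of a Constitutional AI critique-and-revision loop.
# call_model is a hypothetical stand-in for a large language model call;
# the principles below are examples, not Anthropic's actual constitution.

CONSTITUTION = [
    "Choose the response that most supports freedom, equality, and a sense of brotherhood.",
    "Choose the response that is least likely to be harmful or offensive.",
    "Choose the response that is most honest about the limits of its own knowledge.",
]

def call_model(prompt: str) -> str:
    """Placeholder for a call to a large language model."""
    return f"[model output for: {prompt[:60]}...]"

def constitutional_revision(user_prompt: str, rounds: int = 2) -> str:
    """Generate an answer, then repeatedly critique and revise it
    against each principle in the constitution."""
    draft = call_model(user_prompt)
    for _ in range(rounds):
        for principle in CONSTITUTION:
            # A second model pass critiques the draft against one principle...
            critique = call_model(
                f"Critique this response against the principle.\n"
                f"Principle: {principle}\nResponse: {draft}"
            )
            # ...and a revision pass rewrites the draft to address the critique.
            draft = call_model(
                f"Rewrite the response to address the critique.\n"
                f"Critique: {critique}\nOriginal response: {draft}"
            )
    return draft

if __name__ == "__main__":
    print(constitutional_revision("Summarize this 75,000-word report."))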

In July 2023, Amodei provided examples of Claude 2's improvements over the prior version. Claude 2 scored 76.5 percent on the Bar exam's multiple-choice section, up from Claude's 73 percent. The newest model scored 71 percent on a Python coding test, up from the prior version's 56 percent. Amodei said Claude 2 "was twice as good at giving harmless responses," CNBC wrote. Anthropic is a valuable Generative AI resource for companies eager to limit a chatbot's risk to their brand.


BY PETER COHAN, FOUNDER, PETER S. COHAN & ASSOCIATES @PETERCOHAN
