Friday, June 28, 2024

Is it a Person, or an AI Chatbot? Tests Say Maybe We Can't Tell

We're used to chatting with AIs, and now it seems we can't tell them apart from real people. That's a sign it's time to take advantage of the ways AI can boost your business.

Everyone from cocky entrepreneur Elon Musk to inventor and futurist Ray Kurzweil--as well as plenty of other kibitzers--has predicted that the moment when artificial intelligence becomes as smart as, and then smarter than, a human is very near, possibly as soon as 2027. When it happens, it will be an important moment, because then our human-made digital friends may become more useful than many people are in the workplace, upending the economy. Fresh research suggests chatbots are already at human levels of smarts in some respects, which should make everyone from politicians to megacorporations to one-person businesses sit up and take note: OpenAI's GPT-4 model fooled people into thinking it was human more than half the time.

News site TechRadar reports on work done by researchers in the Department of Cognitive Science at UC San Diego: the team carried out a classic, controlled "Turing test" to see how modern AI chat systems compare with chatting with a real person. In the experiments, 500 participants each had a five-minute text-based conversation with another party, then had to say whether they thought the other participant was a human and explain why. They were chatting with either a real person, OpenAI's GPT-3.5 or more advanced GPT-4 models, or ELIZA--an experimental chatbot from MIT in the 1960s that used simple rules to answer queries, and which was partly designed to mimic classic psychotherapy.

The results are stunning. Humans were correctly spotted by participants 67 percent of the time--which makes sense: our brains are very good at picking up the subtleties that give away real human interaction (though they failed in one in three of the human chats). ELIZA performed surprisingly well for 60-year-old tech--participants thought it was human just 22 percent of the time. But ChatGPT's GPT-3.5 model was identified as human half the time, and GPT-4 was thought to be human 54 percent of the time. As TechRadar puts it, this means the San Diego volunteers were "no better than chance" at identifying GPT-4 as AI (a quick statistical check, sketched below, bears this out).

The Turing test, named for famed early computing pioneer Alan Turing, has long been considered a core criterion for deciding how smart an AI is. Mustafa Suleyman, co-founder of Google's DeepMind AI division, has suggested that a more modern, meaningful version should ask whether an AI is smart enough to act as a CEO, but the Turing test remains useful. Since many interactions with current-generation AI systems happen via text-based chat, the failure of so many of the San Diego test subjects to distinguish an AI chatbot from a human is meaningful.

The lesson from the experiment is clear: AIs aren't just "hype." Since increasingly capable AI models keep being released, with OpenAI's next model, GPT-5, reportedly just around the corner, they're only going to get more convincingly human. If your company has been slow to adopt AI tech, you should probably speed up that process or risk being left behind by your competitors. Or you could choose to avoid AI technology, employ only human staff, and differentiate yourself from competitors by trading on the old-fashioned vibe: "we build our products using real people only!"
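Back to the study's numbers for a moment: to see what "no better than chance" means in practice, here is a minimal statistical sketch (mine, not the researchers'). It assumes, purely for illustration, that roughly a quarter of the 500 participants were paired with GPT-4; the study's exact cell sizes may differ.

```python
# Minimal sketch: is 54% "judged human" distinguishable from 50/50 guessing?
from scipy.stats import binomtest

n_chats = 125                         # assumed GPT-4 cell size (illustrative)
judged_human = round(0.54 * n_chats)  # 54 percent of those chats

result = binomtest(judged_human, n_chats, p=0.5)
print(f"p-value vs. pure chance: {result.pvalue:.2f}")  # well above 0.05
```

A p-value that large means the data can't rule out that participants were simply flipping a coin when judging GPT-4.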
But if you do try the humans-only route, you might want to learn from old Samsung and LG mistakes. About 10 years ago, both tried smartphone slogans that landed weirdly: "Designed for humans" makes you wonder whether other phones are meant for aliens, and "the most human phone ever" was not a winning tagline either. Maybe they should have tested those slogans with an AI chatbot. BY KIT EATON @KITEATON

Wednesday, June 26, 2024

A Clever Hack to Guard Against AI Hallucinations

While large companies have been experimenting with generative AI, small ones are moving slowly. Are small businesses missing out on the benefits of AI chatbots such as Microsoft Copilot and ChatGPT? Or is a more tentative approach wise for small businesses?

Most businesses are holding back. Roughly five percent of companies in the U.S. are using generative AI, according to a Census Bureau survey featured in a report by The New York Times. One small business is finding both value and risk in generative AI. Win-Tech, an aerospace manufacturing company with 41 employees in Kennesaw, Georgia, is using ChatGPT for "writing emails to employees, analyzing data, and drafting basic procedures for the company's front office," Allison Giddens, a Win-Tech co-president, told the Times.

Win-Tech faces challenges in its efforts to deploy generative AI to boost productivity. For example, ChatGPT sometimes gives "off base" responses, the company must take care not to share proprietary information with the chatbot, and there are not yet AI applications for boosting manufacturing-floor productivity. "There's not a whole heck of a lot of use cases for the shop floor yet," she explained to the Times.

Climbing the AI Value Pyramid

There is no one-size-fits-all approach to generative AI. Your generative AI strategy depends on your objectives, your capabilities, the needs of your customers, and the strengths and weaknesses of your competitors, as I describe in my new book, Brain Rush: How to Invest and Compete in the Real World of Generative AI. One approach to generative AI is to climb what I call the value pyramid--a set of three broad uses for AI chatbots whose adoption by companies grows less common as you approach the top. The generative AI value pyramid has three levels.

Overcome creator's block

At the base of the pyramid are the many ways people use AI chatbots to get started on an activity--such as writing an email or a report, creating a photo or video, or coding software. By helping overcome creator's block, AI chatbots can increase your people's productivity. This is the easiest way for small businesses to get value now. However, since other companies can do the same, your advantage will be fleeting.

Boost customer service and sales productivity

The middle layer of the value pyramid can increase your productivity. For example, in the temporary job placement industry, generative AI can significantly reduce the number of candidates a recruiter sends to a company before finding a match. AI chatbots can also help a company resolve customer questions much more quickly. However, such customer-facing applications carry risks--most notably the risk of sharing proprietary information with the world, and hallucinations: incorrect responses to a prompt that can damage a company's reputation.

CEOs fear such hallucinations, and with reason. Google's AI advised people to add glue to pizza, Forbes careers contributor Jack Kelly noted. And Air Canada's AI chatbot made up a refund policy for a customer--and a Canadian tribunal forced the airline to issue a real refund based on its AI-invented policy, Wired reported. If you can overcome these risks, the second level of the value pyramid can make you more productive in a way that is difficult for rivals to copy, giving you a more sustainable competitive advantage.

Create new growth curves

The top level of the pyramid is the most valuable one.
I think companies at this level aspire to create new growth curves for their business by using generative AI to help their customers grow faster. For example, Boston-based Bullhorn is striving to help recruiting firms grow faster by making them much more productive at selling, so each salesperson can sell more, as I wrote in a June Inc. post. For now, though, most small businesses should focus on the first two levels.

Protect your customer relationships from AI hallucinations

Technology companies are offering solutions that could protect your customer relationships from AI hallucinations. An example is Aporia, a Tel Aviv- and San Jose, California-based AI control platform. In May, Aporia introduced "real-time guardrails for multimodal AI applications," according to SiliconAngle. The guardrails let engineers add a layer of security and control between the app and the user. "The system detects and mitigates 94 percent of hallucinations before they reach users via chat, audio or video," reported Israel21c.

Aporia says its technology works well. "Our competition is trying internal workarounds. They are over-engineering prompts. 'If there is profanity, don't answer.' Aporia offers better detection in real-time," Aporia CEO Liran Hason told me in a June 3 interview. As your company climbs the value pyramid, use AI controls like these to protect your reputation.

EXPERT OPINION BY PETER COHAN, FOUNDER, PETER S. COHAN & ASSOCIATES @PETERCOHAN
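To make Cohan's point about a "layer of security and control between the app and the user" concrete, here is a minimal sketch of the guardrail pattern such products implement. Everything below is hypothetical--the function names, the blocklist, and the crude grounding check are illustrative assumptions, not Aporia's actual API:

```python
# Hypothetical guardrail layer that sits between users and an LLM.
# Illustrative only -- not Aporia's product or API.
import re

LEAK_PATTERNS = re.compile(r"\b(ssn|social security|internal use only)\b", re.I)

def looks_grounded(answer: str, sources: list[str]) -> bool:
    """Crude hallucination heuristic: the answer should share some
    vocabulary with the retrieved source documents."""
    words = set(answer.lower().split())
    return any(len(words & set(s.lower().split())) > 5 for s in sources)

def guarded_reply(prompt: str, llm, sources: list[str]) -> str:
    """Screen the prompt, call the model, then screen the response."""
    if LEAK_PATTERNS.search(prompt):
        return "Sorry, I can't help with that request."
    answer = llm(prompt)  # assumption: llm is any callable chat client
    if LEAK_PATTERNS.search(answer) or not looks_grounded(answer, sources):
        return "I'm not confident in an answer; let me connect you to a human."
    return answer
```

Real guardrail products replace these heuristics with trained detection models, but the control point is the same: no model output reaches a customer unchecked.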

Monday, June 24, 2024

Anthropic Just Announced Its Most Advanced AI Model Yet. These Are Its Top Use Cases

Anthropic just raised the bar for AI. Again. The AI startup, founded by a team of former OpenAI researchers in 2021, has announced Claude 3.5 Sonnet, the first model in its upcoming Claude 3.5 line of generative AI models. Anthropic says the new model is significantly smarter and cheaper than its current most powerful model, Claude 3 Opus, not to mention twice as fast. It has clear use cases for IT, customer service, and financial services.

In a blog post introducing the new model, Anthropic wrote that Claude 3.5 Sonnet sets new industry benchmarks, outperforming OpenAI's flagship model, GPT-4o, at graduate-level reasoning and coding proficiency. The company also claims the new and improved Claude shows "marked improvement in grasping nuance, humor, and complex instructions, and is exceptional at writing high-quality content with a natural, relatable tone."

So, how can businesses take advantage of Claude 3.5 Sonnet's power? Anthropic says that when given the relevant tools, the model can help IT teams "independently write, edit, and execute code with sophisticated reasoning and troubleshooting capabilities." Need to update an older application that's starting to show its age? You could use Claude 3.5 Sonnet to help migrate your codebase.

Another potential use case is financial analysis. Anthropic says that Claude 3.5 Sonnet "enables financial institutions and fintech companies to quickly identify patterns, extract insights, and make data-driven decisions for investment strategies, risk assessment, and personalized financial advice."

Claude 3.5 Sonnet also represents a leap forward for Anthropic's AI-powered vision capabilities, especially on tasks that require visual reasoning, such as interpreting charts and graphs. The model outperformed GPT-4o at answering questions about the visual contents of charts, documents, and science diagrams. Anthropic says the model can also accurately transcribe text from imperfect images, which the company calls a "core capability for retail, logistics, and financial services, where AI may glean more insights from an image, graphic or illustration than from text alone."

Anthropic also announced Artifacts, a new feature that marks "Claude's evolution from a conversational AI to a collaborative work environment." Going forward, when a user asks Claude to generate a piece of content such as a snippet of code, an image, or a document, a dynamic workspace opens in a window next to the chatbot. In the near future, whole teams and eventually whole organizations will be able to use this workspace to "see, edit, and build upon Claude's creations in real-time, seamlessly integrating AI-generated content into their projects and workflows."

As for how businesses could use Artifacts, Anthropic says app developers can use the feature to incrementally build and refine code; design teams can use it to collaboratively create and refine user interfaces and experiences; and legal teams can use it to analyze contracts and iterate on legal agreements.

James Clough, CTO of legal AI startup Robin AI, said in a statement that Claude 3.5 Sonnet has improved the speed and accuracy of the company's AI-powered contract reviews, allowing legal professionals to focus on strategy while AI handles analysis. Clough added that Sonnet outperformed GPT-4o in Robin's internal testing "and is exactly the sort of leap forward we need to keep demonstrating value to our customers."
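For businesses that want to evaluate the model directly, here's a minimal sketch using Anthropic's Python SDK. It assumes an ANTHROPIC_API_KEY environment variable and the model identifier Anthropic published at launch; check the current documentation before relying on either:

```python
# Minimal sketch: asking Claude 3.5 Sonnet to review a code snippet.
# Assumes: pip install anthropic, and ANTHROPIC_API_KEY set in the environment.
import anthropic

client = anthropic.Anthropic()  # picks up ANTHROPIC_API_KEY automatically

message = client.messages.create(
    model="claude-3-5-sonnet-20240620",  # launch-era model ID (assumption)
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": "Review this function for bugs:\n\ndef avg(xs): return sum(xs) / len(xs)",
    }],
)
print(message.content[0].text)  # responses arrive as a list of content blocks
```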
One other use case suggested by Anthropic? Journalism. The company says that "journalists and editors can use Artifacts to brainstorm, outline, and draft well-researched, engaging stories with Claude's assistance in gathering information and providing feedback." (Professional journalists may very well disagree with that claim.)

The other members of the Claude 3.5 model family--the smaller and cheaper Claude 3.5 Haiku and the larger and more expensive Claude 3.5 Opus--will both be released later this year. Other features in the planning stages include a sense of memory, so that Claude will remember its prior interactions with a user, and the ability for Claude to search the web. The company says its aim is to "substantially improve the tradeoff curve between intelligence, speed, and cost every few months."

In a statement, Anthropic co-founder and CEO Dario Amodei said that Anthropic's goal is to develop an AI system that can "work alongside people and software in meaningful ways," and that new features like Artifacts are early experiments in this direction. "We're committed to improving Claude's capabilities regularly," Amodei said, "always with a focus on features that address real business needs."

Friday, June 21, 2024

Apple Just Killed the Emoji and It's a Stroke of Genius

Let's talk about Genmoji. Despite what is kind of a weird name (presumably, it's meant to combine "generative" with "emoji"), it's a pretty cool idea. It's also sure to be a highlight feature from Apple's WWDC this week. After all, people really like emoji. Like, they really like emoji. More than 92 percent of people say they use emoji on a daily basis, and according to Emojipedia, more than five billion emoji are sent every day on Facebook Messenger alone.

Still, emoji are part of the Unicode standard, and there are only 3,782 of them as of version 15.1. For years, a consortium--of which Apple is a member--has decided which new emoji get included, and that's that. The rest of us just get to choose from its list. That means that while there is almost always an emoji for every text or tweet, sometimes it's hard to find the perfect way to express whatever it is you're trying to say. Now, Apple will let you conjure up pretty much whatever emoji you want. The group of people who get to decide which new emoji should be created literally just got a lot larger.

To be clear, Genmoji aren't actually emoji, which are technically text that an app renders as a smiley face or a taco or whatever. Instead, Genmoji are sent as tiny images. You can use them in the body of a text message, or as a tapback--the feature that lets you add emphasis or a reaction to a message. Previously, there were only six tapback options.

Genmoji was one of the things that had been rumored ahead of WWDC, and--to be honest--I was skeptical. The idea that Apple was going to let you create your own custom emoji on the fly seemed like a gimmick. It seemed as though Apple must be desperate to find ways to sprinkle generative AI into its upcoming releases, and if this was the best it could come up with, things must be pretty bad in Cupertino. Obviously, Genmoji wasn't the only thing Apple came up with, but it might be one of the most popular. After all, there are a lot of people who upgrade their iPhone to the latest version of iOS just to make sure they have whatever new emoji have been added. Sometimes people like gimmicks.

Most people don't care about things like the ability to transcribe notes in the Notes app, as helpful as that might be to someone like me who is going to use it all the time. They care about things like customizing their home screen and having more fun and expressive ways to communicate. Emoji have always been a big part of that.

But now, Apple has pretty much put an end to emoji as a standard. Maybe not technically--the consortium isn't going anywhere--but practically speaking, it's not going to matter what new emoji are added, at least not if you have an iPhone. There's no more waiting for a far-off body to approve a set of updates; you can just do it on the fly. To be clear, there's no indication Apple will stop supporting the emoji standard; rather, it's going to give its users the freedom to go beyond the basics.

Not only that, but it's another feature that you can get on an iPhone but not on other devices. If you're a parent looking to buy a phone for your teenager, you can be sure they're going to ask you for an iPhone capable of doing this. Then there's the fact that Genmoji is part of Apple Intelligence, which is available only on the iPhone 15 Pro series and, presumably, all new iPhones going forward. That alone is a reason a lot of people are going to want to upgrade, even if they would typically have waited longer. And that might be the biggest reason this move is a stroke of genius.
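To illustrate the text-versus-image distinction above, here's a small sketch; the taco is just an example codepoint, and the comments reflect my reading of how Genmoji work, not Apple's documentation:

```python
# Standard emoji are plain Unicode text: a codepoint that any font can render.
taco = "\N{TACO}"                 # U+1F32E, added to Unicode in 2015
print(taco, hex(ord(taco)))       # -> 🌮 0x1f32e
print(len(taco.encode("utf-8")))  # -> 4 bytes; travels like any other text

# A Genmoji, by contrast, has no codepoint; it travels as a small image
# attachment, which is why non-Apple devices can't render it as text.
```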
EXPERT OPINION BY JASON ATEN, TECH COLUMNIST @JASONATEN

Wednesday, June 19, 2024

How Monogram Is Using AI to Help Empower Incarcerated Youth

There's a new tool in the fight for juvenile justice: an artificial intelligence-powered library. Attorneys, judges, and even people who are incarcerated can now access the latest neuroscience and social science research at their own reading comprehension level, with the help of design engineering agency Monogram (No. 589 on 2023's Inc. 5000 list).

The Alpharetta, Georgia-based startup teamed up with the Center for Law, Brain & Behavior (CLBB) NeuroLaw Library at Massachusetts General Hospital, a Harvard Medical School teaching hospital, to create an AI-powered digital library. The resource is intended to democratize information on adolescent brains and behavior, in hopes that it will be valuable for people caught up in, or working in, the juvenile justice system. "We're using AI to help Harvard get this information out to as many people as possible," Monogram co-founder and managing director DJ Patel says. "And if we can get even one person out of jail that doesn't need to be in jail, we've won."

Users who access the free online library will find articles from academic journals, amicus briefs, affidavits, and court cases, as well as educational videos and toolkits, all related to juvenile justice. The information is available to anyone, but it is specifically intended for people who are incarcerated, their legal representation, prosecutors, and other stakeholders in the juvenile justice system, including judges, probation officers, and policymakers.

Google's generative AI, Gemini 1.5 Pro, came into play by generating accessible translations of research and legal documents. Users who open a document will see a slider on one side of their screen. Moving the slider to the left adjusts the reading comprehension level in five intervals, with the rightmost option being the paper's original text and the leftmost a summary at roughly a 6th-grade comprehension level. Patel imagines a child in jail using the library. Although that person may not understand the original text of a document, Monogram's slider "can take that document and translate it into whatever you're capable of understanding," Patel says.
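As a rough sketch of how such grade-level translations might be produced with Gemini 1.5 Pro: the level labels, prompt wording, and safeguards below are my assumptions for illustration, not Monogram's actual pipeline (notably, as described below, CLBB had humans review every AI output):

```python
# Hypothetical sketch of generating reading-level variants with Gemini 1.5 Pro.
# Assumes: pip install google-generativeai, and a valid API key.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # assumption: key management is up to you
model = genai.GenerativeModel("gemini-1.5-pro")

# Four simplified levels; the fifth slider stop is the original text.
LEVELS = ["6th-grade", "9th-grade", "12th-grade", "undergraduate"]

def simplify(article_text: str, level: str) -> str:
    prompt = (
        f"Rewrite the following research text at a {level} reading level. "
        "Preserve every factual claim and citation. Keep a respectful, adult "
        "tone; do not address the reader as a child.\n\n" + article_text
    )
    return model.generate_content(prompt).text

# Each output would still need human review before publication.
```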
Monogram is a design engineering agency that partners with companies like Vercel, Algolia, Stripe, and Contentful to design cutting-edge websites and branding for clients like GitHub, BigCommerce, Google's Angular, grocery store chain Hy-Vee, and others. Monogram came to work with Stephanie Tabashneck, the founding director of the CLBB NeuroLaw Library, through a referral from a friend of Tabashneck's who had worked with Monogram; the company was then selected as a vendor in response to a request for proposal.

The prompt seemed like a simple one: build a digital library to make the neuroscience research at CLBB widely available. The intention was to "level the playing field, especially for children and late adolescents who were facing serious charges," says Tabashneck, who is also a forensic psychologist and attorney. The idea originally came from CLBB co-founder Dr. Judith Edersheim, who asked Tabashneck to direct the library about a year and a half ago. "At that time we had no idea what it meant to create a digital library, so a lot of this was building the plane while it was flying with half the parts missing or falling off," Tabashneck says.

Tabashneck says the primary audience for the NeuroLaw Library's research is people working in the legal system or people who are incarcerated, and that it is "not uncommon for defendants to be educating their attorneys," as people can become "remarkably educated" about the legal system during a long incarceration. The resource is meant to help people understand their legal options, as well as to make sense of their own brain functioning, particularly if they committed a horrific crime when they were younger. Tabashneck also emphasized that while people may enter the system as juveniles, they may become eligible for parole only as adults, so a wide range of age groups would likely use the resource.

Of course, a document being available doesn't necessarily mean it's accessible. Much of the research that could fundamentally change someone's sentencing--and life--can be found in dense academic texts, kept effectively out of reach by the time, education level, or technology required to use it. Tabashneck says the team always wanted to include educational training and supplemental information to make the material more accessible, but it was Monogram that dreamed up the idea of using AI. "It actually was an 'aha moment' that happened during the creative discovery process," says TJ Kohli, Monogram co-founder and creative director. "Libraries don't provide context. They provide the content, the data that you're looking for. In this case, we were able to build a really good library and provide context in addition to that with AI."

Tabashneck says AI was a "real game changer" for accessibility, given that there are roughly 500 articles in the database. "If you're including each iteration of reading level, that's 2,500 different writing products," she says.

That doesn't mean people weren't involved. Generative AI has a reputation for hallucinations, or generating false or misleading information. To solve that problem, the CLBB team tasked research assistants with reading every AI-mediated translation of every research paper to ensure there were no inconsistencies. Tabashneck adds that it wasn't as simple as prompting the AI to generate grade-level versions of the text, because "the translation will make it sound like you're talking to a kid, and say things like, 'Isn't it cool that your brain does this?'"--which she says may come across as "patronizing." Kohli adds that the labeling on the slider and its various reading levels was left intentionally abstract so as not to "alienate" readers. Beyond that, the team also used AI to generate a short summary of each text that gives readers an immediate sense of what they are reading.

Compared with other work from Monogram, CLBB's website is pared back and utilitarian--or, as Kohli says, "fast, efficient, intuitive"--and designed to make every click count. The website, for example, starts loading a page the instant a user hovers over a link. "If you're in a correctional facility, and you have 10 minutes to be on a computer, lag time matters," Tabashneck says. "These are things that we did for accessibility, but contributed to a stronger website overall."

The NeuroLaw Library is a cutting-edge resource addressing the ongoing problem of youth incarceration in the U.S., which leads the industrialized world in incarceration of children, according to Human Rights Watch. Estimates of the number of people under 18 who are incarcerated vary.
A 2021 report from the Sentencing Project tallies roughly 25,000 minors in youth facilities, plus an additional 2,300 in adult facilities, whereas the American Civil Liberties Union estimates that some 60,000 youth are incarcerated "on any given day" in the United States. And the "system is very Black and brown," according to Marsha Levick, chief legal officer and co-founder of the Juvenile Law Center: Black minors are about 4.7 times more likely to be detained than white ones, according to the Sentencing Project.

The Juvenile Law Center has a "working relationship" with CLBB, Levick says, relying on its research to inform cases, and the NeuroLaw Library contains a number of amicus briefs from the Juvenile Law Center and other organizations. Although Levick says the number of youth who are incarcerated is "still too high," given the harms of juvenile incarceration, it represents a substantial decline over the past 15 years. Research on social and behavioral science, psychology, and neuroscience has informed a series of landmark court cases that have, among other actions, eliminated the death penalty for adolescent crimes and found mandatory life-without-parole sentences unconstitutional.

Even if teenagers tend to be impulsive and vulnerable to peer pressure, "this idea that once a bad kid, always a bad kid--or always a bad adult--is not supported by neuroscience," Tabashneck says. What research does suggest is that certain structural and functional changes in the brain occur during adolescence, according to a paper published in the Journal of the American Judges Association in 2014 and found in the CLBB NeuroLaw Library. These changes contribute to increased reward-seeking and risk-taking behavior, and to a lack of impulse control when emotions run high. The paper cites studies suggesting there is no fixed age or timeline at which an adolescent brain matures into an adult brain. "Neurodevelopment and neuroscience really helps to explain the unexplainable," says Tabashneck. "This information can help shed light on why someone who at 16 did something really terrible may not still be a risk at age 50."

The website launched on June 10, so it's too soon for any potential success stories to emerge from the slow-moving justice system. But CLBB demonstrated the resource to organizations including the Juvenile Law Center, the National Center for Juvenile and Family Court Judges, and the Sentencing Project, and received "some great feedback," Tabashneck says. Those organizations, along with the ACLU and the Innocence Project, are helping to spread the word about the library.

Already, the library is seeing major traction. The Maine Department of Corrections says it is currently exploring "widespread implementation" of CLBB's NeuroLaw Library in both juvenile and adult justice facilities across the state. Tabashneck says that could mean making the content accessible to corrections staff, featuring the resource on the department's website, and working to potentially make it available on tablets in the juvenile facilities. Implementation could take place as soon as this summer or fall. Levick, who has worked in the juvenile justice system for decades, says one measure of success for a tool like CLBB's would be the extent to which it can "shrink the footprint of the juvenile justice system."
"It's critically important to us that however we respond to criminal conduct by children, that we treat kids as kids," Levick says, "and that we really focus on responses that are designed to rehabilitate them, to restore their interactions, to think about restorative justice between young people and victims and survivors."

Monday, June 17, 2024

Inside a 'Weird Economy,' a Rare IPO, and the State of AI

From a business news standpoint, the first few months of 2024 had it all: the rare IPO of a social media company, a very strange economic situation facing founders, and enough developments in artificial intelligence to train a new LLM. Inc.'s editors have been chewing over all of it. In this roundtable episode of From the Ground Up, we hear from Inc. reporter Ben Sherry about the state of AI use in the American workforce, the latest in the AI safety debate out of Silicon Valley, and what's going on within OpenAI. New Inc. editor-in-chief Mike Hofman discusses the unusual state of the American economy, and how entrepreneurs are feeling amid wildly mixed signals from the Fed, consumers, and what seems like a cooling labor market. "We have a real mismatch, I think, in the labor market," Hofman says, "and entrepreneurs are trying to figure out how to both benefit from that by creating opportunities for their companies to supply larger companies, but also how to compete for talent with other folks." We also examine what's happened at Reddit since its March IPO--and how the massive community-based social network finally, after years of false starts, made its unusual public debut. For the full episode, click on the player above, or find From the Ground Up on Apple Podcasts, Spotify, or anywhere you listen to audio. What follows is the raw transcript of this episode of From the Ground Up.

Diana Ransom, co-host: I am Inc. executive editor Diana Ransom. Christine Lagorio-Chafkin, co-host: And I'm editor-at-large Christine Lagorio-Chafkin. Ransom: And you're listening to From The Ground Up. Today's episode: Quarterly Review. We asked a few key people here at Inc. about some of the most important topics in business for the past quarter. Lagorio-Chafkin: Right. We talked about some little-known factors in the Reddit IPO and the company's relationship with AI. We also spoke to staff reporter Ben Sherry about this quarter's AI headlines. But first, there have been some developments inside Inc. lately, and we have an important introduction to make. Ransom: Joining us is Inc. editor-in-chief Mike Hofman, and we have reporter Ben Sherry. Thanks for being here. Mike Hofman: I'm excited. This is fun. Ben Sherry: Thanks so much for having us. Ransom: Welcome all. So, Christine, let's kick off a little bit by talking about some things that are new at Inc. We just, in April, launched our Female Founders package. Tell us a little bit about one of the cover stars that you covered. Lagorio-Chafkin: Yeah, absolutely. I love that we're starting with all about us instead of all about the business news. But yes, our Female Founders package came out in April, and I had the privilege of writing about Cloudflare and its co-founder Michelle Zatlyn, whom we put at the top of the list for a couple of reasons, actually. Firstly, she's at the helm of a company that is one of the backbones of the internet, of the global internet. It protects about a third, almost a third, of the global internet. And that's really important ... Ransom: That's shocking. Lagorio-Chafkin: It's a shocking number. I think it's between a quarter and a third right now, but it's really important this year in particular because in 2024, it's not just the U.S. that is holding national elections. It is half of the world. Ransom: It's like 64 countries. Lagorio-Chafkin: Half of the global population will be voting in a national election this year.
And Cloudflare goes a long way toward protecting even municipal sites online, keeping them up, keeping information up for voters in the face of election fraud and disinformation and attempted electoral interference. So it's a very important company, but I also thought they were very impressive because they do a lot of giving back. A lot of that work is pro bono. They protect the work of journalists, the work of organizations that keep factual information online around the globe, and schools too, municipalities as well. And they don't siphon off their nonprofit work, really. I mean, they don't call it a nonprofit internally. They don't have just one employee who is dedicated to doing that work, as so many companies sort of do; they have it worked into their regular workflow. So as soon as a small government, school, or nonprofit is onboarded into their process, regular engineers, regular folks in the company take on the work just as if it was a paid customer like Google. Ransom: Yeah. I also love that she's a female founder who runs a publicly traded company, and there are precious few female founders who do that. Lagorio-Chafkin: Yeah, absolutely. Ransom: And then the origin story with the lava lamp wall--got to love the lava lamp wall. Lagorio-Chafkin: Well, that's just a fun detail. In their headquarters in San Francisco, they have this sort of remnant of their early days, which is a bunch of lava lamps. There are about 100, and they're not just cute dorm room-esque decor. They provide part of the secure encryption keys that make Cloudflare run. Ransom: They actually do? Lagorio-Chafkin: Yes, they work. Okay. So there are cameras facing them in the lobby area. And so ... Ransom: I thought it was metaphorical. Lagorio-Chafkin: No. Ransom: Wow. Lagorio-Chafkin: So these cameras are taking images every few seconds of the lamps, and therefore the movements, which are inherently random--but it's not just the lamps, it's people walking in front of the lamps, too--that are forming the image which forms part of a secure encryption key. Hofman: It's like kitsch with a purpose! Lagorio-Chafkin: Totally. Hofman: Strategic kitsch. Ransom: I love it. That's a great detail. So another thing that's new at Inc. is Mike. Mike Hofman. Welcome aboard. Hofman: Hello. Hello. Ransom: So, two months now on the job? Hofman: Yeah, yeah. I joined Inc. at the end of March, and my first day was actually going to South by Southwest to host the Inc. Founders House there. And I also just recently hosted the Inc. Founders House in Philadelphia, which was great to get out and meet the founders and entrepreneurs who love Inc. and who we love. It's been really fun so far. Ransom: Yeah, absolutely. We've been keeping you busy. This is impressive. Hofman: Yeah, yeah. And this is also a return for me. I spent the first 15 years of my career at Inc. as a young reporter. Inc. was at the time headquartered in Boston and still owned by our founder, Bernard Goldhirsh, and then moved to New York, and I moved with it. And so I'm really delighted to be back. This is such an amazing brand and such an amazing audience to write about and think about and to interact with. Ransom: So speaking of our amazing audience, when you were in Philly, what kind of conversations were you having? What were some themes that jumped out at you? Hofman: Well, it's really interesting. It was a great audience and people were really upbeat and excited and enthusiastic about their companies.
And at the same time, they were talking about how running their companies was difficult. This is a weird economy right now, in a difficult time. And it's sort of a tale of two economies, where you can have Nvidia become the second-most-valuable company on the public markets right now, which is amazing--and obviously the AI boom, which Ben's going to talk about, and the chip boom--and then at the same time GDP is slowing down and was just revised downward for the first quarter. And the jobs reports--we have another one coming up between the time we tape this and when this is published--but there's obviously a sort of tightening or cooling labor market. And it'll be interesting to see as we go into the summer: is the economy still growing? The folks in Philadelphia were really upbeat and excited and ... Ransom: As entrepreneurs are. Hofman: Yeah. Doing things with TikTok. That was a big theme, but it was really interesting to talk about. Ransom: They were filming TikToks while you were there? Hofman: They were making TikToks, but also everybody had a TikTok marketing strategy. It's definitely something that this generation of companies is really focused on. So it was great to be there. Ransom: Wow. And I also love how the Fed is really looking at small businesses in particular to see what's happening in the labor market and whether they're going to move rates. So we're sort of at the leading edge of what's actually going to move the needle for interest rates. Hofman: It's also such a funny job market, because for engineers and developers and nurses, and lots of other categories, people are really desperate to hire. And then at the same time, with mass layoffs in tech and other large companies, you're seeing people have a hard time finding a job that fits their skill sets. And so we have a real mismatch, I think, in the labor market, and entrepreneurs are trying to figure out how to both benefit from that by creating opportunities for their companies to supply larger companies, but also how to compete for talent with other folks. Ransom: Did you get any sense of the strategy for dealing with the uncertainty? Hofman: Well, entrepreneurs are exuberant and optimistic by nature and have this sort of can-do, I-can-fix-it attitude. So I was talking to one group and they were really funny, I thought, because I said, "Well, what's the hardest thing about sales right now for an entrepreneurial company?" And they were mostly in software and they said, "Well, it's really hard because nobody has any budget." And I was like, "Okay, well that seems like a big problem if you're selling into companies and they don't have any money to pay you with." And they were like, "Right, that's totally a big problem. But what you have to do is ..." And then they laid out the pipeline that they have for nurturing possible accounts for the future. So they have this spirited can-do attitude, even in the face of a lot of noes; obviously entrepreneurs are very resilient. And I think that they just sort of feel like the uncertainty right now is perplexing, but let's just push through it. And, you know, some group of them will, and others I think will have a hard time. Ransom: Yeah. I love the can-do attitude. It's so reassuring for all of us regular people. So speaking of the totally opposite of regular people--irregular people--let's talk about AI for a second. You, Ben, not you.
Obviously the topic of conversation recently has been Google's changes to AI Overviews, or even just the adoption of Overviews. But I'd love to have you give us the lay of the land in terms of how small businesses are viewing what's happening right now. How are they using it? Sherry: Yeah, so I've been pretty heavily reporting on AI for the last roughly two years, pretty much since right after I got to Inc. And I've basically seen the transition happen: a few months after I got here, ChatGPT launched, and kind of everything changed after that. And so I think that what I've heard from entrepreneurs is that 2024 is the year of actually taking all of the experimentation that they had been doing in 2023 and actually codifying those policies and putting together plans. And that's a really big deal, because there are a lot of studies out--there's this Microsoft study that basically says that 78 percent of the workforce is using AI at work, even if their workplace doesn't tell them to or have any policies in place around it. So people are going ahead and using the tech because they see opportunities for it to be useful at work. Ransom: But they're also being secretive about it? Sherry: Yes. And I think that a big part of that is because people feel that it's a cheat or a hack or a shortcut, and that if they were found out or outed for using generative AI, they would be looked at as lazy or not really doing the job that they were hired for. And I think that most entrepreneurs and business leaders would say, "If the work is getting done and it's getting done faster and cheaper, then I'm all for it." But I think it's really important for companies to be putting together policies and guidelines so that there's no confusion about whether or not AI is allowed, or where AI is allowed. Ransom: Or you're going to get in trouble if you do adopt it. Sherry: Yeah, exactly. I think people are worried. Yeah, they're worried about getting caught and getting in trouble, and it definitely is a problem that needs to be addressed. I say that to a lot of entrepreneurs: "Your employees are using AI. Even if you don't know about it, they are using it. So it's better to understand how they're using it and how they're using it to do their jobs better, and then make a policy from there." Ransom: But aren't we all really using AI all the time? Even if you do a Google search, based on this Overviews AI thing, aren't we? Lagorio-Chafkin: Yeah, it just comes up. It's in your search. Absolutely. Ransom: Yeah. So I feel like whenever I see those stats of "75 percent of your employees use AI," it's like, yeah. Lagorio-Chafkin: Of course. Ransom: Why isn't that more? Sherry: At the end of the day, obviously there's a lot of marketing speak. This is a product that they are aggressively trying to sell to enterprises, to individuals. So I think that, yes, it probably isn't as prevalent as some of these studies would lead us to believe, because of some of the vagaries in the ways that things are stated. Companies have been using AI and machine learning for decades. We ran a story in a recent issue of Inc. about ways that companies are using AI, and we didn't just look at generative AI. We looked at machine learning, and one of those companies was Neptune Flood Insurance, a company that developed a quote generator to basically give people quotes for flood insurance.
Lagorio-Chafkin: This is an unlikely company, because Florida, where it's based, is a very difficult market in which to do flood insurance--but yes, absolutely. Ben, I was just talking to this serial entrepreneur who said that immediately at the launch of ChatGPT, he talked to his employees. He said, "Listen, you guys are all allowed to use this. Use it for anything. Try it out, but know that you are responsible for your own work and the quality of your work." Hofman: I think that's interesting. I talked to somebody at one point who said that generative AI makes being mediocre very easy, but if you don't then take your mediocre work to the next level and turn it into something special, that's where you're falling short. And maybe that's where some of the disconnect comes from--between CEOs and the people who work for them, or managers and their staff, or clients who are getting decks from you. If the work is not great, it doesn't matter if you spent all day working on it or if you cooked it up on ChatGPT in just a couple of minutes. Sherry: Yeah. If I'm putting together a website and I want to outfit it with photography of a product or of people--maybe I'm putting together a concert company or something--and I say, "Generate some images of people having fun at a concert": I think it's very obvious when an image is AI-generated at this point. It's been enough time that we're all pretty good at figuring that out. Unless it's been very heavily prompted and engineered, in which case it might look a little bit better. But if you're just going to go very rudimentary and say, "One-sentence prompt, it has what I want, I'm going to put it up," that might be seen as being cheap or lazy. I think it's definitely important to think about how your AI use is being perceived by your customers, and also your investors, and even just your employees. How are they feeling about how much effort and work is being put into the presentation of the company? Lagorio-Chafkin: But we're not in full AI backlash yet, right? Hofman: I was going to ask: where do you think the AI safety debate is right now? Are people still really worried about it, or do you think that has cooled a little bit, sort of post-OpenAI? Ransom: Wasn't there just another open letter calling for the end of humanity--or saying that AI was going to--not "calling for." Sherry: Bring it on, honestly. Hofman: A bot wrote that. Sherry: Yeah. Did an AI write this? Ransom: No, it was DeepMind. Sherry: Yes, some former DeepMind and OpenAI employees. Ransom: Yeah, and even some anonymous OpenAI employees. Sherry: Yes. And a few Anthropic employees, I believe. Ransom: Right, right. Sherry: Yeah. I think that ... Ransom: And to be clear, they were not calling for the end of humanity. They were just saying that AI is going to bring about the end of humanity. Just that. Sherry: They were basically calling for their NDAs to be lifted so that they can talk about the concerns they have about the ways that some of these bigger companies, like OpenAI, are potentially chasing profits and being first to market over very rigorous safety standards. Sam Altman had made a very big deal about dedicating 20 percent of OpenAI's compute to the Superalignment team, which was supposed to be fully dedicated to exploring all of the potential issues and societal problems that generative AI could come up with. And it came out pretty quickly that they were getting a fraction of that compute, and that team pretty much entirely left.
And now I think there's a lot of worry about these companies. OpenAI was founded as a nonprofit, then became a for-profit company because it wanted to raise enough money to develop the technology to create artificial general intelligence--basically an AI that's as capable of completing tasks as a human. They needed capitalism, basically. They needed the public's money and consumers and enterprises. And so at what point does chasing dollars sort of supersede ... Ransom: Yeah, the profit motive? Sherry: At what point does that supersede the very high-minded safety standards? And also, how much do businesses care? If you look at Yann LeCun, Meta's chief AI scientist: he has basically come out and said that a lot of these people who are forecasting doom and saying maybe the world is going to end in a few years--he basically calls them doomers, and that's a word that I've seen a lot online. Basically, there are people who have very strong feelings about the potential for harm of generative AI. And then there are people who say, "Imagine if the Wright brothers were so concerned about planes carrying bombs back in 1903, or whenever they were creating their first airplane ..." Ransom: That's so dismissive. Sherry: Yeah. Ransom: Isn't it? Sherry: Yeah, right. It is dismissing it. It's saying, "Oh, well, if we were going to be worried about future technology, why would we ever innovate?" And to a certain extent, I get that; you can't be so worried about the future that you do nothing. But you also need to be cognizant of dangerous technology and how to use it responsibly. Ransom: Yeah, I agree that we're sort of in this netherworld of whether ... is it a good thing? Is it not a good thing? We don't know at this point. I think the concern, Mike, to your point, I don't think that has been resolved yet. Hofman: Yeah. I would say, too, whether it's a good thing or a bad thing, it will be misused. And so then what guardrails, if any, exist to prevent it from being misused? And that to me is an interesting, important question that is really hard to answer, because the equities on both sides are pretty strong. Ransom: And we also kind of got into this in a past episode we did with Shira Lazar. She's one of these OG influencers, and she was talking about how, as AI becomes more and more prevalent, people are going to be more and more interested in the people behind the companies, behind the machines--even more so than they are now. So people in our role are actually telling these stories; people really want to know about Sam Altman because he's got his finger on the button, and if what he does and what his company does is really that consequential, we really want to know he's the right person for the job. Sherry: And when you think about the Scarlett Johansson debacle--part of why he was originally fired by the board was that they said he was not being consistently candid with them, and it came out that the board did not know he had asked Scarlett Johansson to be the voice of GPT-4o. There's definitely some worry: if you're going to have somebody in charge of this world-changing technology, and you yourself are saying, "I'm worried that this technology will end humanity," and then you're continuing to not be honest with your board and to be sneaky ...
I think I saw some people recently saying that at Y Combinator, which Sam Altman ran at the age of 29--I'm 29, that's terrifying. Ransom: You're highly competent. Sherry: I don't have that level of scary ambition. I think I have a normal level of ambition. But they talk about being sneaky at Y Combinator--or not sneaky, but sort of crafty, where you're willing to bend the rules just a little bit to get what you want. And I think that if you think about the way GPT-3 and GPT-4 have been trained, it's all about asking for forgiveness rather than permission. They've trained on everything, and then they went and made deals. Lagorio-Chafkin: Right. Reddit had to cut them off before; now they have a big deal with Reddit. Right? Sherry: Yeah. Lagorio-Chafkin: It's interesting. And Sam has been a long-time friend of Reddit ... Sherry: Oh yeah, and owner. Lagorio-Chafkin: ... and Steve Huffman. Yeah. I mean, he's a very big stakeholder. Ransom: So speaking of Reddit, Christine, you've been busy. If you don't know, dear listeners, our co-host here, Christine Lagorio-Chafkin, is also the writer of a book called We Are the Nerds, which is the definitive book about Reddit. So you're the perfect person to talk about their IPO. Let's do it. Lagorio-Chafkin: Yeah, absolutely. I actually went to the New York Stock Exchange on March 21st in the morning for the bell ringing, and it was really fun to see the guys who I'd been reporting on for years ring the bell--although they did not ring the bell themselves. Steve Huffman, the co-founder, did not get up there. Ransom: Did he send a Snoo up? Lagorio-Chafkin: He sent a guy in a giant stuffed Snoo suit up to ring the bell, which was a very Reddity thing to do--have the everyman figure go up and ring the bell. Sherry: Are they really into Snoo inside of Reddit? Lagorio-Chafkin: Yes, they are. Yes, they are. They call employees Snoos. Sherry: The Gritty of business. Lagorio-Chafkin: Yeah, right, right. Sherry: We need a Gritty. Lagorio-Chafkin: So anyway, it was an exciting day to see that Reddit actually could finally pull off this IPO, because ... Ransom: It's been like 15, 17 years in the making. Lagorio-Chafkin: It's 18 years old, and they filed to IPO in 2021. Then the market started to look really shaky, and then there was a real slowdown in IPOs. Last year, something like a quarter as many companies IPO'd as usually do. Ransom: Right, because there was this SPAC bubble and then nothing. Lagorio-Chafkin: Everything slowed. Right. And tech companies especially were not going public. So I talked to Steve after the bell ringing--before the company sort of celebrated, but after the bell ringing--and I asked him about that time period, and he said, this waiting game actually made us a lot stronger because we had all these false starts. We kept saying, we're going public, and we kept doing a thing called testing the waters, which is sort of a pre-IPO road show. You go and talk to groups of investors, gauge their interest, answer all the hard questions about your company--and if you're Reddit, there are a lot of hard questions that investors need to wrap their heads around ... Ransom: Like, how do you make money? Lagorio-Chafkin: Exactly. How do you make money? What's with all the porn? There are a lot of dark corners of Reddit. But they had actually done this process and had these false starts five times in those years, and they were also ...
Ransom: Was Steve there for all of them? Lagorio-Chafkin: Yeah. He was there. He was leading it. He had brought on a new finance team, and they had started to act like a public company, too. They were doing their quarterly earnings calls and trying to get in shape. After two years of that, the investors knew who they were, had a solid placement, knew how they were going to act on opening day--and so did Reddit. So I think that really helped them secure that opening-day market. Ransom: They had a pretty good pop, right? Lagorio-Chafkin: It was a good pop, and then it declined a bit, but it's been remarkably steady for this place that birthed so much meme-stock mania and is inherently sort of unstable. It's been pretty stable, all things considered. Ransom: Yeah, and so what led to the pop? I mean, was this the AI deal that they worked out with OpenAI? Lagorio-Chafkin: What was it--within the week, they had just worked out the deal with Google AI. Ransom: It was Google, yeah. Lagorio-Chafkin: Yes, absolutely. It was a $60 million deal. And they again had a little bump recently when they made the deal with OpenAI. Ransom: I mean, how do you do something like that and try to square things away with your customers? Lagorio-Chafkin: Yeah, it's interesting, because the Redditors would not have endorsed those deals. They don't want their information being crawled. Sherry: Yeah. And they're not even getting a cut. Lagorio-Chafkin: Exactly. Ransom: But it was being crawled anyway, right? I mean ... Lagorio-Chafkin: It had been, previously. Then Reddit cut off API access to third parties that weren't paying. I would assume that angered OpenAI, but it didn't--or it didn't significantly, because now Reddit has another deal with them. It did anger a lot of users who prefer third-party apps to the ones that Reddit itself has built. It's been a huge Reddit user drama over the years, and certainly it's still causing a lot of grief on Reddit itself. Ransom: Yeah, I'm impressed that they hosted an AMA during the quiet period--an "ask me almost anything." Lagorio-Chafkin: Yeah, they took the questions, had them vetted by lawyers, and then made a video. So it was sort of an AMA, but a lawyered AMA. Ransom: But some of them were pretty tough questions. I was impressed. So Steve's pay package came up. Lagorio-Chafkin: Oh, sure. Yeah, absolutely. Ransom: $193 million in pay. Lagorio-Chafkin: Mostly in stock. I think his actual pay is really reasonable for a CEO. But he's making a ton of money off of the stock now, and they've all sold bits of stock. Some executives have sold more than bits. Jen Wong's pay package--I was like, wow. That was amazing. Ransom: Oh, really? Lagorio-Chafkin: Yeah. Ransom: She's the COO? Lagorio-Chafkin: Yes. Ransom: Okay. Lagorio-Chafkin: Yeah. But yeah, Sam Altman, also--his family fund and several of his funds had a lot of money in Reddit, and a lot of that stemmed from a deal back when he had just taken over Y Combinator. Reddit was in a really insecure time in 2014. It had a lot of infighting among Redditors and moderators, it had a new CEO at the time, and its future did not look bright. And the CEO at the time, Yishan Wong, went to meet with Sam Altman and said, we're in trouble, we need some help. And Sam put together a $50 million round for him--it sort of seemed like out of the goodness of his heart. Of course, he may have seen the future. There was no OpenAI at this point, there was no ...
Ransom: He's like, if I do this favor, I'm going to call on you one day. Lagorio-Chafkin: Right, right, right. But I mean, this was more money than Reddit maybe even needed at the time. And he included a couple of folks that Yishan wanted in the package. He got Peter Thiel involved. He got Snoop Dogg involved. So he sort of really held Reddit up for a few years, joined the board, and ... Sherry: He was briefly the CEO, right? Lagorio-Chafkin: He was. Fun fact. For what, eight days or something ... Hofman: Something like that. What strikes me as interesting is that you talk about how so far the stock has been very stable, and as impressive as that is, I think it's really impressive that the platform has been stable, given that there's a lot going on there--when you compare it with some of the other big tech platforms that had all kinds of dynamic things happening with them, I'm thinking of old-school ones like Digg that have sort of paled in comparison. At one time, Digg and Reddit were neck-and-neck competitors. Lagorio-Chafkin: Oh my gosh, yes. Quora. Remember that? Hofman: Yeah. Totally. Lagorio-Chafkin: When it was a strong place. Reddit is the last great remaining text-based place online, and I do think Google's new AI search is only going to be good for Reddit, because if people have a real human question or a real logistical question, they will want to hear from other people rather than from AI. So they're like, what's this bolt on my dishwasher doing? Something's broken in my house--what is it? They search that on Reddit, and you get a real person who's already gone through that exact same situation. Sherry: Well, I will say that one of the big problems with the AI Overviews right now is that they are pulling answers from Reddit that are not accurate, or that are sarcastic, or jokes. So somebody will say, how do I make a pizza? And somebody will say ... Ransom: Glue. Lagorio-Chafkin: Put glue on it. Sherry: Put glue on it. And the AI will just kind of pull that and say, oh, we think this is a legitimate answer. And it's because it kind of has trouble understanding humor. So it is interesting that Reddit is a very rich source of data for these language models to learn from. I mean, it's an incredible source of data. Lagorio-Chafkin: But perhaps more useful if you're a human. Sherry: Yeah. It's not always correct. Not all that data is of the same quality. Ransom: Is artificial general intelligence expected to be funny, or to understand humor? Sherry: I think it would have to be. Because if you're thinking of an AI that is capable of doing anything that a human can do--stand-up comedy is something that a human does. And being able to find ... I mean, then you're getting into a kind of philosophical question about what makes something funny. Lagorio-Chafkin: Yeah, we're getting deep. Hofman: It's sort of interesting, right? On Reddit, sarcasm is a feature, not a bug. And in AI, sarcasm is a bug, not a feature. Sherry: Yeah. And so if you're training something, if you're getting a huge corpus and you're not going through and getting rid of all that bug data, it's difficult. I don't know if there's really a way of going through and filtering or cleaning that data. Data cleaning is obviously a huge part of training large language models. But I think that's probably something that Google and OpenAI are dealing with right now as they go through all that Reddit data.
Hofman: So if we worry about whether or not AI is going to destroy the world, which I think we were talking about earlier, maybe it's like the snarky Redditors are the people who might just save us. Is that the idea? Lagorio-Chafkin: I love that. I love that. Sherry: I don't know. Hofman: They would like that too. Lagorio-Chafkin: My feeling about AI in general and the state of it right now is that it is not as useful as I would love to see a technology actually be. I mean, when we compare it to the airplane, oh my god, it's like a hunk of metal flying through the sky, that's amazing and we can go to different places and the use ... Sherry: It changes everything. Lagorio-Chafkin: It changes everything. And I think that we've seen the uses of AI be okay so far, right, in terms of the real world. And maybe they've caused harm to our gathering of information, to universities, to various things. And I think that you usually see a technology launch and be a little more impressive. Ransom: I think it's picking its battles, in terms of AI being disruptive. It's certainly disruptive for the publishing industry. It's disruptive for universities. I mean, we had a Q&A with Reid Hoffman, who's involved in AI obviously, and he was talking about how--I guess he and Bill Gates have also talked; Bill Gates talked about this too--but basically, AI presents this idea or opportunity of having a tutor in your pocket, because all smartphones can be AI compatible, and who needs professors anymore when your phone can tell you exactly how to code something, or the philosophy or philosopher behind such and such. Lagorio-Chafkin: I have typed into ChatGPT some prompts asking for some jokes about OpenAI. "Why did the AI cross the road? To optimize the chicken's neural network." Sherry: Ha ha. Lagorio-Chafkin: "Why did the AI go to art school? Because it wanted to learn how to draw better conclusions." Ransom: Ooh. That one's ... Sherry: Okay. That's okay. Lagorio-Chafkin: A little self-deprecating. Sherry: Next level. Lagorio-Chafkin: "Why did Sam Altman start carrying a map? To navigate the constantly evolving landscape of AI." Sherry: That's pretty funny. Ransom: These are painful. Lagorio-Chafkin: Oh my god. I can keep going, but they're really not worth it. Sherry: I personally have used ChatGPT mostly just to generate Frasier fan fiction. Lagorio-Chafkin: Frasier, the show? Ransom: Yes. Sherry: Yeah, like Frasier Crane. Hofman: That's niche. That's kind of a niche pursuit. Sherry: Yeah. Don't tell anyone that, but... Ransom: I'm happy to hear it. Sherry: But yeah, I've said it for a long time. I think that the truest use of language models is memes. It's creating memes. It is when you don't really have high stakes on something and you're just looking to create a funny little thing, a little song, a little image, that is where AI shines. Ransom: Well, thank you so much. This has been enlightening. I've learned a lot about you, Ben, and I really appreciate your time, Mike and Christine. Lagorio-Chafkin: Mike Hofman. Welcome once again. Hofman: Yeah, it's been fun to be here, yeah. Sherry: Glad to be here. Thanks for having us. Lagorio-Chafkin: That's all for today's episode of From the Ground Up. Ransom: Be sure to subscribe on Apple Podcasts, Spotify, or your podcast platform of choice. Also, if you liked this episode or have suggestions for topics you'd like to hear about, leave us a review on Apple Podcasts or reach out to us on Inc.'s social channels on LinkedIn, X, or Instagram. 
Lagorio-Chafkin: From the Ground Up is produced by Julia Shu, Blake Odom, and Avery Miles. Mix and sound design by Nicholas Torres. Our executive producer is Josh Christensen. Thanks for listening, and we will see you next week. Sherry: Happy to talk about Frasier Crane whenever.

Friday, June 14, 2024

3 Lessons for Navigating the AI Transformation

Generative artificial intelligence has advanced at an astonishing rate. Since ChatGPT launched in late 2022, new iterations of generative AI have been released several times a month. The pace of technological change has major impacts on organizations and the workforce. According to consultancy PricewaterhouseCoopers' CEO survey, 45 percent of CEOs say their companies will not be viable within the next decade if they continue on the same path. Automation is set to change a third of global jobs over the next 15 to 20 years. AI and the pace of change have forced leaders in every industry to answer the following question: How can you help your employees benefit from new tech, like GenAI, and prepare your workforce to adapt as quickly as technology evolves? Luckily, there are lessons to be drawn from the disruption and transformation cloud computing created in the early 2000s. 1. Don't worry about the initial hesitation. Cloud computing enabled faster time to market, scalability, collaboration, data loss protection, and more. Yet even with these potential benefits, many organizations were hesitant to move data and operations to the cloud due to security concerns, legacy investments in on-premise infrastructure, and uncertainty around reliability and uptime. Eventually, competitive pressures and the tangible value shown by those who made the jump to the cloud overcame that skepticism. A mindset shift had to happen before people could move from on-premise to cloud technology. What can be learned from this when considering AI? The same initial hesitation is present. Leaders are concerned about ethical issues, accessibility to data, lost jobs, the possibility of human obsolescence, and more. With AI, the big disruption is that improvement as a linear process is coming to an end. The technology allows for sustainable transformation where complexity is simplified. Every process that emerges can be captured and examined for future improvement. Organizational leaders can reevaluate regularly, in real time, so continuous improvement becomes organic. Yet another widespread mindset shift has to take place before organizations can realize the exponential way AI creates opportunities to improve and become more effective. 2. Reskilling and upskilling are required to take advantage of new technology capabilities. Cloud computing introduced new disciplines like cloud architecture, DevOps, and site-reliability engineering that required extensive retraining. IT staff were upskilled through certifications, and cloud specialists were engaged to manage migrations. With AI, mountains of data can be sifted through more efficiently. Reskilling programs in areas like AI prompt engineering and model training and deployment are becoming essential. This shift also means that companies must understand what skills their workforces have and what skills will be needed for the future. This will require a shift in focus from jobs to skills. What differs from the past is that AI isn't going to replace the work that people do--it's going to augment it. People and AI will need to work together to learn from and improve each other. This new way of working essentially treats AI more like a coworker than a static technology. 3. Some legacy roles will become obsolete, but new jobs will emerge. New technology can sometimes mean that the workforce is treated like a bottom-line cost that machines can reduce. While it's true that some jobs will become obsolete, emerging tech also creates jobs that didn't exist before. 
With cloud computing, the need for maintenance, upgrades, and procuring hardware was drastically reduced. Roles like data center operator and system administrator for on-premise infrastructure diminished. However, new roles like cloud operations, cloud security, cloud developers/engineers, solutions architects, and managed services providers have emerged. Roles with repetitive tasks are declining due to AI, but new jobs like machine-learning engineer, data scientist, and MLOps engineer are emerging. Other roles will likely be created, including AI product managers, AI ethics managers, and conversational AI developers. It's important to remember that organizational leaders have the responsibility of deploying AI to harness the skills and capabilities of their workforce. It will fall to them to look toward innovation and new ways of working to set their organizations apart from the competition. Their employees' creativity will be one of the most important assets in this new AI-powered age. Just as cloud computing instigated widespread change, AI has the potential to be the "printing press" moment of our times, reinventing how new skills, roles, processes, and business models emerge as it becomes embedded across industries. EXPERT OPINION BY SANIA KHAN, CHIEF ECONOMIST AND HEAD OF MARKET INSIGHTS, EIGHTFOLD AI @SANIAKHANLAIQUE

Wednesday, June 12, 2024

AI Tool Lets Small Businesses Make Apps, Just by Telling It What They Need

In its predictions for 2024 workplace trends, website building service Wix suggested that small business leaders should get serious about AI. Given the explosion in AI tech so far, we know that prediction really hit the target, so much so that the company may profit from its own advice. Wix plans this week to launch an AI tool that sounds genuinely useful for any company that needs to quickly build an app: the tool lets you accomplish this normally technical task just by telling it what you want. Those awkward meetings where a deeply non-technical executive waves their hands vaguely and demands an app that "just does this sort of thing" suddenly sound a lot less tricky. News site TechCrunch reports on the new tool, which automatically generates an app after a user describes their needs in straightforward language. Once the app is built, users can adjust its design with an editing function--inserting branding imagery, changing the layout, and adding extra features. The final app is said to be "fully native" to either iOS or Android platforms, which means it should run just like any other human-made app optimized for iPhones or Android devices. When everything is ready, you can submit the app to the official app stores, ready for people to actually download. The whole process sounds a lot like Wix's AI-powered website builder, which it revealed in July last year. Speaking to TechCrunch, Wix's co-founder and CEO Avishai Abrahami said that the idea is "to offload most, if not all, of the hard work from the user" when it comes to building apps. The simple chatbot-style interface, already familiar to many users of OpenAI's ChatGPT and Google's Gemini, means there's a direct line between what a user asks the AI tool to do and the final app. "The more detailed the answers to the prompts during setup, the more personalized and complete the AI-generated app will be," Abrahami said. One sticky point, which anyone who's familiar with current-generation AI will spot, is that when you put a query into an AI engine, you can't fully guarantee the "truthfulness" of what comes out, thanks to the way the underlying algorithms are designed. In chatbots, this can result in sometimes strange, unsettling, or just downright incorrect answers. This problem is also known to affect code that AIs spit out when asked to solve, for example, a thorny programming problem. An incorrect section of code built into an app produced by an AI tool like Wix's could do much more harm than merely telling someone incorrect information: it could make the app fail or possibly even compromise a smartphone's security. But Abrahami was bullish about this problem, noting that Wix's team will improve "the product all the time." Among the clutch of AI tools hitting the market lately, where the artificial intelligence is built into code repositories, Microsoft's business tools, and even Google's search engine, Wix's new tool looks immediately appealing to smaller business users, particularly companies that have minimal--or zero--coding expertise on their team. Apps are also a vital part of daily business for many enterprises, and the ability to very quickly pull together an app that serves as a storefront for your products could be very useful for business planning purposes. But would you want Wix to build the final version of a customer-facing app? And will tech like this put many coders with app-making expertise out of work? 
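To picture the workflow TechCrunch describes, here's what a describe-to-app flow could look like if it were exposed to developers. This is a speculative sketch only: Wix hasn't published an API for the tool, so the endpoint, field names, and response shape below are hypothetical stand-ins, not anything Wix actually offers.

```python
# Hypothetical sketch only: Wix has not published a public API for this
# tool, so the endpoint, fields, and response shape are invented here
# purely for illustration.
import requests

API_BASE = "https://api.example-appbuilder.dev/v1"  # hypothetical endpoint


def generate_app(description: str, platform: str = "ios") -> dict:
    """Send a plain-language brief; get back a generated app scaffold."""
    resp = requests.post(
        f"{API_BASE}/apps",
        json={"description": description, "platform": platform},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()  # assumed shape: {"app_id": ..., "preview_url": ...}


# Per Abrahami, more detailed briefs should yield more complete apps.
app = generate_app(
    "A storefront app for my bakery: product catalog, cart, pickup "
    "scheduling, and push notifications for daily specials."
)
print(app["preview_url"])  # a human should still review before app-store submission
```

However the tool is actually implemented, the sketch underlines Abrahami's point about prompts: the richer the plain-language brief, the more complete the generated result, and a human review step belongs before anything ships.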
Whether app-making coders end up displaced remains to be seen, but as with much current AI-created content, there is still a big role for humans in the loop--the experts who can tinker with what the AI produces, iron out any flaws, make sure that no one else's intellectual property is infringed, and make the specific tweaks that maybe only a human coding expert can manage. BY KIT EATON @KITEATON

Tuesday, June 11, 2024

How the Tech Industry Stopped Building Things Customers Want

Seriously, when was the last time you waited in line to buy the latest and greatest tech product? When was the last time you got the email and clicked the link to learn more about the webinar for the software platform that would "take your business to the next level"? Have we become that jaded? Or does tech just suck now? It's a little bit of both. Many Symptoms, One Cause. In a recent post, I dove into all the reasons why it's so much harder to start and grow a business in 2024. One of the reasons I gave is that the consumer has fallen out of love with technology. This isn't just gadgets. This isn't just market malaise, inflationary effects, or an extended gap between tech trends. This is a complete loss of the pulse of the customer, in both business and consumer tech products. I'm a lifelong early adopter and even a fanboy of new tech, and I can't name-drop the most recent version of iPhone, Pixel, Xbox, PlayStation, Quest, Tesla, or Mac/Windows/Watch OS--or, what's more important, explain why you should upgrade to it. At the same time, as a technology industry executive, I can't think of one functional piece of business software that I need to do a better job than it's doing today. In fact, I honestly can't think of any single platform you might take away from me and make my job harder. Maybe Zoom? And I can definitely name three or four of those same platforms, like Slack or Monday, that, if you shut me out tomorrow, would probably make my job easier -- more enjoyable at the least, as long as I had enough of a heads-up that I could spin up a Google Doc to replace them. Oh! OK! Google Docs! I could not function without them. That's kind of underwhelming though, right? What the eff happened to technology? The tech industry started telling the customer what they wanted. They were wrong. I'm not saying that Sam Altman sits in his office, which I imagine looks like Darth Vader's chamber on a Star Destroyer, and is constantly thinking of new ways to shove chatbots down our throats. I'm not saying that. But in all seriousness, nowhere is this phenomenon more obvious and relevant than the stunted mass adoption of the electric car. For like the 13th time, too. I'll be the first to tell you that I'm a car guy and that electric vehicles ultimately make sense -- battery-material mining and coal-powered electricity are indeed problems that need solutions. The quest for mass adoption of electric vehicles has been a thing since that first day some rich guy got out of his golf cart at the end of a round and thought to himself, "We should put these on a highway at 80 mph." But until Tesla made electric vehicles that functioned as well as or better than ICE vehicles, no one took it seriously. Electric cars were always the metric system of transportation. But once more rich guys got involved and started leaning on green EPA standards and figuring out that they would need to build a robust fueling infrastructure themselves, and that they would need to sell those two things very hard without their necessarily being 100 percent true, a real, old-fashioned groundswell started. I mean, even I, back in 2022, said to myself, "This might be my last ICE vehicle." OK. Then everyone figured out the fuel isn't really clean and the infrastructure isn't really happening overnight, which means the bang isn't really worth the buck yet. And so now we're back to pickups and SUVs, and Buc-ee's is a thing. So just because a lot of rich and powerful people want change to happen doesn't mean the market will follow. 
The tech industry started relying on the customer to tell them what they wanted. The customer was wrong. The customer is often wrong. I don't know how many times I need to say this. Don't get me wrong. I'm a proponent of listening to your customers and letting them help drive your direction. But let me put it this way. If you're clutching your side and your friend says you might need to have your appendix taken out, you don't just hand them a scalpel and tell them to start cutting. Customers will always tell you what they think they want. Especially in tech. But customers rarely focus on the causes of their problems--they just know the pain and symptoms. You need a doctor. Tech companies have always been slow to hire people with the skills, experience, and knowledge to translate pain and symptoms into causes and solutions. They'd rather just start surgery right there in the street. Think about how many tech companies listened to the customer telling them that cable television was bloated and expensive. Google, Amazon, Apple, Disney, and all the media companies that wanted to become technified digital media companies got right on that problem. Speaking of old and overused product tenets, here's another, paraphrased: The customer doesn't want more choices, they want more confidence in their choices. Guess what the tech industry did. Spoiler alert: More choices. The tech industry followed the wrong trends. Man, I know I keep beating up on genAI, but let's scream it again for the folks in the back of the room. THE CUSTOMER DOES NOT WANT GENERATIVE AI. At least not as it's being presented to them. Sorry about all the yelling. Look, I'm both pro and con on AI, less pro on genAI. I like AI. A lot. I'm just not sure how many more overpromised and underdelivered splashy headline genAI use cases the tech industry can keep flinging at the customer before they all throw their hands up en masse. Sexy trends die quickly. I mean, this just happened with crypto and NFTs, voice social networks, AR/VR, even wearables, 3-D televisions, and WeWork's whole work-is-life vibe. The boring trends are what's going to lead to long-term success. They produce tech that doesn't look like magic. It doesn't even look like tech. It's something that can seamlessly be integrated into people's lives without their having to sacrifice anything or work hard to get the benefit. That's what tech is supposed to do. And that's the real reason why the tech industry stopped building what customers want. Customers don't buy technology. They buy the benefits of technology. Yeah, I could just stop this post here. But I think the reason the tech industry is seeing so little faith and hope from the customer is that it has forgotten what this means. I believe it's a "Well, yeah, but ..." problem. You keep telling the customer what they want. Well, yeah, but the early adoption is off the charts. You keep over-relying on the customer to tell you what to build. Well, yeah, but you're supposed to listen to your customer. You keep following the wrong trends. Well, yeah, but there's so much short-term gain there for the taking. You keep selling technology and not benefit. Well, yeah, but ... And here's where I think the tech companies keep punching themselves in the face. It used to be that tech things were produced by tech people and sold to other tech people. Then Grandma got a mobile phone because she had to -- she couldn't live easily without one. The tech industry has always had a disdain for the non-tech tech user. 
But tech isn't monolithic anymore, or as unified as it was even 10 years ago. My mother-in-law often comes to me with problems with her god-awful mobile phone. And I tell her, I can build an app for it, but I have no idea why you're not getting your email on it. This is where the tech industry is today. Every user is both a tech user and a non-tech user. My mother-in-law. Me. You. Every. Single. One. Until the tech industry figures out how to serve and sell to all of them, the mass malaise will continue. EXPERT OPINION BY JOE PROCOPIO, FOUNDER, TEACHINGSTARTUP.COM @JPROCO

Friday, June 7, 2024

What to Look For in an AI Training Provider

If you are in a leadership role today, chances are that your organization is already working to harness artificial intelligence for higher productivity, efficiency, and quality. But how do companies move from recognizing the promise that AI holds to actually creating a more productive, AI-empowered workforce? In my experience working with large organizations on their AI training needs, the key factor in successful AI adoption is not whether staff are using AI, but rather how they are using it. One MIT study found that while GenAI usage can improve a worker's performance by up to 40 percent, it can actually reduce performance when used to tackle a task beyond AI's capabilities. Organizations therefore need an AI workplace strategy that includes high-quality, business-relevant AI training. But there are two major challenges in finding it. First, current AI training tends to be divided into either very technical courses in areas like machine learning or courses on AI basics, with little in between. Few courses are targeted at what is important to most companies--namely, training employees to apply AI tools within their own particular roles. This means a new industry-centric approach to AI training is needed, with providers to match. Second, as demand for AI training has grown, the number of courses on the market has also multiplied--leaving many leaders unsure about which training partners can add value. Not just any training will do: Companies must truly understand their business needs and find AI training solutions that precisely meet them. So how can leaders identify AI training partners that will deliver? Below, I share some features that will help you find the right learning provider for your business. 1. Specialized Courses by Discipline or Role. Beyond general training programs that help workers learn the ropes of popular AI tools, quality providers will also offer more specialized courses such as AI for cybersecurity, AI for customer service, and prompt engineering. 2. Quality Assurance. Training providers need to ensure that human oversight and quality assurance are embedded in their course creation processes. This is essential to avoid the problem of AI inaccuracy or hallucinations, especially when AI is used to create learning content or when AI-powered virtual tutors are utilized. 3. Personalized Learning. One of the most exciting possibilities of AI is its potential to personalize learning for students. For example, my company, Chegg Skills, has developed a learning companion that is trained within the curriculum students follow and is hence more closely tailored to what they need to learn. It will be able to adapt content to the user's learning style, provide personalized recommendations based on individual progress, and tailor instruction to the educational background and experience of each learner. 4. Tailored Solutions for Your Organization. Your business has unique needs that specialized training providers will be able to understand. They can discuss your company's skills goals and challenges with you, and develop bespoke programs accordingly. They can also offer course recommendations, adjust the parameters of an existing course to suit you, or include organization-specific content. 5. Learning Methodologies. It's important to look for methodologies that are grounded in best practice and offer a mix of approaches, such as interactive assessment, simulations, real-world case studies, hands-on exercises, and human support. 
Does the platform incorporate active learning techniques, encourage problem-solving, and test knowledge with practical scenarios? 6. Regular Updates. At the current pace of AI development, AI training programs created today could be obsolete within six months or even sooner--unless providers enact frequent, documented updates as new releases advance. 7. Track Record and Analytics. While content can now be generated overnight, great learning outcomes are more elusive. It's important to find a learning provider with a solid track record on learner outcomes and deep expertise in supporting working adult learners. In particular, providers must ensure quality by collecting data on outcomes for their AI training programs and adjusting their content to improve results accordingly. What are the benefits? Quality AI training offers organizations a way to ensure that AI tools are utilized responsibly, in accordance with best practices and business objectives. Through training, employers will have an opportunity to reduce the stigma around AI use and boost collaboration, which in turn generates more creative use of AI tools as employee expertise grows. As Bill Gates commented on AI: "Businesses will distinguish themselves by how well they use it." As AI verticalizes for different sectors, companies need training that elevates their learning and development strategy, maximizes business outcomes, and helps them grasp tomorrow's opportunities. By investing in training today, organizations are building solid workforce AI experience and adapting to change before it occurs--whatever the future may bring. EXPERT OPINION BY MEGAN O'CONNOR, HEAD OF STRATEGIC PARTNERSHIPS, CHEGG @MEGANMOCONNOR

Wednesday, June 5, 2024

Nvidia Rolls Out New AI Chip Amid Soaring Demand

There's plenty of hype in the current wave of artificial intelligence evangelism, but when the CEO of the company that makes most of the chips powering the current surge in generative content creation promises upgraded models each year to meet demand, it's worth listening. Recent comments by Nvidia CEO Jensen Huang suggest AI is moving even faster--in ways that are truly transformational for business users--than even the most breathless media accounts indicate. Huang on Sunday addressed the annual Computex conference in Taiwan, and said the push to develop new systems to keep pace with AI development forced Nvidia itself to double its pace of chip upgrades. Huang revealed the company's next generation of AI-enabling chips, dubbed "Rubin," a mere three months after unveiling its then-new "Blackwell" model. The move reflected Nvidia shifting to what Huang described as a "one-year rhythm" going forward, to meet continually rising demands from business users. "Today, we're at the cusp of a major shift in computing," Huang said in his keynote address. "With our innovations in AI and accelerated computing, we're pushing the boundaries of what's possible and driving the next wave of technological advancement." Huang spoke of "AI agents" capable of not only carrying out specific tasks, but also combining an array of functionalities to fulfill more complex objectives, and a wide variety of use cases. Those include using mobile robots that integrate and respond to work environments, and even creating a virtual, AI-based model of life on Earth. But for that to happen, faster, stronger, and more energy-efficient chips must be able to handle data bottlenecks, currently a major obstacle. Automated organizing, planning, accounting, and content creation are already being transformed by AI business applications. Huang said new high-powered chips will weave different kinds of generative AI applications together, and make the tech available to businesses, organizations, and eventually individuals mining its capabilities with simple prompts. This transformation will only work if developers transition away from six-decade-old computing concepts and techniques for retrieving stored data, Huang said. Instead, next-generation systems will continually create new, constantly updated solutions that AI agents generate from the most recently available information. The unrelenting pace of development has reset Nvidia's chip upgrade plans and delivery schedules. The "Blackwell" platform unveiled in March will reach customers later this year, and a "Blackwell Ultra" successor is going to market in 2025. The even more powerful "Rubin" model announced Sunday will be available in 2026. The stepped-up pace reflects Nvidia's efforts to protect its leadership in AI chip sales--a market where it has an approximately 70 percent share. Rivals AMD and Intel have other ideas, and are rushing to roll out powerful AI platform chips as well. Huang said unlocking the potential of AI depends on enhancing performance, energy use, and autonomous graphics processing unit (GPU) and central processing unit (CPU) functions--even as they work independently toward common goals. Further development of those cross-nurturing platforms, he said Sunday, will permit businesses using them to speed result delivery by a factor of 100, at just 1.5 times the cost. Huang said companies will also find financial benefits in economies of scale. 
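Taking Huang's figures at face value, the per-result economics are easy to check: delivering results 100 times faster at 1.5 times the cost cuts the cost per result to 1.5 percent of what it was. A quick back-of-the-envelope sketch (the 100 and 1.5 are Huang's numbers; the rest is arithmetic):

```python
# Back-of-the-envelope check of Huang's 100x-speed-at-1.5x-cost claim.
speedup = 100        # results delivered 100 times faster (Huang's figure)
cost_multiple = 1.5  # at 1.5 times the spend (Huang's figure)

cost_per_result = cost_multiple / speedup  # 0.015, i.e., 1.5% of the old cost
print(f"Cost per result: {cost_per_result:.3f}x the old cost")
print(f"Savings per result: {1 - cost_per_result:.1%}")  # 98.5%
```

Of course, a lower cost per result only shrinks the total bill if you don't simply buy vastly more compute, which is exactly the tension in Huang's sales pitch.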
"The more you buy, the more you save," Huang said, before admitting some listeners might have trouble getting their heads around the reasoning. "That's called CEO math. It's not accurate, but it is correct." It's also what you might expect a maker of AI chips would say, especially as he looks to stoke sales and strengthen his market-leading position. BY BRUCE CRUMLEY @BRUCEC_INC

Monday, June 3, 2024

Email Innovator Superhuman Says Its AI Search Is Twice as Fast as Gmail's

Superhuman says it can beat Gmail in a race--of AI tools. The email app recently rolled out a suite of email optimization tools, including a new collection of AI-powered features developed with OpenAI's GPT models. The new features are "2-3x faster than the AI search in Gmail and Outlook," Superhuman founder Rahul Vohra wrote in a recent blog post. Superhuman's service starts at $30 per month per user, or $25 per month with an annual subscription. Additional pricing options exist for enterprise customers with more than 15 users. Founded in 2014 to help people achieve inbox zero in minutes rather than hours, Superhuman was one of the earliest companies to directly collaborate on AI-powered features with OpenAI. These include instantly generating draft replies to messages, tightening up emails, and auto-summarizing threads. Superhuman's newest GPT-powered feature, Ask AI, is essentially a chatbot-based search engine for your inbox. Say your company is planning an offsite retreat and you need to find booking info for each member of your team. You could spend a long time searching through keywords and usernames to compile a document, or you could just ask Superhuman's AI to scan your inbox and pull out all the relevant information. And for those worried about the AI generating false information, also known as hallucinating, the chatbot includes clickable citations, which link directly to the email from which each piece of information is pulled. Still, Vohra acknowledges that the system doesn't get it right 100 percent of the time yet. At its I/O developer conference last week, Google announced new Gmail capabilities powered by its proprietary large language model Gemini, including a similar feature in which users can talk to a chatbot to search through their email. Vohra says he isn't worried about the Goliath-size competition, insisting that Superhuman is "more contextualized and personalized" than Google's "corporate-sounding" AI. To demonstrate Superhuman's speed advantage over Gmail, Vohra sent Inc. a video in which a Superhuman employee posed a few questions to Superhuman's and Gmail's AI assistants, both pulling from the same inbox. The employee asked both email clients "when is the product team offsite," and then asked follow-up questions like "which hotel should I stay at?" At first, the responses were generally similar in content, although Superhuman's were consistently faster. As more follow-up questions were added, though, the Gmail assistant began hallucinating, while Superhuman continued accurately pulling from the inbox. Google did not respond to a request for comment regarding the speed or accuracy of Gmail's AI assistant. Vohra says that Superhuman's internal metrics show users saving an average of four hours a week that had previously been devoted to inbox management. "Stop searching, and start finding," he says. "You'll be much better off simply asking the AI what it is you want."
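Superhuman hasn't published how Ask AI works under the hood, but a citation-backed inbox search like the one described above typically follows a retrieval-augmented generation (RAG) pattern: fetch the most relevant emails, pass them to the model as context, and require every claim in the answer to cite a source message. Here's a minimal sketch of that pattern; the email schema, prompt, and naive keyword retrieval are illustrative assumptions, not Superhuman's actual implementation.

```python
# Minimal retrieval-augmented inbox search with citations.
# Illustrative only: the email schema, retrieval, and prompt below are
# assumptions for this sketch, not Superhuman's implementation.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

emails = [
    {"id": "msg-101", "subject": "Offsite dates", "body": "Offsite is June 24-26 in Austin."},
    {"id": "msg-102", "subject": "Hotel block", "body": "Team rooms are reserved at the Driskill."},
    {"id": "msg-103", "subject": "Lunch menu", "body": "Catering options attached."},
]

def retrieve(query: str, k: int = 2) -> list[dict]:
    """Naive keyword retrieval; a production system would use embeddings."""
    terms = query.lower().split()
    scored = sorted(
        emails,
        key=lambda e: -sum(t in (e["subject"] + e["body"]).lower() for t in terms),
    )
    return scored[:k]

def ask_inbox(query: str) -> str:
    hits = retrieve(query)
    context = "\n".join(f"[{e['id']}] {e['subject']}: {e['body']}" for e in hits)
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "Answer only from the emails below. "
             "Cite the [message id] for every fact so claims stay checkable."},
            {"role": "user", "content": f"{context}\n\nQuestion: {query}"},
        ],
    )
    return resp.choices[0].message.content

print(ask_inbox("When is the product team offsite?"))
# Expected shape: "The offsite is June 24-26 in Austin [msg-101]."
```

The inline message ids play the role of Superhuman's clickable citations: if the model invents a detail, a missing or mismatched citation gives the reader a way to catch it, which is exactly the failure the Gmail assistant showed in Vohra's demo.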