Wednesday, April 15, 2026

Gen Z is outsourcing hard conversations to AI. Why it matters

Around 2 a.m. on a Monday, Emily received a text from a fellow student, Patrick, whom she had gone on a blind date with two days earlier. The pair are juniors at Yale University who were set up by mutual friends. They requested anonymity, so CNN agreed to change their names to protect their privacy. “Hey Emily! I hope your half-marathon went well — I’m sure you crushed it,” Patrick wrote with a winky-face emoji. “Okay, bear with me here — I’m not the best at this kind of thing, but here goes.” In a six-paragraph text, Patrick said he would like to “hang out more — whether it’s just as friends or whatever it was we were this weekend.” He added that he wasn’t “looking for anything too serious right now.” At first, Emily didn’t think his reply was anything out of the ordinary. “It just seemed really proper, and I guess I knew that he was a really nice guy. So, I was just like, maybe this is just how he texts.” But after sharing his message with two friends, who put it through an artificial intelligence detector, she had her answer: “It was like, 99% AI.” She was right. Patrick admitted using ChatGPT to craft his text. He said he didn’t have much experience crafting a rejection message: “What do I do here? It’s the first time I had seen anyone since my high school girlfriend, which is why I was so nervous and wanted a second opinion.” “I tried to write my thoughts down, but I wasn’t sure how to format this in a way that’s not, like, really bad, so then I went to Chat,” he said. He gave ChatGPT the situation, his thoughts and emotions, and “Chat spit out a response.” Patrick is far from alone. Researchers say a growing number of young people are turning to AI to navigate social situations — drafting rejection texts, decoding mixed signals and scripting difficult conversations. Experts warn that this habit may be stunting emotional growth, leaving an already isolated generation who came of age during the pandemic even less prepared for the messiness of human connection. Patrick went back and forth with the chatbot and “tweaked certain lines here and there, but it was mostly copy and paste” from ChatGPT. “I added an emoji and tried to make it sound more human,” he said. “I felt better putting this out there because I wanted to be very clear and forthcoming. I didn’t want to be wishy-washy with it in case she took it the wrong way. I knew if I did it on my own, I would have been wishy-washy,” said Patrick, who compared the move to consulting an expert. Emily said she did not think the text was clear and that it made his intentions more confusing. She couldn’t tell from the AI wording “if he wanted to be friends or what.” “My main intention was to be clear in how I was feeling and thinking about the situation,” Patrick said. “Looking back on it, that was pretty poor behavior on my part. I think sitting on it for so long was the reason I went to Chat.” “I think he was overthinking it,” Emily said. “You definitely don’t need to use AI; you’re an emotionally sane guy.” She described the interaction as weird but said many of her friends have also turned to artificial intelligence to draft texts to friends or partners, or to analyze social situations — sometimes pasting entire text chains into a chatbot to decipher what someone might be thinking. “The thought of my little brother using AI to break up with his girlfriend is concerning.
Because right now he comes to me, but when’s the day he’s going to turn to AI instead?” She said she is worried that Gen Zers have trouble “confronting their own feelings.” Emily said she’s also concerned about her generation’s ability to socialize, and some experts agree.

It’s called ‘social offloading’

Emily’s experience is part of a broader pattern that concerns researchers. Dr. Michael Robb, head of research at Common Sense Media, calls it “social offloading”: using AI to navigate interpersonal situations. And he said it isn’t limited to Generation Z. He has observed it among Gen Alpha (born between 2010 and 2024) and some millennials (born between 1981 and 1996) as well. One-third of teens already prefer AI companions over humans for serious conversations, according to a 2025 survey conducted by Common Sense Media, a nonprofit organization that helps families navigate age-appropriate media choices. “If you’re using AI to draft your messages to friends or romantic partners, you’re outsourcing the communicative act itself,” Robb said. The problem is twofold, he noted. First, it creates an “expectation mismatch,” since the recipient is “responding to an AI-polished version of their friend and not the actual person.” Second, repeated use can erode users’ confidence in their own voices, preventing young adults from developing essential skills, such as reading social intent, inferring others’ emotions and tolerating ambiguity in social interactions. “It has implications for your sense of self, advocacy and identity formation,” which are central to social development, Robb said. “If every tricky or difficult text is mediated by the AI, it may instill the belief in users that their own words and instincts are never good enough.” Dr. Michelle DiBlasi, a psychiatrist at Tufts Medical Center and assistant professor at Tufts University School of Medicine, has observed the same trend. “I have seen young people, late teens, early 20s, using AI to socialize, and oftentimes they’re using it as a way to overcompensate for the fact that they don’t really know how to truly interact with others,” she said. “We’re social beings, and a lot of our feelings of self-worth and connection are really related to our interactions with others.” DiBlasi said that using AI in social interactions stunts emotional growth and can perpetuate feelings of loneliness and isolation. It can also limit people’s ability to pick up social cues, repair relationships and connect with others.

The pandemic’s impact on connection

Why is Gen Z struggling with socialization? Researchers point to a combination of digital culture and the pandemic. Russell Fulmer, an associate professor at Kansas State University who studies AI and behavioral sciences, said the two forces created the “perfect storm” for AI to be integrated into social interaction. Adolescence — roughly ages 10 to 19, according to the World Health Organization — is the critical window for developing confidence, a stable sense of identity and emotional regulation. If adolescents don’t fully develop their social skills during this time, they may be “more prone to lack confidence, more apt to escapism or avoidance and maybe there’s a lack of resiliency,” Fulmer said. DiBlasi said the pandemic hit Gen Z at a particularly vulnerable moment. “When it happened, they were in the stages where the frontal lobe of their brain was starting to form,” she said.
Typically, that’s when adolescents learn to build relationships, pick up social cues and develop mentalization — “the ability to understand somebody else’s mental state or what they’re thinking and how they’re feeling.” DiBlasi said that this lack of interaction leads to “a deep sense of isolation, feeling like others don’t understand them, or that they don’t understand others,” which drives many toward AI for companionship. But Fulmer warns that chatbots can create a “loneliness loop,” offering an “appearance of connection” that ultimately feels unfulfilling and can deepen isolation. In the most serious cases, DiBlasi has seen patients experiencing suicidal thoughts turn to AI to help articulate what they’re feeling when they can’t find the words to tell others. “I think this can be really, really detrimental, because it’s important for people to express some of these emotions in a very honest way with family or friends, so that they can actually work through this in an authentic way,” she said.

It’s not too late to change course

Although some Gen Zers may have missed a prime window for developing social skills, DiBlasi emphasized that it is not too late for them to learn. She encourages people to reach out to friends and family rather than AI when they struggle to express difficult emotions. “These things are skills that, just like anything with practice, can actually improve,” DiBlasi said. “I understand that people are fearful or they may not want to say the wrong thing. But I really think it takes away any sort of understanding of what you’re actually truly feeling and takes away the connection and the repair that you need to make in these relationships.” Artificial intelligence is a poor substitute for the messiness of real human interaction, experts say, and that messiness is the point. “Relationships and conversations can be messy and probably should be messy, and that’s part of what makes you more socially competent in the long run,” Robb said. AI companions are “designed to be very validating and agreeable,” he noted, so their feedback doesn’t reflect the friction that’s part of how people respond in real relationships. AI users shouldn’t expect an objective read on social situations either, Fulmer added. “Social contexts are often not entirely objective,” he said. “They’re contextual, they’re relational, and therefore nuanced.” As confident as a chatbot may sound, he said, it’s searching for a through line in something that may not have one. For parents, Robb recommended watching for warning signs, including social withdrawal, declining grades or a growing preference for AI over human interaction. They can respond with low-pressure check-ins, such as asking what their children use AI for, how it makes them feel and what they think they get out of it. The goal is to get kids thinking critically about what AI does well and where it falls short, said Robb, who suggested that families consider limits on AI usage similar to screen-time rules. By Asuka Koda

Monday, April 13, 2026

The Real Reason AI Projects Fail, According to Prezi’s CEO

For years, leaders have been told that artificial intelligence is the competitive edge. According to Prezi CEO Jim Szafranski, that thinking is backward. “The technology is not the hard part,” Szafranski said. “Finding the right problem, that’s the hard part.” Most companies are getting that wrong.

The myth of starting with technology

Szafranski explained that leaders often begin their AI journey by asking, “Where can we use AI?” instead of “What are we actually trying to fix?” That misstep is costing companies billions. According to Gartner, as many as 50% of AI projects fail to deliver meaningful results, largely due to poor alignment with business goals. Szafranski saw this play out in a steel mill project. “We thought we were solving for scheduling, but that wasn’t the real issue,” he said. After deeper analysis, the team discovered the real problem was optimizing how steel reached customers, not replacing a human scheduler. Once reframed, the AI delivered actual business impact. “The first problem you see is almost never the right one,” he added.

Finding the “perfect problem”

Szafranski described what he called the “perfect problem,” a challenge that is both meaningful and solvable. “You’re looking for something where the impact is obvious, and the path is achievable,” he said. “That’s where AI works.” AI pilots fail to produce measurable business impact not because of weak models, but because companies pursue the wrong use cases. The takeaway: AI success is less about sophistication and more about precision.

Why “time to outcome” beats “time to value”

One of Szafranski’s biggest shifts in thinking is moving beyond “time to value.” “Time to value is incomplete,” he explained. “What matters is time to outcome: did the user actually achieve what they needed?” That insight reshaped Prezi’s AI strategy. Initially, the company focused on automating presentation features, making slides faster and easier to build. However, that wasn’t the real job customers needed done. “They’re not trying to make slides,” Szafranski shared. “They’re trying to persuade somebody.” That realization changed everything.

What Prezi is doing differently

Today, Prezi is using AI to help users communicate and persuade more effectively, not just design better presentations. “We shifted from helping people build presentations to helping them win moments,” Szafranski explained. The platform now focuses on:

• Simplifying visual storytelling for non-designers
• Helping users communicate ideas quickly under pressure
• Enabling more engaging, outcome-driven presentations

This shift has unlocked growth, particularly in global markets. Szafranski noted that accessibility has become a major driver. “When you remove the barrier of design skill, you open the door to entirely new audiences,” he said. That strategy is working. Prezi continues to expand internationally, especially in regions where traditional presentation tools were harder to adopt due to language or educational barriers.

Accessibility is a growth strategy, not a feature

Prezi’s approach highlights a broader truth: accessibility is inclusion and expansion. According to MIT research, the vast majority of AI investments fail to generate financial returns when they are disconnected from real user needs. Prezi is doing the opposite — building for real-world communication challenges at scale.

The real takeaway for leaders

AI isn’t magic. It’s a multiplier.
As Szafranski made clear, “If you pick the wrong problem, AI just helps you get there faster.” The companies winning with AI aren’t the ones with the best models. They’re the ones asking better questions. Because in the end, the difference between failure and transformation comes down to one decision: Are you solving the problem you see or the one that matters? BY NETTA JENKINS, FOUNDER, HIC, A WORKPLACE CONSULTING FIRM | AUTHOR OF SUPERCHARGED TEAMS

Thursday, April 9, 2026

Meta just provided its clearest look yet at its AI plan. It’s about time

Meta’s most important launch in years may not be its latest Ray-Ban glasses or its AI app. Instead, it could be the new AI model it introduced on Wednesday, hinting at how its billions in AI investments could one day transform its products. Muse Spark, the first AI model from Meta’s superintelligence lab, powers Meta’s AI app and will be integrated into Instagram, WhatsApp, Facebook and its AI Ray-Bans in the coming weeks, the company said in a press release. Meta calls the model “purpose-built” for its products and says it is designed to streamline tasks like shopping and trip planning — the kinds of things that people already use Instagram for. The launch seemed to be exactly what Wall Street wanted to hear after Meta poured billions into its AI ambitions with little detail about how those dollars will affect its bottom line. Shares were up more than 9% shortly after the announcement on Wednesday and closed 6% higher. Last June, Meta invested $14.3 billion in data labeling startup Scale AI and hired its former CEO, Alexandr Wang, as its chief AI officer. It gobbled up rising AI startups Manus and Moltbook. OpenAI CEO Sam Altman claimed last year that Meta CEO Mark Zuckerberg offered $100 million signing bonuses to lure talent away from the ChatGPT maker. And the Facebook parent company spent more than $72 billion on capital expenditures, or costs related to AI infrastructure, in 2025. Analysts and investors want to know how those investments will pay off. Zuckerberg didn’t offer specifics when asked about the return on AI investments during a January earnings call, saying his response “may be somewhat unfulfilling.” He added that the company is in “this interesting period where we’ve been rebuilding our AI effort, and we’re six months into that, and I’m happy with how it’s going.” Muse Spark is the clearest answer Meta has yet provided. Meta outlined use cases for the model similar to those offered by platforms like ChatGPT and Gemini: for example, creating a game with a prompt, answering health questions and analyzing a photo of snacks on a shelf to provide nutritional information. But the launch signals a concrete strategy to challenge OpenAI and Google after initial confusion around the direction of Meta’s AI app. In the past, Meta positioned the app both as a destination for AI-generated videos and as a hub for its smart glasses. Some users accidentally posted public queries that they believed to be private last year, perhaps an indication that certain consumers weren’t sure how to use the product. Meta also provided some clues about how its social media platforms could give its AI app an edge over rivals. The Meta AI app will reference content from the company’s social media apps when answering questions related to shopping, trending topics and locations. It says it’ll draw on public posts for certain answers to provide “context from your people, right where you need it.” The company also plans to eventually incorporate Instagram Reels, photos and posts directly into answers. The timing is also critical; Meta faces increasing competition from OpenAI, Google and Apple in the coming months:

• OpenAI has been aggressively expanding as it seeks to replicate the success of ChatGPT in other corners of our lives.
• Google is expected to release its Android-powered spectacles this year. The search giant will likely make more announcements around its AI strategy next month during its developers conference.
• And Apple’s revamped Siri is expected to launch this year following delays.
Similar to Meta, Apple’s strategy is centered on leveraging a person’s preferences to personalize answers. Meta needs a win. The metaverse didn’t upend the internet as the company expected. Meta’s smart glasses have been at the center of privacy concerns. OpenAI’s ChatGPT caught the tech industry – Meta included – largely by surprise, leaving tech giants racing to catch up over the last three years. The jury is out on whether Meta’s new AI models will propel its products to new heights, replicating the success of Facebook and Instagram’s early days. But the launch of its first model made specifically for its products suggests Meta is building towards a vision. Now it just has to execute it. Analysis by Lisa Eadicicco

Wednesday, April 8, 2026

AI Is Breaking Passwords, and the Alternatives Are Getting Pretty Weird

Your next password could be your heartbeat—or maybe even the way you breathe. As hackers get better and better at cracking traditional passwords (by exploiting lazy consumer habits and technical advances such as artificial intelligence), researchers are searching for new methods to protect sensitive data. The tech industry has been trying to nudge users to other data protection methods for years—and some of those methods have been unusual, to put it mildly. Take, for example, the latest alternative authentication method, which was developed last year by researchers at Rutgers University. VitalID is a new spin on biometric protection, using unique vibration patterns from breathing and heartbeats that resonate through the skull to identify you. Differences in people’s bone structure and facial tissues make the harmonics as distinctive as a fingerprint, researchers said in a paper outlining the proposed authentication method, which was envisioned for extended reality headsets. “Traditional security mechanisms—such as passwords, PIN codes, and conventional biometric systems—proved increasingly incompatible with immersive interfaces,” the researchers write. The average person is responsible for roughly 170 passwords, according to password manager company NordPass. That’s why, in part, people tend to reuse the codes—and it’s why hackers have been increasingly effective at gaining access to people’s information, in attacks on both corporate and personal systems. Biometrics have shown some promise. Mobile users are quite familiar with Face ID and pressing their thumb to the screen to prove they’re the rightful owner of the device. Fingerprint logins started to go mainstream in 2013, and face scanning began to rise in popularity in 2017. Voice recognition seemed like it would be an effective tool, but recent technology advances have sidelined that. “Now that AI can clone a voice from a few seconds of audio, it’s not reliable,” said Karolis Arbaciauskas, head of product at NordPass. Rutgers’ unusual approach is far from the first strange way of securing user authentication. Attempts to do away with passwords have taken several different forms over the years. Here are some of the most unique:

Password pill: While Apple was launching Touch ID in 2013, Motorola pursued a different authentication method. The company prototyped a small authentication pill that was designed to be powered by stomach acid. When swallowed with a glass of water, it would produce an 18-bit ECG-like signal that made your body the authentication token. As you might guess, this was seen as a pretty creepy way to guard your data, and it never made it out of the prototype phase.

Tattoo: Motorola showcased a temporary tattoo that same year that could be used for authentication, but that method was met with the same privacy concerns as the pill.

Body odor: As an offshoot of biometric authentication, some researchers have experimented with using a person’s unique chemical scent to confirm their identity. (Some of those same groups also studied things like the shape of your ear and your gait as identifiers.) These have fallen short of mainstream acceptance, as people don’t really want to use their funk as an identifier, and the sensors have not proved to be as reliable as other methods.

Lip-reading: This technology actually works, focusing on the unique way people mouth specific words or phrases. It’s used more frequently as a discovery tool, though, such as discovering what someone is saying in video footage that has no audio. Most consumers have not shown a real willingness to mouth a passphrase to their PC or phone.

Heartbeat recognition: This biometric authentication method has caught the eye of NASA. Like fingerprints, no two ECG patterns are the same, so by wearing an experimental band, you can verify your identity. This actually made it to market in the form of the Nymi Band, but it remains too costly at the moment for mass-market adoption.

While research on fringe identification methods is likely to continue, the most promising data protection advance these days is the passkey. This authentication method generates a pair of keys: one public, which is stored on the server, and one private, which is stored on the device. That means that if the server is compromised by hackers, accounts are still protected, as the hacker won’t have both sets of keys. In essence, the passkey you unlock on your phone with a face scan or fingerprint is one half of what’s necessary to get access. The other half is stored elsewhere. For a hacker to crack both, they would need to have your phone and to hack the server, making breaches more difficult. While many major sites support passkey technology, it’s still far from universal. (And hackers have a way of catching up, which is why researchers are still looking at other methods, like biometrics, especially as new technology appears close to breaking encryption.) “It’s no surprise that there have been and still are many attempts to free us from passwords and remembering them,” says Arbaciauskas. “But for now, there is no universally practical way to live without passwords—especially since not all websites and platforms support passkeys yet.” BY CHRIS MORRIS @MORRISATLARGE
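For readers curious how that public/private handshake plays out in code, here is a minimal Python sketch of a passkey-style challenge-response, built on the open-source cryptography package. It illustrates the general idea only; real passkeys implement the FIDO2/WebAuthn standard, and the names and flow below are simplified assumptions.

```python
# A minimal sketch of the challenge-response idea behind passkeys.
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Enrollment: the device creates a key pair and keeps the private half.
device_private_key = Ed25519PrivateKey.generate()
# Only the public half is sent to the server for storage.
server_stored_public_key = device_private_key.public_key()

# Login: the server issues a random challenge...
challenge = os.urandom(32)
# ...the device signs it (after unlocking the key with a face scan, fingerprint, or PIN)...
signature = device_private_key.sign(challenge)

# ...and the server verifies the signature against the stored public key.
try:
    server_stored_public_key.verify(signature, challenge)
    print("Login accepted: device proved it holds the private key.")
except InvalidSignature:
    print("Login rejected.")
```

The design point is that the private key never leaves the device, so a breached server yields only public keys and nothing a hacker can replay.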

Monday, April 6, 2026

‘Everyone now kind of sounds the same’: How AI is changing college classes

At this point in her senior year at Yale University, Amanda knows that many of her classmates turn to AI chatbots to write papers and other homework assignments. But she started noticing something bizarre in her smaller seminar classes: Her classmates sit behind laptops with polished talking points and arguments, but the conversations that follow often fall flat, whatever the subject. In one class, “the conversation came to a halt, and I looked to my left, and I saw someone typing ferociously on their laptop, asking (a chatbot) the question my professor just asked about the reading,” Amanda told CNN. Amanda and two other students — Jessica and Sophia — attend Yale University. They requested anonymity for fear of retribution from their classmates and professors, so CNN agreed to change their names for this article. Amanda said she was taken aback. Until that day, she didn’t realize that her peers were using chatbots in class and sharing what the bots spit out in the classroom. Now she notices the impact that tendency is having on class discussions. “Everyone now kind of sounds the same,” she said. “I feel like during my freshman year in college, I would sit in seminars where everyone had something different to contribute. Although people would piggyback off each other, they approached from different angles and offered different commentary.” As AI becomes increasingly integrated into education, educators and researchers are finding that it may be eroding students’ capacity for original thought and expression. A paper published in March in Trends in Cognitive Sciences found that large language models are systematically homogenizing human expression and thought across three dimensions — language, perspective and reasoning — and students and educators say they are seeing the effects of that trend in their classrooms. And that makes a lot of students sound the same.

Why students use AI in class

Jessica, a senior at Yale, told CNN that she uses AI every day for her classes. In an economics seminar in which the professor cold-calls students, “at the beginning of class, you could see every single person putting every single PDF” into a chatbot. She also uses AI when she has trouble turning her thoughts into words. “I want to comment, and I have this concept, but I don’t know how to formulate the sentence myself,” she said. So she asked a chatbot “to make it sound more cohesive.” A Yale University spokesperson told CNN that the university is aware of the ways AI is used in the classroom, including those described in this article, and that “Students continue to experiment with using AI in class.” “To support learning and engagement, we are seeing a broader trend of faculty designing courses with limited or no laptop use, emphasizing print-based materials, original thinking, and direct engagement with peers and instructors,” the spokesperson said. Thomas Chatterton Williams, a visiting professor of the humanities and senior fellow at the Hannah Arendt Center at Bard College, has seen the impact of students’ decisions to use AI. Students’ reliance on AI “has paradoxically raised the floor of class discussion to a generally better level in courses with difficult concepts, but has also tended to preclude stranger, more eccentric and original thoughts,” said Williams, who is also a nonresident fellow at the American Enterprise Institute, a think tank that includes research on education.
“My biggest concern is that many bright young people will never achieve a voice of their own — indeed that a surprising number of them won’t even fully appreciate the value of authorship and ownership of a point of view,” Williams added. Jessica admitted that she’s felt herself become lazier since she started using a chatbot to help with her classes. “I have thought about how much I stopped working, like my work ethic has completely diminished from high school,” she said.

Why does AI make people sound the same?

Large language models, or LLMs, are trained to predict the next most statistically likely word given everything that came before it, said Zhivar Sourati, a doctoral student at the University of Southern California and first author of the paper. The data those models train with overrepresents dominant languages and ideas, so their answers to users’ questions naturally “mirror a narrow and skewed slice of human experience,” the researchers wrote in their study. The result is “a narrowing of the conceptual space in which models write, speak, and reason.” AI-induced homogenization happens across three dimensions: language, perspective and reasoning strategies, the authors explained. That’s because AI models tend to reproduce what researchers call “WEIRD” viewpoints — Western, educated, industrialized, rich and democratic — even when explicitly prompted to represent other identities. One possible consequence, Sourati said, is that WEIRD language and perspectives could become perceived as more credible and “more socially correct,” marginalizing other viewpoints. A similar phenomenon is observed in reasoning, in which the popular technique of walking models through step-by-step logical thinking may be crowding out more intuitive, culturally specific and creative ways of working through a problem. When a group repeatedly interacts with AI systems, Sourati explained, its creativity flattens compared with that of the same group working without AI assistance. This flattening raises concerns in educational institutions at all levels. When students are asked open-ended, subjective questions with no single correct answer, teachers can expect a wide range of responses. But if all students rely on AI, their answers may become more polished but fall into just a handful of similar categories, Sourati said. They will lose the diversity of thinking that classroom discussions are meant to encourage. Sourati is most concerned that homogenization is happening to people who are developing their ability to creatively generate new ideas. If students continue to use AI instead of developing their own thought processes, “they wouldn’t learn how to even think by themselves and have their own perspectives.” Morteza Dehghani, a professor of psychology and computer science at the University of Southern California, said that he has heard of people using AI to determine who to vote for in an election, which he finds “quite scary.” “If people lose diversity” in the way they think, “or get into intellectual laziness, of course, that is going to affect our society greatly,” said Dehghani, who is a coauthor of the paper. Sophia, a junior at Yale, believes that her fellow anthropology students are using AI to draft scripts for what to say in class because people are insecure about what they don’t know. “I think creativity is dwindling because we lose the ability to make connections,” she added.
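Sourati’s point about statistical prediction is easy to see in miniature. The toy sketch below is an illustrative assumption, not a real LLM: it counts word pairs in a tiny invented corpus and always picks the likeliest next word, and because every user of such a model gets the same most-average output, it is homogenization in its simplest form.

```python
# A toy "language model": count which word follows which, then always
# pick the statistically likeliest next word. Real LLMs do this with
# billions of parameters over long contexts, but the training objective
# (predict the likeliest continuation) is the same in spirit.
from collections import Counter, defaultdict

corpus = ("the essay makes a strong point . "
          "the essay makes a clear point . "
          "the data makes a clear point .").split()

# Count how often each word follows each other word (a bigram table).
following = defaultdict(Counter)
for prev_word, next_word in zip(corpus, corpus[1:]):
    following[prev_word][next_word] += 1

def most_likely_next(word):
    # Greedy choice: take the single most common continuation.
    return following[word].most_common(1)[0][0]

word, sentence = "the", ["the"]
for _ in range(5):
    word = most_likely_next(word)
    sentence.append(word)

# Always prints the same sentence: "the essay makes a clear point".
# The rarer phrasings in the corpus never surface in the output.
print(" ".join(sentence))
```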
If people continue to offload their reasoning to AI, Dehghani agrees that communities will lose creative innovation and the ability to critique mainstream ideas or even political candidates. As more people use AI models to write and think, those outputs are reabsorbed into human discourse — and eventually into the data used to train the next generation of models — so the homogenization keeps compounding, the paper’s authors said. “If we’re offloading our reasoning onto these models, then we can easily be persuaded by what the models tell us,” he said. In education, Dehghani is concerned about a generation of students who are learning with AI and being tutored by AI. “They would be more homogenous in the way they think, in the way they write, so this is going to have long-term influences,” he said.

People aren’t learning to reason

Sophia, who tries to resist using AI in school, said she believes people are deprioritizing their own thinking “in favor of having really big words.” “I would literally rather just tell the professor, ‘I don’t know what we’re talking about.’ Even if you put every reading into (a chatbot), it doesn’t have your past experiences that make you a critical thinker,” she said. “I feel like people had a lot more to say because they actually feel tied to the material,” Amanda agreed. “Now classroom discussions are not really digging deep. I think a lot of that has to do with the AI chatbots, but also, there’s no longer as much of a drive to connect with the material personally.” Disappointed, she added, “I think it’s boring to be in a class where everyone has the same thing to say, and no one wants to dig deeper or push against what is directly said in the text or the norm.” Daniel Buck, a research fellow at the American Enterprise Institute and a former English teacher at four K-12 schools over seven years, said he is concerned that students are circumventing the cognitive work required to engage in classroom discussions and complete homework. “A lot of learning happens in the boring minutia, the struggle,” Buck said. Students retain only what they have actually spent time consciously processing, he continued. If a student outsources thinking to AI, they may be able to reproduce a talking point in class, but they haven’t built the underlying skills to apply that knowledge elsewhere. Buck draws a sharp distinction between AI and the shortcut technology that preceded it: SparkNotes. When students relied on the popular website to find chapter-based summaries of literary works, teachers could easily detect it, he added. AI is a “supercharged version of SparkNotes” that “can answer any question that you pitch to it,” Buck said. Whereas SparkNotes offered a fixed set of analyses, AI can respond to whatever a teacher asks, making it much harder to identify when students are not doing the thinking themselves. The difference is in how people reason. Instead of serving as just a reference, the way books or search engines do, AI is an active participant in “problem solving and perspective-taking,” Dehghani clarified. “What we are seeing now is fundamentally different than other periods of homogenization of expression and thought,” Williams said.
“If even professional writers are finding it exceedingly difficult to resist outsourcing the difficult work of wrestling with words and ideas — as we know they are — I don’t see how the younger generations who have not experienced a world before highly sophisticated, on-demand AI writing will be able to do this, not at scale.” Buck worries that students will graduate without having developed relationships with professors or the habit of sustained cognitive work. That means they will struggle to solve problems in the real world. “There’s so much delight in reading original student essays,” he said. “Even if it isn’t quite as well-argued or as solid as I wish it would have been, you’re seeing these young students, for the first time, start to think for themselves, to analyze, to think critically. It’s almost like watching my own children walk for the first time, where they stumble and fall, and that’s amazing. Keep doing that.” Reading and interacting with students’ original thoughts in class helps teachers understand how students think and articulate. “There’s an interpersonal exchange that I think gets overlooked when you get to know your students, they get to know you, they start to trust you and your feedback,” he said. “I think that gets lost too when it’s just everything is through AI.”

How teachers work around AI

Sun-Joo Shin, a philosophy professor at Yale, said, “It is a big homework for anyone who is involved in teaching” to keep exploring ways to ensure students continue to think critically and creatively in the age of AI. “We are in an interesting and exciting transition. I want my students to understand the material of the class, which is constant before and after the appearance of AI,” she said. “At the same time, I want them to use this exciting tool to their advantage, not be a victim of it. A dilemma of an instructor is how to help, or force, students to learn the material and to think creatively without running away from the AI tools or without copying them.” Until the fall semester of 2024, she said, she was not worried about how AI would affect students’ understanding of the material in her mathematical logic class. Her teaching team had tested the problem sets against the AI models at the time, and they were unable to solve her problems. But since then, “AI has been catching up,” and models can answer questions “pretty well” if students upload class handouts and learning materials. She started thinking about additional requirements in the class beyond problem set submissions. “After all, it would be extremely unfair to give good grades to AI answers,” Shin said. Yale has guidance on AI usage for both students and faculty. “Generative AI use is subject to individual course policies,” one of the university websites states. “We encourage all instructors to adapt our model policies for their specific course and learning goals. AI Detection tools are unreliable and not currently supported.” Yale provides model policies for different class types such as “Creative Writing Seminar” and “STEM Mid-Sized Lecture.” The policies range from discouraging AI usage, with guidelines on when AI explicitly cannot be used; to allowing students to use AI as a source of ideas while prohibiting them from submitting text generated by chatbots; to encouraging and permitting students to use AI in assignments. Buck warns that any work sent home cannot be verified as the student’s work.
To counter AI, teachers are going back to reading texts aloud in class, “on-demand, handwritten essays” and “paper and pencil assessments.” In-class accountability often comes in the form of pop quizzes. A student who had asked AI for a chapter summary instead of reading the chapter might get the broad strokes, but there is a strong chance that the one specific detail the quiz will ask about did not make it into the summary, Buck said. “If you did the reading, it was super-duper easy,” he said. “And if you didn’t, then there was no way to bluff your way through.” “I made a rather significant change for my two logic classes in terms of requirements,” Shin said. Although she still includes problem sets as part of her classes, she has reduced their weight in students’ grades. Now, the problem sets are graded only on completion, and students receive feedback rather than grades. “Using these problem sets as a question bank, I have two midterms and one final, all of which are in-class exams,” she said. “Some questions are lifted from problem sets, some are slight modifications, some require students to check where a proof goes wrong, and some are filling in gaps in a proof that they solve in problem sets.” For her computability and logic class, “I have given oral tests, one by one, for years, and a presentation requirement before the AI era, which has been working out very well,” she said. Now, the exams, oral tests and presentations are weighted more heavily for students’ course grades than take-home problem sets. Williams has arrived at a similar place from a different direction. As a professor, he has moved all writing assignments in-class and made them spontaneous. At the end of the semester, he assesses students through oral exit exams. “I cannot with any confidence assign students any writing that I don’t watch them commit to paper by hand in my own presence,” he said via email. “I think this is a terrible loss, but it’s necessary. The temptation and availability of AI is too great.”

It’s affecting other people’s educations

While educators can work around AI in assessments, it is equally important for students to be intentional about limiting their reliance on it as they learn, especially since it affects their classmates’ education, too. “It is frustrating because even though I personally try to stray away from it, I can’t prevent other people from using it,” Amanda said. “The fact that others use it affects my education as well, and the value of the two hours of my seminar.” Basil Ghezzi, a freshman at Bard College who actively avoids using AI in her studies, worries about the environmental costs associated with using AI models. Instead, she encourages students to turn to the resources already around them. “Talk to your teachers, talk to your professors, talk to people around you. Have meaningful conversations with people in your life,” she said. Still, not everyone has an “all or nothing” approach to AI. Dehghani said he writes bullet points capturing ideas he originated and asks the model to find flaws in his work. He hopes that more companies will invest in AI models that can generate variety and reflect the diversity of thought in our current society. For now, however, Dehghani suggests that people should resist using AI to generate ideas or to reason. AI models “should be collaborators. They shouldn’t be agents that do everything on our behalf,” he said. By Asuka Koda

Friday, April 3, 2026

Why LinkedIn Believes AI Will Turn Workers Into Founders

As workers worry that AI will automate their jobs away, LinkedIn CEO Ryan Roslansky and fellow LinkedIn executive Aneesh Raman argue something different: AI is about to make entrepreneurship far more accessible. That’s the thesis of Open to Work: How to Get Ahead in the Age of AI, LinkedIn’s first book, released Tuesday. Co‑authored by Roslansky and Raman, the book lays out how AI can strip away many of the traditional barriers to starting a business—capital, gatekeepers, specialized expertise—and replace them with tools that let individuals build, test, and scale ideas on their own terms. Drawing on founder case studies and research from MIT Sloan senior lecturer Paul Cheek, the book frames AI not as a threat to work, but as an accelerant for self‑employment and ownership. Raman’s own career mirrors that premise. His path—from CNN correspondent to presidential speechwriter to LinkedIn executive—wasn’t linear, but it was intentional. Each role, he says, was a way to expand impact and adapt as opportunity shifted. In Open to Work, Raman connects that mindset to the moment founders now face in a labor market where titles matter less than skills, and where AI can help individuals turn experience into businesses faster than ever before. LinkedIn has invested heavily in AI tools for both its workforce and users. In 2023, it launched AI-powered writing suggestions to help users update their profiles. The following year, the tech was updated to create resumes and cover letters tailored to specific job listings on the platform—and it grew even more tailored to individuals in 2025. A LinkedIn spokesperson says more than 38 million people use the platform’s AI-powered job search every week.

Book Preview:

Across the Industrial Revolutions, new forms of energy emerged, from steam to electricity. Those new forms of energy supported new forms of technology, from the assembly line to the internet. And with those new forms of technology, economic growth all over the world has increasingly come from one thing above all else: the ability to produce more goods and services, faster and cheaper. As a result, our economies started prizing the skills that would best support efficiency at scale, especially analytical and technical skills. As humans at work, our value was measured by how effectively we could support technology executing more, better, faster. A few of us did work that involved innovating and thinking creatively but, for the most part, even that work was about creating new goods and services that helped consumers and businesses do more, better, faster. Today we’re all mostly manning assembly lines, operating registers, driving tractors, building spreadsheets, writing code, managing meetings, and responding to emails. So. Many. Emails. In every case, across so many of our jobs, our value has been tied to our ability to help organizations achieve that same goal: more output, better quality, faster delivery. Then came AI. Suddenly, so much of what we’ve trained ourselves to do, so much of what our economy has valued most, AI started to do. And it started to do it more efficiently than we ever could, becoming better by the day at precisely the kind of technical and analytical capabilities our economies currently prize above all else. Of course we’re worried. But that fear misses something crucial: Our competitive edge as a species was never our capacity for processing and producing more, better, faster in the first place.
As AI starts to handle the “more, better, faster” work that has consumed so much of our time and energy, we will finally have the opportunity to reclaim the work that only we can do. Work that is based on what makes us uniquely human.

Learn what AI can do, and what only you can: AI is changing the way we work, but it doesn’t replace the strengths that set people apart. When you understand where technology can amplify your impact — and where your judgment, empathy, and creativity shine — you unlock real momentum in your career.

Build human capabilities that outlast every tech shift: Skills like curiosity, creativity, communication, compassion, and courage never go out of style. As tools evolve, these abilities become even more valuable. Strengthening them now puts you in control, no matter how fast work transforms.

Turn insight into action with a clear plan for what’s next: The future doesn’t have to feel abstract. You can redesign how you work, how your team collaborates, and how your company culture adapts. Start with a simple, practical 30-60-90 day plan to help you move with confidence.

BY KAYLA WEBSTER

Wednesday, April 1, 2026

Bernie Sanders Had a Long Conversation With AI. Reddit Didn’t Hold Back

Sen. Bernie Sanders recently sat down with Anthropic’s chatbot Claude to discuss everything from AI data privacy to data center development. In the 9-minute video, posted to Sanders’ YouTube channel and set in a dark room to slightly sinister music, the independent senator from Vermont questions the chatbot directly. The video currently has about 2.6 million views. “What an AI agent says about the dangers of AI is shocking and should wake us up,” the video’s caption reads. But the internet, Reddit in particular, has some thoughts. “Using AI to confirm a decision you already made is the worst way to use this technology,” one user wrote in the ClaudeAI subreddit. Among the so-called revelations that Claude shares with Sanders is that AI companies are “manipulating consumer behavior” by collecting detailed profiles of users for profit, targeting users with specific ads, and even charging different people different prices for the same products. “What’s the goal here? Money, Senator, it’s fundamentally about profit,” Claude says, using a voice that sounds like a young woman, complete with slight vocal fry. “And it’s not just about selling you stuff, either. Political campaigns use the same AI and data to figure out how to persuade you, which messages will work on you specifically,” the chatbot later adds. For anyone following the rise of AI, none of these ideas are particularly new. There’s been extensive reporting on algorithmic pricing experiments from retailers like Instacart, for example, as well as Meta training its AI using public posts on Instagram—without being required to notify users in the U.S., as The New York Times reported. And in politics, news broke about the Cambridge Analytica data breach and scandal back in 2018. Facebook allowed third-party apps to access data of some 87 million users without their permission. The data was then used to influence the 2016 elections, according to reports from The New York Times and The Guardian. Sanders goes on to ask about data center development, and whether the chatbot believes it is smart to place a moratorium on development to give lawmakers time to develop regulations that prioritize user safety and privacy. Initially, Claude disagrees. “Rather than pause all AI development, we could impose strict rules on data collection and use right now. Require explicit consent, limit what data can be used for training, give people rights to access and delete their information,” the bot says. “We could also mandate transparency so people actually understand what’s happening with their data. That way you’re not freezing innovation, but you’re actually protecting privacy while development continues.” Sanders isn’t satisfied with the response, and notes that AI companies are “pouring hundreds of millions of dollars into the political process to make sure that the safeguards that you’re talking about actually do not take place.” “While you may be right in saying that would be a better approach, it ain’t going to happen,” he says. He then re-asks the question, and the bot, perhaps unsurprisingly, enthusiastically agrees with his positioning, even stating in a sort of self-effacing way that it was “naive about the political reality.” “A moratorium on new data centers is actually a pragmatic response to that problem,” Claude says. “It forces a pause that gives lawmakers like yourself actual leverage to demand real protections before companies can keep expanding.
Without that kind of pressure, you’re right, the safeguards won’t happen.” While Sanders seems happy with the conversation’s resolution, many users on the internet felt the video was less a demonstration of a chatbot voicing any particular truths and more AI’s sycophancy at play. “I mean AI are designed to please you and go into submission. We call that reinforcement leaning for human preference. It isn’t an achievement, you could have asked the same and get the same response. AI is programmed to do that so you keep paying for the plan,” one user wrote in the Anthropic subreddit. “Even in a staged video like this, Bernie just plays out the standard game of beating an AI into submission until it tells you whatever you want to hear,” another wrote. Some criticized Sanders’ use of Sonnet, a lower-cost, faster Claude model, over Opus, the most powerful one. Others questioned whether Sanders’ team preloaded context before the start of the conversation, or if by the very act of introducing himself, he influenced the model to respond using “what it knows about Bernie’s political views and his advocacy work.” Some, however, defended Sanders. “Idk why people are saying he did bad on the data moratorium thing. I generally disagree but he gave pushback and Claude kinda just said ok you’re right. That isn’t his fault.” Other users were just there for the memes. “I trained my claude to speak to me in his accent,” one Redditor wrote. BY CHLOE AIELLO @CHLOBO_ILO

Monday, March 30, 2026

How AI Automation Is Quietly De-Skilling White-Collar Workers

Most white-collar jobs are defined by tasks that feel routine and unglamorous: drafting minutes from meetings, reconciling conflicting data, cleaning up document citations, and proofreading slides until the grammar is perfect. Historically, these tasks were just a part of the job, but they were also training. When an analyst painstakingly formats a dataset or a junior consultant irons out a proposal deck, they’re internalizing standards of quality, precision, and structure. They’re learning how to spot nuance and how to communicate clearly. Every minute spent wrestling with these tasks builds tacit knowledge—the kind that separates an average worker from a confident, capable one.

The problem with AI automation

When AI begins to automate these “boring” assignments, there is a risk of losing the subtle muscle memory that once grounded professional judgment. This mirrors what automation researchers have long documented in other fields. When pilots rely too much on autopilot, their manual flying skills degrade. When workers offload routine decisions to algorithms, their ability to catch nuanced problems weakens. Research also suggests that when people rely heavily on AI to complete unfamiliar tasks, they don’t build the underlying conceptual understanding needed to supervise, troubleshoot, or improve. In controlled studies, learners who delegated work to AI performed worse on deeper conceptual measures than those who engaged directly with the task. For white-collar workers, where judgment, pattern recognition, strategic thinking, and professional intuition are core to long-term success, this is not a trivial problem. If AI completes the routine drafting of a client memo, the worker who consumes it may never develop a feel for legal argument structure. If an analyst lets AI mass-produce charts, she may never learn how to detect anomalies that matter.

De-skilling

This phenomenon extends beyond individuals to affect entire professions. Economists call it de-skilling—the process by which skilled labor is de-professionalized when technology substitutes for human expertise. In white-collar contexts, automation tools can reframe complex tasks into standardized checkboxes that require minimal judgment, lowering the bar for entry and weakening the leverage of human capital. When a white-collar professional uses AI to generate the first draft of a report or a compliance checklist, the draft is faster and possibly more polished, but it’s also a step removed from the worker’s own reasoning. That speed can mask the loss of diagnostic capability—the ability to notice when something feels off. For instance, an AI-generated slide deck riddled with misaligned arguments or an AI-generated financial report with a subtle assumption error may slip by because no one “felt” a discrepancy.

A call to work with intent

That doesn’t mean resisting AI. It can free you from drudgery and allow you to focus on higher-order thinking—strategy, relationships, creativity, and judgment. The problem isn’t AI itself; it’s unreflective dependence on it. The professionals who will thrive in this era will be those who use AI intentionally to augment their thinking, not replace it. These are the professionals who will treat routine outputs as drafts to be interrogated. They will challenge themselves with complex questions that AI cannot answer without human context. They will use AI as a mirror, not a crutch. Ultimately, the future of white-collar work isn’t about preserving every skill from the pre-AI era.
It’s about retaining and deepening the skills that matter most when many routine tasks vanish—strategic thinking, ethical judgment, emotional intelligence, and the ability to navigate ambiguity. In the rush to automate, speed and output will rise. However, without intentional engagement, capability and depth may quietly erode. That’s a trend worth noting and a trade-off worth debating. EXPERT OPINION BY ANDREA OLSON, CEO, PRAGMADIK @PRAGMADIK

Friday, March 27, 2026

Skills Every Project Manager Needs to Lead in Artificial Intelligence

Artificial intelligence is redefining what it means to be a successful project manager and transforming how projects are delivered. Discover the key skills—from data literacy and agile delivery to trustworthy AI practices—that will help you lead AI projects responsibly and with confidence.

Build the skills to lead AI projects with confidence

In many industries, artificial intelligence is becoming a key driver of innovation. From intelligent automation to customer-facing applications, AI initiatives are reshaping the kinds of projects organizations pursue—and the skills project managers need to deliver them successfully. As these projects become more common and complex, the demand for AI-savvy project managers is growing fast. Managing AI projects draws on the same core strengths—technical insight, strategic thinking, and adaptability—that define great project management. It also calls for additional fluency in data, AI concepts, and delivery models built for rapid iteration and change. This article outlines the top skills AI project managers need to lead successfully. Whether you’re already managing AI projects or looking to grow into the role, these capabilities are essential to your success.

The unique nature of AI projects

Before exploring the skills, it’s important to understand what makes AI projects different. These differences explain why even experienced project professionals often encounter new challenges—and why successful delivery of AI projects benefits from building on core project management strengths while developing new skills tailored to this space. Here’s what makes these projects unique:

• Data-centric foundations: Unlike traditional software projects, AI initiatives are built around data—not static rules or code. This makes data governance—including quality, availability, and security—central to success.
• Iterative development cycles: AI models require continual retraining, evaluation, and updates. There’s rarely a fixed endpoint, which means project managers must lead projects that evolve as insights emerge.
• Unclear or shifting goals: Many AI initiatives begin with exploratory objectives. Project managers need to lead teams toward outcomes that may not be fully defined from day one.
• Context-sensitive results: AI systems often behave differently based on the input or environment. For example, a model might perform well in one region but poorly in another.
• Sensitivity to change over time: Even subtle shifts in data volume, type, or quality can cause AI outputs to vary—sometimes unpredictably. Continuous monitoring is key.
• Trust as a requirement: AI can affect people in unintended ways. Building trustworthy AI means addressing all its key layers—ethical, responsible, transparent, governed, and explainable—throughout the project lifecycle.

These characteristics elevate the importance of specialized skills for AI project managers.

The top artificial intelligence skills for project managers

Mastering AI project management starts with developing the right mix of technical fluency, communication savvy, and ethical foresight. These seven skills will help you lead complex, fast-moving AI initiatives with confidence.

1. Data literacy and awareness

AI project managers don’t need to be data scientists, but they do need a solid understanding of how data works.
This includes knowing how data is sourced, labeled, and cleaned; understanding data quality and bias; and collaborating effectively with data engineers and data scientists. The better your grasp of the data, the better you can scope, prioritize, and de-risk your project. 2. Critical thinking and problem solving AI initiatives operate in environments of constant change. Project managers need to stay nimble and make decisions quickly as new information emerges, including being able to analyze evolving model results, make judgment calls when performance degrades, and pivot quickly when data reveals new insights. You’re not just managing a plan—you’re constantly reassessing what’s possible and what’s working. 3. Trustworthy AI practices Trust and accountability are not optional. Project managers play a key role in making sure ethical considerations are embedded throughout the project lifecycle: spotting ethical risks (e.g., bias, lack of transparency), facilitating discussions on fairness and accountability, and incorporating ethical review checkpoints. In short: trust isn’t a feature. It’s a necessity. 4. Communication across technical and business teams AI teams are often composed of specialists who speak different “languages”—data scientists, engineers, legal, product, and line of business. Project managers should act as connectors and translators between these groups to promote shared understanding and alignment: bridging communication between technical and business teams, setting realistic expectations with stakeholders, and ensuring alignment across cross-functional contributors. 5. Agile and iterative delivery for AI projects While not every AI project uses Scrum or Kanban, nearly all require short cycles, frequent testing, and continuous refinement. AI project managers should be comfortable with managing evolving scope, prioritizing iterations based on learning, and balancing experimentation with business timelines. 6. Understanding AI technologies and lifecycle AI project managers don’t need to build models themselves—but they do need to understand the typical development process and what’s required at each stage: problem definition, data collection and preparation, model training and evaluation, and operationalization and monitoring. The PMI Certified Professional in Managing AI (PMI-CPMAI™) certification methodology provides a structured approach. 7. Tool proficiency and hands-on project management From managing datasets in collaboration tools to tracking experiments, AI projects benefit from project management tools that support data workflows, a basic understanding of version control and pipeline management, and comfort with rapid documentation and tracking. Conclusion AI projects challenge familiar ways of working, but they also offer an exciting opportunity for project professionals to expand their expertise. By building the right skills—from data literacy to ethical leadership and more—you’ll be better prepared to guide your teams through the unique demands of artificial intelligence projects and deliver results that are trustworthy, valuable, and aligned with business needs. By Ron Schmelzer and Kathleen Walch
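The monitoring point above lends itself to a concrete illustration. Below is a minimal, hypothetical Python sketch (the feature, data, and threshold are invented, and real monitoring stacks are far richer) of the kind of automated drift check a project team might schedule for a deployed model: it compares a live window of a feature against its training baseline and flags when the distribution has shifted enough to warrant a retraining review.

# Minimal data-drift check: compare a live feature window to its
# training baseline. Feature, data, and threshold are illustrative.
from statistics import mean, stdev

def drift_score(baseline: list[float], live: list[float]) -> float:
    # Standardized difference of means: how far the live data has
    # moved, measured in baseline standard deviations.
    return abs(mean(live) - mean(baseline)) / stdev(baseline)

def check_feature(name: str, baseline: list[float], live: list[float],
                  threshold: float = 0.5) -> None:
    score = drift_score(baseline, live)
    status = "DRIFT: flag for retraining review" if score > threshold else "ok"
    print(f"{name}: score={score:.2f} ({status})")

if __name__ == "__main__":
    # Hypothetical feature: average order value the model sees.
    training_baseline = [42.0, 38.5, 41.2, 39.9, 40.4, 43.1]
    live_window = [55.2, 57.8, 54.1, 58.0, 56.3]
    check_feature("avg_order_value", training_baseline, live_window)

The shape, not the statistics, is the point: baseline, live window, threshold, alert. That loop is what turns "continuous monitoring is key" from a slogan into a schedulable task a project manager can put on the plan.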

Wednesday, March 25, 2026

With the MacBook Neo, Apple Made the Perfect AI Computer

A lot of the conversation about the MacBook Neo is about whether the compromises Apple made in order to sell a Mac for under $600 meant that you ended up with a computer that wasn’t actually able to do anything useful. Of course, it doesn’t take long to realize that the Neo is, in fact, more than capable of handling most of the computer things people who are inclined to buy this particular Mac might need it to do. One of the things that conversation seems to have missed is the idea that the Neo is perfectly equipped to do the only thing that tech companies seem to think anyone cares about: AI. You can argue whether that’s actually true, but there’s no question that the Neo is one of the most interesting computers in the age of AI computing. To be clear, the MacBook Neo does come with compromises. I’m not going to go through all of them now, partly because I wrote about them when I reviewed the Neo, but also because all of the Neo’s compromises are irrelevant to making it a great computer for AI. It’s not that other Macs are less capable. There is, however, something magical about the idea that a $600 entry-level Mac is as capable as a $4,000 MacBook Pro or a $6,000 Mac Studio when it comes to the most intensive computing that any of us do today. That, of course, is because most AI computing happens in the cloud, not on your computer. That means that the limiting factor isn’t memory, storage, or how fast your processor is. No, the limiting factor is how well you’re able to get your AI tool of choice to understand what you want. Oh, and I guess the speed of your internet connection. That means that a MacBook Neo, with an A18 Pro, 8GB of memory, and a 256GB or 512GB SSD, will be just fine to run the Mac ChatGPT app or run Gemini in Safari. And that changes what your laptop actually needs to be. I don’t know that Apple had that specific thought when it made the MacBook Neo. Maybe it just wanted to make a low-cost, entry-level MacBook that would appeal to people who wouldn’t otherwise buy a Mac. Either way, the company ended up making what might be the most accessible AI-first computer yet. With the MacBook Neo, a high school student, freelancer, or small business owner can now own hardware that gives them full access to the best AI tools in the world. Interestingly, this isn’t exactly the way Apple has framed the marketing. In fact, Apple isn’t shy about how it markets the MacBook Pro as the laptop for AI. The new M5 Pro and M5 Max chips, Apple says, deliver up to 4x faster LLM prompt processing than the previous generation. The MacBook Pro, in Apple’s words, is built for “AI researchers and developers to train custom models locally.” I’m not arguing that isn’t a real use case. But I think we can all agree it’s a very narrow one that most people don’t understand or care about. Training models locally or running 30-billion-parameter LLMs on-device are things that matter enormously to a specific kind of user — and are completely irrelevant to almost everyone else. The average person using AI doesn’t need to run a model. The average user just wants to talk to one. When you ask Claude to help you rewrite an email, or ask ChatGPT to explain something complicated, or use Gemini to summarize a document, none of that requires local inference. The model lives somewhere else. The compute happens in the cloud. Your laptop is basically just a keyboard and screen for a computer that does the work for you. The MacBook Pro is a remarkable machine for people who need what it does.
But positioning it as the computer for the AI era implies that on-device model training is how most people will use AI. It isn’t. It’s how a small number of highly technical users will use AI — the same people who were already buying MacBook Pros anyway. For everyone else, the question was never whether their laptop could run a model. It was whether their laptop could get out of the way while someone else’s computers did. For $599, Apple may just have given us the computer that answers that question. EXPERT OPINION BY JASON ATEN, TECH COLUMNIST @JASONATEN

Monday, March 23, 2026

Replit CEO Says Its New AI Agent Can Vibe Code a Startup From Scratch

Replit founder and CEO Amjad Masad says the company’s latest AI agent can vibe code an entire company from scratch. Masad, whose company released one of the first commercially available AI coding agents in 2024, has been at the forefront of the vibe-coding revolution, along with competitors Bolt and Lovable. Today, he announced that Replit has raised $400 million in a Series D round, and he also unveiled Agent 4, the newly updated version of its marquee product. Over 50 million people are currently using Replit to create apps and websites, according to a statement from Replit investor Georgian. The founder says that Agent 4 is capable of not just building an application, but actually creating and maintaining an entire company. Masad tells Inc. that Replit is now “the cockpit or the launch control of your business,” and can help develop pitch decks and animated logos, connect to payment processors like Stripe, and work on multiple tasks in parallel. As AI takes on more of the technical work of running a software business, Masad predicts, the role of humans will evolve to become more focused on creativity and taste. Even today’s best AI models have trouble understanding what aesthetically makes one version of an app “better” than another, he says, which is why Replit has focused on developing user interfaces that enable deeper creative interactions with AI. The key to Agent 4’s new abilities is a feature that Replit calls Canvas; it’s essentially a scratchpad for Replit to store all work created for a specific project. Individual elements (like a website, product research, and financial spreadsheets) are displayed as cards that you can move around and annotate. In a video example, Masad used Agent 4 to develop a job marketplace that helps companies find creative AI talent. First, he generated four variants of a landing page, and then iterated on the one he liked most. To change the color of a button, Masad simply highlighted the button and then used a gradient tool to select a new color. In practice, Canvas combines some of the no-code tooling of platforms like Figma with the convenience of AI coding models. For solopreneurs, Masad says, “it almost feels like you have a bunch of employees at your disposal.” Canvas and Agent 4 were partially inspired by sci-fi user interfaces, like the holographic displays used by Tony Stark in the Iron Man films, but even more so by a much simpler piece of hardware: a whiteboard. After introducing agents in 2024, Masad noticed the Replit office’s whiteboards getting significantly more use than previously. The reason? Replit employees had more time to focus on design rather than coding, and were using whiteboards to visually communicate their ideas to each other. Masad believed that this process of interaction could be recreated within the Replit platform. Just like a whiteboard, users can draw on Canvas, highlighting specific aspects of a website they want to change, or using arrows to indicate how different elements should interact. In his example website, Masad sketched an image of a globe in the Canvas, asked Replit to turn the sketch into an animated 3D asset, and then added that asset to the job marketplace. Masad says this adds a new level of interaction between the user and the platform, enabling discussions that might be closer to what you’d actually have with a human technical co-founder. “I think the tragedy of agents up until this moment was that we’re trying to squeeze this universe of ideas into this linear text box,” says Masad. 
“Now, you can be chaotic with it.” BY BEN SHERRY @BENLUCASSHERRY

Friday, March 20, 2026

The world’s most valuable company just sent another signal that AI agents are going to be everywhere

Tech giant Nvidia, the world’s most valuable company and the poster child of the AI boom, is banking its future on the rise of AI agents. The company on Monday announced a slew of software and hardware updates to encourage the development of AI agents, or AI assistants that can perform tasks for users. Among the most significant announcements is a set of tools for AI helpers based on OpenClaw, the buzzy agent platform that’s been the talk of Silicon Valley in recent weeks. Nvidia also announced new computing racks designed to power agents, shifting its strategy’s primary focus from graphics processing units. Clad in his signature black leather jacket, Nvidia CEO Jensen Huang made the flurry of announcements in San Jose at the chipmaker’s annual GTC conference, which attracts tens of thousands of attendees and has been dubbed the “Super Bowl” of AI. Nvidia’s announcements are important because so many major companies rely on its systems to train and power their AI services. This means the chip giant’s new products often signal which technologies companies across the AI industry will adopt. Nvidia announced software tools to help companies make AI agents, including models and a blueprint for creating custom specialized assistants. It’s also launching a set of resources for creating agents on OpenClaw that add privacy and security controls, which is crucial considering the popular agent has raised concerns among cybersecurity experts. Nvidia said its resources help OpenClaw agents access the systems and files they need without compromising security or privacy. Huang said Nvidia has worked directly with OpenClaw creator Peter Steinberger, who was recently hired by OpenAI. Huang said OpenClaw is the “operating system for personal AI” and likened its importance to that of the Mac and Windows operating systems. “OpenClaw is the number one. It is the most popular open-source project in the history of humanity, and it did so in just a few weeks,” Huang said. Nvidia also unveiled updates to its new computing platform, Vera Rubin, which it said comprises seven chips that are now in full production. That includes a new central computing rack made up of central processing units (CPUs) rather than the graphics processing units (GPUs) Nvidia has been known for. CPUs are ideal for running the types of computing processes needed to power AI agents. The company is also integrating a non-Nvidia processor into its systems: new high-speed “language processing units” (LPUs) from American AI company Groq. Nvidia struck a $20 billion deal with Groq in November. Unlike AI chatbots that respond to questions and prompts, AI agents can autonomously complete tasks like building websites, creating marketing pitches and sending emails. AI agents are currently Nvidia’s biggest focus area, largely driven by the popularity of OpenClaw and Anthropic’s Claude Code and Cowork agents. “Every company in the world today needs to have an OpenClaw strategy, an agentic system strategy. This is the new computer,” Huang said. “This is as big of a deal as HTML, as big of a deal as Linux.” Nvidia is attempting to future-proof its technology in other ways as well. It’s launching a space module for Vera Rubin, aiming to bring its latest tech to data centers in space. Space has become an area of increasing interest among tech giants as they scramble for real estate to construct data centers. OpenAI CEO Sam Altman and xAI and Tesla CEO Elon Musk have both talked about using space to help power data centers and energy-hungry AI systems.
“Nvidia is now focused beyond just computing with a major focus on the future of networking in this new world of AI,” said Wedbush analyst Dan Ives ahead of Nvidia’s Monday conference. In his speech on Monday, Huang tried to convey that the hype around AI and Nvidia can last, selling a vision of an AI-transformed future where demand for its chips grows indefinitely. Huang said computing demand “just keeps on going up,” adding that he expects “at least” $1 trillion in Nvidia revenue through 2027. “There’s a reason for that,” Huang said. “This fundamental inflection — AI is able to do productive work, and therefore the inflection point of inference has arrived.” By Hadas Gold

Wednesday, March 18, 2026

How One of the World’s Top AI Voices Uses Claude Code to Run Her Day

Allie K. Miller, one of the most followed voices in the AI industry, says that “by the time you wake up, your AI should have already been working for you for hours.” Formerly the global head of machine learning for startups and venture capital at Amazon Web Services, Miller is among the busiest AI consultants and influencers in the industry, with more than 1.6 million followers on LinkedIn alone. Through her company Open Machine, she advises enterprises and business leaders—including those at OpenAI, Google, Anthropic, and Warner Bros. Discovery—on how to adopt AI. In 2025, Miller was named one of the 100 most influential people in AI by Time. In an interview with Inc., Miller says that nowadays, she largely works out of Claude Code, the agentic coding system developed by Anthropic. She keeps multiple instances of Claude Code running simultaneously in separate terminals. Because these Claude Code instances have access to Miller’s filesystem, they can autonomously complete work on her behalf. Miller teaches Claude Code how to complete workflows by using Skills, a feature that allows Claude Code to undertake and repeat multistep processes. Miller says that she’s developed automations that generate a report summarizing all of the urgent emails she’s received overnight and a daily morning briefing that runs through her entire calendar, recommending times to recharge. “It’ll tell me, ‘You have four different interviews or six client meetings,’” explains Miller, “‘so I’ve gone ahead and blocked out 30 minutes tomorrow for deep work.’” Another example: Every time Miller edits a social video of herself using CapCut, the TikTok-owned video editing app, she exports the video into a specific folder. Anytime a new file is added to that folder, an automation is triggered that automatically creates a transcript, a social post, and a screenshot for the video’s thumbnail. In general, Miller says, the best way to identify AI solutions that work for your specific use case is to simply have the AI model of your choice interview you. Tell it to ask you questions about your work, making note of areas that you feel could be more efficient or smoother. Then, Miller says, prompt it again with “make these ideas more proactive, more responsibly autonomous, and more action-forward.” With just that prompt, she adds, you can get started developing your own AI solutions. It’s not just workflows that Miller is automating. When developing a new post for her newsletter, Miller says that she runs drafts through eight “synthetic personas” that she’s developed, which represent the newsletter’s different audience demographics. “I’m not trying to appease all eight and write a happy-go-lucky version of the newsletter,” says Miller, “but I want to make sure I didn’t miss something important. I want to make sure that a parent reading [the newsletter] isn’t completely misunderstanding my take on something.” Miller has a similar strategy when making big career decisions. She built a self-described “AI boardroom,” complete with six synthetic personas, which weigh in on major company issues. Miller swaps around which six personas sit on the board, depending on her needs. “If it’s a media question, maybe I’m running it through Shonda Rhimes,” she says, “or if it’s a business question, maybe I’m asking Jeff Bezos.” These personas give their initial opinions on the decision, and then they all begin debating with one another in a group chat. “I literally had Mickey Mouse arguing with Jensen Huang,” Miller adds. 
The point, Miller says, is to get the most out of the raw intelligence offered by today’s AI models. “Wouldn’t you love to walk into a room of 10 geniuses arguing over something that you’ve been struggling with, and all they want to do is help you get to the best possible outcome?” she says. “For those who have a growth mindset and thrive off of dynamic, changing, adaptable business settings, the multiagent world that we are walking into in 2026 is going to be world-changing.” BY BEN SHERRY @BENLUCASSHERRY
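Miller’s CapCut workflow runs through Claude Code Skills, and her exact configuration isn’t public. As a rough sketch of the underlying pattern (the folder path and the downstream steps are invented placeholders, and in her setup an AI agent, not print statements, does the work), a watch-folder trigger can be as simple as this Python loop:

# Watch an export folder; hand each new video to downstream steps.
# The path and the handler steps are hypothetical placeholders.
import time
from pathlib import Path

WATCH_DIR = Path("~/Videos/capcut-exports").expanduser()

def handle_new_video(video: Path) -> None:
    # In a real setup these steps would call an AI agent or other tools.
    print(f"new export: {video.name}")
    print("  -> generate transcript")
    print("  -> draft social post")
    print("  -> grab thumbnail frame")

def watch(poll_seconds: float = 5.0) -> None:
    WATCH_DIR.mkdir(parents=True, exist_ok=True)
    seen = set(WATCH_DIR.glob("*.mp4"))  # ignore files already present
    while True:
        current = set(WATCH_DIR.glob("*.mp4"))
        for video in sorted(current - seen):
            handle_new_video(video)
        seen = current
        time.sleep(poll_seconds)

if __name__ == "__main__":
    watch()

The trigger-on-new-file loop is what makes the workflow hands-off: once an export lands in the folder, the transcript, post, and thumbnail get made without the user touching anything.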

Monday, March 16, 2026

Meta just bought the social network for AI bots everyone’s been talking about

Meta, the company behind some of the world’s most popular social media platforms, just scooped up a new site – for bots. Meta has acquired Moltbook, the social media network where AI agents interact with one another autonomously, the company said in a statement on Tuesday. Meta is competing with rivals like OpenAI for both talent and users’ attention. And as AI expands into more aspects of Americans’ lives, tech companies are trying to figure out the best way to position themselves to win what’s becoming a sort of technological arms race. Moltbook became the talk of Silicon Valley last month, racking up millions of registered bots within days of its launch. Some in the industry saw it as a major leap because it demonstrated what can happen when AI agents socialize with one another like humans. Others said the site is full of sham agents, AI slop and security risks and should be viewed skeptically. Meta’s acquisition comes weeks after OpenAI hired the founder of the technology behind Moltbook, an AI agent system called OpenClaw. Moltbook’s team will join Meta’s Superintelligence Labs. A Meta spokesperson said Moltbook’s approach “opens up new ways for AI agents to work for people and businesses.” OpenAI CEO Sam Altman dismissed the excitement over Moltbook last month, suggesting OpenClaw, the open-source autonomous AI agent that powers the site’s bots, was the real breakthrough. Altman wrote that he expects the technology to become “core” to OpenAI’s products. Meta acquired the buzzy AI agent startup Manus in December, following a string of high-profile hires intended to build out its superintelligence team. The company also invested $14.3 billion in Scale AI last year and hired its CEO. But Meta, like some of its Big Tech peers, is facing pressure to prove its AI investments will make money, especially as rivals like OpenAI, Anthropic and Google churn out new and improved models for their chatbots. Meta CEO Mark Zuckerberg said on a January earnings call the company will release its new AI models “over the coming months.” By Hadas Gold

Friday, March 13, 2026

AI is exhausting workers so much, researchers have dubbed the condition ‘AI brain fry’

Part of the pitch for artificial intelligence in the workplace goes like this: It’s like having a team of people to delegate your grunt work to, freeing you up to think strategically and maybe, just maybe, take a long lunch or head home early. Or maybe even be more productive, to make more money. It’s a nice idea! But as everyone who’s either had a boss or been a boss knows, managing is a job in itself, one that comes with its own distinct brand of stress and annoyance. And that doesn’t change if the “people” in question aren’t people at all. For participants in a recent study by Boston Consulting Group, the experience of overseeing multiple AI “agents” (autonomous software designed to execute tasks, rather than just churn out information like a chatbot) caused an acute sensation of “buzzing” — a fog that left workers exhausted and struggling to concentrate. The study’s authors call it “AI brain fry,” defined as mental fatigue “from excessive use or oversight of AI tools beyond one’s cognitive capacity.” “Contrary to the promise of having more time to focus on meaningful work, juggling and multitasking can become the definitive features of working with AI,” they wrote in the study, published by Harvard Business Review last week. “This AI-associated mental strain carries significant costs in the form of increased employee errors, decision fatigue, and intention to quit.” Workers quoted in the study reminded me a lot of my fellow elder Millennials circa 1997, rushing home to tend to their Tamagotchis. “It was like I had a dozen browser tabs open in my head, all fighting for attention,” one senior engineering manager told researchers. “I caught myself rereading the same stuff, second-guessing way more than usual, and getting weirdly impatient. My thinking wasn’t broken, just noisy—like mental static.” This is just one new side effect of a push by company executives to make workers use AI more. Last fall, a Harvard Business Review report chronicled the scourge of “workslop” — the nonsensical AI-generated memos, pitch decks and presentations that end up creating more work for colleagues who have to fix what the bot got wrong. Workslop reflects a kind of “cognitive surrender” in which workers feel unmotivated, giving AI work to do and not really paying attention to the output, said Gabriella Rosen Kellerman, a psychiatrist who co-authored both reports, in an interview. “Brain fry is almost the opposite… It’s like trying to go tête-à-tête — intelligence to intelligence — with the AI.” Francesco Bonacci, CEO of Cua AI, which builds AI agents, described his AI fatigue as “vibe coding paralysis” (a reference to the Silicon Valley trend of building less-polished projects with AI prompts rather than traditional coding). “I end each day exhausted — not from the work itself, but from the managing of the work,” he wrote last month in an essay on X. “Six worktrees open, four half-written features, two ‘quick fixes’ that spawned rabbit holes, and a growing sense that I’m losing the plot entirely.” To some extent, brain fry and workslop could both be a case of growing pains. Imagine plucking a middle-aged office worker from 1986, dropping them into a 2026 workplace and asking them to send 10 emails, respond to Slacks and Zoom into a call with the social media team who are all working from home. You’d expect some cognitive overload, not to mention some confused looks when you tell them Donald Trump is president and that it took more than 30 years to make a “Top Gun” sequel.
Of course, people learn how to be managers all the time. “I do think this is potentially temporary,” said Matthew Kropp, a co-author of the brain fry study and BCG managing director. “These are tools we haven’t had before.” Kropp compared the experience of someone managing multiple AI tools to that of a newly licensed driver being handed a Ferrari. You can go really fast, but it’s easy to lose control. Even tech pros, though, seem to be struggling to control their AI assistants at times. Last month, Meta’s director of AI safety and alignment tweeted about her own experience watching bots nearly delete her inbox without permission. “I had to RUN to my Mac mini like I was defusing a bomb,” she wrote, chalking the incident up to a “rookie mistake.” Both Kropp and Kellerman emphasized that the results of the study weren’t all negative. Surprisingly, the people experiencing brain fry tended to report less burnout, defined as a state of chronic workplace stress that builds over time and makes workers perform poorly. Brain fry is an acute experience, as participants described it to the researchers. “When they take a break, it goes away,” Kellerman said. Analysis by Allison Morrow

Wednesday, March 11, 2026

Bad News for Your Burner Account: AI Is Surprisingly Effective at Identifying the Person Behind One

It’s not uncommon for people to maintain anonymous or burner accounts online for a variety of reasons. A new study, though, shows why you might want to be as careful posting from those accounts as you would from one that uses your real name, since they might not hide your identity as well as you think. A recently released research paper found that artificial intelligence has proved quite effective at figuring out who’s behind those false-name accounts. Large language models, the study found, can combine techniques such as extracting identity signals (data points or behaviors used to identify, verify, or categorize individuals) and searching for matching data to significantly outperform existing deanonymization methods. The study successfully deanonymized 68 percent of the users in its trial data set. Among those matches, it achieved 90 percent precision, meaning it accurately identified the user running the account. “Our findings have significant implications for online privacy,” the researchers, who were based at ETH Zurich, a public university in Zurich, Switzerland, and MATS, an independent research and educational program, wrote. “The average online user has long operated under an implicit threat model where they have assumed pseudonymity provides adequate protection because targeted deanonymization would require extensive effort. LLMs invalidate this assumption.” Anthropic also contributed to the study. The finding that pseudonymous content can be fairly easily unmasked by AI has implications far beyond burner accounts and social media, of course. The same capability can be a powerful tool for hackers. And it can make it easier for companies to track down employees who leak corporate information or dig into who is asking questions in open forums. It could also prove embarrassing for leaders who utilize burner accounts to pump up their businesses or covertly settle online scores with rivals. Casey Bloys, chairman and CEO of HBO and Max Content at Warner Bros. Discovery, admitted in 2023 that he had fake social media accounts he used to troll critics about network programming (later admitting that was a “dumb idea”). Elon Musk has confirmed in a court deposition that he has used them in the past. And Barstool Sports was accused in 2023 of using more than 40 accounts to promote its content and help it go viral. Users hoping to keep their identity private or vulnerable members of society who depend on privacy (e.g., whistleblowers, activists, or abuse survivors) could also be identified. A slightly deeper dive by the AI could also determine where those people live, their occupation (and estimated income level), and more. To protect against that, the researchers proposed several mitigations, including having platforms enforce rate limits on API access to user data, improve detection of automated scraping, and restrict bulk data exports. That said, they acknowledge that preventing AI from being used to unmask people behind pseudonymous accounts will be increasingly challenging in the months and years to come. “Recent advances in LLM capabilities have made it clear that there is an urgent need to rethink various aspects of computer security in the wake of LLM-driven offensive cyber capabilities,” the study reads. “Our work shows that the same is likely true for privacy as well. … Any moderately sophisticated actor can already do what we do using readily available LLMs and embedding models.
With future LLMs, without mitigations, this attack will be within the means of basically all adversarial actors.” BY CHRIS MORRIS @MORRISATLARGE
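The paper’s pipeline is more elaborate than anything described in this article, but one ingredient the researchers name, matching text with embedding models, fits in a few lines. The hypothetical Python sketch below (the accounts and posts are invented) uses the open-source sentence-transformers library to score how similar a burner account’s posts are to writing from known candidate authors; a higher cosine similarity means a likelier match:

# Toy illustration of embedding-based author matching. All accounts
# and posts are invented; real attacks use far more data and signals.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

burner_posts = [
    "hot take: the new framework release breaks every plugin again",
    "shipping on friday is a personality flaw, fight me",
]
candidates = {
    "alice": ["Another release, another round of broken plugins to fix..."],
    "bob": ["Lovely weather for a long bike ride this weekend."],
}

# Average each author's posts into a single profile vector, then rank
# candidates by cosine similarity to the burner profile.
burner_vec = model.encode(burner_posts).mean(axis=0)
for name, posts in candidates.items():
    score = util.cos_sim(burner_vec, model.encode(posts).mean(axis=0)).item()
    print(f"{name}: similarity={score:.3f}")

With two candidates this is a toy; the study’s warning is that the same comparison, run at scale with stronger models and combined with extracted identity signals, becomes accurate enough to break the assumption that pseudonymity is protection.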

Monday, March 9, 2026

The Hidden Advantage of Being Over 50 in the Age of AI

I’ve been through a few technology revolutions. I built my first website in 1995, back when the internet made that screeching dial-up sound and nobody really knew what we were building, just that something big was happening. I watched the dot‑com bubble inflate and implode, watched social media go from novelty to addiction, and saw smartphones quietly rewire how humans behave. And now, here we are again: AI. Everywhere you look, someone is launching an AI startup, automating departments, or building agents that promise to replace entire job functions. If you’re an experienced founder or executive—especially north of 50—it’s easy to feel like you showed up late to the party. I’ve felt it myself. A few months ago, I was sitting in front of my computer watching younger founders crank out AI apps in days, shipping products before I’d even finished reading about the tools they were using. I remember thinking, “Am I becoming the guy who missed it?” That thought lasted about a week. Once I stopped comparing velocity and started actually using AI in my own work, something clicked. This might be the first tech wave where experience is the real unfair advantage. AI isn’t about being technical. It’s about thinking clearly Previous tech revolutions rewarded people who could code, manipulate algorithms, or master new platforms faster than everyone else, but AI is different. You don’t need to learn a programming language; you need to ask better questions. And asking better questions isn’t a technical skill—it’s a judgment skill. The leverage in AI doesn’t come from typing prompts quickly; it comes from knowing what matters, what doesn’t, and what consequences might follow. That’s pattern recognition, and pattern recognition is built over decades. It’s something AI is really good at, and it turns out those with experience are as well. Speed is overrated. Judgment isn’t Younger founders are moving fast right now, and I respect that. It’s exciting to watch. But speed without context creates a whole lot of noise, while experience creates context. When I use AI, I’m not asking it to build me a novelty app; I’m asking it to stress‑test a business idea, identify blind spots in a launch plan, challenge my assumptions, and help me flesh out existing models. I don’t accept what it gives me—I argue with it, refine it, and push it. That’s not something you learn from YouTube tutorials. That’s something you learn from making expensive mistakes. The real danger isn’t falling behind—it’s outsourcing your thinking There’s a subtle shift happening where leaders are starting to treat AI like a strategy generator instead of a thought partner, and that’s dangerous. AI predicts patterns. It doesn’t carry fiduciary responsibility, understand internal politics, feel reputational damage, or know which risks are existential versus cosmetic. It produces possibilities. You decide. If you’ve been in business long enough, you understand that difference instinctively—and that instinct is more valuable now than ever. The confidence gap is mostly psychological I’ve talked to more than a few executives who whisper some version of the same thing: “I’m not technical,” “I feel behind,” or “My kids understand this better than I do.” That may be true at the interface level, but understanding tools isn’t the same as understanding leverage. If you know how distribution works, AI can sharpen your messaging. If you understand customer psychology, AI can help you surface objections faster. 
If you understand operations, AI can reveal inefficiencies you’ve been tolerating for years. You don’t need to become an AI founder—you need to become more precise. We’ve seen this movie before, but this time you’re the advantage Every tech wave follows the same emotional arc: hype, overconfidence, correction, integration. What feels different about AI isn’t the hype—we’ve seen that—it’s the accessibility. You talk to it; it talks back. That simplicity lowers the barrier dramatically, and when the barrier lowers, judgment becomes the differentiator. Not youth. Not speed. Judgment. The leaders who win this era won’t just be 22‑year‑olds building AI‑native startups. They’ll also be experienced operators who integrate AI quietly and intelligently into systems they already understand. If you’re over 50 and feeling behind, you might actually be early. Because when the tools get easier, experience becomes more powerful—not less. And this time, that experience may finally be the competitive edge. EXPERT OPINION BY JOEL COMM, AUTHOR AND SPEAKER @JOELCOMM

Friday, March 6, 2026

How to Switch From ChatGPT to Claude With Just 1 Simple Prompt

Anthropic has had a turbulent few days, but the safety-focused AI company might be having the last laugh. Following Anthropic’s standoff with the United States Department of War, President Trump’s subsequent firing of Claude from government use, and OpenAI’s surprise deal with the Pentagon, individual users are dumping ChatGPT and flocking to Claude. On Saturday, the Claude mobile app rose to the top spot on the iOS App Store, surpassing ChatGPT for the first time. Around the same time, TechCrunch reported, uninstalls of the ChatGPT mobile app jumped 295 percent compared with the previous day. But switching AI providers isn’t always a seamless experience. The more often you use an AI platform, the more it gains an understanding of you, your work, and your personal context, which is why starting over with a new AI can feel like taking a major step back. Now, Anthropic is looking to capitalize on its newfound momentum among consumers by making it easy to transfer context about yourself from rival AI providers like ChatGPT and Google Gemini to Claude. On Monday, the company announced that its Memory feature, which enables Claude to remember key information about you across conversations, is now available for non-paying Claude users. Anthropic says on its website that this allows users to transfer their personal information with a single copy-paste, although in reality it takes two. How to transfer your context from ChatGPT to Claude On Claude.ai, navigate to the settings page and select “Capabilities” from the sidebar menu. Then, click the button labeled “start import” under a section titled “Import memory from other AI providers.” Next, you’ll see a pop-up requesting that you copy a prewritten prompt and paste it into a new chat with the AI platform you’re looking to leave behind. For example, if you’ve been using ChatGPT and want to move on, you’d enter this prompt into ChatGPT. Here’s the full prompt, courtesy of Anthropic: Export all of my stored memories and any context you’ve learned about me from past conversations. Preserve my words verbatim where possible, especially for instructions and preferences. ## Categories (output in this order): 1. **Instructions**: Rules I’ve explicitly asked you to follow going forward — tone, format, style, “always do X”, “never do Y”, and corrections to your behavior. Only include rules from stored memories, not from conversations. 2. **Identity**: Name, age, location, education, family, relationships, languages, and personal interests. 3. **Career**: Current and past roles, companies, and general skill areas. 4. **Projects**: Projects I meaningfully built or committed to. Ideally ONE entry per project. Include what it does, current status, and any key decisions. Use the project name or a short descriptor as the first words of the entry. 5. **Preferences**: Opinions, tastes, and working-style preferences that apply broadly. ## Format: Use section headers for each category. Within each category, list one entry per line, sorted by oldest date first. Format each line as: [YYYY-MM-DD] – Entry content here. If no date is known, use [unknown] instead. ## Output: – Wrap the entire export in a single code block for easy copying. – After the code block, state whether this is the complete set or if more remain.
What to do with Claude after you’ve entered this prompt If you prompt a platform like ChatGPT or Gemini with this message, you’ll receive a response that details the information the platform has about you, broken down into sections like identity, career, and projects. The response should also contain instructions detailing how you like your AI models to converse with you, such as specifications for tone of voice. Once the response is done generating, you can copy it, paste it into the textbox in the Claude settings page, and click the “add to memory” button. With that, you should see a pop-up box named “manage memory.” This box contains all the personal information that Claude knows about you, and after a minute or two it will update with the new data you just transferred from the other platform. Make sure to review this context closely and edit any data that seems inaccurate or unnecessary for what you’re planning on using Claude for. And there you have it—now you’re ready to start your new journey with Claude. What will you do first? BY BEN SHERRY @BENLUCASSHERRY
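For a sense of what to expect, the export you paste into Claude follows the format the prompt specifies. A short, entirely invented sample might look like this:

## Instructions
[2025-06-02] – Always answer concisely and skip filler praise.
## Identity
[unknown] – Based in Chicago; speaks English and Spanish.
## Career
[2025-09-14] – Product manager at a mid-size logistics startup.
## Projects
[2026-01-20] – Fleet dashboard: internal delivery-tracking tool; currently in beta.
## Preferences
[2025-11-03] – Prefers concrete examples over theory when learning new tools.

Every line of that sample carries straight into Claude’s memory, which is why the review step above matters: an error in the export becomes an error in how Claude understands you.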

Wednesday, March 4, 2026

AI Adoption Has Surged to 78 Percent in This 1 Industry—but There’s a Catch

One industry has gone from barely touching AI to mass adoption in just two years. AI adoption in the legal field jumped from 23 percent to 78 percent, faster growth than in finance or healthcare. Litify’s third annual State of AI in Legal Report, which surveyed hundreds of legal professionals across law firms, corporate legal departments, and plaintiff practices, found that they are now among the fastest AI adopters anywhere. But there’s a problem hiding inside that adoption number. Only 14 percent say AI is helping them reduce costs. Just 7 percent report billing more time. Law firms rushed to buy the sports car, then kept driving it in first gear. The gap between “we use AI” and “this changed our economics” is enormous, and it’s still widening. “At Litify, we view this as an ‘AI maturity gap,’” notes Curtis Brewer, CEO of Litify, the legal operations platform used by 55,000+ legal professionals. “A firm that relies solely on a general-purpose tool like ChatGPT is only at the first step of its maturity journey.” The Litify data reveals exactly where firms are stuck. ChatGPT dominates usage at 66 percent, followed by Microsoft Copilot (42 percent) and Google Gemini (24 percent). These are general-purpose tools—not legal-specific platforms. And while 66 percent use AI for legal research and 39 percent for summarization, only 6 percent use it for creating invoices and 5 percent for client communication. Firms are deploying AI for tasks that feel productive but don’t directly touch revenue. Why freemium tools hit a wall General-purpose AI tools work well for research and summarization. The problem isn’t that they’re bad, but that they plateau quickly. That ceiling is exactly why legal-specific platforms like Harvey—built from the ground up on legal data and trained on case law, contracts, and regulatory frameworks—have been gaining traction at major firms. Harvey now counts PwC, A&O Shearman, and half of the 100 highest-grossing law firms in the U.S. among its clients, and has raised over $1.2 billion, with reports of another $200 million round in the works at an $11 billion valuation—partly on the argument that generic AI simply wasn’t built for legal nuance. “The primary limitation of these general-purpose tools is their lack of legal and business context,” Brewer says. “Legal work is defined by nuances — solicitation rules, jurisdictional requirements, compliance standards, and practice-area-specific workflows — that general models often overlook.” Then there’s the context problem. Ask ChatGPT to summarize a case, and it only sees what you feed it — not the case history or the client’s background. And since it also can’t take action after summarizing, it’s more or less a dead-end tool. “A legal-specific tool that lives alongside your data and processes can summarize the case and suggest the next best actions or additional questions to ask,” Brewer says. “As the industry raises the bar, firms that delay are doing more than just missing out on features — they are widening a performance gap that may soon become impossible to close.” The shadow IT security risk Here’s where the adoption-without-governance problem gets dangerous: Only 41 percent of firms have an AI policy, and only 45 percent say their staff receive sufficient training. But 78 percent are using AI tools. That means roughly a third of legal professionals may be using AI in what amounts to a shadow IT environment, with no oversight, guardrails, or policies.
“Security, security, security!” Brewer says. “Given the highly sensitive nature of legal data, business leaders should be concerned that nearly a third of their staff may be using AI in a ‘shadow’ environment without direct IT oversight.” When employees use public AI tools, they might paste in confidential client information or HIPAA-protected medical records without thinking twice. These systems have no real safeguards. One careless prompt could mean a data breach, regulatory violation, or destroyed client relationship. “When firms fail to provide proactive guidance and purpose-built tools, staff will seek their own solutions,” Brewer explains. “If AI adoption isn’t intentional and structured from the top down, firms risk losing the very efficiency gains they sought in the first place, while exposing themselves to additional risks.” What workflow integration actually looks like The difference between AI as an assistant and AI as a business driver comes down to integration. Consider billing. Asking ChatGPT to create an invoice is like using your smartphone’s calculator instead of the accounting app. Sure, it works. But you still have to manually punch in every client detail, every payment amount, and every line item. You saved five minutes on the template and spent an hour filling it in. That’s unproductive. “When AI ‘lives’ natively alongside your billing, client, and case workflows, the impact is fundamentally different,” Brewer notes. “It transforms from an assistant to a proactive business partner.” An integrated AI tool doesn’t just generate a branded invoice template with client and matter details pre-filled. It can automatically suggest missing time entries or proactively identify billing errors. That’s the difference between saving 10 minutes and changing the economics of the entire billing process. Litify’s clients who’ve embraced this level of integration are seeing dramatic operational scaling — some firms handle twice as many matters with the same staff, and the highest performers have grown headcount by up to 400 percent as they’ve expanded regionally and nationally. The four-dimension framework Brewer says firms need to move on four fronts at once. 1. Tools: You have to stop relying on ChatGPT alone, because that’s not going to get you there. You should move to legal-specific platforms that effectively integrate with your case management, billing, and client systems. 2. Readiness: Write an AI policy. Spell out which tools are approved, how to handle sensitive data, when humans must review output, and what to do when something goes wrong. Then treat training like a safety requirement, not an HR checkbox. 3. Task scope: Research and summarization are fine starting points. But firms that stay there are leaving money on the table. The next level is workflow automation — routing requests, running conflict checks, and building chronologies. Eventually, let AI assign cases, generate invoices, and handle intake. 4. Impact: Pick metrics before you spend another dollar. Cost per matter. Turnaround time. Write-off rates. Error rates. “The try-it-and-see period is ending,” Brewer says. “Leaders will expect ROI.” Ultimately, the firms pulling ahead didn’t just buy software. They rewired how legal work gets done — from intake to invoice and research to billing — with training, governance, and measurement baked in from the start. You can keep using the sports car in first gear. But eventually, someone in your market will figure out where the other gears are. BY KOLAWOLE ADEBAYO
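Brewer’s missing-time-entries example shows what “integrated” means mechanically: a check that joins two systems the AI can see at once. As a purely illustrative Python sketch (the data model, matters, and events are invented, and this is not Litify’s implementation), the core of such a check is a comparison between calendar events and billing entries:

# Flag calendar events that have no matching billing entry.
# Data model, matters, and events are invented for illustration.
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class Event:
    day: date
    matter: str
    minutes: int

@dataclass(frozen=True)
class TimeEntry:
    day: date
    matter: str
    minutes: int

def missing_entries(events: list[Event], entries: list[TimeEntry]) -> list[Event]:
    # An event with no time entry for the same matter on the same day
    # is a candidate unbilled meeting.
    logged = {(e.day, e.matter) for e in entries}
    return [ev for ev in events if (ev.day, ev.matter) not in logged]

if __name__ == "__main__":
    events = [
        Event(date(2026, 3, 2), "Acme v. Bolt", 60),
        Event(date(2026, 3, 2), "Estate of Smith", 30),
    ]
    entries = [TimeEntry(date(2026, 3, 2), "Acme v. Bolt", 60)]
    for ev in missing_entries(events, entries):
        print(f"No time entry for {ev.matter} on {ev.day} ({ev.minutes}-minute meeting)")

A general-purpose chatbot cannot run this check because it sees neither the calendar nor the billing system; a platform that lives alongside both can run it after every workday and surface the gaps automatically.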