Thursday, April 9, 2026

Meta just provided its clearest look yet at its AI plan. It’s about time

Meta’s most important launch in years may not be its latest Ray-Ban glasses or its AI app. Instead, it could be the new AI model it introduced on Wednesday, hinting at how its billions in AI investments could one day transform its products.

Muse Spark, the first AI model from Meta’s superintelligence lab, powers Meta’s AI app and will be integrated into Instagram, WhatsApp, Facebook and its AI Ray-Bans in the coming weeks, the company said in a press release. Meta calls the model “purpose-built” for its products and says it is designed to streamline tasks like shopping and trip planning — the kinds of things people already use Instagram for.

The launch seemed to be exactly what Wall Street wanted to hear after Meta poured billions into its AI ambitions with little detail about how those dollars will affect its bottom line. Shares were up more than 9% shortly after the announcement on Wednesday and closed 6% higher.

Last June, Meta invested $14.3 billion in data labeling startup Scale AI and hired its former CEO, Alexandr Wang, as its chief AI officer. It gobbled up rising AI startups Manus and Moltbook. OpenAI CEO Sam Altman claimed last year that Meta CEO Mark Zuckerberg offered $100 million signing bonuses to lure talent away from the ChatGPT maker. And the Facebook parent company spent more than $72 billion on capital expenditures, or costs related to AI infrastructure, in 2025.

Analysts and investors want to know how those investments will pay off. Zuckerberg didn’t offer specifics when asked about the return on AI investments during a January earnings call, saying his response “may be somewhat unfulfilling.” He added that the company is in “this interesting period where we’ve been rebuilding our AI effort, and we’re six months into that, and I’m happy with how it’s going.”

Muse Spark is the clearest answer Meta has yet provided. Meta outlined use cases for the model similar to those offered by platforms like ChatGPT and Gemini: creating a game with a prompt, answering health questions and analyzing a photo of snacks on a shelf to provide nutritional information.

But the launch signals a concrete strategy to challenge OpenAI and Google after initial confusion around the direction of Meta’s AI app. In the past, Meta positioned the app both as a destination for AI-generated videos and as a hub for its smart glasses. Some users accidentally posted public queries that they believed to be private last year, perhaps an indication that certain consumers weren’t sure how to use the product.

Meta also provided some clues about how its social media platforms could give its AI app an edge over rivals. The Meta AI app will reference content from the company’s social media apps when answering questions related to shopping, trending topics and locations. It says it’ll draw on public posts for certain answers to provide “context from your people, right where you need it.” The company also plans to eventually incorporate Instagram Reels, photos and posts directly into answers.

The timing is also critical; Meta faces increasing competition from OpenAI, Google and Apple in the coming months:

• OpenAI has been aggressively expanding as it seeks to replicate the success of ChatGPT in other corners of our lives.

• Google is expected to release its Android-powered spectacles this year. The search giant will likely make more announcements around its AI strategy next month during its developers conference.

• And Apple’s revamped Siri is expected to launch this year following delays.
Similar to Meta, Apple’s strategy is centered on leveraging a person’s preferences to personalize answers.

Meta needs a win. The metaverse didn’t upend the internet as the company expected. Meta’s smart glasses have been at the center of privacy concerns. OpenAI’s ChatGPT caught the tech industry – Meta included – largely by surprise, leaving tech giants racing to catch up over the last three years.

The jury is out on whether Meta’s new AI models will propel its products to new heights, replicating the success of Facebook and Instagram’s early days. But the launch of a model made specifically for its products for the first time suggests Meta is building toward a vision. Now it just has to execute it.

Analysis by Lisa Eadicicco

Wednesday, April 8, 2026

AI Is Breaking Passwords, and the Alternatives Are Getting Pretty Weird

Your next password could be your heartbeat—or maybe even the way you breathe. As hackers get better and better at cracking traditional passwords (by exploiting lazy consumer habits and technical advances such as artificial intelligence), researchers are searching for new methods to protect sensitive data.

The tech industry has been trying to nudge users toward other data protection methods for years—and some of those methods have been unusual, to put it mildly.

Take, for example, the latest alternative authentication method, developed last year by researchers at Rutgers University. VitalID is a new spin on biometric protection, using the unique vibration patterns from breathing and heartbeats that resonate through the skull to identify you. Differences in people’s bone structure and facial tissues make the harmonics as distinctive as a fingerprint, the researchers said in a paper outlining the proposed authentication method, which was envisioned for extended reality headsets.

“Traditional security mechanisms—such as passwords, PIN codes, and conventional biometric systems—proved increasingly incompatible with immersive interfaces,” the researchers write.

The average person is responsible for roughly 170 passwords, according to password manager company NordPass. That’s partly why people tend to reuse the codes—and it’s why hackers have been increasingly effective at gaining access to people’s information, in attacks on both corporate and personal systems.

Biometrics have shown some promise. Mobile users are quite familiar with Face ID and with pressing a thumb to the screen to prove they’re the rightful owner of the device. Fingerprint logins started to go mainstream in 2013, and face scanning began to rise in popularity in 2017. Voice recognition seemed like it would be an effective tool, but recent technology advances have sidelined it. “Now that AI can clone a voice from a few seconds of audio, it’s not reliable,” said Karolis Arbaciauskas, head of product at NordPass.

Rutgers’ unusual approach is far from the first strange attempt at user authentication. Efforts to do away with passwords have taken several forms over the years. Here are some of the most unique:

• Password pill: While Apple was launching Touch ID in 2013, Motorola pursued a different authentication method. The company prototyped a small authentication pill designed to be powered by stomach acid. When swallowed with a glass of water, it would produce an 18-bit ECG-like signal that made your body the authentication token. As you might guess, this was seen as a pretty creepy way to guard your data, and it never made it out of the prototype phase.

• Tattoo: Motorola showcased a temporary tattoo that same year that could be used for authentication, but the method was met with the same privacy concerns as the pill.

• Body odor: As an offshoot of biometric authentication, some researchers have experimented with using a person’s unique chemical scent to confirm their identity. (Some of those same groups also studied things like the shape of your ear and your gait as identifiers.) These have fallen short of mainstream acceptance because people don’t really want to use their funk as identification, and the sensors have not proved as reliable as other methods.

• Lip-reading: This technology actually works, focusing on the unique way people mouth specific words or phrases.
It’s used more frequently as a discovery tool, though, such as working out what someone is saying in video footage that has no audio. Most consumers have not shown a real willingness to mouth a passphrase to their PC or phone.

• Heartbeat recognition: This biometric authentication method has caught the eye of NASA. Like fingerprints, no two ECG patterns are the same, so by wearing an experimental band, you can verify your identity. This one actually made it to market in the form of the Nymi Band, but it remains too costly at the moment for mass-market adoption.

While research on fringe identification methods is likely to continue, the most promising data protection advance these days is the passkey. This authentication method generates a pair of keys: one public, which is stored in the cloud, and one private, which is stored on the device. That means that if the cloud server is compromised by hackers, accounts are still protected, as the hacker won’t have both sets of keys.

In essence, the passkey you unlock on your phone with a face scan or fingerprint is one half of what’s necessary to get access. The other half is stored elsewhere. For a hacker to crack both, they would need to have your phone and to hack the server, making breaches far more difficult.

While many major sites support passkey technology, it’s still far from universal. (And hackers have a way of catching up, which is why researchers are still looking at other methods, like biometrics, especially as new technology appears close to breaking encryption.)

“It’s no surprise that there have been and still are many attempts to free us from passwords and remembering them,” says Arbaciauskas. “But for now, there is no universally practical way to live without passwords—especially since not all websites and platforms support passkeys yet.”

BY CHRIS MORRIS @MORRISATLARGE
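For readers who want to see the mechanics, here is a minimal sketch of the key-pair idea behind passkeys, written in Python with the third-party cryptography library. It illustrates the concept only; real passkeys use the WebAuthn/FIDO2 protocol, which this simplified example does not implement, and the variable names are invented for illustration.

```python
# Toy passkey-style login: the server stores only the PUBLIC key, the device
# keeps the PRIVATE key, and each login is a signed random challenge.
import os

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Enrollment: the device generates a key pair and shares only the public half.
device_private_key = Ed25519PrivateKey.generate()    # never leaves the device
server_public_key = device_private_key.public_key()  # stored server-side

# Login: the server issues a random challenge; the device signs it after the
# user unlocks the key locally (say, with a face scan or fingerprint).
challenge = os.urandom(32)
signature = device_private_key.sign(challenge)

# Verification: the server checks the signature against the stored public key.
try:
    server_public_key.verify(signature, challenge)
    print("login accepted")
except InvalidSignature:
    print("login rejected")
```

The sketch makes the article’s point concrete: a breach of the server’s database yields only public keys, which cannot be used to forge the signature that the login step requires.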

Monday, April 6, 2026

‘Everyone now kind of sounds the same’: How AI is changing college classes

At this point in her senior year at Yale University, Amanda knows that many of her classmates turn to AI chatbots to write papers and other homework assignments. But she started noticing something bizarre in her smaller seminar classes: Her classmates sit behind laptops with polished talking points and arguments, but the conversations that follow often fall flat, across subjects.

In one class, “the conversation came to a halt, and I looked to my left, and I saw someone typing ferociously on their laptop, asking (a chatbot) the question my professor just asked about the reading,” Amanda told CNN.

Amanda and two other students — Jessica and Sophia — attend Yale University. They requested anonymity for fear of retribution from their classmates and professors, so CNN agreed to change their names for this article.

Amanda said she was taken aback. Until that day, she didn’t realize that her peers were using chatbots in class and sharing what the bots spit out in discussions. Now she notices the impact that tendency is having.

“Everyone now kind of sounds the same,” she said. “I feel like during my freshman year in college, I would sit in seminars where everyone had something different to contribute. Although people would piggyback off each other, they approached from different angles and offered different commentary.”

As AI becomes increasingly integrated with education, educators and researchers are finding that it may be eroding students’ capacity for original thought and expression. A paper published in March in Trends in Cognitive Sciences found that large language models are systematically homogenizing human expression and thought across three dimensions — language, perspective and reasoning — and students and educators say they are seeing the effects of that trend in their classrooms. And that makes a lot of students sound the same.

Why students use AI in class

Jessica, a senior at Yale, told CNN that she uses AI every day for her classes. In an economics seminar in which the professor cold-calls students, “at the beginning of class, you could see every single person putting every single PDF” into a chatbot. She also uses AI when she has trouble turning her thoughts into words. “I want to comment, and I have this concept, but I don’t know how to formulate the sentence myself,” she said. So she asks a chatbot “to make it sound more cohesive.”

A Yale University spokesperson said that “Students continue to experiment with using AI in class” and that the university is aware of the ways AI is used in the classroom, including those described in this article. “To support learning and engagement, we are seeing a broader trend of faculty designing courses with limited or no laptop use, emphasizing print-based materials, original thinking, and direct engagement with peers and instructors,” the spokesperson told CNN.

Thomas Chatterton Williams, a visiting professor of the humanities and senior fellow at the Hannah Arendt Center at Bard College, has seen the impact of students’ decisions to use AI. Students’ reliance on AI “has paradoxically raised the floor of class discussion to a generally better level in courses with difficult concepts, but has also tended to preclude stranger, more eccentric and original thoughts,” said Williams, who is also a nonresident fellow at the American Enterprise Institute, a think tank that includes research on education.
“My biggest concern is that many bright young people will never achieve a voice of their own — indeed that a surprising number of them won’t even fully appreciate the value of authorship and ownership of a point of view,” Williams said.

Jessica admitted that she’s felt herself become lazier since she started using a chatbot to help with her classes. “I have thought about how much I stopped working, like my work ethic has completely diminished from high school,” she said.

Why does AI make people sound the same?

Large language models, or LLMs, are trained to predict the next most statistically likely word given everything that came before it, said Zhivar Sourati, a doctoral student at the University of Southern California and first author of the paper. The data those models train on overrepresents dominant languages and ideas, so their answers to users’ questions naturally “mirror a narrow and skewed slice of human experience,” the researchers wrote in their study. The result is “a narrowing of the conceptual space in which models write, speak, and reason.”

AI-induced homogenization happens across three dimensions: language, perspective and reasoning strategies, the authors explained. That’s because AI models tend to reproduce what researchers call “WEIRD” viewpoints — Western, educated, industrialized, rich and democratic — even when explicitly prompted to represent other identities. One possible consequence, Sourati said, is that WEIRD language and perspectives could come to be perceived as more credible and “more socially correct,” marginalizing other viewpoints. A similar phenomenon shows up in reasoning, where the popular technique of walking models through step-by-step logical thinking may be crowding out more intuitive, culturally specific and creative ways of working through a problem.

When a group repeatedly interacts with AI systems, Sourati explained, its creativity flattens compared with that of the same group working without AI assistance. This flattening raises concerns in educational institutions at all levels. When students are asked open-ended, subjective questions with no single correct answer, teachers can expect a wide range of responses. But if all students rely on AI, their answers may become more polished yet fall into just a handful of similar categories, Sourati said. Students will lose the diversity of thinking that classroom discussions are meant to encourage.

Sourati is most concerned that homogenization is happening to people who are still developing their ability to creatively generate new ideas. If students continue to use AI instead of developing their own thought processes, “they wouldn’t learn how to even think by themselves and have their own perspectives.”

Morteza Dehghani, a professor of psychology and computer science at the University of Southern California and a coauthor of the paper, said he has heard of people using AI to decide whom to vote for in an election, which he finds “quite scary.” “If people lose diversity” in the way they think, “or get into intellectual laziness, of course, that is going to affect our society greatly,” he said.

Sophia, a junior at Yale, believes that her fellow anthropology students are using AI to draft scripts for what to say in class because people are insecure about what they don’t know. “I think creativity is dwindling because we lose the ability to make connections,” she added.
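A toy example makes the next-word prediction Sourati describes concrete. The scores below are invented, and a real LLM ranks tens of thousands of candidate words rather than four, but the mechanism is the same: when the single most likely word wins every time, students asking similar questions are steered toward the same phrasing.

```python
import math

# Invented scores (logits) a toy model might assign to the next word after a
# prompt like "The theme of the novel is ...". Real models score an entire
# vocabulary; these four candidates are purely illustrative.
logits = {"isolation": 3.1, "alienation": 2.9, "memory": 2.2, "bees": -1.0}

# Softmax turns the raw scores into a probability distribution.
total = sum(math.exp(score) for score in logits.values())
probs = {word: math.exp(score) / total for word, score in logits.items()}

# Greedy decoding always takes the single most likely word, so every user who
# asks a similar question is nudged toward the same answer. Sampling from the
# full distribution instead would preserve some variety.
best = max(probs, key=probs.get)
print(best, round(probs[best], 2))  # -> isolation 0.45 (with these made-up scores)
```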
If people continue to offload their reasoning to AI, Dehghani agrees, communities will lose creative innovation and the ability to critique mainstream ideas or even political candidates. As more people use AI models to write and think, those outputs are reabsorbed into human discourse — and eventually into the data used to train the next generation of models — so the homogenization keeps compounding, the paper’s authors said. “If we’re offloading our reasoning onto these models, then we can easily be persuaded by what the models tell us,” Dehghani said.

In education, Dehghani is concerned about a generation of students who are learning with AI and being tutored by AI. “They would be more homogenous in the way they think, in the way they write, so this is going to have long-term influences,” he said.

People aren’t learning to reason

Sophia, who tries to resist using AI in school, said she believes people are deprioritizing their own thinking “in favor of having really big words.”

“I would literally rather just tell the professor, ‘I don’t know what we’re talking about.’ Even if you put every reading into (a chatbot), it doesn’t have your past experiences that make you a critical thinker,” she said.

“I feel like people had a lot more to say because they actually feel tied to the material,” Amanda agreed. “Now classroom discussions are not really digging deep. I think a lot of that has to do with the AI chatbots, but also, there’s no longer as much of a drive to connect with the material personally.” Disappointed, she added, “I think it’s boring to be in a class where everyone has the same thing to say, and no one wants to dig deeper or push against what is directly said in the text or the norm.”

Daniel Buck, a research fellow at the American Enterprise Institute and a former English teacher at four K-12 schools over seven years, said he is concerned that students are circumventing the cognitive work required to engage in classroom discussions and complete homework. “A lot of learning happens in the boring minutia, the struggle,” Buck said. Students retain only what they have actually spent time consciously processing, he continued. If a student outsources thinking to AI, they may be able to reproduce a talking point in class, but they haven’t built the underlying skills to apply that knowledge elsewhere.

Buck draws a sharp distinction between AI and the shortcut technology that preceded it: SparkNotes. When students relied on the popular website for chapter-based summaries of literary works, teachers could easily detect it. AI, by contrast, is a “supercharged version of SparkNotes” that “can answer any question that you pitch to it,” Buck said. Whereas SparkNotes offered a fixed set of analyses, AI can respond to whatever a teacher asks, making it much harder to identify when students are not doing the thinking themselves.

The difference is in how people reason. Instead of serving as just a reference, the way books or search engines do, AI is an active participant in “problem solving and perspective-taking,” Dehghani said.

“What we are seeing now is fundamentally different than other periods of homogenization of expression and thought,” Williams said.
“If even professional writers are finding it exceedingly difficult to resist outsourcing the difficult work of wrestling with words and ideas — as we know they are — I don’t see how the younger generations who have not experienced a world before highly sophisticated, on-demand AI writing will be able to do this, not at scale.”

Buck worries that students will graduate without having developed relationships with professors or the habit of sustained cognitive work. That means they will struggle to solve problems in the real world.

“There’s so much delight in reading original student essays,” he said. “Even if it isn’t quite as well-argued or as solid as I wish it would have been, you’re seeing these young students, for the first time, start to think for themselves, to analyze, to think critically. It’s almost like watching my own children walk for the first time, where they stumble and fall, and that’s amazing. Keep doing that.”

Reading and interacting with students’ original thoughts in class also helps teachers understand how students think and articulate ideas. “There’s an interpersonal exchange that I think gets overlooked. When you get to know your students, they get to know you, they start to trust you and your feedback,” he said. “I think that gets lost too when everything is through AI.”

How teachers work around AI

Sun-Joo Shin, a philosophy professor at Yale, said “it is a big homework for anyone who is involved in teaching” to keep exploring ways to ensure students continue to think critically and creatively in the age of AI.

“We are in an interesting and exciting transition. I want my students to understand the material of the class, which is constant before and after the appearance of AI,” she said. “At the same time, I want them to use this exciting tool to their advantage, not be a victim of it. A dilemma of an instructor is how to help, or force, students to learn the material and to think creatively without running away from the AI tools or without copying them.”

Until the fall semester of 2024, she said, she was not worried about how AI would affect students’ understanding of the material in her mathematical logic class. Her teaching team had tested the problem sets against the AI models of the time, and the models were unable to solve her problems. But since then, “AI has been catching up,” and models can answer questions “pretty well” if students upload class handouts and learning materials. She started thinking about additional requirements in the class beyond problem set submissions. “After all, it would be extremely unfair to give good grades to AI answers,” Shin said.

Yale has guidance on AI usage for both students and faculty. “Generative AI use is subject to individual course policies,” one of the university’s websites states. “We encourage all instructors to adapt our model policies for their specific course and learning goals. AI Detection tools are unreliable and not currently supported.”

Yale provides model policies for different class types, such as “Creative Writing Seminar” and “STEM Mid-Sized Lecture.” The policies range from discouraging AI usage, with guidelines on when AI explicitly cannot be used, to allowing students to use AI as a source of ideas while prohibiting them from submitting text generated by chatbots, to encouraging and permitting students to use AI in assignments.

Buck warns that any work sent home cannot be verified as the student’s own.
To counter AI, teachers are going back to reading texts aloud in class, assigning “on-demand, handwritten essays” and giving “paper and pencil assessments.” In-class accountability often comes in the form of pop quizzes. A student who asked AI for a chapter summary instead of reading the chapter might get the broad strokes, but there is a strong chance that the one specific detail the quiz asks about did not make it into the summary, Buck said. “If you did the reading, it was super-duper easy,” he said. “And if you didn’t, then there was no way to bluff your way through.”

“I made a rather significant change for my two logic classes in terms of requirements,” Shin said. Although she still includes problem sets in her classes, she has reduced their weight in students’ grades. The problem sets are now graded only on completion, and students receive feedback rather than grades. “Using these problem sets as a question bank, I have two midterms and one final, all of which are in-class exams,” she said. “Some questions are lifted from problem sets, some are slight modifications, some require students to check where a proof goes wrong, and some are filling in gaps in a proof that they solve in problem sets.”

For her computability and logic class, “I have given oral tests, one by one, for years, and a presentation requirement before the AI era, which has been working out very well,” she said. Now, the exams, oral tests and presentations are weighted more heavily in students’ course grades than take-home problem sets.

Williams has arrived at a similar place from a different direction. As a professor, he has moved all writing assignments into the classroom and made them spontaneous. At the end of the semester, he assesses students through oral exit exams. “I cannot with any confidence assign students any writing that I don’t watch them commit to paper by hand in my own presence,” he said via email. “I think this is a terrible loss, but it’s necessary. The temptation and availability of AI is too great.”

It’s affecting other people’s educations

While educators can work around AI in assessments, it is equally important for students to be intentional about limiting their own reliance on it as they learn, especially since that reliance affects their classmates’ education as well.

“It is frustrating because even though I personally try to stray away from it, I can’t prevent other people from using it,” Amanda said. “The fact that others use it affects my education as well, and the value of the two hours of my seminar.”

Basil Ghezzi, a freshman at Bard College who actively avoids using AI in her studies, worries about the environmental costs associated with AI models. Instead, she encourages students to turn to the resources already around them. “Talk to your teachers, talk to your professors, talk to people around you. Have meaningful conversations with people in your life,” she said.

Still, not everyone takes an all-or-nothing approach to AI. Dehghani said he writes bullet points capturing ideas he originated and asks the model to find flaws in his work. He hopes that more companies will invest in AI models that can generate variety and reflect the diversity of thought in today’s society. For now, however, Dehghani suggests that people resist using AI to generate ideas or to reason. AI models “should be collaborators. They shouldn’t be agents that do everything on our behalf,” he said.

By Asuka Koda

Friday, April 3, 2026

Why LinkedIn Believes AI Will Turn Workers Into Founders

As workers worry that AI will automate their jobs away, LinkedIn CEO Ryan Roslansky and LinkedIn executive Aneesh Raman argue something different: AI is about to make entrepreneurship far more accessible.

That’s the thesis of Open to Work: How to Get Ahead in the Age of AI, LinkedIn’s first book, released Tuesday. Co-authored by Roslansky and Raman, the book lays out how AI can strip away many of the traditional barriers to starting a business—capital, gatekeepers, specialized expertise—and replace them with tools that let individuals build, test, and scale ideas on their own terms. Drawing on founder case studies and research from MIT Sloan senior lecturer Paul Cheek, the book frames AI not as a threat to work but as an accelerant for self-employment and ownership.

Raman’s own career mirrors that premise. His path—from CNN correspondent to presidential speechwriter to LinkedIn executive—wasn’t linear, but it was intentional. Each role, he says, was a way to expand impact and adapt as opportunity shifted. In Open to Work, Raman connects that mindset to the moment founders now face in a labor market where titles matter less than skills, and where AI can help individuals turn experience into businesses faster than ever before.

LinkedIn has invested heavily in AI tools for both its workforce and its users. In 2023, the company launched AI-powered writing suggestions to help users update their profiles. The following year, the technology was updated to create resumes and cover letters tailored to specific job listings on the platform, and it grew even more personalized in 2025. A LinkedIn spokesperson says more than 38 million people use the platform’s AI-powered job search every week.

Book Preview:

Across the Industrial Revolutions, new forms of energy emerged, from steam to electricity. Those new forms of energy supported new forms of technology, from the assembly line to the internet. And with those new forms of technology, economic growth all over the world has increasingly come from one thing above all else: the ability to produce more goods and services, faster and cheaper.

As a result, our economies started prizing the skills that would best support efficiency at scale, especially analytical and technical skills. As humans at work, our value was measured by how effectively we could support technology executing more, better, faster. A few of us did work that involved innovating and thinking creatively but, for the most part, even that work was about creating new goods and services that helped consumers and businesses do more, better, faster.

Today we’re all mostly manning assembly lines, operating registers, driving tractors, building spreadsheets, writing code, managing meetings, and responding to emails. So. Many. Emails. In every case, across so many of our jobs, our value has been tied to our ability to help organizations achieve that same goal: more output, better quality, faster delivery.

Then came AI. Suddenly, so much of what we’ve trained ourselves to do, so much of what our economy has valued most, AI started to do. And it started to do it more efficiently than we ever could, becoming better by the day at precisely the kind of technical and analytical capabilities our economies currently prize above all else. Of course we’re worried. But that fear misses something crucial: Our competitive edge as a species was never our capacity for processing and producing more, better, faster in the first place.
As AI starts to handle the “more, better, faster” work that has consumed so much of our time and energy, we will finally have the opportunity to reclaim the work that only we can do. Work that is based on what makes us uniquely human.

• Learn what AI can do, and what only you can: AI is changing the way we work, but it doesn’t replace the strengths that set people apart. When you understand where technology can amplify your impact — and where your judgment, empathy, and creativity shine — you unlock real momentum in your career.

• Build human capabilities that outlast every tech shift: Skills like curiosity, creativity, communication, compassion, and courage never go out of style. As tools evolve, these abilities become even more valuable. Strengthening them now puts you in control, no matter how fast work transforms.

• Turn insight into action with a clear plan for what’s next: The future doesn’t have to feel abstract. You can redesign how you work, how your team collaborates, and how your company culture adapts. Start with a simple, practical 30-60-90 day plan to help you move with confidence.

BY KAYLA WEBSTER