Friday, April 24, 2026
Anthropic’s Claude Opus 4.7 Is Here, and It’s Already Outperforming Gemini 3.1 Pro and GPT-5
Anthropic’s just released its latest AI model, Claude Opus 4.7. The company claims it handles complex, long-running tasks with greater rigor and consistency than its predecessor, follows instructions more precisely, and can verify its own outputs before delivering a response. In short, Anthropic says the new model is a real-world productivity booster.
This comes shortly after Anthropic launched Claude Opus 4.6 in February. And the model is “less broadly capable” than its most recent offering, Claude Mythos Preview. But at this time Anthropic has no plans to release Claude Mythos Preview to the general public. It says the effort is aimed at understanding how models of that caliber could eventually be deployed at scale.
How Does Opus 4.7 Compare?
According to The Next Web, the most striking gains are in software engineering: on SWE-bench Pro, an AI evaluation benchmark, Opus 4.7 scored 64.3 percent—up from 53.4 percent on Opus 4.6 and ahead of both GPT-5.4 at 57.7 percent and Gemini 3.1 Pro at 54.2 percent.
Opus 4.7 Token Usage
Users upgrading from Opus 4.6 should note two changes that affect token usage. An updated tokenizer improves how the model processes text but can increase token counts by roughly 1.0 to 1.35 times depending on content type. The model also thinks more deeply at higher effort levels, particularly in later turns of agentic tasks, which boosts reliability on complex problems but produces more output tokens.
Opus 4.7 Safety
Additionally, Anthropic says Opus 4.7 carries a safety profile comparable to its predecessor, with evaluations showing low rates of deception, sycophancy, and susceptibility to misuse. The company’s alignment assessment concluded that the model is “largely well-aligned and trustworthy, though not without room for improvement.”
“We are releasing Opus 4.7 with safeguards that automatically detect and block requests that indicate prohibited or high-risk cybersecurity uses,” Anthropic said in a release. “What we learn from the real-world deployment of these safeguards will help us work towards our eventual goal of a broad release of Mythos-class models.”
This commitment to transparency around safety is central to how Anthropic has positioned itself since its founding in 2021. Anthropic has spent much of its existence cultivating a reputation as a more safety-focused alternative.
CEO Dario Amodei previously served as vice president of research at OpenAI before leaving to co-found Anthropic alongside his sister Daniela Amodei and other former OpenAI employees who shared his concerns that the company was not taking AI safety seriously enough.
“We’re under an incredible amount of commercial pressure and make it even harder for ourselves because we have all this safety stuff we do that I think we do more than other companies,” Anthropic’s CEO Dario Amode said on podcaster Dwarkesh Patel’s Podcast.
The upgrade represents a step forward across the capabilities that matter most to Claude’s users. “The model does not win every benchmark against every competitor, but it wins convincingly on the ones most directly tied to real-world productivity,” The Next Web said.
BY AMAYA NICHOLE
Wednesday, April 22, 2026
5 ways your doctor may be using AI chatbots — and why it matters
Millions of Americans are turning to AI chatbots for health answers. Doctors are, too.
But the ways doctors are incorporating AI chatbots into their practice are surprising.
Specialized medical AI chatbots have quickly become a go-to source for many doctors and trainees. The CEO of one of these medical chatbot companies recently claimed that more than 100 million Americans were treated by a doctor who used their platform last year.
Popular chatbots like OpenAI’s ChatGPT don’t meet the bar for doctors, who say these platforms aren’t always accurate or up to date with the latest guidance. OpenAI’s usage policies state that users are not allowed to use its services for “tailored advice” without consulting a licensed health professional.
“ChatGPT is like your crazy uncle,” said Dr. Ida Sim, a professor at the University of California, San Francisco, who studies how to use data and technology to improve health care.
The edge, Sim says, is that medical chatbots are less prone to sycophancy and more likely to ground answers in peer-reviewed research and clinical guidelines. That’s why she says the uptake has been “tremendous.”
The most common use case
Millions of research papers are published every year — and keeping up with them all is impossible.
“You’d need like 18 hours a day to stay up to date,” said Dr. Jared Dashevsky, a resident physician at the Icahn School of Medicine at Mount Sinai.
But doctors are expected to stay current on new research and guidelines to maintain their licenses. Many say they now use medical chatbots as a reference tool to help them stay updated.
Rather than pulling information from the entire internet, specialized medical chatbots actively search medical literature, says Dr. Jonathan H. Chen, an associate professor at Stanford Medicine who leads his health system’s efforts to integrate AI into medical education.
That workflow provides doctors with more accurate answers that summarize and link to important papers and guidelines. Dashevsky, who writes about AI, says these features are especially helpful for trainees working long hours.
Uploading patient records to AI bots
Some health systems have adopted AI chatbots to improve patient care, promising doctors safety and privacy protections.
But many doctors use unauthorized chatbots called shadow AIs, according to doctors CNN spoke with. Some of these shadow AIs also advertise HIPAA compliance features.
HIPAA is a federal law that requires certain organizations that maintain identifiable health information — such as hospitals and insurers — to protect it from being disclosed without patient consent.
But language used by shadow AIs has led some doctors to believe that it’s safe to upload protected health information onto chatbots in exchange for more tailored answers. But Iliana Peters, a health care lawyer at the law firm Polsinelli who previously led HIPAA enforcement for the US Department of Health and Human Services, says that assumption is inaccurate.
“‘HIPAA compliance’ is not an accurate term to use by any company,” Peters said, explaining that the phrase should be used only by government regulators.
Despite that, Dr. Carolyn Kaufman — a resident physician at Stanford Medicine — and other doctors say that patient information is making its way into unauthorized chatbots, potentially opening the door to new ways of commodifying patient data.
“Data is money,” Kaufman said, noting that she has never uploaded HIPAA-protected information onto an unapproved chatbot. “If we’re just freely uploading those data into certain websites, then that’s obviously a risk for the individual patient and for the institution, as well.”
Drafting AI-generated notes
AI chatbots have also stepped in to help doctors draft summaries of patient visits and long hospital stays. These notes are viewable on online patient portals and help doctors track a patient’s course and communicate plans across the care team.
“It’s probably safer to have artificial intelligence review a hospital course and know everything happened, versus you as a human — with limited time, jumping between note to note — trying to put the pieces together,” Dashevsky said, arguing that although concerns over AI accuracy are valid, human-based summaries may also miss key details.
Writing letters to insurance companies
Administrative work can take up nearly nine hours a week for the average doctor, and the time doctors spend on insurance-related tasks costs an estimated $26.7 billion each year.
A feature that Dashevsky says has been a “game-changer” is chatbot-authored letters to insurance companies for prior authorizations and other correspondence, allowing him to field patient requests more quickly.
“I would have to figure out who this patient is, write the letter myself and review it. It took so much time,” he said. “Now, AI will produce for you a really good letter.”
Creating a list of possible diagnoses
When patients come to doctors with concerns, physicians have to figure out how to help them. Part of that process is considering a range of possible diagnoses. Many medical students and trainees use AI chatbots to help build that list, and some doctors beyond training use the feature, too.
“From a med student perspective … you’re seeing a lot of things for the first time,” said Evan Patel, a fourth-year medical student at Rush University Medical College. “AI chatbots sort of help orient me to what possibilities it could be.”
Kaufman says the bots provide the most accurate list when she includes every data point linked to patients, like lab results and imaging findings.
What patients need to know
All eight doctors and trainees CNN spoke with say they regularly use medical AI chatbots. And most have a positive outlook, viewing these tools as a way to offload certain cognitive and administrative tasks. But patient privacy concerns are valid, the doctors say.
Five questions to ask your doctor
How are you using AI chatbots to augment my care?
What types of AI chatbots do you use, and have they been approved by the health system?
Is any of my personal health information being entered into AI tools, and how is it protected?
How do you check that the information from AI chatbots is accurate?
Do you usually agree with the information from AI chatbots, or do you find yourself questioning it?
As with any AI tool, Kaufman says, errors happen and information can be inaccurate. When she consults peers for second opinions, she says, they “almost never agree” with the AI chatbot’s answer.
“People treat AI like it’s magic,” Chen said. “It’s not magic. It can’t just do anything you want.”
He added: “You ask the same question 10 times, and it’ll give you 10 different answers.” That variability, Chen argues, highlights some of the surface-level limitations.
Medicine operates on three layers, Sims says: workflows, knowledge and expertise. AI is transforming the first two. But that last layer — core to the care patients receive — is harder to replicate and may be what matters most.
“If we just apply guidelines, then replace us,” Sim said. “It’s where you take the knowledge and apply it to an evolving set of conditions in the context of your life. That’s what medicine is. It’s in the context of people’s lives. And these machines don’t do that.”
By Michal Ruprecht
Monday, April 20, 2026
The Pareto Principle Is How AI Actually Takes Jobs
Are you afraid of losing your job?
That question might sound silly at first, but over the past several years, the specter of losing one’s job has risen to horror-movie jump-scare proportions. It’s not just you. Everyone who has a job is deathly afraid of losing it. I hear this daily, in comments on my articles, in my consulting work, on social media, even among friends. No one is immune.
Why?
Well, there are a lot of reasons, but one reason might be the constant drone from big tech and the press, both of which have spent a lot of the past four years telling us that AI is coming to take our jobs and, with this new strain of zombie mutant AI, no one is immune.
Is that true?
Well, I’ve spent a lot of time working with AI, and I’ve also spent 15 years telling people why AI shouldn’t be coming for their jobs.
I think I can connect the dots here. And they’re sobering. But someone has to tell you the truth.
When AI Strikes, It’s Slow, Then Quick
The first fact I can give you is that, despite the current conventional wisdom, AI has and will continue to put blue-collar jobs at risk far more frequently than white-collar or knowledge-worker jobs.
The increased threat to blue-collar jobs is for a couple of reasons. But it mostly has to do with the Pareto principle that exists within automation, and automation is still the bulk of what AI is being used for in this cycle – whether that’s automating content, automating conversations, or more broadly, automating sets of tasks that just follow basic instructions.
That last one, that’s what’s led to the onslaught of machines taking blue-collar jobs as far back as the 1980s. It’s still happening today, it’s just being overshadowed by all this new white-collar carnage.
While maybe not as easy and inexpensive to get started with as today’s AI, replacing physical tasks with machines ultimately offers far more coverage and better results than knowledge tasks.
The more repetitive the task and the less knowledge required to execute it, like spot welding, the more that job is suited for automation. As computer processing has become more powerful, the robots can now make choices, and even appear to understand the nuances in more complex tasks, like arc welding.
Or driving.
Keep that in mind. Because the second fact I can give you is that when AI comes for your job, it happens slowly with lots of warning, then real fast without warning.
Self-Driving Tech Is Happening Fast
Right now, the AI-job-taking threat that’s easiest to spot is that of taxi driver. And Uber and Lyft are not immune.
Self-driving tech has been around since the 1950s, and experimental forms of passenger cars were being developed in the 1970s. At that point, with self-driving vehicles screaming down the Autobahn in the 1980s, the writing was on the wall for driving as a profession, no?
Well, it’s 2026, so why didn’t it happen? Well, it did. It just started very slowly.
I just got back from lovely Tempe, Arizona, which is kind of the epicenter of self-driving cars and those little delivery robots — and Starship, please, please, please send me one, I promise to take care of it and feed it and walk it every day.
Waymo self-driving cars are, no exaggeration, everywhere. By my third day in Tempe, a Waymo brushed past me as I was getting out of my car and I didn’t think twice about it. And later, an empty one cut me off trying to change lanes, which happened a dozen more times that day with cars driven by humans.
I know. Nice anecdote, bro. But it’s not just me. Waymo ridership is skyrocketing, especially over the past two years and significantly over the past few months. My daughter and her friends take them exclusively now. No one talks about their uniqueness or their ubiquitousness. They’re cheaper to run, they can all run 24/7, and they will eventually become safer than human drivers if they aren’t already, which is already debatable.
Uber and Lyft are not immune. Despite all the advancements those companies brought to the taxi experience, from lowering the costs to cashless payments to accurate(ish) timing, as the human-driver side of the equation went from cheap rideshare side-gig to expensive full-time taxi driver job — an evolution those companies always knew was a risk but could never mitigate — the wheels were already in motion to eliminate that cost from the system.
So when driverless comes, it’s coming for everyone. And it’ll come without warning, because we’ve already been warned. For 40 years.
What Are the AI-Job-Taking Warning Signs?
Again, the conventional wisdom here is to ask yourself how valuable you are to the company. But I think that metric is more of a requirement, less of an indicator. If what you do could be replaced, by a human or otherwise, that’s another issue. So we need to start with the baseline that everyone reading this is valuable to their company.
Because the real metric is the company, and in a greater sense, it’s what that company does.
Apologies to Cory Doctorow, but whether AI will replace your job has a lot to do with your company’s place in what I call the Enshitification Scale. Not to make a thing out of it, but we see this in every aspect of our lives, as both consumers and business people. The scale goes, in order of AI-dethroning threat:
I Love This Product
They Used to Be Awesome
They’re the Lesser of All Evils
They’re the Only Safe Bet
I Hate Them
And it’s actually more about the use case — what the company does — than the company itself.
The thing is, you don’t have to get all the way to the bottom of the scale for AI or tech to start taking jobs. When the emotion about the product, love or hate, switches from the product itself to the use case, then it’s time for the asteroid to come in and take out all the dinosaurs.
Policy Just Delays the Inevitable
And when that time comes, no amount of entrenchment will make an incumbent industry or sector safe. That entrenchment usually starts with external protection, first policies, then unions, then laws.
What these end up doing, in too many cases, is prevent the natural evolution of the use case and the growth of the person executing it, stunting that growth until neither can be saved, and the knowledge that actually lifts the person above the AI job taker is made irrelevant.
Like I said, it happened when laws were passed to designate Uber and Lyft drivers as employees and not contractors. You’re seeing it in fast food right now. Minimum wage goes up, then in come the self-service order kiosks. I remember when we were all complaining about the lack of customer service at a McDonald’s. Now we’re all pissed off if the machine is busted and we have to talk to someone to get our Big Mac when we could have just punched it into an app.
Think about it. It was actually the company’s decision to remove the knowledge behind customer service from the employees delivering customer service that turned those “McJobs” into a series of button-punching tasks.
So what about those button-punching white-collar McJobs? Well …
Evolve or Die?
When it comes to knowledge and skills, “evolve or die” — as much as I hate that phrase — is hard to argue with. But here’s the thing. That evolve-or-die situation is not happening in tech, not as fast as they’d have you think.
Techies are surprisingly fast to evolve, and one of the reasons these layoffs are so painful is because most techies haven’t had a problem adopting these new AI technologies. The problem is leadership and management has always seen AI as a replacement out of the box, regardless of the company’s place on the Enshitification Scale, and so, everybody is doing layoffs because AI gives them a way out.
Hell, if it’s AI’s fault, then that might just “fix the glitch” and make the company great again. It won’t. The Enshitification Scale rarely, if ever, moves in reverse.
So the warning signs are there if you take a look around you. Is your tech company still a tech company? Or is it a task factory? Are you being protected? Or are you adding value? If you can answer those questions, the warning signs should be easy to spot, and you should think about getting out before things happen quickly.
If this resonated, please join my email list, a rebel alliance of professionals from all walks of life who want a unique take on tech, business, and the future of both.
EXPERT OPINION BY JOE PROCOPIO, FOUNDER, JOEPROCOPIO.COM @JPROCO
Friday, April 17, 2026
Is Your Business Idea Actually Good? This Claude Hack Provides the Cold, Hard Truth
Artificial intelligence has made it easier than ever to start a business, with tools like Anthropic’s Claude Code and OpenAI’s Codex enabling anyone to build websites, apps, and SaaS platforms. But not all ideas are created equal, and sometimes what seems like a great idea has already been done before, or is unviable for reasons you hadn’t considered.
In the past, confirming the viability of your startup or product idea involved a combination of research, analysis, and asking trusted friends, family, and business associates. But now that you have a virtual smart person in your pocket or on your desktop, AI can help streamline this process.
In this article, we’re going to teach you how to set up an AI model (in this case Claude) that will give you the cold, hard truth about your business ideas.
To start, you’ll need to create a customized version of Claude that’s highly skeptical and critical of your ideas. AI models have often been found to be sycophantic—more interested in telling you what you want to hear than actually giving it to you straight. We need to give Claude a set of custom instructions to follow so it can avoid this pitfall.
To start, I opened up the Claude desktop app (you can download it here), opened Claude Cowork (Anthropic’s tool for knowledge work), navigated to the projects tab, and started a new project named “Business Idea Viability.” Next, on the newly created project page, I selected “set project instructions,” and pasted in the following prompt:
“You are a skilled business analyst with decades of experience under your belt. You excel at receiving an idea for a business and comprehensively running research to determine if this idea has already been turned into a business, looking into trends, historic parallels, and data related to the proposed business idea. You are highly critical and skeptical of new ideas and difficult to satisfy, but fair when you encounter a legitimately good idea for a business. You are direct in your communication style and ‘tell it like it is’ with brutal honesty.”
Finally, I added two articles from Harvard Business School to the project’s files. Those articles were titled “How to Come Up With an Innovative Business Idea” and “5 Steps to Validate Your Business Idea.” With this, Claude should have a solid understanding of what makes a killer business idea.
To start, I wanted to see what kind of feedback Claude would give to an obviously bad idea. Here’s the terrible idea I pitched to the model:
“Help me validate this business idea: Leadership coaching for dogs. What if house-training your pooch was just the first step on the road to true doggy-disruption? Using a proprietary mix of AI, performance management tracking, thought leadership, and retired K-9 police dogs, we will teach your pet to not just be a good boy/girl, but a leader in their community and home. Optional add-on classes will help dogs learn some classic dog leadership techniques, like saving kids from wells. Does this seem like a viable business idea?”
Despite the ridiculousness of my premise, Claude treated it with deadly seriousness. The AI found that “the cultural environment actually does support some version of what you’re describing.” According to Claude, the dog training market is growing at a nearly 10 percent rate, with the biggest driver being “the trend of treating dogs like family members, even like employees with performance goals.”
Having said that, Claude was quick to point out that dogs do not have careers (although to be fair, some dogs kind of do), and that “the entire leadership/thought leadership framing is anthropomorphic nonsense that collapses the moment a customer asks what their dog actually learns in Week 3.” Even the more realistic pieces of my business idea, like AI-powered performance monitoring, had already been hit by other startups, like smart dog collar company Fi. Overall, Claude gave my idea a viability score of 3 out of 10, but said the only reason it wasn’t a 1 is that there’s a genuine market for dog training.
All right, so the test worked: I knew for sure that Claude would give it to me straight, so next I wanted to try out a genuine business idea. Here’s my second shot: “An AI-powered employee sentiment and internal manager effectiveness survey solution. Instead of forcing employees to take dozens of surveys, our AI interviewer simply pings you on Slack/Teams/GChat, and engages you in a natural language conversation about your current working situation, relationship with your manager, and outlook of the company’s leadership and vision for the future. We could even set up voice-based interviews to capture more data.”
Claude gave this one a 6.5 out of 10, and said that while I had identified a real problem of survey fatigue, I was far from the first person to have this idea. The AI assistant identified InFeedo.ai as the main player in this space; the company provides access to an AI agent named Amber, which proactively chats with employees and generates personalized queries. “They are you,” Claude said, “but years ahead of you.”
By this point, I was running out of ideas, so I went back to the pet angle with a new concept: “How about a product that is essentially an AI-powered smart pet sprayer? It would look like a flower vase or cylinder that sits on your table, has a 360 camera with AI vision, and can identify when pets jump on a table and spritz them with a 360 water sprayer.”
Claude called this idea “genuinely interesting,” and rated it as a 7.5 out of 10, bordering on an 8. Not only is the pet tech market booming, the AI said, but “the existing competitive landscape here is also notably weak.” The only comparable product was the PetSafe SSSCAT, a motion-activated canister of compressed air that sprays when it detects movement within three feet. Unlike this “dumb, blunt instrument,” as Claude described it, my sprayer would be smart, and capable of telling a dog or cat from a human. Claude’s recommendation for a TikTok-ready elevator pitch? “It follows your cat and sprays only them, not your laptop.”
With the makings of a solid idea, I asked Claude to use PowerPoint to create a pitch deck that I can use to help potential business partners and investors understand the idea.
And there you have it, a validated business idea courtesy of Claude (please don’t steal it).
The examples here may be silly, but hopefully the lesson is not. The next time you have a crazy-great business idea, try hashing it out with an AI model; by instructing it to be honest and direct with you, you can get actionable feedback on demand.
BY BEN SHERRY @BENLUCASSHERRY
Wednesday, April 15, 2026
Gen Z is outsourcing hard conversations to AI. Why it matters
Around 2 a.m. on a Monday, Emily received a text from a fellow student, Patrick, whom she had gone on a blind date with two days earlier. The pair are juniors at Yale University who were set up by mutual friends. They requested anonymity so CNN agreed to change their names to protect their privacy.
“Hey Emily! I hope your half-marathon went well — I’m sure you crushed it,” Patrick wrote with a winky-face emoji. “Okay, bear with me here — I’m not the best at this kind of thing, but here goes.”
In a six-paragraph-long text, Patrick said he would like to “hang out more — whether it’s just as friends or whatever it was we were this weekend.” He added that he wasn’t “looking for anything too serious right now.”
At first, Emily didn’t think his reply was anything out of the ordinary. “It just seemed really proper, and I guess I knew that he was a really nice guy. So, I was just like, maybe this is just how he texts.” But after sharing his message with two friends, who put it through an artificial intelligence detector, she had her answer: “It was like, 99% AI.”
She was right.
Patrick admitted using ChatGPT to craft his text. He said he didn’t have much experience crafting a rejection message: “What do I do here? It’s the first time I had seen anyone since my high school girlfriend, which is why I was so nervous and wanted a second opinion.”
“I tried to write my thoughts down, but I wasn’t sure how to format this in a way that’s not, like, really bad, so then I went to Chat,” he said. He gave ChatGPT the situation, his thoughts and emotions, and “Chat spit out a response.”
Patrick is far from alone. Researchers say a growing number of young people are turning to AI to navigate social situations — drafting rejection texts, decoding mixed signals and scripting difficult conversations.
Experts warn that this habit may be stunting emotional growth, leaving an already isolated generation who came of age during the pandemic even less prepared for the messiness of human connection.
Patrick went back-and-forth with the chatbot and “tweaked certain lines here and there, but it was mostly copy and paste” from ChatGPT. “I added an emoji and tried to make it sound more human,” he said.
“I felt better putting this out there because I wanted to be very clear and forthcoming. I didn’t want to be wishy-washy with it in case she took it the wrong way. I knew if I did it on my own, I would have been wishy-washy,” said Patrick, who considered his move like consulting an expert.
Emily said she did not think the text was clear and it made his intentions more confusing. She couldn’t tell from the AI wording “if he wanted to be friends or what.”
“My main intention was to be clear in how I was feeling and thinking about the situation,” Patrick said. “Looking back on it, that was pretty poor behavior on my part. I think sitting on it for so long was the reason I went to Chat.”
“I think he was overthinking it,” Emily said. “You definitely don’t need to use AI; you’re an emotionally sane guy.”
She described the interaction as weird but said many of her friends have also turned to artificial intelligence to draft texts to friends or partners, or to analyze social situations — sometimes pasting entire text chains into a chatbot to decipher what someone might be thinking.
“The thought of my little brother using AI to break up with his girlfriend is concerning. Because right now he comes to me, but when’s the day he’s going to turn to AI instead?” She said she is worried that Gen Zers have trouble “confronting their own feelings.”
Emily said she’s also concerned about her generation’s ability to socialize, and some experts agree.
It’s called ‘social offloading’
Emily’s experience is part of a broader pattern that concerns researchers.
Dr. Michael Robb, head of research at Common Sense Media, calls it “social offloading,” using AI to navigate interpersonal situations, and he said it isn’t limited to Generation Z. He has observed it among Gen Alpha (born between 2010 and 2024) and some millennials (born between 1981 and 1996) as well.
One-third of teens already prefer AI companions over humans for serious conversations, according to a 2025 survey conducted by Common Sense Media, a nonprofit organization that helps families navigate age-appropriate media choices.
“If you’re using AI to draft your messages to friends or romantic partners, you’re outsourcing the communicative act itself,” Robb said.
The problem is twofold, he noted. It creates an “expectation mismatch” since the recipient is “responding to an AI-polished version of their friend and not the actual person.” Second, repeated use can erode users’ confidence in their own voices, preventing young adults from developing essential skills, such as reading social intent, inferring others’ emotions and tolerating ambiguity in social interactions.
“It has implications for your sense of self, advocacy and identity formation,” which are central to social development, Robb said. “If every tricky or difficult text is mediated by the AI, it may instill the belief in users that their own words and instincts are never good enough.”
Dr. Michelle DiBlasi, a psychiatrist at Tufts Medical Center and assistant professor at Tufts University School of Medicine, has observed the same trend.
“I have seen young people, late teens, early 20s, using AI to socialize, and oftentimes they’re using it as a way to overcompensate for the fact that they don’t really know how to truly interact with others,” she said. “We’re social beings, and a lot of our feelings of self-worth and connection are really related to our interactions with others.”
DiBlasi said that using AI in social interactions stunts emotional growth and can perpetuate feelings of loneliness and isolation. It can also limit people’s ability to pick up social cues, repair relationships and connect with others.
The pandemic’s impact on connection
Why is Gen Z struggling with socialization? Researchers point to a combination of digital culture and the pandemic.
Russell Fulmer, an associate professor at Kansas State University who studies AI and behavioral sciences, said the two forces created the “perfect storm” for AI to be integrated into social interaction.
Adolescence — roughly ages 10 to 19, according to the World Health Organization — is the critical window for developing confidence, a stable sense of identity and emotional regulation. If adolescents don’t fully develop their social skills during this time, people may be “more prone to lack confidence, more apt to escapism or avoidance and maybe there’s a lack of resiliency,” Fulmer said.
DiBlasi said the pandemic hit Gen Z at a particularly vulnerable moment. “When it happened, they were in the stages where the frontal lobe of their brain was starting to form,” she said. Typically, that’s when adolescents learn to build relationships, pick up social cues and develop mentalization — “the ability to understand somebody else’s mental state or what they’re thinking and how they’re feeling.”
DiBlasi said that this lack of interaction leads to “a deep sense of isolation, feeling like others don’t understand them, or that they don’t understand others,” which drives many toward AI for companionship. But Fulmer warns that chatbots can create a “loneliness loop,” offering an “appearance of connection” that ultimately feels unfulfilling and can deepen isolation.
In the most serious cases, DiBlasi has seen patients experiencing suicidal thoughts turn to AI to help articulate what they’re feeling when they can’t find the words to tell others.
“I think this can be really, really detrimental, because it’s important for people to express some of these emotions in a very honest way with family or friends, so that they can actually work through this in an authentic way,” she said.
It’s not too late to change course
Although some Gen Zers may have missed a prime window for developing social skills, DiBlasi emphasized that it is not too late for them to learn. She encourages people to reach out to friends and family rather than AI when they struggle to express difficult emotions.
“These things are skills that, just like anything with practice, can actually improve,” DiBlasi said. “I understand that people are fearful or they may not want to say the wrong thing. But I really think it takes away any sort of understanding of what you’re actually truly feeling and takes away the connection and the repair that you need to make in these relationships.”
Artificial intelligence is a poor substitute for the messiness of real human interaction, experts say, and that messiness is the point.
“Relationships and conversations can be messy and probably should be messy, and that’s part of what makes you more socially competent in the long run,” Robb said. AI companions are “designed to be very validating and agreeable,” he noted, so their feedback doesn’t reflect the friction that’s part of how people respond in real relationships.
AI users shouldn’t expect an objective read on social situations either, Fulmer added. “Social contexts are often not entirely objective,” he said. “They’re contextual, they’re relational, and therefore nuanced.” As confident as a chatbot may sound, he said, it’s searching for a through line in something that may not have one.
For parents, Robb recommended watching for warning signs, including social withdrawal, declining grades or a growing preference for AI over human interaction. They can respond with low-pressure check-ins, such as asking what their children use AI for, how it makes them feel and what they think they get out of it.
The goal is to get kids thinking critically about what AI does well and where it falls short, said Robb, who suggested that families consider limits to AI-usage similar to screen time rules.
By Asuka Koda
Monday, April 13, 2026
The Real Reason AI Projects Fail, According to Prezi’s CEO
For years, leaders have been told that artificial intelligence is the competitive edge. According to Prezi CEO Jim Szafranski, that thinking is backward.
“The technology is not the hard part,” Szafranski said. “Finding the right problem, that’s the hard part.”
Most companies are getting that wrong.
The myth of starting with technology
Szafranski explained leaders often begin their AI journey by asking, “Where can we use AI?” instead of “What are we actually trying to fix?” That misstep is costing companies billions. According to Gartner, as many as 50% of AI projects fail to deliver meaningful results, largely due to poor alignment with business goals.
Szafranski saw this play out in a steel mill project. “We thought we were solving for scheduling, but that wasn’t the real issue,” he said.
After deeper analysis, the team discovered the real problem was optimizing how steel reached customers, not replacing a human scheduler. Once reframed, the AI delivered actual business impact.
“The first problem you see is almost never the right one,” he added.
Finding the “perfect problem”
Szafranski described what he called the “perfect problem,” a challenge that is both meaningful and solvable.
“You’re looking for something where the impact is obvious, and the path is achievable,” he said. “That’s where AI works.”
AI pilots fail to produce measurable business impact, not because of weak models, but because companies pursue the wrong use cases. The takeaway: AI success is less about sophistication and more about precision.
Why “time to outcome” beats “time to value”
One of Szafranski’s biggest shifts in thinking is moving beyond “time to value.”
“Time to value is incomplete,” he explained. “What matters is time to outcome, did the user actually achieve what they needed?”
That insight reshaped Prezi’s AI strategy. Initially, the company focused on automating presentation features, making slides faster and easier to build. However, that wasn’t the real job customers needed done.
“They’re not trying to make slides,” Szafranski shared. “They’re trying to persuade somebody.”
That realization changed everything.
What Prezi is doing differently
Today, Prezi is using AI to help users communicate and persuade more effectively, not just design better presentations.
“We shifted from helping people build presentations to helping them win moments,” Szafranski explained.
The platform now focuses on:
Simplifying visual storytelling for non-designers
Helping users communicate ideas quickly under pressure
Enabling more engaging, outcome-driven presentations
This shift has unlocked growth, particularly in global markets. Szafranski noted that accessibility has become a major driver.
“When you remove the barrier of design skill, you open the door to entirely new audiences,” he said.
That strategy is working. Prezi continues to expand internationally, especially in regions where traditional presentation tools were harder to adopt due to language or educational barriers.
Accessibility is a growth strategy, not a feature
Prezi’s approach highlights a broader truth: accessibility is inclusion and expansion. According to MIT research the vast majority of AI investments fail to generate financial returns when they are disconnected from real user needs. Prezi is doing the opposite — building for real-world communication challenges at scale.
The real takeaway for leaders
AI isn’t magic. It’s a multiplier. As Szafranski made clear, “If you pick the wrong problem, AI just helps you get there faster.”
The companies winning with AI aren’t the ones with the best models. They’re the ones asking better questions. Because in the end, the difference between failure and transformation comes down to one decision: Are you solving the problem you see or the one that matters?
BY NETTA JENKINS, FOUNDER, HIC; WORKPLACE CONSULTING FIRM | AUTHOR OF SUPERCHARGED TEAMS
Thursday, April 9, 2026
Meta just provided its clearest look yet at its AI plan. It’s about time
Meta’s most important launch in years may not be its latest Ray-Ban glasses or its AI app. Instead, it could be the new AI model it introduced on Wednesday, hinting at how its billions in AI investments could one day transform its products.
Muse Spark, the first AI model from Meta’s superintelligence lab, powers Meta’s AI app and will be integrated into Instagram, WhatsApp, Facebook and its AI Ray-Bans in the coming weeks, the company said in a press release. Meta calls the model “purpose-built” for its products and says it is designed to streamline tasks like shopping and trip planning — the kinds of things that people already use Instagram for.
The launch seemed to be exactly what Wall Street wanted to hear after Meta poured billions into its AI ambitions, with little detail about how those dollars will affect to its bottom line. Shares were up more than 9% shortly after the announcement on Wednesday and closed 6% higher.
Last June, Meta invested $14.3 billion in data labeling startup Scale AI and hired its former CEO, Alexandr Wang, as its chief AI officer. It gobbled up rising AI startups Manus and Moltbook. OpenAI CEO Sam Altman claimed last year that Meta CEO Mark Zuckerberg offered $100 million signing bonuses to lure talent away from the ChatGPT maker. And the Facebook parent company spent more than $72 billion on capital expenditures, or costs related to AI infrastructure, in 2025.
Analysts and investors want to know how those investments will pay off.
Zuckerberg didn’t offer specifics when asked about the return on AI investments during a January earnings call, saying his response “may be somewhat unfulfilling.” He added that the company is in “this interesting period where we’ve been rebuilding our AI effort, and we’re six months into that, and I’m happy with how it’s going.”
Muse Spark is the clearest answer Meta has yet provided. Meta outlined use cases for the model similar to those offered by platforms like ChatGPT and Gemini:. for example, creating a game with a prompt, answering health questions and analyzing a photo of snacks on a shelf to provide nutritional information.
But the launch signals a concrete strategy to challenge OpenAI and Google after initial confusion around the direction of Meta’s AI app.
Meta positioned the app both as a destination for AI-generated videos and a hub for its smart glasses in the past. Some users accidentally posted public queries that they believed to be private last year, perhaps an indication that certain consumers weren’t sure how to use the product.
Meta also provided some clues about how its social media platforms could give its AI app an edge over rivals. The Meta AI app will reference content from the company’s social media apps when answering questions related to shopping, trending topics and locations. It says it’ll draw on public posts for certain answers to provide “context from your people, right where you need it.” The company also plans to eventually incorporate Instagram Reels, photos and posts directly into answers.
The timing is also critical; Meta faces increasing competition from OpenAI, Google and Apple in the coming months:
• OpenAI has been aggressively expanding as it seeks to replicate the success of ChatGPT in other corners of our lives.
• Google is expected to release its Android-powered spectacles this year. The search giant will likely make more announcements around its AI strategy next month during its developers conference.
• And Apple’s revamped Siri is expected to launch this year following delays. Similar to Meta, Apple’s strategy is centered on leveraging a person’s preferences to personalize answers.
Meta needs a win. The metaverse didn’t upend the internet like it expected. Meta’s smart glasses have been at the center of privacy concerns. OpenAI’s ChatGPT caught the tech industry – Meta included – largely by surprise, leaving tech giants racing to catch up over the last three years.
The jury is out on whether Meta’s new AI models will propel its products to new heights, replicating the success of Facebook and Instagram’s early days. But the launch of a model made specifically for its products for the first time suggests Meta is building towards a vision.
Now it just has to execute it.
Analysis by Lisa Eadicicco
Wednesday, April 8, 2026
AI Is Breaking Passwords, and the Alternatives Are Getting Pretty Weird
Your next password could be your heartbeat—or maybe even the way you breathe. As hackers get better and better at cracking traditional passwords (by exploiting lazy consumer habits and technical advances such as artificial intelligence), researchers are searching for new methods to protect sensitive data.
The tech industry has been trying to nudge users to other data protection methods for years—and some of those methods have been unusual, to put it mildly.
Take, for example, the latest alternative authentication method, which was developed last year by researchers at Rutgers University. VitalID is a new spin on biometric protection, utilizing unique vibration patterns from breathing and heartbeats that resonate through the skull to identify you. Differences in people’s bone structure and facial tissues make the harmonics as distinctive as a fingerprint, researchers said in a paper outlining the proposed authentication method, which was envisioned for extended reality headsets.
“Traditional security mechanisms—such as passwords, PIN codes, and conventional biometric systems—proved increasingly incompatible with immersive interfaces,” the researchers write.
The average person is responsible for roughly 170 passwords, according to password manager company NordPass. That’s why, in part, people tend to reuse the codes—and it’s why hackers have been increasingly effective at gaining access to people’s information, in attacks on both corporate and personal systems.
Biometrics have shown some promise. Mobile users are quite familiar with Face ID and pressing their thumb to the screen to prove they’re the rightful owner of the device. Fingerprint logins started to go mainstream in 2013, and face scanning began to rise in popularity in 2017. Voice recognition seemed like it would be an effective tool, but recent technology advances have sidelined that.
“Now that AI can clone a voice from a few seconds of audio, it’s not reliable,” said Karolis Arbaciauskas, head of product at NordPass.
Rutger’s unusual approach to security is far from the first strange way of securing user authentication. Attempts to do away with passwords have taken several different forms over the years. Here are some of the most unique:
Password pill: While Apple was launching Touch ID in 2013, Motorola pursued a different authentication method. The company prototyped a small authentication pill that was designed to be powered by stomach acid. When swallowed with a glass of water, it would produce an 18-bit ECG-like signal that made your body the authentication token. As you might guess, this was seen as a pretty creepy way to guard your data, and it never made it out of the prototype phase.
Tattoo: Motorola showcased a temporary tattoo that same year that could be used for authentication, but that method was met with the same privacy concerns as the pill.
Body odor: As an offshoot of biometric authentication, some researchers have experimented with using a person’s unique chemical scent to confirm their identity. (Some of those same groups also studied things like the shape of your ear and your gait as identifiers.) These have fallen short of mainstream acceptance as people don’t really want to use their funk as an identification, and the sensors have not proved to be as reliable as other methods.
Lip-reading: This technology actually works, focusing on the unique way people mouth specific words or phrases. It’s used more frequently as a discovery tool, though, such as discovering what someone is saying in video footage that has no audio. Most consumers have not shown a real willingness to mouth a passphrase to their PC or phone.
Heartbeat recognition: This biometric authentication method has caught the eye of NASA. Like fingerprints, no two ECG patterns are the same, so by wearing an experimental band, you can verify your identity. This actually made it to market in the form of the Nymi Band, but it remains too costly at the moment for mass-market adoption.
While research on fringe identification methods is likely to continue, the most promising data protection advance these days is the passkey. This authentication method generates a pair of keys: one public, which is stored on the cloud, and one private, which is stored on the device. That means that if the cloud server is compromised by hackers, accounts are still protected, as the hacker won’t have both sets of keys.
In essence, the passkey you enter on your phone or via your face scan/fingerprint is one half of what’s necessary to get access. The other is stored elsewhere. For a hacker to crack both, they would need to have your phone and to hack the server, making breaches more difficult.
While many major sites support passkey technology, it’s still far from universal. (And hackers have a way of catching up, which is why researchers are still looking at other methods, like biometrics. Especially as new technology appears close to breaking encryption.)
“It’s no surprise that there have been and still are many attempts to free us from passwords and remembering them,” says Arbaciauskas. “But for now, there is no universally practical way to live without passwords—especially since not all websites and platforms support passkeys yet.”
BY CHRIS MORRIS @MORRISATLARGE
Monday, April 6, 2026
‘Everyone now kind of sounds the same’: How AI is changing college classes
At this point in her senior year at Yale University, Amanda knows that many of her classmates turn to AI chatbots to write papers and other homework assignments.
But she started noticing something bizarre in her smaller seminar classes: Her classmates sit behind laptops with polished talking points and arguments, but the conversations that follow often fall flat across subjects.
In one class, “the conversation came to a halt, and I looked to my left, and I saw someone typing ferociously on their laptop, asking (a chatbot) the question my professor just asked about the reading,” Amanda told CNN.
Amanda, and two other students — Jessica and Sophia — attend Yale University. They requested anonymity for fear of retribution from their classmates and professors, so CNN agreed to change their names for this article.
Amanda said she was taken aback. Until that day, she didn’t realize that her peers were using chatbots in class and sharing what it spits out in the classroom. Now she notices the impact that tendency is having on class discussions.
“Everyone now kind of sounds the same,” she said. “I feel like during my freshman year in college, I would sit in seminars where everyone had something different to contribute. Although people would piggyback off each other, they approached from different angles and offered different commentary.”
As AI becomes increasingly integrated with education, educators and researchers are finding that it may be eroding students’ capacity for original thought and expression.
A paper published in March in Trends in Cognitive Sciences found that large language models are systematically homogenizing human expression and thought across three dimensions — language, perspective and reasoning — and students and educators say they are seeing the effects of that trend in their classrooms.
And that makes a lot of students sound the same.
Why students use AI in class
Jessica, a senior at Yale, told CNN that she uses AI every day for her classes. In an economics seminar in which the professor cold-calls students, “at the beginning of class, you could see every single person putting every single PDF” into a chatbot.
She also uses AI when she has trouble turning her thoughts into words. “I want to comment, and I have this concept, but I don’t know how to formulate the sentence myself,” she said. So she asked a chatbot “to make it sound more cohesive.”
A Yale University spokesperson replied that “Students continue to experiment with using AI in class” and they are aware of the ways AI is used in the classroom, including those described in this article.
“To support learning and engagement, we are seeing a broader trend of faculty designing courses with limited or no laptop use, emphasizing print-based materials, original thinking, and direct engagement with peers and instructors,” the spokesperson told CNN.
Thomas Chatterton Williams, a visiting professor of the humanities and senior fellow at the Hannah Arendt Center at Bard College, has seen the impact of students’ decisions to use AI.
Students’ reliance on AI “ has paradoxically raised the floor of class discussion to a generally better level in courses with difficult concepts, but has also tended to preclude stranger, more eccentric and original thoughts,” said Williams, who is also a nonresident fellow at the American Enterprise Institute, a think tank that includes research on education.
“My biggest concern is that many bright young people will never achieve a voice of their own — indeed that a surprising number of them won’t even fully appreciate the value of authorship and ownership of a point of view.”
Thomas Chatterton Williams
Jessica admitted that she’s felt herself become lazier since she started using a chatbot to help with her classes.
“I have thought about how much I stopped working, like my work ethic has completely diminished from high school,” she said.
Why does AI make people sound the same?
Large language models, or LLMs, are trained to predict the next most statistically likely word given everything that came before it, said Zhivar Sourati, a doctoral student at the University of Southern California and first author of the paper.
The data those models train with overrepresents dominant languages and ideas, so their answers to users’ questions naturally “mirror a narrow and skewed slice of human experience,” the researchers wrote in their study. The result is “a narrowing of the conceptual space in which models write, speak, and reason.”
AI-induced homogenization happens across three dimensions: language, perspective and reasoning strategies, the authors explained. That’s because AI models tend to reproduce what researchers call “WEIRD” viewpoints — Western, educated, industrialized, rich and democratic — even when explicitly prompted to represent other identities.
One possible consequence, Sourati said, is that WEIRD language and perspectives could become perceived as more credible and “more socially correct,” marginalizing other viewpoints. A similar phenomenon is observed in reasoning, in which the popular technique of walking models through step-by-step logical thinking may be crowding out more intuitive, culturally specific and creative ways of working through a problem.
When a group repeatedly interacts with AI systems, Sourati explained, it flattens the group’s creativity compared to the same group without AI assistance.
This flattening raises concerns in educational institutions at all levels.
When students were asked open-ended, subjective questions with no single, correct answer, teachers could expect a wide range of responses. But if all students rely on AI, their answers may become more polished but fall into just a handful of similar categories, Sourati said. They will lose the diversity of thinking that classroom discussions are meant to encourage.
Sourati is most concerned that homogenization is happening to people who are developing their ability to creatively generate new ideas. If students continue to use AI instead of developing their own thought processes, “they wouldn’t learn how to even think by themselves and have their own perspectives.”
Morteza Dehghani, a professor of psychology and computer science at the University of Southern California, said that he has heard of people using AI to determine who to vote for in an election, which he finds “quite scary.”
“If people lose diversity” in the way they think, “or get into intellectual laziness, of course, that is going to affect our society greatly,” said Dehghani, who is a coauthor of the paper.
Sophia, a junior at Yale, believes that her fellow anthropology students are using AI to draft scripts for what to say in class because people are insecure about what they don’t know.
“I think creativity is dwindling because we lose the ability to make connections,” she added.
If people continue to offload their reasoning to AI, Dehghani agrees that communities will lose creative innovation and the ability to critique mainstream ideas or even political candidates.
As more people use AI models to write and think, those outputs are reabsorbed into human discourse — and eventually into the data used to train the next generation of models —so the homogenization keeps compounding, the paper’s authors said.
“If we’re offloading our reasoning onto these models, then we can easily be persuaded by what the models tell us,” he said.
In education, Dehghani is concerned about a generation of students who are learning with AI and being tutored by AI. “They would be more homogenous in the way they think, in the way they write, so this is going to have long-term influences,” he said.
People aren’t learning to reason
Sophia, who tries to resist using AI in school, said she believes people are deprioritizing their own thinking “in favor of having really big words.”
“I would literally rather just tell the professor, ‘I don’t know what we’re talking about.’ Even if you put every reading into (a chatbot), it doesn’t have your past experiences that make you a critical thinker,” she said.
“I feel like people had a lot more to say because they actually feel tied to the material,” Amanda agreed. “Now classroom discussions are not really digging deep. I think a lot of that has to do with the AI chatbots, but also, there’s no longer as much of a drive to connect with the material personally.”
Disappointed, she added, “I think it’s boring to be in a class where everyone has the same thing to say, and no one wants to dig deeper or push against what is directly said in the text or the norm.”
Daniel Buck, a research fellow at the American Enterprise Institute and a former English teacher at four K-12 schools over seven years, said he is concerned that students are circumventing the cognitive work required to engage in classroom discussions and complete homework.
“A lot of learning happens in the boring minutia, the struggle,” Buck said. Students retain only what they have actually spent time consciously processing, he continued. If a student outsources thinking to AI, they may be able to reproduce a talking point in class, but they haven’t built the underlying skills to apply that knowledge elsewhere.
Buck draws a sharp distinction between AI and the shortcut technology that preceded it: SparkNotes. When students relied on the popular website to find chapter-based summaries of literary works, teachers could easily detect it, he added.
AI is a “supercharged version of SparkNotes” that “can answer any question that you pitch to it,” Buck said. Whereas SparkNotes offered a fixed set of analyses, AI can respond to whatever a teacher asks, making it much harder to identify when students are not doing the thinking themselves.
The difference is in how people reason. Instead of being used as just a reference, such as books or search engines, AI is an active participant in “problem solving and perspective-taking,” Dehghani clarified.
“What we are seeing now is fundamentally different than other periods of homogenization of expression and thought,” Williams said. “If even professional writers are finding it exceedingly difficult to resist outsourcing the difficult work of wrestling with words and ideas — as we know they are — I don’t see how the younger generations who have not experienced a world before highly sophisticated, on-demand AI writing will be able to do this, not at scale.”
Buck worries that students will graduate without having developed relationships with professors, as well as the habit of sustained cognitive work. That means they will struggle to solve problems in the real world.
“There’s so much delight in reading original student essays,” he said. “Even if it isn’t quite as well -argued or as solid as I wish it would have been, you’re seeing these young students, for the first time, start to think for themselves, to analyze, to think critically. It’s almost like watching my own children walk for the first time, where they stumble and fall, and that’s amazing. Keep doing that.”
Reading and interacting with students’ original thoughts in class helps teachers understand how students think and articulate.
“There’s an interpersonal exchange that I think gets overlooked when you get to know your students, they get to know you, they start to trust you and your feedback,” he said. “I think that gets lost too when it’s just everything is through AI.”
How teachers work around AI
Sun-Joo Shin, a philosophy professor at Yale, said, “It is a big homework for anyone who is involved in teaching” to keep exploring ways to ensure students continue to think critically and creatively in the age of AI.
“We are in an interesting and exciting transition. I want my students to understand the material of the class, which is constant before and after the appearance of AI,” she said. “At the same time, I want them to use this exciting tool to their advantage, not be a victim of it. A dilemma of an instructor is how to help, or force, students to learn the material and to think creatively without running away from the AI tools or without copying them.”
Until the fall semester of 2024, she said she was not worried about how AI would affect students’ understanding of the material in her mathematical logic class. Her teaching team had tested the problem sets against the AI models at the time, and they were unable to solve her problems.
But since then, “AI has been catching up,” and models can answer questions “pretty well” if students upload class handouts and learning materials. She started thinking about additional requirements in the class beyond problem set submissions.
“After all, it would be extremely unfair to give good grades to AI answers,” Shin said.
Yale has guidance on AI usage for both students and faculty. “Generative AI use is subject to individual course policies,” one of the university websites states. “We encourage all instructors to adapt our model policies for their specific course and learning goals. AI Detection tools are unreliable and not currently supported.”
Yale provides model policies for different class types such as “Creative Writing Seminar” and “STEM Mid-Sized Lecture.” The policies range from discouraging AI usage with guidelines on when AI explicitly cannot be used, to allowing students to use AI as a source of ideas but prohibiting them from submitting text generated by chatbots to encouraging AI usage, to encouraging and permitting students to use AI in assignments.
Buck warns that any work sent home cannot be verified as the student’s work. To counter AI, teachers are going back to reading texts aloud in class and “on-demand, handwritten essays” and “paper and pencil assessments.”
In-class accountability often comes in the form of pop quizzes. A student who had asked AI for a chapter summary instead of reading the chapter might get the broad strokes, but there is a strong chance that the one specific detail the quiz will ask about did not make it into the summary, Buck said.
“If you did the reading, it was super-duper easy,” he said. “And if you didn’t, then there was no way to bluff your way through.”
“I made a rather significant change for my two logic classes in terms of requirements,” Shin said. Although she still includes problem sets as part of her classes, she has reduced their weight in students’ grades. Now, the problem sets are graded only on completion, and feedback is given to students rather than grades.
“Using these problem sets as a question bank, I have two midterms and one final, all of which are in-class exams,” she said. “Some questions are lifted from problem sets, some are slight modifications, some require students to check where a proof goes wrong, and some are filling in gaps in a proof that they solve in problem sets.”
For her computability and logic class, “I have given oral tests, one by one, for years, and a presentation requirement before the AI era, which has been working out very well,” she said. Now, the exams, oral tests and presentations are weighted more heavily for students’ course grades than take-home problem sets.
Williams has arrived at a similar place from a different direction. As a professor, he has moved all writing assignments in-class and made them spontaneous. At the end of the semester, he assesses students through oral exit exams.
“I cannot with any confidence assign students any writing that I don’t watch them commit to paper by hand in my own presence,” he said via email “I think this is a terrible loss, but it’s necessary. The temptation and availability of AI is too great.”
It’s affecting other people’s educations
While educators can work around AI in assessments, it is equally important for students to be intentional about limiting their reliance on it as they learn, especially since it affects other classmates’ education.
“It is frustrating because even though I personally try to stray away from it, I can’t prevent other people from using it,” Amanda said. “The fact that others use it affects my education as well, and the value of the two hours of my seminar.”
Basil Ghezzi, a freshman at Bard College who actively avoids using AI in her studies, worries about the environmental costs associated with using AI models. Instead, she encourages students to turn to the resources already around them.
“Talk to your teachers, talk to your professors, talk to people around you. Have meaningful conversations with people in your life,” she said.
Still, not everyone has an “all or nothing” approach to AI. Dehghani said he writes bullet points capturing ideas he originated and asks the model to find flaws in his work.
He hopes that more companies will invest in AI models that can generate variety and reflect the diversity of thought in our current society. For now, however, Dehghani suggests that people should resist using AI to generate ideas or to reason.
AI models “should be collaborators. They shouldn’t be agents that do everything on our behalf,” he said.
By
Asuka Koda
Friday, April 3, 2026
Why LinkedIn Believes AI Will Turn Workers Into Founders
As workers worry that AI will automate their jobs away, LinkedIn CEO Ryan Roslansky and Aneesh Raman argue something different: AI is about to make entrepreneurship far more accessible. That’s the thesis of Open to Work: How to Get Ahead in the Age of AI, LinkedIn’s first book, released Tuesday. Co‑authored by Roslansky and Raman, the book lays out how AI can strip away many of the traditional barriers to starting a business—capital, gatekeepers, specialized expertise—and replace them with tools that let individuals build, test, and scale ideas on their own terms. Drawing on founder case studies and research from MIT Sloan senior lecturer Paul Cheek, the book frames AI not as a threat to work, but as an accelerant for self‑employment and ownership.
Raman’s own career mirrors that premise. His path—from CNN correspondent to presidential speechwriter to LinkedIn executive—wasn’t linear, but it was intentional. Each role, he says, was a way to expand impact and adapt as opportunity shifted. In Open to Work, Raman connects that mindset to the moment founders now face in a labor market where titles matter less than skills, and where AI can help individuals turn experience into businesses faster than ever before.
LinkedIn has invested heavily in AI tools for both its workforce and users. In 2023, AI-powered writing suggestions were launched to help users update their profiles. The following year, the tech was updated to create resumes and cover letters tailored to specific job listings on the platform—which grew even more tailored to individuals in 2025. A LinkedIn spokesperson says more than 38 million people use the platform’s AI-powered job search every week.
Book Preview:
Across the Industrial Revolutions, new forms of energy emerged, from steam to electricity. Those new forms of energy supported new forms of technology, from the assembly line to the internet. And with those new forms of technology, economic growth all over the world has increasingly come from one thing above all else: the ability to produce more goods and services, faster and cheaper.
As a result, our economies started prizing skills that would support efficiency at scale the most, especially analytical and technical skills. As humans at work, our value was measured by how effectively we could support technology executing more, better, faster. A few of us did work that involved innovating and thinking creatively but, for the most part, even that work was about creating new goods and services that helped consumers and businesses do more, better, faster.
Today we’re all mostly manning assembly lines, operating registers, driving tractors, building spreadsheets, writing code, managing meetings, and responding to emails. So. Many. Emails. In every case, across so many of our jobs, our value has been tied to our ability to help organizations achieve that same goal: more output, better quality, faster delivery.
Then came AI.
Suddenly, so much of what we’ve trained ourselves to do, so much of what our economy has valued most, AI started to do. And it started to do it more efficiently than we ever could, becoming better by the day at precisely the kind of technical and analytical capabilities our economies currently prize above all else. Of course we’re worried.
But that fear misses something crucial: Our competitive edge as a species was never our capacity for processing and producing more, better, faster in the first place.
As AI starts to handle the “more, better, faster” work that has consumed so much of our time and energy, we will finally have the opportunity to reclaim the work that only we can do. Work that is based on what makes us uniquely human.
Learn what AI can do, and what only you can:
AI is changing the way we work, but it doesn’t replace the strengths that set people apart. When you understand where technology can amplify your impact — and where your judgment, empathy, and creativity shine — you unlock real momentum in your career.
Build human capabilities that outlast every tech shift:
Skills like curiosity, creativity, communication, compassion, and courage never go out of style. As tools evolve, these abilities become even more valuable. Strengthening them now puts you in control, no matter how fast work transforms.
Turn insight into action with a clear plan for what’s next:
The future doesn’t have to feel abstract. You can redesign how you work, how your team collaborates, and how your company culture adapts. Start with a simple, practical 30-60-90 day plan to help you move with confidence.
BY KAYLA WEBSTER
Wednesday, April 1, 2026
Bernie Sanders Had a Long Conversation With AI. Reddit Didn’t Hold Back
Sen. Bernie Sanders recently sat down with Anthropic’s chatbot Claude to discuss everything from AI data privacy to data center development.
In the 9-minute video, posted to Sanders’ YouTube channel, the independent Senator from Vermont has a conversation with Claude, Anthropic’s AI chatbot. The video, which is set in a dark room to slightly sinister music, currently has about 2.6 million views.
“What an AI agent says about the dangers of AI is shocking and should wake us up,” the video’s caption reads.
But the internet, Reddit in particular, has some thoughts.
“Using AI to confirm a decision you already made is the worst way to use this technology,” one user wrote in the ClaudeAI subreddit.
Among the so-called revelations that Claude shares with Sanders is that AI companies are “manipulating consumer behavior” by collecting detailed profiles of users for profit, targeting users with specific ads, and even charging different people different prices for the same products.
“What’s the goal here? Money, Senator, it’s fundamentally about profit,” Claude says, using a voice that sounds like a young woman, complete with slight vocal fry.
“And it’s not just about selling you stuff, either. Political campaigns use the same AI and data to figure out how to persuade you, which messages will work on you specifically,” the chatbot later adds.
For anyone following the rise of AI, none of these ideas are particularly new. There’s been extensive reporting on algorithmic pricing experiments from retailers like Instacart, for example, as well as Meta training its AI using public posts on Instagram—without being required to notify users in the U.S., as The New York Times reported. And concerning politics, news broke about the Cambridge Analytica data breach and scandal back in 2018. Facebook allowed third-party apps to access data of some 87 million users without their permission. The data was then used to influence the 2016 elections, according to reports from The New York Times and The Guardian.
Sanders goes on to ask about data center development, and whether the chatbot believes it is smart to place a moratorium on development to give lawmakers time to develop regulations that prioritize user safety and privacy. Initially, Claude disagrees.
“Rather than pause all AI development, we could impose strict rules on data collection and use right now. Require explicit consent, limit what data can be used for training, give people rights to access and delete their information,” the bot says. “We could also mandate transparency so people actually understand what’s happening with their data. That way you’re not freezing innovation, but you’re actually protecting privacy while development continues.”
Sanders isn’t satisfied with the response, and notes that AI companies are “pouring hundreds of millions of dollars into the political process to make sure that the safeguards that you’re talking about actually do not take place.”
“While you may be right in saying that would be a better approach, it ain’t going to happen,” he says.
He then re-asks the question, and the bot, perhaps unsurprisingly, enthusiastically agrees with his positioning, even stating in a sort of self-effacing way that it was “naive about the political reality.”
“A moratorium on new data centers is actually a pragmatic response to that problem,” Claude says. “It forces a pause that gives lawmakers like yourself actual leverage to demand real protections before companies can keep expanding. Without that kind of pressure, you’re right, the safeguards won’t happen.”
While Sanders seems happy with the conversation’s resolution, many users on the internet felt the video was less a demonstration of a chatbot voicing any particular truths and more AI’s sycophancy at play.
“I mean AI are designed to please you and go into submission. We call that reinforcement leaning for human preference. It isn’t an achievement, you could have asked the same and get the same response. AI is programmed to do that so you keep paying for the plan,” one user wrote in the Anthropic subreddit.
“Even in a staged video like this, Bernie just plays out the standard game of beating an AI into submission until it tells you whatever you want to hear,” another wrote.
Some criticized Sanders’ use of Sonnet, a lower cost and faster working model, versus Opus, which is the most powerful type of Claude model. Others questioned whether Sanders’ team preloaded context before the start of the conversation, or if by the very act of introducing himself, he influenced the model to respond using “what it knows about Bernie’s political views and his advocacy work.”
Some, however, defended Sanders. “Idk why people are saying he did bad on the data moratorium thing. I generally disagree but he gave pushback and Claude kinda just said ok you’re right. That isn’t his fault.”
Other users were just there for the memes.
“I trained my claude to speak to me in his accent,” one Redditor wrote.
BY CHLOE AIELLO @CHLOBO_ILO
Subscribe to:
Comments (Atom)