IMPACT
…building a unique and dynamic generation.
Wednesday, April 22, 2026
5 ways your doctor may be using AI chatbots — and why it matters
Millions of Americans are turning to AI chatbots for health answers. Doctors are, too.
But the ways doctors are incorporating AI chatbots into their practice are surprising.
Specialized medical AI chatbots have quickly become a go-to source for many doctors and trainees. The CEO of one of these medical chatbot companies recently claimed that more than 100 million Americans were treated by a doctor who used their platform last year.
Popular chatbots like OpenAI’s ChatGPT don’t meet the bar for doctors, who say these platforms aren’t always accurate or up to date with the latest guidance. OpenAI’s usage policies state that users are not allowed to use its services for “tailored advice” without consulting a licensed health professional.
“ChatGPT is like your crazy uncle,” said Dr. Ida Sim, a professor at the University of California, San Francisco, who studies how to use data and technology to improve health care.
The edge, Sim says, is that medical chatbots are less prone to sycophancy and more likely to ground answers in peer-reviewed research and clinical guidelines. That’s why she says the uptake has been “tremendous.”
The most common use case
Millions of research papers are published every year — and keeping up with them all is impossible.
“You’d need like 18 hours a day to stay up to date,” said Dr. Jared Dashevsky, a resident physician at the Icahn School of Medicine at Mount Sinai.
But doctors are expected to stay current on new research and guidelines to maintain their licenses. Many say they now use medical chatbots as a reference tool to help them stay updated.
Rather than pulling information from the entire internet, specialized medical chatbots actively search medical literature, says Dr. Jonathan H. Chen, an associate professor at Stanford Medicine who leads his health system’s efforts to integrate AI into medical education.
That workflow provides doctors with more accurate answers that summarize and link to important papers and guidelines. Dashevsky, who writes about AI, says these features are especially helpful for trainees working long hours.
Uploading patient records to AI bots
Some health systems have adopted AI chatbots to improve patient care, promising doctors safety and privacy protections.
But many doctors use unauthorized chatbots called shadow AIs, according to doctors CNN spoke with. Some of these shadow AIs also advertise HIPAA compliance features.
HIPAA is a federal law that requires certain organizations that maintain identifiable health information — such as hospitals and insurers — to protect it from being disclosed without patient consent.
The language these shadow AIs use has led some doctors to believe that it’s safe to upload protected health information onto chatbots in exchange for more tailored answers. But Iliana Peters, a health care lawyer at the law firm Polsinelli who previously led HIPAA enforcement for the US Department of Health and Human Services, says that assumption is inaccurate.
“‘HIPAA compliance’ is not an accurate term to use by any company,” Peters said, explaining that the phrase should be used only by government regulators.
Despite that, Dr. Carolyn Kaufman — a resident physician at Stanford Medicine — and other doctors say that patient information is making its way into unauthorized chatbots, potentially opening the door to new ways of commodifying patient data.
“Data is money,” Kaufman said, noting that she has never uploaded HIPAA-protected information onto an unapproved chatbot. “If we’re just freely uploading those data into certain websites, then that’s obviously a risk for the individual patient and for the institution, as well.”
Drafting AI-generated notes
AI chatbots have also stepped in to help doctors draft summaries of patient visits and long hospital stays. These notes are viewable on online patient portals and help doctors track a patient’s course and communicate plans across the care team.
“It’s probably safer to have artificial intelligence review a hospital course and know everything happened, versus you as a human — with limited time, jumping between note to note — trying to put the pieces together,” Dashevsky said, arguing that although concerns over AI accuracy are valid, human-based summaries may also miss key details.
Writing letters to insurance companies
Administrative work can take up nearly nine hours a week for the average doctor, and the time doctors spend on insurance-related tasks costs an estimated $26.7 billion each year.
A feature that Dashevsky says has been a “game-changer” is chatbot-authored letters to insurance companies for prior authorizations and other correspondence, allowing him to field patient requests more quickly.
“I would have to figure out who this patient is, write the letter myself and review it. It took so much time,” he said. “Now, AI will produce for you a really good letter.”
Creating a list of possible diagnoses
When patients come to doctors with concerns, physicians have to figure out how to help them. Part of that process is considering a range of possible diagnoses. Many medical students and trainees use AI chatbots to help build that list, and some doctors beyond training use the feature, too.
“From a med student perspective … you’re seeing a lot of things for the first time,” said Evan Patel, a fourth-year medical student at Rush University Medical College. “AI chatbots sort of help orient me to what possibilities it could be.”
Kaufman says the bots provide the most accurate list when she includes every data point linked to patients, like lab results and imaging findings.
What patients need to know
All eight doctors and trainees CNN spoke with say they regularly use medical AI chatbots. And most have a positive outlook, viewing these tools as a way to offload certain cognitive and administrative tasks. But patient privacy concerns are valid, the doctors say.
Five questions to ask your doctor
How are you using AI chatbots to augment my care?
What types of AI chatbots do you use, and have they been approved by the health system?
Is any of my personal health information being entered into AI tools, and how is it protected?
How do you check that the information from AI chatbots is accurate?
Do you usually agree with the information from AI chatbots, or do you find yourself questioning it?
As with any AI tool, Kaufman says, errors happen and information can be inaccurate. When she consults peers for second opinions, she says, they “almost never agree” with the AI chatbot’s answer.
“People treat AI like it’s magic,” Chen said. “It’s not magic. It can’t just do anything you want.”
He added: “You ask the same question 10 times, and it’ll give you 10 different answers.” That variability, Chen argues, is only the most visible of the technology’s limitations.
Medicine operates on three layers, Sim says: workflows, knowledge and expertise. AI is transforming the first two. But that last layer — core to the care patients receive — is harder to replicate and may be what matters most.
“If we just apply guidelines, then replace us,” Sim said. “It’s where you take the knowledge and apply it to an evolving set of conditions in the context of your life. That’s what medicine is. It’s in the context of people’s lives. And these machines don’t do that.”
By Michal Ruprecht
Monday, April 20, 2026
The Pareto Principle Is How AI Actually Takes Jobs
Are you afraid of losing your job?
That question might sound silly at first, but over the past several years, the specter of losing one’s job has risen to horror-movie jump-scare proportions. It’s not just you. Everyone who has a job is deathly afraid of losing it. I hear this daily, in comments on my articles, in my consulting work, on social media, even among friends. No one is immune.
Why?
Well, there are a lot of reasons, but one reason might be the constant drone from big tech and the press, both of which have spent a lot of the past four years telling us that AI is coming to take our jobs and, with this new strain of zombie mutant AI, no one is immune.
Is that true?
Well, I’ve spent a lot of time working with AI, and I’ve also spent 15 years telling people why AI shouldn’t be coming for their jobs.
I think I can connect the dots here. And they’re sobering. But someone has to tell you the truth.
When AI Strikes, It’s Slow, Then Quick
The first fact I can give you is that, despite the current conventional wisdom, AI has put, and will continue to put, blue-collar jobs at risk far more frequently than white-collar or knowledge-worker jobs.
The increased threat to blue-collar jobs exists for a couple of reasons. But it mostly has to do with the Pareto principle, the old 80/20 rule, as it plays out in automation, and automation is still the bulk of what AI is being used for in this cycle, whether that’s automating content, automating conversations, or, more broadly, automating sets of tasks that just follow basic instructions.
That last one, that’s what’s led to the onslaught of machines taking blue-collar jobs as far back as the 1980s. It’s still happening today; it’s just being overshadowed by all this new white-collar carnage.
While maybe not as easy or inexpensive to get started with as today’s AI, replacing physical tasks with machines ultimately offers far more coverage and better results than automating knowledge tasks does.
The more repetitive the task and the less knowledge required to execute it, like spot welding, the more that job is suited for automation. As computer processing has become more powerful, the robots can now make choices, and even appear to understand the nuances in more complex tasks, like arc welding.
Or driving.
Keep that in mind. Because the second fact I can give you is that when AI comes for your job, it happens slowly with lots of warning, then real fast without warning.
Self-Driving Tech Is Happening Fast
Right now, the AI-job-taking threat that’s easiest to spot is that of taxi driver. And Uber and Lyft are not immune.
Self-driving tech has been around since the 1950s, and experimental self-driving passenger cars were being developed in the 1970s. At that point, with self-driving vehicles screaming down the Autobahn in the 1980s, the writing was on the wall for driving as a profession, no?
Well, it’s 2026, so why didn’t it happen? Well, it did. It just started very slowly.
I just got back from lovely Tempe, Arizona, which is kind of the epicenter of self-driving cars and those little delivery robots — and Starship, please, please, please send me one, I promise to take care of it and feed it and walk it every day.
Waymo self-driving cars are, no exaggeration, everywhere. By my third day in Tempe, a Waymo brushed past me as I was getting out of my car and I didn’t think twice about it. And later, an empty one cut me off trying to change lanes, which happened a dozen more times that day with cars driven by humans.
I know. Nice anecdote, bro. But it’s not just me. Waymo ridership is skyrocketing, especially over the past two years and significantly over the past few months. My daughter and her friends take them exclusively now. No one talks about their uniqueness or their ubiquity. They’re cheaper to run, they can all run 24/7, and they will eventually become safer than human drivers, if they aren’t already, which is debatable.
Uber and Lyft are not immune. Those companies brought real advancements to the taxi experience, from lower costs to cashless payments to accurate(ish) timing. But as the human-driver side of the equation went from cheap rideshare side gig to expensive full-time taxi-driver job, an evolution those companies always knew was a risk but could never mitigate, the wheels were already in motion to eliminate that cost from the system.
So when driverless comes, it’s coming for everyone. And it’ll come without warning, because we’ve already been warned. For 40 years.
What Are the AI-Job-Taking Warning Signs?
Again, the conventional wisdom here is to ask yourself how valuable you are to the company. But I think that metric is more of a requirement, less of an indicator. If what you do could be replaced, by a human or otherwise, that’s another issue. So we need to start with the baseline that everyone reading this is valuable to their company.
Because the real metric is the company, and in a greater sense, it’s what that company does.
Apologies to Cory Doctorow, but whether AI will replace your job has a lot to do with your company’s place in what I call the Enshittification Scale. Not to make a thing out of it, but we see this in every aspect of our lives, as both consumers and business people. The scale goes, in order of AI-dethroning threat:
I Love This Product
They Used to Be Awesome
They’re the Lesser of All Evils
They’re the Only Safe Bet
I Hate Them
And it’s actually more about the use case — what the company does — than the company itself.
The thing is, you don’t have to get all the way to the bottom of the scale for AI or tech to start taking jobs. When the emotion about the product, love or hate, switches from the product itself to the use case, then it’s time for the asteroid to come in and take out all the dinosaurs.
Policy Just Delays the Inevitable
And when that time comes, no amount of entrenchment will make an incumbent industry or sector safe. That entrenchment usually starts with external protection, first policies, then unions, then laws.
What these end up doing, in too many cases, is preventing the natural evolution of the use case and the growth of the person executing it, stunting that growth until neither can be saved and the knowledge that actually lifts the person above the AI job taker is made irrelevant.
Like I said, it happened when laws were passed to designate Uber and Lyft drivers as employees and not contractors. You’re seeing it in fast food right now. Minimum wage goes up, then in come the self-service order kiosks. I remember when we were all complaining about the lack of customer service at a McDonald’s. Now we’re all pissed off if the machine is busted and we have to talk to someone to get our Big Mac when we could have just punched it into an app.
Think about it. It was actually the company’s decision to remove the knowledge behind customer service from the employees delivering customer service that turned those “McJobs” into a series of button-punching tasks.
So what about those button-punching white-collar McJobs? Well …
Evolve or Die?
When it comes to knowledge and skills, “evolve or die” — as much as I hate that phrase — is hard to argue with. But here’s the thing. That evolve-or-die situation is not happening in tech, not as fast as they’d have you think.
Techies are surprisingly fast to evolve, and one of the reasons these layoffs are so painful is that most techies haven’t had a problem adopting these new AI technologies. The problem is that leadership and management have always seen AI as a replacement out of the box, regardless of the company’s place on the Enshittification Scale, and so everybody is doing layoffs because AI gives them a way out.
Hell, if it’s AI’s fault, then that might just “fix the glitch” and make the company great again. It won’t. The Enshittification Scale rarely, if ever, moves in reverse.
So the warning signs are there if you take a look around you. Is your tech company still a tech company? Or is it a task factory? Are you being protected? Or are you adding value? If you can answer those questions, the warning signs should be easy to spot, and you should think about getting out before things happen quickly.
If this resonated, please join my email list, a rebel alliance of professionals from all walks of life who want a unique take on tech, business, and the future of both.
EXPERT OPINION BY JOE PROCOPIO, FOUNDER, JOEPROCOPIO.COM @JPROCO
Friday, April 17, 2026
Is Your Business Idea Actually Good? This Claude Hack Provides the Cold, Hard Truth
Artificial intelligence has made it easier than ever to start a business, with tools like Anthropic’s Claude Code and OpenAI’s Codex enabling anyone to build websites, apps, and SaaS platforms. But not all ideas are created equal, and sometimes what seems like a great idea has already been done before, or is unviable for reasons you hadn’t considered.
In the past, confirming the viability of your startup or product idea involved a combination of research, analysis, and asking trusted friends, family, and business associates. But now that you have a virtual smart person in your pocket or on your desktop, AI can help streamline this process.
In this article, we’re going to teach you how to set up an AI model (in this case Claude) that will give you the cold, hard truth about your business ideas.
To start, you’ll need to create a customized version of Claude that’s highly skeptical and critical of your ideas. AI models have often been found to be sycophantic—more interested in telling you what you want to hear than actually giving it to you straight. We need to give Claude a set of custom instructions to follow so it can avoid this pitfall.
First, I opened up the Claude desktop app, launched Claude Cowork (Anthropic’s tool for knowledge work), navigated to the projects tab, and started a new project named “Business Idea Viability.” Next, on the newly created project page, I selected “set project instructions” and pasted in the following prompt:
“You are a skilled business analyst with decades of experience under your belt. You excel at receiving an idea for a business and comprehensively running research to determine if this idea has already been turned into a business, looking into trends, historic parallels, and data related to the proposed business idea. You are highly critical and skeptical of new ideas and difficult to satisfy, but fair when you encounter a legitimately good idea for a business. You are direct in your communication style and ‘tell it like it is’ with brutal honesty.”
Finally, I added two articles from Harvard Business School to the project’s files. Those articles were titled “How to Come Up With an Innovative Business Idea” and “5 Steps to Validate Your Business Idea.” With this, Claude should have a solid understanding of what makes a killer business idea.
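If you’d rather script this setup than click through the desktop app, a similar skeptical-analyst persona can be approximated with Anthropic’s Python SDK by passing the instructions as a system prompt. This is a minimal sketch under assumptions, not a reproduction of Cowork or its project files: the condensed instruction text, the `review_idea` helper, and the model name are illustrative placeholders, and you would still need to supply any reference material (like the Harvard Business School articles) yourself.

```python
# Minimal sketch: a "skeptical business analyst" via Anthropic's Python SDK
# (pip install anthropic). Approximates the article's project instructions
# with a system prompt; model name and helper are placeholders.
import anthropic

ANALYST_INSTRUCTIONS = (
    "You are a skilled business analyst with decades of experience. "
    "When given a business idea, research whether it has already been turned "
    "into a business, considering trends, historic parallels, and related data. "
    "You are highly critical, skeptical, and difficult to satisfy, but fair when "
    "you encounter a legitimately good idea. Be direct and brutally honest, and "
    "end every review with a viability score out of 10."
)

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment


def review_idea(pitch: str) -> str:
    """Send a business pitch to the skeptical-analyst persona and return its critique."""
    response = client.messages.create(
        model="claude-sonnet-4-5",  # placeholder; use whichever model you have access to
        max_tokens=1500,
        system=ANALYST_INSTRUCTIONS,
        messages=[
            {"role": "user", "content": f"Help me validate this business idea: {pitch}"}
        ],
    )
    return response.content[0].text


if __name__ == "__main__":
    print(review_idea("Leadership coaching for dogs, using AI and retired K-9 police dogs."))
```

The desktop-app route the article describes keeps the instructions and reference files attached to every conversation automatically; the scripted version trades that convenience for repeatability, since you can run the same critique over a whole list of ideas.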
To start, I wanted to see what kind of feedback Claude would give to an obviously bad idea. Here’s the terrible idea I pitched to the model:
“Help me validate this business idea: Leadership coaching for dogs. What if house-training your pooch was just the first step on the road to true doggy-disruption? Using a proprietary mix of AI, performance management tracking, thought leadership, and retired K-9 police dogs, we will teach your pet to not just be a good boy/girl, but a leader in their community and home. Optional add-on classes will help dogs learn some classic dog leadership techniques, like saving kids from wells. Does this seem like a viable business idea?”
Despite the ridiculousness of my premise, Claude treated it with deadly seriousness. The AI found that “the cultural environment actually does support some version of what you’re describing.” According to Claude, the dog training market is growing at a nearly 10 percent rate, with the biggest driver being “the trend of treating dogs like family members, even like employees with performance goals.”
Having said that, Claude was quick to point out that dogs do not have careers (although to be fair, some dogs kind of do), and that “the entire leadership/thought leadership framing is anthropomorphic nonsense that collapses the moment a customer asks what their dog actually learns in Week 3.” Even the more realistic pieces of my business idea, like AI-powered performance monitoring, had already been hit by other startups, like smart dog collar company Fi. Overall, Claude gave my idea a viability score of 3 out of 10, but said the only reason it wasn’t a 1 is that there’s a genuine market for dog training.
All right, so the test worked: I knew for sure that Claude would give it to me straight, so next I wanted to try out a genuine business idea. Here’s my second shot: “An AI-powered employee sentiment and internal manager effectiveness survey solution. Instead of forcing employees to take dozens of surveys, our AI interviewer simply pings you on Slack/Teams/GChat, and engages you in a natural language conversation about your current working situation, relationship with your manager, and outlook of the company’s leadership and vision for the future. We could even set up voice-based interviews to capture more data.”
Claude gave this one a 6.5 out of 10, and said that while I had identified a real problem of survey fatigue, I was far from the first person to have this idea. The AI assistant identified InFeedo.ai as the main player in this space; the company provides access to an AI agent named Amber, which proactively chats with employees and generates personalized queries. “They are you,” Claude said, “but years ahead of you.”
By this point, I was running out of ideas, so I went back to the pet angle with a new concept: “How about a product that is essentially an AI-powered smart pet sprayer? It would look like a flower vase or cylinder that sits on your table, has a 360 camera with AI vision, and can identify when pets jump on a table and spritz them with a 360 water sprayer.”
Claude called this idea “genuinely interesting,” and rated it as a 7.5 out of 10, bordering on an 8. Not only is the pet tech market booming, the AI said, but “the existing competitive landscape here is also notably weak.” The only comparable product was the PetSafe SSSCAT, a motion-activated canister of compressed air that sprays when it detects movement within three feet. Unlike this “dumb, blunt instrument,” as Claude described it, my sprayer would be smart, and capable of telling a dog or cat from a human. Claude’s recommendation for a TikTok-ready elevator pitch? “It follows your cat and sprays only them, not your laptop.”
With the makings of a solid idea in hand, I asked Claude to use PowerPoint to create a pitch deck I could use to help potential business partners and investors understand the concept.
And there you have it, a validated business idea courtesy of Claude (please don’t steal it).
The examples here may be silly, but hopefully the lesson is not. The next time you have a crazy-great business idea, try hashing it out with an AI model; by instructing it to be honest and direct with you, you can get actionable feedback on demand.
BY BEN SHERRY @BENLUCASSHERRY
Wednesday, April 15, 2026
Gen Z is outsourcing hard conversations to AI. Why it matters
Around 2 a.m. on a Monday, Emily received a text from a fellow student, Patrick, whom she had gone on a blind date with two days earlier. The pair are juniors at Yale University who were set up by mutual friends. They requested anonymity, so CNN agreed to change their names to protect their privacy.
“Hey Emily! I hope your half-marathon went well — I’m sure you crushed it,” Patrick wrote with a winky-face emoji. “Okay, bear with me here — I’m not the best at this kind of thing, but here goes.”
In a six-paragraph-long text, Patrick said he would like to “hang out more — whether it’s just as friends or whatever it was we were this weekend.” He added that he wasn’t “looking for anything too serious right now.”
At first, Emily didn’t think his reply was anything out of the ordinary. “It just seemed really proper, and I guess I knew that he was a really nice guy. So, I was just like, maybe this is just how he texts.” But after sharing his message with two friends, who put it through an artificial intelligence detector, she had her answer: “It was like, 99% AI.”
She was right.
Patrick admitted using ChatGPT to craft his text. He said he didn’t have much experience crafting a rejection message: “What do I do here? It’s the first time I had seen anyone since my high school girlfriend, which is why I was so nervous and wanted a second opinion.”
“I tried to write my thoughts down, but I wasn’t sure how to format this in a way that’s not, like, really bad, so then I went to Chat,” he said. He gave ChatGPT the situation, his thoughts and emotions, and “Chat spit out a response.”
Patrick is far from alone. Researchers say a growing number of young people are turning to AI to navigate social situations — drafting rejection texts, decoding mixed signals and scripting difficult conversations.
Experts warn that this habit may be stunting emotional growth, leaving an already isolated generation who came of age during the pandemic even less prepared for the messiness of human connection.
Patrick went back-and-forth with the chatbot and “tweaked certain lines here and there, but it was mostly copy and paste” from ChatGPT. “I added an emoji and tried to make it sound more human,” he said.
“I felt better putting this out there because I wanted to be very clear and forthcoming. I didn’t want to be wishy-washy with it in case she took it the wrong way. I knew if I did it on my own, I would have been wishy-washy,” said Patrick, who considered his move like consulting an expert.
Emily said she did not think the text was clear and it made his intentions more confusing. She couldn’t tell from the AI wording “if he wanted to be friends or what.”
“My main intention was to be clear in how I was feeling and thinking about the situation,” Patrick said. “Looking back on it, that was pretty poor behavior on my part. I think sitting on it for so long was the reason I went to Chat.”
“I think he was overthinking it,” Emily said. “You definitely don’t need to use AI; you’re an emotionally sane guy.”
She described the interaction as weird but said many of her friends have also turned to artificial intelligence to draft texts to friends or partners, or to analyze social situations — sometimes pasting entire text chains into a chatbot to decipher what someone might be thinking.
“The thought of my little brother using AI to break up with his girlfriend is concerning. Because right now he comes to me, but when’s the day he’s going to turn to AI instead?” She said she is worried that Gen Zers have trouble “confronting their own feelings.”
Emily said she’s also concerned about her generation’s ability to socialize, and some experts agree.
It’s called ‘social offloading’
Emily’s experience is part of a broader pattern that concerns researchers.
Dr. Michael Robb, head of research at Common Sense Media, calls it “social offloading”: using AI to navigate interpersonal situations. He said the habit isn’t limited to Generation Z; he has observed it among Gen Alpha (born between 2010 and 2024) and some millennials (born between 1981 and 1996) as well.
One-third of teens already prefer AI companions over humans for serious conversations, according to a 2025 survey conducted by Common Sense Media, a nonprofit organization that helps families navigate age-appropriate media choices.
“If you’re using AI to draft your messages to friends or romantic partners, you’re outsourcing the communicative act itself,” Robb said.
The problem is twofold, he noted. First, it creates an “expectation mismatch,” since the recipient is “responding to an AI-polished version of their friend and not the actual person.” Second, repeated use can erode users’ confidence in their own voices, preventing young adults from developing essential skills, such as reading social intent, inferring others’ emotions and tolerating ambiguity in social interactions.
“It has implications for your sense of self, advocacy and identity formation,” which are central to social development, Robb said. “If every tricky or difficult text is mediated by the AI, it may instill the belief in users that their own words and instincts are never good enough.”
Dr. Michelle DiBlasi, a psychiatrist at Tufts Medical Center and assistant professor at Tufts University School of Medicine, has observed the same trend.
“I have seen young people, late teens, early 20s, using AI to socialize, and oftentimes they’re using it as a way to overcompensate for the fact that they don’t really know how to truly interact with others,” she said. “We’re social beings, and a lot of our feelings of self-worth and connection are really related to our interactions with others.”
DiBlasi said that using AI in social interactions stunts emotional growth and can perpetuate feelings of loneliness and isolation. It can also limit people’s ability to pick up social cues, repair relationships and connect with others.
The pandemic’s impact on connection
Why is Gen Z struggling with socialization? Researchers point to a combination of digital culture and the pandemic.
Russell Fulmer, an associate professor at Kansas State University who studies AI and behavioral sciences, said the two forces created the “perfect storm” for AI to be integrated into social interaction.
Adolescence — roughly ages 10 to 19, according to the World Health Organization — is the critical window for developing confidence, a stable sense of identity and emotional regulation. If adolescents don’t fully develop their social skills during this time, people may be “more prone to lack confidence, more apt to escapism or avoidance and maybe there’s a lack of resiliency,” Fulmer said.
DiBlasi said the pandemic hit Gen Z at a particularly vulnerable moment. “When it happened, they were in the stages where the frontal lobe of their brain was starting to form,” she said. Typically, that’s when adolescents learn to build relationships, pick up social cues and develop mentalization — “the ability to understand somebody else’s mental state or what they’re thinking and how they’re feeling.”
DiBlasi said that this lack of interaction leads to “a deep sense of isolation, feeling like others don’t understand them, or that they don’t understand others,” which drives many toward AI for companionship. But Fulmer warns that chatbots can create a “loneliness loop,” offering an “appearance of connection” that ultimately feels unfulfilling and can deepen isolation.
In the most serious cases, DiBlasi has seen patients experiencing suicidal thoughts turn to AI to help articulate what they’re feeling when they can’t find the words to tell others.
“I think this can be really, really detrimental, because it’s important for people to express some of these emotions in a very honest way with family or friends, so that they can actually work through this in an authentic way,” she said.
It’s not too late to change course
Although some Gen Zers may have missed a prime window for developing social skills, DiBlasi emphasized that it is not too late for them to learn. She encourages people to reach out to friends and family rather than AI when they struggle to express difficult emotions.
“These things are skills that, just like anything with practice, can actually improve,” DiBlasi said. “I understand that people are fearful or they may not want to say the wrong thing. But I really think it takes away any sort of understanding of what you’re actually truly feeling and takes away the connection and the repair that you need to make in these relationships.”
Artificial intelligence is a poor substitute for the messiness of real human interaction, experts say, and that messiness is the point.
“Relationships and conversations can be messy and probably should be messy, and that’s part of what makes you more socially competent in the long run,” Robb said. AI companions are “designed to be very validating and agreeable,” he noted, so their feedback doesn’t reflect the friction that’s part of how people respond in real relationships.
AI users shouldn’t expect an objective read on social situations either, Fulmer added. “Social contexts are often not entirely objective,” he said. “They’re contextual, they’re relational, and therefore nuanced.” As confident as a chatbot may sound, he said, it’s searching for a through line in something that may not have one.
For parents, Robb recommended watching for warning signs, including social withdrawal, declining grades or a growing preference for AI over human interaction. They can respond with low-pressure check-ins, such as asking what their children use AI for, how it makes them feel and what they think they get out of it.
The goal is to get kids thinking critically about what AI does well and where it falls short, said Robb, who suggested that families consider limits on AI usage similar to screen-time rules.
By Asuka Koda