IMPACT
…building a unique and dynamic generation.
Monday, April 6, 2026
‘Everyone now kind of sounds the same’: How AI is changing college classes
At this point in her senior year at Yale University, Amanda knows that many of her classmates turn to AI chatbots to write papers and other homework assignments.
But she started noticing something bizarre in her smaller seminar classes: Her classmates sit behind laptops with polished talking points and arguments, but the conversations that follow often fall flat, regardless of the subject.
In one class, “the conversation came to a halt, and I looked to my left, and I saw someone typing ferociously on their laptop, asking (a chatbot) the question my professor just asked about the reading,” Amanda told CNN.
Amanda and two other students — Jessica and Sophia — attend Yale University. They requested anonymity for fear of retribution from their classmates and professors, so CNN agreed to change their names for this article.
Amanda said she was taken aback. Until that day, she hadn’t realized that her peers were using chatbots during class and sharing what the bots spit out with the room. Now she notices the impact that habit is having on class discussions.
“Everyone now kind of sounds the same,” she said. “I feel like during my freshman year in college, I would sit in seminars where everyone had something different to contribute. Although people would piggyback off each other, they approached from different angles and offered different commentary.”
As AI becomes increasingly integrated with education, educators and researchers are finding that it may be eroding students’ capacity for original thought and expression.
A paper published in March in Trends in Cognitive Sciences found that large language models are systematically homogenizing human expression and thought across three dimensions — language, perspective and reasoning — and students and educators say they are seeing the effects of that trend in their classrooms.
And that makes a lot of students sound the same.
Why students use AI in class
Jessica, a senior at Yale, told CNN that she uses AI every day for her classes. In an economics seminar in which the professor cold-calls students, “at the beginning of class, you could see every single person putting every single PDF” into a chatbot.
She also uses AI when she has trouble turning her thoughts into words. “I want to comment, and I have this concept, but I don’t know how to formulate the sentence myself,” she said. So she asks a chatbot “to make it sound more cohesive.”
A Yale University spokesperson said that “students continue to experiment with using AI in class” and that the university is aware of the ways AI is being used in the classroom, including those described in this article.
“To support learning and engagement, we are seeing a broader trend of faculty designing courses with limited or no laptop use, emphasizing print-based materials, original thinking, and direct engagement with peers and instructors,” the spokesperson told CNN.
Thomas Chatterton Williams, a visiting professor of the humanities and senior fellow at the Hannah Arendt Center at Bard College, has seen the impact of students’ decisions to use AI.
Students’ reliance on AI “has paradoxically raised the floor of class discussion to a generally better level in courses with difficult concepts, but has also tended to preclude stranger, more eccentric and original thoughts,” said Williams, who is also a nonresident fellow at the American Enterprise Institute, a think tank whose work includes research on education.
“My biggest concern is that many bright young people will never achieve a voice of their own — indeed that a surprising number of them won’t even fully appreciate the value of authorship and ownership of a point of view.”
Thomas Chatterton Williams
Jessica admitted that she’s felt herself become lazier since she started using a chatbot to help with her classes.
“I have thought about how much I stopped working, like my work ethic has completely diminished from high school,” she said.
Why does AI make people sound the same?
Large language models, or LLMs, are trained to predict the next most statistically likely word given everything that came before it, said Zhivar Sourati, a doctoral student at the University of Southern California and first author of the paper.
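For readers who want to see that mechanism concretely, here is a minimal sketch of next-token prediction using the open-source Hugging Face transformers library and the small GPT-2 model. (The model choice is ours, for illustration; the study does not single out any particular system.)

    # A minimal sketch of next-token prediction, the operation Sourati describes.
    # GPT-2 is used here only because it is small and freely downloadable.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    prompt = "The students in the seminar all"
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids

    with torch.no_grad():
        logits = model(input_ids).logits  # a score for every vocabulary token

    # The model's "answer" is a probability distribution over the next word,
    # shaped entirely by the statistics of its training data.
    probs = torch.softmax(logits[0, -1], dim=-1)
    top = torch.topk(probs, 5)
    for p, token_id in zip(top.values, top.indices):
        print(f"{tokenizer.decode(token_id)!r}: {p:.3f}")

Every response a chatbot produces is built by repeating that single step, which is why its output gravitates toward whatever its training data made most probable.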
The data those models are trained on overrepresents dominant languages and ideas, so their answers to users’ questions naturally “mirror a narrow and skewed slice of human experience,” the researchers wrote in their study. The result is “a narrowing of the conceptual space in which models write, speak, and reason.”
AI-induced homogenization happens across three dimensions: language, perspective and reasoning strategies, the authors explained. That’s because AI models tend to reproduce what researchers call “WEIRD” viewpoints — Western, educated, industrialized, rich and democratic — even when explicitly prompted to represent other identities.
One possible consequence, Sourati said, is that WEIRD language and perspectives could become perceived as more credible and “more socially correct,” marginalizing other viewpoints. A similar phenomenon is observed in reasoning, in which the popular technique of walking models through step-by-step logical thinking may be crowding out more intuitive, culturally specific and creative ways of working through a problem.
When a group repeatedly interacts with AI systems, Sourati explained, it flattens the group’s creativity compared to the same group without AI assistance.
This flattening raises concerns in educational institutions at all levels.
When students are asked open-ended, subjective questions with no single correct answer, teachers can expect a wide range of responses. But if all students rely on AI, their answers may become more polished yet fall into just a handful of similar categories, Sourati said. The class loses the diversity of thinking that discussions are meant to encourage.
Sourati is most concerned that homogenization is happening to people who are developing their ability to creatively generate new ideas. If students continue to use AI instead of developing their own thought processes, “they wouldn’t learn how to even think by themselves and have their own perspectives.”
Morteza Dehghani, a professor of psychology and computer science at the University of Southern California, said that he has heard of people using AI to determine who to vote for in an election, which he finds “quite scary.”
“If people lose diversity” in the way they think, “or get into intellectual laziness, of course, that is going to affect our society greatly,” said Dehghani, who is a coauthor of the paper.
Sophia, a junior at Yale, believes that her fellow anthropology students are using AI to draft scripts for what to say in class because people are insecure about what they don’t know.
“I think creativity is dwindling because we lose the ability to make connections,” she added.
If people continue to offload their reasoning to AI, Dehghani agrees that communities will lose creative innovation and the ability to critique mainstream ideas or even political candidates.
As more people use AI models to write and think, those outputs are reabsorbed into human discourse — and eventually into the data used to train the next generation of models — so the homogenization keeps compounding, the paper’s authors said.
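A toy simulation (our construction, not from the paper) illustrates the compounding: if each “generation” of a model is fit only to a finite sample of the previous generation’s outputs, the spread of what gets produced tends to shrink and is never restored.

    # Toy illustration of compounding homogenization across model generations.
    # Each generation is "trained" (here, a simple Gaussian fit) on a finite
    # sample of the previous generation's outputs.
    import numpy as np

    rng = np.random.default_rng(0)
    mu, sigma = 0.0, 1.0   # generation 0: a wide variety of "expression"
    n_samples = 50         # finite data drawn from each generation

    for generation in range(1, 11):
        samples = rng.normal(mu, sigma, n_samples)  # previous model's outputs
        mu, sigma = samples.mean(), samples.std()   # next model imitates them
        print(f"generation {generation}: spread = {sigma:.3f}")
    # The spread tends to drift downward: variety lost to sampling is never
    # regained, so later generations produce an ever-narrower range.

Real models are vastly more complex, but the directional pressure the authors describe is the same: each round of imitation discards some of the variety that came before.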
“If we’re offloading our reasoning onto these models, then we can easily be persuaded by what the models tell us,” Dehghani said.
In education, Dehghani is concerned about a generation of students who are learning with AI and being tutored by AI. “They would be more homogenous in the way they think, in the way they write, so this is going to have long-term influences,” he said.
People aren’t learning to reason
Sophia, who tries to resist using AI in school, said she believes people are deprioritizing their own thinking “in favor of having really big words.”
“I would literally rather just tell the professor, ‘I don’t know what we’re talking about.’ Even if you put every reading into (a chatbot), it doesn’t have your past experiences that make you a critical thinker,” she said.
“I feel like people had a lot more to say because they actually feel tied to the material,” Amanda agreed. “Now classroom discussions are not really digging deep. I think a lot of that has to do with the AI chatbots, but also, there’s no longer as much of a drive to connect with the material personally.”
Disappointed, she added, “I think it’s boring to be in a class where everyone has the same thing to say, and no one wants to dig deeper or push against what is directly said in the text or the norm.”
Daniel Buck, a research fellow at the American Enterprise Institute and a former English teacher at four K-12 schools over seven years, said he is concerned that students are circumventing the cognitive work required to engage in classroom discussions and complete homework.
“A lot of learning happens in the boring minutia, the struggle,” Buck said. Students retain only what they have actually spent time consciously processing, he continued. If a student outsources thinking to AI, they may be able to reproduce a talking point in class, but they haven’t built the underlying skills to apply that knowledge elsewhere.
Buck draws a sharp distinction between AI and the shortcut technology that preceded it: SparkNotes. When students relied on the popular website to find chapter-based summaries of literary works, teachers could easily detect it, he added.
AI is a “supercharged version of SparkNotes” that “can answer any question that you pitch to it,” Buck said. Whereas SparkNotes offered a fixed set of analyses, AI can respond to whatever a teacher asks, making it much harder to identify when students are not doing the thinking themselves.
The difference is in how people reason. Rather than serving merely as a reference, the way books or search engines do, AI becomes an active participant in “problem solving and perspective-taking,” Dehghani explained.
“What we are seeing now is fundamentally different than other periods of homogenization of expression and thought,” Williams said. “If even professional writers are finding it exceedingly difficult to resist outsourcing the difficult work of wrestling with words and ideas — as we know they are — I don’t see how the younger generations who have not experienced a world before highly sophisticated, on-demand AI writing will be able to do this, not at scale.”
Buck worries that students will graduate without having developed relationships with professors or the habit of sustained cognitive work. As a result, he said, they will struggle to solve problems in the real world.
“There’s so much delight in reading original student essays,” he said. “Even if it isn’t quite as well-argued or as solid as I wish it would have been, you’re seeing these young students, for the first time, start to think for themselves, to analyze, to think critically. It’s almost like watching my own children walk for the first time, where they stumble and fall, and that’s amazing. Keep doing that.”
Reading and engaging with students’ original thoughts in class helps teachers understand how students think and express themselves.
“There’s an interpersonal exchange that I think gets overlooked when you get to know your students, they get to know you, they start to trust you and your feedback,” he said. “I think that gets lost too when it’s just everything is through AI.”
How teachers work around AI
Sun-Joo Shin, a philosophy professor at Yale, said, “It is a big homework for anyone who is involved in teaching” to keep exploring ways to ensure students continue to think critically and creatively in the age of AI.
“We are in an interesting and exciting transition. I want my students to understand the material of the class, which is constant before and after the appearance of AI,” she said. “At the same time, I want them to use this exciting tool to their advantage, not be a victim of it. A dilemma of an instructor is how to help, or force, students to learn the material and to think creatively without running away from the AI tools or without copying them.”
Until the fall semester of 2024, she said, she was not worried about how AI would affect students’ understanding of the material in her mathematical logic class. Her teaching team had tested the problem sets against the AI models available at the time, and the models were unable to solve them.
But since then, “AI has been catching up,” and models can answer questions “pretty well” if students upload class handouts and learning materials. She started thinking about additional requirements in the class beyond problem set submissions.
“After all, it would be extremely unfair to give good grades to AI answers,” Shin said.
Yale has guidance on AI usage for both students and faculty. “Generative AI use is subject to individual course policies,” one of the university websites states. “We encourage all instructors to adapt our model policies for their specific course and learning goals. AI Detection tools are unreliable and not currently supported.”
Yale provides model policies for different class types, such as “Creative Writing Seminar” and “STEM Mid-Sized Lecture.” The policies range from discouraging AI use, with explicit guidelines on when it cannot be used; to allowing students to use AI as a source of ideas while prohibiting them from submitting chatbot-generated text; to encouraging and permitting students to use AI in assignments.
Buck warns that any work sent home cannot be verified as the student’s own. To counter AI, teachers are going back to reading texts aloud in class, assigning “on-demand, handwritten essays” and giving “paper and pencil assessments.”
In-class accountability often comes in the form of pop quizzes. A student who had asked AI for a chapter summary instead of reading the chapter might get the broad strokes, but there is a strong chance that the one specific detail the quiz will ask about did not make it into the summary, Buck said.
“If you did the reading, it was super-duper easy,” he said. “And if you didn’t, then there was no way to bluff your way through.”
“I made a rather significant change for my two logic classes in terms of requirements,” Shin said. Although she still includes problem sets in her classes, she has reduced their weight in students’ grades. Now the problem sets are graded on completion only, and students receive feedback rather than grades.
“Using these problem sets as a question bank, I have two midterms and one final, all of which are in-class exams,” she said. “Some questions are lifted from problem sets, some are slight modifications, some require students to check where a proof goes wrong, and some are filling in gaps in a proof that they solve in problem sets.”
For her computability and logic class, “I have given oral tests, one by one, for years, and a presentation requirement before the AI era, which has been working out very well,” she said. Now, the exams, oral tests and presentations are weighted more heavily for students’ course grades than take-home problem sets.
Williams has arrived at a similar place from a different direction. As a professor, he has moved all writing assignments into class and made them spontaneous. At the end of the semester, he assesses students through oral exit exams.
“I cannot with any confidence assign students any writing that I don’t watch them commit to paper by hand in my own presence,” he said via email. “I think this is a terrible loss, but it’s necessary. The temptation and availability of AI is too great.”
It’s affecting other people’s educations
While educators can work around AI in assessments, it is equally important for students to be intentional about limiting their own reliance on it as they learn, especially because one student’s use of AI affects classmates’ education too.
“It is frustrating because even though I personally try to stray away from it, I can’t prevent other people from using it,” Amanda said. “The fact that others use it affects my education as well, and the value of the two hours of my seminar.”
Basil Ghezzi, a freshman at Bard College who actively avoids using AI in her studies, worries about the environmental costs associated with using AI models. Instead, she encourages students to turn to the resources already around them.
“Talk to your teachers, talk to your professors, talk to people around you. Have meaningful conversations with people in your life,” she said.
Still, not everyone has an “all or nothing” approach to AI. Dehghani said he writes bullet points capturing ideas he originated and asks the model to find flaws in his work.
He hopes that more companies will invest in AI models that can generate variety and reflect the diversity of thought in our current society. For now, however, Dehghani suggests that people should resist using AI to generate ideas or to reason.
AI models “should be collaborators. They shouldn’t be agents that do everything on our behalf,” he said.
BY ASUKA KODA
Friday, April 3, 2026
Why LinkedIn Believes AI Will Turn Workers Into Founders
As workers worry that AI will automate their jobs away, LinkedIn CEO Ryan Roslansky and Aneesh Raman argue something different: AI is about to make entrepreneurship far more accessible. That’s the thesis of Open to Work: How to Get Ahead in the Age of AI, LinkedIn’s first book, released Tuesday. Co‑authored by Roslansky and Raman, the book lays out how AI can strip away many of the traditional barriers to starting a business—capital, gatekeepers, specialized expertise—and replace them with tools that let individuals build, test, and scale ideas on their own terms. Drawing on founder case studies and research from MIT Sloan senior lecturer Paul Cheek, the book frames AI not as a threat to work, but as an accelerant for self‑employment and ownership.
Raman’s own career mirrors that premise. His path—from CNN correspondent to presidential speechwriter to LinkedIn executive—wasn’t linear, but it was intentional. Each role, he says, was a way to expand impact and adapt as opportunity shifted. In Open to Work, Raman connects that mindset to the moment founders now face in a labor market where titles matter less than skills, and where AI can help individuals turn experience into businesses faster than ever before.
LinkedIn has invested heavily in AI tools for both its workforce and users. In 2023, AI-powered writing suggestions were launched to help users update their profiles. The following year, the tech was updated to create resumes and cover letters tailored to specific job listings on the platform—which grew even more tailored to individuals in 2025. A LinkedIn spokesperson says more than 38 million people use the platform’s AI-powered job search every week.
Book Preview:
Across the Industrial Revolutions, new forms of energy emerged, from steam to electricity. Those new forms of energy supported new forms of technology, from the assembly line to the internet. And with those new forms of technology, economic growth all over the world has increasingly come from one thing above all else: the ability to produce more goods and services, faster and cheaper.
As a result, our economies started prizing skills that would support efficiency at scale the most, especially analytical and technical skills. As humans at work, our value was measured by how effectively we could support technology executing more, better, faster. A few of us did work that involved innovating and thinking creatively but, for the most part, even that work was about creating new goods and services that helped consumers and businesses do more, better, faster.
Today we’re all mostly manning assembly lines, operating registers, driving tractors, building spreadsheets, writing code, managing meetings, and responding to emails. So. Many. Emails. In every case, across so many of our jobs, our value has been tied to our ability to help organizations achieve that same goal: more output, better quality, faster delivery.
Then came AI.
Suddenly, so much of what we’ve trained ourselves to do, so much of what our economy has valued most, AI started to do. And it started to do it more efficiently than we ever could, becoming better by the day at precisely the kind of technical and analytical capabilities our economies currently prize above all else. Of course we’re worried.
But that fear misses something crucial: Our competitive edge as a species was never our capacity for processing and producing more, better, faster in the first place.
As AI starts to handle the “more, better, faster” work that has consumed so much of our time and energy, we will finally have the opportunity to reclaim the work that only we can do. Work that is based on what makes us uniquely human.
Learn what AI can do, and what only you can:
AI is changing the way we work, but it doesn’t replace the strengths that set people apart. When you understand where technology can amplify your impact — and where your judgment, empathy, and creativity shine — you unlock real momentum in your career.
Build human capabilities that outlast every tech shift:
Skills like curiosity, creativity, communication, compassion, and courage never go out of style. As tools evolve, these abilities become even more valuable. Strengthening them now puts you in control, no matter how fast work transforms.
Turn insight into action with a clear plan for what’s next:
The future doesn’t have to feel abstract. You can redesign how you work, how your team collaborates, and how your company culture adapts. Start with a simple, practical 30-60-90 day plan to help you move with confidence.
BY KAYLA WEBSTER
Wednesday, April 1, 2026
Bernie Sanders Had a Long Conversation With AI. Reddit Didn’t Hold Back
Sen. Bernie Sanders recently sat down with Anthropic’s chatbot Claude to discuss everything from AI data privacy to data center development.
In the nine-minute video, posted to Sanders’ YouTube channel, the independent senator from Vermont converses with the chatbot. The video, set in a dark room and scored with slightly sinister music, has about 2.6 million views.
“What an AI agent says about the dangers of AI is shocking and should wake us up,” the video’s caption reads.
But the internet, Reddit in particular, has some thoughts.
“Using AI to confirm a decision you already made is the worst way to use this technology,” one user wrote in the ClaudeAI subreddit.
Among the so-called revelations that Claude shares with Sanders is that AI companies are “manipulating consumer behavior” by collecting detailed profiles of users for profit, targeting users with specific ads, and even charging different people different prices for the same products.
“What’s the goal here? Money, Senator, it’s fundamentally about profit,” Claude says, using a voice that sounds like a young woman, complete with slight vocal fry.
“And it’s not just about selling you stuff, either. Political campaigns use the same AI and data to figure out how to persuade you, which messages will work on you specifically,” the chatbot later adds.
For anyone following the rise of AI, none of these ideas are particularly new. There’s been extensive reporting on algorithmic pricing experiments from retailers like Instacart, for example, as well as on Meta training its AI using public Instagram posts without being required to notify users in the U.S., as The New York Times reported. And on the politics front, news of the Cambridge Analytica scandal broke back in 2018: Facebook had allowed a third-party app to harvest data from some 87 million users without their permission, and that data was then used in efforts to influence the 2016 US presidential election, according to reports from The New York Times and The Guardian.
Sanders goes on to ask about data center development, and whether the chatbot believes it is smart to place a moratorium on development to give lawmakers time to develop regulations that prioritize user safety and privacy. Initially, Claude disagrees.
“Rather than pause all AI development, we could impose strict rules on data collection and use right now. Require explicit consent, limit what data can be used for training, give people rights to access and delete their information,” the bot says. “We could also mandate transparency so people actually understand what’s happening with their data. That way you’re not freezing innovation, but you’re actually protecting privacy while development continues.”
Sanders isn’t satisfied with the response, and notes that AI companies are “pouring hundreds of millions of dollars into the political process to make sure that the safeguards that you’re talking about actually do not take place.”
“While you may be right in saying that would be a better approach, it ain’t going to happen,” he says.
He then re-asks the question, and the bot, perhaps unsurprisingly, enthusiastically agrees with his positioning, even stating in a sort of self-effacing way that it was “naive about the political reality.”
“A moratorium on new data centers is actually a pragmatic response to that problem,” Claude says. “It forces a pause that gives lawmakers like yourself actual leverage to demand real protections before companies can keep expanding. Without that kind of pressure, you’re right, the safeguards won’t happen.”
While Sanders seems happy with the conversation’s resolution, many users on the internet felt the video was less a demonstration of a chatbot voicing any particular truths and more AI’s sycophancy at play.
“I mean AI are designed to please you and go into submission. We call that reinforcement leaning for human preference. It isn’t an achievement, you could have asked the same and get the same response. AI is programmed to do that so you keep paying for the plan,” one user wrote in the Anthropic subreddit.
“Even in a staged video like this, Bernie just plays out the standard game of beating an AI into submission until it tells you whatever you want to hear,” another wrote.
Some criticized Sanders’ use of Sonnet, a lower-cost, faster Claude model, rather than Opus, the most powerful model in the Claude family. Others questioned whether Sanders’ team preloaded context before the conversation began, or whether, simply by introducing himself, he nudged the model to respond using “what it knows about Bernie’s political views and his advocacy work.”
Some, however, defended Sanders. “Idk why people are saying he did bad on the data moratorium thing. I generally disagree but he gave pushback and Claude kinda just said ok you’re right. That isn’t his fault.”
Other users were just there for the memes.
“I trained my claude to speak to me in his accent,” one Redditor wrote.
BY CHLOE AIELLO @CHLOBO_ILO
Monday, March 30, 2026
How AI Automation Is Quietly De-Skilling White-Collar Workers
Most white-collar jobs are defined by tasks that feel routine and unglamorous: drafting minutes from meetings, reconciling conflicting data, cleaning up document citations, and proofreading slides until the grammar is perfect. Historically, these tasks were just part of the job, but they were also training.
When an analyst painstakingly formats a dataset or a junior consultant irons out a proposal deck, they’re internalizing standards of quality, precision, and structure. They’re learning how to spot nuance and how to communicate clearly. Every minute spent wrestling with these tasks builds tacit knowledge—the kind that separates an average worker from a confident, capable one.
The problem with AI automation
When AI begins to automate these “boring” assignments, workers risk losing the subtle muscle memory that once grounded professional judgment. This mirrors what automation researchers have long documented in other fields: when pilots rely too much on autopilot, their manual flying skills degrade; when workers offload routine decisions to algorithms, their ability to catch nuanced problems weakens.
Research also suggests that when people rely heavily on AI to complete unfamiliar tasks, they don’t build the underlying conceptual understanding needed to supervise, troubleshoot, or improve. In controlled studies, learners who delegated work to AI performed worse on deeper conceptual measures than those who engaged directly with the task.
For white-collar workers, whose judgment, pattern recognition, strategic thinking, and professional intuition are core to long-term success, this is not a trivial problem. If AI handles the routine drafting of a client memo, the junior lawyer who merely reviews it may never develop a feel for legal argument structure. If an analyst lets AI mass-produce charts, she may never learn to detect the anomalies that matter.
De-Skilling
This phenomenon extends beyond individuals to entire professions. Economists call it de-skilling: the process by which skilled labor becomes de-professionalized as technology substitutes for human expertise. In white-collar contexts, automation tools can reduce complex tasks to standardized checkboxes that require minimal judgment, lowering the bar for entry and weakening the leverage of human capital.
When a white-collar professional uses AI to generate the first draft of a report or a compliance checklist, the draft is faster and possibly more polished, but it’s also a step removed from the worker’s own reasoning. That speed can mask the loss of diagnostic capability—the ability to notice when something feels off. For instance, an AI-generated slide deck riddled with misaligned arguments or an AI-generated financial report with a subtle assumption error may slip by because no one “felt” a discrepancy.
A call to work with intent
That doesn’t mean resisting AI. It can free you from drudgery and allow you to focus on higher-order thinking—strategy, relationships, creativity, and judgment. The problem isn’t AI itself; it’s unreflective dependence on it.
The professionals who will thrive in this era will be those who use AI intentionally to augment their thinking, not replace it. These are the professionals who will treat routine outputs as drafts to be interrogated. They will challenge themselves with complex questions that AI cannot answer without human context. They will use AI as a mirror, not a crutch.
Ultimately, the future of white-collar work isn’t about preserving every skill from the pre-AI era. It’s about retaining and deepening the skills that matter most when many routine tasks vanish—strategic thinking, ethical judgment, emotional intelligence, and the ability to navigate ambiguity.
In the rush to automate, speed and output will rise. However, without intentional engagement, capability and depth may quietly erode. That’s a trend worth noting and a trade worth debating.
EXPERT OPINION BY ANDREA OLSON, CEO, PRAGMADIK @PRAGMADIK