IMPACT
..building a unique and dynamic generation.
Friday, April 3, 2026
Why LinkedIn Believes AI Will Turn Workers Into Founders
As workers worry that AI will automate their jobs away, LinkedIn CEO Ryan Roslansky and Aneesh Raman argue something different: AI is about to make entrepreneurship far more accessible. That’s the thesis of Open to Work: How to Get Ahead in the Age of AI, LinkedIn’s first book, released Tuesday. Co‑authored by Roslansky and Raman, the book lays out how AI can strip away many of the traditional barriers to starting a business—capital, gatekeepers, specialized expertise—and replace them with tools that let individuals build, test, and scale ideas on their own terms. Drawing on founder case studies and research from MIT Sloan senior lecturer Paul Cheek, the book frames AI not as a threat to work, but as an accelerant for self‑employment and ownership.
Raman’s own career mirrors that premise. His path—from CNN correspondent to presidential speechwriter to LinkedIn executive—wasn’t linear, but it was intentional. Each role, he says, was a way to expand impact and adapt as opportunity shifted. In Open to Work, Raman connects that mindset to the moment founders now face in a labor market where titles matter less than skills, and where AI can help individuals turn experience into businesses faster than ever before.
LinkedIn has invested heavily in AI tools for both its workforce and users. In 2023, AI-powered writing suggestions were launched to help users update their profiles. The following year, the tech was updated to create resumes and cover letters tailored to specific job listings on the platform—which grew even more tailored to individuals in 2025. A LinkedIn spokesperson says more than 38 million people use the platform’s AI-powered job search every week.
Book Preview:
Across the Industrial Revolutions, new forms of energy emerged, from steam to electricity. Those new forms of energy supported new forms of technology, from the assembly line to the internet. And with those new forms of technology, economic growth all over the world has increasingly come from one thing above all else: the ability to produce more goods and services, faster and cheaper.
As a result, our economies started prizing skills that would support efficiency at scale the most, especially analytical and technical skills. As humans at work, our value was measured by how effectively we could support technology executing more, better, faster. A few of us did work that involved innovating and thinking creatively but, for the most part, even that work was about creating new goods and services that helped consumers and businesses do more, better, faster.
Today we’re all mostly manning assembly lines, operating registers, driving tractors, building spreadsheets, writing code, managing meetings, and responding to emails. So. Many. Emails. In every case, across so many of our jobs, our value has been tied to our ability to help organizations achieve that same goal: more output, better quality, faster delivery.
Then came AI.
Suddenly, so much of what we’ve trained ourselves to do, so much of what our economy has valued most, AI started to do. And it started to do it more efficiently than we ever could, becoming better by the day at precisely the kind of technical and analytical capabilities our economies currently prize above all else. Of course we’re worried.
But that fear misses something crucial: Our competitive edge as a species was never our capacity for processing and producing more, better, faster in the first place.
As AI starts to handle the “more, better, faster” work that has consumed so much of our time and energy, we will finally have the opportunity to reclaim the work that only we can do. Work that is based on what makes us uniquely human.
Learn what AI can do, and what only you can:
AI is changing the way we work, but it doesn’t replace the strengths that set people apart. When you understand where technology can amplify your impact — and where your judgment, empathy, and creativity shine — you unlock real momentum in your career.
Build human capabilities that outlast every tech shift:
Skills like curiosity, creativity, communication, compassion, and courage never go out of style. As tools evolve, these abilities become even more valuable. Strengthening them now puts you in control, no matter how fast work transforms.
Turn insight into action with a clear plan for what’s next:
The future doesn’t have to feel abstract. You can redesign how you work, how your team collaborates, and how your company culture adapts. Start with a simple, practical 30-60-90 day plan to help you move with confidence.
BY KAYLA WEBSTER
Wednesday, April 1, 2026
Bernie Sanders Had a Long Conversation With AI. Reddit Didn’t Hold Back
Sen. Bernie Sanders recently sat down with Anthropic’s chatbot Claude to discuss everything from AI data privacy to data center development.
In the 9-minute video, posted to Sanders’ YouTube channel, the independent Senator from Vermont has a conversation with Claude, Anthropic’s AI chatbot. The video, which is set in a dark room to slightly sinister music, currently has about 2.6 million views.
“What an AI agent says about the dangers of AI is shocking and should wake us up,” the video’s caption reads.
But the internet, Reddit in particular, has some thoughts.
“Using AI to confirm a decision you already made is the worst way to use this technology,” one user wrote in the ClaudeAI subreddit.
Among the so-called revelations that Claude shares with Sanders is that AI companies are “manipulating consumer behavior” by collecting detailed profiles of users for profit, targeting users with specific ads, and even charging different people different prices for the same products.
“What’s the goal here? Money, Senator, it’s fundamentally about profit,” Claude says, using a voice that sounds like a young woman, complete with slight vocal fry.
“And it’s not just about selling you stuff, either. Political campaigns use the same AI and data to figure out how to persuade you, which messages will work on you specifically,” the chatbot later adds.
For anyone following the rise of AI, none of these ideas are particularly new. There’s been extensive reporting on algorithmic pricing experiments from retailers like Instacart, for example, as well as Meta training its AI using public posts on Instagram—without being required to notify users in the U.S., as The New York Times reported. And concerning politics, news broke about the Cambridge Analytica data breach and scandal back in 2018. Facebook allowed third-party apps to access data of some 87 million users without their permission. The data was then used to influence the 2016 elections, according to reports from The New York Times and The Guardian.
Sanders goes on to ask about data center development, and whether the chatbot believes it is smart to place a moratorium on development to give lawmakers time to develop regulations that prioritize user safety and privacy. Initially, Claude disagrees.
“Rather than pause all AI development, we could impose strict rules on data collection and use right now. Require explicit consent, limit what data can be used for training, give people rights to access and delete their information,” the bot says. “We could also mandate transparency so people actually understand what’s happening with their data. That way you’re not freezing innovation, but you’re actually protecting privacy while development continues.”
Sanders isn’t satisfied with the response, and notes that AI companies are “pouring hundreds of millions of dollars into the political process to make sure that the safeguards that you’re talking about actually do not take place.”
“While you may be right in saying that would be a better approach, it ain’t going to happen,” he says.
He then re-asks the question, and the bot, perhaps unsurprisingly, enthusiastically agrees with his positioning, even stating in a sort of self-effacing way that it was “naive about the political reality.”
“A moratorium on new data centers is actually a pragmatic response to that problem,” Claude says. “It forces a pause that gives lawmakers like yourself actual leverage to demand real protections before companies can keep expanding. Without that kind of pressure, you’re right, the safeguards won’t happen.”
While Sanders seems happy with the conversation’s resolution, many users on the internet felt the video was less a demonstration of a chatbot voicing any particular truths and more AI’s sycophancy at play.
“I mean AI are designed to please you and go into submission. We call that reinforcement leaning for human preference. It isn’t an achievement, you could have asked the same and get the same response. AI is programmed to do that so you keep paying for the plan,” one user wrote in the Anthropic subreddit.
“Even in a staged video like this, Bernie just plays out the standard game of beating an AI into submission until it tells you whatever you want to hear,” another wrote.
Some criticized Sanders’ use of Sonnet, a lower cost and faster working model, versus Opus, which is the most powerful type of Claude model. Others questioned whether Sanders’ team preloaded context before the start of the conversation, or if by the very act of introducing himself, he influenced the model to respond using “what it knows about Bernie’s political views and his advocacy work.”
Some, however, defended Sanders. “Idk why people are saying he did bad on the data moratorium thing. I generally disagree but he gave pushback and Claude kinda just said ok you’re right. That isn’t his fault.”
Other users were just there for the memes.
“I trained my claude to speak to me in his accent,” one Redditor wrote.
BY CHLOE AIELLO @CHLOBO_ILO
Monday, March 30, 2026
How AI Automation Is Quietly De-Skilling White-Collar Workers
Most white-collar jobs are defined by tasks that feel routine and unglamorous. Drafting minutes from meetings, reconciling conflicting data, cleaning up document citations, and proofreading slides until the grammar is perfect. Historically, these tasks were just a part of the job, but they were also training.
When an analyst painstakingly formats a dataset or a junior consultant irons out a proposal deck, they’re internalizing standards of quality, precision, and structure. They’re learning how to spot nuance and how to communicate clearly. Every minute spent wrestling with these tasks builds tacit knowledge—the kind that separates an average worker from a confident, capable one.
The problem with AI automation
When AI begins to automate these “boring” assignments, there is risk of losing the subtle muscle memory that once grounded professional judgment. This mirrors what automation researchers have long documented in other fields. When pilots rely too much on autopilot, their manual flying skills degrade. When workers offload routine decisions to algorithms, their ability to catch nuanced problems weakens.
Research also suggests that when people rely heavily on AI to complete unfamiliar tasks, they don’t build the underlying conceptual understanding needed to supervise, troubleshoot, or improve. In controlled studies, learners who delegated work to AI performed worse on deeper conceptual measures than those who engaged directly with the task.
For white-collar workers, where judgment, pattern recognition, strategic thinking, and professional intuition are core to long-term success, this is not a trivial problem. If AI completes the routine drafting of a client memo, the worker who consumes it may never develop a feel for legal argument structure. If an analyst lets AI mass-produce charts, she may never learn how to detect anomalies that matter.
De-Skilling
This phenomenon extends beyond individuals to affect entire professions. Economists call it de-skilling—the process by which normally skilled labor becomes de-professionalized when technology substitutes for human expertise. In white-collar contexts, automation tools can reframe complex tasks into standardized checkboxes that require minimal judgment, lowering the bar for entry and weakening the leverage of human capital.
When a white-collar professional uses AI to generate the first draft of a report or a compliance checklist, the draft is faster and possibly more polished, but it’s also a step removed from the worker’s own reasoning. That speed can mask the loss of diagnostic capability—the ability to notice when something feels off. For instance, an AI-generated slide deck riddled with misaligned arguments or an AI-generated financial report with a subtle assumption error may slip by because no one “felt” a discrepancy.
A call to work with intent
That doesn’t mean resisting AI. It can free you from drudgery and allow you to focus on higher-order thinking—strategy, relationships, creativity, and judgment. The problem isn’t AI itself; it’s unreflective dependence on it.
The professionals who will thrive in this era will be those who use AI intentionally to augment their thinking, not replace it. These are the professionals who will treat routine outputs as drafts to be interrogated. They will challenge themselves with complex questions that AI cannot answer without human context. They will use AI as a mirror, not a crutch.
Ultimately, the future of white-collar work isn’t about preserving every skill from the pre-AI era. It’s about retaining and deepening the skills that matter most when many routine tasks vanish—strategic thinking, ethical judgment, emotional intelligence, and the ability to navigate ambiguity.
In the rush to automate, speed and output will rise. However, without intentional engagement, capability and depth may quietly erode. That’s a trend worth noting and a trade worth debating.
EXPERT OPINION BY ANDREA OLSON, CEO, PRAGMADIK @PRAGMADIK
Friday, March 27, 2026
Skills Every Project Manager Needs to Lead in Artificial Intelligence
Artificial intelligence is redefining what it means to be a successful project manager and transforming how projects are delivered. Discover the key skills—from data literacy and agile delivery to trustworthy AI practices—that will help you lead AI projects responsibly and with confidence.
Build the skills to lead AI projects with confidence
In many industries, artificial intelligence is becoming a key driver of innovation. From intelligent automation to customer-facing applications, AI initiatives are reshaping the kinds of projects organizations pursue—and the skills project managers need to deliver them successfully. As these projects become more common and complex, the demand for AI-savvy project managers is growing fast.
Managing AI projects draws on the same core strengths—technical insight, strategic thinking, and adaptability—that define great project management. It also calls for additional fluency in data, AI concepts, and delivery models built for rapid iteration and change.
This article outlines the top skills for AI project managers, including essential skills for artificial intelligence success in a leadership role. Whether you’re already managing AI projects or looking to grow into the role, these capabilities are essential to your success.
The unique nature of AI projects
Before exploring the skills, it’s important to understand what makes AI projects different. These differences explain why even experienced project professionals often encounter new challenges—and why successful delivery of AI projects benefits from building on core project management strengths while developing new skills tailored to this space. Here's what makes these projects unique:
Data-centric foundations: Unlike traditional software projects, AI initiatives are built around data—not static rules or code. This makes data governance—including quality, availability, and security—central to success.
Iterative development cycles: AI models require continual retraining, evaluation, and updates. There's rarely a fixed endpoint, which means project managers must lead projects that evolve as insights emerge.
Unclear or shifting goals: Many AI initiatives begin with exploratory objectives. Project managers need to lead teams toward outcomes that may not be fully defined from day one.
Context-sensitive results: AI systems often behave differently based on the input or environment. For example, a model might perform well in one region but poorly in another.
Sensitive to change over time: Even subtle shifts in data volume, type, or quality can cause AI outputs to vary—sometimes unpredictably. Continuous monitoring is key.
Trust as a requirement: AI can affect people in unintended ways. Building trustworthy AI means addressing all its key layers—ethical, responsible, transparent, governed, and explainable—throughout the project lifecycle.
These characteristics elevate the importance of specialized skills for AI project managers.
The top artificial intelligence skills for project managers
Mastering AI project management starts with developing the right mix of technical fluency, communication savvy, and ethical foresight. These seven skills will help you lead complex, fast-moving AI initiatives with confidence.
1. Data literacy and awareness
AI project managers don’t need to be data scientists, but they do need a solid understanding of how data works. This includes:
Knowing how data is sourced, labeled, and cleaned
Understanding data quality and bias
Collaborating effectively with data engineers and data scientists
The better your grasp of the data, the better you can scope, prioritize, and de-risk your project.
2. Critical thinking and problem solving
AI initiatives operate in environments of constant change. Project managers need to stay nimble and make decisions quickly as new information emerges including being able to:
Analyze evolving model results
Make judgment calls when performance degrades
Pivot quickly when data reveals new insights
You’re not just managing a plan—you’re constantly reassessing what’s possible and what’s working.
3. Trustworthy AI practices
Trust and accountability are not optional. Project managers play a key role in making sure ethical considerations are embedded throughout the project lifecycle:
Spot ethical risks (e.g., bias, lack of transparency)
Facilitate discussions on fairness and accountability
Incorporate ethical review checkpoints in the project lifecycle
In short: trust isn’t a feature. It’s a necessity.
4. Communication across technical and business teams
AI teams are often composed of specialists who speak different “languages”—data scientists, engineers, legal, product, and line of business. Project managers should act as connectors and translators between these groups to promote shared understanding and alignment:
Bridge communication between technical and business teams
Set realistic expectations with stakeholders
Ensure alignment across cross-functional contributors
5. Agile and iterative delivery for AI projects
While not every AI project uses Scrum or Kanban, nearly all require short cycles, frequent testing, and continuous refinement. AI project managers should be comfortable with:
Managing evolving scope
Prioritizing iterations based on learning
Balancing experimentation with business timelines
6. Understanding AI technologies and lifecycle
AI project managers don’t need to build models themselves—but they do need to understand the typical development process and what’s required at each stage:
Problem definition
Data collection and preparation
Model training and evaluation
Operationalization and monitoring
The PMI Certified Professional in Managing AI (PMI-CPMAI™) certification methodology provides a structured approach.
7. Tool proficiency and hands-on project management
From managing datasets in collaboration tools to tracking experiments, AI projects benefit from:
Project management tools that support data workflows
Basic understanding of version control and pipeline management
Comfort with rapid documentation and tracking
Conclusion
AI projects challenge familiar ways of working, but they also offer an exciting opportunity for project professionals to expand their expertise.
By building the right skills—from data literacy to ethical leadership and more—you’ll be better prepared to guide your teams through the unique demands of artificial intelligence projects and deliver results that are trustworthy, valuable, and aligned with business needs.
By Ron Schmelzer and Kathleen Walch
Subscribe to:
Comments (Atom)