IMPACT
…building a unique and dynamic generation.
Wednesday, March 11, 2026
Bad News for Your Burner Account: AI Is Surprisingly Effective at Identifying the Person Behind One
It’s not uncommon for people to have anonymous or burner accounts in their online activities for a variety of reasons. A new study, though, shows why you might want to be as careful posting from those accounts as you would from one that uses your real name, since they might not hide your identity as well as you think.
A recently released research paper found that artificial intelligence is surprisingly effective at figuring out who’s behind those false-name accounts. Large language models, the study found, can extract identity signals (data points or behaviors used to identify, verify, or categorize individuals) from a user’s posts and search for matching data elsewhere, significantly outperforming existing deanonymization methods.
The study successfully deanonymized 68 percent of the users in its trial data set, and it did so with 90 percent precision, meaning that when the model named the person behind an account, it was correct nine times out of ten.
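The paper’s actual pipeline isn’t reproduced here, but the core intuition behind matching a pseudonymous account to a known author can be sketched in a few lines: represent writing samples as word-frequency vectors and compare them by cosine similarity. Everything below (the sample texts, the `best_match` helper) is invented for illustration; real attacks use far richer LLM-derived embeddings and behavioral signals.

```python
# Toy stylometric matching, NOT the paper's method: match an anonymous
# post to the known author whose vocabulary it most resembles.
import math
from collections import Counter

def text_vector(text):
    """Lowercased word-frequency vector for a piece of text."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse Counter vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def best_match(anon_post, known_authors):
    """Return (best author, all scores) for an anonymous post."""
    anon = text_vector(anon_post)
    scores = {name: cosine(anon, text_vector(sample))
              for name, sample in known_authors.items()}
    return max(scores, key=scores.get), scores

known = {
    "alice": "shipping the quarterly report today, metrics look great honestly",
    "bob":   "lol the refs totally blew that call, absolute garbage game",
}
author, scores = best_match("honestly the metrics in that report look great", known)
print(author)  # alice, who shares far more vocabulary with the post
```

Even this crude version hints at why pseudonymity is fragile: distinctive vocabulary and phrasing leak identity, and modern embedding models pick up far subtler signals than shared words.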
“Our findings have significant implications for online privacy,” the researchers, who were based at ETH Zurich, a public university in Zurich, Switzerland, and MATS, an independent research and educational program, wrote. “The average online user has long operated under an implicit threat model where they have assumed pseudonymity provides adequate protection because targeted deanonymization would require extensive effort. LLMs invalidate this assumption.”
Anthropic also contributed to the study.
The finding that pseudonymous content can be fairly easily unmasked by AI has implications far beyond burner accounts and social media, of course. The technique could be a powerful tool for hackers, and it could make it easier for companies to track down employees who leak corporate information or to dig into who is asking questions in open forums.
It could also prove embarrassing for leaders who use burner accounts to pump up their businesses or covertly settle online scores with rivals. Casey Bloys, chairman and CEO of HBO and Max Content at Warner Bros. Discovery, admitted in 2023 that he had fake social media accounts he used to troll critics about network programming (later admitting that was a “dumb idea”). Elon Musk has confirmed in a court deposition that he has used them in the past. And Barstool Sports was accused in 2023 of using more than 40 accounts to promote its content and help it go viral.
Users hoping to keep their identity private or vulnerable members of society who depend on privacy (e.g., whistleblowers, activists, or abuse survivors) could also be identified. A slightly deeper dive by the AI could also determine where those people live, their occupation (and estimated income level), and more.
To protect against that, the researchers proposed several mitigations, including having platforms enforce rate limits on API access to user data, improve detection of automated scraping, and restrict bulk data exports. That said, they acknowledge that preventing AI from being used to unmask users who are deliberately obscuring their identity will become increasingly challenging in the months and years to come.
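The first of those mitigations, rate limiting, is a well-understood control. As a rough illustration (my sketch, not anything from the paper), a token-bucket limiter caps how quickly any one client can pull user data, blunting bulk profile scraping; real platforms would use distributed limiters rather than a single in-process object like this:

```python
# Minimal token-bucket rate limiter: requests pass while tokens remain,
# and tokens refill continuously at a fixed rate.
import time

class TokenBucket:
    def __init__(self, rate, capacity):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        """Refill based on elapsed time, then spend one token if available."""
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=2, capacity=5)  # e.g., 2 profile fetches/sec, bursts of 5
results = [bucket.allow() for _ in range(8)]
print(results.count(True))  # the 5-request burst passes; the rest are throttled
```

A scraper that wants thousands of profiles per minute hits the ceiling immediately, while a human browsing at normal speed never notices the limiter.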
“Recent advances in LLM capabilities have made it clear that there is an urgent need to rethink various aspects of computer security in the wake of LLM-driven offensive cyber capabilities,” the study reads. “Our work shows that the same is likely true for privacy as well. … Any moderately sophisticated actor can already do what we do using readily available LLMs and embedding models. With future LLMs, without mitigations, this attack will be within the means of basically all adversarial actors.”
BY CHRIS MORRIS @MORRISATLARGE
Monday, March 9, 2026
The Hidden Advantage of Being Over 50 in the Age of AI
I’ve been through a few technology revolutions. I built my first website in 1995, back when the internet made that screeching dial-up sound and nobody really knew what we were building, just that something big was happening. I watched the dot‑com bubble inflate and implode, watched social media go from novelty to addiction, and saw smartphones quietly rewire how humans behave. And now, here we are again: AI.
Everywhere you look, someone is launching an AI startup, automating departments, or building agents that promise to replace entire job functions. If you’re an experienced founder or executive—especially north of 50—it’s easy to feel like you showed up late to the party. I’ve felt it myself. A few months ago, I was sitting in front of my computer watching younger founders crank out AI apps in days, shipping products before I’d even finished reading about the tools they were using. I remember thinking, “Am I becoming the guy who missed it?” That thought lasted about a week.
Once I stopped comparing velocity and started actually using AI in my own work, something clicked. This might be the first tech wave where experience is the real unfair advantage.
AI isn’t about being technical. It’s about thinking clearly
Previous tech revolutions rewarded people who could code, manipulate algorithms, or master new platforms faster than everyone else, but AI is different. You don’t need to learn a programming language; you need to ask better questions. And asking better questions isn’t a technical skill—it’s a judgment skill. The leverage in AI doesn’t come from typing prompts quickly; it comes from knowing what matters, what doesn’t, and what consequences might follow. That’s pattern recognition, and pattern recognition is built over decades. It’s something AI is really good at, and it turns out those with experience are as well.
Speed is overrated. Judgment isn’t
Younger founders are moving fast right now, and I respect that. It’s exciting to watch. But speed without context creates a whole lot of noise, while experience creates context. When I use AI, I’m not asking it to build me a novelty app; I’m asking it to stress‑test a business idea, identify blind spots in a launch plan, challenge my assumptions, and help me flesh out existing models. I don’t accept what it gives me—I argue with it, refine it, and push it. That’s not something you learn from YouTube tutorials. That’s something you learn from making expensive mistakes.
The real danger isn’t falling behind—it’s outsourcing your thinking
There’s a subtle shift happening where leaders are starting to treat AI like a strategy generator instead of a thought partner, and that’s dangerous. AI predicts patterns. It doesn’t carry fiduciary responsibility, understand internal politics, feel reputational damage, or know which risks are existential versus cosmetic. It produces possibilities. You decide. If you’ve been in business long enough, you understand that difference instinctively—and that instinct is more valuable now than ever.
The confidence gap is mostly psychological
I’ve talked to more than a few executives who whisper some version of the same thing: “I’m not technical,” “I feel behind,” or “My kids understand this better than I do.” That may be true at the interface level, but understanding tools isn’t the same as understanding leverage. If you know how distribution works, AI can sharpen your messaging. If you understand customer psychology, AI can help you surface objections faster. If you understand operations, AI can reveal inefficiencies you’ve been tolerating for years. You don’t need to become an AI founder—you need to become more precise.
We’ve seen this movie before, but this time you’re the advantage
Every tech wave follows the same emotional arc: hype, overconfidence, correction, integration. What feels different about AI isn’t the hype—we’ve seen that—it’s the accessibility. You talk to it; it talks back. That simplicity lowers the barrier dramatically, and when the barrier lowers, judgment becomes the differentiator. Not youth. Not speed. Judgment.
The leaders who win this era won’t just be 22‑year‑olds building AI‑native startups. They’ll also be experienced operators who integrate AI quietly and intelligently into systems they already understand. If you’re over 50 and feeling behind, you might actually be early. Because when the tools get easier, experience becomes more powerful—not less. And this time, that experience may finally be the competitive edge.
EXPERT OPINION BY JOEL COMM, AUTHOR AND SPEAKER @JOELCOMM
Friday, March 6, 2026
How to Switch From ChatGPT to Claude With Just 1 Simple Prompt
Anthropic has had a turbulent few days, but the safety-focused AI company might be having the last laugh.
Following Anthropic’s standoff with the United States Department of War, President Trump’s subsequent firing of Claude from government use, and OpenAI’s surprise deal with the Pentagon, individual users are dumping ChatGPT and flocking to Claude. On Saturday, the Claude mobile app rose to the top spot on the iOS App Store, surpassing ChatGPT for the first time. Around the same time, TechCrunch reported, uninstalls of the ChatGPT mobile app jumped 295 percent compared with the previous day.
But switching AI providers isn’t always a seamless experience.
The more often you use an AI platform, the more it gains an understanding of you, your work, and your personal context, which is why starting over with a new AI can feel like taking a major step back. Now, Anthropic is looking to capitalize on its newfound momentum among consumers by making it easy to transfer context about yourself from rival AI providers like ChatGPT and Google Gemini to Claude.
On Monday, the company announced that its Memory feature, which enables Claude to remember key information about you across conversations, is now available to non-paying Claude users. Anthropic says on its website that this allows users to transfer their personal information with a single copy-paste, though in practice it takes two.
How to transfer your context from ChatGPT to Claude
On Claude.ai, navigate to the settings page and select “Capabilities” from the sidebar menu. Then, click the button labeled “start import” under a section titled “Import memory from other AI providers.”
Next, you’ll see a pop-up requesting that you copy a prewritten prompt and paste it into a new chat with the AI platform you’re looking to leave behind. For example, if you’ve been using ChatGPT and want to move on, you’d enter this prompt into ChatGPT.
Here’s the full prompt, courtesy of Anthropic:
Export all of my stored memories and any context you’ve learned about me from past conversations. Preserve my words verbatim where possible, especially for instructions and preferences.
## Categories (output in this order):
1. **Instructions**: Rules I’ve explicitly asked you to follow going forward — tone, format, style, “always do X”, “never do Y”, and corrections to your behavior. Only include rules from stored memories, not from conversations.
2. **Identity**: Name, age, location, education, family, relationships, languages, and personal interests.
3. **Career**: Current and past roles, companies, and general skill areas.
4. **Projects**: Projects I meaningfully built or committed to. Ideally ONE entry per project. Include what it does, current status, and any key decisions. Use the project name or a short descriptor as the first words of the entry.
5. **Preferences**: Opinions, tastes, and working-style preferences that apply broadly.
## Format:
Use section headers for each category. Within each category, list one entry per line, sorted by oldest date first. Format each line as:
[YYYY-MM-DD] – Entry content here.
If no date is known, use [unknown] instead.
## Output:
– Wrap the entire export in a single code block for easy copying.
– After the code block, state whether this is the complete set or if more remain.
What to do with Claude after you’ve entered this prompt
If you prompt a platform like ChatGPT or Gemini with this message, you’ll receive a response that details the information the platform has about you, broken down into sections like identity, career, and projects. The response should also contain instructions detailing how you like your AI models to converse with you, such as specifications for tone of voice.
Once the response is done generating, you can copy it, paste it into the textbox in the Claude settings page, and click the “add to memory” button. With that, you should see a pop-up box named “manage memory.” This box contains all the personal information that Claude knows about you, and after a minute or two it will update with the new data you just transferred from the other platform. Make sure to review this context closely and edit any data that seems inaccurate or unnecessary for what you’re planning on using Claude for.
And there you have it—now you’re ready to start your new journey with Claude. What will you do first?
BY BEN SHERRY @BENLUCASSHERRY
Wednesday, March 4, 2026
AI Adoption Has Surged to 78 Percent in This 1 Industry—but There’s a Catch
One industry has gone from barely touching AI to mass adoption in just two years: AI adoption in the legal field jumped from 23 percent to 78 percent over that period, a faster rise than in finance or healthcare.
Litify’s third annual State of AI in Legal Report, which surveyed hundreds of legal professionals across law firms, corporate legal departments, and plaintiff practices, found that legal professionals are now among the fastest AI adopters anywhere.
But there’s a problem hiding inside that adoption number. Only 14 percent say AI is helping them reduce costs. Just 7 percent report billing more time. Law firms rushed to buy the sports car, then kept driving it in first gear. The gap between “we use AI” and “this changed our economics” is enormous, and it’s still widening.
“At Litify, we view this as an ‘AI maturity gap,’” notes Curtis Brewer, CEO of Litify, the legal operations platform used by 55,000+ legal professionals. “A firm that relies solely on a general-purpose tool like ChatGPT is only at the first step of its maturity journey.”
The Litify data reveals exactly where firms are stuck. ChatGPT dominates usage at 66 percent, followed by Microsoft Copilot (42 percent) and Google Gemini (24 percent). These are general-purpose tools—not legal-specific platforms. And while 66 percent use AI for legal research and 39 percent for summarization, only 6 percent use it for creating invoices and 5 percent for client communication. Firms are deploying AI for tasks that feel productive but don’t directly touch revenue.
Why freemium tools hit a wall
General-purpose AI tools work well for research and summarization. The problem isn’t that they’re bad, but that they plateau quickly.
That ceiling is exactly why legal-specific platforms like Harvey—built from the ground up on legal data and trained on case law, contracts, and regulatory frameworks—have been gaining traction at major firms. Harvey now counts PwC, A&O Shearman, and half of the 100 highest-grossing law firms in the U.S. among its clients, and has raised over $1.2 billion, with reports of another $200 million round in the works at an $11 billion valuation—partly on the argument that generic AI simply wasn’t built for legal nuance.
“The primary limitation of these general-purpose tools is their lack of legal and business context,” Brewer says. “Legal work is defined by nuances — solicitation rules, jurisdictional requirements, compliance standards, and practice-area-specific workflows — that general models often overlook.”
Then there’s the context problem. Ask ChatGPT to summarize a case, and it only sees what you feed it — not the case history or the client’s background. And since it also can’t take action after summarizing, it’s more or less a dead-end tool.
“A legal-specific tool that lives alongside your data and processes can summarize the case and suggest the next best actions or additional questions to ask,” Brewer says. “As the industry raises the bar, firms that delay are doing more than just missing out on features — they are widening a performance gap that may soon become impossible to close.”
The shadow IT security risk
Here’s where the adoption-without-governance problem gets dangerous: Only 41 percent of firms have an AI policy, and only 45 percent say their staff receive sufficient training. But 78 percent are using AI tools.
That means roughly a third of legal professionals may be using AI in what amounts to a shadow IT environment, with no oversight, guardrails, or policy.
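That “roughly a third” figure follows from simple set arithmetic. Treating the survey percentages as if they applied uniformly to staff (a simplification, since policy coverage is measured at the firm level), even the most charitable overlap between AI users and policy-covered employees leaves a sizable ungoverned floor:

```python
# Back-of-the-envelope check of the "roughly a third" claim.
# Assumes MAXIMAL overlap between AI users and policy-covered staff,
# which yields the minimum possible share operating without a policy.
using_ai = 0.78    # share using AI tools
has_policy = 0.41  # share covered by an AI policy
ungoverned_floor = max(0.0, using_ai - has_policy)
print(f"{ungoverned_floor:.0%}")  # 37% — about a third, even in the best case
```

Any realistic, imperfect overlap pushes the true figure higher than 37 percent, which is why the survey authors call it "roughly a third" rather than an exact count.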
“Security, security, security!” Brewer says. “Given the highly sensitive nature of legal data, business leaders should be concerned that nearly a third of their staff may be using AI in a ‘shadow’ environment without direct IT oversight.”
When employees use public AI tools, they might paste in confidential client information or HIPAA-protected medical records without thinking twice. These systems have no real safeguards. One careless prompt could mean a data breach, regulatory violation, or destroyed client relationship.
“When firms fail to provide proactive guidance and purpose-built tools, staff will seek their own solutions,” Brewer explains. “If AI adoption isn’t intentional and structured from the top down, firms risk losing the very efficiency gains they sought in the first place, while exposing themselves to additional risks.”
What workflow integration actually looks like
The difference between AI as an assistant and AI as a business driver comes down to integration.
Consider billing. Asking ChatGPT to create an invoice is like using your smartphone’s calculator instead of the accounting app. Sure, it works. But you still have to manually punch in every client detail, every payment amount, and every line item. You saved five minutes on the template and spent an hour filling it in. That’s unproductive.
“When AI ‘lives’ natively alongside your billing, client, and case workflows, the impact is fundamentally different,” Brewer notes. “It transforms from an assistant to a proactive business partner.”
An integrated AI tool doesn’t just generate a branded invoice template with client and matter details pre-filled. It can automatically suggest missing time entries or proactively identify billing errors. That’s the difference between saving 10 minutes and changing the economics of the entire billing process.
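As a concrete illustration of the “suggest missing time entries” idea, an integrated tool can cross-reference logged case activity against billed time and flag the gaps. The sketch below is entirely hypothetical (the records, field names, and matching rule are invented, and this is not Litify’s implementation), but it shows why the capability depends on living alongside the firm’s data:

```python
# Hypothetical sketch: flag case activities with no billed time entry
# recorded for the same matter on the same day.
from datetime import date

activities = [  # invented activity log from a case management system
    {"matter": "Smith v. Jones", "day": date(2026, 3, 2), "event": "client call"},
    {"matter": "Smith v. Jones", "day": date(2026, 3, 3), "event": "drafted motion"},
]
billed = {("Smith v. Jones", date(2026, 3, 2))}  # (matter, day) pairs already billed

missing = [a for a in activities if (a["matter"], a["day"]) not in billed]
for a in missing:
    print(f'Unbilled: {a["event"]} on {a["day"]} ({a["matter"]})')
```

A general-purpose chatbot can’t do this because it never sees the activity log or the billing ledger; the whole trick is the join between the two.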
Litify’s clients who’ve embraced this level of integration are seeing dramatic operational scaling — some firms handle twice as many matters with the same staff, and the highest performers have grown headcount by up to 400 percent as they’ve expanded regionally and nationally.
The four-dimension framework
Brewer says firms need to move on four fronts at once.
1. Tools: You have to stop relying on ChatGPT alone, because that’s not going to get you there. You should move to legal-specific platforms that effectively integrate with your case management, billing, and client systems.
2. Readiness: Write an AI policy. Spell out which tools are approved, how to handle sensitive data, when humans must review output, and what to do when something goes wrong. Then treat training like a safety requirement, not an HR checkbox.
3. Task scope: Research and summarization are fine starting points. But firms that stay there are leaving money on the table. The next level is workflow automation — routing requests, running conflict checks, and building chronologies. Eventually, let AI assign cases, generate invoices, and handle intake.
4. Impact: Pick metrics before you spend another dollar. Cost per matter. Turnaround time. Write-off rates. Error rates. “The try-it-and-see period is ending,” Brewer says. “Leaders will expect ROI.”
Ultimately, the firms pulling ahead didn’t just buy software. They rewired how legal work gets done — from intake to invoice and research to billing — with training, governance, and measurement baked in from the start.
You can keep using the sports car in first gear. But eventually, someone in your market will figure out where the other gears are.
BY KOLAWOLE ADEBAYO