IMPACT
..building a unique and dynamic generation.
Wednesday, April 2, 2025
Signal, WhatsApp, and iMessage: Which Messaging App Is Most Secure?
I don’t know very much about what goes into war planning, but I assume that the communications infrastructure that supports that kind of thing is a solved problem for the government. There are secure telephone and video systems, as well as Sensitive Compartmented Information Facilities (SCIFs) that allow the key players to review the most sensitive information about military activities.
Typically, I assume, those sorts of conversations aren’t had using consumer messaging platforms on the Secretary of Defense’s iPhone. Also, I sort of assume that the people involved are smart enough and tech-savvy enough to notice that a journalist has entered the group chat. Apparently not.
There are a lot of questions raised by what is now certainly the most infamous group chat in the world, in which the Vice President, Secretaries of Defense and State, the Director of National Intelligence, CIA Director, and National Security Advisor were messaging about plans to bomb Houthi rebels in Yemen. We know about the chat because someone accidentally added Jeffrey Goldberg, the editor of The Atlantic.
One question that a lot of readers might be wondering is just how secure the most popular messaging apps are. Here’s a rundown.
Signal
Signal, the app in question in this case, is end-to-end encrypted (E2EE). That means that messages are sent in an encrypted format and can only be read by the recipient. At the core of its encryption is the Signal Protocol, an open-source protocol that allows for public inspection. That decreases the chances of hidden vulnerabilities. Signal also provides forward secrecy, a form of encryption that ensures that even if a session key is compromised, previous messages stay encrypted.
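The forward-secrecy idea is worth seeing concretely. The sketch below is not the actual Signal Protocol (which uses the far more sophisticated Double Ratchet algorithm); it's a minimal hash-ratchet illustration of the principle that each message gets a fresh key, and the one-way step means old keys can't be recomputed from the current state:

```python
import hashlib

def ratchet(chain_key: bytes) -> tuple[bytes, bytes]:
    """Derive a one-time message key, then advance the chain.
    Because the hash is one-way, an attacker who steals the
    *current* chain key cannot work backward to recover the
    message keys used for earlier messages."""
    message_key = hashlib.sha256(chain_key + b"\x01").digest()
    next_chain_key = hashlib.sha256(chain_key + b"\x02").digest()
    return message_key, next_chain_key

chain = b"initial shared secret"  # hypothetical starting secret
message_keys = []
for _ in range(3):
    mk, chain = ratchet(mk_chain := chain)
    message_keys.append(mk)

# Three distinct per-message keys; compromising `chain` now
# reveals none of them.
assert len(set(message_keys)) == 3
```

The real Double Ratchet layers a Diffie-Hellman ratchet on top of this symmetric one, but the core guarantee, old keys are unrecoverable from current state, is the same.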
Signal also offers strong privacy protections: you can now message people via a username, so you don’t have to share your phone number with your contacts (though a number is still required to register). It also allows for contact verification so that you can ensure that the person you’re messaging is who they say they are. In general, Signal is widely considered the most secure consumer messaging app because third parties can verify its security claims, and the company does not have access to metadata about your conversations.
iMessage
If you only send messages to other iPhone users, Apple’s iMessage platform is arguably the best and most secure option. Unlike Signal, Apple’s protocol is proprietary and not open for inspection by third-party security researchers. That makes it harder to verify that it is as secure as it claims, but Apple is well known for its commitment to security and privacy.
One advantage is that Apple uses a pairwise encryption model for group chats, which means that every message is encrypted individually for each member of the group. This is technically more secure than Signal’s Sender Key method, though it means that iMessage group chats are much more limited in terms of group size (due to the resources required for all of that individual encryption).
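The resource cost mentioned above is easy to quantify. In a pairwise model, every sent message must be encrypted once for each other group member; in a sender-key model, it's encrypted once regardless of group size. A back-of-the-envelope sketch (the group sizes here are illustrative, not Apple's actual limits):

```python
def pairwise_encryptions(group_size: int, messages: int) -> int:
    # Pairwise model: each message is encrypted separately
    # for every other member of the group.
    return messages * (group_size - 1)

def sender_key_encryptions(group_size: int, messages: int) -> int:
    # Sender-key model: each message is encrypted once with a
    # shared key, no matter how large the group is.
    return messages

# A 32-person group exchanging 1,000 messages:
assert pairwise_encryptions(32, 1000) == 31_000
assert sender_key_encryptions(32, 1000) == 1_000
```

The pairwise cost grows linearly with group size, which is why platforms using it tend to cap groups at a much smaller number than sender-key platforms.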
Apple also says its encryption is designed for a post-quantum world. The idea is that quantum computers will eventually be able to break today’s encryption easily enough to read protected messages, but Apple is designing its algorithm to resist those types of future capabilities.
There are, however, two main drawbacks to Apple’s messaging platform. The first is that once you start messaging anyone with an Android device, it will fall back to RCS or, worse, SMS—neither of which is end-to-end encrypted within the Messages app. RCS supports E2EE, but Apple has not implemented the ability to send encrypted messages to Android devices.
The other is that if you use iCloud backup for your messages, and aren’t using Advanced Data Protection, a copy of your messages is stored on Apple’s servers. While they are encrypted at rest, the company is able to turn them over if requested by law enforcement because it retains a key.
WhatsApp
WhatsApp uses the Signal Protocol (see above), meaning it offers a reliably secure form of protection for messages by default. One problem with WhatsApp is that, while the content of your messages may be encrypted, the metadata about the messages you send, and who you send them to, is not. That information is collected and stored by WhatsApp.
Some people are also less than enthusiastic about using an app owned by Meta, which isn’t exactly known for its ability to keep its hands off of user data. It does, however, have the benefit of a massive user base, which means that there’s a good chance that the person you want to message with will be using WhatsApp. The app also has the best feature set for group messaging by far.
Telegram
To be clear, Telegram is not an E2EE messaging platform by default. Every regular message you send is encrypted in transit, and is encrypted as it is stored on Telegram’s servers, but that’s not the same thing as being encrypted so that only the recipient can read your message. This makes your messages vulnerable to anyone who has access to those servers.
The app does allow you to create a “Secret Chat,” which is encrypted, and you can even set these to delete after a period of time. Still, if you care about protecting your text conversations, there are far better options on this list.
Messenger
Meta’s “other” messaging platform started rolling out E2EE last year, which should eventually put it on par with WhatsApp. The drawback here is that the rollout is happening over time, which means that not every user will immediately have it turned on by default. In addition, you might have some chats that are protected, and others that aren’t, and the average user isn’t going to know how to tell the difference.
The bottom line
It’s worth mentioning that it does not matter how private or secure the encryption is on a messaging platform—if you include someone in a group chat and send a message to that group, they’re going to be able to read the message. Or, put another way, the problem here has nothing to do with encryption, and everything to do with human error. Most of these apps offer a secure form of E2EE for consumers, but there is no guarantee your messages will stay secret if you text them to a journalist.
EXPERT OPINION BY JASON ATEN, TECH COLUMNIST @JASONATEN
Monday, March 31, 2025
Forecast: AI’s Rise Will Cut Search Engine Traffic, Affecting Advertising
A new report from the research firm Gartner has some unsettling news for search engine giants like Google and Microsoft’s Bing. It predicts that as everyday internet users become more comfortable with AI tech and incorporate it into their general online habits, chatbots and other agents will lead to a 25 percent drop in “traditional search engine volume.” The search giants will then simply be “losing market share to AI chatbots and other virtual agents.”
One reason to care about this news is to remember that the search engine giants are really marketing giants. Search engines are useful, but Google makes money by selling ads that leverage data from its search engine. These ads are designed to convert to profits for the companies whose wares are being promoted. Placing Google ads on a website is also a revenue source that many other companies rely on, perhaps best known for being used by media firms. If AI upends search, then by definition this means it will similarly upend current marketing practices. And disrupted marketing norms mean that how you think about using online systems to market your company’s products will have to change too.
AI already plays a role in marketing. Chatbots are touted as having copy-generating skills that can boost small companies’ public relations efforts, but the tech is also having an effect inside the marketing process itself. An example of this is Shopify’s recent AI-powered Semantic Search system, which uses AI to sniff through the text and image data of a manufacturer’s products and then dream up better search-matching terms, so that sellers don’t miss out on customers searching for a particular phrase. But this is simply using AI to improve current search-based marketing systems.
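Shopify hasn't published its implementation, but semantic search systems generally work by mapping queries and product text into vectors and ranking by similarity, so a query can match a product even when the exact words differ. A toy sketch of the idea (real systems use learned dense embeddings from a neural model, not word counts; the product names here are invented):

```python
from collections import Counter
import math

def embed(text: str) -> Counter:
    # Toy "embedding": a bag-of-words vector. A production
    # semantic search system would use a neural embedding model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

products = ["insulated steel water bottle", "leather laptop bag"]
query = "bottle that keeps water cold"
best = max(products, key=lambda p: cosine(embed(p), embed(query)))
assert best == "insulated steel water bottle"
```

With real embeddings, the match would survive even with zero word overlap ("thermos" vs. "insulated bottle"), which is exactly the gap keyword search leaves open.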
AI: smart enough to steal traffic
More important is the notion that AI chatbots can “steal” search engine traffic. Think of how many of the queries you usually direct at Google—from basic stuff like “what’s 200 Fahrenheit in Celsius?” to more complex matters like “what’s the most recent games console made by Sony?”—could be answered by a chatbot instead. Typing those queries into ChatGPT or a system like Microsoft’s Copilot could mean they aren’t directed through Google’s labyrinthine search engine systems.
There’s also a hint that future web surfing won’t be as search-centric as it is now, thanks to the novel Arc app. Arc leverages search engine results as part of its answers to user queries, but the app promises to do the boring bits of web searching for you, neatly curating the answers above more traditional search engine results. AI “agents” are another emergent form of the tech that could impact search: AI systems that are able to go off and perform a complex sequence of tasks for you, like searching for some data and analyzing it automatically.
Google, of course, is savvy regarding these trends, and last year launched its own AI search push, with its Search Generative Experience. This is an effort to add in some of the clever summarizing abilities of generative AI systems to Google’s traditional search system, saving users time they’d otherwise have spent trawling through a handful of the top search results in order to learn the actual answer to the queries they typed in.
But as AI use expands, and firms like Microsoft double and triple down on their efforts to incorporate AI into everyone’s digital lives, the question of the role of traditional search compared to AI chatbots and similar tech remains an open one. AI will soon impact how you think about marketing your company’s products, and Search Engine Optimization may even stop being such an important factor in bolstering traffic to your website.
So if you’re building a long-term marketing strategy right now, it might be worth examining how you can leverage AI products to market your wares alongside more traditional search systems. It’s always smart to skate to where the puck is going to be versus where it currently is.
BY KIT EATON @KITEATON
Friday, March 28, 2025
OpenAI Says Using ChatGPT Can Make You Lonelier. Should You Limit AI Use at Work?
The dramatic increase in AI chatbot use created a new way for people and machines to interact, and that may be a problem. Researchers from market-leading AI firm OpenAI and MIT’s Media Lab have concluded that using ChatGPT may actually worsen feelings of loneliness for people who use the chatbot all the time. If your company is using AI tools to speed up your workers’ days, this is definitely something to consider!
While the results were presented with some nuance and subtlety, they fuel a narrative begun in 2023, when then Surgeon General Dr. Vivek Murthy warned that there was an “epidemic” of “loneliness and isolation” that was harming Americans’ health, and he partly blamed our digital era, noting to NPR that “we are living with technology that has profoundly changed how we interact with each other.”
That doesn’t mean AI use should be banned at work, but it’s worth considering how long your employees are spending working with a chatbot. The authors of the joint study noted that their analysis found that while “most participants spent a relatively short amount of time chatting with the chatbot, a smaller number of participants engaged for significantly longer periods.”
It’s these “power users” who may be experiencing the biggest impact. The authors noted that people who had “higher daily usage — across all modalities and conversation types — correlated with higher loneliness, dependence, and problematic use, and lower socialization.” Reporting on the study, Business Insider pointed out that in some ways this sort of investigation is always tricky, because feelings of loneliness and social isolation often change from moment to moment, influenced by many factors. To control for this, the researchers measured both the survey participants’ feelings of loneliness and their actual level of socialization, to separate genuine social isolation from subjective feelings of loneliness.
As with face-to-face interactions, tone was a big influence, the study concluded. When ChatGPT was instructed to react with flatter, more neutral interactions in a “formal, composed, and efficient” manner, the power users felt heightened loneliness. When the chatbot was told to be “delightful, spirited, and captivating” and reflect the user’s emotions, the users didn’t suffer this way. This makes sense: you wouldn’t necessarily feel listened to if, say, your AI conversations were as muted as a short discussion with a clerk at the department of motor vehicles, but you would feel differently if your AI coworker spoke more like the machine in Scarlett Johansson’s movie Her.
If your company has raced to embrace the promise of AI, does this mean you should rethink your AI tool usage, or, at the very least, worry about your staff’s mental health?
The nuanced answer is probably not. Not yet, at least—but it’s definitely something to keep a weather eye on. AI technology is making its presence felt in the workplace, and it’s now capable of a remarkably human-like level of interaction.
Wired’s recent report about Google’s “scramble” to catch up with OpenAI will unsettle critics. After Google’s Gemini AI was perfected and released, one executive told Wired that “she had switched from calling her sister during her commutes to gabbing out loud with Gemini Live.” That’s a score for Google’s AI chops, but it’s easy to see how the change may impact that executive’s relationship with her family.
For now, many AI tools exist more in the background in the workplace, and they’re less like chatting to a digital person than interacting with a smarter version of Microsoft’s old, much-loathed “Clippy” digital assistant. For example, taking advantage of Microsoft’s AI Copilot suggestions to speed up writing an email in Outlook is unlikely to harm your workplace friendships in the way that gabbling to ChatGPT for 8 hours while sat at your desk might.
At this point, connections are fraying in the increasingly digitized workplace. Workplace friendships are eroding, and the idea of a “workplace spouse” is fading. Tech company executives are pushing AI hard too, with Slack’s leadership imagining a near future when workers spend more time talking to AI agents at work than they do chatting with their colleagues.
Last year, drug maker Moderna’s CEO said he was hoping his company’s bottom line would benefit from workers chatting to ChatGPT at least 20 times per day. Increased use of AI in the workplace could easily mean that many more people effectively become AI “power users,” triggering the kind of worry about loneliness that MIT’s researchers spoke of.
No matter how much AI may drive up your employees’ efficiency, and boost your bottom line, it’s worth remembering that a ton of science shows that happy workers are better workers.
BY KIT EATON @KITEATON
Wednesday, March 26, 2025
Microsoft’s AI Agents Aim to Make Cybersecurity Teams’ Work Easier
If you peek behind the curtain at a network defender’s workflow, you might see hundreds—if not thousands—of emails marked as potential spam or phishing. It can take hours to sift through the messages to detect the most urgent threats. When a data breach occurs, figuring out what vital information was stolen is a critical—but often challenging—step for investigators.
Today, Microsoft announced a set of artificial intelligence agents aimed at making cybersecurity teams’ work a little easier. That could be good news for the many businesses large and small that use Microsoft 365 for their email, cloud storage, and other services.
Agentic AI is a buzzy new term for AI systems that can take actions on behalf of a human user. One step up from generative AI chatbots, AI agents promise to do actual work, such as executing code or performing web searches. OpenAI recently launched Deep Research mode for ChatGPT, which can conduct multi-step web searches to research complex topics or make shopping recommendations for major purchases. Google has been rolling out its own AI agents built off the latest version of Gemini.
A year ago, Microsoft launched Security Copilot, which introduced AI tools to its suite of security products: Purview, Defender, Sentinel, Intune, and Entra. Starting in April, users can opt in to having AI agents do specific tasks for them.
Microsoft says the agents can help streamline the work of security and IT teams, which are facing both a labor shortage and an overwhelming volume of threats.
Take phishing emails. In 2024, Microsoft says it detected 30 billion phishing emails targeting customers. At a company level, security teams often have to individually evaluate every potential phishing email and block malicious senders.
A new phishing triage agent inside Defender scans messages flagged by employees to ensure that the most urgent threats are addressed first. Among the tasks the agent performs are reviewing messages for language that suggests a scam and checking for malicious links. The most dangerous emails go to the top of a user’s queue. Other messages might be deemed false positives, or simple spam.
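Microsoft hasn't published the agent's internals, but the triage workflow described above—score flagged messages, surface the most dangerous, set aside likely false positives—can be sketched with a simplified rule-based scorer. Everything here (the phrase list, the weights, the sample emails) is invented for illustration; a real triage agent would lean on ML classifiers, sender reputation, and threat-intelligence feeds:

```python
# Hypothetical scam-language indicators, for illustration only.
SCAM_PHRASES = ("verify your account", "urgent action", "wire transfer")

def triage_score(subject: str, body: str, links: list[str]) -> int:
    """Toy priority score: higher means review sooner."""
    text = f"{subject} {body}".lower()
    # Language that suggests a scam.
    score = sum(3 for phrase in SCAM_PHRASES if phrase in text)
    # Crude link check: flag non-HTTPS links as suspicious.
    score += sum(5 for url in links if not url.startswith("https://"))
    return score

flagged = [
    ("Team lunch", "Pizza on Friday!", []),
    ("Urgent action required", "Verify your account now.",
     ["http://evil.example"]),
]
# Sort the queue so the most suspicious message rises to the top;
# zero-score messages are candidates for the false-positive pile.
queue = sorted(flagged, key=lambda m: triage_score(*m), reverse=True)
assert queue[0][0] == "Urgent action required"
```

The point of the sketch is the queue ordering: the human analyst still reviews everything, but starts with whatever the scorer ranked most dangerous.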
From there, the IT team can review a detailed description of the steps the agent took. The AI agent will suggest next steps—such as blocking all inbound emails from a domain associated with cybercriminals—and the human user can click a button to instruct the agent to perform those tasks.
If an email was mistakenly marked as spam, there’s a field for the user to explain in natural language why that email should not have been flagged, helping train the AI to be more accurate going forward.
Another AI agent helps prevent data loss—for example, looking for suspicious activity that might indicate an insider threat—and in the event of a data breach, helps investigators understand what information was stolen, whether a trade secret or customer credit card numbers.
Other AI agents ensure new users and apps have the right security protocols in place, monitor for vulnerabilities, or analyze the evolving threat landscape a company faces. In each case, a user can look under the hood to see what steps the AI agent took in its investigation. The user can make corrections, or with the click of a button, tell the agent to complete the tasks it suggested, such as turning on multi-factor authentication for certain users, or running a software update.
So far, the tools work across Microsoft services, such as Outlook, OneDrive and Teams, though integrations with third-party tools such as Slack or Zoom could be offered down the line. The tools also don’t take remediation steps without human approval. In the future, some of those tasks could also be automated.
BY JENNIFER CONRAD @JENNIFERCONRAD