Friday, August 30, 2024

Why Microsoft's Controversial AI Search Tool, Recall, Suffered a False Start

On August 21, Microsoft announced that a controversial feature called Recall would finally launch this fall for an exclusive group of product testers outside the company. The feature, which automatically takes screenshots and uses AI to classify them, will only be available on specific Microsoft-approved laptops, and could help people spend less time searching through their computer history for something they vaguely remember seeing. Microsoft originally announced Recall in May as a feature for its new line of AI-powered Copilot+ PCs, which are specifically built to take advantage of Copilot, Microsoft's generative AI-powered assistant. Recall is meant to serve as a kind of device-specific search engine, designed to find "anything you have ever seen or done on your PC." Soon after the reveal, though, it was reported that Microsoft's security team wasn't able to get the feature into a safe enough state to launch with the PCs in June, and Recall was delayed to later in the year. Microsoft now says Recall will first be released in October, exclusively to people within the company's software testing program, called Windows Insider.

How is Recall supposed to work?

The feature constantly takes screenshots of whatever you're looking at on your display and uses computer vision to categorize what it "sees." Users can then search through their PC's history with identifying keywords. In an example shown at the May event, a Microsoft leader used Recall to search for a "chart with purple writing" and automatically found the annotated PowerPoint page she was looking for.

What are the security concerns with Recall?

Immediately following Recall's reveal, cybersecurity experts began to express concern with the feature, pointing out, for example, that it doesn't hide passwords or financial information when taking screenshots. Former Microsoft threat intelligence analyst Kevin Beaumont wrote on his blog that "with Recall, as a malicious hacker you will be able to take the handily indexed database and screenshots as soon as you access a system ... If you have malware running on your PC for only minutes, you have a big problem in your life now rather than just changing some passwords."

So, what's happening with Recall now?

In June, less than a month after its reveal, Microsoft announced that Recall would not roll out with the new line of PCs and would instead launch exclusively for Windows Insiders, but did not specify when the feature would become available to the general public. In the announcement, Microsoft said it had delayed Recall in order to "leverage the expertise of the Windows Insider community to ensure the experience meets our high standards for quality and security." To allay security concerns, Microsoft has said that users will be required to explicitly opt in to Recall, and that screenshots will be encrypted and can only be unlocked with a successful face scan of the computer's owner. Microsoft now says that "when Recall is available for Windows Insiders in October, we will publish a blog with more details."
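The capture-tag-search workflow Recall describes can be reduced to a toy sketch. Everything below (the `Snapshot` structure, the tag sets, the `search` helper) is invented purely for illustration and has nothing to do with Microsoft's actual implementation:

```python
# Illustrative toy version of a Recall-style workflow: screen captures are
# stored with descriptive tags (standing in for the output of a computer-vision
# classifier), then retrieved by keyword. Not Microsoft's implementation.

from dataclasses import dataclass

@dataclass
class Snapshot:
    timestamp: str
    tags: set[str]        # e.g. labels a vision model might assign to the image

def search(history: list[Snapshot], *keywords: str) -> list[Snapshot]:
    """Return snapshots whose tags contain every query keyword."""
    wanted = {k.lower() for k in keywords}
    return [s for s in history if wanted <= s.tags]

history = [
    Snapshot("2024-05-20T10:03", {"powerpoint", "chart", "purple"}),
    Snapshot("2024-05-20T10:17", {"browser", "email"}),
]

# The "chart with purple writing" query from the May demo:
print(search(history, "chart", "purple")[0].timestamp)  # 2024-05-20T10:03
```

The security critique in the article follows directly from this shape: an indexed, searchable history is exactly as useful to an attacker with access to the machine as it is to the owner.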

Wednesday, August 28, 2024

Quantum Computers Could Help Hackers Defeat Encryption. Here's How to Protect Your Data

Encryption is the secret sauce that keeps private information private as it travels across the internet. Apps like Apple's iMessage use it to protect the contents of your communication, as do many other services. Many encryption algorithms in use today are based on techniques developed nearly 50 years ago. They've served us well, allowing people and businesses to confidently send emails across the globe, shop online, and move company data to the cloud. But that could change, since researchers believe quantum computers capable of breaking today's encryption could arrive within a decade. In basic terms, a common form of encryption known as public key encryption relies on digital "keys" created by multiplying two extremely large prime numbers. An attacker would need to calculate the two prime factors to break the encryption. If the number is big enough, that's an impossible task for today's computers. But here's the rub: quantum computers have unique characteristics that make them better than regular, or classical, computers at certain tasks -- and one of those is factoring the enormous numbers these keys are built from. A number that would take a classical computer millions of years to factor could potentially be factored in "a matter of hours" by a quantum computer, explains Ray Harishankar, the head of IBM's Quantum Safe initiative. Anticipating that future, in 2016 the National Institute of Standards and Technology (NIST), part of the U.S. Department of Commerce, called on researchers from academic institutions and tech companies around the world to design new encryption algorithms. The best options were tested both in-house by NIST and by outside experts, and underwent years of refinement and standardization. Last week, NIST released the first three approved standards for post-quantum encryption. The goal, says Dustin Moody, a mathematician in the institute's computer security division, is for most of the U.S. government to adopt the standards by 2035.
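The asymmetry at the heart of public key encryption is visible even at toy scale: multiplying two primes is instant, while recovering them by brute force grows rapidly with their size. The sketch below is purely illustrative (trial division, toy-size primes); real RSA moduli are thousands of bits long, and real attacks use far more sophisticated algorithms -- classically the number field sieve, and on a quantum computer Shor's algorithm:

```python
def smallest_factor(n: int) -> int:
    """Brute-force trial division: return the smallest prime factor of n."""
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f
        f += 1
    return n  # n itself is prime

# Key-generation direction: cheap (one multiplication).
p, q = 104729, 1299709        # toy primes; real keys use primes ~1024 bits each
n = p * q

# Attack direction: cost grows with the size of the smallest prime factor,
# which is why big enough primes defeat classical brute force.
print(smallest_factor(n) == p)  # True
```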
And while the standards were developed for the government, many private companies and other organizations are expected to use them as well.

Why Companies Need to Prepare for the Quantum Future Today

"Every enterprise that uses digital communication should be looking at this now," says Harishankar. That, of course, means virtually every business out there needs to begin evaluating its systems. Even if quantum computers that can break today's encryption are still a decade away, it could take years for the updates to be adopted across the board, and connected devices (like cars, industrial control systems, and smart appliances) available today might still be in use a decade from now. The 10-year frame is just a projection, Moody adds, and a technological breakthrough could speed up the timeline. Some in the cybersecurity community have also warned of a practice known as "harvest now, decrypt later": hackers and foreign governments may already be scooping up encrypted sensitive information -- anything from health data to military plans -- in hopes of being able to decrypt it with future technologies. Companies of all sizes should take stock of what they have that's protected by encryption -- particularly any information that will still be valuable a decade from now.

Why Implementing Quantum Safe Standards Is a Massive Job

One big challenge is finding where encryption lives in various systems. It can be embedded in both software and hardware, such as connected devices and point-of-sale terminals. "If you step back and think about it, that is a huge problem," says Harishankar. Nobody can easily list each place in their tech stack that has encryption embedded, and which protocol is used in each instance. Take IBM's Db2 database software: it has about 28 million lines of code, Harishankar says.
IBM researchers discovered more than 160 instances of cryptography within the code -- and that's a smaller number than you'll find in many other companies' software, he says. For proprietary software, IBM and others have developed tools to detect where encryption lies -- it's too big a task to do manually. Businesses that rely on third-party software should ask their vendors whether their products are quantum safe, and if not, when they will be. The National Cybersecurity Center of Excellence offers resources for migrating to post-quantum encryption, as do some industry organizations. Many of the largest tech companies are already beginning the process, and others are expected to follow now that NIST's standards have been released. Newer versions of Apple's iMessage, Zoom's meeting capabilities, and Google's Chrome browser have all adopted post-quantum encryption in the past several months. "It is a journey. It will take multiple years to transform and test," warns Harishankar. "It's not take A and replace it with B. I wish it were that simple."
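The "find where encryption lives" problem can be approximated, for plain source text at least, with a simple pattern scan. This is a hedged sketch only: the pattern list and function are invented here, and real crypto-inventory tools (including IBM's) do far deeper analysis of binaries, configurations, and protocol negotiations:

```python
# Toy crypto-inventory scan: grep source text for well-known algorithm names.
# Illustrative only; the pattern set and API below are invented for this sketch.

import re

CRYPTO_PATTERN = re.compile(r"\b(RSA|ECDSA|DH|AES|3DES|MD5|SHA-?1)\b", re.IGNORECASE)

def find_crypto(source: str) -> list[tuple[int, str]]:
    """Return (line_number, algorithm) pairs for each match in the source text."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for match in CRYPTO_PATTERN.finditer(line):
            hits.append((lineno, match.group(1).upper()))
    return hits

sample = "cipher = RSA.new(key)\nhash = md5(data)\nprint('hello')"
print(find_crypto(sample))  # [(1, 'RSA'), (2, 'MD5')]
```

Even this naive scan hints at why the real job is so large: matches must then be classified as quantum-vulnerable (RSA, ECDSA, DH) or merely weak (MD5, SHA-1), and crypto buried in compiled binaries or hardware never shows up in a text search at all.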

Monday, August 26, 2024

The Future of AI in Content Marketing: 3 Noteworthy Trends

Artificial intelligence -- it's all anyone seems to talk about. On a regular basis, you see articles about how AI is going to help content marketers streamline processes. You read how it will essentially replace the need to hire actual human writers. People tell stories about how it will drive unprecedented growth in today's competitive market environment. Given how ubiquitous it's become, I still marvel at the fact that ChatGPT -- the solution essentially synonymous with AI at this point -- only launched to the public in November 2022. A lot has changed since then. Regardless of the hype, it's especially important to stay current in constantly changing industries like content marketing. The following trends are worth a look -- they're reshaping how content is created, distributed, and consumed across platforms.

1. Content Repurposing: Video-to-Article Generators

Content repurposing, a long-standing practice in marketing, involves transforming existing content into different formats to extend its reach and relevance, maximizing its value and impact across various audiences. It's a way to extract even more value from what you've already invested in. Without AI, this can mean clipping a long-form video for social-media posts or turning points from a blog into infographics. With AI, marketers have even more options, such as using video-to-article generators to seamlessly convert visual content into a written format. Consumers in virtually all industries expect video content. The ability of a video to captivate audiences and convey information is unmatched. Still, producing a high-quality video that resonates can cost a lot of money. You need to take care of lighting and sound equipment, cameras, talent, and more. Even a supposedly modest video can represent a significant upfront investment. In fact, some sources say producing a video can cost anywhere from $500 to $3,000, depending on quality and length.
When you spend that kind of money on content, you want to repurpose it. A contact of mine founded ArticleX, one of several available AI podcast summarizer tools I've heard great things about. This particular tool uses AI to take a video you've already worked on and quickly turn it into a polished article that can be posted on social media, shared on your blog, or published elsewhere. With the help of AI-powered solutions like this, high-value offerings like podcasts or other video and audio content can be transformed into versatile assets capable of engaging audiences across multiple platforms -- with little additional money or effort.

2. Improved Content Creation Processes

In addition to making the content you already have more valuable, AI is also being used to create totally new content from the ground up. AI is great for creating short, data-driven news stories that are tailor-made for an intended audience. Organizations like The Washington Post have already used it for tasks like reporting election results and sports recaps. The Post's AI-powered tool, Heliograf, allows the publication to create more content with fewer resources, freeing reporters to focus on more in-depth articles. Does it provide the same flair that a human writer would? No. But sometimes that's not what you need. When your only goal is to report the facts in the most straightforward way possible, AI can save you a lot of time. In content marketing, AI can be used to create newsy, fact-heavy pieces. It can also help with microcopy, like captions and headlines, product descriptions, certain listicles, and FAQs. If these pieces need to carry your organization's tone, copywriters can leverage AI to get a head start on researching and outlining. The key is to strike an adequate balance between AI and human content.
It's especially important to prioritize the reader when creating content -- overusing AI can diminish that value if you aren't careful.

3. Automated Optimization and Personalization

AI is making it easier than ever to get the right message in front of the right person at exactly the right time. AI can quickly analyze and contextualize the real data your existing content marketing campaigns are generating. You don't have to pay someone to interpret what your users like -- you'll know exactly what they respond to, which can help inform your future campaign strategies. AI can also be used to automate SEO. You can optimize your website content by employing machine learning algorithms to analyze search engine ranking factors, such as keywords and user experience metrics. These can continuously learn and adapt to changing search engine algorithms, allowing for dynamic and effective SEO strategies without the need for manual intervention. In addition to optimizing content, AI can assist with personalization. By leveraging user behavior, preferences, and demographic data, AI algorithms can tailor marketing messages to users in real time. Whether through personalized email campaigns, targeted social-media advertisements, or customized website content, AI-driven personalization empowers marketers to deliver the most relevant and compelling content to each user, driving better results.

Embracing the AI Frontier

As AI continues to evolve, it's reshaping the content marketing landscape in unprecedented ways. But as long as you stay informed about current trends, you should be able to ride the wave. From content repurposing to improved content creation and automated optimization, AI-driven solutions are revolutionizing how businesses create, distribute, and engage with content. Embracing these trends empowers marketers to stay ahead and deliver more personalized and impactful experiences to their audiences.
Expert Opinion By John Hall, Co-founder and president, Calendar @johnhall

Friday, August 23, 2024

Hacked? Here's the First Call You Should Make

Falling victim to a hack or ransomware attack is a nightmare scenario for business owners. For small businesses, it can be financially devastating--not to mention the stress, disruption to normal operations, and hit to your reputation. And, unfortunately, attacks are only becoming more common. San Francisco-based cyber-risk and insurance firm Resilience crunched the numbers and found that 48 percent of all claims it processed in 2023 were related to ransomware, and that many attacks start with human error, such as clicking on a malicious link. Companies in wholesale, health care, construction, and transportation were frequent targets, according to a report released by Resilience on Tuesday. What's more, mergers and acquisitions can be a precarious time, as companies are exposed to each other's existing vulnerabilities, and new weaknesses can crop up as systems integrate--something made clear by the ransomware attack on Change Healthcare soon after it was acquired by UnitedHealth Group. Resilience is also seeing a spike in claims related to products from third-party vendors, and notes that the dominance of a few key vendors will likely result in more supersize outages and hacks as intruders focus their efforts on a few common systems. For example, earlier this year Ticketmaster, Santander Bank, and others had data compromised after hackers accessed their Snowflake cloud storage systems. (Last month's CrowdStrike-linked outages, which were caused by a faulty software update and not an attack, are another example of the kind of chaos that can ensue when there's an issue with a widely used service.) Resilience co-founder and CEO Vishaal Hariprasad, a former Air Force cyber operations officer, spoke to Inc. about the first steps business owners should take if they're subject to a ransomware attack or other business intrusion. While it's best to have an action plan ready in advance, quick thinking can help mitigate the damage.
"Every company of any size is a tech company directly or indirectly," says Hariprasad, who adds that hackers will often leverage small companies to access the larger companies they do business with. "Especially on the [small or medium-size business] side, a small outage could become very material very fast."

Step 1: Get on the Phone

If you have cybersecurity insurance, phone the claims hotline right away. "Their claims person should literally be the quarterback for their next steps," says Hariprasad. The insurance company should be able to provide resources, contacts, and advice--and it's in the insurer's best interest to make sure you recover as quickly as possible. If you don't have cyber insurance, there are incident management companies you can call. Otherwise, if you use an outside vendor for IT services, that should be your first call. Many companies spend too much time trying to clean things up in-house. Or they call law enforcement first, hoping for an immediate incident response and recovery--something authorities are generally not equipped to provide. In most cases, a call to the police or FBI should be the third or fourth call a company makes. "The mistake most people make is that they reverse that order," says Hariprasad.

Step 2: Cut Off Access

About a decade ago, if a company faced a virus or a breach, the advice might have been to run around unplugging computers to keep the issue from spreading. That no longer works in the era of cloud computing and connected systems, but what you can do is cut off access to the systems you still control as soon as possible. "Once they're in, the attacker is going to move as fast as they can to the crown jewels," says Hariprasad. Start with the systems that are most critical to your business or that hold sensitive information--an attacker will likely go after those first so you're more compelled to pay a ransom--and work your way down the line.
For small businesses, customer databases in HubSpot CRM or Salesforce might be some of the first things to lock down. Cut off administrator access for any email addresses that have been compromised and change the admin passwords for other accounts. Many common software suites offer options to lock down a system until an incident is resolved, including preventing new accounts from being created and preventing data from being exported.

Wednesday, August 21, 2024

As Big Companies Air AI Fears, Entrepreneurs Adopt It With Gusto

Since OpenAI released its ChatGPT bot two years ago, reactions to rapidly evolving artificial intelligence (AI) applications have spawned nearly equal portions of enthusiasm and concern. Those contrasts now appear in real-life contexts, as many of the country's largest corporations air worries about the tech in their reports to shareholders, while growing numbers of entrepreneurs say using AI to launch their businesses helped them progress toward profitability faster than they could have otherwise. The fretting about the potential downsides of artificial intelligence for businesses was laid bare in a report by machine-learning analysis and development platform Arize AI. Its research examined the most recent annual reports published by Fortune 500 companies, and determined that 56 percent of them--a total of 281--cited emerging AI as a "risk factor" to their operations, up more than 473 percent from 2022. Meanwhile, nearly 70 percent of the yearly investor publications that specifically mentioned generative AI pointed out the negative effects it could have on future revenue--or even the business itself. The proliferation of those corporate warnings was doubtless fueled by the tech's rapid spread throughout business, and in broader public awareness thanks to ChatGPT. Overall, 324 of the 500 companies examined mentioned AI in their latest reports, compared with 128 in 2022. And the 108 companies that recently noted the risks swiftly developing generative AI may bring stand in stark contrast to the complete absence of references to such applications two years ago. What were the main reasons behind the corporate worries about AI's possible negative impacts? They ranged from the prospect of computers eventually replacing the humans central to Fortune 500 media, entertainment, advertising, and software businesses, to the possibility that AI will permit struggling rivals to harness its power to catch up to, or even surpass, established sector leaders.
Netflix, for example, specifically mentioned generative AI, warning "(i)f our competitors gain an advantage by using such technologies, our ability to compete effectively and our results of operations could be adversely impacted." Other fears included inadvertent leaks of proprietary and other confidential company data, potential legal hazards stemming from communications or decisions taken by machines, and various problems in the still uncharted ethical waters of creating and deploying AI. "For example, the development of AI and (our) Customer 360, the latter of which provides information regarding our customers' customers, presents emerging ethical issues," read one passage from Salesforce's annual report. It took particular note of the risk of apps provoking "controversy due to their perceived or actual impact on human rights, privacy, employment, or in other social contexts." But if the largest companies in the U.S. that were among AI's first corporate adopters are now raising precautionary questions, a report Sunday in The New York Times suggests entrepreneurs are embracing the rapidly improving technology without the same hesitations. Though largely anecdotal in nature, the article cites founders who used publicly available applications like ChatGPT and GitHub Copilot to launch their companies--in some cases to learn details about their chosen sector, or even the fundamentals of business ownership. The apps then allowed those new bosses to progress toward profitability faster by relying on the tech to "write intricate code, understand complex legal documents, create posts on social media, edit copy, and even answer payroll questions" for which they'd normally have to pay third-party providers, the Times said. Among the young companies cited was Skittenz, which Golden, Colorado, emergency doctor Steven Bright founded recently to produce a variety of snazzy-looking outer skins that fit over bland mittens and gloves.
After finding that his fellow medical professionals could offer little advice about how to start a business, Bright told the Times he used ChatGPT to navigate typically impenetrable legal, patent, and administrative questions. He was even able to get answers to technical manufacturing questions like which dye would work best in his products. "(T)o be able to harness the whole power of the internet into a bit of a conversation gives you some reassurance," Bright said, likening the tech to "stilts to get through an obstacle-- to get through a minefield." Bright and other founders aren't alone in seeing AI's upsides for startups. According to a 2023 report by the U.S. Chamber of Commerce, 23 percent of all U.S. small businesses were already using "an AI platform," and 71 percent planned to adopt some form of the technology. In its conclusion, the Arize AI study advised big U.S. companies now expressing concerns about potential AI business risks to take inspiration from entrepreneurs who are using it successfully--then actively pursue its development in ways that reduce or eliminate future threats. "Given that most mentions of AI are as a risk factor, there is a real opportunity for enterprises to stand out by highlighting their innovation and providing context on how they are using generative AI," the study said.

Monday, August 19, 2024

The End of the Creator Economy

No, YouTube, I don't want to watch any more videos of randos giving their opinions on movies. So. The creator economy. Did it peak that fast? I was singing its praises (sort of) just six months ago. Was I that wrong? Meh. Not really. I think I was just early. And I'm probably early here too. I'm not saying the creator economy is over. I'm just saying this is what the beginning of the end looks like. A word of warning. This column is going to be full of reckless speculation. Also, this has nothing to do with whom I create for, mostly because I use words and it's an entirely different model, but also because if they weren't awesome, I wouldn't create for them. There. Now I can take the gloves off. Look, the creator economy has grown massively over the past few years, but here are several critical questions that need answers if the "economy" part of it is going to survive. Is the money there? Can you really, really make a million dollars as a YouTuber? Really? As a lifelong entrepreneur, I've seen these "anyone can get rich" stories play out over and over again. In fact, it's happening right now with generative AI. And in a lot of cases, there's just enough truth there for it to be true. I absolutely hate when this happens, and investors and boards use me as a BS detector quite often. Like when someone says their company is worth X and X is a very large number of dollars. But X is just a calculation based on Y that uses a value of Z and also A and B and C, all the way up to H. And H is totally made up. Fictional. No basis in reality. When you see a tech startup with a massive valuation crash and burn, it's usually because, even with a bunch of smart people sinking zillions of dollars into the venture, no one asked about H. It happens all the time in entrepreneurship, especially in front of investors and in the press. 
And the same kind of almost-true ballooning is happening in the creator economy right now, although it's more tacit than deliberate, and usually fueled by the creators themselves. Now, as I mentioned, I'll also tell you that I make a lot of money spitting truth grenades with funny words (see, I just ballooned myself). But I say that to say I'm not just one of those folks being all sad because the algorithm is oppressing them (yes, they mean "suppressing," but they don't know that). I'm just saying I've been on both sides of the algorithm. And I know some of you are already mentally arguing semantics about being able to make a million dollars on YouTube. So I'll change up the question ... Can just anyone make millions of dollars as a YouTuber? Well, the answer to this one is a firm, flat no. But the longer answer is sneakier because that answer is absolutely not unless lightning strikes or they get a lot of help from people with a lot of money, like venture capital or private equity money, which is most definitely happening. I know I just shattered a lot of dreams, but I need to talk about infinite smallness and the long tail. There are a handful of unbacked creators who are going to make a lot of money for a very short period of time. There are another handful who will make average money for a longer period of time. Then there's the long tail, which is infinitely long and infinitely thin when, say, charting revenue per creator for any single creation. In other words, it's hit or miss, for sure, and there is no way to predict a hit. Unless ... Is the money worth making? There are two ways to make money as a creator. Option 1 is to put a lifetime and a keg of passion into a single subject and extend the possibilities of it for millions of people in an entertaining way, while also being great in the medium in which you're working. Option 2 is to chase a bunch of algorithms and SEOs and whatever China is doing with TikTok so that your content rises to the top. 
The first way is incredibly difficult. The second way is incredibly difficult but is sold as easy. Search for how to make money on any creator channel and your soul will ache with the daunting volume and the sheer banality of the results. Also, there seems to be a lot of "help" offered for option 2, but these people will mostly just steal your money. For the record, those search results you just pulled up are also algorithm-driven. It becomes a game. The ones with the talent don't want to play. The ones who are playing have a different talent. And I'm not here to judge, but no one ever vibes with a creator because the content hits them in their SEO. But you have to play the game at least a little, because ... How do you get past the ad-based content problem? I said in a recent column that the only way to make a lot of money is to make even more money for someone else. Nowhere is this more true than in the creator economy. What this boils down to is that the creator economy isn't really revolutionary. It lives and dies on advertising the same way that all media that came before it did. It's just easier to access the distribution and the ad sellers now. But it's plagued by the same demons as the thing it replaced. So if you're going with option 1 above, you're likely doomed to be a starving artist. If you're going with option 2, you either produce the content that serves the most ads or you don't get paid. And before you start to say "sponsorship" or "influencer," please understand that these methods have always existed and they last died out with broadcast radio by the 1950s. I'll reference the long tail again. Matt Damon can sell the hell out of some crypto. You and I cannot. So it's starve or serve the algos or don't do it. But I know what you're thinking, you plucky and attractive creator ... How do you get past the algorithm problem? I do not know. There, I said it. It took me about three months before I had reached the end of quality YouTube.
Some of you will be shocked it took that long. And it's only getting worse, because ... What comes after TikTok? Or, what happens when the next thing comes along that lowers the common denominator even further? Or, what happens when the average attention span shrinks to the point where it's not even content anymore--it's literally all shiny colors and flashing lights. Funny story. TikTok banned me because I don't conform to the seven seconds of stupid that it demands for its videos. I don't care because posting on TikTok always made me feel like I was being Uncle Cringe. I'm just saying the channels are no longer just actively promoting the lowest common denominator, they're enforcing it. Good luck, starving artists! Is the creator economy sustainable? So this is the million-dollar question, for real. But let me answer that question by asking another question. Who needs another podcast in their lives? Once the entire creator economy is saturated, and I believe we're close if not already there, where is the sustainable income for creators going to come from? This is actually what I talked about in my post six months ago, only then I was optimistic. Now, I'm not so sure. The minute someone hits the end of the algorithm, as I did after a few months, all those ad-friendly quality videos suddenly stop, as they did for me, and then the timeline is all garbage -- a never-ending infinitely thin marching line of angry opinions on movies. I actually kinda like the angry ones. It's the fawning ones I can do without. Opinions are fine, but everybody has one, and the ones that count are the ones with experience and facts behind them. Until we start to slam those factors into the algos, the creator economy will be a quagmire, no matter who is doing the backing. But I also know tech, and I know that going against the algorithm is pointless. The whole system needs to be replaced. 
Quality costs money, and that money has to make a lot more money for someone else before quality can matter. Until then, the lowest common denominator will always win, and when the kids get tired of TikTok's too-long seven-second videos, we'll all move to an app that's just flashing lights and bright colors. Expert Opinion By Joe Procopio, Founder, TeachingStartup.com @jproco

Friday, August 16, 2024

How AI Helped This Founder Fix a Massive and Dangerous Problem With Prescriptions

Yoona Kim, 44, wanted to help people access health care. That desire led her down a compounding path--from an early career analyzing health care cost trends, to pharmacy school, and then to a PhD program in health economics. Eventually, she encountered an important, solvable problem: Millions of people are prescribed suboptimal--and often dangerous--medications each year. So in 2017, Kim and CTO Penjit "Boom" Moorhead co-founded Arine. The platform, powered by AI, recommends safer, more effective medication regimens. It's grown by illuminating the shadowy side of the $1 trillion U.S. pharmaceutical industry. I always saw gaps in the health care system. My mom is a public health nurse, and even when we were kids, she would bring us to volunteer at public health fairs and county clinics. I remember seeing lines of people out the door--so many people who struggled to get health care. I spent 20 years in health care in different roles. Most of what I did was evaluate the outcomes of different medical interventions. In 2017, everyone was focused on medication adherence and dispensing and delivery solutions for pharmaceuticals. But I realized nobody was focused on the root of the problem: Are patients even taking the right medications in the first place? As a health economist, I'm used to looking at data and evaluating outcomes. My co-founder, Boom, is a nuclear physicist-turned-data scientist and developer. We could've come to market with a manual medication-review service for individual patients. Instead, we spent two years building a platform to be scalable. Humans can't get it right every time. There are so many different data points you need to piece together, like which other medications have been tried and failed, what other conditions the patient has, and their age, gender, and life situation. This is the biggest area of waste in our health care system today--spending $530 billion on the problems that are caused by taking the wrong medicine. 
We really needed to utilize AI to solve this massive problem. Before we went to investors, we launched a pilot with a Medicaid plan, and we had some case studies. One of them showed that a patient was taking two to three doses of six separate heart-failure medications. Nobody caught it. He was going in and out of the hospital because of heart failure--probably because of these multiple medications--and every time he went in, he received a new set of medications. Because of the data silos that exist in our system, none of the other doctors could see what other medications were being prescribed to the patient. We have so many stories like this. By making sure a patient's medications are the safest and most effective ones, we can reduce hospitalizations by 40 percent, and that, in turn, reduces the total cost of care. The biggest challenge of turning this platform into a reality was finding our initial seed financing. It was hard to convince investors that changing a medication can positively impact the total cost of care and the total health of an individual--as well as reduce complications and hospitalizations. Nobody knew how big the medication problem was, because everybody thought someone else was taking care of it. I talked to more than 50 investors, and then stopped keeping count. I spent over a year fundraising. It's probably no coincidence that another Asian American woman put in the largest seed check. Oklahoma Health Care Authority was our first customer, and Magellan Health was our first large client. Now, we work with more than 35 health plans and risk-bearing providers, including five national health plans, seven Blue Cross Blue Shield plans, and two national pharmacy benefit managers. We've had 100 percent customer retention since inception. Our biggest driver of growth is focusing on the mission. We need to improve patient outcomes and health for the 20 million lives on our platform. By doing that, the rest will fall into place. 
More than 50 percent of Americans fill three-plus medications every year. The minimum cost savings we've seen for every patient is $1,500 annually. This is hundreds of millions of dollars for a large health plan. We are just scratching the surface of the impact we can have.
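As a rough illustration of how the per-patient figure scales to "hundreds of millions of dollars," here is a back-of-envelope calculation. The member count is a hypothetical assumption for the sketch; the article supplies only the $1,500 per-patient annual savings:

```python
# Back-of-envelope check of the savings claim.
# per_patient_savings comes from the article; members_on_medications
# is an assumed plan size, not a figure from the article.
per_patient_savings = 1_500        # dollars saved per patient, per year
members_on_medications = 200_000   # hypothetical members filling 3+ prescriptions

total_annual_savings = per_patient_savings * members_on_medications
print(f"${total_annual_savings:,}")  # $300,000,000
```

At a plausible large-plan scale, the per-patient number does indeed land in the hundreds of millions annually.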

Wednesday, August 14, 2024

Your Guide to Understanding the E.U.'s AI Act

On August 1, 2024, the European Union's Artificial Intelligence Act (AI Act) came into effect, bringing significant changes to how AI technologies are regulated within the E.U. Why the AI Act Matters The AI Act aims to ensure AI systems are ethical, transparent, and respectful of fundamental rights, addressing potential risks while balancing innovation and public safety. For U.S. businesses, this means adhering to clear guidelines and standards when dealing with E.U. companies or operating within the E.U., ensuring your AI practices align with these new regulations. The Act applies to all professional AI applications within the E.U., excluding military, national security, research, and non-professional uses. If your company uses AI in any professional capacity within the E.U. or partners with E.U. companies that do, compliance with this act is mandatory. Key Steps to Navigate the AI Act 1. Conduct an AI Audit Begin by auditing your current AI systems. Identify which applications fall under the AI Act's jurisdiction. Common examples include customer service chatbots, predictive maintenance tools, and recruitment software. Understanding which of your AI applications are affected is the first step toward compliance. During the audit, document the functionality and data flow of each AI application. This detailed overview will help in classifying each AI system under the AI Act and identifying areas requiring immediate attention. 2. Understand the Risk-Based Classification The AI Act categorizes AI applications into four risk levels: minimal, limited, high, and unacceptable. Here's what each means for your company: Minimal Risk: Includes applications like spam filters or video game AI, which pose a negligible risk. These are not regulated under the AI Act. However, it's important to monitor these systems to ensure they remain low-risk as your business evolves. Limited Risk: Requires transparency but less stringent regulations. 
For instance, AI-generated content should be clearly labeled to inform users. Ensure any AI-driven interactions with customers are clearly communicated to maintain transparency and trust. High Risk: Includes applications that impact health, safety, or fundamental rights, like recruitment algorithms or diagnostic tools. These require rigorous checks, transparency, and regular assessments. Establish regular review processes for these applications, ensuring compliance with the AI Act's stringent requirements. Unacceptable Risk: These are banned because of their significant potential for harm, such as AI for real-time biometric identification in public spaces. Stay informed about these prohibitions to avoid legal repercussions and maintain ethical standards. 3. Implement Compliance Measures For high-risk applications, invest in security and transparency. For example, if you use AI for hiring, ensure the system is regularly tested for fairness and accuracy. Provide clear information to candidates about how their data is used and how decisions are made. Implementing robust compliance measures involves more than just technical adjustments. Foster a culture of compliance within your organization. Train your staff on the importance of ethical AI use and the specifics of the AI Act. Encourage a proactive approach to identifying and addressing potential compliance issues before they escalate. 4. Set Up a Compliance Team Consider establishing a dedicated team to oversee AI compliance. This team can regularly review your AI systems, update protocols, and train staff on regulatory requirements. External audits can provide an objective assessment of your compliance status. Your compliance team should be well-versed in both the technical and legal aspects of the AI Act. They should stay updated on any changes to the legislation and be ready to adapt your compliance strategies accordingly. 
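One lightweight way a compliance team might record its audit results against the act's four tiers is a simple lookup table. The application names and tier assignments below are illustrative readings of the examples in the text, not an official E.U. taxonomy or legal advice:

```python
# Hypothetical audit record: map each inventoried AI application to the
# AI Act risk tier the team believes applies. Assignments are illustrative,
# based on the examples discussed in the article.
RISK_TIERS = {
    "spam filter": "minimal",
    "video game ai": "minimal",
    "customer service chatbot": "limited",      # transparency duties apply
    "recruitment algorithm": "high",            # affects fundamental rights
    "medical diagnostic tool": "high",
    "real-time public biometric identification": "unacceptable",  # banned
}

def classify(application: str) -> str:
    """Look up an application's recorded tier; flag anything unaudited."""
    return RISK_TIERS.get(application.lower(), "unclassified: audit needed")

print(classify("Recruitment algorithm"))  # high
print(classify("Spam filter"))            # minimal
```

The point of the sketch is the workflow, not the data: anything the audit has not yet classified surfaces explicitly, rather than silently passing as low-risk.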
Regular communication with regulatory bodies can also help your company stay ahead of compliance challenges. Leveraging the AI Act for Competitive Advantage Compliance isn't just about avoiding penalties. It can also enhance trust and innovation. Here's how: Transparency: Clearly communicate how your AI systems work and how data is used. This builds customer trust and shows your commitment to ethical practices. Transparency can differentiate your company in the marketplace, attracting customers who value ethical business practices. Security: For high-risk applications, robust cybersecurity measures are essential. Protecting sensitive data not only ensures compliance but also safeguards your business reputation. Investing in advanced security technologies can also enhance the overall resilience of your operations. Education: Educate your employees and customers about AI. Demystifying AI technologies can build a more informed and supportive relationship with all stakeholders. Offer workshops, webinars, and informational materials to help your team and clients understand the benefits and limitations of AI. Transparency and security should be central to your AI strategy. Develop clear policies on data usage and AI decision-making processes, and make these policies easily accessible to your customers and partners. This openness can foster a stronger, more trusting relationship with your audience. Detailed Steps to Strengthen Compliance 1. Regular Training Programs Organize regular training sessions for your team to keep them updated on the latest AI regulations. These sessions should cover the basics of the AI Act, how it affects your company, and specific compliance procedures relevant to your operations. Training should be continuous and adaptable. As new AI technologies and applications emerge, your training programs should evolve to address these changes. 
Encourage your staff to stay informed about industry trends and regulatory updates, fostering a knowledgeable and compliant workforce. 2. Develop Clear Documentation Maintain detailed documentation of all AI processes and decisions. This includes how AI systems are developed, tested, and deployed. Clear records help demonstrate compliance and can be crucial during audits or reviews by regulatory bodies. Documentation should be comprehensive and regularly updated. It should include data flow diagrams, decision-making protocols, and compliance checklists. This thorough documentation not only aids in regulatory compliance but also serves as a valuable resource for internal audits and process improvements. 3. Engage With Industry Peers Join industry groups or forums that allow you to share experiences and learn from other companies facing similar challenges. These networks can provide valuable insights into best practices for compliance and innovation. Engaging with industry peers can also help you stay informed about emerging regulatory trends and potential challenges. Collaborative efforts can lead to the development of industry-wide standards and practices, making it easier for all companies to navigate the regulatory landscape. New Regulatory Bodies and Their Roles The AI Act establishes new bodies to oversee its implementation, such as the AI Office and the European Artificial Intelligence Board. These bodies ensure consistent application and provide support to businesses. U.S. companies should be aware of these bodies and their roles to stay informed about compliance requirements. The AI Office: Attached to the European Commission, it coordinates the AI Act's implementation across member states. European Artificial Intelligence Board: Comprises representatives from each member state, advising the commission and facilitating consistent application. 
Advisory Forum: Represents stakeholders from industry, startups, civil society, and academia, offering technical expertise. Scientific Panel of Independent Experts: Provides technical advice to ensure regulations align with the latest scientific findings. How Compliance Can Help Your Business The E.U.'s AI Act is a significant step toward safe, transparent, and innovative AI use. For U.S. companies interacting with E.U. firms or operating in Europe, understanding and complying with this act is crucial. By auditing your AI systems, understanding risk classifications, implementing robust compliance measures, and leveraging the act for competitive advantage, your company can thrive in this new regulatory landscape. Compliance will not only protect your business but also enhance your reputation and trustworthiness among customers and partners. By taking proactive steps and fostering a culture of compliance, U.S. companies can both meet regulatory requirements and position themselves as leaders in ethical AI use. This approach will build stronger customer relationships, improve operational security, and drive innovation, ultimately contributing to long-term business success. Expert Opinion By Benjamin Laker, Professor, Henley Business School, University of Reading @DrBenLaker

Monday, August 12, 2024

The humanoid robot market is sizzling, and competitors are pushing to get their products perfected.

Figure, a Sunnyvale, California-based AI robotics company, just rolled out its latest machine: Figure 02. Let's not say rolled out, though. Figure's 02 strolled confidently onto the buzzy android scene on its own two legs. Reengineered and redesigned from top to bottom, 02 is a big advance over the company's already sci-fi-like 01 model, and could hit factory floors by 2026. It seems Figure knew exactly what to do with the $675 million it received in February--funding that came from high tech names like Microsoft, Nvidia, and Jeff Bezos himself. In a press release shared with Inc., Figure says that 02 is the "highest performing humanoid robot" on the market. The company claims that 02 incorporates advances across the robot's entire design, "including AI, computer vision, batteries, electronics, sensors, and actuators." Figure's version 01 robot was already capable of impressively human-like chatting, and subtle movements like loading capsules into a coffee maker, as shown in a previous video. Part of the smart stuff inside 02 is its use of "Vision Language Model," or VLM. You may know that the current crop of cutting-edge AI chatbots rely on core technologies called Large Language Models, or LLMs, which are all about understanding human words. In comparison, Figure 02's AI vision system is all about enabling "fast common-sense visual reasoning from robot cameras." Robots currently in use in factories or warehouses are usually only capable of moving from point to point and picking up precisely oriented objects from exact positions. But the real world is never that precise. In the 02 promotional video, the robot moves metal pieces around a factory floor, placing them into manufacturing jigs. When one piece doesn't quite fall into place, the robot gives it a nudge with a finger in exactly the sort of common-sense way a human would. Part of this dexterity comes from its human-like hands, which Figure says have very "human-like strength." 
The 02 model also has three times the computer power on board compared to the 01 robots, so it can better work autonomously. Figure's website explains why robots like 02 are human-shaped: it's to fit into our world, where doors, tools, stairs, and other physical pieces of the environment are shaped for human bodies. Also, people are moving around all the time--something robot coworkers have to deal with. Figure's vice president of Growth, Lee Randaccio, explained in an email to Inc. that "people are dynamic, so making sure robots can seamlessly adapt to environments and make the same decisions as humans/complete tasks in entirety is the hardest part of the tech." The addition of speech-to-speech powers, via AI models trained in partnership with OpenAI, essentially makes 02 smarter than the first version. The new model talks to humans better, sees better (it's equipped with six cameras, compared to our two eyeballs), has better power systems, senses its environment more precisely, and moves its limbs better. The press release says the company wants machines like 02 performing a "wide range of tasks across commercial applications." Randaccio confirmed Figure's busy "continuing to test and iterate" its hardware and software to "prepare for deployment." Figure has been working with BMW Manufacturing to see how its robots actually perform in the harsh environment of an industrial workplace, and Randaccio said this has recently included 02, which performed "AI data collection and use case training." Meanwhile, the company is not just setting its sights on having robot workers on factory floors, where they can work in dangerous zones that are risky for humans: "in the near future," the press release says, you may have a 02 working "at home." If all this sounds familiar, it's because several companies are competing in the humanoid robotics space, including well-known robot maker Boston Dynamics. But Figure 02's most direct rival is Tesla's Optimus machine. 
Elon Musk has said he's confident that Optimus is actually Tesla's future, that the machines may cost just $20,000, and that they'll be tested in its facilities in 2025. But Optimus's competition definitely includes 02. Though Figure says it currently doesn't have a number to share for pricing, CNBC says price estimates for robots in this class range from $30,000 to $150,000. And Randaccio said that 2026 is when Figure expects to see "many robots operational in customer sites." By Kit Eaton @kiteaton

Friday, August 9, 2024

3 Signs You've Become a Disrupter in Your Industry

The business world loves disrupters. Those who upend the status quo and offer truly noteworthy innovation -- from Ford to Netflix -- often end up in the history books, serving as case studies for other would-be entrepreneurs. Of course, while everyone would like to disrupt their industry, true disruption isn't always easy to come by. Fortunately, resilient, forward-thinking organizations are less likely to be disrupted by technological and other changes. However, true disruption occurs when your business changes the established rules or norms of the industry, doing things differently in a way that improves operational efficiency or creates more value for your customers. And quite often, there are tell-tale signs your efforts have proved successful. 1. The market is responding to your way of doing things. How your target market responds to your disruptive efforts is one of the earliest (and most obvious) indicators as to whether you are truly disrupting your niche. When you provide real value to your target audience, they will respond by making your brand a priority. For example, as a USA Today analysis of photonics and spectroscopy manufacturer tec5USA notes, the company's focus on tailoring custom solutions based on the needs of individual customers has played a pivotal role in its efforts to expand its contract manufacturing services into new sectors. By building off its pre-existing photonics and optics alignment expertise, the company has been able to disrupt its niche by expanding its manufacturing capabilities to include medical devices like laboratory analyzers and pharmaceutical production devices. Such efforts have been a success thanks to the market's need for lower overhead costs and margin increases during a volatile economic landscape, leading to significant growth for the company. 
As another example, the success of Ford's disruptive automobile manufacturing processes resulted in over half of the world's registered automobiles in the 1920s being a Ford. Deliver what your niche wants, and user behaviors (particularly their spending habits) will adapt to accommodate your disruptive efforts. 2. Others are attempting to copy your disruptive efforts. One of the most obvious indicators that you are successfully disrupting your industry is when other businesses attempt to imitate what you're doing. Netflix dominated the video streaming industry for years after introducing its streaming service in 2007; now, it seems like just about every entertainment brand has its own streaming service, with an estimated 2,286 services available in the U.S. alone. You don't have to introduce an entirely new business model to start seeing copycat efforts from your competitors. Any business practices that you implement that help you operate more efficiently or better meet the needs of your target audience will likely be adopted by others. While this imitation is certainly flattering, it's also an indicator that you need to stay vigilant and flexible in this new landscape. Successful disrupters who become too complacent ultimately risk being disrupted themselves. 3. Your business is scaling successfully. Industry disruptions aren't always as dramatic as the ones that make the history books. But when you're doing things differently from the norm, it can also be helpful to look at the context of your business's ability to scale. In other words, your disruptive efforts should enable your company to expand its operations in a manner that helps you stay profitable and continue to deliver value to your customers. Successful disruptive businesses will be more likely to attract and retain top talent who contribute to the culture of innovation and constant improvement that you need to cultivate. 
They also fine-tune their systems and processes so that expansion occurs in a controlled manner that helps them maintain efficiency. These businesses grow at a pace and in a manner that allows them to deliver the same level of quality to their customers, even as the scale of their operation grows. While some challenges are inevitable with growth, truly disrupting an industry means you'll continue to find success as your operations expand. Find success as a disrupter. Disruption doesn't always have to be directly customer-facing to have a meaningful impact on the market (though it often is). Even when you're reworking your internal processes, successful disruptive efforts can have a significant impact on your ability to deliver true value to your customers. As you strive to continuously innovate and improve, the tell-tale signs of disruptive success are sure to follow. Expert Opinion By Heather Wilde, CTO, theDifference @heathriel

Wednesday, August 7, 2024

An OpenAI Tool Can Spot Text Made by ChatGPT, but You Can't Use It

Ever since generative AI started generating content--text, images, videos--knowing what is and isn't made by artificial intelligence has been a central question and concern. In an election year when deepfake problems and AI misinformation are in the spotlight, identifying AI-made materials assumes even greater importance. That makes it especially surprising that OpenAI, the market-leading AI company, has long had an incredibly reliable tool for identifying texts made by its systems available--and it won't release it. AI text-spotting The news comes via the Wall Street Journal, which learned from OpenAI insiders that the project has been "mired in internal debate" for about two years and was judged ready for a public launch for around a year. What's been keeping this critical tech hidden away? Apparently, internal concerns about what releasing it would do to the world--and to OpenAI's profits. On one level, the merits of letting the detection tool out seem obvious. If OpenAI released a tool that was incredibly accurate (99.9 percent effective, the Journal reports) at detecting its own text, it would help educators spot when students were using the AI tool to cheat, it would help job recruiters spot when applicants were using AI to answer interview questions, and it would have a thousand other uses when it's important or merely useful to know about the presence of inappropriate or maybe even illegal use of AI-generated text. But that would very likely deter people from using ChatGPT, pushing them onto other AI systems, and hurt OpenAI's bottom line. Both sides of this debate are supported by user data: people across the world support an AI-detecting tool by a four-to-one margin, according to the Journal, yet about 30 percent of surveyed ChatGPT users indicated they'd use the software less if they thought the text was somehow watermarked so it could be detected as AI-made. People simply don't want others to know they're getting an AI boost. 
OpenAI is also concerned that any such move to make it easy to spot ChatGPT-created text may disproportionately impact non-English speakers, presumably because they're using the app to simply translate text from one language to another. TechCrunch reports the company said it also worries it's easy to circumvent the tool, which works only on AI-generated text from ChatGPT, using relatively low-tech tricks like passing ChatGPT-made content through another AI system. That deception means someone using ChatGPT for malicious reasons could get away with it if any monitor assumed its ChatGPT-spotting tool was a completely effective safeguard. Nevertheless, TechCrunch spotted an OpenAI blog post update that says that it's still pursuing a number of different tools to allow easy checking for any AI provenance, including using metadata, the behind-the-scenes labels added to computer files--for example, photo metadata might say where the shot was taken, and AI metadata could reveal which chatbot made the content. Metadata is "cryptographically signed, which means that there are no false positives," OpenAI notes, meaning that in the future it might be harder to hide the fact you've used AI to make content. Tell me, ChatGPT, what do people ask you? Meanwhile, other researchers are assessing exactly what content people want AIs to produce. Make a guess at the top three before you read on. We suspect that you'll get one or two dead right. The Washington Post summarizes the results neatly: it's all about sex and homework. The data, gathered from a research dataset from AI chatbots using the same core tech as ChatGPT, show that 21 percent of the first query of the day chat requests from a random sample were about "creative writing and role play." This is the biggest category of AI prompts in the data, and it sounds like simple, silly uses like "tell me a story about a fish named Steve." 
Meanwhile, 18 percent of queries were about seeking help with homework and 15 percent were on work or business topics--showing average users really are trying to get AIs to do their dirty work for them. Only 5 percent of queries were about health and advice, and only 2 percent addressed job seeking. Reporting on the dataset, the Washington Post also points out that while AIs have filters in place to prevent them talking dirty, people do seem keen to try to get around the blocks for "emotional connection and sexy talk." This shows how far AIs have already reached into our society, and that as well as being used for their smarts, they're being used for their softer side--despite the fact that they very clearly have no, er, softer parts. By Kit Eaton @kiteaton

Monday, August 5, 2024

Safari and Mail Summaries are the kind of AI every company should be thinking about.

When Apple introduced updates to its operating systems at WWDC this summer, the only thing anyone wanted to know was what Apple had planned for artificial intelligence. The iPhone maker was generally considered to be far behind competitors like Google, Microsoft, and Meta, which are all either developing their own large language models, or integrating those of the de facto leader, OpenAI. Apple, as you might expect, took a measured approach, mostly focusing on making Siri smarter and more useful, as well as a handful of text and image generation tools. It called its flavor of AI, Apple Intelligence, because, of course. On Monday, it finally shipped a beta version of iOS 18.1 and macOS 15.1, which includes some of those Apple Intelligence features, to developers. I've spent a day now using Apple Intelligence on an iPhone 15 Pro Max, as well as an M3 MacBook Air, and the most interesting thing about it so far isn't that it's missing many of the most high-profile features. The most interesting thing is that the best parts of Apple Intelligence aren't what you think. Look, every tech company is trying to figure out how to sprinkle AI into pretty much every product they make. The thing is, a lot of it is just fancy marketing for computers doing computer things. A lot more of it is just overhyped chatbots that will readily lie to you. But, Apple managed to come up with a few subtle features that I think will change the way a lot of people use their iPhones. Safari Safari is the default, and most important mobile browser in the world. In iOS 18.1, it gets Apple Intelligence summaries of webpages that give you a brief overview of what a page or article is about. It seems like such a small feature, but it's actually really practical, and really useful. 
Being able to quickly summarize long content to pull out key information or decide if you want to spend the time to read the entire thing is something I find myself doing all the time with Apple Intelligence. It's also the type of thing that generative AI tools are very good at. Mail Along similar lines is the new Mail Summarization. In fact, this might even be more useful than Safari Previews since email is a thing that a lot of us deal with all day. The good news is that Mail.app will now summarize your emails and display those summaries in the email preview instead of just showing you the first two lines of text. This is just brilliant and it's the way every single email should function. If I'm only going to see a dozen or so words in the preview of an email, it's so much more helpful for those words to be a summary of what the entire email is about, instead of just whatever words the sender typed first. Anything that helps me quickly triage emails and decide whether to archive, act on, or delete them quickly saves me time every day. Let me just say that I'm sure people will love using features that let you do things like rewrite text to be more professional or to proofread large amounts of copy. And I'm sure people will have plenty of fun with image-generation tools like Genmoji and Image Playgrounds. But I don't think those are the real quality-of-life improvements that Apple Intelligence promises to make us a little more productive. To be fair, none of the text or image generation stuff is in the 18.1 beta anyway. Neither is the ChatGPT integration. For those, we'll have to wait until later this year at the earliest. Apple Intelligence itself won't even ship with the initial version of iOS 18. Instead it'll likely come a few weeks later with 18.1. Apple may have taken a far less ambitious approach than other tech companies when it comes to incorporating LLMs into its software, but I think it's the right approach. 
Sure, there's some stuff in there that's just meant to get attention, like the image-generation tools. For the most part, however, Apple seems to be sticking with features that are actually useful and make using its devices better. There's a good lesson here, which is that your main job should always be thinking of ways to delight customers by making the experience of using your product better. Right now there's a huge temptation to lean into the hype by making promises about stuff that sounds really cool but that doesn't actually improve anyone's life. Apple just showed that the best AI features aren't necessarily flashy. They just make it easier to use the technology you already have. Expert Opinion By Jason Aten, Tech columnist @jasonaten

Friday, August 2, 2024

Google's Cringey AI Olympics Commercial Is Backfiring in a Big Way

Google is facing a backlash over an Olympics-themed AI ad that critics find tone-deaf. "Gemini, help my daughter write a letter telling Sydney how inspiring she is," the narrator of the July 26 ad asks Google's AI assistant, Gemini. (The man's daughter reveres Olympic hurdler Sydney McLaughlin-Levrone.) He continues: "And be sure to mention that my daughter plans on breaking her world record one day." Over triumphant music, the ad then shows the proprietary chatbot drafting a letter to the Olympian--or, as the ad puts it, offering "a little help from Gemini."

The criticism followed soon after. "As a general 'look how cool, she didn't even have to write anything herself!' story, it SUCKS," Linda Holmes, a pop culture correspondent at NPR, wrote. "Who wants an AI-written fan letter?" Shelly Palmer, a Syracuse University media professor, wrote in a blog post that it was "one of the most disturbing commercials I've ever seen." Meanwhile, TechCrunch editor Anthony Ha remarked in a recent article: "It's hard to think of anything that communicates heartfelt inspiration less than instructing an AI to tell someone how inspiring they are." People on Reddit don't seem to like it either.

It's worth noting that another part of the ad, which depicts Gemini being used to help the narrator train his daughter better, hasn't really caught much flak--suggesting that critics are less concerned about people using AI for mundane tasks such as searching for information or generating ideas, and are more focused on how the software could corrupt something as innocent as a child writing a personal letter to her hero. Nor has another Olympics-adjacent AI ad, this one from Microsoft, faced nearly as much online sniping. It, too, tugs at heartstrings, but the use cases Microsoft pitches in it aren't nearly as intimate: summarizing morning calls, for instance, or analyzing data.

Marketing and media outlet The Drum reports that Google developed and produced its ad in-house.
Google did not respond to a request for comment from Inc. The company is already the "official search AI partner of Team USA" for the Paris Olympics, with various AI features set to be integrated into NBCUniversal's coverage of the event. Despite the drama, AI has proved quite lucrative for Google, which saw steady growth last quarter amid a pivot toward generative AI technology, including in Google's marquee search engine.

The blowback here is reminiscent of another recent advertising snafu from a tech giant: Apple's "Crush!" ad from earlier this year. Both ads demonstrate that consumers bristle when massive tech corporations co-opt intimate human experiences to shill some new product.

All in all, the backlash to the company's new ad suggests that there are still many pitfalls for firms that want to market AI products to the general public. People remain skeptical of the technology even under the best of circumstances, and touching on particularly sentimental or heartfelt themes--especially in relation to something as emotionally charged and distinctly human as the Olympics--seems to be asking for trouble.