Monday, July 8, 2024

Is That AI Safe? Startup Anthropic Will Pay to Check

As the battle between the AI giants heats up, the topic of AI safety is always hovering in the background, because these ever-smarter tools can be both powerful and incredibly dangerous. It's for this reason that one of the leading AI makers, Anthropic, which makes the AI system Claude, is starting a program to fund the creation of AI benchmarks, so that we'll all be able to more accurately measure both the smarts and the potential impact of AI systems.

Making sure AIs are safe

In a blog post, Anthropic explains that "developing high-quality, safety-relevant" evaluations of AI quality and impact "remains challenging, and the demand is outpacing the supply." Essentially, as more and more AI systems come online and the pressure rises to measure them so we understand their value and their riskiness, there aren't enough evaluation tools to go around. To help solve this, Anthropic believes its investment could "elevate the entire field of AI safety, providing valuable tools that benefit the whole ecosystem."

Anthropic's post goes into great detail about the exact qualities it's trying to encourage third-party evaluators to measure. It mentions specifics like the risks AI may pose to cybersecurity, its potential for social manipulation (critically important in an election year), national security risks like "defense, and intelligence operations of both state and non-state actors," and even the chance that AIs could "enhance the abilities of non-experts or experts" to create chemical, biological, radiological, and nuclear threats. It also says it wants the ability to measure "misalignment," a situation where AIs "can learn dangerous goals and motivations, retain them even after safety training, and deceive human users about actions taken in their pursuit."

AI safety is a tricky problem

This is high-level stuff, addressing a very difficult problem that has troubled even OpenAI, the industry's current market leader. To keep its own AIs safe, OpenAI formed a superalignment team in mid-2023--the same year a brief boardroom scandal saw CEO Sam Altman temporarily removed because some board members worried about the direction he was taking the company. However, the leaders of that team recently left the organization, sparking fresh concerns. One of those executives, Ilya Sutskever, subsequently launched his own startup with the express goal of building safe AIs in a development environment insulated from the financial pressures faced by other AI startups.

Anthropic's program to tackle AI safety will involve third parties that have submitted their plans to the company and been selected to develop the relevant AI-measuring tools. Anthropic will then "offer a range of funding options tailored to the needs and stage of each project." As news site TechCrunch points out, the expectation is that these third parties will build whole AI-assessing platforms that let experts craft their own AI safety assessments, and will also run "large-scale trials of models involving 'thousands' of users."

Safety first! But are AIs really that much of a threat?

TechCrunch also points out that some of the scenarios Anthropic illustrates in its blog post are a little far-fetched, especially since some high-profile experts, including futurist guru Ray Kurzweil, have suggested that fears of AI as an existential threat to humans are somewhat overblown.

Clearly the better move is to err on the side of caution, though, especially when players with skin in the game--like OpenAI's Altman and entrepreneurial AI-maker Elon Musk--are loudly voicing their concerns about the potential risks of the same AIs they're spending billions to build. The news that even a leading AI maker is worried about the threat the technology poses should give business users pause. We know how useful AI can be to a company--but it's worth reminding your staff that it poses certain risks too, and that its output shouldn't be trusted without at least a double-check.

Meanwhile, when Inc. asked OpenAI's ChatGPT how easy it is to measure AI safety, it was pretty candid: It admitted it was a tricky job, but then added, "as for me, I'm designed to be a helpful tool. I operate under strict guidelines to ensure I provide accurate, safe, and useful information." It also noted that it was supposed to be an assistant, "not to pose any threat." We're not entirely sure how we feel about the fact it said "me" in that statement.

BY KIT EATON @KITEATON

Friday, July 5, 2024

The Biggest Problem With Apple Intelligence Is That It Won't Run on the Best AI Device Ever Made

I've spent a lot of time thinking about Apple Intelligence over the past few weeks. I say thinking because you can't actually use any of the features Apple demonstrated during its WWDC keynote last month. You can't create your own Genmoji or ask Siri to remind you when your mom's flight is supposed to arrive. You also can't look at a PDF and have Siri send it to ChatGPT to answer questions about whether you're allowed to have a pet lizard. But all of those things are impressive demos, and I'm excited to try them once they're available. If they work the way Apple promises, your next iPhone is going to be a lot more interesting.

Of course, we've seen a lot of impressive demos over the past year. What we haven't seen are any impressive products. The Humane Ai Pin is basically a flop. The Rabbit R1 isn't just a failure; it turns out it's not even really an AI gadget. To be fair, there have been some impressive features announced by Microsoft and Google, and ChatGPT is obviously a thing. Google's Magic Eraser feature in its Photos app is both cool and practical. As far as devices that use AI to dramatically improve the way we interact with computers, however, there's basically nothing.

That's a shame, because a wearable device you use to interact with a smart assistant capable of doing more than just setting timers or showing you "results from the web" would be a step change in personal computing. That's the entire premise behind the Ai Pin and R1--create a device that serves as a way of accessing an always-present assistant that can interact with your own personal information and apps. The problem is, none of them work. They don't have access to your personal information, they don't have apps, and the hardware isn't up to the task. For example, the Ai Pin gets only a few hours of battery life--at best--and that's when it doesn't shut down because it's too hot.

Do you know what gets incredible battery life and has very capable hardware? An Apple Watch. Look, I've been saying for a while now that the perfect AI wearable gadget is the Apple Watch. If I'm going to wear a device that I can interact with, the Apple Watch is already the right form factor--it just needs to be smarter. That's the entire premise of Apple Intelligence--make Siri smarter.

The problem is, Apple Intelligence doesn't run on the Apple Watch. For that matter, it won't run on anything but the most high-end recent iPhones--the iPhone 15 Pro models. Even if you bought a standard iPhone 15 in the last year, you're out of luck. Presumably, anything with an iPhone 16 in the name will be capable of running Apple Intelligence, but there are something like 1.5 billion iOS devices in the world, and most of them won't be able to run it.

For Apple's effort to be successful, that needs to change. One way is for the company to get a version of it running on the Watch. Or, at least, make it possible for your Watch to interact with a capable iPhone. After all, the Watch is basically an accessory for your iPhone. It's a very capable accessory, but for most people, it's a way to get notifications or information from your iPhone without having to actually use your iPhone. Which, to be honest, is great. But it would be better if you could ask Siri a question on your Watch and it would either send the query to Apple's cloud service or just feed it to your iPhone. It's great that you'll be able to do all kinds of AI things on your iPhone, but the reason a wearable device seems so appealing is that it's more accessible.

You don't have to pull out your iPhone just to ask a question or get information. Imagine if you could just ask Siri, via your Watch, "What time does my mom's flight arrive?" or "Will we have enough time to get from the airport to our dinner reservation?" Your Watch would interact with the services on your iPhone to find out when the flight is supposed to arrive, whether it's still on time or delayed, where and when you made a dinner reservation, and whether you'll be able to get there based on directions and current traffic conditions. Presumably, you'll be able to do most of that at some point on your iPhone, but Apple's real killer move would be to make all of this possible on the best AI device form factor--the Watch.

EXPERT OPINION BY JASON ATEN, TECH COLUMNIST @JASONATEN

Wednesday, July 3, 2024

Microsoft's AI Chief Says Your Content Is Fair Game If It's on the Open Web

The warning used to be that anything you put on the internet stays there--somewhere--forever. The advent of artificial intelligence models has put a twist on this, and now it's that anything you post online will end up in an AI, whether you want it to or not. That is, at least, what Mustafa Suleyman, a co-founder of the DeepMind AI lab that Google acquired and now the CEO of Microsoft AI, thinks. According to Suleyman, it's fine for AI companies to scour every corner of the open web--which is, arguably, anything on any website that's not protected behind a paywall or login interface--and to use what they find to train their algorithms. In an era of rapid growth by data-hungry generative AI services, it's a stark reminder that you or your company should never publish anything on your website or a social media service that you don't want to be publicly accessible.

Fair use or abuse?

Speaking to CNBC at the Aspen Ideas Festival last week, Suleyman referred to the widely accepted idea that whatever content is "on the open web" is freely available to be used by someone else under fair use principles, news site Windows Central reports. This is the notion that if you quote, review, reimagine, or remix small parts of someone else's content, that's OK. Specifically, Suleyman said that for fair use content, "anyone can copy it, recreate with it, reproduce it." Note his use of the word "copy," rather than simply "reference" or "remix." Suleyman is implying that if someone has published text, imagery, or any other material on a website, it's fine for companies like his to access it wholesale. This is already somewhat questionable: Fair use isn't designed to enable outright copying, and one of the big no-nos is copying someone else's work for your own financial gain.

But Suleyman's next words will worry critics who think big AI's powers are already too vast. Suleyman acknowledged that a publisher or a news organization can explicitly mark its content so that systems like Google's web-indexing bots--the code robots that tell Google's algorithm where everything is online--either can or cannot access the info. He also noted that some sites mark content as accessible to bots for search indexing only, but not "for any other reason," such as AI training. He said this is a gray area, and one that he thinks is "going to work its way through the courts." Suleyman may be hinting that he thinks sites simply shouldn't be able to bar their content from being looked at by AI. This is timely, since the high-profile AI firm Perplexity is in the spotlight for allegedly ignoring exactly this sort of online content marking, and then scraping websites' data without permission.

Data, data everywhere, for any AI to "drink"

The big problem is that generative AI needs tons and tons of data to work; without data, it's just a big box of complicated math. With data, AI algorithms are shaped to reply with real-world information when you type in a query. In the hunt for this data, some companies, like Google and OpenAI, have struck deals with content platforms, like Reddit, to gain access to billions of pieces of text, photos, and more that users have uploaded over the years. But other companies have been accused of questionably or even illegally snaffling up data that they really shouldn't take. The New York Times and other newspapers have launched lawsuits based on this, and three big record labels just sued some music-generating AI companies that they claim have illegally copied their music archives.
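For readers wondering what that content marking actually looks like: the standard mechanism is a robots.txt file, published under the long-standing Robots Exclusion Protocol, in which a site lists which crawler user-agents may fetch which pages (OpenAI's crawler identifies itself as GPTBot, for instance, while Google's search crawler is Googlebot). The short Python sketch below, using only the standard library, shows how a well-behaved crawler might check those rules before fetching a page; the site address is a placeholder, and whether a given AI crawler actually honors the rules is exactly the gray area Suleyman describes.

from urllib import robotparser

# Load a site's robots.txt rules (example.com is a placeholder, not a real publisher).
rules = robotparser.RobotFileParser()
rules.set_url("https://example.com/robots.txt")
rules.read()

page = "https://example.com/news/some-article"

# A publisher that welcomes search indexing but not AI training might publish:
#   User-agent: GPTBot
#   Disallow: /
# Googlebot is Google's search crawler; GPTBot is OpenAI's data-collection crawler.
for bot in ("Googlebot", "GPTBot"):
    print(bot, "may fetch this page:", rules.can_fetch(bot, page))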
As more and more companies, from one-person businesses to giant enterprises, embrace AI technology, this is yet another reminder that you need to be careful with answers spit out by a chatbot: Someone needs to check that the output isn't violating another company's intellectual property rights before you use it. It's also a stark warning to be very careful that your website, social media posts, or any other content you publish online isn't giving away proprietary data you'd prefer to keep private, because it could simply end up being used to train someone's smart AI model.

BY KIT EATON @KITEATON

Monday, July 1, 2024

We Should Talk About Why the Gen-Z Workforce Is So Angry

Hey Google, Apple, Amazon. All you tech companies. Got a second? We need to have a little sidebar about Gen Z, because you've got a major problem bubbling. Here's the thing. The way you've always approached work -- the 9-to-5, the org structure, the career ladder, the assumed authority -- they don't want any of it. I wrote a post last week about the deterioration in tech company culture, in which I said: "We're seeing a generational shift in how and where we work. I feel like maybe the boomers established 'modern-day' work, Gen X rebelled against it, Millennials are borderline revolting against it, and Gen Z sees it as completely foreign." Holy smoke, that line resonated. You're essentially asking Gen Z to do what you've done your whole life, but do it wearing clown shoes. Man, I love a good/terrible metaphor.

Look. I'm not talking to Gen Z in this article, a group that's catching a lot of strays these days. I'm also not talking for them. I did not appoint myself the spokesman for a generation. I tried that with Gen X and I couldn't get them to rebel, so I know that won't work this time either. Honestly, I don't even want to be here. This time, I'm talking directly to the companies themselves. And I'm not here to cause a scene, I'm just giving you a heads-up. Gen Z is pissed. At you. Not "them." You.

It Runs Deeper Than Generational Differences

This is not just an issue of sitcom-ready generational differences between age groups. That's the first mistake the tech companies make when thinking about it, which is why I'm not surprised that you don't have answers. This is more than malaise. This is not about anarchy. It's not a lack of generational understanding on either side. But this is the first place where Gen Z is catching strays. You're blaming natural generational differences, which skims over the real issues and just makes it worse. It's not about quiet quitting or presenteeism or creative loafing or -- honestly, I can't keep up with the buzzwords anymore. All those articles with the splashy headlines are wrong. It's deeper than that. Gen Z fully knows what you're "about." They're just rejecting it.

What the Tech Companies Think Gen Z Is About Is Irrelevant

I'm not a big believer in generational dynamics. I mean, everyone says "Remember when Saturday Night Live was actually funny?" I get it. There are some obvious generation-defining touchstones: references, trends, and especially technical advancements. I'm more inclined to point to the PC generation, the internet generation, the mobile generation, and the social generation, because those labels have more of an impact on group behavior and dynamics than a somewhat arbitrary milestone like age. But this still isn't that. Gen Z is not angry because you're keeping them out of their TikToks and their text chats. And this is the second place where Gen Z is getting maligned. You're blaming the victim here. And what's more, you're doing it without actually doing it. You're not blaming Jane, you're blaming a person roughly the same age as Jane who isn't Jane, but then also connecting every misdiagnosed personality trait from that unspecified person to Jane. Which, obviously, just makes it worse. OK, so what is their problem then?

When You Started Work, It Was Work

With every generation that came before Gen Z, when we became a part of the tech workforce, you gave us actual work to do. It had meaning. Gen X didn't invent the internet, but we worked with it to make it into something that no business, and eventually no person, could live without. Apologies. But even then, I saw the technical evolution start to create a divide. The generations that came before us needed us to work with the technology, because they didn't really understand it. But make no mistake, they decided how that technology would be harnessed to build things and run businesses. They still do. And then, when our jobs didn't feel like jobs, which was like 25 percent of the time, they came to us and said, "Well, I know you're bored and unsatisfied, here's a program. Here's a policy. Let's get you out of your cubicle and give you something else that isn't typing commands into beige boxes." They didn't say exactly that. They weren't that clever. But this is when our workdays started filling up with TPS reports and other garbage that kept us occupied but didn't let us accomplish anything. There's an entire movie about it called Office Space. Highly recommend. Oh, also, this is what pushed me into entrepreneurship.

Each Generation Has Had Less to Do

The Millennials didn't invent mobile or its platform, but you gave them enough work to do with it to occupy, let's say, 40 percent of their time, and the other 60 percent was just filled with fluff. This could be anything from "running HubSpot" to "Slack moderator" to ... literally nothing. Speaking of art imitating life, one of the funnier aspects of a workday at Hooli, the fictional stand-in for Google or Yahoo on HBO's Silicon Valley, was not how little most employees did, but how accepted it was. Same guy that did Office Space. Mike Judge. Visionary.

Now here we are with Gen Z, and work isn't work anymore. It's all fluff. Man, it's not that they don't want to come back to the office. They don't want any part of what they do when they get there. The reason there isn't any work-life balance isn't because life has gone away, it's because work has gone away. It all just blends now. Now work is all programs and policies and TPS reports. Work is literally connect-the-dots and paint-by-numbers and fill-in-the-(Agile)-blanks and check-the-(Jira)-boxes. The only place it isn't like this is in the very early days of tech startups, which is why so many Gen Zers want to be entrepreneurs. So the anger comes out against BS programs, office culture, corporate culture, tech culture, jobs, careers, capitalism, and democracy, and then you get your anarchy.

Ultimately, Gen Z wants satisfaction, not just participation. They see through the fluff because it's all fluff. And by the way, you're also telling them that AI is already making them redundant anyway. Not helping. Also not true. As I mentioned earlier, when I saw this devolution of work begin to happen, I lucked into startups and entrepreneurship, and I never looked back. I'm not rich. I don't have a vacation home or a boat or a country club membership. But I don't wake up angry every morning. Ultimately, that's the thing. That's all Gen Z wants. I think. Anyway, I'm sure they'll tell me if I'm wrong.

EXPERT OPINION BY JOE PROCOPIO, FOUNDER, TEACHINGSTARTUP.COM @JPROCO