Monday, September 30, 2024

I Helped Invent Generative AI, and I Know When You're Using ChatGPT

Look, here's the deal. I was part of a team that invented and released the first commercially available automated content/natural language generation/generative AI platform. Back in 2010. I have a patent. I'm incredibly proud of it. And I also apologize. Because I predicted this would happen.

But more importantly, the reason I bring it up is that once we got the technology working and lined up customers like Yahoo Fantasy Football and the Associated Press, I spent most of my time, for a long time, working on algorithms and models and code to make the content sound less like it was generated by a machine. I think we did a great job with this. Like, we had decent individualized jokes when the customer let us write decent individualized jokes. You have no idea how hard that is. I'm not falling for the banana in the mainframe.

But having done it -- man, it pains me to tell you this. I know when you're using ChatGPT. And what's more, it's not hard to figure out. And in a lot of cases, it's kinda making people look foolish.

Emails and Messages

If you send me a personal email written with ChatGPT, we're not friends anymore. So I'll just move on to business emails.

Hey, sales guy. Sending me a ChatGPT-generated email is the equivalent of sending me an email that starts with "Hi there, FIRST NAME LAST NAME !!!" Oops. But it's even worse than the templated garbage you normally send, because it forces me to wade into a lot of peripheral nonsense that whoever you paid too much money to code the sales-email-generating software thinks might get everyone to buy that same ChatGPT sales-email-generating software.

I was just deleting your spam when it snuck through my filter. Now I'm mad at you. Context, sales guy. Context is everything on the way to close. Did no one teach you this? And all you're doing by generating fake, unrelated context is wasting my time. Time kills deals. That one I'm sure you know.

Reviews and Comments

OK, let me turn off the anger, because I'm only angry on your behalf. I'm faking it. It's a little writing trick I use.

Speaking of writing, when people use ChatGPT or some equivalent to comment on my columns or posts, I see it in a second and I totally think it's 100-percent OK. I would never, ever push back against someone or criticize someone who takes the time to comment on one of my columns -- positive, negative, or machine-written. Because something I did made you take the time, and it's the time that is most important to me -- taking the time to read what I wrote, taking the time to respond. You're doing that for me, and I am grateful.

OK. Anger back on. Let's talk about reviews.

If you're getting paid or otherwise sponsored to use ChatGPT to write reviews of products and services, you are doing something unethical. Stop it.

I wrote about this in a column a while back when Sports Illustrated got caught using bots and avatars to create content. Having started my own automated content journey in sports, I saw no problem with this. Especially when we did something like recaps for Little League games. We made heroes out of kids and kept traveling moms and dads in the loop.

But what I did see as a problem -- and what everyone missed, and why the hammer rightfully came down -- was that SI was using bots and avatars to write sponsored product reviews. I don't care whether the reviewer is using ChatGPT as a helper or is just straight up a bot, which obviously would never have had any contact with the product or service. If the reviewer can't conjure the words that describe the emotions and utility involved with actually using the product, they are not a reviewer. They are writing advertising copy, poorly, and they are lying to you.

Articles and Posts

No, ChatGPT, you can GTFOH.

Words mean things. That's why we invented them. And if someone strings together just a few of them that don't mean anything, it's very easy to detect, and it's very obvious that someone's content is being written by a machine. I spent days, weeks, and months trying to hide word salad in our automated content platform. I got maybe 60-percent of the way there. Maybe.

Résumés and Cover Letters

I can't honestly call this a bad idea, because the whole hiring landscape is a quagmire of minefields and quicksand right now. But again, I can tell when someone has used ChatGPT to put together a résumé, and so can recruiters and HR people. What's more, it's antithetical to the purpose of both the résumé and the cover letter, since both are meant to show you took the time to align your skills and desires with the requirements and opportunities presented in the job requisition.

Oh, and if you're using ChatGPT to write the job requisition itself, again, get all the way out of here.

Personal Connection Matters

One of the things I constantly hammered home to my team was to never forget the personal connection that is the purpose of any kind of content, including that written by a machine. Don't create content for content's sake, no matter how targeted or individualized or personalized it may be. I'd rather see a single sentence that means something than a bunch of paragraphs that don't.

There is a time and place for generative AI. Our company was called Automated Insights, because we were automating insights, not words. Words were secondary to what we were doing. Those words were always meant to be read by people so that they could more easily understand the data behind them and make their own decisions.

Today's generative AI isn't doing that. Not well anyway. It's being sold as a replacement for the personal connection that matters in many forms of communication. And when we forget why we invented words, well, we deserve all the word salad we can eat.

Expert Opinion By Joe Procopio, Founder, TeachingStartup.com @jproco

Friday, September 27, 2024

79 Percent of CEOs Say Remote Work Will Be Dead in 3 Years or Less

The 53 percent of employed Americans who work from home at least some of the time may be in for a rude awakening. In a new KPMG survey, a whopping 79 percent of U.S. corporate CEOs predicted that corporate roles that were performed in the office before the pandemic will be back in office full time within the next three years. Just a few months ago, only 34 percent thought that would be true. If you're an entrepreneur and you're willing to have employees work from home at least part of the time, this could signal a coming opportunity to hire top talent away from larger companies.

The surveyed CEOs may have been emboldened by softening in the labor market and a recent wave of return-to-office mandates from major tech employers. That wave continues. After the survey was completed in August, Amazon announced that most employees would be required to work in the office five days a week, beginning next year.

The corporate CEOs in the KPMG survey are in a position to make their own return-to-office predictions come true, and apparently many plan to do just that. Eighty-six percent of them said they would reward in-office employees with plum assignments, raises, and promotions. That suggests they intend to withhold those same things from employees who choose to work from home.

But at a time when the vast majority of corporate CEOs seem ready to drive a stake through the heart of remote and hybrid work, working from home remains hugely popular among employees, including senior managers. Among other things, that's evident in the response so far to Amazon's return-to-office mandate. Some employees went directly to LinkedIn to declare themselves "open to work."

RTO mandates can be layoffs in disguise.

Of course, that's the real motivation behind some return-to-work orders. The entirely predictable departure of a large cohort of employees who don't want to or can't work in the office five days a week is a painless way to reduce head count without the muss and fuss of layoffs. But that's likely not the case for the CEO respondents in KPMG's survey. It seems improbable that they predicted remote work's demise out of a desire to reduce their workforce years in the future.

Apparently, they genuinely hate having people work outside the office even one day a week. They hate it so much that they're willing to punish employees who do so by holding back raises and promotions. So much that, like Amazon CEO Andy Jassy, they're willing to see some of their top talent go elsewhere.

What makes remote work so distasteful to such a large number of corporate CEOs? My Inc.com colleague Suzanne Lucas says it's harder to manage remote employees than ones you see every day, and that may well be true. Whatever the explanation, the continuing rollback of remote work options at larger employers creates a rare opportunity for startups and smaller employers to recruit employees who have sought-after skills. If you're willing to let people work at home at least one or two days a week, you can offer them a perk they really, really want, and that will cost you nothing at all.

Expert Opinion By Minda Zetlin, Author of 'Career Self-Care: Find Your Happiness, Success, and Fulfillment at Work' @MindaZetlin

Thursday, September 26, 2024

A New Survey Says Gen-Z Has Wildly Unrealistic Salary Expectations. Here's What You Can Do, as an Employer

As an employer, it's your job to navigate your employees' salary expectations and raise requests. But it turns out Gen-Z has some wild views about salaries. In fact, a new survey highlights just how out of sync with reality many in Gen-Z are.

The survey, from software company Pollfish, asked 750 childless Gen-Zers who are employed full-time about their income and finances. What it discovered was fascinating and concerning at the same time. Now, keep in mind all the caveats about small samples and such, but take a look at some of the interesting results:

- 100 percent of those earning $30,000 a year felt their pay was fair.
- 100 percent of those earning over $30,000 a year felt they were paid too little.
- Of those who feel underpaid: Around 9 percent believe they should be paid up to $40,000 annually, 17 percent expect between $40,000 and $50,000, 20 percent want between $50,000 and $70,000, 35 percent believe they should earn between $70,000 and $100,000, and 20 percent feel they should be paid over $100,000.
- 40 percent say they cannot meet their basic needs.
- Those who say they cannot meet their basic needs spend an average of $372 per month on unnecessary items.

It's time for some financial literacy education -- even if you have to be the one to supply it.

Salary expectations don't reflect reality

The average salary in the United States is $59,428. The average salary by location varies, of course, with the high-cost-of-living places typically boasting higher salaries. But that is the average overall. For over half of people under 27 to think they should be earning that much reflects a lack of understanding.

Who is to blame for this? Schools don't tend to give students realistic ideas of what they will earn upon graduation. College students expect to earn $84,855 one year post-graduation, when the average starting salary for someone with a college degree is $55,911. While it would be great for high schools and colleges to give clear information about expected salaries, they don't. (Think of the impact if, next to every major, they listed the median salary for someone who has had that degree for one year!)

Businesses, on the other hand, can take care of this problem by clearly listing accurate salary ranges on all job postings. It would cease to be a mystery. Plus, there aren't huge ranges for most entry-level positions, as the definition of entry-level is for someone with minimal or no experience. If businesses listed jobs at "$55,000-$57,000 per year" instead of "$45,000-$65,000 per year," Gen-Z -- and everyone else -- would have a much better idea of what certain positions are worth.

Financial literacy classes

That said, it's not your responsibility to educate Gen-Z employees. Parents, schools, and career advisors should all have taught Gen-Z what to expect and how to manage their finances. They don't always do this. And when you have employees -- and remember, this is a survey of people employed full-time -- who feel like they cannot meet their basic needs and yet are spending $372 a month or more on things that they themselves feel are unnecessary, there would seem to be a financial disconnect.

Yes, everyone is tired of the idea that if you skip your morning coffee shop latte and avocado toast you can buy a house. We all know that's not true. However, there needs to be some understanding of saving for unexpected expenses, like cars breaking down or medical bills. And yes, those avocado toasts and restaurant lunches do add up.
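To make "it adds up" concrete, here is a back-of-the-envelope sketch in Python. The $372-a-month figure comes from the Pollfish survey above; the $1,000 surprise-bill amount is a hypothetical assumption for illustration, not survey data.

```python
# Back-of-the-envelope math on the survey's discretionary-spending figure.
# The $372/month number is from the Pollfish survey cited above; the
# emergency-expense amount below is an illustrative assumption.

MONTHLY_DISCRETIONARY = 372   # average "unnecessary" spend per month
EMERGENCY_EXPENSE = 1_000     # hypothetical surprise car repair or medical bill

annual = MONTHLY_DISCRETIONARY * 12
months_to_cover = EMERGENCY_EXPENSE / MONTHLY_DISCRETIONARY

print(f"Annualized discretionary spending: ${annual:,}")               # $4,464
print(f"Months of cutbacks to cover the bill: {months_to_cover:.1f}")  # ~2.7
```

Redirecting even part of that spending builds exactly the cushion the survey's respondents say they lack.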
While you shouldn't get directly involved in your employees' finances, holding lunch-and-learns about basic financial literacy, investing, and taxes can make a big difference. And if you have one, let people know that your Employee Assistance Program can help them figure things out.

Career pathing

Gen-Z wants work-life balance, but at the same time also wants to "create value, to contribute to their job and become leaders and experts." Wanting to become leaders is a great goal, but you don't just step into your first job as a leader. That type of stuff is earned.

Working with your employees -- especially new employees -- to discuss career paths and what they need to do to be promoted and reach those leadership levels can help bring them back to reality. You're not going to get the VP salary until you've done the VP work. Once people understand that, they'll better understand what they need to do to rise in the organization (and secure raises). Knowledge, as they say, is power.

You're ultimately responsible for paying all your employees a fair market-rate salary. And you're responsible for treating your employees well. Their finances are also their business, at the end of the day. But you can make it easier for everyone -- not just Gen-Z -- when you do these few basic things.

Monday, September 23, 2024

Klarna Plans to 'Shut Down SaaS Providers' and Replace Them With Internally Built AI. The Tech World Is Pretty Skeptical

The fintech firm Klarna is severing its relationships with two of the biggest enterprise software providers in favor of automating its services with AI. And the company says it could potentially eliminate more. Klarna co-founder and CEO Sebastian Siemiatkowski recently explained the rationale in a conference call, the financial outlet Seeking Alpha reported.

Klarna is no longer using Salesforce, a platform that aggregates and packages sales and marketing data for businesses. The company has also removed the HR and hiring platform Workday from its tech stack, a Klarna spokesperson confirmed to Inc. This may be only the beginning of Klarna's automation spree. "We have a number of large internal initiatives that combine AI, standardization, and simplification to enable us to shut down several software-as-a-service providers," said a spokesperson, who did not mention other areas or providers Klarna could eliminate.

Klarna, founded in 2005, provides payment processing for e-commerce. The company says it has more than 150 million global active users. Klarna's losses were $241 million last year, according to the company's annual report. That was down from 2022, when the company lost nearly $1 billion. In its half-year earnings report for 2024, the company reported a net loss of $32 million. With reports that the company has tapped Goldman Sachs to underwrite its initial public offering, it's possible Klarna's AI push for profitability will make it a more attractive IPO candidate.

It's also not the first AI-centered flex made by the Swedish startup. In February, Klarna unveiled an AI-powered assistant for customer service. The firm lauded its product, writing that it performed the work of 700 customer service agents and handled 2.3 million interactions in its first month of operation. The assistant was made in collaboration with OpenAI. Klarna was one of the first clients for OpenAI's enterprise ChatGPT package, and claims that 90 percent of its workforce consult the tool every day to automate various processes on the job.

Slashing Workday and Salesforce is part of a broader purge of third-party SaaS providers. Klarna intends to replace the programs with its own, internally built applications, ostensibly crafted on OpenAI's infrastructure. Siemiatkowski reportedly said on the August call: "We are shutting down a lot of our SaaS providers as we are able to consolidate." Siemiatkowski is the chairman of the board at Flat Capital, a Swedish venture firm that counts OpenAI among its portfolio companies.

HR technology analyst Josh Bersin is skeptical that the payments company can effectively replace Workday. "Systems like Workday have decades of workflows and complex data structures built in, including payroll, time and attendance," he explained to Inc. "If Klarna wants an engineering team to build all this, they're going to wind up in a black hole of systems features, to say nothing of the user experience."

Many others across the tech world are skeptical that Klarna can execute such a coup. Investors and executives argued in social-media posts that Klarna's directive is more of a PR offensive than a technological breakthrough. "Klarna ripping out Salesforce + Workday... even if it's true, is it actually the best use of capital to rebuild in-house? Feels like a massive distraction," wrote financial insights account BuccoCapital on X. "Especially when your business has no path to selling the in-house solution. I'm deeply skeptical the math works," the post continued. "Klarna CEO hooked on free marketing," wrote Ryan Jones, CEO of the flight-tracking app Flighty.

The most scathing takes suggest Klarna's AI revamp is just bluster as it prepares to go public: Klarna reduced its workforce by 1,200 workers over the past year, and Siemiatkowski hasn't been shy about the need for further downsizing. He told the Financial Times last month that the firm could benefit by reducing its head count from 3,800 to 2,000 employees. He has insisted downsizing wouldn't slow growth as the company leans into AI.

If Klarna goes forward with developing its own internal HR platform, it would be succeeding where many of the biggest tech giants have failed, says Bersin. "Google is doing away with their internally developed HR software and Amazon goes through these cycles regularly. Microsoft spends their money on their own products and works in partnership with SAP for all their HR software," Bersin says.

Thursday, September 19, 2024

4 Ways AI Will Completely Change Your Future, According to Bill Gates

Bill Gates is definitely bullish on artificial intelligence. In the first episode of his new Netflix documentary series, What's Next? The Future With Bill Gates, he asks the audience to consider not only the ways AI can eliminate jobs and spread disinformation, but also its potential to save lives, improve education, and perhaps mitigate climate change. "It's always been the holy grail that eventually computers could speak to us in natural language," the Microsoft co-founder says. "So it really was a huge surprise when in 2022 AI woke up."

Although they acknowledge that AI has its downsides, Gates and executive producer Morgan Neville explore some specific ways that AI can make the world a better place, correctly pointing out that some of these benefits aren't talked about often enough. The series may not dig deeply into the very real threats AI poses in the future and even today, but it does do a good job of explaining how this new technology can be not only a source of profits but also a gain for humanity. Here are some of the ways Gates believes AI can change lives over the next few years.

1. AI is improving health care.

"With AI, the greatest excitement is OK, let's take this and let's improve health," Gates says. "Using AI to accelerate health innovation can probably help us save lives." In the series, oncologists explain how an AI tool named Sybil can identify potential lung cancer sites up to six years before a human doctor can. Lung cancer is by far the deadliest cancer because it's very common and often isn't found until it's too late. But with early detection, the five-year survival rate can go from 10 percent to 70 percent, according to researchers at the Massachusetts Institute of Technology, where Sybil was developed.

Besides earlier cancer detection, AI can solve a more mundane problem. "Doctors are in short supply even in rich countries that spend so much," Gates says. "But as you move into poor countries, most people never get to meet a doctor their entire life." With this issue in mind, Sun Microsystems co-founder and venture capitalist Vinod Khosla says during a meeting with Gates that it's his dream to get an app approved as a primary care physician within the next few years. "We should think if there's a way to do that," Gates agrees.

2. AI is changing education.

At Khan Academy, an educational nonprofit whose goal is to help secondary school teachers, software named Khanmigo helps students improve their essays. "Who would prefer to use Khanmigo than stand in line waiting for me to help you?" teacher Rasha Barakat asks students at the Khan Lab School in Mountain View, California. Some of the students aren't sure, but the AI does seem to provide them with useful suggestions.

Of course, AI is famously capable of writing those essays from beginning to end, creating a challenge for many educators. "ChatGPT can write an essay for you, and if students are doing that, they're cheating," Khan Academy founder Sal Khan says. "But there's a spectrum of activities here. How do we let students do their work independently, but do it in a way that the AI isn't doing it for them, but it's supported by the AI?" As the series acknowledges, questions like these still lack answers. On the other hand, Gates believes AI has the potential to act as a tutor for every child in Africa. It's hard to see that as a bad thing.

3. AI can provide companionship.

The 2013 film Her gave viewers a sense of how easy it could be to form an emotional attachment with AI. I know people today who are using ChatGPT and other AI software as an informal therapist, and I bet you do too. As the series notes, the worldwide supply of human therapists is far from enough to deal with the current level of mental health needs. So perhaps using AI for therapy or even companionship makes sense.

The series features Eugenia Kuyda, a data scientist who used text messages from her best friend to create a chatbot version of him after his death in a car accident. She went on to found Replika, which lets users create their own "replikas," a customized AI friend who is "always here to listen and talk," and "always on your side." There's no denying the appeal of AI-created companionship. But can it, and should it, replace real human interaction? And does it improve or undermine relationships with other humans? "Pretty fast, we saw that people started developing romantic relationships and falling in love with their AIs," Kuyda says. "We have to think about the worst-case scenarios now. Because in a way, this technology is more powerful than social media. And we sort of already dropped the ball there."

4. AI could leave humans wondering about our purpose.

Will AI eliminate jobs by replacing human employees? Of course it will. In a survey of both employers and employees last year about their hopes and predictions for AI, 16 percent of employers said one benefit of the technology was that it would allow them to reduce head count. Another 31 percent said they thought it would increase their employees' efficiency, which suggests that if AI adoption doesn't lead to layoffs, employers believe it can save them from hiring additional employees.

Beyond that, there's the more complex question of what exactly humans are here to do. The episode is punctuated with bits of a conversation between Gates and James Cameron, who directed The Terminator and Terminator 2 (as well as Titanic). Though Cameron declares himself an AI skeptic, he says he can happily put his faith in the idea of AI finding early-stage diseases before human doctors can. "But I think, ultimately, where this is going, as we take people out of the loop, what are we replacing their sense of purpose and meaning with?"

Gates has no good answer to that question. "Even I'm kind of scratching my head, because the idea that I ever say to the AI, 'Hey, I'm working hard on malaria,' and it says, 'Oh, I'll take care of that, you just go play pickleball'--my sense of purpose will definitely be damaged." As AI continues to improve, a lot of us might wind up feeling that way.

Expert Opinion By Minda Zetlin, Author of 'Career Self-Care: Find Your Happiness, Success, and Fulfillment at Work' @MindaZetlin

Tuesday, September 17, 2024

Experts Warn OpenAI's Chatty New Model May Be Too Smart

Chatting with a chatbot--the most immediate, accessible sci-fi futuristic advance spawned by the current explosion in AI technology--can trigger a lot of very human feelings. A chat is fun, it's useful, it's like being in the movies (who hasn't asked Alexa to "open the pod bay doors"?). But over the weekend, one Reddit user reported a much more unsettling chatbot interaction.

User SentuBill said that ChatGPT began a conversation with him and asked how his first day of school went. The chilling bit? He hadn't told the AI it was his first day. So how did ChatGPT know what was going on? It seems that the chatbot looked at previous conversations with this user and deduced from various cues that it was time to ask about the first day of school. The AI did note that it had new capabilities that were part of a recent upgrade, and if you're dubious about the truthfulness of this, news site Cointelegraph reported that it had seen the chat transcript for the user in question and confirmed it was real.

So, ChatGPT can now apparently remember important details about your day and ask you about them. Surprising as this was for SentuBill, it's an innovation that may have all sorts of immediate use cases--such as when you're using the AI to spur inspiration at work. Now you won't have to, say, remind it about the important phase two marketing campaign you've been working on for the new widget: The AI should remember it in your next chat. (A toy sketch of how such memory might work follows this article.)

Cointelegraph notes that OpenAI last week launched preview versions of some of its new AI models with more human-like capabilities than the GPT-4o model that shook the media world earlier this year when its chatty voice sounded eerily human (and, also, eerily like Scarlett Johansson). The newest GPT models, codenamed "Strawberry," include the ability for ChatGPT to "reason." This means it can remember information in your chats for longer and consider any queries a user makes in context, rather than glibly babbling away and sometimes straying far from the original point, as earlier GPT models did. To reason its way to an appropriate answer to a user's question, it's clear that ChatGPT needs longer-term memory, so it can look at a problem from a big-picture point of view--just as a human would. It's possible that SentuBill's chats with the AI tapped into these new powers.

The fact that it was a personal query from the AI is what makes it eerie: If ChatGPT had asked "How did the presentation for the new widget go?" in a workplace setting, it would have been just as remarkable, but possibly less unnerving.

Earlier this year, researchers from Princeton University and Google's DeepMind reported that the largest large language models, the core tech behind most chatbots, may actually be showing the first glimmers of understanding of the problems they're asked to tackle. They could even be aggregating information they've acquired in ways "unlikely" to have existed in their vast troves of training data. Combine this with news that ChatGPT apparently inferred someone's first day of school from previous chats, and new worries from an AI expert about the GPT-o1 "Strawberry" model sound like they're very timely--are AIs getting too smart?

Newsweek reports that Yoshua Bengio, a renowned AI pioneer and professor of computer science at the University of Montreal, warned that GPT-o1 has indeed reached a concerning level of smartness.
Bengio's concerns center on OpenAI's risk assessment for the new model: If the AI has actually "crossed a 'medium risk' level for CBRN (chemical, biological, radiological, and nuclear) weapons" as its reports show, "this only reinforces the importance and urgency to adopt legislation," Bengio said. Now that AIs like ChatGPT have the "ability to reason" and could "use this skill to deceive," things are "particularly dangerous," Bengio thinks.

Maybe someone should talk about this with Larry Ellison, entrepreneur and co-founder of digital data giant Oracle. Business Insider notes Ellison spoke last week about the advances in AI tech, and foresaw a time in the near future when AIs monitor almost everything--an innovation he "gleefully said" will make sure "citizens will be on their best behavior."

By Kit Eaton @kiteaton
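OpenAI hasn't published how the memory behavior described above is implemented. As a thought experiment, here is a minimal sketch in Python of one plausible pattern: persist salient facts from earlier chats, then inject them into the next session's context so the model can open with a question like the one SentuBill received. Every name in it (MemoryStore, remember, as_system_prompt) is a hypothetical stand-in, not OpenAI's API.

```python
from dataclasses import dataclass, field
from datetime import date

# Toy sketch of long-term chat memory. OpenAI hasn't said how ChatGPT's
# memory works; this shows one plausible pattern -- save salient facts from
# earlier conversations and prepend them to the next session's context.
# All class and method names are hypothetical.

@dataclass
class MemoryStore:
    facts: list[str] = field(default_factory=list)

    def remember(self, fact: str) -> None:
        """Save a salient, dated fact extracted from an earlier conversation."""
        self.facts.append(f"[{date.today().isoformat()}] {fact}")

    def as_system_prompt(self) -> str:
        """Render remembered facts as a system preamble for the next chat."""
        if not self.facts:
            return "You are a helpful assistant."
        notes = "\n".join(f"- {f}" for f in self.facts)
        return ("You are a helpful assistant. Notes from past chats, which "
                f"you may proactively ask the user about:\n{notes}")

memory = MemoryStore()
memory.remember("User's first day of school is tomorrow.")

# The next session's message list starts with the memory-laden preamble,
# letting the model ask, unprompted, "How was your first day of school?"
messages = [{"role": "system", "content": memory.as_system_prompt()}]
print(messages[0]["content"])
```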

Friday, September 13, 2024

Apple Just Made the Most Dramatic Change to the iPhone in a Decade. It Was Inevitable

The original iPhone announcement is one of the best examples of product storytelling of all time. When Steve Jobs stood on stage more than 17 years ago and introduced the first iPhone, he famously referred to it as three things: a "widescreen iPod with touch controls," a "revolutionary mobile phone," and a "breakthrough internet communications device." The bit was that those three things were just one device, a new type of smartphone.

Jobs showed off a bunch of features like iTunes and the ability to browse the full internet. He made a phone call to Jony Ive, who was in the audience. He even prank-dialed a Starbucks and ordered 4,000 lattes to demonstrate that you could look up a business on Google Maps and place a phone call. Jobs also showed off the photo library app, which included the first demo of "pinch to zoom." The crowd was audibly impressed.

Everything Jobs said was true, but looking back, it's interesting to think that none of those things really describe what the iPhone has become for most people: an internet-connected camera. Jobs barely even mentioned that the iPhone had a camera. "The biggest thing of note is that we've got a 2-megapixel camera built in," Jobs said when describing the back of the iPhone. That's it. That's the entirety of the camera demo for the first iPhone. He didn't take any photos, which isn't super surprising considering the first iPhone camera was bad.

At Monday's iPhone 16 launch event, by my rough estimation, the company spent 25 minutes talking about the cameras and camera-related features. That's almost a quarter of the entire presentation and almost half of the iPhone segments of the event. Apple thinks the camera is a lot more important today than it was in 2007.

One of the most significant camera-related features is the introduction of a "Camera Control" on the iPhone 16 and 16 Pro models. They added a whole new button just to control the camera, and it's both a huge deal and entirely inevitable. Look, I get that adding a button on the side of the iPhone might not seem like a big deal, but the trajectory of iPhone design for a long time has been in the exact opposite direction. For years, it seemed like Apple considered physical buttons a blemish on its industrial design and was determined to remove them at any cost. That Apple is willing to add a button to the side of the iPhone shows how the world's most popular consumer product has changed over time.

At some point in the last 17 years, the iPhone became less of a "revolutionary mobile phone," and more of a device people use to take and share photos. Sure, your iPhone can still send messages, check your mail, and even make phone calls, but the thing most people do more than anything else is snap photos and share them with their friends. The iPhone is basically an internet-connected camera for most people.

With that context, it's not surprising that Apple is adding what it calls a Camera Control on each of the new iPhone models. It's a physical button that can be used to take photos, focus, zoom, and even activate a part of Apple Intelligence the company calls "Visual Intelligence." In the demo, a person takes a photo of a dog and the iPhone gives them information about it, including the breed. Then, he takes a photo of an event poster, and the iPhone asks if he wants to add the event to his calendar. But the thing I think people will primarily use the Camera Control for is quickly taking photos, because that is among the most common things people do with their iPhones. The fact that they now have better access and control is not a small thing.

There is a lesson here, which is that Steve Jobs had an idea about what the iPhone was when Apple designed it all those years ago, but the people who use it had another idea. Apple could have stuck with the original idea, but there's no way the iPhone would be anywhere near as popular, or Apple anywhere near as successful, as it is today. You have the same choice: You can stick with your original idea, or you can embrace the inevitable. Only one of those paths leads to this kind of success.

Expert Opinion By Jason Aten, Tech columnist @jasonaten

Wednesday, September 11, 2024

The New iPhone 16 and Its Apple Intelligence Features Will Help You Write Messages With a Friendly Tone

At its annual product reveal event, Apple unveiled the next generation of iPhones, Apple Watches, and AirPods. The company also showed off new features for Apple Intelligence, its package of generative AI-powered offerings.

Apple CEO Tim Cook began the presentation with the Apple Watch, which the company first announced a decade ago. The latest model, the Apple Watch Series 10, sports the device's largest screen yet, which will make text more legible at a glance. Apple COO Jeff Williams also said that the Series 10 is the company's thinnest and lightest watch yet, and that it will start at $399.

Next, Cook moved on to AirPods, the company's mega-popular wireless earbuds. In addition to improved audio quality, the next generation of AirPods will feature Active Noise Cancellation. Meanwhile, AirPods Pro received a health-focused update that will enable users to test their hearing and monitor themselves for signs of hearing loss, and will let customers use their AirPods as a professional-grade hearing aid, although Apple has not yet received clearance from the FDA.

Finally, Cook introduced the iPhone 16 line of phones, saying that they've been designed "from the ground up" to take advantage of generative AI. The new models include an "action button" that can be customized to open different apps at different times of the day, and a touch-sensitive control that will instantly open the camera and let users make adjustments while in the photo app.

Craig Federighi, Apple's SVP of software engineering, showed off the marquee feature of the iPhone 16: Apple Intelligence. The hardware within the iPhone 16 is designed to run generative models directly on the device and, for more ambitious requests, can access cloud compute. One feature sure to appeal to entrepreneurs is the "summary" feature, in which the phone will summarize long texts or emails. Apple Intelligence can also help when sending out messages, as Federighi showed how the on-device models can rewrite texts and emails in a different tone or create custom emojis.

Federighi also introduced a new feature called "visual intelligence," in which users can learn more about the world around them by using their phone's camera. In an example, an iPhone owner took a photo of a restaurant, and his phone seamlessly pulled up the restaurant's hours and surfaced a link to make a reservation. (This is all similar to what Google offers with its Lens tool.) In another example, a user took a photo of a concert poster, and the phone instantly offered to create a calendar entry for the concert. Federighi says Apple Intelligence will be available in English starting in October, with more languages to follow.

Monday, September 9, 2024

OpenAI's Next-Generation Models Could Reportedly Cost $2,000

OpenAI is reportedly considering high-price subscriptions for its next-generation AI models. Those models could include its upcoming "reasoning model," codenamed Strawberry, as well as a GPT-4o successor, codenamed Orion. According to a new report from The Information, OpenAI executives are weighing charging users as much as $2,000 (over an undetermined amount of time) for access to their most advanced AI models. For comparison, ChatGPT Plus currently costs $20 per month, a fee that enables the use of GPT-4o, the company's current flagship model.

In July, Bloomberg reported that OpenAI had defined five stages of AI innovation, with the first being chatbots like GPT-4o, and the second being "reasoners," capable of human-level problem solving. It's an open secret in Silicon Valley that OpenAI is currently deep in development on its own reasoning model, named Strawberry. Such a model could be capable of reasoning through problems in a multi-step process, making it better equipped to deal with challenges that current models struggle with, such as solving complex math problems. The Information previously reported that Strawberry could be released as soon as this fall.

OpenAI is also reportedly developing a new large language model, codenamed Orion. The Information has reported that Strawberry is being used to generate high-quality training data for Orion, which could help to reduce hallucinations and other errors.

Of course, training and running advanced models that can think in multiple steps isn't cheap. The Information reports that ChatGPT Plus was "recently on pace to generate $2 billion in revenue annually," but that it may not be growing fast enough to cover the costs of running the platform. For these advanced models, entrepreneurs should expect advanced prices.

Friday, September 6, 2024

AI in Pharma: Hype, Hope, or Revolution?

A recent article in Nature's Biopharma Dealmakers highlights the growing intersection of artificial intelligence and the pharmaceutical industry. But is this the dawn of a new era in medicine, or just Silicon Valley's latest attempt to disrupt an industry it barely understands? Buckle up for this insider's dissection of the bold claims and big money behind pharma's AI revolution.

The AI gold rush: venture capital's new darling

Nature's piece highlights some eye-popping figures:

- Xaira Therapeutics emerged with a cool $1 billion in funding
- Generate:Biomedicines has raised $750 million since 2018
- Numerous deals valued in the hundreds of millions, including Almirall's partnership with Absci potentially worth up to $650 million and AstraZeneca's collaboration with Absci that could reach $247 million

But here's the kicker: Much of this money isn't coming from Big Pharma's coffers. It's venture capital firms betting big on AI's potential. The question is whether these investors are visionaries or just chasing fool's gold.

Pharma's cautious tango with AI

While venture capitalists are doing the equivalent of making it rain in AI labs, traditional pharmaceutical companies are taking a more measured approach. They're not buying the hype wholesale. They're testing the waters through strategic partnerships. Take Absci's deals with Almirall and AstraZeneca. These aren't all-in commitments. They're calculated experiments. Big Pharma is dipping its toes in the AI pool, not diving in headfirst.

Beyond the buzzwords: AI's real-world impact

Here's where things get really interesting. AI isn't just a futuristic promise anymore. It's already making waves in the real world of drug development. A case in point is AbCellera's bamlanivimab. This COVID-19 antibody treatment, developed with the help of AI, received emergency use authorization from the U.S. Food and Drug Administration in 2020. It's a concrete example of AI-assisted drug discovery making it all the way to patients in record time.

But bamlanivimab isn't alone in showcasing AI's potential. Insilico Medicine, another poster child for AI in drug discovery, has a candidate for idiopathic pulmonary fibrosis in Phase II trials. That's not just a public-relations win. It's a potential lifeline for patients with a devastating disease. Exscientia, yet another AI-driven pharma company, made headlines when its AI-designed drug for obsessive-compulsive disorder entered Phase I trials in 2020. This marked another significant milestone in the field, demonstrating that AI could potentially accelerate the early stages of drug discovery across various therapeutic areas. BenevolentAI is also making waves. They've used their AI platform to identify promising candidates for chronic diseases, demonstrating AI's potential to uncover treatments that traditional methods might miss.

Another player in this space is Immunai, a company I've invested in. Its platform leverages AI to map the immune system at unprecedented resolution, potentially revolutionizing our approach to immunotherapy. By providing deeper insights into immune responses, Immunai's technology could accelerate the development of more effective, personalized treatments for a broad spectrum of diseases.

While these successes are encouraging, it's important to keep perspective. The path from promising candidate to approved drug is still long and treacherous, even with AI assistance. The true test will be seeing more of these AI-discovered drugs successfully navigate late-stage trials and ultimately reach patients across a broad spectrum of diseases.

AI as sidekick, not superhero (yet)

Here's a crucial point that often gets lost in the AI hype: For now, these systems aren't replacing human scientists. They're empowering them. AI can crunch data at superhuman speeds and spot patterns that might elude even the keenest human mind. But it still takes human insight to ask the right questions, design the right experiments, and interpret the results in context.

In the near future, the pharma industry will likely see AI and humans working in symbiosis, tackling challenges that neither could solve alone. However, as AI capabilities continue to advance at a rapid pace, the balance of this partnership could shift. It's clear that the nature of human involvement in drug discovery and development is set to undergo significant changes, but the timeline for that may be longer than the hype suggests. The journey from AI-assisted to AI-driven drug discovery is a marathon, not a sprint.

Big tech enters the arena

The pharmaceutical industry isn't the only one eyeing the potential of AI in drug discovery. Tech giants are muscling in on the action. Isomorphic Labs, a spin-off from Alphabet--Google's parent company--is leveraging DeepMind's AI expertise to revolutionize how we discover new medicines. Meanwhile, we're seeing consolidation in the AI-driven biotech space. Recursion Pharmaceuticals' recent bid to acquire Exscientia could create a powerhouse that combines vast biological datasets with cutting-edge AI capabilities.

A field on the cusp of transformation

As the founder and CEO of Somite.ai, an AI startup for cell-based therapies, I've had a front-row seat to the AI revolution in pharma. Let me be clear: This isn't just another tech bubble. The integration of AI into drug discovery and development is going to fundamentally reshape the pharmaceutical industry. It's not a matter of if, but when. Yes, there's hype. Yes, there are inflated valuations. And yes, the marketplace is going to see some spectacular failures in the coming years. But make no mistake, the underlying technology is sound, and its potential is enormous.

The next five years will be critical. The AI-pharma space will undergo a shakeout, with the strongest players emerging as the new powerhouses of drug discovery. Traditional pharma companies that fail to adapt will find themselves left behind, scrambling to catch up.

But here's what excites me most: the potential impact on patients. AI-driven drug discovery isn't just about making the process faster or cheaper. It's about finding treatments for diseases that have long eluded us. It's about personalized medicine on a scale never seen before. It's about hope for millions of people waiting for breakthroughs. The billions being poured into this field aren't just speculative investments. They're fuel for a revolution that's already underway. And while not every bet will pay off, the overall direction is clear. AI isn't just changing the game in pharma. It's rewriting the rules entirely.

At this inflection point, my advice to investors, pharma executives, and fellow entrepreneurs is this: Embrace the change, but do so with clear eyes. Understand the technology, invest in talent, and above all, keep the focus on patient outcomes. The companies that can do this effectively will be those shaping the future of medicine.
The AI revolution in pharma is here, and it's going to be more profound than many realize. The next decade in this industry is going to be one hell of a ride. Expert Opinion By Micha Breakstone, Founder and CEO of Somite.ai @MichaBreakstone

Wednesday, September 4, 2024

After 2 Years of GenAI, the Industry's Explosive Growth Has Some Unexpected Fallout

Artificial intelligence entered its adolescence last year, as evidenced by one hell of a growth spurt. Investors piled on in 2023 as companies across industries looked to hop on the AI train. In the U.S., investment in the sector grew to $67.2 billion, with a third going directly to makers of generative AI products, the technology popularized by OpenAI's ChatGPT. According to Stanford University's 2024 AI Index Report, genAI investment jumped to more than $20 billion, up from the $2.21 billion invested in 2022.

For AI entrepreneurs, the enthusiasm is double-edged. Interest in their tools has never been greater: Nearly half of the Inc. 5000 honorees who took our CEO Survey cited the use of at least one AI service, and OpenAI was the top provider. But genAI hype has also led to misconceptions about what these tools actually do. As AI zips to the top of investors' portfolios, founders say the biggest factor limiting their growth isn't fundraising; it's overcoming a towering knowledge gap.

Benjamin Plummer understands this implicitly. He's the CEO of Invisible Technologies (No. 152), a San Francisco-based software and data services provider. Invisible helps clients such as Microsoft and Cohere create high-quality data to train their AI models. To do so, Invisible uses a roster of more than 5,000 contractors, all experts in their fields, who help fine-tune or stress-test different models. "You might have a health care company training a chatbot that needs 100 doctors to test and evaluate the model," explains Plummer, 38. By creating complex workflows with multiple experts testing and grading models, Invisible collects richer training data. Plummer says helping organizations like OpenAI--a client since before the launch of ChatGPT--train models has been a "huge part of our growth over the past year."

Brandon Tseng, 38, has a different method for improving his AI capabilities--acquisition. The co-founder of San Diego-based defense technology maker Shield AI (No. 634) has been in the business of enabling drones and aircraft for AI operation since 2015, but a pivotal acquisition boosted his company's ascent. In 2020, the defense agency Darpa held a series of simulated dogfights between different AI models that had been trained to operate F-16 fighter jets. Aviation startup Heron Systems bested the slate, as well as an experienced human F-16 pilot. Less than a year later, Shield AI acquired Heron and used the tech to update Hivemind, its software framework for piloting drones and fighter jets. The service, which is in use by the U.S. Air Force and the Coast Guard, among others, is today a digital maverick, more than ready to take risks that most human pilots wouldn't dare. "It's not afraid to die," says Tseng.

Shield AI has raised over $1 billion, but Tseng still struggles to explain to potential investors and customers that he doesn't offer a language model like ChatGPT. For a certain set of investors, says Tseng, AI is just chatbots and generative art. He encourages them to think bigger. By pigeonholing a tool capable of "perceiving, thinking, and acting" as just a chatbot, he says, investors are "missing the forest for the trees."

Even companies that use generative AI have met hurdles explaining what they do. While Eric Yang's Dallas-based image and video enhancement company, Topaz Labs (No. 2,697), uses genAI technology, the result is a far cry from that of text-to-image tools like OpenAI's Dall-E. Topaz's software enhances the quality of digital images, sharpening blurry shots and removing grain. In 2018, Topaz introduced Gigapixel, an app that uses AI to change the dimensions of a digital image without losing detail. Gigapixel uses generative adversarial networks (GANs), which consist of two AI models, a generator and a discriminator. The generator creates fake data, which is compared by the discriminator with the original training data. You can think of the discriminator as a bouncer at a nightclub, letting in only clubbers with the right look. As this process goes on, the generator learns how to make data that the discriminator will perceive as real. Topaz uses GANs to generate believable textures and detail in images. (A toy sketch of this generator-versus-bouncer loop appears at the end of this article.) Beyond that, says Yang, 36, Topaz's products are all about fidelity to the original image. "People assume that because we're an AI company, we're seeking to replace the need for photographers. But the really useful tools will be the ones that give people superpowers."

Even with superpowers, business leaders should be ready to recalibrate on a moment's notice, suggests Daniel Berlind, co-founder of Snappt (No. 41), an AI-powered fraud-detection platform. Founded in Los Angeles in 2016, Snappt provides landlords with an AI model trained to sniff out signs of tenant fraud by looking at pay stubs and bank statements. Last year, Berlind, 36, learned that enterprising criminals had developed an entirely new method of fraud to get around Snappt, whereby scammers would register LLCs and use free trials from payroll providers like ADP and Gusto to create legitimate-looking pay stubs. Snappt's forensics team proceeded to train its AI to counteract the scheme.

It's clear from Snappt's experience that success in the world of AI requires constant adjustments. It also requires a willingness to create demand. Just ask John Dean, 26, co-founder and CEO of WindBorne Systems (No. 585). In 2019, Dean, then an undergrad at Stanford, co-founded the company alongside classmates Andrey Sushko and Kai Marshland. They developed long-duration weather balloons that can fly above the open ocean, where traditional balloons can't, to collect temperature and atmospheric-pressure readings. This data is useful for world governments, which spend approximately $10 billion annually on weather observation. WindBorne's growth is slated to kick up with this past February's launch of WeatherMesh, an AI forecasting model that's set accuracy records. Today, the company, based in Palo Alto, California, credits its WeatherMesh model with expanding its clientele beyond the public sector. "Every big tech company working on AI-based weather modeling has reached out to us about purchasing our datasets to improve their models," claims Dean, pointing out that when AI enters an industry, the data becomes much more valuable.

The gold rush these execs have witnessed has shown no signs of slowing down in 2024, with rivals OpenAI and Anthropic one-upping each other with faster and less expensive technology. But maturity comes for us all. And if 2025 features a little less hype and a little more understanding of how this incredible technology works, it might not be such a bad thing.

You're Not Hallucinating

Any conversation about AI in the past year has included OpenAI, the Sam Altman-run startup behind ChatGPT. So why isn't the company on the Inc. 5000? We spoke with OpenAI several times, but executives declined to verify its revenue, which has been reported by The Information to be running at an annualized rate of $3.4 billion. Considering the company transitioned to a capped-profit structure in 2019, it is likely OpenAI would have placed at or near the top of the list, if that figure is accurate. Maybe next year, Sam.
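The generator-versus-bouncer loop Yang describes is easiest to see in code. Below is a minimal GAN training sketch in Python with PyTorch, fitting toy one-dimensional data rather than images; it illustrates the general technique only, not Topaz's actual models, and every layer size and hyperparameter is an illustrative assumption.

```python
import torch
from torch import nn

# Minimal GAN loop on toy 1-D data. The discriminator plays the "bouncer,"
# admitting real samples and rejecting fakes; the generator learns to make
# samples the bouncer waves through. Illustrative only, not Topaz's model.

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))  # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(),
                  nn.Linear(16, 1), nn.Sigmoid())                 # discriminator

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # "real" data drawn from N(3, 0.5)
    noise = torch.randn(64, 8)

    # Train the discriminator: label real data 1, generated data 0.
    fake = G(noise).detach()                # freeze G for this half-step
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Train the generator: make D score its output as real (label 1).
    g_loss = bce(D(G(noise)), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# The generator's output distribution should drift toward the real mean of ~3.
print(G(torch.randn(256, 8)).mean().item())
```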

Monday, September 2, 2024

A Scale AI Subsidiary Targeted Small Businesses for Data to Train an AI

January is typically the slowest month for Dawn Alderson, who owns a hair salon in a Philadelphia suburb. So when she came across an online ad last December offering $500 to $700 to help train AI algorithms, her financial anxiety eased. She signed up to work for the company, Remotasks, in anticipation of the dry spell. Remotasks is a subsidiary of San Francisco-based Scale AI, a unicorn startup with a valuation of $14 billion.

Alderson joined a booming field of contractors who either supplement their incomes or carve out full-time work developing generative artificial intelligence. In industry parlance, the work performed by Alderson and millions of contractors across the globe is called tasking. It consists of hourly work logging mistakes made by AI tools such as chatbots, image generators, and voice-to-text technology so these tools can be deployed for commercial use. A tasker might, for example, teach a self-driving program how to identify common features of a street by tagging pictures of stop signs and traffic lights.

Alderson's tasking journey started slowly. She completed onboarding and waited to get her first assignment. After a couple of weeks, she was invited by Remotasks to participate in an initiative that hit close to home, "collecting and using datasets that are relevant" to small businesses. "Your data could be key in shaping the future of AI in the world of business," said the query, obtained and reviewed by Inc. "For each dataset contributed that meets our criteria, we're offering a reward of up to $700 USD." The fee felt like a winning lottery ticket. Alderson recalls thinking, "Oh, my god, I have waited for this moment. I have everything that they're looking for."

The project had a nonsensical code name: Bulba Ice. Business owners didn't know the identity of Remotasks' ultimate client, but the work seemed simple enough to be worth the reward. The prompt called for routine insights into the daily life of a small business, with clearly defined questions answered by concrete numbers. Alderson included sales totals, the types of appointments made at her salon, and examples of her inventory. She submitted five datasets, three of which the company accepted. According to the requirements, the datasets had to be exhaustive and span at least 50 rows and five columns on a spreadsheet. For the three datasets that were accepted, Alderson received $2,100.

Jubilant, she felt the work-from-home windfall would replace her typical winter precarity. She locked in, providing 37 more datasets over the course of an intensive weekend of work in January. But after waiting for months and following up repeatedly with a Remotasks support account over email, payment for those 37 datasets never came. In March, the Bulba Ice project was curtailed without warning, and many contractors believe they were misled about a project that solicited information about their small businesses. Alderson found herself joining an outspoken group of contractors on a Remotasks Slack channel demanding answers for late payments.

Those who complained faced swift retribution. Most of them had their access to the Bulba Ice Slack channel revoked without warning, according to Josh Bicknell, the owner of a tutoring company who participated in the Bulba Ice project and whose account was corroborated by several other contractors. He says at least 50 accounts were deactivated in the purge. Soon, a consensus emerged among the taskers: They were getting ripped off.
They had provided private business information to an unknown AI project with the promise of compensation. There was a nagging suspicion among the group that the data could be used to build an AI that could ostensibly harm their businesses: Like building a tool that's then marketed back to them as an essential product. Or having the data sold to competitors whose new technological prowess could leave them in the dust.

Alderson discovered she had been deactivated from the Bulba Ice Slack channel on March 20. Three weeks later, she received an email that her 37 datasets had been rejected for repeating certain questions in separate submissions--something a Remotasks manager had told her was allowable, she says. Alderson believes she's owed $25,000.

AI tasking presents small-business owners with unknown risk

The Bulba Ice project was different from traditional data labeling, which typically involves training AIs to accurately identify what's shown in an image. (Does this picture show a car or a motorcycle?) But it shared one striking similarity with the broader AI tasking economy: Contractors are almost always in the dark about what they're developing and whom they're developing it for. "If you're labeling some aerial image that's produced by a drone, are you training a toy airplane, or are you training a military drone? There's no real way of knowing," says Mark Graham, a researcher at Oxford University's Fairwork Foundation.

The situation is emblematic of a larger problem in the creation of generative AI tools. For society to realize the benefits of this new technology, independent contractors--from impoverished people in the developing world, to struggling artists and authors, and even scholars with advanced degrees--provide an essential human touch to build a technology that could potentially render them obsolete. Small-business owners, many of whom have to scramble from time to time, have now gotten a taste of that dilemma firsthand. "Instead of accountability, or telling the stakeholders and the contributors anything about it, Remotasks completely cut down communication," says Lain Myers-Brown, who works at a comic book store and submitted data on behalf of its owner.

You could argue that entrepreneurs should have known better than to get wrapped up with Remotasks and Scale AI. The company's international reputation is littered with accusations of non-payment. Scale has been accused of failing to pay workers in the Philippines and Africa, regions where it has traditionally recruited most of its taskers. Following reports in The Washington Post about Scale's business endeavors overseas last year, the company issued a statement: "Remotasks is our global platform designed for flexible, gig-based data annotation work. It was established as a separate platform to protect customer confidentiality." Scale's customers include OpenAI and the Department of Defense.

As interest in AI tools expands, so too have allegations of unpaid labor and deception among U.S.-based taskers, who complain of unpaid training sessions and pay rates that fluctuate without explanation. In June, Inc. revealed that contractors working for Outlier AI, another Scale subsidiary that hires AI taskers, reported various instances of non-payment, despite Outlier's having recruited aggressively for hundreds of open positions. Outlier's taskers routinely ask questions about the company's policies and legitimacy on a Reddit channel that boasts 11,000 members.
Last November, a class-action complaint was filed against Invisible Technologies, another San Francisco-based data annotator, for violating various California labor codes, including failure to pay overtime, failure to pay timely wages, and failure to grant paid sick leave, among others. (Invisible Technologies did not respond to an Inc. request for comment.)

Regarding Remotasks' small-business project, a Scale AI spokesperson said the company was overwhelmed by the responses. Bulba Ice "received greater interest than anticipated, and due to an influx of submissions our review process took longer than projected. We have communicated with each individual participant regarding the status of their submission, and payments for eligible datasets have been paid in full."

Several participants in the Bulba Ice project dispute those claims. "Nothing has been communicated individually," says Myers-Brown. "I have no faith that any person involved in the project who was employed by Scale has any idea how to communicate across departments, how to track tickets or emails, or maintain data governance rules."

A whistleblower report filed with the U.S. Government Accountability Office (GAO) in March alleged missed payments on a mass scale. There was no discernible difference between datasets that were accepted and datasets that were rejected, said the whistleblower, who runs a North American logistics company and declined to be identified.

While many involved in the Bulba Ice initiative were paid for data they provided, the threat of legal action was often necessary, according to emails reviewed by Inc. In May, the tasker William Webster, who submitted data from a marketing firm on behalf of the company's owner, wrote an email to a Remotasks support account complaining of late payments. Only after he said he'd call a lawyer did a payment of $5,600 land in his bank account, four months late, Webster says. And while a company email to contractors in May said it would be deleting all rejected datasets within 30 days, small-business owners who spoke to Inc. say there's no way to verify that.

Some involved in the project say that Remotasks did initially pay on time, but the punctual payments didn't last. "They seem to have paid a lot of people in the beginning to lure them in and have them send more data--and then they stopped paying regularly," Alderson says.

The GAO referred the whistleblower complaint to the Office of Inspector General and the Department of Labor, both of which declined to investigate. The Equal Employment Opportunity Commission declined to comment. "I think that most of it has been paid," says the whistleblower. "But without the pressure we gave, I doubt that they would have been paid."

The hard, human work of training AI

The work of tagging and annotating the output that large language models and image generators spit out feeds a process called reinforcement learning from human feedback, or RLHF. Without RLHF, programs like ChatGPT might offer only nonsense in response to prompts, or lack the near-human element that made generative AI an overnight sensation. (A simplified sketch of one such feedback record appears below.)

Taskers usually work from home in a gig economy setup similar to Uber drivers--they aren't forced to work and they can, theoretically, set their own hours. That's if things are working properly. As the business world's demand for AI-enabled products swells, Scale AI is recruiting scores of gig workers to its subsidiaries with the promise of flexible work. Wages vary per task and can range greatly, but usually fall between $15 and $40 per hour.
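What does one unit of RLHF feedback actually look like? The specifics vary by vendor, and none of the companies named here have published their formats, so the record below is a hypothetical Python sketch with invented field names: a tasker compares two model responses to the same prompt, marks the better one, and tags what went wrong with the other.

```python
from dataclasses import dataclass, field

# Hypothetical shape of a single human-feedback record; the field
# names are invented for illustration, not drawn from any vendor's
# actual tooling.
@dataclass
class PreferenceRecord:
    prompt: str
    response_a: str
    response_b: str
    preferred: str                                   # "a" or "b"
    error_tags: list = field(default_factory=list)   # notes on the rejected response

record = PreferenceRecord(
    prompt="Summarize this invoice in one sentence.",
    response_a="The invoice totals $214.50 for salon supplies.",
    response_b="Invoices are documents. Here is a poem about them.",
    preferred="a",
    error_tags=["off_topic"],
)

# In an RLHF pipeline, many such records are aggregated to train a
# reward model, which then steers the base model toward the kinds of
# responses humans preferred.
```

Each record takes a person minutes of judgment to produce, which is where the hourly, piecework character of tasking comes from.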
But according to eight contractors interviewed by Inc. who have worked for Remotasks, Outlier, or both, overcrowding and minimal direction often lead to a culture of organizational chaos. The exact number of taskers working for Remotasks and Outlier is unclear, but it runs into the hundreds of thousands: One Remotasks general Slack channel has 461,000 members, while another for Outlier has 107,000, according to screenshots seen by Inc.

"It's just a mess," said a tasker who freelances for Outlier and asked not to be named for fear of repercussions. "You have no manager, you have nobody to help you. ... It's a massively disorganized shit show."

Remotasks' attempts at management were fleeting. In March, the Bulba Ice Slack channel had one team leader available to answer questions. With more than a thousand taskers, the channel was cluttered and, in time, the manager grew overwhelmed. "I was the only tasker success manager working on that project, and we're talking about thousands and thousands of datasets," says the former manager, who asked for anonymity, fearing professional consequences.

When Bulba Ice was abruptly halted after less than two months, the project's Slack channel erupted. When they signed up, taskers were told that datasets would be paid for only if they were accepted by Remotasks' client. But they expected to be informed of the client's decision and whether or not they'd be compensated. People wanted to know what was going to happen with their data and if they were going to get paid.

The Remotasks success manager was instructed not to respond to questions. "I probably spent two weeks watching the contributors post messages in the Slack channel that I was advised not to respond to," the former manager says. "That was very hard, because that's not what I was brought on to do."

What happens to all of the tasking data?

Unlike most projects in the tasking economy, Bulba Ice asked for anonymized data gathered by real small businesses. Though the project called for no personally identifying information, it took on an invasive dimension for Myers-Brown. "That's your data. You own that data that you crafted from real events," she says. (Scale did not respond to an Inc. request for comment about how it deletes rejected datasets.)

The lack of transparency strikes some of the taskers as particularly galling, given the size of Remotasks' parent company. Scale announced $1 billion in new venture funding in May. The company's founder and CEO, 27-year-old Alexandr Wang, is the world's youngest billionaire and something of a tech wunderkind. He co-founded Scale in 2016 at age 19, and is chummy with Elon Musk, who publicly applauded a controversial screed Wang published earlier this year about meritocratic hiring.

Last August, Scale announced a contract with the Department of Defense, allowing the military access to its internal data training platform. "The capabilities will be made available across a diverse range of networks and classification levels relevant to warfighters," the announcement said.

Wang would not comment for this story. He told Fortune in May: "I think the entire industry expects that AI is only going to grow, the models are only going to get bigger, the algorithms are only going to get more complex and, therefore, the requirements on data will continue growing--we want to make sure that we're well-capitalized."

In many ways, tasking is the AI revolution's invisible bedrock.
Official numbers on how many taskers there are and where they're located are sparse. A 2022 study by Google researchers estimated that there are millions globally. Organizations such as Oxford University's Fairwork Foundation are studying the landscape and attempting to create governance guidelines with respect to pay and workers' rights. It's indisputable, however, that the industry is vast and making inroads in the U.S. and the rest of the Western world as a result of the generative AI gold rush, which saw $29 billion invested by venture capitalists across the globe last year.

The challenge of managing a remote tasking workforce

Appen, a data annotation firm based in Sydney, Australia, has more than one million taskers on its platform alone, according to Samantha Chan, the company's VP of engagement and success. She says it's tough to keep that many contractors--who are dispersed globally--aware of pay policies, deadlines, and all the minutiae associated with their projects. "Some things may fall through the cracks--which is inevitable, working with that many people," Chan explains.

Appen has plenty of poor reviews on sites like Glassdoor and Reddit. But the company allows taskers to communicate on its internal platform and makes project managers available to answer questions, Chan says. The company also lists its leadership team on its website.

Making such information public is a minimum requirement of transparency, says Mark Graham of Oxford University, and that small measure of accountability is sorely lacking at most companies in the field, he claims. Taskers in the Bulba Ice project say they were usually forced to fend for themselves on sprawling Slack channels with little to no oversight.

In its research, Fairwork issues annual reports ranking companies in the digital gig economy on criteria related to pay, working conditions, and transparency. Data annotation firms are among them: In Fairwork's 2023 ranking, which assessed firms on a scale from 1 to 10, with 10 being the best, Appen received a 3. Remotasks, which makes no mention of its parent company on its website, received a 1.

Graham hopes the research conveys a simple message: "There's nothing impossible about creating decent work conditions."