
Can AI generate a way to pay for itself?

If we automate venture capital, will the hype generate itself? | Illustration: Alex Castro / The Verge

The AI hype is marketing, baby.

I’ve heard a lot of talk about how AI is going to make me — a journalist, someone in the workforce, a human — obsolete or whatever, and so I wondered: is that even true?

Here’s Sam Altman, CEO of OpenAI, speaking in 2019: “I really do believe the work that we’re doing at OpenAI will, like, not only far eclipse the work that I did at YC [startup incubator Y Combinator], but the work that anyone in the tech industry does.” Why is that? Well, he believes that someone is going to build a software system that is “smarter and more capable than humans in every way,” and it might as well be him. “And very quickly, it will go from being a little bit more capable than humans to something that is, like, a million or a billion times more capable than humans.”

Now, Altman is highly motivated to hype AI. After all, dude runs a startup and needs investment. Still, there’s a lot of investment pouring into the field — about $94 billion in 2021, according to Stanford’s AI Index, more than double the year before. In 2021, AI companies had 15 different funding rounds worth $500 million or more.

Code and GPUs and so on aren’t the real driving force in tech. Money is. AI is expensive! Altman and his ilk have to talk a big game to raise the massive sums of money it takes to build AI. His competitors — Google and Facebook — are basically money-printing machines that can afford to burn cash on experimental tech without having to hype it.

Now, I’m old enough to remember when self-driving cars were the future, as I am over the age of eight. In 2014, Google’s head of self-driving cars said he was committed to making sure his 11-year-old wouldn’t get his driver’s license in five years. It’s almost a decade later, and self-driving cars still aren’t a thing because it turns out that humans are smarter and more sophisticated than we give ourselves credit for. Still, a lot of companies scrambled to get self-driving cars on the road and grab a slice of a market that Intel projected would have a revenue stream of $800 billion in 2035. SoftBank alone plonked down $30 billion from 2010 to 2019, and the total disclosed investment in that near-decade was $84.5 billion.

The self-driving stuff wasn’t a total failure, but you can see the pattern: promising a huge, revolutionary change is more inspiring and tends to create more investor interest. Those people will eventually expect a return! And when it comes to AI, the biggest return they can get is from replacing people — especially expensive, white-collar people — with cheaper machines. So if we want some clues as to what the AI future looks like, we should follow the money.

Why is AI so expensive?

Getting into AI is spendy, babe. The entry fee for this field varies, but it’s high. It’s really only the big companies and the extraordinarily well-funded companies that can afford to play in this space, says Meredith Broussard, a data journalism professor at New York University who focuses on artificial intelligence in investigative reporting.

“If you’re trying to create a startup that’s gonna build these large language models and do the compute yourself, that’s gonna cost a fortune,” says Avi Goldfarb, a professor of marketing at the University of Toronto, who’s written a book about the economics of AI. “So OpenAI is very expensive, billions and billions of dollars.”

Renting compute is cheaper, though companies still have to pay AWS or whoever. Then there’s the data to train the model — sometimes people have that on hand, and sometimes not, so costs there vary, too. Some data sets, like the Common Crawl and LAION, are free to use, says Sasha Luccioni, a research scientist at Hugging Face, a company that develops tools for machine learning. In those cases, the costs are mostly in cleaning and processing the data, which can run from the hundreds of thousands to the millions of dollars, Goldfarb says.

Doing some back-of-the-envelope math based on papers about large language models, Debarghya Das, a founding engineer at Glean who formerly worked on Google Search, figures the bare minimum cost of training at $4 million for Facebook’s LLaMA and $27 million for Google’s PaLM.
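For a sense of where numbers like that come from, here’s a minimal sketch of the standard estimate, which assumes training a transformer takes roughly 6 floating-point operations per parameter per training token. The GPU throughput, utilization rate, and hourly price below are illustrative assumptions, not Das’ actual inputs; LLaMA’s parameter and token counts come from Meta’s paper.

```python
# Back-of-the-envelope LLM training cost, using the common
# ~6 FLOPs-per-parameter-per-token approximation. The hardware
# and pricing figures are assumptions for illustration only.

def min_training_cost_usd(params, tokens,
                          peak_flops=312e12,      # A100 bf16 peak, FLOPs/sec (assumed hardware)
                          utilization=0.5,        # assumed fraction of peak actually achieved
                          usd_per_gpu_hour=4.0):  # assumed on-demand cloud price
    total_flops = 6 * params * tokens             # compute for a single training run
    gpu_seconds = total_flops / (peak_flops * utilization)
    gpu_hours = gpu_seconds / 3600
    return gpu_hours * usd_per_gpu_hour

# LLaMA 65B was trained on roughly 1.4 trillion tokens, per Meta's paper.
print(f"LLaMA-65B: ~${min_training_cost_usd(65e9, 1.4e12):,.0f}")
# -> about $3.9 million, in the ballpark of Das' $4 million floor
```

And that is a floor: it counts one clean training run, not the researchers’ salaries, failed experiments, or the fine-tuning and serving costs that come next.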

But even using free data has its own costs. “Once you’ve downloaded the terabytes of data, if you want to filter it or use it in some particular way, like text-to-image models, what they’ve been doing is focusing on certain subsets of it in order to make the model better,” Luccioni says. “So that’s where it gets really tricky.” You need a lot of compute power and a lot of specialized people to figure this out.

Those specialized people are also expensive and not included in Das’ estimate. “People in machine learning are so highly paid because you’re competing with Google or other big tech companies, and literally, it’s like sometimes millions of dollars for researchers,” says Luccioni. In 2016, OpenAI’s top researcher made $1.9 million, for instance. Compensation in 2020 wasn’t as eye-watering, at least according to the company’s publicly available tax filing, but with more market competition, that may change.

The thing is, both training the model and the specialized people who work with it are ongoing costs. A customer service bot, for example, may need to be fine-tuned every week or couple of weeks. “What’s expensive is that you have to keep doing it, and you have to keep testing the model, and you have to make sure it’s doing what you expect it to do,” Luccioni says. Models also ideally need to be stress-tested to make sure they don’t produce unwanted results.

Once all that is done and the model is made available, it may get hundreds or thousands of queries a day. Engineering it to be scalable and reliable — so it won’t crash — is also expensive and requires specialized personnel.

People might believe that AI is going to eliminate a lot of jobs, but at least right now, it requires a lot of human labor just to get going.

So how does this pay off?

Answering funny text prompts or drawing steampunk avatars, that’s small potatoes. There are lots of uses for AI, and it’s not exactly new — Stanford’s report shows a boom in patent filings around AI uses starting in 2019. CVS Health started touting its investment in AI around the same time. At CES 2021, Walmart said it was using AI to personalize its customers’ experiences.

What’s likely being danced around here is something many consumers are unfortunately familiar with: automated customer service. I can tell you from personal experience that CVS’s is miserable. But because companies view customer service as a cost center, one that doesn’t really expand the business, it’s an easy area in which to replace people with machines.

Indeed, according to data from McKinsey, that’s where a lot of the current use is — in what the advisory firm terms “service operations.” Maybe Altman’s grandiose-yet-nebulous vision pans out, but right now, AI seems primed mostly for other unsexy uses like marketing and sales, supply chain management, and strategy and corporate finance.

AI is also already being used widely by programmers in applications such as GitHub’s Copilot, which speeds up coding by generating much of the boilerplate, freeing humans for more human tasks. That can make programmers twice as fast at coding, Das tells me. GitHub also claims that Copilot makes programmers more satisfied with their work, which may be true but seems like the kind of thing you’d expect an AI purveyor to say.
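To make the boilerplate point concrete, here’s a hypothetical example of the kind of routine code such a tool typically drafts: the developer writes the comment and signature, and the assistant fills in the body. This illustrates the workflow, not actual Copilot output.

```python
import csv

# A developer might type only the signature and docstring below;
# a tool like Copilot then suggests the body. Hypothetical example.

def load_users(path):
    """Read a CSV of users into a list of dicts keyed by column name."""
    with open(path, newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f))
```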

But McKinsey’s charts also show something interesting: as of the end of 2022, adoption had leveled off. Sure, overall, usage has more than doubled since 2017, but the peak appears to have been 2019. Plus, hiring for AI roles has gotten harder. This makes the hype machine seem like pure marketing: if implementation has actually stalled, why else would we be hearing so much about chatbots?

The way people think about AI now is to walk through a company’s workflow, identify tasks that a machine can do, and automate them. “But ultimately the upside is pretty limited because the best you can do is what you’re already doing but a little bit better,” says Goldfarb. “Those typically don’t justify the huge expense — tens of millions, hundreds of millions, billions of dollars to build these kinds of things.”

The real money, then, is in totally blowing up the workflow and replacing it with AI, Goldfarb says. “It’s riskier because once you talk about messing with the workflow, there’s lots of failures that can happen,” he says. “But that’s where the upside is in the billions, or the tens of billions, or more.”

As an example, he uses healthcare. If the industry is reorganized around machine diagnosis, that could mean more efficiency. “My understanding is that there’s lots of doctors who are really awful at diagnosis,” Goldfarb says. “So maybe we’ll never get machines that are as good as the 95th percentile doctor or the 99th percentile doctor, but we’re going to get ones that are as good as the 20th percentile doctor pretty soon.” Having machines that are as good as the 20th percentile doctor would mean improvements for people who can’t access doctors at all — or who are being treated by 10th percentile doctors, he figures.

Also vulnerable? Finance, says Mark Muro, a senior fellow at the Brookings Institution. “It’s all about pattern recognition,” and pattern recognition is something AI is notoriously good at. So financial institutions would need fewer databases and data junkies to monitor trends — the AI could do all that, with a few higher-level professionals managing it. “Probably lower-level quants would be more vulnerable,” Muro says. Finance is also a high-end business, so humans would probably stick around to present to clients.

Still, it’s white-collar jobs that involve pattern recognition and modeling that are most vulnerable to AI, in Muro’s view. Consultants, for instance, should watch their backs. “They’re the classic quick-and-dirty pattern recognition, and plausible Ivy League graduates present the results,” Muro says. “They have to put together a plausible argument based on plausible statistics. They assemble the pattern read of a situation, something an LLM [large language model] could do.” So instead of hiring junior associates, you might use AI to do their jobs and keep the Ivy Leaguers for presenting the slide decks.

Which might explain OpenAI’s partnership with Bain & Company. “This is arguably one of those moments in time, an inflection point of artificial intelligence that’s going to change the destiny of the world,” said Manny Maceda, Bain’s worldwide managing partner, in a video promoting the partnership. The first client is Coca-Cola, a maker of fizzy beverages. (OpenAI, Bain, and Coca-Cola did not respond to emails requesting comment.)

In the video, Zack Kass of OpenAI says the company is “inundated at this point with enterprise demand that we sort of waited for for a long time.” The focus of the partnership, at least according to the press release, is “hyper-efficient content creation, highly personalized marketing, [and] more streamlined customer service operations.”

The enterprise demand Kass was waiting for means sales: OpenAI expects $200 million in revenue this year and $1 billion by 2024, Reuters reported. In a recent secondary share sale, the company was valued at $20 billion. If that’s accurate, its valuation is higher than the market cap of Hewlett Packard Enterprise, Garmin, Cloudflare, Snap, and H&M.

Plus, Microsoft has shoved ChatGPT into Bing in order to try to carve out market share from Google in search. Clearly, the pressure is on to monetize OpenAI’s tech. And though some people argue AI development should be slowed to minimize its risks to society, the financial incentive is likely to drown out any caution.

But is the future of AI really replacing workers?

It’s possible that the AI boom is kind of like the dot-com or mobile booms, where people might just be throwing money at everything involving AI and hoping for the best. The incremental uses of AI mostly function as a way of improving business as usual rather than fundamentally reinventing it. Some investors, like Kyle Harrison of Contrary, share my skepticism about the AI hype. Besides, the big tech companies have better distribution than the startups and may be best positioned to make use of AI.

Even people who work on AI aren’t sure exactly what it will be used for in the long term. ChatGPT might improve marketing emails or let students more effectively cheat in class, but is that really changing the world? Regardless, Broussard is skeptical that ChatGPT will be free forever. “The business model here is the same as drug dealers use,” she says. “It’s to give you a taste for free, get you hooked, then jack up the price. That’s a tried and true Silicon Valley strategy.”

Broussard can also imagine a world where OpenAI sells “authenticity protection.” Since it’s possible to create reams of text with ChatGPT, beleaguered professors might pay to run student papers through an AI detector — letting them flunk anyone who didn’t do their own work.

Also, computers are generally bad at situations that are socially fraught. This is one reason why CVS’s automated customer support is so miserable: it’s socially blind. A person handling customer support can do you an administrative kindness so that you don’t have to waste time; a computer, not so much.

There are other social deficits. The datasets that AIs train on are sexist and racist because there’s no discrimination-free world for them to pull from, says Broussard. Plus, most of the models appear to be in English, which may make AI inaccessible for the rest of the world, says Das. The Biden administration has proposed a Blueprint for an AI Bill of Rights, which would require human fallbacks. That, of course, would make AI more expensive — and make it much harder to replace people wholesale.

One thing we learned from self-driving cars was that there are a lot of so-called corner cases where human judgment matters. An AI can be wrong with code — in Copilot, for instance — with much less disastrous consequences, says Das. But when it comes to banking or medicine or the other white-collar professions that AI proponents say they want to replace, the stakes might be more like self-driving cars: you really have to get it right.

That’s part of the reason we aren’t seeing wholesale replacement and why we might not see it at all, says Luccioni. “If you have an automatic trading system, it’s going to make trades for you,” she says. “Like, there’s a lot of money on the line.”

One reason we might be hearing about AI so much right now has to do with the uncertainty in the economy, says Muro. When times are good, there’s no incentive to change anything. “Organizations are under a lot of stress, and that’s when they do invest in changing their processes,” he says.

AI purveyors are among the companies under stress, which is why we’re seeing the push to market. “There’s no doubt there’s a race to monetize,” Muro says. “Some people are uncomfortable with the race to get this stuff onto websites and publicly accessible with only limited vetting, but that does signal the urgency of getting to market.”

So maybe the best way to make money in AI isn’t to make AI at all — it’s to make the chips the AI runs on, or to run the data centers these companies rely on, or to be one of the people who helps build it. The surest way to make money in AI is the same way as during any gold rush: just sell the shovels.

------------
By: Elizabeth Lopatto
Title: Can AI generate a way to pay for itself?
Source: www.theverge.com/2023/3/23/23651976/ai-money-investment-vc-hype
Published: Thu, 23 Mar 2023 12:00:00 +0000