
Google's Gemini flop raises the question: What exactly do we want our chatbots to do, really?

As AI explodes onto the scene, recent hiccups should have us asking: What do we want from this technology, anyway?
  • Google's Gemini AI chatbot roll-out was marred by bias issues.
  • The controversy fuelled arguments of "woke" schemes within Big Tech.
  • Inside Google, the bot's failure is seen by some as a humiliating misstep.

The takes are still flying in the wake of Google's embarrassing Gemini roll-out — where the company tried to make sure its flagship AI chatbot corrected for bias and ended up with a comically biased AI chatbot.

It reads like the kind of heavy-handed woke scheme Elon Musk and his fellow culture warriors have been feverishly accusing Big Tech of all along. Now Google has gone ahead and given them heavy-duty ammunition for their argument.

It reminds me of Twitter temporarily blocking distribution of a 2020 New York Post story about Hunter Biden's laptop — an embarrassing mistake properly pounced on by the likes of Ted Cruz.

Internet culture chronicler Max Read has a particularly sharp assessment of all of it: Yes, this is dumb. But also: What do we want our chatbots to do, actually?

But I'm not sure that "how did this happen?" or "why did this happen?" are all that interesting or enlightening questions compared to something like "well, what did you want the computer to do?" Personally, it's hard for me to imagine caring about--let alone getting mad at!--a computer that generates text equivocating between Pol Pot and Martha Stewart because I would never ask a computer to compare the two of them. I have not yet outsourced my research abilities or critical faculties or moral compass to the probabilistic text generator, and "generating text that plausibly compares historical figures on a moral basis" is a bafflingly foreign use case for chatbots to me. I can't really even come up with a situation where Gemini's refusal to say that Hitler is worse than Elon Musk has some terrible downstream effect.

Two things can be true at the same time: The histrionics about this from the Elonsphere are histrionic. And, also: The Gemini debacle really is a debacle.

This is supposed to be Google's big bet on the future, and while most people in the real world have no idea about any of this, internally at Google, it's viewed as a humiliating self-own. And while we are kind of comfortable with the idea that AI bots can't be 100% trusted because they "hallucinate," that's a different prospect than worrying that chatbots will be intentionally wrong, because they were intentionally built that way.

But as far as Read's other point: Yes.

What do we want these things to do, for real? They seem very good at summarizing text, and that alone is pretty meaningful in terms of economic disruption. (For a glimpse at the future, read the AI-generated bullet points if they've appeared at the top of this article, and then imagine an AI version of me re-blogging Read's blog.) But a lot of the other imagined use cases — particularly the one where chatbots become a lifelong "ally," which understands you and your needs in a deep way, as investor and AI booster Marc Andreessen predicts — seem hard to imagine at the moment.

Maybe we can all take a breath, slow down, and figure out what this tech really can and can't do.

Read the original article on Business Insider
------------
By: [email protected] (Peter Kafka)
Title: Google's Gemini flop raises the question: What exactly do we want our chatbots to do, really?
Sourced From: www.businessinsider.com/google-gemini-ai-chatbot-woke-bias-controversy-raises-question-2024-2
Published Date: Thu, 29 Feb 2024 20:05:29 +0000
