Sunday, Dec 22, 2024

AI is about to turn the internet into a total nightmare

AI bots and AI-generated content are flooding the internet with spam, scams, and misinformation. And it's making it a nightmare to be online.

When logging on to HBO Max at the end of May, people noticed something strange. Usually when someone logs into the site, HBO asks them to verify that they are human by solving a captcha — you know, the little "I am not a robot" checkbox or the "select all squares with stoplights" image grids that prove to the website that you are, in fact, a human.

But this time, when users logged on they were asked to solve a complex series of puzzles instead. The bizarre tasks ranged from adding up the dots on images of dice to listening to short audio clips and selecting the clip that contained a repeating sound pattern. These odd new tasks, ostensibly to prove users were human, haven't been limited to HBO: Across platforms, users have been stumped by increasingly impossible puzzles like identifying objects — such as a horse made out of clouds — that do not exist.

The reason behind these new hoops? Improved AI. Since tech companies have trained their bots on the older captchas, these programs are now so capable that they can easily beat typical challenges. As a result, we humans have to put more effort into proving our humanness just to get online. But head-scratching captchas are just the tip of the iceberg when it comes to how AI is rewriting the mechanics of the internet.

Since the arrival of ChatGPT last year, tech companies have raced to incorporate the AI tech behind it. In many cases, companies have uprooted their long-standing core products to do so. The ease of producing seemingly authoritative text and visuals with a click of a button threatens to erode the internet's fragile institutions and make navigating the web a morass of confusion. As AI fever has taken hold of the web, researchers have unearthed how it can be weaponized to aggravate some of the internet's most pressing concerns — like misinformation and privacy — while also making the simple day-to-day experience of being online — from deleting spam to just logging into sites — more annoying than it already is.

"Not to say that our inability to rein AI in will lead to the collapse of society," Christian Selig, the creator of Apollo, a popular Reddit app, told me, "but I think it certainly has the potential to profoundly affect the internet."

And so far, AI is making the internet a nightmare.

Internet disruption

For close to 20 years, Reddit has been the internet's unofficial front page, and that longevity is due in large part to the volunteers who moderate its various communities. By one estimate, Reddit moderators do $3.4 million worth of annual unpaid work. To do this, they rely on tools like Apollo, a near-decade-old app that offers advanced moderation tools. But in June, users were greeted with an unusual message: Apollo was shutting down. In the company's attempt to get in on the AI gold rush, third-party apps faced the chopping block.

Apollo and other interfaces like it rely on access to Reddit's application programming interface, or API, a piece of software that helps apps exchange data. In the past, Reddit allowed anyone to scrape its data for free — the more tools Reddit allowed, the more users it attracted, which helped the app grow. But now, AI companies have begun to use Reddit and its vast reserve of online human interaction to train their models. In an attempt to cash in on this sudden interest, Reddit announced new, expensive pricing for access to its data. Apollo and other apps became collateral damage, sparking a month of protests and unrest from the Reddit community. The company refused to budge, even though that meant alienating the communities of people who make up its soul.

A report from Europol expects a mind-blowing 90% of internet content to be AI-generated in a few years.

As data-scraping cash cows undermine the quality of once-reliable sites, a glut of questionable AI-generated content is spilling out over the pages of the web. Martijn Pieters, a Cambridge-based software engineer, recently witnessed the decline of Stack Overflow, the internet's go-to website for technical questions and answers. He'd been contributing to and moderating on the platform for over a decade when it took a sudden nosedive in June. The company behind the site, Prosus, decided to allow AI-generated answers and began charging AI firms for access to its data. In response, top moderators went on strike, arguing that the low-quality AI-generated content went against the very purpose of the site: "To be a repository of high-quality question and answer content."

NewsGuard, a firm that tracks misinformation and rates the credibility of information websites, has found close to 350 online news outlets that are almost entirely generated by AI with little to no human oversight. Sites such as Biz Breaking News and Market News Reports churn out generic articles spanning a range of subjects, including politics, tech, economics, and travel. Many of these articles are rife with unverified claims, conspiracy theories, and hoaxes. When NewsGuard tested the AI model behind ChatGPT to gauge its tendency to spread false narratives, it failed 100 out of 100 times.

AI frequently hallucinates answers to questions, and unless the AI models are fine-tuned and protected with guardrails, Gordon Crovitz, NewsGuard's co-CEO told me, "they will be the greatest source of persuasive misinformation at scale in the history of the internet." A report from Europol, the European Union's law-enforcement agency, expects a mind-blowing 90% of internet content to be AI-generated in a few years.

Though these AI-generated news websites don't have a significant audience yet, their rapid rise is a precursor to how easily AI-generated content will distort information on social media. In his research, Filippo Menczer, a computer science professor and director of Indiana University's Observatory on Social Media, has already found networks of bots that are posting large volumes of ChatGPT-generated content to social-media sites like X (formerly Twitter) and Facebook. And while AI bots have telltale signs now, experts indicate that they will soon get better at mimicking humans and evading the detection systems developed by Menczer and social networks.

While user-run sites like Reddit and social-media platforms are always fighting back against bad actors, people are also losing a crucial place they turn to to verify information: search engines. Microsoft and Google will soon bury traditional search-result links in favor of summaries stitched together by bots that are ill-equipped to distinguish fact from fiction. When we search a query on Google, we not only learn the answer, but also how it fits in the broader context of what's on the internet. We filter those results and then choose the sources we trust. A chatbot-powered search engine cuts off these experiences, strips context like website addresses, and can "parrot" a plagiarized answer, which NewsGuard's Crovitz told me sounds "authoritative, well-written," but is "entirely false."

Synthetic content has also swamped e-commerce platforms like Amazon and Etsy. Two weeks before a technical textbook from Christopher Cowell, a curriculum engineer from Portland, Oregon, was set to be published, he discovered a newly listed book with the same title on Amazon. Cowell soon realized it was AI-generated and the publisher behind it likely picked up the title from Amazon's prerelease list and fed it into software like ChatGPT. Similarly, on Etsy, a platform known for its hand-crafted, artisanal catalog, AI-generated art, mugs, and books are now commonplace.

In other words, it's going to quickly become very difficult to distinguish what's real from what's not online. While misinformation has long been a problem with the internet, AI is going to blow our old problems out of the water.

A scamming bonanza

In the short term, AI's rise will introduce a host of tangible security and privacy challenges. Online scams, which have been growing since November, will be harder to detect because AI will make them easier to tailor to each target. Research conducted by John Licato, a computer science professor at the University of South Florida, has found that it's possible to accurately engineer scams down to an individual's preferences and behavioral tendencies given very little information about a person from public websites and social-media profiles.

One of the key telltale signs of high-risk phishing scams — a kind of attack where the intruder masquerades as a trusted entity like your bank to steal sensitive information — is that the text often contains typos or the graphics aren't as refined and clear as they should be. But these signs won't exist in an AI-powered fraud network, with hackers turning free text-to-image and text generators like ChatGPT into powerful spam engines. Generative AI could potentially be used to plaster your profile picture in a brand's personalized email campaign or produce a video message from a politician with an artificially reworked voice, speaking exclusively on the topics you care about.

The internet will increasingly feel like it's engineered for the machines and by the machines.

And this is already happening: Data from a cybersecurity firm, Darktrace detected a 135% increase in malicious cyber campaigns since the start of 2023, and revealed criminals are increasingly turning to bots to write phishing emails to send error-free, longer messages that are less likely to be caught by spam filters.

And soon hackers may not have to go through too much trouble to obtain your sensitive information. Right now, hackers often resort to a maze of indirect methods to spy on you, including hidden trackers inside websites and buying large datasets of compromised information off of the dark web. But security researchers have discovered that the AI bots in your apps and devices might steal sensitive information for the hackers. Since AI models from OpenAI and Google actively crawl the web, hackers can hide malicious codes — a set of instructions for the bot — inside websites and make the bots execute it with no human intervention.

Say you're on Microsoft Edge, a browser that comes built-in with the Bing AI chatbot. Because the chatbot is constantly reading the pages you look at, it could pick up malicious code concealed in a website you visit. The code could ask Bing AI to pretend to be a Microsoft employee, prompt you with a new offer to use Microsoft Office for free, and ask for your credit-card details. That's how one security expert managed to trick Bing AI. Florian Tramèr, an assistant professor of computer science at ETH Zürich, finds these "prompt injection" attacks concerning, especially considering AI smart assistants are making their way into all sorts of apps such as email inboxes, browsers, office software, and more, and therefore, can easily access data.

"Something like a smart AI assistant that manages your email, calendar, purchases, etc., is just not viable at the moment because of these risks," said Tramèr.

'Dead internet'

As AI continues to wreak havoc on community-led initiatives like Wikipedia and Reddit, the internet will increasingly feel like it's engineered for the machines and by the machines. That could break the web we're used to now, Toby Walsh, an artificial intelligence professor at the University of New South Wales, told me. It will also make things difficult for the AI makers as well. As AI-generated content drowns out human work, tech companies like Microsoft and Google will have less original data to improve their models.

"AI today works because it is trained on the sweat and ingenuity of humans," Walsh said. "If the second-gen generative AI is trained on the exhaust of the first generation, the quality will drop drastically." Earlier this year in May, a University of Oxford study found that training AI on data generated by other AI systems causes it to degrade and ultimately collapse. And as it does, so will the quality of information found online.

Licato, the University of South Florida professor, likens the current state of the web experience to the "dead internet" theory. As the internet's most-visited sites like Reddit become flooded with bot-written articles and comments, companies will deploy additional counter-bots to read and filter automated content. Eventually, the theory goes, most of the content creation and consumption on the internet will no longer be done by humans.

"It's a weird thing to imagine, but it seems increasingly likely with how things are going," said Licato.

I can't help but agree. Over the past few months, the places I used to frequent online are either overrun with AI-generated content and faces or are so occupied with keeping up with their rivals' AI updates that they've crippled their core services. If it goes on, the internet will never be the same again.


Shubham Agarwal is a freelance technology journalist from Ahmedabad, India whose work has appeared in Wired, The Verge, Fast Company, and more.

Read the original article on Business Insider
------------
Read More
By: [email protected] (Shubham Agarwal)
Title: AI is about to turn the internet into a total nightmare
Sourced From: www.businessinsider.com/ai-scam-spam-hacking-ruining-internet-chatgpt-privacy-misinformation-2023-8
Published Date: Tue, 08 Aug 2023 10:00:01 +0000