Tuesday, September 12, 2023

There’s never been a more important time in AI policy

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

Before we get started, I wanted to flag two great talks this week.


⚖ On Tuesday, September 12, at 12 p.m. US Eastern time, we will be hosting a subscriber-only roundtable conversation about how to regulate artificial intelligence. I’ll help you decipher what is going on in AI regulation and what to pay attention to this fall. (You can subscribe to get access here.)


🦾 On Thursday, September 14, at 12 p.m. US Eastern time, I am interviewing Gareth Edwards, the director behind Rogue One: A Star Wars Story, about his new film, The Creator. The film is about the current state of AI and the pitfalls and possibilities ahead as this technology marches toward sentience. Join us on LinkedIn Live!

Okay, on with the newsletter!

Lawmakers are back from summer vacation and ready to get to work. The new school year has started with a flurry of activity on AI, in what is turning out to be one of the most consequential seasons for the technology.

A lot has changed since I first started covering AI policy four years ago. I used to have to convince people that the subject was worth their time. Not anymore. The subject has gone from being a super nerdy, niche topic to front-page news. Notably, politicians in countries such as the US, which have traditionally been reluctant to regulate tech, have now come out swinging with lots of different proposals.

On Wednesday, tech leaders and researchers are meeting at Senate Majority Leader Chuck Schumer’s first AI Insight Forum. The forum will help Schumer shape his approach to AI regulation. My colleague Tate Ryan-Mosley breaks down what to expect here. 

Senators Richard Blumenthal and Josh Hawley have also said they will introduce a bipartisan bill on artificial intelligence, which would include rules for licensing and auditing AI, liability rules around privacy and civil rights, and standards for data transparency and safety. The bill would also create an AI office to oversee regulation of the technology.

Meanwhile, the EU is in the final stages of negotiations for the AI Act, and some of the toughest questions about the bill, such as whether to ban facial recognition, how to regulate generative AI, and how enforcement should work, will be hashed out between now and Christmas. Even the leaders of the G7 decided to chime in and agreed to create a voluntary code of conduct for AI.

Thanks to the excitement around generative AI, the technology has become a kitchen-table topic, and everyone is now aware that something needs to be done, says Alex Engler, a fellow at the Brookings Institution. But the devil will be in the details.

To really tackle the harm AI has already caused in the US, Engler says, the federal agencies that oversee health, education, and other sectors need the power and funding to investigate and sue tech companies. He proposes a new regulatory instrument called the Critical Algorithmic Systems Classification (CASC), which would grant federal agencies the right to investigate and audit AI companies and enforce existing laws. This is not a totally new idea; it was outlined by the White House last year in its AI Bill of Rights.

Say you realize you have been discriminated against by an algorithm used in college admissions, hiring, or property valuation. You could bring your case to the relevant federal agency, and the agency would be able to use its investigative powers to demand that tech companies hand over data and code about how these models work and review what they are doing. If the regulator found that the system was causing harm, it could sue.

In the years I’ve been writing about AI, one critical thing hasn’t changed: Big Tech’s attempts to water down rules that would limit its power.

“There’s a little bit of a misdirection trick happening,” Engler says. Many of the problems around artificial intelligence—surveillance, privacy, discriminatory algorithms—are affecting us right now, but the conversation has been captured by tech companies pushing a narrative that large AI models pose massive risks in the distant future, Engler adds.

“In fact, all of these risks are far better demonstrated at a far greater scale on online platforms,” Engler says. And these platforms are the ones benefiting from reframing the risks as a futuristic problem.

Lawmakers on both sides of the Atlantic have a short window to make some extremely consequential decisions about the technology that will determine how it is regulated for years to come. Let’s hope they don’t waste it. 

Deeper Learning

You need to talk to your kid about AI. Here are 6 things you should say.

In the past year, kids, teachers, and parents have had a crash course in artificial intelligence, thanks to the wildly popular AI chatbot ChatGPT. But it’s not just chatbots that kids are encountering in schools and in their daily lives. AI is increasingly everywhere: recommending shows to us on Netflix, helping Alexa answer our questions, powering our favorite interactive Snapchat filters, and shaping the way we unlock our smartphones.

AI 101: While some students will invariably be more interested in AI than others, understanding the fundamentals of how these systems work is becoming a basic form of literacy—something everyone who finishes high school should know. At the start of the new school year, here are MIT Technology Review’s six essential tips for how to get started on giving your kid an AI education. Read more from Rhiannon Williams and me here.

Bits and Bytes

Chinese AI chatbots want to be your emotional support
What is Chinese company Baidu’s new Ernie Bot like, and how does it compare to its Western alternatives? Our China tech reporter Zeyi Yang experimented with it and found that it did a lot more hand-holding. Read more in his weekly newsletter, China Report. (MIT Technology Review)

Inside Meta’s AI drama: Internal feuds over compute power
Meta is losing top talent left, right, and center over internal feuds about which AI projects are given computing resources. Of the 14 researchers who authored Meta’s LLaMA research paper, more than half have left the company. (The Information)

Google will require election ads to disclose AI content
Google will require advertisers to “prominently disclose” when a campaign ad “inauthentically depicts” people or events. As the US presidential election draws closer, one of the most tangible fears around generative AI is the ease with which the technology can be used to make deepfake images meant to mislead people. The changes will take effect in mid-November. (Financial Times)

Microsoft says it will pay for its clients’ AI copyright legal fees
Generative AI has been accused of stealing authors’ and artists’ intellectual property. Microsoft, which offers a suite of generative AI tools, has said it will pay up if any of its clients are sued for copyright violations. (Microsoft)

A buzzy AI startup for generating 3D models used cheap human labor
The Mechanical Turk, but make it 3D. Kaedim, a startup that says it uses machine learning to convert 2D illustrations into 3D models, actually uses human artists for “quality control,” and sometimes to create the models from scratch. (404 Media)
