The Senate’s AI Future Is Haunted by the Ghost of Privacy Past

The recent burst of generative artificial intelligence is forcing the US Senate into a debate lawmakers have put off for years: privacy reform.

While Americans’ personal data is a commodity sold, traded, mined, and even “recycled,” passing from second party to third party to digital banana stand, some senators believe your personal data is siloed off from the earth-altering AI work that companies like OpenAI and Google are testing, tweaking, and deploying daily.

“They want to predict the future for purposes of marketing and selling products, and that’s already there,” says Florida Republican Marco Rubio, the vice-chair of the Senate Intelligence Committee, dismissing the need for an overhaul of federal privacy laws.

Rubio is far from an outlier. Ted Cruz of Texas, the top Republican on the Senate Commerce Committee, agrees. “I think if the Democrats push through restrictions on innovation and AI, it would be disastrous for America,” Cruz says. If the United States doesn’t lead, goes the GOP’s stock argument, an adversarial nation (read: China) will.

Still, there’s fear about AI’s potential to dramatically alter the world, which has kept senators mostly united over the need to do something. But with lawmakers beginning to write AI-related legislation, old and unresolved privacy debates are proving a major impediment—and there’s little room for error on the tightrope of bipartisanship in today’s Washington.

In our rapidly evolving world, Congress must debate the past, even as AI is statistically generating our future.

Lesson Not Learned

Before Congress left Washington for a month-long August recess, senators held their third and final all-senators AI briefing. Although not all 100 senators attended, the bipartisan, closed-door AI briefings were intended to provide a baseline framework for understanding artificial intelligence. If nothing else, the briefings surely got senators talking about AI. Almost instantly, the AI chatter resurrected data privacy debates that had died in each recent congressional session.

Rubio’s gut reaction is that he’s fine with American tech companies running unregulated through the frontiers of AI as they create even newer frontiers. Commerce, he says, is commerce. “They’ll take the data to try to predict what you’re going to buy tomorrow or where you want to travel to tomorrow or what you want to look at. It already does,” Rubio says. “We still have laws that govern things like privacy and property rights and all kinds of [areas]. Certainly, those things would still be prohibited whether it’s a human or a machine that’s violating it.”

Senators like Rubio and Cruz appear to forget what happened the last time Congress decided to just let the tech industry run wild. Google swallowed our ability to find information while collecting every bit of personal data it could. Facebook and Twitter created dossiers on everyone who touched them before dictating who could speak and what could be said on social media. And Amazon consumed nearly 40 percent of US online retail (along with our data) while expanding into cloud storage, entertainment, satellite internet, and a bazillion other markets.

In short, the tentacles of US tech firms are everywhere—vaccines, food, cancer research, psilocybin centers, criminal justice reform, homelessness—the list could reach the moon. (Speaking of the moon, how could we forget commercial spaceflight?) And the AI boom is likely to further expand tech firms’ power and riches. Yet on Capitol Hill, some powerful Republicans are focused on one goal: ensuring American AI dominance.

On this front, Rubio generally sees any new regulation as a needless, even harmful, constraint on US technology giants and their AI experiments. One near-universal takeaway from the briefings is that America can’t afford to be number two.

“You’re dealing with a technology that knows no national borders, so even if we write laws that say a company can’t do that in America, it doesn’t mean some company in some other part of the world or some government in other parts of the world won’t innovate that, and use it, and deploy it against the US,” Rubio says.

Senator Mike Rounds, a South Dakota Republican and one of four senators who spearheaded the all-senators briefings, echoes this sentiment. “AI is gonna advance regardless of whether it happens here in the United States or elsewhere. We have to be advancing faster than our adversaries,” he says. “We have to advance it, but we also want to put in appropriate safeguards.”

Specifics remain impossible to pin down in most corners of the Capitol. Lawmakers are still taking in the potential of new large language models, the technology behind ChatGPT and Google’s Bard, even as AI laps us all. Rounds maintains an openness to new, if still nebulous, guardrails, but in a critical, fatherly way, he also faults Americans for signing over our data privacy.

“Here’s the deal, we voluntarily give it away,” Rounds says. “People don’t seem to realize that when they sign these agreements, they’re giving up a lot of their personal information.”

Recklessly handing over our data might be fine, in this view, so long as it’s American tech companies doing the grabbing. But Rounds, like most lawmakers, decries the idea of giving our private data to Chinese-owned TikTok. It’s the one privacy matter everyone can agree on, excluding, perhaps, the 150 million US-based users the company claims to have.

“There doesn’t seem to be a whole lot of concern about it by a significant amount of the American public, which is unfortunate because that’s helping to create the databases that eventually may be used against us,” Rounds says.

While Senate Majority Leader Chuck Schumer and the other senators who organized the briefings tried to steer the conversation around artificial intelligence clear of politics, AI now seems lodged in the age-old partisan debate that pits laissez-faire capitalism against Big Brother, a framing New Mexico Democrat Martin Heinrich says is regrettably shortsighted.

“We failed to regulate the internet when it was regulatable, and Republicans and Democrats today—for the most part—are going, ‘Holy cow, we subjected our entire teenage population to this experiment, and it’s not serving us well.’ So I just don’t think it’s helpful to get hardened,” Heinrich says.

It’s not just Democrats who are voicing concern. There are some outspoken privacy hawks in the GOP, chief among them Senator Josh Hawley, a Missouri Republican. When asked about Cruz and Rubio’s positions—that encroaching on the Silicon Valley data mining model could imperil America’s AI future—Hawley laughs.

“Ha. I don’t know that we’re gonna be able to hermetically seal it like that,” Hawley says, before laughing uproariously. “This idea that we can just trust Google and Meta to be good actors, you know—not gonna happen.”

While his colleagues are almost solely focused on America’s adversaries, Hawley, who may be China’s biggest critic in the Senate, is unsettled by the thought of American tech companies plugging your private data into their large language models right now.

“We need to just ban that. That’s the way to do it. Yeah, we just say no—in federal law,” Hawley says. “They wouldn’t let you sue if they didn’t. That’s the way to fix that, in my view.”

That’s Hawley’s number one priority. He’s the top Republican on the Senate Judiciary Subcommittee on Privacy, Technology, and the Law, and he teamed up with its chair, Senator Richard Blumenthal of Connecticut, to introduce the No Section 230 Immunity for AI Act.

The bipartisan pair say the legislation is essential because it explicitly tacks an AI clause onto Section 230 of the 1996 Communications Decency Act, which shields online companies from liability for anything their users post on their platforms. While both senators have spent years trying to overhaul Section 230, they say there’s no time to waste in updating it to protect consumers from generative AI-powered deepfakes.

Point of No Return

This summer’s AI briefings also instilled the need for speed, and senators are finally moving, if at a senatorially turtle-like pace.

Before lawmakers left town for the summer, they passed their first generative AI amendments, tacking them onto the annual, must-pass National Defense Authorization Act (NDAA). One such measure requires the Department of Defense to set up a “bug bounty program” so Pentagon officials can test American-made AI for security flaws.

Senators, led by Schumer, also tucked a nondefense AI amendment into their version of the defense bill, which still must be reconciled with the House version. It mandates that any AI “gap in knowledge” from the Board of Governors of the Federal Reserve System, Federal Deposit Insurance Corporation, Office of the Comptroller of the Currency, National Credit Union Administration, or Bureau of Consumer Financial Protection must be reported to Congress.

Many Democrats agree with the GOP on the need to lead the AI race, though the details are devilish. On the left, the prescription is the kind of government intervention the right loves to decry.

“We cannot put ourselves at a disadvantage compared to China or other adversaries or competitors. But just like in atomic energy, there needs to be some kind of international structure,” Blumenthal says.

He says that means international agreements, multilateral trust, and “one central agency or office that is responsible for AI and can conduct negotiations with other countries.”

Behind closed doors, senators were warned that the cost of building powerful AI is set to plummet, and many seem acutely on edge over how easy replicating AI is now and will be tomorrow. “One of the things that was discussed—I don’t think it’s classified in any way—is, like in almost all technology, the price is going to come down dramatically,” says Senator John Hickenlooper, a Colorado Democrat. “Look at how many billionaires we have, they can create their own large language models.”

Senators are now engaged in many side debates, with a few focused on the technology’s impact on democracy.

“Frankly, I personally haven’t quite figured out what we ought to do,” says Senator Mazie Hirono, a Hawaii Democrat. “But you can see the damage that can be done in the political arena, and I think that there should be some disclosure requirements in the political arena.”

Others are looking at potential disclosure requirements, especially when it comes to AI making decisions on loans, insurance applications, and other consequential matters.

“If a decision is made about you based on AI making the decision, whether it’s an insurance company or your own government, you have a right to be able to know what’s the data set that’s behind that, whether it’s a valid data set,” says Senator James Lankford, an Oklahoma Republican. “And so that’s a bigger challenge.”

Some newer, younger senators left the private AI briefings more unsettled than when they entered. “The lack of specific detail in some of these briefings makes me a little bit worried about what both the Senate and maybe the Department of Defense know about where we are on AI relative to other countries,” says Republican JD Vance, the freshman senator from Ohio. “It’s not clear if they’re so substance-less because they think that you’re stupid or because they’re hiding something.”

While the Senate is now demanding an AI health checkup from an array of federal agencies, lawmakers are also getting into the weeds in their respective committees and hashing out targeted AI proposals.

There’s broad agreement that there’s no going back. “One thing I’m certain of is I know of no technological advance in human history that we’ve been able to roll back. It’s going to happen,” Rubio says. “The question is, how do we build guardrails and practices around it so that we can maximize its benefits and diminish its harms?”

The Senate, meanwhile, can’t go forward without first taking steps back—to the debate Congress never had. For Senators Hawley and Blumenthal, AI safeguards start with overhauling Section 230. “You’ve got to put that there, and then you can build around that,” Hawley says. “If people don’t have the right, you can tell them, ‘don’t use your data stores for generative AI,’ but then if they do it anyway, it’s like what, the FTC fines them $1 million? No, you have to let people sue and do class actions. Then they pay attention.”
