
Marc Andreessen Once Called Online Safety Teams an Enemy. He Still Wants Walled Gardens for Kids


In his polarizing “Techno-Optimist Manifesto” last year, venture capitalist Marc Andreessen listed a number of enemies to technological progress. Among them were “tech ethics” and “trust and safety,” a term used for work on online content moderation, which he said had been used to subject humanity to “a mass demoralization campaign” against new technologies such as artificial intelligence.

Andreessen’s declaration drew both public and quiet criticism from people working in those fields—including at Meta, where Andreessen is a board member. Critics saw his screed as misrepresenting their work to keep internet services safer.

On Wednesday, Andreessen offered some clarification: When it comes to his 9-year-old son’s online life, he’s in favor of guardrails. “I want him to be able to sign up for internet services, and I want him to have like a Disneyland experience,” the investor said in an onstage conversation at a conference for Stanford University’s Human-Centered AI research institute. “I love the internet free-for-all. Someday, he’s also going to love the internet free-for-all, but I want him to have walled gardens.”

Contrary to how his manifesto may have read, Andreessen went on to say he welcomes tech companies—and by extension their trust and safety teams—setting and enforcing rules for the type of content allowed on their services.

“There’s a lot of latitude company by company to be able to decide this,” he said. “Disney imposes different behavioral codes in Disneyland than what happens in the streets of Orlando.” Andreessen alluded to how tech companies can face government penalties for allowing child sexual abuse imagery and certain other types of content, so they can’t be without trust and safety teams altogether.

So what kind of content moderation does Andreessen consider an enemy of progress? He explained that he fears two or three companies dominating cyberspace and becoming “conjoined” with the government in a way that makes certain restrictions universal, causing what he called “potent societal consequences” without specifying what those might be. “If you end up in an environment where there is pervasive censorship, pervasive controls, then you have a real problem,” Andreessen said.

The solution as he described it is ensuring competition in the tech industry and a diversity of approaches to content moderation, with some having greater restrictions on speech and actions than others. “What happens on these platforms really matters,” he said. “What happens in these systems really matters. What happens in these companies really matters.”

Andreessen didn’t bring up X, the social platform run by Elon Musk and formerly known as Twitter, in which his firm Andreessen Horowitz invested when the Tesla CEO took over in late 2022. Musk soon laid off much of the company’s trust and safety staff, shut down Twitter’s AI ethics team, relaxed content rules, and reinstated users who had previously been permanently banned.

Those changes paired with Andreessen’s investment and manifesto created some perception that the investor wanted few limits on free expression. His clarifying comments were part of a conversation with Fei-Fei Li, codirector of Stanford’s HAI, titled “Removing Impediments to a Robust AI Innovative Ecosystem.”

During the session, Andreessen also repeated arguments he has made over the past year that slowing down development of AI through regulations or other measures recommended by some AI safety advocates would repeat what he sees as the mistaken US retrenchment from investment in nuclear energy several decades ago.

Nuclear power would be a “silver bullet” to many of today’s concerns about carbon emissions from other electricity sources, Andreessen said. Instead the US pulled back, and climate change hasn’t been contained the way it could have been. “It’s an overwhelmingly negative, risk-aversion frame,” he said. “The presumption in the discussion is, if there are potential harms therefore there should be regulations, controls, limitations, pauses, stops, freezes.”

For similar reasons, Andreessen said, he wants to see greater government investment in AI infrastructure and research and a freer rein given to AI experimentation by, for instance, not restricting open-source AI models in the name of security. If he wants his son to have the Disneyland experience of AI, some rules, whether from governments or trust and safety teams, may be necessary too.
