The algorithm has won. The most powerful social, video, and shopping platforms have all converged on a philosophy of coddling users with automated recommendations. Whether through Spotify’s personalized playlists, TikTok’s all-knowing For You page, or Amazon’s product suggestions, the internet is hell-bent on micromanaging your online activity.
At the same time, awareness of the potential downsides of this techno-dictatorial approach has never been higher. The US Congress recently probed whether social media algorithms are threatening the well-being of children, and new scholarship and books have focused fresh attention on the broad cultural consequences of letting algorithms curate our feeds. “I do think it reifies a lot of our cultural tastes in a way that at least I find concerning,” says Ryan Stoldt, an assistant professor at Drake University and member of the University of Iowa’s Algorithms and Culture Research Group.
In response to the growing sense of unease surrounding Big Tech’s mysterious recommender systems, digital refuges from the algorithm have begun to emerge. Entrepreneur Tyler Bainbridge is part of a nascent movement attempting to develop less-fraught alternatives to automated recommendations. He’s the founder of PI.FYI, a social platform launched in January that hopes, in Bainbridge’s words, to “bring back human curation.”
PI.FYI was born out of Bainbridge’s popular newsletter, Perfectly Imperfect, and a simple conceit: Humans should receive recommendations only from other humans, not machines. Users post recommendations for everything from consumer products to experiences such as “being in love” or “not telling men at bars you study philosophy,” and they also crowdsource answers to questions like “What did you read last week?” or “London dry cleaner?”
Posts on the platform are displayed in chronological order, although users can choose between seeing a feed of content only from friends and a firehose of everything posted to the service. PI.FYI’s homepage offers recommendations from a “hand-curated algorithm”—posts and profiles selected by site administrators and some carefully chosen users.
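PI.FYI hasn’t published its feed code, but the behavior it describes is simple enough to sketch. In the minimal Python below, the Post type, field names, and friends-only toggle are hypothetical stand-ins for whatever the service actually uses; the point is that ordering depends on nothing but the clock.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical data model -- PI.FYI's actual schema isn't public.
@dataclass
class Post:
    author_id: str
    created_at: datetime
    body: str

def build_feed(posts: list[Post], friend_ids: set[str],
               friends_only: bool = False) -> list[Post]:
    """Return posts newest-first: either the full firehose or,
    if friends_only is set, just posts from followed accounts.
    No engagement scoring, no personalization."""
    visible = [p for p in posts
               if not friends_only or p.author_id in friend_ids]
    return sorted(visible, key=lambda p: p.created_at, reverse=True)
```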
“People long for the days of not being bombarded by tailored ads everywhere they scroll,” Bainbridge says. PI.FYI’s revenue comes from user subscriptions, which start at $6 a month. While its design evokes an older version of the internet, Bainbridge says he wants to avoid creating an overly nostalgic facade. “This isn’t an app built for millennials who made MySpace,” he says, claiming that a significant portion of his user base is Gen Z.
Spread, a social app currently in closed beta testing, is another attempt to provide a supposedly algorithm-free oasis. “I don’t know a single person in my life that doesn’t have a toxic relationship with some app on their phone,” says Stuart Rogers, Spread’s cofounder and CEO. “Our vision is that people will be able to actually curate their diets again based on real human recommendations, not what an algorithm deems will be most engaging, therefore also usually enraging,” he says.
On Spread, users can’t create or upload original text or media. Instead, all posts on the platform are links to content from other services, including news articles, songs, and videos. Users can tune their chronological feeds by following other users or choosing to see more of a certain type of media.
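Spread hasn’t detailed how that tuning works. One plausible reading, sketched below with invented names and ratios, is that a “see more of” preference changes which posts are included while the ordering stays strictly chronological:

```python
import random

def tune_feed(posts: list[dict], boosted_kinds: set[str],
              keep_ratio: float = 0.3, seed: int = 0) -> list[dict]:
    """Keep every post in the media types a user asked to see more of,
    sample the rest at keep_ratio, then sort by timestamp. Only
    inclusion is tuned; the feed itself remains chronological."""
    rng = random.Random(seed)  # seeded for reproducibility
    kept = [p for p in posts
            if p["kind"] in boosted_kinds or rng.random() < keep_ratio]
    return sorted(kept, key=lambda p: p["ts"], reverse=True)
```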
Brands and bots are barred from Spread, and, like PI.FYI, the platform doesn’t support ads. Instead of working to maximize time-on-site, Rogers’ primary metrics for success will be indicators of “meaningful” human engagement, such as when someone clicks on another user’s recommendation and later takes an action like signing up for a newsletter or subscription. He hopes this will align companies whose content is shared on Spread with the platform’s users. “I think there’s a nostalgia for what the original social meant to achieve,” Rogers says.
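Rogers doesn’t spell out how such a metric would be computed. A toy version, with an invented event schema, might count what fraction of recommendation clicks lead to a concrete follow-up action, rather than tallying minutes scrolled:

```python
def meaningful_engagement(events: list[tuple[str, str]]) -> float:
    """Each event is (kind, click_id): a "click" on a recommendation,
    or a "followup" (e.g., newsletter signup) tied to that click.
    Returns the share of clicks that led to a follow-up action."""
    clicks = {ref for kind, ref in events if kind == "click"}
    followups = {ref for kind, ref in events if kind == "followup"}
    return len(clicks & followups) / len(clicks) if clicks else 0.0
```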
So you joined a social network without ranking algorithms—is everything good now? Jonathan Stray, a senior scientist at the UC Berkeley Center for Human-Compatible AI, has doubts. “There is now a bunch of research showing that chronological is not necessarily better,” he says, adding that simpler feeds can promote recency bias and enable spam.
Stray doesn’t think social harm is an inevitable outcome of complex algorithmic curation. But he agrees with Rogers that the tech industry’s practice of trying to maximize engagement doesn’t necessarily select for socially desirable results.
Stray suspects the solution to the problem of social media algorithms may in fact be … more algorithms. “The fundamental problem is you’ve got way too much information for anybody to consume, so you have to reduce it somehow,” he says.
In January, Stray launched the Prosocial Ranking Challenge, a competition with a $60,000 prize fund that aims to spur the development of feed-ranking algorithms prioritizing socially desirable outcomes, as measured by users’ well-being and how informative a feed is. From June through October, five winning algorithms will be tested on Facebook, X, and Reddit using a browser extension.
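The challenge’s actual scoring criteria are its own, but the general shape of a prosocial ranker is easy to illustrate. In this hypothetical sketch, each post arrives with well-being and informativeness scores from upstream models, and the feed is ordered by a weighted blend of the two instead of predicted engagement; the weights and field names are invented.

```python
def prosocial_rank(items: list[dict], w_wellbeing: float = 0.6,
                   w_info: float = 0.4) -> list[dict]:
    """Order feed items by a blend of two (hypothetical) model scores
    in [0, 1] rather than by predicted engagement."""
    def score(item: dict) -> float:
        return w_wellbeing * item["wellbeing"] + w_info * item["info"]
    return sorted(items, key=score, reverse=True)

# A browser extension could fetch a page's posts, score them, and
# reorder the visible feed with the result.
posts = [{"id": 1, "wellbeing": 0.9, "info": 0.4},
         {"id": 2, "wellbeing": 0.2, "info": 0.95}]
print([p["id"] for p in prosocial_rank(posts)])  # -> [1, 2]
```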
Until a viable replacement takes off, escaping engagement-seeking algorithms will generally mean going chronological. There’s evidence people are seeking that out beyond niche platforms like PI.FYI and Spread.
Group messaging, for example, is commonly used to supplement artificially curated social media feeds. Private chats—threaded by the logic of the clock—can provide a more intimate, less chaotic space to share and discuss gleanings from the algorithmic realm: the trading of jokes, memes, links to videos and articles, and screenshots of social posts.
Disdain for the algorithm could help explain the growing US popularity of WhatsApp, which has long been ubiquitous elsewhere. Meta’s messaging app saw a 9 percent increase in US daily users last year, according to Apptopia data reported by The Wrap. Even inside today’s dominant social apps, activity is shifting away from public feeds and toward direct messaging, where chronology rules, according to Business Insider.
Group chats might be ad-free and relatively controlled social environments, but they come with their own biases. “If you look at sociology, we’ve seen a lot of research that shows that people naturally seek out things that don’t cause cognitive dissonance,” says Stoldt of Drake University.
While it provides a more organic means of curation, group messaging can still produce echo chambers and other pitfalls associated with complex algorithms. And when the content in your group chat comes from each member’s own highly personalized algorithmic feed, things get more complicated still. Despite the flight to algorithm-free spaces, the fight for a perfect information feed is far from over.