April 10 was a very bad day in the life of celebrity gamer and YouTuber Atrioc (Brandon Ewing). Ewing was broadcasting one of his usual Twitch livestreams when his browser window was accidentally exposed to his audience. For a few moments, viewers were suddenly face-to-face with what appeared to be deepfake porn videos featuring female YouTubers and gamers QTCinderella and Pokimane—colleagues and, to my understanding, Ewing’s friends. Moments later, a quick-witted viewer uploaded a screenshot of the scene to Reddit, and the scandal was born.
Deepfakes refer broadly to media doctored by AI, commonly to superimpose a person’s face onto that of, say, an actor in a movie or video clip. But sadly, as reported by Vice journalist Samantha Cole, their primary function has been to create porn starring female celebrities and, perhaps more alarmingly, to visualize sexual fantasies about friends or acquaintances. Given the technology’s increasing sophistication and availability, anyone with a picture of your face can now, in effect, turn it into a porno. “We are all fucked,” as Cole concisely puts it.
For most people, I believe, it is obvious that Ewing committed some kind of misconduct in consuming fictive yet nonconsensual pornography featuring his friends. Indeed, the comments on Reddit, and the strong (justified) reactions from the women whose faces were used in the clips, testify to a deep sense of disgust. This is understandable, yet specifying exactly where the crime lies is a surprisingly difficult undertaking. In fact, the task of doing so brings to the fore a philosophical problem that forces us to reconsider not only porn but the very nature of human imagination. I call it the pervert’s dilemma.
On the one hand, one may argue that by consuming the material, Ewing was incentivizing its production and dissemination, which, in the end, may harm the reputation and well-being of his fellow female gamers. But I doubt that the verdict in the eyes of the public would have been much softer had he produced the videos by his own hand for personal pleasure. And few people see his failure to close the tab as the main problem. The crime, that is, appears to lie in the very consumption of the deepfakes, not the downstream effects of doing so. Consuming deepfakes is wrong, full stop, irrespective of whether the people “starring” in the clips, or anyone else, find out about it.
At the same time, we are equally certain that sexual fantasies are morally neutral. Indeed, no one (except perhaps some hard-core Catholics) would have blamed Ewing for creating pornographic pictures of QTCinderella in his mind. But what is the difference, really? Both the fantasy and the deepfake are essentially virtual images produced by previous data input; one simply exists in one’s head, the other on a screen. True, the latter can more easily be shared, but if the crime lies in the personal consumption, and not the external effects, this should be irrelevant. Hence the pervert’s dilemma: We think sexual fantasies are fine as long as they are only ever generated and contained in a person’s head, and abhorrent the moment they exist outside the brain with the aid of a somewhat realistic representation—yet we struggle to identify any morally relevant distinction that justifies this assessment.
In the long run, it is likely that this will force us to reevaluate our moral attitudes to both deepfakes and sexual fantasies, at least insofar as we want to maintain consistency in our morality. There are two obvious ways in which this could go.
The first is that we simply begin to accept pornographic deepfakes as a normal way of fantasizing about sex, except that we outsource to a machine some of the work that used to happen in the brain. Considering the massive supply of (sometimes stunningly realistic) pornographic deepfakes and the ease with which they can be customized to one’s own preferences (how long before there is a DALL-E for porn?), this may be a plausible outcome. Knowing that people probably use your photos to create fictive porn may assume the same status as knowing that some people probably think of you (or look at your most recent Instagram selfie) when they masturbate—not a huge deal unless they say it to your face. At the very least, we can imagine the production of deepfakes assuming the same status as drawing a highly realistic picture of one’s sexual fantasy—weird, but not morally abhorrent.
The second, and arguably more interesting, option is that we begin to question the moral neutrality of sexual fantasies altogether. Thinking about sex was long considered deeply sinful in Christian Europe, and it remains stigmatized for some. It was only after the Enlightenment that whatever goes on in a person’s mind became a “private matter” beyond moral evaluation. Historically speaking, this is very much the exception. And to some extent, we still moralize over people’s fantasies. For instance, several ethicists (and many other people, I reckon) hold that sexual fantasies involving children or brutal violence are morally objectionable.
But deepfakes may give us reason to go even further, to question dirty thoughts as a general category. Since the advent of the internet, we’ve been forming a new attitude toward the moral status of our personal data. Indeed, most Westerners today take it for granted that one should have full control over information pertaining to one’s person. But wouldn’t this, strictly interpreted, also include data stored in other people’s heads? Wouldn’t it grant me a level of control over other people’s imaginations? The idea is not as wild as it first appears. Consider the Friends episode “The One With a Chick and a Duck,” in which Ross teases Rachel by picturing her naked against her will, claiming that it is one of the “uh, rights of the ex-boyfriend, huh?” Rachel repeatedly begs him to stop, but Ross merely responds by closing his eyes and saying, “Wait, wait, now there’s 100 of you, and I’m the king.” The joke is portrayed as completely uncontroversial, laugh track and all. But now, some two decades later, doesn’t it leave a rather bitter taste in your mouth? Indeed, in the age of information, the moral neutrality of the mind seems increasingly under siege. Perhaps, in another 20 years, the thought that I can do whatever I want to whomever I want in my head will strike people as morally disgusting too.
We are probably going to see some of both scenarios. There will be calls to moralize over people’s imagination. And people will probably react with less and less shock when learning about the deepfake phenomenon, even when it happens to them. Just compare the media coverage of deepfake porn today with that of two years ago. The (legitimate) moral panic that characterized the initial reports has almost completely vanished, despite the galloping technological development that has taken place in the meantime. Yet we will probably not arrive at any moral consensus regarding deepfakes anytime soon. Indeed, it has taken us thousands of years to learn to live with human imagination, and the arrival of deepfakes turns most of those cultural protocols on their heads.
So, which of the options is preferable from the viewpoint of moral philosophy? There is no simple answer. This is in part because both options make sense, or at least have the potential to make sense (otherwise there wouldn’t be any dilemma to begin with). But it is also due to the very nature of moral judgments. Moral truths cannot be stated once and for all. On the contrary, they must be asked anew every day.
Think of it like this: We know how many electrons are in a hydrogen atom, and so we never need to ask that question again. Questions like “Who should we be?”, “What is a good human life?”, or “Can we blame people for their fantasies?”, on the other hand, need to be asked again and again by every generation. This is because moral philosophy is an activity that dies the moment we stop doing it. For our moral lifeworlds to make sense, we must consciously reevaluate them, because this activity is always dependent on the social, technological, and cultural contexts in which it takes place. So, the moment we arrive at a definitive answer to the question of which option is preferable from the standpoint of moral philosophy, moral philosophy ceases to be.
Where does all this put us in relation to Ewing, Pokimane, and QTCinderella? There is no doubt that the feelings of shame and humiliation expressed by the targets of the videos are real. And I personally see no reason to question the authenticity of the shame and regret expressed by Ewing. But our moral sensemaking of the situation is a different matter. And we should be open to the possibility that, in 20 years, we may think very differently about these things. It all depends on how we continue to build and reevaluate our moral lifeworlds. A good first step is to step back and reconsider what exactly it is we find objectionable about deepfakes.
I think the best place to start is to assess the social context in which deepfakes are used, and to compare it with the context around sexual fantasies. Today, it is clear that deepfakes, unlike sexual fantasies, are part of a systemic and highly gendered technological degrading of women (almost all pornographic deepfakes involve women). And the moral implications of this system are larger than the sum of its parts (the individual acts of consumption). Fantasies, on the other hand, are not gendered—at least we have no reliable evidence of men engaging more in sexual imagination than women do—and while the content of an individual fantasy may be misogynist, the category is not so in and of itself. The immoral aspect of Ewing’s actions therefore lies not primarily in the damage done to the individuals portrayed, but in his partaking in a technologically supported, systemic degrading of women, a system that amounts to more than the sum of its parts.
While this is the beginning of an answer, it is not the answer. How the technology is used and fitted into our social and cultural protocols will continue to change. What Ewing did wrong cannot be answered once and for all. For tomorrow, we will need to ask again.