Image by MargJohnsonVA on Envato Elements

The prevalence of false news stories, as well as our exposure to them, has risen dramatically over the last few years.  While many are familiar with disinformation, especially as it exists in the political arena, fewer have recognized the significance of a growing sub-genre of synthetic media called deepfakes.  The massive expansion of deepfakes has provided fertile ground for the development of disinformation campaigns throughout every facet of society.  Government and security officials worry that a deepfake could instigate a future conflict between the United States and an adversary, and there are further concerns about how deepfakes could affect advertising, education, the economy, and more.  One of the most significant threats is the use of deepfakes to oppress and discriminate against women through non-consensual pornography.  Some research indicates over 90% of recent deepfakes have been non-consensual pornographic clips of women (Savin, 2022).

This analysis will explain what deepfakes are and how this new manifestation of disinformation is advancing in ways many never expected to be possible.  Further, it argues that the use of deepfakes is fundamentally unethical. The misappropriation of women's bodies in non-consensual pornography is a devastatingly effective way of silencing women's voices and flaunts the misogyny and patriarchy embedded in our society (Savin, 2022).  Calling out deepfakes as an unethical communication practice is necessary.  It is time for the discipline of communication to shine a much brighter light on deepfakes as a new vehicle for commodifying women's bodies.  The defense of deepfakes as a potentially legitimate form of communication under the guise of free speech and the global marketplace must end.  This ethical case against deepfakes is made to provide at least some form of symbolic justice to those who have been oppressed, to deter potential perpetrators by staking the claim that they are ethically wrong, and to take a first step in mobilizing proper societal and discipline-wide responses to this marginalization.

What are deepfakes?

Deepfakes are a form of artificial intelligence that uses machine learning to create extremely realistic-looking video and audio of events that never happened or did not happen in the way shown (Sample, 2020).  The history of deepfakes is quite short.  In 2016, Justus Thies and his colleagues presented research at the Conference on Computer Vision and Pattern Recognition establishing their ability to perform real-time face capture and re-enactment of expressions (de Ruiter, 2021).  In 2017, videos appeared on Reddit in which the faces of celebrity actresses were inserted into scenes from pornographic videos (de Ruiter, 2021).  The technology has advanced from there.

Deepfakes are likely to depict people saying or doing things they would have no inclination or thought of ever actually doing.  Significant and justified concern has been raised that "as a result of deepfakes, we are heading toward an 'infopocalypse' where we cannot tell what is real from what is not" (Fallis, 2021, para. 2).  And deepfakes are getting good.  New advances in technology have allowed companies to produce video and audio that are nearly indistinguishable from the real thing.  Subject photos can be retrieved from anywhere – including the vast expanse of the internet – and translated into video depicting virtually anything the creator wants.  The audio capabilities of deepfakes have advanced rapidly as well.  With only a small clip of someone's voice, a profile can be created and words can then be put into the subject's mouth that sound as if they are coming from the person themselves.  If you saw, for example, the Mark Zuckerberg video from years back, where his blinking and body movements provided hints that the video was fake, you would be shocked by how far the technology has come. So, let's be entirely clear.  Artificial intelligence has made it possible for "almost anyone to create convincing fake videos of anyone doing or saying anything" (Fallis, 2021, para. 14).  Another key reason for this advancement has been the use of generative adversarial networks (GANs), in which a generator network creates images while a discriminator network tests how likely they are to be judged realistic and genuine (de Ruiter, 2021).
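To make the adversarial idea concrete, here is a minimal, purely illustrative sketch in NumPy: a one-dimensional "generator" learns to mimic a target distribution by playing against a logistic "discriminator." The function name, hyperparameters, and toy data are invented for illustration only; real deepfake systems use deep convolutional networks at vastly larger scale.

```python
import numpy as np

def train_toy_gan(steps=3000, batch=64, lr=0.02, seed=0):
    """1-D toy GAN: generator g(z) = a*z + b (z ~ N(0,1)) tries to
    mimic real data ~ N(4, 1); discriminator D(x) = sigmoid(w*x + c)
    tries to tell real samples from generated ones."""
    rng = np.random.default_rng(seed)
    a, b = 1.0, 0.0          # generator parameters
    w, c = 0.0, 0.0          # discriminator parameters
    sig = lambda t: 1.0 / (1.0 + np.exp(-np.clip(t, -60.0, 60.0)))
    for _ in range(steps):
        real = rng.normal(4.0, 1.0, batch)
        z = rng.normal(0.0, 1.0, batch)
        fake = a * z + b
        # discriminator: gradient ascent on log D(real) + log(1 - D(fake))
        s_r, s_f = sig(w * real + c), sig(w * fake + c)
        w += lr * np.mean((1.0 - s_r) * real - s_f * fake)
        c += lr * np.mean((1.0 - s_r) - s_f)
        # generator: gradient ascent on log D(fake) (non-saturating loss)
        s_f = sig(w * fake + c)
        a += lr * np.mean((1.0 - s_f) * w * z)
        b += lr * np.mean((1.0 - s_f) * w)
    return a, b

# after training, the generator's mean (b) should drift toward the real mean
a, b = train_toy_gan()
```

The adversarial dynamic is the point: neither network is told what "realistic" means; each improves only by exploiting the other's weaknesses, which is why the resulting forgeries become so convincing.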

Even if the ability to use this type of technology well were limited to a few noted experts, that alone should elicit fear, but that is not where we are.  There are literally dozens of apps – including FakeApp and Zao – which are user-friendly software programs available with online tutorials that can make almost anyone with access to a computer (an obviously huge number of people) adept at creating deepfakes (Fallis, 2021).  As one writer put it, "we're getting to the point where we can't distinguish what's real—but then, we didn't before. What is new is the fact that it's now available to everybody, or will be ... It's destabilizing. The whole business of trust and reliability is undermined by this stuff" (Fallis, 2021, para. 14).

A real consequence of this is that deepfakes are being used to convince people of things that are not true.  Deepfakes can not only convince people of falsehoods; they can prevent people from gaining knowledge that is correct.  The inability to trust any video about anything risks further collapse of the institutions on which society has traditionally relied.  How will people trust that the government, the media, educational institutions, scientists, or anyone else is telling them the truth?  We are not far removed from the height of the COVID-19 pandemic, when it became clear that disinformation was a risk to public health.  Imagine that amplified a thousand-fold.

Overall, the risks to society are clear. If you are interested in fodder for your nightmares, new studies have revealed the readiness of cybercriminals to "integrate deepfakes into attacks on financial institutions, scams, and attempts to impersonate politicians" (Vijayan, 2022, para. 4).  Vladimir Kropotov, a security researcher, adds, "what's scary is that many of these attacks use identities of real people – often scraped from content they post on social media networks" (Vijayan, 2022, para. 4). Tools exist in this arsenal to open or hijack bank accounts, plant evidence for extortion, and harvest our passwords (Vijayan, 2022).  Yes, you can run to delete content from your Facebook now. And if you thought this was all something for the future, it isn't.  In a Trend Micro survey of cybersecurity professionals, respondents revealed deepfake-enabled threats had been experienced by a "startling 66% - up 13% from 2021" (Vijayan, 2022, para. 13).

How do deepfakes oppress women?

Now, imagine being a woman who gets a message from a friend or colleague asking, "when did you start posing for porn sites?"  You immediately go to the site your friend referenced and see your head on a body that isn't yours, engaging in sexual acts with people you have never met (Barnett & Rivers, 2022).  Situations like this are all too real.

"Over 90% of all deepfakes online are non-consensual and pornographic clips targeting women" (Savin, 2022, para. 5).  Let's reiterate that.  Over 90% of all deepfakes are non-consensual pornographic clips targeting women.  Deepfakes are "weaponized disproportionately against women, representing a new and degrading means of humiliation, harassment and abuse" (Barnett & Rivers, 2022, para. 3).  They are weaponized against women like Kate Isaacs.  Kate was a leader in the Not Your Porn group who helped secure the deletion of millions of illegal or non-consensual videos (from rapes, trafficking, unapproved surveillance, etc.) from Pornhub (Enright, 2022).  In late 2020, she realized that those who were upset by her efforts had produced a deepfake in which her face had been added to the body of another person.  She remembers thinking, "Who is this person? Did I have sex with this person?" (Enright, 2022, para. 1).  But it gets worse.  Her face was recorded in videos saying, "I only wanted to get rid of porn because I've got a porn video that I'm ashamed of" (Enright, 2022, para. 4).  People believed it.  She later recounted, "it was the most terrifying thing I've ever experienced.  I didn't want to walk home alone.  This was the winter so the thought of walking home in the dark was really scary.  Essentially, it was about silencing me" (Enright, 2022, para. 6).

Kate is far from the only one.  Cara Hunter provides another example. She was weeks away from an election in Northern Ireland when a deepfake featuring her was released.  She reported, "two days after the video started doing the rounds, a man stopped me in the streets when I was walking by myself and asked for oral sex" (Enright, 2022, para. 16).  Not surprisingly, she says it was "a form of psychological warfare" (Enright, 2022, para. 19).  There are too many other examples. Suffice it to say, these women spoke truth to power and were targeted for it.  If deepfakes are the next step in guaranteeing that women who attempt to use their voices will continue to be casualties of a male-dominated and patriarchal societal structure, what hope do we have?  Additionally, as Adrienne de Ruiter observed, "the wrongness of this type of non-consensual deepfake cannot be fully grasped without considering the way in which people's identity is tied up with their face and voice" (2021, para. 52).  Will it ever end?

Why deepfakes are fundamentally unethical

As we can see, communicating ethically is now considerably more difficult than it once was, given the need to navigate an increasingly treacherous set of challenges.  That said, ethical intervention into the disinformation crisis is not only reasonable but required, because disinformation has become "the major moral crisis of our times" (Effron & Raj, 2019, para. 1).

While the last few years have definitely put far more attention on the spread of disinformation, debates about the ethics of lying have been widespread for decades.  Is it acceptable to tell a colleague you like his tie when you do not? Should you withhold information about being fired from your job because you are worried it would upset a friend on their birthday? Can you feel good about telling your mom the Thanksgiving turkey was delicious, even though everyone at dinner thought it was quite dry and overdone?  Is it ever ethical to lie?

The duty to truthfulness was established in the writings of the philosopher Immanuel Kant and the Categorical Imperative.  At its most basic, the Categorical Imperative argues we should treat others as ends, not merely as means. In advancing his theorizing surrounding the Categorical Imperative, Kant advocated that "truth telling is a perfect duty, one so basic that it cannot be overridden by other values – not even saving the life of a friend" (Schulman, 2014, para. 4).  Substantial scholarly writing has emerged since Kant's universalization of the value of truth.  This analysis contextualizes an ethical argument against deepfakes using what has been identified as the second formulation of Kant's Categorical Imperative, the formula of humanity as an end in itself (FHE): "So act that you use humanity, whether in your own person or in the person of any other, always at the same time as an end, never merely as a means" (Kant, 1996, 4:429).  For Kant, the perfect duty of respect for humanity rests on a rational agent's ability to "choose ends and means to those ends with some spontaneity or freedom" (Stroud, 2018, p. 255).  Under the Formula of Humanity, "coercion and deception are the most fundamental forms of wrongdoing – the roots of all evil" (Korsgaard, 1986, p. 333).  Both coercion and deception violate the conditions for freedom, making it impossible for a rational agent to choose to contribute to the end.  Stroud (2018) poses the following question to determine whether an action is consistent with the goals of the FHE: "does that action preserve and promote the rational agency of others – their ability to freely select ends and the means to reach those ends – or does it distort or destroy it?" (p. 257).

As it relates to deepfakes, the answer to that question is a categorical no – they do not preserve and promote the rational agency of others.  Deepfakes, especially those using non-consensual pornography, are ethically impermissible because the individual portrayed was not informed and did not freely consent to contribute to the production or its end.  The creator of a deepfake treats another person as a mere means toward an end, likely power or profit.  As previously discussed, the image or voice of another can be taken from anywhere and used in any way in a deepfake, and the result looks very real, despite the fact that a rational agent could not or would not choose to be a part of it.  Specifically, Adrienne de Ruiter explains, "deepfake technology is prone to violate the categorical imperative – this signifies that we need to respect that human beings have a will of their own and that we cannot treat them as mere instruments to pursue our own goals or satisfy our desires.  In the case of non-consensual deepfake pornography, for example, this maxim is clearly violated" (2021, para. 32).

A less appreciated and more recently discussed element of Kant's duty to truthfulness concerns his prohibition against internal lies.  Kant held that an internal lie is ethically unacceptable.  He claimed, "by an external lie a human being makes themselves an object of contempt in the eyes of others; by an internal lie they do what is still worse: they make themselves contemptible in their own eyes and violate the dignity of humanity" (Sackovich, 2016, pp. 19-20).  Kant further argued that external lies are often a consequence of, or rooted in, internal lies.  While it might seem odd, self-honesty (which is what Kant is calling for in prohibiting internal lies) should be a moral obligation because "it is impossible for us to show ourselves respect when we are deceiving ourselves" (Sackovich, 2016, p. 23).  The fact that many people have lied or are lying to themselves can often explain the lack of self-respect that leads to unhappiness, which can cause a host of issues devastating to both an individual and those living around them.  Both external and internal lies are at issue in the debate about deepfakes.  Those who would use the technology for nefarious purposes, knowing they are misleading others, are also clearly lying to themselves.  This continued lack of self-respect in pursuit of the profit or other perceived benefits generated by producing misleading deepfake videos does nothing to boost happiness or civility in our society.

Of course, it is important to address the question of what should even count as a lie.  This analysis assumes that disinformation, including the use of deepfakes, is based on lies, which are unethical.  Scholars such as George Yoos cast a very wide net over the behavior they would deem lying.  Yoos believed "lying extends to all sorts of statements and behaviors that may be misleading, deceptive, and confusing" (Yoos, 2007, p. 2006).  He further claimed our looks, our actions, and even our silence can lie. So, if you smile when your best friend tells you excitedly that she got a new job and is moving away, even though you think it is the worst news ever, Yoos would consider that a lie. His logic arguably provides direct support for the argument that our silence – knowing deepfakes exist and are harming women but not speaking out about it – is itself a version of a lie, and certainly for the claim that deepfakes count as unethical lying.

Dr. Paul Ekman uses a more traditional definition, writing that a lie "is an act in which someone makes a deliberate choice to mislead another person without giving prior notification of that intention" (2022, para. 1).  Another scholar, Sissela Bok, defines a lie as "any intentionally deceptive message which is stated" (1979, p. 13). Bok's definition is meant to distinguish between an act of deception and a lie.  Bok argues that lying is a sub-category of the larger category of deception.  Lying is stated (usually orally or in writing), whereas, by her definition, the act of smiling at your friend who got a new job is potentially deceptive but not a lie. Unless a deepfake is upfront about being an obvious misrepresentation (and most are not), it is deceptive and potentially unethical according to both of these scholars.

Disagreement exists among scholars about when lying is ethical and when it is not.  Some believe lying is always wrong, no matter what.  Usually, this perspective is grounded in the notion that it is fundamentally disrespectful to intentionally share false information with another person.  From a utilitarian perspective, one might think lying is largely wrong but could be justified in the interest of saving lives or preventing large-scale suffering.  Sissela Bok (referenced earlier) seems more forgiving in assuming "truthful statements are preferable to lies in the absence of special considerations" (1979, p. 30).  The special considerations could be situations like white lies, lying to protect confidentiality, or lies to those who are sick and dying.  It seems reasonable to assume that profit motives and other justifications for deepfakes are not the special considerations Bok had in mind.

In opposition to these perspectives, David Nyberg is an example of a scholar who is much more forgiving about lying.  Nyberg considers it more normal than abnormal and has contended "truth telling is morally overrated" (1994, p. 7).  Nyberg is less confident than other scholars that it is even possible to organize behavior in a world of so many different perspectives, goals, expectations of privacy, and types of relationships without deception and lying being relatively commonplace, even given a theoretical presumption against lying.  Those who argue in favor of the use of deepfake technology would likely feel comfortable with Nyberg's perspective on lying.

An outside example might help put all of this into perspective.  As part of the continued investigation of Trump's efforts to overturn the 2020 election results, federal prosecutors revealed they had obtained an email from the Trump campaign directing a group of Georgia Republicans to meet in secret, obscure their objectives, and maintain complete discretion until they had managed to secure an overturn of the election results in that state (Polantz, Cohen, & Perez, 2022).  Is this a type of deception we should view as commonplace in this political environment? Should this type of behavior in reaction to an election result be considered illegal?  Unethical?  While there is a good chance that at least some people involved with the Trump campaign believe this is the type of request for concealment or secrecy that deserves special consideration in the interest of protecting the public, others would argue this direction was deceptive and supported lying.  We continue to learn more about this issue, and the legal questions are yet to be resolved, but it highlights the continued relevance of the study of ethical communication.

As far as this analysis is concerned, ethical communication should also avoid intentional ambiguity and vagueness.  Intentional ambiguity is "the use of language or images to suggest more than one meaning at the same time" (Cambridge Dictionary, 2022, para. 2).  Prior to 2022, many US leaders were intentionally vague about whether they would defend Taiwan if it were attacked by China, due to the controversy that recognizing the government of Taiwan could create between the United States and China.  In May 2022, President Biden seemed to disregard this previous ambiguity and stated that the United States would come to Taiwan's defense if China attacked.  While this caused a brief stir in the media concerning US policy toward China, Biden's communication is not really inconsistent with what United States policy has been for many years.  Another classic example of ambiguity is the phrase "the murderer killed the student with a book."  One could hear all of those words and still be confused about whether the murderer killed the student who possessed a book, or literally killed a student using the book as a weapon.

Intentional ambiguity is considered unethical because the ideal communicator would clearly convey what is meant by their claims; the receiver should not bear the obligation of sorting out what a sender might have meant by ambiguous language.  In the murderer example above, intentionally providing that ambiguous statement to authorities could delay finding either the victim or the murder weapon.  It could delay notifying a family of a victim's identity, or result in authorities failing to retrieve an important piece of evidence that could lead to a conviction in a court of law.  In the case of deepfake video and audio, the sender rarely makes it obvious to the receiver that the material is not real.  That deepfakes cannot clearly disclose their own fabrication without fear of losing their appeal, humor, or message should raise our concern that there is a problem with the use of the technology.

Vagueness is communication that is unclear, imprecise, or uncertain.  Many believe politics provides fertile ground for examples of vague and unethical communication.  For example, during a campaign a politician could promise not to increase taxes on the middle class.  If the politician were elected even partially on the basis of this promise and then increased taxes on individuals making more than $80,000 per year, there would likely be discontent, because those who make $80,000 are middle class.  The politician could easily argue, though, that their definition of middle class was something else – perhaps including anyone making over $50,000 per year.  Theoretically, before electing a politician based on policy directed at a vague term like "middle class," more specific questions should be asked about the details of the definition.  In this vein, consider deepfakes in politics: if individuals are being elected or defeated because of deepfake technology (which we can already foresee happening), shouldn't we be concerned about the impact on the future of our democracy?

To date, the most substantive discussion communication and media scholars have raised about deepfakes concerns whether their use to undermine another's reputation can be framed as defamation.  Indeed, it seems true, as explained by Diakopoulos and Johnson, that deepfakes damage reputations (as was made clear in the discussion of non-consensual pornography).  At the same time, however, "it is important to realize the problem does not concern a person's reputation alone.  A non-consensual deepfake porn video would wrong the person portrayed even if nobody other than the target were to see it" (de Ruiter, 2021, para. 39).  Defamation lawsuits have, fortunately, been effective for some women oppressed by non-consensual pornography.  In late September 2022, a Los Angeles Police Department captain sued the department after a non-consensual porn photo of her was released and widely distributed (Hetherman, 2022).  The department ignored requests to inform the entire department that the image was not actually of her, and she was awarded $4 million in a jury trial (Hetherman, 2022).  But defamation cannot be the only response these women have available.  While this analysis agrees that defamation is also unethical, winning these cases often requires resources many women simply do not have.  A better solution is to condemn the practice on all ethical levels as a potential mechanism for broad change.

But let's have the conversation.  Sophie Maddocks, a research fellow at the Netter Center for Community Partnerships and a doctoral student in Communication at the University of Pennsylvania, is one of too few who accurately recognize that "facing up to the extent of the problem of fake porn is crucial.  Sex is taboo.  People working at tech startups and think tanks don't want to sit at the round table and talk about sex.  They don't want to acknowledge that the root of so many contemporary technology issues is this very consumptive form of sexuality in the internet space" (Enright, 2022, para. 24).  Yes, it is true that we do not usually ethically condemn the use of a technology on the grounds that it is responsible for disinformation, misrepresents reality, or portrays falsehoods.  But maybe we should.

Why ex-ante ethical determinations are necessary to empower silenced voices

Ethical communication is essential, as our behavior and communication have the potential to impact other people. Philosopher S. Jack Odell believed "ethical principles are necessary preconditions for the existence of a social community.  Without ethical principles it would be impossible for human beings to live in harmony and without fear, despair, hopelessness, anxiety, apprehension, and uncertainty" (Odell, 1983, p. 95).  As communication professionals, we should ask ourselves why the legitimacy of deepfakes has been analyzed more broadly in law reviews and philosophy journals than in communication studies. Deepfakes are a form of communication, and we should feel an obligation to raise awareness about the impact the growth of this type of communication is having on our society.

Indeed, we are living in extraordinary times.  Economic, social, and political structures are in upheaval, and that has impacted individual behavior as well.  It is easy to find situations where ethics have been sacrificed for the sheer desire of "getting ahead." The rapid growth of investment in deepfake technology is no exception.  Further, we can look to situations like the 2019 college entrance exam bribery scandal involving Felicity Huffman and Lori Loughlin, corporate fraud, tax evasion, sports doping scandals, and more.  David Callahan (2011), a senior fellow at Demos, believes at least one factor driving these actions is "growing economic inequality which has meant larger rewards for winners than ever before…which has increased the incentives to become a winner by whatever means necessary. The bigger the carrots, the more likely people will cheat to grasp them" (para. 8).  Beyond the desire to get ahead, others think the focus on ethics has declined as people have become more isolated.  Joseph Chuman, leader of the Ethical Culture Society of Bergen County, has offered, "if we are less ethical now than in the past…could it be because of our relationship to other people? Is it because we're more solitary?....People are more alienated and isolated than they used to be" (Beckerman, 2019).  The COVID-19 pandemic has only increased feelings of isolation. The advances in deepfake technology must likewise be viewed within this context of isolation and the attempt to stay connected to others through video and audio technologies.

Scholars and policymakers have begun to distinguish two types of solutions to the current crisis.  The first type is ex-post solutions, those that fight disinformation after it has manifested. An example of an ex-post solution is Facebook placing a warning on a post indicating that it contains potentially false information.  The second type is ex-ante solutions, which aim to prevent information pollution before it can manifest in society.  Ex-ante solutions are fundamentally rooted in ethics; they include persuading individuals that communicating true information is an obligation.  Deepfakes have such serious potential to marginalize, oppress, and silence the voices of women that we should support discipline-wide, community, and societal efforts toward ex-ante solutions.

In particular, we need to understand our role, and the role of every individual, in fighting disinformation, including deepfakes.  Multiple studies have persuasively established that humans are primarily responsible for the spread of disinformation.  Many people like to think that, since artificial intelligence, bots, or corporations and institutions are the entities causing the disinformation crisis, everything will be fine if they stay out of it or assume they could never be convinced a deepfake was real.  However, the spread of disinformation is, more often than not, due to individuals retweeting or otherwise sharing it on social media.  Individuals will be the ones responsible for the viral spread of deepfakes, and they will not be equipped to tell the difference between a real video and a deepfake.  It will simply be too hard. We already know false news stories are 70 percent more likely to be shared than true stories, and it takes true stories "six times as long to reach 1500 people as it does for false stories to reach the same number of people" (Dizikes, 2018, para. 6).  Why would this be any different with deepfakes?  Once a deepfake is out for public consumption, the chance of retracting what has been seen and embedded in people's psyches and worldviews will be slim.
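The compounding effect of a higher per-exposure share rate can be illustrated with a toy branching-process simulation. This is not a model of the MIT study cited above; the fanout, share probabilities, and seed-audience size below are invented purely to show how a modest edge in "shareability" translates into much faster spread.

```python
import random

def rounds_to_reach(p_share, target=1500, seed_audience=50,
                    fanout=10, rng_seed=1):
    """Toy cascade: each round, every current holder of a story exposes
    `fanout` followers, each of whom shares it with probability `p_share`.
    Returns the number of rounds until `target` people hold the story."""
    rng = random.Random(rng_seed)
    holders, rounds = seed_audience, 0
    while holders < target and rounds < 200:  # cap in case the story dies out
        exposures = holders * fanout
        holders += sum(1 for _ in range(exposures) if rng.random() < p_share)
        rounds += 1
    return rounds

# hypothetical share probabilities: the "false" story is 70% more shareable
true_rounds = rounds_to_reach(p_share=0.02)
false_rounds = rounds_to_reach(p_share=0.034)
```

Because each round multiplies the audience, even a small per-exposure advantage compounds: the more shareable story reaches the target in noticeably fewer rounds, which is why individual sharing decisions matter so much.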

Unfortunately, many people do not seem to register that it is not ethically permissible to share false or misleading information.  People seem especially comfortable sharing false information when it is easy or when it serves their political or personal agendas. In fact, studies have reported that at least 16% of the population has knowingly shared false information online (Dizikes, 2021, para. 5).  If ex-ante solutions to disinformation are going to have any impact, individuals will need to take an ethical stand, stop sharing false information, and hold others accountable for the false information they share or might want to share.

Shifting attention to accuracy and truth in sharing information could have a positive effect, and we should do that too, but it might not be enough.  An MIT study suggested that "the large majority of people across the ideological spectrum want to share only accurate content" (Dizikes, 2021, para. 6).  Many individuals, though, reported sharing false content and claimed it happened only due to inattention, haste, or being mistaken about the truth of the content.  These results lend credence to the possibility that a focus on ethics highlighting accountability for the information being shared might reduce the prevalence of false content on the internet.  MIT professor David Rand wrote, "getting people to think about accuracy makes them more discerning in their sharing, regardless of ideology" (Dizikes, 2021, para. 3).  Hopefully, this could also mean that if individuals better understood the dangers of deepfakes and non-consensual pornography, they would support the advocacy in this analysis.

Starting with seemingly small steps, such as shifting attention to accuracy, could make larger changes possible.  But vigilance is necessary. We have also learned that repeated exposure to false content can "lead people to judge it as less unethical to publish or share, independent of their belief in its factual accuracy" (Effron & Raj, 2019, para. 3).  In other words, the more often false information is shared and encountered, the weaker people's ethical and moral judgments against it become, and the stronger their inclination to promote it grows.  Effron and Raj (2019) found that repeat exposure to false content reduces "intentions to block or unfollow someone who shares it" and "may contribute to its spread and reduce censure of people who spread it" (para. 57).  So if we can reduce each individual's willingness to spread false or misleading content, we may also increase the pressure on others to prevent its spread and encourage more ethical behavior; if we cannot, the problem will only compound.

While this analysis has focused substantially on the fact that deepfakes are an offense to the truth, it is also the case that deepfakes are yet another artifact in a broader catalog of digital exploitation.  Doxxing, catfishing, and revenge porn are other examples.  What many of these forms of digital exploitation have in common is that they all violate fundamental ethical assumptions about respect for others and for humanity.  Those who engage in digital exploitation use others as a means, whether the end is power, profit, entertainment, or something else.  Our path forward through these and other challenges, magnified by the rapid deployment of generative artificial intelligence, requires new ethical discussions.

Ultimately, our end goal must be to take whatever action is needed to lay bare the startling ramifications of deepfakes, especially for the marginalized in our society.  Dwelling on the gendered threats deepfakes pose, interrogating the rationales for their existence, and doing all we can to protect women from having their voices further silenced by those in power is not only our ethical obligation; it is so obviously the right thing to do.  Full stop. Bring these concerns about toxic masculinity dominating the future through deepfakes into the mainstream conversation in the field of communication.  Reject the justification that the interests of profit and free speech mean the bodies of those most vulnerable can be pushed to the wayside. Be a part of the solution for all the women who have already been impacted and the countless others put at risk if we, as communication professionals, stand by and do nothing. The transition to a new use of technology will continue historical, unacceptable, needless, and despicable oppression, if we let it.

References

Barnett, R., & Rivers, C. (2022, May 23). Deep fakes: The latest anti-woman weapon. Women's eNews.

Beckerman, J. (2019, October 26). Are ethics for suckers? The US has a complicated relationship with right and wrong in 2019. USA Today.

Bok, S. (1979). Lying: Moral choice in public and private life. New York: Vintage Books.

Callahan, D. (2011, May 22).  Are Americans becoming less ethical? Zocalo Public Square.

Cambridge Dictionary. (2021). Meaning of fact.

de Ruiter, A. (2021, June 10). The distinct wrong of deepfakes. Philosophy & Technology.

Diakopoulos, N. & Johnson, D. (2020). Anticipating and addressing the ethical implications of deepfakes in the context of elections. New Media & Society.

Dizikes, P. (2021, March 17). A remedy for the spread of false news. MIT News.

Dizikes, P. (2018, March 8). Study: On Twitter, false news travels faster than true stories. MIT News.

Effron, D., & Raj, M. (2019). Misinformation and morality: Encountering fake-news headlines makes them seem less unethical to publish and share. Psychological Science.

Ekman, P.  (2022). What is a lie? Paul Ekman Group.

Enright, L. (2022, September 28).  The world of deepfake porn: How fabricated videos are being used to harm women. iNews.

EU East Stratcom Task Force. (2022, June 2). Ukraine war: Russian disinformation continues at full speed. The Brussels Times.

Fallis, D. (2021). The epistemic threat of deepfakes. Philosophy & Technology.

Hetherman, B. (2022, September 30). LAPD Captain Carranza awarded $4 million after fake nude photo shared among officers.  NBC Los Angeles.

Kant, I. (1996). Groundwork for the metaphysics of morals. In Practical philosophy (M. J. Gregor, Trans., pp. 37–108). New York: Cambridge University Press.

Korsgaard, C. M. (1986). The right to lie: Kant on dealing with evil. Philosophy & Public Affairs, 15(4), 325–349.

Merrill, J. C., & Odell, S. J. (1983). Philosophy and journalism. New York: Longman.

Nyberg, D. (1994). The varnished truth: Truth telling and deceiving in ordinary life. Chicago: University of Chicago Press.

Polantz, K., Cohen, Z., & Perez, E. (2022, June 6). Email reveals Trump campaign told fake electors in Georgia to use "complete secrecy." CNN.

Sackovich, J. (2016).  The duty to truthfulness: Why what we care about is a moral matter. Georgia State University.

Sample, I. (2020, January 13). What are deepfakes – and how can you spot them? The Guardian.

Savin, J. (2022, October 6). The dawn of the deepfake porn matrix.  Cosmopolitan.

Schulman, M. (2014). Truth or consequences. Santa Clara University.

Stroud, S. R. (2018). Immanuel Kant: Morality as universal law. In R. C. Arnett, A. M. Holba, & S. Mancino (Eds.), An encyclopedia of communication ethics (pp. 254–257). Peter Lang Publishing.

Vijayan, J. (2022, September 30).  Reshaping the threat landscape: Deepfake cyberattacks are here. Dark Reading.

Yoos, G. (2007). Reframing rhetoric: A liberal politics without dogma. London: Palgrave-Macmillan.


  • Heather Walters, J.D. is a Senior Instructor in the Department of Communication, Media, Journalism, and Film at Missouri State University. She received her Juris Doctorate degree from the University of Maryland School of Law and also earned Master's degrees in both Communication and K-12 Educational Administration. She teaches and conducts research in communication/media law & ethics, argumentation, and disinformation. She is the author of Communication Ethics: Promoting Truth, Responsibility and Civil Discourse to be released in 2024 by Cognella Publishing.
