
When it comes to the ethics around disinformation, we can apply a few main moral lenses to help us analyse complex situations. See the examples below in the context of a case study:
Social media platforms (e.g., Facebook, YouTube, and Twitter / X) combine AI and human efforts for moderation of their platforms.
Facebook’s AI applies a “remove, reduce, and inform” strategy to manage content that breaches its terms of service. Techniques such as natural language processing (NLP) and Panoptic FPN (an image-segmentation model) are used to identify content involving nudity, violence, child sexual abuse material, and terrorism, which is automatically removed.
Meanwhile, content categorized as misinformation might be reduced in visibility, and content deemed violent or sensitive is flagged to inform viewers.
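The remove/reduce/inform policy described above is, at its core, a mapping from classifier output to a moderation action, ordered by severity. The sketch below illustrates that decision logic in Python; the label names and the `moderate` function are hypothetical simplifications for illustration, not Facebook’s actual implementation, which involves many more fine-grained classes and human review.

```python
from enum import Enum

class Action(Enum):
    REMOVE = "remove"   # delete the content outright
    REDUCE = "reduce"   # lower its ranking so fewer users see it
    INFORM = "inform"   # keep it, but flag it with a warning for viewers
    ALLOW = "allow"     # no intervention

# Hypothetical label sets for illustration only.
REMOVE_LABELS = {"nudity", "extreme_violence", "child_exploitation", "terrorism"}
REDUCE_LABELS = {"misinformation"}
INFORM_LABELS = {"violent_or_sensitive"}

def moderate(labels: set[str]) -> Action:
    """Map classifier output labels to a single moderation action.

    Severity order: remove > reduce > inform > allow, so the most
    serious applicable action wins.
    """
    if labels & REMOVE_LABELS:
        return Action.REMOVE
    if labels & REDUCE_LABELS:
        return Action.REDUCE
    if labels & INFORM_LABELS:
        return Action.INFORM
    return Action.ALLOW
```

For example, a post flagged as both `misinformation` and `terrorism` would be removed, because removal outranks reduction in the severity ordering.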
Despite these measures aiming to comply with human rights standards, the use of AI in content moderation has led to errors and controversy, particularly with the wrongful removal of Palestinian content, highlighting concerns about digital discrimination and the accuracy of AI systems.
In the MENA region, a notable case study involves the alleged censorship and removal of Palestinian content by major social media platforms using AI-driven moderation tools.
Reports suggest that AI algorithms disproportionately flag and remove Palestinian posts, especially during periods of heightened conflict. Activists and digital rights organizations argue that these AI systems may have been trained on biased data that fails to adequately distinguish legitimate political expression from prohibited content, resulting in unfair censorship.
This has sparked significant debate about the neutrality and effectiveness of AI in content moderation, calling for more transparent and accountable moderation practices that respect users’ rights to free expression while effectively identifying and addressing actual policy violations.
The practice of virtue ethics involves a continuous balancing act. There is no comprehensive handbook for navigating every moral dilemma that calls for virtuous behavior; applying the theory is different in every situation, and no two cases are the same. While honesty and justice may guide social media companies in content moderation, patience and temperance could be more fitting in a different scenario. How can we determine which virtues to prioritize in a specific case? And how should we handle conflicting virtues that point to different solutions?
(adapted with the permission of the Markkula Center for Applied Ethics, www.scu.edu/ethics, https://www.scu.edu/ethics/focus-areas/journalism-and-media-ethics/resources/the-ethics-of-social-media-decision-making-on-handling-the-new-york-post-october-surprise/)

Time to play the game! How good are you at spreading disinformation?