Practical AI Ethics course notes - Disinformation


Decided to share my notes on the excellent Practical AI Ethics course, as I believe it's essential information for anyone - particularly for software engineers & data scientists, the "people in the room" during the making & training of AI models - in the hope of leading to better ethical considerations. Lesson 1 is on disinformation, and oh my, this is scary already...

Lesson 1 - Disinformation

What do you define as ethics?

There's no single definition of ethics, but this one shared in the course I found to be the most encompassing - extracted from

Ethics is two things.

First, ethics refers to well-founded standards of right and wrong that prescribe what humans ought to do, usually in terms of rights, obligations, benefits to society, fairness, or specific virtues. Ethics, for example, refers to those standards that impose the reasonable obligations to refrain from rape, stealing, murder, assault, slander, and fraud. Ethical standards also include those that enjoin virtues of honesty, compassion, and loyalty. And, ethical standards include standards relating to rights, such as the right to life, the right to freedom from injury, and the right to privacy. Such standards are adequate standards of ethics because they are supported by consistent and well-founded reasons.

Secondly, ethics refers to the study and development of one's ethical standards. As mentioned above, feelings, laws, and social norms can deviate from what is ethical. So it is necessary to constantly examine one's standards to ensure that they are reasonable and well-founded. Ethics also means, then, the continuous effort of studying our own moral beliefs and our moral conduct, and striving to ensure that we, and the institutions we help to shape, live up to standards that are reasonable and solidly-based.

What is disinformation?

Orchestrated campaign, courtesy of

Harm of disinformation

  • Misleads people - especially propaganda from a government organization aimed at a rival power or the media
  • Discredits legitimate literature and resources
  • On a deeper level, disintegrates democracy

Types of disinformation

  • Orchestrated campaigns of manipulation
  • Memes and media crafted to push a specific agenda
  • Flooding listeners with so much information that truthful information is drowned out
    • Narrative laundering - building a story on top of real documents
    • Mixing faked documents into a large dump of real documents


I wonder about the "narrative laundering" effect in XR as the technology gets more immersive, and how increased immersion paired with misinformation/disinformation could produce even stronger conviction in false and potentially harmful narratives.

How do tech platforms incentivize & promote disinformation?

  • What gets shared is driven not by credibility but by emotive and political preference (the truth has no reach)
  • Social media platforms, by design, tilt users away from considering accuracy
    • Encourage users to rapidly scroll & spontaneously engage
    • Immediate quantified feedback (# of likes)
    • Producing addiction is a platform goal, and the content that does so often leans away from the boring truth
  • Fundamental flaws in tech that favour disinformation
    • The core business model is built around manipulating people's behaviour & monopolising their time
    • Incentives usually focus on short-term metrics
    • Feedback loops can occur when your model controls the next round of data it receives - the returned data quickly becomes skewed by the software itself
  • How much of this is the algorithm vs. the platform?
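That last feedback-loop point can be sketched in a few lines of Python. This is a toy simulation with invented numbers, not any real platform's algorithm: two topics are equally appealing to users, but a tiny initial bias in the click data, fed back through a greedy recommender, snowballs into near-total dominance of one topic.

```python
import random

random.seed(0)

# Hypothetical setup: users like both topics equally, but topic 0
# starts with one extra observed click.
clicks = {0: 11, 1: 10}          # slightly biased starting data
TRUE_APPEAL = {0: 0.5, 1: 0.5}   # users actually like both the same

for _ in range(1000):
    # The "model" greedily recommends whichever topic looks more popular.
    recommended = max(clicks, key=clicks.get)
    # Users can only click what was recommended, so the model
    # controls the next round of data it will learn from.
    if random.random() < TRUE_APPEAL[recommended]:
        clicks[recommended] += 1

share_0 = clicks[0] / (clicks[0] + clicks[1])
print(f"observed click share of topic 0: {share_0:.2f}")
```

Even though both topics are equally liked, topic 1 never gets recommended again after round one, so the observed data "proves" topic 0 is vastly more popular - the software has skewed its own training signal.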


YouTube: I can relate to this heavily from my experience with YouTube, which I spend a lot of time on; the large majority of the recommendations my partner and I get are really one-sided, and don't correlate with the content we usually watch.

DeepFakes in the wild

GAN-generated profiles -


More than a million pro-repeal net neutrality comments were likely faked

AI-generated comments on military budgeting in the US

We have the technology to totally fill Twitter, email and the web up with reasonable sounding, context appropriate prose, which would drown out all other speech and be impossible to filter

Jeremy Howard - co-founder

How do we circumvent the disinformation bias?


  • “Treat disinformation as a cybersecurity problem” sounds like the right way to go conceptually, but probably even more than with traditional cybersecurity challenges, the fight is endless, in the sense that new malware is programmed to evade existing defences; we can only assume that AI models would likewise be trained to confuse whatever verification tools are out there