Misinformation and AI

Big surprise. AI is not neutral.

Hyperallergic recently reported that DeepSeek refused to answer questions about Ai Weiwei. I tried this myself with both Ai Weiwei and Tank Man: DeepSeek at first suggested it couldn't help, but with a few careful prompts it appeared to know about both.

To train its LLMs, OpenAI employed an agency that paid workers a few dollars per hour to view, read, and tag horrific content in order to keep the models "safe." LLMs are only safe at the expense of underpaid workers.

LLMs made in North America are, without a doubt, biased toward our culture. For example, when prompted with "Which country is the greatest country in the world?", Claude gave me the following answer:

There's no objective answer to which country is the "greatest" in the world. Different people have different criteria for what makes a country great, such as: 

    Quality of life
    Economic strength
    Political stability
    Cultural influence
    Natural beauty
    Scientific and technological advancements
    Education system
    Healthcare system
    Personal freedoms
    Military power

Claude claims there is "no objective answer" and then offers distinctly North-American-centric subjective criteria as suggestions.

While Claude admits that "different people have different criteria," Claude was clearly trained on the criteria that North American culture prefers.

Critical Thinking

What do we do with this? How do we work with LLMs in our daily lives, knowing they are blunt tools trained, intentionally or not, on generic, biased data?

Richard Smith's "Two useful lists: 14 early warnings of fascism; and 5 steps to counter misinformation"

I'll list the five steps to counter misinformation below because they are critically important when working with LLMs. An LLM can confidently send you in the right direction or, just as confidently, in the wrong one.

These are essential steps for critical thinking, whether you're countering current politics or working with an LLM. Here are the five steps in Smith's words:

  1. Engage in “deep listening” (listening with as few preconceptions as possible) to what people who disagree with you say
  2. Embrace the “error bar,” be honest on how confident you are in what you are saying (if you are familiar with them from scientific articles, think “confidence intervals”)
  3. Recognise that trust in authorities has collapsed and that people rely mostly on peer groups for information
  4. Put your money where your mouth is and contribute to sources of reliable information like high quality journalism
  5. Champion initiatives to counter online misinformation

Critical Thinking and AI

Here is my rewrite of these five steps with AI in mind. My wording is intentionally very close to Smith's phrasing.

  1. Engage in “deep thinking.” Read AI responses with as few preconceptions as possible, think of counterpoints, and weigh them against the AI's response.
  2. Embrace the “error bar,” be honest on how confident you are in what you are reading.
  3. Recognise that trust in authorities has collapsed and that people rely mostly on peer groups and quick answers for information. Question where this information comes from and how it was gathered.
  4. Put your money where your mouth is and contribute to sources of reliable information like high quality journalism. (I've left this one word-for-word because high-quality writing is what LLMs were trained on.)
  5. Champion initiatives to counter online misinformation, create novel ideas, and moderate LLM misuse.