Bias in AI algorithms is quietly shaping the digital world and influencing the way people think and feel – it’s a new age for technology. From your “For You” page to your recommended ads, there’s no escaping the power of AI recommendations in today’s social landscape.
Whilst AI can use insights about you to recommend content you’ll love, new studies and experiences paint a much darker picture of algorithmic bias, even linking it to instances of radicalisation. In this blog, we’ll delve into what algorithmic bias is, what the different types are and how you can stop it from entering your feed.
What does algorithm bias mean?
Algorithmic bias is when algorithms produce unfair or even discriminatory outcomes. It often reflects or reinforces existing biases users have (like socioeconomic, racial and gender biases) and keeps recommending content that matches those biases.
AI systems use algorithms to discover patterns and insights in data, and an academic study found that biased systems can behave in ways that encourage harmful thinking, such as:
- Promoting hate speech, inequality or reinforcing discrimination
- “Echoing” existing beliefs and nudging users towards stronger ones
What are the different types of algorithm bias?
- Unequal content visibility – new creators and voices from underrepresented communities are shown less often, because dominant biases and more established accounts crowd them out.
- Limited exposure to different views – algorithms mainly recommend content you already like or agree with, so you see fewer diverse perspectives (see the sketch after this list).
- Amplified harmful content – algorithms boost emotionally charged content because it drives engagement, even when it is inaccurate or harmful (like misinformation or hate speech).
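To make the “limited exposure” mechanism concrete, here is a minimal, hypothetical Python sketch – not any platform’s real code – of an engagement-only ranker. The posts, topics and scores are invented, but the feedback loop is the point: every click raises the score of that topic, so the feed drifts towards what you already engage with.

```python
# Toy sketch (invented data, not any platform's actual algorithm):
# how engagement-only ranking narrows a feed over time.
from collections import Counter

posts = [
    {"id": 1, "topic": "sports"}, {"id": 2, "topic": "politics"},
    {"id": 3, "topic": "sports"}, {"id": 4, "topic": "science"},
]

# Start with a slight preference for sports content.
engagement = Counter({"sports": 2, "politics": 1, "science": 1})

def rank(posts, engagement):
    # Score each post purely by how much the user has engaged with its topic.
    return sorted(posts, key=lambda p: engagement[p["topic"]], reverse=True)

for day in range(3):
    feed = rank(posts, engagement)
    top = feed[0]                  # the user mostly sees and clicks the top item
    engagement[top["topic"]] += 1  # that click feeds straight back into the ranking
    print(f"Day {day + 1}: top topic is {top['topic']}, engagement={dict(engagement)}")
# The top slot stays locked on one topic: a simple feedback loop, or echo chamber.
```

Real recommender systems are far more complex than this, but the loop is the same reason that only engaging with content you already agree with narrows what you are shown.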
Nowadays on social media platforms such as Instagram, X and TikTok, algorithms can push malicious and radical content towards vulnerable groups, such as young people and those with existing biases. This can lead to algorithmic radicalisation.
Examples of algorithm discrimination:
- Hate speech, misinformation, harmful stereotypes
- Algorithmically amplified false claims (e.g. invented crimes, scapegoating)

How to keep algorithmic bias out of your feed
Start by personalising your content preferences and avoiding echo chambers: report harmful content, tap “do not recommend” and don’t engage with posts you don’t want to see more of. The biggest step towards keeping algorithmic bias out of your feed is simply being aware that it’s there.