Bias in AI is not (usually) malicious. It is statistical. If you train a hiring AI on 10 years of resumes from a male-dominated industry, the AI learns the pattern: "Men are good candidates." Amazon famously scrapped a recruitment tool because it automatically penalized resumes containing the word "women's" (e.g., "Women's Chess Club").
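To see how this happens mechanically, here is a minimal sketch, not Amazon's actual system: a toy "model" that just counts hire rates per resume keyword. The resumes and keywords are invented for illustration. Because the historical sample is skewed, the word "women's" ends up with a low hire rate, and the model penalizes any resume containing it.

```python
from collections import defaultdict

# Toy historical data: (resume keywords, was_hired).
# The sample is skewed: past hires came from a male-dominated pool,
# so "women's" rarely co-occurs with a hire.
history = [
    ({"engineering", "chess"}, True),
    ({"engineering", "football"}, True),
    ({"engineering"}, True),
    ({"women's", "chess"}, False),
    ({"women's", "engineering"}, False),
    ({"chess"}, False),
]

# "Training" is just counting: estimate P(hired | keyword present).
hired_count = defaultdict(int)
total_count = defaultdict(int)
for keywords, hired in history:
    for kw in keywords:
        total_count[kw] += 1
        hired_count[kw] += hired  # bool counts as 0 or 1

def score(keywords):
    """Score a resume as the average hire rate of its known keywords."""
    rates = [hired_count[kw] / total_count[kw]
             for kw in keywords if kw in total_count]
    return sum(rates) / len(rates) if rates else 0.0

# Adding "women's" lowers the score purely because of the skewed history;
# nothing in the code mentions gender at all.
print(score({"engineering", "chess"}))             # higher
print(score({"engineering", "chess", "women's"}))  # lower
```

No rule here says "penalize women"; the bias is an emergent property of the training data, which is exactly why it is hard to spot by reading the code.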

Facial Recognition Failures

Early facial recognition systems were trained mostly on white faces.
As a result, they had high error rates for people of color.
This led to wrongful arrests (false positives) when police started using these tools.
In 2020, IBM exited the facial recognition business entirely, while Amazon and Microsoft paused sales of the technology to police, citing these ethical concerns.

1. The EU AI Act

The first major law regulating AI.
It categorizes AI by risk:
- Unacceptable Risk: Social scoring (banned outright).
- High Risk: Medical devices, recruitment, border control (strict requirements).
- Limited Risk: Chatbots (must disclose they are AI).