From reordering your favourite takeaway to choosing which series to watch next, we use artificial intelligence all the time.
But what about when AI is used for decisions that actually matter – like whether a person with disabilities gets the support they need to live independently, or who the police predict will commit a crime?
With people’s rights and freedom on the line, the stakes are much higher – especially because AI can discriminate.
To unpack all of this, we’re joined by Griff Ferris, Senior Legal and Policy Officer at the campaign organisation Fair Trials, to discuss the extent to which AI can discriminate, the impact it has on people who are already marginalised, and what we can do about it.
Mentioned in this episode:
- Fair Trials’ predictive policing quiz
- Fair Trials’ report Automating Injustice
- The HART algorithm used by Durham Police
- The Government’s AI regulation white paper
- Public Law Project’s Tracking Automated Government register