Algorithmic Borders critiques the use of AI-driven technologies in high-risk settings such as border control, focusing on the EU-funded iBorderCtrl project, a system that subjects border-crossers to lie detection tests and extensive biometric screenings. Built on flawed assumptions of dishonesty, these algorithmic systems amplify racial biases and power asymmetries by framing migrants as threats and reducing them to mere data points. In the hands of institutions with a history of systemic bias, such technologies accelerate discriminatory practices behind a façade of machine neutrality. This work examines the dangers of algorithmic decision-making systems and asks how such systems can be held accountable.
algorithmic discrimination, AI, border justice