
FUNDAMENTAL RIGHTS AND ARTIFICIAL INTELLIGENCE ANALYTICAL TOOLS

In May 2023, the European Parliament's Internal Market and Civil Liberties Committees jointly adopted a draft negotiating mandate on the first-ever rules for Artificial Intelligence (Regulatory framework proposal on artificial intelligence - the AI Act), with 84 votes in favour, 7 against and 12 abstentions. In the AI Act, the European Commission adopts a classification of AI systems according to a 'risk-based approach', with different requirements and obligations for each risk level.


In the AI Act (which will now have to be negotiated with the European Commission and the Council), MEPs propose to ban (inter alia) the following biometric applications based on AI systems or technology:

  • “Real-time” remote biometric identification systems in publicly accessible spaces;

  • “Post” remote biometric identification systems, with the only exception of law enforcement for the prosecution of serious crimes and only after judicial authorization;

  • Biometric categorisation systems using sensitive characteristics (e.g. gender, race, ethnicity, citizenship status, religion, political orientation);

  • Predictive policing systems (based on profiling, location or past criminal behaviour);

  • Emotion recognition systems in law enforcement, border management, workplace, and educational institutions; and

  • Indiscriminate scraping of biometric data from social media or CCTV footage to create facial recognition databases (violating human rights and right to privacy).

Facial Recognition Technology (FRT) is rather different from "old-fashioned" face biometrics. "In the mid-20th century, facial recognition was limited to characteristics related to the eyes, ears, nose, mouth, jawline, and cheek structure (…) Facial recognition technology today uses complex mathematical representations and matching processes to compare facial features to several data sets using random (feature-based) and photometric (view-based) features. It does this by comparing structure, shape, and proportions of the face; distance between the eyes, nose, mouth, and jaw; upper outlines of the eye sockets; the sides of the mouth; the location of the nose and eyes; and the area surrounding the cheek bones" (Rzemyk, 2017). The application of AI algorithms has made it possible to detect and recognize a wide range of information. Current FRT can perform 12 different tasks (Hupont, Tolan, Gunes, & Gomez, 2022): "(1) face detection; (2) face tracking; (3) facial landmark extraction; (4) face spoofing detection; (5) face identification; (6) face verification; (7) kinship verification; (8) facial expression recognition; (9) Action Unit detection; (10) automatic lip reading; (11) facial attribute estimation; and (12) facial attribute manipulation".
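To make the "complex mathematical representations and matching processes" mentioned above concrete, the sketch below shows the core of a modern embedding-based face matcher in Python: each face image is reduced to a fixed-length feature vector by a trained network, and a match is declared when the cosine similarity of two vectors clears an operating threshold. This is a minimal sketch, not any specific product's implementation: the embedding network is out of scope, so random vectors stand in for real embeddings, and the 0.6 threshold is purely illustrative.

    import numpy as np

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        # Cosine similarity between two face-embedding vectors.
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def faces_match(probe: np.ndarray, reference: np.ndarray,
                    threshold: float = 0.6) -> bool:
        # Declare a match when similarity clears the operating threshold.
        # Raising the threshold reduces false accepts but increases false
        # rejects; the value 0.6 here is illustrative, not normative.
        return cosine_similarity(probe, reference) >= threshold

    # In a deployed system these vectors would come from a trained
    # face-embedding network; random vectors are used as stand-ins here.
    rng = np.random.default_rng(seed=42)
    probe_embedding = rng.normal(size=128)
    reference_embedding = rng.normal(size=128)
    print(faces_match(probe_embedding, reference_embedding))

In practice the threshold is tuned on evaluation data to hit a target trade-off between false accepts and false rejects, which matters directly for the fundamental rights discussion below.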


SOTERIA will use Face Detection (FD), Face Verification (FV), and Face Spoofing Detection (FSD), which represent the three main computational tasks involved in the remote, real-time authentication process entailed by the SOTERIA design (e.g., a voter who has to cast his or her e-vote, a patient who has to access his or her e-medical record, a student who has to register to be admitted to an online examination).
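As an illustration of how these three tasks could chain together, here is a minimal Python sketch of such a remote authentication flow: detect a face in the captured frame, check that it is a live face rather than a presentation attack (printed photo, screen replay, mask), and only then verify it one-to-one against the user's enrolled template. All three task functions are hypothetical stubs, and the ordering (liveness check before verification) is a common design choice assumed here; SOTERIA's actual models and interfaces are not shown.

    from dataclasses import dataclass

    @dataclass
    class AuthResult:
        authenticated: bool
        reason: str

    # The three task implementations below are placeholder stubs; a real
    # deployment would plug in trained FD, FSD and FV models.
    def detect_face(frame):
        # FD: return a face region, or None if no face is present.
        return {"box": (80, 60, 220, 220)}  # stub: pretend a face was found

    def is_live(face_region) -> bool:
        # FSD: reject presentation attacks (photos, replays, masks).
        return True  # stub

    def verify(face_region, enrolled_template) -> bool:
        # FV: one-to-one comparison against the user's enrolled template.
        return True  # stub

    def authenticate(frame, enrolled_template) -> AuthResult:
        face = detect_face(frame)                    # task 1: FD
        if face is None:
            return AuthResult(False, "no face detected")
        if not is_live(face):                        # task 2: FSD
            return AuthResult(False, "presentation attack suspected")
        if not verify(face, enrolled_template):      # task 3: FV
            return AuthResult(False, "face does not match enrolled template")
        return AuthResult(True, "user authenticated")

    print(authenticate(frame=None, enrolled_template=None))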

Overall, the combination of the three computational tasks used by SOTERIA (FD+FV+FSD) is low risk. Face Detection and Face Verification are usually considered to be "minimal risk". Face Spoofing Detection can be either "limited risk" or "high risk", depending on the technology used. On the whole, then, the fundamental rights risks entailed by AI technology are marginal in SOTERIA, but they should not be underestimated, especially considering that a false rejection of an authorised user can be a major event in the life of a vulnerable person.
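One way to reason about the combined risk level is sketched below, under the explicit assumption (ours, for illustration, not a rule taken from the AI Act) that a pipeline is classified at the highest tier of any of its components; this conservative aggregation makes the dependence on the FSD technology visible.

    from enum import IntEnum

    class RiskTier(IntEnum):
        MINIMAL = 0
        LIMITED = 1
        HIGH = 2
        UNACCEPTABLE = 3

    # Tiers as discussed above: FD and FV are usually "minimal risk",
    # while FSD ranges from LIMITED to HIGH depending on the technology.
    component_tiers = {
        "face_detection": RiskTier.MINIMAL,
        "face_verification": RiskTier.MINIMAL,
        "face_spoofing_detection": RiskTier.LIMITED,  # may be HIGH
    }

    def pipeline_tier(tiers):
        # Assumption (not an AI Act rule): the pipeline inherits the
        # highest risk tier among its components.
        return max(tiers)

    print(pipeline_tier(component_tiers.values()).name)  # -> LIMITED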

The European Commission has followed a risk-based approach in the AI regulatory framework outlined in its White Paper on Artificial Intelligence - A European approach to excellence and trust, stating that "[a] risk-based approach is important to help ensure that the regulatory intervention is proportionate."

Many other governments, international institutions, standards organisations, businesses, academics and civil society institutions have based their approach to AI governance on "risk". The following table, produced by the Policy Prototyping Program co-designed and facilitated by Facebook and the consulting firm Considerati, provides a comprehensive list of the main risk-based approaches to AI governance.

OECD Catalogue of Tools & Metrics for Trustworthy AI

The OECD AI Policy Observatory has recently created an exhaustive, searchable online Catalogue of Tools & Metrics for Trustworthy AI. The catalogue is a platform where AI practitioners from all over the world can share and compare tools and build upon each other's efforts to create global best practices and speed up the process of implementing the OECD AI Principles.


