AI Watch
  • News article
  • 30 September 2024
  • Joint Research Centre

First steps towards evaluating fairness of perception systems in autonomous driving

A new study presents over 140,000 annotations of protected attributes in widely used visual datasets for perception, together with a thorough evaluation of dataset biases and of the annotation process itself.

The study contributes to the Joint Research Centre’s (JRC) efforts to advance towards trustworthy autonomous vehicles (AVs). Led by the HUMAINT team and conducted in collaboration with researchers at the University of Alcalá, the study has been published in the Journal of Big Data. It presents a novel set of annotations of protected attributes for persons and vehicles appearing in the most widely used visual datasets for vision-based perception in AVs. This work addresses a critical gap in the field, as current design and evaluation methods for AVs often overlook the ethical principle of fairness.

The publication focuses on the annotation of intrinsic attributes of persons, such as age, sex/gender, skin tone, demographic group and means of transport, and of vehicles, such as vehicle type, colour and, for cars, car type. Perception systems in AVs can behave differently depending on these attributes, producing different levels of safety for different people.
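To illustrate what such per-instance annotations might look like, here is a minimal sketch in Python; the field names and value sets are assumptions for illustration only, not the study's actual schema:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch of per-instance annotation records; field names and
# example values are illustrative assumptions, not the study's schema.

@dataclass
class PersonAnnotation:
    instance_id: str
    age_group: str           # e.g. "child", "adult", "senior"
    sex_gender: str          # as visually annotated, e.g. "male", "female", "unknown"
    skin_tone: str           # e.g. a coarse scale such as "light" / "dark"
    demographic_group: str
    means_of_transport: str  # e.g. "pedestrian", "cyclist", "wheelchair user"

@dataclass
class VehicleAnnotation:
    instance_id: str
    vehicle_type: str        # e.g. "car", "truck", "bus", "motorcycle"
    colour: str
    car_type: Optional[str] = None  # only for cars, e.g. "sedan", "SUV"
```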

The researchers developed a specialised annotation tool and methodology to minimise common errors, biases and discrepancies among annotators. Applying them to six different datasets resulted in the annotation of over 90,000 persons and 50,000 vehicles. The findings highlight significant biases in the datasets, including the underrepresentation of certain demographic groups, such as children, wheelchair users and personal mobility device users. The study also discusses the complexity of the annotation process, which relies on visual stereotypes that are not always consistent with the real data.

Figure: annotation tool for protected attributes of persons and vehicles.
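A minimal sketch of how such representation gaps can be surfaced from annotations like the hypothetical records above (the 5% threshold is an arbitrary illustration, not a figure from the study):

```python
from collections import Counter

def group_shares(annotations, attribute):
    """Return each group's share of the annotations for one attribute."""
    counts = Counter(getattr(a, attribute) for a in annotations)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

def flag_underrepresented(shares, threshold=0.05):
    """Flag groups whose share falls below an (arbitrary) threshold."""
    return [group for group, share in shares.items() if share < threshold]

# Example usage with the hypothetical PersonAnnotation records:
# shares = group_shares(person_annotations, "means_of_transport")
# flag_underrepresented(shares)  # might return e.g. ["wheelchair user"]
```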

The study's results have important implications for the development of fair and trustworthy AVs. The researchers emphasise that identifying bias in training datasets is a crucial step towards addressing algorithmic fairness. The publication contributes to the integration of fairness metrics into future evaluations of perception and prediction systems in AVs. The JRC's work in this area aims to promote AVs that are not only technologically advanced but also socially responsible and respectful of human diversity. By advancing the state of the art in fairness evaluation, the JRC is helping to build trust in the development and deployment of AVs.

Figure: distribution of attributes across the persons datasets.
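For instance, one simple way such fairness metrics can work (a minimal sketch under assumed inputs, not the study's evaluation protocol) is to compare a detector's recall across annotated groups and report the largest gap:

```python
def per_group_recall(results):
    """results: list of (group, detected) pairs, where detected is True/False.
    Returns recall per group and the largest gap between any two groups."""
    hits, totals = {}, {}
    for group, detected in results:
        totals[group] = totals.get(group, 0) + 1
        hits[group] = hits.get(group, 0) + int(detected)
    recall = {g: hits[g] / totals[g] for g in totals}
    gap = max(recall.values()) - min(recall.values())
    return recall, gap

# Toy example: pedestrian detections grouped by annotated age group.
results = [("adult", True)] * 90 + [("adult", False)] * 10 \
        + [("child", True)] * 70 + [("child", False)] * 30
recall, gap = per_group_recall(results)
print(recall)  # {'adult': 0.9, 'child': 0.7}
print(gap)     # ~0.2 -> a large gap signals unequal safety across groups
```

Equal recall across groups is only one possible criterion, but the point stands regardless of the metric chosen: per-group comparisons of this kind only become possible once the protected attributes have been annotated.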

Access the full paper here

Details

Publication date: 30 September 2024
Author: Joint Research Centre
Contact: Fernandez Llorca, David