In the European, human-centric approach, artificial intelligence (AI) is considered a tool operating in the service of humanity and the public good, aiming to increase individual and collective well-being. People will confidently and fully reap the benefits of a technology only if they can trust it; therefore, AI's trustworthiness must be ensured.
To create an environment of trust for the successful development, deployment and use of AI, the European Commission encouraged stakeholders to implement the seven key requirements for trustworthy AI defined by its High-Level Expert Group on AI and outlined in the figure.
In 2021 the Commission published its proposal for a regulatory framework on artificial intelligence, the AI Act, which adopts a risk-based approach grounded in a subset of these seven requirements. The regulation defines a set of prohibited practices and a set of high-risk scenarios in which systems must fulfil specific requirements before they can be deployed in Europe.
AI Watch carries out research on the opportunities, risks and challenges that AI systems bring when deployed in applications with a strong social impact, such as algorithmic decision making in criminal justice, AI in medicine and healthcare, child-robot interaction, facial analysis, autonomous vehicles, and the impact of AI on music and culture.
From these analyses, AI Watch develops methodologies for trustworthy AI, with a focus on aspects such as:
- evaluation of bias and fairness with respect to gender and age in data, algorithms, and research communities
- ensuring transparency of systems
- implementation of relevant human oversight strategies
This work is undertaken in collaboration with the HUMAINT project, which engages in interdisciplinary research involving knowledge and methodologies from engineering and computer science, cognitive science, and economics.
The landscape of facial processing applications in the context of the European AI Act and the development of trustworthy systems
Article in Scientific Reports (Nature) by Isabelle Hupont, Songül Tolan, Hatice Gunes & Emilia Gómez
Artificial Intelligence and the Rights of the Child: Towards an Integrated Agenda for Research and Policy
This report proposes a set of science-for-policy directions for future research and policy on AI and children's rights.
All our publications on Trustworthy Artificial Intelligence in Automated/Autonomous Driving
Artificial Intelligence in Autonomous Vehicles: towards trustworthy systems
Trustworthiness requirements for AVs show heterogeneous levels of maturity and bring new research and development challenges in different areas.
Trustworthy Autonomous Vehicles
This report aims to advance towards a general framework on Trustworthy AI for the specific domain of Autonomous Vehicles (AVs).
Evaluating recommender systems with and for children: towards a multi-perspective framework
This paper provides a comprehensive view of the different perspectives involved in evaluating recommender systems for children.