In the European, human-centric approach, artificial intelligence (AI) is considered a tool operating in the service of humanity and of the public good, aiming to increase individual and collective well-being. People can confidently and fully reap the benefits of a technology only if they can trust it; AI's trustworthiness must therefore be ensured.
To create an environment of trust for the successful development, deployment and use of AI, the European Commission has encouraged stakeholders to implement the seven key requirements for trustworthy AI defined by its High-Level Expert Group on AI and outlined in the figure.
In 2021 the Commission published its proposal for a regulatory framework on Artificial Intelligence, the AI Act, which takes a risk-based approach grounded in a subset of these seven requirements. The regulation defines a set of prohibited practices and a set of high-risk scenarios in which systems must fulfil specific requirements before they can be deployed in Europe.
AI Watch carries out research on the opportunities, risks and challenges that AI systems bring when applied in domains with a strong social impact, such as: algorithmic decision making in criminal justice, the role of AI in medicine and healthcare, child-robot interaction, facial analysis, autonomous vehicles, and the impact of AI on music and culture.
From these analyses, AI Watch develops methodologies for trustworthy AI, with a focus on aspects such as:
- evaluation of bias and fairness with respect to gender or age in data, algorithms, and research communities
- ensuring transparency of systems
- implementation of relevant human oversight strategies
This work is undertaken in collaboration with the HUMAINT project, which engages in interdisciplinary research involving knowledge and methodologies from engineering and computer science, cognitive science, and economics.
Scientific report in Nature by Isabelle Hupont, Songül Tolan, Hatice Gunes & Emilia Gómez
This report proposes a set of science-for-policy future directions for AI and children's rights.
All our publications on Trustworthy Artificial Intelligence in Automated/Autonomous Driving
Trustworthiness requirements for AVs show heterogeneous levels of maturity and raise new research and development challenges across different areas.
This report aims to advance towards a general framework on Trustworthy AI for the specific domain of Autonomous Vehicles (AVs).
This paper provides a comprehensive view of the different perspectives involved in the evaluation of Recommender Systems for children.