Details
- Publication date: 18 June 2021
- Author: Joint Research Centre
- Topic area: Evolution of AI technology
Description
The widespread use of experimental benchmarks in AI research has created competition and collaboration dynamics that are still poorly understood. In this paper, published in Nature Machine Intelligence, the authors introduce a methodology for exploring these dynamics and analysing how different entrants in these challenges, from academic groups to tech giants, behave and react to their own and others' achievements.
They analyse 25 popular AI benchmarks from Papers With Code, comprising around 2,000 result entries linked to their underlying research papers. They identify links between researchers and institutions (that is, communities) beyond standard co-authorship relations, and they explore a series of hypotheses about community behaviour, together with aggregated results on activity, performance jumps and efficiency. They characterise the dynamics of research communities at different levels of abstraction, including organisation, affiliation, trajectories, results and activity.
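The paper does not reproduce its analysis code here, so as a rough illustration only, the Python sketch below shows how two of the ideas mentioned above might be operationalised on benchmark data of this kind: flagging state-of-the-art "performance jumps" in a benchmark's result timeline, and grouping result entries into communities via shared authors. The `Entry` fields and both functions are hypothetical simplifications, not the authors' actual method.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Entry:
    """One benchmark result entry (e.g. as scraped from Papers With Code)."""
    when: date            # publication date of the result
    score: float          # benchmark metric; higher is better in this sketch
    authors: frozenset    # author names on the underlying paper

def sota_jumps(entries):
    """Return entries that improved on the best score seen so far.

    A crude stand-in for 'performance jumps': an entry counts as a jump
    when it beats the running state of the art at its publication date.
    """
    best, jumps = float("-inf"), []
    for e in sorted(entries, key=lambda e: e.when):
        if e.score > best:
            jumps.append(e)
            best = e.score
    return jumps

def communities(entries):
    """Group entries into communities: connected components of the graph
    whose nodes are entries and whose edges join entries sharing an author.
    A simplified proxy for links beyond standard co-authorship relations.
    """
    parent = list(range(len(entries)))
    def find(i):                       # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    def union(i, j):
        parent[find(i)] = find(j)
    last_seen = {}                     # author name -> last entry index
    for i, e in enumerate(entries):
        for a in e.authors:
            if a in last_seen:
                union(i, last_seen[a])
            last_seen[a] = i
    groups = {}
    for i in range(len(entries)):
        groups.setdefault(find(i), []).append(entries[i])
    return list(groups.values())
```

On real data, `communities` would typically be replaced by institution-aware community detection, but the connected-component view already lets one ask the paper's kind of question, such as whether a given community's entries include any state-of-the-art jumps.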
They find that hybrid, multi-institution and persevering communities are more likely to improve state-of-the-art performance, an achievement that becomes a watershed for many community members. Although the results cannot be extrapolated beyond this selection of popular machine learning benchmarks, the methodology can be extended to other areas of artificial intelligence or robotics, and combined with bibliometric studies.