ExamAI – Testing and Auditing of AI systems

Developing recommendations for action to improve the auditability and controllability of AI-based systems based on use cases in the fields of collaborative industrial production and individual work careers

In the consortium project “ExamAI – Testing and Auditing of AI systems”, led by the German Informatics Society (Berlin office), an interdisciplinary team of computer scientists, software engineers, legal experts and political scientists will examine two concrete application areas in its investigation of meaningful auditing and testing for AI systems. The focus lies on examining the effects of AI technologies in the areas of human-machine cooperation in industrial production and of AI systems in personnel and talent management as well as in recruiting. The project will demonstrate how effective auditing and testing of AI systems may be realised in the future and how the Federal Government of Germany (Bundesregierung) could implement such procedures.

The results of the project contribute to the work of the institutions envisaged in the AI strategy of the Federal Government. The 20-month project will develop recommendations for action concerning the legal and technical roles of these institutions, based on a multi-stage process that draws on legal and socio-technical requirements for AI systems as well as applicable and currently emerging standards. It will also closely analyse norms and guidelines for implementing AI technologies, along with successful testing, control and certification practices.

The central research questions are: What could meaningful control and test procedures for AI systems look like in the two areas of application? Which institutional requirements must the Federal Government provide? In this context, we will also assess whether the Machinery Directive 2006/42/EC addresses the topic of Artificial Intelligence adequately or whether further investigation is needed.

In addition to a theoretical analysis of existing legal and technical standards, the two areas of application, collaborative production processes and individual work careers, will be studied to answer the project's research questions. On the basis of these areas, deficits and best practices of existing test and auditing procedures will be examined through representative examples. Finally, the working group will develop general recommendations for the verification of system characteristics and concrete requirements for the creation of fair, responsible, and transparent AI systems.

The investigated use cases may not share many similarities at first glance, beyond the facts that they will be of great importance for employees in the respective fields of work in the future and that AI systems will be used in them to an increasing extent. Yet the aim of the project is to identify commonalities (e.g. functional uncertainties) in order to draw conclusions about each of the areas of interest, to establish cross-disciplinary criteria for AI auditing and testing, to generate recommendations for the responsible institutions (of the Federal Government), and to publish generalisable findings.

The project is funded by the Federal Ministry of Labour and Social Affairs' Policy Lab Digital, Work & Society as part of the Observatory for Artificial Intelligence in Work and Society (AI Observatory) project. The Policy Lab Digital, Work & Society is a new, interdisciplinary and agile organisational unit within the “Digitalisation and the Labour Market” department at the Federal Ministry of Labour and Social Affairs; it observes technological, economic and social trends and helps to shape change together with academia, business and social partners.