
Description: This article examines artificial intelligence (AI) co-regulation in the EU AI Act and the critical role of standards under this regulatory strategy. It engages with the foundations of democratic legitimacy in EU standardization, emphasizing the need for reform to keep pace with the rapid evolution of AI capabilities, as recently suggested by the European Parliament. The article highlights the challenges posed by interdisciplinarity and the lack of civil society expertise in standard-setting. It critiques the inadequate representation of societal stakeholders in the development of AI standards, raising pressing questions about the risks this entails for the protection of fundamental rights, given the lack of democratic oversight and the global composition of standard-developing organizations (SDOs).

The article scrutinizes how, under the AI Act, technical standards will define AI risks and mitigation measures, and questions whether technical experts are adequately equipped to standardize thresholds of acceptable residual risk in different high-risk contexts. More specifically, it delves into the complexities of regulating AI, drawing attention to the multi-dimensional nature of identifying risks in AI systems and the value-laden nature of the task. It questions the potential creation of a typology of AI risks and highlights the need for a nuanced, inclusive, and context-specific approach to risk identification and mitigation.

Consequently, we underscore the imperative of continuous stakeholder involvement in developing, monitoring, and refining the technical rules and standards for high-risk AI applications. We also emphasize the need for rigorous training, certification, and surveillance measures to ensure the enforcement of fundamental rights in the face of AI developments. Finally, we recommend greater transparency and inclusivity in risk identification methodologies, urging approaches that involve stakeholders and require a diverse skill set for risk assessment. At the same time, we draw attention to the diversity within the European Union and the consequent need for localized risk assessments that consider national contexts, languages, institutions, and cultures. In conclusion, the article argues that co-regulation under the AI Act necessitates a thorough re-examination and reform of standard-setting processes, to ensure a democratically legitimate, interdisciplinary, stakeholder-inclusive, and responsive approach to AI regulation, which can safeguard fundamental rights and anticipate, identify, and mitigate a broad spectrum of AI risks.
