Artificial intelligence is increasingly shaping our societies, economies, and daily lives. Yet the rapid development and deployment of AI technologies raise pressing questions about the risks they entail, questions that go beyond technical or regulatory dimensions and require deeper philosophical reflection.
This workshop offers a philosophical analysis of AI-related risk, exploring how traditional categories of risk, from natural to technological, can be rethought and expanded in light of the challenges posed by machine learning, robotics, and digital technologies. Starting from the EU AI Act and its risk-based approach, the discussion moves toward a more nuanced understanding of risk and its components, highlighting the limits of current classifications and pointing to the need for multi-component and multi-risk frameworks.
At the core of the session is the idea that philosophy of science and technology can provide unique tools to clarify the epistemological and ethical underpinnings of risk, thereby offering concrete contributions to mitigation strategies. By analysing how risks emerge, interact, and evolve, a philosophical lens can help uncover blind spots in regulation and design, while fostering more responsible and informed approaches to AI governance.
Participants will gain insight into how these conceptual perspectives translate into practical benefits: broadening the range of risks taken into account, supporting interdisciplinary dialogue, and enabling more effective, ethically grounded strategies for addressing AI's societal impact.
The workshop is led by Viola Schiaffonati, Associate Professor of Logic and Philosophy of Science at Politecnico di Milano, whose research lies at the intersection of philosophy, science, and computer engineering.
The event is held in English. Admission is free, but registration is required.