Responsible AI & ML
Virginia Dignum & Monowar Bhuyan
Details will be added soon.
Alignment with what values?
Kalle Grill
Details will be added soon.
RAI Themis 2.0 (Game)
Mattias Brännström & Themis Dimitra Xanthopoulou
Details will be added soon.
Between legal and (non) responsible AI
Lena Enqvist
The lecture explores how law functions as a governance technology that both constrains and enables AI and autonomous systems, and why "responsible AI" often lives in the grey zone between formal legality and socially acceptable practice. Using the ideas of regulation "of", "for", and "in" technology, it highlights how legal norms refract as they are rendered into system logic and data practices, and what this means for legitimacy, accountability, and contestability in real-world deployments.
Disinformation and Propaganda in the Era of Generative AI
Nina Khairova
This presentation offers a conceptual and methodological overview of contemporary disinformation and propaganda in digital communication, with particular attention to the transformative role of generative AI. The escalation from false information to intentional persuasive propaganda is examined, together with the technical and psychological foundations that make propagandistic content effective. The presentation also introduces a general framework for AI-based misinformation detection and discusses how generative AI is reshaping the broader misinformation ecosystem: it alters the scale, speed, and adaptability of content production and dissemination, while introducing new challenges for detection, governance, and responsible AI.
Fairness in AI
Lili Jiang
Fairness is a critical consideration in responsible AI decision-making systems. This presentation introduces fairness concepts, exploring definitions, metrics, and case studies on bias throughout the AI system’s life cycle—from input data and modeling to output. We will discuss common fairness criteria, highlighting their trade-offs and real-world implications.
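One of the fairness criteria commonly discussed in this context is demographic parity, which compares positive-decision rates across groups. A minimal sketch in Python, using hypothetical toy data (the function name and data are illustrative, not material from the lecture):

```python
def demographic_parity_difference(y_pred, group):
    """Absolute difference in positive-prediction rates between groups A and B.
    A value of 0 means both groups receive positive decisions at the same rate."""
    rate_a = sum(p for p, g in zip(y_pred, group) if g == "A") / group.count("A")
    rate_b = sum(p for p, g in zip(y_pred, group) if g == "B") / group.count("B")
    return abs(rate_a - rate_b)

# Toy predictions (1 = positive decision) and group membership:
y_pred = [1, 0, 1, 1, 0, 0, 1, 0]
group = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(demographic_parity_difference(y_pred, group))  # 0.5
```

Even this tiny example shows a trade-off the abstract alludes to: enforcing equal rates can conflict with other criteria, such as equal error rates per group.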
Q0 Assessment
Tatjana Titareva
Additional presenters: Petter Ericson, Rachele Carli, Viktoriia Movchan, Bertilla Fabris
Details will be added soon.
Explainable Artificial Intelligence (XAI)
Leila Methnani
As Artificial Intelligence (AI) becomes ubiquitous in our society, it is increasingly important to enable relevant stakeholders to understand these models’ behaviour. The field of eXplainable Artificial Intelligence (XAI) aims to support human understanding of opaque AI models through the continuous development of explainability tools. This lecture will introduce students to the field by first motivating its need from various stakeholder perspectives, and then stepping through select XAI techniques that aid these stakeholder needs. Alongside the promises of XAI, this lecture will also cover the existing challenges of ensuring that explanations do not mislead and result in downstream effects that are counter to the aims of the field.
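One widely used model-agnostic technique in this family is permutation feature importance: shuffle one feature's values and measure how much a performance metric drops. A simplified sketch, with hypothetical names and toy data (the lecture's actual selection of techniques may differ):

```python
import random

def permutation_importance(model, X, y, feature, metric, n_repeats=5):
    """Average drop in a metric when one feature's values are shuffled:
    larger drops suggest the model relies more on that feature."""
    base = metric(y, [model(row) for row in X])
    drops = []
    for _ in range(n_repeats):
        shuffled = [row[:] for row in X]          # copy rows before permuting
        col = [row[feature] for row in shuffled]  # extract the feature column
        random.shuffle(col)                       # break its link to the target
        for row, value in zip(shuffled, col):
            row[feature] = value
        drops.append(base - metric(y, [model(row) for row in shuffled]))
    return sum(drops) / n_repeats

# Toy model that only ever looks at feature 0:
model = lambda row: 1 if row[0] > 0 else 0
X = [[1, 5], [-1, 2], [2, 9], [-3, 1]]
y = [1, 0, 1, 0]
accuracy = lambda truth, pred: sum(t == p for t, p in zip(truth, pred)) / len(truth)

print(permutation_importance(model, X, y, feature=0, metric=accuracy))
print(permutation_importance(model, X, y, feature=1, metric=accuracy))  # 0.0: feature 1 is ignored
```

The example also hints at the lecture's caveat about misleading explanations: importance scores describe what this model uses, not what is causally relevant in the world.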
Data privacy
Vicenc Torra
Data-driven models are built from data, and that data is in most cases sensitive. As a result, traces of sensitive information often remain in the resulting model. In this talk I will explain some challenges and solutions in data privacy: for example, how we can measure information leakage in a model by means of membership inference attacks, and how we can protect information using protection mechanisms that implement privacy models.
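One well-known privacy model of the kind mentioned above is differential privacy, often implemented via the Laplace mechanism: add calibrated noise to a query answer so that any single record's presence is hard to infer. A minimal sketch with hypothetical names and data (the talk's actual mechanisms may differ):

```python
import random

def laplace_noise(scale):
    # The difference of two i.i.d. exponential draws is Laplace-distributed.
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def private_count(records, predicate, epsilon):
    """Release a counting query under epsilon-differential privacy.
    A count has sensitivity 1 (one record changes it by at most 1),
    so Laplace noise with scale 1/epsilon suffices."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Smaller epsilon means stronger privacy and a noisier answer.
ages = [34, 29, 41, 52, 38, 27, 45]
print(private_count(ages, lambda a: a >= 40, epsilon=0.5))
```

This is the defensive counterpart to a membership inference attack: the noise bounds how much any released statistic can reveal about whether a particular record was in the data.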