Designing and Applying Mitigation Measures for Bias and Transparency Risks in AI Systems

Company: TU Delft
Type: Graduation Assignment
Location: Curius
Sector: Bachelor & Master, Master
Required language: Dutch, English
Commences at: 24 February 2026
Finishes at: 24 February 2027

Description

Master Thesis in partnership with YAGHMA 

How can organisations effectively mitigate risks related to bias and lack of transparency in AI systems in real-world applications? While bias and transparency are widely recognised as critical ethical and societal risks of AI, organisations often struggle to move beyond high-level principles toward concrete mitigation measures that can be applied, evaluated, and monitored in practice. This challenge underscores the need for structured approaches that connect risk identification with actionable mitigation strategies across the AI lifecycle.

This master thesis, supervised in partnership with YAGHMA B.V., focuses on researching, selecting, and applying mitigation measures aimed at reducing bias and improving transparency in AI systems. The thesis combines conceptual analysis with applied case studies, linking AI risk assessment to concrete organisational interventions within one or more illustrative AI use cases.

What You’ll Do: 

• Analyse scientific and grey literature on algorithmic bias, transparency, explainability, and responsible AI to identify common sources of risk and existing mitigation strategies
• Review existing AI governance, ethical, and regulatory frameworks to understand how bias and transparency requirements are currently formulated and assessed
• Develop a structured overview or typology of mitigation measures for bias and transparency, characterising them by properties such as lifecycle stage, technical vs. organisational nature, required expertise, implementation effort, and expected impact
• Apply selected mitigation measures to one or more illustrative AI use cases (e.g. decision-support systems in public services, healthcare, or industrial contexts), assessing how these measures reduce identified risks
• Reflect on the effectiveness, limitations, and trade-offs of different mitigation approaches, including their implications for organisational processes, accountability, and ongoing monitoring

What You’ll Gain: 

• Hands-on experience with bias and transparency risk mitigation in AI systems
• Practical skills in linking AI risk assessment to concrete technical and organisational interventions
• Insight into the real-world challenges of operationalising responsible AI principles
• Strong preparation for roles in AI governance, AI risk assessment, and responsible AI consulting

This thesis combines theoretical rigour with practical application, positioning you at the intersection of AI ethics and trustworthy AI. For more information, please contact rbl@yaghma.nl.
