Developing a Practical AI Risk Assessment Taxonomy for Organisational Decision-Making

Company
TU Delft
Type
Graduation Assignment
Location
Curius
Sector
Bachelor & Master, Master
Required language
Dutch, English
Commences at
24 February 2026
Finishes at
24 February 2027

Description

Master Thesis in partnership with YAGHMA 

How can organisations systematically assess and prioritise AI-related risks in a way that is both methodologically rigorous and practically applicable? While numerous ethical guidelines, regulatory principles, and high-level AI governance frameworks exist, organisations often struggle to translate these into concrete risk assessment processes that support real-world decision-making across the AI lifecycle. This gap highlights the need for a structured taxonomy that characterises AI risks in a consistent, operational, and scalable manner.

This master thesis, supervised in partnership with YAGHMA B.V., focuses on the development of a practical AI risk assessment taxonomy that supports qualitative AI impact assessments in applied contexts. The taxonomy will classify AI risks across selected dimensions—such as ethical, social, governance, and regulatory risks—while explicitly linking them to organisational contexts and stages of the AI lifecycle.

What You’ll Do: 

• Analyse scientific and grey literature on AI risk, AI impact assessment, and AI governance to identify commonly recognised AI risk categories and assessment gaps

• Review existing AI risk and governance frameworks (e.g. lifecycle-based, principle-based, and compliance-oriented approaches) to understand how risks are currently structured and operationalised

• Design a structured AI risk taxonomy that characterises AI risks using clear properties, such as affected stakeholders, lifecycle phase, severity, reversibility, detectability, and governance responsibility

• Apply and validate the taxonomy through one or more illustrative AI use cases (e.g. public services, healthcare, or industrial AI systems), using qualitative assessment methods

• Reflect on how the taxonomy can support organisational decision-making, reporting, and monitoring of AI risks over time

What You’ll Gain: 

• Experience in AI risk assessment methodology development 

• Practical exposure to applied AI governance and consulting work 

• Skills in taxonomy design, qualitative risk analysis, and framework validation

• A strong foundation for roles in AI governance, risk assessment, or responsible AI consulting

This thesis combines theoretical rigour with practical application considerations, positioning you at the intersection of AI ethics and risk assessment. For more information, please contact rbl@yaghma.nl.
