Developing organizational arrangements for artificial intelligence in public surveillance
- TU Delft
We are looking for an enthusiastic PhD candidate from the field of policy analysis, public administration, policy
sciences or a related field to study and develop organizational implementation arrangements for Artificial Intelligence
(AI) for regulatory surveillance. The position is a 4-year PhD track, financed by the Faculty of Technology, Policy
and Management, Department of Multi-Actor Systems, in close collaboration with regulatory agencies.
Artificial Intelligence (AI) systems are increasingly being used in regulatory processes. No wonder: AI promises a
more effective regulatory supervision process, in which inspectors on the floor are advised on the basis of AI-
generated risk analyses. Technically there are still many development possibilities. However, whether AI can be
applied effectively and responsibly also raises organizational issues, such as:
- Professionalism of the inspectors. Risk analyses were conducted before the AI era. These often took place in the
minds of professional inspectors and/or during their work meetings. In other words: not only data, but also inspectors
are an important source of knowledge about risks. Their knowledge is often implicit and based on years of
experience. Applying AI can therefore meet resistance from professionals. Not only that: too aggressive an application
of AI can come at the expense of their valuable knowledge.
- Data pollution. Too much risk-based supervision can lead to data pollution: data mainly comes in from those under
supervision who have previously been identified as high-risk. Others under supervision can thus remain out of sight
and can change their behaviour unnoticed. There will therefore always be a demand for supervision that combines
risk-based targeting with random samples. Within an organization there are different ideas about the right ratio
between the two.
- Ethical issues. There is a lot of discussion about bias, protection of vulnerable individuals, transparency and privacy.
Of course, these discussions also exist within regulatory organizations. This means that those involved – data
scientists, inspectors, managers – make trade-offs between values. Many decisions about data and algorithms can be
decisive for the outcomes. These decisions increasingly seem to be made 'at the front', by the data experts. This
presents challenges with regard to the management of the data experts.
Based on these signals, we conclude that further development of AI has both technical and organizational aspects.
Successful further development of AI will therefore have to be linked to the organization of supervisory practices.
How are technical development of AI and the supervisory organization connected and how can this connection be
improved? Much literature on AI is about technical possibilities. Much literature deals with ethical concerns. Little
literature deals with the actual implementation and use of AI in organizational practice. A study into this subject
should have two characteristics:
- The research is interdisciplinary. It has both innovative-technical aspects and social-scientific aspects. The question
is about the interplay between the two.
- Connected to practice. We study the application of AI in the context of supervisory practice. That is why we want to
conduct empirical research, using one or more cases. The research will be carried out in close collaboration with the
Netherlands Food and Consumer Product Safety Authority (NVWA).
In this project you will study and develop arrangements that bring together the world of AI and the management of
supervisory organizations. This requires understanding of AI (data science, machine learning, programming) and of
more intangible processes of organization, including organizational structures, informal coordination and
organizational politics. It includes both analysis and design efforts. It will involve conducting case studies at the
NVWA.
We are looking for someone who:
- has a Master's degree and a background in policy analysis, public administration or a related field
- has affinity (and preferably also demonstrable experience) with AI
- has affinity (and preferably also experience) with organizational science
- has the ability and interest to cross disciplinary boundaries
- is fluent in English, both written and spoken
- has pro-active communication skills and is able to work well in a team
Doing a PhD at TU Delft requires English proficiency at a certain level to ensure that the candidate is able to
communicate and interact well, participate in English-taught Doctoral Education courses, and write scientific articles
and a final thesis. For more details please check the Graduate Schools Admission Requirements
[https://www.tudelft.nl/onderwijs/opleidingen/phd/admission]. Since the project is developed in close cooperation with a Dutch inspection agency (NVWA) that mostly communicates in Dutch, a candidate who speaks Dutch or is willing to
learn the Dutch language is preferred.
You can find more information on: https://www.academictransfer.com/nl/325624/phd-position-artificial-intelligence-