Predictive Data Modelling in Policing
Introducing predictive modelling, especially where it will inform decision making that affects the public, is a significant commitment of both time and resources. To develop and implement algorithmic modelling successfully in policing, diverse stakeholders must work together to define why and how the modelling will work in practice, and must have the specific expertise needed to develop and test large data models. In the absence of a centralised body that can apply machine learning techniques at a national level, we recommend that police forces develop and build their own models in-house, for two reasons: first, the model will be specific to the aims and data of each force; second, any model needs to be maintained over time.
The Interpol AI toolkit contains complementary documents that support this section, including guidance on the following questions:
- What is the problem to be addressed?
- What is the proposed solution?
- What is the overall aim of the initiative, and why is this important?
- Who or what does the initiative target, and why?
- What are the key definitions being used, and how are they operationalised (e.g., high harm, recidivism risk)?
- What is the mechanism (e.g., professional judgement, structured professional judgement using a tool, a static algorithm, a machine-learning-based algorithm) by which cases of interest will be identified, and why?
- Will identified cases go through further assessment? If so, what does this look like?
- What kind of action will be taken? How is it justified, and are resources available for it?
- How does all of the above fit within the legal and ethical frameworks in which policing operates?
- What evidence base justifies each of the above steps? (Briefly state the underlying theory, hypotheses, or evidence from prototyping.)
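One of the questions above asks forces to choose between a static algorithm and a machine-learning-based algorithm as the identification mechanism. A minimal sketch of that distinction is below; all feature names, weights, and data are purely illustrative assumptions, not drawn from any real policing tool.

```python
import math

def static_risk_score(prior_offences: int, years_since_last: int) -> float:
    """Static algorithm: the weights are fixed by the designer and never
    change once deployed. (Illustrative weights only.)"""
    return 0.6 * prior_offences - 0.3 * years_since_last

def train_logistic(X, y, lr=0.1, epochs=500):
    """Machine-learning alternative: the weights are *learned* from
    historical labelled data. A tiny hand-rolled logistic regression is
    used here so the sketch has no external dependencies."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = b + sum(wj * xj for wj, xj in zip(w, xi))
            err = 1 / (1 + math.exp(-z)) - yi          # prediction error
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def ml_risk_score(w, b, features):
    """Score a new case with the learned weights; returns a value in (0, 1)."""
    z = b + sum(wj * xj for wj, xj in zip(w, features))
    return 1 / (1 + math.exp(-z))

# Toy historical data: (prior offences, years since last offence) -> outcome
X = [[0, 5], [4, 0], [1, 4], [5, 1]]
y = [0, 1, 0, 1]
w, b = train_logistic(X, y)
```

The contrast matters for the maintenance point made earlier: a static score behaves identically for as long as it is deployed, whereas learned weights must be re-estimated as data and offending patterns drift, which is one reason in-house ownership of the model is recommended.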