
Rationale

The rationale section should provide a way to iteratively document the thought processes and decisions that lead to the model. Modelling is an iterative and non-linear process – sections of the business case will change as the project progresses, and this should be documented along with the reasons for change. We recommend you:

 

  • Document all decisions so the entire process is transparent and auditable 

  • Document trade-offs e.g., increased accuracy often means decreased explicability 

  • Discuss the business case before any modelling takes place


Responsible team members: The whole team

Multi-disciplinary teams

Assign a team and a Senior Responsible Officer to ensure full accountability for the tool’s performance and its deployment.

1 / Enables you to apply different expertise to the problem 

2 / Enables internal auditing i.e., the person developing the model is different from the person reviewing the model’s statistical performance.   

3 / Enables external auditing i.e., external scrutiny of the process, especially for high-risk models  

You may not have personnel available to fill all the separate positions; if so, think through the implications of this. For example, using an external agency to fulfil all these roles may lead to problems when maintaining the model.

Effective multidisciplinary working

1 / Develop a shared understanding of role responsibilities, tasks and procedures involved  

2 / Have regular team meetings where working relationships are actively developed and good communication practices established (e.g., minuted meetings)    

Roles

Senior Responsible Officer: Takes strategic and managerial responsibility for the project, including delivery and ongoing evaluation.

Domain Experts: Possess subject matter expertise and will use the model in their work. They set out the problem the model is intended to solve and the required outcomes, and help evaluate whether the model is achieving them.

Data Engineers: Responsible for the data used by the model; they unify and standardise the data for use in the model.

Data Scientists: Create, evaluate and maintain models, including associated documentation.

Validators: Review and evaluate the work of the data engineers and data scientists, with a focus on technical accuracy. Often, validators are data scientists who are not associated with the specific model or project at hand.

Governance Personnel: Review and approve the work created by both data engineers and data scientists, with a focus on risk.  

Validators and governance personnel can include ethics boards, academics and community stakeholders who are external to the force and can review the work.

Computational resources

Training and using some types of machine learning model may require computational resources that are only available to you through cloud computing, and this can be costly. Make sure you adequately estimate the potential development and hosting costs for the final models. It may be possible to reduce the costs of deep learning by using computational tricks such as quantisation and adapters to train smaller versions of more powerful models; a minimal sketch of this approach is shown below.
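
The sketch below is purely illustrative of the quantisation-plus-adapters idea: it loads a pre-trained model with its weights stored in 4-bit form and attaches small low-rank adapters so that only a tiny fraction of parameters is trained. It assumes a Python environment with the Hugging Face transformers, peft and bitsandbytes libraries and a GPU, which is only one possible toolchain; the model name and hyperparameters are placeholders, not recommendations.

    # Illustrative sketch: 4-bit quantisation plus LoRA adapters to cut fine-tuning cost.
    # Assumes the transformers, peft and bitsandbytes libraries and a CUDA GPU;
    # the model name and hyperparameters are placeholders only.
    import torch
    from transformers import AutoModelForSequenceClassification, BitsAndBytesConfig
    from peft import LoraConfig, get_peft_model

    quant_config = BitsAndBytesConfig(
        load_in_4bit=True,                     # store base weights in 4-bit precision
        bnb_4bit_compute_dtype=torch.float16,  # compute in half precision
    )

    base_model = AutoModelForSequenceClassification.from_pretrained(
        "distilbert-base-uncased",             # placeholder model
        num_labels=2,
        quantization_config=quant_config,
    )

    lora_config = LoraConfig(
        r=8,                                   # small adapter rank: few trainable weights
        lora_alpha=16,
        lora_dropout=0.1,
        task_type="SEQ_CLS",
    )

    model = get_peft_model(base_model, lora_config)
    model.print_trainable_parameters()         # typically well under 1% of all parameters

Only the adapter weights are updated during training, which keeps memory and compute requirements, and therefore cloud hosting costs, well below those of full fine-tuning.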

Business case

Create a clear business case

  • Document all decisions so the entire process is transparent and auditable 

  • Document trade-offs e.g., increased accuracy often means decreased explicability 

  • Discuss the business case before any modelling takes place 

  • Modelling is an iterative and non-linear process – sections of the business case will change as the project progresses; this should be documented along with the reasons for change  

This business case template can help clarify the key points of the project to facilitate easier communication within the organisation. It covers the points listed below.


Note that it is not exhaustive and should be added to in line with the force's own concerns. Responses will change as the project progresses.

Team

  • Senior Responsible Officer 

  • Data Engineer(s) 

  • Data Scientist(s) 

  • Domain Expert(s) 

  • Validator(s) 

  • Governance Expert(s) 

  • Plan for if someone leaves post

Model

  • Outline the problem to be addressed and the overall aim of the model, including, if relevant, who the model is being used to target and why 

  • Why is algorithmic modelling rather than other options (e.g., professional judgment) best suited to solving this problem? 

  • Alignment with force priorities 

  • Alignment with national strategic priorities 

  • Briefly state underlying theory, hypotheses, or evidence from prototyping

  • Desired outcome(s) i.e., how will you know the model is working?  

  • Possible undesired outcome(s) (e.g., either directly or through misuse) 

  • Model design (e.g., classification, ranking) & rationale 

  • Data features needed & rationale 

  • Data analysis plan & rationale (e.g., how bias will be assessed, how the model will be evaluated, and how errors will be analysed); a sketch of a per-group error-rate check follows this list 

  • Inclusion and exclusion criteria for cases being included in the model & rationale 

  • Plan for storing & sharing output 

  • Who will use the output & will training on using the output be provided? If so, what? If not, why not? 

  • How will the model’s outputs be incorporated into officer decision-making? Are processes in place to keep track of accuracy and to catch model drift?

  • If relevant, what is the intervention for the cases the model identifies? Are there situations where identified cases will not be considered for an intervention? 

  • What are the implications of an error (both false positives and false negatives) e.g., ethical, legal, reputational?

  • What is the plan for ongoing model evaluation (e.g., thresholds and inputs/outputs that trigger model retraining)? See the monitoring sketch after this list 

  • Can iterative changes be made to the model as needed? (e.g., to compensate for feedback loops due to the effects of interventions) 
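
To make the bias-assessment point in the analysis plan more concrete, the sketch below computes false positive and false negative rates separately for each group. It is a minimal Python/pandas example; the column names, the grouping variable and the made-up data are illustrative assumptions only, and the analysis plan should define which groups and metrics are actually examined.

    # Minimal sketch of a per-group error-rate check for bias assessment.
    # Column names, the grouping variable and the example data are illustrative only.
    import pandas as pd

    def error_rates_by_group(df, group_col, label_col="actual", pred_col="predicted"):
        """False positive and false negative rates for each group."""
        rows = []
        for group, g in df.groupby(group_col):
            negatives = g[g[label_col] == 0]
            positives = g[g[label_col] == 1]
            fpr = (negatives[pred_col] == 1).mean() if len(negatives) else float("nan")
            fnr = (positives[pred_col] == 0).mean() if len(positives) else float("nan")
            rows.append({"group": group, "n": len(g),
                         "false_positive_rate": fpr, "false_negative_rate": fnr})
        return pd.DataFrame(rows)

    # Made-up example: compare error rates across two areas.
    data = pd.DataFrame({
        "area":      ["A", "A", "A", "B", "B", "B"],
        "actual":    [1,   0,   0,   1,   1,   0],
        "predicted": [1,   1,   0,   0,   1,   0],
    })
    print(error_rates_by_group(data, "area"))

Large differences in these rates between groups are one signal that the trade-offs recorded in the business case need revisiting.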

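As a sketch of what evaluation thresholds might look like in practice, the code below compares live accuracy and a simple input-drift measure (the population stability index) against agreed limits and flags when retraining should be considered. It is plain Python; the threshold values, the choice of drift measure and the example figures are illustrative assumptions, not recommended settings.

    # Minimal sketch of evaluation thresholds that could trigger model retraining.
    # Threshold values, the drift measure and the example bins are illustrative only.
    import math
    from dataclasses import dataclass

    @dataclass
    class MonitoringThresholds:
        min_accuracy: float = 0.80   # consider retraining if live accuracy falls below this
        max_psi: float = 0.25        # consider retraining if input drift (PSI) exceeds this

    def population_stability_index(expected, actual):
        """PSI over pre-binned proportions (both lists use the same bin order)."""
        psi = 0.0
        for e, a in zip(expected, actual):
            e, a = max(e, 1e-6), max(a, 1e-6)   # avoid log of zero
            psi += (a - e) * math.log(a / e)
        return psi

    def should_retrain(live_accuracy, baseline_bins, recent_bins,
                       t=MonitoringThresholds()):
        """True if either accuracy or input drift breaches its threshold."""
        drift = population_stability_index(baseline_bins, recent_bins)
        return live_accuracy < t.min_accuracy or drift > t.max_psi

    # Made-up example: accuracy is still acceptable but the inputs have shifted.
    print(should_retrain(0.86,
                         baseline_bins=[0.25, 0.25, 0.25, 0.25],
                         recent_bins=[0.05, 0.15, 0.30, 0.50]))   # -> True

Whatever form such checks take, the thresholds themselves, and who reviews any breach, should be recorded in the business case.
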
Costs and resourcing

  • What costs/resources will be needed for set up & piloting? 

  • What costs/resources will be needed for model maintenance? 

Changes and trade-offs (to be completed during project lifecycle) 

  • Any changes to model design & rationale 

  • Any changes to analysis plan & rationale 

  • Any trade-offs & rationale e.g., false positive/negative rates, accuracy vs. explicability 
