Ethical frameworks don't need to stifle creativity and suck up all your time
The thoroughness demanded by ethical development sits at odds with scarce time and computational resources, but that tension shouldn't stop the creativity of early experimentation.
The recent UK Post Office scandal is a case study in the unethical application of public-facing algorithms, but for the purposes of this blog two lessons stand out:
A prototype is not a product.
Ethics is key in both development AND deployment.
How does this relate to AI in policing? Well, in some ways being a data scientist in a police force is the holy grail of data geekiness: the data is real, it's tricky, and the applications you build can have a real, positive impact on society. The reality, as always, is complex.
The data science and analytics teams are as underfunded and under-resourced as the rest of the police and public sector, which translates into a lack of time for training or for developing varied applications, a lack of support with data engineering, and a lack of much-needed compute power.
On the other hand, to produce useful and interesting systems, data scientists need time and space to familiarise themselves with the data and to learn which algorithms work best. They need integration and communication with their clients within the force to find out which real needs should be addressed, so that they can prototype solutions and see what is feasible and of practical value.
Effective ML solutions therefore require bottom-up prototyping as much as top-down requests, and both need to consider the underlying problem that needs solving.
This is where ethical frameworks come in. The conceptualisation and rationale phases aim to elicit what is really required, helping you avoid your own Post Office moment. For example:
Do you really need a high-powered ML algorithm to identify the worst offenders? Or are you trying to improve outcomes and increase prosecution rates?
What are the ethical issues that may arise? What are the intrinsic biases encoded in the data?
Do you have the capacity to act on the algorithm's predictions?
How do you deploy the algorithm so that it effectively and ethically augments existing procedures, helping rather than leading to prosecutions that fail due to decision deferral or algorithmic bias?
This sets out the groundwork that will guide that prototype towards a deployable product that can be leveraged effectively.
RUDI and other ethical AI frameworks aim to provide guidance throughout a project and to act as institutional memory in a changing environment. They allow the development of a complete product that can be maintained, continued through changes of personnel, and monitored effectively, so that it provides continuous value rather than being scrapped due to feasibility issues.
Over the lifetime of a project, what seems like a lot of documentation gets filled in naturally, as a log of the decisions taken along the way.
Although parts of the framework, such as data and model cards, are useful for recording algorithmic development in general, the full framework is not necessary for the prototyping phase or for algorithms intended for internal operations. However, as evidenced by the multiple overlapping frameworks that exist, it is an essential component of developing ethical public-facing AI. Where there is a cost to individuals and communities, any system, whether it's software, AI, or civil engineering, must consider and mitigate the worst-case scenario.
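A model card of the kind mentioned above can be as lightweight as a structured record kept alongside the code. Here is a minimal sketch in Python; the field names and the hypothetical model described are illustrative, loosely following the general "model card" idea rather than any particular force's template:

```python
from dataclasses import dataclass, field, asdict
import json


@dataclass
class ModelCard:
    """A minimal model card: a structured record of what a model is for,
    what data it was built on, and its known limitations."""
    model_name: str
    intended_use: str
    training_data: str  # a description of the data, not the data itself
    known_limitations: list = field(default_factory=list)
    ethical_considerations: list = field(default_factory=list)


# Hypothetical example of filling in a card during prototyping.
card = ModelCard(
    model_name="demand-forecaster-prototype",
    intended_use="Internal prioritisation of follow-up work; not for charging decisions.",
    training_data="Historical incident logs, anonymised, with known gaps.",
    known_limitations=[
        "Under-reporting bias: incidents that were never logged are invisible to the model.",
    ],
    ethical_considerations=[
        "Predictions reflect past recording patterns, not underlying community need.",
    ],
)

# Serialise for the project log, so the card grows naturally with the project.
print(json.dumps(asdict(card), indent=2))
```

Kept under version control next to the model code, a record like this accumulates into exactly the decision log described above, with little extra effort at each step.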