Oxford's Godofredo Ramizo Jr. Eyes Framework for Governmental AI Projects

May 14, 2021

In a new study, researchers from the Oxford Commission on AI and Good Governance (OxCAIGG) at the Oxford Internet Institute, University of Oxford, set out a best-practice approach for government-led AI projects, helping officials deliver successful outcomes for the public.

The study, ‘Practical Lessons for Government AI Projects’, authored by Godofredo Ramizo Jr, a DPhil candidate at the Oxford Internet Institute, is based on in-depth structured interviews with senior policy practitioners and an extensive literature review.

The study provides practical guidance for government officials responsible for designing and delivering AI projects.

Godofredo Ramizo Jr, researcher and lead author of the study, Oxford Internet Institute, said: “Governments around the world are launching projects that embed AI in the delivery of public services. These range from AI-driven management of internal systems to smart city solutions for urban problems. Yet many of these projects fail due to lack of financial resources, poor oversight or knowledge gaps. We believe there is a clear need for a succinct framework that will help government decision-makers navigate the complexities of AI projects, avoid pitfalls and uphold the public good.”

The report examined the diversity of AI projects currently underway across various governments in order to identify best practice. The researchers identified four types of AI project, which vary in the project’s importance and in the AI-specific resources available:

Reformer project – high resource, high importance

Steward project – high resource, relatively low importance

Aspirant project – low resource, high importance

Adventurer project – low resource, relatively low importance
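The four types form a simple two-by-two grid over resource level and project importance. As a minimal illustrative sketch (the report itself does not prescribe any code; the function name and boolean inputs here are our own assumptions), the classification could be expressed as:

```python
def classify_project(high_resource: bool, high_importance: bool) -> str:
    """Map a project's AI-specific resources and importance to its type,
    following the two-by-two classification in the OxCAIGG report."""
    if high_resource and high_importance:
        return "Reformer"    # high resource, high importance
    if high_resource:
        return "Steward"     # high resource, relatively low importance
    if high_importance:
        return "Aspirant"    # low resource, high importance
    return "Adventurer"      # low resource, relatively low importance
```

For example, a well-funded smart-city flagship would fall under “Reformer”, while a modestly resourced internal pilot of low strategic weight would be an “Adventurer”.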

Researchers used this classification system as the basis for developing five practical principles designed to help governments manage diverse types of AI projects whilst minimising risks and upholding the public interest. Each principle can be tailored by government officials according to the context of the project and the project type as identified by the Oxford researchers.

These are:

Determine appropriate solutions – decision makers should critically assess whether and how AI can help governance challenges

Include a multi-step assessment process – consider using feasibility studies, pilots, milestones for quality control and post-implementation monitoring

Strengthen government’s bargaining position – robustly engage with technology vendors and external partners using tactics such as blacklists and bulk tenders

Pay attention to sustainability – ensure human talent is available as well as financial and political support for long-term success

Manage data, cybersecurity and confidentiality effectively – protect national interest and individual privacy, as well as win public trust

This latest report is the third in the series of OxCAIGG reports which seek to advise world leaders on effective ways to use AI and machine learning in public administration and governance.

With the UK government set to unveil its National AI Strategy in September this year, the Oxford researchers urge government officials to consider the evidence brought forward in this report.

Ramizo adds: “In our study, we have shown how certain practical principles of good governance can be deployed to mitigate the risks or pursue the advantages inherent in different types of AI projects. By following this approach, we hope that government officials will benefit from a greater awareness of the risks, opportunities and strategies suitable for their particular project. Ultimately, we hope our study serves to contribute to a future where government-led AI projects indeed serve the public good.”
