FICO Eyes Responsible AI
May 25, 2021
FICO released its State of Responsible AI report, produced by market intelligence firm Corinium, which found that despite increased demand for and use of AI tools, almost two-thirds (65%) of respondents’ companies can’t explain how specific AI model decisions or predictions are made. The study found
that the lack of awareness of how AI is being used and whether it’s
being used responsibly is concerning as 39% of board members and 33% of
executive teams have an incomplete understanding of AI ethics.
Conducted by Corinium and sponsored by FICO, the report, State of
Responsible AI, surveyed 100 C-level analytic and data executives and
conducted in-depth interviews with industry thought leaders from MIT, AI
Truth, The Alan Turing Institute, World Economic Forum, and FinRegLab to
understand how organizations are deploying AI capabilities and whether
they are ensuring AI is used ethically, transparently, securely and in
their customers’ best interests.
While compliance staff (80%) and IT and data analytics teams (70%) have
the highest awareness of AI ethics and responsible AI within
organizations, understanding across organizations remains patchy. As a
result, building support to establish responsible AI practices is a
significant challenge: the majority of respondents (73%) have struggled
to secure executive support for prioritizing AI ethics and responsible AI.
“Over the past 15 months, more and more businesses have been investing
in AI tools, but have not elevated the importance of AI governance and
responsible AI to the boardroom level,” said Scott Zoldi, Chief
Analytics Officer at FICO. “Organizations are increasingly leveraging AI
to automate key processes that - in some cases - are making
life-altering decisions for their customers and stakeholders. Senior
leadership and boards must understand and enforce auditable, immutable
AI model governance and production model monitoring to ensure that the
decisions are accountable, fair, transparent, and responsible.”
Whose Responsibility is it?
The study found that almost half (49%) of respondents report an
increase in resources allocated to AI projects over the past 12 months,
followed by increases in team productivity (46%) and in the predictive
power of AI models (41%). By contrast, only 39% have prioritized
increased resources for AI governance during model development, and
just 28% have prioritized ongoing AI model monitoring and maintenance.
Despite the embrace of AI, what is driving the lack of awareness? The
study showed that there is no consensus among executives about what a
company’s responsibilities should be when it comes to AI.
The majority of respondents (55%) agree that AI systems for data
ingestion must meet basic ethical standards and that systems used for
back-office operations must also be explainable. But this may partly
reflect the challenges of getting staff to use new technologies, as much
as wider ethical considerations.
More troubling, 43% of respondents say they have no responsibilities
beyond meeting regulatory compliance to ethically manage AI systems
whose decisions may indirectly affect people’s livelihoods, e.g.
audience segmentation models, facial recognition models, and
recommendation systems.
“AI will only become more pervasive within the digital economy as
enterprises integrate it at the operational level across their
businesses,” said Cortnie Abercrombie, Founder and CEO, AI Truth. “Key
stakeholders, such as senior decision makers, board members, and
customers, need to have a clear understanding of how AI is being used within
their business, the potential risks involved and the systems put in
place to help govern and monitor it. AI developers can play a major role
in helping educate key stakeholders by inviting them to the vetting
process of AI models.”
Combating AI Bias
What can businesses do to help turn the tide? Combating AI model bias is
an essential first step, but many enterprises have yet to
operationalize it effectively: 80% of AI-focused executives are
struggling to establish processes that ensure responsible AI use.
Currently, only a fifth of respondents (20%) actively monitor their
models in production for fairness and ethics, while less than a quarter
(22%) say their organization has an AI ethics board to consider
questions on AI ethics and fairness. One in three (33%) have a model
validation team to assess newly developed models and only 38% say they
have data bias mitigation steps built into model development processes.
However, evaluating the fairness of model outcomes is the most popular
safeguard in the business community today, with 59% of respondents
saying they do this to detect model bias. Additionally, 55% say they
isolate and assess latent model features for bias and half (50%) say
they have a codified mathematical definition for data bias and actively
check for bias in unstructured data sources.
Businesses recognize that things need to change, as the overwhelming
majority (90%) agree that inefficient processes for model monitoring
represent a barrier to AI adoption. Thankfully, almost two-thirds (63%)
of respondents believe that AI ethics and responsible AI will become a core
element of their organization's strategy within two years.
Educating key stakeholder groups about the risks associated with AI and
about the importance of complying with AI regulation are two critical
steps to addressing companies’ blind spots around responsible AI.
Additionally, the report highlights several best practices that will
help organizations plot a path to responsible AI, including:
- Establishing practices that protect the business against reputational
threats from irresponsible AI use
- Balancing the need to be responsible with the need to bring new
innovations to market quickly
- Securing executive support for prioritizing AI ethics and responsible AI
- Futureproofing company policies in anticipation of stricter regulations
- Securing the necessary resources to ensure AI systems are developed
and used responsibly
“The business community is committed to driving transformation through
AI-powered automation. However, senior leaders and boards need to be
aware of the risks associated with the technology and the best practices
to proactively mitigate them. AI has the power to transform the world,
but as the popular saying goes, with great power comes great
responsibility,” added Zoldi.