Blame the CEOs if Your AI Acts Weird

PwC is calling on company leaders to take responsibility for their artificial intelligence (AI) practices.

In a recent press release, PwC argued that a piecemeal approach to AI creates risk. Instead, company leaders need to build responsible AI practices in from the start, which requires a comprehensive understanding of how AI is developed and integrated across the business.

PwC highlighted five areas CEOs need to focus on: governance, ethics and regulation, interpretability and explainability, robustness and security, and bias and fairness.

An earlier study found that 85% of CEOs considered AI important to their business over the next five years, while 84% agreed that AI-based decisions need to be explainable to be trusted.

"The issue of ethics and responsibility in AI are clearly of concern to the majority of business leaders. The C-suite needs to actively drive and engage in the end-to-end integration of a responsible and ethically led strategy for the development of AI in order to balance the potential economic gains with the once-in-a-generation transformation it can make on business and society. One without the other represents fundamental reputational, operational and financial risks," said Anand Rao, global AI leader at PwC US, speaking at the World Economic Forum in Dalian, China.

PwC offers a Responsible AI Toolkit, which includes a diagnostic survey that assesses an organization's understanding and application of responsible and ethical AI practices. In May and June 2019, around 250 respondents involved with AI completed the assessment.

The main conclusion was that the understanding and application of responsible and ethical AI practices remain immature and inconsistent. Specifically:

  • Only 25% of respondents said they would prioritize consideration of the ethical implications of an AI solution before implementing it.
  • Only one in five (20%) have defined processes for identifying AI-related risks. Over 60% rely on developers or informal processes, or have no documented procedures at all.
  • Ethical AI frameworks or considerations existed, but enforcement was not consistent.
  • 56% said they would find it difficult to articulate the cause if their organization's AI did something wrong.
  • Over half of respondents have not formalized their approach to assessing AI for bias, citing a lack of knowledge and tools and a reliance on ad hoc evaluations.
  • Among respondents applying AI at scale, 39% were only "somewhat" sure they would know how to stop their AI if it went wrong.

"AI decisions are not unlike those made by humans. In each case, you need to be able to explain your choices and understand the associated costs and impacts. That's not just about technology solutions for bias detection, correction, explanation, and building safe and secure systems. It necessitates a new level of holistic leadership that considers the ethical and responsible dimensions of technology's impact on business, starting on day one," said Rao.

Wilson Chow, global technology, media and telecommunications leader at PwC China, also urged Chinese company leaders to take note.

"The foundation for responsible AI is end-to-end enterprise governance. The ability of organizations to answer questions on accountability, alignment and controls will be a defining factor to achieve China's ambitious AI growth strategy," he said.