Deploying a multidisciplinary strategy with embedded responsible AI

The finance sector is among the keenest adopters of machine learning (ML) and artificial intelligence (AI), whose predictive powers have been demonstrated everywhere from back-office process automation to customer-facing applications. AI models excel in domains requiring pattern recognition on well-labeled data, such as fraud detection models trained on past behavior. ML can enhance the customer experience, for example through conversational AI chatbots that assist consumers, and can support staff through decision-support tools. Financial services companies have also used ML for scenario modeling and to help traders respond quickly to fast-moving, turbulent markets. The industry is spearheading these and dozens more uses of AI.
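To make the pattern-recognition point concrete, the sketch below trains a toy fraud classifier on synthetic labeled transactions. The features, the labeling rule, and the model choice are illustrative assumptions, not any firm's actual pipeline.

```python
# A minimal sketch of supervised fraud detection: learn to flag fraud
# from labeled historical transactions. All data here is synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical features: transaction amount, hour of day, merchant risk score.
X = np.column_stack([
    rng.lognormal(3.0, 1.0, n),   # amount
    rng.integers(0, 24, n),       # hour of day
    rng.random(n),                # merchant risk score
])
# Toy labeling rule standing in for past behavior: large late-night
# transactions at riskier merchants are labeled fraudulent.
y = ((X[:, 0] > 50) & (X[:, 1] < 6) & (X[:, 2] > 0.5)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y
)
model = GradientBoostingClassifier().fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```

On data this clean the classifier recovers the rule almost perfectly; real transaction data is noisier, imbalanced, and adversarial, which is what makes the production problem hard.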


In a highly regulated, systemically important sector like finance, companies must also proceed carefully with these powerful capabilities, both to ensure compliance with existing and emerging regulations and to maintain stakeholder trust by mitigating harm, protecting data, and leveraging AI to help customers, clients, and communities. “Machine learning can improve everything we do here, so we want to do it responsibly,” says Drew Cukor, firmwide head of AI/ML transformation and engagement at JPMorgan Chase. “We view responsible AI (RAI) as a critical component of our AI strategy.”

Understanding the risks and rewards

The risk landscape of AI is broad and evolving. For instance, ML models, which are often developed using vast, complex, and continuously updated datasets, require a high level of digitization and connectivity in software and engineering pipelines. Yet the eradication of IT silos, both within the enterprise and potentially with external partners, increases the attack surface for cybercriminals and hackers. Cybersecurity and resilience are therefore essential components of the digital transformation agenda on which AI depends.

A second established risk is bias. Because historical social inequities are baked into raw data, they can be codified, and magnified, in automated decisions, leading for instance to unfair credit, loan, and insurance outcomes. A well-documented example is ZIP code bias, in which a neighborhood serves as a proxy for protected characteristics such as race. Lenders are already subject to rules that aim to minimize adverse impacts based on bias and to promote transparency, but when decisions are produced by black-box algorithms, transgressions can occur without intent or knowledge. Laws like the EU’s General Data Protection Regulation and the U.S. Equal Credit Opportunity Act require that explanations of certain decisions be provided to the subjects of those decisions, which means financial firms must endeavor to understand how the relevant AI models reach their results. AI must be intelligible to internal audiences too: AI-driven business-planning recommendations should make sense to a chief financial officer, and model operations should be reviewable by an internal auditor. Yet the field of explainable AI is nascent, and the global computer science and regulatory community has not determined precisely which techniques are appropriate or reliable for different types of AI models and use cases.
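One way to picture the explanation requirement is per-decision reason codes, which for an interpretable model can be read directly off its coefficients. The sketch below is a hedged illustration with hypothetical feature names and synthetic data; real adverse-action notices involve far more rigor than this.

```python
# Illustrative per-applicant reason codes from an interpretable credit model.
# Feature names, data, and the decision rule are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["income", "debt_ratio", "delinquencies", "credit_age_years"]
rng = np.random.default_rng(1)

# Synthetic applicants generated from a known linear rule, for illustration.
X = rng.normal(size=(5_000, 4))
y = (X @ np.array([1.5, -2.0, -1.0, 0.5])
     + rng.normal(scale=0.5, size=5_000) > 0).astype(int)

model = LogisticRegression().fit(X, y)

def reason_codes(applicant: np.ndarray, top_k: int = 2) -> list[str]:
    """Rank features by how strongly they pushed this decision downward."""
    contributions = model.coef_[0] * (applicant - X.mean(axis=0))
    most_negative = np.argsort(contributions)[:top_k]
    return [features[i] for i in most_negative]

# Explain the most strongly declined applicant in the sample.
applicant = X[np.argmin(model.decision_function(X))]
print("decision:", "approved" if model.predict(applicant.reshape(1, -1))[0] else "declined")
print("top adverse factors:", reason_codes(applicant))
```

For genuinely black-box models, post-hoc techniques such as SHAP or LIME play a similar role, though, as noted above, the field has not settled which techniques are reliable for which models.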

There are also macro risks related to the health of the economic system. Financial companies applying data-driven AI tools at scale could create market instability or incidents such as flash crashes through automated herd behavior if algorithms implicitly follow similar trading strategies. AI systems could even functionally collude with each other across organizations, such as by bidding to achieve the highest or lowest price for a stock, creating new forms of anticompetitive behavior.
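The herding mechanism lends itself to a toy simulation. In the sketch below, traders who act independently produce calmer prices than traders who all follow the same momentum signal; every parameter is an arbitrary illustration, not a model of any real market.

```python
# Toy illustration of algorithmic herding: correlated momentum-following
# agents amplify price swings relative to independent agents.
import numpy as np

rng = np.random.default_rng(7)

def simulate(n_agents: int, herding: bool, steps: int = 500) -> float:
    """Return the volatility of price changes under one strategy regime."""
    price, momentum, prices = 100.0, 0.0, []
    for _ in range(steps):
        if herding:
            # Every agent trades in the direction of the last price move.
            orders = n_agents * np.sign(momentum)
        else:
            # Agents act on independent, idiosyncratic signals.
            orders = rng.choice([-1, 1], n_agents).sum()
        new_price = price + 0.01 * orders + rng.normal(scale=0.5)
        momentum, price = new_price - price, new_price
        prices.append(price)
    return float(np.std(np.diff(prices)))

print("volatility, independent agents:", round(simulate(100, herding=False), 2))
print("volatility, herding agents:    ", round(simulate(100, herding=True), 2))
```

Even in this crude setup, the herding regime produces visibly higher volatility, hinting at how correlated strategies can destabilize a market without any single algorithm misbehaving.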

Toward responsible AI

Most AI risks are not, however, unique to financial services. Companies from media and entertainment to health care and transportation are grappling with this Promethean technology. But because financial services are highly regulated and systemically important to economies, firms in this sector have to be at the frontier of good AI governance, proactively preparing for and avoiding both known and unknown risks. Currently, banks are familiar with governance tools like model risk management and data impact assessments, but how these existing processes should be modified in light of AI’s impacts remains an open question.

Enter responsible AI (sometimes called ethical or trustworthy AI): the principles, policies, tools, and processes that ensure AI systems are developed and operated in the service of good for individuals and society while, in the business context, still achieving positive impact. Governments and regulatory bodies from the EU to the Monetary Authority of Singapore have been active in encouraging businesses to embed practices enhancing fairness, explainability, security, and accountability throughout the AI lifecycle. The Algorithmic Accountability Act of 2022, introduced in the U.S. Congress in February 2022, would direct the Federal Trade Commission to require impact assessments of automated decision systems and augmented critical decision processes. Other regulators have also taken notice: the EU’s AI Act, in particular, is expected to be a major international driver of regulatory change in this space. Policymakers are focusing on creating standardized AI regulations while harmonizing these rules with finance-specific laws.

Along with the voluntary guidance and emerging regulations coming from policymakers, other actors like professional associations, industry bodies, standards organizations such as the Institute of Electrical and Electronics Engineers (IEEE), and academic coalitions have released recommendations and tools for companies hoping to lead in responsible uses of AI.

Customer expectations are also a significant driver of RAI. “Customers want to know that their data is protected and that we’re not using it incorrectly. We take a lot of time to consider and make sure we’re doing the right thing,” says Cukor. “This is something that I spend a lot of time on with my fellow chief data officers in the firm. It’s very critical to us, and it’s not something we’re ever going to compromise.”

Responsible AI is, for Cukor, a lifecycle approach that upholds integrity and safety at every step in the journey. That journey starts with data, the lifeblood of AI. “Data is the most important part of our business,” he explains. “Data comes in and we process it, make sense of it, and make decisions based on it. The whole end-to-end process has to be done responsibly, ethically, and according to law.”

Accountability and oversight must be continuous because AI models can change over time; indeed, the hype around deep learning, in contrast to conventional data tools, is predicated on its flexibility to adjust in response to shifting data. But that flexibility can lead to problems like model drift, in which a model’s performance (its predictive accuracy, for example) deteriorates, or the model begins to exhibit flaws and biases, the longer it lives in the wild. Explainability techniques and human-in-the-loop oversight can not only help data scientists and product owners build higher-quality AI models from the beginning, but can also be used in post-deployment monitoring systems to ensure models do not degrade over time.
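One simple post-deployment check of the kind described here is the population stability index (PSI), a common drift statistic that compares the distribution of model scores at deployment against a recent live window. The sketch below is a minimal illustration; the synthetic score distributions and the 0.25 alert threshold are conventional assumptions, not any particular firm's monitoring standard.

```python
# Minimal drift check: population stability index (PSI) between the score
# distribution at deployment and a recent live window. Data is synthetic.
import numpy as np

def psi(baseline: np.ndarray, live: np.ndarray, n_bins: int = 10) -> float:
    """PSI between a baseline sample and a live sample of model scores."""
    edges = np.quantile(baseline, np.linspace(0, 1, n_bins + 1))
    live = np.clip(live, edges[0], edges[-1])  # keep live scores in range
    eps = 1e-6  # avoids division by zero and log of zero in empty bins
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline) + eps
    live_pct = np.histogram(live, bins=edges)[0] / len(live) + eps
    return float(np.sum((live_pct - base_pct) * np.log(live_pct / base_pct)))

rng = np.random.default_rng(42)
deploy_scores = rng.beta(2, 5, 50_000)  # score distribution at deployment
live_scores = rng.beta(3, 4, 10_000)    # scores after the population shifts

value = psi(deploy_scores, live_scores)
# A common rule of thumb flags PSI above 0.25 as drift worth investigating.
print(f"PSI = {value:.3f} ->", "investigate" if value > 0.25 else "stable")
```

In practice a check like this would run on a schedule, alongside accuracy and fairness metrics, and feed the human-in-the-loop review processes described above.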

“We don’t just focus on model training or making sure our training models are not biased; we also focus on all the dimensions involved in the machine learning development lifecycle,” says Cukor. “It is a challenge, but this is the future of AI,” he says. “Everyone wants to see that level of discipline.”

Prioritizing responsible AI

There is clear business consensus that RAI is important and not just a nice-to-have. In PwC’s 2022 AI Business Survey, 98% of respondents said they have at least some plans to make AI responsible through measures including improving AI governance, monitoring and reporting on AI model performance, and making sure decisions are interpretable and easily explainable.

Notwithstanding these aspirations, some companies have struggled to implement RAI. The PwC poll found that fewer than half of respondents have planned concrete RAI actions. Another survey by MIT Sloan Management Review and Boston Consulting Group found that while most firms view RAI as instrumental to mitigating technology’s risks—including risks related to safety, bias, fairness, and privacy—they acknowledge a failure to prioritize it, with 56% saying it is a top priority, and only 25% having a fully mature program in place. Challenges can come from organizational complexity and culture, lack of consensus on ethical practices or tools, insufficient capacity or employee training, regulatory uncertainty, and integration with existing risk and data practices.

For Cukor, RAI is not optional, despite these significant operational challenges. “For many, investing in the guardrails and practices that enable responsible innovation at speed feels like a trade-off. JPMorgan Chase has a duty to our customers to innovate responsibly, which means carefully balancing the challenges between issues like resourcing, robustness, privacy, power, explainability, and business impact.” Investing early in the proper controls and risk management practices, across all stages of the data-AI lifecycle, will allow the firm to accelerate innovation and ultimately serve as a competitive advantage, he argues.

For RAI initiatives to succeed, RAI needs to be embedded into the culture of the organization rather than merely added on as a technical checkmark. Implementing these cultural changes requires the right skills and mindset. An MIT Sloan Management Review and Boston Consulting Group poll found that 54% of respondents struggled to find RAI expertise and talent, with 53% indicating a lack of training or knowledge among current staff members.

Finding talent is easier said than done. RAI is a nascent field, and its practitioners have noted the clearly multidisciplinary nature of the work, with contributions coming from sociologists, data scientists, philosophers, designers, policy experts, and lawyers, to name just a few.

“Given this unique context and the newness of our field, it is rare to find individuals with a trifecta: technical skills in AI/ML, expertise in ethics, and domain expertise in finance,” says Cukor. “This is why RAI in finance must be a multidisciplinary practice with collaboration at its core. To get the right mix of talents and perspectives you need to hire experts across different domains so they can have the hard conversations and surface issues that others might overlook.”

This article is for informational purposes only, and it is not intended as legal, tax, financial, investment, accounting, or regulatory advice. Opinions expressed herein are the personal views of the individual(s) and do not represent the views of JPMorgan Chase & Co. The accuracy of any statements, linked resources, reported findings, or quotations is not the responsibility of JPMorgan Chase & Co.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.
