Could AI trigger a financial crisis?

Here’s the intro from this blog by Cooley’s Cydney Posner:

In remarks on Monday to the National Press Club, SEC Chair Gary Gensler, after first displaying his math chops—can you decipher “the math is nonlinear and hyper-dimensional, from thousands to potentially billions of parameters”?—discussed the potential benefits and challenges of AI, which he characterized as “the most transformative technology of our time,” in the context of the securities markets. 
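
For anyone who actually wants to decipher that quip: a model's "parameters" are the learned weights in each layer, each layer applies a nonlinear function to high-dimensional inputs, and the counts compound quickly with width and depth. Here is a back-of-the-envelope sketch; the layer sizes are illustrative assumptions, not figures Gensler cited:

```python
# Back-of-the-envelope parameter counting for fully connected layers.
# All layer sizes below are illustrative assumptions.

def dense_params(d_in: int, d_out: int) -> int:
    """Weights plus biases for one fully connected (dense) layer."""
    return d_in * d_out + d_out

# A toy three-layer network already lands in the "thousands":
small = dense_params(64, 128) + dense_params(128, 128) + dense_params(128, 1)
print(f"toy model: {small:,} parameters")  # 24,961

# The same arithmetic at roughly GPT-3-scale widths (96 layers, model
# dimension 12,288, 4x feed-forward expansion) reaches tens of billions
# from just one layer type -- the "potentially billions."
big = 96 * dense_params(12_288, 4 * 12_288)
print(f"feed-forward up-projections alone: {big:,} parameters")  # ~58 billion
```

The "nonlinear" part of the quote is simply the activation applied between those layers; the "hyper-dimensional" part is what the counts above measure.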

When Gensler taught at MIT, he and a co-author wrote a paper on some of these very issues, “Deep Learning and Financial Stability,” so it’s a topic on which he has his own deep learning. The potential for benefits is tremendous, he observed, with greater opportunities for efficiencies across the economy, greater financial inclusion and enhanced user experience. The challenges introduced are also numerous—and quite serious—with greater opportunity for bias, conflicts of interest, fraud and platform dominance undermining competition. Then there’s the prospective risk to financial stability altogether—another 2008 financial crisis perhaps? But not to worry, Gensler assured us—the SEC is on the case.

Gensler looked at the anticipated impact of AI from both a narrow and a broad perspective. The growing capability of AI to tailor communications to each of us individually—which Gensler referred to as “narrowcasting”—raises or exacerbates a number of potential issues. The growing capacity of AI to make predictions about individuals, with outcomes that may be “inherently challenging to interpret,” could be problematic: the results of predictive algorithms could be based on incorrect information, or “on data reflecting historical biases” or “latent features that may inadvertently be proxies for protected characteristics.” Or, the AI system could create conflicts of interest by optimizing for the interests of the platform (say, the broker or financial adviser) over the interests of the customer. The SEC’s most recent agenda indicates that the Division of Trading and Markets is targeting October 2023 for a proposal “related to broker-dealer conflicts in the use of predictive data analytics, artificial intelligence, machine learning, and similar technologies in connection with certain investor interactions.”
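
To make the “latent proxy” concern concrete, here is a minimal, self-contained sketch on synthetic data; every variable name and number below is a hypothetical assumption, not drawn from the SEC’s remarks. A credit-style model is trained without the protected attribute, yet a correlated “neutral” feature lets its scores track that attribute anyway:

```python
# A minimal sketch of the "latent proxy" problem: the model never sees the
# protected attribute, but a correlated "neutral" feature (a synthetic
# zip-code-style flag) reproduces the disparity. All data are synthetic
# and all names/numbers are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute -- deliberately excluded from the model's inputs.
group = rng.integers(0, 2, size=n)

# "Neutral" feature strongly correlated with group membership
# (matches group 80% of the time), e.g. a coarse geographic indicator.
zip_flag = np.where(rng.random(n) < 0.8, group, 1 - group).astype(float)

# Genuinely relevant feature, independent of group.
income = rng.normal(55.0, 10.0, size=n)

# Historical outcomes: biased data in which group membership itself
# depressed approval odds -- the "data reflecting historical biases."
logits = 0.05 * (income - 55.0) - 1.0 * group
approved = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(float)

# Train logistic regression on (income, zip_flag) only, via plain
# gradient descent; the protected column is never in the design matrix.
X = np.column_stack([np.ones(n), (income - income.mean()) / income.std(), zip_flag])
w = np.zeros(3)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - approved) / n

# The score gap between groups persists even though the model never saw
# the protected attribute: zip_flag carried the information instead.
scores = 1.0 / (1.0 + np.exp(-X @ w))
print(f"mean score, group 0: {scores[group == 0].mean():.3f}")
print(f"mean score, group 1: {scores[group == 1].mean():.3f}")
```

Dropping the protected column is not enough, in other words: the zip-style feature “inadvertently” carries the same information, which is exactly the kind of hard-to-interpret outcome Gensler flagged.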

And then, there are the obvious opportunities for fraud and deception, with “bad actors” using “AI to influence elections, the capital markets, or spook the public, potentially making Orson Welles’ 1938 ‘The War of the Worlds’ radio broadcast look tame.” Nevertheless, Gensler reaffirmed, “under the securities laws, fraud is fraud. The SEC is focused on identifying and prosecuting any form of fraud that might threaten investors, capital formation, or the markets more broadly.” He also made the point that, for public companies that are disclosing AI opportunities and risks, it is important “to take care to ensure that material disclosures are accurate and don’t deceive investors.”

The macro-level risks are perhaps even more daunting. There will certainly be changes to the job market, and AI may well be turf on which the U.S. and China “compete economically and technologically.” More chilling are the nightmares of bad actors’ misuse of AI: “geopolitical challenges from state actors and militaries’ potential use of AI.” Or worse.