Last month, Theta Lake submitted a response to a request for comment about the use of Artificial Intelligence (AI) and Machine Learning (ML) in financial services issued by several federal banking agencies, including the Federal Reserve, the Consumer Financial Protection Bureau, and the Office of the Comptroller of the Currency. In our response, we described how Theta Lake uses AI in its Security and Compliance Suite, offered thoughts on how the agencies might create a framework for assessing AI risk, and outlined a few standard practices that would support strong AI development in the future.
Theta Lake incorporates AI into its built-in risk detections, which examine the video, audio, chat, and file-transfer content of collaboration and chat conversations for concerns common to nearly all communications. Theta Lake has developed more than 70 of these risk detections, which come pre-trained and ready for customer use, and customers can provide feedback and additional training for the underlying classifiers. Customers can also engage Theta Lake to create customized AI-based risk detections for issues specific to their product offerings, business units, or security and compliance concerns.
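To make the detection-plus-feedback workflow concrete, here is a minimal, purely illustrative sketch in Python. Theta Lake's actual detections are trained models rather than keyword lists, and none of these class or method names come from its real API; they are hypothetical stand-ins for the pattern described above.

```python
# Illustrative sketch only: every name below is hypothetical, and the keyword
# matching is a toy stand-in for a trained classifier.
from dataclasses import dataclass, field

@dataclass
class RiskDetection:
    """A pre-trained detection that flags one compliance concern."""
    name: str
    keywords: list[str]  # stand-in for a trained model's learned features
    feedback: list = field(default_factory=list)

    def scan(self, transcript: str) -> bool:
        """Return True if the communication transcript triggers this detection."""
        text = transcript.lower()
        return any(kw in text for kw in self.keywords)

    def add_feedback(self, transcript: str, is_risk: bool) -> None:
        """Record customer feedback for later retraining of the classifier."""
        self.feedback.append((transcript, is_risk))

# Hypothetical detections modeled on the kinds of universal concerns described above.
detections = [
    RiskDetection("credentials-shared", ["password", "login"]),
    RiskDetection("guarantee-of-returns", ["guaranteed return", "can't lose"]),
]

message = "Here is my password so you can pull the report."
for d in detections:
    if d.scan(message):
        print(f"Flagged by detection: {d.name}")
        d.add_feedback(message, is_risk=True)  # customer confirms the hit
```

The point of the sketch is the shape of the workflow: detections ship pre-trained, run against each conversation, and accept customer feedback that can be folded back into training.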
A key challenge facing the agencies is how to provide meaningful guidance to firms on evaluating AI-enabled applications and on the due diligence expected as part of that assessment. In our response, Theta Lake suggested that the agencies consider creating a risk-based framework that firms could use to assess AI technologies.
We proposed defining a set of high-risk activities based on potential customer and market impacts as well as the sensitivity of the data being analyzed or used for training, such as credit scores, gender, or race. Under such a framework, AI used for underwriting, investment advice, consumer lending, and several other activities could be designated high-risk.
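A purely illustrative sketch of how such a tiering rule might look follows; the activity list, data categories, and tier names are hypothetical examples drawn from the proposal's logic, not agency guidance or Theta Lake's letter verbatim.

```python
# Hypothetical sketch of the proposed risk-based framework: an AI use case is
# tiered by the activity it supports and the sensitivity of the data it touches.
HIGH_RISK_ACTIVITIES = {"underwriting", "investment advice", "consumer lending"}
SENSITIVE_DATA = {"credit score", "gender", "race"}

def assess_ai_risk(activity: str, data_fields: set[str]) -> str:
    """Assign a risk tier based on the activity and the sensitivity of its data."""
    if activity in HIGH_RISK_ACTIVITIES or data_fields & SENSITIVE_DATA:
        return "high"  # would trigger the most stringent due diligence
    return "standard"

print(assess_ai_risk("consumer lending", {"income"}))           # high (activity)
print(assess_ai_risk("document search", {"credit score"}))      # high (data)
print(assess_ai_risk("meeting transcription", {"timestamps"}))  # standard
```

Either trigger, a high-impact activity or sensitive data, is enough to land in the high-risk tier, which mirrors the two criteria proposed above.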
We noted that the European Commission is considering a risk-based approach to the evaluation of AI, as discussed in its recent proposal “Laying Down Harmonised Rules on Artificial Intelligence and Amending Certain Union Legislative Acts,” 2021/0106 (COD). Although the scope of the Commission’s proposal is quite broad, adapting its core concept of using risk as the driving principle for vetting AI technologies would be extremely useful in the financial services context.
In addition to offering thoughts on the evaluation of AI, we outlined a set of best practices that organizations creating AI technologies can use to demonstrate meaningful rigor around security and development controls. We observed that auditing frameworks such as SOC 2 Type 2 and ISO 27001 test a comprehensive set of administrative and technical controls that any organization developing AI-enabled platforms should adopt. Similarly, the implementation of standard policies and documentation around information security, baseline explainability, and machine learning operations is key to sound AI development. Finally, we highlighted the need for transparency and performance auditability as part of AI system design and deployment.
At Theta Lake, we’ve implemented these controls to demonstrate the rigor of our approach to system security and development. Theta Lake’s annual SOC 2 Type 2 audit, its mapping of controls to ISO 27001 and HIPAA, and its related policy and explainability documentation provide tangible evidence of a security posture that informs our entire AI development process. Additionally, our classifier audit report capability facilitates oversight and measurement of our AI-enabled detections.
Theta Lake supports the continued evolution of reasonable, risk-based standards for AI assessment and is excited to have contributed to this critical conversation among US financial services agencies.
Download Theta Lake’s response letter to the agencies’ request for comment about the use of Artificial Intelligence and Machine Learning in financial services.