Theta Lake Blog

Bard of The 21st Century: Risks and Opportunities For Generative AI

Posted by Marc Gilman on May 19, 2023 5:16:23 AM


Generative AI refers to a set of technologies that produce new data based on the information they have been trained on–these applications “generate” new content like text or images from their training data, hence the “generative” moniker. The most popular uses of generative AI, or “GAI,” have been interactive chat applications like OpenAI’s ChatGPT and Google’s Bard, image-generation applications like Stable Diffusion, Midjourney, and DALL-E, and code-generation systems like Copilot.

As with any emerging technology, careful evaluation of the appropriate use cases and creation of acceptable use guardrails are crucial, particularly in the enterprise context. Financial services firms have taken divergent approaches to the use of generative AI platforms with some touting the development of new, innovative applications and others blocking them outright.

We’ll quickly explore the risks and opportunities of these GAI applications–outlining compliance and security considerations as well as identifying a few scenarios where GAI is, or may be, used in the future. We’ll also discuss how Theta Lake will consider putting guardrails in place to facilitate appropriate use of these new systems.

There are several compliance and security issues to consider when using GAI applications.  From an electronic messaging perspective, GAI chat tools provide different, sometimes radically different, responses to the same prompt. So, unlike chatbots that simply regurgitate a set of canned responses, there is a credible argument to be made that the unique outputs of generative chat applications constitute “business as such” electronic messaging under relevant SEC, FINRA, FCA, and related regulatory regimes.

The input of sensitive, confidential, or proprietary information poses a challenge from a cybersecurity standpoint. Most generative AI applications have wide-ranging licenses that assert ownership over, and the right to use, any data a user provides in a text, image, or code prompt. Moreover, the provenance of their training data is opaque, which makes it difficult to ascertain the validity of the information they produce. Any part of a prompt, which could include confidential or sensitive data, can resurface as generated text for anyone else using the app. As such, employees using GAI tools must have clear guidance that prohibits providing customer financial information and personal details, strategic company information, or any other protected data in prompts. These restrictions should also cover proprietary intellectual property like trade secrets, patent applications, or other text, code, or images where the appropriate rights for use have not been secured.
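To make that guidance concrete, here is a minimal sketch of the kind of pre-submission prompt check such a policy implies. The patterns and function names are entirely illustrative, not a Theta Lake product feature; a real policy engine would use far more robust detection (checksum validation, named-entity recognition, customer-data dictionaries):

```python
import re

# Hypothetical patterns for a few common sensitive-data formats.
SENSITIVE_PATTERNS = {
    "US SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return labels of sensitive-data patterns found in a prompt."""
    return [label for label, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

if __name__ == "__main__":
    findings = screen_prompt("Summarize notes for jane@example.com, SSN 123-45-6789")
    if findings:
        print("Blocked: prompt contains " + ", ".join(findings))
```

A check like this would sit in front of the GAI application, blocking or redacting a prompt before anything leaves the firm’s environment.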

Finally, users should maintain a healthy skepticism about the results provided by GAI chat apps. So-called “AI hallucinations”–instances where the GAI application fabricates or misstates facts in a response–are extremely common. Content from GAI systems should be fully vetted prior to use, whether for internal presentations or for external customer or public engagements. As a cautionary tale, have a look at this hilarious response from Google’s Bard:

[Screenshot: Bard’s “Month Prediction” response]

We asked ChatGPT about its error rates and it provided the following response:

[Screenshot: ChatGPT’s response about its error rates]

Clearly there is some room for improvement.

On the flip side, there are numerous potential benefits of GAI in the compliance and security contexts.  

With the appropriate caveats around confidentiality and errors, GAI can expedite research and drafting of documents and other materials for internal and external use. Technical, product, and marketing teams can initiate research on specific subject areas and use the GAI-generated results as the baseline for presentations, business analysis, and social media posts.  

We’re seeing GAI used to generate meeting summaries and power other features in platforms like Zoom IQ and Microsoft Teams. As these use cases develop, Theta Lake is working in lock-step with our unified communications partners to provide supporting capture, retention, and analysis of this dynamic data in an effort to make these features widely available in financial services.   

GAI may also be used to facilitate more robust and nuanced queries on datasets. So, for example, newly-developed GAI applications could assist in searching for particular topics across a range of data for production or supervision. Theta Lake is examining these use cases to determine potential applicability for compliance and security purposes.  
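One common pattern behind this kind of querying is similarity-based retrieval: each document and each query is mapped to a vector, and similarity scores drive the search. The toy sketch below uses bag-of-words counts as a stand-in for real model embeddings, with entirely illustrative data; it is an assumption about one possible approach, not a description of any specific product:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy stand-in for a model-generated embedding: a bag-of-words vector.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def search(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by similarity to the query and return the top matches."""
    q = embed(query)
    return sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)[:top_k]

if __name__ == "__main__":
    docs = [
        "trader discussed gifts and entertainment limits",
        "weekly status update on the website redesign",
        "client complaint about an unauthorized trade",
    ]
    print(search("complaint about a trade", docs, top_k=1))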

Finally, GAI could be used to improve other customer interfaces, enhancing support portals and FAQ pages to provide better and more accurate information for those searching for troubleshooting or technical details.

Theta Lake is taking an open and flexible approach to the evaluation and use of GAI tools. As clear security and compliance uses emerge, we are committed to incorporating new functionalities to support innovative features to promote more effective and efficient use of unified communications applications.


Theta Lake provides security and compliance for modern collaboration platforms using frictionless partner integrations with Cisco Webex, Microsoft Teams, RingCentral, Slack, Zoom, and more. Using patented machine learning and NLP, Theta Lake detects risks in: video, voice, chat, and document content across what is shared, shown, spoken, and typed. Those risks are surfaced in an AI-assisted, patent-pending review workspace that adds consistency, efficiency, and scale for security and compliance teams. All of this enables organizations to safely realize the full ROI of a collaboration-first workplace while reducing the cost of security and compliance.
