In the age of artificial intelligence and machine learning, the use of large language models like ChatGPT presents both opportunities and compliance challenges for investment adviser firms. This post discusses a key risk associated with an investment adviser firm’s supervised persons using the consumer version of ChatGPT, specifically the potential for an investment adviser to inadvertently share non-public personal information (“NPPI”) of clients, and offers several best practices for mitigating this privacy risk.
Does ChatGPT Share User Data with Third Parties?
OpenAI discloses that it shares content from ChatGPT conversations, but only with a select group of trusted service providers that assist in delivering its services.
- Third-Party Sharing: OpenAI shares content with a select group of trusted service providers to help provide its services. These providers are bound by strict confidentiality and security obligations.
- No Marketing or Advertising: OpenAI explicitly states that it does not use or share user content for marketing or advertising purposes.
- Data Storage: Content is stored on OpenAI’s systems and those of its trusted service providers, both in the U.S. and around the world.
- Human Access: A limited number of authorized OpenAI personnel and specialized third-party contractors may view and access user content. This is strictly controlled and is only for specific reasons such as investigating abuse, providing support, complying with legal obligations, or fine-tuning models.
Does the Non-API Consumer Version of ChatGPT Train on a User’s Conversations?
According to its Data Usage for Consumer Services FAQ, for non-API consumer products like ChatGPT, OpenAI may use content such as prompts, responses and uploaded images to improve the performance of its models. This could include the conversations that an investment adviser has with the consumer version of ChatGPT. This leads to the question of whether any client NPPI shared by a user of the consumer version of ChatGPT will subsequently be revealed by ChatGPT in a conversation with another user. In other words, would ChatGPT use confidential client information to improve its answer for another user? Based upon our limited review, this question does not appear to be answered by OpenAI. However, OpenAI explains that users of the consumer version of ChatGPT can opt out of having their content used to improve OpenAI’s services at any time by filling out a specific form. This opt-out will apply on a going-forward basis only.
Best Practices for an Investment Adviser Using ChatGPT
- Prohibit Entering Sensitive Information: Make it a policy of your investment adviser firm that non-public personal information of a client should not be entered into the non-API consumer version of ChatGPT or any other large language models.
- Opt Out of Data Usage: If your investment adviser firm still wishes to use the non-API consumer version of ChatGPT for general purposes, require each user to opt out of data usage for training the model.
- Adopt the API or Enterprise Version of ChatGPT: To protect trade secrets and the NPPI of its clients and improve its supervisory systems, an investment adviser may want to consider adopting an enterprise or API version of ChatGPT. OpenAI explains that “[it] does not use data submitted to and generated by our API to train OpenAI models or improve OpenAI’s service offering.”
- Due Diligence: When conducting due diligence of a large language model (“LLM”) or third-party software that has an LLM plug-in, investigate the privacy and data protections afforded to users.
- Regular Training and Audits: Regularly train your investment adviser firm’s supervised persons on the importance of data privacy and conduct internal audits to ensure compliance with GLBA (via Regulation S-P for SEC-registered firms or the FTC’s Safeguards Rule for state-registered firms).
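A firm can reinforce the first best practice above with a technical control that screens prompts for obvious NPPI before they ever leave the firm’s environment. The sketch below is a hypothetical, minimal pre-submission filter of our own devising; it is not an OpenAI feature, the pattern list is illustrative only, and a production control would need far broader coverage (names, addresses, dates of birth, etc.) and review by the firm’s compliance and IT staff.

```python
import re

# Hypothetical patterns for a few obvious NPPI formats. A real deployment
# would require a much more comprehensive, compliance-reviewed list.
NPPI_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "ACCOUNT": re.compile(r"\b(?:acct|account)\s*#?\s*\d{6,}\b", re.IGNORECASE),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def redact_nppi(prompt: str) -> str:
    """Replace likely NPPI with labeled placeholders before a prompt
    is submitted to any external large language model."""
    for label, pattern in NPPI_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label}]", prompt)
    return prompt

print(redact_nppi("Summarize the plan for a client with SSN 123-45-6789 and acct # 00112233."))
```

A filter like this is a backstop, not a substitute for the written policy: pattern matching will miss free-form NPPI (e.g., a client’s name or health details), so supervised persons must still be trained never to enter client information in the first place.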
While ChatGPT and similar artificial intelligence technologies offer innovative ways to interact and generate content, they must be used cautiously and in full compliance with existing regulations. By understanding how these large language models use data and by implementing firm-wide best practices, investment adviser firms can mitigate risks and operate within the bounds of the regulatory requirements.
This post is a brief summary which is general in nature and offered only for educational purposes. This post is based upon a limited review of FAQs posted by OpenAI; RIA Compliance Consultants, Inc. did not independently verify any statements by OpenAI. This post should not be considered a comprehensive review or analysis of this development. This communication is not intended to constitute compliance consulting advice or apply to any particular investment adviser firm’s specific situation without further analysis. This post is not a safe harbor or a legal opinion. RIA Compliance Consultants, Inc. is not a technology or artificial intelligence expert. The reader should consult with his or her information technology and compliance staff and consultants. This post is not a substitute for reviewing the complete and most current privacy guidance provided by OpenAI related to ChatGPT. The information in this regulatory alert may become out of date.