Three Expected Inclusions and Two Surprising Omissions From Last Week's White House AI Announcement

October 4, 2023

Last week the White House announced that it had convened seven leading AI companies to discuss how best to ensure the safe, secure, and transparent use of AI technology. The companies are Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI. The White House also announced that the companies had agreed to eight “voluntary commitments,” reproduced at the end of this post.

Perhaps unsurprisingly, four of the eight commitments focus on security testing, safeguards, and reporting. Two others focus on research into the risks and potential benefits of AI. Of the two remaining, one involves developing a “watermarking” system to identify AI-generated content, and the last commits the companies to “publicly reporting their AI systems’ capabilities, limitations, and areas of appropriate and inappropriate use. This report will cover both security risks and societal risks, such as the effects on fairness and bias.”

What is surprising, however, are two glaring omissions from the set of commitments: ensuring model reliability and limiting the companies’ use of client data. On the first, the CFPB has expressed significant concern over these models’ accuracy and reliability, and the dangers that creates in consumer-facing uses such as direct-to-consumer chatbots. While reliability is perhaps the hardest challenge for AI to tackle, there are some basic ways these companies could work toward improving it. Currently, models like OpenAI’s often do not cite the sources behind a particular answer, making it difficult for the individual user to confirm accuracy and oversee the results. In fact, when asked what is permitted or disallowed by a regulation, for example, OpenAI’s models will often give either a broad source (the CFPB, for example) or an answer explaining that the model draws on a variety of resources and cannot cite specific ones for that particular response. The inability to trace the source and double-check the result will make it very difficult for many industries, such as financial services, to fully take advantage of these models without additional product overlays.

The other surprising omission from the set of commitments relates to the companies’ own use of the data shared with their models. OpenAI, in particular, has faced significant regulatory scrutiny over its use of customer data and inputs to train its models. In fact, the FTC has launched an investigation into whether the company has "engaged in unfair or deceptive data security practices." Perhaps in response to these types of concerns, OpenAI has created options for users of some of its solutions to limit the company’s use of their data to train the model or develop new products, while still allowing it to use the data for some internal monitoring purposes. These options do not currently apply to some of its more popular solutions, such as ChatGPT. Given that data is the lifeblood of these models, it seems appropriate to create additional incentives for all AI companies to clearly disclose how user data is used and to permit clients to limit that use in informed, practical ways.

These issues, reliability and vendor data use, are two of the challenges that Vectari seeks to address through its compliant-by-design solutions. For example, to strengthen reliability, our Policy GPT product will give users the option of receiving the specific citation and source for each answer, allowing them to double-check the result themselves and, where useful, read further into the related policy. Policy GPT will also be closely overseen by regulatory experts, who give the model ongoing feedback to ensure accuracy. These safeguards will make it more feasible for financial institutions to rely on the model to enhance the work of their human workforce, and to explain to auditors and regulators how its answers are derived and used. To tackle the data use issue, Vectari provides solutions that can be fully implemented within an FI’s internal environment, behind its own firewalls, eliminating the need to share data with an outside AI model vendor. FIs can thus rest assured that their data will not be used by the vendor for model training or other purposes that could inadvertently expose confidential information.
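To make the citation idea concrete, here is a minimal, hypothetical sketch of how a retrieval-grounded answering layer could attach a specific, quotable source to every response it returns, refusing to answer when no source is found. This is illustrative only and does not reflect Vectari’s actual Policy GPT implementation: the names (PolicyStore, answer_with_citation, the sample policy passages) are invented for this example, and a simple keyword-overlap lookup stands in for a real retriever and model so the sketch runs with no external dependencies and no data leaving the institution’s environment.

```python
# Hypothetical sketch: answers are grounded in, and cited to, specific policy passages.
# All names and sample passages are illustrative assumptions, not Vectari's product.
from __future__ import annotations
from dataclasses import dataclass


@dataclass
class PolicyPassage:
    source: str  # the policy document and section the text came from
    text: str    # the passage itself, kept verbatim so a user can verify it


class PolicyStore:
    """An in-memory index of policy passages kept inside the FI's own environment."""

    def __init__(self, passages: list[PolicyPassage]):
        self.passages = passages

    def best_match(self, question: str) -> PolicyPassage | None:
        # Naive keyword-overlap scoring, standing in for a real retriever.
        q_terms = set(question.lower().split())
        scored = [(len(q_terms & set(p.text.lower().split())), p) for p in self.passages]
        score, passage = max(scored, key=lambda pair: pair[0])
        return passage if score > 0 else None


def answer_with_citation(store: PolicyStore, question: str) -> dict:
    """Return an answer only when it can be tied to a specific, quotable source."""
    passage = store.best_match(question)
    if passage is None:
        # Refusing to answer without a source is the reliability safeguard:
        # the user is never handed an unverifiable response.
        return {"answer": None, "citation": None,
                "note": "No supporting policy passage found; escalate to a human reviewer."}
    return {"answer": passage.text, "citation": passage.source}


if __name__ == "__main__":
    store = PolicyStore([
        PolicyPassage("Reg E, 12 CFR 1005.6 (illustrative)",
                      "A consumer's liability for unauthorized transfers is limited "
                      "when the consumer gives timely notice."),
        PolicyPassage("Internal AML Policy, sec. 4.2 (illustrative)",
                      "Transactions above the reporting threshold must be escalated "
                      "to the BSA officer."),
    ])
    print(answer_with_citation(store, "What is the consumer liability for unauthorized transfers?"))
```

Because every answer carries the passage and source it came from, a compliance team or auditor can trace the response back to policy text rather than taking the model’s word for it, and because the store lives inside the institution’s own environment, no client data has to be shared with an outside vendor.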

Of course, while these omissions from the “voluntary commitments” show the continued need for companies to implement additional safeguards when using these models, the White House’s effort to partner with industry on the inherent risks is a promising sign: rather than trying to fight the growth of this technology, the government is willing to take a collaborative approach to help realize its potential safely.


---

From: FACT SHEET: Biden-Harris Administration Secures Voluntary Commitments from Leading Artificial Intelligence Companies to Manage the Risks Posed by AI, July 21, 2023, available at www.whitehouse.gov/briefing-room/statements-releases/.

Ensuring Products are Safe Before Introducing Them to the Public

  • The companies commit to internal and external security testing of their AI systems before their release. This testing, which will be carried out in part by independent experts, guards against some of the most significant sources of AI risks, such as biosecurity and cybersecurity, as well as its broader societal effects.
  • The companies commit to sharing information across the industry and with governments, civil society, and academia on managing AI risks. This includes best practices for safety, information on attempts to circumvent safeguards, and technical collaboration.

Building Systems that Put Security First

  • The companies commit to investing in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights. These model weights are the most essential part of an AI system, and the companies agree that it is vital that the model weights be released only when intended and when security risks are considered.
  • The companies commit to facilitating third-party discovery and reporting of vulnerabilities in their AI systems. Some issues may persist even after an AI system is released and a robust reporting mechanism enables them to be found and fixed quickly.

Earning the Public’s Trust

  • The companies commit to developing robust technical mechanisms to ensure that users know when content is AI generated, such as a watermarking system. This action enables creativity with AI to flourish but reduces the dangers of fraud and deception.
  • The companies commit to publicly reporting their AI systems’ capabilities, limitations, and areas of appropriate and inappropriate use. This report will cover both security risks and societal risks, such as the effects on fairness and bias.
  • The companies commit to prioritizing research on the societal risks that AI systems can pose, including on avoiding harmful bias and discrimination, and protecting privacy. The track record of AI shows the insidiousness and prevalence of these dangers, and the companies commit to rolling out AI that mitigates them.   
  • The companies commit to develop and deploy advanced AI systems to help address society’s greatest challenges. From cancer prevention to mitigating climate change to so much in between, AI—if properly managed—can contribute enormously to the prosperity, equality, and security of all.
