

Commerce Proposes Rule to Collect Frontier AI and Computing Cluster Data for National Security Purposes

September 13, 2024

On September 9, 2024, the U.S. Department of Commerce’s Bureau of Industry and Security (“BIS”) issued a proposed rule to establish reporting requirements for companies developing advanced artificial intelligence (“AI”) models and computing clusters (the “Proposed Rule”).[1]  The Proposed Rule aims to ensure U.S. national security and competitiveness in the defense industrial base by requiring individuals and entities subject to the reporting requirements to provide BIS with detailed information on dual-use AI models (often colloquially called “frontier AI”) and large-scale computing clusters.

“As AI is progressing rapidly, it holds both tremendous promise and risk.  This proposed rule would help us keep pace with new developments in AI technology to bolster our national defense and safeguard our national security,” said Secretary of Commerce Gina M. Raimondo.[2]  “Through this proposed reporting requirement, we are developing a system to identify capabilities emerging at the frontier of AI research,” said Assistant Secretary of Commerce for Export Administration Thea D. Rozman Kendler.

Background

On October 30, 2023, President Biden issued Executive Order 14110 titled, “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” which, among other things, directs the Secretary of Commerce to require companies to report on their development, or their intention to develop, potential dual-use foundation AI models[3] and companies, individuals, or other organizations or entities to report on the acquisition, development, or possession of a potential large-scale computing cluster (the “Executive Order”).[4]

This mandate from the Executive Order is grounded largely in the Defense Production Act (“DPA”), under which the President is authorized to take certain actions to ensure that the U.S. industrial base is prepared to supply products and services to support the national defense.[5]  Among such authorities, the DPA provides that the President may obtain from any person such information as may be necessary or appropriate.  This includes the authority to perform industry studies assessing the capabilities of the U.S. industrial base.

According to BIS, “AI models are quickly becoming integral to numerous U.S. industries that are essential to the national defense,” including manufacturers of military equipment that use AI models to “enhance the maneuverability, accuracy, and efficiency of equipment” and manufacturers of signals intelligence that use AI models to improve how satellites, cameras, and radar, among other technologies, capture signals and eliminate noise.  Additionally, BIS reasons that the U.S. government “must minimize the vulnerability of dual-use foundation models to cyberattacks,” including manipulation by hostile actors—a concern likely driven, at least in part, by the risks posed by foreign adversaries and non-state actors.  BIS also communicates that the U.S. government requires information to understand these dual-use foundation models’ safety and reliability to ensure the proper functioning of products relevant to the defense industrial base that integrate these AI models, as well as to combat potential threats that may be posed by foreign adversaries or non-state actors.

The Proposed Rule

The Proposed Rule contemplates a notification and reporting process for (1) companies that develop or intend to develop dual-use foundation AI models and (2) companies, individuals, or other organizations or entities that acquire, develop, or possess large-scale computing clusters that meet the technical conditions issued by the Department of Commerce.  Covered U.S. persons that exceed the technical thresholds captured by the Proposed Rule would be required to report certain information to BIS on a quarterly basis.

Covered U.S. Persons

For purposes of the Proposed Rule, “covered U.S. persons” include any individual U.S. citizen, any lawful permanent resident of the United States (as defined by the Immigration and Nationality Act),[6] any entity, including organizations, companies, and corporations organized under the laws of the United States or any jurisdiction within the United States, or any person located in the United States.

Technical Collection Thresholds

BIS has incorporated the technical conditions as specified in the Executive Order for models and computing clusters that would trigger reporting under the Proposed Rule, and will update such conditions as appropriate.  Currently, BIS is contemplating updated collection parameters as specified below and seeks the public’s comments.

  • Dual-Use Foundation Model Training: Conducting an AI model training run using more than 10^26 computational operations (e.g., integer or floating-point operations).[7]
  • Large-Scale Computing Clusters: Acquiring, developing, or coming into possession of a cluster that has a set of machines transitively connected by networking of over 300 Gbit/s and a theoretical maximum performance greater than 10^20 computational operations (e.g., integer or floating-point operations) per second (OP/s) for AI training, without sparsity.
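Purely as an illustration (the Proposed Rule itself specifies no code), the two thresholds above can be expressed as a simple check; the function names and sample figures below are hypothetical:

```python
# Illustrative sketch of the technical collection thresholds contemplated by
# the Proposed Rule. All names and example values are hypothetical.

TRAINING_OPS_THRESHOLD = 1e26    # computational operations per training run
NETWORK_THRESHOLD_GBITPS = 300   # networking connecting the cluster's machines
CLUSTER_OPS_THRESHOLD = 1e20     # theoretical max OP/s for AI training, without sparsity

def training_run_reportable(total_ops: float) -> bool:
    """A training run using more than 10^26 operations would trigger reporting."""
    return total_ops > TRAINING_OPS_THRESHOLD

def cluster_reportable(network_gbitps: float, peak_ops_per_sec: float) -> bool:
    """A cluster networked at over 300 Gbit/s with theoretical peak performance
    above 10^20 OP/s (without sparsity) would trigger reporting."""
    return (network_gbitps > NETWORK_THRESHOLD_GBITPS
            and peak_ops_per_sec > CLUSTER_OPS_THRESHOLD)

# Hypothetical examples:
print(training_run_reportable(5e25))  # False: 5 x 10^25 ops is below 10^26
print(cluster_reportable(400, 3e20))  # True: both cluster conditions exceeded
```

Note that both cluster conditions must be met together; a fast network alone, or high peak performance alone, would not exceed the contemplated cluster threshold.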

BIS currently projects that no more than 15 companies will be captured by the technical collection thresholds for models and computing clusters, all of which BIS says are “well-resourced technology companies.”[8]  Moreover, BIS reasons that exceeding the technical collection thresholds under the Proposed Rule requires “access to vast computing power,” and that the minimum computational threshold that would trigger a reporting requirement currently exceeds that of all or virtually all models in use.

Quarterly Notification[9]

The Proposed Rule contemplates requiring all covered U.S. persons with models or clusters exceeding the technical collection thresholds to provide notification to BIS via email[10] on a quarterly basis[11] in order to “facilitate respondent planning and ease respondent burden.”  More specifically, the reporting requirement turns on the presence of “applicable activity,” defined to include “developing, or having the intent to develop within the next six months, an AI model or computing cluster” above the technical threshold.

Covered U.S. persons with applicable activities to report need only submit a notification to BIS, which may prompt detailed follow-up questions that respondents must answer within 30 days of receiving the request.  Information submitted under the Proposed Rule must conform to the manner prescribed by BIS in instructions sent to respondents after BIS receives the notification.  After the first report is made, the covered U.S. person must continue to file quarterly reports sharing any additions, updates, or changes to the information in its last report.  If, in a given quarter, a covered U.S. person that had previously notified BIS of applicable activity has no additions, updates, or changes to report, it would only be required to affirm that fact to BIS.  After a covered U.S. person files seven consecutive quarterly reports stating that there is no new information to share on the applicable activity, the reporting obligation terminates.[12]

The types of questions BIS may send to respondents include:

  • Any ongoing or planned activities related to training, developing, or producing dual-use foundation models, including the physical and cybersecurity protections taken to assure the integrity of that training process against sophisticated threats;
  • The ownership and possession of the model weights of any dual-use foundation models, and the physical and cybersecurity measures taken to protect those model weights;
  • The results of any developed dual-use foundation model’s performance in relevant AI red-team testing, including a description of any associated measures the company has taken to meet safety objectives, such as mitigations to improve performance on these red-team tests and strengthen overall model security; and
  • Other information pertaining to the safety and reliability of dual-use foundation models, or activities or risks that present concerns to U.S. national security.

BIS is also seeking comments on the frequency of the notifications, as well as alternatives for achieving timely reporting of the required information.

Collection and Storage

BIS recognizes the extremely sensitive nature of the information it requires to be reported.  The Proposed Rule therefore makes explicit reference to protections for such information: surveys and assessments of critical U.S. industrial sectors and technologies are deemed business confidential, and under the DPA, BIS is prohibited from publishing or disclosing such information unless the President determines that withholding it is contrary to the national defense.

Conclusion

The Proposed Rule comes on the heels of the conclusion of a pilot survey program first conducted by BIS in early 2024.  According to BIS, the collected data will provide the U.S. government with accurate and current information needed to maintain the competitiveness of the U.S. industrial base and provide for the national defense.

The reporting requirements are not yet in effect.  BIS is soliciting public comment on the Proposed Rule until October 22, 2024.

Annex: Definitions For Purposes of the Reporting Requirements

  • AI Red-Teaming: A structured testing effort to find flaws and vulnerabilities in an AI system, often in a controlled environment and in collaboration with developers of AI. In the context of AI, red-teaming is most often performed by dedicated “red teams” that adopt adversarial methods to identify flaws and vulnerabilities, such as harmful or discriminatory outputs from an AI system, unforeseen or undesirable system behaviors, limitations, or potential risks associated with the misuse of the system.
  • AI Model: A component of an information system that implements AI technology and uses computational, statistical, or machine-learning techniques to produce outputs from a given set of inputs.
  • AI System: Any data system, software, hardware, application, tool, or utility that operates in whole or in part using AI.
  • Company: A corporation, partnership, association, or any other organized group of persons, or legal successor or representative thereof. This definition is not limited to commercial or for-profit organizations.  For example, the term “any other organized group of persons” may encompass academic institutions, research centers, or any group of persons who are organized in some manner.  The term “corporation” is not limited to publicly traded corporations or corporations that exist for the purpose of making a profit.
  • Covered U.S. Person: Any individual U.S. citizen, lawful permanent resident of the United States as defined by the Immigration and Nationality Act, entity—including organizations, companies, and corporations—organized under the laws of the United States or any jurisdiction within the United States (including foreign branches), or any person (individual) located in the United States.
  • Dual-Use Foundation Model: An AI model that is (A) trained on broad data; (B) generally uses self-supervision; (C) contains at least tens of billions of parameters; (D) is applicable across a wide range of contexts; and (E) exhibits, or could be easily modified to exhibit, high levels of performance at tasks that pose a serious risk to security, national economic security, national public health or safety, or any combination of those matters, such as by: (1) substantially lowering the barrier of entry for non-experts to design, synthesize, acquire, or use chemical, biological, radiological, or nuclear (CBRN) weapons; (2) enabling powerful offensive cyber operations through automated vulnerability discovery and exploitation against a wide range of potential targets of cyberattacks; or (3) permitting the evasion of human control or oversight through means of deception or obfuscation. Models meet this definition even if they are provided to end users with technical safeguards that attempt to prevent users from taking advantage of the relevant unsafe capabilities.
  • Knowledge: Includes not only positive knowledge that the circumstance exists or is substantially certain to occur, but also an awareness of a high probability of its existence or future occurrence. Such awareness is inferred from evidence of the conscious disregard of facts known to a person and is also inferred from a person’s willful avoidance of facts.  See 15 C.F.R. 722.1, available here.
  • Large-Scale Computing Cluster: A cluster of computing hardware that meets the technical thresholds provided by BIS.
  • Model Weights: The numerical parameters used in the layers of a neural network.
  • Training or Training Runs: Any process by which an AI model learns from data using computing power. Training includes, but is not limited to, techniques employed during pre-training, such as unsupervised learning, and during fine-tuning, such as reinforcement learning from human feedback.

*       *       *

 

[1]       U.S. Dep’t of Commerce, Bureau of Industry and Security, Establishment of Reporting Requirements for the Development of Advanced Artificial Intelligence Models and Computing Clusters, 15 C.F.R. Part 702 (Sept. 11, 2024), available here.

[2]       Id.

[3]       Under Executive Order 14110, a “dual-use foundation model” is one that is “trained on broad data; generally uses self-supervision; contains at least tens of billions of parameters; is applicable across a wide range of contexts; and that exhibits, or could be easily modified to exhibit, high levels of performance at tasks that pose a serious risk to security, national economic security, national public health or safety, or any combination of those matters.”

[4]       The White House, Executive Order 14110 on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (Oct. 30, 2023), available here.

[5]       See Defense Production Act, 50 U.S.C. 4501 et seq.

[6]       See Immigration and Nationality Act, 8 U.S.C. 1101 et seq.

[7]       Models trained primarily on biological sequence data are subject to a lower threshold of 10^23 computational operations and will be addressed in a separate survey.

[8]       BIS estimates the specific survey required by the Proposed Rule will have an estimated burden of approximately 5,000 hours per year aggregated across all new respondents.

[9]       See Annex for applicable defined terms.

[10]     Covered U.S. persons are required to submit a notification to BIS by emailing ai_reporting@bis.doc.gov.

[11]     Quarterly notification dates are as follows: Q1: April 15; Q2: July 15; Q3: October 15; Q4: January 15.

[12]     If a covered U.S. person submits affirmations of no applicable activities for seven consecutive quarters, it need not provide BIS with any further affirmation until it again has applicable activities to report.

© 2024 Paul, Weiss, Rifkind, Wharton & Garrison LLP
