2025 SESSION
INTRODUCED
25104439D
HOUSE BILL NO. 2094
Offered January 13, 2025
Prefiled January 7, 2025
A BILL to amend the Code of Virginia by adding in Title 59.1 a chapter numbered 58, consisting of sections numbered 59.1-607 through 59.1-613, relating to high-risk artificial intelligence; development, deployment, and use; civil penalties.
—————
Patrons—Maldonado, Glass, Hayes, Shin, Anthony, Askew, Callsen, Clark, Cohen, Cole, Convirs-Fowler, Feggans, Henson, Hernandez, Herring, Keys-Gamarra, Laufer, LeVere Bolling, McClure, Price, Seibold, Simonds and Tran
—————
Referred to Committee on Communications, Technology and Innovation
—————
Be it enacted by the General Assembly of Virginia:
1. That the Code of Virginia is amended by adding in Title 59.1 a chapter numbered 58, consisting of sections numbered 59.1-607 through 59.1-613, as follows:
CHAPTER 58.
HIGH-RISK ARTIFICIAL INTELLIGENCE DEVELOPER AND DEPLOYER ACT.
§ 59.1-607. Definitions.
As used in this chapter, unless the context requires a different meaning:
"Algorithmic discrimination" means the use of an artificial intelligence system that results in an unlawful differential treatment or impact that disfavors an individual or group of individuals on the basis of their actual or perceived age, color, disability, ethnicity, genetic information, limited proficiency in the English language, national origin, race, religion, reproductive health, sex, sexual orientation, veteran status, or other classification protected under state or federal law. "Algorithmic discrimination" does not include (i) the offer, license, or use of a high-risk artificial intelligence system by a developer or deployer for the sole purpose of the developer's or deployer's self-testing to identify, mitigate, or prevent discrimination or otherwise ensure compliance with state and federal law; (ii) the expansion of an applicant, customer, or participant pool to increase diversity or redress historical discrimination; or (iii) an act or omission by or on behalf of a private club or other establishment not in fact open to the public, as set forth in Title II of the Civil Rights Act of 1964, 42 U.S.C. § 2000a(e), as amended from time to time.
"Artificial intelligence system" means any machine learning-based system that, for any explicit or implicit objective, infers from the inputs such system receives how to generate outputs, including content, decisions, predictions, and recommendations, that can influence physical or virtual environments.
"Consequential decision" means any decision that has a material legal, or similarly significant, effect on the provision or denial to any consumer of, or the cost or terms of, (i) parole, probation, a pardon, or any other release from incarceration or supervision, (ii) education enrollment or an education opportunity, (iii) employment, (iv) a financial or lending service, (v) health care services, (vi) housing, (vii) insurance, or (viii) a legal service.
"Consumer" means a natural person who is a resident of the Commonwealth and is acting only in an individual or household context. "Consumer" does not include a natural person acting in a commercial or employment context.
"Deployer" means any person doing business in the Commonwealth that deploys or uses a high-risk artificial intelligence system to make a consequential decision in the Commonwealth.
"Developer" means any person doing business in the Commonwealth that develops or intentionally and substantially modifies a high-risk artificial intelligence system that is offered, sold, leased, given, or otherwise provided to consumers in the Commonwealth.
"Distributor" means a person doing business in the Commonwealth, other than a developer, that makes an artificial intelligence system available in the market.
"Foundation model" means a machine learning model that (i) is trained on broad data at scale, (ii) is designed for generality of output, and (iii) can be adapted to a wide range of distinctive tasks.
"General-purpose artificial intelligence model" means any form of artificial intelligence system that (i) displays significant generality, (ii) is capable of competently performing a wide range of distinct tasks, and (iii) can be integrated into a variety of downstream applications or systems. "General-purpose artificial intelligence model" does not include any artificial intelligence model that is used for development, prototyping, and research activities before such artificial intelligence model is released on the market.
"Generative artificial intelligence" means artificial intelligence capable of emulating the structure and characteristics of input data in order to generate derived synthetic content, including audio, images, text, and videos.
"Generative artificial intelligence system" means any artificial intelligence system or service that incorporates generative artificial intelligence.
"High-risk artificial intelligence system" means any artificial intelligence system that is specifically intended to autonomously make, or be a substantial factor in making, a consequential decision. A system or service is not a "high-risk artificial intelligence system" if it is intended to (i) perform a narrow procedural task, (ii) improve the result of a previously completed human activity, (iii) detect any decision-making patterns or any deviations from pre-existing decision-making patterns, or (iv) perform a preparatory task to an assessment relevant to a consequential decision. "High-risk artificial intelligence system" does not include any of the following technologies:
1. Anti-fraud technology that does not use facial recognition technology;
2. Anti-malware technology;
3. Anti-virus technology;
4. Artificial intelligence-enabled video games;
5. Calculators;
6. Cybersecurity technology;
7. Databases;
8. Data storage;
9. Firewall technology;
10. Internet domain registration;
11. Internet website loading;
12. Networking;
13. Spam and robocall filtering;
14. Spell-checking technology;
15. Spreadsheets;
16. Web caching;
17. Web hosting or any similar technology; or
18. Technology that communicates with consumers in natural language for the purpose of providing users with information, making referrals or recommendations, and answering questions and is subject to an acceptable use policy that prohibits generating content that is discriminatory or harmful.
"Integrator" means a person that knowingly integrates an artificial intelligence system into a software application and places such software application on the market. An "integrator" does not include a person offering information technology infrastructure.
"Intentional and substantial modification" means any deliberate change made to (i) an artificial intelligence system that results in any new reasonably foreseeable risk of algorithmic discrimination or (ii) a general-purpose artificial intelligence model that affects compliance of the general-purpose artificial intelligence model, materially changes the purpose of the general-purpose artificial intelligence model, or results in any new reasonably foreseeable risk of algorithmic discrimination. "Intentional and substantial modification" does not include any change made to a high-risk artificial intelligence system, or the performance of a high-risk artificial intelligence system, if (a) the high-risk artificial intelligence system continues to learn after such high-risk artificial intelligence system is offered, sold, leased, licensed, given, or otherwise made available to a deployer, or deployed, and (b) such change (1) is made to such high-risk artificial intelligence system as a result of any learning described in clause (a), and (2) was predetermined by the deployer or the third party contracted by the deployer and concluded and included within the initial impact assessment of such high-risk artificial intelligence system as required in § 59.1-609.
"Machine learning" means the development of algorithms to build data-derived statistical models that are capable of drawing inferences from previously unseen data without explicit human instruction.
"Person" includes any individual, corporation, partnership, association, cooperative, limited liability company, trust, joint venture, or any other legal or commercial entity and any successor, representative, agent, agency, or instrumentality thereof. "Person" does not include any government or political subdivision.
"Principal basis" means the use of an output of a high-risk artificial intelligence system to make a decision without (i) human review, oversight, involvement, or intervention or (ii) meaningful consideration by a human.
"Red-teaming" means an exercise that is conducted to identify the potential adverse behaviors or outcomes of an artificial intelligence system, identify how such behaviors or outcomes occur, and stress test the safeguards against such behaviors or outcomes.
"Significant update" means any new version, new release, or other update to a high-risk artificial intelligence system that results in significant changes to such high-risk artificial intelligence system's use case or key functionality and that results in any new or reasonably foreseeable risk of algorithmic discrimination.
"Social media platform" means an electronic medium or service where users may create, share, or view user-generated content, including videos, photographs, blogs, podcasts, messages, emails, or website profiles or locations, and create a personal account.
"Substantial factor" means a factor that is (i) the principal basis for making a consequential decision, (ii) capable of altering the outcome of a consequential decision, and (iii) generated by an artificial intelligence system. "Substantial factor" includes any use of an artificial intelligence system to generate any content, decision, prediction, or recommendation concerning a consumer that is used as the principal basis to make a consequential decision concerning the consumer.
"Synthetic digital content" means any digital content, including any audio, image, text, or video, that is produced or manipulated by a generative artificial intelligence system, including a general-purpose artificial intelligence model.
"Trade secret" means information, including a formula, pattern, compilation, program, device, method, technique, or process, that (i) derives independent economic value, actual or potential, from not being generally known to, and not being readily ascertainable by proper means by, other persons who can obtain economic value from its disclosure or use and (ii) is the subject of efforts that are reasonable under the circumstances to maintain its secrecy.
§ 59.1-608. Operating standards for developers of high-risk artificial intelligence systems.
A. No developer of a high-risk artificial intelligence system shall offer, sell, lease, give, or otherwise provide to a deployer, or other developer, a high-risk artificial intelligence system unless the developer makes available to the deployer or other developer:
1. A statement disclosing the intended uses of such high-risk artificial intelligence system;
2. Documentation disclosing the following:
a. The known or reasonably foreseeable limitations of such high-risk artificial intelligence system, including any and all known or reasonably foreseeable risks of algorithmic discrimination arising from the intended uses of such high-risk artificial intelligence system;
b. The purpose of such high-risk artificial intelligence system and the intended benefits and uses of such high-risk artificial intelligence system;
c. A summary describing how such high-risk artificial intelligence system was evaluated for performance before such high-risk artificial intelligence system was licensed, sold, leased, given, or otherwise made available to a deployer;
d. The measures the developer has taken to mitigate reasonably foreseeable risks of algorithmic discrimination that the developer knows arise from the deployment or use of such high-risk artificial intelligence system; and
e. How an individual can use such high-risk artificial intelligence system and monitor the performance of such high-risk artificial intelligence system for any risk of algorithmic discrimination;
3. Documentation describing (i) how the high-risk artificial intelligence system was evaluated for performance and for mitigation of algorithmic discrimination before such system was made available to the deployer; (ii) the data governance measures used to cover the training data sets and the measures used to examine the suitability of data sources, possible biases of data sources, and appropriate mitigation; (iii) the intended outputs of the high-risk artificial intelligence system; (iv) the measures the developer has taken to mitigate known or reasonably foreseeable risks of algorithmic discrimination that may arise from the reasonably foreseeable deployment of the high-risk artificial intelligence system; and (v) how the high-risk artificial intelligence system should be used, not be used, and be monitored by an individual when such system is used to make, or is a substantial factor in making, a consequential decision; and
4. Any additional documentation that is reasonably necessary to assist the deployer in understanding the outputs and monitoring performance of the high-risk artificial intelligence system for risks of algorithmic discrimination.
B. Each developer that offers, sells, leases, gives, or otherwise makes available to a deployer a high-risk artificial intelligence system shall make available to the deployer, to the extent feasible and necessary, information and documentation through artifacts such as model cards or impact assessments, and such documentation and information shall enable the deployer or a third party contracted by the deployer to complete an impact assessment as required in § 59.1-609.
C. A developer that also serves as a deployer for any high-risk artificial intelligence system shall not be required to generate the documentation required by this section unless such high-risk artificial intelligence system is provided to an unaffiliated entity acting as a deployer or as otherwise required by law.
D. Nothing in this section shall be construed to require a developer to disclose any trade secret.
E. High-risk artificial intelligence systems that are in conformity with the latest version of the Artificial Intelligence Risk Management Framework published by the National Institute of Standards and Technology, Standard ISO/IEC 42001 of the International Organization for Standardization, or another nationally or internationally recognized risk management framework for artificial intelligence systems, or parts thereof, shall be presumed to be in conformity with related requirements set out in this section and in associated regulations.
F. For any disclosure required pursuant to this section, each developer shall, no later than 90 days after the developer performs an intentional and substantial modification to any high-risk artificial intelligence system, update such disclosure as necessary to ensure that such disclosure remains accurate.
G. 1. Each developer of a high-risk artificial intelligence system, including a general-purpose artificial intelligence model, that generates or manipulates synthetic digital content shall ensure that the outputs of such high-risk artificial intelligence system are marked as synthetic digital content, in a manner that is detectable by consumers and complies with any applicable accessibility requirements, no later than the time that consumers who did not create such outputs first interact with or are exposed to such outputs.
2. If such synthetic digital content is in an audio, image, or video format that forms part of an evidently artistic, creative, satirical, fictional, or analogous work or program, such requirement for marking outputs of high-risk artificial intelligence systems pursuant to subdivision 1 shall be limited to a manner that does not hinder the display or enjoyment of such work or program.
3. The marking of outputs required by subdivision 1 shall not apply to (i) synthetic digital content that consists exclusively of text, is published to inform the public on any matter of public interest, or is unlikely to mislead a reasonable person consuming such synthetic digital content or (ii) the outputs of a high-risk artificial intelligence system that performs an assistive function for standard editing, does not substantially alter the input data provided by the developer, or is used to detect, prevent, investigate, or prosecute any crime as authorized by law.
§ 59.1-609. Operating standards for deployers of high-risk artificial intelligence systems.
A. Each deployer of a high-risk artificial intelligence system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination. In any enforcement action brought by the Attorney General pursuant to § 59.1-613, there shall be a rebuttable presumption that a deployer of a high-risk artificial intelligence system used reasonable care as required by this subsection if the deployer complied with the requirements of this section.
B. No deployer shall deploy or use a high-risk artificial intelligence system to make a consequential decision unless the deployer has designed and implemented a risk management policy and program for such high-risk artificial intelligence system. The risk management policy shall specify the principles, processes, and personnel that the deployer shall use in maintaining the risk management program to identify, mitigate, and document any risk of algorithmic discrimination that is a reasonably foreseeable consequence of deploying or using such high-risk artificial intelligence system to make a consequential decision. Each risk management policy and program designed, implemented, and maintained pursuant to this subsection shall be (i) at least as stringent as the latest version of the Artificial Intelligence Risk Management Framework published by the National Institute of Standards and Technology, Standard ISO/IEC 42001 of the International Organization for Standardization, or another nationally or internationally recognized risk management framework for artificial intelligence systems and (ii) reasonable considering (a) the size and complexity of the deployer; (b) the nature and scope of the high-risk artificial intelligence systems deployed and used by the deployer, including the intended uses of such high-risk artificial intelligence systems; (c) the sensitivity and volume of data processed in connection with the high-risk artificial intelligence systems deployed and used by the deployer; and (d) the cost to the deployer to implement and maintain such risk management program.
C. Except as provided in this subsection, no deployer shall deploy or use a high-risk artificial intelligence system to make a consequential decision unless the deployer has completed an impact assessment for such high-risk artificial intelligence system. The deployer shall complete an impact assessment for a high-risk artificial intelligence system (i) before the deployer initially deploys such high-risk artificial intelligence system and (ii) not later than 90 days after each significant update to such high-risk artificial intelligence system is made available.
Each impact assessment completed pursuant to this subsection shall include, at a minimum:
1. A statement by the deployer disclosing (i) the purpose, intended use cases and deployment context of, and benefits afforded by the high-risk artificial intelligence system and (ii) whether the deployment or use of the high-risk artificial intelligence system poses any known or reasonably foreseeable risk of algorithmic discrimination and, if so, (a) the nature of such algorithmic discrimination and (b) the steps that have been taken, to the extent feasible, to mitigate such risk;
2. For each post-deployment impact assessment completed pursuant to this subsection, whether the intended use cases of the high-risk artificial intelligence system as updated were consistent with, or varied from, the developer's intended uses of such high-risk artificial intelligence system;
3. A description of (i) the categories of data the high-risk artificial intelligence system processes as inputs and (ii) the outputs such high-risk artificial intelligence system produces;
4. If the deployer used data to customize the high-risk artificial intelligence system, an overview of the categories of data the deployer used to customize such high-risk artificial intelligence system;
5. A list of any metrics used to evaluate the performance and known limitations of the high-risk artificial intelligence system;
6. A description of any transparency measures taken concerning the high-risk artificial intelligence system, including any measures taken to disclose to a consumer that such high-risk artificial intelligence system is in use when such high-risk artificial intelligence system is in use; and
7. A description of any post-deployment monitoring performed and user safeguards provided concerning such high-risk artificial intelligence system, including any oversight process established by the deployer to address issues arising from deployment or use of such high-risk artificial intelligence system as such issues arise.
A single impact assessment may address a comparable set of high-risk artificial intelligence systems deployed or used by a deployer. High-risk artificial intelligence systems that are in conformity with the latest version of the Artificial Intelligence Risk Management Framework published by the National Institute of Standards and Technology, Standard ISO/IEC 42001 of the International Organization for Standardization, or another nationally or internationally recognized risk management framework for artificial intelligence systems, or parts thereof, shall be presumed to be in conformity with related requirements set out in this section and in associated regulations. If a deployer completes an impact assessment for the purpose of complying with another applicable law or regulation, such impact assessment shall be deemed to satisfy the requirements established in this subsection if such impact assessment is reasonably similar in scope and effect to the impact assessment that would otherwise be completed pursuant to this subsection. A deployer that completes an impact assessment pursuant to this subsection shall maintain such impact assessment and all records concerning such impact assessment for three years.
D. Not later than the time that a deployer uses a high-risk artificial intelligence system to interact with a consumer, the deployer shall disclose to the consumer that the consumer is interacting with an artificial intelligence system and shall disclose (i) the purpose of such high-risk artificial intelligence system, (ii) the nature of such system, (iii) the nature of the consequential decision, (iv) the contact information for the deployer, and (v) a plain-language description of such artificial intelligence system.
If such consequential decision is adverse to such consumer, the deployer shall provide to the consumer (a) a statement disclosing the principal reason or reasons for the consequential decision, including (1) the degree to which and manner in which the high-risk artificial intelligence system contributed to the consequential decision, (2) the type of data that was processed by such system in making the consequential decision, and (3) the sources of such data; (b) an opportunity to correct any incorrect personal data that the high-risk artificial intelligence system processed in making, or as a substantial factor in making, the consequential decision; and (c) an opportunity to appeal such adverse consequential decision concerning the consumer arising from the deployment of such system. Any such appeal shall allow for human review, if technically feasible, unless providing the opportunity for appeal is not in the best interest of the consumer, including instances in which any delay might pose a risk to the life or safety of such consumer.
E. Each deployer shall make available, in a manner that is clear and readily available, a statement summarizing how such deployer manages any reasonably foreseeable risk of algorithmic discrimination that may arise from the use or deployment of the high-risk artificial intelligence system.
F. For any disclosure required pursuant to this section, each deployer shall, no later than 90 days after the deployer performs an intentional and substantial modification to any high-risk artificial intelligence system, update such disclosure as necessary to ensure that such disclosure remains accurate.
G. Any deployer who performs an intentional and substantial modification to any high-risk artificial intelligence system shall comply with the documentation and disclosure requirements for developers pursuant to subsections A through F of § 59.1-608.
§ 59.1-610. Operating standards for integrators of high-risk artificial intelligence systems.
Each integrator of a high-risk artificial intelligence system shall develop and adopt an acceptable use policy, which shall limit the use of the high-risk artificial intelligence system to mitigate known risks of algorithmic discrimination.
Each integrator of a high-risk artificial intelligence system shall provide to the deployer clear, conspicuous notice of (i) the name or other identifier of the high-risk artificial intelligence system integrated into a software application provided to the deployer; (ii) the name and contact information of the developer of the high-risk artificial intelligence system integrated into a software application provided to the deployer; (iii) whether the integrator has adjusted the model weights of the high-risk artificial intelligence system integrated into the software application by exposing it to additional data, a summary of the adjustment process, and how such process and the resulting system were evaluated for risk of algorithmic discrimination; (iv) a summary of any other non-substantial modifications made by the integrator; and (v) the integrator's acceptable use policy.
§ 59.1-611. Operating standards for distributors of high-risk artificial intelligence systems.
Each distributor of a high-risk artificial intelligence system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination. If a distributor of a high-risk artificial intelligence system considers or has reason to consider that a high-risk artificial intelligence system is not in compliance with any requirement of this chapter, it shall immediately withdraw, disable, or recall, as appropriate, the high-risk artificial intelligence system from the market until such system has been brought into compliance with the requirements of this chapter. The distributor shall inform the developers of the high-risk artificial intelligence system concerned and, where applicable, the deployer of any such system's noncompliance with this chapter and the withdrawal, disablement, or recall of such system.
§ 59.1-612. Exemptions.
A. Nothing in this chapter shall be construed to restrict a developer's, integrator's, distributor's or deployer's ability to (i) comply with federal, state, or municipal ordinances or regulations; (ii) comply with a civil, criminal, or regulatory inquiry, investigation, subpoena, or summons by federal, state, local, or other governmental authorities; (iii) cooperate with law-enforcement agencies concerning conduct or activity that the developer, integrator, distributor, or deployer reasonably and in good faith believes may violate federal, state, or local law, ordinances, or regulations; (iv) investigate, establish, exercise, prepare for, or defend legal claims; (v) provide a product or service specifically requested by a consumer; (vi) perform under a contract to which a consumer is a party, including fulfilling the terms of a written warranty; (vii) take steps at the request of a consumer prior to entering into a contract; (viii) take immediate steps to protect an interest that is essential for the life or physical safety of the consumer or another individual; (ix) prevent, detect, protect against, or respond to security incidents, identity theft, fraud, harassment, or malicious or deceptive activities; (x) take actions to prevent, detect, protect against, report, or respond to the production, generation, incorporation, or synthesization of child sex abuse material, or any illegal activity, preserve the integrity or security of systems, or investigate, report, or prosecute those responsible for any such action; (xi) engage in public or peer-reviewed scientific or statistical research in the public interest that adheres to all other applicable ethics and privacy laws and is approved, monitored, and governed by an institutional review board that determines, or similar independent oversight entities that determine, (a) that the expected benefits of the research outweigh the risks associated with such research and (b) whether the developer, integrator, distributor, or deployer has implemented reasonable safeguards to mitigate the risks associated with such research; (xii) assist another developer, integrator, distributor, or deployer with any of the obligations imposed by this chapter; or (xiii) take any action that is in the public interest in the areas of public health, community health, or population health, but solely to the extent that such action is subject to suitable and specific measures to safeguard the public.
B. The obligations imposed on developers, integrators, distributors, or deployers by this chapter shall not restrict a developer's or deployer's ability to (i) conduct internal research to develop, improve, or repair products, services, or technologies; (ii) effectuate a product recall; (iii) identify and repair technical errors that impair existing or intended functionality; or (iv) perform internal operations that are reasonably aligned with the expectations of the consumer or reasonably anticipated based on the consumer's existing relationship with the developer, integrator, or deployer.
C. Nothing in this chapter shall be construed to impose any obligation on a developer, integrator, distributor, or deployer to disclose trade secrets or information protected from disclosure by state or federal law.
D. The obligations imposed on developers, integrators, distributors, or deployers by this chapter shall not apply where compliance by the developer, integrator, distributor, or deployer with such obligations would violate an evidentiary privilege under federal law or the laws of the Commonwealth.
E. Nothing in this chapter shall be construed to impose any obligation on a developer, integrator, distributor, or deployer that adversely affects the legally protected rights or freedoms of any person, including the rights of any person to freedom of speech or freedom of the press guaranteed in the First Amendment to the Constitution of the United States or under the Virginia Human Rights Act (§ 2.2-3900 et seq.).
F. The obligations imposed on developers, integrators, distributors, or deployers by this chapter shall not apply to any artificial intelligence system that is acquired by or for the federal government or any federal agency or department, including the U.S. Department of Commerce, the U.S. Department of Defense, and the National Aeronautics and Space Administration, unless such artificial intelligence system is a high-risk artificial intelligence system that is used to make, or is a substantial factor in making, a decision concerning employment or housing.
G. For the purposes of this subsection:
"Affiliate" means the same as that term is defined in § 6.2-1800.
"Bank" means the same as that term is defined in § 6.2-800.
"Credit union" means the same as that term is defined in § 6.2-1300.
"Federal credit union" means a credit union duly organized under federal law.
"Out-of-state bank" means the same as that term is defined in § 6.2-836.
"Out-of-state credit union" means a credit union organized and doing business in another state.
"Subsidiary" means the same as that term is defined in § 6.2-700.
The obligations imposed on developers, integrators, distributors, or deployers by this chapter shall be deemed satisfied for any bank, out-of-state bank, credit union, federal credit union, out-of-state credit union, or any affiliate or subsidiary thereof if such bank, out-of-state bank, credit union, federal credit union, out-of-state credit union, or affiliate or subsidiary is subject to examination by any state or federal prudential regulator under any published guidance or regulations that apply to the use of high-risk artificial intelligence systems and such guidance or regulations (i) impose requirements that are substantially equivalent to, and at least as stringent as, the requirements set forth in this chapter, and (ii) at a minimum, require such bank, out-of-state bank, credit union, federal credit union, out-of-state credit union, or affiliate or subsidiary to (a) regularly audit such bank's, out-of-state bank's, credit union's, federal credit union's, out-of-state credit union's, or affiliate's or subsidiary's use of high-risk artificial intelligence systems for compliance with state and federal anti-discrimination laws and regulations applicable to such bank, out-of-state bank, credit union, federal credit union, out-of-state credit union, or affiliate or subsidiary and (b) mitigate any algorithmic discrimination caused by the use of a high-risk artificial intelligence system or any risk of algorithmic discrimination that is reasonably foreseeable as a result of the use of a high-risk artificial intelligence system.
H. For purposes of this subsection, "insurer" means the same as that term is defined in § 38.2-100.
The provisions of this chapter shall not apply to any insurer, or any high-risk artificial intelligence system developed or deployed by an insurer for use in the business of insurance, if such insurer is regulated and supervised by the State Corporation Commission or a comparable federal regulating body and subject to examination by such entity under any existing statutes, rules, or regulations pertaining to unfair trade practices and unfair discrimination prohibited under Chapter 5 (§ 38.2-500 et seq.) of Title 38.2, or published guidance or regulations that apply to the use of high-risk artificial intelligence systems and such guidance or regulations aid in the prevention and mitigation of algorithmic discrimination caused by the use of a high-risk artificial intelligence system or any risk of algorithmic discrimination that is reasonably foreseeable as a result of the use of a high-risk artificial intelligence system. Nothing in this chapter shall be construed to delegate existing regulatory oversight of the business of insurance to any department or agency other than the Bureau of Insurance of the Virginia State Corporation Commission.
I. The provisions of this chapter shall not apply to the development of an artificial intelligence system that is used exclusively for research, training, testing, or other pre-deployment activities performed by active participants of any sandbox software or sandbox environment established and subject to oversight by a designated agency or other government entity and that is in compliance with the provisions of this chapter.
J. The provisions of this chapter shall not apply to a developer, integrator, distributor, or deployer, or other person who develops, deploys, puts into service, or intentionally modifies, as applicable, a high-risk artificial intelligence system that (i) has been approved, authorized, certified, cleared, developed, or granted by a federal agency acting within the scope of the federal agency's authority, or by a regulated entity subject to the supervision and regulation of the Federal Housing Finance Agency, or (ii) is in compliance with standards established by a federal agency or by a regulated entity subject to the supervision and regulation of the Federal Housing Finance Agency, if the standards are substantially equivalent to, or more stringent than, the requirements of this chapter.
K. The provisions of this chapter shall not apply to a developer, integrator, distributor, deployer, or other person that is a covered entity within the meaning of the federal Health Insurance Portability and Accountability Act of 1996 (42 U.S.C. § 1320d et seq.) and the regulations promulgated under such federal act, as both may be amended from time to time, and is providing (i) health care recommendations that (a) are generated by an artificial intelligence system and (b) require a health care provider to take action to implement the recommendations or (ii) services utilizing an artificial intelligence system for an administrative, financial, quality measurement, security, or performance improvement function.
L. If a developer, integrator, distributor, or deployer engages in any action authorized by an exemption set forth in this section, the developer, integrator, distributor, or deployer bears the burden of demonstrating that such action qualifies for such exemption.
§ 59.1-613. Enforcement; civil penalty.
A. The Attorney General shall have exclusive authority to enforce the provisions of this chapter.
B. Whenever the Attorney General has reasonable cause to believe that any person has engaged in or is engaging in any violation of this chapter, the Attorney General is empowered to issue a civil investigative demand. The provisions of § 59.1-9.10 shall apply mutatis mutandis to civil investigative demands issued pursuant to this section. In rendering and furnishing any information requested pursuant to a civil investigative demand issued pursuant to this section, a developer, integrator, distributor, or deployer may redact or omit any trade secrets or information protected from disclosure by state or federal law. To the extent that any information requested pursuant to a civil investigative demand issued pursuant to this section is subject to attorney-client privilege or work-product protection, disclosure of such information pursuant to the civil investigative demand shall not constitute a waiver of such privilege or protection. Any information, statement, or documentation provided to the Attorney General pursuant to this section shall be exempt from disclosure under the Virginia Freedom of Information Act (§ 2.2-3700 et seq.).
C. Notwithstanding any contrary provision of law, the Attorney General may cause an action to be brought in the appropriate circuit court in the name of the Commonwealth to enjoin any violation of this chapter. The circuit court having jurisdiction may enjoin such violation notwithstanding the existence of an adequate alternative remedy at law. In any action brought pursuant to this chapter, it shall not be necessary that damages be proved.
D. Any person who violates the provisions of this chapter shall be subject to a civil penalty in an amount not to exceed $1,000 plus reasonable attorney fees, expenses, and costs, as determined by the court. Any person who willfully violates the provisions of this chapter shall be subject to a civil penalty in an amount not less than $1,000 and not more than $10,000 plus reasonable attorney fees, expenses, and costs, as determined by the court. Such civil penalties shall be paid into the Literary Fund.
E. Each violation of this chapter shall constitute a separate violation and shall be subject to any civil penalties imposed under this section.
F. The Attorney General may require that a developer disclose to the Attorney General any statement or documentation described in this chapter if such statement or documentation is relevant to an investigation conducted by the Attorney General. The Attorney General may also require that a deployer disclose to the Attorney General any risk management policy designed and implemented, impact assessment completed, or record maintained pursuant to this chapter if such risk management policy, impact assessment, or record is relevant to an investigation conducted by the Attorney General.
G. In an action brought by the Attorney General pursuant to this section, it shall be an affirmative defense that the developer, integrator, distributor, or deployer (i) discovers a violation of any provision of this chapter through red-teaming; (ii) no later than 45 days after discovering such violation (a) cures such violation and (b) provides notice to the Attorney General in a form and manner as prescribed by the Attorney General that such violation has been cured and evidence that any harm caused by such violation has been mitigated; and (iii) is otherwise in compliance with the requirements of this chapter.
H. Prior to causing an action to be brought against a developer, integrator, distributor, or deployer for a violation of this chapter pursuant to subsection C, the Attorney General shall determine, in consultation with the developer, integrator, distributor, or deployer, if it is possible to cure the violation. If it is possible to cure such violation, the Attorney General may issue a notice of violation to the developer, integrator, distributor, or deployer and afford the developer, integrator, distributor, or deployer the opportunity to cure such violation within 45 days of the receipt of such notice of violation. In determining whether to grant such opportunity to cure such violation, the Attorney General shall consider (i) the number of violations; (ii) the size and complexity of the developer, integrator, distributor, or deployer; (iii) the nature and extent of the developer's, integrator's, distributor's, or deployer's business; (iv) the substantial likelihood of injury to the public; (v) the safety of persons or property; and (vi) whether such violation was likely caused by human or technical error. If the developer, integrator, distributor, or deployer fails to cure such violation within 45 days of the receipt of such notice of violation, the Attorney General may proceed with such action.
I. Nothing in this chapter shall create a private cause of action in favor of any person aggrieved by a violation of this chapter.
2. That the provisions of this act shall become effective on July 1, 2026.
3. That the provisions of this act shall apply only to a violation committed or a cause of action accruing on or after July 1, 2026.