AI's Legal Labyrinth: Why It’s Time to AI-Proof Your Contracts
Authored by: Randy Gleason, Partner, Soloway Schwartz, LLC
Artificial intelligence (“AI”) development, deployment, and use are evolving across virtually all businesses and industries faster than the legal frameworks meant to govern them. In 2024, the European Union enacted Regulation (EU) 2024/1689 (the “EU AI Act”), which applies extraterritorially to any business that deploys or uses an AI system in the EU, with additional obligations coming into effect in phases through 2027. The United States does not yet have a comprehensive federal regulatory framework akin to the EU AI Act; however, federal agencies and state governments are not remaining idle when it comes to AI regulation. Businesses today are subject to a growing patchwork of state laws that govern the development, deployment, and use of AI (“AI Laws”), and we expect this trend to continue as more states enact similar laws. Additionally, regulatory guidance and enforcement actions provide insight into how regulators will apply and enforce AI Laws, and standards like NIST’s AI Risk Management Framework are often referenced as the benchmark for responsible AI governance across a variety of industries.
While not an exhaustive list, examples of AI Laws existing as of the publication of this article include:
| Jurisdiction | Legislation | Summary | Status | Effective |
| --- | --- | --- | --- | --- |
| California | AB 2885 | Defines “Artificial Intelligence” under California law. | Enacted | January 1, 2025 |
| California | AB 3030 | Requires disclaimers for use of AI in patient communications. | Enacted | January 1, 2025 |
| California | AB 2013 | Requires developers of generative AI systems to disclose information about their training data. | Enacted | January 1, 2026 |
| California | SB 53 – Transparency in Frontier Artificial Intelligence Act (TFAIA) | Imposes significant new disclosure, reporting, and transparency obligations on “large frontier developers” of frontier AI models (i.e., those training AI models using more than 10^26 FLOPs of computing power and with greater than $500 Million in annual revenue), though some obligations apply to frontier developers regardless of revenue. | Enacted | January 1, 2027 (earlier reporting beginning in 2026) |
| California | SB 942 – California AI Transparency Act | Imposes obligations on “Covered Providers” (i.e., any person that creates a publicly available generative AI system with over 1 Million unique users in a 12-month period) to: (i) provide free AI detection tools; (ii) provide disclosures regarding AI-generated content; and (iii) ensure downstream licensees (e.g., customers and users) maintain disclosure requirements. | Enacted | January 1, 2026 |
| Illinois | HB 3773 | Regulates use of AI in hiring and employment decisions. | Enacted | January 1, 2026 |
| Colorado | SB 205 – Colorado Artificial Intelligence Act (CAIA) | Comprehensive AI law governing the development and deployment of “High-Risk Artificial Intelligence Systems”; requires “Deployers” of any AI system to disclose to Colorado residents (“Consumers”) that they are interacting with AI. | Enacted | February 1, 2026 |
| Texas | HB 149 – Texas Responsible Artificial Intelligence Governance Act (TRAIGA) | Creates a regulatory “sandbox” for testing of AI; prohibits use of AI for certain activities (e.g., social scoring, use of biometric data without consent); imposes disclosure requirements on government agencies. Note: TRAIGA generally does not apply in the employment or commercial (B2B) context. | Enacted | January 1, 2026 |
| Utah | SB 149 – Utah Artificial Intelligence Policy Act (UAIPA) | Requires consumer disclosures regarding AI-generated content. | Enacted | May 1, 2024 |
| European Union | Regulation (EU) 2024/1689 – the EU AI Act | Comprehensive legal framework that governs the development, deployment, and use of AI using a risk-based approach, prohibits use of AI in certain contexts, and applies to all general-purpose AI (GPAI) models. Applies to developers and providers of AI systems deployed or used in the EU regardless of where the developer or provider is located. | Enacted | August 1, 2024 (obligations coming into effect through 2027) |
AI Laws often apply extraterritorially, integrate with sector-specific rules (e.g., third-party risk management frameworks, HIPAA regulations, and data protection and privacy laws), and consistently focus on the following principles in some form:
• Performing risk assessments;
• Transparency, explainability, and disclosure;
• Verification of data integrity and quality;
• Consumer protection and data privacy;
• Non-discrimination and bias minimization;
• Human oversight;
• Stricter regulation of “high-risk” AI systems; and
• Auditability.
For businesses, the contract has become the most practical tool for AI risk management, and contracting parties are now regularly including AI-specific terms and conditions in their commercial contracts. It is important for both vendors and customers to understand their obligations under AI Laws and the risks associated with their intended use of AI, and to develop, implement, and enforce internal policies regarding AI development, deployment, and use.
Commercial contracts should not only address requirements under existing AI Laws, but also include future-proof terms and conditions, aligned with the principles set forth above, to address compliance with new or amended AI Laws, including but not limited to:
• Transparency and disclosure of use of AI in connection with products or services, including downstream subcontractors in the supply chain;
• Ownership of data (including training data), inputs, and outputs;
• Restrictions on processing of data by third parties with any AI system, and use of data for model training;
• Data security and retention (e.g. zero data retention) requirements;
• Sourcing, quality, and use of training data;
• Compliance with applicable AI Laws and data protection laws;
• Appropriate indemnities and limitations and exclusions of liability; and
• Record-keeping and audit requirements.
These provisions are especially important where AI will be used in “high-risk” use cases, such as making decisions related to education, employment, creditworthiness, insurance risk, or public benefit eligibility; processing sensitive or proprietary data (e.g., biometric data or source code); supporting critical infrastructure; or operating in highly regulated sectors such as the financial services, healthcare, and pharmaceutical industries.
Similarly, developers, vendors, and service providers should ensure their contracts contain customary terms and conditions that:
• Ensure customers have legally required rights and consents for the data being processed;
• Require customers to comply with their obligations under applicable AI Laws in a shared responsibility model;
• Prohibit use of AI systems or outputs in a manner that violates applicable laws or contractual obligations, infringes upon third-party rights, or for any purpose other than the intended purpose;
• Clarify retention and ownership of intellectual property in products and services (including any IP embodied in, or used to produce, output or any other deliverables);
• Address the collection, creation, and use of anonymized and/or aggregated data, if required; and
• Include appropriate disclaimers, indemnities, and limitations of liability.
These terms are not theoretical, and the fines and penalties for non-compliance can be substantial. Through Operation AI Comply, the Federal Trade Commission has brought enforcement actions against companies for various deceptive practices in the marketing, promotion, and use of AI. In October 2025, the U.S. Department of Justice filed a proposed settlement in which Greystar Management Services LLC and twenty-five other landlords agreed to pay $141 Million for alleged violations of antitrust laws stemming from the landlords’ sharing and processing of data through an AI algorithm to allegedly manipulate rental price increases. The developer, RealPage, Inc., remains a defendant in both a class action lawsuit and a separate federal antitrust case alleging that RealPage’s software enables landlords to coordinate and set rental prices in violation of antitrust laws. Workday, Inc. faces a pending nationwide class action lawsuit, Mobley v. Workday, Inc., which alleges that Workday’s algorithm-based applicant screening tools discriminate against older applicants. Severe violations of the EU AI Act can lead to fines of up to €35 Million or 7% of annual worldwide turnover, whichever is higher; some state laws carry fines of up to $1,500 per violation; and the potential reputational harm associated with violations could be significant, particularly in highly regulated industries.
Regardless of whether companies are aware of existing AI Laws, their compliance obligations under such laws remain. By enforcing internal policies regarding the development, use, and deployment of AI systems, implementing sound third-party risk management procedures, performing appropriate risk assessments, monitoring changes in laws, and drafting careful, forward-looking contract language, businesses can manage AI use responsibly, achieve appropriate allocation of risk, reduce legal exposure, and ensure their development and use of AI remain compliant as the legal and regulatory landscape continues to evolve.
If you have any questions or would like assistance with revising your contracts to ensure compliance with AI Laws, we can help.
© 2025 by Randall B. Gleason Sr. All rights reserved. Disclaimer: This summary is provided for educational and informational purposes only and is not legal advice. Any specific questions about these topics should be directed to an attorney.