Current Regulations on AI in Employment Decisions

by Nathan Steineker, Associate Member, University of Cincinnati Law Review Vol. 94

I. Introduction

Several states have recently enacted legislation regulating the use of Artificial Intelligence (“AI”) in employment decisions. These laws primarily aim to prevent algorithmic discrimination against candidates and employees in the hiring, promotion, and firing process. Many of these recent laws require the following: employer disclosures to candidates and employees regarding AI when it is used to make employment decisions, mandatory bias audits and risk assessments for AI systems, and employer liability for discriminatory outcomes from AI tools even if such tools are developed by third parties.1See, e.g., 820 Ill. Comp. Stat. 42/5(2) (LexisNexis, LEXIS through P.A. 104-451); N.Y.C. Admin. Code §§ 20-871(a)(1), (b) (LexisNexis, LEXIS through Mar. 6, 2026); Colo. Rev. Stat. § 6-1-1703(4)(a)(1) (LEXIS through first 2025 Sess.); 775 Ill. Comp. Stat. 5/2-102(L)(2) (LexisNexis, LEXIS through P.A. 104-451); Cal. Code Regs. tit. 11, § 7221 (Barclay, LEXIS through Reg. 2026, No. 6, Feb. 6, 2026). While some of these laws provide guidance for employers on how to use AI in the workplace, several states have yet to enact legislation in this area, and other existing state laws remain bare-bones, leaving unanswered questions for employers. On December 11, 2025, President Trump released Executive Order 14365, titled “Ensuring a National Policy Framework for Artificial Intelligence,” which laid out the Trump administration’s plan to create a uniform federal regulatory standard for AI.2Exec. Order No. 14365, 90 Fed. Reg. 58499 (Dec. 16, 2025). President Trump justified this move by criticizing current state laws as “a patchwork of 50 different regulatory regimes.”3Id.

This Article explores the impact that President Trump’s Executive Order 14365 may have on state laws regulating the use of AI in employment decisions. Part II provides background on Executive Order 14365 and relevant state laws, both current law and those soon to take effect. Part III discusses how these laws may be impacted by the Executive Order and how employers may be subsequently affected. Finally, Part IV offers a conclusion on the current AI-in-the-workplace regulatory scheme.

II. Background

Section A traces the rapid emergence of state-level regulation of artificial intelligence in employment, beginning with Illinois’s 2019 Artificial Intelligence Video Interview Act (“AIVI Act”).4820 ILCS Artificial Intelligence Video Interview Act (LexisNexis, LEXIS through P.A. 104-451). It then progresses chronologically through increasingly expansive regulatory schemes, including New York City’s bias audit mandate for automated employment decision tools, New Jersey’s broader disparate impact framework, Colorado’s comprehensive Consumer Protections for Artificial Intelligence Act (“CPAI Act”), Texas’s and Maryland’s employee-exempting statutes, Illinois’s 2024 amendments to its Human Rights Act, and California’s automated decision-making technology regulations.5N.Y.C. Admin. Code § 20-870 et seq. (LEXIS through Mar. 6, 2026); N.J. Admin. Code § 13:16 (LEXIS through N.J. Reg. Vol. 58, No. 5); Colo. Rev. Stat. §§ 6-1-1701-1707 (LEXIS through first 2025 Sess.); Tex. Bus. & Com. Code Ann. §§ 551.001-554.103 (West through second 2025 Sess.); Md. Code Ann., Com. Law §§ 14-4701-4711 (LexisNexis, Lexis Advance through 2025 Reg. and first Special Sess.); 775 Ill. Comp. Stat. 5/2-102 (LexisNexis, LEXIS through P.A. 104-451); Cal. Code Regs. tit. 11, §§ 7001-7302 (Barclay, LEXIS through Reg. 2026, No. 6, Feb. 6, 2026). Section B shifts to the federal response, detailing Executive Order 14365.690 Fed. Reg. 58499 (Dec. 16, 2025).

A. Existing State Laws

In 2019, Illinois became the first state to enact legislation regulating the use of AI in a specific subset of employment decisions: video interviews.7See, e.g., Kwabena A. Appenteng, Philip L. Gordon & Garry G. Mathiason, Implementing Illinois’ AI Video Interview Act: Five Steps Employers Can Take to Address Hidden Questions and Integrate Policies with Existing Employment Laws, Littler (Sep. 17, 2019), https://www.littler.com/news-analysis/asap/implementing-illinois-ai-video-interview-act-five-steps-employers-can-take#:~:text=To%20address%20this%20use%20of,intelligence%20analysis%E2%80%9D%20is%20not%20defined [https://perma.cc/Y63B-VXQ2] (“The AI Interview Act is the first U.S. law to establish a framework for employers’ use of AI in the hiring process.”). The AIVI Act, which became effective on January 1, 2020, requires Illinois employers that administer recorded video interviews and use artificial intelligence to evaluate those recordings to inform applicants in advance that AI may be used to analyze the interview and assess their suitability for the position.8820 ILCS Artificial Intelligence Video Interview Act (LexisNexis, LEXIS through P.A. 104-451); 820 Ill. Comp. Stat. 42/5, 42/5(1) (LexisNexis, LEXIS through P.A. 104-451). Additionally, the AIVI Act mandates that employers disclose to applicants how the company’s AI system operates and obtain their consent before using AI to assess their candidacy.9820 Ill. Comp. Stat. 42/5(2), (3) (LexisNexis, LEXIS through P.A. 104-451). Employers that rely exclusively on an automated analysis of a video interview to decide whether a candidate advances to an in-person interview must gather and submit data on the race and ethnicity of both interviewed and hired applicants to the state Department of Commerce and Economic Opportunity.10820 Ill. Comp. Stat. 42/20(a), (b) (LexisNexis, LEXIS through P.A. 104-451).

In 2021, New York City (“NYC”) enacted legislation similar to the AIVI Act, targeting discriminatory uses of AI in the hiring and promotion process.11Automated Employment Decision Tools (AEDT), N.Y.C. Dept. Consumer and Worker Protection, https://www.nyc.gov/site/dca/about/automated-employment-decision-tools.page#:~:text=Local%20Law%20144%20of%202021%20prohibits%20employers,help%20you%20track%20and%20update%20your%20complaint [https://perma.cc/A8ET-CX52] (last visited Feb. 18, 2026). NYC Local Law 144 prohibits NYC employers and employment agencies from using an automated employment decision tool (“AEDT”), broadly defined to cover computational processes derived from machine learning, statistical modeling, data analytics, or AI that substantially assist or replace discretionary decision-making, at any time in the hiring or promotion process.12N.Y.C. Admin. Code § 20-870 (LEXIS through Mar. 6, 2026) (“The term ‘automated employment decision tool’ means any computational process, derived from machine learning, statistical modeling, data analytics, or artificial intelligence, that issues simplified output, including a score, classification, or recommendation, that is used to substantially assist or replace discretionary decision making for making employment decisions that impact natural persons. The term ‘automated employment decision tool’ does not include a tool that does not automate, support, substantially assist or replace discretionary decision-making processes and that does not materially impact natural persons, including, but not limited to, a junk email filter, firewall, antivirus software, calculator, spreadsheet, database, data set, or other compilation of data.”); id. at § 20-871(a). However, such use is allowed if: (1) the AEDT has been subject to a bias audit within the year preceding its use;13Id. § 20-871(a)(1). (2) information about the bias audit is publicly available;14Id. § 20-871(a)(2). and (3) employees or candidates have been notified of the AEDT’s use, including the job qualifications or characteristics the AEDT relies on and the type of data the AEDT collects.15Id. § 20-871(b).
Bias audits must be conducted by independent, objective third-party auditors.16Id. § 20-870. The audits must evaluate outcomes by sex, race, and ethnicity, including the intersection of these categories, among the candidates and employees the AEDT assesses, in order to ensure that the AEDT is not discriminating against candidates and employees based on these protected factors.176 R.C.N.Y. § 5-301(b) (LEXIS through Mar. 6, 2026).

In early 2024, the New Jersey Division on Civil Rights proposed N.J.A.C. 13:16, which was adopted and published on December 15, 2025, regulating the use of AEDTs in employment decisions.18N.J. Admin. Code § 13:16 (LEXIS through N.J. Reg. Vol. 58, No. 5). Although N.J.A.C. 13:16, like NYC Local Law 144, regulates AEDTs, it covers a broader spectrum of policies that may cause a disparate impact on employees.19Id. § 13:16-1.3 (regulating AEDTs that “make decisions regarding employees, … including advertising, recruiting, screening, interviewing, hiring, placement, promotion, and compensation, or any other term, condition, or privilege of employment”). The regulation also provides a burden-shifting framework for establishing a disparate impact claim.20Id. § 13:16-2.2.

Colorado became the first state to enact comprehensive legislation specifically regulating the use of AI in employment decisions with its CPAI Act, which takes effect on June 30, 2026.21Colo. Rev. Stat. §§ 6-1-1701-1707 (LEXIS through first 2025 Sess.); S.B. 25B-004, 75th Gen. Assemb., 1st Spec. Sess. (Colo. 2025). Colorado’s CPAI Act adds new requirements for employers doing business in Colorado that either deploy or develop “high-risk artificial intelligence systems” that make or are substantial factors in making employment decisions.22Colo. Rev. Stat. §§ 6-1-1701(3), (6), (7), (9) (LEXIS through first 2025 Sess.). A “substantial factor” is one that “assists in making a consequential decision; is capable of altering the outcome of a consequential decision; and is generated by an artificial intelligence system.”23Id. § 6-1-1701(11). (A “‘Substantial factor’ includes any use of an artificial intelligence system to generate any content, decision, prediction, or recommendation concerning a consumer that is used as a basis to make [an employment] decision concerning the consumer.”). The CPAI Act broadly covers decisions that have “a material legal or similarly significant effect” in the employment context, which can be construed to include every stage of the employer-employee relationship, including screening, interviewing, hiring, scheduling, promoting, disciplining, and firing employees.24Id. § 6-1-1701(3).

Under the CPAI Act, covered employers are required to take reasonable steps to prevent and mitigate known or reasonably anticipated risks of algorithmic discrimination stemming from their use of AI.25Id. § 6-1-1702(1). The CPAI Act’s definition of algorithmic discrimination covers a wide range of protected statuses.26Id. § 6-1-1701(1)(a) (“‘Algorithmic discrimination’ means any condition in which the use of [AI] results in an unlawful differential treatment or impact that disfavors an individual or group of individuals on the basis of their actual or perceived age, color, disability, ethnicity, genetic information, limited proficiency in the English language, national origin, race, religion, reproductive health, sex, veteran status, or other classification protected under the laws of this state or federal law.”). It also creates a rebuttable presumption of reasonable care if employers implement risk management policies, complete annual impact assessments, and provide certain notices.27Id. § 6-1-1703(1)-(4). Applicants and employees must be notified when AI is used “to make, or be a substantial factor in making, a consequential decision before the decision is made.”28Id. § 6-1-1703(4)(a)(1). Applicants and employees who experience an adverse employment action based on AI must receive notice of the decision, including an explanation of how the AI was used, the categories and sources of data it analyzed, an opportunity to correct any inaccurate personal information relied upon, and a chance to appeal the decision for review by a human decision-maker.29Id. § 6-1-1703(4)(b).

In 2025, Texas followed suit and became the second state to enact comprehensive legislation targeting AI with the Texas Responsible Artificial Intelligence Governance Act (“TRAIGA”).30H.B. 149, 89th Leg., Reg. Sess. (Tex. 2025). However, TRAIGA specifically exempts employees from coverage.31Tex. Bus. & Com. Code Ann. § 551.001(2) (West through second 2025 Sess.) (“‘Consumer’ means an individual who is a resident of this state acting only in an individual or household context. The term does not include an individual acting in a commercial or employment context.” (emphasis added)). Maryland’s Online Data Privacy Act, applying to activities occurring after April 1, 2026, is another comprehensive Act targeting AI that specifically exempts employees from coverage.32Md. Code Ann., Com. Law § 14-4701(h)(2) (LexisNexis, Lexis Advance through 2025 Reg. and first Special Sess.) (“‘Consumer’ does not include an individual acting in a commercial or employment context; or an individual acting as an employee…”).

In 2024, Illinois again addressed the use of AI in employment by passing HB 3773, which amended the Illinois Human Rights Act (“IHRA”) to prohibit employers, under certain conditions, from deploying AI in a manner that causes discrimination against employees on the basis of recognized, protected characteristics, irrespective of intent.33H.B. 3773, 103d Gen. Assemb., Reg. Sess. (Ill. 2024); 775 Ill. Comp. Stat. 5/2-102(L)(1) (LexisNexis, LEXIS through P.A. 104-451) (“With respect to recruitment, hiring, promotion, renewal of employment, selection for training or apprenticeship, discharge, discipline, tenure, or the terms, privileges, or conditions of employment, for an employer to use artificial intelligence that has the effect of subjecting employees to discrimination on the basis of protected classes under this Article…”). Like Colorado’s CPAI Act, HB 3773 requires employers to notify applicants and employees when AI is used to influence employment decisions.34775 Ill. Comp. Stat. 5/2-102(L)(2) (LexisNexis, LEXIS through P.A. 104-451). HB 3773 further authorizes the Illinois Department of Human Rights to promulgate any rules needed to implement and enforce the provision.35Id.

In 2025, the California Privacy Protection Agency set forth regulations under the California Consumer Privacy Act regarding “automated decisionmaking technology” (“ADMT”), which it defined as “any technology that processes personal information and uses computation to replace human decisionmaking or substantially replace human decisionmaking.”36Cal. Code Regs. tit. 11, §§ 7001-7302 (Barclay, LEXIS through Reg. 2026, No. 6, Feb. 6, 2026); id. § 7001(e) (“To ‘substantially replace human decisionmaking’ means a business uses the technology’s output to make a decision without human involvement.”). While these regulations avoid mentioning AI, such systems fall under the regulatory definition of ADMT. The regulations cover employers that use ADMT to make any “significant decision” in the employment context.37Id. § 7001(ddd) (“‘Significant decision’ means a decision that results in the provision or denial of… employment or independent contracting opportunities or compensation…”). Like other regulations mirroring Colorado’s CPAI Act, these rules require covered employers to provide applicants and employees with advance notice of the ADMT’s use, explaining the ADMT’s purpose, scope, and potential impacts on the decision-making process.38Id. § 7220. The notice must inform individuals of their right to opt out of the use of ADMT in significant decisions affecting them, except in limited circumstances.39Id. § 7221. Notably, however, employers that use ADMT exclusively to evaluate an employee’s or applicant’s ability to perform in a workplace or educational setting for purposes of admission, acceptance, or hiring are not obligated to provide an opt-out option.40Id. § 7221(b)(2).

B. The Executive Order

President Trump’s administration injected itself into the AI-regulation discussion on December 11, 2025, with Executive Order 14365 (“the Order”), titled “Ensuring a National Policy Framework for Artificial Intelligence.”4190 Fed. Reg. 58499 (Dec. 16, 2025). The Order specifically criticized Colorado’s CPAI Act, stating that the CPAI Act “may even force AI models to produce false results in order to avoid a ‘differential treatment or impact’ on protected groups.”42Id. at 58499. To combat these perceived issues, the Order set out a plan to “sustain and enhance the United States’ global AI dominance through a minimally burdensome national policy framework for AI.”43Id. at 58499 (emphasis added). Section three of the Order gives the U.S. Attorney General thirty days after the Order’s publication date to establish an “AI Litigation Task Force… whose sole responsibility shall be to challenge State AI laws… on grounds that such laws unconstitutionally regulate interstate commerce, are preempted by existing Federal regulations, or are otherwise unlawful.”44Id. Section four of the Order provides that, within ninety days after publication, the Secretary of Commerce will evaluate existing state AI laws and identify any that impose undue burdens.45Id. at 58500. This includes mandates requiring AI systems to alter accurate outputs or compelling developers or deployers to disclose or report information in ways that violate the First Amendment or other constitutional protections.46Id. States with AI laws identified under section four risk losing remaining funds through the Broadband Equity Access and Deployment Program (“BEAD”),47Id. a $42.45 billion federal initiative designed to expand high-speed internet access to unserved and underserved locations across the U.S. and its territories.48Broadband Grants for States, District of Columbia, Puerto Rico, and Territories, Pub. L. 117-58, tit. I, 135 Stat. 1182 (2021) (codified in various sections of 47 U.S.C.).
Section eight, however, indicates that state laws regulating child safety, AI data center infrastructure, and State government use of AI will be left unchallenged.49Id. Section seven provides that the Chair of the Federal Trade Commission (“FTC”) shall issue a policy statement addressing how the prohibition on unfair or deceptive acts or practices under 15 U.S.C. § 45 applies to AI models.5090 Fed. Reg. 58499, 58500 (Dec. 16, 2025) (The policy statement must explain when state laws mandating changes to truthful AI outputs are preempted by the FTC Act’s ban on deceptive acts or practices affecting commerce). Section eight also directs the Special Advisor for AI and Crypto and the Assistant to the President for Science and Technology to prepare proposed legislation aligned with the Order’s broader objectives.51Id. at 58501.

Two key updates have followed the Order’s publication. First, on January 9, 2026, the Department of Justice (“DOJ”) established a specialized task force pursuant to section three of the Order.52Memorandum from the Att’y Gen. on Artificial Intelligence Litigation Taskforce to U.S. Dep’t of Just. (Jan. 9, 2026), https://www.justice.gov/ag/media/1422986/dl [https://perma.cc/7NGU-B2VC]. As of the publication date of this Article, the task force has not taken any action beyond formation. Second, despite the Order, Florida has introduced its own legislation targeting AI, titled “Artificial Intelligence Bill of Rights” (“SB 482”).53S.B. 482, 129th Gen. Assemb., Reg. Sess. (Fla. 2026) (in appropriations committee as of Jan. 21, 2026). SB 482 does not regulate the employer-employee relationship, but primarily addresses AI companion chatbot platforms, prohibiting them from opening or maintaining an account with a minor without parental consent.54Id.

III. Discussion

In the wake of President Trump’s Executive Order 14365, employers and courts are left wondering which state AI-in-employment regulations will remain in effect. The Order expressly criticized Colorado’s CPAI Act, signaling that the Trump administration would likely challenge state regulations that closely mirror its framework.5590 Fed. Reg. at 58499. To support the creation of a federal standard that would preempt inconsistent state laws, the Order further asserted that certain state measures “impermissibly regulate beyond state borders, impinging on interstate commerce.”56Id. Although the CPAI Act specifically limits covered entities to those doing business in Colorado,57Colo. Rev. Stat. §§ 6-1-1701(6), (7) (LEXIS through first 2025 Sess.). the Trump administration appears to have taken the position that these regulations, and others like them, may unconstitutionally regulate entities operating both within and beyond the enacting state.5890 Fed. Reg. at 58499.

Despite the Order’s strong wording, executive orders alone do not automatically override state laws.59See, e.g., Youngstown Sheet & Tube Co. v. Sawyer, 343 U.S. 579, 591 (1952) (striking down an executive order because the president lacked authority to seize steel mills in the absence of constitutional or congressional authority); City & Cnty. of S.F. v. Trump, 897 F.3d 1225, 1245 (9th Cir. 2018) (striking down an executive order because it improperly directed agencies to withhold funding that Congress had not tied to compliance with the relevant statute). As the Supreme Court has held, “The President’s power, if any, to issue an executive order must stem either from an act of Congress or from the Constitution itself.”60Youngstown, 343 U.S. at 585. The Constitution grants the President authority as commander-in-chief, certain powers over foreign policy and national security, and oversight of the executive branch.61U.S. Const. art. II, §§ 2, 3. In Youngstown Sheet & Tube Co. v. Sawyer, Justice Jackson’s concurring opinion set out a useful framework for evaluating the President’s authority to issue executive orders.62Youngstown, 343 U.S. at 635-38. Under this framework, the President’s authority is at its maximum when he acts pursuant to congressional authorization, in a “zone of twilight” when he acts in the absence of congressional authorization, and at its lowest ebb when he acts against congressional authority.63Id.

Assessed through Youngstown’s framework, Order 14365 functions both as a call for Congress to pass legislation consistent with its provisions and as an instruction for the U.S. Attorney General to contest applicable state laws in court.6490 Fed. Reg. at 58499-500. The Order hinted at two potential sources of preemptive authority: “the Federal Trade Commission Act’s prohibition on unfair and deceptive acts or practices under 15 U.S.C. § 45 to AI models” and the Commerce Clause of the Constitution.65Id. at 58499. Because executive orders are binding on the federal government, the Order also provides federal agencies with guidance on regulating AI use under existing federal law.66Id. at 58499-500. Until Congress acts, however, state AI regulations remain in effect, and states may continue to enforce their current and forthcoming AI-in-the-workplace laws.

States opposing the Order will likely raise at least three objections. First, some may contend that the Order infringes upon the Tenth Amendment and the constitutional principles of federalism. The Tenth Amendment reserves to the states “[t]he powers not delegated to the United States by the Constitution, nor prohibited by it to the states.”67U.S. Const. amend. X. This reservation confers broad regulatory authority on the states, commonly referred to as the “police power,” enabling them to enact laws that safeguard public health, safety, and welfare.68See, e.g., Berman v. Parker, 348 U.S. 26, 32 (1954) (“Public safety, public health, morality, peace and quiet, law and order … are some of the more conspicuous examples of … police power.”). The Supreme Court has acknowledged that states may use this power to regulate employment conditions.6990 Fed. Reg. 58500; see, e.g., West Coast Hotel v. Parrish, 300 U.S. 379, 398-399 (1937) (upholding a minimum wage law for women as a valid exercise of the state’s police power to protect the health and safety of women). However, because the Supreme Court recently held that federal courts generally cannot issue universal or nationwide injunctions, states can no longer depend on federal courts to immediately halt such orders nationwide.70Trump v. CASA, Inc., 606 U.S. 831, 839-862 (2025).

Second, states may challenge the Order’s provisions that threaten to withhold remaining BEAD funds on grounds that such measures are coercive or insufficiently related to the program’s purpose.71See, e.g., Nat’l Fed’n of Indep. Bus. v. Sebelius, 567 U.S. 519, 681 (2012) (striking down a federal funding condition as coercive on the grounds that it gave states “no real choice but to go along” with the regulation). The Supreme Court has held that “Congress may not simply ‘commandeer[] the legislative process of the States by directly compelling them to enact and enforce a federal regulatory program.’”72New York v. United States, 505 U.S. 144, 161 (1992) (quoting Hodel v. Virginia Surface Mining & Reclamation Assn., Inc., 452 U.S. 264, 288 (1981)). Congress may condition states’ eligibility to receive federal funding so long as the conditions are in furtherance of “the general welfare,” unambiguous, related to the federal interest in the particular national program, and not in violation of the Constitution.73South Dakota v. Dole, 483 U.S. 203, 207-08 (1987). However, the Supreme Court has recognized that federal inducement might at some point become coercive.74Id. at 211. The Order threatens to prevent noncomplying states from accessing “non-deployment funds” through the BEAD program.7590 Fed. Reg. 58500. In the context of the BEAD program, non-deployment funds are any portion of a state’s allocated funding that remains after satisfying the program’s infrastructure deployment obligations, which, in the aggregate, has been estimated to be as much as $21 billion.76Michael Santorelli & Alex Karras, The $21 Billion Question: What To Do With Leftover BEAD Funds?, Broadband Expanded (Oct. 23, 2025), https://broadbandexpanded.com/posts/beadleftover#:~:text=Final%20Proposals%20Show%20$21B+%20in%20Leftover%20Funding,-Since%20June%206&text=Many%20states%20met%20this%20ambitious,percentage%20of%20funds%20left%20over [https://perma.cc/2HSG-Q6BZ].
Losing access to roughly half of the original BEAD funding would severely hinder state efforts to modernize infrastructure, likely halting or setting back projects intended to bring high-speed internet to unserved and underserved communities. Courts could well view the loss of such significant infrastructure funds as coercive.

Third, states may argue that their AI laws do not trigger Commerce Clause scrutiny as they merely regulate businesses operating within their borders. The fact that certain covered businesses operate across multiple states does not diminish each state’s authority to regulate intrastate commerce under the Tenth Amendment.77See, e.g., United States v. Lopez, 514 U.S. 549 (1995) (reaffirming that the Tenth Amendment limits federal power, protecting areas of traditional state concern from federal overreach, even if they indirectly affect interstate commerce); Gibbons v. Ogden, 22 U.S. 1 (1824) (recognizing that Congress’ authority to regulate interstate commerce under the Commerce Clause “does not extend to the regulation of the internal commerce of any State… Internal commerce must be that which is wholly carried on within the limits of a State.”). The Commerce Clause grants Congress the authority to regulate interstate commerce.78U.S. Const. art. I, § 8, cl. 3 (“The Congress shall have Power… To regulate Commerce… among the several states.”). The Supreme Court has interpreted the Commerce Clause to grant Congress the power to regulate the use of the channels of interstate commerce, the persons or things in interstate commerce, and activities having a substantial relation to interstate commerce.79United States v. Lopez, 514 U.S. 549, 558-59 (1995). If Congress were to pass legislation consistent with the Order, and a state later challenged it in court, the court would need to weigh state sovereignty against federal supremacy.

Despite tensions between state regulations and the Trump administration, the state laws governing AI use in the workplace that are already in effect remain legally valid. Therefore, employers operating in the aforementioned states should continue to observe these regulations until they are addressed by an act of Congress or challenged in court. As the reader may have noticed, Ohio is not one of the previously listed states. As of this Article’s publication date, Ohio has yet to introduce comprehensive legislation targeting the use of AI in the workplace.

However, employers both within and outside of Ohio should take proactive measures to protect their companies. Employers operating in multiple states should determine which state laws are applicable and treat the most stringent requirements as the minimum standard for compliance. Even if a relevant state law is challenged or preempted, employers are still liable for traditional discrimination claims. Therefore, employers should comply with state laws requiring documentation and risk assessments of AI use in employment decisions. Maintaining documentation will also assist employers in state or federal proceedings if a challenge to their AI use should arise. Considering the Order’s reference to the FTC Act’s ban on unfair or deceptive practices involving AI, employers should ensure that any statements about their AI systems are accurate and refrain from claiming that their AI is bias-free without supporting evidence. 

IV. Conclusion

In summary, although states like Illinois, Colorado, New Jersey, and California have advanced regulations on AI in employment that emphasize transparency, bias audits, risk management, and employer accountability, Executive Order 14365 creates uncertainty by signaling possible federal challenges to these laws based on preemption. Nevertheless, the Order does not invalidate existing statutes, and state laws remain enforceable unless and until courts or Congress provide otherwise. Accordingly, employers must continue complying with applicable state regulations, maintain strong documentation and governance practices, and remain attentive to federal developments, recognizing that traditional anti-discrimination obligations persist regardless of how the broader federal-state regulatory conflict is ultimately resolved.


References

  • 1
    See, e.g., 820 Ill. Comp. Stat. 42/5(2) (LexisNexis, LEXIS through P.A. 104-451); N.Y.C. Admin. Code §§ 20-871(a)(1), (b) (LexisNexis, LEXIS through Mar. 6, 2026); Colo. Rev. Stat. § 6-1-1703(4)(a)(1) (LEXIS through first 2025 Sess.); 775 Ill. Comp. Stat. 5/2-102(L)(2) (LexisNexis, LEXIS through P.A. 104-451); Cal. Code Regs. tit. 11, § 7221 (Barclay, LEXIS through Reg. 2026, No. 6, Feb. 6, 2026).
  • 2
    Exec. Order No. 14365, 90 Fed. Reg. 58499 (Dec. 16, 2025).
  • 3
    Id.
  • 4
    820 ICLS Artificial Intelligence Video Interview Act (LexisNexis, LEXIS through P.A. 104-451).
  • 5
    N.Y.C. Admin. Code § 20-870 et seq. (LEXIS through Mar. 6, 2026); N.J. Admin. Code § 13:16 (LEXIS through N.J. Reg. Vol. 58, No. 5); Colo. Rev. Stat. §§ 6-1-1701-1707 (LEXIS through LEXIS through first 2025 Sess.); Tex. Bus. & Com. Code Ann. §§ 551.001-554.103 (West through second 2025 Sess.); Md. Code Ann., Com. Law § 14-4701-4711 (LexisNexis, Lexis Advance through 2025 Reg. and first Special Sess.); 775 Ill. Comp. Stat. 5/2-102 (LexisNexis, LEXIS through P.A. 104-451); Cal. Code Regs. tit. 11, §§ 7001-7302 (Barclay, LEXIS through Reg. 2026, No. 6, Feb. 6, 2026).
  • 6
    90 Fed. Reg. 58499 (Dec. 16, 2025).
  • 7
    See, e.g., Kwabena A. Appenteng, Philip L. Gordon & Garry G. Mathiason, Implementing Illinois’ AI Video Interview Act: Five Steps Employers Can Take to Address Hidden Questions and Integrate Policies with Existing Employment Laws, Litter (Sep. 17, 2019), https://www.littler.com/news-analysis/asap/implementing-illinois-ai-video-interview-act-five-steps-employers-can-take#:~:text=To%20address%20this%20use%20of,intelligence%20analysis%E2%80%9D%20is%20not%20defined [https://perma.cc/Y63B-VXQ2] (“The AI Interview Act is the first U.S. law to establish a framework for employers’ use of AI in the hiring process.”).
  • 8
820 ILCS Artificial Intelligence Video Interview Act (LexisNexis, LEXIS through P.A. 104-451); 820 Ill. Comp. Stat. 42/5, 42/5(1) (LexisNexis, LEXIS through P.A. 104-451).
  • 9
    820 Ill. Comp. Stat. 42/5(2), (3) (LexisNexis, LEXIS through P.A. 104-451).
  • 10
    820 Ill. Comp. Stat. 42/20(a), (b) (LexisNexis, LEXIS through P.A. 104-451).
  • 11
Automated Employment Decision Tools (AEDT), N.Y.C. Dep’t of Consumer and Worker Protection, https://www.nyc.gov/site/dca/about/automated-employment-decision-tools.page#:~:text=Local%20Law%20144%20of%202021%20prohibits%20employers,help%20you%20track%20and%20update%20your%20complaint [https://perma.cc/A8ET-CX52] (last visited Feb. 18, 2026).
  • 12
    N.Y.C. Admin. Code § 20-870 (LEXIS through Mar. 6, 2026) (“The term ‘automated employment decision tool’ means any computational process, derived from machine learning, statistical modeling, data analytics, or artificial intelligence, that issues simplified output, including a score, classification, or recommendation, that is used to substantially assist or replace discretionary decision making for making employment decisions that impact natural persons. The term ‘automated employment decision tool’ does not include a tool that does not automate, support, substantially assist or replace discretionary decision-making processes and that does not materially impact natural persons, including, but not limited to, a junk email filter, firewall, antivirus software, calculator, spreadsheet, database, data set, or other compilation of data.”); id. at § 20-871(a).
  • 13
    Id. § 20-871(a)(1).
  • 14
    Id. § 20-871(a)(2).
  • 15
    Id. § 20-871(b).
  • 16
    Id. § 20-870.
  • 17
    6 R.C.N.Y. § 5-301(b) (LEXIS through Mar. 6, 2026).
  • 18
    N.J. Admin. Code § 13:16 (LEXIS through N.J. Reg. Vol. 58, No. 5).
  • 19
    Id. § 13:16-1.3 (regulating AEDTs that “make decisions regarding employees, … including advertising, recruiting, screening, interviewing, hiring, placement, promotion, and compensation, or any other term, condition, or privilege of employment”).
  • 20
    Id. § 13:16-2.2.
  • 21
Colo. Rev. Stat. §§ 6-1-1701 to 6-1-1707 (LEXIS through first 2025 Sess.); S.B. 25B-004, 75th Gen. Assemb., 1st Spec. Sess. (Colo. 2025).
  • 22
    Colo. Rev. Stat. §§ 6-1-1701(3), (6), (7), (9) (LEXIS through first 2025 Sess.).
  • 23
Id. § 6-1-1701(11) (“‘Substantial factor’ includes any use of an artificial intelligence system to generate any content, decision, prediction, or recommendation concerning a consumer that is used as a basis to make [an employment] decision concerning the consumer.”).
  • 24
    Id. § 6-1-1701(3).
  • 25
    Id. § 6-1-1702(1).
  • 26
    Id. § 6-1-1701(1)(a) (“‘Algorithmic discrimination’ means any condition in which the use of [AI] results in an unlawful differential treatment or impact that disfavors an individual or group of individuals on the basis of their actual or perceived age, color, disability, ethnicity, genetic information, limited proficiency in the English language, national origin, race, religion, reproductive health, sex, veteran status, or other classification protected under the laws of this state or federal law.”).
  • 27
    Id. § 6-1-1703(1)-(4).
  • 28
    Id. § 6-1-1703(4)(a)(1).
  • 29
    Id. § 6-1-1703(4)(b).
  • 30
    H.B. 149, 89th Leg., Reg. Sess. (Tex. 2025).
  • 31
Tex. Bus. & Com. Code Ann. § 551.001(2) (West through second 2025 Sess.) (“‘Consumer’ means an individual who is a resident of this state acting only in an individual or household context. The term does not include an individual acting in an employment context.”) (emphasis added).
  • 32
    Md. Code Ann., Com. Law § 14-4701(h)(2) (LexisNexis, Lexis Advance through 2025 Reg. and first Special Sess.) (“‘Consumer’ does not include an individual acting in a commercial or employment context; or an individual acting as an employee…”).
  • 33
    H.B. 3773, 103d Gen. Assemb., Reg. Sess. (Ill. 2024); 775 Ill. Comp. Stat. 5/2-102(L)(1) (LexisNexis, LEXIS through P.A. 104-451) (“With respect to recruitment, hiring, promotion, renewal of employment, selection for training or apprenticeship, discharge, discipline, tenure, or the terms, privileges, or conditions of employment, for an employer to use artificial intelligence that has the effect of subjecting employees to discrimination on the basis of protected classes under this Article…”).
  • 34
    775 Ill. Comp. Stat. 5/2-102(L)(2) (LexisNexis, LEXIS through P.A. 104-451).
  • 35
    Id.
  • 36
Cal. Code Regs. tit. 11, §§ 7001-7302 (Barclay, LEXIS through Reg. 2026, No. 6, Feb. 6, 2026); id. § 7001(e) (“To ‘substantially replace human decisionmaking’ means a business uses the technology’s output to make a decision without human involvement.”).
  • 37
Id. § 7001(ddd) (“‘Significant decision’ means a decision that results in the provision or denial of… employment or independent contracting opportunities or compensation…”).
  • 38
    Id. § 7220.
  • 39
    Id. § 7221.
  • 40
    Id. § 7221(b)(2).
  • 41
    90 Fed. Reg. 58499 (Dec. 16, 2025).
  • 42
    Id. at 58499.
  • 43
    Id. at 58499 (emphasis added).
  • 44
    Id.
  • 45
    Id. at 58500.
  • 46
    Id.
  • 47
    Id.
  • 48
Broadband Grants for States, District of Columbia, Puerto Rico, and Territories, Pub. L. No. 117-58, tit. I, 135 Stat. 1182 (2021) (codified in various sections of 47 U.S.C.).
  • 49
    Id.
  • 50
90 Fed. Reg. 58499, 58500 (Dec. 16, 2025) (directing the policy statement to explain when state laws mandating changes to truthful AI outputs are preempted by the FTC Act’s ban on deceptive acts or practices affecting commerce).
  • 51
    Id. at 58501.
  • 52
    Memorandum from the Att’y Gen. on Artificial Intelligence Litigation Taskforce to U.S. Dep’t of Just. (Jan. 9, 2026), https://www.justice.gov/ag/media/1422986/dl [https://perma.cc/7NGU-B2VC].
  • 53
    S.B. 482, 129th Gen. Assemb., Reg. Sess. (Fla. 2026) (in appropriations committee as of Jan. 21, 2026).
  • 54
    Id.
  • 55
    90 Fed. Reg. at 58499.
  • 56
    Id.
  • 57
    Colo. Rev. Stat. §§ 6-1-1701(6), (7) (LEXIS through first 2025 Sess.).
  • 58
    90 Fed. Reg. at 58499.
  • 59
    See, e.g., Youngstown Sheet & Tube Co. v. Sawyer, 343 U.S. 579, 591 (1952) (striking down an executive order because the president lacked authority to seize steel mills in the absence of constitutional or congressional authority); City & Cnty. of S.F. v. Trump, 897 F.3d 1225, 1245 (9th Cir. 2018) (striking down an executive order because it improperly directed agencies to withhold funding that Congress had not tied to compliance with the relevant statute).
  • 60
    Youngstown, 343 U.S. at 585.
  • 61
    U.S. Const. art. II, §§ 2, 3.
  • 62
    Youngstown, 343 U.S. at 635-38.
  • 63
    Id.
  • 64
    90 Fed. Reg. at 58499-500.
  • 65
    Id. at 58499.
  • 66
    Id. at 58499-500.
  • 67
    U.S. Const. amend. X.
  • 68
See, e.g., Berman v. Parker, 348 U.S. 26, 32 (1954) (“Public safety, public health, morality, peace and quiet, law and order … are some of the more conspicuous examples of … police power.”).
  • 69
90 Fed. Reg. at 58500; see, e.g., West Coast Hotel Co. v. Parrish, 300 U.S. 379, 398-99 (1937) (upholding a minimum wage law for women as a valid exercise of the state’s police power to protect the health and safety of women).
  • 70
    Trump v. CASA, Inc., 606 U.S. 831, 839-862 (2025).
  • 71
    See, e.g., Nat’l Fed’n of Indep. Bus. v. Sebelius, 567 U.S. 519, 681 (2012) (striking down a federal regulation as coercive on the grounds that it gave states “no real choice but to go along” with the regulation).
  • 72
    New York v. United States, 505 U.S. 144, 161 (1992) (quoting Hodel v. Virginia Surface Mining & Reclamation Assn., Inc., 452 U.S. 264, 288 (1981)).
  • 73
    South Dakota v. Dole, 483 U.S. 203, 207-08 (1987).
  • 74
    Id. at 211.
  • 75
90 Fed. Reg. at 58500.
  • 76
    Michael Santorlli & Alex Karras, The $21 Billion Question: What To Do With Leftover BEAD Funds?, Broadband Expanded (Oct. 23, 2025), https://broadbandexpanded.com/posts/beadleftover#:~:text=Final%20Proposals%20Show%20$21B+%20in%20Leftover%20Funding,-Since%20June%206&text=Many%20states%20met%20this%20ambitious,percentage%20of%20funds%20left%20over [https://perma.cc/2HSG-Q6BZ].
  • 77
See, e.g., United States v. Lopez, 514 U.S. 549 (1995) (reaffirming that the Tenth Amendment limits federal power, protecting areas of traditional state concern from federal overreach, even if they indirectly affect interstate commerce); Gibbons v. Ogden, 22 U.S. 1 (1824) (recognizing that Congress’ authority to regulate interstate commerce under the Commerce Clause “does not extend to the regulation of the internal commerce of any State… Internal commerce must be that which is wholly carried on within the limits of a State.”).
  • 78
    U.S. Const. art. I, § 8, cl. 3 (“The Congress shall have Power… To regulate Commerce… among the several states.”).
  • 79
    United States v. Lopez, 514 U.S. 549, 558-59 (1995).
