400 AI Bills. One Architecture That Passes Them All.

Feb 17, 2026

Your legal team isn't blocking AI hiring tools because they don't understand the technology.

They're blocking them because they understand it perfectly.

They know that when a candidate submits an application, their name, address, employment history, and sometimes Social Security number get transmitted to a vendor's cloud. Then to OpenAI's API. Then back to the vendor. Then to your recruiter's screen.

At each transmission point: liability. At each storage location: exposure. At each external API call: a potential compliance violation under laws that are already in effect and getting stricter.

In 2024, over 400 AI bills were introduced across 41 states. Three major AI hiring laws are already in effect or taking effect in the next 90 days. The EEOC has issued guidance on algorithmic hiring. State attorneys general are watching.

Your legal team is doing their job.

The question isn't whether to comply. It's whether the architecture you choose makes compliance possible—or makes it impossible.

Here's what each regulation actually requires, where SaaS tools fail, and why on-premise deployment is the only architecture that satisfies all of them simultaneously.

The Regulatory Landscape in 2026

Let's start with what's already law.

NYC Local Law 144 (In Effect Since 2023)

NYC Local Law 144 applies to any employer using an Automated Employment Decision Tool (AEDT) to screen candidates or employees for jobs based in New York City.

What it requires:

Annual bias audits. Any AEDT used in hiring must undergo an independent bias audit at least annually. The audit must analyze adverse impact across race, ethnicity, and sex. Results must be published publicly on the employer's website.

Candidate notification. Before using an AEDT to evaluate a candidate, employers must notify the candidate that an AEDT is being used, what job qualifications or characteristics it evaluates, and how candidates can request an alternative selection process.

Data retention. Employers must retain bias audit results and candidate notifications for at least three years.

Where SaaS tools fail:

When the AEDT runs on a vendor's servers, the employer cannot independently audit it. They're dependent on the vendor to provide audit results. The vendor controls the methodology. The vendor controls what gets disclosed.

Legal teams at Fortune 500 companies don't accept compliance by proxy. "Our vendor did the bias audit" is not a defensible answer when an enforcement action arrives.

How on-premise deployment satisfies this:

When the system runs in your VPC, you control the audit. You access the scoring logic directly. You run your own demographic analysis. You produce bias audit results from your own data using your own methodology.

Our system generates EEOC/OFCCP audit exports on demand showing demographic distributions of scored candidates versus hired candidates. Your legal team runs the audit. Your legal team publishes the results. No vendor dependency.

Illinois Artificial Intelligence Video Interview Act (In Effect Since January 1, 2020)

The Illinois AI Video Interview Act regulates employers who use AI to analyze video interviews of Illinois candidates.

What it requires:

Candidate consent. Employers must notify candidates before the interview that AI will be used to analyze their video. They must explain how the AI works and what characteristics it evaluates. Candidates must provide explicit consent.

Restricted sharing. Video interviews analyzed by AI cannot be shared with third parties except for limited purposes (technical support, research). Sharing videos with third-party vendors for AI analysis beyond the original purpose is restricted.

Deletion on request. Employers must delete video interviews within 30 days of a candidate's written request.

Where SaaS tools fail:

Every SaaS AI video interview tool processes videos on vendor servers. Some transmit video data to foundation model APIs for analysis. The sharing restrictions in Illinois law create immediate exposure for this architecture.

"Technical support" exceptions don't cover sending candidate video to OpenAI for inference. That's core product functionality, not technical support.

How on-premise deployment satisfies this:

When video analysis runs in your VPC, videos never leave your environment. There is no third-party sharing to restrict. Deletion requests are fulfilled by deleting records from your own database—no vendor coordination required.

Consent and notification requirements still apply. But the architecture eliminates the data sovereignty violations that make SaaS tools legally risky in Illinois.

Colorado Artificial Intelligence Act (Effective February 1, 2026)

The Colorado AI Act is the most comprehensive state AI law in the country. It applies to "high-risk AI systems" used in "consequential decisions"—explicitly including employment decisions.

What it requires:

Impact assessments. Before deploying a high-risk AI system, employers must conduct an impact assessment evaluating the system's intended purpose, potential risks, and mitigation measures. Assessments must be updated annually.

Bias testing. Employers must implement reasonable care to protect consumers from algorithmic discrimination. This requires testing for bias across protected characteristics before deployment and ongoing monitoring.

Transparency. Employers must disclose when AI is used in consequential decisions and provide a meaningful explanation of how the AI reached its decision.

Opt-out. Consumers (including job candidates) must be able to request a human review instead of AI evaluation. Employers must accommodate these requests.

Data governance. Employers must maintain documentation of data used to train AI systems and demonstrate the data was appropriate for the intended use.

Where SaaS tools fail:

The data governance requirement is where SaaS tools collapse.

Colorado requires employers to maintain documentation of training data. But with SaaS tools, the training data lives on vendor servers. Employers don't know what data the models were trained on. They can't document it. They can't demonstrate it was appropriate.

"Our vendor says the model is fair" doesn't satisfy Colorado's requirement for employer-maintained data governance documentation.

How on-premise deployment satisfies this:

When models train in your VPC on your data, you know exactly what the training data is. Your HRIS data. Your performance reviews. Your top performer patterns. Documented. Auditable. Defensible.

Impact assessments are possible because you can inspect the system directly. Bias testing runs against your own data in your own environment. Opt-out requests route to human reviewers without vendor coordination.

The Colorado AI Act was written assuming that regulated enterprises need to control their AI systems to comply. On-premise deployment is not a workaround. It's the architecture the regulation contemplated.

EEOC Guidance on Algorithmic Hiring Tools

The EEOC's technical assistance document on algorithmic hiring tools establishes that employers remain liable for Title VII violations even when AI tools are used in hiring—and even when those tools are provided by third-party vendors.

What it requires:

Employer responsibility. Employers cannot outsource compliance. "The vendor's algorithm discriminated" is not a defense. The employer is responsible for ensuring AI tools don't create adverse impact.

Adverse impact analysis. Employers must monitor AI hiring tools for adverse impact on protected classes. If adverse impact exists, employers must demonstrate that the selection procedure is job-related and consistent with business necessity.

Documentation. Employers must be able to produce documentation showing how their AI tools work, what data they use, and what safeguards are in place.

Where SaaS tools fail:

The EEOC's guidance explicitly creates employer liability for vendor AI behavior. When a SaaS tool discriminates, the employer is responsible—not the vendor.

But the employer has no visibility into how the SaaS model works. They can't audit it. They can't modify it. They can't demonstrate it doesn't discriminate except by taking the vendor's word for it.

"We relied on our vendor's bias audit" is not a defense under EEOC guidance. The employer is liable.

How on-premise deployment satisfies this:

When the model runs in your VPC, you can audit every decision. You can produce demographic distributions. You can demonstrate bias controls were active. You can explain why any candidate was scored the way they were.

Our system includes EEOC/OFCCP audit exports showing the demographic distribution of scored candidates versus hired candidates at every stage of the funnel. Legal can validate that the system doesn't create adverse impact. Legal can produce this documentation on demand during an enforcement proceeding.
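As a rough sketch of what that export involves, the demographic roll-up is a straightforward aggregation over your own candidate records. The example below assumes demographics come from voluntary self-identification stored separately from the scoring pipeline; the field and group names are illustrative, not our production schema.

    from collections import Counter

    # Illustrative roll-up for an audit export: scored vs. hired counts by group.
    # Assumes demographic_group comes from voluntary self-ID stored outside scoring.
    def funnel_distribution(candidates: list[dict]) -> dict:
        scored = Counter(c["demographic_group"] for c in candidates)
        hired = Counter(
            c["demographic_group"] for c in candidates if c["stage"] == "hired"
        )
        return {
            group: {"scored": scored[group], "hired": hired.get(group, 0)}
            for group in scored
        }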

This is what "defensible AI" means in practice: not a vendor promise, but actual documentation from your own system.

The Three Compliance Questions Every Architecture Must Answer

After working through legal reviews at Fortune 500 financial services and insurance companies, we've identified three questions that determine whether an AI hiring architecture is compliant.

Question 1: Can You Produce Documentation for Any Hiring Decision?

If the EEOC requests documentation for why a candidate was rejected, can you produce it in 24 hours?

SaaS answer: "We'll need to request that from our vendor." Vendor may take days or weeks. Documentation may not exist in the form EEOC requires. Legal has no direct access.

On-premise answer: Pull the decision trace from your own database. Every candidate has a Fit Score (0-100) with plain-English explanation. Every scoring decision is logged. Every bias control application is documented. Available in 24 hours. No vendor coordination.

CNO Financial can produce documentation for any of the 660,000+ candidates processed through our system. Decision trace, scoring explanation, bias control log, demographic data. All in their environment. All on demand.
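For illustration, here is a minimal sketch of what "pull the decision trace from your own database" can look like. The table and column names are hypothetical, not our production schema; the point is that the query runs against infrastructure you control, with no vendor in the loop.

    import json
    import sqlite3

    # Minimal sketch: retrieving a decision trace from a database you control.
    # Table and column names are hypothetical, not the production schema.
    def fetch_decision_trace(db_path: str, candidate_id: str) -> dict:
        conn = sqlite3.connect(db_path)
        try:
            row = conn.execute(
                "SELECT fit_score, explanation, bias_control_log, scored_at "
                "FROM decision_traces WHERE candidate_id = ?",
                (candidate_id,),
            ).fetchone()
        finally:
            conn.close()
        if row is None:
            raise KeyError(f"No decision trace for candidate {candidate_id}")
        fit_score, explanation, bias_log, scored_at = row
        return {
            "candidate_id": candidate_id,
            "fit_score": fit_score,
            "explanation": explanation,
            "bias_control_log": json.loads(bias_log),
            "scored_at": scored_at,
        }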

Question 2: Can You Demonstrate Bias Controls Were Active for Every Decision?

Not "we have bias controls." Every single decision, every single candidate, every single role.

SaaS answer: "Our models are tested for bias before deployment." That's pre-deployment testing. Not per-decision documentation. When an enforcement action asks for proof that bias controls were active for Candidate #247,891 on March 15, 2025, pre-deployment testing isn't an answer.

On-premise answer: ELK Stack logging captures every decision. Two-layer bias control (PII stripping + verification) runs for every candidate. Logs show PII was stripped before scoring. Logs show verification passed. Logs show anonymized candidate was scored. Defensible for every single decision.
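What "defensible for every single decision" means mechanically is that each scoring event emits a structured log record that the ELK pipeline can index and retrieve later. A minimal sketch, with illustrative field names:

    import json
    import logging
    from datetime import datetime, timezone

    logging.basicConfig(level=logging.INFO)
    logger = logging.getLogger("scoring.bias_controls")

    # One structured event per scoring decision; a log shipper forwards these
    # lines into Elasticsearch for later retrieval. Field names are illustrative.
    def log_bias_controls(candidate_id: str, pii_stripped: bool,
                          verification_passed: bool) -> None:
        event = {
            "event": "bias_controls_applied",
            "candidate_id": candidate_id,
            "pii_stripped": pii_stripped,
            "verification_passed": verification_passed,
            "timestamp": datetime.now(timezone.utc).isoformat(),
        }
        logger.info(json.dumps(event))

    log_bias_controls("247891", pii_stripped=True, verification_passed=True)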

Question 3: Do You Control Your Training Data?

Colorado AI Act requires documentation of training data. EEOC guidance requires employers to understand what their AI tools are optimizing for.

SaaS answer: "Our models are trained on large datasets of job postings and resumes." What datasets? From what sources? What biases might they contain? Employers cannot answer these questions because they don't control the training data.

On-premise answer: Models train on your HRIS data. Your performance reviews. Your top performer patterns. You know exactly what the training data is because you own it. Impact assessments are possible. Bias testing runs against known data. Documentation is complete.

How the Two-Layer Bias Control Works

Bias control isn't a feature we added. It's a requirement we built the architecture around.

Regulated enterprises deploying AI in hiring face a specific problem: historical hiring data contains bias. If models train on "who got hired," they learn the biases of previous hiring managers. That's not acceptable legally or ethically.

Our two-layer system addresses this:

Layer 1: PII Stripping

Before any candidate data enters the scoring system, we strip all personally identifiable information:

  • Name (which can indicate ethnicity and gender)

  • Age and graduation years (which indicate age)

  • Photos (which indicate race, gender, age)

  • Address (which can proxy for race via neighborhood demographics)

  • Gender pronouns and indicators

  • Any field that could proxy for protected characteristics

The model never sees demographic data. It scores on behavioral patterns, career trajectory, skill indicators, and communication patterns—not on who the person is.
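As a simplified sketch of Layer 1 (not the production pipeline), the stripping step drops identity fields outright and masks common identifiers in free text before anything reaches the model:

    import re

    # Simplified sketch of Layer 1 (PII stripping), not the production pipeline.
    # Identity fields are dropped; common identifiers in free text are masked.
    DROPPED_FIELDS = {"name", "photo", "address", "date_of_birth", "gender", "pronouns"}

    FREE_TEXT_PATTERNS = [
        (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED-SSN]"),
        (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[REDACTED-EMAIL]"),
        # Blunt illustrative rule: graduation years can proxy for age.
        (re.compile(r"\b(?:graduated|class of)\s+(19|20)\d{2}\b", re.IGNORECASE),
         "[REDACTED-GRAD-YEAR]"),
    ]

    def strip_pii(application: dict) -> dict:
        anonymized = {}
        for field_name, value in application.items():
            if field_name.lower() in DROPPED_FIELDS:
                continue  # the model never sees these fields
            if isinstance(value, str):
                for pattern, replacement in FREE_TEXT_PATTERNS:
                    value = pattern.sub(replacement, value)
            anonymized[field_name] = value
        return anonymized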

Layer 2: Bias Verification

After PII stripping, a separate validation layer verifies that removal was complete. This catches edge cases where demographic information might be embedded in unexpected fields (a resume that mentions a historically Black college, for instance, or a cover letter that mentions a disability accommodation).

Only after verification passes does the anonymized application enter the scoring system.
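A correspondingly simplified sketch of Layer 2: scan the anonymized record for residual identifiers and block scoring if anything survives. The patterns shown are illustrative; the production check covers a much broader set of proxies.

    import re

    # Simplified sketch of Layer 2 (verification). Patterns are illustrative.
    RESIDUAL_PATTERNS = [
        re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                # SSN-like
        re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),              # email address
        re.compile(r"\b(he|she|him|her)\b", re.IGNORECASE),  # gendered pronouns
    ]

    def verify_anonymized(application: dict) -> bool:
        """Return True only if no residual identifiers are detected."""
        for value in application.values():
            if isinstance(value, str) and any(
                p.search(value) for p in RESIDUAL_PATTERNS
            ):
                return False  # block scoring until the record is re-anonymized
        return True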

Ongoing Monitoring

After each hiring cycle, we run demographic analysis comparing:

  • Score distributions across protected classes

  • Advancement rates at each stage by demographic group

  • Hire rates compared to applicant pool demographics

If adverse impact appears at any stage, the system flags it for human review before the next cycle.

EEOC/OFCCP audit exports are available on demand, showing the full demographic picture at every funnel stage.
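The adverse impact flag follows the standard four-fifths rule: each group's advancement rate at a stage is compared with the highest group's rate, and anything below 80% of that rate is flagged for review. A minimal sketch with hypothetical stage counts:

    # Four-fifths (80%) rule check at a single funnel stage.
    # Counts below are hypothetical, for illustration only.
    def four_fifths_flags(advanced: dict, applied: dict, threshold: float = 0.8) -> dict:
        """Flag groups whose advancement rate falls below 80% of the highest rate."""
        rates = {g: advanced[g] / applied[g] for g in applied if applied[g] > 0}
        top_rate = max(rates.values())
        return {g: (rate / top_rate) < threshold for g, rate in rates.items()}

    applied = {"group_a": 400, "group_b": 350, "group_c": 250}
    advanced = {"group_a": 120, "group_b": 100, "group_c": 45}
    print(four_fifths_flags(advanced, applied))
    # group_c advances at 18% vs. 30% for group_a (ratio 0.60), so it is flagged.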

The CNO Financial Compliance Story

CNO Financial is a Fortune 500 insurance company. They are subject to:

  • HIPAA (they process health insurance data)

  • EEOC/OFCCP (federal contractor obligations)

  • State insurance regulations across all 50 states

  • Illinois AI Video Interview Act (Illinois operations)

  • NYC Local Law 144 (New York City operations)

Legal blocked every AI hiring tool for 18 months before we deployed. Not because they didn't want AI. Because every tool they evaluated failed on data sovereignty.

When we entered the process, their legal team asked three questions:

"Where does candidate data go?" Our answer: "Nowhere. It stays in your AWS VPC. Zero external API calls."

"Can you produce documentation for any hiring decision?" Our answer: "Yes. Every candidate has a decision trace with plain-English explanation. Available in 24 hours. No vendor coordination required."

"What's our liability exposure?" Our answer: "Minimal. Data never leaves your environment. You own the models. You control the audit trail. We sign BAAs for HIPAA compliance."

Legal approved in 17 days.

Not because we made better promises. Because the architecture made compliance structurally possible instead of dependent on vendor cooperation.

Thirty days later, CNO deployed company-wide across all 215 locations as mandatory infrastructure. Legal didn't just approve the system. They became advocates for it because it made their compliance posture stronger.

What "Defensible AI" Actually Means

There's a lot of talk about "explainable AI" in hiring. Most of it misses the point.

Explainable AI means you can describe what the model does. "It scores candidates on communication patterns and career trajectory."

Defensible AI means you can defend every individual decision in front of a regulator.

"Candidate #247,891 received a Fit Score of 43/100. Here is the plain-English explanation: strong educational background (72/100), limited relevant experience (38/100), communication pattern mismatch with top performer profiles (31/100). Here is the bias control log showing PII was stripped before scoring. Here is the demographic verification showing no protected class data was present during scoring."

That's defensible. That's what EEOC enforcement actually requires.

Our system produces this documentation for every candidate, automatically, as a byproduct of the scoring process. It's not a separate compliance workflow. It's built into the architecture.
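The worked example above, expressed as the kind of structured payload that accompanies the prose explanation. The schema is illustrative; the scores mirror the example.

    # The worked example above as a structured record. Schema is illustrative;
    # the scores mirror the prose explanation.
    candidate_record = {
        "candidate_id": "247891",
        "fit_score": 43,
        "dimension_scores": {
            "educational_background": 72,
            "relevant_experience": 38,
            "communication_pattern_match": 31,
        },
        "explanation": (
            "Strong educational background; limited relevant experience; "
            "communication pattern mismatch with top performer profiles."
        ),
        "bias_controls": {
            "pii_stripped": True,
            "verification_passed": True,
            "protected_class_data_present_at_scoring": False,
        },
    }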

The Compliance Roadmap for 2026 and Beyond

Here's what legal and compliance teams need to prepare for:

Already In Effect

NYC Local Law 144: Annual bias audits, candidate notification, data retention. Enforcement is active. Non-compliant employers face fines.

Illinois AI Video Interview Act: Consent requirements, sharing restrictions, deletion obligations. In effect since January 1, 2020.

EEOC Guidance: Employer liability for AI hiring tools, adverse impact analysis, documentation requirements. Not a new law—clarification of existing Title VII obligations.

Taking Effect in 2026

Colorado AI Act (February 1, 2026): Impact assessments, bias testing, transparency, opt-out, data governance. The most comprehensive AI hiring regulation in the country. Applies to any employer using high-risk AI in hiring for Colorado-based roles.

On the Horizon

Federal AI legislation: Multiple federal AI bills pending. The trajectory is toward more regulation, not less. Employers building compliant architecture now are ahead of employers who wait.

State expansion: Colorado's model is being watched by other states. California, Virginia, Texas, and New York are all evaluating similar legislation. What complies in Colorado will likely comply nationwide as other states follow.

The regulatory trend is unmistakable: employers must control their AI systems to comply with emerging requirements. Vendor-controlled black boxes are not compliant. Customer-controlled infrastructure is.

The Architecture That Satisfies Everything

Here's the summary of how on-premise deployment satisfies each regulatory framework:

NYC Local Law 144:

  • ✅ Bias audits: Run from your own system, your own data, your own methodology

  • ✅ Candidate notification: Standard workflow integration

  • ✅ Data retention: Your database, your retention policy

Illinois AI Video Interview Act:

  • ✅ No third-party sharing: Videos never leave your environment

  • ✅ Deletion on request: Delete from your database directly

  • ✅ Consent workflow: Integrated into candidate experience

Colorado AI Act:

  • ✅ Impact assessments: Inspect your own system directly

  • ✅ Bias testing: Run against your own training data

  • ✅ Transparency: Plain-English explanations for every decision

  • ✅ Opt-out: Human review workflow built in

  • ✅ Data governance: You own and document the training data

EEOC Guidance:

  • ✅ Employer responsibility: You control the system, you own the compliance

  • ✅ Adverse impact analysis: EEOC/OFCCP audit exports on demand

  • ✅ Documentation: Decision traces for every candidate, available in 24 hours

One architecture. Every framework satisfied.

What This Means for General Counsel

If you're General Counsel or Chief Compliance Officer at a regulated enterprise evaluating AI hiring tools, here's the framework:

Step 1: Map your data flows

Before evaluating any tool, document where candidate data goes. Does it leave your environment? Which vendors touch it? Which APIs process it?

Any tool that sends candidate PII to external APIs creates immediate exposure under CCPA, state AI hiring laws, and EEOC guidance.

Step 2: Validate your audit trail

Can you produce a plain-English explanation for why any candidate was advanced or rejected? Can you export demographic distributions showing no adverse impact? Can you demonstrate bias controls were active for every decision?

If the answer to any of these is "we'd need to ask our vendor," you are not compliant with current regulatory requirements.

Step 3: Assess your training data governance

Under Colorado's AI Act (and the federal legislation likely to follow), you must document what data your AI systems train on and demonstrate it was appropriate.

Can you do that with your current vendor? Or is the training data a black box on their servers?

Step 4: Evaluate architecture, not promises

Vendors will tell you their systems are compliant. Architecture is what actually determines compliance.

Does the system deploy in your environment? Do you own the models? Do you control the audit trail? Can you shut it down instantly if needed?

If the answer to these questions is no, the vendor's compliance promises are not sufficient.

The Cost of Non-Compliance

Getting this wrong is expensive.

NYC Local Law 144 fines run up to $1,500 per violation, and each day an AEDT is used out of compliance counts as a separate violation. For a company processing thousands of candidates annually, enforcement exposure adds up quickly.

EEOC settlements for systemic hiring discrimination regularly reach $1-5 million. Class action exposure for algorithmic discrimination is potentially much larger.

Colorado AI Act violations create exposure to state attorney general enforcement actions, private lawsuits, and reputational damage.

The cost of deploying compliant architecture upfront: $300K-$600K annually.

The cost of an enforcement action: potentially millions in fines, settlements, legal fees, and reputational damage.

This is not a theoretical risk. Enforcement is active. NYC has already brought enforcement actions under Local Law 144. The EEOC has issued guidance specifically because it is investigating algorithmic hiring discrimination.

Legal teams at Fortune 500 companies are not being paranoid when they block AI hiring tools. They are doing their jobs.

The solution isn't to convince legal to accept more risk. It's to deploy architecture that eliminates the risk.

What CNO Learned About Compliance-First AI

CNO Financial started with compliance as the primary requirement. Not "find us the best AI tool." But "find us an AI tool that legal can actually approve."

That constraint produced a better outcome than starting with capability.

By requiring on-premise deployment for compliance, they got:

  • Faster legal approval (17 days vs 18 months for competitors)

  • Higher prediction accuracy (80% vs 20-25% for generic models—because on-premise enables training on performance data)

  • Stronger compliance posture (every decision documented, auditable, defensible)

  • Better data governance (they own the models, the data, the IP)

Compliance-first architecture didn't compromise capability. It enabled it.

This is the insight that General Counsel and Chief Compliance Officers bring to AI hiring decisions that CHROs and recruiting leaders sometimes miss: the constraints that make AI compliant are the same constraints that make it more accurate.

On-premise deployment requires data sovereignty. Data sovereignty enables performance data access. Performance data access enables accurate predictions.

Compliance and capability point in the same direction. The architecture that passes legal review is also the architecture that works best.

FAQs

We operate in multiple states. How do we comply with different state laws simultaneously?

On-premise deployment creates a compliance foundation that satisfies all current state AI hiring laws simultaneously.

The key requirements across NYC Local Law 144, Illinois AI Video Interview Act, and Colorado AI Act all center on the same core principles: data sovereignty, algorithmic transparency, bias audits, and employer control.

When your AI system deploys in your VPC:

  • Data sovereignty is satisfied in every state (data never leaves your environment)

  • Algorithmic transparency is possible in every state (you control the system and can explain every decision)

  • Bias audits are possible in every state (you run audits against your own data)

  • Employer control exists in every state (you can modify or shut down the system)

State-specific requirements (like Illinois's consent workflow or NYC's public bias audit publication) are process requirements that apply on top of this foundation. They require workflow changes, not architectural changes.

One compliant architecture. State-specific process compliance built on top.

What happens if regulations change after we deploy?

Because you own the architecture, you can adapt to regulatory changes without vendor cooperation.

When Colorado's AI Act takes effect February 1, 2026, companies using SaaS tools face a problem: they can't modify the vendor's system to comply. They're dependent on the vendor to update their product. If the vendor is slow, or decides not to support the new requirement, the employer is still liable.

When you own the infrastructure, you adapt directly. New documentation requirement? Update your logging configuration. New opt-out workflow required? Build it into your candidate experience. New bias testing methodology required? Apply it to your own data.

Regulatory agility is a function of system control. You can only adapt quickly to regulatory changes in systems you control.

Can we use this documentation in an actual EEOC investigation?

Yes. The documentation our system produces is designed for exactly this purpose.

For any candidate, you can produce:

  • Fit Score (0-100) with plain-English explanation

  • Scoring breakdown by evaluation dimension

  • Bias control log showing PII stripping and verification

  • Demographic data showing no protected class information was present during scoring

  • Comparison of candidate score to role threshold and scoring distribution

For any hiring cohort, you can produce:

  • EEOC/OFCCP adverse impact analysis

  • Score distributions across protected classes

  • Advancement rates at each funnel stage by demographic group

  • Hire rate compared to applicant pool demographics

  • Four-fifths (80%) rule analysis by protected class

This is the documentation EEOC investigators request. Our system produces it automatically. No vendor coordination. No data reconstruction. Available from your own database on demand.

CNO Financial's legal team reviewed this documentation capability during the 17-day approval process. It was a significant factor in their decision to approve company-wide deployment.

Ready to discuss compliance architecture for your organization? Visit nodes.inc to start the conversation about legally defensible talent intelligence infrastructure for Fortune 500 enterprises.

See what we're building. Nodes is reimagining enterprise hiring. We'd love to talk.