Washington state lawmakers are taking another run at regulating artificial intelligence, rolling out a slate of bills this session aimed at curbing discrimination, limiting AI use in schools, and imposing new obligations on companies building emotionally responsive AI products.
The state has passed narrow AI-related laws in the past — including limits on facial recognition and distributing deepfakes — but broader efforts have often stalled, including proposals last year focused on AI development transparency and disclosure.
This year’s bills focus on children, mental health, and high-stakes decisions like hiring, housing, and lending. The bills could affect HR software vendors, ed-tech companies, mental health startups, and generative AI platforms operating in Washington.
The proposals come as Congress continues to debate AI oversight with little concrete action, leaving states to experiment with their own guardrails. An interim report issued recently by the Washington state AI Task Force notes that the federal government’s “hands-off approach” to AI has created “a significant regulatory gap that leaves Washingtonians vulnerable.”
Here’s a look at five AI-related bills that were pre-filed before the official start of the legislative session, which kicks off Monday.
HB 2157
This sweeping bill would regulate so-called high-risk AI systems used to make or substantially influence decisions about employment, housing, credit, health care, education, insurance, and parole.
Companies that develop or deploy these systems in Washington would be required to assess and mitigate discrimination risks, disclose when people are interacting with AI, and explain how AI contributed to adverse decisions. Consumers could also receive explanations for decisions influenced by AI.
The proposal wouldn’t apply to low-risk tools like spam filters or basic customer-service chatbots, nor to AI used strictly for research. Still, it could affect a wide range of tech companies, including HR software vendors, fintech firms, insurance platforms, and large employers using automated screening tools. The bill would take effect on Jan. 1, 2027.
SB 5984
This bill, requested by Gov. Bob Ferguson, focuses on AI companion chatbots and would require repeated disclosures that an AI chatbot is not human, prohibit sexually explicit content for minors, and mandate suicide-prevention protocols. Violations would fall under Washington’s Consumer Protection Act.
The bill’s findings warn that AI companion chatbots can blur the line between human and artificial interaction and may contribute to emotional dependency or reinforce harmful ideation, including self-harm, particularly among minors.
These rules could directly impact mental health and wellness startups experimenting with AI-driven therapy or emotional support tools — including companies exploring AI-based mental health services, such as Seattle startup NewDays.
Babak Parviz, CEO of NewDays and a former leader at Amazon, said he believes the bill has good intentions but added that it could be difficult to enforce because “building a long-term relationship is so vaguely defined here.”
Parviz said it’s important to watch systems that interact with minors to make sure they don’t cause harm. “For critical AI systems that interact with people, it’s important to have a layer of human supervision,” he said. “For example, our AI system in clinic use is under the supervision of an expert human clinician.”
OpenAI and Common Sense Media are partnering on a ballot initiative in California also focused on chatbots and minors.
SB 5870
A related bill goes even further, creating potential civil liability when an AI system is alleged to have contributed to a person’s suicide.
Under this bill, companies could face lawsuits if their AI system encouraged self-harm, provided instructions, or failed to direct users to crisis resources — and would be barred from arguing that the harm was caused solely by autonomous AI behavior.
If enacted, the measure would explicitly link AI system design and operation to wrongful-death claims. The bill comes amid growing legal scrutiny of companion-style chatbots, including lawsuits involving Character.AI and OpenAI.
SB 5956
This bill targets AI use in K–12 schools, banning predictive “risk scores” that label students as likely troublemakers and prohibiting real-time biometric surveillance such as facial recognition.
Schools would also be barred from using AI as the sole basis for suspensions, expulsions, or referrals to law enforcement, reinforcing that human judgment must remain central to discipline decisions.
Educators and civil rights advocates have raised alarms about predictive tools that can amplify disparities in discipline.
SB 5886
This proposal updates Washington’s right-of-publicity law to explicitly cover AI-generated forged digital likenesses, including convincing voice clones and synthetic images.
Using someone’s AI-generated likeness for commercial purposes without consent could expose companies to liability, reinforcing that existing identity protections apply in the AI era — and not just for celebrities and public figures.