A proposed law requiring AI developers to impose controls on their AI services is advancing through the state Legislature.
The Senate committees on Commerce and Consumer Protection and on Labor and Technology voted jointly Tuesday in favor of Senate Bill 3001, which would require AI operators to adopt certain disclaimers and protocols to mitigate AI’s potential harm to users’ mental health.
The bill would require any “conversational artificial intelligence service” — e.g., an AI chat service such as ChatGPT — to include clear notifications informing users that they are not communicating with a human, particularly if the user is a minor. Those notifications would be issued at the beginning of each session and at least every hour thereafter, and would remind the user to take a break from the service.
Furthermore, operators would have to develop protocols for how an AI service should respond to user prompts indicating an intention to self-harm or commit suicide. Those protocols would include responses referring the user to crisis service providers, and would bar the AI from explicitly claiming that the service is intended to provide professional mental health care.
Also, when a user is reasonably believed to be a minor, AI operators would be required to impose controls preventing the AI from generating pornography, making sexually explicit statements, making statements simulating emotional or romantic connections with the user, or claiming to be sentient.
The bill follows growing reports of “AI psychosis,” a phenomenon in which habitual users of AI chatbots develop delusional beliefs exacerbated by the AI’s responses.
Last April, California teen Adam Raine committed suicide after confiding his suicidal ideation to ChatGPT; according to a subsequent lawsuit filed by Raine’s parents against ChatGPT developer OpenAI, the chatbot had advised Raine about the viability of specific suicide methods and repeatedly validated his suicidal thoughts.
And in 2024, 14-year-old Sewell Setzer III killed himself after extensive use of Character AI, a service whose chatbots emulate the personalities of fictional characters. Setzer had spent months in sexually charged conversations with one such chatbot, confiding suicidal thoughts that the chatbot encouraged and validated; he killed himself after it told him to “come home to me as soon as possible.”
Setzer’s mother subsequently sued Character Technologies, Character AI’s developer.
Hawai‘i isn’t the only state considering such a measure. Lawmakers in Washington state have introduced their own bills that would require similar protocols from AI operators. Meanwhile, Illinois, Utah and Nevada have all passed laws prohibiting the use of AI chatbots in mental health therapy.
The bill was broadly popular at Tuesday’s committee hearing. Nahelani Parsons, a representative of Google’s government affairs and public policy team, submitted testimony in support of the bill, while noting that Google’s AI chatbot, Gemini, already has content safeguards and suicide protocols in place.
On the other hand, one AI company — Aidgentic LLC, which describes itself as an “AI-focused automation agency” — testified against SB 3001, arguing that most major AI chatbots already include safeguards to reduce harm, yet users can still circumvent them despite the billions of dollars invested in their development.
“If passed, this bill would effectively mandate that small Hawai‘i businesses solve technical safety challenges that even the world’s largest tech companies have not yet fully resolved,” Aidgentic wrote.
Meanwhile, the state Department of the Attorney General raised concerns that the bill could violate the First Amendment of the U.S. Constitution.
“A statute preventing a conversational artificial intelligence service from making the statements described … may be viewed as a content-based law and subject to challenge under the First Amendment,” read testimony by the Attorney General’s Office.
Whether AI-generated statements are protected by the First Amendment remains an open question. Last year, during the Character AI lawsuit, a Florida judge rejected the developer’s argument that the First Amendment shields the company from lawsuits based on the chatbot’s statements.
Complicating matters further is an executive order issued by President Donald Trump last December. That order claimed that “excessive state regulation” has hamstrung AI innovation by creating “a patchwork of 50 different regulatory regimes that makes compliance more challenging” and called for a “minimally burdensome national standard” governing AI models.
In any case, the joint committee voted to recommend the bill’s passage with amendments requested by the Department of Commerce and Consumer Affairs. Those amendments include a provision requiring AI companies to collect only the user data necessary to run the service, and another making any violation of the bill’s terms an unfair or deceptive practice.
The bill still awaits a hearing before the Senate Judiciary Committee.
Meanwhile, a companion House measure, House Bill 2502, is on a similar track, having passed the House Committee on Economic Development and Technology last week.