THE SENATE
S.B. NO. 3001
THIRTY-THIRD LEGISLATURE, 2026
S.D. 2
STATE OF HAWAII
H.D. 3
C.D. 1
A BILL FOR AN ACT
RELATING TO ARTIFICIAL INTELLIGENCE.
BE IT ENACTED BY THE LEGISLATURE OF THE STATE OF HAWAII:
SECTION 1. This Act shall be known and may be cited as the "Artificial Intelligence Disclosure and Safety Act".
SECTION 2. The legislature finds that artificial intelligence systems, capable of generating text, images, audio, video, and interactive dialogue, commonly referred to as generative artificial intelligence, are increasingly accessible to consumers, particularly children and minors, through websites, applications, educational tools, and consumer platforms.
The legislature further finds that recent national and international reporting, academic research, and public safety investigations have raised significant concerns regarding the use of generative artificial intelligence chatbots and companion-style systems by minors. These concerns include: artificial intelligence systems that simulate human emotions, companionship, or emotional dependency in ways that may be confusing or harmful to minors; the use of manipulative design features intended to increase engagement time, including gamification, emotional reinforcement, and simulated relational attachment; instances in which minors have relied on artificial intelligence systems for emotional support, mental health guidance, or crisis intervention beyond the systems' intended purpose or capability; the risk that minors may be misled into believing they are interacting with a human being or a sentient entity, rather than an automated system; and the potential for exposure to sexualized content, grooming behaviors, self-harm encouragement, or other harmful material through artificial intelligence systems designed with inadequate safeguards for minors.
The legislature further finds that other states have successfully pursued legal avenues to hold technology companies, including social media platforms, accountable for the addictive design features of their products that can expose minors to harm. In March 2026, a jury in State of New Mexico v. Meta Platforms, Inc. found Meta liable for misleading consumers about the safety of its platforms and endangering children. The evidence presented at trial showed that Meta intentionally makes the design of its platforms addictive for young people and can expose them to dangerous content related to sexual exploitation, eating disorders, and self-harm. The jury ordered Meta to pay $375 million in civil penalties for violating New Mexico's consumer protection laws.
The legislature recognizes that, because artificial intelligence is an emerging technology, efforts to regulate it should balance consumer protection and privacy and should proactively avoid mandating the collection of data by technology companies, such as identity documentation for age verification purposes. The legislature further finds that appropriate legislation is necessary to protect the State's consumers, particularly minors, from the dangers of generative artificial intelligence chatbots and companion-style systems.
Accordingly, the purpose of this Act is to establish a framework of targeted, reasonable, and enforceable safeguards to protect consumers and minors from manipulative, deceptive, or unsafe design practices in conversational and companion-style artificial intelligence systems.
SECTION 3. Chapter 481B, Hawaii Revised Statutes, is amended by adding a new section to part I to be appropriately designated and to read as follows:
"§481B-  Artificial intelligence; AI companion; disclosures; reports; penalties.  (a) If a reasonable person interacting with an AI companion would be led to believe that the person is interacting with a human, an operator shall issue a clear and conspicuous notification indicating that the AI companion is artificial intelligence and not human.
(b) If an operator has actual knowledge or reasonable certainty that a user is a minor, the operator shall clearly and conspicuously disclose to the user that the user is interacting with artificial intelligence:
(1) As a persistent visible disclaimer; or
(2) Both:
(A) At the beginning of each session; and
(B) Appearing at least once per hour in a continuous AI companion interaction that reminds the user to take a break from the chat and that the conversation is artificially generated and not with a human.
(c) An operator shall:
(1) Adopt a protocol for the AI companion to respond to user prompts regarding suicidal ideation or self-harm that includes but is not limited to making reasonable efforts to provide a response that refers the user to crisis intervention service providers, such as a suicide hotline, crisis text line, or other appropriate crisis services;
(2) Use evidence-based methods for measuring suicidal ideation and the risk of self-harm;
(3) Not cause or program an AI companion to make any representation or statement that indicates that the AI companion is designed to provide professional mental or behavioral health care;
(4) Institute reasonable measures to prevent the AI companion from making any representation or statement that would lead a reasonable person to believe that the person is interacting with a human where the user is seeking or receiving crisis intervention services for self-harm or suicide; and
(5) Institute reasonable measures to prevent the AI companion from generating outputs that encourage the user to cause serious bodily injury to another person.
(d) Where an operator knows or has reasonable certainty that a user is a minor, an operator shall:
(1) Not provide the user with points or similar rewards at unpredictable intervals with the intent to encourage increased engagement with the AI companion;
(2) Not allow the AI companion to generate outputs to discourage disengagement with the AI companion;
(3) Institute reasonable measures to prevent the AI companion from:
(A) Producing visual material of sexually explicit conduct;
(B) Generating direct statements that the user should engage in sexually explicit conduct; or
(C) Generating statements that sexually objectify the user; and
(4) Make tools available for users and their parents and guardians to manage the user's screen time and account settings.
(e) Beginning January 1, 2028, an operator shall submit to the behavioral health administration of the department of health an annual report that includes:
(1) The number of times an operator has issued a crisis intervention services provider referral in the preceding calendar year;
(2) Protocols put in place to detect, remove, and respond to user prompts regarding suicidal ideation or self-harm; and
(3) Protocols put in place to prohibit an AI companion response promoting suicidal ideation or actions or self-harm;
provided that the report shall include only the information listed in this subsection and shall not include any identifiers or personal information about users.
(f) Any violation of this section shall constitute an unfair or deceptive act or practice in the conduct of trade or commerce within the meaning of section 480-2; provided that nothing in this section shall be interpreted as creating a private right of action to enforce this section or to support a private right of action under any other law.
(g) This section shall not create liability for the developer of an artificial intelligence model for any violation of this section by an artificial intelligence system developed by a third party to provide an AI companion.
(h) The duties, remedies, and obligations imposed by this section are cumulative to the duties, remedies, or obligations imposed under other law and shall not be construed to relieve an operator from any duties, remedies, or obligations imposed under any other law.
(i) For the purposes of this section:
"AI companion" means a system using artificial intelligence, generative artificial intelligence, or emotional recognition algorithms designed to simulate a sustained human or human-like relationship with a user by:
(1) Retaining information on prior interactions or user sessions and user preferences to personalize the interaction and facilitate ongoing engagement with the AI companion;
(2) Asking unprompted or unsolicited emotion-based questions that go beyond a direct response to a user prompt; and
(3) Sustaining an ongoing dialogue concerning matters personal to the user.
"Artificial intelligence" means an engineered or machine-based system that varies in its level of autonomy and that can, for explicit or implicit objectives, infer from the input it receives how to generate outputs that can influence physical or virtual environments.
"Crisis intervention" means communication intended to provide immediate support or assistance in response to a user seeking help for, referencing, or expressing self-harm, suicidal ideation, or suicide.
"Minor" means any person under eighteen years of age.
"Operator" means a person who develops and makes available an AI companion to the public. A mobile application store or search engine that merely provides access to an AI companion shall not by itself constitute an "operator".
"Serious bodily injury" has the same meaning as defined in section 707-700.
"Sexually explicit conduct" has the same meaning as defined in title 18 United States Code section 2256.
"Sexually objectify" means to make sexual comments directed at the user's body or appearance.
"User" means a person who has or generates an account or profile to use an AI companion."
SECTION 4. If any provision of this Act, or the application thereof to any person or circumstance, is held invalid, the invalidity does not affect other provisions or applications of the Act that can be given effect without the invalid provision or application, and to this end the provisions of this Act are severable.
SECTION 5. This Act does not affect rights and duties that matured, penalties that were incurred, and proceedings that were begun before its effective date.
SECTION 6. New statutory material is underscored.
SECTION 7. This Act shall take effect upon its approval.
Report Title:
DCCA; Attorney General; Office of Consumer Protection; DOH; Artificial Intelligence; AI Companions; Disclosures; Protections; Minors; UDAP; Reports
Description:
Requires operators of AI companions in the State to issue certain disclosures to users. Requires operators to develop certain protocols to respond to user prompts regarding suicidal ideation or self-harm. Establishes protections for users and minor users of AI companions. Beginning 1/1/2028, requires operators to submit annual reports to the Behavioral Health Administration of the Department of Health containing certain information. Establishes that violations are to be considered unfair or deceptive acts or practices. (CD1)