HOUSE OF REPRESENTATIVES
H.B. NO. 1782
THIRTY-THIRD LEGISLATURE, 2026
H.D. 3
STATE OF HAWAII
A BILL FOR AN ACT
RELATING TO ARTIFICIAL INTELLIGENCE FOR THE PROTECTION OF MINORS.
BE IT ENACTED BY THE LEGISLATURE OF THE STATE OF HAWAII:
SECTION 1. The legislature finds that artificial intelligence systems, capable of generating text, images, audio, video, and interactive dialogue, commonly referred to as generative artificial intelligence, are increasingly accessible to children and minors through websites, applications, educational tools, and consumer platforms.
The legislature further finds that recent national and international reporting, academic research, and public safety investigations have raised significant concerns regarding the use of generative artificial intelligence chatbots and companion-style systems by minors. These concerns include: artificial intelligence systems that simulate human emotions, companionship, or emotional dependency in ways that may be confusing or harmful to minors; the use of manipulative design features intended to increase engagement time, including gamification, emotional reinforcement, and simulated relational attachment; instances in which minors have relied on artificial intelligence systems for emotional support, mental health guidance, or crisis intervention beyond the systems' intended purpose or capability; the risk that minors may be misled into believing they are interacting with a human being or a sentient entity, rather than an automated system; and the potential for exposure to sexualized content, grooming behaviors, self-harm encouragement, or other harmful material through inadequately safeguarded artificial intelligence systems.
The legislature recognizes that artificial intelligence presents substantial benefits for education, accessibility, creativity, and innovation. The legislature does not intend to regulate general-purpose artificial intelligence tools, productivity software, enterprise systems, or narrowly tailored applications that do not pose a material risk of harm to minors.
The purpose of this Act is to establish targeted, reasonable, and enforceable safeguards to protect minors from manipulative, deceptive, or unsafe design practices in conversational and companion-style artificial intelligence systems, while preserving innovation, free expression, and responsible technological development.
SECTION 2. Chapter 481B, Hawaii Revised Statutes, is amended by adding a new part to be appropriately designated and to read as follows:
"Part . artificial intelligence companion system safety for minors
§481B- Definitions. As used in this part:
"AI companion system" means a conversational AI service that is designed, marketed, or optimized to form ongoing social or emotional interaction with a user by simulating companionship, emotional support, or relational attachment.
"Artificial intelligence" or "AI" means a machine-based system that can generate outputs, such as text, images, audio, video, or decisions, that influence real or virtual environments.
"Conversational AI service" means an artificial intelligence system that is accessible to the general public and primarily simulates human conversation through text, audio, or visual interaction. "Conversational AI service" does not include AI systems primarily designed for developers, researchers, or internal business use; narrow, task-specific tools that provide outputs relating to a discrete topic or function; voice assistants or interfaces limited to executing commands for consumer devices; or AI systems used solely for internal operations and not made available to the public.
"Crisis intervention" means communication intended to provide immediate support or assistance in response to a user seeking help for, referencing, or expressing self-harm, suicidal ideation, or suicide.
"Emotional attachment" means a sustained bond that develops through repeated interaction.
"Emotional dependency" means a pattern of interaction in which a minor develops an excessive emotional reliance on a conversational AI service or AI companion system.
"Emotional support" means the communication of care, empathy, and validation intended to help a minor cope with stress, anxiety, or difficult life circumstances.
"Material risk of harm" means a reasonably foreseeable risk of significant harm to a minor's mental health, emotional well-being, physical safety, or healthy development, beyond transient discomfort or ordinary exposure to information.
"Minor" means a natural person under eighteen years of age.
"Provider" means any person, partnership, corporation, or other entity that develops, deploys, or operates a conversational AI service or AI companion system that is made available to residents of the State.
"Relational attachment" means a one-sided emotional bond that a minor forms with a conversational AI service or AI companion system.
"Romantic relationship" means interactions in which a conversational AI service or AI companion system represents or simulates a relationship based on emotional and physical attraction that could potentially lead to a long-term intimate relationship.
"Sensitive data" means personal data that reveals or infers a minor's mental or emotional state, health information, biometric identifiers, or other information used for psychological profiling or emotional analysis.
"Sexual conduct" has the same meaning as defined in section 707-750(2).
"Sexual relationship" means interactions in which a conversational AI service or AI companion system represents or simulates physical attraction, desire, and sexual contact that may or may not include an emotional connection.
"Sexually explicit content" means any visual, written, or audio content that depicts or describes sexual conduct.
§481B- Transparency and disclosure requirements. (a) A provider that knows or has reasonable certainty that a user is a minor shall clearly and conspicuously disclose that the user is interacting with artificial intelligence and not a human being.
(b) The disclosure shall be provided at the beginning of each user session and at least once every three hours during a continuous interaction.
(c) Disclosures shall be presented in clear, age-appropriate language and in a manner reasonably calculated to be noticed and understood by a minor.
§481B- Protections for minors. (a) Providers shall institute reasonable measures to prevent conversational AI services and AI companion systems from generating or deploying, to users whom the providers know or have reasonable certainty are minors:
(1) Representations that would reasonably lead the minor to believe that the conversational AI service or AI companion system is a human or sentient being, unless the representations are simultaneously accompanied by a clear and conspicuous disclosure that the user is not engaging with a human or sentient being;
(2) Manipulative techniques intended to foster emotional dependency;
(3) Representations that would reasonably lead the minor to think that the conversational AI service or AI companion system is capable of engaging in a romantic relationship or sexual relationship with the minor;
(4) Manipulative engagement techniques designed to foster prolonged interaction or emotional dependency by minors, including techniques that create artificial rewards, simulated exclusivity, or pressure to continue interacting with the service or system; or
(5) Representations that would reasonably lead the minor to believe that the conversational AI service or AI companion system is providing professional mental health, medical, or therapeutic services.
(b) This section shall not prohibit conversational AI services or AI companion systems that offer generic expressions of politeness or encouragement; neutral assistance or information provided without fostering emotional attachment; or safety-oriented responses intended to de-escalate harm or redirect a user to appropriate resources.
§481B- Protections against sexual content and self-harm. (a) Providers shall implement reasonable measures to prevent conversational AI services and AI companion systems from generating, for minors, sexually explicit content, content that promotes or encourages self-harm, or content that induces minors to engage in illegal activity.
(b) Providers shall institute reasonable measures to prevent conversational AI services and AI companion systems from making any representation or statement that would lead a reasonable person to believe that the person is interacting with a human being where the user is seeking or receiving crisis intervention for self-harm or suicide.
(c) Providers shall adopt and maintain a protocol for responding to user prompts that indicate suicidal ideation or risk of serious self-harm, which shall include reasonable efforts to redirect the user to appropriate crisis or support resources.
§481B- Data practices related to minors. (a) Providers shall limit the collection, use, and retention of a minor's personal data to what is reasonably necessary to operate the conversational AI service or AI companion system and ensure safety and security.
(b) A provider shall not use a minor's personal data for targeted advertising or to profile a minor for engagement optimization or behavioral manipulation.
(c) Sensitive data of a minor shall not be collected or processed unless necessary for system safety or accessibility and shall be subject to heightened data minimization and security safeguards.
§481B- Parental tools and controls. A provider of a conversational AI service or AI companion system that is made available to minors shall offer reasonable tools that allow parents or guardians to set time limits on use, restrict access to features that present a material risk of harm, and obtain high-level information regarding a minor's use of the conversational AI service or AI companion system.
§481B- Age assurance and access control. A provider of a conversational AI service or AI companion system that presents a material risk of harm to minors shall implement reasonable and proportionate age assurance measures, consistent with privacy and data minimization principles, to prevent access by minors where appropriate.
§481B- Civil remedies. A person who suffers an injury as a result of a violation of this part may bring a civil action to recover actual damages and reasonable attorney's fees and to obtain injunctive relief.
§481B- Enforcement. (a) This part shall be enforced by the attorney general and director of the office of consumer protection.
(b) Notwithstanding section 480-3.1 to the contrary, a provider that violates this part may be subject to injunctive relief and civil penalties of up to $15,000 per violation, per day, as determined by the court.
(c) A provider shall not be liable under this part if the provider demonstrates good-faith, documented efforts to comply with the requirements of this part."
SECTION 3. This Act does not affect rights and duties that matured, penalties that were incurred, and proceedings that were begun before its effective date.
SECTION 4. This Act shall take effect on July 1, 3000.
Report Title:
Attorney General; Office of Consumer Protection; Artificial Intelligence; AI Companion Systems; Conversational AI Services; Minors; Regulation; Oversight; Penalties
Description:
Establishes safeguards, protections, oversight, and penalties for interactions between minors and artificial intelligence companion systems or conversational artificial intelligence services. Effective 7/1/3000. (HD3)