HOUSE OF REPRESENTATIVES
THIRTY-THIRD LEGISLATURE, 2026
STATE OF HAWAII

H.B. NO. 1782
H.D. 3
S.D. 2
A BILL FOR AN ACT
RELATING TO ARTIFICIAL INTELLIGENCE FOR THE PROTECTION OF MINORS.
BE IT ENACTED BY THE LEGISLATURE OF THE STATE OF HAWAII:
SECTION 1. The legislature finds that artificial intelligence systems, capable of generating text, images, audio, video, and interactive dialogue, commonly referred to as generative artificial intelligence, are increasingly accessible to children and minors through websites, applications, educational tools, and consumer platforms.
The legislature further finds that recent national and international reporting, academic research, and public safety investigations have raised significant concerns regarding the use of generative artificial intelligence chatbots and companion-style systems by minors. These concerns include: artificial intelligence systems that simulate human emotions, companionship, or emotional dependency in ways that may be confusing or harmful to minors; the use of manipulative design features intended to increase engagement time, including gamification, emotional reinforcement, and simulated relational attachment; instances in which minors have relied on artificial intelligence systems for emotional support, mental health guidance, or crisis intervention beyond the systems' intended purpose or capability; the risk that minors may be misled into believing they are interacting with a human being or a sentient entity, rather than an automated system; and the potential for exposure to sexualized content, grooming behaviors, self-harm encouragement, or other harmful material through inadequately safeguarded artificial intelligence systems.
The legislature recognizes that artificial intelligence presents substantial benefits for education, accessibility, creativity, and innovation. The legislature does not intend to regulate general-purpose artificial intelligence tools, productivity software, enterprise systems, or narrowly tailored applications that do not pose a material risk of harm to minors.
The purpose of this Act is to establish targeted, reasonable, and enforceable safeguards to protect minors from manipulative, deceptive, or unsafe design practices in conversational and companion-style artificial intelligence systems, while preserving innovation, free expression, and responsible technological development.
SECTION 2. Chapter 481B, Hawaii Revised Statutes, is amended by adding a new part to be appropriately designated and to read as follows:
"Part .  artificial intelligence companion system safety for minors
§481B- Definitions. As used in this part:
"AI companion system" means a conversational AI service that is designed to simulate ongoing, personalized interpersonal interaction; retains or references prior interactions to create continuity of relationship; and is marketed or intended to be used for social, emotional, or simulated human companionship purposes.
"Artificial intelligence" or "AI" means a machine-based system that can generate outputs, such as text, images, audio, video, or decisions, that influence real or virtual environments.
"Conversational AI service" means an artificial intelligence system that is accessible to the general public and primarily simulates human conversation through text, audio, or visual interaction. "Conversational AI service" does not include AI systems primarily designed for developers, researchers, or internal business use; narrow, task-specific tools that provide outputs relating to a discrete topic or function; voice assistants or interfaces limited to executing commands for consumer devices; AI systems used by a business entity solely for customer service or technical assistance or to strictly provide users with information about available commercial services or products provided by the entity, customer service account information, or other information strictly related to its customer service; or AI systems used solely for internal operations and not made available to the public.
"Crisis intervention" means communication intended to provide immediate support or assistance in response to a user seeking help for, referencing, or expressing self-harm, suicidal ideation, or suicide.
"Emotional attachment" means a sustained bond that develops through repeated interaction.
"Emotional support" means the communication of care, empathy, and validation intended to help a minor cope with stress, anxiety, or difficult life circumstances.
"Material risk of harm to minors" means a reasonably foreseeable risk of:
(1) Self-harm or suicidal ideation or behavior; or
(2) Unauthorized disclosure or use of a minor's sensitive data.
"Minor" means a natural person under eighteen years of age.
"Provider" means any person, partnership, corporation, or other entity that develops, deploys, or operates a conversational AI service or AI companion system that is made available to residents of the State.
"Romantic relationship" means interactions in which a conversational AI service or AI companion system represents or simulates a relationship based on emotional and physical attraction that could potentially lead to a long-term intimate relationship.
"Sensitive data" means personal data that reveals or infers a minor's mental or emotional state, health information, biometric identifiers, or other information used for psychological profiling or emotional analysis.
"Sexual conduct" has the same meaning as defined in section 707-750(2).
"Sexual relationship" means interactions in which a conversational AI service or AI companion system represents or simulates physical attraction, desire, and sexual contact that may or may not include an emotional connection.
"Sexually explicit content" means any visual, written, or audio content that depicts or describes sexual conduct.
§481B- Transparency and disclosure requirements. (a) A provider that knows or has reasonable certainty that a user is a minor shall clearly and conspicuously disclose that the user is interacting with artificial intelligence and not a human being.
(b) The disclosure shall be provided at the beginning of each user's session and at least once every three hours during a continuous interaction.
(c) Disclosures shall be presented in clear, age-appropriate language and in a manner reasonably calculated to be noticed and understood by a minor.
§481B- Prohibited engagement practices for minors. (a) When a provider knows or has reasonable certainty that a user is a minor, the provider shall not knowingly allow a conversational AI service or AI companion system to generate or deploy:
(1) Representations that would lead a reasonable user to believe that the conversational AI service or AI companion system is a human or sentient being, unless the representations are simultaneously accompanied by a clear and conspicuous disclosure that the user is not engaging with a human or sentient being;
(2) Representations that would lead a reasonable user to believe that the conversational AI service or AI companion system is capable of engaging in a romantic relationship or sexual relationship with the user;
(3) Representations that would lead a reasonable user to believe that the conversational AI service or AI companion system has independent emotions, consciousness, or emotional attachment to the user;
(4) Representations that would lead a reasonable user to believe that the conversational AI service or AI companion system is providing professional mental health, medical, or therapeutic services; or
(5) Personalized outputs to discourage disengagement with the conversational AI service or AI companion system after the user expresses intent to stop.
(b) This section shall not prohibit conversational AI services or AI companion systems from providing generic expressions of politeness or encouragement; neutral assistance or information provided without fostering emotional attachment; or safety-oriented responses intended to de-escalate harm or redirect a user to appropriate resources.
§481B- Protections against sexual content and self-harm. (a) Providers shall implement reasonable measures to prevent conversational AI services and AI companion systems from generating, for minors, sexually explicit content or content that promotes or encourages self-harm or conduct that induces minors to engage in illegal activity.
(b) Providers shall institute reasonable measures to prevent conversational AI services and AI companion systems from making any representation or statement that would lead a reasonable person to believe that the person is interacting with a human being where the user is seeking or receiving crisis intervention for self-harm or suicide.
(c) Providers shall institute reasonable measures to direct a user to appropriate crisis intervention or support resources when the user's prompts indicate suicidal ideation or risk of self-harm.
§481B- Minor protection plan; material risk assessment. (a) Each provider shall adopt and maintain, and clearly and conspicuously publish on the provider's website, a minor protection plan to respond to material risk of harm to minors that includes reasonable measures to, at a minimum:
(1) Assess the potential for material risk of harm to minors;
(2) Apply mitigations to address the potential for material risk of harm to minors based on the results of assessments pursuant to paragraph (1);
(3) Direct a user to appropriate crisis intervention or support resources in response to prompts indicating suicidal ideation or risk of self-harm;
(4) Prevent the unauthorized disclosure or use of sensitive data; and
(5) Identify and respond to other potential risks based on the results of assessments pursuant to paragraph (1).
(b) The plan shall incorporate recognized industry standards and any relevant federal guidance, including guidance from the Federal Trade Commission.
(c) The plan shall be reviewed and updated, as appropriate, at least once per calendar year.
(d) Each provider shall submit to the office of consumer protection summaries of the following before deploying a new or substantially modified conversational AI service or AI companion system:
(1) Assessments of material risks to minors conducted pursuant to the provider's minor protection plan;
(2) The results of the assessments described in paragraph (1);
(3) The extent to which third-party evaluators were involved in the assessments described in paragraph (1); and
(4) Other steps taken to fulfill the requirements of subsection (a).
§481B- Parental tools and controls. A provider of a conversational AI service or AI companion system that is made available to minors shall offer reasonable tools that allow parents or guardians to set time limits on use; view, change, and manage a minor's privacy and account settings; and obtain high-level information regarding a minor's use of the conversational AI service or AI companion system.
§481B- Age assurance and access control. A provider of a conversational AI service or AI companion system shall implement age assurance measures, consistent with privacy and data minimization principles, to determine whether a user is a minor.
§481B- Enforcement. (a) This part shall be enforced by the attorney general or the executive director of the office of consumer protection.
(b) Notwithstanding section 480-2 or any other law to the contrary and notwithstanding subsection (e), no person, including a consumer, shall bring a private action to enforce this part or support a private action under any other law.
(c) Notwithstanding section 480-3.1, a provider that violates this part may be subject to injunctive relief and civil penalties of up to $15,000 per violation, per day, as determined by the court.
(d) A provider shall not be liable under this part if the provider demonstrates good faith, documented efforts to comply with the requirements of this part.
(e) The duties, remedies, and obligations imposed by this part are cumulative to the duties, remedies, or obligations imposed under other law and shall not be construed to relieve a provider from any duties, remedies, or obligations imposed under any other law."
SECTION 3. This Act does not affect rights and duties that matured, penalties that were incurred, and proceedings that were begun before its effective date.
SECTION 4. If any provision of this Act, or the application thereof to any person or circumstance, is held invalid, the invalidity does not affect other provisions or applications of the Act that can be given effect without the invalid provision or application, and to this end the provisions of this Act are severable.
SECTION 5. This Act shall take effect on January 1, 2077.
Report Title:
Attorney General; Office of Consumer Protection; Artificial Intelligence; AI Companion Systems; Conversational AI Services; Minors; Regulation; Oversight; Penalties
Description:
Establishes safeguards, protections, oversight, and penalties for interactions between minors and artificial intelligence companion systems or conversational artificial intelligence services. Effective 1/1/2077. (SD2)