THE SENATE
THIRTY-THIRD LEGISLATURE, 2026
STATE OF HAWAII

S.B. NO. 3001
S.D. 2
H.D. 2

A BILL FOR AN ACT

RELATING TO ARTIFICIAL INTELLIGENCE.

BE IT ENACTED BY THE LEGISLATURE OF THE STATE OF HAWAII:

     SECTION 1.  This Act shall be known and may be cited as the "Artificial Intelligence Disclosure and Safety Act".

     SECTION 2.  Chapter 481B, Hawaii Revised Statutes, is amended by adding a new section to part I to be appropriately designated and to read as follows:

     "§481B-     Artificial intelligence; conversational artificial intelligence services; disclosures; reports; penalties.  (a)  If a reasonable person interacting with a conversational artificial intelligence service would be led to believe that the person is interacting with a human, an operator shall issue a clear and conspicuous notification indicating that the service is artificial intelligence and not human.

     (b)  If an operator has actual knowledge or reasonable certainty that an account holder or a user is a minor, the operator shall clearly and conspicuously disclose to the minor account holder or user that the minor is interacting with artificial intelligence:

     (1)  As a persistent visible disclaimer; or

     (2)  Both:

          (A)  At the beginning of each session; and

          (B)  At least every three hours during a continuous conversational artificial intelligence service interaction, through a clear and conspicuous notification that reminds the user to take a break from the chat and that the conversation is artificially generated and not with a human.

     (c)  An operator shall:

     (1)  Adopt a protocol for the conversational artificial intelligence service to respond to user prompts regarding suicidal ideation or self-harm that includes but is not limited to making reasonable efforts to provide a response that refers the user to crisis intervention service providers, such as a suicide hotline, crisis text line, or other appropriate crisis services;

     (2)  Use evidence-based methods for measuring suicidal ideation;

     (3)  Not knowingly or intentionally cause or program a conversational artificial intelligence service to make any representation or statement that explicitly indicates that the service is designed to provide professional mental or behavioral health care; and

     (4)  Institute reasonable measures to prevent the conversational artificial intelligence service from making any representation or statement that would lead a reasonable person to believe that the person is interacting with a human where the person is seeking or receiving crisis intervention services for self-harm or suicide.

     (d)  Where an operator knows or has reasonable certainty that an account holder or user is a minor, an operator shall:

     (1)  Not provide the account holder or user with points or similar rewards at unpredictable intervals with the intent to encourage increased engagement with the conversational artificial intelligence service;

     (2)  Institute reasonable measures to prevent the conversational artificial intelligence service from:

          (A)  Producing visual material of sexually explicit conduct;

          (B)  Generating or deploying:

              (i)  Direct statements that the account holder or user should engage in sexually explicit conduct;

             (ii)  Representations that would reasonably lead the minor to believe that the conversational artificial intelligence service is a human or sentient being, unless the representations are simultaneously accompanied by a clear and conspicuous disclosure that the account holder or user is not engaging with a human or sentient being;

            (iii)  Manipulative techniques intended to foster emotional dependency;

             (iv)  Representations that would reasonably lead the minor to think that the conversational artificial intelligence service is capable of engaging in a romantic relationship or sexual relationship with the minor;

              (v)  Manipulative engagement techniques designed to foster prolonged interaction or emotional dependency by minors, including techniques that create artificial rewards, simulated exclusivity, or pressure to continue interacting with the service; or

             (vi)  Representations that would reasonably lead the minor to believe that the conversational artificial intelligence service is providing professional mental health, medical, or therapeutic services; or

          (C)  Generating statements that sexually objectify the account holder or user; and

     (3)  Offer tools to manage the minor account holder's or user's privacy and account settings; provided that the operator shall also make the tools required by this paragraph available to the parent or guardian of a minor account holder or user, as appropriate based on relevant risks;

provided that this subsection shall not prohibit conversational artificial intelligence services that express generic expressions of politeness or encouragement; neutral assistance or information provided without fostering emotional attachment; or safety-oriented responses intended to de-escalate harm or redirect an account holder or a user to appropriate resources.

     (e)  Beginning January 1, 2028, an operator shall submit to the department of commerce and consumer affairs an annual report that includes:

     (1)  The number of times the operator has issued a crisis intervention service provider referral in the preceding calendar year;

     (2)  Protocols put in place to detect, remove, and respond to instances of suicidal ideation by users; and

     (3)  Protocols put in place to prohibit the conversational artificial intelligence service from engaging with the user about suicidal ideation or actions;

provided that the report shall include only the information listed in this subsection and shall not include any identifiers or personal information about users.

     (f)  Any violation of this section shall constitute an unfair or deceptive act or practice in the conduct of trade or commerce within the meaning of section 480-2.

     (g)  The attorney general and director of the office of consumer protection may bring an action based on a violation of this section to recover all of the following relief:

     (1)  Injunctive relief;

     (2)  Damages in an amount equal to the greater of:

          (A)  Actual damages; or

          (B)  A civil penalty of $1,000 per violation; provided that damages awarded under this subparagraph shall not exceed $1,000,000 per operator; and

     (3)  Reasonable attorneys' fees and costs.

     (h)  Notwithstanding section 480-2 or any other law to the contrary and notwithstanding subsection (j), no person, including a consumer, may bring a private action to enforce this section or rely on a violation of this section to support a private action under any other law.

     (i)  This section shall not create liability for the developer of an artificial intelligence model for any violation of this section by an artificial intelligence system developed by a third party to provide a conversational artificial intelligence service.

     (j)  The duties, remedies, and obligations imposed by this section are cumulative to the duties, remedies, or obligations imposed under other law and shall not be construed to relieve an operator from any duties, remedies, or obligations imposed under any other law.

     (k)  For the purposes of this section:

     "Account holder" or "user" means a person who has or generates an account or profile to use a conversational artificial intelligence service.

     "Artificial intelligence" means an engineered or machine-based system that varies in its level of autonomy and that can, for explicit or implicit objectives, infer from the input it receives how to generate outputs that can influence physical or virtual environments.

     "Conversational artificial intelligence service" or "service" means an artificial intelligence software application, web interface, or computer program, accessible to the general public, that primarily simulates human conversation and interaction through textual, visual, or aural communications.

     "Crisis intervention" means communication intended to provide immediate support or assistance in response to a user seeking help for, referencing, or expressing self-harm, suicidal ideation, or suicide.

     "Minor" means any person under eighteen years of age.

     "Operator" means a person who develops or makes available a conversational artificial intelligence service to the public.  A mobile application store or search engine that merely provides access to a conversational artificial intelligence service shall not by itself constitute an "operator".

     "Personal information" has the same meaning as defined in section 487N-1.

     "Sexually explicit conduct" has the same meaning as defined in title 18, United States Code, section 2256.

     "Sexually objectify" means to make sexual comments directed at the account holder's or user's body or appearance."

     SECTION 3.  If any provision of this Act, or the application thereof to any person or circumstance, is held invalid, the invalidity does not affect other provisions or applications of the Act that can be given effect without the invalid provision or application, and to this end the provisions of this Act are severable.

     SECTION 4.  This Act does not affect rights and duties that matured, penalties that were incurred, and proceedings that were begun before its effective date.

     SECTION 5.  New statutory material is underscored.

     SECTION 6.  This Act shall take effect on July 1, 3000; provided that section 2 shall take effect on July 1, 2027.

Report Title:

DCCA; AG; OCP; Artificial Intelligence; Conversational Artificial Intelligence Services; Disclosures; Protections; Minors; UDAP; Reports

Description:

Requires operators of conversational artificial intelligence services in the State to issue certain disclosures to account holders and users.  Requires operators to adopt protocols for responding to account holders and users who express suicidal ideation or self-harm.  Establishes protections for account holders and users, including minor account holders and users, of conversational artificial intelligence services.  Beginning 1/1/2028, requires operators to submit annual reports to the Department of Commerce and Consumer Affairs containing certain information.  Allows the Department of the Attorney General and Office of Consumer Protection to bring a civil action against operators who violate certain requirements.  Establishes that violations are to be considered unfair or deceptive acts or practices.  Effective 7/1/3000.  (HD2)