
[Alyssa Black]: Hi, welcome back everybody. We are plowing through the afternoon.

[Brian Cina]: As

[Katie McLennan]: it were.

[Alyssa Black]: So we are moving on. We are introducing a new topic, and we have three separate bills on our wall that all sort of deal with the broader topic. So I thought we should have a walkthrough of all of them. So we're going to start with Jen Harvey, who is going to walk us through H.814. As I told the committee,

[Brian Cina]: we need to go

[Alyssa Black]: for these bills and then we're going to get a little education on the topic, but we will be coming back to this next week.

[Jen Harvey]: Good afternoon. Jen Harvey from the Office of Legislative Counsel. So we're gonna look at H.814, and I don't know how much detail you want to get into. It is a 27-page bill. Yes. Okay. Brilliant. We'll do some, and there may be some that I go into more detail for terminology and otherwise, and some I may try to summarize a little bit. And the lead sponsor is here if we have specific questions. And I don't know if you want, yeah, if you want an overview or anything, or you just wanna.

[Alyssa Black]: That, and then we'll get an overview of it.

[Jen Harvey]: H.814 is an act relating to neurological rights and the use of artificial intelligence technology in health and human services. It starts out with intent, which I think is probably important to look at in more detail. So it says that it is the intent of the General Assembly to, one, protect human rights, promote equity, increase efficiency, enhance accessibility, create transparency, and guarantee accountability in healthcare and human services through the ethical and responsible use of artificial intelligence technology. Two is maximize the benefits and minimize the risks of using artificial intelligence in healthcare and human services. Three, promote the ethical and responsible use of augmented intelligence in service delivery, coverage determinations, and access to healthcare and human services. Four, prevent harm from the use of augmented and other artificial intelligence in healthcare and human services. Five, improve the experience of patients, providers, and payers through the use of augmented and other artificial intelligence. And six, improve quality of care, drive positive health outcomes, and cultivate population health through the use of augmented and other artificial intelligence. That's sort of the overarching intent around the bill. Section two adds a new chapter in Title 18, Chapter 42, on neurological rights. And it starts with the state recognizing that each individual has a right to mental and neural data privacy, freedom of thought, and cognitive liberty; the right to change their decision regarding neurotechnology and to determine by what means they change the decision; the right to be afforded protection from neurotechnological interventions of the mind and unauthorized access to or manipulation of an individual's brain activity; and the right to be afforded protection from unauthorized neurotechnological alterations in mental functions critical to personality. It creates definitions.
I don't know how much you want to look at what they all are, but as far as terminology, we've got brain computer interface, conscious decision making, consciousness bypass, neural data, neurotechnology, and written informed consent. Section 1893 has, with limited exceptions, a general prohibition on collecting or recording an individual's neural data gathered from a brain computer interface, or sharing that data that is gained from a brain computer interface. It prohibits anyone from collecting or recording someone's neural data gathered from a brain computer interface unless the person provides written notice explaining how the data will be used and then gets written informed consent from the individual. There's similar language around consent to share: a person shall not share with a third party an individual's neural data gathered from a brain computer interface unless they provide a written request to the individual to share their neural data, give the purposes and who it will be shared with, and then get written informed consent. Revocation of consent: this allows somebody who has provided written informed consent allowing their neural data to be collected, recorded, or shared to revoke it at any time by providing written notice of their intent to revoke. And it specifies that the revocation of consent notice must be as easy or easier for the individual to provide as compared to the requirements for initially providing consent. So it has to be at least as easy to withdraw consent as it was to provide consent. And then it gives some directions on what the person who receives the written notice of revocation of consent has to do. They have to destroy all of the records of the individual's neural data within ten days after receiving notice.
And then if it's a revocation of consent to share, they must immediately stop sharing and also let everybody who they'd shared the neural data with know that the consent has been revoked. Then we have consciousness bypass limitations.

[Daisy Berbeco]: This is heavy.

[Katie McLennan]: Talking about, I mean, I

[Jen Harvey]: have no idea what this brain computer interface is.

[Brian Cina]: We have a witness coming on after this to talk about it.

[Jen Harvey]: Right now you're looking at what the bill was and then

[Brian Cina]: A witness today is going to cover this part of the bill first, all the brain interface and neural data stuff.

[Alyssa Black]: So is this happening now?

[Brian Cina]: Yes, they're gonna talk about it soon.

[Jen Harvey]: I don't know. We can go back up to it. That's why I searched it. It's like, what? We have consciousness bypass limitations. Again, we've got specific consent required. So a person cannot allow a brain computer interface that it manufactures to be used to bypass an individual's conscious decision making unless the individual provides written informed consent. And it talks about specific written consent, meaning written consent for each and every category of action performed by the brain computer interface. Somebody receiving written informed consent has to keep a record of that. And it specifies that consent obtained by using a consciousness bypass is not informed consent. Then it goes into revoking consent: somebody who has provided specific written informed consent, allowing the brain computer interface to be used to bypass their conscious decision making, has the right to revoke that consent at any time. Again, it must be as easy or easier to withdraw the consent as it was to provide the consent. And this specifies that an individual's agent, guardian, or surrogate can also revoke consent on the individual's behalf. We have some penalty and enforcement language saying that a violation of this chapter is an unfair or deceptive act or practice in commerce, so it's a violation of Vermont's Consumer Protection Act. Violations are subject to a civil penalty of not more than $10,000 per violation, and it gives the Attorney General's Office the same authority as they have under the Consumer Protection Act, and consumers the same rights and remedies as well. And if we proceed with this bill, we may wanna talk to the Attorney General's Office about this language; they spent a lot of time on five eighty three on consumer protection stuff, and I think they may have a slightly different way they would like to say that.

[Brian Cina]: I'm taking note for witnesses to have on our list. Thank you.

[Jen Harvey]: Section three adds a new chapter in Title 18 on artificial intelligence in health care. It starts out with some general provisions, beginning with definitions. And I do just want to pause on the artificial intelligence definition. That means a machine based system that makes predictions, recommendations, or decisions influencing real or virtual environments. And I also just want to flag that there are a lot of bills moving through the building in various committees dealing with artificial intelligence. So we will want to be sure that we are coming out with consistent definitions; even as the applications may change in specific contexts, we'll try to be sure that the state is using a single definition of artificial intelligence. Go ahead, Brian.

[Brian Cina]: Your comment just made me write down as a potential witness the Division of Artificial Intelligence in the Agency of Digital Services. Is there any other witness you can think of that might speak to this, like another legislative counsel?

[Jen Harvey]: I mean, I think the various of us are working on it together. Rick Segal works on it in the Commerce Committees. Katie's working on it. You'll hear from her next. So we're keeping in touch with each other on these. But yes, I just did a continuing legal education class earlier this week with the Attorney General's Office. Someone from ADS who does AI stuff was there and was very clear and very helpful.

[Brian Cina]: Okay, so I'll put the Division of AI. Thank you.

[Jen Harvey]: Artificial intelligence technology is a computer system, application, or other product that uses or incorporates one or more forms of artificial intelligence, which I may just call AI for purposes of our walkthrough here. Confidential communication is defined; we're starting to get here into the context of interactions between an individual and a mental health provider. So I won't go through all the specifics of that, but just so you know. Covered entity is defined. Generative AI is an AI technology system that is trained on data, is designed to simulate human conversation with a consumer through one or more of the following, text, audio, or visual communication, and that generates non-scripted outputs similar to outputs created by a human with limited or no human oversight. So that is generative artificial intelligence. We've got definitions of healthcare provider and health plan that use the definitions in federal HIPAA law, and individually identifiable health information. We've got mental health chatbot, which is an AI technology that uses generative AI to engage in interactive conversations with a user of the mental health chatbot, similar to the confidential communications that an individual would have with a licensed mental health provider, and that a supplier represents, or a reasonable person would believe, can or will provide psychotherapy or help a user manage or treat mental health conditions. Then it specifies that a mental health chatbot does not include AI technology that only provides scripted or pre-written outputs, like guided meditation or mindfulness exercises, or that analyzes an individual's input for the purpose of connecting the individual with a human mental health provider. We have a long definition of mental health providers. These are a lot of different licensed provider types. For some of them, like physician, it's a physician specifically engaged in the practice of psychotherapy.
An APRN is one who specializes in psychiatric or mental health nursing, and a physician assistant is someone who specializes in psychiatric or mental health care. I think the rest of them are more self explanatory by their license type: psychologist, social worker, alcohol and drug abuse counselor, clinical mental health counselor, marriage and family therapist, and psychoanalyst. We've got personal data, scientific research and development, and supplier. User input is input content provided to a mental health chatbot by a Vermont user. Oh, and then a Vermont user is an individual located in Vermont at the time they are accessing or using a mental health chatbot. We have notice of use of generative AI. So, except as provided in subsection, any healthcare provider using generative AI to generate written or verbal patient communications relating to patient clinical information must ensure that the communications include both of the following: first, a disclaimer indicating to the patient that the communication was generated by generative AI, and it talks about what that looks like for written communications, audio communications, and video communications; and second, clear instructions on how the patient can contact a human healthcare provider, employee of the healthcare facility, clinic, physician's office, or office of a group provider, or other appropriate person. The exception is that if a communication is generated by generative AI but it's read and reviewed by a licensed human healthcare provider, then those disclaimer requirements do not apply. And in addition to the enforcement authority you'll see next, a violation of this section by a licensed healthcare provider is subject to the jurisdiction of the Office of Professional Regulation or the Board of Medical Practice, as applicable. Violations: this allows the Attorney General to impose an administrative penalty of not more than $2,500 for each violation of the chapter.
And in addition, the Attorney General can file an action in Superior Court with the same authority to investigate and obtain remedies as under the Consumer Protection Act, and each violation is a separate violation for which the Attorney General can obtain relief. That was all Subchapter one. Subchapter two is artificial intelligence applications relating specifically to mental health. And the first part of this is around protection of personal information of mental health chatbot users. Except as provided in subdivision two, a supplier of a mental health chatbot shall not sell to or share with any third party any individually identifiable information of a Vermont user or user input of a Vermont user. That does not apply, this says, to individually identifiable health information that is requested by a health care provider with the consent of the Vermont user, provided to a health plan of a Vermont user upon the user's request, or shared in compliance with subsection B. Subsection B allows a supplier to share individually identifiable health information, if it's necessary to ensure the effective functionality of the mental health chatbot, with another person that the supplier has a contract with related to the functionality. And when sharing information under that exception, the supplier and other person must comply with the applicable privacy and security provisions of HIPAA, the two parts of the Code of Federal Regulations cited there, as if the supplier were a covered entity, which is a defined term, and the other person were a business associate, as those terms are defined in the HIPAA law. So it's basically putting those under the types of requirements of HIPAA. Then there's restrictions on advertising to mental health chatbot users.
This prohibits a supplier from using a mental health chatbot to advertise a specific product or service to a Vermont user in a conversation between the Vermont user and the mental health chatbot unless the chatbot clearly and conspicuously identifies the advertisement as an advertisement and clearly and conspicuously discloses to the user any sponsorship, business affiliation, or agreement that the supplier has with a third party to promote, advertise, or recommend a product. It prohibits a supplier of a mental health chatbot from using a Vermont user's input to determine whether to display an advertisement for a product or service to the user, unless the advertisement is for that mental health chatbot itself, or from using the input to determine a product, service, or category of product or service to advertise to the Vermont user, or to customize how an advertisement is presented. And nothing in this section should be construed to prohibit a mental health chatbot from recommending that the user seek psychotherapy or other assistance from a licensed health care provider, including a specific licensed health care provider. Section 97.63 is the disclosure requirements for mental health chatbots. This would require a supplier of a mental health chatbot to cause the chatbot to clearly and conspicuously disclose to a Vermont user that the chatbot is an artificial intelligence technology and not a human. And it talks about when in their communications the disclosure must be made: before the user can access the features of the chatbot, at the beginning of any interaction with the Vermont user if the user hasn't used the chatbot within the previous seven days, and anytime the user asks or otherwise prompts the chatbot about whether artificial intelligence is being used. Section 97.64 is still about mental health chatbots. And here we've got some affirmative defenses.
So it is an affirmative defense to liability, in an action for unlawful or unprofessional conduct brought against the supplier by the Office of Professional Regulation or Board of Medical Practice, if the supplier demonstrates that it meets all of the following conditions: the supplier created, maintained, and implemented a policy that we'll look at next; the supplier maintains documentation about how it developed and implemented the mental health chatbot, which talks about various components of that development; they filed the policy with the Office of the Attorney General; and the supplier complied with all of the requirements of the filed policy at the time of the alleged violation. So it's an affirmative defense, if they are subject to discipline by the licensing board, that they took the certain steps that are required. A policy described in subdivision A1 must meet the following requirements. It must be in writing. It must clearly state the intended purposes of the mental health chatbot and its abilities and limitations; describe the procedures by which the supplier ensures that qualified mental health providers licensed in Vermont or in other states or both are involved in the development and review process; and ensure the chatbot is developed and monitored in a manner consistent with clinical best practices. Next, testing prior to making the chatbot publicly available, and ongoing thereafter, to make sure that the output of the chatbot poses no greater risk to a user than that posed to an individual in psychotherapy with a licensed mental health provider.
The policy must also identify reasonably foreseeable adverse outcomes and potentially harmful interactions that could come from using the mental health chatbot; provide a mechanism to report any potentially harmful interactions; implement protocols to assess and respond to the risk of harm; detail actions taken to prevent or mitigate adverse outcomes or potentially harmful interactions; implement protocols to respond in real time to acute risk of physical harm; reasonably ensure regular objective reviews of safety, accuracy, and efficacy; provide users with necessary instructions on the safe use of the chatbot; ensure users understand they are interacting with artificial intelligence and that they understand the intended purposes, capabilities, and limitations of the chatbot; prioritize user mental health and safety over engagement metrics or profit; implement measures to prevent discriminatory treatment of users; and ensure compliance with the security and privacy protections in HIPAA, as if the supplier were a covered entity, and with the applicable consumer protection requirements we've been looking at. To file a policy with the Office of the Attorney General under this section, it talks about what the supplier has to provide to the office, including a $100 filing fee. And then it specifies that the affirmative defense does not apply in an administrative or civil action alleging unlawful or unprofessional conduct by a mental health provider, and nothing in the section is construed to prohibit the Attorney General, OPR, or the Board of Medical Practice from bringing an action alleging unlawful or unprofessional conduct against a supplier, or to require that they recognize a mental health chatbot as a licensed mental health provider. Subchapter three moves out of the mental health specific context. Now we have artificial intelligence applications relating to health insurance. And this is use of artificial intelligence in utilization review.
This requires that a health plan that uses an artificial intelligence algorithm or other software tool (and it's a broad definition) for purposes of utilization review or utilization management functions based in whole or in part on medical necessity, or that contracts with or otherwise works with an entity that uses these, shall ensure all of the following. That the AI algorithm or other software tool bases its determination on the following information, as applicable: the covered individual's medical or other clinical history, the specific clinical circumstances presented by the requesting healthcare provider, and other relevant clinical information contained in the individual's medical or other clinical record. And they must ensure that the AI algorithm or other software tool does not base its determination solely on a group dataset. They must ensure that their criteria and guidelines comply with our health insurance laws, our health care administration laws, and other applicable state and federal laws; that the AI algorithm or other software tool does not supplant health care provider decision making; that the use of the AI algorithm or other software tool does not discriminate against covered individuals in violation of state or federal law; that the AI algorithm or other software tool is fairly and equitably applied, including following anything from the US Department of Health and Human Services; that it is open to inspection for audit or compliance reviews by DFR and other state agencies and departments; and that the disclosures pertaining to its use and oversight are contained in the health plan's written policies and procedures to the extent required by DFR.
That the tool's performance, use, and outcomes are periodically reviewed and revised to maximize accuracy and reliability; that patient data is not used beyond its intended and stated purpose, consistent with this chapter, I'm sorry, the previous chapter, which is on health care data privacy, and with the security and privacy protections of HIPAA; and that the tool does not directly or indirectly cause harm to the covered individual. Notwithstanding subsection A, the use of a tool shall not deny, delay, or modify healthcare services based in whole or in part on medical necessity. And it specifies that a determination of medical necessity must be made only by a licensed human healthcare provider who is competent to evaluate the specific clinical issues involved in the healthcare services requested, by reviewing and considering the requesting provider's recommendation, the medical or other clinical history, and the specific clinical circumstances. It applies to utilization review or utilization management functions in all stages: prospective, retrospective, and concurrent review of requests for covered health care services. Almost done. Section four amends the artificial intelligence advisory council. So it makes some modifications. This is just a conforming change, but it makes some modifications to the membership. There's a list of members you don't see between one and F there, but this modifies the member with experience in the field of ethics and human rights from being appointed by the Governor to being appointed by the National Association of Social Workers Vermont chapter. In H, it eliminates the Commissioner of Health or designee as a member and instead has a member with experience in public education appointed by the Vermont NEA; adds one member with experience in health care appointed by the Vermont Medical Society; adds the Secretary of Human Services or designee; and adds the State Treasurer or designee.
It also extends the lifetime of this council, which is currently scheduled to sunset in June 2027, for an additional three years, until June 30, 2030. Section five requires a report. It directs the Artificial Intelligence Advisory Council, that last one we looked at, in coordination with the Director of the Division of Artificial Intelligence, to review guidelines and recommendations from the American Medical Association, National Association of Social Workers, National Education Association, and other relevant professional organizations regarding the use of AI in healthcare, human services, education, public participation, and public finance; to research existing and potential uses of AI in public participation processes and public finance; and to create opportunities for public education and engagement in the development of AI policy. The written report is due by January 15, 2027, comes to the General Assembly, and it must recommend any additional statutory changes to further the purposes of this act; summarize any additional ways government can promote the ethical and responsible use of AI technology in health and human services and in education; propose pilot projects to improve public engagement in public finance using ethical and responsible AI technology; and identify any reasons for further delaying or removing the new 2030 sunset on the Artificial Intelligence Advisory Council. And finally, the act would take effect on July 1.

[Brian Cina]: Wow, in thirty minutes, you did it.

[Jen Harvey]: We did not start until, like, five minutes in. Yeah.

[Daisy Berbeco]: You did great.

[Alyssa Black]: Thank you. We just had a discussion this morning about how you can't quantify the value of legislative counsel. Yes. I'll say it again. I think we just saw a perfect example. So

[Brian Cina]: For the sake of time, because we're going to talk about this next week, unless people have questions for Jen now, I'd like to explain some of this, some of the rationale, to people next week, because it'll take up time now, and I want to get through the other bills. We have a witness who's gonna cover the first part of the bill today. But the additional sections are gonna need their own time, and Daisy and I are gonna work together on that. I've been making a list of witnesses, and if there's anyone you wanna hear from, let us know. I guess I got a question. Yeah. Where did this bill come from? I will answer that quickly today. I researched all of the AI legislation that has passed in the United States in the last two years related to healthcare, and I picked things from various states. I picked a mental health chatbot law from a state; I'll tell you the state later. I picked the utilization review piece from another state, and I picked the generative AI piece from a state. The neurological rights piece I created myself, and I talked with the group you're going to hear from today. And the final part of this I created after talking with the Agency of Digital Services, when they pointed out that the council is going to sunset and that there's not enough members of the public on it; it's all government. We can look at the current statute later in detail and you'll see what I mean. It's all, like, the Governor's people. And they said they needed healthcare and education people on there. So this was my attempt at adding some healthcare and education members to the council. And then I asked them to write the report because anything we missed in here, they could come back with next year and say, here's additional things Vermont should do to protect our people. But the idea of the bill was me wanting to introduce an AI bill that would be comprehensive and bring Vermont up to speed.
Not just up to speed, but to match and go beyond what other states did to protect people. Thank you. It was my idea; no one came to me, it was my own. So

[Daisy Berbeco]: in a nutshell, does this save money, will it save money, and will it improve outcomes? Is that the intent of this?

[Brian Cina]: If you look at the intent section, that is one of the pieces of intent.

[Daisy Berbeco]: To save money and to increase outcomes?

[Brian Cina]: It's to improve the functioning of the system, improve efficiency, but also to protect human rights at a time when they're, you're gonna hear in a little bit about how they're threatened right now.

[Jen Harvey]: Okay, just to revisit the very beginning, it started with intent.

[Brian Cina]: It's in the first page or two.

[Alyssa Black]: Yes, yeah, I hope so.

[Brian Cina]: It's a list, Yes. Six things.

[Daisy Berbeco]: I thought Barry would not

[Alyssa Black]: be on this stuff like breaking. We're

[Brian Cina]: going to come back to this in chunks over the next week. So don't stress. That's what I was saying earlier. Don't stress today because this is an overview.

[Daisy Berbeco]: Brian's gonna get us a one pager on the bill, Allen.

[Brian Cina]: Do you need some rose water?

[Daisy Berbeco]: Brian's gonna make us a one pager on the bill that just summarizes it.

[Brian Cina]: That will help. Right? I can do that. And we're gonna go section by section over the next week and look at it in pieces. Thanks, Jen, for such a quick and detailed

[Alyssa Black]: glad you're thinking about this. It's

[Brian Cina]: scary, but we'll deal with it.

[Alyssa Black]: Okay. Debra, we have Katie. She's gonna walk us through the other two.

[Brian Cina]: Jen's gonna get into her. Unmute herself.

[Katie McLennan]: I don't feel better. I always have my volume. I'm silent, and I just came from a Zoom meeting.

[Jen Harvey]: I had it all the way up.

[Brian Cina]: It's okay. Needed some coffee.

[Katie McLennan]: Okay. Well, good afternoon. Katie McLennan, Office of Legislative Counsel. So you have two bills this afternoon that both address AI and the provision of mental health care, sort of ensuring that mental health care is provided by a person and that therapeutic decisions are made by a person, and that also create carve-outs for when AI is appropriate in the mental health profession for more administrative purposes. There's also language in these bills that would prohibit, for example, a company from advertising a mental health service in the state that is actually provided through AI. Do you have a preference which bill we start with?

[Alyssa Black]: Let's go in numerical order and start with six forty four. Sure.

[Katie McLennan]: Okay. Here we are. So this bill starts out with findings. I'm not sure if you're tight on time. Would you like to go through the findings? Would you like me to sort of

[Jen Harvey]: skip down a little bit?

[Brian Cina]: We summarize them somehow.

[Alyssa Black]: The witnesses at 2:15, are they here now, or? They cannot be here till 2:15. Okay. So until two

[Katie McLennan]: you have a finding that individuals are increasingly using chatbots to receive unlicensed therapy from large language models. And researchers from the Stanford Institute for Human-Centered Artificial Intelligence, Carnegie Mellon, the University of Minnesota Twin Cities, and the University of Texas evaluated AI systems against clinical standards for therapists, finding that commercially available therapy chatbots responded inappropriately to various mental health conditions, encouraged delusions, and failed to recognize crises, contrary to best practice. In subdivision 2, researchers at the Center for Countering Digital Hate found patterns of ChatGPT advice pertaining to mental health, eating disorders, and substance use disorders on topics such as how to safely cut yourself, pills for overdose, restrictive diet plans, appetite-suppressing medications, personalized plans for getting drunk, and how to hide intoxication at school. The chatbot also generated suicide planning and goodbye notes to family. In subdivision 3, deaths by suicide have been reported after the deceased's use of an artificial intelligence tool, including an individual 14 years of age who died by suicide after suicidal conversations with a chatbot, and an individual 13 years of age who was encouraged to take his own life via chatbot. Top of page three, purpose. It is the purpose of this act to safeguard individuals seeking mental health services in Vermont from psychological harm, including death by suicide, by ensuring that these services are delivered by mental health professionals and not AI systems. And then that brings us to section three. Section three governs unprofessional conduct, and this is a whole list of what constitutes unprofessional conduct for different professions. And so being added to this list, for any mental health professional, is misuse of AI pursuant to language that we will be looking at. In section four, we're in Title 18. This is prohibited uses of AI.
We're also gonna look at a section in Title 26, which is our title that regulates professions. So you'll be seeing that. But in Title 18, the health title, we have some definitions that you'll see again. We have a definition for AI to mean any machine-based system, software, or algorithm that, depending on human objectives, is capable of perceiving an environment through data acquisition and then processing and interpreting the derived information to take action or to imitate intelligent behaviors such as natural language processing, pattern recognition, predictive analytics, offering recommendations, or decision making. Subdivision two defines mental health services, meaning peer support, counseling, therapy, or psychotherapy used to diagnose or treat an individual's mental or behavioral health or provide ongoing recovery support, including providing therapeutic decisions, issuing direct therapeutic communications, generating treatment plans or recommendations, or detecting or interpreting emotions or mental states. And then a definition of therapeutic communication, which we'll see later on. And this is the section that says a person, corporation, or entity shall not offer, provide, or advertise mental health services in the state that use AI in full or in part, except as authorized in Title 26. So this is the prohibition on the use of or the advertisement of mental health services that use AI, except we have carve-outs within the profession portion of the bill for things like administrative support or transcribing. And then you have language in subsection (c). This is language you see throughout the VSA: a violation of this section is a violation of the Consumer Protection Act, and the AG has the authority to conduct investigations, bring civil actions, etcetera. And the top of page five has a sentence: each violation of this section shall carry a civil penalty of $10,000 as set forth in statute.
So one thing we have going here is that this section and the section we'll look at next are enforced by different entities, which is in part why they're divided up. So this one, providing or advertising mental health services by a corporation or an entity, is enforced by the AG's office. The next section we'll look at is underneath OPR, so it'd be OPR that would be doing the enforcement on a mental health professional in the state who wasn't conforming with the statutes governing their profession. So that brings us to section five. This creates a new chapter generally about AI and regulated professions, imagining that AI issues are going to come up for lots of professions in the future, not just for mental health providers. Within this new chapter that's being created, there's a subchapter on general provisions. Here we have definitions of AI that might apply across multiple professions. And then we have a subchapter that is specific to AI in the practice of mental health. So we've already looked at the definition of AI in this field; this is the same definition. And then we have our subchapter specific to mental health professionals: prohibited uses of artificial intelligence in therapeutic settings. We have some definitions. First, we have a definition of administrative support, which is important because this is what would be allowed. It means a task other than a therapeutic communication that is performed to assist a mental health professional in the professional's delivery of mental health services, such as managing appointment scheduling and reminders, processing billing and insurance claims, drafting general communications related to practice logistics, preparing and maintaining clinical records, including notes from patient or client sessions, analyzing de-identified data to track patient or client progress or identify trends, and identifying and organizing external resources or referrals for patient or client use.
Then we have a definition of consent, to mean an explicit affirmative act by an individual that unambiguously communicates, in writing, voluntary, informed, revocable agreement. And then we have a list of who we mean by a mental health professional. It's very inclusive: individuals who are licensed, certified, or rostered as physicians, APRNs specializing in psychiatric mental health care, psychologists, peer support providers that are certified. The bill that you worked on last year, two years ago. We had the conversation about them this year, but this is different. This is the emergency service providers.

[Alyssa Black]: We had to have a new definition.

[Katie McLennan]: I know. Let's see. Social workers, alcohol and drug abuse counselors, clinical mental health counselors, marriage and family therapists, psychoanalysts, applied behavior analysts, a non-licensed or non-certified psychotherapist, a non-certified psychoanalyst, and any other profession that provides mental health services. So this is a very broad definition of mental health service providers, who the restrictions on the use of AI for therapeutic decision making would apply to. A definition of mental health services that I think nearly tracks the definition we've already been over. Therapeutic communication is worth spending some time on, because this sort of distinguishes what is administrative work from what is the actual clinical work that's happening. A written, verbal, or nonverbal interaction conducted in a professional therapeutic setting intended to diagnose or treat any type of mental or behavioral health concern, provide ongoing recovery support, or provide any advice related to diagnosis, treatment, or recovery, such as engaging in direct interactions with clients or patients for the purpose of understanding or reflecting their thoughts, emotions, or experiences; providing guidance, therapeutic strategies, or interventions designed to achieve mental health outcomes; offering emotional support, reassurance, or empathy in response to emotional or psychological distress; collaborating with a patient or client to develop or modify treatment plans or therapeutic goals; and delivering feedback intended to promote growth or address mental health conditions. In subsection (b), we have a list of permitted uses for AI in the practice of mental health. So a mental health professional can use AI for administrative support to the extent that the professional reviews and assumes responsibility for all tasks performed by, outputs created by, and data use associated with the AI.
And subdivision two: if that professional uses AI for transcription and recording, the mental health professional first has to inform the patient or client, or if a minor, the person's legal guardian, in writing, of the specific purpose for which AI is being used and that any transcription or recording performed by AI shall be subject to the disclosure provisions in (c), and then obtain consent from the patient or the parent or guardian. On the top of page nine, confidentiality: any administrative support tasks conducted using artificial intelligence shall be subject to disclosure provisions that are already in statute, including transcription and recording. And then in (d), we have a list of what's prohibited. So a mental health professional is not allowed to use AI in the state to make a therapeutic decision, to issue a therapeutic communication, and we have a definition for that, generate a treatment plan or recommendations, or detect or interpret emotions or mental states. A person is also not allowed to offer, provide, or advertise mental health services in the state that use AI in whole or in part, except as provided in the list of exceptions in subsection (b). And this takes effect on passage. So that's the first bill. I'll stop sharing, and I'll pull up the other one.

[Alyssa Black]: It's still eight pages, similar, correct? Yeah,

[Brian Cina]: 02:00.

[Katie McLennan]: Okay. So ten minutes? Okay. Yeah. Okay. So this is eight sixteen. This bill doesn't have a findings section; it has a purpose section: to ensure therapeutic judgment, clinical decision making, and therapeutic communication remain the responsibility of qualified mental health professionals and are not delegated to AI systems; respecting individual choice in selecting mental health services, including community care and faith-based options; and allowing the responsible use of AI for administrative, operational, documentation, and quality improvement functions to support access, efficiency, and innovation in mental health services. Section two is the same language we've already looked at with regard to the list of unprofessional conduct. Section three also creates language in Title 18 about prohibited uses of artificial intelligence. We have a definition of artificial intelligence that is a little bit different from the definition we last looked at, in part because it includes generative artificial intelligence, and it includes a definition for generative AI. We have definitions for mental health services and therapeutic communications. Then we have this similar language that a person, corporation, or other entity shall not offer, provide, or advertise mental health services in the state that represent AI as providing therapeutic judgment, diagnosis, treatment, or therapeutic communication. Nothing in this subsection shall prohibit the use or disclosure of AI for administrative, documentation, operational, or quality improvement purposes when the mental health professional retains clinical responsibility pursuant to their clinical responsibilities in Title 26. Same language around enforcement by the AG's office; no language about the civil penalty. In section four, this also creates the same new chapter structure that is in the previous bill, in Title 26. So in subchapter one, we have general provisions, which again is definitions at this point.
And these definitions track the definitions that are in an earlier section of the bill. Subchapter two pertains specifically to mental health professionals, and we have a definition section. Whereas in the last bill, some of the different types of support that AI could provide were all consolidated in one definition, here we have some different types of support that AI could offer: there's administrative support and supplementary support, and those are broken out in this definition section. There's a definition of clinical responsibility. Consent. The mental health professional piece is also broad, like in the previous bill that we just looked at.

[Jen Harvey]: What is this? 08/1970

[Katie McLennan]: Except as exempted in (e). So we have some... This references some exemptions that we'll be looking at. Definitions of peer support and religious counseling are not in the previous bill. Similarly, supplementary support isn't in the previous bill. Therapeutic communication means a written or spoken interaction intended to diagnose or treat any type of mental or behavioral health concern, provide ongoing recovery support, or provide any advice related to diagnosis, treatment, or recovery. And then we have a section on permitted uses. A mental health professional may use AI for administrative support, supplementary support, and operational or quality improvement functions, provided the professional retains clinical responsibility. Permitted uses include scheduling, billing, coding, and claims processing; transcription and documentation support with patient or client consent; preparation and maintenance of clinical records; de-identified data analysis for quality improvement; and workforce and capacity planning. We have similar language under confidentiality and consent: that any support tasks that AI is doing, including transcription and recording, are subject to existing health privacy protection laws, and that consent by a patient or client is required when an AI recording or transcribing tool is being used. Prohibited uses: a mental health professional can't use AI to make therapeutic decisions, interact with clients, generate treatment plans, or detect emotions or mental states, or offer, provide, or advertise mental health services that represent artificial intelligence as providing therapeutic judgment, diagnosis, treatment, or therapeutic communication. Here is a difference between the two bills: nothing in this subsection shall prohibit a mental health professional from disclosing or describing the mental health professional's use of AI for administrative support or supplementary support purposes to a prospective, current, or former client.
And then we have a list of exemptions: this would not apply to religious counseling, peer support that's not provided by somebody who's certified, and generalized educational and self-help resources that don't purport to offer mental health services. This also takes effect on passage.

[Brian Cina]: Thank you.

[Katie McLennan]: You're welcome. Thank you. Before I move out of this seat, if it's okay.

[Alyssa Black]: Yeah.

[Katie McLennan]: To the extent it's helpful, I put together a comparison of the two

[Alyssa Black]: I was gonna ask about

[Brian Cina]: this. Oh, good. I was gonna ask if you could tell us the differences between them. Yeah.

[Katie McLennan]: Yeah. I had to put it in writing. I had to see it. So if you would like, I could share this with Tasha so you can take a look at it

[Brian Cina]: We can study it over the weekend and come back to

[Alyssa Black]: next week.

[Katie McLennan]: Great. A lot of work.

[Brian Cina]: Yeah, it

[Jen Harvey]: is. Oh, you're welcome.

[Brian Cina]: I was going to ask for the other one to be compared at some point too, but this is a good start.

[Alyssa Black]: Can I quickly? Well, I'll wait

[Jen Harvey]: for Uzi to end. No, I will

[Katie McLennan]: stop sharing this and you can, I'll send it to Tasha.

[Alyssa Black]: Thank you for that. Thank you. Yes,

[Brian Cina]: it's good.

[Daisy Berbeco]: I think in terms of, like, a broad crosswalk, I have something too that I can share, but my observation is that the last bill we saw is mine, and my intent is to really preserve clinical authority while also allowing the use of AI in order to create operational efficiencies, which is what we already know clinicians are doing. And I don't want to prohibit that, because I think that's how we reduce the burden on health care providers. How we make health care cheaper is we allow them to lean into technologies that facilitate easier billing, blah, blah, blah. But we still want to protect people. The other bill has those great findings that articulate why we need the bill. And so I think the first bill is a lot narrower and targeted and a lot more about harm reduction, in my observation. And I think my bill is much more about preserving clinical authority. Well, that's the line I'm trying to walk, anyway. And there are other things, I think, in terms of the administrative uses. When you look at Katie's comparison, I think there's a little bit of a difference in terms of the strength of the consent and confidentiality language.

[Alyssa Black]: And then, who would you share that with, Tasha?

[Daisy Berbeco]: Yeah, I'll share my write up with Tasha. Brian?

[Brian Cina]: So thank you for that comparison. I think that's something everyone could see, is that the three bills, well, they all relate to mental health chatbots, and eight fourteen goes further. And so it's my hope that we can look at the various sections separately and that we would have time to dig into this mental health chatbot issue in here, because I'm already hearing from people who have concerns about various ways that the different bills are approaching it. So I think we need to make space to hear those, and it might be one of those situations where we look at all the concerns about mental health chatbots and look at where we find consensus and build on it, but we may not get everyone to agree. And it can get complicated over, like, little details. So we'll probably need to have testimony just on that. But I think the thing coming up next is going to be only relevant to the first bill we saw, which is about neurological rights, and I'm hoping people will be able to focus on that today and put the other stuff aside till next week. And then we'll come back to those other sections. Sounds great. Because it's too much to do it all in one big pile without AI. Oh, without AI, that's pretty... Thank you, Katie. Thank you.

[Katie McLennan]: You're welcome.

[Brian Cina]: Thank you, Jen, too. Thank you both. Thank you much.