Meeting Transcript
[Brian Cina (Member)]: If you have to.
[Alyssa Black (Chair)]: Hi, welcome back everyone. Really? Okay, we are live.
[Brian Cina (Member)]: We are
[Alyssa Black (Chair)]: going to really quickly,
[Daisy Berbeco (Ranking Member)]: I said we are going to re-straw-poll
[Alyssa Black (Chair)]: section one, around Blue Cross Blue Shield's governance. So in light of the testimony that we just took, and I have Lori's vote already, she's given me that, I'd like to ask if any votes have changed as far as who would support leaving it as is in the bill as it currently stands?
[Brian Cina (Member)]: Leave what in the bill as is?
[Alyssa Black (Chair)]: We're leaving it in the bill as it currently is. Section one. Governor-appointed governance of Blue Cross Blue Shield.
[Brian Cina (Member)]: I see.
[Alyssa Black (Chair)]: One, two, three, four, five, six, seven, eight with Lori. Eight. All those opposed? Three. Eight to three. Thank you very much. Hi, Katie. Perfect timing. Okay. We are going to really quickly walk through eight sixteen. Why do we have these bills all with such similar numbers?
[Allen "Penny" Demar (Member)]: It's three sixteen.
[Katie (Legislative Counsel)]: 816. Two point two, okay. It'll take me a minute to
[Alyssa Black (Chair)]: Right. I
[Brian Cina (Member)]: need that one.
[Alyssa Black (Chair)]: 1.22.35. So I can say something too? Sure. Do it.
[Daisy Berbeco (Ranking Member)]: Sure. Go ahead. Eight sixteen is the mental health and AI bill. And last time we walked through this with Katie, there was some conversation around aligning the definitions of AI and generative AI with other bills happening in the legislature. So Katie's done that.
[Alyssa Black (Chair)]: Those were all the changes we made? I think that's the only one.
[Katie (Legislative Counsel)]: There was that one other change that came from Jessica.
[Alyssa Black (Chair)]: Yes. Is Jessica here? Jessica's here. I didn't see it.
[Katie (Legislative Counsel)]: It changed quite a bit, but... Oh.
[Alyssa Black (Chair)]: It's two point two we're on? Yeah. Yeah. Do we have copies to pass? Okay. I'm happy to.
[Katie (Legislative Counsel)]: It might have been a deletion, so therefore it wouldn't show. Okay.
[Alyssa Black (Chair)]: Okay. Still joining. I
[Katie (Legislative Counsel)]: think I'm pull up your document.
[Alyssa Black (Chair)]: Okay.
[Katie (Legislative Counsel)]: You know what? Maybe it's better if I pull up my email from Jessica so I can remember what we pulled out.
[Alyssa Black (Chair)]: Oh, okay.
[Katie (Legislative Counsel)]: Yeah. It was a strikethrough, so that's why it doesn't show. So Jessica's change was on page seven, B, permitted uses. There is a second sentence in that B that says permitted uses include, and there is a list of different items. The version you saw yesterday was transcription and documentation support. Do you see where that is? Lines seventeen and eighteen. And it had the phrase with patient or client consent. And the back and forth was that this might be confusing for providers, because there's already straightforward language on page eight, C2, that says consent by a patient or client is required when AI is used to record identifiable therapeutic communications. So the suggestion was to delete the phrase with patient or client consent on page seven and have it covered on page eight, rather than have it addressed in two different places in a slightly different way. So that is the change.
[Alyssa Black (Chair)]: Good. Go ahead, Brian.
[Brian Cina (Member)]: Do patients have to consent to the use of AI in their treatment? With regard to recording communications, yes. What about other ways?
[Katie (Legislative Counsel)]: So if it's their billing services, no.
[Brian Cina (Member)]: What if a clinician is using it behind the scenes and not telling the patient?
[Katie (Legislative Counsel)]: How would they be using it?
[Brian Cina (Member)]: It's a good question.
[Katie (Legislative Counsel)]: If they're using it for scheduling or billing, no. If they're using it for these types of transcription and reporting services, they do have to disclose it.
[Daisy Berbeco (Ranking Member)]: And they currently don't.
[Alyssa Black (Chair)]: That's a
[Brian Cina (Member)]: good question. I'm just processing it. Did you hear that question?
[Katie (Legislative Counsel)]: I'm sorry, I apologize.
[Alyssa Black (Chair)]: I see you. Let's go through the language changes, and then you can ask your question.
[Brian Cina (Member)]: It defines consent.
[Alyssa Black (Chair)]: Okay,
[Katie (Legislative Counsel)]: So the changes right here: artificial intelligence and generative artificial intelligence. As we talked about yesterday, there was a conversation with another committee that the California language seems to be the direction states are going. So that is the language here. Excuse me. There is sort of still an outstanding hiccup on this that I think the other committee wasn't ready to address, and they said to just let it go this way for the moment. And that hiccup is that your original draft had this sentence that isn't highlighted: artificial intelligence includes generative artificial intelligence. But for that sentence, you wouldn't need a definition of generative artificial intelligence, because it doesn't exist in the bill in its own right. And so if you want to have both of those covered, you sort of still need the sentence. Their concern was that saying that artificial intelligence includes generative artificial intelligence was a bit circular, is what I understand. I don't know enough about artificial intelligence to explain why. But we went back and forth talking about different ways that it could be handled. In the end, I was sort of told to keep it as is, and that if there's more thinking as the session goes on, perhaps this might be addressed differently. But for now, this is it.
[Alyssa Black (Chair)]: Would it be fair to say that, for all the things that are moving around the building regarding artificial intelligence, somebody is keeping track of them so we know that they're not competing at the very end?
[Katie (Legislative Counsel)]: I think this was the coordination. Okay. Yes. Don't you have a bill with Jen on AI? I don't know if that's moving out this week. But otherwise... It's out. Okay.
[Brian Cina (Member)]: It's gonna be on the floor on Tuesday.
[Katie (Legislative Counsel)]: Okay. So I think there are two health care bills moving with AI and then the Commerce Committee. It sounds like they have a bill moving with AI also.
[Daisy Berbeco (Ranking Member)]: And they now have eyes on this.
[Alyssa Black (Chair)]: Yeah. And we'll keep an eye out.
[Katie (Legislative Counsel)]: And then we changed that again? Yep, you changed it in both chapters where it appeared. So this is an identical change. And then the other change, we already talked about: permitted uses. Where is it? Right here. Lines seventeen and eighteen, transcription and documentation support. So that had a clause that followed it: with consent of patients and clients. But the email conversation was that C2 already addresses consent, and having it in two places, presented in a slightly different way, would be confusing to providers. So I was directed to remove the language up here. And so that is the third change.
[Alyssa Black (Chair)]: Questions for Jen on the changes. Jen? Katie, I mean. Sorry, Katie. Then into the questions on
[Brian Cina (Member)]: the bill in general. It was just more about the piece that came up. Penny's hand, why don't you go first?
[Daisy Berbeco (Ranking Member)]: Yeah. Oh, that's right. I'm sorry. Okay.
[Allen "Penny" Demar (Member)]: My notes were taken yesterday's bill, not this So I just got a couple of questions. We're talking, I think it was on page seven, I think it's really seven, the hazard risk. How are we going to identify that and manage it and monitor it? Okay, I think line 12. It's line seven, page seven.
[Alyssa Black (Chair)]: Okay, algorithmic.
[Allen "Penny" Demar (Member)]: About the algorithmic.
[Alyssa Black (Chair)]: Yes. Is that the thing about therapeutic decisions?
[Allen "Penny" Demar (Member)]: Line 10, therapeutic decision does not include algorithmic risk scoring data analytics or other clinical decisions, but that word is being used other places. And I just wonder, I looked that meaning up, but how do you identify or monitor something like that with AI?
[Katie (Legislative Counsel)]: What word are you referring to?
[Alyssa Black (Chair)]: Algorithmic? Algorithmic.
[Katie (Legislative Counsel)]: Well, here it's being excluded, so they wouldn't have to track it.
[Allen "Penny" Demar (Member)]: So they wouldn't have to track it, but I did look it up, and it's very easily part of AI.
[Brian Cina (Member)]: Because what
[Allen "Penny" Demar (Member)]: I did after
[Brian Cina (Member)]: Correct, yes.
[Allen "Penny" Demar (Member)]: Some of the risk of it was kind of, to me, was kind of alarming.
[Brian Cina (Member)]: I think algorithmic risk scoring as used here is not maybe what you're thinking, because algorithmic risk scoring, my understanding is that's using algorithms to assess the situation, not the risk of using an algorithm.
[Allen "Penny" Demar (Member)]: So how do we know we're not implementing it or using it? I know we got rules saying you can't do this, you can't do that.
[Daisy Berbeco (Ranking Member)]: So I think it sounds like each one of the places that you're seeing it may be using it differently, because it should specify which type of algorithm. So in this case, like Brian said, they're referring to something like, have you heard of the ACEs? Where it's like, okay, have you experienced this? Put a one, and then you add them up. And then it's like, if you've experienced six or more, then blah, blah, blah. Okay? Okay, that's what they're talking about. And there are so many different types of algorithms that each different one,
[Katie (Legislative Counsel)]: I think, is specifying it. So maybe we have to look at each one as a separate case, or with the context that's around it. I just did a word search; that's the only one that I pulled up in the draft. I wonder if the definition that we had yesterday originally had it. And if we changed definitions,
[Jessica Botter (Vermont Medical Society)]: we don't have it anyway.
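[Editor's note: below is a minimal sketch, in Python, of the ACEs-style additive risk scoring Rep. Berbeco describes above. The item names, the data shape, and the cutoff of six are illustrative assumptions only; they are not language from the bill or the actual ACEs questionnaire.]

```python
# Hypothetical ACEs-style additive risk score: one point per yes/no item,
# flagged at a fixed cutoff ("if you've experienced six or more, then ...").
# Item names and the cutoff are illustrative, not taken from the bill.

ITEMS = [
    "experienced_abuse",
    "experienced_neglect",
    "household_substance_use",
    "household_mental_illness",
    "parental_separation",
    "household_incarceration",
    "witnessed_violence",
    "experienced_bullying",
]

CUTOFF = 6

def risk_score(responses: dict[str, bool]) -> int:
    """Sum one point for each item answered yes."""
    return sum(1 for item in ITEMS if responses.get(item, False))

def elevated_risk(responses: dict[str, bool]) -> bool:
    """True when the additive score meets or exceeds the cutoff."""
    return risk_score(responses) >= CUTOFF
```

[Editor's note, continued: the exchange above turns on the fact that a fixed, transparent rule like this counts as "algorithmic risk scoring," which the draft carves out of its definition of a therapeutic decision.]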
[Allen "Penny" Demar (Member)]: I don't. It's okay. I just sent a little flag up. I don't understand the whole damn AI stuff anyway. So somehow we got to monitor it. And I don't know how we I understand that.
[Katie (Legislative Counsel)]: Well, we have a lot of enforcement happening through either OPR or the AG's office. My guess would be that there are folks at the AG's office who are starting to develop a specialty in IT technology, probably AI. So I'm guessing that it would be the experts who have that specialized expertise who would be doing the enforcement.
[Daisy Berbeco (Ranking Member)]: Just so you know, you know what
[Alyssa Black (Chair)]: an algorithm does? Every time you look something up on your iPhone and then you see something, like an advertisement, that's an algorithm.
[Allen "Penny" Demar (Member)]: So it's going to be part of this.
[Alyssa Black (Chair)]: They're just saying they're not going to use it for this.
[Allen "Penny" Demar (Member)]: No, they're not going to use it. I understand that. And they can't use it in their decision making and clinical reasoning. I've read all that, but just because it says something, that doesn't mean you're going to, not read it.
[Alyssa Black (Chair)]: Yeah. This is an exemption though.
[Jessica Botter (Vermont Medical Society)]: Yeah, this is a carve-out. So...
[Katie (Legislative Counsel)]: they're saying you can't use AI for therapeutic decisions, but therapeutic decisions doesn't mean algorithmic risk scoring. So that potentially could be used. Oh, wait, say that again. Just said. So the bill says that you can't use AI for coming up with therapeutic decisions. And then in our definition of therapeutic decisions, we carve out algorithmic risk scoring, meaning that somebody could use algorithmic risk scoring because that is not a prohibition on therapeutic decisions.
[Alyssa Black (Chair)]: Well, it's contradictory to the other one.
[Katie (Legislative Counsel)]: No. It's just creating a carve-out.
[Allen "Penny" Demar (Member)]: So if it happens, then I'm in trouble. Okay. I'm okay. Yeah.
[Brian Cina (Member)]: Thank you, Lori. Thank you, Bill. So back to the consent thing: what I was hearing is that if a provider of mental health services is using artificial intelligence for administrative purposes, they do not have to disclose to the client how data is protected or shared in those administrative tasks; that they're prohibited from using artificial intelligence in lieu of their own judgment, without their oversight; and that if they are using it with their oversight in a way that records the interaction, they have to get explicit consent as defined by the bill.
[Katie (Legislative Counsel)]: Yes and no. So I think the last part is correct. This bill is not requiring patient or client consent for every use of AI by a provider. Some uses are allowed, and they don't have to be disclosed or receive consent, like the billing and the scheduling. But if it's a transcription or recording, this bill says yes, your client has to know and consent. The other question you're asking, I feel, is a trickier question to answer. So there are HIPAA protections on privacy, and clients do have to sign off on what the HIPAA protections are. If you're trying to ask me how HIPAA protects from an AI system sharing information, I don't think I can answer that question for you.
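[Editor's note: a minimal sketch, in Python, of the consent rules as legislative counsel summarizes them here. The category names, the function, and the outcome strings are hypothetical illustrations, not statutory language: administrative uses need no consent, recording or transcribing identifiable therapeutic communications requires explicit consent, and using AI to make therapeutic decisions is prohibited outright.]

```python
# Hypothetical encoding of the consent rules described in testimony.
# Categories and outcomes are illustrative, not bill text.

from enum import Enum

class AIUse(Enum):
    BILLING = "billing"
    SCHEDULING = "scheduling"
    TRANSCRIPTION = "transcription"  # records identifiable communications
    RECORDING = "recording"
    THERAPEUTIC_DECISION = "therapeutic decision"

def check_use(use: AIUse, has_explicit_consent: bool) -> str:
    # Prohibited outright: AI may not substitute for clinical judgment.
    if use is AIUse.THERAPEUTIC_DECISION:
        return "prohibited"
    # Recording identifiable therapeutic communications requires consent.
    if use in (AIUse.TRANSCRIPTION, AIUse.RECORDING):
        return "permitted" if has_explicit_consent else "consent required"
    # Administrative uses such as billing and scheduling need no consent.
    return "permitted"
```

[Editor's note, continued: for example, check_use(AIUse.TRANSCRIPTION, False) yields "consent required", while check_use(AIUse.BILLING, False) yields "permitted".]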
[Brian Cina (Member)]: Are you familiar with the HITECH Act or what's happened since?
[Daisy Berbeco (Ranking Member)]: Yeah, and this... Sorry, I'll let
[Brian Cina (Member)]: you answer. No, because I don't know it well enough, but I know that there's additional... There's HITECH, but there's a newer one. I don't know if VMS knows what it's called. I'd have to look it up. And I know that there's evolution in how we're protecting privacy with technology. So I think in general it's probably safe, like with billing and clearinghouses and all that, but I do still worry that we may miss something. We're going to be studying it, so we'll figure it out. We have another bill that's going to be studying this stuff, so maybe they'll come back and tell us if there's anything we're missing.
[Daisy Berbeco (Ranking Member)]: And remember, this is only for these few mental health providers. So everybody else is doing this without regulation.
[Brian Cina (Member)]: I know. I'm concerned about that.
[Daisy Berbeco (Ranking Member)]: If you think about data collection as a process, this doesn't get into that part of the process. You gather data, then you have to put it somewhere, store it, and then retrieve it. This does nothing to regulate or oversee the storage of data or the retrieval of personal data. It's just silent on it.
[Brian Cina (Member)]: And we have to inform the patient, we have to get their consent to use the AI, but we don't have to know how the AI is using their data. For example, it could be recording them, and it might be private, but that company that's selling that product, or leasing, or whatever the word is, providing that product to the provider, that company could be mining the patient's data to train their AI to do that task and more. So people are essentially becoming AI training machines as part of their health care, and there's nothing monitoring that yet.
[Daisy Berbeco (Ranking Member)]: There is, because as Katie said, the levers are OPR and the AG's office. So the lever there is you have clinical responsibility, and that's defined. It's clinical responsibility, right? And it specifies that you review and approve. You're clinically responsible. Maybe that's what I'm thinking of.
[Katie (Legislative Counsel)]: You're specifically responsible for outputs. You have to check the outputs of your AI use. Right.
[Daisy Berbeco (Ranking Member)]: Now, OPR also has, under professional regulations, conduct that you have to follow. So I think there are terms around misuse of tools and things that I think protect people from being guinea pigs or whatever.
[Brian Cina (Member)]: I just don't think that providers know that that's happening, because they're not being asked to ask. When I asked the Department of Mental Health to tell us how their new system of tracking crisis workers was collecting our data, they could not tell me. They still haven't told us how they're collecting our data and using it to train the AI. We're not even thinking about that. It's not about harming the person, because the person doesn't know it's happening; it's about exploitation of us. It's like we're allowing companies to exploit us in the actions of our daily life, even using our phone. We're letting them track everything we do and learn and get better at what they're using it for to make themselves money, and I don't see any protection yet. I don't think this bill is going to do it. I think we need to just move forward and continue this discussion, and hopefully we can find ways to protect ourselves from this. The companies that are investing in AI are banking on the fact that they're going to make more money. The way it's designed, data is AI food. So whenever you use AI, you're feeding it; in the process of you getting something from it, it's taking something from you. And I think the question is how much that extraction is really benefiting us
[Alyssa Black (Chair)]: right now. Sounds like another bill. Yeah.
[Brian Cina (Member)]: Well, I think it's something in this bill; it's not something we're going to solve
[Alyssa Black (Chair)]: in it. You're not going to solve it in this bill.
[Brian Cina (Member)]: So I can let it go, but I'm just trying to articulate that it's not
[Alyssa Black (Chair)]: Addressed in the law.
[Brian Cina (Member)]: I don't think it's addressed. Yeah, it's something that I don't know how we address.
[Daisy Berbeco (Ranking Member)]: This is a step in that direction.
[Brian Cina (Member)]: It is, because we're at least talking about the problem and setting some guidelines around the outward-facing part. We're still not getting behind the curtain, and I don't know how to do that.
[Alyssa Black (Chair)]: Any other questions on... Oh, go ahead.
[Jessica Botter (Vermont Medical Society)]: Jessica Botter from the Vermont Medical Society. I just wanted to address one piece around the safety of compliance, at least from a HIPAA standpoint, and it's referenced in the bill. The section, my reference is 1881, basically is saying, under current law,
[Alyssa Black (Chair)]: that all providers have HIPAA, and
[Jessica Botter (Vermont Medical Society)]: HIPAA regulates data sharing of any patient-identifiable information and does apply to its use with AI as well. So that provider is going to be making sure that any AI tool they
[Katie (Legislative Counsel)]: are using is a business associate, which
[Jessica Botter (Vermont Medical Society)]: is a term under HIPAA, and agrees to follow HIPAA and does not share that data beyond, typically, sort of an internal closed-loop system, not more broadly. So HIPAA does attempt to address data sharing beyond the walls of the provider's office.
[Brian Cina (Member)]: I'm letting it go for now. Happy to talk about that too. We have to go, and we're not gonna solve it today. Not challenging anything you said, but I think there's HITECH and these additional laws because things have changed. Scary.
[Alyssa Black (Chair)]: Any other questions on the bill?
[Katie (Legislative Counsel)]: Please. Go ahead. Thank you. So this bill is protecting the privacy of information gathered, not excluding any tools used for AI information? I think I'd classify it differently. I think this bill is regulating how AI is used in the practice of mental health. So it's saying when AI is allowed to be used in the practice of mental health and when it's prohibited from being used in the practice of mental health. I think it also has language that says somebody can't advertise or come into the state offering a mental health service that relies on AI, like the chatbots. So it's doing two different things. So this doesn't... I can't see anything in here that does. But does this at all limit, sorry if I'm not understanding, the good uses of AI that I've read about, like in emergency rooms where it's needed for, what is it, brain injury or seizure or stuff like that? It's not excluding any of that? It is saying that a mental health professional can use AI for administrative and supplementary purposes, like billing, like scheduling, like transcription services, but it prevents a mental health professional from using AI to make a clinical decision about a patient. They have to use their own decision making. That doesn't mean they can't research, but they, at the end of the day, have to take responsibility for any AI product, and they have to make their own clinical judgment to make a decision about a patient. So it's how you're using that information. Yeah, so if a mental health professional were to come up with a treatment plan or to have a direct conversation with a patient about their clinical needs, that has to be the mental health professional. They can't rely on AI to have that clinical conversation or provide a printout to the patient giving
[Alyssa Black (Chair)]: them a treatment plan. Right. Any other questions? I would like to vote on this today, so I would entertain a motion. I make a motion that we vote on eight fourteen today. We are not voting on eight fourteen. Eight
[Daisy Berbeco (Ranking Member)]: sixteen. Oh my god, see, there are too many that are alike. Okay,
[Alyssa Black (Chair)]: Call the roll. Brian Cina.
[Allen "Penny" Demar (Member)]: Yes.
[Alyssa Black (Chair)]: Wendy Critchlow. Yes. Allen Demar. Yes. Leslie Goldman.
[Leslie Goldman (Member)]: Yes. Lori Houghton.
[Katie (Legislative Counsel)]: Karen Lueders. Yes.
[Alyssa Black (Chair)]: Debra Powers. Yes. Valerie Taylor. Yes. Daisy Berbeco. Yes. Francis McFaun. Yes. And Black. Alyssa Black. I know, I just called my own name. Black of Essex. Yes. So, eleven to zero. Okay, I've asked Wendy to report on this. Hope it's not the same day as the Rx bill comes up. That bill, from what I can see, I don't think is going to be going anywhere. So this will go on notice tomorrow and be on the floor Tuesday. Housekeeping: Brian did appropriations.
[Brian Cina (Member)]: They voted it out today. Eleven-zero-zero.
[Alyssa Black (Chair)]: Okay, so we'll also have your bill. Oh, we'll have Healthcare AI Day on the floor on Tuesday. Better wear green now.
[Brian Cina (Member)]: I was gonna ask AI to give the same type of paper we