Transcript

[Alyssa Black, Chair]: Good morning. It's the Healthcare Committee. Today is February 25, and this morning we're going to talk about two bills. H.14 and H.16? You mean H.814? Yeah, I mean H.814 and H.816. So we're going to take up H.814.

[Leslie Goldman, Member]: Can we turn that picture?

[Brian Cina, Member (bill sponsor)]: You're presenting sideways. There we go. There we go.

[Alyssa Black, Chair]: Actually, Lynn is our first witness. So do you want to test your voice, see if we can hear you? Can't hear you.

[Brian Cina, Member (bill sponsor)]: Yeah. You appear to be muted. Okay.

[Alyssa Black, Chair]: There you go.

[Lynn Courier, Executive Director, NASW Vermont and New Hampshire]: You can hear me now?

[Leslie Goldman, Member]: Yes. And you're right side up.

[Lynn Courier, Executive Director, NASW Vermont and New Hampshire]: And I'm right side up.

[Alyssa Black, Chair]: You're right side up and we can hear you. That's great. Small victories. Start us off, please.

[Lynn Courier, Executive Director, NASW Vermont and New Hampshire]: Hi. My thanks to the committee for hearing my testimony. For the record, my name is Lynn Courier. I've been a social worker for thirty years, and I'm the executive director of the National Association of Social Workers Vermont Chapter. I'm also the executive director for the New Hampshire Chapter, and I appreciate being able to go first; I've been trapped in Chicago and hopefully will be getting out in the next hour. I'll be speaking about both bills, if you'll indulge me, because I know the chatbots have been separated out. Our first priority and concern is independent AI therapy chatbots and banning them from being used in Vermont. At best, it's unlicensed practice; at worst, anything that's attached to a large language model is potentially very dangerous. In regard to the neurological rights, social workers are always ready to uplift and defend human rights and privacy rights. The amount of money that is tied up in this is overwhelming. Using the five neurological rights as established by the UN seems like a perfect way to keep the language and the definitions uniform, and it puts Vermont in a position to take the lead on this type of human rights and privacy issue. And the final piece is around the AI Council. This is such a fast-moving technology, with both psychological and sociological impact, that NASW would be honored to have a seat at that table, and we believe that because of the potential psychological and sociological benefits and harms, we have something to add to the discussion. That summarizes my testimony. I am happy to take any questions.

[Alyssa Black, Chair]: Okay, Brian.

[Brian Cina, Member (bill sponsor)]: Yeah, so Lynn, I heard you say that you would support, just so everyone knows, when we heard from the Neurorights Foundation, they suggested that we replace the existing six neurological rights with the five from the United Nations. Jen is working on an amendment for us that does that.

[Leslie Goldman, Member]: No, not yet.

[Brian Cina, Member (bill sponsor)]: Not yet. I can send those rights out to everyone on the committee if you want to see them right now, just so that you're not confused. They're not very different, but Lynn just said she supports that.

[Leslie Goldman, Member]: That's to support that, section two?

[Brian Cina, Member (bill sponsor)]: Yes. Then, Lynn, I heard you say that you understand that H.816 is the vehicle for the chatbots and not H.814, because you have to go, so you weighed in on both, and you said something about chatbots. Just to be transparent, if you saw any of us talking here, it was like, "Uh-oh, this is about H.816." And I was like, "She's got a plane," so we're letting it slide. Then, just for the record and for everyone, Jen's working on an amendment that would take the remaining AI regulations in the bill and push them into the study, because there's not enough time in two weeks for us to take adequate testimony on all those changes. So the study's going to have additional pieces to it. And on the AI Council, there are some suggested changes. What I heard you say, Lynn, is that NASW would support being added back on, like you were on the original task force, and that you support the work of the council and that study?

[Lynn Courier, Executive Director, NASW Vermont and New Hampshire]: Correct. Correct. Whoever sits there, whether it's myself or somebody appointed by NASW Vermont, keep in mind we have 450 members who all have a variety of experiences and knowledge bases, and we can pull all of that together so that whoever is sitting on the council brings both the psychological and the sociological, both the micro and the macro, perspective and knowledge base to the table in the discussion.

[Brian Cina, Member (bill sponsor)]: May I ask another follow-up question? I'm a social worker, so I'm biased, and I understand the code of ethics and the core values of the profession. For the record, I'm asking the executive director of NASW if you could speak for just a minute about why social workers would be a uniquely appropriate appointee for a position related to human rights and ethics, because that's the position we'd be asking NASW to fill: a person with expertise in human rights and ethics. Why is the profession of social work the one for the job?

[Lynn Courier, Executive Director, NASW Vermont and New Hampshire]: Thank you for the question, and for the opportunity to get on my soapbox about the wonders of social work. What separates social work from other professions is what we talk about as bifocals: you can look to see what's right up close, and you can see what's in the distance. Social workers are trained not only to see the person right in front of them, but to look at the whole picture. What are the social determinants of health? What are the environmental impacts coming to bear on that individual, on that family, on that community, on that state, on that nation? And so you pair that with AI: there's a growing body of research on the psychological effects of AI on individuals, which then translates to, okay, what effect is this having on society? And there's a growing body of research on that too. So social workers are able to look with those bifocals: what's right up close, what's the bigger picture, and how are all these systems working together? You have social workers who are much more focused on the clinical, you have social workers who are much more focused on the macro, and then you have social workers who are doing both. I think that perspective is incredibly valuable when you're looking not only at the mental health of individuals but at the effect of AI on society. How's that for a social work soapbox?

[Brian Cina, Member (bill sponsor)]: I think that was a good elaboration of what you said earlier, so people understand that it has to do with the training of social workers and the lens through which a social worker approaches addressing human rights and ethics. One other thing before I make space for someone else: I looked up, because I needed the reminder even though I have the training, that the six core values of social work are service, social justice, dignity and worth of the person, importance of human relationships, integrity, and competence. I thought I'd put that out there for people to hear, and give you a chance, if there's anything you want to say, about how those core values might be useful in the work ahead regarding artificial intelligence in society.

[Lynn Courier, Executive Director, NASW Vermont and New Hampshire]: Well, our perspective is always going to be the dignity and worth of the person. I mean, let's be real: AI can be a really helpful tool. It can also have intended and unintended consequences. So yes, when you pull in our core values, we're looking at what is in the best interest of people. And that will always be our default, whether it's with AI or any other issue.

[Alyssa Black, Chair]: Thank you. Any other questions? Comments? Okay, thank you very much. We appreciate you taking the time. You can go run for your plane now.

[Lynn Courier, Executive Director, NASW Vermont and New Hampshire]: I'm gonna go grab that Uber. So thank you very much.

[Brian Cina, Member (bill sponsor)]: Thanks for making time.

[Alyssa Black, Chair]: Hold on just a minute.

[Leslie Goldman, Member]: No, not for her. I have a question for the sponsors, if it's appropriate. So I really appreciate this bill and the idea of protecting people who are getting mental health care or are in a therapeutic relationship. This question is for you, Brian. I'm just wondering how it applies outside the mental health sphere, because many people get mental health counseling and advice in all different areas of medicine where AI might be used. Do you think it's only in mental health?

[Brian Cina, Member (bill sponsor)]: No. The bill we're looking at right now, H.814, is related to health and human services. If we look at section one, where it talks about the intent, it's talking about the entire health and human services system. Neurological rights would apply to people in all settings; it wouldn't just apply in mental health, because these are neurological rights. They're really establishing that our brain is ultimately a private space. I think that's what the Neurorights Foundation people were talking about: that we would be establishing in law some specific human rights, grounded in our other rights, but more specific at this moment in history, when there's going to be the ability to violate the privacy of our brain in an unprecedented manner. So it applies in all settings, yes. And then later in the bill there were specific policy changes based on other states regarding chatbots, generative AI in regulated professions, and utilization review, which is the finance of healthcare. But because there's not a lot of time for testimony, we've asked Jen to make those part of a study. So what we're doing here is establishing rights, modifying the membership of the AI Council so that they have more expertise in the areas where we need the work done, and then asking them to do a report over the next year. We're going to hear soon from the folks who run that council about how this fits into their existing work, and what works or doesn't work about how we're approaching it. But the idea is that we'll establish some rights this year, modify existing structures of government, and ask them to bring back information so that the legislature can make decisions in the following years about the regulations needed regarding artificial intelligence in the entire health and human services system.
The bill also mentions education and public finance, and maybe I should explain why now, because we're going to hear witnesses from the treasurer as well. The use of artificial intelligence to improve the budgeting and spending of government is real; Slovakia is currently using AI to build their government's budget. There are strengths and weaknesses to that, but we could potentially use AI to improve the financial functioning of all of government. It affects healthcare, but instead of saying only look at healthcare, we're asking them to look at it in general. And then we say education because, right now, as we talk about education transformation, it's an opportunity... and education is also a social driver of health. It's kind of all connected.

[Brian Cina, Member (bill sponsor)]: I hope that helps.

[Leslie Goldman, Member]: That helps me.

[Alyssa Black, Chair]: Okay, thank you, Brian. Any comments based on what you just heard?

[Dr. Rick Barnett, Vermont Psychological Association (Legislative Committee Chair)]: People are ready for the next witness.

[Alyssa Black, Chair]: Okay. We'll go to the next witness, Peter from the State Treasurer's Office.

[Peter Trombley, Director of Legislative Affairs, State Treasurer’s Office]: Thank you, everybody. Peter Trombley, director of legislative affairs, State Treasurer's Office. I appreciate the committee asking our office to testify. I'll keep it very brief, testifying specifically to sections four and five. We support the State Treasurer being added to the Artificial Intelligence Advisory Council. As the representative mentioned, there are a lot of interesting potential applications for artificial intelligence in public finance, as well as in other systems in our office that are, I would say, constituent facing, like our retirement systems. We would be grateful for the opportunity to engage in more productive conversations about how we can use AI to benefit the experience that Vermonters have of our office, but also to benefit the state's financial management of its cash. And I think that's all I can say.

[Alyssa Black, Chair]: That's all you have. Thank you very much.

[Peter Trombley, Director of Legislative Affairs, State Treasurer’s Office]: Thank you.

[Alyssa Black, Chair]: Hold on just a minute. Questions from anybody?

[Brian Cina, Member (bill sponsor)]: I have one, but I'm waiting.

[Alyssa Black, Chair]: Go ahead.

[Brian Cina, Member (bill sponsor)]: I was giving you a chance, yeah. So, I've talked with the treasurer before about this issue. I'm curious, are you aware of any examples? I mentioned Slovakia. Have you all had a chance to look at any examples yet of how it has helped government?

[Peter Trombley, Director of Legislative Affairs, State Treasurer’s Office]: I'm not aware of any specifically in the area of budgeting or investment, although our deputy treasurer just got back from the National Association of State Treasurers meeting, so I can certainly ask him whether there was anything on the docket down in DC on this subject. But I think it is clearly a topic of interest that we have to be looking at more thoughtfully going forward, and this would be a great vehicle for our office to lend its expertise and to engage in those discussions.


[Alyssa Black, Chair]: Alright. I guess I'm a little confused about the treasurer's department having something to do with policy. Usually I think of the treasurer's department as financial, so I don't know how we put these two together. Can you help me?

[Peter Trombley, Director of Legislative Affairs, State Treasurer’s Office]: Yeah, I appreciate that question. I'd say our office deals with a few discrete computer systems in state government: around our cash management, around cash transfers, and our retirement systems. We're the only entity in state government with deep visibility into those systems, and I think that's the unique perspective we bring to the table for this conversation: how should the state be thinking about its use of AI with respect to those systems?

[Alyssa Black, Chair]: Thank you. Any other questions, comments?

[Peter Trombley, Director of Legislative Affairs, State Treasurer’s Office]: Alright, thank you very much. Thank you to the committee.

[Alyssa Black, Chair]: Alright, moving right along here with Joshua Rafic?

[Brian Cina, Member (bill sponsor)]: Sweet. Josiah. So maybe for the record, could you say your name.

[Josiah Resh, Chief Data and AI Officer, Agency of Digital Services]: Yeah. Hi. So for the record, Josiah Resh. I respond to anything that's vaguely similar to my name, so thank you. You're one of the better ones, so I appreciate that.

[Josiah Resh, Chief Data and AI Officer, Agency of Digital Services]: So I'm the chief data and AI officer for the state. I sit within the Agency of Digital Services, and I'm excited to be here to testify on this bill. I'm ready to go pretty deep if you want to; I'm also happy to not go too deep, if you want to keep it more brief. So I guess I defer to you all on that.

[Alyssa Black, Chair]: I think people really need to understand this. So you make that decision, if you feel the information you're going to provide is going to help us around the table understand this, because there are only a couple of people sitting around this table who really get this right now. Okay.

[Brian Cina, Member (bill sponsor)]: I'm not one of them. I would appreciate it if you assumed that no one here knows anything about the work on AI in Vermont, or the history, or the Vermont model, or any of that.

[Josiah Resh, Chief Data and AI Officer, Agency of Digital Services]: Sounds great. So I'll give a little bit of background, and then I'll dive into specifics. And I would appreciate assistance with time management, because I know you have other witnesses and I want to make sure I'm not eating the whole session.

[Alyssa Black, Chair]: You have plenty of time, actually.

[Brian Cina, Member (bill sponsor)]: Yeah, we have until eleven.

[Josiah Resh, Chief Data and AI Officer, Agency of Digital Services]: I won't talk that long.

[Josiah Resh, Chief Data and AI Officer, Agency of Digital Services]: Alright. So I'll give a little background on the history of the work we've been doing in Vermont. My work with AI really started about seven years ago. I was hired in the Agency of Transportation as a developer, before ADS was created. I automated all my work, and then they asked, can you learn about AI? And I said sure. So I learned about artificial intelligence, learned how to build neural networks, and built a few with the Agency of Transportation for things like monitoring infrastructure degradation and predicting when we would need to do more pavement treatments. We also worked on some computer vision projects around managing the inventory of traffic signs around the state, which is a lot more complicated than I ever would have imagined: there are over 200 kinds of traffic signs, and we trained an AI classifier to identify and inventory them so we could manage them effectively. So I have some hands-on experience in this space. When the legislature created the director of artificial intelligence position and the AI Council, I thought, I could try doing that, because there was nobody anywhere else in the country doing that work, and so I got to be the first one, kind of inventing it. We developed a code of ethics with the AI Council, working with a number of partners. That code of ethics has since been adopted by numerous states; I've lost track at this point, but I have talked to, I think, 47 or 48 states about the work we're doing in Vermont and how they can emulate the success we've had here. The states I work with most closely are Pennsylvania, Maryland, Georgia, and Ohio, which is actually kind of an up-and-coming state in this space, and we talk regularly.
We have meetings every other week just to share what's going on in this emerging space: what are you all doing in your state, what are your great ideas that I can take, and how can I support you all?

[Josiah Resh, Chief Data and AI Officer, Agency of Digital Services]: So we have a very collaborative community across states, and I really appreciate that. The code of ethics was one of our first things. We also put out data use guidelines, which talked about protections and about how to approach using AI well within government, and started making some recommendations beyond that to industry and various fields. I am on the board of AI Vermont, which is a nonprofit that seeks to train educators on how to bring AI into their classrooms and educate the next generation of students to use AI effectively. That runs from elementary all the way through college; we work with all sorts of educators there. And AI Vermont, last spring I believe, hosted a discussion of AI and mental health, and they asked me to come be on the panel and bring a technologist's perspective. That was really fascinating work. Coming out of that discussion, and then the work of the AI Council, we actually requested in our report, I think it was in the January 25 report, that AI chatbots were something the legislature should look at, so I'm excited to see that coming to fruition here. We see that there is a lot of opportunity to expand access to mental health support through AI. The risks laid out here, I think, are also real and need to be addressed. So there you go; that was some rambly background. Any questions? And then we can dive into specifics.

[Brian Cina, Member (bill sponsor)]: No, I'll wait for you to dive into the specifics; I have questions about specifics. I've learned to wait till the person's done. All right, I thought you were wrapping up. That was quick. Sorry.

[Josiah Resh, Chief Data and AI Officer, Agency of Digital Services]: No, no, no. Alright. So, diving into specifics: I'd actually love to start with a question on the intent section. Number three in the intent section talks about augmented intelligence. Page two.

[Leslie Goldman, Member]: Page two, line 15. Okay. Thank you.

[Alyssa Black, Chair]: Yep.

[Josiah Resh, Chief Data and AI Officer, Agency of Digital Services]: So there we talk about augmented intelligence, and we use that same phrase again at line 18. I didn't see a definition for augmented intelligence, and I just want to make sure we're talking about the same thing. So if anyone would like to speak to it; I think you're the sponsor, so if you have any specifics there, I'd love to hear them.

[Brian Cina, Member (bill sponsor)]: It comes from language from other states, so I think we would need legislative counsel here, because I gave legislative counsel laws from other states, and that led to this. So maybe the question we could have for our lawyer later is: should we have a definition of augmented intelligence in here, since we're using the term? I think it's something we need.

[Josiah Resh, Chief Data and AI Officer, Agency of Digital Services]: It's only used in the intent section, I believe; I didn't see it anywhere else, so you may not absolutely have to have a definition, but it would be helpful. There are a few different models of what augmented intelligence could look like.

[Brian Cina, Member (bill sponsor)]: Why don't you tell us?

[Josiah Resh, Chief Data and AI Officer, Agency of Digital Services]: Okay. I am not an expert in AI and neurology and brain-computer interfaces, but one of the models I've heard of is basically the smart glasses: the ability to bring up information in real time and have it in your peripheral vision as you're interacting with the real world. So, kind of an augmented reality plus Google search capability to surface information. One place that's being used is in industry. I've seen demos of this, though I don't know if anyone's actually using it in production yet: looking at a machine that's currently suffering a breakdown on a manufacturing floor, and having additional information about that machine, how to maintain it, and its current operating status presented in real time. That would be one model of what augmented intelligence could look like. There's also the more Neuralink, a little farther out there, model of augmented intelligence, where it's like we can plug Wikipedia into your brain. But I think that's not real yet; pulling information directly into the brain is still a little bit on the sci-fi side. So just clarifying where we're going with this would be great.

[Brian Cina, Member (bill sponsor)]: So what I can say to that, because you asked the question, without the lawyer's definition: the intent in drafting it was to make it broad so that it would cover assistive AI. I just looked up, because some of this came from American Medical Association guidelines, that augmented intelligence usually means assistive AI, where the human and the computer are going back and forth. I don't know, Josiah, if you would say that ambient AI in a medical record system goes as far as augmented intelligence, where it's listening in, giving you suggestions, and then you're deciding what happens; but that's maybe the beginning of the concept. And then the most extreme is, as I just mentioned, Neuralink. I sent the committee a link to the Neuralink website last week; it's the one where they implant things in people's brains. So there are people in wheelchairs with implants who are able to move cursors on a screen and so on. That kind of augmentation is evolving, and there may be a point, and this is what Josiah said, we're not there yet, where people will have enhanced abilities or access, like imagine calling your friend in your head. We're not there yet, but it's not far off. The intent here was to cover all of it before we get there.

[Josiah Resh, Chief Data and AI Officer, Agency of Digital Services]: Yeah. And I think this gets to some of the comments I was going to make later, but maybe this is pivoting into section two, so I'll pause for a moment first. Are there any other questions for me on section one before I pivot into section two?

[Brian Cina, Member (bill sponsor)]: Well, besides augmented intelligence, do you have any problems with any of the language in it, or any concerns, or any suggested changes?

[Josiah Resh, Chief Data and AI Officer, Agency of Digital Services]: One other thing: there's a model from New Jersey that I really like, and I jotted it down in the intent section. It's not tied to a specific comment, but I believe it's New Jersey bill A3540, and I can send it over to counsel after, if that's helpful. They have put together some really good work on deepfakes and generative artificial intelligence, and just how they think about that, that I think would help frame our approach throughout this bill. So it's not a specific comment on the intent section, but it's relevant throughout.

[Brian Cina, Member (bill sponsor)]: It would be great if you could send that. You said it's New Jersey A3540? Because I'm also taking notes to follow up. If you could send it over to Tasha, then Tasha can share it with us.

[Josiah Resh, Chief Data and AI Officer, Agency of Digital Services]: Can you make a note that I want to do that? Thank you.

[Peter Trombley, Director of Legislative Affairs, State Treasurer’s Office]: All right.

[Leslie Goldman, Member]: A real intelligence, not an artificial one.

[Josiah Resh, Chief Data and AI Officer, Agency of Digital Services]: We rely on counsel for real intelligence, yes. Okay. So, section two. A distinction I wanted to draw as I was reading through this and thinking about the approach here: there is a big difference between the ability to read neural data, I don't remember the exact term that's used, but basically to read neural data, and the ability to provide input to the neural system. Those are quite different things, and reading neural data is the topic of a lot of research right now. You've heard about, I think you mentioned, headbands and bracelets; there's a variety of ways people are looking at how to read neural data. One of the approaches that's been around longer is the ability to hook prosthetics up to nerves and have some level of control over the prosthetic device from the neurological system.

[Josiah Resh, Chief Data and AI Officer, Agency of Digital Services]: And that all boils down to: we're taking inputs from the brain, reading them, and then doing something with them. There's a pretty significant difference between that and writing information directly back into the brain. That's an area where there's some interest and some exploration, but it's still farther out and not well defined yet as to what's even possible. To use a computer example: people who are better at hardware than I am can take a computer chip, put pins on it, read its outputs, and learn to decode them even without documentation for the piece of hardware. That's a thing that's possible. Controlling that device is a whole other level of complexity; providing input is much harder than reading the output. So that was going through my head as I was reading this, and I wanted to say there's probably value in differentiating between those in this bill. We want to make sure we're doing the data protection things, that we're protecting privacy; those are all related to reading data out, and that's here and tangible, and we should be taking action on it now. We want to differentiate that from putting data directly into people's brains, the "put Wikipedia in your head so you know everything" idea. That's not real yet, and it's not even certain that it's possible at this point. So I wanted to draw that distinction, and I hope it's helpful to you as you're thinking about this bill.

[Alyssa Black, Chair]: I have a question. On line 15 of page three, I'm sitting here trying to figure out how information gets put into my brain and how an intervention can take place. How do you do that?

[Josiah Resh, Chief Data and AI Officer, Agency of Digital Services]: I think that's the point I'm trying to make: putting things directly into the brain, I haven't seen any examples of that yet. Now, what does exist is the ability to use the senses you already have. For example, and I won't use a neurological example specifically, a lot of wellness apps, or even the health app on the iPhone if you have certain types of devices, will say, "Your pulse is really high; you should take some breaths." That's not neurological, but it's using biometric signals to surface information that your eyes take in, and that provides you useful feedback. That's a little different from putting "breathe" into your head directly; sending you notifications is a step removed from that.

[Alyssa Black, Chair]: That's fine.

[Brian Cina, Member (bill sponsor)]: Yeah. So what I'm hearing is that an area of current concern is that we do have the ability to measure and record and share neural data, information about what's private in our minds, and that that's emerging; that there's not a technology yet that inserts data through the same means, but that our brain can be influenced through our existing senses. Some examples to consider, and then I have a question: subliminal messages can be implanted into images. Binaural beats and isochronic tones are examples of sounds that could be used to induce brain waves. So if someone is measuring your neural data and tracking it, and that's integrated into the media that you're looking at, like social media, it's possible with current technology that you could be given information through the social media that would influence your brain state. I don't know if anyone's doing that.

[Josiah Resh, Chief Data and AI Officer, Agency of Digital Services]: Yeah, I'm not aware of anyone doing that, and I don't know that the current technology we have is fine-grained enough to introduce anything specific. And this is another general framing idea: under HIPAA, there is an assumption that health data is used in direct patient care. I'm sure you're all familiar with this idea. I don't know if we can frame all of neurological data, or all of the chatbot input, getting to that section of the bill, as health data. But framing significant carve-outs of it as protected health information would prevent a lot of what you're talking about. It could just hook into existing statutes that protect certain types of data from being used for anything other than direct patient care. And that would eliminate social media using this type of data to do anything, if that becomes technically possible in the future. It's a more subtle step, but I think it provides robust protection and also gives good future-proofing as the technology evolves and capabilities are developed.

[Leslie Goldman, Member]: Excellent. We keep talking about current state, and the current state is now, but the evolving state is so fast. I don't think we can predict how fast, but it's scary fast.

[Josiah Resh, Chief Data and AI Officer, Agency of Digital Services]: Very fast, yes.

[Leslie Goldman, Member]: That's the heart of my concern. Does this bill look only at current state, or does it think about future state as well? And do we need to think about how we corral this rapidly evolving technology?

[Josiah Resh, Chief Data and AI Officer, Agency of Digital Services]: Yeah, that's a good question. Currently, I think the bill is looking both at current state and at future state. And my proposal is to look a little bit less at the farther-out future state. Stay a little closer to current state, and use protections and frameworks that already exist and are also evolving, like hooking into health care data privacy. That's also an evolving field that's keeping up with these use cases; generally it's trending in the direction of keeping up with it. So that would be my proposal, rather than trying to envision potentially new industries. There could be some, but we don't know enough yet to say, this is the right shape of regulation for an industry that doesn't yet exist. Does that make sense?

[Leslie Goldman, Member]: Yeah. Would the council be the place where this evolving use case, if that's the right term, would be explored or understood?

[Josiah Resh, Chief Data and AI Officer, Agency of Digital Services]: Yeah, we can certainly explore this here. But I think the small tangible step here would be protecting neural data as health data, because that will provide a lot of the protections. And then, yes, the AI council might monitor for any additional things that are needed. But I think that would get us a good step toward protection.

[Leslie Goldman, Member]: Does the council now, and I'm not as deeply into this, I'm just seeing it currently, but does the council's brief include looking at that? Or does that need to be stronger?

[Josiah Resh, Chief Data and AI Officer, Agency of Digital Services]: I think as it is in here, there's enough.

[Alyssa Black, Chair]: Okay. Thank you.

[Brian Cina, Member (bill sponsor)]: That's a good segue to what I wanted to ask next, because I feel like now we're digging into the meat of section two, which was the rights, and then a series of changes to law. The amendment Jen's working on, I wish we had something to show, I have something here, but I don't know if it's good to put that up for people to see yet, would strike out everything in section two after the neurological rights and move it into the study. The reason being that the chatbots are going to be covered through a different bill, and for the other pieces, like we're hearing from Josiah, some of them, we're not ready to make these changes yet. It's too future-thinking. So what would happen, and Josiah, this is going to lead to the question, is in the amendment that's going to come out, Jen's going to figure out the language. So what I'm going to say to you may not be the exact language, but I think the gist of this is what you'll see. If you look in the study part of it, it says on or before, I don't know the exact line number. Yeah, we can find that if people wait to the end.

[Josiah Resh, Chief Data and AI Officer, Agency of Digital Services]: It's in section five. Is this new language, not language that's here?

[Brian Cina, Member (bill sponsor)]: Right, it's a small modification to the language that's there. So I'll tell you exactly what it is so everyone can look and imagine this together. Because then I would like you to weigh in on it, Josiah, because I think it's speaking to what you're saying now.

[Josiah Resh, Chief Data and AI Officer, Agency of Digital Services]: This is probably line three of page 27.

[Dr. Rick Barnett, Vermont Psychological Association (Legislative Committee Chair)]: Can you go to page 27?

[Brian Cina, Member (bill sponsor)]: I'm looking at Wendy's actual bill, comparing it to mine. If you go to page 27, line six, it says recommending any additional statutory changes necessary to further the purposes of this act, and it would add in, including but not limited to, something like that: one, protections for neurological rights and neurotechnologies; two, guidance on use of generative AI by regulated professions; three, regulation of the use of artificial and augmented intelligence in payer utilization review processes. So what we would be doing is, instead of passing all these regulations for what might happen, asking the council to monitor these things over the next year and come back with any recommendations. And so it gives them a significant amount of time. And then we would be adding to the council some new members, switching it up a little. We would be adding a social worker back in the position of ethics. We would be adding a doctor, and adding a teacher, and then adding someone from the treasurer's office. So we're adding members and then repositioning. The commissioner of health is replaced by the agency of human services, but that's just their boss; they could still appoint the commissioner of health or whoever. But it's thinking broader. It's not just about health, it's about the whole agency. So the idea is we're switching up the membership to add some expertise to the council, and then giving them some specific work related to health, human services, education, and public finance in the next year. So how do you feel about being asked to take on that work, or being given some direction or focus for the next year? Do you feel like these changes to the membership are going to be harmful, helpful? Are you neutral? Any changes? I think this would be important to hear from you.

[Josiah Resh, Chief Data and AI Officer, Agency of Digital Services]: Yeah. So I will actually defer on the changes to Miles, because I wanted to hear from him as well.

[Brian Cina, Member (bill sponsor)]: Good time to bring you up. You can go sit up there with him.

[Josiah Resh, Chief Data and AI Officer, Agency of Digital Services]: We can pivot to him in a little bit.

[Peter Trombley, Director of Legislative Affairs, State Treasurer’s Office]: Alright.

[Josiah Resh, Chief Data and AI Officer, Agency of Digital Services]: One comment, though: there's a lot to bite off in this study by the time this bill passes, six months from when the bill passes. So my consideration would be, and we have this new scope to digest some more, it might make sense to split this into a couple of things and get priority from the legislature on which thing you want for the '27 part of the session and which you would like for '28. That would just give us a little more time to digest these thoroughly. What I don't want to do is give you four paragraphs on each. And maybe make it explicit: do you want language from us? Do you want us to work with counsel to develop language, or do you just want ideas? Knowing intent there would be really helpful.

[Alyssa Black, Chair]: Okay. Yeah. All right, now we're gonna do something unusual. We haven't finished with you, and I know you've got somebody else with you, but we have a witness who has to go. So you stay right there. You want to go over here? Okay. All right. Let's switch places.

[Dr. Rick Barnett, Vermont Psychological Association (Legislative Committee Chair)]: Excellent. Thank

[Brian Cina, Member (bill sponsor)]: you. It's like an intuition.

[Alyssa Black, Chair]: Yeah. Have to reboot.

[Josiah Resh, Chief Data and AI Officer, Agency of Digital Services]: Yeah. Yeah. That's exactly in section three. Here, I'll go this way,

[Dr. Rick Barnett, Vermont Psychological Association (Legislative Committee Chair)]: go that way. I

[Brian Cina, Member (bill sponsor)]: like the clockwise motion, this is good energy. Clockwise, clockwise.

[Alyssa Black, Chair]: So, do it by AI, we'll be all set.

[Dr. Rick Barnett, Vermont Psychological Association (Legislative Committee Chair)]: Hi everybody. For the record, my name is Dr. Rick Barnett, licensed psychologist, doctorate, licensed alcohol and drug counselor, and the chair of the legislative committee for the Vermont Psychological Association. Thank you for the opportunity to testify. I did submit some written testimony late last night. I don't know that I really want to read through all of it, and all of the conversation so far is very informative and relates to some of the testimony I've already submitted, so I don't need to go through every item, but I do want to highlight a few things. Basically, the Vermont Psychological Association is in full support of H.814. We do have some concerns about some of the language, but it's not really opposition; it just focuses on clarification of some of these things. Some of it's already been addressed in the testimony we've heard so far, particularly about AI chatbots. So one of the things that we wanted to distinguish was whether that's being taken out or dealt with separately. There are situations where the Food and Drug Administration is increasingly approving digital therapeutics. Those digital therapeutics are in some cases yet to be defined, some yet to be approved, and some are approved. What I mean by digital therapeutics is potentially some form of AI chatbot that is approved by the FDA. So if there's an outright ban on AI chatbots, and we are authorized by the Food and Drug Administration to use these tools in our practices, we don't want to get dinged for violating these new laws by using an FDA-approved product. So I just want the committee to think future-oriented, as was mentioned before by Representative Goldman.
It is hard to determine what's coming down the pipeline, but to the extent that we are able to imagine the massive changes that are coming, we don't want to be in a situation where we're overly regulated, overly restricted, and at risk of violating these types of rules to the extent of tens of thousands of dollars for each infraction. That's some of the language in the bill as it refers to penalties, which is something else I wanted to mention in my testimony. So that was the first thing. Also, another important thing, as Representative Cina was just mentioning: the American Psychological Association, of which I'm a member and on one of whose governing bodies I serve, has a lot of experience with AI. Psychologists in general do a ton of research on AI and the effects of AI on the brain and on social and behavioral functioning. So we would like to be included in some way, shape, or form as part of this advisory council, as a leader in this field. There was a specific ask in my testimony to see if there's a way to change the language to add someone appointed by the Vermont Psychological Association, or, in the section that we were just reviewing, where it says review guidelines and recommendations from the American Medical Association and the National Association of Social Workers, we could add the American Psychological Association there, just because of their leadership in this area. The other thing I just wanted to clarify: there is some language in the bill about the Office of Professional Regulation being involved. We are regulated by the Office of Professional Regulation. So my hope would be that if this bill were to be passed, Senate or House gov ops would have a chance to look at it and see what role they should be playing.
It's not just a consumer protection thing, where we're involved in consumer commerce litigation and penalties, but also regulation by our governing board as professionals. It is mentioned in the bill, but I think it could be made stronger that the Office of Professional Regulation ultimately has a lot of power over our practice, not just fines from the Attorney General's office or whatever. And finally, well, not finally, I feel a little bit rushed, but there is information in the bill about insurer accountability. So we're talking about the state's finances using AI, and how insurance companies are using AI in terms of auditing insurance claims and things like that. It seems like clinicians are being held to a higher standard than some of these other entities that are using AI as well. So penalties, if they're going to be levied against different entities, should be fair and balanced across the board. It did seem like there may be some disproportionate focus on mental health providers here, and I just want to be careful about that. And related to that, we were talking about augmented reality versus, I'm sorry, augmented intelligence versus artificial intelligence. The difference, in my opinion, and again, I'm not an expert on this necessarily, is that the person-computer interface is front and center in augmented intelligence, whereas artificial intelligence is more dominated by the technology itself, not the human. So we just want to be clear in our testimony for the committee that we would like to be protected in using AI tools, assuming that the provider is engaging. I just had a session this morning as a psychologist. I did it on Zoom, in my car, outside. And I use an AI notetaker in my practice. So, for example, I'm not allowed to just record the session without, of course, telling the patient the session is being recorded. And I can't just click sign at the end of the session.
I have to go through and make edits to prove that I actually read the note and engaged with the technology. So it's not just the technology writing the note for me while I'm not even paying attention anymore. That's a real-time example of how we want to be able to use these tools and not be under threat of being penalized for misuse in some way. So, protections around showing that these are helpful tools, as long as the provider is engaging with the tool being used, rather than the tool itself taking over our brains and our practices. I just wanted to make that distinction. And I did actually go through a lot of detail in suggested amendments. I don't know how much the committee is interested in those specific amendments and the rationale behind them, but I'll pause there and take any questions.

[Alyssa Black, Chair]: Questions? Go ahead.

[Leslie Goldman, Member]: So I did a quick search and found what you were referring to about the FDA. I didn't have a chance to read it, but I did send it to the committee, and maybe we could post it on our page just so people could understand what's going on with the FDA and its realm. So I just wanted to thank you for that, because there's so much, and it's moving so fast. How do we keep track and protect patients? I mean, to me, that's sort of

[Brian Cina, Member (bill sponsor)]: I think her hand is up.

[Alyssa Black, Chair]: Oh.

[Leslie Goldman, Member]: I really appreciate your points around making sure that clinicians are protected and also empowered to use tools. And we do have another piece of legislation, eight sixteen. I'm going to email it to you, and I would invite your thoughts on that.

[Dr. Rick Barnett, Vermont Psychological Association (Legislative Committee Chair)]: I've looked at that one as well. I was just a bit careful not to get into that material, but I'm happy to provide some feedback.

[Alyssa Black, Chair]: Thank you.

[Brian Cina, Member (bill sponsor)]: And I want to acknowledge, I did read the letter that the VPA, I believe, sent about chatbots already. I saw there's a letter on the record you sent.

[Dr. Rick Barnett, Vermont Psychological Association (Legislative Committee Chair)]: Oh, that was from the president of the Vermont Psychological Association.

[Brian Cina, Member (bill sponsor)]: You signed it too? Yeah, signed it, yeah. Their name is weirdly formatted on it, by the way. I don't know why, but anyway, I noticed that. They signed over their name, but I saw it and I read it; I just wanted to acknowledge it. And if anyone wants to read it, Tasha can direct you to where exactly, but I think it's under eight sixteen, I'm not sure. How much time do you have, till ten?

[Dr. Rick Barnett, Vermont Psychological Association (Legislative Committee Chair)]: No, I've got till 10:15. I was grateful to speak.

[Brian Cina, Member (bill sponsor)]: Okay. I just want to make sure that I don't make you late, that's what I was asking. But if you have till then, I don't think we need to rush you, because we have time.

[Alyssa Black, Chair]: So I think if we get done early, we want to keep listening to you.

[Brian Cina, Member (bill sponsor)]: So the first question I have is, you mentioned the diagnostic tools that psychologists might be able to use that are examples of augmented intelligence. Is there anything currently existing, or that you see coming soon, for example, AI-assisted psychological testing? Ways that you're aware the profession is considering using AI that the committee might hear an example about? You mentioned note taking, but I'm thinking more in terms of, are there any AI assessment tools coming up? Because I know psychologists use a lot of tools. And maybe you could say more even about that.

[Dr. Rick Barnett, Vermont Psychological Association (Legislative Committee Chair)]: Right, yes. So psychologists, one of the things that separates them a little bit from other licensed mental health practitioners is the use of psychological testing. I don't engage in too much psychological testing, but there are a lot of different psychological tests that can be used to diagnose autism, ADHD, personality disorders, depression, all kinds of different things. And yes, increasingly there are many, many different applications that use some form of AI to assist in those psychological tests. And just to give an example of what Josiah was saying about augmented intelligence and glasses: in terms of future-oriented stuff, there is a not-too-distant future in which we may be sitting on Zoom or in session with patients where we are getting real-time neurological data from them, from the dilation of their pupils, from the way they move their head around the room, all kinds of scanning instruments that might be detecting discomfort or honesty or dishonesty. I mean, it's kind of scary. And I think some of that is going to be happening more and more, using those tools for diagnostic assessment purposes and for therapeutic intervention purposes. So yes, 100%, there are a lot of tools out there. It is happening so fast. And my read on the profession, and maybe Brian could speak to this, is that a lot of practitioners are scared right now to engage too much with these tools, because of patient risks and the desire to protect patients. But more and more, as we get used to it, it's gonna be incorporated. So I think the idea of having a study, and separating that out, makes sense, because it's happening so fast. We don't want to slow it down too much. We want to do something, but we don't know what's coming down the line, and we don't want to inadvertently screw up what we can actually do that's helpful to people down the road.

[Brian Cina, Member (bill sponsor)]: And so that leads to the second question, which is that, as we heard from Josiah, Vermont established an AI council, and they established a code of ethics that then guided the country. So if we establish some basic neurological rights in this bill, it's like setting a tone without making excessive regulation, and then asking the council to study those specific pieces I mentioned earlier. It sounds like you're suggesting that that would be useful, but that it would be better if we added a psychologist to the mix?

[Dr. Rick Barnett, Vermont Psychological Association (Legislative Committee Chair)]: Yes. We have our own; you mentioned the NASW code of ethics. The Psychological Association has a history of being the template for other mental health professions' codes of ethics as well, even though we all share similar codes of ethics, with some slight differences because there are differences in our training. But yes, we have a high code of ethics that a lot of people adhere to, and they're helpful as guiding tools.

[Brian Cina, Member (bill sponsor)]: So you would bring expertise in ethics and in clinical practice and in human cognition and perception and some of these other important dimensions that we need to be considering.

[Dr. Rick Barnett, Vermont Psychological Association (Legislative Committee Chair)]: Yeah, the line in my testimony here says, because psychologists are experts in behavioral science. I don't know where it is in here, but yes. There are specific things that we can bring to the table that may be similar to other mental health providers, but that also add a layer of expertise that could be helpful.

[Brian Cina, Member (bill sponsor)]: I took note of your suggestion. A lot of these other suggestions may not apply if we strike out those pieces, but there are things that could be looked at by the council with a psychologist present. So I read through your testimony carefully.

[Dr. Rick Barnett, Vermont Psychological Association (Legislative Committee Chair)]: And I appreciate it. Thank you.

[Alyssa Black, Chair]: All right, any other questions? All right, thanks for coming in. Thank you.

[Dr. Rick Barnett, Vermont Psychological Association (Legislative Committee Chair)]: Thank you all. Yeah, I'm gonna stay a little bit longer.

[Brian Cina, Member (bill sponsor)]: Debra, are we gonna have a break at any point?

[Alyssa Black, Chair]: Oh, but I have to say some things right now. Okay. We've got two other witnesses, and I do wanna take a little bit of a break, so I'll leave it up to the committee. Do you wanna take a break now, or do you wanna finish this? We can finish this. Okay. After this, we'll take a break before we do that with the other witnesses.

[Dr. Rick Barnett, Vermont Psychological Association (Legislative Committee Chair)]: Yeah, Cassie Fong Soon.

[Alyssa Black, Chair]: And let's let him finish so that.

[Josiah Resh, Chief Data and AI Officer, Agency of Digital Services]: For the record, I'm still Josiah. Just not augmented.

[Leslie Goldman, Member]: I'm not authenticated. You have planned to submit.

[Josiah Resh, Chief Data and AI Officer, Agency of Digital Services]: Alright. So let's see. I wanted to dive in. Did I understand correctly that all of section two will be going into that one sentence in the study now? So we don't really need to drill into section two any further.

[Brian Cina, Member (bill sponsor)]: Section two had the neurological rights first. Oh, yes, those would remain the same. And we don't have the language yet, but what we're asking is that the rest of it be rolled into the study: the actual regulatory changes regarding neurotechnology, generative AI, and utilization review.

[Josiah Resh, Chief Data and AI Officer, Agency of Digital Services]: Then the rights are switching to the UN's

[Brian Cina, Member (bill sponsor)]: Yeah, that's our hope, or some version of it, because legislative counsel will figure out the exact language, but I've sent the language from the UN. Our lawyer is here, in case I have that wrong. Okay. Sure.

[Josiah Resh, Chief Data and AI Officer, Agency of Digital Services]: Then, for section three, did I understand that instead of sections or parts of section three, you're gonna use eight sixteen? I just wanna make sure.

[Brian Cina, Member (bill sponsor)]: Yes. Instead of section three, the chatbots would be covered in eight sixteen.

[Josiah Resh, Chief Data and AI Officer, Agency of Digital Services]: Alright. Section three is a few different things. It has notice, it has chatbots, and there's one other thing, advertising. A bunch of chatbot-related pieces.

[Brian Cina, Member (bill sponsor)]: Basically, the chatbot stuff is all gonna be addressed under eight sixteen, and the other remaining pieces of eight fourteen, the changes, are going to be rolled into the study. So what would remain would be intent, neurological rights, changes to the council, the study report, and an effective date.

[Josiah Resh, Chief Data and AI Officer, Agency of Digital Services]: Okay. So the generative AI notice in section three, which is, I'm sorry, I'm bouncing around a lot, on page 13, that notice of usage of generative AI, that would also be rolled in? Okay. So I won't spend a lot of time on those then. Then, if it's okay, I would like to take a moment to talk about how we're approaching generative AI notice within the executive branch, because I think it may help as you're thinking about that in general. Is that alright? Okay. So one of the first things that we did in the code of ethics was to say that you have to be transparent about when you're using AI. In addition to that, in government, and I think the same types of things would apply in health care and human services, we actually don't ever want a robot making a final decision; the language we used was, any decision taken by an AI has to be transient and reversible. So that was the framing that we have. If there is a decision being made within the executive branch that's being taken directly by some sort of an algorithm or an AI agent, any impact has to be transient and reversible. An example of that is our firewalls. If it appears that, say, your computer is hacked and it's not you anymore using the computer to access state systems, our firewalls and our security tools can cut off your access. That is reversible, and that is a transient impact. But that is a necessary step to take at that speed, rather than going first to someone on our cybersecurity team for review. We just initially terminate access, and then we will reopen it if that was a bad call. That's the approach that we take. But you can't do that on anything where it's actually going to cause significant harm; the impact has to be transient and reversible.
So if you're making any decisions about financial benefits, or health care provision, or anything like that, those effects either are not transient, or you can't actually fully reverse the impact of a bad call in those cases. Any questions on that? Go ahead.
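The "transient and reversible" rule described in this testimony can be sketched in code. This is an illustrative sketch only, not the state's actual implementation; every name here (`AutomatedAction`, `may_execute_without_human`) is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class AutomatedAction:
    """A hypothetical action an AI system proposes to take on its own."""
    name: str
    transient: bool   # does the impact fade once a human reviews it?
    reversible: bool  # can a human fully undo it after the fact?

def may_execute_without_human(action: AutomatedAction) -> bool:
    """An AI may act directly only if the impact is transient AND reversible.
    Anything else must be routed to a person instead."""
    return action.transient and action.reversible

# The firewall example: cutting off a possibly-hacked account is transient
# and reversible, so the AI may act first and a human reviews afterward.
block_access = AutomatedAction("terminate network access", True, True)

# Denying a financial benefit: the harm of a bad call can't be fully
# undone, so this must go to a human decision-maker.
deny_benefit = AutomatedAction("deny benefit claim", False, False)

print(may_execute_without_human(block_access))  # True
print(may_execute_without_human(deny_benefit))  # False
```

The design choice is that the gate is a conjunction: failing either property is enough to require a human in the loop.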

[Leslie Goldman, Member]: So this is a kindergarten question. How do you know all the decisions that AI is making in the background?

[Josiah Resh, Chief Data and AI Officer, Agency of Digital Services]: It's a great question. Part of the work that Miles does, and we get framing guidance from the council on this, is we evaluate each use case to determine whether it's okay to use AI to make that type of decision. So whenever someone brings forward a "we wanna use AI to do this," we go through and say, alright, what decisions would that be involved in? Are those decisions taken directly by an AI or an algorithm, or are they a recommendation to a person? And if it's a recommendation to a person, how is that system designed to make it easy for the person to make a good decision, and not bias them in different ways? So we actually look at every use of AI within the executive branch from a decision-making lens, and then apply that to determine whether it's appropriate to use AI in that loop.
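The use-case review described here, asking whether the AI decides directly or only recommends to a person, could be sketched as a simple triage. This is an illustrative sketch under the framing in the testimony, not the agency's actual process; the function name and outcome labels are hypothetical.

```python
def triage_use_case(decides_directly: bool,
                    impact_transient: bool,
                    impact_reversible: bool) -> str:
    """Hypothetical triage of a proposed AI use case, following the
    review questions described in testimony."""
    if not decides_directly:
        # The AI only recommends; a person makes the call. Review then
        # focuses on interface design so the human isn't biased.
        return "allow as recommendation; review interface design for bias"
    if impact_transient and impact_reversible:
        # Direct action is tolerable when any bad call can be undone,
        # e.g. a firewall temporarily cutting off a hacked account.
        return "allow direct action with after-the-fact human review"
    # Direct, non-reversible impact (benefits, health care) is out.
    return "reject direct action; require a human decision-maker"

print(triage_use_case(decides_directly=True,
                      impact_transient=True,
                      impact_reversible=True))
```

Note how the first question, direct decision versus recommendation, is checked before the transient/reversible test ever applies.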

[Leslie Goldman, Member]: And how many do you think there are?

[Josiah Resh, Chief Data and AI Officer, Agency of Digital Services]: I believe our current inventory is around 50 systems.

[Leslie Goldman, Member]: Oh, 50?

[Josiah Resh, Chief Data and AI Officer, Agency of Digital Services]: 50. Yep. 50 systems that use AI. There are a number of decisions supported by each of those. And we publish our inventory in our annual report, which comes to the legislature. So that's on the legislature website.

[Leslie Goldman, Member]: Maybe we could get that report.

[Brian Cina, Member (bill sponsor)]: Yeah, I found it. I'll send it.

[Leslie Goldman, Member]: Yeah. That'd be great.

[Brian Cina, Member (bill sponsor)]: I'm assuming it's okay to ask you a question. We decided by consensus: can I ask a question? Yes, you may. I was trying to respect you and wait. Thank you. Doing my best. You mentioned the AI inventory, or the inventory of automated decision-making systems, something like that. And I brought this up before, when we've had testimony about how health care providers are using AI internally. We don't require them to report it, but they have their own AI. It sounds like UVMMC, for example, on the record, was talking about how they have an internal AI committee, and we've asked them to look at what the state's doing. Can you say a little bit more about the inventory? You mentioned that there are 50 systems; I think originally there were like 10 or something when we started, so it's growing. In some jurisdictions there are hundreds; I think New York City has a very large number. I don't want to give false info, but it's way more than Vermont's. Even though I'm going to send it to everyone, can you say briefly for the record what is on the inventory? Is it the name of the system? The creator of the system? And then what are the other fields?

[Josiah Resh, Chief Data and AI Officer, Agency of Digital Services]: This is

[Dr. Rick Barnett, Vermont Psychological Association (Legislative Committee Chair)]: a pop quiz. Yeah. Miles may be able to speak to this more.

[Leslie Goldman, Member]: So it's not really

[Josiah Resh, Chief Data and AI Officer, Agency of Digital Services]: So maybe, yeah. So there's

[Brian Cina, Member (bill sponsor)]: I don't know.

[Josiah Resh, Chief Data and AI Officer, Agency of Digital Services]: Yes. We have the name, the maker of the system. We talk about the decisions that are made, whether those decisions have a direct impact. Where we know financial information about the system, like procurement costs, we include those, but usually AI is a small part of a bigger thing, so we don't usually have direct costs for the AI system. For the ones that we build in house, we include a little bit about the underlying models supporting those. And we talk about the use cases and any particular constraints that we put in place around those. So there's a lot in the inventory, and Miles will fill in any details that I missed. But that's the guts of it. And where I wanted to go with this was not so much, you know, we need every practitioner to disclose their AI usage. As government, there's a bit more scrutiny and a bit higher transparency on us about that. But what I think we do really well is disclosure of when we're using AI. So when generative AI tools landed, we developed a disclosure that says, this content was, and then how did you use the generative AI? Did you use it in drafting? Did you use it in editing? Right? Like, this content was, we'll say, edited with the support of a generative AI tool, and then the name of the tool. So I used Anthropic Claude, or I used our internal ChatVT tool. The content has been reviewed and verified and represents the intent of somebody. Right? So if you're using AI, you've got to kind of put your name on it: this is what I intend. And that protects us and the public from, I'll give a silly example that was in the news lately. There was a police officer who was responding to a domestic. The Princess and the Frog, it's a Disney movie, was playing on Disney Plus in the background during the call. And so the transcript of what happened, generated by the body cam software, suggested that the police officer had turned into a frog.

[Dr. Rick Barnett, Vermont Psychological Association (Legislative Committee Chair)]: Oh.

[Josiah Resh, Chief Data and AI Officer, Agency of Digital Services]: And that got filed as evidence.

[Alyssa Black, Chair]: Wow.

[Josiah Resh, Chief Data and AI Officer, Agency of Digital Services]: And that's the type of thing we are trying very hard to avoid, because you can't rebuild trust after that. It's really hard. So yes, we're using it right now. We weren't then, but now we're doing better. Instead, we're requiring that for any use of generative AI, people have to say, I reviewed this, I verified it, and it represents my intent, and put their name on it, or the name of their office in the case that it's an official body. And so, on what was on page 13 of this bill, and I haven't looked at H.816; I would love to take a dive into that and am happy to come back and provide feedback on that specific language. But the approach taken on page 13 here was that it was about generative AI usage only when there wasn't a review. I think lines four and five say, if a communication is generated by generative AI and read and reviewed by a licensed healthcare provider, the requirements of this section shall not apply. From my perspective, and I'll defer to you if you have other thoughts on this, I actually don't want generative AI ever providing a direct recommendation to a patient on behalf of a healthcare organization. You can use ChatGPT for that on your own time if you feel like it, but as soon as it comes from a trusted medical office, if it's unreviewed, I think that's not appropriate at this stage. And so the framing of this section was basically a disclosure for unreviewed generative AI information. I would rather just say, let's not have unreviewed generative AI information, and potentially have a disclosure if there was generative AI use at all, so that people can still have trust that they're getting good content even if the medical provider is using, say, a transcription service to summarize the case notes. That's a great thing to do, but you want to make sure that it's been reviewed.
That was a bit rambly, but I'll pause for questions.
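[Editor's note: the two framings contrasted in the testimony, the bill's page-13 exemption versus the stricter approach Resh describes, can be sketched as two small decision rules. Function names are hypothetical, and the logic paraphrases the testimony rather than quoting bill text.]

```python
def bill_page13_disclosure_required(generated_by_ai: bool,
                                    provider_reviewed: bool) -> bool:
    """Bill's framing as described in testimony: disclosure applies
    only to unreviewed generative-AI communications."""
    return generated_by_ai and not provider_reviewed

def stricter_policy(generated_by_ai: bool,
                    provider_reviewed: bool) -> tuple[bool, bool]:
    """Resh's preferred framing (paraphrased): unreviewed generative-AI
    communications from a healthcare organization are not allowed at
    all, and any generative-AI use is disclosed.
    Returns (allowed, disclosure_required)."""
    allowed = (not generated_by_ai) or provider_reviewed
    disclosure_required = generated_by_ai
    return allowed, disclosure_required

# Under the bill: reviewed AI content needs no disclosure.
assert bill_page13_disclosure_required(True, True) is False
# Under the stricter policy: reviewed AI content is allowed but still disclosed.
assert stricter_policy(True, True) == (True, True)
# Unreviewed AI content: disclosed under the bill, disallowed under the stricter policy.
assert bill_page13_disclosure_required(True, False) is True
assert stricter_policy(True, False) == (False, True)
```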

[Brian Cina, Member (bill sponsor)]: Yeah, you mentioned that you think it would be best to not have generative AI interacting with patients without a provider's review, but I've been getting these advertisements, because I'm a provider of healthcare, just for those who don't know. I get advertisements because I talk about this all the time; my phone's listening to me. So I get these advertisements now, not only for headbands, I'm flooded with headband ads and I'm even tempted to buy one, but for digital receptionists. I can actually get an AI now that takes my voice, you talk and it learns your voice, it sounds like you, and it answers your phone for you. And then it says, this isn't Brian, this is Brian's AI. I can schedule an appointment for you, I can leave a message for Brian. Is it an emergency? Is that too much? Because they're actually handling a person who could be in crisis. Do you feel like that's too much? That's currently being sold to us.

[Josiah Resh, Chief Data and AI Officer, Agency of Digital Services]: I think those administrative type uses, I think could be okay. And I think in that context, I was thinking more in the context of this is like clinical information being provided or interpreted. And that's what I'm like, let's not interpret clinical information without a human saying it's their intent.

[Leslie Goldman, Member]: Last question on this piece. Well, this may not be appropriate for this moment, but you mentioned Claude and Anthropic, and I know they're not cheap. So I'm wondering, well, maybe in the grand scheme they are cheap. Do you have a sense of how much the executive branch is spending on this?

[Josiah Resh, Chief Data and AI Officer, Agency of Digital Services]: I'd have to get back to you on that. I know with our ChatVT tool, largely we did that because it's dramatically cheaper than buying licenses for everyone in state government to use a $20 a month tool. So for our internal use of ChatVT, I think we're still under $1,000 a month, and that allows anyone in the executive branch to have access to these tools.

[Leslie Goldman, Member]: That's a lot of people, right?

[Josiah Resh, Chief Data and AI Officer, Agency of Digital Services]: Yeah, it's almost 10,000 people.

[Brian Cina, Member (bill sponsor)]: Right, so that seems like-

[Josiah Resh, Chief Data and AI Officer, Agency of Digital Services]: Yes, that's why we do that.

[Leslie Goldman, Member]: It is

[Josiah Resh, Chief Data and AI Officer, Agency of Digital Services]: very cost effective.

[Dr. Rick Barnett, Vermont Psychological Association (Legislative Committee Chair)]: Okay, thank you.

[Alyssa Black, Chair]: Great. Thank you very much. Do you have more that you need?

[Josiah Resh, Chief Data and AI Officer, Agency of Digital Services]: Let me just see if

[Josiah Resh, Chief Data and AI Officer, Agency of Digital Services]: there's anything toward the end. I would be happy to come in and talk about eight sixteen if that is helpful. I haven't reviewed it yet, so I can't do that at this moment.

[Josiah Resh, Chief Data and AI Officer, Agency of Digital Services]: But I think I can hand it over to Miles, or we can take

[Alyssa Black, Chair]: a break. That's what I wanna do, because we need to take a break. How long are you going to take, Miles?

[Josiah Resh, Chief Data and AI Officer, Agency of Digital Services]: I'm pretty flexible on timing. If you want to take a break now, I would be happy to do that, and then we can do some tests.

[Alyssa Black, Chair]: Alright,

[Lynn Courier, Executive Director, NASW Vermont and New Hampshire]: let's