Meetings

Transcript

[Leslie Goldman (Member)]: We're live now.

[Francis "Topper" McFaun (Vice Chair)]: Okay, we're live. This is the healthcare committee; it's still the twenty-fifth, and we're back from a little bit of a break. With us we have Miles, and we're gonna let him take it away right now. We're still talking about H.814.

[Miles Latham (Director of Artificial Intelligence, Vermont Agency of Digital Services)]: All right, well, morning everybody. For the record, my name is Miles Latham. I am the director of artificial intelligence with ADS, the Agency of Digital Services here at the state of Vermont. I lead our division of artificial intelligence within ADS, which is part of Josiah's team, who you heard from earlier. Our division is primarily responsible for implementing and developing the AI tools that we make available across the state government, as well as specific tools that we develop for specific agencies or departments throughout the state of Vermont government. Additionally, we are responsible for defining and operationalizing policy guidance on the use of AI within our state government workforce, as well as various other related initiatives, such as education development for state of Vermont employees. I have a couple of thoughts and comments that I wanted to share with respect to section four of this bill and the proposed changes to the composition of the AI Council. To start, I just wanted to begin with a high-level summary of what the AI Council does from my perspective and how my position and our division work with them, if that's okay. The main role of the AI Council, as I see it, is to provide the framing strategic guidance to our division, which myself and our team are then in turn responsible for operationalizing through the various phases of our work that I previously described. The council provides strategic guidance on a very wide-ranging array of issues, including some of the things that Josiah spoke to a few minutes ago regarding identifying appropriate AI use cases and how we can most ethically and appropriately leverage this technology to improve service delivery within the executive branch. I also mentioned that a component of our work is policy development. 
The AI Council is central to reviewing those policies and providing guidance, including on our policies governing the use of AI within the state government, as well as more detailed policies, like how to best use AI within accessibility contexts. Additionally, they provide strategic guidance on how we can best approach the educational initiatives that we're working on within the state government, as well as some other very complex but highly salient issues, like how we're approaching workforce development in Vermont, and how we can best plan for future uncertainty, given that this technology is evolving so rapidly that it's very challenging to estimate what it's actually going to look like in five years. So how can we effectively develop strategic plans on a five-year time horizon for what our division is gonna be focusing on? Those kinds of questions; I've found that our sessions with the council have been instrumental in helping to develop answers to those. And that's all to say that one of the main benefits of the council that I've experienced has just been the diversity of the backgrounds and perspectives that the various council members bring to each of our monthly meetings. So going to section four specifically, I just had a couple of notes here, as I mentioned. For point (f), one thing that I wanna raise is that the present governor's appointee has been extremely insightful from my perspective, and I am absolutely not opposed to adding an appointee from the NASW. I thought that Executive Director Courier's testimony earlier was extremely compelling, and, provided this moves forward, I think that her perspectives, or the perspectives of somebody else from NASW, would be instrumental both in driving the research and reporting initiative that's a component of this bill and as an additional perspective for some of the other areas of advice where we look to the council. 
But again, going back to the benefit of having that diversity of backgrounds on the AI Council, I do think that retaining a governor's appointee, particularly somebody with a background in industry, would still be extremely valuable for our work with the council moving forward, namely because of the rapidly evolving nature of this AI technology that we've previously discussed. For better or for worse, industry is often on the forefront of that. So being able to have an industry representative on the council who can bring that perspective, I think, would be very helpful for us. Again, I am absolutely supportive of the idea of adding a representative from the NASW, and I think that would be great moving forward, but I would also like to hopefully capture that industry perspective as well. The next point that I wanted to raise is with respect to point (h) in section four, where one member would be added with experience in public education, appointed by the Vermont National Education Association. The thing that I wanna raise here is, as I briefly touched on, we do have some folks at the Agency of Education that we've been working with pretty closely in our interactions with the AI Council over the last several months. As one example, we've been coordinating pretty closely with Josh Bloomberg, who is the Education Technologies Program Manager at AOE. He is not currently working in public education, obviously, since he's with AOE in that position, but he does have a background in education, specifically computer science. And his team authored the AOE's recent guidance document focused on artificial intelligence guidance for education. He actually presented this at our previous month's council meeting. I thought it was a really strong document. 
And my overall point here is that, given the focus of the research and the report that's laid out at the tail end of this proposal, I think that the experience of Josh and his technology program management team at AOE would be a very valuable addition to the council as well, both, again, with a focus on this specific research task that we'll be undertaking over the next year or so, and additionally, turning back to the other points of guidance that I mentioned. And that is everything that I had, so I'm happy to address any questions.

[Francis "Topper" McFaun (Vice Chair)]: Yes, I have one first. What did you think of the idea of a representative from the Vermont Psychological Association?

[Miles Latham (Director of Artificial Intelligence, Vermont Agency of Digital Services)]: I am broadly supportive of that idea as well. To be frank, I think the broader the array of perspectives we can bring to the AI Council, the better. And I thought that that was a compelling idea, especially given the salience of AI in healthcare, and particularly psychological health care. I think that would be very helpful.

[Francis "Topper" McFaun (Vice Chair)]: Thank you. Go ahead, Leslie.

[Leslie Goldman (Member)]: For both of you, I'm not sure if you could give us some written testimony so we can review it as we go back. That would be really helpful. And I'm looking at the current council members, and I think I have it right: I'm not seeing anyone from primary care or the clinical practice of medicine outside of the mental health world. So I'm wondering about including that role, and what you think. I'm sorry?

[Brian Cina (Member)]: Well, there's someone appointed by the medical society in the bill.

[Leslie Goldman (Member)]: Yeah, okay. So that would cover that.

[Brian Cina (Member)]: Yeah, we could fix the language further, but I just want to

[Leslie Goldman (Member)]: Okay, I was just looking at the current state, and so you're addressing that gap. I mean, it seems like there's a gap. Thanks.

[Brian Cina (Member)]: Can, I don't want That's the question? Because

[Leslie Goldman (Member)]: Right now it looks like there's a lot of, well, there's the Department of Health, so I guess that would count. It's broad, but I just want to be sure that primary care or clinical medicine is covered. What page is that on?

[Francis "Topper" McFaun (Vice Chair)]: It's on page 26. Yeah,

[Brian Cina (Member)]: What we would be adding is a social worker; it says a member with experience in healthcare appointed by the Vermont Medical Society, and a member with experience in public education appointed by the Vermont-NEA. The current bill language would add three new perspectives: a healthcare provider appointed by the doctors, a social worker, and a teacher, so to speak. What we heard is that a psychologist might be useful. Miles, did you suggest another addition? Because I was trying to take some notes on what you were saying. Did you say there would be another perspective we could add?

[Miles Latham (Director of Artificial Intelligence, Vermont Agency of Digital Services)]: So I don't necessarily have anybody specific in mind, but one area where I think an additional perspective on the council would be valuable is somebody who is working in industry in some capacity.

[Brian Cina (Member)]: Okay, that's a must. Thank you. I'm gonna write it down. Okay.

[Francis "Topper" McFaun (Vice Chair)]: Hey, there we go. Yeah.

[Daisy Berbeco (Ranking Member)]: It's not really a quick question, but do you think the information that you heard from Josh Lundberg would be helpful to us, since you just had a meeting with him?

[Miles Latham (Director of Artificial Intelligence, Vermont Agency of Digital Services)]: Yeah. So for context, he was the lead author for the work product that I mentioned, the artificial intelligence guidance for education document that the AOE put out at the beginning of this year. Our AI Council meetings are once per month, the first Friday of each month, and at the previous month's meeting, he took up a portion of that to discuss this report with the council. I don't know that that document is directly pertinent to this proposal, but I would definitely recommend reading it. It is a pretty strong document just in terms of capturing the current best-practice recommendations for incorporating AI into K through 12 education. What's that? Josh Bloomberg.

[Leslie Goldman (Member)]: Thank you.

[Francis "Topper" McFaun (Vice Chair)]: Any other questions?

[Brian Cina (Member)]: Go ahead. I was trying to wait too, so.

[Brian Cina (Member)]: Two questions. One is: it says in the current statute that the governor should appoint someone with experience in ethics and human rights. I looked at the council membership, and the current appointee is Philip Sussman from Norwich University. When I looked him up online, I didn't see anything about human rights or ethics, but I did see he's a teacher at Norwich for cybersecurity. Can you say more about the perspective he's bringing? And would he be someone that, if we changed the governor's appointee to industry, could fill the industry role, even though he has human rights and ethics experience?

[Miles Latham (Director of Artificial Intelligence, Vermont Agency of Digital Services)]: I am not prepared, off the top of my head, to answer detailed questions regarding his background. I will say that his cybersecurity perspective, given his academic position at Norwich, has been very helpful. And again, I couldn't articulate what specific ethics and human rights background he has. But yeah, I would have to get back to you on whether or not he would be a sufficient industry representative as well. That may be something that Josiah could speak to in more detail too.

[Brian Cina (Member)]: And my intent here is to preserve the strength of the council while adding members, and figuring out a way to do it so that it's a win-win and not anyone losing.

[Miles Latham (Director of Artificial Intelligence, Vermont Agency of Digital Services)]: Yeah, and I will say, to more concretely answer your question, his insights have been very valuable. I, again, cannot speak to the details of his background, but he's been very helpful in terms of helping to guide our strategy.

[Brian Cina (Member)]: You could always have a member at large appointed by the governor, and then they get to decide what the void is that they're filling.

[Francis "Topper" McFaun (Vice Chair)]: We'll note that.

[Brian Cina (Member)]: And then the second question, which is directed to you: are these people paid? Is this costing us anything? How much is this costing?

[Leslie Goldman (Member)]: It's not in there as

[Daisy Berbeco (Ranking Member)]: per diem or any of that.

[Brian Cina (Member)]: Well, it might be in the existing statute.

[Jeremy Aderman (Senior Director, National Council for Mental Wellbeing)]: Thank you, Brian. Yes.

[Josiah Raiche (Agency of Digital Services)]: I don't wanna be out of order; I can go back up there, Chair.

[Francis "Topper" McFaun (Vice Chair)]: Just tell us who you are.

[Josiah Raiche (Agency of Digital Services)]: So there is a per diem available for members who are not otherwise part of the executive branch. There are, I think, two or three. So it does cost something, but it does not cost very much.

[Brian Cina (Member)]: How much is it, $50 per diem? I think it is, because I'm on a board and we get 50 no matter how long the meeting is. I'm just putting it out there; we could have Nolan run the numbers, but if we added four people to this council, it would cost the taxpayers $2,400 more a year. So I just want to be clear: is it worth that investment to have those perspectives? If people don't think so, we could always eliminate the stipend, or we could reduce the number that we're suggesting adding, but I wanted to at least honor that there is a small expense. But the question I would have, and maybe this is for you guys, is: what is the cost versus benefit analysis of this work? The investment the state's putting into this work, how much do you think it's yielding us in terms of the return?

[Josiah Raiche (Agency of Digital Services)]: So this actually segues to something else that I wanted to mention, which is that we have lots of people who regularly attend who are not actual members of the council. They participate in the discussions; we recognize that, and they participate throughout. But we don't actually need all the perspectives to be voting members necessarily, so that's another option to consider. We were talking last year about how we should have somebody from the Vermont Arts Council engaged in some of these conversations. There are lots of specific groups that would be helpful to us at different points, but that don't necessarily have to serve as voting members, and that may be a way to address some of this as well. And I'm not ready to speak to which of the four that are being proposed should be which. I can digest that if you want, but...

[Francis "Topper" McFaun (Vice Chair)]: Give us a recommendation on that. We'll make that decision. Okay, thank you very much. We appreciate it, thank you very much.

[Leslie Goldman (Member)]: Awesome.

[Francis "Topper" McFaun (Vice Chair)]: We've really gotta move on because we've got one more witness. Okay. Go ahead, if you have a quick one.

[Leslie Goldman (Member)]: Well, I had just asked, what's the downside of all this? When you worry about what's happening, what do you think about? You had mentioned a couple of things, so I thought we'd get it on the record. But if you wanna put it in written testimony, that would be fine too. I don't wanna take up time.

[Francis "Topper" McFaun (Vice Chair)]: If you prefer that, we can certainly do that.

[Josiah Raiche (Agency of Digital Services)]: If you prefer, I can do it verbally.

[Francis "Topper" McFaun (Vice Chair)]: Nope, well, we'll get the answer right away if you do it.

[Josiah Raiche (Agency of Digital Services)]: Do you want me to sit up here, or stand right where I am?

[Francis "Topper" McFaun (Vice Chair)]: Stand right where you are.

[Josiah Raiche (Agency of Digital Services)]: All right, I'll stand here.

[Francis "Topper" McFaun (Vice Chair)]: Tell us who you are.

[Brian Cina (Member)]: That way they can see you. There you go, perfect.

[Leslie Goldman (Member)]: They can see you straight over here.

[Josiah Raiche (Agency of Digital Services)]: So once again, I'm Josiah, for the record. So the things that we're tracking right now as the biggest risks around AI, there are a few. One is around workforce disruption, as AI tools are getting more and more capable. One of the standing conversations that we have with the other states I mentioned in the previous recordings is how we're thinking about that. I'll give an example from many years ago: medical transcription used to be a job for many people. That no longer meaningfully exists. And that wasn't even cool new AI; that was old, uncool AI. So we're looking at things like that, how we support folks whose jobs are changing, and we're working with the Department of Labor. So that's one. Education is another big one: getting people used to a world where you're collaborating with machines in order to accomplish work more effectively. That's another pretty significant new set of norms. We already talked about the disclosure that we use within the executive branch, but for both our K through 12 and continuing education folks, that's a topic we talk about a lot. And we wanna make sure that we are preparing the next generation effectively, both to use the tools and to take advantage of the really personalized education that's available through AI. So those are two, and, you know, we just wanna make sure that we are working well with our current state workforce. One of the other questions that you asked during the break was, what does this mean for the existing state of Vermont workforce? And the way that I've been talking about AI within the executive branch is that it's really a power tool for information workers. It helps you kind of zoom out from the mechanics of doing the task and focus on the outcomes. And we've been doing more with less for so long that this is actually helping us catch up a little bit around quality. I think the very first big AI pilot that we did was a legislative report. 

We worked with our project management office; this might have been the 2023 report, so a couple of years ago now. We let them use AI to generate parts of their report and to make it more readable for people who are not nerds. That was the goal, to make the report more legible. And the response we got back from the project managers, when I asked how much time it saved, was: several hours. And I was like, this is a 100-page report. What do you mean it only saved you several hours? And they said, this is the first year we were ever able to do it to the level of quality that we wanted to. We've never had time. And so that's the paradigm that we're seeing across the government. All right. I think that answers both of your questions. Yeah, thank you.

[Jeremy Aderman (Senior Director, National Council for Mental Wellbeing)]: Thank you.

[Francis "Topper" McFaun (Vice Chair)]: Okay, thanks very much. Thank you to both of you. So next we're going to have Kelsey Staffseff. I hope I pronounced that right.

[Kelsey Staffseff (Executive Director, Northeast Kingdom Human Services; Co‑President, Vermont Care Partners)]: You got it.

[Francis "Topper" McFaun (Vice Chair)]: I got it, good, thank you.

[Kelsey Staffseff (Executive Director, Northeast Kingdom Human Services; Co‑President, Vermont Care Partners)]: So for the record, my name is Kelsey Staffseff. I'm the executive director at Northeast Kingdom Human Services and also co-president of the Vermont Care Partners network, which is 16 designated and specialized service agencies that provide mental health services for children and adults, substance use services, emergency crisis services, and developmental services. So it's a big topic, and I've read both eight fourteen and eight sixteen. I know we're focused on eight fourteen, and some of the, I think, technical pieces are outside of my expertise, but one thing I wanted to highlight that I did appreciate, and that we do support, is going to be the protections for protected information. I think that's really key, and a part of this bill that we support. I think the advisory council is important. I know artificial intelligence is broad and used in many different areas, as we've heard. I do think that the medical community, and medical supports, should be thoughtful and pay close attention to what information is captured and stored, even as de-identified or disaggregated information, and that patient protections should be robust and prioritized.

[Kelsey Staffseff (Executive Director, Northeast Kingdom Human Services; Co‑President, Vermont Care Partners)]: One thing: I know that the Artificial Intelligence Advisory Council is large, and I can't speak to it directly, I haven't been to a meeting, and I believe Josiah spoke a little bit about inviting people to speak to it. When I look at the members, it's a very professional group of folks, as we've talked about. I think it's important to have people with lived experience, peers, and folks who are using AI day to day in a less professional mindset, to be able to speak to that or participate in a meaningful way as decisions are being made about how artificial intelligence is used. Specific to mental health and our world, I think there's potential for significant impacts. I think regulating professional usage, which would include the designated agency system, is important, but the purview of the information related to artificial intelligence should be expanded, because people are using it. One of the most sought-after pieces of information that people put into LLMs is therapeutic advice, or therapy-adjacent information, including diagnoses. There are chatbots, which I believe is something that's being taken up, that people are seeking out, and there are anecdotes of significant harm that has come to families and kids. And AI, especially when it comes to relational work, is geared towards sycophancy. So if it's used to provide clinical information or advice, or anything that resembles therapeutic or diagnostic information, there should be regulations around that. In general, we're generally supportive of what's been provided in the eight fourteen bill, and I think there are some things that continue to need to be studied and reviewed as this rapidly evolves. 
And one thing I can speak to is that we do use what we would call augmented intelligence, and I know that both artificial intelligence and augmented intelligence are referenced. So I would recommend that a definition be devised for augmented intelligence. That's how we refer to our AI overlay for our EMR, which is supportive and, I would say, more administrative. But we have two policies at NKHS that help to regulate artificial intelligence. Essentially, we have a high-level one that goes beyond the overlay for the EMR. Really, we're trying to establish clear guidelines for the responsible and ethical use of artificial intelligence tools. And we say "tools" because they need to be used and guided by humans. AI technologies can enhance efficiency, support information discovery, and assist in drafting content. However, to maintain the integrity, accuracy, and human-centered nature of our services, AI must be used as a supportive tool, not as a final decision maker or authoritative source. This policy applies to anyone who works at NKHS, but also to interns, volunteers, contractors, and vendors. We support the use of AI tools to assist staff in drafting documents, transcribing meeting minutes, and conducting preliminary research to discover relevant information. However, AI-generated content must always be reviewed, validated, and finalized by a qualified human professional. AI shall not be used to make final decisions, provide clinical recommendations, or replace human judgment in any aspect of service delivery. And then we have one that's more tailored towards the AI overlay on our medical records, saying that proper use includes the ethical, secure, and effective use of technology, and that it will be used as a supportive technology tool that will not replace clinical judgment, analysis, or direct client interactions. That is a policy that folks sign off on, and essentially what we're saying is that all clinical decision making is still left to our clinicians and case managers; anybody who's working with another human being must take responsibility for any content generated or any information offered, and we cannot rely solely upon any information developed by artificial intelligence, or augmented intelligence, as we refer to our EMR overlay. I will say that it is helpful in some ways: transcription, meeting minutes, and even initial research. It is quick, it is robust, it does save time. You can edit documents for efficacy, for length, for whatever the requirements are. And one thing I would also say is that, given the regulatory environment, it also feels necessary to support our clinicians. Creating efficiencies in note-writing and documentation to address what we would call administrative burden is important, and any time saved is really given back to the people that we're working with: more face time and person-to-person interaction to focus on treatment goals. One thing I appreciate, and Brian brought this up, is asking what problem we are trying to solve. This is a little tangential, but I appreciate the six core values of social work: social justice, the dignity and worth of the person, the importance of human relationships. I would highlight that one. 
And this isn't necessarily related to the bill, but I would say that as we move forward as an organization, we are saying that the isolation and loneliness epidemic is related to our adoption and use of technology, and that the focus and priority of strengthening our communities and prioritizing human interactions is essential. I understand that this bill is about regulating AI as a tool, but I think technology adoption and use, not only for professional services but for leisure and recreation, is an important topic to cover, and something that, yes, we're using thoughtfully, but also pushing back on in terms of how we treat and diagnose folks, saying it's hard to connect all the dots. I can't remember the name of the person who testified, but all of the information gathered, and wearing glasses that are going to say if your pupils are dilated or your heart rate's up, or if you're lying or not: what problem are we actually trying to solve? So with hyper-detailed data gathering to solve an issue, which I would argue is judgment, I think we need to pause and think hard about how that is being implemented and what problem it's actually trying to solve. So what has been presented in eight fourteen, and I believe eight sixteen is going to be taken up as well, seems like a thoughtful approach and first step to balance people's liberty, dignity, and privacy without over-regulating something that is rapidly evolving day to day. And I don't know if this is the time to comment on eight sixteen as well, or if we're just keeping it to eight fourteen.

[Francis "Topper" McFaun (Vice Chair)]: You're going to get a shot at that after we get through this. So you're going to hang around for us.

[Kelsey Staffseff (Executive Director, Northeast Kingdom Human Services; Co‑President, Vermont Care Partners)]: Perfect.

[Francis "Topper" McFaun (Vice Chair)]: We thank you for that. Rockingham?

[Leslie Goldman (Member)]: Thank you, Kelsey. That's really interesting. I'm wondering if you might be willing to put your testimony in some form of writing so we can review it. Much appreciated.

[Kelsey Staffseff (Executive Director, Northeast Kingdom Human Services; Co‑President, Vermont Care Partners)]: Yes, will do.

[Francis "Topper" McFaun (Vice Chair)]: Brian?

[Brian Cina (Member)]: So you may have heard this with previous witnesses, but we're looking at an amendment to the bill to take a lot of the regulation and move it into a study, because there's not adequate time in two weeks for us to really make those decisions. What are your thoughts about that approach, considering the limited time we have and the gravity of these decisions? And the other question would be: what are your thoughts about our changes to the council and the assignment that we're giving them in this bill?

[Kelsey Staffseff (Executive Director, Northeast Kingdom Human Services; Co‑President, Vermont Care Partners)]: Yes, good question. The section I found most compelling about regulation was geared towards the professionals, mental health or healthcare professionals: the clinical judgment piece. I'm not quite sure if that was gonna be taken out, but it says that it is the responsibility of folks who provide therapy services to own the recommendations. I feel like, geez, it's so hard to tease out all of the legal ramifications of that, so I support a study. I also support it, and we've developed a policy saying it is our responsibility: we can't create a document with AI, just pass it along, and say, well, it was generated and we can't take responsibility for it. It is the individual responsibility of the healthcare professional, so that whatever gets put into our clinical documentation or is conveyed is owned by that professional. And so I do think that is an important piece of the legislation, to say that a human must take responsibility for clinical judgment and insights, and that any communication passed on from a professional, in our case a mental health professional, to a client or patient is the responsibility of the professional. That feels clear to me. From the advisory standpoint, I think the additions make sense. Again, as I mentioned at the beginning, input from people who use it matters. I think the benefits of the data, the benefits, are clear; it's harder to wrestle with the potential harms. And as I mentioned before, the expanding use of technology in day-to-day life, to me, just warrants much further research in general. AI is going to accelerate the benefits, I think, and has the potential to accelerate harms. So a further study, to come back next year with more robust legislation, I would also support.

[Francis "Topper" McFaun (Vice Chair)]: Okay, thank you. Any further questions now?

[Brian Cina (Member)]: I did have one last question about that. It's just that we had heard from UVM Medical Center that they have an internal group that is looking at their use of AI, and we heard from you that your agency has policies on AI. Do the designated agencies come together and share policies? Is there some central council? What other kind of work is being done, without us requiring it, to ensure safety and maximize the benefits and minimize the risks of the use of augmented and artificial intelligence technology in health and human services?

[Kelsey Staffseff (Executive Director, Northeast Kingdom Human Services; Co‑President, Vermont Care Partners)]: Yeah, Vermont Care Partners does a great job of sharing policies and what we're working on, talking about what's working and not working, what technology adoption we do have. We have a number of different committees that do that. We do have an operations committee that does pick up what tools we're using. And so we don't have a specific AI committee, but I do know that we have talked about that with some regularity, shared policies, and gone back and forth on technology use and how we are implementing it. So there is a network wide conversation happening about that.

[Francis "Topper" McFaun (Vice Chair)]: All right. Thank you very much. Thanks. You're on time, basically. Well done. All right, now what we're gonna do next is we're gonna switch over to eight sixteen. That's an act relating to the regulation of artificial intelligence when you're providing mental health services. Our first witness is Kelsey, who's already here, and then we have Jeremy Aderman. So Kelsey, why don't you finish? Thank you. Mine's on. And let's switch over to 816.

[Kelsey Staffseff (Executive Director, Northeast Kingdom Human Services; Co‑President, Vermont Care Partners)]: Great, thank you. I appreciate that. Again, for the record for 816, my name is Kelsey Staffseff, I'm the Executive Director of Northeast Kingdom Human Services and Co-President of Vermont Care Partners, which is the network of designated and specialized service agencies. So again, I think there's an overlap in the intention of eight fourteen and eight sixteen, which I appreciate. I think the purpose of the bill, that therapeutic judgment, clinical decision making, and therapeutic communication remain the responsibilities of qualified mental health professionals and are not delegated to artificial intelligence, is spot on. I think that mirrors what we've got in our policy. Respecting individual choice in selecting mental health services, including community, peer, and faith based options, is important. Then also consent: we do get consent before we use our augmented intelligence, which is about transcription and drafting of notes. And I appreciate that it allows the responsible use of artificial intelligence. So going through the bill, 816, again, I think it does a nice job of balancing individual liberty with the understanding that artificial intelligence and augmented intelligence are here, they're being used, and we should be paying attention to them, but it doesn't, in my opinion, over regulate the uses. A couple things that I would want to note that had been brought up: I do think it's worth considering, maybe from Ledge Council, defining both artificial intelligence and augmented intelligence. I do think those are sometimes used interchangeably, but are different. And I do think with large language models and then the potential onset of generative AI, those are different.
But again, I feel very much like a layperson when it comes to how that is created and what that means. But I think it's worth considering, for people with more expertise to consider, the difference between artificial intelligence and augmented intelligence, and the difference between LLMs and generative AI. I do see there's a generative artificial intelligence definition in here as well. Another thing that I noted was that therapeutic communication was listed in here, and I can't quite find it, but in one section it was listed as not preventing the documentation from being supported, saying that's a reasonable use; but in the prohibited usage, therapeutic communication was listed as both verbal and written. I don't have a suggestion, but I can work on that when submitting written testimony. I just noted that I think there were a couple of sections that were slightly in conflict in terms of the usage, and that prohibiting therapeutic communication from a written standpoint brought up a little confusion and concern about how the augmented intelligence we use supports the drafting of the initial note if needed. Our policy says you can't just submit that; you must review it and make sure it's what you discussed with your client in session or in your case management work, and you must edit it for appropriateness and then sign that it's your work that's been supported. And so the therapeutic communication, I believe, was about AI to a person, but the way it's written, I do worry a little bit about how that could be applied to clinical documentation, which is signed. So I will think on any draft language or feedback that I will provide to the committee. I haven't had a chance to thoroughly read it multiple times yet. Yes.

[Daisy Berbeco (Ranking Member)]: Thank you for catching the inconsistency around that. I did, and I think our next witness will probably discuss this a little bit, but we were talking about the need for reviewing and approving notes. So as you contemplate suggested language, consider whether just adding "reviewed and approved" at the end of, I think it's at the bottom of page four, would address it. Go ahead. Sorry. I just wanted to let you know that we were already on that.

[Kelsey Staffseff (Executive Director, Northeast Kingdom Human Services; Co‑President, Vermont Care Partners)]: That's really helpful. Thank you. Yeah. No. That's great. I think that was really what I had highlighted in that, but given some of the high profile issues using the technology, I do think it's helpful to provide this regulation, especially for professionals. One of the things, again, as we try to wrangle this large picture: I think there's a bigger conversation outside of this, again, the AI regulatory council, about how this is being used. I worry about chatbots, I worry about access to LLMs on the internet, I worry about how the use of technology with an AI overlay to collect data and target people specifically for selling things certainly can have a negative impact, especially on youth. I think about social media, I think about gambling, I think about the supercomputers in our pockets called smartphones. And so with the way this bill is tailored, I'm generally supportive, with a few of those tweaks that Daisy mentioned. But I appreciate that this committee is taking on a bigger look at this, and I think that there is a bigger conversation about technology and artificial intelligence to be had in terms of how it's impacting our community members, but also how it's intersecting with our social health, mental health, physical health, and how our professional services are equipped to handle that and what services we're providing. So that's a conversation for another day, but I do appreciate that the committee is taking artificial intelligence seriously and putting something forward to begin a framework for regulation.

[Francis "Topper" McFaun (Vice Chair)]: Thank you. Any other questions?

[Leslie Goldman (Member)]: I just have a question. I'm not sure I'm seeing where this conflict is, so can you point me to that, for a minute now or at another time?

[Kelsey Staffseff (Executive Director, Northeast Kingdom Human Services; Co‑President, Vermont Care Partners)]: I can point out one. At the bottom of page five it says administrative support means a task performed to assist a mental health professional in the professional's delivery of mental health services, such as scheduling, billing, and general logistics, but excluding therapeutic communication. And then if you go down to the definition of therapeutic communication, it means a written or spoken interaction intended to diagnose or treat any type of mental or behavioral health concern, provide ongoing recovery support, or provide any advice related to diagnosis, treatment, or recovery. And so some of the overlay of technology helps to draft a note, which is part of the augmented intelligence, and reading this at first glance, that feels like a written interaction that's intended, with the treatment note, to provide ongoing treatment or recovery. And so I worry a little bit. I think that the tool use is really helpful. I think our policy covers it, saying you can't just hit send after something is drafted; it must be edited and reviewed, and we can track what portion of a note is edited. Just reading therapeutic communication there, I think what was recommended earlier could add a little something there, but I just want to clarify therapeutic communication and how that's applied to how we're currently using our tools, which has been helpful, but is still guided and owned by a human.

[Francis "Topper" McFaun (Vice Chair)]: Thanks very much.

[Daisy Berbeco (Ranking Member)]: Incredibly helpful Kelsey, I appreciate you.

[Kelsey Staffseff (Executive Director, Northeast Kingdom Human Services; Co‑President, Vermont Care Partners)]: Always good to see y'all, thanks for having me. I gotta jump off but we'll see you later.

[Francis "Topper" McFaun (Vice Chair)]: Okay, thank you. All right, next. Did you have something to add? No, no. The next person we have is Jeremy Aderman. Did I get it?

[Jeremy Aderman (Senior Director, National Council for Mental Wellbeing)]: You got it, yes. Thanks so much. Great to be with you all. Appreciate you having me. Again, my name is Jeremy Aderman. I serve as a senior director at the National Council for Mental Wellbeing. The National Council is a national association representing over 3,200 mental health and substance use provider organizations across the country, primarily safety net behavioral health providers like Kelsey's organization, who is a member of ours, and others in the state of Vermont. My background is in community behavioral health. I served as a case manager for several years, then got my education as a social worker and served as a clinician and therapist for several years before joining the National Council. In my current role, I spend a lot of time thinking about how solutions, in particular technology and in particular artificial intelligence, can support the provision of quality behavioral health care for all people, how they can help mitigate some of the challenges and barriers to that kind of care, and how we need to ensure protections, like what you all are offering, for both consumers and providers. So I'll echo a lot of what Kelsey said. There is really strong text in this legislation to protect both clients, or patients, or consumers, and the provider community. It mirrors what we've seen in other states as well. If you look at Maryland, Maine, Illinois, Utah, California: a very similar approach of protecting but also not fully inhibiting innovation and how that innovation might ultimately impact the quality of behavioral health care and the outcomes that we can achieve through it. So kudos on that front. Where I would add one other consideration, as Kelsey did and as Daisy was just referencing: what we've seen in other states as well is that the key linchpin in a lot of this is that ultimately the provider is responsible.
The liability stays with the provider here, and that is the case with everything from documentation, as Kelsey was speaking to, but it can extend to other use cases as well. One consideration for this committee would be around the use of artificial intelligence to support therapeutic decisions and to help generate treatment plans. These are use cases that are already coming into existence and have a really impactful opportunity to ensure that the care being delivered is quality, to improve it. I think the bill suggests that artificial intelligence cannot do that. What I would recommend is, as they did in Illinois, adding in that clause Daisy recommended: that artificial intelligence cannot do that on its own; it must be reviewed and approved by a licensed professional. Ensuring that a licensed professional is reviewing and approving any therapeutic recommendation or treatment plan gives providers tools that will help them deliver better quality care, but ultimately ensures that the care being delivered is the decision of the provider itself, right? The provider has to sign off on that. So that's one consideration. I think at the bottom of page eight, in some of the restricted uses, you may be able to add that as an allowable use, again provided that the therapeutic recommendations or treatment plans must be reviewed and approved by a licensed professional. Otherwise, I think the legislation does a really great job of protecting without stifling. And the other consideration I might add before I pause is that this text does not really speak to chatbots. Kelsey mentioned chatbots, and I was just briefly reviewing the other text, I think it was eight fourteen, that does speak more directly to chatbots.
Chatbots are going to be a big issue here. As Kelsey was saying, there's a bigger conversation around how people, whether we like it or not, will be using chatbots for mental health support, and oftentimes those chatbots were not designed for that mental health support. So how do you protect people in that way? In Utah last year, the governor signed legislation that tries to offer some protections around what the chatbot developers, the technology companies, need to do to ensure some level of safety for the use of their tools as mental health support, and it looks like eight fourteen borrows a lot of language from that. I think it's really good language. It's really good language in defining what a mental health chatbot is, and it's good language in putting in place some requirements on how those tools need to be developed. But there's a bigger conversation around chatbots that might not be addressed in this text. So I'll pause there and see if there are any questions, but I wanted to offer those reflections.

[Francis "Topper" McFaun (Vice Chair)]: Okay, thank you. Any questions? Go ahead, Kai.

[Brian Cina (Member)]: It's sort of a general question, which is, we're talking about eight sixteen, but there are also six forty four and eight fourteen that have chatbot sections, and I'm just wondering if witnesses have compared them, and if not, you know? You just mentioned 814.

[Jeremy Aderman (Senior Director, National Council for Mental Wellbeing)]: 814 I've looked at, yep, as a comparison point, and again, there are components around chatbots that are productive and I think are strong language. But I think the language in eight sixteen, specifically around the permitted and prohibited use cases for artificial intelligence in community behavioral health settings, for behavioral health providers, is really strong language, and I would recommend, you know, use thereof.

[Brian Cina (Member)]: But did you look at six forty four also?

[Jeremy Aderman (Senior Director, National Council for Mental Wellbeing)]: Personally, did not. Happy to

[Brian Cina (Member)]: Yeah. That's why I just think that looking at the three would be useful at some point, to see what the strengths are. It would be great if we could look at the strengths of them all, or the intersection, just so we get the strongest possible chatbot bill.

[Francis "Topper" McFaun (Vice Chair)]: You did a comparison.

[Leslie Goldman (Member)]: I did, I appreciated that, thank you. A question: just with the question of chatbots, is that something the advisory councils could address? They could. Do we want to ask them to?

[Brian Cina (Member)]: In addition to passing a chatbot bill or in lieu of passing one?

[Leslie Goldman (Member)]: I don't know, no comment. Can be expressive. Yeah, just wondering if we want to be clear that that's part of their charge.

[Brian Cina (Member)]: I feel like we have enough time that we probably could with additional testimony make a decision on chatbots and let them focus on all the other stuff. But that's just my opinion.

[Leslie Goldman (Member)]: You're the expert, sorry.

[Brian Cina (Member)]: I'm not an expert.

[Leslie Goldman (Member)]: No, I mean, in this

[Francis "Topper" McFaun (Vice Chair)]: realm. But

[Brian Cina (Member)]: I think we're hearing from experts.

[Daisy Berbeco (Ranking Member)]: Jeremy, what are you seeing other states do with regard to protecting the general population from chatbots and also protecting the clinical setting like eight sixteen does? How are you seeing other states deal with that? Is it separate legislation? Give us some thoughts.

[Jeremy Aderman (Senior Director, National Council for Mental Wellbeing)]: Yeah. Primarily separate legislation, although that may be due to other dynamics around the capacity of those legislatures and when they feel ready to move on certain things. Most of the legislation to date as it relates to AI and behavioral health is more akin to eight sixteen, really thinking about the use cases within behavioral health providers and settings. Chatbot regulation, to the extent that it touches on mental health, is again largely Utah being the only state that has put something out and has even gotten something over the finish line. A lot of the movement we're seeing around AI chatbots and their intersection with mental health is more so on the developer side; fewer requirements, and more decisions they're making in how they choose to operate. So for example, we've seen some of these developers or technology companies say, you know, we are going to remind the person we are talking to that you are talking to a chatbot every x number of minutes or hours. You're seeing some of these developers think about the models that they're using and how sycophantic or agreeable they are, and the ways in which they are elevating risky conversations to the appropriate authorities or places. So the developers themselves are taking on some of that. It's less happening in state legislation, outside of, again, Utah, which has done something around chatbots. So not a comprehensive response, but hopefully helpful.

[Francis "Topper" McFaun (Vice Chair)]: Okay. Thank you so much for coming in. Not coming in, but

[Miles Latham (Director of Artificial Intelligence, Vermont Agency of Digital Services)]: Beaming in.

[Leslie Goldman (Member)]: Beaming in.

[Francis "Topper" McFaun (Vice Chair)]: Beaming

[Daisy Berbeco (Ranking Member)]: in. Good to see

[Jeremy Aderman (Senior Director, National Council for Mental Wellbeing)]: you, Jeremy. Thank you so much. Take care. Thanks for the time.

[Francis "Topper" McFaun (Vice Chair)]: Alright, we're all done for this morning. 1:00 this afternoon: a very easy bill, 585. Should be an interesting discussion. Have a nice lunch and relax.

[Brian Cina (Member)]: Before we end, can we take a minute just to check-in about these two bills?

[Francis "Topper" McFaun (Vice Chair)]: What do you mean by that?

[Brian Cina (Member)]: Like talk as a committee about where we're at, so we can plan the rest of the testimony. Can you

[Daisy Berbeco (Ranking Member)]: I think before we make any decisions on that, I'd want the chair to be here.

[Francis "Topper" McFaun (Vice Chair)]: Yeah.

[Daisy Berbeco (Ranking Member)]: But I'll connect with you about further testimony and stuff.

[Brian Cina (Member)]: When is the chair going to be here?

[Leslie Goldman (Member)]: Alright, she's supposed to get back later this afternoon.

[Francis "Topper" McFaun (Vice Chair)]: Okay, she might be here this afternoon, but if not tomorrow morning. And then, Brian, you can talk to her about that. Okay. I don't wanna push forward on these particular bills because we've had a discussion and we're at the point where she wanted us to be and we'll go from there. Okay. All right? Yes.

[Leslie Goldman (Member)]: I'm just curious how these three bills kind of interact. And I'm now reviewing your comparison, which is, as I said, really helpful. How do we understand the differences? How do we understand what's important for us to pass right now? I guess it's murky to me, maybe I'm the only one, but

[Francis "Topper" McFaun (Vice Chair)]: I think when the chair gets back, that's gonna be a discussion that we have. So I would rather hold that so that she can do it. I know that she's gonna do this.

[Brian Cina (Member)]: What I'm hearing from you is you wanna make sure we include our chair in any decisions moving forward.

[Leslie Goldman (Member)]: Not about a decision; it was about understanding. I'm not suggesting a decision. I just wanted to

[Francis "Topper" McFaun (Vice Chair)]: That will be explained. But I agree with Leslie. There's actually four bills here. We just don't understand whether each one's accomplishing something different, or whether we can have one bill or two bills. We're going to have two. Well, we say that. You can bring that up and talk about that.

[Daisy Berbeco (Ranking Member)]: Yeah, I think that's an important point to remember. We have two bills. So trying to figure out what the third and fourth bill are, that's not even on the table. We're taking up these two bills right now, and we don't have time to pass a third or a fourth bill. So we're doing these two bills right now.

[Leslie Goldman (Member)]: These two meaning 814 and 816? Yes. Okay.

[Daisy Berbeco (Ranking Member)]: We're not taking up the other two.

[Leslie Goldman (Member)]: So we're not taking up 644. That's what I was a little confused about. Because you did that comparison and there was a lot of overlap, so I just want to understand how to think about it.

[Daisy Berbeco (Ranking Member)]: Well, I did the walkthrough. That's the only reason I did the overlap, to help us decide which ones to take up, and then it was decided to take up these two. Got it. Thank you.

[Francis "Topper" McFaun (Vice Chair)]: That's helpful.

[Brian Cina (Member)]: Is Deb watching on Zoom? The reason I ask is because then the only committee member who really has missed this is the chair, and so once she's here, then I feel like that doesn't matter. The most inclusive approach is to talk further when everyone is present. She's really the only one missing. So knowing that now, I get why you wanna wait before we talk about what's next, even though the urgency I feel is because crossover is coming. But that urgency doesn't supersede the fact that there's a missing person who's got a significant role, so we can wait a little for that. That's right.

[Leslie Goldman (Member)]: So we could discard six forty four. Got it.

[Brian Cina (Member)]: But discard is a strong word. I would like to look at it, and look at its strengths. But we're going to wait for the chair to talk. So I

[Josiah Raiche (Agency of Digital Services)]: don't wanna

[Leslie Goldman (Member)]: That's what you're saying. As I read six forty four, I should look for strengths that we might wanna incorporate in one of the other two bills. Is that how we're thinking about it? That's not

[Daisy Berbeco (Ranking Member)]: how I'm thinking about it. We are taking up two bills. We're taking up eight fourteen and eight sixteen.

[Francis "Topper" McFaun (Vice Chair)]: Okay. Right now.

[Leslie Goldman (Member)]: Okay. And they have different ledge councils, so I'm just trying to, yeah, yeah, trying to get my head around it. Is it the intention of this

[Francis "Topper" McFaun (Vice Chair)]: committee to take up the next two bills when these two bills are done?

[Brian Cina (Member)]: It wouldn't make sense, because what we're doing is we're having the discussion we're supposed to be waiting for our chair to have, and I'm uncomfortable with that. I just wanna say that. Like Topper said repeatedly, we're gonna wait.

[Daisy Berbeco (Ranking Member)]: Yeah. And I'm reiterating what the chair has told Topper and I, which is we're just doing these two bills. I think we need to remember the timing. We have this week, and we have the week we're back.

[Leslie Goldman (Member)]: So I think that will play a lot.

[Brian Cina (Member)]: The other two bills couldn't be done if we, well, we should wait for the chair. The issue is that there are three bills doing the same thing in different ways, and we already decided we're not doing one of them.

[Leslie Goldman (Member)]: So what's the fourth bill?

[Brian Cina (Member)]: The fourth bill has to do with utilization review and we're not even looking at that.

[Leslie Goldman (Member)]: So don't talk about four bills, because that's what got me confused.

[Daisy Berbeco (Ranking Member)]: All AI is the same. Okay. Ready to offer?

[Francis "Topper" McFaun (Vice Chair)]: Ready to go out for lunch.