[Rep. Alyssa Black (Chair)]: Yes. We're almost there. Hi, welcome back from our very, very, very short break. And we have two witnesses in with us today on Zoom, and we thank them very much for joining us, both from the NeuroRights Foundation, and we're really excited to hear about something that, I think as we were doing a walkthrough of the bill, we were sort of not quite sure what we were reading. So thank you, Sean. Thank you for joining us.
[Dr. Sean Pazowski (Medical Director, NeuroRights Foundation)]: Yes, thank you so much. Would you like me to just go ahead?
[Rep. Alyssa Black (Chair)]: Yes.
[Dr. Sean Pazowski (Medical Director, NeuroRights Foundation)]: All right. And you can hear me okay?
[Rep. Alyssa Black (Chair)]: Hear you very well. Thank you.
[Dr. Sean Pazowski (Medical Director, NeuroRights Foundation)]: All right. Well, thank you. Thank you, Chair Black, Representative Singh, members of the committee. Sean Pazowski, and I'm a practicing neurologist and medical director at the NeuroRights Foundation, which is devoted to promoting innovation and ensuring the safe and ethical development of neurotechnology. And I first just wanna thank you for the opportunity to testify and for your leadership on this bill, eight fourteen, and for the seriousness with which you are approaching artificial intelligence in health care and human services. The stated intent of the bill to protect human rights, promote equity, increase transparency, and prevent harm while maximizing the benefits of AI reflects a balanced and thoughtful approach. And that balance is exactly what this moment requires. I'd like to focus briefly on two areas, the neurological rights framework and the provisions related to AI and mental health. First, the creation of chapter 42c on neurological rights is historic. The recognition that each individual has rights to mental and neural data privacy and protection from unauthorized neurotechnological manipulation places Vermont at the forefront of protecting the integrity of the human mind. The bill's definition of neural data, information generated by measuring activity of the central or peripheral nervous systems that can be processed by a device, is scientifically sound and appropriately targeted. Importantly, neural data is not simply another category of biometric identifier, nor is it what some have described as cognitive biometric data. As reflected in the American Medical Association policy on neurotechnologies and neural data, neural data refers specifically to signals derived from direct measurement of the nervous system. It is physiologic data distinct from behavioral inferences, consumer profiling, or traditional biometric markers such as fingerprints or facial recognition.
That distinction matters. Conflating neural data with broader biometric or behavioral data risks either over-regulating ordinary digital activity or under-protecting uniquely sensitive brain-derived signals. In my clinical practice as a neurologist, I care for patients with epilepsy, depression, traumatic brain injury, and neurodegenerative diseases. Increasingly, devices and platforms can measure brain signals outside of traditional medical settings. When those signals are combined with powerful AI systems, they can reveal patterns about mood, cognitive vulnerability, and even emerging psychiatric risk. For a patient struggling with severe depression, neural data may reflect physiologic markers associated with suicidality before a crisis occurs. For a family navigating early dementia, it may reveal subtle changes in neural function before functional decline is obvious. These technologies hold tremendous promise. But if neural data is misused, sold, or manipulated without meaningful consent, the consequences are not merely commercial. They affect autonomy, dignity, and mental health itself. The bill's requirements around written informed consent, limitations on the collection and sharing of neural data, and the explicit prohibition on consciousness bypass without specific consent are especially important. The recognition that consent obtained through a consciousness bypass is not informed consent is a profound and necessary safeguard. It acknowledges that the brain is not simply another data source. It is the biological substrate of personhood. Second, the provisions addressing generative AI and mental health chatbots are timely and essential. The requirement that patients be notified when generative AI is used in patient communications protects trust. The clear disclosure that a mental health chatbot is artificial intelligence and not human protects vulnerable users from confusion or false assumptions about the nature of care.
As someone who works with patients and families in crisis, as well as an uncle to two young nieces navigating their own adolescent and teenage mental health journeys, I can tell you, when someone is experiencing acute mental distress, clarity and trust are lifesaving. Guardrails around advertising, data sharing, and safety protocols in mental health chatbots are not anti-innovation. They are pro-safety. If there's an opportunity for refinement, I would suggest explicitly aligning the statutory definitions of neurotechnology with the American Medical Association's policy language to ensure consistency with national medical standards and to future-proof the statute as technologies evolve. I'd be happy to provide suggested language that reinforces the distinction between neurophysiologic signals and broader biometric and behavioral data categories. What makes this legislation unique is not simply that it regulates AI; many jurisdictions are attempting that. What is unique is that it recognizes the human brain as deserving of special protections in law. By acting early, Vermont can set a precedent for other states and help shape national norms around neural data, AI, and mental health. If done thoughtfully, this framework will not slow responsible innovation; it will strengthen public trust in it. In the realm of mental health and neurological disease, trust is inseparable from care. In that sense, this effort has the potential not only to shape policy, but to save lives. I thank you so much again for the opportunity to testify, and I'm happy to take any questions.
[Rep. Alyssa Black (Chair)]: Thank you. Any questions? Yes, go ahead.
[Rep. Brian Cina (Member)]: Hey, I'm wondering if you could share with the committee some examples of neurotechnology that's currently in use that is collecting neural data, like for example, the headbands?
[Dr. Sean Pazowski (Medical Director, NeuroRights Foundation)]: Oh, yeah. Yeah, there's a company called Muse, which is one of the oldest companies marketing these headbands. Most of the products come in the form of either a headband, earbuds, or headphones that have sensors that collect neural data. So you can just imagine a headband that goes around here with about seven sensors collecting brainwaves. There's a product that has sensors, believe it or not, in the actual earphones collecting neural data. Over here in my jacket, I've actually got a wristband that Meta, the company Meta, just launched. If you can imagine these AR/VR glasses that they have, where you can control your AR/VR glasses just using simple hand signals. Nike just released, believe it or not, a neurotech shoe a couple weeks ago. So you can see that this is starting to really take off in terms of consumer penetrance in the market, all kind of starting with these headbands, earbuds, earphones. That's kind of how these products are arriving into the consumer space.
[Rep. Brian Cina (Member)]: Can you say a little bit about the potential for these products in healthcare? It sounds like currently they're being used in the wellness market, but not in formal medical interventions. Is that correct? What do you see the potential being in terms of medical? Can you give some examples of ways they might be used in medical or healthcare interventions?
[Dr. Sean Pazowski (Medical Director, NeuroRights Foundation)]: Oh, yeah. The big revolution here is that to have an EEG in the past, you used to have to come into the hospital, in a very uncomfortable setting. It would take a long time. You would have all this goop put all over your head and these electrodes hooked up; it would take thirty or forty-five minutes. Very uncomfortable. You would get, you know, great signal, great data. But that's how it was for basically the first ninety years of EEG. That's how it was done. The revolution has really been these consumer products, where now, you know, imagine you can put on this headband and meditate every day. Well, what's the difference there? It's that you're getting data every single day that you're meditating or measuring your mood. So the longitudinal nature of this data collection is really the revolution here, which is allowing us to diagnose conditions like depression, anxiety, believe it or not, even schizophrenia and obsessive compulsive disorder, because of this longitudinal collection of data. I'll give you another example. One of the studies that I actually got introduced to this technology with was the management of intractable epilepsy at home with patients who, you know, had several seizures a day. And as you could imagine, doing an EEG on them every single day is just not practical. But with the use of this consumer headset, I was able to do a study to put this headset on these patients and every day collect data and tease out patterns in the data that would allow us to titrate medicines in a more logical way, using neural data in order to preempt and preclude these seizures. That's one medical use. But I see this going much further in the very near future. Apple has a patent on earbuds that have an EEG sensor in it. You know, as you can imagine, a hundred million people every day wear AirPods.
And so anybody with epilepsy, for example, could put in a simple pair of AirPods and have kind of a pre-seizure detection mechanism, sort of like atrial fibrillation in the Apple Watch right now. You can wear the Apple Watch and it can detect atrial fibrillation. I see that coming as far as seizure disorders in the very near future, and over a million people every year suffer from epilepsy. So those are just a few examples.
[Rep. Francis "Topper" McFaun (Vice Chair)]: No, you can have faith.
[Rep. Brian Cina (Member)]: I think we're wondering, like, if you could say more about how the data, I appreciate you explaining how these technologies can help people. How can the data collected be sold or like how Oh yes. Commercialized. Commercialized, yes.
[Dr. Sean Pazowski (Medical Director, NeuroRights Foundation)]: Yes. So the reason that this is a problem is because, as you can imagine, that study that I did in the hospital or with my clinic patients is all protected by HIPAA. These companies, on the other hand, don't have any of that; there's essentially no regulation. In fact, some people have called this the Wild West of neural data, where there are simply no guardrails. There are no regulations on what companies can or can't do with it. Our foundation, the NeuroRights Foundation, released a report about two years ago that examined the privacy practices of 30 companies collecting neural data. And as you could imagine, they really fell short of standards for both international human rights and just data protection practices in general. Right now, it's just like any other, you know, click on the Internet that can be used for any purpose: manipulation, interrogation, advertising, discrimination. All of these things mean that neural data, a very sensitive signal, can be sold, and it can be used in any other way that data brokerage is used today, without any regulation. That's what we see as this very big problem. You know, your cell phone number, your bank account, your address, these are all protected; most states have a privacy act that treats those types of information as sensitive. Neural data is not currently treated that way under current statute, and that's what we think needs to change.
[Rep. Brian Cina (Member)]: Just to get into the more dystopian aspect of this, can you say a bit about the risk to autonomy? Without protection, how might these devices be used by corporations or others to affect our decision making or influence our personalities or our behaviors without our permission?
[Dr. Sean Pazowski (Medical Director, NeuroRights Foundation)]: Yeah. So you could just imagine this as sort of like what's already happening right now with social media, but just on steroids. You know, once you've gotten somebody's neural data, you've got it. You've got everything about what's going on in that person's mind. And so you can imagine a future where you're wearing a pair of earbuds and you're looking at your phone, you're scrolling on social media, you're scrolling on the news. Not only does it know what you're looking at, now it knows how interested you are in that, and it could tailor the algorithms to provide you with more targeted advertising, to provide you with more material to manipulate you to buy things, or to provide you recommendations on certain diagnoses, which I see as very dystopian. Like, if my earbuds know that I have anxiety or depression and I start getting advertisements for medications or things like that, I'd become more depressed, I think, or more anxious, knowing that that company knew that about me. That's kind of where this could drift towards a dystopia. You know, there are other examples about people being monitored at work. There's something called neurofeedback, which, believe it or not, can actually tailor people's preferences without their knowledge, like make them prefer certain things more than others, using neural data. And so this is the future that we really need to get out ahead of. And I think we have an opportunity, and I really think that Vermont can lead the way on this. So I'm optimistic. I'm trying not to be, you know, too dystopian about it, but it could really get there really fast, to Representative Cina's point, if we don't act now.
[Rep. Brian Cina (Member)]: It's happening quickly. Right? Like, these products are rolling out with greater frequency.
[Dr. Sean Pazowski (Medical Director, NeuroRights Foundation)]: Yes. You know, Meta became the first really big tech company to release this wristband to control the AR/VR glasses just a couple weeks ago, and Apple is soon gonna follow with the AirPods. Neural data, brain signals, are just gonna be a part of daily life here in the next couple years. And, you know, I want to see this technology flourish. I wanna see humanity flourish because of it, and it can. But I don't think that we can do that unless we have the necessary guardrails and safeguards to be sure that people are confident using this technology without the risks that Rep Cina is referring to.
[Rep. Brian Cina (Member)]: Thanks. It's something, to your point, I've been talking more about headbands with people just because we're talking about this. My, what is it called, feed in my social media is all advertisements for the different headbands now. So they're trying to get me, and the thing is, it's working, it's making me want one, because they're telling me that I can use the headbands to enhance my cognitive performance, that I can use the headbands to treat insomnia. So the things I'm talking about with my clients, it's listening. Somehow social media is detecting these conversations and interpreting it as I need it. So it's like the ads are actually marketing to me and it's only over our communication. So imagine when it's actually tracking our neural data. And the other question that generated for me is, how close are we to these devices and algorithms being able to actually record our thoughts and memories and dreams and actually store them or share them without our consent?
[Dr. Sean Pazowski (Medical Director, NeuroRights Foundation)]: Yeah. So that's a little bit trickier question, just because, you know, it's sort of one of those what's possible today versus what we expect to be possible in the next couple years. I'll just give one example. Two years ago, in a lab in Australia, they were able to detect the literal words in somebody's brain with 40% accuracy just using kind of a headband skull cap. So, you know, if you could imagine 40% accuracy, the way the algorithms and technology advance, the more data, the more accuracy in the algorithms, I could see a near future. This isn't, you know, hype or anything. I could see a near future where Apple is using this technology to do actual thought to text. Like, you just put in your earbuds and the words show up on the phone within the next, say, three to five years.
[Rep. Alyssa Black (Chair)]: Did
[Rep. Brian Cina (Member)]: you hear that?
[Dr. Sean Pazowski (Medical Director, NeuroRights Foundation)]: Oh, sorry.
[Rep. Brian Cina (Member)]: Oh, I just
[Rep. Alyssa Black (Chair)]: said, oh, no, I would be sending some terrible, terrible text messages.
[Rep. Brian Cina (Member)]: Yeah. The questions I've been asking a
[Rep. Alyssa Black (Chair)]: lot to I'm crowded hopeful. I am really grateful that you're asking questions because you're asking all these things that I keep thinking about in a more intelligent way.
[Rep. Francis "Topper" McFaun (Vice Chair)]: I'm very ignorant on this subject and I ain't afraid to admit it. But what you're saying is, if maybe AI can determine what words I'm going to say, can they tell you what words they want to tell you?
[Rep. Brian Cina (Member)]: They want me to say that. Safeguard.
[Rep. Alyssa Black (Chair)]: Yeah. To say.
[Rep. Francis "Topper" McFaun (Vice Chair)]: Where's the safeguard to that?
[Dr. Sean Pazowski (Medical Director, NeuroRights Foundation)]: Yeah. Exactly. That's why we need these robust consent mechanisms. We need people to know what they're getting into. We need people to know what the capabilities are, and we need to give people the control so that we don't get into a future, to your point, where we're just kind of like these AI automatons, just following whatever some pattern or algorithm wants us to do or say. We need to keep that individuality and independence in the mind of the individual. And I feel like if we do that, then that will leave a future where, with the augmentation of cognition, if you want to use the products and you want the benefits, then that's what we should be enjoying. But to your point, yeah, we don't want a future where AI is essentially feeding us things. We need to keep that, you know, within the realm of the individual.
[Rep. Brian Cina (Member)]: And so without this legislation, we're on the track for what Penny is afraid of, what Representative Demar is afraid of. If we pass this, it would protect us from them telling us what to do. But if we don't pass this, or if the federal government doesn't take action, then there's nothing stopping them from that.
[Dr. Sean Pazowski (Medical Director, NeuroRights Foundation)]: Exactly. Exactly. I think this legislation could be historic in the sense that it's the first to address neural data and neural rights in the context of mental health and AI and these chatbots. And I think combining those two is a very powerful, and it's never been done before. So, I mean, you guys could really be sort of leading the way, leading the nation on not only neural data, but its application in mental health and AI. And I see that as a very powerful combination.
[Rep. Alyssa Black (Chair)]: I keep thinking about, I'll bet this is happening right now. You mentioned the watch for someone with AFib. If that data is being collected and then that data is being sold, it could be sold to, say, life insurance companies, who can then deny coverage or drop coverage.
[Dr. Sean Pazowski (Medical Director, NeuroRights Foundation)]: Yes, discrimination. Sorry, go ahead.
[Rep. Alyssa Black (Chair)]: No, I was just, we tried to do a genetics bill, I remember, a couple of years ago.
[Rep. Francis "Topper" McFaun (Vice Chair)]: I just don't know of any law that will give us a 100% safeguard for what we're trying to accomplish.
[Rep. Alyssa Black (Chair)]: Well, it wouldn't, I mean there is no 100%, but it puts it in law.
[Rep. Brian Cina (Member)]: This would be a first step. Yeah,
[Rep. Alyssa Black (Chair)]: the disclosure. This bill is really about the disclosure, correct? Also prohibits Consent and disclosure and prohibits unless we have
[Rep. Brian Cina (Member)]: Yeah, we can go through the language again with Jen, yeah, to be more clear. But, like, it puts in Vermont law human rights that aren't in any law right now: to protect our privacy, our freedom of thought, our cognitive liberty, the ability to change our decisions about this technology and what it means, to protect us from them messing with our minds. That's my simple way of putting it, because the words here are bigger and more complex, but basically it's saying people can't affect our mind without us letting them. That's not banned right now; they can
[Rep. Francis "Topper" McFaun (Vice Chair)]: do it. I think they will be able to do it anyway.
[Rep. Brian Cina (Member)]: They might find other ways around it, but at least we're putting something in place. I do understand that. And this technology, I think we heard Sean say, this technology is revolutionary in terms of what it's gonna mean for human quality of life improving and health. In fact, it could be used to improve health outcomes. If you look at the intent of the bill, it says that. But if we don't protect people, it may be used to discriminate. It may be used to influence people, to make certain people more money at their expense, or to generate certain outcomes. And so it may not be perfect, but if we take a step forward, what usually happens is, like, Vermont was the first state to create a division of artificial intelligence in the state. And then within a few years, 13 states had it. And now it's called the Vermont model. And it's a way for the government to be more ethical. And we're gonna hear from our Vermont people at some point as witnesses. So this would be another way Vermont can set an example. If we do this and a few other states do it, then you may see many states take action in this direction, and we build on each other's success. So that's my argument for why I think it's worth doing something rather than nothing, even if it's not perfect.
[Rep. Alyssa Black (Chair)]: Sean, thank you so much for joining us.
[Rep. Brian Cina (Member)]: Can I make a request of him? Yes. Sean, just between now and next week, if you have a minute, you mentioned two things that I think would be helpful. You said you could provide written suggestions for improvements to the bill. If you could send those to Tasha, we would review them. You said that you did a report on data protection. If that's public and you could share that with us, I'd like to add that to our record. And if you have any suggestions for additional witnesses we should hear from on this topic, please let us know and we'll try to make time for them.
[Dr. Sean Pazowski (Medical Director, NeuroRights Foundation)]: Yes, absolutely. I will get all those things to you. Okay. And yes, I'm just so grateful for what you're doing, and yeah, I think it's better to get out ahead of things rather than try to claw it back. And I think Vermont could really lead the way on this. So thank you so much.
[Rep. Alyssa Black (Chair)]: Thank you. Thank you for joining us. We have Ashley Collins with us also, from the NeuroRights Foundation. Hi, Ashley. How are you?
[Ashley Collins (Legal Advisor, NeuroRights Foundation)]: Hello. Good afternoon.
[Rep. Francis "Topper" McFaun (Vice Chair)]: Hi.
[Ashley Collins (Legal Advisor, NeuroRights Foundation)]: Great, so I can just launch into it then.
[Rep. Alyssa Black (Chair)]: Yeah, absolutely.
[Ashley Collins (Legal Advisor, NeuroRights Foundation)]: Good afternoon, members of the House Committee on Healthcare. Thank you for the opportunity to speak. As was mentioned, my name is Ashley Collins and I am a human rights lawyer based in Washington, DC. I serve as a legal advisor to the NeuroRights Foundation, which, as you know, is a nonprofit organization that has the mission of promoting innovation, protecting human rights, and ensuring the ethical development of neurotechnology. Over the last five years, the NeuroRights Foundation has engaged stakeholders such as the United Nations, governments around the world, scientists, the tech industry, and lawmakers in an expansive effort to establish guardrails capable of safeguarding neurotechnology users. My remarks today will focus on the legal and regulatory framework in general that will need to be in place to ensure that the dissemination of neurotechnologies is undertaken in a responsible manner, as well as some of the efforts to protect neural data that have been advanced in the United States and around the world. As a starting point, in order to understand the risks associated with consumer neurotechnology and the importance of robust protections against the abuse and misuse of neurotechnology, it's helpful to examine the regulatory landscape in which these devices operate. There is a significant difference between the regulation of invasive neurotechnologies on one hand, which are implanted in the brain through surgery, which must be licensed medical devices, and whose gathered neural data is generally protected under health data privacy laws, as my colleague was alluding to, and that of wearable neurotechnologies on the other hand, which are subject to little or no regulation, even though such devices are also medical grade. There are already more than 30 consumer neurotechnologies on the open market for purchase today.
In terms of what the future could bring, as my colleague discussed, researchers have already used wearable neurotechnology devices combined with generative AI to decode thoughts to text with increasing accuracy. It is possible that these devices and others like them will be improved in the next few years and could even be used in consumer devices. As my colleague mentioned, Apple has already applied for a patent for biosensors to monitor brain activity for its next generation AirPods, and these could be a few years from being deployed on the market. Thus, we are talking about questions of science and not science fiction with these technologies. The gaps in regulation are concerning, then, given the extreme sensitivity of neural data. According to the American Medical Association, neural data refers to information that is obtained by measuring the activity of a person's central or peripheral nervous system through the use of neural technologies, but neural data does not include data inferred from non-neural information. Neural data is capable of revealing intimate information about consumers, including information about individual mental states, emotions, and neural processing. The sensitivity of neural data and the imperative of its protection heighten the privacy risks posed to neurotechnology users. Since every human brain is unique, neural data is uniquely specific to the individual from whom it was sourced. Because neural data contains distinctive information about the structure and functioning of individual brains and nervous systems, it always contains sensitive information that can link an identifiable individual with their data. For example, neurotechnology devices with sufficient resolution currently have the capability to reveal health related information regarding mental and neurological diseases and to decode about a dozen different mental states, for example, whether a person is angry or sad.
This is deeply intimate information to entrust companies with. Although it is largely unprotected by regulations, it is just as sensitive as protected medical data. Without protections in place, it will be entirely up to neurotechnology companies to decide what they do and do not do with their customers' sensitive neural data. As my colleague mentioned, in 2024 the NeuroRights Foundation published a landmark study in which we reviewed the user and privacy agreements of 30 consumer neurotechnology companies and benchmarked them across a half dozen global privacy standards. The study found that 29 of the 30 companies failed to adequately protect consumers' neural data. All those companies exercise significant control over the collection, storage, retention, and repurposing of neural data. Concerningly, only one company explicitly states in its policies that it will not share data with third parties, and only three of the 30 companies state that they will not sell data to third parties. In other words, all but one of the companies exert near total control over consumers' neural data. They can use it as they please, including selling it to a wide range of actors. In 2021, Chile became the first country in the world to protect neural data through the adoption of a constitutional amendment that protects mental integrity. In 2023, the Brazilian state of Rio Grande do Sul incorporated neural rights into its state constitution, and there is a bill in the Brazilian National Congress that proposes to amend the constitution to protect brain activity and data. Similar kinds of efforts are underway in Mexico and Spain, among other countries. In the United States, four states, California, Colorado, Montana, and Connecticut, have taken important steps to protect consumers who purchase and use neurotechnologies.
For example, in California, an amendment to the state's consumer data privacy law, the California Consumer Privacy Act, defined neural data and protected it as sensitive personal information. This allows users of these technologies to request, delete, correct, and limit the data that neurotechnology companies collect from them, and they can opt out from companies selling or sharing their data. In short, companies in California will now have to treat neural data with sensitivity so that it does not enable undesired disclosures of information or unwarranted violations of privacy. And while the exact approaches of US states have differed, legislation has generally followed this format of defining neural data and then extending the protections of state consumer data privacy laws to protect it as sensitive data or sensitive personal information, depending on the specific language used in the statute. At the international level, UN entities have also urged action and begun to provide recommendations on what can and should be done to prevent the misuse and abuse of neurotechnology. For example, in October 2022, the UN Human Rights Council adopted decision 51/3 on neurotechnology and human rights and requested that the Human Rights Council Advisory Committee prepare a study on the impact, opportunities, and challenges of neurotechnology with regard to the promotion and protection of all human rights. In August 2024, the UN Human Rights Council Advisory Committee published this study, underscoring the significant concerns around mental privacy presented by neurotechnology, as well as the threats to human dignity, autonomy, and integrity. The advisory committee recommended that UN treaty bodies draft new general comments to clarify and strengthen human rights protections.
And additionally, in January 2025, the UN Special Rapporteur on the Right to Privacy issued a report where she highlighted that the use and processing of neural data must be governed by sound principles that protect individuals against risks such as invasion of mental privacy. She further noted that any development or use of neurotechnology shall be undertaken for the purpose of contributing to the right of every person to enjoy a dignified life and the benefits of scientific and technological progress, while respecting, inter alia, rights related to privacy and the proper processing of personal data. Thus, the challenges and opportunities presented by the rapid development of neurotechnologies, as the committee was stating earlier, require protections in the form of laws and policies at the international, national, and state level. Based on the NeuroRights Foundation's experience engaging on these issues with stakeholders around the world, we believe that efforts to establish legal and regulatory frameworks to protect citizens from the potential misuse or abuse of neurotechnologies, especially in unregulated consumer products, must be rapidly accelerated. There are a few key issues that should be addressed in these efforts. In 2017, the Morningside Group, a group of researchers, clinicians, engineers, and bioethicists, proposed that the concerns raised by neurotechnology could be addressed through five neuro rights, by applying rights that are already protected or that could be further interpreted from existing international and domestic law. The five neuro rights include: one, the right to mental privacy, to protect the inner workings of one's brain from disclosure. Two, the right to identity, or the ability to protect one's mental integrity. Three, the right to agency, or freedom of thought and free will. Four, the right to fair access to mental augmentation. And five, the right to non-discrimination in the development and application of neurotechnologies.
Rather than proposing new human rights, the central focus of neurorights is to further interpret existing human rights and improve the enforcement of international and domestic law. This means that concepts such as cognitive liberty, which is not a recognized human right under any treaty in the world and for which there is no consensus definition of the term, should not, in our view, be part of efforts to protect neural data. The protections that that particular term aims to encapsulate are already found in the existing rights to privacy, freedom of thought and conscience, and freedom of opinion and expression. Thus, the strongest foundation for protections can be found in existing law, and steps taken at all levels would need to take these elements into consideration. Laws and policies around the world, including in Vermont with the current bill, need to adapt to address the serious human rights concerns raised by the dissemination of neurotechnology, including in relation to the issues of privacy, surveillance, fair access, algorithmic bias, and safety. While advances in neurotechnology hold immense promise for deepening our understanding of how the brain and nervous system function, they also create significant risks. As we consider the development and implementation of neurotechnologies in the United States, we should do so with an awareness of these promises and perils and a commitment to advancing common-sense steps to protect all patients and consumers. Thank you very much. And yes, I'm open for any questions you may have.
[Rep. Alyssa Black (Chair)]: Thank you so much. Brian?
[Rep. Brian Cina (Member)]: I'm just going to start because you just said something that feels helpful. I mean, it was all useful and beneficial to hear this, but this one particular point stood out. What I was hearing was what might be a suggested amendment to the language we're looking at. You mentioned that there were five identified rights, and then you talked about how cognitive liberty was technically covered by others. Would you be willing, you or whoever at the NeuroRights Foundation it makes sense to submit this, to submit some formal suggested amendments to the bill? I think Sean sent me something at some point, me as an individual, after the bill was introduced, so it was too late for us to incorporate that in the drafting. If there's something updated that you could send, it would be very useful to see. Because if the way that we are framing these rights isn't considered the clearest expression of our intent, I would want us to look at that and amend them. Because those five things you said made a lot of sense.
[Ashley Collins (Legal Advisor, NeuroRights Foundation)]: Yeah, certainly. As to who it would be at the NeuroRights Foundation, it would be me making the suggestions for alterations or amendments to the language. So I certainly can propose alterations on that front. As for any other small refinements in the language, I'll go through the bill and see; at least as far as it relates to the neurotechnology piece, we can certainly provide some suggested changes.
[Rep. Brian Cina (Member)]: It's very appreciated. And can you remind us again of your training, what your background is?
[Ashley Collins (Legal Advisor, NeuroRights Foundation)]: Yes. So I am a lawyer. I primarily have a background in international human rights law, and that is the angle from which I approach these issues. I'm relatively newer to the foundation, but I do serve as a legal advisor, so a lot of the efforts that we have undertaken in terms of engaging with different state legislatures that are considering bills pertaining to neurotechnology are very much projects that I am involved with.
[Rep. Brian Cina (Member)]: Thank you. I just thought it's important that people know that because for the record, we heard from a scientist and we're hearing from a lawyer. So we're hearing from two highly trained people with a lot of experience on this topic. And so I think it's important to recognize your expertise. So thank you.
[Rep. Alyssa Black (Chair)]: Thank you. Topper, do you have a question?
[Rep. Francis "Topper" McFaun (Vice Chair)]: Yes, thanks for coming in. Would you send us a copy of your testimony?
[Ashley Collins (Legal Advisor, NeuroRights Foundation)]: Yes, certainly.
[Rep. Alyssa Black (Chair)]: That would be great. Yeah, send that to Tasha, our committee assistant, and we'll make sure to get that posted. And also, we like to refer back to
[Rep. Daisy Berbeco (Ranking Member)]: Daisy. Ashley, thank you so much for sharing your expertise with us. I'm finding this topic very difficult. My background is in mental health policy and advocacy, and my training is in completely unrelated fields. So I wonder if you can make any sort of suggestions for how a committee like ours of people who are not necessarily who are finding entry into this topic as struggle. How can we grapple with doing this topic justice and doing also careful, good work?
[Ashley Collins (Legal Advisor, NeuroRights Foundation)]: Yeah. I think that there are a lot of different entry points. I think that one of those is relying on experts. My colleague Sean, of course, has a great amount of expertise on these matters. I come at this from the legal angle, as was discussed earlier. I think that gathering the perspectives of those who work on these topics is helpful, because a lot of what we do at the NeuroRights Foundation is try to distill these incredibly complex concepts into understandable and intelligible suggestions and recommendations for lawmakers who are trying to contend with such complex issues. So I think that having us provide testimony, speaking with us, getting our input, and reading the reports that we put out is already a great step. Other suggestions are to look at the models that have been set. That was the reason I went through some of the efforts that have been undertaken around the world as well as in the United States: to understand all of the different ways that this topic can be approached, because there isn't one way. You know, there have been countries that have approached this with a constitutional amendment. There have been other states here in the US that have focused on consumer data privacy, not coming necessarily from a health care angle. So there are a variety of ways to engage with the topic. I think that understanding the broader universe of the efforts to introduce regulations and guardrails around neurotechnologies is one step to deepening the understanding of these issues.
[Rep. Alyssa Black (Chair)]: Thank you. Go ahead, Brian.
[Rep. Brian Cina (Member)]: Yeah, I'd ask this of Sean as well: if you come up with any ideas for additional witnesses as you review the bill, we're open to suggestions. If there's a section, for example, where you think there's someone we might want to hear from, let us know. Not that your input isn't enough, but what I've found is that it's good to have more than one witness to confirm things. And for anyone watching: if anyone opposes this stuff, we want to hear from you too. If there's anyone out there who believes that this action is going to cause harm, we want to hear from you too. So that's an open invitation for anyone else listening. But yeah, we're just trying to have a robust witness list. So any ideas on either front would be welcome.
[Ashley Collins (Legal Advisor, NeuroRights Foundation)]: Yeah, absolutely. We will coordinate with Sean. I also took note of the requests that were made of him in terms of providing the reports that we had prepared, including the 2024 report on consumer devices. So we'll make sure to send all of those to you in a package and share that with the committee.
[Rep. Alyssa Black (Chair)]: Thank you. Thank you so much to both of you for your expertise and also for making it feel palatable for folks like us who aren't like Representative Cina and have not really been exposed to thinking about this sort of thing. So we really do appreciate that.
[Ashley Collins (Legal Advisor, NeuroRights Foundation)]: Certainly. Thank you for the opportunity, and thank you for your great questions. We really appreciate that you're so engaged on such an important topic.
[Rep. Alyssa Black (Chair)]: Great, thank you. All right, I think we're done for the day. I keep glancing out the window.
[Rep. Brian Cina (Member)]: It just started snowing.
[Rep. Alyssa Black (Chair)]: I think I just saw my first snowflake.
[Rep. Brian Cina (Member)]: Yeah, it just started.
[Rep. Alyssa Black (Chair)]: So I think we're going to end for the day and everybody drive home safely. Thank you everyone.