
[Michael Marcotte (Chair)]: Okay. Great. Good morning, everyone. This is the Vermont House Committee on Commerce and Economic Development. Again, it's Wednesday, 02/04/2026 at 10:45 in the morning. So we're back to continue our educational hearing on the pillars of privacy. And so we have with us a number of people that will be testifying this morning. Who is first? Ariel. Ariel?

[Ariel Garcia]: Yes.

[Michael Marcotte (Chair)]: Good morning. Thank you for joining us.

[Ariel Garcia]: Good morning. Are you able to see my screen share?

[Monique Priestley (Clerk)]: Yes.

[Ariel Garcia]: Wonderful. Chair Marcotte, Chair Clarkson, and members of the House and Senate committees, thank you for the opportunity to testify today about real time bidding and advertising technology. My name is Ariel Garcia. I'm Chief Operating Officer of Check My Ads Institute. We're an independent nonprofit digital advertising watchdog that advocates for a transparent and fair digital advertising market for advertisers, publishers, and the public. Something that sets us apart is that we're proud to be an organization led by former ad practitioners and industry insiders like myself. Prior to joining Check My Ads in 2024, I spent a decade in digital advertising, serving most recently as chief privacy officer at Worldwide, a major global advertising agency. For years, I helped advertisers navigate the evolving regulatory environment as they look to strike the balance between privacy and targeted advertising. In fact, I resigned from my role precisely because of my realization that the only winners in the current state of the digital ad industry are big tech giants like Google and the data brokers and ad tech middlemen that thrive in their shadows. The $750,000,000,000 digital ad industry is the main business model of the Internet, and today that business model is broken. Big tech firms like Google and Meta have built empires on unbridled extraction. They've amassed massive troves of data. They've cemented themselves on all corners of industry and society, unilaterally setting norms that benefit their own business to the detriment of their users, advertisers, and publishers alike. Yet while these companies are household names, there are hundreds of others in the ad tech sector that have followed in their footsteps. Now before the advent of modern digital advertising, advertisers would typically work with an ad agency to identify what publications or programs their target audiences are likely to consume. 
But today, most advertising online is transacted through an automated real time auction. That's known as programmatic advertising. And over the past two decades, the central promise of programmatic was that it would help advertisers of all sizes more efficiently and effectively reach their audiences, and it would help publishers monetize, or sell, more of their ad space. While you'll hear more on how this works and how data flows within programmatic from Rowdy Irwin in just a bit, it's important to understand that this real time bidding system has fundamentally taken advertisers and publishers out of the driver's seat. Advertisers were told that they could reach the right person at the right time, in the right place, and at the right price. And that they didn't have to worry anymore about where their ads were appearing; the data, they were told, would just take care of it. So instead, today, advertisers are relying on a supply chain of various intermediaries. And that includes ad tech companies, the agencies that plan and execute campaigns for them, and data brokers. For every dollar a business spends on digital ads, each of these middlemen gets a percentage revenue share, and most of them are paid on volume. That creates dysfunctional incentives, because every company in the middle stands to gain by having more money pass through their pipes. In addition, in each transaction, these middlemen get access to personal data. That's how programmatic advertising has resulted in constant tracking and widespread sale and leakage of consumer personal data. Now while this process sounds efficient and powerful in theory, the reality is much different. Out of every dollar spent on programmatic advertising, less than half makes it to the publisher. About 30¢ is absorbed by ad tech intermediaries. Over 20¢ is wasted on fraud, invalid traffic like bots, and low quality made for advertising websites. Those have been exploding in this age of generative AI.
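The dollar split described in this testimony works out as simple arithmetic. A minimal sketch, assuming the approximate shares cited (about 30¢ to intermediaries and just over 20¢ to fraud and invalid traffic; the function name and exact percentages are illustrative, not official industry constants):

```python
# Approximate fate of $1.00 of programmatic ad spend, using the rough
# shares cited in the testimony. The 0.30 and 0.22 figures are
# illustrative assumptions, not exact industry measurements.
def split_ad_dollar(spend: float) -> dict:
    """Split ad spend into intermediary fees, waste, and the publisher's share."""
    intermediary_cut = 0.30 * spend   # ad tech middlemen (~30 cents per dollar)
    waste = 0.22 * spend              # fraud, bots, made-for-advertising sites
    publisher_share = spend - intermediary_cut - waste
    return {
        "intermediaries": round(intermediary_cut, 2),
        "waste": round(waste, 2),
        "publisher": round(publisher_share, 2),
    }

# Under these assumptions, the publisher ends up with less than half.
print(split_ad_dollar(1.00))
```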
The average programmatic campaign saw a single brand's advertising placed on a staggering 23,000 websites. And these figures are a conservative estimate. These costs and this waste don't reflect agency fees or the ineffective fraud prevention that marketers pay for. They don't include spend wasted on ads that, while not technically meeting industry definitions for fraud, don't actually adhere to the advertiser's criteria. And these statistics are from studies looking at some of the largest brands with the most leverage. Even they have limited ability to confirm where exactly their ads appeared, let alone to reduce this waste and inefficiency. So while research suggests that ad fraud accounts for a whopping $84,000,000,000 in global ad spend, which would place it second only to the drug trade as a source of revenue for organized crime, that is actually just the tip of the iceberg when it comes to advertiser waste, especially for smaller businesses. The reality is that for small businesses, this waste isn't just detrimental to their growth, but at times to their survival. So recently, we at Check My Ads launched our own experimental ad campaign where we emulated the experience of an average small business user. We set up a new Google Ads account as thousands of other small businesses do. We were nudged into Google's AI driven, opaque product called Performance Max. This product is notorious for restricting advertiser transparency and control, and it has a history of placing ads on potentially illegal content while ignoring advertiser targeting criteria. Our findings echoed the earlier research. We found our ads on AI generated spam websites and parked domains. We ran on sites that you can't navigate to directly, which seem like they may have been fraudulent. We selected to target English language placements. That's one of the few inputs advertisers have into this AI campaign type, and we still ended up on foreign language videos.
With some of our ads landing on kids' videos, we actually have no idea who Google's algorithm targeted. We ran on videos no longer available that were removed for policy violations, but we don't know what policies they were. Was it adult content? Was it piracy? We simply can't know. There is one place we did not run. We did not run on one single recognizable publisher website. So, obviously, the next question is how did it perform? According to Google, it did great. For a small $75 test campaign, we got 34 sign ups, except we didn't. In our actual CRM system, we only got five. We tried to ask Google about it. They said they don't have a team dedicated to nonprofits. This is the same company that fights tooth and nail against being regulated, on the grounds that it would harm small businesses. Now that's one measly campaign. But, again, this is the default campaign type that 80% of advertisers using Google Ads are pushed into. That is a lot of money that is wasted, especially for small businesses with small budgets. It's also a lot of money that's not making its way to publishers, that's not funding real content or local news that Vermont residents benefit from. And this same phenomenon happens outside of Big Tech's empire. The consumer data that's peddled by data brokers for ad targeting, while invasively collected, is also astoundingly ineffective for marketing. We talked about this a little bit earlier today. Research has shown that leading ad data brokers were only right about people's gender 42% of the time, worse than guessing at random. This is what advertisers today are paying for. My own personal desktop research echoed these findings. One ad platform held over 500 audience segments to which I belonged, which it got from seven different data brokers. To their credit, supplying this much information is very uncommon in the absence of consistent legal requirements. Yet these audiences were highly inaccurate, often contradictory, and sometimes sensitive.
So if you look at this data that I have here, according to this, I'm both high income and below the poverty level. I'm listed as a middle aged suburban homeowner without kids as well as a mother of two. I am neither. I am, according to this profile, a defense contractor, but I also work in food service and agriculture, and I both love and hate my job. I was identified as likely suffering from heart disease, speaking Spanish, and celebrating Ramadan. None of these things are true. Now while the industry has finally started to acknowledge these data quality realities out loud, that insight is far less accessible to small businesses. Yet right or wrong, this data can be used within and beyond advertising in ways that cause real harm. The chief privacy officer of Acxiom, one of the world's largest data brokers, has admitted that their consumer data, used by advertisers around the world, comprises little more than informed guesses. He said that he hopes that if they guess wrong, it doesn't have negative consequences for denial of benefits or credit. But credit reference agencies have purchased and used Acxiom data to determine credit

[Michael Marcotte (Chair)]: Could you make the screen bigger? Is it possible you could blow that up a little bit?

[Ariel Garcia]: Yep. Is is that bigger now?

[Michael Marcotte (Chair)]: No. It was smaller. Wait. Smaller.

[Ariel Garcia]: Oh, that's so strange. Hold on. Let me, stop share and reshare. Give me one moment.

[Monique Priestley (Clerk)]: Alright.

[Ariel Garcia]: No. You're fine.

[Monique Priestley (Clerk)]: Actually, we'll have a copy of

[Michael Marcotte (Chair)]: it too. We'll get it.

[Daniel J. Solove]: We'll

[Ariel Garcia]: Yes. I'm happy to share a copy. Is this still too small?

[Michael Marcotte (Chair)]: Yes. If it's possible, blow it up a little bit more.

[Ariel Garcia]: Alright. Let me try one last time. I'm gonna try content only. Okay. Is that any better?

[Michael Marcotte (Chair)]: Yes.

[Ariel Garcia]: Okay. Perfect. So, the chief privacy officer of Acxiom, which is one of the world's largest data brokers, admitted that their consumer data that's used by advertisers everywhere is little more than informed guesses. He said that he hopes that if they guessed wrong, it won't have negative consequences for people, for denial of benefits or credit. But the reality is that credit reference agencies have purchased and used Acxiom data to determine creditworthiness. All of this so that data brokers and ad tech companies can sell data that has a less than 50% chance of being accurate to unsuspecting small business advertisers. Is this the alleged pinnacle of efficiency that props up the industry's complete disregard for consumer privacy and the data enabled harms that result from ubiquitous tracking? The reality is that without privacy laws that codify transparency and meaningful choice, that include principles of data minimization, and that restrict use of sensitive data, these data enabled harms will continue unchecked. So to close, the privacy of Vermont residents is not at odds with relevant, effective advertising. An effective privacy law would not end all targeted advertising. What it would do is reduce the supply of low quality data at the core of the broken ad tech market. It would create conditions for more relevant advertising that benefits Vermont's businesses while restricting harmful practices that undermine the safety, liberty, and security of its citizens. Thank you once again for the opportunity to speak today. I look forward to your questions, and I will now pass the floor to Rowdy Irwin.

[Michael Marcotte (Chair)]: Thank you, Ariel.

[Rowdy Irwin]: Yes. Hello. Good morning. Thank you. My name is Rowdy Irwin. I'm a media director who's managed media buying for small independent agencies and large holding companies alike for nearly a decade, with many of my past clients belonging to the Fortune 100. For those who don't know what media buying is, I typically explain it to friends and family this way. If you think of a modern ad agency, there are two primary functions being accomplished. There's the creative side of the house, which is where creative assets, commercials, or ad units are conceptualized and subsequently produced. And then there's the side of the house that deals with media buying, which is what I do. This involves the process of purchasing ad space on behalf of a brand to accomplish a specific business goal or objective. The creative assets that get produced on the creative side of the house are what fill the ad space that was purchased. Pre internet, with traditional media, television, billboards, magazines, etcetera, the buying process for ad space was often long and arduous, involving phone calls, fax machines, and one on one conversations between brand reps and publisher reps. With the rise of the internet came a slew of new mediums in which ads could be displayed, shown, or played. While digital ad buying initially remained a one to one exercise between buyers and sales reps, eventually programmatic advertising came to be. Programmatic advertising is defined as the automated, data driven buying and selling of digital advertising in real time, using algorithms to place ads instantly across display, video, mobile, and audio channels. Programmatic buying is facilitated via an auction based system referred to as real time bidding, or RTB for short, where advertisers bid on ad inventory in milliseconds as a user loads a webpage. To truly understand the privacy implications, we can't just talk about theory, we need to look at the mechanics.
I've built a live simulation of what happens in the split second between a constituent clicking a link and the page loading. This entire process takes roughly one hundred milliseconds, faster than the blink of an eye. So I'm gonna share my screen. Pull that up. I'm sorry. There's a system setting I have to toggle here. Apologies.

[Michael Marcotte (Chair)]: Not a problem.

[Rowdy Irwin]: Having some trouble. I'm gonna try one more thing here. I think I got it. Let me know if you can see my screen. Can you guys see the screen? Okay, great. So I've built a live simulation of what happens in the split second between the constituent clicking a link and the page loading. As I mentioned, this entire process takes roughly one hundred milliseconds, which is faster than the blink of an eye. I want you to imagine for a moment that you're a client, maybe a marketing director at a large retailer, and I'm your media trader. You've given me $100,000 to find potential customers. Here's the machine I use to spend your money. In step one, we have the visit. A user, let's say a Vermont resident, visits a news site. The content begins to load, but the ad slot is empty. Now as a trader, I don't care about the specific website. I care about the user. I want to find who matches the audience segment I bought. In step two, we have the bid request. This is the most critical moment for privacy. The website takes that user's data and packages it into a digital envelope called a bid request. If we look inside this envelope, we see it's not just a signal. It is a specific data payload containing their device ID, which is a digital fingerprint, GPS coordinates, and keywords about what they're reading right now.
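The "digital envelope" described in this testimony can be sketched as a data structure. This is an illustrative payload only, loosely inspired by the kinds of fields real time bid requests carry; the field names and values below are invented for illustration and are not the exact OpenRTB specification:

```python
import json

# Illustrative sketch of the data payload inside a bid request.
# All identifiers, coordinates, and segments here are made up.
bid_request = {
    "id": "auction-7f3a",                       # unique auction ID
    "device": {
        "ifa": "38400000-8cf0-11bd-b23e-10b96e40000d",  # device advertising ID
        "geo": {"lat": 44.26, "lon": -72.58},   # GPS coordinates (near Montpelier, VT)
    },
    "site": {
        "domain": "example-news-site.com",
        "keywords": "local news, schools, property taxes",  # what the user is reading
    },
    "user": {
        # Broker-supplied "enrichment" segments layered onto the payload
        "data": [
            {"segment": "credit_card_holder"},
            {"segment": "likely_to_move"},
        ],
    },
}

# The exchange broadcasts this same payload to hundreds of bidders at once.
print(json.dumps(bid_request, indent=2))
```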

[Michael Marcotte (Chair)]: Rowdy, you may wanna turn your video off. You keep breaking up.

[Rowdy Irwin]: Yes. Sounds good. Thank you.

[Michael Marcotte (Chair)]: Okay.

[Rowdy Irwin]: Okay. So this is where the data broker economy plugs in. As a trader, I might pay a third party broker a fee to enrich this payload. I can layer on data like credit card holder, likely to move, or even health inferences. I don't verify this data, I just pay the premium to target it. In step three, we have the broadcast. The exchange now broadcasts this user's sensitive profile to hundreds of bidders simultaneously. Even if a bidder loses the auction, they still receive, and can store, this location and device data. Here's the reality of the job. The system favors scale, not privacy. If I'm under pressure to spend your budget, I might be told to loosen my targeting. I've seen instances where platforms discourage explicit demographic targeting, yet allow specified affinity keywords, like specific cultural interests or zip codes, that act as effective proxies for that same demographic. The machine wants to deliver the ad, and it will find the path of least resistance. In step four, we have the evaluation. Algorithms on the buying side analyze that data payload instantly to determine the value of the user. There's often a conflict of interest here. Sometimes, as a trader, I might be incentivized to buy the inventory not because it's best for you as the client, but because my agency has an upfront volume commitment with a specific vendor that we need to hit to get a rebate. The algorithm can be set to prioritize these deals over performance. In step five, we have the auction. The exchange evaluates the bids at auction and compares them in milliseconds. In step six, the highest bidder, which is this DSP two, wins the right to serve. In step seven, we have delivery. The winning ad is sent back to the user's browser. And in step eight, we have the aftermath. The ad loads. Look at these signals firing off to the side, those gray dots. These are tracking pixels. They're ostensibly there to verify the ad was seen, but functionally they're harvesting data again.
They re-log the user's presence and send it to third party verification partners, which allows us to track them across the web. In conclusion, this helps answer the question regarding small business competitiveness. This is a pay to place system. A small Vermont business cannot afford the minimum spend thresholds required to use software like this, which can sometimes be nearly $10,000 a month to access these top tier tools. They're priced out of the cockpit, yet their data and their customers' data is harvested by this machine every single day. That is real time bidding today. Thank you.
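The eight steps in this walkthrough can be sketched as a toy auction. This is a simplified simulation under stated assumptions: three invented bidder names, random bid amounts, and the key privacy point from the testimony that every bidder, winner or loser, receives the full data payload:

```python
import random

# Toy sketch of the real time bidding steps described in the testimony.
# Bidder names, bid ranges, and the request contents are invented.
def run_auction(bid_request: dict, bidders: list) -> tuple:
    """Broadcast the request, collect bids, and pick a winner."""
    received = {}
    for name in bidders:
        # Step 3, the broadcast: every bidder receives the full payload,
        # so even losing bidders see (and can store) the user's profile.
        received[name] = dict(bid_request)
    # Steps 4-5: each bidder evaluates the user and submits a bid (in dollars CPM).
    bids = {name: round(random.uniform(0.5, 5.0), 2) for name in bidders}
    # Step 6: the highest bidder wins the right to serve the ad.
    winner = max(bids, key=bids.get)
    return winner, bids, received

request = {"device_id": "abc-123", "geo": "Montpelier, VT", "keywords": "news"}
winner, bids, received = run_auction(request, ["DSP-1", "DSP-2", "DSP-3"])
print(f"{winner} wins and serves the ad")

# Step 8, the aftermath: losing bidders still hold a copy of the payload.
losers_with_data = [b for b in received if b != winner]
print(f"{len(losers_with_data)} losing bidders still received the user's data")
```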

[Michael Marcotte (Chair)]: Thank you, Rowdy.

[Rowdy Irwin]: Is it Ian?

[Monique Priestley (Clerk)]: Oh, no. He's not doing it till the next

[Michael Marcotte (Chair)]: yeah. Ian?

[Monique Priestley (Clerk)]: Next is Debbie.

[Michael Marcotte (Chair)]: Debbie is the next one? Yep. Okay. Debbie?

[Rowdy Irwin]: Debbie?

[Debbie Reynolds]: Can you hear me?

[Michael Marcotte (Chair)]: Yes.

[Rowdy Irwin]: Alright. Excellent. All right,

[Debbie Reynolds]: first of all, thank you so much to this committee, the Senate and the House Committee, Vermont for inviting me to speak today. I will share my screen actually, let's see.

[Daniel J. Solove]: All right.

[Debbie Reynolds]: Can you see my screen?

[Michael Marcotte (Chair)]: Yes.

[Debbie Reynolds]: Alright. So today I'll be talking about business risks and opportunities as they relate to privacy, trust, and business outcomes. I am Debbie Reynolds. They call me the Data Diva. I'm the founder and chief data privacy officer at Debbie Reynolds Consulting. I work at the intersection of data privacy and emerging technology. I'm a technologist by trade, but I focus now on privacy. I have over two decades of experience in the privacy and tech industry. I've been named one of the top global data privacy experts and top 30 cyber risk communicators. I've also been quoted in publications like the New York Times, Forbes, Bloomberg, and the like. I have a podcast called the Data Diva Talks Privacy Podcast. It's the number one data privacy podcast in the world. We have listeners in over 158 countries. I also do keynote speaking for companies you've never heard of, like TikTok and PayPal. I'm also the chair of a committee for IEEE. We are focused on a tool agnostic and law agnostic data privacy labeling regime. I was also one of 16 Americans chosen to be on the IoT Advisory Board with the US Department of Commerce. My focus was privacy across all sectors. I've also been a speaker for the US Senate on privacy and cybersecurity. The first thing I want to address is innovation. We hear this a lot. People say that rules or regulations stop innovation. But I am reminded, I actually talked to the privacy commissioner in New Zealand recently. She said something that was very striking. Actually, I love cars, so car analogies really fit what I like to talk about. She said the innovation that helps cars go faster is brakes. When we think about regulation and the types of things the states want to do to protect people, part of innovation is being able to make sure that it's safe for people, and that is what really accelerates the ability for companies or organizations to innovate. So guardrails accelerate innovation, because they allow people to have control.
It also allows confidence in being able to go fast and to have speed. We know that systems can scale better and grow best when they are well controlled. So what's the data landscape like today? We've heard from many speakers today about the complication, the rapid rise of technology, in terms of the things that people can do with technology. It is a very exciting time to be a technologist. But what we're seeing is more complex data systems that companies are dealing with. Businesses definitely want to avoid headaches and barriers to adoption. People, whether that be business partners or customers, are really looking for more transparency and the ability to have more control or say over what happens with their data. So regardless of what an individual business may be thinking about, there's a whole world of possibilities and things happening around them that mean they have to pay closer attention to data and privacy. It may be something as simple as a company that is installing a new point of sale system. Maybe there's a new surveillance system that they want to install inside or outside their business. Maybe they want to start a newsletter. Perhaps they collect customers' names when they use their service. Even bars and restaurants are either tracking people for reservations or checking people's IDs. Also, a big use that we're seeing as companies move into emerging technologies is tools that track employees, whether that be punching in and out electronically using maybe their thumbprint, or tracking that happens on the internet, or data uses that they have in the cloud. So all these things touch all types of businesses, businesses of all sizes. And because they're collecting more information, that is raising the risk level to the organization in terms of how that data is collected.
So it gives them more responsibility, and it also creates risk for the individuals whose data is being collected, retained, stored, or shared. So here are the things we hear that businesses say they're concerned about. Definitely increased operating costs, and the complexity of managing all these different rules and expectations. And also, I think from a state perspective, no state wants to be put at a disadvantage. But really the real risk is a loss of trust. When companies or organizations lose trust, they lose revenue, whether that be from existing clients or from new clients they fail to win. Business partnerships also become harder when there is a lack of data trust between partners. There may be expensive hidden data liabilities that have to be addressed, and companies definitely don't want to be reactive in terms of data fixes, especially because a lot of the disruption that can happen to a business will either cost them more money or cost them more headache down the line. So for regulation or rules that may be set by states, the goal, regardless of how it's articulated, is really to create more transparency in how data is being used, and also to give an individual more agency, meaning a say in how their data is used and what actions they can take to make better choices. So the goal is really to make sure that if companies or organizations are using data in data systems, they're transparent, and also that a consumer or user has the ability to make informed decisions or choices. And that really is part of the foundation of privacy. So why do privacy expectations matter? Having clear expectations between a consumer and a business is very important, especially as we know the digital landscape is getting more complex. Consistency prevents reputational and customer harm.
We've seen this a number of times. This is a publicly available situation: there was an article written by Kashmir Hill in the New York Times about General Motors. They were collecting car data, and it was being sold to data brokers and then to insurance companies. And people were upset because either their car insurance costs had gone up or their car insurance was canceled. And so that caused a huge uproar, and many were studying data privacy in cars at that point. And I went and read through all the articles. I read through all the comments on those articles from consumers. And there were people saying, you know, we've been buying GM cars for three generations, and if they're selling my data, I'll never buy another car from this company. And I thought, wow. This is a huge risk for businesses if they're handling data in a way that a customer may not agree to or may not like, even beyond regulation. So I always tell companies, do you want to lose a dollar to gain a dime? You have to really think about those data uses and how they really benefit the individual. So what this unlocks for businesses, especially if they are really thinking about the human impact of how data is being used, stored, and shared, is that it allows them to make better decisions with fewer roadblocks. Doing that upfront work, thinking ahead of time about not only the benefits but the risks of using that data, also allows them to adopt new technologies without constant rework, because they have the foundation there to be able to do that, and then it allows the business to grow without escalating risk. So for companies that collect data, a lot of times we hear in the news about companies that have data breaches and things like that. Many times those data breaches involve data that maybe had a low business value.
Perhaps it was an old customer list, perhaps it was data that was collected for a program that people forgot about, but because it became a breach risk, it became a huge business risk. So I tell companies: data that has a low business value often has a high privacy or cyber risk. And that's really what we're trying to help companies understand and avoid. So the biggest business risk really is to do nothing. Data problems don't go away if you don't address them. They just get larger and more expensive if they're ignored. A lack of clarity really exposes companies, so you want to create that clarity between businesses and consumers. And late fixes cost more than early ones. So my takeaways are: guardrails really accelerate innovation. They don't stop innovation. Transparency and agency are really vital foundations of privacy. Doing nothing leads to higher data risk and business disruption, and that's what we want to avoid. And, also, in my view, trust is the new goal. Trust in how companies handle data going forward is really how they can reduce risk, but also increase revenue. That's it.

[Michael Marcotte (Chair)]: Debbie, thank you very much. Daniel?

[Ariel Garcia]: It's online.

[Daniel J. Solove]: Hello.

[Michael Marcotte (Chair)]: Morning.

[Daniel J. Solove]: Morning, how are you? Good and you? Good. So thank you so much for having me here. I'm here to talk about the enforcement of privacy laws. I recently wrote a paper that took a big picture view of enforcement. I guess before I begin, I'll introduce myself. I'm Daniel Solove. I'm a law professor at George Washington University Law School. I've been studying and teaching privacy for about twenty five years. And I recently have tried to look at the big picture of enforcement. And I've reached a number of conclusions that I think are important to highlight about what makes for effective enforcement and why it's so important in privacy laws. So I want to give a brief overview of some of my conclusions. First, enforcement is really an essential dimension of privacy laws. You really have to focus on enforcement and get enforcement right for the laws to have any effect whatsoever. Otherwise they're going to be incredibly weak. They're not gonna be effective, and they might as well not even exist. They're gonna be pretty much worthless on the page. You can even have a strong law, but if the enforcement is botched, it might as well be weak, because it won't really work. Unfortunately, government enforcement has many shortcomings that undermine its effectiveness. I wanna talk to you about some of those things. Enforcement is about incentives. And I think far too often the incentives just don't work. They're very poorly aligned and broken, and that leads to laws not being enforced well. And then my last conclusion is that a private right of action is essential for an effective privacy law. I really believe this, and I know this is a controversial issue, but despite the costs of a private right of action, I think it's worth it, because the limitations of government enforcement can't be overcome, and government enforcement alone is not sufficient for adequate enforcement.
So let me dive in a little bit and talk about, sort of, why the laws are not getting the job done when it comes to government enforcement. First is that there are political constraints. The reality is enforcement occurs in a political landscape. Enforcers are not fully insulated from the political process. There's a lot of pressure on enforcement agencies to heed the political winds, and companies have a lot of power and can exert a lot of it in the legislature. An example is the FTC. The FTC, despite all the legal protections and independence it has, still has to answer to the legislature, and that has put a lot of pressure on the FTC. In the 70s, Congress slapped the FTC incredibly hard when the FTC tried to do a rulemaking on children's advertising, known as KidVid, which resulted in the FTC losing some of its rulemaking power, losing a lot of its power generally. It still causes PTSD at the FTC. They still talk about KidVid. It has overall resulted in weakening that agency significantly, and the lesson the agency learned is: be really cautious. Don't push too hard, or else we could get slapped down and lose it all. As a result, that's weakened the FTC and made the FTC a lot more conservative in the cases it brings, and that is a problem. Another problem, which we're seeing right now with the FTC, is that a new administration can come in. The Trump administration knocked out the FTC's rulemaking and fired the Democratic commissioners at the FTC, and now the FTC is far weaker, and enforcement priorities can dramatically change when an administration starts meddling with an agency. So this is a problem. The agencies cannot escape the political landscape. Another problem that we have is lack of resources. Most government agencies are woefully underfunded. They lack the personnel, and this is the case in the U.S. and also in the EU. In the U.S., the budgets are woefully inadequate.
Just an example: the FTC enforces about 80 statutes, most of which are not related to privacy. The FTC's budget in 2025 was $425,700,000, but former FTC chair William Kovacic recently spoke at one of my events, and he said he thinks that budget is way too low. It should be more than a billion dollars given everything the FTC is doing. There are about 60 personnel at the FTC who focus on privacy. You go over to the EU, and there are about 150 at Ireland's Data Protection Commission. You go to Germany, and there are more than 700 people working for their data protection authorities, and these countries are far smaller than the United States. And even in the EU, they are complaining that they're woefully understaffed and under-budgeted. So in the U.S. we really have dramatically weak resources and inadequate staffing at the agencies, and the result is they just can't go after most cases. They don't have the resources, so they have to pick and choose and go after only a few cases, and companies know that. Companies know enforcers are only going to go after a fraction of what's out there. The penalties are generally weak; the fines that are issued are generally low compared to the infraction. And the fines the government issues for violations don't go toward compensating victims. So a consumer gets harmed, and great, the state gets a little bit richer, but what happens to the consumer? What does the consumer get out of this? This also goes to incentivizing complaints. You know, let's say I'm harmed. Should I waste my time complaining to the government or to the agency enforcing the law? Hey, I've been harmed. What do I get out of it? I don't get compensated. I might not even hear anything back. I just raise the complaint, I feel like a chump, I've wasted my time, and if they do get money, I don't get anything.
You know, the state just gets richer, and I guess they get to buy a new painting for the walls of their offices, but what do I get? So I think we actually see a dramatic under-reporting of complaints, because we really have no system for individuals to get any redress for the harms they've suffered. Cases also take a very, very long time to get resolved. If the agency actually starts to litigate and really fight a case, it can take years. As a result, agencies are very selective about the cases they bring, and they bring clear-cut violations, because a more substantial case, a more controversial case, or a case in a grayer area is more likely to be contested. They want a gotcha. It's easier to go after a clear-cut violation that is more trivial, because you've just got them, than a more important violation that is more qualitative in its dimensions, takes more time, and is more likely to be contested and fought. They just don't have the resources for that. And when we look at the stats, there's a woefully insufficient quantity of enforcement. Just one case in point: in the decade from 2009 to 2019, the FTC brought only 101 internet privacy enforcement actions. That's about 10 per year, and I will say that the number of privacy violations per year is a hell of a lot more than 10. I would say it's thousands, tens of thousands. We're dealing with a very small fraction of cases. You look at HHS: it will bring a certain number of cases for monetary penalties, typically 10 to 20 a year. Yet just a year or two ago, it had to resolve more than 30,000 cases. So the odds that you're going to get a monetary penalty from HHS for a HIPAA violation are less than 0.5%. Companies know this.
This also leads to enforcement that's very inconsistent. And when we put all this together, I think the way to understand enforcement is that enforcement is about incentives. The best way to see things is to imagine the amoral company. Take morality out of it, because everyone sort of hopes that companies have morality, that they're going to just do the right thing, and maybe sometimes they will, but generally that's not how they're structured. They're built to make a profit, and it's totally understandable for them to make calculations based on what makes economic sense. So let's take the morality out of it and just ask: economically, does it make sense for companies to follow privacy laws under the current regime of enforcement? I would say no. Given the incentives, companies can just forget the law; it doesn't matter, because they're going to get ahead. What is the risk they're facing? Risk is the magnitude of the penalty times the likelihood of the penalty. Companies already know the likelihood is really, really low. And if they do get a penalty, at worst it often just puts them back to where they would have been if they hadn't violated the law, and a lot of times it's even less than that. It's a slap on the wrist. So they know that if they get caught, it hurts a little bit, but they still come out ahead, and there's a very low likelihood of getting caught at all. So economically: violate the law, go with it. And there are two ways violating the law becomes economically advantageous. One is that if the law impedes them in certain ways, they just ignore it and get a great benefit. They want data to train an AI system; they don't have the proper permissions, or what they're doing is improper collection; they know they can just take it. And by the time they've gotten it and used it, they can just ask for forgiveness.
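The expected-value logic Professor Solove describes, risk as the magnitude of the penalty times its likelihood, can be sketched in a few lines. The dollar figures below are hypothetical, chosen only to illustrate the calculation; the enforcement probability echoes the under-0.5% odds cited above.

```python
# Back-of-the-envelope model of the "amoral company" calculation:
# expected penalty = magnitude of the penalty x likelihood of the penalty.
# All figures are illustrative assumptions, not data from the testimony.

benefit_of_violation = 50_000_000   # hypothetical upside of ignoring the law
penalty_if_caught = 10_000_000      # hypothetical fine if enforcement happens
likelihood_of_penalty = 0.005       # under 0.5%, per the HHS odds above

expected_penalty = penalty_if_caught * likelihood_of_penalty
net_expected_gain = benefit_of_violation - expected_penalty

print(f"Expected penalty:  ${expected_penalty:,.0f}")    # $50,000
print(f"Net expected gain: ${net_expected_gain:,.0f}")   # $49,950,000
```

Under these assumptions the amoral company comes out nearly $50 million ahead by violating, which is exactly the pathological incentive structure the testimony describes.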
No regulator is going to make them delete their entire AI model. It's just not going to happen. So they know, okay, they'll ask for forgiveness, they'll pay a fine, but they get ahead, and the benefits of doing it are great. And consider the risks of not doing it: a lot of these companies see this as an existential race. If they don't keep pace with technology, they run a big risk that others win the race, and there are trillions of dollars at stake. So why care about enforcement? And then there are those that do want to comply; they just don't put a lot of resources into the privacy function. They say, okay, we'll put in a minimal amount of resources and basically get away with it, and if regulators come after us, we'll pay. But why put a lot of money into this when we can invest the money in growth and other things rather than compliance? We'll make a lot more on that investment than we'll save by avoiding a very low risk. So that's what we have, and economically, I don't think these companies are evil or bad. I think they're behaving exactly as the law has structured them to behave. Unfortunately, the law has structured things in a pathological way. I think ultimately we can do a lot to improve government enforcement. We can give agencies more resources, but at the end of the day, even with more resources, there's going to be a ceiling. At some point, there's only so much we're going to be able to spend on enforcement, and only so much we can do to insulate agencies from politics. These problems are just not going to disappear. The reality is that there's a ceiling: there's only so much government enforcers are going to be able to do. It's costly to spend taxpayer money on enforcement. Do you really want to spend what it would take?
Do you want an office of a thousand people, paying all their salaries, with an enormous budget, litigating all these cases and fighting all this stuff? I don't think that's something taxpayers are going to want to fund at the end of the day. And that gets to the last point I want to make, which is why I think a private right of action is essential. This has become a very controversial issue in privacy law. It actually wasn't controversial a long time ago. If you look at the federal privacy laws, many of them have private rights of action, and in those days it wasn't as controversial as it has become now. I think it's become controversial because industry has realized that private rights of action actually work. They actually have bite. They deter, and industry doesn't like that. They would much rather have only government enforcement, which they know is weaker. But if you look at the laws, there are a lot of federal laws with a private right of action. You go to the states, and some laws have private rights of action, like subject-specific laws such as the Illinois Biometric Information Privacy Act. But a lot of the state consumer privacy laws lack a private right of action, and I think that is a really bad thing that essentially makes these laws empty pieces of paper, quite honestly. If a state says, hey, we're going to pass another consumer privacy law, I would generally say that without a private right of action, who cares? Don't waste your time. You're just printing stuff on a page; you're going to make no difference. You'll have a few enforcement actions, you can glom on to a bunch of other states, but at the end of the day the needle won't move at all. Consumer privacy will not really be protected, and unfortunately these laws won't really have much meaning.
The Illinois Biometric Information Privacy Act, in contrast: at the end of the day, people don't mess with biometric information in Illinois. They take the law seriously, because they know that if they don't, it's going to be vigorously enforced. The reason private rights of action work is that they essentially deputize private attorneys general. And what's great about these private attorneys general is that the state doesn't have to pay for them. They don't cost the state money; they pay for themselves and for their own offices. They're much more insulated politically as well, so they don't have the same political constraints, and they don't have the same resource constraints. They also have other benefits. The litigation process provides significant transparency: discovery reveals a lot of information about companies and what they're doing, a lot has been brought to light through that process, and it's worked throughout history. We've seen a lot of industries and bad practices exposed through litigation. This process also compensates for harm. Unlike government enforcement, which does not compensate people in most cases, there's more compensation for individuals through private litigation. Now, sometimes the harm to each individual is pretty low, and that's why we have class actions and aggregated claims. But at least the litigation process has the ability to compensate people who are harmed, whereas government enforcement doesn't. Another thing that's important is that civil litigation allows for more

[Rebecca Kelly Slaughter]: I'm sorry.

[Ariel Garcia]: We have one more person to hear testimony from, and then we would like to have the committee have time for questions as well. Is this a place where you could wrap up?

[Daniel J. Solove]: Yeah, I can wrap up right now. So basically the last point is just that there's a big variation in what state AGs enforce when it comes to privacy. It can change depending on who's in office, and it can change over time. If the state AG is not interested in enforcing, or doesn't think a case is big enough or important enough, an individual who's harmed by a privacy violation can just be ignored. The nice thing about civil litigation is that an individual who's harmed now has the ability to stand up for themselves. They can stand up and say, hey, I can assert my rights under this law, and I'm not dependent on hoping that the law will be enforced, depending on whatever political winds are blowing. So I think this adds a tremendously valuable benefit to privacy laws. And it's very important to have both government enforcement and civil litigation as enforcement tools, to make these laws more than hollow words on a page.

[Michael Marcotte (Chair)]: Thank you, Daniel. Commissioner Slaughter?

[Rebecca Kelly Slaughter]: Hello. How are you? Good morning. Thank you so much for having me. It's always an honor to follow Professor Solove. He made a number of really great points, so I'll try to be really brief in my overview remarks. First, let me tell you a little bit about myself. I am a native New Yorker, not a native Vermonter, but I did spend a semester in high school on a farm in Vershire, Vermont. So Vermont has a very special place in my heart and always has. I've had a number of amazing educational experiences through my life, but that time living, learning, and working on a farm in Vershire was really, I think, one of the most foundational and formative. So it's such an honor to be here talking to you. After I lived on that farm in Vermont, I went to law school, and then I spent a decade working in the U.S. Senate. In 2018, I was confirmed to the Federal Trade Commission, and I was confirmed again in 2024. As you probably know, right now I'm in the middle of some litigation about whether I get to keep doing that job, but I'm not here to talk about that. I'm here to talk about the importance of data privacy legislation. I mention that background because I think about this both from the perspective of someone who has worked in a legislature and understands the challenges, opportunities, and nuances of crafting effective legislation, and from the perspective of someone who has been in the position to administer statutes passed by a legislative body and to see what works well and what doesn't. Professor Solove mentioned a bunch of the limitations of FTC-only enforcement. One he didn't add is that we don't have a data-privacy-specific law in the United States at the federal level. We have COPPA, and we have a few other, more narrow laws, but we don't have a general federal data privacy law. So in addition to all the other limitations that Daniel mentioned, we are dealing with the fact that the FTC is mostly enforcing under its general UDAP authority, its authority to stop unfair and deceptive acts and practices. And that has some real limitations when it comes to data privacy specifically. I think the only other big overarching point I would make is that the details of what is in legislation really matter. I think it is so great and so important that so many states are taking up the mantle that Congress has not been able to carry by passing data privacy legislation. But in order for that to be meaningful, to protect your constituents and any other Americans who benefit from a state-specific law, how the law is crafted matters, so that it can be enforced effectively. Professor Solove talked a lot about who does the enforcement; I want to pick up on what they are enforcing. What are the prohibitions, limitations, and requirements in the law? COPPA, which is our closest thing to a federal privacy law, and which really just protects children's data, was revolutionary when it was passed because it introduced the idea of a federal data protection law. It was built largely on the notion that parents should be in charge of what happens with their kids' data, and so they should be able to choose whether their kids' data is accessed or stored. It's generally, not entirely, but generally a notice-and-choice law: tell people what you're doing with their data and let them choose whether to allow it. What we have seen over time, and now I speak not only as someone with FTC experience but also as a parent of four children, is that it is impossible to adequately monitor all the ways in which different companies are using one's own data, much less that of your children.
And even if you can monitor it, we don't actually have meaningful choice in the marketplace. So I would encourage you to think outside the boundaries of a traditional notice-and-choice regime, which leads to very lengthy privacy policies that are very good for lawyers, or for whatever AI is drafting them, but not very good for the consumers they're supposed to protect. I think focusing instead on meaningful restrictions around what data is collected, how it can be retained, and how it can be used, much more of a minimization framework, is a much more meaningful way to put limitations on the marketplace, and it allows for much more meaningful enforcement. Because if a company can evade enforcement against shoddy data practices by just saying, look, we were honest with people that we were going to collect their data and do whatever we wanted with it, we buried it in paragraph 37 of our 85-paragraph privacy policy, that doesn't provide any meaningful protection to people. And the last point I think is really important to focus on is remedies. Something I care very much about is deterrence. In order for laws and enforcement to be effective, the enforcement must provide adequate deterrence, both specific deterrence, meaning you want the individual lawbreaker not to break the law again, and general deterrence in the market, meaning you want any enforcement to send a signal to the rest of the market that the cost of breaking the law is not worth whatever benefit the company may receive from it. And that means having remedies that make law violations costly for the violators. That sounds like an obvious principle, but I think it is a really important one. Because if you end up with a law that does not have meaningful restrictions on what can be done with people's data and does not have meaningful remedies that make violations of the law costly, then I very much agree with Professor Solove.
You end up in a situation that's worse than having no law at all, because you have the impression of protection without any meaningful ability to deliver that protection, and I think consumers are left worse off at the end of the day. So that's my general overview, and I'm happy to answer any questions you might have.

[Michael Marcotte (Chair)]: Thank you, Commissioner. So, committee, I think we have a few minutes for some questions for anyone on the panel. So, Monique?

[Monique Priestley (Clerk)]: Yeah. Actually, this question is for Debbie, if Debbie's still on. And thank you, Commissioner Slaughter. Debbie, I was wondering if you could help us understand the costs here. You walked through the risks to businesses of holding data, and as we heard from the other witnesses, if companies aren't minimizing the data they collect, they're potentially holding on to old records or information that they don't necessarily need to have, with varying levels of sensitivity. I'm just wondering if you could help us understand: if a small business is breached, what are the actual costs and risks they're incurring and should be careful of?

[Debbie Reynolds]: Yeah, excellent question. So the most recent statistics on data breaches for small to medium businesses are that a breach costs an average of $160 per record. Let's say a small or medium-sized business has maybe 10,000 customer records; that could be up to $1,600,000. We're also seeing higher numbers, obviously, because we know a lot of small and medium businesses have far more than 10,000 customer records, so a breach can be very destabilizing. There are also statistics saying that of small and medium businesses that suffer a data breach, more than 50% go out of business within a year. That's a very staggering statistic and something that should be very chilling to people. A lot of what I see in small and medium-sized businesses is over-collection of data, or collecting data that may not have long-term business value, and that creates their risk. So I'm very much in favor of data minimization, and I preach this all the time: be very circumspect about what you collect and why you collect it, and try to make sure you tie it to a business purpose. Once that purpose expires, there needs to be some action taken by the company. That helps them not only with consumer-to-business relationships but with business-to-business relationships as well. And for me, minimizing that risk is vital.
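The per-record arithmetic Ms. Reynolds cites is straightforward to sketch. The $160-per-record figure and the 10,000-record example are from her testimony; the rest is a simple multiplication.

```python
# Estimated direct cost of a breach for a small business, using the
# average per-record cost cited in the testimony.

cost_per_record = 160          # dollars, average for small/medium businesses
customer_records = 10_000      # example record count from the testimony

estimated_breach_cost = cost_per_record * customer_records
print(f"Estimated breach cost: ${estimated_breach_cost:,}")   # $1,600,000
```

By the same math, a business holding 50,000 records would face an estimated $8,000,000 exposure, which is one concrete way to see why data minimization directly reduces breach risk.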

[Michael Marcotte (Chair)]: Other questions?

[Jonathan Cooper]: Mike, I have one if there's time.

[Michael Marcotte (Chair)]: Jon?

[Jonathan Cooper]: Great. Thanks so much. Ms. Reynolds, it really does sound like one of the things we would really appreciate from the academy, or wherever these degrees in information science come from, is building in that missing Hippocratic oath that was discussed earlier in the day. My question is for Ms. Garcia, if she is still on. In the morning, we heard, I think, about a list of a few hundred thousand people identified as having Alzheimer's disease. And I was wondering, in light of your testimony showing that there's so much slop and inaccuracy in how individuals are profiled, whether there's any reason to think that those lists are not fully accurate, or whether this is a different branch of the tree? Thank you.

[Ariel Garcia]: Yeah, it's a little bit of both, right? It's not that the audiences are completely inaccurate all of the time. The problem is more that, whether the data is right or wrong, it can still create consequences and pose a risk of harm to individuals. I gave the example of the audience segment I'm in that says I celebrate Ramadan, which seems like a proxy for inferring my religion. It's not correct. But if it's in the hands of someone who intends to use it for discriminatory purposes, is the effect on me any less? Probably not. So it's a bit of a mixed bag. Also, depending on the data broker itself, the quality of the data is going to differ. We have a bit of a blurring between the ad tech industrial complex and the military industrial complex right now, so I would hazard a guess that for whatever the government gets, greater attention might be paid to fidelity, to the extent possible. Whereas for marketing, brands can't confirm whether the data is accurate or not; you have no way of knowing. So there's simply no incentive to strive for accuracy.

[Jonathan Cooper]: Thank you. I appreciate that.

[Michael Marcotte (Chair)]: Questions?

[Monique Priestley (Clerk)]: Ariel, this one is for you again too. There have been reports about things like Department of Defense ads, state tourism ads, and small business ads ending up on things like child sexual abuse material websites and terrorism websites. I think some of what Rutland was presenting got a little bit broken up, and I was wondering if you could help explain how some of that ad spend can actually end up funding illegal activity, unknown to the small business, government agency, or state government that is paying for the advertising.

[Ariel Garcia]: Yeah, sure. It goes back to what I said during my testimony: in the process of promising marketers precision, programmatic advertising has taken its eye off the ball. When you're using data, often inaccurate data like I've been sharing with you, to target people, and a website is broadcasting, saying, oh, we have that person over here, that's what's driving the ad placement. The brand has no idea what website its ads are actually running on. In many cases, brands, and especially small businesses, have a really difficult time even knowing where their ads appeared after the fact; there's no requirement to provide that transparency today. Further, and this is a bigger, somewhat tangential issue, the ad tech platforms that websites use to sell ads aren't under any requirement to do something like know-your-customer due diligence. So if you take all of those things together, you end up with automated placement of ads wherever the system says that person is found. And beyond the inaccurate data, there's also fraud and misrepresentation of the user IDs used for ad targeting that can produce the same result: if a particular user commands a higher CPM, if there's higher value, there's an incentive to claim that user is currently visiting the website. So there are a lot of different areas that can be points of failure, and they're all symptomatic of this deeper issue of opacity and perverse incentives, with everyone being paid on volume and no ability to verify.

[Michael Marcotte (Chair)]: Questions? Ariel, something I've been wondering: generally, a business will advertise to bring more customers in, and you can generally see a return on investment. How do small businesses understand what their return on investment is if they don't know where the targets are?

[Ariel Garcia]: Yeah, so that's the challenge, right? I gave the example of our test campaign, where we did something really simple. We were trying to drive newsletter sign-ups, and Google gave us their measurements. Google said that the campaign we ran through Google Ads generated 34 new sign-ups, leads, right? But when we looked in our real system, where the email addresses would actually be, we only had five. Why would a small business assume it shouldn't trust the measurement it's getting from Google? And the reality is that, even beyond that, for more sophisticated marketers, this type of gaming of attribution is not uncommon. It's part of the problem with these big tech platforms being able to grade their own homework. So at the end of the day, for a small business in particular, it can be very difficult, if they're not knowledgeable enough to do the type of cross-checking that we did, to be able to say, hey, something feels off here.
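The cross-check Ms. Garcia describes, comparing the platform's reported conversions against a first-party system of record, can be sketched in a few lines. The counts of 34 and 5 are the figures from her test campaign; the 20% flag threshold is a hypothetical rule of thumb, not anything from the testimony.

```python
# Compare conversions reported by the ad platform against what the
# first-party system of record (here, the actual email list) shows.
# Counts are from the test campaign described above; the threshold
# is an illustrative assumption.

platform_reported_signups = 34   # sign-ups reported by Google Ads
first_party_signups = 5          # email addresses actually received

overcount_ratio = platform_reported_signups / first_party_signups
print(f"Platform reported {overcount_ratio:.1f}x the sign-ups "
      "the system of record shows.")   # 6.8x

# Flag for investigation when reported conversions exceed first-party
# records by more than 20% (hypothetical threshold).
if platform_reported_signups > first_party_signups * 1.2:
    print("Attribution discrepancy: something feels off here.")
```

This kind of reconciliation against one's own records is the only check available when the platform both runs the campaign and grades its own homework.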

[Michael Marcotte (Chair)]: I would imagine after a while they may be wondering why their ad campaign is not working.

[Ariel Garcia]: One would think. And I also mentioned that when we tried to ask Google about this, we found other anomalies, like things that were in our reporting one day, in our placement reporting showing where ads ran, that were removed the next day. We tried asking questions about that too, and again, Google's answer was, hey, sorry, we can't support you. We don't have a team to support nonprofits.

[Michael Marcotte (Chair)]: Wow. I wonder if they have a team that supports for profit.

[Ariel Garcia]: I can confidently say from the agency side that they do, but I will also say that their answers are not usually more satisfactory.

[Michael Marcotte (Chair)]: Yeah. Okay. Any other questions? Well, great. Thank you all for joining us this morning. We certainly appreciate your time. It's very helpful for us to get educated, and again, we appreciate your time. Thank you. So, committee, it's lunchtime. We're back at 01:00 in our own committee room, taking up H.639. So, at 01:00, we'll be