
Rights

Sen. Wyden Exposes Data Brokers Selling Location Data to Anti-Abortion Groups That Target Abortion Seekers

Electronic Frontier Foundation (EFF) - news - 28 February 2024 - 1:58am

This post was written by Jack Beck, an EFF legal intern

In a recent letter to the FTC and SEC, Sen. Ron Wyden (OR) details new information on data broker Near, which sold the location data of people seeking reproductive healthcare to anti-abortion groups. Near enabled these groups to send targeted ads promoting anti-abortion content to people who had visited Planned Parenthood and similar clinics.

In May 2023, the Wall Street Journal reported that Near was selling location data to anti-abortion groups. Specifically, the Journal found that the Veritas Society, a non-profit established by Wisconsin Right to Life, had hired ad agency Recrue Media. That agency purchased location data from Near and used it to target anti-abortion messaging at people who had sought reproductive healthcare.

The Veritas Society detailed the operation on its website (on a page that was taken down but saved by the Internet Archive) and stated that it delivered over 14 million ads to people who visited reproductive healthcare clinics. These ads appeared on Facebook, Instagram, Snapchat, and other social media for people who had sought reproductive healthcare.

When contacted by Sen. Wyden’s investigative team, Recrue staff admitted that the agency used Near’s website to literally “draw a line” around areas their client wanted them to target. They drew these lines around reproductive health care facilities across the country, using location data purchased from Near to target visitors to 600 different Planned Parenthood locations. Sen. Wyden’s team also confirmed with Near that, until the summer of 2022, no safeguards were in place to protect the data privacy of people visiting sensitive places.

Moreover, as Sen. Wyden explains in his letter, Near was selling data to the government, though it claimed on its website to be doing no such thing. As of October 18, 2023, Sen. Wyden’s investigation found Near was still selling location data harvested from Americans without their informed consent.

Near’s invasion of our privacy shows why Congress and the states must enact privacy-first legislation that limits how corporations collect and monetize our data. We also need privacy statutes that prevent the government from sidestepping the Fourth Amendment by purchasing location information—as Sen. Wyden has proposed. Even the government admits this is a problem.  Furthermore, as Near’s misconduct illustrates, safeguards must be in place that protect people in sensitive locations from being tracked.

This isn’t the first time we’ve seen data brokers sell information that can reveal visits to abortion clinics. We need laws now to strengthen privacy protections for consumers. We thank Sen. Wyden for conducting this investigation. We also commend the FTC’s recent bar on a data broker selling sensitive location data. We hope this represents the start of a longstanding trend.

Categories: Transparency, Privacy, Rights

'Working digitally in summary proceedings is above all more pleasant work'

Digital litigation in commercial and family summary proceedings at the Rotterdam District Court

'On LinkedIn I saw a call from a Rotterdam preliminary relief judge: who will be the first to file summary proceedings digitally? That turned out to be me,' says Jeanne Post, senior litigation support officer at law firm De Brauw Blackstone Westbroek. Since 16 October 2023, lawyers have been able to litigate digitally in commercial and family summary proceedings at the Rotterdam District Court via the web portal Mijn Rechtspraak.

Mijn Rechtspraak was not entirely new to Jeanne: she had already used the web portal to file attachment petitions. 'When I was given a summary proceedings case shortly after the LinkedIn post, I didn't have to think long. Working digitally is working differently, and above all I find it more pleasant work. Mijn Rechtspraak is easy and fast to use. You also have access to the digital case file anytime, anywhere. It saves a lot of paper, and time as well: think of the courier or postal service for sending documents. You even receive the judgment sooner.'

Room for improvement: sending a short message

Jeanne sees one improvement that would save even more time. 'Being able to simply send a short message to the court via the web portal would be a great help, for example to provide an explanation or ask a question. At the moment that has to be done by uploading a PDF file, which is rather cumbersome.' The Rechtspraak will act on this frequently heard wish in the coming months. Jeanne also wants to stress that the oral hearing in summary proceedings still takes place in the courtroom. 'Some lawyers think that fully digital litigation means the hearing is held online. But only the communication and the exchange of documents are digital.'

Tip: easy login with the Advocatenpas app

The Rechtspraak aims to let lawyers litigate digitally in commercial and family summary proceedings at the other district courts from 6 May 2024. Jeanne's tip is to use the Advocatenpas app. 'It makes logging in easy: no fuss with the card reader and codes, and you always have your phone with you anyway.' She also finds the manuals on the website (pdf, 684.8 KB) helpful for quickly finding your way around Mijn Rechtspraak. 'It is also a good idea to think in advance about how to organise this within your firm. By registering the address of a shared mailbox in Mijn Rechtspraak, rather than the addresses of individual lawyers, someone can act immediately whenever new messages or documents are ready in the web portal.'

Digitale Toegang

Digital litigation in commercial and family summary proceedings is part of the Rechtspraak's Digitale Toegang (Digital Access) programme, through which the Rechtspraak is creating simple digital access for all litigants and their legal representatives in civil and administrative law. Digital litigation is voluntary for now, but will at some point become mandatory for legal professionals.

Categories: Rights

EFF to D.C. Circuit: The U.S. Government’s Forced Disclosure of Visa Applicants’ Social Media Identifiers Harms Free Speech and Privacy

Electronic Frontier Foundation (EFF) - news - 27 February 2024 - 10:24pm

Special thanks to legal intern Alissa Johnson, who was the lead author of this post.

EFF recently filed an amicus brief in the U.S. Court of Appeals for the D.C. Circuit urging the court to reverse a lower court decision upholding a State Department rule that forces visa applicants to the United States to disclose their social media identifiers as part of the application process. If upheld, the district court ruling has severe implications for free speech and privacy not just for visa applicants, but also the people in their social media networks—millions, if not billions of people, given that the “Disclosure Requirement” applies to 14.7 million visa applicants annually.

Since 2019, visa applicants to the United States have been required to disclose social media identifiers they have used in the last five years to the U.S. government. Two U.S.-based organizations that regularly collaborate with documentary filmmakers around the world sued, challenging the policy on First Amendment and other grounds. A federal judge dismissed the case in August 2023, and plaintiffs filed an appeal, asserting that the district court erred in applying an overly deferential standard of review to plaintiffs’ First Amendment claims, among other arguments.

Our amicus brief lays out the privacy interests that visa applicants have in their public-facing social media profiles, the Disclosure Requirement’s chilling effect on the speech of both applicants and their social media connections, and the features of social media platforms like Facebook, Instagram, and X that reinforce these privacy interests and chilling effects.

Social media paints an alarmingly detailed picture of users’ personal lives, covering far more information than can be gleaned from a visa application. Although the Disclosure Requirement implicates only “public-facing” social media profiles, registering these profiles still exposes substantial personal information to the U.S. government because of the number of people impacted and the vast amounts of information shared on social media, both intentionally and unintentionally. Moreover, collecting data across social media platforms gives the U.S. government access to a wealth of information that may reveal more in combination than any individual question or post would alone. This risk is even further heightened if government agencies use automated tools to conduct their review—which the State Department has not ruled out and the Department of Homeland Security’s component Customs and Border Protection has already begun doing in its own social media monitoring program. Visa applicants may also unintentionally reveal personal information on their public-facing profiles, either due to difficulties in navigating default privacy settings within or across platforms, or through personal information posted by social media connections rather than the applicants themselves.

The Disclosure Requirement’s infringements on applicants’ privacy are further heightened because visa applicants are subject to social media monitoring not just during the visa vetting process, but even after they arrive in the United States. The policy also allows for public social media information to be stored in government databases for upwards of 100 years and shared with domestic and foreign government entities.  

Because of the Disclosure Requirement’s potential to expose vast amounts of applicants’ personal information, the policy chills First Amendment-protected speech of both the applicant themselves and their social media connections. The Disclosure Requirement allows the government to link pseudonymous accounts to real-world identities, impeding applicants’ ability to exist anonymously in online spaces. In response, a visa applicant might limit their speech, shut down pseudonymous accounts, or disengage from social media altogether. They might disassociate from others for fear that those connections could be offensive to the U.S. government. And their social media connections—including U.S. persons—might limit or sever online connections with friends, family, or colleagues who may be applying for a U.S. visa for fear of being under the government’s watchful eye.  

The Disclosure Requirement hamstrings the ability of visa applicants and their social media connections to freely engage in speech and association online. We hope that the D.C. Circuit reverses the district court’s ruling and remands the case for further proceedings.

Categories: Transparency, Privacy, Rights

Family court decisions to be sent to all litigants simultaneously

Mr. Online (legal news) - 27 February 2024 - 11:10am

Imagine a criminal court handing down a judgment and the public prosecutor receiving it days before the defendant it concerns. Do you think that would be accepted? That is how family law attorney Ingrid Vledder opened a LinkedIn post two months ago. What we apparently do not accept in a criminal case is 'normal' in family law: when the juvenile court judge takes a decision, for example to remove a child from the home, the youth protection agency (Jeugdbescherming) receives the decision by email days before the lawyer and the parent, 'who are left waiting for the carrier pigeon'.

Contrary to the law

According to Vledder, the law is clear: the court sends a copy to the litigants as soon as possible. It does not follow from this that one party may receive the decision earlier than another, Vledder argues. But practice is different. Messages between the Child Protection Board (Raad) and certified institutions are exchanged via a separate digital hub (CORV), to which the legal profession is not connected, which according to Vledder is contrary to the law.

Procedural justice

She refers to the juvenile court judges' own reflection report (published early this year), in which the judges found that parents experience a lack of procedural justice: they feel insufficiently informed, unheard and not taken seriously. But that perceived procedural injustice will not be resolved merely by 'listening better'. 'Much more will have to be done to improve the legal protection of parents and their children, for example sending decisions to all litigants simultaneously,' Vledder concludes.

Signal

Yesterday Vledder discussed the issue with Henk Naves (chair of the Raad voor de rechtspraak) and Ellen van Kalveen (chair of the national expert group of juvenile court judges). The conversation made clear that the Rechtspraak is taking this signal seriously. The expectation is that from July 2024, decisions will be sent simultaneously nationwide to the legal profession and the Raad/Jeugdbescherming. At the national consultation of family court judges (8 March), it will be proposed that until then decisions be sent to lawyers via Zivver, so that simultaneous receipt is ensured.

'Stopgap'

In any case, the Noord-Nederland District Court will adopt this approach. 'Hopefully the other courts will follow suit. It does not, however, solve the problem for (foster) parents who are not represented by a lawyer.' Vledder is pleased with this 'stopgap'. 'It naturally does not resolve the procedural injustice that parents experience in juvenile law cases, but it is a step forward.'

The article Beschikking familierechter voortaan gelijktijdig naar alle procespartijen first appeared on Mr. Online.

Categories: Rights

Legal aid lawyers alarmed by deans' supervision plans

Mr. Online (legal news) - 27 February 2024 - 10:00am

The Oost-Brabant group within the College of Delegates of the Netherlands Bar Association (NOvA) was unpleasantly surprised by the explanatory notes to the 2024 budget of the Dekenberaad (council of deans), in which the extra supervision was announced.

According to the group, the extra supervision will only increase the time pressure on the legal aid bar. The group therefore fears that interest in legal aid practice will decline. It also believes the measure expresses distrust of the goodwill of lawyers who mainly handle legal aid cases. The Brabant lawyers call the measure 'discriminatory'. They also consider extra supervision unnecessary because the Legal Aid Board (Raad voor Rechtsbijstand) already checks legal aid lawyers' files with great regularity. The group also misses any statistical substantiation of the need for extra supervision.

That the deans want supervision to be carried out by lawyers ('peer review') also rankles the group. 'We have less confidence in a peer review if that peer does not also have first-hand knowledge of the challenges of a practice that works predominantly on legal aid,' explains legal aid lawyer Caspar Brouwers (Oost-Brabant group and law firm Advocasus). At the 20 December meeting of the College of Delegates (CvA), the group submitted a paper rejecting the proposed measure.

Repressive

Caspar Brouwers says that the Netherlands Bar Association and the local bars increasingly present themselves as supervisors and less as advocates for the profession. 'The interests of the profession, certainly where the legal aid bar is concerned, are currently insufficiently represented to safeguard quality in the future,' Brouwers argues. 'The Bar takes a repressive stance and uses sticks to beat with, but that does not improve quality.'

The Oost-Brabant group feels supported in its criticism by a letter from attorney J. Hemelaar (The Hague group and Per Aspera Ad Astra Advocaten) to the General Council. In this letter, written in a personal capacity, Hemelaar questions the deans' claim that clients of legal aid lawyers are less able to judge whether they are being adequately represented. 'I believe this also applies to many clients outside subsidised legal aid,' Hemelaar writes. 'The advantage with the latter group, however, is that it can at least still be explained. With quite a number of clients in subsidised legal aid that is considerably harder, and it demands extra effort and time from the legal aid lawyer.' He calls it 'not opportune' to place subsidised legal aid under intensified supervision.

Caspar Brouwers (Advocasus)

Dragnet method

Chair Reinier Feiner of the Dutch Association of Legal Aid Lawyers (VSAN) is equally critical of the deans' supervision plans. 'Checks are fine,' Feiner says. 'But why single out legal aid lawyers for checks? That is a kind of dragnet method. Better to look at whether the courts and other chain partners have received complaints about particular lawyers, and check those files.'

Like Brouwers, he points out that legal aid files are already spot-checked afterwards by the Legal Aid Board, and that specialisation requirements apply. 'Whereas with a lawyer who works commercially, nobody checks the accuracy of the invoices. I have difficulty with that distinction. In the billable-hours category there are far more vulnerabilities, such as accepting cash, money laundering and excessive billing.'

Feiner calls the deans' plan 'a wrong signal' because it creates the impression that legal aid lawyers are not doing their work properly. That is galling, Feiner argues, because the legal aid bar, despite its limited means, contributes to the costs of financial supervision that mainly applies to the commercial bar.

Reinier Feiner (VSAN)

Justifying their right to exist

The VSAN itself has received no critical signals about quality, Feiner says. 'No. What strikes me is that the Legal Aid Board and the NOvA are more occupied with driving quality down, for example by allowing general family law lawyers to handle supervision orders.'

Referring to Minister Weerwind's plans to take supervision away from the deans and place it with the independent bar supervisor (Onafhankelijk Toezicht Advocatuur, OTA): 'It looks as if the deans are trying to justify their right to exist this way.'

Rotterdam dean Peter Hanenberg says the concerns of Brouwers and Feiner are unnecessary. 'The deans have been making office visits across the entire bar for years,' he explains. 'I see those visits as an MOT inspection. You don't take your car to the garage because it is assumed you are driving an unsound vehicle; it is checked for everyone's safety.'

He also says: 'If you have a Tesla, you don't need an emissions test.' By that remark Hanenberg means that the deans decide per firm which points to focus on. There is, Hanenberg says, therefore indeed risk-based supervision. 'The risk of a bad corporate culture with bullying behaviour is greater at large firms than in the legal aid bar, so that is what we check for at large firms. Criminal law firms are checked for organised crime infiltration and compliance with the Anti-Money Laundering and Anti-Terrorist Financing Act (WWFT). Legal aid lawyers work mainly for private individuals, and those individuals cannot properly assess the quality of the services provided. That client group needs extra protection, which is why we look at legal aid files. We look at what is going well, and what could be better.'

The story to the outside world

Hanenberg finds it a pity that the deans' plan is viewed with distrust. 'I assume we will find a great deal that is good. But then we will have a story for the outside world: that we checked the legal aid cases and that they were in order.' He also says: 'If the legal aid lawyers say the quality is in order, then surely a check may confirm that?'

It is true that the Legal Aid Board also checks legal aid files. 'But the Board checks purely whether someone has met the requirements of the legal aid assignment: whether there was a hearing, whether a petition was written, whether hours were spent, et cetera. The Board does not assess whether the lawyer made the right strategic choices. That is rightly left to the lawyer. The dean can check this, because the dean is bound by a duty of confidentiality.'

The supervision of legal aid files is carried out on behalf of the deans by people with expertise in the lawyer's field of law who are also experienced in assessing other people's files. For this, Hanenberg wants to use the peer review method that is already widely applied within the bar as a form of collegial consultation.

Peter Hanenberg (Rotterdamse Orde van Advocaten)

Specialist associations approached

It will not be easy to find those peers, Hanenberg thinks. 'Perhaps people will need extra training. The aim of this part of the deans' work plan is to see cautiously how it works out, and to arrive at a sound approach in consultation with the bar as much as possible.'

He has approached specialist associations to ask whether they want to take part. 'Peer review is a voluntary process based on confidentiality that must not be misused for supervision,' Hanenberg believes. 'We want to see how the findings can be reported to the deans without compromising the peer reviewer. We haven't worked that out yet.'

Such a report is not without consequences. 'If someone falls through the ice, they get that presented in a short report, and if the lawyer then continues down that path, measures may follow, possibly including having to answer to the disciplinary court.'

The intention is for the first office visits to take place this year. The deans have agreed to jointly make fifty office visits to legal aid lawyers in 2024, examining at least five files per visit.

The outcome of the supervision can also be used as an argument in the political debate on subsidised legal aid, Hanenberg argues. 'If quality is suffering under time pressure, we can substantiate to politicians that something must be improved in the funding of legal aid.'

Recalibrating fees

Justice Herman van der Meer (Amsterdam Court of Appeal) will for the second time chair a committee examining subsidised legal aid. He does so at the request of Minister Weerwind.

The committee is to recalibrate the fees in the subsidised legal aid system. The rates for legal aid assignments have not kept pace with inflation in recent years. Weerwind says he considers it very important that the fees reflect the time lawyers actually spend.

Herman van der Meer chaired the committee that evaluated the points system for subsidised legal aid in 2016. It concluded that legal aid lawyers often worked far more hours than they were paid for. Minister Dekker subsequently allocated an extra 36.5 million euros for legal aid in 2020 and 2021. Nevertheless, the legal aid bar is still in dire straits, which is why a parliamentary majority called on the government in October 2023 to invest heavily in legal aid. Earlier that year, the NOvA had already called for an emergency investment to compensate legal aid lawyers for inflation.

The report of the second Van der Meer committee is expected this autumn.

The article Sociaal advocaten schrikken van toezichtsplannen dekens first appeared on Mr. Online.

Categories: Rights

Podcast Episode: Open Source Beats Authoritarianism

Electronic Frontier Foundation (EFF) - news - 27 February 2024 - 9:07am

What if we thought about democracy as a kind of open-source social technology, in which everyone can see the how and why of policy making, and everyone’s concerns and preferences are elicited in a way that respects each person’s community, dignity, and importance?



Listen on Spotify Podcasts, Apple Podcasts, or subscribe via RSS.

(You can also find this episode on the Internet Archive and on YouTube.)

This is what Audrey Tang has worked toward as Taiwan’s first Digital Minister, a position the free software programmer has held since 2016. She has taken the best of open source and open culture, and successfully used them to help reform her country’s government. Tang speaks with EFF’s Cindy Cohn and Jason Kelley about how Taiwan has shown that openness not only works but can outshine more authoritarian competition wherein governments often lock up data.

In this episode, you’ll learn about:

  • Using technology including artificial intelligence to help surface our areas of agreement, rather than to identify and exacerbate our differences 
  • The “radical transparency” of recording and making public every meeting in which a government official takes part, to shed light on the policy-making process 
  • How Taiwan worked with civil society to ensure that no privacy and human rights were traded away for public health and safety during the COVID-19 pandemic 
  • Why maintaining credible neutrality from partisan politics and developing strong public and civic digital infrastructure are key to advancing democracy. 

Audrey Tang has served as Taiwan's first Digital Minister since 2016, by which time she was already known for revitalizing the computer languages Perl and Haskell, as well as for building the online spreadsheet system EtherCalc in collaboration with Dan Bricklin. In the public sector, she served on the Taiwan National Development Council’s open data committee and basic education curriculum committee and led the country’s first e-Rulemaking project. In the private sector, she worked as a consultant with Apple on computational linguistics, with Oxford University Press on crowd lexicography, and with Socialtext on social interaction design. In the social sector, she actively contributes to g0v (“gov zero”), a vibrant community focused on creating tools for civil society, with the call to “fork the government.”



Transcript

AUDREY TANG
In 2016, October, when I first became Taiwan's digital minister, I had no examples to follow because I was the first digital minister. And then it turns out that in traditional Mandarin, as spoken in Taiwan, digital, shu wei, means the same as “plural” - so more than one. So I'm also a plural minister, I'm minister of plurality. And so to kind of explain this word play, I wrote my job description as a prayer, as a poem. It's very short, so I might as well just quickly recite it. It goes like this:
When we see an internet of things, let's make it an internet of beings.
When we see virtual reality, let's make it a shared reality.
When we see machine learning, let's make it collaborative learning.
When we see user experience, let's make it about human experience.
And whenever we hear that a singularity is near, let us always remember the plurality is here.

CINDY COHN
That's Audrey Tang, the Minister of Digital Affairs for Taiwan. She has taken the best of open source and open culture, and successfully used them to help reform government in her country of Taiwan. When many other cultures and governments have been closing down and locking up data and decision making, Audrey has shown that openness not only works, but it can win against its more authoritarian competition.
I'm Cindy Cohn, the executive director of the Electronic Frontier Foundation.

JASON KELLEY
And I'm Jason Kelley, EFF's Activism Director. This is our podcast series, How to Fix the Internet.

CINDY COHN
The idea behind this show is we're trying to make our digital lives better. We spend so much time imagining worst-case scenarios, and jumping into the action when things inevitably do go wrong online but this is a space for optimism and hope.

JASON KELLEY
And our guest this week is one of the most hopeful and optimistic people we've had the pleasure of speaking with on this program. As you heard in the intro, Audrey Tang has an incredibly refreshing approach to technology and policy making.

CINDY COHN
We approach a lot of our conversations on the podcast using Lawrence Lessig’s framework of laws, norms, architecture and markets – and Audrey’s work as the Minister of Digital Affairs for Taiwan combines almost all of those pillars. A lot of the initiatives she worked on have touched on so many of the things that we hold dear here at EFF and we were just thrilled to get a chance to speak with her.
As you'll soon hear, this is a wide-ranging conversation but we wanted to start with the context of Audrey's day-to-day life as Taiwan's Minister of Digital Affairs.

AUDREY TANG
In a nutshell I make sure that every day I checkpoint my work so that everyone in the world knows not just the what of the policies made, but the how and why of policy making.
So for easily more than seven years everything that I did in the process, not the result, of policymaking, is visible to the general public. And that allows for requests, essentially - people who make suggestions on how to steer it into a different direction, instead of waiting until the end of policymaking cycle, where they have to say, you know, we protest, please scratch this and start anew and so on.
No, instead of protesting, we welcome demonstrators who demonstrate better ways to make policies, as evidenced during the pandemic, where we relied on civil society-led contact tracing and counter-pandemic methods, and for three years we've never had a single day of lockdown.

JASON KELLEY
Something just popped into my head about the pandemic since you mentioned the pandemic. I'm wondering if your role shifted during that time, or if it sort of remained the same except to focus on a slightly different element of the job in some way.

AUDREY TANG
That's a great question. So entering the pandemic, I was the minister with a portfolio in charge of open government, social innovation and youth engagement. And during the pandemic, I assumed a new role, which is the cabinet Chief Information Officer. And so the cabinet CIO usually focuses on, for example, making tax paying easier, or use the same SMS number for all official communications or things like that.
But during the pandemic, I played a role of like a Lagrange Point, right? Between the gravity centers of Privacy protection, social movement on one side and protecting the economy, keep TSMC running on the other side, whereas many countries, I would say everyone other than say Taiwan, New Zealand and a handful of other countries, everyone assumed it would be a trade-off.
Like there's a dial you'll have to, uh, sacrifice some of the human rights, or you have to sacrifice some lives, right? A very difficult choice. We refuse to make such trade-offs.
So as the minister in charge of social innovation, I worked with the civil society leaders, who themselves are the privacy advocates, to design contact tracing systems instead of relying on Google or Apple or other companies to design those. And as cabinet CIO, whenever there is a very good idea, we make sure that we turn it into production, rolling it out at a national level the next Thursday. So there's this weekly iteration that takes the best ideas from civil society and makes them work on a national level. And therefore, it is not just counter-pandemic, but also counter-infodemic. We've never had a single administrative takedown of speech during the pandemic. Yet we don't have an anti-vax political faction, for example.

JASON KELLEY
That's amazing. I'm hearing already a lot of, uh, things that we might want to look towards in the U.S.

CINDY COHN
Yeah, absolutely. I guess what I'd love to do is, you know, I think you're making manifest a lot of really wonderful ideas in Taiwan. So I'd like you to step back: what does the world look like if we really embrace openness, if we embrace these things? What does the bigger world look like if we go in this direction?

AUDREY TANG
Yeah, I think the main contribution that we made is that the authoritarian regimes for quite a while kept saying that they're more efficient, that for emerging threats, including pandemic, infodemic, AI, climate, whatever, top-down, takedown, lockdown, shutdowns are more effective. And when the world truly embraces democracy, we will be able to pre-bunk – not debunk, pre-bunk – this idea that democracy only leads to chaos and only authoritarianism can be effective. If we do more democracy more openly, then everybody can say, oh, we don't have to make those trade-offs anymore.
So, I think when the whole world embraces this idea of plurality, we'll have much more collaboration and much more diversity. We won't refuse diversity simply because it's difficult to coordinate.

JASON KELLEY
Since you mentioned democracy, I had heard that you have this idea of democracy as a social technology. And I find that really interesting, partly because all the way back in season one, we talked to the chief innovation officer for the state of New Jersey, Beth Noveck, who talked a lot about civic technology and how to facilitate public conversations using technology. So all of that is a lead-in to me asking this very basic question. What does it mean when you say democracy is a social technology?

AUDREY TANG
Yeah. So if you look at democracy as it's currently practiced, you'll see voting, for example. If every four years someone votes among, say, four presidential candidates, that's just two bits of information uploaded from each individual, and the latency is very, very long, right? Four years, two years, one year.
Again, when emerging threats happen, pandemic, infodemic, climate, and so on, uh, they don't work on a four-year schedule. They just come now, and you have to make something by next Thursday in order to counter it at its origin, right? So democracy, as currently practiced, suffers from a lack of bandwidth, so the preferences of citizens are not fully understood, and from latency, which means that the iteration cycle is too long.
And so to think of democracy as a social technology is to think about ways to make the bandwidth wider. To make sure that people's preferences can be elicited in a way that respects each community's dignity, choices and context, instead of compressing everything into this one-dimensional poll result.
We can free up the polls so that they become wiki surveys. Everybody can write those poll questions together. It can become co-creation. People can co-create a constitutional document for the next generation of AI that aligns itself to that document, and so on and so forth. And when we do this, like, literally every day, then the latency also shortens, and people can, like a radar, sense societal risks and come up with societal solutions in the here and now.

CINDY COHN
That's amazing. And I know that you've helped develop some of the actual tools, or at least helped implement them, that do this. And I'm interested in, you know, we've got a lot of technical people in our audience, like how do you build this, and what are the values that you put in them? I'm thinking about things like Polis, but I suspect there are others too.

AUDREY TANG
Yes, indeed. Polis is quite well known in that it's a kind of social media that, instead of polarizing people to drive so-called engagement or addiction or attention, automatically surfaces bridge-making narratives and statements. So only the ideas that speak to both sides, or to multiple sides, gain prominence in Polis.
And then the algorithm surfaces them to the top, so that people understand: oh, despite our seeming differences, which were magnified by mainstream and other antisocial media, there is common ground. Like 10 years ago, when UberX first came to Taiwan, the Uber drivers and taxi drivers and passengers all actually agreed on insurance, registration, and not undercutting existing meters. These are the important things.
So instead of arguing about abstract ideas, like whether it's a sharing economy or an extractive gig economy, uh, we focus, again, on the here and now, and settle the ideas in a way that's called rough consensus. Meaning that everybody, maybe not perfectly happy with it, can live with it.
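The bridge-making ranking Audrey describes can be sketched roughly in code. This is an illustrative simplification, not Polis's actual algorithm: the group names, statements, and votes below are invented, and the scoring (rank statements by their minimum agreement rate across opinion groups) is just one simple way to make cross-group ideas rise to the top.

```python
from collections import defaultdict

def group_consensus(votes_by_group):
    """Rank statements by their *minimum* agreement rate across
    opinion groups, so only ideas that speak to every side rise
    to the top.

    votes_by_group: {group_name: [(statement_id, agree_bool), ...]}
    """
    # Agreement rate per statement, per group.
    rates = defaultdict(dict)
    for group, votes in votes_by_group.items():
        tally = defaultdict(lambda: [0, 0])  # statement -> [agrees, total]
        for sid, agree in votes:
            tally[sid][0] += int(agree)
            tally[sid][1] += 1
        for sid, (agrees, total) in tally.items():
            rates[sid][group] = agrees / total

    groups = set(votes_by_group)
    ranked = []
    for sid, by_group in rates.items():
        if set(by_group) == groups:  # only statements every group voted on
            ranked.append((min(by_group.values()), sid))
    ranked.sort(reverse=True)
    return [sid for _, sid in ranked]

# Invented example echoing the UberX case: all sides agree on
# insurance, but "ban Uber" only appeals to one group.
votes = {
    "taxi_drivers": [("insurance", True), ("ban_uber", True)],
    "uber_drivers": [("insurance", True), ("ban_uber", False)],
    "passengers":   [("insurance", True), ("ban_uber", False)],
}
print(group_consensus(votes))  # "insurance" ranks above "ban_uber"
```

Using the minimum across groups, rather than the overall average, is what keeps a statement beloved by one large faction but rejected by another from dominating the results.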

CINDY COHN
I just think they're wonderful, and I love the flipping of this idea of algorithmic decision making, such that the algorithm is surfacing places of agreement (and I think it also does some mapping of places of agreement) instead of surfacing the disagreement, right?
And algorithms can be programmed in either direction. The thinking about how you build something that brings people together is, to me, just fascinating, and doubly interesting because you've actually used it in the Uber example, and I think you've used some version of that also back in the early work with the Sunflower movement as well.

AUDREY TANG
Yeah, the Uber case was 2015, and the Sunflower Movement was 2014. In 2014, the Ma Ying-jeou administration had an approval rating among citizens of less than 10%, which means that anything the administration said, the citizens ultimately didn't believe, right? And so instead of relying on traditional partisan politics, which totally broke down circa 2014, Ma Ying-jeou worked with people who came from the tech communities, and named Simon Chang from Google first as vice premier and then as premier. And then in 2016, when the Tsai Ing-wen administration began, the premier, Lin Chuan, was also an independent. So after 2014-15, we were at a new phase of our democracy, where it became normal for me to say: oh, I don't belong to any party, but I work with all the parties. That credible neutrality, this kind of bridge making across parties, became something people expect the administration to do. And again, we don't see that much of this kind of bridge-making action in other advanced democracies.

CINDY COHN
You know, I had this question, and I know that one of our supporters did as well, which is: what's your view on, you know, kind of, hackers? And by saying hackers here, I mean people with deep technical understanding. Do you think that they can have more impact by going into government than staying in private industry? Or how do you think about that? Because obviously you made some decisions around that as well.

AUDREY TANG
So my job description basically implies that I'm not working for the government. I'm just working with the government. And not for the people, but with the people. And this is very much in line with the internet governance technical community, right? The technical community within the internet governance communities kind of places ourselves as a hub between the public sector, the private sector, even the civil society, right?
So, the dot net suffix is something else. It is something that includes dot org, dot com, dot edu, dot gov, and even dot military, together into a shared fabric, so that people can find rough consensus and running code, regardless of which sector they come from. And I think this is the main gift that the hacker community gives to modern democracy: we can work on the process, and the process or the mechanism naturally fosters collaboration.

CINDY COHN
Obviously whenever you can toss rough consensus and running code into a conversation, you've got our attention at EFF because I think you're right. And, and I think that the thing that we've struggled with is how to do this at scale.
And I think the thing that's so exciting about the work that you're doing is that you really are doing a version of transparency, rough consensus, running code, and finding commonalities at a scale that I would say many people weren't sure was possible. And that's what's so exciting about what you've been able to build.

JASON KELLEY
I know that before you joined with the government, you were a civic hacker involved in something called gov zero. And I'm wondering, maybe you can talk a little bit about that and also help people who are listening to this podcast think about ways that they can sort of follow your path. Not necessarily everyone can join the government to do these sorts of things, but I think people would love to implement some of these ideas and know more about how they could get to the position to do so.

AUDREY TANG
Collaborative diversity works not just in the dot gov; if you're working in a large enough dot org or dot com, it all works the same, right? When I first discovered the World Wide Web, I learned about image tags, and the first image tag that I put up was the Blue Ribbon Campaign. And it was actually about unifying the concerns of not just librarians, but also the hosting companies and really everybody, right, regardless of their suffix. We saw their webpages turning black, with this prominent blue ribbon at the center. So by making the movement fashionable across sectors, you don't have to work in the government in order to make a change. Just open source your code, and somebody in the administration who's also a civic hacker will notice and adapt, fork, or merge your code back.
And that's exactly how Gov Zero works. In 2012, a bunch of civic hackers decided that they'd had enough of PDF files that were just image scans of budget descriptions, and things like that, which made it almost impossible for average citizens to understand what was going on with the Ma Ying-jeou administration. And so they set up forked websites.
So for each website, something dot gov dot tw, the civic hackers registered something dot g0v dot tw, which looks almost the same. So you visit a regular government website, you change your O to a zero, and this domain hack ensures that you're looking at a shadow-government version of the same website, except it's on GitHub, except it's powered by open data, except there are real interactions going on, and you can actually have a conversation about any budget item, around its visualization, with your fellow civic hackers.
And many of those projects in Gov Zero became so popular that the administration, the ministries, finally merged back their code, so that if you go to the official government website, it looks exactly the same as the civic hacker version.
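The "change your O to a zero" domain hack described above can be sketched in a few lines; the hostnames here are illustrative examples, not actual g0v project sites.

```python
def g0v_mirror(hostname: str) -> str:
    """Derive the g0v shadow-site hostname from an official
    government domain by replacing the "gov" label's O with a zero,
    e.g. something.gov.tw -> something.g0v.tw."""
    return ".".join("g0v" if label == "gov" else label
                    for label in hostname.split("."))

print(g0v_mirror("budget.gov.tw"))  # -> budget.g0v.tw
```

Because only the `gov` label changes, every official URL maps one-to-one onto its civic-hacker mirror, which is what makes the fork easy to discover and, eventually, easy for ministries to merge back.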

CINDY COHN
Wow. That is just fabulous. And for those who might be a little younger, the Blue Ribbon Campaign was an early EFF campaign where websites across the internet would put a blue ribbon up to demonstrate their commitment to free speech. And so I adore that that was one of the inspirations for the kind of work that you're doing now. And I love hearing these recent examples as well, that this is something that really you can do over and over again.

JASON KELLEY
Let’s pause for just a moment to say thank you to our sponsor. “How to Fix the Internet” is supported by The Alfred P. Sloan Foundation’s Program in Public Understanding of Science and Technology. Enriching people’s lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians.

TIME magazine recently featured Audrey Tang as one of the 100 most influential people in AI and one of the projects they mentioned is Alignment Assemblies, a collaboration with the Collective Intelligence Project policy organization that employs a chatbot to help enable citizens to weigh in on their concerns around AI and the role it should play.

AUDREY TANG
So it started as just a Polis survey of the leaders at the Summit for Democracy and the AI labs and so on, on how exactly their concerns are bridge-worthy when it comes to the three main values identified by the Collective Intelligence Project, which are participation, progress and safety. Because at the time, with GPT-4 and its effect on everybody's mind, the conversation was full of strong trade-off arguments. Like: to maximize safety, we have to, I don't know, restrict GPU purchasing across the world to put a cap on progress. Or we heard that to make open source possible, we must give up the idea of the AIs aligning themselves, and instead have the uncensored model be like a personal assistant, so that everybody has one, so that people become inoculated against deepfakes, because everybody can very easily deepfake, and so on.
And we also heard that maybe internet communication will be taken over by deepfakes, and so we will have to reintroduce some sort of real-name internet, because otherwise everybody will be a bot on the internet, and so on. So all these ideas really pushed the Overton window, right? Because before generative AI, these ideas were considered fringe.
And suddenly, at the end of March this year, those ideas gained prominent ground again. So using Polis, and using TalkToTheCity and other tools, we quickly mapped an actually overlapping consensus. So regardless of which value you come from, people generally understand that if we don't tackle the short-term risks - the interactive deepfakes, the persuasion and addiction risks, and so on - then we won't even coordinate enough to live together to see the coordination around the extinction risks a decade or so down the line, right?
So we have to focus on the immediate risks first, and that led to the safe dot ai joint statement, which I signed, and also the Mozilla joint statement on openness and safety, which I signed, and so on.
So the bridge-making AI actually enabled a sort of deep canvassing, where I can take all the sides and then make the narratives that bridge the three very different concerns. So it's not a trilemma; rather, they reinforce each other mutually. And so in Taiwan, a surprising consensus that we got from the Polis conversations and the two face-to-face day-long workshops was that people in Taiwan want the Taiwanese government to pioneer this use of trustworthy AI.
So instead of the private sector producing the first experiences, they want the public servants, exercising their caution of course, to use gen AI in the public service. But with one caveat: this must be public code. That is to say, it should be free software, open source; the way it integrates into decision making should be an assistive role; and everything needs to be meticulously documented, so that civil society can replicate it on their own personal computers and so on. And I think that's quite insightful. And therefore we're actually doubling down on the societal evaluation and certification, and we're setting up a center for that at the end of this year.

CINDY COHN
So what are some of the lessons and things that you've learned in doing this in Taiwan that you think, you know, countries around the world or people around the world ought to take back and, and think about how they might implement it?
Are there pitfalls that you might want to avoid? Are there things that you think really worked well that people ought to double down on?

AUDREY TANG
I think it boils down to two main observations. The first one is that credible neutrality and alignment with the career public service is very, very important. The political parties come and go, but a career public service is very aligned with the civic hackers' kind of thinking because they maintain the mechanism.
They want the infrastructure to work, and they want to serve people who belong to different political parties. It doesn't matter, because that's what a public service does. It serves the public. And so for the first few years of the Gov Zero movement, the projects found natural allies not just in the career public service, but also in the credibly neutral institutions in our society.
For example, our National Academy, which doesn't report to the ministers but rather directly to the president, is widely seen as credibly neutral. And civil society organizations can play such a role equally effectively if they work directly with the people, not just for the policy think tanks and so on.
So one good example may be, like, Consumer Reports in the U.S., or National Public Radio, and so on. So basically, these are the mediators that are very similar to us, the civic hackers, and we need to find allies in them. So this is the first observation. And the second observation is that you can turn any crisis that urgently needs clarity into an opportunity to build future mechanisms that work better.
So you need the civil society's trust in it, and the best way to win trust is to give trust. So by simply saying to the opposition party: everyone has the real-time API of the open data, so if you make a critique of our policy, well, you have the same data as we do. Patches welcome, send us pull requests, and so on. This takes what used to be a zero-sum or negative-sum dynamic in politics, thanks to an emergency like a pandemic or infodemic, and turns it into a co-creation opportunity. And the resulting infrastructure becomes so legitimate that no political party will dismantle it. So it becomes another part of the political institution.
So having this idea of digital public infrastructure, and asking the parliament to give it infrastructure money and investment, just like building parks and roads and highways. This is also super important.
So when you have a competent society, when we focus not just on the literacy but on the competence of everyday citizens, they can contribute to public infrastructures through civic infrastructures. So credible neutrality on one hand, and public and civic infrastructure on the other: I think these two are the most fundamental, but also the easiest-to-practice, ways to introduce this plurality idea to other polities.

CINDY COHN
Oh, I think these are great ideas. And it reminds me a little of what we learned when we started doing electronic voting work at EFF. We learned that we needed to really partner with the people who run elections.
We were aligned in that all of us really wanted to make sure that the person with the most votes was actually the person who won the election. But we started out a little adversarial, and we really had to learn to flip that around. Now that's something that our friends at Verified Voting have really figured out, and they have built some strong partnerships. But I suspect in your case it could have been a little annoying to officials that you were creating these shadow websites. I wonder, did it take a little bit of a conversation to flip them around to the situation in which they embraced it?

AUDREY TANG
I think the main intervention that I personally made, back in the days when I ran the MoEdDict, or Ministry of Education Dictionary, project in the Gov Zero movement, was that we very prominently said that although we reuse all the so-called copyright-reserved data from the Ministry of Education, we relinquish all our copyright under the then very new Creative Commons Zero, so that they cannot say that we're stealing any of the work, because obviously we're giving everything back to the public.
So by serving the public in an even more prominent way than the public service, we made ourselves not just natural allies, but kind of reverse mentors of the young people who work with cabinet ministers. And because we served the public better in some way, they could just take the entire website design, the Unicode interoperability, standards conformance, accessibility and so on, and simply tell their vendors: you know, you can merge it. You don't have to pay these folks a dime. And naturally the service improves, and they get praise from the press, and so on. And that fuels this virtuous cycle of collaboration.

JASON KELLEY
One thing that you mentioned at the beginning of our conversation that I would love to hear more about is the idea of radical transparency. Can you talk about how that shows up in your workflow in practice every day? Like, do you wake up and have a cabinet meeting and record it and transcribe it and upload it? How do you find time to do all that? What is the actual process?

AUDREY TANG
Oh, I have staff, of course. And also, nowadays, language models. So the proofreading language models are very helpful. And I actually train my own language models, because the pre-training of all the leading large language models already read from the seven years or so of public transcripts that I've published.
So they actually know a lot about me. In fact, when facilitating the chatbot conversations, one of the more powerful prompts we discovered was simply: facilitate this conversation in the manner of Audrey Tang. And then the language model actually knows what to do, because it has seen so many facilitative transcripts.

CINDY COHN
Nice! I may start doing that!

AUDREY TANG
It's a very useful elicitation prompt. And so I train my local language model. My emails, especially the English ones, are all drafted by the local model. And it has no privacy concern, because it runs in airplane mode. The entire fine-tuning and inference, everything, is done locally. And so while it does learn from my emails and so on, I always read fully before hitting send.
But this language-model integration into personal computing has already saved, I would say, 90 percent of my time during daily chores, like proofreading, checking transcripts, replying to emails and things like that. And so one of the main arguments we make in the cabinet is that this kind of use of what we call local AI, edge AI, or community open AI is actually better for discovering the vulnerabilities and flaws and so on, because the public service has a duty to ensure accuracy, and what better way to ensure the accuracy of language-model systems than integrating them into the flow of work in a way that doesn't compromise privacy and personal data protection. And so, yeah, AI is a great time saver, and we're also aligning AI as we go.
So for the other ministries that want to learn from this radical transparency mechanism and so on, we almost always sell it as a more secure and time-saving device. And then once they adopt it, they see the usefulness of getting more public input, and of having a language model digest the collective inputs and respond to the people in the here and now.

CINDY COHN
Oh, that is just wonderful because I do know that when you start talking with public servants about more public participation, often what you get is, Oh, you're making my job harder. Right? You're making more work for me. And, and what you've done is you've kind of been able to use technology in a way that actually makes their job easier. And I think the other thing I just want to lift up in what you said, is how important it is that these AI systems that you're using are serving you. And it's one of the things we talk about a lot about the dangers of AI systems, which is, who bears the downside if the AI is wrong?
And when you're using a service that is air-gapped from the rest of the internet, and it is largely being used to serve you in what you're doing, then the downside of it being wrong doesn't fall on, you know, the person who doesn't get bail. It's on you, and you're in the best position to correct it, actually recognize that there's a problem, and make it better.

AUDREY TANG
Exactly. Yeah. So I call these AI systems assistive intelligence, after assistive technology, because it empowers my dignity, right? I have this assistive tech, which is a pair of eyeglasses. It's very transparent, and if I see things wrong after putting on those eyeglasses, nobody blames the eyeglasses.
It's always the person that is empowered by the eyeglasses. But if instead I wear not eyeglasses but those VR devices that consume all the photons, upload them to the cloud for some very large corporation to calculate and then project back to my eyes, maybe with some advertisement in it and so on, then it's very hard to tell whether the decision making falls on me or on those intermediaries that basically block my eyesight and just present me an alternate reality. So I always prefer things that are like eyeglasses, or bicycles for that matter, that someone can repair themselves without violating an NDA or paying $3 million in license fees.

CINDY COHN
That's great. And open source for the win again there. Yeah.

AUDREY TANG
Definitely.

CINDY COHN
Yeah, well thank you so much, Audrey. I tell you, this has been kind of like a breath of fresh air, I think, and I really appreciate you giving us a glimpse into a world in which, you know, the values that I think we all agree on are actually being implemented and implementing, as you said, in a way that scales and makes things better for ordinary people.

AUDREY TANG
Yes, definitely. I really enjoy the questions as well. Thank you so much. Live long and prosper.

JASON KELLEY
Wow. A lot of the time we talk to folks and it's hard to get to a vision of the future that we feel positive about. And this was the exact opposite. I have rarely felt more positively about the options for the future and how we can use technology to improve things and this was just - what an amazing conversation. What did you think, Cindy?

CINDY COHN
Oh, I agree. And the thing that I love about it is, she's not just positing about the future. You know, she's telling us stories that are 10 years old about how they fixed things in Taiwan. The Uber story, and some of the other stories of the Sunflower movement. She didn't just, like, show up and say the future's going to be great. She's not just dreaming; they're doing.

JASON KELLEY
Yeah. And that really stood out to me when talking about some of the things that I expected to get more theoretical answers to. Like, what do you mean when you say democracy is a technology? And the answer is quite literally that democracy suffers from a lack of bandwidth and latency, and that the time it takes for individuals to communicate with the government can be improved in the same way that we can increase bandwidth. It was just such a concrete way of thinking about it.
And another concrete example was, you know, how do you get involved in something like this? And she said, well, we just basically forked the website of the government with a slightly different domain and put up better information until the government was like, okay, fine, we'll just incorporate it. These are such concrete things that people can sort of understand about this. It's really amazing.

CINDY COHN
Yeah, the other thing I really liked was pointing out how, you know, making government better and work for people is really one of the ways that we counter authoritarianism. She said one of the arguments in favor of authoritarianism is that it's more efficient, and it can get things done faster than a messy, chaotic, democratic process.
And she said, well, you know, we just fixed that, so that we created systems in which democracy was more efficient than authoritarianism. And she talked a lot about the experience they had during COVID, and the result of that being that they didn't have a huge misinformation problem or a huge anti-vax community in Taiwan, because the government worked.

JASON KELLEY
Yeah, that's absolutely right, and it's so refreshing to see that there are models that we can look toward, right? I mean, it feels like we're constantly sort of getting things wrong, and this was just such a great way to say: oh, here's something we can actually do that will make things better in this country, or in other countries.
Another point that was really concrete was the technology that is a way of twisting algorithms around: instead of surfacing disagreements, surfacing agreements. The Polis idea, and ways that we can make technology work for us. There was a phrase that she used, which is thinking of algorithms and other technologies as assistive. And I thought that was really brilliant. What did you think about that?

CINDY COHN
I really agree. I think that, you know, building systems that can surface agreement, as opposed to doubling down on disagreement, seems so obvious in retrospect, and this open source technology, Polis, has been doing it for a while. But I think that we really do need to think about how we build systems that help us build towards agreement and a shared view of how our society should be, as opposed to feeding polarization. I think this is a problem on everyone's mind.
And, when we go back to Larry Lessig's four pillars, here's actually a technological way to surface agreement. Now, I think Audrey's using all of the pillars. She's using law for sure. She's using norms for sure, because they're creating a shared norm around higher bandwidth democracy.
But really you know in her heart, you can tell she's a hacker, right? She's using technologies to try to build this, this shared world and, and it just warms my heart. It's really cool to see this approach and of course, radical openness as part of it all being applied in a governmental context in a way that really is working far better than I think a lot of people believe could be possible.

JASON KELLEY
Thanks for joining us for this episode of How to Fix the Internet.
If you have feedback or suggestions, we'd love to hear from you. Visit eff.org/podcast and click on listener feedback. While you're there, you can become a member, donate, maybe pick up some merch, and just see what's happening in digital rights this week and every week.
We’ve got a newsletter, EFFector, as well as social media accounts on many, many, many platforms you can follow.
This podcast is licensed Creative Commons Attribution 4.0 International, and includes music licensed Creative Commons Attribution 3.0 Unported by their creators. In this episode you heard reCreation by airtone, Kalte Ohren by Alex featuring starfrosch and Jerry Spoon, and Warm Vacuum Tube by Admiral Bob featuring starfrosch.
You can find links to their music in our episode notes, or on our website at eff.org/podcast.
Our theme music is by Nat Keefe of BeatMower with Reed Mathis.
How to Fix the Internet is supported by the Alfred P. Sloan Foundation's program in public understanding of science and technology.
I hope you’ll join us again soon. I’m Jason Kelley.

CINDY COHN
And I’m Cindy Cohn.


'Reddit signs licensing deal with AI giant for training AI models': is that allowed?

IusMentis - 27 February 2024 - 8:11am

Reddit has signed a licensing deal with a 'large AI company' for the purpose of training AI models. Tweakers reported this last week, based on a report by news agency Bloomberg. The company in question is said to be Google, whose crawlers Reddit still threatened to block in 2023. Various redditors found this very unwelcome news. Hence the question: is Reddit allowed to do this?

Tweakers adds: "The licensing deal would mean that the user-generated content on Reddit will be used to train the AI models of an unnamed company, Bloomberg reports based on insiders. The agreement is said to be worth the equivalent of over 55.5 million euros per year." The hunger for quality content to train generative AI on is enormous. Reddit is one of the largest social news aggregators, where people comment on news and other links that they share with each other ("read it"). The discussion is often of good quality, and there is also plenty of context to interpret content: comments get upvotes and downvotes, and topics are divided by subject into so-called subreddits. Grateful fodder for training AI.

The Reddit Terms of Use were amended last September. The site uses the familiar nice-sounding construction of "you remain the owner, but we get a limited license". Well, 'limited': "a worldwide, royalty-free, perpetual, irrevocable, non-exclusive, transferable, and sublicensable license to use, copy, modify, adapt, prepare derivative works of, distribute, store, perform, and display Your Content and any name, username, voice, or likeness provided in connection with Your Content in all media formats and channels now known or later developed anywhere in the world. This license includes the right for us to make Your Content available for syndication, broadcast, distribution, or publication by other companies, organizations, or individuals who partner with Reddit." In plain English, this says that Reddit may do whatever it wants, including selling your content as a dataset to other companies. And because of that 'irrevocable', you cannot simply withdraw the license either. You can of course delete your account, but the license remains granted.

In Europe, you can demand under the GDPR that your personal data no longer be processed. Removing your account, or your name from publications, is therefore certainly possible. It is defensible that your posts can also be personal data, depending on their content. On removing those, the privacy policy says: Please note, however, that the posts, comments, and messages you submitted prior to deleting your account will still be visible to others unless you first delete the specific content. After you submit a request to delete your account, it may take up to 90 days for our purge script to complete deletion. We may also retain certain information about you for legitimate business purposes and/or if we believe doing so is in accordance with, or as required by, any applicable law. I therefore read that "we may retain certain information for legitimate purposes" as a right to keep using your posts anyway, albeit without your name attached. In the context of trading data for AI training, that is a logical expectation. It also seems GDPR-compliant to me, because a post without a name attached is certainly not necessarily personal data.

Arnoud

The post 'Reddit signs licensing deal with AI giant to train AI models', is that allowed? first appeared on Ius Mentis.

EFF Statement on Nevada's Attack on End-to-End Encryption

Electronic Frontier Foundation (EFF) - news - 26 February 2024 - 8:39pm

EFF learned last week that the state of Nevada is seeking an emergency order prohibiting Meta from rolling out end-to-end encryption in Facebook Messenger for all users in the state under the age of 18. The motion for a temporary restraining order is part of a lawsuit by the state Attorney General alleging that Meta’s products are deceptively designed to keep users addicted to the platform. While we regularly fight legal attempts to limit social media access, which are primarily based on murky evidence of social media’s effects on different groups, blocking minors’ use of end-to-end encryption would be entirely counterproductive and just plain wrong.

Encryption is the most vital means we have to protect privacy, which is especially important for young people online. Yet in the name of protecting children, Nevada seems to be arguing that merely offering encryption on a social media platform that Meta knows has been used by criminals is itself illegal. This cannot be the law; in practice it would let the state prohibit all platforms from offering encryption, and such a ruling would raise serious constitutional concerns. Lawsuits like this also demonstrate the risks posed by bills like EARN IT and Stop CSAM that are now pending before Congress: state governments already are trying to eliminate encryption for all of us, and these dangerous bills would give them even more tools to do so.

EFF plans to speak up for users in the Nevada proceeding and fight this misguided effort to prohibit encryption. Stay tuned.

Categories: Transparency, Privacy, Rights

Amsterdam Law Hub turns five: 'incubator for legal innovation'

Mr. Online (legal news) - 26 February 2024 - 9:53am

The Amsterdam Law Hub, affiliated with the University of Amsterdam, has produced quite a few initiatives over the past years, such as the Vrouwenrechtswinkel (women's rights clinic), the master's course Justice Entrepreneurship and an incubator programme for start-ups. The Law Hub calls itself an 'incubator for legal innovation and services', bringing together societal partners, students, researchers and entrepreneurs 'with the aim of a more just society'. During the anniversary, an interactive workshop will be organised in collaboration with the Klimaatmuseum, a pop-up museum about the climate crisis.

Sea level rise

The workshop zooms in on the Vanuatu case, which concerns an island state threatened with disappearing beneath the waves due to sea level rise. The case revolves around the obligations countries have under international law with regard to the climate. Attendees will, in small groups and supported by artists and curators from the Klimaatmuseum, devise creative solutions to raise awareness of this topic. The groups will pitch their ideas to a jury, and the winning idea will be realised as an artwork, on display in the Klimaatmuseum from 16 May.

Protest rights

Earlier, the initiative Empowearing was launched: printing T-shirts with the rights of citizens who demonstrate. These state, for example, that arrested demonstrators have the right to a lawyer, and how long pre-trial detention may last. That way you always have those rights 'at hand' and don't need to look them up on your phone (which may already have been confiscated). Related to this is Let us speak: training and information about protest rights and how to get in touch with like-minded people.

Housing crisis

In addition, two programmes have addressed the housing crisis: making it cheaper to find a rental home, and making it easy to upload rental contracts. Two more initiatives make travelling abroad more pleasant: one displays the rights of LGBT+ people in a 'queer travel app', the other provides information about scams abroad. Finally, advice platform Raising Rights offers free legal advice on divorce to single carers and children, and the Ukraine Legal Network supports victims of international crimes.

More information about the Amsterdam Law Hub: click here.

The post Amsterdam Law Hub turns five: 'incubator for legal innovation' first appeared on Mr. Online.

Categories: Rights

Chatbot got it wrong: Air Canada must compensate traveller

IusMentis - 26 February 2024 - 8:20am
Photo by AlexandraKoch on Pixabay

The Canadian airline Air Canada must refund a passenger after a chatbot had given him incorrect information. RTL Nieuws reported this last week. The remarkable defence to the complaint was that the chatbot was a 'separate legal entity', 'responsible for its own actions'. Wait, what?

At the core was a complaint about wanting to use the bereavement fare. The concept boils down to getting a cheaper last-minute ticket for a flight to a funeral. It barely exists anymore, but Air Canada still had it in 2022: In November 2022, following the death of their grandmother, Jake Moffatt booked a flight with Air Canada. While researching flights, Mr. Moffat used a chatbot on Air Canada’s website. The chatbot suggested Mr. Moffatt could apply for bereavement fares retroactively. Mr. Moffatt later learned from Air Canada employees that Air Canada did not permit retroactive applications. The problem, then, was that the chatbot said you could claim the bereavement fare retroactively on an already booked ticket. That statement contained a link to Air Canada's "Bereavement policy", which in turn states that you are only entitled to the fare if you request it in advance.

In both the Netherlands and Canada, if an employee of a company makes a clear commitment, you as a customer may in principle rely on it. There are always exceptions, particularly if you should have known that the employee was not allowed to promise such a thing.

Air Canada took a slightly different tack: Air Canada argues it cannot be held liable for information provided by one of its agents, servants, or representatives – including a chatbot. It does not explain why it believes that is the case. In effect, Air Canada suggests the chatbot is a separate legal entity that is responsible for its own actions. This is a remarkable submission. While a chatbot has an interactive component, it is still just a part of Air Canada’s website. It should be obvious to Air Canada that it is responsible for all the information on its website. It makes no difference whether the information comes from a static page or a chatbot. The ruling moves rather quickly here from "cannot be held liable" to "is a separate legal entity", but the core is clear: Air Canada felt it could not be held to the chatbot's commitment. You should have checked the 'real' source.

The tribunal does not go along with that. How would a consumer know that one source of information is more reliable than another? Air Canada puts both on its site, so it must bear the consequences. You cannot expect people to double-check everything. An appeal to the ticket's terms and conditions is also rejected, albeit on the ground that Air Canada had not submitted those terms in the proceedings.

So I do not read the ruling as a discussion of whether the chatbot is a 'separate legal entity'. What matters is where the consumer stands when source A says something is allowed and source B says it is not. What counts is the source the consumer relied on, because with a clear commitment you do not have to double-check it. It raises the philosophical question of whether the complainant would have been entitled to a partial refund had he consulted the chatbot afterwards, having first bought an expensive ticket because the Bereavement Policy said so.

The tribunal explicitly notes that it sees no difference between a static web page and a chatbot. I see it differently: the impression a chatbot creates is somehow more serious, more specific. A web page says how things are in general. A chatbot helps you with your problem; you have stated your specific facts and you ask for advice about your situation. What comes out then is something that applies to you, and that could well differ from the general rule.

I am somewhat worried that rulings like this will lead to chatbots opening every conversation with a disclaimer that you must double-check everything they say, and that they cannot make binding commitments.

Arnoud

The post Chatbot got it wrong: Air Canada must compensate traveller first appeared on Ius Mentis.

Mark your calendar: open day at the Alkmaar court on 23 March

Mr. Online (legal news) - 25 February 2024 - 8:07am

Do you ever think about a future career at a court? During the Kom Binnen Bij Bedrijven Dagen (open company days) in Noord-Holland, you can walk right into the Noord-Holland district court. The court says: "You can attend two re-enacted hearings, take a guided tour of the cell complex with the police, ask a judge and a juvenile judge questions, and take a selfie in a real robe."

Want to know more? Look here.

The post Mark your calendar: open day at the Alkmaar court on 23 March first appeared on Mr. Online.

Categories: Rights

EFF Urges Ninth Circuit to Reinstate X’s Legal Challenge to Unconstitutional California Content Moderation Law

Electronic Frontier Foundation (EFF) - news - 23 February 2024 - 10:06pm

The Electronic Frontier Foundation (EFF) urged a federal appeals court to reinstate X’s lawsuit challenging a California law that forces social media companies to file reports to the state about their content moderation decisions, in particular with respect to five controversial issues. This is an unconstitutional intrusion into platforms’ right to curate hosted speech free of government interference.

While we are enthusiastic proponents of transparency and have worked, through the Santa Clara Principles and otherwise, to encourage online platforms to provide information to their users, we see a clear threat in state mandates. Indeed, the Santa Clara Principles themselves warn against governments turning these voluntary standards into mandates. California’s law is especially concerning because it appears aimed at coercing social media platforms to moderate user posts more actively.

In a brief filed with the U.S. Court of Appeals for the Ninth Circuit, we asserted—as we have repeatedly in the face of state mandates around the country about what speech social media companies can and cannot host—that allowing California to interject itself into platforms’ editorial processes, in any form, raises serious First Amendment concerns.

At issue is California A.B. 587, a 2022 law requiring large social media companies to semiannually report to the state attorney general detailed information about the content moderation decisions they make and, in particular, with respect to hot button issues like hate speech or racism, extremism or radicalization, disinformation or misinformation, harassment, and foreign political interference.

A.B. 587 requires companies to report “detailed descriptions” of their content moderation practices, both generally and for each of these categories, and also to report detailed information about all posts flagged as belonging to any of those categories, including how content in these categories is defined, how it was flagged, how it was moderated, and whether the action was appealed. Companies can be fined up to $15,000 a day for failing to comply.

X, the social media company formerly known as Twitter, sued to overturn the law, claiming correctly that it violates its First Amendment right against being compelled to speak. A federal judge declined to put the law on temporary hold and dismissed the lawsuit.

We agree with X and urge the Ninth Circuit to reverse the lower court. The law was intended to be, and is operating as, an informal censorship scheme to pressure online intermediaries to moderate user speech, which the First Amendment does not allow.

It’s akin to requiring a state attorney general or law enforcement to be able to listen in on editorial board meetings at the local newspaper or TV station, a clear interference with editorial freedom. The Supreme Court has consistently upheld this general principle of editorial freedom in a variety of speech contexts. There shouldn’t be a different rule for social media.

From a legal perspective, the issue before the court is what degree of First Amendment scrutiny applies to the law. The district court found that the law need only be justified and not unduly burdensome to comply with, a low level of review known as Zauderer scrutiny that is reserved for compelled factual and noncontroversial commercial speech. In our brief, we urge that, as a law that both intrudes upon editorial freedom and disfavors certain categories of speech, it must survive the far more rigorous strict First Amendment scrutiny. Our brief sets out several reasons why strict scrutiny should apply.

Our brief also distinguishes A.B. 587’s speech compulsions from ones that do not touch the editorial process such as requirements that companies disclose how they handle user data. Such laws are typically subject to an intermediate level of scrutiny, and EFF strongly supports such laws that can pass this test.

A.B. 587 says X and other social media companies must report to the California Attorney General whether and how they curate disfavored and controversial speech, and must then adhere to those statements or face fines. As a practical matter, this requirement is unworkable: content moderation policies are highly subjective, constantly evolving, and subject to numerous influences.

And as a matter of law, A.B. 587 interferes with platforms’ constitutional right to decide whether, how, when, and in what way to moderate controversial speech. The law is a thinly veiled attempt to coerce sites to remove content the government doesn’t like.

We hope the Ninth Circuit agrees that’s not allowed under the First Amendment.

Categories: Transparency, Privacy, Rights

EFF Opposes California Initiative That Would Cause Mass Censorship

Electronic Frontier Foundation (EFF) - news - 23 February 2024 - 6:37pm

In recent years, many proposed laws have purported to reduce “harmful” content on the internet, especially for kids. Some have good intentions. But the fact is, we can’t censor our way to a healthier internet.

When it comes to online (or offline) content, people simply don’t agree about what’s harmful. And people make mistakes, even in content moderation systems with extensive human review and appropriate appeals. These systems get worse when automated filters are brought into the mix, as increasingly occurs when moderating content at the vast scale of the internet.

Recently, EFF weighed in against an especially vague and poorly written proposal: California Ballot Initiative 23-0035, written by Common Sense Media. It would allow plaintiffs to sue an online information provider for damages of up to $1 million if it violates “its responsibility of ordinary care and skill to a child.”

We sent a public comment to California Attorney General Rob Bonta regarding the dangers of this wrongheaded proposal. While the AG’s office does not typically take action for or against ballot initiatives at this stage of the process, we wanted to register our opposition to the initiative as early as we could.

Initiative 23-0035 would result in broad censorship via a flood of lawsuits claiming that all manner of content online is harmful to a single child. While it is possible for children (and adults) to be harmed online, Initiative 23-0035’s vague standard, combined with extraordinarily large statutory damages, will severely limit access to important online discussions for both minors and adults. Many online platforms will censor user content in order to avoid this legal risk.

The following are just a few of the many areas of culture, politics, and life where people have different views of what is “harmful,” and where this ballot initiative thus could cause removal of online content:

  • Discussions about LGBTQ life, culture, and health care.
  • Discussions about dangerous sports like tackle football, e-bikes, or sport shooting.
  • Discussions about substance abuse, depression, or anxiety, including conversations among people seeking treatment and recovery.

In addition, the proposed initiative would lead to mandatory age verification. It’s wrong to force someone to show ID before they go online to search for information. It eliminates the right to speak or to find information anonymously, for both minors and adults.

This initiative, with its vague language, is arguably worse than the misnamed Kids Online Safety Act, a federal censorship bill that we are opposing. We hope the sponsors of this initiative choose not to move forward with this wrongheaded and unconstitutional proposal. If they do, we are prepared to oppose it.

You can read EFF’s full letter to A.G. Bonta here.

Categories: Transparency, Privacy, Rights

Bankruptcy trustee has sixteen million cucumbers on offer

Mr. Online (legal news) - 23 February 2024 - 2:17pm

Bankruptcy trustee Souren (of Hoens & Souren, Zoetermeer) is winding up the bankruptcy of the owners of two greenhouse complexes in Steenbergen and Made. They ran into trouble when sharply risen prices left them unable to pay their energy contract, and they were declared bankrupt at the end of 2022.

Bankruptcy fraud

While winding up the bankruptcy, Souren came across numerous irregularities. Just before the bankruptcy, for example, the lease price of the greenhouses was halved, and there is a tangle of private limited companies. Souren suspects bankruptcy fraud of eleven million euros, according to AD's report of summary proceedings held on 22 February. With these proceedings, Souren wants to get the current leaseholder out of the greenhouses.

Opnieuw beplant

But those greenhouses are currently full of cucumber and bell pepper plants. Souren on LinkedIn: "To my considerable surprise, despite repeated notice of my investigation into the legal validity of the lease, the full 22 hectares were replanted in December 2023 without any consultation with me."
The estimated cost of "this remarkable action" is over two million euros. If the judge in the summary proceedings orders eviction, a solution will have to be found for the vegetables, or this investment will be lost (Souren calculated that the greenhouses in Steenbergen alone could grow some sixteen to seventeen million cucumbers).

Creative solutions

Because he himself knows as much about growing vegetables "as a certain politician from Venlo knows about Mozart", Souren has already posted an appeal on LinkedIn. In it, he asks greenhouse growers and others with relevant expertise whether they see a way, despite the estate's constraints, to "profit from this predicament of the trustee. All serious creative solutions will be considered."

The ruling in the summary proceedings is due by 14 March at the latest.

The post Bankruptcy trustee has sixteen million cucumbers on offer first appeared on Mr. Online.

Categories: Rights

As India Prepares for Elections, Government Silences Critics on X with Executive Order

Electronic Frontier Foundation (EFF) - news - 23 February 2024 - 12:55pm

It is troubling to see that the Indian government has issued new demands to X (formerly Twitter) to remove accounts and posts critical of the government and its recent actions. This especially bears watching as India prepares for general elections this spring and concerns grow about the government’s manipulation of social media critical of it.

On Wednesday, X’s Global Government Affairs account (@GlobalAffairs) tweeted:

The Indian government has issued executive orders requiring X to act on specific accounts and posts, subject to potential penalties including significant fines and imprisonment. 

In compliance with the orders, we will withhold these accounts and posts in India alone; however, we disagree with these actions and maintain that freedom of expression should extend to these posts.

Consistent with our position, a writ appeal challenging the Indian government's blocking orders remains pending. We have also provided the impacted users with notice of these actions in accordance with our policies.

Due to legal restrictions, we are unable to publish the executive orders, but we believe that making them public is essential for transparency. This lack of disclosure can lead to a lack of accountability and arbitrary decision-making.

India’s general elections are set to take place in April or May and will elect 543 members of the Lok Sabha, the lower house of the country’s parliament. Since February, farm unions in the country have been striking for floor pricing (also known as a minimum support price) for their crops. While protesters have attempted to march to Delhi from neighboring states, authorities have reportedly barricaded city borders, and two neighboring states ruled by the governing Bharatiya Janata Party (BJP) have deployed troops in order to stop the farmers from reaching the capital.

According to reports, the accounts locally withheld by X in response to the Indian government’s orders are critical of the BJP, and some accounts that supported or merely covered the farmers’ protests have also been withheld. Several account holders have identified themselves as being among those notified by X, and users have identified many other affected accounts.

This isn’t the first time that the Indian government has gone after X users. In 2021, when the company—then called Twitter—was under different leadership, it suspended 500 accounts, then first reversed its decision, citing freedom of speech, and later re-suspended the accounts, citing compliance with India’s Information Technology Act. And in 2023, the company withheld 120 accounts critical of the BJP and Prime Minister Narendra Modi.

This is exactly the type of censorship we feared when EFF previously criticized the ITA’s rules, enacted in 2021, that force online intermediaries to comply with strict removal time frames under government orders. The rules require online intermediaries like X to remove restricted posts within 36 hours of receiving notice. X can challenge the order—as they have indicated they intend to—but the posts will remain down until that challenge is fully adjudicated.

EFF is also currently fighting back against efforts related to an Indian court order that required Reuters news service to de-publish one of its articles while a legal challenge to it is considered by the courts. This type of interim censorship is unauthorized in most legal systems. Those involved in the case have falsely represented to others who wrote about the Reuters story that the order applied to them as well.

Categories: Transparency, Privacy, Rights

Prosecutors working anonymously in 81 cases due to organised crime threats

Mr. Online (legal news) - 23 February 2024 - 10:58am

Public prosecutors can submit a request to a committee for reduced recognisability, or full anonymity, in criminal cases that involve security risks. There must be a conceivable or concrete threat.
The committee was set up in the autumn of 2019, shortly after the murder of lawyer Derk Wiersum, who was assisting crown witness Nabil B. in the Marengo trial. Nabil B.'s brother had already been murdered; confidant Peter R. de Vries would follow later.

A clear need

Since the committee was established, the option of (partial) anonymisation has been used in 81 cases. "That figure very clearly shows the need among prosecutors to be able to do their work less recognisably," said Amsterdam press prosecutor Katelijne den Hartog in the EenVandaag broadcast. "Those are high numbers. Sadly, it shows that prosecutors genuinely find this necessary, given all the grim situations: the murders that have been committed, the threats that have taken place."

Name in a safe

In 54 cases, public prosecutors work under a number. Their name appears in no document in the case file; instead, the file contains a number that corresponds to the name of the prosecutor concerned. The names of these prosecutors are kept in a safe at the examining magistrate and the Public Prosecution Service.
In 21 cases, public prosecutors asked not to be shown recognisably during a public court hearing. In such a case, the Public Prosecution Service requests the court not to film the prosecutor or make audio recordings. In six cases, prosecutors asked both to work under a number and not to appear recognisably on camera.
Enquiries with the Council for the Judiciary revealed that in three cases, no filming at all was ultimately allowed during the hearing.

A serious situation

Outgoing Minister for Legal Protection Franc Weerwind speaks in the broadcast of a serious situation. The judiciary is a foundation of our democratic rule of law, and public prosecutors and judges must be able to do their work in complete freedom and feel safe, the minister stresses. "Of course, through the Bewaken en Beveiligen (protection and security) system, we make sure we keep standing behind this magistracy, both the sitting and the standing magistracy, the prosecutors and the judges. We make sure they can keep doing their work. At the same time, we must quash the threat posed by these serious criminals."

The post Prosecutors working anonymously in 81 cases due to organised crime threats first appeared on Mr. Online.

Categories: Rights

Can my employer force me to install Microsoft software on my BYOD device?

IusMentis - 23 February 2024 - 8:12am
“my workstations – windows laptop” by anotherjesse is licensed under CC BY-NC-SA 2.0

A reader asked me: "My employer wants to change our organisation's BYOD policy so that work can only be done on laptops running entirely on Microsoft products (Windows 11, Office 365, etc.). I have conscientious objections to using software from Big Tech companies and have worked with open source without problems for 25 years. How strong is my position if I want to object to this policy change?" I have to be blunt now: not strong at all. The employer chooses the tools and determines how the work is done. Conscientious objections almost always mean you will have to resign. Demanding that your employer organise the work differently because of your conscientious objections is possible only very occasionally, for a core aspect of a religious conviction, and only if a realistic alternative exists.

This concerns a bring-your-own-device policy, under which employees may use their own laptop with their own software for work tasks. That is allowed, and in such a free setting it is perfectly fine for you to do so with a laptop running FreeBSD and LibreOffice instead of Windows 11 and Office 365.

But even if this has been the policy for several years, the employer remains entitled to change its mind and move to a more controlled environment. Good employment practice does require some advance notice and a transition period: not announcing at 13:00 on Friday that everyone must be on Windows 11 by Monday, or that the IT department will come by at 9:00 with an installation CD.

On the other hand, an employee cannot be obliged to bring their own laptop to work. BYOD in this form is a choice, and that choice does not imply that you must keep using that device forever. So as an employee with conscientious objections, you can withdraw from the BYOD arrangement and ask your employer for a laptop. That laptop will, of course, come with Windows 11 and Office 365.

Arnoud

This is an update of a question from the comments on a 2017 blog post. I have changed my mind since that discussion.

The post Can my employer force me to install Microsoft software on my BYOD device? first appeared on Ius Mentis.

Is the Justice Department Even Following Its Own Policy in Cybercrime Prosecution of a Journalist?

Electronic Frontier Foundation (EFF) - news - 23 February 2024 - 1:38am

Following an FBI raid of his home last year, the freelance journalist Tim Burke has been arrested and indicted in connection with an investigation into leaks of unaired footage from Fox News. The raid raised questions about whether Burke was being investigated for First Amendment-protected journalistic activities, and EFF joined a letter calling on the Justice Department to explain whether and how it believed Burke had actually engaged in wrongdoing. Although the government has now charged Burke, these questions remain, including whether the prosecution is consistent with the DOJ’s much-vaunted policy for charging criminal violations of the Computer Fraud and Abuse Act (CFAA).

The indictment centers on actions by Burke and an alleged co-conspirator to access two servers belonging to a sports network and a television livestreaming service respectively. In both cases, Burke is alleged to have used login credentials that he was not authorized to use, making the access “without authorization” under the CFAA. In the case of the livestream server, he is also alleged to have downloaded a list of unique, but publicly available URLs corresponding to individual news networks’ camera feeds and copied content from the streams, in further violation of the CFAA and the Wiretap Act. However, in a filing last year seeking the return of devices seized by the FBI, Burke’s lawyers argued that the credentials he used to access the livestream server were part of a “demo” publicly posted by the owner of the service, and therefore his use was not “unauthorized.”

Unfortunately, concepts of authorization and unauthorized access in the CFAA are exceedingly murky. EFF has fought for years—with some success—to bring the CFAA in line with common sense notions of what an anti-hacking law should prohibit: actually breaking into private computers. But the law remains vague, too often allowing prosecutors and private parties to claim that individuals knew or should have known what they were doing was unauthorized, even when no technical barrier prevented them from accessing a server or website.

The law’s vagueness is so apparent that in the wake of Van Buren v. United States, a landmark Supreme Court ruling overturning a CFAA prosecution, even the Justice Department committed to limiting its discretion in prosecuting computer crimes. EFF felt that these guidelines could have gone further, but we held out hope that they would do some work in protecting people from overbroad use of the CFAA.

Mr. Burke’s prosecution shows the DOJ needs to do more to show that its charging policy prevents CFAA misuse. Under the guidelines, the department has committed to bringing CFAA charges only in specific instances that meet all of the following criteria:

  • the defendant’s access was not authorized “under any circumstances”
  • the defendant knew of the facts that made the access without authorization
  • the prosecution serves “goals for CFAA enforcement”

If Mr. Burke merely used publicly available demo credentials to access a list of public livestreams that were themselves accessible without a username or password, the DOJ would be hard-pressed to show both that the access was unauthorized under any circumstances and that he knew it was.

This is only one of the concerning aspects of the Burke indictment. In recent years, there have been several high-profile incidents involving journalists accused of committing computer crimes in the course of their reporting on publicly available material. As EFF argued in an amicus brief in one of these cases, vague and overbroad applications of computer crime laws threaten to chill a wide range of First Amendment protected activities, including reporting on matters of public interest. We’d like to see these laws—state and federal—be narrowed to better reflect how people use the Internet and to remove the ability of prosecutors to bring charges where the underlying conduct is nothing more than reporting on publicly available material.

Related Cases: Van Buren v. United States
Categories: Transparency, Privacy, Rights

NSA Spying Shirts Are Back Just In Time to Tell Congress to Reform Section 702

Electronic Frontier Foundation (EFF) - news - 22 February 2024 - 7:43pm

We’ve been challenging the National Security Agency's mass surveillance of ordinary people since we first became aware of it nearly twenty years ago. Since then, tens of thousands of supporters have joined the call to fight what became Section 702 of the FISA Amendments Act, a law which was supposed to enable overseas surveillance of specific targets, but has become a backdoor way of mass spying on the communications of people in the U.S. Now, Section 702 is up for its first major renewal since it was last approved in 2018, and we need to pull out all the stops to make sure it is not renewed without massive reforms and increased transparency and oversight.

[Image: shirt design reading "Stop NSA's mass surveillance," with the EFF logo below.]

Section 702 is up for renewal, so we decided our shirts should reflect the ongoing fight. For the first time in a decade, our popular NSA Spying shirts are back, with an updated EFF logo and design. The image of the NSA's glowering, red-eyed eagle using his talons to tap into your data depicts the collaboration of telecommunication companies with the NSA, a reference to our Hepting v. AT&T and Jewel v. NSA warrantless wiretapping cases. Every purchase helps EFF’s lawyers and activists stop the spying and unplug Big Brother.

Get your shirt in our shop today

Wear this t-shirt to proudly let everyone know that it’s time to rein in mass surveillance. And if you haven’t yet, let your representatives know today to Stop the Spying. 

EFF is a member-supported nonprofit and we value your contributions deeply. Financial support from people like you has allowed EFF to educate the public, reach out to lawmakers, organize grassroots action, and challenge threats to digital freedom at every turn.  Join the cause now to fight government secrecy and end illegal surveillance!

EFF is a U.S. 501(c)(3) organization and donations are tax deductible to the full extent provided by law.

Categories: Transparency, Privacy, Rights

TDM: Poland challenges the rule of EU copyright law

International Communia Association - 22 February 2024 - 1:59pm

This article was first published on the Kluwer Copyright Blog on 20 February 2024.

When life gives you lemons, make lemonade. This must have been the key insight at the Polish Culture and National Heritage Ministry when the new administration took over and discovered that more than 2.5 years after the implementation deadline, Poland still had to implement the provisions of the 2019 Copyright in the Digital Single Market Directive into national law. So how do you make lemonade out of the fact that you are the only EU Member State without an implementation? You claim that the delay allows you to propose a better implementation.

In this particular case, the government claims that the delay allowed it to properly consider the impact of generative AI on copyright and come to the conclusion that training generative AI systems on copyrighted works does not in fact fall within the scope of the text and data mining exceptions contained in the directive. From the explanatory memorandum accompanying the draft implementation law published on Thursday last week for public consultation (all quotes below are own translations from the Polish original):

The implementation of the directive now, in 2024, dictates that we refer here to the issue of artificial intelligence and the question of whether text and data mining within the meaning of the directive also includes the possibility of reproducing works for the purpose of machine learning. Undoubtedly, at the time the directive was adopted in 2019, the capabilities of artificial intelligence were not as recognizable as they are today, when “works” with artistic and commercial value comparable to real works, i.e., man-made, are beginning to be created with the help of this technology. Thus, it seems fair to assume that this type of permitted use was not conceived for artificial intelligence. An explicit clarification is therefore introduced that the reproduction of works for text and data mining cannot be used to create generative models of artificial intelligence.

This “explicit clarification” can be found in the text of the proposed implementation for both articles 3 and 4 of the CDSM directive. The article 3 implementation states that cultural heritage institutions and academic research organizations…

may reproduce works for the purpose of text and data mining for scientific research, with the exception of the creation of generative models of artificial intelligence, if these activities are not performed for direct or indirect financial gain.

The same exception to the exception can also be found in the implementation of the general text and data mining exception:

It is allowed to reproduce distributed works for the purpose of text and data mining, except for the creation of generative artificial intelligence models, unless otherwise stipulated by the authorized party.

It is worth stressing that the language quoted above comes from the public-consultation version of the implementation law and is thus not final. It also seems clear that this language has not been widely consulted within the Polish government, as it clearly contradicts efforts undertaken by other parts of the government. Still, it is worth taking a closer look at the rationale behind this implementation, assessing its conformity with the provisions of the directive and the overall impact of the proposed approach.

A flawed rationale

First of all, while it is understandable that lawmakers seek more clarity about the relationship between the EU copyright framework and the use of copyrighted works for training AI models, the assumption that the TDM exceptions were “not conceived for artificial intelligence” is simply wrong. While there is little publicly available documentation of what lawmakers had in mind when they agreed on the structure of the TDM exceptions, what is available makes it clear that the development of artificial intelligence was explicitly factored into the discussions. Both the European Parliament statement and the European Commission’s explainer of the directive, published after the adoption of the directive in March 2019, specifically highlight that the TDM exception in Article 4 was introduced “in order to contribute to the development of data analytics and artificial intelligence”.

If there was any doubt whether the exception was conceived to facilitate the development of generative artificial intelligence, this relationship was further clarified in March 2023 (at a time when the impact of generative AI was widely recognized). In response to a Parliamentary question suggesting that “The [CDSM] Directive does not address this particular matter”, Commissioner Breton pointed out that the TDM exceptions do in fact “provide balance between the protection of rightholders including artists and the facilitation of TDM, including by AI developers”.

Finally, the upcoming Artificial Intelligence Act — which has been supported by the Polish government — contains a provision requiring developers of generative AI systems to “put in place a policy to respect Union copyright law in particular to identify and respect, including through state of the art technologies, the reservations of rights expressed pursuant to Article 4(3) of [the CDSM] Directive”. In addition, the AI Act contains a recital (60i) that explains the interaction between the training of generative AI systems and the exceptions contained in Articles 3 and 4 of the copyright directive.

All of this makes it clear that “now, in 2024” the TDM exceptions as introduced in 2019 do in fact provide the framework for the use of copyrighted works for the purpose of training generative AI systems, even though some stakeholders would much prefer that this was not the case.

Compliance with the Directive

It is also clear that any attempt to exclude from the scope of the TDM provision the reproductions made in the context of training generative AI models would, prima facie, result in a non-compliant implementation. Defined in Article 2(2) as “any automated analytical technique aimed at analyzing text and data in digital form in order to generate information which includes, but is not limited to, patterns, trends and correlations”, the term must be considered as an autonomous concept of EU law that cannot be modified by Member States in line with political considerations. As outlined above, there is a broad consensus that the concept of text and data mining includes the training of AI models. Even if the Polish Ministry of Culture and National Heritage does not wish this to be the case, it must still implement the Directive without changing a core concept introduced in the Directive.

Expected impact

While we are waiting for the consultation process to play out, it is instructive to consider what the consequences would be should the TDM exceptions be implemented as proposed by the Ministry of Culture and National Heritage. By excluding the creation of generative artificial intelligence from the scope of both TDM exceptions, Polish copyright law would remove any statutory basis for the use of copyrighted works in building generative AI models. AI developers would instead need permission from every rightsholder whose works appear in their training data. Given the amounts of copyrighted works required to train the current generation of AI models (often measured in the billions of individual works), obtaining such permissions would likely be impossible for anyone but the most well-resourced companies, and out of reach for smaller companies and public efforts (such as the Polish open PLLuM language model) that lack the resources to secure them.

What is especially stunning about the Polish implementation proposal is that it excludes the creation of AI models not only from the scope of the Article 4 exception (which applies to commercial AI developers) but also from the scope of the Article 3 exception (which is designed to enable non-profit scientific research), which seems particularly short-sighted. The implementation proposal should therefore be read as a misguided attempt to hinder any development or use of generative AI models in Poland.

At this point, it seems useful to recall the key balances inherent in the EU’s regulatory framework for the use of copyrighted works in AI training. They form the basis of claims by the Commission and others that the EU has a uniquely balanced approach to this thorny issue. Taken together, the TDM provisions address four key concerns: (1) they limit permission to use copyrighted works as training data to works that are lawfully accessible; (2) they privilege non-profit scientific research; (3) they ensure that creators and other rights holders can exclude their works from being used to train generative AI systems; and (4) they ensure that works that are not actively managed by their rights holders can be used to train AI models.

Excluding the training of generative AI from this balanced arrangement may please some creators and rights holders, but it also pushes AI back into a legal gray area. It also seems incompatible with the provisions of the AI Act, which situates the training of generative AI models within the broader concept of TDM, and which will be directly applicable in Poland.

What is needed, instead of efforts to undermine the existing framework, are measures to ensure that the current approach can work in practice. The new copyright provisions in the AI Act are an important step in this direction, but they need to be complemented by a public infrastructure to facilitate opt-outs and by measures aimed at ensuring fair licensing arrangements between rights holders and AI developers.
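One building block for such opt-out infrastructure already exists: the W3C community group's TDM Reservation Protocol (TDMRep), which lets a server signal an Article 4(3) rights reservation via a `tdm-reservation` HTTP header (value `1` means rights are reserved) or a `/.well-known/tdmrep.json` file. A minimal sketch of how a crawler might interpret the header signal; the function name is illustrative, and a real crawler would also check robots.txt and the well-known file:

```python
def tdm_reservation_applies(headers: dict) -> bool:
    """Return True if the response headers reserve TDM rights under the
    W3C TDM Reservation Protocol, i.e. the Article 4(3) CDSM opt-out
    applies and mining this work would require a licence.

    HTTP header names are case-insensitive, so normalize before lookup.
    An absent header is treated as "no reservation expressed".
    """
    normalized = {name.lower(): value for name, value in headers.items()}
    return normalized.get("tdm-reservation", "0").strip() == "1"
```

A crawler would call this on the headers of each fetched page and skip any page for which it returns `True`; the protocol also defines a `tdm-policy` header pointing to licensing terms, which a compliant miner could follow up on instead of simply skipping the work.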

The post TDM: Poland challenges the rule of EU copyright law appeared first on COMMUNIA Association.
