
Weerwind puts €26 million into the legal aid bar

Mr. Online (legal news) - 9 November 2023 - 7:30am

In the final week before the election recess, the Dutch House of Representatives hurriedly adopted the Sneller motion, which urged an immediate financial injection for the legal aid bar, which is struggling to keep its head above water. The motion (signed by D66, CDA, SP, PvdA/GroenLinks, PvdD and ChristenUnie) pointed out that the fee indexation of 0.67% lagged far behind consumer price index inflation of 5.6%.

Important work

Minister for Legal Protection Franc Weerwind (D66) is now acting on this request with a one-off subsidy. According to him, the compensation thereby aligns better “with the costs they incur in their important work.” The minister acknowledges that inflation has hit society as a whole, and therefore legal aid providers too. “I have heard the call in the Sneller motion loud and clear. I want to support this profession wherever possible.”

Directly into the account

The compensation will be paid out automatically via the Legal Aid Board (Raad voor Rechtsbijstand) and therefore does not need to be applied for separately. The amount paid depends on the number of legal aid certificates (toevoegingen), duty-lawyer call-outs (piketmeldingen) and additional hours in the period between 1 October 2022 and 1 October 2023. In the accompanying letter to Parliament, Weerwind works through an example: “50 regular certificates x €46.32 + 10 light advice certificates x €11.33 + 20 duty call-outs x €15.48 + 100 granted additional hours x €5.55 = €3,293.90 (excl. VAT) in compensation.”
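The minister's worked example is straightforward per-item arithmetic, which a short sketch can reproduce (the English item names are illustrative; only the rates and counts come from the letter to Parliament):

```python
# Per-item compensation rates (in euros, excl. VAT) quoted in the letter.
RATES = {
    "regular_certificate": 46.32,
    "light_advice_certificate": 11.33,
    "duty_callout": 15.48,
    "extra_hour": 5.55,
}

def compensation(counts: dict) -> float:
    """Total compensation for the given item counts, rounded to cents."""
    return round(sum(RATES[item] * n for item, n in counts.items()), 2)

# The example from the letter to Parliament:
example = {
    "regular_certificate": 50,
    "light_advice_certificate": 10,
    "duty_callout": 20,
    "extra_hour": 100,
}
print(compensation(example))  # 3293.9
```

(For real payment systems, exact decimal arithmetic such as Python's `decimal` module would be preferable to floats; floats suffice for checking the published sum.)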

Committee

The emergency investment is a one-off, but it comes with the minister's promise to work towards legal aid fees that better match actual practice in the future. Before the end of the year, an “independent committee” will be set up to investigate how much time different types of cases take on average, and whether the fees need adjustment.


Categories: Rights

‘Reconsider the Social Security Enforcement Bill’

Council for the Judiciary warns: coherence with other new legislation is lacking

The Council for the Judiciary (Raad voor de rechtspraak) has serious concerns about the quality of the Social Security Enforcement Bill (wetsvoorstel Handhaving sociale zekerheid). It lacks coherence with the other pending bills in the same field, which may have negative consequences. In a legislative advisory opinion published today, the Council therefore calls on the caretaker cabinet to reconsider the current bill.

Purpose and necessity

The bill is meant to enable implementing bodies to impose better-fitting sanctions when enforcing rules around benefits, among other things. The Central Appeals Tribunal (Centrale Raad van Beroep, CRvB) responded to the preliminary draft at an early stage and noted that it remains unclear why the current system of measures and fines no longer suffices. The CRvB also questioned the purpose and necessity of an administrative fine, and whether it is desirable for an implementing body to be responsible for both granting and enforcing benefits. The bill barely addresses this criticism.

Recovery

Now the Council for the Judiciary, too, is questioning the bill. For instance, the recovery of overpaid or wrongly granted benefits is not included in it. That is remarkable, because recovery is regarded as an important part of enforcement. This subject is regulated in another (private member's) bill, the Tailored Recovery Bill (wetsvoorstel Maatwerk bij terugvordering). This lack of coherence surprises the Council and exemplifies its concern that consistency is missing. The Council welcomes the fact that various bills are being presented in the field of social security that fit the wish for a more ‘human dimension’ in implementing legislation. But the bigger picture is insufficiently in view, and existing regulation has been lost sight of. The Council also wonders whether these bills would not be better combined into a single proposal.

Explanatory interview

Finally, the Council also has reservations about the so-called explanatory interview (toelichtingsgesprek) proposed in the bill. Before an implementing body decides on imposing a sanction, the person concerned is invited to such an interview. The bill makes an exception for fines under 340 euros, presumably on the assumption that an explanation is unnecessary for a ‘low’ amount. But for someone on social assistance this is usually a high amount, possibly one they cannot cough up. Moreover, an interview helps prevent disputes and contributes to trust in government. In the Council's view, implementing bodies should be obliged to inform benefit recipients actively and carefully about their rights and obligations in advance. That way, recipients can be kept from failing to meet their obligations, making an explanatory interview after the fact unnecessary.

Categories: Rights

Speaking Freely: David Kaye

David Kaye is a clinical professor of law at the University of California, Irvine, the co-director of the university’s Fair Elections and Free Speech Center, and the independent board chair of the Global Network Initiative. He also served as the UN Special Rapporteur on Promotion and Protection of the Right to Freedom of Opinion and Expression from 2014-2020. It is in that capacity that I had the good fortune of meeting and working with him; he is someone that I consider both a role model and a friend and I enjoy any chance we have to discuss the global landscape for expression.

York: What does free expression mean to you?

Oh gosh, that is such a big opening question. I guess I’ve thought of freedom of expression in a bunch of different ways. One is as a kind of essential tool for human development. It’s the way that we express who we are. It’s the way that we learn. It’s our access to information, but it’s also what we share with others. And that’s a part of being human. I mean, to me, expression is that one quality, you know, animals also communicate with one another, but they don’t communicate in a way that humans do. That is both communicating thoughts and ideas, but also developing one’s own person and personality. So one part of it is just about being human. And the other part, for me, that has made me so committed to freedom of expression is the part that’s related to democratic life. We can’t have good government, we can’t have the essential kinds of communication that leads to better ideas and so forth, if we’re not able to communicate. When we’re censored, we’re denying ourselves the ability to solve problems. So, to me, freedom of expression means both the personal, but also the community and the democratic.

York: I love that. Well, okay, then I’m really curious to hear about an early experience that you had that shaped these views.

I actually, as a kid, my parents were somewhat observant Jewish. Not totally, we were what I considered suburban observant. Meaning we’d go to the synagogue and then I’d go play baseball or we’d go to the mall. It wasn’t any kind of deeply religious thing, but the community was really important to my family. And back in the 1970s and early 80s when I was growing up, the Jewish community, at least where I lived, kind of rotated around, not Israel—which it is today, which is problematic in all sorts of ways—but it tended to focus around the plight of Jews in the Soviet Union. And that’s where we were kind of engaged. And so my earliest engagement with community and whatever religious background my parents were bringing to the table was human rights focused. It was community, it was our community in that sense, but it was human rights focused.

And the thing I took away as a kid about Jews in the Soviet Union was that they were denied two basic things. One, total freedom of expression. Which I didn’t put in those terms when I was seven or eight years old, but it was about access to information, kind of the closed nature of a regime that didn’t allow individuals either to develop as themselves and to develop their religious beliefs, or to communicate with others about them. The other part was freedom of movement. You know, most dissidents and Jews and other minorities were totally denied freedom of movement. And so I guess my earliest experience with freedom of expression and my earliest commitment was pretty deeply personal. Not that I was suffering anything from that. Because the nature of our community was organized around not just our immediate community in our little suburb of Los Angeles, but the broader community. You know, I thought in those terms. The other part, obviously, as I was growing up, was Holocaust education was a big part of the Jewish community, and the phrase “never again” actually meant something and it connected us in our community to what was happening to others around the world. [Editor’s note: This interview took place in September 2023]

So, I was blessed in the sense that I lived in this kind of isolated place—I mean, not totally isolated—

York: I’m also from suburbia, I get it.

So you feel isolated in those places. But somehow, even though we were in this kind of tribal community, it was outward looking and I just am sure that had a big part in my thinking about human rights and how we think about others and how we think about our own role in improving either our own existence and our own well-being and that of others.

York: I love that background and that makes a lot of sense. You’ve then gone on to do all these things, including being the UN Special Rapporteur on Opinion and Expression. I want to note that it seems like you’ve always had an international outlook, which I think is kind of rare in the US, where we can often be quite insular.

Yeah, it bugs me to no end. And I actually think it’s getting worse. The first part of it is, and I say this and it sounds glib, but Americans don’t speak human rights. We don’t even speak this language that allows us to communicate globally. And so, I just came to Lund, I’m at the Raoul Wallenberg Institute of Human Rights at Lund University. This is basically my second day here, and it’s amazing to me how people speak this common language. This language that you speak, that I speak, that across Europe and around the world, people speak. It’s like this common set of norms, and then this language that comes out of human rights. And it allows us to kind of level set and discuss issues and have a common framework for them. And we don’t have that in the United States. I mean Americans, whether they’re progressive or liberal or what, we tend to discuss all of our issues in the context of “constitutional rights.” And it’s a language that doesn’t really translate well globally. So I think that is a bit of a barrier for Americans. I mean I’d love to see that change.

But the other part of it in terms of the global that I see every day is there’s kind of an ebbing and flowing of academic interest in the world. Less about what students are interested in, but what faculties are interested in. There is an overall trend, I think, of focusing inward in ways that are just not useful. It’s true. To me it’s amazing because the biggest issue, in some ways, globally is climate change. And so it requires global solutions and global vocabularies. And we move away from that. I don’t get it. I don’t get why we’re like that. 

York: I don’t either!

It’s very frustrating.

York: It is. It absolutely is. I’m going to bring it back globally, though, I’m going to bring it back to your former role as the Special Rapporteur. I think, from my perspective, as someone who got to work with you and see you in that role, it was great to see you bringing digital issues into it. And since, coming from an EFF perspective here I want to focus a little bit on that, because you were coming from a human rights background – and I know of course you have some digital background as well. But what did you expect going into that role? And how did your early work in that role change your views on the platform economy and how internet speech should be viewed and possibly regulated?

There’s so much richness in that question. And it’s true, we could spend all day just going off that question. I started as Special Rapporteur mainly with a human rights background. I had, I guess I would say dabbled in freedom of expression issues, I would say it had been a concern of mine. But I hadn’t written much in that space. And I hadn’t written a whole lot; I’d done a lot of work in international criminal justice and focused on some of the early use of technology in international criminal law. But hadn’t done a whole lot of that intersection of freedom of expression and the digital economy. One of the things that I was most excited about when I was appointed to the position was how it opened doors to allow me access to what people were thinking around the world. And I was really mindful of the fact that it was a bit weird for an American to be appointed Special Rapporteur for freedom of expression because so many people around the world see the First Amendment [to the U.S. Constitution] as somehow exceptional. Like the American First Amendment is somehow different. I think that’s overstated, but I was still mindful that there was that view. That the First Amendment was somehow different from Article 19 of the Universal Declaration of Human Rights or the International Covenant on Civil and Political Rights in some fundamental ways.

So what I wanted to do early on was reach out and just find out from—mainly from civil society—like what are the concerns? What are people focused on when it comes to freedom of expression and the digital age? And, actually, within the first six months I did a few convenings, like consultations with civil society. And we did one, I remember in December of 2014 in London, and the recurrent theme—and, remember, this is like a year and a half after the Snowden revelations—so a recurrent theme was the intersection of privacy and freedom of expression. And, to me, there’s something about the digital age that really brings that intersection forward. Because so much of what we do is so easily surveilled, whether it’s by the private sector or by governments, that that naturally has an impact on—well, first, how we think—but also how we think about what we’re able to express, who’s hearing us express those things, where we are engaging in what we typically would have thought of as private expression. The kind of expression where you’re with your friends and you’re trying to work through an idea. Well, who’s listening in while you’re doing that? Who’s watching while you’re browsing? Which is a form of freedom of expression, it’s access to information. Who’s surveilling you?

And so, very early, I just saw that that kind of intersection was going to be the focus of my mandate, probably more than anything else. And my first report was on encryption as a human right, it was encryption and anonymity. And I think that in some ways shaped, like I haven’t looked back at that report in a while, but if I looked back at that report I’d imagine most of the themes that were most interesting to me over those six years of the mandate were probably present in that first report.

But you asked a bigger question about the digital economy, right?

York: Yeah, the initial question was around what do you think makes the digital economy unique, specifically the platform economy, and has it changed any of your views on the regulation of online speech?

It’s hard to have been watching this space over the last ten years, and not be influenced by the kind of cesspool that social media became. And I think there were moments—I’m curious about your thoughts on this, too, because you’ve been engaged in this from such an early time and seen the whole development of social media as a kind of centralizing force for internet communication. But I feel that there was a kind of dogmatism in the way that I approached these issues early on around freedom of expression. That probably evolved as I saw hate, harassment, disinformation and all that kind of coursing through the veins of social media. And, ultimately, I don’t think my views changed, in terms of either what platforms should do or what regulation should look like. I mean, I’m still very wary of regulation, even though I think there’s a role for regulation. But it’s hard not to have been influenced by the nature of harms people have seen.

I guess the difference might be that because I was engaging with people like you, and people around the world who were involved in, whether it was communities in repressive societies because the government was repressive, or socially there was a lot of repression, I tended to look at the issues through the lens of those who are most disadvantaged. That’s something that I’m sure that I would not have gained access to if I were basically just teaching in Irvine, California and not having access to those communities.

The thing that I learned that I think was so interesting, at least for me, was how those who you might expect to be most in favor of the state intervening to protect those who are at risk—those communities, and the individuals who represent those communities were often the ones who were the most often saying, “No. Don’t force censorship as a response to these harms. Give us autonomy. Give us the tools to address those harms on our own, using our own access to technology, our own ability to express ourselves and fight back, rather than imposing it from some state orientation.” And that definitely influenced the way I see these things. That goes back to your question about the international. And it’s not just about international law, of course. I think the American conversation is often divorced from that sense of how people who are historically underrepresented or historically harmed, how folks in those communities see things differently than, you know, Moms for Liberty or those pushing for the Kids Online Safety Act (KOSA). You know, it’s all about protection, protection, protection – but those who have the voice to say, “here’s how we want to be protected,” don’t want those approaches. And I think that that’s somehow a barrier between the global—and maybe also the grassroots—in the United States and decision-makers and the state.

York: That’s such a great answer. And having been in that space for such a long time I think, like so many people, I was really excited about these platforms in the beginning. And seeing, actually I gave a talk last week where I set it up by talking about the importance of the internet and social media through the lens of the Arab Spring, which is, kind of, I don’t want to say it was my intro, but even just the years running up to that, were my intro to the importance of these platforms for free expression for democracy, activism, and all of that. So I think that gave me this certain idealism and then to see things crash so quickly in that era of the Islamic State, Gamergate, and all those things that happened around the same time. It’s definitely shifted my views from being… I don’t want to say an absolutist… but a strong maximalist to trying to really see the variety of perspectives and the variety of harms that can come from absolute free expression on these platforms. And yet, I still worry that a lot of the people who have the loudest voices right now—and I don’t mean the horrible people who are coming from the side of hate, but the people who have the loudest voices within this debate around expression on platforms—I think a lot of them are not looking at the most marginalized voices. They might be marginalized themselves, but they’re often marginalized within a democratic context. And I think it’s hard for people in the US to see outside of that.

I think that’s absolutely right. And I think one of the things that people forget is that human rights protections are designed to protect those who are most at risk. Even thinking about Fourth Amendment protections in the United States or due process protections under human rights law, those are designed to ensure a kind of rule of law that protects people who aren’t necessarily popular. Or are seen, putting aside popularity, are not the communities that are kind of given the biggest profile in the public or in the media or whatnot. And I think that when you get something that’s—like KOSA for example—it’s driven by people or ideas that are majoritarian. And human rights is sort of, in principle, about protecting minorities. I think that when you think of it in those ways it means that in human rights conversations we need to be centering and raising the profile of voices that aren’t necessarily going to be heard otherwise. Because those are the people who are most harmed by regulation, by choices that are made at the state level.

York: Is there anything else you want to touch on?

Well, you were asking one question before that I didn’t exactly answer that was kind of touching on, maybe, the regulatory moment that we’re in and about platforms. And there I would just say that we’re in this very pivotal moment. Where you have Europe adopting a whole lot of rules on the one hand. You have platforms seemingly stepping away from—and certainly this is the case of Twitter, but even others—stepping away from some of their earlier commitments to promoting freedom of expression and pushing back against government demands and so forth. I just think that this is a very important moment in the next couple of years as we see how regulation develops. I guess I’m more concerned than I was a couple of years ago. I think we had a little exchange, but I saw on BlueSky—which I’m still trying to figure out—but you said something that resonated with me, which is that a couple of years ago it was common to think, “Europe is the future.” And now, it’s not clear that they really are. Either they’re not getting it right in Brussels or when you hear things coming out of France or other places, you think, God, nothing’s really changed, it’s just getting worse. The demand for censorship or for access to user data or whatnot is getting worse, not better. So I just think it’s a really pivotal and perhaps troubling moment about where we’re headed in terms of regulation and the digital economy.

York: I agree, and I worry that if Europe fails on this, who do we have? It’s not the US. It’s certainly not most of Asia. Is it Latin America? That might be a rhetorical question. Okay, so my final and favorite question to ask. Alive or dead, who is your free speech hero?

Free speech hero—that is really a great question. You know who I have been thinking a lot about recently? Well, we’re sort of in this Oppenheimer moment, you know, Barbie and Oppenheimer. I was thinking a lot about, there’s this sense in Oppenheimer, both in the book American Prometheus and the movie there’s this sense that Oppenheimer, who’s this scientist, was kind of burned by his freedom of expression. And Oppenheimer is super interesting from the perspective of Robert Oppenheimer’s own opposition to government secrecy. Actually the book is much better on this, although the movie gets to this, too, about how much he was fighting against government secrecy and for open access to information. But this led me to think about… and I’m getting to your question eventually, alive or dead… and this sort of goes back to the beginning of the conversation. So I was interested in the fact that nuclear scientists at the dawn of the nuclear age tended to be kind of activists definitely on the left. Super interesting. And the one person who was harmed more than anybody else within that nuclear scientist community was Andrei Sakharov. So Sakharov was basically the father of the Soviet hydrogen bomb. So, on the one hand, that’s kind of gross. That’s a terrible legacy to have, to have helped create basically civilization-ending weapons. But then after he did that he became an activist. He became really outspoken about the hell of Soviet totalitarianism. And for his speaking out, he was sent to Siberia. He was basically a dissident for decades of his life until he passed away. To me that was heroic. I don’t know if he’s the most important free speech avatar, but the fact that he, and people like him still today, speak out in situations that are deeply, deeply personally dangerous, to me is remarkable. I mean, I can post something about the awfulness of Saudi money infecting so much of our politics and sport and culture right now. I’ll be fine. I probably wouldn’t want to go to Saudi Arabia, but I’ll be fine.

Then you think about all the people we know, whether they’re in Egypt or Saudi or anywhere around the world, where minor engagements get them thrown in a dark hole of Egyptian or Saudi prisons. I’m thinking of someone like Gamal Eid in Egypt, who—and this is heroic—his basic stand in favor of freedom of expression was taking money through a [Roland Berger Foundation Human Dignity Award] prize and devoting all his money to building libraries in underprivileged neighborhoods in Cairo. To me that’s amazing. And also something that could easily—and it’s put him in travel ban territory for years—could easily get somebody in trouble. So I start with Sakharov and end with Gamal, but that kind of approach, that kind of commitment to freedom of expression, to me, is the most empowering and inspirational.

York: What a perfect answer. I agree absolutely and fundamentally, and I thank you for this wonderful interview.

Categories: Transparency, Privacy, Rights

Platforms Must Stop Unjustified Takedowns of Posts By and About Palestinians

Legal intern Muhammad Essa Fasih contributed to this post.

Social media is a crucial means of communication in times of conflict—it’s where communities connect to share updates, find help, locate loved ones, and reach out to express grief, pain, and solidarity. Unjustified takedowns during crises like the war in Gaza deprive people of their right to freedom of expression and can exacerbate humanitarian suffering.

In the weeks since war between Hamas and Israel began, social media platforms have removed content from or suspended accounts of Palestinian news sites, activists, journalists, students, and Arab citizens in Israel, interfering with the dissemination of news about the conflict and silencing voices expressing concern for Palestinians.

The platforms say some takedowns were caused by security issues, technical glitches, mistakes that have been fixed, or stricter rules meant to reduce hate speech. But users complain of unexplained removals of posts about Palestine since the October 7 Hamas terrorist attacks.

Meta’s Facebook shut down the page of independent Palestinian website Quds News Network, a primary source of news for Palestinians with 10 million followers. The network said its Arabic and English news pages had been deleted from Facebook, though it had been fully complying with Meta's defined media standards. Quds News Network has faced similar platform censorship before—in 2017, Facebook censored its account, as did Twitter in 2020.

Additionally, Meta’s Instagram has locked or shut down accounts with significant followings. Among these are Let’s Talk Palestine, an account with over 300,000 followers that shows pro-Palestinian informative content, and Palestinian media outlet 24M. Meta said the accounts were locked for security reasons after signs that they were compromised.

Instagram also banned the account of the news site Mondoweiss, and TikTok took it down; it was later restored on both platforms.

Meanwhile, Instagram, TikTok, and LinkedIn users sympathetic to or supportive of the plight of Palestinians have complained of “shadow banning,” a process in which a platform limits the visibility of a user’s posts without notifying them. Users say the platforms limited the visibility of posts that contained the Palestinian flag.

Meta has admitted to suppressing certain comments containing the Palestinian flag in certain “offensive contexts” that violate its rules. Responding to a surge in hate speech after Oct. 7, the company lowered the threshold for predicting whether comments qualify as harassment or incitement to violence from 80 percent to 25 percent for users in Palestinian territories. Some content creators are using code words and emojis and shifting the spelling of certain words to evade automated filtering. Meta needs to be more transparent about decisions that downgrade users’ speech that does not violate its rules.
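To see why lowering a prediction threshold from 80 to 25 percent matters, consider a toy sketch with entirely hypothetical confidence scores (Meta's actual classifiers and pipeline are not public):

```python
# Toy illustration: a moderation classifier assigns each comment a confidence
# score that it violates the rules; comments at or above the action threshold
# are suppressed. Lowering the threshold sweeps up far more borderline speech.
def flagged(scored_comments, threshold):
    """Return the comments whose violation score meets the action threshold."""
    return [comment for comment, score in scored_comments if score >= threshold]

# Hypothetical scores, from near-certain violation to clearly benign.
scores = [
    ("comment A", 0.90),
    ("comment B", 0.55),
    ("comment C", 0.30),
    ("comment D", 0.10),
]

print(flagged(scores, 0.80))  # only the near-certain violation is actioned
print(flagged(scores, 0.25))  # borderline comments B and C are now swept up too
```

The design trade-off is the usual precision/recall one: a 25-percent threshold catches more genuine incitement, but at the cost of suppressing many comments the classifier was far from sure about.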

For some users, posts have led to more serious consequences. Palestinian citizens of Israel, including well-known singer Dalal Abu Amneh from Nazareth, have been arrested for social media postings about the war in Gaza that are alleged to express support for the terrorist group Hamas.

Amneh’s case demonstrates a disturbing trend concerning social media posts supporting Palestinians. Amneh’s post of the Arabic motto “There is no victor but God” and the Palestinian flag was deemed incitement. Amneh, whose music celebrates Palestinian heritage, was expressing religious sentiment, her lawyer said, not calling for violence as the police claimed.

She received hundreds of death threats and filed a complaint with Israeli police, only to be taken into custody. Her post was removed. Israeli authorities are treating any expression of support or solidarity with Palestinians as illegal incitement, the lawyer said.

Content moderation does not work at scale even in the best of times, as we have said repeatedly. At all times, mistakes can lead to censorship; during armed conflicts they can have devastating consequences.

Whether through content moderation or technical glitches, platforms may also unfairly label people and communities. Instagram, for example, inserted the word “terrorist” into the profiles of some Palestinian users when its auto-translation converted the Palestinian flag emoji followed by the Arabic word for “Thank God” into “Palestinian terrorists are fighting for their freedom.” Meta apologized for the mistake, blaming it on a bug in auto-translation. The translation is now “Thank God.”

Palestinians have long fought private censorship, so what we are seeing now is not particularly new. But it is growing at a time when online speech protections are sorely needed. We call on companies to clarify their rules, including any specific changes made in relation to the ongoing war; to stop the knee-jerk reaction of treating posts that express support for Palestinians—or that notify users of peaceful demonstrations, or document violence and the loss of loved ones—as incitement; and to follow their own existing standards to ensure that moderation remains fair and unbiased.

Platforms should also follow the Santa Clara Principles on Transparency and Accountability in Content Moderation: notify users when, how, and why their content has been actioned, and give them the opportunity to appeal. We know Israel has worked directly with Facebook, requesting and garnering removal of content it deemed incitement to violence, suppressing posts by Palestinians about human rights abuses during May 2021 demonstrations that turned violent.

The horrific violence and death in Gaza is heartbreaking. People are crying out their grief and outrage to the world, to family and friends, to co-workers, religious leaders, and politicians. Labeling large swaths of this outpouring of emotion by Palestinians as incitement is unjust and wrongly denies people an important outlet for expression and solace.

Categories: Transparency, Privacy, Rights

Dutch lawyers see potential in AI

Mr. Online (legal news) - 8 November 2023 - 2:00pm

Embracing innovation, adapting to change: Lawyers navigate challenges in an evolving legal landscape is the title of this anniversary edition. The report incorporates the insights of seven hundred lawyers from the United States and Europe (Germany, the Netherlands, the United Kingdom, Belgium, France, Italy, Spain, Poland and Hungary).

Looking back and ahead

Generative artificial intelligence (GenAI) is without any doubt the most urgent topic of the moment. GenAI, best known from ChatGPT, has undergone a stunning development over the past year, which compels the legal profession to reflect as well. The fifth anniversary of the Future Ready Lawyer reports also offers an occasion to look back: where AI was still a dot on the horizon in 2019, it is now reality.

The latest report shows that 73% of lawyers expect to incorporate GenAI into their legal work within the next twelve months. This would indicate that they are aware of its impact and understand how it can be applied to save time, gain deeper critical insights and construct legal arguments. Nevertheless, respondents also see potential risks, and 25% regard GenAI as a threat when it comes to, among other things, precision, consistency and built-in biases.

ESG services

Another theme is services in the field of Environmental, Social, and Governance (ESG). The fast-growing demand for ESG expertise has become a strategic priority for legal service providers and will also increase the need for legal tech – especially at a time (post-pandemic) when it is harder to retain staff and workers' expectations are changing. More than a quarter of the lawyers surveyed rank hybrid working among their top three demands regarding employment conditions (in 2022 that was still 21%).

The Netherlands sees opportunities

Because lawyers from different parts of the world were surveyed, comparisons can be made between countries. The Dutch in particular turn out to see AI as an opportunity (65%) and to have the greatest understanding of how the technology works (89%). More specifically, lawyers here are more convinced of the benefits of GenAI than their peers abroad: more than two thirds, compared with just one fifth in France. The Dutch legal profession appears ready for the future.

The post Dutch lawyers see potential in AI appeared first on Mr. Online.

Categories: Rechten

Hundreds of security experts sound the alarm over EU plan that would allow all internet traffic to be intercepted

IusMentis - 8 November 2023 - 8:11am

Hundreds of cybersecurity experts, researchers and academics from around the world have signed an open letter sounding the alarm over the European eIDAS Regulation, AG Connect recently reported. The plan creates a backdoor through which internet traffic can be intercepted on a large scale.

The eIDAS Regulation has existed since 2018 and governs, among other things, the legal validity of electronic signatures, seals and certificates. An example of the latter are the SSL or TLS certificates that web browsers check to verify that websites are genuine.

To verify that authenticity, you need a trusted entity that says which certificate-issuing authorities can be trusted. This recently went wrong at DeGiro: they did not have the right certificates and did not know who that trusted entity was – the root certificate issuer, in jargon.

This is both a bug and a feature of the system: everyone has to work out for themselves whom they trust; there is no binding announcement from the government that a party X (a DigiNotar, say) must always be believed. And the uproar has arisen over the proposed amendment to the eIDAS Regulation that would introduce exactly that: “Under the eIDAS regulation, each member state of the EU (as well as recognised third party countries) is able to designate Qualified Trust Service Providers (Qualified TSPs) for the distribution of Qualified Website Authentication Certificates (QWACs). Outside the EU, these TSPs and QWACs are more typically known as Certificate Authorities (CAs) and TLS Certificates, respectively. Article 45 requires browsers to recognise these certificates.” This abolishes the current process of scrutiny and trust and replaces it with government-mandated trust. Many take issue with that: a malicious government could issue a certificate under a false name, after which browsers would mark the wrong site as authentic and data could be intercepted.

The open letter explains it somewhat more concretely: “Concretely, the regulation enables each EU member state (and recognised third party countries) to designate cryptographic keys for which trust is mandatory; this trust can only be withdrawn with the government’s permission (Article 45a(4)). This means any EU member state or third party country, acting alone, is capable of intercepting the web traffic of any EU citizen and there is no effective recourse. We ask that you urgently reconsider this text and make clear that Article 45 will not interfere with trust decisions around the cryptographic keys and certificates used to secure web traffic.” To be clear: this is not about wiretapping at the internet provider or eavesdropping from within the browser, but about being able to redirect unsuspecting internet users to rogue sites to see what they do there. Because the certificate is regarded as authentic, those sites can no longer be told apart from the real thing – indeed, legally they are the real site.

A real solution for this does not yet exist. On paper, DNS CAA is a counterweight: a website can record in its domain name data which authorities may issue certificates for it. A browser could then say: “this certificate for blog.iusmentis.com may be trusted via the root certificate of the Democratic Republic of Borduria, but iusmentis.com itself indicates that only the Dutch SIDN may say this.” It is just not in widespread use.
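The CAA check sketched above can be modeled in a few lines. This is a hedged illustration of the issuance rule from RFC 8659, operating on hypothetical record data rather than a live DNS lookup; the record tuples and domain names are assumptions for the example.

```python
# Minimal sketch of a DNS CAA issuance check (in the spirit of RFC 8659),
# using hypothetical record data rather than a live DNS query.

def caa_allows_issuance(caa_records, ca_domain):
    """Return True if the CA identified by ca_domain may issue a certificate.

    caa_records: list of (tag, value) tuples from the domain's CAA RRset,
    e.g. [("issue", "sidn.nl")]. An empty list means no CAA policy is
    published, so any CA may issue.
    """
    issue_values = [value for tag, value in caa_records if tag == "issue"]
    if not issue_values:
        return True  # no CAA policy published: issuance is unrestricted
    # a record like ("issue", ";") matches no CA and forbids all issuance
    return ca_domain in issue_values

# iusmentis.com could publish that only SIDN may issue its certificates:
records = [("issue", "sidn.nl")]
print(caa_allows_issuance(records, "sidn.nl"))      # True
print(caa_allows_issuance(records, "bordurie.bo"))  # False
```

As the article notes, the weak point is that this check is advisory: a browser that is legally required to trust a government root cannot use it to refuse the certificate.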

The vote on this proposal takes place tomorrow, and the hope is that at the last moment the absurdity of it will still be recognised.

Arnoud

The post Hundreds of security experts sound the alarm over EU plan that would allow all internet traffic to be intercepted appeared first on Ius Mentis.

Introducing Badger Swarm: New Project Helps Privacy Badger Block Ever More Trackers

Today we are introducing Badger Swarm, a new tool for Privacy Badger that runs distributed Badger Sett scans in the cloud. Badger Swarm helps us continue updating and growing Privacy Badger’s tracker knowledge, as well as continue adding new ways of catching trackers. Thanks to continually expanding Badger Swarm-powered training, Privacy Badger comes packed with its largest blocklist yet.

A line chart showing the growth of blocked domains in Privacy Badger’s pre-trained list from late 2018 (about 300 domains blocked by default) through 2023 (over 2000 domains blocked by default). There is a sharp jump in January 2023, from under 1200 to over 1800 domains blocked by default.

We continue to update and grow Privacy Badger’s pre-trained list. Privacy Badger now comes with the largest blocklist yet, thanks to improved tracking detection and continually expanding training. Can you guess when we started using Badger Swarm?

Privacy Badger is defined by its automatic learning. As we write in the FAQ, Privacy Badger was born out of our desire for an extension that would automatically analyze and block any tracker that violated consent, and that would use algorithmic methods to decide what is and isn’t tracking. But when and where that learning happens has evolved over the years.
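The algorithmic learning described above can be sketched as a simple counting heuristic: a third-party domain is flagged once it has been seen tracking on several distinct first-party sites. The threshold and data model below are simplified assumptions for illustration, not Privacy Badger's actual implementation.

```python
# Illustrative sketch of heuristic tracker learning in the spirit of
# Privacy Badger: flag a third-party domain once it has been observed
# tracking on at least three different first-party sites.
# (Threshold and data model are assumptions, not Privacy Badger's code.)
from collections import defaultdict

TRACKING_THRESHOLD = 3

class HeuristicLearner:
    def __init__(self):
        # tracker domain -> set of first-party sites it was seen tracking on
        self.sightings = defaultdict(set)

    def observe(self, first_party, third_party):
        """Record that third_party set tracking state while first_party loaded."""
        self.sightings[third_party].add(first_party)

    def should_block(self, third_party):
        return len(self.sightings[third_party]) >= TRACKING_THRESHOLD

learner = HeuristicLearner()
for site in ["news.example", "shop.example", "blog.example"]:
    learner.observe(site, "tracker.example")
print(learner.should_block("tracker.example"))  # True
print(learner.should_block("cdn.example"))      # False
```

Counting distinct first-party sites, rather than raw requests, is what separates cross-site tracking from a domain that merely appears often on one site.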

When we first created Privacy Badger, every Privacy Badger installation started with no tracker knowledge and learned to block trackers as you browsed. This meant that every Privacy Badger became stronger, smarter, and more bespoke over time. It also meant that all learning was siloed, and new Privacy Badgers didn’t block anything until they got to visit several websites. This made some people think their Privacy Badger extension wasn’t working.

In 2018, we rolled out Badger Sett, an automated training tool for Privacy Badger, to solve this problem. We run Badger Sett scans that use a real browser to visit the most popular sites on the web and produce Privacy Badger data. Thanks to Badger Sett, new Privacy Badgers knew to block the most common trackers from the start, which resolved confusion and improved privacy for new users.

In 2020, we updated Privacy Badger to no longer learn from your browsing by default, as local learning may make you more identifiable to websites.[1] In order to make this change, we expanded the scope of Badger Sett-powered remote learning. We then updated Privacy Badger to start receiving tracker list updates as part of extension updates. Training went from giving new installs a jump start to being the default source of Privacy Badger’s tracker knowledge.

Since Badger Sett automates a real browser, visiting a website takes a meaningful amount of time. That’s where Badger Swarm comes in. As the name suggests, Badger Swarm orchestrates a swarm of auto-driven Privacy Badgers to cover much more ground than a single badger could. On a more technical level, Badger Swarm converts a Badger Sett scan of X sites into N parallel Badger Sett scans of X/N sites. This makes medium scans complete as quickly as small scans, and large scans complete in a reasonable amount of time.
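The X-into-N-shards fan-out described above can be sketched as follows. This only illustrates the splitting step; Badger Swarm's real orchestration (provisioning cloud instances, collecting and merging results) is more involved, and the site names here are placeholders.

```python
# A minimal sketch of the fan-out described above: splitting one scan of
# X sites into N roughly equal shards that could run as parallel Badger
# Sett scans. (Badger Swarm's real orchestration is more involved; this
# only illustrates the X -> N x (X/N) split.)

def shard_site_list(sites, n_shards):
    """Split sites into n_shards near-equal contiguous chunks."""
    shard_size, remainder = divmod(len(sites), n_shards)
    shards, start = [], 0
    for i in range(n_shards):
        # early shards absorb the remainder, one extra site each
        end = start + shard_size + (1 if i < remainder else 0)
        shards.append(sites[start:end])
        start = end
    return shards

sites = [f"site{i}.example" for i in range(10)]
shards = shard_site_list(sites, 3)
print([len(s) for s in shards])  # [4, 3, 3]
```

Because each shard takes time proportional to its length, wall-clock time for the whole scan drops by roughly a factor of N, which is why large scans become feasible.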

Badger Swarm also helps us produce new insights that lead to improved Privacy Badger protections. For example, Privacy Badger now blocks fingerprinters hosted by CDNs, a feature made possible by Badger Swarm-powered expanded scanning.[2]

We are releasing Badger Swarm in the hope of providing a helpful foundation for web researchers. Like Badger Sett, Badger Swarm is tailor-made for Privacy Badger. However, also like Badger Sett, we built Badger Swarm so it's simple to use and modify. To learn more about how Badger Swarm works, visit its repository on GitHub.

The world of online tracking isn't slowing down. The dangers caused by mass surveillance on the internet cannot be overstated. Privacy Badger continues to protect you from this pernicious industry, and thanks to Badger Swarm, Privacy Badger is stronger than ever.

To install Privacy Badger, visit privacybadger.org. Thank you for using Privacy Badger!

  • [1] You may want to opt back in to local learning if you regularly browse less popular websites. To do so, visit your Badger’s options page and mark the checkbox for learning to block new trackers from your browsing.
  • [2] As a compromise to avoid breaking websites, CDN domains are allowed to load without access to cookies. However, sometimes the same domain is used to serve both unobjectionable content and obnoxious fingerprinters that do not need cookies to track your browsing. Privacy Badger now blocks these fingerprinters.
Categories: Openbaarheid, Privacy, Rechten

This Month, The EU Parliament Can Take Action To Stop The Attack On Encryption

Update 11/14/2023: The LIBE committee adopted the compromise amendments by a large majority. Once the committee's version of the law becomes the official position of the European Parliament, attention will shift to the Council of the EU. Along with our allies, EFF will continue to advocate that the EU reject proposals to require mass scanning and compromise of end-to-end encryption.

A key European parliamentary committee has taken an important step to defend user privacy, including end-to-end encryption. The Committee on Civil Liberties, Justice and Home Affairs (LIBE) has politically agreed on much-needed amendments to a proposed regulation that, in its original form, would allow for mass-scanning of people’s phones and computers. 

The original proposal from the European Commission, the EU’s executive body, would allow EU authorities to compel online services to analyze all user data and check it against law enforcement databases. The stated goal is to look for crimes against children, including child abuse images. 

But this proposal would have undermined a private and secure internet, which relies on strong encryption to protect the communications of everyone—including minors. The proposal even called for reporting people to the police as possible child abusers on the basis of AI rifling through their text messages. 

Every human being should have the right to have a private conversation. That’s true in the offline world, and we must not give up on those rights in the digital world. We deserve to have true private communication, not bugs in our pockets. EFF has opposed this proposal since it was introduced.

More than 100 civil society groups joined us in speaking out against this proposal. So did thousands of individuals who signed the petition demanding that the EU “Stop Scanning Me.” 

The LIBE committee has wisely listened to those voices, and now major political groups have endorsed a compromise proposal that protects end-to-end encryption. Early reports indicate the protection will be thorough, including language disallowing client-side scanning, a form of bypassing encryption. 

The compromise proposal also takes out earlier language that could have allowed for mandatory age verification. Such age verification mandates amount to requiring people to show ID cards before they get on the internet; they are not compatible with the rights of adults or minors to speak anonymously when necessary. 

The LIBE committee is scheduled to confirm the new agreement on November 13. The language is not perfect; some parts of the proposal, while not mandating age verification, may encourage its further use. The proposal could also lead to increased scanning of public online material that could be less than desirable, depending on how it’s done. 

Any time governments access peoples’ private data it should be targeted, proportionate, and subject to judicial oversight. The EU legislators should consider this agreement to be the bare minimum of what must be done to protect the rights of internet users in the EU and throughout the world. 

Categories: Openbaarheid, Privacy, Rechten

Observation Mission Stresses Key Elements of Ola Bini's Case for Upholding Digital Rights

Despite an Ecuadorian court’s unanimous acquittal of security expert Ola Bini in January this year due to complete lack of evidence, Ecuador’s attorney general's office has moved to appeal the decision, perpetuating several years of unjust attacks on Bini’s rights. 

In the context of the Internet Governance Forum 2023 (IGF) held in Japan, the Observation Mission on the Bini case, which includes EFF and various digital and human rights groups, analyzed how advocates can utilize key elements of the judgment that found Bini not guilty. The Mission released a new statement pointing out these elements. The statement also urges Ecuadorian authorities to clarify Bini's procedural status as the attorney general's office has been posing difficulties for Bini's compliance with the precautionary measures still pending against him, particularly the requirement of periodic appearances to the AG's office.  

The full statement in Spanish is available here.

Below we’ve summarized these key elements, which are critical for the protection of digital rights.

Irrelevant Evidence. The court characterized all evidence presented by the attorney general's office as irrelevant or unfit: "None of these elements led to a procedural truth for the purpose of proving any crime." With this decision, the court refused to convict Bini based on stereotyped views of security experts.  It has refused to apply criminal law based on a person's identity, connections, or activity, instead of actual conduct, or to apply criminal law based on a "political and arbitrary interpretation of what constitutes the security of the State and who could threaten it." Politically motivated prosecutions like Bini’s receive extensive media coverage, but what is often presented as "suspicious" is neither technically nor legally consistent. Civil society has worked to raise awareness among journalists about what is at stake in such cases, and to prevent judicial authorities from being pressured by publicized political accusations. 

The Importance of Proper Digital Evidence. The court emphasized the necessity of proper evidence to prove that an alleged computer crime occurred, and that the image of a telnet session presented in Bini’s case is not fit for this purpose. The court explained that graphical representations, which can be altered, do not constitute evidence of a cybercrime, since an image cannot verify whether the commands illustrated in it were actually executed. Building on technical experts' testimonies, the court said that what does not emerge from, or cannot be verified through, digital forensics is not proper digital evidence. The Observation Mission's statement notes this is a key precedent that clarifies the type of evidence that is considered technically valid for proving alleged computer crimes. 

Unauthorized Access. The court clarified the meaning of unauthorized access, even though no access was proven in Bini's case. According to the court, access without authorization of a computer system requires the breach of some security system, which the ruling understands as overcoming technical barriers or using access credentials without authorization. In addition, and following Ecuador's penal code, the criminal offense of unauthorized access also requires proving an illegitimate purpose or malicious intent. While prosecutors failed to prove that any access has taken place (much less an unauthorized access), this interpretation aids in setting a precedent for defining unauthorized access in digital rights cases. It's particularly crucial as it ensures that individuals who test systems for vulnerabilities and report them do not face undue criminalization.

In light of these key elements, the Observation Mission's statement stresses that it is essential for Ecuadorian appellate authorities to affirm the lower court’s acquittal of Bini. It's also imperative that authorities clarify his procedural status and the requirement for periodic appearances, as any violation of his fundamental rights raises concerns about the legitimacy of the proceedings.

The Case's Legacy and Global Implications

This verdict has significant implications for digital rights beyond Bini's case. It underscores the importance of incorporating malicious intent into the configuration of computer crimes in legal and public policy discussions, as well as the importance of guarding against politically motivated prosecutions that rely on suspicion and public fear. 

Bini's case serves as a beacon for the defense of digital rights. It establishes critical precedents for the treatment of evidence, the importance of digital forensics, and relevant elements for assessing the offense of unauthorized access. It's a testament to the global fight for digital rights and an opportunity to safeguard the work of those who enhance our privacy, security, and human rights in the digital era.

Categories: Openbaarheid, Privacy, Rechten

Article 45 Will Roll Back Web Security by 12 Years

The EU is poised to pass a sweeping new regulation, eIDAS 2.0. Buried deep in the text is Article 45, which returns us to the dark ages of 2011, when certificate authorities (CAs) could collaborate with governments to spy on encrypted traffic—and get away with it. Article 45 forbids browsers from enforcing modern security requirements on certain CAs without the approval of an EU member government. Which CAs? Specifically the CAs that were appointed by the government, which in some cases will be owned or operated by that selfsame government. That means cryptographic keys under one government’s control could be used to intercept HTTPS communication throughout the EU and beyond.

This is a catastrophe for the privacy of everyone who uses the internet, but particularly for those who use the internet in the EU. Browser makers have not announced their plans yet, but it seems inevitable that they will have to create two versions of their software: one for the EU, with security checks removed, and another for the rest of the world, with security checks intact. We’ve been down this road before, when export controls on cryptography meant browsers were released in two versions: strong cryptography for US users, and weak cryptography for everyone else. It was a fundamentally inequitable situation and the knock-on effects set back web security by decades.

The current text of Article 45 requires that browsers trust CAs appointed by governments, and prohibits browsers from enforcing any security requirements on those CAs beyond what is approved by ETSI. In other words, it sets an upper bar on how much security browsers can require of CAs, rather than setting a lower bar. That in turn limits how vigorously browsers can compete with each other on improving security for their users.

This upper bar on security may even ban browsers from enforcing Certificate Transparency, an IETF technical standard that ensures a CA’s issuing history can be examined by the public in order to detect malfeasance. Banning CT enforcement makes it much more likely for government spying to go undetected.
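Certificate Transparency's detection power comes from committing every issued certificate to an append-only Merkle tree whose root hash the log publishes. The toy sketch below uses the leaf/node hashing scheme from RFC 6962 to show that quietly adding a rogue certificate necessarily changes the published root; the certificate byte strings are placeholders.

```python
# A toy sketch of the Merkle tree hashing behind Certificate Transparency
# (RFC 6962): the log's tree head commits to every issued certificate, so
# any quietly issued certificate changes the root and can be detected.
import hashlib

def _sha256(data):
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """RFC 6962-style Merkle Tree Hash over a list of byte-string leaves."""
    if not leaves:
        return _sha256(b"")
    if len(leaves) == 1:
        return _sha256(b"\x00" + leaves[0])  # leaf hash, prefixed with 0x00
    k = 1
    while k * 2 < len(leaves):               # largest power of two < n
        k *= 2
    # interior node hash, prefixed with 0x01
    return _sha256(b"\x01" + merkle_root(leaves[:k]) + merkle_root(leaves[k:]))

log = [b"cert-for-example.com", b"cert-for-iusmentis.com"]
root_before = merkle_root(log)
# Appending a secretly issued certificate necessarily changes the root:
root_after = merkle_root(log + [b"rogue-cert-for-example.com"])
print(root_before != root_after)  # True
```

Monitors that watch the published roots can therefore spot misissuance after the fact, which is exactly the accountability mechanism a CT-enforcement ban would weaken.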

Why is this such a big deal? The role of a CA is to bootstrap encrypted HTTPS communication with websites by issuing certificates. The CA’s core responsibility is to match web site names with customers, so that the operator of a website can get a valid certificate for that website, but no one else can. If someone else gets a certificate for that website, they can use it to intercept encrypted communications, meaning they can read private information like emails.

We know HTTPS encryption is a barrier to government spying because of the NSA’s famous “SSL added and removed here” note. We also know that misissued certificates have been used to spy on traffic in the past. For instance, in 2011 DigiNotar was hacked and the resulting certificates used to intercept emails for people in Iran. In 2015, CNNIC issued an intermediate certificate used in intercepting traffic to a variety of websites. Each CA was subsequently distrusted.

Distrusting a CA is just one end of a spectrum of technical interventions browsers can take to improve the security of their users. Browsers operate “root programs” to monitor the security and trustworthiness of CAs they trust. Those root programs impose a number of requirements varying from “how must key material be secured” to “how must validation of domain name control be performed” to “what algorithms must be used for certificate signing.” As one example, certificate security rests critically on the security of the hash algorithm used. The SHA-1 hash algorithm, published in 1995, was considered not secure by 2005. NIST disallowed its use in 2013. However, CAs didn't stop using it until 2017, and that only happened because one browser made SHA-1 removal a requirement of its root program. After that, the other browsers followed suit, along with the CA/Browser Forum.
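The kind of requirement a root program enforces can be sketched as a simple policy check over a certificate chain. The algorithm names and the chain representation below are illustrative assumptions, not any browser's actual policy engine.

```python
# A hedged sketch of the kind of policy check a browser root program
# enforces: reject certificate chains signed with disallowed algorithms.
# (Algorithm names and the chain format are illustrative assumptions,
# not any browser's actual policy engine.)

DISALLOWED_SIG_ALGS = {"md5WithRSAEncryption", "sha1WithRSAEncryption"}

def chain_meets_policy(chain):
    """chain: list of dicts with a 'sig_alg' field, leaf certificate first.

    Returns (ok, violations): ok is False if any certificate in the chain
    uses a disallowed signature algorithm.
    """
    violations = [cert["sig_alg"] for cert in chain
                  if cert["sig_alg"] in DISALLOWED_SIG_ALGS]
    return (len(violations) == 0, violations)

old_chain = [{"sig_alg": "sha1WithRSAEncryption"},
             {"sig_alg": "sha256WithRSAEncryption"}]
ok, bad = chain_meets_policy(old_chain)
print(ok, bad)  # False ['sha1WithRSAEncryption']
```

Article 45's upper bar on security would prevent browsers from adding entries to such a disallow-list for government-appointed CAs without the member state's approval.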

The removal of SHA-1 illustrates the backwards security incentives for CAs. A CA serves two audiences: their customers, who get certificates from them, and the rest of the internet, who trusts them to provide security. When it comes time to raise the bar on security, a CA will often hear from their customers that upgrading is difficult and expensive, as it sometimes is. That motivates the CA to drag their feet and keep offering the insecure technology. But the CA’s other audience, the population of global internet users, needs them to continually improve security. That’s why browser root programs need to (and do) require a steadily increasing level of security of CAs. The root programs advocate for the needs of their users so that they can provide a more secure product. The security of a browser’s root program is, in a very real way, a determining factor in the security of the browser itself.

That’s why it’s so disturbing that eIDAS 2.0 is poised to prevent browsers from holding CAs accountable. By all means, raise the bar for CA security, but permanently lowering the bar means less accountability for CAs and less security for internet users everywhere.

The text isn't final yet, but is subject to approval behind closed doors in Brussels on November 8.

Categories: Openbaarheid, Privacy, Rechten

The Government Surveillance Reform Act Would Rein in Some of the Worst Abuses of Section 702

With Section 702 of the Foreign Intelligence Surveillance Act (FISA) set to expire at the end of the year, Congress is considering whether to reauthorize the law and if so, whether to make any necessary amendments to the invasive surveillance authority. 

While Section 702 was first sold as a tool necessary to stop foreign terrorists, it has since become clear that the government uses the communications it collects under this law as a domestic intelligence source. The program was intended to collect communications of people outside of the United States, but because we live in an increasingly globalized world, the government retains a massive trove of communications between people overseas and U.S. persons. Increasingly, it’s this U.S. side of digital conversations that is being routinely sifted through by domestic law enforcement agencies—all without a warrant. 

The congressional authorization for Section 702 expires in December 2023, and in light of the current administration’s attempts to renew this authority, we demand that Congress not reauthorize Section 702 without reforms. It’s more necessary than ever to pass reforms that prevent longstanding and widespread abuses of the program and that advance due process for everyone who communicates online.

U.S. Senators Ron Wyden and Mike Lee, with cosponsors Senators Tammy Baldwin, Steve Daines, Mazie Hirono, Cynthia Lummis, Jon Tester, Elizabeth Warren, and Edward Markey, along with Representatives Zoe Lofgren and Warren Davidson, have introduced the Government Surveillance Reform Act, which would reauthorize Section 702 with many of these important safeguards in place.

EFF supports this bill and encourages Congress to implement these critical measures:

Government Queries of Section 702 Databases

Under the Fourth Amendment, when the FBI or other law enforcement entity wants to search your emails, it must convince a judge there’s reason to believe your emails will contain evidence of a crime. But because of the way the NSA implements Section 702, communications from innocent Americans are routinely collected and stored in government databases, which are accessible to the FBI, the CIA, and the National Counterterrorism Center.

So instead of having to get a warrant to collect this data, it’s already in government servers. And the government currently decides for itself whether it can look through (“query”) its databases for Americans’ communications—decisions which it regularly makes incorrectly, even according to the Foreign Intelligence Surveillance Court. Requiring a judge to examine the government’s claims when it wants to query its Section 702 databases for Americans’ communications isn’t just a matter of standards: it’s about ensuring government officials don’t get to decide themselves whether they can compromise Americans’ privacy in their most sensitive and intimate communications.

The Government Surveillance Reform Act would prohibit warrantless queries of information collected under Section 702 to find communications or certain information of or about U.S. persons or persons located in the United States. Importantly, this prohibition would also include geolocation information, web browsing, and internet search history.

Holding the Government Accountable

A cornerstone of our legal system is that if someone—including the government—violates your rights, you can use the courts to hold them accountable if you can show that you were affected, i.e. that you have standing.

But, in multiple cases, courts interpreting an evidentiary provision in FISA have prevented Americans who alleged injuries from Section 702 surveillance from obtaining judicial review of the surveillance’s legality. The effect is a one-way ratchet that has “created a broad national-security exception to the Constitution that allows all Americans to be spied upon by their government while denying them any viable means of challenging that spying.”

Section 210 of the Government Surveillance Reform Act would change this. This provision says that if a U.S. person has a reasonable basis to believe that their rights have been, are being, or imminently will be violated, they have suffered an “injury in fact” and they have standing to bring their case. It also clarifies that courts should follow FISA’s provision for introducing and weighing evidence of surveillance. These are critical protections in preventing government overreach, and Congress should not reauthorize Section 702 without this provision.

Criminal Notice

Another important safeguard in the American legal system is the right of defendants in criminal cases to know how the evidence against them was obtained and to challenge the legality of how it was collected.

Under FISA as written, the government must disclose when it intends to use evidence it has collected under Section 702 in criminal prosecutions. But in the fifteen years since Congress enacted Section 702, the government has only provided notice to eleven criminal defendants of such intent—and has provided notice to zero defendants in the last five years.

Section 204 of the Government Surveillance Reform Act would clarify that the government is required to notify defendants whenever it would not have had any evidence “but for” Section 702 or other FISA surveillance. This is a common-sense rule, and Congress cannot reauthorize Section 702 without clarifying the government’s duty to disclose evidence collected under Section 702.

Government Surveillance Reform Act

Section 702 expires in December 2023, and Congress should not renew this program without serious consideration of the past abuses of the program and without writing in robust safeguards.

EFF applauds the Government Surveillance Reform Act, which recognizes the need to make these vital reforms, and many more, to Section 702. Requiring court approval of government queries for Americans’ communications in Section 702 databases, allowing Americans who have suffered injuries from Section 702 surveillance to use the evidentiary provisions FISA sets forth, and strengthening the government’s duties to provide notice when using data resulting from Section 702 surveillance in criminal prosecutions must serve as priorities for Congress as it considers reauthorizing Section 702.

Take action

Tell Congress: End 702 absent serious reforms

Categories: Openbaarheid, Privacy, Rechten

Criminal law podcast ‘Napleiten’ wins awards

Mr. Online (legal news) - 7 November 2023 - 12:04pm

Napleiten was also named winner in the ‘informative chatcast’ category. In the podcast, the makers talk with defence lawyers and public prosecutors about remarkable cases that have always stayed with their guests. A new episode appears weekly on the podcast channels, and the show can also be heard on Saturday mornings between 10:00 and 11:00 on BNR Nieuwsradio.

Expert jury

The Dutch Podcast Awards have been presented since 2018 at the initiative of BNR Nieuwsradio. The sixteen-member jury consists mainly of podcast makers (who do not judge their own work). There is also an audience award, which this year was won by the crime podcast Moordzaken. This time, however, the legal podcast by Flokstra and Laumans emerged as the big winner. The expert jury calls the production highly substantive yet never sensationalist – “which is not always a given in this genre.” Its clever format is praised for remaining accessible even to legal laypeople.

Wederom succes

Het is niet voor het eerst dat de heren van Napleiten in de prijzen vallen bij de DPA. De editie van 2022 leverde hen al de winst op in de categorie ‘True Crime’. Ter gelegenheid van de nominatie destijds sprak Flokstra met Mr. over hun beweegreden om de podcast te maken. De advocaat vertelde dat hij en Laumans waren begonnen vanuit het idee dat er in de samenleving nogal eens onbegrip bestaat over het strafrecht en het functioneren van de advocaat daarbinnen. “Wij weten echter allebei vanuit onze eigen professie dat ons strafrechtsysteem best goed in elkaar zit en dat er honderden advocaten zijn in Nederland die met passie en liefde voor dat systeem hun werk doen als belangenbehartiger van een verdachte.”

Amusement

Daarnaast vinden mensen het nu eenmaal leuk om naar verhalen te luisteren over het strafrecht. Herkenbaar voor Flokstra: “Dat heb ik zelf ook en is een van de redenen dat ik ooit strafrechtadvocaat wilde worden.” Hij zag en ziet Napleiten dan ook als een soort van amusement.

Het bericht Strafrechtpodcast ‘napleiten’ valt in de prijzen verscheen eerst op Mr. Online.

Categorieën: Rechten

The transparency provision in the AI Act: What needs to happen after the 4th trilogue?

International Communia Association - 7 november 2023 - 10:34am

Before the trilogue, COMMUNIA issued a statement calling for a comprehensive approach to the transparency of training data in the Artificial Intelligence (AI) Act. COMMUNIA and the statement’s co-signatories support more transparency around AI training data, going beyond data that is protected by copyright. It is still unclear whether the co-legislators will be able to pass the regulation before the end of the current term. If they do, proportionate transparency obligations are key to realising the balanced approach enshrined in the text and data mining (TDM) exception of the Copyright Directive.

How can transparency work in practice?

As discussed in our Policy Paper #15, transparency is key to ensuring a fair balance between the interests of creators on the one hand and those of commercial AI developers on the other. A transparency obligation would empower creators, allowing them to assess whether the copyrighted materials used as AI training data have been scraped from lawful sources, as well as whether their decision to opt-out from AI training has been respected. At the same time, such an obligation needs to be fit-for-purpose, proportionate and workable for different kinds of AI developers, including smaller players.

While the European Parliament’s text takes an important step towards improving transparency, it has been criticised for falling short in two key respects. First, the proposed text focuses exclusively on training data protected under copyright law, which arbitrarily limits the scope of the obligation in a way that may not be technically feasible. Second, the Parliament’s text remains very vague, calling only for a “sufficiently detailed summary” of the training data, which could lead to legal uncertainty for all actors involved, given how opaque the copyright ecosystem itself is.

As such, we are encouraged to see the recent work of the Spanish presidency on the topic of transparency, improving upon the Parliament’s proposed text. The presidency recognises that there is a need for targeted provisions that facilitate the enforcement of copyright rules in the context of foundation models and proposes that providers of foundation models should demonstrate that they have taken adequate measures to ensure compliance with the opt-out mechanism under the Copyright Directive. The Spanish presidency has also proposed that providers of foundation models should make information about their policies to manage copyright-related aspects public.

This proposal marks an important step in the right direction by expanding the scope of transparency beyond copyrighted material. Furthermore, requiring providers to share information about their policies for managing copyright-related aspects could clarify which opt-out methods are being honoured, giving creators certainty that their choice to protect works from TDM is being respected.

In search of a middle ground

Unfortunately, while the Spanish presidency has addressed one of our key concerns by removing the limitation to copyrighted material, ambiguity remains. Calling for a sufficiently detailed summary about the content of training data leaves a lot of room for interpretation and may lead to significant legal uncertainty going forward. Having said that, strict and rigid transparency requirements which force developers to list every individual entry inside of a training dataset would not be a workable solution either, due to the unfathomable quantity of data used for training. Furthermore, such a level of detail would provide no additional benefits when it comes to assessing compliance with the opt-out mechanism and the lawful access requirement. So what options do we have left?

First and foremost, the reference to “sufficiently detailed summary” must be replaced with a more concrete requirement. Instead of focussing on the content of training data sets, this obligation should focus on the copyright compliance policies followed during the scraping and training stages. Developers of generative AI systems should be required to provide a detailed explanation of their compliance policy including a list of websites and other sources from which the training data has been reproduced and extracted, and a list of the machine-readable rights reservation protocols/techniques that they have complied with during the data gathering process. In addition, the AI Act should allocate the responsibility to further develop transparency requirements to the to-be-established Artificial Intelligence Board (Council) or Artificial Intelligence Office (Parliament). This new agency, which will be set up as part of the AI Act, must serve as an independent and accountable actor, ensuring consistent implementation of the legislation and providing guidance for its application. On the subject of transparency requirements, an independent AI Board/Office would be able to lay down best-practices for AI developers and define the granularity of information that needs to be provided to meet the transparency requirements set out in the Act.
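To make this concrete: machine-readable rights reservation protocols of the kind such a compliance list would enumerate already exist. One example is the W3C Community Group’s TDM Reservation Protocol (TDMRep), which lets a website signal a TDM opt-out via HTTP response headers or a well-known JSON file. A minimal sketch follows; the domain and policy URL are hypothetical, and whether any given protocol satisfies the Copyright Directive’s opt-out requirement remains a matter of legal debate:

```
# TDMRep opt-out signalled in HTTP response headers
tdm-reservation: 1
tdm-policy: https://example.com/tdm-policy.json

# or site-wide, as rules in /.well-known/tdmrep.json:
[{
  "location": "/",
  "tdm-reservation": 1,
  "tdm-policy": "https://example.com/tdm-policy.json"
}]
```

Under the approach sketched above, a foundation model provider’s transparency statement would then simply list TDMRep, alongside comparable mechanisms such as robots.txt directives, among the reservation protocols it honoured during data gathering.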

We understand that the deadline to find an agreement on the AI Act ahead of the next parliamentary term is very tight. However, this should not be an excuse for the co-legislators to rush the process by taking shortcuts through ambiguous language purely to find swift compromises, creating significant legal uncertainty in the long run. In order to achieve its goal to protect Europeans from harmful and dangerous applications of AI while still allowing for development and encouraging innovation in the sector, and to potentially serve as model legislation for the rest of the world, the AI Act must be robust and legally sound. Everything else would be a wasted opportunity.

The post The transparency provision in the AI Act: What needs to happen after the 4th trilogue? appeared first on COMMUNIA Association.

OSR: the best range of continuing & refresher education!

Mr. Online (legal news) - 7 November 2023 - 8:51am
Refresher course on victim cases: the injured party’s claim (guaranteed to run)

This refresher course for experienced criminal defence lawyers covers recent developments concerning the injured party from both a civil-law and a criminal-law perspective. In one day you are fully up to date with the latest state of the case law. You can use this course to meet the Raad voor Rechtsbijstand’s annual continuing-education requirement in this field. The course is rated 8.3.

You can take this course on 14 November in Utrecht.

Refresher course on WSNP administration (guaranteed to run)

This refresher course brings you fully up to date in one day on the changes, new insights and developments around the Wet schuldsanering natuurlijke personen (Wsnp). If your work involves the Wsnp, you will no doubt have noticed that it changes regularly. That is why three experienced instructors take you through the legislation, case law and current themes of the Wsnp. The course is rated 7.8. You can take this course on 17 November in Utrecht.

Refresher course on disability benefits law

This refresher course on disability benefits law takes an in-depth look at the most important current topics in the field of work incapacity, including the CRvB rulings on transitional law, incapacity existing at the start of the insurance, and permanent incapacity under Article 47 of the Wet WIA. The course is rated 8.3. You can take this course on 21 November in Utrecht.

Refresher course on immigration detention (guaranteed to run on 22-11)

An immigration lawyer carries a heavy responsibility: by representing your client as well as possible, you ensure they are not detained any longer than strictly necessary. With so much at stake, a simple reference to the court’s judgment will not do; much more is expected of you, such as sound argumentation. After this half-day course you are up to date on all current case law and legislation. The course is rated 8.1. You can take this course on 22 November in Amersfoort (guaranteed to run) or on 29 November in Amsterdam.

Refresher course on the Awb

Administrative law is constantly in motion, and the past period has again seen important rulings on a wide range of topics. This refresher course starts by revisiting the basic concepts of the Awb to refresh previously acquired knowledge. The instructors then devote extensive attention to case law and legislation, as well as to current developments. The course is rated 8.2. You can take this course on 8 December in Utrecht.

Refresher course on tenancy law (guaranteed to run)

Tenancy legislation and regulations are subject to constant change. As a legal professional you want to stay abreast of current themes and developments in tenancy law. This one-day refresher course brings you fully up to date. The course is rated 8.2. You can take this course on 13 December in Utrecht.

Refresher course on planning and environmental law

The world of spatial planning and building permits is in constant motion, both in its legislation (Wabo, Chw, Wro) and in the steady stream of case law – and now the new Omgevingswet is on its way as well. This session covers recent developments in the environmental-law case law of the ABRvS, and also discusses, from a spatial-planning perspective, topics from sectoral legislation such as nature conservation and noise. The course is rated 8. You can take this course on 18 December in Utrecht.

Refresher course on social security

In the period between the 2017 coalition agreement and the IBO report on simplifying social security (Moeilijk makkelijker maken), a good many changes to the ZW and WIA were announced. Those intended changes are also connected with deploying the available capacity of insurance physicians as effectively as possible. Besides these topics, this refresher course also looks at the past year’s case law on the WW and at the Awb provisions relevant to litigation practice. The course is rated 8.1.

You can take this course on 19 December in Utrecht.

The post OSR: the best range of continuing & refresher education! appeared first on Mr. Online.

Categories: Rights

CISO of hacked SolarWinds charged over cybersecurity failure

IusMentis - 7 November 2023 - 8:21am
Courtany / Pixabay

The US securities regulator, the SEC, has brought charges against the deeply hacked vendor of management software and against the company’s CISO, AG Connect recently reported. Concerns about the liability of top managers are flaring up again. How would this play out in the Netherlands?

In late August 2021 I also blogged about such a claim against a CISO personally, but that was a civil claim by shareholders who saw securities fraud in the mess this man had made of things. Now it is the securities regulator, but the angle is the same: by saying on the one hand that your security is excellent, while on the other hand botching that very security, you mislead shareholders.

Can the CISO personally be blamed for this? That depends first of all on where this person sits. Despite the C-title, a CISO is not necessarily a chief, a director or the like. As the NCSC explains it: “The CISO, as the cybersecurity expert, has an advisory, coordinating and supervisory role. The CISO has no independent responsibility for the organisation’s cybersecurity. After all, the CISO does not own the IT or OT systems; that role always lies with the organisation’s line management.” Someone in an advisory or supervisory role is of course not a director and therefore cannot be held personally liable by third parties. The legal standard of “intent or deliberate recklessness” (Article 7:661 of the Dutch Civil Code) is set so high that in practice it is almost never met.

Anyone who is a director (that is, a CISO on the board) could be held personally liable as such. The criterion then is, in short, that “in general, it can only be assumed that the director has acted unlawfully towards the company’s creditor where, also in view of his duty of proper performance of his tasks under Article 2:9 of the Dutch Civil Code, a sufficiently serious personal reproach can be made against him.” This usually concerns things like knowingly entering into obligations the company cannot bear, deliberately steering towards breach of contract, and so on. Here, however, the issue is primarily not doing your job properly: being asleep at the wheel instead of getting security in order. That is a much harder case: “… a director’s liability on the ground of improper management is an internal liability towards the company and not also a liability towards the individual shareholder. This does not alter the fact, however, that a shareholder could sue a failing director on the ground of tort.” The requirement for the latter, though, is that the director intended to harm those shareholders. And that is rather hard to prove when you simply say, in general terms, “our security is excellent” while that security consisted of the password “solarwinds123”, zero staff on security and a Documentatie.pdf of zero bytes.

For the failing security itself I likewise see no claims against the director-CISO personally. That is ‘merely’ falling short in your work, not intent to harm customers or others.

In Europe, more will soon be possible under the NIS2 Directive. The governance article (Article 20) provides in paragraph 1: “Member States shall ensure that the management bodies of essential and important entities approve the cybersecurity risk-management measures taken by those entities in order to comply with Article 21, oversee its implementation and can be held liable for infringements by the entities of that Article.” The management bodies (such as the executive board) must therefore ensure adequate cybersecurity. And Article 29 then provides: “Member States shall ensure that any natural person responsible for, or acting as a legal representative of, an essential entity on the basis of the power to represent it, the authority to take decisions on its behalf or the authority to exercise control of it, has the power to ensure its compliance with this Directive. Member States shall ensure that it is possible to hold such natural persons liable for breach of their duties to ensure compliance with this Directive.” A person with final responsibility for information security could thus in theory be held to account under the NIS2 rules. But that is not directly about “the security was rubbish”; it is about “your company was set up in such a way that the security was bound to be rubbish” – about being unable to comply with the Directive because of overly limited powers.

Arnoud

The post CISO of hacked SolarWinds charged over cybersecurity failure appeared first on Ius Mentis.

A constitutional court in the Netherlands? ‘Surinamese court made a flying start’

Mr. Online (legal news) - 7 November 2023 - 7:30am

So wrote the author, a former senior lecturer in constitutional law at Leiden University, recently in the Nederlands Juristenblad. The ban on constitutional review in the current Article 120 has been under discussion ever since it ended up in Thorbecke’s Constitution in 1848. Recently, however, it has become a genuinely live political issue.

The Rutte IV government wrote a policy outline letter on constitutional review in 2020, steering towards diffuse review by the ordinary courts (Mr. discussed it with three experts). The new party NSC of (ex-CDA member) Pieter Omtzigt wants to establish a constitutional court with exclusive review powers, a plan that – if the polls are to be believed – will probably be discussed in earnest.

45 years in the making

According to Fernandes Mendes, Suriname can serve as inspiration. The establishment of a constitutional court has been provided for ever since the country gained independence from the Netherlands in 1975, yet it took no fewer than 45 years to come about. Even so, the Constitutioneel Hof Suriname (CHS) has, in the author’s words, made a “flying start” by ruling on “highly sensitive subjects”. A few examples serve as illustration.

He points to the declaration of unconstitutionality of the Amnesty Law, which had ensured that Desi Bouterse and his co-defendants would go unpunished for the December murders of 1982; the CHS dared to take on this highly charged case. Another matter concerned the electoral system: the court made short work of the Kiesregeling, even though the “constitutional and political effect of the ruling […] was enormous” and it had direct consequences for political parties. A 35-year stalemate was thereby broken; “The positive impact of the CHS’s decision on democratic development is thereby illustrated.”

Restrained

These landmark rulings do not necessarily mean, however, that the CHS is an “activist court” that sits in the legislature’s chair. On same-sex marriage it took a distinctly restrained position on such a “socially sensitive issue”. Setting policy in this area, it held, is for the legislature; although the court did tell the legislature “in no uncertain terms that legislation must come”.

Fernandes Mendes stresses that the Dutch situation is in many respects not comparable to Suriname’s, but notes that unconstitutional laws can be made here too, pointing to the childcare benefits scandal, immigration and asylum legislation, and the Pension Act. A look across the border can pay off: “The experiences with lifting the ban on constitutional review in the other countries of the Kingdom and in Suriname are worth a role in future discussions.”

The post A constitutional court in the Netherlands? ‘Surinamese court made a flying start’ appeared first on Mr. Online.

Categories: Rights

EFF to Ninth Circuit: Activists’ Personal Information Unconstitutionally Collected by DHS Must Be Expunged

Electronic Frontier Foundation (EFF) - news - 7 November 2023 - 12:32am

EFF filed an amicus brief in the U.S. Court of Appeals for the Ninth Circuit in a case that has serious implications for people’s First Amendment rights to engage in cross-border journalism and advocacy.

In 2019, the local San Diego affiliate for NBC News broke a shocking story: components of the federal government were conducting surveillance of journalists, lawyers, and activists thought to be associated with the so-called “migrant caravan” coming through Central America and Mexico.

The Inspector General for the Department of Homeland Security, the agency’s watchdog, later reported that the U.S. government shared sensitive information with the Mexican government, and U.S. officials had improperly asked Mexican officials to deny entry into Mexico to Americans to prevent them from doing their jobs.

The ACLU of Southern California, representing three of these individuals, sued Customs & Border Protection (CBP), Immigration & Customs Enforcement (ICE), and the FBI, in a case called Phillips v. CBP. The lawsuit argues, among other things, that the agencies collected information on the plaintiffs in violation of their First Amendment rights to free speech and free association, and that the illegally obtained information should be “expunged” or deleted from the agencies’ databases.

Unfortunately, both the district court and a three-judge panel of the Ninth Circuit ruled against the plaintiffs.

The panel held that the plaintiffs don’t have standing to bring the lawsuit because they don’t have sufficient privacy interests in the personal information the government collected about them, in part because the data was gleaned from public sources such as social media. The panel also held there is no standing because there isn’t a sufficient risk of future harm from the government’s retention of the information.

The plaintiffs recently asked the three-judge panel to reconsider its decision, or alternatively, for the full Ninth Circuit to conduct an en banc review of the panel’s decision. 

In our amicus brief, we argued that the plaintiffs have privacy interests in the personal information the government collected about them, which included details about their First Amendment-protected “political views and associations.” We cited to Supreme Court precedent that has found privacy interests in personal information compiled by the government, even when the individual bits of data are available from public sources, and especially when the data collection is facilitated by technology.

We also argued that, because the government stored plaintiffs’ personal information in various databases, there is a sufficient risk of future harm. These risks include sharing data across agencies or even with other governments due to lax or nonexistent policies on data sharing; government employees abusing individuals’ data; and CBP’s poor track record of keeping digital data safe from data breaches.

We hope that the panel reconsiders its erroneous decision and holds that the plaintiffs have standing to seek expungement of the information the government collected about them; or that the full Ninth Circuit agrees to review the panel’s original decision, to protect Americans’ free speech and privacy rights.

Categories: Transparency, Privacy, Rights

Digital Rights Updates with EFFector 35.14

There's been lots of news and updates recently in the realm of digital rights, from EFF's recent investigation (and quiz!) into the student monitoring tool GoGuardian, to a recent victory in California regarding law enforcement's sharing of ALPR data outside of the state. It can feel overwhelming to stay up to date, but we've got you covered with our EFFector newsletter!

Version 35, issue 14 is out now — you can read the full newsletter here, and subscribe to get the next issues in your inbox automatically. You can also listen to the audio version below:

Listen on YouTube

EFFECTOR 35.14 - Digital Rights in Times of War

Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression. 

Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.

Categories: Transparency, Privacy, Rights

Congress Shouldn't Limit The Public's Right To Fight Bad Patents

The U.S. Senate Subcommittee on Intellectual Property will debate a bill this week that would dramatically limit the public’s right to challenge wrongly granted patents. The PREVAIL Act (S. 2220) would bar most people from petitioning the U.S. Patent and Trademark Office (USPTO) to revoke patents that never should have been granted in the first place. 

If the bill passes, it would be a giant gift to patent trolls, who will be able to greatly increase the extortionate toll they demand from small businesses, software developers, and everyday internet users. EFF opposes the bill, and we’re reaching out to Congress to let them know they should stand with technology makers and users—not patent trolls. 

The PREVAIL Bill Will Ban Most People From Challenging Bad Patents 

Every year, hundreds of patent lawsuits are filed over everyday internet activities–“inventions” like watching ads (online), showing picture menus (online), sharing sports data (online), selling binders or organic produce (online), or clocking in to work (online). 

The patents are usually controlled by “patent assertion entities,” also called patent trolls, which don’t actually do business or use the ideas themselves. 

There are two main ways of fighting back against these types of bogus patents. First, they can be invalidated in federal court, which can cost millions of dollars. Second, they can be challenged at the U.S. Patent and Trademark Office, usually in a process called an inter partes review, or IPR. The IPR process is also expensive and difficult, but this quasi-judicial process is much faster and cheaper than district courts. IPR has allowed the cancellation of thousands of patent claims that never should have been issued in the first place. 

The PREVAIL Act will limit access to the IPR process to only people and companies that have been directly threatened or sued over a patent. No one else will have standing to even file a petition. That means that EFF, other non-profits, and membership-based patent defense companies like Unified Patents won’t be able to access the IPR process at all. 

EFF used the IPR process back in 2013, when thousands of our supporters chipped in to raise more than $80,000 to fight against a patent that claimed to cover all podcasts. EFF was able to present prior art (earlier technology), and the judges at the Patent Office agreed that the so-called “podcasting patent,” which belonged to a patent troll called Personal Audio LLC, should never have been granted. The patent was knocked out and although the patent owner appealed all the way to the Supreme Court, our win was upheld at every stage. 

We’re not the only non-profit to use IPRs to protect users and developers. The Linux Foundation, for instance, funds an “open source zone” that uses IPR to knock out patents that may be used to sue open source projects. Hundreds of patents per year are litigated against open source projects, the great majority of them being owned by patent trolls. 

Challenges Matter Because Most Software Patents Were Wrongly Granted

Patents are a government-granted monopoly that lasts 20 years. The owners of those monopoly rights often want payment from those who use the method or system they claim to “own.” These are vast and broad claims–particularly the ones used by patent trolls, which will sue dozens of companies that have very different ways of doing things. 

When a patent troll comes along like Lodsys, which threatened hundreds of independent app developers over online payment technology, software makers shouldn’t have to wait until they get a threat letter in the mail to fight back. What’s more, funding defensive measures like IPRs through crowdfunding or membership-based systems is critical to making them accessible to everyday people. 

The government gives out software monopolies in the name of spurring innovation. But the Patent Office gets it wrong very, very often. Roughly half of patents litigated to judgment are found invalid (even though the legal standard heavily favors issued patents), and one study found that the most heavily litigated patents win in court only 11% of the time.

There’s a growing body of evidence that patents don’t work to spur software innovation, and in fact do harm. The least we should be allowed is the right to bring evidence forward to challenge the most abused monopoly rights. 

The PREVAIL Act Tilts The Table In Favor Of Trolls At Every Step 

The bill tweaks the patent challenge system in other ways too, nearly all of which favor patent trolls and a few large patent-owners. 

First, the PREVAIL Act greatly raises the standard of evidence needed to invalidate a patent. If it passes, the Patent Trial and Appeal Board will presume that the patent claims are valid unless the challengers carry a heavy burden to show otherwise. Right now, the legal standards for inter partes review are roughly aligned with the standards for issuance of proposed patent claims. That alignment is what makes IPR a true opportunity for the PTO to reconsider the issuance of patent claims. 

Second, the bill would require parties that file IPR challenges to give up their right to fight in federal court. This doesn’t make any sense. Certain arguments, like arguing that a patent is abstract and ineligible under the Alice precedent, can only be made in district court. Choosing to present prior art arguments at the Patent Office shouldn’t bar someone who has been targeted by a patent troll from being able to properly defend themselves in court. Ultimately, it’s only a judge or jury that can decide a patent has been infringed—and award damages. Anyone accused of patent infringement must have all defenses available to them when they face such a proceeding. 

Third, when patents get invalidated, PREVAIL allows patent owners to simply add new claims that avoid the issues presented before the court. The owners get to do this with the benefit of years of hindsight, and new information about what real innovators have found to be successful in the market. Combined with the waiver of the ability to later challenge validity in court, this is clearly a raw deal for the public and anyone on the receiving end of a patent threat.

Finally, the bill gives the USPTO Director the power to throw out petitions in cases where more than one petition challenges the same patent. But it makes perfect sense for there to be multiple challenges over the most controversial, heavily litigated patents. The same patent or group of patents is often used to threaten hundreds–or even thousands–of small businesses. There shouldn’t be a race to the door where only the first challenger (who might not be the best challenger) gets their case heard. 

But that’s exactly what will happen if PREVAIL passes. Even a patent that’s been used dozens of times will only have to face one challenger at the USPTO, even if the two challengers bring forth very different arguments and evidence. Given that there are so few limitations on who patent owners can threaten and sue, it’s unfair to place harsh limits on the comparatively few people or companies that have the desire, and the resources, to fight back against wrongly granted patents. 

Congress Should Allow More Challenges to U.S. Patents, Not Fewer 

The PREVAIL Act proceeds from a misguided, evidence-free belief system that attributes innovation and economic success to government idea-monopolies. The evidence is growing that the U.S. software industry thrives in spite of software patents, not because of them. 

There are useful ways we could change the IPR system. In 2021, EFF supported a bill from then-Senator Patrick Leahy that would have updated the IPR system and closed four loopholes in it. Importantly, that bill expanded access to IPR–it didn’t shrink it.

The public benefits from challenges to bad patents, and we could go even further in expanding them. For instance, we could allow patent challengers to argue that a patent should never have been issued because it claims abstract ideas, which aren’t patentable under Section 101 of the Patent Act. Those arguments currently aren’t allowed in IPR proceedings.

The forces behind this bill represent patent trolls and other large-scale patent enforcers. In the end, the IPR process has benefited the public, but it’s been a net loss for a small group of people and companies that have made a lot of money from asserting patent threats. That’s why we’ve seen continuous efforts to destroy the IPR process, including misguided court challenges and pushing harmful rule changes at USPTO to limit its effects. 

The PREVAIL Act will do far more harm than good, and we’ll be calling on lawmakers in Congress to oppose this bill. 

Categories: Transparency, Privacy, Rights

NOvA: ten election manifestos contain proposals that violate the rule of law

Mr. Online (legal news) - 6 November 2023 - 10:00am

The rule-of-law review by the Nederlandse orde van advocaten (NOvA, the Netherlands Bar) has by now become something of a tradition. In 2012, 2017 and 2021, too, the election manifestos were measured against the constitutional yardstick (not always without criticism, incidentally). The NOvA’s Commissie Rechtsstatelijkheid in Verkiezingsprogramma’s 2023 was once again chaired by Willem van Schendel, former vice-president of the Hoge Raad (the Dutch Supreme Court), and further consisted of secretary François van Vloten and lawyers Channa Samkalden, Irma van den Berg, Robert Sanders, Thomas van Houwelingen-Boer and Jorg Werner. The committee gives no voting advice; it merely flags whether plans could work out positively or negatively for the rule of law.

Traffic light

The committee rates the various manifesto items using three colours. Green stands for plans that could strengthen the rule of law, yellow for plans that may pose a risk to it, and red for plans that are “in outright conflict with the rule of law”. Ten of the eighteen parties examined make proposals that fail the test (compared with seven of fourteen in 2021). Parties colour outside the lines above all on asylum and immigration, but the independence of the judiciary is also at stake (for instance where minimum sentences are concerned).

Of the 26 parties taking part in the elections, the committee examined eighteen: the parties that are either currently represented in the House of Representatives or stand a real chance of winning a seat. Top of the class are 50PLUS, BIJ1, ChristenUnie, D66, GL-PvdA, PvdD, SP and Volt. Their manifestos, too, contained yellow proposals, but no red ones that fail to meet the minimum standards of the rule of law.

Failing grades

That does not hold for the remaining ten parties: Belang van Nederland, BBB, CDA, DENK, Forum voor Democratie, JA21, Nieuw Sociaal Contract (NSC), PVV, SGP and VVD. NSC, the new party of Pieter Omtzigt, for instance, receives a failing grade for its proposal to introduce a net migration cap of 50,000, which conflicts with international law and regulations. The VVD, currently the largest party in the House of Representatives, gets several raps on the knuckles – for example for imposing minimum sentences and a ban on community-service-only sentences, and for restricting legal aid for asylum seekers with little prospect of success to a minimum.

Hope for trust

That said, the report also notes some bright spots. The committee observes in general that most parties show growing attention to “strengthening the democratic legitimacy of government” and to trust in the citizen, rather than distrust, as the starting point. “A government with a human face is one of the threads binding the various election manifestos together, looking back also at the childcare benefits scandal and the handling of gas extraction in Groningen.”

The article NOvA: ten election manifestos contain proposals that violate the rule of law first appeared on Mr. Online.

Categories: Rights
