The FISA Amendments Reauthorization Act Restricts Congress, Not Surveillance

Electronic Frontier Foundation (EFF) - news - 18 November 2017 - 12:16am

The FISA Amendments Reauthorization Act of 2017—legislation meant to extend government surveillance powers—squanders several opportunities for meaningful reform and, astonishingly, manages to push civil liberties backwards. The bill is a gift to the intelligence community, restricting surveillance reforms, not surveillance itself.

The bill (S. 2010) was introduced October 25 by Senate Select Committee on Intelligence Chairman Richard Burr (R-NC) as an attempt to reauthorize Section 702 of the FISA Amendments Act. That law authorizes surveillance that ensnares the communications of countless Americans, and it is the justification used by agencies like the FBI to search through those collected American communications without first obtaining a warrant. Section 702 will expire at the end of this year unless Congress reauthorizes it.

Other proposed legislation in the House and Senate has used Section 702’s sunset as a moment to move surveillance reform forward, demanding at least minor protections for how 702-collected American communications are accessed. In contrast, Senator Burr’s bill uses Section 702’s sunset as an opportunity to codify some of the intelligence community’s more contentious practices while also neglecting the refined conversations on surveillance happening in Congress today.

Here is a breakdown of the bill.

“About” Collection

Much of the FISA Amendments Reauthorization Act (the “Burr bill” for short) deals with a type of surveillance called “about” collection, a practice in which the NSA searches Internet traffic for any mentions of foreign intelligence surveillance targets. As an example, the NSA could search for mentions of a target’s email address. But the communications being searched do not have to be addressed to or from that email address; they simply need to include the address somewhere in their text. This is not how communications surveillance normally works.
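To make the distinction concrete, here is a purely illustrative Python sketch (the selector and messages are invented, and real collection systems are of course far more complex) contrasting ordinary to/from collection with “about” collection:

```python
# Purely illustrative sketch: to/from collection vs. "about" collection.
# The selector and messages are hypothetical; this is not how real systems work.

messages = [
    {"from": "target@example.org", "to": "alice@example.com",
     "body": "meeting at noon"},
    {"from": "bob@example.com", "to": "carol@example.com",
     "body": "did you see the mail from target@example.org yesterday?"},
]

selector = "target@example.org"

# Conventional collection: only messages sent to or from the target.
to_from_hits = [m for m in messages if selector in (m["from"], m["to"])]

# "About" collection: any message whose *content* merely mentions the selector,
# even if neither sender nor recipient is the target.
about_hits = [m for m in messages if selector in m["body"]]

print(len(to_from_hits))  # 1 -- only the target's own message
print(len(about_hits))    # 1 -- a message between two non-targets is swept up too
```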

Importantly, nothing in Section 702 today mentions or even hints at “about” collection, and it wasn’t until 2013 that we learned about it. A 2011 opinion from the Foreign Intelligence Surveillance Court—which provides judicial review for the Section 702 program—found this practice to be unconstitutional without strict post-collection rules to limit its retention and use.

Indeed, it is a practice the NSA ended in April precisely “to reduce the chance that it would acquire communications of U.S. persons or others who are not in direct contact with a foreign intelligence target.”  Alarmingly, it is a practice the FISA Amendments Reauthorization Act defines expansively and provides guidelines for restarting.

According to the bill, should the Attorney General and the Director of National Intelligence decide that “about” collection needs to start up again, all they need to do is ask specified Congressional committees. Then, a 30-day clock begins ticking. It’s up to Congress to act before the clock stops.

In those 30 days, at least one of four committees—the House Judiciary Committee, the House Permanent Select Committee on Intelligence, the Senate Judiciary Committee, or the Senate Select Committee on Intelligence—must draft, vote on, and pass legislation that specifically disallows the continuation of “about” collection, working against the requests of the Attorney General and the Director of National Intelligence.

If Congress fails to pass such legislation in 30 days, “about” collection can restart.

The 30-day period has more restrictions. If legislation is referred to any House committee because of the committee’s oversight obligations, that committee must report the legislation to the House of Representatives within 10 legislative days. If the Senate moves legislation forward, “consideration of the qualifying legislation, and all amendments, debatable motions, and appeals in connection therewith, shall be limited to not more than 10 hours,” the bill says.

Limiting discussion on “about” collection to just 10 hours—when members of Congress have struggled with it for years—is reckless. It robs Congress of the ability to accurately debate a practice whose detractors even include the Foreign Intelligence Surveillance Court (FISC)—the judicial body that reviews and approves Section 702 surveillance.

Worse, the Burr bill includes a process to skirt legislative approval of “about” collection in emergencies. If Congress has not already disapproved “about” collection within the 30-day period, and if the Attorney General and the Director of National Intelligence determine that such “about” collection is necessary for an emergency, they can obtain approval from the FISC without Congress.

And if during the FISC approval process, Congress passes legislation preventing “about” collection—effectively creating both approval and disapproval from two separate bodies—the Burr bill provides no clarity on what happens next. Any Congressional efforts to protect American communications could be thrown aside.

These are restrictions on Congress, not surveillance—as well as an open invitation to restart “about” searching.

What Else is Wrong?

The Burr bill includes an 8-year sunset period, the longest period included in current Section 702 reauthorization bills. The USA Liberty Act—introduced in the House—sunsets in six years. The USA Rights Act—introduced in the Senate—sunsets in four.

The Burr bill also allows Section 702-collected data to be used in criminal proceedings against U.S. persons so long as the Attorney General determines that the crime involves any of a wide range of subjects. Those subjects include death, kidnapping, serious bodily injury, incapacitation or destruction of critical infrastructure, and human trafficking. The Attorney General can also determine that the crime involves “cybersecurity,” a vague term open to broad abuse.

The Attorney General’s determinations in these situations are not subject to judicial review.

The bill also includes a small number of reporting requirements for the FBI Director and the FISC. These are minor improvements that are greatly outweighed by the bill’s larger problems.

No Protections from Warrantless Searching of American Communications

The Burr bill fails to protect U.S. persons from warrantless searches of their communications by intelligence agencies like the FBI and CIA.

The NSA conducts surveillance on foreign individuals living outside the United States by collecting communications both sent to and from them. Often, U.S. persons are communicating with these individuals, and those communications are swept up by the NSA as well. Those communications are then stored in a massive database that can be searched by outside agencies like the FBI and CIA. These unconstitutional searches do not require a warrant and are called “backdoor” searches because they skirt U.S. persons’ Fourth Amendment rights.

The USA Liberty Act, which we have written extensively about, creates a warrant requirement when government agents look through Section 702-collected data for evidence of a crime, but not for searches for foreign intelligence. The USA Rights Act creates warrant requirements for all searches of American communications within Section 702-collected data, with “emergency situation” exemptions that require judicial oversight.

The Burr bill offers nothing.

No Whistleblower Protections

The Burr bill also fails to extend workplace retaliation protections to intelligence community contractors who report what they believe is illegal behavior within the workforce. This protection, while limited, is offered by the USA Liberty Act. The USA Rights Act takes a different approach, approving new, safe reporting channels for internal government whistleblowers.

What’s Next?

The Burr bill has already gone through markup in the Senate Select Committee on Intelligence. This means that it could be taken up for a floor vote by the Senate.

Your voice is paramount right now. As 2017 ends, Congress is slammed with packages on debt, spending, and disaster relief—all of which require votes in less than six weeks. To cut through the logjam, members of Congress could potentially attach the Burr bill to other legislation, robbing surveillance reform of its own vote. It’s a maneuver that Senator Burr himself, according to a Politico report, approves.

Just because this bill is ready doesn’t mean it’s good. Far from it, actually.

We need your help to stop this surveillance extension bill. Please tell your Senators that the FISA Amendments Reauthorization Act of 2017 is unacceptable.

Tell them surveillance requires reform, not regression.  

Take action today.

Related Cases: Jewel v. NSA
Categories: Openness, Privacy, Rights

Hoge Raad rules on the scope of the fee sanction for outdated zoning plans

The purpose and intent of the fee sanction (legessanctie) for zoning plans older than ten years mean that not only the power to collect such fees lapses, but also the power to levy them in the first place. The sanction moreover extends to applications for permits for so-called out-of-plan deviations (buitenplanse afwijkingen). So the Hoge Raad ruled today.

The fee sanction

To encourage municipalities to keep their zoning plans up to date, the law requires zoning plans to be re-adopted within ten years. If that has not happened, the so-called fee sanction applies: from that moment on, the municipality may not charge fees (leges) for services related to that zoning plan, such as processing a permit application.

In this case, the interested party applied in 2013 for an environmental permit to build a house. The zoning plan in force since 2001 did not allow this activity; a so-called out-of-plan deviation was required. Fees were charged to the applicant in connection with the permit application. The applicant disagreed and started legal proceedings.

In its judgment, the Hoge Raad holds that the scope of the fee sanction covers not only the power to collect fees but also the power to levy them. The Hoge Raad further holds that the sanction applies both to permit applications for activities that comply with a zoning plan and to applications that conflict with it (the so-called out-of-plan deviation). The judgment also makes clear that what matters for applying the sanction is that the ten-year period has expired at the time of the taxable event, in this case the taking into consideration of the permit application. That taxable event covers all work that has to be carried out for the further handling of the application.

Categories: Rights

Data policy beyond data protection

iRights.info - 17 November 2017 - 8:16am

Classical data protection no longer achieves its goal. The new EU General Data Protection Regulation will not solve the problem on its own either. Where can a data policy start that enables meaningful use of data while protecting individuals and the common good alike?

When we use digital services, surprise and alarm travel with us. On the one hand we appreciate the constantly expanding possibilities they offer; on the other, we are startled by what providers do with the user data they collect along the way. Whoever accepts terms of service and consents to the processing of their data usually has no idea how much the good they are trading away – their data – is actually worth.

Often the value of our data only becomes apparent later, and possibly only in combination with other data. That is because data-driven innovation, like every process of discovery, is open-ended. Policymakers must therefore be fundamentally prepared to restrict innovation and profit prospects if they conclude that the practices underlying data-driven business models hollow out basic democratic values and violate personality rights.

Yet exactly that currently seems to be happening on a regular basis. In our view there is little prospect that this will change fundamentally once the European General Data Protection Regulation takes effect. Data policy is also not just about privacy, but equally about social and economic justice. Digital profiles with hundreds of data points may enable better services, but they can also influence decisions about access to essential resources such as health care, education, or financial means.

The dilemma of the data age

At the moment we lack instruments to adequately balance positive effects against potential harms. The dilemma: if we do not find models that make it possible to use data for socially accepted purposes without robbing citizens of their fundamental rights, we will also forfeit the associated opportunities to advance the common good. A mantra-like commitment to the principles of traditional data protection, combined with declarations of intent to exploit the positive possibilities of collecting and analyzing data outside data protection's formal restrictions, gets us nowhere.

Before we engage critically with the mechanisms of data protection, we want to stress that we expressly welcome the German and European approach of strong protection for personality, civil, and consumer rights. Our critique should be understood as a constructive extension of these principles where existing rules clearly reach their limits.

Basic principles of data protection

Data protection centers on protecting citizens' freedom from surveillance on the one hand, and the right to personal control over private information on the other. Data protection laws are derived from the right to self-determination and the protection of human dignity, which also form the foundation of the EU Charter of Fundamental Rights. With its 1983 census judgment (Volkszählungsurteil), the German Federal Constitutional Court established several interrelated basic principles, which were implemented first in German data protection law and later in the data protection laws of other European countries:

  • Prohibition subject to permission
    Under this principle, the processing of personal data is prohibited in principle unless a ground for permission exists: first, if a law permits the processing; second, if the data subject consents to the processing.
  • Direct collection
    Under the principle of direct collection, data may only be collected from the data subject themselves, and the data subject must be aware of the collection – at least as long as no legal provision permits a deviation.
  • Purpose limitation
    The purpose of the processing must be clearly defined before the processing takes place. If the consumer gives consent, it applies only to the purpose spelled out in that consent.
  • Necessity
    Data may only be stored if this is necessary to achieve the purpose. Once the purpose of the data collection has been fulfilled, the data must be deleted, unless retention obligations apply.
  • Data frugality and data minimization
    Purpose limitation and necessity also serve to secure the principle of data frugality and data minimization. As a general rule, as little personal data as possible should be collected, processed, and used. Technology, too, can support this principle.
  • Transparency
    The data subject should know at all times who stores which data about them. This duty to inform and notify applies both when the data subject has consented to the processing and when the processing is permitted by law.
  • Oversight and sanctions
    On the state side, the data protection supervisory authorities check that public bodies comply with the rules. On the company side, data protection officers are responsible for compliance. Through transparency rights, data subjects themselves can also contribute to oversight. The supervisory authorities are furthermore empowered to audit compliance within companies and to sanction violations.

The General Data Protection Regulation carries these principles forward

In May 2018 the General Data Protection Regulation (GDPR) will take effect in all European member states. It will be complemented by the ePrivacy Directive, which is currently being revised and is also due to take effect in May 2018. For the first time, a comparable data protection standard is then supposed to apply across Europe, and the overall level of protection is to be raised.

In substance, the GDPR carries the data protection principles listed above forward, strengthening some of them through targeted measures. Beyond that, the regulation introduces several significant innovations. One of the most consequential changes is the sharply increased fines for violations, which can amount to up to four percent of a company's annual turnover.

However, the central principles of data protection, and thus of the GDPR, were defined at a time when digital data processing existed only in its earliest forms and the state of the art was nothing like today's data traffic. We believe that this regulatory framework cannot adequately protect civil rights and liberties. In summary, at least the following central problem areas emerge.

1. Personal data is becoming ever harder to distinguish from other data

Like other data protection laws, the GDPR rests on the premise that personal data can be distinguished from non-personal data. Only personal data falls within the scope of data protection legislation. This covers all data that can be clearly attributed to a specific person, that makes such an attribution possible, or through which a link to a person can be established, such as a license plate number. Pseudonymized data, in which for example a name is replaced by a number, also counts as personal data under the GDPR.

Anonymized data is a different matter. Anonymization alters data to such a degree that it can no longer be attributed to a person. The GDPR therefore also treats anonymization as an important strategy for using data without compromising fundamental rights. Not for nothing do advocates of expanded data use never tire of asserting that their business models rest on anonymized data. Policymakers at times jump on this bandwagon all too readily.

In practice, however, a static division into anonymous and non-anonymous data fails to reflect what technology can actually do. Cases are piling up in which data believed to be anonymous could be re-identified. Researchers have shown, among other things, how individuals in anonymized Netflix user data, mobile phone data, or credit card data can be re-identified, restoring the link to a person.

Technical progress means that, as things stand today, it is all but impossible to guarantee permanent anonymization. More and more data points are publicly accessible and, in combination with non-public data, offer an attack surface for de-anonymization techniques. When New York, for example, made taxi trip data freely available on the city's open data platform, programmers showed how location details in publicly available Twitter and Instagram data let them trace celebrities' taxi rides in the data set.
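As a purely illustrative sketch of how such a linkage attack works (all records below are invented), the following Python snippet joins an "anonymized" trip dataset with publicly posted information on shared quasi-identifiers of time and place:

```python
# Illustrative linkage attack on invented data: an "anonymized" dataset is joined
# with publicly available posts on shared quasi-identifiers (time and place),
# restoring the link to a named person.

anonymized_trips = [
    {"trip_id": 17, "pickup_time": "2014-07-08 23:41", "pickup_spot": "W 54th St"},
    {"trip_id": 18, "pickup_time": "2014-07-09 01:02", "pickup_spot": "Spring St"},
]

# e.g. a geotagged, timestamped photo posted publicly on social media
public_posts = [
    {"name": "Celebrity A", "time": "2014-07-08 23:41", "place": "W 54th St"},
]

reidentified = [
    (post["name"], trip["trip_id"])
    for trip in anonymized_trips
    for post in public_posts
    if trip["pickup_time"] == post["time"] and trip["pickup_spot"] == post["place"]
]

print(reidentified)  # [('Celebrity A', 17)] -- the "anonymous" trip now has a name
```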

This is not to say that anonymization should be abandoned. It is an important safeguard for minimizing the potential for misuse of data. But a static division into personal and non-personal data is, given what we now know, no longer appropriate.

2. Informed consent becomes a loss of control for consumers

The principle of consent aims to give consumers control over how their data is used. Yet numerous studies show that hardly any consumer actually reads the terms and conditions. From a legal point of view, consumers thus routinely permit interferences with their fundamental rights without really being informed about what is happening in the background.

Moreover, consumers have no room to negotiate: they can only agree, or do without the service. On top of that, they are flooded with consent requests, which only adds to their overload. People rightly speak of a lack of user sovereignty and of a loss of control over personal data.

The GDPR nevertheless holds on to the principle of consent and gives it precedence over other legal solutions. Consumer and privacy advocates celebrate this as a victory in the fight against the habitual clicking-away of fundamental rights. It raises a rather fundamental doubt, however, whether the ultimate goal – protecting fundamental rights and liberties – can really be achieved through this principle. Even if there were full transparency about how data is used, behavioral economics suggests this would hardly be enough to persuade consumers to engage intensively with something that used to happen at the margins of their attention.

Furthermore, the principle of consent suggests voluntariness. In theory it may be possible to avoid certain online services, but in practice that often amounts to opting out of modern life altogether. In addition, with the spread of the Internet of Things, consumers will increasingly find themselves in scenarios in which there simply is no decision-making autonomy left – for example, when they want to use a connected public transit system or have to share the speed sensor data that automated road traffic depends on.

Even today, the consequences of storing, processing, and combining data have become so complex that it is essentially impossible for individuals to assess them adequately. It is hard for consumers to grasp why using a service in one context (say, scanning a document with an app) could have consequences in an entirely different area (say, credit scoring).

Given the high fines the GDPR provides for violations, it is questionable whether companies will want to rely on the validity of consent. Experts rather expect that many companies will try to process data on the basis of other legal grounds, such as "legitimate interest." How broadly that term may be interpreted – whether advertising purposes or the further development of a business model count, for instance – is so far unclear. Chronically under-resourced data protection and consumer advocates will not hold the best cards when it comes to settling these questions.

3. The data trade and its actors have become impossible to keep track of

On top of this, the network of actors that collect, process, and sell data has become so complex and entangled that at present neither consumers nor legislators know exactly who does what with the data.

Consumers interact with Google, Facebook, or their mobile carrier, for example, plus numerous other app and service makers. In many cases users do suspect that these providers make money with their data, but in reality those providers are only the visible tips of an iceberg landscape. Numerous companies that collect, refine, combine, and sell data operate in the background. Data brokers in particular, whose core business is collecting, linking, and selling data, have little interest in stepping into public view as such. The best-known data broker, Acxiom, states for instance that it now holds fairly comprehensive data on around 44 million Germans.

As a rule, data brokers argue that they deal exclusively in anonymized data. De facto this is pseudonymization, for example when customers are to be recognized across different devices by means of so-called hashing. Because the market lacks transparency, it often remains hidden whether the data the brokers trade in would have to fall under data protection law or not.
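A minimal Python sketch (with an invented email address) shows why such hashing is pseudonymization rather than anonymization: the same input always yields the same identifier, so records can be linked across devices, and anyone who can guess or obtain the input can recompute the hash:

```python
import hashlib

# Invented example: a broker derives a cross-device identifier by hashing an email
# address. The mapping is deterministic, so it links records across devices -- and
# anyone who can guess or obtain the original address can recompute it.

def pseudonymous_id(email: str) -> str:
    return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()

record_from_phone  = {"device": "phone",  "user_hash": pseudonymous_id("jane.doe@example.com")}
record_from_laptop = {"device": "laptop", "user_hash": pseudonymous_id("Jane.Doe@example.com")}

# Same hash on both devices: the two profiles can be merged into one.
print(record_from_phone["user_hash"] == record_from_laptop["user_hash"])  # True
```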

4. The law alone does not always help

Beyond that, cases are accumulating in which there is no data protection problem from a legal point of view, yet consumers nevertheless experience the handling of data as an intrusion. Only recently it emerged that in Germany the supermarket chain Real and Deutsche Post were using facial recognition software in their stores in order to show targeted advertising on analog ad displays based on age and gender.

The competent data protection authority did approve this, on the grounds that no personal data was involved. According to a survey by the Federation of German Consumer Organisations, customers nevertheless do not accept it. This shows that a purely legal perspective may not be enough to shape the data age appropriately.

5. Individual decisions produce collective effects

Handling personal or person-relatable data unfolds collective effects, even if the individualized principle of consent suggests otherwise. In reality, one person's consent can have consequences for people who have no connection whatsoever to the person consenting.

Data analyses can be carried out on the basis of a few records from people who consented, and then generalized to entire population groups. Potential harms can befall other individuals, whole groups, or members of a profiling category. Nevertheless, data protection concepts, with their focus on instruments such as consent, are still oriented primarily toward the consequences for individuals and their protection.

Here the problem of distinguishing between personal and non-personal data, and of the principle of consent built on that distinction, shows itself once more. Its effect is severely limited when automated decisions ultimately also affect people who never consented to the use of their data. Algorithmic decisions are not made on the basis of the data of the single individual about whom a decision is taken – for example, which ad banner person X is shown while browsing. The decision may require a limited amount of data about that person, but it rests on a wealth of data from other people and sources.

The Austrian researchers Wolfie Christl and Sarah Spiekermann list a multitude of examples of disadvantages arising worldwide in online marketing and commerce – for instance, an algorithm deciding to offer a customer a product at a higher price, or showing ads for risky credit offers to so-called "vulnerable, desperate groups." Especially in the aggregate and through mutually reinforcing effects, the consequences can be significant and can deepen existing inequalities.

When people are increasingly classified and ranked in this way, when everyone is continuously sorted and targeted on the basis of their economic potential, more is at stake than the protection of privacy and personal data. Fundamental democratic principles such as dignity and equality are endangered on the one hand; on the other, people who are already disadvantaged may find their possibilities for social participation and their right to self-determination and autonomy curtailed further.

The General Data Protection Regulation has certainly tried to address these new developments. Article 22 in particular should be mentioned here; it is meant to protect individuals against purely automated decisions, for example in scoring. Quite a few scholars, however, have pointed to shortcomings in this article that could ultimately leave the law ineffective for consumers. In many cases, for instance, there are indeed human intermediaries in the loop, and such cases are not covered by the provision. According to expert opinion, the access rights are also not effective enough to adequately protect consumers' rights.

Perspectives beyond data protection

In the course of adapting the Federal Data Protection Act (BDSG) to the GDPR, scenarios of Germany's decline on the world market were painted time and again, should it remain more or less the only country to keep interpreting and implementing data protection principles strictly. The success of this line of argument can even be seen in some of the changes made to the BDSG.

With other technology topics, too, opportunities are constantly played off against dangers. Against the backdrop of everyday rationality – everything has two sides – this approach suggests that we are dealing with zero-sum games. We counter this assumption with our conviction that we can seize the opportunities that the analysis of digital data offers us without at the same time surrendering to its possible negative consequences.

In doing so we must think beyond the traditional data protection approach. On the one hand, we must let go of the idea that problems can only arise where personal or person-relatable data is involved. On the other hand, we will have to keep reckoning with individuals handling personal information – their own and other people's – carelessly, and with companies exploiting both this and the loopholes in data protection law to maximize their profits.

Distinguishing data by domain of application

Practices of collecting and processing data differ considerably between domains of application. One-size-fits-all solutions are therefore very unlikely to extend the traditional data protection approach meaningfully into the future. Mobility data is of a different nature than tax data or health data; it is captured and processed differently and permits different conclusions. As soon as we think about data governance beyond data protection, which by its nature is a general-purpose instrument, we do well to do so domain-specifically and case by case.

For example, one can sensibly ask which data governance regime – apart from data protection, which we do not want to throw overboard – should be established in order to roll out on-demand public transit across the board or to enable autonomous driving. The same considerations will presumably be of little help in deciding what to do with personal patient records in the future, or what we want to allow in targeted online advertising.

Data control through technology

Ultimately, in all these domains we must ask which data has to be shared, to what extent, between which actors and systems, so that we achieve desired goals and avoid undesired effects. What is needed is an up-to-date control of data dissemination, data access, and data use.

It is obvious that technical solutions have a decisive role to play here. Approaches under discussion include "Personal Information Management Services" (PIMS). Such services are based on users keeping their data together themselves and deciding, by technical means as well, how it is used. Related approaches are Personal Data Stores (PDS) and Vendor Relationship Management (VRM).

These models are meant to kill two birds with one stone: giving consumers more control again and thereby creating trust, while also making personal data usable for commercial purposes. The projects "Enigma" and "openPDS," which originated at the Massachusetts Institute of Technology, are pertinent examples, as is the Finnish MyData project. The core idea is to put the individual at the center of data management, making the consumer the central connection and control point for their own data.
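The following Python sketch illustrates that core idea only, with the individual's store deciding, per revocable consent grant, which attributes a vendor may read; the class and method names are invented and do not reflect the actual APIs of openPDS, Enigma, or MyData:

```python
# Purely conceptual sketch of a personal data store (PDS); names are invented and
# do not reflect openPDS, Enigma, or MyData.

class PersonalDataStore:
    def __init__(self):
        self._data = {}    # attribute -> value, held by the individual
        self._grants = {}  # vendor -> set of attributes the user consented to share

    def put(self, attribute, value):
        self._data[attribute] = value

    def grant(self, vendor, attributes):
        self._grants.setdefault(vendor, set()).update(attributes)

    def revoke(self, vendor):
        self._grants.pop(vendor, None)

    def read(self, vendor, attribute):
        # A vendor only ever sees attributes covered by an explicit, revocable grant.
        if attribute not in self._grants.get(vendor, set()):
            raise PermissionError(f"{vendor} has no consent to read {attribute!r}")
        return self._data[attribute]

store = PersonalDataStore()
store.put("home_city", "Berlin")
store.put("income", 42000)
store.grant("mobility-service", {"home_city"})

print(store.read("mobility-service", "home_city"))  # allowed: 'Berlin'
store.revoke("mobility-service")
# store.read("mobility-service", "home_city")       # would now raise PermissionError
```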

The movement is still in its infancy. PIMS face a number of challenges if they are to survive on the market. They must reach and satisfy two customer groups at once: data providers and data users. It is still largely unclear how PIMS providers can earn money without jeopardizing users' trust. Despite these and other open questions, PIMS are very interesting from a societal perspective, because at a technical level they test alternatives to today's centralized models.

Data Governance

Beyond that, we must agree on basic societal principles for dealing with data and data-processing systems. We also need practical data governance approaches of the kind larger and mid-sized companies already implement under the heading of "data governance" in order to organize the data flows inside and outside their boundaries. Within Europe, Austria is far along: there, a fairly mature data governance model for public sector institutions has been developed as part of an open government program.

Advanced technical measures must be part of any effective data governance. Germany should therefore invest more in Privacy Enhancing Technologies (PET). Only as a last resort should state regulation be considered, should it turn out in a few years that the General Data Protection Regulation and the accompanying privacy-by-design solutions have not yet brought the hoped-for turn for the better in the data society.

We live in a time in which drivers can be identified with high confidence solely by analyzing when, where, and how they press the brake pedal. Against the backdrop of such capabilities, we have no choice but to outlaw certain uses and analyses of data, socially or by law. At the same time, strong technical measures and institutional, domain-specific, and societal data management approaches will be indispensable if we want to promote socially beneficial uses of data and prevent harmful ones.

This article is based on a discussion paper the authors wrote for the Stiftung Neue Verantwortung. License of the paper and of this version: CC BY-SA.

Photo: Julia Manske

Julia Manske currently advises Unicef Mexico on digital and innovation issues and writes on digital rights as an expert reviewer for international organizations. She previously worked in Berlin for the Stiftung Neue Verantwortung on data policy and for the Vodafone Institute for Society and Communications on digital social innovation.

Photo: Tobias Knobloch

Dr. Tobias Knobloch is project lead for data policy at the Stiftung Neue Verantwortung. Before that, he was responsible for online communication at a federal ministry, co-founded a social software startup, earned a doctorate on computer simulations, and worked as a project manager at a web agency.

Would you trust a robot that gives you legal advice?

IusMentis - 17 November 2017 - 8:13am

Last week another interesting question came up about NDA Lynn's abilities: who would dare to rely on the advice of a robot? After all, it remains software, and software makes mistakes. It is all very well that she correctly recognizes 94.1% of sentences on average, but a sneaky sentence that happens to fall in the remaining 5.9% is simply missed. Moreover, how do you find out what Lynn is actually doing? You do not get a real explanation at the sentence level. So should you trust such robot advice at all?

The point about explanation is a general concern with AI decisions and advice. Much AI operates as a black box: you throw an input in and you are told what the output is, but the system stays silent about the why. And looking inside the box helps only to a limited extent. Often it comes down to "it looked more like category A than category B," but seeing which factors tipped the balance is very difficult. Making sense of those factors can be a puzzle too; for a while, for example, Lynn rated NDAs under California law as stricter, because my training dataset happened to contain many strict Californian NDAs and somewhat more relaxed European ones.

The fear of this kind of opacity is felt especially with decisions about people. The GDPR contains a strict prohibition on automated decisions based on profiles, and AI-based verdicts like "you will not get a mortgage" naturally fall under it. That is why a great deal is now being invested in explainable AI.

As it happens, BigML (the technology behind NDA Lynn) offers this too – very roughly put, the system produces a stack of gigantic decision trees and returns the weighted average of each tree's outcome as the result. So you can see exactly how a classification comes about: "state of California" appeared near the end, I also saw "security measures," so this is a strict NDA. It is an explanation, but not necessarily one you can do much with.
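Schematically, you could picture such a weighted ensemble like this; a toy Python sketch with invented features, trees, and weights, not BigML's actual model behind NDA Lynn:

```python
# Toy sketch of an ensemble of decision trees whose weighted average produces the
# final classification. Features, trees, and weights are invented.

def tree_1(features):
    # "Californian NDAs tend to be strict"
    return 0.9 if features.get("mentions_california") else 0.4

def tree_2(features):
    # "an explicit security-measures clause suggests a strict NDA"
    return 0.8 if features.get("has_security_measures") else 0.3

ensemble = [(tree_1, 0.6), (tree_2, 0.4)]  # (tree, weight)

def classify(features):
    score = sum(weight * tree(features) for tree, weight in ensemble)
    total = sum(weight for _, weight in ensemble)
    return score / total  # > 0.5 -> "strict NDA"

nda = {"mentions_california": True, "has_security_measures": True}
print(classify(nda))  # 0.86 -> classified as a strict NDA
```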

The cynic may now reply that a human lawyer's explanation is also hard for a layperson to follow. Yet we go along with it anyway, and the reason is simple: we trust the lawyer. He studied for this, he has been doing it for ten years, he has an impressive office and a fancy suit so business must be going well, he is insured against claims, and so on.

An AI has no such trust factors, so it is much harder to judge whether it is right. In a British survey, for example, only 7% of people said they would trust AI advice about their finances. Oddly enough, 14% would go under the knife with a robot surgeon. Where does the difference lie? In the mental model we have of these actions, I think. That a robot can cut very precisely and with a steady hand is easy to imagine. That it will not deviate a millimeter from where it needs to be also still makes sense. But that a robot understands my financial situation and picks a suitable mortgage – no, that feels bizarre.

So what is behind that? Is it the physical aspect, that we can see a robot arm with a small scalpel and can roughly judge that it could work? Or is it the intelligence required – for a surgeon that is fairly well-defined, for advice it is much fuzzier. A heart is a concrete item you can encounter in a chest, and you do something with it; the options seem finite. With advice, by contrast, it is much harder to imagine that such a robot can assess every situation. So that feels scarier, and people think: better not.

So how does an advice robot raise trust in its abilities? Is it simply a matter of doing it and waiting until people come around?

Arnoud

From the blog Internetrecht by Arnoud Engelfriet. Buy my book!

Time Will Tell if the New Vulnerabilities Equities Process Is a Step Forward for Transparency

Electronic Frontier Foundation (EFF) - news - 16 November 2017 - 8:00pm

The White House has released a new and apparently improved Vulnerabilities Equities Process (VEP), showing signs that there will be more transparency into the government’s knowledge and use of zero day vulnerabilities. In recent years, the U.S. intelligence community has faced questions about whether it “stockpiles” vulnerabilities rather than disclosing them to affected companies or organizations, and this scrutiny has only ramped up after groups like the Shadow Brokers have leaked powerful government exploits. According to White House Cybersecurity Coordinator Rob Joyce, the form of yesterday’s release and the revised policy itself are intended to highlight the government’s commitment to transparency because it’s “the right thing to do.”

EFF agrees that more transparency is a prerequisite to any debate about government use of vulnerabilities, so it’s gratifying to see the government take these affirmative steps. We also appreciate that the new VEP explicitly prioritizes the government’s mission of protecting “core Internet infrastructure, information systems, critical infrastructure systems, and the U.S. economy” and recognizes that exploiting vulnerabilities can have significant implications for privacy and security. Nevertheless, we still have concerns over potential loopholes in the policy, especially how they may play into disputes about vulnerabilities used in criminal cases.

The Vulnerabilities Equities Process has a checkered history. It originated in 2010 as an attempt to balance conflicting government priorities. On one hand, disclosing vulnerabilities to vendors and others outside the government makes patching and other mitigation possible. On the other, these vulnerabilities may be secretly exploited for intelligence and law enforcement purposes. The original VEP document described an internal process for weighing these priorities and reaching a decision on whether to disclose, but it was classified, and few outside of the government knew much about it. That changed in 2014, when the NSA was accused of long-term exploitation of the Heartbleed vulnerability. In denying those accusations and seeking to reassure the public, the government described the VEP as prioritizing defensive measures and disclosure over offensive exploitation.

The VEP document itself remained secret, however, and EFF waged a battle to make it public using a Freedom of Information Act lawsuit. The government retreated from its initial position that it could not release a single word, but our lawsuit concluded with a number of redactions remaining in the document.

The 2017 VEP follows a structure similar to that of the previous process: government agencies that discover previously unknown vulnerabilities must submit them to an interagency group, which weighs the “equities” involved and reaches a determination of whether to disclose. The process is facilitated by the National Security Council and the Cybersecurity Coordinator, who can settle appeals and disputes.

Tellingly, the new document publicly lists information that the government previously claimed would damage national security if released in our FOIA lawsuit. The government’s absurd overclassification and withholdings extended to such information as the identities of the agencies that regularly participate in the decision-making process, the timeline, and the specific considerations used to reach a decision. That’s all public now, without any claim that it will harm national security.

Many of the changes to the VEP do seem intended to facilitate transparency and to give more weight to policies that were previously not reflected in the official document. For example, Annex B to the new VEP lists “equity considerations” that the interagency group will apply to a vulnerability. Previously, the government had argued that a similar, less-detailed list of considerations published in a 2014 White House blog post was merely a loose guideline that would not be applied in all cases. We don’t know how this more rigorous set of considerations will play out in practice, but the new policy appears to be better designed to account for complexities such as the difficulty of patching certain kinds of systems. The new policy also appears to recognize the need for swift action when vulnerabilities the government has previously retained are exploited as part of “ongoing malicious cyber activity,” a concern we’ve raised in the Shadow Brokers case.

The new policy also mandates yearly reports about the VEP’s operation, including an unclassified summary. Again, it remains to be seen how much insight these reports will provide, and whether they will prompt further oversight from Congress or other bodies, but this sort of reporting is a necessary step.

In spite of these positive signs, we remain concerned about exceptions to the VEP. As written, agencies need not introduce certain vulnerabilities to the process at all if they are “subject to restrictions by partner agreements and sensitive operations.” Even vulnerabilities which are part of the process can be explicitly restricted by non-disclosure agreements. The FBI avoided VEP review of the Apple iPhone vulnerability in the San Bernardino case due to an NDA with an outside contractor, and such agreements are apparently extremely common in the vulnerabilities market. And exempting vulnerabilities involved in “sensitive operations” seems like an exceptionally wide loophole, since essentially all offensive uses of vulnerabilities are sensitive. Unchecked, these exceptions could undercut the process entirely, defeating its goal of balancing secrecy and disclosure.

Finally, we’ve seen the government rely on NDAs, classification, and similar restrictions to improperly and illegally withhold material from defendants in criminal cases. As the FBI and other law enforcement agencies increasingly use exploits to hack into unknown computers, the government should not be able to hide behind these secrecy claims to shield its methods from court scrutiny. We hope the VEP doesn’t add fuel to these arguments.

Related Cases: EFF v. NSA, ODNI - Vulnerabilities FOIA
Categories: Openness, Privacy, Rights

Court Rules Platforms Can Defend Users’ Free Speech Rights, But Fails to Follow Through on Protections for Anonymous Speech

Electronic Frontier Foundation (EFF) - news - 16 November 2017 - 7:18pm

A decision by a California appeals court on Monday recognized that online platforms can fight for their users’ First Amendment rights, though the decision also potentially makes it easier to unmask anonymous online speakers.

Yelp v. Superior Court grew out of a defamation case brought in 2016 by an accountant who claims that an anonymous Yelp reviewer defamed him and his business. When the accountant subpoenaed Yelp for the identity of the reviewer, Yelp refused and asked the trial court to toss the subpoena on grounds that the First Amendment protected the reviewer’s anonymity.

The trial court ruled that Yelp did not have the right to object on behalf of its users and assert their First Amendment rights. It next ruled that even if Yelp could assert its users’ rights, it would have to comply with the subpoena because the reviewer’s statements were defamatory. It then imposed almost $5,000 in sanctions on Yelp for opposing the subpoena.

The trial court’s decision was wrong and dangerous, as it would have prevented online platforms from standing up for their users’ rights in court. Worse, the sanctions sent a signal that platforms could be punished for doing so. When Yelp appealed the decision earlier this year, EFF filed a brief in support [.pdf].

The good news is that the Fourth Appellate District of the California Court of Appeal heard those concerns and reversed the trial court’s ruling regarding Yelp’s ability – known in legal jargon as “standing” – to assert its users’ First Amendment rights.

In upholding Yelp and other online platforms’ legal standing to defend their users’ anonymous speech, the court correctly recognized that the trial court’s ruling would have a chilling effect on anonymous speech and the platforms that allow it. The court also threw out the sanctions the trial court issued against Yelp.

We applaud Yelp for fighting a bad court decision and standing up for its users in the face of court sanctions.  Although we’re glad that the court affirmed Yelp’s ability to fight for its users’ rights, another part of Monday’s ruling may ultimately make it easier for parties to unmask anonymous speakers.

After finding that Yelp could argue on behalf of its anonymous reviewer, the appeals court agreed with the trial court that Yelp nevertheless had to turn over information about its user on grounds that the review contained defamatory statements about the accountant.

In arriving at this conclusion, the court adopted a test that provides relatively weak protections for anonymous speakers. That test requires that plaintiffs seeking to unmask anonymous speakers make an initial showing that their legal claims have merit and that the platforms provide notice to the anonymous account being targeted by the subpoena. Once those prerequisites are met, the anonymous speaker has to be unmasked.

EFF does not believe that the California court’s test adequately protects the First Amendment rights of anonymous speakers, especially given that other state and federal courts have developed more protective tests. Anonymity is often a shield that lets speakers express controversial or unpopular views, and it allows the ensuing debate to focus on the substance of the speech rather than the identity of the speaker.

Courts more protective of the First Amendment right to anonymity typically require that before unmasking speakers, plaintiffs must show that they can prove their claims—similar to what they would need to show at a later stage in the case. And even when plaintiffs prove they have a legitimate case, these courts separately balance plaintiffs’ need to unmask the users against those speakers’ First Amendment rights to anonymity.

By not adopting a more protective test, the California court’s decision potentially makes it easier for civil litigants to pierce online speakers’ anonymity, even when their legal grievances aren’t legitimate. This could invite a fresh wave of lawsuits designed to harass or intimidate anonymous speakers rather than vindicate actual legal grievances.

We hope that we’re wrong about the implications of the court’s ruling and that California courts will take steps to prevent abuse of unmasking subpoenas. In the meantime, online platforms should continue to stand up for their users’ anonymous speech rights and defend them in court when necessary.

Categories: Openness, Privacy, Rights

EFF Urges DHS to Abandon Social Media Surveillance and Automated “Extreme Vetting” of Immigrants

Electronic Frontier Foundation (EFF) - news - 16 November 2017 - 4:10pm

EFF is urging the Department of Homeland Security (DHS) to end its programs of social media surveillance and automated “extreme vetting” of immigrants. Together, these programs have created a privacy-invading integrated system to harvest, preserve, and data-mine immigrants' social media information, including use of algorithms that sift through posts using vague criteria to help determine who to admit or deport.

EFF today joined a letter from the Brennan Center for Justice, Georgetown Law’s Center on Privacy and Technology, and more than 50 other groups urging DHS to immediately abandon its self-described “Extreme Vetting Initiative.” Also, EFF’s Peter Eckersley joined a letter from more than 50 technology experts opposing this program. This follows EFF’s participation last month in comments from the Center for Democracy & Technology and dozens of other advocacy groups urging DHS to stop retaining immigrants’ social media information in a government record-keeping system called “Alien Files” (A-Files).

DHS for some time has collected social media information about immigrants and foreign visitors. DHS recently published a notice announcing its policy of storing that social media information in its A-Files. Also, DHS announced earlier this year that it is developing its “Extreme Vetting Initiative,” which will apply algorithms to the social media of immigrants to automate decision-making in deportation and other procedures.

These far-reaching programs invade the privacy and chill the freedoms of speech and association of visa holders, lawful permanent residents, and naturalized U.S. citizens alike. These policies not only invade privacy and chill speech, they also are likely to discriminate against immigrants from Muslim nations. Furthermore, other countries may imitate DHS’s policies, including countries where civil liberties are nascent and freedom of expression is limited.

Storing Social Media Information in the A-Files Chills First Amendment Rights

The U.S. government assigns alien registration numbers to people immigrating to the United States and to non-immigrants granted authorization to visit. In addition to containing these alien registration numbers, the government’s A-File record-keeping system stores the travel and immigration history of millions of people, including visa holders, asylees, lawful permanent residents, and naturalized citizens.

In our previous post on DHS’s new A-Files policy, we outlined the many problems with the government’s use of this record-keeping system to store, share, and use immigrants’ social media information. In the new comments, we urge DHS to stop storing social media information in the A-Files for the following reasons:

  • Chilled Expression. Activists, artists, and other social media users will feel pressure to censor themselves or even disengage completely from online spaces. Afraid of surveillance, the naturalized and U.S.-born citizens with whom immigrants engage online may also limit their social media presence by sanitizing or deleting their posts.
  • Privacy of Americans Invaded. DHS’s social media surveillance plan, while directed at immigrants, will burden the privacy of naturalized and U.S.-born citizens, too. Even after immigrants are naturalized, DHS will preserve their social media data in the A-Files for many years. DHS’s sweeping surveillance will also invade the privacy of the many millions of U.S.-born Americans who engage with immigrants on social media.
  • Creation of Second-Class Citizens. DHS’s 100-year retention of naturalized citizens’ social media content in A-Files means a life-long invasion of their privacy. Effectively, DHS’s policy will relegate over 20 million naturalized U.S. citizens to second-class status.
  • Unproven Benefits. While DHS claims that collecting social media can help identify security threats, research shows that expressive Internet conduct is an inaccurate predictor of one’s propensity for violence. Furthermore, potential bad actors can easily circumvent social media surveillance by deleting their content or altering their online personas. Also, the meaning of social media content is highly idiosyncratic. Posts replete with sarcasm and allusions are especially difficult to decipher. This task is further complicated by the rising use of non-textual information like emojis, GIFs, and “likes.”

Immigrants feel increasingly threatened by the policies of the Trump administration. Social media surveillance contributes to a climate of fear among immigrant communities, and deters First Amendment activity by immigrants and citizens alike. Thus, EFF urges DHS not to retain social media content in immigrants’ A-Files.

"Extreme Vetting" of Immigrants is Ineffective and Discriminatory

In July, DHS’s Immigration and Customs Enforcement (ICE) sought the expertise of technology companies to help it automate its review of social media and other information for purposes of immigration enforcement. Specifically, ICE documents reveal that DHS seeks to develop:

  1. “processes that determine and evaluate an applicant’s probability of becoming a positively contributing member of society as well as their ability to contribute to national interests”; and
  2. “methodology that allows [the agency] to assess whether an applicant intends to commit criminal or terrorist acts after entering the United States.”

In the November letter, we urge DHS to abandon “extreme vetting” for many reasons.

  • Chilling of Online Expression. ICE’s scouring of social media to make deportation and other immigration decisions will encourage immigrants, and Americans who communicate with immigrants, to censor themselves or delete their social media accounts. This will greatly reduce the quality of our national public discourse.
  • Technical Inadequacy. ICE’s hope to forecast national security threats via predictive analytics is misguided. The necessary computational methods do not exist. Algorithms designed to judge the meaning of text struggle to identify the tone of online posts, and most fail to understand the meaning of posts in other languages. Flawed human judgment can also make human-trained algorithms similarly flawed. (A toy sketch after this list illustrates how crudely simple automated flagging misreads ordinary posts.)
  • Discriminatory Impact. ICE never defines the critical phrases “positively contributing member of society” and “contribute to national interests.” They have no meaning in American law. Efforts to automatically identify people on the basis of these nebulous concepts will lead to discriminatory results. Moreover, these vague and overbroad phrases originate in President Trump’s travel ban executive orders (Nos. 13,769 and 13,780), which courts have enjoined as discriminatory. Thus, extreme vetting would cloak discrimination behind a veneer of objectivity.
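
To make the technical point concrete, here is a minimal, purely illustrative sketch of the kind of naive keyword flagging that automated content screening tends to reduce to. The keyword list, function name, and example posts are invented for this illustration; they do not come from any real vetting system, and no real system is claimed to work exactly this way.

```python
# Purely hypothetical illustration: a naive keyword-based "threat" flagger.
# Neither the keyword list nor the example posts come from any real vetting system.
FLAG_TERMS = ("attack", "bomb", "destroy")

def naive_flag(post: str) -> bool:
    """Flag a post if any keyword appears anywhere in its text."""
    text = post.lower()
    return any(term in text for term in FLAG_TERMS)

posts = [
    "That comedy set absolutely destroyed, best show I've seen",  # harmless slang
    "This new album is the bomb!",                                # harmless idiom
    "We should meet at the usual spot at nine",                   # says nothing a keyword can catch
]

for post in posts:
    print(naive_flag(post), "-", post)
# Output: the slang and the idiom are flagged (false positives), while the vague
# post is not flagged at all. Tone, sarcasm, and context are exactly what simple
# pattern matching cannot see.
```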

In short, EFF urges DHS to abandon “extreme vetting” and any other efforts to automate immigration enforcement. DHS should also stop storing social media information in immigrants’ A-Files. Social media surveillance of our immigrant friends and neighbors is a severe intrusion on digital liberty that does not make us safer.

Categorieën: Openbaarheid, Privacy, Rechten

May a company ask for a copy of your ID when you file an access request?

IusMentis - 16 november 2017 - 8:11am

A reader asked me:

When you want to exercise your legal right to access or erase your personal data, you are increasingly asked to send in a copy of your identity document. Is that actually allowed?

Under the Dutch Personal Data Protection Act (Wet bescherming persoonsgegevens), and soon under the General Data Protection Regulation (AVG or GDPR), you have the right to access all personal data that a company or organization holds about you, as well as the right to have that data corrected or erased (provided it is no longer relevant).

Under the GDPR that last right is also known as the right to be forgotten, but that is more a marketing term than anything else. It simply amounts to the right to have irrelevant or outdated data removed from an organization’s records.

The right of access is expanded further under the GDPR: you will then also be entitled to an electronic copy in a commonly used file format, so that you can reuse the data elsewhere. (With the limitation that this only applies to data processed on the basis of your consent or a contract.)

Organizations must respond to the request within four weeks. They must of course handle your data with care, and part of that is verifying that you really are the person the data is about. Checking this against an identity document is therefore a logical approach.

In principle a company may therefore ask for a copy of your ID, although it must then of course handle that copy with care. Just for fun, you could ask what security measures they take to protect it, because that too falls under your right of access (right to information).

It is of course always wise to cross out your photo and citizen service number (BSN) on such a copy and to write the name of the company you are sending it to across the copy. That prevents identity fraud and limits the impact of data breaches.

Arnoud

From the Internetrecht blog by Arnoud Engelfriet. Buy my book!

Stupid Patent Data of the Month: the Devil in the Details

Electronic Frontier Foundation (EFF) - nieuws - 15 november 2017 - 8:28pm
A Misunderstanding of Data Leads to a Misunderstanding of Patent Law and Policy

Bad patents shouldn’t be used to stifle competition. A process to challenge bad patents when they improperly issue is important to keeping consumer costs down and encouraging new innovation. But according to a recent post on a patent blog, post-grant procedures at the Patent Office regularly get it “wrong,” and improperly invalidate patents. We took a deep dive into the data patent lobbyists are relying on and found that, far from supporting their arguments, it conflicts with the claims they’re making.

The Patent Office has several procedures for determining whether an issued patent was improperly granted for an invention that does not meet the legal standard for patentability. The most significant of these processes is called inter partes review, and is essential to reining in overly broad and bogus patents. The process helps prevent patent trolling by providing a target with a low-cost avenue for defense, so it is harder for trolls to extract a nuisance-value settlement simply because litigating is expensive. The process is, for many reasons, disliked by some patent owners. Congress is taking a new look at it right now as a result of patent owners’ latest attempts to insulate their patents from review.

An incorrect claim about inter partes review (IPR) and similar procedures at the Patent Trial and Appeal Board (PTAB) has been circulating, and was recently repeated in written comments at a congressional hearing by Philip Johnson, former head of intellectual property at Johnson & Johnson. Josh Malone and Steve Brachmann, writing for a patent blog called “IPWatchdog,” are the source of this error. In their article, cited in the comments to Congress, they claim that the PTAB is issuing decisions contrary to district courts at a very high rate.

We took a closer look at the data they use, and found that the rate of disagreement is actually quite small: about 7%, not the 76% claimed by Malone and Brachmann. How did they get it so wrong? To explain, we’ll have to get into the nuts and bolts of how such an analysis can be run.

Malone and Brachmann relied on data provided by a service called “Docket Navigator,” which collects statistics and documents related to patent litigation and enforcement. Their search counted how many patents Docket Navigator marked with both a finding of “unpatentable” (from the Patent Office) and a finding of “not invalid” (from a district court).

This is a very, very simplistic analysis. For instance, it would consider an unpatentability finding by the PTAB about Claim 1 of a patent to be inconsistent with a district court finding that Claim 54 is not invalid. It would consider a finding of anticipation by the PTAB to be inconsistent with a district court rejecting an argument for invalidity based on a lack of written description. These are entirely different legal issues; different results are hardly inconsistent.

EFF, along with CCIA, ran the same Docket Navigator search Malone and Brachmann ran for patents found “not invalid” and “unpatentable or not unpatentable,” generating 273 results, and a search for patents found “unpatentable” and “not invalid,” generating 208 results (our analysis includes a few results that weren’t yet available when Malone and Brachmann ran their search). We looked into each of the 208 results that Docket Navigator returned for patents found unpatentable and not invalid. Our analysis shows that the “200” number, and consequently the rate at which the Patent Office is supposedly “wrong” compared with how often a court supposedly got it “right,” is well off the mark.

We reached our conclusions based on the following methodology:

  • We considered “inconsistent results” to occur any time the Patent Office reached a determination on any one of the conditions for patentability (namely, any of 35 U.S.C. §§ 101, 102, 103 or 112) and the district court reached a different conclusion based on the same condition for patentability, with some important caveats, as discussed below. For example, if the Patent Office found claims invalid for lack of novelty (35 U.S.C. § 102), we would not treat a district court finding of claims definite (35 U.S.C. § 112(b)) as inconsistent.
  • We did not distinguish between a finding of invalidity or lack of invalidity based on lack of novelty (35 U.S.C. § 102) or obviousness (35 U.S.C. § 103), as these bases are highly related. For example, if the Patent Office determined claims unpatentable based on anticipation, we would mark as inconsistent any jury finding that the claims were not obvious.
  • We did not consider a decision relating the validity of one set of claims to be inconsistent with a decision relating to the validity of a different, distinct set of claims. For example, if the Patent Office found claims 1-5 of a patent not patentable, we would not consider that inconsistent with a district court finding claims 6-10 not invalid. We would count as inconsistent, however, any two differing decisions that overlapped in terms of claims, even if there was not identity of claims.
  • We distinguished between the separate requirements of 35 U.S.C. § 112. For example, a district court finding of definiteness under 35 U.S.C. § 112(b) would not be treated as inconsistent with a Patent Office finding of lack of written description under 35 U.S.C. § 112(a).
  • We did not consider a district court decision to be inconsistent with Patent Office decision if that district court decision was later overturned by the Federal Circuit. However, we did treat a Patent Office decision as inconsistent with a district court decision even if that Patent Office decision were later reversed.1 For example, if the Patent Office found claims to be not patentable, but the Patent Office was later reversed by the Federal Circuit, we would still mark that decision as inconsistent with the district court. We even counted Patent Office decisions as inconsistent in the five cases where they were affirmed by the Federal Circuit and therefore were correct according to a higher authority than a district court. We did this in order to ensure we included results tending to support Malone and Brachmann’s thesis that the Patent Office was reaching the “wrong” results.
  • We excluded fourteen results that did not stem from any district court finding. Specifically, several patents were included because of findings by the International Trade Commission, an agency (like the Patent Office) that hears cases outside of Article III courts and without a jury. Those results do not fit Malone and Brachmann’s premise that the patents were held “valid in full and fair trials in a court of law.”
  • We excluded two results that should not have been included in the set and appear to be a coding error by Docket Navigator. These results were excluded because there was no final decision from the Patent Office as to unpatentability.
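
To make the methodology above easier to follow, here is a rough, hypothetical sketch of the per-patent consistency check it describes. The field names and sample records are invented for illustration (this is not Docket Navigator’s schema or the actual script used for the analysis); the grouping of §§ 102 and 103 into a single bucket follows the methodology stated above.

```python
# Hypothetical sketch of the per-patent consistency check described above.
# Field names and sample records are invented; this is not Docket Navigator's
# actual schema or EFF's actual analysis script.

def bucket(ground: str) -> str:
    # Per the methodology, novelty (102) and obviousness (103) are one bucket.
    return "102/103" if ground in ("102", "103") else ground

def inconsistent(pto_rulings, court_rulings) -> bool:
    """A Patent Office 'unpatentable' ruling conflicts with a court 'not invalid'
    ruling only if both address overlapping claims on the same statutory ground
    (e.g. 102/103 vs. 102/103, not 103 vs. 112(b))."""
    for pto in pto_rulings:
        for court in court_rulings:
            same_ground = bucket(pto["ground"]) == bucket(court["ground"])
            overlapping_claims = set(pto["claims"]) & set(court["claims"])
            if same_ground and overlapping_claims:
                return True
    return False

# Mirrors the '883 patent discussed below: the Patent Office found claims 1, 3,
# and 4 obvious (103), while the court only found the same claims definite (112(b)).
pto_rulings = [{"ground": "103", "claims": [1, 3, 4]}]
court_rulings = [{"ground": "112(b)", "claims": [1, 3, 4]}]
print(inconsistent(pto_rulings, court_rulings))  # False: different grounds, so no conflict
```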

Here’s what we found of the 194 remaining cases:

  • A plurality of the results (n=85) were only included because the Patent Office determined claims were unpatentable based on failure to meet one or more requirements for patentability (usually 35 U.S.C. § 102 or 103) and a district court found the claims met other requirements for patentability (usually 35 U.S.C. § 101 or 112). That is, the district court made no finding whatsoever relating to the reasons why the Patent Office determined the claims should be canceled. Thus the Patent Office and the court did not disagree as to a finding on validity.
    • For example, the Docket Navigator results include U.S. Patent No. 5,563,883. The Patent Office determined claims 1, 3, and 4 of that patent were unpatentable based on obviousness (35 U.S.C. § 103). A district court determined that those same claims, however, met the definiteness requirements (35 U.S.C. § 112(b)). The Federal Circuit affirmed the Patent Office’s decision invalidating the claims, and the district court did not decide whether those claims were obvious at all.
  • A further 46 results were situations where either (1) the patent owner requested the Patent Office cancel claims or (2) claims were stipulated to be “valid” as part of a settlement in district court. Thus the Patent Office and the court findings were not inconsistent because at least one of them did not reach any decision on the merits.
    • For example, the Docket Navigator results include U.S. Patent No. 6,061,551. A jury found the claims not invalid, but the Federal Circuit reversed that finding, holding the claims invalid. After that determination, the patent owner requested an adverse judgment at the Patent Office.
    • As another example, the Docket Navigator results include U.S. Patent No. 7,676,411. The Patent Office found claims invalid as abstract (35 U.S.C. § 101) and obvious (35 U.S.C. § 103). Because the parties stipulated that this patent was “valid” as part of a settlement, which is generally not considered a merits determination, this patent is also tagged as “not invalid” by Docket Navigator.
  • A further 15 results were not inconsistent for a variety of reasons.
    • For example, five results were not inconsistent because the Patent Office and the district court considered different patent claims. In another, U.S. Patent No. 7,135,641, a jury found claims not invalid, but the district court judge reversed that finding post-trial. In yet another, U.S. Patent No. 5,371,734 was held “not invalid” on summary judgment in the district court, but that determination was later reversed by the Federal Circuit.

Under this initial cut, only 48 of the entries arguably could be considered to have inconsistent or disagreeing results between the Patent Office and a district court.

But in the majority of those cases (n=28), the judge or jury considered one set of prior art when determining whether the claims were new and nonobvious, while the Patent Office considered a different set. It is not surprising that the two forums would consider different evidence: Patent Office proceedings generally consider only certain types of prior art (printed publications). That a district court proceeding may result in a finding of “not invalid” based on, e.g., prior use is not an inconsistent result.

Eliminating the results where the Patent Office was considering completely different arguments and art leaves only 20 instances, out of the 273 cases in which a district court determined a patent “not invalid” for some reason, where the Patent Office arguably reached a different conclusion than a district court. That means the Patent Office is “inconsistent” with district courts only about 7% of the time, not 76% of the time.

It is also important to keep in mind that there have been over 1,800 final decisions in inter partes review, covered business method review, and post-grant review proceedings. Out of all of those, only 20 reached a conclusion that might be considered inconsistent with a district court in a way that negatively impacts patent owners: a remarkably low rate of roughly 1%. Moreover, inconsistent results happen even within the court system. For example, in Abbott v. Andrx, 452 F.3d 1331, the Federal Circuit found that Abbott’s patent was likely to be held invalid. Only one year later, in Abbott v. Andrx, 473 F.3d 1196, the Federal Circuit found that the same patent was likely to be not invalid. The two different results were explained by the fact that the two defendants had presented different defenses. This is not unusual. The mere fact that there may be different results does not mean the whole system is faulty.
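
The arithmetic behind the competing figures is easy to verify. Here is a quick sketch using the counts reported above; the ratio 208/273 appears to be the source of the 76% claim, and the 273, 208, 20, and 1,800 figures are simply those given in this post.

```python
# Recomputing the headline rates from the counts reported above.
not_invalid_results = 273      # district court "not invalid" findings in the search
naive_overlap = 208            # of those, also tagged "unpatentable" by Docket Navigator
actual_disagreements = 20      # what remains after the claim-by-claim review above
final_ptab_decisions = 1800    # final IPR/CBM/PGR decisions to date (approximate)

print(f"Naive rate: {naive_overlap / not_invalid_results:.0%}")            # 76%
print(f"Reviewed rate: {actual_disagreements / not_invalid_results:.0%}")  # 7%
print(f"Share of all PTAB decisions: {actual_disagreements / final_ptab_decisions:.1%}")  # 1.1%
```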

An analysis like ours takes time, and a few cases might slip through the cracks or be coded incorrectly, but the overall result demonstrates that the vast majority of patent owners never face inconsistent results between a district court and the Patent Office.

It is disappointing that Johnson, Malone, and Brachmann made claims that the data don’t support, but the episode demonstrates a valuable lesson: when using data sets, it is important to understand what, exactly, the data are and how to interpret them. Unfortunately, it looks like Malone and Brachmann’s misreading of the results provided by Docket Navigator propagated to Johnson’s testimony, and would likely have traveled further had no one looked at it more closely.

We’ve used both Docket Navigator and Lex Machina in our analyses on numerous occasions, and even in briefs we submit to courts. Both services provide extremely valuable information about the state of patent litigation and policy. But their usefulness is diminished when the data they present are misunderstood. As always, the devil is in the details.


  • 1. For this reason, our results differ slightly from those of CCIA, reported here. CCIA did not treat decisions as inconsistent if the Patent Office decision was later affirmed on appeal. Five patents we considered inconsistent in our analysis were excluded in CCIA’s analysis. Each approach has merit.
Categorieën: Openbaarheid, Privacy, Rechten

Announcing the Security Education Companion

Electronic Frontier Foundation (EFF) - nieuws - 15 november 2017 - 5:45pm

The need for robust personal digital security is growing every day. From grassroots groups to civil society organizations to individual EFF members, people from across our community are voicing a need for accessible security education materials to share with their friends, neighbors, and colleagues.

We are thrilled to help. Today, EFF has launched the Security Education Companion, a new resource for people who would like to help their communities learn about digital security but are new to the art of security training.

It’s rare to find someone with not only technical expertise but also a strong background in pedagogy and education. More often, folks are stronger in one area: someone might have deep technical expertise but little experience teaching, or, conversely, someone might have a strong background in teaching and facilitation but be new to technical security concepts. The Security Education Companion is meant to help these kinds of beginner trainers share digital security with their friends and neighbors in short awareness-raising gatherings.

Lesson modules guide you through creating sessions for topics like passwords and password managers, locking down social media, and end-to-end encrypted communications, along with handouts, worksheets, and other remix-able teaching materials. The Companion also includes a range of shorter “Security Education 101” articles to bring new trainers up to speed on getting started with digital security training, foundational teaching concepts, and the nuts and bolts of planning a workshop.

Teaching requires mindful facilitation, thoughtful layering of content, sensitivity to learners’ needs and concerns, and mutual trust built up over time. When teaching security in particular, the challenge includes communicating counterintuitive security concepts, navigating different devices and operating systems, recognizing learners’ different attitudes toward and past experiences with various risks, and taking into account a constantly changing technical environment. What people learn—or don’t learn—has real repercussions.

Nobody knows this better than the digital security trainers currently pushing this work forward around the world, and we’ve been tremendously fortunate to learn from their expertise. We’ve interviewed dozens of U.S.-based and international trainers about what learners struggle with, their teaching techniques, the types of materials they use, and what kinds of educational content and resources they want. We’re working hard to ensure that the Companion supports, complements, and adds to the existing collective body of training knowledge and practice.

We will keep adding new materials in the coming months, so check back often as the Companion grows and improves. Together, we look forward to improving as security educators and making our communities safer.

Categorieën: Openbaarheid, Privacy, Rechten

Decision on gas extraction in Groningen must be taken again

The Minister of Economic Affairs and Climate Policy must take a new decision on gas extraction in Groningen. He has not properly substantiated his earlier decision to allow extraction of 21.6 billion cubic meters per gas year for the next five years. That is the ruling of the Administrative Jurisdiction Division of the Raad van State (Council of State) in a judgment issued today (15 November 2017). Among other things, the minister did not sufficiently take the risk to people in the earthquake area into account in his reasoning. Nor did he explain why security of supply was taken as the lower limit for the amount of gas to be extracted, despite the uncertainty about the consequences. Moreover, he did not make clear what measures are available to reduce the need for gas. The minister has one year to take a new, better substantiated decision. Gas extraction will not be halted in the meantime: the Administrative Jurisdiction Division ruled that NAM may continue to extract gas in the intervening period.

Approval and amendment decisions annulled

The Administrative Jurisdiction Division annulled both the approval decision of the then Minister of Economic Affairs of September 2016 and his amendment decision of May 2017. In the latter decision he allowed NAM to extract 21.6 billion cubic meters of gas per year from the Groningen field in the coming years. More than 20 objectors had appealed against both decisions, including the Groninger Bodem Beweging, individual citizens, the provincial executive of Groningen, and various Groningen municipalities.

Risks in the earthquake area

In his decisions the minister assumed that it is not possible to assess the risks that gas extraction poses to people in the earthquake area. But he was unable to convince the Administrative Jurisdiction Division that this position is correct. At the very least, the minister should have carried out further research into ways of mapping the risks, or he should have better explained why, without such research, he nevertheless approved an extraction level of 21.6 billion cubic meters. The Administrative Jurisdiction Division finds it "not acceptable" that the minister fixed gas extraction for five years without having assessed its risks. If those risks indeed cannot be assessed, "the minister may at least be expected to investigate and explain in what alternative way the safety interests of the people in the earthquake area are taken into account in the decision-making," according to the Administrative Jurisdiction Division.

Security of supply

The minister considers it important that the amount of gas extracted can meet the demand for gas (security of supply). The 21.6 billion cubic meters per gas year is sufficient for that purpose. In the view of the Administrative Jurisdiction Division, the minister rightly included security of supply in his considerations. But because it is questionable whether the risk of the permitted extraction is acceptable, and because the minister fixed the extraction level for five years, he should have explained why he held on to security of supply as the lower limit for the amount of gas to be extracted. "The minister should have explained why in this case there are no circumstances that require extracting less gas than the amount needed for security of supply in that period. Uncertainties about the risk have existed for a long time," according to the Administrative Jurisdiction Division. The minister should also have clarified what options exist to reduce the amount of gas needed for security of supply.

Interim measure

Because of the defects in the minister's decisions, the Administrative Jurisdiction Division annulled them. Had it left it at that, the extraction plan from 2007 would apply again, which would mean that NAM could extract an unlimited amount of gas. The objectors would then be worse off than if the Division had not annulled the decisions, which is not acceptable. The Division therefore imposed a temporary measure, a 'voorlopige voorziening' (interim injunction), in its ruling. Under this measure, gas extraction may continue in accordance with the most recent amendment decision, which means that for now NAM may extract 21.6 billion cubic meters of gas in the coming year. This interim measure remains in force until the new decision that the minister must take on gas extraction has entered into effect.

Gas extraction in Groningen

NAM has been extracting gas from the Groningen field since 1963. Extraction takes place in four regions in Groningen, almost all of which consist of multiple production locations: Loppersum, Zuid-West, Eemskanaal, and Oost. In September 2016 the then Minister of Economic Affairs decided to reduce total extraction from the Groningen field from 39.4 billion cubic meters in 2015 to 24 billion cubic meters in 2016. In May 2017 he decided that, starting with the 2017-2018 gas year, no more than 21.6 billion cubic meters of gas per gas year could be extracted until 2021.

Categorieën: Rechten

Minister will not act against Russian site holding private documents of Dutch citizens

IusMentis - 15 november 2017 - 8:17am

No action will be taken against the Russian website DocPlayer, which automatically publishes PDF files belonging to internet users. Nu.nl reported this last week. The site publishes 4.3 million files, collected and posted by a computer program. "However, it cannot be assumed in advance that these data were obtained illegally." Because if they were obtained from public sources, it is not a criminal offense. Uh, wait, what?

In September the site caused a stir when RTL reported that millions of documents containing private information, allegedly obtained from unclear sources, could be found there. Parliamentary questions were asked: what was the minister going to do about this?

Not much, as it turns out. Which in itself is not that strange: what can you really do against a site like that, hosted in Germany? He said, sarcastically. But more seriously: as a minister there is genuinely little you can do here; that task lies with regulators such as the Autoriteit Persoonsgegevens (the Dutch Data Protection Authority) or with the people affected by this publication. However annoying it may be, it is a civil matter: you have to bring a lawsuit yourself if your copyright is infringed or your privacy is thrown out into the open.

Filing a removal request with the site (haha) and with Google (who knows) would therefore be the obvious first step. On top of that, an enforcement request with the Autoriteit Persoonsgegevens, although the minister immediately raises an objection there: the site is located in Germany, so the authority has no jurisdiction. No, but surely it could mediate with its German counterparts?

The argument that the data were "consciously or unconsciously placed on the internet by users and subsequently collected by this site" is of course completely irrelevant, and it is a pity that it features so prominently at the front of the answers. Because it does not matter: even if you deliberately put something on your own site, it still may not be copied onto such an opportunistic scraper site.

Arnoud

From the Internetrecht blog by Arnoud Engelfriet. Buy my book!

Appeals Court’s Disturbing Ruling Jeopardizes Protections for Anonymous Speakers

Electronic Frontier Foundation (EFF) - nieuws - 15 november 2017 - 3:38am

A federal appeals court has issued an alarming ruling that significantly erodes the Constitution’s protections for anonymous speakers—and simultaneously hands law enforcement a near unlimited power to unmask them.

The Ninth Circuit’s decision in  U.S. v. Glassdoor, Inc. is a significant setback for the First Amendment. The ability to speak anonymously online without fear of being identified is essential because it allows people to express controversial or unpopular views. Strong legal protections for anonymous speakers are needed so that they are not harassed, ridiculed, or silenced merely for expressing their opinions.

In Glassdoor, the court’s ruling ensures that any grand jury subpoena seeking the identities of anonymous speakers will be valid virtually every time. The decision is a recipe for disaster precisely because it provides little to no legal protections for anonymous speakers.

EFF applauds Glassdoor for standing up for its users’ First Amendment rights in this case and for its commitment to do so moving forward. Yet we worry that without stronger legal standards—which EFF and other groups urged the Ninth Circuit to apply (read our brief filed in the case)—the government will easily compel platforms to comply with grand jury subpoenas to unmask anonymous speakers.

The Ninth Circuit Undercut Anonymous Speech by Applying the Wrong Test

The case centers on a federal grand jury in Arizona investigating allegations of fraud by a private contractor working for the Department of Veterans Affairs. The grand jury issued a subpoena to Glassdoor, which operates an online platform that allows current and former employees to comment anonymously about their employers, seeking the identities of the users behind eight accounts that posted about the contractor.

Glassdoor challenged the subpoena by asserting its users’ First Amendment rights. When the trial court ordered Glassdoor to comply, the company appealed to the U.S. Court of Appeals for the Ninth Circuit.

The Ninth Circuit ruled that because the subpoena was issued by a grand jury as part of a criminal investigation, Glassdoor had to comply absent evidence that the investigation was being conducted in bad faith.

There are several problems with the court’s ruling, but the biggest is that in adopting a “bad faith” test as the sole limit on when anonymous speakers can be unmasked by a grand jury subpoena, it relied on a U.S. Supreme Court case called Branzburg v. Hayes.

In challenging the subpoena, Glassdoor rightly argued that Branzburg was not relevant because it dealt with whether journalists had a First Amendment right to  protect the identities of their confidential sources in the face of grand jury subpoenas, and more generally, whether journalists have a First Amendment right to gather the news. This case, however, squarely deals with Glassdoor users’ First Amendment right to speak anonymously.

The Ninth Circuit ran roughshod over the issue, calling it “a distinction without a difference.” But here’s the problem: although the law is all over the map as to whether the First Amendment protects journalists’ ability to guard their sources’ identities, there is absolutely no question that the First Amendment grants anonymous speakers the right to protect their identities.

The Supreme Court has repeatedly ruled that the First Amendment protects anonymous speakers, often by emphasizing the historic importance of anonymity in our social and political discourse. For example, many of our founders spoke anonymously while debating the provisions of our Constitution.

Because the Supreme Court in Branzburg did not outright rule that reporters have a First Amendment right to protect their confidential sources, it adopted a rule that requires a reporter to respond to a grand jury subpoena for their source’s identity unless the reporter can show that the investigation is being conducted in bad faith. This is a very weak standard and difficult to prove.

By contrast, because the right to speak anonymously has been firmly established by the Supreme Court and in jurisdictions throughout the country, the tests for when parties can unmask those speakers are more robust and protective of their First Amendment rights. These tests more properly calibrate the competing interests between the government’s need to investigate crime and the First Amendment rights of anonymous speakers.

The Ninth Circuit’s reliance on Branzburg effectively eviscerates any substantive First Amendment protections for anonymous speakers by not imposing any meaningful limitation on grand jury subpoenas. Further, the court’s ruling puts the burden on anonymous speakers—or platforms like Glassdoor standing in their shoes—to show that an investigation is being conducted in bad faith before setting aside the subpoena.

The Ninth Circuit’s reliance on Branzburg is also wrong because the Supreme Court ruling in that case was narrow and limited to the situation involving reporters’ efforts to guard the identities of their confidential sources. As Justice Powell wrote in his concurrence, “I … emphasize what seems to me to be the limited nature of the Court’s ruling.” The standards in that unique case should not be transported to cases involving grand jury subpoenas to unmask anonymous speakers generally. However, that’s what the court has done—expanded Branzburg to now apply in all instances in which a grand jury subpoena targets individuals whose identities are unknown to the grand jury.

Finally, the Ninth Circuit’s use of Branzburg is further improper because there are a number of other cases and legal doctrines that more squarely address how courts should treat demands to pierce anonymity. Indeed, as we discussed in our brief, there is a whole body of law that applies robust standards to unmasking anonymous speakers, including the Ninth Circuit’s previous decision in Bursey v. U.S., which also involved a grand jury.

The Ninth Circuit Failed to Recognize the Associational Rights of Anonymous Online Speakers

The court’s decision is also troubling because it takes an extremely narrow view of the kind of anonymous associations that should be protected by the First Amendment. In dismissing claims by Glassdoor that the subpoena chilled its users’ First Amendment rights to privately associate with others, the court ruled that because Glassdoor was not itself a social or political organization such as the NAACP, the claim was “tenuous.”

There are several layers to the First Amendment right of association, including the ability of individuals to associate with others, the ability of individuals to associate with a particular organization or group, and the ability for a group or organization to maintain the anonymity of members or supporters.

Although it’s true that Glassdoor users are not joining an organization like the NAACP or a union, the court’s analysis ignores that other associational rights are implicated by the subpoena in this case. At minimum, Glassdoor’s online platform offers the potential for individuals to organize and form communities around their shared employment experiences. The First Amendment must protect those interests even if Glassdoor lacks an explicit political goal.

Moreover, even if it’s true that Glassdoor users may not have an explicitly political goal in commenting on their current or past employers, they are still associating online with others with similar experiences to speak honestly about what happens inside companies, what their professional experiences are like, and how they believe those employers can improve.

The risk of being identified as a Glassdoor user is a legitimate one that courts should recognize as analogous to the risks of civil rights groups or unions being compelled to identify their members. Disclosure in both instances chills individuals’ abilities to explore their own experiences, attitudes, and beliefs.

The Ninth Circuit Missed an Opportunity to Vindicate Online Speakers’ First Amendment Rights

Significantly absent from the court’s decision was any real discussion about the value of anonymous speech and its historical role in our country. This is a shame because the case would have been a great opportunity to show the importance of First Amendment protections for online speakers.

EFF has long fought for anonymity online because we know its importance in fostering robust expression and debate. Subpoenas such as the one issued to Glassdoor deter people from speaking anonymously about issues related to their employment. Glassdoor provides a valuable service because its anonymous reviews help inform other people’s career choices while also keeping employers accountable to their workers and potentially the general public.

The Ninth Circuit’s decision appeared unconcerned with this reality, and its “bad faith” standard places no meaningful limit on the use of grand jury subpoenas to unmask anonymous speakers. This will ultimately harm speakers who can now be more easily targeted and unmasked, particularly if they have said something controversial or offensive. 

Categorieën: Openbaarheid, Privacy, Rechten

Who Has Your Back in Colombia? Our Third-Annual Report Shows Progress

Electronic Frontier Foundation (EFF) - nieuws - 15 november 2017 - 1:34am

Fundación Karisma, in cooperation with EFF, has released its third-annual ¿Dónde Están Mis Datos? report, the Colombian version of EFF’s Who Has Your Back. And this year’s report has some good news.
 
According to the Colombian Ministry of Information and Communication Technologies, broadband Internet penetration in Colombia is well over 50% and growing fast. Like users around the world, Colombians put their most private data, including their online relationships, political, artistic and personal discussions, and even their minute-by-minute movements online. And all of that data necessarily has to go through one of a handful of ISPs. But without transparency from those ISPs, how can Colombians trust that their data is being treated with respect?
 
This project is part of a series across Latin America, adapted from EFF’s annual Who Has Your Back? report. The reports are intended to evaluate mobile and fixed ISPs to see which stand with their users when responding to government requests for personal information. While there’s definitely still room to improve, the third edition of the Colombian report shows substantial progress.
 
The full report is available only in Spanish from Fundación Karisma, but here are some highlights.
 
This third-annual report evaluates companies more thoroughly than ever before. The 2017 edition doesn’t just look at ISPs’ data practices; it evaluates whether companies have corporate policies on gender equality and accessibility, whether they publicly report data breaches, and whether they’ve adopted HTTPS to protect their users and employees. By and large, the companies didn’t do very well on the new criteria, but that’s part of the point. Reports like this help push the companies to do better.
 
That’s especially clear by looking at the criteria evaluated in previous years. There’s been significant improvement.
 
New for 2017, the Colombian ISP ETB has released the country’s first transparency report. This type of report, which lists the number and type of legal demands for data from government and law enforcement, is essential to helping users understand the scope of Internet surveillance and make informed decisions about storing their sensitive data or engaging in private communications. We’ve long urged companies to release these reports regularly, and we’re happy to see a Colombian ISP join in.
 
In addition, this year’s report shows that more companies than ever are releasing public information about their data protection policies and their related corporate policies. We applaud this transparency, especially when their policies go further than the law requires, as is the case with both Telefónica and ETB.
 
Finally, more companies than ever are taking the proactive step of notifying their users of data demands, even when they are not formally required to do so. This commitment is important because it gives users a chance to defend themselves against overreaching government requests. In most situations, a user is in a better position than a company to challenge a government request for personal information, and of course, the user has more incentive to do so.
 
We’re proud to have worked with Fundación Karisma to push for transparency and users’ rights in Colombia and look forward to seeing further improvement in years to come.

Categorieën: Openbaarheid, Privacy, Rechten


20 Years of Protecting Intermediaries: Legacy of 'Zeran' Remains a Critical Protection for Freedom of Expression Online

Electronic Frontier Foundation (EFF) - nieuws - 14 november 2017 - 9:58pm

This article first appeared on Nov. 10 in Law.com.

At the Electronic Frontier Foundation (EFF), we are proud to be ardent defenders of §230. Even before §230 was enacted in 1996, we recognized that all speech on the Internet relies upon intermediaries, like ISPs, web hosts, search engines, and social media companies. Most of the time, it relies on more than one. Because of this, we know that intermediaries must be protected from liability for the speech of their users if the Internet is to live up to its promise, as articulated by the U.S. Supreme Court in ACLU v. Reno, of enabling “any person … [to] become a town crier with a voice that resonates farther than it could from any soapbox” and hosting “content … as diverse as human thought.”

As we hoped—and based in large measure on the strength of the Fourth Circuit’s decision in Zeran—§230 has proven to be one of the most valuable tools for protecting freedom of expression and innovation on the Internet. In the past two decades, we’ve filed well over 20 legal briefs in support of §230, probably more than on any other issue, in response to attempts to undermine or sneak around the statute. Thankfully, most of these attempts were unsuccessful. In most cases, the facts were ugly—Zeran included. We had to convince judges to look beyond the individual facts and instead focus on the broader implications: that forcing intermediaries to become censors would jeopardize the Internet’s promise of giving a voice to all and supporting more robust public discourse than ever before possible.

This remains true today, and it is worth remembering now, in the face of new efforts in both Congress and the courts to undermine §230’s critical protections.

Attacks on §230: The First 20 Years

The first wave of attacks on §230’s protections came from plaintiffs who tried to plead around §230 in an attempt to force intermediaries to take down online speech they didn’t like. Zeran was the first of these, with an attempt to distinguish between “publishers” and “distributors” of speech that the Fourth Circuit rightfully rejected. As we noted above, the facts were not pretty: the plaintiff sought to hold AOL responsible after an anonymous poster used his name and phone number on an AOL message board to indicate—incorrectly—that he was selling horribly offensive t-shirts about the Oklahoma City bombing. The court rightfully held that §230 protected against liability for both publishing and distributing user content.

The second wave of attacks came from plaintiffs trying to deny §230 protection to ordinary users who reposted content authored by others—i.e., an attempt to limit the statute to protecting only formal intermediaries. In one case, Barrett v. Rosenthal, the attackers succeeded at the California court of appeals. But in 2006, the California Supreme Court ruled that §230 protects all non-authors who republish content, not just formal intermediaries like ISPs. This ruling—which was urged by EFF as amicus along with several other amici—still protects ordinary bloggers and Facebook posters in California from liability for content they merely republish. Unsurprisingly, the California Supreme Court’s opinion included a four-page section dedicated entirely to Zeran.

Another wave of attacks, also in the mid-2000s, came as plaintiffs tried to use the Fair Housing Act to hold intermediaries responsible when users posted housing advertisements that violated the law. Both Craigslist and Roommates.com were sued over discriminatory housing advertisements posted by their users. The Seventh Circuit, at the urging of EFF and other amici, held that §230 immunized Craigslist from liability for classified ads posted by its users—citing Zeran first in a long line of cases supporting broad intermediary immunity. Despite our best efforts, however, the Ninth Circuit found that §230 did not immunize Roommates.com from liability if, indeed, it was subject to the law. The majority opinion ignored both us and Zeran, citing the case only once in a footnote responding to the strong dissent. It found that Roommates.com could be at least partially responsible for the development of the ads because it had forced its users to fill out a questionnaire about housing preferences that included options that the plaintiffs asserted were illegal. The website endured four more years of needless litigation before the Ninth Circuit ultimately found that it hadn’t actually violated any anti-discrimination laws at all, even with the questionnaire. The court left its earlier opinion intact, however, and we were worried the exception carved out in Roommates.com would wreak havoc on §230’s protections. It luckily hasn’t been applied broadly by other courts—undoubtedly thanks in large part to Zeran’s stronger legal analysis and influence.

The Fight Continues

We are now squarely in the middle of a fourth wave of attack—efforts to hold intermediaries responsible for extremist or illegal online content. The goal, again, seems to be forcing intermediaries to actively screen users and censor speech. Many of these efforts are motivated by noble intentions, and the speech at issue is often horrible, but these efforts also risk devastating the Internet as we know it.

Some of the recent attacks on §230 have been made in the courts. So far, they have not been successful. In these cases, plaintiffs are seeking to hold social media platforms accountable on the theory that providing a platform for extremist content counts as material support for terrorism. Courts across the country have universally rejected these efforts. The Ninth Circuit will be hearing one of these cases, Fields v. Twitter, in December.

But the current attacks are unfortunately not only in the courts. The more dangerous threats are in Congress. Both the House and Senate are considering bills that would exempt charges under federal and state criminal and civil laws related to sex trafficking from §230’s protections—the Stop Enabling Sex Trafficking Act (S. 1693) (SESTA) in the Senate, and the Allow States and Victims to Fight Online Sex Trafficking Act (H.R. 1865) in the House. While the legislators backing these laws are largely well-meaning, and while these laws are presented as targeting commercial classified ads websites like Backpage.com, they don’t stop there. Instead, SESTA and its House counterpart punish small businesses that just want to run a forum where people can connect and communicate. They will have disastrous consequences for community bulletin boards and comment sections, without making a dent in sex trafficking. In fact, it is already a federal criminal offense for a website to run ads that support sex trafficking, and §230 doesn’t protect against prosecutions for violations of federal criminal laws.

Ultimately, SESTA and its House counterpart would impact all platforms that host user speech, big and small, commercial and noncommercial. They would also impact any intermediary in the chain of online content distribution, including ISPs, web hosting companies, websites, search engines, email and text messaging providers, and social media platforms—i.e., the platforms that people around the world rely on to communicate and learn every day. All of these companies come into contact with user-generated content: ads, emails, text messages, social media posts. Under these bills, if any of this user-generated content somehow related to sex trafficking, even without the platform’s knowledge, the platform could be held liable.

Zeran’s analysis from 20 years ago demonstrates why this is a huge problem. Because these bills would have far-reaching implications—just like every other legislative proposal to limit §230—they would open Internet intermediaries, companies, nonprofits, and community-supported endeavors alike to massive legal exposure. Under this cloud of legal uncertainty, new websites, along with their investors, would be wary of hosting open platforms for speech—or of even starting up in the first place—for fear that they would face crippling lawsuits if third parties used their websites for illegal conduct. They would have to bear litigation costs even if they were completely exonerated, as Roommates.com was after many years. Small platforms that already exist could easily go bankrupt trying to defend against these lawsuits, leaving only larger ones. And the companies that remained would be pressured to over-censor content in order to proactively avoid being drawn into a lawsuit.

EFF is concerned not only because this would chill new innovation and drive smaller players out of the market. Ultimately, these bills would shrink the spaces online where ordinary people can express themselves, with disastrous results for community bulletin boards and local newspapers’ comment sections. They threaten to transform the relatively open Internet of today into a closed, limited, censored Internet. This is the very result that §230 was designed to prevent.

Since Zeran, the courts have recognized that without strong §230 protections, the promise of the Internet as a great leveler—amplifying and empowering voices that have never been heard, and allowing ideas to be judged on their merits rather than on the deep pockets of those behind them—will be lost. Congress needs to abandon its misguided efforts to undermine §230 and heed Zeran’s time-tested lesson: if we fail to protect intermediaries, we fail to protect online speech for everyone.

Categorieën: Openbaarheid, Privacy, Rechten

EFF’s Street-Level Surveillance Project Dissects Police Technology

Electronic Frontier Foundation (EFF) - nieuws - 14 november 2017 - 9:36pm

Step onto any city street and you may find yourself subject to numerous forms of police surveillance—many imperceptible to the human eye.

A cruiser equipped with automated license plate readers (also known as ALPRs) may have just logged where you parked your car. A cell-site simulator may be capturing your cell-phone data incidentally while detectives track a suspect nearby. That speck in the sky may be a drone capturing video of your commute. Police might use face recognition technology to identify you in security camera footage.

EFF first launched its Street-Level Surveillance project in 2015 to help inform the public about the advanced technologies that law enforcement are deploying in our communities, often without any transparency or public process.  We’ve scored key victories in state legislatures and city councils, limiting the adoption of these technologies and how they can be used, but the surveillance continues to spread, agency by agency. To combat the threat, EFF is proud to release the latest update to our work: a new mini-site that shines light on a wide range of surveillance technologies, including ALPRs, cell-site simulators, drones, face recognition, and body-worn cameras.


Designed with community advocates, journalists, and policymakers in mind, Street-Level Surveillance seeks to answer the pressing questions about police technology. How does it work? What kind of data does it collect? How are police using it? Who’s selling it? What are the threats, and what is EFF doing to defend our rights? We also offer resources specially tailored for criminal defense attorneys, who must confront evidence collected by these technologies in court.

These resources are only a launching point for advocacy. Campus and community organizations working to increase transparency and accountability around the use of surveillance technology can find additional resources and support through our Electronic Frontier Alliance. We hope you’ll join us in 2018 as we redouble our efforts to combat invasive police surveillance. 

Categorieën: Openbaarheid, Privacy, Rechten

Driekwart zaken mediation in strafrecht succesvol

Nieuwe editie infoblad verschenen

Driekwart van de strafzaken waarbij slachtoffer en dader een mediationtraject ingaan wordt succesvol afgerond. Bij 5,5 procent van de zaken wordt gedeeltelijke overeenstemming bereikt. Dit is te lezen in de nieuwe editie van het Infoblad Mediation in strafzaken (pdf, 825,8 KB) dat vandaag is verschenen. Bij een mediationtraject gaan slachtoffers en verdachten onder begeleiding van een mediator met elkaar in gesprek.

Van januari tot en met september dit jaar zijn 326 strafzaken doorverwezen naar een mediator. 479 zaken kwamen hiervoor in aanmerking, maar bij 32 procent van deze zaken startte het mediationtraject niet omdat de betrokken partijen niet bereid waren hieraan mee te werken.

Several political parties have asked the Minister of Justice and Security how mediation in criminal cases can continue to be funded in the future, as this is currently still unclear.

More information
Categories: Rights

Public lecture ‘Fairness and Accountability of Sociotechnical Algorithmic Systems’

Kennisplatform Openbaarheid - 14 november 2017 - 9:47am

On Tuesday 19 December 2017, Rijksmuseum Boerhaave and the Lorentz Center are organising a public lecture on the interactions between information technology and society. Under the title ‘Fairness and Accountability of Sociotechnical Algorithmic Systems’, the New York-based Microsoft researcher danah boyd will speak about algorithms that handle data fairly or, conversely, manipulate that data.

Decisions are increasingly made on the basis of algorithms: automated sequences of instructions fed with data. Such an algorithm may determine where a police station should be located, or which news you are shown on social media. But how fair is that data? Are cultural or personal biases baked into the design of such systems? Have political or economic interests played a role in selecting the data? In her lecture, danah boyd (who spells her name without capitals and is the founder and president of the research centre Data & Society in New York) will draw on a series of cases to underline the societal importance of fair, accountable procedures.

danah boyd’s public lecture is part of the Lorentz Center workshop ‘Intersectionality and Algorithmic Discrimination: Intersecting Disciplinary Perspectives’, which takes place from 18 to 22 December. Specialists in computer science, artificial intelligence, sociology, media studies, science and technology studies, law and ethics will examine how different kinds of discrimination can manifest themselves in society through and within algorithms. In particular, the workshop approaches this question from the premise that an identity always comprises several dimensions at once (race, gender, age, etc.), giving rise to subtle mechanisms of inclusion and exclusion that often go insufficiently recognised.

This public lecture is part of the collaboration between Rijksmuseum Boerhaave and the Lorentz Center, in which internationally respected experts speak to a broad audience about topics of societal and/or scientific importance. See www.lorentzcenter.nl for further information.

What: Public lecture ‘Fairness and Accountability of Sociotechnical Algorithmic Systems’
When: Tuesday 19 December, 17:00 to 18:00
Where: Rijksmuseum Boerhaave, Lange Sint Agnietenstraat 10, Leiden
Admission: Free, but please register via the website: https://rijksmuseumboerhaave.nl/te-zien-te-doen/sociale-technologie-voor-big-data-en-kunstmatige-intelligentie/
Further information: Dr Francien Dechesne, f.dechesne@law.leidenuniv.nl

Court rules father may not share photos of his child on Facebook

IusMentis - 14 november 2017 - 8:19am

A separated father from Twente may no longer share photos of his 2-year-old child on Facebook, I read at the NOS. The juvenile court judge ruled that doing so is not in the interest of the child or of the fragile contact arrangement that has only just been put in place. (As far as I can tell, the man was not married to the mother.) The ruling includes the somewhat remarkable statement that by publishing a photo on Facebook you make the social network the owner of that photo, so that it may do whatever it likes with it. No, that is not correct.

In divorces and parenting arrangements, the question of how to handle images of children on social media comes up more and more often. In principle the rules are simple: as parents you have to take your child’s privacy into account (this follows from the Convention on the Rights of the Child), so in everything you do you must weigh whether it is worth your child’s privacy. You can of course choose to accept that trade-off, for example when you let your child take part in a film, but it has to be a conscious choice.

It gets really complicated when the parents do not see eye to eye on this, as in this court case. The mother had sole custody, and the father was now seeking a contact arrangement. One point of dispute was that the father occasionally posted photos of the child on Facebook, something the mother absolutely did not want. The father’s argument was that the photos were shielded as much as possible, to which the mother countered that you never really have control over anything that goes onto the Internet.

The court acknowledges that, but in somewhat unfortunate wording:

However, the moment a photo is on Facebook, it is the property of Facebook. Facebook can, for example, resell the photo to third parties. A photo may thus suddenly turn up in an advertising campaign or be used for other purposes. The person who posted the photo then no longer has any control over it.

It is of course incorrect that Facebook becomes the owner of photos you post there, and Facebook’s terms do not allow it to resell them to third parties either. But the last sentence is right: the photo can turn up anywhere, and in practice you have little to no grip on that. And that is then a good reason to say: this is not allowed, it is not in the interest of the child.

In the end the court reaches the only correct conclusion: the father may not post photos of the child on social media, also in view of the long road the father and mother have already travelled to arrive at a contact arrangement. And to come back to the NOS headline and text: rather tendentious, since it is of course not the case that fathers in general may not put photos of their children on Facebook.

Arnoud

From the blog Internetrecht by Arnoud Engelfriet. Buy my book!
