
Controversial proposal: spyware to monitor users of shadow libraries

iRights.info - 10 hours 8 min ago

An alliance of major academic publishers is considering having university libraries deploy surveillance software on their servers to detect illegal access to scholarly literature. The proposal has drawn sharp criticism from digital rights advocates and academics.

Gautama Mehta wrote this text for the web magazine codastory.com; Georg Fischer translated it into German for iRights.info. It originally appeared under the title "Proposal to install spyware in university libraries to protect copyrights shocks academics". We publish the German translation with kind permission.

The surveillance idea was presented during an October 22 webinar attended by the world's leading academic publishers, along with security experts who discussed the threat posed to scholarship by alleged cybercriminals and digital piracy.

One webinar participant presented a novel tactic that publishers could adopt to protect against copyright infringement: deploying spy software in the proxy servers that academic libraries use to provide access to their online services, such as publisher databases.

Technology-assisted monitoring of copyright infringement

The proposal came from Corey Roach, security officer at the University of Utah. Roach sketched a plugin that could collect "biometric data such as typing speed or mouse movements" in order to distinguish and identify individual users who would otherwise be anonymized by the university's proxy servers.

"We have more data than just username and password," Roach said in the webinar. "It can also include information about the users, that is, students or university staff. We know users' IP addresses, we know where they come from, and which URL was requested for the material."
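To make concrete what this kind of behavioral fingerprinting involves, here is a deliberately simplistic Python sketch. It is purely illustrative: the webinar described no implementation, and every function name, timing value, and threshold below is invented.

```python
# Purely hypothetical illustration -- not from the webinar.
from statistics import mean, stdev

def typing_profile(key_times):
    """Reduce a list of keypress timestamps (in seconds) to a crude
    typing-rhythm fingerprint: the mean and spread of inter-key gaps."""
    gaps = [b - a for a, b in zip(key_times, key_times[1:])]
    return (mean(gaps), stdev(gaps))

def same_user(profile_a, profile_b, tolerance=0.05):
    """Naive check: two profiles 'match' if both statistics lie
    within `tolerance` seconds of each other."""
    return all(abs(x - y) <= tolerance for x, y in zip(profile_a, profile_b))

# Two sessions with a similar rhythm, and one with a much slower rhythm
session1 = [0.00, 0.18, 0.39, 0.55, 0.74]
session2 = [0.00, 0.20, 0.41, 0.58, 0.79]
session3 = [0.00, 0.60, 1.30, 1.85, 2.60]

p1, p2, p3 = (typing_profile(s) for s in (session1, session2, session3))
print(same_user(p1, p2))  # True: rhythms are close
print(same_user(p1, p3))  # False: clearly a different rhythm
```

Real keystroke-dynamics systems use far richer features and statistical models; the point here is only that even crude behavioral signals can distinguish users who share a single anonymizing proxy.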

To give libraries an incentive to install the software, Roach floated discounts on publisher databases.

SNSI: an alliance of major publishers

The webinar was hosted by a new group called the Scholarly Networks Security Initiative (SNSI), a joint body of Elsevier, Springer Nature, and other major academic publishers. They joined forces in February with the stated goal of protecting higher education against cybercrime and websites such as Sci-Hub.

Sci-Hub is a so-called "shadow library" that illegally hosts millions of scholarly publications, providing free access to documents that would otherwise sit behind publishers' paywalls.

After a transcript of the webinar surfaced, researchers and digital rights advocates raised the alarm at the prospect of university libraries teaming up with major publishers to surveil students and researchers.

"It is deeply worrying that academic publishers are hatching plans to deploy thinly disguised surveillance software in university libraries," Bastian Greshake Tzovaras of the Center for Research and Interdisciplinarity in Paris wrote by email when asked.

Corey Roach, the University of Utah security officer behind the surveillance proposal, did not respond to an interview request.

Major academic publishers hold vast copyright portfolios

The drive for copyright enforcement is the product of a decades-long dispute within the academic community over the major publishers' lucrative business model. Critics accuse these publishers of harming scholarship with that business model and of behaving parasitically.

The major publishers charge exorbitant subscription prices: in 2018, for example, the University of California paid nearly 11 million US dollars to Elsevier, the largest of the academic publishers, for journal access. Yet for their publications, the publishers rely largely on publicly funded research and on the unpaid work of peer reviewers, most of whom are academics employed at universities.

This conflict triggered a mass boycott by thousands of researchers at top universities. It also led nearly 300 German and Swedish universities to cancel their Elsevier journal subscriptions in 2018, with the University of California following a year later.

Illegal but useful: shadow libraries provide access to scholarly texts

Sci-Hub, founded in 2011 by the Kazakhstan-born Russian programmer Alexandra Elbakyan, is frequently cited by publishers to justify the need for strict anti-piracy measures. Sci-Hub now provides access to nearly 85 million scholarly publications.

While publishers condemn Sci-Hub for copyright infringement, advocates of the open access movement in academia stress that Sci-Hub has become a crucial lever for them in negotiating better subscription deals with publishers.

The controversy around Sci-Hub is described by supporters and critics of the shadow library alike as a "Napster moment", alluding to the piracy dispute between the music industry and file-sharing platforms in the 2000s.

Elbakyan has also been compared many times to the American whistleblower Edward Snowden and the internet activist Aaron Swartz. And she has been accused, albeit without publicly available evidence, of having ties to Russian intelligence.

Surveillance software: a new step in the fight against alleged piracy

Largely without success, publishers have so far tried a variety of methods to act against Sci-Hub. The October webinar showed their latest tactic: arguing that the shadow library not only undermines their profit model, but that Sci-Hub's activities amount to state-sponsored cybercrime and pose a security threat to universities.

For now, the proposal to install surveillance software in university libraries is only hypothetical. Björn Brembs belongs to a collective of researchers lobbying the European Union to curb publishers' surveillance capabilities. He told me that such a strategy would "be consistent with the findings of our investigation into the publishers' surveillance practices".

Academic publishers have come under increasing criticism for working with security firms that also act as data brokers. That, in turn, means user data collected in publisher databases can be sold for profit or shared with law enforcement.

Brembs, a professor of neurogenetics at the University of Regensburg, was the first to obtain the transcript of the SNSI webinar, which he then published on his blog. [Mr. Brembs' website is currently offline; the blog post is only accessible via the Internet Archive. Editor's note.]

Brembs explained that the kind of surveillance proposed at the SNSI webinar poses a particular threat to researchers: their academic freedom could be violated "either when they work on a particularly sensitive topic or with vulnerable people, or when they conduct medical or social science research".

Publishers: it's about data security

Both the SNSI alliance and Elsevier sent statements by email, declaring that the initiative's goal is to ensure "the security and protection of personal and corporate data". The SNSI said it was not introducing spyware into university libraries to block or monitor access to Sci-Hub.

Organization researcher Leonhard Dobusch of the University of Innsbruck pointed out that the program would do less to protect users than "actually create risks for security and privacy, since the software collects a whole lot of personalized data."

Dobusch said the real goal of implementing any surveillance technology would be "to make access to shadow libraries via university networks more difficult", an escalation of the measures publishers currently deploy against Sci-Hub. These include Domain Name System blocking, a strategy for restricting access to websites that has driven Sci-Hub to change its domain again and again and to maintain a list of currently working links to the library.
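To illustrate why DNS blocking is such a weak measure, here is a minimal, hypothetical Python sketch of the client-side pattern it provokes: walk a list of mirror domains and use the first one that still resolves. The domain names and the simulated resolver below are invented for illustration.

```python
import socket

def first_reachable(domains, resolve=socket.gethostbyname):
    """Return the first domain that still resolves in DNS, or None.
    The resolver is injectable so the lookup can be simulated offline."""
    for domain in domains:
        try:
            resolve(domain)
            return domain
        except OSError:  # socket.gaierror subclasses OSError
            continue
    return None

# Simulated resolver: pretend only the third mirror still has a DNS record
def fake_resolve(name):
    if name != "mirror3.example":
        raise OSError("no DNS record")
    return "203.0.113.7"

mirrors = ["mirror1.example", "mirror2.example", "mirror3.example"]
print(first_reachable(mirrors, resolve=fake_resolve))  # mirror3.example
```

A blocked site that rotates domains only has to keep such a mirror list current, which is exactly the "list of currently working links" behavior described above.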

But as with earlier efforts, the new tactic, if implemented, is unlikely to have much effect in combating digital piracy.

VoetbalTV does not have to pay Autoriteit Persoonsgegevens fine

IusMentis - 14 hours 35 min ago

VoetbalTV received a €575,000 fine from the Autoriteit Persoonsgegevens, but the court has ruled that it does not have to pay. Rechtspraak.nl reported this on Monday. A nice win for the amateur-football aggregator and broadcaster: according to the court, the Autoriteit Persoonsgegevens failed to adequately explain why VoetbalTV has no legitimate interest in recording and broadcasting amateur matches. The AP, after all, took the view that this was a purely commercial interest, which in its view may never override privacy. As so often, reality turns out to be a bit more nuanced.

In June, VoetbalTV went to court. The privacy regulator AP had opened an investigation into the video platform a year and a half earlier, but the results were a long time coming. On the platform you can watch amateur football matches from all over the Netherlands, but because of uncertainty about whether the GDPR allows this, the service never really got off the ground. Despite a reasonably prompt ruling, the curtain has meanwhile already fallen for VoetbalTV, but that's an aside.

What was the dispute about? VoetbalTV took the position that it could record these images without asking every individual player for consent. That would have been an unimaginable administrative nightmare anyway, especially since consent can be withdrawn at any moment (say, 30 seconds after going a goal down). VoetbalTV therefore had to rely on the so-called legitimate interest, which in principle allows you to set privacy aside if you put sufficient safeguards in place and frame your interest so that it carries more weight.

But what kinds of interests qualify? The GDPR is silent on this, mentioning only a few things indirectly. Freedom of expression (journalism) is one example; on the other hand, specific things like "security monitoring" and "direct marketing" are named. The AP had sat down with this and arrived at a principled position: "What also does not qualify as a legitimate interest is, for example: merely serving purely commercial interests, profit maximization, tracking the behavior of employees or the (purchasing) behavior of (potential) customers without a legitimate interest, etc." To count as "legitimate," an interest essentially had to be traceable to a legal right. Freedom of expression, then: that is a fundamental right, so you can rely on it. You then get a balancing of interests that may or may not come out in favor of free expression. But if you come in with "I want to film all of amateur-footballing Netherlands and broadcast it as pay TV," that is purely commercial, and then we won't even get to a balancing of interests. Bah, commerce, that doesn't count.

The court sees it differently: neither the GDPR nor its predecessor (the 1995 Directive) defines such an interest restrictively. There is no direct case law, but there is an authoritative opinion from the assembled regulators (the Article 29 Working Party). And they said, according to the court: "In its 2014 opinion, WP29 wrote that legitimate interest must be interpreted as a concept covering a wide range of different interests, whether trivial or compelling, and whether evident or more controversial, provided it is a real and present (and thus not speculative) interest." Not only legal interests, but all kinds of factual, economic, and idealistic interests can therefore qualify as legitimate. This is what most lawyers were already saying at the time: you cannot declare in advance that interest X can never be legitimate. An interest must exist; beyond that, roughly the only real limits are whether it stays within the law (so no illegal interests or interests contrary to public order, say). But quality requirements or arbitrary cut-offs are not allowed.

Of course, with a purely commercial profit interest the balancing will tip against the money-hungry wolf sooner than with a high-quality publication serving a weighty interest. But that is precisely the point of balancing the interests. And because that balancing was never performed, the AP's decision is annulled and it must now assess what that balancing would have looked like.

As a bonus, we now know that activities like these do not fall under press freedom. Filming footballers is admittedly a form of gathering and disseminating information, but: "The broadcasting of the amateur football matches cannot be regarded as a communication to the public of information, opinions or ideas. The matches have too little news value for that; this is the broadcasting of amateur sport and play. The images provide no information about well-known persons, for example famous footballers, and do not contribute to any public debate. It concerns the unfiltered processing of a large quantity of personal data collected by the claimant itself. The processing as a whole does not serve exclusively journalistic purposes. That newsworthy information may be found somewhere in all the footage it collects does not mean that all the thousands of broadcast matches can be regarded as journalism, and does not justify recording and broadcasting all these matches." I always saw that differently myself: after all, there is no quality test for freedom of expression. News value is an argument within the balancing of interests; if something has too little news value, you say the news interest does not outweigh the privacy of your news subject. But I do see the point that a raw pile of footage is not (yet) news; you have to dig for the gems you want to bring.

Arnoud

The post "VoetbalTV does not have to pay Autoriteit Persoonsgegevens fine" appeared first on Ius Mentis.

Let’s Stand Up for Home Hacking and Repair

Let’s tell the Copyright Office that it’s not a crime to modify or repair your own devices.

Every three years, the Copyright Office holds a rulemaking process where it grants the public permission to bypass digital locks for lawful purposes. In 2018, the Office expanded existing protections for jailbreaking and modifying your own devices to include voice-activated home assistants like Amazon Echo and Google Home, but fell far short of the broad allowance for all computerized devices that we’d asked for. So we’re asking for a similar exemption, but we need your input to make the best case possible: if you use a device with onboard software and DRM keeps you from repairing that device or modifying the software to suit your purposes, see below for information about how to tell us your story.

DMCA 1201: The Law That Launched a Thousand Crappy Products

Why is it illegal to modify or repair your own devices in the first place? It’s a long story. Congress passed the Digital Millennium Copyright Act in 1998. That’s the law that created the infamous “notice-and-takedown” process for allegations of copyright infringement on websites and social media platforms. The DMCA also included the less-known Section 1201, which created a new legal protection for DRM—in short, any technical mechanism that makes it harder for people to access or modify a copyrighted work. The DMCA makes it unlawful to bypass certain types of DRM unless you’re working within one of the exceptions granted by the Copyright Office.

Suddenly manufacturers had a powerful tool for restricting how their customers used their products: build your product with DRM, and you can argue that it’s illegal for others to modify or repair it.

The technology landscape was very different in 1998. At the time, when most people thought of DRM, they were thinking of things like copy protection on DVDs or other traditional media. Some of the most dangerous abuses of DRM today come in manufacturers’ use of it to limit how customers use their products—farmers being unable to repair their own tractors, or printer manufacturers trying to restrict users from buying third-party ink.

Section 1201 caught headlines recently when the RIAA attempted to use it to stop the distribution of youtube-dl, a tool that lets people download videos from YouTube and other user-uploaded video platforms. Fortunately, GitHub put the youtube-dl repository back up after EFF explained on behalf of youtube-dl’s developers that the tool doesn’t circumvent DRM.

[Embedded video from youtube.com: https://www.youtube.com/embed/ck7utXYcZng. Privacy info: this embed will serve content from youtube.com.]

Abuse of legal protections for DRM isn’t just a United States problem, either. Thanks to the way in which copyright law has been globalized through a series of trade agreements, much of the world has laws similar to DMCA 1201 on the books. That creates a worst-of-both-worlds scenario for countries that have neither the safety valve of fair use to protect people’s free expression rights nor processes like the Copyright Office rulemaking to remove the legal doubt around bypassing DRM for lawful purposes. The rulemaking process is deeply flawed, but it’s better than nothing.

Let’s Tell the Copyright Office: Home Hacking Is Not a Crime

Which brings us back to this year’s Copyright Office rulemaking. We’re asking the Copyright Office to grant a broad exemption allowing people to modify and repair all software-enabled devices for their own use.

If you have a story about how:

  • someone in the United States;
  • attempted or planned to modify, repair, or diagnose a product with a software component; and
  • encountered a technological protection measure (including DRM or digital rights management—any form of software security measure that restricts access to the underlying software code, such as encryption, password protection, or authentication requirements) that prevented completing the modification, repair, or diagnosis (or had to be circumvented to do so)

—we want to hear from you! Please email us at RightToMod-2021@lists.eff.org with the information listed below, and we’ll curate the stories we receive so we can present the most relevant ones alongside our arguments to the Copyright Office. The comments we submit to the Copyright Office will become a matter of public record, but we will not include your name if you do not wish to be identified by us. Submissions should include the following information:

  1. The product you (or someone else) wanted to modify, repair, or diagnose, including brand and model name/number if available.
  2. What you wanted to do and why.
  3. How a TPM interfered with your project, including a description of the TPM.
    • What did the TPM restrict access to?
    • What did the TPM block you from doing? How?
    • If you know, what would be required to get around the TPM? Is there another way you could accomplish your goal without doing this?
  4. Optional: Links to relevant articles, blog posts, etc.
  5. Whether we may identify you in our public comments, and your name and town of residence if so. We will treat all submissions as anonymous unless you expressly give us permission to identify you.
Categories: Openness, Privacy, Rights

Victory! Court Protects Anonymity of Security Researchers Who Reported Apparent Communications Between Russian Bank and Trump Organization

Electronic Frontier Foundation (EFF) - news - November 24, 2020 - 10:50pm

Security researchers who reported observing Internet communications between the Russian financial firm Alfa Bank and the Trump Organization in 2016 can remain anonymous, an Indiana trial court ruled last week.

The ruling protects the First Amendment anonymous speech rights of the researchers, whose analysis prompted significant media attention and debate in 2016 about the meaning of digital records that reportedly showed computer servers linked to the Moscow-based bank and the Trump Organization in communication.

In response to these reports, Alfa Bank filed a lawsuit in Florida state court alleging that unidentified individuals illegally fabricated the connections between the servers. Importantly, Alfa Bank’s lawsuit asserts that the alleged bad actors who fabricated the servers’ communications are different people than the anonymous security researchers who discovered the servers’ communications and reported their observations to journalists and academics.

Yet that distinction did not stop Alfa Bank from seeking the security researchers’ identities through a subpoena issued to Indiana University Professor L. Jean Camp, who had contacts with at least one of the security researchers and helped make their findings public. 

Prof. Camp filed a motion to quash the subpoena. EFF filed a friend-of-the-court brief in support of the motion to ensure the court understood that the security researchers had the right to speak anonymously under both the First Amendment and Indiana’s state constitution.

The brief argues: 

By sharing their observations anonymously, the researchers were able to contribute to the electorate’s understanding of a matter of extraordinary public concern, while protecting their reputations, families, and livelihoods from potential retaliation. That is exactly the freedom that the First Amendment seeks to safeguard by protecting the right to anonymous speech.

It’s not unusual for companies embarrassed by security researchers’ findings to attempt to retaliate against them, which is what Alfa Bank tried to do. That’s why EFF’s brief also asked the court to recognize that Alfa Bank’s subpoena was a pretext:

[T]he true motive of the litigation and the instant subpoena is to retaliate against the anonymous computer security researchers for speaking out. In seeking to impose consequences on these speakers, Alfa Bank is violating their First Amendment rights to speak anonymously.

In rejecting Alfa Bank’s subpoena, the Indiana court ruled that the information Alfa Bank sought to identify the security researchers “is protected speech under Indiana law” and that the bank had failed to meet the high bar required to justify the disclosure of the individuals’ identities.

EFF is grateful that the court protected the identities of the anonymous researchers and rejected Alfa Bank’s subpoena. We would also like to thank our co-counsel Colleen M. Newbill, Joseph A. Tomain, and D. Michael Allen of Mallor Grodner LLP for their help on the brief.

Categories: Openness, Privacy, Rights

Podcast Episode: Control Over Users, Competitors, and Critics

Electronic Frontier Foundation (EFF) - news - November 24, 2020 - 3:54pm
Episode 004 of EFF’s How to Fix the Internet

Cory Doctorow joins EFF hosts Cindy Cohn and Danny O’Brien as they discuss how large, established tech companies like Apple, Google, and Facebook can block interoperability in order to squelch competition and control their users, and how we can fix this by taking away big companies' legal right to block new tools that connect to their platforms – tools that would let users control their digital lives.

In this episode you’ll learn about:

  • How the power to leave a platform is one of the most fundamental checks users have on abusive practices by tech companies—and how tech companies have made it harder for their users to leave their services while still participating in our increasingly digital society;
  • How the lack of interoperability in modern tech platforms is often a set of technical choices that are backed by a legal infrastructure for enforcement, including the Digital Millennium Copyright Act (DMCA) and the Computer Fraud and Abuse Act (CFAA). This means that attempting to overcome interoperability barriers can come with legal risks as well as financial risks, making it especially unlikely for new entrants to attempt interoperating with existing technology;
  • How online platforms block interoperability in order to silence their critics, which can have real free speech implications;
  • The “kill zone” that exists around existing tech products, where investors will not back tech startups challenging existing tech monopolies, and even startups that can get a foothold may find themselves bought out by companies like Facebook and Google;
  • How we can fix it: The role of “competitive compatibility,” also known as “adversarial interoperability,” in reviving stagnant tech marketplaces;
  • How we can fix it by amending or interpreting the DMCA, CFAA, and contract law to support interoperability rather than threaten it; and
  • How we can fix it by supporting the role of free and open source communities as champions of interoperability, offering alternatives to the existing tech giants.

Cory Doctorow (craphound.com) is a science fiction author, activist, and journalist. He is the author of many books, most recently ATTACK SURFACE, RADICALIZED, and WALKAWAY, science fiction for adults; IN REAL LIFE, a graphic novel; INFORMATION DOESN’T WANT TO BE FREE, a book about earning a living in the Internet age; and HOMELAND, a YA sequel to LITTLE BROTHER. His latest book is POESY THE MONSTER SLAYER, a picture book for young readers.

Cory maintains a daily blog at Pluralistic.net. He works for the Electronic Frontier Foundation, is an MIT Media Lab Research Affiliate, a Visiting Professor of Computer Science at the Open University, and a Visiting Professor of Practice at the University of North Carolina’s School of Library and Information Science, and he co-founded the UK Open Rights Group. Born in Toronto, Canada, he now lives in Los Angeles. You can find Cory on Twitter at @doctorow.

Please subscribe to How to Fix the Internet via RSS, Stitcher, TuneIn, Apple Podcasts, Google Podcasts, Spotify, or your podcast player of choice. You can also find the MP3 of this episode on the Internet Archive. If you have any feedback on this episode, please email podcast@eff.org.

Below, you’ll find legal resources – including links to important cases, books, and briefs discussed in the podcast – as well as a full transcript of the audio.

Resources

Anti-Competitive Laws

Anti-Competitive Practices 

Lawsuits Against Anti-Competitive Practices

Competitive Compatibility/Adversarial Interoperability & The Path Forward

State Abuses of Lack of Interoperability

Other

Transcript of Episode 004: Control Over Users, Competitors, and Critics

Danny O'Brien:
Welcome to How to Fix the Internet with the Electronic Frontier Foundation, the podcast that explores some of the biggest problems we face online right now, problems whose sources and solutions are often buried in the obscure twists of technological development, societal change, and the subtle details of internet law.

Cindy Cohn:
Hello, everyone. I'm Cindy Cohn, I'm the executive director of the Electronic Frontier Foundation. And for our purposes today, I'm also a lawyer.

Danny O'Brien:
And I'm Danny O'Brien, and I work at the EFF too, and I could only dream of going to law school. So, this episode has its roots in a long and ongoing discussion that we have at EFF about competition in tech, or rather, the complete lack of it these days. I think there's a growing consensus that big tech--Facebook, Google, Amazon, you can make your own list at home--have come to dominate the net and tech more widely and really not in a good way. They stand these days as potentially impregnable monopolies and there doesn't seem much consensus on how to best fix that.

Cindy Cohn:
Yeah. This problem affects innovation, which is a core EFF value, but it also impacts free speech and privacy. The lack of competition has policymakers pushing companies to censor us more and more, which, as we know, despite a few high-profile exceptions, disproportionately impacts marginalized voices, especially around the world.

Cindy Cohn:
And critically, way too many of these companies have privacy-invasive business models. At this point, I like to say that Facebook doesn't have users, it has hostages. So, addressing competition empowers users, and today we're going to focus on one of the ways that we can reintroduce competition into our world. And that's interoperability. Now this is largely a technical approach, but as you'll hear, it can work in tandem with legal strategies, and it needs some legal support right now to bring it back to life.

Danny O'Brien:
Interoperability is going to be useful because it accelerates innovation, and right now, the cycle of innovation just seems to be completely stuck. I mean, this may make me sound old, but I do remember when the pre-Facebook and the pre-Google quasi-monopolies just popped up, but grew, lived gloriously, and then died and shriveled like dragonflies.

Danny O'Brien:
We had Friendster, then Myspace, we had Yahoo and Alta Vista, and then they moved away. Nothing seems to be shifting this new generation of oligopolies, in the marketplace at least. I know lawsuits and antitrust investigations take a long time. We think at EFF that there's a way of speeding things up so we can break these down as quickly as their predecessors.

Cindy Cohn:
Yep. And that's what's so good about talking this through with our friend, Cory Doctorow. He comes at this from a deeply technological, economic, and historical perspective, and especially a historical perspective on how we got here in terms of our technology and law.

Cindy Cohn:
Now, I tend to think of it as a legal perspective, because I'm a litigator--I think, what doctrines are getting in the way? How can we address them? And how can we get the legal doctrines out of the way? But Danny, if I may, I had some personal experience here too. I bought an HP printer a while back, and because I wouldn't sign up for their ink delivery service, the darn thing just bricked. It wouldn't let me use anybody else's ink, and ultimately, it just stopped working entirely.

Danny O'Brien:
Interoperability is the ability for other parties to connect and build upon existing hardware and software without asking for permission, or begging for authorization, or being thrown out if they don't follow all the rules. So, in your printer's case, Cindy--and I love how when your printer doesn't work, you recognize it as an indictment of our zaibatsu control prison, rather than me, who just thinks I failed to install the right driver. But in your case, with your printer, Hewlett-Packard was building an ecosystem that only allowed other Hewlett-Packard products to connect with it.

Danny O'Brien:
There's no reason why third-party ink couldn't work in HP printers, except that the printer has code in it that specifically rejects cartridges, not based on whether they work, but on whether they come from the parent company. And there's a legal infrastructure around that too. It's much harder for third-party companies to interoperate with Hewlett-Packard printers, simply because there's so much legal risk in doing so.

Danny O'Brien:
This is the sort of thing that Cory excels at explaining, and I'm so glad we managed to grab him between, oh my god, all the million things he does. For those of you who don't know him, Cory works as a special advisor to EFF, but he's also a best-selling science fiction author. He has his own daily newsletter, at pluralistic.net, and a podcast of his own at craphound.com/podcast.

Danny O'Brien:
We caught him between publicizing his new kids' book Poesy the Monster Slayer, and promoting his new sequel to his classic "Little Brother", called "Attack Surface". And also curing world hunger, I'm pretty sure.

Cindy Cohn:
Hey, Cory.

Cory Doctorow:
It's always a pleasure to talk to you, and it's an honor to be on the EFF podcast.

Cindy Cohn:
So, let's get to it. What is interoperability? And why do we need to fix it?

Cory Doctorow:
Well, I like to start with an interoperability view that's pretty broad, right? Let's start with the fact that the company that sells you your shoes doesn't get to tell you whose socks you can wear, or that the company that makes your breakfast cereal doesn't get to tell you which dairy you have to go to. And that stuff is ... We just take it for granted, but it's a really important bedrock principle, and we see what happens when people lose interoperability: they also lose all agency and self-determination.

Cory Doctorow:
If you've ever heard those old stories about company mining towns where you were paid in company scrip that you could only spend at the company store, that was like non-interoperable money, right? The only way you could convert your company scrip into dollars would be to buy corn at the company store and take it down to the local moonshiner and hope he'd give you greenbacks, right?

Cory Doctorow:
And so, to the extent that you can be stuck in someone else's walled garden, it can turn from a walled garden into a feedlot, where you become the fodder. And the tech industry has always had a weird relationship with interoperability. On the one hand, computers have this amazing interoperable characteristic just kind of built into them. The underlying idea of things like von Neumann architectures and Turing completeness really says that all computers can run all programs, and that you can't really make a computer that just, like, only uses one app store.

Cory Doctorow:
Instead, what you have to do is make a computer that refuses to use other app stores. You know, that tablet or that console you have, it's perfectly capable of using any app store that you tell it to. It just won't let you, and there's a really important difference, right? Like, I can't use a kitchen mixer to apply mascara, because the kitchen mixer is totally unsuited to applying mascara and if I tried, I would maim myself. But you can install any app on any device, provided that the manufacturer doesn't take steps to stop you.

Cory Doctorow:
And while manufacturers--tech manufacturers especially--have for a long time tried to take measures to stop it so they could increase their profits, what really changed the world was the passage of a series of laws, laws that we're very familiar with at the EFF: the Computer Fraud and Abuse Act, the Digital Millennium Copyright Act, and so on, that started to allow companies to actually make it illegal--both civilly and criminally--for you to take steps to add interoperability to the products that you use, and especially for rivals to take steps.

Cory Doctorow:
I often say that the goal of companies who want to block interoperability is to control their critics, their customers, and their competitors, so that you have to arrange your affairs to benefit their shareholders. And if you don't, you end up committing an offense that our friend Saurik from the Cydia Project calls "felony contempt of business model."

Cindy Cohn:
This is something we care about in general at the EFF, because we worry a lot about the pattern of innovation, but I think it also has spillover effects on censorship and on surveillance. And I know you've thought about that a little bit, Cory, and I'd love to kind of just bring those out, because I think that it's important ... I mean, we all care, I think, about having functioning tools that really work. But there are effects on our rights as well, and that kind of old-school definition of rights, like what's in the constitution.

Cory Doctorow:
Yeah. Well, a lot of people are trusting of the firms that handle their communications. And that's okay, right? You might really think that Tim Cook is always going to exercise his judgment wisely, or that Mark Zuckerberg is a truly benevolent dictator and so on. But one of the things that keeps firms honest when they regulate your communications is the possibility that you might take your business elsewhere. And when firms don't face that possibility, they have less of an incentive to put your needs ahead of the needs of their shareholders. Or sometimes there's a kind of bank shot shareholder interest where, say, a state comes in and says, "We demand that you do something that is harmful to your users." And you weigh in the balance how many users you'll lose if you do it, versus how much it's going to cost you to resist the state.

Cory Doctorow:
And the more users you lose in those circumstances, the more you're apt to decide that the profitable thing to do is to resist state incursions. And there's another really important dimension, which is a kind of invitation to mischief that arises when you lock your users up, which is that states observe the fact that you can control the conduct of your users. And they show up and they say, "Great, we have some things your users aren't allowed to do." And you are now deputized to ensure that they don't do it, because you gave yourself that capability.

Cory Doctorow:
So, the best example of this ... I don't mean to pick on Apple, but the best example of this is Apple in China, where Apple is very dependent on the Chinese market, not just to manufacture its devices, but to buy and use its devices. Certainly, with President Trump's TikTok order, a lot of people have noted that some of the real fallout is going to be for Apple if they can't do business with Chinese firms and have Chinese apps and so on. And the Chinese government showed up at Apple's door and said, "You have to block working VPNs from your app store. We need to be able to spy on everyone who uses an iPhone. And so, the easiest way for us to accomplish that is to just tell you to evict any VPN that doesn't have a backdoor for us."

Danny O'Brien:
Just to connect those two things together, Cory, so what you're saying here is that because Apple phones don't have ... Apple has sort of exclusive control over them, and you can't just install your own choice of program on the iPhone. That means that Apple is this sort of choke point that bad actors can use, because they've got all this control for themselves, and then they can be pressured to impose that control on their customers.

Cory Doctorow:
They installed it so they could extract a 30% vig from Epic and other independent software vendors. But the day at which a government would knock on their door and demand that they use the facility that they developed to lock users in to a store, to also lock users in to authoritarian software, that day was completely predictable. You don't have to be a science fiction writer to say, "Oh, well, if you have a capability and it will be useful to a totalitarian state, and you put yourself in reach of that totalitarian state's authority, they will deputize you to be part of their authoritarian project."

Cindy Cohn:
Yeah. And that's local as well as international. I mean, the pressure for the big platforms to be censors, to decide to be the omnipotent and always-correct deciders of what people get to say, is very strong right now. And that's a double-edged sword. Sometimes that can work well when there are bad actors on that, but really, we know how power works. And once you empower somebody to be the censor, they're going to be beholden to everybody who comes along who's got power over them to censor the people they don't like.

Cindy Cohn:
And it also then, I think, feeds this surveillance business model, this business model where tracking everything you do and trying to monetize it gets fed by the fact that you can't leave.

Danny O'Brien:
I want to try and channel the ghost of Steve Jobs here and present the other argument that lots of companies give for locking down their systems, which is that it prevents other, smaller bad actors, it prevents malware, it means that Apple can control... But by controlling all of these avenues, it can build a more secure, more consumer-friendly tool.

Cory Doctorow:
Yeah. I hear that argument, and I think there's some merit to it. Certainly, like, I don't have either the technical chops or the patience and attention to do a full security audit of every app I install. So, I like the idea of deputizing someone to figure out whether or not I should install an app, I just want to choose that person. I had a call recently with one of our colleagues from EFF, Mitch, who said that argument is a bit like the argument about the Berlin Wall, where the former East German government claimed that the Berlin Wall wasn't there to keep people in who wanted out, it was to stop people from breaking into the worker's paradise.

Cory Doctorow:
And if Apple was demonstrably only blocking things that harmed users, one would expect that those users would just never tick the box that says, "Let me try something else." And indeed, if that box was there, it would be much less likely that the Chinese state would show up and say, "Give us a means to spy on all your users," because Apple could say, "I will give you that means, but you have to understand that as soon as that's well understood, everyone who wants to evade your surveillance just ticks the box that says, 'Let me get a VPN somewhere else.'"

Cory Doctorow:
And so, it actually gives Apple some power to resist it. In that way, it's a bit like the warrant canaries that we're very fond of, where you have these national security letters that firms cannot disclose when they get them. And so, firms as they launch a new product say, "The number of national security letters we have received in respect to this product is zero," and they reissue that on a regular basis. And then they remove that line if they get a national security letter.

Cory Doctorow:
Jessamyn West, the librarian, after the Patriot Act was passed, put a sign up in her library that said, "The FBI has not been here this week, watch for this sign to disappear," because she wasn't allowed to disclose that the FBI had been there, but she could take down the sign. And so, the idea here is that states are disincentivized to get up to this kind of mischief, where it relies on them keeping the existence of the mischief a secret, if that secrecy vanishes the instant they embark upon the mischief.

Cory Doctorow:
In the same way, if you have a lock-in model that disappears the instant you cease to act as a good proxy for your users' interests, then people who might want to force you to stop being a good proxy for your users' interest, have a different calculus that they make.
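The warrant-canary pattern Cory describes can be sketched in a few lines: check that the expected statement is present and recent, and treat absence or staleness as the alarm. The statement text, dates, and freshness threshold below are made-up illustrations, not any real service's canary.

```python
from datetime import date, timedelta

# A hypothetical canary statement, reissued on a regular schedule.
CANARY_TEXT = "The number of national security letters we have received is zero."

def canary_ok(page_text: str, published: date, today: date,
              max_age_days: int = 35) -> bool:
    """The canary holds only if the statement is both present and fresh.

    A missing statement OR a stale one is treated as the warning sign,
    just like the library sign coming down.
    """
    fresh = (today - published) <= timedelta(days=max_age_days)
    return CANARY_TEXT in page_text and fresh
```

Note the design point: the checker never needs the publisher to say anything forbidden; it only reacts to the statement's silence.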

Cindy Cohn:
I just want to, sorry, put my lawyer hat on here. Warrant canaries are a really cute hack that are not likely to be something the FBI is just going to shrug its shoulders and say, "Oh, gosh, I guess you got us there, folks." So, I just, sorry...

Cory Doctorow:
Fair enough.

Cindy Cohn:
Sometimes I have to come in and actually make sure people aren't taking legal advice from Cory.

Danny O'Brien:
When we were kicking around ideas for the name of this podcast, one of them was, "This is not legal advice."

Cory Doctorow:
Well, okay, so instead, let's say binary transparency, where you just... automatically built into the app is a thing that just checks to see whether you got the same update as everyone else. And so that way, you can tell if you've been pushed a different update from everyone else, and that's in the app when the app ships. And so, the only way to turn it off is to ship an update that turns it off, and if they only ship that to one user, it happens automatically. It's this idea of a Ulysses pact, where you take some step before you're under coercion, or before you're in a position of weakness, to protect you from a future moment. It's the equivalent of throwing away the Oreos when you go on a diet.
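The binary-transparency check Cory sketches boils down to: hash the update you received and compare it against a public log of the hashes everyone else got. The function name and the log's shape here are hypothetical; a real system would fetch the log from an append-only transparency server rather than pass it in as a dict.

```python
import hashlib

def update_matches_public_log(version: str, update_bytes: bytes,
                              log: dict) -> bool:
    """Return True if this update's hash matches the published hash.

    If the vendor pushed a special build to one user, that user's
    digest won't match the log entry everyone else can verify.
    """
    digest = hashlib.sha256(update_bytes).hexdigest()
    return log.get(version) == digest
```

Because the check ships inside the app, disabling it requires shipping an update that itself fails the check, which is the Ulysses-pact part.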

Cindy Cohn:
So, let's talk just a little bit more specifically about what are the things that we think are getting in the way of interoperability? And then, let's pivot to what we really want to do, which is fix it. So, what I've heard from you so far, Cory, is that we see law getting in the way, whether that's Section 1201 of the Digital Millennium Copyright Act, or the CFAA, or contract law--these kinds of tools that get used by companies to stop interoperability. What are some of the other things that get in the way of us having our dream future where everything plugs into everything else?

Cory Doctorow:
I'd say that there's two different mechanisms that are used to block interop, and that they interact with each other. There's law and there's tech, right? So, we have these technical countermeasures: the sealed-vault chips, the TPMs inside of our computers, and our phones, and our other devices, which are dual-use technologies that have some positive uses, but they can be used to block interop.

Cory Doctorow:
As we move to more of a software-as-a-service model where some key element of the process happens in the cloud, it gives firms that control that cloud a gateway where they can surveil how users are using them and try and head off people who are adding interoperability to a service--what we call competitive compatibility, that's when you add a new interoperable feature without permission from the original manufacturer--and so on.

Cory Doctorow:
And those amount to a kind of cold war between different technologists working for different firms. So, on the one hand, you have companies trying to stop you from writing little scrapers that go into their website, and scrape their users' waiting inboxes, and put them in a rival service on behalf of those users. And on the other hand, you have the people who are writing the scrapers, and we haven't seen a lot of evidence about who would win that fight, at least if it were a fair fight, because of the law--the Computer Fraud and Abuse Act, the Digital Millennium Copyright Act, and a lot of other laws that kind of pop up as they are repurposed by firms with a lot of money to spend on legal entrepreneurship.

Danny O'Brien:
I want to drill down just a little bit with this because I loved your series that you wrote on competitive compatibility, which talked about the old days of the Internet, where we did have a far faster pace of innovation and the life and death of tech giants was far shorter, because they were kind of in this tooth-and-claw competitive mode, where ... I mean, just to plug an example, right? You would have, sort of, Facebook building on the contact lists that telephones and Google had by adversarially interoperating with them, right?

Danny O'Brien:
You would go to Facebook, and it would say, "Hey, tell us your friends." And it would be able to do that by connecting to their systems. Now, you can't do that with Facebook now, and you can't write an app that competes with Apple's software business, because neither of them will let you. And they're able to do that, I think what we're both saying, because... Not so much because of technical restrictions, but because of the laws that prevent you from doing that. You will get sued rather than out-innovated.

Cory Doctorow:
Well, yes. So, I think that's true. We don't know, right? I'm being careful here, because I have people who I trust as technologists who say, "No, it's really hard, they've got great engineers." I'm skeptical of that claim because we've had about a decade or more of companies being very afraid to try their hand at adversarial interoperability. And one of the things that we know is that well-capitalized firms can do a lot that firms that lack capital can't, and our investor friends tell us that what big tech has more than anything else is a kill zone--that even though Facebook, Apple, Google, and the other big firms have double-digit year-on-year growth with billions of dollars in clear profit every year, no one will invest in competitors of theirs.

Cory Doctorow:
So, I think that when technologists say, "Well, look, we beat our brains out on trying to write a bot that Facebook couldn't detect, or make an ad blocker that ... I don't know, the Washington Post couldn't stop or whatever, or write an app store and install it on iPhones and we couldn't do it." The part that they haven't tested is, well, what if an investor said, "Oh, I'm happy to get 10% of Facebook's total global profit, and I will capitalize you to reflect that expected return and let you spend that money on whatever it takes to do that"?

Cory Doctorow:
What if they didn't have the law on their side? What if they just had engineers versus engineers? But I want to get to this last piece, which is where all this law and these new legal interpretations come from, which is this legal entrepreneurship piece. So as I say, Facebook and its rivals, they have double-digit growth, billions of dollars in revenue every year, in profit, clear profit every year.

Cory Doctorow:
And some of that money is diverted to legal entrepreneurship. Instead of being sent to the shareholders, or being spent on engineering, or product design, it's spent on law. And that spend is only possible because there's just so much money sloshing around in those firms, and that spend is particularly effective, because they're all gunning for the same thing. They're a small number of firms that dominate the sector, and they have all used competitive compatibility to ascend to the top, and they are all committed to kicking away the ladder. And the thing that makes Oracle/Google so exceptional is that it's an instance in which the two major firms actually have divergent interests.

Cory Doctorow:
Far more often, we see their industry associations and the executives from the firms asking for the same things. And so, one of the things that we know about competition is when you lose competition, the firms that remain find it easier for collusion to emerge. They don't have to actually all sit down and say, "This is what we all want." It's just easy for them to end up in the same place. Think about the kinds of offers you get for mobile phone plans, right? It's not that the executives all sat down and cooked up what those plans would be, it's just that they copy each other, and they all end up in the same place. Or publishing contracts, or record contracts.

Cory Doctorow:
Any super-concentrated industry is going to have a unified vision for what it wants in its lobbying efforts, and it's going to have a lot of money to spend on it.

Cindy Cohn:
So, let's shift, because our focus here is fixing it. And my journey in this podcast is to have a vision of what a better future would look like, what does the world look like if we get this right? Because at EFF, we spend a lot of time articulating all the ways in which things are broken, and that's a fine and awesome thing to do, but we need to fix them.

Cindy Cohn:
So, Cory, what would the world look like if we fixed interoperability? Give us the vision of this world.

Cory Doctorow:
I had my big kind of "road to Damascus" moment about this, when I gave a talk for the 15th anniversary of the computer science program at the University of Waterloo. They call themselves the MIT of Canada. I'm a proud University of Waterloo dropout. And I went back to give this speech and all of these super bright computer scientists were in the audience, grad students, undergrads, professors, and after I talked about compatibility, and so on, someone said, "How do we convince everyone to stop using Facebook and start using something else?"

Cory Doctorow:
And I had this, just this moment where I was like, "Why would you think that that was how you will get rid of Facebook?" Like, "When was it ever the case that if you decided you wanted to get a new pair of shoes, you throw away all your socks?" Why wouldn't we just give people the tool to use Facebook at the same time as something else, until enough of their friends have moved to the something else, that they're ready to quit Facebook?

Cory Doctorow:
And that, to me, is the core of the vision, right? That rather than having this model that's a bit like the model that my grandmother lived through--my grandmother was a Soviet refugee. So, she left the Soviet Union, cut off all contact, didn't speak to her mother for 15 years, and was completely separated from her; it was a big, momentous decision to leave the Soviet Union. We do that, right? Where we tell people, "You either use Twitter or use Mastodon, but you don't read Twitter through Mastodon and have a different experience, and have different moderation rules," and so on. You're just either on Mastodon or you're on Twitter, they're on either side of the iron curtain.

Cory Doctorow:
And instead, we'd have an experience a lot more like the one I had when I moved to Los Angeles from London five years ago, where we not only got to take along the appliances that we liked and just fit them with adapters, we also get to hang out with our family back home by video call and visit them when we want to, and so on--that you let people take the parts of the offer that they like and stick with them, and leave behind the parts they don't like and go to a competitor. And that competitor might be another firm, it might be a co-op, it might be a thing just started by a tinkerer in their garage, it might be a thing started by a bright kid in a Harvard dorm room the way that Zuck did with Facebook.

Cory Doctorow:
And when those companies do stuff that makes you angry or sad, you take the parts of their service that you like, and you go somewhere else where people will treat you better. And you remain in contact with the people, and the hardware, and the services that you still enjoy, and you block the parts that you don't. So, you have technological self-determination, you have agency, and companies have to fight to keep your business because you are not a hostage, you're a customer.

Cindy Cohn:
Yeah. I think that that's exactly it, and well put. I think that we've gotten used to this idea of what we said back in the day about the Apple app store--I don't know why Apple keeps coming up, because they're only one of the actors we're concerned about--but we used to call it the crystal prison, right? You buy an Apple device, and then it's really hard to get out of the Apple universe. It used to be that it was hard to use Microsoft Word unless you used a Windows machine. But we managed to apply pressure--some of that was antitrust litigation--so that that didn't work anymore.

Cindy Cohn:
We want browsers that can take you anywhere on the web, not just to the sites that have made deals with the browser makers. We want ISPs that offer you the entire web, not just the sites that pay them for access. It really is an extension of network neutrality, this idea that we as users get to go where we want and get to dictate the terms of how we go there, at least to the extent of being able to interoperate.

Cory Doctorow:
I mean, apropos of Apple, I don't want to pick on them either, because Apple are fantastic champions of interoperability when it suits them, right? As you say, the document wars were won in part by the iWork suite, where Apple took some really talented engineers, reverse-engineered the gnarly, weird hairball that is Microsoft's Office formats, and made a backwards-compatible new office suite that was super innovative but could also save out to Word and Excel, even though you're writing in Numbers or Pages, and that part's great. And Amazon broke the DRM music monopoly that Apple had when it launched its MP3 store, but now will not release its audiobooks from DRM through its Audible program.

Cory Doctorow:
Apple was really in favor of interoperability without permission when it came to document formats--it benefited--but doesn't like it when it comes to, say, rival app stores. Google is 100% all over interoperability when it comes to APIs, but not so much when it comes to the other areas where they enjoy lock-in. And I think that the lesson here is that we as users want interoperability irrespective of the effect that it has on a company's shareholders.

Cory Doctorow:
The companies have a much more instrumental view of interoperability: that they want interoperability when it benefits them, and they don't want it when it harms them. And I'm always reminded, Cindy, of the thing you used to say, when we were in the early days of the copyright wars around Napster. And we would talk to these lobbyists from the entertainment industry, and they would say, "Well, we are free speech organizations, that's where we cut our teeth." And you would say, "We know you love the First Amendment, we just wish you'd share."

Cindy Cohn:
Yeah, absolutely. Absolutely, and one of the things that we've done recently is, we started off talking about this as interoperability, then we called it adversarial interoperability to make it clear that you don't need to go on bended knee for permission. And recently, we started rebranding it to competitive compatibility. And Cory, you've used both of those terms in this conversation, and I just want to make sure our listeners get what we're doing. We're trying to really think about this. I mean, all of them are correct, but I think competitive compatibility, the reason we ended up there is not only is it fewer syllables, and we can call it ComCom, which we all like, but it's the idea that it's compatibility, it's competitive compatibility, it's being compatible with a competitive market-based environment, a place where users get to decide because people are competing for their interest and for their time.

Cindy Cohn:
And I really love the vision that if you're the old ... This was the product that Power Ventures tried to put out, was a service where it didn't matter whether you had a friend on LinkedIn, or you had a friend on Orkut--it's an old tool--or Facebook. You just knew you had a friend, and you just typed in, "Send the message to Cory," and the software just figured out where you were connected with them and sent the message through it.

Cindy Cohn:
I mean, that's just the kind of thing that should be easy, that isn't easy anymore. Because everybody is stuck in this platform mentality, where you're in one crystal prison or you're in the other, and you might be able to switch from one to the other, but you can't take anything with you when you go.

Cindy Cohn:
The other tool that Power had that I thought was awesome was being able to look at all your social feeds in one screen. So, rather than switching in between them all the time, and trying to remember--which I spend a lot of time on right now, trying to remember whether I learned something on Twitter, or I learned it here on Facebook, or I learned it somewhere else--you have one interface, you have your own cockpit for your social media feeds, and you get to decide how it looks, instead of having to log into each of them separately and switch between them.

Cindy Cohn:
And those are just two ideas of how this world could serve you better. I think there are probably a dozen more. And I'd like for us to think about that: what other ones can we come up with? How would my life be different if we fixed this, in ways that we can imagine right now? And then of course, I think as with all tech and all innovation, the really cool things are going to be the things that we don't think about that show up anyway, because that's how it works.

Cory Doctorow:
Yeah, sure. I mean, a really good example right now is you'd be able to install Fortnite on your iPhone or your Android device, which is the thing you can't do as of the day that we record this. And again, that's what the app store lock-in is for, it's to take a bite out of Fortnite and other independent software vendors. But if you decided that you wanted to keep using the security system that your cable provider Comcast gave to you but then decided it wouldn't support anymore, which is the thing Comcast did last year, you could plug its cameras into a different control system.

Cory Doctorow:
If you decided that you liked the camera that Canary sent you and that you paid for, but you didn't like the fact that the video, rather than being end-to-end encrypted to your phone, gets decrypted in Canary's data center so the company can look at it (and it does so for non-nefarious reasons: it wants to make sure it doesn't send you a motion alert just because your cat walked by the motion sensor)... You may decide that having a camera in your house that's always on, sending video that third parties can look at, is not a thing you like, but you like the hardware--so you just plug that into something else too.

Danny O'Brien:
I love the Catalog of Missing Devices. This is the thing that Cory co-wrote, which was just a list of devices that we cannot see right now, because of some of the laws that prevent people confidently being able to innovate in this space. And I want to bring in, because we talk about this all the time, EFF's role in this, right? We're continuing to lobby, and also to work in the courts on ways that we can challenge and redefine the legal environment here. But what's the message here if you're someone who's an open-source developer, or an entrepreneur, or a user? What's going to move the needle? What's going to take us into this future? And what can individuals do?

Cory Doctorow:
So, this is an iterative process, there isn't a path from A to Z. There is a heuristic for how we climb the hill towards a better future. And the way to understand this, as I tried to get at with this sort of technology and law and monopoly, is that our current situation is the result of companies having these monopoly rents, laws being bad because they got to spend them on it, companies being able to collude because their sectors are concentrated, and technology that works against users being in the marketplace.

Cory Doctorow:
And each one of those affects the other. So for example, if we had, say, merger scrutiny, right? Say we said that firms were no longer allowed to buy nascent competitors, either to crush them or to acquire something that they couldn't do internally, the way, say, Google has with most of its successful products. Really, it's got search, and to a lesser extent Android, which was mostly an acquisition, and ... What's the other one? Oh, and Gmail are the really successful in-house products. Maybe Google Photos, although that's probably just successful because every Android device ships with it. But if we just say Google can't buy Fitbit--Google, a company that has tried repeatedly and failed to make a wearable, isn't allowed to buy Fitbit in order to acquire that capability--then Google starts to lose some of its stranglehold on data, especially if you stop its rivals from buying Fitbit too. And that makes it weaker, so that it's harder for it to spend on legal entrepreneurship.

Cory Doctorow:
If we make devices that compete with Google, or tools that compete with Google--ad blockers, tracker blockers, and so on--then that also weakens them. And if they are weaker, they have fewer legal resources to deploy against these competitors as well. If we convince people that they can want more, right? If we can have a normative intervention, to say, "No one came down off a mount with two stone tablets saying, 'Only the company that made your car can fix it,' or 'Only the company that made your phone can fix it,'" and we got them to understand that the right to repair has been stolen from them, then, when laws are contemplated that either improve our right to repair or take away our right to repair, there's a constituency to fight those laws.

Cory Doctorow:
So the norms, and the markets, and the technology, and the law, all work together with each other. And I'm not one of nature's drivers, I have very bad spatial sense, and when I moved to Los Angeles and became perforce a driver, I found myself spending a lot of time trying to parallel park. And the way that I parallel park is, I turn the wheel as far as I can, and then get a quarter of an inch of space, and then I turn it in the other direction, and I get a quarter of an inch of space. And I think as we try to climb this hill towards a more competitive market, we're going to have to see which direction we can pull in from moment to moment, to get a little bit more policy space that we can leverage to get a little bit more policy space.

Cory Doctorow:
And the four directions we can go in are: norms, conversations about what's right and wrong; laws, which tell you what's legal and not legal; markets, things that are available for sale; and tech, things that are technologically possible. And our listeners, our constituents, the people in Washington, the people in Brussels, they have different skill sets that they can use here, but everyone can do one of these things, right? If you're helping with the norm conversation, you are creating the market for the people who want to start the businesses, and you are creating the constituency for the lawmakers who want to make the laws.

Cory Doctorow:
So, everybody has a role to play in this fight, but there isn't a map from A to Z. That kind of plotting is for novelists, not political struggles.

Cindy Cohn:
I think this is so important. I want to close with this, because I think it's true for almost all of the things that we're talking about fixing: the answer to "Should we do X or Y?" is "yes," right? We are, in some ways, the kind of scrappy, underfunded side of almost every fight we're in around these kinds of things. And so, anybody who's going to force you to choose between strategies is undermining the whole cause.

Cindy Cohn:
These are multi-strategic questions. Should we break up big tech? Or should we create interoperability? The answer is, yes, we need to aim towards doing a bit of all of these things. There might be times when they conflict, but most of the time, they don't. And it's a false choice if somebody is telling you that you have to pick one strategy, and that's the only thing you can do.

Cindy Cohn:
Every big change that we have done has been a result of a whole bunch of different strategies, and you don't know which one is going to give way, which is going to pave the way faster. You just keep pushing on all of them. So, we're finally moving on the Fourth Amendment on privacy, and we're moving in the courts; we could have passed a privacy law, but the legislation got stuck. We've got to do all of these things. They feed each other, they don't take away from each other if we do it right.

Cory Doctorow:
Yeah, yeah. And I want to close by just saying, EFF's 30 years old, which is amazing. I've been with the organization nearly 20 years, which is baffling, and the thing that I've learned on the way is that these are all questions of movements and not individuals. Like as an individual, the best thing you can do is join a movement, right? If you're worried about climate change, it doesn't really ... How well you recycle is way less important than what you do with your neighbors to change the way that we think about our relationship to the climate.

Cory Doctorow:
And if you're worried about our technological environment, then your individual tech choices do matter. But they don't matter nearly so much as the choices that you make when you get together with other people to make this part of a bigger, wider struggle.

Cindy Cohn:
I think that's so right. And even those who are out there in their garages, innovating right now, they need all the rest of the conversation to work. Nobody ever just put something out there in the world and it magically caught fire and changed the world. I mean, we like that narrative, but that's not how it works. And so, even if you're one of those people--and there are many of them who are EFF fans and we love them--who are out there thinking about the next big idea, this whole movement has to move forward, so that that big idea finds the fertile ground it needs, takes seed and grows, and then gives all the rest of us the really cool stuff in our fixed future.

Cindy Cohn:
So, thank you so much, Cory, for taking time with us. You never fail to bring exciting ideas. And I think that you also are really willing to talk to a sophisticated audience and not talk down to people, to bring in complicated ideas, and to expect, and get, the audience to come up to the level of the conversation, so I certainly always learn from talking with you.

Cory Doctorow:
I was going to say, I learned it all from you guys, so thank you very much. And I miss you guys, I can't wait to see you in person again.

Danny O'Brien:
Cory is this little ball of pure idea concentrate, and I was madly scribbling notes through all of that discussion. But one of the phrases that stuck with me was that, he said the companies are blocking interoperability to control critics, customers, and competitors.

Cindy Cohn:
Yeah. I thought that was really good too, and obviously, the most important part of all of this is control. I mean, that's what the companies have. Of course, the part about critics is what especially triggers the First Amendment concerns, but control is the thing and I think that the ultimate power that we should have, the ultimate amount of control we should have is the ability to leave.

Cindy Cohn:
The ultimate power is the power to leave. That's the core thing that is needed to get companies to concentrate on their users. The conflict here is really between companies' desire to control users and users having the right to choose where they want to be.

Danny O'Brien:
One of the other things that I think comes out of this discussion is when you realize that companies, by blocking interoperability, can have exclusive power of censorship or control over their users. There's always someone else more powerful who has influence over the companies and is ultimately going to take and use that power, right? And that's, generally speaking, governments.

Danny O'Brien:
We notice that when you have this capability to influence, or to censor, or to manipulate your users, governments and states ultimately would like access to that power also.

Cindy Cohn:
Yeah. We're seeing this all over the place, there's always a bigger fish. Right now, we see politicians in the United States, in very different directions, jockeying to force companies like Facebook to obey their preferences or agendas. And again, we have high-profile counter-examples, but where we live, EFF, in the trenches, we see that this power of censorship is most often used against those with the least voice in the political arena.

Cindy Cohn:
That kind of branches out to why we care about censorship and the First Amendment. I think that sometimes people forget this. We don't care about the First Amendment and free speech because we think it's okay for anybody to be able to say whatever they want, no matter how awful it is. The First Amendment isn't in our Constitution because we think it's really great to be an asshole. It's because the power to censor is so strong, and so easily misused.

Cindy Cohn:
As we've seen, once somebody has that power, everybody wants to control them. The other thing I think Cory really has a good grasp on is how we got here. We talked a little bit about the kill zone, that venture capitalists won't fund startups that attempt to compete. I think that's really right, and it's a piece that we're going to have to fix.

Danny O'Brien:
Yeah. I think one of the subtleties about the current VC environment that powers so much of current tech investment, at least, is the nature of the exit strategy. These days, a venture capitalist expects to get their return not from a company IPOing, or successfully overturning one of these monopolies, but from being bought out by those monopolies. And that really constrains and influences what new innovators or entrepreneurs plan on doing in the next few years. And I think that's one of the things that sticks in this current, less-than-useful cycle.

Danny O'Brien:
And usually, in these situations, I think that the community that I most expect to provide adversarial interoperability is, at least in theory, free of those financial incentives. And that's the free and open-source software community. So much of the history of open-source has been using interoperability to build on and escape from existing proprietary systems, from the early days of Unix, to LibreOffice being a competitor to Microsoft's word-processing monopoly, and so on.

Danny O'Brien:
And I think where these two things interact is that these days a lot of open-source and free software gets its funding from the big companies themselves, and they don't necessarily want to fund interoperability. So the stuff that doesn't cater to interoperability gets a lot of rewards, while other communities who are fighting to shake off the shackles of proprietary software and dominant monopolies struggle without financial support.

Danny O'Brien:
And of course, there's legal liability there too. We just watched the youtube-dl case, with GitHub throwing that off their service, because it's an attempt to interoperate with one of these big tech giants.

Cindy Cohn:
Yeah. The free and open-source world is vital. They have those muscles, and it's always been how they work. They've always had to make sure that they can play on whatever hardware you have, as well as with other software. So, I think that this is a key to getting us into a place where we can make interoperability the norm, not the exception.

Cindy Cohn:
I also am really pleased about the Internet Archive's work in supporting the idea of a more distributed web. I think they really get the censorship possibilities, and are supporting a lot of little companies, or little developers, or innovators who are trying to build a community to really get this done. And yes, the youtube-dl case: this is a situation in which the lack of protection for interoperability meant that the first thing that happened was that this tool so many people rely on went away, as opposed to any other step. The first thing that happens is we lose the tool. That's because the legal system isn't set up to be even-handed or to handle these kinds of situations; it just moves to censorship first.

Cindy Cohn:
So in this, we've gone over what Cory talked about as the four levers of change. The idea of these four levers originated with Larry Lessig in the 90s. And those four levers are: law, like the DMCA, which is at issue in the youtube-dl case, the Computer Fraud and Abuse Act, and antitrust; norms; technology; and markets.

Cindy Cohn:
They all work together, and you can't just pick one. And there's a lot of efforts to try to say, "Well, you just have to pick one and let the others go." But in my experience, you really can't tell which one will create change, they all reinforce each other. And so, to really fix the Internet, we have to push on all four together.

Danny O'Brien:
But at least now we have four levers rather than no levers at all. On that note, I think we'll wrap up for today. Join us next time.

Danny O'Brien:
Thanks again for joining us. If you'd like to support the Electronic Frontier Foundation, here are three things you can do today. One: you can hit subscribe in your podcast player of choice. And if you have time, please leave a review. It helps more people find us. Two: please share on social media and with your friends and family. Three: please visit eff.org/podcasts, where you can find more episodes, learn about these issues, donate to become a member, and lots more.

Danny O'Brien:
Members are the only reason we can do this work, plus you can get cool stuff like an EFF hat, or an EFF hoodie, or even a camera cover for your laptop. Thanks once again for joining us, and if you have any feedback on this episode, please email podcast@eff.org. We do read every email. This podcast was produced by the Electronic Frontier Foundation with help from Stuga Studios. Music by Nat Keefe of BeatMower.

Categories: Openness, Privacy, Rights

Interdisciplinary perspectives on the tipping points of copyright law

iRights.info - 24 November 2020 - 11:47am

The legal tug-of-war over short music samples, the crucial question of the threshold of originality, the paradigm shift brought about by upload filters – these and other perspectives on copyright, as unfamiliar as they are illuminating, can be found in the newly available anthology "Tipping Points".

Academic discourse on copyright and its adaptation to the digital age is usually the domain of legal scholars and lawyers. In the newly published anthology "Tipping Points", however, musicologists and literary scholars, sociologists, and historians also take up the questions facing copyright today and tomorrow.

"Tipping points" are the moments when a process tips over at a certain point and from then on runs – irreversibly – in a different direction. The contributions trace these tipping points on a large scale – for example with regard to the current copyright reform – as well as on a small scale, such as in the production of new works.

Copyright tipping points, large and small

Four of the 13 scholarly contributions alone deal with copyright aspects of sampling – for example, "the threshold of originality from the perspective of empirical music research", "online markets for music samples and the fixation of ephemeral goods", and "appropriation art in the age of 'Metall auf Metall' and the Internet".

The authors also address the copyright challenges posed by user-generated content on large platforms, using the example of TikTok clips, which often contain protected music.

Further contributions examine "conflicts between 'music consumers' and collecting societies in historical context", as well as crowdfunding and crowdsourcing as "new business and legal models of online literature".

One contribution places the still highly controversial upload filters in the context of "the public domain, the collective, and the cultural commons", framing them as triggers for tipping points in the cultural industry.

Legal questions around shadow libraries and research data management

The anthology's four editors include iRights.info editor Georg Fischer. In his contribution, he examines the phenomenon of academic shadow libraries and discusses what they mean for the current publishing system in the academic world.

Fabian Rack, author at iRights.info and lawyer at iRights.Law, contributes, together with three colleagues, a piece on the national research data infrastructure currently being built by a consortium. Their essay deals with the long-term management of research data in the cultural sciences and the associated legal challenges.

Contributions from the conference of the same name

The "Tipping Points" publication grew out of the conference "on the relationship between freedom and restriction in copyright law", which the Weizenbaum Institute organized in Berlin in February 2020 in cooperation with the Gesellschaft für Musikwirtschafts- und Musikkulturforschung.

The anthology contains all contributions from this event. It is published by Nomos and is available both in a printed edition for purchase and as a free e-book online. The articles are released under the free Creative Commons license CC BY-NC-ND and can be read online and downloaded as PDF files.

How do you turn your legal services into a product? #legaltechtuesday

IusMentis - 24 November 2020 - 8:19am

It is happening slowly, but the trend is inevitable: legal services are moving towards standardization. Clients expect more than pure bespoke work, mainly because they are no longer willing to pay the price for it. Alternative service providers, from accountants to insurers, are increasingly offering products that compete on price with the bespoke work of traditional legal providers. But what, then, is a product – a writ of summons with a barcode for the checkout?

I have written before about what is called commoditization. Traditionally, legal advisors delivered fully bespoke work, like the tailor of old who produced a perfect custom fit for each client from a roll of fabric. Smart firms standardize the process and deliver custom work based on standard clauses or easily adapted model contracts: they keep half-finished trousers and jackets in the closet and trim the fabric to fit the client. Standardization can be pushed ever further, and the inevitable endpoint is the product: a completely standard piece of service, at a fixed price, available from multiple providers in virtually identical form.

Aha, you may be thinking: I get that one, that is simply offering your service at a fixed price. Now, a fixed price is indeed necessary for a product, but it is certainly not sufficient. The product must also be well-defined; otherwise that price is nothing more than an estimate of the work times the hourly rate (with a promise not to exceed it).

A legal product meets the following requirements:

  • The service is standard – a true commodity, i.e. an interchangeable product. It must be deliverable with little or no custom work. A claim for compensation for a delayed train is standard, for example; claiming damages over a botched renovation involves too much custom work.
  • The service must be easy to make accessible: the target audience must be able to invoke it effortlessly. In practice, that usually means an app or online service where users enter the required data and receive the results immediately.
  • Delivery must be scalable. Regardless of how many clients use the service, quality and speed must remain the same. This is where legal services often go wrong: they assume a person delivering the service. Of course you can put more lawyers on the same service ("we now have six litigators"), but that has a natural limit.
  • The price must be fixed, or at least fixed per component. A product must cost the same for every client. At most, an optional extra (not just filling in an application but also submitting it, for example) may cost a fixed additional amount.
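As a minimal sketch of the last requirement – all names and prices here are hypothetical, not taken from any real provider – a product's price can be modeled as a fixed base plus fixed-price optional extras, rather than hours times an hourly rate:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class LegalProduct:
    """A standardized legal service: fixed base price, fixed-price extras."""
    name: str
    base_price: float                           # identical for every client
    extras: dict = field(default_factory=dict)  # optional add-ons, each at a fixed price

    def quote(self, chosen_extras=()):
        """The total is fully determined up front -- no hourly estimates."""
        unknown = set(chosen_extras) - self.extras.keys()
        if unknown:
            raise ValueError(f"not offered as a standard extra: {unknown}")
        return self.base_price + sum(self.extras[e] for e in chosen_extras)

# Hypothetical example: a standard train-delay compensation claim
claim = LegalProduct(
    name="train delay compensation claim",
    base_price=49.0,
    extras={"submit on your behalf": 25.0},
)
print(claim.quote())                           # base product only
print(claim.quote(["submit on your behalf"]))  # with the optional extra
```

The point of the sketch is that any client choosing the same options pays exactly the same amount, which is what distinguishes a product from a capped-fee bespoke engagement.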
In practice, a legal product is almost always delivered as a software service, such as an app. This is one of the finer examples of legal tech: transforming legal services with technology. What transformation is more fundamental than turning a bespoke service into a standard product?

For a legal service provider, productization can feel threatening, and with reason. A standard product will bring in less money per unit than the bespoke service it replaces, and it also carries risks, such as incomplete or inadequate service, with all the complaints that entails.

There are, however, several ways to counter this threat. The first is to use the product as lead generation: the product then effectively forms the first half – the intake – of the actual service. For some clients, the full service turns out to be unnecessary, because they fit the standard profile. Other clients need a great deal of service, and everything in between is possible too. A fixed price for additional work, such as reviewing all the details, is often feasible.

A somewhat more far-reaching approach is to position product sales clearly apart from the bespoke services – a separate legal entity, with referrals ("would you like more?") of course still possible. That way, the low-cost product sales sit alongside, but clearly apart from, the high-end services. This can be done as a completely separate brand, but the two can also coexist perfectly well. Albert Heijn grew big this way: AH Basic, then the "real" house brand, and then the Excellent brand.

Of course, there will always be demand in the market for pure bespoke work. But make no mistake: that market is getting smaller and smaller, because more and more clients are discovering that a standard product is actually good enough. Just as many people prefer to buy their suit off the rack, with at most minor alterations such as letting down the trouser legs, rather than go to a tailor. The tailor will certainly continue to exist, but will increasingly become a niche. The large market share – and thus the large revenue – will shift to the off-the-rack sellers.

Arnoud

The post "How do you turn your legal services into a product? #legaltechtuesday" appeared first on Ius Mentis.

Procedural rules for court confirmation of private restructuring agreements published

A change in the law makes a binding private agreement between a company and its creditors possible. The Dutch judiciary (de Rechtspraak) today publishes new procedural rules for a special category of insolvency cases. The rules were drawn up because the Act on Court Confirmation of Private Restructuring Plans (Wet homologatie onderhands akkoord) enters into force on 1 January 2021. Under the new law, companies, their creditors, and shareholders can conclude a binding private agreement when bankruptcy looms. If the court confirms the agreement, it applies to all creditors – including those who did not consent to it.

The law, which has its origins in the previous financial crisis, is meant to prevent companies from being declared bankrupt while they are still (partly) viable. A private agreement can be a way to restructure debts and thus avert bankruptcy, so the thinking goes. Only once the court rules that the agreement meets all statutory requirements does it become binding. The new procedural rules describe, among other things, how the parties involved can submit a request for confirmation of a private agreement to the court.

The new procedural rules enter into force on 1 January 2021 and can be found on the regulations, procedures, and forms page.

Categories: Rights

The FCC’s Independence and Mission Are at Stake with Trump Nominee

Electronic Frontier Foundation (EFF) - news - 23 November 2020 - 8:01pm

When there are only five people in charge of a major federal agency, the personal agenda of even one of them can have a profound impact. That’s why EFF is closely watching the nomination of Nathan Simington to the Federal Communications Commission (FCC).

Simington’s nomination appears to be the culmination of a several-month project to transform the FCC and expand its purview in ways that threaten our civil liberties online. The Senate should not confirm him without asking some crucial questions about whether and how he will help ensure that the FCC does the public interest job Congress gave it, which is to expand broadband access, manage the public’s wireless spectrum to their benefit, and protect consumers when they use telecommunications services.

There’s good reason to worry: Simington was reportedly one of the legal architects behind the president’s recent executive order seeking to have the FCC issue “clarifying” regulations for social media platforms. The executive order purports to give the FCC authority to create rules to which social media platforms must adhere in order to enjoy liability protections under Section 230, the most important law protecting our free speech online. Section 230 protects online platforms from liability for the speech of their users, while protecting their flexibility to develop their own speech moderation policies. The Trump executive order would upend that flexibility. 

As we’ve explained at length, this executive order was based on a legal fiction. The FCC’s role is not to enforce or interpret Section 230; its job is to regulate the United States’ telecommunications infrastructure: broadband, telephone, cable television, satellite, and all the various infrastructural means of delivering information to and from homes and businesses in the U.S. Throughout the Trump administration, the FCC has often shirked that duty—most dramatically, by abandoning any meaningful defense of net neutrality. Simington’s nomination seems to be an at-the-buzzer shot by an administration that’s been focused on undermining our protections for free speech online, instead of upholding the FCC’s traditional role of ensuring affordable access to the Internet and other communications technologies, and ensuring that those technologies don’t unfairly discriminate against specific users or uses.

The FCC Is Not the Speech Police—And Shouldn’t Be

Let’s take a look at the events leading up to Simington’s nomination. Twitter first applied a fact-check label to a tweet of President Trump’s in May, in response to his claims that mail-in ballots were part of a campaign of systemic voter fraud. As a private company, Twitter has the First Amendment right to implement such fact-checks, or even to choose not to carry someone’s speech for any reason.

The White House responded with its executive order that, among other things, directed the FCC to draft regulations that would narrow the Section 230 liability shield. In doing so, it perverted the FCC's role: the agency is supposed to be a telecom regulator, not the social media police.

The White House executive order reflects a long-running (and unproven) claim in conservative circles that social media platforms are biased against conservative users. Some lawmakers and commentators have even claimed that their biased moderation practices somehow strip social media platforms of their liability protections under Section 230. As early as 2018, Sen. Ted Cruz incorrectly told Facebook CEO Mark Zuckerberg that in order to be shielded by 230, a platform had to be a “neutral public forum.” In the years since then, members of Congress have introduced multiple bills purporting to condition platforms’ 230 immunity on “neutral” moderation policies. As we’ve explained to Congress, a law demanding that platforms moderate speech in a certain way would be unconstitutional. The misguided executive order has the same inherent flaw as the bills: the government cannot dictate online platforms’ speech policies.

It’s not the FCC’s job to police social media, and it’s also not the president’s job to tell it to. By design, the FCC is an independent agency and not subject to the president’s demands. But when Republican FCC commissioner Michael O’Rielly correctly pointed out that government efforts to control private actor speech were unconstitutional, he was quickly punished. O’Rielly wrote [pdf], “the First Amendment protects us from limits on speech imposed by the government – not private actors – and we should all reject demands, in the name of the First Amendment, for private actors to curate or publish speech in a certain way.” The White House responded by withdrawing O’Rielly’s nomination and nominating Simington, one of the drafters of the executive order.

During a transition of power, it's customary for independent agencies like the FCC to pause on controversial actions. The current FCC has so far adhered to that tradition, only moving forward items that have unanimous support. Every item the FCC has voted on since the election had the support of the Chair, the other four commissioners, and industry and consumer groups. For example, the FCC has moved forward on freeing up 5.9 GHz spectrum for unlicensed uses, a move applauded by EFF and most experts. But we worry that in nominating Simington, the administration is attempting to pave the way for a future FCC to go far beyond its traditional mandate and move into policing social media platforms' policies. We're glad to see Fight for the Future, Demand Progress, and several other groups rightfully calling on the Senate to not move forward on Nate Simington's nomination.

The FCC’s Real Job Is More Important Than Ever 

There’s no shortage of work to do within FCC’s traditional role and statutory mandate. The FCC must begin to address the pressure test that the COVID-19 pandemic has posed to the U.S. telecommunications infrastructure. Much of the U.S. population must now rely on home Internet subscriptions for work, education, and socializing. Millions of families either have no home Internet access at all or lack sufficient access to meet this new demand. The new FCC has a monumental task in front of it.

During his Senate confirmation hearing, Simington gave no real indication on how he plans to work on the real issues facing the agency: broadband access, remote school challenges, spectrum management, improving competition, and public safety rules, for example. The only things we learned from the hearing are that he plans to continue the Trump-era policy of refusing to regulate large ISPs and that he refuses to recuse himself from decisions on the misguided executive order that he helped write. Before the Simington confirmation hearing started, Trump again urged Republicans to quickly confirm his nominee on a partisan basis.

In response, Senator Richard Blumenthal called for a hold on Simington’s nomination, indicating real concern for the FCC’s independence from the White House. That means the Senate would need to bypass his filibuster if it truly wanted to confirm Trump’s nominee.

Sen. Blumenthal’s concerns are real and important. President Trump effectively fired his own commissioner (O’Rielly) for expressing basic First Amendment principles. Before it confirms Simington, the Senate ought to consider what the nomination means for the future of the FCC. As the pandemic continues to worsen, there are too many mission critical issues for the FCC to tackle for it to continue with Trump’s misguided war on Section 230.

Categories: Openness, Privacy, Rights

E-Privacy Regulation proposal off the table, Noyb vs. Apple, upload filters against terrorist content

iRights.info - 23 November 2020 - 9:48am

The past week in review: no agreement on the e-Privacy Regulation at EU level; data protection activists around Max Schrems file a complaint against Apple; and the German federal government makes a new proposal for the use of upload filters against terrorist content.

e-Privacy Regulation proposal fails

As part of its EU Council presidency, the German federal government was unable to reach agreement in the EU Council of Ministers on the e-Privacy Regulation. This became known this week at the annual conference of the Gesellschaft für Datenschutz und Datensicherheit (GDD). The Council of Ministers discussed the proposal of 4 November 2020 last week and has now rejected it. The German government wants to prohibit the collection of personal data on the basis of "legitimate interest". In addition, Articles 9 and 10 remain completely deleted from the draft: Article 9 of the e-Privacy Regulation specifies legal requirements for effective consent, while Article 10 prescribes do-not-track mechanisms in browsers towards third parties. An agreement is no longer expected during the German Council presidency. At the national level, however, the Federal Ministry for Economic Affairs (BMWi) is now planning a draft Telecommunications and Telemedia Data Protection Act (TTDSG), intended to implement the cookie provisions of the revised 2009 directive on privacy in electronic communications. In 2021, Portugal takes over the EU Council presidency and can then attempt a new compromise proposal.

Data protection complaint against Apple

Noyb, a group of data protection activists, has filed a complaint against Apple with the Berlin Commissioner for Data Protection and Freedom of Information and the Spanish data protection authority AEPD. The organization, founded by Max Schrems, has demanded that Apple delete the Identifier for Advertising/Advertisers (IDFA) and pay a fine. The identifier allegedly enables Apple and iPhone apps to analyze information about users' online behavior, which makes it easier to serve targeted advertising without users' consent. According to noyb, the IDFA is created by the operating system and, much like cookies, can be used for advertising tracking. In a statement, however, Apple rejects the accusations as false: Apple does not access users' IDFA, and with iOS 14 in particular, users will actually gain even more control over ad tracking.

Upload filters in EU counter-terrorism

The German federal government has presented the EU Parliament with a compromise proposal on cross-border rapid removal orders for terrorist content. Especially after the attacks in Nice and Vienna, the planned regulation against terrorist propaganda is to be adopted as quickly as possible. As part of its EU Council presidency, the German government proposes a two-stage procedure: suspicious posts on social networks are to be deleted within one hour, and the country in which the host's provider is based must then review the deletion within 24 hours. The government's proposal also provides for a legal remedy against removal orders, and Europol would have to submit an annual report on all requests. Operators of social networks would be required to use "specific" measures to combat terrorist content; upload filters would be optional rather than mandatory. Criticism came from the Left Party and the Pirate Party, which voiced concern about arbitrary removal orders from heads of government in Poland or Hungary.

This weekly review was written by and ; license: BY-NC-SA 3.0

Wait, Marktplaats scams aren't fraud?

IusMentis - 23 November 2020 - 8:19am

A surprise to many: the best-known form of online scamming, simply taking the money and never delivering, is legally not fraud. It came up again recently and remains confusing. But the core is that we do not want to elevate every form of breach of contract into a criminal offense, especially because the statutory provision on fraud presupposes some genuine trick.

In everyday speech we quickly call something a scam when a deal disappoints and you lose your money (or goods). The Van Dale dictionary equates it with deception, which in turn means "intentionally misleading". Legally, however, there are many more strings attached. This is what the law says (Article 326 of the Dutch Criminal Code):

Any person who, with the intent of unlawfully benefiting himself or another, either by assuming a false name or a false capacity, or by cunning artifices, or by a tissue of fabrications, induces someone to hand over any good, to render a service, to make data available, to incur a debt or to extinguish a claim, shall, as guilty of fraud, be punished with imprisonment of no more than four years or a fine of the fifth category.

The legal definition of "oplichting" (fraud) thus demands considerably more than merely fooling or misleading someone a bit. You really have to pull a dirty trick. That can be done with a false name or capacity, or by (old legal Dutch is truly magnificent) "listige kunstgrepen" (cunning artifices) or (even more magnificent) a "samenweefsel van verdichtsels" (tissue of fabrications). But if you do none of that, it is simply not fraud.

The simplest example: I list a phone for sale on Marktplaats at a low price, people pay me the money, I close my account and never, ever deliver the phone. In fact, I don't even have a phone. Legally I am not punishable, because what false name or capacity did I use, and what was my cunning artifice or tissue of fabrications? There are none.

Of course, I am in breach of contract and can therefore be sued in civil court. If I am a professional trader, I have also probably committed an unfair commercial practice and can be fined administratively if the ACM really feels like it. And I lose my Marktplaats account, including all my reviews and history, which hurts too. But punishable, in the sense of a criminal record, a fine, and jail? No.

The core is that if you merely fail to deliver (and never intended to), you are not a fraudster, because fraud requires "false pretenses or a tissue of fabrications". You really have to pull a trick to be a fraudster.

In 2016 the Dutch Supreme Court (Hoge Raad) once again set out the principles. The core is that you do not want to elevate every form of deceit into the crime of fraud (four years in prison); it has to be a serious case. The Court cites the example of an internet entrepreneur who knew he could not deliver but kept his webshop open anyway: not every form of knowingly dishonest business conduct constitutes the crime of "oplichting" made punishable in Article 326 of the Criminal Code. That also holds where it can be proven that someone was harmed by a person who was neither willing nor able to fulfill his obligation and who, contrary to the truth, presented himself as a bona fide buyer or seller. Of course, buyers are then left somewhat out in the cold. But there are plenty of other remedies, such as paying with a protected instrument (like a credit card), researching reliability (the reviews), buying from well-known web shops (with seals of approval such as Thuiswinkel Waarborg), or paying afterwards or on pickup/delivery. (Oh, and in theory dragging the seller to court, but who actually does that.) Criminal law is then really no longer needed here.

Specifically in the internet context, the truly annoying figures are those who make a habit of not delivering while keeping the money. A solution has been devised for that, which recently entered the Criminal Code (Article 326e):

Any person who makes a profession or a habit of selling goods or providing services for payment by means of an automated system, with the intent of securing payment for those goods or services for himself or another without full delivery, shall be punished with imprisonment of no more than four years or a fine of the fifth category.

This does make it possible to tackle large-scale internet swindlers, while the individual "you said you would deliver, so I'm filing a police report" cases remain outside criminal law. I am not yet aware of any convictions under this article.

Arnoud

The post "Wait, Marktplaats scams aren't fraud?" first appeared on Ius Mentis.

ICANN Can Stand Against Censorship (And Avoid Another .ORG Debacle) by Keeping Content Regulation and Other Dangerous Policies Out of Its Registry Contracts

Electronic Frontier Foundation (EFF) - nieuws - 22 november 2020 - 11:37pm

The Internet’s domain name system is not the place to police speech. ICANN, the organization that regulates that system, is legally bound not to act as the Internet’s speech police, but its legal commitments are riddled with exceptions, and aspiring censors have already used those exceptions in harmful ways. This was one factor that made the failed takeover of the .ORG registry such a dangerous situation. But now, ICANN has an opportunity to curb this abuse and recommit to its narrow mission of keeping the DNS running, by placing firm limits on so-called “voluntary public interest commitments” (PICs, recently renamed Registry Voluntary Commitments, or RVCs).

For many years, ICANN and the domain name registries it oversees have given mixed messages about their commitments to free speech and to staying within their mission. ICANN’s bylaws declare that “ICANN shall not regulate (i.e., impose rules and restrictions on) services that use the Internet’s unique identifiers or the content that such services carry or provide.” ICANN’s mission, according to its bylaws, “is to ensure the stable and secure operation of the Internet's unique identifier systems.” And ICANN, by its own commitment, “shall not act outside its Mission.”

But…there’s always a but. The bylaws go on to say that ICANN’s agreements with registries (the managing entities of each top-level domain like .com, .org, and .horse) and registrars (the companies you pay to register a domain name for your website) automatically fall within ICANN’s legal authority, and are immune from challenge, if they were in place in 2016, or if they “do not vary materially” from the 2016 versions.

Therein lies the mischief. Since 2013, registries have been allowed to make any commitments they like and write them into their contracts with ICANN. Once they’re written into the contract, they become enforceable by ICANN. These “voluntary public interest commitments”  have included many promises made to powerful business interests that work against the rights of domain name users. For example, one registry operator puts the interests of major brands over those of its actual customers by allowing trademark holders to stop anyone else from registering domains that contain common words they claim as brands.

Further, at least one registry has granted itself “sole discretion and at any time and without limitation, to deny, suspend, cancel, or transfer any registration or transaction, or place any domain name(s) on registry lock, hold, or similar status” for vague and undefined reasons, without notice to the registrant and without any opportunity to respond.  This rule applies across potentially millions of domain names. How can anyone feel secure that the domain name they use for their website or app won’t suddenly be shut down? With such arbitrary policies in place, why would anyone trust the domain name system with their valued speech, expression, education, research, and commerce?

Voluntary PICs even played a role in the failed takeover of the .ORG registry earlier this year by the private equity firm Ethos Capital, which is run by former ICANN insiders. When EFF and thousands of other organizations sounded the alarm over private investors’ bid for control over the speech of nonprofit organizations, Ethos Capital proposed to write PICs that, according to them, would prevent censorship. Of course, because the clauses Ethos proposed to add to its contract were written by the firm alone, without any meaningful community input, they had more holes than Swiss cheese. If the sale had succeeded, ICANN would have been bound to enforce Ethos’s weak and self-serving version of anti-censorship.

A Fresh Look by the ICANN Board?

The issue of PICs is now up for review by an ICANN working group known as “Subsequent Procedures.” Last month, the ICANN Board wrote an open letter to that group expressing concern about PICs that might entangle ICANN in issues that fall “outside of ICANN’s technical mission.” It bears repeating that the one thing explicitly called out in ICANN’s bylaws as being outside of ICANN’s mission is to “regulate” Internet services “or the content that such services carry or provide.” The Board asked the working group [pdf] for “guidance on how to utilize PICs and RVCs without the need for ICANN to assess and pass judgment on content.”

A Solution: No Contractual Terms About Content Regulation

EFF supports this request, and so do many other organizations and stakeholders who don’t want to see ICANN become another content moderation battleground. There’s a simple, three-part solution that the Subsequent Procedures working group can propose:

  • PICs/RVCs can only address issues with domain names themselves—not the contents of websites or apps that use domain names;
  • PICs/RVCs should not give registries unbounded discretion to suspend domain names;
  • and PICs/RVCs should not be used to create new domain name policies that didn’t come through ICANN processes.

In short, while registries can run their businesses as they see fit, ICANN’s contracts and enforcement systems should have no role in content regulation, or any other rules and policies beyond the ones the ICANN Community has made together.

A guardrail on the PIC/RVC process will keep ICANN true to its promise not to regulate Internet services and content.  It will help avoid another situation like the failed .ORG takeover, by sending a message that censorship-for-profit is against ICANN’s principles. It will also help registry operators to resist calls for censorship by governments (for example, calls to suppress truthful information about the importation of prescription medicines). This will preserve Internet users’ trust in the domain name system.


Once Again, Facebook Is Using Privacy As A Sword To Kill Independent Innovation

Electronic Frontier Foundation (EFF) - nieuws - 20 november 2020 - 9:37pm

Facebook claims that their role as guardian of users’ privacy gives them the power to shut down apps that give users more control over their own social media experience. Facebook is wrong. The latest example is their legal bullying of Friendly Social Browser.

Friendly is a web browser with plugins geared towards Facebook, Instagram, and other social media sites. It’s been around since 2010 and has a passionate following. Friendly offers ad and tracker blocking and simplifies downloading of photos and videos. It lets users search their news feeds by keyword, or reorder their feeds chronologically, and it displays Facebook pages with alternative “skins.”

To Facebook’s servers, Friendly is just a browser like any other. Users run Friendly much as they would Google Chrome, Mozilla Firefox, or any other standard web browser. According to Friendly, its software doesn’t call any developer interfaces (APIs) into Facebook or Instagram. Friendly has also stated that they don’t collect any personal information about users, including posts or uploads. Friendly does collect some anonymous usage data, and sends the ads that people view to a third-party analytics firm.

Over the summer, Facebook’s outside counsel demanded that Friendly stop offering its browser. Facebook’s lawyer claimed that Friendly violated Facebook’s terms of service by “chang[ing] the way Facebook and Instagram look and function” and “impairing [their] intended operation.” She claimed, incorrectly, that violating Facebook’s terms of service was also a violation of the federal Computer Fraud and Abuse Act (CFAA) and its California counterpart.

Although Friendly explained to Facebook’s lawyers that their browser didn’t access any Facebook developer APIs, Facebook hasn’t budged from its demand that Friendly drop dead. 

Today, EFF sent Facebook a letter challenging Facebook’s legal claims. We explained that the CFAA and its California counterpart are concerned with “access” to a protected computer:

California law defines “access” as “to gain entry to, instruct, cause input to, cause output from, cause data processing with, or communicate with” a computer. Friendly is a web browser, so it is our understanding that Friendly does not itself “gain entry to” or “communicate with” Facebook in any way. Like other popular browsers such as Google Chrome or Mozilla Firefox, therefore, Friendly does not “access” Facebook; Facebook users do. But presumably Facebook knows better than to directly accuse its users of being malicious hackers if they change the colors of websites they view.

While EFF is not representing Friendly at this time, we weighed in because Facebook’s claims are dangerous. Facebook is claiming the power to decide which browsers its users can use to access its social media sites, an extremely broad claim. According to the reasoning of Facebook’s demand, accessibility software like screen readers, magnifiers, and tools that change fonts or colors to make pages more readable for visually impaired people all exist by Facebook’s good will, and could be shut down anytime if Facebook decides they “change the way Facebook and Instagram look and function.”

Friendly is far from the only victim of the company’s strong-arming. Just last month, Facebook threatened the NYU Ad Observatory, a research project that recruits Facebook users to install a plugin to collect the ads they’re shown. And in 2016, Facebook convinced a federal court of appeals that the CFAA barred a third-party social media aggregator from interacting with user accounts, even when those users chose to sign up for the aggregator’s service. In sum, Facebook’s playbook—using the CFAA to enforce spurious privacy claims—has made it harder for innovators, security experts, and researchers of all stripes to use Facebook in their work. 

Facebook has claimed that it must bring its legal guns to bear on any software that interoperates with Facebook or Instagram without permission, citing to the commitments that Facebook made to the Federal Trade Commission after the Cambridge Analytica scandal. But there are different kinds of privacy threats. Facebook’s understandable desire to protect users (and its own reputation) against privacy abuses by third parties like Cambridge Analytica doesn’t take away users’ right to guard themselves against Facebook’s own collection and mishandling of their personal data by employing ad- and tracker-blocking software like Friendly (or EFF’s Privacy Badger, for that matter). 

Nor do Facebook’s privacy responsibilities justify stopping users from changing the way they experience Facebook, and choosing tools to help them do that. Attempts to lock out third-party innovators are not a good look for a company facing antitrust investigations, including a pending lawsuit from the Federal Trade Commission.

The web isn’t television. Website owners might want to control every detail about how their sites look and function, but since the very beginning, users have always been in control of their own experience—it’s one of the defining features of the Web. Users can choose to re-arrange the content they receive from websites, save it, send it along to others, or ignore some of it by blocking advertisements and tracking devices. The law can’t stop users from choosing how to receive Facebook content, and Facebook shouldn’t be trying to lock out competition under a guise of protecting privacy.

Related Cases: Facebook v. Power Ventures

Our future plans for the Toolbox

Bits of Freedom (BOF) - 20 November 2020 - 1:10pm

Last week we launched our completely renewed privacy Toolbox and the Fix je privacy campaign. These are our plans for the coming period.

Since the corona crisis, everyone has been online more than ever, so it is also more important than ever to pay attention to online safety and privacy. On fixjeprivacy.nl we now offer tips on video calling safely, sharing files, and recognizing internet scammers. The website also helps you create a strong password and adjust the privacy settings on your phone, and explains why software updates matter.
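As an illustration of the password tip, a strong random password takes only a few lines of code. This sketch is our own and is not taken from the Toolbox; it uses Python's cryptographically secure `secrets` module rather than the predictable `random` module:

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a random password from letters, digits, and punctuation
    using a cryptographically secure randomness source."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # a fresh random password on every run
```

A password manager remains the more practical choice for most people, since it also stores the result; a generator like this mainly shows why length and a large alphabet matter.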

The Toolbox has been live for a week and is open to everyone. But now that the website exists, we naturally have many more plans. And you can help us with them.

Help us and donate

Future plans
  • More tips: there are already dozens of tips on the site, but we also receive requests for new ones, and we have a wish list of our own. In the coming period we will keep adding tips, for example on privacy-friendly fitness apps and on the privacy-friendliness of tablets.
  • More toolkits: a new part of the Toolbox is the toolkits, for which we interview experts who can offer more tips on a specific topic. For example, we want to add a toolkit for entrepreneurs and one for activists.

Let us know your suggestion for a tip or toolkit.

  • Up to date: it is of course important that the step-by-step guides and tools we recommend in the Toolbox stay up to date. That means we have to keep an eye on developments and will often have to rewrite the texts.
  • A dedicated newsletter: we would like to add a feature that lets you subscribe to a newsletter that helps you fix your privacy in 10 weeks. We would then build an email program that delivers a tip to your inbox every week.
Help out

You can help us make this happen. We are a small organization and depend on donations and donors for our income, so your contribution really makes a difference. You help us most with a recurring donation, for example monthly or quarterly, but we are also very happy with a one-time donation.

Your donation not only helps us realize our plans for the Toolbox, but also supports the larger fight for sharper privacy rules and legislation. That way we also fix the things you cannot solve with the Toolbox yourself.

Yes, I'd like to help!


"Users and holders of copyrights at the same time": 48 content creators call for a balancing of interests in implementing the EU directive

iRights.info - 20 November 2020 - 12:34pm

They publish homemade Let's Play videos and photo stories on Youtube, Twitch, Instagram, and TikTok, reaching several million followers. With their statement on the draft bill of the Federal Ministry of Justice, "prosumers" are speaking up collectively in large numbers for the first time. They criticize the planned upload filters and make proposals on flagging.

Right at the start of their statement on the reform draft of the Federal Ministry of Justice and Consumer Protection (BMJV), the nearly 50 content creators explain that, as a young industry, taking part in a legislative process is a first for them. They go on to describe themselves and explain why they consider it necessary to speak up as actors of a very particular media genre that achieves enormous reach.

The signatories include the well-known Let's Players and Youtubers PietSmiet, Gronkh, Le Floyd, Rezo, Felix von der Laden, Max Knabe, Rewi, Tilo Jung, and others. Many count several hundred thousand followers on their respective channels, some even several million. In total, they reach well over 50 million users on Youtube, Twitch, Instagram, and Tiktok.

The signatories thus certainly belong to the most popular and wide-reaching figures of their genre and respective subgenres. More than that: some of them, the statement says, "use upload platforms to present our works, and many of us have thus been able to build small and medium-sized businesses that have been growing for more than ten years."

Tens of thousands of content creators would go unconsidered

At the same time, the nearly 50 signatories stand for the many thousands active on all four major platforms as the oft-described "prosumers": users who do not merely consume games, videos, music, blogs, and other media offerings, but are simultaneously producers, because they create their own content and publish it as user-generated content on the relevant platforms.

Giving these tens of thousands of prosumers a voice is the recognizable leitmotif of the statement. It says that content creators are "users and holders of copyrights at the same time". Yet the BMJV's reform draft fails to take this into account, because (for instance in its explanatory notes) it separates users too strictly from the creatives and businesses of the cultural economy:

This draws a line between holders of copyrights and users of upload platforms that does not correspond to lived practice. We content creators protested against the then Article 13 (later: Article 17) both as users of upload platforms and as holders of copyrights, as creatives, and as businesses of the cultural economy. Content creators are a relevant part of German culture, and we ask the Federal Ministry of Justice and Consumer Protection to reflect this in the draft bill.

In general, the signatories welcome the BMJV's draft. Their pointed criticism, however, targets the planned rules that mandate automated checking of uploads:

We fundamentally reject automated systems that check uploads for copyright infringement, since such automated systems cannot understand the content of uploaded works and are therefore not suited to detecting copyright infringements without error. Their use will lead to misjudgments and thus to the suppression of works that are actually legal. We do acknowledge, however, that due to the EU requirements the draft bill cannot do entirely without automated checking systems.
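The creators' objection can be made concrete with a deliberately crude sketch (our own caricature, not any real filter): a fingerprint matcher flags any upload that shares bytes with a protected work, even when the overlap is a perfectly legal quotation, because the matcher sees only bytes, never context.

```python
# Caricature of an automated upload check: sliding-window byte
# fingerprints of a protected work, matched against an upload.
# The matcher cannot distinguish infringement from legal quotation.

def fingerprints(data: bytes, window: int = 8) -> set:
    """Return the set of sliding-window byte fingerprints of a work."""
    return {data[i:i + window] for i in range(len(data) - window + 1)}

protected_work = b"famous melody bytes ..."
index = fingerprints(protected_work)

# A review quoting a short excerpt: legal, but it shares fingerprints.
legal_quotation = b"review citing: famous melody bytes ... end"
blocked = bool(fingerprints(legal_quotation) & index)
print(blocked)  # True: the legal quotation is flagged anyway
```

Real content-ID systems use robust perceptual fingerprints rather than raw bytes, but the structural problem the statement describes is the same: matching alone cannot recognize an exception such as quotation, satire, or pastiche.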

Flagging should also be possible after the upload

They also consider the proposed rules on so-called flagging to be in need of revision. At issue is the contentious question of when and how platform users can mark the content they upload as legal, for example because they hold any required usage rights or because they invoke exceptions such as the right to quote, satire, or pastiche, in order to prevent unjustified blocking.

Specifically, the content creators point to the rule in Section 8 of the draft, under which platforms would only have to check any flagging during the upload, but not afterwards.

Yet in practice it constantly happens both that content creators obtain license rights after the fact and that rights holders demand unjustified blocks, the statement says. The signatories therefore argue that "the flagging and the subsequent arbitration process should also apply to works that have already been uploaded".

Praise for the planned "legalization of memes"

The content creators explicitly welcome the exceptions provided in the draft for the communication to the public of a tiny excerpt of copyright-protected works and parts of works for non-commercial purposes or for generating insignificant revenue:

Under the stated conditions, thanks to Article 3 Section 6 it would be legal to use up to 20 seconds of a film, up to 20 seconds of an audio track, up to 1,000 characters of a text, and an image of up to 250 kilobytes without having to acquire a license for the use, since the use is remunerated to the rights holders via the service providers. We content creators expressly support this rule. This so-called 'legalization of memes' shows that the BMJV is making policy close to reality. What makes up our culture keeps evolving, also through digitization. Memes have been part of our culture for many years and are finally recognized by this draft bill.
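The thresholds quoted above amount to a simple per-medium check. The function below is our own illustrative sketch of those limits, not text from the draft bill, and says nothing about the other conditions (non-commercial purpose, remuneration via the platform) that the rule also requires:

```python
# Illustrative sketch of the excerpt thresholds quoted in the statement:
# 20 s of film, 20 s of audio, 1,000 characters of text, 250 KB image.
LIMITS = {
    "film_seconds": 20,
    "audio_seconds": 20,
    "text_characters": 1000,
    "image_kilobytes": 250,
}

def within_proposed_limits(kind: str, amount: float) -> bool:
    """Return True if an excerpt of the given kind stays within
    the quoted threshold for that medium."""
    if kind not in LIMITS:
        raise ValueError(f"unknown excerpt kind: {kind}")
    return amount <= LIMITS[kind]

print(within_proposed_limits("film_seconds", 15))      # a 15-second film clip
print(within_proposed_limits("image_kilobytes", 300))  # a 300 KB image
```

The point of writing it out is that the rule is mechanical on size but not on purpose: the size check could run in software, while the purpose conditions cannot.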

Conclusion

With this statement, probably for the first time in the history of copyright, a cross-cutting interest group is speaking up whose concerns have so far gone almost entirely unnoticed. More than that: the "prosumers" thereby fundamentally call into question the traditional opposition between the interests of rights holders and authors on the one hand and users on the other.

The content creators level well-founded criticism at the draft bill that deserves to be heard and taken into account. This is supported in particular by their starting point: the creative use of media offerings for streams, clips, and memes is everyday practice, lived hundreds of thousands of times a day, and therefore requires rules that create more balance between old and (not really so) new interest groups.

Ticketmaster fined 1.4 million euros by British privacy watchdog

IusMentis - 20 November 2020 - 8:20am

The British company Ticketmaster has been fined for a 2018 data breach, I read on Nu.nl. Through a vulnerability in the chatbot on its website, third parties could gain access to the payment details of 37,000 credit card holders, and in theory even of 9.4 million people across the EU. What struck me most about the case is that the chatbot was a bought-in component and that the supplier had been aware of the leak for some time.

The penalty notice explains that in 2018 a number of bank customers discovered their credit cards being used fraudulently, which could be traced back to a purchase at Ticketmaster. (Smoking gun: a test payment with a fabricated expiry date turned up with that same date in criminal circles.) Ticketmaster's site turned out to have been hacked and fitted with malware that exfiltrated data.

The hack was made possible by a chatbot from the company Inbenta that Ticketmaster had deployed. The bot was one of those typical question-answering bots meant to relieve the human helpdesk, and it cheerfully came along on every page of the ordering flow. As a result, once its code was compromised, the bot could also read data from order and payment forms, which is what made the credit card theft possible. When asked, Inbenta was very surprised:

The Javascript we created specifically for Ticketmaster was used on a payments page, which is not what it was built for. Had we known that script would have been used in that way, we would have advised against it, as it poses a security threat.

This way of working also violated the PCI-DSS rules for processing credit card data. Nevertheless, Ticketmaster felt that the blame lay not with it but with Inbenta: it was contractually entitled to rely on a secure chatbot, so if the chatbot turns out to be insecure, that is force majeure. An argument I hear often: "The data processing agreement says the supplier guarantees security, so we have security covered."

The ICO flatly rejects that excuse, and I would advise everyone to give anyone who uses that argument a firm dressing-down. Security is not something you agree on contractually; it is something you arrange in practice and verify. Security is something you do.

Of course, super-zero-day hacks are always possible, and I can well accept that those could constitute force majeure. But this was a simple Javascript modification that allowed payment forms to be read and hijacked, something that certainly belonged to the ordinary state of the art in 2018 and against which there should therefore simply have been protection.
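One such standard safeguard is Subresource Integrity (SRI): the page embeds a cryptographic hash of the expected third-party script, and the browser refuses to run a copy whose contents no longer match, which is exactly what happens when a vendor-hosted script is tampered with. A minimal sketch of computing such a value; the vendor URL and script contents are illustrative, not Ticketmaster's or Inbenta's actual setup:

```python
import base64
import hashlib

def sri_hash(script_bytes: bytes) -> str:
    """Compute a Subresource Integrity value (sha384, base64-encoded)
    for a script, suitable for a <script integrity="..."> attribute."""
    digest = hashlib.sha384(script_bytes).digest()
    return "sha384-" + base64.b64encode(digest).decode("ascii")

# Illustrative only: "chatbot.js" stands in for a vendor-hosted script.
script = b"console.log('chatbot');"
print(f'<script src="https://vendor.example/chatbot.js" '
      f'integrity="{sri_hash(script)}" crossorigin="anonymous"></script>')
```

SRI only works for scripts that do not change under your feet; a frequently updated chatbot would instead call for a Content-Security-Policy, or simply for keeping third-party scripts off payment pages altogether, which is what the PCI-DSS expectation amounts to.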

But but but, said Ticketmaster: we had compliance neatly arranged, look:

At §§8-9 of its Comments, Ticketmaster relies upon its receipt of security certifications provided by Inbenta. At §29.3 of the Comments, Ticketmaster emphasises Inbenta’s ISO 27001 certification. The Commissioner places little weight on the mere provision of such certifications by Inbenta as a mechanism of securing the chat bot in the circumstances. Further, ISO 27001 is an information security management standard, which does not apply directly to software development.

A certification is not a bugfix, let alone a security measure. A code review or a security audit would have been, but Ticketmaster had put zero effort into those. And that is seriously held against them:

Rather, the GDPR requires that each organisation assess the risks arising in the circumstances of their own implementation and put controls in place to protect the personal data that it processes. Ticketmaster has shown very limited knowledge at the date of the Incident of the risk of implementing third party scripts into a payment page, despite it being widely known and documented at that time. A fortiori, Ticketmaster has not evidenced that it deployed appropriate and proportionate controls to manage this risk.

The ICO further calls this a serious violation of the GDPR: in theory, 9.4 million customers could have had their payment details stolen, and Ticketmaster had no control in place to do anything about it. (There was not even passive detection, such as running a fake credit card through the page yourself every week.) The fine therefore ultimately comes to 1.4 million euros. A well-deserved rap on the knuckles, as far as I'm concerned.
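The "passive detection" mentioned above can be as simple as a canary: seed the checkout flow with a fake card number that is never used for real purchases, then alarm if it ever surfaces in external fraud reports. A minimal sketch, with an invented report format and a standard test card number standing in for a real canary:

```python
# Canary-card detection: the canary number is never used for genuine
# purchases, so any sighting of it outside our own weekly test implies
# the payment page is leaking card data to a third party.
CANARY_CARDS = {"4111111111111111"}  # hypothetical canary (a Visa test number)

def leaked_canaries(observed_card_numbers):
    """Return the canary numbers that appeared in external fraud reports."""
    return CANARY_CARDS & set(observed_card_numbers)

weekly_fraud_report = ["5500005555555559", "4111111111111111"]
assert leaked_canaries(weekly_fraud_report) == {"4111111111111111"}  # leak!
assert leaked_canaries(["5500005555555559"]) == set()                # all clear
```

Crude, but it is a practical control rather than a contractual promise, which is precisely the distinction the ICO draws.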

And I will say it again, because I know that many contract lawyers, DPOs, and other GDPR compliance folk are reading along: please do more than merely stipulate in a contract that security must be in order. Without actual, practical action you are simply not GDPR compliant, even with a hundred guarantees and a thousand certifications.

Arnoud

The post “Ticketmaster krijgt boete van 1,4 miljoen euro van Britse privacywaakhond” (Ticketmaster fined 1.4 million euros by British privacy watchdog) appeared first on Ius Mentis.

Video Analytics User Manuals Are a Guide to Dystopia

Electronic Frontier Foundation (EFF) - news - 19 November 2020 - 9:39pm

A few years ago, when you saw a security camera, you may have thought that the video feed went to a VCR somewhere in a back office that could only be accessed when a crime occurred. Or maybe you imagined a sleepy guard who only paid half-attention, and only when they discovered a crime in progress. In the age of internet connectivity, it's easy to imagine footage sitting on a server somewhere, inaccessible except to someone willing to fast forward through hundreds of hours of footage.

That may be how it worked in 1990s heist movies, and it may be how a homeowner still sorts through their own home security camera footage. But that's not how cameras operate in today's security environment. Instead, advanced algorithms are watching every frame on every camera and documenting every person, animal, vehicle, and backpack as they move through physical space, and thus camera to camera, over an extended period of time. 

The term "video analytics" seems boring, but don't confuse it with how many views you got on your YouTube “how to poach an egg” tutorial. In a law enforcement or private security context, video analytics refers to using machine learning, artificial intelligence, and computer vision to automate ubiquitous surveillance. 

Through the Atlas of Surveillance project, EFF has found more than 35 law enforcement agencies that use advanced video analytics technology. That number is steadily growing as we discover new vendors, contracts, and capabilities. To better understand how this software works, who uses it, and what it’s capable of, EFF has acquired a number of user manuals. And yes, they are even scarier than we thought. 

Briefcam, which is often packaged with Genetec video technology, is frequently used at real-time crime centers. These are police surveillance facilities that aggregate camera footage and other surveillance information from across a jurisdiction. Dozens of police departments use Briefcam to search through hours of footage from multiple cameras in order to, for instance, narrow in on a particular face or a specific colored backpack. This power of video analytics software would be particularly scary if used to identify people out exercising their First Amendment right to protest.

Avigilon systems are a bit more opaque, since they are often sold to businesses, which aren't subject to the same transparency laws. In San Francisco, for instance, Avigilon provides the cameras and software for at least six business improvement districts (BIDs) and Community Benefit Districts (CBDs). These districts blanket neighborhoods in surveillance cameras and relay the footage back to a central control room. Avigilon's video analytics can undertake object identification (such as whether things are cars and people), license plate reading, and potentially face recognition.

You can read the Avigilon user manual here, and the Briefcam manual here. The latter was obtained through the California Public Records Act by Dylan Kubeny, a student journalist at the University of Nevada, Reno Reynolds School of Journalism. 

But what exactly are these software systems' capabilities? Here’s what we learned: 

Pick a Face, Track a Face, Rate a Face

Instructions on how to select a face

If you're watching video footage on Briefcam, you can select any face, then add it to a "watchlist." Then, with a few more clicks, you can retrieve every piece of video you have with that person's face in it. 

Briefcam assigns all face images 1-3 stars. One star: the AI can't even recognize it as a person. Two stars: medium confidence. Three stars: high confidence.  

Detection of Unusual Events

A chart showing the difference between the algorithms.

Avigilon has a pair of algorithms that it uses to predict what it calls "unusual events." 

The first can detect "unusual motions," essentially patterns of pixels that don't match what you'd normally expect in the scene. It takes two weeks to train this self-learning algorithm.  The second can detect "unusual activity" involving cars and people. It only takes a week to train. 

Also, there's "Tampering Detection" which, depending on how you set it, can be triggered by a moving shadow:

Enter a value between 1-10 to select how sensitive a camera is to tampering Events. Tampering is a sudden change in the camera field of view, usually caused by someone unexpectedly moving the camera. Lower the setting if small changes in the scene, like moving shadows, cause tampering events. If the camera is installed indoors and the scene is unlikely to change, you can increase the setting to capture more unusual events.

Pink Hair and Short Sleeves 

Color tool

With Briefcam’s shade filter, a person searching a crowd could filter by the color and length of items of clothing, accessories, or even hair. Briefcam’s manual even states the program can search a crowd or a large collection of footage for someone with pink hair. 

In addition, users of BriefCam can search specifically by what a person is wearing and other “personal attributes.” Law enforcement attempting to sift through crowd footage or hours of video could search for someone by specifying blue jeans or a yellow short-sleeved shirt.

Man, Woman, Child, Animal

BriefCam sorts people and objects into specific categories to make them easier for the system to search for. BriefCam breaks people into the three categories of “man,” “woman,” and “child.” Scientific studies show that this type of categorization can misidentify gender nonconforming, nonbinary, trans, and disabled people whose bodies may not conform to the rigid criteria the software looks for when sorting people. Such misidentification can have real-world harms, like triggering misguided investigations or denying access.

The software also breaks down other categories, including distinguishing between different types of vehicles and recognizing animals.

Proximity Alert

An example of the proximity filter

In addition to monitoring the total number of objects in a frame or the relative size of objects, BriefCam can detect proximity between people and the duration of their contact. This might make BriefCam a prime candidate for “COVID-19 washing,” or rebranding invasive surveillance technology as a potential solution to the current public health crisis. 

Avigilon also claims it can detect skin temperature, raising another possible assertion of public health benefit. But, as we’ve argued before, remote thermal imaging can often be very inaccurate, and fail to detect virus carriers that are asymptomatic. 

Public health is a collective effort. Deploying invasive surveillance technologies that could easily be used to monitor protestors and track political figures is likely to breed more distrust of the government. This will make public health collaboration less likely, not more. 

Watchlists 

One feature available with both Briefcam and Avigilon is watchlists, and we don't mean a notebook full of names. Instead, the systems allow you to upload folders of faces and spreadsheets of license plates, and then the algorithm will find matches and track the targets' movement. The underlying watchlists can be extremely problematic. For example, EFF has looked at hundreds of policy documents for automated license plate readers (ALPRs) and it is very rare for an agency to describe the rules for adding someone to a watchlist.
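Mechanically, a plate watchlist is little more than set membership checked against a stream of camera reads. A minimal sketch (the field names and data layout are assumptions for illustration, not Briefcam's or Avigilon's actual API):

```python
from datetime import datetime

# Watchlist as uploaded from a spreadsheet of plates.
watchlist = {"ABC1234", "XYZ9876"}

# Simulated ALPR reads: (plate, camera_id, timestamp).
reads = [
    ("DEF5555", "cam-01", datetime(2020, 11, 19, 9, 0)),
    ("ABC1234", "cam-02", datetime(2020, 11, 19, 9, 5)),
    ("ABC1234", "cam-07", datetime(2020, 11, 19, 9, 40)),
]

# Every hit both raises an alert and extends the target's movement history
# across cameras -- which is why the rules for adding someone to a
# watchlist matter so much.
hits = [(plate, cam, ts) for plate, cam, ts in reads if plate in watchlist]
assert [cam for _, cam, _ in hits] == ["cam-02", "cam-07"]
```

The simplicity is the point: once a name or plate lands on the list, tracking its movements is automatic and effortless.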

Vehicles Worldwide 

Often, ALPRs are associated with England, the birthplace of the technology, and the United States, where it has metastasized. But Avigilon already has its sights set on new markets and has programmed its technology to identify license plates across six continents.

It's worth noting that Avigilon is owned by Motorola Solutions, the same company that operates the infamous ALPR provider Vigilant Solutions.

Conclusion

We’re heading into a dangerous time. The lack of oversight of police acquisition and use of surveillance technology has dangerous consequences for those misidentified or caught up in the self-fulfilling prophecies of AI policing.

In fact, Dr. Rashall Brackney, the Charlottesville Police Chief, described these video analytics as perpetuating racial bias at a recent panel. Video analytics "are often incorrect," she said. "Over and over they create false positives in identifying suspects."

This new era of video analytics capabilities causes at least two problems. First, police could rely more and more on this secretive technology to dictate who to investigate and arrest by, for instance, identifying the wrong hooded and backpacked suspect. Second, people who attend political or religious gatherings will justifiably fear being identified, tracked, and punished. 

Over a dozen cities across the United States have banned government use of face recognition, and that’s a great start. But this only goes so far. Surveillance companies are already planning ways to get around these bans by using other types of video analytic tools to identify people. Now is the time to push for more comprehensive legislation to defend our civil liberties and hold police accountable. 

To learn more about Real-Time Crime Centers, read our latest report here.

Banner image source: Mesquite Police Department pricing proposal.

Categories: Openness, Privacy, Rights

Introducing Cover Your Tracks!

Electronic Frontier Foundation (EFF) - news - 19 November 2020 - 8:40pm

Today, we’re pleased to announce Cover Your Tracks, the newest edition and rebranding of our historic browser fingerprinting and tracker awareness tool Panopticlick. Cover Your Tracks picks up where Panopticlick left off. Panopticlick was about letting users know that browser fingerprinting was possible; Cover Your Tracks is about giving users the tools to fight back against the trackers, and improve the web ecosystem to provide privacy for everyone.

A screen capture of the front page of coveryourtracks.eff.org, which uses animal paw prints to illustrate the concept of tracking and fingerprinting. Clicking "Test your browser" loads a results page summarizing the browser's protections against fingerprinting and tracking, with a "detailed view" toggle showing more information about each metric, such as System Fonts, Language, and AudioContext fingerprint.

Over a decade ago, we launched Panopticlick as an experiment to see whether the different characteristics that a browser communicates to a website, when viewed in combination, could be used as a unique identifier that tracks a user as they browse the web. We asked users to participate in an experiment to test their browsers, and found that overwhelmingly the answer was yes—browsers were leaking information that allowed web trackers to follow their movements.

The old Panopticlick website, which showed a human fingerprint graphic.

In this new iteration, Cover Your Tracks aims to make browser fingerprinting and tracking more understandable to the average user. With helpful explainers accompanying each browser characteristic and showing how it contributes to their fingerprint, users get an in-depth look into just how trackers can use their browsers against them.

Our browsers leave traces of identifiable information just like an animal might leave tracks in the wild. These traces can be combined into a unique identifier which follows users’ browsing of the web, like wildlife which has been tagged by an animal tracker. And, on the web and in the wild, one of the best ways to confuse trackers is to blend in with the crowd. Some browsers are able to protect their users by making all instances of their browser look the same, regardless of the computer it’s running on. In this way, there is strength in numbers. Users can also “cover their tracks,” protecting themselves by installing extensions like our own Privacy Badger.
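The "combined into a unique identifier" step is easy to sketch: hash a canonical serialization of the browser's characteristics. The attribute names below are illustrative, not the exact metrics Cover Your Tracks measures:

```python
import hashlib
import json

def fingerprint(attributes: dict) -> str:
    """Combine individually boring browser traits into one stable identifier."""
    canonical = json.dumps(attributes, sort_keys=True)  # order-independent
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

alice = {"user_agent": "Mozilla/5.0 ...", "screen": "1920x1080",
         "timezone": "America/Los_Angeles", "fonts": ["Arial", "Helvetica"]}
bob = dict(alice, timezone="Europe/Amsterdam")

# Same traits -> same identifier (the "strength in numbers" defense);
# a single differing trait is enough to single a user out again.
assert fingerprint(alice) == fingerprint(dict(alice))
assert fingerprint(alice) != fingerprint(bob)
```

This is also why uniform or randomized browser traits defeat the technique: either everyone hashes to the same value, or the same user hashes to a different value each visit.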

"Every time you visit a website, your browser sends little bits of information about itself."

A screenshot from Cover Your Tracks’ learning page, https://coveryourtracks.eff.org/learn

For beginners, we’ve created a new learning page detailing the methodology we use to mimic trackers and test browsers, as well as next steps users can take to learn more and protect themselves. Because tracking and fingerprinting are so complex, we wanted to provide users a way to deep-dive into exactly what kind of tracking might be happening, and how it is performed.

We have also worked with browser vendors such as Brave to provide more accurate results for browsers that are employing novel anti-fingerprinting techniques. Add-ons and browsers that randomize the results of fingerprinting metrics have the potential to confuse trackers and mitigate the effects of fingerprinting as a method of tracking. In the coming months, we will provide new infographics that show users how they can become safer by using browsers that fit in with large pools of other browsers.

We invite you to test your own browser and learn more - just head over to Cover Your Tracks!


Find Out How Ad Trackers Follow You On the Web With EFF’s “Cover Your Tracks” Tool

Electronic Frontier Foundation (EFF) - news - 19 November 2020 - 8:27pm
Beginner-Friendly Tool Gives Users Options for Avoiding Browser Fingerprinting and Tracking

San Francisco—The Electronic Frontier Foundation (EFF) today launched Cover Your Tracks, an interactive tool that teaches users how advertisers follow them as they shop or browse online, and how to fight back against corporate trackers to protect their privacy, mitigate relentless ad targeting, and improve the web ecosystem for everyone.

With Black Friday and Cyber Monday just days away, when millions of users will be shopping online, Cover Your Tracks provides an in-depth learning experience—aimed at non-technical users—about how they are unwittingly being tracked online through their browsers.

“Our browsers leave traces of identifiable information when we visit websites, like animals might leave tracks in the wild, and that can be combined into a unique identifier that follows us online, like wildlife that’s been tagged,” said EFF Senior Staff Technologist Bill Budington. “We want users to take back control of their Internet experience by giving them a tool that lets them in on the hidden tricks and technical ploys online advertisers use to follow them so they can cover their tracks.”

Cover Your Tracks allows users to test their browsers to see what information about their online activities is visible to, and scooped up by, trackers. It shines a light on tracking mechanisms that utilize cookies, code embedded on websites, and more. Users can also learn how to cover some of their tracks by changing browser settings and using anti-tracking add-ons like EFF’s Privacy Badger.

Cover Your Tracks builds on EFF’s ground-breaking tracker awareness tool Panopticlick, which exposed how advertisers create “fingerprints” of users by capturing little bits of information given off by their browsers and using that to identify and follow them around the web and build profiles for ad targeting.

Panopticlick showed users that browser fingerprinting existed. Cover Your Tracks takes the next step, helping empower users to uncover and combat trackers. The goal is to provide easy-to-understand information about exactly what kind of fingerprint tracking might be happening and how it’s performed.

“Cover Your Tracks shows how Amazon, Facebook, Google, Twitter, and hundreds of lesser known entities work together to exploit browser information in order to track users. They then use that information to bombard users with ads,” said Budington. “We want users to learn a few tricks of their own to confuse trackers by utilizing browsers and extensions that give off the same information regardless of what computers they’re running on, or randomize certain bits of information so they can’t be used as a reliable tracker.”

Cover Your Tracks offers a learning page about the methodology EFF uses to mimic trackers and test browsers. EFF plans to add new infographics demonstrating how users can employ add-ons and new kinds of anti-fingerprinting browsers to fight tracking.

Visit Cover Your Tracks:
https://coveryourtracks.eff.org/

For more on corporate surveillance:
https://www.eff.org/wp/behind-the-one-way-mirror

For more on Panopticlick:
https://panopticlick.eff.org/

Contact: William Budington, Senior Staff Technologist, bill@eff.org

It's the Tweede Kamer's turn: force the police to comply with the rules

Bits of Freedom (BOF) - 19 November 2020 - 2:43pm

The Police Data Act (Wet politiegegevens) is supposed to ensure that the police handle sensitive data responsibly. Our sensitive data. But the police have been breaking these rules for years, and on an enormous scale. Yet all this time they have gotten away with it. How is that possible?

The police as repeat offender

An analysis we published this morning showed that none of the police's 36 'mission critical' systems complies with the rules on privacy and information security. We only know this now, after we forced the police to make these reports public. But this has been the reality for years. One of the analyzed reports states: "Within VROS, important data is processed about ongoing and closed criminal investigations. [...] However, it is a legacy application whose development has been at a standstill for years [...]." So development has been at a standstill for years. For years, the grade has been a big fat fail. All those systems have been in use for years, and it is certainly not the case that the rules were complied with when they were introduced.

Because if the court acquits you, you no longer want to remain listed as a suspect. Yet that often happens now.

And that too is hardly news. Back in 2012 we conducted a study into police compliance with these rules. Our conclusion at the time: "The protection of citizens' data by the police is in a deeply sorry state." Not a single force met all the requirements. One force even complied with less than twenty percent of the standards, and then only "in broad outline". Sometimes the lack of compliance is a consequence of the outdated ICT infrastructure the police have to work with, but sometimes it concerned relatively simple problems, such as the management of authorizations. The minister's response? "Realism is in order. It will take years before all the bottlenecks mentioned are resolved." And some violations could supposedly only be resolved by amending not the police but the law. That was at the end of 2015. Five years later, we have barely made any progress.

Enforcing the law

So the police are in fact a repeat offender. And this concerns extremely sensitive data about everyone in our country. Often it concerns data about people in an extremely vulnerable position. How is this possible? Unfortunately, public pressure is not strong enough. The major media only pay attention when a victim can be interviewed. But a witness to a contract killing who does not dare tell his story to the police for fear that his data will end up with criminals will not tell his story on television either. And so public outrage fails to materialize and nobody feels any pressure to do something about it. The police can keep ignoring this problem, and the minister can get away with it without actually forcing improvement.

All those systems have been in use for years, and it is certainly not the case that the rules were complied with when they were introduced.

And yes, if you ignore the law for a long time and on a large scale, you get a law enforcer on your tail. For the police, that is the Autoriteit Persoonsgegevens, the Dutch Data Protection Authority. It enforces the law that should force the police to handle our sensitive data responsibly. But why was the Autoriteit Persoonsgegevens not breathing down the police's neck? The answer is simple: a lack of capacity. In an interview in Trouw, the head of the regulator's 180 FTE staff said: "[...] [T]he Autoriteit Financiële Markten now has about six hundred FTE. And the AFM only supervises the financial markets, while we in principle supervise everything, because personal data can be processed everywhere." We do not know exactly how large the department is that deals with the police and the justice system, but it is clearly too small.

It's the Tweede Kamer's turn

If we do not want the police to remain a repeat offender in the years to come, the Tweede Kamer really needs to take action now. Next week it will debate the budget of the Ministry of Justice and Security. An excellent moment to ensure that the Autoriteit Persoonsgegevens gets the capacity, and with it the clout, that it needs. Because that is what we need now: the long arm of the Autoriteit Persoonsgegevens. It is time for the regulator to start doing what the police themselves always do: handing out big fat fines. And that is only possible if it has enough manpower to conduct investigations.

If you ignore the law for a long time and on a large scale, you get the police on your tail. But who goes after the police?

But for the long term, much more is needed. For some problems it is enough if someone keeps the police on their toes. Things like "authorization management" (simply put: who may access which database, and when) can be organized properly relatively quickly. Every other large organization manages it too. The same goes for properly mapping out the security risks. Really not rocket science.

More is needed

Other matters require more investment, such as getting the police's ICT infrastructure in order. Some of those systems have seen no development for years, and their maintenance sometimes depends on a single civil servant. That requires a substantial investment in those systems, but also in organizing that upgrade smartly. And it is not only the police's infrastructure and organization that needs an overhaul; something must also change in the rest of the criminal justice chain. Because if the court acquits you, you do not want to remain listed as a suspect in the police systems. Yet that often happens now, because the so-called case-outcome notifications never reach the police or, when they do, have to be processed by hand.

It may be a lot of work, but it has to happen. It is of no use if we, as a society, agree on retention periods for police data but do not also give the police the systems with which they can delete that data once the period has expired. The years-long, large-scale violation of the law by the police must end soon.

