Open Access Fiber Networks Will Bring Much-Needed High-Speed Internet Service and Competition to Communities More Efficiently and Economically: Report

Wholesale Networks Will Build Future-Proof Communications Networks

San Francisco—Public investments in open access fiber networks, instead of more subsidies for broadband carriers, will bring high-speed internet on a more cost-efficient basis to millions of Americans and create an infrastructure that can handle internet growth for decades, according to a new report.

Commissioned by EFF, “Wholesale Fiber is the Key to Broad US Fiber to the Premises (FTTP) Coverage” shows how wholesale, open access networks that lease capacity to service providers who market to consumers are the most cost-effective and efficient way to end the digital divide that has left millions of people, particularly those in rural and low-income areas, with inadequate or no internet service. These inequities were laid bare by the pandemic, when millions of workers and schoolchildren needed high-speed internet.

Billions of dollars funneled to AT&T, Comcast, and others to provide minimum speeds have left more than half of America without 21st century-ready broadband access to date. Investing in wholesale fiber networks will promote competition and lower prices for consumers stuck on cable monopolies, and efficiently replace legacy infrastructure.

A wholesale network model could cover close to 80 percent of the U.S. with fiber to the premises before government subsidies would even be necessary, whereas the existing broadband carrier model is expected to profitably cover only 50 percent, according to the report by Diffraction Analysis, an independent, global broadband consulting and research firm.

“We can’t afford to repeat the mistakes of the past,” said EFF Senior Legislative Counsel Ernesto Falcon. “The federal government and states like California are gearing up to potentially invest billions of dollars on broadband. This report includes economic models showing that funding wholesale broadband operations is a better long-term investment strategy. It will provide more coverage than throwing money at large publicly traded for-profit companies that are making a killing on the current model and have no incentive to change and deploy fiber.”

For the report:

For more on community broadband:

Contact: Ernesto Falcon, Senior Legislative Counsel, ernesto@eff.org
Categories: Transparency, Privacy, Rights

ACM declares Dataprovider's objection over the .nl zone file unfounded

IusMentis - 12 hours 34 min ago


The decision by the Netherlands Authority for Consumers and Markets (ACM) that the Stichting Internet Domeinregistratie Nederland (SIDN) does not have to provide the zone file of all .nl domain names to the company Dataprovider stands. Security.nl reported this recently. Dataprovider will therefore have to keep working with crawlers, and SIDN is not abusing a dominant position by refusing to provide the file.

In May, the ACM ruled that SIDN did not have to provide the zone file. Dataprovider wanted the zone file of the .nl domain because it would be helpful in delivering online brand-protection services, where being able to spot newly registered domain names immediately is very useful. That way you don't have to wait until someone actually starts selling counterfeit products, for example.

SIDN itself also offers brand-monitoring services, with which you can watch for .nl domain names (and those in other zones SIDN manages) that resemble your trademark. Dataprovider saw this as unfair competition and went to the ACM to have the regulator put an end to that situation. After all, SIDN controls .nl and could thus commit the offense of refusal to supply in order to keep a competitor at bay.

The ACM ruled against Dataprovider, and that ruling has now been confirmed in the objection procedure before the regulator, which examines SIDN's conduct through the lens of competition law. The ACM summarizes the case as follows: suppose SIDN has a dominant position in the upstream market, as it controls access to the zone file. Then what it does to Dataprovider is called a refusal to supply a competitor in the downstream market.

Under current European case law, that is a problem if three conditions are met:

  • access to the input whose supply is refused must be indispensable in order to compete on the downstream market;
  • the refusal to supply would lead to the elimination of all effective competition on the downstream market;
  • there is no objective justification for the refusal to supply.
The first problem for Dataprovider is, of course, that access to the zone file, while advantageous, is not indispensable. It already offers its services today and that works fine, so how exactly is SIDN squeezing it out? The same argument quickly disposes of the second criterion: there is plenty of competition in the market for domain monitoring, and Dataprovider's many competitors at home and abroad do not seem to suffer from this.

This is, of course, a bit contradictory: Dataprovider's complaint is about being able to do more and work more easily. Then "but it works fine the way it is" is an unsatisfying answer. More importantly, should you really have to wait until the competition has been wiped out before you can say that without the zone file you can do nothing at all?

But competition law is not meant to make things easier for competitors or to create a perfectly level playing field; it is meant to lift outright blockades. That makes this situation not serious enough to warrant legal intervention.


The post ACM declares Dataprovider's objection over the .nl zone file unfounded appeared first on Ius Mentis.

Resisting the Menace of Face Recognition

Face recognition technology is a special menace to privacy, racial justice, free expression, and information security. Our faces are unique identifiers, and most of us expose them everywhere we go. And unlike our passwords and identification numbers, we can’t get a new face. So, governments and businesses, often working in partnership, are increasingly using our faces to track our whereabouts, activities, and associations.

Fortunately, people around the world are fighting back. A growing number of communities have banned government use of face recognition. As to business use, many communities are looking to a watershed Illinois statute, which requires businesses to get opt-in consent before extracting a person’s faceprint. EFF is proud to support laws like these.

Face Recognition Harms

Let’s begin with the ways that face recognition harms us. Then we’ll turn to solutions.


Face recognition violates our human right to privacy. Surveillance camera networks have flooded our public spaces. Face recognition technologies are more powerful by the day. Taken together, these systems can quickly, cheaply, and easily ascertain where we’ve been, who we’ve been with, and what we’ve been doing. All based on a unique marker that we cannot change or hide: our own faces.

In the words of a federal appeals court ruling in 2019, in a case brought against Facebook for taking faceprints from its users without their consent:

Once a face template of an individual is created, Facebook can use it to identify that individual in any of the other hundreds of millions of photos uploaded to Facebook each day, as well as determine when the individual was present at a specific location. Facebook can also identify the individual’s Facebook friends or acquaintances who are present in the photo. … [I]t seems likely that a face-mapped individual could be identified from a surveillance photo taken on the streets or in an office building.

Government use of face recognition also raises Fourth Amendment concerns. In recent years, the U.S. Supreme Court has repeatedly placed limits on invasive government uses of cutting-edge surveillance technologies. This includes police use of GPS devices and cell site location information to track our movements. Face surveillance can likewise track our movements.

Racial Justice

Face recognition also has an unfair disparate impact against people of color.

Its use has led to the wrongful arrests of at least three Black men. Their names are Michael Oliver, Nijeer Parks, and Robert Williams. Every arrest of a Black person carries the risk of excessive or even deadly police force. So, face recognition is a threat to Black lives. This technology also caused a public skating rink to erroneously expel a Black patron. Her name is Lamya Robinson. So, face recognition is also a threat to equal opportunity in places of public accommodation.

These cases of “mistaken identity” are not anomalies. Many studies have shown that face recognition technology is more likely to misidentify people of color than white people. A leader in this research is Joy Buolamwini.

Even if face recognition technology was always accurate, or at least equally inaccurate across racial groups, it would still have an unfair racially disparate impact. Surveillance cameras are over-deployed in minority neighborhoods, so people of color will be more likely than others to be subjected to faceprinting. Also, history shows that police often aim surveillance technologies at racial justice advocates.

Face recognition is just the latest chapter of what Alvaro Bedoya calls “the color of surveillance.” This technology harkens back to “lantern laws,” which required people of color to carry candle lanterns while walking the streets after dark, so police could better see their faces and monitor their movements.

Free Expression

In addition, face recognition chills and deters our freedom of expression.

The First Amendment protects the right to confidentiality when we engage in many kinds of expressive activity. These include anonymous speech, private conversations, confidential receipt of unpopular ideas, gathering news from undisclosed sources, and confidential membership in expressive associations. All of these expressive activities depend on freedom from surveillance because many participants fear retaliation from police, employers, and neighbors. Research confirms that surveillance deters speech.

Yet, in the past two years, law enforcement agencies across the country have used face recognition to identify protesters for Black lives. These include the U.S. Park Police, the U.S. Postal Inspection Service, and local police in Boca Raton, Broward County, Fort Lauderdale, Miami, New York City, and Pittsburgh. This shows, again, the color of surveillance.

Police might also use face recognition to identify the whistleblower who walked into a newspaper office, or the reader who walked into a dissident bookstore, or the employee who walked into a union headquarters, or the distributor of an anonymous leaflet. The proliferation of face surveillance can deter all of these First Amendment-protected activities.

Information Security

Finally, face recognition threatens our information security.

Data thieves regularly steal vast troves of personal data. These include faceprints. For example, the faceprints of 184,000 travellers were stolen from a vendor of U.S. Customs and Border Protection.

Criminals and foreign governments can use stolen faceprints to break into secured accounts that the owner’s face can unlock. Indeed, a team of security researchers did this with 3D models based on Facebook photos.

Face Recognition Types

To sum up: face recognition is a threat to privacy, racial justice, free expression, and information security. However, before moving on to solutions, let’s pause to describe the various types of face recognition.

Two are most familiar. “Face identification” compares the faceprint of an unknown person to a set of faceprints of known people. For example, police may attempt to identify an unknown suspect by comparing their faceprint to those in a mugshot database.

“Face verification” compares the faceprint of a person seeking access, to the faceprints of people authorized for such access. This can be a minimally concerning use of the technology. For example, many people use face verification to unlock their phones.
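The one-to-many vs. one-to-one distinction can be sketched in code. This is a toy illustration, not any vendor's algorithm: the three-number "faceprints," the names, and the match threshold are all hypothetical stand-ins for the high-dimensional embeddings real systems compute with neural networks.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two faceprint vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Hypothetical faceprints; real systems use high-dimensional neural embeddings.
known = {
    "person_a": [0.9, 0.1, 0.3],
    "person_b": [0.2, 0.8, 0.5],
}

def identify(probe, gallery, threshold=0.95):
    """One-to-many (identification): find the best match among all known
    faceprints, or None if nothing clears the threshold."""
    name, score = max(
        ((n, cosine_similarity(probe, fp)) for n, fp in gallery.items()),
        key=lambda pair: pair[1],
    )
    return name if score >= threshold else None

def verify(probe, enrolled, threshold=0.95):
    """One-to-one (verification): does the probe match one enrolled faceprint?"""
    return cosine_similarity(probe, enrolled) >= threshold

probe = [0.88, 0.12, 0.31]
print(identify(probe, known))            # → "person_a"
print(verify(probe, known["person_b"]))  # → False
```

The structural difference matters for policy: identification scans an entire gallery (a mugshot database), while verification only ever compares against the one faceprint its owner enrolled.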

There’s much more to face recognition. For example, face clustering, tracking, and analysis do not necessarily involve face identification or verification.

“Face clustering” compares all faceprints in a collection of images to one another, to group the images containing a particular person. For example, police might create a multi-photo array of an unidentified protester, then manually identify them with a mugshot book.
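The all-pairs comparison behind face clustering can be sketched the same way. Again a toy under stated assumptions: the two-number "faceprints," the threshold, and the greedy first-match grouping are hypothetical simplifications of the embedding-clustering real systems perform. Note that no identity is needed: images group together without anyone being named.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two faceprint vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def cluster_faceprints(faceprints, threshold=0.9):
    """Greedy clustering: each faceprint joins the first cluster whose
    representative (first member) it matches, else starts a new cluster."""
    clusters = []  # each cluster is a list of indices into faceprints
    for i, fp in enumerate(faceprints):
        for cluster in clusters:
            if cosine_similarity(fp, faceprints[cluster[0]]) >= threshold:
                cluster.append(i)
                break
        else:
            clusters.append([i])
    return clusters

# Three images: the first two contain the same (hypothetical) face.
photos = [[1.0, 0.0], [0.99, 0.05], [0.0, 1.0]]
print(cluster_faceprints(photos))  # → [[0, 1], [2]]
```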

“Face tracking” follows the movements of a particular person through a physical space covered by surveillance cameras. For example, police might follow an unidentified protester from a rally to their home or car, then identify them with an address or license plate database.

“Face analysis” purports to learn something about a person, like their race or emotional state, by scrutinizing their face. Such analysis will often be wrong, as the meaning of a facial characteristic is often a social construct. For example, it will misgender people who are transgender or nonbinary. If it “works,” it may be used for racial profiling. For example, a Chinese company claims it works as a “Uighur alarm.” Finally, automated screening to determine whether a person is supposedly angry or deceptive can cause police to escalate their use of force, or expand the duration and scope of a detention.

Legislators must address all forms of face recognition: not just identification and verification, but also clustering, tracking, and analysis.

Government Use of Face Recognition

EFF supports a ban on government use of face recognition. The technology is so destructive that government must not use it at all.

EFF has supported successful advocacy campaigns across the country. Many local communities have banned government use of face recognition, from Boston to San Francisco. The State of California placed a three-year moratorium on police use of face recognition with body cameras. Some businesses have stopped selling face recognition to police.

We also support a bill to end federal use of face recognition. If you want to help stop government use of face recognition in your community, check out EFF’s “About Face” toolkit.

Corporate Use of Face Recognition

The Problem

Corporate use of face recognition also harms privacy, racial justice, free expression, and information security.

Part of the problem is at brick-and-mortar stores. Some use face identification to detect potential shoplifters. This often relies on error-prone, racially biased criminal justice data. Other stores use it to identify banned patrons. But this can misidentify innocent patrons, especially if they are people of color, as happened to Lamya Robinson at a roller rink. Still other stores use face identification, tracking, and analysis to serve customers targeted ads or track their behavior over time. This is part of the larger problem of surveillance-based advertising, which harms all of our privacy.

There are many other kinds of threatening corporate uses of face recognition. For example, some companies use it to scrutinize their employees. This is just one of many high-tech ways that bosses spy on workers. Other companies, like Clearview AI, use face recognition to help police identify people of interest, including BLM protesters. Such corporate-government surveillance partnerships are a growing threat.

The Solution

Of all the laws now on the books, one has done the most to protect us from corporate use of face recognition: the Illinois Biometric Information Privacy Act, or BIPA.

At its core, BIPA does three things:

  1. It bans businesses from collecting or disclosing a person’s faceprint without their opt-in consent.
  2. It requires businesses to delete the faceprints after a fixed time.
  3. If a business violates a person’s BIPA rights by unlawfully collecting, disclosing, or retaining their faceprint, that person has a “private right of action” to sue that business.

EFF has long worked to enact more BIPA-type laws, including in Congress and the states. We regularly advocate in Illinois to protect BIPA from legislative backsliding. We have also filed amicus briefs in a federal appellate court and the Illinois Supreme Court to ensure that everyone who has suffered a violation of their BIPA rights can have their day in court.

BIPA prevents one of the worst corporate uses of face recognition: dragnet faceprinting of the public at large. Some companies do this to all people entering a store, or all people appearing in photos on social media. This practice violates BIPA because some of these people have not previously consented to faceprinting.

People have filed many BIPA lawsuits against companies that took their faceprints without their consent. Facebook settled one case, arising from their “tag suggestions” feature, for $650 million.

First Amendment Challenges

Other BIPA lawsuits have been filed against Clearview AI. This is the company that extracted faceprints from ten billion photographs, and uses these faceprints to help police identify suspects. The company does not seek consent for its faceprinting. So Clearview now faces a BIPA lawsuit in Illinois state court, brought by the ACLU, and several similar suits in federal court.

In both venues, Clearview asserts a First Amendment defense. EFF disagrees and filed amicus briefs saying so. Our reasoning proceeds in three steps.

First, Clearview’s faceprinting enjoys at least some First Amendment protection. It collects information about a face’s measurements, and creates information in the form of a unique mathematical representation. The First Amendment protects the collection and creation of information because these often are necessary predicates to free expression. For example, the U.S. Supreme Court has ruled that the First Amendment protects reading books, gathering news, creating video games, and even purchasing ink by the barrel. Likewise, appellate courts protect the right to record on-duty police.

First Amendment protection of faceprinting is not diminished by its use of computer code, because code is speech. To paraphrase one court: just as musicians can communicate among themselves with a musical score, computer programmers can communicate among themselves with computer code.

Second, Clearview’s faceprinting does not enjoy the strongest forms of First Amendment protection, such as “strict scrutiny.” Rather, it enjoys just “intermediate scrutiny.” This is because it does not address a matter of public concern. The Supreme Court has emphasized this factor in many contexts, including wiretapping, defamation, and emotional distress. Likewise, lower courts have held that common law claims of information privacy—namely, intrusion on seclusion and publication of private facts—do not violate the First Amendment if the information at issue was not a matter of public concern.

Intermediate review also applies to Clearview’s faceprinting because its interests are solely economic. The Supreme Court has long held that “commercial speech,” meaning “expression related solely to the economic interests of the speaker and its audience,” receives “lesser protection.” Thus, when laws that protect consumer data privacy face First Amendment challenge, lower courts apply intermediate judicial review under the commercial speech doctrine.

To pass this test, a law must advance a “substantial interest,” and there must be a “close fit” between this interest and what the law requires.

Third, the application of BIPA to Clearview’s faceprinting passes this intermediate test. As discussed earlier, the State of Illinois has strong interests in preventing the harms caused by faceprinting to privacy, racial justice, free expression, and information security. Also, there is a close fit from these interests to the safeguard that Illinois requires: opt-in consent to collect a faceprint. In the words of the Supreme Court, data privacy requires “the individual’s control of information concerning [their] person.”

Some business groups have contested the close fit between BIPA’s means and ends by suggesting Illinois could achieve its goals, with less burden on business, by requiring just an opportunity for people to opt-out. But defaults matter. Opt-out is not an adequate substitute for opt-in. Many people won’t know a business collected their faceprint, let alone know how to opt-out. Other people will be deterred by the confusing and time-consuming opt-out process. This problem is worse than it needs to be because many companies deploy “dark patterns,” meaning user experience designs that manipulate users into giving their so-called “agreement” to data processing.

Thus, numerous federal appellate and trial courts have upheld consumer data privacy laws that are similar to BIPA against First Amendment challenge. Just this past August, an Illinois judge rejected Clearview’s First Amendment defense.

Next Steps

In the hands of government and business alike, face recognition technology is a growing menace to our digital rights. But the future is unwritten. EFF is proud of its contributions to the movement to resist abuse of these technologies. Please join us in demanding a ban on government use of face recognition, and laws like Illinois’ BIPA to limit private use. Together, we can end this threat.


Honoring Elliot Harmon—EFF Activism Director, Poet, Friend—1981-2021

It is with heavy hearts that we mourn and celebrate our friend and colleague Elliot Harmon, who passed away peacefully on Saturday morning following a lengthy battle with melanoma. We will deeply miss Elliot’s clever mind, powerful pen, generous heart, and expansive kindness.  We will carry his memory with us in our work. 

Elliot understood how intellectual property could be misused to shut down curiosity, silence artists, and inhibit research—and how open access policies, open licensing, and a more nuanced and balanced interpretation of copyright could reverse those trends. A committed copyleft activist, he led campaigns against patent trolls and fought for open access to research. He campaigned globally for freedom of expression and access to knowledge, and his powerful articles helped define many of these issues for a global community of digital rights activists.

A side profile of Elliot, wearing glasses and a flannel shirt.

This photo was taken shortly before Elliot went to speak on top of a truck at a Stop SESTA/FOSTA rally in Oakland.


Elliot’s formidable activism touched upon every aspect of EFF’s work. In his early days with us, he continued the work that he began at Creative Commons campaigning for the late Palestinian-Syrian activist, technologist, and internet volunteer Bassel Khartabil. He also ran a successful campaign for Colombian student Diego Gomez, fighting against that country’s steep copyright infringement laws and advocating for open access and academic freedom. Following the same values, Elliot spearheaded EFF’s Reclaim Invention campaign urging universities to protect their inventions from patent trolls. He went on to help steer our campaign to get the FCC to restore net neutrality rules, framing the issue as a matter of free speech and calling on “Team Internet” to join him in the fight. In all of these efforts and more, Elliot brought a natural sense of how to build and nurture community around a shared cause. 

Elliot was also a leading advocate for free expression online, and helped educate the public on how laws policing online speech or ratcheting up the liability of online platforms could have serious consequences for marginalized communities. In 2018, when SESTA-FOSTA came to the legislative table and it looked as though many organizations feared standing up for sex workers, Elliot made sure we weren’t one of them, and directed his and EFF’s energy to fiercely advocating for their rights online. Elliot’s op-ed in the New York Times still stands as a crucial and powerful explanation of how Section 230 enables millions of the voiceless to have a voice. As he wrote: “History shows that when platforms clamp down on their users’ speech, the people most excluded are the ones most excluded from other aspects of public life, too.”

Elliot is wearing his glasses, a suit and a blue tie underneath an illustration for Open Wireless.

Elliot spoke to the press frequently about EFF's issues and campaigns. In this early 2020 photo, he was preparing to speak about protecting the .ORG domain.

More recently, Elliot coordinated a global effort to prevent a private equity firm from purchasing the .ORG domain, rallying the troops for what was undoubtedly one of the most dramatic shows of non-profit sector solidarity of all time, to use his own words. His sense of humor and humility are on full display in this Deeplinks post about the campaign.

But Elliot’s deepest digital rights commitment may have been his belief in open access to knowledge and culture—and he knew how to write about that belief as an invitation, not a command. To give just one of many examples, this post helped draw attention to the removal of a tool used by journalists and activists to save eyewitness videos. 

In an organization filled with tireless advocates, Elliot’s thoughtfulness, quick wit, and wide-ranging interests—along with his loud and buoyant laugh, sparked easily by the team members he worked alongside for three years then led for nearly three more—set him apart. We knew a meeting or planning session was going well when we could hear Elliot’s laughter from across the office. And an edit by Elliot on a blog post or call to action was sure to make it smarter, sharper, and more persuasive.  

Elliot is turned to the side, speaking to the other six people, who are laughing. He has red hair, and is wearing glasses and a flannel shirt.

Elliot with members of the activism team in 2018. His colleague Katharine shared: 'I do not remember what Elliot said to provoke this reaction, but it’s how I will remember him.'

Elliot joined EFF in 2015 from Creative Commons, where he had served in the role of Director of Communications—a role in which many EFF staffers first encountered him. It was not, however, his first introduction to EFF; he planted an easter egg in his cover letter applying for a role on the activism team encouraging us to search for the old Geocities website he launched as a teenager, where one would come across EFF’s Blue Ribbon Campaign sticker. He was a lifelong supporter and a true believer in equal digital rights for all, and he always took great care to look for the underdog.

In 2018, Elliot took over the role of Activism Director from Rainey Reitman and built up a powerful team with the delicate strength required of one stepping into the shoes of a long-serving team leader. He excelled in the position, bringing joy, structure, and quirky “funtivities” to his team during a particularly difficult time for the world and the internet. His careful leadership style, constant awareness of his team members’ needs, and conviction that our work can change the world will continue to serve as an inspiration.

Elliot also served as a member of EFF’s senior leadership team. He was a powerful and thoughtful voice in helping us figure out how to remain scrappy and smart even as we put into place management and other structures appropriate for our now-larger organization.   

Of course, Elliot was not just a digital rights activist; he was a husband, a friend, a pro wrestling fan, an accomplished poet and performer, a Master of Fine Arts in Writing, a mentor to many, and a skilled and caring manager. 

We will miss Elliot’s incredible talent and leadership, but more than that, we will miss his sincerity, his hearty laugh, and his extraordinary sense of fairness and kindness. And we will continue the fight and honor his dreams of a free and open internet.

Elliot smiling, wearing a black shirt with a rainbow unicorn and the text "EFF25".

Elliot in front of the Electronic Frontier Foundation offices, wearing an EFF 25th anniversary member shirt. This photo was taken during his first week working at EFF.


What About International Digital Competition?

EFF Legislative Intern Suzi Ragheb wrote this blog post

Antitrust has not had a moment like this since the 1911 breakup of Standard Oil. This past year, policymakers and government leaders around the globe have been taking a hard look at the technology markets. ‘Break up Big Tech’ is the newest antitrust catchphrase. On both sides of the Atlantic, policies have been introduced to foster digital competition.

Congress has introduced several competition and antitrust bills, including a bipartisan package that passed out of committee. The Biden administration has nominated antitrust advocates to key positions: Lina Khan as chair of the Federal Trade Commission, Jonathan Kanter as the Assistant Attorney General for Antitrust at the Department of Justice, and Tim Wu at the National Economic Council. And across the Atlantic, the European Commission is marking up two key pieces of legislation, the Digital Markets Act and the Digital Services Act, that would create new rules for digital services and enhanced competition in the technology sector.

Early this summer, on his first international trip, President Biden headed to Brussels to talk about creating a new U.S.-EU Trade and Technology Council (TTC) and a Joint Technology Competition Policy Dialogue (JTCPD). There have been few details aside from the initial press releases on what policy approaches would be considered. However, it is a clear sign that there is a transatlantic appetite for tackling competition in the technology space. But what would an international competition policy look like?

International Interoperability and Data Portability Standards

At EFF, we have long advocated for interoperability and data portability as the answers to outsized market power. We believe that creating open standards and allowing users to move their data around to different platforms shifts the market power away from companies and into the hands of consumers. Pursuing this at an international level would be a seismic power shift and would boost innovation and competition.

Having open, interoperable standards between international platforms would let users easily transfer their information to the platform that best suits their needs. Platforms would then compete not on the size of their networks, but on the quality of their services. When platforms exploit network effects, the contest is not about offering the best features; it is about who can collect the most personal data. The JTCPD would be remiss if it did not address platform and service interoperability, not just ancillary services, as a key part of digital competition.

In an interoperable data world, if you don’t like Facebook’s functions, you would be able to take your data to another platform, one with better services, and you would be able to connect with individuals across platforms.

Given the global nature of the internet, creating international standards would be less burdensome for tech companies, as they wouldn’t have to navigate a patchwork of differing standards. And despite pushback from the platforms, this is not an impossible feat. In fact, interoperability is a cornerstone of the internet. Consider that after Facebook purchased Instagram, the company added chat interoperability between the two platforms, and it plans to make WhatsApp interoperable with both platforms. If we had interoperability standards before the companies merged, the market would have looked and acted differently.

International Antitrust Is Incomplete Without Privacy

Privacy is a fundamental human right recognized by the UN, and it must be a part of any international agreement on digital competition. Users today feel hopeless when it comes to their right to online privacy. While interoperability could address privacy concerns by allowing users to choose their own platform, and by giving privacy-conscious platforms the ability to compete on a level playing field with big platforms, there is still a need to establish international privacy standards. Setting a minimum privacy standard would push companies away from the personal-data-for-profit model that has become intrinsic to tech monopolies.

In the EU, data privacy standards were established by the GDPR in 2016, codifying privacy as a fundamental right with high data protection standards across the EU. The U.S. significantly lags in developing federal privacy standards, despite bipartisan support. Privacy is also a national security concern: weak protections endanger the welfare of a nation's citizens. A recent report commissioned by the Cyberspace Solarium Commission calls on Congress to create national privacy standards as a baseline protection against cyberattacks. Setting international privacy standards would greatly benefit tech companies: it reduces compliance costs and confusion, and it gives all tech companies, regardless of size, a fair competitive chance.

The Promise of a Truly Competitive Digital Economy Lies in an International Agreement

Without such an agreement, we are left with a fractured world for a global internet, rampant with confusion and unequal protection under the law. Under an international agreement, interoperable and portable data standards would be adopted by the industry, leveling the field for old and new firms alike. Interoperability would expand opportunities for start-ups to build new tech that works within existing dominant systems. International privacy standards and data minimization would enshrine privacy as a human right and push the digital market away from the model that relies on exploiting personal data. An international agreement would set consumers up for broader data protections and companies for expanded market access. And a U.S.-EU agreement on tech competition would set the tone for the rest of the globe.

Categories: Transparency, Privacy, Rights

30 Years of Linux: Podcast Tells the History of Free Software

iRights.info - 26 October 2021 - 11:00am

30 years of Free Software in a 30-minute podcast: in a program well worth hearing, Bayerischer Rundfunk traces the ideas behind Linux and Free Software, and explains how the movement for vendor-independent programs has developed to the present day.

Quietly working in the background: that is how a recent podcast by Bayerischer Rundfunk characterizes the way Linux operates. The operating system and variants based on it are used in countless devices, often without users being aware of it.

It is not just about well-known examples such as LibreOffice or the VLC media player. Many traffic lights, washing machines, telephones, and entertainment systems in cars and living rooms run on Linux or variants of it. The popular smartphone operating system Android is also based on Linux.

How did this come about? What are the advantages of so-called free software, meaning vendor-independent and openly accessible software, that lead so many companies to rely on it when developing their products? The series "IQ – Wissenschaft und Forschung" explores these questions.

Separation of Powers for Technology and Software, Too

Under the heading of a "technical separation of powers", Matthias Kirschner, president of the Free Software Foundation Europe, sums up some of the strengths and effects of Free Software.

He is alluding to the political separation of powers in modern democracies, in which legislation, the judiciary, and the executive are kept apart. This is meant to prevent too much power from being concentrated in any single institution.

Free Software works similarly, Kirschner says in the podcast: because the source code is openly accessible, anyone can inspect it, improve it, adopt it for their own purposes, and in doing so develop it further for the benefit of all.

In this way, control does not rest with a single person or company but is distributed among many, to the use and advantage of the general public, Kirschner says.

A Good Introduction to the World of Free Software

The podcast approaches the topic of Free Software in a factual and entertaining way, without much polemic. It is particularly suitable for listeners who want to get into the topic and are looking for an accessible entry point.

The occasion for the program is a small anniversary: in September 1991, a good 30 years ago, the Finnish computer scientist Linus Torvalds released the first version of Linux. He had written the program himself and shortly afterwards placed it under the GNU General Public License (GPL).

More about the so-called "four freedoms" of the GNU GPL can be found in an early iRights.info article from 2005.

The post 30 Years of Linux: Podcast Tells the History of Free Software appeared first on iRights.info - Kreativität und Urheberrecht in der digitalen Welt.

May the police drain your Tesla's data in a criminal investigation?

IusMentis - 26 October 2021 - 8:11am

webandi / Pixabay

The data stored in Teslas contains a wealth of information for criminal investigations, I read on Security.nl, on the authority of the Netherlands Forensic Institute (NFI). The NFI had succeeded in reading out, decoding, and publishing the encoded driving data that Tesla stores in its cars. According to the researcher, it is important to know what data the car records, so that the justice authorities know which data they can demand and how accurately that data reflects reality. But is this allowed?

The article immediately describes current practice: the police go to Tesla with an order from the public prosecutor, and Tesla reads out the data remotely. That is almost unavoidable, because Teslas are not so easily accessible that you can open them with standard tools and copy the data onto a USB stick.

The NFI experts did manage it, however, and when reading the paper it stands out that it is in fact a matter of opening the car and removing an SD card: "Acquiring this micro-SD card requires access to the 'car computer' located in the passenger footwell, behind the glovebox. On Autopilot generation 2 and 2.5 systems, there is a metal cover on the bottom left corner (when viewing it from the footwell) that can be unscrewed to reach the PCB and subsequently the SD card as shown in Figure 2." The cards then turn out to be readable as typical Linux log files, i.e. plain text, and with a bit of guessing you can already work out what they contain. The researchers verified this by comparing the logs with what they had obtained from Tesla under a court order. Identical, except that Tesla limits its results to exactly the interval and the data items specified in the order.

Logical, but awkward for the justice authorities, because you need to know what you can ask for. That is the added value of this research: knowing what a Tesla logs, so that you can make more targeted demands.
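The observation that the cards read as plain-text, Linux-style log files means that, once extracted, they can be processed with entirely ordinary tooling. As a purely illustrative sketch (the `timestamp signal=value` line format and the field names below are invented for this example, not Tesla's actual log schema), parsing such lines might look like:

```python
import re

# Invented line format for illustration: "<ISO timestamp> <signal>=<value>".
# This is NOT Tesla's real schema; it only shows how trivially plain-text
# logs can be turned into structured records once you have the file.
LOG_LINE = re.compile(r"^(?P<timestamp>\S+) (?P<signal>\w+)=(?P<value>[-\d.]+)$")

def parse_log(text: str) -> list[dict]:
    """Parse 'timestamp signal=value' lines into a list of records."""
    records = []
    for line in text.splitlines():
        m = LOG_LINE.match(line.strip())
        if m:
            records.append({
                "timestamp": m.group("timestamp"),
                "signal": m.group("signal"),
                "value": float(m.group("value")),
            })
    return records

sample = """\
2021-10-26T08:11:00Z speed=87.5
2021-10-26T08:11:01Z speed=88.1
2021-10-26T08:11:01Z brake_pedal=0
"""

for rec in parse_log(sample):
    print(rec)
```

The point is not the parsing itself but how low the technical barrier is once the card is in hand, which is exactly why the legal question below matters.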

But then the thought creeps up on me: do you still need to ask Tesla anything at all if you can simply take that SD card out of the car?

In my view, this is an open question. In 2017 the Dutch Supreme Court ruled that the police may not simply search suspects' smartphones, because these represent a highly personal object: "If that examination is so far-reaching that a more or less complete picture is obtained of certain aspects of the personal life of the user of the data carrier or the automated system, that examination may be unlawful towards him." There is no explicit statutory rule; searching is allowed if evidence can be expected, but the broader the scope of the examination, the more safeguards must be put in place. And it makes no difference whether you had to unlock the phone or resort to technical tricks to get at the data.

I would say that a car is comparable to a phone in how personal it is and in what you can learn about someone from it. You then end up in the same framework the Supreme Court sketched for smartphones, and an overly eager reading of all manner of logs would therefore yield unlawfully obtained evidence. Conversely, if you search in a very targeted way on the basis of a concrete suspicion, that is not necessarily a problem.

An interesting remaining question is to what extent "you could also ask Tesla with a court order" is relevant. It feels odd, but routing the request through Tesla makes the process more careful: as a professional party, that company then makes its own assessment, so there is an extra check.


The post "May the police drain your Tesla's data in a criminal investigation?" appeared first on Ius Mentis.

Digital litigation in attachment petitions possible from 15 November

From 15 November, lawyers at all district courts can easily litigate digitally in the case stream for attachment petitions (beslagrekesten). This offers the legal profession faster and simpler access to the judiciary. Lawyers also have, at any moment, an up-to-date overview of their filed cases and the associated correspondence in a digital case file.

New web portal

Digital filing of attachment petitions takes place via the judiciary's new web portal on rechtspraak.nl. Logging in is done with the Advocatenpas (the Dutch bar's lawyer card). Through the portal, lawyers can upload documents in PDF format. After each filed document, the lawyer automatically receives a digital confirmation of receipt for their own records. Calling the court to confirm is no longer necessary.


More than 80 lawyers tested the new web portal over the past period in a pilot with the Amsterdam district court. Lawyers who took part in the pilot rate the portal 4.9 out of 5. The major advantages mentioned were time savings, reduced paper use, and the ability to consult case files regardless of time or place.

The judiciary therefore warmly invites lawyers to file attachment petitions digitally.

Project Digitale Toegang

The digital filing of attachment petitions is part of the judiciary's Digitale Toegang (Digital Access) project (Project DT). With this project (pdf, 145.3 KB), the judiciary aims to realize simple digital access over the coming years for all litigants and their legal representatives in the fields of civil and administrative law. Digital litigation is voluntary for now, but will at some point become mandatory for legal professionals.

In the coming period, the judiciary will actively inform parties via Rechtspraak.nl and other channels. Do you have a specific question about digitally filing attachment petitions? Then contact the Rechtspraak Servicecentrum.

Categories: Rights

The police are screwing us (and are happy to pay a fortune for it)

Bits of Freedom (BOF) - 25 October 2021 - 12:45pm

The police are screwing us. Pardon our French. We simply cannot put it more politely, because that is what this is. We asked the police to disclose the report of an investigation into their wiretapping equipment. They do not even refuse; they simply do not respond. In doing so they raise a middle finger at us, at society, and at the democratic rule of law.

Under wraps

Why are the police keeping that report under the police cap? We do not know. What we do know is that in 2017 or 2018 an investigation was carried out into the police's wiretapping equipment. Researchers examined the system that is supposed to prevent lawyers from being wiretapped. They also looked at the storage of intercepted conversations. We know this because the minister wrote as much to the House of Representatives last year. In that same letter he wrote very (and we mean very) briefly about the results. On the filter for lawyers he wrote: "needs attention". On the storage of intercepted conversations he said nothing at all.

Perhaps criminal cases are at stake.

Dead silence

The police's response to our request to disclose the report also gives pause. First the police refused with a sham argument (an appeal to the Constitution, which is not even relevant here). So we filed an objection. The police are now not refusing us the document; they are refusing to take any decision on our objection at all. That leaves us empty-handed, because without a decision there is no decision to challenge. Naturally, we went to court anyway. And the court agreed with us. The ruling was crystal clear: take a decision on Bits of Freedom's request immediately, on pain of a penalty payment. It stayed painfully quiet. Dead quiet. Six months on, the police now owe us more than 15,000 euros, and we are still waiting for a decision.

The police may have the longer arm, but thanks to you we have the longer breath. Donate now!


So once again the question: why are the police keeping that report secret, literally (!) at any cost? We do not know. But the fact that the police are being so extraordinarily secretive about it is an ominous sign. Meanwhile we are starting to think that the contents of that report may well be explosive. Perhaps the storage of intercepted data is not in order. Or criminals can get at it. Or other countries can manipulate the intercepted data. Or perhaps a recording occasionally goes missing. That would be a painful point for anyone who still remembers the vanished Teeven taps. Perhaps criminal cases are at stake. We do not know. But that more is wrong than the minister appears to be saying is as clear as day.

The police are now not refusing us the document; they are even refusing the decision (on our objection).


That the police simply refuse to take a decision can hardly be anything other than a delaying tactic, meant to defuse the contents of the report. It is our democratic right to ask the police to disclose this kind of information. Without information, citizens cannot scrutinize anything, form an opinion, or exert influence. The refusal to take a decision shows that a lack of transparency only fuels suspicion, speculation, and further concern. The police are not only harming all of us; they are also doing themselves a disservice. Now that the police refuse to take a decision, break the law, ignore the court, and are even prepared to pay a fortune for the privilege, they are doing nothing less than undermining our democratic rule of law.

Categories: Transparency, Privacy, Rights

Can you sign away your copyright and moral rights via general terms and conditions?

IusMentis - 25 October 2021 - 8:13am

stevepb / Pixabay

An interesting one on the Security.nl forum: "In our village a company offers various group outings, from escape rooms and children's parties to a Wie is de Mol-style game. But its general terms and conditions now contain this: 7.5 Image and/or sound recordings may be made of the event and its participants, and recordings may be published or reproduced. The visitor unconditionally consents to the making of said recordings and to their exploitation, without the organization or third parties owing the visitor any compensation now or in the future. The visitor hereby transfers any neighbouring rights and/or copyright and/or portrait rights to the organization without any restriction. Furthermore, the visitor irrevocably waives the right to invoke his/her moral rights." Well, how legally valid is something like that? As so often with general terms and conditions, the clause is there mainly because it sounds impressive, and because you can parry 90% of complaints by smiling indulgently and saying "yes sir, but clause 7.5 says otherwise, doesn't it?"

The underlying point, of course, is that the organization wants to shoot video at such events, for example for advertising or social media. That is not simply allowed: the people walking around there have a right to privacy and to protection of their personal data. (We would once also have said that people have portrait rights when they are filmed, but portrait rights have been absorbed into the GDPR.)

What stands out is that this clause began life as a copyright clause, to which portrait rights and moral rights were added later. Speaking as a forensic lawyer, I would say this text was written by an IP specialist. However, the chance that a visitor performs any act relevant to copyright or neighbouring rights at such an event is minimal. That is quite apart from the fact that transferring those rights requires a signed document, and general terms and conditions are not signed.

From a privacy and GDPR perspective this is much harder to arrange. You cannot demand consent for the use of personal data in general terms and conditions: under the GDPR, consent must be explicit and must therefore be obtained separately. Moreover, consent must be freely refusable and revocable, so asking people for consent actually gains the organization very little.

And yes, specifically for journalistic processing, consent cannot be revoked; that is true. But I would find it very hard to call the kind of video this organization is going to make journalistic processing, however generous I normally am on that point. And even then, the point remains that consent must be requested explicitly and must be refusable in advance.


The post "Can you sign away your copyright and moral rights via general terms and conditions?" appeared first on Ius Mentis.

John Gilmore Leaves the EFF Board, Becomes Board Member Emeritus

Since he helped found EFF 31 years ago, John Gilmore has provided leadership and guidance on many of the most important digital rights issues we advocate for today. But in recent years, we have not seen eye-to-eye on how to best communicate and work together, and we have been unable to agree on a way forward with Gilmore in a governance role. That is why the EFF Board of Directors has recently made the difficult decision to vote to remove Gilmore from the Board.

We are deeply grateful for the many years Gilmore gave to EFF as a leader and advocate, and the Board has elected him to the role of Board Member Emeritus moving forward. "I am so proud of the impact that EFF has had in retaining and expanding individual rights and freedoms as the world has adapted to major technological changes,” Gilmore said. “My departure will leave a strong board and an even stronger staff who care deeply about these issues."

John Gilmore co-founded EFF in 1990 alongside John Perry Barlow, Steve Wozniak and Mitch Kapor, and provided significant financial support critical to the organization's survival and growth over many years. Since then, Gilmore has worked closely with EFF’s staff, board, and lawyers on privacy, free speech, security, encryption, and more.

In the 1990s, Gilmore found the government documents that confirmed the First Amendment problem with the government’s export controls over encryption, and helped initiate the filing of Bernstein v DOJ, which resulted in a court ruling that software source code was speech protected by the First Amendment and the government's regulations preventing its publication were unconstitutional. The decision made it legal in 1999 for web browsers, websites, and software like PGP and Signal to use the encryption of their choice.

Gilmore also led EFF’s effort to design and build the DES Cracker, which was regarded as a fundamental breakthrough in how we evaluate computer security and the public policies that control its use. At the time, the 1970s-era Data Encryption Standard (DES) was embedded in ATMs and banking networks, as well as in popular software around the world. U.S. government officials proclaimed that DES was secure, while secretly being able to wiretap it themselves. The EFF DES Cracker publicly showed that DES was in fact so weak that it could be broken in one week with an investment of less than $350,000. This catalyzed the international creation and adoption of the much stronger Advanced Encryption Standard (AES), now widely used to secure information worldwide.

Among Gilmore’s most important contributions to EFF and to the movement for digital rights has been recruiting key people to the organization, such as former Executive Director Shari Steele, current Executive Director Cindy Cohn, and Senior Staff Attorney and Adams Chair for Internet Rights Lee Tien.

EFF has always valued and appreciated Gilmore’s opinions, even when we disagree. It is no overstatement to say that EFF would not exist without him. We look forward to continuing to benefit from his institutional knowledge and guidance in his new role of Board Member Emeritus.

Categories: Transparency, Privacy, Rights

"Fair Lesen": Book Trade vs. Libraries

iRights.info - 22 October 2021 - 8:55am

The "Initiative Fair Lesen" opposes "compulsory licensing" of electronic books for online library lending. Why the topic is making waves, what the difference between books and e-books actually is, and how exactly libraries lend them: iRights.info explains the background and puts the conflict into context.

On its website, "Fair Lesen" opposes the demand by public libraries to be allowed to lend bestsellers as e-books from the first day of publication. Under the emphatic mottos "Writing is not free" and "Against compulsory licensing. For diversity and freedom of expression", the initiative argues that a

"forced online lending scheme on low-price terms, especially for new releases, would be an economic disaster for everyone who makes a living from the cultural asset that is the book".

The background to the dispute is the demand by the German Library Association (Deutscher Bibliotheksverband, DBV; the association styles itself "dbv") for a statutory basis for the lending of e-books by public libraries. The DBV is the umbrella association of all libraries in Germany.

The DBV most recently renewed this demand in the context of the 2020 copyright reform and addressed an open letter to members of the Bundestag at the beginning of the year. The Bundesrat subsequently introduced a bill in March of this year under which publishers would have to grant public libraries a right of use in commercially available e-books.

"Fair Lesen" now opposes this demand and its anchoring in the Copyright Act, almost a year later and just in time for the start of the book fair in Frankfurt.

This raises a series of questions: Who is behind the "Fair Lesen" campaign, and what is its purpose? What actually distinguishes printed books from digital e-books? How does e-book lending by libraries currently work? And what would a statutory rule achieve?

Who is responsible for "Fair Lesen"

Who stands behind the initiative only becomes clear at second glance. The website says it is a "community of authors, creators' associations, publishers, and bookshops". The initiative's supporters read like a who's who of German authorship: Maxim Biller, Juli Zeh, and Charlotte Link, among others, have signed the open letter.

The party responsible for providing and operating the website (the "service provider"), however, is named in the site's legal notice as the Börsenverein des Deutschen Buchhandels e. V., the umbrella organization representing the interests of publishers and the book trade. The Börsenverein is also the organizer of the Frankfurt Book Fair, the world's largest industry gathering of authors, publishers, and readers.

Ownership vs. license: differences between books and e-books

The next question is why a statutory basis for e-book lending is needed at all, at least if the DBV and the Bundesrat have their way. The background is that books and electronic books ("e-books") are treated differently in law. While "normal" books are physical objects in which one can acquire ownership, the buyer of an e-book legally receives less: with a printed book, buyers can do as they please; they can write notes in it, resell it, lend it, or bequeath it.

That is not so easily possible with an e-book. Instead of ownership, the customer generally obtains a right of use in the form of a license tied to various conditions. These license conditions are often found in the general terms and conditions (AGB) of online booksellers. Thalia's terms, for example, state:

"Digital content is protected by copyright. We do not transfer ownership of it to you. You receive the simple, non-transferable right to use the digital content on offer exclusively for personal use, in accordance with the Copyright Act and in the manner offered in each case."

So whoever buys an e-book does not own a book in the strict sense, but merely acquires the right to read a piece of digital content. Reselling an e-book, for instance, is not permitted, as the European Court of Justice clarified at the end of 2019.

How does "e-lending" work?

Because books and e-books are treated differently in law, lending differs as well. While library lending of printed books is likewise regulated by statute, for e-books a legal basis is again lacking.

Lending analog works: the exhaustion principle

For the lending of books and other analog media (for example DVDs), copyright law contains so-called permission rules under which libraries may lend books (see sections 17 and 27 of the German Copyright Act). Once an author or rightsholder first sells the work within the territory of the European Union, he or she loses the distribution right, i.e. the right to offer the work, as original or copy, to the public or to put it into circulation. This is the so-called "exhaustion principle": whoever sells their work, for example to a publisher, can no longer influence how the work is resold or otherwise used. In principle this also means that libraries need no additional permission to buy a book and then lend it out.

Authors and publishers do, however, receive remuneration for the lending. This remuneration serves as compensation for the revenue lost because a book was borrowed rather than bought. It is known as the "library royalty" (Bibliothekstantieme) and is funded by the federal government and the states.

E-lending: completely different?

By contrast, the "lending" of digital media by libraries, known as e-lending, is not regulated by statute: the relevant rules of the Copyright Act apply only to physical works. Nor does the exhaustion principle apply here. That means authors and rightsholders are free to decide whether to license an e-book to a library at all and, if so, on what terms and from what date.

Windowing: delayed lending, more money for publishers

Libraries must therefore negotiate licenses with the publishers before they can offer titles through their e-lending services. With bestsellers this frequently leads to delayed availability for online lending, a practice known as "windowing": publishers do not make a title available to libraries as an e-book on publication, but only later, once they have already sold numerous e-books.

Most libraries in the German-speaking world use the "Onleihe" service of the company divibib GmbH for e-lending; divibib negotiates the licenses and then makes the titles available on a technical platform. The libraries can then give their users access to the licensed content. The governing principle is "one copy, one user": an e-book can as a rule be borrowed and read by only one person at a time, just as with analog lending. If a library has acquired multiple licenses so that it can lend the e-book to several people simultaneously, those licenses are remunerated accordingly. E-book lending does not, by contrast, mean that library users can borrow, let alone redistribute, unlimited digital copies of e-books. The Börsenverein des Deutschen Buchhandels confirms this in a "fact check" on e-book lending:

"The user can then borrow e-books free of charge via a central platform (Onleihe). The e-books are downloaded to the user's computer or reader and fitted with technical protection measures (digital rights management). After the 14-day lending period expires, the file is rendered unusable; only then can another user borrow the e-book. For popular titles, several copies must therefore be purchased."
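The "one copy, one user" principle and the fixed loan period described in that fact check can be captured in a small toy model. The class below is an invented illustration of the licensing logic, not how Onleihe is actually implemented; only the 14-day loan period is taken from the Börsenverein's description:

```python
from datetime import date, timedelta

class EbookLicense:
    """Toy model of 'one copy, one user' e-lending with a fixed loan period."""
    LOAN_DAYS = 14  # loan period from the Börsenverein's description

    def __init__(self, title: str, copies: int = 1):
        self.title = title
        self.copies = copies      # number of licensed copies the library bought
        self.loans = {}           # user -> expiry date of their loan

    def _expire(self, today: date) -> None:
        # Loans past their expiry date become unusable and free up the copy.
        self.loans = {u: d for u, d in self.loans.items() if d > today}

    def borrow(self, user: str, today: date) -> bool:
        self._expire(today)
        if len(self.loans) >= self.copies:
            return False  # every licensed copy is currently on loan
        self.loans[user] = today + timedelta(days=self.LOAN_DAYS)
        return True

book = EbookLicense("Bestseller", copies=1)
print(book.borrow("alice", date(2021, 10, 1)))   # True: the copy is available
print(book.borrow("bob", date(2021, 10, 5)))     # False: the one copy is out
print(book.borrow("bob", date(2021, 10, 16)))    # True: alice's loan expired
```

The model makes the economic point of the dispute concrete: for popular titles, a library that wants shorter waiting lists has no option but to buy more licensed copies.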

Warum macht es einen Unterschied, ob ein Bestseller digital oder analog als Buch verliehen wird?

Die Initiative „Fair Lesen“ kritisiert, durch das Verfügbarmachen von Neuerscheinungen in der Online-Ausleihe würde die finanzielle Existenzgrundlage der Autor*innen, Verlagen und Buchhandlungen gefährdet. Jedoch werden auch E-Books nur einzeln verliehen – das regeln die Lizenzen der Onleihe. Grundsätzlich gelten damit in der Praxis dieselben Bedingungen wie bei der Ausleihe analoger Bücher – zumindest aus Sicht der verleihenden Bibliotheken.

Der Unterschied besteht vor allem darin, was finanziell bei den Autor*innen ankommt: Mangels Bibliothekstantieme für E-Books erhalten sie weniger Geld für das Verleihen eines E-Books als bei einem analogen Buch – jedenfalls dann, wenn sie dafür keine vertragliche Regelung mit ihrem Verlag abgeschlossen haben. Eine gesetzliche Regelung könnte da Abhilfe schaffen: Vor allem Bibliotheken und Autor*innen würden davon profitieren.

Bibliotheken: Wichtige Rolle für offene Bildung und Wissenschaft

Wie wichtig der Zugang gerade auch zu digitaler Bildung und Wissenschaft ist, hat nicht zuletzt die Corona-Pandemie eindrücklich gezeigt. Gerade öffentliche Bibliotheken fördern mit ihrem Angebot Bildung und Teilhabe – und zwar unabhängig von sozialen Schichten. Damit stehen sie für ein seit Jahrhunderten anerkanntes Prinzip, dass es neben dem individuellen Erwerb auch die Möglichkeit geben soll, Bücher bei öffentlichen Einrichtungen zu leihen. Der Staat fördert diesen Grundsatz unter anderem durch die Bibliothekstantieme, denn Bibliotheken erfüllen dadurch auch eine politische Funktion: Ihr Informationsangebot ist weder von wirtschaftlichen Zwängen noch von weltanschaulichen und politischen Perspektiven geprägt. Es ermöglicht den Bürger*innen, sich unvoreingenommen zu informieren, um am gesellschaftlichen Diskurs teilzunehmen. Die Digitalisierung dieses Informationsangebotes ist damit abhängig von den zur Verfügung stehenden E-Books.

Die Diskussion geht weiter: Ein (vorläufiges) Fazit

Anders, als es auf den ersten Blick scheinen mag, geht es bei dem Streit um die Ausleihe von E-Books also nicht so sehr um die Einschränkung von Vielfalt und Meinungsfreiheit, wie die Kampagne „Fair Lesen“ suggeriert. Es geht vor allem um eine Gesetzesreform, die die Ausleihe digitaler Medien auf eine gesetzliche Grundlage stellen würde und den Autor*innen die Bibliothekstantieme zusichern soll, die sie im analogen Bereich längst schon erhalten.

The Bundesrat's legislative proposal, which would oblige publishers to grant non-commercial libraries a right of use on "reasonable terms" as soon as a work appears on the market, was not included in the copyright reform that came into force in August 2021. In that respect, "Fair Lesen" has directed a clear appeal at the incoming federal government.

The DBV (German Library Association) has meanwhile also issued a statement, criticizing various pieces of "false and misleading information" in the campaign. How the new federal government will handle the demand remains unclear. With the campaign, however, the Börsenverein des Deutschen Buchhandels has succeeded in making the licensing of e-book lending a topic of discussion at the book fair, and surely beyond.

The post „Fair Lesen”: Buchhandel vs. Bibliotheken appeared first on iRights.info - Kreativität und Urheberrecht in der digitalen Welt.

Er, no, pressing F12 on a website is not a criminal act

IusMentis - 22 October 2021 - 8:12am

(Don't worry: press F12 again and your screen goes back to normal.) It may look very impressive, but this is just the developer panel that, among other things, shows you the source code of the current site. Sometimes that lets you see a little more information. As journalist Josh Renaud from Missouri recently discovered: the Social Security numbers of a hundred thousand teachers in the state. Is that allowed? No, that's criminal, according to Governor Mike Parson. Yeah, right, said the rest of the world.

The source article returns a 451 error (lawyer says no) from Europe, but it concerned a search tool for teachers in which the search results were passed to the results page complete with their Social Security numbers, roughly the equivalent of the Dutch BSN. While building the site, someone then realized that publishing those numbers wasn't exactly a good idea, so they were hidden in the output. But they were still right there in the source code, which you get to see at the press of a key.

A data breach, we would say in Europe. Deficient security, too: if the data doesn't need to be visible in the results, it doesn't need to travel to the web page only to be kept out of sight there. You simply keep it on the server. Anyway, the report was picked up, the flaw was fixed, and only then did the journalist publish. Nothing special.
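The pattern is easy to demonstrate. A minimal sketch (with hypothetical markup, not the actual Missouri page): a value "hidden" with CSS is still present verbatim in the HTML the server sends, so anyone reading the raw response can find it without any decoding.

```python
import re

# Hypothetical search-results markup: the SSN column is hidden from view
# with CSS, but the value still travels to the browser in the raw HTML.
html = """
<tr>
  <td>Jane Doe</td>
  <td class="ssn" style="display:none">123-45-6789</td>
</tr>
"""

# No "decoding the HTML source code" required: the response is plain text
# that anyone (or a simple pattern match) can search.
leaked = re.findall(r'class="ssn"[^>]*>([^<]+)</td>', html)
print(leaked)  # ['123-45-6789']
```

The fix the Missouri site eventually applied amounts to the obvious one: filter the sensitive field out on the server, so it never appears in the response at all.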

What was special was the governor's reaction, because something went wrong in the translation up to the political level. "Our server sent SSNs along, visible to anyone who presses F12" became this: Through a multi-step process, an individual took the records of at least three educators, decoded the HTML source code, and viewed the SSN of those specific educators. The "at least three" is accurate (it was 100,000); the rest is, let's say, a somewhat elaborate presentation of the facts. The "multi-step process" amounts to running a search, getting the results page, pressing F12, and scrolling through the source until you see "ssn". And "decoding the HTML source code" is, er, saying that you can read past the <tags>?

And then it goes further: A hacker is someone who gains unauthorized access to information or content. This individual did not have permission to do what they did. They had no authorization to convert and decode the code. The point, though, is that the code in question is simply what gets sent to your computer, and that it is human-readable. Well, readable: <div class="text"><h1><a href="https://blog.iusmentis.com">Internetrecht door Arnoud Engelfriet</a></h1> <p>Arnoud Engelfriet is ICT-jurist, gespecialiseerd in internetrecht. Hij werkt als partner bij juridisch adviesbureau <a href="http://ictrecht.nl" rel="external" target="_blank">ICTRecht</a>. Zijn site <a href="http://www.iusmentis.com/">Ius mentis</a> heeft meer dan 350 artikelen over internetrecht.</p></div> I won't claim this is instantly as clear as the average legal contract, but calling it a "code" that has to be "deciphered" goes rather far. In any case, it is absurd to speak of "access without authorization" here: this is simply what the server gives you, and it is your browser that turns it into something pretty.

And of course, computer crime does not necessarily require a clever technical trick. As soon as you are somewhere you know you are not allowed to be, you are essentially already in violation. Hence the debate over URL manipulation: editing a URL to guess that more information might be found elsewhere, somewhere you cannot readily reach through the site's navigation. But however you look at it, editing a URL yourself to guess what is stored elsewhere is more involved than viewing the HTML source code a site sent you.

The post Eh nee, F12 indrukken bij een website is geen criminele handeling appeared first on Ius Mentis.

New Global Alliance Calls on European Parliament to Make the Digital Services Act a Model Set of Internet Regulations Protecting Human Rights and Freedom of Expression

The European Parliament’s regulations and policy-making decisions on technology and the internet have unique influence across the globe. With great influence comes great responsibility. We believe the European Parliament (EP) has a duty to set an example with the Digital Services Act (DSA), the first major overhaul of European internet regulations in 20 years. The EP should show that the DSA can address tough challenges—hate speech, misinformation, and users’ lack of control on big platforms—without compromising human rights protections, free speech and expression rights, and users’ privacy and security.

Balancing these principles is complex, but imperative. A step in the wrong direction could reverberate around the world, affecting fundamental rights beyond European Union borders. To this end, 12 civil society organizations from around the globe, standing for transparency, accountability, and human rights-centered lawmaking, have formed the Digital Services Act Human Rights Alliance to establish and promote a world standard for internet platform governance. The Alliance comprises digital and human rights advocacy organizations representing diverse communities across the globe, including in the Arab world, Europe, United Nations member states, Mexico, Syria, and the U.S.

In its first action toward this goal, the Alliance today is calling on the EP to embrace a human rights framework for the DSA and take steps to ensure that it protects access to information for everyone, especially marginalized communities; rejects inflexible and unrealistic takedown mandates that lead to over-removals and impinge on free expression; and strengthens mandatory human rights impact assessments so that issues like faulty algorithmic decision-making are identified before people get hurt.

This call to action follows a troubling round of amendments approved by an influential EP committee that crossed red lines protecting fundamental rights and freedom of expression. EFF and other civil society organizations told the EP prior to the amendments that the DSA offers an unparalleled opportunity to address some of the internet ecosystem’s most pressing challenges and help better protect fundamental rights online—if done right.

So, it was disappointing to see the EP committee take a wrong turn, voting in September to limit liability exemptions for internet companies that perform basic functions of content moderation and content curation, force companies to analyze and indiscriminately monitor users’ communication or use upload filters, and bestow special advantages, not available to ordinary users, on politicians and popular public figures treated as trusted flaggers.

In a joint letter, the Alliance today called on the EU lawmakers to take steps to put the DSA back on track:

  • Avoid disproportionate demands on smaller providers that would put users’ access to information in serious jeopardy.
  • Reject legally mandated strict and short time frames for content removals that will lead to removals of legitimate speech and opinion, impinging rights to freedom of expression.
  • Reject mandatory reporting obligations to Law Enforcement Agencies (LEAs), especially without appropriate safeguards and transparency requirements.
  • Prevent public authorities, including LEAs, from becoming trusted flaggers and subject conditions for becoming trusted flaggers to regular reviews and proper public oversight.   
  • Consider mandatory human rights impact assessments as the primary mechanism for examining and mitigating systemic risks stemming from platforms' operations.

For the DSA Human Rights Alliance Joint Statement:

For more on the DSA:

Categories: Openness, Privacy, Rights

Police Can’t Demand You Reveal Your Phone Passcode and Then Tell a Jury You Refused

Electronic Frontier Foundation (EFF) - news - 22 October 2021 - 12:29am

The Utah Supreme Court is the latest stop in EFF’s roving campaign to establish your Fifth Amendment right to refuse to provide your password to law enforcement. Yesterday, along with the ACLU, we filed an amicus brief in State v. Valdez, arguing that the constitutional privilege against self-incrimination prevents the police from forcing suspects to reveal the contents of their minds. That includes revealing a memorized passcode or directly entering the passcode to unlock a device.

In Valdez, the defendant was charged with kidnapping his ex-girlfriend after arranging a meeting under false pretenses. During his arrest, police found a cell phone in Valdez’s pocket that they wanted to search for evidence that he set up the meeting, but Valdez refused to tell them the passcode. Unlike many other cases raising these issues, however, the police didn’t bother seeking a court order to compel Valdez to reveal his passcode. Instead, during trial, the prosecution offered testimony and argument about his refusal. The defense argued that this violated the defendant’s Fifth Amendment right to remain silent, which also prevents the state from commenting on his silence. The court of appeals agreed, and now the state has appealed to the Utah Supreme Court.

As we write in the brief: 

The State cannot compel a suspect to recall and share information that exists only in his mind. The realities of the digital age only magnify the concerns that animate the Fifth Amendment’s protections. In accordance with these principles, the Court of Appeals held that communicating a memorized passcode is testimonial, and thus the State’s use at trial of Mr. Valdez’s refusal to do so violated his privilege against self-incrimination. Despite the modern technological context, this case turns on one of the most fundamental protections in our constitutional system: an accused person’s ability to exercise his Fifth Amendment rights without having his silence used against him. The Court of Appeals’ decision below rightly rejected the State’s circumvention of this protection. This Court should uphold that decision and extend that protection to all Utahns.

Protecting these fundamental rights is only more important as we also fight to keep automated surveillance that would compromise our security and privacy off our devices. We’ll await a decision on this important issue from the Utah Supreme Court.

Related Cases: Andrews v. New Jersey

Victory! Oakland’s City Council Unanimously Approves Communications Choice Ordinance

Oakland residents shared the stories of their personal experience; a broad coalition of advocates, civil society organizations, and local internet service providers (ISPs) lifted their voices; and now the Oakland City Council has unanimously passed Oakland’s Communications Service Provider Choice Ordinance. The newly minted law frees Oakland renters from being constrained to their landlord's preferred ISP by prohibiting owners of multiple occupancy buildings from interfering with an occupant's ability to receive service from the communications provider of their choice.

Across the country—through elaborate kickback schemes—large, corporate ISPs looking to lock out competition have manipulated landlords into denying their tenants the right to choose the internet provider that best meets their family’s needs and values. In August of 2018, an Oakland-based EFF supporter emailed us asking what would need to be done to empower residents with the choice they were being denied. Finally, after three years of community engagement and coalition building, that question has been answered.  

Modeled on a San Francisco law adopted in 2016, Oakland’s new Communications Choice ordinance requires property owners of multiple occupancy buildings to provide reasonable access to any qualified communication provider that has received a service request from a building occupant. San Francisco’s law has already proven effective. There, one competitive local ISP, which had previously been locked out of properties of forty or more units with active revenue sharing agreements, gained access to more than 1800 new units by 2020. Even for those who choose to stay with their existing provider, a competitive communications market benefits all residents by incentivizing providers to offer the best services at the lowest prices. As Tracy Rosenberg, the Executive Director of coalition member Media Alliance—and a leader in the advocacy effort—notes, "residents can use the most affordable and reliable services available, alternative ISP's can get footholds in new areas and maximize competitive benefits, and consumers can vote with their pockets for platform neutrality, privacy protections, and political contributions that align with their values.”

Unfortunately, not every city is as prepared to take advantage of such measures as San Francisco and Oakland. The Bay Area has one of the most competitive ISP markets in the United States, including smaller ISPs committed to defending net neutrality and their users’ privacy. In many U.S. cities, that’s not the case.

We hope to see cities and towns across the country step up to protect competition and foster new competitive options by investing in citywide fiber-optic networks and opening that infrastructure to private ISPs.


Why Is It So Hard to Figure Out What to Do When You Lose Your Account?

We get a lot of requests for help here at EFF, with our tireless intake coordinator being the first point of contact for many. All too often, however, the help needed isn’t legal or technical. Instead, users just need an answer to a simple question: what does this company want me to do to get my account back?

People lose a lot when they lose their account. For example, being kicked off Amazon could mean losing access to your books, music, pictures, or anything else you have only licensed, not bought, from that company. But the loss can have serious financial consequences for people who rely on the major social media platforms for their livelihoods, the way video makers rely on YouTube or many artists rely on Facebook or Twitter for promotion.

And it’s even worse when you can’t figure out why your account was closed, much less how to get it restored.  The deep flaws in the DMCA takedown process are well-documented, but at least the rules of a DMCA takedown are established and laid out in the law. Takedowns based on ill-defined company policies, not so much.

Over the summer, writer and meme king Chuck Tingle found his Twitter account suspended due to running afoul of Twitter’s ill-defined repeat infringer policy. That they have such a policy is not a problem in and of itself: to take advantage of the DMCA safe harbor, Twitter is required to have one. It’s not even a problem that the law doesn’t specify what the policy needs to look like—flexibility is vital for different services to do what makes the most sense for them. However, a company has to make a policy with an actual, tangible set of rules if they expect people to be able to follow it.

This is what Twitter says:

What happens if my account receives multiple copyright complaints?

If multiple copyright complaints are received Twitter may lock accounts or take other actions to warn repeat violators. These warnings may vary across Twitter’s services.  Under appropriate circumstances we may suspend user accounts under our repeat infringer policy. However, we may take retractions and counter-notices into account when applying our repeat infringer policy. 

That is frustratingly vague. “Under appropriate circumstances” doesn’t tell users what to avoid or what to do if they run afoul of the policy. Furthermore, if an account is suspended, this does not tell users what to do to get it back. We’ve confirmed that “We may take retractions and counter-notices into account when applying our repeat infringer policy” means that Twitter may restore the account after a suspension or ban, in response to counter-notices and retractions of copyright claims. But an equally reasonable reading of it is that they will take those things into account only before suspending or banning a user, so counter-noticing won’t help you get your account back if you lost it after a sudden surge in takedowns.

And that assumes you can even send a counter-notice. When Tingle lost his account under its repeat infringer policy, he found that because his account was suspended, he couldn’t use Twitter’s forms to contest the takedowns. That sounds like a minor thing, but it makes it very difficult for users to take the steps needed to get their accounts back.

Often, being famous or getting press attention to your plight is the way to fast-track getting restored. When Facebook flagged a video of a musician playing a public domain Bach piece, and Sony refused to release the claim, the musician got it resolved by making noise on Twitter and emailing the heads of various Sony departments. Most of us don’t have that kind of reach.

Even when there are clear policies, those rules mean nothing if the companies don’t hold up their end of the bargain. YouTube’s Content ID rules claim a video will be restored if, after an appeal, a month goes by with no word from the complaining party. But there are numerous stories from creators in which a month passes, nothing happens, and nothing is communicated to them by YouTube. While YouTube’s rules need fixing in many ways, many people would be grateful if YouTube would just follow those rules.

These are not new concerns. Clear policies, notice to users, and a mechanism for appeal are at the core of the Santa Clara principles for content moderation. They are basic best practices for services that allow users to post content, and companies that have been hosting content for more than a decade have no excuse not to follow them.

EFF is not a substitute for a company helpline. Press attention is not a substitute for an appeals process. And having policies isn’t a substitute for actually following them.


Crowd-Sourced Suspicion Apps Are Out of Control

Technology rarely invents new societal problems. Instead, it digitizes them, supersizes them, and allows them to balloon and duplicate at the speed of light. That’s exactly the problem we’ve seen with location-based, crowd-sourced “public safety” apps like Citizen.

These apps come in a wide spectrum—some let users connect with those around them by posting pictures, items for sale, or local tips. Others, however, focus exclusively on things and people that users see as “suspicious” or potentially hazardous. These alerts run the gamut from active crimes, or the aftermath of crimes, to generally anything a person interprets as helping to keep their community safe and informed about the dangers around them.

These apps are often designed with a goal of crowd-sourced surveillance, like a digital neighborhood watch: a way of turning the aggregate eyes (and phones) of the neighborhood into an early warning system. But instead, they often exacerbate the same dangers, biases, and problems that exist within policing. After all, the likely outcome of posting a suspicious sight to the app isn't just to warn your neighbors; it's to summon authorities to address the issue.

And even worse than incentivizing people to share their most paranoid thoughts and racial biases on a popular platform are the experimental new features constantly being rolled out by apps like Citizen. First, it was a private security force, available to be summoned at the touch of a button. Then, it was a service to make it (theoretically) even easier to summon the police, giving users access to a 24/7 concierge who will call the police for you. There are scenarios in which a tool like this might be useful, but charging people for it, and more importantly, making people think they will eventually need a service like this, feeds the idea that companies benefit from your fear.

These apps might seem like a helpful way to inform your neighbors if the mountain lion roaming your city was spotted in your neighborhood. But in practice they have been a cesspool of racial profiling, cop-calling, gatekeeping, and fear-spreading. Apps where a so-called “suspicious” person’s picture can be blasted out to a paranoid community, because someone with a smartphone thinks they don’t belong, are not helping people to “Connect and stay safe.” Instead, they promote public safety for some, at the expense of surveillance and harassment for others.

Digitizing an Age Old Problem

Paranoia about crime and racial gatekeeping in certain neighborhoods is not a new problem. Citizen takes that old problem and digitizes it, making those knee-jerk sightings of so-called suspicious behavior capable of being broadcast to hundreds, if not thousands of people in the area.

But focusing those forums on crime, suspicion, danger, and bad-faith accusations can create havoc. No one is planning their block party on Citizen, an app filled with notifications like “unconfirmed report of a man armed with pipe” and “unknown police activity,” the way they might on other apps. Neighbors aren't likely to coordinate trick-or-treating on a forum they use exclusively to see whether any cars in their neighborhood were broken into. And when you download an app that makes a neighborhood you were formerly comfortable in feel under siege, you're going to use it not just to doom-scroll your way through strange sightings, but also to report your own suspicions.

There is a massive difference between listening to police scanners, a medium that reflects the ever-changing and updating nature of fluid situations on the street, and taking one second of that live broadcast and turning it into a fixed, unverified news report. Police scanners can be useful to many people for many reasons and ought to stay accessible, but listening to a livestream presents an entirely different context than seeing a fixed geo-tagged alert on a map.

As the New York Times writes, Citizen is “converting raw scanner traffic—which is by nature unvetted and mostly operational—into filtered, curated digital content, legible to regular people, rendered on a map in a far more digestible form.” In other words, they’re turning static into content with the same formula the long-running show Cops used to normalize both paranoia and police violence.

Police scanners reflect the raw data of dispatch calls and police response to them, not a confirmation of crime and wrongdoing. This is not to say that the scanner traffic isn’t valuable or important—the public often uses it to learn what police are doing in their neighborhood. And last year, protesters relied on scanner traffic to protect themselves as they exercised their First Amendment rights.

But publication of raw data is likely to give the impression that a neighborhood has far more crime than it does. As any journalist will tell you, scanner traffic should be viewed like a tip and be the starting point of a potential story, rather than being republished without any verification or context. Worse, once Citizen receives a report, many stay up for days, giving the overall impression to a user that a neighborhood is currently besieged by incidents—when many are unconfirmed, and some happened four or five days ago.

From Neighborhood Forum to Vigilante-Enabler

It’s well known that Citizen began its life as “Vigilante,” and much of its DNA and operating procedure continue to match its former moniker. Citizen, more so than any other app, is unsure if it wants to be a community forum or a Star Wars cantina where bounty hunters and vigilantes wait for the app to post a reward for information leading to a person’s arrest.

When a brush fire broke out in Los Angeles in May 2021, almost a million people saw a notification pushed by Citizen offering a $30,000 reward for information leading to the arrest of a man they thought was responsible. It is the definition of dangerous that the app offered money to thousands of users, inviting them to turn over information on an unhoused man who was totally innocent.

Make no mistake, this kind of crass stunt can get people hurt. It demonstrates a very narrow view of who the “public” is and what “safety” entails.

Ending Suspicion as a Service

Users of apps like Citizen, Nextdoor, and Neighbors should be vigilant about unverified claims that could get people hurt, and be careful not to feed the fertile ground for destructive hoaxes.

These apps are part of the larger landscape that law professor Elizabeth Joh calls “networked surveillance ecosystems.” The lawlessness that governs private surveillance networks like Amazon Ring and other home surveillance systems—in conjunction with social networking and vigilante apps—is only exacerbating age-old problems. This is one ecosystem that should be much better contained.


On Global Encryption Day, Let's Stand Up for Privacy and Security

At EFF, we talk a lot about strong encryption. It's critical for our privacy and security online. That's why we litigate in courts to protect the right to encrypt, build technologies to encrypt the web, and lead the fight against anti-encryption legislation like last year's EARN IT Act.

We’ve seen big victories in our fight to defend encryption. But we haven’t done it alone. That’s why we’re proud this year to join dozens of other organizations in the Global Encryption Coalition as we celebrate the first Global Encryption Day, which is today, October 21, 2021.

For this inaugural year, we’re joining our partner organizations to ask people, companies, governments, and NGOs to “Make the Switch” to strong encryption. We’re hoping this day can encourage people to make the switch to end-to-end encrypted platforms, creating a more secure and private online world. It’s a great time to turn on encryption on all the devices or services you use, or switch to an end-to-end encrypted app for messaging—and talk to others about why you made that choice. Using strong passwords and two-factor authentication are also security measures that can help keep you safe. 

If you already have a handle on encryption and its benefits, today would be a great day to talk to a friend about it. On social media, we’re using the hashtag #MakeTheSwitch.

The Global Encryption Day website has some ideas about what you could do to make your online life more private and secure. Another great resource is EFF’s Surveillance Self Defense Guide, where you can get tips on everything from private web browsing, to using encrypted apps, to keeping your privacy in particular security scenarios—like attending a protest, or crossing the U.S. border. 

We need to keep talking about the importance of encryption, partly because it’s under threat. In the U.S. and around the world, law enforcement agencies have been seeking an encryption “backdoor” to access peoples’ messages. At EFF, we’ve resisted these efforts for decades. We’ve also pushed back against efforts like client-side scanning, which would break the promises of user privacy and security while technically maintaining encryption.

The Global Encryption Coalition is listing events around the world today. EFF Senior Staff Technologist Erica Portnoy will be participating in an “Ask Me Anything” about encryption on Reddit, at 17:00 UTC, which is 10:00 A.M. Pacific Time. Jon Callas, EFF Director of Technology Projects, will join an online panel about how to improve user agency in end-to-end encrypted services, on Oct. 28.


EFF to Federal Court: Block Unconstitutional Texas Social Media Law

Users are understandably frustrated and perplexed by many big tech companies’ content moderation practices. Facebook, Twitter, and other social media platforms make many questionable, confounding, and often downright incorrect decisions affecting speakers of all political stripes. 

A new Texas law, which Texas Governor Greg Abbott said would stop social media companies that “silence conservative viewpoints and ideas,” restricts large platforms from removing or moderating content based on the viewpoint of the user. The measure, HB 20, is unconstitutional and should not be enforced, we told a federal court in Texas in an amicus brief filed Oct. 15. 

In NetChoice v. Paxton, two technology trade associations sued Texas to prevent the law from going into effect. Our brief, siding with the plaintiffs, explains that the law forces popular online platforms to publish speech they don’t agree with or don’t want to share with their users. Its broad restrictions would destroy many online communities that rely on moderation and curation. Platforms and users may not want to see certain kinds of content and speech that is legal but still offensive or irrelevant to them. They have the right under the First Amendment to curate, edit, and block everything from harassment to opposing political viewpoints.

Contrary to HB 20’s focus, questionable content moderation decisions are in no way limited to conservative American speakers. In 2017, for example, Twitter disabled the verified account of Egyptian human rights activist Wael Abbas. That same year, users discovered that Twitter had marked tweets containing the word “queer” as offensive. Recent reporting has highlighted how Facebook failed to enforce its policies against hate speech and promotion of violence, or even publish those policies, in places like Ethiopia.

However, EFF’s brief explains that users also rely on the First Amendment to create communities online, whether they are niche or completely unmoderated. Undermining speech protections would ultimately hurt users by limiting their options online. 

HB 20 also requires large online platforms to follow transparency and complaint procedures, such as publishing an acceptable use policy and biannual statistics on content moderation. While EFF urges social media companies to be transparent with users about their moderation practices, when governments mandate transparency, they must accommodate constitutional and practical concerns. Voluntary measures such as implementing the Santa Clara Principles, guidelines for a human rights framework for content moderation, best serve a dynamic internet ecosystem.

HB 20’s requirements, however, are broad and discriminatory. Moreover, HB 20 would likely further entrench the market dominance of the very social media companies the law targets because compliance will require a significant amount of time and money.

EFF has filed several amicus briefs opposing government control over content moderation, including in a recent successful challenge to a similar Florida law. We urge the federal court in Texas to rule that HB 20 restricts and burdens speech in violation of the Constitution.


