What Really Does and Doesn’t Work for Fair Use in the DMCA

On July 28, the Senate Committee on the Judiciary held another in its year-long series of hearings on the Digital Millennium Copyright Act (DMCA). The topic of this hearing was “How Does the DMCA Contemplate Limitations and Exceptions Like Fair Use?”

We’re glad Congress is asking the question. Without fair use, much of our common culture would be inaccessible, cordoned off by copyright. Fair use creates breathing space for innovation and new creativity by allowing us to re-use and comment on existing works. As Sherwin Siy, lead public policy manager for the Wikimedia Foundation, explained in his testimony, fair uses “aren’t rare exceptions to the exclusive rights of copyright law but a pervasive, constantly operating aspect of the law. Fair use not only promotes journalism, criticism, and education, it also ensures that our everyday activities aren’t constantly infringing copyrights. Especially now that so much of our lives are conducted on camera and online.”

Unfortunately, the answer to Congress’s question is: not enough. The DMCA, by design and as interpreted, doesn’t do enough to protect online fair uses. This is true of both Section 1201 of the DMCA—the “anti-circumvention” provision, which bans circumventing technological restrictions on copyrighted works—and Section 512—the provision that immunizes platforms from liability for copyright infringement by their users so long as certain conditions are met.

Fair Use and Notice and Takedown

The DMCA was meant to be a grand bargain, balancing the needs of tech companies, rightsholders, and users. Section 512 embodies a carefully crafted system that, when properly deployed, gives service providers protection from liability, copyright owners tools to police infringement, and users the ability to challenge the improper use of those tools. Without Section 512, the risk of crippling liability for the acts of users would have prevented the emergence of most of the social media outlets we use today.

But Congress knew that Section 512’s powerful incentives could result in lawful material being censored from the Internet without prior judicial scrutiny, much less advance notice to the person who posted the material or an opportunity to contest the removal. For example, users often make fair use of copyrighted works in all kinds of online expression. That use is authorized by law, as part of copyright’s “built-in First Amendment accommodations.” Nonetheless, it is often targeted for takedown under the DMCA.

Within Section 512, user protections are supposed to come from subsections 512(g) and 512(f). In practice, neither of these sections has worked quite as intended.

Section 512(g) lays out the requirements for counternotices. In theory, if a takedown targets a work making fair use, the person who posted it can send a counternotice to get the work restored. A counternotice must include the poster’s personal information and an agreement to be subject to a lawsuit. If the sender of the takedown doesn’t respond to the counternotice with a legal action within two weeks, the work goes back up. In practice, very few counternotices are sent, even when the original takedown was flawed.

Section 512(f) was supposed to deter takedowns targeting lawful uses by giving those harmed the ability to hold senders accountable. Once again, in practice, this provision has done little to prevent abusive and false takedowns.

Columbia Law Professor Jane C. Ginsburg agreed, saying that these parts of Section 512 “may not always have worked out as intended.” She highlighted that automated takedown notice systems don’t take fair use into account and that relatively few counternotices are sent. She allowed that “fear or ignorance” may cause users not to take advantage of counternotices, a point backed up by documented cases of trolling and by the intimidating nature of the counternotice process.

Evidence of how users avoid the process came from Rick Beato, a musician who also runs a popular YouTube channel teaching music theory. He noted that of the 750 YouTube videos he has made, 254 have been demonetized and 43 have been blocked or taken down. Yet he has never disputed any of them, because it’s too much trouble.

Several witnesses urged the creation of some sort of “alternative dispute resolution” to make taking down and restoring content easier. We disagree. Section 512 already makes takedowns far too easy. The experience of the last 22 years shows just how much extrajudicial systems like the DMCA harm the fundamental right to freedom of expression. The answer to the DMCA’s failures cannot be yet another such system.

As for the European model, there is no way to square the Copyright Directive with fair use. The European Union’s Copyright Directive effectively requires companies to ensure that nothing is ever posted on their platforms that might be infringing. That incentivizes them to over-remove rather than take fair use into account, and in practice it makes filters necessary: online service providers must run everything we post through black-box machine-learning filters that block anything classified as "copyright infringement." As Beato testified at the hearing, those filters routinely fail to distinguish even obvious fair uses; his educational videos have been taken down and demonetized because of a filter. And he is not alone.

Witnesses also suggested that fair use has expanded too far. This is a reassertion of the old bogeyman of “fair use creep,” and it assumes that fair use is set in stone. In fact, fair use, which is flexible by design, is merely keeping up with the changes in the online landscape and protecting users’ rights.

As witness Joseph Gratz put it:

Nobody likes to have their word, or their work, or their music used in ways that they can’t control. But that is exactly what fair use protects. And that is exactly what the First Amendment protects. Whether or not the copyright holder likes the use, and indeed, even more so where the copyright holder does not like the use, fair use is needed to make sure that free expression can thrive.

Fair Use, Copyright Protection Measures, and Right to Repair

On balance, Section 512 supports a great deal of online expression despite its flaws. The same cannot be said for Section 1201. Section 1201 makes it illegal to circumvent a technological protection on a copyrighted work, even if you are doing so for an otherwise lawful reason.

Sound confusing? It is. Thanks to fair use, you have a legal right to use copyrighted material without permission or payment. But thanks to Section 1201, you do not have the right to break any digital locks that might prevent you from engaging in that fair use. And this, in turn, has had a host of unintended consequences, such as impeding the right to repair.

The only way to be safe under the law is to get an exemption from the Copyright Office, which grants exemptions to classes of uses every three years. And even if your use is covered by an exemption, that exemption must be continually renewed. In other words, you have to participate in an unconstitutional speech licensing regime, seeking permission from the Copyright Office to exercise your speech rights.

Nevertheless, Christopher Mohr, the Vice President for Intellectual Property and General Counsel of the Software and Information Industry Association, called Section 1201 a success because it supposedly prevented the “proliferation of piracy tools in big box stores.” And Ginsburg pointed to the triennial exemption process as a success. She said it “responds effectively to the essential challenge” of balancing the need for controls with the rights of users.

That’s one way of looking at it. Another is that even if you have an exemption allowing you access to material you have a Constitutional right to use, you can’t ask someone with the technological know-how to circumvent for you, and no one is supposed to provide you a tool to do it yourself, either. You have to do it all on your own.

So if you are, for example, one of the farmers trying to repair your own tractor, you now have an exemption allowing you to do that. But you still can’t go to an independent repair store to get an expert to let you in. You can’t use a premade tool to help you get in. This is a manifestly absurd result.

We’re glad Congress is asking questions about fair use under the DMCA. We wish there were better answers.

In Historic Opinion, Third Circuit Protects Public School Students’ Off-Campus Social Media Speech

The U.S. Court of Appeals for the Third Circuit issued an historic opinion in B.L. v. Mahanoy Area School District, upholding the free speech rights of public school students. The court adopted the position EFF urged in our amicus brief that the First Amendment prohibits disciplining public school students for off-campus social media speech.

B.L. was a high school student who had failed to make the varsity cheerleading squad and was placed on junior varsity instead. Out of frustration, she posted—over the weekend and off school grounds—a Snapchat selfie with text that said, among other things, “fuck cheer.” One of her Snapchat connections took a screenshot of the “snap” and shared it with the cheerleading coaches, who suspended B.L. from the J.V. squad for one year. She and her parents sought administrative relief to no avail, and eventually sued the school district with the help of the ACLU of Pennsylvania.

In its opinion protecting B.L.’s social media speech under the First Amendment, the Third Circuit issued three key holdings.

Social Media Post Was “Off-Campus” Speech

First, the Third Circuit held that B.L.’s post was indeed “off-campus” speech. The court recognized that the question of whether student speech is “on-campus” or “off-campus” is a “tricky” one whose “difficulty has only increased after the digital revolution.” Nevertheless, the court concluded that “a student’s online speech is not rendered ‘on campus’ simply because it involves the school, mentions teachers or administrators, is shared with or accessible to students, or reaches the school environment.”

Therefore, B.L.’s Snapchat post was “off-campus” speech because she “created the snap away from campus, over the weekend, and without school resources, and she shared it on a social media platform unaffiliated with the school.”

The court quoted EFF’s amicus brief to highlight why protecting off-campus social media speech is so critical:

Students use social media and other forms of communication with remarkable frequency. Sometimes the conversation online is a high-minded one, with students “participating in issue- or cause-focused groups, encouraging other people to take action on issues they care about, and finding information on protests or rallies.”

Vulgar Off-Campus Social Media Speech Is Not Punishable

Second, the Third Circuit reaffirmed its prior holding that the ability of public school officials to punish students for vulgar, lewd, profane, or otherwise offensive speech, per the Supreme Court’s opinion in Bethel School District No. 403 v. Fraser (1986), does not apply to off-campus speech.

The court held that the fact that B.L.’s punishment related to an extracurricular activity (cheerleading) was immaterial. The school district had argued that students have “no constitutionally protected property right to participate in extracurricular activities.” The court expressed concern when any form of punishment is “used to control students’ free expression in an area traditionally beyond regulation.”

Off-Campus Social Media Speech That “Substantially Disrupts” the On-Campus Environment Is Not Punishable

Third, the Third Circuit finally answered the question that had been left open by its prior decisions: whether public school officials may punish students for off-campus speech that is likely to “substantially disrupt” the on-campus environment. School administrators often make this argument based on a misinterpretation of the U.S. Supreme Court’s opinion in Tinker v. Des Moines Independent Community School District (1969).

Tinker involved only on-campus speech: students wearing black armbands on school grounds, during school hours, to protest the Vietnam War. The Supreme Court held that the school violated the student protestors’ First Amendment rights by suspending them for refusing to remove the armbands because the students’ speech did not “materially and substantially disrupt the work and discipline of the school,” and school officials did not reasonably forecast such disruption.

Tinker was a resounding free speech victory when it was decided, reversing the previously widespread assumption that school administrators had wide latitude to punish student speech on campus. Nevertheless, lower courts have more recently read Tinker as a sword against student speech rather than a shield protecting it, allowing schools to punish student off-campus speech they deem “disruptive.”

The Third Circuit unequivocally rejected reading Tinker as creating a pathway to punish student off-campus speech, such as B.L.’s Snapchat post. The court concisely defined “off-campus” speech as “speech that is outside school-owned, -operated, or -supervised channels and that is not reasonably interpreted as bearing the school’s imprimatur.”

The Third Circuit noted that EFF was the only party to argue that the court should reach this holding (p. 22 n.8). The court reasoned that “social media has continued its expansion into every corner of modern life,” and that it was time to end the “legal uncertainty” that “in this context creates unique problems.” The court stated, “Obscure lines between permissible and impermissible speech have an independent chilling effect on speech.”

Possible Limits on Student Social Media Speech

The Third Circuit clarified that schools may still punish on-campus disruption caused by an off-campus social media post: a school may discipline a “student who, on campus, shares or reacts to controversial off-campus speech in a disruptive manner.” That is, a “school can punish any disruptive speech or expressive conduct within the school context that meets” the Supreme Court’s demanding standards for actual and serious disruption of the school day.

Thus, “a student who opens his cellphone and shows a classmate a Facebook post from the night before” may be punished if that post, by virtue of being affirmatively shared on campus by the original poster, “substantially disrupts” the on-campus environment. Similarly, if other students act disruptively on campus in response to that Facebook post, they may be punished—but not the original poster if he himself did not share the post on campus.

Additionally, the Third Circuit “reserv[ed] for another day the First Amendment implications of off-campus student speech that threatens violence or harasses others,” an issue that was not presented in this case.

Supreme Court Review Possible

The Third Circuit’s opinion is historic because it is the first from a federal appellate court to hold that Tinker’s substantial disruption exception does not apply to off-campus speech.

Other circuits, citing Tinker, have upheld the regulation of off-campus speech in various contexts and under different rules, such as when it is “reasonably foreseeable” that off-campus speech will reach the school environment, or when off-campus speech has a sufficient “nexus” to the school’s “pedagogical interests.”

The Third Circuit rejected all these approaches. The court argued that its “sister circuits have adopted tests that sweep far too much speech into the realm of schools’ authority.” The court was critical of these approaches because they “subvert[] the longstanding principle that heightened authority over student speech is the exception rather than the rule.”

Because there is a circuit split on this important First Amendment student speech issue, it is possible that the school district will seek certiorari and that the Supreme Court will grant review. Until then, we can celebrate this historic win for public school students’ free speech rights.

The PACT Act Is Not The Solution To The Problem Of Harmful Online Content

The Senate Commerce Committee’s Tuesday hearing on the PACT Act and Section 230 was a refreshingly substantive bipartisan discussion about the thorny issues related to how online platforms moderate user content, and to what extent these companies should be held liable for harmful user content.

The hearing brought into focus several real and significant problems that Congress should continue to consider. It also showed that, whatever its good intentions, the PACT Act in its current form does not address those problems, much less deal with how to lessen the power of the handful of major online services we all rely on to connect with each other.

EFF Remains Opposed to the PACT Act

As we recently wrote, the Platform Accountability and Consumer Transparency (PACT) Act, introduced last month by Senators Brian Schatz (D-HI) and John Thune (R-SD), is a serious effort to tackle a serious problem: that a handful of large online platforms dominate users’ ability to speak online. The bill builds on good ideas, such as requiring greater transparency around platforms’ decisions to moderate their users’ content—something EFF has championed as a voluntary effort as part of the Santa Clara Principles.

However, we are ultimately opposed to the bill, because weakening Section 230 (47 U.S.C. § 230) would lead to more illegitimate censorship of user content. The bill would also threaten small platforms and would-be competitors to the current dominant players, and the bill has First Amendment problems.

Important Issues Related to Content Moderation Remain

One important issue that came up during the hearing is to what extent online platforms should be required to take down user content that a court has determined is illegal. The PACT Act provides that platforms would lose Section 230 immunity for user content if the companies failed to remove material after receiving notice that a court has declared that material illegal. It’s not unreasonable to question whether Section 230 should protect platforms for hosting content after a court has found the material to be illegal or unprotected by the First Amendment.

However, we remain concerned about whether any legislative proposal, including the PACT Act, can provide sufficient guardrails to prevent abuse and to ensure that user content is not unnecessarily censored. Courts often issue non-final judgments, opining on the legality of content in, for example, an opinion on a motion to dismiss, before reaching the merits stage of a case. Some court decisions are default judgments, entered because the defendant did not show up to defend herself; any determination about the illegality of the content the defendant posted is then suspect, because the question was not subject to a robust adversarial process. And even when there is a final order from a trial court, that decision is often appealed and sometimes reversed by a higher court.

Additionally, some lawsuits over user content are harassing suits that might be dismissed under anti-SLAPP laws, but not all states have such laws, and no single anti-SLAPP statute applies consistently in federal court. Finally, some documents that appear to be final court judgments may be falsified, which would lead to the illegitimate censorship of user speech if platforms don’t spend considerable resources investigating each takedown request.

We were pleased to see that many of these concerns were discussed at the hearing, even if a consensus wasn’t reached. It’s refreshing to see elected leaders trying to balance competing interests, including how to protect Internet users who are victims of illegal activity while avoiding the creation of broad legal tools that can censor speech that others do not like. But as we’ve said previously, the PACT Act, as currently written, doesn’t attempt to balance these or other concerns. Rather, by requiring the removal of any material that someone claims a court has declared illegal, it tips the balance toward broad censorship.

Another thorny but important issue is the question of competition among online platforms. Sen. Mike Lee (R-UT) expressed his preference for finding market solutions to the problems associated with the dominant platforms and how they moderate user content. EFF has urged the government to consider a more robust use of antitrust law in the Internet space. One thing is certain, though: weakening Section 230 protections will only entrench the major players, as small companies don’t have the financial resources and personnel to shoulder increased liability for user content.

Unfortunately, the PACT Act’s requirements that platforms put in place content moderation and response services will only further cement the dominance of services such as Facebook, Twitter, and YouTube, which already employ vast numbers of employees to moderate users’ content. Small competitors, on the other hand, lack the resources to comply with the PACT Act.

Let’s Not Forget About the First Amendment

The hearing also touched upon understandably concerning content categories including political and other misinformation, hate speech, terrorism content, and child sexual abuse material (“CSAM”). However, by and large, these categories of content (except for CSAM) are protected by the First Amendment, meaning that the government can’t mandate that such content be taken down.

To be clear, Congress can and should be talking about harmful online content and ways to address it, particularly when harassment and threats drive Internet users offline. But if the conversation focuses on Section 230, rather than grappling with the First Amendment issues at play, then it is missing the forest for the trees.

Moreover, any legislative effort aimed at removing harmful, but not illegal, content online has to recognize that platforms that host user-generated content have their own First Amendment rights to manage that content. The PACT Act intrudes on these services’ editorial discretion by requiring that they take certain steps in response to complaints about content.

Amidst a series of bad-faith attacks on Internet users’ speech and efforts to weaken Section 230 protections, it was refreshing to see Senators hold a substantive public discussion about what changes should be made to U.S. law governing Internet users’ online speech. We hope that it can serve as the beginning of a good-faith effort to grapple with real problems and to identify workable solutions that balance the many competing interests while ensuring that Internet users continue to enjoy the diverse forums for speech and community online.

University App Mandates Are The Wrong Call

As students, parents, and schools prepare for the new school year, universities are considering ways to make returning to campus safer. Some are considering, and even mandating, that students install COVID-related technology on their personal devices. This is the wrong call. Exposure notification apps, quarantine enforcement programs, and similar new technologies are untested and unproven, and mandating them risks exacerbating existing inequalities in access to technology and education. Schools must remove any such mandates from student agreements or commitments, and should further pledge not to mandate the installation of any such technology.

Even worse, many schools—including Indiana University, UMass Amherst, and University of New Hampshire—are requiring students to make a blanket commitment to installing an unspecified tracking app of the university’s choosing in the future. This gives students no opportunity to assess or engage with the privacy practices or other characteristics of this technology. That matters because not all COVID exposure notification and contact tracing apps are the same. For instance, Utah's Healthy Together app until recently collected not only Bluetooth proximity data but also GPS location data, an unnecessary privacy intrusion that was later rolled back. Google and Apple’s Bluetooth-based exposure notification framework is more privacy-protective than a GPS-based solution, but the decision to install it or any other app must still be in the hands of the individuals affected.

Further, in many cases, students face disciplinary proceedings and sanctions if they violate these student agreements. That’s why tracking app mandates, particularly by government entities like public universities, have the potential to chill constitutionally protected speech. Students may be afraid to exercise their rights to speak about university policies if they know the university has the potential to sanction them for it.

The speculative discussion of COVID-related technology and schools has obscured a key fact: contact tracing is a long-established medical technique that was effective long before the advent of computers in our pockets. It involves a trained person interviewing a diagnosed patient to review their recent travels and interactions. It is still effective, and it is still necessary. Exposure notification apps are new, and there is not yet strong evidence of their efficacy. They certainly do not offer a silver-bullet solution.

App mandates also rely on various assumptions: that every person has their own smartphone, that the phone is an up-to-date Android or iOS device, and that it is always charged and close to their body. These assumptions exacerbate the digital divide, and relying excessively on apps over human contact tracing widens the already stark wealth and racial divides in who is most impacted by COVID-19. With app mandates in place, the same students who do not have reliable home broadband connections and study space for remote instruction would likely be unable to meet the smartphone app requirements to attend classes in person.

Universities should strike any app mandates from their existing student commitments, and should pledge not to include them in future student commitments. If and when a university identifies a specific technology it would like students to use, it is the university’s responsibility to present it to students and demonstrate that it is effective and respects their privacy: by sharing privacy policies, by explaining how and by whom student data will be used and shared, by making commitments regarding how the institution will protect students’ privacy, and by offering avenues for feedback before and during decision-making. Anything short of that abuses the university's power over its students and erodes their rights. It is not too late for schools to commit to a better path.

A Legal Deep Dive on Mexico’s Disastrous New Copyright Law

Mexico has just adopted a terrible new copyright law, thanks to pressure from the United States (and specifically from the copyright maximalists that hold outsized influence on US foreign policy).

This law closely resembles the Digital Millennium Copyright Act enacted in the US in 1998, with a few differences that make it much, much worse.

We’ll start with a quick overview, and then dig deeper.

“Anti-Circumvention” Provision

The Digital Millennium Copyright Act included two very significant provisions. One is DMCA 1201, the ban on circumventing technology that restricts access to or use of copyrighted works (or sharing such technology). Congress was thinking about people ripping DVDs to make infringing copies of movies or descrambling cable channels without paying, but the law it passed goes much, much further. In fact, some US courts have interpreted it to effectively eliminate fair use whenever a technological restriction must be bypassed.

In the past 22 years, we’ve seen DMCA 1201 interfere with media education, remix videos, security research, privacy auditing, archival efforts, innovation, access to books for people with print disabilities, unlocking phones to work on a new carrier or to install software, and even the repair and reverse engineering of cars and tractors. It turns out that there are a lot of legitimate and important things that people do with culture and with software. Giving copyright owners the power to control those things is a disaster for human rights and for innovation.

The law is sneaky. It includes exemptions that sound good on casual reading, but are far narrower than you would imagine if you look at them carefully or in the context of 22 years of history. For instance, for the first 16 years under DMCA 1201, we tracked dozens of instances where it was abused to suppress security research, interoperability, free expression, and other noninfringing uses of copyrighted works.

It’s a terrible, unconstitutional law, which is why EFF is challenging it in court.

Unfortunately, Mexico’s version is even worse. Important cultural and practical activities are blocked by the law entirely. In the US, we and our allies have used Section 1201’s exemption process to obtain accommodations for documentary filmmaking, for teachers using video clips in the classroom, for fans making noncommercial remix videos, for unlocking or jailbreaking phones, for repairing and modifying cars and tractors, for using competing cartridges in 3D printers, and for archival preservation of certain works. Beyond those, we and our allies have been fighting for decades to protect the full scope of noninfringing activities that require circumvention, so that journalism, dissent, innovation, and free expression do not take a back seat to an overbroad copyright law. Mexico’s version has an exemption process as well, but it is far more limited, in part because Mexico doesn’t have our robust fair use doctrine as a backstop.

This is not a niche issue. The U.S. Copyright Office received nearly 40,000 comments in the 2015 rulemaking. In response to a petition signed by 114,000 people, the U.S. Congress stepped in to correct the rulemaking authorities when they allowed the protection for unlocking phones to lapse in 2012.

“Notice-and-Takedown” Provision

In order to avoid the uncertainty and cost of litigation (which would have bankrupted every online platform and deprived the public of important opportunities to speak and connect), Congress enacted Section 512, which provides a “safe harbor” for various Internet-related activities. To stay in the safe harbor, service providers must comply with several conditions, including “notice and takedown” procedures that give copyright holders a quick and easy way to disable access to allegedly infringing content. Section 512 also contains provisions allowing users to challenge improper takedowns. Without these protections, the risk of potential copyright liability would prevent many online intermediaries from providing services such as hosting and transmitting user-generated content. The safe harbors have thus been essential to the growth of the Internet as an engine for innovation and free expression.

But Section 512 is far from perfect, and again, the Mexican version is worse.

First of all, under the Mexican law a platform can be fined simply for failing to abide by takedown requests, even if the takedown is spurious and the targeted material does not infringe. In the US, a platform that opted out of the safe harbor would still only be liable if someone sued it and proved secondary liability. Platforms are already incentivized to take down content on a hair trigger to avoid potential liability, and the Mexican law provides new penalties if they don’t.

Second, we have long catalogued the many problems that arise when you provide the public a way to get material removed from the public sphere without any judicial involvement. It is sometimes deployed maliciously, to suppress dissent or criticism, while other times it is deployed with lazy indifference about whether it is suppressing speech that isn’t actually infringing.

Third, by requiring that platforms prevent material from reappearing after it is taken down, the Mexican law goes far beyond DMCA 512 by essentially mandating automatic filters. We have repeatedly written about the disastrous consequences of this kind of automated censorship.

So that’s the short version. For more detail, read on. But if you are in Mexico, consider first exercising your power to fight back against this law.

Take Action

If you are based in Mexico, we urge you to participate in R3D's campaign "Ni Censura ni Candados" and send a letter to Mexico's National Commission for Human Rights asking it to invalidate this flawed new copyright law. R3D will ask for your name, email address, and your comment, which will be subject to R3D's privacy policy.

We are grateful to Luis Fernando García Muñoz of R3D (Red en Defensa de los Derechos Digitales) for his translation of the new law and for his advocacy on this issue.

In-depth legislative analysis and commentary

The text of the law is presented in full in blockquotes. EFF's analysis has been inserted following the relevant provisions.

Provisions on Technical Protection Measures

Article 114 Bis.- In the protection of copyright and related neighboring rights, effective technological protection measures may be implemented and information on rights management. For these purposes:

I. An effective technological protection measure is any technology, device or component that, in the normal course of its operation, protects copyright, the right of the performer or the right of the producer of the phonogram, or that controls access to a work, to a performance, or to a phonogram. Nothing in this section shall be compulsory for persons engaged in the production of devices or components, including their parts and their selection, for electronic, telecommunication or computer products, provided that said products are not destined to carry out unlawful conduct, and

This provision adopts a broad definition of ‘technological protection measure’ or TPM, so that a wide range of encryption and authentication technologies will trigger this provision. The reference to copyright is almost atmospheric, since the law is not substantively restricted to penalizing those who bypass TPMs for infringing purposes.

II. The information on rights management are the data, notice or codes and, in general, the information that identifies the work, its author, the interpretation, the performer, the phonogram, the producer of the phonogram, and to the holder of any right over them, or information about the terms and conditions of use of the work, interpretation or execution, and phonogram, and any number or code that represents such information, when any of these information elements is attached to a copy or appear in relation to the communication to the public of the same.

In the event of controversies related to both fractions, the authors, performers or producers of the phonogram, or holders of respective rights, may exercise civil actions and repair the damage, in accordance with the provisions of articles 213 and 216 bis. of this Law, independently to the penal and administrative actions that proceed.

Article 114 Ter.- It does not constitute a violation of effective technological protection measures when the evasion or circumvention is about works, performances or executions, or phonograms whose term of protection granted by this Law has expired.

In other words, the law doesn’t prohibit circumvention to access works that have entered the public domain. This is small comfort: Mexico has one of the longest copyright terms in the world.

Article 114 Quater.- Actions of circumvention or evasion of an effective technological protection measure that controls access to a work, performance or execution, or phonogram protected by this Law, shall not be considered a violation of this Law, when:

This provision lays out some limited exceptions to the general rule of liability. But those exceptions won’t work. After more than two decades of experience with the DMCA in the United States, it is clear that regulators can’t protect fundamental rights by attempting to imagine in advance and authorize particular forms of cultural and technological innovation. Furthermore, several of these exemptions are modeled on stale US exemptions that have proven completely inadequate in practice. The US Congress could plead ignorance in the 90s; legislators have no excuse today.

It gets worse: because Mexico does not have a general fair use rule, innovators would be entirely dependent on these limited exemptions.

I. Non-infringing reverse engineering processes carried out in good faith with respect to the copy that has been legally obtained of a computer program that effectively controls access in relation to the particular elements of said computer programs that have not been readily available to the person involved in that activity, with the sole purpose of achieving the interoperability of an independently created computer program with other programs;

If your eyes glazed over at “reverse engineering” and you assumed this covered reverse engineering generally, you would be in good company. This exemption is sharply limited, however. The reverse engineering is only authorized for the “computer program that effectively controls access” and is limited to “elements of said computer programs that have not been readily available.” It does not mention reverse engineering of computer programs that are subject to access controls – in part because the US Congress was thinking about DVD encryption and cable TV channel scrambling, not about software. If you circumvent to confirm that the software is the software claimed, do you lose access to this exemption because the program was already readily available to you? Even if you had no way to verify that claim without circumvention? Likewise, your “sole purpose” has to be achieving interoperability of an independently created computer program with other programs. It’s not clear what “independently” means, and this is not a translation error – the US law is similarly vague. Finally, the “good faith” limitation is a trap for the unwary or unpopular. It does not give adequate notice to a researcher whether their work will be considered to be done in “good faith.” Is reverse engineering for competitive advantage a permitted activity or not? Why should any non-infringing activity be a violation of copyright-related law, regardless of intent?

If you approach this provision as if it authorizes “reverse engineering” or “interoperability” generally you are imagining an exemption that is far more reasonable than what the text provides.

In the US, for example, companies have pursued litigation over interoperable garage door openers and printer cartridges all the way to appellate courts. It has never been this provision that protected interoperators. The Copyright Office has recognized this in granting exemptions to 1201 for activities like jailbreaking your phone to work with other software.

II. The inclusion of a component or part thereof, with the sole purpose of preventing minors from accessing inappropriate content, online, in a technology, product, service or device that itself is not prohibited;

It’s difficult to imagine something having this as the ‘sole purpose.’ In any event, this is far too vague to be useful for many.

III. Activities carried out by a person in good faith with the authorization of the owner of a computer, computer system or network, performed for the sole purpose of testing, investigating or correcting the security of that computer, computer system or network;

Again, if you skim this provision and believe it protects “computer security,” you are giving it too much credit. Most security researchers do not have the “sole purpose” of fixing the particular device they are investigating; they want to provide that knowledge to the necessary parties so that security flaws do not harm any of the users of similar technology. They want to advance the state of understanding of secure technology. They may also want to protect the privacy and autonomy of users of a computer, system, or network in ways that conflict with what the manufacturer would view as the security of the device. The “good faith” exemption again creates legal risk for any security researcher trying to stay on the right side of the law. Researchers often disagree with manufacturers about the appropriate way to investigate and disclose security vulnerabilities. The vague statutory provision for security testing in the United States was far too unreliable to successfully foster essential security research, something that even the US Copyright Office has now acknowledged. Restrictions on engaging in and sharing security research are also part of our active lawsuit seeking to invalidate Section 1201 as a violation of free expression.

IV. Access by the staff of a library, archive, or an educational or research institution, whose activities are non-profit, to a work, performance, or phonogram to which they would not otherwise have access, for the sole purpose to decide if copies of the work, interpretation or execution, or phonogram are acquired;

This exemption too must be read carefully. It is not a general exemption for noninfringing archival or educational uses. It is instead an extremely narrow exemption for deciding whether to purchase a work. When archivists want to break TPMs to archive an obsolete format, when educators want to take excerpts from films to discuss in class, when researchers want to run analytical algorithms on video data to measure bias or enhance accessibility, this exemption does nothing to help them. Several of these uses have been acknowledged as legitimate and impaired by the US Copyright Office.

V. Non-infringing activities whose sole purpose is to identify and disable the ability to compile or disseminate undisclosed personal identification data reflecting the online activities of a natural person, in a way that does not affect the ability of any person to gain access to a work, performance, or phonogram;

This section provides a vanishingly narrow exception, one that can be rendered null if manufacturers use TPMs in such a way that you cannot protect your privacy without bypassing the same TPM that prevents access to a copyrighted work. And rightsholders have repeatedly taken this very position in the United States. Besides that, the wording is tremendously outdated; you may want to modify the software in your child’s doll so that it doesn’t record their voice and send it back to the manufacturer; that is not clearly “online activities” – they’re simply playing with a doll at home. In the US, “personally identifiable information” also has a meaning that is narrower than you might expect.

VI. The activities carried out by persons legally authorized in terms of the applicable legislation, for the purposes of law enforcement and to safeguard national security;

This would be a good model for a general exemption: you can circumvent to do noninfringing things. Lawmakers have recognized, with this provision, that the ban on circumventing TPMs could interfere with legitimate activities that have nothing to do with copyright law, and provided a broad and general assurance that these noninfringing activities will not give rise to liability under the new regime.

VII. Non-infringing activities carried out by an investigator who has legally obtained a copy or sample of a work, performance or performance not fixed or sample of a work, performance or execution, or phonogram with the sole purpose of identifying and analyzing flaws in technologies for encoding and decoding information;

This exemption again is limited to identifying flaws in the TPM itself, as opposed to analyzing the software subject to the TPM.

VIII. Non-profit activities carried out by a person for the purpose of making accessible a work, performance, or phonogram, in languages, systems, and other special means and formats, for persons with disabilities, in terms of the provisions in articles 148, section VIII and 209, section VI of this Law, as long as it is made from a legally obtained copy, and

Why does accessibility have to be nonprofit? This means that companies trying to serve the needs of the disabled will be unable to interoperate with works encumbered by TPMs.

IX. Any other exception or limitation for a particular class of works, performances, or phonograms, when so determined by the Institute at the request of the interested party based on evidence.

It is improper to create a licensing regime that presumptively bans speech and the exercise of fundamental rights, and then requires the proponents of those rights to prove their rights to the government in advance of exercising them. We have sued the US government over its regime and the case is pending.

Article 114 Quinquies.- The conduct sanctioned in article 232 bis shall not be considered as a violation of this Law:

These are the exemptions to the ban on providing technology capable of circumvention, as opposed to the act of circumvention itself. They have the same flaws as the corresponding exemptions above, and they don’t even include the option to establish new, necessary exemptions over time. This limitation is present in the US regime as well, and it has sharply curtailed the practical utility of the exemptions obtained via subsequent rulemaking. They also do not include the very narrow privacy and library/archive exemptions, meaning that it is unlawful to give people the tools to take advantage of those rights.

I. When it is carried out in relation to effective technological protection measures that control access to a work, interpretation or execution, or phonogram and by virtue of the following functions:

a) The activities carried out by a non-profit person, in order to make an accessible format of a work, performance or execution, or a phonogram, in languages, systems and other modes, means and special formats for a person with a disability, in terms of the provisions of articles 148, section VIII and 209, section VI of this Law, as long as it is made from a copy legally obtained;

b) Non-infringing reverse engineering processes carried out in good faith with respect to the copy that has been legally obtained of a computer program that effectively controls access in relation to the particular elements of said computer programs that have not been readily available to the person involved in that activity, with the sole purpose of achieving the interoperability of an independently created computer program with other programs;

c) Non-infringing activities carried out by an investigator who has legally obtained a copy or sample of a work, performance or performance not fixed or sample of a work, performance or execution, or phonogram with the sole purpose of identifying and analyzing flaws in technologies for encoding and decoding information;

d) The inclusion of a component or part thereof, with the sole purpose of preventing minors from accessing inappropriate content, online, in a technology, product, service or device that itself is not prohibited;

e) Non-infringing activities carried out in good faith with the authorization of the owner of a computer, computer system or network, carried out for the sole purpose of testing, investigating or correcting the security of that computer, computer system or network, and

f) The activities carried out by persons legally authorized in terms of the applicable legislation, for the purposes of law enforcement and to safeguard national security.

II. When it is carried out in relation to effective technological measures that protect any copyright or related right protected in this Law and by virtue of the following functions:

a) Non-infringing reverse engineering processes carried out in good faith with respect to the copy that has been legally obtained of a computer program that effectively controls access in relation to the particular elements of said computer programs that have not been readily available to the person involved in that activity, with the sole purpose of achieving the interoperability of an independently created computer program with other programs, and

b) The activities carried out by persons legally authorized in terms of the applicable legislation, for the purposes of law enforcement and to safeguard national security.

Article 114 Sexies.- It is not a violation of rights management information, the suspension, alteration, modification or omission of said information, when it is carried out in the performance of their functions by persons legally authorized in terms of the applicable legislation, for the effects of law enforcement and safeguarding national security.

Article 232 Bis.- A fine of 1,000 UMA to 20,000 UMA will be imposed on whoever produces, reproduces, manufactures, distributes, imports, markets, leases, stores, transports, offers or makes available to the public, offer to the public or provide services or carry out any other act that allows having devices, mechanisms, products, components or systems that:

Again, it’s damaging to culture and innovation to ban non-infringing activities and technologies simply because they circumvent access controls.

I. Are promoted, published or marketed with the purpose of circumventing an effective technological protection measure;

II. Are used predominantly to circumvent any effective technological protection measure, or

This seems to suggest that a technologist who makes a technology with noninfringing uses can be liable because others, independently, have used it unlawfully.

III. Are designed, produced or executed with the purpose of avoiding any effective technological protection measure.

Article 232 Ter.- A fine of 1,000 UMA to 10,000 UMA will be imposed, to those who circumvent an effective technological protection measure that controls access to a work, performance, or phonogram protected by this Law.

Article 232 Quáter.- A fine of 1,000 UMA to 20,000 UMA will be imposed on those who, without the respective authorization:

I. Delete or alter rights management information;

This kind of vague prohibition invites nuisance litigation. There are many harmless ways to ‘alter’ rights management information – for accessibility, convenience, or even clarity. In addition, when modern cameras take pictures, they often automatically apply information that identifies the author. This creates privacy concerns, and it is a common social media practice to strip that identifying information in order to protect users. While large platforms can obtain a form of authorization via their terms of service, it should not be unlawful to remove identifying information in order to protect the privacy of persons involved in the creation of a photograph (for instance, those attending a protest or religious event).

II. Distribute or import for distribution, rights management information knowing that this information has been deleted, altered, modified or omitted without authorization, or

III. Produce, reproduce, publish, edit, fix, communicate, transmit, distribute, import, market, lease, store, transport, disclose or make available to the public copies of works, performances, or phonograms, knowing that the rights management information has been deleted, altered, modified or omitted without authorization.

Federal Criminal Code

Article 424 bis.- A prison sentence of three to ten years and two thousand to twenty thousand days fine will be imposed:

I. Whoever produces, reproduces, enters the country, stores, transports, distributes, sells or leases copies of works, phonograms, videograms or books, protected by the Federal Law on Copyright, intentionally, for the purpose of commercial speculation and without the authorization that must be granted by the copyright or related rightsholder according to said law.

The same penalty shall be imposed on those who knowingly contribute or provide in any way raw materials or supplies intended for the production or reproduction of works, phonograms, videograms or books referred to in the preceding paragraph;

This is ridiculously harsh and broad, even on the most generous reading. And the chilling effect of this criminal prohibition will go even further. If someone knows they are providing paper to a person but does not know that person is using it to print illicit copies, there should be complete legal clarity that they are not liable, let alone criminally liable.

II. Whoever manufactures, for profit, a device or system whose purpose is to deactivate the electronic protection devices of a computer program, or

As discussed, there are many legitimate and essential reasons for deactivating TPMs.

III. Whoever records, transmits or makes a total or partial copy of a protected cinematographic work, exhibited in a movie theater or places that substitute for it, without the authorization of the copyright or related rightsholder.

Jail time for filming any part of a movie in a theater is absurdly draconian and disproportionate.

Article 424 ter.- A prison sentence of six months to six years and five thousand to thirty thousand days fine will be imposed on whoever sells to any final consumer on the roads or in public places, intentionally, for the purpose of commercial speculation, copies of works, phonograms, videograms or books referred to in section I of the previous article.

If the sale is made in commercial establishments, or in an organized or permanent manner, the provisions of article 424 Bis of this Code will be applied.

Again, jail for such a violation is extremely disproportionate. The same comment applies to many of the following provisions.

Article 425.- A prison sentence of six months to two years or three hundred to three thousand days fine will be imposed on anyone who knowingly and without right exploits an interpretation or an execution for profit.

Article 426.- A prison term of six months to four years and a fine of three to three thousand days will be imposed, in the following cases:

I. Whoever manufactures, modifies, imports, distributes, sells or leases a device or system to decipher an encrypted satellite signal, carrier of programs, without authorization of the legitimate distributor of said signal;

II. Whoever performs, for profit, any act with the purpose of deciphering an encrypted satellite signal, carrier of programs, without authorization from the legitimate distributor of said signal;

III. Whoever manufactures or distributes equipment intended to receive an encrypted cable signal carrying programs, without authorization from the legitimate distributor of said signal, or

IV. Whoever receives or assists another to receive an encrypted cable signal carrying programs without the authorization of the legitimate distributor of said signal.

Article 427 Bis.- Who, knowingly and for profit, circumvents without authorization any effective technological protection measure used by producers of phonograms, artists, performers, or authors of any work protected by copyright or related rights, it will be punished with a prison sentence of six months to six years and a fine of five hundred to one thousand days.

Article 427 Ter.- To who, for profit, manufactures, imports, distributes, rents or in any way markets devices, products or components intended to circumvent an effective technological protection measure used by phonogram producers, artists or performers, as well as the authors of any work protected by copyright or related rights, will be imposed from six months to six years of prison and from five hundred to one thousand days fine.

Article 427 Quater.- To those who, for profit, provide or offer services to the public intended mainly to avoid an effective technological protection measure used by phonogram producers, artists, or performers, as well as the authors of any work protected by copyright or related right, it will be imposed from six months to six years in prison and from five hundred to a thousand days fine.

Article 427 Quinquies.- Anyone who knowingly, without authorization and for profit, deletes or alters, by himself or through another person, any rights management information, will be imposed from six months to six years in prison and five hundred to one thousand days fine.

The same penalty will be imposed on who for profit:

I. Distribute or import for its distribution rights management information, knowing that it has been deleted or altered without authorization, or

II. Distribute, import for distribution, transmit, communicate, or make available to the public copies of works, performances, or phonograms, knowing that rights management information has been removed or altered without authorization.

Notice and takedown provisions

Article 114 Septies.- The following are considered Internet Service Providers:

I. Internet Access Provider is a person who transmits, routes or provides connections for digital online communications, without modification of their content, between or among points specified by a user, of material of the user’s choosing, or who performs the intermediate and transient storage of that material automatically in the course of a transmission, routing or provision of connections for digital online communications.

II. Online Service Provider is a person who performs any of the following functions:

a) Caching carried out through an automated process;

b) Storage, at the request of a user, of material that is hosted in a system or network controlled or operated by or for an Internet Service Provider, or

c) Referring or linking users to an online location by using information location tools, including hyperlinks and directories.

Article 114 Octies.- Internet Service Providers will not be liable for damages caused to holders of copyright, related rights, or any other intellectual property right protected by this Law by infringements that occur in their networks or online systems, as long as they do not control, initiate or direct the infringing conduct, even if it takes place through systems or networks controlled or operated by them or on their behalf, in accordance with the following:

I. Internet Access Providers will not be liable for infringement, nor for the data, information, materials and content transmitted or stored in systems or networks controlled or operated by them or on their behalf, when they:

For clarity: this is the section that applies to those who provide your Internet subscription, as opposed to the websites and services you reach over the Internet.

a) Do not initiate the chain of transmission of the materials or content and do not select the materials or content of the transmission or its recipients, and

b) Accommodate and do not interfere with effective standard technological measures that protect or identify material protected by this Law, that are developed through an open and voluntary process by a broad consensus of copyright holders and service providers, that are available in a reasonable and non-discriminatory manner, and that do not impose substantial costs on service providers or substantial burdens on their systems or networks.

There is no such thing as a standard technological measure today, so for now this is dormant poison. A similar provision exists in US law, and no technology has ever been adopted by such a broad consensus.

II. Online Service Providers will not be liable for infringements, nor for the data, information, materials and content stored, transmitted or communicated through systems or networks controlled or operated by them or on their behalf, including cases in which they direct or link users to an online site, when:

First, for clarity, this is the provision that applies to the services and websites you interact with online, including sites like YouTube, Dropbox, Cloudflare, and search engines, but also sites of any size like a bulletin-board system or a server you run to host materials for friends and family or for your activist group.

The consequences for linking are alarming. Linking isn’t infringement in the US or Canada, and that is an important protection for public discourse. Moreover, a linked resource can change from a non-infringing page to an infringing one after the link is posted, through no fault of the person who linked to it.

a) In an expeditious and effective way, they remove, withdraw, eliminate or disable access to materials or content made available, enabled or transmitted without the consent of the copyright or related rights holder and hosted in their systems or networks, once they have certain knowledge of the existence of an alleged infringement in any of the following cases:

1. When they receive a notice from the copyright or related rights holder, or from any person authorized to act on behalf of the holder, in terms of section III of this article, or

It’s extremely dangerous to take a mere allegation as "certain knowledge" given how many bad faith or mistaken copyright takedowns are sent.

2. When they receive a resolution issued by the competent authority ordering the removal, elimination or disabling of the infringing material or content.

In both cases, reasonable measures must be taken to prevent the same content claimed to be infringing from being uploaded again to the system or network controlled or operated by the Internet Service Provider after the removal notice or the resolution issued by the competent authority.

This provision effectively mandates filtering of all subsequent uploads, comparing them to a database of everything that has been requested to be taken down. Filtering technologies are overly broad and unreliable, and cannot make infringement determinations. This would be a disaster for speech, and the expense would also be harmful to small competitors or nonprofit online service providers.
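
To see why such filters misfire, consider the simplest possible version of one. The sketch below is our own illustration, not anything specified in the law: a hypothetical "staydown" check that compares the hash of every new upload against a database of previously taken-down works. Even this bare-bones design is simultaneously too narrow and too broad.

    import hashlib

    # Hypothetical "staydown" database: hashes of everything ever taken down.
    takedown_db = set()

    def sha256(data: bytes) -> str:
        return hashlib.sha256(data).hexdigest()

    def register_takedown(data: bytes) -> None:
        takedown_db.add(sha256(data))

    def allow_upload(data: bytes) -> bool:
        """Reject any upload whose hash matches a taken-down work."""
        return sha256(data) not in takedown_db

    clip = b"...bytes of a video that was the subject of a takedown..."
    register_takedown(clip)

    # Too narrow: changing a single byte evades the filter entirely.
    assert allow_upload(clip + b"\x00")

    # Too broad: a critic lawfully quoting the identical clip is blocked,
    # because a hash comparison knows nothing about fair use or context.
    assert not allow_upload(clip)

Real filters use fuzzier "perceptual" matching to close the first hole, but that only widens the second: the more variations a filter catches, the more lawful parodies, quotations, and commentary it sweeps up with them.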

b) They unilaterally and in good faith remove, disable or suspend access to a publication, dissemination, public communication and/or exhibition of the material or content, in order to prevent the violation of applicable legal provisions or to comply with obligations arising from a contractual or legal relationship, provided they take reasonable steps to notify the person whose material is removed or disabled.

c) They have a policy, publicly known to their subscribers, that provides for the termination of accounts of repeat offenders;

This vague provision is also often a sword wielded by rightsholders. When the service is essential, such as access to the Internet, termination is an extreme measure and should not be routine.

d) They accommodate and do not interfere with effective standard technological measures that protect or identify material protected by this Law, that are developed through an open and voluntary process by a broad consensus of copyright holders and service providers, that are available in a reasonable and non-discriminatory manner, and that do not impose substantial costs on service providers or substantial burdens on their systems or networks, and

Again, there’s not yet any technology considered a standard technological measure.

e) In the case of the Online Service Providers referred to in subsections b) and c) of section II of article 114 Septies, in addition to complying with the immediately preceding subsections, they must not receive a financial benefit attributable to the infringing conduct when they have the right and ability to control that conduct.

This is a bit sneaky and could seriously undermine the safe harbor. Platforms do profit from user activity, and do technically have the ability to remove content – if that’s enough to trigger liability or to defeat a safe harbor, then the safe harbor is essentially null for any commercial platform.

III. The notice referred to in subsection a), numeral 1, of the previous section must be submitted through the forms and systems indicated in the regulations of the Law, which will require sufficient information to identify and locate the infringing material or content.

Said notice shall contain, at a minimum:

1. Indicate the name of the rightsholder or legal representative and the means of contact to receive notifications;

2. Identify the content of the claimed infringement;

3. Express the interest or right regarding the copyright, and

4. Specify the details of the electronic location to which the claimed infringement refers.

The user whose content is removed, deleted or disabled due to probable infringing behavior, and who considers that the Online Service Provider is in error, may request that the content be restored through a counter-notice, in which he or she must demonstrate ownership of, or authorization for, that specific use of the removed, deleted or disabled content, or justify its use under the limitations or exceptions to the rights protected by this Law.

An Online Service Provider that receives a counter-notice in accordance with the preceding paragraph must report the counter-notice to the person who submitted the original notice, and must re-enable the content that is the subject of the counter-notice unless that person initiates a judicial or administrative procedure, a criminal complaint or an alternative dispute resolution mechanism within 15 business days of the date the Online Service Provider reported the counter-notice.
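
To make the timeline concrete, here is a rough model of that restoration rule. This is our own sketch with invented names, and it simplifies details the statute leaves to regulation, such as how "business days" are counted (we approximate with Monday through Friday):

    from datetime import date, timedelta
    from typing import Optional

    WINDOW = 15  # business days, per the provision above

    def add_business_days(start: date, days: int) -> date:
        d = start
        while days > 0:
            d += timedelta(days=1)
            if d.weekday() < 5:  # Monday=0 .. Friday=4
                days -= 1
        return d

    def must_restore(counter_notice_reported: date,
                     sender_filed_action: Optional[date]) -> bool:
        """True if the platform must re-enable the content because the
        original notice sender never initiated proceedings in time."""
        deadline = add_business_days(counter_notice_reported, WINDOW)
        return sender_filed_action is None or sender_filed_action > deadline

    reported = date(2020, 8, 3)                 # a Monday
    print(add_business_days(reported, WINDOW))  # 2020-08-24: earliest restoration
    print(must_restore(reported, None))         # True: the sender never acted

On the most natural reading, then, even a baseless takedown keeps lawful content offline for a full three-week window, at no cost to a sender who simply never follows through.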

The law should make clear that a rightsholder is obligated to consider limitations and exceptions before sending a takedown.

IV. Internet Service Providers will not be obliged to supervise or monitor the systems or networks controlled or operated by them or on their behalf in order to actively search for possible violations of copyright or related rights protected by this Law that occur online.

In accordance with the provisions of the Federal Law on Telecommunications and Broadcasting, Internet Service Providers may carry out proactive monitoring to identify content that violates human dignity, is intended to nullify or impair rights and freedoms, as well as those that stimulate or advocate violence or a crime.

This provision is sneaky. It says “you don’t have to filter, but you’re allowed to look for content that impairs rights (like copyright) or a crime (like the new crimes in this law).” Given that the law also requires the platform to make sure that users cannot re-upload content that is taken down, it’s cold comfort to say here that they don’t have to filter proactively. At best, this means that a platform does not need to include works in its filters until it has received a takedown request for the works in question.

V. An Internet Service Provider’s inability to meet the requirements set forth in this article does not, by itself, generate liability for damages for violations of copyright and related rights protected by this Law.

This provision is unclear. Other provisions seem to indicate liability for failure to enact these procedures. Likely this means that a platform would suffer the fines below, but not liability for copyright infringement, if it is impossible to comply.

Article 232 Quinquies.- A fine of 1,000 to 20,000 UMA will be imposed on:

I. Anyone who makes a false statement in a notice or counter-notice, harming an interested party, when the Online Service Provider has relied on that notice to remove, delete or disable access to content protected by this Law, or has restored access to the content as a result of said counter-notice;

This is double-edged: it potentially deters both notices and counternotices. It also does not provide a mechanism to prevent censorship; a platform continues to be obligated to act on notices that include falsities.

II. Any Online Service Provider that fails to expeditiously remove, delete or disable access to content that has been the subject of a notice from the owner of the copyright or related right, from someone authorized to act on the holder’s behalf, or from a competent authority, without prejudice to the provisions of article 114 Octies of this Law, or

This is a shocking expansion of liability. In the US, the safe harbor provides important clarity, but even without the safe harbor, a platform is only liable if it has actually committed secondary copyright infringement. Under this provision, even a spurious takedown must be complied with to avoid a fine. This will create even worse chilling effects than what we’ve seen in the US.

III. Any Internet Service Provider that fails to expeditiously provide to the judicial or administrative authority, upon request, information in its possession that identifies the alleged infringer, in cases in which said information is required in order to protect or enforce copyright or related rights within a judicial or administrative proceeding.

We have repeatedly seen these kinds of information requests, paired with a pretextual copyright claim, used to unmask critics or target people for harassment. Handing over personal information should not be automatic simply because of an allegation of copyright infringement. In the US, we have fought for and won protections for anonymous speakers when copyright owners seek to unmask them because of the views they express. For instance, we recently defended the anonymity of a member of a religious community who questioned a religious organization, when the organization sought to abuse copyright law to learn their identity.

Related Cases: Green v. U.S. Department of Justice; 2018 DMCA Rulemaking

Mexico's New Copyright Law Undermines Mexico's National Sovereignty, Continuing Generations of Unfair "Fair Trade Deals" Between the USA and Latin America

Earlier this month, Mexico's Congress hastily imported most of the US copyright system into Mexican law, in a dangerous and ill-considered act. But neither this action nor its consequences occurred in a vacuum: rather, it was a consequence of Donald Trump's US-Mexico-Canada Agreement (USMCA), the successor to NAFTA.

Trade agreements are billed as creating level playing fields between nations to their mutual benefit. But decades of careful scholarship show that poorer nations typically come off worse through these agreements, even when they are subjected to the same rules, because the same rules don't have the same effect on different countries. Besides that, Mexico has now adopted worse rules than its trade partners.

To understand how this works, we need only look to the history of the USA’s relationship with the copyrights and patents of foreign persons and firms. When the USA was a new, poor, developing nation that imported more copyrights and patents than it exported, it did not honor foreigners’ copyrights or patents; rather, it allowed its people and its businesses to use them without paying, in order to develop the nation. Once the USA became an industrial and cultural powerhouse, it entered into agreements with other countries for mutual recognition of one another’s copyrights and patents, in order to extract wealth based on rights to its technology and culture.

But the USA has a short memory for what it once considered just. It has made the foreign enforcement of US copyrights a trade priority for decades, often demanding that its trading partners extend more legal privileges to US copyright holders than they (or anyone else) receive at home in the United States, while preventing local users from benefiting from fair use or other balancing rights available in the United States. The poorer the trading partner, the more the US government and US industry expect it to surrender.

Mexico’s new copyright law is a sad and enervating example of this principle in action. The law imposes restrictions that do not — and could not — exist under US law, because they violate US Constitutional principles (these laws also violate Mexican Constitutional principles).

For example, Mexico's copyright law effectively mandates copyright filters, which automatically screen Mexican Internet users' expressive speech and arbitrarily censor some of it based on an algorithm's decision to treat it as a copyright infringement.

Neither the US nor Canada has such a requirement, which puts Mexican online firms at a significant trade disadvantage relative to their "equal partners" under USMCA. These filters are very costly to develop and maintain: YouTube, for example, has invested over $100,000,000 in its content filtering systems. Those are costs that Mexican online services will have to shoulder if they compete with Canadian and US firms, while their counterparts in the USA and Canada face no such requirement.

Just as dangerous to Mexico's prosperity are its new rules on TPMs (including "Digital Rights Management" or DRM). The US version of these rules, Section 1201 of the Digital Millennium Copyright Act (DMCA 1201), sets out a procedure for granting exemptions to the ban on bypassing digital locks. The Mexican version holds out the possibility of creating such a process but does not describe it.

Even if the Mexican government eventually develops an equivalent procedure, people and businesses in the USA will still enjoy more flexibility than their Mexican counterparts: that's because the US system has produced a long, extensive list of exemptions that Mexico will have to develop on its own, through whatever process it eventually creates (if it ever does).

These rules interfere with many key activities, including accessibility adaptations for people with disabilities, education, and repair, including repair of agricultural and medical equipment. Most of that equipment comes from US firms, which can charge Mexican consumers and the Mexican health-care system arbitrarily high prices for repairs without having to fear competition from Mexican repair shops. They can also unilaterally declare equipment to be "beyond repair" and insist that it be replaced at full cost.

All of this happened even as the US government faces a legal challenge to its ban on circumventing access controls, a challenge that might see the rule struck down in the USA while it remains in force in Mexico.

Mexico's new copyright law also includes a much narrower set of limitations and exceptions than either the US ("fair use") or Canadian ("fair dealing") systems provide for. That means that Mexican consumers must pay US and Canadian firms for activities that people in the USA and Canada can undertake for free.

This is especially dangerous when coupled with Mexico's new Notice and Takedown system, which allows anyone to have content removed from the Internet simply by claiming to be the victim of copyright infringement. Under the US system, companies that do not act on these notices are only penalized if they actually commit indirect copyright infringement. But Mexico's version of these rules (Article 232 Quinquies (II)) forces compliance with a copyright owner’s takedown demands even if the platform believes the content is a noninfringing use.

That means that US firms and individuals can remove material — for example, negative reviews quoting a book or warnings about defective software — from Mexican online services, while such a tactic could be ignored by US online services.

This asymmetry is not new. It is a recurring feature of US-Mexico trade relations, something that was already present under NAFTA, but which USMCA expands to the digital realm through this outrageous copyright law.

Under NAFTA, US exports of corn syrup to Mexico surged, and Mexican anti-obesity campaigners who tried to stem the tide were rebuffed by the rules of the trade agreement.

As a result, Mexico's obesity epidemic is among the worst in the region, as is Mexican consumption of processed food. Julio Berdegué, a regional representative of the Food and Agriculture Organization of the United Nations, said "Unfortunately, Mexico is one of the leading countries in obesity, both in men and women and children. It is a very serious problem.” Mexico's export sector has also shifted, with much of the fresh fruits and vegetables that once made up the country's dietary staples now being exported to the USA.

Mexico's new copyright law only exacerbates this problem. Mexico's TPM rules hamper the security research that is the country's best hope to secure its people's digital devices. During Mexico's "sugar wars," activists were hacked with weapons sold by the cyber-arms dealer NSO Group, as part of an illegal campaign to neutralize their opposition to the powerful US sugar industry. That attack exploited a vulnerability in the activists' mobile apps, and Mexico's new copyright law impedes the work of those who would reveal those vulnerabilities.

The history of Latin America is filled with shameful instances of US interference to improve its prosperity at the expense of its southern neighbors.

The passage of the Mexican copyright law, rushed through in the middle of the pandemic without adequate consultation or debate, continues this denial of dignity and sovereignty. Lobbyists for just laws don't fear public scrutiny, after all. The only reason to undertake a lawmaking exercise like this under the shroud of haste and obscurity is to sneak it through before the public knows what's going on and can organize in opposition to it.

If you are based in Mexico, we urge you to participate in the "Ni Censura ni Candados" campaign from R3D and its allies, and send a letter to Mexico's National Commission for Human Rights asking it to invalidate this flawed new copyright law. R3D will ask for your name, email address, and your comment, which will be subject to R3D's privacy policy.


Disability, Education, Repair and Health: How Mexico's Copyright Law Hurts Self-Determination in the Internet Age

Mexico's new copyright law was rushed through Congress without adequate debate or consultation, and that's a problem, because the law -- a wholesale copy of the US copyright system -- creates unique risks to the human rights of the Mexican people, and the commercial fortunes of Mexican businesses and workers.

The Mexican law contains three troubling provisions:

I. Copyright filters: these automated censorship systems remove content from the Internet without human review and are a form of "prior restraint" ("prior censorship" in the Mexican legal parlance), which is illegal under Article 13 of the American Convention on Human Rights, which Mexico's Supreme Court has affirmed is part of Mexican free speech law (Mexico has an outstanding set of constitutional protections for free expression).

II. Technical Protection Measures: "TPMs" (including "digital rights management" or "DRM") are the digital locks that manufacturers use to constrain how owners of their products may use them, and to create legal barriers to competing products and embarrassing disclosures of security defects in their products. As with the US copyright system, Mexico's system does not create reliable exemptions for lawful conduct.

III. Notice and Takedown: A system allowing anyone purporting to be a copyright holder to have material swiftly removed from the Internet, without any judicial oversight or even presentation of evidence. The new Mexican law can easily be abused by criminals and corrupt officials who can use copyright to force online service providers to turn over the sensitive personal details of their critics, simply by pretending to be the victims of copyright infringement.

This system has grave implications for Mexicans' human rights, beyond free expression and cybersecurity.

Implicated in this new system are Mexicans' rights to education, repair, and adaptation for persons with disabilities.

Unfit for purpose

The new law does contain language that seems to protect these activities, but that language is deceptive, as the law demands that Mexicans satisfy unattainable conditions and subject themselves to vague promises, with dire consequences for getting it wrong. There are four ways in which these exemptions are unfit for purpose:

  1. Sole Purpose. The exemptions specify that one must act for the "sole purpose" of the exempted activity — a security researcher must be investigating a device for the sole purpose of fixing its defects, but arguably not to advance the state of security research in general, or to protect the privacy and autonomy of users of a computer, system, or network in ways that conflict with what the manufacturer would view as the security of the device.
  2. Noncommercial. The exemptions also frequently cover only "noncommercial" actors, implying that you can only modify a system if you can do so yourself, or if you can find someone else to do it for free. If you are blind and want to convert an ebook so that you can read it with your screenreader, you have to write the code yourself or find a volunteer who'll do it for you — you can't pay someone else to do the work.
  3. Good faith. The exemptions frequently require that anyone who uses them be acting in "good faith," an imprecise term that can become a matter of opinion when corporate interests conflict with those of researchers. If a judge doesn't believe you were acting in good faith, you could face both fines and criminal sanctions.
  4. No tools. Even if you are confident that you are acting for the sole purpose of exercising an exemption, noncommercially and in good faith, you are still stuck: while the statute recognizes in general terms that there could be a process to create further exemptions for people who bypass digital locks, it provides no similar process for those who make the tools those people would need.

The defects in the Mexican law are largely present in the US law from which they were copied. It's telling that no US defendant has ever successfully used any of the statutory exemptions, not in 22 years. Indeed, the US Copyright Office has repeatedly affirmed that these exemptions do not adequately protect legitimate conduct with the clarity that would be required for them to be effective.

Education

The US experience reveals the ways that badly drafted copyright law can interfere with education:

  • Educational materials are removed from the Internet due to incorrect or fraudulent copyright claims, without warning, leaving teachers who relied on those materials with holes in their curriculum;
  • Educational materials are automatically removed from the Internet due to copyright filter errors, also stranding teachers with missing curricular materials; and
  • Educators cannot make lawful use of the materials purchased for their students because they are blocked by TPMs that they are legally prohibited from bypassing.

Right to Repair

Increasingly, dominant firms have used control over repair as a source of undeserved monopoly profits. By controlling repair, firms can not only force customers to pay higher prices for repairs and to use more expensive, more profitable original parts -- they can also force customers to discard their devices and buy new ones, by declaring them to be beyond repair.

Enacting legal penalties for bypassing TPMs is a gift to any company seeking to control repairs. Companies use TPMs so that even after the correct part is installed, the device refuses to work unless a company technician inputs an unlock code.
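
As a concrete illustration of the mechanism, "parts pairing" looks roughly like the sketch below. This is our own illustration with invented names; real devices typically use public-key signatures, so the firmware holds only a verification key rather than the vendor's signing secret.

    import hashlib
    import hmac

    VENDOR_SECRET = b"held only by the manufacturer"  # never given to repair shops

    def unlock_code(part_serial: str) -> str:
        """Only the vendor can compute this, so only a vendor technician
        can 'bless' a replacement part."""
        return hmac.new(VENDOR_SECRET, part_serial.encode(),
                        hashlib.sha256).hexdigest()

    def firmware_accepts_part(part_serial: str, code_entered: str) -> bool:
        # The check proves nothing about whether the part is safe or
        # genuine; it only proves who performed the repair.
        return hmac.compare_digest(code_entered, unlock_code(part_serial))

    # An independent shop installs a correct, working part...
    print(firmware_accepts_part("SCREEN-1234", ""))  # False: device rejects it
    # ...and only someone holding the vendor's secret can enable it.
    print(firmware_accepts_part("SCREEN-1234", unlock_code("SCREEN-1234")))  # True

Bypassing a check like this is precisely the kind of TPM circumvention the new law punishes, which is how anti-circumvention rules hand repair monopolies to manufacturers.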

Disturbingly, this conduct has spread to the world of medical devices, where multinational corporations use TPMs to prevent repairs to ventilators.

At the forefront of the Right to Repair movement are farmers, who must contend with both remote locations (far from the authorized technicians) and urgent timescales (you need to get your crop in before the storm hits, even if the authorized technician can't make it out before then).

During the global pandemic, many of us are living under conditions familiar to farmers, dangling at the end of a long, slow, unreliable supply chain and confronted by urgent needs.

Technology is primarily designed in the global north, by engineers and product specialists whose lives are very different from those of people in the global south. Mexican people have long relied on their own ingenuity and technical mastery to modify, repair and adapt systems built by distant people in foreign lands to suit their own lived experience in their own land.

Mexican law does not provide any clear protection for repairs that require access to or use of copyrighted works.

Repair is a vital part of self-determination, and the Mexican copyright law puts the interests of monopolistic, rent-seeking foreign companies ahead of the rights of Mexican people to decide how they will use their own property.

Adaptation and Disability

Nowhere is the need for technological self-determination more keenly felt than when it involves people with disabilities.

A rallying cry of the disability movement is "nothing about us without us" -- meaning, among other things, that each person with a disability should have the final say about how their technology works.

The creation of assistive adaptations by and with people with disabilities has been a boon for everyone: the principle of "universal design" — design that enables every body and every mind to participate fully in life — means that all of us benefit, whether that's using closed captions to watch a video in a noisy environment or to learn a foreign language; or using screen magnifiers to read small or low-contrast text.

Digital technology holds the promise of incredible advances in universal design: automated caption-generation and scene description, adaptive systems that anticipate a user's intention based on statistical analysis of their historic usage, predictive text input, and more. Some of these adaptations will come from original manufacturers, but many will come from the community of those using the technology.

People with disabilities should face no conditions on how they adapt their technology or whom they choose to work with to make adaptations on their behalf. None. Period.

People with disabilities do not each necessarily have the technical knowledge to modify their own devices, by themselves, to suit their needs. This is why the exemption for people with disabilities in the Mexican law is wholly inadequate. It precludes hiring someone else to effect a modification (that would be "commercial activity"), and it forecloses the general-purpose research that helps with adaptation, because no one is allowed to provide technology or services that aid in bypassing TPMs to adapt technology.

Under the Mexican law, the way that, say, a blind person is permitted to make a work accessible is to:

  1. become a cybersecurity expert;
  2. discover a defect in the e-reader software;
  3. write a piece of software to liberate the ebook they want to read.

No one is allowed to offer them technical assistance, and they may not share their accomplishment to help others. It would be a joke if it weren't so grimly unfunny.

There can be no question that all of this is intentional, or at best the product of extreme negligence. Not only did Mexico's Congress have the benefit of 22 years' worth of documented problems with the US version of this law, they also had an easy remedy for those problems. All they had to do was say, "You are allowed to bypass a TPM provided that you are not violating someone's copyright." That's it. Rather than larding their exemptions with unattainable and vague conditions, Mexico's lawmakers could have articulated a crisp, bright-line rule that anyone could follow: don't bypass TPMs in a way that's connected to copyright infringement, and you're fine.

They didn't.

If you are based in Mexico, we urge you to participate in R3D's campaign "Ni Censura ni Candados" and send a letter to Mexico's National Commission for Human Rights asking it to invalidate this flawed new copyright law. R3D will ask for your name, email address, and your comment, which will be subject to R3D's privacy policy.


Turkey's New Internet Law Is the Worst Version of Germany's NetzDG Yet

For years, free speech and press freedoms have been under attack in Turkey. The country has the distinction of being the world’s largest jailer of journalists and has in recent years been cracking down on online speech. Now a new law, passed by the Turkish Parliament on the 29th of July, introduces sweeping new powers and takes the country another giant step toward further censoring speech online. The law was ushered through parliament quickly, without allowing for opposition or stakeholder input, and aims for complete control over social media platforms and the speech they host. The bill was introduced after a series of allegedly insulting tweets aimed at President Erdogan’s daughter and son-in-law, and ostensibly seeks to eradicate hate speech and harassment online. Turkish lawyer and Vice President of the Ankara Bar Association’s IT, Technology & Law Council Gülşah Deniz-Atalar called the law "an attempt to initiate censorship to erase social memory on digital spaces."

Once ratified by President Erdogan, the law will require social media platforms with more than a million daily users to appoint a local representative in Turkey, a step activists fear will enable the government to conduct even more censorship and surveillance. Failure to do so could result in advertising bans, steep fines, and, most troublingly, bandwidth reductions. Shockingly, the legislation introduces new powers for courts to order Internet providers to throttle social media platforms’ bandwidth by up to 90%, practically blocking access to those sites. Local representatives would be tasked with responding to government requests to block or take down content. The law foresees that companies would be required to remove content that allegedly violates “personal rights” and the “privacy of personal life” within 48 hours of receiving a court order or face heavy fines. It also includes provisions that would require social media platforms to store users’ data locally, prompting fears that providers would be obliged to hand those data over to the authorities, which experts expect to aggravate the already rampant self-censorship of Turkish social media users.

While Turkey has a long history of Internet censorship, with several hundred thousand websites currently blocked, this new law would establish unprecedented government control over online speech. When introducing the new law, Turkish lawmakers explicitly referred to the controversial German NetzDG law and a similar initiative in France as positive examples.

Germany’s Network Enforcement Act, or NetzDG for short, claims to tackle “hate speech” and illegal content on social networks; it passed into law in 2017 and has been tightened twice since. Hastily passed amidst vocal criticism from lawmakers, academics and civil society experts, the law requires social media platforms with one million users to name a local representative authorized to act as a focal point for law enforcement and to receive content takedown requests from public authorities. It requires social media companies with more than two million German users to remove or disable content that appears to be “manifestly illegal” within 24 hours of being alerted to it. The law has been heavily criticized in Germany and abroad, and experts have suggested that it interferes with the EU’s central Internet regulation, the e-Commerce Directive. Critics have also pointed out that the strict time window for removing content does not allow for a balanced legal analysis. Evidence is indeed mounting that NetzDG’s conferral of policing powers on private companies continually leads to takedowns of innocuous posts, thereby undermining freedom of expression.

A successful German export

Since its introduction, NetzDG has been a true Exportschlager, or export success, as it has inspired a number of similarly harmful laws in jurisdictions around the globe. A recent study reports that at least thirteen countries, including Venezuela, Australia, Russia, India, Kenya, the Philippines, and Malaysia have proposed or enacted laws based on the regulatory structure of NetzDG since it entered into force. 

In Russia, a 2017 law that closely resembles the German model encourages users to report allegedly “unlawful” content and requires social media platforms with more than two million users to take down the content in question as well as any re-posts. Russia’s copy-pasting of Germany’s NetzDG confirmed critics’ worst fears: that the law would serve as a model and a legitimization for autocratic governments to censor online speech.

Recent Malaysian and Philippine laws aimed at tackling “fake news” and misinformation also explicitly refer to NetzDG, even though NetzDG’s scope does not extend to cover misinformation. Both countries adopted NetzDG’s model of imposing steep fines (and, in the case of the Philippines, up to 20 years of imprisonment) on social media platforms that fail to remove content swiftly.

In Venezuela, another 2017 law that expressly refers to NetzDG takes its logic one step further by imposing a six-hour window for the removal of content considered to be “hate speech”. The Venezuelan law—which relies on weak definitions and a very broad scope, and was also legitimized by invoking the German initiative—is a potent and flexible tool for the country’s government to oppress dissidents.

Singapore is yet another country that took inspiration from Germany’s NetzDG: in May 2019, it adopted the Protection from Online Falsehoods and Manipulation Bill, which empowers the government to order platforms to correct or disable content, accompanied by significant fines if the platform fails to comply. A government report preceding the introduction of the law explicitly references the German law.

Like these examples, the recently adopted Turkish law shows clear parallels with the German approach: targeting platforms of a certain size, it incentivizes them to carry out takedown requests by threatening significant fines, thereby turning platforms into the ultimate gatekeepers tasked with deciding the legality of online speech. In important ways, however, the Turkish law goes well beyond NetzDG, since its scope covers not only social media platforms but also news sites. In combination with its exorbitant fines and the threat of blocking access to websites, the law enables the Turkish government to erase any dissent, criticism or resistance.

Even worse than NetzDG

But the fact that the Turkish law goes even beyond NetzDG highlights the danger of exporting Germany’s flawed law internationally. When Germany passed the law in 2017, states around the world were growing increasingly interested in regulating alleged and real online threats, ranging from hate speech to illegal content and cyberbullying. Already problematic in Germany, where it is at least embedded in a functioning legal system with appropriate checks and balances and equipped with safeguards absent from the laws it inspired, NetzDG has served to legitimize draconian censorship legislation across the globe. While it is always bad when flawed laws are copied elsewhere, it is particularly problematic in authoritarian states that have already pushed for and implemented severe censorship and restrictions on free speech and the freedom of the press. The anti-free-speech tendencies of countries like Turkey, Russia, Venezuela, Singapore and the Philippines long predate NetzDG, but the German law surely provides legitimacy for them to further erode fundamental rights online.


Court Denies EFF, ACLU Effort to Unseal Ruling Rejecting DOJ Effort to Break Encryption

A federal appeals court last week refused to unseal a court order that reportedly stopped the Justice Department from forcing Facebook to break the encryption it offers to users of its Messenger application.

The unpublished decision ends an effort by EFF, ACLU, and Stanford cybersecurity scholar Riana Pfefferkorn to unseal the 2018 ruling from a Fresno, California federal court. The ruling denied an attempt by the Justice Department to hold Facebook in contempt for refusing to decrypt Messenger voice calls. Despite the fact that the ruling has significant implications for Internet users’ security and privacy—and that the only public details about the case come from media reports—the U.S. Court of Appeals for the Ninth Circuit upheld an earlier decision by the trial court that the public had no right to access the court decision or related records.

As we argued, unsealing records in the case is especially important because the public deserves to know when law enforcement tries to compel a company that hosts massive amounts of private communications to circumvent its own security features and hand over users’ private data. The Washington Post also filed a motion to unseal the court order.

The Ninth Circuit ruled that the public has no First Amendment right to access the court records because the documents are part of an ongoing federal law enforcement investigation and that they “have not historically been open to the general public during an investigation.” The court declined to consider whether we had a similar right to access the records under the common law.

EFF is disappointed in the Ninth Circuit’s ruling, which does not discuss, much less analyze, the countervailing public interest in knowing about the limits the law places on government efforts to compromise the digital security and privacy of millions of Facebook Messenger users. The ruling also fails to explain why a redacted version of the court opinion could not address law enforcement’s concerns while still giving the public important information about the government’s efforts to undermine Internet users’ security.

We thank the ACLU, Pfefferkorn, and The Washington Post for working together to shed light on this important case. We are also grateful for the supportive friend-of-the-court briefs filed by the Reporters Committee for Freedom of the Press, Mozilla and Atlassian, Upturn and several security experts, and former federal magistrate judges.

Related Cases: EFF, ACLU v. DOJ - Facebook Messenger unsealing

Why EFF Doesn’t Support California Proposition 24

This November, Californians will be called upon to vote on a ballot initiative called the California Privacy Rights Act, or Proposition 24. EFF does not support it; nor does EFF oppose it.

EFF works across the country to enact and defend laws that empower technology users to control how businesses process their personal information. The best consumer data privacy laws require businesses to get consumers’ opt-in consent before processing their data; bar data processing except as necessary to give consumers what they asked for (often called “data minimization”); forbid “pay for privacy” schemes that pressure all consumers, and especially those with lower incomes, to surrender their privacy rights; and let consumers sue businesses that break these rules. In California, we’ve worked with other privacy advocates to try to pass these kinds of strengthening amendments to our existing California Consumer Privacy Act (CCPA).

Prop 24 does not do enough to advance the data privacy of California consumers. It is a mixed bag of partial steps backwards and forwards. It includes some but not most of the strengthening amendments urged by privacy advocates. This post addresses some of the provisions in this 52-page ballot initiative, and some missed opportunities.

More compulsion to pay for our privacy

Prop 24 would expand “pay for privacy” schemes. Specifically, the initiative would exempt “loyalty clubs” from the CCPA’s existing limit on businesses charging different prices to consumers who exercise their privacy rights. See Sec. 125(a)(3). This change would allow a business to withhold a discount from a consumer unless the consumer lets the business harvest granular data about their shopping habits and then profit from disclosing that data to other businesses. The initiative would also expand an existing CCPA loophole (allowing “financial incentives” for certain data processing) from just the “sale” of such data to its “sharing” as well.

Unfortunately, pay-for-privacy schemes pressure all Californians to surrender their privacy rights. Worse, because of our society’s glaring economic inequalities, these schemes will unjustly lead to a society of privacy “haves” and “have-nots.”

A missed opportunity on privacy-preserving defaults

EFF advocates for an opt-in model of data processing, where businesses cannot collect, use, share, or store our information without first getting our explicit consent. This makes privacy the default option. Studies show that defaults matter, because most people don’t change the settings of their devices and apps. Privacy should be the default, particularly when it comes to ensuring consumers have control over how their information flows into a complicated data ecosystem.

The CCPA, while an important law, places the burden on consumers to opt out of the retention and sale of their information. Most people will never do this, which allows businesses to continue to retain and sell their data even though many of those people do not want them to.

Now is the time to flip the default, and thus ensure strong privacy protection. Prop 24 misses an opportunity to do so.

A half-step on data minimization

Prop 24’s data minimization rule is only a partial step forward. Businesses must be prohibited from collecting a consumer’s personal information beyond what is necessary to provide the consumer the good or service they requested. That was the approach in this year’s California A.B. 3119 (Asm. Wicks), which the privacy coalition supported.

But Prop 24 uses the wrong yardstick: instead of looking to the consumer’s own expectations, Prop 24 looks instead to the business’ purposes. See Sec. 100(c). Worse, a business can even expand its processing to “another disclosed purpose [of the business] that is compatible with the context.” A business’ privacy policy might disclose a vast number of purposes, many of which the business might deem compatible with the context.

Because the initiative’s minimization rule uses the standard of what a business expects rather than what consumers expect, Californians will be surprised by how companies continue to process their information—running counter to the goals of true data minimization.

Erosion of the right to delete

Prop 24 would expand the power of a business to refuse a consumer’s request to delete their data. Specifically, a business could refuse when it believes retention would “help to ensure security and integrity,” see Sec. 125(d)(2), broadly defined to include the ability of an information system to detect security incidents that compromise data, see Sec. 140(ac). Businesses may argue this allows retention of great volumes of consumer data, despite deletion requests, in the name of detecting adtech fraud.

Moreover, the initiative would diminish a business’ duty to transmit a consumer’s deletion request to downstream entities who got that consumer’s data from that business. Specifically, a business could refuse if doing so required “disproportionate effort.” See Sec. 105(c)(1). Yet it would be highly burdensome for a consumer to identify these downstream entities and then send them additional deletion requests.

Weaker biometric privacy

Prop 24 would end CCPA protection of biometric information (such as DNA or faceprints), when the business processing such information does not use it to establish an individual’s identity or intend to do so. See Sec. 140(c). A business might later change course and use that same biometric information to establish an individual’s identity, at which point CCPA would apply, but the unregulated processing would already have occurred.

More mixing of data

Prop 24 would expand the power of service providers (which process data for businesses) to combine sets of consumer data that they obtain from different businesses or directly from consumers. Specifically, a service provider could do so for “any business purpose” that is later defined by regulations. See Secs. 140(ag)(1) & 185(e)(10). While this power-to-combine cannot extend to advertising to consumers who opt-out, see Sec. 140(e)(6), many consumers will not opt-out, and even as to them, combined data sets can be used for many other purposes.

No enforcement by consumers

Prop 24 does not empower consumers to sue businesses that violate their privacy rights. Without effective enforcement, a law is just a piece of paper. It is not enough to authorize a government agency to enforce the law, whether it is a unit of the California Attorney General’s Office (as currently under CCPA), or a new freestanding data protection agency (as proposed by Prop 24). No agency will have sufficient resources to enforce all violations of a law, and every agency is at risk of excessive influence by businesses over enforcement decisions.

Consumers need a private right of action, so they can do the job when regulators can’t—or won’t. That’s why many federal bills on consumer data privacy include a private right of action.

Other half-steps

Some provisions of Prop 24 are partial steps forward, so we don’t oppose the initiative outright, but we don’t support it either, because the forward steps are only partial, and must be weighed against the backward steps and missed opportunities. For example:

  • There is a new right to opt out of certain uses of what Prop 24 calls “sensitive” personal information, see Sec. 121, but lots of unprotected data is also highly sensitive (such as immigration status and familial relationships), and the privacy-protection default should be opt-in, not opt-out.
  • There is a new right to opt out of what Prop 24 calls data “sharing,” see Sec. 120(a), and a new limit on data “sharing” by third parties, see Sec. 115(d), but Prop 24 restricts these new “sharing” rules to data used for cross-context behavioral ads, see Sec. 140(ah).
  • While EFF supports laws requiring businesses to comply with “Do Not Track” and similar browser signals sent by consumers, Prop 24 gives each business the unilateral choice whether to comply with what the initiative calls “opt-out preference signals,” or instead to comply with CCPA’s existing mandate to post a “Do Not Sell” link on its website. See Sec. 135(b). Strong privacy protection would require all businesses both to comply with user opt-out signals and to post a “do not sell” link on their websites (see the sketch after this list).
  • There is a small expansion of CCPA’s private right of action for data breaches, see Sec. 150(a)(1), and removal of the notice-and-cure obstacle to Attorney General enforcement, see Sec. 155(b), but Prop 24 leaves consumers powerless to enforce almost all of its safeguards. Again, all privacy safeguards need enforcement with a robust private right of action. Notably, the original data privacy ballot initiative in 2018 had a private right of action, but this enforcement measure was excised as part of the compromise that led to legislative enactment of the CCPA.
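
For comparison, honoring such a signal is technically trivial. The sketch below is our own illustration, not anything Prop 24 mandates: “DNT” is the long-standing Do Not Track request header, which the browser sends automatically with every request, and the server and handler names are invented.

    from http.server import BaseHTTPRequestHandler, HTTPServer

    def user_opted_out(headers) -> bool:
        """Respect the browser's signal; the consumer configures nothing
        per-site, which is the whole point of a universal opt-out."""
        return headers.get("DNT") == "1"

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(200)
            self.end_headers()
            if user_opted_out(self.headers):
                # A compliant business skips its "sale"/"sharing" code path.
                self.wfile.write(b"no data sold or shared for this request\n")
            else:
                self.wfile.write(b"default (trackable) response\n")

    if __name__ == "__main__":
        HTTPServer(("localhost", 8080), Handler).serve_forever()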

Conclusion

EFF will continue to work with other privacy advocates to pass new consumer data protections in California and across the country. But we won’t be supporting Prop 24.


When the U.S. Patent Office Won’t Do Its Job, Congress Should Step In

When people get sued by patent trolls, they can fight back in one of two places: a U.S. district court or the Patent and Trademark Office. But the Patent Office is putting its thumb on the scale again, in favor of patent owners and against technology users. This time, the Office is relying on specious legal arguments to shut down patent reviews at the Patent Trial and Appeal Board (PTAB).

The procedure being undermined is called inter partes review, or IPR. Congress created IPRs in 2012 as a faster and less expensive way of resolving patent disputes than district court litigation. Since then, they have become an important part of maintaining the patent system. Many patents (especially software patents) are granted after woefully inadequate examinations and are ultimately invalidated when challenged in court. Given that, it makes sense to allow the U.S. Patent and Trademark Office to take a second look at the patents it has handed out. The Patent Office granted more than 350,000 patents last year, and the median examiner review time is less than 20 hours. Mistakes happen. When users or small businesses are accused of patent infringement, they shouldn’t go broke trying to defend themselves in expensive court litigation.

The IPR system has proven effective. In the years it has operated, PTAB trials have invalidated more than 1,900 patents altogether, and in several hundred more cases patent owners lost at least some of their patent claims.

Unfortunately, a series of recent decisions is choking off access to the IPR process. The PTAB is now denying petitions on procedural grounds, to avoid carrying out the job that Congress tasked it with.

The Patent Office is halting IPR proceedings whenever a district court case involving the same patent is already underway. But that’s the opposite of what Congress told the Patent Office to do: review any patent that is likely invalid whenever it is presented with a timely petition.

Thanks to these “discretionary denials,” questionable patents go un-reviewed. What’s more, the defendants in these patent cases don’t get a second chance—the law creating IPR makes decisions to deny review final and unappealable.

When IPR isn’t an option, more patent owners can demand bigger settlement payments—effectively, as a ransom for avoiding the costs of litigating a case through discovery and trial. The time and cost it takes to get a patent reviewed are precisely why Congress mandated the IPR procedure in the first place. Now, as the Patent Office denies more IPRs, we’re seeing the same problems that were so acute back in 2012: forum shopping and gamesmanship in patent litigation over patents that should never have been granted in the first place.

Patent owners are pushing aggressive trial schedules in courts, then bringing those timetables to the Patent Office, as an excuse to insist that PTAB should not review their patents. That’s already led to an uptick in lawsuits in “rocket dockets” like the Western District of Texas and the Eastern District of Texas. Those two districts now account for an estimated 45 percent of cases filed by “non-practicing entities,” or patent trolls. Discretionary denials of IPRs are actually making district court litigation more onerous for defendants.

When a government agency like USPTO isn’t doing its job, Congress needs to step in and exercise oversight. That’s why we’ve signed letters to the House [PDF] and Senate [PDF] leaders on the judiciary and IP committees, asking them to stop patent owners’ gamesmanship of the IPR process. Together with Engine Advocacy, a group representing startups, and several other groups supporting patent reform, we’re asking lawmakers to get involved and make sure that IPR “can live up to Congress’s intent of providing a meaningful, low-cost alternative to litigation and promoting patent quality.”  


Mexico's New Copyright Law: Cybersecurity and Human Rights

This month, Mexico rushed through a new, expansive copyright law without adequate debate or consultation, and as a result, it adopted a national rule that is absolutely unfit for purpose, with grave implications for human rights and cybersecurity.

The new law was passed as part of the country's obligations under Donald Trump's United States-Mexico-Canada Agreement (USMCA), and it imports the US copyright system wholesale, and then erases the USA’s own weak safeguards for fundamental rights.

Central to the cybersecurity issue is Article 114 Bis, which establishes a new kind of protection for "Technical Protection Measures" (TPMs). These include rightsholder technologies commonly known as Digital Rights Management (DRM), but also basic encryption and other security measures that prevent access to copyrighted software. These are the familiar, dreaded locks that stop you from refilling your printer's ink cartridge, using an unofficial app store with your phone or game console, or watching a DVD from overseas in your home DVD player. Sometimes there is a legitimate security purpose to restricting who can modify the software in a device, but when you, the owner of the device, aren’t allowed to do so, serious problems arise, and you become less able to ensure your device's security.

Under the US system, it is an offense to bypass these TPMs when they control access to a copyrighted work, even when no copyright infringement takes place. If you have to remove a TPM to modify your printer to accept third-party ink or your car to accept a new engine part, you do not violate copyright — but you still violate this extension of copyright law.

Unsurprisingly, manufacturers have aggressively adopted TPMs because these allow them to control both their customers and their competitors. A company whose phone or game console is locked to a single, official App Store can monopolize the market for software for their products, skimming a percentage from every app sold to every owner of that device.

Customers cannot lawfully remove the TPM to use a third-party app-store, and competitors can't offer them the tools to unlock their devices. "Trafficking" in these tools is a crime in the USA, punishable by a five-year prison sentence and a $500,000 fine.

But the temptation to use a TPM isn't limited to controlling customers and competitors: companies that use TPMs also get to decide who can reveal the defects in their products.

Computer programs inevitably have bugs, and some of these bugs present terrible cybersecurity risks. Security defects allow hackers to remotely take over your car and drive it off the road, alter the ballot counts in elections, wirelessly direct your medical implants to kill you, or stalk and terrorize people.

The only reliable way to discover these defects before they can be weaponized is to subject products and systems to independent scrutiny. As the renowned security expert Bruce Schneier says, "Anyone can design a security system that works so well they can't think of a way to defeat it. That doesn't mean it works, that just means it works against people stupider than them."

Independent security research is incompatible with laws protecting TPMs. In order to investigate systems and report on their defects, security researchers must be free to bypass TPMs, extract the software from the device, and subject it to testing and analysis.

When security researchers do discover defects, it's common for companies to deny that they exist, or that they are important, painting the matter as a "he said/she said" dispute.

But these disputes have a simple resolution: security researchers routinely publish "proof of concept" code that allows anyone to independently verify their findings. This is simple scientific best practice: since the Enlightenment, scientists have published their findings and invited others to replicate them, a process that is at the core of the Scientific Method.

Section 1201 of the US Digital Millennium Copyright Act (DMCA 1201) defines a process for resolving disputes between TPMs and fundamental human rights. Every three years, the US Copyright Office hears petitions from people whose fundamental rights have been compromised by the TPM law, and grants exemptions to it.

The US government has repeatedly acknowledged that TPMs interfere with security research and granted explicit exemptions to the TPM rule for security research. These exemptions are weak (the US statute does not give the Copyright Office authority to authorize security researchers to publish proof-of-concept code), but they still provide much-needed assurance for researchers attempting to warn us that we are in danger from our devices. When powerful corporations threaten security researchers in attempts to silence them, the Copyright Office's exemptions can give them the courage to publish anyway, protecting all of us.

The US exemptions process is weak and inadequate. The Mexican version of this process is even weaker, and even more inadequate (the law doesn't even bother to define how it will work, and merely suggests that some process will be created in the future).

Article 114 Quater (I) of Mexico's law does contain a vague offer of protection for security research, similar to an equally vague assurance in the DMCA. The DMCA has been US law for 22 years, and in all that time, no one has ever used this clause to defend themselves.

To understand why, it is useful to examine the text of the Mexican law. Under the Mexican law, security researchers are only protected if their "sole purpose" is "testing, investigating or correcting the security of that computer, computer system or network." It is rare for a security researcher to have only one purpose: they want to provide the knowledge they glean to the necessary parties so that security flaws do not harm any of the users of similar technology. They may also want to protect the privacy and autonomy of users of a computer, system, or network in ways that conflict with what the manufacturer would view as the security of the device.

Likewise, the Mexican law requires that security researchers be operating in "good faith," creating unquantifiable risk. Researchers often disagree with manufacturers about the appropriate way to investigate and disclose security vulnerabilities. The vague statutory provision for security testing in the United States was far too unreliable to successfully foster essential security research, something that even the US Copyright Office has now repeatedly acknowledged.

The bottom line: our devices cannot be made more secure if independent researchers are prohibited from auditing them. The Mexican law will deter this activity. It will make Mexicans less secure.

Cybersecurity is intimately bound up with human rights. Insecure voting machines can compromise elections, and even when they are not hacked, the presence of insecurities robs elections of legitimacy, leading to civic chaos.

Civil society groups engaged in democratic political activity around the world have been attacked by commercial malware that uses security defects to invade their devices, subjecting them to illegal surveillance, kidnapping, torture, and even murder.

One such product, the NSO Group's Pegasus malware, was implicated in the murder of Jamal Khashoggi. That same tool was used to target Mexican investigative journalists, human rights defenders, even Mexican children whose parents were investigative journalists.

Defects in our devices expose us to politically motivated surveillance, but they also expose us to risk from organized criminals: "stalkerware," for example, can enable human traffickers to monitor their victims.

Digital rights are human rights. Without the ability to secure our devices, we cannot fully enjoy our family, civic, political, or social lives.

If you are based in Mexico, we urge you to participate in R3D's campaign "Ni Censura ni Candados" and send a letter to Mexico's National Commission for Human Rights asking it to invalidate this flawed new copyright law. R3D will ask for your name, email address, and your comment, which will be subject to R3D's privacy policy.


Using Music in Podcasts: The GEMA License and Its Alternatives

iRights.info - 29 July 2020 - 7:16pm

Anyone who wants to use music in a podcast must obtain the rights to do so. In many cases, that means paying fees to GEMA. Alternatively, GEMA-free and Creative Commons-licensed works are available. We explain what to keep in mind and what costs can arise.

"Podcast" is a so-called portmanteau, artificially assembled from iPod and broadcast. The term also stands for a shift in media consumption: radio-like programs can be stored on a mobile device, such as a smartphone, and played back on demand.

Some podcasters want to use copyrighted music alongside their own spoken recordings. As a rule, this requires permission from the authors or rightsholders.

This is exactly where collecting societies come in: they offer these permissions in the form of licenses and collect the fees. For this to work, authors and rightsholders must have joined a collecting society, such as GEMA, and commissioned it to act in their name for certain uses.

How does GEMA work, and what is its new podcast tariff?

As Germany's largest collecting society, GEMA represents the rights of more than 75,000 composers, lyricists, and publishers (as of 2019). The totality of the rights that members transfer to GEMA for administration is called the "GEMA repertoire."

Use of the GEMA repertoire can be licensed in many situations, for example when a Rolling Stones cover band performs at a public street festival, or when a music label releases a compilation of other artists' tracks.

Until now, using music from the GEMA repertoire in podcasts was not readily possible, owing to the complicated legal situation and the lack of licensing options. But GEMA recently began offering dedicated tariffs for podcasts.

What GEMA means by "podcast" and "licensee"

GEMA files its podcast license under the designation "VR-OD 14." By "podcast" it means an audio file in which spoken content is in the foreground and which appears in episodic form, that is, regularly and as a series.

The series element matters because the collecting society treats the individual episodes of a podcast as a connected whole: it licenses and bills the podcast "in its entirety," and it does so with the "licensee."

A licensee can be a person or company performing any of several roles. If the episodes of a podcast are offered through a decentralized service, such as an RSS feed or an automatic podcast download service (a so-called "podcatcher"), then whoever makes the podcast available is the licensee, and thus GEMA's contractual partner, the one who gets the bill.

If, on the other hand, the podcast is made available exclusively through a single service, such as Spotify or iTunes, so that it appears there and nowhere else, then that service provider is the licensee.

Whether a podcast can be saved via download or is only available as a pure stream with no option to save is irrelevant; according to the tariff, this plays no role.

What the license costs for private podcasters

GEMA offers music licenses for podcasts in three tiers. The decisive criterion is the number of monthly retrievals of a podcast.

Here too, the podcast counts as a whole: the retrievals of all its episodes are added together. If monthly retrievals stay below 50,000, a flat fee applies, based on the number of music minutes used.

This tariff is therefore aimed primarily at small podcasts with limited reach, typically run by private individuals.

The number of music minutes is what matters. GEMA distinguishes between usage scenarios: one applies when text is spoken over the music, so that the music recedes into the background. In that case, the music's length is counted at half. Music that plays freely, without simultaneous speech, is counted at its full playing length.

An example:

A podcast has four episodes at the time of registration. Overall, little music is used: per episode, one and a half minutes of music without speech, plus a 30-second jingle at the beginning and at the end, each with a spoken greeting or sign-off over it. That adds up to two countable music minutes per episode.

Taken as a whole, the podcast's four episodes with two music minutes each use eight minutes of music. At up to 10,000 monthly retrievals, those eight music minutes would cost 40 EUR. If monthly retrievals rise to as many as 40,000, the eight music minutes would cost 160 EUR.
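To make the arithmetic concrete, here is a minimal sketch of that flat-rate calculation in Python. It assumes the per-minute scale implied by the example (and by GEMA's rule of thumb quoted further below: 5 EUR per started music minute per 10,000 monthly retrievals) plus the halving rule for music under speech; the actual tariff tables are authoritative.

```python
import math

EUR_PER_MINUTE_PER_10K = 5  # assumed from GEMA's rule of thumb (quoted below)

def countable_music_minutes(free_seconds: float, under_speech_seconds: float) -> float:
    """Music with speech spoken over it counts at half its length."""
    return (free_seconds + under_speech_seconds / 2) / 60

def flat_rate_fee(music_minutes: float, monthly_retrievals: int) -> int:
    """Started music minutes times started blocks of 10,000 monthly retrievals."""
    minutes = math.ceil(music_minutes)
    tiers = math.ceil(monthly_retrievals / 10_000)
    return minutes * EUR_PER_MINUTE_PER_10K * tiers

# The example above: 4 episodes, each with 1.5 minutes of free-standing music
# plus two 30-second jingles with speech over them (counted at half length).
per_episode = countable_music_minutes(free_seconds=90, under_speech_seconds=60)  # 2.0
total_minutes = 4 * per_episode                                                  # 8.0

print(flat_rate_fee(total_minutes, 10_000))  # -> 40 (EUR)
print(flat_rate_fee(total_minutes, 40_000))  # -> 160 (EUR)
```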

The license for professional and semi-professional podcasters

The calculation differs once a podcast exceeds 50,000 monthly retrievals. Here GEMA determines whether a percentage-based fee or a flat fee yields the higher amount, and licenses the more expensive variant.

The model is best explained with two example cases:

Example: percentage-based fee

Here, the monthly fee is calculated "as a percentage share of podcast-related revenue." That means 15 percent of the revenue generated from the business around the podcast must be paid to GEMA.

"Revenue" in GEMA's sense is defined broadly: the collecting society takes it to include payments from "advertising, sponsoring, donations, as well as barter, compensation, or gift transactions, end-user fees, and separately financed or invoiced monetary benefits and considerations, such as transmission and provisioning fees."

In addition, the ratio of music to speech is taken into account to the second and factored into the calculation (the so-called "pro rata" model). A 60-minute podcast containing 20 minutes of music and earning 10,000 EUR in revenue would thus owe around 500 EUR in GEMA license fees.

The percentage-based fee applies only if the calculated amount would be higher than the flat fee; in practice, that should only be the case for podcasts with a fairly high share of music and, on top of that, high revenue.

Example: flat fee

The flat fee again pegs the licensing costs to the podcast's retrieval numbers and the music minutes used. But here, started music minutes count: if 4 minutes and 10 seconds of music are played, fees are due for a full 5 minutes.

Prices are accordingly tiered by music minutes; the length of the spoken parts is irrelevant. As a rule of thumb: per 10,000 retrievals, each started music minute costs 5 EUR. For orientation, GEMA's tariff also includes sample calculations.
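Read together with the percentage model above, the tariff's logic is: compute both variants and bill the more expensive one. A rough sketch under the same assumptions (the 15 percent pro-rata formula and the 5 EUR rule of thumb); the published tariff and its sample calculations remain authoritative.

```python
import math

def pro_rata_fee(revenue_eur: float, music_seconds: float, total_seconds: float) -> float:
    """15% of podcast-related revenue, scaled by the music share, to the second."""
    return 0.15 * revenue_eur * (music_seconds / total_seconds)

def flat_fee(music_minutes: float, monthly_retrievals: int) -> float:
    """Rule of thumb: 5 EUR per started music minute per 10,000 retrievals."""
    return math.ceil(music_minutes) * 5 * math.ceil(monthly_retrievals / 10_000)

def license_fee(revenue_eur, music_seconds, total_seconds, monthly_retrievals):
    """Above 50,000 monthly retrievals, GEMA licenses the pricier variant."""
    return max(pro_rata_fee(revenue_eur, music_seconds, total_seconds),
               flat_fee(music_seconds / 60, monthly_retrievals))

# The worked example above: 60-minute podcast, 20 minutes of music, 10,000 EUR revenue.
print(pro_rata_fee(10_000, 20 * 60, 60 * 60))         # -> 500.0 (EUR)
# At 60,000 monthly retrievals the flat fee (20 * 5 * 6 = 600 EUR) is higher and wins:
print(license_fee(10_000, 20 * 60, 60 * 60, 60_000))  # -> 600.0 (EUR)
```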

Before publishing: clear the license rights with GEMA

Besides the fee, the timing of the license matters. According to the tariff, the usage rights only count "as granted once the consent of GEMA [...] has been obtained."

In plain terms: before offering a podcast as a licensee, you should have reached an agreement with GEMA. When concluding the license agreement, you will presumably have to estimate the podcast's expected monthly retrievals.

Podcasters should also be prepared for GEMA to expect precise information about authors, works, playing times, and, where applicable, revenue. Careful documentation of the music used, including each work's GEMA work number, is therefore advisable.

GEMA repertoire or "GEMA-free"?

That raises the question: what actually belongs to the GEMA repertoire and what doesn't? The GEMA repertoire search, which queries the collecting society's database, can help.

If, for example, you want to find out whether the song "Doo Wop (That Thing)" by the US singer Lauryn Hill can be licensed for a podcast through GEMA, the best approach is to search for the title in combination with "Lauryn Hill" as the presumed author.

As a US musician, Lauryn Hill is not a GEMA member. The song could nevertheless be licensed for a podcast through GEMA, because the singer is represented by a US collecting society that has agreements with GEMA.

For unambiguous identification of the desired track, the GEMA work number at the top right is helpful, both for GEMA and for yourself, should you want to research the song's rights again at a later date.

GEMA provides further helpful tips on using the repertoire search in this PDF.

One possible alternative: GEMA-free music

There is also music that does not fall within GEMA's repertoire. With so-called "GEMA-free" music, neither the composer nor the lyricist is a GEMA member.

Nor are they members of a foreign collecting society with which GEMA has a reciprocal agreement (such as the US ASCAP or the French SACEM); instead, they manage their usage rights themselves.

"GEMA-free" does not mean, however, that no copyright exists in the works, nor that no license costs can arise. "GEMA-free" primarily means that GEMA is not entitled to collect license fees for the work or to administer its rights in any other way.

Conversely, this means that with GEMA-free music, you as the licensee must come to terms directly with the music's authors or with service providers about the cost and scope of a license.

Where to find GEMA-free music, and what to watch out for

GEMA-free music can be found, for example, on portals specializing in it. But here too, read the terms of use carefully. They may apply across the board to all tracks on offer, or they may differ from track to track or author to author.

Most portals offer tiered licensing models: private, non-commercial use of music generally costs less than non-private or commercial use. A podcast counts as public if, for example, it is published on Spotify. It would only be non-public if, say, it were available solely within a family chat.

Some licensing models for GEMA-free music also provide for time-limited uses, such as twelve months from the date of publication. Once that period expires, the license may have to be renewed for a fee if the music is to remain available in the podcast.

Some musicians also offer GEMA-free music on their personal websites, such as their blogs. Here too, look closely at the conditions under which the music may be used in podcasts. When in doubt, ask the site operators.

Often you will have to contact the authors and agree on the scope and cost of using the work in question. If several authors contributed to a work, each of their permissions must be obtained. Here too, it is advisable to keep written proof and the license terms on file.

If the desired GEMA-free track was released through a publisher, the publisher may offer suitable terms of business and use, or may be open to individual arrangements.

Another alternative: Creative Commons-licensed music

There are also creators who make their music available to the public under a Creative Commons license (abbreviated: CC). CC licenses are designed so that authors grant the public certain usage rights across the board.

Here too, it is important to know that CC licensing does not mean the work is free of copyright. You should therefore know the license terms precisely and comply with them.

The individual CC conditions are marked with short codes, which can also be used in combination with one another.

If a track is marked "BY," for example, then at a minimum the author, the source, and the license must be credited; such a notice can easily go on the podcast's website or in an accompanying text.

Works marked "NC" ("non-commercial") may not be used for commercial purposes; a podcast using them must not earn money, for instance through advertising.

Works marked "ND" ("no derivatives") may not be modified. ND-marked music may therefore not be altered in a podcast, for example by rearranging parts of the work. Playing excerpts intact, however, is possible.

Anyone who wants to publish their own podcast under CC may only play CC-licensed music in it and should look closely at any restrictions. The "SA" (share alike) condition, for instance, requires that redistribution happen only under the same license terms. (For a toy encoding of these module rules, see the sketch below.)
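The module rules are mechanical enough to express in a few lines of code. Here is a toy sketch of such a check; the field names are invented for illustration, the SA rule is simplified considerably, and none of this is a substitute for reading the actual license text.

```python
from dataclasses import dataclass

@dataclass
class Track:
    """CC modules attached to a piece of music (hypothetical representation)."""
    nc: bool  # NC: NonCommercial
    nd: bool  # ND: NoDerivatives
    sa: bool  # SA: ShareAlike

def may_use(track: Track, podcast_earns_money: bool, music_is_edited: bool,
            podcast_license: str) -> bool:
    if track.nc and podcast_earns_money:
        return False  # NC music: no advertising, sponsoring, or other revenue
    if track.nd and music_is_edited:
        return False  # ND music: play excerpts intact, don't rearrange the work
    if track.sa and not podcast_license.startswith("CC BY-SA"):
        return False  # SA music (simplified): pass the work on under like terms
    return True       # attribution (BY) is still owed in every one of these cases

# An ad-funded podcast may not use an NC-licensed track:
print(may_use(Track(nc=True, nd=False, sa=False),
              podcast_earns_money=True, music_is_edited=False,
              podcast_license="CC BY"))  # -> False
```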

More information on CC-licensed music, how to use it correctly, and where best to find it can be found here.

GEMA and Creative Commons: relationship status complicated

So can music be licensed via Creative Commons and at the same time belong to the GEMA repertoire?

GEMA offers its members (its composers and lyricists) the option of placing individual works under a CC-like, non-commercial license within their GEMA representation (the "NK license"). GEMA members must apply to GEMA for the NK license separately for each work concerned.

Depending on the application, the NK license applies to individual uses (an "individual-case license") or "to a multitude of usage cases for an unrestricted circle of users." GEMA also calls this "standardized license" a "Jedermann-Lizenz" (a license for everyone). And such a license for everyone can, in turn, be a CC license.

The NK license has the advantage that GEMA membership and publishing one's own works under a CC license are no longer mutually exclusive, as they still were until 2016. But the concrete conditions under which an NK-licensed work can be used, for example in a podcast, can turn out to be complicated or impractical.

First, podcasts can quickly end up in commercial territory (see above). Second, the NK license categorically rules out "mixed use," meaning the use of NK-licensed works alongside works licensed normally through GEMA within one context of presentation (such as a podcast). If even one fee-based GEMA work appears in the podcast, the other GEMA works become fee-based as well, even if they are under the NK license.

GEMA's fee-free NK license is only partly modeled on the Creative Commons licensing model; it offers neither full CC coverage nor the granularity of the CC modules. Moreover, compatibility between the two systems has not been fully clarified.

TL;DR

Anyone producing a podcast who wants to use protected music in it must either obtain direct permission from the authors or rightsholders or acquire the usage rights. In very many cases, that means a trip to GEMA, the collecting society that represents music authors and issues licenses on their behalf.

Since mid-May 2020, GEMA has offered a tariff tailored specifically to licensing music in podcasts, aimed both at private individuals and at commercial users who earn revenue from the podcast business. It initially runs until May 2022.

The tariff is somewhat complicated to apply and can become expensive with even a few minutes of GEMA repertoire; be prepared for that. Its advantage is that it also enables commercial forms of use, which can be settled through license payments.

Alternatives are available in the form of GEMA-free and Creative Commons-licensed material. The selection is growing steadily and offers plenty of stylistic variety. As always: familiarize yourself with the applicable terms before use, and obtain any licenses in good time. Specialized portals can be a good place to start.

A Legislative Path to an Interoperable Internet

It’s not enough to say that the Internet is built on interoperability. The Internet is interoperability. Billions of machines around the world use the same set of open protocols—like TCP/IP, HTTP, and TLS—to talk to one another. The first Internet-connected devices were only possible because phone lines provided interoperable communication ports, and scientists found a way to send data, rather than voice, over those phone lines.

In the early days of the Internet, protocols dictated the rules of the road. Because the Internet was a fundamentally decentralized, open system, services on the Internet defaulted to acting the same way. Companies may have tried to build their own proprietary networking protocols or maintain unilateral control over the content on the network, but they ultimately failed. The ecosystem was fast-moving, chaotic, and welcoming to new ideas.

Today, big platforms are ecosystems unto themselves. Companies create accounts on Twitter, Facebook, and YouTube in order to interact with consumers. Platforms maintain suites of business-facing APIs that let other companies build apps to work within the boundaries of those platforms. And since they control the infrastructure that others rely on, the platforms have unilateral authority to decide who gets to use it.

This is a problem for competition. It means that users of one platform have no easy way of interacting with friends on other services unless the platform’s owners decide to allow it. It means that network effects create enormous barriers to entry for upstart communications and social networking companies. And it means that the next generation of apps that would work on top of the new ecosystems can only exist at big tech’s pleasure.

That’s where interoperability can help. In this post, we’ll discuss how to bring about a more interoperable ecosystem in two ways: first, by creating minimum standards for interoperability that the tech giants must support; and second, by removing the legal moat that incumbents use to stave off innovative, competitive interoperators.

Interoperability is corporate entropy. It opens up space for chaotic, exciting new innovations, and erodes the high walls that monopolies build to protect themselves.

If Facebook and Twitter allowed anyone to fully and meaningfully interoperate with them, their size would not protect them from competition nearly as much as it does. But platforms have shown that they won’t choose to do so on their own. That’s where governments can step in: regulations could require that large platforms offer a baseline of interoperable interfaces that anyone, including competitors, can use. This would set a “floor” for how interoperable very large platforms must be. It would mean that once a walled garden becomes big enough, its owner needs to open up the gates and let others in.

Requiring big companies to open up specific interfaces would only win half the battle. There are always going to be upstarts who find new, unexpected, and innovative ways to interact with platforms—often against the platforms’ will. This is called “adversarial interoperability” or “competitive compatibility.” Currently, U.S. law gives incumbents legal tools to shut down those who would interoperate without the big companies’ consent. This limits the agency that users have within the services that are supposed to serve them, and it creates an artificial “ceiling” on innovation in markets dominated by monopolists. 

It’s not enough to create new legal duties for monopolists without dismantling the legal tools they themselves use to stave off competition. Likewise, it’s not enough to legalize competitive compatibility, since the platforms have such an advantage in technical resources that serious competitors’ attempts to interoperate face enormous engineering challenges. To break out of the big platforms’ suffocating hold on the market, we need both approaches. 

Mandating Access to Monopolist Platforms: Building a Floor

This post will look at one possible set of regulations, proposed in the bipartisan ACCESS Act, that would require platforms to interoperate with everyone else. At a high level, the ACCESS Act provides a good template for ensuring upstart competitors are able to interoperate and compete with monopolists. It won’t level the playing field, but it will ensure smaller companies have the right to play at all.

We’ll present three specific kinds of interoperability mandate, borrowed from the ACCESS Act’s framing. These are data portability, back-end interoperability, and delegability. Each one provides a piece of the puzzle: portability allows users to take their data and move to another platform; back-end interoperability lets users of upstart competitors interact with users of large platforms; and delegability allows users to interact with content from the big platforms through an interface of their choosing. All three address different ways that large platforms consolidate and guard their power. We’ll break these concepts down one at a time.

Data Portability

Data portability is the idea that users can take their data from one service and do what they want with it elsewhere. Portability is the “low-hanging fruit” of interoperability policy. Many services, Facebook and Google included, already offer relatively robust data portability tools. Furthermore, data portability mandates have been included in several recent data privacy laws, including the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA).

Portability is relatively uncontroversial, even for the companies subject to regulation. In 2019, Facebook published a whitepaper supporting some legal portability mandates. For its part, Google has repeatedly led the way with user-friendly portability tools. And Google, Facebook, Microsoft, Twitter, and Apple have all poured resources into the Data Transfer Project, a set of technical standards to make data portability easier to implement.

The devil is in the details. Portability is hard at the edges, because assigning “ownership” to data is often hard. Who should have access to a photo that one person takes of another’s face, then uploads to a company’s server? Who should be able to download a person’s phone number: just the owner, or everyone they’re friends with on Facebook? It is extremely difficult for a single law to draw a bright line between what data a user is entitled to and what constitutes an invasion of another’s privacy. While creating portability mandates, regulators should avoid overly prescriptive orders that could end up hurting privacy. 

Users should have a right to data portability, but that alone won’t be enough to loosen the tech giants’ grip. That’s because portability helps users leave a platform but doesn’t help them communicate with others who still use it.

Back-end Interoperability

The second, more impactful concept is back-end interoperability. Specifically, this means enabling users to interact with one another across the boundaries of large platforms. Right now, you can create an account on any number of small social networks, like Diaspora or Mastodon. But until your friends also move off of Facebook or Twitter, it’s extremely difficult to interact with them. Network effects prevent upstart competitors from taking off. Mandatory interoperability would force Facebook to maintain APIs that let users on other platforms exchange messages and content with Facebook users. For example, Facebook would have to let users of other networks post, like, comment, and send messages to users on Facebook without a Facebook account. This would enable true federation in the social media space.

Imagine a world where social media isn’t controlled by a monopoly. There are dozens of smaller services that look kind of like Facebook, but each has its own policies and priorities. Some services maintain tight control over what kind of content is posted. Others allow pseudonymous members to post freely with minimal interference. Some are designed for, and moderated by, specific cultural or political communities. Some are designed to share and comment on images; others lend themselves better to microblogs; others still to long textual exchanges.

Now imagine that a user on one platform can interact with any of the other platforms through a single interface. Users on one service can engage freely with content hosted on other services, subject to the moderation policies of the hosting servers. They don’t need to sign up for accounts with each service if they don’t want to (though they are more than free to do so). Facebook doesn’t have an obligation to host or promote content that violates its rules, but it does have a duty to connect its users to people and pages of their choosing on other networks. If users don’t like how the moderators of one community run things, they can move somewhere else. That’s the promise of federation.

Open technical standards to federate social networking already exist, and Facebook already maintains interfaces that do most of what the bill would require. But Facebook controls who can access its interfaces, and reserves the right to restrict or revoke access for any reason. Furthermore, Facebook requires that all of its APIs be accessed on behalf of a Facebook user, not a user of another service. It offers “interoperability” in one direction—flowing into Facebook—and it has no incentive to respect users who host their data elsewhere. An interoperability mandate, and appropriate enforcement, could solve both of these problems.
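To make that concrete: under an existing open standard like the W3C's ActivityPub (the protocol behind Mastodon), a "like" crossing a server boundary is nothing more than a small JSON document delivered to an inbox on the other server. Below is a simplified sketch with placeholder URLs; real deployments additionally require HTTP-signature authentication and actor discovery, which are omitted here.

```python
import json
import urllib.request

# A minimal ActivityPub-style "Like" activity: a user on one server reacting
# to a post hosted on another (all URLs below are placeholders).
activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Like",
    "actor": "https://small-upstart.example/users/alice",
    "object": "https://big-platform.example/posts/12345",
}

request = urllib.request.Request(
    "https://big-platform.example/users/bob/inbox",  # the recipient's inbox
    data=json.dumps(activity).encode("utf-8"),
    headers={"Content-Type": "application/activity+json"},
    method="POST",
)
# urllib.request.urlopen(request)  # real servers will also demand a signed request
```

An interoperability mandate would, in effect, require the big platforms to accept and emit documents like this one on fair, non-discriminatory terms.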

Delegability

The third and final piece of the legislative framework is delegability. This is the idea that a user can delegate a third-party company, or a piece of third-party software, to interact with a platform on their behalf. Imagine if you could read your Facebook feed as curated by a third party that you trust. You could see things in raw chronological order, or see your friends’ posts in a separate feed from the news and content companies that you follow. You could calibrate your own filters for hate speech and misinformation, if you chose. And you could assign a trusted third party to navigate Facebook’s twisted labyrinth of privacy settings for you, making sure you got the most privacy-protective experience by default.

Many of the problems caused by monopolistic platforms stem from their interfaces. Ad-driven tech companies use dark patterns and the power of defaults to get users’ “consent” for much of their rampant data collection. In addition, ad-driven platforms often curate information in ways that benefit advertisers, not users. The ways Facebook, Twitter, and YouTube present content are designed to maximize engagement and drive up quarterly earnings. This frequently comes at the expense of user well-being.

A legal mandate for delegability would require platforms to allow third-party software to interface with their systems in the same way users do. In other words, they would have to expose interfaces for common user interactions—sending messages, liking and commenting on posts, reading content, and changing settings—so that users could delegate a piece of software to do those things for them. At a minimum, it would mean that platforms can leave their tech the way it is—after all, these functions are already exposed through a user interface, and so may be automated—and stop suing companies that try to build on top of it. A more interventionist regulation could require platforms to maintain stable, usable APIs to serve this purpose.
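As a thought experiment, a delegated client against such a mandated interface might look like the sketch below. Every endpoint and field name here is invented for illustration; the point is that each call corresponds to an interaction the platform's own user interface already performs.

```python
import json
import urllib.request
from datetime import datetime

API = "https://platform.example/v1"        # hypothetical mandated interface
TOKEN = "token-the-user-granted-this-app"  # delegation must be user-consented

def fetch_feed() -> list:
    request = urllib.request.Request(
        f"{API}/feed", headers={"Authorization": f"Bearer {TOKEN}"})
    with urllib.request.urlopen(request) as response:
        return json.load(response)  # assume a JSON list of post objects

def curate(posts: list) -> list:
    """The user's own curation policy: friends first, raw chronological order."""
    newest_first = lambda p: datetime.fromisoformat(p["created_at"])
    friends = sorted((p for p in posts if p["author_is_friend"]),
                     key=newest_first, reverse=True)
    pages = sorted((p for p in posts if not p["author_is_friend"]),
                   key=newest_first, reverse=True)
    return friends + pages  # no engagement-maximizing ranking anywhere
```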

This is probably the most interventionist of the three avenues of regulation. It also has the most potential to cause harm. If platforms are forced to create new interfaces and given only limited authority to moderate their use, Facebook and Twitter could become even more overrun with bots. In addition, any company that is able to act on a user’s behalf will have access to all of that person’s information. Safeguards need to be created to ensure that user privacy is not harmed by this kind of mandate.

Security, Privacy, Interoperability: Choose All Three

Interoperability mandates are a heavy-duty regulatory tool. They need to be implemented carefully to avoid creating new problems with data privacy and security. 

Mandates for interoperability and delegability have the potential to exacerbate the privacy problems of existing platforms. Cambridge Analytica acquired its hoard of user data through Facebook’s existing APIs. If we require Facebook to open those APIs to everyone, we need to make sure that the new data flows don’t lead to new data abuses. This will be difficult, but not impossible. The key is to make sure users have control. Under a new mandate, Facebook would have to open up APIs for competing companies to use, but no data should flow across company boundaries until users give explicit, informed consent. Users must be able to withdraw that consent easily, at any time. The data shared should be minimized to what is actually necessary to achieve interoperability. And companies that collect data through these new interoperable interfaces should not be allowed to monetize that data in any way, including using it to profile users for ads.

Interoperability may also clash with security. Back-end interoperability will mean that big platforms need to keep their public-facing APIs stable, because changing them frequently or without notice could break the connections to other services. However, once a service becomes federated, it can be extremely difficult to change the way it works at all. Consider email, the archetypal federated messaging service. While end-to-end encryption has taken off on centralized messaging services like iMessage and WhatsApp, email servers have been slow to adopt even basic, point-to-point encryption with STARTTLS. It’s proven frustratingly difficult to get stakeholders on the same page, so inertia wins, and many messages are sent using the same technology we used in the ‘90s. Some encryption experts have stated, credibly, that they believe federation makes it too “slow” to build a competitive encrypted messaging service.  

But security doesn’t have to take a backseat to interoperability. In a world with interoperability mandates, standards don’t have to be decided by committee: the large platform that is subject to regulation can dictate how its services evolve, as long as it continues to grant fair access to everyone. If Facebook is to make its encrypted chat services interoperable with third parties, it must reserve the right to aggressively fix bugs and patch vulnerabilities. Sometimes, this will make it difficult for competitors to keep up, but protocol security is not something we can afford to sacrifice. Anyone who wants to be in the business of providing secure communications must be ready to tackle vulnerabilities quickly and according to industry best practices.

Interoperability mandates will present new challenges that we must take seriously. That doesn’t mean interoperability has to destroy privacy or undermine security. Lawmakers must be careful when writing new mandates, but they should diligently pursue a path that gives us interoperability without creating new risks for users.

Unlocking Competitive Compatibility: Removing the Ceiling

Interoperability mandates could make a great floor for interoperability. By their nature, mandates are backward-looking, seeking to establish competitive ecosystems instead of incumbent monopolies. No matter how the designers of these systems strain their imaginations, they can never plan for the interoperability needs of all the future use-cases, technologies, and circumstances.

Enter “competitive compatibility,” or ComCom, which will remove the artificial ceiling on innovation imposed by the big platforms. A glance through the origin stories of technologies as diverse as cable TV, modems, the Web, operating systems, social media services, networks, printers, and even cigarette-lighter chargers for cellphones reveals that the technologies we rely on today were not established as full-blown, standalone products, but rather, they started as adjuncts to the incumbent technologies that they eventually grew to eclipse. When these giants were mere upstarts, they shouldered their way rudely into the market by adding features to existing, widely used products, without permission from the companies whose products they were piggybacking on.

Today, this kind of bold action is hard to find, though when it’s tried, it’s a source of tremendous value for users and a real challenge to the very biggest of the Big Tech giants. 

Competitive compatibility was never rendered obsolete. Rather, the companies that climbed up the ComCom ladder kicked that ladder away once they had comfortably situated themselves at the peak of their markets. 

They have accomplished this by distorting existing laws into anti-competitive doomsday devices. Whether it’s turning terms of service violations into felonies, making independent repair into a criminal copyright violation, banning compatibility altogether, or turning troll with a portfolio of low-grade patents, it seems dominant firms are never more innovative than when they're finding ways to abuse the law to make it illegal to compete with them.

Big Tech’s largely successful war on competitive compatibility reveals one of the greatest dangers presented by market concentration: its monopoly rents produce so much surplus that firms can afford to pursue the maintenance of their monopolies through the legal system, rather than by making the best products at the best prices.

EFF has long advocated for reforms to software patents, anti-circumvention rules, cybersecurity law, and other laws and policies that harm users and undermine fundamental liberties. But the legal innovations on display in the war on competitive compatibility show that fixing every defective tech law is not guaranteed to restore a level playing field. The lesson of legal wars like Oracle v. Google is that any ambiguity in any statute can be pressed into service to block competitors. 

After all, patents, copyrights, cybersecurity laws, and other weapons in the monopolist’s arsenal were never intended to establish and maintain industrial monopolies. Their use as anti-competitive weapons is a warning that a myriad of laws can be used in this way.

The barriers to competitive compatibility are many and various: there are explicitly enumerated laws, like section 1201 of the DMCA; then there are interpretations of those laws, like the claims that software patents cover very obvious "inventions" if the words "with a computer" are added to them; and then there are lawsuits to expand existing laws, like Oracle's bid to stretch copyright to cover APIs and other functional, non-copyrightable works. 

There are several ways to clear the path for would-be interoperators. These bad laws can be worked around or struck down, one at a time, through legislation or litigation. Legislators could also enshrine an affirmative right to interoperate in law that would future-proof against new legal threats. Furthermore, regulators could require that entities receiving government contracts, settling claims of anticompetitive conduct, or receiving permission to undertake mergers make binding covenants not to attack interoperators under any legal theory.

Comprehensively addressing threats to competitive compatibility will be a long and arduous process, but the issue is urgent. It’s time we got started.


California Legislator Introduces Anti-Rural Fiber Legislation That Prioritizes DSL

Frontier’s bankruptcy has serious consequences for Americans, including 2 million Californians, who are stuck with their deteriorating DSL monopoly. After deciding for years to never upgrade their networks to fiber—despite the fact that, according to their own bankruptcy filing, they could have profitably upgraded 3 million customers to gigabit fiber already—the pyramid scheme of milking dying DSL assets caught up to the company. This has forced rural communities in California that either lack access to the Internet, or have been dependent on decaying copper DSL lines provided by Frontier Communications, into a serious predicament. The solution, of course, is for the state to build fiber in those markets by empowering local governments and small private ISPs to do the job Frontier neglected for so long.

But, rather than leave this mega-corporation to its own demise and chart out a better future for Californians, a bill introduced by Assembly Member Aguiar-Curry, A.B. 570, proposes to amend the state’s Internet infrastructure program to prioritize DSL upgrades over fiber.

Take Action

CA: Tell Your Lawmakers to Oppose Anti-Rural Fiber Legislation

How A.B. 570 Builds Slow DSL Networks Instead of Fiber Networks

The bill establishes criteria under which the state must prioritize “cost-effective” deployment of broadband at the woefully out-of-date speed of 25/3 mbps. The biggest beneficiary of such a standard is the now-bankrupt Frontier Communications, because it has existing copper assets in the ground that can be incrementally upgraded to deliver 25/3—which would be the cheapest way to deliver broadband at that speed. This upgrade effort would be financed by a tax that Californians pay into a telecom fund. And because slow networks are dead on arrival for private investors today, the effort would have to rely entirely on taxpayer money, letting the corporation shift the whole loss off its books.

As we noted about the current state law, California’s Advanced Services Fund (CASF) considers markets “served” where 1990s-era DSL delivers 6 megabits per second download and 1 megabit per second upload, and it establishes a low minimum for eligible projects that is achievable with DSL. Today’s law already denies state support to more than 1 million Californians who do not have broadband, because they are stuck with Frontier’s slow DSL or slow wireless networks. This makes it very hard for anyone else to build fiber networks in rural markets to solve the problem for everyone. With this kind of definition, it's not possible to leverage whole communities to build these networks—only the edges of those communities.

A.B. 570 arguably makes things worse, by complicating the means of assessing “unserved.” In general, the 6/1 metric remains (with minor caveats to raise it to 25/3), which still makes a wide range of territory ineligible for a fiber upgrade so long as copper DSL networks are in the ground. That still excludes more than 1 million Californians from state support. This approach of helping fewer and fewer people with slower networks is bad policy, and contradicts the long-held belief in telecom policy that all people are entitled to equivalent services.

Ultimately, you could not find a more wasteful way to spend scarce government money on broadband than prioritizing slow DSL upgrades over copper lines. This is especially true in the midst of a pandemic, when everyone needs substantially higher-capacity networks. Those copper wires will never make the transition to the high-speed era; they need to be replaced by fiber. There is no shortcut around that fact, which is why no private corporation would willingly invest new private dollars in that type of construction. Slow DSL is rapidly approaching obsolescence. If A.B. 570’s goal of building ubiquitous 25/3 DSL connections became law, the state would have nothing to show for it in just a few years. And it will cost the state far more in the long run to actually deliver infrastructure that is ready for the 21st-century economy.

There is No Future in Slow Networks and Nothing to Gain from Building Them Out

Were this bill designed around financing future-proof fiber infrastructure, it would aim at permanently solving the digital divide and ensure that people get networks that improve with advancements in hardware—without needing more government money.

But this legislation stands for the proposition that, as a matter of state policy, where you live should determine whether you have inferior access to the Internet. Every Californian who wants 21st-century-ready access to the Internet, and who believes their neighbors are entitled to that kind of connection, should reject this premise. What people need are fiber infrastructure plans such as the one envisioned in S.B. 1130, the universal fiber infrastructure plan recently introduced in the House of Representatives by Majority Whip Clyburn, and the FCC’s Rural Digital Opportunity Fund plan to finance gigabit networks. Fiber networks will keep up with advancements in applications and services for decades, while legacy networks have reached their end and will not continue to increase in capacity to deliver data.

If A.B. 570 were to become law at the end of the year, all it would do is perpetuate the suffering caused by the digital divide by replacing it with a “speed chasm”—where rural Californians have expensive, obsolete networks delivering 25/3, while urban Californians have networks delivering more than 400 times those download speeds and 3,333 times those upload speeds. Already, the data shows that the average North American city today enjoys broadband speeds in excess of 250/250 mbps. Such a chasm will only grow in the absence of a fiber infrastructure program for rural markets.

If we are going to spend taxpayer money building broadband infrastructure, it needs to be done right the first time or we will never solve the problem while asking taxpayers to shell out more and more of their limited money.


How Mexico's New Copyright Law Crushes Free Expression

When Mexico's Congress rushed through a new copyright law as part of its adoption of Donald Trump's United States-Mexico-Canada Agreement (USMCA), it largely copy-pasted the US copyright statute, with some modifications that made the law even worse for human rights.

The result is a legal regime that has all the deficits of the US system, and some new defects that are strictly hecho en Mexico, to the great detriment of the free expression rights of the Mexican people.

Mexico's Constitution has admirable, far-reaching protections for the free expression rights of its people. Mexico’s Congress is not merely prohibited from censoring its people's speech; it is also banned from making laws that would cause others to censor Mexicans' speech.

Mexico’s Supreme Court has ruled that Mexican authorities and laws must recognize both Mexican constitutional rights law and international human rights law as the law of the land. This means that the human rights recognized in the Constitution and international human rights treaties such as the American Convention on Human Rights, including their interpretation by the authorized bodies, make up a “parameter of constitutional consistency," except that where they clash, the most speech-protecting rule wins. Article 13 of the American Convention bans prior restraint (censorship prior to publication) and indirect restrictions on expression.

As we will see, Mexico's new copyright law falls very far from this mark, exposing Mexicans to grave risks to their fundamental human right to free expression.

Filters

While the largest tech companies in America have voluntarily adopted algorithmic copyright filters, Article 114 Octies of the new Mexican law says that "measures must be taken to prevent the same content that is claimed to be infringing from being uploaded to the system or network controlled and operated by the Internet Service Provider after the removal notice." This makes it clear that any online service in Mexico will have to run algorithms that intercept everything posted by a user, compare it to a database of forbidden sounds, words, pictures, and moving images, and, on finding a match, block the material from public view—or face potential fines.

Requiring these filters is an unlawful restriction on freedom of expression. “At no time can an ex ante measure be put in place to block the circulation of any content that can be assumed to be protected. Content filtering systems put in place by governments or commercial service providers that are not controlled by the end-user constitute a form of prior censorship and do not represent a justifiable restriction on freedom of expression.” Moreover, they are routinely wrong. Filters often mistake users’ own creative works for copyrighted works controlled by large corporations and block them at the source. For example, classical pianists who post their own performances of public domain music by Beethoven, Bach, and Mozart find their work removed in an eyeblink by an algorithm that accuses them of stealing from Sony Music, which has registered its own performances of the same works.

To make this worse, these filters amplify absurd claims about copyright — for example, the company Rumblefish has claimed copyright in many recordings of ambient birdsong, with the effect that videos of people walking around outdoors get taken down by filters because a bird was singing in the background. More recently, humanitarian efforts to document war crimes fell afoul of automated filtering.

Filters can't tell when a copyrighted work is incidental to a user's material or central to it. For example, if your seven-hour scholarly conference's livestream captures some background music playing during the lunch break, YouTube's filters will wipe out all seven hours' worth of audio, destroying the only record of the scientific discussions during the rest of the day.

For many years, people have toyed with the idea of preventing their ideological opponents' demonstrations and rallies from showing up online by playing copyrighted music in the background, causing all video-clips from the event to be filtered away before the message could spread.

This isn’t a fanciful strategy: footage from US Black Lives Matter demonstrations is vanishing from the Internet because the demonstrators played amplified music during their protests.

No one is safe from filters: last week, CBS's own livestreamed San Diego Comic-Con presentation was shut down thanks to an erroneous copyright claim made in CBS's own name.

Filters can only tell you if a work matches or doesn't match something in their database — they can't tell if that match constitutes a copyright violation. Mexican copyright law contains "limitations and exceptions" for a variety of purposes. While these are narrower than the US's fair use doctrine, they nevertheless serve as a vital escape valve for Mexicans' free expression. A filter can't tell whether a match means that you are a critic quoting a work for a legitimate purpose or an infringer breaking the law.
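The core of any such filter is a lookup against a database of claimed works. The toy sketch below reduces this to exact hash matching (real systems such as Content ID use fuzzy audio and video fingerprints, but the shape of the decision is the same): the function can only answer "known claim or not," with no input anywhere for context, purpose, or the limitations and exceptions described above.

```python
import hashlib

# Fingerprints that rightsholders have registered as theirs. (Toy stand-in:
# real filters use fuzzy audio/video fingerprints rather than exact hashes.)
claimed = {
    hashlib.sha256(b"Sony's recording of Beethoven's Fifth").hexdigest(): "Sony Music",
}

def filter_upload(upload: bytes):
    """Returns the claimant on a match, else None. That is all a filter can
    know: nothing in this lookup distinguishes a pianist's own performance,
    a quotation for criticism, or background birdsong from infringement."""
    return claimed.get(hashlib.sha256(upload).hexdigest())

print(filter_upload(b"Sony's recording of Beethoven's Fifth"))    # -> Sony Music
print(filter_upload(b"my own performance of the same symphony"))  # -> None (though a
# real fuzzy matcher would quite possibly flag this, too)
```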

As if all this wasn't bad enough: the Mexican filter rule does not allow firms to ignore those with a history of making false copyright claims. This means that if a fraudster sent Twitter or Facebook — or a Made-In-Mexico alternative — claims to own the works of Shakespeare, Cervantes, or Juana Inés de la Cruz, the companies could ignore those particular claims if their lawyers figured out that the sender did not own the copyright, but would have to continue evaluating each new claim from this known bad actor. If a fraudster included just one real copyright claim amidst the torrent of fraud, the online service provider would be required to detect that single valid claim and honor it.

This isn't a hypothetical risk: "copyfraud" is a growing form of extortion, in which scammers claim to own artists' copyrights, then coerce the artists with threats of copyright complaints.

Algorithms work at the speed of data, but their mistakes are corrected in human time (if at all). If an algorithm is correct an incredible, unrealistic 99 percent of the time, that means it is wrong one percent of the time. Platforms like YouTube, Facebook and TikTok receive hundreds of millions of videos, pictures and comments every day — one percent of one hundred million is one million. That's one million judgments that have to be reviewed by the company's employees to decide whether the content should be reinstated.

The line to have your case heard is long. How long? Jamie Zawinski, a nightclub owner in San Francisco, posted an announcement of an upcoming performance by a band at his club in 2018, only to have it erroneously removed by Instagram. Zawinski appealed. Twenty-eight months later, Instagram reversed its algorithm's determination and reinstated his announcement — more than two years after the event had taken place.

This kind of automated censorship is not limited to nightclubs. Your contribution to your community's online discussion of an upcoming election is just as likely to be caught in a filter as Zawinski's post about a band. When (and if) the platform decides to let your work out of content jail, the vote will have passed, and with it, your chance to be part of your community's political deliberations.

As terrible as filters are, they are also very expensive. YouTube's "Content ID" filter has cost the company more than $100,000,000, and this flawed and limited filter accomplishes only a narrow slice of the filtering required under the new Mexican law. Few companies have an extra $100,000,000 to spend on filtering technology, and while the law says these measures “should not impose substantial burdens" on implementers, it also requires them to find a way to achieve permanent removal of material following a notification of copyright infringement. Filter laws mean even fewer competitors in the already monopolized online world, giving the Mexican people fewer places where they may communicate with one another.

TPMs

Section 1201 of America's Digital Millennium Copyright Act (DMCA) is one of the most catastrophic copyright laws in history. It provides harsh penalties for anyone who tampers with or disables a "technical protection measure" (TPM): massive fines or, in some cases, prison sentences. These TPMs — including what is commonly known as "Digital Rights Management" or DRM — are the familiar, dreaded locks that stop you from refilling your printer's ink cartridge, using an unofficial App Store with your phone or game console, or watching a DVD from overseas in your home DVD player.

You may have noticed that none of these things violates copyright — and yet, because you must remove a digital lock in order to do them, you could be sued in the name of copyright law. DMCA 1201 does not provide the clear, unambiguous protection that would be needed to protect free expression. One appellate court in the United States has explicitly held that you can be liable for a violation of Section 1201 even if you're making a fair use, and that is the position adopted by the U.S. Copyright Office. Other courts disagree, but the net effect is that you engage in these non-infringing uses and expressions at your peril. The US Congress has failed to clarify the law by tying liability for bypassing a TPM to an actual act of copyright infringement — a rule that would say, in effect: you may not remove the TPM from a Netflix video to record it and put it on the public Internet (a copyright infringement), but removing it to make a copy for personal use (not a copyright infringement) is fine.

The failure to clearly tie DMCA 1201 liability to infringement has had wide-ranging effects for repair, cybersecurity and competition that we will explore in later installments of this series. Today, we want to focus on how TPMs undermine free expression.

TPMs give unlimited power to manufacturers. An ever-widening constellation of devices is designed so that any modification requires bypassing a TPM and incurring liability. This allows companies to sell you a product but dictate how you must use it — preventing you from installing your own apps or other code to make it work the way you want it to.

The first speech casualty of TPM rules is the software author. This person can write code — a form of speech — but they cannot run it on their devices without permission from the manufacturer, nor can they give the code to others to run on their devices.

Why might a software author want to change how their device works? Perhaps because it is interfering with their ability to read literature, watch films, hear music or see images. TPMs such as the global DVB CPCM standard enforce a policy called the "Authorized Domain" that defines what is — and is not — a family. Devices owned by a standards-compliant family can share creative works among themselves, letting parents and children exchange media within the household.

But an "Authorized Domain family" is not the same as an actual family. The Authorized Domain was designed by rich people from the global north working for multinational corporations, whose families are far from typical. The Authorized Domain will let you share videos between your boat, your summer home, and your SUV — but it won't let you share videos between a family whose daughter works as a domestic worker in another country, whose son is a laborer in another state, and whose parents are migrant workers who are often separated (there are far more families in this situation than there are families with yachts and second homes!).

Even if your family meets with the approval of an algorithm designed in a distant boardroom by strangers who have never lived a life like yours, you still may find yourself unable to partake in culture you are entitled to enjoy. TPMs typically require a remote server to function, and when your Internet goes down, your books or movies can be rendered unviewable.

It's not just Internet problems that can cause the art and culture you own to vanish: last year, Microsoft became the latest in a long list of companies that switched off their DRM servers because they decided they no longer wanted to be a bookstore. Everyone who ever bought a book from Microsoft lost their books.

Forever.

Mexico's Congress did nothing to rebalance its version of America's TPM rules. Indeed, Mexico's rules are worse than America's. Under DMCA 1201, the US Copyright Office holds hearings every three years to consider exemptions to the TPM rule, giving people the right to remove or bypass TPMs for legitimate purposes. America's copyright regulator has granted a very long list of these exemptions, having found that TPMs were interfering with Americans in unfair, unjust, and even unsafe ways. Of course, that process is far from perfect: it's slow, skewed heavily in favor of rightsholders, and illegally restricts free expression by forcing would-be speakers to ask the government in advance for permission through an arbitrary process.

Mexico's new copyright law mentions a possible equivalent proceeding but leaves it maddeningly undefined — and certainly does nothing to remedy the defects in the US process. Recall that USMCA is a trade agreement, supposedly designed to put all three countries on equal footing — but Americans have the benefit of more than two decades' worth of exemptions to this terrible rule, while Mexicans will have to labor under its full weight until (and unless) they can use this undefined process to secure a comparable list of exemptions. And even then, they won’t have the flexibility offered by fair use under US law.

Notice and Takedown

Section 512 of the US DMCA created a "notice and takedown" rule that allows rightsholders or their representatives to demand the removal of works without any showing of evidence or finding of fact that their copyrights were infringed. This has been a catastrophe for free expression, allowing the removal of material without due care or even through malicious, fraudulent acts (the author of this article had his New York Times bestselling novel improperly removed from the Internet by careless lawyers for Fox Entertainment, who mistook it for an episode of a TV show of the same name).

As bad as America's notice and takedown system is, Mexico's is now worse.

In America, online services that honor notice and takedown get a "safe harbor" — meaning that they are not considered liable for their users' copyright infringements. An online service in the US that believes a user's content is noninfringing may ignore a takedown notice; it then faces liability only if it meets the tests for "secondary liability" for copyright infringement, something that is far from automatic. If the rightsholder sues, the service may end up in court alongside its user, but it can still rely on the safe harbor for other works published by other users, provided it removes them upon notice of infringement.

The Mexican law makes removal a strict requirement. Under Article 232 Quinquies (II), providers must honor all takedown demands by copyright owners, even obviously overreaching ones, or face fines of 1,000 to 20,000 UMA (Unidad de Medida y Actualización, Mexico's inflation-indexed fine unit).

Further, Article 232 Quinquies (III) of the Mexican law allows anyone claiming to be an infringed-upon rightsholder to obtain the personal information of the alleged infringer. This means that gangsters, thin-skinned public officials, stalkers, and others can use fraudulent copyright claims to unmask their critics. Who will complain about corrupt police, abusive employers, or local crime-lords when their personal information can be retrieved with such ease? We recently defended the anonymity of a person who questioned their religious community, when the religious organization tried to use the corresponding part of the DMCA to identify them. In the name of copyright, the law gives new tools to anyone with power to stifle dissent and criticism.

This isn't the only "chilling effect" in the Mexican law. Under Article 114 Octies (II), a platform must comply with takedown requests for mere links to a web page that is allegedly infringing. Linking, by itself, is not an infringement in the United States or Canada, and its legal status is contested in Mexico. There are good reasons why linking is not infringement: it's important to be able to talk about speech elsewhere on the Internet and to share facts, which may include the availability of copyrighted works whose license or infringement status is unknown. Besides that, web pages change all the time: if you link to a page that is outside of your control and it is later updated in a way that infringes copyright, you could become the target of a takedown request.

Act now!

If you are based in Mexico, we urge you to participate in R3D's campaign "Ni Censura ni Candados" and send a letter to Mexico's National Commission for Human Rights asking it to invalidate this flawed new copyright law. R3D will ask for your name, email address, and your comment, which will be subject to R3D's privacy policy.


San Francisco Police Accessed Business District Camera Network to Spy on Protestors

The San Francisco Police Department (SFPD) conducted mass surveillance of protesters at the end of May and in early June using a downtown business district's camera network, according to new records obtained by EFF. The records show that SFPD received real-time live access to hundreds of cameras as well as a "data dump" of camera footage amid the ongoing demonstrations against police violence.

The camera network is operated by the Union Square Business Improvement District (BID), a special taxation district created by the City and County of San Francisco, but operated by a private non-profit organization. These networked cameras, manufactured by Motorola Solutions' brand Avigilon, are high definition, can zoom in on a person's face to capture face-recognition-ready images, and are linked to a software system that can automatically analyze content, including distinguishing between when a car or a person passes within the frame. Motorola Solutions recently unveiled plans to expand its portfolio of tools for aiding public-private partnerships with law enforcement by making it easier for police to gain access to private cameras and video analytic tools like license plate readers.


Union Square BID is only one of several special assessment districts in San Francisco that have begun deploying these cameras. These organizations are quasi-governmental agencies that act with state authority to collect taxes and provide services such as street cleaning. While they are run by private non-profits, they are funded with public money and carry out public services. In this case, however, the camera deployments were driven by one particular private citizen working with these districts.

In 2012, cryptocurrency mogul Chris Larsen started providing money for what would eventually be $4 million worth of cameras deployed by businesses within the special assessment districts. These camera networks are managed by staff within the neighborhood and streamed to a local control room, but footage can be shared with other entities, including individuals and law enforcement, with little oversight. At least six special districts have installed these camera networks, the largest of which belongs to the Union Square BID. The camera networks now blanket a handful of neighborhoods and cover 135 blocks, according to a recent New York Times report.

The Union Square Business Improvement District's surveillance camera map

According to logs obtained by EFF, SFPD has regularly sought footage related to alleged looting and assault in the area associated with the ongoing protests against police violence. However, SFPD has gone beyond simply investigating particular incident reports and instead engaged in indiscriminate surveillance of protesters. 

The documents are available for review on DocumentCloud.

SFPD requested and received a "data dump" of 12 hours of footage from every camera in the Union Square BID from 5:00 pm on May 30, 2020 to 5:00 am on May 31, 2020. While this may have coincided with an uptick in property destruction in the protests’ vicinity, the fact that SFPD requested all footage without any kind of specificity means that anyone who attended the protests—or indeed was simply passing by—could have been caught in the surveillance dragnet.

Also on May 31, SFPD's Homeland Security Unit requested real-time access to the Union Square BID camera network "to monitor potential violence," claiming they needed it for "situational awareness and enhanced response." At 9:38am, the BID received SFPD’s initial request via email, and by 11:47am, the BID’s Director of Services emailed a technical specialist saying, “We have approved this request to provide access to all of our cameras for tonight and tomorrow night. Can you grant 48 hour remote access to [the officer]?” 

This email exchange shows that SFPD was given an initial two days of live-feed monitoring, as well as technical assistance from the BID to get its remote access up and running. An email dated June 2 shows that SFPD requested access to live feeds of the camera network for another five days. The email reads:

“I...have been tasked by our Captain to reach out to see if we can extend our request for you [sic] BID cameras. We greatly appreciate you guys allowing us access for the past 2 days, but we are hoping to extend our access through the weekend. We have several planned demos all week and we anticipate several more over the weekend which are the ones we worry will turn violent again.”

SFPD confirmed to EFF that the Union Square BID granted extended access for live feeds.  

Prior to these revelations, Chris Larsen, the funder of the special assessment district cameras, was on record as describing live access to the camera networks as illegal. “The police can't monitor it live,” said Larsen in a recent interview, “That's actually against the law in San Francisco."

An example of an Avigilon camera in San Francisco's Japantown.

Last year, San Francisco passed a law restricting how and when government agencies may acquire, borrow, and use surveillance technology. Under these rules, police cannot use any surveillance technology without first going through a public process and obtaining the San Francisco Board of Supervisors' approval of a usage policy. The same restrictions apply to police obtaining information or data derived from an external entity's use of surveillance technology. These records demonstrate a violation of San Francisco's Surveillance Technology Ordinance: SFPD's unfettered and indiscriminate live access to a third-party camera network for over a week to monitor protests was exactly the type of harm the ordinance was intended to protect against.

These unregulated camera networks pose huge threats to civil liberties even in ordinary times, let alone during the largest protest movement in U.S. history. In addition to cameras mounted outside of or facing private businesses, many of the special assessment district cameras also provide a full view of public parks, pedestrian walkways, and other plazas where people might congregate, socialize, or protest. These documents show that constant surveillance of these locations can capture—deliberately or accidentally—gatherings protected under the First Amendment. When those gatherings involve protesting the police or other existing power structures, law enforcement access to these cameras could open people up to retribution, harassment, or increased surveillance, and ultimately chill participation in civic society. SFPD must immediately stop accessing special assessment district camera networks to indiscriminately spy on protestors.


EFF to Court: Trump Appointee’s Removal of Open Technology Fund Leadership Is Unlawful

Government Attempts Takeover of Private, Independent Nonprofit Protecting Internet Freedom

San Francisco—The Electronic Frontier Foundation (EFF) today joined a group of 17 leading U.S.-based Internet freedom organizations in telling a federal appeals court that Trump administration appointee Michael Pack has no legal authority to purge leadership at the Open Technology Fund (OTF), a private, independent nonprofit that helps hundreds of millions of people across the globe speak out online and avoid censorship and surveillance by repressive regimes.

EFF, Wikimedia, Human Rights Watch, Mozilla, the Tor Project, and a dozen more groups urged the U.S. Court of Appeals for the D.C. Circuit in a filing to rule that Pack violated the First Amendment right of association and assembly and U.S. law—which both ensure that OTF is independent and separate from the government—when he ousted the fund's president and bipartisan board and replaced them with political appointees. Government-funded OTF filed a lawsuit against Pack last month to stop the takeover.

OTF projects have provided digital tools used by more than 2 billion ordinary citizens, protestors, journalists, and human rights activists in places ranging from Hong Kong, China, to Iran, Venezuela, and Russia to evade government censors and cyberattacks. OTF grants have also supported EFF’s technical security tools like Certbot, the development of the Tor network, the technology underlying the Signal secure messaging app, and much more.

Activists work with OTF and put their trust in the technologies OTF provides because the fund is both perceived to be, and actually has been, independent and free from U.S. government influence, EFF told the court. The government's claim that Pack—the newly installed head of an agency that oversees and financially supports the fund—is authorized to take over OTF undermines Congress's explicit declarations that OTF is not a federal entity, and it sets a dangerous precedent for private organizations receiving government grants.

“In our democracy, the state can’t just decide to take control of a private organization, kick out the top officials, and install its own hand-picked administrator, even if it does provide some funding and support for the work of the organization,” said EFF Executive Director Cindy Cohn. “At risk is not just the independence of a single small nonprofit that receives U.S. government funding. At risk here is years of work facilitating the technical and educational underpinnings of freedom of speech and assembly, a free press, democracy, and digital security in places where oppressive regimes seek to undermine these and other basic rights. Snatching OTF's independence also puts at risk LGBTQ and domestic violence victims worldwide, along with activists and journalists, who need basic security and safety in their communications. This work requires building trust, and ensuring that those who receive support are not targeted as spies or pawns by often hostile foreign dictatorships.”

The good news is that a panel of three circuit court justices this week issued an order preventing Pack from ousting and replacing OTF’s leadership. “The justices correctly recognized that his actions have already put OTF in jeopardy,” said Cohn. “OTF can only do the important work of combating online censorship around the world if it is regarded as independent and not as a mouthpiece ‘for some partisan agenda,’ as the court put it.” The order will stay in place while OTF appeals a lower court ruling siding with the government.

“We’re proud to be fighting alongside OTF, whose work protecting Internet freedom and free speech is so vital right now,” said Cohn. “We urge the appeals court to put an end to the government’s blatant attempt to take control of a private, technical support organization relied upon by those seeking freedom around the world.”

Contact: Cindy Cohn, Executive Director, cindy@eff.org; Corynne McSherry, Legal Director, corynne@eff.org

Mexico's new copyright law puts human rights in jeopardy

Today, the Electronic Frontier Foundation joins a coalition of international organizations in publishing an open letter of opposition to Mexico's new copyright law; the letter lays out the threats that Mexico's new law poses to fundamental human rights and calls upon Mexico's National Human Rights Commission to take action to invalidate this flawed and unsalvageable law.

In a rushed process without meaningful consultation or debate, Mexico's Congress has adopted a new copyright law modeled on the U.S. system, without taking any account of the well-publicized, widely acknowledged problems with American copyright law. The new law was passed as part of a package of legal reforms accompanying the United States-Mexico-Canada Agreement (USMCA), Donald Trump's 2020 successor to 1994's North American Free Trade Agreement (NAFTA).

However, Mexico's implementation of this Made-in-America copyright system imposes far more restrictions than the USMCA demands, and more than either Canada or the USA has imposed on itself. This new copyright regime places undue burdens on Mexican firms and the Mexican people, conferring a permanent trade advantage on the richer, more developed nations of the USA and Canada, while undermining the fundamental rights of Mexicans guaranteed by the Mexican Constitution and the American Convention on Human Rights.

The opposition that sprang up after the swift passage of the new Mexican copyright law faces many barriers, among the most serious of which is a disinformation campaign that (predictably) characterizes critics' claims about U.S. copyright law as "fake news". EFF has more experience with the defects of U.S. copyright law than anyone, and in the coming days we will use that experience to explain in detail how Mexico's copyright law repeats and magnifies the errors that American lawmakers committed in 1998.

In 1998, the U.S. adopted the Digital Millennium Copyright Act (DMCA), a law whose problems the US government has documented in exquisite detail in the decades since. By the U.S. government's own account, the DMCA presents serious barriers to:

  • free expression;
  • national resiliency;
  • economic self-determination;
  • the rights of people with disabilities;
  • cybersecurity;
  • independent repair;
  • education;
  • archiving;
  • access to knowledge; and
  • competition.

Despite these manifest defects, the U.S. government successfully pressured Canada into adopting substantially similar legislation in 2011 with the passage of Canada's Bill C-11.

Both the U.S. and Canada have since taken substantial steps to mitigate the defects in their copyright laws. Canada, in particular, used the USMCA as an occasion to rebalance its copyright law, removing some of the onerous terms that Mexico has now adopted.

In a series of posts over the coming days, we will elucidate the ways in which the Mexican copyright bill imposes undue and unique burdens on Mexico, Mexican people, and Mexican industry, and what lessons Mexico should have learned from the U.S. and Canadian experience with this one-sided, overreaching version of copyright for the digital world.

In 1998, the US tragically failed to see the import of getting the rules for the Internet right, passing a copyright law that treated the Internet as a glorified entertainment medium. When Canada adopted its law in 2011, it had no excuse for missing the fact that the Internet had become the world's digital nervous system, a medium where we transact our civics and politics; our personal, familial and romantic lives; our commerce and employment; our health and our education.

But these failings pale in comparison to the dereliction of Mexican lawmakers in importing this system to Mexico. The pandemic and its lockdown made it clear that everything we do not only involves the Internet: it requires the Internet. In today's world, it is absolutely inexcusable for a lawmaker to regulate the net as though it were nothing more than a glorified video-on-demand service.

Mexico's prosperity depends on getting this right. Even more: the human rights of the Mexican people require that Mexico's Congress or the Mexican courts get this right.

Read the letter from EFF, Derechos Digitales and NGOs around the world to Mexico’s National Human Rights Commission here.

If you are based in Mexico, we urge you to participate in R3D's campaign "Ni Censura ni Candados" and send a letter to Mexico's National Commission for Human Rights asking it to invalidate this flawed new copyright law. R3D will ask for your name, email address, and your comment, which will be subject to R3D's privacy policy.


EFF Joins HOPE 2020

EFF staff members will present some of our latest work at 2600 Magazine's biennial Hackers on Planet Earth (HOPE) conference beginning this weekend. HOPE is a diverse hacker event that has drawn thousands of tinkerers, security researchers, activists, artists, and makers since 1994. In a departure from the infamous Hotel Pennsylvania in New York, this first-ever virtual edition of HOPE will run an epic 9 days from July 25 through August 2.

EFF's presentations will cover diverse online rights topics including facial recognition, government surveillance powers, digital identity standards and specifications, security dangers in Amazon's Ring, and much more. HOPE registrants will also be able to participate in free-form question and answer sessions with EFF and members of the Electronic Frontier Alliance.

HOPE keynote speakers include EFF's Executive Director Cindy Cohn speaking on August 2nd at 2pm EST, as well as author and EFF Special Advisor Cory Doctorow on July 25 at 4pm EST.

EFF Presentations

Meet the EFA: A Discussion on Grassroots Organizing for Digital Privacy, Security, Free Expression, Creativity, and Access to Knowledge
nash
Sunday July 26 at 1pm EST on the Public Talk Stream
Founded by EFF, the Electronic Frontier Alliance (EFA) is a grassroots network of community and campus organizations across the United States. Join representatives from EFF and EFA-affiliated groups for this panel discussion on community-based tech advocacy and working within your community to educate and empower neighbors in the fight for data privacy and digital rights.

Reform or Expire? The Battle to Reauthorize FISA Programs
India McKinney, Andrew Crocker
Monday July 27 at 4pm EST on the Public Talk Stream
On March 15, 2020, Section 215 of the PATRIOT Act - a surveillance law with a rich history of government overreach and abuse - expired. Along with two other PATRIOT Act provisions, Section 215 lapsed after lawmakers failed to reach an agreement on a broader set of reforms to the Foreign Intelligence Surveillance Act (FISA).

In the week before the law expired, the House of Representatives passed the USA FREEDOM Reauthorization Act, which would have extended Section 215 for three more years, along with some modest reforms. After negotiations, the Senate passed a slightly amended version of the bill, but after a veto threat from the President, the House of Representatives failed to pass it. The provisions remain expired, but the question remains - for how long? And what will reform look like?

In this discussion, India and Andrew will explain the political factors behind this unusual legislative journey, as well as the policy implications of these proposals.

Mobile First Digital Identities and Your Privacy
Alexis Hancock
Tuesday July 28 at 8pm EST on the Public Talk Stream
"Mobile First" is more than a web developer's mantra chanted from 2010. It also means that many people now visit websites and use services from their mobile devices more than on laptops and desktops. Recently, several proposals and published models for establishing big parts of our lives through our mobile devices have been discussed. Big proposals include mobile driver's licenses, mobile health credentials, and other forms of digitized documentation such as university degrees. Recently published and proposed standards include the W3C's verifiable credentials data model and the ISO's 18013-5 mobile driver's license compliance. This talk discusses the privacy concerns that surround these ideas, test cases, and the trajectory of digitized identification.

Ring's Wrongs: Surveillance Capitalism, Law Enforcement Contracts, and User Tracking
Bill Budington
Wednesday July 29 at 12pm EST on the Public Talk Stream
This talk is going to catalogue Ring's Wrongs and EFF's campaign against these practices - practices that not only facilitate the overreach of law enforcement and injure user privacy, but also provide the clearest example of surveillance capitalism, a new frontier of profiteering.

When Cops Get Hacked: Lessons (Un)Learned from a Decade of Law Enforcement Breaches
Dave Maass, Madison Vialpando, Emma Best
Thursday July 30 at 3pm EST on the Public Talk Stream
More than 125 U.S. law enforcement agencies have suffered some form of hack or data breach over the last ten years. Journalism school graduate Madison Vialpando has been working with the Electronic Frontier Foundation to build a dataset compiling all the ransomware attacks, DDoS attacks, physical data thefts, and servers and surveillance technologies exposed online. In this talk, she will explain how the dataset works, the trends revealed by the data, some of the most interesting case studies, and whether law enforcement is actually learning anything from these incidents. Dave Maass will talk about the Electronic Frontier Foundation's security research into automated license plate readers and other unsecured surveillance tech, while transparency activist Emma Best of Distributed Denial of Secrets will provide an overview of BlueLeaks - one of the largest dumps of internal police documents in history.

Who Has Your Face? The Fight Against U.S. Government Agencies' Use of Face Recognition
Jason Kelley, Matthew Guariglia
Friday July 31 at 12pm EST on the Public Talk Stream
The fight against government use of face recognition technology is an important one, and one that civil liberties and other groups have come at from many different angles. Unfortunately, the technology is already out there - in use - and endangering people's privacy. Due to differing laws, regulations, and data-sharing agreements between federal, state, and local agencies across the country, U.S. residents and visitors frequently have their images not only collected and stored for facial recognition purposes by the government, but often also secretly shared among dozens of agencies. Because of the complexity of these laws and agreements, it's very difficult to learn who exactly has your image. It can take a hacker mindset to find out where your image is - FOIAs, online research, even contacting individuals directly at government agencies. Using all of these methods, EFF developed a new interactive website to show users which agencies might be using their image for face recognition - and to spur them to act. The speakers will explain the issues with facial recognition technology; what sort of advocacy has been effective in the past; and where we stand on federal, state, and local regulations. They will also discuss the research, design, and creation of the whohasyourface.org website and its effect on laws and advocacy, and suggest ways that others can build on this research.

Ask the EFF: The Year in Digital Civil Liberties
Alexis Hancock, India McKinney, Kurt Opsahl, Naomi Gilens, Rory Mir
Saturday August 1 at 12pm EST on the Public Talk Stream
Get the latest information about how the law is racing to catch up with technological change from staffers at the Electronic Frontier Foundation, the nation's premier digital civil liberties group fighting for freedom and privacy in the computer age.

Legal Inquiries for Security Researchers

EFF staff attorneys are committed to supporting the computer security community. If you have legal concerns regarding an upcoming presentation, or sensitive infosec research that you are conducting for HOPE or at any time, please email info@eff.org and we will do our best to get you the help that you need.

