Platform Censorship Won't Fix the Internet

The House Judiciary Committee will hold a hearing on “The Filtering Practices of Social Media Platforms” on April 26. Public attention to this issue is important: calls for online platform owners to police their members’ speech more heavily inevitably lead to legitimate voices being silenced online. Here’s a quick summary of a written statement EFF submitted to the Judiciary Committee in advance of the hearing.

Our starting principle is simple: Under the First Amendment, social media platforms and other online intermediaries have the right to decide what kinds of expression they will carry. But just because companies can act as judge and jury doesn’t mean they should.

We all want an Internet where we are free to meet, create, organize, share, associate, debate and learn. We want to make our voices heard in the way that technology now makes possible. No one likes being lied to or misled, or seeing hateful messages directed against us or flooded across our newsfeeds. We want our elections free from manipulation and for the speech of women and marginalized communities not to be silenced by harassment.

But we won’t make the Internet fairer or safer by pushing platforms into ever more aggressive efforts to police online speech. When social media platforms adopt heavy-handed moderation policies, the unintended consequences can be hard to predict. For example, Twitter’s policies on sexual material have resulted in posts on sexual health and condoms being taken down. YouTube’s bans on violent content have resulted in journalism on the Syrian war being pulled from the site. It can be tempting to attempt to “fix” certain attitudes and behaviors online by placing increased restrictions on users’ speech, but in practice, web platforms have had more success at silencing innocent people than at making online communities healthier.

Indeed, for every high-profile case of despicable content being taken down, there are many, many more stories of people in marginalized communities who are targets of persecution and violence. The powerless struggle to be heard in the first place; social media can and should help change that reality, not reinforce it.

That’s why we must remain vigilant when platforms decide to filter content. We are worried about how platforms are responding to new pressures to filter the content on their services. Not because there’s a slippery slope from judicious moderation to active censorship, but because we are already far down that slope.

To avoid slipping further, and maybe even reverse course, we’ve outlined steps platforms can take to help protect and nurture online free speech. They include:

  • Better transparency
  • Fostering innovation and competition, e.g., by promoting interoperability
  • Clear notice and consent procedures
  • Robust appeal processes
  • Greater user control
  • Protection for anonymity

You can read our statement here for more details.

For its part, rather than instituting more mandates for filtering or speech removal, Congress should defend safe harbors, protect anonymous speech, encourage platforms to be open about their takedown rules and to follow a consistent, fair, and transparent process, and avoid promulgating any new intermediary requirements that might have unintended consequences for online speech.

EFF was invited to participate in this hearing and we were initially interested. However, before we confirmed our participation, the hearing shifted in a different direction. We look forward to engaging in further discussions with policymakers and the platforms themselves.


California Can Build Trust Between Police and Communities By Requiring Agencies to Publish Their Policies Online

If we as citizens are better informed about police policies and procedures, and can easily access and study those materials online, that will lead to greater accountability and better relations between our communities and the police departments that serve us. EFF supports a bill in the California legislature that aims to do exactly that.

S.B. 978, introduced by Sen. Steven Bradford, would require law enforcement agencies to post online their current standards, practices, policies, operating procedures, and education and training materials. As we say in our letter of support:

[The bill] will help address the increased public interest and concern about police policies in recent years, including around the issues of use of force, less-lethal weapons, body-worn cameras, anti-bias training, biometric identification and collection, and surveillance (such as social media analysis, automated license plate recognition, cell-site simulators, and drones).

Additionally, policies governing police activities should be readily available for review and scrutiny by the public, policymakers, and advocacy groups. Not only will this transparency measure result in well-informed policy decisions, but it will also provide the public with a clearer understanding of what to expect and how to behave during police encounters.

Last year, Gov. Jerry Brown vetoed a previous version of this bill, which had broad support from both civil liberties groups and law enforcement associations. The new bill is meant to address his concerns about the earlier bill's scope, and removes a few state law enforcement agencies, such as the Department of Alcoholic Beverage Control and the California Highway Patrol, from the law's purview.

We hope that the legislature will once again pass this important bill, and that Gov. Brown will support transparency and accountability between law enforcement and Californians.


Minister Dekker: halting the digitization of the judiciary is not an option

Digitizing the judiciary is, according to Minister Dekker (for Legal Protection), necessary and cannot be stopped. He does believe that close attention must be paid to the way the Judiciary is governed as an organization: it must be clear where responsibilities and powers lie. Special attention is also needed for the pace and ambition level of the digitization effort, the minister said during a debate in the Tweede Kamer (House of Representatives) today.

(Illustrative photo: Tweede Kamer)

Besides digitization, the standing parliamentary Committee for Justice and Security discussed court fees, socially effective justice, and the financing of the judiciary, among other topics, with the minister during this so-called general consultation.

Digitization

Earlier this month the Judiciary announced a reset of KEI, its modernization and digitization program: the focus is shifting from automating legal procedures to increasing digital accessibility. In preparation for the parliamentary debate, the Judiciary informed the Kamer about this subject in a technical briefing last week.

The minister wants clarity from the Judiciary on three points: whether everyone within the Judiciary is aligned on digitization, whether the right administrators, managers and IT specialists are in the right positions, and whether the governance of the digitization program is sufficiently clear. During today's debate the minister said that it is first and foremost the Judiciary's own responsibility to answer these questions, but he also announced that he wants more external oversight and greater influence for his ministry. Dekker repeatedly stressed that he will guard against compromising the independence of the Judiciary in the process. Before the summer, Dekker intends to inform the Kamer about the state of affairs on this dossier.

Financing

Because the expected digitization benefits have not materialized and the number of court cases is declining, the Judiciary is in financial difficulty. Minister Dekker told the Kamer that it was the Judiciary's own choice to meet earlier budget cuts with the anticipated future proceeds of the digitization program. Now that these benefits are delayed, the minister first wants to hear from the Judiciary what the possible solutions are.

Access to the courts

At various points in the debate, the Kamer called attention to access to the courts. There were positive reactions to the initiatives the Judiciary is developing in the area of socially effective justice, which aims to use the law more effectively and make justice more accessible to everyone.

The Kamer also asked the minister for his views on the effects of court fees (the costs of bringing a case to court) on access to the courts. In his annual message, Frits Bakker, chairman of the Council for the Judiciary, recently argued for lowering court fees: many people can no longer afford to go to court, which undermines citizens' legal protection. The minister promised the Kamer that he would look into this subject again.

The Kamer also discussed the Evaluation of the Judicial Map Revision Act (rijksoverheid.nl). This act reduced the number of district courts from 19 to 11 and the number of courts of appeal from 5 to 4. During this discussion, attention was drawn to the Oskam motion, submitted in 2016, which holds that further closure of court buildings is not on the table.


Must you include an opt-out in a LinkedIn recruitment message?

IusMentis - 25 April 2018 - 8:24am

A reader asked me:

Under the GDPR you are required, with every direct marketing communication, to separately point people to their right to opt out (object). How does that work for LinkedIn messages, where the whole point of the service is contacting people for, in my case, recruitment? It is completely impractical to include the right to object under every message on such a platform. Apart from the character limit and the way it taints perfectly legitimate and often welcome contact, the legislator can hardly have intended to interfere in contact that people have explicitly signed up for. Yet I have not been able to find an exception, and it seems (however theoretical) still mandatory to point people to their right to object on LinkedIn as well.

From 25 May onwards you do indeed have the right to object, i.e. to opt out, of every form of direct marketing. If you no longer want such targeted advertising, you can object at any time, simply and free of charge, and the marketing must then stop.

What is special about this right to object is that it must be mentioned in the direct marketing messages themselves, separately from any other information. You may not tuck it away in your privacy statement; it must be clearly and immediately findable, so that people can exercise it directly. Admittedly, that is awkward if you want to introduce yourself in a short message via LinkedIn's chat function and angle for a business coffee with a profit motive.

I think, however, that LinkedIn works differently. On your profile you can (must?) indicate whether you are open to recruitment ("Beschikbaar voor nieuwe carrièrekansen" - available for new career opportunities). Within the context of LinkedIn, you may interpret that as consent. I think. And that means people can send those messages on the legal basis of consent, in which case you do not need to offer a right to object at all. Consent can, after all, be withdrawn; the right to object does not come into play. And there is no obligation to state separately and explicitly that consent can be withdrawn.

Honestly, I am not sure this is the biggest problem under the GDPR. It also solves itself: whoever sends those messages too aggressively gets banned from LinkedIn. And then, legally speaking, there is no further problem.

Arnoud


Supreme Court Upholds Patent Office Power to Invalidate Bad Patents

In one of the most important patent decisions in years, the Supreme Court has upheld the power of the Patent Office to review and cancel issued patents. This power to take a “second look” is important because, compared to courts, administrative avenues provide a much faster and more efficient means for challenging bad patents. If the court had ruled the other way, the ruling would have struck down various patent office procedures and might even have resurrected many bad patents. Today’s decision [PDF] in Oil States Energy Services, LLC v. Greene’s Energy Group, LLC is a big win for those that want a more sensible patent system.

Oil States challenged the inter partes review (IPR) procedure before the Patent Trial and Appeal Board (PTAB). The PTAB is a part of the Patent Office and is staffed by administrative patent judges. Oil States argued that the IPR procedure is unconstitutional because it allows an administrative agency to decide a patent’s validity, rather than a federal judge and jury.

Together with Public Knowledge, Engine Advocacy, and the R Street Institute, EFF filed an amicus brief [PDF] in the Oil States case in support of IPRs. Our brief discussed the history of patents being used as a public policy tool, and how Congress has long controlled how and when patents can be canceled. We explained how the Constitution sets limits on granting patents, and how IPR is a legitimate exercise of Congress’s power to enforce those limits.

Our amicus brief also explained why IPRs were created in the first place. The Patent Office often does a cursory job reviewing patent applications, with examiners spending an average of about 18 hours per application before granting 20-year monopolies. IPRs allow the Patent Office to make sure it didn't make a mistake in issuing a patent. The process also allows public interest groups to challenge patents that harm the public, like EFF's successful challenge to Personal Audio's podcasting patent. (Personal Audio has filed a petition for certiorari asking the Supreme Court to reverse, raising some of the same grounds argued by Oil States. That petition will likely be decided in May.)

The Supreme Court upheld the IPR process in a 7-2 decision. Writing for the majority, Justice Thomas explained:

Inter partes review falls squarely within the public rights doctrine. This Court has recognized, and the parties do not dispute, that the decision to grant a patent is a matter involving public rights—specifically, the grant of a public franchise. Inter partes review is simply a reconsideration of that grant, and Congress has permissibly reserved the PTO’s authority to conduct that reconsideration. Thus, the PTO can do so without violating Article III.

Justice Thomas noted that IPRs essentially serve the same interest as initial examination: ensuring that patents stay within their proper bounds.

Justice Gorsuch, joined by Chief Justice Roberts, dissented. He argued that only Article III courts should have the authority to cancel patents. If that view had prevailed, it likely would have struck down IPRs, as well as other proceedings before the Patent Office, such as covered business method review and post-grant review. It would also have left the courts with difficult questions regarding the status of patents already found invalid in IPRs. 

In a separate decision [PDF], in SAS Institute v. Iancu, the Supreme Court ruled that, if the PTAB institutes an IPR, it must decide the validity of all challenged claims. EFF did not file a brief in that case. While the petitioner had tenable arguments under the statute (indeed, it won), the result seems to make the PTAB’s job harder and creates a variety of problems (what is supposed to happen with partially-instituted IPRs currently in progress?). Since it is a statutory decision, Congress could amend the law. But don’t hold your breath for a quick fix.

Now that IPRs have been upheld, we may see a renewed push from Senator Coons and others to gut the PTAB’s review power. That would be a huge step backwards. As Justice Thomas explained, IPRs protect the public’s “paramount interest in seeing that patent monopolies are kept within their legitimate scope.” We will defend the PTAB’s role serving the public interest.


Stop Egypt’s Sweeping Ridesharing Surveillance Bill

The Egyptian government is currently debating a bill that would compel all ride-sharing companies to store any Egyptian user data within Egypt. It would also create a system giving the authorities real-time access to passenger and trip information. If passed, companies such as Uber and its Dubai-based competitor Careem would be forced to grant unfettered direct access to their databases to unspecified security authorities. Such a sweeping surveillance measure is particularly ripe for abuse in a country known for its human rights violations, including attempts to use surveillance against civil society. The bill is expected to pass a final vote before Egypt's House on May 14th or 15th.

Article 10 of the bill requires companies to relocate their servers containing all Egyptian users’ information to within the borders of the Arab Republic of Egypt. Compelled data localization has frequently served as an excuse for enhancing a state’s ability to spy on its citizens.  

Even more troubling, article 9 of the bill forces these same ride-sharing companies to electronically link their local servers directly to unspecified authorities, from police to intelligence agencies. Direct access to a server would provide the Egyptian government unrestricted, real-time access to data on all riders, drivers, and trips. Under this provision, the companies themselves would have no ability to monitor the government’s use of their network data.

Effective computer security is hard, and no system will be free of bugs and errors.  As the volume of ride-sharing usage increases, risks to the security and privacy of ridesharing databases increase as well. Careem just admitted on April 23rd that its databases had been breached earlier this year. The bill’s demand to grant the Egyptian government unrestricted server access greatly increases the risk of accidental catastrophic data breaches, which would compromise the personal data of millions of innocent individuals. Careem and Uber must focus on strengthening the security of their databases instead of granting external authorities unfettered access to their servers.

Direct access to the databases of any company without adequate legal safeguards undermines the privacy and security of innocent individuals, and is therefore incompatible with international human rights obligations. For any surveillance measure to be legal under international human rights standards, it must be prescribed by law. It must be “necessary” to achieve a legitimate aim and “proportionate” to the desired aim. These requirements are vital in ensuring that the government does not adopt surveillance measures which threaten the foundations of a democratic society.

The European Court of Human Rights, in Zakharov v. Russia, made clear that direct access to servers is prone to abuse:

“...a system which enables the secret services and the police to intercept directly the communications of each and every citizen without requiring them to show an interception authorisation to the communications service provider, or to anyone else, is particularly prone to abuse.”                                                                                             

Moreover, the Court of Justice of the European Union (CJEU) has also discussed the importance of independent authorization prior to government access to electronic data. In Tele2 Sverige AB v. Post- och telestyrelsen, the court held:

“it is essential that access of the competent national authorities to retained data should, as a general rule, (...) be subject to a prior review carried out either by a court or by an independent administrative body, and that the decision of that court or body should be made following a reasoned request by those authorities submitted...”.

Unrestricted direct access to the data of innocent individuals using ridesharing apps, by its very nature, eradicates any consideration of proportionality and due process. Egypt must turn back from the dead-end path of unrestricted access, and uphold its international human rights obligations. Sensitive data demands strong legal protections, not an all-access pass. Hailing a rideshare should never hand your government blanket access to follow you. We hope Egypt's House of Representatives rejects the bill.


California Bill Would Guarantee Free Credit Freezes in 15 Minutes

After the shocking news of the massive Equifax data breach, which has now ballooned to jeopardize the privacy of nearly 148 million people, many Americans are rightfully scared and struggling to figure out how to protect themselves from the misuse of their personal information.

To protect against credit fraud, many consumer rights and privacy organizations recommend placing a ‘credit freeze’ with the credit bureaus. When criminals seek to use breached data to borrow money in the name of a breach victim, the potential lender normally runs a credit check with a credit bureau. If there’s a credit freeze in place, then it’s harder to obtain the loan.

But placing a credit freeze can be cumbersome, time-consuming, and costly. The process can also vary across states. It can be an expensive time-suck if a consumer wants to place a freeze across all credit bureaus and for all family members.

Fortunately, California now has an opportunity to dramatically streamline the credit freeze process for its residents, thanks to a state bill introduced by Sen. Jerry Hill, S.B. 823. EFF is proud to support it.

The bill would allow Californians to place, temporarily lift, and remove credit freezes easily and at no charge. Credit reporting agencies would be required to carry out the request in 15 minutes or less if the consumer uses the company's website or mobile app.

The response time for written requests would also be cut, from three days to just 24 hours. Additionally, credit reporting agencies would have to offer consumers the option of passing credit freeze requests along to other credit reporting agencies, saving Californians time and reducing the likelihood of the misuse of their information.

You can read our support letter for the bill here.

Free and convenient credit freezes are becoming even more important as many consumer credit reporting agencies push their inferior "credit lock" products. These products don't offer the same protections built into credit freezes by law, and to use some of them, consumers have to agree to let their personal information be used for targeted ads.

The bill has passed the California Senate and will soon be heading to the Assembly for a vote. EFF endorses this effort to empower consumers to protect their sensitive information.


The neverending story: Article 11 must be deleted

International Communia Association - 24 April 2018 - 2:54pm

We still can't believe how bad MEP Axel Voss' latest plan for the press publishers right is. At the end of March, MEP Voss released his proposal for a compromise on Article 11, and the changes he is proposing are even more radical and more broken than anything we've seen thus far. It's time for everyone to stand up and say again, "enough is enough."

Today, Communia and 55 other organizations, including associations of European public institutions, companies and start-ups, journalists and libraries, news publishers and civil society organisations, sent a letter to MEP Voss trying once more to present the obvious and well-documented arguments against the introduction of a new right for press publishers. The signatories hold that a neighbouring right for press publishers and news agencies will neither support quality journalism nor foster the free press. Rather, it will lead to massive collateral damage and a lose-lose situation for all stakeholders involved.

Unfortunately, MEP Voss has his very own definition of the term "compromise". With regard to Article 11 this is especially unfortunate, since it is one of the few contentious issues where a real compromise has already been identified: the approach presented earlier by MEP Voss' predecessor, MEP Comodini (and also contemplated during the Estonian presidency), which would rely on a presumption that publishers are the rights holders, thus making it easier for these entities "to conclude licences and to seek application of the measures, procedures and remedies." But this idea was simply abandoned by the current rapporteur. The signatories of the letter agree that, given the empirical evidence presented thus far that the right will not accomplish what it sets out to do, not to mention the detrimental effects on journalism and access to information, Article 11 must be deleted.


May you use comments for scientific research?

IusMentis - 24 April 2018 - 8:22am

A reader asked me:

I want to do scientific research into stylistic developments in language, and for that I want to use, among other things, comments from various large forums. The administrators say this is not allowed because of copyright, but is there really copyright on the often short and simple comments you find everywhere? And isn't there an exception for scientific study?

From a copyright perspective it is fairly simple: if a comment reflects an "own intellectual creation", it is protected by copyright. A bare "+1" or similar remark is therefore free, but a sentence or two already passes that test. This holds even if you do not know who the commenter is and cannot reach them.

As a site, you may assume that you are granted a license to use those comments, but always in combination with the source article. Including the comments in a corpus is therefore problematic, in theory: that is a use falling outside the context of the site, and thus not permitted by the commenter under the license.

Sometimes a site demands a broader license, but that license is often limited to the forum administrator alone, who may then do anything with the comments, including printing them on mugs or compiling them into books. In that case, the forum administrator could allow a researcher to use the comments. But you really have to check that in advance, because it must have been stated explicitly.

Legally speaking, you can argue that scientific use is not really competing or unfair use. No money is made from the comments, and the comments themselves are not really even distributed. They are - as here - chopped into words and used for things like sentiment analysis, detection of language trends, and so on. Yet even such analysis formally requires permission, because Dutch copyright law has no general "fair use" right, and the Copyright Act nowhere explicitly grants a statutory right to perform scientific research on copyrighted works.
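
A purely technical aside on what "chopped into words" means in practice: a corpus of this kind typically stores derived tokens and frequency counts rather than republishing the readable comments. A minimal sketch of that kind of processing, with made-up example comments (all data here is illustrative):

```python
from collections import Counter
import re

# Made-up example comments (illustrative data only).
comments = [
    "+1",
    "Totally agree, this is a good point.",
    "Good point, although I think it is more nuanced.",
]

def tokenize(text):
    """Chop a comment into lowercase words, discarding punctuation."""
    return re.findall(r"\w+", text.lower())

# The corpus keeps only derived tokens and counts; the comments
# themselves are never redistributed in readable form.
corpus_counts = Counter(word for c in comments for word in tokenize(c))
print(corpus_counts.most_common(3))  # -> [('is', 2), ('good', 2), ('point', 2)]
```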

On top of that, in practice you rarely if ever see commenters make an issue of this. Often they are unknown, or they do not want to associate themselves with those remarks; in a lawsuit you would, after all, have to identify yourself. Plus, there is no money to be had, because what fee could you have charged for the right to use your comment? Only the lawyers would get richer from that.

In short: it is not allowed, but in practice I see virtually no objection to doing it.

Arnoud


Net Neutrality Did Not Die Today

When the FCC’s “Restoring Internet Freedom Order,” which repealed net neutrality protections the FCC had previously issued, was published on February 22nd, it was interpreted by many to mean it would go into effect on April 23. That’s not true, and we still don’t know when the previous net neutrality protections will end.

On the Federal Register's website—the official daily journal of the United States Federal Government, which publishes all proposed and adopted rules—the so-called "Restoring Internet Freedom Order" has an "effective date" of April 23. But that only applies to a few cosmetic changes. The majority of the rules governing the Internet—the prohibitions on blocking, throttling, and paid prioritization—remain in place.

Before the FCC's repeal of those protections can take effect, the Office of Management and Budget has to approve the new order, which it hasn't done. Once that happens, we'll get another notice in the Federal Register. And that's when we'll know for sure when ISPs will be able to legally start changing their behavior.

If your Internet experience hasn't changed today, don't take that as a sign that ISPs aren't going to start acting differently once the rule actually does take effect; for example, Comcast changed the wording of its net neutrality pledge almost immediately after last year's FCC vote.

Net neutrality protections didn’t end today, and you can help make sure they never do. Congress can still stop the repeal from going into effect by using the Congressional Review Act (CRA) to overturn the FCC’s action. All it takes is a simple majority vote held within 60 legislative working days of the rule being published. The Senate is only one vote short of the 51 votes necessary to stop the rule change, but there is a lot more work to be done in the House of Representatives. See where your members of Congress stand and voice your support for the CRA here.

Take Action

Save the net neutrality rules


Stupid Patent of the Month: Suggesting Reading Material

Online businesses—like businesses everywhere—are full of suggestions. If you order a burger, you might want fries with that. If you read Popular Science, you might like reading Popular Mechanics. Those kinds of suggestions are a very old part of commerce, and no one would seriously think they are a patentable technology.

Except, apparently, for Red River Innovations LLC, a patent troll that believes its patents cover the idea of suggesting what people should read next. Red River filed a half-dozen lawsuits in East Texas throughout 2015 and 2016. Some of those lawsuits were against retailers like home improvement chain Menards, clothier Zumiez, and cookie retailer Ms. Fields. Those stores all got sued because they have search bars on their websites.

In some lawsuits, Red River claimed the use of a search bar infringed US Patent No. 7,958,138. For example, in a lawsuit against Zumiez, Red River claimed [PDF] that “after a request for electronic text through the search box located at www.zumiez.com, the Zumiez system automatically identifies and graphically presents additional reading material that is related to a concept within the requested electronic text, as described and claimed in the ’138 Patent.” In that case, the “reading material” is text like product listings for jackets or skateboard decks.

In another lawsuit, Red River asserted a related patent, US Patent No. 7,526,477, which is our winner this month. The ’477 patent describes a system of electronic text searching, where the user is presented with “related concepts” to the text they’re already reading. The examples shown in the patent display a kind of live index, shown to the right of a block of electronic text. In a lawsuit against Infolinks, Red River alleged [PDF] infringement because “after a request for electronic text, the InText system automatically identifies and graphically presents additional reading material that is related to a concept within the requested electronic text.”   

Suggesting and providing reading material isn’t an invention, but rather an abstract idea. The final paragraph of the ’477 patent’s specification makes it clear that the claimed method could be practiced on just about any computer. Under the Supreme Court’s decision in Alice v. CLS Bank, an abstract idea doesn’t become eligible for a patent merely because you suggest performing it with a computer. But hiring lawyers to make this argument is an expensive task, and it can be daunting to do so in a faraway locale, like the East Texas district where Red River has filed its lawsuits so far. That venue has historically attracted “patent troll” entities that see it as favorable to their cases.
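
To make the abstractness concrete, here is a minimal sketch of "identify concepts in requested text and suggest related reading material." This is our own illustration, not the patent's actual embodiment, and every name and data item in it is hypothetical; it runs on just about any computer, in a couple dozen lines:

```python
from collections import Counter
import re

# A toy library of "reading material" (hypothetical example data).
LIBRARY = {
    "skate-decks": "skateboard decks wheels trucks grip tape",
    "jackets": "winter jackets coats hoods zippers",
    "cookies": "chocolate chip cookies baking butter sugar",
}

def keywords(text, n=5):
    """Extract the n most frequent words as stand-in 'concepts'."""
    words = re.findall(r"[a-z]+", text.lower())
    return [w for w, _ in Counter(words).most_common(n)]

def suggest_related(requested_text, library=LIBRARY):
    """Rank library entries by how many 'concepts' they share with
    the requested text -- essentially the whole claimed method."""
    concepts = set(keywords(requested_text))
    scores = {
        title: len(concepts & set(keywords(doc)))
        for title, doc in library.items()
    }
    return sorted(scores, key=scores.get, reverse=True)

print(suggest_related("looking for new skateboard decks and wheels"))
# -> ['skate-decks', 'jackets', 'cookies']
```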

The ’477 patent is another of the patents featured in Unified Patents’ prior art crowdsourcing project Patroll. If you know of any prior art for the ’477 patent, you can submit it (before April 30) to Unified Patents for a possible $2,000 prize.

The good news for anyone being targeted by Red River today is that it's not going to be as easy to drag businesses from all over the country into a court of their choice. The Supreme Court's TC Heartland decision, combined with a Federal Circuit case called In re Cray, means that patent owners have to sue in a venue where defendants actually do business.

It’s also a good example of why fee-shifting in patent cases, and upholding the case law of the Alice decision, are so important. Small companies using basic web technologies shouldn’t have to go through a multi-million dollar jury trial to get a chance to prove that a patent like the ’477 is abstract and obvious.


Council: Member States close to adopting a copyright maximalist position

International Communia Association - 23 April 2018 - 12:41pm

It is still unclear if the Bulgarian Council presidency will manage to get the member states in line to agree on a general negotiation position at the COREPER meeting scheduled for this Thursday. Under pressure from the Bulgarian presidency (or rather those who put pressure on them), the member states seem to be moving towards a common position. Last week’s working group meeting appears to have resolved most of the controversies around Article 3a (optional text and data mining exception) and Article 11 (press publishers rights). Article 13 remains the main sticking point, preventing the member states from agreeing on a negotiation mandate.

So what's the status with regard to these three articles, and where do the member states stand on them?

Article 13: Continued divisions over the scope of #censorshipfilters

In spite of the significant doubts that many member states expressed last year regarding the measures targeting open online platforms contained in Article 13, the article has survived the subsequent rounds of discussions in the Council nearly intact. This seems mainly due to a pivot by the German government, which is now backing censorship filters – even though the coalition agreement that underpins the current government is highly critical of such measures.

While there is agreement in principle, the Member States are still split on the scope of the article. The maximalist axis of France, Spain, Portugal and Italy is backing a broad implementation of the article, while most other member states (including Germany) seem to favour narrowing down the scope of the services that would be required to filter. Lack of consensus on the scope of Article 13 seems to be the main obstacle preventing the Bulgarian presidency from closing the file.

Article 13 map (April 2018)
Member States (in red) supporting the introduction of censorship filters for online platforms (own research)

As we have argued before, rushing Article 13 across the finish line carries substantial risks for the European internet economy and for our freedom of creative expression. The only sure way to prevent collateral damage from the music industry's attack on open platforms is to send Article 13 back to the drawing board and delete it from the proposed directive. While this seems highly unlikely at this stage, it is important to support those Member States that continue to resist the pressure exerted by the countries of the maximalist axis.

Article 11: Too little too late

Earlier this year, the member states conceded to pressure from Germany and settled on the introduction of a publishers right, even though there is a strong academic consensus that such a right is inefficient and will have detrimental effects on libraries and, more generally, on freedom of expression. Having lost the battle for the more sensible approach of introducing a presumption of representation, a large number of member states are trying to limit the scope of the press publishers right.

Last week's working group meeting resulted in some small improvements. A majority of the Member States insisted on reducing the term of protection to a more reasonable one year and on maintaining originality as a criterion for protection. The latter is certainly better than simply granting protection to snippets above a certain length (as Germany, France and Estonia have demanded). While in theory an originality requirement seems a step in the right direction, we are not looking forward to the inevitable wave of court cases on the originality of headlines and introductory sentences. Granting an additional layer of rights to press publishers remains a terrible idea, no matter how short the term of protection and how original the headlines they produce.

Article 11 map (April 2018)
Member States (in red) supporting the introduction of a press publishers right in the EU (own research)

Article 3a: An incomplete fix

After having agreed on a version of Article 3 that supports the Commission's fundamentally flawed approach to Text and Data Mining (a mandatory exception that allows text and data mining for research organisations and for research purposes only, and would require everyone else to obtain licenses before they can mine materials that they already have access to), the more progressive member states have placed their hopes behind an additional optional exception that would let anyone mine material for any purpose, unless right holders have expressly reserved the right to prohibit TDM.

While this would certainly improve the situation in member states that implement the exception, the limitation of the exception to temporary copies is highly problematic (see our earlier analysis here). Moreover, the approach of adding an optional exception runs somewhat contrary to the directive's objective of contributing to a digital single market, as it will further fragment user rights in EU copyright law.

In its current form Article 3a is opposed by a rather curious coalition of the maximalist axis (FR, ES, PT, IT) and Estonia. Together these countries have a blocking majority, but it seems that some of them are willing to revise their position as they finally start to understand that it will be difficult to position the EU as a first-class AI research location if techniques fundamental to AI, such as text and data mining, are limited by copyright law. With such insights gaining ground, the member states should have the guts to re-open the discussion about Article 3 and expand the scope of the mandatory exception to allow TDM for any purpose by anyone. Anything else will constitute willful sabotage of the future of technological innovation in the EU.

The post Council: Member States close to adopting a copyright maximalist position appeared first on International Communia Association.

Number of court cases fell sharply last year

The number of court cases fell sharply in almost all areas of law last year, according to the Judiciary's annual report published today. The number of subdistrict commercial cases in particular dropped considerably: 81,000 fewer such cases were brought last year than a year earlier. Subdistrict commercial cases often concern debt collection after unpaid bills, for example for health insurance or a telephone subscription.

Reasons for the decline

For some areas of law, the decline is readily explained: improved payment behavior after the economic crisis, and the increased availability of out-of-court dispute resolution (such as mediation). But the deterrent effect of relatively high court fees (the fees that must be paid to bring a case to court) also plays a role. In his annual message, Frits Bakker, chairman of the Council for the Judiciary, calls for these court fees to be lowered, because people must not be driven to avoid the courts for financial reasons. For other areas of law, such as administrative law, it is not entirely clear why the number of cases has fallen so sharply.

Rise in the number of administration cases

Nor did the number of cases fall sharply in every area, which at first glance obscures the overall decline. In 2017, as in 2016, the judiciary still handled 1.6 million cases. This is mainly due to the enormous increase in the number of administration (guardianship) cases, in which the court plays a supervisory rather than a decisive role.

This type of case grew by 100,000 in 2017, to 500,000, which means that 1 in 3 cases handled by the courts is now administration-related. That rise does not necessarily mean that more people have been placed under administration; it is largely attributable to a new category of cases, the so-called five-year evaluation cases, in which ongoing administration files are reviewed.

Judiciary finances under pressure

A particularly worrying point in the annual report is the Judiciary's poor financial situation. The year 2017 closed with negative equity, which had never happened before. One of the main causes is the delay of the KEI digitization and modernization operation; another is the sharp drop in the number of cases coming before the courts. The Judiciary is financed per case, and far fewer cases means a lower budget. This situation is expected to persist in 2018 and 2019.

More staff hired

Because judges indicated in previous years that the quality of judicial work was under pressure from understaffing and the increasing complexity of cases, more people were hired last year: in 2017 the Judiciary's workforce grew by 2 percent. The Judiciary has also intensified its recruitment of judges since 2016, which in 2017 led to the appointment of more than 100 new trainee judges.

More time per case

Thanks to this reinforcement, judges were able to spend more time per case in 2017. That matters, because judges need enough time to give cases proper attention, which is essential for high-quality justice. Still, workload remains a pressing issue, as last year's employee satisfaction survey (MWO) shows. The courts are hard at work on the recommendations for improvement, but this takes time. A good sign is that employees' enjoyment of their work and overall satisfaction have increased compared to 2014.

Case processing times not structurally improved

Finally, the planned reduction in case processing times was not achieved in 2017 either: despite all efforts, cases still take as long as before. Many local initiatives have been undertaken to speed up court cases, such as increased staffing or changes to schedules and calendars, but these have not yet had the intended effect, as the periodic 2017 customer satisfaction survey (KWO) also shows.


Games with tradable loot boxes are illegal

IusMentis - 23 April 2018 - 8:03am

The Dutch Gaming Authority (Kansspelautoriteit) has investigated ten popular games with loot boxes and concluded that four of them violate the Betting and Gaming Act, I read. The names of the games are not given, but according to the NOS they are the highly popular titles FIFA 18, Dota 2, PUBG, and Rocket League. The companies have eight weeks to adapt their games, on pain of a fine. That would be a drastic change for the game publishers, because loot boxes bring in a lot of money.

Loot boxes are purchasable items within a game (often styled as a treasure chest or similar, hence the name) where you do not know in advance what your purchase will get you. The game is often designed at that point to make buying look very attractive. Loot boxes have been under fire for a while, and given the element of chance in what you receive, it did not surprise me that the Gaming Authority launched an investigation.

Under the Betting and Gaming Act, a game of chance exists when

the winners are designated by some determination of chance over which the participants in general cannot exercise a dominant influence

That definition fits loot boxes well, but it also fits, say, a pack of football trading cards, so I had some doubt about how this would play out. In its investigation, however, the KSA adds an extra criterion that clarifies why loot box contents are a game of chance while a pack of football cards is not: a game of chance requires winning a prize, meaning something of value. And the contents acquire value because they are tradable, because there is a market for them.

Football cards do get traded, of course, but one can hardly speak of real marketplaces and paid transactions. For loot box contents such marketplaces do exist, admittedly often outside the game itself, but still. That gives the contents economic value, which makes supplying them through the loot box construction a game of chance.

Some game publishers prohibit trading in in-game objects. That, however, is not enough. What matters under the law is not whether the publisher permits you to trade those objects, but whether they are actually and legitimately traded. (Otherwise a gambling operator could simply state in its terms and conditions that prizes won may not be resold, and thereby circumvent the Betting and Gaming Act.) And the trade is legitimate because ownership of such objects has legally been transferred, analogous to the digital purchase you make when buying games and similar goods online. Prohibiting trade in them in your terms and conditions is then meaningless.
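
Read this way, the KSA's test reduces to two conditions that must both hold. A minimal sketch of that structure (our own paraphrase of the reasoning, with hypothetical names):

```python
def is_game_of_chance(outcome_determined_by_chance: bool,
                      prize_has_market_value: bool) -> bool:
    """Two-part test as described above (illustrative only):
    1. winners are designated by chance the player cannot dominate;
    2. what is won has economic value, i.e. it is actually traded."""
    return outcome_determined_by_chance and prize_has_market_value

# Loot box contents that trade on real marketplaces:
print(is_game_of_chance(True, True))   # True: regulated as gambling
# A pack of football cards without a real market:
print(is_game_of_chance(True, False))  # False: not a game of chance
```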

With this decision the Netherlands is leading the way in tackling controversial loot boxes, and it would surprise me if materially different decisions followed in other countries.

Arnoud


We’re in the Uncanny Valley of Targeted Advertising

Mark Zuckerberg, Facebook's founder and CEO, thinks people want targeted advertising. The "overwhelming feedback," he said multiple times during his congressional testimony, was that people want to see "good and relevant" ads. Why, then, are so many Facebook users, including lawmakers in the U.S. Senate and House, so fed up and creeped out by the uncannily on-the-nose ads? Targeted advertising on Facebook has gotten to the point that it's so "good," it's bad—for users, who feel surveilled by the platform, and for Facebook, which is rapidly losing its users' trust. But there's a solution, which Facebook must prioritize: stop collecting data from users without their knowledge or explicit, affirmative consent.

Right now, most users don't have a clear understanding of all the types of data that Facebook collects or how it's analyzed and used for targeting (or for anything else). While the company has heaps of information about its users to comb through, if you as a user want to know why you're being targeted for an ad, for example, you're mostly out of luck. Sure, there's a "why was I shown this" option on an individual ad, but each generally reveals only bland categories like "Over 18 and living in California"—and to get an even semi-accurate picture of all the ways you can be targeted, you'd have to click through various sections, one at a time, on your "Ad Preferences" page.

Text from Facebook explaining why an ad has been shown to the user

Even more opaque are the categories of targeting called "Lookalike audiences." Because Facebook has so many users—over 2 billion per month—it can automatically take a list of people supplied by advertisers, such as current customers or people who like a Facebook page, and then do behind-the-scenes magic to create a new audience of similar users to beam ads at.

Facebook does this by identifying "the common qualities" of the people in the uploaded list, such as their related demographic information or interests, and finding people who are similar to (or "look like") them, to create an all-new list. But those comparisons are made behind the curtain, so it's impossible to know what data, specifically, Facebook is using to decide you look like another group of users. And to top it off: much of what's being used for targeting generally isn't information that users have explicitly shared—it's information that's been actively—and silently—taken from them.
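
Facebook has never disclosed how lookalike matching actually works, but conceptually it resembles a nearest-neighbour search over user feature vectors. The sketch below, with entirely made-up users and features, shows only the general shape of such a computation:

```python
import math

# Hypothetical user feature vectors: (age, site visits/week, interest score).
users = {
    "alice": [34, 12.0, 0.90],
    "bob":   [29, 10.5, 0.80],
    "carol": [61,  1.0, 0.10],
    "dave":  [31, 11.0, 0.85],
}

seed = ["alice", "bob"]  # the advertiser-supplied audience

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norms

def lookalikes(seed, users, k=1):
    """Average the seed vectors into a 'common qualities' profile,
    then return the k most similar non-seed users."""
    dims = len(next(iter(users.values())))
    profile = [sum(users[u][i] for u in seed) / len(seed) for i in range(dims)]
    scores = {u: cosine(v, profile) for u, v in users.items() if u not in seed}
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(lookalikes(seed, users))  # -> ['dave']
```

Real systems operate over thousands of behavioral signals rather than three toy features, which is exactly why the inputs are impossible for users to audit.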

Telling the user that targeting data is provided by a third party like Acxiom doesn’t give any useful information about the data itself, instead bringing up more unanswerable questions about how data is collected

Just as vague is targeting that uses data provided by third-party "data brokers." In March, Facebook announced changes to discontinue one aspect of this data sharing, called partner categories, wherein data brokers like Acxiom and Experian combine their own massive datasets with Facebook's to target users. These are the kinds of changes Facebook has touted to "help improve people's privacy," but they won't have a meaningful impact on our knowledge of how data is collected and used.

As a result, the ads we see on Facebook—and other places online where behaviors are tracked to target users—creep us out. Whether they're for shoes that we've been considering buying to replace ours, for restaurants we happened to visit once, or even for toys that our children have mentioned, the ads can indicate a knowledge of our private lives that the company has consistently failed to admit to having. Moreover, that knowledge is supplied via Facebook's AI, which makes inferences about people—such as their political affiliation and race—that are clearly outside many users' comfort zones. This AI-based ad targeting on Facebook is so obscured in its functioning that even Zuckerberg thinks it's a problem. "Right now, a lot of our AI systems make decisions in ways that people don't really understand," he told Congress during his testimony. "And I don't think that in 10 or 20 years, in the future that we all want to build, we want to end up with systems that people don't understand how they're making decisions."

But we don't have 10 or 20 years. We've entered an uncanny valley of opaque algorithms spinning up targeted ads that feel so personal and invasive that both the House and the Senate mentioned the spreading myth that the company wiretaps its users' phones. It's understandable that users have come to conclusions like this, given the creeped-out feelings they rightfully experience. The concern that you're being surveilled persists, essentially, because you are being surveilled—just not via your microphone. Facebook seems to possess an almost human understanding of us. Like the unease and discomfort people sometimes experience interacting with a not-quite-human-like robot, being targeted highly accurately by machines based on private, behavioral information that we never actively gave out feels creepy, uncomfortable, and unsettling.

The trouble isn't that personalization is itself creepy. When AI is effective it can produce amazing results that feel personalized in a delightful way—but only when we actively participate in teaching the system what we like and don't like. AI-generated playlists, movie recommendations, and other algorithm-powered suggestions work to benefit users because the inputs are transparent and based on information we knowingly give those platforms, like songs and television shows we like. AI that feels accurate, transparent, and friendly can bring users out of the uncanny valley to a place where they no longer feel unsettled, but instead, assisted.

But apply a similar level of technological prowess to other parts of our heavily surveilled, AI-infused lives, and we arrive in a world where platforms like Facebook creepily, uncannily show us advertisements for products we only vaguely remember considering, or for people we met just once or merely thought about recently—all because the amount of data being hoovered up and churned through obscure algorithms is completely unknown to us.

Unlike the feeling that a friend put together a music playlist just for us, Facebook’s hyper-personalized advertising—and other AI that presents us with surprising, frighteningly accurate information specifically relevant to us—leaves us feeling surveilled, but not known. Instead of feeling wonder at how accurate the content is, we feel like we’ve been tricked.

To keep us out of the uncanny valley, advertisers and platforms like Facebook must stop compiling data about users without their knowledge or explicit consent. Zuckerberg multiple times told Congress that “an ad-supported service is the most aligned with [Facebook’s] mission of trying to help connect everyone in the world.” As long as Facebook’s business model is built around surveillance and offering access to users’ private data for targeting purposes to advertisers, it’s unlikely we’ll escape the discomfort we get when we’re targeted on the site. Steps such as being more transparent about what is collected, though helpful, aren’t enough. Even if users know what Facebook collects and how they use it, having no way of controlling data collection, and more importantly, no say in the collection in the first place, will still leave us stuck in the uncanny valley.

Even Facebook’s “helpful” features, such as reminding us of birthdays we had forgotten, showing pictures of relatives we’d just been thinking of (as one senator mentioned), or displaying upcoming event information we might be interested in, will continue to occasionally make us feel like someone is watching. We'll only be amazed (and not repulsed) by targeted advertising—and by features like this—if we feel we have a hand in shaping what is targeted at us. But it should never be the user’s responsibility to have to guess what’s happening behind the curtain.

While advertisers must be ethical in how they use tracking and targeting, a more structural change needs to occur. For the sake of the products, platforms, and applications of the present and future, developers must not only be more transparent about what they’re tracking, how they’re using those inputs, and how AI is making inferences about private data. They must also stop collecting data from users without their explicit consent. With transparency, users might be able to make their way out of the uncanny valley—but only to reach an uncanny plateau. Only through explicit affirmative consent—where users not only know but have a hand in deciding the inputs and the algorithms that are used to personalize content and ads—can we enjoy the “future that we all want to build,” as Zuckerberg put it.

Arthur C. Clarke famously said that "any sufficiently advanced technology is indistinguishable from magic"—and we should insist that the magic make us feel wonder, not revulsion. Otherwise, we may end up stuck on the uncanny plateau, becoming increasingly distrustful of AI in general, and instead of enjoying its benefits, fear its unsettling, not-quite-human understanding.


Minnesota Supreme Court Ruling Will Help Shed Light on Police Use of Biometric Technology

A decision by the Minnesota Supreme Court on Wednesday will help the public learn more about law enforcement's use of privacy-invasive biometric technology.

The decision in Webster v. Hennepin County is mostly good news for the requester in the case, who sought the public records as part of a 2015 EFF and MuckRock campaign to track mobile biometric technology use by law enforcement across the country. EFF filed a brief in support of Tony Webster, arguing that the public needed to know more about how officials use these technologies.

Across the country, law enforcement agencies have been adopting technologies that allow cops to identify subjects by matching their distinguishing physical characteristics to giant repositories of biometric data. This could include images of faces, fingerprints, irises, or even tattoos. In many cases, police use mobile devices in the field to scan and identify people during stops. However, police may also use this technology when a subject isn’t present, such as by grabbing images from social media or CCTV, or even lifting biological traces from seats or drinking glasses.
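
To make the matching step concrete, here is a minimal, hypothetical sketch of how such an identification query works once physical characteristics have been reduced to numeric feature vectors. The feature extraction itself (from a face image, fingerprint, or tattoo photo) is abstracted away, and all names and the threshold are illustrative, not any vendor’s actual system.

```python
# Minimal, hypothetical sketch of biometric identification: a probe sample is
# reduced to a fixed-length feature vector and compared against a gallery of
# enrolled vectors. Feature extraction is abstracted away; all names and the
# threshold are illustrative.
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def identify(probe, gallery, threshold=0.9):
    """Return the best-matching enrolled identity, or None if no entry
    clears the similarity threshold."""
    best_id, best_score = None, threshold
    for person_id, enrolled in gallery.items():
        score = cosine_similarity(probe, enrolled)
        if score >= best_score:
            best_id, best_score = person_id, score
    return best_id

# Example: a two-person gallery and a probe vector close to one entry.
gallery = {"subject-a": [0.9, 0.1, 0.3], "subject-b": [0.1, 0.8, 0.5]}
print(identify([0.88, 0.12, 0.31], gallery))  # -> "subject-a"
```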

Webster’s request to Hennepin County officials sought a variety of records, and included a request for the agencies to search officials’ email messages for keywords related to biometric technology, such as “face recognition” and “iris scan.”
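
Mechanically, such a keyword search is straightforward, which matters for the later dispute over whether the request was burdensome. A minimal sketch, assuming the mailboxes are available as standard mbox exports; the file path and term list are illustrative, not the county’s actual systems:

```python
# Sketch of the requested search: scan exported mailboxes for biometric
# keywords. Assumes standard mbox exports; the path and term list are
# illustrative.
import mailbox

TERMS = ["face recognition", "facial recognition", "iris scan", "biometric"]

def matching_subjects(mbox_path):
    """Yield the subject line of every message whose raw text mentions a term."""
    for msg in mailbox.mbox(mbox_path):
        raw = str(msg).lower()  # full message, headers and body, as text
        if any(term in raw for term in TERMS):
            yield msg["subject"]

for subject in matching_subjects("county_officials.mbox"):
    print(subject)
```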

Officials largely ignored the request, and when Webster brought a legal challenge, they claimed that searching their email for keywords would be burdensome and that the request was improper under the state’s public records law, the Minnesota Government Data Practices Act.

Webster initially prevailed before an administrative law judge, who ruled that the agencies had failed to comply with the Data Practices Act in several respects. The judge also ruled that requesting a search of email records for keywords was proper under the law and was not burdensome.

County officials appealed that decision to a state appellate court. That court agreed that Webster’s request was proper and not burdensome. But it disagreed that the agencies had violated the Data Practices Act by not responding to Webster’s request or that they had failed to set up their records so that they could be easily searched in response to records requests.

Webster appealed to the Minnesota Supreme Court, which on Wednesday agreed with him that the agencies had failed to comply with the Data Practices Act by not responding to his request. The court, however, agreed with the lower appellate court that county officials did not violate the law in how they had configured their email service or arranged their records systems.

In a missed opportunity, however, the court declined to rule on whether searching for emails by keywords was appropriate under the Data Practices Act and not burdensome. The court claimed that it didn’t have the ability to review that issue because Webster had prevailed in the lower court and county officials failed to properly raise the issue.

Although this means that the lower appellate court’s decision affirming that email keyword searches are proper and not burdensome still stands, it would have been nice if the state’s highest court had weighed in on the issue.

EFF is nonetheless pleased with the court’s decision as it means Webster can finally access records that document county law enforcement’s use of biometric technology. We would like to thank attorneys Timothy Griffin and Thomas Burman of Stinson Leonard Street LLP for drafting the brief and serving as local counsel.

For more on biometric identification, such as face recognition, check out EFF’s Street-Level Surveillance project.

Categories: Openness, Privacy, Rights

Projects for more effective justice in the starting blocks across the country

Across the country, judges are developing plans aimed at deploying the justice system more effectively and keeping it accessible to everyone. The main gain of socially effective justice is that the law is used to genuinely solve people’s everyday problems. The cabinet also supports these forms of socially effective justice, as is evident from a letter (rijksoverheid.nl) that Minister Dekker (for Legal Protection) sent to the Dutch House of Representatives (Tweede Kamer) today.

Practice

Frits Bakker, chairman of the Raad voor de rechtspraak (Council for the Judiciary), is pleased to see that many of the Judiciary’s plans are reflected in the minister’s letter: ‘All over the country, projects have been started and are in development that aim to genuinely help people. Not because administrators like me asked for them, but because our own people felt it had to be done. These are plans largely developed by judges and other court staff: people from everyday practice, who see daily in the courtroom which legal walls people run into.’

According to Bakker, it is a bottom-up movement taking shape all over the country: ‘Within our organisation it is called socially effective justice. It is not a fixed project, but an overarching idea that the judge’s work should be maximally effective and genuinely help people.’

Effective justice

For many people with legal problems, the threshold to go to court is too high. Procedures are complex, often take a long time, and the costs are considerable. And when the judge does issue a ruling, the underlying problem is by no means always solved. That applies in particular to emotionally charged cases, such as disputes between neighbours and divorces. When people face each other as adversaries in the courtroom, that can actually make conflicts worse.

In recent years, judges have therefore been looking for other ways to handle such cases. For example, by letting quarrelling neighbours jointly submit their problem at low cost. Or by assigning divorcing parents a fixed judge who has an overview of everything at play and can help resolve the problems definitively, where necessary in cooperation with municipalities and social services.

Room for experiments

The cabinet embraces this pursuit of socially effective justice and is creating room to experiment. That is necessary because the law prescribes a fixed way of litigating that does not fit the new working methods. Rather than amending procedural law right away, while it is not yet clear which changes are needed to achieve the goal, it will become possible to establish a new, divergent procedure for each experiment and evaluate it afterwards.

Renewal

‘The minister endorses the view that a well-functioning justice system is essential to society and to our prosperity,’ says Bakker. ‘He also recognises that we must innovate to remain relevant to everyone, including people who struggle to keep their footing in a changing society. We are pleased that the minister wants to commit to this, together with us.’ Given the importance of socially effective justice, it has been agreed with the minister that the Judiciary will continue this work. As a result, the Judiciary’s budget deficits will rise further.

Theme page

Read more about socially effective justice on the theme page.

Categories: Rights

Now even the rightsholders agree: Article 13 is dangerous (and should be deleted)

International Communia Association - 20 April 2018 - 9:45am

Now that the Bulgarian Council presidency seems to have decided that it is time to wrap up the discussions on the DSM proposal and push for a political decision on a negotiation mandate, people are getting nervous. Late last week, a whole assortment of organisations representing rights holders from the AV industry (organised in the Creativity Works! coalition) sent a letter to Member State ministers and representatives outlining their concerns with the latest Bulgarian compromise text. The document focuses mainly on Article 13, and what they have to say about that article is rather interesting (and surprisingly in line with positions we have been arguing all along).

The overriding concern expressed by the rightsholders in their letter is that some of the more recent changes introduced in the Council would turn Article 13 from a magic weapon against a few online platforms into a mechanism that threatens to further empower those very platforms in a way that does not benefit rights holders. In response, Creativity Works! (CW!) argues for further strengthening some of the most problematic aspects of Article 13.

We have long argued that Article 13 seems designed to benefit the big dominant online platforms, as it will entrench their market position: compliance with the filtering obligations will be difficult and costly for smaller companies, while the main targets of Article 13 already have filtering systems in place (such as YouTube’s Content ID). It is a welcome sign to see rights holders waking up to this reality.
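
To make concrete why such obligations favour platforms that already operate matching infrastructure, here is a deliberately oversimplified sketch of what an upload filter has to do. Real systems such as Content ID use robust perceptual fingerprints rather than exact hashes, and operate at enormous scale; every name and value below is illustrative.

```python
# Deliberately oversimplified upload filter: fingerprint each upload and block
# it when the fingerprint appears on a rightsholder-supplied reference list.
# Real systems (e.g. Content ID) use robust perceptual fingerprints rather
# than exact hashes; everything here is illustrative.
import hashlib

# fingerprint -> claimed work, as supplied by rightsholders
reference_list = {
    hashlib.sha256(b"claimed sample work").hexdigest(): "Sample Work",
}

def fingerprint(upload: bytes) -> str:
    return hashlib.sha256(upload).hexdigest()

def allow_upload(upload: bytes) -> bool:
    """Return False when the upload matches a claimed work."""
    return fingerprint(upload) not in reference_list

print(allow_upload(b"claimed sample work"))   # False: blocked
print(allow_upload(b"an original creation"))  # True: allowed
```

Building and maintaining even this trivial pipeline across every media type is the cost that falls on small platforms, while the incumbents have already paid it.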

For us it has been clear from the start that Article 13 will not achieve its stated goals. Instead, the filtering obligations will cause tremendous harm to freedom of expression and to open platforms that operate in fields that have nothing to do with the distribution of entertainment products. For this reason we think that the only responsible way to deal with Article 13 is to delete it and start over with a discussion about how we can best ensure that creators are fairly compensated for their work. (Note that in this discussion most of the members of CW! are likely to be part of the problem rather than the solution, as CW! has very little representation of actual creators.)

And while CW! is not joining us in our call to delete Article 13, their letter does illustrate our argument that adjusting general concepts of copyright law in order to address the concerns of specific groups of stakeholders is utterly irresponsible in the light of the big (and often unintended) consequences such an intervention can have.

Case in point: the redefinition of the right of communication to the public. We and others critical of Article 13 have long argued that it would expand the right of communication to the public. In the Commission’s proposal this aspect of Article 13 was hidden away in a recital, but over successive drafts it has become more explicit. This seems to have led to the sudden realisation by rights holders that such a redefinition of this important right can also affect them negatively. In their letter they wrote about the latest Bulgarian compromise proposal:

It would limit the scope of the right of communication to the public by incompletely applying Court of Justice of the European Union (CJEU) case law and setting into stone in Article 13 only certain criteria developed by the Court. This approach would roll-back the CJEU’s case law, which has repeatedly confirmed that a broad interpretation of the right of communication to the public (CTTP) is necessary to achieve the main objective of the Copyright Directive, which is to establish a high level of protection for authors and rights holders. CW! recalls that the exclusive right of communication to the public, including the making available right, as enshrined in EU law (and further clarified by the Court), has emerged as the bedrock for the financing, licencing and protection of content, as well as its ultimate delivery to consumers in the online environment. The Court has also emphasised, in its recent judgments, that in order to determine whether there has been a CTTP, several complementary criteria must be taken into account, which are not autonomous, but are interdependent. Any proposals that entail a selective application of the Court’s jurisprudence, or that imply a narrowing of the scope of the right of CTTP, would be contrary to the protection required by current EU and international law.

While we do not agree that the current draft would limit the scope of the CTTP right, this passage illustrates the dangers of carelessly fiddling with core legal concepts that underpin the EU copyright framework. In this context it is important to recall that the mechanism proposed in Article 13 was not part of the public consultation that preceded the proposal, and that its modifications of core legal concepts were never properly analysed in the EU’s own impact assessment. In other words, Article 13 is the product of a sloppy, ideologically driven way of lawmaking and should be sent back to the drawing board for this reason alone.

The rightsholders were in for a similar surprise with regard to another thinly veiled objective of the Commission’s proposal – the attempt to strip open online platforms of the liability limitations they enjoy under the E-Commerce Directive:

It would not fill a gap for rights holders, but rather create additional privileges for certain big content sharing platforms. Article 13(4) would create a new special limited liability regime for online content sharing service providers (“OCSSP”) who communicate to the public as it would exempt an OCSSP from liability when it has made “best efforts to prevent the availability of specific unauthorised work or other subject matter for which rightsholders have provided it with information.” This provision would be another clear step backwards for rights holders and would favour certain online platforms. Under the current law, these platforms are already required to take measures with respect to specifically identified and notified works – not only to make “best efforts.” If they do not do so, they do not qualify for the liability privilege under Article 14 of the E-Commerce Directive.

Not surprisingly, this is another element of the proposal that was not properly addressed in the run-up to the proposal or in the impact assessment. In this case we can even agree with the assessment put forward in the CW! letter. This proposal is bad and will further entrench the dominant position of the established online platforms. That will be negative for creators, but even more so for users, who will be confronted with upload filters that censor their speech and creative expression without actually helping creators of original content.

While it seems unlikely that the rightsholders will abandon the path that they have embarked on and join us to demand the deletion of Article 13, it is not too late yet. The fact that those who have pushed these dangerous ideas forward are now suddenly terrified by the monster that they have created should open the eyes of lawmakers and anyone who is interested in a functional EU copyright framework that rewards creativity and encourages innovation. There is still time to delete Article 13 and start a proper discussion about how Europe can best ensure that creative work is fairly compensated in the online environment. As we have argued before, the outcome of such a discussion may very well be that there are better ways to achieve this objective than carelessly abandoning core principles of a copyright system that needs to serve the interests of many more sectors than just the entertainment industry.

The post Now even the rightsholders agree: Article 13 is dangerous (and should be deleted) appeared first on International Communia Association.

Why should robots get rights?

IusMentis - 20 April 2018 - 8:06am

Please don’t give robots a separate legal status, I read in NRC Handelsblad. In an open letter, 150 prominent researchers in robotics, artificial intelligence and law warn the European Commission against doing so. I hear the argument for giving robots legal personhood more and more often these days. It would solve liability problems, short-circuit ethical dilemmas, and be an important step forward in the coming of age of the Fourth Revolution of our society. You can tell from my slightly sarcastic tone that I don’t quite see it.

The European Parliament recently adopted a resolution containing, among other things, the recommendation:

Creating a specific legal status for robots in the long run, so that at least the most sophisticated autonomous robots could be established as having the status of electronic persons responsible for making good any damage they may cause, and possibly applying electronic personality to cases where robots make autonomous decisions or otherwise interact with third parties independently;

By giving robots legal personhood (roughly the way a private limited company or an association has it), they can perform legal acts themselves, own money, and bear liability for damage or breach of contract. On paper, that solves a lot of problems: if there is damage, you now have a legal person with whom to lodge the claim. All existing rules on liability, attributability and so on apply automatically, and we have a hundred years of case law for applying those rules to a legal person.

The researchers raise a number of firm objections to this. The first one I like: it vastly overestimates robots. There is no Daneel Olivaw walking the streets yet, sweeping the pavement or performing medical procedures. The kind of robots we do have is so basic and limited that it is always obvious which person or company is behind it, and it is entirely logical to simply hold that party liable.

I found the second argument even stronger: if you make something a (legal) person, you also give it rights, and what does that mean?

A legal status for a robot can’t derive from the Natural Person model, since the robot would then hold human rights, such as the right to dignity, the right to its integrity, the right to remuneration or the right to citizenship, thus directly confronting the Human rights. This would be in contradiction with the Charter of Fundamental Rights of the European Union and the Convention for the Protection of Human Rights and Fundamental Freedoms.

That last sentence seems to say that giving human rights to robots conflicts with the human rights of humans, which I don’t entirely follow. But I do see the first point: if you give a robot legal personhood, you may no longer switch it off when doing so is in the interest of humans; that would, so to speak, be murder. Nor could you simply look inside the robot without a medical treatment agreement (a repair agreement?). Well, that’s what you get when you paste a hundred years of standard rules onto a new situation.

How it should be done instead, the researchers largely leave open. It must be thought through carefully, and a solution pursued without preconceptions. No one can disagree with that, but it doesn’t exactly speed things up either. That liability fund after all, then?

Arnoud

The post Why should robots get rights? appeared first on Ius Mentis.

Dear Canada: Accessing Publicly Available Information on the Internet Is Not a Crime

Canadian authorities should drop charges against a 19-year-old Canadian accused of “unauthorized use of a computer service” for downloading thousands of public records hosted and available to all on a government website. The whole episode is an embarrassing overreach that chills the right of access to public records and threatens important security research.

At the heart of the incident, as reported by CBC News this week, is the Nova Scotian government’s embarrassment over its own failure to protect the sensitive data of 250 people who used the province’s Freedom of Information Act (FOIA) to request their own government files. These documents were hosted on the same government web server that also hosted public records containing no personal information. The URL for every record on the server was nearly identical, differing only in the document ID number at its end. The teenager took a known ID number and then, by modifying the URL, retrieved and stored all of the FOIA documents available on the Nova Scotia FOIA website.
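
In other words, this was simple URL enumeration, not an intrusion. A minimal sketch of the technique, with a hypothetical URL template and ID range standing in for the real site:

```python
# Sketch of the ID-enumeration described above: the responses sat behind
# near-identical URLs, so stepping through the trailing document ID retrieves
# them all. The URL template and ID range are hypothetical, not the actual
# Nova Scotia endpoint.
import urllib.error
import urllib.request

BASE_URL = "https://example.gov/foia/response?documentId={}"  # placeholder

def download_documents(start_id, end_id):
    for doc_id in range(start_id, end_id + 1):
        try:
            with urllib.request.urlopen(BASE_URL.format(doc_id)) as resp:
                with open(f"foia_{doc_id}.html", "wb") as out:
                    out.write(resp.read())
        except urllib.error.HTTPError:
            continue  # gap in the ID sequence; skip and carry on

download_documents(1, 7000)
```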

Beyond the absurdity of charging someone with downloading public records that were available to anyone with an Internet connection, if anyone is to blame for this mess, it’s Nova Scotia officials: they set up their public records server so insecurely that it permitted public access to other people’s private information. Officials should accept responsibility for failing to secure such sensitive data rather than ginning up a prosecution. The fact that the government published documents containing sensitive data on a public website, without any passwords or access controls, demonstrates its own failure to protect individuals’ private information. Moreover, it does not appear that the site deployed even minimal technical safeguards to exclude widely known indexing tools such as Google search and the Internet Archive from archiving the records published on the site; both appear to have cached some of the documents.
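
For illustration, the kind of minimal safeguard at issue is a robots.txt rule, which well-behaved crawlers check before fetching a page. The sketch below shows a hypothetical rule and how a compliant crawler would honour it; note that robots.txt is purely advisory, a request to indexers rather than an access control.

```python
# Hypothetical robots.txt rule of the kind the site apparently lacked, and how
# a compliant crawler checks it before fetching. robots.txt is advisory only;
# it is not an access control.
import urllib.robotparser

ROBOTS_TXT = """\
User-agent: *
Disallow: /foia/response
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# A well-behaved indexer (Googlebot, the Internet Archive crawler) would skip:
print(rp.can_fetch("*", "https://example.gov/foia/response?documentId=1234"))  # False
```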

The lack of any technical safeguards shielding the Freedom of Information responses from public access would make it difficult for anyone to know that they were downloading material containing private information, much less provide any indication that such activity was “without authorization” under the criminal statute. According to the report, more than 95% of the 7,000 Freedom of Information responses in question included redactions for any information properly excluded from disclosure under Nova Scotia’s FOI law. Freedom of Information laws are about furthering public transparency, and information released through the FOI process is typically considered to be public to everyone.

But beyond the details of this case, automating access to publicly available freedom of information requests is not conduct that should be criminalized: Canadian law criminalizes unauthorized use of computer systems, but these provisions are only intended to apply when the use of the service is both unauthorized and carried out with fraudulent intent. Neither element should be stretched to fit the specifics of this case. The teenager in question believed he was carrying out a research and archiving role, preserving the results of freedom of information requests. And given the setup of the site, he likely wasn’t aware that a few of the documents contained personal information. If true, he would not have had any fraudulent intent.

“The prosecution of this individual highlights a serious problem with Canada’s unauthorized intrusion regime,”  Tamir Israel, Staff Lawyer at CIPPIC, told us. “Even if he is ultimately found innocent, the fact that these provisions are sufficiently ambiguous to lay charges can have a serious chilling effect on innovation, free expression and legitimate security research.”

The deeper problem with this case is that it highlights how concerns about computer crime can lead to absurd prosecutions. The law Canadian police are using to prosecute the teen was implemented after Canada signed the Budapest Cybercrime Convention, whose original intent was to punish those who break into protected computers to steal data or cause damage.

Criminalizing access to publicly available data over the Internet twists the Cybercrime Convention’s purpose. Laws that offer the possibility of imposing criminal liability on someone simply for engaging with freely available information on the web pose a continuing threat to the openness and innovation of the Internet. They also threaten legitimate security research. As technology law professor Orin Kerr describes it, publicly posting information on the web and then telling someone they are not authorized to access it is “like publishing a newspaper but then forbidding someone to read it.”

Canada should take a cue from a United States federal court’s decision in Sandvig v. Sessions, which made clear that using automated tools to access freely available information is not a computer crime. As the court wrote:

"Scraping is merely a technological advance that makes information collection easier; it is not meaningfully different from using a tape recorder instead of taking written notes, or using the panorama function on a smartphone instead of taking a series of photos from different positions.”

The same is true in the case of the Canadian teen.

We’ve long defended the use of “automated scraping”: the process of using web crawlers or bots (applications that run automated tasks over the Internet) to extract content and data from a website. Scraping underpins a wide range of valuable tools and services that Internet users, programmers, journalists, and researchers around the world rely on every day, to the benefit of the broader public.
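
As a minimal illustration of what such a scraper does, the sketch below fetches a single page and extracts its link targets using only the Python standard library; the URL is a placeholder.

```python
# Minimal scraper: fetch one page and extract its link targets, the basic
# "extract content and data" step described above. Standard library only;
# the URL is a placeholder.
from html.parser import HTMLParser
import urllib.request

class LinkExtractor(HTMLParser):
    """Collect the href of every anchor tag encountered."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

with urllib.request.urlopen("https://example.com/") as resp:
    html_text = resp.read().decode("utf-8", errors="replace")

extractor = LinkExtractor()
extractor.feed(html_text)
print(extractor.links)
```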

The value of automated scraping goes well beyond curious teenagers seeking access to freedom of information requests. The Internet Archive has long scraped the public portions of the world wide web and preserved them for future researchers. News aggregation tools, including Google’s Crisis Map, which aggregated critical information about California’s October 2016 wildfires, involve scraping. ProPublica journalists used automated scrapers to investigate Amazon’s algorithm for ranking products by price, and uncovered that the algorithm was hiding the best deals from many of its customers. The researchers who studied racial discrimination on Airbnb also used bots, and found that guests with distinctively African American names were 16 percent less likely to be accepted than identical guests with distinctively white names.

Charging the Canadian teen with a computer crime for what amounts to scraping publicly available online content has severe consequences for him and for the broader public. As a result of the charges against him, the teen is banned from using the Internet and is concerned he may not be able to complete his education.

More broadly, the prosecution is a significant deterrent to anyone who wants to use common tools such as scraping to collect public government records from websites, since the government’s own failure to adequately protect private information can now be leveraged into criminal charges against journalists, activists, or anyone else seeking access to public records.

Even if the teen is ultimately vindicated in court, this incident calls for a re-examination of Canada’s unauthorized intrusion regime and law enforcement’s use of it. The law was not intended for cases like this, and should never have been raised against an innocent Internet user.

Categories: Openness, Privacy, Rights
