This Judge’s Investigation Of Patent Trolls Must Be Allowed to Move Forward

If you get sued, you should be able to figure out who sued you. Remarkably, though, people and companies who are accused of patent infringement in federal court often have no idea who is truly behind the lawsuit. Patent trolls, companies whose main business is extorting others over patents, often hide behind limited liability companies (LLCs) that serve to keep secret the names of those who profit from their activities.

This shouldn’t be the case. Earlier this week, EFF filed a brief seeking to protect an ongoing investigation of one of the nation’s largest patent-trolling companies, IP Edge. 

In recent weeks, Delaware-based U.S. District Court Judge Colm Connolly asked that owners of several patent-holding LLCs, which have filed 69 lawsuits in his court so far, testify about their histories and financing. The hearings conducted in Judge Connolly’s courtroom have provided a striking window into patent assertion activity by entities that appear to be related to IP Edge. 

Judge Connolly has also issued his own 78-page opinion explaining the nature and reasoning of his inquiries. As he summarizes (page 75), he seeks to determine if there are “real parties in interest” other than the shell LLCs, such as IP Edge and a related “consulting” company called Mavexar. He also asks if “those real parties in interest perpetrated a fraud on the court” by fraudulently conveying patents and filing fictitious patent assignment documents with the U.S. Patent Office. Judge Connolly also wants to know if lawyers in his court have complied with his orders and the Rules of Professional Conduct that govern attorney behavior in court. He has raised questions about whether a lawyer who files and settles cases without discussing the matter with their supposed client is committing a breach of ethics.

Given the growth of patent trolling in the last two decades, these questions are critical for the future of innovation, technology, and the public interest. Let’s take a closer look at the facts that led EFF to get involved in this case. 

Owner of “Chicken Joint” Puts His Name On Patent Troll Paperwork

Hearings conducted by Judge Connolly on Nov. 4 and Nov. 10 revealed that the LLCs are connected to a shadowy network of part-owners and litigation funders, including IP Edge, a large patent-trolling firm. The hearings also showed that the official “owners” of the LLCs have little or no involvement in litigation decisions, and receive only a small fraction of the overall settlement money.

The owner of record of Mellaconic IP LLC, for instance, was Hau Bui, who described himself as the owner of a food truck and “fried chicken joint” in Texas. Bui was approached by a “friend” named Linh Dietz, who has an IP Edge email address and offered a “passive income” of a mere 5% of the money Mellaconic made from its lawsuits—about $11,000. He paid nothing to get the patent that his company acquired from an IP Edge-related entity. 

The owner of record of Nimitz Technologies LLC is Mark Hall, a Houston-based software salesman. When the judge asked Hall about what technology was covered in the patent he had used to sue more than 40 companies, Hall said, “I haven’t reviewed it enough to know.” He was “presented an opportunity” by Mavexar, a firm he called a “consulting agency” where Linh Dietz was his contact. Again, it was Linh Dietz who arranged the transfer of the patents. Hall told the judge he stood to get 10% of the proceeds from the patent, which have totaled about $4,000. However, Hall agreed that “all the litigation decisions are made by the lawyers and Mavexar.”

After those hearings, Judge Connolly was concerned that the attorneys involved may have perpetrated a fraud on the court, and violated his disclosure rules. He asked for additional records to be provided. Instead of complying, the patent troll companies have asked the U.S. Court of Appeals for the Federal Circuit, the nation’s top patent court, to intervene and shut down Judge Connolly’s inquiry.

The Public Has A Right To Know Who Benefits From Patent Lawsuits

That’s why EFF got involved. This week, together with Engine Advocacy and the Public Interest Patent Law Institute, we filed a brief in this case explaining why Judge Connolly’s actions are “proper and commendable.”

“The public has a right—and need—to know who is controlling and benefiting from litigation in publicly-funded courts,” EFF writes in the brief. Parties who conceal this information undermine the promise of open courts. What’s more, patent-owning plaintiffs can hide behind insolvent entities, in order to avoid court rules and punishments for litigation misconduct. 

If the U.S. Court of Appeals for the Federal Circuit were to stop Judge Connolly from enforcing these transparency rules, it would “encourage meritless suits, conceal unethical conduct, and erode public confidence in the judicial process.” There are circumstances where a privacy right or another interest can limit transparency. But those circumstances aren’t present here, where the identity of the party getting the lion’s share of any damages (and which is potentially the true patent owner) is concealed.

The disclosure requirements being enforced by Judge Connolly aren’t unusual. About 25% of federal courts require disclosure of “any person or entity… that has a financial interest in the outcome of a case,” which often includes litigation funders. 

Patent trolls often hide behind limited liability companies that are merely “shells,” which serve to hide the names of those who profit from patent trolling. When these LLCs have few or no assets, they can also serve to immunize the true owners against awards of attorneys’ fees or other penalties. This problem is widespread—in the first half of this year, nearly 85% of patent lawsuits against tech firms were filed by patent trolls. 

It’s also possible that in these cases, Mavexar or IP Edge may have structured the LLCs to insulate themselves from penalties such as being required to pay litigation costs. That could create a structure in which the sophisticated patent lawyers behind those firms make 90% or 95% of the profits, while a food truck owner with little knowledge of the patents or litigation could be stuck paying any penalties. In the past several years, fee shifting has become more common in patent litigation, due to Supreme Court rulings that have made it easier to get attorneys’ fees paid in the most abusive patent cases.

Even now, Mavexar-connected plaintiffs are continuing to file new lawsuits based on software patents they claim will compel a vast array of companies to pay them money. Mellaconic IP has filed more than 40 lawsuits, including some this week, claiming that basic HR software functions, like clocking in and out, infringe its patent.

EFF got involved in the patent fight nearly 20 years ago because of software patents like these. These patents interfere with our rights to express ourselves, and perform basic business or non-commercial tasks in the online world. They make it harder for small actors to innovate and disrupt entrenched tech companies. And they often aren’t new “inventions” at all. The people who profit from lawsuits over these patents, and hide their identities while doing so, are long overdue for this type of investigation. 



India Requires Internet Services to Collect and Store Vast Amounts of Customer Data, Building a Path to Mass Surveillance

Privacy and online free expression are once again under threat in India, thanks to vaguely worded cybersecurity directions—promulgated by India’s Computer Emergency Response Team (CERT-In) earlier this year—that impose draconian mass surveillance obligations on internet services, threatening privacy and anonymity and weakening security online.

Direction No. 20(3)/2022-CERT-In came into effect on June 28th, sixty days after being published without stakeholder consultation. Astonishingly, India’s Minister of State for Electronics and Information Technology (MeitY) Rajeev Chandrasekhar said the government wasn’t required to get public input because the directions have “no effect on citizens.” The Directions themselves state that they were needed to help India defend against cybersecurity attacks, protect the security of the state and public order, and prevent offenses involving computers. Chandrasekhar said the agency consulted with entities “who run the relevant infrastructure,” without naming them.

Cybersecurity law and policy directly impact human rights, particularly the rights to privacy, freedom of expression, and association. Across the world, national cybersecurity policies have emerged to protect the internet, critical infrastructure, and other technologies against malicious actors. However, overly broad and poorly defined proposals open the door to unintended consequences, leading to human rights abuses and harming innovation. The Directions enable surveillance and jeopardize the right to privacy in India, raising alarms among human rights and digital rights defenders. A global NGO coalition has called upon CERT-In to withdraw the Directions and initiate a sustained multi-stakeholder consultation with human rights and security experts to strengthen cybersecurity while ensuring robust human rights protections.

What’s Wrong With the CERT-In Cybersecurity Directions from a Human Rights Perspective?

Forced Data Localization and Electronic Logging Requirements

Direction No. IV compels a broad range of service providers (telecom providers, network providers, ISPs, web hosting, cloud service providers, cryptocurrency exchanges, and wallets), internet intermediaries (social media platforms, search engines, and e-commerce platforms), and data centers (both corporate and government) to enable logs of all their information and communication technology (ICT) systems, and forces them to keep such data securely within India for 180 days. The Direction is not clear about exactly what systems this applies to, raising concerns about government access to more user data than necessary and compliance with international personal data privacy principles that call for purpose limitation and data minimization.

Requiring providers to store data within a country’s borders can exacerbate government surveillance by making access to users’ data easier. This is particularly true in India, which lacks strong legal safeguards and data protection laws. Data localization mandates also make providers easy targets for direct enforcement and penalties if they reject arbitrary data access demands.

General and Indiscriminate Data Retention Mandate

Direction No. V establishes an indiscriminate data retention obligation, which unjustifiably infringes on the right to privacy and the presumption of innocence. It forces data centers, virtual private server (VPS) providers, cloud service providers, and virtual private network (VPN) service providers to collect customers’ data, including names, dates services began, email addresses, IP addresses, physical addresses, and contact numbers, among other things, for five years or longer, even if a person cancels or withdraws from the service.

Mandating the mass storage of private information for the mere eventuality that it may be of interest to the State at some point in the future is contrary to human rights standards. As the Office of the United Nations High Commissioner for Human Rights (OHCHR) has stated, “the obligation to indiscriminately retain data exceeds the limits of what can be considered necessary and proportionate.” Storing the personal information of political, legal, medical, and religious activists, human rights defenders, journalists, and everyday internet users would create honeypots for data thieves and put the data at risk in case of software vulnerabilities, fostering more insecurity than security. Moreover, VPN providers should not collect personal data or be forced to collect any data that are irrelevant to their operations just to comply with the new Directions. Personal data should always be relevant and limited to what is necessary regarding the purposes for which they are processed.

Onerous Cybersecurity Reporting Requirements

Direction No. II forces a broad range of service providers, internet intermediaries, including online game companies, and data centers (both corporate and government) to report cybersecurity incidents to the government within a tight time frame of six hours from detection—compared to 72 hours under the EU’s GDPR to notify data breaches—an onerous requirement for small and medium companies that would need staff available 24-7 to comply in such a short period. Moreover, such a tight time frame can exacerbate human errors. In contrast, the previous rules expected entities to report cybersecurity incidents “as early as possible to leave scope for action.” The new Direction does not mandate that users be notified of cybersecurity incidents. 

The reporting requirements apply to a wide range of cyber security incidents, including data breaches or data leaks, unauthorized access to ICT systems or resources, identity theft, spoofing, phishing attacks, DoS and DDoS attacks, malicious attacks like ransomware, and cyber incidents impacting the safety of human beings, among others. They also apply to “targeted” scanning (the automated probing of services running on a computer) of ICT systems; however, since targeting is ill-defined, this could be interpreted to mean any scanning of the system, which, as any system administrator can tell you, is the background noise of the internet. What’s more, many pro-cybersecurity projects engage in widespread scanning of the Internet.

Scanning is so ubiquitous on the internet that some smaller companies may choose to just automatically send all logs to CERT-In rather than risk being in violation of policy. This could make an already bad user privacy situation even worse.

Directions Grant CERT-In New Powers to Order Providers to Turn Over Information

Direction No. III grants CERT-In the power to order service providers, intermediaries, and data centers (corporate and government) to provide near real-time information or assistance when the agency is taking protective or preventive actions in response to cybersecurity incidents. The direction provides no oversight mechanism or data protection provision to guard against such orders being misused or abused. The direction also compels the same entities to designate a point of contact to receive CERT-In information requests and directions for complying with such requests.

Why an Indiscriminate Data Retention Mandate Is Anathema to VPNs

Consumer VPNs play a vital role in securing users’ confidential information and communications. They create a secure tunnel between a user’s device and the internet, enabling people to keep the data they send and receive private by hiding what servers they are communicating with from their ISP, and encrypting data in transit. This allows people to bypass local censorship and defeat local surveillance. 
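To make that concrete, here is a minimal sketch of what a consumer VPN client configuration can look like, assuming the WireGuard protocol (the keys, addresses, and hostname below are placeholders, not a real service):

    [Interface]
    # The client's own key and its address inside the tunnel
    PrivateKey = <client-private-key>
    Address = 10.0.0.2/32

    [Peer]
    # The provider's server; all traffic is encrypted to this endpoint
    PublicKey = <server-public-key>
    Endpoint = vpn.example.com:51820
    AllowedIPs = 0.0.0.0/0

The "AllowedIPs = 0.0.0.0/0" line is what routes all of the device's traffic through the encrypted tunnel, so the ISP sees only a connection to the provider's endpoint rather than the sites the user actually visits.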

VPNs are used everywhere. Activists, journalists, and everyday users want to protect their communications from the prying eyes of the government. Research shows that India has the highest growth rates in using VPN services worldwide. VPN installations during the first half of 2021 reached 348.7 million, a 671 percent increase in growth compared to the same period in 2020. Meanwhile, businesses use VPNs to provide secure access to internal resources (like file servers or printers) or ensure they can navigate securely on the Internet.

The massive data retention obligations under Direction No. V are anathema to VPNs—their core purpose is to not hold or collect user data and to provide encryption that protects users’ anonymity and privacy. Forcing VPNs to retain customer data for potential government use will eliminate their ability to offer anonymous internet communications, making VPN users easy targets for state surveillance.

This is especially concerning in countries like India, where anti-terrorism or obscenity rules imposed on online platforms have been used to arrest academics, priests, writers, and poets for posting political messages on social media and leading rallies.

If VPNs comply with the CERT-In Cybersecurity Directions, they can no longer be relied upon as an effective anonymity tool to protect their users’ free expression, privacy, and association, nor as an effective security tool. Chandrasekhar has said VPNs must comply with the Directions or curtail services in India. “You can’t say, 'No, it's our rules that we do not maintain logs,'” he told reporters earlier this year. “If you don't maintain logs, then this is not a good place to do business.”

VPNs “should not have to collect data that are not relevant to their operations to satisfy the new directions, just as private spaces cannot be mandated to carry out surveillance to aid law enforcement purposes,” IFF Policy Director Prateek Waghre said in a brief co-authored and published by the Internet Society. “What makes CERT-In’s directions related to data collection even riskier is that India does not have a data privacy or data protection law. Therefore, citizens in the country do not have the surety that their data will be safeguarded against overuse, abuse, profiling, or surveillance.”

The Internet Freedom Foundation (IFF) in India has called on CERT-In to recall the directions, saying the data retention requirements are excessive. The organization has also urged CERT-In to seek input from technical and cybersecurity experts and civil society organizations to revise them.

VPNs Fight Back

VPN operators have strongly objected, as the rules would essentially negate their purpose. Many said they would have to pull out of India if forced to collect and retain user data. The good news is that most continue to offer services by routing traffic through virtual servers in Singapore, London, and the Netherlands. Meanwhile, Indian VPN service SnTHostings, which has just 15,000 customers, has filed a lawsuit challenging the rules on grounds that they violate privacy rights and exceed the powers conferred by the Information Technology Act 2000, India’s primary electronic commerce and cybercrime law. SnTHostings is represented by IFF in the case.

The CERT-In Directions come as the government has taken other steps to weaken privacy and restrict free expression; read more here, here, here, here, here, and here. Digital rights in India are degenerating, and civil society organizations and VPN providers are not the only ones raising red flags.

The Information Technology Industry Council (ITI), a global trade association representing Big Tech companies like Apple, Amazon, Facebook, and Google, has called on CERT-In to revise the Directions, saying they will negatively impact Indian and global enterprises and actually undermine cybersecurity in India. “These provisions may have severe consequences for enterprises and their global customers without solving the genuine security concerns,” ITI said in a May 5 letter to CERT-In. A few weeks later, the agency clarified that the new directions don’t apply to corporate and enterprise VPNs.

A group of 11 industry organizations representing Big Tech companies in Asia, the EU, and the U.S. have also complained to CERT-In about the rules and urged that they be revised. While noting that internet service providers already collect the customer information required by the rules, they said requiring VPNs, cloud service providers, and virtual service providers to do the same would be “burdensome and onerous” for enterprise customers and data center providers to comply with. The threat to user privacy isn’t mentioned. We’d like to see this change. Tech industry groups, and the companies themselves, should stand with their users in India and urge CERT-In to withdraw these onerous data collection requirements.

To learn more, read Internet Freedom Foundation’s CERT-In Directions on Cybersecurity: An Explainer.


How badly does Office, pardon, Microsoft 365 violate the GDPR?

IusMentis - 2 December 2022 - 8:10am

According to the German Datenschutzkonferenz (DSK), Microsoft 365 still violates European GDPR privacy law. Tweakers recently reported this. It concerns the preliminary conclusion of a two-year investigation into the online services within the company’s Office tool (under its new brand name). According to the DSK, “no substantial improvements” have been made; Microsoft has the better PR agency, as it speaks of “meeting, no: exceeding, GDPR requirements”. Now, nobody goes beyond what the GDPR requires just for fun, so what is going on here?

The problem came to light in 2020: the DSK, the German data protection watchdog, investigated how the Enterprise edition of Windows 10 handles user information. Users can configure Windows to send no data to Microsoft at all. According to the DSK’s research, this does not work entirely flawlessly: even when a user withholds consent, Windows 10 still sends data to Microsoft. Since then, Microsoft has spent two years negotiating and making adjustments to win the approval of this German joint body of privacy regulators (each federal state has its own DPA). It turned out to involve not just that bit of telemetry, but also matters such as the export of the data outside the EU (hello, Schrems II) and the handling of sub-processors. In September, Microsoft published yet another new privacy statement, which leans more heavily on legitimate interest as the legal basis for all sorts of things.

The DSK still finds plenty missing. The new privacy statement still does not make clear exactly which data are sent to the US and for what purpose. Read it here in all its legal glory, but in my view it does not get more concrete than this (“Microsoft stellt die nötige Transparenz über Verarbeitungstätigkeiten durch umfangreiche Dokumentation her.”): “Customer Data, Professional Services Data, and Personal Data that Microsoft processes on behalf of the Customer may not be transferred to, or stored and processed in, a geographic location except in accordance with the Terms of the [Data Protection Addendum, DPA] and the safeguards described below in this section. Taking such safeguards into account, the Customer authorizes Microsoft to transfer Customer Data, Professional Services Data, and Personal Data to the United States or any other country in which Microsoft or its Subprocessors operate, and to store and process Customer Data and Personal Data to provide the Products, except as described elsewhere in the Terms of the DPA.” So data still flow to the US. According to Microsoft, that is permitted thanks to the privacy fig leaf, pardon, Executive Order that is supposed to replace the Privacy Shield. Much could be said about that, but I will confine myself to noting that Microsoft then points to SCCs as the reason it may store the data in the US. It also claims that only anonymized data are involved, to which the GDPR does not apply.

All in all, I do not get the impression that Microsoft has really changed anything; it has mainly supplied more arguments for why everything is actually fine. That smacks of buying time and hoping that the Executive Order gets accepted by the European Commission.

Arnoud

 

The post How badly does Office, pardon, Microsoft 365 violate the GDPR? appeared first on Ius Mentis.

How to Make a Mastodon Account and Join the Fediverse

This post is part of a series on Mastodon and the fediverse. We also have posts on understanding the fediverse, privacy and security on Mastodon, and why the fediverse will be great—if we don't screw it up—with more on the way. You can follow EFF on Mastodon here.

The recent chaos at Twitter is a reminder that when you rely on a social media platform, you’re putting your voice, your privacy, and your safety in the hands of the people who run that system. Many people are looking to Mastodon as a backup or replacement for Twitter, and this guide will walk you through making that switch. Note this guide is current as of December 2022, and the software and services discussed are going through rapid changes.

What even is the fediverse? Well, we’ve written a more detailed and technical introduction, but put simply it is a large network of independently operated social media websites speaking to each other in a shared language. That means your fediverse social media account is more like email, where you pick the service you like and can still communicate with people who chose a different service.
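Concretely, that shared language is the ActivityPub protocol: instances pass small JSON documents to one another describing actors and their posts. A rough, illustrative sketch of the kind of message one instance might send another when a user posts (the server name, actor, and content here are invented):

    {
      "@context": "https://www.w3.org/ns/activitystreams",
      "type": "Create",
      "actor": "https://mastodon.example/users/alice",
      "object": {
        "type": "Note",
        "content": "Hello, fediverse!"
      }
    }

Because every instance speaks this same format, an account on one server can follow, boost, and reply to accounts on any other.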

EFF is excited and optimistic about the potential of this new way of doing things, but to be clear, the fediverse is still improving and may not be a suitable replacement for your old social media accounts just yet. That said, if you’re worried about relying on the stability of sites like Twitter, now is a good time to “backup” your social media presence in the fediverse.

1. Making an Account

When joining the fediverse, you are frontloaded with several important decisions. Keep in mind it’s easy enough to keep your account information when changing social media providers in the fediverse, so while important, these choices are not permanent. 

First, the social media site which connects you to the fediverse (called an “instance”) can run one of many applications which often mimic how other social media sites work. This guide focuses on the most popular of these called Mastodon, which is a microblogging application that works a lot like Twitter. If you strongly prefer another social media experience over Twitter, however, you may want to explore some of those alternative applications.

Next, using a site like joinmastodon.org, you’ll need to choose which specific Mastodon instance you join—and there are a lot of them. In making a selection you should consider three things:

  • Operators: Who owns the instance and how is it managed? You are trusting them not only with your privacy and security, but to be responsible content moderators. When reviewing an instance’s about page, make sure the rules they set are agreeable to you. You may also want to consider the jurisdiction in which the instance is operating, to help you anticipate what legal and extralegal pressures the moderators might face.
  • Community: Instances run the gamut from smaller or private options that center shared values and niche interests to large, general interest platforms open to everyone. When selecting one, keep in mind that your local peers on an instance affect what content you see in direct and indirect ways. The result can be a close-knit community similar to a Facebook group, or a broad platform for exposure like Twitter.
  • Affiliation: Your instance will be a part of your username, as with email. For example, EFF’s account name is “@eff@mastodon.social”, with “mastodon.social” being the instance. This affiliation may reveal information about yourself, especially if you join a special interest instance. If your instance is considered polarizing or poorly managed, other instances may also “defederate” or block it—meaning your messages won’t be shared with them. That’s likely not a concern with most popular instances, however.


Newcomers, especially those trying Mastodon after using Twitter, will likely want to try a large general-interest server. To reiterate, Mastodon makes it relatively easy to change this later without losing your followers and settings. So even if your preferred instance isn’t available to new users, you can get started elsewhere and move later. Some of you may even eventually want to start your own instance, which you can learn about here.

2. Privacy and Security settings

Once you’ve registered your account, there are a few important settings to consider. While there is a lot to say about Mastodon’s privacy and security merits, this guide will only cover adjusting built-in account settings. 

Remember, there is no one-size fits all approach, and you may want to review our general advice on creating a security plan.

Profile Settings

[Screenshot: the Mastodon “Appearance” settings page under “Profile”]

  • Require follow requests: Turning on this setting means another person can only follow your account after being approved. However, this does not affect whether someone can see your public posts (see next section).
  • Suggest account to others: If you are worried about drawing too many followers, you can uncheck this option so that Mastodon instances do not algorithmically suggest your account to other users.
  • Hide your social graph: Selecting this will hide who you are following and who is following you.
Preferences - Other

[Screenshot: the Mastodon “Other” settings page]

  • Opt-out of search engine indexing: Checking this will make it more difficult for a stranger to find your profile, but it may still be possible if your account is listed elsewhere—e.g., on another social media site or on another fediverse account.
  • Posting Privacy: 
    • Public: Your posts are publicly visible on your profile and are shared with non-followers.
    • Unlisted: Your posts are publicly visible on your profile, but are not shared to non-followers. That means posts won’t be automatically shared to the fediverse, but anyone can visit your page to see your posts.
    • Followers-only: Only your followers can view your posts.

Automated post deletion

[Screenshot: the Mastodon “Automated post deletion” settings page]

Unlike Twitter, Mastodon has a built-in tool that gives users the ability to easily and automatically delete old posts on the site. 

This can be an effective way to limit the amount of information you leave publicly accessible, which is a good idea for people worried about online harassment or stalkers. However, public figures or organizations may opt to leave posts up as a form of public accountability.

Whatever you decide, remember that, as with any social media site, other users can download or screenshot your posts. Post deletion cannot unring that bell. An additional concern for the fediverse is that post deletion must be honored by every instance your post reaches, so some instances could significantly delay or not honor deletion requests (though this is not common).

Account settings - Enable 2FA

This group of settings lets you change your password, set up two factor authentication, and revoke access to your account from specific browsers and apps. If you notice any strange account activity, this is the section you can use to lock down access to your account.

  1. Select “Two-factor Auth” under “Account”.
  2. Click “Set up” and confirm your password; the page will now show a QR code and a plain text secret.
  3. Using a 2FA app, scan the presented QR code or manually enter the text secret (see the example after this list).
  4. Enter your two-factor code.
  5. Click “Enable”.
  6. You’ll now receive 10 recovery codes in case you are not able to access the 2FA device you just set up.
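Incidentally, the QR code and the plain text secret are two encodings of the same shared secret: the QR code typically encodes a standard “otpauth” URI roughly of this shape (the instance, account, and secret here are invented):

    otpauth://totp/mastodon.example:alice?secret=JBSWY3DPEHPK3PXP&issuer=mastodon.example

Your 2FA app derives the rotating six-digit codes from that secret, which is why scanning the QR code and typing in the secret produce the same result.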

As with all 2FA recovery codes, take extra care to save these in a secure place such as a password manager, an encrypted file, or even written out by hand and locked away. If you ever lose these codes, or suspect that someone else might have access to them, you can return to this section to generate new ones to replace these.

Data export

[Screenshot: the Mastodon “Data export” page under “Import and export”]

Finally, if you have a secure way to store information, it is a good idea to regularly create a backup of your account. Mastodon makes it very easy to save your online life, so even if the instance you’re on today is bought by a bored billionaire, you can easily upload most of your account info to a new instance. If you’re planning ahead you can also port your followers to your new home by having your old account point to the new one.

It’s worth emphasizing again that your instance is controlled by its administrators—which means that its continued existence relies on them. Like a website, that means you’re trusting their continued work and wellbeing. However, if your instance is suddenly seized, censored, or bursts into flames, having a backup means you won’t have to completely start over.

3. Migrating and Verifying your Identity

Making sure your followers know you’re really you isn’t just to stroke your ego; it’s a crucial feature in combating misinformation and impersonation. However, if you’re looking for an equivalent of Twitter’s original blue-check verification system, you won’t find it on Mastodon—nor on Twitter, for that matter. You do have a few other options, though.

Share your new Account

The easiest step is to simply link to your new Mastodon account on your other social media account(s). Adding the account to your name, bio, or a pinned message can help your followers find you on Mastodon through a number of methods.

[Screenshot: the joinmastodon Twitter profile, showing its fediverse handle in the display name]

This is a good idea even if you plan for Mastodon to be your back-up account. You want users to know where you’ll be before it is necessary, and sharing early improves your ability to retain your following.

This is also a reason you may not want to delete your old account. Leaving this message up, especially from a verified account, will help your followers find you when they make the switch.

Website Verification

Mastodon also has a built-in verification system, but it’s a bit different than on centralized platforms. The original blue check and similar systems rely on users submitting sensitive documents to the social media company to verify that their online identity matches their legal identity, sometimes with that real name being required on the site. Ultimately, it is a system where users need to trust the diligence of that company’s bureaucratic process.

Instead, Mastodon instances only verify that your account has the ability to edit an external website. To do this, you first add the URL of a website you control to your profile under Profile > Appearance. The label for the URL does not matter.

[Screenshot: the Mastodon “Appearance” settings page under “Profile”, with the “Verification” section in focus]


Then you copy the line of HTML from your profile. This is simply a hyperlink to your account with a special attribute (`rel=”me”`) which most sites will remove from user-created text. Instead, you will need to edit the site’s HTML directly. For example, you can likely add or request this link be added to an employer’s website, which is then vouching for the account truly being yours. The result looks something like this:

[Screenshot: the official Mastodon project account showing a verified homepage of joinmastodon.org]
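For reference, the verification link itself is a single HTML anchor tag. A minimal sketch, assuming a made-up account on mastodon.social:

    <a rel="me" href="https://mastodon.social/@alice">Mastodon</a>

The visible link text can be anything; what matters is the `rel=”me”` attribute and an href that points at your exact profile URL.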

On one hand, this system can eliminate an invasive, opaque, and often arbitrary bureaucracy from the verification process. On the other, you are now trusting two entities: the external website and the Mastodon instance hosting the user. This also asks users, as with email, to watch out for look-alike URLs.

So when setting up verification, a good strategy is to include the website(s) you have the most secure control over, and which have the most recognizable names. A personal blog is less assuring than an employer’s or school’s site, while including all three can be very assuring, especially on reputable instances.

Mastodon: Into the Fediverse

Now you’re ready to jump into the fediverse itself. There are a few options for viewing posts: your “Home” feed will show you posts from everyone you follow; “Local” will show the listed posts from others on your instance; and “Federated” will show you all of the posts your instance is aware of—like a shared follow list you have with everyone on your instance. Keep this in mind as you follow accounts and “boost” posts by sharing them with your followers (similar to a retweet). There is no algorithm deciding what you see on Mastodon, but rather a shared process of curation, and these actions increase the audience of a given post or user.

The fediverse, and Mastodon specifically, are rapidly developing, and it is important to check regularly for changes to features and settings. Your particular security plan may also change in the future, so regular reminders to revisit these settings will help you adjust as needed.

We have the chance to build something better than what the incumbent social media platforms offer. While this is an ongoing process, this overview of settings should give you a good starting point to be a part of that change.


International Coalition of Rights Groups Call on Internet Infrastructure Providers to Avoid Content Policing

Except in Rare Cases, Intervening to Block Sites or Remove Content Can Harm Users

San Francisco—Internet infrastructure services—the heart of a secure and resilient internet where free speech and expression flow—should continue to focus their energy on making the web an essential resource for users and, with rare exceptions, avoid content policing. Such intervention often causes more harm than good, EFF and its partners said today.

EFF and an international coalition of 56 human and digital rights organizations from around the world are calling on technology companies to “Protect the Stack.” This is a global effort to educate users, lawmakers, regulators, companies, and activists about why companies that constitute basic internet infrastructure—such as internet service providers (ISPs), certificate authorities, domain name registrars, hosting providers, and more—and other critical services, such as payment processors, can harm users, especially less powerful groups, and put human rights at risk when they intervene to take down speech and other content. The same is true for many other critical internet services.

EFF today launched the Protect the Stack website at the Internet Governance Forum in Addis Ababa, Ethiopia. The website introduces readers to “the stack,” and explains how content policing practices can cause, and have caused, risks to human rights. It is currently available in English, Spanish, Arabic, French, German, Portuguese, Hebrew, and Hindi.

"Internet infrastructure companies help make the web a safe and robust space for free speech and expression," said EFF Legal Director Corynne McSherry. "Content-based interventions at the infrastructure level often cause collateral damage that disproportionately harms less powerful groups. So, except in rare cases, stack services should stay out of content policing."

“We have seen a number of cases where content moderation applied at the internet’s infrastructural level has threatened the ability of artists to share their work with audiences,” said Elizabeth Larson, Director of the Arts and Culture Advocacy Program at the National Coalition Against Censorship. “The inconsistency of those decisions and the opaque application of vague terms of service have made it clear that infrastructure companies have neither the expertise nor the resources to make decisions on content.”

Infrastructure companies are key to online expression, privacy, and security. Because of the vital role they play in keeping the internet and websites up and running, they are increasingly under pressure to play a greater role in policing online content and participation, especially when harmful and hateful speech targets individuals and groups.

But doing so can have far-reaching effects and lead to unintended consequences that harm users. For example, when governments force ISPs to disrupt the internet for an entire country, people can no longer message their loved ones, get news about what’s happening around them, or speak out.

Another example is domain name system (DNS) abuse, where the suspension and deregistration of domain names is used as a means to stifle dissent. ARTICLE 19 has documented multiple instances of “DNS abuse” in Kenya and Tanzania.

Moreover, at the platform level, companies that engage in content moderation consistently reflect and reinforce bias against marginalized communities. Examples abound: Facebook decided, in the midst of the #MeToo movement’s rise, that the statement “men are trash” constitutes hateful speech. In addition, efforts to police “extremist” content by social media platforms have caused journalists’ and human rights defenders’ work documenting terrorism and other atrocities to be blocked or erased. There’s no reason to expect that things will be any different at other levels of the stack, and every reason to expect they will be worse.

A safe and secure internet helps billions of people around the world communicate, learn, organize, buy and sell, and speak out. Stack companies are the building blocks behind the web, and have helped keep the internet buzzing for businesses, families, and students during the COVID-19 pandemic and for Ukrainians and Russians during the war in Ukraine. We need infrastructure providers to stay focused on their core mission: supporting a robust and resilient internet.

For more information: https://protectthestack.org/

Contact: Corynne McSherry, Legal Director, corynne@eff.org

Comparative Report on the National Implementations of Article 17 of the Directive on Copyright in the Digital Single Market

International Communia Association - 1 December 2022 - 10:00am

In 2019, the EU’s Copyright in the Digital Single Market Directive (CDSMD) was adopted. This included the highly controversial Articles 15 and 17 on, respectively, the new press publishers’ right (PPR) and the new copyright liability scheme for OCSSPs (“online content-sharing service providers”). In a report published in September 2022, I undertook research into the national implementations of these two provisions in 11 Member States: Austria, Denmark, Estonia, France, Germany, Hungary, Ireland, Italy, Malta, the Netherlands and Spain. Based on information gathered through a questionnaire distributed to national experts from each examined Member State, the report assesses the compliance of the national implementations with the internal market objective of the Directive and the EU’s law of fundamental rights. The report was commissioned by C4C, but written in complete academic independence.

This is Part 2 of a two-part contribution highlighting the report’s most significant findings. While Part 1 (published on Kluwer Copyright Blog) focused on Article 15 CDSMD, Part 2 considers Article 17 CDSMD. The report’s findings as regards Article 17 CDSMD were presented at the first session (“Fragmentation or Harmonisation? The impact of the Judgement on National Implementations”) of the Filtered Futures conference co-organised by COMMUNIA and Gesellschaft für Freiheitsrechte on 19 September 2022. 

The post is published under a Creative Commons Attribution 4.0 International licence (CC BY 4.0).



Subject matter and right-holders

As opposed to Article 15 CDSMD, Article 17 CDSMD does not introduce a new related right to EU copyright law. Instead, it expands the protections already afforded by copyright and related rights law. To this end, Article 17(1) links to Article 3(1) and (2) of the Information Society Directive (ISD). While Article 3(1) ISD covers copyright, Article 3(2) ISD lists four related rights: those of performers, phonogram producers, film producers and broadcasting organisations.

Based on the national reports, from among the 11 examined Member States, only two (Malta and the Netherlands) appear to have restricted protection to these four related rights. Denmark explicitly extends protection to producers of photographic pictures and producers of catalogues – i.e., related rights owners not mentioned in Article 3(2) ISD. Other Member States eschew a closed enumeration in favour of a general reference to related rights – implying that all related rights recognised in their national law are covered. To the extent that such rights fall outside of the EU acquis such gold-plating is arguably unproblematic. However, where harmonised related rights – such as the new PPR – are affected, implementations start slipping out of compliance.

France presents an added twist, given the absence in the French transposition of Article 15 CDSMD of an exception for private or non-commercial uses by individual users, as the Directive requires (see Part 1 on Kluwer Copyright Blog). The logical conclusion is that in France OCSSPs may be liable for non-commercial public sharing of press publications by end-users on their platforms – contrary to Article 15’s intended focus on own use by providers such as news aggregators and media monitoring services (see Recital 54 CDSMD).

Exclusive rights

Further issues arise with regard to the implicated exclusive rights. The Irish implementation of Article 17 is particularly perplexing in this regard: as the Irish national expert (Giuseppe Mazziotti) explains, while the transposition’s language is ambiguous and hard to decipher, the Irish implementation appears to require OCSSPs to obtain authorisation not only from the owners of the right of communication to the public and of the making available right (as the CDSMD requires), but also from the owners of the reproduction right. Adding to the confusion, the immunity of Article 17(4) is transposed correctly, so that it is restricted to acts of communication to the public.

A number of possible interpretations exist. One option would be that absent authorisation from the holders of the reproduction right, OCSSPs are liable for communication to the public unless the immunity applies – though this would make little sense. An equally disconcerting possibility would be that there is no way out of a liability for unauthorised acts of reproduction that has no foundation in the Directive. Of course, it is also possible that the reference to the reproduction right is just a transposition glitch that should be ignored, in line with the Marleasing principle. This interpretation would be the most likely – were it not for the fact that it is very hard to say that OCSSPs (whose very definition in Article 2(6) CDSMD describes them as “storing” content) perform acts of communication to the public without also (at least temporarily) copying the relevant works: the absence of any reference to the reproduction right in Article 17 makes little sense. The omission has been belatedly recognised by the European Commission in its Guidance on Article 17, according to which the

acts of communication to the public and making content available in Article 17(1) should be understood as also covering reproductions necessary to carry out these acts.

The Guidance notes that Member States should not provide for an obligation on OCSSPs to obtain an authorisation for reproductions carried out in the context of Article 17. However, the Guidance is non-binding and, moreover – from a certain perspective – just confirms that acts of reproduction are inherent to acts of communication or making available to the public.

Authorized uploads by end-users

Interestingly, France has ignored the suggestion in Recital 69 of the Directive that authorisations granted to users to upload protected subject matter extend to OCSSPs. As Valerie-Laure Benabou, the French national expert, points out, the key issue here is whether OCSSPs are participating in a single act of communication to the public or making available to the public performed by the end-user or whether there are two independent acts, one performed by the user and the other by the intermediary.

Given that the Directive establishes primary and not accessory liability for OCSSPs, there is a strong argument that there are two acts of communication to the public. Of course, this seems counter-factual, as clearly only one material act occurs – the upload by the user onto the OCSSP’s platform. But that is a problem embedded in the entire solution adopted by Article 17, which holds OCSSPs liable for those uploads as primary infringers. Moreover, while recitals do have interpretative value, they do not have binding legal force and therefore cannot be used to counteract the operative part of a directive. As a result, clarity on this issue can only come with CJEU intervention.

Interaction with other types of liability

Another headscratcher is provided by the Spanish implementation. This contains a provision according to which, as has already been reported on Kluwer Copyright Blog by the Spanish national expert, Miquel Peguera, even when OCSSPs abide by the conditions of Article 17(4), right holders will be still able to rely on other causes of action for compensation, such as unjust enrichment.

How to assess this is not obvious. One view is that the provision digs a hole under the entire concept of immunity. The result, therefore, is an interference with the “occupied field” of the Directive in a way that erodes its useful purpose: the same claimant would be granted the same protection against the same OCSSP for the same behaviour for which the directive provides immunity.

As persuasive as this interpretation is, there is also a different perspective from which the provision is unproblematic. It is important to consider the difference between Article 17 and the hosting safe harbour Article 6 of the Digital Services Act (DSA) (previously Article 14 of the E-Commerce Directive): the wording of the latter clearly indicates its purpose to ensure that the relevant providers are, in a harmonised way, not held liable across the EU under certain conditions. By contrast, Article 17 arguably has the opposite objective: to ensure that OCSSPs are, in a harmonised way, held liable across the EU under certain conditions. If that is the case, then it could be said that the Spanish provision avoids interference with Article 17’s useful purpose.

Of course, even if that is so, it is highly unlikely that any OCSSP that meets the conditions of Article 17(4) would fail to meet the conditions set by any other national rule of law. The hosting safe harbour will provide additional refuge. While the issue is therefore likely a storm in a teacup, the theoretical question is an interesting one: does Article 17 exhaust the liability of OCSSPs for infringing copyright/related rights over uploaded content – does it provide total harmonisation of OCSSP liability for copyright-infringing content uploaded by their users or only total harmonisation of OCSSP liability for communication to the public?

Filtering and general monitoring obligations

Perhaps the most interesting question that arose from the study concerns the different national approaches to the implementation of Article 17(4) CDSMD. The issue was brought before the CJEU by Poland in an action for annulment, alleging that Article 17(4) makes it necessary for OCSSPs to adopt filtering technology to the detriment of end-users’ freedom of expression. In its decision, the CJEU accepted that the prior review obligations imposed by Article 17(4) on OCSSPs do require the use of filtering tools, but concluded that the resultant limitation on freedom of expression is acceptable because of the safeguards embedded in sub-paragraphs 7, 8 and 9. The Court refrained, however, from a detailed explanation of how these safeguards should operate in practice.

Among the examined Member States, Germany and Austria have taken a proactive, elaborative approach geared towards establishing how these safeguards can be put into practice. This approach has been described as a “balanced” one, as opposed to the more “traditional” one taken by those countries that adopted a copy-out transposition.

The question that arises is whether this “balanced” approach is compatible with the directive. As others have suggested, there is a strong argument that – to the extent that Article 17 does not explain how its various sub-paragraphs are intended to interact or how it is intended to navigate freedom of expression – such elaboration might be necessary. As mentioned in Part 1, to be copiable in national law, a directive must be internally consistent and well formulated. When provisions are in need of legislative repair or clarification, literal transposition should be dismissed. It can be said that Article 17 is precisely such a provision – as the analysis above reveals, it is not a well-drafted piece of legislation: it raises many questions, its wording is complex and confusing, and its purpose and scope are ambiguous.

That said, following the decision of the CJEU in Poland it is hard to hold that those Member States that have taken a copy-out approach to transposition were wrong to do so. Instead, the answer can be found in the final paragraph of Poland. This emphasises both that Member States themselves must transpose Article 17 in such a way as to allow a fair balance, and that “the authorities and courts of the Member States” must interpret their transpositions so as to respect fundamental rights. In this way, the Court has enabled the compatibility with the Directive and the Charter of both the “balanced” and “traditional” implementation approaches: with the first, the legislator takes on the task of identifying the appropriate “fair balance” itself, via the process of transposition. With the second, the details of the right balance are delegated to judicial interpretation. The first approach may be preferable from a policy perspective – but both are open to the Member States.

Conclusion

Examination of the national implementations of Articles 15 and 17 reveals a number of potential incompatibilities. While some of these can perhaps be attributed to national intransigence or misplaced inspiration despite clear EU requirements (for example, the French failure to protect private or non-commercial uses of press publications by individual users is hard to explain otherwise), very often the root cause can be found in bad drafting by the EU legislator. The contentious subject matter and intricate structures of these articles, as well as their heavy reliance on undefined terminology and the occasional misalignment between the recitals and operative texts, are not designed to facilitate smooth national implementation and homogenous interpretation and application. As the European Commission has acknowledged, “[b]etter law-making helps better application and implementation”. In the intense discussions on Articles 15 and 17 in the run-up to the adoption of the CDSMD, this principle appears to have fallen by the wayside. To address the consequences, the CJEU will no doubt have much CDSMD-focused work ahead of it. Hopefully, future judgments will be clearer than Poland. 

The post Comparative Report on the National Implementations of Article 17 of the Directive on Copyright in the Digital Single Market appeared first on COMMUNIA Association.

Suspect not punished for phishing panel because police did not examine how it worked

IusMentis - 1 December 2022 - 8:19am

A suspect who had a phishing panel on his phone was not convicted for it because the police never tested whether it worked. Security.nl reported this on Monday. A phishing panel is a tool for setting up phishing websites, but of course you do have to prove that it worked. A small win for me: it is a fact of common knowledge that a website also needs server hardware.

The website of researcher Jarno Baselier hosts an extensive demonstration of how such a panel works. It is strongly reminiscent of an ordinary e-commerce site: you set up a control panel, from there you program email campaigns that should drive people to your site, and there you build a landing page to sell them a book. Except there is no book; you trick them out of their password instead. Group-IB has plenty of examples of what that looks like.

The tool shown is called GoPhish, which advertises itself as “a powerful, open-source phishing framework that makes it easy to test your organization’s exposure to phishing.” Among other things, it has a super-handy button to ‘import’ Facebook, for example, so that you can test whether your employees enter their Facebook password when you send them an email that looks convincingly like it came from Facebook. (I am doing my best to phrase this in a businesslike way.)

Which tool the suspect in this case had is not clear to me. Nor to the District Court of The Hague, because the police report does not even show that the panel was clicked through and tested to see whether it worked, or at least not how that went: “I [, reporting officer] examined the data extracted from the device described earlier. My examination showed that the device contains data that can be used to commit cybercrime offenses.” That is of course too thin a basis for a conviction for possessing a tool to commit computer trespass. The suspect did, however, turn out to have a pile of passwords at hand (gosh, no idea how), and that is a separate ground for a conviction.

Another small win for me is that the court sharply anticipates the expected defence that those passwords are only suitable for breaking into a website, and that according to the Court of Appeal of The Hague a website is not a server: "The defence argued that the login credentials could only provide access to accounts and not to the servers in question. The court disregards this defence, since it is a fact of common knowledge that logging in to an account takes place via (part of) a server, which is an automated system within the meaning of Article 139d of the Dutch Criminal Code." I have a very slight doubt whether this is strictly correct, since a "fact of common knowledge" is an argument for when something factual must be proven. But the defence's argument is that the indictment did not charge that the gentleman intended to log in to a server. Forgetting to charge something is different from forgetting to prove it.

But I think it comes out right here, because the offence is "possessing passwords with which you can commit computer trespass". That requires proof that they let you log in to an automated system, and then it is perfectly logical to say "it is common knowledge that Netflix has servers".

Arnoud

The post Suspect not punished for phishing panel because police did not investigate how it worked appeared first on Ius Mentis.

Let Data Breach Victims Sue Marriott

A company harvested your personal data, but failed to take basic steps to secure it. So thieves stole it. Now you’ve lost control of your data, and you’re at greater risk of identity theft. But when you sue the negligent company, they say you haven’t really been injured, so you don’t belong in court – not unless you can prove a specific economic harm on top of the obvious privacy harm.

We say “no way.” Along with our friends at EPIC, and with assistance from Morgan & Morgan, EFF recently filed an amicus brief arguing that negligent data breaches inflict grievous privacy harms in and of themselves, and so the victims have “standing” to sue in federal court – without the need to prove more. The case, In re Marriott Customer Data Breach, arises from the 2018 breach of more than 130 million records from the hotel company’s reservation system. This included guests’ names, phone numbers, payment card information, travel destinations, and more. We filed our brief in the federal appeals court for the Fourth Circuit, which will decide whether the plaintiff class certified by the lower court shares a class-wide injury.

Our brief explains that once personal data is stolen, it can be used against the breach victims for identity theft, ransomware attacks, and to send unwanted spam. The risk of these attacks causes psychological injury, including anxiety, depression, and PTSD. To avoid these attacks, breach victims must spend time and money to freeze and unfreeze their credit reports, to monitor their credit reports, and to obtain identity theft prevention services.

Courts have long granted standing to sue over harms like these. Intrusion upon seclusion and other privacy torts are more than a century old. As the U.S. Supreme Court has recognized: “both the common law and literal understanding of privacy encompass the individual’s control of information concerning [their] person.”

Further, the harms from a single data breach must be understood in the context of the larger data broker ecology. As we explain in our amicus brief:

Data breaches like the Marriott data breach cannot be considered individually. Once data has been disclosed from databases such as Marriott’s, it is often pooled with other information, some gathered consensually and legally and some gathered from other data breaches or through other illicit means. That pooled information is then used to create inferences about the affected individuals for purposes of targeted advertising, various kinds of risk evaluation, identity theft, and more. Thus, once individuals lose control over personal data that they have entrusted to entities like Marriott, the kinds of harms can grow and change in ways that are difficult to predict. Also, it can be onerous, if not impossible, for an ordinary individual to trace these harms and find appropriate redress.

Standing doctrine gone wrong

Under the current standing doctrine, your privacy is violated – and so you have standing to sue – when your data leaves the custody of a company that is supposed to protect it. So In re Marriott is an easy case for the Fourth Circuit.

But make no mistake, the U.S. Supreme Court has wrongly narrowed the standing doctrine in recent data privacy cases, and it should reverse course. These cases are Spokeo v. Robins (2016) and TransUnion v. Ramirez (2021). They hold that to have standing, a person seeking to enforce a data privacy law must show a “concrete” injury. This includes “intangible harms” that have “a close relationship to harms traditionally recognized as providing a basis for lawsuits in American courts,” such as “reputational harms, disclosure of private information, and intrusion upon seclusion.”

In TransUnion, the credit reporting company violated the Fair Credit Reporting Act by negligently and falsely labeling some 8,000 people as potential terrorists. The Court held that some 2,000 of them suffered concrete injury, and thus had standing, because the company disclosed this dangerous information to others. Unfortunately, the Court also held that the remaining people lacked standing, because the company unlawfully made this dangerous information available to employers and other businesses, but did not actually disclose it to them.

We disagree. As we argued in amicus briefs in TransUnion and Spokeo (and have argued elsewhere), we need broader standing for private enforcement of data protection laws, not narrower. Our personal data, and the ways private companies harvest and monetize it, play an increasingly powerful role in modern life. Corporate databases are vast, interconnected, and opaque. The movement and use of our data is difficult to understand, let alone trace. Yet companies use it to reach inferences about us, leading to lost employment, credit, and other opportunities. In this data ecosystem, all of us are increasingly at risk from wrong, outdated, or incomplete information, yet it is increasingly hard to trace the causation from bad data to bad outcomes.

Congress made a sound judgment in the Fair Credit Reporting Act that a person should be able to sue a data broker that negligently compiled a dossier about them containing dangerously false information, and then made that dossier available to others. Four Justices in TransUnion would have deferred to Congress, but the majority thought it knew better.

So, even though TransUnion provides standing to the many millions of people harmed by data breaches, including Marriott’s, the Court still must revisit and overrule TransUnion.

You can read our In re Marriott amicus brief here.

Let Them Know: San Francisco Shouldn’t Arm Robots

Electronic Frontier Foundation (EFF) - news - 1 December 2022 - 12:22am

The San Francisco Board of Supervisors on Nov. 29 voted 8 to 3 to approve on first reading a policy that would formally authorize the San Francisco Police Department to deploy deadly force via remote-controlled robots. The majority fell down the rabbit hole of security theater: doing anything to appear to be fighting crime, regardless of whether or not it has any tangible effect on public safety.

These San Francisco supervisors seem not only willing to approve dangerously broad language about when police may deploy robots equipped with explosives as deadly force, but they are also willing to smear those who dare to question its possible misuses as sensationalist, anti-cop, and dishonest.

When can police send in a deadly robot? According to the policy: “The robots listed in this section shall not be utilized outside of training and simulations, criminal apprehensions, critical incidents, exigent circumstances, executing a warrant or during suspicious device assessments.” That’s a lot of events: all arrests and all searches with warrants, and maybe some protests. 

When can police use the robot to kill? After an amendment proposed by Supervisor Aaron Peskin, the policy now reads: “Robots will only be used as a deadly force option when [1] risk of loss of life to members of the public or officers is imminent and [2] officers cannot subdue the threat after using alternative force options or de-escalation tactics options, **or** conclude that they will not be able to subdue the threat after evaluating alternative force options or de-escalation tactics. Only the Chief of Police, Assistant Chief, or Deputy Chief of Special Operations may authorize the use of robot deadly force options.”

The “or” in this policy (emphasis added) does a lot of work. Police can use deadly force after “evaluating alternative force options or de-escalation tactics,” meaning that they don’t have to actually try them before remotely killing someone with a robot strapped with a bomb. Supervisor Hillary Ronen proposed an amendment that would have required police to actually try these non-deadly options, but the Board rejected it.

Supervisors Ronen, Shamann Walton, and Dean Preston did a great job pushing back against this dangerous proposal. Police claimed this technology would have been useful during the 2017 Las Vegas mass shooting, in which the shooter was holed up in a hotel room. Supervisor Preston responded that it probably would not have been a good idea to detonate a bomb inside a hotel.

The police department representative also said the robot might be useful in the event of a suicide bomber. But exploding the robot’s bomb could detonate the suicide bomber’s device, thus fulfilling the terrorist’s aims. After common sense questioning from their peers, pro-robot supervisors dismissed concerns as being motivated by ill-formed ideas of “robocops.”

The Board majority failed to address the many ways that police have used and misused technology, military equipment, and deadly force over recent decades. They seem to trust that police would roll out this type of technology only in the direst circumstances, but that's not what the policy says. They ignore the innocent bystanders and unarmed people already killed by police using other forms of deadly force that were likewise intended only for dire circumstances. And they didn't account for the militarization of police responses to protesters, such as the overhead surveillance of a Minneapolis demonstration by a Predator drone.

The fact is, police technology constantly experiences mission creep – meaning equipment reserved only for specific or extreme circumstances ends up being used in increasingly everyday or casual ways. This is why President Barack Obama in 2015 rolled back the Department of Defense's 1033 program, which had handed out military equipment to local police departments. He said at the time that police must "embrace a guardian—rather than a warrior—mind-set to build trust and legitimacy both within agencies and with the public."

Supervisor Rafael Mandelman smeared opponents of the bomb-carrying robots as "anti-cop," and unfairly questioned the professionalism of our friends at other civil rights groups. Nonsense. We are just asking why police need new technologies and under what circumstances they would actually be useful. This echoes the recent debate in which the Board of Supervisors enabled police to get live access to private security cameras, without any realistic scenario in which that access would prevent crime. This is disappointing from a Board that in 2019 made San Francisco the first municipality in the United States to ban police use of face recognition.

We thank the strong coalition of concerned residents, civil rights and civil liberties activists, and others who pushed back against this policy. We also thank Supervisors Walton, Preston, and Ronen for their reasoned arguments and common-sense defense of the city's most vulnerable residents, who are also harmed by police violence.

Fortunately, this fight isn’t over. The Board of Supervisors needs to vote again on this policy before it becomes effective. If you live in San Francisco, please tell your Supervisor to vote “no.” You can find an email contact for your Supervisor here, and determine which Supervisor to contact here. Here's text you can use (or edit):

Do not give SFPD permission to kill people with robots. There are many alternatives available to police, even in extreme circumstances. Police equipment has a documented history of misuse and mission creep. While the proposed policy would authorize police to use armed robots as deadly force only when the risk of death is imminent, this legal standard has often been under-enforced by courts and criticized by activists. For the sake of your constituents' rights and safety, please vote no.

TAKE ACTION

EMAIL YOUR SUPERVISOR: DON'T LET SFPD ARM ROBOTS 

Legal Primer for Digitization – Social Media into the Archive – "Doing Research": Creative Commons, Academic Publishing

iRights.info - 30 November 2022 - 8:44am

Several publications involving iRights.info have recently appeared as open access. Among them are the "Rechtsfibel für Digitalisierungsprojekte in Kulturerbe-Einrichtungen" (a legal primer for digitization projects in cultural heritage institutions), an expert report on archiving social media content, and book chapters on Creative Commons and the academic publishing industry.

Paul Klimpel: A legal primer on digitizing cultural heritage

Cultural heritage institutions that want to bring their offerings online in a legally sound way often run into legal questions. For example: What are the consequences of copyright protection for the objects to be digitized? What does the "public domain" mean in copyright law, and how is it determined? Which license is best suited for open access to one's own digitized material? What applies to special formats (for instance with unclear authorship) such as newspaper articles, leaflets, files, or minutes?

Questions of this and similar kinds are answered by the guide "In Bewegung. Die Rechtsfibel für Digitalisierungsprojekte in Kulturerbe-Einrichtungen", written by Paul Klimpel, attorney at iRights.Law and head of the annual conference "Zugang gestalten!". The guide is a further development of the legal primer "Kulturelles Erbe digital", produced with the participation of the Deutsches Digitales Frauenarchiv, the i.d.a.-Dachverband e.V., and the Forschungs- und Kompetenzzentrum Digitalisierung Berlin (digiS). The primer has been revised, expanded, and adapted to the changes to copyright law in force since summer 2021.

Numerous practical questions about copyright, explained in everyday terms

The roughly 150-page guide addresses all archives, museums, and libraries, large and small, that are taking on the opportunities and tasks of the digital world. The primer is illustrated with numerous examples and follows typical cases from practice.

In a total of thirteen chapters, Klimpel explains the general foundations of copyright, trademark law, open licenses, and rights clearance. He also discusses a series of specific questions, for instance on the new extended collective licenses and the associated tasks of collecting societies, on the online presentation of digitized material, and on archiving. The guide is rounded off with a helpful glossary of many important keywords and a bibliography with recommendations for further reading and research.

The text is published under the Creative Commons license CC BY 4.0 and may be distributed, adapted, and otherwise reused under the terms of the license. The guide also contains openly licensed images such as graphics and photos. The legal primer is available for free download at digiS.

  • In Bewegung. Die Rechtsfibel für Digitalisierungsprojekte in Kulturerbe-Einrichtungen (PDF at iRights.info).

Paul Klimpel and Fabian Rack: On the legal framework for archiving social media content

Social media such as Twitter, Facebook, or Instagram are gaining ever more influence in public discourse. That should be clear to anyone who spends time on the internet these days. If a platform threatens to fail or disappear, discussions, posts, and other important material may be lost with it. So how can tweets, stories, Facebook groups, but also photos, videos, and other user-generated content be archived and preserved for posterity? And ideally in a legally sound way? Fabian Rack, attorney at iRights.Law and author at iRights.info, and Paul Klimpel address this question in a recently published expert report for the Friedrich-Ebert-Stiftung (FES).

Archiving social media: legal analysis, practical tips

First, the two lawyers set out the basics of copyright, data protection, and the providers' terms of service. They then examine how these rules affect the archivability of social media content: on the one hand for on-site use (here primarily in the FES's Archiv der sozialen Demokratie), on the other hand in the context of online use. The analysis closes with nine pragmatic recommendations. These are aimed specifically at the commissioning body, the FES, but also give general guidance on archiving Twitter, Facebook, and Instagram, from which other cultural heritage institutions should benefit as well.

The report appears in the journal "Beiträge aus dem Archiv der sozialen Demokratie" (pp. 15-48) and was recently presented at a panel discussion. The text of the report is licensed under CC BY 4.0.

  • Einschätzung der rechtlichen Rahmenbedingungen für die Archivierung von Social-Media-Inhalten im Archiv der sozialen Demokratie (PDF at iRights.info).

Fabian Rack, Georg Fischer: Contributions on Creative Commons and academic publishing in "Doing Research"

The book "Doing Research", edited by Sandra Hofhues and Konstanze Schütze and published by transcript Verlag, takes an unusual approach: more than 50 abbreviations from the academic context take centre stage. They are explained, analysed, and placed in the historical and practical context of scholarly knowledge production.

Under "#", for example, there is an examination of the hashtag's potential in science and research. The contribution on the abbreviation "s." (for "siehe", "see") looks at academic referencing practices, which shape everyday citation and thus researchers' networking. And the entry "(b)cc" – known as "(blind) carbon copy" from email programs – reconstructs the role of the copier and of copying for academic administration (both on paper and digitally). Precisely because these abbreviations are so everyday (and sometimes unquestioned), it is interesting to ask about their origins and their effects on scholarly practice.

"CC" and "Verl." – academic abbreviations reconstructed

The abbreviation "CC" should be well known to readers of iRights.info. Behind it is the US organization Creative Commons, which develops and publishes the free licenses of the same name. For the volume, Fabian Rack, who wrote the German FAQs on CC, explains the aim of open content and free licenses such as CC, how they work, and what to bear in mind when using them. The six-page text thus serves as an introduction to the subject and suits anyone looking for a compact and up-to-date overview (pp. 154-161).

The abbreviation "Verl." is encountered not only in the academic context but also in fiction and other book genres. It stands for "Verlag" (publisher), and indirectly for the various practices of publishing. In their contribution, Georg Fischer, editor at iRights.info, and Maximilian Heimstädt of Bielefeld University analyse the academic publishing industry and discuss various publishing practices in the commercial and university sectors. According to them, alternative, predatory, and platform-based publishing practices are emerging at the margins of the industry. Finally, Heimstädt and Fischer show how new methods of data analysis are currently changing academic publishing (pp. 392-398).

All texts in the edited volume are licensed under CC BY 4.0, and the complete book is available open access. A print edition is available from transcript Verlag and the usual retailers.

  • Doing Research – Wissenschaftspraktiken zwischen Positionierung und Suchanfrage (PDF at iRights.info).

The post Legal Primer for Digitization – Social Media into the Archive – "Doing Research": Creative Commons, Academic Publishing appeared first on iRights.info.

Twitter's Brussels office closed – is that a legal problem?

IusMentis - 30 November 2022 - 8:13am

Twitter's small but vital Brussels office has closed, the Financial Times reported on Thursday. The last employees are gone – it is unclear whether they were dismissed or resigned. It concerns only a handful of people, and all of Twitter's core activity takes place in California. Still, this is a step that alarms many people in Europe. What is behind it?

Twitter still has an establishment in Ireland, where reportedly 50% of the staff has been dismissed. (With the caveat that it is unclear whether this kind of mass dismissal is even possible under European/Irish employment law.) The departure of the Brussels employees is especially painful because they took the lead on compliance with the newest rules – the Digital Services Act and the Digital Markets Act, with which Europe wants to rein in the American tech companies.

Both the DSA and the DMA are beasts of texts, so complying with them will be a tall order without local experts who sit close to the legislator and the enforcer (the European Commission). That is apart from already existing legislation, such as the commitments to remove hate speech within 24 hours. The FT reports that Twitter's performance on this dropped by 5% in the recent period.

For the GDPR specifically, things are in order: there, Twitter in Ireland is the party to address. At least formally – in practice everything will be executed from California. And it is likely that whoever wants to take Twitter to task over inscrutable moderation interventions (a DSA violation) will also have to go to Ireland. The question is above all whether enough staff is present there to comply with the rules, hence the European Commission's concern.

Arnoud

The post Twitter's Brussels office closed – is that a legal problem? appeared first on Ius Mentis.

Coalition of Human Rights, LGBTQ+ Organizations Tell Congress to Oppose the Kids Online Safety Act

Electronic Frontier Foundation (EFF) - news - 29 November 2022 - 7:41pm

Yesterday, nearly 100 organizations asked Congress not to pass the Kids Online Safety Act (KOSA), which would "force providers to use invasive filtering and monitoring tools; jeopardize private, secure communications; incentivize increased data collection on children and adults; and undermine the delivery of critical services to minors by public agencies like schools." EFF agrees. 

As we've said before, KOSA would not protect the privacy of children or adults, and would force technology companies to spy on young people and stop them from accessing content that is "not in their best interest," as defined by the government, and interpreted by tech platforms. KOSA would also likely result in an elaborate age-verification system, run by a third party, that maintains an enormous database of all internet users' data. 

The letter continues: 

While KOSA has laudable goals, it also presents significant unintended consequences that threaten the privacy, safety, and access to information rights of young people and adults alike. We urge members of Congress not to move KOSA forward this session, either as a standalone bill or attached to other urgent legislation, and encourage members to work toward solutions that protect everyone’s rights to privacy and access to information and their ability to seek safe and trusted spaces to communicate online.

TAKE ACTION

TELL THE SENATE: VOTE NO TO CENSORSHIP AND SURVEILLANCE 

You can tell the Senate not to move forward with KOSA here. 

Power Up! Donations Get a 2X Match This Week

Electronic Frontier Foundation (EFF) - news - 29 November 2022 - 5:40pm

Power Up Your Donation Week is here! Right now, your contribution to the Electronic Frontier Foundation will have double the impact on digital privacy, security, and free speech rights for everyone.

Power Up!

Give today and get an automatic 2X match

A group of passionate EFF supporters created the Power Up Matching Fund and issued this challenge to all supporters of internet freedom: donate to EFF by December 6th and they’ll automatically match it up to a total of $272,000!

This means every dollar you give becomes two dollars for EFF. And we make every cent count. American nonprofit organizations rely heavily on fundraising that happens each November and December. During this season, the strength of members' support gives EFF the confidence to set its agenda for the following year. Your support powers EFF's initiatives to advance digital rights every day.

A Beacon in the Haze

Tech users face problems that shift as quickly as their digital tools. Sometimes the threat is a company’s sneaky methods to track your movements online. Other times it’s shortsighted lawmakers who overlook a dark future for your rights. Our digital world can be just as stormy as the one outside.

But thanks to public support, EFF is a leading voice for digital creators and users' rights. You can ensure that EFF's team of public interest lawyers, tech developers, and activists remains a beacon for a brighter web. Your donation will give twice the support to EFF's initiatives.

Double Your Impact

Power Up Your Donation Week motivates thousands of people to support online rights every year. And we need your help to share this opportunity. Invite friends to join the cause! Here’s some sample language that you can share:

Donations to EFF get doubled this week thanks to a matching fund. Join me in supporting digital rights, and your contribution will pack double the punch, too! https://eff.org/power-up

I’m grateful to all of the supporters who made EFF one of the most skilled defenders in the internet freedom movement. And now, you can help continue this critical work AND power up your donation.

Join EFF today

Pack twice the punch for civil liberties and human rights online

From Camera Towers to Spy Blimps, Border Researchers Now Can Use 65+ Open-licensed Images of Surveillance Tech from EFF

Electronic Frontier Foundation (EFF) - news - 29 November 2022 - 5:15pm

The U.S.-Mexico border is one of the most politicized technological spaces in the country, with leaders in both political parties supporting massive spending on border security and the so-called "Virtual Wall." Yet we see little debate over the negative impacts for human rights or the civil liberties of those who live in the borderlands. Despite all the political and media attention devoted to the border, most people hoping to write about, research, or learn to identify the myriad technologies deployed there have to rely on images released selectively by Customs & Border Protection, copyright-restricted photographs taken by corporate press outlets, or promotional advertisements from the vendors themselves.

[Embedded video: https://www.youtube.com/embed/nleYJgKSQrY – this embed serves content from youtube.com]

To address this information gap, EFF is releasing a series of images taken along the U.S.-Mexico border in California, Arizona, and New Mexico under a Creative Commons Attribution 3.0 license, which means they are free to use so long as credit is given to EFF (see EFF's Copyright policy). Our goal is not only to ensure there are alternative and open sources of visual information to inform discourse, but also to raise awareness of how surveillance is impacting communities along the border and of the hundreds of millions of dollars being sunk into oppressive surveillance technologies.

Surveillance Towers

The images include various types of surveillance towers adopted by Customs & Border Protection over the last two decades: 

  1. Integrated Fixed Towers (IFT). These structures are from the vendor Elbit Systems of America, part of an Israeli corporation that has come under criticism for its role in surveillance in Palestine.  Some IFT towers are built using the same infrastructure as the earlier Secure Border Initiative (SBInet) program, which was widely considered a multi-billion-dollar boondoggle and canceled in January 2011. While there may be different IFT models along the border, the most common versions combine electro-optical and infrared sensors and radar and use solar panels. 
  2. Remote Video Surveillance Systems (RVSS). These structures from the vendor General Dynamics are most commonly, but not exclusively, found along the border fence. The platform at the top usually includes two sensor rigs with electro-optical and infrared cameras and a laser illuminator. The RVSS towers along the southwestern border (California, Arizona, New Mexico, and the El Paso area in Texas) differ in design from some of the RVSS models in south Texas; those are not included in this photo collection. 
  3. Autonomous Surveillance Towers (AST). These "Sentry" towers are made by Anduril Industries, founded by Oculus-creator Palmer Luckey. According to CBP, an AST "scans the environment with radar to detect movement, orients a camera to the location of the movement detected by the radar, and analyzes the imagery using algorithms to autonomously identify items of interest." In July 2020, CBP announced plans to acquire a total of 200 of these systems by the end of Fiscal Year 2022, a deal worth $250 million. EFF is publishing an image of one of these new towers installed in New Mexico along State Road 9; previously Anduril towers were only known to be located in Southern California and South Texas. 
  4. Mobile Surveillance Capabilities (MSC) from the vendor FLIR, which are surveillance towers mounted in the back of trucks so that they can be transported around or parked semi-permanently at particular locations. While CBP has used these trucks for many years, in early 2021 FLIR announced a new $21 million contract with CBP that will include additional units with new technologies "that can track up to 500 objects at once at ranges greater than 10 miles." While these trucks do move around the region, they are often parked in certain established areas, including next to permanent surveillance towers. 

CBP is currently in the early stages of the solicitation process for a massive expansion of this tower network on both the southern and northern borders, according to an industry presentation from October. The "Integrated Surveillance Tower" (IST) program is designed to "consolidate disparate surveillance tower systems under a single unified program structure and set of contracts," but it also contemplates upgrading 172 current RVSS towers and then adding 336 more, with the majority in California and Texas.

Tactical Aerostats

EFF's image set also includes two new tactical aerostats. First, the persistent ground surveillance (PGSS) tactical aerostat that was launched without notice over the summer in Nogales, AZ, surprising and angering the local community. Second, we photographed a new aerostat in southern New Mexico that had not been previously reported. A third aerostat will soon be launched in Sasabe, AZ, with a total of 17 planned in the next fiscal year, according to a CBP report.

These aerostats should not be confused with the "Tethered Aerostat Radar Systems," which are larger and permanently moored at airfields throughout the southern U.S. and Puerto Rico. TARS primarily use radar, while tactical aerostats include "day and night cameras to provide persistent, low-altitude surveillance, with a maximum range of 3,000 feet above ground level," CBP says. Tactical aerostats are tethered to trailer-like platforms that can be moved to other locations within a Border Patrol area of responsibility.

EFF and the Border

EFF's photographs were gathered up close when possible, and using a long-range lens when not, by EFF staff during two trips to the U.S.-Mexico border. In addition to capturing these images, EFF met with the residents, activists, humanitarian organizations, law enforcement officials, and journalists whose work is directly impacted by the expansion of surveillance technology in their communities. 

While officials in Washington, DC and state capitals talk in abstract and hyperbolic terms about a "virtual wall," there is nothing virtual at all about the surveillance for the people who live there. The towers break up the horizon and loom over their backyards. They can see the aerostats from the windows of their homes. This surveillance tech watches not just the border, and people crossing it, but also nearby towns and communities, on both sides, from air and the ground, and it can track them for miles, whether they're hiking, driving to visit relatives, or just minding their own business in solitude. 

People who live, work, and cross the border have rights. We hope these photographs document the degree to which freedoms and privacy have been curtailed for people in the borderlands.

A sample of the images is below. You can find the entire annotated collection here. 

An Anduril Sentry off CA-98 in Imperial County, CA

A Tactical Aerostat flying over State Road 9, Luna County, NM

An extreme close-up shot of the lens of an Integrated Fixed Tower (IFT) camera on Coronado Peak, Cochise County, AZ

A Mobile Surveillance Capability surveillance device atop a truck in Pima County, AZ

Police discover iSpoof server in Almere and take scam service offline

IusMentis - 29 November 2022 - 8:19am

The Cybercrime Team Midden-Nederland, in cooperation with London's Metropolitan Police Service, has taken iSpoof servers offline. Tweakers reported this on Thursday. These servers facilitated thousands of fake phone calls per month, which is often used for scams such as alarming "we are from the bank and you are being hacked" calls. Which raises the question: why is this still possible at all?

Unfortunately, it is increasingly common for scammers ("phishers", in the jargon) to use spoofed phone numbers. That is logical: if a victim sees a bank's real number as the caller, he will of course be more inclined to believe the story. And technically it is simple: in practice, telecom providers impose no restrictions at all, such as requiring that the configured number be linked to your company somewhere in the provider's records. I don't know why that doesn't happen.

In 2021 the government announced a proposal to amend the Telecommunications Act to oblige telecom providers to tackle phone number spoofing. That led to a public consultation this summer. The gist is that Article 11.10 of the Telecommunications Act will prohibit sending along "incorrect" numbers, and that the ACM may set further rules on specific verification obligations. Think of a blacklist with, say, banks' numbers (which you must not let your customers use on outgoing calls), or an obligation to call such numbers before the user may activate them.

This seems like a good idea to me, so I was surprised to read that Microsoft, among others, had objected. The gist is that there are sometimes legitimate uses of CLI spoofing, such as showing a company's general number while the call is made from an employee's mobile. But as I read the bill, that is not categorically prohibited either: it is simply allowed when the number does belong to the organisation. You can already solve that with a list of the employees' mobiles.

Microsoft does point to another development, the STIR/SHAKEN protocol, which is meant to counter spoofing at a deeper technical level. The gist here is that the provider has a list of numbers it knows belong to the customer, and only lets the call through if the outgoing number is on that list. But how exactly that differs from what the ministry proposes, I don't entirely understand.
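
As a rough illustration of the difference: the Dutch proposal checks the asserted number against an administrative list, while STIR/SHAKEN additionally has the originating provider cryptographically sign the result of that check into the call (a SIP Identity header). The sketch below shows only the allow-list and attestation logic; the attestation levels A/B/C are taken from the SHAKEN framework, but the data structures and numbers are invented for illustration, and the certificate signing is omitted entirely.

```python
# Simplified illustration of the allow-list check described above.
# Real STIR/SHAKEN signs a SIP Identity header with the provider's
# certificate; here we only decide which attestation level a provider
# could attach. Customers and numbers are invented for this example.

REGISTERED_NUMBERS: dict[str, set[str]] = {
    "customer-42": {"+31201234567", "+31612345678"},  # hypothetical customer
}

def attest_outgoing_call(customer_id: str, caller_id: str) -> str:
    """Return the attestation level for an outgoing call's caller ID."""
    known = REGISTERED_NUMBERS.get(customer_id, set())
    if caller_id in known:
        return "A"  # full attestation: customer and number both verified
    if known:
        return "B"  # customer is known, but this caller ID is not theirs
    return "C"      # gateway attestation: origin cannot be verified at all

print(attest_outgoing_call("customer-42", "+31201234567"))  # A
print(attest_outgoing_call("customer-42", "+31887557755"))  # B: e.g. a bank's number
```

A receiving provider can then treat anything below "A" with suspicion, which is precisely the kind of check that would have stopped iSpoof-style bank impersonation.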

Arnoud

Het bericht Politie ontdekt iSpoof-server in Almere en haalt oplichtingsdienst uit de lucht verscheen eerst op Ius Mentis.

Red Alert: The SFPD want the power to kill with robots

Electronic Frontier Foundation (EFF) - news - 28 November 2022 - 8:56pm

The San Francisco Board of Supervisors will vote soon on a policy that would allow the San Francisco Police Department to use deadly force by arming its many robots. This is a spectacularly dangerous idea and EFF’s stance is clear: police should not arm robots.

Police technology goes through mission creep – meaning equipment reserved only for specific or extreme circumstances ends up being used in increasingly everyday or casual ways. We've already seen this with military-grade Predator drones flying over protests, and with police buzzing the window of an activist's home with a drone.

As the policy is currently written, the robots' use will be governed by this passage:

 “The robots listed in this section shall not be utilized outside of training and simulations, criminal apprehensions, critical incidents, exigent circumstances, executing a warrant or during suspicious device assessments. Robots will only be used as a deadly force option when risk of loss of life to members of the public or officers is imminent and outweighs any other force option available to SFPD.”

This is incredibly broad language. Police could bring armed robots to every arrest, and every execution of a warrant to search a house or vehicle or device. Depending on how police choose to define the words “critical” or “exigent,” police might even bring armed robots to a protest. While police could only use armed robots as deadly force when the risk of death is imminent, this problematic legal standard has often been under-enforced by courts and criticized by activists.

The combination of new technology, deadly weapons, tense situations, and a remote control trigger is a very combustible brew.

This comes as many police departments have imported robots from military use into regular policing procedures, and now fight to arm those robots.

In October 2022, the Oakland police department proposed a similar policy to arm robots. Following public outrage, the plans were scrapped within a week.

The San Francisco Board of Supervisors will be voting on whether to pass this bill on first reading at their November 29, 2022 meeting, which begins at 2pm. You can find your Board of Supervisors member here. Please tell them to oppose this. 

European Court of Justice: UBO anti-money-laundering register is disproportionate

IusMentis - 28 November 2022 - 8:12am

The European Court of Justice has ruled that the UBO register "in its current form seriously impairs the fundamental rights and privacy of citizens". Tweakers reported this last week. The issue is above all the provision that certain data may be inspected by anyone. The ruling is particularly remarkable because it runs counter to a judgment of the Dutch Court of Appeal, which saw no particular risks for entrepreneurs.

The Ultimate Beneficial Owner register – in Dutch, the "uiteindelijkbelanghebbendenregister" – was established to combat financial fraud (I put this as neutrally as I can). The core idea is to prevent financially savvy operators from hiding behind little BVs (private limited companies) that control foundations that are directors of an association that in turn owns a BV. That is why everyone who directs something must be entered in the register, so that parties with an interest can find out who is really behind the outfit.

This is not a specifically Dutch idea but comes from a European Directive that prescribed such a register, albeit with a number of options to protect privacy to a greater or lesser extent. That led to a lawsuit in Luxembourg, where the court decided to consult the Court of Justice. The Court does see significant privacy problems, particularly with a register that can be consulted publicly on the internet: "42 Moreover, it is inherent in making such information available to the general public in that way that it is then accessible to a potentially unlimited number of persons, so that such processing of personal data also enables persons who, for reasons unrelated to the objective pursued by that measure, seek to inform themselves about the situation, in particular the material and financial situation, of a beneficial owner, to gain free access to that information (...). That is all the easier when, as in Luxembourg, the data in question can be consulted on the internet." We had the same in the Netherlands: anyone who wanted to could browse these persons' data at the Chamber of Commerce without further oversight. (Following this ruling, the UBO register is for the time being no longer accessible to the public.)

The Court also has great difficulty with the fact that the Directive on the one hand seeks to protect society against financial fraud (something government agencies should be handling) and on the other hand makes the information available to the general public (instead of only, say, the AFM or the Public Prosecution Service). For these and several other reasons, that article of the Directive cannot stand, which also means that national laws derived from it lapse.

This is remarkable, because in November 2021 the Court of Appeal of The Hague still ruled that there was essentially nothing wrong. The foundation Privacy First had started proceedings to block the introduction of the law: "In [Privacy First's claims] three categories can be discerned: (i) the threat of burglaries and kidnappings (the Quote 500 effect), (ii) problems such as bullying that children of family businesses may face as (future) UBOs, and (iii) the loss of a normal/undisturbed life because, for example, others start asking the UBO for money." But because as a UBO you can request shielding, and your address details do not end up in the public register anyway, there was supposedly nothing wrong. Incidentally, that shielding is granted only if there are concrete signals that organised crime is after you. The Court of Appeal therefore saw no real chance of serious harm and concluded that the law could be introduced just fine.

It is not yet clear how to proceed, but that will have to be worked out at the European level, since the Court has removed the core of the European rules.

Arnoud

The post European Court of Justice: UBO anti-money-laundering register is disproportionate appeared first on Ius Mentis.

The danger of AI is weirder than you think

Geert Jan van Bussel - 26 November 2022 - 11:34am

The "danger" of artificial intelligence is not that it will rise up against its makers, but that it will do exactly what we ask it to do. That is the thesis of Janelle Shane in this TED talk in Vancouver, now already three years old.

By sharing the strange, sometimes alarming consequences of AI algorithms solving human problems (inventing new ice cream flavours or recognising cars on the road), she shows why artificial intelligence is indeed "artificial" and cannot match the "real" brain.

The reality of artificial intelligence cannot yet live up to its hype. It may be a "gift to society", but for now it has yet to deliver on that.

Shane is a researcher who works intensively on artificial intelligence. She also runs a humorous machine learning blog, AI Weirdness, where she writes about the sometimes hilarious, sometimes disturbing ways in which algorithms miss the mark.

I was reminded of this TED talk when I recently got hold of her 2019 book: You Look Like A Thing And I Love You: How AI Works And Why It's Making The World A Weirder Place. Many of the topics she discusses on her blog are presented here for a general audience, puncturing the hype and bringing artificial intelligence back to what it currently is: interesting, with many possibilities, but not yet out of its infancy.

NVB: digital euro offers little added value for citizens and businesses

IusMentis - 25 November 2022 - 8:12am

In its current design, the digital euro offers little added value for citizens and businesses, but it can have a major impact on the quality and stability of the monetary system and payment transactions. I read this at Security.nl. The Dutch Banking Association (NVB), which is behind the statement, therefore argues for a public debate before we introduce this new currency. So what actually is a digital euro, and why would we want one?

The core idea behind a digital euro is "cash, but digital". That is not the same as electronic payment from your bank account (iDeal, credit card, PayPal), because there you do not pay with money: your balance at the bank is settled via an instruction to a beneficiary. And there is another difference: with cash, nobody sees who pays whom, so with a digital euro that should also be the case.

"Oh, that's bitcoin with a European brand name," you may now be thinking. No: "A digital euro is central bank money. This means it would be guaranteed by a central bank and designed to meet citizens' needs: risk-free, respectful of privacy, and attentive to data protection. Central banks have the task of maintaining the value of money, independent of its physical or digital form. …"

With cryptocurrencies there is no such thing as an entity that is liable, which means claims are not enforceable. A digital euro has in common with cryptocurrencies that there is no central registration of owners and how much money they hold. (So if it is stolen, it really is gone; a chargeback by the bank is not possible.) But its value is guaranteed by the European or a national central bank, whereas a cryptocurrency is worth whatever buyers think it is worth. (Hello, FTX.)

This still does not answer why we would want a digital euro. RTL Nieuws made an attempt: "Digital central bank money is a kind of middle ground between the two. Instead of an arrangement with your bank that you have money on deposit, you have a direct arrangement with the central bank. In theory, nobody then ever needs to worry about whether a bank collapses and what the consequences would be for the whole economy." Between the lines I also read a panic reaction to the now-flopped Facebook currency Libra (remember that?), with which the augmented-reality company (where even fleeing Twitter users don't want to go) wanted to conquer the world. Libra had chosen a sort of decentralised set-up, and the advantage of a central bank is of course a central, untouchable guarantee of value.

A major point of discussion remains how anonymous a digital euro should be. Worry about money laundering runs deep in the financial sector, as I see reflected in the NVB's paper: "The government is committed to anonymity for small transactions without breaching legal obligations, in line with the Alkaya/Heinen motion. If the digital euro mainly has the properties of cash, a certain anonymity for small amounts – and especially for peer-to-peer payments – seems obvious. This anonymity must, however, under no circumstances stand in the way of the banking sector's efforts regarding [anti-money-laundering/counter-terrorism]. If such anonymity is introduced, the sector must be indemnified against any liabilities." I can still somewhat see the idea of a digital anonymous currency with a stable value (guaranteed by a central bank); commercially that is a clear USP compared with today's decentralised cryptocurrencies. But as soon as you tie it to an account with all the identification requirements and transaction logging that entails, I no longer see the difference from my ordinary digital bank account.

Arnoud

The post NVB: digital euro offers little added value for citizens and businesses appeared first on Ius Mentis.

New partner alimony calculation applies from 1 January 2023

From next year, judges will calculate in a different way how much partner alimony people can pay after a divorce. On 1 January, the judiciary's Expert Group on Alimony will publish new recommendations (PDF, 163.2 KB) that minimise the differences between child and partner alimony as much as possible. To that end a new concept is introduced: the housing budget ("woonbudget"). The parties' financial capacity will be determined on that basis.

Determining financial capacity

The financial capacity of divorced partners plays a role when the amount of alimony is determined. When calculating child alimony, the court assumes that the parents spend 30 percent of their net disposable income on housing (in legal terms: a notional housing cost). But for partner alimony, capacity has until now been calculated on the basis of actual housing costs. That changes on 1 January.

Predictable

Within family law there has long been a desire to align the determination of capacity for child and partner alimony as much as possible. Experience shows that calculating with a notional housing cost for child alimony is simple to apply. The outcome is predictable and almost never leads to discussion, unlike the actual housing costs on which partner alimony is based. There, one of the ex-partners can, for example, rent a very expensive home and then argue that he or she therefore cannot contribute to the other's living expenses. That leads to conflicts and lawsuits.

Housing budget

Under the expert group's new standards, judges will calculate both partner and child alimony with a housing budget of 30 percent of net income, out of which all housing costs must be paid. "That is a reasonable percentage," says chair Karel Braun, justice at the Court of Appeal in The Hague. "Of course everyone is free to spend more on housing, but in principle we no longer take that into account when calculating alimony. The same applies if actual housing costs are lower."
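
In arithmetic terms the change is simple. A minimal sketch (all figures invented; the full capacity calculation in the expert group's recommendations has more components that are ignored here):

```python
# The new rule: a flat 30% of net disposable income counts as housing
# costs, whatever the actual costs are. Figures are invented examples.

def housing_budget(net_monthly_income: float) -> float:
    """Notional housing allowance: 30% of net disposable income."""
    return 0.30 * net_monthly_income

net_income = 4000.00                 # hypothetical net monthly income
budget = housing_budget(net_income)  # 1200.00 counted for housing
actual_rent = 1800.00                # an expensive rental no longer lowers
                                     # the payer's assessed capacity: the
                                     # calculation still uses 1200.00 unless
                                     # the higher costs are unavoidable and
                                     # not the payer's own doing
print(f"Housing budget counted: {budget:.2f} (actual costs: {actual_rent:.2f})")
```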

Exceptions

It can happen that a maintenance-paying partner cannot avoid spending more on housing than the set housing budget. The court can take that into account in the capacity calculation if the higher housing costs are unavoidable and not the partner's own fault. "There can also be reason to take account of lower housing costs of the maintenance payer, if the ex-partner is short of money and asks the court to look at that," says Braun. "Those costs must then be durably and appreciably lower than the housing budget. That will not be the case, for example, for someone who is divorced, is looking for a new home, and has temporarily moved in with his parents."

Comparing disposable income (jusvergelijking)

In general, the expert group considers it reasonable that the partner who receives alimony does not end up with more to spend than the maintenance-paying ex. At the parties' request, the court can compare both parties' incomes. "Currently the so-called jusvergelijking still applies: what is left as free spending room after basic living needs are met? One partner may not keep more 'gravy' than the other. That way of calculating also changes next year. We will then look at what both parties actually have to spend, which must be equal after the alimony is paid. Special costs that are not the party's own fault and cannot be avoided are included in that comparison. This also applies to the costs of the children, insofar as these are not covered by a child budget (kindgebonden budget)."
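
This comparison, too, reduces to a small calculation: set the alimony so that both parties' actual spending money, after unavoidable special costs, comes out equal. A simplified sketch, ignoring taxation of alimony and the other corrections in the real guidelines:

```python
# Simplified version of the new comparison: choose alimony A such that
# (payer_net - payer_costs - A) == (recipient_net - recipient_costs + A).
# Solving for A gives the formula below. All figures are invented.

def equalizing_alimony(payer_net: float, recipient_net: float,
                       payer_costs: float = 0.0,
                       recipient_costs: float = 0.0) -> float:
    """Alimony that equalizes both parties' actual spending money."""
    gap = (payer_net - payer_costs) - (recipient_net - recipient_costs)
    return max(gap / 2, 0.0)  # never negative: no alimony without a surplus

alimony = equalizing_alimony(payer_net=4000.0, recipient_net=2200.0)
print(f"Alimony: {alimony:.2f}")  # 900.00 -> both parties end up with 3100.00
```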

Implementation

The expert group advises applying the new recommendations in cases heard in court after 1 January 2023 in which the effective date of the (amended) alimony is on or after 1 January 2023.
