CJEU hearing in the Polish challenge to Article 17: Not even the supporters of the provision agree on how it should work

Will the CJEU strike down Article 17?

On Tuesday, November 10, the Court of Justice of the European Union (CJEU) heard case C-401/19. This case is a request by the Polish government to annul the filtering obligation contained in Article 17 of the Copyright in the Digital Single Market (DSM) Directive on the grounds that it will lead to censorship and will limit the freedom of expression and the freedom to receive and impart information guaranteed in Article 11 of the EU Charter of Fundamental Rights (Charter).

The defendants in this case are the European Parliament and the Council of the European Union. In addition, the European Commission and the governments of France and Spain intervened in the case on the side of the defendants. Advocate General Saugmandsgaard Øe was also present at the hearing.

Even for astute followers of the discussions around the implementation of Article 17, the hearing contained a number of surprises. While several Member States have been soldiering on with their national implementation proposals with little regard for the fundamental rights implications of Article 17, the hearing showed that the Court is taking Poland’s complaint very seriously and that the compliance of the contested provisions of Article 17 with the Charter is far from evident. Frequent reference was made during the hearing to the recent opinion of Advocate General Saugmandsgaard Øe in the YouTube and Cyando cases, which is highly critical of extensive obligations on platforms to police the copyright infringements of their users.

On the face of it, the case is about Poland’s request to annul Articles 17(4)(b) and (c) of the DSM directive. Poland’s case essentially rests on the observation that, while not explicitly mandating them, Article 17(4)(b) and (c) effectively requires platforms to implement upload filters, because there are no other effective means of complying with the obligations contained therein. Poland argues that this will lead to censorship and will limit the freedom of information of the users of online platforms.

According to Poland, the key problem with the directive is the move away from active participation of rightholders (as initiators of removal requests in the context of notice and takedown procedures) and towards handing the responsibility for removing infringing uploads over to platforms, which will have to develop private enforcement systems to avoid liability for copyright infringement. Because platforms face no comparable risk when they limit user rights by blocking access to legal content, this creates strong incentives for over-blocking. This in turn will result in censorship and a violation of the fundamental rights to freedom of expression and information under the Charter. Consequently, the problematic parts of Article 17 should be annulled by the Court.

All other parties intervening in the case objected to this line of argument and stated that in their view Article 17 does not violate any fundamental rights. However, they presented strikingly contradictory interpretations of what Article 17 actually requires of platforms. There are two distinct lines of argument: the Commission, the Council and the European Parliament argued that Article 17 contains enough internal safeguards to prevent users’ fundamental rights from being unduly limited. France and Spain, on the other hand, argued that some limitations of fundamental freedoms are justified by the objective that Article 17 seeks to achieve.

Taming the upload filters: Pre-flagging vs. match and flag

How to limit the damage filters can do

One of the most important elements of any implementation of Article 17 will be how platforms can reconcile the use of automated content filtering with the requirement not to prevent the availability of legitimate uploads. While most implementation proposals that we have seen so far are silent on this crucial question, both the German discussion proposal and the Commission’s consultation proposal contain specific mechanisms that are intended to ensure that automated content filters do not block legitimate uploads, and that uploads are subject to human review if they are not obviously/likely infringing. 

In order to achieve this objective, the German discussion draft published in June relies on the idea of “pre-flagging”: users would be allowed to flag uploads containing third party works as legitimate. Platforms would then be prevented from automatically blocking pre-flagged uploads unless they determine that the flag is incorrect because the upload is “obviously infringing”. 

By contrast, the Commission’s implementation guidance consultation proposes a “match-and-flag” mechanism: if upload filters detect the presence of a third party work in an upload and the use is not deemed to be “likely infringing”, then the uploader is notified and given the ability to state that the use is legitimate. If the user flags the upload as legitimate, the platform will have to initiate a human review of the upload, which remains available from the moment of upload until the review has been concluded. This type of mechanism was first suggested by a group of copyright academics in October of last year. It is also at the core of the proposal that we had presented during the last meeting of the stakeholder dialogue.

Both approaches provide a mechanism that limits the application of fully automated upload filters (while implicitly acknowledging the fact that many platforms will deploy upload filters). In the Commission’s proposal, filters are limited to making a pre-selection (“is the upload likely infringing?”); in the German proposal, they can only operate on unflagged content and filter out “obviously incorrect” pre-flags.
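To make the difference between the two mechanisms concrete, the following sketch contrasts the two decision flows. This is a minimal illustration, not an implementation of either proposal: all names, the boolean inputs and the thresholds they imply are hypothetical simplifications of the two documents.

```python
# Illustrative sketch only: hypothetical names and deliberately simplified
# logic; neither proposal specifies thresholds or data structures.
from dataclasses import dataclass

@dataclass
class Upload:
    acr_match: bool                       # did a filter match a flagged work?
    likely_infringing: bool               # the filter's confidence-based pre-selection
    obviously_infringing: bool            # e.g. a full-length, unmodified copy
    pre_flagged_legitimate: bool = False  # German draft: user flags at upload time
    user_flags_legitimate: bool = False   # Commission draft: user responds to a notice

def german_pre_flagging(u: Upload) -> str:
    """German discussion draft: pre-flagged uploads cannot be blocked
    automatically unless the flag is obviously incorrect."""
    if u.pre_flagged_legitimate:
        return "block" if u.obviously_infringing else "publish"
    # Unflagged uploads remain subject to fully automated filtering.
    return "block" if u.acr_match else "publish"

def commission_match_and_flag(u: Upload) -> str:
    """Commission consultation: filters only make a pre-selection; disputed
    uploads stay online until a human review has been concluded."""
    if not u.acr_match:
        return "publish"
    if u.likely_infringing:
        return "block"
    # Matched but not likely infringing: the uploader is notified and can
    # declare the use legitimate, which triggers human review.
    if u.user_flags_legitimate:
        return "publish_pending_human_review"
    return "block"

# Example: a quotation that matches a flagged work but is not likely infringing
clip = Upload(acr_match=True, likely_infringing=False, obviously_infringing=False,
              user_flags_legitimate=True)
print(commission_match_and_flag(clip))  # -> publish_pending_human_review
```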

Convergence on “match-and-flag”?

Both approaches have been criticised by rightholders, who claim that they undermine the “original objective of the directive”, without offering alternative proposals for how automated filtering can be reconciled with the requirement not to block legitimate uploads. In addition, the German discussion proposal has been criticised by platforms such as Google and Facebook. The platforms argue that giving users the ability to pre-flag every single upload would likely lead to substantial numbers of unnecessary pre-flags (where the content in question is already licensed) or unjustified ones (users making excessive use of the pre-flagging tool), which would make such a system impractical to operate at scale.

Commission consultation on Article 17 guidance: User rights must be protected at upload

Legitimate uses must be considered at all times

At the end of July the Commission published a long-awaited “targeted consultation addressed to the participants to the stakeholder dialogue on Article 17 of the CDSM Directive”. With this consultation the Commission makes good on its (pre-COVID) promise to “share initial views on the content of the Article 17 guidance” with the participants of the stakeholder dialogue. Nestled between 18 questions, the consultation document provides a detailed outline of what the Commission’s guidance could look like once it is finalised.

While we were rather sceptical at the end of the six meetings of the stakeholder dialogue, we are pleased to see that the initial views shared by the Commission represent a genuine attempt to find a balance between the protection of user rights and the interests of creators and other rightholders, one that reflects the complex balance of the provisions introduced by Article 17 after a long legislative fight.

In the remainder of this post we will take a first, high-level look at the Commission’s proposal for the Article 17 guidance, what it would mean for national implementations, and how it would affect user rights.

Two welcome clarifications

With the consultation document the Commission takes a clear position on two issues that were central to the discussions in the stakeholder dialogue and that have important implications for national implementation of Article 17.

The first one concerns the nature of the right at the core of Article 17. Is Article 17 a mere clarification of the existing right of communication to the public, as rightholders have argued, or is it a special or sui generis right, as academics and civil society groups have argued? In the consultation document the Commission makes it clear that it considers Article 17 to be a special rule (“lex specialis”) in relation to both the right of communication to the public, as defined in Article 3 of the 2001 InfoSoc Directive, and the limited liability regime for hosting providers in the E-Commerce Directive.

What sounds like a fairly technical discussion has wide-ranging consequences for Member States implementing the directive. As explained by João Quintais and Martin Husovec, now that it is clear that Article 17 is not a mere clarification of existing law, Member States have considerably more freedom in deciding how online platforms can obtain authorisation for making available the works uploaded by their users. This should mean that they are not constrained by the InfoSoc Directive. Mechanisms like the remunerated “de minimis” exception proposed by the German Ministry of Justice, which would legalise the use of short snippets of existing works, are therefore permitted and covered by the concept of “authorisation” introduced by Article 17.

How filters fail (to meet the requirements of the DSM directive)

Three common failure modes of upload filters

Article 17 of the DSM directive establishes that Online Content Sharing Service Providers (OCSSPs) are liable for copyright infringing uploads by their users unless they either obtain a license for the use of such content, or take a number of measures designed to prevent the availability of such content on their platforms. While the directive never explicitly talks about filters or automated content recognition (ACR) systems, it is assumed by all sides of the debate that, in order to meet this obligation, platforms have little choice but to implement ACR-based filtering systems that will scan all user uploads and block or remove uploads that contain works that have been flagged by their rightholders.

This de facto requirement to implement upload filters is – by far – the most controversial aspect of the entire copyright directive, and it continues to dominate the discussions about the implementation of Article 17 into national legislation.

In this context, it is important to remember that the use of such filters is not new and that their functioning can already be observed in practice. What is new, however, is the de facto requirement for OCSSPs to implement filters, as well as a number of requirements that OCSSPs need to meet to ensure that any measures (including filters) implemented by them do not infringe the rights of users. This includes the requirement that any such measures “shall not result in the prevention of the availability of works or other subject matter uploaded by users, which do not infringe copyright and related rights, including where such works or other subject matter are covered by an exception or limitation”.

In other words, one of the most important contributions of the DSM directive is that, for the first time, it establishes conditions that need to be met by automated upload filters.

As we have argued many times before, these conditions present a very high hurdle for any technological solution to clear. The fact that upload filters are incapable of determining whether a particular use of a copyrighted work is infringing has been established beyond any doubt. But that does not mean that the failure to assess context is the only way in which filters based on automated content recognition fail to meet the requirements established by the directive. In total, there are at least three distinct ways in which filters fail.

In the remainder of this post we will discuss these three failure modes based on examples collected by Techdirt in the course of a single week: removals caused by incorrect rights information, removals caused by the inability to recognise legitimate uses, and removals caused by the inability to accurately identify works.

Incorrect rights information

Incorrect rights information is probably the most common and best documented cause for the unjustified removal (or demonetisation) of works on YouTube.

ACR systems execute actions specified by whoever is recognised as the owner of a work. For the purposes of these systems, the owner of a work is whoever claims to be the owner and, unless there are conflicting ownership claims, there is no way to check the accuracy of such claims, as there are no authoritative databases of ownership rights. As a result, it is possible to claim ownership of public domain works (which no one owns), of works that have been freely or widely licensed by their owners, or of any copyrighted work that has not already been claimed by someone else.
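The structural problem can be illustrated in a few lines of code. The sketch below, with entirely hypothetical names, shows a claim registry of the kind described above: because there is no authoritative ownership database to check against, the registry can do no better than accept the first claimant and watch for conflicting claims.

```python
# Minimal sketch of the failure mode described above; all names are
# hypothetical and the logic is deliberately simplified.

class NaiveClaimRegistry:
    """Accepts ownership claims without verification, as there is no
    authoritative database of ownership rights to check them against."""

    def __init__(self):
        self.owners = {}  # content fingerprint -> claimed owner

    def claim(self, fingerprint: str, claimant: str) -> bool:
        if fingerprint in self.owners and self.owners[fingerprint] != claimant:
            # The only check available: a conflict with an earlier claim.
            return False
        self.owners[fingerprint] = claimant
        return True

registry = NaiveClaimRegistry()
# A public domain recording (which no one owns) can be claimed by anyone...
registry.claim("beethoven-symphony-5-recording", "SomeLabel")
# ...and every later matching upload is then treated as SomeLabel's property,
# triggering whatever action (blocking, demonetisation) SomeLabel specified.
```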

A better way to implement Article 17? New German proposal to avoid overblocking

A step-by-step concept against overblocking

Back in April 2019, on the occasion of the final vote on the DSM Directive in the Council, the German Federal Government issued a statement announcing that it intended to implement Article 17 with a focus on “preventing ‘upload filters’ wherever possible, ensuring freedom of expression and safeguarding user rights”. While the German Government has yet to produce an implementation proposal for Article 17, we may now have gotten a first insight into what such an implementation might look like. Late last month, the Kölner Forschungsstelle für Medienrecht at the University of Applied Sciences in Cologne published a step-by-step concept for implementing Article 17 of the DSM Directive (“Stufenkonzept gegen Overblocking durch Uploadfilter”).

The paper, authored by Prof. Dr. Rolf Schwartmann and Prof. Dr. Christian-Henner Hentsch, consists of an implementation proposal in the form of concrete legislative language. The authors’ objective seems to be to stay as close as possible to the objectives formulated in the German statement to the Council. What makes this proposal remarkable is that it is the first proposal (although not an official one) for implementing Article 17 of the new Copyright Directive that does not consist of a more or less literal transposition of Article 17 into national law (as is the case in the French, Dutch and Belgian legislative proposals). In order to achieve the stated objective of preventing over-blocking by upload filters, the concept proposes a combination of Article 17 with Article 12 of the DSM Directive (which gives Member States the option to introduce Extended Collective Licenses).

The implementation proposal contains modifications to three different acts: the Copyright Act (Urheberrechtsgesetz – UrhG), the Tele Media Act (Telemediengesetz – TMG) and the Collective Management Organisations Act (Verwertungsgesellschaftengesetz – VGG). Specifically, the authors propose the following modifications:

In the Copyright Act, they propose to add a new section to the article that defines the act of communication to the public (§ 19a UrhG). The purpose of this addition is to include the act of giving the public access to copyright-protected works uploaded by users of Online Content Sharing Service Providers (OCSSPs) in the definition of the existing making available to the public right. This establishes that, in principle, OCSSPs need authorisation from rightholders for such acts. The added section also includes the definition of OCSSPs, which is a literal transposition of the definition used in the DSM directive.

The second addition to the Copyright Act is a new exception covering uses for the purpose of caricature, parody or pastiche by users of OCSSPs (§ 51a UrhG). Notably, this exception only applies in the context of sharing works via OCSSPs (which is highly unusual, as copyright exceptions are usually not limited to specific services) and is conditional on remuneration to rightholders via collective management organisations.

Research Librarians: New TDM exception can be undermined by technical blocking from publishers

Protect TDM from adverse effects of TPMs

Today the Association of European Research Libraries (LIBER) issued a warning that the new Text and Data Mining (TDM) exceptions contained in the DSM directive can easily be undermined by technical blocking from publishers. LIBER has come to this conclusion based on the results of a survey on content blocking carried out by LIBER’s Copyright & Legal Matters Working Group and the UK Libraries and Archives Copyright Alliance (LACA). The submissions to the survey so far confirm fears that Technical Protection Measures (TPMs) can be abused by rightholders to limit the usefulness of the exceptions contained in the directive.

Article 3 of the directive allows research organisations and cultural heritage institutions to make reproductions in the context of text and data mining for the purposes of scientific research. While the article allows rightholders to take measures to ensure the security and integrity of their networks, it does not allow them to prevent researchers from exercising their rights under the exception. It also requires rightholders to remove any TPMs that prevent researchers from exercising their rights, but does not specify how quickly this has to happen.

The survey conducted by LIBER shows that in practice it is difficult for researchers to get rightholders to remove TPMs or other access and use restrictions on content that research institutions have licensed. And in cases where such limitations are eventually removed, the process is often long and resource intensive. All of this not only frustrates the text and data mining efforts of individual researchers, but can also have negative consequences for entire institutions.

The submissions shared with us by libraries and researchers from a mix of countries (some with TDM exceptions already and some without) show that:

  • Researchers are blocked from accessing many types of content. Journal articles were the most common type of content mentioned (44%) but eBooks, websites, databases and newspapers were also cited.
  • Content blocking takes, on average, nearly a month to resolve. Respondents reported that it took between 24 hours and two and a half months to resolve a content blocking issue, with the mean time being 24 days. A fifth of survey respondents said they were only partially able to resolve the issue, and 11% said it was never resolved.
  • Sanctions impact whole communities, not just individual researchers. Actions taken by publishers included 1) suspension of campus-wide access to paid-for electronic subscriptions, 2) threats to cut off access to content unless TDM was stopped, 3) technically limiting downloads to one document only, 4) requests for additional payments and 5) the introduction of captcha technology to frustrate data mining.

The findings reported by LIBER are extremely relevant in the context of the ongoing implementations of the DSM directive in the EU Member States. As we have noted in our implementation guide for Articles 3 & 4 of the directive, Member States should include in their national laws provisions requiring rightholders to remove TPMs and resolve lock-outs within a maximum of 72 hours of being reported.

In order to better make the case for why such a requirement must be included in national implementations, LIBER is continuing to collect evidence of abusive practices. If you work for an academic research institution that has run into similar issues in the past, you can still contribute your experience to LIBER:

“If you or your organisation have ever been blocked from accessing a publisher’s servers for reasons you believe are related to data mining, fill out the survey. The survey can be answered anonymously and will remain open indefinitely. You can also send information on this issue to liberlegalworkinggroup@gmail.com.”

Article 17 stakeholder dialogue (day 6): Hitting a brick wall


This week saw the sixth meeting of the EU stakeholder dialogue on the implementation of Article 17 of the new copyright directive. This meeting was the first one where the question of how to reconcile the protection of users’ rights with automated filters based on technology that cannot assess the legality of the use of copyrighted works was explicitly on the agenda. From the perspective of the users’ organisations participating in the stakeholder dialogue, this is the main question that the stakeholder dialogue (and the guidelines that the Commission will have to issue afterwards) needs to address.

Unfortunately, Monday’s meeting did not result in a productive discussion about how to protect users’ rights. Proposals made by COMMUNIA and Studio71 on how to strike a balance between the rights of users and the interests of creators and other rightholders were largely ignored by a coalition of major rightholders from both the music and the audio-visual sectors. Working in concert, the representatives of the Hollywood studios (MPA), film producers (FIAPF), commercial television channels (ACT), major record labels (IFPI) and music publishers (ICMP) disputed that there is a tension between protecting users’ rights and automated blocking, restated their conviction that Article 17 is only about empowering them versus the platforms, and suggested that users should simply trust that rightholders will not block free speech or other legitimate uses. In doing so, they have made it clear that they want their interests to prevail at all costs, that users should not be trusted, and that for them user rights are something that should exist only at their discretion.

This outcome leaves the European Commission in the difficult position of having to make sense of the input gathered throughout the previous six meetings and to outline a way forward. Fortunately, it seems that the Commission is not willing to succumb to the unconstructive behaviour exhibited by rightholders and will take seriously its task of finding a balance between users’ rights and the interests of different types of rightholders.

A proposal for protecting users’ rights

So what could such a balance look like, and what is at stake? One of the key insights that emerged from the previous rounds of the stakeholder dialogue is that even the most advanced content recognition technology is incapable of understanding the context in which works are used. This means that technology alone cannot determine whether a use is lawful. Article 17 requires platforms to take measures to prevent the availability of content that rightholders want to keep off the sharing platforms and, at the same time, to ensure that legitimate uses (such as quotations or parodies) are not affected. This means that no matter how good it is at recognising content, ACR alone cannot meet the requirements of the directive.

Article 17 stakeholder dialogue (day 5): It all depends


Last Thursday saw the 5th meeting of the Commission’s stakeholder dialogue on Article 17 of the copyright directive. On paper, this was the first meeting where stakeholders had the opportunity to discuss the actual provisions contained in Article 17 of the directive, in order for the Commission “to gather evidence, views and suggestions that the services of the Commission can take into account” when preparing its guidelines on the application of Article 17.

Contractual freedom über alles

In reality (recording available here), the 5th meeting was the meeting where it became clear that the vast majority of the stakeholders represented are not interested in contributing constructively to the process and are instead participating in order to actively undermine it. Interventions made by commercial rightholders from the music and audio-visual sectors, as well as by platforms, focused on demanding maximum contractual freedom and arguing that the Commission’s guidelines should not contain any binding requirements. When asked to provide input for defining core concepts of the directive (such as “best efforts to obtain authorisation” and “best efforts to prevent availability”), most stakeholders limited their contributions to countless variations of “it depends” or theorising about “dynamic concepts”.

While there were some notable exceptions (apart from users’ organisations, collective management organisations and journalists’ organisations also provided substantive input), it can hardly be surprising that both rightholders and big platforms have no interest in substantive guidelines that would offer meaningful safeguards for user rights.

Article 17 stakeholder dialogue: What have we learned so far


This analysis was previously published in two instalments on the Kluwer Copyright Blog (part 1, part 2).

As 2020 unfolds, the European Commission’s stakeholder dialogue pursuant to Article 17 of the Directive on Copyright in the Digital Single Market (CDSM directive) enters its third (and likely final) phase. After four meetings that focussed on gathering “an overview of the current market situation as regards licensing practices, tools used for online content management […] and related issues and concerns”, the next two (or more) meetings will finally deal with issues raised by the provisions in Article 17 of the CDSM directive. According to the Commission’s discussion paper for the meetings of 16 January and 10 February 2020, the objective of the third phase “is to gather evidence, views and suggestions that the services of the Commission can take into account in preparing the guidance pursuant to Article 17(10)”. 

In other words, after four meetings that have set the scene, the stakeholder dialogue will now address some of the thorny issues raised by Article 17. These include key concepts like the “best efforts” obligations to obtain authorisation and to prevent the availability of content (Article 17(4)), as well as the safeguards for legitimate uses of content (Article 17(7)) and the complaint and redress mechanisms available to users (Article 17(9)). In preparation for these forthcoming discussions, it is worth recapitulating what we have learned since the stakeholder dialogue kicked off in October of last year.

Three takeaways from the stakeholder dialogue so far

After more than 25 hours of discussion (recordings of the four meetings can be found here: 1, 2, 3 and 4), there are three main insights that will likely have a substantial impact on the overall outcome of the stakeholder dialogue. These are the different motivations of different types of rightholders, the technical limitations of Automated Content Recognition (ACR) technologies, and the general lack of transparency with regard to current rights management practices. The first two are discussed in this post; the third will be covered in part 2, which will be published shortly.

Article 17 stakeholder dialogue (day 4): it’s all about transparency


The fourth meeting of the Article 17 stakeholder dialogue took place in the last week before the holiday break. Just like the third meeting, this meeting was dedicated to (more or less) technical presentations on content management technologies and existing licensing practices. In total there were 12 presentations from platforms (Facebook, Seznam, Wattpad), providers of content management tools (Audible Magic, Ardito, Fifthfreedom, Smart protection), rightholders (GESAC, Universal Music Publishing, Bundesliga) as well as by BEUC and the Lumen database.

Filters are context-blind

The day’s presentations largely repeated the lines of argument that different stakeholders had presented during the previous meetings (a recording of the full meeting can be found here). Most notably, all providers of content recognition technology confirmed that their technology does not go beyond simple matching of files and cannot understand the context in which a use takes place. Audible Magic summarised this in their opening statement:

Copyright exceptions require a high degree of intellectual judgement and an understanding and appreciation of context. We do not represent that any technology can solve this problem in an automated fashion. Ultimately these types of determinations must be handled by human judgement […]

As we have argued after the third meeting of the stakeholder dialogue, this is an unsurprising but significant insight, as it means that current technology cannot be used to automatically block or remove content uploaded by users.

Platforms don’t trust rightholders

The presentation given by Facebook about Rights Manager, its in-house content recognition tool, highlighted another problem that such tools face: one of the “main challenges” with Rights Manager is that rightholders abuse the tool by claiming rights in works that they do not own. As a result, Facebook makes the most sensitive functionalities (such as automated blocking of uploaded content) available only to a small group of carefully vetted, trusted rightholders.