Article 17 of the DSM directive establishes that Online Content Sharing Service Providers (OCSSPs) are liable for copyright-infringing uploads by their users unless they either obtain a licence for the use of such content or take a number of measures designed to prevent the availability of such content on their platforms. While the directive never explicitly mentions filters or automated content recognition (ACR) systems, all sides of the debate assume that, in order to meet this obligation, platforms have little choice but to implement ACR-based filtering systems that scan all user uploads and block or remove those that contain works flagged by their rightholders.
This de facto requirement to implement upload filters is by far the most controversial aspect of the entire copyright directive, and it continues to dominate the discussions about the implementation of Article 17 into national legislation.
In this context, it is important to remember that the use of such filters is not new and that their functioning can already be observed in practice. What is new, however, is the de facto requirement for OCSSPs to implement filters, as well as a number of requirements that OCSSPs need to meet to ensure that any measures (including filters) they implement do not infringe the rights of users. These include the requirement that any such measures “shall not result in the prevention of the availability of works or other subject matter uploaded by users, which do not infringe copyright and related rights, including where such works or other subject matter are covered by an exception or limitation”.
In other words, one of the most important contributions of the DSM directive is that, for the first time, it establishes conditions that need to be met by automated upload filters.
As we have argued many times before, these conditions present a very high hurdle for any technological solution to clear. The fact that upload filters are incapable of determining whether a particular use of a copyrighted work is infringing has been established beyond any doubt. But this failure to assess context is not the only way in which filters based on automated content recognition fall short of the requirements established by the directive. In total, there are at least three distinct ways in which filters fail.
In the remainder of this post we will discuss these three failure modes based on examples collected by Techdirt in the course of a single week: removals caused by incorrect rights information, removals caused by the inability to recognise legitimate uses, and removals caused by the inability to accurately identify works.
Incorrect rights information
Incorrect rights information is probably the most common and best-documented cause of the unjustified removal (or demonetisation) of works on YouTube.
ACR systems execute actions specified by whoever is recognised as the owner of a work. For the purposes of these systems, the owner of a work is whoever claims to be the owner and, unless there are conflicting ownership claims, there is no way to check the accuracy of such claims, as there are no authoritative databases of ownership rights. As a result, it is possible to claim ownership of public domain works (which no one owns), of works that have been freely or widely licensed by their owners, or of any copyrighted work that has not already been claimed by someone else.
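To make this mechanism concrete, the minimal sketch below models a claims registry that, like the systems described above, simply accepts the first ownership claim it receives and then executes that claimant's instructions. It is entirely hypothetical: the class and method names are invented for illustration and do not describe Content ID or any other real ACR system.

```python
# Hypothetical sketch of a first-come, unverified ownership-claims registry.
# All names are invented; this does not reflect any actual ACR implementation.

from dataclasses import dataclass, field


@dataclass
class OwnershipRegistry:
    """Ownership claims keyed by content fingerprint, accepted on a first-come basis."""
    claims: dict = field(default_factory=dict)  # fingerprint -> claimant

    def register_claim(self, fingerprint: str, claimant: str) -> bool:
        # There is no authoritative rights database to check the claim against;
        # unless a conflicting claim already exists, the claimant is accepted.
        if fingerprint in self.claims:
            return False  # only conflicting claims are ever surfaced
        self.claims[fingerprint] = claimant
        return True

    def action_for_upload(self, fingerprint: str) -> str:
        # The filter executes whatever the recognised "owner" specified,
        # even if the underlying work is public domain or freely licensed.
        owner = self.claims.get(fingerprint)
        return f"block (claimed by {owner})" if owner else "allow"


registry = OwnershipRegistry()
# Anyone can claim an as-yet-unclaimed work, here a public domain recording.
registry.register_claim("beethoven-5th-fingerprint", "SomeLabel Ltd")
print(registry.action_for_upload("beethoven-5th-fingerprint"))
# -> block (claimed by SomeLabel Ltd)
```

The point of the sketch is that nothing in such a design distinguishes a legitimate rightholder from an opportunistic claimant: the removal decision follows from the claim, not from any verified ownership.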