Last week’s third meeting of the Article 17 stakeholder dialogue was the first of what the Commission has referred to as the second phase of the dialogue. After two meetings of introductory statements by various stakeholders (see our reports here and here), the third meeting consisted of a number of more in-depth technical presentations on content recognition technologies and on existing licensing models (video recording available here).
The morning session saw presentations from three technology providers: YouTube presented its Content ID system, PEX presented its platform-independent attribution engine, and Videntifier showed off its video and image matching technology.
Most of the discussion in the morning centered on understanding how YouTube’s Content ID system works and how it relates to copyright (hint: it’s complicated). The overall impression that arose from the discussion is that very few participants actually understand how Content ID works (and those who do, like the big record labels, don’t seem to be interested in talking about it). The fact that the Commission was among those asking questions to get a better understanding of the inner workings of Content ID is rather striking, given that evidence-based lawmaking was supposed to be one of the priorities of the Juncker Commission. So far the stakeholder dialogue seems more like an exercise in legislation-based fact-finding.
While many aspects of Content ID remained opaque, one thing became clear throughout the three presentations: none of the presented technologies can do more than match content in user uploads. None of the technologies presented can understand the context in which a use takes place, and as a result they are incapable of detecting whether a use is covered by an exception. In the words of the technology providers (lightly edited for clarity):
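The limitation described above can be illustrated with a toy sketch. This is a hypothetical example, not how any of the presented systems actually work: it stands in for fingerprint matching with word n-grams and shows that a matcher scores only content overlap, so a wholesale copy and a use that might be covered by an exception (such as quotation) look identical to it.

```python
# Toy illustration (hypothetical, not any vendor's actual system) of why a
# content-matching engine cannot judge context: it only measures overlap
# between an upload and a reference work, never the purpose of the use.

def fingerprint(text: str, n: int = 4) -> set[str]:
    """Fingerprint a work as the set of its word n-grams
    (a toy stand-in for the audio/video hashing real systems use)."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def match_score(upload: str, reference: str) -> float:
    """Fraction of the reference's fingerprint found in the upload."""
    ref = fingerprint(reference)
    if not ref:
        return 0.0
    return len(ref & fingerprint(upload)) / len(ref)

reference_work = "to be or not to be that is the question"

# A wholesale copy and a quotation inside commentary both trigger a full
# match; the matcher has no way to tell that the second use may fall
# under an exception such as quotation, criticism, or parody.
full_copy = reference_work
quotation = ("in my review of the performance the line "
             "to be or not to be that is the question struck me most")

print(match_score(full_copy, reference_work))  # 1.0
print(match_score(quotation, reference_work))  # also 1.0: same content, different context
```

Both uploads score a perfect match, even though only the first is a verbatim copy; distinguishing them requires an assessment of context that pure content matching cannot provide.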