The concept of content filtering has been making quite a career. Not only did it land in the copyright directive proposal, but it has also been introduced into the draft of the Audiovisual Media Services Directive (AVMSD) that is currently making its way through the European Parliament. In the context of the AVMSD, filtering of uploads by video-sharing platforms would serve to block legal audiovisual content that could harm children. As important as protecting children may be, the CULT Committee has just voted against that idea. This was the right thing to do.
A seemingly quick solution to filter whatever decision makers don’t want users to see is a very dangerous tool in any context. It is an arbitrary approach to the flow of information online, and as such it can be used as a censorship machine. This “automated conscience” would operate on very abstract definitions of content that could impair children’s “physical, mental or moral development”, or of incitement to terrorism, violence and hatred. Humans often argue about what constitutes such incitement, with many cases ending up in court. How could we trust algorithms to settle such disputes?
Fortunately, 17 members of the CULT Committee understood that. Nine others either do not see the danger or have an unwavering faith in the potency of technology to solve complex societal problems. Hopefully, the AVMSD debate helped the CULT Committee see both the danger and the pointlessness of content filtering, and it will make a similar decision for a better copyright. After all, in the context of copyright, putting the interests of rightholders before the interests of the public is an even worse reason to employ algorithms as censors.