


The rapid online dissemination of videos of the terrorist attack, as well as a 74-page manifesto, apparently written by the shooter, that railed against Muslims and immigrants, seemed shrewdly planned to reach as many people online as possible. YouTube was not alone in struggling to control the fallout on Friday and over the weekend. But its struggles during and after the New Zealand shooting have brought into sharp relief the limits of the computerised systems and operations that Silicon Valley companies have developed to manage the massive volumes of user-generated content on their sprawling services. In this case, humans determined to beat the company's detection tools won the day, to the horror of people watching around the world. The company, which has come under increasing fire for allowing Russians to interfere in the 2016 election through its site and for being slow to catch inappropriate content, has worked behind the scenes for more than a year to improve its systems for detecting and removing problematic videos. It has hired thousands of human content moderators and has built new software that can direct viewers to more authoritative news sources more quickly in times of crisis. "Frankly, I would have liked to get a handle on this earlier," Mr Mohan said. "Every time a tragedy like this happens we learn something new, and in this case it was the unprecedented volume" of videos.

Despite being one of the crown jewels of Google's stable of massively profitable and popular online services, YouTube could not, for many hours, stop the flood of users who uploaded and re-uploaded footage showing the mass murder of Muslims. About 24 hours later, after round-the-clock toil, company officials felt the problem was increasingly under control, but acknowledged that the broader challenges were far from resolved. Each public tragedy that has played out on YouTube has exposed a profound flaw in its design that allows hate and conspiracies to flourish online.
