Machines seeing the lines
Beyond kitties dancing, mongrels skateboarding and babies biting, YouTube videos sometimes — believe it or not — cross content lines, resulting in takedowns of the offending material.
That happened with almost 8.3 million videos in the fourth quarter of last year, according to YouTube. The task of finding and removing the rogue recordings sounds laborious, and doubtless at one point it was. Until the machines zipped into the picture.
Artificially intelligent computer systems nabbed 80 percent of the videos YouTube yanked off its site in that final quarter. The moves were necessitated partly by the fact that, as the New York Times’ Cade Metz recently explained, advertisers foot the bill, and their promotions sometimes appear automatically (read: randomly) next to videos with which YouTube clients would rather not be associated.
Thus, the complaints flowed. Responding by removing the videos was no small task for YouTube.
First, there was the matter of figuring out how. Then there was the far stickier affair: explaining why. The issue pitted free speech against civil discourse, and however much free speech might rock, it sometimes takes a pummeling in cases like these. Free speech, after all, is dicey business when every stripe of extremist freely gallops through cyberspace. Mild-mannered but wild-eyed conspiracy theorists prowl one end of the spectrum, and predators and supremacists skulk about the other.
As Stanford University’s Eileen Donahoe told Metz: “It’s a hard problem to solve.”
Here come the machines, cleaning up man’s mess, substantively resolving the question of how to do it while still leaving humans to figure out why. Users identify three reasons for flagging content: it is considered sexual; misleading or spam (a ripoff); or hateful and abusive.
Judgment, an area in which people still outperform computers, is an enduring necessity in determining which videos stay. Violence might be OK in a movie, but the real stuff requires careful handling. Ditto explicit material. So how does one know the difference? Judgment. When that fails, whether the artificial kind or the kind applied by humans, outrage can result, as was the case last year when disturbing videos slipped past filters on the child-friendly YouTube Kids app.
After the breakdown made a large splash in both mainstream and social media, Google pledged in December to hire 10,000 people to tackle policy violations. YouTube says it has filled most of the jobs allotted it, but machines nonetheless are carrying out the bulk of the work, removing three in four videos before they are even viewed.
That’s a testament to the growing capacity of technology to work alongside humans on complex quandaries, like this classic clash between free speech and civility, decorum and decency. Machine learning is giving systems the ability to sift through dilemmas and decide, a realm once exclusive to humans. It’s a critical function given the sheer scale of the task. The job, after all, is too much for humans, even thousands of them.
Anyone seeking evidence of the extraordinary capacities of technology needn’t look further than YouTube’s recent experience removing troubling content. The fear among the afraid is that human efforts to manage machines will only end with machines managing humans. The reality is, as YouTube can attest, machines might provide humans the necessary means to at least partly manage themselves.