What you need to know about YouTube's recommendation algorithms
<p>People watch <a href="https://youtube.googleblog.com/2017/02/you-know-whats-cool-billion-hours.html">more than a billion hours</a> of video on YouTube every day. Over the past few years, the video sharing platform has <a href="https://www.thedailybeast.com/how-youtube-pulled-these-men-down-a-vortex-of-far-right-hate">come under fire</a> for its role in <a href="https://www.nytimes.com/2018/03/10/opinion/sunday/youtube-politics-radical.html">spreading</a> and <a href="https://www.theguardian.com/media/2018/sep/18/report-youtubes-alternative-influence-network-breeds-rightwing-radicalisation">amplifying</a> extreme views.</p>
<p>YouTube’s video recommendation system, in particular, has been criticised for radicalising young people and steering viewers down <a href="https://policyreview.info/articles/news/implications-venturing-down-rabbit-hole/1406">rabbit holes</a> of disturbing content.</p>
<p>The company <a href="https://youtube.googleblog.com/2019/01/continuing-our-work-to-improve.html">claims</a> it is trying to avoid amplifying problematic content. But <a href="https://dl.acm.org/citation.cfm?doid=3298689.3346997">research</a> from YouTube’s parent company, Google, indicates this is far from straightforward, given the commercial pressure to keep users engaged via ever more stimulating content.</p>
<p>So how do YouTube’s recommendation algorithms actually work? And how much are they really to blame for the problems of radicalisation?</p>
<p><strong>The fetishisation of algorithms</strong></p>
<p>Almost everything we see online is heavily curated. Algorithms decide what to show us in Google’s search results, Apple News, Twitter trends, Netflix recommendations, Facebook’s newsfeed, and even pre-sorted or spam-filtered emails. And that’s before you get to advertising.</p>
<p>More often than not, these systems decide what to show us based on their idea of what we are like. They also use information such as what our friends are doing and what content is newest, as well as built-in randomness. All this makes it hard to reverse-engineer algorithmic outcomes to see how they came about.</p>
<p>Algorithms take all the relevant data they have and process it to achieve a goal, often one that involves influencing users’ behaviour, such as selling us products or keeping us engaged with an app or website.</p>
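<p>To make this concrete, here is a deliberately simplified sketch, in Python, of how such a goal-driven curation system might combine a profile of the user, friends’ activity, recency and a dash of randomness. Every field name, weight and function below is invented for illustration; real platforms rely on large machine-learned models built on many more signals.</p>
<pre><code>import random

# Toy sketch of a goal-driven curation algorithm. Every weight, field and
# function here is hypothetical; real platforms use machine-learned models
# trained on far more signals than this.

def score_item(item, user, friends_activity, now):
    relevance = len(user["interests"].intersection(item["topics"]))  # "what we are like"
    social = friends_activity.get(item["id"], 0)                     # what our friends are doing
    freshness = 1.0 / (1.0 + (now - item["published_at"]) / 3600.0)  # newer content scores higher
    noise = random.uniform(0.0, 0.5)                                 # built-in randomness
    return 2.0 * relevance + 1.5 * social + 1.0 * freshness + noise

def curate(candidates, user, friends_activity, now, k=5):
    # Rank the candidate pool by score and surface only the top k items.
    ranked = sorted(candidates,
                    key=lambda item: score_item(item, user, friends_activity, now),
                    reverse=True)
    return ranked[:k]
</code></pre>
<p>Because the output depends on a personal profile, on what friends are doing, on timing and on a random component, two people are unlikely to see the same results, which is part of why these systems are so hard to reverse-engineer.</p>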
<p>At YouTube, the “up next” feature is the one that receives the most attention, but other algorithms are just as important, including search result rankings, <a href="https://youtube.googleblog.com/2008/02/new-experimental-personalized-homepage.html">homepage video recommendations</a>, and trending video lists.</p>
<p><strong>How YouTube recommends content</strong></p>
<p>The main goal of the YouTube recommendation system is to keep us watching. And the system works: it is responsible for more than <a href="https://www.cnet.com/news/youtube-ces-2018-neal-mohan/">70% of the time users spend</a> watching videos.</p>
<p>When a user watches a video on YouTube, the “up next” sidebar shows videos that are related but usually <a href="https://www.pewinternet.org/2018/11/07/many-turn-to-youtube-for-childrens-content-news-how-to-lessons/">longer and more popular</a>. These videos are ranked according to the user’s history and context, and newer videos are <a href="https://storage.googleapis.com/pub-tools-public-publication-data/pdf/45530.pdf">generally preferenced</a>.</p>
<p>This is where we run into trouble. If more watching time is the central objective, the recommendation algorithm will tend to favour videos that are new, engaging and provocative.</p>
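<p>As a rough illustration of why that happens, consider a toy “up next” ranker whose only objective is predicted watch time. All the field names and constants below are made up; YouTube’s actual system is a large-scale machine-learned model, not a hand-written formula like this one.</p>
<pre><code># Toy "up next" ranker whose sole objective is predicted watch time.
# All field names and constants are invented for illustration only.

def predicted_watch_seconds(video, user_history):
    # Crude proxy: people watch more of videos that are similar to their
    # viewing history, popular, relatively new, and simply longer.
    similarity = len(user_history["topics"].intersection(video["topics"]))
    expected_fraction = min(1.0, 0.2 + 0.1 * similarity)                 # share of the video watched
    recency_boost = 1.0 + 0.2 * max(0.0, 1.0 - video["age_days"] / 7.0)  # favour the last week
    popularity_boost = 1.0 + (video["views"] ** 0.5) / 1000.0            # favour popular uploads
    return video["length_seconds"] * expected_fraction * recency_boost * popularity_boost

def rank_up_next(candidates, user_history, k=10):
    # Optimising purely for watch time mechanically favours longer, newer,
    # more engaging (and often more provocative) videos.
    return sorted(candidates,
                  key=lambda v: predicted_watch_seconds(v, user_history),
                  reverse=True)[:k]
</code></pre>
<p>Even in this toy version, the videos that float to the top are the longer, newer and more popular ones, because those are exactly what a watch-time objective rewards.</p>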
<p>Yet algorithms are just pieces of the vast and complex sociotechnical system that is YouTube, and there is so far little empirical evidence on their <a href="https://arxiv.org/abs/1908.08313">role</a> in processes of radicalisation.</p>
<p>In fact, <a href="https://journals.sagepub.com/doi/full/10.1177/1354856517736982">recent research</a> suggests that instead of thinking about algorithms alone, we should look at how they interact with community behaviour to determine what users see.</p>
<p><strong>The importance of communities on YouTube</strong></p>
<p>YouTube is a quasi-public space containing all kinds of videos: from musical clips, TV shows and films, to vernacular genres such as “how to” tutorials, parodies, and compilations. User communities that create their own videos and use the site as a social network have played an <a href="https://books.google.com.au/books?id=0NsWtPHNl88C&source=gbs_book_similarbooks">important role</a> on YouTube since its beginning.</p>
<p>Today, these communities exist alongside <a href="https://journals.sagepub.com/doi/full/10.1177/1329878X17709098">commercial creators</a> who use the platform to build personal brands. Some of these are far-right figures who have found in YouTube a home to <a href="https://datasociety.net/output/alternative-influence/">push their agendas</a>.</p>
<p>It is unlikely that algorithms alone are to blame for the radicalisation of a previously “<a href="https://www.wired.com/story/not-youtubes-algorithm-radicalizes-people/">moderate audience</a>” on YouTube. Instead, <a href="https://osf.io/73jys/">research</a> suggests these radicalised audiences existed all along.</p>
<p>Content creators are not passive participants in the algorithmic systems. They <a href="https://journals.sagepub.com/doi/10.1177/1461444819854731">understand how the algorithms work</a> and are constantly improving their <a href="https://datasociety.net/output/data-voids/">tactics</a> to get their videos recommended.</p>
<p>Right-wing content creators also know YouTube’s policies well. Their videos are often “borderline” content: they can be interpreted in different ways by different viewers.</p>
<p>YouTube’s community guidelines restrict blatantly harmful content such as hate speech and violence. But it’s much harder to police content in the grey areas between jokes and bullying, religious doctrine and hate speech, or sarcasm and a call to arms.</p>
<p><strong>Moving forward: a cultural shift</strong></p>
<p>There is no magical technical solution to political radicalisation. YouTube is working to minimise the spread of borderline problematic content (for example, conspiracy theories) by <a href="https://youtube.googleblog.com/2019/01/continuing-our-work-to-improve.html">reducing its recommendations</a> of videos that can potentially misinform users.</p>
<p>However, YouTube is a company and it’s out to make a profit. It will always prioritise its commercial interests. We should be wary of relying on technological fixes by private companies to solve society’s problems. Moreover, quick responses designed to “fix” these issues might also harm politically edgy communities, such as activists, and minority communities, such as sexuality-related or LGBTQ groups.</p>
<p>When we try to understand YouTube, we should take into account the different factors involved in algorithmic outcomes. This includes systematic, long-term analysis of what algorithms do, but also of how they combine with <a href="https://policyreview.info/articles/news/implications-venturing-down-rabbit-hole/1406">YouTube’s prominent subcultures</a>, those subcultures’ <a href="https://arxiv.org/abs/1908.08313">role</a> in political polarisation, and their <a href="https://datasociety.net/pubs/oh/DataAndSociety_MediaManipulationAndDisinformationOnline.pdf">tactics</a> for managing visibility on the platform.</p>
<p>Before YouTube can implement adequate measures to minimise the spread of <a href="https://journals.sagepub.com/doi/pdf/10.1177/0894439314555329">harmful content</a>, it must first understand what cultural norms are thriving on its site – and being amplified by its algorithms.</p>
<hr />
<p><em>The authors would like to acknowledge that the ideas presented in this article are the result of ongoing collaborative research on YouTube with researchers Jean Burgess, Nicolas Suzor, Bernhard Rieder, and Oscar Coromina.</em></p>
<p><em><a href="https://theconversation.com/profiles/ariadna-matamoros-fernandez-577257">Ariadna Matamoros-Fernández</a>, Lecturer in Digital Media at the School of Communication, <a href="http://theconversation.com/institutions/queensland-university-of-technology-847">Queensland University of Technology</a> and <a href="https://theconversation.com/profiles/joanne-gray-873764">Joanne Gray</a>, Lecturer in Creative Industries, <a href="http://theconversation.com/institutions/queensland-university-of-technology-847">Queensland University of Technology</a></em></p>
<p><em>This article is republished from <a href="http://theconversation.com">The Conversation</a> under a Creative Commons license. Read the <a href="https://theconversation.com/dont-just-blame-youtubes-algorithms-for-radicalisation-humans-also-play-a-part-125494">original article</a>.</em></p>