Auditing YouTube’s Recommendation System for Ideologically Congenial, Extreme, and Problematic Recommendations

Abstract

Algorithms of social media platforms are often criticized for recommending ideologically congenial and radical content to their users. Despite these concerns, evidence on such filter bubbles and rabbit holes of radicalization is inconclusive. We conduct an audit of YouTube using 100,000 sock puppets, which allow us to systematically and at scale isolate the influence of the algorithm on recommendations. We test 1) whether recommended videos are congenial with users’ ideology, especially deeper in the watch trail, and 2) whether recommendations deeper in the trail become progressively more extreme and come from problematic channels. We find that YouTube’s algorithm recommends congenial content to its partisan users, although some moderate and cross-cutting exposure is possible, and that congenial recommendations increase deeper in the trail for right-leaning users. We do not find meaningful increases in the ideological extremity of recommendations deeper in the trail, yet we show that a growing proportion of recommendations comes from channels categorized as problematic (e.g., “IDW,” “Alt-right,” “Conspiracy,” and “QAnon”), with this increase most pronounced among very-right users. Although the proportion of these problematic recommendations is low (at most 2.5%), they are still encountered by over 36.1% of users, and by up to 40% of very-right users.
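To make the audit design above concrete, here is a minimal sketch of how a single sock puppet might walk one watch trail: load a seed video in a headless browser, scrape the sidebar (“Up next”) recommendations, follow one of them, and repeat. The CSS selector, the trail depth, the page-load wait, and the rule of picking the next video at random from the top-ranked recommendations are illustrative assumptions, not the paper’s actual infrastructure, and YouTube’s markup changes often enough that the selector may need updating.

# Minimal sketch of one sock-puppet watch trail (illustrative only):
# load a seed video, scrape sidebar recommendations, follow one, repeat.
import random
import time

from selenium import webdriver
from selenium.webdriver.common.by import By

WATCH_URL = "https://www.youtube.com/watch?v={video_id}"
# Assumed selector for sidebar recommendations; YouTube's markup changes often.
RECS_SELECTOR = "ytd-compact-video-renderer a#thumbnail"


def walk_trail(seed_video_id, depth=20, top_n=5):
    """Follow `depth` recommendations starting from `seed_video_id`."""
    options = webdriver.ChromeOptions()
    options.add_argument("--headless=new")
    driver = webdriver.Chrome(options=options)

    trail = [seed_video_id]
    try:
        current = seed_video_id
        for _ in range(depth):
            driver.get(WATCH_URL.format(video_id=current))
            time.sleep(5)  # crude wait for the sidebar to render

            rec_ids = []
            for link in driver.find_elements(By.CSS_SELECTOR, RECS_SELECTOR):
                href = link.get_attribute("href") or ""
                if "watch?v=" in href:
                    rec_ids.append(href.split("watch?v=")[1][:11])
            if not rec_ids:
                break  # nothing scraped; end the trail early

            # Pick the next video at random from the top-ranked recommendations.
            current = random.choice(rec_ids[:top_n])
            trail.append(current)
    finally:
        driver.quit()
    return trail


if __name__ == "__main__":
    # Example seed; a real audit would seed puppets with ideologically slanted videos.
    print(walk_trail("dQw4w9WgXcQ", depth=5))

A full audit would run many such puppets in parallel from distinct browser profiles and log every recommendation shown at each step, not just the one followed.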

People


Muhammad Haroon, Magdalena Wojcieszak, Anshuman Chhabra, Xin Liu, Prasant Mohapatra, and Zubair Shafiq.

To reach out, please email the corresponding author, Muhammad Haroon.

Citation

@article{doi:10.1073/pnas.2213020120,
    author   = {Muhammad Haroon and Magdalena Wojcieszak and Anshuman Chhabra and Xin Liu and Prasant Mohapatra and Zubair Shafiq},
    title    = {Auditing YouTube’s recommendation system for ideologically congenial, extreme, and problematic recommendations},
    journal  = {Proceedings of the National Academy of Sciences},
    volume   = {120},
    number   = {50},
    pages    = {e2213020120},
    year     = {2023},
    doi      = {10.1073/pnas.2213020120},
    url      = {https://www.pnas.org/doi/abs/10.1073/pnas.2213020120},
    eprint   = {https://www.pnas.org/doi/pdf/10.1073/pnas.2213020120},
}