Description
Artificial Intelligence (AI) continues to be integrated into various domains and industries. Over the years, social media companies have used AI technologies to moderate users' content, personalize recommendations, and optimize the overall user experience. While machine learning models have proven effective at identifying and addressing harmful and violent content, a mounting number of concerns have been raised about the biased and discriminatory decisions these models make when applied to non-English content.
In this paper, Mona Elswah (Technology & Human Rights Fellow '22-'23) examines AI-powered content moderation of Arabic content on Meta's Facebook. She argues that Arabic content is subject to "inconsistent moderation": some content is over-moderated, while other content is left untouched despite violating the platform's standards. These inconsistencies have limited users' ability to engage in meaningful political debate in the region. Put simply, Arabic-speaking users cannot predict whether the algorithm will delete or keep their content. This unclear and inconsistent moderation has fostered distrust of AI tools and applications among Arab Internet users.
Citations
Elswah, Mona. January 30, 2024. "Does AI Understand Arabic? Evaluating the Politics Behind the Algorithmic Arabic Content Moderation." Cambridge, MA: Harvard Kennedy School.