Meta Platforms, the parent company of Facebook, has initiated an anti-hate speech and misinformation campaign in South Africa. The campaign will be featured on its platforms and broadcasted on local and national radio stations.
As part of its ongoing commitment to safeguarding the integrity of the upcoming May election, the social media giant is taking proactive measures to enable users across its platforms, including WhatsApp, Facebook, and Instagram, to readily identify and report misleading or deceptive content. This initiative aims to mitigate the potential threat posed by such content and empower users to contribute to a transparent and informed electoral process.
In recent years, Meta and other social media platforms have faced criticism for enabling the proliferation of misinformation in pursuit of increased traffic. The new campaign forms part of the company's efforts to address this criticism.
In the case of Meta, a significant turning point was the mishandling of user data affecting 50 million Facebook accounts. The lapse allowed British consulting firm Cambridge Analytica to exploit the data for voter targeting, which was widely seen as influencing the 2016 US presidential election in favor of Donald Trump.
In 2019, Meta was fined US$5-billion by the US Federal Trade Commission over the scandal. However, Balkissa Idé Siddo, Meta's public policy director for sub-Saharan Africa, asserts that the company has gained valuable insights and experience since then.
In a recent interview with TechCentral, Siddo shared insights from Meta's involvement in more than 200 elections worldwide. She highlighted the company's efforts over the past eight years to introduce transparency tools for election and political ads, establish policies to counter election meddling and voter fraud, and run the largest third-party fact-checking programme of any social media platform to tackle the spread of misinformation.
IEC Partnership
According to Siddo, Meta collaborates closely with South Africa’s Electoral Commission (IEC) to develop policies and tools aimed at combating misinformation during elections. The initiative encompasses training programs for IEC staff on media literacy and methods for identifying misinformation.
She noted that while the company's collaboration with the commission may ramp up in the lead-up to the election, the partnership is an ongoing process in the evolving relationship between the two organisations.
“Mis- and disinformation have a long history and will persist beyond the election,” noted Siddo.
To promote user education, Meta's moderators promptly remove harmful content, while downranking and labeling misinformation so that users can learn to recognise it, even when they encounter it on other platforms. This approach helps to foster a safer and more informed online community.
However, the constantly evolving technology landscape is presenting new obstacles for content moderation. The proliferation of artificial intelligence and deepfakes is exacerbating the prevalence of misleading content on social media platforms. It is crucial to prioritize educating users about misinformation to empower them to discern and effectively address fake content.
Siddo pointed out that while the potential dangers of AI are often emphasised, there is also a positive side that deserves more attention. Through conversations with stakeholders, including content creators, Meta has observed growing enthusiasm for AI's potential to enhance content production, which is particularly beneficial for smaller media outlets and individual creators with limited resources.
Meta is also using AI to counter the spread of misinformation across its platforms. Alongside its more than 40,000 staff members focused on safety and security, and its collaborations with local fact-checking organisations, the company is exploring AI tools for moderation. According to Siddo, large language models have proven significantly more efficient at identifying harmful content, complementing the efforts of the dedicated safety and security team.
At the global level, Meta collaborates with other companies that operate social media or AI content production platforms, including Microsoft, Google, Shutterstock, and Midjourney. The partnership aims to help social media platforms detect AI-generated content.
After detecting AI-generated content, Meta discloses this fact to users by adding labels. Ben Waters, the policy communications manager for Europe, Africa, and the Middle East at Meta, suggested that content creation platforms should incorporate watermarks in their content. This would enable Meta to identify the origin of the content when it is uploaded to their platforms.
Last year, Meta, Google, TikTok parent ByteDance, and local authorities signed a cooperation agreement with the IEC. As part of this agreement, the elections agency established an independent, three-member committee to assess any instances of misinformation on social media platforms.
Based on the committee's findings, recommendations will be made to the IEC, which can then request that the offending platform either de-rank the malicious content or remove it. Notably, however, one of the largest social media platforms, X (formerly Twitter), is not currently party to the agreement.