Freedom of Expression

Regulating AI algorithms and hate speech

Social media giants such as Facebook, Google and Twitter have been condemned repeatedly for their lax attitudes toward curbing hate speech. More and more countries are facing the rise of authoritarianism, accompanied by violence towards dissidents, activists and minorities. Social media platforms have clearly played a significant role in this, as shown by revelations about Facebook’s role in the genocide committed against Rohingya Muslims in Myanmar, or its connections with the ruling BJP government in India. Even where it is not the cause, social media is certainly magnifying discord around the world. The response by governments has been, to say the least, slow and ineffective. Facing reputational risks and worldwide condemnation, however, these corporations have themselves put in place certain policies and safeguards to curb hate speech on their platforms.


Responses by Corporations

Social media companies have adopted a varied set of responses: deploying algorithms, relying on user reporting, and having staff moderate hate speech. User reporting and human moderation come with their own challenges, with reports of bias in such reporting mechanisms and of a lack of diversity within moderation teams. Content moderators have also been forced to work under grueling conditions because of the massive volume of content that must be reviewed daily. Artificial intelligence (AI) algorithms were expected to avoid these problems, given the neutrality they promise, but their deployment has not been seamless. Studies have found racial bias against the speech of Black users, with experts concluding that AI systems reproduce the very biases in detecting hate speech that they were meant to eliminate. Although social media companies such as Facebook have promised improvements and updates to the software, the outcome of such efforts remains unclear.
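The kind of bias described above can be made concrete with a toy sketch. The code below is purely illustrative (it is not any platform's real moderation system): a naive keyword-based filter that ignores context, plus a per-group false-positive audit of the sort researchers use to surface dialect bias. The blocklist, example posts, and dialect labels are all hypothetical.

```python
# Toy illustration (NOT any platform's actual system): a context-blind
# keyword moderator, audited for disparate false-positive rates across
# hypothetical dialect groups.

FLAGGED_TERMS = {"trash", "dumb"}  # hypothetical blocklist


def naive_moderator(text: str) -> bool:
    """Flag a post if it contains any blocklisted term, ignoring context."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return bool(words & FLAGGED_TERMS)


# (text, dialect_group, is_actually_hateful) -- hypothetical labeled posts
posts = [
    ("that movie was trash lol", "AAE", False),   # benign slang
    ("you are all trash people", "SAE", True),    # abusive
    ("this dumb phone died again", "AAE", False), # benign
    ("stay away, dumb animals", "SAE", True),     # abusive
]


def false_positive_rate(group: str) -> float:
    """Share of benign posts from `group` wrongly flagged by the moderator."""
    benign = [(t, h) for t, g, h in posts if g == group and not h]
    if not benign:
        return 0.0
    return sum(naive_moderator(t) for t, _ in benign) / len(benign)


for group in ("AAE", "SAE"):
    print(group, false_positive_rate(group))
```

Because the filter matches words without context, every benign slang post in the hypothetical "AAE" group is flagged while the other group sees no false positives; measuring such group-wise error rates is one way regulators or auditors could test a classifier for the non-discrimination standards discussed later in this piece.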


Government policies

Germany, Singapore, and France have all instituted rules that penalize platforms failing to restrict illegal content after due notice and flagging. Germany in particular, through its Network Enforcement Act, has imposed strict restrictions on hate speech, including fixed takedown deadlines. Countries are nonetheless cautious in adopting such laws, given the obvious tension with freedom of speech. In India, the new Information Technology Rules have been widely condemned for the broad national-security grounds on which the government can compel social media platforms to take down content immediately and penalize non-compliance. In the United States, the right to free speech is of paramount importance, and rules prohibiting hate speech have therefore met strong resistance. Regulating the creation of the algorithms themselves will be far more difficult, given the proprietary secrecy maintained by companies, so governments instead seek to regulate the type of content that ought to be removed.


Future of online hate speech

The regulation of hate speech presents challenges for private social media platforms and government authorities alike. Facebook’s CEO, Mark Zuckerberg, has called for global regulations to establish baseline standards concerning content and privacy. Such regulations must, however, be drafted with two considerations in mind. First, how will the algorithms interact with diverse speech across jurisdictions, and will they be able to discern the context of the speech itself? This matters because content by African Americans was flagged precisely because the algorithms could not decipher the context in which it was posted. Second, the regulations must set out strict grounds for taking down content, not broad and ambiguous language, so as to avoid state censorship.

Compliance by social media companies should be ensured through the submission of reports and through transparency about the measures undertaken to curb hate speech. A code of conduct adopted with the European Union, obligating tech companies to review flagged content and take it down if it violates EU standards, was observed to be followed in 75% of cases. Although this is reason for optimism, such results can only be achieved globally through international cooperation and constant vigilance. Regulating hate speech through algorithms and self-reporting mechanisms requires integrating technical expertise into policy formulation, to ensure that the AI algorithms themselves meet standards of non-discrimination and do not excessively curb free speech.

If anything can be gleaned from Twitter’s decision to permanently suspend Donald Trump’s account, it is that social media companies bear an overarching responsibility in shaping society’s future. As much as social media is praised for giving everyone a platform, that privilege comes with conditions.
The internet can reach people across countries instantaneously, and measures must therefore be taken, by governments and private actors alike, to ensure that this reach does not extend to hate speech and incitement to violence.