December 7, 2015

Why extremist groups want you to ban their online content

Facebook briefly banned Britain First’s page on 30 November, sparking momentary celebration on social media. But within an hour the far-right organisation’s page was restored, and the group denounced Facebook’s censorship as a ‘fascist attack’. Britain First currently has over 1 million likes on its Facebook page, more than the Conservatives and Labour combined, and its posts typically attract hundreds of likes each; immediately after the restoration, likes spiked into the thousands. However inflammatory the material, banning the social media pages of groups like Britain First is a highly ineffective approach.

Research points to the negative consequences of such bans. The International Centre for the Study of Radicalisation (ICSR) found through in-depth analysis that systematically blocking sites containing extremist material is both impractical and counterproductive. Strategies such as removing websites, filtering content, and hiding search engine results have little to no effect in hindering such networks. This is due to the particular challenges of internet regulation: the sheer scale of website traffic makes identifying and monitoring content extremely difficult and resource-intensive, and even when takedowns do occur, sites and forums tend to re-emerge rapidly.

The far-right website Gates of Vienna was taken down twice in 2013 on grounds of racial incitement. It has since moved to a self-hosted site, and as of December 2015 gatesofvienna.net had seen a 13 per cent rise in its global traffic ranking over the previous three months. The site has featured in the media spotlight for promoting controversial causes, including a (later cancelled) London exhibition of cartoons of the Prophet Muhammad, which Geert Wilders, leader of the radical-right Dutch PVV party, had planned to attend.

A similar study of ISIS Twitter accounts reveals comparable findings. Although mass suspension campaigns produce a significant decrease in online activity, they also result in a more internally focused network. This carries dangerous implications: supporters are potentially alienated even further, while new members are not actually prevented from joining. In some cases, new accounts emerge with a more devoted following and a more concentrated output of material.

Extremist politics has historically been at the forefront of using innovative communication technology to mobilise support. Stormfront.org, the first major white supremacist website, was founded in the 1990s as a means of circumventing mainstream media channels. Al-Qaeda was similarly an innovator in cyberspace, disseminating propaganda first on forums and chat rooms and eventually on social media platforms. Yet while the internet aids extremist networking, it is not its primary driver. Social media sites merely create a space to express and reinforce shared sentiments; they promote a sense of collective identification, but do not spark the initial desire to join an extremist network.

Banning these online echo chambers is a reactionary rather than a proactive move. Responses to extremism should instead be comprehensive, addressing all users, not just those identified as most vulnerable to extremist messages. A holistic strategy takes into account the critical thinking skills of social media users. Labelling a comment ‘racist’, for instance, while indicative, fails to interrogate what makes it so; it favours a dogmatic approach that shuts down any possibility of exchange. Encouraging social media users to question the meaning behind phrases rather than accept them at face value is a significant step towards balancing the national conversation. Agency is the most resilient form of counter-speech.

Importantly, proscription is problematic not only because of its proven adverse effects; it is also fundamentally at odds with democratic values. Beyond free speech advocacy, banning runs counter to the principles of engagement and dialogue. This applies equally to social movements and to political parties whose views or courses of action fall outside mainstream opinion. An unpopular agenda should not be delegitimised simply because it is incompatible with personal beliefs. Doing so overlooks the fact that extremist perspectives are symptomatic of issues already current in everyday politics. Such blinkered thinking substitutes shallow labels for engagement with complex socio-economic debates, and when simplified narratives become common practice, we lose the ability to determine what counts as extremist content (let alone ‘extremism’).

What we are witnessing now is an upsurge of reactionary politics. Far-right views are increasingly influencing mainstream agendas on immigration, employment, housing, and security. Yet we do not tolerate radical beliefs, preferring to ban the spaces where they are most vocal. At the same time, social media platforms amplify our inability to distinguish the boundaries of extremist speech.

By choosing to ban, we are pushing away a value integral to democracy: pluralism. Isolating extremists only makes it more difficult to challenge their views. We risk reproducing an ‘us versus them’ logic that continues to polarise society.

Eviane Leidig did her Master’s Degree at the University of Bristol, with an MSc in Sociology (Ethnicity and Multiculturalism). Her dissertation looked at ‘Visualising Violent Nationalism: A Comparison of Far Right and Islamic Extremism’. She tweets at @evianeleidig and writes in a personal capacity.
