Nuanced communities: Mapping ISIS support on Twitter

Good narrative strategy requires, first and foremost, an intimate knowledge of the audience being targeted. Nowhere is this more true than in attempts to counter the potent messaging of ISIS. The terrorist group has become well known for its ability to attract young people from across the world, including those from non-Muslim-majority nations, to commit violence in the name of the ‘caliphate.’

ISIS has been a fixture in the global public consciousness for over two years, from its dramatic emergence in summer 2014, through near-decline earlier this year, to resurgence with its latest attack in Berlin just weeks ago. Long before Berlin, the group had already become notorious for the quality and power of its social media messaging, its professionally produced videos and its slick English-language print publications.

Concerned national governments and civil society groups have made numerous attempts to counter the ISIS narrative in various ways, ranging from shutting down followers’ Twitter accounts en masse to creating alternative narratives that aim to discredit the group, its ideology and its actions. But despite all these attempts, attacks against European cities remain a very real threat.

As another gloomy and blood-soaked year of ISIS activity comes to an end, the group shows no sign of fading away. Although it has lost physical territory in Iraq and Syria, the risk posed by its virtual caliphate persists.

A whole range of factors determines an individual’s likelihood of becoming radicalised, many of which have been studied in significant depth elsewhere. Social media is not necessarily the most influential of these factors, but it undoubtedly plays a role.

RAND, a US-based think-tank, conducted a detailed research study, published in 2016, to examine ISIS support and opposition networks on Twitter, aiming to gather insights that could inform future counter-messaging efforts.

The study used a mixed-method analytics approach to map publicly available Twitter data from across the Arabic-speaking Twitterverse. The specific techniques were community detection algorithms, to find links between Twitter users that could signify the presence of interactive communities; social network analysis; and lexical analysis, to draw out key themes from the chatter.
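To make the community-detection step concrete, here is a minimal sketch in Python. The RAND study’s actual pipeline and tooling are not public, so the library choice (networkx), the data shape (a list of who-mentions-whom pairs) and the sample data are illustrative assumptions only.

```python
# Illustrative sketch, not the RAND study's code: build a mention graph
# from tweets and detect communities within it.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Hypothetical input: (user, mentioned_user) pairs extracted from tweets
mentions = [("user_a", "user_b"), ("user_a", "user_c"), ("user_d", "user_b")]

G = nx.Graph()
for src, dst in mentions:
    # Edge weight counts how often one user mentions another
    if G.has_edge(src, dst):
        G[src][dst]["weight"] += 1
    else:
        G.add_edge(src, dst, weight=1)

# Groups of users who interact with each other more than with
# the rest of the network
communities = greedy_modularity_communities(G, weight="weight")
for i, community in enumerate(communities):
    print(f"Community {i}: {sorted(community)}")
```

Modularity-based methods like this one reward clusters of users who talk to each other more than to outsiders, which is exactly the signature of the ‘interactive communities’ the study set out to find.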

The research goals were to learn how to differentiate between ISIS opponents and supporters; to understand who they are and what they are saying; and to map the connections between them while identifying the influencers.

Lexical analysis uncovered four major groups, or ‘meta-communities’, in the Arabic-speaking ISIS conversation on Twitter: Shia, Sunni, Syrian Mujahideen, and ISIS Supporters. Each is characterised by distinct patterns in its tweets. The Shia group tends to condemn ISIS and holds positive views of Christians, the West, and the international coalition fighting ISIS. This is unsurprising given the long-standing hostility between Sunni and Shia Muslims and the fact that ISIS is a Sunni group.

The Syrian Mujahideen group is anti-Assad, holds mixed views of ISIS, and views the coalition negatively. ISIS supporters talk about ISIS and the caliphate in positive, bombastic language, and insult the Shia, the Assad regime, and the West. Notably, their social media strategy is by far the most sophisticated of the four groups. Finally, the Sunni group is heavily divided along nationalist lines, spanning most countries of the Arab world.

Key findings of interest

1. Unique audiences, essential nuance

Telling ISIS supporters from opponents in large datasets was key to this study. The RAND researchers chose a simple heuristic: Twitter users who tweeted the Arabic name ‘Islamic State’ (الدولة الإسلامية) were considered supporters, while those who used the acronym ‘Daesh’ (داعش), which the group considers derogatory, were considered opponents. This dividing line isn’t foolproof but, given what’s known about the significance of these two Arabic terms, it seems a valid way to approach the task. The research found that although opponents outnumbered supporters six to one, the supporters were far more active, producing 50% more tweets daily.
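The dividing line described above is easy to express in code. A minimal sketch, with the function name and labels invented for illustration:

```python
# Sketch of the study's supporter/opponent heuristic as described above;
# names and labels are illustrative, not from the RAND report.

SUPPORTER_TERM = "الدولة الإسلامية"  # 'the Islamic State', the group's own name
OPPONENT_TERM = "داعش"              # 'Daesh', considered derogatory by ISIS

def classify_tweet(text: str) -> str:
    """Crude stance heuristic; mixed or absent terms are left unlabelled."""
    has_supporter_term = SUPPORTER_TERM in text
    has_opponent_term = OPPONENT_TERM in text
    if has_supporter_term and not has_opponent_term:
        return "supporter"
    if has_opponent_term and not has_supporter_term:
        return "opponent"
    return "unknown"

print(classify_tweet("آخر أخبار داعش"))  # -> opponent
```

The ‘unknown’ bucket matters: tweets using both terms, or neither, can’t be confidently labelled by vocabulary alone, which is one reason the heuristic isn’t foolproof.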

This could point to a couple of things. Firstly, the imbalance in numbers suggests that the majority of the Arab world (or at least its Twittersphere) is anti-ISIS. The volume of pro-ISIS tweets, meanwhile, could indicate passionate support for the group; on the other hand, it could point to armies of pro-ISIS bots, or to astroturfing. The latter two would make an interesting case for new research, especially in the present climate, where the curtain has been lifted on the use of social media bots, astroturfing armies and persona management software.

2. Jordanian pilot, Turkish soldiers

The researchers also plotted Twitter activity levels for all four groups, from July 2014 (when ISIS emerged and announced itself to the world) to May 2015. One notable finding was that the two anti-ISIS groups (Shia and Sunni) showed similar activity patterns, suggesting that both were responding to the same ISIS-related events. All four groups experienced a large spike in activity in early February 2015, when ISIS released a video showing Jordanian pilot Moath al-Kasasbeh being burned alive.
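An activity plot of this kind is straightforward to reproduce. Below is a hedged sketch assuming tweets are held in a pandas DataFrame with ‘date’ and ‘group’ columns; the column names and the sample rows are assumptions, not the study’s data.

```python
# Illustrative sketch: daily tweet volume per meta-community over time.
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical stand-in for the study's labelled tweet dataset
tweets = pd.DataFrame({
    "date": pd.to_datetime(["2015-02-03", "2015-02-03", "2015-02-04"]),
    "group": ["Shia", "ISIS Supporters", "Sunni"],
})

# Count tweets per day per group, one column per meta-community
daily = (tweets.groupby([tweets["date"].dt.date, "group"])
               .size()
               .unstack(fill_value=0))

daily.plot(title="Daily tweet volume by meta-community")
plt.ylabel("Tweets per day")
plt.show()
```

On real data, event-driven spikes such as the February 2015 one show up as sharp peaks shared across the groups that were reacting to the same news.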

After this event, the ISIS supporters’ activity decreased sharply, while the Syrian Mujahideen’s grew to almost match that of the Shia and Sunni groups. Possible explanations (assuming the ISIS supporters are not bots) include outrage at the murder of a fellow Muslim, and/or outrage at the manner of his killing: burning, which is forbidden in the Qur’an. It would be interesting to compare the Twitter response to al-Kasasbeh’s murder with the response to another ISIS burning video, released last week, in which two Turkish soldiers were killed.

This comparison could reveal further insights about the nature of the original 2015 spike; or reveal changing attitudes towards Turkey, which has started fighting against ISIS in recent months and has most likely become hated among the group’s supporters as a result.

3. Social media mavens

The ISIS supporters’ Twitter community analysed in the study had features that set it apart from the other groups. Its members were more active than those of the other three groups, despite smaller numbers overall, and, predictably, they tweeted plenty of pro-ISIS terms and phrases. Most notable, though, was their command of advanced social media strategy, as shown by their choice of terms on Twitter: the supporters group used disproportionately high levels of terms such as spread, link, breaking news, media office, and pictorial evidence.
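One simple way to quantify ‘disproportionately high’ term use is to compare each term’s relative frequency in the supporters group against all other groups combined. The counts below are invented for illustration, and the RAND study’s exact lexical method may well differ.

```python
# Sketch: over-representation of terms in one group versus the rest.
from collections import Counter

# Hypothetical term counts (English stand-ins for the Arabic terms)
supporter_counts = Counter({"spread": 90, "link": 70, "caliphate": 40})
other_counts = Counter({"spread": 20, "link": 15, "caliphate": 60})

supporter_total = sum(supporter_counts.values())
other_total = sum(other_counts.values())

for term in supporter_counts:
    # Ratio > 1 means the term is over-represented among supporters;
    # add-one smoothing avoids division by zero for unseen terms
    supporter_rate = supporter_counts[term] / supporter_total
    other_rate = (other_counts.get(term, 0) + 1) / (other_total + 1)
    print(f"{term}: {supporter_rate / other_rate:.2f}x")
```

Terms like ‘spread’ and ‘link’ scoring far above 1 is what marks the group out: its distinctive vocabulary is about distribution mechanics, not just ideology.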

In general, ISIS has always been exceptionally conversant with social media marketing tools and techniques, in fact far superior to the efforts of many national governments. I would be very interested to see a study that uncovers who exactly is responsible for the ISIS propaganda, what their backgrounds are, and how they were recruited and trained (if indeed they weren’t already expert in this area).

4. CVE insights from Twitter data

Finally, the report offers insights for policy-makers and for those engaged in online CVE (countering violent extremism) efforts across the Arab world. The most important of these is a reiteration of the need for counter-messaging that is not just tailored, but that shows deep insight into the mindsets of its target audiences. Research like this can help reveal useful themes and connections to build upon.

Also, Twitter’s ongoing efforts to ban pro-ISIS accounts have undoubtedly driven many of them to other channels, most notoriously Telegram. Analysing activity on these new channels would be of great use in revealing any shifts in ISIS supporters’ focus or mindset. Much in the landscape has changed since this report was released, and it continues to change at a rapid rate.

Fake armies: A field guide to astroturfing

“There are invisible rulers who control the destinies of millions.”
― Edward L. Bernays

It sounds so Orwellian: the world’s opinions shaped by vast armies of bots, or by paid groups of teenagers in Macedonia. But this is no longer the stuff of 1984; the scenario has become reality, and not just in authoritarian states. Technology is now used to drown out the voices of real people, creating an alternate reality where fake opinions rule and the zeitgeist is based on myths.

What is astroturfing exactly?

Astroturfing is the practice whereby paid groups or automated accounts (‘bots’) fool the public into believing that certain opinions are more popular or widespread than they are in reality. It’s used in many arenas, from political campaigning to Amazon reviews, and with the growing influence of social media it’s increasingly difficult to tell fake from fact. Astroturfing is especially likely wherever the interests of big business conflict with those of the public: climate change and big oil, for example, or lung cancer and tobacco companies. Challenging scientifically proven fact should be a hopeless endeavour, for surely nothing is more sacred than fact? Yet in a world of fake news and paid opinion, the word of experts has been cheapened; many people no longer trust experts at all. This was demonstrated to devastating effect this year during the EU referendum in the UK and the presidential election in the United States.

When did astroturfing begin?

Astroturfing is not a phenomenon of the digital age; it predates social media. Back in the days of print newspapers, so-called ‘concerned residents’ would send barrages of letters to the editor, especially around election time, to protest against certain policies or candidates. Now that newspapers have gone online, the armies of astroturfers have headed to the nearest obvious outlet: the comment sections. From there, it’s an easy step to create multiple identities and start posting. Forums are another prime target, along with blogs and, of course, social media. Have you ever felt a sense of despair reading the comments under a newspaper article posted on Facebook? They seem to bring out the worst in human nature, but some of them could be astroturfers. In our low moments, when we feel the world is doomed to a constant cycle of bigotry, xenophobia and fear, perhaps we’d do well to remember that the rabid anti-Muslim or anti-foreigner comments online could simply be the work of some bot army.

What’s the role of technology in all this?

As technology advances, astroturfing grows more sophisticated. Russia has a particular talent for harnessing fake opinion on a massive scale, using something called ‘persona management software’. This software creates bot armies whose accounts use fake IP addresses to hide their location and come with authentic-looking ‘aged’ profiles. There’s almost no reliable way to tell bot from human, and that’s where the real danger lies. Fake opinion en masse can have alarming results, shifting the social and political mood and whipping people up into hysteria over minor or even non-existent issues.
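To see why detection is so hard, consider the crude signals often used to flag automated accounts, sketched below. Persona management software is built to defeat exactly these checks, with aged profiles and human-like posting patterns. The thresholds and field names here are assumptions for illustration, not any real platform’s detection logic.

```python
# Naive bot-flagging heuristics, for illustration only.
from datetime import datetime, timezone

def bot_suspicion_score(account: dict) -> int:
    """Counts crude red flags; a higher score means more bot-like."""
    score = 0
    age_days = (datetime.now(timezone.utc) - account["created_at"]).days
    if age_days < 30:                               # brand-new account
        score += 1
    if account["tweets"] / max(age_days, 1) > 100:  # implausible posting rate
        score += 1
    if account["followers"] < 10:                   # shouts into the void
        score += 1
    return score

acct = {"created_at": datetime(2016, 12, 1, tzinfo=timezone.utc),
        "tweets": 9000, "followers": 3}
print(bot_suspicion_score(acct))
```

An ‘aged’ persona with a plausible follower count and a humanlike posting cadence passes every one of these checks, which is precisely the software’s selling point.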

Thanks to the online echo chambers we live in these days, fake opinion, once sown, spreads with ease. It is further reinforced and legitimised by ongoing social sharing and discussion. Most social media users get their news from within a bubble, as algorithms do their utmost to show only the updates each user is most likely to engage with, leaving little chance of people encountering opinions that challenge their existing worldview. That’s a recipe for disaster, and one whose significance we’ve only just begun to understand.

What are the implications of astroturfing?

Politics in 2016 is a fishy business, and the Trump election campaign in particular is extremely suspicious. There have been claims that Russia used its cyber-warfare prowess to interfere in the US election, ultimately putting Trump in command of the country. Notably, Russian hackers have been accused of stealing thousands of incriminating emails from the Democratic National Committee and Clinton campaign staff, which were then published by Wikileaks. The leak eroded public trust in Clinton and, arguably, narrowed the gap between the candidates. Again, like astroturfing, this technique is not new: orchestrating the right conditions to nudge people into acting a certain way has been practised for decades. The father of propaganda, Edward Bernays, used it to great effect in the early 20th century, to sell pianos and bacon, and to bring about regime change in Guatemala.

Having Trump in power is very much in Russia’s interests. Trump is inexperienced in politics, especially foreign policy, leaving him very much open to manipulation from afar. He has a reputation for greed, meaning he can be easily bought. He has already said publicly that he favours a non-interventionist military policy abroad. For the Kremlin, a Trump presidency is Russia’s very own puppet in the White House: the Cold War revisited, with Russia scoring a massive coup against the US. Only this time Russia has technology on its side, propelling its influence all the way into the corridors of American power. The Soviets couldn’t have hoped for anything like it.

Controlling the zeitgeist via propaganda and astroturfing has reached new heights in this fundamentally connected age, where the concept of ‘post-truth’ is rapidly gaining currency. That’s a serious concern: it makes a mockery of democracy and free speech, and destroys the validity of the internet as a forum for useful debate. Soon we won’t know what’s bot and what’s not. In this post-truth, Trump-tainted era, one could well argue that this is already the case.