Over the past two years, the strategic use of social media platforms by groups like ISIL has prompted widespread calls for the removal of online extremists’ accounts and content, as well as the dissemination of counternarratives, to neutralize their formidable propaganda output as part of a global CVE (countering violent extremism) agenda.
It seems like a logical approach. A recent study determined that in one month in 2015, Daesh, as the terrorist organization is also known, produced more than 1,100 pieces of content.
This content—tweets, radio broadcasts, videos, infographics, newsletters, magazines—was further amplified by an unofficial network of thousands of supporters.
The theory is that taking down extremist content and related accounts from the Internet, and then replacing them with messages that debunk the utopian myth of an Islamic State that Daesh propagates, will make the group less visible and less attractive, particularly to potential foreign fighters.
If only it were so easy. Censorship and account removal, even of so-called extremist accounts, raise many questions—not the least of which is, does it work?
For instance, the lack of an internationally agreed-upon definition of violent extremism makes it difficult to consistently identify terrorist or extremist groups and/or their speech. We may think we know it when we see it, but what happens when we appropriate and subvert such speech for satire? Or gather it for legitimate research and analysis? Or express unpopular or uncomfortable but still legal feelings of anger or oppression? In an era when you can get kicked off a plane simply for speaking Arabic, we must carefully define what we’re talking about.
The push to prevent violent extremism may encourage governments and private companies to compromise human rights online, as they restrict access to the Internet and information, block lawful content, remove accounts in the hundreds of thousands, develop algorithms to identify extremist speech and even predict attacks, alter search results, and engage in mass monitoring, particularly of Muslims, in the name of countering threats.
In a March letter to the UN Human Rights Council, 58 civil society organizations, including SMEX, expressed concern that “These measures often lack proper procedural safeguards and pose a serious danger to the rights to freedom of expression and privacy online.” Further, the letter emphasized, “Governments and inter-governmental bodies too often overlook the enormous potential of a free and open Internet to enable robust debate and make a positive contribution to PVE.”
From in the clouds to on the ground
Some might say that concerns about free expression and privacy online are all well and good when we’re talking about them in the abstract realm of international human rights law, but how do they track when a community and a country are reeling from events like the bombing in Borj al Barajneh, on the outskirts of Beirut, last November that killed 42 people, or skirmishes on the Lebanese-Syrian border that result in casualties and the kidnapping of Lebanese soldiers? How can we focus on human rights or digital rights when Daesh has drawn more than 900 fighters from Lebanon—far fewer than from Tunisia, Saudi Arabia, or Yemen, but a worrying number relative to the country’s population of about 5.5 million? Why shouldn’t we sanction censorship and surveillance when ISIS spokespeople are issuing video threats via YouTube and a range of other multimedia channels from well-appointed Web studios less than a day’s drive away?
Since President Obama’s summit on countering violent extremism in February 2015, SMEX has been keeping an eye on these issues and how they affect both our work in media development and digital rights. At RightsCon in late March, I hosted “Terrorism, Extremism and Excuses: Getting to Solutions in the ISIS vs. Censorship Debate,” with Courtney Radsch, the advocacy director at the Committee to Protect Journalists, during which we critiqued the CVE agenda through the lens of press freedom and free expression online. We continued the discussion and debate with diplomats, academics, journalists, and local CVE practitioners in early April at the Milton Wolf Seminar on Media and Diplomacy in Vienna. Then, after six months of preparation, we brought the conversation home to Lebanon.
Over three days in mid-April, we hosted “#HackingExtremism: A Participatory Symposium on Countering Violent Extremism Online,” in cooperation with the U.S. Embassy Beirut. Though we have been skeptical of the CVE agenda from a high-level view, we realize that things can look decidedly different from a ground-level vantage point. We wanted to pose and try to answer some of the questions that emerged in other fora in the context of Lebanon, one of the 66 members of the Global Coalition to Counter ISIL. Conceived as one of a series of tech camps the U.S. government has sponsored in the region, the three-day symposium aimed to facilitate informed multistakeholder discussions on the challenges the country is facing vis-à-vis CVE and to generate locally driven solutions.
In keeping with SMEX’s mission to protect and defend digital rights, the gathering also briefed participants on how the CVE/PVE agendas intersect with and can threaten human rights and civil liberties, such as free expression, the right to privacy, and the right to assembly. This was the first time that free expression and other digital rights had been integrated into the agenda of such a tech camp.
Over the three days, more than 100 participants attended the event, including representatives from several Lebanese ministries and the Lebanese Armed Forces, the U.S. government, and a wide range of civil society organizations, as well as journalists, lawyers, software engineers, ICT experts, social media managers, and academics. International participants included a representative from Facebook’s policy team, representatives of international digital rights advocacy organizations, including Article 19 and the Electronic Frontier Foundation, and members of the academic research network VOX-Pol.
Through an interactive program that included lectures, panel discussions, a mini-unconference, and prototyping sessions, attendees learned about the incentives that draw fighters to extremist organizations, common recruiting methods and channels, and the scale at which the ISIS media apparatus operates, among other key issues. Much of the content of the agenda is represented in this reading list.
Participants also explored the potential and pitfalls of counter-narrative programming, such as that developed by the Sawab Center, an Emirati-U.S. governmental initiative that organizes social media campaigns to propagate counter messages, and the use of content moderation, community standards, and algorithms by social media companies like Facebook. In addition, the event showcased locally designed initiatives, such as an eight-month youth theater program emphasizing self-expression, a café designed to promote conflict resolution in Tripoli, and a psychosocial support program for convicted terrorists in Lebanon’s Roumieh prison.
The work of these local groups, together with the advice of experts like Christina Nemr, a CVE consultant and former CVE advisor at the State Department, who presented on best practices in monitoring and evaluating CVE programs, supported the key takeaway from the symposium: countering violent extremism online can’t succeed with a top-down approach. Governments shouldn’t prescribe either the counter messages used or how they are delivered. Rather, they should support local communities in doing this work as they see fit. They should also make the work safer by not stigmatizing local efforts as tools of foreign policy agendas.
Other key takeaways highlighted the fact that corporations and governments have an interest in protecting and defending digital rights online. For governments, allowing unpopular and even radical speech as well as keeping surveillance targeted and subject to due process is one of the key ways in which they can build the trust they need to enlist citizens’ help in the fight against violent extremism.
Corporations also have to do more to earn the trust of their users, by educating themselves about digital rights, hiring and training teams with cultural and linguistic sensitivity, and applying their community standards and grievance processes consistently across all users and types of content.
With these new knowledge and frameworks in mind, participants spent much of Days 2 and 3 developing ideas for local CVE pilot projects in a series of prototyping sessions. Proposed ideas ranged from an awareness-raising poster competition to a multistakeholder rapid-response unit that could triage a communications response in a crisis like last year’s bombing.
After the symposium, participants had the opportunity to flesh out their ideas further and apply for a small grant to pilot their projects. From this process, three key initiatives emerged, each designed to offer concrete, effective solutions for countering online extremism in Lebanon. The implementing teams will present their pilots at a second CVE event, where SMEX will host roundtable discussions on lessons learned. We will digest the lessons from that final event in a separate blog post and will publish a report about the entire program in late September or early October.