How The Hashtag is Changing Warfare

June 29, 2017
By Adam B. Jonas
Article published on The Cyber Edge

Armies of social media bots battle for hearts and minds online.

On the eve of last year’s U.S. presidential election, two computational social scientists from the University of Southern California published an alarming study that went largely unnoticed in the flood of election news. It found that for a month leading up to the November vote, a large portion of users on the social media platform Twitter might not have been human.

Figure 1. A simulated tweet-retweet network of 100 nodes is linked by 104 tweets. A method known as eigenvector centrality determined the approximate importance of each node. Arrows indicate retweeting of content.

The users were social bots, or computer algorithms built to automatically produce content and interact with people on social media, emulating them and trying to alter their behavior. Bots are used to manipulate opinions and advance agendas—all part of the increasing weaponization of social media.

“Platforms like Twitter have been extensively praised for their contribution to democratization of discussions about policy, politics and social issues. However, many studies have also highlighted the perils associated with the abuse of these platforms. Manipulation of information and the spreading of misinformation and unverified information are among those risks,” write study authors Alessandro Bessi and Emilio Ferrara in First Monday, a peer-reviewed, open-access journal covering Internet research. 


Analyzing bot activity leading up to the election, the researchers estimated that 400,000 bots were responsible for roughly 2.8 million tweets, or about one-fifth of the entire political conversation on Twitter weeks before Americans voted. People unwittingly retweeted bot tweets at the same rate that they interacted with humans, which quickly obfuscated the originator of the content, Bessi and Ferrara reported.
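The importance ranking behind Figure 1 can be reproduced with open source tools. The following is a minimal sketch in Python using the networkx library; the simulated 100-node, 104-edge graph, the random seed and the undirected simplification are all assumptions standing in for real retweet data:

```python
import networkx as nx

# Simulated retweet network: 100 accounts linked by 104 retweet ties.
# In real data, a directed edge u -> v would mean u retweeted v.
G = nx.gnm_random_graph(100, 104, seed=42, directed=True)

# Eigenvector centrality scores a node highly when it is connected to
# other high-scoring nodes. Computing it on the undirected projection
# is a simplification that keeps the power iteration well behaved on
# a sparse random graph.
centrality = nx.eigenvector_centrality(G.to_undirected(), max_iter=1000)

# The five most "important" accounts in the simulated conversation
top = sorted(centrality, key=centrality.get, reverse=True)[:5]
print(top)
```

On real data, the nodes and edges would come from a platform's streaming interface rather than a random-graph generator.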

Social media manipulation is fast becoming a global problem. The Islamic State of Iraq and the Levant (ISIL) exploits Twitter to send its propaganda and messaging out to the world and radicalize followers. In Lithuania, the government fears that Russia is behind elaborate long-standing TV and social media campaigns that seek to rewrite history and justify the annexation of parts of the Baltic nation—much as it had done in Crimea. 

Figure 2. A simulated tweet-retweet network of 100 nodes is linked by 104 tweets. Probable bots are displayed in red and labeled.

Effective tactics to identify, counter and degrade such social media operations will not emerge from current U.S. military doctrine. Instead, they will come from journal articles on computational social science and technology blogs.

What we already know is that bots are quick, easy and inexpensive to create. All it takes is watching a free online tutorial to learn how to write the code or, alternatively, shelling out a little cash to buy some from a broker. Companies such as MonsterSocial sell bots for less than 30 cents a day. Even popularity is for sale: In 2014, $6,800 could buy a million Twitter followers, a million YouTube views and 20,000 likes on Facebook, according to a Forbes article. 
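To see just how low that barrier is, consider a minimal sketch of the kind of retweet bot a free tutorial walks through, written here against the tweepy library's older 3.x interface; the credentials and hashtag are placeholders, not working values:

```python
import tweepy

# Placeholder credentials issued by the platform's developer portal
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth)

# Amplify a hashtag: find recent posts and retweet them automatically.
for tweet in tweepy.Cursor(api.search, q="#SomeCause").items(20):
    try:
        api.retweet(tweet.id)
    except tweepy.TweepError:
        pass  # already retweeted or a protected account; skip it
```

A dozen lines, a free library and rented credentials are all an operator needs; scale comes from running hundreds of these scripts under different accounts.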


Bots can be good and bad. Not all bots are devious, and not all posts are manipulative. Advanced bots harness artificial intelligence to post and repost relevant content or engage in conversations with people. Bots are present on all major social media platforms and often used in marketing campaigns to promote content. Many repost useful content to user accounts by searching the Internet deeper and faster than people can. 

Now for some of the bad. Bots spoof geolocations to appear as if they are posting from real-world locations in real time. When users receive social media messages promising an increase in followers or an alluring photo—typically sent by someone with a friend connection—chances are it is the work of a bot. The improved algorithmic sophistication of bots makes it increasingly difficult for people to sort out fact from fiction. The technology is advancing at record speeds and outpacing the algorithms companies such as Twitter develop to fight it. 

Reinforcements are on the way. Experts at the Defense Advanced Research Projects Agency (DARPA), Indiana University Bloomington and the University of Southern California are among those working quickly to develop better algorithms that identify malevolent bots. Solutions range from crowdsourcing to detecting nonhuman behavioral features to graph-based methods such as those Ferrara and others review in the 2016 article “The Rise of Social Bots” for the publication Communications of the ACM. Although some attribution methods come with commercial social media analytics packages, the lion’s share are open source and implemented in free languages such as Python and R, which can ingest social media feeds for analysis.
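To give a flavor of the feature-based approach, the sketch below scores accounts against a few nonhuman behavioral signals. The accounts, thresholds and point system are illustrative assumptions; production detectors of the kind surveyed in “The Rise of Social Bots” learn them from labeled data instead:

```python
import pandas as pd

# Toy account features of the kind bot detectors examine (made-up values)
accounts = pd.DataFrame({
    "screen_name":      ["newsfan42", "patriot_bot_7", "jane_doe"],
    "tweets_per_day":   [3.1, 480.0, 0.8],
    "retweet_ratio":    [0.40, 0.98, 0.25],  # share of posts that are retweets
    "followers":        [310, 12, 95],
    "following":        [280, 4900, 120],
    "account_age_days": [2200, 14, 3100],
})

# A crude rule-based score: each nonhuman trait adds one point.
accounts["bot_score"] = (
    (accounts["tweets_per_day"] > 72).astype(int)       # superhuman pace
    + (accounts["retweet_ratio"] > 0.9).astype(int)     # pure amplifier
    + (accounts["following"] > 10 * accounts["followers"]).astype(int)
    + (accounts["account_age_days"] < 30).astype(int)   # throwaway account
)
print(accounts[["screen_name", "bot_score"]])
```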

While identifying a fake account run by a bot is fairly easy, identifying its creator, its controller and its purpose is a real chore. A social science subfield called social network analysis (SNA) might offer fixes to this problem. SNA uses linear algebra and graph theory to quantify and map relational data. These methods can determine whether bots are acting to elevate certain key actors in a network or aligning with certain human subgroups online. Tools that collect interaction data and build large networks from social media platforms can be used to separate humans from bots and identify the causes bots aim to influence, as in the sketch below.
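As a minimal sketch of that idea, assume the bot accounts have already been flagged; community detection on the retweet network then shows which human subgroups absorb the most bot attention. The edge list and account names below are hypothetical:

```python
import networkx as nx
from networkx.algorithms import community

# Hypothetical retweet ties: an edge u -> v means u retweeted v
edges = [("bot_1", "alice"), ("bot_1", "bob"), ("bot_2", "alice"),
         ("carol", "alice"), ("bob", "carol"), ("dave", "erin"),
         ("bot_3", "erin"), ("erin", "dave")]
G = nx.DiGraph(edges)
bots = {"bot_1", "bot_2", "bot_3"}  # assume detection has already run

# Find subgroups, ignoring edge direction for the community structure
groups = community.greedy_modularity_communities(G.to_undirected())

# Heavy inbound bot activity marks the subgroups, and by extension the
# causes, that the bots are working to amplify.
for i, grp in enumerate(groups):
    bot_retweets = sum(1 for u, v in G.edges() if u in bots and v in grp)
    print(f"group {i}: humans={sorted(grp - bots)}, bot retweets={bot_retweets}")
```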

Figure 3. The red box is a human’s account that has a high external-internal (E-I) index, suggesting that the person’s messages are especially infected by bots. Yellow boxes highlight two accounts influenced by bots to target for deletion. The blue box represents a human being targeted for influence.

At the same time, action is needed at the tactical level to support the use of social media analytics and SNA to detect bots and enemy influence operations in the information environment.
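The external-internal (E-I) index shown in Figure 3 is one such measure, with a simple closed form due to Krackhardt and Stern: external ties minus internal ties, divided by total ties. Computed per account, with humans and bots as the two groups, a high value flags a person whose interactions are dominated by bots. The toy network below is an assumption for illustration:

```python
import networkx as nx

# Hypothetical interaction network mixing human and bot accounts
G = nx.Graph([("human_A", "bot_1"), ("human_A", "bot_2"),
              ("human_A", "human_B"), ("human_B", "human_C"),
              ("human_C", "bot_3")])
kind = {n: ("bot" if n.startswith("bot") else "human") for n in G}

def ei_index(G, node, kind):
    """Krackhardt-Stern E-I index for one node:
    (external - internal) / (external + internal) ties.
    +1 means every tie crosses the group boundary (here, human to bot);
    -1 means every tie stays inside the node's own group."""
    ext = sum(1 for nbr in G[node] if kind[nbr] != kind[node])
    internal = G.degree(node) - ext
    return (ext - internal) / (ext + internal)

for node in ("human_A", "human_B", "human_C"):
    print(node, round(ei_index(G, node, kind), 2))
```

Here human_A, whose ties run mostly to bots, scores highest and would be the account to prioritize for counter-messaging or cleanup.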

Clearly, bad bots can eliminate the need for kinetic deterrence to coerce or manipulate. They can do the dirty work instead.

Identifying, countering and degrading bot armies that spread misinformation requires new tactics—battle-ready tactics. Advanced computational social science methods must be combined with social media and network analysis tools to wipe them out. Had such measures been deployed in the months leading up to the November presidential election, the United States might already know whether Russia meddled in it. And knowing can be half the battle.


 

Written by Adam Jonas

As an intelligence analyst at Threat Tec LLC, he focuses on social network analysis (SNA) and is a subject matter expert in support of TRADOC G27’s Network Engagement team. Mr. Jonas currently works to develop, teach and implement approaches that integrate lessons learned from SNA in academia to support the DoD community. Along with other members of the Network Engagement team, Mr. Jonas teaches Advanced Network Analysis and Targeting (ANAT) classes to U.S. Army officers. After receiving his master’s degree in sociology from the University of Kentucky, with a specialization in political sociology and social movements, Mr. Jonas went on to be the lead network analyst of a major NIH-funded project on HIV risk and drug use in rural Appalachia. At the University of Kentucky LINKS Center for Social Network Analysis, Mr. Jonas studied under Dr. Stephen Borgatti, one of the foremost scholars in the field of SNA. Mr. Jonas has consulted for Yale, Loyola Marymount, William & Mary and other private and public organizations on how best to employ network analysis techniques and theory. He is a noted presenter at leading academic conferences focusing on social network analysis and has been invited internationally to teach workshops on conducting social network research and to lecture on effectively implementing network interventions to induce change.