White paper article by Patricia DeGennaro highlighted in Small Wars Journal
On the eve of last year’s U.S. presidential election, two computational social scientists from the University of Southern California published an alarming study that went largely unnoticed in the flood of election news. It found that for a month leading up to the November vote, a large portion of users on the social media platform Twitter might not have been human.
The users were social bots, or computer algorithms built to automatically produce content and interact with people on social media, emulating them and trying to alter their behavior. Bots are used to manipulate opinions and advance agendas—all part of the increasing weaponization of social media.
“Platforms like Twitter have been extensively praised for their contribution to democratization of discussions about policy, politics and social issues. However, many studies have also highlighted the perils associated with the abuse of these platforms. Manipulation of information and the spreading of misinformation and unverified information are among those risks,” write study authors Alessandro Bessi and Emilio Ferrara in First Monday, a peer-reviewed, open-access journal covering Internet research.
Analyzing bot activity leading up to the election, the researchers estimated that 400,000 bots were responsible for roughly 2.8 million tweets, or about one-fifth of the entire political conversation on Twitter weeks before Americans voted. People unwittingly retweeted bot tweets at the same rate that they interacted with humans, which quickly obfuscated the originator of the content, Bessi and Ferrara reported.
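A quick back-of-envelope check shows what those headline numbers imply, treating the study's "about one-fifth" as exactly 20% purely for illustration:

```python
# Back-of-envelope check of the figures Bessi and Ferrara report.
# Assumption: "about one-fifth" is taken as exactly 20% for illustration.
bot_accounts = 400_000
bot_tweets = 2_800_000
bot_share = 0.20  # bots' approximate share of the political conversation

# Average output per bot account over the study window
tweets_per_bot = bot_tweets / bot_accounts

# Implied size of the whole political conversation on Twitter
total_tweets = bot_tweets / bot_share

print(f"average tweets per bot: {tweets_per_bot:.0f}")        # 7
print(f"implied total political tweets: {total_tweets:,.0f}")  # 14,000,000
```

Seven tweets per account is well within human-looking posting rates, which is part of why these accounts blend in so easily.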
Social media manipulation is fast becoming a global problem. The Islamic State of Iraq and the Levant (ISIL) exploits Twitter to send its propaganda and messaging out to the world and radicalize followers. In Lithuania, the government fears that Russia is behind elaborate long-standing TV and social media campaigns that seek to rewrite history and justify the annexation of parts of the Baltic nation—much as it had done in Crimea.
Effective tactics to identify, counter and degrade such social media operations will not emerge from current U.S. military doctrine. Instead, they will come from journal articles on computational social science and technology blogs.
What we already know is that bots are quick, easy and inexpensive to create. All it takes is watching a free online tutorial to learn how to write the code or, alternatively, shelling out a little cash to buy some from a broker. Companies such as MonsterSocial sell bots for less than 30 cents a day. Even popularity is for sale: In 2014, $6,800 could buy a million Twitter followers, a million YouTube views and 20,000 likes on Facebook, according to a Forbes article.
Bots can be good and bad. Not all bots are devious, and not all posts are manipulative. Advanced bots harness artificial intelligence to post and repost relevant content or engage in conversations with people. Bots are present on all major social media platforms and often used in marketing campaigns to promote content. Many repost useful content to user accounts by searching the Internet deeper and faster than people can.
Now for some of the bad. Bots spoof geolocations to appear as if they are posting from real-world locations in real time. When users receive social media messages promising an increase in followers or an alluring photo—typically sent by someone with a friend connection—chances are it is the work of a bot. The improved algorithmic sophistication of bots makes it increasingly difficult for people to sort out fact from fiction. The technology is advancing at record speeds and outpacing the algorithms companies such as Twitter develop to fight it.
Reinforcements are on the way. Experts at the Defense Advanced Research Projects Agency (DARPA), Indiana University Bloomington and the University of Southern California are among those working quickly to develop better algorithms that identify malevolent bots. Solutions range from crowdsourcing to detecting nonhuman behavioral features or using graph-based methods such as those Ferrara and others review in the 2016 article “The Rise of Social Bots” for the publication Communications of the ACM. Although some attribution methods ship with commercial social media analytics packages, the lion’s share are open source and implemented in languages such as Python and R, both free tools that can ingest social media feeds for analysis.
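To make the "nonhuman behavioral features" idea concrete, here is a minimal, purely illustrative heuristic in Python. It is not any platform's or research group's actual detector, and the thresholds and account figures are invented for the example; real systems reviewed in "The Rise of Social Bots" use far richer feature sets and trained classifiers.

```python
# Illustrative sketch only: a toy behavioral-feature bot score.
# Thresholds and example accounts are hypothetical, not from any real detector.
def bot_score(tweets_per_day, follower_ratio, retweet_fraction):
    """Combine a few nonhuman behavioral signals into a 0-3 score."""
    score = 0
    if tweets_per_day > 100:      # humans rarely sustain this posting volume
        score += 1
    if follower_ratio < 0.1:      # follows many accounts, followed by few
        score += 1
    if retweet_fraction > 0.9:    # almost never posts original content
        score += 1
    return score

# Hypothetical accounts: (tweets/day, followers-to-following ratio, retweet share)
print(bot_score(450, 0.01, 0.98))  # 3 -> strongly bot-like behavior
print(bot_score(3, 1.2, 0.30))     # 0 -> human-like behavior
```

Production detectors replace these hand-set thresholds with supervised models trained on labeled accounts, but the underlying intuition — score accounts on behavior no human sustains — is the same.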
While identifying a fake account run by a bot is fairly easy, identifying its creator, its controller and its purpose is a real chore. A social science subfield called social network analysis (SNA) might offer fixes to this problem. SNA uses linear algebra and graph theory to quantify and map relational data. These methods can be used to determine whether bots are acting to elevate certain key actors in a network or aligning with certain human subgroups online. Tools or code that collect and map large networks of interaction on social media platforms can be used to separate humans from bots and identify the causes bots aim to influence.
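The core SNA move described above can be sketched in a few lines of Python: build a directed retweet graph and ask which accounts a suspected bot cluster is amplifying. All account names and the bot list here are hypothetical stand-ins for data harvested from a platform feed.

```python
# Sketch of the SNA idea: a directed retweet graph, filtered to suspected bots,
# reveals which actors the bot cluster is working to elevate.
# Account names and the bot list are hypothetical.
from collections import defaultdict

# (retweeter, original_author) edges, e.g. harvested from a platform feed
retweets = [
    ("bot_01", "candidateX"), ("bot_02", "candidateX"),
    ("bot_03", "candidateX"), ("alice",  "newsdesk"),
    ("bot_01", "candidateX"), ("bob",    "candidateX"),
]
suspected_bots = {"bot_01", "bot_02", "bot_03"}

# For each author, collect the distinct suspected bots retweeting them
amplifiers = defaultdict(set)
for src, dst in retweets:
    if src in suspected_bots:
        amplifiers[dst].add(src)

# The actor boosted by the most distinct bots is the likely target of the campaign
top = max(amplifiers, key=lambda a: len(amplifiers[a]))
print(top, len(amplifiers[top]))  # candidateX 3
```

In practice the same idea is applied with full graph-theoretic measures (in-degree, eigenvector centrality, community detection) over millions of edges, but the logic is identical: the structure of who amplifies whom exposes the campaign even when individual accounts look innocuous.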
At the same time, action is needed to support the tactical use of social media analytics and SNA to detect bots and enemy influence operations in the information environment.
Clearly, bad bots can eliminate the need for kinetic deterrence to coerce or manipulate. They can do the dirty work instead.
Identifying, countering and degrading bot armies that spread misinformation requires new tactics—battle-ready tactics. Advanced computational social science methods must be combined with social media and network analysis tools to wipe them out. Had such measures been deployed in the months leading up to the November presidential election, the United States might now know whether Russia meddled in the election. And knowing can be half the battle.