More information about the project can be found at LERSSE.
A good summary appears in an ACM Interactions Magazine cover story: Socialbots: voices from the fronts.
Our personal and professional lives have gone digital: we live, work and play in cyberspace. We use the Internet, computers, cell phones and mobile devices every day to talk, email, text and socialize with family, friends and colleagues.
The goal of this project is to understand what makes these systems, online social networks in particular, vulnerable to cyber attacks, and inform new designs that lead to systems less vulnerable to both human exploits (i.e., social engineering) and technical exploits (i.e., platform hacks).
Rise of the Socialbots: Large-Scale Infiltration in Online Social Networks
Online Social Networks (OSNs) have become an integral part of today's Web. Politicians, celebrities, revolutionists, and others use OSNs as a podium to deliver their message to millions of active web users. Unfortunately, in the wrong hands, OSNs can be used to run astroturf campaigns in order to spread misinformation and propaganda. Such campaigns usually start off by infiltrating a targeted OSN on a large scale.
In this project, we evaluate how vulnerable OSNs are to a large-scale infiltration by socialbots: computer programs that control OSN accounts and mimic real users. We adopt a traditional web-based botnet design and build a Socialbot Network (SbN): a group of adaptive socialbots that are orchestrated in a command-and-control fashion. We operated such an SbN on Facebook, a 750-million-user OSN, for 8 weeks. We collected data on users' behavior in response to a large-scale infiltration, in which the socialbots were used to connect to a large number of Facebook users.
Our results show that (1) OSNs, such as Facebook, can be infiltrated with a success rate of up to 80%, (2) depending on users' privacy settings, a successful infiltration can result in privacy breaches where even more users' data are exposed, and (3) in our case, OSN security defenses, such as the Facebook Immune System, were not effective enough in detecting or stopping the large-scale infiltration as it occurred.
There's an old New Yorker cartoon: "On the Internet, nobody knows you're a dog." Socialbots are a bit like that.
-  Graph-based Sybil Detection in Social and Information Systems, Yazan Boshmaf, Konstantin Beznosov, Matei Ripeanu, 2013 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining ASONAM, Niagara Falls, Canada, August 2013. (acceptance rate: 13%, best paper award) pdf Technical Report
-  Design and Analysis of a Social Botnet, Yazan Boshmaf, Ildar Muslukhov, Konstantin Beznosov, Matei Ripeanu, Elsevier Computer Networks, Special Issue on Botnet Activity: Analysis, Detection and Shutdown. pdf
-  Key Challenges in Defending against Malicious Socialbots, Yazan Boshmaf, Ildar Muslukhov, Konstantin Beznosov, Matei Ripeanu, 5th USENIX Workshop on Large-Scale Exploits and Emergent Threats (LEET'12), collocated with USENIX NSDI’12, San Jose, CA, April 2012 (acceptance rate: 18%). link pdf slides video
-  The Socialbot Network: Are Social Botnets Possible?, Yazan Boshmaf, Ildar Muslukhov, Konstantin Beznosov, Matei Ripeanu, ACM Interactions Magazine, March-April 2012, pp. 45-46 pdf
-  Branded with a Scarlet “C”: Cheaters in a Gaming Social Network, Jeremy Blackburn, Ramanuja Simha, Nicolas Kourtellis, Xiang Zuo, Matei Ripeanu, John Skvoretz, Adriana Iamnitchi, World Wide Web Conference (WWW’12), Lyon, France, April 2012. (acceptance rate: 80/655 = 12.2%) pdf
-  The Socialbot Network: When Bots Socialize for Fame and Money, Yazan Boshmaf, Ildar Muslukhov, Konstantin Beznosov, Matei Ripeanu, Annual Computer Security Applications Conference (ACSAC), December 2011. (acceptance rate 39/195=20%, best paper award) link (preliminary version presented as a poster at USENIX Security Symposium, August 2011 link) For an extended and updated discussion, please refer to the TR
-  The Socialbot Network: When Bots Socialize for Fame and Money, March 2013, Telefonica, Barcelona slides
-  Security Analysis of Social Bots on the Web, Yazan Boshmaf, Ildar Muslukhov, Konstantin Beznosov, and Matei Ripeanu, presentation at Humboldt Colloquium (Toronto), Nov 2012 link
-  Socialbots: A Security Perspective, Yazan Boshmaf, Konstantin Beznosov, and Matei Ripeanu, guest lecture at the University of Washington - Bothell, WA, March 2012
-  Automated social engineering attacks in OSNs, Yazan Boshmaf, Konstantin Beznosov, and Matei Ripeanu, presentation for The Office of the Privacy Commissioner of Canada (Ottawa), May 2010 link
- Our objective is to help improve the security and privacy of online systems. We believe that controlled, minimal-risk, realistic experiments are the only way to reliably estimate the feasibility of an attack in the real world. These experiments allow us, and the wider research community, to gain genuine insight into the ecosystem of cyber attacks, which is useful in understanding how attackers may behave and how to defend against them. We carefully design our experiments to reduce any potential risk to users by following known practices, and we obtain the approval of our university's behavioral research ethics board before conducting any experiment. We strongly encrypt and properly anonymize all collected data, which we completely delete after we finish any planned data analysis.
-  The New Scientist Magazine: Inside Facebook's massive cyber-security system. link
-  BBC: Socialbots used by researchers to 'steal' Facebook data. link
-  Vancouver Sun: Facebook fails to stop bots accessing personal information. link and another link
-  CBC: Facebook easily infiltrated, mined for personal info. link
-  Huffington Post: Sniffing out Socialbots - The combustive potential of social media-based algorithms. link
-  InfoWorld: Your Facebook friends may be evil bots. link
-  NYTimes: I Flirt and Tweet. Follow me at #socialbot link
-  Coverage in many other outlets, in English, German, French, Russian, Hebrew, Czech, Turkish, Chinese, Italian, Spanish, Greek, and more.
- So what is a socialbot?
A socialbot is automation software that controls a user profile in a particular online social network, such as Facebook. A socialbot is capable of executing commands that result in operations related to either social interactions (e.g., posting a status update) or the social-graph structure (e.g., sending a connection request). These commands are either sent by the command-and-control center (the botmaster) or predefined locally on each socialbot. With carefully defined commands, a socialbot can imitate real users and pass itself off as a human being.
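The command model described above can be sketched as follows. This is an illustrative sketch only; the class names and command set are hypothetical and do not reflect the actual SbN implementation.

```python
# Hypothetical sketch of the socialbot command model: a botmaster
# (command-and-control center) broadcasts commands, and each socialbot
# executes social-interaction or social-graph operations.

class Socialbot:
    def __init__(self, profile_id):
        self.profile_id = profile_id
        # Commands can also be predefined locally on each socialbot.
        self.local_commands = [("post_status", "Hello, world!")]

    def execute(self, command, argument):
        # Social-interaction operation (e.g., posting a status update).
        if command == "post_status":
            return f"{self.profile_id} posts: {argument}"
        # Social-graph operation (e.g., sending a connection request).
        if command == "send_request":
            return f"{self.profile_id} sends a friend request to {argument}"
        raise ValueError(f"unknown command: {command}")


class Botmaster:
    """Command-and-control center that orchestrates the socialbots."""

    def __init__(self, bots):
        self.bots = bots

    def broadcast(self, command, argument):
        return [bot.execute(command, argument) for bot in self.bots]


bots = [Socialbot(f"bot-{i}") for i in range(3)]
master = Botmaster(bots)
results = master.broadcast("send_request", "user-42")
```

The key design point, borrowed from traditional web-based botnets, is the separation between a central controller that issues commands and many lightweight bots that carry them out.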
- How do they infiltrate social networks, what's the method?
Socialbots infiltrate social networks by obtaining social connections between the controlled profiles and other profiles on the network. To do this, they pretend that their profiles are of real people. If they pretend well, users are likely to accept their friendship requests, and the social network administrators are unlikely to detect them.
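One way a bot might choose whom to contact, sketched below, is to prefer targets who already share mutual friends with it, since shared connections make a fake profile look more plausible. This is an illustrative heuristic with made-up data, not the exact targeting method used in our study.

```python
# Illustrative target-selection heuristic for infiltration (a sketch,
# not the actual method used in the study): rank candidate users by
# the number of mutual friends they share with the bot.

def mutual_friend_count(bot_friends, target_friends):
    """Number of friends the bot and the target have in common."""
    return len(set(bot_friends) & set(target_friends))


def pick_targets(bot_friends, candidates, limit):
    """Return the `limit` candidates with the most mutual friends."""
    ranked = sorted(
        candidates.items(),
        key=lambda item: mutual_friend_count(bot_friends, item[1]),
        reverse=True,
    )
    return [user for user, _ in ranked[:limit]]


# Hypothetical data: each candidate maps to their friend list.
bot_friends = ["alice", "bob"]
candidates = {
    "carol": ["alice", "bob", "dave"],  # 2 mutual friends
    "dave": ["eve"],                    # 0 mutual friends
    "erin": ["alice"],                  # 1 mutual friend
}
targets = pick_targets(bot_friends, candidates, limit=2)
# targets == ["carol", "erin"]
```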
- Now, are they a threat to online social networks?
Potentially yes, and in a number of ways: First, by becoming friends with social network users, socialbots can harvest private user data and content. For example, our 102 socialbots collected over 46K email addresses, over 500K birth dates, over 14K postal addresses, and over 16K phone numbers.
Second, socialbots can be used to perform surveillance by collecting status updates, pictures, and other content posted by social network users, depending on their privacy settings.
- So this is about just mining private information or is there a bigger threat?
The bigger threat of a large-scale infiltration by socialbots is the erosion of trust between users, the basic fabric of online social networks. Moreover, with millions of websites integrating social networking platforms, socialbots can be used to post fake comments, make misleading recommendations, and contribute biased product ratings online.
- Can they be used to hijack or manipulate social movements then?
Possibly yes. If people believe that the profiles controlled by socialbots belong to real people, then they could be influenced by the opinions the socialbots post, or even follow their calls for action.
- So how do we guard against socialbots?
As with any other socio-technical system, there are technical and human aspects to the answer. Users can be helped to make better decisions when accepting connection requests. For example, increasing awareness of the risks associated with accepting requests from strangers could partially improve the situation.
At the same time, technology should do a better job on two fronts: First, in assisting users with connection requests through better graphical interfaces that communicate potential risks. Second, social network operators can make the socialbot-network business more difficult (and therefore less profitable) by, for example, improving the accuracy and speed of detecting and blocking bots.
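As a toy example of the detection front, one simple behavioral signal an operator could compute is the acceptance rate of an account's outgoing connection requests: an account that contacts many strangers will typically see fewer acceptances than a normal user. The thresholds below are made up for illustration, and this is not how the Facebook Immune System works.

```python
# Illustrative bot-detection heuristic (hypothetical thresholds, not
# the Facebook Immune System): flag accounts that send many connection
# requests of which few are accepted.

def is_suspicious(sent, accepted, min_sent=50, min_rate=0.5):
    """Flag an account that sent many requests with a low acceptance rate."""
    if sent < min_sent:
        return False  # too little activity to judge reliably
    return (accepted / sent) < min_rate


# An account that sent 200 requests with only 40 accepted looks bot-like;
# a light user with 9 of 10 requests accepted does not.
flag_bot = is_suspicious(sent=200, accepted=40)
flag_user = is_suspicious(sent=10, accepted=9)
```

In practice a single feature like this is easy for an adaptive bot to evade, which is why the graph-based Sybil detection work listed above combines behavioral signals with the structure of the social graph.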
- What has happened to your army of social bots, are they still active?
Our army was quite small: 102 socialbots in total. It was more a social club than an army :)
By the end of our experiments (March 2011), 20 of the bots had been blocked by Facebook. By the end of October, when the media started discussing our findings, most of the profiles controlled by our socialbots were blocked. Now, at most 15 are alive but dormant.
- Any ideas on how the use of socialbots will develop in the near future?
We expect to see socialbots being used in social architecture, where advanced technologies are used to enable large-scale structuring of social groups and communities online. Accordingly, intelligent socialbots are expected to be used to interact with, promote, or provoke online communities toward certain behaviors. The vision of such a technology is to enable social network operators or third parties to actively shape the social structure of online social networks in order to produce desired outcomes. For example, online communities could be "stitched" together to promote understanding and cooperation between human users on civic and humanitarian issues. This, however, has its own security and privacy concerns that have to be studied before making such a technology public.
- How far are we from a bot passing the Turing test?
The Turing test is a test of a machine's ability to exhibit intelligent behavior. Thus, predicting how far we are from passing a Turing test is open for discussion and depends on the type of the test in question. One thing, however, is clear: As machine learning and computational power advance, the day when a machine effectively passes the test gets closer and closer.