
On social network websites, users can report the bad behavior of other users. To do so, they create a kind of escalation ticket called an abuse report, in which they detail the infraction committed by the "bad" user and help the website moderator decide on a penalty. Now that social networks count billions of users, abuse reports are no longer handled manually by moderators; instead, algorithms automatically block the reported users until a moderator takes care of the case. I personally noticed this kind of malicious behavior during the election period in France: different social network groups belonging to a particular political party organized to block the accounts of politicians or simply of adversary supporters. I also noticed that this kind of attack was sometimes reported in the news but never addressed by a scientific research study. I therefore decided to look closely at this phenomenon through tests and experiments, which resulted in a research paper published at the 10th International Conference on Security and Cryptography (SECRYPT 2013) http://www.secrypt.icete.org/?y=2013. In this paper we demonstrated how such algorithms can be maliciously exploited by attackers to block innocent victims. We also automated the attack to demonstrate the damage it can cause on current social network websites, taking Facebook as a proof-of-concept case study.

Our approach was to:

  • Automate the abuse report coalition attack
  • Prove that we can block any user within a few seconds
  • Propose a model to detect coalition attacks based on fake abuse reports
  • Recommend measures for social network websites to prevent automated DoS attacks

Reproducing the Attack Manually With Colleagues


To replay the attack described in the previous section, we created a dummy test account to play the role of the victim, then asked our colleagues to join us in a coalition attack. For the sake of efficiency, we synchronized the attack so that all reports would be sent within a 4-hour window; in practice, the attack took 5 hours. The coalition was composed of 44 volunteer Facebook users. After the last abuse report was sent, the target account was blocked. The lesson learned from this experiment is that 44 reporters may be close to the minimum threshold needed to trigger a block; in any case, our attack was ultimately successful.

[Figure 1]

What about Automating this Attack?

Automating this attack means creating fake accounts that send abuse reports targeting a particular user profile. The two major challenges were: how to automate the creation of fake accounts, and how to make them send abuse reports?


To simulate an abuse reporting action, we had to study all the parameters exchanged between the browser and the Facebook server over HTTP. We captured and analyzed all the HTTP traffic generated during one abuse report action. We ended up with a list of variables that can be intercepted and then filled in to modify the expected behavior of the website. Generating an automated abuse report consists in filling in the reporter's user ID and the targeted profile ID; the rest of the variables are simply replayed as received from the server.
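The replay step can be sketched as follows. Note that the field names (`user_id`, `target_id`, and the captured tokens) are illustrative placeholders, not Facebook's actual parameter names:

```python
# Minimal sketch of the replay logic described above. All field names
# are hypothetical stand-ins for the real parameters captured from
# the HTTP traffic.

def build_abuse_report_payload(captured_fields, reporter_id, target_id):
    """Fill in only the reporter and target IDs; every other variable
    is replayed unchanged, exactly as received from the server."""
    payload = dict(captured_fields)      # replay server-provided values
    payload["user_id"] = reporter_id     # ID of the (fake) reporting account
    payload["target_id"] = target_id     # ID of the victim profile
    return payload

# Example: fields sniffed from one legitimate abuse report action
captured = {"session_token": "abc123", "form_version": "7"}
payload = build_abuse_report_payload(captured, "1001", "2002")
```

The resulting payload would then be sent as an HTTP POST to the report endpoint, once per fake account.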

The creation of these accounts was quite tricky because of the confirmation request that is mandatory to validate a new account. After some observation exercises on the format of the validation URL received by e-mail, we were able to find the hash function used to generate the key related to the e-mail address. The absence of client-side challenge-response mechanisms made this task easier. Another weakness of the account creation process is that a new account can execute two actions before being asked for confirmation, and both of these actions can be abuse report submissions. These weaknesses were quite surprising to us, because nowadays most websites prevent the automated creation of accounts.
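We do not reproduce the actual function here; the sketch below only illustrates the general shape of a validation key derived by hashing the e-mail address. The choice of MD5, the salt parameter, and the normalization step are all assumptions for illustration:

```python
import hashlib

# Illustrative only: NOT the real scheme, just the general idea of a
# confirmation key computed deterministically from the e-mail address.
def confirmation_key(email: str, salt: str = "") -> str:
    """Derive a hex key from the e-mail address via a hash function."""
    normalized = email.strip().lower()
    return hashlib.md5((salt + normalized).encode("utf-8")).hexdigest()
```

The danger of such a scheme is that the key depends only on the e-mail address (and possibly a fixed salt): an attacker who recovers the function can compute the confirmation URL directly, without ever reading the confirmation e-mail.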

Any Countermeasure?

Yes, we proposed generic countermeasures against the generation of both fake accounts and fake abuse reports. Our proposal is very simple: we recommend the use of traditional CAPTCHA-based client-side challenges. For the manual coalition DoS attack, we recommended a method to identify ideological clusters and groups that coordinate to block user accounts (more details in the paper below).

Is Facebook Aware?

Yes! Before any publication, we went through the security vulnerability reporting process. We never received an answer or any feedback from them, but one week after our report, we were no longer able to perform the automated attack.

For more details, I invite you to read the research paper here: https://share.sap.com/I054572/MyAttachments/61c7cf5b-9bac-4b51-ad42-55ba59e490ba/

For any further questions, you can contact me or leave a comment.
