PhishTank, a mass-participation website used to track phishing sites, is susceptible to voting fraud by criminals, according to researchers at Cambridge University’s Computer Laboratory.
PhishTank asks its members to vote on whether submitted websites are correctly classified as phishing sites, which impersonate online banks and other services in order to gather log-in details for fraud. The votes are used to produce an open-source list of confirmed phishing sites.
However, Tyler Moore and Richard Clayton say in a paper that PhishTank is dominated by its most active users, that the less active users are far more likely to make mistakes, and that this distribution of users leaves it open to manipulation by criminals.
Moore and Clayton write that 25 moderators cast 74% of the 881,511 votes recorded between February and September 2007, while most of the other 3773 users in the sample voted only a few times. The paper will be published at the Financial Cryptography and Data Security conference, 28-31 January in Cozumel, Mexico.
Moore and Clayton compared PhishTank to a proprietary list of phishing websites (run by a company they do not name in the paper). In a four-week period in July and August, once duplicates were removed, the company reported 8730 phishing sites while PhishTank found 8296.
However, the commercial service found nearly twice as many ‘rock-phish’ domains, which quickly change URLs (see One gang corners the market in phish, 17 May 2007), and it verified suspected sites on average eight seconds after identification – compared with a 16-hour gap for PhishTank, which must wait for voting to take place.
Perhaps most seriously, the researchers believe that relying on the wisdom of crowds allows criminals promoting phishing sites to hide in those crowds. They point out that since only 3% of proposed phishing sites are dismissed as legitimate – and 44% of all submitters get it wrong at least 5% of the time – a criminal could build a strong reputation simply by voting that every submission is a genuine phish, except his or her own phishing sites.
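The arithmetic behind this attack can be sketched in a few lines of Python. This is an illustrative simulation, not PhishTank's actual data or scoring model: the 97% figure comes from the paper, but the voter strategy and accuracy measure are simplifications.

```python
import random

# Per the study, roughly 97% of submissions to PhishTank really are
# phishing sites. A "voter" who blindly confirms every submission as a
# phish therefore agrees with the eventual verdict about 97% of the time,
# building an apparently strong track record with no effort.

random.seed(0)

N = 10_000
PHISH_RATE = 0.97  # fraction of submissions that are genuine phish (from the paper)

# True = the submission really is a phishing site
submissions = [random.random() < PHISH_RATE for _ in range(N)]

# Lazy attacker strategy: vote "phish" on everything.
correct = sum(1 for is_phish in submissions if is_phish)
accuracy = correct / N
print(f"Blind 'always phish' voter accuracy: {accuracy:.1%}")
```

Because the correct answer is so guessable, the resulting reputation says almost nothing about whether the voter can actually be trusted on the rare contested cases, including his or her own sites.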
“Because you are trying to use crowds for security mechanisms, what you have to have is a task which is not easily guessable, or an attacker can build up his reputation by pretending to be legitimate,” said Moore.
He added that there are possible counter-measures: “If PhishTank was to suspect that they were being attacked, they could switch to a fail-safe mode of only relying on their trusted moderators, as they contribute such a large proportion of the verification already.”
However, such problems cast doubt on the ‘wisdom of crowds’ concept promoted by James Surowiecki, at least for security mechanisms, particularly when the distribution of users is highly skewed and the correct decision can be reliably guessed.
“Skewed distributions are problematic because you are concentrating power in the hands of the few. If something was to go wrong with one of those actors, the system could be undermined,” said Moore.