For most humans, a picture conveys a tangled wealth of information and emotion. Not so for computers. Interpreting natural photographs is a well-known weakness of our mechanized comrades. Researchers at Google are working to change that. In April, Google announced that its new Street View number-recognition technology can also break its own reCAPTCHA tests—with more than 99 percent accuracy. Using “deep convolutional neural networks,” the researchers report that their system is now so advanced it can do better than humans on the tests:
…at specific operating thresholds, the performance of the proposed system is comparable to, and in some cases exceeds, that of human operators.
What does it mean when a computer can be better than a human on a test for humanity? It calls into question the very nature of a CAPTCHA test—and also its relevance. Google itself admits this, saying that “this shows that the act of typing in the answer to a distorted image should not be the only factor when it comes to determining a human versus a machine.” Let’s run with that.
CAPTCHAs, or Completely Automated Public Turing tests to tell Computers and Humans Apart, were originally developed by researchers at Carnegie Mellon University in 2000. The important part here is Turing test, a test for intelligence in a computer. As the name suggests, CAPTCHAs use the Turing test to “tell computers and humans apart”—and prevent access to the former. Most CAPTCHAs do this with image interpretation tests which were historically only decipherable by humans. However, Google’s new deep neural system shows that the tide is changing.
CAPTCHAs discriminate against humans
Numerous studies have argued against the effectiveness of CAPTCHAs and the discrimination these tests impose on certain users. A study conducted by Stanford University revealed that traditional CAPTCHAs are problematic for humans.
When we presented image CAPTCHAs to three different humans, all three agreed only 71% of the time on average.
They further discovered that audio CAPTCHAs, which are normally provided as an “accessible alternative,” fared even worse. Not only did they take almost three times longer to solve (28.4 seconds compared to 9.8 seconds for images), they were successful only 31 percent of the time. The study also noted that non-native speakers of English are generally slower and less accurate at English-centric CAPTCHAs.
In short, the researchers concluded that CAPTCHAs regularly exclude valid users—that is, humans—from accessing content on the web. As such, CAPTCHAs are not just a quick security fix. They are very much an accessibility issue.
The World Wide Web Consortium (W3C) focuses its efforts on Web accessibility, publishing guidelines for making your site accessible, and it has long recognized CAPTCHAs as an accessibility problem. In 2007 it issued a report on the Inaccessibility of CAPTCHA, concluding that:
This type of visual and textual verification comes at a huge price to users who are blind, visually impaired or dyslexic. In many cases, … CAPTCHAs fail to properly recognize users with disabilities as human.
Why do we still use something that fails?
To understand why CAPTCHAs are such an accessibility issue, let’s look at the rule they fall under: Success Criterion 1.1.1 of the Web Content Accessibility Guidelines. This very first accessibility criterion states that any non-text content needs a text alternative that “serves the equivalent purpose.” This is why you need alt text for your images, for example.
An exception to this rule is allowed for CAPTCHAs whose purpose “is to confirm that content is being accessed by a person rather than a computer.” This is because the nature of the test must avoid text to be successful. Providing a text alternative would make them understandable to computers and, therefore, defeat their purpose.
In these cases, text alternatives need to be provided that “identify and describe the purpose of the non-text content.” In addition, alternate CAPTCHAs using different output modes for different senses—like audio and visual—should be provided. This is nice in theory and an improvement over providing no alternatives. But, as the Stanford study shows, the alternatives are hard for humans. Furthermore, as computers and bots get increasingly better at passing these tests, the tests have been growing more difficult. As the recent study from Google ironically highlights, humans are getting worse at passing CAPTCHAs while bots are getting better.
Google has taken some steps to curtail this trend by changing their reCAPTCHA algorithm to detect human users and provide easier tests to them, admitting that:
…today the distorted letters serve less as a test of humanity and more as a medium of engagement to elicit a broad range of cues that characterize humans and bots.
Interestingly, the “easier” tests served to humans involve numeric CAPTCHAs, which Google claims are “significantly easier to solve than those containing arbitrary text and [humans] achieved nearly perfect pass rates on them.” However, an easier CAPTCHA will also be easier for a bot to solve. In truth, the real CAPTCHA has become reCAPTCHA’s internal algorithm, which decides what type of CAPTCHA you will receive. In this case, is the user-input CAPTCHA even necessary any longer?
On top of all that, even successful CAPTCHAs make for a poor user experience. Most of us are familiar with a squinting, eye-crossing CAPTCHA standing between us and the content we wanted or needed. “Before I let you in to do the thing you came to my site to do, let me test your eyesight” shouldn’t be the message you want to send—unless you’re an ophthalmologist. Depending on the security needs of your site, there may be better options than traditional image CAPTCHAs. Let’s consider them.
Logic puzzles. This method simplifies the CAPTCHA so that the test is text-based (not an image) but requires linguistic or cognitive ability beyond what a typical bot has. Simple mathematical word problems, trivia questions, or prompts that ask the user to do something specific are common ways of doing this.
Problems: Users with cognitive disabilities may still not be able to pass the test. In addition, since more complex questions can elicit more varied answers, the test may need to understand and parse free-form text. You’ll also need to maintain enough questions, or rotate them programmatically, to keep spiders from capturing all the answers and defeating the test. However, this is a problem for the site owner, not the user. If security is not a very high concern, this option can be preferable.
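The rotation-and-checking logic can be quite small. Here’s a minimal sketch in Python; the question bank and accepted answers are made up for illustration, and a real site would generate or rotate questions programmatically so scrapers can’t harvest them all:

```python
import random

# A hypothetical question bank mapping each puzzle to its accepted answers.
# Real deployments need many more questions, rotated regularly.
QUESTIONS = {
    "What is two plus three, written as a digit or a word?": {"5", "five"},
    "Which is larger, an ant or an elephant?": {"elephant", "an elephant"},
}

def pick_question():
    """Choose a random puzzle to present alongside the form."""
    return random.choice(list(QUESTIONS))

def check_answer(question, answer):
    """Accept any known answer, ignoring case and surrounding whitespace,
    so minor variations in free-form input still pass."""
    normalized = answer.strip().lower()
    return normalized in QUESTIONS.get(question, set())
```

Normalizing the input is the key accessibility concession here: the stricter the string match, the more legitimate humans you reject.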
Honeypot CAPTCHAs. This has been a very popular test, capitalizing on the fact that spam bots tend to ignore CSS. If you use CSS to “hide” a form field, a bot won’t know the field isn’t supposed to be visible and will fill it out. When the form is submitted, you can easily filter out the bot submissions by checking which ones filled in that field.
Problems: Bots are getting smarter. While it used to be enough to simply hide the field with the CSS display property, some bots have been trained to detect such CSS rules. Newer techniques suggest positioning the form field off-screen or styling it to be unreadable. Even so, for accessibility you’ll still need to label the field for screen readers, which may tip off smarter bots.
As an alternative, some advocate leaving the honeypots displayed but giving instructions that people should not enter anything into them, which in a sense combines a logic puzzle and a honeypot. This is actually preferred from an accessibility standpoint. The problem will be if the bots are smart enough to understand the command. Again, if security is not a huge issue for your site, this could be a good route to take.
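The server-side half of either honeypot variant is trivial: reject any submission where the trap field contains data. A sketch in Python, assuming a hypothetical field named “website” (any field name a bot is likely to auto-fill works):

```python
def is_probable_bot(form_data, honeypot_field="website"):
    """Flag a submission as a probable bot if the honeypot field was
    filled in. Humans either never see the field (CSS-hidden variant)
    or are instructed to leave it blank (visible variant)."""
    return bool(form_data.get(honeypot_field, "").strip())

def filter_submissions(submissions, honeypot_field="website"):
    """Keep only submissions that left the honeypot empty."""
    return [s for s in submissions if not is_probable_bot(s, honeypot_field)]
```

Note that the check is entirely invisible to legitimate users: no puzzle, no typing, no delay.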
Spam filtering. This is one form of a non-interactive checkpoint. Instead of checking the content before submission, all submissions are accepted and filtered out later on the server using spam-filtering technologies. The W3C conditionally endorsed this in their report.
While such systems may experience false negatives from time to time, properly-tuned systems are as effective as a CAPTCHA approach, while also removing the added cognitive burden on the user.
Problems: Depending on the size of the site and the quantity of submissions, this may simply be untenable. If you have a small site, however, it may be a manageable option. It is also a good idea to employ spam filtering as a backup to another test—which may be enough to reduce the submissions to a reasonable level.
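Production systems use trained statistical filters, but the accept-then-filter idea can be illustrated with a deliberately naive keyword scorer. Everything below—the marker phrases, the threshold—is illustrative, not a tuned configuration:

```python
# Illustrative spam markers only; real filters learn these statistically.
SPAM_MARKERS = ("free money", "click here", "limited offer", "http://")

def spam_score(text):
    """Count how many known spam markers appear in the text."""
    lowered = text.lower()
    return sum(marker in lowered for marker in SPAM_MARKERS)

def filter_submissions(submissions, threshold=2):
    """Accept everything up front; hold suspicious entries for review
    rather than making the user solve a puzzle at submission time."""
    accepted, held = [], []
    for text in submissions:
        (held if spam_score(text) >= threshold else accepted).append(text)
    return accepted, held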
Heuristic checks. Again, this is a non-interactive checkpoint. The idea is that you can sometimes detect the presence of a bot from metrics of its visit: the volume of data requested, the sequence of pages visited, the IP address, and so on.
Problems: Bot technology is advancing to the point where detecting bot behavior is becoming increasingly difficult. You’ll need someone well-versed in security and bot behavior to keep such checks effective. However, this seems to be the direction reCAPTCHA is heading with its latest algorithm. If Google offered the algorithm without requiring the user-input test, this could become an easy alternative.
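One of the simplest heuristics is request rate: humans rarely fetch dozens of pages in a few seconds from one address. Here is a sliding-window sketch in Python; the thresholds are illustrative, and a real system would combine several signals, not just rate:

```python
import time
from collections import defaultdict, deque

class RequestMonitor:
    """Flag clients whose request rate exceeds a human-plausible
    threshold within a sliding time window. Thresholds here are
    illustrative, not tuned values."""

    def __init__(self, max_requests=30, window_seconds=10.0):
        self.max_requests = max_requests
        self.window = window_seconds
        self.history = defaultdict(deque)  # ip -> recent request times

    def is_suspicious(self, ip, now=None):
        """Record a request from ip and report whether its recent
        rate looks automated."""
        now = time.monotonic() if now is None else now
        hits = self.history[ip]
        hits.append(now)
        # Drop timestamps that have aged out of the window.
        while hits and now - hits[0] > self.window:
            hits.popleft()
        return len(hits) > self.max_requests
```

A flagged visitor needn’t be blocked outright; this is exactly the point where a site could fall back to a secondary test instead of showing one to everybody.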
Off-platform verification. This version requires the user to enter information that can be verified off the site—for example, a phone number that can receive an SMS message.
Problems: Though this can create a barrier for users without access to the alternate technology, it does provide security and protection from exploitation. The W3C states, “It’s not feasible, for example, for someone to use thousands of phones to farm account keys daily, then exchange them for new phones when the service refuses to send more keys.” On the other hand, it can be a lot to ask of your users to verify their identity on a different platform. This works best for large sites that need to verify an account once before repeated use—for example, a social media site. Otherwise, repeated verification could drive your users away.
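The verify-once flow is a one-time code with an expiry. A minimal sketch in Python—actually delivering the code through an SMS gateway is out of scope here, and the five-minute lifetime is an assumption, not a standard:

```python
import hmac
import secrets
import time

class PhoneVerifier:
    """Issue a short one-time code for a phone number and verify it
    within a time limit. Code delivery (the SMS gateway) is omitted."""

    def __init__(self, ttl_seconds=300):
        self.ttl = ttl_seconds
        self.pending = {}  # phone -> (code, issued_at)

    def issue_code(self, phone):
        code = f"{secrets.randbelow(1_000_000):06d}"
        self.pending[phone] = (code, time.time())
        return code  # in production, hand this to the SMS gateway instead

    def verify(self, phone, code):
        """Each code is single-use: it is removed on the first attempt,
        successful or not, so it cannot be brute-forced by retrying."""
        entry = self.pending.pop(phone, None)
        if entry is None:
            return False
        expected, issued_at = entry
        if time.time() - issued_at > self.ttl:
            return False
        # Constant-time comparison avoids leaking digits via timing.
        return hmac.compare_digest(expected, code)
```

Because the check happens once per account rather than once per action, the accessibility cost is paid a single time—which is why this suits sign-up flows better than comment forms.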
Stop using image CAPTCHAs
If your site doesn’t need CAPTCHAs, don’t use them. They’re bad for user experience, bad for accessibility, and increasingly bad for security. The W3C concludes:
An explicitly inaccessible access control mechanism should not be promoted as a solution, especially when other systems exist that are not only more accessible, but may be more effective, as well. It is strongly recommended that smaller sites adopt spam filtering and/or heuristic checks in place of CAPTCHA. [emphasis added]
The latest news from Google underlines this fact. When a computer can pass a CAPTCHA better than a human, the test has failed. Instead we should heed the advice from the W3C and consider that “like seemingly every security system that has preceded it, this system can be defeated by those who benefit most from doing so.” If image CAPTCHAs aren’t defeated yet, they soon will be.