CAPTCHA Transcends and Everybody Wins

The internet is going to be a lot less annoying soon. Headlines like ‘Google has finally killed the CAPTCHA’ give us a hint as to why. But these headlines don’t tell the full story — which is a shame, because the full story is kind of cool.

Yes, there is reason to celebrate. CAPTCHAs, those boxes that force you to prove you’re human by clicking three pictures of an umbrella or typing out grainy text, are going away. But CAPTCHA is not dead; it has simply evolved. The latest version is completely invisible, but very much still there.

CAPTCHA, an acronym for Completely Automated Public Turing test to tell Computers and Humans Apart, will finally realize its original design: a fully automated test that determines whether we’re human. CAPTCHA has learned from human behaviour so well that it no longer needs us to confirm our humanity; it assumes it. The latest version only serves the annoying quizzes to suspected bots.
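Google hasn’t said what the invisible test actually measures, but the behaviour described above implies a simple gate: score each interaction, wave through anything that looks human, and only challenge the rest. A rough sketch of that idea, with entirely hypothetical signals and thresholds (none of this is Google’s actual logic), might look like this:

```python
# A hypothetical sketch of the "challenge only suspected bots" idea.
# The risk signals and the 0.5 threshold are invented for illustration;
# Google has not published its actual signals or cutoffs.

def risk_score(interaction: dict) -> float:
    """Return a made-up bot-likelihood score: 0.0 = clearly human, 1.0 = clearly bot."""
    score = 0.0
    if interaction.get("mouse_movement_entropy", 1.0) < 0.1:
        score += 0.4  # unnaturally straight or absent cursor paths
    if interaction.get("seconds_on_page", 30) < 1:
        score += 0.4  # form submitted faster than a person could read it
    if interaction.get("known_bot_user_agent", False):
        score += 0.2  # user agent already associated with automation
    return min(score, 1.0)

def handle_submission(interaction: dict) -> str:
    if risk_score(interaction) < 0.5:
        return "accept"          # looks human: no puzzle is shown at all
    return "show_challenge"      # suspected bot: fall back to the old quiz

# Example: a plausible human visitor sails through without ever seeing a CAPTCHA.
print(handle_submission({"mouse_movement_entropy": 0.8, "seconds_on_page": 45}))  # accept
```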

Now that CAPTCHA detects human interaction automatically, the program built to protect against simulated human behavior is, in a way, closer to a perfect human simulation than the bots it was originally created to protect against, bots that are themselves trying to simulate a human undetectably.

See? Kind of cool.

But why does this matter? And how does it work?

Unfortunately, we can’t really know how the new CAPTCHA works. It doesn’t make sense for Google, or any company making widely used security software, to publicize how that software works. CAPTCHA exists because spam, viruses, and all the other ugly parts of the internet exist. Whether it’s preventing fake accounts from voting in online polls or blocking malware, there has been an incentive to filter unnatural website behaviour from genuine interactions for as long as the internet has been around.

All we know about this new phase of CAPTCHA is in this video. The latest version of reCAPTCHA, aptly named ‘Invisible reCAPTCHA’, uses ‘a combination of machine learning and advanced risk analysis that adapt to new and emerging threats’.

My personal theory is that this has something to do with a recent search-engine algorithm update, nicknamed Fred. It’s too coincidental that in the same week Google adjusts its criteria to penalize, or reduce the search authority of, blog-style sites with ‘low content value’, it also unveils a breakthrough in separating genuine human interest from robotic simulation. Fred hit content farms, the places where black-hat and unnatural linking techniques live. Until a few days ago, it was possible to manufacture conditions that search engines would mistake for actual user interest in a site, so that the site could eventually improve its position on results pages. As with the new CAPTCHA, end users never see search-engine algorithm updates when they happen; the results just change. If Google can distinguish between human and bot behavior in links or searches, it is a small leap for the company to extend the technology to cover general browsing.
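Speculation aside, the one piece of the system site owners do get to see is documented: the invisible widget hands the page a token, and the site’s server checks that token against Google’s siteverify endpoint before trusting the request. A minimal sketch of that server-side check (the secret key and token here are placeholders; the endpoint is the one Google’s reCAPTCHA documentation describes):

```python
# A minimal sketch of verifying an Invisible reCAPTCHA token server-side.
# The siteverify endpoint and parameters follow Google's public reCAPTCHA
# documentation; SECRET_KEY and the token are placeholders for real values.
from typing import Optional
import requests

SITEVERIFY_URL = "https://www.google.com/recaptcha/api/siteverify"
SECRET_KEY = "your-secret-key"  # issued when you register your site with reCAPTCHA

def looks_human(recaptcha_token: str, remote_ip: Optional[str] = None) -> bool:
    """Ask Google whether the token the widget produced came from a real person."""
    payload = {"secret": SECRET_KEY, "response": recaptcha_token}
    if remote_ip:
        payload["remoteip"] = remote_ip
    result = requests.post(SITEVERIFY_URL, data=payload, timeout=5).json()
    return result.get("success", False)
```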

Many have also speculated that this would not be possible without the piles of data Google has been mining through its other projects. That’s probably true, too. The company needs some kind of baseline for how people really behave online to pull something like this off. Of course, this development probably brings us closer to that inevitable robot uprising, but why not give Google the benefit of the doubt for once? Sure, it’s a little suspicious that the company blatantly repurposed the scanning technology it originally acquired to help digitize books into security and consumer research that borders on surveillance. But CAPTCHAs were also really irritating, and now they’re going away. Google keeps repeating that ‘what’s good for the internet is good for Google’. This time, I’m inclined to agree. Not having to enter CAPTCHAs will make browsing better, which will indirectly encourage people to use Google services more often. Internet security has supposedly improved. Everybody wins.