Abusing Google Canary’s Origin Chip makes the URL completely disappear

Canary, the leading-edge release channel of the Google Chrome browser (currently at version 36), includes a new feature that attempts to make malicious websites easier to identify by hiding the full URL and moving the domain from the address bar (known in Chrome as the “Omnibox”) into a new element called the “Origin Chip”. In theory, this makes it easier for users to identify phishing sites, but we’ve discovered a major oversight that makes the reality much different.

Canary is still in beta, but a flaw that affects the visibility of a URL is something we typically see only once every few years. We’ve discovered that if a URL is long enough, Canary will not display any domain or URL at all, instead showing an empty text box with the ghost text “Search Google or type URL.” While the Origin Chip is intended to help users identify a link’s true destination, this flaw makes it impossible for even the savviest users to evaluate the authenticity of a URL.
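The behavior is easy to reproduce with a deliberately oversized link. Below is a minimal sketch of how such a URL could be generated for testing; the domain and the padding length are hypothetical placeholders, not values taken from the report above.

```python
# Sketch: build an overly long URL of the kind described above, suitable for
# testing how a browser's address UI (e.g. the Origin Chip) renders it.
# "example.com" and the 2000-character padding are illustrative assumptions.
def make_long_url(domain="example.com", path_length=2000):
    """Return an HTTPS URL whose path is padded out to `path_length` characters."""
    padding = "a" * path_length
    return f"https://{domain}/{padding}"

url = make_long_url()
print(len(url))  # far longer than any address bar can display
```

Pasting a URL like this into an affected build and observing whether the domain still appears is enough to confirm the issue.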

Education vs. Technology

Trusteer recently released a study containing the results of a spear phishing test against 100 LinkedIn users, which found a 68% failure rate. While 68% seems high, it is not an unusual number for a group that has received no prior education or training in how to spot phishing – or at least no training designed to be effective. We know this from having sent well over a million spear phishing emails to employees of corporations across multiple industry verticals. Trusteer, a company that specializes in information security software products, stated in this article that the only real solution is a technological one. We wholeheartedly disagree. These are numbers that we have seen time and again – numbers that we consistently reduce through education via periodic training exercises that immerse the recipient in the experience.

There are many characteristics of Trusteer’s test that would cause anyone with a basic understanding of testing methodologies and statistics to take notice. Firstly, the test was conducted with no real prior education given to the users; this would make a good baseline, but only if the same users were then trained and tested again later to measure the difference the training made. Trusteer did not do this. In fact, by their own admission, Trusteer hand-picked the recipients from a pool of friends and family. Their claim of vetting this list to ensure that it contained people who “it estimated to be fairly educated about security” must be taken with a grain of salt. Secondly, the test was conducted on a very small pool of people – we don’t believe the sample set is large enough or diverse enough to support a sweeping statement. While we can agree with their claim that social engineering makes it “easy to drive corporate users to fake websites that could potentially download malware onto their computer”, we take issue with the way they draw their conclusion, with their methodology, and with the claim that only a technological solution is the answer.

Social engineering is a human issue that evolves to work around technical controls. Convincing someone to click a link or download a piece of malware is just a twist on the same methods grifters and con men have used for hundreds of years. As long as people are unaware, there will always be someone to take advantage of them.

It is time we face the simple truth – there is no magic box that will solve spear phishing. We can’t continue to let end users believe that if something made it into their inbox, it must be OK. We need to proactively teach people to be suspicious.

Mac McCrory