Bogus Claim: Google Doc Phishing Worm Student Project

According to internet sources, Eugene Pupov is not a student at Coventry University.

Since the campaign’s recent widespread launch, security experts and internet sleuths have been scouring the internet to discover the actor responsible for yesterday’s “Google Doc” phishing worm. As parties continued their investigations into the phishing scam, the name “Eugene Pupov,” which may be tied to this campaign, has consistently popped up across various blogs.

A blog post published yesterday by endpoint security vendor Sophos featured an interesting screenshot containing a string of tweets from the @EugenePupov Twitter handle claiming the Google Docs phishing campaign was not a scam, but rather a Coventry University graduate student’s final project gone awry.

Source: Sophos News. https://nakedsecurity.sophos.com/2017/05/04/student-claims-google-docs-blast-was-a-test-not-a-phishing-attempt/

Several folks on Twitter, including Twitter-verified user Henry Williams (@Digitalhen), have pointed out a serious flaw in the @EugenePupov profile.

Source: Twitter, Inc. https://twitter.com/digitalhen/status/860006167715643392

This Twitter account, which fraudulently used a profile image portraying molecular biologist Danil Vladimirovich Pupov from the Institute of Molecular Genetics at the Russian Academy of Sciences, has since been deactivated.

Coventry University’s communications team quickly responded on social media denying all claims that anyone named Eugene Pupov is a current or former student.

Source: Twitter, Inc. https://twitter.com/CoventryUniNews/status/860120215216148481

Something clearly is “phishy” about this situation.

Despite the university’s recent announcement discrediting claims of enrollment for a Eugene Pupov, I would like to explore the hypothetical theory that yesterday’s campaign was the result of a student phishing research project that went terribly viral. Our PhishMe Intelligence team identified and obtained the campaign’s source code, and the most notable aspect of this phishing campaign was its uncanny ability to self-replicate and spread. From our vantage point, there is no outward evidence indicating data was stolen or manipulated as previously alleged.

The list of domains created for this alleged “student demonstration” stinks like rotten phish.

googledocs[.]gdocs[.]download

googledocs[.]docscloud[.]download

googledocs[.]gdocs[.]win

googledocs[.]gdocs[.]pro

googledocs[.]g-2Dcloud[.]win

googledocs[.]g-2Ddocs[.]win

googledocs[.]g-2Dcloud[.]pro

googledocs[.]g-2Ddocs[.]pro

googledocs[.]docscloud[.]win

As a career security researcher and current leader of phishing intelligence research teams, I find this list of domains concerning. Typically, when a researcher creates proof-of-concept code for a white paper or presentation, the naming conventions make the URLs’ malicious or fraudulent nature obvious for educational purposes, for example:

  • “foo-example.com”
  • “evil-mitm-site.com”
  • “hacker.foo.example.com”

If the party responsible intended to create educational materials that had any potential to unintentionally mislead a victim, they would typically create one, possibly two, example domains to help avoid such a scenario. A similar case is the punycode phishing sample recently covered in WIRED, where the researcher created a single punycode example domain.
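To make the punycode point concrete, here is a small illustration of my own (not code from the WIRED piece or from this campaign, and the label below is hypothetical): a look-alike label containing a single Cyrillic character encodes to a visibly different ASCII “xn--” form.

```python
# Illustration only: how an internationalized look-alike label maps to its
# ASCII-compatible ("xn--") punycode form. The label below is hypothetical.
spoof_label = "ex\u0430mple"  # "example" with Cyrillic 'а' (U+0430) replacing Latin 'a'

# Python's built-in "idna" codec (IDNA 2003) produces the xn-- form directly.
ace_form = spoof_label.encode("idna").decode("ascii")

print(spoof_label)               # renders like "example" in most fonts
print(spoof_label == "example")  # False: visually similar, but a different string
print(ace_form)                  # xn--exmple-4nf
```

Browsers that display the xn-- form, or that apply mixed-script checks, blunt this technique, which is part of why a researcher only needs one such domain to make the point.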

What’s most concerning here is the number of Google Docs look-alike domains. A legitimate security researcher following best practices would not typically register nine domains to illustrate a point or educate on a threat vector. This behavior pattern is more commonly tied to malicious actors with genuinely nefarious motives.
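For illustration only, here is a minimal sketch (my own, not PhishMe detection logic) of the kind of trivial check that flags every domain on the list above, because each one embeds the “googledocs” brand string as a subdomain label of a registered domain Google does not own:

```python
# Minimal sketch (assumption: my own illustration, not PhishMe tooling).
# Flag hostnames that embed a well-known brand as a subdomain label of a
# registered domain the brand owner does not control.

BRAND_LABELS = {"googledocs", "google", "gdocs"}
LEGITIMATE_REGISTERED_DOMAINS = {"google.com", "googleusercontent.com"}

def looks_like_brand_spoof(hostname: str) -> bool:
    """True if a brand name appears as a subdomain label while the registered
    domain (naively approximated as the last two labels) is not legitimate."""
    labels = hostname.lower().strip(".").split(".")
    registered = ".".join(labels[-2:])  # naive eTLD+1 approximation
    return (
        any(label in BRAND_LABELS for label in labels[:-2])
        and registered not in LEGITIMATE_REGISTERED_DOMAINS
    )

# A few of the defanged domains from the list above; undo the [.] defanging first.
suspect_domains = [
    "googledocs[.]gdocs[.]download",
    "googledocs[.]docscloud[.]download",
    "googledocs[.]gdocs[.]win",
    "googledocs[.]docscloud[.]win",
]

for d in suspect_domains:
    hostname = d.replace("[.]", ".")
    verdict = "suspicious" if looks_like_brand_spoof(hostname) else "ok"
    print(hostname, "->", verdict)
```

The point is not the code itself but the asymmetry it highlights: a researcher needs one illustrative domain, while nine brand-embedding registrations look like infrastructure built to deceive at scale.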

It may be some time before the phishing worm author’s true motives are revealed. However, we are inclined to believe there is a very good chance that malicious intent was behind this campaign, and that its execution quickly snowballed beyond the author’s desired scope.

Awareness isn’t the goal, it’s just the beginning

When people refer to PhishMe as “the awareness company,” we smile and nod. I want to correct them, but the label ‘security awareness’ is comfortable and relatable. One of the activities that organizations commonly believe will help reduce risk is mandatory security awareness computer-based training (CBT) lessons. The hope is that if we enroll our humans in online courses about how the bad guys hack us, they will walk away with a wealth of newfound awareness and avoid being victimized. (Try to visualize how far in the back of my head my eyes are rolling…)

VIDEO UPDATE: Wire Fraud Phisher attempts to phish PhishMe, instead gets phished by PhishMe

(VIDEO UPDATE LINK: Defending Against Phishing Attacks: Case Studies and Human Defenses by Jim Hansen
• A human-centric method of defense
• Attack case studies & attacker technique analysis
• Proactive simulation methods: educating workforces & detecting/thwarting attacks)

(^ say that title ten times fast)

Every year, PhishMe Simulator sends millions of phishing emails to its 500+ enterprise customers’ employees worldwide. PhishMe is hands down the most robust and sophisticated phishing platform in existence. To say that we are a little obsessive about phishing is a bit of an understatement. In fact, we are sitting on innovations in phishing that the bad guys have yet to figure out.

The difference between PhishMe’s emails and the bad guys’ is that ours are carefully crafted to deliver a memorable experience. Our experiences are masterfully designed to change human behavior and help people avoid phishing. So what happens when one of our own employees is on the receiving end of a wire fraud phish? Read on…

Forget About IOCs… Start Thinking About IOPs!

For those who may have lost track of time, it’s 2015, and phishing is still a thing. Hackers are breaking into networks, stealing millions of dollars, and the current state of the Internet is pretty grim.

We are surrounded by large-scale attacks, and as incident responders we are often overwhelmed, which creates the perception that the attackers are one step ahead of us. This is how most folks see attackers: as supervillains who know only evil, breathe evil, and only do new evil things to trump the last evil thing.

This perception means we receive lots of questions about the latest attack methods. Portraying our adversaries as extremely sophisticated, powerful foes makes for a juicy narrative, but the reality is that attackers are not as advanced as they are made out to be.

Surfing the Dark Web: How Attackers Piece Together Partial Data

The recent CareFirst breach is just the latest in a rash of large-scale healthcare breaches, but the prevailing notion in its aftermath is that it isn’t as severe as the Anthem or Premera breaches that preceded it. The thinking is that the victims of this breach dodged a bullet here, since attackers only accessed personal information such as member names and email addresses, not more sensitive data like medical records, Social Security numbers, and passwords. However, attackers may still be able to use this partial information in a variety of ways, and a partial breach should not be dismissed as trivial.
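As a purely hypothetical illustration (the data, field names, and email address below are made up, and this is not a description of any specific actor’s tooling), even a dump limited to names and email addresses becomes far more dangerous once it is joined against other leaked datasets on a shared key such as the email address:

```python
# Hypothetical illustration with made-up data: joining two "breach" datasets
# on a shared email key. Neither set is very sensitive alone, but the merged
# record is far more useful for crafting a targeted phishing email.

healthcare_dump = {
    "jane.doe@example.com": {"name": "Jane Doe", "insurer": "ExampleCare"},
}

older_forum_dump = {
    "jane.doe@example.com": {"employer": "Example Corp", "password_hint": "dog's name"},
}

# Merge the datasets on the email address.
combined = {
    email: {**record, **older_forum_dump.get(email, {})}
    for email, record in healthcare_dump.items()
}

print(combined["jane.doe@example.com"])
# A single enriched profile like this is enough to craft a convincing,
# personalized spear-phishing email.
```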

What we’re reading about the Chinese hacking charges

While the full implications of yesterday’s DoJ indictment of five Chinese hackers on cybercrime charges are yet to be seen, these charges have already succeeded in elevating cybercrime from a niche discussion to an important debate in society at large.

Furthermore, just as last year’s APT1 report did, the court documents provide a detailed glimpse at the tactics China is using to steal trade secrets from the world’s largest corporations (not surprisingly, phishing continues to be the favored attack method).

There has been a lot of media attention on this story, so we’ve put together a list of some of the most interesting content we’ve seen so far:

Dark Reading: ‘The New Normal’: US Charges Chinese Military Officers with Cyber Espionage

Pittsburgh Tribune-Review: Cybercrime case names U.S. Steel, Westinghouse, Alcoa as victims

The Wall Street Journal: Alleged Chinese Hacking: Alcoa Breach Relied on Simple Phishing Scam

The Los Angeles Times: Chinese suspects accused of using ‘spearphishing’ to access U.S. firms

Pittsburgh Business Times: Hackers posed as Surma on email to access U.S. Steel’s computers

Ars Technica: How China’s army hacked America

CNN: What we know about the Chinese army’s alleged cyber spying unit

The New York Times: For U.S. Companies That Challenge China, the Risk of Digital Reprisal

The Wall Street Journal: U.S. Tech Firms Could Feel Backlash in China After Hacking Indictments

The Washington Post: China denies U.S. cyberspying charges, claims it is the real ‘victim’

Mandiant: APT1: Exposing One of China’s Cyber Espionage Units

@higbee

Abusing Google Canary’s Origin Chip makes the URL completely disappear

Canary, the leading-edge v36 build of the Google Chrome browser, includes a new feature that attempts to make malicious websites easier to identify by burying the full URL and moving the domain out of the URI/URL address bar (known in Chrome as the “Omnibox”) into a location now known as the “Origin Chip.” In theory, this makes it easier for users to identify phishing sites, but we’ve discovered a major oversight that makes the reality much different.

Canary is still in beta, but a flaw that impacts the visibility of a URL is typically something we only see once every few years. We’ve discovered that if a URL is long enough, Canary will not display any domain or URL at all, instead showing an empty text box with the ghost text “Search Google or type URL.” While Canary is intended to help the user identify a link’s true destination, it will actually make it impossible for even the savviest users to evaluate the authenticity of a URL.
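As a rough sketch of how this could be abused (the hostname, path, and padding length below are hypothetical, and the actual cutoff depends on the width of the Canary window rather than a fixed character count), an attacker only needs to pad a deceptive URL until it is too long for the Origin Chip to render:

```python
# Hypothetical sketch: construct an over-long URL with a deceptive hostname.
# The host, path, and padding length are made up; the real threshold at which
# Canary's Origin Chip shows nothing depends on the browser window width.

deceptive_host = "accounts.google.com.login.example-phish.test"  # fabricated look-alike host
padding = "x" * 500                                              # arbitrary filler in the path

long_url = (
    f"https://{deceptive_host}/signin/{padding}"
    f"?continue=https://mail.google.com/"
)

print(len(long_url), "characters")  # far longer than the Origin Chip will display
print(long_url[:80] + "...")
```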