You’re infected! Ransomware with a twist

Your computer is infected! Pay $50 USD in order to remove the malware.

The FBI has been tracking you for visiting inappropriate sites. Please pay $250 to avoid higher court costs and appearances.

Ransomware is nothing new, and it comes in many shapes and sizes. For years, users have been visiting websites, only to be redirected to a ransomware site and scared into paying fees that amounted to nothing more than lost money. With the advent of CryptoLocker, however, attackers have felt a need to “give” back to their victims: once they infect a system and encrypt its data, they offer to decrypt that data for a small fee. How kind of them…

In recent months, attackers have started to change the game, delivering ransomware via phishing and using new malware that imitates CryptoLocker. I recently came across a phish carrying ransomware similar to CryptoLocker, but with some noteworthy differences.

What we’re reading about the Chinese hacking charges

While the implications of yesterday’s DoJ indictment of five Chinese hackers on charges of cyber crime have yet to be fully seen, these charges have already succeeded in elevating cyber crime from a niche discussion to an important debate in society at large.

Furthermore, just as last year’s APT1 report did, the court documents provide a detailed look at the tactics China is using to steal trade secrets from the world’s largest corporations (not surprisingly, phishing continues to be the favored attack method).

There has been a lot of media attention on this story, so we’ve put together a list of some of the most interesting content we’ve seen so far:

Dark Reading: ‘The New Normal’: US Charges Chinese Military Officers with Cyber Espionage

Pittsburgh Tribune-Review: Cybercrime case names U.S. Steel, Westinghouse, Alcoa as victims

The Wall Street Journal: Alleged Chinese Hacking: Alcoa Breach Relied on Simple Phishing Scam

The Los Angeles Times: Chinese suspects accused of using ‘spearphishing’ to access U.S. firms

Pittsburgh Business Times: Hackers posed as Surma on email to access U.S. Steel’s computers

Ars Technica: How China’s army hacked America

CNN: What we know about the Chinese army’s alleged cyber spying unit

The New York Times: For U.S. Companies That Challenge China, the Risk of Digital Reprisal

The Wall Street Journal: U.S. Tech Firms Could Feel Backlash in China After Hacking Indictments

The Washington Post: China denies U.S. cyberspying charges, claims it is the real ‘victim’

Mandiant: APT1: Exposing One of China’s Cyber Espionage Units

@higbee

There’s threat data and then there’s threat intelligence: do you know the difference?

The intelligence-led security approach is gaining traction in corporate security circles.  However, we’ve noticed that the term threat data is often confused with threat intelligence.

It’s an easy mistake to make, yet very important to distinguish between the two – one represents the “old way of doing things,” while the other brings about a new era in corporate security and brand protection. In this article, we’ll discuss threat intelligence and how it differs from threat data.

The Difference between Threat Intelligence and Threat Data

#1: Threat intelligence is verified. Threat data is just a list.

Modern threat intelligence has been verified, while traditional threat data is a list of random data points, such as IP addresses or URLs. Verification weeds out false positives and produces actionable intelligence that security professionals can rely on to protect their brands from cybercrime.

#2: Threat intelligence is actionable. Threat data is noisy.

Modern threat intelligence gives you enough information to take swift action to stop a threat. It lets you bring your network, your people, and the solution together. Rather than “educating” machines with threat data, threat intelligence relies on the analysis and action of your human capital to drive success.

Threat data, on the other hand, has a low signal-to-noise ratio. The majority of data found on traditional lists is meaningless, and it takes a large effort to sift through high volumes of data to find anything meaningful.

#3: Threat intelligence is reliable. Threat data is full of false positives.

Threat intelligence provides a clear picture of what is really going on because it has been filtered to remove information that is not directly relevant to protecting the brand. True threat intelligence has been analyzed, vetted, and tested: binaries clicked, URLs followed, threats detonated in sandbox environments. Traditional threat data, by contrast, is riddled with false positives: false URLs, dead URLs, and dead IP addresses.
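To make the contrast concrete, here is a minimal sketch of one small piece of that vetting: deduplicating a raw indicator feed and dropping URLs that no longer respond. The feed filename is hypothetical, and real verification (sandbox detonation, analyst review) goes well beyond this; fetching suspect URLs should only ever be done from an isolated environment.

```python
# Minimal sketch: turning a raw URL feed into a smaller, verified list.
# Illustration only; real vetting also involves sandbox detonation and
# analyst review. The feed filename and timeout are hypothetical, and
# suspect URLs should only be fetched from an isolated environment.
import urllib.error
import urllib.request


def load_raw_feed(path):
    """Read one indicator URL per line, dropping blanks and duplicates."""
    with open(path) as f:
        urls = {line.strip() for line in f if line.strip()}
    return sorted(urls)


def is_live(url, timeout=5):
    """Return True if the URL still answers at all; dead URLs are pure noise."""
    try:
        with urllib.request.urlopen(url, timeout=timeout):
            return True
    except (urllib.error.URLError, ValueError):
        return False


def verify_feed(path):
    """Keep only indicators that are still reachable."""
    return [u for u in load_raw_feed(path) if is_live(u)]


if __name__ == "__main__":
    verified = verify_feed("raw_threat_feed.txt")  # hypothetical file
    print(f"{len(verified)} verified indicators")
```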

If an organization is working with old-school threat data, it’s just importing whitelists, graylists, or blacklists, and its analysts will spend a good bit of their careers chasing ghosts, trying to find out what’s there and what’s not.

Threat data has a bad habit of constantly crying wolf. After a while, you stop believing the kid who cries wolf, and then you stop worrying about whether there’s a wolf at all. With actionable intelligence, however, you know where the wolf is every time.

SIEM: So Many Alerts, So Little Time

Software vendors participate in industry events for various reasons. We attend to share information as speakers and to learn as attendees. You’ll see us sponsor tote bags, snack stations, and even lunch. We are there to raise awareness of our solutions and generate leads for our sales team. We like scanning badges as much as you like getting schwag, but for vendors like us, the best use of our time in the booth is not spent waving a scanner.

It is “events season” in the security world, and PhishMe has been an active participant in events like RSA, FS-ISAC, and more. SecureWorld, hosted at the Cobb Galleria in Atlanta, offered a particularly enthusiastic crowd, well-attended sessions, and an expo floor filled with vendors interacting with conference attendees. We made some new friends with our neighbors from PhishLine and enjoyed meeting everyone who stopped by our booth to learn more about how we’re helping companies deal with the latest email-based phishing and malware attacks. It’s a great opportunity for us, as a company serving the InfoSec community, to learn more about the latest problems companies are trying to solve and to hear firsthand about the state of cybersecurity from those in the trenches.

All of this activity led to a successful industry event and a lot of fun. However, there is one key benefit of attending industry events like this that is rarely discussed. We were fortunate enough to experience it this year at SecureWorld: the conversations.

One particular conversation stands out from the rest this week. We met a gentleman whose main responsibility is his company’s Security Information and Event Management (SIEM) system. He has successfully worked with internal teams to integrate logs from their AV, DLP, IDS, and a few other appliances. After hearing so many stories about scaled-back SIEM implementations or completely stalled deployments resulting in expensive shelfware, I offered my congratulations and started asking about this significant achievement. I was eager to take notes! Everybody needs a win now and then, and we usually only hear the bad news. So, I was surprised and a little disheartened when his reply wasn’t about the success but rather the frustration of getting other teams to leverage the information coming out of the SIEM. At best, the response has been sluggish. Security teams are always busy, and automated ticketing systems can be overwhelming. But still, I have to wonder whether responding to tickets initiated by the SIEM is a higher priority at Target these days.

We can probably all agree that security alerts should be handled and followed up on. But “should” is not necessarily reality. In a recent article published on Dark Reading, Joshua Goldfarb argued that security professionals often experience alert fatigue and become desensitized to security alerts. The reason, argues Goldfarb, is that many organizations suffer from a low signal-to-noise ratio: a high volume of alerts, the majority of which are noise. He offers the recent breaches at Target and Neiman Marcus as examples of instances where alerts were issued but were not handled properly by internal security teams.

I also have to wonder whether better information about the day’s top threats could help elevate the important alerts and make sure critical issues are addressed quickly. Could threat intelligence be used by the SIEM to escalate specific tickets that would otherwise remain under the radar of a dedicated but stressed InfoSec team?
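As a thought experiment, here is a minimal sketch of what that escalation might look like: tickets whose indicators match a verified intelligence set get bumped to critical. The ticket fields, indicator values, and intel set are all hypothetical; a real integration would use the SIEM’s and the intelligence platform’s own APIs.

```python
# Sketch: escalating SIEM tickets whose indicators appear in a verified
# threat-intelligence set. Ticket fields and the intel set are hypothetical;
# a real deployment would pull both from the SIEM and the intel platform.
VERIFIED_INTEL = {
    "198.51.100.23",                # example C2 address (documentation range)
    "hxxp://login-verify.example",  # defanged phishing URL, illustrative only
}


def escalate(tickets, intel=VERIFIED_INTEL):
    """Bump priority on any ticket that references a verified indicator."""
    for ticket in tickets:
        if any(ind in intel for ind in ticket.get("indicators", [])):
            ticket["priority"] = "critical"
            ticket["reason"] = "matches verified threat intelligence"
    return tickets


tickets = [
    {"id": 1041, "indicators": ["10.0.0.8"], "priority": "low"},
    {"id": 1042, "indicators": ["198.51.100.23"], "priority": "low"},
]
print(escalate(tickets))  # ticket 1042 becomes critical
```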

Phishing Attacks Target Google Users with Weakness in Chrome: What You Need to Know

If your employees use Google Chrome or Mozilla Firefox, your network could be vulnerable to a unique phishing attack targeting the two most widely used browsers in the world. Several media outlets are covering an exploit of data uniform resource identifiers (URIs), a mechanism Google Chrome and other web browsers use to display data directly in the browser.

This attack, which is difficult to identify via traditional methods, allows cybercriminals to gain access to Google Play, Google+ and Google Drive. This means that any sensitive information stored within each of those areas is up for the taking. In the case of Google Play that means credit card information. In the case of Google Drive, that means a considerable amount of potentially highly sensitive data.

Other brands have also been spoofed recently using the same browser display vulnerability. On May 8, 2014, PhishMe’s phishing intelligence analysts noticed a quirk in Chrome. When viewing a spoofed eBay Canada login page in Chrome, the only text displayed in the browser address bar was the word “data:”, as shown in the image below:

That phishing attack used what is known as the Data URI scheme to encode the entire source code of the phishing page into the address bar itself. As can be seen in the next screenshot, however, Firefox displays the full Base64 encoding in the address bar, which a security-savvy user would be more likely to notice.
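For readers unfamiliar with the mechanism, the sketch below shows how an entire HTML page can be packed into a data: URI. The page content here is a harmless placeholder, not the actual spoofed eBay Canada page.

```python
# Sketch of the Data URI scheme described above: an entire HTML page is
# Base64-encoded and placed in the "URL" itself, so no remote address is
# ever shown. The page content here is a harmless placeholder.
import base64

html = "<html><body><form>Fake login form goes here</form></body></html>"
encoded = base64.b64encode(html.encode("utf-8")).decode("ascii")
data_uri = f"data:text/html;base64,{encoded}"

print(data_uri[:60] + "...")
# Firefox shows the full "data:text/html;base64,..." string; the Chrome
# behavior observed above truncated the display to just "data:".
```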

The second and third steps of the eBay Canada phishing attack were also carried out using the Data URI Scheme. As the victim was enticed to enter more of their personally identifying information, the attacker presented page after page of spoofed eBay pages, eventually collecting the victim’s eBay user ID, password, full name, address, ZIP code, mother’s maiden name, date of birth, credit card number, CVV code, and card expiration date.

The Google account phish in the news also uses the Data URI scheme. That attack was reportedly initiated via an email message in which the attackers posed as Google, using the subject line “data notice” or “new lockout notice.”

These phishing scams play on users’ fears that they are being targeted by cybercriminals, yet responding to those very messages hands their sensitive information to the attackers. The use of the Data URI scheme makes these phishing scams easy to identify, but only if users know what to look for.

In the case of the eBay Canada phishing attack, the word “data” may arouse suspicion, but would that suspicion be enough for the user to recognize that this was in fact a scam? Many employees may not even notice Base64 encoding displayed in the address bar. Unless employees are trained to recognize these signs of phishing, there is a high chance they will be fooled. That doesn’t just mean handing over their eBay credentials; many phishing attacks on businesses are conducted to obtain sensitive business login credentials.
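One way defenders might look for this indicator automatically is to flag any link in an email body that points at a data: URI. The sketch below is illustrative only; the regex and sample HTML are assumptions, not a description of any particular product’s detection logic.

```python
# Sketch: flagging data: URIs in the links of an HTML email body. The
# sample HTML and the detection rule are illustrative only.
import re

DATA_URI_LINK = re.compile(r'href\s*=\s*["\']\s*data:', re.IGNORECASE)


def has_suspicious_links(html_body):
    """Return True if any anchor in the body points at a data: URI."""
    return bool(DATA_URI_LINK.search(html_body))


sample = '<a href="data:text/html;base64,PGh0bWw+...">Verify your account</a>'
print(has_suspicious_links(sample))  # True
```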

Would your employees be able to identify phishing scams like these? Do you provide training to ensure that ALL of your employees are aware of these indicators of a phishing attack? Do you test that knowledge to see whether it has been taken on board and is being applied?

Abusing Google Canary’s Origin Chip makes the URL completely disappear

Canary, the leading-edge v36 build of the Google Chrome browser, includes a new feature that attempts to make malicious websites easier to identify by hiding the full URL and moving the domain from the address bar (known in Chrome as the “Omnibox”) into a new element called the “Origin Chip.” In theory, this makes it easier for users to identify phishing sites, but we’ve discovered a major oversight that makes the reality much different.

Canary is still in beta, but a flaw that impacts the visibility of a URL is something we typically see only once every few years. We’ve discovered that if a URL is long enough, Canary will not display any domain or URL at all, instead showing an empty text box with the ghost text “Search Google or type URL.” While the Origin Chip is intended to help the user identify a link’s true destination, this flaw makes it impossible for even the savviest users to evaluate the authenticity of a URL.
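To illustrate what “long enough” can mean in practice, the sketch below pads a URL with an oversized query string of the sort that overwhelms the Origin Chip’s display. The domain, parameter name, and padding length are illustrative assumptions; we are not stating the exact length at which Canary’s display breaks.

```python
# Sketch: building an excessively long URL of the sort that caused the
# Canary Origin Chip to display nothing at all. The domain, parameter
# name, and padding length are illustrative; the exact length at which
# the display breaks is not documented here.
from urllib.parse import urlencode

base = "http://phishing-site.example/login"
padding = {"session": "A" * 2000}   # arbitrary padding to inflate the length
long_url = f"{base}?{urlencode(padding)}"

print(len(long_url))          # well past the width of any address bar
print(long_url[:80] + "...")  # the visible prefix gives no hint of the rest
```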