Troy Hunt, creator of “Have I Been Pwned”, gives a virtual keynote that explores how security threats are evolving - and what we need to be especially conscious of in the modern era.

In this keynote, you’ll learn:

- Real-world examples of both current and emerging threats
- How threats are evolving and where to put your focus
- How to stem the flow of data breaches and protect against malicious activity

and much more!
Troy Hunt: So, let's move on and talk a little bit about detection, because this is another interesting thing: we're seeing adversaries within environments, or data breaches having occurred, and then long periods of time passing before anyone realizes what's gone wrong. And I think probably one of the most canonical examples of a long lead time for detection is Sony Pictures.
So, if everyone remembers Sony Pictures, this was back in about 2014. Folks came into the office one day, sat down at their PC, and this is what appeared on the screen: ''Hacked by GOP'' - Guardians of Peace. Evidently not so peaceful. And then you can see a whole bunch of hyperlinks down at the bottom as well. And this was Sony's data. And the data that was leaked was massively extensive. The attackers claimed that they'd been in the network for a year and taken about 100 terabytes of data.
I've not seen anything to verify it was quite that long or quite that much, but what we do know is that there was a huge amount of data taken. So, things like unreleased films, sensitive internal emails, some of those emails caused a huge amount of embarrassment because they were disparaging towards Obama, which wasn't a great move. Also, things like employee data with social security numbers and they're kind of important in the U.S.
And one of the things that I find really fascinating about those three different classes of data - the unreleased films, the sensitive internal emails, and the employee data - is that it's not like these were all just sitting on a shared folder somewhere. They're not all in one location. These are the sorts of assets, particularly in a large organization like Sony Pictures, that would have been distributed into very, very different corners of the organization. So, it's come from all over the place. And someone's had enough time to go and retrieve very large amounts of data from different locations within the network, exfiltrate it, and then eventually upload it to those locations.
So, this was really devastating. And it's really interesting now to look at just how much stuff is exposed in organizations that causes things like this. So, I'll give you a bit of an example here. Varonis produced a report earlier this year, ''The 2018 Global Data Risk Report''. And they found that 21% of all folders in an organization are open to everyone. So, if you're in a corporate environment, just have a look around you - have a look at just how much stuff is open. I spent a lot of years in a corporate environment, and I would see this all the time: folders that were open to everyone. And why do people do it? Well, because it's easy. They're taking the shortcuts. Fifty-eight percent of companies have over 100,000 folders open to everyone. A hundred thousand folders that are open to everyone.
Now, obviously, these are large organizations, and of course the larger the organization, the harder it is to manage this sort of stuff as well. But that is just a staggeringly high number. So, I remember back in my corporate role - some of you know where that was - I would find these open folders. And I'd go to my leadership and say, ''Look, we've got a lot of open folders. We've got to stop doing this. This is going to work out badly.'' And the fix was always to secure the folder. And what this ultimately was, was always just treating the symptom. It's like, ''Hey, we found something. It's been open, let's close it.'' And I would drive and drive and drive to say, ''Look, there is an underlying root cause which is causing these folders to be opened in the first place.''
And what it boiled down to was a whole bunch of people having the ability to open them in the first place that shouldn't have. A whole bunch of people had server admin rights to places they shouldn't have. And those are harder problems to solve. But if your only means of detection is some bloke having a browse around the network in his spare time and finding too much stuff open, well, then that's probably not a good place to be in. So, we're seeing way too much stuff, way too open, for way too long.
So, time and time again in running ''Have I Been Pwned'', I find that I'm the vector by which organizations learn of a data breach. And this shouldn't be the way. Very often, this is very large amounts of data as well - many tens of gigabytes worth of data that someone has sent me. And I've got to go to the organization and say, ''Hey look, I've got your data. I think this is yours. You should do something with it.'' I'm in the middle of about half a dozen disclosures right now. And one of them is tens of gigabytes of log files. And those log files include things like emails. Some of them are disparaging. I'll leave it at that. I'm not quite sure how this will pan out yet. But Troy Hunt should not be your disclosure. This is not the way you want it to work.
So, these organizations really need to do a better job of detecting when data is flying out of their networks in abnormal ways. And if we go back and have a look at some of the really notable recent incidents, you can see just how much data we're talking about. LinkedIn is a good example. Often when I do talks, I talk about ''Have I Been Pwned'' and I'll ask the audience, ''So, who was in LinkedIn?'' And there are always plenty of hands, because there are 165 million records in there, including mine, unfortunately.
Now, the thing is, their data breach happened in 2012. And back in 2012, they did actually acknowledge it. They said, ''Look, we've had a cyber thing, we don't think it's too bad.'' I think at the time, they thought it might've been something like 5 million records - not too bad. And then four years passed. So, for four years, someone had all this data. Unsalted SHA-1 hashed passwords too, so pretty trivial to crack those.
In fact, I was speaking to someone at an event just yesterday in Sydney and they said, ''Look, they'd gone through and managed to crack about 98% of them.'' So, for all intents and purposes, that cryptographic storage was absolutely useless. So, four years between incident and detection. Dropbox is another popular one a lot of people have been in, including me. And again, the same sort of time frame: the incident happened in 2012, and it took four years before they realized what actually happened and just how bad it was.
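For anyone wondering why unsalted SHA-1 storage falls over so badly, the attack really is this simple: hash a wordlist once and look every leaked hash up in the result. Because there's no salt, one pass over the wordlist covers every account at once. A minimal illustrative sketch - the passwords and hashes here are made up for the example:

```python
import hashlib

def crack_sha1(leaked_hashes, wordlist):
    """Dictionary attack on unsalted SHA-1 password hashes.
    With no per-user salt, hashing each candidate once is enough
    to test it against every leaked account simultaneously."""
    lookup = {hashlib.sha1(w.encode()).hexdigest(): w for w in wordlist}
    return {h: lookup[h] for h in leaked_hashes if h in lookup}

# Hypothetical leaked hashes for illustration only.
leaked = [hashlib.sha1(b"linkedin123").hexdigest(),
          hashlib.sha1(b"Tr0ub4dor&3").hexdigest()]
cracked = crack_sha1(leaked, ["password", "linkedin123", "qwerty"])
```

Scale the wordlist up to a few billion common passwords plus GPU acceleration and you get the ~98% crack rate mentioned above; a salted, slow algorithm like bcrypt defeats exactly this precomputation trick.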
In fact, as I understand it - and bear with me here - the way the Dropbox data breach went down was Dropbox employees storing a backup of Dropbox data in their Dropbox, and then their Dropbox got broken into. It's all very meta. But apparently, that was what happened. And four years passed before we learned about the incident. Another one - also one that I was in. This is not an intentional thing, I've just been in a lot of data breaches.
Disqus. So, someone reached out to me last year and said, ''Look, I've got the Disqus data. There's about 18 million records in here.'' And I had a look at it and it looked very legitimate. And then I found my own data. And incidentally, finding your own data in a data breach makes verification a lot easier.
Actually, my number one blog post ever is titled ''The Dropbox hack is real.'' And it was number one, I think, because I managed to get verification out there very early. And the way I verified it is that I had a 1Password-generated password - just 40 or 50 crazy random characters. And there was a bcrypt hash in the database. And when I passed in that crazy random string of a password, it matched. There we go. So, good.
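The verification trick generalizes: given a leaked salted hash, you re-derive the hash from a candidate password and compare. Dropbox's leaked hashes were bcrypt; the sketch below substitutes the standard library's PBKDF2 purely to stay dependency-free, but the derive-and-compare pattern is identical.

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, rounds=100_000):
    """Derive a salted password hash. PBKDF2-HMAC-SHA256 is used
    here as a stdlib stand-in; Dropbox's leaked hashes were bcrypt,
    but the verification pattern is the same."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, rounds)
    return salt, digest

def verify_password(candidate, salt, stored_digest, rounds=100_000):
    """Re-derive with the stored salt and compare."""
    _, digest = hash_password(candidate, salt, rounds)
    # Constant-time comparison avoids leaking a timing side channel.
    return hmac.compare_digest(digest, stored_digest)
```

A long random password matching its stored hash is strong evidence the dump is genuine, because the odds of a collision on 40-50 random characters are negligible.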
So, Disqus looked legitimate, and I had to reach out to them. And that was the first they knew of it. They said, ''Look, you know, we weren't aware of any incident, certainly not an incident dating back three years.'' And they verified it, and then had to go through the disclosure process. And again, for these organizations - for your organization - you really don't want to get emails from me. It's not a good day, usually.
Imgur was the last one. That was also last year, slightly after Disqus, and a very, very similar sort of time frame. Now, fortunately, there were only 1.7 million records. And I think it was only that small because the data dated back to a point pretty early in the service's life. So, they managed to sort of dodge a bit of a bullet. But, you know, even still, four years passed from almost 2 million records being breached to when they actually realized it.
So, clearly, we've got a problem with detection. And I think that's really worth everyone thinking about. If you did have malicious activity happening within your internal network or within your website, would you actually be able to identify anomalous behavior? Or is the first you're going to know about it when you get an email from me?
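What might that detection look like in its very simplest form? One naive approach is to baseline outbound data volume and alert on large deviations. Real products use far richer signals (destinations, protocols, per-user behavior), but this deliberately simple sketch, with made-up numbers, shows the shape of the idea:

```python
from statistics import mean, stdev

def flag_anomalous_egress(daily_gb, threshold_sigmas=3.0, baseline_days=30):
    """Flag days whose outbound volume sits more than
    `threshold_sigmas` standard deviations above the trailing
    baseline. A deliberately naive sketch of exfiltration
    detection, not a production monitor."""
    alerts = []
    for i in range(baseline_days, len(daily_gb)):
        baseline = daily_gb[i - baseline_days:i]
        mu, sigma = mean(baseline), stdev(baseline)
        # Floor sigma so a perfectly flat baseline still
        # yields a usable threshold.
        if daily_gb[i] > mu + threshold_sigmas * max(sigma, 0.1):
            alerts.append(i)
    return alerts
```

Even something this crude would have screamed on a day when tens of gigabytes left the network; the hard part in practice is keeping the false-positive rate low enough that people don't ignore the alerts.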
So, moving on, the money pit is an interesting one. Now, this is a little bit delicate, because there are obviously a lot of companies out there selling a lot of security things. And the trouble that organizations have today is that they are just absolutely bombarded by messaging.
If any of you have been to any of the big security shows, particularly something like RSA in San Francisco, it's just absolute bedlam with security companies everywhere selling cyber things. And it's very, very hard. In fact, I'm very sympathetic to organizations who are trying to make decisions about, how are we going to protect our company? Because everywhere they look there is a cyber-something. And I'll give you a few examples of this. There are cyber enablement services. You can go and buy cyber-enablement. There are cyber innovation services. That's also a thing here. You can go and buy cyber innovation services. There are even cyber matrix services. You can buy into the cyber matrix. Not quite sure what it is, but it is out there.
And just to make the point: these are actually all genuine services that are out there. Have a Google for them. There are 27,000 cyber enablement results out there, fifty-two thousand for cyber innovation, and if we go all the way down to matrix, there are going on 44,000 cyber matrix results. And you might be looking at this going, where on earth do they get these terms from? Is this something they just make up? It's not something I made up, but it is something you can make up, because every one of these came out of the bullshit generator.
There is literally a website - you can see the URL up there on the top right. And I know that everyone now wants to go there, because it's actually really cool. So, you go there and you can make bullshit. What it does is combine a verb, an adjective, and a noun. And all I did is go and take a bunch of those and add them after ''cyber'', and we got the results we saw before.
So, that's actually kind of cool. You just go through and make new terms: repurpose interactive readiness - I can barely even say that one - or streamline next-generation functionalities. This is a real service. Give this to your marketing people; they will love it. It will drive you nuts, but they'll love it. And this was meant to be a little bit tongue in cheek, but the very fact that I could go here and generate terms matching actual things that people are selling sort of demonstrates the point of how difficult it is for those who actually have to make decisions about where they spend their cyber dollar.
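The generator's recipe - a verb, an adjective, and a noun, with ''cyber'' bolted on the front - is trivial to reproduce. Here's a toy version with a tiny made-up vocabulary (the real site's word lists are much larger):

```python
import random

# Small sample word lists in the spirit of the generator described
# in the talk; these specific lists are invented for illustration.
VERBS = ["enable", "innovate", "streamline", "repurpose", "leverage"]
ADJECTIVES = ["interactive", "next-generation", "scalable", "holistic"]
NOUNS = ["matrix", "readiness", "functionalities", "synergies"]

def cyber_buzzword(rng=random):
    """Combine a random verb, adjective, and noun, then prefix
    'cyber' - the recipe the talk describes."""
    return "cyber " + " ".join(
        rng.choice(words) for words in (VERBS, ADJECTIVES, NOUNS)
    )
```

That a few random word lists reproduce real vendors' marketing copy is exactly the point: the terminology carries so little information that buyers can't tell products apart by it.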
So, moving on, let's wrap up a few takeaways from what we've just looked at, and then we'll go through and do some questions. Thinking back to the conventional risks: we still have the same fundamental underlying problems today as we did many, many years ago, and we've got a whole bunch of new ones as well. Particularly thinking about conventional risk, things like human risks are still massive. We've really not put much of a dent in phishing attacks. A great example: we've still got this conventional vulnerability, which is the organic matter sitting at the keyboard, and we haven't been able to solve it yet. Then there's the monetization side of things as well.
So, many of the old monetization strategies still apply today. They've just been streamlined because we've got cryptocurrency and email and internet, which we didn't have when these things started out. And of course, monetization also goes all the way through to the organizations that are...I was going to say defending against these attacks. I'm not sure if that's a fair representation of professional data recovery, but certainly playing in that ecosystem.
The supply chain bit I think is really fascinating. And the bit that we looked at was really just this sort of embedding of external services. It doesn't touch on all the libraries that we're dependent on or all the other things that go into modern-day software. But this is becoming a problem. And that's before we even get into things like the hardware supply chain. So, where does your hardware come from? Do you trust that party?
And there are certainly some very interesting things going on at the moment that cast some really massive doubts about where we can trust our equipment to come from. So, have a think about all the different bits and pieces that go into modern-day applications, and indeed into physical infrastructure as well. On the detection side of things, I rhetorically posed the question and said, ''Look, how well equipped are you to detect if there are large amounts of data being exfiltrated from your network or from your website?'' And in fairness, this is a nontrivial problem as well. This is not an easy thing, but it's an important thing. Because again, as I said a couple of times, you really don't want to be getting emails from me. You especially don't want to see a tweet from me saying, ''Do you have a security contact at your company?''
This is not the way you want your detection to work. Much better to detect it quietly and try to stop it before it happens in the first place. And finally, that piece on the money pit. Again, I have a huge amount of sympathy for organizations that are having to make decisions today about where they spend their money, particularly when there are a bunch of infosec companies out there claiming they will solve all your problems with this one shiny thing. Because of course, the one shiny thing is very attractive to the people who hold the purse strings in a lot of organizations - people who very frequently aren't the technical folks but are wowed by flashy presentations.
And I just had a flashback to my corporate life for a moment there. So, those are the five takeaways from the talk. I hope that, if nothing else, they give you some food for thought about what's going on with your applications and in your environment today. ...