Apr 04, 2019

Vulnerability Epidemic in Financial Mobile Apps - Episode 2 [Video]

How and why apps were able to be compromised


So these things are scary. Maybe we can talk a little bit about what we could do to remediate these kinds of problems. Are there basic things that these app developers should be doing? And maybe you can talk a little bit more about the advanced measures that might protect against the types of attacks that you carried out.

I think that's the million-dollar question. In my former life, I did vulnerability research in products: identifying vulnerabilities, doing what's called responsible disclosure, and publishing advisories. I had actually spoken at Black Hat Briefings on hacking VPNs, and it was the first advisory on how to actually hack VPNs. So I have a lot of experience doing vulnerability research.

One of the things I was shocked to find was--well, let's ask the million-dollar question first. Why was I even able to decompile these apps in the first place?

OK, let's push the vulnerability findings aside. Fine, it's awful. But why was I even able to do that? Why was I able to comb the source code? Why did I even see SQL statements in the code? And here's another interesting thing: the one app that actually did secure itself and implement shielding, for some reason, didn't obfuscate the SQL statements. So all of the database information was there. SQLite seemed to be pretty prevalent across these apps, and none of it was obfuscated. It's almost like two completely different developers worked on it: one started on it, knew good security hygiene, and did it right, and then another came along who didn't, because every file written after that date stamp was all clear text. It was the strangest thing, and it was the same exact app.
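To make that concrete, here is a hypothetical Kotlin fragment, invented for illustration rather than drawn from any of the tested apps, of the kind of code a decompiler hands back when class names and SQL strings ship unobfuscated:

```kotlin
import android.database.Cursor
import android.database.sqlite.SQLiteDatabase

// Hypothetical reconstruction: when identifiers and SQL strings are left in
// the clear, decompiled output exposes the database schema and the intent
// of every query.
class AccountDao(private val db: SQLiteDatabase) {
    fun findAccount(userId: String): Cursor =
        db.rawQuery(
            "SELECT account_number, routing_number FROM accounts WHERE user_id = ?",
            arrayOf(userId)
        )
}
```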

I think there's often a culture conflict between good development practice and security. As you said, putting the vulnerabilities aside for a moment: even if you code without a vulnerability, the way that applications are compiled, linked, and produced leaves a lot of class information and function information in place. And if you have a good development team that has named things descriptively and intuitively, all of that information can propagate down to the app that's distributed to everybody's device. What you end up with is a roadmap of what's behind it. I mean, seriously, a 14-year-old picking their nose in their bedroom could have figured a lot of this stuff out.
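A baseline mitigation is to have the build strip and rename that metadata. As a minimal sketch, assuming a standard Android Gradle build, enabling R8 in the release build type removes much of that descriptive roadmap:

```kotlin
// build.gradle.kts (app module): R8 shrinks unused code and renames classes
// and members in release builds, so descriptive names don't ship to devices.
android {
    buildTypes {
        getByName("release") {
            isMinifyEnabled = true       // enable R8 shrinking and renaming
            isShrinkResources = true     // drop unused resources as well
            proguardFiles(
                getDefaultProguardFile("proguard-android-optimize.txt"),
                "proguard-rules.pro"
            )
        }
    }
}
```

Note that R8 renames identifiers but leaves string constants, including SQL statements, readable; hiding those takes a dedicated string-obfuscation or shielding layer.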

In all fairness, I had never played with these apps before. Before embarking on this research, I had never even downloaded them. Every single app had to be downloaded, except one P2P wire transfer app, which I have since deleted because of the findings.

The level of sophistication required is very remedial. And much to your point, one of them had actually named the directory allkeys. And when I went into that directory, it was a combined listing of multiple private keys--not one, but multiple. So there isn't even security through obscurity, where you'd at least name it something that doesn't make sense and put all the private keys there. It was like, here are the breadcrumbs to what you're looking for.
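For contrast, here is a minimal sketch of the standard AndroidKeyStore approach, where key material is generated inside the device's keystore and never has to ship in the package at all; the alias is a placeholder:

```kotlin
import android.security.keystore.KeyGenParameterSpec
import android.security.keystore.KeyProperties
import javax.crypto.KeyGenerator

// Generate an AES key inside the AndroidKeyStore. The key material never
// leaves the keystore, so nothing key-related has to be bundled in the APK.
fun createAppKey(alias: String) {
    val keyGen = KeyGenerator.getInstance(
        KeyProperties.KEY_ALGORITHM_AES, "AndroidKeyStore"
    )
    keyGen.init(
        KeyGenParameterSpec.Builder(
            alias,
            KeyProperties.PURPOSE_ENCRYPT or KeyProperties.PURPOSE_DECRYPT
        )
            .setBlockModes(KeyProperties.BLOCK_MODE_GCM)
            .setEncryptionPaddings(KeyProperties.ENCRYPTION_PADDING_NONE)
            .build()
    )
    keyGen.generateKey()
}
```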

Did you have to do anything special to the Android device? Did you have to root it or do anything like that?

No, this wasn't a rooted device. There was nothing special about it. It wasn't a Pwn Pad or anything like that. It was a standard LG Tab 4 or something like that, where I just basically used--was it APK Extractor?--and extracted the APKs off of the device, put them on Google Drive, pulled them off of Google Drive onto my machine, and eight and a half minutes later, on average, had everything I needed right in front of me.

So it sounds like there's a lot of--

And a cup of coffee.

So it sounds like a lot of this is, unfortunately, just basic good coding practices. But going beyond that, even if they had done some simple things, does it take more than good coding to defeat the kinds of things you were able to do in eight minutes?

Yeah. One of the things--and this part of the findings was out of scope for the research, so we're internally discussing releasing it in a follow-up report--the network interdiction side was almost as bad as the static code analysis side. On the layer 7 side, using Burp Suite to actually proxy the network traffic, there was an alarming amount of clear text being sent over HTTP instead of HTTPS.
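On the client side, one hedge against that kind of interception is to insist on HTTPS and pin the server certificate. A sketch using OkHttp; the hostname and pin below are placeholders, not values from the research:

```kotlin
import okhttp3.CertificatePinner
import okhttp3.OkHttpClient
import okhttp3.Request

// Placeholder host and pin: substitute the real API host and the SHA-256
// digest of its certificate's public key. A pinned connection routed through
// an interception proxy such as Burp fails the handshake instead of leaking
// credentials.
val client: OkHttpClient = OkHttpClient.Builder()
    .certificatePinner(
        CertificatePinner.Builder()
            .add("api.example-bank.com", "sha256/AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=")
            .build()
    )
    .build()

val request: Request = Request.Builder()
    .url("https://api.example-bank.com/v1/login") // HTTPS only, never http://
    .build()
```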

So Aaron, can you maybe highlight a couple of the other things? Based on the research, what are some of the things, beyond good coding, that we should do?

Well, I think first and foremost, as the developer of an application, you have to be aware of how much information and metadata is inside that app, and actually audit your own apps. If you don't believe me: APKs are zip files, and anybody can unzip a file on their computer. Download your own app, take a look at what's inside it, and search through some of the files. I guarantee you're going to find information you had no idea was being shipped to every one of your customers' devices. By and large, that sort of trivially accessible metadata is what starts the process--and I know, Alissa, you did this a number of times as a pen tester--of assessing the external-facing entities of an organization or a data center, enumerating the APIs and the ways in, so that you can begin to nibble around the edges, right?

Right, definitely.
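Here is a minimal sketch of that self-audit in Kotlin, treating the APK as the zip archive it is; the path and keyword list are illustrative:

```kotlin
import java.util.zip.ZipFile

// List APK entries whose names hint at sensitive material. An APK is just a
// zip archive, so this mirrors an attacker's first pass over your app.
fun auditApk(apkPath: String) {
    val redFlags = listOf("key", "secret", "password", "cert", ".pem", ".sql")
    ZipFile(apkPath).use { zip ->
        for (entry in zip.entries()) {
            if (redFlags.any { entry.name.contains(it, ignoreCase = true) }) {
                println("Review: ${entry.name}")
            }
        }
    }
}
```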

And so, two additional follow-ups. One of the things I had also done was play with it a little bit further; I didn't want to just stop at the passive analysis. And much to your point, you're not really recompiling it--you're almost just repackaging it. So what I did was set up a separate server, and we actually repackaged the apps. I would go in there, modify the URLs, and repackage them.

So the threat vector isn't just analyzing the source code and understanding the database schema from the SQL statements. It's also very easy for an attacker to repackage these apps--changing a URL to point at, say, a drive-by download site, or just redirecting that network traffic somewhere else--then put them on a third-party app store and distribute them.
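A common, though bypassable, countermeasure is for the app to verify its own signing certificate at runtime, since a repackaged APK has to be re-signed with the attacker's key. A sketch using the standard PackageManager API (API 28+); the expected digest is a placeholder:

```kotlin
import android.content.Context
import android.content.pm.PackageManager
import android.util.Base64
import java.security.MessageDigest

// Placeholder: the Base64 SHA-256 digest of the genuine release certificate.
const val EXPECTED_CERT_DIGEST = "REPLACE_WITH_RELEASE_CERT_DIGEST"

// Returns true if the APK is still signed with the developer's certificate.
// A repackaged copy must be re-signed, so its digest will not match.
fun signatureLooksGenuine(context: Context): Boolean {
    val info = context.packageManager.getPackageInfo(
        context.packageName, PackageManager.GET_SIGNING_CERTIFICATES
    )
    val signers = info.signingInfo?.apkContentsSigners ?: return false
    return signers.any { sig ->
        val digest = MessageDigest.getInstance("SHA-256").digest(sig.toByteArray())
        Base64.encodeToString(digest, Base64.NO_WRAP) == EXPECTED_CERT_DIGEST
    }
}
```

Because a determined attacker can patch a check like this out of the repackaged binary, this is exactly where dedicated anti-tampering and shielding layers add depth.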

And then one of the other things: I actually did talk to a developer at one of the companies, and to answer your question even further, it also takes awareness training. The issue here is not just a technology issue, but a human issue. It's really about teaching developers: hey, you need to be writing more secure code. When I was talking to this developer, I asked him, look, why are you sending this stuff over clear text? This is a very simple thing you should understand not to do. Now, granted, I don't know how many years he'd been a developer, whether he was a beginner or not, but he did inform me: look, we didn't see this as a potential attack vector, because somebody sniffing the cellular bands for traffic to grab credentials isn't exactly an everyday thing.

And I wonder--an ancillary question is, how many of these banks and FIs don't even know that they're passing this stuff in clear text?

And to follow up on what you were saying: why aren't you doing application penetration testing? Why is this getting published to these app marketplaces without getting tested? Let's say, for example, OK, fine, you didn't know; this was a developer who's had no secure coding training. Fine, you're let off the hook. But why did it not get an application pen test before it got published to the app store?

Rusty Carter

Rusty Carter is a security software executive with over 20 years of experience, and the current Vice President of Product Management at Arxan Technologies, an application security company that provides application shielding and protection against reverse engineering and tampering to the world's largest companies. Prior to Arxan, Mr. Carter led product management at Symantec, McAfee, and Pulse Secure (formerly Juniper), and was responsible for the introduction and growth of multiple new products and lines of business. Mr. Carter holds international patents in the fields of information security, AI / machine learning, telecommunications, mobile devices, and user interaction. His background includes system and software architecture and engineering, and he holds a bachelor's degree in Psychology from the University of Arizona.

The Vulnerability Epidemic in Financial Mobile Apps

Join the webinar with Aaron Lint, Arxan chief scientist, and Alissa Knight, Aite Group senior cybersecurity analyst, on Tuesday, April 23, 2019.