OWASP Top 10 – 2017 – Release Candidate

The OWASP Top 10 – 2017 is expected to be finalized in July or August this year, but I had a chance to look at the release candidate version.

Some Changes:

  1. The category ‘Unvalidated Redirects and Forwards’ has been dropped.
  2. The categories ‘Insecure Direct Object References’ and ‘Missing Function Level Access Control’ have been merged. So whether the issue is with the data or the functionality, that distinction no longer matters.
  3. Two new categories have made it into the Top 10: ‘Insufficient Attack Protection’, which aims to detect and deter automated attacks against applications, and ‘Underprotected APIs’, which targets issues in APIs such as REST and JSON-based services.
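Since the merged category treats data-level and function-level access control as one problem, a single deny-by-default authorization check can cover both. A minimal sketch in Python; the roles, actions and rules here are all made up for illustration:

```python
# Deny-by-default authorization covering both merged concerns:
#   function-level: may this role call this action at all?
#   object-level:   may it act on this particular record?
# All roles and actions below are hypothetical.

RULES = {
    # (role, action) -> predicate over (user, record_owner)
    ("admin", "delete_invoice"): lambda user, owner: True,
    ("clerk", "view_invoice"): lambda user, owner: user == owner,
}

def is_authorized(role, user, action, record_owner):
    """Allow only if an explicit rule exists and its predicate passes."""
    rule = RULES.get((role, action))
    return bool(rule and rule(user, record_owner))
```

With one decision point, a clerk reading someone else's invoice (data) and a clerk calling delete (functionality) fail the same check.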

https://github.com/OWASP/Top10/blob/master/2017/OWASP%20Top%2010%20-%202017%20RC1-English.pdf

Acunetix Version 11

Got an opportunity to look into Acunetix version 11. With this version, they have moved to a web-based product, which is kind of good. Looking into it, these are the positive vibes I get.

  • I can exclude certain hours in the scan configuration. Say I don’t want the scan to run during my night hours; I can set that.
  • Likewise, if I need manual intervention for a CAPTCHA, there are options for that.

But that’s it. I am not able to find another feature that will make me go gaga over Acunetix.

  1. Their scanning profiles actually look scary, as I don’t know which rules are part of the Complete scan and which are part of the High Risk scan. I can’t seem to customize either.
  2. I had a lot more control over the scan and application configuration with the desktop-based product than I do with the web version. Though I realize that many utilities that shipped with the desktop version are now free standalone tools, the web version looks kind of empty.
  3. I can’t figure out how to pause and resume a scan. The desktop version had it.
  4. Detailed logging, the Google Hacking Database and many fine-tuning options all seem to have gone missing.

A rather disappointing build, I should say. I will probably wait for the next one.

GemFire OQL – Information Leakage

GemFire is an in-memory data grid. It pools memory across multiple processes to manage application objects and behavior. It is written in Java and has a key-value storage structure. Data is stored in something called a ‘region’, which can be queried using Object Query Language (OQL), much like one would use SQL for an RDBMS.

I came to know about this some time back, and since the opportunity to abuse OQL is rare, I googled a bit about it. The only write-up I found that connects GemFire OQL to remote code execution is below.

http://blog.emaze.net/2014/11/gemfire-from-oqli-to-rce-through.html

While doing a penetration test, I tried following the examples cited in this blog. The application I was testing was doing blacklist filtering, so not all attack vectors went through, but the first one that succeeded was something like this:

  1. select * from /region1 limit 10

Comments: My query returned exactly 10 rows, and I knew then that whatever I passed in that parameter was being appended to the query.
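That “does the row count follow my LIMIT?” observation can be scripted as a quick confirmation step. This is a sketch of the probe logic only; `fetch_rows` is a stand-in for whatever HTTP call your particular target needs, and the region name is taken from the example above:

```python
def confirms_injection(fetch_rows, probes=(1, 5, 10)):
    """Inject different LIMIT values; if the number of rows returned
    tracks the injected limit every time, the parameter is almost
    certainly being appended to the OQL query.

    fetch_rows(query) -> list of rows, however the target returns them.
    """
    return all(
        len(fetch_rows(f"select * from /region1 limit {n}")) == n
        for n in probes
    )
```

Several distinct limits are probed so that a page that coincidentally shows N rows does not produce a false positive.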

  2. select p.getClass.forName('java.lang.Runtime').getDeclaredMethods()[7].getName() from /region1 p

This query is similar to what is explained in the emaze blog; using the above construct, you can list all the methods of Runtime. Getting to this stage was a little tough, as I needed to tweak and find out which payloads were accepted and which weren’t. For some reason, the invoke method wasn’t working at all. Before calling it a day, I passed this query on to my colleague Prashanth, working in a different timezone, who cracked it.

  3. select p.getClass.forName('java.lang.System').getDeclaredMethods()[5].invoke(null, 'os.version'.split('asd')) from /region1 p

This gave out the OS version, and likewise one can read all the System properties with similar queries. So in your testing, even if you don’t get RCE, try an information leak like the above.
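Part of why queries 2 and 3 took trial and error is that the index into getDeclaredMethods() is whatever position the target method happens to occupy in that JVM's method table; the ordering is not guaranteed and differs between Java versions. A rough Python analogue of “pick a method by position, then invoke it”, using the standard platform module purely for illustration:

```python
import platform

# Enumerate the module's public names, the way getDeclaredMethods()
# enumerates a class's methods.
methods = [m for m in dir(platform) if not m.startswith("_")]

# An attacker can't see this list, so the index must be guessed by
# trial and error; here we cheat and look the position up by name.
idx = methods.index("python_version")

# Invoke by position -- analogous to getDeclaredMethods()[5].invoke(...)
version = getattr(platform, methods[idx])()
print(version)
```

Run the same sketch on two different Python versions and `idx` can change, which is the same fragility the OQL payload has across JVMs.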

Developer Tools and Proxy Chaining

What do IE Developer Tools and proxy chaining have in common? Nothing, other than the fact that I learnt about both today.

Earlier, when I had to do authorization-level attacks while logged in as a low-privileged user, I used to construct the whole HTTP request in a proxy and send it to Burp Repeater, tweaking it till I got the response I wanted. A colleague who happens to be a SharePoint developer also showed me how to invoke JavaScript directly even if it isn’t linked from anywhere within the HTML. Enter ‘IE Developer Tools: F12’.

That made my work easier; instead of using Burp, I used Developer Tools this time to show a proof of exploit. The development team was happy too, as they were able to replicate the scenario much better.

Proxy chaining: I have been doing this for three years without knowing there is a specific term for it. I was having a proxy configuration problem with Acunetix: it just didn’t connect to the site even though the proxy details were right. I put Burp Suite in between Acunetix and the site, and voila, it worked.

This, it seems, is called ‘proxy chaining’. NICE!!
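The same chain works from a script, too: point the client at Burp (127.0.0.1:8080 by default) and configure Burp's “Upstream Proxy Servers” setting to forward to the real proxy. A minimal sketch of the client side; the target URL is made up:

```python
# All client traffic goes to Burp; Burp's upstream proxy setting then
# forwards it on to the corporate proxy, completing the chain:
#   client -> Burp -> corporate proxy -> site
BURP = "http://127.0.0.1:8080"
PROXIES = {"http": BURP, "https": BURP}

# With the requests library, for example:
#   requests.get("https://target.example/", proxies=PROXIES, verify=False)
```

The chaining itself lives entirely in the Burp configuration; the client only ever knows about the first hop.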

Retire.js

I got an opportunity to try this Burp extension last week. It is a simple jar file that can be loaded in the Extender tab. Installation was a breeze.

After installing it, all I had to do was go through my target website and start navigating (I didn’t even scan). As I kept navigating, Burp listed some of the JavaScript files as having known vulnerabilities. The false-positive rate in this case was zero.

In terms of finding ‘Components with Known Vulnerabilities’, this tool is better than what WebInspect and Acunetix offer, though behind Black Duck and Palamida. Of course, the latter tools exist solely for this purpose.

But if you want to find such vulnerabilities quickly even without scanning, go for this one!

http://retirejs.github.io/retire.js/

Application Security Metrics

This is something that I worked on last year when stakeholders in the risk management group wanted to measure the success of the Application Security Program.

But how do you measure application security? Or rather, the success of an application security center of excellence program? What can tell you that it is working? Is it OK to allocate the same budget every year? Should it be reduced? How would one know? Is the program on track? Is it improving? Just by having a secure SDLC process and doing secure code analysis and security testing, one cannot claim a sustainable application security program. To sustain any activity, one needs to know where they are and where they need to reach, and that is what application security metrics give you.

What should be done first? Answer: Inventory.

  1. Take an inventory of your assets first. Whether an asset is secure, insecure, or you don’t even know what it is used for, it doesn’t really matter. It is amazing to ask any CISO whether he has a fair idea of how many assets his organization has. Here we are not getting into hardware or software assets, just the basic web applications/services that an organization’s IT floats on the internet or intranet.

Once the inventory is finalized, come up with an asset classification using a risk-based approach. Some assets could be critical, some public. Some could be accessed by all and some only within a closed, trusted environment. Some assets are used by millions of users, and some just by the CISO (yes, you read that right: his dashboard).

  2. Once the inventory is classified, figure out your security processes for each of your assets. Did all applications undergo all aspects of the secure SDLC?

In other words, ‘security coverage’. Let’s say you do code analysis for only 50 of your 100 applications; then your coverage is only 50%, and you have no idea about the rest of the apps. With this simple metric, it becomes fairly clear what one needs to do.

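The coverage figure above is trivial to compute once the inventory exists, which is part of why the inventory comes first. A minimal sketch; the activity names and app names are illustrative:

```python
def coverage(inventory, completed):
    """inventory: iterable of application names.
    completed: activity name -> set of apps that finished that activity.
    Returns activity -> fraction of the inventory covered."""
    apps = set(inventory)
    return {activity: len(done & apps) / len(apps)
            for activity, done in completed.items()}

# 100 apps; code analysis done for 50 of them -- the 50% example above.
inventory = [f"app{i}" for i in range(100)]
completed = {
    "code_analysis": {f"app{i}" for i in range(50)},
    "security_testing": {f"app{i}" for i in range(80)},
}
print(coverage(inventory, completed))
```

Intersecting with the inventory also keeps stale entries (apps scanned once but since decommissioned) from inflating the numbers.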

Building A Secure Web Application – Part 1

Well, when I first thought of posting about this topic, a friend of mine suggested:

“Celia, forget about security. Once you put your application on the web, no matter what you do, it is always vulnerable.”

Another one said, “Gosh! Remember that we are from the service industry. Lets not overdo on that security aspect. The client will take care of it when he deploys it. Also, remember, we can do only what they ask for.. ”

As I pondered this, I wondered how I could strike a balance between these two views. Agreed, security is an ongoing thing; it is an arms race between attackers and defenders. Today we find a vulnerability and fix it; tomorrow there comes another issue.

Likewise, clients come in all flavors. There are some who really know what they want. These people are a delight to work with, as their requirements are very clear. They are also quick to understand that building secure applications does take some royal effort. And there are some who think application development shouldn’t take more than a week. There was this manager who once asked me, “After all, it’s just adding, editing, deleting and viewing. You are not doing rocket science. Why is it taking more time?”

Yeah, I agree. It would take me just one query on the database to let an administrator log in to a system. But it would take at least 10 other policy checks to prevent other users from manipulating that query. Wouldn’t that take some solid effort?
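To make that concrete, here is a sketch of the “one single query” with just two of the surrounding checks, using an in-memory SQLite table and made-up column names:

```python
import sqlite3

def login(db, username, password_hash):
    # The "one single query" the manager sees -- parameterized, so user
    # input cannot rewrite it.
    row = db.execute(
        "SELECT role FROM users WHERE name = ? AND pw_hash = ?",
        (username, password_hash),
    ).fetchone()
    if row is None:
        return None          # check 1: credentials must actually match
    if row[0] not in ("user", "admin"):
        return None          # check 2: reject unknown or tampered roles
    return row[0]            # lockout, session scoping etc. omitted
```

The query is one line; the effort is in everything wrapped around it.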

I hope people are at least nodding a little now. Let me say one thing: even popular websites like Gmail, Facebook, YouTube and MSN have vulnerabilities. So it’s not just a poor programmer’s pathetic code. Even experienced experts find it difficult to take care of every vulnerability when all their attention is focused on business logic.

In this case, what can be done? First thing:

Client: The client has to know that building a secure web application takes time. And some real effort.

Developer: The programmer needs to know how to secure their code and needs to follow a security standard.

PM: The one who takes the real pressure and coordinates between the above two.

Security Consultant: The one who tells us what we already know 🙂 .. well, jokes apart, this is the person who makes our lives simpler: who tells us what needs to be done to make our code secure, and who reviews it before the app gets deployed to production.

Now, just as we have separate teams for application design, BI development and testing, we need a separate group of security experts who concentrate solely on the security aspect of the application. How a security expert adds value to the application will be discussed in Part 2 of this article… 🙂