The OWASP Top 10 – 2017 may be finalized in July or August this year, but I had a chance to look at the release candidate version.
- The category ‘Unvalidated Redirects and Forwards’ has been dropped.
- The categories ‘Insecure Direct Object References’ and ‘Missing Function Level Access Control’ have been clubbed together. So whether the issue is with the data or the functionality, that difference no longer matters.
- Two new categories have made it into the Top 10: ‘Insufficient Attack Protection’, which aims to detect and deter automated attacks against applications, and ‘Underprotected APIs’, which targets issues in APIs such as REST and JSON-based services.
Got an opportunity to look into Acunetix version 11. With this version, they have moved to a web-based product, which is kind of good. Looking into it, these are the positive vibes I get:
- I can exclude certain hours in the scan configuration. Say I don’t want the scan to run during my night time; I can set it so.
- Likewise, if a captcha needs manual intervention, there are options for that.
But that’s it. I am not able to find another feature that would make me go gaga over Acunetix.
- Their scanning profiles actually look scary, as I don’t know which rules are part of the Complete scan and which are part of the High Risk scan. I can’t seem to customize either.
- I seem to have had a lot more control over the scan and application configuration with the desktop-based product than with the web version. Though I realize that many utilities that shipped with the desktop-based version are now freebies, the web version looks kind of empty.
- I can’t seem to figure out how to pause and resume a scan. The desktop version had it.
- Detailed logging, the Google Hacking Database and many fine-tuning options all seem to have gone missing.
A rather disappointing build, I should say. I will probably wait for the next one.
GemFire is an in-memory data grid. It pools memory across multiple processes to manage application objects and behavior. It is written in Java and uses a key-value storage structure. Data is stored in something called a ‘region’, which can be queried using the Object Query Language (OQL), much like one would use SQL for an RDBMS.
I came to know about this some time back, and since the opportunity to abuse OQL is rare, I googled a bit about it. The only write-up I found that references GemFire OQL and remote command injection is below.
While doing penetration testing, I tried following the examples cited in that blog. The application I was testing was doing blacklist filtering, so not all attack vectors went through, but the first attack vector that succeeded was the one below.
1. select * from /region1 limit 10
Comment: The query returned exactly 10 rows, and I knew then that whatever data I passed in the parameter was getting appended to the query.
2. select p.getClass.forName('java.lang.Runtime').getDeclaredMethods().getName() from /region1 p
This query is similar to the one explained in the emaze blog; using the above construct, you can list all the methods of the Runtime class. Getting to this stage was a little tough, as I needed to tweak things to find out which parameters were accepted and which weren’t. For some reason, the invoke method wasn’t working at all. Before calling it a day, I passed the query on to my colleague Prashanth, working in a different timezone, who cracked the shell.
3. select p.getClass.forName('java.lang.System').getDeclaredMethods().invoke(null, 'os.version'.split('asd')) from /region1 p
This gave out the version number, and likewise one can get all the System properties with similar queries. So in your testing, even if you don’t get RCE, try an information leakage like the one above.
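The root cause, as far as I could tell, was the classic one: the user-supplied parameter was concatenated straight into the OQL string, so OQL syntax passed through. A minimal sketch of that pattern (the function and parameter names are made up for illustration, not the actual application's code):

```python
# Hypothetical sketch of the vulnerable pattern: a request parameter
# is appended verbatim into an OQL query string.
def build_oql(projection: str) -> str:
    # Vulnerable: no whitelisting or escaping of the user-controlled value
    return "select " + projection + " from /region1 p limit 10"

# Intended usage by the application
print(build_oql("p.name"))

# Attacker-supplied value reaching into Java reflection, as in the
# queries above; a blacklist filter can often be tweaked around
payload = "p.getClass.forName('java.lang.System').getDeclaredMethods()"
print(build_oql(payload))
```

A strict whitelist on the projection (or not exposing the query text to user input at all) would close this off.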
By default, RSA ClearTrust provides options to lock accounts after five consecutive failed authentication attempts within one day. Likewise, the system can unlock users after a specified amount of time, or the administrator of the system can unlock them.
The above configuration seems foolproof, and one wouldn’t see anything wrong here. That was until I stumbled upon a specific configuration setting in one application, where the development team had set up account lockout as below.
“Lock accounts after 3 consecutive failed attempts within 2 minutes”. => I didn’t even know this was possible. At a glance it looks even more promising, since this setting can catch a robot early. But wait: if I am a malicious insider or a person known to the victim, I can abuse the system every 3 minutes, because the failure counter effectively resets with each new window, and eventually capture the password. I don’t have to be a robot to pull this off.
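To see why the narrow window is worse, compare the sustained guess rate an attacker gets while always staying one failure below the threshold. A back-of-the-envelope sketch (the two policies are the ones described above; the arithmetic is mine):

```python
# Guesses per day while staying just under the lockout threshold,
# assuming the failure counter only considers the configured window.
def safe_guesses_per_day(threshold: int, window_minutes: int) -> int:
    windows_per_day = (24 * 60) // window_minutes
    return (threshold - 1) * windows_per_day

# Default policy: 5 failures within one day -> 4 safe guesses/day
print(safe_guesses_per_day(5, 24 * 60))
# "Improved" policy: 3 failures within 2 minutes -> 1440 safe guesses/day
print(safe_guesses_per_day(3, 2))
```

The narrow window turns 4 quiet guesses a day into 1440, all without a single lockout event.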
I told the AD team my thoughts and they removed this interval option from their setting. What are your thoughts?
What do IE developer tools and Proxy chaining have in common? Nothing other than the fact I learnt about both today.
That made my work easier: instead of using Burp, I used Developer Tools this time to show a proof of exploit. The development team was happy too, as they were able to replicate the scenario much more easily.
Proxy Chaining: I have been doing this for 3 years without knowing that there is a specific term for it. I was having a proxy configuration problem with Acunetix: it just wouldn’t connect to the site even though the proxy configuration details were right. I put Burp Suite between Acunetix and the site, and voila, it worked.
This is called ‘proxy chaining’, it seems. NICE!!
I got an opportunity to try this Burp extension last week. It is a simple jar file that can be loaded via the Extender tab. Installation was a breeze.
In finding ‘Components with Known Vulnerabilities’, this tool is better than what WebInspect and Acunetix offer, though behind Black Duck and Palamida. Of course, the latter tools exist solely for this purpose.
But if you want to find such vulnerabilities quickly even without scanning, go for this one!
This is something that I worked on last year when stakeholders in the risk management group wanted to measure the success of the Application Security Program.
But how do you measure application security, or rather the success of an application security center of excellence program? What tells you that it is working? Is it OK to allocate the same budget every year? Should it be reduced? How would one know? Is the program on track? Is it improving? By just having a secure SDLC process and doing secure code analysis and security testing, one cannot say they have a sustainable application security program. To continue any task or activity, one needs to know where to reach and where they are. And that is something application security metrics will give you.
What should be done first? Answer: Inventory.
1. Take an inventory of your assets first. Whether an asset is secure, insecure, or you don’t even know what it is used for, it doesn’t really matter at this stage. It is amazing to ask any CISO whether he has a fair understanding of how many assets the organization has. Here, we are not getting into hardware or software assets, but just the basic web applications/services that an org’s IT floats on the internet or intranet.
2. Once the inventory is finalized, come up with an asset classification using a risk-based approach. Some assets could be critical, some public. Some assets could be accessed by all, and some only within a closed, trusted environment. Some assets are used by millions of users, and some just by the CISO (ya, you read it right: his dashboard).
3. Once the assets are classified, go figure out your security processes for each of them. Did all applications undergo all aspects of the secure SDLC?
In other words, ‘security coverage’. Let’s say you do code analysis for only 50 of your 100 applications; then your coverage is only 50%, and you have no idea about the rest of the apps. With this simple metric, it becomes fairly clear what one needs to do.
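The coverage metric itself is trivial to compute once the inventory exists. A minimal sketch, with made-up activity names and counts:

```python
# Security coverage: share of inventoried apps that went through
# each secure-SDLC activity. All numbers here are illustrative.
def coverage_pct(covered: int, total: int) -> float:
    return 100.0 * covered / total

total_apps = 100
activities = {
    "code analysis": 50,
    "security testing": 80,
    "threat modeling": 30,
}
for name, covered in activities.items():
    print(f"{name}: {coverage_pct(covered, total_apps):.0f}% coverage")
```

Tracked release over release, per activity and per asset class, this one number already shows whether the program is expanding or stagnating.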