Vulnerability Aggregation and Management Tools in the Market

After working in the application security sector for more than nine years, I see that most of the struggle is not in finding security vulnerabilities or in fixing them. The most common pain points are rather the following.

  1. Having a common enterprise vulnerability repository that aggregates all vulnerabilities and makes meaningful correlations.
  2. Business-aligned risk, where an XSS issue found in a business-critical app is not given the same priority as one found in a less critical intranet app (see the sketch after this list).
  3. Innovation and automating manual tasks.
  4. Security metrics that help the CISO office understand what the security posture is.
  5. The ability to keep issues from recurring, so that the fix you make today doesn't break something and reintroduce an issue that was fixed last year.
  6. Arbitration – the most painful task of all: being stuck between the security group, who think security is more important than functionality, and the business, who think security is just a bottleneck.
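To make the second point concrete, here is a minimal sketch of what business-aligned risk scoring could look like: a CVSS base score weighted by how critical the application is to the business. The tier names and multipliers below are purely illustrative assumptions, not any vendor's formula.

```python
# Minimal sketch of business-aligned risk scoring (illustrative only).
# Tier names and multipliers are assumptions, not any product's formula.

# Hypothetical business-criticality multipliers per application tier.
CRITICALITY_WEIGHT = {
    "business_critical": 1.5,   # internet-facing, revenue-impacting
    "internal": 1.0,            # standard internal application
    "intranet_low": 0.5,        # low-impact intranet tool
}

def business_risk(cvss_base: float, app_tier: str) -> float:
    """Scale a CVSS base score by the application's business criticality."""
    weight = CRITICALITY_WEIGHT.get(app_tier, 1.0)
    # Cap at 10 so the result stays on a CVSS-like 0-10 scale.
    return min(10.0, cvss_base * weight)

# The same XSS (CVSS 6.1) lands very differently depending on which app it was found in.
print(business_risk(6.1, "business_critical"))  # 9.15 -> fix now
print(business_risk(6.1, "intranet_low"))       # 3.05 -> schedule later
```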

There is not a single tool in the market that answers all six issues, but there are some that at least attempt to solve a few of them. Some of the tools I explored are:

  1. Tenable.io
  2. ThreadFix
  3. Code Dx
  4. Kenna Security
  5. Risk I/O
  6. Risk VM

Some common features of these tools:

  1. Vulnerability Aggregation – Most of them accept vulnerability feeds from top SAST tools like Micro Focus Fortify, AppScan, Checkmarx, Veracode, etc., DAST tools like WebInspect, Acunetix and Burp Suite, and threat intelligence tools (a minimal sketch of this kind of aggregation follows this list).
  2. Vulnerability Tracking and Management – Some of these tools integrate with defect trackers and ticketing tools like ServiceNow.
  3. Dashboards – The graphs in Kenna and Tenable.io are good when it comes to presenting meaningful information that can be acted upon.
  4. Security Orchestration – Code Dx comes with built-in scanner detection and bundled open-source scanners, so even if you don't have a commercial scanner, you can still scan with the open-source ones without spending a single minute integrating the tools.
  5. Risk Scoring – Some tools offer CVSS-based ranking that can be customized further.
  6. Automation – Code Dx provides options to add your own custom attack vectors, custom rules, etc.
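As a rough illustration of the aggregation and correlation idea, here is a minimal sketch that merges findings from two scanners into one repository. The field names, severity values and deduplication key are my own assumptions for illustration, not the schema of ThreadFix, Code Dx or any other product; real tools do far more (parsing native report formats, fuzzy matching, and so on).

```python
# Sketch of aggregating findings from multiple scanners into one repository.
# Field names, the dedup key and severities are illustrative assumptions only.
from dataclasses import dataclass

@dataclass(frozen=True)
class Finding:
    source: str       # e.g. "fortify", "webinspect", "burp"
    app: str          # application the finding belongs to
    category: str     # normalized vulnerability category, e.g. "xss"
    location: str     # file/line for SAST, URL/parameter for DAST
    severity: str     # normalized to critical/high/medium/low

def dedup_key(f: Finding) -> tuple:
    """Two scanners reporting the same category at the same place count once."""
    return (f.app, f.category, f.location)

def aggregate(feeds: list[list[Finding]]) -> dict[tuple, list[Finding]]:
    """Merge per-scanner feeds; findings sharing a key are correlated together."""
    merged: dict[tuple, list[Finding]] = {}
    for feed in feeds:
        for finding in feed:
            merged.setdefault(dedup_key(finding), []).append(finding)
    return merged

fortify = [Finding("fortify", "payments", "xss", "/pay?amount", "high")]
burp    = [Finding("burp",    "payments", "xss", "/pay?amount", "medium")]
for key, sources in aggregate([fortify, burp]).items():
    print(key, "reported by", [f.source for f in sources])
```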


Still, there is a long way to go, as most of these tools are aggregators of either application security or network security vulnerabilities. There is not much meaningful correlation between the different kinds of detection methods, so what you get is plain aggregation and consolidation of vulnerabilities.


How do you refine time spent on application security scans?

A technocrat I respect asked me this question: “Year on year, you do scans using Fortify, WebInspect, AppScan and so on, but the scan time is always the same. Why can’t you refine it?”

I replied, “Even brushing my teeth is something I have been doing for three decades, and I still take the same amount of time. I am dead scared to automate the process.” Though he took this in good humour and we laughed over it, I did tell him that scans can indeed be refined and the effort cut down. But it is not as if you did a scan in 2 hours this year and, wanting to increase productivity next year, you can simply decide to finish it in 30 minutes. That kind of blind refinement doesn’t exist.

So, what exactly can you do to cut down the scan time? To answer that, you should first understand why an application with X pages takes Y amount of time to scan. All the tools you use are essentially automated script engines that spider your application and try a set of rules/malicious vectors against it. So:

  1. The more web pages in your scan, the longer the scan will take.
  2. The more input fields on your site, the more time it takes to execute the rules, since each check has to be tried out per input field.
  3. The more complexity in your site, the more time it takes. For example, if the application has a file upload feature, a CAPTCHA or a dynamically generated script, each of those adds a certain amount of time. (A rough back-of-the-envelope estimate of this multiplicative effect follows this list.)
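To see why these factors hurt, here is a back-of-the-envelope estimate. The numbers are invented and real scanners are much smarter about skipping redundant work, but the multiplicative effect is the point.

```python
# Back-of-the-envelope scan time estimate (numbers are invented for illustration).
def estimate_scan_seconds(pages: int, inputs_per_page: int, checks: int,
                          seconds_per_request: float = 0.2) -> float:
    """Roughly, every active check gets retried against every input on every page."""
    return pages * inputs_per_page * checks * seconds_per_request

# 200 pages x 5 inputs x 500 active checks at ~0.2 s per request ~= 27+ hours.
print(estimate_scan_seconds(200, 5, 500) / 3600, "hours")
# Halving the check count (a leaner policy) halves the estimate.
print(estimate_scan_seconds(200, 5, 250) / 3600, "hours")
```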

These three parameters are not really in your hands, and tweaking them will reduce the quality of your scan output. So, what can you actually tune?

  1. Get the best infrastructure possible. Don’t expect a scan to run on 8 MB of RAM; go for the maximum allowed in your organization. If you are using a dual-core processor, ask for a quad-core or better.
  2. All scan engines write temporary files and log files to the drive the OS is on. Change this default setting so that the system doesn’t slow down as the log file grows. If the OS is on C:\, point the log files to another drive.
  3. Policy -> WebInspect uses the ‘Standard’ policy by default and AppScan uses ‘Complete’. If you go into these policies and inspect them, you will realize they contain a large bundle of automated attack vectors to execute; they may include checks for a Struts vulnerability as well as a PHP WordPress-related one. So, if you are really sure about the application you are testing, experienced enough and able to exercise sound judgement, the policy can be trimmed to match your application’s landscape. I have tried this on applications and seen scan time drop by more than half.
  4. Threading -> The more threads the tool uses, the sooner it will complete your scan, but that comes at the cost of CPU usage. If it looks like the tool is about to crash, reduce the number of threads.
  5. Scan Configuration Parameters -> There are other parameters that let you test a page only once per attack, once per unique parameter, or repeatedly for every parameter. If the customer wants the scan time reduced, that is the overriding goal and quality can be compromised, you can try this out; but you will miss finding issues on every parameter.
  6. Rule Suppression, Sanitization and Others -> What if some code issue is already fixed at the deployment level but the tool keeps reporting it? One good example is the parseDouble() issue. In this case, you can write a suppression rule at the rule pack level so that the issue is suppressed and you don’t waste time analyzing it later.
  7. Last but not the least -> Schedule your scans to run during non-work hours. If the application goes down during the scan, you will have no one to support you; but if you are running it against your own instance, this works well. In one project I worked on, we had to share the instance with the performance engineering team, and hence opted for a different timeslot. (A minimal scheduling sketch follows this list.)
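For the last point, here is a minimal sketch of off-hours scheduling. The scan_cli command and its arguments are hypothetical placeholders for whatever command-line entry point your scanner actually exposes; cron or Windows Task Scheduler would do the same job with less code.

```python
# Sketch: kick off a scan at a fixed off-hours time.
# "scan_cli" and its arguments are hypothetical placeholders, not a real
# WebInspect/AppScan CLI; substitute your scanner's actual command line.
import subprocess
import time
from datetime import datetime, timedelta

SCAN_COMMAND = ["scan_cli", "--target", "https://app.example.com", "--policy", "custom"]
START_HOUR = 22  # 10 PM local time, i.e. outside work hours

def seconds_until(hour: int) -> float:
    """Seconds from now until the next occurrence of the given hour."""
    now = datetime.now()
    run_at = now.replace(hour=hour, minute=0, second=0, microsecond=0)
    if run_at <= now:
        run_at += timedelta(days=1)
    return (run_at - now).total_seconds()

if __name__ == "__main__":
    time.sleep(seconds_until(START_HOUR))     # wait for the off-hours window
    subprocess.run(SCAN_COMMAND, check=True)  # launch the (placeholder) scan command
```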

Do you have any other measures that can reduce scan time?


Acunetix Version 11

I got an opportunity to look into Acunetix version 11. With this version, they have moved to a web-based product, which is kind of good. Looking into it, these are the positive vibes I get.

  • I can exclude certain hours from the scan configuration. Say I don’t want the scan to run during my night time; I can set it up that way.
  • Likewise, if I need manual intervention for a CAPTCHA, there are options for that.

But that’s it. I am not able to find another feature that will make me go gaga over Acunetix.

  1. Their scanning profiles actually look scary, as I don’t know which rules are part of a Complete scan and which are part of a High Risk scan. I can’t seem to customize either.
  2. I seem to have had a lot more control over the scan and application configuration with the desktop-based product than with the web version. Though I realize that many utilities that shipped with the desktop version are now free standalone tools, the web version looks kind of empty.
  3. I really can’t figure out how to pause and resume a scan. The desktop version had it.
  4. Detailed logging, the Google Hacking Database and many fine-tuning options all seem to have gone missing.

A rather disappointing build, I should say. I will probably wait for the next one.

Developer Tools and Proxy Chaining

What do the IE developer tools and proxy chaining have in common? Nothing, other than the fact that I learnt about both today.

Earlier, when I had to carry out authorization-level attacks while logged in as a low-privileged user, I used to construct the whole HTTP request in a proxy and send it to Burp Repeater, tweaking it until I got the response I wanted. A colleague who happens to be a SharePoint developer showed me how to invoke JavaScript directly even if it isn’t linked from anywhere within the HTML. Enter ‘IE Developer Tools: F12’.

That made my work easier, and instead of using Burp, I used the Developer Tools this time to show a proof of exploit. The development team was happy too, as they were able to replicate the scenario much better.

Proxy chaining: I have been doing this for three years without knowing there is a specific term for it. I was having a proxy configuration problem with Acunetix: it just wouldn’t connect to the site even though the proxy configuration details were right. I put Burp Suite between Acunetix and the site and, voila, it worked.

This is called ‘proxy chaining’, it seems. NICE!!
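For what it’s worth, the client half of the chain looks like this: point the tool at Burp’s listener (127.0.0.1:8080 by default) and let Burp’s own upstream proxy setting forward traffic onward to the corporate proxy. The snippet below just demonstrates that client side with the requests library, assuming Burp is running locally on its default listener.

```python
# Client half of a proxy chain: send traffic through Burp's local listener;
# Burp's "upstream proxy" setting then forwards it to the corporate proxy.
# 127.0.0.1:8080 is Burp's default listener address.
import requests

BURP_PROXY = {
    "http": "http://127.0.0.1:8080",
    "https": "http://127.0.0.1:8080",
}

# verify=False because Burp re-signs TLS with its own CA; in practice you
# would install Burp's CA certificate instead of disabling verification.
response = requests.get("https://example.com", proxies=BURP_PROXY, verify=False)
print(response.status_code)
```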

Retire.js

I got an opportunity to try this Burp extension last week. It is a simple JAR file that can be loaded in the Extender tab. Installation was a breeze.

After installing it, all I had to do was go through my target website and start navigating (I didn’t even run a scan). As I kept navigating, I saw that Burp listed some of the JavaScript files as having known security vulnerabilities. The false positive rate in this case was zero.

For finding ‘components with known vulnerabilities’, this tool is better than what WebInspect and Acunetix offer, though behind Black Duck and Palamida. Of course, the latter tools exist solely for this purpose.

But if you want to find such vulnerabilities quickly even without scanning, go for this one!

http://retirejs.github.io/retire.js/