Vulnerability Aggregation and Management Tools in the Market

After working in the Application Security sector for more than nine years, I see that most of the struggle is not in finding security vulnerabilities or in fixing them. The most common pain points are the following:

  1. Having a common enterprise vulnerability repository that aggregates all vulnerabilities and makes meaningful correlations.
  2. Business-aligned risk, where an XSS issue found in a business-critical app is not given the same priority as one found in a less critical intranet app.
  3. Innovation and automation of manual tasks.
  4. Security metrics that help the CISO office understand the security posture.
  5. Preventing recurring issues, so that the fix you make today doesn’t break something and reintroduce an issue that was fixed last year.
  6. Arbitration – the most painful task of being stuck between the security group, who think security is more important than functionality, and the business, who think security is just a bottleneck.

No single tool in the market answers all six issues, but some tools are at least attempting to solve a few of them. Some tools that I explored are:

  1. Tenable.io
  2. ThreadFix
  3. Code Dx
  4. Kenna Security
  5. Risk I/O
  6. RiskVM

Some common features of these tools:

  1. Vulnerability Aggregation – Most of them accept vulnerability feeds from leading SAST tools such as Micro Focus Fortify, AppScan, Checkmarx and Veracode; DAST tools such as WebInspect, Acunetix and Burp Suite; and threat intelligence tools.
  2. Vulnerability Tracking and Management – Some of these tools integrate with defect trackers and ticketing tools like ServiceNow.
  3. Dashboard – The graphs in Kenna and Tenable.io are good at presenting meaningful information that can be acted upon.
  4. Security Orchestration – Code Dx comes with built-in scan detection capability and bundled open source scanners, so even if you don’t have commercial scanner support, you can still scan using the open source ones without spending a single minute integrating the tools.
  5. Risk Scoring – Some tools offer CVSS-based ranking that can be customized further (see the sketch after this list).
  6. Automation – Code Dx provides options to add your own custom attack vectors, custom rules, etc.
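
To make item 5 concrete, here is a minimal sketch of business-aligned, CVSS-based risk scoring in Python. The tier names and weighting factors are my own assumptions for illustration, not taken from any of the tools above.

```python
# A minimal sketch of business-aligned risk scoring: scale a CVSS base
# score (0-10) by a hypothetical asset-criticality weight, capped at 10.
CRITICALITY_WEIGHT = {
    "business_critical": 1.5,  # e.g. internet-facing payment app
    "internal": 1.0,
    "intranet_low": 0.6,       # e.g. less critical intranet app
}

def business_risk_score(cvss_base: float, asset_tier: str) -> float:
    """Adjust a CVSS base score by asset criticality."""
    return min(10.0, cvss_base * CRITICALITY_WEIGHT[asset_tier])

# The same reflected XSS (CVSS 6.1) ranks differently per asset:
print(business_risk_score(6.1, "business_critical"))  # ~9.15
print(business_risk_score(6.1, "intranet_low"))       # ~3.66
```

This is exactly the kind of weighting that lets you avoid giving the same priority to an XSS in a business-critical app and one in a less critical intranet app.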

 

Still, there is a long way to go, as most of these tools focus on either application security or network security vulnerability aggregation. There is not much meaningful correlation between the different kinds of detection methods, so what we get is plain aggregation and consolidation of vulnerabilities.

 

 


DASP Top 10 – 2018

Below is NCC Group’s initiative to catalogue vulnerabilities related to smart contracts and blockchain, along with the ordering of the vulnerabilities.

DASP Top 10

  1. Reentrancy – This is a cousin of our usual race conditions in multithreaded code: external contract calls are allowed to make further calls back into the contract while an earlier execution is still in flight and has not completed.
  2. Access Control – This is our age-old appsec issue, and it will not spare smart contracts either.
  3. Arithmetic Issues – Always be wary of integer overflows and underflows, whether it’s a blockchain or a simple calculator application (a sketch of the wraparound behaviour follows this list).
  4. Unchecked Low Level Calls – First of all, avoid using low-level calls. But if you must, please check the return value, for Christ’s sake!
  5. Denial of Service – Again, DoS is not new.
  6. Bad Randomness – This again is not new.
  7. Front Running – Similar to a race condition: a transaction from someone qualified enough to WIN can be kept waiting to be mined while a stealing party takes it over with higher fees. Of all the issues, I think this is the most practical one and will always be exploited by users with malicious intent, because that is how it is in the real world.
  8. Time Manipulation – Reliance on a timestamp that someone has control over. Why did they even allow this?
  9. Short Addresses – Though it could be termed new, to me it looks plainly like missing input validation.
  10. Unknown Unknowns – The vaguest of all. It’s the fear of the unknown, since not many actually understand blockchain or smart contracts, even when they claim that their entire country now runs on them. Some new kid on the block may stumble upon something interesting and might loot your country away.
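
For item 3, here is a minimal Python sketch of what unchecked fixed-width arithmetic looks like. Python’s own integers are arbitrary-precision and never wrap, so the masking below only simulates the 256-bit unsigned arithmetic that smart contract platforms typically use.

```python
# Simulating uint256 overflow/underflow: mask results to 256 bits to
# mimic fixed-width contract arithmetic (Python ints don't wrap).
UINT256_MASK = 2**256 - 1

def uint256_add(a: int, b: int) -> int:
    return (a + b) & UINT256_MASK

def uint256_sub(a: int, b: int) -> int:
    return (a - b) & UINT256_MASK

balance = 0
# Underflow: subtracting 1 from a zero balance silently yields the
# largest possible value instead of raising an error.
print(uint256_sub(balance, 1) == UINT256_MASK)  # True
```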

The original article can be found here:

https://www.dasp.co/

Security Architecture Review – Deployment Considerations

While doing a security architecture review for a piece of software, one should first check how the product is supposed to be deployed. Questions worth pondering include:

  1. Will the network provide secure communication (SSL/TLS)?
  2. Will the topology include a firewall?
  3. Would the OS where the application server runs need many open ports, and if yes, for what reasons?
  4. If SSL/TLS is used, what are the acceptable protocol versions and algorithms?
  5. Would the system run its processes with least privilege?
  6. What about the encryption keys and their storage?
  7. Would the database server be open source or commercial, and would it provide record-level encryption for sensitive data?
  8. What trust levels would the target environment support?
  9. How would session state be managed?

There may also be questions specific to the chosen technology stack. For example, one has to consider encrypting the VIEWSTATE in the case of a .NET application; in the case of PHP, the secure configuration needed in Apache and the php.ini file may also have to be considered.

 

Application Security Architecture Review

Application Security Architecture Review is a security activity done after the application architecture is defined and drafted, and before design starts. Most often, I get called to do an application security architecture review only to discover that the people who want it done have no idea what they wanted in the first place. This activity does not propose a security architecture; it reviews the architecture that already exists.

WHEN

This activity cannot be done during your testing or development phase; it belongs in the initial stages. You need to do it even before you start coding, because the longer you delay considering security for your software, the more it costs later to fix the security issues that come out of the product.

WHY

You have put forth your business specification, drawn out the functional requirements based on the business specification, and drafted the technical specification as well. For the given technical specification, how do you know that the technical controls used are fit for use with respect to security? For example, let’s say that you want to use Tomcat 7.0 as the application server. How do you know that this version of Tomcat has no known security vulnerabilities? Or let’s say that you want users to register on your site using a registration module. Since you would be allowing anonymous users to use this form, you may also need a CAPTCHA, though this is neither a business need nor part of the functional specification. The CAPTCHA control is needed as part of a security need, so that your form does not get abused by bots.
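
As an aside, one practical way to answer the "does Tomcat 7.0 have known vulnerabilities?" question is to query a public CVE database. Below is a rough Python sketch against the NVD REST API; the endpoint and parameter names follow NVD’s v2.0 API as I understand it, so verify them against the current documentation before relying on this.

```python
# A rough sketch: look up published CVEs for a keyword via the NVD API.
# Endpoint and parameter names are assumptions based on NVD's v2.0 docs.
import requests

NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def print_known_cves(keyword: str, limit: int = 5) -> None:
    resp = requests.get(
        NVD_URL,
        params={"keywordSearch": keyword, "resultsPerPage": limit},
        timeout=30,
    )
    resp.raise_for_status()
    for item in resp.json().get("vulnerabilities", []):
        cve = item["cve"]
        print(cve["id"], "-", cve["descriptions"][0]["value"][:80])

print_known_cves("Apache Tomcat 7.0")
```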

WHAT

Now that you know why we should be doing this activity, we need to define what we need in order to do it.

PRE-REQUISITES: A business specification document (BSD), a functional specification document (FSD) and a technical specification (TS) document as inputs from your customer. As a security consultant, you would also need an application security architecture review checklist for the given technology, so that you can go through the specifications and review the security controls for every security domain.

HOW

  1. Get the documents first and go through them. Note down grey areas that you either don’t understand or need more clarification on.
  2. Set up a meeting and clarify all questions. Sample meeting questions: What are the entry paths to the application? What will be the data validation approach? What privilege levels (roles) are being defined? What is the nature of the data used in the application (data classification)?
  3. Analyze the entire specification and requirements based on the clarifications.
  4. Review the architecture using application security architecture review checklists for the given technology.
  5. Provide your recommendations for mitigation.
  6. Provide the report.

Reference Material:

https://www.idi.ntnu.no/emner/tdt4237/2007/yoder.pdf

https://resources.infosecinstitute.com/application-architecture-review/

http://www.guidanceshare.com/wiki/Security_Engineering_Explained_-_Chapter_5_-_Security_Architecture_and_Design_Review

 

Vulnerability Management Metrics

As my organization’s representative for the Application Security service line, I often present to customers who are the key owners of application security in their respective organizations. At times, this includes CISOs.

During these presentations, the question asked most often is what value we can add in enabling them to manage all their vulnerabilities, automate security tasks and, at the same time, help them improve the security posture of their applications year on year.

To do this, one first needs a clear understanding of where they are and where they need to go. Now, how does one know where they are? You first need a complete understanding of the assets you are trying to protect, your regulatory/compliance needs, your organizational policies, what you need to protect and why. The which, when and how come later, with the help of application security metrics.

First I will list some of the metrics, and then explain how to use them to figure out where you are.

Application Security Metrics:

  1. Security Coverage
  2. Remediation Window or Vulnerability Age
  3. Vulnerability Trend by Month
  4. Vulnerability Density
  5. Vulnerability Distribution by severity, by status, by category.
  6. Rejection Rate
  7. Tool Efficiency
  8. Compliance Percentage
  9. Defect Recurrence Rate
  10. False Negatives/Positive Rate

Though there are more, we will cover these 10 for now.

  1. Security Coverage -> This helps you decide whether you are actually doing security assessments for all your assets. To do this, you first need a proper inventory of all your assets. Assets could be your applications, endpoints, network devices or IT infrastructure.

Unless you have total visibility of your entire asset inventory, it is impossible to protect it. A well-known organization I worked with had a proper SDLC process, but their HR department floated a website through a third-party vendor that didn’t go through the release management process. The website was later hacked.

So, get hold of all your assets first and bring them under the SDLC/release management process. Then put them through security assessments to see whether you are actually covering everything (a trivial sketch of the coverage calculation follows).
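
The sketch below assumes an asset inventory where each entry carries an 'assessed' flag; the field names are hypothetical.

```python
# A trivial sketch: security coverage as a percentage of inventoried assets.
assets = [
    {"name": "payroll-app",     "assessed": True},
    {"name": "hr-vendor-site",  "assessed": False},  # the kind that slips through
    {"name": "intranet-portal", "assessed": True},
]

coverage = 100 * sum(a["assessed"] for a in assets) / len(assets)
print(f"{coverage:.0f}% of assets covered")  # 67% of assets covered
```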

 

2. Remediation Window or Vulnerability Age: This is the time it takes a team to fix a vulnerability after it is detected. It can also be called your effective ‘internal zero-day’, since you know the vulnerability is there but a fix is not available yet (a sketch for computing it follows).
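
A minimal sketch of computing this metric from detection and fix dates; the record fields ('detected', 'fixed') are hypothetical stand-ins for whatever your vulnerability tracker exports.

```python
# A minimal sketch: average remediation window (vulnerability age) in days.
from datetime import date

vulns = [
    {"detected": date(2018, 1, 5), "fixed": date(2018, 1, 25)},
    {"detected": date(2018, 2, 1), "fixed": date(2018, 3, 15)},
]

ages = [(v["fixed"] - v["detected"]).days for v in vulns]
print(sum(ages) / len(ages))  # 31.0 days of effective 'internal zero-day'
```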

3. Vulnerability Trend by Month: This helps in seeing how vulnerabilities are introduced into the software month on month; whether you are improving, getting worse, or just fluctuating ad hoc.

4. Vulnerability Density: This helps in deciding how vulnerable your software is. It can be calculated not just for source code, but also for dynamic assessments and infrastructure security assessments (a sketch of the source-code variant follows).
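
One common way to compute it for source code is confirmed vulnerabilities per thousand lines of code (KLOC). The per-KLOC convention here is an assumption; use whatever unit your organization standardizes on.

```python
# A sketch: vulnerability density as confirmed findings per KLOC.
def vuln_density(confirmed_vulns: int, lines_of_code: int) -> float:
    return confirmed_vulns / (lines_of_code / 1000)

print(vuln_density(18, 120_000))  # 0.15 vulnerabilities per KLOC
```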

5. Vulnerability Distribution by status, severity and category: Status is nothing but the stage your vulnerability is in once it’s created in the vulnerability management system; it could be New/Unresolved, Fixed, False Positive, etc. Severity is the risk rating that you assign; it could be a three-point rating like High/Medium/Low, or your own customized rating. Category helps in identifying the concentration of vulnerabilities. Let’s say that 70% of the vulnerabilities are in security configuration and 30% are in authorization; that helps you decide where to channel your energy.

6. Rejection Rate: This again is an indicator of how vulnerable and exploitable the software is.

7. Tool Efficiency: One thing that surprises me is that organizations buy a lot of tools and don’t use them to their best effect. The tools either remain idle, or the organization concludes that the purchase didn’t work out. Unless you put a metric in place to measure the efficiency of the tool, you cannot assume anything here.

8. Compliance Percentage: This helps track compliance with respect to PCI, HIPAA, etc.

9. Defect Recurrence Rate: If a closed vulnerability recurs due to a broken fix, how do you know whether it’s the same vulnerability that was reported earlier, or whether the application development and maintenance (ADM) team doesn’t actually know how to fix the defect (one fingerprinting approach is sketched below)?
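
One way to tell a recurrence from a genuinely new issue is to fingerprint each finding on attributes that survive rescans. The attribute choice below (category, file, sink) is my own assumption; a minimal sketch:

```python
# A sketch: fingerprint findings so a recurrence of a closed issue is
# recognized as the same defect. The attribute choice is an assumption.
import hashlib

def fingerprint(category: str, file_path: str, sink: str) -> str:
    key = f"{category}|{file_path}|{sink}".encode()
    return hashlib.sha256(key).hexdigest()[:16]

closed = {fingerprint("SQLi", "app/dao/UserDao.java", "executeQuery")}
new_finding = fingerprint("SQLi", "app/dao/UserDao.java", "executeQuery")
print(new_finding in closed)  # True -> recurrence, not a new issue
```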

10. False Positives/Negatives: This again is tied to your tool and to the manual analysis you do. Always choose a tool that gives the best trade-off between false positives and false negatives (the standard rate definitions are sketched below).
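
For reference, here is the standard way to express these rates; the counts come from comparing tool output against your manual triage.

```python
# A sketch: false positive and false negative rates from triage counts.
def fp_rate(false_pos: int, true_neg: int) -> float:
    return false_pos / (false_pos + true_neg)

def fn_rate(false_neg: int, true_pos: int) -> float:
    return false_neg / (false_neg + true_pos)

print(fp_rate(30, 100))  # 0.23...: 30 false alarms out of 130 clean spots
print(fn_rate(5, 45))    # 0.1: the tool missed 5 of 50 real issues
```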

 

 

 

How do you refine time spent on application security scans?

A technocrat I respect asked me this question: “Year on year, you do scans using Fortify, WebInspect, AppScan, etc. But the scan time is always the same. Why can’t you refine it?”

I replied, “Even brushing my teeth, I have been doing for three decades, and I still take the same amount of time. I am dead scared to automate that process.” Though he took this in good humour and we laughed over it, I did tell him that scans can indeed be refined and effort cut down. But it is not as though you did a scan in 2 hours this year and, next year, in the name of productivity, you can do it in 30 minutes. That kind of blind refinement doesn’t exist.

So, what exactly can you do to cut down scan time? To know this, you should first understand why a certain application with X number of pages takes Y amount of time to scan. All the tools you use are essentially automated script engines that spider your application and replay certain rules/malicious vectors against it. So:

  1. The more web pages in your scan, the longer the scan takes.
  2. The more input fields in your site, the longer it takes to execute the rules, as the attack steps have to be tried per input field.
  3. The more complexity in your site, the more time it takes. For example, if the application has a file upload feature, a CAPTCHA or a dynamic generation script, each is going to take a certain amount of time.

These three parameters are not exactly in your hands, and tweaking them will reduce the quality of your scan output. So what can you reduce?

  1. Get the best infrastructure possible. Don’t expect a scan to run on 8 MB of RAM; go for the maximum allowed in your organization. If you are using a dual-core processor, ask for quad-core or better.
  2. All scan engines write temporary files and log files to the drive where the OS lives. Change this default so that the system doesn’t slow down as the log file grows. If the OS is on C:\, point the log file settings to another drive.
  3. Policy -> WebInspect uses the ‘Standard’ policy by default and AppScan uses ‘Complete’. But if you go into these policies and inspect them, you will realize that they contain a bunch of automated attack vectors to be executed; they may include checks for a Struts vulnerability as well as a PHP WordPress-related vulnerability. So, if you are really sure about the application you are testing, are experienced enough and can exercise sound judgement, the policy can be refined to cater to your application’s landscape. I have tried this on applications and had the scan time drop by more than half.
  4. Threading -> The more threads your tool uses, the sooner your scan completes, but this comes at the cost of CPU usage. If the tool looks like it is crashing, reduce the number of threads.
  5. Scan Configuration Parameters -> There are other parameters that let you test a page only once per attack, once per unique parameter, or repeatedly for every parameter. If the customer’s ultimate goal is a shorter scan and quality can be compromised, you can try this out, but you will miss issues on the parameters that are skipped.
  6. Rule Suppression, Sanitization and Others -> What if some code issue is already fixed at the deployment level but the tool keeps flagging it? One good example is the parseDouble() issue. In this case, you can write a suppression rule at the rule pack level so that the issue is suppressed and you don’t have to waste time analyzing it later.
  7. Last but not least -> Schedule your scans to run during non-work hours. If the application goes down during the scan, you will have no one to support you; but if you are running against your own instance, this works well. In one project I worked on, we had to share the same instance with the performance engineering team, and hence we opted for a different timeslot.

Do you have any other measures that can reduce scan time?

 

 

 

Effort Estimation Model for DAST and SAST

Most often during my pre-sales work, I am asked to derive the effort estimation for DAST (Dynamic Application Security Testing) and SAST (Static Application Security Testing). These two testing methodologies are not new to the software development life cycle and are almost always performed when a web application is internet-facing. Read on if you are a customer (software owner) who requires DAST and SAST services for your application, or a service provider who wishes to offer this service.
