Application Security Architecture Review

Application Security Architecture Review is a security activity performed after the application architecture has been defined and drafted, but before detailed design starts. Most often, I get called in to do an application security architecture review only to discover that the people who asked for it have no clear idea of what they actually want. This activity does not propose a security architecture; it reviews the architecture that already exists.


This activity is not done during your development or testing phase; it belongs in the initial stages, even before you start coding. The longer you delay considering security for your software, the more it costs to fix the security issues that later come out of the product.


You have put forth your business specification, drawn out the functional requirements based on it, and drafted the technical specification. For a given technical specification, how do you know that the technical controls chosen are fit for use from a security standpoint? For example, let's say you want to use Tomcat 7.0 as the application server. How do you know that this version of Tomcat has no known security vulnerabilities? Or let's say you want users to register on your site through a registration module. Since you would be allowing anonymous users to use this form, you may also need a CAPTCHA, even though this is neither a business need nor part of the functional specification. The CAPTCHA control is a security need: it keeps your form from being abused by bots.


Now that you know why this activity should be done, let's define what is needed to do it.

PRE-REQUISITES: A business specification (BSD), a functional specification (FSD) and a technical specification (TS) document as inputs from your customer. As a security consultant, you will also need an application security architecture review checklist for the given technology, so that you can go through the specifications and review the security controls for every security domain.


  1. Get the documents first and go through them. Note down grey areas where you either don't understand something or need more clarification.
  2. Set up a meeting and clarify all open questions. Sample questions: what are the entry paths to the application, what is the data validation approach, what privilege levels (roles) are defined, what is the nature of the data used in the application (data classification), etc.
  3. Analyze the entire specification and requirements based on the clarifications.
  4. Review the architecture against application security architecture review checklists for the given technology.
  5. Provide your recommendations for mitigation.
  6. Provide the report.
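The review step in the list above can be sketched as a simple data-driven check. This is a minimal illustration, not a real checklist: the security domains, the individual checks and the `findings` input format are all hypothetical placeholders for whatever checklist your organization maintains for the given technology.

```python
# Hypothetical architecture review checklist: security domain -> checks.
# In practice this would come from your technology-specific checklist document.
CHECKLIST = {
    "Authentication": ["Password policy defined", "Account lockout specified"],
    "Input Validation": ["Server-side validation approach documented"],
    "Session Management": ["Session timeout specified"],
}

def review(findings):
    """findings maps a check to True (control present) or False (gap).

    Returns (domain, check, status) tuples; unanswered checks count as gaps.
    """
    report = []
    for domain, checks in CHECKLIST.items():
        for check in checks:
            status = "PASS" if findings.get(check, False) else "GAP"
            report.append((domain, check, status))
    return report

# Only one control was confirmed in the specification, so three gaps remain.
gaps = [r for r in review({"Session timeout specified": True}) if r[2] == "GAP"]
```

The gaps list then feeds directly into steps 5 and 6: each gap becomes a recommendation in the final report.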


Vulnerability Management Metrics

As my organization's representative for the 'Application Security Service Line', I often give presentations to customers who are the key owners of application security in their respective organizations. At times, this includes CISOs as well.

During these sessions, the most frequently asked question is what kind of value we can add in enabling them to manage all their vulnerabilities, automate security, and at the same time help them improve the security posture of their applications year on year.

To do this, one first needs a clear understanding of where they are and where they need to go. Now, how does one know where they are? You first need a complete understanding of the assets you are trying to protect, your regulatory and compliance needs, your organizational policies, what you need to protect, and why. The which, when and how come later, with the help of application security metrics.

First I will list some of the metrics, and then explain how to use them to figure out where you are.

Application Security Metrics:

  1. Security Coverage
  2. Remediation Window or Vulnerability Age
  3. Vulnerability Trend by Month
  4. Vulnerability Density
  5. Vulnerability Distribution by severity, by status, by category.
  6. Rejection Rate
  7. Tool Efficiency
  8. Compliance Percentage
  9. Defect Recurrence Rate
  10. False Positive/Negative Rate

Though there are more, we will cover these 10 for now.

  1. Security Coverage -> This helps you decide whether you are actually doing security assessments for all your assets. To do this, you first need a proper inventory of all your assets. Assets could be your applications, endpoints, network devices or other IT infrastructure.

Unless you have total control over your entire asset inventory, it is impossible to protect it. A well-known organization I worked with had a proper SDLC process, but their HR department floated a website through a third-party vendor, bypassing the release management process. The website was later hacked.

So, inventory all your assets first and get them through the SDLC/release management process. Then put them through security assessments to verify that you are actually covering everything.
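Once the inventory exists, the metric itself is just the ratio of assessed assets to inventoried assets. A minimal sketch, with made-up asset names:

```python
def security_coverage(assessed, inventory):
    """Percentage of inventoried assets that went through a security assessment."""
    if not inventory:
        return 0.0
    covered = set(assessed) & set(inventory)
    return 100.0 * len(covered) / len(set(inventory))

# Hypothetical example: 2 of 4 inventoried assets were assessed -> 50% coverage.
coverage = security_coverage(
    assessed=["hr-portal", "payments-api"],
    inventory=["hr-portal", "payments-api", "careers-site", "vpn-gateway"],
)
```

The assets missing from the intersection (here, the careers site and the VPN gateway) are exactly the ones that need to be pulled into the assessment pipeline next.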


2. Remediation Window or Vulnerability Age: This is the time it takes a team to fix a vulnerability after it is detected. It can also be called your effective 'internal zero day', since you know the vulnerability is there but a patch is not in place yet.
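As a sketch, the window is just the number of days between detection and fix, averaged across vulnerabilities (the dates below are invented for illustration):

```python
from datetime import date

def remediation_window_days(detected, fixed):
    """Days a vulnerability stayed open: its effective 'internal zero day'."""
    return (fixed - detected).days

def average_window(vulns):
    """Mean remediation window over (detected, fixed) date pairs."""
    windows = [remediation_window_days(d, f) for d, f in vulns]
    return sum(windows) / len(windows)

# Two example vulnerabilities: open for 10 and 20 days respectively.
avg = average_window([
    (date(2017, 1, 2), date(2017, 1, 12)),
    (date(2017, 2, 1), date(2017, 2, 21)),
])
```

Tracking this average per team or per severity level shows whether your internal zero-day exposure is shrinking over time.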

3. Vulnerability Trend by Month: This helps in seeing how vulnerabilities are introduced into the software month on month; whether you are improving, getting worse, or whether it's just ad hoc.

4. Vulnerability Density: This helps in judging how vulnerable your software is. It can be calculated not just for source code, but also for dynamic assessments and infrastructure security assessments.
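For source code, density is commonly expressed as vulnerabilities per thousand lines of code (KLOC); for dynamic or infrastructure assessments you can swap KLOC for another size measure such as pages or hosts. A minimal sketch with made-up numbers:

```python
def vulnerability_density(vuln_count, size):
    """Vulnerabilities per unit of size.

    For static analysis, size is typically KLOC (thousands of lines of code);
    for dynamic or infrastructure assessments, substitute pages, hosts, etc.
    """
    return vuln_count / size

# Hypothetical example: 42 findings in a 120 KLOC codebase.
density = vulnerability_density(42, 120)  # vulnerabilities per KLOC
```

Comparing densities across applications of very different sizes is what makes this metric more useful than raw finding counts.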

5. Vulnerability Distribution by status, severity and category: Status is nothing but the stage your vulnerability is in once it is created in the vulnerability management system. It could be New/Unresolved, Fixed, False Positive, etc. Severity is the risk rating you assign; it could be a 3-point rating like High/Medium/Low, or your own customized rating. Category helps in seeing the concentration of vulnerabilities. Let's say that 70% of the vulnerabilities are in security configuration and 30% are in authorization. That helps in deciding where to channel your energy.
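All three distributions are the same computation over different fields. A minimal sketch, with an invented set of findings:

```python
from collections import Counter

def distribution(vulns, field):
    """Percentage breakdown of vulnerabilities by a given field."""
    counts = Counter(v[field] for v in vulns)
    total = sum(counts.values())
    return {k: 100.0 * n / total for k, n in counts.items()}

# Hypothetical findings; in practice these come from your vulnerability
# management system's export.
vulns = [
    {"severity": "High",   "category": "Security Configuration"},
    {"severity": "Low",    "category": "Security Configuration"},
    {"severity": "Medium", "category": "Authorization"},
    {"severity": "High",   "category": "Security Configuration"},
]
by_category = distribution(vulns, "category")
by_severity = distribution(vulns, "severity")
```

Here 75% of the findings concentrate in security configuration, which is the signal telling you where to channel your energy first.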

6. Rejection Rate: This again is an indicator of how vulnerable and exploitable the software is.

7. Tool Efficiency: One thing that surprises me is that organizations buy a lot of tools and don't use them to their best. The tools either remain idle, or the organization concludes that the purchase didn't go well. Unless you put a metric in place to measure the efficiency of a tool, you cannot assume anything here.

8. Compliance Percentage: This helps in tracking compliance with respect to PCI DSS, HIPAA, etc.

9. Defect Recurrence Rate: If a closed vulnerability recurs due to a broken fix, how do you know whether it is the same vulnerability that was reported earlier, or whether the application development and maintenance (ADM) team doesn't actually know how to fix the defect?

10. False Positives/Negatives: This again is tied to your tool and to the manual analysis you do. Always choose a tool that gives the best trade-off between false positives and false negatives.




How do you refine time spent on application security scans?

A technocrat I respect asked me this question: "Year on year, you do scans using Fortify, WebInspect, AppScan, etc. But the scan time is always the same. Why can't you refine it?"

I replied, "I have been brushing my teeth for three decades, and I still take the same amount of time. I am dead scared to automate the process." Though he took this in good humour and we laughed over it, I did tell him that scans can indeed be refined and effort cut down. But it is not as if you did a scan in 2 hours this year and, wanting to increase productivity, can simply demand that it finish within 30 minutes next year. That kind of blind refinement doesn't exist.

So, what exactly can you do to cut down scan time? First, you should understand why an application with X number of pages takes Y amount of time to scan. All the tools you use are essentially automated script engines that spider your application and replay requests with certain rules/malicious vectors. So:

  1. The more web pages in your application, the longer the scan.
  2. The more input fields in your site, the more time it takes to execute the rules, since each attack step has to be tried per input field.
  3. The more complexity in your site, the more time it takes. For example, a file upload feature, a CAPTCHA, or a dynamically generated script each adds a certain amount of time.

These three parameters are not really in your hands, and tweaking them will reduce the quality of your scan output. So what can you reduce?

  1. Get the best infrastructure possible. Don't expect a scan to run on a machine with minimal RAM; go for the maximum allowed in your organization. If you are using a dual-core processor, ask for quad-core or better.
  2. All scan engines write temporary files and log files to the drive where the OS is installed. Change this default setting so that the system doesn't slow down as the log file grows. If the OS is on C:\, you can point the log files to another drive.
  3. Policy -> WebInspect uses the 'Standard' policy by default and AppScan uses 'Complete'. But if you go into these policies and inspect them, you will realize that they contain a bunch of automated attack vectors that all need to be executed. They may include checks for a Struts vulnerability as well as a PHP WordPress-related vulnerability. So, if you are really sure about the application you are testing, are experienced enough, and can exercise sound judgement, the policy can be trimmed to match your application's technology landscape. I have tried this on applications and had the scan time reduce by more than half.
  4. Threading -> The more threads your tool uses, the sooner it will complete your scan. But this comes at the cost of CPU usage; if it looks like the tool is about to crash, reduce the number of threads.
  5. Scan Configuration Parameters -> There are other parameters that let you test a page only once per attack, once per unique parameter, or repeatedly for every parameter. If the customer wants the scan time reduced above all else and quality can be compromised, you can try this out. But you will miss issues that exist only at specific parameters.
  6. Rule Suppression, Sanitization and Others -> What if there is a code issue that is already fixed at the deployment level but the tool keeps flagging it? One good example is the parseDouble() issue. In this case, you can write a suppression rule at the rule-pack level so that the issue is suppressed and you don't have to waste time analyzing it later.
  7. Last but not least -> Schedule your scans to run during non-work hours. If the application goes down during the scan, you will have no one to support you; but if you are running against your own instance, this works. In one project I worked on, we had to share the same instance with the performance engineering team, and hence opted for a different timeslot.
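The off-hours scheduling in point 7 can be sketched in a few lines. The scanner command line below is purely hypothetical; substitute the real CLI invocation (or scheduler API) of whatever tool you use, and in production you would more likely use cron or the tool's built-in scheduler.

```python
import datetime
import subprocess

# Hypothetical scanner invocation; replace with your tool's actual CLI.
SCAN_COMMAND = ["scanner-cli", "--target", "https://staging.example.com"]

def in_off_hours(now, start_hour=22, end_hour=6):
    """True between 22:00 and 06:00, when the instance is least used."""
    return now.hour >= start_hour or now.hour < end_hour

def maybe_start_scan(now=None):
    """Launch the scan only during the agreed off-hours window."""
    now = now or datetime.datetime.now()
    if in_off_hours(now):
        return subprocess.Popen(SCAN_COMMAND)  # fire and forget
    return None
```

The window boundaries would be whatever you negotiate with the other teams sharing the instance, as in the performance-engineering example above.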

Do you have any other measure that can reduce scan time?




Effort Estimation Model for DAST and SAST

Most often during my pre-sales work, I am asked to derive the effort estimation for DAST (Dynamic Application Security Testing) and SAST (Static Application Security Testing). These two testing methodologies are not new to the software development life cycle and are almost always performed when a web application is internet-facing. Read on if you are a customer (software owner) who requires DAST and SAST services for your application, or a service provider who wishes to offer this service.


What to do when your XSS attack vector is converted into CAPITAL letters by the application?

We keep encountering many kinds of unintended filters that applications apply to user input. One of them is presenting all user input in CAPITAL letters. Even if the application does no input validation, our normal XSS attack vector doesn't work in this scenario.

Here is an example. You would normally have injected an inline payload such as `<script>alert(document.cookie);</script>`.

In this case, the application converts it into `ALERT(DOCUMENT.COOKIE)`. As JavaScript is case-sensitive, the alert fails to pop up. Below are the options you can try in this case.

Option 1: If VBScript is supported (legacy Internet Explorer only), try a VBScript-based vector. Since VBScript is case-insensitive, the uppercasing should not matter; a classic vector would be something like `<SCRIPT LANGUAGE="VBSCRIPT">MSGBOX(DOCUMENT.COOKIE)</SCRIPT>`.


Option 2: Try loading external JavaScript. The injected tag and URL will get uppercased, but host names are case-insensitive (and you can name the file in capitals on your own server), while the script file's contents are fetched as-is and therefore execute normally. If your target application is behind a firewall, you can host your own JS file on an internal network host and try loading it.

If the above two options don't work, you can try iframe or img src tags to inject your attack vector. There are more ingenious tricks beyond these, but those are for rare cases. Hope this tip helped you.

OWASP Top 10 – 2017 – Release Candidate

The OWASP Top 10 – 2017 may be finalized in July or August this year, but I had a chance to look at the release candidate version.

Some Changes:

  1. The category 'Unvalidated Redirects and Forwards' has been dropped.
  2. The categories 'Insecure Direct Object References' and 'Missing Function Level Access Control' have been merged. So whether the issue is with the data or with the functionality, that difference no longer matters.
  3. Two new categories have made it into the Top 10: 'Insufficient Attack Protection', which aims at detecting and deterring automated attacks against applications, and 'Underprotected APIs', which targets issues in APIs such as REST and JSON-based services.




Acunetix Version 11

I got an opportunity to look into Acunetix version 11. With this version, they have moved to a web-based product, which is kind of good. Looking into it, these are the positive vibes I get:

  • I can exclude certain hours in the scan configuration. Say I don't want the scan to run during my night time; I can set that.
  • Likewise, if I need manual intervention for a CAPTCHA, there are options for that.

But that's it. I could not find another feature that would make me go gaga over Acunetix.

  1. Their scanning profiles actually look scary, as I don't know which rules are part of the Complete scan and which are part of the High Risk scan. I can't seem to customize either.
  2. I had a lot more control over scan and application configuration with the desktop-based product than with the web version. Though I realize that many utilities that shipped with the desktop version are now separate freebies, the web version looks kind of empty.
  3. I can't figure out how to pause and resume a scan. The desktop version had it.
  4. Detailed logging, the Google Hacking Database and many fine-tuning options all seem to have gone missing.

A disappointing build, I should say. I will probably wait for the next one.