This is something that I worked on last year when stakeholders in the risk management group wanted to measure the success of the Application Security Program.
But how do you measure application security, or rather the success of an application security center of excellence program? What can tell you that it is working? Is it OK to allocate the same budget every year? Should it be reduced? How would one know? Is the program on track? Is it improving? Just by having a secure SDLC process, doing secure code analysis, and running security testing, one cannot say they have a sustainable application security program. To sustain any task or activity, one needs to know where they are and where they need to reach. And that is exactly what application security metrics will give you.
What should be done first? Answer: Inventory.
1. Take an inventory of your assets first. Whether an asset is secure, insecure, or you don't even know what it is used for, it doesn't really matter at this stage. It is amazing to ask any CISO whether he has a fair idea of how many assets the organization has. Here we are not getting into hardware or software assets, just the basic web applications/services that an org's IT floats on the internet or intranet.
Once the inventory is finalized, come up with an asset classification using a risk-based approach. Some assets could be critical, some public. Some could be accessed by everyone, and some only within a closed, trusted environment. Some assets are used by millions of users, and some by just the CISO (ya, you read it right. His dashboard).
2. Once the assets are classified, go figure out the security processes applied to each of them. Did all applications undergo all aspects of the secure SDLC?
In other words, 'Security Coverage'. Say you do code analysis for only 50 of your 100 applications: your coverage is 50%, and you have no idea about the rest of the apps. With this simple metric, it becomes fairly clear what one needs to do next.
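A minimal sketch of that coverage calculation, assuming a simple inventory that records which secure-SDLC activities each application has undergone (the app and activity names below are purely illustrative):

```python
# Hypothetical inventory: app name -> set of secure-SDLC activities completed.
inventory = {
    "payments-portal": {"code_analysis", "pen_test"},
    "hr-intranet": {"code_analysis"},
    "partner-api": {"pen_test"},
    "ciso-dashboard": set(),  # never assessed at all
}

def security_coverage(inventory, activity):
    """Percentage of inventoried apps that underwent the given activity."""
    covered = sum(1 for acts in inventory.values() if activity in acts)
    return 100.0 * covered / len(inventory)

print(security_coverage(inventory, "code_analysis"))  # 2 of 4 apps -> 50.0
```

The point of the metric is that it only works after step 1: without a finalized inventory, the denominator is unknown and any coverage number is meaningless.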
3. If an organization is spending a lot of effort on architectural analysis, secure code analysis, security testing, vulnerability assessment, and penetration testing, the metric 'Cost-to-Fix' can be added. Say a vulnerability like SQL injection is found only during penetration testing of the project. Naturally, that means the earlier processes are broken, the cost to fix it is higher, and time-to-market goes up.
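One way to sketch 'Cost-to-Fix' is to weight each finding by the phase in which it was caught, so late catches dominate the total. The phase names and cost multipliers below are illustrative assumptions, not industry-standard figures:

```python
# Hypothetical relative cost of fixing an issue, by the phase that caught it.
# Later phases cost more; the multipliers here are made up for illustration.
PHASE_COST = {"design": 1, "code_analysis": 5, "security_testing": 10, "pen_test": 30}

def cost_to_fix(findings):
    """Total phase-weighted cost; findings is a list of (issue, phase_found)."""
    return sum(PHASE_COST[phase] for _, phase in findings)

findings = [("sql_injection", "pen_test"), ("xss", "code_analysis")]
print(cost_to_fix(findings))  # 30 + 5 = 35
```

Tracked per release, a rising score signals that issues are slipping past the earlier, cheaper gates.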
4. Another parameter that could be of interest is 'Mean-Time-To-Repair'. How long does it take to fix a particular category of issue? Is there a better approach? Is the fix even needed?
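Mean-Time-To-Repair per category can be computed directly from discovery and fix dates. A minimal sketch, assuming each tracked record carries a category plus found/fixed dates (the records below are invented):

```python
from datetime import date

def mttr_days(records, category):
    """Average days from discovery to fix for one category of issue."""
    deltas = [(fixed - found).days
              for cat, found, fixed in records if cat == category]
    return sum(deltas) / len(deltas)

# Hypothetical tracking data: (category, date_found, date_fixed).
records = [
    ("sql_injection", date(2024, 1, 1), date(2024, 1, 11)),
    ("sql_injection", date(2024, 2, 1), date(2024, 2, 21)),
    ("xss", date(2024, 3, 1), date(2024, 3, 4)),
]
print(mttr_days(records, "sql_injection"))  # (10 + 20) / 2 = 15.0
```

A category with a stubbornly high MTTR is exactly where the "is there a better approach?" question pays off.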
5. How many vulnerabilities do you have in total? How many are you tracking? What is the vulnerability-to-source-code ratio? What is the defect injection ratio? These numbers can easily tell you the success of your security certification program.
6. While the defect injection ratio is one parameter that can be used to determine the success of code analysis, the defect detection ratio and turnaround time are the best bets during the security testing phase.
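The two ratios above can be sketched as simple quotients. This assumes you track lines of code (in KLOC) per release and which stage caught each defect; the sample numbers are illustrative:

```python
def injection_ratio(defects_introduced, kloc):
    """Security defects injected per thousand lines of code in a release."""
    return defects_introduced / kloc

def detection_ratio(found_by_stage, total_found):
    """Share of all known defects that a given stage (e.g. code analysis) caught."""
    return found_by_stage / total_found

# Hypothetical release: 12 security defects across 40 KLOC,
# 9 of which were caught by code analysis before security testing.
print(injection_ratio(12, 40))   # 0.3 defects per KLOC
print(detection_ratio(9, 12))    # 0.75 of defects caught early
```

A falling injection ratio over releases suggests developer training is working; a falling detection ratio suggests the analysis stage is losing ground.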
7. A comparison of how your web applications fare against the OWASP Top 10 or SANS Top 25 is always good. But this parameter depends on how good your certifying body is. Just because your certifying body didn't find any denial-of-service conditions doesn't mean all your web applications will always be available.
8. Tool Efficiency – this parameter can be used while evaluating a new security tool on the market. Some parameters to look for are the number of false positives, the number of false negatives, how long a scan takes, and how long it takes you to figure out that a finding is a false positive.
9. What is your security bar for applications? Set a minimum threshold that every application must clear before it ships, so that what goes out contains no more than an acceptable minimum of vulnerabilities.
10. Last but not least, 'Security Fixes Per Release'. This is nothing but the actual security incidents that happen. If all goes well and you still get hacked, it either means that you are irresistible or your application security program does not work!