RSA ClearTrust Account Lockout Policy

By default, RSA ClearTrust provides the option to lock accounts after five consecutive failed authentication attempts within one day. Likewise, the system can unlock users automatically after a specified amount of time, or leave unlocking to the system administrator.

The above configuration seems foolproof, and one wouldn’t see anything wrong with it. That was until I stumbled upon a specific configuration in one application, where the development team had set up account lockout as follows.

“Lock accounts after 3 consecutive failed attempts within 2 minutes.” I didn’t even know this was possible. At a glance it looks even more promising, since such a setting can catch a robot early. But wait: because the failure counter resets every couple of minutes, a malicious insider or a person known to the victim can abuse the system every few minutes, guessing a password or two per window, and eventually capture the password. You don’t have to be a robot to pull this off.
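To see why the short window weakens the policy, here is a rough back-of-the-envelope sketch in Python. The numbers come from the setting above; the counter-reset behavior is my assumption about how such windowed counters typically work:

```python
# Rough sketch: how many guesses a patient attacker gets per day under
# "lock after 3 consecutive failed attempts within 2 minutes", assuming
# the failure counter resets once the 2-minute window expires.
WINDOW_SECONDS = 2 * 60
LOCKOUT_THRESHOLD = 3
SAFE_GUESSES_PER_WINDOW = LOCKOUT_THRESHOLD - 1  # stay one below the limit

windows_per_day = 86_400 // WINDOW_SECONDS
guesses_per_day = windows_per_day * SAFE_GUESSES_PER_WINDOW
print(guesses_per_day)  # 1440 guesses per day, without ever locking the account
```

Compare that with the default policy of five failures per one-day window, which gives a careful attacker only four "safe" guesses per day.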

I shared my thoughts with the AD team, and they removed the interval option from their settings. What are your thoughts?





Developer Tools and Proxy Chaining

What do IE developer tools and proxy chaining have in common? Nothing, other than the fact that I learnt about both today.

Earlier, when I had to do authorization-level attacks while logged in as a low-privileged user, I used to construct the whole HTTP request in proxies and send it to Burp Repeater, tweaking it until I got the response I wanted. A colleague who happens to be a SharePoint developer also told me how to invoke JavaScript directly, even if it’s not linked from anywhere within the HTML. Enter ‘IE Developer Tools: F12’.

That made my work easier, and instead of using Burp, I used Developer Tools this time to show a proof of exploit. The development team was happy too, as they were able to replicate the scenario much better.

Proxy Chaining: I have been doing this all along for 3 years without knowing that there is a specific term for it. I was having a proxy configuration problem with Acunetix: it just didn’t connect to the site even though the proxy configuration details were right. I put Burp Suite between Acunetix and the site and, voila, it worked.

This is called ‘Proxy Chaining’, it seems. NICE!!

Application Security Metrics

This is something that I worked on last year when stakeholders in the risk management group wanted to measure the success of the Application Security Program.

But how do you measure application security, or rather the success of an application security center of excellence program? What can tell you that it is working? Is it OK to allocate the same budget every year? Should it be reduced? How would one know? Is the program on track? Is it improving? By merely having a secure SDLC process and doing secure code analysis and security testing, one cannot claim to have a sustainable application security program. To continue any task or activity, one needs to know where to reach and where one currently stands. That is what application security metrics give you.

What should be done first? Answer: Inventory.

  1. Take an inventory of your assets first. Whether an asset is secure, insecure, or you don’t even know what it is used for, it doesn’t really matter. It is amazing to ask any CISO whether he has a fair understanding of how many assets the organization has. Here, we are not getting into hardware or software assets, but just the basic web applications and services that an org’s IT floats on the internet or intranet.

Once the inventory is finalized, come up with an asset classification using a risk-based approach. Some assets could be critical, some public. Some assets could be accessed by all, and some only within a closed, trusted environment. Some assets are used by millions of users, and some are used just by the CISO (ya, you read it right: his dashboard).

  2. Once the inventory is finalized, go and figure out your security processes for each of your assets. Did all applications undergo all aspects of the secure SDLC?

In other words, ‘Security Coverage’. Let’s say you do code analysis for only 50 of your 100 applications; then your coverage is only 50%, and you have no idea about the rest of the apps. With this simple metric, it becomes fairly clear what one needs to do.
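Once the inventory exists, the coverage metric is trivial to compute. Here is a minimal sketch; the application names and activity labels are hypothetical:

```python
# Hypothetical inventory: which security activities each application has undergone.
inventory = {
    "portal":   {"code_analysis": True,  "security_testing": True},
    "billing":  {"code_analysis": True,  "security_testing": False},
    "intranet": {"code_analysis": False, "security_testing": True},
    "reports":  {"code_analysis": False, "security_testing": False},
}

def coverage(activity: str) -> float:
    """Percentage of applications that have undergone the given activity."""
    done = sum(1 for app in inventory.values() if app[activity])
    return 100.0 * done / len(inventory)

print(coverage("code_analysis"))  # 50.0 -> code analysis covers half the portfolio
```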


Scanning Large or Highly Dynamic Sites

An acquaintance of mine was saying that he didn’t find automated vulnerability assessments useful and that they were really a waste of time. He was spending his time scrubbing false positives and stated that he would rather focus his energy on manual security testing. But compliance mandates that we run automated assessments. A manual assessment offers no high level of assurance, due to the human factor involved; there is always a chance of lapses or oversight. So, we had to mandate automated assessments in addition to manual testing.

I also decided to help him fine-tune his scan configuration so that his scans produce better results next time. If you are going through a similar situation, the below options in your scan tool can help you, whether you are using HP WebInspect, Acunetix, or IBM AppScan.

  1. Limit Maximum Single URL Hits:

Highly dynamic sites can have URLs like, say, example.com/article?id=1, where the article id can range from 1 to even millions. In this case, if you let your scanning tool crawl all of these URLs, your scan will never complete.

For a typical site, it is better to limit this to 3 or even fewer.

  2. Include query parameters in the hit count while limiting single URL hits.

What if your application takes a different action depending on a parameter passed to a single URL? Take the case of a URL like example.com/page?action=add. Here, action can take values like add, edit, delete, archive, import, or export.

In this specific case, you want your scan tool to hit all six action cases. If your site is structured like this, it makes sense to include query parameters in the hit count while you limit single URL hits. In this example, the limit would be 6 instead of 3.
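To make the idea concrete, here is a minimal sketch of the hit-count bookkeeping a crawler applies; the URL shapes are hypothetical, and real scanners implement this internally:

```python
from collections import Counter
from urllib.parse import urlsplit

MAX_HITS = 3
INCLUDE_QUERY = True  # treat ?action=add and ?action=edit as distinct URLs

hits = Counter()

def should_crawl(url: str) -> bool:
    # Count hits per URL; with INCLUDE_QUERY each distinct query string
    # gets its own budget, so every action value is exercised.
    parts = urlsplit(url)
    key = (parts.path, parts.query if INCLUDE_QUERY else "")
    hits[key] += 1
    return hits[key] <= MAX_HITS

print(should_crawl("http://example.com/page?action=add"))     # True
print(should_crawl("http://example.com/page?action=delete"))  # True: separate budget
```

With INCLUDE_QUERY set to False, both calls would share one budget of 3, and the scan could miss most of the action cases.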

  3. Limit maximum CRAWL directory depth.

If your site is structured like, say, example.com/dir/subdir/subsubdir/ and so on, and the code and content displayed in each of these sub-directories is virtually identical, it makes sense to limit the maximum crawl depth. By doing so, the scan tool will not endlessly crawl all possible sub-directories.

Set your default option to 3 and no more. If you know for certain that all your code is in a single directory, you can even reduce this to 1.


  4. Limit maximum CRAWL count.

This always happens with a content management system, like a newspaper site. The content can span hundreds of millions of pages, and the scan will never complete. In such situations, it is better to limit the crawl count.


  5. Limit maximum web form submissions.

What if your web form has drop-downs with countries, the states of every possible country, and even cities? You cannot have your scan tool submit the web form for every possible permutation and combination of country, state, and city. Limit maximum web form submissions to 1 or 2.

Most scan tools also come with options such as ‘Limit Scan Folder’, ‘Enable or Disable Scan Logs’, and ‘Limit link traversal depth’, and even let you modify their scan policies so that you can skip running certain audits. Remember, a scanner is just a tool, and it is up to the tester to use it to her/his best.

Insider Threat – Detecting an Inside Job

In our application security engagements, we frequently look for security loopholes present in the application source code. Most of these loopholes happen because of certain assumptions in the application architecture, high-level design, and implementation. But certain loopholes are left in the application source code intentionally, and for this reason these inside jobs are considered more malicious.

For example, let’s say the application has a mail routine that sends billing details (credit card information) to the invoice admin. What if an employee on the development team adds his own email address to the mail routine so that he can learn the credit card numbers and transactions for financial gain? Every time the invoice admin receives an email, this employee would also receive the information via BCC. Since he is on the BCC list, the invoice admin would not notice. The security consultant might not notice this either, mistaking it for a functional requirement.
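As an illustration, the malicious change can be a single line that is easy to wave through in a code review. The sketch below is hypothetical Python; the addresses and routine name are invented, not code from any real engagement:

```python
from email.message import EmailMessage

INVOICE_ADMIN = "invoice-admin@example.com"  # hypothetical addresses
INSIDER = "insider@example.com"

def build_invoice_mail(billing_details: str) -> EmailMessage:
    msg = EmailMessage()
    msg["From"] = "billing@example.com"
    msg["To"] = INVOICE_ADMIN
    msg["Subject"] = "Customer invoice"
    # The inside job: one hard-coded BCC the invoice admin never sees.
    msg["Bcc"] = INSIDER
    msg.set_content(billing_details)
    return msg
```

Nothing in the visible mail flow changes; only an inspection of the message headers or the source code reveals the extra recipient.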

For this reason, Fortify has come up with a rulepack called ‘Insider Threat Rulepack’. This rulepack detects:

1) Email spying
2) Logic bombs in the code
3) Nefarious communication
4) Backdoors
5) Dynamic code injection, etc.

This rulepack makes it easier for the security consultant to detect malicious code left intentionally in the source code.

More on this rulepack can be found in Fortify’s documentation.

Entity Expansion Attack

In my last article, I covered the basic attacks that could be tried against an XML file. In today’s article, I will describe in detail an attack called ‘Entity Expansion’. This is also called the ‘billion laughs’ attack.

Consider the below piece of XML code.

<!DOCTYPE foo [
<!ENTITY a "1234567890" >
<!ENTITY b "&a;&a;&a;&a;&a;&a;&a;&a;" >
<!ENTITY c "&b;&b;&b;&b;&b;&b;&b;&b;" >
<!ENTITY d "&c;&c;&c;&c;&c;&c;&c;&c;" >
<!ENTITY e "&d;&d;&d;&d;&d;&d;&d;&d;" >
<!ENTITY f "&e;&e;&e;&e;&e;&e;&e;&e;" >
<!ENTITY g "&f;&f;&f;&f;&f;&f;&f;&f;" >
<!ENTITY h "&g;&g;&g;&g;&g;&g;&g;&g;" >
<!ENTITY i "&h;&h;&h;&h;&h;&h;&h;&h;" >
<!ENTITY j "&i;&i;&i;&i;&i;&i;&i;&i;" >
<!ENTITY k "&j;&j;&j;&j;&j;&j;&j;&j;" >
<!ENTITY l "&k;&k;&k;&k;&k;&k;&k;&k;" >
<!ENTITY m "&l;&l;&l;&l;&l;&l;&l;&l;" >
]>
<foo>&m;</foo>

The above does look like garbage, but when this data is parsed by your XML parser, it has the potential to use up all your CPU and memory and bring your XML service down.

Does this get your attention? OK, now let us see what is so scary about this innocent-looking code.


People who are familiar with DOCTYPE, DTD and Entities can move on to the next passage. For others, I will try to give a little background on this.

An XML document is made up of building blocks called elements. Each element can have one-to-many attributes and zero-to-many child elements. Elements also carry data. While XML is all about elements, data, and attributes, the definition of these elements is done in a Document Type Definition (DTD). There is one other building block in XML called entities. Entities are something like macros or aliases: if you want to repeat the message ‘hi’ 1000 times in your XML, you can define the string as an entity and reference it in your XML. While parsing, the XML parser will take care of replacing the entity with ‘hi’ a thousand times.

Code Explanation:

In the above code, when the XML parser expands the entities, the entity ‘&m;’ blows up to 687,194,767,360 characters in size. Expanding this entity is an expensive job for the CPU and memory, and the service will go down. And so, we have successfully brought down a system with a humble piece of code.
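The arithmetic is easy to verify: entity ‘a’ is 10 characters, and each of the 12 entities from ‘b’ through ‘m’ multiplies the payload by 8. A quick Python check:

```python
BASE_LENGTH = len("1234567890")  # entity &a; is 10 characters
FANOUT = 8                       # each entity references the previous one 8 times
LEVELS = 12                      # entities b through m: 12 levels of expansion

expanded = BASE_LENGTH * FANOUT ** LEVELS
print(expanded)  # 687194767360 characters, i.e. 640 GiB of expanded text
```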


A SOAP message should actually make use of an XSD schema and not a DTD. Even if a DTD is used, the XML parser shouldn’t encourage the use of entities. But there might be instances where entities are desired. In that case, the parser should limit the size of the data it expands, or set an auto-timeout after which it stops parsing, to halt this denial-of-service attack.
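One blunt but effective defense is to refuse untrusted documents that carry a DTD at all, since inline entity definitions can only appear inside a DOCTYPE. A minimal Python sketch of this idea (my own illustration, not a library feature):

```python
import xml.etree.ElementTree as ET

def parse_untrusted(xml_text: str):
    # Reject any DOCTYPE/ENTITY declaration before the parser ever sees it.
    # Crude string checks are enough for a sketch; a production filter
    # would inspect the actual parse tokens instead.
    upper = xml_text.upper()
    if "<!DOCTYPE" in upper or "<!ENTITY" in upper:
        raise ValueError("DTD and entity declarations are not allowed")
    return ET.fromstring(xml_text)
```

With this in front of the parser, the billion-laughs payload above is rejected outright, while plain element-and-data documents parse normally.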

But, in reality, how many parsers take care of this attack?

XML Security – Part 1

I have been doing some research on XML security and the attack vectors related to it. The more I dig into the possible attacks, the more I am convinced that, given the right kind of attack, even a sophisticated XML parser would succumb to the exploit. While this might seem like a bold statement with no proof attached, I am afraid that it is indeed true.

If you are a developer working with XML, you should know how to protect your application from XML-based attacks. If you are not working with XML, it’s never too late to learn 🙂

Before we dive into XML security, here is a brief on what XML is.


XML stands for eXtensible Markup Language. It is the de-facto standard specified by the W3C to store and transport data.


XML is used extensively to transport data between applications and web services, and it is one of the components in Web 2.0 Ajax-based frameworks. In this age, at least a third of the websites on the internet use XML in one form or another. These applications do not just use XML; they rely on XML for their usability, availability, and accuracy.

Since XML has become so important to applications, attackers are also more interested in exploiting XML data. While there are numerous examples on the internet of launching network-based and application-based attacks, exploits against XML payloads (data) are far fewer in number.

In this series, we will see what kinds of attacks are possible and how we can protect an XML payload against them. Today, I will talk about one particular attack called ‘Parameter Tampering’.

Parameter Tampering:

This is not a new term to an application security professional. Ever since appsec consultants came into being, they have been tampering with whatever data comes to hand. So, XML-based tampering is no surprise.

So, what kind of acts are possible in this category?

1) Tweaking XML elements, attributes, or text content to inject a cross-site scripting attack.

2) SQL injection by tweaking the text content in the XML.

3) Adding non-existent attributes or elements to the XML and checking whether it causes DoS or information leakage.

4) Adding parameters that make the XML malformed and checking for exceptional conditions.

5) Inserting malicious special characters to check the handling of malformed XML.

6) Using long attribute or element names.

7) Jumbo payloads (unclosed tags), checking whether they cause a DoS (denial of service).
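As a sketch, several of these probes can be generated mechanically from a baseline payload. The `<user>` document below is hypothetical, purely for illustration:

```python
# A hypothetical baseline payload for the service under test.
baseline = "<user><name>alice</name></user>"

tampered_probes = [
    baseline.replace("alice", "<script>alert(1)</script>"),  # 1) XSS in text content
    baseline.replace("alice", "' OR '1'='1"),                # 2) SQL injection string
    baseline.replace("</user>", "<debug>on</debug></user>"), # 3) non-existent element
    baseline.replace("</name>", ""),                         # 4) malformed: unclosed tag
    baseline.replace("alice", "\x00\x1f&<>"),                # 5) malicious special chars
    baseline.replace("name", "n" * 10_000),                  # 6) long element names
    "<user>" * 50_000,                                       # 7) jumbo payload, never closed
]
```

Each probe is then sent to the service, watching for reflected script, database errors, stack traces, or timeouts.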

The above seven points are pretty self-explanatory, and I hope I needn’t explain them step by step. Now that I have detailed these notorious acts, what do you think can protect your application from them?


The application should ensure that it checks for the correct element length, type, position, and format, and validates its XML data. Seems fair enough, doesn’t it? In my next article, I will talk about another attack called ‘Entity Expansion’.
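As a sketch of that kind of validation (the `<order>` schema below is hypothetical), the application can check element presence, type, length, and range before trusting anything in the payload:

```python
import xml.etree.ElementTree as ET

def validate_order(xml_text: str) -> dict:
    # Validate a hypothetical <order> payload: right root, bounded item
    # name, and an integer quantity within an expected range.
    root = ET.fromstring(xml_text)
    if root.tag != "order":
        raise ValueError("unexpected root element")
    item = root.findtext("item", default="")
    if not (1 <= len(item) <= 64):
        raise ValueError("item name length out of bounds")
    qty = root.findtext("quantity", default="")
    if not qty.isdigit() or not (1 <= int(qty) <= 1000):
        raise ValueError("quantity must be an integer between 1 and 1000")
    return {"item": item, "quantity": int(qty)}
```

In practice an XSD schema does this declaratively, but even hand-rolled checks like these defeat most of the tampering probes listed above.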