Risk-Based Adaptive Authentication

Risk-Based Authentication, or Adaptive Authentication, is a feature through which the risk context of a user’s login attempt is analyzed according to the user’s login pattern, location, device, etc. If a certain risk threshold is exceeded, the application challenges the user with an additional set of authentication factors such as challenge/response questions, a captcha, or a software token.
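To make the idea concrete, here is a minimal sketch of such a risk engine. Everything in it is hypothetical: the user profile, the weights, and the threshold are illustrative stand-ins for what a real product would learn from historical login data.

```python
from dataclasses import dataclass

@dataclass
class LoginAttempt:
    user: str
    country: str
    device_id: str
    hour: int  # 0-23, local hour of the attempt

# Hypothetical per-user baseline built from historical logins.
BASELINE = {
    "alice": {"countries": {"US"}, "devices": {"laptop-01"}, "active_hours": range(7, 20)},
}

def risk_score(attempt: LoginAttempt) -> int:
    """Score an attempt against the user's historical pattern (illustrative weights)."""
    profile = BASELINE.get(attempt.user)
    if profile is None:
        return 100  # no history at all: treat as maximum risk
    score = 0
    if attempt.country not in profile["countries"]:
        score += 40  # unfamiliar location
    if attempt.device_id not in profile["devices"]:
        score += 40  # unfamiliar device
    if attempt.hour not in profile["active_hours"]:
        score += 20  # unusual time of day
    return score

def requires_step_up(attempt: LoginAttempt, threshold: int = 50) -> bool:
    """True if the score crosses the threshold and a second factor should be demanded."""
    return risk_score(attempt) >= threshold
```

A login from Alice's usual country and device scores 0 and passes silently, while a night-time login from a new country on an unknown device crosses the threshold and triggers the step-up challenge.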

Of late, this has become one of the most sought-after features in any software’s identity and access management requirements. Some of the products that offer this feature are

  1. Okta Adaptive Multi-Factor Authentication
  2. RSA Adaptive Authentication
  3. Duo Security
  4. Ping
  5. SecureAuth, etc.

This feature is a very good step-up protective measure to consider while developing an application. Any thoughts?

Dynamic Application Security Testing Tools: Factors to Consider While Choosing a Tool

Dynamic Application Security Testing (or DAST, as it is often called) is a scan, essentially a sequence of HTTP requests, run against the application at run time before it is deployed to production, to ensure that security issues are caught early.

It is also often called vulnerability assessment and is mostly done using a combination of automated and manual approaches. Some of the tools used are WebInspect, Qualys WAS, AppScan, Veracode, AppSpider, Acunetix, Burp Suite, OWASP ZAP, etc. That is quite a lot of tools. Some of these have on-premise installation options and some are purely SaaS.

I have used almost all of these tools over the span of my career and have come to like some of them. If you are looking to compare these tools, below are some factors you can consider.

  1. Support for authenticated/unauthenticated scans.
  2. Support for different types of authenticated scans: basic, digest, form-based, and federated, including captcha or challenge-response.
  3. Support for web services and RESTful services.
  4. Support for SPA applications, Flash applications, and applications running in containers.
  5. Ability to feed into a DevOps pipeline.
  6. Customization capabilities, like inclusion or exclusion of URLs, ability to limit the breadth/depth of a scan, issue retest, issue replay, compliance support, and types of scans.
  7. Ability to separate the CRAWL and AUDIT phases.
  8. Quality of scan, i.e., how many false positives/false negatives it produces.
  9. Tool administration/operational complexity.
  10. Scalability.
  11. Parallel scan support.
  12. Deployment options, etc.
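Factor 5 above, feeding scan results into a DevOps pipeline, usually boils down to a gate step that parses the tool's exported report and fails the build when findings exceed a severity cutoff. A minimal sketch of such a gate, assuming a hypothetical JSON export where each finding carries a `severity` field:

```python
import json

# Severity ladder, lowest to highest; the field names here are assumptions,
# not any particular tool's actual export format.
SEVERITY_ORDER = ["informational", "low", "medium", "high", "critical"]

def gate(report_json: str, fail_at: str = "high") -> bool:
    """Return True if the build may proceed, i.e. no finding at or above fail_at."""
    findings = json.loads(report_json)
    cutoff = SEVERITY_ORDER.index(fail_at)
    return all(SEVERITY_ORDER.index(f["severity"]) < cutoff for f in findings)
```

In a pipeline, a `False` return would translate into a non-zero exit code that stops the deployment stage.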

I find WebInspect fairly good in its features, but it has become a sore point, especially in operational complexity and administration. As of today, there is also no official plugin for DevSecOps integration, though it can be achieved indirectly using their cloud scan model. Qualys WAS, AppSpider, Veracode, etc. offer operational ease and are very easy to use, but suffer from false negatives. In terms of usage and convenience, I still favor WebInspect and also Acunetix. Burp Suite is best used as a proxy; as a scanner, I think it will take some more time to catch up with the rest of the top tools.

Patch Management: Qualys or SCCM?

Of late, I see many organizations switching to agent-based scans for better detection of vulnerabilities and near-real-time scanning. Agent-based scans, especially on endpoints, are easy to deploy, generate no network noise, and help a security team do better vulnerability management.

While all of the above is true, agent-based scans also surface more vulnerabilities, probably due to better detection, and this results in chaos, especially when the remediation strategy has not changed: the demand for fixes suddenly overflows while the patching team’s capacity stays the same. In such cases, it is much better to do auto-patching or patch management of endpoints through the same agents that detect the vulnerabilities. One such product I have come to like is the Qualys Patch Management module. While this module is definitely not a replacement for SCCM and other patching solutions, it does take the load off in cases where a huge backlog of vulnerabilities is coming from endpoints.
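The core win of detect-and-patch through one agent is correlation: many detections collapse into far fewer patch jobs. A small illustrative sketch (the data shapes and identifiers are hypothetical, not any vendor's API):

```python
from collections import defaultdict

def build_patch_jobs(detections, vuln_to_patch):
    """Group outstanding detections by remediating patch.

    detections:    iterable of (host, vulnerability_id) pairs from agent scans
    vuln_to_patch: mapping of vulnerability_id -> patch that remediates it
    Returns {patch: sorted list of hosts needing it}, so one deployment job
    can close out many findings at once.
    """
    jobs = defaultdict(set)
    for host, vuln in detections:
        patch = vuln_to_patch.get(vuln)
        if patch:  # skip vulnerabilities with no published patch
            jobs[patch].add(host)
    return {patch: sorted(hosts) for patch, hosts in jobs.items()}
```

Three detections across two hosts can thus reduce to two patch jobs, which is exactly the kind of load reduction a backlogged patching team needs.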

So, how does a solution from Qualys compare with a product like SCCM?

  • Patching coverage is provided for Windows; for non-Windows it is likely to be available by next month. Once done, this will be a definite advantage over SCCM.
  • Non-Microsoft patches are available at the click of a button, and jobs can be deployed automatically.
  • More third-party apps are covered as compared to SCCM, and correlation with threats/vulnerabilities is better.
  • However, SCCM is still better when it comes to registry changes, remnant file cleanup, etc.
  • One thing I have personally experienced is that Qualys should let the end user choose the time of patch deployment. The current model allows deferring a patch deployment three times, but if you happen to be in a meeting all three times, you cannot defer on the third attempt and your system will restart in the middle of a discussion. This is something small, but correcting it would improve the user experience.

Nevertheless, it is still a wonderful attempt by Qualys to make the VM experience better for organizations. However, it is still in organizations’ best interest to identify the root causes of the vulnerabilities: why are they happening in the first place? Is it a lack of process around the software catalog, no EOL management, or giving employees administrative access and letting them install whatever they choose?

Accessing a Web Service on an Azure VM from the Internet

I had to set up a VM on an Azure subscription today as part of a lab setup we were doing for competency development. The VM creation was easy, and I was also able to install the tools required for the lab within minutes.

One of the tools was a web application that had to be deployed on a web server, and I had to open it up to web traffic from the outside world. As part of this requirement, I

  1. Set up a Network Security Group inbound rule to allow traffic from any source to destination port 443 over TCP.
  2. Created a DNS name and associated the public IP with it.
  3. Ensured that the VM was also listening on the private IP on port 443.

In spite of this, when I tried accessing the DNS name myvm.zone.cloudapp.azure.com, it still gave a connection refused error. After much trial and error, I figured out that Windows Firewall on the VM was blocking incoming traffic. I created a rule within Windows Firewall to allow incoming traffic on port 443 and restarted the VM.
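A quick way to narrow down this kind of problem is a plain TCP reachability check run from inside the VM (against the private IP) and from outside (against the DNS name): if the inside check succeeds but the outside one fails, the block is in the NSG or the host firewall rather than the application. A minimal sketch using only the standard library:

```python
import socket

def check_tcp(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port can be established."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers connection refused, timeouts, DNS failures
        return False
```

For example, `check_tcp("myvm.zone.cloudapp.azure.com", 443)` returning False while the same check against the private IP on the VM returns True points straight at a firewall or NSG rule.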

It worked like a charm. Hope this helps someone too.

Automated Threat Modeling

Threat modeling is essentially a collaborative activity where the business and the security team sit together to figure out the attack surface and related threats for the use case under consideration. While the security team is usually successful in identifying common security threats related to authentication, authorization, use of vulnerable frameworks, error handling, cryptography, data handling, etc. when threat modeling during the design stage of a piece of software, it is usually very difficult for a legacy application.

But then, why does one need threat modeling for a legacy application? Isn’t it too late by then? Late, yes, but not so late that it cannot be done. Threat modeling has seen slow adoption: while companies have readily taken up vulnerability assessment, SAST, DAST, etc., they haven’t done so for threat modeling because it is effort-intensive and poorly understood, its prerequisites are most often not in place, and, especially in the agile/DevSecOps age, it is practically impossible to adapt to.

But what if I told you that you could do automated threat modeling, at least for your application’s deployment architecture, by introducing a network discovery tool into the application environment, letting it map out the as-is communications, and then feeding the results to a threat modeling tool that can figure out the threats with the least possible manual intervention? This is one of the best approaches I have seen in a while, and I would recommend it to any organization looking for quick wins with minimal effort.
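Conceptually, this pipeline is a rules engine over discovered flows: each observed (protocol, port) communication is matched against a threat catalog. A toy sketch, with an entirely hypothetical rule set standing in for what a commercial tool would ship:

```python
# Hypothetical rules mapping a discovered flow's protocol/port to candidate threats.
THREAT_RULES = {
    ("http", 80): ["Cleartext credential exposure", "Session hijacking"],
    ("ftp", 21): ["Cleartext file transfer"],
    ("smb", 445): ["Lateral movement via file shares"],
}

def threats_for_flows(flows):
    """flows: iterable of (src, dst, protocol, port) tuples from a discovery tool.

    Returns one finding per matched threat, tied back to the specific flow,
    which is what lets the modeling step run with minimal manual input.
    """
    findings = []
    for src, dst, proto, port in flows:
        for threat in THREAT_RULES.get((proto, port), []):
            findings.append({"source": src, "destination": dst, "threat": threat})
    return findings
```

A single discovered cleartext HTTP flow between two legacy servers would thus yield concrete, reviewable findings without anyone drawing a data flow diagram by hand.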

Introducing the ThreatModeler collaboration with Avocado. Try it out to see the results. (P.S. I am associated with neither ThreatModeler nor Avocado.)

https://www.globenewswire.com/news-release/2020/09/08/2090039/0/en/ThreatModeler-Announces-Automated-Threat-Modeling-for-Legacy-Applications.html