- Posted by Plurilock
- On November 18, 2011
A lot has been said about the weaknesses inherent in traditional alphanumeric passwords. By design, passwords can be broken using dictionary or brute-force attacks. They can be forgotten, stolen, or shared. As an alternative, it has been recommended to use stronger schemes based on biometrics or token-generated passwords, or a combination of multiple authentication factors.
Likewise, the Federal Financial Institutions Examination Council (FFIEC) issued guidance in 2005 requiring the use of multifactor authentication for Internet banking. For several years, a widely held belief has been that a strong password scheme (e.g. biometrics, variable passwords) or a multifactor authentication scheme would be enough to block hackers at the gate.
The recent string of hacking incidents targeting high-profile organizations with sizable security budgets (e.g. Sony, RSA, Epsilon) has shown that hackers can still get around the strongest authentication schemes currently available.
The FFIEC has shown that it understands this reality by issuing revised authentication guidance for online banking that takes effect in early January 2012. The new guidance requires, beyond initial authentication, monitoring customer behavior and transaction history and triggering a timely response when fraud is detected.
The new reality is that no matter how strong the initial authentication controls are, there is always some residual risk that hackers will be able to bypass them.
Authentication controls can be bypassed by skipping the regular login page and directly calling an internal page or service that should be accessible only to authenticated subjects. This can become possible by modifying web request parameters and tricking the application into believing that authentication has already been performed. Some of the common ways authentication measures can be bypassed include the following:
– Direct page request: when the login page is the only page that enforces access control, other pages can be requested directly through forced browsing and accessed without the user's credentials ever being checked.
– Parameter modification: when successful login is signaled through fixed-value parameters, these parameters can be modified directly by the user or via a proxy to obtain access to protected areas without providing proper authentication credentials.
– Session ID prediction: when session IDs are generated predictably, an attacker can guess a valid session ID and gain access to protected system resources without ever authenticating.
– SQL injection: when user input is passed unsanitized into database queries, an attacker can read or modify sensitive data (e.g. authentication credentials) or execute privileged operations on the database.
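The last of these is worth a concrete illustration. The sketch below (a minimal example using Python's built-in sqlite3 module, with a hypothetical users table) shows how a login query built by string concatenation can be turned into a tautology that authenticates the attacker without valid credentials, and how a parameterized query neutralizes the same payload:

```python
import sqlite3

# Hypothetical users table, for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login_vulnerable(username, password):
    # VULNERABLE: user input is concatenated directly into the SQL string.
    query = ("SELECT * FROM users WHERE username = '%s' "
             "AND password = '%s'" % (username, password))
    return conn.execute(query).fetchone() is not None

def login_safe(username, password):
    # SAFE: a parameterized query makes the driver treat input as data, not SQL.
    query = "SELECT * FROM users WHERE username = ? AND password = ?"
    return conn.execute(query, (username, password)).fetchone() is not None

# The classic payload rewrites the WHERE clause into a condition that is
# always true, bypassing the credential check entirely.
payload = "' OR '1'='1"
print(login_vulnerable("alice", payload))  # True  -- authentication bypassed
print(login_safe("alice", payload))        # False -- injection neutralized
```

The fix costs nothing at runtime; the only real defense is never letting untrusted input reach the query text.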
Authentication bypass vulnerabilities quite often result from design errors (e.g. improper definition of which application parts need protection), flawed implementations (e.g. lack of input validation on critical functions), or inadequate system configuration and deployment.
While many of the above-mentioned ways of bypassing authentication controls can be prevented through thorough application vulnerability testing during development or comprehensive penetration testing after deployment, there are more insidious techniques that are harder to curb through testing. For instance, the man-in-the-browser (MITB) attack, one of the main vectors used to carry out Internet banking fraud, relies on malware infection that may occur long after all testing activities are complete, and that is difficult to detect because of its high level of stealthiness. The MITB attack is a variant of the man-in-the-middle (MITM) attack in which a Trojan horse acts as the "middle man". The Trojan intercepts and modifies calls between the browser and its security controls on the fly, while still displaying the user's intended transaction back to them.
Furthermore, current security testing techniques heavily target existing and well-understood threat vectors, and may not be effective at anticipating new ways of exploiting a system. Testing alone, then, may not be enough to detect new ways of bypassing authentication measures.
Authentication bypass is a serious issue mainly because of the static nature of our current approach to user and application authentication: credentials are verified only once, typically at the beginning of the session. This means that attackers who bypass this initial step can use system resources freely without having to worry about being caught.
Continuous authentication (CA) using biometrics provides a sharp response to the threat of authentication bypass: it periodically and transparently verifies the user's identity based on biometric information generated from their activity. Because CA does not depend on any specific attack method, it has the capability of detecting new ways of bypassing authentication measures. CA is not a replacement for existing strong static authentication measures, but should be seen as a necessary complement to them.
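To make the idea concrete, here is a deliberately toy sketch of the CA concept: enroll a baseline profile of a user's keystroke inter-key timings, then periodically score recent activity against that profile and flag the session when it drifts too far. This is purely illustrative and is not how BioTracker or any production CA system actually works; real products use far richer behavioral models.

```python
import statistics

def enroll(samples):
    """Build a baseline profile from keystroke inter-key intervals (ms)."""
    return {"mean": statistics.mean(samples),
            "stdev": statistics.stdev(samples)}

def verify(profile, recent, threshold=3.0):
    """Return True if recent activity is consistent with the profile.

    Flags the session when the mean of recent intervals deviates from the
    enrolled mean by more than `threshold` standard deviations. A real CA
    system would run this check continuously throughout the session.
    """
    deviation = abs(statistics.mean(recent) - profile["mean"])
    return deviation <= threshold * profile["stdev"]

profile = enroll([110, 120, 115, 125, 118, 112, 121])
print(verify(profile, [117, 119, 114]))  # True  -- consistent with the user
print(verify(profile, [45, 50, 48]))     # False -- session flagged
```

The key property the sketch captures is that the check repeats throughout the session, so an attacker who slips past the initial login still has to mimic the legitimate user's behavior indefinitely.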
At Plurilock, we recommend that organizations strategically combine these different techniques: testing, strong static authentication, and CA. Contact us at email@example.com to find out how Plurilock can help you develop a comprehensive protection strategy and how CA fits into it, and to learn more about BioTracker, our premier CA product.