Muath Obaidat Proposes a Better, Safer Way to “Log-In”
Author: Rachel Friedman
Have you ever been faced with a photo grid and asked to click on every traffic light to prove you weren’t a robot before you could access your email or bank? A recent proposal by Dr. Muath Obaidat, an Assistant Professor in John Jay College’s Department of Mathematics and Computer Science, could prevent you from having to go through that ever again.
Along with co-authors including his student Joseph Brown (a 2020 graduate who earlier this year was awarded John Jay’s Ruth S. Lefkowitz Mathematics Prize), Dr. Obaidat makes the case for a new way of authenticating user information that would make logging into websites more secure without overcomplicating the system. He calls it “a step forward” both technically and logistically: the proposed authentication system is more secure and easier to deploy commercially than previous proposals. So while Obaidat’s research may seem complicated, the solution he proposes in “A Hybrid Dynamic Encryption Scheme for Multi-Factor Verification: A Novel Paradigm for Remote Authentication” (Sensors, July 2020) is not just theoretical.
(To read the full text of the article for free, visit https://www.mdpi.com/1424-8220/20/15/4212/htm)
Read on for a Q&A with Dr. Muath Obaidat:
Can you describe the most common risks of the typical username/password authentication model most of us are using today?
The most common risk in current authentication models is the lack of presentation of actual proof of identity, especially during communications. Since the majority of websites use static usernames and passwords that do not change between sessions, if an attacker can get ahold of a login — whether by guessing or through more technical means — there is no further mechanism or nuance in the design to actually stop them from using stolen data to imitate a user. While 2FA (Two-Factor Authentication) has risen in popularity as a mitigation for this problem, both published papers from the National Institute of Standards and Technology (NIST) and high-profile public hacks have shown it to be insufficient by itself, because of attacks which focus on manipulating or stealing data rather than simply brute-forcing (working through all possible combinations through trial-and-error to crack a passcode).
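The core weakness Dr. Obaidat describes — a static credential is the same bytes every session, so anyone who captures it can replay it — can be sketched in a few lines. This is an illustrative example only (the password, hash choice, and `prove` helper are hypothetical, not from the paper); it contrasts a static token with a simple nonce-based challenge-response in which the proof changes every session:

```python
import hashlib
import hmac
import secrets

# Static model: the client effectively presents the same secret every session.
password = b"hunter2"
token_session1 = hashlib.sha256(password).hexdigest()
token_session2 = hashlib.sha256(password).hexdigest()
# Identical across sessions -> an eavesdropper who captures it can replay it.
assert token_session1 == token_session2

# A basic mitigation: the server issues a fresh random nonce each session,
# and the client proves knowledge of the secret without sending a reusable value.
def prove(secret: bytes, nonce: bytes) -> str:
    return hmac.new(secret, nonce, hashlib.sha256).hexdigest()

nonce1 = secrets.token_bytes(16)
nonce2 = secrets.token_bytes(16)
# The proof differs per session, so a captured proof is useless next time.
assert prove(password, nonce1) != prove(password, nonce2)
```

Even this simple challenge-response still leaves the long-term secret static on both ends, which is part of what the proposed scheme goes beyond.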
Can you explain how your proposed method works to authenticate a session?
The simplest way to explain how this form of authentication works is to imagine you had a key split into two halves; the client has a half, and the server has a half. But instead of just sending the half of the key you have, you’re sending the blueprint for said key half, which can only be reconstructed given the other half. This blueprint changes slightly each time you log in, but is still derived from the same “whole” key.
Only two people have the respective halves: the client and the server. These halves are derivative of data which is itself derived from an original input. Thus, as long as you can produce something from the front-end that creates one input, even though this input is never sent, it can be integrated with this system. Think of that as the “mold” from which the key is derived, and then the blueprint is shifted on both ends according to the original mold.
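The key-splitting analogy above can be loosely sketched in code. To be clear, this is a toy illustration of the analogy under assumed details, not the protocol from the paper: here the client re-derives the whole key (the “mold”) from the original input each session, the server persists only its half plus a verifier, and the session-varying “blueprint” is the client half masked by a value that only someone holding the other half can compute:

```python
import hashlib
import hmac
import secrets

def H(*parts: bytes) -> bytes:
    """Hash helper standing in for the scheme's derivation functions."""
    return hashlib.sha256(b"".join(parts)).digest()

# The "mold": a whole key derived from an original input that is never sent.
original_input = b"user-secret-material"          # hypothetical front-end input
whole_key = H(original_input)
client_half, server_half = whole_key[:16], whole_key[16:]

# The server stores its half and a verifier for the recombined key.
verifier = H(client_half + server_half)

# Each session, a fresh nonce shifts the "blueprint" of the client half;
# only a holder of the server half can undo the mask and rebuild the key.
nonce = secrets.token_bytes(16)
mask = H(server_half, nonce)[:16]
blueprint = bytes(a ^ b for a, b in zip(client_half, mask))

# Server side: unmask with its half, reassemble, and check the verifier.
recovered_half = bytes(a ^ b for a, b in zip(blueprint, H(server_half, nonce)[:16]))
assert hmac.compare_digest(H(recovered_half + server_half), verifier)

# A different nonce yields a different blueprint for the same underlying key,
# so a captured blueprint cannot be replayed in a later session.
nonce2 = secrets.token_bytes(16)
blueprint2 = bytes(a ^ b for a, b in zip(client_half, H(server_half, nonce2)[:16]))
assert blueprint != blueprint2
```

The point of the sketch is structural: the transmitted value changes every login, yet both ends can agree on the same underlying key without that key, or the original input, ever crossing the wire.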
How does your proposed scheme differ from others that have been in use or proposed previously?
What sets it apart is both the flexibility of the design as well as the range of problems it attempts to fix at one time. Many other schemes we studied were focused on fixing one problem: typically [they focused on] brute-forcing, which manifested in the form of padding “front-end” or “back-end” parts of a scheme without giving much thought to the actual transmission of data itself. Our scheme, on the other hand, is focused on protecting that transmitted data, while also being sure not to introduce additional weaknesses on either end of the communication.
Another big issue we often ran into with other schemes is design flexibility; many were either unrealistic to implement en masse, or were so specific that they pigeonholed themselves into a scenario where they could not be combined with other communication systems or improvements to other architectural traits. Our scheme is flexible in terms of architectural integration — for example, it uses the same simple Client-Server framework without introducing third parties or other nodes — and the overall design is both simple to implement and highly adaptable.
What is it that has prevented many newly-proposed authentication schemes from being implemented more broadly?
While it depends on the scheme in question, there are typically three factors that prevent implementation: user accessibility, deployment complications, and degree of benefit. The first isn’t really technical, but relates more to consumer factors. Many schemes simply are not widely implementable on a consumer level, not only because of aspects such as speed, but also because of logistics. Having a user go through a complicated process each time they want to log into a website isn’t very practical, especially if you’re selling a product where convenience is a factor, which is why some schemes don’t catch on despite being technically sound.
Deployment complications, on the other hand, relate to things such as how to replace current infrastructure with new infrastructure; many schemes require significant changes to existing architectures, or are highly specific and complex to actually deploy. These complications act as a deterrent to those who may want to implement them. Lastly, degree of benefit is a big factor too. Given how ubiquitous current paradigms are, simply improving one aspect in exchange for the implementation of a widely different system is a very big ask. Implementation takes time, as does adoption on a wide scale, so unless the benefit is [significant enough to merit departing from] current paradigms, it’s unlikely many would want to explore “unproven” adoptions.
How would a new authentication method go from being theoretical to being widely adopted? In other words, by what process is this type of new technology adopted, and who is responsible for its uptake?
That’s a good question, and unfortunately I do not think there is a singular answer. Especially because of the decentralization of the internet, it’s hard to give a specific answer on what this would look like in practice. As the internet has become more consolidated under specific companies, I suppose one answer would be that bigger companies would have to take an interest in implementation and take action themselves to create a ripple effect. This is distinct from the past, when collective normalization of technology was bottom-up because of more decentralized standards.
Dr. Muath Obaidat is an Assistant Professor of Computer Science and Information Security at John Jay College of Criminal Justice of the City University of New York and a member of the Center for Cybercrime Studies, Graduate Faculty in the Master of Science Digital Forensics and Cyber Security program and Doctoral faculty of the Computer Science Department at the Graduate School and University Center of CUNY.
He has published numerous scientific articles in journals and respected conference proceedings. His research interests lie in the areas of digital forensics and ubiquitous Internet of Things (IoT) security and privacy. His recent research crosscuts the areas of wireless network protocols, cloud computing, and security.