Two-factor authentication, or 2FA/TFA for the syllabically challenged like me, is a system of authenticating twice before being granted access to something. For example, if you have to type in both a password and a temporary code received in a text message in order to sign into your bank, then you are already using 2FA. Congratulations in that case, by the way.
The use of 2FA is required for many job roles, such as people who manage systems subject to regulatory compliance, but it can also be used for many personal accounts. If you aren’t already using 2FA for critical accounts, turning it on now makes good sense considering the amount of personal information exposed in the Equifax breach. Of course, some versions of 2FA are better than others, so it is a good idea to know the basics before diving in.
At a high level, the process of authentication means proving your identity in one of three ways: you can reveal something you know, such as a password; present something you are, such as a fingerprint or retina scan; or prove possession of something you have, such as a phone or RSA key fob.
Properly implemented two-factor authentication uses one method from each of two different categories. Giving two passwords doesn’t count as two-factor authentication since that is two instances from the “something you know” category. The combination of a password and a temporary code received over SMS counts as 2FA because the password is something you know and entering the SMS code proves that you have possession of your phone. (For certain values of “proves,” anyway. More on that later.)
That’s not much of an architecture description, but it is surprisingly useful as a yardstick against which to measure various 2FA implementations. It’s a matter of asking the right questions such as, “which two factors are being used here?” or “does this compromise an otherwise good two-factor authentication implementation?” Let’s walk through a few examples to see how this works in practice.
An RSA key fob is a small hardware device that contains a cryptographic key, is synced to the current time, and displays a 6-digit code that changes every minute. The backend server can verify using cryptography that a particular code came from a particular key fob. Since the key fob has no USB or other electronic port and no user input, receipt of the correct code provides a high level of confidence that the person logging in has physical possession of that key fob.
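That backend verification step can be sketched in a few lines of Python. To be clear, this is not RSA's proprietary algorithm: `fob_code` below is a hypothetical stand-in that computes an HMAC over the current minute, but it shows the shape of time-synced code checking, including the small clock-drift window a real server allows.

```python
import hashlib
import hmac
import struct
import time

def fob_code(key: bytes, t: float, step: int = 60, digits: int = 6) -> str:
    # Hypothetical stand-in for the fob's algorithm: an HMAC over the
    # current time step, truncated to a 6-digit decimal code.
    digest = hmac.new(key, struct.pack(">Q", int(t // step)), hashlib.sha256).digest()
    return str(int.from_bytes(digest[:4], "big") % 10**digits).zfill(digits)

def server_verify(key: bytes, submitted: str, window: int = 1) -> bool:
    # Accept the code for the current minute or an adjacent one, to
    # tolerate a little clock drift between the fob and the server.
    now = time.time()
    return any(fob_code(key, now + d * 60) == submitted
               for d in range(-window, window + 1))
```

Since only the server and the sealed fob hold `key`, a matching code is strong evidence that whoever typed it is holding that particular fob.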
Compare that to the system in which a vendor sends a temporary login code to a mobile device. In theory, a text message proves that the user has possession of the phone, but it is far less secure than a key fob. Hackers can get to the phone the same way the text does: remotely over the network. Even SS7, the network that transports SMS 2FA codes, has been hacked. Using a mobile device for 2FA is a lot better than passwords alone but not nearly as secure as a dedicated hardware 2FA device.
Neither of these authentication methods conclusively proves the identity of the person making the request. The key fob can be stolen. The text to the phone can be intercepted or diverted. These methods provide very different levels of assurance that whoever enters the code is who they claim to be. The question is whether, for any given application, the method used meets the threshold of “good enough.”
Most people feel that a text to their phone is good enough for online banking. Many business owners opt to use a key fob to safeguard their online banking and consider text messages to the phone too weak a form of authentication for that purpose. The National Institute of Standards and Technology has deprecated the use of SMS-based 2FA, so it is not even an option for NIST-compliant job roles.
The different levels of assurance between hardware 2FA and SMS-based 2FA are an example of what I referred to earlier as different values for “proves.” The difference among retail online banking, business online banking, and a NIST-compliant job role is an example of how 2FA requirements vary based on the use case. What we consider sufficient proof of identity depends on the context in which it is used, and what works in one place doesn’t necessarily work in another.
Many banks use SMS-based 2FA to protect retail online banking but do not give the account holder much advice as to how to set it up properly. A common mistake is to assume that all cell phone numbers are equivalent, and that the security lies in the sending and receiving of the text rather than in the uniqueness of the device on which it is received.
There’s a big difference in effective security between sending a text to the cell phone number provisioned by a carrier to a specific device versus sending the same text to a virtual cell phone number. Consider that a text message sent to Google Voice is forwarded to the user’s cell phone, to the user’s email account, and is accessible to that user via the web, at a minimum. Since the user can specify distribution to multiple cell phones and multiple email accounts, and since the same email account may be accessed by many devices, the virtual number may represent an extremely wide distribution of what is intended to be a secret code.
The mobile carrier imposes many controls to ensure that the text is delivered to one and only one device. Those controls are what make the phone an instance from the “something you have” category for purposes of 2FA. Can this be defeated? Sure. Is it good enough for the average user? Probably.
That changes if the phone number used to receive the login code is a virtual number which is, by intentional design, accessible from a great many devices and a great many places. Delivering an SMS authentication code to such a virtual number doesn’t even come close to proving “something you have” anymore. Using a virtual number for SMS authentication codes collapses that method back to an instance of “something you know.” Because two instances of “something you know” do not meet the 2FA requirements, sending login codes to virtual numbers defeats what may be an otherwise good 2FA implementation.
When I was shopping for a bank, one of my criteria was whether I would be able to get an RSA key fob to authenticate my online banking. In addition to issuing the key fob I requested, the bank also asked for my cell phone number where I could receive routine notifications and fraud alerts.
After a couple of years with this account, the bank changed its policy; the login now offers a choice between using the RSA key fob or receiving a text message at the registered phone number. While it’s great to give customers a choice, this arrangement also gives an attacker the same choice. There is no option to disable phone authentication short of de-registering the phone entirely. Customers must choose between stronger authentication with no SMS fraud alerts, or fraud alerts with weaker authentication.
The bigger problem, though, is that many customers who previously used key fobs will have set up their notifications to be delivered to a virtual cell number. I tested this at my bank to see whether it would block texts to Google Voice numbers, but all my notifications and authentication codes were delivered just fine.
Although I immediately turned this off, I can’t help but wonder how many customers chose to use key fobs and now do not realize that an attacker can opt to send their authentication codes to Gmail, email, voice message, text message, and the web. The bank’s ill-considered design choice collapsed an otherwise-good 2FA implementation into two instances of “something you know,” which no longer meets the basic 2FA requirements. If I can’t convince them to fix this, I’ll be looking for another bank. In the meantime, I’ve deregistered all my phone numbers.
What you may know as “Google Authenticator” is really an instance of a standard called Time-based One-Time Password, or TOTP for short. TOTP is a software-based version of the RSA key fob. After an initial setup step, an app generates 6-digit codes that change every 30 seconds. This makes the user’s device behave just like a dedicated hardware token except that there’s no extra piece of hardware to carry around.
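TOTP is openly specified in RFC 6238 (building on HOTP, RFC 4226), so the calculation an authenticator app performs can be sketched in a few lines of Python. The secret below is the RFC’s published test key, not a real one:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, t=None, step: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 time-based one-time password."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if t is None else t) // step)  # 30-second steps
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                                # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

# RFC 6238's published test secret, base32-encoded the way apps expect it:
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret))                   # the code an authenticator app would show now
print(totp(secret, t=59, digits=8))   # RFC 6238 test vector: "94287082"
```

Notice that everything except the clock is derived from the shared secret established at setup, which is why protecting that secret is the whole game.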
The intent is to implement an instance of “something you have” authentication on a device the user already carries. It isn’t as secure as a dedicated hardware device, so a well-designed TOTP app has additional security controls that try to preserve uniqueness. Typically, such an app lacks an export function or any way for the user to retrieve the original key so that the codes cannot be stolen after the initial setup.
This is another case of trading convenience for the user against the level of security achieved. A true hardware-based token like the key fob is sealed except for the LCD display, so there is very little to attack. But a user with many accounts isn’t going to want to carry a janitor’s key ring full of them; that would be quite inconvenient.
The software version uses a device the user already carries to perform the same function, and can handle a large number of keys, but the device itself can be attacked remotely. On the other hand, the code never travels over the phone network, the SS7 network, the web or by email, so in many ways this is safer than sending a text to authenticate.
Many websites offer TOTP as an option, including GitHub and WordPress. Lately, whenever I enter a TOTP code in a browser, my password manager offers to enter it automatically for me. Is this a good idea? That depends on how it is implemented. We can use our understanding of the 2FA security model to evaluate the possibilities.
Let’s say the vendor offers a compliant TOTP application that is completely separate from their password manager. Even if the only difference is that the TOTP app moves from the phone to the PC, the security is weaker because all the authentication apps are now on one device. Compromise that device and it’s game over. At least when the TOTP app is on the phone, the attacker has to compromise two devices.
My vendor, though, has added an important new feature: migration of TOTP cryptographic seeds across devices. An important feature of the standard TOTP app is that if an attacker manages to disable it or change the cryptographic seed, the legitimate account owner can detect this. In security, we sometimes design things to fail detectably as an added layer of defense. But in a system designed to deploy the cryptographic seeds seamlessly to any number of devices, the legitimate user has no way to tell whether an attacker has a copy of their TOTP codes. This type of failure is undetectable and a compromised user may never know how they were hacked.
The underlying premise of “something you have” is that the authentication is uniquely tied to a specific physical device and extremely difficult or impossible to replicate. Sure, setting up a new device is a pain, but that difficulty provides some of the assurance on which we depend. Compromising the root of trust, which in this case is the irreproducibility of a unique device, for the sake of convenience defeats the security. This is another example in which the system collapses from “something you have” to multiple instances of “something you know,” which is a form of single-factor authentication.
The situation worsens if the new TOTP functionality is built into the existing password manager. Password managers were invented to manage “something you know” authentication. Now that passwords alone are considered insufficient and the use of TOTP has increased, it may seem intuitive to fold that functionality into the tool that manages passwords. Knowing what we do about 2FA, it should be obvious after a bit of analysis that “something you know” and “something you have” cannot both be delivered by the same tool and still qualify as two-factor. In fact, the security of the 2FA model depends on this not being the case.
When I researched my vendor’s new TOTP features, I found many reviews complaining of the lack of a TOTP browser extension and noting that some competitors have this feature. That, to me, would be the worst possible TOTP implementation. The browser is a general-purpose platform that spends all its time executing unknown, untrusted, third-party code. It should be considered one of the least trusted things on any device. Personally, I not only don’t want to use the browser to simulate a unique hardware device, but I also don’t want to provide any further incentive for developers to even think about implementing something like that.
To recap what we’ve covered so far: 1) 2FA is good; 2) the different types of second factor vary widely as to the level of security they provide; 3) it doesn’t take a security specialist to know enough about 2FA architecture to evaluate the options on the market.
What next? Turn 2FA on everywhere? That’s a bit extreme. Better to prioritize accounts and work your way from highest priority down. Sorted by priority, we can imagine a spectrum of accounts ranging from critical at the top to disposable at the bottom, and a threshold somewhere in there above which two-factor authentication should be enabled. So what counts as high priority?
Obviously, things like bank accounts are high priority, but what about email? If it is the same email account you use for account recovery when you forget a password, then it is being used as a security control and because of that probably falls into the high priority category.
Where each of us draws that line is a personal decision and any two people can vary widely as to how they estimate risk. Some people will have many accounts enrolled in some form of 2FA while others have few or none. The one thing that we all have in common though is that each new breach lowers the threshold of which accounts qualify as critical and if you are not already using 2FA, chances are you will be before long.
I wanted to make the case that 2FA is more important than ever but not lose sight of the things that remain just as important as ever. All the parts have to work together so before finishing up let’s do a very brief review.
You should now be able to avoid the worst 2FA implementations and decide whether and when a hardware token is needed for your personal accounts. If you should find an account that you think is important but for which the vendor does not offer a 2FA option, consider requesting that it be added.
If the first authentication factor is a password, it’s still important to pick a good one and not to reuse it across accounts. Use a password manager to generate and remember long random strings rather than human-readable passwords. Change them now and then. You don’t get to choose whether your vendor uses password authentication, but you do have a say in how hackable your password is.
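For the curious, here is a minimal sketch of the kind of random-string generation a password manager performs under the hood, using Python’s cryptographically secure `secrets` module (never the ordinary `random` module, whose output is predictable):

```python
import secrets
import string

def make_password(length: int = 24) -> str:
    # Draw each character independently from a large alphabet using a
    # cryptographically secure random source.
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(make_password())  # a fresh random string on every call
```

At 24 characters from a roughly 94-symbol alphabet, the result has far more entropy than any password a human would memorize, which is exactly why the manager, not the human, should remember it.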
Endpoint security tools work after the login to detect and deflect intrusion attempts, and there are many versions available for both consumers and for companies. CylancePROTECT® even has a blended home and office solution that extends protection all the way from the server room to the family room. You only spend a few seconds at the login screen. Endpoint protection is for everything after that.
A relatively new type of endpoint protection technology is called continuous authentication. Since individual humans are recognizable by our device interaction patterns, it defines bad as “activity in your session that doesn’t look like you” and alerts on that. Your endpoint protection remains important even after enabling 2FA.
Backups are the last line of defense, but not if they are destroyed along with the primary device in a fire or flood, or locked up by ransomware. Get a safe deposit box or a backup buddy to hold those external drives offline and off-site. Make sure to update them regularly, and rotate them so that there is always at least one copy safely off-site while you are taking backups.
In short, 2FA is something to add to your existing toolbelt, but it doesn’t replace anything that’s already there. Now that you know how to avoid the worst implementations, why not take 2FA for a test drive today?