The use of application programming interfaces (APIs) has exploded as businesses deploy mobile apps, containers, serverless computing, and microservices, and expand their cloud presence. Consequently, many APIs are developed and deployed very quickly, allowing coding errors to persist, with poor authentication practices among the top offenses.
APIs are stateless in nature, and any gap or weakness can allow an attacker to gain unauthorized access to applications or to exfiltrate data. Authenticating an API requires the developer to understand the complete transaction, from the user interaction through to the outcome, which means going beyond the limits of the API specification itself. The chosen authentication protocol verifies the identity of the client attempting to connect before authorization determines whether the connection to an application is allowed to take place.
There are a variety of ways to authenticate API requests. These range from a username and password sent over HTTPS, to API keys, which give each client a unique string of characters to present with every request, to OAuth, a well-known authorization framework that developers use to orchestrate approvals automatically. There’s also the option to layer OpenID Connect on top of OAuth to verify identity via an authorization server.
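As a rough illustration of the API-key approach, the sketch below (with hypothetical names and an in-memory key store standing in for a real database) checks a client-supplied key against the one issued to that client:

```python
import hmac
import secrets

# Hypothetical server-side key store; a real deployment would keep issued
# keys hashed in a database, not in memory.
ISSUED_KEYS = {"client-a": secrets.token_urlsafe(32)}

def authenticate(client_id: str, presented_key: str) -> bool:
    """Accept the request only if the presented key matches the issued one."""
    expected = ISSUED_KEYS.get(client_id)
    if expected is None:
        return False
    # Constant-time comparison avoids leaking key material via timing.
    return hmac.compare_digest(expected, presented_key)
```

The constant-time comparison matters here: a naive `==` check can leak information about how much of the key matched through response timing.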
Deciding which authentication mechanism to use can be confusing. Each has its pros and cons. HTTPS with a username and password is simple to use but relies on the security of that credential combination. API keys provide a randomized means of access, while OAuth is the more secure option (though, admittedly, more difficult to roll out and maintain). The decision on which approach to take will be influenced by multiple factors, such as whether the API will be made accessible to external developers or kept in-house.
Fallout from failures
It’s a decision that should be part of any “shift left” strategy during API development, in which security is given full attention before the API is tested and spun up. But it’s an area that often gets neglected, and there are plenty of examples that illustrate the consequences.
For instance, the MailChimp data breach earlier this year saw attackers use compromised API keys to gain access to an account, target the finance and crypto clients reachable through it, and then launch a phishing campaign against those crypto clients’ customers.
Poor API authentication can therefore lead to significant data losses and threaten the integrity of the brand. But what are the most common pitfalls when it comes to API authentication?
Forgotten authentication

Legacy applications aren’t always compatible with authentication mechanisms. Sometimes teams deploy APIs to production for convenience’s sake, fully intending to tackle authentication at a later stage. Then a member of staff leaves, or the team simply forgets to document the work, and the result is an unauthenticated API in production.
Whether the API is deployed internally or externally, such an error creates a substantial risk that the team may not even be aware of until it is exploited. And, even if it is caught in time, this will lead to some disruption as the service is taken down and the issue fixed.
Non-null-value authentication tokens
APIs that check only for the presence of an authentication token, not its value, make their way into production quite regularly. Because the API will accept pretty much any token so long as one appears in the request, the actual value doesn’t matter. Again, it’s often something the development team meant to fix but overlooked. So, prioritize validating token values, not just token presence.
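The pitfall and its fix can be sketched as follows; the token store and values here are hypothetical, but the contrast is the point: the broken check accepts any non-empty token, while the fixed one validates the value against the tokens actually issued:

```python
# Hypothetical issued-token store mapping token values to users.
VALID_TOKENS = {"tok-3f9a": "alice"}

def broken_auth(headers: dict) -> bool:
    # Anti-pattern: confirms only that *a* token is present in the
    # request, so any non-empty value is accepted.
    return bool(headers.get("Authorization"))

def fixed_auth(headers: dict):
    # Validate the token's value, not just its presence; returns the
    # authenticated user, or None if the token is unrecognized.
    value = headers.get("Authorization", "")
    scheme, _, token = value.partition(" ")
    if scheme != "Bearer":
        return None
    return VALID_TOKENS.get(token)
```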
Authenticated but not authorized
Of course, even if the API is authenticated, authorization still needs to be considered, but not all teams take this attitude; some assume authentication is sufficient. If the API serves an authenticated user without checking that user’s access privileges, they can connect to resources that are outside their remit.
If the authentication mechanism is also poorly constructed and susceptible to enumeration (whereby the attacker can easily run through a finite number of login possibilities), it becomes easier still for attackers to gain access. It’s an error that can lead to significant dwell time, as it’s difficult to detect, enabling an attacker to explore unchallenged. Using long, randomly generated identifiers can mitigate, though not eliminate, this type of attack.
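A minimal sketch of that missing authorization step, assuming a hypothetical ownership table in place of a real ACL or policy engine:

```python
# Hypothetical ownership records; a real system would consult an ACL,
# role table, or policy engine rather than a dictionary.
RESOURCE_OWNERS = {"invoice-1": "alice", "invoice-2": "bob"}

def can_access(user: str, resource_id: str) -> bool:
    # Authorization check performed *after* authentication: the caller
    # must be entitled to this specific resource, not merely logged in.
    return RESOURCE_OWNERS.get(resource_id) == user
```

Note that the lookup is keyed by the resource, not the user: an authenticated user asking for someone else’s resource (or a non-existent one) is denied by default.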
Fragmented authentication

Within large organizations with distributed environments, there may be several teams, each deploying their own flavor of API authentication, resulting in a fragmented approach. This creates multiple ways to access an API, increasing the risk of compromise over time and making these APIs harder to manage.
For example, client A might be asked to send an X-api-token request header to authenticate against an application, client B might be asked to supply the token in a request parameter called api-key, and a third client might be asked to put the api-key in the authorization header. These multiple approaches increase risk because if any one method is compromised, the entire API becomes susceptible to attack. It also makes it likely that some of these authentication paths will be missed when future updates and patches are applied.
Enforcing a consistent approach by using a set specification such as OpenAPI/Swagger and testing prior to deployment using functionality-based testing can help prevent this from occurring. It’s also advisable to use runtime monitoring to detect any APIs out there that might be using more than one mechanism.
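One way to enforce that consistency is a single, shared extraction function that every service calls. The sketch below (hypothetical, and deliberately strict) accepts keys only from the Authorization header, so the ad hoc header and query-parameter variants described above are never honored:

```python
def extract_api_key(headers: dict, query_params: dict):
    """One shared extraction point for every API in the organization.

    Keys are accepted from the Authorization header only; keys arriving
    in query parameters or ad hoc headers (X-api-token, api-key) are
    deliberately ignored, so there is a single mechanism to audit,
    monitor, and patch.
    """
    scheme, _, key = headers.get("Authorization", "").partition(" ")
    if scheme == "Bearer" and key:
        return key
    return None
```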
Improper authorization logic
If an API accepts tokens produced in lower-privilege environments, it can grant access to higher-privilege environments like production. A threat actor may be able to grab an authentication token from staging and replay it against the production server. A poor authorization implementation would then allow access, as the token is technically valid, albeit for the wrong environment. This highlights the need for a proper privilege hierarchy, which can be achieved using mechanisms such as OAuth scopes or other identity enforcement controls.
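A sketch of that environment check, assuming hypothetical env and scopes claims inside a token whose signature has already been verified:

```python
def accept_token(claims: dict, required_env: str = "production",
                 required_scope: str = "orders:read") -> bool:
    # A token that is cryptographically valid can still be wrong for this
    # deployment: reject anything minted for another environment (e.g. a
    # staging token replayed against production) or lacking the required
    # scope for the operation.
    return (claims.get("env") == required_env
            and required_scope in claims.get("scopes", []))
```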
Shield right while shifting left
As we become more reliant on APIs, we must know how to prevent these authentication errors.
What these issues illustrate is that security and development teams must work closely together. While adhering to respected API standards like the OpenAPI specification and implementing “shift left” security practices during development can help reduce the likelihood of these authentication and authorization errors occurring, the sheer scale of APIs being deployed means the business is unlikely to catch all instances.
So, authentication is not just a development problem, but also one the business must address practically and strategically. A post-deployment security strategy can focus on continuous runtime visibility to create and update an inventory of all APIs, assess their risk exposure, and detect and block attacks through unified API protection (UAP).
Unified API protection covers the entire lifecycle of the API, from discovery to cataloguing, attack detection, mitigation and testing. In this way it ensures APIs are not only created securely but also continuously monitored and actively defended. It thereby facilitates a Zero Trust approach: even authenticated and authorized API traffic is assumed to be susceptible to attack, and API transactions and user behavior are monitored and analyzed accordingly. Combine that with authentication that enforces least privilege, and anomalous activity can be detected and unauthorized access prevented.