
AWS: Access an S3 bucket using gateway and interface endpoints (PrivateLink)

by Kliment Andreev

If you have a use case where you need to transfer a lot of data back and forth between various resources and an S3 bucket, you will definitely benefit from using gateway or interface endpoints for S3. The monthly bill for regional and zonal transfers will be much lower, and on top of that it’s much more secure.
From the AWS documentation: “Amazon S3 supports both gateway endpoints and interface endpoints. With a gateway endpoint, you can access Amazon S3 from your VPC, without requiring an internet gateway or NAT device for your VPC, and with no additional cost. However, gateway endpoints do not allow access from on-premises networks, from peered VPCs in other AWS Regions, or through a transit gateway. For those scenarios, you must use an interface endpoint, which is available for an additional cost.”
Keep this table handy, as it explains the differences between gateway and interface S3 endpoints.

Now that we know the use case and the differences, we’ll see three different scenarios. For this, you’ll need access to two different accounts and two different regions.

  • An S3 bucket in account A and an EC2 instance in the same account A and the same region A
  • An S3 bucket in account A and an EC2 instance in a different account B, but in the same region A
  • An S3 bucket in account A and an EC2 instance in a different account B and a different region B

The diagram looks like this.

An S3 bucket in account A, ec2 in account A, both in the same region

Let’s create an S3 bucket first. We’ll use AWS CLI with different profiles.

BUCKET_NAME="myuniquenameforthebucket"
REGION_A="us-east-2"
aws s3api create-bucket \
    --bucket $BUCKET_NAME \
    --region $REGION_A \
    --create-bucket-configuration LocationConstraint=$REGION_A

Let’s copy a file there.

dd if=/dev/zero of=somefile bs=1024 count=1
aws s3 cp somefile "s3://$BUCKET_NAME/somefile"

Using the same profile, create a VPC; its single public subnet will use the same CIDR as the VPC.

CIDR_A="192.168.10.0/24"
VPCA_ID=$(aws ec2 create-vpc \
    --cidr-block $CIDR_A \
    --region $REGION_A \
    --tag-specifications 'ResourceType=vpc,Tags=[{Key=Name,Value=VPC-A}]' \
    --output text --query 'Vpc.VpcId')
echo $VPCA_ID

Enable DNS support and DNS hostname resolution, which SSM needs. These are two separate VPC attributes, so two separate commands are required.

aws ec2 modify-vpc-attribute --enable-dns-hostnames "{\"Value\":true}"  --vpc-id $VPCA_ID
aws ec2 modify-vpc-attribute --enable-dns-support "{\"Value\":true}"  --vpc-id $VPCA_ID

Create a network ACL. (Every VPC already comes with a default network ACL, so this step is optional.)

aws ec2 create-network-acl --vpc-id $VPCA_ID

Create an Internet Gateway and attach it to the VPC.

IGWA_ID=$(aws ec2 create-internet-gateway \
    --region $REGION_A \
    --tag-specifications 'ResourceType=internet-gateway,Tags=[{Key=Name,Value=igw-A}]' \
    --output text --query 'InternetGateway.InternetGatewayId')
echo $IGWA_ID
aws ec2 attach-internet-gateway \
    --internet-gateway-id $IGWA_ID \
    --vpc-id $VPCA_ID

Let’s create a subnet.

SUBA_ID=$(aws ec2 create-subnet \
    --vpc-id $VPCA_ID \
    --cidr-block 192.168.10.0/24 \
    --region $REGION_A \
    --tag-specifications 'ResourceType=subnet,Tags=[{Key=Name,Value=subPubA}]' \
    --output text --query 'Subnet.SubnetId')
echo $SUBA_ID

Create a route table.

RTA_ID=$(aws ec2 create-route-table \
     --vpc-id $VPCA_ID \
     --output text --query 'RouteTable.RouteTableId')
echo $RTA_ID
aws ec2 create-route --route-table-id $RTA_ID --destination-cidr-block 0.0.0.0/0 --gateway-id $IGWA_ID

Associate this route table to the public subnet.

aws ec2 associate-route-table --route-table-id $RTA_ID --subnet-id $SUBA_ID

Now, we have to create an EC2 instance where we can test the S3 access. We won’t be using keys and SSH to access, we’ll use SSM. For that we need the instance to have access to the Internet which means we need a security group that allows access to Internet.

SGA_ID=$(aws ec2 create-security-group \
    --group-name sgAllowICMPandOutboundAccess --description "Allows ICMP and outbound access" \
    --vpc-id $VPCA_ID \
    --output text --query 'GroupId')
echo $SGA_ID

Allow unrestricted egress. A new security group has this rule by default, so the command below is commented out; only run it if the default egress rule was removed.

#aws ec2 authorize-security-group-egress \
#    --group-id $SGA_ID \
#    --protocol all \
#    --cidr "0.0.0.0/0"

Do this one, so we can ping instances from each other when we do VPC peering.

aws ec2 authorize-security-group-ingress \
  --group-id $SGA_ID \
  --ip-permissions IpProtocol=icmp,FromPort=-1,ToPort=-1,IpRanges="[{CidrIp=192.168.10.0/24},{CidrIp=192.168.20.0/24},{CidrIp=192.168.30.0/24}]"

Let’s create a role so we can access the EC2 instance over SSM. First we need a trust policy that lets EC2 assume the role.

cat << 'EOF' > trust-policy.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

Then create the role and attach the AWS managed policy for SSM (AmazonSSMManagedInstanceCore).

aws iam create-role --role-name rolSSM --assume-role-policy-document file://trust-policy.json
aws iam attach-role-policy --policy-arn arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore --role-name rolSSM

Create instance profile.

aws iam create-instance-profile --instance-profile-name ipEC2

Add the role.

aws iam add-role-to-instance-profile --role-name rolSSM --instance-profile-name ipEC2

Create an instance.

AMI_ID="ami-033fabdd332044f06"
INS_TYPE="t3.micro"
aws ec2 run-instances \
    --image-id $AMI_ID \
    --instance-type $INS_TYPE \
    --security-group-ids $SGA_ID \
    --subnet-id $SUBA_ID \
    --iam-instance-profile Name="ipEC2" \
    --associate-public-ip-address  \
    --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=ec2-A}]'

We’ll need an IAM user that we’ll use to access the S3 bucket.

IAM_USER="usrMyBucket"
aws iam create-user --user-name $IAM_USER

We’ll also need a policy that gives access to that specific bucket only.

cat > bucket-policy.json << EOF
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::$BUCKET_NAME",
                "arn:aws:s3:::$BUCKET_NAME/*"
            ]
        },
        {
            "Effect": "Deny",
            "NotAction": "s3:*",
            "NotResource": [
                "arn:aws:s3:::$BUCKET_NAME",
                "arn:aws:s3:::$BUCKET_NAME/*"
            ]
        }
    ]
}
EOF

Create the policy.

POL_ARN=$(aws iam create-policy \
    --policy-name polMyBucketAccess \
    --policy-document file://bucket-policy.json \
    --output text --query 'Policy.Arn')
echo $POL_ARN

Attach the policy to the user.

aws iam attach-user-policy --policy-arn $POL_ARN --user-name $IAM_USER

Create access keys.

aws iam create-access-key --user-name $IAM_USER --output table --query '[AccessKey.AccessKeyId,AccessKey.SecretAccessKey]'

Copy these two values.
Now, go to AWS console, select the EC2 instance and click Connect. Use the 2nd tab Session Manager and click Connect.
Type aws configure and paste the copied values for the IAM user you just created.

If you try to do aws s3 ls, you’ll get access denied, but if you try aws s3 ls s3://myuniquenameforthebucket, you’ll see that it works.

Now, this type of access goes over the Internet and you’ll be charged for the data transfers. In order to avoid that, we’ll create an S3 gateway endpoint.
Go to VPC menu and from the left side choose Endpoints.
Create the endpoint: give it a name, choose AWS services, filter by Type = Gateway and select the S3 service name. Then select the VPC where you created your EC2 instance and the route table that’s associated with the subnet where the EC2 instance resides. For the policy, choose Full Access, which means the S3 gateway endpoint will allow access to all S3 buckets. Finally, click to create the endpoint.
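If you prefer to stay in the CLI instead of the console, the same gateway endpoint can be created with one command. This is a sketch; it assumes the $VPCA_ID and $RTA_ID variables from the earlier steps, and the endpoint gets the default full-access policy.

```shell
# Build the region-specific S3 service name.
REGION_A="us-east-2"
SERVICE_NAME="com.amazonaws.${REGION_A}.s3"
# Create an S3 gateway endpoint in VPC-A and attach it to the route
# table used by the public subnet.
aws ec2 create-vpc-endpoint \
    --vpc-endpoint-type Gateway \
    --vpc-id $VPCA_ID \
    --service-name $SERVICE_NAME \
    --route-table-ids $RTA_ID
```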

If you check your route table, you’ll see that there is another entry there. If you don’t see it yet, wait a few seconds and refresh.
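You can confirm the new route from the CLI as well (assuming the $RTA_ID variable from earlier):

```shell
# The gateway endpoint shows up as an extra route whose destination is
# an S3 managed prefix list (pl-...) and whose target is the vpce-... ID.
QUERY='RouteTables[0].Routes'
aws ec2 describe-route-tables \
    --route-table-ids $RTA_ID \
    --query "$QUERY"
```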

If you go back to the EC2 instance and do the aws s3 ls s3://myuniquenameforthebucket command again, you’ll see that nothing changed. So, how do you know that we did the right thing? It’s simple. Click on the endpoint that you’ve created, click on the Policy tab and then click the Edit Policy button.

Change the Effect on line 5 of the policy from Allow to Deny and click Save.
Go back to the EC2 instance and run aws s3 ls s3://myuniquenameforthebucket again. You’ll get access denied. There you go: traffic goes over the endpoint. Revert the change from Deny back to Allow. Let’s move on to the 2nd use case.
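If you’d rather script this check than click through the console, here is a sketch. The endpoint ID below is a placeholder; substitute the vpce-... ID of the endpoint you created.

```shell
# Placeholder: replace with your endpoint's ID.
VPCE_ID="vpce-0123456789abcdef0"
# A policy that denies everything, just for the test.
cat > deny-policy.json << 'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    { "Effect": "Deny", "Principal": "*", "Action": "*", "Resource": "*" }
  ]
}
EOF
# Swap in the deny-all policy, then verify from the instance that
# "aws s3 ls s3://..." now fails with AccessDenied.
aws ec2 modify-vpc-endpoint --vpc-endpoint-id $VPCE_ID --policy-document file://deny-policy.json
# Restore the default full-access policy afterwards.
aws ec2 modify-vpc-endpoint --vpc-endpoint-id $VPCE_ID --reset-policy
```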

An S3 bucket in account A, an ec2 instance in a different account B, but both in the same region A

NOTE: Make sure you switch your AWS CLI to a different profile, because we’ll be building the EC2 instance in another account. The S3 bucket stays the same.
Let’s build the 2nd VPC and EC2.

REGION_A="us-east-2"
CIDR_B="192.168.20.0/24"
VPCB_ID=$(aws ec2 create-vpc \
    --cidr-block $CIDR_B \
    --region $REGION_A \
    --tag-specifications 'ResourceType=vpc,Tags=[{Key=Name,Value=VPC-B}]' \
    --output text --query 'Vpc.VpcId')
echo $VPCB_ID

Enable DNS support and DNS hostname resolution, which SSM needs. These are two separate VPC attributes, so two separate commands are required.

aws ec2 modify-vpc-attribute --enable-dns-hostnames "{\"Value\":true}"  --vpc-id $VPCB_ID
aws ec2 modify-vpc-attribute --enable-dns-support "{\"Value\":true}"  --vpc-id $VPCB_ID

Create a network ACL.

aws ec2 create-network-acl --vpc-id $VPCB_ID

Create an Internet Gateway and attach it to the VPC.

IGWB_ID=$(aws ec2 create-internet-gateway \
    --region $REGION_A \
    --tag-specifications 'ResourceType=internet-gateway,Tags=[{Key=Name,Value=igw-B}]' \
    --output text --query 'InternetGateway.InternetGatewayId')
echo $IGWB_ID
aws ec2 attach-internet-gateway \
    --internet-gateway-id $IGWB_ID \
    --vpc-id $VPCB_ID

Let’s create a subnet.

SUBB_ID=$(aws ec2 create-subnet \
    --vpc-id $VPCB_ID \
    --cidr-block 192.168.20.0/24 \
    --region $REGION_A \
    --tag-specifications 'ResourceType=subnet,Tags=[{Key=Name,Value=subPubB}]' \
    --output text --query 'Subnet.SubnetId')
echo $SUBB_ID

Create a route table.

RTB_ID=$(aws ec2 create-route-table \
     --vpc-id $VPCB_ID \
     --output text --query 'RouteTable.RouteTableId')
echo $RTB_ID
aws ec2 create-route --route-table-id $RTB_ID --destination-cidr-block 0.0.0.0/0 --gateway-id $IGWB_ID

Associate this route table to the public subnet.

aws ec2 associate-route-table --route-table-id $RTB_ID --subnet-id $SUBB_ID

Now, we have to create an EC2 instance where we can test the S3 access. We won’t be using keys and SSH to access, we’ll use SSM. For that we need the instance to have access to the Internet which means we need a security group that allows access to Internet.

SGB_ID=$(aws ec2 create-security-group \
    --group-name sgAllowICMPandOutboundAccess --description "Allows ICMP and outbound access" \
    --vpc-id $VPCB_ID \
    --output text --query 'GroupId')
echo $SGB_ID

Allow unrestricted egress. A new security group has this rule by default, so the command below is commented out; only run it if the default egress rule was removed.

#aws ec2 authorize-security-group-egress \
#    --group-id $SGB_ID \
#    --protocol all \
#    --cidr "0.0.0.0/0"

Do this one, so we can ping instances from each other when we do VPC peering.

aws ec2 authorize-security-group-ingress \
  --group-id $SGB_ID \
  --ip-permissions IpProtocol=icmp,FromPort=-1,ToPort=-1,IpRanges="[{CidrIp=192.168.10.0/24},{CidrIp=192.168.20.0/24},{CidrIp=192.168.30.0/24}]"

We can reuse the same policy to create a role that will allow us to use SSM.

cat trust-policy.json

Then create the role and attach the AWS managed policy for SSM (AmazonSSMManagedInstanceCore).

aws iam create-role --role-name rolSSM --assume-role-policy-document file://trust-policy.json
aws iam attach-role-policy --policy-arn arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore --role-name rolSSM

Create an instance profile. We can reuse the same names, because this is a different account.

aws iam create-instance-profile --instance-profile-name ipEC2

Add the role.

aws iam add-role-to-instance-profile --role-name rolSSM --instance-profile-name ipEC2

Create an instance. The AMI_ID is the same because it’s the same region.

AMI_ID="ami-033fabdd332044f06"
INS_TYPE="t3.micro"
aws ec2 run-instances \
    --image-id $AMI_ID \
    --instance-type $INS_TYPE \
    --security-group-ids $SGB_ID \
    --subnet-id $SUBB_ID \
    --iam-instance-profile Name="ipEC2" \
    --associate-public-ip-address  \
    --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=ec2-B}]'

Wait for a couple of minutes and then connect from SSM, same as you did with the first instance. Configure AWS again using the same set of credentials that you did before and then try to list the content of the bucket. You should be able to see the file.

BUCKET_NAME="myuniquenameforthebucket"
aws s3 ls s3://$BUCKET_NAME

Again, the traffic goes over the Internet, so we need another S3 gateway endpoint. You create the gateway endpoint where the client is located, not where the destination is.
Let’s create another S3 gateway endpoint as described above.
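The CLI equivalent is the same one-liner as before, this time against VPC-B. A sketch, assuming the $VPCB_ID and $RTB_ID variables from the steps above and the account B profile:

```shell
# Same region, so the service name is unchanged.
REGION_A="us-east-2"
SERVICE_NAME="com.amazonaws.${REGION_A}.s3"
# Create the S3 gateway endpoint in VPC-B, attached to its route table.
aws ec2 create-vpc-endpoint \
    --vpc-endpoint-type Gateway \
    --vpc-id $VPCB_ID \
    --service-name $SERVICE_NAME \
    --route-table-ids $RTB_ID
```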

Check the route table and you’ll see the route created. At this point, you are accessing the bucket over the endpoint. Do the same test. Change the policy to Deny for the endpoint and you’ll get access denied.

An S3 bucket in account A, an ec2 instance in a different account B and a different region B

For the 3rd use case, we’ll use an interface endpoint (PrivateLink), not a gateway endpoint. This is what the AWS documentation has to say: “With AWS PrivateLink for Amazon S3, you can provision interface VPC endpoints (interface endpoints) in your virtual private cloud (VPC). These endpoints are directly accessible from applications that are on premises over VPN and AWS Direct Connect, or in a different AWS Region over VPC peering. Interface endpoints are represented by one or more elastic network interfaces (ENIs) that are assigned private IP addresses from subnets in your VPC. Requests to Amazon S3 over interface endpoints stay on the Amazon network. You can also access interface endpoints in your VPC from on-premises applications through AWS Direct Connect or AWS Virtual Private Network (AWS VPN).”
Let’s create the resources: a VPC and an EC2 instance. Change the AWS CLI profile so it’s using the same account B but a new region B.

REGION_B="us-east-1"
CIDR_C="192.168.30.0/24"
VPCC_ID=$(aws ec2 create-vpc \
    --cidr-block $CIDR_C \
    --region $REGION_B \
    --tag-specifications 'ResourceType=vpc,Tags=[{Key=Name,Value=VPC-C}]' \
    --output text --query 'Vpc.VpcId')
echo $VPCC_ID

Enable DNS support and DNS hostname resolution, which SSM needs. These are two separate VPC attributes, so two separate commands are required.

aws ec2 modify-vpc-attribute --enable-dns-hostnames "{\"Value\":true}"  --vpc-id $VPCC_ID
aws ec2 modify-vpc-attribute --enable-dns-support "{\"Value\":true}"  --vpc-id $VPCC_ID

Create a network ACL.

aws ec2 create-network-acl --vpc-id $VPCC_ID

Create an Internet Gateway and attach it to the VPC.

IGWC_ID=$(aws ec2 create-internet-gateway \
    --region $REGION_B \
    --tag-specifications 'ResourceType=internet-gateway,Tags=[{Key=Name,Value=igw-C}]' \
    --output text --query 'InternetGateway.InternetGatewayId')
echo $IGWC_ID
aws ec2 attach-internet-gateway \
    --internet-gateway-id $IGWC_ID \
    --vpc-id $VPCC_ID

Let’s create a subnet.

SUBC_ID=$(aws ec2 create-subnet \
    --vpc-id $VPCC_ID \
    --cidr-block 192.168.30.0/24 \
    --region $REGION_B \
    --tag-specifications 'ResourceType=subnet,Tags=[{Key=Name,Value=subPubC}]' \
    --output text --query 'Subnet.SubnetId')
echo $SUBC_ID

Create a route table.

RTC_ID=$(aws ec2 create-route-table \
     --vpc-id $VPCC_ID \
     --output text --query 'RouteTable.RouteTableId')
echo $RTC_ID
aws ec2 create-route --route-table-id $RTC_ID --destination-cidr-block 0.0.0.0/0 --gateway-id $IGWC_ID

Associate this route table to the public subnet.

aws ec2 associate-route-table --route-table-id $RTC_ID --subnet-id $SUBC_ID

Now, we have to create an EC2 instance where we can test the S3 access. We won’t be using keys and SSH to access, we’ll use SSM. For that we need the instance to have access to the Internet which means we need a security group that allows access to Internet.

SGC_ID=$(aws ec2 create-security-group \
    --group-name sgAllowICMPandOutboundAccess --description "Allows ICMP and outbound access" \
    --vpc-id $VPCC_ID \
    --output text --query 'GroupId')
echo $SGC_ID

Allow unrestricted egress. A new security group has this rule by default, so the command below is commented out; only run it if the default egress rule was removed.

#aws ec2 authorize-security-group-egress \
#    --group-id $SGC_ID \
#    --protocol all \
#    --cidr "0.0.0.0/0"

Do this one, so we can ping instances from each other when we do VPC peering.

aws ec2 authorize-security-group-ingress \
  --group-id $SGC_ID \
  --ip-permissions IpProtocol=icmp,FromPort=-1,ToPort=-1,IpRanges="[{CidrIp=192.168.10.0/24},{CidrIp=192.168.20.0/24},{CidrIp=192.168.30.0/24}]"

We don’t have to create any IAM roles; we’ll reuse the ones we built for the 2nd use case, since IAM is global rather than regional.
Create the EC2 instance. The AMI ID changes this time, because AMI IDs are region-specific.

AMI_ID="ami-08a0d1e16fc3f61ea"
INS_TYPE="t3.micro"
aws ec2 run-instances \
    --image-id $AMI_ID \
    --instance-type $INS_TYPE \
    --security-group-ids $SGC_ID \
    --subnet-id $SUBC_ID \
    --iam-instance-profile Name="ipEC2" \
    --associate-public-ip-address  \
    --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=ec2-C}]'

Same thing as we did before, connect from SSM, configure AWS CLI using the previous credentials and try to list the bucket.

BUCKET_NAME="myuniquenameforthebucket"
aws s3 ls s3://$BUCKET_NAME

You should see the file there. But this access goes over the Internet. Now we have to create an interface endpoint in the destination VPC, which is VPC-A. When we create an interface endpoint, a private IP is assigned to it from the subnet we choose. In order for this private IP to be reachable from the other region, we need some sort of connection between the regions, e.g. VPC peering or a Transit Gateway. Let’s set up peering between account A (VPC-A) and account B (VPC-C). Enter the 12-digit account number of account A, where the S3 bucket resides.

echo $VPCA_ID
echo $VPCC_ID
echo $REGION_A
ACCOUNT_A="123456789012"
echo $ACCOUNT_A

Make sure all of the above returns some value. Do the peering.

aws ec2 create-vpc-peering-connection \
  --vpc-id $VPCC_ID --peer-vpc-id $VPCA_ID \
  --peer-owner-id $ACCOUNT_A --peer-region $REGION_A

Log in to account A and, under the VPC | Peering connections menu on the left, select the peering connection and accept it. But this is not enough. You need to tell the VPC to route traffic over the peering connection. Go to account A, VPC-A, find the route table associated with the subnet where the EC2 instance is, and add a route sending destination 192.168.30.0/24 over the peering connection. As you can see, this is the same route table that we modified in our first use case.
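The accept-and-route steps on the account A side can also be scripted. A sketch; the pcx-... ID below is a placeholder for the value returned by create-vpc-peering-connection, and $RTA_ID is the route table from the first use case.

```shell
# Placeholder: your peering connection ID.
PCX_ID="pcx-0123456789abcdef0"
CIDR_C="192.168.30.0/24"
# Accept the peering request (run with the account A profile).
aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id $PCX_ID
# Route traffic destined for VPC-C over the peering connection.
aws ec2 create-route --route-table-id $RTA_ID \
    --destination-cidr-block $CIDR_C \
    --vpc-peering-connection-id $PCX_ID
```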

Now, log in to account B and VPC-C and modify the route table, but this time the destination is VPC-A’s CIDR (192.168.10.0/24). At this point you should be able to ping EC2-A from EC2-C. Don’t worry if you can’t ping EC2-C from EC2-A.
At this point we can create the interface endpoint in the region where the bucket is. This time choose Interface as the type, VPC-A, our security group, the Full Access policy and the subnet where the EC2 instance is. This subnet has nothing to do with the bucket access; we are just creating a network interface there. We could choose any subnet in that VPC, we just need the IP assigned to the endpoint interface.
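The console steps above map to a single CLI call. A sketch, assuming the $VPCA_ID, $SUBA_ID and $SGA_ID variables from the first use case and the account A profile:

```shell
# Run in region A, where the bucket lives.
REGION_A="us-east-2"
SERVICE_NAME="com.amazonaws.${REGION_A}.s3"
# Create the S3 interface endpoint; the ENI lands in the chosen subnet.
aws ec2 create-vpc-endpoint \
    --vpc-endpoint-type Interface \
    --vpc-id $VPCA_ID \
    --service-name $SERVICE_NAME \
    --subnet-ids $SUBA_ID \
    --security-group-ids $SGA_ID
```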

Two DNS names will be created for you: one refers to the Availability Zone and the other to the Region. Use the regional DNS name (the first one) and replace the * character with the word bucket.
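That substitution can be sketched in shell. The endpoint ID is a placeholder and the DNS name is the one used in the example command below.

```shell
# Placeholder: your interface endpoint ID.
VPCE_ID="vpce-038d5f94d78d8b216"
# List the DNS names attached to the endpoint.
aws ec2 describe-vpc-endpoints --vpc-endpoint-ids $VPCE_ID \
    --query 'VpcEndpoints[0].DnsEntries[].DnsName' --output text
# The wildcard (regional) name looks like this; replace the leading *
# with "bucket" to build a usable endpoint URL.
REGIONAL="*.vpce-038d5f94d78d8b216-vyvk5uon.s3.us-east-2.vpce.amazonaws.com"
ENDPOINT_URL="https://$(echo "$REGIONAL" | sed 's/^\*/bucket/')"
echo $ENDPOINT_URL
```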

From instance EC2-C, this is how you will access the bucket.

aws s3 ls s3://myuniquenameforthebucket --endpoint-url https://bucket.vpce-038d5f94d78d8b216-vyvk5uon.s3.us-east-2.vpce.amazonaws.com

If you try this from instance EC2-C, it won’t work at first. Why? Because of the security group that we used. We have to allow HTTPS traffic from 192.168.30.0/24. Go find the first security group (that’s the one we used for the interface endpoint) and allow HTTPS from 192.168.30.0/24. After the change, you’ll be able to access the bucket.
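That rule can be added from the CLI as well. A sketch, run with the account A profile; $SGA_ID is the security group attached to the interface endpoint.

```shell
# Allow HTTPS from VPC-C's CIDR to the endpoint's security group.
CIDR_C="192.168.30.0/24"
aws ec2 authorize-security-group-ingress \
    --group-id $SGA_ID \
    --protocol tcp \
    --port 443 \
    --cidr $CIDR_C
```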
