
AWS: Access DynamoDB and Secrets Manager from Node.js on EC2 and EKS

by Kliment Andreev

In this post I’ll explain how to access a DynamoDB table and Secrets Manager from an EC2 instance running a small Node.js/Express app. The app will read an entry from the DynamoDB table and print it, and it will also call an API Gateway that invokes a Lambda function returning the current date and time. The API Gateway will be protected with header authentication, and we’ll store the auth token in Secrets Manager, which means our app will have to access Secrets Manager as well. Finally, we’ll containerize the app as a Docker image and run it as a Deployment behind a load balancer on EKS.
There are a lot of steps, so make sure to use the same variable names and pay attention to the outputs.
NOTE: There are multiple ways of accessing Secrets Manager. The one described here uses IAM roles and the AWS SDK. There is another way of accessing Secrets Manager from EKS using the Kubernetes CSI driver, so make sure you read that post too if you only use EKS and Secrets Manager.

Lambda and API Gateway

Let’s create a Lambda function and an API Gateway. When the API Gateway is called with a parameter, the function will return the parameter together with the current date/time, and on top of that we’ll protect the API Gateway so it can only be called if we pass a password/token.
Define the default region.

export AWS_DEFAULT_REGION="us-east-2"

Get the account number.

ACCT_NO=$(aws sts get-caller-identity --query "Account" --output text)
echo $ACCT_NO

This is the Hello World Node.js Lambda function source.

cat <<'EOF' > helloworld.js
'use strict';
 
exports.handler = async (event) => {
    let name = "";
    let responseCode = 200;
    const date = new Date();
    
    if (event.queryStringParameters && event.queryStringParameters.name) {
        name = event.queryStringParameters.name;
    }
    
    if (event.body) {
        let body = JSON.parse(event.body)
    }
 
    let greeting = `Hello ${name}, it's ${date}`;
    

    let responseBody = {
        message: greeting,
        input: event
    };
    
    let response = {
        statusCode: responseCode,
        headers: {
            "x-custom-header" : "my custom header value"
        },
        body: JSON.stringify(responseBody)
    };
    console.log("response: " + JSON.stringify(response))
    return response;
};
EOF

Zip the function so we can deploy it.

zip helloworld.zip helloworld.js

We need a Lambda role that allows execution.

cat <<EOF > lambda-role.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
      "Service": [
        "lambda.amazonaws.com"
      ]
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

Create the role and attach it to a policy.

ROLE_ARN=$(aws iam create-role --role-name rolLambdaExecutionRole \
  --assume-role-policy-document file://lambda-role.json --output text --query 'Role.Arn')
echo $ROLE_ARN
aws iam attach-role-policy --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole \
  --role-name rolLambdaExecutionRole

Create the Lambda function.

LAMBDA_HELLOWORLD_ARN=$(aws lambda create-function --function-name HelloWorld \
  --zip-file fileb://helloworld.zip --runtime nodejs18.x --role $ROLE_ARN \
  --handler helloworld.handler --output text --query 'FunctionArn')
echo $LAMBDA_HELLOWORLD_ARN

We also need another Lambda function that will be used as an authorizer. This will allow us to protect the API Gateway with authentication. The authentication parameters (headerauth1, password) are checked on line 27 of the authorize.js file below.

cat << EOF > authorize.js
exports.handler = function(event, context, callback) {
    console.log('Received event:', JSON.stringify(event, null, 2));

    // Retrieve request parameters from the Lambda function input:
    var headers = event.headers;
        
    // Parse the input for the parameter values
    var tmp = event.methodArn.split(':');
    var apiGatewayArnTmp = tmp[5].split('/');
    var awsAccountId = tmp[4];
    var region = tmp[3];
    var restApiId = apiGatewayArnTmp[0];
    var stage = apiGatewayArnTmp[1];
    var method = apiGatewayArnTmp[2];
    var resource = '/'; // root resource
    if (apiGatewayArnTmp[3]) {
        resource += apiGatewayArnTmp[3];
    }
        
    // Perform authorization to return the Allow policy for correct parameters and 
    // the 'Unauthorized' error, otherwise.
    var authResponse = {};
    var condition = {};
    condition.IpAddress = {};
     
    if (headers.headerauth1 === "password") 
    {
        callback(null, generateAllow('me', event.methodArn));
    }  else {
        callback("Unauthorized");
    }
}
     
// Help function to generate an IAM policy
var generatePolicy = function(principalId, effect, resource) {
    // Required output:
    var authResponse = {};
    authResponse.principalId = principalId;
    if (effect && resource) {
        var policyDocument = {};
        policyDocument.Version = '2012-10-17'; // default version
        policyDocument.Statement = [];
        var statementOne = {};
        statementOne.Action = 'execute-api:Invoke'; // default action
        statementOne.Effect = effect;
        statementOne.Resource = resource;
        policyDocument.Statement[0] = statementOne;
        authResponse.policyDocument = policyDocument;
    }
    // Optional output with custom properties of the String, Number or Boolean type.
    authResponse.context = {
        "stringKey": "stringval",
        "numberKey": 123,
        "booleanKey": true
    };
    return authResponse;
}
     
var generateAllow = function(principalId, resource) {
    return generatePolicy(principalId, 'Allow', resource);
}
     
var generateDeny = function(principalId, resource) {
    return generatePolicy(principalId, 'Deny', resource);
}
EOF
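The ARN parsing at the top of the authorizer can be sanity-checked locally with a made-up methodArn (the account ID and API ID below are fake):

```javascript
// Same split logic as in authorize.js, exercised against a sample methodArn.
const methodArn = 'arn:aws:execute-api:us-east-2:123456789012:a1b2c3d4e5/test/POST/helloworld';

const tmp = methodArn.split(':');
const apiGatewayArnTmp = tmp[5].split('/');

console.log('region  :', tmp[3]);                    // us-east-2
console.log('account :', tmp[4]);                    // 123456789012
console.log('API id  :', apiGatewayArnTmp[0]);       // a1b2c3d4e5
console.log('stage   :', apiGatewayArnTmp[1]);       // test
console.log('method  :', apiGatewayArnTmp[2]);       // POST
console.log('resource:', '/' + apiGatewayArnTmp[3]); // /helloworld
```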

Zip it so we can deploy it.

zip authorize.zip authorize.js

Create the Lambda authorizer function and use the same role as with the Hello World function.

LAMBDA_AUTH_ARN=$(aws lambda create-function --function-name Authorize1 --zip-file fileb://authorize.zip --runtime nodejs18.x \
  --role $ROLE_ARN --handler authorize.handler --output text --query 'FunctionArn')
echo $LAMBDA_AUTH_ARN

Create an API Gateway.

APIGW_ID=$(aws apigateway create-rest-api --name 'gwapihelloworld' \
  --endpoint-configuration '{"types": ["REGIONAL"]}' --output text --query 'id')
echo $APIGW_ID

Create a top level resource.

ROOT_ID=$(aws apigateway get-resources --rest-api-id $APIGW_ID --output text --query 'items[0].id')
echo $ROOT_ID
RES_ID=$(aws apigateway create-resource --rest-api-id $APIGW_ID \
      --parent-id $ROOT_ID \
      --path-part helloworld --output text --query 'id')
echo $RES_ID

Create an ANY method without authorization.

aws apigateway put-method --rest-api-id $APIGW_ID \
       --resource-id $RES_ID \
       --http-method ANY \
       --authorization-type "NONE"

Integrate the Lambda function with the API Gateway.

aws apigateway put-integration \
        --rest-api-id $APIGW_ID \
        --resource-id $RES_ID \
        --http-method ANY \
        --type AWS_PROXY \
        --integration-http-method POST \
        --uri arn:aws:apigateway:$AWS_DEFAULT_REGION:lambda:path/2015-03-31/functions/$LAMBDA_HELLOWORLD_ARN/invocations 

Allow Lambda function to be triggered by the API Gateway.

aws lambda add-permission --function-name $LAMBDA_HELLOWORLD_ARN \
        --action lambda:InvokeFunction --statement-id 'api_gateway' \
        --principal apigateway.amazonaws.com \
        --source-arn "arn:aws:execute-api:$AWS_DEFAULT_REGION:$ACCT_NO:$APIGW_ID/*/*/helloworld"

Create an authorizer in the API Gateway.

AUTH_ID=$(aws apigateway create-authorizer --rest-api-id $APIGW_ID --name 'Authorizer1' \
  --type REQUEST \
  --authorizer-uri "arn:aws:apigateway:$AWS_DEFAULT_REGION:lambda:path/2015-03-31/functions/$LAMBDA_AUTH_ARN/invocations"  \
  --authorizer-result-ttl-in-seconds 300 \
  --identity-source 'method.request.header.headerauth1' \
  --output text --query 'id')
echo $AUTH_ID

Tell the API Gateway to use the authorizer.

aws apigateway update-method --rest-api-id $APIGW_ID --resource-id $RES_ID \
    --patch-operations "op=replace,path=/authorizationType,value=CUSTOM" \
    "op=replace,path=/authorizerId,value=$AUTH_ID" \
    --http-method ANY

Create a deployment and a stage.

STAGE="test"
aws apigateway create-deployment --rest-api-id $APIGW_ID --stage-name $STAGE

Allow Lambda to be triggered by the Authorizer API Gateway.

aws lambda add-permission --function-name $LAMBDA_AUTH_ARN \
        --action lambda:InvokeFunction --statement-id 'api_gateway' \
        --principal apigateway.amazonaws.com \
        --source-arn "arn:aws:execute-api:$AWS_DEFAULT_REGION:$ACCT_NO:$APIGW_ID/authorizers/$AUTH_ID"

Build the API Gateway URL.

APIURL="https://${APIGW_ID}.execute-api.${AWS_DEFAULT_REGION}.amazonaws.com/${STAGE}"
echo $APIURL

Test without and with authorization.

curl -X POST $APIURL/helloworld?name=there
{"message":"Unauthorized"}

These are the parameters that we specified on line 27 of the authorize.js file above.

curl -X POST $APIURL/helloworld?name=there -H 'headerauth1: password'

…and this is the JSON output.

{
  "message": "Hello there, it's Sat Aug 19 2023 12:52:54 GMT+0000 (Coordinated Universal Time)",
  "input": {
    "resource": "/helloworld",
    "path": "/helloworld",
    "httpMethod": "POST",
    "headers": {
      "accept": "*/*",
      "headerauth1": "password",
      "Host": "p8d859ec50.execute-api.us-east-2.amazonaws.com",
      "User-Agent": "curl/7.76.1",
      "X-Amzn-Trace-Id": "Root=1-64e0bb26-5ea6f1225951c3d316e1de77",
      "X-Forwarded-For": "63.81.58.96",
      "X-Forwarded-Port": "443",
      "X-Forwarded-Proto": "https"
    },
    "multiValueHeaders": {
      "accept": [
        "*/*"
      ],
      "headerauth1": [
        "password"
      ],
      "Host": [
        "p8d859ec50.execute-api.us-east-2.amazonaws.com"
      ],
      "User-Agent": [
        "curl/7.76.1"
      ],
      "X-Amzn-Trace-Id": [
        "Root=1-64e0bb26-5ea6f1225951c3d316e1de77"
      ],
      "X-Forwarded-For": [
        "63.81.58.96"
      ],
      "X-Forwarded-Port": [
        "443"
      ],
      "X-Forwarded-Proto": [
        "https"
      ]
    },
    "queryStringParameters": {
      "name": "there"
    },
    "multiValueQueryStringParameters": {
      "name": [
        "there"
      ]
    },
    "pathParameters": null,
    "stageVariables": null,
    "requestContext": {
      "resourceId": "gter0y",
      "authorizer": {
        "numberKey": "123",
        "booleanKey": "true",
        "stringKey": "stringval",
        "principalId": "me",
        "integrationLatency": 0
      },
      "resourcePath": "/helloworld",
      "httpMethod": "POST",
      "extendedRequestId": "J6IuBFugiYcFq7Q=",
      "requestTime": "19/Aug/2023:12:52:54 +0000",
      "path": "/test/helloworld",
      "accountId": "261910724432",
      "protocol": "HTTP/1.1",
      "stage": "test",
      "domainPrefix": "p8d859ec50",
      "requestTimeEpoch": 1692449574298,
      "requestId": "1e8461ad-e3ba-494c-8c19-169ce7a9a174",
      "identity": {
        "cognitoIdentityPoolId": null,
        "accountId": null,
        "cognitoIdentityId": null,
        "caller": null,
        "sourceIp": "72.82.158.196",
        "principalOrgId": null,
        "accessKey": null,
        "cognitoAuthenticationType": null,
        "cognitoAuthenticationProvider": null,
        "userArn": null,
        "userAgent": "curl/7.76.1",
        "user": null
      },
      "domainName": "p8d859ec50.execute-api.us-east-2.amazonaws.com",
      "apiId": "p8d859ec50"
    },
    "body": null,
    "isBase64Encoded": false
  }
}

Create a secret in Secrets Manager

We don’t want to store the API call parameters in the code, so we’ll use Secrets Manager to keep the parameters (headerauth1, password) there.

SECRET_ARN=$(aws secretsmanager create-secret  --name "helloworld/secret" \
  --secret-string "{\"headerauth1\":\"password\"}" --output text --query 'ARN')
echo $SECRET_ARN

Create a DynamoDB table

Let’s create a single DynamoDB table and add two records.

TABLE_ARN=$(aws dynamodb create-table \
  --table-name COMPUTERS \
  --attribute-definitions \
    AttributeName=COMPUTER,AttributeType=S \
    AttributeName=MEMORY_MB,AttributeType=N \
  --key-schema \
    AttributeName=COMPUTER,KeyType=HASH \
    AttributeName=MEMORY_MB,KeyType=RANGE \
  --provisioned-throughput ReadCapacityUnits=1,WriteCapacityUnits=1 \
  --table-class STANDARD \
  --output text \
  --query 'TableDescription.TableArn')
echo $TABLE_ARN

Add some data.

aws dynamodb put-item \
 --table-name COMPUTERS \
 --item \
 '{"COMPUTER": {"S": "ZX SPECTRUM"}, "MEMORY_MB": {"N": "48"}}'

aws dynamodb put-item \
 --table-name COMPUTERS \
 --item \
 '{"COMPUTER": {"S": "COMMODORE"}, "MEMORY_MB": {"N": "64"}}'

Create a VPC and EC2 instance

We have to create a VPC where we’ll provision an Ubuntu EC2 instance and install Node.js on it. If you already have a VPC and an EC2 instance that you want to use, you can skip this step. For the sake of simplicity, the VPC will have a single public subnet. Make sure each echo command produces output; that means everything is OK.

# Create a VPC
VPCID=$(aws ec2 create-vpc --cidr-block 192.168.100.0/24 \
  --region us-east-2 \
  --tag-specifications 'ResourceType="vpc",Tags=[{Key="Name",Value="demo"}]' \
  --output text --query 'Vpc.VpcId')
echo $VPCID

# Create an Internet Gateway
IGWID=$(aws ec2 create-internet-gateway --region us-east-2 \
  --output text --query 'InternetGateway.InternetGatewayId')
echo $IGWID

# Attach the Internet Gateway to the VPC
aws ec2 attach-internet-gateway \
    --internet-gateway-id $IGWID \
    --vpc-id $VPCID

# Create a subnet
SUBID=$(aws ec2 create-subnet --vpc-id $VPCID \
  --tag-specifications 'ResourceType="subnet",Tags=[{Key="Name",Value="subPublic"}]' \
  --cidr-block 192.168.100.0/24  \
  --output text --query 'Subnet.SubnetId')
echo $SUBID

# Get the main route table ID that was created for the VPC
RTID=$(aws ec2 describe-route-tables \
  --filters Name=vpc-id,Values=$VPCID --output text \
  --query 'RouteTables[*].RouteTableId')
echo $RTID

# Use the Internet Gateway for the Internet route 0.0.0.0/0
aws ec2 create-route --route-table-id $RTID --destination-cidr-block 0.0.0.0/0 --gateway-id $IGWID

# Create a security group for the EC2 instance
SGID=$(aws ec2 create-security-group --group-name sgEC2 \
  --description "Security group for the EC2 instance" \
  --vpc-id $VPCID \
  --output text)
echo $SGID

# Get my home IP
MYIP=$(curl -s ifconfig.me)
echo $MYIP

# Allow SSH from my home IP
aws ec2 authorize-security-group-ingress \
    --group-id $SGID \
    --protocol tcp \
    --port 22 \
    --cidr $MYIP/32

# ... and port 3000 from my home IP again (used for Node.js app testing)
aws ec2 authorize-security-group-ingress \
    --group-id $SGID \
    --protocol tcp \
    --port 3000 \
    --cidr $MYIP/32

# Create a key pair
KEYNAME="keydemo"
aws ec2 create-key-pair --key-name $KEYNAME \
  --query 'KeyMaterial' \
  --output text > ~/.ssh/$KEYNAME
chmod 0600 ~/.ssh/$KEYNAME

# Create an Ubuntu instance
IMGID="ami-024e6efaf93d85776"

EC2ID=$(aws ec2 run-instances \
    --image-id $IMGID \
    --count 1 \
    --instance-type t2.small \
    --key-name $KEYNAME \
    --security-group-ids $SGID \
    --subnet-id $SUBID \
    --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=ec2demo}]' 'ResourceType=volume,Tags=[{Key=Name,Value=volec2demo}]' \
    --output text \
    --query 'Instances[*].InstanceId' \
    --associate-public-ip-address)

# Get the public IP
PUBLICIP=$(aws ec2 describe-instances --instance-ids $EC2ID \
  --output text \
  --query 'Reservations[*].Instances[*].NetworkInterfaces[*].PrivateIpAddresses[*].Association.PublicIp')
echo $PUBLICIP

Wait for a minute or two (or run aws ec2 wait instance-status-ok --instance-ids $EC2ID) and SSH to the instance with:

ssh -i ~/.ssh/$KEYNAME ubuntu@$PUBLICIP

Exit from the SSH session!!!
We’ll need to create a role for the EC2 instance.

Create an IAM role

In order for the EC2 instance to access DynamoDB and Secrets Manager, we’ll create an IAM role and attach it to the EC2 instance. We don’t want to deal with IAM users and passwords; accessing the resources with a role is the proper way. Create a trust policy first.

cat << EOF > trust-policy.json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "sts:AssumeRole"
            ],
            "Principal": {
                "Service": [
                    "ec2.amazonaws.com"
                ]
            }
        }
    ]
}
EOF

Then create two policies. One for DynamoDB access and the second one for access to Secrets Manager.

cat << EOF > polHelloWorldDynamoDB.json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ListAndDescribe",
            "Effect": "Allow",
            "Action": [
                "dynamodb:List*",
                "dynamodb:DescribeReservedCapacity*",
                "dynamodb:DescribeLimits",
                "dynamodb:DescribeTimeToLive"
            ],
            "Resource": "*"
        },
        {
            "Sid": "SpecificTable",
            "Effect": "Allow",
            "Action": [
                "dynamodb:BatchGet*",
                "dynamodb:DescribeStream",
                "dynamodb:DescribeTable",
                "dynamodb:Get*",
                "dynamodb:Query",
                "dynamodb:Scan",
                "dynamodb:BatchWrite*",
                "dynamodb:CreateTable",
                "dynamodb:Delete*",
                "dynamodb:Update*",
                "dynamodb:PutItem"
            ],
            "Resource": "$TABLE_ARN"
        }
    ]
}
EOF
POL_DYNAMODB_ARN=$(aws iam create-policy --policy-name polHelloWorldDynamoDB \
  --policy-document file://polHelloWorldDynamoDB.json \
  --output text --query 'Policy.Arn')
echo $POL_DYNAMODB_ARN
cat << EOF > polHelloWorldSecretsManager.json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "secretsmanager:GetResourcePolicy",
                "secretsmanager:GetSecretValue",
                "secretsmanager:DescribeSecret",
                "secretsmanager:ListSecretVersionIds"
            ],
            "Resource": [
                "$SECRET_ARN"
            ]
        },
        {
            "Effect": "Allow",
            "Action": "secretsmanager:ListSecrets",
            "Resource": "*"
        }
    ]
}
EOF
POL_SECRETSMGR_ARN=$(aws iam create-policy --policy-name polHelloWorldSecretsManager \
  --policy-document file://polHelloWorldSecretsManager.json \
  --output text --query 'Policy.Arn')
echo $POL_SECRETSMGR_ARN

Then create a role called rolAccessToResources, to which we’ll attach the two policies that give DynamoDB and Secrets Manager access.

ROLE_ACCESS_ARN=$(aws iam create-role --role-name rolAccessToResources \
  --assume-role-policy-document file://trust-policy.json --output text --query 'Role.Arn')
echo $ROLE_ACCESS_ARN

…attach the policies.

aws iam attach-role-policy --policy-arn $POL_DYNAMODB_ARN --role-name rolAccessToResources
aws iam attach-role-policy --policy-arn $POL_SECRETSMGR_ARN --role-name rolAccessToResources

To attach this role to the EC2 instance, we first need an instance profile.

aws iam create-instance-profile --instance-profile-name ipnAccessToResources

…then

aws iam add-role-to-instance-profile --role-name rolAccessToResources --instance-profile-name ipnAccessToResources

Finally…

aws ec2 associate-iam-instance-profile --instance-id $EC2ID --iam-instance-profile Name=ipnAccessToResources

Create JavaScript files for testing

Connect to the instance first.

ssh -i ~/.ssh/$KEYNAME ubuntu@$PUBLICIP

Install Node.js v18. Download and import the NodeSource GPG key.

sudo apt-get update
sudo apt-get install -y ca-certificates curl gnupg
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://deb.nodesource.com/gpgkey/nodesource-repo.gpg.key | sudo gpg --dearmor -o /etc/apt/keyrings/nodesource.gpg

Then create the deb repo.

NODE_MAJOR=18
echo "deb [signed-by=/etc/apt/keyrings/nodesource.gpg] https://deb.nodesource.com/node_$NODE_MAJOR.x nodistro main" | \
  sudo tee /etc/apt/sources.list.d/nodesource.list

Run update & install.

sudo apt-get update
sudo apt-get install nodejs -y

Check.

node --version

Create a working folder and install the AWS SDK (v2).

mkdir nodejs 
cd nodejs
npm install aws-sdk

This is a script that will dump the DynamoDB table.

cat << EOF > scan.js
'use strict';
// Load the AWS SDK for Node.js
var AWS = require('aws-sdk');
// Set the region 
AWS.config.update({region: 'us-east-2'});

// Create the DynamoDB service object
var ddb = new AWS.DynamoDB({apiVersion: '2012-08-10'});

var params = {
  TableName: 'COMPUTERS'
};

// Call DynamoDB to add the item to the table
ddb.scan(params, function(err, data) {
  if (err) {
    console.log("Error", err);
  } else {
    console.log("Success", data.Items);
  }
});
EOF

Check if everything is there.

node scan.js

You’ll get a response like this.

Success [
  { MEMORY_MB: { N: '48' }, COMPUTER: { S: 'ZX SPECTRUM' } },
  { MEMORY_MB: { N: '64' }, COMPUTER: { S: 'COMMODORE' } }
]
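The Items come back in DynamoDB’s attribute-value format (S/N wrappers). A small hand-rolled flattener turns them into plain objects; the v2 SDK also ships AWS.DynamoDB.Converter.unmarshall for the same job:

```javascript
// Flatten DynamoDB attribute values ({S: ...}/{N: ...}) into plain JS objects.
const items = [
  { MEMORY_MB: { N: '48' }, COMPUTER: { S: 'ZX SPECTRUM' } },
  { MEMORY_MB: { N: '64' }, COMPUTER: { S: 'COMMODORE' } }
];

const flatten = (item) =>
  Object.fromEntries(
    Object.entries(item).map(([key, value]) =>
      [key, 'N' in value ? Number(value.N) : value.S])
  );

console.log(items.map(flatten));
// [ { MEMORY_MB: 48, COMPUTER: 'ZX SPECTRUM' }, { MEMORY_MB: 64, COMPUTER: 'COMMODORE' } ]
```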

This is a script that will check the API Gateway/Lambda. Replace the URL on line 7 with the value of $APIURL from your local computer and add the header.

cat << EOF > fetch.js
'use strict';

(async () => {

        const requestOptions = { method: 'POST', headers: { 'headerauth1':'password' } };
        const url = 'https://p8d859ec50.execute-api.us-east-2.amazonaws.com/test/helloworld?name=there';

        const response = await fetch(url, requestOptions);
        //console.log(response);
        const data = await response.json();
        console.log(data.message);
})();
EOF

Test the API.

node fetch.js

The output should be something like this.

Hello there, it's Sat Aug 19 2023 22:06:22 GMT+0000 (Coordinated Universal Time)

This is a script that will retrieve the secrets from the Secrets Manager.

cat << EOF > secret.js
const AWS = require('aws-sdk');
const client = new AWS.SecretsManager({ region: "us-east-2" });

const getMySecret = async (SecretId) => {
  const s = await client.getSecretValue({ SecretId }).promise();
  return s.SecretString;
};

(async() => {
  const secret_101 = await getMySecret('helloworld/secret');
  console.log('My secret:', secret_101);
})();
EOF

And if you execute it, you’ll get the output.

node secret.js
My secret: {"headerauth1":"password"}

The final script combines everything above. Replace the URL on line 35 with the value of $APIURL.

cat << 'EOF' > index.js
'use strict';

const AWS = require('aws-sdk');
AWS.config.update({region: 'us-east-2'});
const ddb = new AWS.DynamoDB({apiVersion: '2012-08-10'});
const client = new AWS.SecretsManager({ region: "us-east-2" });
 
var params = {
  TableName: 'COMPUTERS'
};

const getMySecret = async (SecretId) => {
  const s = await client.getSecretValue({ SecretId }).promise();
  return s.SecretString;
};
 
var mysecret = '';
 
params = {
  TableName: 'COMPUTERS'
};
 
ddb.scan(params, function(err, data) {
  if (err) {
    console.log("Error", err);
  } else {
    console.log("Success", data.Items);
  }
});
 
(async () => {
        mysecret = await getMySecret('helloworld/secret');
        var requestOptions = { method: 'POST', headers: JSON.parse(mysecret) };
        const url = 'https://3utbhdiri4.execute-api.us-east-2.amazonaws.com/test/helloworld?name=there';
 
        const response = await fetch(url, requestOptions);
        const data = await response.json();
        console.log(data.message);
})();
EOF

…and the final output.

node index.js
Success [
  { MEMORY_MB: { N: '48' }, COMPUTER: { S: 'ZX SPECTRUM' } },
  { MEMORY_MB: { N: '64' }, COMPUTER: { S: 'COMMODORE' } }
]
Success [
  { MEMORY_MB: { N: '48' }, COMPUTER: { S: 'ZX SPECTRUM' } },
  { MEMORY_MB: { N: '64' }, COMPUTER: { S: 'COMMODORE' } }
]
Hello there, it's Wed Sep 20 2023 20:02:29 GMT+0000 (Coordinated Universal Time)

As you can see, I don’t have any secrets/passwords in the code and it still works. That’s because I fetch the secret on line 33 and pass it on line 34.
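The reason the header trick works: the secret string stored earlier is itself a JSON object whose keys are header names, so it parses straight into the headers option. A minimal sketch using the same sample secret value:

```javascript
// The secret string is a JSON object keyed by header name, i.e. what
// getMySecret('helloworld/secret') returns in the script above.
const mysecret = '{"headerauth1":"password"}';

// Parsing it yields an object that can be passed directly as fetch() headers.
const headers = JSON.parse(mysecret);
console.log(headers.headerauth1); // password

// Equivalent to: fetch(url, { method: 'POST', headers });
```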

Dockerizing the app

If we want to run this app as a container under EKS, we have to make sure it runs as a Node.js service and then dockerize it.
The Express version of the same app is below. I am far from a Node.js/Express expert, so I’ll give it my best shot.
While you are in the nodejs folder, install Express.

npm install express

Then create the server. Make sure you replace the URL on line 41 with the value of $APIURL from your local computer!!!

cat <<'EOF' > server.js
'use strict';
const express = require('express')
const app = express()
const port = 3000

const AWS = require('aws-sdk');
AWS.config.update({region: 'us-east-2'});
const ddb = new AWS.DynamoDB({apiVersion: '2012-08-10'});
const client = new AWS.SecretsManager({ region: "us-east-2" });

var params = {
  TableName: 'COMPUTERS'
};

const getMySecret = async (SecretId) => {
  const s = await client.getSecretValue({ SecretId }).promise();
  return s.SecretString;
};

var mysecret = '';
var dbitems = '';

params = {
  TableName: 'COMPUTERS'
};

ddb.scan(params, function(err, data) {
  if (err) {
    console.log("Error", err);
  } else {
    console.log("Success", data.Items);
    dbitems = data.Items;
  }
});

var data1 = '';
(async () => {
        mysecret = await getMySecret('helloworld/secret');
        var requestOptions = { method: 'POST', headers: JSON.parse(mysecret) };
        const url = 'https://3utbhdiri4.execute-api.us-east-2.amazonaws.com/test/helloworld?name=there';

        const response = await fetch(url, requestOptions);
        data1 = await response.json();
        console.log(data1);
})();

app.get('/', (req, res) => {
    res.send(JSON.stringify(dbitems) + " " + data1.message);
})

app.listen(port, () => {
  console.log(`Example app listening on port ${port}`)
})
EOF
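One caveat with server.js above (remember, best effort): the scan and the fetch populate dbitems and data1 asynchronously at startup, so a request that arrives before they finish sees the empty initial values. A tiny self-contained sketch of that pattern, with setTimeout standing in for ddb.scan:

```javascript
// server.js pattern: a global filled in later by an async callback.
let dbitems = '';

// Stand-in for ddb.scan(); fires ~50 ms after startup.
setTimeout(() => { dbitems = '[{"COMPUTER":"ZX SPECTRUM"}]'; }, 50);

// A "request" served immediately still sees the initial empty value.
console.log('early read:', JSON.stringify(dbitems)); // ""

// A "request" served after the callback ran sees the data.
setTimeout(() => { console.log('later read:', dbitems); }, 100);
```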

Run the application with node server.js and it will listen on port 3000. Open a new terminal session and curl the public IP of the instance on port 3000, or just open a browser and go to http://$PUBLICIP:3000 (we opened port 3000 on the EC2 instance earlier for this reason). Replace $PUBLICIP with the public IP of the Ubuntu EC2 instance.

curl $PUBLICIP:3000

CTRL-C out of the app and let’s install Docker.

sudo apt install -y docker.io

Create a Dockerfile.

cat <<EOF > Dockerfile 
FROM node:18
 
# Create app directory
WORKDIR /usr/src/app
 
# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm@5+)
COPY package*.json ./
 
RUN npm install
# If you are building your code for production
# RUN npm ci --only=production
 
# Bundle app source
COPY . .
 
EXPOSE 3000
CMD [ "node", "server.js" ]
EOF

Create a .dockerignore file.

cat <<EOF > .dockerignore
node_modules
npm-debug.log
EOF

Now we can create the image and push it to Docker Hub. Use your own repo here.

sudo docker build . -t klimenta/awsdemo
sudo docker images
sudo docker run -p 3000:3000 -d klimenta/awsdemo

Go to the same $PUBLICIP:3000 of the Ubuntu instance and you’ll see the same output, but this time from the Docker container.
Push the image to Docker Hub.

sudo docker login
sudo docker push klimenta/awsdemo

Run as a container in EKS cluster

Now that we have the container and we know it’s working fine, let’s provision an EKS cluster and deploy our Docker image.
Do this from your computer, not the Ubuntu server in AWS.

CLUSTER_NAME="eksAWSDemo"
eksctl create cluster --name $CLUSTER_NAME --region us-east-2 --instance-types t3.medium --nodes 2 --managed --version 1.27

And this is our deployment, along with a Service of type LoadBalancer (named loadbalancer, exposing port 3000) so we can access the deployment from outside the cluster.

cat <<EOF > demo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
spec:
  replicas: 1
  selector:
    matchLabels:
      run: demo
  template:
    metadata:
      labels:
        run: demo
    spec:
      containers:
      - name: demo
        image: klimenta/awsdemo
        ports:
        - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: loadbalancer
spec:
  type: LoadBalancer
  selector:
    run: demo
  ports:
  - port: 3000
    targetPort: 3000
EOF

Deploy the solution.

kubectl apply -f demo.yaml
deployment.apps/demo created
service/loadbalancer created

And you’ll get this if you want to check the container (pod).

kubectl get pods
NAME                    READY   STATUS   RESTARTS      AGE
demo-5655f58d9d-z9pnz   0/1     Error    2 (17s ago)   20s

It errored out.
…and if you look at the logs to see why the container crashed, the reason is very obvious.

kubectl logs demo-5655f58d9d-z9pnz
(Use `node --trace-warnings ...` to show where the warning was created)
Error AccessDeniedException: User: arn:aws:sts::261123456789:assumed-role/eksctl-eksDemoAWS-nodegroup-ng-0c-NodeInstanceRole-UUU04JSKQOJR/i-09df331c5f755bc7e is not authorized to perform: dynamodb:Scan on resource: arn:aws:dynamodb:us-east-2:261910724432:table/COMPUTERS because no identity-based policy allows the dynamodb:Scan action
    at Request.extractError (/usr/src/app/node_modules/aws-sdk/lib/protocol/json.js:80:27)
    at Request.callListeners (/usr/src/app/node_modules/aws-sdk/lib/sequential_executor.js:106:20)
    at Request.emit (/usr/src/app/node_modules/aws-sdk/lib/sequential_executor.js:78:10)
    at Request.emit (/usr/src/app/node_modules/aws-sdk/lib/request.js:686:14)
    at Request.transition (/usr/src/app/node_modules/aws-sdk/lib/request.js:22:10)
    at AcceptorStateMachine.runTo (/usr/src/app/node_modules/aws-sdk/lib/state_machine.js:14:12)
    at /usr/src/app/node_modules/aws-sdk/lib/state_machine.js:26:10
    at Request.<anonymous> (/usr/src/app/node_modules/aws-sdk/lib/request.js:38:9)
    at Request.<anonymous> (/usr/src/app/node_modules/aws-sdk/lib/request.js:688:12)
    at Request.callListeners (/usr/src/app/node_modules/aws-sdk/lib/sequential_executor.js:116:18) {
  code: 'AccessDeniedException',
  '[__type]': 'See error.__type for details.',
  '[Message]': 'See error.Message for details.',
  time: 2023-09-21T20:46:01.784Z,
  requestId: 'SDK8R0LSTASAIIBIO3UUU1RC3RVV4KQNSO5AEMVJF66Q9ASUAAJG',
  statusCode: 400,
  retryable: false,
  retryDelay: 32.90504606038562
}
/usr/src/app/node_modules/aws-sdk/lib/protocol/json.js:80
  resp.error = util.error(new Error(), error);
                          ^

AccessDeniedException: User: arn:aws:sts::261987654321:assumed-role/eksctl-eksDemoAWS-nodegroup-ng-0c-NodeInstanceRole-UUU04JSKQOJR/i-09df331c5f755bc7e is not authorized to perform: secretsmanager:GetSecretValue on resource: helloworld/secret because no identity-based policy allows the secretsmanager:GetSecretValue action
    at Request.extractError (/usr/src/app/node_modules/aws-sdk/lib/protocol/json.js:80:27)
    at Request.callListeners (/usr/src/app/node_modules/aws-sdk/lib/sequential_executor.js:106:20)
    at Request.emit (/usr/src/app/node_modules/aws-sdk/lib/sequential_executor.js:78:10)
    at Request.emit (/usr/src/app/node_modules/aws-sdk/lib/request.js:686:14)
    at Request.transition (/usr/src/app/node_modules/aws-sdk/lib/request.js:22:10)
    at AcceptorStateMachine.runTo (/usr/src/app/node_modules/aws-sdk/lib/state_machine.js:14:12)
    at /usr/src/app/node_modules/aws-sdk/lib/state_machine.js:26:10
    at Request.<anonymous> (/usr/src/app/node_modules/aws-sdk/lib/request.js:38:9)
    at Request.<anonymous> (/usr/src/app/node_modules/aws-sdk/lib/request.js:688:12)
    at Request.callListeners (/usr/src/app/node_modules/aws-sdk/lib/sequential_executor.js:116:18) {
  code: 'AccessDeniedException',
  '[__type]': 'See error.__type for details.',
  '[Message]': 'See error.Message for details.',
  time: 2023-09-21T20:46:01.800Z,
  requestId: '035cbe99-6826-4f46-b4f8-115cfa90d88b',
  statusCode: 400,
  retryable: false,
  retryDelay: 58.98522014520893
}

We created a role (rolAccessToResources) and two policies (polHelloWorldDynamoDB and polHelloWorldSecretsManager) to allow access to the AWS resources, but we assigned them to the Ubuntu instance. Now the EKS nodes are trying to do the same with their default node role (eksctl-eksDemoAWS-nodegroup-ng-0c-NodeInstanceRole-UUU04JSKQOJR), but that role has no access to the DynamoDB table and Secrets Manager, hence the error.
Granting access to this EKS node role is not recommended, because all the nodes (and every pod on them) would get access, and we might have other apps running in this cluster.
So we need to grant access to the pod itself, its replicas, and any new pods created in case the original pod crashes or is removed by the autoscaler.

Grant access to AWS resources for a pod

This is a three-step process. We'll have to:

  • Create an IAM OIDC provider
  • Configure the role and a service account
  • Configure the pods

Let's go through them one by one.

Create an IAM OIDC provider

Let’s determine whether we have an existing IAM OIDC provider for our cluster.

aws eks describe-cluster --name $CLUSTER_NAME --query "cluster.identity.oidc.issuer" --output text
https://oidc.eks.us-east-2.amazonaws.com/id/D7FAD6F2BEF430FC1CB673777A9E4FED

When we provisioned the cluster with eksctl, the OIDC issuer URL was created automatically. We need the OIDC ID, which is the hex value at the end of the URL.

OIDCID=$(aws eks describe-cluster --name $CLUSTER_NAME --query "cluster.identity.oidc.issuer" --output text | cut -d '/' -f 5)
echo $OIDCID
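To see why field 5 of the issuer URL is the ID, here is the same cut applied to a sample issuer URL (the sample value below is copied from the earlier output, not fetched from AWS):

```shell
# The issuer URL has the form https://oidc.eks.<region>.amazonaws.com/id/<hex-id>;
# splitting on '/' gives: field 1 = "https:", field 2 = "" (empty, between the
# two slashes), field 3 = the host, field 4 = "id", field 5 = the hex ID.
ISSUER="https://oidc.eks.us-east-2.amazonaws.com/id/D7FAD6F2BEF430FC1CB673777A9E4FED"
OIDCID=$(echo "$ISSUER" | cut -d '/' -f 5)
echo "$OIDCID"   # D7FAD6F2BEF430FC1CB673777A9E4FED
```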

Check if the IAM OIDC provider is already configured. It shouldn't be if you provisioned a brand-new cluster.

aws iam list-open-id-connect-providers | grep $OIDCID 

If, for whatever reason, you got output from the command above, skip this associate-iam-oidc-provider step.

eksctl utils associate-iam-oidc-provider --cluster $CLUSTER_NAME --approve
2023-09-22 12:27:47 [ℹ]  will create IAM Open ID Connect provider for cluster "eksAWSDemo" in "us-east-2"
2023-09-22 12:27:47 [✔]  created IAM Open ID Connect provider for cluster "eksAWSDemo" in "us-east-2"

Configure the role and a service account

We already created the role (rolAccessToResources) at the beginning. It has the two policies that grant access to DynamoDB and Secrets Manager. All we have to do now is create the service account. Note that service account names can't contain uppercase letters.

SA_NAME="sapodshelloworld"
eksctl create iamserviceaccount --name $SA_NAME --namespace default \
  --cluster $CLUSTER_NAME  --attach-role-arn $ROLE_ACCESS_ARN --approve
2023-09-22 12:44:50 [ℹ]  1 iamserviceaccount (default/sapodshelloworld) was included (based on the include/exclude rules)
2023-09-22 12:44:50 [!]  serviceaccounts that exist in Kubernetes will be excluded, use --override-existing-serviceaccounts to override
2023-09-22 12:44:50 [ℹ]  1 task: { create serviceaccount "default/sapodshelloworld" }
2023-09-22 12:44:50 [ℹ]  created serviceaccount "default/sapodshelloworld"

Check the service account.

kubectl get sa
NAME               SECRETS   AGE
default            0         93m
sapodshelloworld   0         27m

Check if the role has the proper policies.

aws iam list-attached-role-policies --role-name rolAccessToResources --query AttachedPolicies[].PolicyArn
[
    "arn:aws:iam::261987654321:policy/polHelloWorldSecretsManager",
    "arn:aws:iam::261987654321:policy/polHelloWorldDynamoDB"
]

We also have to change the trust policy for our role rolAccessToResources. Currently, it looks like this.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "ec2.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}

This is the proper trust policy.

cat<<EOF > trust-policy.json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Federated": "arn:aws:iam::$ACCT_NO:oidc-provider/oidc.eks.$AWS_DEFAULT_REGION.amazonaws.com/id/$OIDCID"
            },
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
                "StringEquals": {
                    "oidc.eks.$AWS_DEFAULT_REGION.amazonaws.com/id/$OIDCID:aud": "sts.amazonaws.com",
                    "oidc.eks.$AWS_DEFAULT_REGION.amazonaws.com/id/$OIDCID:sub": "system:serviceaccount:default:$SA_NAME"
                }
            }
        }
    ]
}
EOF
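The `:sub` condition key is what scopes the role to our pods: it must exactly match the subject of the service account token that Kubernetes projects into the pod. As a quick sketch, the subject string is assembled like this:

```shell
# Kubernetes formats the token subject as system:serviceaccount:<namespace>:<name>.
# If this string doesn't match the :sub condition exactly,
# sts:AssumeRoleWithWebIdentity is denied.
NAMESPACE="default"
SA_NAME="sapodshelloworld"
SUB="system:serviceaccount:${NAMESPACE}:${SA_NAME}"
echo "$SUB"   # system:serviceaccount:default:sapodshelloworld
```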

Modify the role’s trust policy.

aws iam update-assume-role-policy --role-name rolAccessToResources --policy-document file://trust-policy.json

Check that it's correct.

aws iam get-role --role-name rolAccessToResources --query Role.AssumeRolePolicyDocument

Check the service account and the role.

kubectl describe serviceaccount $SA_NAME -n default | grep Anno
Annotations:         eks.amazonaws.com/role-arn: arn:aws:iam::261987654321:role/rolAccessToResources

Configure the pods

Actually, we don't need to configure the pods; we need to configure the deployment that creates them.
The only change you have to make is to specify the service account in the YAML file.
Delete the pods if you have them running.

kubectl delete -f demo.yaml

Create a new deployment file, demo-new.yaml. Compared to the original, you only have to add one line (the serviceAccountName line, to specify the service account) plus a Service section for the load balancer.

cat<<EOF > demo-new.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
spec:
  replicas: 1
  selector:
    matchLabels:
      run: demo
  template:
    metadata:
      labels:
        run: demo
    spec:
      serviceAccountName: sapodshelloworld 
      containers:
      - name: demo
        image: klimenta/awsdemo
        ports:
        - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: loadbalancer
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
spec:
  ports:
    - port: 80
      targetPort: 3000
      protocol: TCP
  type: LoadBalancer
  selector:
    run: demo
EOF

Deploy the solution again.

kubectl apply -f demo-new.yaml
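Behind the scenes, the EKS pod identity webhook mutates pods running under the annotated service account: it injects two environment variables and mounts a projected token, and the AWS SDK's default credential chain uses them to call sts:AssumeRoleWithWebIdentity. You can inspect them in a running pod with `kubectl exec deploy/demo -- env | grep AWS_`; the sketch below just shows the values the webhook sets (the role ARN uses the account number from this walkthrough), rather than reading them from a live pod:

```shell
# Values injected by the EKS pod identity webhook (local sketch, not read
# from a running pod):
AWS_ROLE_ARN="arn:aws:iam::261987654321:role/rolAccessToResources"
AWS_WEB_IDENTITY_TOKEN_FILE="/var/run/secrets/eks.amazonaws.com/serviceaccount/token"
echo "$AWS_ROLE_ARN"
echo "$AWS_WEB_IDENTITY_TOKEN_FILE"
```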

Get the URL of the load balancer.

kubectl get svc

…and if you go to that URL after 2-3 minutes, you'll see that everything works as expected.

curl ab4fdd725967a438ea5a10b4866b720b-b76f9507fafb4eed.elb.us-east-2.amazonaws.com
[{"MEMORY_MB":{"N":"48"},"COMPUTER":{"S":"ZX SPECTRUM"}},{"MEMORY_MB":{"N":"64"},"COMPUTER":{"S":"COMMODORE"}}] Hello there, it's Fri Sep 22 2023 18:49:49 GMT+0000 (Coordinated Universal Time)
