Extend your service mesh to support AWS Lambda
While managing a service mesh with Consul, you may need to swap the workload that backs a service for a different one. For example, a service mesh hosted on a Kubernetes cluster could include a service whose traffic exceeds the provisioned capacity of the cluster. Autoscaling the cluster is a common way to handle peak capacity, but autoscaling takes time to provision new instances to meet the service's needs. Serverless computing solutions like AWS Lambda can scale to the capacity a service requires quickly, without waiting for new instances to provision. During periods of peak demand, this helps you or your organization manage workloads in a sustainable, predictable manner.
With Consul on AWS Lambda, you can take advantage of the benefits of serverless workloads. These benefits include reducing cost, decreasing administrative overhead, and scaling services inside a Consul service mesh with minimal context switching.
In this tutorial, you will deploy HashiCups, a demo application, onto an Amazon Elastic Kubernetes Service (EKS) cluster. Then, you will learn how to route traffic away from a Kubernetes service and towards a Lambda function using Consul's service splitter and terminating gateway.
Prerequisites
The tutorial assumes an intermediate understanding of Consul, AWS, and Kubernetes. If you're new to Consul, refer to the Getting Started tutorials collection.
For this tutorial, you will need:
An HCP account configured for use with Terraform
An AWS account configured for use with Terraform
A cleanup script is provided to help minimize the time resources are actively accruing charges to your AWS account.
Note
Some infrastructure in this tutorial does not qualify for AWS free tier.
Clone example repository
Clone the GitHub repository containing the configuration files and resources.
$ git clone https://github.com/hashicorp-education/learn-consul-terraform
Navigate into the repository folder.
$ cd learn-consul-terraform
Fetch the latest tags and check out the v0.10 tag of the repository.
$ git fetch --all --tags && git checkout tags/v0.10
Navigate into the project's terraform folder for this tutorial.
$ cd datacenter-deploy-hcp-eks-lambda
Deploy tutorial infrastructure
This tutorial deploys an HCP Consul Dedicated cluster, an Amazon EKS cluster with Consul installed via Helm, and supporting infrastructure. The Consul installation on EKS is pre-configured with support for AWS Lambda, and includes a terminating gateway that you will configure later in this tutorial.
Initialize the Terraform project.
$ terraform init
Initializing the backend...
Initializing provider plugins...
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
Then, deploy the resources. Confirm by entering yes.
$ terraform apply
## . . .
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
## . . .
Apply complete! Resources: 122 added, 0 changed, 0 destroyed.
Once complete, Terraform will display a list of outputs that you will use to connect to your HCP Consul Dedicated and Kubernetes clusters.
## . . .
Apply complete! Resources: 122 added, 0 changed, 0 destroyed.
Outputs:
cloudwatch_logs_path = {
"eks" = "/aws/eks/consullambda-w19vbd/cluster"
"payments" = "/aws/lambda/payments-lambda-w19vbd"
"registrator" = "/aws/lambda/lambda_registrator-consullambda-w19vbd"
}
consul_addr = "https://consullambda-w19vbd.consul.98a0dcc3-5473-4e4d-a28e-6c343c498530.aws.hashicorp.cloud"
eks_update_kubeconfig_command = "aws eks --region us-west-2 update-kubeconfig --name consullambda-w19vbd"
hcp_login_token = <sensitive>
kubernetes_cluster_endpoint = "https://7CE233483FD372627372941C9C68D0F8.gr7.us-west-2.eks.amazonaws.com"
region = "us-west-2"
Configure your terminal to connect to HCP Consul Dedicated and Kubernetes
Update your local kubeconfig file by using the Terraform output eks_update_kubeconfig_command. Then, verify that you can connect to your EKS cluster with kubectl cluster-info.
$ terraform output -json | jq -r '.eks_update_kubeconfig_command.value' | $SHELL && kubectl cluster-info
Updated context arn:aws:eks:us-west-2:REDACTED:cluster/consullambda-dl0r7o in /Users/user/.kube/config
Kubernetes control plane is running at https://FD534A8E1D33199C6A1394F490B63394.gr7.us-west-2.eks.amazonaws.com
CoreDNS is running at https://FD534A8E1D33199C6A1394F490B63394.gr7.us-west-2.eks.amazonaws.com/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Set the environment variables required by kubectl and the AWS CLI.
$ export AWS_REGION=$(terraform output -raw region) && \
export LOG_GROUP="$(terraform output -json cloudwatch_logs_path | jq -r '.registrator')" && \
export LAMBDA_FUNC_LOG=$(terraform output -json | jq -r '.cloudwatch_logs_path.value.payments')
Set your HCP Consul Dedicated environment variables. You will use these to configure your Consul CLI to interact with your HCP Consul cluster.
$ export CONSUL_HTTP_TOKEN=$(terraform output -raw hcp_login_token) && \
export CONSUL_HTTP_ADDR=$(terraform output -raw consul_addr) && \
export POLICY_NAME="payments-lambda-tgw"
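With these variables exported, you can optionally verify that the Consul CLI can reach your HCP Consul Dedicated cluster before continuing. A quick check is to list the services in the catalog; the exact names in your output depend on the HashiCups deployment.
$ consul catalog services
The command should return the services currently registered in the mesh.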
Verify tutorial infrastructure
Copy and paste the Consul public URL (consul_addr) into your browser to visit the Consul UI. Since HCP Consul is secure by default, copy and paste the ACL token (hcp_login_token, which is marked sensitive, so retrieve it with terraform output -raw hcp_login_token) into the Consul authentication prompt to use the Consul UI.
Once you have authenticated, click the Services tab on the left navigation pane to review your deployed services.
Verify HashiCups deployment
Retrieve the URL of the Consul API Gateway. Input this URL into your browser to confirm HashiCups is working.
$ kubectl get services api-gateway
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
api-gateway LoadBalancer 10.100.171.204 a315d8970349c44928b9c12b07d7a118-1429028736.us-west-2.elb.amazonaws.com 80:32601/TCP 30m
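If you prefer to stay in the terminal, you can extract just the load balancer hostname with a jsonpath query. This is an optional convenience; the field path below assumes the LoadBalancer output shown above.
$ kubectl get service api-gateway -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'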
Verify payment service routing to Kubernetes
Verify HashiCups is routing payments traffic to the Kubernetes pod. Later, you will compare this output to the output you receive when the payments service routes to the Lambda function.
Create a port forward that sends local requests on port 8080 to the public-api service.
$ kubectl port-forward deploy/public-api 8080
Forwarding from 127.0.0.1:8080 -> 8080
Forwarding from [::1]:8080 -> 8080
In another terminal, simulate a payment by sending the following request with curl to the HashiCups public-api endpoint.
$ curl -v 'http://localhost:8080/api' \
-H 'Accept-Encoding: gzip, deflate, br' \
-H 'Content-Type: application/json' \
-H 'Accept: application/json' \
-H 'Connection: keep-alive' \
-H 'DNT: 1' \
-H 'Origin: http://localhost:8080' \
--data-binary '{"query":"mutation{ pay(details:{ name: \"HashiCups_User!\", type: \"mastercard\", number: \"1234123-0123123\", expiry:\"10/02\", cv2: 1231, amount: 12.23 }){id, card_plaintext, card_ciphertext, message } }"}' --compressed | jq
The following message indicates a successful response to your request. The Kubernetes-based payments service is returning unencrypted data, as noted by the card_ciphertext return value.
{
  "data": {
    "pay": {
      "id": "edb7bc10-164c-4d0b-a285-d6797bcfd953",
      "card_plaintext": "1234123-0123123",
      "card_ciphertext": "Encryption Disabled",
      "message": "Payment processed successfully, card details returned for demo purposes, not for production"
    }
  }
}
Create AWS Lambda function registrator
Since Lambda functions do not include a Consul agent, you must register Lambda functions with the Consul service mesh using the Consul API. You can do this with the consul-lambda-registrator Terraform module, or manually through the Consul API. In this tutorial, you will use consul-lambda-registrator to automatically register Lambda functions in the Consul service mesh.
The Terraform module deploys a registrator Lambda function. On a configurable interval, the registrator checks your account, in the deployed region, for Lambda functions that carry specific Consul tags in the aws_lambda_function resource's tags block. When it discovers one, the registrator registers the Lambda function as a service in the Consul service mesh.
The registrator module also deploys a private Elastic Container Registry repository to store the registrator's container image in your AWS account.
Begin by adding the Lambda function registrator code to the lambda-tutorial.tf Terraform file. Note the sync frequency argument and the ECR repository resources in the configuration below.
lambda-tutorial.tf
module "lambda-registration" {
source = "hashicorp/consul-lambda-registrator/aws//modules/lambda-registrator"
version = "0.1.0-beta1"
name = aws_ecr_repository.lambda-registrator.name
ecr_image_uri = "${aws_ecr_repository.lambda-registrator.repository_url}:${local.ecr_image_tag}"
subnet_ids = module.infrastructure.vpc_subnets_lambda_registrator
security_group_ids = [module.infrastructure.vpc_default_security_group]
sync_frequency_in_minutes = 1
consul_http_addr = module.infrastructure.consul_addr
consul_http_token_path = aws_ssm_parameter.token.name
depends_on = [
null_resource.push-lambda-registrator-to-ecr
]
}
resource "aws_ecr_repository" "lambda-registrator" {
name = local.ecr_repository_name
}
resource "null_resource" "push-lambda-registrator-to-ecr" {
triggers = {
ecr_base_image = local.ecr_base_image
}
provisioner "local-exec" {
command = <<EOF
aws ecr get-login-password --region ${local.public_ecr_region} | docker login --username AWS --password-stdin ${aws_ecr_repository.lambda-registrator.repository_url}
docker pull ${local.ecr_base_image}
docker tag ${local.ecr_base_image} ${aws_ecr_repository.lambda-registrator.repository_url}:${local.ecr_image_tag}
docker push ${aws_ecr_repository.lambda-registrator.repository_url}:${local.ecr_image_tag}
EOF
}
depends_on = [
aws_ecr_repository.lambda-registrator
]
}
resource "aws_ssm_parameter" "token" {
name = "/${local.ecr_repository_name}/token"
type = "SecureString"
value = module.infrastructure.consul_token
tier = "Advanced"
}
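The configuration above references several local values (for example local.ecr_repository_name, local.ecr_base_image, local.ecr_image_tag, and local.public_ecr_region) that are defined elsewhere in the project's Terraform code. The following locals block is a hypothetical sketch of what those definitions might look like; the real names, image reference, and tag in your checkout may differ.
# Hypothetical example only -- check the project's Terraform code for the real definitions.
locals {
  ecr_repository_name = "lambda-registrator-demo"
  ecr_base_image      = "public.ecr.aws/hashicorp/consul-lambda-registrator:0.1.0-beta1" # assumed image reference
  ecr_image_tag       = "0.1.0-beta1"
  public_ecr_region   = "us-east-1" # public ECR authentication uses us-east-1
}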
Use terraform get in the project folder to download the consul-lambda-registrator module.
$ terraform get
Downloading hashicorp/consul-lambda-registrator/aws 0.1.0-beta1 for lambda-registration...
- lambda-registration in .terraform/modules/lambda-registration/modules/lambda-registrator
Downloading terraform-aws-modules/eventbridge/aws 1.14.1 for lambda-registration.eventbridge...
- lambda-registration.eventbridge in .terraform/modules/lambda-registration.eventbridge
Create the registrator. Confirm by entering yes.
$ terraform apply
## . . .
Plan: 14 to add, 0 to change, 0 to destroy.
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value:
Apply complete! Resources: 14 added, 0 changed, 0 destroyed.
## . . .
Deploy AWS Lambda function for HashiCups payments
Next, you will deploy the Lambda function payments workload and supporting infrastructure. This includes an IAM role and policy that allow the function to write logs to CloudWatch, which you will use later in this tutorial to confirm the function ran successfully. The terminating gateway routes requests for the payments service to the AWS Lambda function.
The Lambda function resource includes a block of tags. You must include these tags for the registrator to register your Lambda function in the Consul service mesh.
Add the following block of Terraform code to lambda-tutorial.tf to deploy the Lambda function. The Lambda function will replace the payments service pods running in Kubernetes.
lambda-tutorial.tf
resource "aws_lambda_function" "lambda-payments" {
filename = local.lambda_payments_path
source_code_hash = filebase64sha256(local.lambda_payments_path)
function_name = local.lambda_payments_name
role = aws_iam_role.lambda_payments.arn
handler = "lambda-payments"
runtime = "go1.x"
tags = {
"serverless.consul.hashicorp.com/v1alpha1/lambda/enabled" = "true"
"serverless.consul.hashicorp.com/v1alpha1/lambda/payload-passthrough" = "false"
"serverless.consul.hashicorp.com/v1alpha1/lambda/invocation-mode" = "SYNCHRONOUS"
}
}
resource "aws_iam_policy" "lambda_payments" {
name = "${local.lambda_payments_name}-policy"
path = "/"
description = "IAM policy lambda payments"
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"logs:CreateLogGroup",
"logs:CreateLogStream",
"logs:PutLogEvents"
],
"Resource": "arn:aws:logs:*:*:*",
"Effect": "Allow"
}
]
}
EOF
}
resource "aws_iam_role" "lambda_payments" {
name = "${local.lambda_payments_name}-role"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "lambda.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
EOF
}
resource "aws_iam_role_policy_attachment" "lambda_payments" {
role = aws_iam_role.lambda_payments.name
policy_arn = aws_iam_policy.lambda_payments.arn
}
Create the Lambda function. Confirm by entering yes.
$ terraform apply
## . . .
Plan: 4 to add, 0 to change, 0 to destroy.
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value:
Apply complete! Resources: 4 added, 0 changed, 0 destroyed.
## . . .
When Terraform finishes creating the function, wait one minute for the registrator to sync and register the new Lambda function. Use the following aws logs command to verify the registrator found and registered the payments Lambda function.
Tip
Customize the sync period by configuring the sync_frequency_in_minutes value of the registrator module.
$ aws logs filter-log-events --region $AWS_REGION --log-group-name $LOG_GROUP --filter-pattern "Upserting" | jq '.events[].message'
"2022-07-08T13:50:15.979Z [INFO] Upserting Lambda: arn=arn:aws:lambda:us-west-2:REDACTED:function:lambda-payments-klgvh\n"
Next, navigate to the Services section of the Consul UI to verify the Lambda function is registered in the Consul service mesh. You should find a service starting with payments-lambda- in the dashboard.
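You can also confirm the registration from your terminal. Because CONSUL_HTTP_ADDR and CONSUL_HTTP_TOKEN are already exported, the Consul CLI queries the same catalog the UI displays; the grep pattern below assumes the payments-lambda- name prefix used in this tutorial.
$ consul catalog services | grep payments-lambda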
Migrate Consul payments service to Lambda function
The Lambda function is present in the service mesh, but it is not yet receiving traffic. Because Consul considers the Lambda function an external workload, the Consul terminating gateway will serve as the proxy for it.
Configure ACL for terminating gateway
The terminating gateway needs permission, granted through an ACL policy, to interact with the Lambda function in Consul.
Retrieve the terminating gateway's ACL token and save its AccessorID as an environment variable named TGW_TOKEN.
$ TGW_TOKEN=$(consul acl token list -format=json | jq '.[] | select(.Roles[]?.Name | contains("terminating-gateway"))' | jq -r '.AccessorID') && echo $TGW_TOKEN
00000000-0000-0000-0000-000000000000
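Optionally, inspect the token before modifying it to review the roles and policies it currently carries. This uses the standard consul acl token read command with the AccessorID you just saved.
$ consul acl token read -id $TGW_TOKEN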
Next, open practitioner/terminating-gateway-policy.hcl. This pre-rendered policy file grants the ACL token write access (policy) and read access (intentions) to the payments service and the payments Lambda service. Your policy file should look similar to the following code block.
practitioner/terminating-gateway-policy.hcl
service "payments-lambda-00000" {
policy = "write"
intentions = "read"
}
service "payments" {
policy = "write"
intentions = "read"
}
Create an ACL policy with the pre-rendered policy file.
$ consul acl policy create -name "${POLICY_NAME}" -description "Allows Terminating Gateway to pass traffic from the payments Lambda function" -rules @./practitioner/terminating-gateway-policy.hcl
ID: 4b4cd926-b6f7-1e17-236d-7cee05855f69
Name: payments-lambda-tgw
Partition: default
Namespace: default
Description: Allows Terminating Gateway to pass traffic from the payments Lambda function
Datacenters:
Rules:
service "payments" {
policy = "write"
intentions = "read"
}
service "payments-lambda-00000" {
policy = "write"
intentions = "read"
}
Associate this policy with the saved ACL token for the terminating gateway, merging it with the existing roles and policies already associated with the token.
$ consul acl token update -id $TGW_TOKEN -policy-name $POLICY_NAME -merge-policies -merge-roles
AccessorID: a91736e9-cbd0-d729-1479-20529fd23155
SecretID: f221d8ef-07de-552e-a712-fe8282bd98d5
Partition: default
Namespace: default
Description: token created via login: {"component":"terminating-gateway/consul-terminating-gateway"}
Local: true
Auth Method: consul-k8s-component-auth-method (Namespace: default)
Create Time: 2022-07-08 13:47:05.098979546 +0000 UTC
Policies:
e71932a0-c517-bb3c-3f01-e25e49546a82 - payments-lambda-tgw
Link Lambda payments service to terminating gateway
Associate the Lambda function service with the Consul terminating gateway. The terminating gateway routes requests for the payments service to the AWS Lambda function. Open ./practitioner/terminating-gateway.yaml to find the pre-rendered terminating gateway definition.
Review the following YAML example to observe the association of the payments Lambda function to the terminating gateway.
./practitioner/terminating-gateway.yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: TerminatingGateway
metadata:
  name: terminating-gateway
spec:
  services:
    - name: payments-lambda-000000
Apply the pre-rendered terminating gateway configuration file to the Kubernetes cluster.
$ kubectl apply --filename ./practitioner/terminating-gateway.yaml
terminatinggateway.consul.hashicorp.com/terminating-gateway created
In the Consul UI, the terminating gateway now includes the payments-lambda service as a linked service.
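You can also query the gateway's linked services with Consul's catalog API. The gateway-services endpoint below is part of the standard HTTP API; the gateway name matches the TerminatingGateway resource you applied.
$ curl -s -H "X-Consul-Token: $CONSUL_HTTP_TOKEN" "$CONSUL_HTTP_ADDR/v1/catalog/gateway-services/terminating-gateway" | jq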
Route traffic to payments Lambda function with service splitter
Since this tutorial uses Consul with ACLs enabled, the public-api service requires a service intention to route requests to the underlying Lambda function. Open ./practitioner/service-intentions.yaml to find the pre-rendered service intentions definition.
./practitioner/service-intentions.yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceIntentions
metadata:
  name: payments-lambda-7u2jmc
spec:
  sources:
    - name: public-api
      action: allow
  destination:
    name: payments-lambda-000000
Apply the pre-rendered service intention definition file.
$ kubectl apply --filename ./practitioner/service-intentions.yaml
serviceintentions.consul.hashicorp.com/payments-lambda-w19vbd created
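To double-check that the intention allows traffic, you can evaluate it with the Consul CLI. Replace the placeholder destination below with your actual payments-lambda service name, which includes a unique suffix. The command reports whether traffic from the source to the destination is allowed.
$ consul intention check public-api payments-lambda-000000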
Route traffic to the payments-lambda function with the ServiceSplitter resource. In this tutorial, you will route 100% of the traffic for the payments service to the payments-lambda function. Open ./practitioner/service-splitter.yaml to find the pre-rendered ServiceSplitter definition.
./practitioner/service-splitter.yaml
# Example only
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceSplitter
metadata:
  name: payments
spec:
  splits:
    - weight: 100
      service: payments-lambda-000000
    - weight: 0
      service: payments
Apply the pre-rendered ServiceSplitter policy.
$ kubectl apply --filename ./practitioner/service-splitter.yaml
servicesplitter.consul.hashicorp.com/payments-lambda created
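This tutorial shifts all payments traffic at once, but the same resource supports a gradual migration. The following is an illustrative sketch of an even split you could apply instead, assuming the same service names:
# Example only: send half of the payments traffic to the Lambda function
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceSplitter
metadata:
  name: payments
spec:
  splits:
    - weight: 50
      service: payments-lambda-000000
    - weight: 50
      service: payments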
Verify payment service routing to Lambda
Verify HashiCups is routing payments traffic to the Lambda function by simulating a payment.
Create a port forward that sends local requests on port 8080 to the public-api service.
$ kubectl port-forward deploy/public-api 8080
Forwarding from 127.0.0.1:8080 -> 8080
Forwarding from [::1]:8080 -> 8080
In a different terminal, simulate a payment by sending the following request with curl to the HashiCups public-api endpoint.
$ curl -v 'http://localhost:8080/api' \
-H 'Accept-Encoding: gzip, deflate, br' \
-H 'Content-Type: application/json' \
-H 'Accept: application/json' \
-H 'Connection: keep-alive' \
-H 'DNT: 1' \
-H 'Origin: http://localhost:8080' \
--data-binary '{"query":"mutation{ pay(details:{ name: \"HELLO_LAMBDA_FUNCTION!\", type: \"mastercard\", number: \"1234123-0123123\", expiry:\"10/02\", cv2: 1231, amount: 12.23 }){id, card_plaintext, card_ciphertext, message } }"}' --compressed | jq
The following payload indicates a successful response to your request. Note that the value of card_ciphertext is now Encryption Enabled (Lambda). In the context of this tutorial, this value confirms the payments service routed to AWS Lambda, since the terminating gateway sends and returns encrypted traffic.
{
  "data": {
    "pay": {
      "id": "5a73bda6-a9e1-4c60-9163-c467211f8e0f",
      "card_plaintext": "1234123-0123123",
      "card_ciphertext": "Encryption Enabled (Lambda)",
      "message": "Payment processed successfully, card details returned for demo purposes, not for production"
    }
  }
}
Note
If you receive a No Such Host error in a new terminal window, reload your kubeconfig file in that terminal with the AWS CLI. You can retrieve this command for your specific cluster by using terraform output.
$ terraform output -raw eks_update_kubeconfig_command
# Output string of `eks_update_kubeconfig_command`
aws eks --region us-west-2 update-kubeconfig --name consullambda-22qxot
To verify the AWS Lambda function responded to this request, search for the value of the name key from the request payload in the CloudWatch logs. You should find HELLO_LAMBDA_FUNCTION in the returned data.
$ aws logs filter-log-events --region $AWS_REGION --log-group-name $LAMBDA_FUNC_LOG | jq '.events[].message'
"START RequestId: ea34f875-ecf4-4e44-83cd-080a8eacb0c0 Versio: $LATEST"
"BODY: %+v"
"{HELLO_LAMBDA_FUNCTION! mastercard 1234123-0123123 10/02 1231}"
"END RequestId: ea34f875-ecf4-4e44-83cd-080a8eacb0c0"
"REPORT RequestId: ea34f875-ecf4-4e44-83cd-080a8eacb0c0tDuratio: 16.24 mstBilled Duratio: 17 mstMemory Size: 128 MBtMax Memory Used: 28 MBtIit Duratio: 73.97 mst"
Clean up
To remove all resources, run terraform destroy twice. Use the following command to have the second terraform destroy begin immediately after the first one finishes running.
$ terraform destroy -auto-approve && terraform destroy -auto-approve
Note
Due to race conditions with the various cloud resources created in this tutorial, it is necessary to use the destroy command twice to ensure all resources have been properly removed.
Next Steps
In this tutorial, you migrated a Consul service from Kubernetes to an AWS Lambda function. First, you deployed the Lambda function registrator for Consul with Terraform, which watches for Lambda functions created in your AWS account. Then, you used a terminating gateway, service splitter, and service intention to shift traffic in the service mesh with no downtime. Refer to the Consul AWS Lambda documentation for further details about Lambda function support in Consul.
To register a Lambda function manually instead, the Lambda registration documentation provides the necessary instructions and API calls.
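As a rough illustration of the manual approach, you can register an external service through Consul's catalog API and attach Lambda-related metadata similar to the tags the registrator reads. The endpoint below is Consul's standard /v1/catalog/register API; the node name, metadata keys, and ARN are illustrative assumptions, so follow the Lambda registration documentation for the exact fields.
# Illustrative sketch only -- metadata keys and ARN are placeholders
$ curl -s -X PUT -H "X-Consul-Token: $CONSUL_HTTP_TOKEN" \
  --data '{
    "Node": "lambdas",
    "Address": "127.0.0.1",
    "NodeMeta": { "external-node": "true", "external-probe": "true" },
    "Service": {
      "Service": "payments-lambda",
      "Meta": {
        "serverless.consul.hashicorp.com/v1alpha1/lambda/enabled": "true",
        "serverless.consul.hashicorp.com/v1alpha1/lambda/arn": "arn:aws:lambda:us-west-2:123456789012:function:payments-lambda"
      }
    }
  }' \
  "$CONSUL_HTTP_ADDR/v1/catalog/register"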