
Workaround for pods not being able to access EC2 IMDS #56

Open
jimmyraywv opened this issue Nov 12, 2020 · 3 comments

@jimmyraywv
In my EKS cluster I have disabled Instance Metadata Service (IMDS) v1 and set the IMDS hop limit to 1 to prevent pods from accessing the IMDS. This prevents the cloudwatch-agent DaemonSet from starting, since the agent queries the EC2 IMDS on startup:

2020/11/12 15:56:28 I! 2020/11/12 15:56:25 E! ec2metadata is not available
2020/11/12 15:56:25 I! attempt to access ECS task metadata to determine whether I'm running in ECS.
2020/11/12 15:56:26 W! retry [0/3], unable to get http response from http://169.254.170.2/v2/metadata, error: unable to get response from http://169.254.170.2/v2/metadata, error: Get "http://169.254.170.2/v2/metadata": context deadline exceeded (Client.Timeout exceeded while awaiting headers)

Is there a config item or workaround to use this agent without allowing pods to access the EC2 IMDS?
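
For context, the restriction described above corresponds to EC2 instance metadata options along these lines (a sketch using the AWS CLI; the instance ID is a placeholder):

```sh
# Require IMDSv2 session tokens (disables IMDSv1) and limit the PUT response
# hop count to 1 so that traffic from pods, which crosses an extra network hop,
# cannot reach the metadata endpoint.
aws ec2 modify-instance-metadata-options \
    --instance-id i-0123456789abcdef0 \
    --http-tokens required \
    --http-put-response-hop-limit 1
```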

@ThisIsQasim
Copy link

The same issue happens when running the CloudWatch agent on EKS Fargate. The metadata service isn't available, so the agent decides it is running on-premises and starts looking for credentials in .aws/credentials, even though it has an IAM role attached via a serviceAccount (IRSA). Running the same pod on EC2 with the same serviceAccount works fine.

@ThisIsQasim
Copy link

I have added an environment variable, RUN_IN_AWS=True, that should let us force EC2 mode even when IMDS is unavailable.
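
For reference, consuming that would look roughly like this fragment of the cloudwatch-agent DaemonSet pod spec (a sketch; only the env addition is shown, the image tag is a placeholder, and it assumes an agent build that recognizes RUN_IN_AWS):

```yaml
containers:
  - name: cloudwatch-agent
    image: amazon/cloudwatch-agent:latest   # placeholder tag
    env:
      - name: RUN_IN_AWS
        value: "True"    # force EC2 mode even when IMDS is unreachable
```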

@whereisaaron
Copy link

Thank you for the information. I disabled IMDSv1 but had to back off from setting the hop limit to 1 on EKS, because otherwise the agent could not start. The reason is that several other manifests here don't include the RUN_IN_AWS=True update, including the often-promoted 'quick start' manifests. Not so quick when you have to redeploy all the node pools to enable IMDSv2 again 😀

https://github.com/aws-samples/amazon-cloudwatch-container-insights/blob/master/k8s-deployment-manifest-templates/deployment-mode/daemonset/container-insights-monitoring/quickstart/cwagent-fluent-bit-quickstart.yaml

https://github.com/aws-samples/amazon-cloudwatch-container-insights/blob/master/k8s-deployment-manifest-templates/deployment-mode/daemonset/container-insights-monitoring/quickstart/cwagent-fluentd-quickstart.yaml
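
Until those manifests are updated, one possible stopgap (an untested sketch; the namespace and DaemonSet name are assumed from the quickstart manifests) is to inject the variable into the already-deployed agent:

```sh
# Add RUN_IN_AWS=True to the cloudwatch-agent DaemonSet created by the
# quickstart manifests; this triggers a rolling restart of the agent pods.
kubectl -n amazon-cloudwatch set env daemonset/cloudwatch-agent RUN_IN_AWS=True
```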
