This page describes how to install Kubernetes and App Service Monitoring using Helm charts and the Cisco Cloud Observability Amazon Elastic Kubernetes Service (EKS) Blueprints add-on.

The Amazon EKS Blueprints for Terraform project is an open-source framework implemented in Terraform that enables platform administrators to easily configure and manage their Amazon EKS clusters. The EKS Blueprints add-on framework supports this alternate installation method for Kubernetes and App Service Monitoring, which is partially automated using Terraform.

This document contains references to third-party documentation. Cisco AppDynamics does not own any rights and assumes no responsibility for the accuracy or completeness of such third-party documentation.

Before You Begin

Before installing the Kubernetes and App Service Monitoring solution, ensure that you meet the following requirements:

  • Your account is set up on Cisco Cloud Observability. See Account Administration.
  • You are connected to the cluster that you want to monitor.
  • You have administrator privileges on the monitored cluster to run the Helm chart commands.

Hardware Requirements

The default hardware settings are:

| Component | CPU | Memory | Supported Platforms | Per Cluster or Node |
| --- | --- | --- | --- | --- |
| Cisco AppDynamics Distribution of OpenTelemetry Collector* | 200m | 1024MiB | Linux AMD64, Linux ARM64, Windows 2019 | Per node (Kubernetes DaemonSet) |
| Cisco AppDynamics Operator | 200m | 128MiB | Linux AMD64, Linux ARM64 | Per cluster (Kubernetes Deployment) |
| Cisco AppDynamics Smart Agent | 350m | 512MiB | Linux AMD64, Linux ARM64 | Per cluster (Kubernetes Deployment) |
| Cluster Collector | 1000m | 1000MiB | Linux AMD64, Linux ARM64, Windows 2019 | Per cluster (Kubernetes Deployment) |
| Infrastructure Collector | 350m | 100MiB (Linux), 300MiB (Windows) | Linux AMD64, Linux ARM64, Windows 2019 | Per node (Kubernetes DaemonSet) |
| Log Collector | 10m | 150MiB | Linux AMD64, Linux ARM64, Windows 2019 | Per node (Kubernetes DaemonSet) |
| Windows Exporter | 200m | 200MiB | Windows 2019 | Per node (Kubernetes DaemonSet) |
| OpenTelemetry Operator for Kubernetes** | 600m | 256MiB | Linux AMD64, Linux ARM64 | Per cluster (Kubernetes Deployment) |

*For throughput-specific details, see Performance and Scaling for the Cisco AppDynamics Distribution of OpenTelemetry Collector.
**OpenTelemetry Operator manager and Kube RBAC Proxy

Software Requirements

Kubernetes and App Service Monitoring is designed to run in hybrid (Linux and Windows) or Linux-only clusters. Kubernetes and App Service Monitoring requires:

| Name | Image |
| --- | --- |
| Cisco AppDynamics Distribution of OpenTelemetry Collector | appdynamics/appdynamics-cloud-otel-collector:24.4.1-1598 |
| Cisco AppDynamics Operator | appdynamics/appdynamics-cloud-operator:24.4.0-1445 |
| Cisco AppDynamics Smart Agent | appdynamics/appdynamics-smartagent:24.4.0-1960 |
| Cluster Collector | appdynamics/appdynamics-cloud-k8s-monitoring:24.4.0-2034 |
| Infrastructure Collector | appdynamics/infraagent-cnao:24.4.0-5170-ecs-amd64 (for ECS); appdynamics/appdynamics-cloud-k8s-monitoring:24.4.0-2034 (for Kubernetes and Cloud Infrastructure) |
| kube-rbac-proxy | gcr.io/kubebuilder/kube-rbac-proxy:v0.15.0 |
| Log Collector | appdynamics/appdynamics-cloud-log-collector-agent:24.4.0-1163 |
| Windows Exporter | ghcr.io/prometheus-community/windows-exporter:0.23.1 |
| OpenTelemetry Operator for Kubernetes | ghcr.io/open-telemetry/opentelemetry-operator/opentelemetry-operator:v0.89.0; ghcr.io/open-telemetry/opentelemetry-operator/target-allocator:0.89.0 |

Cluster Support

Linux Cluster

  • Amazon Elastic Kubernetes Service (EKS) <= 1.28
  • Azure Kubernetes Service (AKS) <= 1.28
  • Red Hat OpenShift Service on AWS (ROSA) <= 4.13.14
  • Microsoft Azure for OpenShift <= 4.13
  • Self-managed OpenShift Container Platform (OCP) 4.11
  • Rancher Kubernetes Engine (RKE) <= 1.27
  • Rancher Kubernetes Engine Government (RKE2) <= 1.29
  • Tanzu Kubernetes Grid Integrated Edition (TKGI) <= 1.18
  • Google Kubernetes Engine (GKE) <= 1.28

Windows and Linux Cluster

  • Amazon Elastic Kubernetes Service (EKS) <= 1.27
  • Azure Kubernetes Service (AKS) <= 1.28
  • Google Kubernetes Engine (GKE) <= 1.28
  • Containerd runtime for Windows nodes
  • Windows Server 2019 for Windows nodes

Cisco AppDynamics and OpenTelemetry Operators can run on Linux nodes only.

The k8s.cluster.id attribute is required to send MELT data for Kubernetes entities. By default, Cisco AppDynamics Helm charts and Collectors attach the k8s.cluster.id attribute to data from Cisco AppDynamics sources. To send data from third-party collectors, you must enrich your data with the k8s.cluster.id attribute. The k8s.cluster.id attribute must have a value equal to the UUID of the kube-system namespace.
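If you send data from a third-party OpenTelemetry Collector, one way to attach the attribute is with the collector's `resource` processor. The following is a minimal sketch, assuming you have already retrieved the kube-system namespace UUID with kubectl:

```yaml
# Look up the required value first:
#   kubectl get namespace kube-system -o jsonpath='{.metadata.uid}'
processors:
  resource:
    attributes:
      - key: k8s.cluster.id
        value: <kube-system-namespace-uuid>  # replace with the UUID from the command above
        action: upsert
```

Remember to add the `resource` processor to the relevant pipelines in your collector configuration so the attribute is applied to exported data.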

In addition, for the Log Collector, ensure that your environment meets the Log Collector Requirements.

Required Deployment Tools

The following deployment tools are required for this installation method:

| Deployment Tool | Description | Required Version | Installation Link |
| --- | --- | --- | --- |
| AWS Command Line Interface | An open-source tool that enables you to interact with AWS services using commands in your command-line shell. | >= 2 | Install or update the latest version of the AWS CLI |
| Terraform | A tool for efficiently building, changing, and versioning infrastructure. Terraform is used to automate the deployment of the Cisco Cloud Observability Add-On for Amazon EKS Blueprints. | >= 1.6 | Install Terraform |
| Helm | The package manager for Kubernetes® that streamlines installing and managing Kubernetes applications. | >= 3.8.0 | Helm platform binaries (GitHub) |
| kubectl | A command-line tool for communicating with the Kubernetes API server to deploy and manage applications. | Within one minor version of your Amazon EKS cluster control plane. For example, a 1.27 kubectl client works with Kubernetes 1.26, 1.27, and 1.28 clusters. | Installing or updating kubectl |
| yq | A lightweight and portable command-line YAML processor. | >= 4.35 | yq binaries (GitHub) |

Install Kubernetes and App Service Monitoring Using the Cisco Cloud Observability Add-On for Amazon EKS Blueprints 

These are the high-level steps:

  1. Get the Code
  2. Generate and Download Operators and Collectors Files
  3. Verify Your Connection to AWS and the EKS Cluster
  4. Prepare the Terraform Configuration for Your Environment
  5. Deploy the Cisco Cloud Observability Add-On for Amazon EKS Blueprints

1. Get the Code

Clone the Cisco Cloud Observability Add-On for Amazon EKS Blueprints project from the Cisco DevNet repository on GitHub:

$ cd ~
$ git clone https://github.com/CiscoDevNet/appdynamics-eks-blueprints-addon.git
$ cd appdynamics-eks-blueprints-addon

2. Generate and Download Operators and Collectors Files

  1. Log into the Cisco Cloud Observability UI.
  2. Use the left-hand navigation panel to navigate to Configure > Kubernetes and APM.
  3. Under CONFIGURE DATA COLLECTORS, enter your Credential set name and Kubernetes cluster name.

    For the Kubernetes cluster name, we recommend using the actual name of your EKS cluster to make it easy to identify your deployment. You can also use your EKS cluster name for the Credential set name.

  4. Under ENABLE ADDITIONAL CONFIGURATIONS, check the boxes for Cluster Collector, Infrastructure Collector, and Log Collector Agent. Specify your operating system for each collector.
  5. Click Generate configuration file. This step generates the operators-values.yaml and collectors-values.yaml files. Download both files.
  6. Click Done.
  7. Copy the operators-values.yaml and collectors-values.yaml files to the add-on project home directory for Terraform. If you downloaded these files to a directory other than ~/Downloads, adjust the commands as needed.

    $ cd ~/appdynamics-eks-blueprints-addon/examples/addon/
    $ cp ~/Downloads/operators-values.yaml .
    $ cp ~/Downloads/collectors-values.yaml .

3. Verify Your Connection to AWS and the EKS Cluster

  1. Set the AWS environment variables:

    $ export AWS_REGION=<your_aws_region>
    $ export AWS_EKS_CLUSTER=<your_aws_eks_cluster_name>
  2. Invoke the Security Token Service (STS) to verify access to your AWS account via the AWS CLI:

    $ aws sts get-caller-identity

    Sample output:

    {
        "UserId": "ABCDEFGHIJKLMNOPQRSTU",
        "Account": "012345678901",
        "Arn": "arn:aws:iam::012345678901:user/some.user"
    }
  3. Retrieve the AWS EKS kubeconfig and verify access to the EKS cluster:

    $ aws eks --region $AWS_REGION update-kubeconfig --name $AWS_EKS_CLUSTER
    $ kubectl config current-context
    $ kubectl get nodes -o wide

    The output should display the updated context, current context, names of the EKS cluster nodes, status, Kubernetes version, internal and external IP addresses, and OS image.

4. Prepare the Terraform Configuration for Your Environment

Before you can run the Terraform commands and deploy the Cisco Cloud Observability Helm charts, you must override several Terraform variables related to your Cisco Cloud Observability tenant and EKS cluster by generating a custom terraform.tfvars file. Terraform uses this file to override default variable values at runtime.

To streamline this process, you can use a Cisco Cloud Observability script to extract key values from the downloaded operators-values.yaml and collectors-values.yaml files. These key values contain information specific to your tenant, such as operator and collector endpoints, client secret, tenant ID, and token URL.
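Conceptually, the extraction step pulls scalar values out of the downloaded YAML files and substitutes them into terraform.tfvars. The idea can be sketched as follows; note that this is an illustrative Python sketch only (the actual tool is a shell script), and the `clientSecret` and `tokenUrl` key names shown are hypothetical placeholders:

```python
import re
from typing import Optional

def extract_value(yaml_text: str, key: str) -> Optional[str]:
    """Return the scalar value of a flat `key: value` line, if present.

    Naive line-based lookup: enough for flat values files, but a real
    YAML parser is needed for nested structures.
    """
    pattern = re.compile(
        rf'^\s*{re.escape(key)}:\s*"?([^"\n]+?)"?\s*$', re.MULTILINE
    )
    match = pattern.search(yaml_text)
    return match.group(1) if match else None

# Hypothetical sample content standing in for a downloaded values file.
sample = '''
clientSecret: "abc123"
tokenUrl: "https://example.test/oauth2/token"
'''

print(extract_value(sample, "clientSecret"))  # prints: abc123
print(extract_value(sample, "tokenUrl"))      # prints: https://example.test/oauth2/token
```

Each extracted value would then be written into the matching variable assignment in terraform.tfvars.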

To verify that all the files are in the correct place:

  1. Run the following command from the add-on project home directory for Terraform. Use the output to verify that the operators-values.yaml and collectors-values.yaml files are in the current directory.

    $ cd ~/appdynamics-eks-blueprints-addon/examples/addon/
    $ ls -alF
  2. From the current directory, run the following script:

    $ ../../bin/extract_cnao_config_values_for_terraform.sh

    Example output:

    Begin processing Helm Chart files...
    Extracting Cisco Cloud Observability configuration values...
    Substituting EKS Cluster name variable...
    Substituting Helm Chart variables...
    Removing temporary backup file...
    Cisco Cloud Observability configuration values extraction complete.
  3. Examine the terraform.tfvars file:

    $ cat terraform.tfvars

    You should see that the EKS cluster name and Helm chart variables are now uncommented and populated with the correct data.
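After extraction, terraform.tfvars holds your cluster name plus the tenant-specific Helm chart values. The fragment below is purely illustrative of the file's shape; the variable names are hypothetical placeholders, and the actual names and values are written by the extraction script:

```hcl
# Hypothetical illustration only -- the real file is generated by the extraction script.
cluster_name  = "my-eks-cluster"   # your EKS cluster name
client_secret = "<client-secret>"  # tenant value extracted from the downloaded values files
tenant_id     = "<tenant-id>"
token_url     = "<token-url>"
```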

5. Deploy the Cisco Cloud Observability Add-On for Amazon EKS Blueprints

  1. From the current directory, run the following Terraform lifecycle commands in sequential order:

    $ terraform --version
    $ terraform init
    $ terraform validate
    $ terraform plan -out terraform-addon.tfplan
    $ terraform apply terraform-addon.tfplan
  2. In the Terraform output from the apply operation, verify that:

    1. The cco-operators were created.

    2. The cco-collectors were created.

    3. The Terraform apply operation was completed successfully.

Next Steps

Observe Your EKS Cluster

Once Kubernetes and App Service Monitoring is installed, Cisco Cloud Observability populates the Observe page with entity-centric views that enable you to observe your EKS cluster. For more information on UI elements, see Observe UI Overview.

To observe your EKS cluster:

  1. Log into the Cisco Cloud Observability UI.
  2. On the Observe page, navigate to the Kubernetes domain. Click Clusters.
  3. In the Filter View box, enter the following string:

    EntityStatus = 'active' && attributes(k8s.cluster.name) = '<your_EKS_cluster_name>'
  4. Click Apply.
  5. From the filtered list of clusters, click your EKS cluster name.

Uninstall the Cisco Cloud Observability Add-On for Amazon EKS Blueprints

From the current directory, run the following Terraform lifecycle commands in sequential order:

$ cd ~/appdynamics-eks-blueprints-addon/examples/addon/
$ terraform destroy -auto-approve

Third party names, logos, marks, and general references used in these materials are the property of their respective owners or their affiliates in the United States and/or other countries. Inclusion of such references are for informational purposes only and are not intended to promote or otherwise suggest a relationship between Cisco AppDynamics and the third party.