This page describes how to set up the Machine Agent with Docker Visibility enabled to run as a DaemonSet on a Kubernetes cluster.

Container visibility enables you to monitor containerized applications running inside Kubernetes pods and identify container issues that impact application performance. You deploy the Machine Agent as a Kubernetes DaemonSet in every node of a Kubernetes cluster. Deploying the Machine Agent as a DaemonSet ensures that every Kubernetes worker node runs the Machine Agent to collect critical resource metrics from the node host and associated Docker containers. 

Using Docker Visibility to monitor Kubernetes containers is no longer the preferred option and will be deprecated. Use the Cluster Agent instead as described in Monitor Kubernetes with the Cluster Agent. The Cluster Agent supports Kubernetes container visibility, as well as visibility into cluster health and capacity.

Container Visibility with Kubernetes

Deploy the Machine Agent in Docker-enabled mode. To configure and run the Machine Agent using Docker, see Configure Docker Visibility.

The Machine Agent:

  • Identifies the containers managed by Kubernetes.
  • Determines if these containers contain app server agents.
  • Correlates containers with app server agents with the APM nodes for that application.

This diagram depicts the deployment scenario for container visibility in Kubernetes:

Container Visibility Deployment Scenario

  • Install the Machine Agent container as a DaemonSet on each Kubernetes node.
  • To collect APM metrics from any container in a pod, install the correct App Server Agent in the container before deploying the pod.
  • The Machine Agent collects resource usage metrics for each monitored container, as well as Machine and Server metrics for the host, and forwards the metrics to the Controller.
  • (Optional) Install the Network Agent as a DaemonSet on each node you want to monitor. The Network Agent collects metrics for all network connections between monitored application components and sends these metrics to the Controller.

Before You Begin

Review the requirements for Container Visibility with Kubernetes:

  • A Machine Agent must run as a DaemonSet on every Kubernetes node you want to monitor.
  • Every node to be monitored must have a Server Visibility license.
  • Docker Visibility must be enabled on each Machine Agent.
  • App Server Agents and Machine Agents must be registered under the same account and report to the same Controller.
  • If multiple App Server Agents run in the same pod, register the container ID as the host ID on each App Server Agent and Machine Agent.

Limitations

  • Only the Docker Container Runtime is supported.
  • Only Pod and ReplicaSet labels are supported.

Procedure

  1. Enable Container Visibility
  2. Register the Container ID as the Host ID
  3. Configure the Cluster Role
  4. Deploy the Machine Agent on Kubernetes

After the Machine Agent has been deployed on Kubernetes, you can add the App Server Agent to your image.

Enable Container Visibility

Update the Controller to version 4.4.3 or later.

To enable Kubernetes Visibility in your environment, edit these parameters: 

  • Controller 
    • sim.machines.tags.k8s.enabled: Defaults to true. The global tags-enabled flag takes priority over this property.
    • sim.machines.tags.k8s.pollingInterval: Defaults to one minute. The minimum polling interval you can set is 30 seconds.
  • Machine Agent
    • k8sTagsEnabled: Defaults to true. This property is specified in the ServerMonitoring.yml file.
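For reference, the Machine Agent setting lives in ServerMonitoring.yml. This is a minimal fragment showing only that property; surrounding settings are omitted:

```yaml
# ServerMonitoring.yml (fragment)
# Kubernetes tag collection is enabled by default.
k8sTagsEnabled: true
```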

Continue with Use Docker Visibility with Red Hat OpenShift. You can use the example DaemonSet, the sample Docker image for Machine Agent, and the sample Docker start script to set up the Machine Agent.

Register the Container ID as the Host ID

Install an app server agent in every container in a Kubernetes pod to collect application metrics. Kubernetes pods can contain multiple containers, and all containers in a pod share the same host ID; the Machine Agent cannot distinguish the containers running in a pod unless each container ID is registered as the host ID. Therefore, if multiple app server agents run in the same pod (for example, on the Red Hat OpenShift platform), you must register the container ID as the unique host ID on both the app server agent and the Machine Agent to collect container-specific metrics from the pod.

To register the container ID as the host ID:

  1. Get the container ID from the cgroup.

    cat /proc/self/cgroup | awk -F '/' '{print $NF}'  | head -n 1
    CODE
  2. Register the app server agents.

    -Dappdynamics.agent.uniqueHostId=$(sed -rn '1s#.*/##; 1s/(.{12}).*/\1/p' /proc/self/cgroup)
    CODE
  3. Register the Machine Agent.

    -Dappdynamics.docker.container.containerIdAsHostId.enabled=true 
    CODE
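The extraction in the steps above can be seen in isolation with a sample cgroup line standing in for /proc/self/cgroup (the hex ID below is illustrative; the layout assumes cgroup v1 as used by the Docker runtime):

```shell
# Sample first line of /proc/self/cgroup inside a Docker container
# (cgroup v1 layout; the hex ID is illustrative).
cgroup_line='12:cpuset:/docker/0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcd'

# Strip everything up to the last '/', then keep the first 12 characters,
# exactly as in the uniqueHostId expression in step 2 above.
container_id=$(printf '%s\n' "$cgroup_line" | sed -rn '1s#.*/##; 1s/(.{12}).*/\1/p')

echo "$container_id"
```

In a real container you would read /proc/self/cgroup directly and pass the result as -Dappdynamics.agent.uniqueHostId when starting the application.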

Configure the Cluster Role

This sample cluster role definition provides read access to various Kubernetes resources. These permissions enable the Kubernetes extensions to the Machine Agent and pod metadata collection. The role is named appd-cluster-reader, but you can rename it. The definition lists the API groups available to members of this role; for each API group, it specifies the resources to be accessed and the allowed access methods. Because the agent only retrieves information from these API endpoints, read-only access is sufficient, expressed by the "get", "list", and "watch" verbs.

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: appd-cluster-reader
rules:
- nonResourceURLs:
      - '*'
  verbs:
      - get
- apiGroups: ["batch"]
  resources:
    - "jobs"
  verbs: ["get", "list", "watch"]
- apiGroups: ["extensions"]
  resources:
    - daemonsets
    - daemonsets/status
    - deployments
    - deployments/scale
    - deployments/status
    - horizontalpodautoscalers
    - horizontalpodautoscalers/status
    - ingresses
    - ingresses/status
    - jobs
    - jobs/status
    - networkpolicies
    - podsecuritypolicies
    - replicasets
    - replicasets/scale
    - replicasets/status
    - replicationcontrollers
    - replicationcontrollers/scale
    - storageclasses
    - thirdpartyresources
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources:
    - bindings
    - componentstatuses
    - configmaps
    - endpoints
    - events
    - limitranges
    - namespaces
    - namespaces/status
    - nodes
    - nodes/status
    - persistentvolumeclaims
    - persistentvolumeclaims/status
    - persistentvolumes
    - persistentvolumes/status
    - pods
    - pods/binding
    - pods/eviction
    - pods/log
    - pods/status
    - podtemplates
    - replicationcontrollers
    - replicationcontrollers/scale
    - replicationcontrollers/status
    - resourcequotas
    - resourcequotas/status
    - securitycontextconstraints
    - serviceaccounts
    - services
    - services/status
  verbs: ["get", "list", "watch"]
- apiGroups:
  - apps
  resources:
    - controllerrevisions
    - daemonsets
    - daemonsets/status
    - deployments
    - deployments/scale
    - deployments/status
    - replicasets
    - replicasets/scale
    - replicasets/status
    - statefulsets
    - statefulsets/scale
    - statefulsets/status
  verbs:
    - get
    - list
    - watch
- apiGroups:
  - apiextensions.k8s.io
  resources:
    - customresourcedefinitions
    - customresourcedefinitions/status
  verbs:
    - get
    - list
    - watch
- apiGroups:
  - apiregistration.k8s.io
  resources:
    - apiservices
    - apiservices/status
  verbs:
    - get
    - list
    - watch
- apiGroups:
  - events.k8s.io
  resources:
    - events
  verbs:
    - get
    - list
    - watch
CODE

Once the role is defined, you must create a cluster role binding to associate the role with a service account. This example ClusterRoleBinding spec makes the appd-cluster-reader service account in the "myproject" project a member of the appd-cluster-reader cluster role. The matching names are coincidental; the service account and the cluster role do not have to share a name.

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cluster-reader-role-binding
subjects:
- kind: ServiceAccount
  name: appd-cluster-reader
  namespace: myproject
roleRef:
  kind: ClusterRole
  name: appd-cluster-reader
  apiGroup: rbac.authorization.k8s.io
CODE
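A ClusterRoleBinding assumes the service account it references already exists in the target namespace. A minimal sketch of that manifest, with the "myproject" namespace matching the example binding:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: appd-cluster-reader
  namespace: myproject
```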

Deploy the Machine Agent on Kubernetes

You can deploy the AppDynamics Machine Agent as a single container image without an init container. By default, the Machine Agent is deployed to the cluster as a DaemonSet, which schedules one Agent instance on every cluster node. When required, you can configure the DaemonSet with node affinity or anti-affinity rules so that it is deployed only to a desired set of nodes rather than across the entire cluster. See Assigning Pods to Nodes to learn about node affinity.
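As one way to do this, a node-affinity rule can restrict the DaemonSet to a labeled subset of nodes. This sketch shows only the relevant pod-spec fragment; the monitoring=enabled label is an illustrative assumption, not an AppDynamics default:

```yaml
# DaemonSet pod template fragment: schedule only on nodes
# labeled monitoring=enabled (label is illustrative).
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: monitoring
            operator: In
            values: ["enabled"]
```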

To harvest pod metadata, the service account used to deploy the Machine Agent must have the cluster-reader role in OpenShift. The cluster-reader role is also required for the Kubernetes extensions to the Machine Agent.

# assigning cluster-reader role in OpenShift
oc adm policy add-cluster-role-to-user cluster-reader -z appd-account
BASH

If you are working with a vanilla Kubernetes distribution, it may not have a pre-built cluster role similar to cluster-reader in OpenShift. See ClusterRole Configuration.

Instrument Applications with Kubernetes

There are several approaches to instrumenting applications deployed with Kubernetes; the method you choose depends on your particular requirements and DevOps processes. To monitor an application container with AppDynamics, you must include an App Server Agent in that container by:

  • Using an appropriate base image which has the App Server Agent pre-installed.
  • Loading the App Server Agent dynamically as part of the container startup using an init container.
  • Loading the App Server Agent and dynamically attaching it to a running process (where the language runtime supports it).

The third option is usually applicable only to Java-based applications since the JVM supports Dynamic Attach, a standard feature of the AppDynamics Java APM Agent. See Dynamic Java Instrumentation.

For the other options, it is common practice to make use of standard Kubernetes features such as Init Containers, ConfigMaps, and Secrets (as described in Deploying AppDynamics Agents to OpenShift Using Init Containers).

Resource Limits

The main application being monitored should have resource limits defined. Allow about 2% of CPU headroom and up to 100 MB of additional memory.

To support up to 500 containers, you can configure the Machine Agent with resource requests of 400M memory and 0.1 CPU, and limits of 600M memory and 0.2 CPU.
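Expressed as a container spec fragment, those values look like this (units as given above; adjust to your workload):

```yaml
resources:
  requests:
    memory: "400M"
    cpu: "0.1"
  limits:
    memory: "600M"
    cpu: "0.2"
```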

AppDynamics provides a Kubernetes Snapshot Extension to monitor the health of the Kubernetes cluster. When deploying this extension, run only a single instance of it per cluster. Do not include it in the DaemonSet, to avoid duplicate data and potential cluster overload. Instead, deploy the Machine Agent instance carrying this extension, in addition to the DaemonSet, as a separate single-replica Deployment for Server Visibility. In this case, you can reduce the memory request to 250M and disable SIM and Docker monitoring on that Machine Agent instance.
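A single-replica Deployment for the extension-bearing Machine Agent might be sketched as follows; the names, image tag, and service account are illustrative assumptions, not documented defaults:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: appd-machine-agent-k8s-snapshot
spec:
  replicas: 1                # one instance per cluster to avoid duplicates
  selector:
    matchLabels:
      app: appd-k8s-snapshot
  template:
    metadata:
      labels:
        app: appd-k8s-snapshot
    spec:
      serviceAccountName: appd-cluster-reader     # needs cluster-reader access
      containers:
      - name: machine-agent
        image: appdynamics/machine-agent:latest   # illustrative
        resources:
          requests:
            memory: "250M"   # reduced: SIM and Docker monitoring disabled
```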