With AppDynamics you can gain real-time visibility into your containerized applications deployed to Kubernetes. Kubernetes is an open-source container-orchestration platform for automating deployment, scaling, and management of applications running in containers. 

Container visibility provides container-level metrics and gives you visibility into CPU, memory, network, and packet utilization. These metrics can be baselined and have health rules associated with them, along with detailed resource usage statistics about your APM-monitored container applications. By viewing and comparing APM metrics with the underlying container and server metrics, you quickly gain deep insight into the performance of your containerized applications and into potential impediments in your infrastructure stack. For example, specific metrics can help you identify both "bandwidth-hogging" applications and container-level network errors.

Container visibility allows you to monitor containerized applications running inside Kubernetes pods and to identify container issues that impact application performance. The Machine Agent is deployed as a Kubernetes DaemonSet so that it runs on every worker node of the cluster and collects critical resource metrics from both the node host and the associated Docker containers.

Container Visibility with Kubernetes

Deploy the Machine Agent in Docker-enabled mode. For details on how to configure and run the Machine Agent with Docker, see Configuring Docker Visibility. The Machine Agent then:

  • Identify the containers managed by Kubernetes.
  • Determine if these containers contain App Server Agents.
  • Correlate containers that contain App Server Agents with the APM nodes for that application.
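The Docker Visibility setting itself lives in the Machine Agent's ServerMonitoring.yml. The excerpt below is a minimal sketch of that file with Docker Visibility switched on; treat it as illustrative and refer to Configuring Docker Visibility for the full set of options.

Enable Docker Visibility (ServerMonitoring.yml excerpt)
# Enables Docker Visibility so the Machine Agent can discover and monitor
# the containers that Kubernetes manages on this node.
dockerEnabled: "true"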

The following diagram illustrates the deployment scenario for container visibility in Kubernetes:

  • Install the Machine Agent container as a DaemonSet on each Kubernetes node (a minimal DaemonSet sketch follows this list).
  • To collect APM metrics from a container in a pod, install the appropriate APM Agent in the container before deploying the pod.
  • The Machine Agent collects resource usage metrics for each monitored container, as well as machine and server metrics for the host, and forwards the metrics to the Controller.
  • (Optional) Install the Network Agent as a DaemonSet on each node that you want to monitor. The Network Agent collects metrics for all network connections between monitored application components and sends them to the Controller.
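As a concrete illustration of this layout, the following is a minimal sketch of a Machine Agent DaemonSet. The image name, namespace, and service account are placeholders for your own values, and the Docker socket mount reflects one common way to give the agent access to container metadata; adapt it to your environment.

Machine Agent DaemonSet (illustrative sketch)
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: appd-machine-agent
  namespace: appdynamics                        # placeholder namespace
spec:
  selector:
    matchLabels:
      app: appd-machine-agent
  template:
    metadata:
      labels:
        app: appd-machine-agent
    spec:
      serviceAccountName: appd-account          # service account with the required read permissions
      containers:
      - name: machine-agent
        image: my-registry/machine-agent:latest # placeholder image with Server Visibility enabled
        volumeMounts:
        - name: docker-sock
          mountPath: /var/run/docker.sock       # lets the agent query the local Docker runtime
      volumes:
      - name: docker-sock
        hostPath:
          path: /var/run/docker.sock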

Before You Begin

Container visibility with Kubernetes requires the following:

  • The Machine Agent must run as a DaemonSet on every Kubernetes node that you want to monitor.
  • Each node to be monitored must have a Server Visibility license.
  • Docker Visibility must be enabled on the Machine Agent.
  • App Server Agents and Machine Agents must be registered under the same account and must report to the same Controller.
  • If multiple App Server Agents run in the same pod, register the container ID as the host ID on both the App Server Agent and the Machine Agent.

Limitations

  • Only the Docker Container Runtime is supported.
  • Only Pod and ReplicaSet labels are supported.

Enable Container Visibility

Update the Controller to 4.4.3 or higher if you have not already done so. To enable Kubernetes visibility in your environment, edit the following parameters: 

Controller 
  • sim.machines.tags.k8s.enabled: Defaults to true. The global tags-enabled flag takes priority over this setting.
  • sim.machines.tags.k8s.pollingInterval: Defaults to one minute. The minimum polling interval is 30 seconds.
Machine Agent
  • k8sTagsEnabled: Defaults to true; specified in the ServerMonitoring.yml file.
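The Machine Agent side of this configuration is a single property in ServerMonitoring.yml; the excerpt below is a minimal sketch.

Kubernetes Tags (ServerMonitoring.yml excerpt)
# Collect Kubernetes tags (Pod and ReplicaSet labels); defaults to true.
k8sTagsEnabled: "true"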

Continue with Monitoring Red Hat OpenShift. You can use the example DaemonSet, the sample Docker image for the Machine Agent, and the sample Docker start script to quickly set up the Standalone Machine Agent.

Register the Container ID as the Host ID

Install an App Server Agent in each container in a Kubernetes pod to collect application metrics. If multiple App Server Agents are running in the same pod, on the Red Hat OpenShift platform for example, you must register the container ID as the unique host ID on both the App Server Agent and the Machine Agent to collect container-specific metrics from the pod. A Kubernetes pod can contain multiple containers, and all of them share the same host ID; the Machine Agent cannot distinguish the containers running in a pod unless each container ID is registered as the host ID.

To register the container ID as the host ID:

  1. Get the container ID from the cgroup:

    cat /proc/self/cgroup | awk -F '/' '{print $NF}'  | head -n 1
  2. Register the App Server Agents by setting the unique host ID system property:

    -Dappdynamics.agent.uniqueHostId=$(sed -rn '1s#.*/##; 1s/(.{12}).*/\1/p' /proc/self/cgroup)

    For OpenShift, run the following command:
    -Dappdynamics.agent.uniqueHostId=$(sed -rn '1s#.*/##; 1s/docker-(.{12}).*/\1/p' /proc/self/cgroup) 

  3. Configure the Machine Agent to use the container ID as the host ID:

    -Dappdynamics.docker.container.containerIdAsHostId.enabled=true 
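Putting the pieces together, a pod spec might pass the unique host ID to a Java App Server Agent at container startup. The sketch below assumes a Java application launched through a shell so that the sed substitution from step 2 is evaluated inside the container; the image name and file paths are placeholders.

Registering the Container ID at Startup (illustrative pod excerpt)
spec:
  containers:
  - name: my-java-app
    image: my-registry/my-java-app:latest   # placeholder image with the Java Agent installed
    command: ["sh", "-c"]
    args:
      # The sed expression extracts the first 12 characters of the container ID
      # from the cgroup file and passes it as the unique host ID.
      - >
        java
        -javaagent:/opt/appdynamics/javaagent.jar
        -Dappdynamics.agent.uniqueHostId=$(sed -rn '1s#.*/##; 1s/(.{12}).*/\1/p' /proc/self/cgroup)
        -jar /app/my-app.jar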

Instrument Applications with Kubernetes

There are several approaches to instrumenting applications deployed with Kubernetes; which one you choose depends on your particular requirements and DevOps processes. To monitor an application container with AppDynamics, an APM Agent must be included in that container. This can be done in a number of ways:

  1. Using an appropriate base image that has the APM Agent pre-installed

  2. Loading the agent dynamically as part of the container startup

  3. Loading the agent and dynamically attaching to a running process (where the language runtime supports it)

Option 3 is usually applicable only to Java-based applications, since the JVM supports Dynamic Attach, which is a standard feature of the AppDynamics Java APM Agent. See this blog for more details and an example of how to do this. For the other options, it is common practice to use standard Kubernetes features such as Init Containers, ConfigMaps, and Secrets, as described in this blog.
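As an illustration of option 2, the sketch below uses an init container to copy the Java Agent into a shared volume that the application container then references at startup. The image names, paths, and JAVA_OPTS handling are assumptions for illustration; the blogs referenced above describe complete, supported patterns.

Loading the Agent with an Init Container (illustrative pod excerpt)
spec:
  initContainers:
  - name: appd-agent-init
    image: my-registry/appd-java-agent:latest   # placeholder image containing the agent files
    command: ["cp", "-r", "/opt/appdynamics/.", "/opt/appd-agent/"]
    volumeMounts:
    - name: appd-agent
      mountPath: /opt/appd-agent
  containers:
  - name: my-java-app
    image: my-registry/my-java-app:latest       # placeholder application image
    env:
    - name: JAVA_OPTS                           # assumes the app start script honors JAVA_OPTS
      value: "-javaagent:/opt/appd-agent/javaagent.jar"
    volumeMounts:
    - name: appd-agent
      mountPath: /opt/appd-agent
  volumes:
  - name: appd-agent
    emptyDir: {}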

Deploy the Machine Agent on Kubernetes

The AppDynamics Machine Agent can be deployed as a single container image, without the need for an init container. By default, the Machine Agent is deployed to the cluster as a DaemonSet, which distributes one agent instance to each cluster node. Where required, the DaemonSet can be configured with node affinity or anti-affinity rules to ensure that it is deployed only to a desired set of nodes rather than across the entire cluster, as shown in the excerpt below. There is more information on node affinity here.
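For example, a node affinity rule in the DaemonSet's pod template can restrict the agent to nodes carrying a particular label; the label key and value below are placeholders.

Node Affinity for the Machine Agent DaemonSet (illustrative excerpt)
spec:
  template:
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: appd/monitored        # placeholder label applied to the nodes you want to monitor
                operator: In
                values:
                - "true"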

In order to harvest pod metadata, the service account used to deploy the Machine Agent must have the cluster-reader role in OpenShift. The "cluster-reader" role is also required for the Kubernetes extensions to the Machine Agent.

Cluster Reader Role
# assigning cluster-reader role in OpenShift
oc adm policy add-cluster-role-to-user cluster-reader -z appd-account

If you are working with a vanilla Kubernetes distribution, it may not have a pre-built cluster role similar to "cluster-reader" in OpenShift. See ClusterRole Configuration below for details on how to create an equivalent role.

Resource Limits

  • The main application being monitored should have resource limits defined. Provide 2% padding for CPU and add up to 100 MB of memory.
  • To support up to 500 containers, the Machine Agent can be configured with resource requests of 400M memory and 0.1 CPU, and limits of 600M memory and 0.2 CPU, as shown in the excerpt below.
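Expressed in the Machine Agent container spec, those values look like this:

Machine Agent Resource Requests and Limits (container spec excerpt)
resources:
  requests:
    memory: "400M"
    cpu: "0.1"
  limits:
    memory: "600M"
    cpu: "0.2"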

AppDynamics provides a Kubernetes Snapshot Extension for monitoring the health of the Kubernetes cluster. When deploying this extension, keep in mind that only a single instance of the extension should be deployed to the cluster. Do not include it in the DaemonSet, to avoid duplicates and potential cluster overload. Instead, consider deploying the Machine Agent instance that carries the extension as a separate Deployment with one replica, in addition to the DaemonSet used for Server Visibility. In this case, the Machine Agent's SIM and Docker monitoring can be disabled and the memory request can be dropped to 250M.
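A minimal sketch of such a Deployment is shown below. The name, namespace, and image are placeholders, and the comments reflect the guidance above; consult the Kubernetes Snapshot Extension documentation for the exact agent configuration.

Machine Agent Deployment for the Snapshot Extension (illustrative sketch)
kind: Deployment
apiVersion: apps/v1
metadata:
  name: appd-k8s-snapshot-agent
  namespace: appdynamics                     # placeholder namespace
spec:
  replicas: 1                                # only one instance of the extension per cluster
  selector:
    matchLabels:
      app: appd-k8s-snapshot-agent
  template:
    metadata:
      labels:
        app: appd-k8s-snapshot-agent
    spec:
      serviceAccountName: appd-cluster-reader              # needs the cluster-reader permissions described below
      containers:
      - name: machine-agent
        image: my-registry/machine-agent-k8s-ext:latest    # placeholder image bundling the extension
        resources:
          requests:
            memory: "250M"                   # reduced request; SIM and Docker monitoring disabled in the agent config
            cpu: "0.1"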

ClusterRole Configuration

Below is a sample role definition that provides wide read access to various Kubernetes resources. These permissions are more than sufficient to enable the Kubernetes extensions to the Machine Agent as well as pod metadata collection. The role is called 'appd-cluster-reader', but you can name it as necessary. The cluster role definition lists the API groups that will be available to members of this role. For each API group, we define the resources that will be accessed and the access methods. Because we only need to retrieve information from these API endpoints, read-only access, expressed by the "get", "list", and "watch" verbs, is sufficient.

A Sample ClusterRole
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: appd-cluster-reader
rules:
- nonResourceURLs:
      - '*'
  verbs:
      - get
- apiGroups: ["batch"]
  resources:
    - "jobs"
  verbs: ["get", "list", "watch"]
- apiGroups: ["extensions"]
  resources:
    - daemonsets
    - daemonsets/status
    - deployments
    - deployments/scale
    - deployments/status
    - horizontalpodautoscalers
    - horizontalpodautoscalers/status
    - ingresses
    - ingresses/status
    - jobs
    - jobs/status
    - networkpolicies
    - podsecuritypolicies
    - replicasets
    - replicasets/scale
    - replicasets/status
    - replicationcontrollers
    - replicationcontrollers/scale
    - storageclasses
    - thirdpartyresources
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources:
    - bindings
    - componentstatuses
    - configmaps
    - endpoints
    - events
    - limitranges
    - namespaces
    - namespaces/status
    - nodes
    - nodes/status
    - persistentvolumeclaims
    - persistentvolumeclaims/status
    - persistentvolumes
    - persistentvolumes/status
    - pods
    - pods/binding
    - pods/eviction
    - pods/log
    - pods/status
    - podtemplates
    - replicationcontrollers
    - replicationcontrollers/scale
    - replicationcontrollers/status
    - resourcequotas
    - resourcequotas/status
    - securitycontextconstraints
    - serviceaccounts
    - services
    - services/status
  verbs: ["get", "list", "watch"]
- apiGroups:
  - apps
  resources:
    - controllerrevisions
    - daemonsets
    - daemonsets/status
    - deployments
    - deployments/scale
    - deployments/status
    - replicasets
    - replicasets/scale
    - replicasets/status
    - statefulsets
    - statefulsets/scale
    - statefulsets/status
  verbs:
    - get
    - list
    - watch
- apiGroups:
  - apiextensions.k8s.io
  resources:
    - customresourcedefinitions
    - customresourcedefinitions/status
  verbs:
    - get
    - list
    - watch
- apiGroups:
  - apiregistration.k8s.io
  resources:
    - apiservices
    - apiservices/status
  verbs:
    - get
    - list
    - watch
- apiGroups:
  - events.k8s.io
  resources:
    - events
  verbs:
    - get
    - list
    - watch

Once the role is defined, you need to create a cluster role binding to associate the role with a service account. Below is an example of a ClusterRoleBinding spec that makes the appd-cluster-reader service account in project "myproject" a member of the appd-cluster-reader cluster role. Note that the matching names are purely coincidental; the names of the service account and the cluster role do not have to match.

A Sample ClusterRoleBinding
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cluster-reader-role-binding
subjects:
- kind: ServiceAccount
  name: appd-cluster-reader
  namespace: myproject
roleRef:
  kind: ClusterRole
  name: appd-cluster-reader
  apiGroup: rbac.authorization.k8s.io