DRAFT SPACE - APPDYNAMICS INTERNAL ONLY

With AppDynamics you can gain real-time visibility into your containerized applications deployed to Kubernetes. Kubernetes is an open-source container-orchestration platform for automating the deployment, scaling, and management of applications running in containers. Because AppDynamics emphasizes an open architecture for application and business performance monitoring, monitoring can be extended to environments outside the realm of the monitored application and customized to meet specific user needs.

With container visibility you can collect container-level metrics and gain visibility into CPU, memory, and network utilization, whether the application is deployed to Kubernetes, OpenShift, or traditional, non-containerized infrastructure. In Kubernetes, these metrics can be baselined and associated with health rules, along with detailed resource usage statistics for your APM-monitored container applications. By viewing and comparing APM metrics with the underlying container and server metrics, you gain deep insight into the performance of your containerized applications, along with potential impediments in your infrastructure stack. For example, specific metrics can help you identify both "bandwidth-hogging" applications and container-level network errors.

Container visibility allows you to monitor containerized applications running inside Kubernetes pods and to identify container issues that impact application performance. The agent is deployed as a Kubernetes DaemonSet on every node of a Kubernetes cluster. A DaemonSet is a Kubernetes workload object that ensures that a particular pod runs on every node in the cluster, or on some subset of nodes. Deploying the Machine Agent as a DaemonSet ensures that every Kubernetes worker node runs the Machine Agent and that the agent collects critical resource metrics from both the node host and the associated Docker containers.

Container visibility

Deploy the Standalone Machine Agent in Docker-enabled mode. For details on how to configure and run the Standalone Machine Agent using Docker, see Configuring Docker Visibility. The Standalone Machine Agent can then determine if Kubernetes is running and do the following:

  • Identify the containers managed by Kubernetes.
  • Determine if these containers contain App Server Agents.
  • Correlate containers that have App Server Agents with the APM nodes for that application.

The following diagram illustrates the deployment scenario for container visibility in Kubernetes: 

  • Install a Standalone Machine Agent on each Kubernetes node.
  • Install an APM Agent inside each container in a pod you want to monitor.
  • The Standalone Machine Agent then collects hardware metrics for each monitored container, as well as Machine and Server metrics for the host, and forwards the metrics to the Controller.

Before You Begin

Container visibility with Kubernetes requires the following:

  • The monitored machine must have a Server Visibility license.

  • Docker Visibility must be enabled on the Machine Agent. 

  • The Machine Agent must run as a DaemonSet in every Kubernetes node in a cluster.
  • The App Server Agent and Machine Agent must be registered under the same account and use the same Controller.
  • Review Before You Start under Monitoring Docker Containers.
  • If you have multiple App Server agents running in the same pod, register the container ID as the host ID on both the App Server Agent and the Machine Agent. 

Limitations

  • Only Docker Container Runtime is supported.
  • Only APM-monitored application metrics are collected.
  • Only pod and ReplicaSet labels are supported; as a result, only pods are monitored.

Enabling Container Visibility

Update the Controller to 4.4.3 or higher if you have not already done so. To enable Kubernetes visibility in your environment, edit the following parameters: 

Controller 
  • sim.machines.tags.k8s.enabled: The value defaults to true. The global tags enabled flag has priority over this. 
  • sim.machines.tags.k8s.pollingInterval: The value defaults to one minute. The minimum value you can set for the polling interval is 30 seconds.
Machine Agent

k8sTagsEnabled: The value defaults to true and is specified in the ServerMonitoring.yml file. 
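For reference, the corresponding entry in ServerMonitoring.yml is a single boolean (shown here with its default value; this is a minimal fragment, not the full file):

```yaml
# ServerMonitoring.yml fragment (Machine Agent configuration)
# k8sTagsEnabled defaults to true; set it to false to disable Kubernetes tags.
k8sTagsEnabled: true
```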

Continue with Container Visibility with Kubernetes. You can use the example DaemonSet, the sample Docker image for Machine Agent, and the sample Docker start script to quickly set up the Standalone Machine Agent.
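The general shape of such a DaemonSet is sketched below. This is a hypothetical example, not the official sample: the namespace, service account, image, and label names are placeholders you would replace with your own.

```yaml
# Hypothetical DaemonSet for the Standalone Machine Agent.
# Namespace, serviceAccountName, image, and labels are placeholders.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: appd-machine-agent
  namespace: appdynamics
spec:
  selector:
    matchLabels:
      name: appd-machine-agent
  template:
    metadata:
      labels:
        name: appd-machine-agent
    spec:
      serviceAccountName: appd-account        # placeholder service account
      containers:
      - name: machine-agent
        image: my-registry/machine-agent:latest   # placeholder image
```

Because it is a DaemonSet, Kubernetes schedules one Machine Agent pod per worker node, matching the deployment model described above.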

Registering Container ID as the Host ID

You install an App Server Agent in each container in a Kubernetes pod to collect application metrics. You install the Standalone Machine Agent on each node as a DaemonSet to collect container metrics. If multiple App Server Agents are running in the same pod (which implies multiple containers in a pod), you must register the container ID as the unique host ID on both the App Server Agent and the Machine Agent to collect container-specific metrics from the pod. Containers in a Kubernetes pod share the same host ID, so the Machine Agent cannot distinguish between the containers in a pod unless each container ID is registered as the host ID. If the Machine Agent cannot determine the source of the metrics, container properties and metrics toggle between the containers.

The App Server Agent identifies that it is running in a Kubernetes environment by verifying that the following environment variables are present: KUBERNETES_PORT, KUBERNETES_SERVICE_HOST, and KUBERNETES_SERVICE_PORT.

To register the container ID as the host ID:

  1. Get the container ID from the cgroup:

    awk -F '/' '{print $NF}' /proc/self/cgroup | head -n 1
  2. Register the App Server Agents by setting the unique host ID system property:

    -Dappdynamics.agent.uniqueHostId=$(sed -rn '1s#.*/##; 1s/(.{12}).*/\1/p' /proc/self/cgroup)

    For OpenShift, run the following command:
    -Dappdynamics.agent.uniqueHostId=$(sed -rn '1s#.*/##; 1s/docker-(.{12}).*/\1/p' /proc/self/cgroup) 

  3. Register the Machine Agent:

    -Dappdynamics.docker.container.containerIdAsHostId.enabled=true 
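The sed expression in step 2 strips everything up to the last slash of the first cgroup line and keeps the first 12 characters of the container ID. The following standalone sketch demonstrates that extraction using a fabricated cgroup line (the ID shown is not a real container ID):

```shell
# Fabricated cgroup line of the kind found in /proc/self/cgroup inside a Docker container
line='12:cpu,cpuacct:/docker/0123456789abcdef0123456789abcdef'
# Strip everything up to the last '/', then keep the first 12 characters
id=$(printf '%s\n' "$line" | sed -rn '1s#.*/##; 1s/(.{12}).*/\1/p')
echo "$id"   # prints: 0123456789ab
```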

AppDynamics Instrumentation in Kubernetes

AppDynamics Agents

There are several approaches to instrumenting applications deployed with Kubernetes, and which one you choose will depend on your particular requirements and devops processes.  In order to monitor an application container with AppDynamics, an APM Agent must be included in that container.  This can be done in a number of ways:

  1. Using an appropriate base image which has the APM agent pre-installed
  2. Loading the agent dynamically as part of the container startup
  3. Loading the agent and dynamically attaching to a running process (where the language runtime supports it)

Option 3 is usually applicable only to Java-based applications, since the JVM supports Dynamic Attach, which is a standard feature of the AppDynamics Java APM Agent. See this blog for more details and an example of how to do this.

For the other options, it is common practice to make use of standard Kubernetes features such as Init Containers, ConfigMaps and Secrets as described in this blog.
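For option 2, a common pattern is to have an Init Container copy the agent onto a shared volume and then load it in the application's startup flags. The following is a minimal, hypothetical entrypoint fragment; the agent path and volume mount are placeholders under the assumption that an Init Container has placed the Java agent at /opt/appd:

```shell
# Hypothetical entrypoint fragment: an Init Container is assumed to have
# copied the Java agent jar to a shared volume at /opt/appd (placeholder path).
AGENT_JAR=/opt/appd/javaagent.jar
JAVA_OPTS="${JAVA_OPTS:-} -javaagent:${AGENT_JAR}"
echo "$JAVA_OPTS"
# A real entrypoint would then launch the application, e.g.:
#   exec java $JAVA_OPTS -jar /app/myapp.jar
```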

Resource Limits

The main application being monitored should have resource limits defined.

Add roughly 2% of padding for CPU and up to 100 MB of additional memory headroom.
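Applied to a hypothetical application that needs 1 CPU and 500Mi of memory (assumed figures for illustration), that guidance might look like:

```yaml
# Hypothetical container resources: base need of 1 CPU / 500Mi is assumed,
# with ~2% CPU padding and up to 100Mi of extra memory on the limits.
resources:
  requests:
    cpu: "1"
    memory: "500Mi"
  limits:
    cpu: "1.02"      # base CPU + 2% padding
    memory: "600Mi"  # base memory + 100Mi headroom
```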

Machine Agent

The AppDynamics Machine Agent can be deployed as a single container image, without the need for an init container. By default, the Machine Agent is deployed to the cluster as a DaemonSet, which evenly distributes agent instances across all cluster nodes. Where required, the DaemonSet can be configured with node affinity or anti-affinity rules to ensure that it is deployed to a desired set of nodes rather than across the entire cluster.

More on node affinity here.
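A nodeAffinity rule of that kind might look like the following sketch; the label key and value are placeholders you would replace with labels applied to your own nodes:

```yaml
# Hypothetical nodeAffinity rule in the DaemonSet's pod template,
# restricting scheduling to nodes labeled appd/monitored=true (placeholder label).
spec:
  template:
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: appd/monitored
                operator: In
                values: ["true"]
```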

In order to harvest pod metadata, the service account used to deploy the Machine Agent must have the cluster-reader role in OpenShift. The cluster-reader role is also required for the Kubernetes extensions to the Machine Agent.

Cluster Reader Role
# assigning cluster-reader role in OpenShift
oc adm policy add-cluster-role-to-user cluster-reader -z appd-account

Resource Limits

The Standalone Machine Agent can be configured with the following resource requests and limits.

For a node running up to 500 containers: requests of 400M memory and 0.1 CPU, and limits of 600M memory and 0.2 CPU.
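Expressed as a Kubernetes resources block for the Machine Agent container, those figures are:

```yaml
# Machine Agent container resources for ~500 containers per node,
# using the request/limit figures stated above.
resources:
  requests:
    memory: "400M"
    cpu: "0.1"
  limits:
    memory: "600M"
    cpu: "0.2"
```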

Running Machine Agent with Kubernetes Extensions

When deploying the Kubernetes extensions described earlier, keep in mind that only a single instance of the extension should be deployed to the cluster. To avoid duplicate extensions and potential cluster overload, do not include the Kubernetes extension in the DaemonSet.

 

Instead, consider deploying the Machine Agent instance that carries the extension as a separate Deployment with one replica, in addition to the DaemonSet used for Server Visibility. Server Visibility (SIM) and Docker monitoring can be disabled on this instance, and its memory request can be dropped to 250M.

Security and RBAC in Kubernetes

OpenShift offers many conveniences on top of the upstream Kubernetes. If you are working with a vanilla Kubernetes distribution, it may not have a pre-built cluster role similar to "cluster-reader" in OpenShift. You will have to construct one yourself.

Below is a sample role definition that provides broad read access to various Kubernetes resources. These permissions are more than sufficient to enable the Kubernetes extensions to the Machine Agent as well as pod metadata collection. The role is called 'appd-cluster-reader', but you can name it as necessary. The cluster role definition lists the API groups that will be available to members of the role. For each API group, we define the resources to be accessed and the access methods. Because we only need to retrieve information from these API endpoints, read-only access, expressed by the "get", "list", and "watch" verbs, is sufficient.

A Sample ClusterRole
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: appd-cluster-reader
rules:
- nonResourceURLs:
      - '*'
  verbs:
      - get
- apiGroups: ["batch"]
  resources:
    - "jobs"
  verbs: ["get", "list", "watch"]
- apiGroups: ["extensions"]
  resources:
    - daemonsets
    - daemonsets/status
    - deployments
    - deployments/scale
    - deployments/status
    - horizontalpodautoscalers
    - horizontalpodautoscalers/status
    - ingresses
    - ingresses/status
    - jobs
    - jobs/status
    - networkpolicies
    - podsecuritypolicies
    - replicasets
    - replicasets/scale
    - replicasets/status
    - replicationcontrollers
    - replicationcontrollers/scale
    - storageclasses
    - thirdpartyresources
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources:
    - bindings
    - componentstatuses
    - configmaps
    - endpoints
    - events
    - limitranges
    - namespaces
    - namespaces/status
    - nodes
    - nodes/status
    - persistentvolumeclaims
    - persistentvolumeclaims/status
    - persistentvolumes
    - persistentvolumes/status
    - pods
    - pods/binding
    - pods/eviction
    - pods/log
    - pods/status
    - podtemplates
    - replicationcontrollers
    - replicationcontrollers/scale
    - replicationcontrollers/status
    - resourcequotas
    - resourcequotas/status
    - securitycontextconstraints
    - serviceaccounts
    - services
    - services/status
  verbs: ["get", "list", "watch"]
- apiGroups:
  - apps
  resources:
    - controllerrevisions
    - daemonsets
    - daemonsets/status
    - deployments
    - deployments/scale
    - deployments/status
    - replicasets
    - replicasets/scale
    - replicasets/status
    - statefulsets
    - statefulsets/scale
    - statefulsets/status
  verbs:
    - get
    - list
    - watch
- apiGroups:
  - apiextensions.k8s.io
  resources:
    - customresourcedefinitions
    - customresourcedefinitions/status
  verbs:
    - get
    - list
    - watch
- apiGroups:
  - apiregistration.k8s.io
  resources:
    - apiservices
    - apiservices/status
  verbs:
    - get
    - list
    - watch
- apiGroups:
  - events.k8s.io
  resources:
    - events
  verbs:
    - get
    - list
    - watch

Once the role is defined, you will need to create a cluster role binding to associate the role with a service account. Below is an example of a ClusterRoleBinding spec that makes the appd-cluster-reader service account in the "myproject" namespace a member of the appd-cluster-reader cluster role. Note that the matching names are coincidental; the names of the service account and the cluster role do not have to match.

 

A Sample ClusterRoleBinding
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cluster-reader-role-binding
subjects:
- kind: ServiceAccount
  name: appd-cluster-reader
  namespace: myproject
roleRef:
  kind: ClusterRole
  name: appd-cluster-reader
  apiGroup: rbac.authorization.k8s.io