This page describes deployment options for Transaction Analytics and Log Analytics in Kubernetes applications instrumented with AppDynamics app server agents.

Transaction Analytics (except with the Java and .NET Agent versions noted below) and Log Analytics require that an Analytics Agent be deployed alongside the app server agent.

For Transaction Analytics, Java Agent >= 4.5.15 and .NET Agent >= 20.10 support "agentless" analytics, which does not require a deployed Analytics Agent. See Deploy Analytics Without the Analytics Agent.

Transaction Analytics

The Analytics Agent acts as a proxy between the app server agent and the Events Service. See Deploy Analytics With the Analytics Agent.

There are two deployment options for the Analytics Agent to support Transaction Analytics on a Kubernetes application.

  1. A sidecar to the application container.
     [Diagram: Transaction Analytics with a sidecar Analytics Agent]
     In this model, an Analytics Agent container is added to each application pod and starts and stops with the application container.
  2. A shared agent, where a single Analytics Agent is deployed on each Kubernetes worker node. Each pod on the node uses that Analytics Agent to communicate with the Events Service.

[Diagram: Transaction Analytics with a shared Analytics Agent]

In this model, the Analytics Agent is deployed as a DaemonSet.

Log Analytics

Once deployed, the Analytics Agent has access to the application's logs and can send log data to the Events Service.

There are three deployment options for the Analytics Agent to support Log Analytics on a Kubernetes application.

  1. A sidecar to the application container.

     [Diagram: Log Analytics with a sidecar Analytics Agent]
     In this model, an Analytics Agent container is added to each application pod and starts and stops with the application container. The Analytics Agent and application container are configured to share a volume where the application logs are written.
  2. If the application bypasses the container filesystem and emits log data to STDOUT and STDERR, the Analytics Agent can be deployed on each Kubernetes worker node. The Analytics Agent can then access the log output for every application container on the node, which Kubernetes stores on the worker node's filesystem as a unique file per container under /var/log/containers.

     [Diagram: Log Analytics with a shared Analytics Agent reading STDOUT/STDERR logs]

     In this model, the Analytics Agent is deployed as a DaemonSet.

     For some Kubernetes distributions, such as OpenShift, the Analytics Agent requires elevated permissions to access the files under /var/log/containers.

  3. If a syslog provider is available in the Kubernetes cluster, the Analytics Agent can be deployed to receive syslog messages with TCP transport. A single Analytics Agent instance is required per syslog provider. See Collect Log Analytics Data from Syslog Messages.

For Transaction and Log Analytics, the sidecar approach is simpler to deploy, but consumes more cluster resources because it requires one additional container per application pod. The shared agent approach adds another deployment object to manage, but can significantly reduce the overall resource consumption for a cluster.

Example Configurations to Deploy the Analytics Agent

The following deployment specs are concrete examples of how to implement the deployment options described above. In addition, see Install the .NET Agent for Linux in Containers and Install the Node.js Agent in Containers for best practices on how to set the Analytics Agent host, port, and SSL environment variables.
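
For example, an app server agent container pointing at a shared Analytics Agent typically sets variables like the following. This is a sketch: the host and port variables appear in the specs below, but the SSL variable name is an assumption, so confirm the exact names on the agent-specific pages referenced above.

env:
- name: APPDYNAMICS_ANALYTICS_HOST        # host where the Analytics Agent listens
  value: analytics-proxy
- name: APPDYNAMICS_ANALYTICS_PORT        # default Analytics Agent port
  value: "9090"
- name: APPDYNAMICS_ANALYTICS_SSL_ENABLED # assumed variable name; verify per agent
  value: "false"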

Transaction Analytics: Deployment Spec Using A Sidecar

The following deployment spec defines two containers: the application container, flight-services, which uses an image instrumented with an app server agent, and the Analytics Agent container, appd-analytics-agent, which uses the Analytics Agent image from Docker Hub, docker.io/appdynamics/analytics-agent:latest.

The appd-analytics-agent container uses a ConfigMap and a Secret to configure the Events Service credentials required by the Analytics Agent, including the account access key and global account name. See Install Agent-Side Components.
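
For reference, the Secret can be created with kubectl, and a minimal ConfigMap needs only the keys used in these examples. This is a sketch: the access key and endpoint values are placeholders, and the referenced Controller Info YAML File contains the full set of keys.

$ kubectl -n appdynamics create secret generic appd-secret \
    --from-literal=appd-key=<account-access-key>

apiVersion: v1
kind: ConfigMap
metadata:
  name: controller-info
data:
  EVENT_ENDPOINT: "<events-service-url>"   # placeholder; your Events Service URL
  FULL_ACCOUNT_NAME: "<global-account-name>"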

As a sidecar, the Analytics Agent is available at localhost on the default port 9090. The app server agent connects automatically; no additional configuration is required.
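
Once the pod is running, you can verify the sidecar from inside the pod. This assumes curl is available in the application image; the healthcheck endpoint is a common way to check the Analytics Agent, but confirm it for your agent version.

$ kubectl exec deploy/flight-services -c flight-services -- \
    curl -s "http://localhost:9090/healthcheck?pretty=true"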

apiVersion: apps/v1
kind: Deployment
metadata:
  name: flight-services
spec:
  selector:
    matchLabels:
      name: flight-services
  replicas: 1
  template:
    metadata:
      labels:
        name: flight-services
    spec:
      containers:
      - name: flight-services
        image: sashaz/ad-air-nodejs-services-analytics:latest
        imagePullPolicy: IfNotPresent
        envFrom:
          - configMapRef:
              name: controller-info
        env:
        - name: APPDYNAMICS_AGENT_ACCOUNT_ACCESS_KEY
          valueFrom:
            secretKeyRef:
              key: appd-key
              name: appd-secret
        - name: APPDYNAMICS_AGENT_TIER_NAME
          value: flight-services
        ports:
        - containerPort: 8080
          protocol: TCP
      - name: appd-analytics-agent
        envFrom:
        - configMapRef:
            name: controller-info
        env:
        - name: APPDYNAMICS_AGENT_ACCOUNT_ACCESS_KEY
          valueFrom:
            secretKeyRef:
              key: appd-key
              name: appd-secret
        - name: APPDYNAMICS_EVENTS_API_URL
          valueFrom:
            configMapKeyRef:
              key: EVENT_ENDPOINT
              name: controller-info
        - name: APPDYNAMICS_GLOBAL_ACCOUNT_NAME
          valueFrom:
            configMapKeyRef:
              key: FULL_ACCOUNT_NAME
              name: controller-info
        image: docker.io/appdynamics/analytics-agent:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9090
          protocol: TCP
        resources:
          limits:
            cpu: 200m
            memory: 900M
          requests:
            cpu: 100m
            memory: 600M
      restartPolicy: Always
...

The full spec can be found in the Flight Services YAML File. The controller-info ConfigMap can be found in the Controller Info YAML File. The command to create appd-secret can be found in Secret.

Transaction Analytics: Deployment Specs Using A Shared Analytics Agent

The following deployment spec is for the same flight-services application, but instead of using a sidecar, it references a shared Analytics Agent deployed separately as a DaemonSet. The flight-services container sets the agent environment variables APPDYNAMICS_ANALYTICS_HOST and APPDYNAMICS_ANALYTICS_PORT to point at the analytics-proxy service for the shared Analytics Agent, defined in the example below.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: flight-services
spec:
  selector:
    matchLabels:
      name: flight-services
  replicas: 1
  template:
    metadata:
      labels:
        name: flight-services
    spec:
      containers:
      - name: flight-services
        image: sashaz/ad-air-nodejs-services-analytics:latest
        imagePullPolicy: IfNotPresent
        envFrom:
          - configMapRef:
              name: controller-info
        env:
        - name: APPDYNAMICS_AGENT_ACCOUNT_ACCESS_KEY
          valueFrom:
            secretKeyRef:
              key: appd-key
              name: appd-secret
        - name: APPDYNAMICS_AGENT_TIER_NAME
          value: flight-services
        - name: APPDYNAMICS_ANALYTICS_HOST
          value: analytics-proxy
        - name: APPDYNAMICS_ANALYTICS_PORT
          value: "9090"
        ports:
        - containerPort: 8080
          protocol: TCP
      restartPolicy: Always
...

The full spec can be found in the Flight Services YAML File. Use this spec in conjunction with the following deployment spec.

In the analytics-agent.yaml file below, the shared Analytics Agent is deployed as a DaemonSet. The file also defines a service, appd-infra-agent-service, that publishes an endpoint in the namespace through which the shared Analytics Agent can be reached.

apiVersion: apps/v1
kind: DaemonSet
metadata: 
  name: appd-infra-agent
spec: 
  selector:
    matchLabels:
      name: appd-infra-agent
  template: 
    metadata: 
      labels: 
        name: appd-infra-agent
    spec:
      serviceAccountName: appdynamics-infraviz
      containers:
      - name: appd-analytics-agent
        envFrom:
        - configMapRef:
            name: controller-info
        env:
        - name: APPDYNAMICS_AGENT_ACCOUNT_ACCESS_KEY
          valueFrom:
            secretKeyRef:
              key: appd-key
              name: appd-secret
        - name: APPDYNAMICS_EVENTS_API_URL
          valueFrom:
            configMapKeyRef:
              key: EVENT_ENDPOINT
              name: controller-info
        - name: APPDYNAMICS_GLOBAL_ACCOUNT_NAME
          valueFrom:
            configMapKeyRef:
              key: FULL_ACCOUNT_NAME
              name: controller-info
        image: docker.io/appdynamics/analytics-agent:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9090
          protocol: TCP
        resources:
          limits:
            cpu: 200m
            memory: 900M
          requests:
            cpu: 100m
            memory: 600M
        volumeMounts:
        - name: ma-log-volume
          mountPath: /opt/appdynamics/conf/logging/log4j.xml
          subPath: log4j.xml
        - mountPath: /hostroot
          name: hostroot
          readOnly: true
      restartPolicy: Always
      volumes:
      - name: ma-log-volume
        configMap:
          name: ma-log-config
      - name: hostroot
        hostPath:
          path: /
          type: Directory
---
apiVersion: v1
kind: Service
metadata:
  name: appd-infra-agent-service
spec:
  selector:
    name: appd-infra-agent
  ports:
  - name: "9090"
    port: 9090
    targetPort: 9090

The full deployment spec can be found in the Machine Agent YAML File. The appdynamics-infraviz service account is defined in the RBAC YAML File. The ma-log-config ConfigMap is defined in the Machine Agent Log Config File.
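
If you prefer to create the service account directly rather than applying the full RBAC file, a minimal manifest looks like this. This is a sketch; the permissions granted in the RBAC YAML File are still required on most distributions.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: appdynamics-infraviz
  namespace: appdynamics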

A best practice is to deploy the shared Analytics Agent in a dedicated namespace (typically appdynamics) separate from the namespaces used by applications.
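
Create the namespace first if it does not already exist:

$ kubectl create namespace appdynamics

Then deploy the shared Analytics Agent: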

$ kubectl -n appdynamics apply -f analytics-agent.yaml

To provide access to the shared Analytics Agent from an application namespace:

  1. Create an ExternalName service that maps a service name (analytics-proxy in the example) to the DNS name of the appd-infra-agent-service created previously:

    kind: Service
    apiVersion: v1
    metadata:
      name: analytics-proxy
    spec:
      type: ExternalName
      externalName: appd-infra-agent-service.appdynamics.svc.cluster.local
      ports:
      - port: 9090
        targetPort: 9090
  2. Create this service in each application namespace where an App Server Agent is deployed:

    $ kubectl -n <app namespace> apply -f analytics-proxy.yaml
  3. Note that analytics-proxy is the value of APPDYNAMICS_ANALYTICS_HOST used in the flight-services deployment spec.

    - name: APPDYNAMICS_ANALYTICS_HOST
      value: analytics-proxy
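
To confirm the mapping from an application namespace, you can check that the service exists. This is a simple sanity check; <app namespace> is a placeholder as above.

$ kubectl -n <app namespace> get service analytics-proxy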

Log Analytics: Deployment Spec Using A Sidecar

The following deployment spec snippet is for a Java application. It defines an application container, client-api, and an Analytics Agent container, appd-analytics-agent, which acts as a sidecar to the application container. An init container, appd-agent-attach, is also defined, but its related definitions are omitted to simplify the example.

A shared volume, appd-volume, is mounted to the application container and the Analytics Agent container at the mount path /opt/appdlogs. The Java application is configured to write its logs to this path, and the Analytics Agent is configured to read the logs from this path and send them to the Events Service.
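
One way to configure the Analytics Agent to read logs from this path is a log-source job file in the agent's conf/job directory (source rules configured in the Controller are an alternative; see Configure Log Analytics Using Source Rules). The following is a minimal sketch based on the sample job files shipped with the agent; the file name is hypothetical and field names may differ by version, so verify for your release.

# Sketch of a job file, for example conf/job/client-api-log.job.
version: 2
enabled: true
source:
  type: file
  path: /opt/appdlogs      # the shared volume mount path
  nameGlob: '*.log'        # match the application's log file names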

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    name: client-api
  name: client-api
spec:
  selector:
    matchLabels:
      name: client-api
  template:
    metadata:
      labels:
        name: client-api
    spec:
      containers:
      - name: client-api
        envFrom:
        - configMapRef:
            name: agent-config
        env:
        - name: APPDYNAMICS_AGENT_ACCOUNT_ACCESS_KEY
          valueFrom:
            secretKeyRef:
              key: appd-key
              name: appd-secret
        - name: JAVA_OPTS
          ...
        image: sashaz/java-services:v5
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
          protocol: TCP
        resources: {}
        volumeMounts:
        - mountPath: /opt/appdlogs
          name: appd-volume
          ...
      - name: appd-analytics-agent
        env:
        - name: APPDYNAMICS_AGENT_ACCOUNT_ACCESS_KEY
          valueFrom:
            secretKeyRef:
              key: appd-key
              name: appd-secret
        envFrom:
        - configMapRef:
            name: agent-config
        image: docker.io/appdynamics/analytics-agent:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9090
          protocol: TCP
        resources:
          limits:
            cpu: 200m
            memory: 900M
          requests:
            cpu: 100m
            memory: 600M
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /opt/appdlogs
          name: appd-volume
      dnsPolicy: ClusterFirst
      initContainers:
      - name: appd-agent-attach
      ...
      restartPolicy: Always
      schedulerName: default-scheduler
      serviceAccountName: appd-account
      volumes:
      - emptyDir: {}
        name: appd-volume
...

The full spec can be found in the Java App YAML File.

Log Analytics: Deployment Spec For Shared Analytics Agent (STDOUT/STDERR Support)

The following deployment spec supports the use case where application containers emit logs to STDOUT and STDERR rather than to the container filesystem.

Because Kubernetes writes the container logs to the host filesystem, the Analytics Agent can read them there. The Analytics Agent is deployed as a DaemonSet. A volume, varlog, is defined with access to the host path /var/log and mounted to the Analytics Agent container, appd-analytics-agent, giving the agent access to the per-container log files under /var/log/containers (a second volume, dockerlog, exposes the underlying Docker log files these symlink to). The Analytics Agent is configured to read the container-specific logs written to /var/log/containers. See Configure Log Analytics Using Source Rules.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: appdynamics-loganalytics
  namespace: appdynamics
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    name: appd-analytics
  name: appd-analytics
spec:
  selector:
    matchLabels:
      name: appd-analytics
  template:
    metadata:
      labels:
        name: appd-analytics
    spec:
      nodeSelector:
        kubernetes.io/os: linux
      containers:
      - name: appd-analytics-agent
        env:
        - name: APPDYNAMICS_AGENT_ACCOUNT_ACCESS_KEY
          valueFrom:
            secretKeyRef:
              key: controller-key
              name: appd-secret
        envFrom:
        - configMapRef:
            name: agent-config
        image: docker.io/appdynamics/analytics-agent:log-20.6.0
        imagePullPolicy: Always
        ports:
        - containerPort: 9090
          protocol: TCP
        - containerPort: 5144
          hostPort: 5144
          protocol: TCP
        resources:
          limits:
            cpu: 300m
            memory: 900M
          requests:
            cpu: 200m
            memory: 800M
        volumeMounts:
        - name: varlog
          mountPath: /var/log
          readOnly: true
        - name: dockerlog
          mountPath: /var/lib/docker/containers
          readOnly: true
      restartPolicy: Always
      serviceAccountName: appdynamics-loganalytics
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: dockerlog
        hostPath:
          path: /var/lib/docker/containers

The full spec can be found in the Log Analytics YAML File. An OpenShift example can be found in the Log Analytics OpenShift YAML File.
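
On OpenShift, the elevated permissions mentioned earlier are typically granted by allowing the agent's service account to use a privileged security context constraint, for example (a sketch; adjust the service account name and namespace to match your deployment):

$ oc adm policy add-scc-to-user privileged -z appdynamics-loganalytics -n appdynamics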