This page applies only to Log Collector deployments that use the advanced layout of collectors-values.yaml

To configure the Log Collector, add log-specific settings to the same values YAML file that you used to deploy Kubernetes and App Service Monitoring. This file is typically named collectors-values.yaml. If you can't find it, see Recover a Lost Values YAML File.

The following steps omit most optional settings. For a complete description of all settings and their valid values, review Log Collector Settings - Advanced YAML Layout before completing the following steps.

Set Required Parameters in global

  1. On Cisco Cloud Observability, get the name of your cluster:

    1. On the Observe page, select your cluster.
    2. In the Properties panel, copy the name of your cluster and save it to a text file.
  2. In global, set clusterName to the name of your cluster. This name must match the cluster's name as seen in Cisco Cloud Observability:

    global:
      clusterName: <cluster-name>
    YML

Set Required Parameters in appdynamics-cloud-k8s-monitoring.install

In appdynamics-cloud-k8s-monitoring.install, set logCollector to true:

install:
  ...   
  logCollector: true
YML

Create appdynamics-cloud-k8s-monitoring.logCollectorConfig

If your collectors-values.yaml does not have the appdynamics-cloud-k8s-monitoring.logCollectorConfig key, copy and paste it from the sample in Log Collector Settings - Advanced YAML Layout.
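At a high level, the key sits in collectors-values.yaml as sketched below. This is an abbreviated outline based only on the paths referenced in the steps on this page, not the full sample; see Log Collector Settings - Advanced YAML Layout for the complete block:

appdynamics-cloud-k8s-monitoring:
  install:
    logCollector: true
  logCollectorConfig:
    filebeatYaml: |-
      filebeat.autodiscover:
        ...
      processors:
        ...
      output.otlploggrpc:
        ...
      path.data: /opt/appdynamics/logcollector-agent/data
YML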

Set Required Parameters in appdynamics-cloud-k8s-monitoring.logCollectorConfig.filebeatYaml.filebeat.autodiscover

  1. If you want a default condition for harvesting logs from any container on your cluster:
    1. Set hints.default_config.enabled to true.
    2. Set the parameters in hints.default_config to your default log-harvesting conditions. You only need to specify values that override the defaults; to use a parameter's default value, omit it.
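      For example, a default configuration that harvests logs from all containers might look like the following sketch. The hints.enabled switch shown here is the standard Filebeat setting for hints-based autodiscover; the full set of hints.default_config parameters is listed in Log Collector Settings - Advanced YAML Layout:

      filebeat.autodiscover:
        providers:
          - type: kubernetes
            ...
            hints.enabled: true
            hints.default_config:
              enabled: true
              type: filestream
              paths:
                - /var/log/containers/${data.kubernetes.pod.name}*${data.kubernetes.container.id}.log
      YML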
  2. In appdynamics-cloud-k8s-monitoring.logCollectorConfig.filebeatYaml.filebeat.autodiscover.providers.type.templates, set the parameters for one container that generates logs:
    1. In condition, specify a condition that this container must match in order for the Log Collector to collect its logs. For a list of supported conditions, see Filebeat: Conditions. For a list of fields you can specify in these conditions, see Filebeat: Autodiscover: Generic fields.
      For example, to use the name of the container as the condition:

      filebeat.autodiscover:
        providers:
          - type: kubernetes
            ...
            templates:
              - condition:
                  equals:
                    kubernetes.container.name: log-generator-logback
      YML
    2. In config, set paths to a glob pattern that matches this container's log files.
      For example: 

      filebeat.autodiscover:
        providers:
          - type: kubernetes
            ...
            templates:
              - condition:
                  ...
                config:
                  - type: filestream
                    ...
                    paths:
                      - /var/log/containers/${data.kubernetes.pod.name}*${data.kubernetes.container.id}.log
      YML
    3. In config.type, set multiline.pattern, multiline.negate, and multiline.match to the correct values to properly parse multiline log messages from this container. See Manage multiline messages.
      For example, for multiline log messages which start with a date or timestamp: 

      filebeat.autodiscover:
        providers:
          - type: kubernetes
            ...
            templates:
              - condition:
                  ...
                config:
                  - type: filestream
                    ...
                    multiline.pattern: '^\d{4}-\d{2}-\d{2}'
                    multiline.negate: true
                    multiline.match: after
      YML

      For JSON logs, your multiline settings might look like this:

      filebeat.autodiscover:
        providers:
          - type: kubernetes
            ...
            templates:
              - condition:
                  ...
                config:
                  - type: filestream
                    ...
                    multiline.pattern: '^{'
                    multiline.negate: true
                    multiline.match: after
      YML
    4. In fields, set <custom-key> and <custom-value> to any extra key-value pair you want to inject into every log message from this container.
      For example: 

      config:
        - type: filestream
          ...
          fields:
            <custom-key>: <custom-value>
      YML
    5. In config.type.processors, add an - add_fields block, and in that block, set target to _message_parser, set fields.type to the parser type, and set fields.pattern to a regular expression that matches messages of this parser type.
      For example: 

      config:
        - type: filestream
          ...
          processors:
          ...
            - add_fields:
                target: _message_parser
                fields:
                  type: log4j
                  pattern: "%d{yyyy-MM-dd'T'HH:mm:ss} %p %C{1.} [%t] %m%n"
      YML
    6. In config.type.processors, add an - add_fields block, and in that block, set target to appd and set fields.log.format to "<log-namespace>:<log-description>", for example, "logs:email_logs". For more examples, see Log Collector Settings - Advanced YAML Layout. If you don't specify appd.log.format, you can't mask sensitive data contained in the log messages that are ingested through this configuration. See Mask Sensitive Data.

      For example: 

      config:
        - type: filestream
          ...
          processors:
          ...
            - add_fields:
                target: appd
                fields:
                  log.format: logs:email_logs
      YML



    7. In config.type.processors.copy_fields, add these lines as-is, and in this order, if they are missing. This copy_fields block allows the Log Collector to associate log messages with a workload entity:
      For example: 

      config:
        - type: filestream
          ...
          processors:
            ...
            - copy_fields:
                ...
            - copy_fields:
               fields:
                 - from: "kubernetes.deployment.name"
                   to: "kubernetes.workload.name"
                 - from: "kubernetes.daemonset.name"
                   to: "kubernetes.workload.name"
                 - from: "kubernetes.statefulset.name"
                   to: "kubernetes.workload.name"
                 - from: "kubernetes.replicaset.name"
                   to: "kubernetes.workload.name"
                 - from: "kubernetes.cronjob.name"
                   to: "kubernetes.workload.name"
                 - from: "kubernetes.job.name"
                   to: "kubernetes.workload.name"
               fail_on_error: false
               ignore_missing: true 
      YML
  3. If you want the Log Collector to harvest logs from other containers on this pod or another pod, clone the condition block and repeat the previous step for each additional container. It does not matter what pod the log-generating container is running on, as long as the Log Collector can access the location of that container's logs. 
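    For example, with two log-generating containers, the templates list contains one condition block per container. The container names below are placeholders:

    filebeat.autodiscover:
      providers:
        - type: kubernetes
          ...
          templates:
            - condition:
                equals:
                  kubernetes.container.name: log-generator-logback
              config:
                ...
            - condition:
                equals:
                  kubernetes.container.name: log-generator-json
              config:
                ...
    YML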

Set Required Parameters in appdynamics-cloud-k8s-monitoring.logCollectorConfig.filebeatYaml.processors

  1. Add the add_cloud_metadata and add_kubernetes_metadata processors if they are not there already. These processors are required. They allow the Log Collector to send the Kubernetes attributes k8s.cluster.name, k8s.namespace.name, k8s.workload.name, k8s.pod.name, k8s.deployment.name (for deployments), and k8s.statefulset.name (for statefulsets) to Cisco Cloud Observability in order to associate log messages with the right entity. 

    These are reserved names. Do not change the names of these processors.

    Add this exact codeblock, as-is:

    processors:
      - add_cloud_metadata: ~
      - add_kubernetes_metadata:
          in_cluster: true
          host: ${NODE_NAME}
          matchers:
            - logs_path:
                logs_path: "/var/log/containers/"
    YML
  2. In rename, if the following lines are missing, add them. These lines allow the Log Collector to send these attributes to Cisco Cloud Observability in order to associate log messages with the right entity.

    These are reserved names. Do not change the names of these attributes.

    processors:
      ...
      - rename:
          fields:
            - from: "kubernetes.namespace"
              to: "kubernetes.namespace.name"
            - from: "kubernetes"
              to: "k8s"
            - from: "k8s.annotations.appdynamics.lca/filebeat.parser"
              to: "_message_parser"
            - from: "cloud.instance.id"
              to: "host.id"
          ignore_missing: true
          fail_on_error: false
    YML
  3. If the following lines are missing, add them, but replace <cluster-name> with the name of your cluster and <cluster-id> with your cluster's ID. Your cluster name must match the cluster's name as displayed in Cisco Cloud Observability. You can get your cluster ID by running this command: kubectl get ns kube-system -o json

    processors:
      ...
      - add_fields:
          target: k8s
          fields:
            cluster.name: <cluster-name>
      - add_fields:
          target: k8s
          fields:
            cluster.id: <cluster-id>
    YML
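    The cluster ID is the uid in the metadata section of the kube-system namespace. To print just the ID, you can query it directly with a jsonpath expression (a sketch; requires kubectl access to your cluster):

    kubectl get ns kube-system -o jsonpath='{.metadata.uid}'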
  4. If the following lines are missing, add them as-is: 

    processors:
      ...
      - add_fields:
          target: source
          fields:
            name: log-agent
      - add_fields:
          target: telemetry
          fields:
            sdk.name: log-agent
    YML
  5. If the following lines are missing, add them, but replace <cluster-ID> with the ID of your cluster and retain the colon suffix:

    processors:
      ...
      - script:
          lang: javascript
          source: >
            function process(event) {
              var podUID = event.Get("k8s.pod.uid");
              if (podUID) {
                event.Put("internal.container.encapsulating_object_id", "<cluster-ID>:" + podUID);
              }
              return event;
            }
    YML
  6. In drop_fields, add any other fields you want to drop. Do not delete any of the existing dropped fields.

    processors:
      ...
      - drop_fields:
          fields: ["agent", "stream", "ecs", "input", "orchestrator", "k8s.annotations.appdynamics", "k8s.labels", "k8s.node.labels", "cloud"]
    YML

     

Set Optional Parameters in appdynamics-cloud-k8s-monitoring.logCollectorConfig.filebeatYaml.output.otlploggrpc

  1. If you need SSL, specify your SSL settings. 
    For example: 

    output.otlploggrpc:
      ...
      ssl.enabled: true
      ssl.certificate_authorities: ["/opt/appdynamics/certs/ca/ca.pem"]
      ssl.certificate: "/opt/appdynamics/certs/client/client.pem"
      ssl.key: "/opt/appdynamics/certs/client/client-key.pem"
    YML
  2. If you need to improve the performance of the Log Collector, increase batch_size to 1000:

    output.otlploggrpc:
      ...
      batch_size: 1000
    YML

Set Required Parameters in appdynamics-cloud-k8s-monitoring.logCollectorConfig.filebeatYaml.path.data

Set path.data to /opt/appdynamics/logcollector-agent/data. This prevents the Log Collector from harvesting logs that have already been harvested when it restarts. 

path.data: /opt/appdynamics/logcollector-agent/data
YML

If You Are Deploying the Log Collector on Windows Containers

  1. Set appdynamics-cloud-k8s-monitoring.logCollectorConfig.os to the list of operating systems your containers are running. The Log Collector supports linux, windows, or both: 

      logCollectorConfig:
        os: [windows,linux]
    YML
  2. (Optional) If you need different filebeatYaml blocks for different operating systems, do the following in appdynamics-cloud-k8s-monitoring.logCollectorConfig:

    1. Add env.linux and env.windows sections.
      For example: 

        logCollectorConfig:
          os: [linux,windows]
          env:
            linux:
            windows:
      YML
    2. Move the filebeatYaml block to the env.linux and env.windows sections, and modify it as necessary. See the examples in Log Collector Settings - Advanced YAML Layout.
      For example:

        logCollectorConfig:
          os: [linux,windows]
          env:
            linux:
              filebeatYaml: |-
                ...
            windows:
              filebeatYaml: |-
                ...
      YML
  3. (Optional) In appdynamics-cloud-k8s-monitoring.logCollectorPod, specify different Log Collector images for different operating systems: 

      logCollectorPod:
        imagePullPolicy: IfNotPresent
        env:
          linux:
            image: <image-url>
          windows:
            image: <image-url>
    YML

Apply Changes to Your Cluster

  1. Validate collectors-values.yaml with a YAML validator like YAML Lint.
  2. To apply the changes to your cluster, continue with the "Deploy" step in Deploy the Log Collector.

OpenTelemetry™ and Kubernetes® (as applicable) are trademarks of The Linux Foundation®.