The Cisco Cloud Observability Kubernetes Collectors chart (appdynamics-collectors) can be used to deploy the following collectors:

  • Cluster Collector
  • Infrastructure Collector
  • Log Collector
  • Cisco AppDynamics Distribution of OpenTelemetry Collector

This chart includes the following sub-charts:

  • appdynamics-otel-collector includes settings for the Cisco AppDynamics Distribution of OpenTelemetry Collector
  • appdynamics-cloud-k8s-monitoring includes settings for these collectors:
    • Cluster Collector
    • Infrastructure Collector
    • Log Collector

This is the reference page for Log Collector settings. 

Enclose all values in double quotes unless a particular value supports regular expressions (regex) or its description says otherwise. Enclose all values that support regex in single quotes.
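
For example, a plain string value such as clusterName takes double quotes, while a regex value such as multiLinePattern takes single quotes (an illustrative fragment, not a complete configuration):

clusterName: "my-cluster"
multiLinePattern: '^\d{4}-\d{2}-\d{2}'
YML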

Global Settings

You can specify the following global parameters within the collectors-values.yaml file:

Name | Type | Description | Required

global

object

Contains the global settings for the Cisco Cloud Observability Kubernetes Collectors. 

No

oauth

object

Defines the OAuth authentication credentials.

Yes

global

↩ Parent

The following table describes settings in the global key that are related to the Log Collector:

Name | Type | Description | Required

clusterName

string

Name of your cluster. Must match the cluster name as displayed in Cisco Cloud Observability. Based on clusterName, the Log Collector automatically extracts the following Kubernetes® properties: k8s.pod.name, k8s.namespace.name, k8s.container.name, k8s.node.name.

Example:

global:
  clusterName: <cluster-name>
YML

Yes

global.oauth

↩ Parent

Name | Type | Description | Required

clientId

string

Defines the client ID for authenticating with Cisco Cloud Observability.

Yes

clientSecret

string

Defines the secret string in plain text to authenticate with Cisco Cloud Observability.

You can use clientSecretEnvVar instead of clientSecret to define the secret string in an environment variable for authenticating with Cisco Cloud Observability.

Yes

endpoint

string

Defines the endpoint the collector sends data to.

Yes

tokenUrl

string

Defines the URL for obtaining an authentication token from Cisco Cloud Observability.

Yes
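
Example, following the sample configurations later on this page (replace the placeholders with your tenant's values):

global:
  oauth:
    clientId: <client-id>
    clientSecret: <client-secret>
    endpoint: https://<your-tenant-url>/data
    tokenUrl: https://<your-tenant-url>/auth/<tenant-id>/default/oauth2/token
YML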

appdynamics-cloud-k8s-monitoring

↩ Parent

The following table describes settings in appdynamics-cloud-k8s-monitoring that are related to the Log Collector:

Name | Type | Description | Required

install

object

Enables or disables components.

Yes

logCollectorPod

object

Configures a specific Log Collector pod.

No

logCollectorConfig

object

Configures log collection for a specific container. 

Yes

appdynamics-cloud-k8s-monitoring.install

↩ Parent

The following table describes settings in appdynamics-cloud-k8s-monitoring.install that are related to the Log Collector:

Name | Type | Description | Required

clustermon

string

Enables the deployment of clustermon on your cluster. This component must be deployed before the Log Collector. If you have already deployed Kubernetes and App Service Monitoring, clustermon is already deployed.

Valid values:

  • true: Enable clustermon (or keep it enabled)
  • false: Disable clustermon 

Example: 

install:
  ...
  clustermon: true
YML

Yes

logCollector

string

Enables the deployment of the Log Collector on your cluster.

Valid values:

  • true: Enable the Log Collector (or keep it enabled)
  • false: Disable the Log Collector 

Example: 

install:
  ...
  logCollector: true
YML

Yes

appdynamics-cloud-k8s-monitoring.logCollectorPod

↩ Parent

These settings configure a specific Log Collector pod:

Name | Type | Description | Required

env

object

Configures OS-specific settings.

Yes

rollingUpdateMaxUnavailable 

string

The maximum number of Log Collector pods in the DaemonSet that can be unavailable during an update. Specify this value as an absolute number (minimum: 1) or as a percentage of the total number of pods at the start of the update (for example, 10%). A rolling update takes down up to this number of pods, brings up new pods in their place, and then proceeds to other pods once the new pods are available, ensuring that the number of unavailable pods never exceeds this value at any time during the update. See https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#max-unavailable.

Default: 100%
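
Example (a sketch; "10%" is an illustrative value, not a recommendation):

logCollectorPod:
  rollingUpdateMaxUnavailable: "10%"
YML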

No

tolerations

object

Configures tolerations for the Log Collector pod.

Yes if deploying the Log Collector on Windows nodes on Google Kubernetes Engine (GKE)

appdynamics-cloud-k8s-monitoring.logCollectorPod.env

↩ Parent

In the appdynamics-cloud-k8s-monitoring.logCollectorPod.env section you can specify the location of operating system-specific images of the Log Collector.

Name | Type | Description | Required

linux

object

Log Collector pod settings for Linux.

No

windows

object

Log Collector pod settings for Windows.

No

appdynamics-cloud-k8s-monitoring.logCollectorPod.env.linux

↩ Parent

Name | Type | Description | Required

image

string

Location of the Log Collector image for Linux.

Example:

env:
  linux:
    image: <image-url>
YML
No

appdynamics-cloud-k8s-monitoring.logCollectorPod.env.windows

↩ Parent

Name | Type | Description | Required

image

string

Location of the Log Collector image for Windows.

Example:

env:
  windows:
    image: <image-url>
YML
No

appdynamics-cloud-k8s-monitoring.logCollectorPod.tolerations

↩ Parent

Name | Type | Description | Required

key,
operator,
value,
effect

string

Configures tolerations for the Log Collector pod.

GKE does not allow the kubernetes.io/os: windows label to be used as a node selector when deploying Windows host process pods. Therefore, to deploy Log Collector pods on GKE, you must include this toleration.

For Windows nodes on Google Kubernetes Engine (GKE), use this exact block:

tolerations:  
  - key: "node.kubernetes.io/os"
    operator: "Equal"
    value: "windows"
    effect: "NoSchedule" 
YML
Yes if deploying the Log Collector on Windows nodes on Google Kubernetes Engine (GKE)

appdynamics-cloud-k8s-monitoring.logCollectorConfig

↩ Parent

These settings configure log collection for a specific container:

Name | Type | Description | Required

os

array

Array of operating systems to deploy this pod on. Valid values: windows, linux.

Example:

os: [linux,windows]
YML

To deploy different Log Collector images to different operating systems, set appdynamics-cloud-k8s-monitoring.logCollectorPod.env.linux.image and appdynamics-cloud-k8s-monitoring.logCollectorPod.env.windows.image.

No

env

object

Specifies OS-specific overrides.

No

container

object

Configures log collection for a specific container. 

Yes

appdynamics-cloud-k8s-monitoring.logCollectorConfig.env

↩ Parent

Use the appdynamics-cloud-k8s-monitoring.logCollectorConfig.env section to specify operating system-specific overrides.  

Name | Type | Description | Required

linux.container

object

A Linux-specific container section identical in syntax to appdynamics-cloud-k8s-monitoring.logCollectorConfig.container. You only need to include parameters which override the settings in appdynamics-cloud-k8s-monitoring.logCollectorConfig.container.

Example:  

  logCollectorConfig:
    os: [windows, linux]
    env: 
      linux: 
        container:
          defaultConfig:
            ...
          conditionalConfigs:
            ...
          logging:
            ...
          monitoring:
            ...         
      windows:    
        container:
          defaultConfig:
            ...
          conditionalConfigs:
            ...
          logging:
            ...
          monitoring:
            ...           
YML

The following sections of appdynamics-cloud-k8s-monitoring.logCollectorConfig.container require a complete override – you must include all parameters:

  • appdynamics-cloud-k8s-monitoring.logCollectorConfig.container.defaultConfig.messageParser
  • appdynamics-cloud-k8s-monitoring.logCollectorConfig.container.conditionalConfigs

The following sections of filebeatYaml only require a value override – you only need to include the values you want to override:

  • appdynamics-cloud-k8s-monitoring.logCollectorConfig.container.logging
  • appdynamics-cloud-k8s-monitoring.logCollectorConfig.container.monitoring

No

windows.container

object

Same as linux.container.

No

appdynamics-cloud-k8s-monitoring.logCollectorConfig.container

↩ Parent

These settings configure log collection for a specific container:

Name | Type | Description | Required

monitorCollectors 

string

Enables or disables log collection from the Log Collector and other collectors running on your cluster. Valid values: true, false.
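
Example (a minimal sketch):

logCollectorConfig:
  container:
    monitorCollectors: true
YML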

No

excludeCondition

object

Specifies a condition that, when matched, excludes logs from being collected.

No

defaultConfig

object

Default condition for harvesting logs from any container on your cluster.

No

conditionalConfigs

object

The block which contains all the settings for a specific log source, type, and pattern as a pair of condition+config blocks.

Yes

dropFields

array

List of fields you don't want to export to Cisco Cloud Observability. The default list of fields to drop is shown in the example.

Example:

logCollectorConfig:
  container:
    conditionalConfigs:
      - ...
    dropFields: ["agent", "stream", "ecs", "input", "orchestrator", "k8s.annotations.appdynamics", "k8s.labels", "k8s.node.labels", "cloud"]
YML

No

batchSize

string

Number of log records to send together in a single batch, which improves performance. Default and strongly recommended value: 1000.

Best practice is to specify both batchSize and maxBytes. Note that the actual batch size is less than batchSize if the number of bytes in a batch exceeds maxBytes.
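
The following sketch shows both parameters set to their recommended values, as they also appear in the sample configurations later on this page:

logCollectorConfig:
  container:
    batchSize: 1000
    maxBytes: 1000000
YML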

No

maxBytes

string

Maximum number of bytes to accept in an OTLP logs packet.

If the number of bytes in the OTLP logs packet to be published (which contains all the log records present in a batch) exceeds maxBytes, the Log Collector splits the batch. If maxBytes is null, the Log Collector does not split any batches. Default and strongly recommended value: 1000000. This value can also be specified in scientific E notation as 1e+06.

If your otel-collector is getting 413 response code errors, set maxBytes and batchSize to limit the log batch size that is sent to the otel-collector.

No

worker

string

The number of worker threads per output host. Default: 1. Valid values: any positive integer.

If you find that events are backing up, or that the CPU is not saturated, you can increase this value to make better use of available CPU. You can set this value higher than the number of CPU cores because threads often spend idle time in I/O wait conditions. See the Filebeat documentation on Tuning and Profiling Logstash Performance.
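
Example (a sketch; 2 is an illustrative value):

logCollectorConfig:
  container:
    worker: 2
YML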

No

summaryDebugLogsInterval

string

Logging interval. Example: 10s.

No

logging

object

Settings for saving or exporting Filebeat logs.

No

monitoring

object

Settings for exporting Filebeat metrics from the Log Collector in OTLP format.

No

appdynamics-cloud-k8s-monitoring.logCollectorConfig.container.excludeCondition

↩ Parent

Name | Type | Description | Required

excludeCondition 

object

The condition that must be matched to exclude logs from being collected. This excludes logs from both conditional and default configurations. For a list of all Kubernetes fields you can use in exclude conditions, see https://www.elastic.co/guide/en/beats/filebeat/current/exported-fields-kubernetes-processor.html. You must use these exact field names. For the syntax of excludeCondition, see appdynamics-cloud-k8s-monitoring.logCollectorConfig.container.conditionalConfigs.condition.

Example: 

  logCollectorConfig: 
    container:
      excludeCondition: 
        or:           
        - contains:
            kubernetes.pod.name: jsonapp-669c65dbdf-cz4vm
        - contains:
            kubernetes.pod.name: jsonapp-669c65db6f-p5mb2
YML

No

appdynamics-cloud-k8s-monitoring.logCollectorConfig.container.defaultConfig

↩ Parent

Name | Type | Description | Required

defaultConfig 

object

Default condition for harvesting logs from any container on your cluster. This allows for faster setup.  This block has the same structure as appdynamics-cloud-k8s-monitoring.logCollectorConfig.container.conditionalConfigs.condition.

For new deployments, you don't need to change anything in collectors-values.yaml for default log collection; simply deploy the Log Collector and see your logs from this cluster in the system immediately, parsed by timestamp only. You can then incrementally refine your parsing configurations using the parsing pattern tester (see Log Parsing Validator). If you don't want to enable default log collection, set defaultConfig.enabled to false.

Example:

  logCollectorConfig: 
    os: [linux, windows]                                        
    container:
      defaultConfig: 
        enabled: true           
        multiLinePattern: '^{'
        multiLineMatch: "after"
        multiLineNegate: true
        logFormat: "logs:email_logs"
        messageParser:           
          json:
            enabled: true
YML

No

appdynamics-cloud-k8s-monitoring.logCollectorConfig.container.conditionalConfigs

↩ Parent

The block which contains all the settings for a specific log source, type, and pattern as a pair of condition+config blocks. There can be multiple condition+config pairs within conditionalConfigs. Each condition+config pair contains specific settings (described below) for that log source's matching condition, multiline pattern, multiline negation, multiline matching, single line pattern, and so on. 

Example:

  logCollectorConfig:
    os: [linux, windows]                                        
    container:
      defaultConfig:            
        ...
      conditionalConfigs: 
       - condition:
          ...
         config: 
          ... 
       - condition:
          ...
         config: 
          ... 
       - condition:
          ...
         config: 
          ...
YML
Name | Type | Description | Required

- condition 

object

The condition that applications must match in order to have their logs harvested by the Log Collector.

Yes

config

object

The settings to apply to logs from sources that match the condition.

Yes

appdynamics-cloud-k8s-monitoring.logCollectorConfig.container.conditionalConfigs.condition

↩ Parent

Name | Type | Description | Required

<depends on syntax>

string

The condition that applications must match in order to have their logs harvested by the Log Collector. For a list of supported conditions, see Filebeat: Conditions. For a list of fields you can use for conditions, see Filebeat: Autodiscover: Generic fields.

A condition is a list of three items:

  • A Filebeat operator
  • A key (name of a property)
  • A value (value of property which must be matched)

The condition list supports multiple syntax styles for these three items:

"Advanced" sytax:

- condition:
    operator: <filebeat-operator>
    key: <key-name>
    value: <value-to-match>
YML

With advanced syntax, available operators are limited to equals and contains. Boolean operators (and, or, not) are not supported. 

Example:

This condition configures the Log Collector to harvest logs if the container name is <container-name>.  To find the container name, run the command kubectl describe pod <podname>. See Useful kubectl and helm Commands:
- condition:
    operator: equals
    key: kubernetes.container.name
    value: log-gen-app-logback1
YML

Filebeat syntax:

- condition:
    <filebeat-operator>:
      <key-name>: <value-to-match>
YML

With Filebeat syntax, available operators are limited to equals and contains. Boolean operators (and, or, not) are not supported. 

Example:

- condition:
    equals:
      kubernetes.container.name: log-gen-app-log4j1
YML

Boolean syntax (supports boolean operators and, or, not):

- condition:
    <boolean>:
      - <filebeat-operator>:
          <key-name>: <value-to-match>
      - <filebeat-operator>:
          <key-name>: <value-to-match> 
YML

Example:

- condition:
    or:
      - equals:
          kubernetes.container.name: log-gen-app-log4j1
      - equals:
          kubernetes.container.name: log-gen-app-log4j2 
YML

Nested conditions:

Examples:

- condition:
    not:
      equals:
        kubernetes.container.name: log-gen-app-logback2
YML
- condition:
    or:
      - equals: 
          kubernetes.container.name: log-gen-app-log4j2
      - and:         
          - equals: 
              kubernetes.container.name: log-gen-app-log4j1 
          - equals: 
              kubernetes.namespace: appdynamics
YML

Yes

appdynamics-cloud-k8s-monitoring.logCollectorConfig.container.conditionalConfigs.config

↩ Parent

Name | Type | Description | Required

logFormat 

string

Specifies a logical grouping of the log "namespace" and source. Sensitive data masking rules apply only to a scope that matches the value of this parameter. If you don't specify this parameter, you can't mask sensitive data contained within the log messages that are ingested through this configuration. See Mask Sensitive Data.

Syntax: "<log-namespace>:<log-source>"

Suggestions for <log-namespace>: aws, K8s, microsoft

Suggestions for <log-source>: apacheLogs, alb_logs, cis_logs, exchange_server_logs

Example for Kubernetes logs from the common ingestion service (CIS) endpoint:

     conditionalConfigs: 
      - condition:
          ...
        config:
          logFormat: "K8s:cis_logs" 
          ...
YML

Yes

multiLinePattern,
multiLineNegate,
multiLineMatch

string

If your messages span multiple lines, you must specify these parameters to identify where a multiline log message starts and ends. See Manage multiline messages.

multiLinePattern: The pattern of a multiline message. This must be a regular expression in RE2 syntax. Must be enclosed in single quotes.

multiLineNegate: Enables or disables negation of a multiline message. Default: false.

multiLineMatch: The location of the multiline split. Default: after. If you specify multiLinePattern, you must also specify multiLineMatch.

Example for log4J, Logback, or Grok logs starting with a date or timestamp of format YYYY-MM-DD:

- condition:
  ...
  config:
    multiLineMatch: after
    multiLinePattern: '^\d{4}-\d{2}-\d{2}'
    multiLineNegate: true
YML

Example for Timestamp logs:

Not applicable.

Example for JSON logs:

- condition:
  ...
  config:
    multiLineMatch: after
    multiLinePattern: '^{'
    multiLineNegate: true
YML

No

messageParser

object

Single-line message pattern for log messages matching this condition block. Include only one log type in each condition block.

Yes

appdynamics-cloud-k8s-monitoring.logCollectorConfig.container.conditionalConfigs.messageParser

↩ Parent

Name | Type | Description | Required

log4J

object

Single-line message pattern for log4J messages matching this condition block. Include only one log type in each condition block, and delete all others.

Get this pattern from your logging configuration file (typically named log4J.xml or log4J2.xml) or from your application developers. You must specify the exact same pattern here as in your logging configuration file, otherwise your logs will not be parsed correctly when they are received by Cisco Cloud Observability.

Example:

- condition:
  ...
  config:
    ...
    messageParser:
      log4J:
        enabled: true
        pattern: "%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %m%n"
YML

No

logback

object

Single-line message pattern for Logback messages matching this condition block. Include only one log type in each condition block, and delete all others.

Get this pattern from your logging configuration file (typically named logback.xml) or from your application developers. You must specify the exact same pattern here as in your logging configuration file, otherwise your logs will not be parsed correctly when they are received by Cisco Cloud Observability.

Example:

- condition:
  ...
  config:
    ...
    messageParser:
      logback:
        enabled: true
        pattern: "%d{yyyy-MM-dd'T'HH:mm:ss.nnnnnnnnn} %p %C{1.} [%t] %m%n"
YML

No

timestamp

object

Single-line message pattern for timestamp messages matching this condition block. Include only one log type in each condition block, and delete all others.

The format parameter specifies the format of the timestamp. Valid values for format are: 

  • ABSOLUTE (HH:mm:ss,SSS)
  • ABSOLUTE_MICROS (HH:mm:ss,nnnnnn)
  • ABSOLUTE_NANOS (HH:mm:ss,nnnnnnnnn)
  • ABSOLUTE_PERIOD (HH:mm:ss.SSS)
  • COMPACT (yyyyMMddHHmmssSSS)
  • DATE (dd MMM yyyy HH:mm:ss,SSS)
  • DATE_PERIOD (dd MMM yyyy HH:mm:ss.SSS)
  • DEFAULT (yyyy-MM-dd HH:mm:ss,SSS)
  • DEFAULT_MICROS (yyyy-MM-dd HH:mm:ss,nnnnnn)
  • DEFAULT_NANOS (yyyy-MM-dd HH:mm:ss,nnnnnnnnn)
  • DEFAULT_PERIOD (yyyy-MM-dd HH:mm:ss.SSS)
  • ISO (yyyy-MM-dd'T'HH:mm:ss)
  • ISO8601_BASIC (yyyyMMdd'T'HHmmss,SSS)
  • ISO8601_BASIC_PERIOD (yyyyMMdd'T'HHmmss.SSS)
  • ISO8601 (yyyy-MM-dd'T'HH:mm:ss,SSS)
  • ISO8601_OFFSET_DATE_TIME_HH (yyyy-MM-dd'T'HH:mm:ss,SSSX)
  • ISO8601_OFFSET_DATE_TIME_HHMM (yyyy-MM-dd'T'HH:mm:ss,SSSXX)
  • ISO8601_OFFSET_DATE_TIME_HHCMM (yyyy-MM-dd'T'HH:mm:ss,SSSXXX)
  • ISO8601_PERIOD (yyyy-MM-dd'T'HH:mm:ss.SSS)

Any valid pattern supported by java.time.format.DateTimeFormatter is also accepted.

Example:

- condition:
  ...
  config:
    ...
    messageParser:
      timestamp:
        enabled: true
        format: ISO8601_BASIC
YML

No

json

object

Single-line message pattern for JSON messages matching this condition block. Include only one log type in each condition block, and delete all others.

timestampField and timestampPattern identify the log message's timestamp field and the pattern for parsing that field. If you don't specify timestampField and timestampPattern, the Log Collector will use the ingestion time as the timestamp for each message.

For additional settings for JSON logs, see Advanced Configuration for JSON Logs.  

Example:

- condition:
  ...
  config:
    ...
    messageParser:
      json:
        enabled: true
        timestampField: "@timestamp"
        timestampPattern: "yyyy-MM-dd'T'HH:mm:ss.SSS'Z'"
YML

No

grok

object

Single-line message pattern for Grok messages matching this condition block. Include only one log type in each condition block, and delete all others.

For tips on Grok parsing, see Advanced Configuration for Grok Logs.

Example:

- condition:
  ...
  config:
    ...
    messageParser:
      grok:
        enabled: true
        patterns:
        - '%{DATESTAMP:time} %{LOGLEVEL:severity} %{WORD:class}:%{NUMBER:line} - %{GREEDYDATA:data}'
        - '%{DATESTAMP_RFC2822:time} %{LOGLEVEL:severity} %{GREEDYDATA:data}'
        - '%{TOMCAT_DATESTAMP:time} \| %{LOGLEVEL:level} \| %{JAVACLASS:class} - %{JAVALOGMESSAGE:logmessage}'
        - '%{IP:clientIP} %{WORD:httpMethod} %{URIPATH:url}'
        timestampField: time
        timestampPattern: yyyy-MM-dd HH:mm:ss,SSS
YML

No

subparsers

object

Applies subparsers to each Grok log message. Subparsers help you to extract more fields from different parts of a Grok log message. For example, after you extract and name a field using a Grok parser, you can parse that named field with a JSON parser.

This setting is applicable only if there is a messageParser.type: grok in collectors-values.yaml.

Specify subparsers as a multiline string, escaped once, minified, and enclosed in double quotes.

To minify, use a tool like Code Beautify.

Syntax of subparsers:

- condition:
  ...
  config:
    ...
    messageParser:
      subparsers: |-
         { 
          \"parsersList\": [
            {
              \"_message_parser.type\": \"logback\", 
              \"_message_parser.field\": \"logbackLogPrefix\", 
              \"_message_parser.pattern\": \"%date{yyyy-MM-dd HH:mm:ss.SSS} %-5level %n\" 
            }, 
            {
              \"_message_parser.type\": \"logback\", 
              \"_message_parser.field\": \"logbackLogSuffix\", 
              \"_message_parser.pattern\": \"%-5level %m %n\" 
            }
          ]
        }
YML

For each item in parsersList:

_message_parser.type must be log4j, logback, json, grok, timestamp, or infra. If _message_parser.type is missing, the Log Collector skips this entry. 

_message_parser.field must be the name of a field that has already been extracted from the Grok message in focus. If _message_parser.field is missing, the Log Collector skips this entry.

_message_parser.pattern must be a pattern that matches the data you want to extract from the field named in _message_parser.field. This parameter is only applicable if _message_parser.type is grok.

If there are duplicate items in parsersList, the Log Collector uses the last entry only.

If subparser parsing fails, the parsing status for this log message is false.

If you configure a subparser to give an extracted field the same name as an existing field, the Log Collector adds a prefix to the field name.

You cannot use a subparser to extract the timestamp field.

Example after minifying:

- condition:
  ...
  config:
    ...
    messageParser:
      subparsers: "{\"parsersList\": [{\"_message_parser.type\": \"log4j\", \"_message_parser.field\": \"log4j\",    \"_message_parser.pattern\": \"%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %msg%n\"}, {\"_message_parser.type\": \"log4j\",    \"_message_parser.field\": \"log4j2\", \"_message_parser.pattern\": \"%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %msg%n\"},                   {\"_message_parser.type\": \"logback\", \"_message_parser.field\": \"logback\", \"_message_parser.pattern\": \"%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %msg%n\"}, {\"_message_parser.type\": \"grok\",\"_message_parser.field\": \"grok\", \"_message_parser.pattern\": \"%{GREEDYDATA:infra}\"}, {\"_message_parser.type\": \"infra\", \"_message_parser.field\": \"infra\"}, {\"_message_parser.type\": \"json\",    \"_message_parser.field\": \"json\", \"_message_parser.flatten_sep\": \"/\"}]\\r\\n}" 
YML

No

infra

object

Single-line message pattern for Kubernetes infrastructure log messages matching this condition block. Include only one log type in each condition block, and delete all others.

If your infrastructure logs are in native klog format, set infra.enabled to true: 

- condition:
  ...
  config:
    ...
    messageParser:
      infra:
        enabled: true
YML


If your infrastructure logs are in JSON format (in Kubernetes v1.19 and later), use the messageParser.json settings, and be sure to set timestampField and timestampPattern. If you don't specify timestampField and timestampPattern, the Log Collector will use the ingestion time as the timestamp for each message.

For additional settings for JSON logs, see Advanced Configuration for JSON Logs.  

Example: 

- condition:
  ...
  config:
    ...
    messageParser:
      json:
        enabled: true
        timestampField: "@timestamp"
        timestampPattern: "yyyy-MM-dd'T'HH:mm:ss.SSS'Z'"
YML

No

multi 

object

Applies multiple parsers to a single log message. Set enabled to true and specify parsers as stringified JSON that is minified, escaped once, and enclosed in double quotes.

To minify, use a tool like Code Beautify. To escape, use a tool like JSON formatter.

Syntax of multi:

- condition:
  ...
  config:
    ...
    messageParser:
      multi:
        enabled: true
        parsers: "{
       "applyAll": <true-or-false>,
       "parsersList": [{
         "_message_parser.type": "<parser-type>",
         "_message_parser.name": "<parser-name>",
         "_message_parser.pattern": "<pattern>"
         }, {
		 "_message_parser.type": "<parser-type>",
		 "_message_parser.name": "<parser-name>",
		 "_message_parser.pattern": "<pattern>"
	     }, 
         {...}]
         }"
YML

For each item in parsersList:

_message_parser.type must be log4j, logback, json, grok, timestamp, or infra. If _message_parser.type is missing, the Log Collector skips this entry. 

_message_parser.name must be unique. If _message_parser.name is missing, the Log Collector skips this entry.

_message_parser.pattern is described in the preceding row.

If there are duplicate items in parsersList, the Log Collector uses the first entry and ignores the duplicates.

Example before minifying and single escaping:

- condition:
  ...
  config:
    ...
    messageParser:
      multi:
        enabled: true
        parsers: "{
        "applyAll": false,
        "parsersList": [{
          "_message_parser.type": "log4j",
          "_message_parser.name": "log4j001",
          "_message_parser.pattern": "%d {yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %m%n"
         }, {
          "_message_parser.type": "logback",
          "_message_parser.name": "logback001",
          "_message_parser.pattern": "%-5level [%class]: %msg%n"
         }, {
           "_message_parser.type": "json",
	       "_message_parser.name": "json001",
	       "_message_parser.flatten_sep": "/"
	     }]
         }"
YML

Example after minifying and single escaping:

- condition:
  ...
  config:
    ...
    messageParser:
       multi:
        enabled: true          
        parsers: "{\"applyAll\":false,\"parsersList\":[{\"_message_parser.type\":\"log4j\",\"_message_parser.name\":\"log4j001\",\"_message_parser.pattern\":\"%d {yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %m%n\"},{\"_message_parser.type\":\"logback\",\"_message_parser.name\":\"logback001\",\"_message_parser.pattern\":\"%-5level [%class]: %msg%n\"},{\"_message_parser.type\":\"json\",\"_message_parser.name\":\"json001\",\"_message_parser.flatten_sep\":\"/\"}]}"
YML

No

appdynamics-cloud-k8s-monitoring.logCollectorConfig.container.conditionalConfigs.logging

↩ Parent

This section contains settings for saving or exporting Filebeat logs.

Name | Type | Description | Required

level

string

Logging level. Valid values: info, debug, warn, error. Default: info.

No

selectors

string

Selector (filter) to limit the logging to only components that match. Valid values: monitoring, otlp.

If logging.files.enabled is true, the Log Collector ignores this parameter and uses only the monitoring selector.
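
Example (a sketch; monitoring is one of the valid selector values listed above):

logging:
  level: debug
  selectors: [monitoring]
YML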

No

files

object

Settings for Filebeat logging to files.

No


enabled

string

Enables or disables Filebeat logging to files. Valid values: true, false. Default: false

When this parameter is true, the following changes happen:

Log files are located in /opt/appdynamics/logcollector-agent/log, are named lca-log-yyyyMMdd.ndjson, and are written on a rolling append basis. In other words, the Log Collector creates log files named lca-log-yyyyMMdd.ndjson, lca-log-yyyyMMdd-1.ndjson, and so on, up to the maximum number specified by logging.files.keepFiles.

Log files are persistent across pod restarts.

Pod logs no longer include Filebeat logs.

No


keepFiles

string

Number of log files to keep if Filebeat logging is enabled. Default: 5.
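
Example, following the sample configurations later on this page:

logging:
  files:
    enabled: true
    keepFiles: 5
YML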

No

metrics

object

Settings for logging Filebeat metrics.

No

enabled

string

Enables or disables metrics logging. Valid values: true, false. Default: false.

When this parameter is true, the Log Collector writes metrics to the log file if logging.files.enabled is true, or to the console if logging.files.enabled is false.

If this is enabled, the Log Collector ignores any selectors you specify and uses only the monitoring selector.

Sample metrics log message:


{"log.level":"info","@timestamp":"2022-05-19T05:34:09.525Z","log.logger":"monitoring","log.origin":{"file.name":"log/log.go","file.line":184},"message":"Non-zero metrics in the last 30s","service.name":"filebeat","monitoring":{"metrics":{"beat":{"cpu":{"system":{"ticks":770,"time":{"ms":472}},"total":{"ticks":2690,"time":{"ms":1834},"value":2690},"user":{"ticks":1920,"time":{"ms":1362}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":12},"info":{"ephemeral_id":"e858be1d-e6c8-4902-9b40-71902eb8b973","uptime":{"ms":60252},"version":"8.0.0"},"memstats":{"gc_next":23206464,"memory_alloc":12406320,"memory_sys":17039360,"memory_total":279461008,"rss":107671552},"runtime":{"goroutines":99}},"filebeat":{"events":{"active":650,"added":10396,"done":9746},"harvester":{"open_files":1,"running":1,"started":1}},"libbeat":{"config":{"module":{"running":1,"starts":1}},"output":{"events":{"active":0}},"pipeline":{"clients":1,"events":{"active":650,"filtered":1,"published":10395,"retry":83,"total":10396},"queue":{"acked":9745}}},"registrar":{"states":{"current":1,"update":9746},"writes":{"success":17,"total":17}},"system":{"load":{"1":0.66,"5":0.51,"15":1.18,"norm":{"1":0.11,"5":0.085,"15":0.1967}}}},"ecs.version":"1.6.0"}}
JSON

No


period

string

Frequency of metrics collection. Valid values: 0-99s, 0-99m. Default: 30s (30 seconds). Ignored if logging.metrics.enabled is false.
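
Example, following the sample configurations later on this page:

logging:
  metrics:
    enabled: true
    period: 30s
YML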

No

appdynamics-cloud-k8s-monitoring.logCollectorConfig.container.conditionalConfigs.monitoring

↩ Parent

This section contains settings for exporting Filebeat metrics from the Log Collector in OTLP format. This is for internal use only.

Name | Type | Description | Required

otlpmetric

object

Settings for exporting Log Collector metrics in OTLP format.

No

enabled

string

Enables or disables export of Log Collector metrics to a backend. Default: false.

No

endpoint

string

OTLP receiver endpoint. Default is the otel-collector endpoint: "${APPD_OTELCOL_GRPC_RECEIVER_HOST}:${APPD_OTELCOL_GRPC_RECEIVER_PORT}".

No


protocol

string

Protocol to use for export. Valid values: http, grpc. Default: grpc.

No

collectPeriod

string

Internal collection period. Default: 10s.

No

reportPeriod

string

Reporting period to the OTLP backend. Default: 60s.

No
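
The following sketch shows the core otlpmetric settings together, using the defaults documented above and values from the sample configurations later on this page:

monitoring:
  otlpmetric:
    enabled: true
    protocol: grpc
    collectPeriod: 10s
    reportPeriod: 60s
YML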

resourceAttrs

string

List of resource attributes to be added to metrics packets. Default: None (empty list).

Example:

        resourceAttrs:
          - key: resource-key-1
            value: resource-value-1
          - key: resource-key-2
            value: resource-value-2
YML

No


metrics

string

List of metrics to capture. If this list is empty, the Log Collector captures all metrics. If this parameter is omitted, the Log Collector captures the default list of metrics in the example.

Example:

        metrics:           
          - beat.memstats.memory_alloc
          - filebeat.events.active
          - filebeat.harvester.running
          - filebeat.harvester.skipped
          - filebeat.input.log.files.truncated
          - libbeat.output.read.errors
          - libbeat.output.write.bytes
          - libbeat.output.write.errors
          - system.load.norm.5
          - system.load.norm.15
YML

No


retry

object

Metrics exporter retry configuration, used when exporting to the metrics backend fails.

No


enabled

string

Enables or disables retry of failed batches. Valid values: true, false. Default: false.

No


initialInterval

string

Time to wait after the first failure before retrying. Specify this as an int64 with a unit suffix. For example, 500ms.

No


maxInterval

string

Maximum time to wait between consecutive failures. Specify this as an int64 with a unit suffix. For example, 500ms. Once this value is reached, the delay between consecutive retries is always this value.

No


maxElapsedTime

string

Maximum amount of time (including retries) spent trying to send a request or batch. Once this value is reached, the data is discarded.

No
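
Example (a sketch; the interval values are illustrative, based on the 500ms examples above, not recommendations):

retry:
  enabled: true
  initialInterval: 500ms
  maxInterval: 500ms
  maxElapsedTime: 5m
YML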

ssl

object

Metrics exporter secure/TLS configuration. The Log Collector uses TLS protocol 1.3 by default.

No


enabled

string

Enables or disables SSL. Valid values: true, false. Default: false.

No



certificateAuthorities

array

List of your root CA certificates.

Example:

certificateAuthorities: ["/opt/appdynamics/certs/ca/ca.pem"]
YML

No



certificate

string

Full pathname of your certificate for SSL client authentication.

Example:

certificate: "/opt/appdynamics/certs/client/client.pem"
YML

No



key

string

Full pathname of the private SSL key for your client certificate.

Example:

key: "/opt/appdynamics/certs/client/client-key.pem"
YML

No


Sample Configurations 

global:
  clusterName: <cluster-name>
  oauth:
     clientId:
     clientSecret:
     endpoint: https://<your-tenant-url>/data
     tokenUrl: https://<your-tenant-url>/auth/<tenant-id>/default/oauth2/token  

appdynamics-otel-collector:
  clientId: <client-id>
  clientSecret: <client-secret>
  endpoint: <endpoint>
  tokenUrl: <token-url>
  spec:
    image: <image-url>
    imagePullPolicy: IfNotPresent
  config:
    exporters:
      logging:
        loglevel: debug
 
appdynamics-cloud-k8s-monitoring:
  install:
    logCollector: true
    defaultInfraCollectors: false
    clustermon: false
 
  clustermonPod:
    image: <image-url>
    nodeSelector:
      kubernetes.io/os: linux
 
  inframonPod:
    image: <image-url>
    nodeSelector:
      kubernetes.io/os: linux
 
  logCollectorPod:
    image: <image-url>
    imagePullPolicy: IfNotPresent
 
  logCollectorConfig:
    os: [windows,linux]
    container:
      conditionalConfigs:
        - condition:
            equals:
              kubernetes.namespace: logns
          config:
            multiLinePattern: '^2023|^{'
            multiLineNegate: true
            multiLineMatch: after
            messageParser:
              log4J:
                enabled: true
                pattern: "%d{yyyy-MM-dd'T'HH:mm:ss} %p %C{1.} [%t] %m%n"
      logging:
        level: debug
        selectors: []
        files:
          # to enable logging to files
          enabled: false
          # number of files to keep if logging to files is enabled
          keepFiles: 5                                                                      # default value
        metrics:
          # to enable logging metrics data
          enabled: false
          period: 30s                                                                       # default value
      # you don't need below block if you are not using/exporting metrics
      monitoring:
        otlpmetric:
          enabled: false
          metrics:
            # default metrics to capture are below
            - beat.memstats.memory_alloc
            - filebeat.events.active
            - filebeat.harvester.running
            - filebeat.harvester.skipped
            - filebeat.input.log.files.truncated
            - libbeat.output.read.errors
            - libbeat.output.write.bytes
            - libbeat.output.write.errors
            - system.load.norm.5
            - system.load.norm.15
          retry:
            enabled: false
          ssl:
            enabled: false
YML
global:
  clusterName: test_Linux_Override
  oauth:
     clientId:
     clientSecret:
     endpoint: https://<your-tenant-url>/data
     tokenUrl: https://<your-tenant-url>/auth/<tenant-id>/default/oauth2/token 
  tls:
    appdCollectors:
      enabled: false
      secret:
        secretName: client-secret
        secretKeys:
          caCert: ca.crt
          tlsCert: tls.crt
          tlsKey: tls.key
    otelReceiver:
      mtlsEnabled: true
      secret:
        secretName: server-secret
        secretKeys:
          caCert: ca.crt
          tlsCert: tls.crt
          tlsKey: tls.key
      settings:
        min_version: 1.2
        max_version: 1.3

appdynamics-otel-collector:
   clientId: test
   clientSecret: test
   os: [ linux, windows ]
   endpoint: https://test-tenant/data
   tokenUrl: <token-url>
 
appdynamics-cloud-k8s-monitoring:
 clustermonConfig:
   os: windows
   logLevel: debug
   filters:
     annotation:
       excludeRegex: "1.filter_Name"
   events:
     enabled: true
     severityToExclude: [ ]
     severeGroupByReason:
       - Pulling
 infraManagerConfig:
   os: [ linux, windows ]
   logLevel: debug
 servermonConfig:
   os: [ linux, windows ]
   logLevel: debug
 containermonConfig:
   os: [ linux, windows ]
   logLevel: debug
 install:
   logCollector: true
   defaultInfraCollectors: true
   clustermon: true


 logCollectorConfig:
   os: [windows, linux]
   env: # Specify the OS for which you want to override config.
     linux: 
       container:
         defaultConfig:
               multiLinePattern: '^-'
               multiLineMatch: before
               multiLineNegate: true
               messageParser:           
                 log4J:
                   enabled: true
                   pattern: "%d{yyyy-MM-dd'T'HH:mm:ss} %p %C{1.} [%t] %m%n"
         conditionalConfigs:
           - condition:
               or:
                 - equals:
                     kubernetes.container.name: log-gen-app-log4j-winTest
                 - equals:
                     kubernetes.container.name: log-gen-app-log4jTest
             config:
               multiLinePattern: '^thisIsForLinux'
               multiLineNegate: true
               multiLineMatch: before
               messageParser:
                 log4J:
                   enabled: true
                   pattern: "%d{yyyy-MM-dd'T'HH:mm:ss} %p %C{1.} [%t] %m%n"
         logging:
           level: debug
           selectors: [otlpmetrics]
          batchSize: 2000 # overrides the default value of 1000
          maxBytes: 900000 # overrides the default value of 1000000
         monitoring:
           otlpmetric:
             enabled: true
             collectPeriod: 10s # default value
             reportPeriod: 60s          
             metrics:
               # default metrics to capture are below
                - beat.memstats.memory_alloc
             retry:
               enabled: true
               # initialInterval:
               # maxInterval:
               # maxElapsedTime:
             ssl:
                enabled: true # You can't override SSL fields; these are taken from the global TLS config
               certificateAuthorities: ["/opt/appdynamics/certs/ca/ca.pem"]
               certificate: "/opt/appdynamics/certs/client/client.pem"
               key: "/opt/appdynamics/certs/client/client-key.pem"          
     windows:    
       container:
         defaultConfig:
               multiLinePattern: '^-Windows'
               multiLineMatch: before
               multiLineNegate: true
               messageParser:           
                 infra:
                   enabled: true
                   # pattern: "%d{yyyy-MM-dd'T'HH:mm:ss}"
         conditionalConfigs:
           - condition:
               or:
                 - equals:
                     kubernetes.container.name: log-gen-app-log4j-WINDOWS
                 - equals:
                     kubernetes.container.name: log-gen-app-log4j-WINDOWS
             config:
               multiLinePattern: '^thisIsForWINDOWS'
               multiLineNegate: false
               multiLineMatch: after
               messageParser:
                 logback:
                   enabled: true
                   pattern: "%d{yyyy-MM-dd'T'HH:mm:ss} %p %n"          
         logging:
           level: debug
          batchSize: 3000 # overrides the default value of 1000
          maxBytes: 800000 # overrides the default value of 1000000
   container:
     defaultConfig:
           multiLinePattern: '^{'
           multiLineMatch: "after"
           multiLineNegate: true
           messageParser:           
             json:
               enabled: true                
     conditionalConfigs:      
       - condition:
           or:
             - equals:
                 kubernetes.container.name: log-gen-app-log4j-win
             - equals:
                 kubernetes.container.name: log-gen-app-log4j
         config:
           multiLinePattern: '^2023|^{'
           multiLineNegate: true
           multiLineMatch: after
           messageParser:
             log4J:
               enabled: true
               pattern: "%d{yyyy-MM-dd'T'HH:mm:ss} %p %C{1.} [%t] %m%n"
       - condition:
           or:
             - equals:
                 kubernetes.container.name: log-gen-app-log4j2-win
             - equals:
                 kubernetes.container.name: log-gen-app-log4j2
         config: 
           multiLinePattern: '^2023'            # default = '' (empty)
           multiLineNegate: true # default = false
           multiLineMatch: "after" # default = after
           messageParser:
             log4J:
               enabled: true
               pattern: "%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %m%n"  # default = ""
       - condition:
           or:
             - equals:
                 kubernetes.container.name: log-gen-app-logback-win
             - equals:
                 kubernetes.container.name: log-gen-app-logback
         config: 
           multiLinePattern: '^2023'           # default = '' (empty)
           multiLineNegate: true # default = false
           multiLineMatch: "after" # default = after
           messageParser:
             logback:
               enabled: true
               pattern: "%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n" # default = ""
       - condition:
           or:
             - equals:
                 kubernetes.container.name: log-gen-app-grok-sub-win
             - equals:
                 kubernetes.container.name: log-gen-app-grok-sub
         config: 
           multiLinePattern: '^\[2022'        # default = '' (empty)
           multiLineNegate: true # default = false
           multiLineMatch: "after" # default = after
           messageParser:
               grok:
                 enabled: true
                 patterns:
                   - '\[%{GREEDYDATA:log4j}\] \[%{GREEDYDATA:json}\] \[%{GREEDYDATA:log4j2}\] \[%{GREEDYDATA:logback}\] \[%{IPORHOST:grok}\] \[%{GREEDYDATA:infra}\]'                      
                 # timestampField: "time"
                 timestampPattern: "yyyy-MM-dd HH:mm:ss,SSS"  
               subparsers: "{\"parsersList\": [{    \"_message_parser.type\": \"log4j\",    \"_message_parser.field\": \"log4j\",    \"_message_parser.pattern\": \"%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %msg%n\"}, 
               {    \"_message_parser.type\": \"log4j\",    \"_message_parser.field\": \"log4j2\",    \"_message_parser.pattern\": \"%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %msg%n\"}, 
               {    \"_message_parser.type\": \"logback\",    \"_message_parser.field\": \"logback\",    \"_message_parser.pattern\": \"%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %msg%n\"}, 
               {    \"_message_parser.type\": \"grok\",    \"_message_parser.field\": \"grok\",    \"_message_parser.pattern\": \"%{GREEDYDATA:infra}\"}, 
               {    \"_message_parser.type\": \"infra\",    \"_message_parser.field\": \"infra\"}, 
               {    \"_message_parser.type\": \"json\",    \"_message_parser.field\": \"json\",    \"_message_parser.flatten_sep\": \"/\"}]\\r\\n}"
       - condition:
           or:
             - equals:
                 kubernetes.container.name: log-gen-app-grok-win
             - equals:
                 kubernetes.container.name: log-gen-app-grok
         config: 
           multiLinePattern: '^2021|^55|^Tue'            # default = '' (empty)
           multiLineNegate: true # default = false
           multiLineMatch: "after" # default = after
           messageParser:
             grok:
               enabled: true
               patterns:
                 - "%{DATESTAMP:time} %{LOGLEVEL:severity} %{WORD:class}:%{NUMBER:line} - %{GREEDYDATA:data}"
                 - "%{DATESTAMP_RFC2822:time} %{LOGLEVEL:severity} %{GREEDYDATA:data}"
                 - "%{TOMCAT_DATESTAMP:time} \| %{LOGLEVEL:level} \| %{JAVACLASS:class} - %{JAVALOGMESSAGE:logmessage}"
                 - "%{IP:clientIP} %{WORD:httpMethod} %{URIPATH:url}"
               timestampField: time
               timestampPattern: "yyyy-MM-dd HH:mm:ss,SSS"
       - condition:
           or:
             - equals:
                 kubernetes.container.name: log-gen-app-json-win
             - equals:
                 kubernetes.container.name: log-gen-app-json
         config: 
           multiLinePattern: '^{'            # default = '' (empty)
           multiLineNegate: true # default = false
           multiLineMatch: "after" # default = after
           messageParser:
             json:
               enabled: true
               timestampField: "@timestamp"
               timestampPattern: "yyyy-MM-dd'T'HH:mm:ss.SSS'Z'"
       - condition:        
           operator: equals
           key: kubernetes.container.name
           value: kube-proxy
         config: 
           multiLinePattern: '^[a-z]|^[A-Z]'            # default = '' (empty)
           multiLineNegate: true # default = false
           multiLineMatch: "after" # default = after
           messageParser:
             infra:
               enabled: true                
     # dropFields: ["agent", "stream", "ecs", "input", "orchestrator", "k8s.annotations.appdynamics", "k8s.labels", "k8s.node.labels", "cloud"]
     batchSize: 1000 # this is the default value
     maxBytes: 1000000 # this is the default value
     logging:
       level: debug
       files:
         # to enable logging to files
         enabled: true
         # number of files to keep if logging to files is enabled
         keepFiles: 5                                                                      # default value
       metrics:
         # to enable logging metrics data
         enabled: true
         period: 30s                                                                       # default value
     # you don't need below block if you are not using/exporting metrics
     monitoring:
       otlpmetric:
         enabled: true
         collectPeriod: 10s # default value
         reportPeriod: 60s          
         metrics:
           # default metrics to capture are below
            - beat.memstats.memory_alloc
            - filebeat.events.active
            - filebeat.harvester.running
            - filebeat.harvester.skipped
            - filebeat.input.log.files.truncated
           - libbeat.output.read.errors
           - libbeat.output.write.bytes
           - libbeat.output.write.errors
           - system.load.norm.5
           - system.load.norm.15
           - libbeat.pipeline.events.filtered
         retry:
           enabled: false
           # initialInterval:
           # maxInterval:
           # maxElapsedTime:
         ssl:
           enabled: false
           certificateAuthorities: ["C:/filebeat/certs/ca/ca.pem"]
           certificate: "C:/filebeat/certs/client/client.pem"
           key: "C:/filebeat/certs/client/client-key.pem"
YML

OpenTelemetry™ and Kubernetes® (as applicable) are trademarks of The Linux Foundation®.