This page explains the settings you can customize in collectors-values.yaml for the Log Collector.

This page applies only to Log Collector deployments that use the advanced layout of collectors-values.yaml. If your collectors-values.yaml uses the simplified YAML layout available in June 2022 and later, see Log Collector Settings.

The Log Collector sends the entire appdynamics-cloud-k8s-monitoring.logCollectorConfig.filebeatYaml value to Filebeat as a string. There are seven top-level keys within appdynamics-cloud-k8s-monitoring.logCollectorConfig.filebeatYaml:

logCollectorConfig:
  filebeatYaml: |-
    filebeat.autodiscover: ...
    processors: ...
    output.otlploggrpc: ...
    filebeat.registry: ...
    path.data: ...
    logging: ...
    monitoring: ...
YML

This page describes these keys, plus one key located in logCollectorPod.

appdynamics-cloud-k8s-monitoring.logCollectorPod.env

In the appdynamics-cloud-k8s-monitoring.logCollectorPod.env section you can specify environment variables such as the location of operating system-specific images of the Log Collector.

Parameter | Description

env

Section for environment variables.

env.linux.image

Location of the Log Collector image for Linux.

Example:

env:
  linux:
    image: <image-url>
YML

env.windows.image

Location of the Log Collector image for Windows.

Example:

env:
  windows:
    image: <image-url>
YML

appdynamics-cloud-k8s-monitoring.logCollectorConfig.os

If you need to deploy the Log Collector images on multiple operating systems, specify the image URLs in appdynamics-cloud-k8s-monitoring.logCollectorConfig.os:

Parameter | Description

os

Array of operating systems to deploy this pod on. Valid values: windows, linux.

Example:

os: [linux,windows]
YML

appdynamics-cloud-k8s-monitoring.logCollectorConfig.env

Use the appdynamics-cloud-k8s-monitoring.logCollectorConfig.env section to specify operating system-specific overrides.  

Parameter | Description

env

Specifies the OS-specific override configurations.

env.<linux or windows>.filebeatYaml

An OS-specific filebeatYaml section identical in syntax to appdynamics-cloud-k8s-monitoring.logCollectorConfig.filebeatYaml.

Example:  


  logCollectorConfig:
    os: [linux,windows]
    env:
      linux:
        filebeatYaml: |-
          ...
      windows:
        filebeatYaml: |-
          ...
YML

The following sections of filebeatYaml require a complete override (you must include all parameters):

  • filebeatYaml.filebeat.autodiscover

The following sections of filebeatYaml require only a value override (you only need to include the values you want to override; see the sketch after this list):

  • filebeatYaml.processors

  • filebeatYaml.output.otlploggrpc

  • filebeatYaml.filebeat.registry

  • filebeatYaml.path.data

  • filebeatYaml.logging

  • filebeatYaml.monitoring
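
For example, assuming the base filebeatYaml already defines a logging section, the following sketch raises only the Filebeat logging level on Windows nodes. Because logging is a value-override section, the Windows filebeatYaml needs to contain only the key being changed:

logCollectorConfig:
  os: [linux, windows]
  filebeatYaml: |-
    ...
  env:
    windows:
      filebeatYaml: |-
        logging:
          level: debug
YML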

appdynamics-cloud-k8s-monitoring.logCollectorConfig.filebeatYaml.filebeat.autodiscover

This section is for Kubernetes® container discovery and file log input.

Parameter | Description

providers

hints.default_config

Specifies a default ("fallback") configuration for harvesting logs from any container on your cluster, which allows for faster setup. Valid values: true, false. Default: false.

If you enable hints.default_config, you must also include all parameters between hints.default_config and the add_fields list (inclusive) in the following example. Modify these parameters as necessary to match the conditions you need for default log collection and parsing:

filebeat.autodiscover:
  providers:
    - type: kubernetes
      ...
      hints.enabled: true
      hints.default_config:
        enabled: true
        type: filestream
        id: fsid-${data.kubernetes.pod.name}-${data.kubernetes.container.id}
        close_removed: false
        clean_removed: false
        paths:
          - /var/log/containers/${data.kubernetes.pod.name}*${data.kubernetes.container.id}.log
        parsers:
          - container:
              stream: all
              format: auto
          - multiline:
              type: pattern
              pattern: '^\d{4}-\d{2}-\d{2}'
              match: after
              negate: true
        prospector.scanner.symlinks: true
        processors:
          - copy_fields:
              fields:
                - from: kubernetes.pod.name
                  to: fields.k8s.pod.name
              fail_on_error: false
              ignore_missing: true
          - copy_fields:
              fields:
                - from: kubernetes.deployment.name
                  to: fields.k8s.workload.name
              fail_on_error: false
              ignore_missing: true
          - add_fields:
              target: _message_parser
              fields:
                ... # user provided, based on the particular message parser used
      templates:
        ...
processors:
  ...
YML

templates

Contains a condition block that holds all the settings for a specific log source, type, and pattern. To create multiple condition blocks, clone templates.


- condition

The condition that applications must match in order to have their logs harvested by the Log Collector. For a list of supported conditions, see Filebeat: Conditions. For a list of fields you can use for conditions, see Filebeat: Autodiscover: Generic fields.

A condition is made up of three items:

  • An operator (equals or contains)
  • A key (the name of a property)
  • A value (the property value that must be matched)

Conditions can be written in several forms:

Filebeat order:

- condition:
    <filebeat-operator>:
      <key-name>: <value-to-match>
YML

Example:

- condition:
    equals:
      kubernetes.container.name: log-gen-app-log4j1
YML


Boolean logical operator (and, or, not):

- condition:
    <boolean>:
      - <filebeat-operator>:
          <key-name>: <value-to-match>
      - <filebeat-operator>:
          <key-name>: <value-to-match> 
YML

Example:

- condition:
    or:
      - equals:
          kubernetes.container.name: log-gen-app-log4j1
      - equals:
          kubernetes.container.name: log-gen-app-log4j2 
YML


Nested conditions:

Examples:

- condition:
    not:
      equals:
        kubernetes.container.name: log-gen-app-logback2
YML
- condition:
    or:
      - equals: 
          kubernetes.container.name: log-gen-app-log4j2
      - and:         
          - equals: 
              kubernetes.container.name: log-gen-app-log4j1 
          - equals: 
              kubernetes.namespace: appdynamics
YML


config

- type: filestream

id

Leave this as the default value (fsid-${data.kubernetes.pod.name}-${data.kubernetes.container.id}).

close_removed

Leave this as the default value (false).

clean_removed

Leave this as the default value (false).

paths

Glob pattern of the full pathname of your log files. To collect logs from multiple containers in one or more pods, modify this glob pattern or add additional pathnames on separate lines.

Default value:

paths:                     
- /var/log/containers/${data.kubernetes.pod.name}*${data.kubernetes.container.id}.log
YML

Example: 

paths:                     
- /var/log/containers/${data.kubernetes.pod.name}*mycontainer-id-1.log
- /var/log/containers/${data.kubernetes.pod.name}*mycontainer-id-2.log
- /var/log/containers/${data.kubernetes.pod.name}*mycontainer-id-3.log
YML




parsers

Syntax of parsers block: 

parsers:
  - container:
      stream: ***
      format: ***
  - multiline:
      type: pattern
      pattern: ****
      match: ****
      negate: ****
YML

If you are specifying a pattern for multiline log messages, use the following value of parsers:

parsers:
  - container:
      stream: all
      format: auto
  - multiline:
      type: pattern
      pattern: ****
      match: ****
      negate: **** 
YML

Otherwise, use:

parsers:
  - container:
      stream: all
      format: auto
YML





- container

See https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-input-filestream.html#_container






stream

Reads from the specified streams only: all, stdout, or stderr. Default: all







format

Uses the specified format when parsing logs: auto, docker, or cri. Default: auto (it automatically detects the format). To disable autodetection, specify any of the other values. 






- multiline 

If your messages span multiple lines, you must specify all parameters in this block to identify where a multiline log message starts and ends. See Manage multiline messages.

Example for log4j, logback, and grok logs starting with a date or timestamp of format YYYY-MM-DD: 

config:
  - type: filestream    
    ...
    parsers:
      - container:
          ...
      - multiline:
          type: pattern
          pattern: '^\d{4}-\d{2}-\d{2}'
          negate: true
          match: after
YML

Example for json logs: 

config:
  - type: filestream    
    ...
    parsers:
      - container:
          ...
      - multiline:
          type: pattern
          pattern: '^{'
          negate: true
          match: after
YML






type

Leave this as the default value (pattern).






pattern

The pattern of a multiline message. This must be a regular expression in RE2 syntax, enclosed in single quotes.






match

The location of the multiline split. Valid values: after, before. Default: after. If you specify multiline.pattern you must also specify multiline.match.

negate

Enables or disables negation of the pattern match. Valid values: true, false. Default: false.




prospector.scanner.symlinks

Leave this as the default value (true).




processors





- copy_fields

Do not modify.

processors:
  - copy_fields:
      fields:
        - from: kubernetes.pod.name
          to: fields.k8s.pod.name
      fail_on_error: false
      ignore_missing: true
YML





- copy_fields


This copy_fields block allows the Log Collector to associate log messages with a workload entity.

Do not modify.

processors:
  - copy_fields:
      ...
  - copy_fields:
      fields:
        - from: "kubernetes.deployment.name"
          to: "kubernetes.workload.name"
        - from: "kubernetes.daemonset.name"
          to: "kubernetes.workload.name"
        - from: "kubernetes.statefulset.name"
          to: "kubernetes.workload.name"
        - from: "kubernetes.replicaset.name"
          to: "kubernetes.workload.name"
        - from: "kubernetes.cronjob.name"
          to: "kubernetes.workload.name"
        - from: "kubernetes.job.name"
          to: "kubernetes.workload.name"
      fail_on_error: false
      ignore_missing: true
YML





- add_fields








target: appd

fields: log.format

Required. Specifies a logical grouping of the log "namespace" and source. Sensitive data masking rules apply only to a scope that matches the value of this parameter. If you don't specify this parameter, you can't mask sensitive data contained within the log messages that are ingested through this configuration. See Mask Sensitive Data.

Syntax: <log-namespace>:<log-source>

Suggestions for <log-namespace>: aws, K8s, microsoft

Suggestions for <log-source>: apacheLogs, alb_logs, cis_logs, exchange_server_logs

Example for Kubernetes logs from the common ingestion service (CIS) endpoint:

config:
  - type: filestream
    ...
    processors:
    ...
      - add_fields:
          target: appd
          fields:
            log.format: logs:email_logs
YML






target: _message_parser

fields: type, pattern

Log type and single-line message pattern for single-line logs from this container.

Get the pattern from your logging configuration file (typically named log4j.xml, log4j2.xml, or logback.xml). You must specify the exact same pattern here as in your logging configuration file, otherwise your logs will not be parsed when they are received by Cisco Cloud Observability. If you don't have a logging configuration file, ask your application developers for the pattern.

Valid values for type: log4j, logback, timestamp, json, infra, multi, grok.

Example for Log4j logs:

config:
  - type: filestream
    ...
    processors:
    ...
      - add_fields:
          target: _message_parser
          fields:
            type: log4j
            pattern: "%d{yyyy-MM-dd'T'HH:mm:ss} %p %C{1.} [%t] %m%n"
YML


Example for Logback logs:

config:
  - type: filestream
    ...
    processors:
    ...
      - add_fields:
          target: _message_parser
          fields:
            type: logback
            pattern: "%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n"
YML


Example for timestamp logs:

This example uses a field named format, which applies only to timestamp logs. The format parameter specifies the format of the timestamp. Valid values for format are:

  • ABSOLUTE (HH:mm:ss,SSS)
  • ABSOLUTE_MICROS (HH:mm:ss,nnnnnn)
  • ABSOLUTE_NANOS (HH:mm:ss,nnnnnnnnn)
  • ABSOLUTE_PERIOD (HH:mm:ss.SSS)
  • COMPACT (yyyyMMddHHmmssSSS)
  • DATE (dd MMM yyyy HH:mm:ss,SSS)
  • DATE_PERIOD (dd MMM yyyy HH:mm:ss.SSS)
  • DEFAULT (yyyy-MM-dd HH:mm:ss,SSS)
  • DEFAULT_MICROS (yyyy-MM-dd HH:mm:ss,nnnnnn)
  • DEFAULT_NANOS (yyyy-MM-dd HH:mm:ss,nnnnnnnnn)
  • DEFAULT_PERIOD (yyyy-MM-dd HH:mm:ss.SSS)
  • ISO (yyyy-MM-dd'T'HH:mm:ss)
  • ISO8601_BASIC (yyyyMMdd'T'HHmmss,SSS)
  • ISO8601_BASIC_PERIOD (yyyyMMdd'T'HHmmss.SSS)
  • ISO8601 (yyyy-MM-dd'T'HH:mm:ss,SSS)
  • ISO8601_OFFSET_DATE_TIME_HH (yyyy-MM-dd'T'HH:mm:ss,SSSX)
  • ISO8601_OFFSET_DATE_TIME_HHMM (yyyy-MM-dd'T'HH:mm:ss,SSSXX)
  • ISO8601_OFFSET_DATE_TIME_HHCMM (yyyy-MM-dd'T'HH:mm:ss,SSSXXX)
  • ISO8601_PERIOD (yyyy-MM-dd'T'HH:mm:ss.SSS)
  • Any valid pattern supported by java.time.format.DateTimeFormatter

Example:

config:
  - type: filestream
    ...
    processors:
    ...
      - add_fields:
          target: _message_parser
          fields:
            type: timestamp
            format: ISO8601_BASIC
YML

Example for JSON logs:

See Advanced Configuration for JSON Logs. At a minimum, specify type and omit format. In addition, specify timestamp_field and timestamp_pattern to use the log message's own timestamp. If you don't specify timestamp_field and timestamp_pattern, the log message's timestamp is set to the ingestion time.

config:
  - type: filestream
    ...
    processors:
    ...
      - add_fields:
          target: _message_parser
          fields:
            type: json
            timestamp_field: "transactionDetails.timestamp"
            timestamp_pattern: "yyyy-MM-dd HH:mm:ss"
YML


Example for Grok logs:

See Advanced Configuration for Grok Logs.

config:
  - type: filestream
    ...
    processors:
    ...
      - add_fields:
          target: _message_parser
          fields:
            type: grok
            pattern: 
            - '%{DATESTAMP:time} %{LOGLEVEL:severity} %{WORD:class}:%{NUMBER:line} - %{GREEDYDATA:data}'
            - '%{DATESTAMP_RFC2822:time} %{LOGLEVEL:severity} %{GREEDYDATA:data}'
            - '%{TOMCAT_DATESTAMP:time} \| %{LOGLEVEL:level} \| %{JAVACLASS:class} - %{JAVALOGMESSAGE:logmessage}'
            - '%{IP:clientIP} %{WORD:httpMethod} %{URIPATH:url}'
            timestamp_field: time
            timestamp_format: yyyy-MM-dd HH:mm:ss,SSS
YML






target: _message_parser

fields: parsers

Applies multiple parsers. Specify parsers as a stringified JSON that is minified, escaped twice, and enclosed in double quotes.

To minify, use a tool like Code Beautify. To double escape, use a tool like JSON formatter.

Syntax of parsers:

config:
  - type: filestream
    ...
    processors:
    ...
      - add_fields:
          target: _message_parser
          fields:
            parsers: "{
	          "apply_all": <true-or-false>,
	          "parsers_list": [{
		        "_message_parser.type": "<parser-type>",
		        "_message_parser.name": "<parser-name>",
		        "_message_parser.pattern": "<pattern>"
	            }, {
		        "_message_parser.type": "<parser-type>",
		        "_message_parser.name": "<parser-name>",
		        "_message_parser.pattern": "<pattern>"
	            }, 
                {...}]
              }"
YML

For each item in parsers_list:

  • _message_parser.type must be log4j, logback, json, grok, timestamp, or infra. If _message_parser.type is missing, the Log Collector skips this entry. 
  • _message_parser.name must be unique. If _message_parser.name is missing, the Log Collector skips this entry.
  • _message_parser.pattern is described in the preceding row.
  • If there are duplicate items in parsers_list, the Log Collector uses the first entry and ignores the duplicates.

Example before minifying and double escaping:

            parsers: "{
	          "apply_all": false,
	          "parsers_list": [{
		        "_message_parser.type": "log4j",
		        "_message_parser.name": "log4j001",
		        "_message_parser.pattern": "%d {yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %m%n"
	            }, {
		        "_message_parser.type": "logback",
		        "_message_parser.name": "logback001",
		        "_message_parser.pattern": "%-5level [%class]: %msg%n"
	            }, {
		        "_message_parser.type": "json",
		        "_message_parser.name": "json001",
		        "_message_parser.flatten_sep": "/"
	            }]
             }"
YML


Example after minifying and double escaping:

            parsers: "{\\\"apply_all\\\":false,\\\"parsers_list\\\":[{\\\"_message_parser.type\\\":\\\"log4j\\\",\\\"_message_parser.name\\\":\\\"log4j001\\\",\\\"_message_parser.pattern\\\":\\\"%d {yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %m%n\\\"},{\\\"_message_parser.type\\\":\\\"logback\\\",\\\"_message_parser.name\\\":\\\"logback001\\\",\\\"_message_parser.pattern\\\":\\\"%-5level [%class]: %msg%n\\\"},{\\\"_message_parser.type\\\":\\\"json\\\",\\\"_message_parser.name\\\":\\\"json001\\\",\\\"_message_parser.flatten_sep\\\":\\\"/\\\"}]}"
YML






target: _message_parser

fields: subparsers

Applies subparsers to each Grok log message. Subparsers help you to extract more fields from different parts of a Grok log message. For example, after you extract and name a field using a Grok parser, you can parse that named field with a JSON parser.

This setting is applicable only if there is a _message_parser.type: grok in collectors-values.yaml.

Specify subparsers as a stringified JSON that is minified, escaped twice, and enclosed in double quotes.

To minify, use a tool like Code Beautify. To double escape, use a tool like JSON formatter.

Syntax of subparsers:

config:
  - type: filestream
    ...
    processors:
    ...
      - add_fields:
          target: _message_parser
          fields:
            subparsers: "{
	          "parsersList": [{
		        "_message_parser.type": "<parser-type>",
		        "_message_parser.field": "<field-name-to-apply-subparser-to>",
		        "_message_parser.pattern": "<parser-pattern>"
	            }, {...}, 
                ]
              }"
YML

For each item in parsersList:

  • _message_parser.type must be log4j, logback, json, grok, timestamp, or infra. If _message_parser.type is missing, the Log Collector skips this entry. 
  • _message_parser.field must be the name of a field that has already been extracted from the Grok message in focus. If _message_parser.field is missing, the Log Collector skips this entry.
  • _message_parser.pattern must be a pattern that matches the data you want to extract from the field named in _message_parser.field. This parameter is only applicable if _message_parser.type is log4j, logback, or grok.
  • If there are duplicate items in parsersList, the Log Collector uses the last entry only.
  • If subparser parsing fails, the parsing status for this log message is false.
  • If you configure a subparser to name an extracted field the same as an existing field, the Log Collector adds a prefix to the field name.
  • You cannot use a subparser to extract the timestamp field.

Example before minifying and double escaping:

config:
  - type: filestream
    ...
    processors:
    ...
      - add_fields:
          target: _message_parser
          fields:
            type: grok
            pattern: 
            - '%{GREEDYDATA:k8Log}'
            timestamp_field: time
            timestamp_format: yyyy-MM-dd HH:mm:ss,SSS
      - add_fields:
          target: _message_parser
          fields:
            subparsers: "{
              "parsersList": [{
                "_message_parser.type": "infra",
                "_message_parser.field": "k8Log"
               }]
            }" 
YML

Example after minifying and double escaping:

config:
  - type: filestream
    ...
    processors:
    ...
      - add_fields:
          target: _message_parser
          fields:
            type: grok
            pattern: 
            - '%{GREEDYDATA:k8Log}'
            timestamp_field: time
            timestamp_format: yyyy-MM-dd HH:mm:ss,SSS
      - add_fields:
          target: _message_parser
          fields:
            subparsers: "{\\\"parsersList\\\":[{\\\"_message_parser.type\\\":\\\"infra\\\",\\\"_message_parser.field\\\":\\\"k8Log\\\"}]}" 
YML

appdynamics-cloud-k8s-monitoring.logCollectorConfig.filebeatYaml.processors

This section is for defining and configuring processors. For help with Filebeat, see Filter and enhance data with processors.

Parameter | Description

- add_cloud_metadata: ~

Do not modify.

- add_kubernetes_metadata:

Do not modify.

- rename:

Do not modify.

- rename:
    fields:
      - from: "kubernetes.namespace"
        to: "kubernetes.namespace.name"
      - from: "kubernetes"
        to: "k8s"
      - from: k8s.annotations.appdynamics.lca/filebeat.parser
        to: "_message_parser"
      - from: "cloud.instance.id"
        to: "host.id"
    ignore_missing: true
    fail_on_error: false
YML
- add_fields:

target: k8s
fields: cluster.name

Name of your cluster. Must match the cluster name as displayed in Cisco Cloud Observability.

Example:

add_fields:
  target: k8s
  fields:
    cluster.name: <cluster-name>
YML

Based on cluster.name, the Log Collector extracts the following fields automatically: k8s.pod.name, k8s.namespace.name, k8s.container.name, k8s.node.name.

target: k8s
fields: cluster.id

ID of your cluster, which is the metadata.uid of the kube-system namespace. You can get the cluster ID by running this command:

kubectl get ns kube-system -o json
BASH

Example:

add_fields:
  target: k8s
  fields:
    cluster.id: <cluster-id>
YML
- add_fields:

target: source
fields: name

Do not modify.

Example:

- add_fields:
    target: source
    fields:
      name: log-agent
YML

- add_fields:

target: telemetry
fields: sdk.name

Do not modify.

Example:

- add_fields:
    target: telemetry
    fields:
      sdk.name: log-agent
YML

- script: 

Do not modify, except to replace <cluster-name> with the name of your cluster (retain the colon suffix). If your collectors-values.yaml is missing this processor, you must add it.

  - script:
      lang: javascript
      source: >
        function process(event) {
          var podUID = event.Get("k8s.pod.uid");
          if (podUID) {
            event.Put("internal.container.encapsulating_object_id", "<cluster-name>:" + podUID);
          }
          return event;
        }
YML
- drop_fields:



fields: <list>
ignore_missing: <true_or_false>

List of fields you don't want to export to Cisco Cloud Observability.

Example:

- drop_fields:
    fields: ["agent", "stream", "ecs", "input", "orchestrator", "k8s.annotations.appdynamics", "k8s.labels", "k8s.node.labels", "cloud"]
    ignore_missing: true
YML

appdynamics-cloud-k8s-monitoring.logCollectorConfig.filebeatYaml.output.otlploggrpc

This section configures the output of Filebeat logs directly to an OTLP receiver using the OpenTelemetry Protocol (OTLP) Logs Data Model, over either gRPC or HTTP.

Parameter | Description


groupby_resource_fields RENAMED

Do not modify.

groupby_resource_fields:
 - k8s
 - source
 - host
 - container
 - log
 - telemetry
 - internal
 - os
YML
hosts

OTLP receiver endpoint. Default is the otel-collector endpoint, ["${APPD_OTELCOL_GRPC_RECEIVER_HOST}:${APPD_OTELCOL_GRPC_RECEIVER_PORT}"]
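
Example, showing the default otel-collector endpoint:

hosts: ["${APPD_OTELCOL_GRPC_RECEIVER_HOST}:${APPD_OTELCOL_GRPC_RECEIVER_PORT}"]
YML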

ssl.enabled

Enables or disables SSL communication on the export of Filebeat logs to the OTLP receiver. Enabling or disabling SSL here does not affect monitoring.ssl.enabled (SSL settings on the export of Filebeat metrics to the OTLP receiver). Valid values: true, false. Default: false.

Example:

ssl.enabled: true
YML
ssl.certificate_authorities

List of your root CA certificates.

Example for Linux:

ssl.certificate_authorities: ["/opt/appdynamics/certs/ca/ca.pem"]
YML

Example for Windows: 

C:/filebeat/certs/ca/ca.pem
YML
ssl.certificate

Full pathname of your certificate for SSL client authentication.

Example for Linux:

ssl.certificate: "/opt/appdynamics/certs/client/client.pem"
YML

Example for Windows: 

C:/filebeat/certs/client/client.pem
YML
ssl.key

Full pathname of your private client certificate SSL key.

Example for Linux:

ssl.key: "/opt/appdynamics/certs/client/client-key.pem"
YML

Example for Windows: 

C:/filebeat/certs/client/client-key.pem
YML

ssl.supported_protocols NEW

List of TLS protocols that the Log Collector can use. Default: [TLSv1.3]. Valid values you can include in this list: TLSv1.0, TLSv1.1, TLSv1.2, TLSv1.3.
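
For example, to allow both TLS 1.2 and TLS 1.3 (an illustrative value, not the default):

ssl.supported_protocols: [TLSv1.2, TLSv1.3]
YML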

protocol

Protocol to use for export. Valid values: http, grpc. Default: grpc.

grpc_config DELETED


wait_for_ready

Configures the action to take when an RPC is attempted on broken connections or unreachable servers. Valid values: true, false. Default: true.

If false and the connection is in the TRANSIENT_FAILURE state, the RPC fails immediately. Otherwise, the RPC client blocks the call until a connection is available (or the call is canceled or times out) and retries the call if it fails due to a transient error.  

If protocol is grpc, there are no retries if data was written to the wire, unless the server indicates it did not process the data. See gRPC Wait for Ready Semantics.
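
Example, showing the default:

wait_for_ready: true
YML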

batch_size

Number of log records to send together in a single batch, which improves performance. Default and strongly recommended value: 1000.

Best practice is to specify both batch_size and max_bytes. Note that the actual batch size will be less than batch_size if the number of bytes in a batch is more than max_bytes.

max_bytes

If the number of bytes in the OTLP logs packet to be published (which contains all the log records present in a batch) exceeds max_bytes, the Log Collector splits the batch. If max_bytes is null, the Log Collector does not split any batches.

Strongly recommended value: 1000000. This value can also be specified with scientific E notation as 1e+06.

If your otel-collector is getting 413 response code errors, set max_bytes and batch_size to limit the log batch size that is sent to the otel-collector.
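
For example, with the strongly recommended values for both settings:

batch_size: 1000
max_bytes: 1e+06
YML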

summary_debug_logs_interval

Specifies the interval at which summary logs are printed when logging.level is set to debug. Valid values: a duration such as 10ms, 2s, or 3m.
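
Example (the value used in the sample configurations later on this page):

summary_debug_logs_interval: 10s
YML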

Example of summary logs:

{"log.level":"debug","@timestamp":"2022-10-10T03:15:51.648+0530","log.logger":"otlploggrpc","log.origin":{"file.name":"otlploggrpc/debug_metrics_logger.go","file.line":68},"message":"100 logs serialized by OTLP Log gRPC Exporter in 1.120411ms","service.name":"filebeat","ecs.version":"1.6.0"}

{"log.level":"debug","@timestamp":"2022-10-10T03:15:51.652+0530","log.logger":"otlploggrpc","log.origin":{"file.name":"otlplog/debug_metrics_logger.go","file.line":73},"message":"100 logs exported by OTLP Log gRPC Exporter in 4.333174ms","service.name":"filebeat","ecs.version":"1.6.0"} 
CODE

appdynamics-cloud-k8s-monitoring.logCollectorConfig.filebeatYaml.filebeat.registry

This section contains two keys: filebeat.registry.path and filebeat.registry.file_permissions.

Parameter | Description

filebeat.registry.path

Filebeat registry filename. Do not modify.

Example:

filebeat.registry.path: registry1
YML
filebeat.registry.file_permissions

Filebeat registry file permissions. Do not modify.

Example:

filebeat.registry.file_permissions: 0640
YML

appdynamics-cloud-k8s-monitoring.logCollectorConfig.filebeatYaml.path.data

This section configures where Filebeat looks for its registry files. See Configure project paths.

Parameter | Description

path.data

Mount location for the Filebeat registry. This location is fixed; do not change it. However, if this setting is missing from your collectors-values.yaml, the Log Collector collects old (already harvested) logs again each time it restarts.

The following snippets show the path.data line in the context of adjacent lines so that you can see exactly where to add it:

Example for Linux:

logCollectorConfig:
  filebeatYaml: |-
    ...
    filebeat.registry.path: registry7
    filebeat.registry.file_permissions: 0640
    path.data: /opt/appdynamics/logcollector-agent/data
YML

Example for Windows: 

logCollectorConfig:
  filebeatYaml: |-
    ...
    filebeat.registry.path: registry7
    filebeat.registry.file_permissions: 0640
    path.data: C:/ProgramData/filebeat/data
YML

appdynamics-cloud-k8s-monitoring.logCollectorConfig.filebeatYaml.logging

This section configures the Log Collector to log its Filebeat activity.

Parameter | Description

level

Logging level. Valid values: info, debug, warn, error. Default: info.

to_files

Enables or disables writing Filebeat logs to files. Valid values: true, false. Default: false.

files

path

Path to log files. Default: /opt/appdynamics/logcollector-agent/log.

name

Prefix of log files. Default: lca-log.

keepfiles

Number of log files to keep if Filebeat logging is enabled. Default: 5.

permissions

File permissions on log files. Default: 0640.

selectors

Selector (filter) to limit logging to matching components only. Valid values: monitoring, otlp. Default: [].

metrics

enabled

Enables or disables metrics logging. Valid values: true, false. Default: false. If true, the Log Collector writes metrics to the log file if logging.files.enabled is true, or to the console if logging.files.enabled is false.

period

Frequency of metrics collection. Valid values: 0-99s, 0-99m. Default: 30s (30 seconds). Ignored if logging.metrics.enabled is false.
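
A complete logging block, as used in the sample configurations later on this page:

logging:
  level: info
  to_files: false
  files:
    path: /opt/appdynamics/logcollector-agent/log
    name: lca-log
    keepfiles: 5
    permissions: 0640
  selectors: []
  metrics:
    enabled: false
    period: 30s
YML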

appdynamics-cloud-k8s-monitoring.logCollectorConfig.filebeatYaml.monitoring

This section configures the export of the Log Collector's own metrics.

Parameter | Description

enabled

Enables or disables export of Log Collector metrics to a backend. Default: false.

otlpmetric

This section contains settings for exporting the Log Collector's own metrics directly to an OTLP receiver using the OTLP Metrics data model, over either gRPC or HTTP.

endpoint

OTLP receiver endpoint. Default is the otel-collector endpoint, "${APPD_OTELCOL_GRPC_RECEIVER_HOST}:${APPD_OTELCOL_GRPC_RECEIVER_PORT}"

protocol

Protocol to use for export. Valid values: http, grpc. Default: grpc.

collect_period

Internal collection period. Default: 10s.

report_period

Reporting period to the OTLP backend. Default: 60s.
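
A minimal sketch of these settings together, using the defaults listed above:

otlpmetric:
  endpoint: "${APPD_OTELCOL_GRPC_RECEIVER_HOST}:${APPD_OTELCOL_GRPC_RECEIVER_PORT}"
  protocol: grpc
  collect_period: 10s
  report_period: 60s
YML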

resource_attributes RENAMED

List of resource attributes to be added to metrics packets. Default: none (empty list).

Example: 

resource_attributes:
  - key: resource-key-1
    value: resource-value-1
  - key: resource-key-2
    value: resource-value-2     
YML


k8s.cluster.name

Cluster name.

Example:

k8s.cluster.name: "<cluster-name>"   
YML


k8s.pod.name: "${POD_NAME}"

Pod name. Do not modify.

Example:

k8s.pod.name: "${POD_NAME}"
YML

metrics

List of metrics to capture. If this list is empty, the Log Collector captures all metrics. If this parameter is omitted, the Log Collector captures the default list of metrics in the example.

Example:

    metrics:
      - beat.memstats.memory_alloc
      - filebeat.events.active
      - filebeat.harvester.running
      - filebeat.harvester.skipped
      - filebeat.input.log.files.truncated
      - libbeat.output.read.errors
      - libbeat.output.write.bytes
      - libbeat.output.write.errors
      - system.load.norm.5
      - system.load.norm.15
YML

retry

Metrics exporter retry configuration, used when exporting to the metrics backend fails.

enabled

Enables or disables retry of failed batches. Valid values: true, false. Default: false.

initial_interval

Time to wait after the first failure before retrying. Specify this as an int64 with a unit suffix. For example, 500ms.

max_interval

Maximum time to wait between consecutive failures. Specify this as an int64 with a unit suffix. For example, 500ms. Once this value is reached, the delay between consecutive retries is always this value.

max_elapsed_time

Maximum amount of time (including retries) spent trying to send a request or batch. Once this value is reached, the data is discarded.
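
Example, as used in the sample configurations later on this page:

retry:
  enabled: true
  initial_interval: 1s
  max_interval: 1m
  max_elapsed_time: 5m
YML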

ssl.enabled

Enables or disables SSL communication on the export of Filebeat metrics to the OTLP receiver. Enabling or disabling SSL here does not affect output.otlploggrpc.ssl.enabled (SSL settings on the export of Filebeat logs to the OTLP receiver). Valid values: true, false. Default: false.


ssl.certificate_authorities NEW

List of your root CA certificates.

Example for Linux:

ssl.certificate_authorities: ["/opt/appdynamics/certs/ca/ca.pem"]
YML

Example for Windows: 

C:/filebeat/certs/ca/ca.pem
YML

ssl.certificate NEW

Full pathname of your certificate for SSL client authentication.

Example for Linux:

ssl.certificate: "/opt/appdynamics/certs/client/client.pem"
YML

Example for Windows: 

C:/filebeat/certs/client/client.pem
YML

ssl.key NEW

Full pathname of your private client certificate SSL key.

Example for Linux:

ssl.key: "/opt/appdynamics/certs/client/client-key.pem"
YML

Example for Windows: 

C:/filebeat/certs/client/client-key.pem
YML


ssl.supported_protocols NEW

List of TLS protocols that the Log Collector can use. Default: [TLSv1.3]. Valid values you can include in this list: TLSv1.0, TLSv1.1, TLSv1.2, TLSv1.3.




Sample Configurations

The following samples show complete collectors-values.yaml files. The first uses a single top-level filebeatYaml; the others use the os and env override layout for Linux-only, Windows-only, and dual-OS deployments.

global:
  clusterName: <ClusterName>
appdynamics-otel-collector:
  clientId: <client-id>
  clientSecret: <client-secret>
  endpoint: <endpoint>
  tokenUrl: <token-url>
  spec:
    image: <image-url>
    imagePullPolicy: IfNotPresent
  config:
    exporters:
      logging:
        loglevel: debug
appdynamics-cloud-k8s-monitoring:
  install:
    logCollector: true
    defaultInfraCollectors: false
    clustermon: false
  clustermonPod:
    image: <image-url>
    nodeSelector:
      kubernetes.io/os: linux
  inframonPod:
    image: <image-url>
    nodeSelector:
      kubernetes.io/os: linux
  logCollectorPod:
    imagePullPolicy: IfNotPresent
  logCollectorConfig:
    filebeatYaml: |-
      filebeat.autodiscover:
        providers:
          - type: kubernetes
            node: ${NODE_NAME}
            labels.dedot: false
            annotations.dedot: false
            hints.enabled: true
            hints.default_config.enabled: false
            templates:
              - condition:
                  equals:
                    kubernetes.container.name: log-generator-logback
                config:
                  - type: filestream
                    id: fsid-${data.kubernetes.pod.name}-${data.kubernetes.container.id}
                    close_removed: false
                    clean_removed: false
                    paths:
                      - /var/log/containers/${data.kubernetes.pod.name}*${data.kubernetes.container.id}.log
                    parsers:
                      - container:
                          stream: all
                          format: auto
                      - multiline:
                          type: pattern
                          pattern: "^[0-9]{4}"
                          negate: true
                          match: after
                    prospector.scanner.symlinks: true
                    processors:
                      - add_fields:
                          target: appd
                          fields:
                              log.format: logs:logback_logs
                      - add_fields:
                          target: _message_parser
                          fields:
                            type: logback
                            pattern: "%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n"
      processors:
        - add_cloud_metadata: ~
        - add_kubernetes_metadata:
            in_cluster: true
            host: ${NODE_NAME}
            matchers:
              - logs_path:
                  logs_path: "/var/log/containers/"
        - copy_fields:
            fields:
              - from: "kubernetes.deployment.name"
                to: "kubernetes.workload.name"
              - from: "kubernetes.daemonset.name"
                to: "kubernetes.workload.name"
              - from: "kubernetes.statefulset.name"
                to: "kubernetes.workload.name"
              - from: "kubernetes.replicaset.name"
                to: "kubernetes.workload.name"
              - from: "kubernetes.cronjob.name"
                to: "kubernetes.workload.name"
              - from: "kubernetes.job.name"
                to: "kubernetes.workload.name"
            fail_on_error: false
            ignore_missing: true
        - rename:
            fields:
              - from: "kubernetes.namespace"
                to: "kubernetes.namespace.name"
              - from: "kubernetes"
                to: "k8s"
              - from: k8s.annotations.appdynamics.lca/filebeat.parser
                to: "_message_parser"
              - from: "cloud.instance.id"
                to: "host.id"
            ignore_missing: true
            fail_on_error: false
        - drop_fields:
            fields: ["agent", "stream", "ecs", "input", "orchestrator", "k8s.annotations.appdynamics", "k8s.labels", "k8s.node.labels", "cloud"]
            ignore_missing: true
        - script:
            lang: javascript
            source: >
              function process(event) {
                var podUID = event.Get("k8s.pod.uid");
                if (podUID) {
                  event.Put("internal.container.encapsulating_object_id", "<cluster-id>:" + podUID);
                }
                return event;
              }
        - dissect:
            tokenizer: "%{name}:%{tag}"
            field: "container.image.name"
            target_prefix: "container.image"
            ignore_failure: true
            overwrite_keys: true
        - add_fields:
            target: k8s
            fields:
              cluster.name: <cluster-name>
              cluster.id: <cluster-id>
        - add_fields:
            target: telemetry
            fields:
              sdk.name: log-agent
      output.otlploggrpc:
        groupby_resource_fields:
          - k8s
          - source
          - host
          - container
          - log
          - telemetry
          - internal
          - os
        hosts: ["${APPD_OTELCOL_GRPC_RECEIVER_HOST}:14317"]
        worker: 1
        max_bytes: 1e+06
        ssl.enabled: false
        wait_for_ready: true
        batch_size: 1000
        summary_debug_logs_interval: 10s
      filebeat.registry.path: registry1
      filebeat.registry.file_permissions: 0640
      path.data: /opt/appdynamics/logcollector-agent/data
      logging:
        level: info
        to_files: false
        files:
          path: /opt/appdynamics/logcollector-agent/log
          name: lca-log
          keepfiles: 5
          permissions: 0640
        selectors: []
        metrics:
          enabled: false
          period: 30s
      monitoring:
        enabled: true
        otlpmetric:
          endpoint: ${APPD_OTELCOL_GRPC_RECEIVER_HOST}:${APPD_OTELCOL_GRPC_RECEIVER_PORT}
          protocol: grpc
          collect_period: 30s
          report_period:
          resource_attributes:
            k8s.cluster.name: "<ClusterName>"
            k8s.cluster.id: "<ClusterId>"
            k8s.pod.name: "${POD_NAME}"
            k8s.pod.uid: "${POD_UID}"
            service.instance.id: "${POD_UID}"
            service.version: "23.4.0-567"
            source.name: "log-agent"
            service.namespace: "log-agent"
            service.name: "log-collector-agent"
          metrics:
            - beat.memstats.memory_alloc
            - filebeat.events.active
            - filebeat.harvester.running
            - filebeat.harvester.skipped
            - filebeat.input.log.files.truncated
            - libbeat.output.read.errors
            - libbeat.output.write.bytes
            - libbeat.output.write.errors
            - system.load.norm.5
            - system.load.norm.15
            - libbeat.pipeline.events.filtered
          retry:
            enabled: true
            initial_interval: 1s
            max_interval: 1m
            max_elapsed_time: 5m
          ssl.enabled: false
YML
This sample deploys the Log Collector on Linux only, using the env.linux override:

  global:
    clusterName: <ClusterName>
  appdynamics-otel-collector:
    clientId: <client-id>
    clientSecret: <client-secret>
    endpoint: <endpoint>
    tokenUrl: <token-url>
    
    spec:
      image: <image-url>
      imagePullPolicy: IfNotPresent
    config:
      exporters:
        logging:
          loglevel: debug
    
  appdynamics-cloud-k8s-monitoring:
    install:
      logCollector: true
      defaultInfraCollectors: false
      clustermon: false
    
    clustermonPod:
      image: <image-url>
      nodeSelector:
        kubernetes.io/os: linux
    
    inframonPod:
      image: <image-url>
      nodeSelector:
        kubernetes.io/os: linux
    
    logCollectorPod:
      imagePullPolicy: IfNotPresent
      env:
        linux:
          image: <image-url>
    
    logCollectorConfig:
      os: [linux]
      env:
        linux:
          filebeatYaml: |-
            filebeat.autodiscover:
              providers:
                - type: kubernetes
                  node: ${NODE_NAME}
                  labels.dedot: false
                  annotations.dedot: false
                  hints.enabled: true
                  hints.default_config.enabled: false
                  templates:
                    - condition:
                        equals:
                          kubernetes.container.name: log-generator-logback
                      config:
                        - type: filestream
                          id: fsid-${data.kubernetes.pod.name}-${data.kubernetes.container.id}
                          close_removed: false
                          clean_removed: false
                          paths:
                            - /var/log/containers/${data.kubernetes.pod.name}*${data.kubernetes.container.id}.log
                          parsers:
                            - container:
                                stream: all
                                format: auto
                            - multiline:
                                type: pattern
                                pattern: "^[0-9]{4}"
                                negate: true
                                match: after
                          prospector.scanner.symlinks: true
                          processors:
                            - add_fields:
                                target: appd
                                fields:
                                    log.format: logs:logback_logs
                            - add_fields:
                                target: _message_parser
                                fields:
                                  type: logback
                                  pattern: "%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n"
            processors:
              - add_cloud_metadata: ~
              - add_kubernetes_metadata:
                  in_cluster: true
                  host: ${NODE_NAME}
                  matchers:
                    - logs_path:
                        logs_path: "/var/log/containers/"
              - copy_fields:
                  fields:
                    - from: "kubernetes.deployment.name"
                      to: "kubernetes.workload.name"
                    - from: "kubernetes.daemonset.name"
                      to: "kubernetes.workload.name"
                    - from: "kubernetes.statefulset.name"
                      to: "kubernetes.workload.name"
                    - from: "kubernetes.replicaset.name"
                      to: "kubernetes.workload.name"
                    - from: "kubernetes.cronjob.name"
                      to: "kubernetes.workload.name"
                    - from: "kubernetes.job.name"
                      to: "kubernetes.workload.name"
                  fail_on_error: false
                  ignore_missing: true
              - rename:
                  fields:
                    - from: "kubernetes.namespace"
                      to: "kubernetes.namespace.name"
                    - from: "kubernetes"
                      to: "k8s"
                    - from: k8s.annotations.appdynamics.lca/filebeat.parser
                      to: "_message_parser"
                    - from: "cloud.instance.id"
                      to: "host.id"
                  ignore_missing: true
                  fail_on_error: false
              - drop_fields:
                  fields: ["agent", "stream", "ecs", "input", "orchestrator", "k8s.annotations.appdynamics", "k8s.labels", "k8s.node.labels", "cloud"]
                  ignore_missing: true
              - script:
                  lang: javascript
                  source: >
                    function process(event) {
                      var podUID = event.Get("k8s.pod.uid");
                      if (podUID) {
                        event.Put("internal.container.encapsulating_object_id", "<cluster-id>:" + podUID);
                      }
                      return event;
                    }
              - dissect:
                  tokenizer: "%{name}:%{tag}"
                  field: "container.image.name"
                  target_prefix: "container.image"
                  ignore_failure: true
                  overwrite_keys: true
              - add_fields:
                  target: k8s
                  fields:
                    cluster.name: <cluster-name>
                    cluster.id: <cluster-id>
              - add_fields:
                  target: telemetry
                  fields:
                    sdk.name: log-agent
            output.otlploggrpc:
              groupby_resource_fields:
                - k8s
                - source
                - host
                - container
                - log
                - telemetry
                - internal
                - os
              hosts: ["${APPD_OTELCOL_GRPC_RECEIVER_HOST}:14317"]
              worker: 1
              max_bytes: 1e+06
              ssl.enabled: false
              wait_for_ready: true
              batch_size: 1000
              summary_debug_logs_interval: 10s
            filebeat.registry.path: registry1
            filebeat.registry.file_permissions: 0640
            path.data: /opt/appdynamics/logcollector-agent/data
            logging:
              level: info
              to_files: false
              files:
                path: /opt/appdynamics/logcollector-agent/log
                name: lca-log
                keepfiles: 5
                permissions: 0640
              selectors: []
              metrics:
                enabled: false
                period: 30s
            monitoring:
              enabled: true
              otlpmetric:
                endpoint: ${APPD_OTELCOL_GRPC_RECEIVER_HOST}:${APPD_OTELCOL_GRPC_RECEIVER_PORT}
                protocol: grpc
                collect_period: 30s
                report_period:
                resource_attributes:
                  k8s.cluster.name: "<ClusterName>"
                  k8s.cluster.id: "<ClusterId>"
                  k8s.pod.name: "${POD_NAME}"
                  k8s.pod.uid: "${POD_UID}"
                  service.instance.id: "${POD_UID}"
                  service.version: "23.4.0-567"
                  source.name: "log-agent"
                  service.namespace: "log-agent"
                  service.name: "log-collector-agent"
                metrics:
                  - beat.memstats.memory_alloc
                  - filebeat.events.active
                  - filebeat.harvester.running
                  - filebeat.harvester.skipped
                  - filebeat.input.log.files.truncated
                  - libbeat.output.read.errors
                  - libbeat.output.write.bytes
                  - libbeat.output.write.errors
                  - system.load.norm.5
                  - system.load.norm.15
                  - libbeat.pipeline.events.filtered
                retry:
                  enabled: true
                  initial_interval: 1s
                  max_interval: 1m
                  max_elapsed_time: 5m
                ssl.enabled: false
YML
This sample deploys the Log Collector on Windows only, using the env.windows override:

  global:
    clusterName: <ClusterName>
  appdynamics-otel-collector:
    clientId: <client-id>
    clientSecret: <client-secret>
    endpoint: <endpoint>
    tokenUrl: <token-url>
    
    spec:
      image: <image-url>
      imagePullPolicy: IfNotPresent
    config:
      exporters:
        logging:
          loglevel: debug
    
  appdynamics-cloud-k8s-monitoring:
    install:
      logCollector: true
      defaultInfraCollectors: false
      clustermon: false
    
    clustermonPod:
      image: <image-url>
      nodeSelector:
        kubernetes.io/os: linux
    
    inframonPod:
      image: <image-url>
      nodeSelector:
        kubernetes.io/os: linux
    
    logCollectorPod:
      imagePullPolicy: IfNotPresent
      env:
        windows:
          image: <image-url>
    
    logCollectorConfig:
      os: [windows]
      env:
        windows:
          filebeatYaml: |-
            filebeat.autodiscover:
              providers:
                - type: kubernetes
                  node: ${NODE_NAME}
                  labels.dedot: false
                  annotations.dedot: false
                  hints.enabled: true
                  hints.default_config.enabled: false
                  templates:
                    - condition:
                        equals:
                          kubernetes.container.name: log-generator-logback
                      config:
                        - type: filestream
                          id: fsid-${data.kubernetes.pod.name}-${data.kubernetes.container.id}
                          close_removed: false
                          clean_removed: false
                          paths:
                            - C:/var/log/containers/${data.kubernetes.pod.name}*${data.kubernetes.container.id}.log
                          parsers:
                            - container:
                                stream: all
                                format: auto
                            - multiline:
                                type: pattern
                                pattern: "^[0-9]{4}"
                                negate: true
                                match: after
                          prospector.scanner.symlinks: true
                          processors:
                            - add_fields:
                                target: appd
                                fields:
                                    log.format: logs:logback_logs
                            - add_fields:
                                target: _message_parser
                                fields:
                                  type: logback
                                  pattern: "%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n"
            processors:
              - add_cloud_metadata: ~
              - add_kubernetes_metadata:
                  in_cluster: true
                  host: ${NODE_NAME}
                  matchers:
                    - logs_path:
                        logs_path: "/var/log/containers/"
              - copy_fields:
                  fields:
                    - from: "kubernetes.deployment.name"
                      to: "kubernetes.workload.name"
                    - from: "kubernetes.daemonset.name"
                      to: "kubernetes.workload.name"
                    - from: "kubernetes.statefulset.name"
                      to: "kubernetes.workload.name"
                    - from: "kubernetes.replicaset.name"
                      to: "kubernetes.workload.name"
                    - from: "kubernetes.cronjob.name"
                      to: "kubernetes.workload.name"
                    - from: "kubernetes.job.name"
                      to: "kubernetes.workload.name"
                  fail_on_error: false
                  ignore_missing: true
              - rename:
                  fields:
                    - from: "kubernetes.namespace"
                      to: "kubernetes.namespace.name"
                    - from: "kubernetes"
                      to: "k8s"
                    - from: k8s.annotations.appdynamics.lca/filebeat.parser
                      to: "_message_parser"
                    - from: "cloud.instance.id"
                      to: "host.id"
                  ignore_missing: true
                  fail_on_error: false
              - drop_fields:
                  fields: ["agent", "stream", "ecs", "input", "orchestrator", "k8s.annotations.appdynamics", "k8s.labels", "k8s.node.labels", "cloud"]
                  ignore_missing: true
              - script:
                  lang: javascript
                  source: >
                    function process(event) {
                      var podUID = event.Get("k8s.pod.uid");
                      if (podUID) {
                        event.Put("internal.container.encapsulating_object_id", "<cluster-id>:" + podUID);
                      }
                      return event;
                    }
              - dissect:
                  tokenizer: "%{name}:%{tag}"
                  field: "container.image.name"
                  target_prefix: "container.image"
                  ignore_failure: true
                  overwrite_keys: true
              - add_fields:
                  target: k8s
                  fields:
                    cluster.name: <cluster-name>
                    cluster.id: <cluster-id>
              - add_fields:
                  target: telemetry
                  fields:
                    sdk.name: log-agent
            output.otlploggrpc:
              groupby_resource_fields:
                - k8s
                - source
                - host
                - container
                - log
                - telemetry
                - internal
                - os
              hosts: ["${APPD_OTELCOL_GRPC_RECEIVER_HOST}:14317"]
              worker: 1
              max_bytes: 1e+06
              ssl.enabled: false
              wait_for_ready: true
              batch_size: 1000
              summary_debug_logs_interval: 10s
            filebeat.registry.path: registry1
            filebeat.registry.file_permissions: 0640
            path.data: C:/ProgramData/filebeat/data
            logging:
              level: info
              to_files: false
              files:
                path: C:/ProgramData/filebeat/log
                name: lca-log
                keepfiles: 5
                permissions: 0640
              selectors: []
              metrics:
                enabled: false
                period: 30s
            monitoring:
              enabled: true
              otlpmetric:
                endpoint: ${APPD_OTELCOL_GRPC_RECEIVER_HOST}:${APPD_OTELCOL_GRPC_RECEIVER_PORT}
                protocol: grpc
                collect_period: 30s
                report_period:
                resource_attributes:
                  k8s.cluster.name: "<ClusterName>"
                  k8s.cluster.id: "<ClusterId>"
                  k8s.pod.name: "${POD_NAME}"
                  k8s.pod.uid: "${POD_UID}"
                  service.instance.id: "${POD_UID}"
                  service.version: "23.4.0-567"
                  source.name: "log-agent"
                  service.namespace: "log-agent"
                  service.name: "log-collector-agent"
                metrics:
                  - beat.memstats.memory_alloc
                  - filebeat.events.active
                  - filebeat.harvester.running
                  - filebeat.harvester.skipped
                  - filebeat.input.log.files.truncated
                  - libbeat.output.read.errors
                  - libbeat.output.write.bytes
                  - libbeat.output.write.errors
                  - system.load.norm.5
                  - system.load.norm.15
                  - libbeat.pipeline.events.filtered
                retry:
                  enabled: true
                  initial_interval: 1s
                  max_interval: 1m
                  max_elapsed_time: 5m
                ssl.enabled: false
YML
The following example shows a collectors-values.yaml that deploys the Log Collector on both Linux and Windows nodes, with a separate filebeatYaml override for each operating system:

  global:
    clusterName: <cluster-name>
  appdynamics-otel-collector:
    clientId: <client-id>
    clientSecret: <client-secret>
    endpoint: <endpoint>
    tokenUrl: <token-url>
    
    spec:
      image: <image-url>
      imagePullPolicy: IfNotPresent
    config:
      exporters:
        logging:
          loglevel: debug
    
  appdynamics-cloud-k8s-monitoring:
    install:
      logCollector: true
      defaultInfraCollectors: false
      clustermon: false
    
    clustermonPod:
      image: <image-url>
      nodeSelector:
        kubernetes.io/os: linux
    
    inframonPod:
      image: <image-url>
      nodeSelector:
        kubernetes.io/os: linux
    
    logCollectorPod:
      imagePullPolicy: IfNotPresent
      env:
        windows:
          image: <image-url>
        linux:
          image: <image-url>          
    
    logCollectorConfig:
      os: [linux, windows]
      env:
        linux:
          filebeatYaml: |-
            filebeat.autodiscover:
              providers:
                - type: kubernetes
                  node: ${NODE_NAME}
                  labels.dedot: false
                  annotations.dedot: false
                  hints.enabled: true
                  hints.default_config.enabled: false
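                  # Hints are enabled but the fallback config is disabled, so only
                  # containers that match a template condition below are harvested.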
                  templates:
                    - condition:
                        equals:
                          kubernetes.container.name: log-generator-logback
                      config:
                        - type: filestream
                          id: fsid-${data.kubernetes.pod.name}-${data.kubernetes.container.id}
                          close_removed: false
                          clean_removed: false
                          paths:
                            - /var/log/containers/${data.kubernetes.pod.name}*${data.kubernetes.container.id}.log
                          parsers:
                            - container:
                                stream: all
                                format: auto
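                            # Lines that do not start with four digits (such as a year)
                            # are joined to the previous line, for example stack traces.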
                            - multiline:
                                type: pattern
                                pattern: "^[0-9]{4}"
                                negate: true
                                match: after
                          prospector.scanner.symlinks: true
                          processors:
                            - add_fields:
                                target: appd
                                fields:
                                  log.format: logs:logback_logs
                            - add_fields:
                                target: _message_parser
                                fields:
                                  type: logback
                                  pattern: "%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n"
            processors:
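              # These processors run on every event from every input before it is sent to the output.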
              - add_cloud_metadata: ~
              - add_kubernetes_metadata:
                  in_cluster: true
                  host: ${NODE_NAME}
                  matchers:
                    - logs_path:
                        logs_path: "/var/log/containers/"
              - copy_fields:
                  fields:
                    - from: "kubernetes.deployment.name"
                      to: "kubernetes.workload.name"
                    - from: "kubernetes.daemonset.name"
                      to: "kubernetes.workload.name"
                    - from: "kubernetes.statefulset.name"
                      to: "kubernetes.workload.name"
                    - from: "kubernetes.replicaset.name"
                      to: "kubernetes.workload.name"
                    - from: "kubernetes.cronjob.name"
                      to: "kubernetes.workload.name"
                    - from: "kubernetes.job.name"
                      to: "kubernetes.workload.name"
                  fail_on_error: false
                  ignore_missing: true
              - rename:
                  fields:
                    - from: "kubernetes.namespace"
                      to: "kubernetes.namespace.name"
                    - from: "kubernetes"
                      to: "k8s"
                    - from: "k8s.annotations.appdynamics.lca/filebeat.parser"
                      to: "_message_parser"
                    - from: "cloud.instance.id"
                      to: "host.id"
                  ignore_missing: true
                  fail_on_error: false
              - add_fields:
                  target: k8s
                  fields:
                    cluster.name: <cluster-name>
              - add_fields:
                  target: k8s
                  fields:
                    cluster.id: <cluster-id>
              - add_fields:
                  target: source
                  fields:
                    name: log-agent
              - add_fields:
                  target: telemetry
                  fields:
                    sdk.name: log-agent
              - add_fields:
                  target: os
                  fields:
                    type: linux
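              # Derive internal.container.encapsulating_object_id as "<cluster-id>:<pod-uid>".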
              - script:
                  lang: javascript
                  source: >
                    function process(event) {
                      var podUID = event.Get("k8s.pod.uid");
                      if (podUID) {
                        event.Put("internal.container.encapsulating_object_id", "<ClusterId>:" + podUID);
                      }
                      return event;
                    }
              - drop_fields:
                  fields: ["agent", "stream", "ecs", "input", "orchestrator", "k8s.annotations.appdynamics", "k8s.labels", "k8s.node.labels", "cloud"]
                  ignore_missing: true
            output.otlploggrpc:
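              # Send the enriched log events over OTLP/gRPC to the OpenTelemetry Collector's gRPC receiver.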
              groupby_resource_fields:
                - k8s
                - source
                - host
                - container
                - log
                - telemetry
                - internal
                - os
              hosts: ["${APPD_OTELCOL_GRPC_RECEIVER_HOST}:14317"]
              worker: 1
              max_bytes: 1e+06
              ssl.enabled: false
              wait_for_ready: true
              batch_size: 1000
              summary_debug_logs_interval: 10s
            filebeat.registry.path: registry1
            filebeat.registry.file_permissions: 0640
            path.data: /opt/appdynamics/logcollector-agent/data
            logging:
              level: info
              to_files: false
              files:
                path: /opt/appdynamics/logcollector-agent/log
                name: lca-log
                keepfiles: 5
                permissions: 0640
              selectors: []
              metrics:
                enabled: false
                period: 30s
            monitoring:
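              # Self-monitoring: ships the Log Collector's own health metrics over OTLP.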
              enabled: true
              otlpmetric:
                endpoint: ${APPD_OTELCOL_GRPC_RECEIVER_HOST}:${APPD_OTELCOL_GRPC_RECEIVER_PORT}
                protocol: grpc
                collect_period: 30s
                report_period:
                resource_attributes:
                  k8s.cluster.name: "<cluster-name>"
                  k8s.cluster.id: "<cluster-id>"
                  k8s.pod.name: "${POD_NAME}"
                  k8s.pod.uid: "${POD_UID}"
                  service.instance.id: "${POD_UID}"
                  service.version: "23.4.0-567"
                  source.name: "log-agent"
                  service.namespace: "log-agent"
                  service.name: "log-collector-agent"
                metrics:
                  - beat.memstats.memory_alloc
                  - filebeat.events.active
                  - filebeat.harvester.running
                  - filebeat.harvester.skipped
                  - filebeat.input.log.files.truncated
                  - libbeat.output.read.errors
                  - libbeat.output.write.bytes
                  - libbeat.output.write.errors
                  - system.load.norm.5
                  - system.load.norm.15
                  - libbeat.pipeline.events.filtered
                retry:
                  enabled: true
                  initial_interval: 1s
                  max_interval: 1m
                  max_elapsed_time: 5m
                ssl.enabled: false      
        windows:
          filebeatYaml: |-
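            # Windows-specific override; note the Windows-style container log and data paths.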
            filebeat.autodiscover:
              providers:
                - type: kubernetes
                  node: ${NODE_NAME}
                  labels.dedot: false
                  annotations.dedot: false
                  hints.enabled: true
                  hints.default_config.enabled: false
                  templates:
                    - condition:
                        equals:
                          kubernetes.container.name: log-generator-logback
                      config:
                        - type: filestream
                          id: fsid-${data.kubernetes.pod.name}-${data.kubernetes.container.id}
                          close_removed: false
                          clean_removed: false
                          paths:
                            - C:/var/log/containers/${data.kubernetes.pod.name}*${data.kubernetes.container.id}.log
                          parsers:
                            - container:
                                stream: all
                                format: auto
                            - multiline:
                                type: pattern
                                pattern: "^[0-9]{4}"
                                negate: true
                                match: after
                          prospector.scanner.symlinks: true
                          processors:
                            - add_fields:
                                target: appd
                                fields:
                                  log.format: logs:logback_logs
                            - add_fields:
                                target: _message_parser
                                fields:
                                  type: logback
                                  pattern: "%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n"
            processors:
              - add_cloud_metadata: ~
              - add_kubernetes_metadata:
                  in_cluster: true
                  host: ${NODE_NAME}
                  matchers:
                    - logs_path:
                        logs_path: "/var/log/containers/"
              - copy_fields:
                  fields:
                    - from: "kubernetes.deployment.name"
                      to: "kubernetes.workload.name"
                    - from: "kubernetes.daemonset.name"
                      to: "kubernetes.workload.name"
                    - from: "kubernetes.statefulset.name"
                      to: "kubernetes.workload.name"
                    - from: "kubernetes.replicaset.name"
                      to: "kubernetes.workload.name"
                    - from: "kubernetes.cronjob.name"
                      to: "kubernetes.workload.name"
                    - from: "kubernetes.job.name"
                      to: "kubernetes.workload.name"
                  fail_on_error: false
                  ignore_missing: true
              - rename:
                  fields:
                    - from: "kubernetes.namespace"
                      to: "kubernetes.namespace.name"
                    - from: "kubernetes"
                      to: "k8s"
                    - from: "k8s.annotations.appdynamics.lca/filebeat.parser"
                      to: "_message_parser"
                    - from: "cloud.instance.id"
                      to: "host.id"
                  ignore_missing: true
                  fail_on_error: false
              - drop_fields:
                  fields: ["agent", "stream", "ecs", "input", "orchestrator", "k8s.annotations.appdynamics", "k8s.labels", "k8s.node.labels", "cloud"]
                  ignore_missing: true
              - script:
                  lang: javascript
                  source: >
                    function process(event) {
                      var podUID = event.Get("k8s.pod.uid");
                      if (podUID) {
                        event.Put("internal.container.encapsulating_object_id", "<cluster-id>:" + podUID);
                      }
                      return event;
                    }
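              # Split container.image.name of the form "name:tag" into separate name and tag fields.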
              - dissect:
                  tokenizer: "%{name}:%{tag}"
                  field: "container.image.name"
                  target_prefix: "container.image"
                  ignore_failure: true
                  overwrite_keys: true
              - add_fields:
                  target: k8s
                  fields:
                    cluster.name: <cluster-name>
                    cluster.id: <cluster-id>
              - add_fields:
                  target: telemetry
                  fields:
                    sdk.name: log-agent
            output.otlploggrpc:
              groupby_resource_fields:
                - k8s
                - source
                - host
                - container
                - log
                - telemetry
                - internal
                - os
              hosts: ["${APPD_OTELCOL_GRPC_RECEIVER_HOST}:14317"]
              worker: 1
              max_bytes: 1e+06
              ssl.enabled: false
              wait_for_ready: true
              batch_size: 1000
              summary_debug_logs_interval: 10s
            filebeat.registry.path: registry1
            filebeat.registry.file_permissions: 0640
            path.data: C:/ProgramData/filebeat/data
            logging:
              level: info
              to_files: false
              files:
                path: C:/ProgramData/filebeat/log
                name: lca-log
                keepfiles: 5
                permissions: 0640
              selectors: []
              metrics:
                enabled: false
                period: 30s
            monitoring:
              enabled: true
              otlpmetric:
                endpoint: ${APPD_OTELCOL_GRPC_RECEIVER_HOST}:${APPD_OTELCOL_GRPC_RECEIVER_PORT}
                protocol: grpc
                collect_period: 30s
                report_period:
                resource_attributes:
                  k8s.cluster.name: "<cluster-name>"
                  k8s.cluster.id: "<cluster-id>"
                  k8s.pod.name: "${POD_NAME}"
                  k8s.pod.uid: "${POD_UID}"
                  service.instance.id: "${POD_UID}"
                  service.version: "23.4.0-567"
                  source.name: "log-agent"
                  service.namespace: "log-agent"
                  service.name: "log-collector-agent"
                metrics:
                  - beat.memstats.memory_alloc
                  - filebeat.events.active
                  - filebeat.harvester.running
                  - filebeat.harvester.skipped
                  - filebeat.input.log.files.truncated
                  - libbeat.output.read.errors
                  - libbeat.output.write.bytes
                  - libbeat.output.write.errors
                  - system.load.norm.5
                  - system.load.norm.15
                  - libbeat.pipeline.events.filtered
                retry:
                  enabled: true
                  initial_interval: 1s
                  max_interval: 1m
                  max_elapsed_time: 5m
                ssl.enabled: false
YML
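
Because filebeatYaml.logging only requires a value override, you do not need to repeat an entire filebeatYaml section to change one setting for a single operating system. The following minimal sketch, which assumes that the shared appdynamics-cloud-k8s-monitoring.logCollectorConfig.filebeatYaml supplies the rest of the configuration, raises only the logging level on Windows nodes:

  appdynamics-cloud-k8s-monitoring:
    logCollectorConfig:
      os: [linux, windows]
      env:
        windows:
          filebeatYaml: |-
            logging:
              level: debug
YML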

OpenTelemetry™ and Kubernetes® (as applicable) are trademarks of The Linux Foundation®.