Log Collector Settings - Advanced YAML Layout
This page explains various settings you can customize in collectors-values.yaml for the Log Collector.
This page only applies to Log Collector deployments that use the advanced layout of collectors-values.yaml. It describes advanced settings for the Log Collector. If your collectors-values.yaml is in the simplified YAML layout available in June 2022 and later, see Log Collector Settings.
The Log Collector sends the entire appdynamics-cloud-k8s-monitoring.logCollectorConfig.filebeatYaml to Filebeat as a string. There are seven top-level keys within appdynamics-cloud-k8s-monitoring.logCollectorConfig.filebeatYaml:
logCollectorConfig:
filebeatYaml: |-
filebeat.autodiscover: ...
processors: ...
output.otlploggrpc: ...
filebeat.registry: ...
path.data: ...
logging: ...
monitoring: ...
This page describes these keys plus one that is located in logCollectorPod.
appdynamics-cloud-k8s-monitoring.logCollectorPod.env
In the appdynamics-cloud-k8s-monitoring.logCollectorPod.env section, you can specify environment variables such as the location of operating system-specific images of the Log Collector.
Parameter | Description
---|---
env | Section for environment variables.
linux.image | Location of the Log Collector image for Linux.
windows.image | Location of the Log Collector image for Windows.
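Based on the sample configurations later on this page, a minimal sketch of this section looks like the following (the `<image-url>` placeholders stand in for your actual image registry paths):

```yaml
logCollectorPod:
  imagePullPolicy: IfNotPresent
  env:
    linux:
      image: <image-url>
    windows:
      image: <image-url>
```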
appdynamics-cloud-k8s-monitoring.logCollectorConfig.os
If you need to deploy the Log Collector images on multiple operating systems, specify the image URLs in appdynamics-cloud-k8s-monitoring.logCollectorConfig.os:
Parameter | Description
---|---
os | Array of operating systems to deploy this pod on. Valid values: linux, windows.
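For example, to deploy on both operating systems, as in the multi-OS sample configuration later on this page:

```yaml
logCollectorConfig:
  os: [linux, windows]
```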
appdynamics-cloud-k8s-monitoring.logCollectorConfig.env
Use the appdynamics-cloud-k8s-monitoring.logCollectorConfig.env section to specify operating system-specific overrides.
Parameter | Description
---|---
env | Specifies the OS-specific override configurations.
filebeatYaml | An OS-specific Filebeat configuration (for example, under env.linux or env.windows) that overrides the top-level filebeatYaml for that operating system.
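A minimal sketch, based on the multi-OS sample configuration later on this page, in which each operating system gets its own filebeatYaml override (the ellipses stand in for the full Filebeat configuration):

```yaml
logCollectorConfig:
  os: [linux, windows]
  env:
    linux:
      filebeatYaml: |-
        filebeat.autodiscover: ...
    windows:
      filebeatYaml: |-
        filebeat.autodiscover: ...
```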
appdynamics-cloud-k8s-monitoring.logCollectorConfig.filebeatYaml.filebeat.autodiscover
This section is for Kubernetes® container discovery and file log input.
Parameter | Description | ||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|
| |||||||||||
hints.default_config.enabled | Specifies a default ("fallback") condition for harvesting logs from any container on your cluster. This allows for faster setup. Valid values: true, false. | ||||||||||
templates | Contains a condition block which contains all the settings for a specific log source, type, and pattern. To create multiple condition blocks, clone templates . | ||||||||||
| The condition that applications must match in order to have their logs harvested by the Log Collector. For a list of supported conditions, see Filebeat: Conditions. For a list of fields you can use for conditions, see Filebeat: Autodiscover: Generic fields. A condition is a list of three items:
The condition list supports multiple orders for these three items: Filebeat order:
YML
Example:
YML
Boolean logical operator (and, or, not):
YML
Example:
YML
Nested conditions: Examples:
YML
YML
| ||||||||||
config | |||||||||||
| |||||||||||
id | Leave this as the default value (fsid-${data.kubernetes.pod.name}-${data.kubernetes.container.id}). | ||||||||||
close_removed | Leave this as the default value (false). | ||||||||||
clean_removed | Leave this as the default value (false). | ||||||||||
paths | Glob pattern of full pathname of your log files. If you want to collect logs from multiple containers in one or more pods, modify this glob pattern or add additional pathnames on separate lines. Default value:
YML
Example:
YML
| ||||||||||
| Syntax of parsers block:
YML
If you are specifying a pattern for multiline log messages, use following value of
YML
Otherwise, use:
YML
| ||||||||||
| See https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-input-filestream.html#_container | ||||||||||
stream | Reads from the specified streams only. Valid values: all, stdout, stderr. Default: all. | ||||||||||
format | Uses the specified format when parsing logs. Valid values: auto, docker, cri. Default: auto. | ||||||||||
- multiline | If your messages span multiple lines, you must specify all parameters in this block to identify where a multiline log message starts and ends. See Manage multiline messages. Example for log4j, logback, and grok logs starting with a date or timestamp of format YYYY-MM-DD:
YML
Example for json logs:
YML
| ||||||||||
type | Leave this as the default value (pattern ). | ||||||||||
pattern | The pattern of a multiline message. This must be a regular expression in RE2 syntax. Must be enclosed in single quotes. | ||||||||||
negate | Enables or disables negation of a multiline message. Default: false. | ||||||||||
match | The location of the multiline split. Default: after. If you specify multiline.pattern you must also specify multiline.match. | ||||||||||
prospector.scanner.symlinks | Leave this as the default value (true). | ||||||||||
processors | |||||||||||
- copy_fields | Do not modify.
YML
| ||||||||||
| This Do not modify.
YML
| ||||||||||
| |||||||||||
log.format | Required. Specifies a logical grouping of the log "namespace" and source. Sensitive data masking rules apply only to a scope that matches the value of this parameter. If you don't specify this parameter, you can't mask sensitive data contained within the log messages that are ingested through this configuration. See Mask Sensitive Data. Example for Kubernetes logs from the common ingestion service (CIS) endpoint:
YML
| ||||||||||
| Log type and single line message pattern for single line logs from this container. Get the pattern from your logging configuration file (typically named Valid values for Example for Log4j logs:
YML
Example for Logback logs:
YML
The Example for timestamp logs: This example uses a field named
Example:
YML
Example for JSON logs: See Advanced Configuration for JSON Logs. At a minimum, specify only
YML
Example for Grok logs: See Advanced Configuration for Grok Logs.
YML
| ||||||||||
| Applies multiple parsers. Specify To minify, use a tool like Code Beautify. To double escape, use a tool like JSON formatter. Syntax of
YML
For each item in
Example before minifying and double escaping:
YML
Example after minifying and double escaping:
YML
| ||||||||||
| Applies subparsers to each Grok log message. Subparsers help you to extract more fields from different parts of a Grok log message. For example, after you extract and name a field using a Grok parser, you can parse that named field with a JSON parser. This setting is applicable only if there is a Specify To minify, use a tool like Code Beautify. To double escape, use a tool like JSON formatter. Syntax of
YML
For each item in
Example before minifying and double escaping:
YML
Example after minifying and double escaping:
YML
|
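The following condensed excerpt from the sample configurations later on this page shows how these keys fit together: one templates block whose condition matches a container by name, and whose config block harvests that container's log files:

```yaml
filebeat.autodiscover:
  providers:
    - type: kubernetes
      node: ${NODE_NAME}
      hints.enabled: true
      hints.default_config.enabled: false
      templates:
        - condition:
            equals:
              kubernetes.container.name: log-generator-logback
          config:
            - type: filestream
              id: fsid-${data.kubernetes.pod.name}-${data.kubernetes.container.id}
              close_removed: false
              clean_removed: false
              paths:
                - /var/log/containers/${data.kubernetes.pod.name}*${data.kubernetes.container.id}.log
              parsers:
                - container:
                    stream: all
                    format: auto
```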
appdynamics-cloud-k8s-monitoring.logCollectorConfig.filebeatYaml.processors
This section is for defining and configuring processors. For help with Filebeat, see Filter and enhance data with processors.
Parameter | Description | ||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|
| Do not modify. | ||||||||||
- add_kubernetes_metadata: | Do not modify. | ||||||||||
- rename: | Do not modify.
YML
| ||||||||||
- add_fields: | |||||||||||
target: k8s fields: cluster.name | Name of your cluster. Must match the cluster name as displayed in Cisco Cloud Observability. Example: cluster.name: <cluster-name> | ||||||||||
target: k8s fields: cluster.id | ID of your cluster. You can get the cluster ID by running this command:
BASH
Example:
YML
| ||||||||||
- add_fields: | |||||||||||
| Do not modify Example:
YML
| ||||||||||
- add_fields: | |||||||||||
| Do not modify.
YML
| ||||||||||
| Do not modify. If your
YML
| ||||||||||
- drop_fields: | |||||||||||
fields: <list> ignore_missing: <true_or_false> | List of fields you don't want to export to Cisco Cloud Observability. Example:
YML
|
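As an illustration, this excerpt from the sample configurations later on this page shows the add_fields processor that you customize with your cluster details, followed by a drop_fields processor (the field list here is abbreviated):

```yaml
processors:
  - add_fields:
      target: k8s
      fields:
        cluster.name: <cluster-name>
        cluster.id: <cluster-id>
  - drop_fields:
      fields: ["agent", "stream", "ecs", "input"]
      ignore_missing: true
```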
appdynamics-cloud-k8s-monitoring.logCollectorConfig.filebeatYaml.output.otlploggrpc
This section configures the output of Filebeat logs directly to an OTLP receiver using the OpenTelemetry Protocol (OTLP) Logs Data Model, over either gRPC or HTTP.
Parameter | Description | ||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|
| Do not modify.
YML
| ||||||||||
hosts | OTLP receiver endpoint. Default is the | ||||||||||
ssl.enabled | Enables or disables SSL communication on the export of Filebeat logs to the OTLP receiver. Enabling or disabling SSL here does not affect Example:
YML
| ||||||||||
ssl.certificate_authorities | List of your root CA certificates. Example for Linux:
YML
Example for Windows:
YML
| ||||||||||
ssl.certificate | Full pathname of your certificate for SSL client authentication. Example for Linux:
YML
Example for Windows:
YML
| ||||||||||
ssl.key | Full pathname of your private client certificate SSL key. Example for Linux:
YML
Example for Windows:
YML
| ||||||||||
| List of TLS protocols which the Log Collector can use. Default: | ||||||||||
protocol | Protocol to use for export. Valid values: http , grpc . Default: grpc . | ||||||||||
| |||||||||||
wait_for_ready | Configures the action to take when an RPC is attempted on broken connections or unreachable servers. Valid values: true, false. If false and the connection is in the TRANSIENT_FAILURE state, the RPC fails immediately. Otherwise, the RPC client blocks the call until a connection is available (or the call is canceled or times out) and retries the call if it fails due to a transient error. If protocol is grpc, there are no retries if data was written to the wire unless the server indicates it did not process the data. See gRPC Wait for Ready Semantics. | ||||||||||
batch_size | Number of log records to be sent together in a single batch, which improves performance. Default and strongly recommended value: 1000. Best practice is to specify both batch_size and max_bytes. | ||||||||||
| If the number of bytes in the OTLP logs packet to be published (which contains all the log records present in a batch) exceeds Strongly recommended value: If your | ||||||||||
summary_debug_logs_interval | Specifies the period (in seconds) for summary logs, which are printed when debug logging is enabled. Example of summary logs:
CODE
|
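Pulling these settings together, the sample configurations later on this page use the following output block (protocol is omitted there, so the default grpc applies):

```yaml
output.otlploggrpc:
  hosts: ["${APPD_OTELCOL_GRPC_RECEIVER_HOST}:14317"]
  worker: 1
  max_bytes: 1e+06
  ssl.enabled: false
  wait_for_ready: true
  batch_size: 1000
  summary_debug_logs_interval: 10s
```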
appdynamics-cloud-k8s-monitoring.logCollectorConfig.filebeatYaml.filebeat.registry
This section contains two keys: filebeat.registry.path and filebeat.registry.file_permissions.
Parameter | Description | ||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|
filebeat.registry.path | Filebeat registry filename. Do not modify. Example:
YML
| ||||||||||
filebeat.registry.file_permissions | Filebeat registry file permissions. Do not modify. Example:
YML
|
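These two keys appear in the sample configurations later on this page as:

```yaml
filebeat.registry.path: registry1
filebeat.registry.file_permissions: 0640
```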
appdynamics-cloud-k8s-monitoring.logCollectorConfig.filebeatYaml.path.data
This section configures where Filebeat looks for its registry files. See Configure project paths.
Parameter | Description | ||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|
path.data | Mount location for the Filebeat registry. This location is fixed and you should not update it, but if this setting is missing from your collectors-values.yaml, you must add it. The code snippet below shows the new line in the context of adjacent lines so that you understand exactly where to add it. Example for Linux:
YML
Example for Windows:
YML
|
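The sample configurations later on this page use these values:

```yaml
# Linux
path.data: /opt/appdynamics/logcollector-agent/data

# Windows
path.data: C:/ProgramData/filebeat/data
```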
appdynamics-cloud-k8s-monitoring.logCollectorConfig.filebeatYaml.logging
This section configures the Log Collector to log its Filebeat activity.
Parameter | Description | ||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|
level | Logging level. Valid values: error, warning, info, debug. Default: info. | ||||||||||
to_files | Enables or disables writing Filebeat logs to files. Valid values: true , false . Default: false . | ||||||||||
files | |||||||||||
path | Path to log files. Default: /opt/appdynamics/logcollector-agent/log | ||||||||||
name | Prefix of log files. Default: lca-log | ||||||||||
keepfiles | Number of log files to keep if Filebeat logging is enabled. Default: 5. | ||||||||||
permissions | File permissions on log file. Default: 0640 | ||||||||||
selectors | Selector (filter) to limit the logging to only components that match. Valid values: | ||||||||||
metrics | |||||||||||
enabled | Enables or disables metrics logging. Valid values: true , false . Default: false . If true , the Log Collector writes metrics to the log file if logging.files.enabled is true , or to the console if logging.files.enabled is false . | ||||||||||
period | Frequency of metrics collection. Valid values: 0-99s , 0-99m . Default: 30s (30 seconds). Ignored if logging.metrics.enabled is false . |
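A complete logging block with these defaults, taken from the sample configurations later on this page:

```yaml
logging:
  level: info
  to_files: false
  files:
    path: /opt/appdynamics/logcollector-agent/log
    name: lca-log
    keepfiles: 5
    permissions: 0640
  selectors: []
  metrics:
    enabled: false
    period: 30s
```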
appdynamics-cloud-k8s-monitoring.logCollectorConfig.filebeatYaml.monitoring
This section is for configuring the Log Collector to log its metrics.
Parameter | Description | ||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|
| Enables or disables export of Log Collector metrics to a backend. Default: | ||||||||||
otlpmetric | This section contains settings for exporting the Log Collector's own metrics directly to an OTLP receiver using the OTLP Metrics data model, over either gRPC or HTTP API. | ||||||||||
endpoint | OTLP receiver endpoint. Default is the | ||||||||||
protocol | Protocol to use for export. Valid values: http , grpc . Default: grpc . | ||||||||||
collect_period | Internal collection period. Default: 10s . | ||||||||||
report_period | Reporting period to OTLP backend. Default: 60s . | ||||||||||
resource_attributes | List of resource attributes to be added to metrics packets. Default: None (empty list). Example:
YML
| ||||||||||
k8s.cluster.name | Cluster name. Example:
YML
| ||||||||||
k8s.pod.name: "${POD_NAME}" | Pod name. Do not modify. Example:
YML
| ||||||||||
metrics | List of metrics to capture. If this list is empty, the Log Collector captures all metrics. If this parameter is omitted, the Log Collector captures the default list of metrics in the example. Example:
YML
| ||||||||||
retry | Metrics exporter retry configuration, used when exporting to the metrics backend fails. | ||||||||||
enabled | Enables or disables retry of failed batches. Valid values: true , false . Default: false . | ||||||||||
initial_interval | Time to wait after the first failure before retrying. Specify this as an int64 with a unit suffix. For example, 500ms . | ||||||||||
max_interval | Maximum time to wait between consecutive failures. Specify this as an int64 with a unit suffix. For example, 500ms . Once this value is reached, the delay between consecutive retries is always this value. | ||||||||||
max_elapsed_time | Maximum amount of time (including retries) spent trying to send a request or batch. Once this value is reached, the data is discarded. | ||||||||||
ssl.enabled | Enables or disables SSL communication on the export of Filebeat metrics to the OTLP receiver. Enabling or disabling SSL here does not affect | ||||||||||
| List of your root CA certificates. Example for Linux:
YML
Example for Windows:
YML
| ||||||||||
| Full pathname of your certificate for SSL client authentication. Example for Linux:
YML
Example for Windows:
YML
| ||||||||||
| Full pathname of your private client certificate SSL key. Example for Linux:
YML
Example for Windows:
YML
| ||||||||||
| List of TLS protocols which the Log Collector can use. Default: | ||||||||||
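A condensed monitoring block, based on the sample configurations later on this page (report_period is shown here with its documented default of 60s, and the resource_attributes and metrics lists are abbreviated):

```yaml
monitoring:
  enabled: true
  otlpmetric:
    endpoint: ${APPD_OTELCOL_GRPC_RECEIVER_HOST}:${APPD_OTELCOL_GRPC_RECEIVER_PORT}
    protocol: grpc
    collect_period: 30s
    report_period: 60s
    resource_attributes:
      k8s.cluster.name: "<ClusterName>"
      k8s.pod.name: "${POD_NAME}"
    metrics:
      - beat.memstats.memory_alloc
      - filebeat.events.active
    retry:
      enabled: true
      initial_interval: 1s
      max_interval: 1m
      max_elapsed_time: 5m
    ssl.enabled: false
```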
Sample Configurations
The following sample configurations show, in order: a default configuration, a Linux-only deployment, a Windows-only deployment, and a deployment on both Linux and Windows.
global:
clusterName: <ClusterName>
appdynamics-otel-collector:
clientId: <client-id>
clientSecret: <client-secret>
endpoint: <endpoint>
tokenUrl: <token-url>
spec:
image: <image-url>
imagePullPolicy: IfNotPresent
config:
exporters:
logging:
loglevel: debug
appdynamics-cloud-k8s-monitoring:
install:
logCollector: true
defaultInfraCollectors: false
clustermon: false
clustermonPod:
image: <image-url>
nodeSelector:
kubernetes.io/os: linux
inframonPod:
image: <image-url>
nodeSelector:
kubernetes.io/os: linux
logCollectorPod:
imagePullPolicy: IfNotPresent
logCollectorConfig:
filebeatYaml: |-
filebeat.autodiscover:
providers:
- type: kubernetes
node: ${NODE_NAME}
labels.dedot: false
annotations.dedot: false
hints.enabled: true
hints.default_config.enabled: false
templates:
- condition:
equals:
kubernetes.container.name: log-generator-logback
config:
- type: filestream
id: fsid-${data.kubernetes.pod.name}-${data.kubernetes.container.id}
close_removed: false
clean_removed: false
paths:
- /var/log/containers/${data.kubernetes.pod.name}*${data.kubernetes.container.id}.log
parsers:
- container:
stream: all
format: auto
- multiline:
type: pattern
pattern: "^[0-9]{4}"
negate: true
match: after
prospector.scanner.symlinks: true
processors:
- add_fields:
target: appd
fields:
log.format: logs:logback_logs
- add_fields:
target: _message_parser
fields:
type: logback
pattern: "%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n"
processors:
- add_cloud_metadata: ~
- add_kubernetes_metadata:
in_cluster: true
host: ${NODE_NAME}
matchers:
- logs_path:
logs_path: "/var/log/containers/"
- copy_fields:
fields:
- from: "kubernetes.deployment.name"
to: "kubernetes.workload.name"
- from: "kubernetes.daemonset.name"
to: "kubernetes.workload.name"
- from: "kubernetes.statefulset.name"
to: "kubernetes.workload.name"
- from: "kubernetes.replicaset.name"
to: "kubernetes.workload.name"
- from: "kubernetes.cronjob.name"
to: "kubernetes.workload.name"
- from: "kubernetes.job.name"
to: "kubernetes.workload.name"
fail_on_error: false
ignore_missing: true
- rename:
fields:
- from: "kubernetes.namespace"
to: "kubernetes.namespace.name"
- from: "kubernetes"
to: "k8s"
- from: k8s.annotations.appdynamics.lca/filebeat.parser
to: "_message_parser"
- from: "cloud.instance.id"
to: "host.id"
ignore_missing: true
fail_on_error: false
- drop_fields:
fields: ["agent", "stream", "ecs", "input", "orchestrator", "k8s.annotations.appdynamics", "k8s.labels", "k8s.node.labels", "cloud"]
ignore_missing: true
- script:
lang: javascript
source: >
function process(event) {
var podUID = event.Get("k8s.pod.uid");
if (podUID) {
event.Put("internal.container.encapsulating_object_id", "<cluster-id>:" + podUID);
}
return event;
}
- dissect:
tokenizer: "%{name}:%{tag}"
field: "container.image.name"
target_prefix: "container.image"
ignore_failure: true
overwrite_keys: true
- add_fields:
target: k8s
fields:
cluster.name: <cluster-name>
cluster.id: <cluster-id>
- add_fields:
target: telemetry
fields:
sdk.name: log-agent
output.otlploggrpc:
groupby_resource_fields:
- k8s
- source
- host
- container
- log
- telemetry
- internal
- os
hosts: ["${APPD_OTELCOL_GRPC_RECEIVER_HOST}:14317"]
worker: 1
max_bytes: 1e+06
ssl.enabled: false
wait_for_ready: true
batch_size: 1000
summary_debug_logs_interval: 10s
filebeat.registry.path: registry1
filebeat.registry.file_permissions: 0640
path.data: /opt/appdynamics/logcollector-agent/data
logging:
level: info
to_files: false
files:
path: /opt/appdynamics/logcollector-agent/log
name: lca-log
keepfiles: 5
permissions: 0640
selectors: []
metrics:
enabled: false
period: 30s
monitoring:
enabled: true
otlpmetric:
endpoint: ${APPD_OTELCOL_GRPC_RECEIVER_HOST}:${APPD_OTELCOL_GRPC_RECEIVER_PORT}
protocol: grpc
collect_period: 30s
report_period:
resource_attributes:
k8s.cluster.name: "<ClusterName>"
k8s.cluster.id: "<ClusterId>"
k8s.pod.name: "${POD_NAME}"
k8s.pod.uid: "${POD_UID}"
service.instance.id: "${POD_UID}"
service.version: "23.4.0-567"
source.name: "log-agent"
service.namespace: "log-agent"
service.name: "log-collector-agent"
metrics:
- beat.memstats.memory_alloc
- filebeat.events.active
- filebeat.harvester.running
- filebeat.harvester.skipped
- filebeat.input.log.files.truncated
- libbeat.output.read.errors
- libbeat.output.write.bytes
- libbeat.output.write.errors
- system.load.norm.5
- system.load.norm.15
- libbeat.pipeline.events.filtered
retry:
enabled: true
initial_interval: 1s
max_interval: 1m
max_elapsed_time: 5m
ssl.enabled: false
global:
clusterName: <ClusterName>
appdynamics-otel-collector:
clientId: <client-id>
clientSecret: <client-secret>
endpoint: <endpoint>
tokenUrl: <token-url>
spec:
image: <image-url>
imagePullPolicy: IfNotPresent
config:
exporters:
logging:
loglevel: debug
appdynamics-cloud-k8s-monitoring:
install:
logCollector: true
defaultInfraCollectors: false
clustermon: false
clustermonPod:
image: <image-url>
nodeSelector:
kubernetes.io/os: linux
inframonPod:
image: <image-url>
nodeSelector:
kubernetes.io/os: linux
logCollectorPod:
imagePullPolicy: IfNotPresent
env:
linux:
image: <image-url>
logCollectorConfig:
os: [linux]
env:
linux:
filebeatYaml: |-
filebeat.autodiscover:
providers:
- type: kubernetes
node: ${NODE_NAME}
labels.dedot: false
annotations.dedot: false
hints.enabled: true
hints.default_config.enabled: false
templates:
- condition:
equals:
kubernetes.container.name: log-generator-logback
config:
- type: filestream
id: fsid-${data.kubernetes.pod.name}-${data.kubernetes.container.id}
close_removed: false
clean_removed: false
paths:
- /var/log/containers/${data.kubernetes.pod.name}*${data.kubernetes.container.id}.log
parsers:
- container:
stream: all
format: auto
- multiline:
type: pattern
pattern: "^[0-9]{4}"
negate: true
match: after
prospector.scanner.symlinks: true
processors:
- add_fields:
target: appd
fields:
log.format: logs:logback_logs
- add_fields:
target: _message_parser
fields:
type: logback
pattern: "%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n"
processors:
- add_cloud_metadata: ~
- add_kubernetes_metadata:
in_cluster: true
host: ${NODE_NAME}
matchers:
- logs_path:
logs_path: "/var/log/containers/"
- copy_fields:
fields:
- from: "kubernetes.deployment.name"
to: "kubernetes.workload.name"
- from: "kubernetes.daemonset.name"
to: "kubernetes.workload.name"
- from: "kubernetes.statefulset.name"
to: "kubernetes.workload.name"
- from: "kubernetes.replicaset.name"
to: "kubernetes.workload.name"
- from: "kubernetes.cronjob.name"
to: "kubernetes.workload.name"
- from: "kubernetes.job.name"
to: "kubernetes.workload.name"
fail_on_error: false
ignore_missing: true
- rename:
fields:
- from: "kubernetes.namespace"
to: "kubernetes.namespace.name"
- from: "kubernetes"
to: "k8s"
- from: k8s.annotations.appdynamics.lca/filebeat.parser
to: "_message_parser"
- from: "cloud.instance.id"
to: "host.id"
ignore_missing: true
fail_on_error: false
- drop_fields:
fields: ["agent", "stream", "ecs", "input", "orchestrator", "k8s.annotations.appdynamics", "k8s.labels", "k8s.node.labels", "cloud"]
ignore_missing: true
- script:
lang: javascript
source: >
function process(event) {
var podUID = event.Get("k8s.pod.uid");
if (podUID) {
event.Put("internal.container.encapsulating_object_id", "<cluster-id>:" + podUID);
}
return event;
}
- dissect:
tokenizer: "%{name}:%{tag}"
field: "container.image.name"
target_prefix: "container.image"
ignore_failure: true
overwrite_keys: true
- add_fields:
target: k8s
fields:
cluster.name: <cluster-name>
cluster.id: <cluster-id>
- add_fields:
target: telemetry
fields:
sdk.name: log-agent
output.otlploggrpc:
groupby_resource_fields:
- k8s
- source
- host
- container
- log
- telemetry
- internal
- os
hosts: ["${APPD_OTELCOL_GRPC_RECEIVER_HOST}:14317"]
worker: 1
max_bytes: 1e+06
ssl.enabled: false
wait_for_ready: true
batch_size: 1000
summary_debug_logs_interval: 10s
filebeat.registry.path: registry1
filebeat.registry.file_permissions: 0640
path.data: /opt/appdynamics/logcollector-agent/data
logging:
level: info
to_files: false
files:
path: /opt/appdynamics/logcollector-agent/log
name: lca-log
keepfiles: 5
permissions: 0640
selectors: []
metrics:
enabled: false
period: 30s
monitoring:
enabled: true
otlpmetric:
endpoint: ${APPD_OTELCOL_GRPC_RECEIVER_HOST}:${APPD_OTELCOL_GRPC_RECEIVER_PORT}
protocol: grpc
collect_period: 30s
report_period:
resource_attributes:
k8s.cluster.name: "<ClusterName>"
k8s.cluster.id: "<ClusterId>"
k8s.pod.name: "${POD_NAME}"
k8s.pod.uid: "${POD_UID}"
service.instance.id: "${POD_UID}"
service.version: "23.4.0-567"
source.name: "log-agent"
service.namespace: "log-agent"
service.name: "log-collector-agent"
metrics:
- beat.memstats.memory_alloc
- filebeat.events.active
- filebeat.harvester.running
- filebeat.harvester.skipped
- filebeat.input.log.files.truncated
- libbeat.output.read.errors
- libbeat.output.write.bytes
- libbeat.output.write.errors
- system.load.norm.5
- system.load.norm.15
- libbeat.pipeline.events.filtered
retry:
enabled: true
initial_interval: 1s
max_interval: 1m
max_elapsed_time: 5m
ssl.enabled: false
global:
clusterName: <ClusterName>
appdynamics-otel-collector:
clientId: <client-id>
clientSecret: <client-secret>
endpoint: <endpoint>
tokenUrl: <token-url>
spec:
image: <image-url>
imagePullPolicy: IfNotPresent
config:
exporters:
logging:
loglevel: debug
appdynamics-cloud-k8s-monitoring:
install:
logCollector: true
defaultInfraCollectors: false
clustermon: false
clustermonPod:
image: <image-url>
nodeSelector:
kubernetes.io/os: linux
inframonPod:
image: <image-url>
nodeSelector:
kubernetes.io/os: linux
logCollectorPod:
imagePullPolicy: IfNotPresent
env:
windows:
image: <image-url>
logCollectorConfig:
os: [windows]
env:
windows:
filebeatYaml: |-
filebeat.autodiscover:
providers:
- type: kubernetes
node: ${NODE_NAME}
labels.dedot: false
annotations.dedot: false
hints.enabled: true
hints.default_config.enabled: false
templates:
- condition:
equals:
kubernetes.container.name: log-generator-logback
config:
- type: filestream
id: fsid-${data.kubernetes.pod.name}-${data.kubernetes.container.id}
close_removed: false
clean_removed: false
paths:
- C:/var/log/containers/${data.kubernetes.pod.name}*${data.kubernetes.container.id}.log
parsers:
- container:
stream: all
format: auto
- multiline:
type: pattern
pattern: "^[0-9]{4}"
negate: true
match: after
prospector.scanner.symlinks: true
processors:
- add_fields:
target: appd
fields:
log.format: logs:logback_logs
- add_fields:
target: _message_parser
fields:
type: logback
pattern: "%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n"
processors:
- add_cloud_metadata: ~
- add_kubernetes_metadata:
in_cluster: true
host: ${NODE_NAME}
matchers:
- logs_path:
logs_path: "/var/log/containers/"
- copy_fields:
fields:
- from: "kubernetes.deployment.name"
to: "kubernetes.workload.name"
- from: "kubernetes.daemonset.name"
to: "kubernetes.workload.name"
- from: "kubernetes.statefulset.name"
to: "kubernetes.workload.name"
- from: "kubernetes.replicaset.name"
to: "kubernetes.workload.name"
- from: "kubernetes.cronjob.name"
to: "kubernetes.workload.name"
- from: "kubernetes.job.name"
to: "kubernetes.workload.name"
fail_on_error: false
ignore_missing: true
- rename:
fields:
- from: "kubernetes.namespace"
to: "kubernetes.namespace.name"
- from: "kubernetes"
to: "k8s"
- from: k8s.annotations.appdynamics.lca/filebeat.parser
to: "_message_parser"
- from: "cloud.instance.id"
to: "host.id"
ignore_missing: true
fail_on_error: false
- drop_fields:
fields: ["agent", "stream", "ecs", "input", "orchestrator", "k8s.annotations.appdynamics", "k8s.labels", "k8s.node.labels", "cloud"]
ignore_missing: true
- script:
lang: javascript
source: >
function process(event) {
var podUID = event.Get("k8s.pod.uid");
if (podUID) {
event.Put("internal.container.encapsulating_object_id", "<cluster-id>:" + podUID);
}
return event;
}
- dissect:
tokenizer: "%{name}:%{tag}"
field: "container.image.name"
target_prefix: "container.image"
ignore_failure: true
overwrite_keys: true
- add_fields:
target: k8s
fields:
cluster.name: <cluster-name>
cluster.id: <cluster-id>
- add_fields:
target: telemetry
fields:
sdk.name: log-agent
output.otlploggrpc:
groupby_resource_fields:
- k8s
- source
- host
- container
- log
- telemetry
- internal
- os
hosts: ["${APPD_OTELCOL_GRPC_RECEIVER_HOST}:14317"]
worker: 1
max_bytes: 1e+06
ssl.enabled: false
wait_for_ready: true
batch_size: 1000
summary_debug_logs_interval: 10s
filebeat.registry.path: registry1
filebeat.registry.file_permissions: 0640
path.data: C:/ProgramData/filebeat/data
logging:
level: info
to_files: false
files:
path: C:/ProgramData/filebeat/log
name: lca-log
keepfiles: 5
permissions: 0640
selectors: []
metrics:
enabled: false
period: 30s
monitoring:
enabled: true
otlpmetric:
endpoint: ${APPD_OTELCOL_GRPC_RECEIVER_HOST}:${APPD_OTELCOL_GRPC_RECEIVER_PORT}
protocol: grpc
collect_period: 30s
report_period:
resource_attributes:
k8s.cluster.name: "<ClusterName>"
k8s.cluster.id: "<ClusterId>"
k8s.pod.name: "${POD_NAME}"
k8s.pod.uid: "${POD_UID}"
service.instance.id: "${POD_UID}"
service.version: "23.4.0-567"
source.name: "log-agent"
service.namespace: "log-agent"
service.name: "log-collector-agent"
metrics:
- beat.memstats.memory_alloc
- filebeat.events.active
- filebeat.harvester.running
- filebeat.harvester.skipped
- filebeat.input.log.files.truncated
- libbeat.output.read.errors
- libbeat.output.write.bytes
- libbeat.output.write.errors
- system.load.norm.5
- system.load.norm.15
- libbeat.pipeline.events.filtered
retry:
enabled: true
initial_interval: 1s
max_interval: 1m
max_elapsed_time: 5m
ssl.enabled: false
global:
clusterName: <ClusterName>
appdynamics-otel-collector:
clientId: <client-id>
clientSecret: <client-secret>
endpoint: <endpoint>
tokenUrl: <token-url>
spec:
image: <image-url>
imagePullPolicy: IfNotPresent
config:
exporters:
logging:
loglevel: debug
appdynamics-cloud-k8s-monitoring:
install:
logCollector: true
defaultInfraCollectors: false
clustermon: false
clustermonPod:
image: <image-url>
nodeSelector:
kubernetes.io/os: linux
inframonPod:
image: <image-url>
nodeSelector:
kubernetes.io/os: linux
logCollectorPod:
imagePullPolicy: IfNotPresent
env:
windows:
image: <image-url>
linux:
image: <image-url>
logCollectorConfig:
os: [linux, windows]
env:
linux:
filebeatYaml: |-
filebeat.autodiscover:
providers:
- type: kubernetes
node: ${NODE_NAME}
labels.dedot: false
annotations.dedot: false
hints.enabled: true
hints.default_config.enabled: false
templates:
- condition:
equals:
kubernetes.container.name: log-generator-logback
config:
- type: filestream
id: fsid-${data.kubernetes.pod.name}-${data.kubernetes.container.id}
close_removed: false
clean_removed: false
paths:
- /var/log/containers/${data.kubernetes.pod.name}*${data.kubernetes.container.id}.log
parsers:
- container:
stream: all
format: auto
- multiline:
type: pattern
pattern: "^[0-9]{4}"
negate: true
match: after
prospector.scanner.symlinks: true
processors:
- add_fields:
target: appd
fields:
log.format: logs:logback_logs
- add_fields:
target: _message_parser
fields:
type: logback
pattern: "%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n"
processors:
- add_cloud_metadata: ~
- add_kubernetes_metadata:
in_cluster: true
host: ${NODE_NAME}
matchers:
- logs_path:
logs_path: "/var/log/containers/"
- copy_fields:
fields:
- from: "kubernetes.deployment.name"
to: "kubernetes.workload.name"
- from: "kubernetes.daemonset.name"
to: "kubernetes.workload.name"
- from: "kubernetes.statefulset.name"
to: "kubernetes.workload.name"
- from: "kubernetes.replicaset.name"
to: "kubernetes.workload.name"
- from: "kubernetes.cronjob.name"
to: "kubernetes.workload.name"
- from: "kubernetes.job.name"
to: "kubernetes.workload.name"
fail_on_error: false
ignore_missing: true
- rename:
fields:
- from: "kubernetes.namespace"
to: "kubernetes.namespace.name"
- from: "kubernetes"
to: "k8s"
- from: k8s.annotations.appdynamics.lca/filebeat.parser
to: "_message_parser"
- from: "cloud.instance.id"
to: "host.id"
ignore_missing: true
fail_on_error: false
- add_fields:
target: k8s
fields:
cluster.name: <ClusterName>
- add_fields:
target: k8s
fields:
cluster.id: <ClusterId>
- add_fields:
target: source
fields:
name: log-agent
- add_fields:
target: telemetry
fields:
sdk.name: log-agent
- add_fields:
target: os
fields:
type: linux
- script:
lang: javascript
source: >
function process(event) {
var podUID = event.Get("k8s.pod.uid");
if (podUID) {
event.Put("internal.container.encapsulating_object_id", "<ClusterId>:" + podUID);
}
return event;
}
- drop_fields:
fields: ["agent", "stream", "ecs", "input", "orchestrator", "k8s.annotations.appdynamics", "k8s.labels", "k8s.node.labels", "cloud"]
ignore_missing: true
output.otlploggrpc:
groupby_resource_fields:
- k8s
- source
- host
- container
- log
- telemetry
- internal
- os
hosts: ["${APPD_OTELCOL_GRPC_RECEIVER_HOST}:14317"]
worker: 1
max_bytes: 1e+06
ssl.enabled: false
wait_for_ready: true
batch_size: 1000
summary_debug_logs_interval: 10s
filebeat.registry.path: registry1
filebeat.registry.file_permissions: 0640
path.data: /opt/appdynamics/logcollector-agent/data
logging:
level: info
to_files: false
files:
path: /opt/appdynamics/logcollector-agent/log
name: lca-log
keepfiles: 5
permissions: 0640
selectors: []
metrics:
enabled: false
period: 30s
monitoring:
enabled: true
otlpmetric:
endpoint: ${APPD_OTELCOL_GRPC_RECEIVER_HOST}:${APPD_OTELCOL_GRPC_RECEIVER_PORT}
protocol: grpc
collect_period: 30s
report_period:
resource_attributes:
k8s.cluster.name: "<ClusterName>"
k8s.cluster.id: "<ClusterId>"
k8s.pod.name: "${POD_NAME}"
k8s.pod.uid: "${POD_UID}"
service.instance.id: "${POD_UID}"
service.version: "23.4.0-567"
source.name: "log-agent"
service.namespace: "log-agent"
service.name: "log-collector-agent"
metrics:
- beat.memstats.memory_alloc
- filebeat.events.active
- filebeat.harvester.running
- filebeat.harvester.skipped
- filebeat.input.log.files.truncated
- libbeat.output.read.errors
- libbeat.output.write.bytes
- libbeat.output.write.errors
- system.load.norm.5
- system.load.norm.15
- libbeat.pipeline.events.filtered
retry:
enabled: true
initial_interval: 1s
max_interval: 1m
max_elapsed_time: 5m
ssl.enabled: false
windows:
filebeatYaml: |-
filebeat.autodiscover:
providers:
- type: kubernetes
node: ${NODE_NAME}
labels.dedot: false
annotations.dedot: false
hints.enabled: true
hints.default_config.enabled: false
templates:
- condition:
equals:
kubernetes.container.name: log-generator-logback
config:
- type: filestream
id: fsid-${data.kubernetes.pod.name}-${data.kubernetes.container.id}
close_removed: false
clean_removed: false
paths:
- C:/var/log/containers/${data.kubernetes.pod.name}*${data.kubernetes.container.id}.log
parsers:
- container:
stream: all
format: auto
- multiline:
type: pattern
pattern: "^[0-9]{4}"
negate: true
match: after
prospector.scanner.symlinks: true
processors:
- add_fields:
target: appd
fields:
log.format: logs:logback_logs
- add_fields:
target: _message_parser
fields:
type: logback
pattern: "%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n"
processors:
- add_cloud_metadata: ~
- add_kubernetes_metadata:
in_cluster: true
host: ${NODE_NAME}
matchers:
- logs_path:
logs_path: "C:/var/log/containers/"
- copy_fields:
fields:
- from: "kubernetes.deployment.name"
to: "kubernetes.workload.name"
- from: "kubernetes.daemonset.name"
to: "kubernetes.workload.name"
- from: "kubernetes.statefulset.name"
to: "kubernetes.workload.name"
- from: "kubernetes.replicaset.name"
to: "kubernetes.workload.name"
- from: "kubernetes.cronjob.name"
to: "kubernetes.workload.name"
- from: "kubernetes.job.name"
to: "kubernetes.workload.name"
fail_on_error: false
ignore_missing: true
- rename:
fields:
- from: "kubernetes.namespace"
to: "kubernetes.namespace.name"
- from: "kubernetes"
to: "k8s"
- from: k8s.annotations.appdynamics.lca/filebeat.parser
to: "_message_parser"
- from: "cloud.instance.id"
to: "host.id"
ignore_missing: true
fail_on_error: false
- drop_fields:
fields: ["agent", "stream", "ecs", "input", "orchestrator", "k8s.annotations.appdynamics", "k8s.labels", "k8s.node.labels", "cloud"]
ignore_missing: true
- script:
lang: javascript
source: >
function process(event) {
var podUID = event.Get("k8s.pod.uid");
if (podUID) {
event.Put("internal.container.encapsulating_object_id", "<ClusterId>:" + podUID);
}
return event;
}
- dissect:
tokenizer: "%{name}:%{tag}"
field: "container.image.name"
target_prefix: "container.image"
ignore_failure: true
overwrite_keys: true
- add_fields:
target: k8s
fields:
cluster.name: <ClusterName>
cluster.id: <ClusterId>
- add_fields:
target: telemetry
fields:
sdk.name: log-agent
output.otlploggrpc:
groupby_resource_fields:
- k8s
- source
- host
- container
- log
- telemetry
- internal
- os
hosts: ["${APPD_OTELCOL_GRPC_RECEIVER_HOST}:14317"]
worker: 1
max_bytes: 1e+06
ssl.enabled: false
wait_for_ready: true
batch_size: 1000
summary_debug_logs_interval: 10s
filebeat.registry.path: registry1
filebeat.registry.file_permissions: 0640
path.data: C:/ProgramData/filebeat/data
logging:
level: info
to_files: false
files:
path: C:/ProgramData/filebeat/log
name: lca-log
keepfiles: 5
permissions: 0640
selectors: []
metrics:
enabled: false
period: 30s
monitoring:
enabled: true
otlpmetric:
endpoint: ${APPD_OTELCOL_GRPC_RECEIVER_HOST}:${APPD_OTELCOL_GRPC_RECEIVER_PORT}
protocol: grpc
collect_period: 30s
report_period:
resource_attributes:
k8s.cluster.name: "<ClusterName>"
k8s.cluster.id: "<ClusterId>"
k8s.pod.name: "${POD_NAME}"
k8s.pod.uid: "${POD_UID}"
service.instance.id: "${POD_UID}"
service.version: "23.4.0-567"
source.name: "log-agent"
service.namespace: "log-agent"
service.name: "log-collector-agent"
metrics:
- beat.memstats.memory_alloc
- filebeat.events.active
- filebeat.harvester.running
- filebeat.harvester.skipped
- filebeat.input.log.files.truncated
- libbeat.output.read.errors
- libbeat.output.write.bytes
- libbeat.output.write.errors
- system.load.norm.5
- system.load.norm.15
- libbeat.pipeline.events.filtered
retry:
enabled: true
initial_interval: 1s
max_interval: 1m
max_elapsed_time: 5m
ssl.enabled: false
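The `multiline` parser in both configurations (pattern `^[0-9]{4}`, `negate: true`, `match: after`) appends every line that does not start with four digits to the preceding event, so stack traces stay attached to the timestamped logback line that produced them (the `%d{yyyy-MM-dd HH:mm:ss.SSS}` pattern always starts with the year). A minimal sketch of that grouping logic in Python, using hypothetical sample log lines:

```python
import re

# Same pattern as the Filebeat multiline parser: lines that do NOT
# start with four digits (negate: true) are appended to the previous
# event (match: after).
PATTERN = re.compile(r"^[0-9]{4}")

def group_multiline(lines):
    """Join continuation lines onto the preceding timestamped event."""
    events = []
    for line in lines:
        if PATTERN.match(line) or not events:
            events.append(line)           # start of a new event
        else:
            events[-1] += "\n" + line     # continuation (e.g. stack trace)
    return events

# Hypothetical logback-style output containing a stack trace:
sample = [
    "2024-01-15 10:00:00.123 [main] ERROR c.e.App - boom",
    "java.lang.RuntimeException: boom",
    "\tat com.example.App.main(App.java:10)",
    "2024-01-15 10:00:01.456 [main] INFO  c.e.App - recovered",
]
print(len(group_multiline(sample)))  # prints 2
```

The three exception lines collapse into the first event, so the Log Collector forwards two log records instead of four.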
OpenTelemetry™ and Kubernetes® (as applicable) are trademarks of The Linux Foundation®.