Log Collector Settings
The Cisco Cloud Observability Kubernetes Collectors chart (appdynamics-collectors) can be used to deploy the following collectors:
- Cluster Collector
- Infrastructure Collector
- Log Collector
- Cisco AppDynamics Distribution of OpenTelemetry Collector
This chart includes the following sub-charts:
- appdynamics-otel-collector: includes settings for the Cisco AppDynamics Distribution of OpenTelemetry Collector
- appdynamics-cloud-k8s-monitoring: includes settings for the Cluster Collector, Infrastructure Collector, and Log Collector
This is the reference page for Log Collector settings.
Enclose all values in double quotes unless a particular value supports regular expressions (regex) or its description says otherwise. Enclose all values that support regex in single quotes.
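For example (values shown are placeholders; the namespace match is a plain value in double quotes, while the multiline pattern is a regex in single quotes):

```yaml
logCollectorConfig:
  container:
    conditionalConfigs:
      - condition:
          equals:
            kubernetes.namespace: "logns"   # plain value: double quotes
        config:
          multiLinePattern: '^2023|^{'      # regex value: single quotes
```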
Global Settings
You can specify the following global parameters within the collectors-values.yaml
file:
Name | Type | Description | Required |
---|---|---|---|
global | object | Contains the global settings for the Cisco Cloud Observability Kubernetes Collectors. | No |
global.oauth | object | Defines the OAuth authentication credentials. | Yes |
global
The following table describes settings in the global key
that are related to the Log Collector:
Name | Type | Description | Required |
---|---|---|---|
clusterName | string | Name of your cluster. Must match the cluster name as displayed in Cisco Cloud Observability. | Yes |
global.oauth
Name | Type | Description | Required |
---|---|---|---|
clientId | string | Defines the client ID for authenticating with Cisco Cloud Observability. | Yes |
clientSecret | string | Defines the secret string in plain text to authenticate with Cisco Cloud Observability. You can use | Yes |
endpoint | string | Defines the endpoint the collector sends data to. | Yes |
tokenUrl | string | Defines the URL for obtaining an authentication token from Cisco Cloud Observability. | Yes |
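Putting the global and global.oauth settings together, a minimal block looks like this (the tenant URL, tenant ID, and credentials are placeholders; see also the sample configurations at the end of this page):

```yaml
global:
  clusterName: "<cluster-name>"
  oauth:
    clientId: "<client-id>"
    clientSecret: "<client-secret>"
    endpoint: https://<your-tenant-url>/data
    tokenUrl: https://<your-tenant-url>/auth/<tenant-id>/default/oauth2/token
```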
appdynamics-cloud-k8s-monitoring
The following table describes settings in appdynamics-cloud-k8s-monitoring
that are related to the Log Collector:
Name | Type | Description | Required |
---|---|---|---|
install | object | Enables or disables components. | Yes |
logCollectorPod | object | Configures a specific Log Collector pod. | No |
logCollectorConfig | object | Configures log collection for a specific container. | Yes |
appdynamics-cloud-k8s-monitoring.install
The following table describes settings in appdynamics-cloud-k8s-monitoring.install
that are related to the Log Collector:
Name | Type | Description | Required |
---|---|---|---|
| string | Enables the deployment of a collector component on your cluster. Valid values: true, false. | Yes |
logCollector | string | Enables the deployment of the Log Collector on your cluster. Valid values: true, false. | Yes |
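For example, to deploy only the Log Collector and disable the other components (these flags appear in the sample configurations at the end of this page):

```yaml
appdynamics-cloud-k8s-monitoring:
  install:
    logCollector: true
    defaultInfraCollectors: false
    clustermon: false
```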
appdynamics-cloud-k8s-monitoring.logCollectorPod
These settings configure a specific Log Collector pod:
Name | Type | Description | Required |
---|---|---|---|
env | object | Configures OS-specific settings. | Yes |
maxUnavailable | string | The maximum number of Log Collector pods in the DaemonSet that can be unavailable during an update. Specify this value as an absolute number (minimum: 1) or as a percentage of the total number of pods at the start of the update (for example, 10%). A rolling update stops when this number of pods is unavailable, brings up new pods in their place, and proceeds to other pods once the new pods are available, ensuring that the number of unavailable pods never exceeds this value at any time during the update. See https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#max-unavailable. Default: 100% | No |
tolerations | object | Configures tolerations for the Log Collector pod. | Yes if deploying the Log Collector on Windows nodes on Google Kubernetes Engine (GKE) |
appdynamics-cloud-k8s-monitoring.logCollectorPod.env
In the appdynamics-cloud-k8s-monitoring.logCollectorPod.env
section you can specify the location of operating system-specific images of the Log Collector.
Name | Type | Description | Required |
---|---|---|---|
linux | object | Log Collector pod settings for Linux. | No |
windows | object | Log Collector pod settings for Windows. | No |
appdynamics-cloud-k8s-monitoring.logCollectorPod.env.linux
Name | Type | Description | Required |
---|---|---|---|
image | string | Location of the Log Collector image for Linux. | No |
appdynamics-cloud-k8s-monitoring.logCollectorPod.env.windows
Name | Type | Description | Required |
---|---|---|---|
image | string | Location of the Log Collector image for Windows. | No |
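Combining the two settings above, OS-specific images can be sketched as follows. The image URLs are placeholders, and the image key name is assumed to match the logCollectorPod.image setting used in the sample configurations:

```yaml
appdynamics-cloud-k8s-monitoring:
  logCollectorPod:
    env:
      linux:
        image: <linux-image-url>
      windows:
        image: <windows-image-url>
```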
appdynamics-cloud-k8s-monitoring.logCollectorPod.tolerations
Name | Type | Description | Required |
---|---|---|---|
tolerations | object | Configures tolerations for the Log Collector pod. For Windows nodes on Google Kubernetes Engine (GKE), you must include tolerations that match the taints GKE applies to Windows nodes. | Yes if deploying the Log Collector on Windows nodes on Google Kubernetes Engine (GKE) |
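The toleration block itself is not reproduced on this page. As a hedged sketch: GKE applies the taint node.kubernetes.io/os=windows:NoSchedule to Windows Server node pools, so a matching toleration would look like the following (verify the actual taints on your node pool with kubectl describe node):

```yaml
appdynamics-cloud-k8s-monitoring:
  logCollectorPod:
    tolerations:
      - key: "node.kubernetes.io/os"
        operator: "Equal"
        value: "windows"
        effect: "NoSchedule"
```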
appdynamics-cloud-k8s-monitoring.logCollectorConfig
These settings configure log collection for a specific container:
Name | Type | Description | Required |
---|---|---|---|
os | array | Array of operating systems to deploy this pod on. Valid values: linux, windows. Example: os: [linux, windows]. To deploy different Log Collector images to different operating systems, set appdynamics-cloud-k8s-monitoring.logCollectorPod.env. | No |
env | object | Specifies OS-specific overrides. | No |
container | object | Configures log collection for a specific container. | Yes |
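At a high level, logCollectorConfig nests these settings as follows (structure taken from the sample configurations at the end of this page; the empty blocks are placeholders):

```yaml
appdynamics-cloud-k8s-monitoring:
  logCollectorConfig:
    os: [linux, windows]
    env:                  # optional OS-specific overrides
      linux:
        container: {}
      windows:
        container: {}
    container:            # settings applied on all operating systems
      defaultConfig: {}
      conditionalConfigs: []
```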
appdynamics-cloud-k8s-monitoring.logCollectorConfig.env
Use the appdynamics-cloud-k8s-monitoring.logCollectorConfig.env
section to specify operating system-specific overrides.
Name | Type | Description | Required |
---|---|---|---|
linux.container | object | A Linux-specific container block. It has the same structure as appdynamics-cloud-k8s-monitoring.logCollectorConfig.container and overrides those settings on Linux nodes. | No |
windows.container | object | Same as linux.container, but applied to Windows nodes. | No |
appdynamics-cloud-k8s-monitoring.logCollectorConfig.container
These settings configure log collection for a specific container:
Name | Type | Description | Required |
---|---|---|---|
| string | Enables or disables log collection from the Log Collector and other collectors running on your cluster. Valid values: | No |
| object | Specifies OS-specific overrides. | No |
defaultConfig | object | Default condition for harvesting logs from any container on your cluster. | No |
conditionalConfigs | object | The block which contains all the settings for a specific log source, type, and pattern as a pair of condition and config blocks. | Yes |
dropFields | array | List of fields you don't want to export to Cisco Cloud Observability. | No |
batchSize | string | Number of log records to send together in a single batch, which improves performance. Best practice is to specify both batchSize and maxBytes. | No |
maxBytes | string | Maximum number of bytes to accept in an OTLP logs packet (the packet contains all the log records present in a batch). | No |
| string | The number of worker threads per output host. If you find that events are backing up, or that the CPU is not saturated, you can increase this value to make better use of available CPU. You can set this value to be greater than the number of CPU cores, since threads often spend idle time in I/O wait conditions. See the Elastic documentation on Tuning and Profiling Logstash Performance. | No |
| string | Logging interval. | No |
logging | object | Settings for saving or exporting Filebeat logs. | No |
monitoring | object | Settings for exporting Filebeat metrics from the Log Collector in OTLP format. | No |
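A sketch of the container-level tuning settings, using the values the sample configurations at the end of this page mark as defaults:

```yaml
container:
  dropFields: ["agent", "stream", "ecs", "input", "orchestrator",
               "k8s.annotations.appdynamics", "k8s.labels",
               "k8s.node.labels", "cloud"]
  batchSize: 1000
  maxBytes: 1000000
```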
appdynamics-cloud-k8s-monitoring.logCollectorConfig.container.excludeCondition
Name | Type | Description | Required |
---|---|---|---|
excludeCondition | object | The condition that must be matched to exclude logs from being collected. This excludes logs from both conditional and default configurations. For a list of all Kubernetes fields you can use in exclude conditions, see https://www.elastic.co/guide/en/beats/filebeat/current/exported-fields-kubernetes-processor.html. You must use these exact field names. | No |
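The example is elided on this page; assuming excludeCondition takes the same Filebeat-style condition syntax as conditionalConfigs.condition, a sketch would be (the namespace value is a placeholder):

```yaml
container:
  excludeCondition:
    equals:
      kubernetes.namespace: "<namespace-to-exclude>"
```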
appdynamics-cloud-k8s-monitoring.logCollectorConfig.container.defaultConfig
Name | Type | Description | Required |
---|---|---|---|
defaultConfig | object | Default condition for harvesting logs from any container on your cluster. This allows for faster setup. This block has the same structure as the config block in appdynamics-cloud-k8s-monitoring.logCollectorConfig.container.conditionalConfigs. For new deployments, you don't need to change anything in this block. | No |
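For example, the sample configurations at the end of this page use a defaultConfig that parses unmatched container logs as JSON:

```yaml
container:
  defaultConfig:
    multiLinePattern: '^{'
    multiLineMatch: "after"
    multiLineNegate: true
    messageParser:
      json:
        enabled: true
```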
appdynamics-cloud-k8s-monitoring.logCollectorConfig.container.conditionalConfigs
The block which contains all the settings for a specific log source, type, and pattern as a pair of condition
+config
blocks. There can be multiple condition+config
pairs within conditionalConfigs
. Each condition+config
pair contains specific settings (described below) for that log source's matching condition, multiline pattern, multiline negation, multiline matching, single line pattern, and so on.
Example:
logCollectorConfig:
os: [linux, windows]
container:
defaultConfig:
...
conditionalConfigs:
- condition:
...
config:
...
- condition:
...
config:
...
- condition:
...
config:
...
Name | Type | Description | Required |
---|---|---|---|
condition | object | The condition that applications must match in order to have their logs harvested by the Log Collector. | Yes |
config | object | Settings for parsing logs from the log sources that match the condition. | Yes |
appdynamics-cloud-k8s-monitoring.logCollectorConfig.container.conditionalConfigs.condition
Name | Type | Description | Required |
---|---|---|---|
<depends on syntax> | string | The condition that applications must match in order to have their logs harvested by the Log Collector. For a list of supported conditions, see Filebeat: Conditions. For a list of fields you can use for conditions, see Filebeat: Autodiscover: Generic fields. A condition is a list of three items:
The condition list supports multiple syntax styles for these three items: "Advanced" syntax:
YML
With advanced syntax, available operators are limited to equals and contains. Boolean operators (and, or, not) are not supported. Example: This condition configures the Log Collector to harvest logs if the container name is <container-name>. To find the container name, run the command kubectl describe pod <podname>. See Useful kubectl and helm Commands:
YML
Filebeat syntax:
YML
With Filebeat syntax, available operators are limited to equals and contains. Boolean operators (and, or, not) are not supported. Example:
YML
Boolean syntax (supports boolean operators and, or, not):
YML
Example:
YML
Nested conditions:Examples:
YML
YML
| Yes |
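The syntax examples are elided above; the following sketches are drawn from the sample configurations at the end of this page (container names are placeholders):

```yaml
# Filebeat syntax
- condition:
    equals:
      kubernetes.namespace: "logns"

# Boolean syntax (or, and, not)
- condition:
    or:
      - equals:
          kubernetes.container.name: "<container-name-1>"
      - equals:
          kubernetes.container.name: "<container-name-2>"

# "Advanced" operator/key/value syntax
- condition:
    operator: equals
    key: kubernetes.container.name
    value: kube-proxy
```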
appdynamics-cloud-k8s-monitoring.logCollectorConfig.container.conditionalConfigs.config
Name | Type | Description | Required |
---|---|---|---|
| string | Specifies a logical grouping of the log "namespace" and source. Sensitive data masking rules apply only to a scope that matches the value of this parameter. If you don't specify this parameter, you can't mask sensitive data contained within the log messages that are ingested through this configuration. See Mask Sensitive Data. Example for Kubernetes logs from the common ingestion service (CIS) endpoint:
YML
| Yes |
multiLinePattern, multiLineNegate, multiLineMatch | string | If your messages span multiple lines, you must specify these parameters to identify where a multiline log message starts and ends. See Manage multiline messages.
Example for log4J, Logback, or Grok logs starting with a date or timestamp of format
YML
Example for Timestamp logs: Not applicable. Example for JSON logs:
YML
| No |
messageParser | object | Single-line message pattern for log messages matching this condition. | Yes |
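For example, for log messages that start with a year (as in the sample configurations at the end of this page):

```yaml
config:
  multiLinePattern: '^2023'   # regex, so single quotes
  multiLineNegate: true
  multiLineMatch: after
```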
appdynamics-cloud-k8s-monitoring.logCollectorConfig.container.conditionalConfigs.messageParser
Name | Type | Description | Required |
---|---|---|---|
log4J | object | Single-line message pattern for log4J messages matching this Get this pattern from your logging configuration file (typically named Example:
YML
| No |
logback | object | Single-line message pattern for Logback messages matching this Get this pattern from your logging configuration file (typically named Example:
YML
| No |
timestamp | object | Single-line message pattern for timestamp messages matching this The
Any valid pattern supported by Example:
YML
| No |
json | object | Single-line message pattern for JSON messages matching this
For additional settings for JSON logs, see Advanced Configuration for JSON Logs. Example:
YML
| No |
grok | object | Single-line message pattern for Grok messages matching this For tips on Grok parsing, see Advanced Configuration for Grok Logs. Example:
YML
| No |
subparsers | object | Applies subparsers to each Grok log message. Subparsers help you to extract more fields from different parts of a Grok log message. For example, after you extract and name a field using a Grok parser, you can parse that named field with a JSON parser. This setting is applicable only if a grok parser is enabled. Specify subparsers as a minified JSON string. To minify, use a tool like Code Beautify. Syntax of subparsers:
YML
For each item in
If there are duplicate items in If subparser parsing fails, the parsing status for this log message is If you configure a subparser to name an extracted field the same name as an existing field, it adds a prefix to the field name. You cannot use a subparser to extract the timestamp field. Example after minifying:
YML
| No |
infra | object | Single-line message pattern for Kubernetes infrastructure log messages matching this If your infrastructure logs are in native klog format, set
YML
If your infrastructure logs are in JSON format (in Kubernetes v 1.19 and later), use the Example:
YML
| No |
| object | Applies multiple parsers to a single log message. Specify this as a minified, escaped JSON string. To minify, use a tool like Code Beautify. To double escape, use a tool like JSON formatter. Syntax:
YML
For each item in
If there are duplicate items in Example before minifying and single escaping:
YML
Example after minifying and single escaping:
YML
| No |
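Two parser sketches drawn from the sample configurations at the end of this page: a log4J pattern, and a JSON parser with an explicit timestamp field:

```yaml
messageParser:
  log4J:
    enabled: true
    pattern: "%d{yyyy-MM-dd'T'HH:mm:ss} %p %C{1.} [%t] %m%n"
---
messageParser:
  json:
    enabled: true
    timestampField: "@timestamp"
    timestampPattern: "yyyy-MM-dd'T'HH:mm:ss.SSS'Z'"
```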
appdynamics-cloud-k8s-monitoring.logCollectorConfig.container.logging
This section contains settings for saving or exporting Filebeat logs.
Name | Type | Description | Required |
---|---|---|---|
level | string | Logging level. Valid values: debug, info, warning, error. | No |
selectors | string | Selector (filter) to limit the logging to only components that match. | No |
files | object | Settings for logging to files. | No |
enabled | string | Enables or disables Filebeat logging to files. Valid values: true, false. When this parameter is true, log files are persistent across pod restarts and pod logs no longer include Filebeat logs. | No |
keepFiles | string | Number of log files to keep if Filebeat logging is enabled. Default: 5. | No |
metrics | object | Settings for logging metrics data. | No |
enabled | string | Enables or disables metrics logging. Valid values: true, false. If this is enabled, the Log Collector ignores any selectors you specify. Sample metrics log message:
JSON
| No |
period | string | Frequency of metrics collection. Default: 30s. | No |
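A logging block using the defaults noted in the sample configurations at the end of this page:

```yaml
logging:
  level: debug
  selectors: []
  files:
    enabled: false
    keepFiles: 5    # default
  metrics:
    enabled: false
    period: 30s     # default
```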
appdynamics-cloud-k8s-monitoring.logCollectorConfig.container.monitoring
This section contains settings for exporting Filebeat metrics from the Log Collector in OTLP format. This is for internal use only.
Name | Type | Description | Required |
---|---|---|---|
otlpmetric | object | Settings for exporting Log Collector metrics in OTLP format. | No |
enabled | string | Enables or disables export of Log Collector metrics to a backend. Default: false. | No |
endpoint | string | OTLP receiver endpoint. | No |
protocol | string | Protocol to use for export. Valid values: http, grpc. Default: grpc. | No |
collectPeriod | string | Internal collection period. Default: 10s. | No |
reportPeriod | string | Reporting period to the OTLP backend. Default: 60s. | No |
resourceAttrs | string | List of resource attributes to be added to metrics packets. Default: None (empty list). Example:
YML
| No |
metrics | string | List of metrics to capture. If this list is empty, the Log Collector captures all metrics. If this parameter is omitted, the Log Collector captures the default list of metrics in the example. Example:
YML
| No |
retry | object | Metrics exporter retry configuration used when exporting to the metrics backend fails. | No |
enabled | string | Enables or disables retry of failed batches. Valid values: true, false. Default: false. | No |
initialInterval | string | Time to wait after the first failure before retrying. Specify this as an int64 with a unit suffix. For example, 500ms. | No |
maxInterval | string | Maximum time to wait between consecutive failures. Specify this as an int64 with a unit suffix. For example, 500ms. Once this value is reached, the delay between consecutive retries is always this value. | No |
maxElapsedTime | string | Maximum amount of time (including retries) spent trying to send a request or batch. Once this value is reached, the data is discarded. | No |
ssl | object | Metrics exporter secure/TLS configuration. The Log Collector uses TLS protocol 1.3 by default. | No |
enabled | string | Enables or disables SSL. Valid values: true, false. | No |
certificateAuthorities | array | List of your root CA certificates. Example:
YML
| No |
certificate | string | Full pathname of your certificate for SSL client authentication. Example:
YML
| No |
key | string | Full pathname of your private client certificate SSL key. Example:
YML
| No |
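A monitoring block drawn from the sample configurations at the end of this page (certificate paths are placeholders):

```yaml
monitoring:
  otlpmetric:
    enabled: true
    collectPeriod: 10s   # default
    reportPeriod: 60s    # default
    metrics:
      - beat.memstats.memory_alloc
      - filebeat.events.active
      - libbeat.output.write.bytes
    retry:
      enabled: false
    ssl:
      enabled: true
      certificateAuthorities: ["<path-to-ca.pem>"]
      certificate: "<path-to-client.pem>"
      key: "<path-to-client-key.pem>"
```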
Sample Configurations
The following sample shows a basic Log Collector configuration:
global:
clusterName: <cluster-name>
oauth:
clientId:
clientSecret:
endpoint: https://<your-tenant-url>/data
tokenUrl: https://<your-tenant-url>/auth/<tenant-id>/default/oauth2/token
appdynamics-otel-collector:
clientId: <client-id>
clientSecret: <client-secret>
endpoint: <endpoint>
tokenUrl: <token-url>
spec:
image: <image-url>
imagePullPolicy: IfNotPresent
config:
exporters:
logging:
loglevel: debug
appdynamics-cloud-k8s-monitoring:
install:
logCollector: true
defaultInfraCollectors: false
clustermon: false
clustermonPod:
image: <image-url>
nodeSelector:
kubernetes.io/os: linux
inframonPod:
image: <image-url>
nodeSelector:
kubernetes.io/os: linux
logCollectorPod:
image: <image-url>
imagePullPolicy: IfNotPresent
logCollectorConfig:
os: [windows,linux]
container:
conditionalConfigs:
- condition:
equals:
kubernetes.namespace: logns
config:
multiLinePattern: '^2023|^{'
multiLineNegate: true
multiLineMatch: after
messageParser:
log4J:
enabled: true
pattern: "%d{yyyy-MM-dd'T'HH:mm:ss} %p %C{1.} [%t] %m%n"
logging:
level: debug
selectors: []
files:
# to enable logging to files
enabled: false
# number of files to keep if logging to files is enabled
keepFiles: 5 # default value
metrics:
# to enable logging metrics data
enabled: false
period: 30s # default value
# you don't need below block if you are not using/exporting metrics
monitoring:
otlpmetric:
enabled: false
metrics:
# default metrics to capture are below
- beat.memstats.memory_alloc
- filebeat.events.active
- filebeat.harvester.running
- filebeat.harvester.skipped
- filebeat.input.log.files.truncated
- libbeat.output.read.errors
- libbeat.output.write.bytes
- libbeat.output.write.errors
- system.load.norm.5
- system.load.norm.15
retry:
enabled: false
ssl:
enabled: false
The following sample shows OS-specific overrides and TLS settings:
global:
clusterName: test_Linux_Override
oauth:
clientId:
clientSecret:
endpoint: https://<your-tenant-url>/data
tokenUrl: https://<your-tenant-url>/auth/<tenant-id>/default/oauth2/token
tls:
appdCollectors:
enabled: false
secret:
secretName: client-secret
secretKeys:
caCert: ca.crt
tlsCert: tls.crt
tlsKey: tls.key
otelReceiver:
mtlsEnabled: true
secret:
secretName: server-secret
secretKeys:
caCert: ca.crt
tlsCert: tls.crt
tlsKey: tls.key
settings:
min_version: 1.2
max_version: 1.3
appdynamics-otel-collector:
clientId: test
clientSecret: test
os: [ linux, windows ]
endpoint: https://test-tenant/data
tokenUrl: <token-url>
appdynamics-cloud-k8s-monitoring:
clustermonConfig:
os: windows
logLevel: debug
filters:
annotation:
excludeRegex: "1.filter_Name"
events:
enabled: true
severityToExclude: [ ]
severeGroupByReason:
- Pulling
infraManagerConfig:
os: [ linux, windows ]
logLevel: debug
servermonConfig:
os: [ linux, windows ]
logLevel: debug
containermonConfig:
os: [ linux, windows ]
logLevel: debug
install:
logCollector: true
defaultInfraCollectors: true
clustermon: true
logCollectorConfig:
os: [windows, linux]
      env: # Specify the OS for which you want to override config.
linux:
container:
defaultConfig:
multiLinePattern: '^-'
multiLineMatch: before
multiLineNegate: true
messageParser:
log4J:
enabled: true
pattern: "%d{yyyy-MM-dd'T'HH:mm:ss} %p %C{1.} [%t] %m%n"
conditionalConfigs:
- condition:
or:
- equals:
kubernetes.container.name: log-gen-app-log4j-winTest
- equals:
kubernetes.container.name: log-gen-app-log4jTest
config:
multiLinePattern: '^thisIsForLinux'
multiLineNegate: true
multiLineMatch: before
messageParser:
log4J:
enabled: true
pattern: "%d{yyyy-MM-dd'T'HH:mm:ss} %p %C{1.} [%t] %m%n"
logging:
level: debug
selectors: [otlpmetrics]
            batchSize: 2000 # Linux-specific override
            maxBytes: 900000 # Linux-specific override
monitoring:
otlpmetric:
enabled: true
collectPeriod: 10s # default value
reportPeriod: 60s
metrics:
# default metrics to capture are below
                  - beat.memstats.memory_alloc
retry:
enabled: true
# initialInterval:
# maxInterval:
# maxElapsedTime:
ssl:
                  enabled: true # SSL fields can't be overridden here; they are taken from the global TLS config
certificateAuthorities: ["/opt/appdynamics/certs/ca/ca.pem"]
certificate: "/opt/appdynamics/certs/client/client.pem"
key: "/opt/appdynamics/certs/client/client-key.pem"
windows:
container:
defaultConfig:
multiLinePattern: '^-Windows'
multiLineMatch: before
multiLineNegate: true
messageParser:
infra:
enabled: true
# pattern: "%d{yyyy-MM-dd'T'HH:mm:ss}"
conditionalConfigs:
- condition:
or:
- equals:
kubernetes.container.name: log-gen-app-log4j-WINDOWS
- equals:
kubernetes.container.name: log-gen-app-log4j-WINDOWS
config:
multiLinePattern: '^thisIsForWINDOWS'
multiLineNegate: false
multiLineMatch: after
messageParser:
logback:
enabled: true
pattern: "%d{yyyy-MM-dd'T'HH:mm:ss} %p %n"
logging:
level: debug
            batchSize: 3000 # Windows-specific override
            maxBytes: 800000 # Windows-specific override
container:
defaultConfig:
multiLinePattern: '^{'
multiLineMatch: "after"
multiLineNegate: true
messageParser:
json:
enabled: true
conditionalConfigs:
- condition:
or:
- equals:
kubernetes.container.name: log-gen-app-log4j-win
- equals:
kubernetes.container.name: log-gen-app-log4j
config:
multiLinePattern: '^2023|^{'
multiLineNegate: true
multiLineMatch: after
messageParser:
log4J:
enabled: true
pattern: "%d{yyyy-MM-dd'T'HH:mm:ss} %p %C{1.} [%t] %m%n"
- condition:
or:
- equals:
kubernetes.container.name: log-gen-app-log4j2-win
- equals:
kubernetes.container.name: log-gen-app-log4j2
config:
multiLinePattern: '^2023' # default = '' (empty)
multiLineNegate: true # default = false
multiLineMatch: "after" # default = after
messageParser:
log4J:
enabled: true
pattern: "%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %m%n" # default = ""
- condition:
or:
- equals:
kubernetes.container.name: log-gen-app-logback-win
- equals:
kubernetes.container.name: log-gen-app-logback
config:
multiLinePattern: '^2023' # default = '' (empty)
multiLineNegate: true # default = false
multiLineMatch: "after" # default = after
messageParser:
logback:
enabled: true
pattern: "%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n" # default = ""
- condition:
or:
- equals:
kubernetes.container.name: log-gen-app-grok-sub-win
- equals:
kubernetes.container.name: log-gen-app-grok-sub
config:
multiLinePattern: '^\[2022' # default = '' (empty)
multiLineNegate: true # default = false
multiLineMatch: "after" # default = after
messageParser:
grok:
enabled: true
patterns:
- '\[%{GREEDYDATA:log4j}\] \[%{GREEDYDATA:json}\] \[%{GREEDYDATA:log4j2}\] \[%{GREEDYDATA:logback}\] \[%{IPORHOST:grok}\] \[%{GREEDYDATA:infra}\]'
# timestampField: "time"
timestampPattern: "yyyy-MM-dd HH:mm:ss,SSS"
subparsers: "{\"parsersList\": [{ \"_message_parser.type\": \"log4j\", \"_message_parser.field\": \"log4j\", \"_message_parser.pattern\": \"%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %msg%n\"},
{ \"_message_parser.type\": \"log4j\", \"_message_parser.field\": \"log4j2\", \"_message_parser.pattern\": \"%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %msg%n\"},
{ \"_message_parser.type\": \"logback\", \"_message_parser.field\": \"logback\", \"_message_parser.pattern\": \"%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %msg%n\"},
{ \"_message_parser.type\": \"grok\", \"_message_parser.field\": \"grok\", \"_message_parser.pattern\": \"%{GREEDYDATA:infra}\"},
{ \"_message_parser.type\": \"infra\", \"_message_parser.field\": \"infra\"},
{ \"_message_parser.type\": \"json\", \"_message_parser.field\": \"json\", \"_message_parser.flatten_sep\": \"/\"}]\\r\\n}"
- condition:
or:
- equals:
kubernetes.container.name: log-gen-app-grok-win
- equals:
kubernetes.container.name: log-gen-app-grok
config:
multiLinePattern: '^2021|^55|^Tue' # default = '' (empty)
multiLineNegate: true # default = false
multiLineMatch: "after" # default = after
messageParser:
grok:
enabled: true
patterns:
- "%{DATESTAMP:time} %{LOGLEVEL:severity} %{WORD:class}:%{NUMBER:line} - %{GREEDYDATA:data}"
- "%{DATESTAMP_RFC2822:time} %{LOGLEVEL:severity} %{GREEDYDATA:data}"
- "%{TOMCAT_DATESTAMP:time} \| %{LOGLEVEL:level} \| %{JAVACLASS:class} - %{JAVALOGMESSAGE:logmessage}"
- "%{IP:clientIP} %{WORD:httpMethod} %{URIPATH:url}"
timestampField: time
timestampPattern: "yyyy-MM-dd HH:mm:ss,SSS"
- condition:
or:
- equals:
kubernetes.container.name: log-gen-app-json-win
- equals:
kubernetes.container.name: log-gen-app-json
config:
multiLinePattern: '^{' # default = '' (empty)
multiLineNegate: true # default = false
multiLineMatch: "after" # default = after
messageParser:
json:
enabled: true
timestampField: "@timestamp"
timestampPattern: "yyyy-MM-dd'T'HH:mm:ss.SSS'Z'"
- condition:
operator: equals
key: kubernetes.container.name
value: kube-proxy
config:
multiLinePattern: '^[a-z]|^[A-Z]' # default = '' (empty)
multiLineNegate: true # default = false
multiLineMatch: "after" # default = after
messageParser:
infra:
enabled: true
# dropFields: ["agent", "stream", "ecs", "input", "orchestrator", "k8s.annotations.appdynamics", "k8s.labels", "k8s.node.labels", "cloud"]
batchSize: 1000 # this is the default value
maxBytes: 1000000 # this is the default value
logging:
level: debug
files:
# to enable logging to files
enabled: true
# number of files to keep if logging to files is enabled
keepFiles: 5 # default value
metrics:
# to enable logging metrics data
enabled: true
period: 30s # default value
# you don't need below block if you are not using/exporting metrics
monitoring:
otlpmetric:
enabled: true
collectPeriod: 10s # default value
reportPeriod: 60s
metrics:
# default metrics to capture are below
            - beat.memstats.memory_alloc
            - filebeat.events.active
            - filebeat.harvester.running
            - filebeat.harvester.skipped
            - filebeat.input.log.files.truncated
- libbeat.output.read.errors
- libbeat.output.write.bytes
- libbeat.output.write.errors
- system.load.norm.5
- system.load.norm.15
- libbeat.pipeline.events.filtered
retry:
enabled: false
# initialInterval:
# maxInterval:
# maxElapsedTime:
ssl:
enabled: false
certificateAuthorities: ["C:/filebeat/certs/ca/ca.pem"]
certificate: "C:/filebeat/certs/client/client.pem"
key: "C:/filebeat/certs/client/client-key.pem"
OpenTelemetry™ and Kubernetes® (as applicable) are trademarks of The Linux Foundation®.