Advanced Configuration for JSON Logs
The Log Collector can harvest JSON logs from your applications and send them to Cisco Cloud Observability, where you can use them for analysis and troubleshooting.
Every JSON log message is on its own line, like these two messages:
{"error":"dial tcp 172.21.0.2:5432: getsockopt: connection refused","level":"error","msg":"Could not open database connection","time":"2022-03-25T14:57:15Z"}
{"name":"Peter","age":30,"city":"New York","score":50.05,"grade":null,"birth":"1986-12-14","sale":true}
JSON is a self-describing format that the Log Collector can harvest without requiring format-specific parsing instructions. However, you can configure the Log Collector to perform additional parsing on your JSON logs, such as flattening nested field names, dropping specific fields, or parsing only up to a certain number of fields or nesting depth. This page explains the following advanced configuration options:
Advanced Configuration Options At a Glance
The following table describes advanced configuration options for parsing JSON logs. For details on legacy parameters, see Log Collector Settings - Advanced YAML Layout. For details on new simplified parameters, see Log Collector Settings.
Sample Configuration 1: Flatten Field Names
JSON Log Message
{
  "transactionId": "TXN001",
  "creditDetails": {
    "bankDetails": {
      "accountNbr": "SB001002003",
      "ifscCode": "SB00007777",
      "balance": 25000,
      "currency": "INR"
    }
  },
  "debitDetails": {
    "bankDetails": {
      "accountNbr": "CB001002003",
      "ifscCode": "CB00008888",
      "balance": 30000,
      "currency": "INR"
    }
  },
  "transactionDetails": {
    "amount": "10000.00",
    "currency": "INR",
    "timestamp": "2022-02-16 08:10:32",
    "actors": [
      "ID001",
      "ID0034",
      "ID0056",
      "ID009"
    ]
  }
}
Configuration
In this configuration, the attribute field names are concatenated with a period (.) to "unnest" or flatten the JSON values:
New simplified parameters:

- condition:
    ...
  messageParser:
    json:
      enabled: true
      ...
      flattenSep: "."

Legacy parameters (advanced YAML layout):

config:
  - type: container
    ...
    processors:
      ...
      - add_fields:
          target: _message_parser
          fields:
            type: json
            flatten_sep: "."
Output
creditDetails.bankDetails.accountNbr=SB001002003
creditDetails.bankDetails.ifscCode=SB00007777
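If it helps to see the flattening behavior concretely, the following minimal Python sketch reproduces the output above. It is only an illustration of the transformation, not the Log Collector's implementation; the flatten function and the abbreviated record are made up for this example.

import json

def flatten(obj, sep=".", prefix=""):
    # Recursively join nested object keys with the configured separator.
    flat = {}
    for key, value in obj.items():
        name = f"{prefix}{sep}{key}" if prefix else key
        if isinstance(value, dict):
            flat.update(flatten(value, sep, name))
        else:
            flat[name] = value
    return flat

record = json.loads(
    '{"creditDetails": {"bankDetails": '
    '{"accountNbr": "SB001002003", "ifscCode": "SB00007777"}}}'
)
for name, value in flatten(record).items():
    print(f"{name}={value}")
# creditDetails.bankDetails.accountNbr=SB001002003
# creditDetails.bankDetails.ifscCode=SB00007777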
Sample Configuration 2: Drop Specific Fields
JSON Log Message
{"firstname":"Tom","lastname":"Cruise","occupation":"Actor"}
Configuration
In this configuration, the fields firstname and lastname are ignored and the remaining fields are ingested:
New simplified parameters:

- condition:
    ...
  messageParser:
    json:
      enabled: true
      ...
      fieldsToIgnore: ["firstname", "lastname"]

Legacy parameters (advanced YAML layout):

config:
  - type: container
    ...
    processors:
      ...
      - add_fields:
          target: _message_parser
          fields:
            type: json
            fields_to_ignore: firstname,lastname
            fields_to_ignore_sep: ","
Output
"occupation":"Actor"
Sample Configuration 3: Drop Fields Matching a Regular Expression
JSON Log Message
{
  "institutionname": {
    "type": "string",
    "description": "institution name",
    "label": "name",
    "input-type": "text",
    "pattern": "^[A-Za-z0-9s]+$"
  },
  "bio": {
    "type": "string",
    "label": "bio",
    "input-type": "text",
    "pattern": "^[A-Za-z0-9s]+$",
    "help-box": "tell us about yourself"
  }
}
Configuration
In this configuration, any field whose name matches the regular expression in fields_to_ignore_regex is dropped from the entire JSON log:
New simplified parameters:

- condition:
    ...
  messageParser:
    json:
      enabled: true
      ...
      fieldsToIgnoreRegex: ".*type"

Legacy parameters (advanced YAML layout):

config:
  - type: container
    ...
    processors:
      ...
      - add_fields:
          target: _message_parser
          fields:
            type: json
            fields_to_ignore_regex: .*type
Output
institutionname.description=institution name
institutionname.label=name
institutionname.pattern=^[A-Za-z0-9s]+$
bio.label=bio
bio.pattern=^[A-Za-z0-9s]+$
bio.help-box=tell us about yourself
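A minimal Python sketch of this behavior, not the Log Collector's implementation: after flattening, any field whose name matches the regular expression is dropped. The drop_matching helper and the abbreviated flattened record are made up for this example.

import re

def drop_matching(flat_fields, pattern):
    # Drop any flattened field whose name matches the regular expression.
    regex = re.compile(pattern)
    return {k: v for k, v in flat_fields.items() if not regex.match(k)}

# Abbreviated, already-flattened form of the sample message above.
flat = {
    "institutionname.type": "string",
    "institutionname.description": "institution name",
    "institutionname.label": "name",
    "institutionname.input-type": "text",
}
for name, value in drop_matching(flat, ".*type").items():
    print(f"{name}={value}")
# institutionname.description=institution name
# institutionname.label=name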
Sample Configuration 4: Only Ingest the First N Fields
JSON Log Message
{
  "rewardProgramIdentificationCode": "A34324",
  "amount": "124.01",
  "localTransactionDateTime": "2020-05-21T09:00:00",
  "cpsAuthorizationCharacteristicsIndicator": "Y",
  "digitalWalletProviderId": "VCIND",
  "addValueTaxReturn": "10.00",
  "taxAmountConsumption": "10.00",
  "nationalNetReimbursementFeeBaseAmount": "20.00",
  "addValueTaxAmount": "10.00",
  "nationalNetMiscAmount": "10.00",
  "countryCodeNationalService": "170",
  "nationalChargebackReason": "11",
  "emvTransactionIndicator": "1",
  "nationalNetMiscAmountType": "A",
  "costTransactionIndicator": "0",
  "nationalReimbursementFee": "20.00",
  "cardAcceptor": {
    "address": {
      "country": "USA",
      "zipCode": "94404",
      "county": "San Mateo",
      "state": "CA"
    },
    "idCode": "ABCD1234ABCD123",
    "name": "Visa Inc.",
    "terminalId": "ABCD1234"
  },
  "transactionIdentifier": "381228649430056",
  "acquirerCountryCode": "840",
  "acquiringBin": "408999",
  "senderCurrencyCode": "USD",
  "retrievalReferenceNumber": "433122895499",
  "transactionTypeCode": 22,
  "messageReasonCode": 2150,
  "systemsTraceAuditNumber": "895499",
  "businessApplicationId": "AA",
  "senderPrimaryAccountNumber": "4895070000003551",
  "settlementServiceIndicator": "9",
  "cardProductCode": "15",
  "merchantCategoryCode": "6012",
  "senderCardExpiryDate": "2021-10",
  "dynamicCurrencyConversionIndicator": "Y"
}
Configuration
In this configuration, only the first five fields are parsed; the remaining fields are dropped:
New simplified parameters:

- condition:
    ...
  messageParser:
    json:
      enabled: true
      ...
      maxNumOfFields: 5

Legacy parameters (advanced YAML layout):

config:
  - type: container
    ...
    processors:
      ...
      - add_fields:
          target: _message_parser
          fields:
            type: json
            max_num_of_fields: 5
Output
rewardProgramIdentificationCode=A34324
amount=124.01
localTransactionDateTime=2020-05-21T09:00:00
cpsAuthorizationCharacteristicsIndicator=Y
digitalWalletProviderId=VCIND
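The cutoff can be illustrated with a short Python sketch, not the Log Collector's implementation; the keep_first_n helper and the abbreviated record are made up for this example.

import itertools
import json

def keep_first_n(record, n):
    # Python dicts preserve insertion order, so this keeps the first n
    # fields in the order they appear in the log message.
    return dict(itertools.islice(record.items(), n))

record = json.loads(
    '{"rewardProgramIdentificationCode": "A34324",'
    ' "amount": "124.01",'
    ' "localTransactionDateTime": "2020-05-21T09:00:00",'
    ' "cpsAuthorizationCharacteristicsIndicator": "Y",'
    ' "digitalWalletProviderId": "VCIND",'
    ' "addValueTaxReturn": "10.00"}'
)
for name, value in keep_first_n(record, 5).items():
    print(f"{name}={value}")
# Prints the five fields shown in the Output above; addValueTaxReturn is dropped.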
Sample Configuration 5: Parse Only to a Specific Depth
JSON Log Message
{
  "glossary": {
    "title": "example glossary",
    "GlossDiv": {
      "title": "S",
      "GlossList": {
        "GlossEntry": {
          "ID": "SGML",
          "SortAs": "SGML",
          "GlossTerm": "Standard Generalized Markup Language",
          "Acronym": "SGML",
          "Abbrev": "ISO 8879:1986",
          "GlossDef": {
            "para": "A meta-markup language, used to create markup languages such as DocBook.",
            "GlossSeeAlso": [
              "GML",
              "XML"
            ]
          },
          "GlossSee": "markup",
          "GlossArray": [
            {
              "a": 1
            },
            {
              "a": 2
            }
          ]
        }
      }
    }
  }
}
Configuration
In this configuration, nested objects are parsed only to a maximum depth of two; anything deeper is kept as an unparsed JSON string:
New simplified parameters:

- condition:
    ...
  messageParser:
    json:
      enabled: true
      ...
      maxDepth: 2

Legacy parameters (advanced YAML layout):

config:
  - type: container
    ...
    processors:
      ...
      - add_fields:
          target: _message_parser
          fields:
            type: json
            max_depth: 2
Output
glossary.GlossDiv.title=S
glossary.GlossDiv.GlossList={"GlossEntry":{"ID":"SGML","SortAs":"SGML","GlossTerm":"Standard Generalized Markup Language","Acronym":"SGML","Abbrev":"ISO 8879:1986","GlossDef":{"para":"A meta-markup language, used to create markup languages such as DocBook.","GlossSeeAlso":["GML","XML"]},"GlossSee":"markup","GlossArray":[{"a":1},{"a":2}]}}
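A minimal Python sketch of this depth limit, not the Log Collector's implementation and assuming maxDepth counts levels of nested objects: objects are flattened only down to the configured depth, and anything deeper is serialized back to a raw JSON string. The flatten_to_depth helper and the abbreviated record are made up for this example.

import json

def flatten_to_depth(obj, max_depth, sep=".", prefix="", depth=0):
    # Parse nested objects only down to max_depth levels; anything deeper
    # is kept as a raw JSON string.
    flat = {}
    for key, value in obj.items():
        name = f"{prefix}{sep}{key}" if prefix else key
        if isinstance(value, dict) and depth + 1 <= max_depth:
            flat.update(flatten_to_depth(value, max_depth, sep, name, depth + 1))
        elif isinstance(value, dict):
            flat[name] = json.dumps(value, separators=(",", ":"))
        else:
            flat[name] = value
    return flat

record = {
    "glossary": {
        "title": "example glossary",
        "GlossDiv": {"title": "S", "GlossList": {"GlossEntry": {"ID": "SGML"}}},
    }
}
for name, value in flatten_to_depth(record, max_depth=2).items():
    print(f"{name}={value}")
# glossary.title=example glossary
# glossary.GlossDiv.title=S
# glossary.GlossDiv.GlossList={"GlossEntry":{"ID":"SGML"}}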
Sample Configuration 6: Use a Specific Field for the Timestamp
JSON Log Message
{
  "@timestamp": "2022-03-24T14:51:07.180Z",
  "log.level": "TRACE",
  "message": "[3302][indices:monitor/stats[n]] received response from [{elastic-cloud-57c5b9fd8d-8lj5q}{J9-xWfXPS-Su2XBRNaDhfw}{gu3jF8xLQniKmRq1vBRdWg}{10.114.51.135}{10.114.51.135:9300}{cdfhilmrstw}{ml.max_jvm_size=3787456512, xpack.installed=true, ml.machine_memory=7569117184}]",
  "ecs.version": "1.2.0",
  "service.name": "ES_ECS",
  "event.dataset": "elasticsearch.server",
  "process.thread.name": "elasticsearch[elastic-cloud-57c5b9fd8d-8lj5q][management][T#1]",
  "log.logger": "org.elasticsearch.transport.TransportService.tracer",
  "elasticsearch.cluster.uuid": "aWAj25TgRGifTQSGVyviIQ",
  "elasticsearch.node.id": "J9-xWfXPS-Su2XBRNaDhfw",
  "elasticsearch.node.name": "elastic-cloud-57c5b9fd8d-8lj5q",
  "elasticsearch.cluster.name": "docker-cluster"
}
Configuration
In this configuration, the timestamp is parsed out of the @timestamp field of the message and used for the OpenTelemetry™ event instead of the ingestion time:
New simplified parameters:

- condition:
    ...
  messageParser:
    json:
      enabled: true
      timestampField: "@timestamp"
      timestampPattern: yyyy-MM-dd'T'HH:mm:ss.SSS'Z'

Legacy parameters (advanced YAML layout):

config:
  - type: container
    ...
    processors:
      ...
      - add_fields:
          target: _message_parser
          fields:
            type: json
            timestamp_field: "@timestamp"
            timestamp_pattern: yyyy-MM-dd'T'HH:mm:ss.SSS'Z'
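For reference, the timestamp extraction can be sketched in Python. This is only an illustration of what the configuration does, not the Log Collector's implementation; the strptime format string is the assumed equivalent of the Java-style pattern in the configuration above.

import json
from datetime import datetime, timezone

record = json.loads(
    '{"@timestamp": "2022-03-24T14:51:07.180Z", "message": "received response"}'
)

# The Java-style pattern yyyy-MM-dd'T'HH:mm:ss.SSS'Z' corresponds to the
# strptime format string below.
event_time = datetime.strptime(
    record["@timestamp"], "%Y-%m-%dT%H:%M:%S.%fZ"
).replace(tzinfo=timezone.utc)
print(event_time.isoformat())  # 2022-03-24T14:51:07.180000+00:00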
OpenTelemetry™ and Kubernetes® (as applicable) are trademarks of The Linux Foundation®.