Filebeat and Elasticsearch - Adding custom fields so ingested logs are more easily searchable

Overview

This post walks through adding custom fields in filebeat.yml for one log file type, so that the logs ingested by Elasticsearch are more easily searchable.

Terminology

The log files that we will be ingesting

We will be ingesting the following log files from my laptop in order to simplify this blog post.

- /var/log/*.log

The steps are applicable for logs from a test or production environment for one particular version of your app.

The custom fields that will be added

The fields added will indicate

  1. The version of the app eg version 2.1.0
  2. Where the logs originated, in this case “fromAnitaLaptop”

Once these fields are added to the index, they can be used to search and aggregate data based on these properties.

This simplifies searching for logs and creating charts aggregated by a particular version of the app on a particular server, for example if you wish to view logs “fromAnitaLaptop” running version 2.1.0.
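As a preview, once the custom fields described in this post are in place, a query like the one below in the Kibana search bar (KQL) would show exactly those logs. This is a sketch; the field names app.log.origin and app.version.display_name are the custom fields added in the steps that follow.

app.log.origin : "fromAnitaLaptop" and app.version.display_name : "2.1.0"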

Step-by-step simple proof of concept example of adding one field to filebeat.yml

The example uses generic logs generated by my laptop

  1. Pre-condition: Filebeat is installed on my laptop
  2. Edit filebeat.yml to add the custom field for the log file
  3. Save the file and restart Filebeat if it was already running
  4. Verify the new field is showing as expected in Kibana > Discover
  5. Refresh the index pattern so the new field is picked up
  6. Verify the new field is easily searchable in Kibana > Discover

Edit filebeat.yml to add the custom field for the log file

In filebeat.yml, add fields and fields_under_root as follows, below the path for one particular log, in this case the standard /var/log/*.log

In this example, the field app.log.origin with the value fromAnitaLaptop will be added to every indexed document in Elasticsearch coming from /var/log/*.log

# filebeat.yml --- SNIP ---
# Add the following fields and fields_under_root underneath the path

- type: log
  enabled: true
  paths:
    - /var/log/*.log
  fields:
    app.log.origin: fromAnitaLaptop
  fields_under_root: true

I used this field name because I am sure there are no existing fields starting with the root name “app”.

This makes it easy to know which fields were custom added, and that the chosen name is not overwriting an existing field and its value.
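As a rough sketch of what fields_under_root controls (per Filebeat's documented behaviour), the same custom field lands in a different place in each indexed event depending on that setting:

# fields_under_root: true   ->  the field is added at the top level of the event:
#                                 app.log.origin: fromAnitaLaptop
# fields_under_root: false  ->  (the default) the field is grouped under a fields sub-dictionary:
#                                 fields.app.log.origin: fromAnitaLaptop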

To keep this exercise simple, it is assumed that the entire filebeat.yml file contains only one input type with one path.

Warning

.yml files are extremely picky about indentation.

Be sure to use spaces (not tabs) for indentation, to avoid errors that leave Filebeat unable to restart.

Tip: Test your config file first

filebeat test config -c filebeat.yml

Save the file and restart Filebeat if it was already running
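How to restart depends on how Filebeat was installed. As a sketch, on a Linux machine where Filebeat was installed from the deb or rpm package and runs under systemd, it would typically be:

sudo systemctl restart filebeat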

Verify the new field is showing as expected in Kibana > Discover

New field

Notice the warning that there is no cached mapping for this field.

No cached mapping warning

Refresh the index pattern so the new field has a cached mapping: navigate to Management: Stack Management > Kibana: Index Patterns, select the index pattern, then Refresh and Confirm.

Refresh field list

It should now be easy to search on logs with this field.

To do so, ensure some new log entries have been generated since updating and restarting Filebeat, then Refresh Kibana > Discover to view the latest logs.

The new field should now be easily searchable in Kibana > Discover

One way is to type the new field name into “Search field names” under the index pattern name.

Search field names

An option will appear to Add the field

Add field

Add the field, which will show this field as a column in the logs view area.

Field added to column view

Another way is to type the field name in the Discover search bar at the top…

Type field name in Discover

… and select “equals” and the value should show as a suggestion.

Value is suggested
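Typed out in full in the search bar, that query is simply the following (a sketch using the custom field added above):

app.log.origin : "fromAnitaLaptop"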

Add a searchable version field

Add the following fields if the version number is explicitly known for a log file.

Note: app.version.patch would have been more correct as per semver.org

These custom fields make it simple to search and aggregate data for a specific version of the app.

# filebeat.yml --- SNIP ---
# Add the following fields and fields_under_root underneath the path


- type: log
  enabled: true
  paths:
    - /var/log/*.log
  fields:
    app.log.origin: fromAnitaLaptop
    app.version.display_name: "2.1.0"
    app.version.major: 2
    app.version.minor: 1
    app.version.bugfix: 0
  fields_under_root: true

The newly added version fields are shown

Version added with warning

Repeat the steps of restarting Filebeat and refreshing the Index Pattern to remove the warnings

Version added - no more warnings

Note that it is now easy to search on specific combinations of versions, as well as on versions less than or greater than a specific version.

Version searchable
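For example, KQL queries along these lines (using the version fields added above) cover both exact and range matches:

app.version.major : 2 and app.version.minor : 1
app.version.major >= 2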

It is also easy to include the app.log.origin value

Version and origin

Example: It is easy to filter only logs coming from major version 2 and minor version 1, where the bugfix version does not matter, and origin fromAnitaLaptop

Version and origin search results
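Written out as a KQL query in the search bar, that filter might look like this (a sketch using the custom fields from this post):

app.version.major : 2 and app.version.minor : 1 and app.log.origin : "fromAnitaLaptop"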

Conclusion

The custom fields added to the index in Elasticsearch contain useful meta information that gives the logs context, and thus the logs become more easily searchable within that context.

When logs are more easily searchable within a context, visual charts and graphs can easily be built for these different contexts, bringing visibility into your application and enabling stakeholders to understand and act upon its behaviour.