
Querying Cluster Logs

Last updated: 2024-10-24 15:59:23

This document describes how to query ES cluster logs. You can use these logs to check the cluster's health status, locate issues, and assist with application development and O&M.

Querying Cluster Logs

1. Log in to the ES Console, click Cluster ID/Name to enter the cluster details page.
2. Select the Logs tab to view the cluster's operation logs.
ES provides four types of logs: primary logs, search slow logs, indexing slow logs, and GC (Garbage Collection) logs. Each log entry includes a timestamp, a log level, and the log message.
ES retains the cluster's operation logs for the last 7 days by default and displays them in reverse chronological order. You can query logs by time range and keyword. You can also call the ES API to adjust log-related configurations; for example, for slow logs, you can set the time threshold above which a query or indexing request is considered slow.
3. In the search box on the log page, you can query related logs by time range and keyword. The keyword query syntax is consistent with that of Lucene.
Enter a keyword to query, for example: YELLOW.
Set a keyword for a specific field, for example: message:YELLOW.
Combine multiple conditions, for example: level:INFO AND ip:10.0.1.1. Note that Lucene boolean operators such as AND must be uppercase.


Log Description

Primary Logs

Querying Primary Logs: Displays the logs generated while the cluster is running. Each entry includes a timestamp, a severity level (such as INFO, WARN, or DEBUG), and the log message.
2019-2-14 08:00:00 10.0****
INFO
[o.e.c.r.a.AllocationService] [1550199698000783811] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[filebeat-2019.02.19][2]] ...]).

2019-2-14 02:30:02 10.0****
DEBUG
[o.e.a.a.i.d.TransportDeleteIndexAction] [1550199698000783811] failed to delete indices [[[filebeat-2019.02.09/7VZM6Fa-Twmj8pVaAyyrxg]]]
org.elasticsearch.index.IndexNotFoundException: no such index
at org.elasticsearch.cluster.metadata.MetaData.getIndexSafe(MetaData.java:475) ~[elasticsearch-5.6.4.jar:5.6.4]
at org.elasticsearch.cluster.metadata.MetaDataDeleteIndexService.lambda$deleteIndices$0(MetaDataDeleteIndexService.jav

Slow Logs

Slow logs capture query and indexing requests that exceed the specified time thresholds, so that abnormally slow requests issued by users can be tracked and analyzed.
2018-10-28 12:04:17
WARN
[index.indexing.slowlog.index] [1540298502000001009] [pmc/wCALr6BfRm-sr3qOQuGXXw] took[18.6ms], took_millis[18], type[articles], id[AWa41-J9c0s1mOPvR6F3], routing[], source[]
Enable and Adjust Slow Logs
By default, slow logs are disabled. Enabling them requires specifying the action type (query, fetch, or index), the log level at which events are recorded (INFO, WARN, DEBUG, etc.), and the time threshold for each level. You can enable and adjust these settings according to your business scenario.
To enable the slow log, click Kibana at the top right corner of the cluster detail page to enter the Kibana page, and use Dev Tools to call the related Elasticsearch APIs, or use the client to call the configuration modification API.
Configure all indexes:
PUT */_settings
{
  "index.indexing.slowlog.threshold.index.debug" : "5ms",
  "index.indexing.slowlog.threshold.index.info" : "50ms",
  "index.indexing.slowlog.threshold.index.warn" : "100ms",
  "index.search.slowlog.threshold.fetch.debug" : "10ms",
  "index.search.slowlog.threshold.fetch.info" : "50ms",
  "index.search.slowlog.threshold.fetch.warn" : "100ms",
  "index.search.slowlog.threshold.query.debug" : "100ms",
  "index.search.slowlog.threshold.query.info" : "200ms",
  "index.search.slowlog.threshold.query.warn" : "1s"
}
Configure a single index:
PUT /my_index/_settings
{
  "index.indexing.slowlog.threshold.index.debug" : "5ms",
  "index.indexing.slowlog.threshold.index.info" : "50ms",
  "index.indexing.slowlog.threshold.index.warn" : "100ms",
  "index.search.slowlog.threshold.fetch.debug" : "10ms",
  "index.search.slowlog.threshold.fetch.info" : "50ms",
  "index.search.slowlog.threshold.fetch.warn" : "100ms",
  "index.search.slowlog.threshold.query.debug" : "100ms",
  "index.search.slowlog.threshold.query.info" : "200ms",
  "index.search.slowlog.threshold.query.warn" : "1s"
}
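To turn a slow log threshold off again, set the corresponding setting to -1 (setting it to null resets it to the default). A minimal sketch, reusing the my_index example above:
PUT /my_index/_settings
{
  "index.search.slowlog.threshold.query.warn" : "-1"
}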

GC Logs

GC logging is enabled by default in ES. The following two sample GC log entries each show the log timestamp, node IP address, and log level.
2019-2-14 20:48:22
10.0.***
INFO
[o.e.m.j.JvmGcMonitorService] [1550199698000783711] [gc][380573] overhead, spent [307ms] collecting in the last [1s]
2019-2-14 10:04:09
10.0.***
WARN
[o.e.m.j.JvmGcMonitorService] [1550199698000784111] [gc][341943] overhead, spent [561ms] collecting in the last [1s]
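If GC overhead messages like the ones above appear frequently, the nodes' JVM statistics are worth checking. A sketch using the standard node stats API in Kibana Dev Tools; the jvm.gc.collectors section of the response reports the cumulative collection count and collection time per collector:
GET _nodes/stats/jvm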

Audit Log

Note
The audit log is an X-Pack advanced feature, supported only in the Platinum edition.
The audit log records operations such as create, delete, update, and query performed against the corresponding Elasticsearch instance. You can enable the audit log by following these steps:
1. On the Logs page, click Enable Audit Log on the right.

2. In the pop-up window, check the corresponding prompt and click Confirm.
Note
When audit log collection is enabled, audit logs are written as indices in the current ES cluster, with index names matching security_audit_log-*. You can query the cluster's audit logs in Kibana.
In ES 6.8.2 and later, audit data is retained for 3 days by default. To keep it longer, modify the corresponding index lifecycle management (ILM) policy. In ES 6.4.3, audit data is retained permanently by default, so clean up old data promptly. ES 5.6.4 does not support the audit log feature.
If you need to modify the types of audit events collected, you can refer to the documentation.
Enabling or disabling audit log collection will trigger a cluster restart. It is recommended to perform this operation when the cluster load is low.
3. After confirmation, the cluster will restart, and you can view the progress in the Change Records. Once the restart is successful, the audit log collection will be enabled.
Note
Audit log information will occupy cluster disk space and also affect performance. If you do not need to view the audit logs, you can disable the audit log collection feature in the same way.
4. Go to the Kibana Discover interface, find the corresponding index, and you can view the audit logs.
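As noted above, in ES 6.8.2 and later the audit indices are kept for 3 days by default; retention can be extended by adjusting the delete phase of the ILM policy that manages them. The policy name my_audit_policy below is hypothetical: first check which policy the audit indices actually use (for example via GET security_audit_log-*/_settings, looking at index.lifecycle.name), then update that policy. A sketch extending retention to 7 days:
PUT _ilm/policy/my_audit_policy
{
  "policy" : {
    "phases" : {
      "delete" : {
        "min_age" : "7d",
        "actions" : {
          "delete" : { }
        }
      }
    }
  }
}
Note that PUT _ilm/policy replaces the entire policy, so include any other existing phases when updating.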