This document outlines the process of configuring log collection for a TKE Serverless cluster using Custom Resource Definitions (CRD).
Preparations
Log in to the TKE console, and enable the log collection feature for the serverless cluster. For more information, see Enabling Log Collection.
Instructions
After enabling the log collection feature for the cluster, you can configure it as follows:
Configuring log rules
After enabling log collection, you need to configure log rules, including the log source, consumer end, and log parsing method.
1. Log in to the TKE console and choose Log Management > Log Rules in the left sidebar.
2. At the top of the "Log Collection" page, select the region and the TKE Serverless cluster for which you want to configure log collection rules, and click Create.
3. On the "Create Log Collection Rule" page, select the collection type, and configure the log source, consumer, and log parsing method. Currently, the supported collection types are Container Standard Output and Container File Path.
Collecting container standard output logs
Select Container Standard Output as the collection type and configure the log source according to your requirements.
This type of log source supports collecting:
All containers: all namespaces or all containers under a namespace.
Specify workload: the containers of a specified workload under a namespace. You can add multiple namespaces.
Specify Pod Labels: specify multiple Pod Labels under a namespace, and collect all containers that match the Labels.
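Because this guide is CRD-based, a standard-output rule can also be declared as a LogConfig custom resource. The sketch below is illustrative only: it assumes the TKE LogConfig CRD schema (apiVersion cls.cloud.tencent.com/v1), and every name and ID is a placeholder, not a value from this document.

```yaml
apiVersion: cls.cloud.tencent.com/v1
kind: LogConfig
metadata:
  name: stdout-example          # hypothetical rule name
spec:
  clsDetail:
    logsetId: <logset-id>       # placeholder: target CLS logset
    topicId: <topic-id>         # placeholder: target CLS log topic
    logType: minimalist_log     # "Full text in a single line" parsing mode
  inputDetail:
    type: container_stdout      # collect container standard output
    containerStdout:
      namespace: default        # collect from this namespace
      allContainers: true       # or target a specified workload / Pod labels instead
```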
Collecting container file logs
Select Container File Path as the collection type and configure the log source according to your requirements.
This type of log source supports collecting:
Specify workload: the specified file path of a container of a specified workload under a namespace.
Specify Pod Labels: specify multiple Pod Labels under a namespace, and collect the specified file path of all containers that match the Labels.
You can specify a file path or use wildcards for the collection path. For example, when the container file path is /opt/logs/*.log, you can specify the collection path as /opt/logs and the file name as *.log.
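In CRD terms, the file-path case is expressed through the inputDetail of the LogConfig resource. The fragment below is a hedged sketch assuming the TKE LogConfig CRD schema; the workload name and paths are placeholders mirroring the /opt/logs/*.log example above.

```yaml
spec:
  inputDetail:
    type: container_file
    containerFile:
      namespace: default         # namespace of the workload
      workload:
        type: deployment         # hypothetical workload type
        name: nginx              # hypothetical workload name
      logPath: /opt/logs         # collection path (directory)
      filePattern: "*.log"       # file name, wildcards allowed
```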
Note:
If "Container File Path" is selected as the collection type, the path cannot be a symbolic link. Otherwise, the link's target path will not exist in the collector's container, causing log collection to fail.
Note:
For container standard output and container files, in addition to the raw log content, container or Kubernetes-related metadata (e.g., the Pod name that generated the logs) will also be reported to CLS. This allows users to trace the log source or search based on container identifiers or characteristics (e.g., container name, labels) when viewing logs.
Please refer to the list below for container or Kubernetes-related metadata:
cluster_id: ID of the cluster to which the log belongs.
container_name: Name of the container to which the log belongs.
image_name: Image name of the container to which the log belongs.
namespace: Namespace of the Pod to which the log belongs.
pod_uid: UID of the Pod to which the log belongs.
pod_name: Name of the Pod to which the log belongs.
pod_ip: IP of the Pod to which the log belongs.
pod_label_{label name}: Labels of the Pod to which the log belongs. For example, a Pod with the two labels app=nginx and env=prod will have its logs uploaded with two metadata fields: pod_label_app:nginx and pod_label_env:prod.
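As an illustration of the metadata above, a single reported log entry might look like the following. All values are invented for the example; only the field names come from the list above (plus the CONTENT key used for raw log content).

```json
{
  "CONTENT": "2024/01/01 12:00:00 GET /index.html 200",
  "cluster_id": "cls-example",
  "container_name": "nginx",
  "image_name": "nginx:1.25",
  "namespace": "default",
  "pod_uid": "example-pod-uid",
  "pod_name": "nginx-7d9c-abcde",
  "pod_ip": "10.0.0.12",
  "pod_label_app": "nginx",
  "pod_label_env": "prod"
}
```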
4. Configure Tencent Cloud Log Service (CLS) as the consumer end. Select the desired logset and log topic. Automatically creating a new log topic is recommended.
Note:
CLS supports log collection and reporting only for container clusters in the same region.
The logset and log topic cannot be changed after the log rule is configured.
5. Click Next and choose a log extraction mode.
Explanation of Multiple Extraction Modes:
Full text in a single line: A log contains only one line of content, ending with a newline character (\n). Each log is parsed into a complete string with the key value of CONTENT. After enabling indexing, you can search for log content through full-text retrieval. The log time is based on the collection time.
Full text in multi lines: A complete log entry may span multiple lines, and a first-line regular expression is used for matching. When a log line matches the predefined regular expression, it is considered the beginning of a log entry, and the next matching line serves as the end identifier for that entry. A default key value, CONTENT, is also set, with the log time based on the collection time.
Single line - full regex: In this log parsing mode, a complete single-line log is processed using regular expressions to extract multiple key-value pairs. First, input a log sample, then input a custom regular expression. The system will extract the corresponding key-value pairs based on the capture groups within the regular expression.
Multiple lines - full regex: This log parsing mode is suitable for complete log data spanning multiple lines in log text files (such as Java program logs) and can extract multiple key-value pairs based on a regular expression. First, input a log sample, then input a custom regular expression. The system will extract the corresponding key-value pairs based on the capture groups in the regular expression.
JSON: A JSON log automatically extracts the key at the first layer as the field name and the value at the first layer as the field value to implement structured processing of the entire log. Each complete log ends with a line break (\n).
Separator: In a separator log, the entire log data can be structured according to the specified separator, and each complete log ends with a line break (\n). When CLS processes separator logs, you need to define a unique key for each separated field. Invalid fields, which are fields that need not be collected, can be left blank. However, you cannot leave all fields blank.
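In the LogConfig CRD, the parsing mode maps to clsDetail.logType together with an extractRule. The sketch below shows the separator mode; the field names are assumptions based on the TKE LogConfig CRD schema, and the IDs, separator, and keys are placeholders.

```yaml
spec:
  clsDetail:
    logsetId: <logset-id>          # placeholder
    topicId: <topic-id>            # placeholder
    logType: delimiter_log         # "Separator" parsing mode
    extractRule:
      delimiter: "|"               # structure each log by this separator
      keys: ["time", "level", "", "message"]  # "" leaves a field uncollected
```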
Currently, one log topic supports only one collection configuration. Ensure that all container logs collected to the log topic can be parsed with the method you choose. If you create different collection configurations under the same log topic, the earlier ones will be overwritten.
6. Enable other features as needed.
Enable filters and configure rules. Once enabled, only logs that match the filter rules will be collected. "Key" supports full matching, and the filter rules support regex matching. For example, you can collect only logs in which ErrorCode matches 404.
Enable uploading of failed log parsing.
When enabled, all logs that fail to parse are uploaded, with the key name as the Key and the raw log content as the Value. When disabled, logs that fail to parse are discarded.
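Expressed through the LogConfig CRD, the filter from this step would also sit under extractRule. The sketch below mirrors the ErrorCode example; the filterKeys/filterRegex field names are assumptions based on the TKE LogConfig CRD schema.

```yaml
spec:
  clsDetail:
    extractRule:
      filterKeys: ["ErrorCode"]    # key: full match
      filterRegex: ["404"]         # value: regex; only matching logs are collected
```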
7. Click Done to complete the creation.
Updating log rules
1. Log in to the TKE console and choose Log Management > Log Rules in the left sidebar.
2. On the Log Rules page, select the log rule you want to update and click Edit Collecting Rule on the right.
3. Update the relevant settings according to your needs and click Complete to finalize the changes.