Agent is a lightweight log collector provided by DataInLong. It can complete installation, management, and operation through a product interface without any coding. When deploying business using Tencent Cloud CVM or self-built servers, you can configure Agent to collect logs and file information within the server and deliver them to downstream targets.
Operation Steps
Step 1: Create a collection task
Go to Configuration Center > Real-time Synchronization Task and click New Log Collection Task. Enter a task name and select a configuration mode; both Form and Canvas modes are supported.
Step 2: Configure the data source
Select CVM as the data source type and configure the data source parameters.
Parameter
Description
Collector Group
Select an available collector group for the current project. If none is available, click Create Collector to create one.
Server categories
Select the server category the servers belong to. After selection, the task collects logs from all servers in that category.
File path
Manually enter the file path of the data source.
Blocklist
Disabled by default. When enabled, files under the configured blocklist paths are not collected.
Read mode
The CVM source supports two read modes:
Full: Reads from the first line of the log file content.
Increment: Reads the latest content starting from the end of the log.
Single record end mark
A newline character (the Enter key) by default. If you select Regular Expression, you must manually enter a regular expression that correctly matches record boundaries.
Content extraction mode
Supports three content extraction modes:
Full Content: Each log record content is parsed into a complete string with the key value __CONTENT__.
JSON: Each log record content is parsed into JSON key-value pairs. The keys need to be defined in the data field.
Split: Parses log content using the specified delimiter. The keys must be defined in the data fields (vertical bar, comma, and semicolon delimiters are supported).
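Taken together, the data-source parameters above describe a small collection pipeline: filter files against the blocklist, pick a start offset from the read mode, group raw lines into records, and extract content. The sketch below illustrates this in Python; the function names, the glob-style blocklist matching, and the record-start regex convention are all assumptions for illustration, not the Agent's actual implementation.

```python
import fnmatch
import json
import re

# Blocklist: skip files matching any pattern (glob-style matching is
# an assumption here; the Agent's actual matching rules may differ).
def should_collect(path, blocklist):
    return not any(fnmatch.fnmatch(path, pat) for pat in blocklist)

# Read mode: "full" starts at the first byte of the file;
# "increment" starts at the current end, so only new content is read.
def start_offset(current_size, mode):
    if mode == "full":
        return 0
    if mode == "increment":
        return current_size
    raise ValueError(f"unknown read mode: {mode}")

# Single record end mark: one common convention (assumed here) is a
# regex matching the first line of each record, so that continuation
# lines such as stack traces stay attached to the previous record.
RECORD_START = re.compile(r"^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}")

def group_records(lines):
    records = []
    for line in lines:
        if RECORD_START.match(line) or not records:
            records.append(line)
        else:
            records[-1] += "\n" + line  # continuation line
    return records

# Content extraction: the three modes described above.
def extract(content, mode, delimiter="|", keys=None):
    if mode == "full":
        return {"__CONTENT__": content}   # whole record under one key
    if mode == "json":
        return json.loads(content)        # key-value pairs from JSON
    if mode == "split":
        return dict(zip(keys, content.split(delimiter)))
    raise ValueError(f"unknown extraction mode: {mode}")
```

For example, with Split mode and a vertical-bar delimiter, the line `10|demo` with data fields `age` and `name` yields the pairs `age=10` and `name=demo`; in Full Content mode the same line is delivered whole under the `__CONTENT__` key.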
Step 3: Configure the Data Target
Log collection currently supports most mainstream databases as data targets.
Step 4: Configure Field Mapping
After the data source and data target are configured, the field information of the source and target tables is displayed. Map the fields using Same Name Mapping or Peer Mapping; fields can also be sorted and configured.
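The two mapping strategies can be illustrated as follows. Here "Peer Mapping" is assumed to mean positional, row-by-row pairing, and both function names are hypothetical:

```python
def same_name_mapping(source_fields, target_fields):
    """Pair source and target fields that share the same name."""
    targets = set(target_fields)
    return [(f, f) for f in source_fields if f in targets]

def peer_mapping(source_fields, target_fields):
    """Pair fields positionally, row by row; extras stay unmapped."""
    return list(zip(source_fields, target_fields))

# "id" and "ts" exist on both sides, so same-name mapping pairs them;
# peer mapping instead pairs whatever fields sit on the same row.
same_name_mapping(["id", "msg", "ts"], ["ts", "id", "level"])
peer_mapping(["id", "msg"], ["col_a", "col_b"])
```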
Click Add Field to configure fields: you can change a field's name and type, and delete or add fields.
Click Bulk Add:
Parameter
Description
Data Type
The data source type of the current node.
Addition Method
Append Field: Appends the new parsed fields after the original fields of the table.
Overwrite Original Field: The new parsed fields overwrite the original fields in the current source table.
Field Extraction
Text Parsing: Parses the entered text, one field per line, with the field name and type separated by the specified delimiter, e.g., age int. Note: leading and trailing whitespace is trimmed, and empty lines are ignored.
JSON Parsing: Enter JSON content to quickly parse fields from its key/value pairs, e.g., {"age":10,"name":"demo"}. Only certain types can currently be parsed; you can confirm and adjust the results in Form Mode. For duplicate fields, the last entry is kept.
Pull from Structural Table: Specify a table object of a data source and parse the fields of that table.
Text to be Parsed
Separator
Separates the field name from its type. Tab, vertical bar (|), and space are supported, e.g., age|int.
Quick Fill Field Type
Common field types, supporting constants, functions, variables, string, boolean, date, datetime, timestamp, time, double, float, tinyint, smallint, tinyint unsigned, int, mediumint, smallint unsigned, bigint, int unsigned, bigint unsigned, double precision, tinyint(1), char, varchar, text, varbinary, blob.
Parse Data
Parse the input content.
Preview
Batch Delete
Select entries in the preview list, then delete the selected parsing results in bulk.
Field name
Field Name.
Type
Field Type.
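The Text Parsing and JSON Parsing options above can be sketched as follows; the trimming behavior, the type mapping, and both function names are assumptions for illustration, not the product's actual parser.

```python
import json

def parse_text_fields(text, sep="|"):
    """Text Parsing: one field per line, name and type separated by
    `sep`, e.g. "age|int"; surrounding whitespace is trimmed and
    empty lines are skipped (assumed behavior)."""
    fields = []
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        name, ftype = line.split(sep, 1)
        fields.append((name.strip(), ftype.strip()))
    return fields

# Hypothetical mapping from sample JSON value types to column types.
TYPE_MAP = {bool: "boolean", int: "int", float: "double", str: "varchar"}

def parse_json_fields(content):
    """JSON Parsing: derive field names and types from a sample
    object. json.loads keeps the last value for a duplicate key,
    which mirrors the "last entry is kept" rule described above."""
    obj = json.loads(content)
    return [(k, TYPE_MAP.get(type(v), "text")) for k, v in obj.items()]
```

For example, `parse_text_fields("age|int\nname|varchar")` and `parse_json_fields('{"age":10,"name":"demo"}')` both yield the field list age:int, name:varchar.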
Step 5: Configure Task Attributes
Click Task Attributes on the right to open the panel, then configure the appropriate Basic Attributes and Integrated Resource Group.
Step 6: Submit the Task
Serial number
Parameter
Description
1
Submit
Submits the current task to the production environment. Depending on whether an online task already exists, choose the appropriate strategy when submitting:
If the task has no active online version (this is the first submission, or the online task is in Failed status), it can be submitted directly.
If an online task is in Running or Paused status, you must choose a strategy: stopping the online job discards the previous run's position and consumes data from the beginning, while retaining the job status resumes consumption from the last recorded position after the restart.
Note
Click Start Now to run the task immediately after submission; otherwise, the task must be started manually.
2
Lock/Unlock
By default, the task creator holds the lock, and only the lock holder can edit the task configuration and run the task. If the lock holder performs no edits within 5 minutes, other users can click the lock icon to acquire the lock and, once it is acquired, edit the task.
3
Go to Operations
Quickly navigates to the operation and maintenance page for the current task.
4
Save
After previewing, click Save to save the task configuration. A task that is only saved is not submitted to the Operations Center.
Subsequent Steps
After completing the task configuration, you can perform operation and maintenance on the created collection tasks and set up monitoring and alarms, for example, configuring alarms for tasks and viewing key metrics of task runs. For more details, see Real-time Task Operation and Maintenance.