
Scenarios

Last updated: 2024-10-11 17:55:01

Data reporting and query

CKafka Connector can be applied in data reporting scenarios such as mobile app behavior analysis, frontend page error log reporting, and business data reporting. Typically, the reported data needs to be transferred to downstream storage and analysis systems (e.g., Elasticsearch, HDFS) for processing. Conventionally, this requires setting up servers, purchasing storage systems, and writing custom code for data reception, processing, and dumping. This process is cumbersome and incurs high long-term maintenance costs.
As a SaaS service, CKafka Connector lets you build the complete linkage in just two steps: configure the connector in the console, then report data through the SDK. The service is serverless and pay-as-you-go, so there is no need to estimate capacity in advance, which reduces both development and usage costs.
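The reporting side of such a pipeline is straightforward: the client serializes each behavior record into a message and hands it to a Kafka producer SDK. The sketch below builds one such message; the field names and the topic shown in the comment are illustrative assumptions, not a schema prescribed by CKafka Connector.

```python
import json
import time
import uuid

def build_behavior_event(user_id, action, properties=None):
    """Serialize one app behavior record into a JSON message body,
    ready to be handed to any Kafka producer SDK.
    All field names here are illustrative assumptions."""
    return json.dumps({
        "event_id": str(uuid.uuid4()),   # unique key for downstream de-duplication
        "user_id": user_id,
        "action": action,
        "properties": properties or {},
        "ts": int(time.time() * 1000),   # client-side timestamp in milliseconds
    })

# Sending the message with the kafka-python client would look roughly like
# this (broker address and topic name are placeholders):
#
#   from kafka import KafkaProducer
#   producer = KafkaProducer(bootstrap_servers="your-ckafka-endpoint:9092")
#   payload = build_behavior_event("u-1001", "page_view", {"page": "/home"})
#   producer.send("app-behavior-report", payload.encode("utf-8"))
#   producer.flush()
```

Once messages land in the topic, the console-configured connector handles the rest of the linkage (processing and dumping to the downstream system).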
Database change subscription

CKafka Connector supports subscribing to change data from various databases based on the CDC (Change Data Capture) mechanism, such as MySQL Binlog, MongoDB Change Stream, PostgreSQL row-level changes, and SQL Server row-level changes. For instance, in practical business scenarios, it is often necessary to subscribe to MySQL Binlog logs to obtain change records (Insert, Update, and Delete operations, as well as DDL statements) and perform the corresponding business logic, such as querying, fault recovery, and analysis.
Traditionally, customers need to build and operate CDC-based subscription components themselves (such as Canal, Debezium, or Flink CDC) to subscribe to database changes. Building and maintaining these components requires significant human resources, plus a comprehensive monitoring system to ensure their stable operation.
In contrast, CKafka Connector provides SaaS components that enable data subscription, processing, and dumping through simple UI configurations.
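On the consuming side, change records typically arrive as JSON envelopes. The sketch below dispatches on a Debezium-style MySQL envelope (the `op`, `before`, and `after` fields follow Debezium's change-event format); the handler bodies are placeholders for real business logic such as cache invalidation or replay.

```python
import json

def handle_change_event(raw):
    """Dispatch a Debezium-style MySQL binlog change event by operation type.
    The envelope fields ("op", "before", "after") follow Debezium's format;
    the return strings stand in for real business-logic handlers."""
    event = json.loads(raw)
    op = event.get("op")
    if op == "c":                        # "c" = create (Insert)
        return f"insert: {event['after']}"
    if op == "u":                        # "u" = update
        return f"update: {event['before']} -> {event['after']}"
    if op == "d":                        # "d" = delete
        return f"delete: {event['before']}"
    return f"other (snapshot/DDL): {op}"
```

With the connector in place, this function is the only custom code left; subscription, delivery, and monitoring of the CDC stream are handled by the service.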
Data integration

CKafka Connector supports integrating data from different data sources (databases, middleware, logs, application systems) in various environments (Tencent public cloud, self-built IDC, cross-cloud, hybrid cloud) into the public cloud message queue service for processing and distribution. In real-world business processes, users often need to aggregate data from multiple sources into a message queue, such as application client data, business DB data, and operational log data for analysis and processing. Normally, this data must be cleaned and formatted before being uniformly dumped, analyzed, or processed.
CKafka Connector offers robust data aggregation, storage, processing, and dumping capabilities. In short, it can easily integrate data by connecting different data sources to downstream data targets.
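The cleaning-and-formatting step usually amounts to mapping heterogeneous source records onto one unified schema before dumping. A minimal sketch, assuming three illustrative sources whose field names are not fixed by CKafka Connector:

```python
def normalize(record, source):
    """Map records from different data sources onto one unified schema
    (uid / ts / payload) before aggregation and dumping.
    The per-source field names are illustrative assumptions."""
    if source == "app_client":
        return {"uid": record["user_id"], "ts": record["client_ts"], "payload": record["data"]}
    if source == "business_db":
        return {"uid": record["uid"], "ts": record["updated_at"], "payload": record["row"]}
    if source == "ops_log":
        # operational logs may lack a user id; fall back to a sentinel
        return {"uid": record.get("uid", "unknown"), "ts": record["time"], "payload": record["message"]}
    raise ValueError(f"unknown source: {source}")
```

Downstream consumers then see a single schema regardless of whether a record originated from an app client, a business database, or an operational log.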
Data ETL and dumping

In some customer scenarios, data already exists in a message queue (e.g., Kafka) as a caching layer and needs to be cleaned and formatted (ETL) before being stored downstream (e.g., Kafka, Elasticsearch, COS). Commonly, users rely on Logstash, Flink, or custom code for data cleansing and must keep these components running themselves. When the data only requires simple processing, learning the syntax, configuration, and internals of such components and maintaining them is cumbersome, adding extra development and operational costs.
CKafka Connector comes with lightweight, UI-based data ETL and dumping capabilities that are simple to configure, making it easier for you to process data and dump it to downstream storage systems.
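The kind of "simple processing" meant here is typically a filter-rename-reserialize step per message. A minimal sketch, with illustrative field names; a real rule set would mirror whatever mapping is configured in the console:

```python
import json

def etl_transform(raw):
    """One lightweight ETL step: drop malformed or debug-level records,
    rename fields, and re-serialize before dumping downstream.
    Field names (ts, level, msg) are illustrative assumptions."""
    try:
        rec = json.loads(raw)
    except json.JSONDecodeError:
        return None                      # drop malformed lines entirely
    if rec.get("level") == "debug":
        return None                      # filter out noise before storage
    return json.dumps({
        "timestamp": rec.get("ts"),      # rename: ts -> timestamp
        "severity": rec.get("level", "info"),
        "message": rec.get("msg", ""),
    })
```

Expressing this logic as console configuration instead of a Logstash pipeline or a Flink job removes the need to deploy and monitor a separate processing component for such simple transforms.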