I am developing a Flink application that sinks to Kafka. I created a Kafka producer with the default pool size of 5. I enabled checkpointing with the following configuration:
env.enableCheckpointing(1800000); // checkpointing every 30 minutes
// set mode to exactly-once (this is the default)
env.getCheckpointConfig().setCheckpointingMode(CheckpointingMode.EXACTLY_ONCE);
// make sure 500 ms of progress happen between checkpoints
env.getCheckpointConfig().setMinPauseBetweenCheckpoints(500);
I am trying to connect Kafka with Flink and run queries through sql-client.sh. But no matter what I do with the .yaml file and the libraries, I get the error:
Exception in thread "main" org.apache.flink.table.client.SqlClientException: Unexpected exception. This is a bug. Please consider filing an issue.
at org.apache.flink.table.client.SqlClient.startClient(SqlClient.java:201)
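With the SQL client, this "This is a bug" wrapper exception is very often triggered by a missing or version-mismatched Kafka connector jar in the client's classpath rather than an actual bug. As a sketch (table name, topic, and broker address are made-up placeholders), a minimal Kafka table definition that should work once a flink-sql-connector-kafka jar matching the Flink version sits in the lib/ directory:

```sql
-- Hypothetical table; adjust names, topic, and format to your setup.
CREATE TABLE clicks (
    user_id STRING,
    url     STRING,
    ts      TIMESTAMP(3)
) WITH (
    'connector' = 'kafka',
    'topic' = 'clicks',
    'properties.bootstrap.servers' = 'localhost:9092',
    'properties.group.id' = 'sql-client-demo',
    'scan.startup.mode' = 'earliest-offset',
    'format' = 'json'
);
```

Mixing a connector jar built for a different Flink (or Scala) version than the cluster is a common trigger for exactly this kind of startup failure.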
I created this sample program using doubles.
When I run it in the IDE, I get the following error:
log4j:WARN No appenders could be found for logger (org.apache.flink.api.scala.ClosureCleaner$).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Exception in thread "main"
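The three log4j warnings only mean that no log4j configuration was found on the classpath; they are unrelated to the exception itself, but they hide its logging context. A minimal log4j.properties placed in src/main/resources (file name and location are the stock log4j 1.2 convention, not anything project-specific) silences them:

```properties
# src/main/resources/log4j.properties
log4j.rootLogger=INFO, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{HH:mm:ss,SSS} %-5p %c{1} - %m%n
```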
I am trying to read the Iris dataset into Flink with the DataSet API and keep getting ParserError NUMERIC_VALUE_ILLEGAL_CHARACTER.
The csv (with hidden characters made visible) looks like this:
I tried removing the whitespace between the items in the file, but without success. There are several items that cannot be read; this is the error output:
org.apache.flink.api.common.io.ParseException: Line could not be parsed: '5.4, 3.0, 4.5, 1.5, 2'
ParserError NUME
I am using Flink version 1.13.1.
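The " 3.0"-style fields in the failing line are the likely culprit: Flink's strict numeric field parsers reject leading whitespace, which plain Double.parseDouble would tolerate. A self-contained sketch (strictParse is a hypothetical stand-in for Flink's parser, not the real class) showing why the line fails and why trimming each field fixes it:

```java
public class CsvTrimDemo {

    // Hypothetical stand-in for a strict numeric parser that, like
    // Flink's, rejects leading whitespace in a field (Double.parseDouble
    // alone would silently accept it).
    static double strictParse(String field) {
        if (!field.isEmpty() && Character.isWhitespace(field.charAt(0))) {
            throw new NumberFormatException(
                "NUMERIC_VALUE_ILLEGAL_CHARACTER in: '" + field + "'");
        }
        return Double.parseDouble(field);
    }

    public static void main(String[] args) {
        String line = "5.4, 3.0, 4.5, 1.5, 2";
        String[] fields = line.split(",");

        // The raw second field is " 3.0": the strict parse rejects it.
        boolean strictFails;
        try {
            strictParse(fields[1]);
            strictFails = false;
        } catch (NumberFormatException e) {
            strictFails = true;
        }

        // Trimming every field first makes the whole line parse cleanly.
        int parsed = 0;
        for (String f : fields) {
            strictParse(f.trim());
            parsed++;
        }
        System.out.println(strictFails + " " + parsed); // prints "true 5"
    }
}
```

In the actual job, the simplest fix is usually to rewrite the file so there is no whitespace after the delimiter before handing it to the CSV reader.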
I wrote a custom metric reporter, but it does not seem to work in my Flink setup. When Flink starts, the JobManager prints a warning like the following:
2021-08-25 14:54:06,243 WARN org.apache.flink.runtime.metrics.ReporterSetup [] - The reporter factory (org.apache.flink.metrics.kafka.KafkaReporterFactory) could not be found for reporter kafka. Available f
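The "reporter factory could not be found" warning means Flink never discovered the factory through Java's ServiceLoader mechanism. Assuming the factory class really is org.apache.flink.metrics.kafka.KafkaReporterFactory as the log says, the reporter jar needs a service registration file naming exactly that class, and the jar must be visible to the cluster (typically under plugins/<your-dir>/; the directory name is your choice):

```
# src/main/resources/META-INF/services/org.apache.flink.metrics.reporter.MetricReporterFactory
org.apache.flink.metrics.kafka.KafkaReporterFactory
```

The reporter is then referenced in flink-conf.yaml by its factory class:

```yaml
metrics.reporter.kafka.factory.class: org.apache.flink.metrics.kafka.KafkaReporterFactory
```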
When using the upsert-kafka sink with the buffer-flush options I get the error below, and the same happens without the buffer-flush options.
Exception in thread "main" java.lang.IllegalStateException: There is no the LegacySinkTransformation.
at org.apache.flink.streaming.api.datastream.DataStreamSink.getTransformation(DataStreamSink.java:71)
at org.apache.flin
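For reference, the buffer-flush options on the upsert-kafka connector are set in the table DDL (names and formats below are placeholders); the IllegalStateException above, though, is independent of these options, so one thing worth checking is whether the connector jar on the classpath was built for a different Flink version than the one running:

```sql
-- Placeholder table; key/value formats and names are illustrative.
CREATE TABLE page_view_counts (
    page_id STRING,
    cnt     BIGINT,
    PRIMARY KEY (page_id) NOT ENFORCED
) WITH (
    'connector' = 'upsert-kafka',
    'topic' = 'page-view-counts',
    'properties.bootstrap.servers' = 'localhost:9092',
    'key.format' = 'json',
    'value.format' = 'json',
    'sink.buffer-flush.max-rows' = '1000',
    'sink.buffer-flush.interval' = '1s'
);
```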
I am running Flink 1.9.0 with Scala 2.12 and trying to publish data to Kafka. Everything works fine when debugging locally, but as soon as I submit the job to the cluster I get the following java.lang.LinkageError at runtime and the job cannot run:
java.lang.LinkageError: loader constraint violation: loader (instance of org/apache/flink/util/ChildFirstClassLoader) previously initiated loading for a different type with name "org/apache/
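A LinkageError from ChildFirstClassLoader typically means the same class is present both in the user jar and in the cluster's lib/ directory, so the two classloaders each define a different copy. Two usual remedies: mark the Flink and connector dependencies as provided in the build so they are not bundled into the fat jar, or force the conflicting package to load parent-first. The latter is a flink-conf.yaml setting; the package shown is an assumption, since the full class name in the truncated error is not visible:

```yaml
# flink-conf.yaml — load these classes from the cluster's classpath
# first instead of the user jar (adjust the package to the one named
# in the full LinkageError message)
classloader.parent-first-patterns.additional: org.apache.kafka
```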
We have created a streaming Flink application that runs on Kinesis in AWS. It is mainly used to process clickstream data (page views, sessionization, and so on). We split the page-view input out of a keyed window (keyed by a session/device token).
The application runs fine at small scale, but when we test it at our expected normal production throughput (roughly one million page views per day), we periodically hit an error when windows are merged:
"The end timestamp of an event-time window cannot become earlier than the current watermark by merging."
This UnsupportedOperation
I am reading a csv file from disk with Flink (Java, Maven, version 8.1) and get the following exception:
ERROR operators.DataSinkTask: Error in user code: Channel received an event before completing the current partial record.: DataSink(Print to System.out) (4/4)
java.lang.IllegalStateException: Channel received an event before completing the current
I am using Flink v1.13.2 with one JobManager and three TaskManagers.
For some reason (I have not been able to figure out why), the TaskManager connections keep being lost. Here is the log I found:
2022-02-17 21:19:55,891 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph [] - Sink: Print to Std. Out (13/32) (f0ff88713cc3ff5ce39e7073468abed4) switched from RUNNING to FAILED on 1.2.3.5:39309-f61daa @ serve
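Lost TaskManager connections usually surface as heartbeat timeouts, with root causes like long GC pauses, network blips, or an OOM-killed TaskManager process, so the TaskManager logs around the same timestamp are the first place to look. The timeouts themselves are tunable in flink-conf.yaml; the values below are illustrative, not recommendations:

```yaml
# flink-conf.yaml — heartbeat tuning (milliseconds)
heartbeat.interval: 10000   # how often heartbeats are sent (default 10000)
heartbeat.timeout: 180000   # raised from the 50000 default to ride out long pauses
```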