
Flink throws java.lang.RuntimeException: Buffer pool is destroyed

Stack Overflow user
Asked on 2018-01-16 15:58:52
1 answer · 2.3K views · 0 followers · 4 votes

Need help! Can anyone point me in the right direction?

Below are snippets of my code and log.

DataStream<ObjectNode> stream = env.addSource(KafkaConsumer.getKafkaConsumer());
DataStream<MyDataObject> dataStream = stream.flatMap(new DataTransformation());

I use a flatMap function to process each input object and emit multiple objects.
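For reference, a flatMap of this shape typically looks like the following minimal sketch. The source of DataTransformation is not shown in the question, so the body, the split() helper, and MyDataObject.fromJson() here are all hypothetical:

```java
import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.util.Collector;
import com.fasterxml.jackson.databind.node.ObjectNode;

// Hypothetical sketch: turn one Kafka record into zero or more output objects.
public class DataTransformation implements FlatMapFunction<ObjectNode, MyDataObject> {
    @Override
    public void flatMap(ObjectNode input, Collector<MyDataObject> out) throws Exception {
        for (ObjectNode item : split(input)) {          // split() is a hypothetical helper
            // Each collect() call requests a network buffer from the task's buffer pool,
            // which is where the "Buffer pool is destroyed" exception is raised.
            out.collect(MyDataObject.fromJson(item));   // fromJson() is also hypothetical
        }
    }
}
```

Note that "Buffer pool is destroyed" is usually a symptom rather than the root cause: it typically appears when collect() runs while the task is already cancelling, so an earlier failure elsewhere in the job (often visible further up in the JobManager/TaskManager logs) is worth looking for.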

Here is the stack trace:

java.lang.RuntimeException: Buffer pool is destroyed.
    at org.apache.flink.streaming.runtime.io.RecordWriterOutput.collect(RecordWriterOutput.java:75) ~[flink-dist_2.11-1.2.0.jar:1.2.0]
    at org.apache.flink.streaming.runtime.io.RecordWriterOutput.collect(RecordWriterOutput.java:39) ~[flink-dist_2.11-1.2.0.jar:1.2.0]
    at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:797) [flink-dist_2.11-1.2.0.jar:1.2.0]
    at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:775) [flink-dist_2.11-1.2.0.jar:1.2.0]
    at org.apache.flink.streaming.api.operators.TimestampedCollector.collect(TimestampedCollector.java:51) ~[flink-dist_2.11-1.2.0.jar:1.2.0]
    at com.data.transformation.DataTransformation.flatMap(DataTransformation.java:68) [eventproducer.jar:na]
    at com.data.transformation.DataTransformation.flatMap(DataTransformation.java:23) [eventproducer.jar:na]
    at org.apache.flink.streaming.api.operators.StreamFlatMap.processElement(StreamFlatMap.java:47) [flink-dist_2.11-1.2.0.jar:1.2.0]
    at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:422) [flink-dist_2.11-1.2.0.jar:1.2.0]
    at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:407) [flink-dist_2.11-1.2.0.jar:1.2.0]
    at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:797) [flink-dist_2.11-1.2.0.jar:1.2.0]
    at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:775) [flink-dist_2.11-1.2.0.jar:1.2.0]
    at org.apache.flink.streaming.api.operators.StreamSourceContexts$ManualWatermarkContext.collectWithTimestamp(StreamSourceContexts.java:272) [flink-dist_2.11-1.2.0.jar:1.2.0]
    at org.apache.flink.streaming.connectors.kafka.internals.AbstractFetcher.emitRecordWithTimestamp(AbstractFetcher.java:261) [flink-connector-kafka-base_2.10-1.2.0.jar:1.2.0]
    at org.apache.flink.streaming.connectors.kafka.internal.Kafka010Fetcher.emitRecord(Kafka010Fetcher.java:88) [flink-connector-kafka-0.10_2.10-1.2.0.jar:1.2.0]
    at org.apache.flink.streaming.connectors.kafka.internal.Kafka09Fetcher.runFetchLoop(Kafka09Fetcher.java:157) [flink-connector-kafka-0.9_2.10-1.2.0.jar:1.2.0]
    at org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase.run(FlinkKafkaConsumerBase.java:255) [flink-connector-kafka-base_2.10-1.2.0.jar:1.2.0]
    at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:78) [flink-dist_2.11-1.2.0.jar:1.2.0]
    at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:55) [flink-dist_2.11-1.2.0.jar:1.2.0]
    at org.apache.flink.streaming.runtime.tasks.SourceStreamTask.run(SourceStreamTask.java:56) [flink-dist_2.11-1.2.0.jar:1.2.0]
    at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:272) [flink-dist_2.11-1.2.0.jar:1.2.0]
    at org.apache.flink.runtime.taskmanager.Task.run(Task.java:655) [flink-dist_2.11-1.2.0.jar:1.2.0]
    at java.lang.Thread.run(Thread.java:748) [na:1.8.0_151]
Caused by: java.lang.IllegalStateException: Buffer pool is destroyed.
    at org.apache.flink.runtime.io.network.buffer.LocalBufferPool.requestBuffer(LocalBufferPool.java:149) ~[flink-dist_2.11-1.2.0.jar:1.2.0]
    at org.apache.flink.runtime.io.network.buffer.LocalBufferPool.requestBufferBlocking(LocalBufferPool.java:138) ~[flink-dist_2.11-1.2.0.jar:1.2.0]
    at org.apache.flink.runtime.io.network.api.writer.RecordWriter.sendToTarget(RecordWriter.java:131) ~[flink-dist_2.11-1.2.0.jar:1.2.0]
    at org.apache.flink.runtime.io.network.api.writer.RecordWriter.emit(RecordWriter.java:88) ~[flink-dist_2.11-1.2.0.jar:1.2.0]
    at org.apache.flink.streaming.runtime.io.StreamRecordWriter.emit(StreamRecordWriter.java:86) ~[flink-dist_2.11-1.2.0.jar:1.2.0]
    at org.apache.flink.streaming.runtime.io.RecordWriterOutput.collect(RecordWriterOutput.java:72) ~[flink-dist_2.11-1.2.0.jar:1.2.0]
    ... 22 common frames omitted

Edit: For more information, I use collect() to emit the records, and all records are then passed to the next operator, which handles the database insert. I am using Flink's Cassandra sink connector.
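Wiring the transformed stream into the Cassandra connector generally looks like the sketch below. The keyspace, table, query, and contact point are placeholders, and this assumes the records are emitted as tuples; a POJO stream would instead rely on DataStax mapper annotations on MyDataObject:

```java
import org.apache.flink.streaming.connectors.cassandra.CassandraSink;
import org.apache.flink.streaming.connectors.cassandra.ClusterBuilder;
import com.datastax.driver.core.Cluster;

// Hypothetical sink wiring; values below are placeholders, not from the question.
CassandraSink.addSink(dataStream)
    .setQuery("INSERT INTO my_keyspace.my_table (id, payload) VALUES (?, ?);")
    .setClusterBuilder(new ClusterBuilder() {
        @Override
        protected Cluster buildCluster(Cluster.Builder builder) {
            return builder.addContactPoint("127.0.0.1").build();
        }
    })
    .build();
```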


1 Answer

Stack Overflow user

Answered on 2018-02-14 14:10:50

This might help. The I/O operation is probably taking too long: https://ci.apache.org/projects/flink/flink-docs-master/dev/stream/operators/asyncio.html
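The linked Async I/O pattern moves a slow external call off the operator's main path, capped by a timeout and a limit on in-flight requests. A minimal sketch follows; note it uses the ResultFuture API of newer Flink releases (the asker's Flink 1.2 used a different callback type, AsyncCollector), and writeToDatabase() is a hypothetical helper:

```java
import java.util.Collections;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;
import org.apache.flink.streaming.api.datastream.AsyncDataStream;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.functions.async.ResultFuture;
import org.apache.flink.streaming.api.functions.async.RichAsyncFunction;

// Hypothetical sketch: perform the database write asynchronously so it does not
// block the operator thread.
public class AsyncDatabaseWrite extends RichAsyncFunction<MyDataObject, String> {
    @Override
    public void asyncInvoke(MyDataObject input, ResultFuture<String> resultFuture) {
        CompletableFuture
            .supplyAsync(() -> writeToDatabase(input))  // hypothetical blocking call
            .thenAccept(status -> resultFuture.complete(Collections.singleton(status)));
    }
}

// In the job definition: at most 100 concurrent requests, 1s timeout each.
// DataStream<String> result = AsyncDataStream.unorderedWait(
//         dataStream, new AsyncDatabaseWrite(), 1000, TimeUnit.MILLISECONDS, 100);
```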

Votes: 0
Content originally provided by Stack Overflow.
Original link: https://stackoverflow.com/questions/48276484