This article takes a quick look at how the Java producer in Kafka 0.8.2.2 handles exceptions.
The Kafka Java producer sends asynchronously, and the send path breaks down into a few steps, each with its own exception-handling behavior (a usage sketch follows this list):
1. When send() appends the record to the accumulator, exceptions may be raised: an ApiException is handed to the callback, while any other exception is thrown directly to the caller (the callback only covers the layer that talks to the RecordAccumulator).
2. In the Sender, the run() method simply catches exceptions and logs them.
3. When the producer actually talks to the network and a request fails (the connection breaks, or the broker returns an error), the batch is re-enqueued, subject to the configured number of retries.
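To make the two failure surfaces in step 1 concrete, here is a minimal usage sketch (the broker address and topic name are made up for illustration): an ApiException such as RecordTooLargeException arrives through the Callback, while other exceptions surface as a throw from send() itself.

import java.util.Properties;
import org.apache.kafka.clients.producer.Callback;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import org.apache.kafka.common.KafkaException;

public class ProducerErrorDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // hypothetical broker
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        KafkaProducer<String, String> producer = new KafkaProducer<String, String>(props);
        try {
            producer.send(new ProducerRecord<String, String>("demo-topic", "key", "value"),
                    new Callback() {
                        public void onCompletion(RecordMetadata metadata, Exception exception) {
                            if (exception != null) {
                                // ApiException path, e.g. RecordTooLargeException
                                exception.printStackTrace();
                            }
                        }
                    });
        } catch (KafkaException e) {
            // non-ApiException path: thrown directly out of send()
            e.printStackTrace();
        } finally {
            producer.close();
        }
    }
}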
kafka-clients-0.8.2.2-sources.jar!/org/apache/kafka/clients/producer/KafkaProducer.java
public Future<RecordMetadata> send(ProducerRecord<K,V> record, Callback callback) {
    try {
        // first make sure the metadata for the topic is available
        waitOnMetadata(record.topic(), this.metadataFetchTimeoutMs);
        byte[] serializedKey;
        try {
            serializedKey = keySerializer.serialize(record.topic(), record.key());
        } catch (ClassCastException cce) {
            throw new SerializationException("Can't convert key of class " + record.key().getClass().getName() +
                    " to class " + producerConfig.getClass(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG).getName() +
                    " specified in key.serializer");
        }
        byte[] serializedValue;
        try {
            serializedValue = valueSerializer.serialize(record.topic(), record.value());
        } catch (ClassCastException cce) {
            throw new SerializationException("Can't convert value of class " + record.value().getClass().getName() +
                    " to class " + producerConfig.getClass(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG).getName() +
                    " specified in value.serializer");
        }
        ProducerRecord<byte[], byte[]> serializedRecord = new ProducerRecord<byte[], byte[]>(record.topic(), record.partition(), serializedKey, serializedValue);
        int partition = partitioner.partition(serializedRecord, metadata.fetch());
        int serializedSize = Records.LOG_OVERHEAD + Record.recordSize(serializedKey, serializedValue);
        ensureValidRecordSize(serializedSize);
        TopicPartition tp = new TopicPartition(record.topic(), partition);
        log.trace("Sending record {} with callback {} to topic {} partition {}", record, callback, record.topic(), partition);
        RecordAccumulator.RecordAppendResult result = accumulator.append(tp, serializedKey, serializedValue, compressionType, callback);
        if (result.batchIsFull || result.newBatchCreated) {
            log.trace("Waking up the sender since topic {} partition {} is either full or getting a new batch", record.topic(), partition);
            this.sender.wakeup();
        }
        return result.future;
        // Handling exceptions and record the errors;
        // For API exceptions return them in the future,
        // for other exceptions throw directly
    } catch (ApiException e) {
        log.debug("Exception occurred during message send:", e);
        if (callback != null)
            callback.onCompletion(null, e);
        this.errors.record();
        return new FutureFailure(e);
    } catch (InterruptedException e) {
        this.errors.record();
        throw new KafkaException(e);
    } catch (KafkaException e) {
        this.errors.record();
        throw e;
    }
}
As the comment at the end of send() says: exceptions are handled and the errors recorded; API exceptions are returned in the (failed) future and passed to the callback, while other exceptions are thrown directly.
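A consequence for callers: when no callback is supplied, an ApiException only becomes visible once the returned future is inspected. Continuing the sketch above (producer and topic as before), a failed future rethrows the original exception from get():

// With no callback, an ApiException is wrapped in a FutureFailure and
// rethrown from get() as the cause of an ExecutionException.
Future<RecordMetadata> future = producer.send(
        new ProducerRecord<String, String>("demo-topic", "key", "value"));
try {
    RecordMetadata metadata = future.get();
} catch (java.util.concurrent.ExecutionException e) {
    Throwable cause = e.getCause(); // the original ApiException
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();
}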
kafka-clients-0.8.2.2-sources.jar!/org/apache/kafka/clients/producer/internals/Sender.java
/**
 * Complete or retry the given batch of records.
 * @param batch The record batch
 * @param error The error (or null if none)
 * @param baseOffset The base offset assigned to the records if successful
 * @param correlationId The correlation id for the request
 * @param now The current POSIX time stamp in milliseconds
 */
private void completeBatch(RecordBatch batch, Errors error, long baseOffset, long correlationId, long now) {
    if (error != Errors.NONE && canRetry(batch, error)) {
        // retry
        log.warn("Got error produce response with correlation id {} on topic-partition {}, retrying ({} attempts left). Error: {}",
                 correlationId,
                 batch.topicPartition,
                 this.retries - batch.attempts - 1,
                 error);
        this.accumulator.reenqueue(batch, now);
        this.sensors.recordRetries(batch.topicPartition.topic(), batch.recordCount);
    } else {
        // tell the user the result of their request
        batch.done(baseOffset, error.exception());
        this.accumulator.deallocate(batch);
        if (error != Errors.NONE)
            this.sensors.recordErrors(batch.topicPartition.topic(), batch.recordCount);
    }
    if (error.exception() instanceof InvalidMetadataException)
        metadata.requestUpdate();
}
When the response carries an error and the batch can still be retried, the whole batch is re-enqueued into the accumulator; otherwise the batch is completed with the error. Both handleDisconnect and handleResponse end up calling this method. Whether a retry is allowed is decided by canRetry, sketched next.
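The retry decision lives in the private canRetry helper of the same Sender.java; in 0.8.2.2 its logic boils down to the two checks sketched below: the batch must have attempts left, and the broker error must map to a RetriableException (e.g. NotLeaderForPartitionException).

// Sketch of the retry check (based on the 0.8.2.2 Sender source): a batch is
// retried only while it has attempts left AND the error is retriable.
private boolean canRetry(RecordBatch batch, Errors error) {
    return batch.attempts < this.retries
        && error.exception() instanceof RetriableException;
}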
kafka-clients-0.8.2.2-sources.jar!/org/apache/kafka/clients/NetworkClient.java
public List<ClientResponse> poll(List<ClientRequest> requests, long timeout, long now) {
    List<NetworkSend> sends = new ArrayList<NetworkSend>();
    for (int i = 0; i < requests.size(); i++) {
        ClientRequest request = requests.get(i);
        int nodeId = request.request().destination();
        if (!isSendable(nodeId))
            throw new IllegalStateException("Attempt to send a request to node " + nodeId + " which is not ready.");
        this.inFlightRequests.add(request);
        sends.add(request.request());
    }
    // should we update our metadata?
    long timeToNextMetadataUpdate = metadata.timeToNextUpdate(now);
    long timeToNextReconnectAttempt = Math.max(this.lastNoNodeAvailableMs + metadata.refreshBackoff() - now, 0);
    long waitForMetadataFetch = (this.metadataFetchInProgress ? Integer.MAX_VALUE : 0);
    // if there is no node available to connect, back off refreshing metadata
    long metadataTimeout = Math.max(Math.max(timeToNextMetadataUpdate, timeToNextReconnectAttempt), waitForMetadataFetch);
    if (!this.metadataFetchInProgress && metadataTimeout == 0)
        maybeUpdateMetadata(sends, now);
    // do the I/O
    try {
        this.selector.poll(Math.min(timeout, metadataTimeout), sends);
    } catch (IOException e) {
        log.error("Unexpected error during I/O in producer network thread", e);
    }
    List<ClientResponse> responses = new ArrayList<ClientResponse>();
    handleCompletedSends(responses, now);
    handleCompletedReceives(responses, now);
    handleDisconnections(responses, now);
    handleConnections();
    return responses;
}
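The retry budget checked in completeBatch and the backoffs that pace the re-sends are driven by producer configuration; a sketch of the relevant settings (the values are arbitrary examples, not recommendations):

Properties props = new Properties();
props.put("retries", "3");               // attempts budget consulted by canRetry
props.put("retry.backoff.ms", "100");    // how long a re-enqueued batch backs off before resend
props.put("reconnect.backoff.ms", "50"); // backoff before reconnecting to a failed node

Note that with retries greater than 0, a retried batch may be delivered after a batch that was sent later, so ordering-sensitive applications usually also set max.in.flight.requests.per.connection to 1.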