This article takes a look at Reactive Streams processors.
A processor is both a Publisher and a Subscriber. Project Reactor ships several processor implementations, roughly grouped as follows:
1) DirectProcessor and UnicastProcessor
2) EmitterProcessor and ReplayProcessor
3) TopicProcessor and WorkQueueProcessor
DirectProcessor does not support backpressure: if the publisher emits N elements and a subscriber has requested fewer than N, an IllegalStateException is raised (concretely, reactor.core.Exceptions$OverflowException, as the trace below shows).
@Test
public void testDirectProcessor() {
    DirectProcessor<Integer> directProcessor = DirectProcessor.create();
    Flux<Integer> flux = directProcessor
            .filter(e -> e % 2 == 0)
            .map(e -> e + 1);
    flux.subscribe(new Subscriber<Integer>() {
        private Subscription s;

        @Override
        public void onSubscribe(Subscription s) {
            this.s = s;
            // s.request(2); // no demand is ever requested
        }

        @Override
        public void onNext(Integer integer) {
            LOGGER.info("subscribe:{}", integer);
        }

        @Override
        public void onError(Throwable t) {
            LOGGER.error(t.getMessage(), t);
        }

        @Override
        public void onComplete() {
        }
    });
    IntStream.range(1, 20)
            .forEach(directProcessor::onNext);
    directProcessor.onComplete();
    directProcessor.blockLast();
}
The output is as follows:
16:00:11.201 [main] DEBUG reactor.util.Loggers$LoggerFactory - Using Slf4j logging framework
16:00:11.216 [main] ERROR com.example.demo.ProcessorTest - Can't deliver value due to lack of requests
reactor.core.Exceptions$OverflowException: Can't deliver value due to lack of requests
at reactor.core.Exceptions.failWithOverflow(Exceptions.java:215)
at reactor.core.publisher.DirectProcessor$DirectInner.onNext(DirectProcessor.java:304)
at reactor.core.publisher.DirectProcessor.onNext(DirectProcessor.java:106)
at com.example.demo.ProcessorTest.lambda$testDirectProcessor$5(ProcessorTest.java:82)
at java.util.stream.Streams$RangeIntSpliterator.forEachRemaining(Streams.java:110)
at java.util.stream.IntPipeline$Head.forEach(IntPipeline.java:557)
at com.example.demo.ProcessorTest.testDirectProcessor(ProcessorTest.java:81)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:69)
at com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:234)
at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:74)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:144)
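The overflow above comes from the commented-out s.request(2): the anonymous subscriber never signals any demand. The lambda-based subscribe(Consumer) variant requests Long.MAX_VALUE up front, so the same pipeline completes normally. A minimal sketch (class and method names are illustrative, not from the original test):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.stream.IntStream;

import reactor.core.publisher.DirectProcessor;

public class DirectDemandDemo {

    // Runs the same filter/map pipeline, but with unbounded demand.
    public static List<Integer> run() {
        DirectProcessor<Integer> processor = DirectProcessor.create();
        List<Integer> received = new ArrayList<>();
        // subscribe(Consumer) requests Long.MAX_VALUE immediately,
        // so DirectProcessor never runs out of requests.
        processor.filter(e -> e % 2 == 0)
                 .map(e -> e + 1)
                 .subscribe(received::add);
        IntStream.range(1, 20).forEach(processor::onNext);
        processor.onComplete();
        return received; // the 9 even inputs 2..18, mapped to 3..19
    }

    public static void main(String[] args) {
        System.out.println(run());
    }
}
```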
ReplayProcessor can cache data produced through its sink, or data received from an upstream publisher, and replay it to subscribers that arrive later. It comes in four configurations: unbounded, size-bounded, time-bounded, and size-and-time-bounded.
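These presumably correspond to ReplayProcessor's four factory variants; a compile-only sketch:

```java
import java.time.Duration;

import reactor.core.publisher.ReplayProcessor;

public class ReplayConfigs {
    public static void main(String[] args) {
        // Unbounded: buffers and replays every element to late subscribers.
        ReplayProcessor<Integer> unbounded = ReplayProcessor.create();
        // Size-bounded: replays only the last 3 elements.
        ReplayProcessor<Integer> lastThree = ReplayProcessor.create(3);
        // Time-bounded: replays only elements younger than the given age.
        ReplayProcessor<Integer> byAge = ReplayProcessor.createTimeout(Duration.ofSeconds(5));
        // Size- and time-bounded: both limits apply.
        ReplayProcessor<Integer> both = ReplayProcessor.createSizeAndTimeout(3, Duration.ofSeconds(5));
        System.out.println(unbounded != null && lastThree != null && byAge != null && both != null);
    }
}
```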
Example:
@Test
public void testReplayProcessor() throws InterruptedException {
    ReplayProcessor<Integer> replayProcessor = ReplayProcessor.create(3);
    Flux<Integer> flux1 = replayProcessor
            .map(e -> e);
    Flux<Integer> flux2 = replayProcessor
            .map(e -> e);
    flux1.subscribe(e -> {
        LOGGER.info("flux1 subscriber:{}", e);
    });
    IntStream.rangeClosed(1, 5)
            .forEach(e -> {
                replayProcessor.onNext(e);
                try {
                    TimeUnit.SECONDS.sleep(1);
                } catch (InterruptedException e1) {
                    e1.printStackTrace();
                }
            });
    LOGGER.info("finish publish data");
    TimeUnit.SECONDS.sleep(3);
    LOGGER.info("begin to subscribe flux2");
    flux2.subscribe(e -> {
        LOGGER.info("flux2 subscriber:{}", e);
    });
    replayProcessor.onComplete();
    replayProcessor.blockLast();
}
The output is as follows:
15:13:39.415 [main] DEBUG reactor.util.Loggers$LoggerFactory - Using Slf4j logging framework
15:13:39.438 [main] INFO com.example.demo.ProcessorTest - flux1 subscriber:1
15:13:40.445 [main] INFO com.example.demo.ProcessorTest - flux1 subscriber:2
15:13:41.449 [main] INFO com.example.demo.ProcessorTest - flux1 subscriber:3
15:13:42.454 [main] INFO com.example.demo.ProcessorTest - flux1 subscriber:4
15:13:43.459 [main] INFO com.example.demo.ProcessorTest - flux1 subscriber:5
15:13:44.463 [main] INFO com.example.demo.ProcessorTest - finish publish data
15:13:47.466 [main] INFO com.example.demo.ProcessorTest - begin to subscribe flux2
15:13:47.467 [main] INFO com.example.demo.ProcessorTest - flux2 subscriber:3
15:13:47.467 [main] INFO com.example.demo.ProcessorTest - flux2 subscriber:4
15:13:47.468 [main] INFO com.example.demo.ProcessorTest - flux2 subscriber:5
Because the processor was created with a history size of 3, the late subscriber flux2 only replays the last three elements (3, 4 and 5).

Both TopicProcessor and WorkQueueProcessor build on EventLoopProcessor, whose constructor shows how the underlying RingBuffer is set up:
EventLoopProcessor(
        int bufferSize,
        @Nullable ThreadFactory threadFactory,
        @Nullable ExecutorService executor,
        ExecutorService requestExecutor,
        boolean autoCancel,
        boolean multiproducers,
        Supplier<Slot<IN>> factory,
        WaitStrategy strategy) {
    if (!Queues.isPowerOfTwo(bufferSize)) {
        throw new IllegalArgumentException("bufferSize must be a power of 2 : " + bufferSize);
    }
    if (bufferSize < 1) {
        throw new IllegalArgumentException("bufferSize must be strictly positive, " +
                "was: " + bufferSize);
    }
    this.autoCancel = autoCancel;
    contextClassLoader = new EventLoopContext(multiproducers);
    this.name = defaultName(threadFactory, getClass());
    this.requestTaskExecutor = Objects.requireNonNull(requestExecutor, "requestTaskExecutor");
    if (executor == null) {
        this.executor = Executors.newCachedThreadPool(threadFactory);
    }
    else {
        this.executor = executor;
    }
    if (multiproducers) {
        this.ringBuffer = RingBuffer.createMultiProducer(factory,
                bufferSize,
                strategy,
                this);
    }
    else {
        this.ringBuffer = RingBuffer.createSingleProducer(factory,
                bufferSize,
                strategy,
                this);
    }
}
If share is true, the ring buffer is created with createMultiProducer. The observable consequence: when multiple threads call the processor's onNext without share enabled, there is a race and data can be lost. In a test where multiple threads publish 100 elements in total, commenting out share(true) means the final count is not necessarily 100, while enabling share guarantees a count of exactly 100. If executor(Executors.newSingleThreadExecutor()) is configured, the subscribers of flux1, flux2 and flux3 execute sequentially rather than concurrently.
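The multi-producer scenario can be sketched with a hypothetical test (thread and element counts here are illustrative, not from the original article):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

import reactor.core.publisher.TopicProcessor;

public class ShareDemo {

    // Ten threads each publish ten elements; returns how many were received.
    public static int run() throws InterruptedException {
        // share(true) makes EventLoopProcessor use createMultiProducer,
        // so concurrent onNext calls are safe.
        TopicProcessor<Integer> processor = TopicProcessor.<Integer>builder()
                .share(true)
                .build();
        AtomicInteger count = new AtomicInteger();
        CountDownLatch done = new CountDownLatch(1);
        processor.subscribe(e -> count.incrementAndGet(),
                Throwable::printStackTrace,
                done::countDown);

        CountDownLatch producers = new CountDownLatch(10);
        for (int t = 0; t < 10; t++) {
            new Thread(() -> {
                for (int i = 0; i < 10; i++) {
                    processor.onNext(i);
                }
                producers.countDown();
            }).start();
        }
        producers.await();
        processor.onComplete();
        done.await();
        return count.get(); // 100 with share(true); possibly less without it
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("count=" + run());
    }
}
```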
@Test
public void testWorkQueueProcessor() {
    WorkQueueProcessor<Integer> workQueueProcessor = WorkQueueProcessor.create();
    Flux<Integer> flux1 = workQueueProcessor
            .map(e -> e);
    Flux<Integer> flux2 = workQueueProcessor
            .map(e -> e);
    Flux<Integer> flux3 = workQueueProcessor
            .map(e -> e);
    flux1.subscribe(e -> {
        LOGGER.info("flux1 subscriber:{}", e);
    });
    flux2.subscribe(e -> {
        LOGGER.info("flux2 subscriber:{}", e);
    });
    flux3.subscribe(e -> {
        LOGGER.info("flux3 subscriber:{}", e);
    });
    IntStream.range(1, 20)
            .forEach(workQueueProcessor::onNext);
    workQueueProcessor.onComplete();
    workQueueProcessor.blockLast();
}
Sample output:
21:56:58.203 [main] DEBUG reactor.util.Loggers$LoggerFactory - Using Slf4j logging framework
21:56:58.214 [main] DEBUG reactor.core.publisher.UnsafeSupport - Starting UnsafeSupport init in Java 1.8
21:56:58.215 [main] DEBUG reactor.core.publisher.UnsafeSupport - Unsafe is available
21:56:58.228 [WorkQueueProcessor-1] INFO com.example.demo.ProcessorTest - flux1 subscriber:1
21:56:58.228 [WorkQueueProcessor-3] INFO com.example.demo.ProcessorTest - flux3 subscriber:3
21:56:58.228 [WorkQueueProcessor-2] INFO com.example.demo.ProcessorTest - flux2 subscriber:2
21:56:58.229 [WorkQueueProcessor-1] INFO com.example.demo.ProcessorTest - flux1 subscriber:4
21:56:58.229 [WorkQueueProcessor-3] INFO com.example.demo.ProcessorTest - flux3 subscriber:5
21:56:58.229 [WorkQueueProcessor-2] INFO com.example.demo.ProcessorTest - flux2 subscriber:6
21:56:58.230 [WorkQueueProcessor-1] INFO com.example.demo.ProcessorTest - flux1 subscriber:7
21:56:58.230 [WorkQueueProcessor-3] INFO com.example.demo.ProcessorTest - flux3 subscriber:8
21:56:58.230 [WorkQueueProcessor-2] INFO com.example.demo.ProcessorTest - flux2 subscriber:9
21:56:58.230 [WorkQueueProcessor-1] INFO com.example.demo.ProcessorTest - flux1 subscriber:10
21:56:58.230 [WorkQueueProcessor-3] INFO com.example.demo.ProcessorTest - flux3 subscriber:11
21:56:58.230 [WorkQueueProcessor-2] INFO com.example.demo.ProcessorTest - flux2 subscriber:12
21:56:58.230 [WorkQueueProcessor-1] INFO com.example.demo.ProcessorTest - flux1 subscriber:13
21:56:58.230 [WorkQueueProcessor-3] INFO com.example.demo.ProcessorTest - flux3 subscriber:14
21:56:58.230 [WorkQueueProcessor-2] INFO com.example.demo.ProcessorTest - flux2 subscriber:15
21:56:58.230 [WorkQueueProcessor-3] INFO com.example.demo.ProcessorTest - flux3 subscriber:17
21:56:58.230 [WorkQueueProcessor-1] INFO com.example.demo.ProcessorTest - flux1 subscriber:16
21:56:58.230 [WorkQueueProcessor-3] INFO com.example.demo.ProcessorTest - flux3 subscriber:19
21:56:58.230 [WorkQueueProcessor-2] INFO com.example.demo.ProcessorTest - flux2 subscriber:18
As the output shows, WorkQueueProcessor's subscribers behave like Kafka consumers in the same consumer group: taken together, the messages they each consume add up exactly to what the publisher emitted. This is unlike TopicProcessor's broadcast-style delivery, where every subscriber receives every message.
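For contrast, the broadcast behavior can be sketched with TopicProcessor (a hypothetical counterpart to the test above; names are illustrative):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.stream.IntStream;

import reactor.core.publisher.TopicProcessor;

public class TopicBroadcastDemo {

    // Publishes 19 elements to two subscribers; returns each subscriber's count.
    public static int[] run() throws InterruptedException {
        TopicProcessor<Integer> topicProcessor = TopicProcessor.create();
        AtomicInteger count1 = new AtomicInteger();
        AtomicInteger count2 = new AtomicInteger();
        CountDownLatch done = new CountDownLatch(2);
        // Each subscriber runs on its own event loop thread and,
        // unlike with WorkQueueProcessor, sees every element.
        topicProcessor.subscribe(e -> count1.incrementAndGet(),
                Throwable::printStackTrace, done::countDown);
        topicProcessor.subscribe(e -> count2.incrementAndGet(),
                Throwable::printStackTrace, done::countDown);
        IntStream.range(1, 20).forEach(topicProcessor::onNext);
        topicProcessor.onComplete();
        done.await();
        // Broadcast: both counters reach 19, versus a combined total of 19
        // for WorkQueueProcessor's three subscribers above.
        return new int[]{count1.get(), count2.get()};
    }

    public static void main(String[] args) throws InterruptedException {
        int[] counts = run();
        System.out.println(counts[0] + " " + counts[1]);
    }
}
```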