In the example below, one thread sends "messages" via a ByteBuffer that a consumer thread is reading from. The best performance is very good, but it is inconsistent.
import java.io.IOException;
import java.lang.reflect.Field;
import java.nio.ByteBuffer;
import java.util.concurrent.atomic.AtomicInteger;
import sun.misc.Unsafe;

public class Main {
    public static void main(String... args) throws IOException {
        for (int i = 0; i < 10; i++)
            doTest();
    }

    public static void doTest() {
        final ByteBuffer writeBuffer = ByteBuffer.allocateDirect(64 * 1024);
        final ByteBuffer readBuffer = writeBuffer.slice();
        final AtomicInteger readCount = new PaddedAtomicInteger();
        final AtomicInteger writeCount = new PaddedAtomicInteger();
        for (int i = 0; i < 3; i++)
            performTiming(writeBuffer, readBuffer, readCount, writeCount);
        System.out.println();
    }

    private static void performTiming(ByteBuffer writeBuffer, final ByteBuffer readBuffer,
                                      final AtomicInteger readCount, final AtomicInteger writeCount) {
        writeBuffer.clear();
        readBuffer.clear();
        readCount.set(0);
        writeCount.set(0);

        // reader thread: spins until the writer has published more messages, then consumes them.
        Thread t = new Thread(new Runnable() {
            @Override
            public void run() {
                byte[] bytes = new byte[128];
                while (!Thread.interrupted()) {
                    int rc = readCount.get(), toRead;
                    while ((toRead = writeCount.get() - rc) <= 0) ;
                    for (int i = 0; i < toRead; i++) {
                        byte len = readBuffer.get();
                        if (len == -1) {
                            // rewind.
                            readBuffer.clear();
                            // rc++;
                        } else {
                            int num = readBuffer.getInt();
                            if (num != rc)
                                throw new AssertionError("Expected " + rc + " but got " + num);
                            rc++;
                            readBuffer.get(bytes, 0, len - 4);
                        }
                    }
                    readCount.lazySet(rc); // ordered write, cheaper than a volatile set
                }
            }
        });
        t.setDaemon(true);
        t.start();
        Thread.yield();

        long start = System.nanoTime();
        int runs = 30 * 1000 * 1000;
        int len = 32;
        byte[] bytes = new byte[len - 4];
        int wc = writeCount.get();
        for (int i = 0; i < runs; i++) {
            if (writeBuffer.remaining() < len + 1) {
                // reader has to catch up.
                while (wc - readCount.get() > 0) ;
                // rewind.
                writeBuffer.put((byte) -1);
                writeBuffer.clear();
            }
            writeBuffer.put((byte) len);
            writeBuffer.putInt(i);
            writeBuffer.put(bytes);
            writeCount.lazySet(++wc);
        }
        // reader has to catch up.
        while (wc - readCount.get() > 0) ;
        t.interrupt();
        t.stop(); // the reader may be stuck in its inner spin loop, so stop it forcibly
        long time = System.nanoTime() - start;
        System.out.printf("Message rate was %.1f M/s offsets %d %d %d%n", runs * 1e3 / time
                , addressOf(readBuffer) - addressOf(writeBuffer)
                , addressOf(readCount) - addressOf(writeBuffer)
                , addressOf(writeCount) - addressOf(writeBuffer)
        );
    }

    // assumes -XX:+UseCompressedOops.
    public static long addressOf(Object... o) {
        long offset = UNSAFE.arrayBaseOffset(o.getClass());
        return UNSAFE.getInt(o, offset) * 8L;
    }

    public static final Unsafe UNSAFE = getUnsafe();

    public static Unsafe getUnsafe() {
        try {
            Field field = Unsafe.class.getDeclaredField("theUnsafe");
            field.setAccessible(true);
            return (Unsafe) field.get(null);
        } catch (Exception e) {
            throw new AssertionError(e);
        }
    }

    // AtomicInteger with trailing padding so the counter value gets its own cache line.
    private static class PaddedAtomicInteger extends AtomicInteger {
        public long p2, p3, p4, p5, p6, p7;

        public long sum() {
            // referenced so the padding fields are not optimised away
            // return 0;
            return p2 + p3 + p4 + p5 + p6 + p7;
        }
    }
}
This prints the times for the same block of data. The numbers at the end are the relative addresses of the objects, which show that they are laid out in the cache the same way each time. Running longer tests of 10 shows that a given combination repeatedly produces the same performance.
Message rate was 63.2 M/s offsets 136 200 264
Message rate was 80.4 M/s offsets 136 200 264
Message rate was 80.0 M/s offsets 136 200 264
Message rate was 81.9 M/s offsets 136 200 264
Message rate was 82.2 M/s offsets 136 200 264
Message rate was 82.5 M/s offsets 136 200 264
Message rate was 79.1 M/s offsets 136 200 264
Message rate was 82.4 M/s offsets 136 200 264
Message rate was 82.4 M/s offsets 136 200 264
Message rate was 34.7 M/s offsets 136 200 264
Message rate was 39.1 M/s offsets 136 200 264
Message rate was 39.0 M/s offsets 136 200 264
Each set of buffers and counters is tested three times, and a given set of buffers appears to give similar results, so I believe it is the way these buffers are laid out in memory that I am not seeing.
Is there something that could give the higher performance more consistently? It looks like a cache-line collision, but I cannot see where that could be happening.
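For reference, a cache-line collision would mean two of the hot objects share a single 64-byte line. The small helper below is not part of the original code; it only shows the arithmetic that the printed relative offsets allow, assuming 64-byte cache lines and a line-aligned base object.
public class CacheLineCheck {
    static final long CACHE_LINE = 64; // typical x86 cache line size

    // true if two offsets (relative to the same, line-aligned base) fall on the same line
    static boolean sameCacheLine(long offsetA, long offsetB) {
        return offsetA / CACHE_LINE == offsetB / CACHE_LINE;
    }

    public static void main(String... args) {
        // offsets 136, 200, 264 from the runs above are exactly 64 bytes apart,
        // so the counters land on adjacent lines rather than sharing one.
        System.out.println(sameCacheLine(136, 200)); // false
        System.out.println(sameCacheLine(200, 264)); // false
    }
}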
BTW: M/s means millions of messages per second, which is more than anyone is likely to need, but it would be good to understand how to make it consistently fast.
EDIT: Using synchronized with wait and notify makes the results much more consistent, but no faster.
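The post does not show that variant, so the following is only a minimal, self-contained sketch of what it might look like: the reader blocks on a shared monitor instead of spinning, and the writer notifies after advancing writeCount.
import java.util.concurrent.atomic.AtomicInteger;

public class WaitNotifyCounter {
    static final Object lock = new Object();
    static final AtomicInteger writeCount = new AtomicInteger();
    static final int MESSAGES = 1000 * 1000;

    public static void main(String... args) throws InterruptedException {
        Thread reader = new Thread(new Runnable() {
            @Override
            public void run() {
                int rc = 0;
                while (rc < MESSAGES) {
                    synchronized (lock) {
                        while (writeCount.get() - rc <= 0) {
                            try {
                                lock.wait();        // block instead of busy-spinning
                            } catch (InterruptedException e) {
                                return;
                            }
                        }
                    }
                    rc = writeCount.get();          // consume everything published so far
                }
                System.out.println("read " + rc + " messages");
            }
        });
        reader.start();
        for (int wc = 0; wc < MESSAGES; ) {
            synchronized (lock) {
                writeCount.set(++wc);
                lock.notifyAll();                   // wake the reader if it is waiting
            }
        }
        reader.join();
    }
}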
Message rate was 6.9 M/s
Message rate was 7.8 M/s
Message rate was 7.9 M/s
Message rate was 6.7 M/s
Message rate was 7.5 M/s
Message rate was 7.7 M/s
Message rate was 7.3 M/s
Message rate was 7.9 M/s
Message rate was 6.4 M/s
Message rate was 7.8 M/s
EDIT: Using taskset, I can make the performance consistent if I lock the two threads to the same core.
Message rate was 35.1 M/s offsets 136 200 216
Message rate was 34.0 M/s offsets 136 200 216
Message rate was 35.4 M/s offsets 136 200 216
Message rate was 35.6 M/s offsets 136 200 216
Message rate was 37.0 M/s offsets 136 200 216
Message rate was 37.2 M/s offsets 136 200 216
Message rate was 37.1 M/s offsets 136 200 216
Message rate was 35.0 M/s offsets 136 200 216
Message rate was 37.1 M/s offsets 136 200 216
If I use any two logical threads on different cores, I get the inconsistent behaviour
Message rate was 60.2 M/s offsets 136 200 216
Message rate was 68.7 M/s offsets 136 200 216
Message rate was 55.3 M/s offsets 136 200 216
Message rate was 39.2 M/s offsets 136 200 216
Message rate was 39.1 M/s offsets 136 200 216
Message rate was 37.5 M/s offsets 136 200 216
Message rate was 75.3 M/s offsets 136 200 216
Message rate was 73.8 M/s offsets 136 200 216
Message rate was 66.8 M/s offsets 136 200 216
EDIT: It appears that triggering a GC changes the behaviour. These results show repeated tests on the same buffer+counters, with a GC triggered manually halfway through.
faster after GC
Message rate was 27.4 M/s offsets 136 200 216
Message rate was 27.8 M/s offsets 136 200 216
Message rate was 29.6 M/s offsets 136 200 216
Message rate was 27.7 M/s offsets 136 200 216
Message rate was 29.6 M/s offsets 136 200 216
[GC 14312K->1518K(244544K), 0.0003050 secs]
[Full GC 1518K->1328K(244544K), 0.0068270 secs]
Message rate was 34.7 M/s offsets 64 128 144
Message rate was 54.5 M/s offsets 64 128 144
Message rate was 54.1 M/s offsets 64 128 144
Message rate was 51.9 M/s offsets 64 128 144
Message rate was 57.2 M/s offsets 64 128 144
and slower
Message rate was 61.1 M/s offsets 136 200 216
Message rate was 61.8 M/s offsets 136 200 216
Message rate was 60.5 M/s offsets 136 200 216
Message rate was 61.1 M/s offsets 136 200 216
[GC 35740K->1440K(244544K), 0.0018170 secs]
[Full GC 1440K->1302K(244544K), 0.0071290 secs]
Message rate was 53.9 M/s offsets 64 128 144
Message rate was 54.3 M/s offsets 64 128 144
Message rate was 50.8 M/s offsets 64 128 144
Message rate was 56.6 M/s offsets 64 128 144
Message rate was 56.0 M/s offsets 64 128 144
Message rate was 53.6 M/s offsets 64 128 144
EDIT: Using @BegemoT's library to print the core id in use, I get the following on a 3.8 GHz i7 (home PC).
Note: the offsets are off by a factor of 8. As the heap size was small, the JVM does not multiply the reference by 8 the way it does with a larger heap (one still below 32 GB).
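A sketch of what an uncorrected variant of the question's addressOf() might look like, based purely on the note above (it reuses the UNSAFE field from the code at the top and is an assumption, not code from the post):
// Sketch only: on a small heap the compressed reference is used unshifted,
// so reading it without the "* 8L" gives the offsets directly.
public static long addressOfUnshifted(Object... o) {
    long offset = UNSAFE.arrayBaseOffset(o.getClass());
    return UNSAFE.getInt(o, offset) & 0xFFFFFFFFL; // no oop shift applied
}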
writer.currentCore() -> Core[#0]
reader.currentCore() -> Core[#5]
Message rate was 54.4 M/s offsets 3392 3904 4416
writer.currentCore() -> Core[#0]
reader.currentCore() -> Core[#6]
Message rate was 54.2 M/s offsets 3392 3904 4416
writer.currentCore() -> Core[#0]
reader.currentCore() -> Core[#5]
Message rate was 60.7 M/s offsets 3392 3904 4416
writer.currentCore() -> Core[#0]
reader.currentCore() -> Core[#5]
Message rate was 25.5 M/s offsets 1088 1600 2112
writer.currentCore() -> Core[#0]
reader.currentCore() -> Core[#5]
Message rate was 25.9 M/s offsets 1088 1600 2112
writer.currentCore() -> Core[#0]
reader.currentCore() -> Core[#5]
Message rate was 26.0 M/s offsets 1088 1600 2112
writer.currentCore() -> Core[#0]
reader.currentCore() -> Core[#5]
Message rate was 61.0 M/s offsets 1088 1600 2112
writer.currentCore() -> Core[#0]
reader.currentCore() -> Core[#5]
Message rate was 61.8 M/s offsets 1088 1600 2112
writer.currentCore() -> Core[#0]
reader.currentCore() -> Core[#5]
Message rate was 60.7 M/s offsets 1088 1600 2112
You can see that the same logical threads are being used, yet the performance varies between runs, but not within a run (within a run the same objects are used).
I have found the problem. It is a memory-layout issue, but I could see a simple way to work around it. ByteBuffer cannot be extended, so padding cannot be added to it; instead I create an object that is simply discarded.
final ByteBuffer writeBuffer = ByteBuffer.allocateDirect(64 * 1024);
final ByteBuffer readBuffer = writeBuffer.slice();
new PaddedAtomicInteger(); // discarded object, used only to pad the allocations
final AtomicInteger readCount = new PaddedAtomicInteger();
final AtomicInteger writeCount = new PaddedAtomicInteger();
Without this extra padding (the unused object), the results look like this on a 3.8 GHz i7:
Message rate was 38.5 M/s offsets 3392 3904 4416
Message rate was 54.7 M/s offsets 3392 3904 4416
Message rate was 59.4 M/s offsets 3392 3904 4416
Message rate was 54.3 M/s offsets 1088 1600 2112
Message rate was 56.3 M/s offsets 1088 1600 2112
Message rate was 56.6 M/s offsets 1088 1600 2112
Message rate was 28.0 M/s offsets 1088 1600 2112
Message rate was 28.1 M/s offsets 1088 1600 2112
Message rate was 28.0 M/s offsets 1088 1600 2112
Message rate was 17.4 M/s offsets 1088 1600 2112
Message rate was 17.4 M/s offsets 1088 1600 2112
Message rate was 17.4 M/s offsets 1088 1600 2112
Message rate was 54.5 M/s offsets 1088 1600 2112
Message rate was 54.2 M/s offsets 1088 1600 2112
Message rate was 55.1 M/s offsets 1088 1600 2112
Message rate was 25.5 M/s offsets 1088 1600 2112
Message rate was 25.6 M/s offsets 1088 1600 2112
Message rate was 25.6 M/s offsets 1088 1600 2112
Message rate was 56.6 M/s offsets 1088 1600 2112
Message rate was 54.7 M/s offsets 1088 1600 2112
Message rate was 54.4 M/s offsets 1088 1600 2112
Message rate was 57.0 M/s offsets 1088 1600 2112
Message rate was 55.9 M/s offsets 1088 1600 2112
Message rate was 56.3 M/s offsets 1088 1600 2112
Message rate was 51.4 M/s offsets 1088 1600 2112
Message rate was 56.6 M/s offsets 1088 1600 2112
Message rate was 56.1 M/s offsets 1088 1600 2112
Message rate was 46.4 M/s offsets 1088 1600 2112
Message rate was 46.4 M/s offsets 1088 1600 2112
Message rate was 47.4 M/s offsets 1088 1600 2112
With the discarded padding object:
Message rate was 54.3 M/s offsets 3392 4416 4928
Message rate was 53.1 M/s offsets 3392 4416 4928
Message rate was 59.2 M/s offsets 3392 4416 4928
Message rate was 58.8 M/s offsets 1088 2112 2624
Message rate was 58.9 M/s offsets 1088 2112 2624
Message rate was 59.3 M/s offsets 1088 2112 2624
Message rate was 59.4 M/s offsets 1088 2112 2624
Message rate was 59.0 M/s offsets 1088 2112 2624
Message rate was 59.8 M/s offsets 1088 2112 2624
Message rate was 59.8 M/s offsets 1088 2112 2624
Message rate was 59.8 M/s offsets 1088 2112 2624
Message rate was 59.2 M/s offsets 1088 2112 2624
Message rate was 60.5 M/s offsets 1088 2112 2624
Message rate was 60.5 M/s offsets 1088 2112 2624
Message rate was 60.5 M/s offsets 1088 2112 2624
Message rate was 60.5 M/s offsets 1088 2112 2624
Message rate was 60.9 M/s offsets 1088 2112 2624
Message rate was 60.6 M/s offsets 1088 2112 2624
Message rate was 59.6 M/s offsets 1088 2112 2624
Message rate was 60.3 M/s offsets 1088 2112 2624
Message rate was 60.5 M/s offsets 1088 2112 2624
Message rate was 60.9 M/s offsets 1088 2112 2624
Message rate was 60.5 M/s offsets 1088 2112 2624
Message rate was 60.5 M/s offsets 1088 2112 2624
Message rate was 60.7 M/s offsets 1088 2112 2624
Message rate was 61.6 M/s offsets 1088 2112 2624
Message rate was 60.8 M/s offsets 1088 2112 2624
Message rate was 60.3 M/s offsets 1088 2112 2624
Message rate was 60.7 M/s offsets 1088 2112 2624
Message rate was 58.3 M/s offsets 1088 2112 2624
Unfortunately, after a GC there is always the risk that the objects will not be laid out optimally. The only way to resolve this may be to add padding to the original class. :(
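A sketch of what "padding in the original class" could look like, assuming the goal is to keep the hot counter on its own cache line wherever the GC places it (illustrative only, not code from the post). Because HotSpot may reorder fields declared in the same class, the padding is spread across a small class hierarchy:
class PadBefore { long q0, q1, q2, q3, q4, q5, q6; }               // leading padding

class CounterValue extends PadBefore { volatile int value; }        // the hot field

class PaddedCounter extends CounterValue {
    long p0, p1, p2, p3, p4, p5, p6;                                 // trailing padding

    int get()       { return value; }
    void set(int v) { value = v; }

    long sum() {    // referenced so the padding fields are not trivially dead
        return q0 + q1 + q2 + q3 + q4 + q5 + q6 + p0 + p1 + p2 + p3 + p4 + p5 + p6;
    }
}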
Posted on 2011-11-04 21:06:04
I am no expert in the area of processor caches, but I suspect your issue is essentially a cache issue or some other memory-layout problem. Repeatedly allocating the buffers and counters without cleaning up the old objects may be causing you to periodically get a very bad cache layout, which could lead to your inconsistent performance.
Taking your code and making a couple of mods, I have been able to make the performance consistent (my test machine is an Intel Core2 Quad Q6600 2.4 GHz w/ Win7 x64, so not exactly the same, but hopefully close enough to give relevant results). I did this in two different ways, both of which have roughly the same effect.
First, move the creation of the buffers and counters out of the doTest method so that they are created only once and then reused for every pass of the test. Now you get the one allocation, it sits nicely in the cache, and performance is consistent.
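A minimal sketch of that first change, reusing the classes and methods from the question (exactly how the answerer restructured it is an assumption):
public static void main(String... args) {
    // allocate the buffers and counters once, then reuse them for every pass
    final ByteBuffer writeBuffer = ByteBuffer.allocateDirect(64 * 1024);
    final ByteBuffer readBuffer = writeBuffer.slice();
    final AtomicInteger readCount = new PaddedAtomicInteger();
    final AtomicInteger writeCount = new PaddedAtomicInteger();
    for (int i = 0; i < 10; i++) {
        for (int j = 0; j < 3; j++)
            performTiming(writeBuffer, readBuffer, readCount, writeCount);
        System.out.println();
    }
}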
Another way of getting the same reuse is to insert a gc after the performTiming loop:
for (int i = 0; i < 3; i++)
    performTiming(writeBuffer, readBuffer, readCount, writeCount);
System.out.println();
System.gc();
Here the result is more or less the same: the gc allows the buffers/counters to be reclaimed, the next allocation ends up reusing the same memory (at least on my test system), and you end up in the cache with consistent performance (I also added printing of the actual addresses to verify that the same locations are reused). My guess is that without the cleanup leading to reuse, you eventually get a buffer that does not fit well in the cache, and your performance suffers as it is swapped in and out. I suspect you could do some strange things with the allocation order (for example, on my machine you can make performance worse by moving the counter allocation in front of the buffers), or create some dead space around each run to "purge" the cache, if you do not want to eliminate the buffers from an earlier loop.
Finally, as I said, processor caches and the fun of memory layout are not my area of expertise, so if the explanation is misleading or wrong, sorry about that.
Posted on 2011-11-01 16:56:08
You are busy-waiting. In user code, that is always a bad idea.
Reader:
while ((toRead = writeCount.get() - rc) <= 0) ;
Writer:
while (wc - readCount.get() > 0) ;
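One common way to soften such loops, sketched below, is to spin a bounded number of times and then back off with a short park so the waiting thread stops burning a whole core. This is a generic pattern, not code from the answer; the class and method names are illustrative.
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.locks.LockSupport;

public class SpinThenPark {
    // wait until the counter reaches at least 'target', spinning briefly first
    static int awaitAtLeast(AtomicInteger counter, int target) {
        int spins = 0, v;
        while ((v = counter.get()) < target) {
            if (++spins < 1000)
                Thread.yield();               // cheap spin while the wait is short
            else
                LockSupport.parkNanos(1000);  // then back off roughly a microsecond
        }
        return v;
    }

    public static void main(String... args) throws InterruptedException {
        final AtomicInteger count = new AtomicInteger();
        Thread writer = new Thread(new Runnable() {
            @Override
            public void run() {
                for (int i = 0; i < 100; i++)
                    count.incrementAndGet();
            }
        });
        writer.start();
        System.out.println("saw " + awaitAtLeast(count, 100));
        writer.join();
    }
}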
Posted on 2011-11-01 16:55:55
As a general approach to performance analysis:
- Attach jconsole to the program. This brings up the Java console GUI, which lets you connect to a running JVM and view performance metrics, memory usage, thread counts and states, and so on (jconsole windows can be placed side by side).
- Start the JVM with the -Xprof option, which outputs the relative time spent in the various methods on a per-thread basis, e.g. java -Xprof [your class file]

https://stackoverflow.com/questions/7969665