Last edited by nswcfd on 2015-05-29 22:16
Found an old thread on CSDN about a patch that optimizes memcpy.
CSDN thread: bbs.csdn.net/topics/360040485
Patch: patchwork.kernel.org/patch/296282/
It looks like the change turns the original r w r w r w r w sequence into r r r r w w w w.
Why would that alone give a 1.5x ~ 2x speedup?
The patch opens with an explanation, but, embarrassingly, I couldn't follow it…
A newer read may execute ahead of an older write; otherwise it has to wait
until the write commits. However, the CPU doesn't check every address bit,
so a read can fail to recognize that an address is different even when the
two addresses are in different pages. For example, if %rsi is 0xf004 and
%rdi is 0xe008, the following sequence incurs a large latency penalty:
1. movq (%rsi), %rax
2. movq %rax, (%rdi)
3. movq 8(%rsi), %rax
4. movq %rax, 8(%rdi)
If %rsi and %rdi really were in the same memory page, there would be a TRUE
read-after-write dependence, because instruction 2 writes offset 0x008 and
instruction 3 reads offset 0x00c; the two address ranges partially overlap.
In reality they are in different pages and there is no issue at all, but
without checking every address bit the CPU may assume they are in the same
page, so instruction 3 has to wait for instruction 2 to drain its data from
the write buffer into the cache, and then load the data from the cache; the
time the read spends is comparable to an mfence instruction. We can avoid
this by reordering the operations as follows:
1. movq 8(%rsi), %rax
2. movq %rax, 8(%rdi)
3. movq (%rsi), %rax
4. movq %rax, (%rdi)
Instruction 3 reads offset 0x004 and instruction 2 writes offset 0x010, so
there is no dependence at all. In the end, on Core2 we gain a 1.83x speedup
over the original instruction sequence. In this patch we first handle small
sizes (less than 20 bytes), then jump to the appropriate copy mode. On our
micro-benchmark of small sizes from 1 to 127 bytes we got up to a 2x
improvement, and up to a 1.5x improvement for 1024 bytes on Core i7. (We
use our own micro-benchmark, and will do further testing according to your
requirements.)
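The contrast between the two orderings can be sketched in C. This is only an illustration of the idea, not the patch's actual code (the patch is x86-64 assembly, and a C compiler is free to reschedule these statements anyway); the point is purely the position of the loads relative to the stores:

```c
#include <stdint.h>

/* Interleaved order, as in the original memcpy: r w r w r w r w.
 * Each load follows a store whose low address bits may falsely match,
 * so the load can be stalled on that store. */
static void copy32_interleaved(uint64_t *dst, const uint64_t *src)
{
    dst[0] = src[0];
    dst[1] = src[1];
    dst[2] = src[2];
    dst[3] = src[3];
}

/* Grouped order, as in the patch: r r r r w w w w.
 * All loads are issued before any store, so no load has to wait on a
 * pending store-buffer entry from the same copy. */
static void copy32_grouped(uint64_t *dst, const uint64_t *src)
{
    uint64_t a = src[0], b = src[1], c = src[2], d = src[3];
    dst[0] = a;
    dst[1] = b;
    dst[2] = c;
    dst[3] = d;
}
```

Both functions copy the same 32 bytes; only the issue order of the memory operations differs, which is exactly the degree of freedom the patch exploits.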
In the passage above, what does "However CPU don't check each address bit, so read could fail to recognize different address even they are in different page" mean? Specifically, what is "CPU don't check each address bit" referring to?
And why, when %rsi and %rdi point into different pages, does instruction 3 still have to wait for instruction 2? When the CPU checks dependences between instructions, does it really look only at the low 12 bits of the address?
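One plausible model of what the patch means (my reading, based on Intel's documented "4K aliasing" behavior, where memory disambiguation initially compares only the low 12 bits of load and store addresses, i.e. the page offset; the functions below are illustrative, not a real hardware interface):

```c
#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>

/* Pretend-hardware check: does this load *appear* to conflict with an
 * older, not-yet-committed store, if only the low 12 bits (the page
 * offset) of each address are compared? */
static bool may_conflict_low12(uintptr_t store_addr, size_t store_len,
                               uintptr_t load_addr, size_t load_len)
{
    uintptr_t s = store_addr & 0xFFF;   /* page offset of the store */
    uintptr_t l = load_addr  & 0xFFF;   /* page offset of the load  */
    /* Interval-overlap test on the truncated addresses (ignoring the
     * rare wrap-around at a 4 KiB boundary, for simplicity). */
    return s < l + load_len && l < s + store_len;
}

/* The same overlap test on the full addresses: the TRUE dependence. */
static bool really_conflicts(uintptr_t store_addr, size_t store_len,
                             uintptr_t load_addr, size_t load_len)
{
    return store_addr < load_addr + load_len &&
           load_addr < store_addr + store_len;
}
```

With the patch's numbers (store of 8 bytes to 0xe008, load of 8 bytes from 0xf00c), really_conflicts() is false but may_conflict_low12() is true: a false dependence, so the load stalls until the store drains. After the reordering, the pending store targets 0xe010 while the load reads 0xf004; their page offsets (0x010 vs 0x004, 8 bytes each) no longer overlap, so the false dependence disappears.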