Linux Memory Management: A Buddy-System Allocation Anomaly
A customer recently reported a strange bug: their HTTP proxy (Apache Traffic Server) could no longer proxy HTTP traffic properly.
A look at /var/log/messages showed that during the incident window the kernel's TCP stack was failing to allocate memory, so new TCP connections could not be established.
The failing allocation was for the sock structure of a newly established TCP socket.
The details are in the stack trace below. Every frame is prefixed with a question mark, which means the kernel considers those addresses unreliable and some of the functions may not actually have been called, but the trace is still good enough to sketch the rough call flow.
When a memory allocation fails, the kernel dumps some diagnostic information; the error reported here is "page allocation failure".
A "page allocation failure" is similar to an OOM but not the same thing: an allocation failure in process context normally triggers the OOM killer, whereas a failure in other contexts, such as interrupt context, is reported as a "page allocation failure".
Here is the complete dump:
Jul 25 11:41:35 OSMUM6WS2 kernel: swapper/1: page allocation failure: order:1, mode:0x20
Jul 25 11:41:35 OSMUM6WS2 kernel: Pid: 0, comm: swapper/1 Tainted: G O 3.6.3 #1
Jul 25 11:41:35 OSMUM6WS2 kernel: Call Trace:
Jul 25 11:41:35 OSMUM6WS2 kernel: <IRQ>[<ffffffff8108e907>] ? warn_alloc_failed+0x10a/0x11d
Jul 25 11:41:35 OSMUM6WS2 kernel: [<ffffffff8108f678>] ? __alloc_pages_nodemask+0x5d7/0x604
Jul 25 11:41:35 OSMUM6WS2 kernel: [<ffffffff8100122a>] ? hypercall_page+0x22a/0x1000
Jul 25 11:41:35 OSMUM6WS2 kernel: [<ffffffff810b2e8d>] ? cache_alloc_refill+0x2aa/0x595
Jul 25 11:41:35 OSMUM6WS2 kernel: [<ffffffffa00ed109>] ? ip_natin+0x12e2/0x1326
Jul 25 11:41:35 OSMUM6WS2 kernel: [<ffffffff810b2b82>] ? kmem_cache_alloc+0x93/0xf4
Jul 25 11:41:35 OSMUM6WS2 kernel: [<ffffffff8120c6e1>] ? sk_prot_alloc+0x2b/0xe2
Jul 25 11:41:35 OSMUM6WS2 kernel: [<ffffffff8120c844>] ? sk_clone_lock+0x14/0x268
Jul 25 11:41:35 OSMUM6WS2 kernel: [<ffffffff81242fbc>] ? inet_csk_clone_lock+0x10/0x91
Jul 25 11:41:35 OSMUM6WS2 kernel: [<ffffffff81257a01>] ? tcp_create_openreq_child+0x1b/0x4d4
Jul 25 11:41:35 OSMUM6WS2 kernel: [<ffffffff8125620e>] ? tcp_v4_syn_recv_sock+0x32/0x241
Jul 25 11:41:35 OSMUM6WS2 kernel: [<ffffffff812578b1>] ? tcp_check_req+0x24f/0x384
Jul 25 11:41:35 OSMUM6WS2 kernel: [<ffffffff81255868>] ? tcp_v4_do_rcv+0x14e/0x2c4
Jul 25 11:41:35 OSMUM6WS2 kernel: [<ffffffff8123a1f8>] ? inet_del_protocol+0x2c/0x2c
Jul 25 11:41:35 OSMUM6WS2 kernel: [<ffffffff81255ee4>] ? tcp_v4_rcv+0x506/0x7fe
Jul 25 11:41:35 OSMUM6WS2 kernel: [<ffffffff8123a7ff>] ? ip_local_deliver_finish+0x11b/0x1c3
Jul 25 11:41:35 OSMUM6WS2 kernel: [<ffffffff812169fc>] ? __netif_receive_skb+0x416/0x478
Jul 25 11:41:35 OSMUM6WS2 kernel: [<ffffffff812184d4>] ? netif_receive_skb+0x71/0x77
Jul 25 11:41:35 OSMUM6WS2 kernel: [<ffffffff811e8ca3>] ? xennet_poll+0xa67/0xbef
Jul 25 11:41:35 OSMUM6WS2 kernel: [<ffffffff8129a640>] ? _raw_spin_unlock_irqrestore+0x34/0x35
Jul 25 11:41:35 OSMUM6WS2 kernel: [<ffffffff81218d81>] ? net_rx_action+0x9a/0x17e
Jul 25 11:41:35 OSMUM6WS2 kernel: [<ffffffff8103a41a>] ? __do_softirq+0xac/0x170
Jul 25 11:41:35 OSMUM6WS2 kernel: [<ffffffff8119f793>] ? __xen_evtchn_do_upcall+0x1ce/0x20c
Jul 25 11:41:35 OSMUM6WS2 kernel: [<ffffffff8129c2bc>] ? call_softirq+0x1c/0x30
Jul 25 11:41:35 OSMUM6WS2 kernel: [<ffffffff8100b947>] ? do_softirq+0x5f/0xbd
Jul 25 11:41:35 OSMUM6WS2 kernel: [<ffffffff8103a1af>] ? irq_exit+0x44/0x65
Jul 25 11:41:35 OSMUM6WS2 kernel: [<ffffffff811a0c1a>] ? xen_evtchn_do_upcall+0x27/0x32
Jul 25 11:41:35 OSMUM6WS2 kernel: [<ffffffff8129c30e>] ? xen_do_hypervisor_callback+0x1e/0x30
Jul 25 11:41:35 OSMUM6WS2 kernel: <EOI>[<ffffffff810013aa>] ? hypercall_page+0x3aa/0x1000
Jul 25 11:41:35 OSMUM6WS2 kernel: [<ffffffff810013aa>] ? hypercall_page+0x3aa/0x1000
Jul 25 11:41:35 OSMUM6WS2 kernel: [<ffffffff81006960>] ? xen_safe_halt+0xc/0x15
Jul 25 11:41:35 OSMUM6WS2 kernel: [<ffffffff810112d0>] ? default_idle+0x31/0x5b
Jul 25 11:41:35 OSMUM6WS2 kernel: [<ffffffff810115aa>] ? cpu_idle+0x74/0xab
Jul 25 11:41:35 OSMUM6WS2 kernel: Mem-Info:
Jul 25 11:41:35 OSMUM6WS2 kernel: DMA per-cpu:
Jul 25 11:41:35 OSMUM6WS2 kernel: CPU 0: hi: 0, btch: 1 usd: 0
Jul 25 11:41:35 OSMUM6WS2 kernel: CPU 1: hi: 0, btch: 1 usd: 0
Jul 25 11:41:35 OSMUM6WS2 kernel: CPU 2: hi: 0, btch: 1 usd: 0
Jul 25 11:41:35 OSMUM6WS2 kernel: CPU 3: hi: 0, btch: 1 usd: 0
Jul 25 11:41:35 OSMUM6WS2 kernel: CPU 4: hi: 0, btch: 1 usd: 0
Jul 25 11:41:35 OSMUM6WS2 kernel: CPU 5: hi: 0, btch: 1 usd: 0
Jul 25 11:41:35 OSMUM6WS2 kernel: CPU 6: hi: 0, btch: 1 usd: 0
Jul 25 11:41:35 OSMUM6WS2 kernel: CPU 7: hi: 0, btch: 1 usd: 0
Jul 25 11:41:35 OSMUM6WS2 kernel: CPU 8: hi: 0, btch: 1 usd: 0
Jul 25 11:41:35 OSMUM6WS2 kernel: CPU 9: hi: 0, btch: 1 usd: 0
Jul 25 11:41:35 OSMUM6WS2 kernel: DMA32 per-cpu:
Jul 25 11:41:35 OSMUM6WS2 kernel: CPU 0: hi:186, btch:31 usd: 0
Jul 25 11:41:35 OSMUM6WS2 kernel: CPU 1: hi:186, btch:31 usd: 0
Jul 25 11:41:35 OSMUM6WS2 kernel: CPU 2: hi:186, btch:31 usd: 0
Jul 25 11:41:35 OSMUM6WS2 kernel: CPU 3: hi:186, btch:31 usd: 0
Jul 25 11:41:35 OSMUM6WS2 kernel: CPU 4: hi:186, btch:31 usd: 0
Jul 25 11:41:35 OSMUM6WS2 kernel: CPU 5: hi:186, btch:31 usd: 0
Jul 25 11:41:35 OSMUM6WS2 kernel: CPU 6: hi:186, btch:31 usd: 0
Jul 25 11:41:35 OSMUM6WS2 kernel: CPU 7: hi:186, btch:31 usd: 0
Jul 25 11:41:35 OSMUM6WS2 kernel: CPU 8: hi:186, btch:31 usd: 0
Jul 25 11:41:35 OSMUM6WS2 kernel: CPU 9: hi:186, btch:31 usd: 0
Jul 25 11:41:35 OSMUM6WS2 kernel: Normal per-cpu:
Jul 25 11:41:35 OSMUM6WS2 kernel: CPU 0: hi:186, btch:31 usd: 0
Jul 25 11:41:35 OSMUM6WS2 kernel: CPU 1: hi:186, btch:31 usd: 0
Jul 25 11:41:35 OSMUM6WS2 kernel: CPU 2: hi:186, btch:31 usd: 0
Jul 25 11:41:35 OSMUM6WS2 kernel: CPU 3: hi:186, btch:31 usd: 0
Jul 25 11:41:35 OSMUM6WS2 kernel: CPU 4: hi:186, btch:31 usd: 0
Jul 25 11:41:35 OSMUM6WS2 kernel: CPU 5: hi:186, btch:31 usd: 0
Jul 25 11:41:35 OSMUM6WS2 kernel: CPU 6: hi:186, btch:31 usd: 0
Jul 25 11:41:35 OSMUM6WS2 kernel: CPU 7: hi:186, btch:31 usd: 0
Jul 25 11:41:35 OSMUM6WS2 kernel: CPU 8: hi:186, btch:31 usd: 0
Jul 25 11:41:35 OSMUM6WS2 kernel: CPU 9: hi:186, btch:31 usd: 0
Jul 25 11:41:35 OSMUM6WS2 kernel: active_anon:877763 inactive_anon:377009 isolated_anon:0
Jul 25 11:41:35 OSMUM6WS2 kernel: active_file:33 inactive_file:272 isolated_file:18
Jul 25 11:41:35 OSMUM6WS2 kernel: unevictable:0 dirty:1 writeback:0 unstable:0
Jul 25 11:41:35 OSMUM6WS2 kernel: free:237348 slab_reclaimable:5467 slab_unreclaimable:707212
Jul 25 11:41:35 OSMUM6WS2 kernel: mapped:54988 shmem:54985 pagetables:24919 bounce:0
Jul 25 11:41:35 OSMUM6WS2 kernel: DMA free:7856kB min:8kB low:8kB high:12kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:7632kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? yes
Jul 25 11:41:35 OSMUM6WS2 kernel: lowmem_reserve[]: 0 4024 8921 8921
Jul 25 11:41:35 OSMUM6WS2 kernel: DMA32 free:933396kB min:5448kB low:6808kB high:8172kB active_anon:1717000kB inactive_anon:535384kB active_file:68kB inactive_file:1128kB unevictable:0kB isolated(anon):0kB isolated(file):72kB present:4120800kB mlocked:0kB dirty:4kB writeback:0kB mapped:1304kB shmem:1300kB slab_reclaimable:7380kB slab_unreclaimable:795212kB kernel_stack:12288kB pagetables:29088kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
Jul 25 11:41:35 OSMUM6WS2 kernel: lowmem_reserve[]: 0 0 4897 4897
Jul 25 11:41:35 OSMUM6WS2 kernel: Normal free:8140kB min:6632kB low:8288kB high:9948kB active_anon:1794052kB inactive_anon:972652kB active_file:64kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:5014648kB mlocked:0kB dirty:0kB writeback:0kB mapped:218648kB shmem:218640kB slab_reclaimable:14488kB slab_unreclaimable:2033636kB kernel_stack:2016kB pagetables:70588kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
Jul 25 11:41:35 OSMUM6WS2 kernel: lowmem_reserve[]: 0 0 0 0
Jul 25 11:41:35 OSMUM6WS2 kernel: DMA: 2*4kB 1*8kB 0*16kB 3*32kB 3*64kB 3*128kB 2*256kB 1*512kB 2*1024kB 2*2048kB 0*4096kB = 7856kB
Jul 25 11:41:35 OSMUM6WS2 kernel: DMA32: 233067*4kB 0*8kB 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB 1*512kB 1*1024kB 0*2048kB 0*4096kB = 933804kB
Jul 25 11:41:35 OSMUM6WS2 kernel: Normal: 1983*4kB 2*8kB 6*16kB 1*32kB 1*64kB 0*128kB 1*256kB 1*512kB 0*1024kB 0*2048kB 0*4096kB = 8908kB
Jul 25 11:41:35 OSMUM6WS2 kernel: 55325 total pagecache pages
Jul 25 11:41:35 OSMUM6WS2 kernel: 0 pages in swap cache
Jul 25 11:41:35 OSMUM6WS2 kernel: Swap cache stats: add 0, delete 0, find 0/0
Jul 25 11:41:35 OSMUM6WS2 kernel: Free swap= 0kB
Jul 25 11:41:35 OSMUM6WS2 kernel: Total swap = 0kB
Jul 25 11:41:35 OSMUM6WS2 kernel: 2319600 pages RAM
Jul 25 11:41:35 OSMUM6WS2 kernel: 65108 pages reserved
Jul 25 11:41:35 OSMUM6WS2 kernel: 78453 pages shared
Jul 25 11:41:35 OSMUM6WS2 kernel: 1953241 pages non-shared
Jul 25 11:41:35 OSMUM6WS2 kernel: SLAB: Unable to allocate memory on node 0 (gfp=0x20)
Jul 25 11:41:35 OSMUM6WS2 kernel: cache: TCP, object size: 1664, order: 1
Jul 25 11:41:35 OSMUM6WS2 kernel: node 0: slabs: 3712/3712, objs: 14848/14848, free: 0
From the dump, order:1 means the request was for two contiguous pages, i.e. 8K of physical memory, and gfp_mask=0x20 is GFP_ATOMIC, so the allocation was made in atomic context, which the stack trace (softirq, TCP receive path) also shows.
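For orientation, this is what the failing request amounts to: the slab cache named "TCP" uses order-1 slabs (object size: 1664, order: 1 in the SLAB lines above), so refilling it needs two physically contiguous pages, requested with GFP_ATOMIC from softirq context. A minimal sketch of an equivalent allocation (hypothetical helper, not actual kernel source):

#include <linux/gfp.h>
#include <linux/mm.h>

/*
 * Hypothetical helper: an order-1 (two contiguous pages = 8K) allocation
 * in atomic context, the same class of request the slab allocator makes
 * when refilling the "TCP" cache. GFP_ATOMIC (mode 0x20) means the
 * caller must not sleep, so no direct reclaim is attempted: if the free
 * lists hold no block of order >= 1 that passes the watermark checks,
 * the request fails and the kernel logs
 * "page allocation failure: order:1, mode:0x20".
 */
static void *alloc_8k_atomic(void)
{
        struct page *page = alloc_pages(GFP_ATOMIC, 1); /* order = 1 */

        if (!page)
                return NULL;
        return page_address(page);
}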
To satisfy the 8K request, the kernel first tries low memory, i.e. the Normal zone, but because of the kernel's watermark checks and lowmem_reserve, the twelve 8K blocks still listed there cannot actually be handed out.
It then falls back to the DMA32 and DMA zones in turn, where the allocation fails for the same reason. The free lists, from a later recurrence of the same failure:
Jul 25 13:00:11 OSMUM6WS2 kernel: DMA: 2*4kB 1*8kB 0*16kB 3*32kB 3*64kB 3*128kB 2*256kB 1*512kB 2*1024kB 2*2048kB 0*4096kB = 7856kB
Jul 25 13:00:11 OSMUM6WS2 kernel: DMA32: 211514*4kB 2*8kB 1*16kB 1*32kB 0*64kB 0*128kB 0*256kB 0*512kB 1*1024kB 0*2048kB 0*4096kB = 847144kB
Jul 25 13:00:11 OSMUM6WS2 kernel: Normal: 1819*4kB 12*8kB 1*16kB 1*32kB 1*64kB 0*128kB 1*256kB 1*512kB 0*1024kB 0*2048kB 0*4096kB = 8252kB
Up to this point everything looks plausible: with so few blocks of 8K and above left to allocate from, the failure seems well justified.
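Why does 8252kB of free memory in Normal fail an order-1 request against a 6632kB min watermark? Because the check discounts, order by order, all free blocks smaller than the request. Below is a simplified sketch of the test done by zone_watermark_ok() in mm/page_alloc.c (3.6-era logic; the ALLOC_HIGH/ALLOC_HARDER discounts that a GFP_ATOMIC caller receives are omitted, but with them the arithmetic here still fails):

#include <stdbool.h>

/*
 * Simplified sketch of mm/page_alloc.c:zone_watermark_ok().
 * nr_free[o] is the number of free blocks of order o in the zone.
 */
static bool watermark_ok(long free_pages, long min, long lowmem_reserve,
                         const unsigned long *nr_free, unsigned int order)
{
        unsigned int o;

        if (free_pages <= min + lowmem_reserve)
                return false;                   /* zone too low overall */

        for (o = 0; o < order; o++) {
                /* blocks smaller than the request cannot satisfy it */
                free_pages -= nr_free[o] << o;
                min >>= 1;                      /* requirement halves per order */
                if (free_pages <= min)
                        return false;
        }
        return true;
}

/*
 * Normal zone, order-1 request, numbers from the dump above:
 *   free_pages = 8252kB/4kB = 2063, min = 6632kB/4kB = 1658,
 *   nr_free[0] = 1819  ->  2063 - 1819 = 244 <= 1658/2 = 829  ->  fail.
 * 8MB is "free", yet almost none of it counts toward an 8K request.
 */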
But look at how much memory the system still has free: 8140kB + 933396kB + 7856kB ≈ 927MB, and the vast majority of it sits in the DMA32 zone as 4K blocks.
(However many 4K blocks there are, the buddy allocator cannot satisfy an 8K request from them; by design it can only take a block of 8K or larger, so the allocation ultimately fails.)
This distribution is clearly abnormal: under normal circumstances the kernel merges two contiguous free 4K blocks into one 8K block.
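More precisely, a block can only merge with its buddy: the same-sized block whose page-frame number differs in exactly bit "order", and only while both are free at the same moment. A sketch of the buddy computation (compare __find_buddy_index() in mm/page_alloc.c):

static unsigned long buddy_pfn(unsigned long pfn, unsigned int order)
{
        return pfn ^ (1UL << order);    /* flip bit "order" of the pfn */
}

/*
 * Consequence: pfn 8 and pfn 9 are buddies and can merge into one 8K
 * block, but pfn 9 and pfn 10, although physically adjacent, are not
 * buddies and are never merged. So hundreds of thousands of free 4K
 * pages can coexist with zero usable 8K blocks, as long as each free
 * page's buddy happens to be in use.
 */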
At first glance there seem to be two possible causes:
1. The 4K blocks are all discontiguous fragments and therefore cannot be merged. Our server has 9GB of physical RAM in total, and 927MB/9GB ≈ 10%, which would mean 10% of physical memory is nothing but fragments. That contradicts the buddy system's reputation for keeping fragmentation well under control, and I find it hard to believe that all 927MB really is fragments, though I have no evidence either way.
2. The kernel's buddy-system algorithm has a bug that prevents it from merging free blocks correctly in some situations.
It is hard to say which cause it is at this point; discussion is welcome.
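Incidentally (not from the original post), the same per-order free-block counts can be watched live through the standard /proc/buddyinfo interface, where each column is the number of free blocks of the corresponding order (4K, 8K, 16K, ...) per zone; for the second dump above, the Normal line would read:

cat /proc/buddyinfo
Node 0, zone   Normal   1819   12   1   1   1   0   1   1   0   0   0

Fragmentation shows up exactly as here: a large order-0 column with near-zeros to its right.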
System information:
1. uname -a:
Linux OSMUM6WS2 3.6.3 #1 SMP Tue Oct 15 18:31:48 CST 2013 x86_64 x86_64 x86_64 GNU/Linux
2. CPU: 10 cores, 9GB of physical RAM

Reply: Looks impressive.

Reply: Nice example; I had never run into this before.
Reply: I'm new to this, so I may well be wrong, but I don't see the contradiction in cause 1. Unless the buddy system had some mechanism that swapped a currently free page with an in-use page in order to merge upward, it can do nothing here, and from the code it has no such mechanism. So if those 927MB really are discontiguous 4K pages, the buddy system is indeed helpless.

Reply: @humlb_1983 @humjb_1983 summoning a guru.

Reply: You should check what else on this system requests large allocations (order > 2). Those 900+ MB really are fragments.

Reply: Write a test driver and see whether it can obtain two pages at physically contiguous addresses; if it cannot, then it is memory fragmentation. See the sketch below.
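A minimal sketch of such a test module (hypothetical module and names; it simply tries one order-1 allocation and logs the outcome):

#include <linux/module.h>
#include <linux/gfp.h>
#include <linux/mm.h>

/*
 * Hypothetical fragmentation probe: if this order-1 GFP_ATOMIC request
 * fails while plenty of 4K pages are free, the free memory really is
 * fragmented (or being withheld by the watermark checks).
 */
static int __init frag_test_init(void)
{
        struct page *page = alloc_pages(GFP_ATOMIC, 1); /* 2 contiguous pages */

        if (!page) {
                pr_info("frag_test: order-1 allocation FAILED\n");
        } else {
                pr_info("frag_test: got 8K at pfn %lu\n", page_to_pfn(page));
                __free_pages(page, 1);
        }
        return 0;
}

static void __exit frag_test_exit(void)
{
}

module_init(frag_test_init);
module_exit(frag_test_exit);
MODULE_LICENSE("GPL");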
Reply: It may come from the way applications on this system allocate memory; small allocations like these mostly go through slab.

Reply: Most of it is slab (skb) and anonymous pages. For the latter, you could perhaps enable swap, then run echo 3 > /proc/sys/vm/drop_caches to reclaim and ease the pressure a bit.
Reply: I would also like to know which scenarios tend to produce this kind of fragmentation. @super皮波 @gaojl0728
Reply: Heh, it may really be fragmentation. To confirm it, you would probably need to capture a vmcore and analyze it.

Reply (quoting super皮波, 2014-08-20 18:02: "@humlb_1983 summoning a guru"): Heh, I hardly deserve that; I have been rather busy lately.