[Help] Machine hangs when reading/writing a block device -- /proc/meminfo memory usage question
I wrote a block device driver. While performing write operations on it, I checked the memory usage with

#cat /proc/meminfo

and saw the following:
MemTotal: 1035932 kB
MemFree: 531564 kB
Buffers: 99400 kB
Cached: 286504 kB
SwapCached: 0 kB
Active: 212396 kB
Inactive: 258136 kB
HighTotal: 131008 kB
HighFree: 264 kB
LowTotal: 904924 kB
LowFree: 531300 kB
SwapTotal: 2031608 kB
SwapFree: 2031608 kB
Dirty: 13080 kB
Writeback: 0 kB
AnonPages: 84648 kB
Mapped: 46344 kB
Slab: 21900 kB
PageTables: 2720 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
CommitLimit: 2549572 kB
Committed_AS: 236984 kB
VmallocTotal: 114680 kB
VmallocUsed: 4192 kB
VmallocChunk: 107652 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
Hugepagesize: 4096 kB
The more data I write, the more memory gets used up, and the Dirty (pages waiting for writeback) entry keeps growing. I don't know where the problem is, or why the data is not written back right away; I do free the memory once I am finished with it. Here is the output of running cat again:
MemTotal: 1035932 kB
MemFree: 456704 kB
Buffers: 172744 kB
Cached: 286504 kB
SwapCached: 0 kB
Active: 318008 kB
Inactive: 225868 kB
HighTotal: 131008 kB
HighFree: 264 kB
LowTotal: 904924 kB
LowFree: 456440 kB
SwapTotal: 2031608 kB
SwapFree: 2031608 kB
Dirty: 42836 kB
Writeback: 4 kB
AnonPages: 84644 kB
Mapped: 46344 kB
Slab: 22948 kB
PageTables: 2720 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
CommitLimit: 2549572 kB
Committed_AS: 236984 kB
VmallocTotal: 114680 kB
VmallocUsed: 4192 kB
VmallocChunk: 107652 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
Hugepagesize: 4096 kB
The free memory keeps shrinking, and eventually the machine hangs. Below is the test_xfer_bio function (the function that allocates the memory):
static int test_xfer_bio(struct pns_dev *dev, struct bio *bio)
{
	int i;
	struct bio_vec *bvec;
	sector_t sector = bio->bi_sector;
	void *buffer;
	void *biobuf;

	down(&(dev->subdev.request_lock));

	/* Bounce buffer for one segment; RECV_BUF_SIZE is 4096 when blocknum equals 8. */
	buffer = kmalloc(RECV_BUF_SIZE + 1, GFP_KERNEL);
	if (!buffer) {
		up(&(dev->subdev.request_lock));
		return -ENOMEM;
	}

	bio_for_each_segment(bvec, bio, i) {
		switch (bio_data_dir(bio)) {
		case READA:
		case READ:
			/* Read from the device, then copy into the bio's page. */
			test_transfer(dev, sector, bio_cur_sectors(bio), buffer, MSG_READ_EX);
			biobuf = __bio_kmap_atomic(bio, i, KM_USER0);
			memcpy(biobuf, buffer, bio_cur_sectors(bio) << KERNEL_BLOCKSHIFT);
			__bio_kunmap_atomic(bio, KM_USER0);
			break;
		case WRITE:
			/* Copy the bio's page into the bounce buffer, then write to the device. */
			biobuf = __bio_kmap_atomic(bio, i, KM_USER0);
			memcpy(buffer, biobuf, bio_cur_sectors(bio) << KERNEL_BLOCKSHIFT);
			__bio_kunmap_atomic(bio, KM_USER0);
			test_transfer(dev, sector, bio_cur_sectors(bio), buffer, MSG_WRITE_EX);
			break;
		default:
			goto fail;
		}
		sector += bio_cur_sectors(bio);
	}

	kfree(buffer);
	up(&(dev->subdev.request_lock));
	return 0;

fail:
	kfree(buffer);
	up(&(dev->subdev.request_lock));
	return 0;
}
I allocate the buffer before handling each I/O request and free it once the request has been processed, so I don't understand why memory keeps getting eaten up. Could some expert explain this? The problem has been tormenting me for days; any help is much appreciated!

How come nobody is replying... the forum traffic is really poor. Bumping my own thread :roll:
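While waiting for replies, I am thinking about adding some instrumentation to confirm that the kmalloc/kfree pair in test_xfer_bio really does balance out. A rough sketch of what I have in mind (pns_buf_alloc/pns_buf_free are hypothetical wrappers of my own, not existing kernel functions):

#include <linux/slab.h>
#include <linux/kernel.h>
#include <asm/atomic.h>

/* Number of bounce buffers currently outstanding. */
static atomic_t pns_buf_count = ATOMIC_INIT(0);

/* Wrapper around kmalloc() that bumps the counter on success. */
static void *pns_buf_alloc(size_t size)
{
	void *p = kmalloc(size, GFP_KERNEL);
	if (p)
		atomic_inc(&pns_buf_count);
	return p;
}

/* Wrapper around kfree() that drops the counter and logs the balance. */
static void pns_buf_free(void *p)
{
	if (!p)
		return;
	kfree(p);
	atomic_dec(&pns_buf_count);
	printk(KERN_DEBUG "pns: outstanding bounce buffers = %d\n",
	       atomic_read(&pns_buf_count));
}

If the printed count stays at 0 or 1 while free memory is still disappearing, then the bounce buffer in test_xfer_bio is not what is leaking, and I would look at test_transfer and at the page-cache growth shown by Dirty/Buffers instead.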