The text below is quoted from linuxforum. I still don't understand the real intent of create_bounce. If it exists to accommodate DMA addressing limits, then:
1. For standard (ISA) DMA the addressable range is 16 MB, but create_bounce allocates the bounce page with page = alloc_page(GFP_BUFFER); which clearly does not enforce the 16 MB limit.
2. For DMA on a PCI card the addressable range appears to be 32-bit (4 GB), but the test create_bounce uses to decide whether a bounce page is needed is if (!PageHighMem(bh_orig->b_page)) return bh_orig; in other words, a bounce page is only created when the address is above 896 MB.
Where does my reasoning go wrong? Thanks!
http://www.linuxforum.net/forum/ ... &o=186&vc=1
kmap_atomic uses two virtual pages reserved in the fixmap area, providing a highmem access interface that can also be used in IRQ context.
Also note that __GFP_HIGH does not request memory from highmem; __GFP_HIGHMEM does. Look at
struct buffer_head * create_bounce(int rw, struct buffer_head * bh_orig)
and don't confuse the two flags. create_bounce allocates a non-highmem page for a bh whose data sits in highmem, and inherits everything else from the original bh. Before the I/O starts, or after it completes, it is responsible for copying the data between the highmem page and this bounce page. Presumably DMA cannot address highmem, hence the need for the bounce page (should be this).
Now consider when I/O is performed on a highmem page: the kernel prefers to give user space highmem pages, and in some situations data must be written into those pages. For example, the user mmaps a file, the kernel backs it with highmem pages, and now the data has to be read in. Look directly at block_read_full_page:
............
    if (!buffer_mapped(bh)) {
        memset(kmap(page) + i*blocksize, 0, blocksize);
        flush_dcache_page(page);
        kunmap(page);
        set_bit(BH_Uptodate, &bh->b_state);
        continue;
    }
...............
Here you can see kmap, the operation that maps high memory, which proves this path has to handle highmem pages. Now follow submit_bh -> generic_make_request -> q->make_request_fn; for IDE this function is __make_request (see blk_init_queue):
#if CONFIG_HIGHMEM
    bh = create_bounce(rw, bh);
#endif
This is where the bounce page facility provided by the highmem code is used.
struct buffer_head * create_bounce(int rw, struct buffer_head * bh_orig)
{
    struct page *page;
    struct buffer_head *bh;

    if (!PageHighMem(bh_orig->b_page))
        return bh_orig;

repeat_bh:
    bh = kmem_cache_alloc(bh_cachep, SLAB_BUFFER);
    if (!bh) {
        wakeup_bdflush(1); /* Sets task->state to TASK_RUNNING */
        goto repeat_bh;
    }
    /*
     * This is wasteful for 1k buffers, but this is a stopgap measure
     * and we are being ineffective anyway. This approach simplifies
     * things immensly. On boxes with more than 4GB RAM this should
     * not be an issue anyway.
     */
repeat_page:
    page = alloc_page(GFP_BUFFER);
    if (!page) {
        wakeup_bdflush(1); /* Sets task->state to TASK_RUNNING */
        goto repeat_page;
    }
    set_bh_page(bh, page, 0);

    bh->b_next = NULL;
    bh->b_blocknr = bh_orig->b_blocknr;
    bh->b_size = bh_orig->b_size;
    bh->b_list = -1;
    bh->b_dev = bh_orig->b_dev;
    bh->b_count = bh_orig->b_count;
    bh->b_rdev = bh_orig->b_rdev;
    bh->b_state = bh_orig->b_state;
    bh->b_flushtime = jiffies;
    bh->b_next_free = NULL;
    bh->b_prev_free = NULL;
    /* bh->b_this_page */
    bh->b_reqnext = NULL;
    bh->b_pprev = NULL;
    /* bh->b_page */
    if (rw == WRITE) {
        bh->b_end_io = bounce_end_io_write;
        copy_from_high_bh(bh, bh_orig);
    } else
        bh->b_end_io = bounce_end_io_read;
    bh->b_private = (void *)bh_orig;
    bh->b_rsector = bh_orig->b_rsector;
    memset(&bh->b_wait, -1, sizeof(bh->b_wait));

    return bh;
}