Client/Server Design Alternatives

Posted on 2007-07-13 10:58
Unix Network Programming, Volume 1, Third Edition: The Sockets Networking API
Chapter 30. Client/Server Design Alternatives

The book gives this figure (Figure 30.1):

Row  Server description                               Process control CPU time (difference from baseline)
 0   Iterative server (baseline)                       0.0
 1   Concurrent server, one fork per client request   20.90
 2   Prefork, each child calling accept                1.80
 3   Prefork, file locking around accept               2.07
 4   Prefork, thread mutex locking around accept       1.75
 5   Prefork, parent passing descriptor to child       2.58
 6   One thread per client request                     0.99
 7   Prethreaded, mutex locking to protect accept      1.93
 8   Prethreaded, main thread calling accept           2.05
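
To make the row descriptions concrete, here is a minimal sketch of the row 2 design, where a fixed number of children are forked in advance and every child blocks in accept() on the shared listening socket. The names (NCHILDREN, web_child, child_loop, prefork_children) are my own illustration, not the book's code:

#include <sys/socket.h>
#include <unistd.h>

#define NCHILDREN 8              /* size of the preforked pool (example value) */

/* Simple per-connection handler standing in for the real test workload. */
static void web_child(int connfd)
{
    char buf[4096];
    ssize_t n;

    while ((n = read(connfd, buf, sizeof(buf))) > 0)   /* plain echo */
        write(connfd, buf, (size_t)n);
}

static void child_loop(int listenfd)
{
    for (;;) {
        /* Every child sleeps in accept() on the same listening socket;
         * the kernel hands each new connection to one of them. */
        int connfd = accept(listenfd, NULL, NULL);
        if (connfd < 0)
            continue;
        web_child(connfd);
        close(connfd);
    }
}

void prefork_children(int listenfd)
{
    for (int i = 0; i < NCHILDREN; i++)
        if (fork() == 0) {       /* child processes never return from here */
            child_loop(listenfd);
            _exit(0);
        }
}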

But Section 30.11 says:

"Comparing rows 6 and 7 in Figure 30.1, we see that this latest version of our server is faster than the create-one-thread-per-client version."

In the figure, however, row 7 (1.93) is slower than row 6 (0.99), not faster, so I think something must be wrong with the value in row 7.
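
For reference, the two designs being compared differ only in when the threads are created. Below is a rough sketch of both loops, not the book's exact code; web_child() stands for whatever per-connection work the test client drives, and listenfd for the already-bound listening socket (both assumed here):

#include <pthread.h>
#include <sys/socket.h>
#include <unistd.h>

extern int listenfd;                  /* assumed: listening socket, already bound */
void web_child(int connfd);           /* assumed: per-connection handler */

/* Row 6: the main loop creates one new thread for every accepted connection. */
static void *per_client_thread(void *arg)
{
    int connfd = (int)(long)arg;

    pthread_detach(pthread_self());
    web_child(connfd);
    close(connfd);
    return NULL;
}

void row6_loop(void)
{
    for (;;) {
        int connfd = accept(listenfd, NULL, NULL);
        pthread_t tid;

        pthread_create(&tid, NULL, per_client_thread, (void *)(long)connfd);
    }
}

/* Row 7: a fixed pool of threads is created once at startup; each pool
 * thread calls accept() itself, with a mutex serializing the calls. */
static pthread_mutex_t mlock = PTHREAD_MUTEX_INITIALIZER;

void *row7_pool_thread(void *arg)
{
    (void)arg;
    for (;;) {
        int connfd;

        pthread_mutex_lock(&mlock);
        connfd = accept(listenfd, NULL, NULL);
        pthread_mutex_unlock(&mlock);

        web_child(connfd);
        close(connfd);
    }
    return NULL;                      /* not reached */
}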

Searching the Internet, I found that Dr. Ayman A. Abdel-Hamid (Computer Science Department, Virginia Tech) also treats the value 1.93 as doubtful in one of his lectures.

I also ran the eight example servers myself with the same parameters (row numbers as in Figure 30.1) and got the following results:
Row  Speed rank (1 = fastest)  Time spent
 1   8                         3.215505
 2   3                         0.596908
 3   6                         0.897862
 4   1                         0.565911
 5   7                         0.938856
 6   5                         0.730888
 7   2                         0.571912
 8   4                         0.628903

In my results row 7 really is faster than row 6, as Section 30.11 claims, so perhaps the error is in row 6 instead: the 0.99 there may actually be 1.99.
On the other hand, the book's figure shows "Prefork, each child calling accept" (row 2) as faster than "Prethreaded, mutex locking to protect accept" (row 7), but my test shows the opposite.
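
A side note on measurement: the "process control CPU time" column in Figure 30.1 is the server's total user plus system CPU time for the whole test run, minus the iterative server's time (row 0) for the same workload, so differences in how that time is collected could explain part of the discrepancy if my instrumentation differs from the book's. A rough sketch of one way to collect it (function names are only illustrative):

#include <sys/time.h>
#include <sys/resource.h>

static double tv_seconds(struct timeval tv)
{
    return tv.tv_sec + tv.tv_usec / 1000000.0;
}

/* Total user + system CPU time of this process and its reaped children.
 * RUSAGE_SELF covers all threads of the process on modern systems, so it
 * works for the threaded servers; RUSAGE_CHILDREN picks up the preforked
 * children once they have been waited for. */
double total_cpu_seconds(void)
{
    struct rusage self, children;

    getrusage(RUSAGE_SELF, &self);
    getrusage(RUSAGE_CHILDREN, &children);

    return tv_seconds(self.ru_utime) + tv_seconds(self.ru_stime)
         + tv_seconds(children.ru_utime) + tv_seconds(children.ru_stime);
}

/* The figure's value for a given server would then be total_cpu_seconds()
 * at shutdown minus the same quantity for the iterative server (row 0)
 * driven by an identical client workload. */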

What is your opinion?
