Unix Network Programming, Volume 1, Third Edition: The Sockets Networking API,
Chapter 30, "Client/Server Design Alternatives",
gives this figure (Figure 30.1):
Row  Server description                                  CPU time (s, difference from baseline)
0    Iterative server (baseline)                          0.0
1    Concurrent server, one fork per client request       20.90
2    Prefork, each child calling accept                   1.80
3    Prefork, file locking around accept                  2.07
4    Prefork, thread mutex locking around accept          1.75
5    Prefork, parent passing descriptor to child          2.58
6    One thread per client request                        0.99
7    Prethreaded, mutex locking to protect accept         1.93
8    Prethreaded, main thread calling accept              2.05
But Section 30.11 says:

    Comparing rows 6 and 7 in Figure 30.1, we see that this latest version of our server is faster than the create-one-thread-per-client version.

Row 7 (1.93) is actually slower than row 6 (0.99), which contradicts that sentence, so I think something must be wrong with the data in row 7.
Searching the Internet, I found that Dr. Ayman A. Abdel-Hamid (Computer Science Department, Virginia Tech) also considers the "1.93" value doubtful in a lecture of his.
I also ran the eight server examples myself with the same parameters and got the following figure (Row = the row number in Figure 30.1, Fast = speed rank with 1 the fastest, Cost Time = measured time in seconds):
Row  Fast  Cost Time (s)
1    8     3.215505
2    3     0.596908
3    6     0.897862
4    1     0.565911
5    7     0.938856
6    5     0.730888
7    2     0.571912
8    4     0.628903
So I now think the value in row 6 of Figure 30.1 may actually be 1.99 rather than 0.99.
Also, the figure in the book shows "Prefork, each child calling accept" (row 2, 1.80) as faster than "Prethreaded, mutex locking to protect accept" (row 7, 1.93), but my test result is different: the prethreaded version (0.571912) was faster than the prefork version (0.596908).