Case 1: 10 Mbps = 1 M-byte per second approx.
[assuming
(a) few collisions, i.e. few re-sent packets
(b) allowing for a little network overhead
--> Example environment: a LAN with 4 computers, where fewer than 3 of them are actually sending big files
]
Case 2: 100 Mbps = 10 M-byte per second approx.
[assuming
(a) few collisions, i.e. few re-sent packets
(b) allowing for a little network overhead
--> Example environment: a LAN with 4 computers, where fewer than 3 of them are actually sending big files
]
Case 3: 1000 Mbps = 100 M-byte per second approx.
[assuming
(a) few collisions, i.e. few re-sent packets
(b) allowing for a little network overhead
--> Example environment: a LAN with 4 computers, where fewer than 3 of them are actually sending big files
]
So for case 3, I can transfer a full 700 M-byte CD in about 7 seconds -- provided, of course, that the buses can cope with this bandwidth.
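(Just to show where these numbers come from, here is a quick back-of-envelope sketch in Python. The divide-by-10 factor is only the rule of thumb used above to allow for overheads; a raw bit-to-byte conversion would be divide-by-8.)

# Rough check of cases 1-3 above: divide the link rate in Mbps by ~10
# (instead of 8) to allow for collisions and protocol overhead.
CASES_MBPS = [10, 100, 1000]   # link rates for cases 1, 2, 3
CD_SIZE_MB = 700               # a full CD, as in the example

for mbps in CASES_MBPS:
    approx_mb_per_s = mbps / 10              # ~1, ~10, ~100 M-byte/s
    seconds = CD_SIZE_MB / approx_mb_per_s
    print(f"{mbps} Mbps -> ~{approx_mb_per_s:g} MB/s, 700 MB CD in ~{seconds:g} s")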
In the full internet environment, collisions and resending of packets drastically cut down the useful bandwidth.
Essentially, they are "refining" [with a lot of engineering, of course] how Ethernet deals with collisions and multiple streams. Then there is the fairness element....
* * * * * * * * * * * * * * * * * * * * * * *
Here is what Nature reported this March; accordingly, we should see something this summer:
It reported 7 Giga-byte per minute in the full Internet environment.
7 Giga-byte per minute is ABOVE 110 Mega-byte per sec.

Author: andrewchoi    Time: 2003-09-26 19:47    Subject: Does anyone know the details of this technology? Downloading a whole DVD takes only five seconds
I haven't read the full text properly....
But it is all Load Balance technology.

Author: En_route    Time: 2003-09-27 06:03    Subject: Does anyone know the details of this technology? Downloading a whole DVD takes only five seconds
Assuming you have the resources (money), can you enlighten me on what would be required to connect Beijing (China), New York (USA), Sydney (Australia) and Cape Town (South Africa), each with 20 users requiring 100 Mega-byte per second simultaneously (i.e. the normal Internet working environment), using the load-balance technology as you understand it, other than a dedicated line like case 3 described in my earlier posting?
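(For scale, a rough calculation of what that question asks for per site -- illustrative arithmetic only, saying nothing about how such a wide-area link would actually be engineered:)

# Each site: 20 users, each sustaining 100 M-byte per second.
USERS_PER_SITE = 20
PER_USER_MB_PER_S = 100

aggregate_mb_per_s = USERS_PER_SITE * PER_USER_MB_PER_S   # 2000 MB/s per site
aggregate_gbps = aggregate_mb_per_s * 8 / 1000            # ~16 Gbps raw, before overhead
print(f"Per-site demand: {aggregate_mb_per_s} MB/s (~{aggregate_gbps:g} Gbps)")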
I have added the headings, hoping they will assist reading.
The 7 Giga-byte per minute speed was achieved using _EXISTING_ facilities -- NO NEW HARDWARE is required, ONLY THE _FAST_ PROTOCOL.
LIMITATIONS OF EXISTING PROTOCOL
================================
The problem is the Transmission Control Protocol (TCP), which manages the data that flows between computers. The TCP chops information into little packets -- the receiving and sending computers communicate to check that all the packets have arrived correctly, and the sender re-sends those that didn't.
If it detects many errors, the TCP deduces that the network is congested. It halves the sending rate, and begins edging back up towards the maximum.
WHERE IS THE BOTTLENECK --
(1) OUR COMPUTERS ARE MANY, MANY TIMES FASTER THAN THOSE OF THE 1980s
(2) OUR INTERNET SUPERHIGHWAY HAS A BANDWIDTH MANY TIMES THAT OF THE 80s
(3) WE ARE STILL USING THE SAME TCP OF THE 80s
===============================================
This worked fine for the Internet of the late 1980s, when the TCP was invented. But it copes less well with powerful twenty-first-century networks. "The adaptation is too drastic," Low explains. "The speed jumps around from too high to too low."
It's like driving a car by flooring the accelerator for as long as you can, and then stamping on the brake when you hit traffic.
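(A toy model of that "floor it, then stamp on the brake" behaviour -- standard Reno-style additive-increase/multiplicative-decrease, written from the description above, not code from the article:)

# Loss-driven rate control as described above: halve the congestion
# window when loss signals congestion, otherwise creep back up by
# roughly one packet per round trip.
def reno_next_cwnd(cwnd: float, loss_detected: bool) -> float:
    if loss_detected:
        return max(cwnd / 2.0, 1.0)   # drastic cut on congestion
    return cwnd + 1.0                 # slow additive recovery

cwnd = 64.0
for rtt, loss in enumerate([False, False, True, False, False]):
    cwnd = reno_next_cwnd(cwnd, loss)
    print(f"RTT {rtt}: loss={loss}, cwnd={cwnd:.1f} packets")

# On a high-bandwidth, long-delay path, clawing back a halved window at
# one packet per round trip takes thousands of round trips -- that is the
# "too drastic" adaptation Low is talking about.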
CALTECH's APPROACH
==================
Caltech's alternative is called FAST, for Fast Active queue-management Scalable TCP. It detects congestion by measuring the delay between sending a packet of data and receiving an acknowledgement. As this delay increases, it eases off - just a little.
This deals with congestion before the error rate rises. "It allows you to adapt more smoothly," says Low. In tests using existing hardware and networks, FAST has run the international links between labs at more than 95% efficiency.
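(A sketch of the delay-based idea in the same toy style. The update below follows the general form published for FAST, w <- (1-g)*w + g*(baseRTT/RTT*w + alpha); the parameter values are made up for illustration, not Caltech's.)

# Delay-based adjustment: compare the measured RTT against the best
# (queue-free) RTT seen so far and ease the window off as queueing
# delay creeps up, instead of waiting for losses.
def fast_next_cwnd(cwnd: float, base_rtt: float, rtt: float,
                   alpha: float = 100.0, gamma: float = 0.5) -> float:
    target = (base_rtt / rtt) * cwnd + alpha       # shrinks as delay grows
    new_cwnd = (1.0 - gamma) * cwnd + gamma * target
    return min(2.0 * cwnd, new_cwnd)               # never more than double

cwnd, base_rtt = 1000.0, 0.010                     # 10 ms queue-free RTT
for rtt in [0.010, 0.011, 0.012, 0.014]:           # delay creeping upwards
    cwnd = fast_next_cwnd(cwnd, base_rtt, rtt)
    print(f"RTT={rtt*1000:.0f} ms -> cwnd={cwnd:.0f} packets")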
MULTIPLE STREAMS IS _NOT_ AN OPTION
===================================
SLAC already uses tricks to increase its transmission rates, such as sending several data streams at once. But this system is prone to breaking down. "FAST is a big simplification on how we do things now, and that's a major advance," says Cottrell.
FAIRNESS: GIVEN A BANDWIDTH, IF YOU CAN SEND MORE, OTHERS HAVE TO SEND LESS
===============================================
Low's team is not yet ready to unleash its creation on the open Internet. Online traffic is a balance of many different information flows, and the team still needs to ensure that FAST will not hog the information superhighway at other users' expense.