Last edited by 锅铁做 on 2012-08-06 16:40
Reply to #11 bbjmmj
Hi,
Actually, which type of disk the storage array uses is secondary. What really matters is the link between the two metro disaster-recovery sites, and how to reduce the latency across that link.
InfiniBand can push data straight onto the system bus, which helps offset some of the I/O latency.
The solution our company designed for that customer (InfiniBand-based) was built with direct involvement from headquarters engineers (a group of expats), combined with technology from Xsigo Systems, whose part was the protocol conversion at both ends of the link.
Also, regarding FC transmission latency, I have a (theoretical) figure I can share with you:
Optical fibre adds roughly 5 μs of latency per kilometre, so a 35 km link adds 35 × 5 μs per traversal.
Taking SCSI as an example, each write I/O requires 8 link traversals. So the bottom line is: for a pair of storage arrays 35 km apart connected over FC, the added write latency is 35 × 5 μs × 8.
That is the theoretical value; in practice it will be higher.
Here is the excerpt from the guide this is based on:
See:
In environments where a direct connection (dark Fibre) between sites is used, latency is normally no problem, for example:
*Dark Fibre links can be stretched up to 10 km (with 1300 nm laser) or 35 km (with 1550 nm laser). A dark Fibre link of 35 km adds a latency of around 5 micro seconds per km.
<A microsecond (μs) is equal to one millionth of a second or one thousandth of a millisecond (ms)>
*Typical SCSI transactions require a transaction to traverse the link 8 times, or four round trips.
This means a dark Fibre link of 35 km adds a latency of 5 μs/km * 35 km * 8 (trips) = 1400 microseconds (μs) = 1.4 milliseconds (ms), which is negligible for most applications, but it could affect time-sensitive transactional Application Servers/Hosts, such as databases, which can issue many small I/Os per second.
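The arithmetic in the quoted guide can be sketched as a small helper; the function name and the 10 km comparison are my own illustration, while the 5 μs/km and 8-traversal figures come straight from the excerpt above:

```python
US_PER_KM = 5    # one-way propagation latency per km of dark fibre (from the guide)
TRAVERSALS = 8   # a typical SCSI write crosses the link 8 times (four round trips)

def added_write_latency_ms(distance_km: float) -> float:
    """Extra write latency (ms) a dark-fibre link of the given length adds."""
    return US_PER_KM * distance_km * TRAVERSALS / 1000.0

print(added_write_latency_ms(35))  # 1.4 ms for the 35 km case in the guide
print(added_write_latency_ms(10))  # 0.4 ms for a 10 km (1300 nm laser) link
```

Note this only covers propagation delay; switch hops, protocol conversion, and queueing on the arrays all add on top, which is why the real-world figure is higher than the theoretical one.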