Notes on RAID
The following is a general list of ideas that will help you choose the right RAID level and its associated settings.
As there is no universal solution to all problems, RAID configuration will vary depending on the application. Here is a short list of guidelines we have compiled:
General configuration notes:
1. Striping has the best performance, but offers no data protection.
2. For write intensive applications, mirroring has better performance than RAID5.
3. Mirroring and RAID5 both increase data availability, and both decrease write performance.
4. Mirroring improves random read performance.
5. RAID5 has lower cost than mirroring. Stripes/concatenations have no additional cost.
Concatenation notes:
1. Concatenation uses less CPU time than striping.
2. Concatenation works well for small random I/O.
3. Avoid using physical disks with different geometries.
4. Distribute slices across different controllers and busses to help balance the I/O load.
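For reference, a minimal sketch of a two-slice concatenation with DiskSuite's metainit (the metadevice name d25 and the slice names are placeholders; note the slices sit on different controllers, per item 4):

  # concatenation = two stripes, each one slice wide
  metainit d25 2 1 c1t0d0s2 1 c2t0d0s2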
Striping (RAID0) notes:
1. Set the stripe's interlace value correctly.
2. The more physical disks in a stripe, the greater the I/O performance, and the lower the MTBF (mean time between failures).
3. Don't mix differently sized slices, as a stripe's size is limited by its smallest slice.
4. Avoid using physical disks with different geometries.
5. Distribute the striped metadevice across different controllers and busses.
6. Striping cannot be used to encapsulate existing filesystems.
7. Striping performs well for large sequential I/O and for random I/O distributions.
8. Striping uses more CPU cycles than concatenation, but it is usually worth it.
9. Striping does not provide any redundancy of data.
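As a rough sketch of items 1 and 5 above, a four-way stripe with an explicit 32 KB interlace might be created like this (d10 and the slice names are illustrative, with each slice on its own controller):

  # one stripe, four slices wide, 32 KB interlace
  metainit d10 1 4 c1t0d0s2 c2t0d0s2 c3t0d0s2 c4t0d0s2 -i 32k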
Mirroring (RAID1) notes:
1. Mirroring may improve read performance; write performance is always degraded.
2. Mirroring improves read performance only in multi-threaded or asynchronous I/O situations.
3. Mirroring degrades write performance by about 15-50 percent, as it has to write everything twice.
4. Using the filesystem cache may turn an 80/20 read/write workload into 60/40 or even 40/60.
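A hedged sketch of a two-way mirror (d20/d21/d22 and the slices are placeholders): build the submirrors, create a one-way mirror, then attach the second submirror so it resyncs in the background:

  metainit d21 1 1 c1t0d0s2    # first submirror
  metainit d22 1 1 c2t0d0s2    # second submirror, on another controller
  metainit d20 -m d21          # one-way mirror using d21
  metattach d20 d22            # attach d22; resync runs in the background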
RAID5 notes:
1. RAID5 can withstand only a single device failure (mirroring MAY survive several, depending on which disks fail; striping and concatenation tolerate none).
2. RAID5 provides good read performance under no errors, and poor read performance under error conditions.
3. RAID5 can cause poor write performance -- up to 70 percent degradation (as parity has to be calculated on the fly).
4. RAID5 is much cheaper than mirroring: the capacity lost to parity is 1 / (total number of disks), i.e. one disk's worth of space spread across the set.
5. RAID5 can NOT be used for existing filesystems. A backup and restore will be necessary.
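To illustrate item 5, a RAID5 metadevice is initialized in one step and must then receive a fresh filesystem (device names are placeholders; with three slices, one slice's worth of capacity goes to parity):

  # RAID5 across three slices with a 32 KB interlace -- existing data on the slices is lost
  metainit d30 -r c1t0d0s2 c2t0d0s2 c3t0d0s2 -i 32k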
State database replica notes:
(State database replicas only apply to Solstice DiskSuite)
1. All replicas are written when the configuration changes.
2. Only two replicas (per mirror) are updated for mirror dirty region bitmaps.
3. A good average is two replicas per three mirrors.
4. Use two replicas per one mirror for write intensive applications.
5. Use two replicas per 10 mirrors for read intensive applications.
6. 1 drive => 3 replicas on one slice, as the minimum number of replicas is 3.
7. 2-4 drives => 2 replicas on each drive.
8. 5+ drives => 1 replica on each drive.
9. Each state database replica occupies 517K (1034 disk sectors).
10. Replicas can be stored on a dedicated slice or on one that will be used in a metadevice.
11. The system will keep running as long as at least half of the replicas are available, but needs a majority (half + 1) to reboot.
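As a sketch of the 2-4 drive rule above (the slice names are placeholders reserved for the state database; -c sets the number of replicas per slice, and -f is only needed when creating the very first replicas):

  metadb -a -f -c 2 c1t0d0s7   # first two replicas
  metadb -a -c 2 c2t0d0s7      # two more on a second drive
  metadb -i                    # list replicas and their status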
Logging device notes:
1. Place them on an unused disk, preferably around the middle of the disk (to minimize average seek time).
2. The log device and the master device of the same trans metadevice should be located on different drives/controllers to balance the I/O load.
3. Trans metadevices can share logs. This is not recommended for heavily used filesystems.
4. Absolute minimum log size is 1 MB. A good average is 1 MB of log per 100 MB of filesystem; the recommended minimum is 1 MB of log per 1 GB of filesystem.
5. All logs should be mirrored to avoid filesystem problems and/or data loss.
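A tentative sketch of a trans metadevice with a mirrored log, assuming d20 is an existing mirror holding the filesystem (the master) and the c3/c4 slices are small, otherwise idle slices sized per note 4 (e.g. a few MB for a multi-GB filesystem); all names are placeholders:

  metainit d52 1 1 c3t0d0s1    # log submirror
  metainit d53 1 1 c4t0d0s1    # second log submirror
  metainit d51 -m d52          # mirror the log, per note 5
  metattach d51 d53
  metainit d50 -t d20 d51      # trans metadevice: master d20, log d51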
Filesystem notes:
1. Create new filesystems with "newfs -i 8192" -- 1 inode per 8K (default is 1 inode per 2K).
2. For large metadevices (>8G), increase the size of a cylinder group (max is 256) -- "newfs -c 256".
3. Make the filesystem cluster size equal to an integral multiple of the stripe width. For example:
   maxcontig = 16 (16 * 8 KB blocks = 128 KB clusters)
   interlace size = 32 KB (32 KB stripe unit * 4 disks = 128 KB stripe width)
   A four-way stripe with a 32 KB interlace gives a 128 KB stripe width, which matches the 128 KB cluster size.
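Putting the newfs hints together in one hedged example (d10 stands for the hypothetical four-way stripe discussed above; maxcontig can also be adjusted after creation with tunefs):

  newfs -i 8192 -c 256 /dev/md/rdsk/d10   # 1 inode per 8 KB, 256 cylinders per cylinder group
  tunefs -a 16 /dev/md/rdsk/d10           # maxcontig = 16 (16 * 8 KB = 128 KB clusters)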
This article is from the ChinaUnix blog; the original is at: http://blog.chinaunix.net/u/10290/showart_48711.html