6.2.2 Host configuration considerations
There are several key points to consider when configuring the host for optimal performance. Because the
XIV Storage System already distributes the data across all of its disks, an additional layer of volume
management at the host, such as Logical Volume Manager (LVM) striping, might hinder performance
for some workloads. Multiple levels of striping can create an imbalance across a specific resource.
Therefore, it is best to disable host striping of data for XIV Storage System volumes and let
the XIV Storage System manage the data.
Based on your host workload, you might need to modify the maximum transfer size that the
host generates to the disk to obtain peak performance. For applications with large transfer
sizes, if a smaller maximum host transfer size is selected, the transfers are broken up,
causing multiple round-trips between the host and the XIV Storage System. By making the
maximum host transfer size as large as or larger than the application transfer size, fewer
round-trips occur, and the system experiences improved performance. If a transfer is smaller
than the maximum host transfer size, the host transfers only the amount of data that it has to send.
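The effect is simple arithmetic: each application I/O that exceeds the maximum host transfer size is split into multiple transfers. The following Python sketch, using purely hypothetical sizes, shows how the number of round-trips per application I/O is derived:

import math

# Hypothetical sizes, chosen only to illustrate the arithmetic;
# they are not XIV or host defaults.
app_transfer_kb = 1024        # the application issues 1 MB I/Os
max_host_transfer_kb = 256    # maximum transfer size set on the host

# A 1 MB application I/O is broken into ceil(1024 / 256) = 4 transfers,
# that is, 4 round-trips between the host and the storage system
# instead of 1.
round_trips = math.ceil(app_transfer_kb / max_host_transfer_kb)
print(round_trips, "round-trips per application I/O")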
Because of the distributed data architecture of the XIV Storage System, high performance is achieved
through parallelism. Specifically, the system maintains a high level of performance as the number
of parallel transactions to the volumes grows. Ideally, the host workload is tailored to use
multiple threads or to spread the work across multiple volumes.
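As an illustration of spreading work across volumes, the following Python sketch issues reads against several volumes concurrently. The device paths are hypothetical placeholders; on a real host they would be the multipath devices that back the XIV volumes, and the script needs sufficient privileges to read them:

import concurrent.futures

# Hypothetical device paths; replace with the multipath devices
# that represent your XIV volumes.
VOLUMES = ["/dev/mapper/xiv_vol1", "/dev/mapper/xiv_vol2",
           "/dev/mapper/xiv_vol3", "/dev/mapper/xiv_vol4"]

def read_chunk(path, offset, size=1 << 20):
    # One 1 MB read from one volume; keeping many of these in
    # flight at once is what exploits the grid's parallelism.
    with open(path, "rb") as dev:
        dev.seek(offset)
        return len(dev.read(size))

with concurrent.futures.ThreadPoolExecutor(max_workers=8) as pool:
    futures = [pool.submit(read_chunk, vol, i * (1 << 20))
               for vol in VOLUMES for i in range(4)]
    total = sum(f.result() for f in futures)

print("read", total, "bytes across", len(VOLUMES), "volumes in parallel")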
Changing the queue depth
The XIV Storage architecture was designed to perform under real-world customer production
workloads, with many I/O requests arriving at the same time. Queue depth is an important host bus
adapter (HBA) setting because it essentially controls how much data is allowed to be “in flight”
on the SAN from the HBA. A queue depth of 1 requires that each I/O request complete
before the next one starts. A queue depth greater than 1 means that multiple host I/O
requests can be waiting for responses from the storage system. Therefore, the higher the host
HBA queue depth, the more parallel I/O goes to the XIV Storage System.
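The relationship between queue depth and parallel I/O follows Little's law (outstanding I/Os = throughput x response time). The following Python sketch uses an assumed, hypothetical per-I/O service time, not an XIV measurement, to show why a deeper queue can sustain more I/O per second until the storage system saturates:

# Assumed per-I/O service time of 0.5 ms; a hypothetical figure
# used only to illustrate the scaling.
service_time_s = 0.0005

for queue_depth in (1, 4, 16, 64):
    # With queue_depth I/Os kept in flight, achievable IOPS is
    # approximately queue_depth / service time, up to the point
    # where the storage system saturates.
    iops = queue_depth / service_time_s
    print("queue depth", queue_depth, "-> up to", int(iops), "IOPS")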
The XIV Storage architecture eliminates the legacy storage concept of a large central cache.
Instead, each component in the XIV grid has its own dedicated cache. The XIV algorithms
that stage data between disk and cache work most efficiently when multiple I/O requests
arrive in parallel, which is where the queue depth host parameter becomes an important
factor in maximizing XIV Storage I/O performance.
Sample queue depth comparison
Figure 6-1 shows a queue depth comparison for a database I/O workload (70 percent reads,
30 percent writes, 8 KB block size; DBO = Database Open).
Note that the performance numbers in this example are valid only for this particular test in an
IBM lab. The numbers do not describe the general capabilities of the IBM XIV Storage System
as you might observe them in your environment.
Figure 6-1 Host side queue depth comparison
A good practice is to start with a queue depth of 64 per HBA to ensure exploitation of the
XIV parallel architecture.
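How the queue depth is set depends on the operating system and HBA driver; for example, on Linux the per-LUN queue depth is exposed through sysfs, and HBA-wide defaults are driver module parameters (such as ql2xmaxqdepth for QLogic or lpfc_lun_queue_depth for Emulex adapters). The following Python sketch, assuming a Linux host where SCSI disks appear as /dev/sd*, only inspects the current values:

import glob
import pathlib

# Assumes a Linux host; each SCSI disk exposes its queue depth
# in sysfs.
for entry in sorted(glob.glob("/sys/block/sd*/device/queue_depth")):
    path = pathlib.Path(entry)
    device = path.parts[3]            # for example, "sda"
    depth = path.read_text().strip()
    print(device, "queue depth:", depth)
    # To change the value at run time (as root), write the new
    # depth back, for example: path.write_text("64")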
Nevertheless, the initial queue depth value might need to be adjusted over time. Although a higher
queue depth in general yields better performance with XIV, you must consider the limitations
per port on the XIV side. Each HBA port on the XIV Interface Module is designed and set to
sustain up to 1400 concurrent I/Os (except for port 3 when port 4 is defined as initiator, in
which case port 3 is set to sustain up to 1000 concurrent I/Os). With the suggested queue depth
of 64 per host port, one XIV port is limited to 21 concurrent host ports (1400 / 64, rounded
down), assuming that each host port fills its entire queue of 64 with every request.
If, in a very large environment, the fan-in of 21 host ports per XIV port is not sufficient,
lower queue depth values must be configured. This method can also be used as a “poor man’s”
Quality of Service (QoS) mechanism, as the sketch that follows illustrates.
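The following Python sketch reproduces the sizing arithmetic with the figures from the text; the host-port count in the second step is a hypothetical example:

# 1400 is the stated per-port limit on the XIV Interface Module.
xiv_port_queue = 1400
host_queue_depth = 64

# 1400 // 64 = 21 host ports per XIV port, assuming every host
# port keeps its queue of 64 completely full.
print(xiv_port_queue // host_queue_depth)   # prints 21

# Conversely, for a larger fan-in, derive the queue depth to set:
host_ports = 56                             # hypothetical environment
print(xiv_port_queue // host_ports)         # prints 25 -> configure qd <= 25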