Introduction to performance tuning for HP-UX

论坛徽章:
0
跳转到指定楼层
1 [收藏(0)] [报告]
发表于 2006-09-04 14:26 |只看该作者 |倒序浏览
A very practical set of HP-UX tuning notes; it has helped me a great deal.
When considering the performance of any system it is important to determine a baseline of what is acceptable. How does the system perform when there is no load from applications or users? What are the system's resources in terms of memory, both physical and virtual? How many processors does the system have? What is their speed and RISC level? What is the layout of the data? What are the key kernel parameters set to? How are those resources being utilized? What are the utilities to measure these?

Memory Resources

HP-UX utilizes both physical memory (RAM) and disk memory, referred to as swap. There are three resources that can be used to determine the amount of RAM: syslog.log, dmesg, and adb (the absolute debugger). The information dmesg reports comes from /var/adm/syslog/syslog.log. While using dmesg is convenient, if the system has logged too many errors recently the memory information may no longer be available. Insufficient memory resources are a major cause of performance problems, and should be the first area to check.

The memory information from dmesg is at the bottom of the output.

example:

    Memory Information:
    physical page size = 4096 bytes, logical page size = 4096 bytes
    Physical: 524288 Kbytes, lockable: 380880 Kbytes, available: 439312

Using adb reads the memory from a more reliable source, the kernel. To determine the physical memory (RAM) using adb:

for HP-UX 10.X:

    echo physmem/D | adb /stand/vmunix /dev/kmem
    physmem:
    physmem:        24576

for HP-UX 11.X systems running on 32 bit architecture:

    echo phys_mem_pages/D | adb /stand/vmunix /dev/kmem
    phys_mem_pages:
    phys_mem_pages: 24576

for HP-UX 11.X systems running on 64 bit architecture:

    echo phys_mem_pages/D | adb64 /stand/vmunix /dev/mem
    phys_mem_pages:
    phys_mem_pages: 262144

The results of these commands are in 4 Kb memory pages; to determine the size in bytes, multiply by 4096.

To fully utilize all of the RAM on a system there must be a sufficient amount of virtual memory to accommodate all processes that will be opened on the system. The HP recommendation is that virtual memory be at least equal to physical memory plus application size. This is outlined in the System Administration Tasks Manual.

To determine the virtual memory configuration run the following command:

    # swapinfo -tam

example:

                 Mb      Mb      Mb   PCT  START/      Mb
    TYPE      AVAIL    USED    FREE  USED   LIMIT RESERVE  PRI  NAME
    dev        1024       0    1024     0                    1  /dev/vg00/lvol1
    reserve       -     184    -184
    memory      372      96     276    26
    total      1396     280    1116    20

The key areas to monitor are reserve, memory and total. For a process to spawn it needs a sufficient amount of virtual memory to be placed in reserve. There should be a sufficient amount of free device swap to open any processes that may be spawned during the course of operations; by subtracting the reserve from the device total you can determine this value. In the example above, 184Mb of device swap has been reserved, which leaves 840Mb to open up processes or for paging to disk.

If there is an insufficient amount of device swap available, the system will use RAM to reserve memory for the fork call. This is an inefficient use of fast memory. If there is an insufficient amount of available memory to fork, you will receive the error "cannot fork: not enough virtual memory". If this error is received you will need to allocate more device swap. This should be configured on a disk with no other swap partitions, and ideally of the same size and priority as existing swap logical volumes to enable interleaving. Refer to the Application Note KBAN00000218, Configuring Device Swap, for details on the procedure.
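The figures above can be pulled together into a quick summary. The following is a minimal sketch, not part of the original procedure: it assumes an 11.X 64 bit system (adjust the symbol name and adb variant for other releases as shown above), the awk filter assumes the two-line adb output format shown, and the variable names are illustrative only.

    # physical memory: adb reports 4 Kb pages, so Mb = pages / 256
    PAGES=`echo phys_mem_pages/D | adb64 /stand/vmunix /dev/mem | awk 'NF==2 {print $2}'`
    echo "RAM: `expr $PAGES / 256` Mb"

    # device swap still available for new processes = device AVAIL - reserve USED
    swapinfo -tam | awk '$1 == "dev"     {avail += $2}
                         $1 == "reserve" {res = $(NF-1)}
                         END {print "free device swap:", avail - res, "Mb"}'

With the example output above this prints 840 Mb, the same value derived by hand.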
The memory line is enabled when the kernel parameter swapmem_on is set to 1. This allows a percentage of RAM to be allocated as pseudo-swap. This is the default and should be used unless the amount of lockable memory exceeds 25% of RAM. You can determine the amount of lockable memory by running the command:

    echo total_lockable_mem/D | adb /stand/vmunix /dev/mem
    total_lockable_mem:
    total_lockable_mem: 185280

This returns the amount, in Kbytes, of lockable memory in use. Divide this by 1024 to get the size in megabytes, then divide by the amount of RAM in megabytes to determine the percentage.

To avoid memory contention between the buffer cache and pseudo-swap, dbc_max_pct should not be greater than the difference between lockable memory and pseudo-swap. Some overlap is acceptable under most conditions. As pseudo-swap is used to prevent paging to disk, and thus reduces disk I/O traffic, it should be a significant consideration when configuring memory.

If pseudo-swap is disabled by setting swapmem_on to 0, there will typically be a need to increase the amount of device swap in the system to accommodate the paging and reserve areas. Ideally, in a modern system, paging to disk should be avoided. If there is significant paging to disk and the buffer cache has already been adjusted to avoid contention, adding RAM would be advisable for maximum performance.

After physical and virtual memory are determined, we need to determine how much buffer cache has been configured and how much is being used. By default the system will use a dynamic buffer cache; the kernel will show bufpages and nbuf set to 0 in SAM. The parameters that govern the size of the dynamic buffer cache are dbc_min_pct and dbc_max_pct, which define the minimum and maximum percentage of RAM allocated. The default values are 5% minimum and 50% maximum. On systems with small amounts of RAM these values may be useful for dedicated applications. Since the introduction of HP-UX 11.0 the amount of RAM a system can have has increased from 3.75Gb to as much as 256Gb on the newest systems. Keeping the default values on systems with a large amount of RAM can have a negative impact on performance, due to the time taken by the lower level routines that check on free memory in the cache.

To monitor the use of the buffer cache run the following command:

    sar -b 5 100

You will see output similar to:

    bread/s lread/s %rcache bwrit/s lwrit/s %wcache pread/s pwrit/s
          0      95     100       1       2      54       0       0

The statistical average will be reported at the end of the report. Ideally we want to see an average %wcache of 95 or greater. If the system consistently shows %wcache less than 75 it would be advisable to lower the value of dbc_max_pct. In 32 bit architecture the buffer cache resides in quadrant 3, limiting its maximum size to 1Gb. Large and volatile buffer caches can have a negative impact on performance; normally no more than 300Mb is required to provide a sufficient buffer cache.

Keep in mind that many modern disk arrays buffer their writes with onboard memory, and many databases use lockable memory to buffer within the database.

Note: Buffers remain in the cache even after a file is closed, as they could be used again in the future.
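Because dbc_max_pct is expressed as a percentage of RAM, a target cache size in megabytes has to be converted before it can be set. A minimal sketch using hypothetical values (a 4096Mb system and the 300Mb guideline above); this is an illustration, not an HP-supplied formula:

    RAM_MB=4096        # physical memory in Mb, from adb as shown earlier
    TARGET_MB=300      # desired maximum buffer cache size
    # dbc_max_pct is a whole-number percentage of RAM; round up so the
    # configured maximum is not smaller than the target
    echo "set dbc_max_pct to `expr \( $TARGET_MB \* 100 + $RAM_MB - 1 \) / $RAM_MB`"

On this example system the result is 8, well below the 50% default.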
Trade-offs are associated with either a static or a dynamic buffer cache. If memory pressure exists, a static buffer cache cannot be reduced, potentially causing more important pages to be swapped out or processes to be deactivated. In contrast, some overhead exists in managing the dynamic buffer cache, such as the dynamic allocation of the buffers and the management of the buffer cache address map or buffer cache virtual bitmap. Also, a dynamic buffer cache expands very rapidly but contracts very slowly, and only when memory pressure exists.

It is possible to bypass either static or dynamic buffer caches; in some instances this allows for faster disk I/O. This can be accomplished with the Online JFS mount options mincache=direct and convosync=direct. Other options would be raw I/O, asynchronous writes to raw logical volumes, discovered_direct_io and ioctl. These topics are covered later in the text.

Tuning recommendations:
For databases, favor the global area (SGA) over the buffer cache.
For most systems 200-400 MB is sufficient.

Current patches relating to the buffer cache:

    10.20  PHKL_28866 (Critical, Reboot) s800 10.20 VM read-ahead panic, buffer cache, paging
           PHKL_26767 (Critical, Reboot) s800 10.20 Buffer cache deadlock; write gets VX_ERETRY
    11.0   PHKL_18543 (Critical, Reboot) s700_800 11.00 PM/VM/UFS/async/scsi/io/DMAPI/JFS/perf patch
    11.11  PHKL_27808 s700_800 11.11 Filesystem buffer cache performance fix

Memory for applications

For applications to have a sufficient amount of space for text, data and stack in memory, the kernel has to be tuned. The total size for text, data and stack for 32 bit systems using EXEC_MAGIC is in quadrants 1 and 2, and is at maximum 2Gb less the size of the Uarea. These are represented by the kernel parameters maxtsiz, maxdsiz and maxssiz.

If there is 4Gb of total memory, the cumulative size of data, stack and text is 1984Mb. This represents quadrants 1 and 2 minus the Uarea in quadrant 2. If there is less than 4Gb of total memory, the quadrant size is 1/4th of total memory. For 64 bit systems, while the address space in each quadrant is 4Tb, the size of the memory map is equal to the total memory of the system, and a quadrant is 1/4 of this value. When sizing memory parameters for 64 bit it is important to keep this in mind; the quadrant boundary rules still apply.

It is important to remember that the Uarea receives its memory allocation in quadrant 2 first, then stack (maxssiz); the remainder of available space is available for data (maxdsiz). For HP-UX 11.X data can also occupy the free space in quadrant 1 that is not used by text (maxtsiz). A single process cannot cross a quadrant boundary.

The last configurable area of memory to check is shared memory. Any application running within the 32 bit architecture will have a limit of 1.75Gb total for shared memory with EXEC_MAGIC, and 2.75Gb using SHMEM_MAGIC.

Note: This is only true when the total memory on the system equals at least 4Gb.

Individual processes cannot cross quadrant boundaries, so the largest shmmax can be for 32 bit is 1Gb.

Note: If a system is utilizing SHMEM_MAGIC, the additional 1Gb of shared object space comes from quadrant 2; this means that the text, data, stack and Uarea must all come from quadrant 1, so maxtsiz, maxdsiz, maxssiz plus the Uarea can total no more than 1Gb.

If these parameters are undersized the system will report errors:
maxdsiz will return "out of memory"
maxssiz will return "stack growth failure"
maxtsiz will return "/usr/lib/dld.sl: Call to mmap() failed - TEXT"
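Before resizing any of these parameters it is worth confirming what the running kernel was actually built with. A minimal sketch, assuming an 11.X system where the kmtune utility is available (the values for these tunables are reported in bytes):

    # current settings of the process-size and shared memory tunables
    kmtune | egrep 'maxtsiz|maxdsiz|maxssiz|shmmax'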
As of HP-UX 11, the kernel stack (maxssiz) receives its memory allocation before data (maxdsiz) or text (maxtsiz).

For 64 bit systems, the quadrant size is determined by dividing the total memory by 4.

It is important to determine whether the application is running 32 bit or 64 bit when troubleshooting 64 bit systems. This can be done with the file command:

example:

    file /stand/vmunix
    /stand/vmunix:  ELF-64 executable object file - PA-RISC 2.0 (LP64)

PA-RISC versions under 2.0 are 32 bit.

For an overview of shared memory on 32 bit systems refer to the Application Note RCMEMKBAN00000027, Understanding Shared Memory on PA-RISC Systems.

The kernel parameter shmmax determines the maximum size of a shared memory region. Unless patched, SAM will not allow this to be configured greater than 1 quadrant, or 1Gb, even on 64 bit systems. If a larger shmmax value is needed for 64 bit systems it has to be set using a manual kernel build. The current patches to address this problem are:

    11.00  PHKL_24487
    11.11  PHKL_24032

Please refer to the patch database found at http://itrc.hp.com for the latest revisions of these.

On a 64 bit system, 32 bit applications will only address the 32 bit shared memory region, and 64 bit applications will only address the 64 bit regions.

To determine shared memory allocation use ipcs, the utility that reports the status of interprocess communication facilities. Run the following command:

    ipcs -mob

You will see output similar to this:

    IPC status from /dev/kmem as of Tue Apr 17 09:29:33 2001
    T      ID     KEY        MODE        OWNER     GROUP  NATTCH    SEGSZ
    Shared Memory:
    m       0 0x411c0359 --rw-rw-rw-      root      root       0      348
    m       1 0x4e0c0002 --rw-rw-rw-      root      root       1    61760
    m       2 0x412006c9 --rw-rw-rw-      root      root       1     8192
    m       3 0x301c3445 --rw-rw-rw-      root      root       3  1048576
    m    4004 0x0c6629c9 --rw-r-----      root      root       2  7235252
    m       5 0x06347849 --rw-rw-rw-      root      root       1    77384
    m     206 0x4918190d --rw-r--rw-      root      root       0    22908
    m    6607 0x431c52bc --rw-rw-rw-    daemon    daemon       1  5767168

The two fields of the most interest are NATTCH and SEGSZ.

NATTCH - The number of processes attached to the associated shared memory segment. Look for those that are 0; they indicate processes that have not released their shared memory segment. If there are multiple segments showing an NATTCH of zero, especially if they are owned by a database, this can be an indication that the segments are not being efficiently released, due to the program not calling detachreg. These segments can be removed using ipcrm -m shmid.

Note: Even though there is no process attached to the segment, the data structure is still intact. The shared memory segment and the data structure associated with it are destroyed by executing this command.

SEGSZ - The size of the associated shared memory segment in bytes. The total of SEGSZ for a 32 bit system cannot exceed 1879048192 bytes (1.75Gb) using EXEC_MAGIC, or 2952790016 bytes (2.75Gb) using SHMEM_MAGIC.
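To compare current usage against these limits, the SEGSZ column can be totalled and idle segments flagged. A minimal sketch (the awk filters simply key on the "m" type column; nothing here is specific to a particular database):

    # total shared memory currently allocated, in bytes and Mb
    ipcs -mob | awk '$1 == "m" {total += $NF}
                     END {printf "%d bytes (%d Mb) allocated\n", total, total/1048576}'

    # segments with no attached processes (candidates for investigation / ipcrm -m)
    ipcs -mob | awk '$1 == "m" && $(NF-1) == 0 {print "ID", $2, "size", $NF}'

Against the sample output above, the idle segments are IDs 0 and 206.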
If more than 1.75Gb of total shared object space (shared memory) is required in a 32 bit environment, memory windows can be implemented. This configuration allows discrete 1Gb windows to be opened, up to a limit of the total amount of memory on the system (up to 8192Gb).

For more information on memory windows refer to:
Memory Windows White Paper, Doc ID: HPUXWP19
Using Memory Windows with 11.0, Doc ID: KBAN00000306
These are available in the technical knowledge database at http://itrc.hp.com

CPU load

Once we have determined that the memory resources are adequate, we need to address the processors. We need to determine how many processors there are, what speed they run at and what load they are under during a variety of system loads.

To find out the processor speed, run:

    echo itick_per_usec/D | adb -k /stand/vmunix /dev/mem
    itick_per_usec:
    itick_per_usec: 360

This will be the speed in MHz.

To find out how many processors are in use, run:

    echo runningprocs/D | adb -k /stand/vmunix /dev/mem
    runningprocs:
    runningprocs:   2

This can also be determined using sar -Mu.

To find out the cpu load on a multi-processor system, run:

    sar -Mu 5 100

This will produce 100 data points 5 seconds apart. The output will look similar to:

    11:20:05     cpu    %usr    %sys    %wio   %idle
    11:20:10       0       1       1       0      99
                   1      17      83       0       0
              system       9      42       0      49

After all samples are taken an average is printed. This returns data on the cpu load for each processor:

    cpu   - cpu number (only on a multi-processor system and used with the -M option)
    %usr  - user mode
    %sys  - system mode
    %wio  - idle with some process waiting for I/O (only block I/O, raw I/O,
            or VM pageins/swapins indicated)
    %idle - other idle

Typically the %usr value will be higher than %sys. If the system is making many read/write transactions this may not be true, as these are system calls. Out-of-memory errors can occur when excessive CPU time is given to system versus user processes; these can also be caused when maxdsiz is undersized. As a rule, we should expect to see %usr at 80% or less, and %sys at 50% or less. Values higher than these can indicate a CPU bottleneck.

The %wio should ideally be 0%; values less than 15% are acceptable. A low %idle over short periods of time is not a major concern; this is simply the percentage of time that the CPU is not running processes. However, low %idle over a sustained period could be an indication of a CPU bottleneck.

If the %wio is greater than 15% and %idle is low, consider the size of the run queue (runq-sz). Ideally we would like to see values less than 4. If the runq-sz is high and the %wio is 0, then there is no bottleneck; this is usually a case of many small processes running that do not overload the processors. If the system is a single processor system under heavy load, the CPU bottleneck may be unavoidable.

If the cpu load appears high but the system is not heavily loaded, check the value of the kernel parameter timeslice. By default it is 10; if a Tuned Parameter Set was applied to the kernel, it will change timeslice to 1. This will cause the cpu to context switch every 10mS instead of every 100mS. In most instances this has a negative effect on cpu efficiency.
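The processor facts used in this section can be collected in one pass. A minimal sketch for an 11.X system; the adb invocations are the ones shown above (the awk filter assumes the two-line output format shown), and kmtune is assumed to be available for the timeslice check:

    # processor count and clock speed, read from the running kernel
    echo runningprocs/D   | adb -k /stand/vmunix /dev/mem | awk 'NF==2 {print "processors:", $2}'
    echo itick_per_usec/D | adb -k /stand/vmunix /dev/mem | awk 'NF==2 {print "clock:", $2, "MHz"}'

    # timeslice should normally be 10 (one forced context switch per 100mS);
    # a Tuned Parameter Set may have lowered it to 1 (every 10mS)
    kmtune | grep timeslice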
To find out what the run queue load is, run:

    sar -q 5 100

example:

                runq-sz %runocc swpq-sz %swpocc
    10:06:36        0.0       0     0.0       0
    10:06:41        1.5      40     0.0       0
    10:06:46        3.0      20     0.0       0
    10:06:51        1.0      20     0.0       0
    Average         1.8      16     0.0       0

    runq-sz - Average length of the run queue(s) of processes (in memory and runnable)
    %runocc - The percentage of time the run queue(s) were occupied by processes
              (in memory and runnable)
    swpq-sz - Average length of the swap queue of runnable processes
              (processes swapped out but ready to run)

These cpu reports can be combined using sar -Muq.

Oversized system tables can negatively affect system performance

Three of the most critical kernel resources are nproc, ninode and nfile. These parameters govern the size of the process, HFS inode, and file tables. By default these are controlled by formulas based on the value of maxusers. Ideally we want to keep these settings within 25% of the peak observed usage. Using sar -v we can monitor the proc table and the file table; the inode table reporting reflects the cache, not inodes in use. The output of sar -v shows the usage/kernel value for each area.

example:

    08:05:08 text-sz  ov   proc-sz  ov    inod-sz  ov     file-sz  ov
    08:05:10   N/A     0  272/6420   0  3427/7668   0  5458/12139   0

What do these parameters control?

nfile

The number of open files for all processes running on the system. Though each entry is relatively small, there is some kernel overhead in managing this table. Additionally, each time a file is opened it will consume an entry in nfile, even if the file is already opened by another process. When nfile entries are exhausted, a console and/or syslog error message will appear specifically indicating "File table full". The value should usually be 10-25% greater than the maximum number observed during peak load.

The soft limit on open files per process is set by the kernel parameter maxfiles. This is bounded by the hard limit parameter maxfiles_lim, which by default is 2048.

ninode

The kernel parameter ninode only affects HFS file systems; JFS (VxFS) filesystems allocate their own inodes dynamically (vx_ninode) based on the amount of available kernel memory. The true inode count is only incremented by each unique HFS file open, i.e. the initial open of a file; each subsequent open of that file increments the file-sz column and decrements the available nfile value. This variable is frequently oversized, and can impose a heavy toll on the processor (especially on machines with multiple CPUs). It can also have a negative effect on the system memory map, in some cases causing fragmentation.

The HFS Inode Cache

The HFS inode cache contains information about the file type, size, timestamps, permissions and block map; this information is stored in the on-disk inode. The in-memory inode contains information on the on-disk inode, linked list and other pointers, the inode number and lock primitives. One inode entry must exist in memory for every open file. Closed file inodes are kept on the free list. The HFS inode table is controlled by the kernel parameter ninode.

Memory cost in bytes for the HFS inode cache, per inode/vnode/hash entry:

    10.20   11.0 32 bit   11.0 64 bit   11i 32 bit   11i 64 bit
    424     444           680           475          688
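Multiplying ninode by the per-entry cost gives a rough upper bound on the memory the HFS inode cache will consume. A minimal sketch, assuming an 11i 64 bit system (688 bytes per entry from the table above) and the ninode value of 7668 seen in the sar -v example; both numbers are illustrative:

    NINODE=7668            # current ninode value (kmtune | grep ninode)
    BYTES_PER_ENTRY=688    # 11i 64 bit, from the table above
    echo "HFS inode cache: approximately `expr $NINODE \* $BYTES_PER_ENTRY / 1048576` Mb"

For this example the cache costs roughly 5 Mb, so ninode is rarely a memory problem by itself; the processor overhead of an oversized table is the larger concern.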
On 10.20 the inode table and the dnlc (directory name lookup cache) are combined; the tunable parameter for the dnlc, ncsize, was introduced in patch PHKL_18335. On 11.00 the dnlc is configurable using the ncsize and vx_ncsize kernel parameters. By default ncsize = (ninode + vx_ncsize) + (8 * dnlc_hash_locks). The parameter vx_ncsize defines the memory space reserved for the VxFS directory path-name cache (in bytes). The default value for vx_ncsize is 1024, and dnlc_hash_locks defaults to 512. As of JFS 3.5 vx_ncsize became obsolete.

The JFS Inode Cache

A VxFS file system obtains the value of vx_ninode from the system configuration file used for building the kernel (/stand/system for example). This value is used to determine the number of entries in the VxFS inode table. By default vx_ninode initializes at zero; the file system then computes a value based on the system memory size (see the table below). To change the computed value of vx_ninode, you can hard code the value in SAM, for example vx_ninode=16000.

The number of inodes in the inode table is calculated according to the following table. The first column is the amount of system memory; the remaining columns are the number of inodes. If the available memory is a value between two entries, the value of vx_ninode is interpolated. The memory requirements for JFS depend on the revision of JFS and on system memory.

Maximum VxFS inodes in the cache based on system memory:

    System Memory (Mb)    JFS 3.1    JFS 3.3-3.5
    256                     18666          16000
    512                     37333          32000
    1024                    74666          64000
    2048                   149333         128000
    8192                   149333         256000
    32768                  149333         512000
    131072                 149333        1024000

To determine the number of VxFS inodes allocated (these are not reported by sar), run:

    echo vxfs_ninode/D | adb -k /stand/vmunix /dev/mem
    vxfs_ninode:
    vxfs_ninode:    64000

For JFS 3.5 use the vxfsstat command:

    vxfsstat -v / | grep maxino
    vxi_icache_maxino      128000   vxi_icache_peakino   128002

The JFS daemon (vxfsd) scans the free list; if inodes have been on the free list for a given length of time they are freed back to the kernel memory allocator. The amount of time this takes, and the amount freed, varies by revision:

    Maximum time in seconds before being freed:  JFS 3.1  300    JFS 3.3  500    JFS 3.5  1800
    Maximum inodes to free per second:           JFS 3.1  1/300th of current    JFS 3.3  50    JFS 3.5  1-25

Memory cost in bytes per JFS inode (inode/vnode/locks) by revision:

    JFS 3.1  11.0    32 bit  1220    64 bit  2244
    JFS 3.3  11.0    32 bit  1494    64 bit  1632
    JFS 3.3  11.11   32 bit  1352    64 bit  1902
    JFS 3.5  11.11                   64 bit  1850
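The same arithmetic used for the HFS cache applies here. A minimal sketch, assuming JFS 3.3 on 11.11 64 bit (1902 bytes per inode from the table above) and reading the allocated inode count with adb as shown earlier; the awk filter assumes the two-line output format shown, and the per-inode figure is only an estimate:

    VXNINODE=`echo vxfs_ninode/D | adb -k /stand/vmunix /dev/mem | awk 'NF==2 {print $2}'`
    BYTES_PER_INODE=1902    # JFS 3.3, 11.11, 64 bit, from the table above
    echo "JFS inode cache: approximately `expr $VXNINODE \* $BYTES_PER_INODE / 1048576` Mb"

With the 64000 inodes shown in the adb example this is roughly 116 Mb, which can then be compared against the 2% of memory guideline below.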
Tuning the maximum size of the JFS Inode Cache

Remember that each environment is different:
- There must be one inode entry for each file open at any given time.
- Most systems will run fine with 2% or less of memory used for the JFS inode cache.
- Large file servers, i.e. web servers and NFS servers which randomly access a large set of inodes, benefit from a large cache.
- The inode cache typically appears full after accessing many files sequentially, i.e. find, ll, backups.
- The HFS ninode parameter has no impact on the JFS inode cache.

While a static cache (setting a non-zero value for vx_ninode) may save memory, there are factors to keep in mind:
- Inodes freed to the kernel memory allocator may not be available for immediate use by other objects.
- Static inode caches keep inodes in the cache longer.

nproc

This pertains to the number of processes system-wide. This is another variable affected by indiscriminate setting of maxusers. It is most commonly referenced when a ps -ef is run or when Glance/GPM and similar commands are initiated. The value should usually be 10-25% greater than the maximum number of processes observed under load, to allow for unanticipated process growth.

The per-user limit on processes is set by the parameter maxuprc; this value can be no greater than nproc - 4. Typically maxuprc should be set no higher than 60% of nproc.

For a complete overview of 11.X kernel parameters refer to:
http://www.docs.hp.com/hpux/onlinedocs/939/KCParms/KCparams.OverviewAll.html

Disk I/O

Disk bottlenecks can be caused by a number of factors. The buffer cache usage, cpu load and high disk I/O load can all contribute to a bottleneck. After determining the cpu and buffer cache load, check the disk I/O load.

To determine disk I/O performance run:

    sar -d 5 100

The output will look similar to:

    device   %busy   avque   r+w/s    blks/s  avwait  avserv
    c1t6d0    0.80    0.50       1         4    0.27   13.07
    c4t0d0    0.60    0.50       1         4    0.26    8.60

There will be an average printed at the end of the report.

    %busy   Portion of time the device was busy servicing a request
    avque   Average number of requests outstanding for the device
    r+w/s   Number of data transfers per second (reads and writes) from and to the device
    blks/s  Number of bytes transferred (in 512-byte units) from and to the device
    avwait  Average time (in milliseconds) that transfer requests waited idly on queue
            for the device
    avserv  Average time (in milliseconds) to service each transfer request (includes
            seek, rotational latency, and data transfer times) for the device

When the average wait (avwait) is greater than the average service time (avserv), it indicates the disk cannot keep up with the load during that sample. When the average queue length (avque) exceeds the norm of 0.50, it is an indication of jobs stacking up. These conditions are considered to be a bottleneck. It is prudent to keep in mind how long these conditions last; if the queue flushes, or the avwait clears in a reasonable time (i.e. 5 seconds), it is not a cause for concern.

Keep in mind that the more jobs in a queue, the greater the effect on wait on I/O, even if they are small. Large jobs, those greater than 1000 blks/s, will also affect throughput. Also consider the type of disks being used: modern disk arrays are capable of handling very large amounts of data in very short processing times, processing loads of 5000 blks/s or greater in under 10mS, while older standard disks may show far less capability. The avwait is similar to the %wio returned by sar -u for the cpu.

If a bottleneck is identified, run:

    strings /etc/lvmtab

to identify the volume group associated with the disks, then:

    lvdisplay -v /dev/vgXX/lvolX

where X represents the volume group and lvol name. This will tell you which disks are associated with the logical volume. Then run bdf to see whether this volume group's file systems are full (greater than 85%), and cat /etc/fstab to determine the file system type associated with the lvol/mount point.
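Those steps can be chained together once sar -d has pointed at a suspect device. A minimal sketch, assuming the busy device is c1t6d0 as in the sample output (substitute the device and volume group reported on your system):

    DEV=c1t6d0                        # busy device reported by sar -d
    # the volume group name appears in lvmtab just before its member disks
    strings /etc/lvmtab
    # confirm which logical volumes sit on that disk (vgXX/lvolX as found above)
    lvdisplay -v /dev/vgXX/lvolX | grep $DEV
    # flag any file systems that are more than 85% full
    bdf | awk 'int($5) > 85'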
How to improve disk I/O?

1. Reduce the volume of data on the disk to less than 90%.
2. Stripe the data across disks to improve I/O speed.
3. If you are using Online JFS, run fsadm -e to defragment the extents.
4. If you are using HFS filesystems, implement asynchronous writes by setting the kernel parameter fs_async to 1, or consider converting to VxFS.
5. Reduce the size of the buffer cache (if %wcache is less than 90).
6. Consider changing the VxFS mount options to mincache=direct and nolog; these are available with Online JFS.
7. If you are using raw logical volumes, consider implementing asynchronous I/O.

The difference between asynchronous and synchronous I/O is that async does not wait for confirmation of the write before moving on to the next task. This increases disk performance at the expense of robustness. Synchronous I/O waits for acknowledgement of the write (or failure) before continuing; the write may have physically taken place or may still be in the buffer cache, but in either case acknowledgement has been sent. In the case of async, there is no waiting.

To implement asynchronous I/O on HP-UX for raw logical volumes:

* Set the async_disk driver (Asynchronous Disk Pseudo Driver) to IN in the HP-UX kernel; this requires generating a new kernel and rebooting.

* Create the device file:

    # mknod /dev/async c 101 0x00000#

  where # (the minor number) can be one of the following values:

    0x000000  default
    0x000001  enable immediate reporting
    0x000002  flush the CPU cache after reads
    0x000004  allow disks to timeout
    0x000005  a combination of 1 and 4
    0x000007  a combination of 1, 2 and 4

  Note: Contact your database vendor or product vendor to determine the correct minor number for your application.

* Change the ownership to the appropriate owner and group:

    chown oracle:dba /dev/async

* Change the permissions:

    chmod 660 /dev/async

* Edit /etc/privgroup and add one line:

    dba MLOCK

* Give the group MLOCK privileges:

    /usr/sbin/setprivgrp dba MLOCK

* To verify whether a group has the MLOCK privilege, execute:

    /usr/bin/getprivgrp

The default number of available ports for asynchronous disks is 50; this is tuned with the kernel parameter max_async_ports. If more than 50 disks are being used, this parameter needs to be increased.

PATCHES

There are a number of OS performance issues that are resolved by current patches. For the most up to date patches contact the Hewlett-Packard Response Center.


This article comes from the ChinaUnix blog. The original post is available at: http://blog.chinaunix.net/u/11224/showart_164728.html