[Help] VVR occasionally drives CPU WAIT very high
Setup: NBU master server + VCS; two AIX hosts, running ERP and Oracle + VVR respectively.
The VVR service occasionally drives CPU iowait very high on the Oracle host, and this is now affecting the customer's ERP workload. I cannot find the cause in the output below; any help analyzing it would be appreciated.
One more data point: if I pause VVR, CPU wait drops immediately. The high CPU wait caused by VVR only happens occasionally; most of the time everything is normal.
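Since pausing VVR makes the wait drop, the correlation can be demonstrated deterministically with the standard vradmin pause/resume verbs while watching vmstat. A hedged sketch, assuming the disk group and RVG names from this thread (oradisk / oradata_rvg); the guard makes it a no-op on hosts without VVR installed:

```shell
# Confirm the VVR <-> iowait correlation by pausing and resuming replication.
# Guard: skip cleanly on hosts where VVR is not installed.
command -v vradmin >/dev/null 2>&1 || { echo "vradmin not installed"; exit 0; }

vradmin -g oradisk pauserep oradata_rvg    # secondary RLINK goes to PAUSE
vmstat 2 5                                 # watch the wa column while paused
vradmin -g oradisk resumerep oradata_rvg   # resume; the SRL backlog then drains
```

Note that while replication is paused, writes still land in the SRL, so keep the pause short enough that the SRL does not fill.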
# vxrlink -g oradisk -i5 status rlk_p550-dr_oradata_rvg
Wed Sep  5 10:25:59 GMT+08:00 2012
VxVM VVR vxrlink INFO V-5-1-4640 Rlink rlk_p550-dr_oradata_rvg has 1846 outstanding writes, occupying 20386 Kbytes (0%) on
the SRL
VxVM VVR vxrlink INFO V-5-1-4640 Rlink rlk_p550-dr_oradata_rvg has 1820 outstanding writes, occupying 20057 Kbytes (0%) on
the SRL
VxVM VVR vxrlink INFO V-5-1-4640 Rlink rlk_p550-dr_oradata_rvg has 1819 outstanding writes, occupying 19746 Kbytes (0%) on
the SRL
VxVM VVR vxrlink INFO V-5-1-4640 Rlink rlk_p550-dr_oradata_rvg has 1819 outstanding writes, occupying 19746 Kbytes (0%) on
the SRL
VxVM VVR vxrlink INFO V-5-1-4640 Rlink rlk_p550-dr_oradata_rvg has 1819 outstanding writes, occupying 19746 Kbytes (0%) on
the SRL
VxVM VVR vxrlink INFO V-5-1-4640 Rlink rlk_p550-dr_oradata_rvg has 1819 outstanding writes, occupying 19746 Kbytes (0%) on
the SRL
VxVM VVR vxrlink INFO V-5-1-4640 Rlink rlk_p550-dr_oradata_rvg has 1819 outstanding writes, occupying 19746 Kbytes (0%) on
the SRL
VxVM VVR vxrlink INFO V-5-1-4640 Rlink rlk_p550-dr_oradata_rvg has 1819 outstanding writes, occupying 19746 Kbytes (0%) on
the SRL
^C
# iostat 2 5
System configuration: lcpu=4 drives=7 paths=18 vdisks=0
tty: tin tout avg-cpu: % user % sys % idle % iowait
0.0 383.0 2.1 1.1 0.0 96.8
Disks: % tm_act Kbps tps Kb_read Kb_wrtn
hdisk0 1.0 16.0 4.0 12 20
hdisk1 1.0 12.0 3.0 4 20
hdisk4 0.0 0.0 0.0 0 0
hdisk2 0.0 0.0 0.0 0 0
hdisk3 0.0 0.0 0.0 0 0
hdisk5 0.0 0.0 0.0 0 0
cd0 0.0 0.0 0.0 0 0
tty: tin tout avg-cpu: % user % sys % idle % iowait
0.0 707.6 1.2 0.9 0.0 97.9
Disks: % tm_act Kbps tps Kb_read Kb_wrtn
hdisk0 0.0 0.0 0.0 0 0
hdisk1 0.0 0.0 0.0 0 0
hdisk4 0.0 0.0 0.0 0 0
hdisk2 0.0 0.0 0.0 0 0
hdisk3 0.0 0.0 0.0 0 0
hdisk5 0.0 0.0 0.0 0 0
cd0 0.0 0.0 0.0 0 0
tty: tin tout avg-cpu: % user % sys % idle % iowait
0.0 664.6 1.9 0.8 0.0 97.3
Disks: % tm_act Kbps tps Kb_read Kb_wrtn
hdisk0 0.0 0.0 0.0 0 0
hdisk1 0.0 0.0 0.0 0 0
hdisk4 0.0 0.0 0.0 0 0
hdisk2 0.5 3.9 0.5 8 0
hdisk3 0.0 0.0 0.0 0 0
hdisk5 0.0 0.0 0.0 0 0
cd0 0.0 0.0 0.0 0 0
tty: tin tout avg-cpu: % user % sys % idle % iowait
0.0 716.0 1.4 0.8 0.0 97.8
Disks: % tm_act Kbps tps Kb_read Kb_wrtn
hdisk0 0.0 0.0 0.0 0 0
hdisk1 0.0 0.0 0.0 0 0
hdisk4 0.0 0.0 0.0 0 0
hdisk2 0.0 0.0 0.0 0 0
hdisk3 0.0 0.0 0.0 0 0
hdisk5 0.0 0.0 0.0 0 0
cd0 0.0 0.0 0.0 0 0
tty: tin tout avg-cpu: % user % sys % idle % iowait
0.0 520.5 1.3 0.8 0.0 97.9
Disks: % tm_act Kbps tps Kb_read Kb_wrtn
hdisk0 0.0 0.0 0.0 0 0
hdisk1 0.0 0.0 0.0 0 0
hdisk4 0.0 0.0 0.0 0 0
hdisk2 0.0 0.0 0.0 0 0
hdisk3 0.0 0.0 0.0 0 0
hdisk5 0.0 0.0 0.0 0 0
cd0 0.0 0.0 0.0 0 0
# vmstat 2 5
System configuration: lcpu=4 mem=15680MB
 kthr     memory              page               faults         cpu
----- ------------- ------------------------ -------------- -----------
 r  b     avm   fre re pi po fr sr cy   in    sy  cs us sy id wa
 0  1 4693879 27320  0  0  0  0  0  0   78  1975 716  2  1  0 97
 0  1 4693880 27319  0  0  0  0  0  0   36  1771 659  1  1  0 98
 0  1 4693880 27319  0  0  0  0  0  0   39  1598 669  1  1  0 98
 0  1 4693880 27319  0  0  0  0  0  0   22  1559 644  1  1  0 98
 1  1 4693880 27319  0  0  0  0  0  0  162  1541 721  1  2  0 97
# vmstat 2 10
System configuration: lcpu=4 mem=15680MB
 kthr     memory              page               faults           cpu
----- ------------- ------------------------ --------------- -----------
 r  b     avm   fre re pi po fr sr cy    in    sy   cs us sy id wa
 1  1 4693903 27296  0  0  0  0  0  0    48 36559  718 16  3  0 81
 0  1 4693901 27298  0  0  0  0  0  0    20  1467  686  1  1  0 98
12  0 4689311 31756  0  0  0  0  0  0  1542 25191 3353 76 10  0 15
 4  0 4688458 32528  0  0  0  0  0  0  1280 14188 3196 53  5 41  1
 1  0 4688382 32592  0  0  0  0  0  0   275  4044 1089 10  2 88  0
 1  0 4688311 32606  0  0  0  0  0  0   624  6509 1757 26  3 71  0
 1  0 4688239 32620  0  0  0  0  0  0   334  3227 1200  4  2 93  1
 1  0 4688266 32586  0  0  0  0  0  0   465  6286 1424 12  2 86  0
 1  0 4688209 32634  0  0  0  0  0  0   188  2524  960  4  2 94  0
 1  0 4689350 31492  0  0  0  0  0  0    45 36082  636  9  3 89  0
# vmstat 2 10
System configuration: lcpu=4 mem=15680MB
 kthr     memory              page               faults         cpu
----- ------------- ------------------------ -------------- -----------
 r  b     avm   fre re pi po fr sr cy   in    sy  cs us sy id wa
 1  0 4693176 27607  0  0  0  0  0  0   31 38141 770 11  3 43 43
 1  0 4693178 27605  0  0  0  0  0  0   35 41532 910 11  6 42 42
 1  0 4693179 27604  0  0  0  0  0  0   33  1405 656  1  1 49 49
 1  0 4693180 27603  0  0  0  0  0  0   29 34866 728  3  2 47 48
 1  0 4693180 27603  0  0  0  0  0  0   42  1619 728  2  1 49 49
 1  0 4693180 27603  0  0  0  0  0  0   30  1801 674  1  1 49 49
 1  0 4693180 27603  0  0  0  0  0  0   20  1389 646  1  1 49 49
 0  0 4693179 27604  0  0  0  0  0  0   49  1886 746  2  1 49 49
 0  0 4693179 27604  0  0  0  0  0  0   28  1496 694  1  1 49 49
 1  0 4693179 27604  0  0  0  0  0  0   22  1431 689  1  1 49 49
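One way to quantify how bad the wait is across a run: average the wa (last) column of the vmstat samples. A small awk sketch; sample rows are embedded so it runs anywhere, but on the AIX host you would pipe real `vmstat 2 10` output (minus its headers) instead:

```shell
# Average the %wa (iowait) column from vmstat-style sample rows.
# The heredoc stands in for live `vmstat 2 10` output with headers stripped.
awk '{ wa += $NF; n++ } END { printf "avg wa = %.1f%%\n", wa / n }' <<'EOF'
1 0 4693176 27607 11 3 43 43
1 0 4693178 27605 11 6 42 42
1 0 4693179 27604  1 1 49 49
EOF
```

With the sample rows above this prints `avg wa = 44.7%`.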
# vxprint -VPl
Disk group: oradisk
Rlink: rlk_p550-dr_oradata_rvg
info: timeout=500 rid=0.1857
latency_high_mark=1000000 latency_low_mark=999500
bandwidth_limit=none
state: state=ACTIVE
synchronous=off latencyprot=override srlprot=dcm
assoc: rvg=oradata_rvg
remote_host=p550-dr IP_addr=192.168.100.10 port=4145
remote_dg=oradisk
remote_dg_dgid=1335900509.22.p550-dr
remote_rvg_version=30
remote_rlink=rlk_erpdb_oradata_rvg
remote_rlink_rid=0.1097
local_host=erpdb IP_addr=192.168.10.15 port=4145
protocol: TCP/IP
flags: write enabled attached consistent connected asynchronous
Rvg: oradata_rvg
info: rid=0.1121 version=2 rvg_version=30 last_tag=1
state: state=ACTIVE kernel=ENABLED
assoc: datavols=oradata
srl=srl
rlinks=rlk_p550-dr_oradata_rvg
exports=(none)
vsets=(none)
att: rlinks=rlk_p550-dr_oradata_rvg
flags: closed primary enabled attached
device: minor=7002 bdev=47/7002 cdev=47/7002 path=/dev/vx/dsk/oradisk/oradata_rvg
perms: user=root group=system mode=0600
# vradmin -g oradisk repstatus oradata_rvg
Replicated Data Set: oradata_rvg
Primary:
Host name: erpdb
RVG name: oradata_rvg
DG name: oradisk
RVG state: enabled for I/O
Data volumes: 1
VSets: 0
SRL name: srl
SRL size: 40.00 G
Total secondaries: 1
Secondary:
Host name: p550-dr
RVG name: oradata_rvg
DG name: oradisk
Data status: consistent, behind
Replication status: replicating (connected)
Current mode: asynchronous
Logging to: SRL (16150 Kbytes behind, 0% full)
Timestamp Information: behind by 0h 9m 11s
# vxprint -lp
Disk group: oradisk
Plex: oradata-01
info: len=83886080
type: layout=CONCAT
state: state=ACTIVE kernel=ENABLED io=read-write
assoc: vol=oradata sd=oradisk01-01
flags: busy complete
Site: n3400_1_oradata1
mediatype: hdd
Plex: oradata-02
info: len=83886080
type: layout=CONCAT
state: state=ACTIVE kernel=ENABLED io=read-write
assoc: vol=oradata sd=oradisk02-01
flags: busy complete
Site: n3400_2_oradata2
mediatype: hdd
Plex: oradata-03
info: len=0
type: layout=CONCAT
state: state=ACTIVE kernel=ENABLED io=read-write
assoc: vol=oradata sd=(none)
flags:
Site: n3400_2_oradata2
logging:logsd=oradisk02-05 (enabled)
mediatype: unknown
Plex: oradata-04
info: len=0
type: layout=CONCAT
state: state=ACTIVE kernel=ENABLED io=read-write
assoc: vol=oradata sd=(none)
flags:
Site: n3400_1_oradata1
logging:logsd=oradisk01-05 (enabled)
mediatype: unknown
Plex: oradata_dcl-01
info: len=6304
type: layout=CONCAT
state: state=ACTIVE kernel=ENABLED io=read-write
assoc: vol=oradata_dcl sd=oradisk01-02
flags: complete
Site: n3400_1_oradata1
mediatype: hdd
Plex: oradata_dcl-02
info: len=6304
type: layout=CONCAT
state: state=ACTIVE kernel=ENABLED io=read-write
assoc: vol=oradata_dcl sd=oradisk02-02
flags: complete
Site: n3400_2_oradata2
mediatype: hdd
Plex: srl-01
info: len=83886080
type: layout=CONCAT
state: state=ACTIVE kernel=ENABLED io=read-write
assoc: vol=srl sd=oradisk01-03
flags: complete
Site: n3400_1_oradata1
mediatype: hdd
Plex: srl-02
info: len=83886080
type: layout=CONCAT
state: state=ACTIVE kernel=ENABLED io=read-write
assoc: vol=srl sd=oradisk02-03
flags: complete
Site: n3400_2_oradata2
mediatype: hdd
Plex: srl_dcl-01
info: len=6304
type: layout=CONCAT
state: state=ACTIVE kernel=ENABLED io=read-write
assoc: vol=srl_dcl sd=oradisk01-04
flags: complete
Site: n3400_1_oradata1
mediatype: hdd
Plex: srl_dcl-02
info: len=6304
type: layout=CONCAT
state: state=ACTIVE kernel=ENABLED io=read-write
assoc: vol=srl_dcl sd=oradisk02-04
flags: complete
Site: n3400_2_oradata2
mediatype: hdd
You need to check which process specifically is consuming the CPU. Also, when the CPU wait spikes, has a large burst of writes just occurred?
What is your VVR version?

Reply to #2 无牙:
In the `iostat 2 5` output above there is essentially no read or write activity, yet the wait is already very high.
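To act on the "which process" question, a quick first pass is sorting processes by CPU share. A sketch using POSIX-style `ps -eo` with empty header strings (so `sort` sees only data rows); the exact column keywords supported may vary by AIX level, so treat the flags as something to verify:

```shell
# Top 5 CPU consumers; empty '=' headers suppress the heading line.
ps -eo pcpu=,pid=,comm= | sort -rn | head -5
```

Bear in mind that high wa with low us/sy usually points at kernel-level I/O stalls rather than a single busy user process, so on AIX `tprof` or `topas` may tell a fuller story.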
May I ask which command shows the VVR version?

Use lslpp to look at the VRTSvvr package information.

Reply to #4 无牙:
lslpp -d VRTSvvr reports that no such package exists.
lslpp -l | grep VRTSvvr returns nothing.
lslpp -l | grep VRT*
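The missing VRTSvvr fileset is expected: in Storage Foundation 5.1 on AIX, VVR ships inside the VRTSvxvm fileset and is enabled by the license key, so the VxVM fileset level is effectively the VVR level. A minimal sketch; sample lslpp output is embedded so it runs anywhere, while on the AIX host you would simply run `lslpp -l VRTSvxvm`:

```shell
# Extract the VxVM (and hence VVR) level from lslpp-style output.
# The sample line mirrors the listing in this thread.
sample='VRTSvxvm 5.1.101.0 APPLIED Veritas Volume Manager by'
echo "$sample" | awk '$1 == "VRTSvxvm" { print $2 }'
```

With the sample line above this prints `5.1.101.0`.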
VRTSamf        5.1.100.0    COMMITTED  Veritas AMF by Symantec
VRTSaslapm     5.1.100.200  COMMITTED  Array Support Libraries and
VRTSat.client  5.0.32.0     COMMITTED  Symantec Product
VRTSat.server  5.0.32.0     COMMITTED  Symantec Product
VRTScps        5.1.100.0    APPLIED    Veritas Co-ordination Point
VRTSdbed       5.1.100.0    APPLIED    Veritas Storage Foundation for
VRTSfssdk      5.1.100.0    COMMITTED  Veritas Libraries and Header
VRTSgab        5.1.100.0    APPLIED    Veritas Group Membership and
VRTSllt        5.1.100.0    APPLIED    Veritas Low Latency Transport
VRTSob         3.4.235.54   APPLIED    Veritas Enterprise
VRTSodm        5.1.101.0    APPLIED    Veritas Extension for Oracle
VRTSpbx        1.5.0.6      COMMITTED  Symantec Private Branch
VRTSperl       5.10.0.9     COMMITTED  Perl 5.10.0 for Veritas
VRTSsfmh       4.0.1598.0   COMMITTED  Veritas Operations Manager
VRTSspt        5.5.0.5      COMMITTED  Veritas Support Tools by
VRTSvcs        5.1.100.0    APPLIED    Veritas Cluster Server by
VRTSvcsag      5.1.100.0    APPLIED    Veritas Cluster Server Bundled
VRTSvcsea      5.1.100.0    APPLIED    Veritas High Availability
VRTSveki       5.1.100.0    APPLIED    Veritas Kernel Interface by
VRTSvlic       3.2.51.10    COMMITTED  Symantec License Utilities
VRTSvxfen      5.1.100.0    APPLIED    Veritas I/O Fencing by
VRTSvxfs       5.1.101.0    APPLIED    Veritas File System by
VRTSvxvm       5.1.101.0    APPLIED    Veritas Volume Manager by
VRTSamf        5.1.100.0    COMMITTED  Veritas AMF by Symantec
VRTSaslapm     5.1.100.200  COMMITTED  Array Support Libraries and
VRTSat.server  5.0.32.0     COMMITTED  Symantec Product
VRTScps        5.1.100.0    APPLIED    Veritas Co-ordination Point
VRTSdbed       5.1.100.0    APPLIED    Veritas Storage Foundation for
VRTSgab        5.1.100.0    APPLIED    Veritas Group Membership and
VRTSllt        5.1.100.0    APPLIED    Veritas Low Latency Transport
VRTSob         3.4.235.54   APPLIED    Veritas Enterprise
VRTSodm        5.1.101.0    APPLIED    Veritas Extension for Oracle
VRTSpbx        1.5.0.6      COMMITTED  Symantec Private Branch
VRTSperl       5.10.0.9     COMMITTED  Perl 5.10.0 for Veritas
VRTSsfmh       4.0.1598.0   COMMITTED  Veritas Operations Manager
VRTSvcs        5.1.100.0    APPLIED    Veritas Cluster Server by
VRTSvcsea      5.1.100.0    APPLIED    Veritas High Availability
VRTSveki       5.1.100.0    APPLIED    Veritas Kernel Interface by
VRTSvxfen      5.1.100.0    APPLIED    Veritas I/O Fencing by
VRTSvxfs       5.1.101.0    APPLIED    Veritas File System by
VRTSvxvm       5.1.101.0    APPLIED    Veritas Volume Manager by

If circumstances allow, I suggest installing the latest patches first. Older releases had a known problem of excessive iowait on the AIX platform.

Reply to #6 无牙:
OK, I'll try that first. Thanks a lot!