On a virtualization platform, inserting a single row 10,000 times takes roughly 10x as long as on a physical server — how can this be optimized?
Last edited by villainl on 2019-05-23 12:48.
Inserting a single row 10,000 times on the virtualization platform took 42.73909688 seconds.
Inserting a single row 10,000 times on the physical server took 5.423371077 seconds.
Batch-inserting 10,000 rows takes roughly the same time on the virtual platform as on the physical machine.
Backend monitoring shows that IO, CPU, and memory utilization on the virtualization platform are all quite low. Is there still room for optimization here?
VM configuration:
- OS: CentOS 6.5
- vCPUs: 2, cores per vCPU: 7
- Memory: 128 GiB
- Database size: 500 GB
- Storage: a 3 TB VG built from ten 300 GB virtual disks
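Given that batch inserts are equally fast on both platforms while CPU, IO, and memory sit idle, the gap is almost certainly per-commit latency: with autocommit on, every single-row INSERT forces an InnoDB redo-log flush, and each flush pays the virtual disk's higher sync latency — 10,000 times over. A minimal sketch of the effect (using SQLite only so it runs standalone; the MySQL analogue is wrapping the loop in one transaction, or tuning `innodb_flush_log_at_trx_commit`):

```python
import os
import sqlite3
import tempfile
import time

def timed_inserts(db_path, n, one_txn):
    """Insert n rows; either one commit total, or one commit per row."""
    conn = sqlite3.connect(db_path, isolation_level=None)  # autocommit mode
    cur = conn.cursor()
    cur.execute("CREATE TABLE IF NOT EXISTS t (k INTEGER)")
    start = time.time()
    if one_txn:
        cur.execute("BEGIN")          # one transaction for the whole loop
    for i in range(n):
        cur.execute("INSERT INTO t (k) VALUES (?)", (i,))
        # without BEGIN, each INSERT autocommits -> one durable flush per row
    if one_txn:
        cur.execute("COMMIT")         # a single flush for all n rows
    elapsed = time.time() - start
    conn.close()
    return elapsed

if __name__ == "__main__":
    path = os.path.join(tempfile.mkdtemp(), "demo.db")
    per_row = timed_inserts(path, 1000, one_txn=False)  # 1000 commits
    single = timed_inserts(path, 1000, one_txn=True)    # 1 commit
    print("per-row commits: %.3f s, single transaction: %.3f s"
          % (per_row, single))
```

The per-row-commit timing is dominated by storage sync latency, not throughput, which is consistent with the low IO/CPU utilization seen in monitoring.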
Table:
CREATE TABLE `test3` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`k` int(11) NOT NULL DEFAULT '0',
`c` char(120) NOT NULL DEFAULT '',
`pad` char(100) NOT NULL DEFAULT '',
PRIMARY KEY (`id`),
KEY `k_1` (`k`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
Insert statement:
import random
import time

def ordinary_insert(count):
    start = time.time()
    tmp_sql = ("INSERT INTO `test3` (`k`, `c`, `pad`) VALUES (0, "
               "'34838736059-24362714610-75033330387-17863378665-80928638402-"
               "33892306210-78377564998-17324442332-39178876426-77334528413', "
               "'AAA')")
    for i in range(count):
        temp_str = ('11946195857-63616115598-80208325001-42313633529-35180183845-'
                    + str(random.randint(1, count)) + '-' + str(i))
        sql = tmp_sql.replace('AAA', temp_str)
        cur.execute(sql)  # cur: DB-API cursor opened elsewhere; one statement per row
    seconds = time.time() - start
    msg = ("Single-statement loop, " + str(count) + " inserts took "
           + str(seconds) + " s")
    print(msg)
    with open("result.txt", "a+") as f:
        f.write(msg + "\n")
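For comparison, the batch path the poster says performs identically on both platforms has the usual DB-API shape of one executemany() plus a single commit. A hedged sketch (sqlite3 stands in for MySQL so the example is self-contained; with MySQLdb/PyMySQL the placeholder is %s instead of ?, and the row values here are simplified stand-ins, not the poster's strings):

```python
import random
import sqlite3
import time

def batch_insert(conn, count):
    """Insert `count` rows with one executemany() and a single commit."""
    cur = conn.cursor()
    # Simplified stand-in row values (the real c/pad strings don't matter here).
    rows = [(0, 'c-%d' % i, 'pad-%d' % random.randint(1, count))
            for i in range(count)]
    start = time.time()
    # With MySQLdb/PyMySQL the placeholder style is %s instead of ?.
    cur.executemany("INSERT INTO test3 (k, c, pad) VALUES (?, ?, ?)", rows)
    conn.commit()  # one commit -> one durable flush for all rows
    return time.time() - start

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE test3 (id INTEGER PRIMARY KEY AUTOINCREMENT, "
                 "k INT, c TEXT, pad TEXT)")
    print("batch insert of 10000 rows took %.3f s" % batch_insert(conn, 10000))
```

Because this path commits once, the virtual disk's sync latency is paid once rather than per row — which would explain why batch timings match across platforms while single-row timings do not.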