mysql> select distinct uid from stat_login_200907 order by rand() limit 10000;
Deduplicate the uids and randomly sample 10000 of them. DISTINCT combined with ORDER BY RAND() is very slow here, because ORDER BY RAND() has to assign a random value to every distinct row and then sort the whole set.
mysql> create table sjdel_login200907 select distinct(uid) as uid from stat_login_200907 ;
Create a new table holding just the de-duplicated uids.
mysql> select uid from sjdel_login200907 order by rand() limit 10000 into outfile '/tmp/user200907.txt';
Randomly sample the 10000 uids from the new, much smaller table. Splitting the job into two statements like this is much, much faster.
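Once the sample has been exported, the helper table is no longer needed; assuming nothing else reads it, it can be dropped:

mysql> drop table sjdel_login200907;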
[root@waptx126 chen]# cat user200907.txt | sort -u | wc -l
10000
No duplicates!
Data extraction script:
#!/bin/sh
# For each sampled uid, pull its row from alluser;
# sed 1d strips the column-header line that mysql prints for every query.
while read uid
do
#echo $uid
/usr/local/mysql/bin/mysql --defaults-file=/data/txdata/test/my.cnf txtest -e "select i_uid,i_money,s_regtime,s_lasttime from alluser where i_uid='$uid'"|sed 1d >>userresult200907.txt
done < user200907.txt
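The loop above forks one mysql client per uid, i.e. 10000 processes running 10000 point queries. A set-based sketch (assuming i_uid is an integer and the same FILE privilege used for INTO OUTFILE above) would load the sampled uids back into a scratch table and fetch everything with a single join; the name tmp_sample is made up for the example:

mysql> create table tmp_sample (i_uid int primary key);
mysql> load data infile '/tmp/user200907.txt' into table tmp_sample;
mysql> select a.i_uid, a.i_money, a.s_regtime, a.s_lasttime from alluser a join tmp_sample t on a.i_uid = t.i_uid into outfile '/tmp/userresult200907.txt';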
[root@waptx126 chen]# cat userresult200907.txt | awk '{print $2}' >123.txt
[root@waptx126 chen]# awk '{sum += $1};END {print sum}' 123.txt
15251108
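For the record, the intermediate 123.txt isn't needed; a single awk pass can sum column 2 of the result file directly:

[root@waptx126 chen]# awk '{sum += $2} END {print sum}' userresult200907.txt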
This post is from the ChinaUnix blog; for the original, see: http://blog.chinaunix.net/u3/101226/showart_2019511.html