Using Python multithreading to check whether URLs in a text file return 404
I have a text file with tens of thousands of URLs. I want to send a HEAD request to each one (as with curl) and check whether it returns 404, to tell whether the URL is still alive. A shell script was too slow; can this be sped up with multithreading using Python's threading module?

from multiprocessing.dummy import Pool as ThreadPool
import requests

siteList = []
with open("1.txt") as f:
    for line in f:
        siteList.append(line.strip())  # drop the trailing newline

pool = ThreadPool()  # instantiate the thread pool
results = pool.map(requests.get, siteList)
for r in results:
    print r.status_code  # a requests Response has status_code, not status()
Let me digest this... (reply to #2, huangxiaohen)
pool = ThreadPool(): I forgot to instantiate the pool. (reply to #3, myw58)
Got it. It runs successfully now, thanks! (reply to #4, huangxiaohen)
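For reference, the same idea can be sketched on Python 3 with only the standard library, using HEAD requests as the question originally asked (requests.get downloads the whole body, which is wasteful here). The filename urls.txt, the timeout, and the worker count below are illustrative assumptions, not from the thread:

```python
# Sketch: multithreaded 404 check via HEAD requests, Python 3 stdlib only.
# "urls.txt", timeout=10, and workers=20 are illustrative assumptions.
from concurrent.futures import ThreadPoolExecutor
from urllib.error import HTTPError, URLError
from urllib.request import Request, urlopen

def is_dead(status):
    # Only a 404 marks the URL as dead for this check.
    return status == 404

def head_status(url, timeout=10):
    # Send a HEAD request; urlopen raises HTTPError on 4xx/5xx,
    # so we catch it and keep the status code it carries.
    req = Request(url, method="HEAD")
    try:
        with urlopen(req, timeout=timeout) as resp:
            return resp.status
    except HTTPError as e:
        return e.code
    except URLError:
        return None  # unreachable: DNS failure, refused connection, etc.

def check_all(urls, workers=20):
    # Threads suit this I/O-bound task; map preserves input order.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(head_status, urls))

if __name__ == "__main__":
    with open("urls.txt") as f:
        urls = [line.strip() for line in f if line.strip()]
    for url, status in zip(urls, check_all(urls)):
        print(url, "DEAD" if status is not None and is_dead(status) else status)
```

ThreadPoolExecutor replaces multiprocessing.dummy.Pool here; both give a thread pool, but concurrent.futures is the more idiomatic modern choice for I/O-bound work like this.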