How do I read and write a global variable with threadpool?

#1 | posted 2008-05-04 13:18
I'm building a thread pool with the threadpool library. I want to store the result that the callback receives in a global variable. How should I do that?

#2 | posted 2008-05-04 13:23
Post some code so we can take a look.

#3 | posted 2008-05-04 13:34
if __name__ == '__main__':
    import random
    import time
    from threadpool import *  # for makeRequests / ThreadPool used below

    # the work the threads will have to do (rather trivial in our example)
    def do_something(data):
        time.sleep(random.randint(1,5))
        result = round(random.random() * data, 5)
        print(result)
        # just to show off, we throw an exception once in a while
        #if result > 3:
            #raise RuntimeError("Something extraordinary happened!")
        return result

    # this will be called each time a result is available
    def print_result(request, result):
        print "**Result: %s from request #%s" % (result, request.requestID)

    # this will be called when an exception occurs within a thread
    def handle_exception(request, exc_info):
        print "Exception occured in request #%s: %s" % \
          (request.requestID, exc_info[1])

    # assemble the arguments for each job to a list...
    data = [random.randint(1,10) for i in range(20)]

    # ... and build a WorkRequest object for each item in data
    requests = makeRequests(do_something, data, print_result, handle_exception,)

    # or the other form of args_lists accepted by makeRequests: ((,), {})
    """
    data = [((random.randint(1,10),), {}) for i in range(20)]
    requests.extend(
      makeRequests(do_something, data, print_result, handle_exception)
    )
    """
    # we create a pool of 3 worker threads
    main = ThreadPool(3)

    # then we put the work requests in the queue...
    for req in requests:
        main.putRequest(req)
        #print "Work request #%s added." % req.requestID
        sfd=str(req.callable)
        if sfd.find('do_something') > 0:
            print "nihao"
    # or shorter:
    # [main.putRequest(req) for req in requests]

    # ...and wait for the results to arrive in the result queue
    # by using ThreadPool.wait(). This would block until results for
    # all work requests have arrived:
    # main.wait()

    # instead we can poll for results while doing something else:
    main.wait()
    """
    i = 0
    while 1:
        try:
            main.poll()
            print "Main thread working..."
            time.sleep(0.5)
            if i == 10:
                print "Adding 3 more worker threads..."
                main.createWorkers(3)
            i += 1
        except KeyboardInterrupt:
            print "Interrupted!"
            break
        except NoResultsPending:
            print "All results collected."
            break
    """

The above is the example that ships with threadpool. In print_result I want to use a global variable to collect the printed results into a list.
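
A minimal sketch of the pattern being asked for, using the same makeRequests / ThreadPool API as in the example above; the name collected_results is only illustrative. Since list.append mutates the existing object rather than rebinding the name, the callback does not even need a global statement:

import random
import time
from threadpool import *

collected_results = []            # module-level list shared by every callback

def do_something(data):
    time.sleep(random.randint(1, 5))
    return round(random.random() * data, 5)

def print_result(request, result):
    # append mutates the existing list, so no 'global' statement is needed;
    # rebinding the name (collected_results = ...) would require one
    collected_results.append(result)
    print "**Result: %s from request #%s" % (result, request.requestID)

requests = makeRequests(do_something, [random.randint(1, 10) for i in range(5)], print_result)
pool = ThreadPool(3)
for req in requests:
    pool.putRequest(req)
pool.wait()
print collected_results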

#4 | posted 2008-05-04 13:43
For a global variable, I think just adding a global declaration should do it.

#5 | posted 2008-05-05 10:24
I tried it, but it seems a global can't be used directly inside the thread.

#6 | posted 2008-05-05 10:35
Why can't it be used directly? What's the problem?

#7 | posted 2008-05-05 10:39
Below is my code; res is meant to be a global variable.
if __name__ == '__main__':
    import random
    import time

    # the work the threads will have to do (rather trivial in our example)
    def do_something(data):
        time.sleep(random.randint(1,5))
        result = round(random.random() * data, 5)
        print(result)
        # just to show off, we throw an exception once in a while
        #if result > 3:
            #raise RuntimeError("Something extraordinary happened!")
        return result

    # this will be called each time a result is available
    def print_result(request, result):
        res+=result
        print "**Result: %s from request #%s" % (result, request.requestID)
        
    # this will be called when an exception occurs within a thread
    def handle_exception(request, exc_info):
        print "Exception occured in request #%s: %s" % \
          (request.requestID, exc_info[1])

    # assemble the arguments for each job to a list...
    data = [random.randint(1,10) for i in range(2)]

    # ... and build a WorkRequest object for each item in data
    requests = makeRequests(do_something, data, print_result, handle_exception,)

    # or the other form of args_lists accepted by makeRequests: ((,), {})
    """
    data = [((random.randint(1,10),), {}) for i in range(20)]
    requests.extend(
      makeRequests(do_something, data, print_result, handle_exception)
    )
    """
    # we create a pool of 3 worker threads
    main = ThreadPool(3)
    global res
    res=''
    # then we put the work requests in the queue...
    for req in requests:
        main.putRequest(req)
        #print "Work request #%s added." % req.requestID
        sfd=str(req.callable)
        if sfd.find('do_something') > 0:
            print "nihao"
    # or shorter:
    # [main.putRequest(req) for req in requests]

    # ...and wait for the results to arrive in the result queue
    # by using ThreadPool.wait(). This would block until results for
    # all work requests have arrived:
    # main.wait()

    # instead we can poll for results while doing something else:
    main.wait()
    print(res)
    """
    i = 0
    while 1:
        try:
            main.poll()
            print "Main thread working..."
            time.sleep(0.5)
            if i == 10:
                print "Adding 3 more worker threads..."
                main.createWorkers(3)
            i += 1
        except KeyboardInterrupt:
            print "Interrupted!"
            break
        except NoResultsPending:
            print "All results collected."
            break
    """

Here is the error I get when I run it:

nihao
nihao
8.41607
Traceback (most recent call last):
  File "C:\Documents and Settings\我\桌面\新建文件夹\temp\threadpool.py", line 3
23, in ?
    main.wait()
  File "C:\Documents and Settings\我\桌面\新建文件夹\temp\threadpool.py", line 2
28, in wait
    self.poll(True)
  File "C:\Documents and Settings\我\桌面\新建文件夹\temp\threadpool.py", line 2
18, in poll
    request.callback(request, result)
  File "C:\Documents and Settings\我\桌面\新建文件夹\temp\threadpool.py", line 2
82, in print_result
    res+=result
UnboundLocalError: local variable 'res' referenced before assignment
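
For reference, the error has nothing to do with threads. An augmented assignment inside a function makes that name local to the function, so res += result tries to read a local res that has never been bound. A minimal reproduction and its fix, independent of threadpool:

res = 0

def add_broken(x):
    res += x          # assignment makes 'res' local here,
                      # so the read raises UnboundLocalError

def add_fixed(x):
    global res        # refer to the module-level name instead
    res += x

add_fixed(5)          # works; res is now 5
add_broken(5)         # UnboundLocalError: local variable 'res'
                      # referenced before assignment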

#8 | posted 2008-05-05 11:01
I don't think that's how you write it; the global declaration has to go inside the function that assigns to the variable:
import threading

share = 1

def threadcode():
    global share          # declare the module-level name before assigning to it
    share += 2
    print "childthread output", share

t = threading.Thread(target=threadcode, name='childthread')
t.setDaemon(1)
t.start()
t.join()

#9 | posted 2008-05-05 11:11
I'm using a third-party thread pool; its code is in the attachment.

threadpool.rar (3.88 KB, downloaded 29 times)

#10 | posted 2008-05-05 11:28
if __name__ == '__main__':
    import random
    import time
    from threadpool import *


    # the work the threads will have to do (rather trivial in our example)
    def do_something(data):
        time.sleep(random.randint(1,5))
        result = round(random.random() * data, 5)
        print(result)
        # just to show off, we throw an exception once in a while
        #if result > 3:
            #raise RuntimeError("Something extraordinary happened!")
        return result

    # this will be called each time a result is available
    def print_result(request, result):
        global res
        res+=result
        print "**Result: %s from request #%s" % (result, request.requestID)
        
    # this will be called when an exception occurs within a thread
    def handle_exception(request, exc_info):
        print "Exception occured in request #%s: %s" % \
          (request.requestID, exc_info[1])

    # assemble the arguments for each job to a list...
    data = [random.randint(1,10) for i in range(2)]

    # ... and build a WorkRequest object for each item in data
    requests = makeRequests(do_something, data, print_result, handle_exception,)

    # or the other form of args_lists accepted by makeRequests: ((,), {})
    """
    data = [((random.randint(1,10),), {}) for i in range(20)]
    requests.extend(
      makeRequests(do_something, data, print_result, handle_exception)
    )
    """
    # we create a pool of 3 worker threads
    main = ThreadPool(3)

    res=0
    # then we put the work requests in the queue...
    for req in requests:
        main.putRequest(req)
        #print "Work request #%s added." % req.requestID
        sfd=str(req.callable)
        if sfd.find('do_something') > 0:
            print "nihao"
    # or shorter:
    # [main.putRequest(req) for req in requests]

    # ...and wait for the results to arrive in the result queue
    # by using ThreadPool.wait(). This would block until results for
    # all work requests have arrived:
    # main.wait()

    # instead we can poll for results while doing something else:
    main.wait()
    print(res)
    """
    i = 0
    while 1:
        try:
            main.poll()
            print "Main thread working..."
            time.sleep(0.5)
            if i == 10:
                print "Adding 3 more worker threads..."
                main.createWorkers(3)
            i += 1
        except KeyboardInterrupt:
            print "Interrupted!"
            break
        except NoResultsPending:
            print "All results collected."
            break
    """