Programs today generally run into two kinds of I/O: disk I/O and network I/O. Here I focus on the network I/O case and compare the efficiency of processes, threads, and coroutines under Python 3. Processes use a multiprocessing.Pool process pool, threads use a thread pool I wrapped myself, and coroutines use the gevent library. Python 3's built-in urllib.request is also compared against the open-source requests library. The code is as follows:

import urllib.request
import requests
import time
import multiprocessing
import threading
import queue

def startTimer():
    return time.time()

def ticT(startTime):
    useTime = time.time() - startTime
    return round(useTime, 3)

def download_urllib(url):
    req = urllib.request.Request(url, headers={'user-agent': 'Mozilla/5.0'})
    res = urllib.request.urlopen(req)
    data = res.read()
    try:
        data = data.decode('gbk')
    except UnicodeDecodeError:
        data = data.decode('utf8', 'ignore')
    return res.status, data

def download_requests(url):
    req = requests.get(url, headers={'user-agent': 'Mozilla/5.0'})
    return req.status_code, req.text

class threadPoolManager:
    def __init__(self, urls, workNum=10000, threadNum=20):
        self.workQueue = queue.Queue()
        self.threadPool = []
        self.__initWorkQueue(urls)
        self.__initThreadPool(threadNum)

    def __initWorkQueue(self, urls):
        # queue one (function, url) task per URL
        for i in urls:
            self.workQueue.put((download_requests, i))

    def __initThreadPool(self, threadNum):
        for i in range(threadNum):
            self.threadPool.append(work(self.workQueue))

    def waitAllComplete(self):
        for i in self.threadPool:
            if i.is_alive():
                i.join()

class work(threading.Thread):
    def __init__(self, workQueue):
        threading.Thread.__init__(self)
        self.workQueue = workQueue
        self.start()

    def run(self):
        # drain the task queue, then exit
        while True:
            if self.workQueue.qsize():
                do, args = self.workQueue.get(block=False)
                do(args)
                self.workQueue.task_done()
            else:
                break

urls = ['http://www.ustchacker.com'] * 10
urllibL = []
requestsL = []
multiPool = []
threadPool = []
N = 20
PoolNum = 100

for i in range(N):
    print('start %d try' % i)

    # serial downloads with urllib.request
    urllibT = startTimer()
    jobs = [download_urllib(url) for url in urls]
    urllibL.append(ticT(urllibT))
    print(1)

    # serial downloads with requests
    requestsT = startTimer()
    jobs = [download_requests(url) for url in urls]
    requestsL.append(ticT(requestsT))
    print(2)

    # multiprocessing process pool
    requestsT = startTimer()
    pool = multiprocessing.Pool(PoolNum)
    data = pool.map(download_requests, urls)
    pool.close()
    pool.join()
    multiPool.append(ticT(requestsT))
    print(3)

    # hand-rolled thread pool
    requestsT = startTimer()
    pool = threadPoolManager(urls, threadNum=PoolNum)
    pool.waitAllComplete()
    threadPool.append(ticT(requestsT))
    print(4)

import matplotlib.pyplot as plt
x = list(range(1, N + 1))
plt.plot(x, urllibL, label='urllib')
plt.plot(x, requestsL, label='requests')
plt.plot(x, multiPool, label='requests MultiPool')
plt.plot(x, threadPool, label='requests threadPool')
plt.xlabel('test number')
plt.ylabel('time(s)')
plt.legend()
plt.show()

The results are shown in the figure below.

As the figure shows, Python 3's built-in urllib.request is still less efficient than the open-source requests library. The multiprocessing process pool improves things noticeably, but it remains slower than the hand-rolled thread pool, partly because creating and scheduling a process costs more than creating a thread (the test program includes that creation cost in the timing).

Note that on Windows, any code that uses the multiprocessing module must be placed below an if __name__ == '__main__': guard in the current .py file, or the process-related code will not run correctly; Unix/Linux does not require this.
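As a minimal sketch of that guard (the fetch helper and the pool size of 4 are illustrative placeholders, not part of the benchmark above):

import multiprocessing
import requests

def fetch(url):
    # hypothetical stand-in for the download_requests function above
    return requests.get(url, headers={'user-agent': 'Mozilla/5.0'}).status_code

if __name__ == '__main__':
    # On Windows, multiprocessing starts children by re-importing this module,
    # so the pool must only be created under this guard.
    with multiprocessing.Pool(4) as pool:
        print(pool.map(fetch, ['http://www.ustchacker.com'] * 10))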
Below is the gevent test code:

import urllib.request
import requests
import time
import gevent.pool
import gevent.monkey

# patch the standard library so blocking calls yield to other greenlets
gevent.monkey.patch_all()

def startTimer():
    return time.time()

def ticT(startTime):
    useTime = time.time() - startTime
    return round(useTime, 3)

def download_urllib(url):
    req = urllib.request.Request(url, headers={'user-agent': 'Mozilla/5.0'})
    res = urllib.request.urlopen(req)
    data = res.read()
    try:
        data = data.decode('gbk')
    except UnicodeDecodeError:
        data = data.decode('utf8', 'ignore')
    return res.status, data

def download_requests(url):
    req = requests.get(url, headers={'user-agent': 'Mozilla/5.0'})
    return req.status_code, req.text

urls = ['http://www.ustchacker.com'] * 10
urllibL = []
requestsL = []
reqPool = []
reqSpawn = []
N = 20
PoolNum = 100

for i in range(N):
    print('start %d try' % i)

    # serial downloads with urllib.request
    urllibT = startTimer()
    jobs = [download_urllib(url) for url in urls]
    urllibL.append(ticT(urllibT))
    print(1)

    # serial downloads with requests
    requestsT = startTimer()
    jobs = [download_requests(url) for url in urls]
    requestsL.append(ticT(requestsT))
    print(2)

    # gevent coroutine pool
    requestsT = startTimer()
    pool = gevent.pool.Pool(PoolNum)
    data = pool.map(download_requests, urls)
    reqPool.append(ticT(requestsT))
    print(3)

    # gevent spawn/joinall
    requestsT = startTimer()
    jobs = [gevent.spawn(download_requests, url) for url in urls]
    gevent.joinall(jobs)
    reqSpawn.append(ticT(requestsT))
    print(4)

import matplotlib.pyplot as plt
x = list(range(1, N + 1))
plt.plot(x, urllibL, label='urllib')
plt.plot(x, requestsL, label='requests')
plt.plot(x, reqPool, label='requests geventPool')
plt.plot(x, reqSpawn, label='requests Spawn')
plt.xlabel('test number')
plt.ylabel('time(s)')
plt.legend()
plt.show()

The results are shown in the figure below.

As the figure shows, gevent gives a large performance boost for I/O-bound tasks. Because coroutines are far cheaper to create and schedule than threads, gevent's Spawn mode and Pool mode perform about the same.

gevent relies on monkey patching, which is what gives it its performance, but the patching interferes with multiprocessing. To use both in the same program you would have to patch selectively, e.g. gevent.monkey.patch_all(thread=False, socket=False, select=False) (see the sketch at the end of this post). That, however, keeps gevent from showing its full advantage, which is why multiprocessing Pool, threading Pool, and gevent Pool were not compared inside a single program. Comparing the two figures, though, the conclusion is that the thread pool and gevent perform best, with the process pool next. A side conclusion: the requests library performs a bit better than urllib.request :-)
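For reference, here is a minimal sketch of the selective monkey patching mentioned above, assuming you really do need gevent and multiprocessing in the same script (the fetch helper is an illustrative placeholder; with sockets left unpatched the gevent pool gains little, as noted):

import gevent.monkey
# leave thread/socket/select untouched so multiprocessing keeps working
gevent.monkey.patch_all(thread=False, socket=False, select=False)

import multiprocessing
import gevent.pool
import requests

def fetch(url):
    # hypothetical stand-in for the download_requests function above
    return requests.get(url, headers={'user-agent': 'Mozilla/5.0'}).status_code

if __name__ == '__main__':
    urls = ['http://www.ustchacker.com'] * 10
    # the process pool still works because the patch was selective
    with multiprocessing.Pool(4) as p:
        print(p.map(fetch, urls))
    # the gevent pool runs, but without patched sockets it cannot overlap the requests
    g = gevent.pool.Pool(10)
    print(g.map(fetch, urls))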