How should I choose a MongoDB driver for Python Tornado?


Background

I ran Apache's ab tests against two virtual machines at the office and found that pymongo was the fastest, asyncmongo came second, and the motor library was last. 囧

Machine Configuration

- Server: Ubuntu 12.04, Intel(R) Core(TM)2 Duo CPU E7500 @ 2.93GHz, 500 MB RAM; the basic server tuning was all applied (e.g. the maximum socket connection limit was raised very high)
- Client: FreeBSD (the machine is at the office, details to be filled in later =.=)

Test Tool

ab (ApacheBench)

Test Case

I'm only including the asyncmongo test case here; the other two have a similar code structure but are tightly coupled to other business logic, so I won't post them. Rough description of the test case: the client sends a JSON-format POST request, and on the Tornado side the handler asks mongo for data by player_id. These are read-only requests, so there is no locking involved.
Some sensitive information has been removed.
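For reference, the POST body sent by ab (the contents of data-get-user-rank_point.xml) should have roughly this shape, judging from the handler code below; the player_id value here is made up:

import json

# Illustrative request body; only the event/data structure is implied by the handler
body = json.dumps({
    "event": "u_get_point",           # the handler branches on this field
    "data": {"player_id": "xxx"},     # used as _id in the ranking_list lookup
})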

#!/usr/bin/env python
# encoding: utf-8
import logging

import asyncmongo
import tornado.escape
import tornado.ioloop
import tornado.web
from tornado import web
from tornado.httpserver import HTTPServer
from tornado.ioloop import IOLoop


class RankHandler(tornado.web.RequestHandler):
    def __init__(self, application, request, **kwargs):
        super(RankHandler, self).__init__(application, request, **kwargs)
        self.set_header('Content-Type', 'application/json')
        # stand-in for the project logger that was stripped out with the sensitive bits
        self.log = logging.getLogger(__name__)

    @property
    def db(self):
        return self.application.db

    @tornado.web.asynchronous
    def post(self):
        r = {}
        # decode msg body
        try:
            d = tornado.escape.json_decode(self.request.body)
        except ValueError, e:
            self.log.error('decode track data error. e=%s' % e)
            r['status_code'] = 500
            r['status_txt'] = 'decode json error'
            self.write(tornado.escape.json_encode(r))
            self.finish()
            return

        event = d.get('event')
        if not event:
            self.log.error('track args missing arg event.')
            r['status_code'] = 500
            r['status_txt'] = 'missing_arg_event'
            self.write(tornado.escape.json_encode(r))
            self.finish()
            return

        event_data = d.get('data')
        if event_data and not isinstance(event_data, dict):
            self.log.error('track args bad arg data.')
            r['status_code'] = 500
            r['status_txt'] = 'bad_arg_data'
            self.write(tornado.escape.json_encode(r))
            self.finish()
            return

        # only u_get_point is exercised by this benchmark; the other events are stubs
        if event == "u_add":
            pass
        elif event == "u_group":
            pass
        elif event == "u_update":
            pass
        elif event == "u_get_point":
            self.db.ranking_list.find_one({"_id": event_data["player_id"]},
                                          callback=self._on_response)

    def _on_response(self, response, error):
        r = {}
        if error:
            raise tornado.web.HTTPError(500)
        result = {"data": {"_id": response['_id'],
                           "rank_point": response["rank_point"]}}
        r.update(result)
        if not r.get('status_code', None):
            r['status_code'] = 200
            r['status_txt'] = 'OK'
        self.write(tornado.escape.json_encode(r))
        self.finish()
        return


class Application(web.Application):
    def __init__(self):
        handlers = [
            (r"/api/xxx", RankHandler),
        ]
        settings = dict(
            debug=True,
            autoescape=None,
        )
        super(Application, self).__init__(handlers, **settings)
        self.db = asyncmongo.Client(pool_id='mydb', host='0.0.0.0', port=27017,
                                    maxcached=10, maxconnections=20000, dbname='xxx')


def main():
    http_server = HTTPServer(Application(), xheaders=True)
    http_server.bind(8880, '127.0.0.1')
    http_server.start()
    IOLoop.instance().start()


if __name__ == "__main__":
    main()
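For comparison, here is a minimal sketch of what the same read might look like with motor's coroutine interface (motor >= 0.2 on Tornado 3.x); the motor_db attribute and client setup are illustrative, not the code used in the benchmark:

import motor
import tornado.escape
import tornado.web
from tornado import gen


class MotorRankHandler(tornado.web.RequestHandler):
    @tornado.web.asynchronous
    @gen.coroutine
    def post(self):
        d = tornado.escape.json_decode(self.request.body)
        # e.g. self.application.motor_db = motor.MotorClient('127.0.0.1', 27017)['xxx']
        db = self.application.motor_db
        # Without a callback, find_one returns a Future; yielding it lets the
        # IOLoop serve other requests while the query is in flight.
        doc = yield db.ranking_list.find_one({"_id": d["data"]["player_id"]})
        self.write(tornado.escape.json_encode({
            "status_code": 200,
            "status_txt": "OK",
            "data": {"_id": doc["_id"], "rank_point": doc["rank_point"]},
        }))
        self.finish()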

Test Results

Synchronous mongo driver (pymongo)
➜  test git:(master) ✗ ab -n10000 -c3000 -p data-get-user-rank_point.xml -T'application/json' 'http://192.168.0.201:8880/api/xxx'
This is ApacheBench, Version 2.3 <$Revision: 1430300 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 192.168.0.201 (be patient)
Completed 1000 requests
Completed 2000 requests
Completed 3000 requests
Completed 4000 requests
Completed 5000 requests
Completed 6000 requests
Completed 7000 requests
Completed 8000 requests
Completed 9000 requests
Completed 10000 requests
Finished 10000 requests

Server Software:        TornadoServer/3.1.1
Server Hostname:        192.168.0.201
Server Port:            8880

Document Path:          /api/xxx
Document Length:        80 bytes

Concurrency Level:      3000
Time taken for tests:   23.551 seconds
Complete requests:      10000
Failed requests:        0
Write errors:           0
Total transferred:      2170000 bytes
Total body sent:        1990000
HTML transferred:       800000 bytes
Requests per second:    424.61 [#/sec] (mean)
Time per request:       7065.317 [ms] (mean)
Time per request:       2.355 [ms] (mean, across all concurrent requests)
Transfer rate:          89.98 [Kbytes/sec] received
                        82.52 kb/s sent
                        172.50 kb/s total

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        1 1806 2222.9   1061   10825
Processing:   265 1130 2042.6    539   20975
Waiting:      255 1040 2018.9    515   20972
Total:        282 2936 2824.0   2930   20976

Percentage of the requests served within a certain time (ms)
  50%   2930
  66%   3402
  75%   3526
  80%   3592
  90%   6670
  95%   6823
  98%   9961
  99%  15001
 100%  20976 (longest request)
Asynchronous mongo driver
(q2_rank)➜  test git:(master) ✗ ab -n10000 -c3000 -p data-get-user-rank_point.xml -T'application/json' 'http://192.168.0.201:8880/api/xxx'
This is ApacheBench, Version 2.3 <$Revision: 1430300 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 192.168.0.201 (be patient)
Completed 1000 requests
Completed 2000 requests
Completed 3000 requests
Completed 4000 requests
Completed 5000 requests
Completed 6000 requests
Completed 7000 requests
Completed 8000 requests
Completed 9000 requests
Completed 10000 requests
Finished 10000 requests

Server Software:        TornadoServer/3.1.1
Server Hostname:        192.168.0.201
Server Port:            8880

Document Path:          /api/xxx
Document Length:        80 bytes

Concurrency Level:      3000
Time taken for tests:   24.629 seconds
Complete requests:      10000
Failed requests:        0
Write errors:           0
Total transferred:      2170000 bytes
Total body sent:        1990000
HTML transferred:       800000 bytes
Requests per second:    406.02 [#/sec] (mean)
Time per request:       7388.749 [ms] (mean)
Time per request:       2.463 [ms] (mean, across all concurrent requests)
Transfer rate:          86.04 [Kbytes/sec] received
                        78.90 kb/s sent
                        164.95 kb/s total

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0  412 794.5     17    6205
Processing:  1024 6286 1793.2   7088   10475
Waiting:      836 6256 1843.9   7083   10468
Total:       1032 6698 1894.1   7199   14014

Percentage of the requests served within a certain time (ms)
  50%   7199
  66%   7700
  75%   7825
  80%   7875
  90%   8244
  95%   9161
  98%  10366
  99%  10763
 100%  14014 (longest request)

Analysis

In theory, an asynchronous mongo driver should be able to handle more concurrent connections, but in the actual tests the throughput under concurrency was about the same. Reasons I can think of:
- The test machines are too weak
- The test data set is too small; if the responses were larger, or the queries took longer, motor should in theory look better
- Looking at the mongod log, I found that the asynchronous driver opens a new connection to mongo for every request; could the constant connection setup and teardown be what eats the performance? pymongo, by contrast, uses a single connection from start to finish. (A quick way to check this is sketched below.)
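Besides tailing the mongod log, one way to sanity-check the connection-churn theory is to poll the server's connection counters while ab runs. A small sketch using pymongo's serverStatus command (the host and polling interval are illustrative):

# watch_connections.py -- hypothetical helper, not part of the original test
import time

import pymongo

client = pymongo.MongoClient('192.168.0.201', 27017)

while True:
    # serverStatus reports how many connections mongod currently holds open
    conns = client.admin.command('serverStatus')['connections']
    print('current=%(current)s available=%(available)s' % conns)
    time.sleep(1)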

Finally

Has anyone done performance tests along these lines? Could you share your results =.=? And can anyone recommend good ways of putting Tornado and MongoDB together? I've seen similar questions asked on Zhihu, for example.

http://www.cnblogs.com/restran/p/4937673.html
There is also a test here, although in that one motor performs better. I think that test could be simplified to compare only the database operations.
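A minimal sketch of such a stripped-down comparison, timing only the find_one calls (the host, database name, and query values are illustrative; this measures per-operation latency in a single process, not concurrency):

# db_only_benchmark.py -- illustrative sketch, not the benchmark used above
import time

import pymongo
import motor
from tornado import gen, ioloop

N = 1000  # number of find_one calls to time


def bench_pymongo():
    db = pymongo.MongoClient('192.168.0.201', 27017)['xxx']
    start = time.time()
    for i in range(N):
        db.ranking_list.find_one({'_id': i % 100})
    return time.time() - start


@gen.coroutine
def bench_motor():
    # motor >= 0.2: find_one returns a Future when no callback is given
    db = motor.MotorClient('192.168.0.201', 27017)['xxx']
    start = time.time()
    for i in range(N):
        yield db.ranking_list.find_one({'_id': i % 100})
    raise gen.Return(time.time() - start)


if __name__ == '__main__':
    print('pymongo: %.3fs' % bench_pymongo())
    print('motor:   %.3fs' % ioloop.IOLoop.current().run_sync(bench_motor))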

Personally I think the overhead is in establishing connections.
pymongo also has a connection pool; your test may simply have ended up using a single connection. Take a look at the max_pool_size parameter.
asyncmongo creates its connection pool up front; see asyncmongo / asyncmongo / pool.py (I haven't read it closely).
Note that asyncmongo raises an error once the maximum number of connections is exceeded, whereas motor, as I recall, blocks; I've fallen into that pit before.
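For reference, a minimal sketch of how the pool size is set in pymongo 2.x, where the parameter was still called max_pool_size (renamed maxPoolSize in pymongo 3.0); the host, database name, and numbers are only illustrative:

import pymongo

# One MongoClient per process; it keeps a pool of sockets internally.
# A single-threaded Tornado process typically checks out only one socket
# at a time, which would explain the single long-lived connection seen
# in the mongod log for the pymongo test.
client = pymongo.MongoClient('192.168.0.201', 27017, max_pool_size=100)
db = client['xxx']

print(db.ranking_list.find_one({'_id': 'some_player_id'}))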

If your MongoDB holds enough data, each request reads a different document, and the documents are a bit larger, the performance difference should show up.
Just my personal take; it would be worth running more tests.
