
Django performance testing, and an analysis of the process and thread models under FastCGI

Posted: 2008-11-15   Last modified: 2009-01-09
/**
 * Author: 张荣华
 * Date: 2008-11-15
 **/

Quite a few Django benchmarks have been published online, and their results all suggest that under FastCGI, Django is faster and more stable with the threaded model than with the process (prefork) model; see for example:
http://irobot.blog.hexun.com/20332312_d.html
http://taoyh163.blog.163.com/blog/static/19580356200802433559850/
But judging from how operating systems actually work, ahuaxuan believed the result should not come out that way: in theory, the process model ought to be faster. To back up this view, I ran the tests below.

Before getting to my test method, let me first, as usual, go over a few facts about Django's FastCGI mode.
Django's runfcgi command takes the following important parameters:
  protocol=PROTOCOL    fcgi, scgi, ajp, ... (default fcgi)
  host=HOSTNAME        hostname to listen on.
  port=PORTNUM         port to listen on.
  socket=FILE          UNIX socket to listen on.
  method=IMPL          prefork or threaded (default prefork)
  maxrequests=NUMBER   number of requests a child handles before it is 
                       killed and a new child is forked (0 = no limit).
  maxspare=NUMBER      max number of spare processes / threads
  minspare=NUMBER      min number of spare processes / threads.
  maxchildren=NUMBER   hard limit number of processes / threads
  daemonize=BOOL       whether to detach from terminal.
  pidfile=FILE         write the spawned process-id to this file.
  workdir=DIRECTORY    change to this directory when daemonizing.
  outlog=FILE          write stdout to this file.
  errlog=FILE          write stderr to this file.
  umask=UMASK          umask to use when daemonizing (default 022).


Java developers should find this familiar at a glance: many of these parameters have direct counterparts in Tomcat. The ones worth explaining are host, port, and socket. We know host and port come as a pair, so what is socket? In fact both forms are sockets: host+port gives you a TCP socket, while socket gives you a Unix domain socket. One is local to the operating system, the other goes over the network.
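For example, the same backend can be exposed either way (a sketch; the Unix socket path here is just an illustrative choice):

# TCP socket: the web server connects to 127.0.0.1:3033
python manage.py runfcgi method=prefork host=127.0.0.1 port=3033

# Unix domain socket: the web server connects through a local file instead
python manage.py runfcgi method=prefork socket=/tmp/django.sock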

My benchmarking tool is ApacheBench (ab for short), which ships in Apache's bin directory. The web server is lighttpd 1.4.
I set up four scenarios: the first exercises requests that hit the database; the second serves requests from cache, still using the threaded model; the third and fourth both use the FastCGI process (prefork) model.
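Every scenario below is driven by an ab command line of the following shape (this is the exact invocation shown later in this thread; -n is the total number of requests and -c the concurrency level):

./ab -c 100 -n 5000 http://localhost/mark.html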

Scenario one
URLs that hit the database: each request executes one simple SQL statement.
python manage.py runfcgi method=threaded host=127.0.0.1 port=3033 daemonize=false
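The post doesn't show the views being benchmarked, so here is a minimal sketch of what scenarios one and two could look like (the app, model and view names are hypothetical):

# views.py -- hypothetical sketch of the two kinds of view under test
from django.http import HttpResponse
from myapp.models import Item  # hypothetical model

def db_view(request):
    # Scenario one: each request runs one simple SQL statement
    return HttpResponse("items: %d" % Item.objects.count())

def cache_view(request):
    # Scenario two: no database access, just a quick check and return,
    # as if the data were already sitting in a cache
    if request.GET.get("q"):
        return HttpResponse("hit")
    return HttpResponse("hello")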

Requests  Concurrency  Total time
5000      50           22.86s
5000      25           23.37s
5000      10           23.37s
5000      100          21.58s


Scenario two
URLs that touch no data: the view runs a quick check and returns (you can think of the data as all sitting in a cache).
python manage.py runfcgi method=threaded host=127.0.0.1 port=3033
Requests  Concurrency  Total time
5000      50           7.734s
Concurrency Level:      50
Time taken for tests:   7.883 seconds
Complete requests:      5000
Failed requests:        0
Write errors:           0
Total transferred:      5505084 bytes
HTML transferred:       4685937 bytes
Requests per second:    634.28 [#/sec] (mean)
Time per request:       78.830 [ms] (mean)
Time per request:       1.577 [ms] (mean, across all concurrent requests)
Transfer rate:          681.98 [Kbytes/sec] received

5000      25           7.545s
Concurrency Level:      25
Time taken for tests:   7.859 seconds
Complete requests:      5000
Failed requests:        0
Write errors:           0
Total transferred:      5504770 bytes
HTML transferred:       4685000 bytes
Requests per second:    636.20 [#/sec] (mean)
Time per request:       39.296 [ms] (mean)
Time per request:       1.572 [ms] (mean, across all concurrent requests)
Transfer rate:          684.01 [Kbytes/sec] received

5000      10           7.481s
Concurrency Level:      10
Time taken for tests:   7.920 seconds
Complete requests:      5000
Failed requests:        0
Write errors:           0
Total transferred:      5503153 bytes
HTML transferred:       4685000 bytes
Requests per second:    631.28 [#/sec] (mean)
Time per request:       15.841 [ms] (mean)
Time per request:       1.584 [ms] (mean, across all concurrent requests)
Transfer rate:          678.52 [Kbytes/sec] received

5000      100          7.776s
Concurrency Level:      100
Time taken for tests:   7.776 seconds
Complete requests:      5000
Failed requests:        0
Write errors:           0
Total transferred:      5504370 bytes
HTML transferred:       4685937 bytes
Requests per second:    643.04 [#/sec] (mean)
Time per request:       155.511 [ms] (mean)
Time per request:       1.555 [ms] (mean, across all concurrent requests)
Transfer rate:          691.32 [Kbytes/sec] received


Comparing scenario one with scenario two, requests that involve database work clearly take far longer; serving data straight from the cache, FastCGI pushes through several hundred requests per second.

Scenario three
URLs that touch no data: the view runs a quick check and returns (you can think of the data as all sitting in a cache). Uses the process (prefork) model.
python manage.py runfcgi method=prefork host=127.0.0.1 port=3033
Requests  Concurrency  Total time
5000      50           22s
Concurrency Level:      50
Time taken for tests:   22.676 seconds
Complete requests:      5000
Failed requests:        15
   (Connect: 0, Receive: 0, Length: 15, Exceptions: 0)
Write errors:           0
Non-2xx responses:      15
Total transferred:      5519788 bytes
HTML transferred:       4676480 bytes
Requests per second:    220.50 [#/sec] (mean)
Time per request:       226.762 [ms] (mean)
Time per request:       4.535 [ms] (mean, across all concurrent requests)
Transfer rate:          237.71 [Kbytes/sec] received

5000      25           25s
Concurrency Level:      25
Time taken for tests:   25.330 seconds
Complete requests:      5000
Failed requests:        15
   (Connect: 0, Receive: 0, Length: 15, Exceptions: 0)
Write errors:           0
Non-2xx responses:      15
Total transferred:      5481652 bytes
HTML transferred:       4676480 bytes
Requests per second:    197.40 [#/sec] (mean)
Time per request:       126.649 [ms] (mean)
Time per request:       5.066 [ms] (mean, across all concurrent requests)
Transfer rate:          211.34 [Kbytes/sec] received

5000      10           15s
Concurrency Level:      10
Time taken for tests:   15.463 seconds
Complete requests:      5000
Failed requests:        9
   (Connect: 0, Receive: 0, Length: 9, Exceptions: 0)
Write errors:           0
Non-2xx responses:      9
Total transferred:      5536528 bytes
HTML transferred:       4679888 bytes
Requests per second:    323.35 [#/sec] (mean)
Time per request:       30.926 [ms] (mean)
Time per request:       3.093 [ms] (mean, across all concurrent requests)
Transfer rate:          349.66 [Kbytes/sec] received

5000      100          21s
Concurrency Level:      100
Time taken for tests:   21.225 seconds
Complete requests:      5000
Failed requests:        15
   (Connect: 0, Receive: 0, Length: 15, Exceptions: 0)
Write errors:           0
Non-2xx responses:      15
Total transferred:      5541355 bytes
HTML transferred:       4676480 bytes
Requests per second:    235.57 [#/sec] (mean)
Time per request:       424.498 [ms] (mean)
Time per request:       4.245 [ms] (mean, across all concurrent requests)
Transfer rate:          254.96 [Kbytes/sec] received

Comparing scenarios two and three, the threaded model does look faster than the process model with default settings. But knowing how operating systems behave, ahuaxuan suspected something was off: in raw speed, the process model should not lose to the threaded model. Some articles online do report the threaded model being faster, but I believe those tests were flawed. After studying Django's FastCGI parameters, and drawing on my Java experience, I figured the problem probably lay in process creation. So I adjusted the parameters and kept testing.


Scenario four
URLs that touch no data: the view runs a quick check and returns (you can think of the data as all sitting in a cache). The maximum and minimum numbers of spare processes are both set to 50.
python manage.py runfcgi method=prefork host=127.0.0.1 port=3033 daemonize=false minspare=50 maxspare=50
Requests  Concurrency  Total time
5000      100          8.16s
First run:
Concurrency Level:      100
Time taken for tests:   9.682 seconds
Complete requests:      5000
Failed requests:        0
Write errors:           0
Total transferred:      5557585 bytes
HTML transferred:       4685000 bytes
Requests per second:    516.42 [#/sec] (mean)
Time per request:       193.642 [ms] (mean)
Time per request:       1.936 [ms] (mean, across all concurrent requests)
Transfer rate:          560.55 [Kbytes/sec] received

Second run:
Concurrency Level:      100
Time taken for tests:   5.134 seconds
Complete requests:      5000
Failed requests:        0
Write errors:           0
Total transferred:      5560000 bytes
HTML transferred:       4685000 bytes
Requests per second:    973.84 [#/sec] (mean)
Time per request:       102.686 [ms] (mean)
Time per request:       1.027 [ms] (mean, across all concurrent requests)
Transfer rate:          1057.53 [Kbytes/sec] received


Analysis: two identical rounds of requests, so why does the speed differ by a factor of two? In ahuaxuan's analysis the culprit is process creation. On the second round the processes already exist, so it runs very fast, a bit less than twice as fast as the threaded model.
5000      25           8.90s
Concurrency Level:      25
Time taken for tests:   5.347 seconds
Complete requests:      5000
Failed requests:        0
Write errors:           0
Total transferred:      5559748 bytes
HTML transferred:       4685000 bytes
Requests per second:    935.07 [#/sec] (mean)
Time per request:       26.736 [ms] (mean)
Time per request:       1.069 [ms] (mean, across all concurrent requests)
Transfer rate:          1015.38 [Kbytes/sec] received

5000      10           8.78s
Concurrency Level:      10
Time taken for tests:   5.723 seconds
Complete requests:      5000
Failed requests:        0
Write errors:           0
Total transferred:      5562916 bytes
HTML transferred:       4687811 bytes
Requests per second:    873.64 [#/sec] (mean)
Time per request:       11.446 [ms] (mean)
Time per request:       1.145 [ms] (mean, across all concurrent requests)
Transfer rate:          949.22 [Kbytes/sec] received

5000      50           7.90s
Concurrency Level:      50
Time taken for tests:   5.239 seconds
Complete requests:      5000
Failed requests:        0
Write errors:           0
Total transferred:      5560923 bytes
HTML transferred:       4685937 bytes
Requests per second:    954.43 [#/sec] (mean)
Time per request:       52.387 [ms] (mean)
Time per request:       1.048 [ms] (mean, across all concurrent requests)
Transfer rate:          1036.63 [Kbytes/sec] received


Comparing scenarios three and four: in process mode with no maxspare and minspare set, processes are spawned dynamically whenever concurrency spikes, and efficiency drops sharply; 5000 requests took a full 20-odd seconds. Once maxspare and minspare are set, processes are created only for the very first requests; after that they stay alive, with no creation and no dynamic teardown (values that are too small make the FastCGI parent create and destroy children constantly, which burns a lot of CPU), and the application's throughput rises dramatically.

Comparing scenario two against scenario four, both the process model and the threaded model can serve on the order of a thousand requests per second, and at higher concurrency the process model is more efficient. So for a heavily trafficked site, the process model is the better choice, not the threaded model that the articles online recommend.

Later, for comparison, ahuaxuan also added maxspare=50 and minspare=50 to the threaded model, but performance was practically identical with or without them. Clearly these two parameters mainly matter for the process model, which further shows that creating a process really is expensive for the operating system.
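In other words, the threaded counterpart of scenario four's command would presumably be:

python manage.py runfcgi method=threaded host=127.0.0.1 port=3033 daemonize=false minspare=50 maxspare=50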

From these comparisons we can also conclude:
1. On Ubuntu, creating a thread costs far less than creating a process. (From observation, CPU hit 100% while processes were being spawned, whereas under the threaded model it hovered around 80%.)
2. Once the processes already exist, the process model handles requests better than the threaded model, by roughly a third.

Finally, my machine's specs:
CPU: T8100
RAM: 2 GB
Disk: 5400 rpm Seagate

I hope this article helps anyone who has doubts about Django's performance, or who believed the threaded model is faster under FastCGI.
Posted: 2008-11-15
protocol=scgi
Posted: 2008-11-15   Last modified: 2008-11-15
剑 事 wrote:
protocol=scgi

I haven't tried it, but I doubt it would be much faster, because the FastCGI process model is already fast enough. Twenty million requests a day is really no challenge; it mostly comes down to how the code is written. In any case, what Django already gives us supports 20 million a day.

Actually, the main reason for writing this article was to give the process model the fair hearing it deserves, and to keep some online articles from misleading people further.

5000 requests in only 5 seconds, and that's on my laptop.

Posted: 2008-11-15   Last modified: 2008-11-15
lighttpd+fastcgi+threaded   Requests per second: 591.93 [#/sec] (mean)


lighttpd+scgi+threaded    Requests per second: 806.19 [#/sec] (mean)
I use scgi, but I haven't tested it.
Posted: 2008-11-15   Last modified: 2008-11-15
剑 事 wrote:
lighttpd+fastcgi+threaded   Requests per second: 591.93 [#/sec] (mean)


lighttpd+scgi+threaded    Requests per second: 806.19 [#/sec] (mean)
I use scgi, but I haven't tested it.


The conclusion from your results would be:
under the threaded model, scgi is faster than fastcgi.

So I ran a test of my own too.
Test case:
./ab -c 100 -n 5000 http://localhost/mark.html

Django startup command: python manage.py runfcgi host=127.0.0.1 port=3033 method=prefork protocol=scgi daemonize=false

Result:
Concurrency Level:      100
Time taken for tests:   8.240 seconds
Complete requests:      5000
Failed requests:        0
Write errors:           0
Total transferred:      5512708 bytes
HTML transferred:       4685000 bytes
Requests per second:    606.80 [#/sec] (mean)
Time per request:       164.800 [ms] (mean)
Time per request:       1.648 [ms] (mean, across all concurrent requests)
Transfer rate:          653.34 [Kbytes/sec] received

Comparing this with my test scenario two, the threaded model under scgi and the threaded model under fastcgi come out about the same.

Your results differ from that by quite a lot; could you share your test method?

------------------------------------------

But under the process model, which is faster: scgi or fastcgi?
So I ran one more test:

Test case:
./ab -c 100 -n 5000 http://localhost/mark.html

Django startup command: python manage.py runfcgi host=127.0.0.1 port=3033 method=prefork protocol=scgi daemonize=false minspare=50 maxspare=50

Test results
First run:
Concurrency Level:      100
Time taken for tests:   9.617 seconds
Complete requests:      5000
Failed requests:        0
Write errors:           0
Total transferred:      5557396 bytes
HTML transferred:       4685000 bytes
Requests per second:    519.91 [#/sec] (mean)
Time per request:       192.342 [ms] (mean)
Time per request:       1.923 [ms] (mean, across all concurrent requests)
Transfer rate:          564.32 [Kbytes/sec] received


Second run:
Concurrency Level:      100
Time taken for tests:   5.330 seconds
Complete requests:      5000
Failed requests:        0
Write errors:           0
Total transferred:      5559979 bytes
HTML transferred:       4685000 bytes
Requests per second:    938.02 [#/sec] (mean)
Time per request:       106.607 [ms] (mean)
Time per request:       1.066 [ms] (mean, across all concurrent requests)
Transfer rate:          1018.63 [Kbytes/sec] received


Combining this with scenario four in my article, the conclusion is:
the process model under scgi and the process model under fastcgi are about equally fast.



Posted: 2008-11-15
SCGI and FCGI differ only slightly as wire protocols; any performance difference will fall within testing error.

I haven't studied Django deployment, but from the description above, its FCGI processes can be spawned dynamically. In my view dynamic spawning is unsuitable for genuinely high-load environments: an attacker only needs to fire tens of thousands of concurrent connections in an instant, and the overhead of creating that many processes in a short window will exhaust the CPU and the server will stop responding.

Ruby's mod_rails uses exactly this dynamic-spawn model, and I'm strongly against it. JavaEye uses static spawning: the application server starts a fixed number of FCGI processes, and however concurrent traffic fluctuates, it neither creates more processes nor destroys any, which keeps the system load smooth.
Posted: 2008-11-15
robbin wrote:

I haven't studied Django deployment, but from the description above, its FCGI processes can be spawned dynamically. In my view dynamic spawning is unsuitable for genuinely high-load environments: an attacker only needs to fire tens of thousands of concurrent connections in an instant, and the overhead of creating that many processes in a short window will exhaust the CPU and the server will stop responding.



Scenario three shows exactly that. Dynamic spawning kept my laptop's CPU at 100% for the entire test (that is, across several rounds of 5000 requests), and through a piece of middleware I wrote myself I could watch processes being created and destroyed non-stop, a real nightmare for the operating system.

In Django this problem can be solved as well: setting reasonable values for maxchildren, minspare and maxspare should take care of it.
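For example (the numbers are illustrative, not tuned for any particular machine):

python manage.py runfcgi method=prefork host=127.0.0.1 port=3033 maxchildren=50 minspare=50 maxspare=50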
Posted: 2008-11-15   Last modified: 2008-11-15
ahuaxuan wrote:
robbin wrote:

I haven't studied Django deployment, but from the description above, its FCGI processes can be spawned dynamically. In my view dynamic spawning is unsuitable for genuinely high-load environments: an attacker only needs to fire tens of thousands of concurrent connections in an instant, and the overhead of creating that many processes in a short window will exhaust the CPU and the server will stop responding.



Scenario three shows exactly that. Dynamic spawning kept my laptop's CPU at 100% for the entire test (that is, across several rounds of 5000 requests), and through a piece of middleware I wrote myself I could watch processes being created and destroyed non-stop, a real nightmare for the operating system.

In Django this problem can be solved as well: setting reasonable values for maxchildren, minspare and maxspare should take care of it.


For the JavaEye servers I wrote my own shell scripts to control the spawn and respawn of the FCGI processes; it's very stable, very robust, and easy to maintain.
Posted: 2008-11-15
robbin wrote:

For the JavaEye servers I wrote my own shell scripts to control the spawn and respawn of the FCGI processes; it's very stable, very robust, and easy to maintain.

I thought about it for a few minutes and still couldn't work out the concrete approach. How does a shell script control the behavior of a process on the system? Does the process expose some interface for this?

My knowledge is running short here. robbin, could you explain the idea for us?
Posted: 2008-11-15   Last modified: 2008-11-15
ahuaxuan wrote:
robbin wrote:

For the JavaEye servers I wrote my own shell scripts to control the spawn and respawn of the FCGI processes; it's very stable, very robust, and easy to maintain.

I thought about it for a few minutes and still couldn't work out the concrete approach. How does a shell script control the behavior of a process on the system? Does the process expose some interface for this?

My knowledge is running short here. robbin, could you explain the idea for us?


We spawn statically: lighttpd's spawn-fcgi command is all we need to spawn the processes, and we never spawn dynamically; of course the startup shell script does plenty of other work besides. A second shell script monitors the FCGI processes' memory usage and respawns any that exceed their quota, and there is also a health-check shell script that spawns a new process whenever it finds one has crashed.
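For readers wondering what such a watchdog could look like, here is a minimal sketch in Python of the memory-check idea robbin describes. Everything in it (the paths, the memory limit, the respawn command) is an assumption for illustration; JavaEye's real scripts are written in shell and are not shown here:

# fcgi_watchdog.py -- hypothetical sketch, not JavaEye's actual script.
# Periodically check an FCGI worker's resident memory via /proc (Linux)
# and respawn it when it exceeds a limit or has died.
import os
import time
import subprocess

PIDFILE = "/var/run/django-fcgi.pid"   # assumed pidfile= location
LIMIT_KB = 200 * 1024                  # assumed 200 MB memory ceiling
RESPAWN = ["python", "manage.py", "runfcgi", "method=prefork",
           "host=127.0.0.1", "port=3033", "pidfile=" + PIDFILE]

def rss_kb(pid):
    # Parse VmRSS (resident set size, in kB) out of /proc/<pid>/status.
    with open("/proc/%d/status" % pid) as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])
    return 0

while True:
    try:
        pid = int(open(PIDFILE).read().strip())
        if not os.path.exists("/proc/%d" % pid):
            subprocess.call(RESPAWN)   # worker crashed: spawn a new one
        elif rss_kb(pid) > LIMIT_KB:
            os.kill(pid, 15)           # SIGTERM the bloated worker
            subprocess.call(RESPAWN)   # and respawn it
    except (IOError, ValueError):
        subprocess.call(RESPAWN)       # no pidfile yet: first spawn
    time.sleep(30)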