Java Hot Topic: Questioning the Apache vs. Yaws Performance Comparison

Author: baocl | Source: 赛迪网技术社区 | November 1, 2007

Keywords: yaws, Apache, Java


Look carefully at how the test was run; the comparison is deeply unfair.

What do we measure and how?
We use a 16 node cluster running at SICS. We plot throughput vs. parallel load.

Machine 1 has a server (Apache or Yaws).
Machine 2 requests 20 KByte pages from machine 1. It does this in a tight loop, requesting a new page as soon as it has received a page from the server. From this we derive a throughput figure, which is plotted on the horizontal axis of the graph. A typical value (800) means the throughput is 800 KBytes/sec.
Machines 3 to 16 generate load.
Each machine starts a large number of parallel sessions.

Each session makes a very slow request to fetch a one byte file from machine 1. This is done by sending very slow HTTP GET requests (we break up the GET requests and send them character at a time, with about ten seconds between each character)
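For reference, one such slow load-generator session could be sketched roughly as below (my own illustration in C; the address, port, and file path are placeholders, not values from the original benchmark):

/* Sketch of one slow load-generator session: a single HTTP GET sent one
 * character at a time, ~10 seconds apart.  Address, port and path are
 * placeholders, not values from the original benchmark. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    const char *request = "GET /one_byte_file HTTP/1.0\r\n\r\n";
    struct sockaddr_in addr = {0};
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    addr.sin_family = AF_INET;
    addr.sin_port   = htons(8000);                    /* assumed port */
    inet_pton(AF_INET, "192.168.0.98", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("connect");
        return 1;
    }

    /* Dribble the request out one byte at a time.  A thread-per-connection
     * server keeps a worker tied up for the whole session; an event-driven
     * server just waits for the next readable event. */
    for (size_t i = 0; i < strlen(request); i++) {
        if (send(fd, request + i, 1, 0) != 1) {
            perror("send");
            return 1;
        }
        sleep(10);
    }

    char buf[256];
    while (recv(fd, buf, sizeof(buf), 0) > 0)
        ;                                             /* drain the reply */
    close(fd);
    return 0;
}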

Apache's connection-handling model is to spawn a thread or process per request. Under this test method, those 80,000 very slow requests force Apache to spawn an enormous number of threads. How many threads it can spawn is limited by the operating system, and that is only part of the problem: that many threads servicing active connections causes massive thread context switching. It is no surprise that Apache falls over. An Erlang process, by contrast, amounts to little more than a C data structure (erl_process), and how many you can create is limited only by your memory. A large number of slow connections is exactly the workload that a poll-style event dispatcher handles well, and epoll copes with it easily (I have tested epoll with 300,000 connections). This test measures a server's ability to spawn processes far more than it measures web server performance.
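To make the contrast concrete, here is a rough sketch of the poll-style dispatch I mean: one thread parks any number of slow connections in an epoll set and only touches a socket when bytes actually arrive (my own illustration, not Yaws internals; the port and buffer sizes are arbitrary):

/* Minimal single-threaded epoll dispatch loop.  Illustration only:
 * no error handling, no HTTP parsing. */
#include <unistd.h>
#include <fcntl.h>
#include <netinet/in.h>
#include <sys/types.h>
#include <sys/epoll.h>
#include <sys/socket.h>

#define MAX_EVENTS 256

int main(void)
{
    int listen_fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family      = AF_INET;
    addr.sin_port        = htons(8000);       /* assumed port */
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    bind(listen_fd, (struct sockaddr *)&addr, sizeof(addr));
    listen(listen_fd, 1024);

    int epfd = epoll_create(1024);
    struct epoll_event ev = { .events = EPOLLIN, .data.fd = listen_fd };
    epoll_ctl(epfd, EPOLL_CTL_ADD, listen_fd, &ev);

    struct epoll_event events[MAX_EVENTS];
    for (;;) {
        /* Idle or very slow connections cost nothing here: the thread
         * only wakes up for sockets that actually have bytes ready. */
        int n = epoll_wait(epfd, events, MAX_EVENTS, -1);
        for (int i = 0; i < n; i++) {
            int fd = events[i].data.fd;
            if (fd == listen_fd) {
                int conn = accept(listen_fd, NULL, NULL);
                fcntl(conn, F_SETFL, O_NONBLOCK);
                struct epoll_event cev = { .events = EPOLLIN, .data.fd = conn };
                epoll_ctl(epfd, EPOLL_CTL_ADD, conn, &cev);
            } else {
                char buf[4096];
                ssize_t r = recv(fd, buf, sizeof(buf), 0);
                if (r <= 0) {                  /* peer closed or error */
                    epoll_ctl(epfd, EPOLL_CTL_DEL, fd, NULL);
                    close(fd);
                }
                /* otherwise: hand the bytes to the HTTP parser (omitted) */
            }
        }
    }
}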

My own test went like this:
./yaws --conf yaws.conf --erlarg "+K true +P 1024000"     # epoll; up to 1024000 processes; kernel already tuned

Contents of yaws.conf:

auth_log = false
max_num_cached_files = 8000
max_num_cached_bytes = 6000000

 

Everyone tests it with: ab -c 1000 -n 1000000 http://192.168.0.98:8000/bomb.gif
Sure enough, Yaws's performance turns out to be quite unremarkable as well, only about 3K or so.

Just look at the strace output and you'll see why:

accept(10, {sa_family=AF_INET, sin_port=htons(5644), sin_addr=inet_addr("192.168.0.97")}, [16]) = 11
fcntl64(11, F_GETFL)                    = 0x2 (flags O_RDWR)
fcntl64(11, F_SETFL, O_RDWR|O_NONBLOCK) = 0
getsockopt(10, SOL_TCP, TCP_NODELAY, [0], [4]) = 0
getsockopt(10, SOL_SOCKET, SO_KEEPALIVE, [0], [4]) = 0
getsockopt(10, SOL_SOCKET, SO_PRIORITY, [0], [4]) = 0
getsockopt(10, SOL_IP, IP_TOS, [0], [4]) = 0
getsockopt(11, SOL_SOCKET, SO_PRIORITY, [0], [4]) = 0
getsockopt(11, SOL_IP, IP_TOS, [0], [4]) = 0
setsockopt(11, SOL_IP, IP_TOS, [0], 4)  = 0
setsockopt(11, SOL_SOCKET, SO_PRIORITY, [0], 4) = 0
getsockopt(11, SOL_SOCKET, SO_PRIORITY, [0], [4]) = 0
getsockopt(11, SOL_IP, IP_TOS, [0], [4]) = 0
setsockopt(11, SOL_SOCKET, SO_PRIORITY, [0], 4) = 0
getsockopt(11, SOL_SOCKET, SO_PRIORITY, [0], [4]) = 0
getsockopt(11, SOL_IP, IP_TOS, [0], [4]) = 0
setsockopt(11, SOL_SOCKET, SO_KEEPALIVE, [0], 4) = 0
setsockopt(11, SOL_IP, IP_TOS, [0], 4)  = 0
setsockopt(11, SOL_SOCKET, SO_PRIORITY, [0], 4) = 0
getsockopt(11, SOL_SOCKET, SO_PRIORITY, [0], [4]) = 0
getsockopt(11, SOL_IP, IP_TOS, [0], [4]) = 0
setsockopt(11, SOL_TCP, TCP_NODELAY, [0], 4) = 0
setsockopt(11, SOL_SOCKET, SO_PRIORITY, [0], 4) = 0
recv(11, "GET /bomb.gif HTTP/1.0\r\nUser-Age"..., 8192, 0) = 100
getpeername(11, {sa_family=AF_INET, sin_port=htons(5644), sin_addr=inet_addr("192.168.0.97")}, [16]) = 0
clock_gettime(CLOCK_MONOTONIC, {110242, 326908594}) = 0
stat64("/var/www/html/bomb.gif", {st_mode=S_IFREG|0644, st_size=4096, ...}) = 0
access("/var/www/html/bomb.gif", R_OK)  = 0
access("/var/www/html/bomb.gif", W_OK)  = 0
clock_gettime(CLOCK_MONOTONIC, {110242, 327135982}) = 0
time(NULL)                              = 1185894828
clock_gettime(CLOCK_MONOTONIC, {110242, 327222643}) = 0
stat64("/etc/localtime", {st_mode=S_IFREG|0644, st_size=405, ...}) = 0
writev(11, [{NULL, 0}, {"HTTP/1.1 200 OK\r\nConnection: clo"..., 231}, {"\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 4096}], 3) = 4327
close(11)

This trace is full of useless, expensive system calls: at least 20 of them, at roughly 10 µs apiece, about 200 µs wasted per request.
The file is access()'d twice, there is no file cache at all, and every request opens the file, reads it, and then writes it out to the socket.
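For comparison, a server that actually cached the file would answer with a single writev of header plus body and skip the per-request stat/access calls entirely. A rough sketch (my own illustration; the cached_body/cached_len names and the header text are made up, not Yaws code):

/* Sketch: answer a small-file request from an in-memory copy with a single
 * writev().  Names and header are illustrative, not Yaws internals. */
#include <stdio.h>
#include <sys/types.h>
#include <sys/uio.h>

static char   cached_body[8192];   /* file contents, loaded once at startup */
static size_t cached_len;

static ssize_t send_cached(int sock)
{
    char header[256];
    int hlen = snprintf(header, sizeof(header),
                        "HTTP/1.1 200 OK\r\n"
                        "Connection: close\r\n"
                        "Content-Length: %zu\r\n\r\n", cached_len);

    struct iovec iov[2] = {
        { .iov_base = header,      .iov_len = (size_t)hlen },
        { .iov_base = cached_body, .iov_len = cached_len   },
    };
    /* One system call per request instead of stat64 + access + access + writev. */
    return writev(sock, iov, 2);
}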

That was the small-file (4 KB) case. Now look at the large-file (40 KB) case:
open("/var/www/html/bomb.gif", O_RDONLY|O_LARGEFILE) = 19
read(19, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 10240) = 10240
writev(16, [{NULL, 0}, {"\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 10240}], 2) = 10240
read(19, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 10240) = 10240
writev(16, [{NULL, 0}, {"\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 10240}], 2) = 10240
read(19, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 10240) = 10240
writev(16, [{NULL, 0}, {"\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 10240}], 2) = 10240
read(19, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 10240) = 10240
writev(16, [{NULL, 0}, {"\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 10240}], 2) = 7240
read(19, "", 10240)                     = 0
close(19)                               = 0
clock_gettime(CLOCK_MONOTONIC, {110574, 856508319}) = 0
epoll_ctl(3, EPOLL_CTL_DEL, 11, {0, {u32=11, u64=581990243524149259}}) = 0
epoll_ctl(3, EPOLL_CTL_DEL, 12, {0, {u32=12, u64=581990243524149260}}) = 0
epoll_ctl(3, EPOLL_CTL_ADD, 16, {EPOLLOUT, {u32=16, u64=581990243524149264}}) = 0
epoll_wait(3, {}, 256, 0)               = 0
clock_gettime(CLOCK_MONOTONIC, {110574, 856677411}) = 0
clock_gettime(CLOCK_MONOTONIC, {110574, 856729274}) = 0


The sheer number of epoll_ctl and clock_gettime calls is enough to make the whole thing very slow.
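For the large-file case, the usual fix is sendfile(), which lets the kernel copy straight from the page cache to the socket and avoids the read/writev ping-pong shown above. A rough sketch (Linux-specific, assumes a blocking socket; my own illustration):

/* Sketch: stream a file to a socket with sendfile() instead of the
 * read()/writev() loop above.  Linux-specific; illustration only. */
#include <fcntl.h>
#include <unistd.h>
#include <sys/sendfile.h>
#include <sys/stat.h>

static int send_file(int sock, const char *path)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return -1;

    struct stat st;
    fstat(fd, &st);

    off_t offset = 0;
    while (offset < st.st_size) {
        /* The kernel copies pages straight from the page cache to the
         * socket: no 10 KB user-space buffer, far fewer system calls. */
        ssize_t sent = sendfile(sock, fd, &offset, st.st_size - offset);
        if (sent <= 0)
            break;
    }
    close(fd);
    return offset == st.st_size ? 0 : -1;
}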


Compare that with lighttpd. lighttpd uses a cache, uses AIO, and is carefully written entirely in C; for small files it handles roughly 10K concurrency. With the way Yaws does things, about 30% of that is what you can expect.
