Why I Don’t Believe Ruby FCGI Can Beat LSAPI (Benchmark Ruby LSAPI vs FCGI)

August 31st, 2006. Filed under Benchmarks, LiteSpeed Web Server, Performance.

I simply couldn’t believe it when I saw a benchmark result claiming that FCGI beats LSAPI. We have spent a considerable amount of time optimizing our LSAPI protocol and implementation, and according to our benchmarks it is much faster than the FastCGI protocol. So how much faster can LSAPI be? Here is our benchmark with LSWS Enterprise.

This time we tested only Ruby FCGI and Ruby LSAPI by themselves, with simple CGI-style scripts; no Rails framework was involved.

Our test environment is the same as the one used in the previous post, except that we booted the test server into a non-SMP kernel, so only one CPU is used. Why? Because LSAPI is so fast that our simple test script cannot max out all the CPU power: about 20% of the CPU sat idle during the LSAPI test, while the FCGI test used almost all of it, leaving only 0.5–1% idle. With the non-SMP kernel, the single CPU was maxed out during both tests, so the results are more meaningful.
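The post does not say how idle CPU was measured during the runs; a minimal sketch of one way to do it on Linux is to sample /proc/stat twice and compare the idle counter (the fourth field of the “cpu” line):

```ruby
# Sample /proc/stat twice and compute the idle-CPU percentage over the
# interval (Linux only). Fields after "cpu" are user, nice, system,
# idle, iowait, ... measured in clock ticks.
def cpu_times
  File.read("/proc/stat")[/^cpu\s+(.+)$/, 1].split.map(&:to_i)
end

before = cpu_times
sleep 1
after  = cpu_times
delta = after.zip(before).map { |a, b| a - b }
idle_pct = 100.0 * delta[3] / delta.sum
puts format("idle CPU: %.1f%%", idle_pct)
```

Running this in a second terminal while ab hammers the server gives a rough picture of how much CPU headroom is left.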

Test scripts:

testlsapi.rb

#!/usr/local/bin/ruby

require 'lsapi'

# Serve "Hello, World!" in a LiteSpeed SAPI accept loop.
while LSAPI.accept != nil
  print "HTTP/1.0 200 OK\r\nContent-type: text/html\r\n\r\nHello, World!\n"
end

testfcgi.rb

#!/usr/local/bin/ruby

require 'fcgi'

# Serve "Hello, World!" for each incoming FastCGI request.
FCGI.each { |request|
  out = request.out
  out.print "Content-Type: text/html\r\n\r\nHello, World!\n"
  request.finish
}

Both FCGI and LSAPI have been configured to start 10 instances. Here are the results:

Ruby LSAPI:

$ ab -n 100000 -c 100 http://192.168.0.60:8080/testlsapi

Server Software: LiteSpeed
Server Hostname: 192.168.0.60
Server Port: 8080

Document Path: /testlsapi
Document Length: 14 bytes

Concurrency Level: 100
Time taken for tests: 13.600 seconds
Complete requests: 100000
Failed requests: 0
Broken pipe errors: 0
Total transferred: 15300459 bytes
HTML transferred: 1400042 bytes
Requests per second: 7352.94 [#/sec] (mean)
Time per request: 13.60 [ms] (mean)
Time per request: 0.14 [ms] (mean, across all concurrent requests)
Transfer rate: 1125.03 [Kbytes/sec] received

Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.2 0 14
Processing: 6 13 2.9 13 280
Waiting: 1 13 2.9 12 280
Total: 6 13 2.9 13 283

Percentage of the requests served within a certain time (ms)
50% 13
66% 13
75% 13
80% 14
90% 15
95% 17
98% 18
99% 19
100% 283 (last request)

FCGI:

$ ab -n 100000 -c 100 http://192.168.0.60:8080/testfcgi

Server Software: LiteSpeed
Server Hostname: 192.168.0.60
Server Port: 8080

Document Path: /testfcgi
Document Length: 14 bytes

Concurrency Level: 100
Time taken for tests: 20.069 seconds
Complete requests: 100000
Failed requests: 0
Broken pipe errors: 0
Total transferred: 15300153 bytes
HTML transferred: 1400014 bytes
Requests per second: 4982.81 [#/sec] (mean)
Time per request: 20.07 [ms] (mean)
Time per request: 0.20 [ms] (mean, across all concurrent requests)
Transfer rate: 762.38 [Kbytes/sec] received

Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.0 0 3
Processing: 16 19 2.7 18 195
Waiting: 16 19 2.7 18 195
Total: 16 19 2.8 18 195

Percentage of the requests served within a certain time (ms)
50% 18
66% 18
75% 19
80% 20
90% 23
95% 27
98% 27
99% 27
100% 195 (last request)

At roughly 7353 requests/second versus 4983, Ruby LSAPI is about 48% faster than Ruby FCGI in this simple “Hello, World” test.
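The speedup follows directly from the requests-per-second figures in the ab output above; a quick sanity check in Ruby:

```ruby
# Relative throughput of LSAPI over FCGI, from the ab results above.
lsapi_rps = 7352.94
fcgi_rps  = 4982.81
speedup_pct = (lsapi_rps / fcgi_rps - 1.0) * 100
puts format("LSAPI is %.1f%% faster than FCGI", speedup_pct)
# prints: LSAPI is 47.6% faster than FCGI
```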

How about other web servers’ FCGI engines? OK, let’s try the same test with nginx.

nginx configuration:


upstream fcgi {
server unix:/tmp/fcgi1.sock;
server unix:/tmp/fcgi2.sock;
server unix:/tmp/fcgi3.sock;
server unix:/tmp/fcgi4.sock;
server unix:/tmp/fcgi5.sock;
server unix:/tmp/fcgi6.sock;
server unix:/tmp/fcgi7.sock;
server unix:/tmp/fcgi8.sock;
server unix:/tmp/fcgi9.sock;
server unix:/tmp/fcgi0.sock;

}

location /testfcgi {
fastcgi_pass fcgi;
include conf/fastcgi_params;
}
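The post does not say how the ten FCGI workers behind these sockets were started; one common approach at the time was lighttpd’s spawn-fcgi, sketched below (the path to testfcgi.rb is a placeholder):

```shell
# Launch ten Ruby FCGI workers, one per Unix socket, matching the
# upstream block above. spawn-fcgi ships with lighttpd; the script
# path is hypothetical.
for i in 0 1 2 3 4 5 6 7 8 9; do
  spawn-fcgi -f /path/to/testfcgi.rb -s /tmp/fcgi$i.sock
done
```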

Result:

$ ab -n 100000 -c 100 http://192.168.0.60:80/testfcgi

Server Software: nginx/0.4.0
Server Hostname: 192.168.0.60
Server Port: 80

Document Path: /testfcgi
Document Length: 14 bytes

Concurrency Level: 100
Time taken for tests: 21.317 seconds
Complete requests: 100000
Failed requests: 0
Broken pipe errors: 0
Total transferred: 13500000 bytes
HTML transferred: 1400000 bytes
Requests per second: 4691.09 [#/sec] (mean)
Time per request: 21.32 [ms] (mean)
Time per request: 0.21 [ms] (mean, across all concurrent requests)
Transfer rate: 633.30 [Kbytes/sec] received

Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.1 0 6
Processing: 6 21 3.3 20 47
Waiting: 2 20 3.3 19 47
Total: 6 21 3.3 20 47

Percentage of the requests served within a certain time (ms)
50% 20
66% 20
75% 20
80% 20
90% 26
95% 31
98% 31
99% 31
100% 47 (last request)

With nginx, Ruby FCGI managed about 4691 requests/second, still well below LSAPI’s 7353, so LSAPI came out roughly 57% faster. Don’t just take my word for these results; try it yourself!


