By ablemike


2011-09-06 19:38:35 8 Comments

We have a server that is serving one html file.

Right now the server has 2 CPUs and 2 GB of RAM. From blitz.io, we are getting about 12k connections per minute and roughly 200 timeouts over that 60 seconds, with 250 concurrent connections per second.

worker_processes  2;

events {
 worker_connections 1024;
}

If I increase the timeout, the response time starts creeping up beyond a second.

What else can I do to squeeze more juice out of this?


@Bulat 2011-11-21 20:07:03

Config file:

worker_processes  4;  # 2 * Number of CPUs

events {
    worker_connections  19000;  # It's the key to high performance - have a lot of connections available
}

worker_rlimit_nofile    20000;  # Each connection needs a filehandle (or 2 if you are proxying)


# Total amount of users you can serve = worker_processes * worker_connections

more info: Optimizing nginx for high traffic loads

@Ethan 2012-05-24 08:15:11

I think the equation provided for the total number of users per second is wrong. Instead, the average number of users served per second should be worker_processes * worker_connections / (keepalive_timeout * 2). Therefore, the above conf file can serve ~7.6K connections per second, which is well above what @ablemike needs. However, worker_rlimit_nofile is a good directive to use if ulimit is restrictive and you don't want to modify it.
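As a quick sanity check on Ethan's ~7.6K figure, the numbers work out only with a keepalive_timeout of 5 seconds (an assumption implied by the figure; nginx's default is 75):

```python
# Ethan's estimate: users/sec = worker_processes * worker_connections / (keepalive_timeout * 2)
worker_processes = 4
worker_connections = 19000
keepalive_timeout = 5  # seconds (assumed; this is the value that reproduces ~7.6K)

users_per_second = worker_processes * worker_connections / (keepalive_timeout * 2)
print(users_per_second)  # 7600.0
```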

@Bulat 2012-06-14 16:03:36

@Ethan, why should it be divided by 2? If we get 100 new connections every second and the timeout is 5, then starting from the sixth second we will constantly have 5*100 connections that are still not terminated on the server side. We may have fewer if some users abort their connections themselves.

@Tilo 2013-03-21 05:12:01

that formula does not work if keepalive is set to 0s (disabled)

@Bulat 2013-05-12 15:15:54

thanks, Tilo. The better formula is: total number of users you can serve in 1 second = worker_processes * worker_connections / (keepalive_timeout + time_required_to_serve_one_request)
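A minimal sketch of Bulat's refined formula. The 5 s keepalive and 50 ms per-request service time below are assumed example values, not numbers from the thread:

```python
def users_per_second(workers, connections, keepalive_timeout, time_to_serve_one_request):
    """Bulat's refined estimate: each connection is occupied for the
    keepalive window plus the time actually spent serving the request."""
    return workers * connections / (keepalive_timeout + time_to_serve_one_request)

# With the config above and assumed timings (5 s keepalive, 50 ms/request):
print(users_per_second(4, 19000, 5, 0.05))  # roughly 15000 users/sec
```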

@Ethan 2013-05-19 20:52:30

Each connection needs 2 file handles, even for static files like images/JS/CSS: one for the client's connection and a second for opening the static file. Therefore, it's safer to set worker_rlimit_nofile = 2 * worker_connections.

@Ethan 2013-05-19 20:53:31

Use worker_rlimit_nofile, but one should also call 'ulimit -n' to set the open-file limit for the process. This is best done in the init script.
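On a systemd-managed host (an assumption; the thread predates widespread systemd, where an init-script ulimit call would be used instead), the per-process limit can be raised with a drop-in:

```ini
; /etc/systemd/system/nginx.service.d/limits.conf (hypothetical path)
[Service]
; Keep this >= worker_rlimit_nofile, i.e. at least 2 * worker_connections
; per Ethan's rule of thumb, plus some headroom.
LimitNOFILE=40000
```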

@Bulat 2013-06-23 18:19:54

Thank you, Ethan. Through our collective work, this topic is becoming an encyclopedia of high-throughput nginxing :)

@VBart 2013-08-20 16:58:23

You should remove keepalive_timeout from your formula. Nginx closes keepalive connections when the worker_connections limit is reached.

@blubberdiblub 2014-02-10 08:14:56

Rather than removing it, you should decide on a minimum keepalive timeout you want to grant everyone to connect and use that instead in the formula.

@Adam C. 2014-02-19 14:42:43

How do you determine worker_connections? We can't just keep increasing it to meet our needs, can we? It should be based on CPU, memory, etc., correct? Then how?

@Bulat 2014-03-17 14:57:57

Adam, every worker connection in the sleeping state needs about 256 bytes of memory, so you can increase it easily.
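The arithmetic behind that claim, taking Bulat's ~256 bytes per idle connection at face value (the figure is from the comment above, not independently verified here):

```python
# Rough memory cost of the connection table for the config in the answer.
worker_processes = 4
worker_connections = 19000
bytes_per_connection = 256  # Bulat's figure for a sleeping connection

total_bytes = worker_processes * worker_connections * bytes_per_connection
print(total_bytes / 2**20)  # about 18.6 MiB across all workers
```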
