Debates in the IT community can get heated. Vim vs Emacs, tabs vs spaces, PHP vs everything else — but there are calmer debates where habit doesn't win, numbers do. One such debate is choosing a web server for WordPress.
LEMP is the classic stack: Linux, Nginx ("Engine-X"), MySQL/MariaDB, and PHP. A time-tested combination used by millions of websites. OpenLiteSpeed is a relatively new web server that ships its own PHP build (LSPHP, PHP compiled with the LSAPI interface) and built-in caching. It is positioned as a fast alternative to Apache and Nginx.
Our order statistics show an interesting picture: 63% of clients choose the classic LEMP stack (Nginx + PHP-FPM), while 37% choose OpenLiteSpeed with pre-installed WordPress. The ratio is almost two to one in favor of the proven solution. Does this mean LEMP is objectively better? Or just more familiar? We decided not to guess, but to measure. Two identical servers, the same WordPress, real load tests. No synthetic benchmarks from marketing materials — only live numbers from real servers.
What We Tested
Server Configuration
Server 1: OpenLiteSpeed
- Ubuntu 22.04 LTS;
- OpenLiteSpeed 1.8.5 + LSPHP 8.5;
- MariaDB 11.4;
- Docker Compose (4 containers: web server, DB, Redis, phpMyAdmin).
Server 2: LEMP
- Ubuntu 22.04 LTS;
- Nginx 1.22.1 + PHP 8.3 FPM;
- MariaDB 10.6.16;
- Docker (monolithic container adhocore/lemp:8.3).
Both servers had identical limits: 1.847 GB RAM, identical hardware. We installed WordPress 6.7 with the Astra theme on each, creating 16 test posts and 7 pages. No optimization plugins — only basic caching: LSCache for OpenLiteSpeed, WP Super Cache for LEMP.
OpenLiteSpeed came with pre-installed WordPress, so we only needed to finish the setup via the web interface. LEMP is a clean stack without WordPress; installation was performed manually via WP-CLI. Manual installation required more effort (including debugging a localhost/127.0.0.1 issue), but it gave us a clean configuration without pre-installed optimizations.
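The WP-CLI steps looked roughly like this; a minimal sketch, assuming `wp` is already installed and the database exists (the path, DB name, credentials, and URL are placeholders, not the real test values):

```shell
# Sketch of a manual WordPress install via WP-CLI.
cd "${WP_PATH:-/var/www/html}" 2>/dev/null || true

if command -v wp >/dev/null 2>&1; then
  wp core download
  # Use 127.0.0.1 instead of "localhost": with "localhost" the PHP MySQL
  # client tries a Unix socket rather than TCP, which is the kind of
  # mismatch we had to debug.
  wp config create --dbname=wordpress --dbuser=wp --dbpass=secret \
    --dbhost=127.0.0.1
  wp core install --url="http://example.com" --title="Benchmark" \
    --admin_user=admin --admin_password=secret \
    --admin_email=admin@example.com
else
  echo "wp-cli not found: see wp-cli.org for installation"
fi
```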
Methodology
Tests were run from a third server (neutral ground) using Apache Bench. Before each test, the cache was warmed up with ten requests. We measured RPS (requests per second), latency, and CPU and RAM consumption.
Test scenarios:
- Light load: 10 concurrent users;
- Medium load: 50 concurrent users;
- High load: 100 concurrent users;
- Extreme: 200, 300, 500 concurrent users;
- Dynamic content: WordPress search (no cache).
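A single round of the methodology above can be sketched as follows (cache warm-up plus the medium-load scenario; the target URL is a placeholder and `ab` comes from the apache2-utils package):

```shell
# One measurement round: warm-up, then a medium-load ab run.
TARGET_URL="${TARGET_URL:-http://127.0.0.1/}"

# Warm the cache with ten sequential requests, as in the methodology.
for _ in 1 2 3 4 5 6 7 8 9 10; do
  curl -s -o /dev/null "$TARGET_URL" || true
done

# 2000 requests at 50 concurrent connections ("medium load").
# ab's summary reports RPS, mean latency, and a percentile table.
if command -v ab >/dev/null 2>&1; then
  ab -n 2000 -c 50 "$TARGET_URL" || true
else
  echo "would run: ab -n 2000 -c 50 $TARGET_URL"
fi
```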
Note that Apache Bench generates ideal, uniform load with identical requests from a single point, without delays. This is not a simulation of real users. Real traffic is heterogeneous: different pages, bots, crawlers, multiple resources on one page. Still, AB is useful for a baseline comparison: it shows the maximum theoretical performance of the stack with no confounding variables. If one stack is 9 times faster than another in synthetic tests, the difference on real traffic will be smaller, but the direction will remain the same.
16 posts and 7 pages is minimalistic content. On a site with a thousand posts, the cache will be larger, and additional factors will appear (cache size on disk, cache updates, memory consumption). However, the basic caching mechanics are already evident at a small volume. We are not testing WordPress as a platform, but the difference between two ways of serving the same content. Our goal: not to simulate production, but to isolate the variable. Which serves a static HTML page faster: Nginx with PHP-FPM or OpenLiteSpeed with LSAPI? Everything else is a constant.
We did not test cold start (all measurements after cache warm-up), long-term stability (tests lasted minutes, not weeks), SSL/TLS overhead, or performance on bare metal instead of Docker. Also, the servers used different versions of PHP (8.5 vs 8.3) and MariaDB (11.4 vs 10.6). This is due to the default configurations of the images. Apache Bench generates uniform load, meaning it is not bots, not crawlers, not real users with different geographies. 16 posts are not equivalent to 10,000 posts with a large disk cache. A simple WordPress search is not comparable to WooCommerce with product filters.
These tests answer one question: which serves cached HTML faster, LEMP or OpenLiteSpeed? Not "what is better for production" (that depends on context), but where the baseline performance difference lies. For a final decision, you need tests on your content, with your load, on your hardware.
Test Results
Static Content with Cache
Here, OpenLiteSpeed was expected to win, as LSCache is considered one of the best caching solutions, but the scale of the difference was surprising.
10 Concurrent Users
| | RPS | Latency (ms) |
|---|---|---|
| LEMP | 61.64 | 162 |
| OpenLiteSpeed | 559.26 | 17.88 |
A 9x difference! OpenLiteSpeed processes requests nine times faster under the same load.
50 Concurrent Users
| | RPS | Latency (ms) |
|---|---|---|
| LEMP | 60.56 | 825 |
| OpenLiteSpeed | 604.98 | 82.65 |
The difference is now 10x! As load increased, LEMP's throughput stayed flat while its latency grew from 162 to 825 milliseconds. OpenLiteSpeed showed a stable 600+ RPS.
100 Concurrent Users
| | RPS | Latency (ms) |
|---|---|---|
| LEMP | 63.97 | 1,563 |
| OpenLiteSpeed | 550.48 | 181.66 |
LEMP hit a ceiling around 60 RPS. When the load doubled, latency nearly doubled too (825 → 1,563 ms), but throughput stayed the same. OpenLiteSpeed held at 550 RPS, with latency still under 200 ms.
Latency Percentiles (50 Concurrent Users)
| Percentile | LEMP (ms) | OpenLiteSpeed (ms) |
|---|---|---|
| 50% | 798 | 80 |
| 90% | 895 | 103 |
| 99% | 1,117 | 126 |
99% of OpenLiteSpeed users get a response faster than half of LEMP users.
Extreme Loads
200 Concurrent Users:
| | RPS | Latency (ms) |
|---|---|---|
| LEMP | 61.13 | 3,262 |
| OpenLiteSpeed | 300.81 | 664 |
A 5x difference.
300 Concurrent Users:
| | RPS | Latency (ms) |
|---|---|---|
| LEMP | 54.23 | 5,531 |
| OpenLiteSpeed | 239.56 | 1,252 |
A 4.4x difference.
500 Concurrent Users:
| | RPS | Latency (ms) | Failed requests |
|---|---|---|---|
| LEMP | 57.22 | 8,738 | 0 |
| OpenLiteSpeed | — | — | crashed after 1,671 of 5,000 requests |
At 500 concurrent users, OpenLiteSpeed started dropping connections with `Connection reset by peer` errors. LEMP continued to work, albeit slowly, taking almost 9 seconds per request, but without failures.
Most sites are not threatened by 500 concurrent users. According to a HubSpot study, 46% of sites receive fewer than 15,000 visitors per month — with typical traffic distribution, this means peak loads of tens of concurrent connections, not hundreds. The extreme load test shows not just the performance limit, but the nature of degradation.
LEMP slows down but doesn't break: it becomes slower (9 seconds per request) but continues to work. Every user gets a response, albeit with a delay.
OpenLiteSpeed degrades catastrophically: it starts dropping connections. Some users do not get a response at all.
For a small blog, this doesn't matter. For a site that might hit the top of Hacker News or Reddit, it's critical. A viral article can bring an unpredictable traffic spike. LEMP will survive it slowly; OLS might crash.
This doesn't make OLS a bad choice, it just defines the boundaries of applicability. Up to 300 concurrent users, it is faster and more efficient. Above 300, you need LEMP or horizontal scaling (multiple OLS instances behind a load balancer).
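Horizontal scaling here means a reverse proxy spreading traffic across several OLS instances. A minimal sketch with nginx as the balancer; the backend addresses and ports are placeholders:

```nginx
# nginx as a load balancer in front of two OpenLiteSpeed instances.
upstream ols_backend {
    least_conn;              # send each request to the least-busy backend
    server 10.0.0.11:8088;
    server 10.0.0.12:8088;
}

server {
    listen 80;
    location / {
        proxy_pass http://ols_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

Note the irony: nginx ends up in front anyway, but only as a thin proxy, which it handles at far higher concurrency than a full PHP stack.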
The verdict is roughly this: OpenLiteSpeed is 4–10 times faster on static content, but has a limit of around 300–400 concurrent connections. LEMP is slow but stable.
Dynamic Content without Cache
WordPress search queries are not cached — every request goes through PHP and the database. Here, differences in dynamic content processing should appear.
First Run (Default OpenLiteSpeed Configuration)
| | RPS | Latency (ms) | 99th percentile (ms) |
|---|---|---|---|
| LEMP | 49.78 | 173 | 296 |
| OpenLiteSpeed | 6.84 | 144 | 59,530 |
LEMP is 7.3 times faster, but the main problem with OpenLiteSpeed is not median latency, but the 99th percentile: almost a minute for one percent of requests.
OpenLiteSpeed logs showed the problem:
    [NOTICE] No request delivery notification has been received from LSAPI application, possible dead lock.
    [NOTICE] ExtConn timed out while processing.
LSAPI deadlock. The default OpenLiteSpeed configuration uses only 10 PHP worker processes (PHP_LSAPI_CHILDREN=10). With 10 parallel search requests, all processes are occupied, and new requests queue up. initTimeout is set to 60 seconds — exactly how long the system waits before dropping the connection.
Changing the configuration:
- maxConns: 10 → 100
- PHP_LSAPI_CHILDREN: 10 → 50
- initTimeout: 60 → 30
- retryTimeout: 0 → 10
- max_execution_time: 0 → 30
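In OpenLiteSpeed these settings live in the external application (extprocessor) section of the server config, except max_execution_time, which is a PHP ini setting. A sketch of the adjusted section; the app name and socket path are typical defaults, not copied verbatim from our config:

```
extprocessor lsphp {
  type              lsapi
  address           uds://tmp/lshttpd/lsphp.sock
  maxConns          100
  env               PHP_LSAPI_CHILDREN=50
  initTimeout       30
  retryTimeout      10
}
```

maxConns and PHP_LSAPI_CHILDREN should match: the first is how many connections the server opens to the PHP backend, the second is how many worker processes PHP spawns to serve them.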
Results after fixes:
| | RPS | Latency (ms) | 99th percentile (ms) |
|---|---|---|---|
| LEMP | 54.41 | 161 | 242 |
| OpenLiteSpeed | 15.99 | 579 | 1,056 |
OpenLiteSpeed's 99th percentile improved 56 times (from 60 seconds to 1 second), but LEMP is still 3.4 times faster on dynamic content.
LEMP handles dynamic requests significantly better. OpenLiteSpeed requires mandatory LSAPI configuration to work correctly — out of the box, it is not ready for parallel dynamic requests.
Resource Consumption
Synthetic RPS tests show theoretical performance; resource consumption is the practical metric. How much does a VPS with 1 GB of RAM cost, and how much with 2 GB? If OLS fits in 1 GB while LEMP requires 2 GB, the savings are real: a 2 GB plan is 20–25% more expensive, and over the long term that is a noticeable difference in budget. This is not abstract "efficiency" but real money on the hosting bill.
Test: 50 concurrent users, 2000 requests, static content with cache.
CPU:
- LEMP: Peak 194.82% (~2 cores), Idle ~1%
- OpenLiteSpeed: Peak 110.81% (~1 core), Idle ~0.4%
OpenLiteSpeed uses 1.76 times less CPU processing the same load.
RAM:
- LEMP: Peak 465.4 MiB (24.6% of limit), Idle 419 MiB
- OpenLiteSpeed: Peak 42.68 MiB (2.25% of limit), Idle 35 MiB
OpenLiteSpeed uses 11 times less RAM. For a VPS with 1 GB RAM, this is a critical difference: LEMP eats almost half, OpenLiteSpeed — 40 megabytes.
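Figures like these can be sampled during a run with `docker stats`; a sketch that takes periodic snapshots (run it alongside the benchmark and keep the peak values):

```shell
# Sample per-container CPU/RAM once per second during a benchmark run.
if command -v docker >/dev/null 2>&1; then
  for _ in 1 2 3; do
    docker stats --no-stream --format \
      '{{.Name}}: CPU {{.CPUPerc}}, MEM {{.MemUsage}} ({{.MemPerc}})' || true
    sleep 1
  done
else
  echo "docker not found; stats sampling skipped"
fi
```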
Throughput:
- LEMP: 6,928 KB/sec
- OpenLiteSpeed: 39,227 KB/sec
A 5.7x difference. OpenLiteSpeed not only responds faster, it passes more data per second.
TTFB (Time To First Byte)
Measuring time to first byte on 10 sequential requests to the cached homepage.
- LEMP: Average 0.807 seconds (807 milliseconds)
- OpenLiteSpeed: Average 0.023 seconds (23 milliseconds)
OpenLiteSpeed delivers the first byte 35 times faster. For SEO and Core Web Vitals, this is an important metric, as Google considers TTFB in ranking.
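A TTFB measurement like this can be reproduced with curl's timing variables; a minimal sketch (the URL is a placeholder):

```shell
# Average time-to-first-byte over ten sequential requests.
URL="${TARGET_URL:-http://127.0.0.1/}"

# %{time_starttransfer} = seconds until the first response byte arrives.
for _ in 1 2 3 4 5 6 7 8 9 10; do
  curl -s -o /dev/null -w '%{time_starttransfer}\n' "$URL" || true
done | awk '{ s += $1; n += 1 } END { if (n) printf "avg TTFB: %.3f s\n", s/n }'
```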
Conclusions
For content-driven sites (blogs, news sites, small business/brochure sites), the choice is obvious — OpenLiteSpeed. A 9x increase on cached content outweighs the drop on uncached requests. TTFB of 23 milliseconds instead of 807 is not an abstract metric, but a real advantage for Google PageSpeed metrics. Server stack RAM consumption of 43 MB instead of 465 MB means the ability to host the site on a cheap plan. On a VPS for 238₽/month (~$2.50) instead of 294₽/month (~$3), the annual savings will be 672 rubles (~$7) — across several projects, this adds up to a noticeable sum.
The limitation is only in our configuration: the limit turned out to be around 300 concurrent users. At 500 users, OpenLiteSpeed starts dropping connections. For small and medium sites, this is not a problem. For a project that might hit the top of Reddit or Habr, it is already critical.
For dynamic applications, LEMP is safer. WooCommerce with product filters, personal accounts, API services — everywhere where requests are unique and cache doesn't help, LEMP handles such load 3–4 times faster than OpenLiteSpeed without cache. Under extreme loads, it slows down but doesn't break; although _tail latency_ reaches 9 seconds, connections are not dropped.
The second advantage of LEMP: it works out of the box. OpenLiteSpeed with default configuration hit an LSAPI deadlock on the first parallel requests (99th percentile — 60 seconds). After configuration (50 PHP workers instead of 10, correct timeouts), the problem went away, but this requires time and understanding. LEMP just works.
If 70%+ of content is cached — install OpenLiteSpeed, the resource savings and speed increase are worth it. If half the requests are unique or guaranteed stability under peak loads is needed — take LEMP.