NGINX vs Apache – Which Is the Best Web Server in 2024?

NGINX and Apache are two of the most prominent open-source web servers in the world, collectively handling over 50% of web traffic. Choosing between the two isn’t a straightforward decision.

NGINX and Apache are engineered to handle diverse workloads and seamlessly integrate with a variety of software, forming the backbone of powerful web infrastructure.

Despite their similarities, NGINX and Apache have distinct features and functionalities. Understanding these nuances is essential in determining which server aligns best with your needs.

To simplify this decision-making process, we’ve crafted this comprehensive guide that delves into the critical aspects of both servers. Here you’ll find insights into their backgrounds and essential features. But before we dive deeper, let’s establish a foundational understanding of NGINX and Apache.

NGINX Outline

NGINX is a widely used web server and reverse proxy server that is renowned for its high performance and scalability. It was created by Igor Sysoev in 2004 to address the challenge of handling a large number of simultaneous connections, known as the C10k problem. NGINX’s asynchronous, event-driven architecture allows it to efficiently manage multiple connections with minimal resource consumption.

As a web server, NGINX is capable of serving static content, handling SSL/TLS termination, and supporting various protocols such as HTTP, HTTPS, WebSocket, and HTTP/2. It excels in efficiently delivering static files and can also act as a reverse proxy, distributing client requests to backend servers and returning the responses to the clients. This reverse proxy capability makes NGINX a valuable tool for load balancing, caching, and enhancing the performance and reliability of web applications.
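
To make this concrete, here is a minimal, hypothetical nginx.conf server block that serves static files and proxies one path to a backend. The domain, paths, and backend address are placeholders, not recommendations:

    server {
        listen 80;
        server_name example.com;               # placeholder domain

        root /var/www/html;                    # static files are served from here

        location /app/ {
            proxy_pass http://127.0.0.1:8080;  # hand these requests to a backend server
            proxy_set_header Host $host;       # preserve the original Host header
        }
    }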

NGINX boasts a flexible configuration system that can be customized to meet specific requirements. It supports URL rewriting and access control mechanisms, allowing administrators to modify incoming URLs and restrict access to resources. NGINX’s extensive ecosystem includes a vast library of third-party modules and extensions that provide additional functionality, enhancing its versatility.

NGINX is available as an open-source project, called NGINX Open Source, which can be freely used and modified. There is also a commercial version called NGINX Plus, which offers additional features, professional support, and enterprise-grade capabilities. Thanks to its robust architecture, rich feature set, and widespread adoption, NGINX has become a popular choice for developers and system administrators seeking a high-performance web server and reverse proxy solution.

Basic Design

  • Uses an event-driven process.
  • Capable of dealing with multiple requests in a single thread at the same time.

NGINX handles requests asynchronously using its event-driven architecture. Because it was built around a non-blocking, event-driven handling algorithm, it can accommodate potentially thousands of connection requests at the same time within a single processing thread. It also performs well when resources are limited: NGINX runs smoothly on low-powered systems as well as on systems that must handle heavy loads.
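
This design is reflected directly in NGINX’s core tuning directives: a small, fixed number of worker processes, each with a large connection budget. A rough sketch (the numbers are illustrative, not recommendations):

    # nginx.conf
    worker_processes auto;        # typically one single-threaded worker per CPU core

    events {
        worker_connections 4096;  # each worker can juggle thousands of connections
    }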

Apache Outline

Apache, also known as Apache HTTP Server, is a widely used open-source web server software. It was initially developed in 1995 and has since become one of the most popular web server platforms in the world. Apache is known for its stability, security, and flexibility.

Apache is capable of serving static and dynamic web pages over the internet. It supports various operating systems, including Unix-like systems, Linux, Windows, and macOS. Apache is highly extensible and can be customized through modules, enabling additional functionality such as URL rewriting, authentication, and server-side programming language support.

Apache uses a modular architecture, allowing administrators to enable or disable specific features as needed. It also includes robust security mechanisms, such as access controls and SSL/TLS encryption support, ensuring secure communication between clients and the server.
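
As a simple illustration, a minimal Apache virtual host might look like the sketch below. The domain, paths, and log names are placeholders, and the “combined” log format relies on the nickname defined in the stock configuration:

    # httpd.conf (or a file under conf.d/ or sites-available/, depending on the distribution)
    <VirtualHost *:80>
        ServerName example.com
        DocumentRoot /var/www/html
        ErrorLog logs/example-error.log
        CustomLog logs/example-access.log combined
    </VirtualHost>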

With its long history and large community support, Apache has established itself as a reliable and versatile web server solution, powering countless websites and applications worldwide.

Basic Design

  • Takes a process-driven approach.
  • Makes a fresh thread for each request.

Apache takes a process- and thread-based approach through its processing modules, offering three core request-handling algorithms. Why three? Each one is suited to a specific usage scenario.

The system can pick from multiple connection-handling algorithms thanks to its Multi-Processing Modules (MPMs). Additionally, different Apache 2 versions use different processing modules by default. The three core Apache MPMs are:

  • Process (Pre-fork) MPM
  • Worker MPM
  • Event MPM

Older Apache setups (2.2) typically use mpm_prefork or mpm_worker together with mod_php. Newer Apache (2.4) is usually configured to use mpm_event with PHP-FPM instead.

Apache 2.2 uses mpm_prefork (pre-fork mode) by default. It maintains a pool of child processes, each serving a single request at a time; when more connections arrive than there are idle children, it forks additional processes to handle them.

What is a thread? It is the smallest sequence of programmed instructions that can be managed by a scheduler, and it always lives inside a larger process. Still, Apache’s process-per-connection default means it can slow things down under load because it is comparatively resource-hungry.
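
For reference, the pre-fork behaviour described above is tuned with directives along these lines. This is an illustrative sketch only; the values are examples, and Apache 2.2 uses MaxClients where 2.4 uses MaxRequestWorkers:

    <IfModule mpm_prefork_module>
        StartServers            5      # child processes created at startup
        MinSpareServers         5
        MaxSpareServers        10
        MaxRequestWorkers     150      # upper limit on simultaneous connections
        MaxConnectionsPerChild  0      # 0 = never recycle a child process
    </IfModule>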

Apache vs NGINX – Handling Connections

One of the most significant contrasts between Nginx and Apache is their respective connection- and traffic-handling capabilities.

As NGINX was released after Apache, its developers had greater awareness of the concurrency problems facing sites at scale. They used this knowledge to build NGINX from scratch around a non-blocking, asynchronous, event-driven algorithm for handling connections. NGINX spawns worker processes, each capable of handling many connections, even thousands, courtesy of a fast event loop that continuously checks for and processes events. Because actual work is decoupled from connections, each worker only attends to a connection when a new event fires on it.

Every connection handled by a worker sits in the event loop alongside many others. Events in the loop are processed asynchronously, so work is handled in a non-blocking way, and when a connection closes it is removed from the loop. This style of connection processing lets NGINX scale extremely far on limited resources: since each worker is single-threaded and no new process is spawned per connection, CPU and memory usage remain fairly consistent even during periods of heavy load.

Apache offers a number of multi-processing modules. These are also known as MPMs, and are responsible for determining how to handle client requests. This enables administrators to switch its connection handling architecture simply, quickly, and conveniently.

So, what are these modules?

mpm_prefork

This Apache module creates processes with one thread to handle each request, and every child is able to accommodate one connection at one time. Provided the volume of requests remains less than that of processes, this module is capable of extremely fast performance.

But it can demonstrate a serious drop in quality when the number of requests passes the number of processes, which means this module isn’t always the right option.

Every process with this module has a major effect on RAM consumption, too, which makes it hard to scale effectively. However, it can still be a solid choice when used alongside components that weren’t built with threads in mind. For example, because many PHP setups aren’t thread-safe, this module can be the safest way to run mod_php (Apache’s module for processing PHP files).

mpm_worker

Apache’s mpm_worker module is designed to spawn processes capable of managing numerous threads each, with each of those handling one connection. Threads prove more efficient than processes, so this MPM offers stronger scaling than the module discussed above.

Because each process manages many threads, new connections can immediately take up a free thread rather than waiting for a whole process to become available.

mpm_event

Apache’s third module behaves like mpm_worker in most situations, but it has been optimised to handle keep-alive connections. With the worker module, a connection holds onto a thread for as long as it stays alive, whether or not requests are actively being made; mpm_event instead hands idle keep-alive connections to a dedicated listener thread, freeing worker threads to serve active requests.
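
As a rough sketch, event MPM capacity is expressed in threads rather than processes; the figures below are illustrative only:

    <IfModule mpm_event_module>
        StartServers             2
        ThreadsPerChild         25     # worker threads per child process
        MaxRequestWorkers      150     # total simultaneous request threads
        MaxConnectionsPerChild   0
    </IfModule>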

It’s clear that Apache’s connection handling architecture offers considerable flexibility when selecting various connections and request-handling algorithms. Options provided are primarily a result of the server’s continued advancement, as well as the growing demand for concurrency as the internet has changed so dramatically.

Apache vs NGINX – Handling Static and Dynamic Content

When pitting Nginx vs Apache, their ability to handle static and dynamic content requests is a common point of comparison. Let’s take a closer look.

NGINX cannot process dynamic content natively: it has to hand PHP and other dynamic requests off to an external processor, wait for the rendered content to come back, and then relay the results to the client.

Communication has to be set up between NGINX and a processor across a protocol which NGINX can accommodate (e.g. FastCGI, HTTP, etc.). This can make things a little more complicated than administrators may prefer, particularly when attempting to anticipate the volume of connections to be allowed — an extra connection will be necessary for every call to the relevant processor.

Still, there are some benefits to using this method. As the dynamic interpreter isn’t integrated within the worker process, the overhead applies to just dynamic content. On the other hand, static content may be served in a simpler process, during which the interpreter is only contacted when considered necessary.
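
In practice, the hand-off is usually a FastCGI pass to PHP-FPM, along the lines of the hedged sketch below. The socket path varies by distribution and PHP version:

    location ~ \.php$ {
        include fastcgi_params;                        # standard FastCGI variables
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/run/php/php-fpm.sock;       # example socket; could also be 127.0.0.1:9000
    }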

Apache servers’ traditional file-based methods mean they’re capable of handling static content, and their performance is primarily a function of those MPM methods covered earlier.

But Apache is designed to process dynamic content too, by integrating a processor of suitable languages into every worker instance. As a result, Apache can accommodate dynamic content in the server itself, with no need to depend on any external components. These can be activated courtesy of the dynamically-loadable modules.

Apache’s internal handling of dynamic content allows it to be configured more easily, and there’s no need to coordinate communication with other software. Modules may be swapped out if and when requirements for content shift.
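
A hedged sketch of what that embedding looks like with mod_php; the module name and .so path differ between PHP versions and distributions:

    LoadModule php_module modules/libphp.so      # embeds the PHP interpreter in every worker
    <FilesMatch "\.php$">
        SetHandler application/x-httpd-php       # route .php files to the embedded interpreter
    </FilesMatch>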

NGINX or Apache – How Does Directory-level Configuration Work?

Another of the most prominent differences administrators discuss when discussing Apache vs NGINX relates to directory-level configuration, and whether it’s allowed in their content directories. Let’s explore what this means, starting with Apache.

With Apache, additional configuration is permitted at the per-directory level: the server inspects hidden files inside content directories and interprets the directives they contain. These files are called .htaccess.

As .htaccess files are located inside content directories, Apache checks every directory on the path to a requested file and applies the directives it finds. Essentially, this allows the web server to be configured in a decentralized manner, typically used to implement URL rewrites, access restrictions, authentication and authorization, and caching policies.

Though the same configuration could be placed in Apache’s primary configuration file, .htaccess files offer some key advantages. First and foremost, changes take effect instantly without reloading the server, because the files are interpreted every time they are found on a request path.

Secondly, .htaccess files enable non-privileged users to take control of specific elements of their web content without granting them complete control over the full configuration file.

This creates a simple way for certain software, such as content management systems, to configure their environments without being granted access to the central configuration files. Shared hosting providers use it to retain control of the primary configuration while still giving clients control over their own directories.
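
A typical .htaccess file, sketched for illustration only (the rewrite target and password-file path are placeholders, and the central configuration must permit these directives via AllowOverride):

    RewriteEngine On
    RewriteRule ^blog/([0-9]+)$ article.php?id=$1 [L]   # pretty URL -> real script

    AuthType Basic
    AuthName "Members only"
    AuthUserFile /var/www/.htpasswd                      # example path, kept outside the web root
    Require valid-user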

With NGINX, interpreting .htaccess files is out of the question. It also lacks any mechanism for evaluating per-directory configuration outside the primary configuration file. As a result, it could be said to offer less flexibility than Apache, though this approach has a number of benefits too.

The main one is improved performance. In a standard Apache setup that allows .htaccess in any directory, the server checks for these files in every parent directory on the path to the requested file, on every request, and any .htaccess files it finds must be read and interpreted.

NGINX, by contrast, can serve requests in less time because it performs no such per-directory searches: it only needs to look up the requested file itself (assuming the files sit in a conventionally structured directory).

Another benefit of NGINX’s centralized configuration relates to security. Distributing directory-level access also distributes security responsibility to individual users, who may not all be trustworthy. When administrators retain control of the whole server, there is less risk of security problems being introduced by people who can’t be relied upon.
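
For comparison, the same rewrite shown in the earlier .htaccess sketch would live in the central NGINX configuration instead, and takes effect only after a reload (domain and paths are placeholders):

    server {
        listen 80;
        server_name example.com;
        root /var/www/html;

        rewrite ^/blog/([0-9]+)$ /article.php?id=$1 last;   # applied centrally, reload required
    }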

How does File and URI-based Interpretation Work with NGINX and Apache?

When discussing Nginx vs Apache, it’s important to remember that the way each web server interprets requests and maps them to system resources is another vital difference.

When NGINX was built, it was designed to function as both a web server and a proxy server. The architecture required to fulfil both roles means NGINX works primarily with URIs, translating them to filesystem locations only when required. This is evident in several ways in which its configuration files work.

NGINX has no means of determining filesystem directory configuration. As a result, it’s designed to parse the URI. NGINX’s main configuration blocks are location and server blocks: the former matches parts of the URI which come after the host and port, while the latter interprets hosts requested. Requests are interpreted as a URI, rather than one of the filesystem’s locations.

In the case of static files, requests are eventually mapped to a filesystem location. NGINX chooses the location and server blocks for handling the specific request, before combining the document root with the URI. It also adapts whatever’s required, based on the configuration specified.

Because NGINX parses requests as URIs rather than filesystem positions, it can function more simply across its various roles as a web, proxy, and mail server. NGINX is configured by laying out appropriate responses to different request patterns, and it only checks the filesystem once it is ready to serve the request. This is why it doesn’t implement .htaccess files.
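
A brief sketch of how those blocks fit together: server_name matches the requested host, while location blocks match the URI (all names and paths below are placeholders):

    server {
        listen 80;
        server_name docs.example.com;     # chosen by matching the Host header
        root /srv/docs;                   # document root combined with the URI for static files

        location /images/ {               # matches the URI, not a filesystem path
            expires 30d;
        }

        location / {
            try_files $uri $uri/ =404;    # the filesystem is only consulted at serve time
        }
    }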

Apache, by contrast, interprets requests as physical resources on a filesystem by default. It may also interpret a request as a URI location, which calls for a less specific kind of evaluation. Generally, Apache uses <Directory> or <Files> blocks for the former, and <Location> blocks for more abstract resources.

As Apache was conceived as a web server, its standard behaviour is to interpret requests as traditional filesystem resources. It starts at the document root and appends the part of the request that follows the host and port as it attempts to locate an actual file. So, on the web, the filesystem hierarchy appears as the available document tree.

Apache provides alternatives for when a request doesn’t map onto the underlying filesystem. For example, an Alias directive can map a URI to an alternative location, <Location> blocks let you work with the URI rather than the filesystem, and regular-expression variants of these blocks apply configuration across the filesystem with greater flexibility.

Although Apache can operate on both the webspace and the underlying filesystem, it leans more heavily on filesystem methods. This is evident in several design choices, such as the use of .htaccess files for per-directory configuration. The Apache documentation itself advises against using URI-based blocks to restrict access when a request maps to the underlying filesystem.
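
Sketched out, the filesystem-first and URI-based approaches sit side by side in the main configuration (paths below are examples only):

    DocumentRoot "/var/www/html"

    <Directory "/var/www/html">              # filesystem scope
        Require all granted
    </Directory>

    Alias "/media" "/mnt/storage/media"      # map a URI onto a location outside the document root
    <Directory "/mnt/storage/media">
        Require all granted
    </Directory>

    <Location "/server-status">              # abstract, URI-based resource with no file behind it
        SetHandler server-status             # requires mod_status
        Require local
    </Location>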

NGINX vs Apache: How Do Modules Work?

When considering Apache vs NGINX, bear in mind that they can be extended with module systems, though they work in significantly different ways.

NGINX modules have traditionally been selected and compiled into the core binary rather than loaded at runtime (recent NGINX releases do support dynamically loaded modules, but many third-party modules still need to be compiled in). Some NGINX users feel it’s less flexible as a result, particularly those who dislike managing compiled software that sits outside their distribution’s conventional packaging system.

Even though distribution packages typically include the commonly used modules, you would need to build the server from source if you need a non-standard one. Still, this approach lets users dictate exactly what they want from their server by including only the functionality they plan to use.
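
Building from source with a hand-picked module set typically looks something like this sketch; the prefix and module names shown are common examples, not a recommendation:

    ./configure \
        --prefix=/etc/nginx \
        --with-http_ssl_module \
        --with-http_v2_module \
        --add-module=../headers-more-nginx-module   # a third-party module compiled into the binary
    make && sudo make install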

For many people, NGINX seems more secure as a result, since arbitrary components cannot be hooked into the server. That said, if an attacker is ever in a position to load modules, the server has probably already been compromised.

Furthermore, NGINX modules offer rate limiting, geolocation, proxying support, rewriting, encryption, mail functionality, compression, and more.

With Apache, the module system lets users load or unload modules dynamically to match their individual needs. The Apache core is always present, while modules can be switched on and off to add or remove extra functionality that hooks into the main server.

With Apache, this functionality is utilized for a wide range of tasks, and as this platform is so mature, users can choose from a large assortment of modules. Each of these may adjust the server’s core functionality in various ways, e.g. mod_php embeds a PHP interpreter into all of the running workers.

However, modules aren’t restricted to processing dynamic content: they also cover client authentication, URL rewriting, caching, proxying, encryption, compression, and more. With dynamic modules, users can expand the core functionality significantly without extensive extra work.
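
Toggling modules is a configuration change rather than a rebuild. On Debian-family systems the helper scripts below are commonly used; elsewhere the equivalent is a LoadModule line in the main configuration (module names here are just examples):

    # Debian/Ubuntu helpers
    a2enmod rewrite headers      # enable mod_rewrite and mod_headers
    a2dismod status              # disable mod_status
    systemctl reload apache2

    # Or directly in httpd.conf
    LoadModule rewrite_module modules/mod_rewrite.so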

NGINX or Apache: How do Support, Documentation, and Other Key Elements Work?

When trying to decide between Apache and NGINX, another important factor to bear in mind is how easy each server is to set up and how well it is supported by documentation and other software.

The level of support for NGINX is growing, as a greater number of users continue to implement it. However, it still has some way to go to catch up with Apache in certain areas.

Once upon a time, it was hard to find detailed documentation for NGINX in English, as the majority of its early documentation was in Russian. However, documentation has expanded as interest in NGINX has grown, so there is now a wealth of administration resources on the official NGINX website and from third parties.

When it comes to third-party applications, documentation and support are also easier to find than they once were. Package maintainers increasingly offer a choice between auto-configuring for NGINX or Apache, and it is usually straightforward to configure NGINX to work alongside other software unaided, as long as the project in question documents its requirements clearly (headers, permissions, and so on).

Support for Apache is fairly easy to find, as it’s been such a popular server for such a long time. An extensive library of first- and third-party documentation is on offer out there, for the core server and task-based situations that require Apache to be hooked up with additional software.

As well as documentation, numerous online projects and tools include tooling to bootstrap themselves within an Apache environment, either in the projects themselves or in the packages maintained by the distribution’s packaging team.

Apache receives decent support from external projects mainly due to its market share and the sheer number of years it has been in operation. Administrators are also more likely to have experience with Apache, not just because it is so prevalent, but because many of them start out in shared-hosting environments that rely on Apache for its .htaccess-based distributed management capabilities.

NGINX vs Apache: Working with Both

Now that we’ve explored the advantages and disadvantages of NGINX and Apache, you should be in a better position to decide whether Apache or NGINX is best for you. However, a lot of users discover they can leverage both servers’ benefits by using them together.

The traditional configuration for using NGINX and Apache in unison is to position NGINX in front of Apache as a reverse proxy, so that it handles every client request first. Why does this matter? Because it takes advantage of NGINX’s fast processing speed and its ability to handle a large number of connections at the same time.

For static content, NGINX is a fantastic server, as files are served to the client directly and quickly. For dynamic content, NGINX proxies requests to Apache to be processed; Apache returns the rendered pages, and NGINX then passes the content back to the clients.

Plenty of people find this the ideal setup, as it lets NGINX act as a sorting machine: it handles all requests itself and passes on only those it has no native capability to serve. By reducing the number of requests Apache receives, you reduce the blocking that occurs when Apache’s threads or processes are occupied.

With this configuration, users can scale out by adding extra backend servers as required. NGINX can easily be configured to pass requests to multiple servers, boosting both performance and resistance to failure.
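
A hedged sketch of that arrangement: NGINX serves static files that exist on disk and proxies everything else to Apache listening on an alternate port (all addresses, ports, and paths are placeholders):

    upstream apache_backend {
        server 127.0.0.1:8080;                   # Apache bound to an internal port
    }

    server {
        listen 80;
        server_name example.com;
        root /var/www/html;

        location / {
            try_files $uri $uri/ @apache;        # serve static files directly when they exist
        }

        location @apache {
            proxy_pass http://apache_backend;    # everything else goes to Apache
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }

To scale out, additional backend servers can simply be added to the upstream block.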

Apache vs NGINX – Final Thoughts

It’s fair to say that NGINX and Apache offer quality performance — they’re flexible, they’re capable, and they’re powerful. Choosing which server works best for your needs depends largely on assessing your individual requirements and testing with those patterns you believe you’re likely to see.

These differences have a tangible effect on capabilities, performance, and the time required to implement each solution effectively. In the end, no web server is perfect for everyone, so it’s best to pick the one that fits your needs the closest.

While NGINX and Apache are two of the most prominent web servers, you might want to explore the broader spectrum of available options. For insights into other top web servers suitable for both Linux and Windows environments, you can refer to our comprehensive article: Top Web Servers for Linux and Windows

16 Comments

  1. To run WordPress, I chose the NGINX web server for the best performance.

  2. Nginx needs to introduce a solution like htaccess. Then it can easily be the market leader.

    • LiteSpeed Web Server is event-driven, beats nginx in real-world performance and supports .htaccess

    • Why do you think?

      • Debbie,

        In shared environments, for example, you’d most likely run some kind of Web Application Firewall (such as mod_security). NGINX and Apache will execute its rules for every single request, regardless of whether the request is dynamic or static. LiteSpeed does this more intelligently, so static resources don’t have to go through the ModSecurity engine; that alone speeds things up (and saves CPU).

        Dynamic content also tends to be faster than with both Apache and NGINX because of the way the web server communicates with the PHP API.

        Not to mention that, for a shared environment for example, LiteSpeed brings the added benefit of its caching.

        You can get caching options in Apache and NGINX as well, e.g. using Varnish, or the FastCGI cache in NGINX, but this either adds complexity to the setup or, in the case of NGINX, runs into the fact that the FastCGI cache stores content uncompressed while the majority of real-world traffic asks for compressed content. You can then end up in a scenario where a high-traffic site just eats CPU because NGINX stores its cache inefficiently.

        All web servers have their purpose; some are commercial, some are free. But in this case, Plesk often also targets the shared hosting industry, and there are a lot of sites (and servers) that could benefit from LiteSpeed, both because of its performance and because of its better compatibility compared with NGINX, for example.

        • Thanks for your great insights Lucas, always welcome here. It is interesting and would be a good topic to look into. Will bring it forward to our engineers.

  3. You guys should make a comparison with LiteSpeed and Apache, or LiteSpeed and Nginx – since LiteSpeed supports Apache directives, .htaccess, is event-driven, and is super fast – it’s a good alternative.

  4. Agree with the above comment! Litespeed would be a good addition to this blog post and it’s an excellent alternative to Apache and Nginx.

    Apache is slow at times, and NGINX doesn’t support .htaccess. LiteSpeed does!

  5. Love Litespeed and Apache for the .htaccess thing. Also I think configuring Apache is a lot easier but that’s just my opinion.
    Also have never had even near “10,000 client connections all at the same time” so I guess Apache is the best option for me.

    • Thanks for your input Hamid! Good to have different points of view – I guess this depends on your circumstances as you said.

  6. For two months I have been trying the two platforms and my heart still can’t choose. But I believe I will probably end up using both of them, Apache + NGINX proxy. Anyway, a very useful article, thank you.

  7. Static content/dynamic content could be explained more, but it’s a good read.
    For a VPS, I’ll choose NGINX.

  8. I am using WordPress over Nginx and it’s better than over Apache.

  9. My site was on Ubuntu 18.04 and recently updated to Ubuntu 20.04. The default server that comes with 20.04 is NGINX, but I got a message saying the apache2 service failed. I am new to this world, so can anyone please help with what I should do? I would prefer to stay with the default, but how do I deal with the failed apache2 message? Or do I have to restart Apache2 and stop NGINX? On the surface the site is working fine for me (I can open all pages and post new topics). Thank you so much!
