
I abandoned OpenLiteSpeed and went back to good ol’ Nginx

arstechnica.com

One weather site’s sudden struggles, and musings on why change isn’t always good.

  • The author switched from using OpenLiteSpeed to Nginx for hosting a weather forecasting website.
  • The website experiences spikes in traffic during severe weather events, requiring additional preparation.
  • OpenLiteSpeed was initially chosen for its integrated caching and speed, but the complexity and GUI configuration were challenges.

Archive link: https://archive.ph/Uf6wF

7 comments
  • The author completely missed the two most important points about why anyone would use OpenLiteSpeed.

    1. It needs to be optimized for the traffic you have, and for what it's worth, I don't believe he did that properly, as it would require a lot of changes in the way PHP is handled and configured as well.

    2. The largest selling point of OLS is the fact that it understands Apache rewrite rules (mod_rewrite compatible).

    Let's say you're managing something similar to shared hosting, where multiple users deploy websites to your servers. In this scenario Nginx isn't an option, because a) most people don't want to / don't know how to write rules for it, and b) it requires* the rules to be centralized in a global config file, and the web server needs to be reloaded after each change.

    With Apache + mod_rewrite you can just drop a familiar .htaccess into any directory and it will work right away. OLS is the only alternative web server out there that understands those rules and is actually 100% compatible with them.
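    To make the point concrete, here is a minimal sketch of the kind of per-directory file being described (these specific rules are a typical WordPress-style example, not taken from the thread): the same .htaccess works under both Apache + mod_rewrite and OLS, with no edit to a central config and no server reload.

    ```apacheconf
    # Hypothetical per-directory .htaccess: route requests for
    # non-existent files/directories to index.php (front controller).
    RewriteEngine On
    RewriteBase /
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteCond %{REQUEST_FILENAME} !-d
    RewriteRule . /index.php [L]
    ```

    Dropping this file into any site's docroot takes effect immediately, which is exactly the workflow shared-hosting users expect.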

    If you look closely at OLS marketing and documentation, you'll find it was mostly developed and optimized for those shared-hosting use cases. And as you can see, picking a web server isn't always about speed; sometimes it's about what your users are used to, and about not having to take down dozens or hundreds of websites to change the configuration of a single one.

    * It actually doesn't strictly require it, but the performance hit of looking for config files in each folder, plus conflicts, reloads, and other issues, makes that an unviable and unrecommended way of operating.

    • I agree with the author: Only GUI config? WTF!

      If a GUI makes configuration harder, then it is a bad tool for the job. Your claim is, in part, that OLS makes things easier. I think the struggle with the GUI config illustrates that it doesn't. If you cannot debug a problem with that GUI, or don't know what an abstract GUI setting does, then it is actually pretty bad.

      Btw., Nginx configuration can be split into separate files and, through proxy_pass, distributed onto separate servers.
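      A minimal sketch of the two mechanisms this comment names (paths, server name, and backend address are illustrative assumptions, not from the thread):

      ```nginx
      # --- nginx.conf: pull per-site configs from separate files ---
      http {
          include /etc/nginx/conf.d/*.conf;
      }

      # --- /etc/nginx/conf.d/example.conf: hand one site off ---
      # --- to a separate backend server via proxy_pass          ---
      server {
          listen 80;
          server_name example.com;

          location / {
              proxy_pass http://10.0.0.5:8080;  # separate upstream server
              proxy_set_header Host $host;
          }
      }
      ```

      Note that changes to any included file still require an `nginx -s reload` before they take effect, which is the reload cost the parent comment is objecting to.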

      • I agree with the author: Only GUI config? WTF!

        First, this isn't even true: https://openlitespeed.org/kb/ols-configuration-examples/

        Your claim is, in part, that OLS makes things easier.

        No. My claim is that OLS / the enterprise version makes things feasible for a specific use case by providing the compatibility your users expect. It also performs well above Apache.

        Btw., Nginx configuration can be split into separate files and, through proxy_pass, distributed onto separate servers.

        I'm not sure if you've never used anything before Docker and GitHub hooks, or if you've simply been brainwashed by the Docker propaganda: the big cloud providers reconfigured the way development was done in order to justify selling a virtual machine for each website/application.

        Amazon, Google, and Microsoft never entered the shared-hosting market. They took their time to watch and study it and realized that, even though they were able to compete, they wouldn't be profiting that much, and the shared business model wasn't compatible with their "we don't provide support" approach to everything. Reconfiguring the development experience and tooling by pushing very specific technologies such as Docker, build pipelines, and NodeJS created the necessity for virtual machines, and then there they were, ready to sell their support-free and highly profitable solutions.

        As I said before, Nginx has a built-in way to use wildcards in the include directive and have it pull configs from each website's root directory (like Apache does with .htaccess); however, it isn't as performant as a single file.

        In this context, why are you suggesting splitting into multiple daemons and using proxy_pass, which has maybe 1/10 the performance of a wildcard include directive? I'm stating that ONE instance + wildcard include is slower than a single include/file, and you're suggesting multiple instances + proxy overhead? Wtf.
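        The wildcard-include setup being defended here might look like the following (a sketch; the paths and naming scheme are assumptions). It keeps a single Nginx instance while letting each site's docroot carry its own config fragment, which is vaguely .htaccess-like except that fragments are parsed once at (re)load time rather than on every request:

        ```nginx
        # Single Nginx instance; per-site fragments live next to
        # each site's files and are pulled in by one wildcard include.
        server {
            listen 80;
            server_name _;
            root /var/www/default;

            # Hypothetical layout: /var/www/<site>/site.conf holds that
            # site's location blocks. A reload is still needed after edits,
            # but there is no per-request filesystem lookup and no proxy hop.
            include /var/www/*/site.conf;
        }
        ```

        The trade-off the commenter is pointing at: this is slower to reload and marginally slower than one hand-maintained file, but far cheaper per request than chaining instances behind proxy_pass.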

  • Upvoted for the concise summary.

  • This is the best summary I could come up with:


    But when severe weather events happen—especially in the summer, when hurricanes lurk in the Gulf of Mexico—the site’s traffic can spike to more than a million page views in 12 hours.

    So during some winter downtime two years ago, I took the opportunity to jettison some complexity and reduce the hosting stack down to a single monolithic web server application: OpenLiteSpeed.

    OLS seemed to get a lot of praise for its integrated caching, especially when WordPress was involved; it was purported to be quite quick compared to Nginx; and, frankly, after five-ish years of admining the same stack, I was interested in changing things up.

    The first significant adjustment to deal with was that OLS is primarily configured through an actual GUI, with all the annoying potential issues that brings with it (another port to secure, another password to manage, another public point of entry into the backend, more PHP resources dedicated just to the admin interface).

    Translating the existing Nginx WordPress configuration into OLS-speak was a good acclimation exercise, and I eventually settled on Cloudflare tunnels as an acceptable method for keeping the admin console hidden away and notionally secure.

    Fortunately, Space City Weather provides a great testing ground for web servers, being a nicely active site with a very cache-friendly workload, and so I hammered out a starting configuration with which I was reasonably happy and, while speaking the ancient holy words of ritual, flipped the cutover switch.


    The original article contains 589 words, the summary contains 239 words. Saved 59%. I'm a bot and I'm open source!
