• The author switched from using OpenLiteSpeed to Nginx for hosting a weather forecasting website.
  • The website experiences spikes in traffic during severe weather events, requiring additional preparation.
  • OpenLiteSpeed was initially chosen for its integrated caching and speed, but the complexity and GUI configuration were challenges.

Archive link: https://archive.ph/Uf6wF

  • poinck@lemm.ee · 10 months ago

    I agree with the author: Only GUI config? WTF!

    If a GUI makes the configuration harder, then it is a bad tool for the job. Your claim is, in part, that OLS makes things easier; I think the struggle with the GUI config illustrates that it doesn’t. If you cannot debug a problem with that GUI, or don’t know what an abstract GUI setting actually does, then it is pretty bad.

    Btw, Nginx configuration can be split into separate files and, through proxy_pass, spread across separate servers.
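
    For illustration, a minimal sketch of how that might look; the file layout and backend address are invented:

        # /etc/nginx/nginx.conf (hypothetical layout): pull in one file per site
        http {
            include /etc/nginx/sites-enabled/*.conf;
        }

        # /etc/nginx/sites-enabled/example.conf: one of those files, handing
        # the site off to a separate backend server via proxy_pass
        server {
            listen 80;
            server_name example.org;

            location / {
                proxy_pass http://10.0.0.2:8080;   # hypothetical backend
                proxy_set_header Host $host;
            }
        }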

    • TCB13@lemmy.world · 10 months ago

      > I agree with the author: Only GUI config? WTF!

      First, this isn’t even true: https://openlitespeed.org/kb/ols-configuration-examples/

      > Your claim is, in part, that OLS makes things easier.

      No. My claim is that OLS / the enterprise version makes things feasible for a specific use case by providing the compatibility your users expect. It also performs well above Apache.

      > Btw, Nginx configuration can be split into separate files and, through proxy_pass, spread across separate servers.

      I’m not sure whether you never used anything before Docker and GitHub hooks, or whether you’ve simply been brainwashed by the Docker propaganda: the big cloud providers reconfigured the way development was done in order to justify selling a virtual machine for each website/application.

      Amazon, Google, and Microsoft never entered the shared hosting market. They took their time to watch and study it, and realized that even though they were able to compete, they wouldn’t profit much, and the shared business model wasn’t compatible with their “we don’t provide support” approach to everything. Reconfiguring the development experience and tooling by pushing very specific technologies such as Docker, build pipelines, and NodeJS created the need for virtual machines, and then there they were, ready to sell their support-free and highly profitable solutions.

      As I said before, Nginx has a built-in way to use wildcards in the include directive and have it pull configs from each website’s root directory (like Apache does with .htaccess), although it isn’t as performant as a single file.
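
      A minimal sketch of that pattern, with a hypothetical path:

          http {
              # one wildcard include pulls a config fragment out of every
              # site's directory, .htaccess-style; unlike .htaccess, these
              # files are only read at startup or on reload
              include /var/www/*/nginx.conf;
          }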

      In this context, why are you suggesting splitting into multiple daemons and using proxy_pass, which has maybe a tenth of the performance of a wildcard include directive? I’m saying that ONE instance + wildcard include is already slower than a single include/file, and you’re suggesting multiple instances + proxy overhead? Wtf.
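
      To make the comparison concrete, a rough sketch of the two setups; the port and path are invented:

          # (a) ONE instance + wildcard include: configs merged at load time,
          #     requests served directly
          include /var/www/*/nginx.conf;

          # (b) multiple daemons behind proxy_pass: a front instance forwards
          #     each request to a per-site daemon, paying an extra hop
          location / {
              proxy_pass http://127.0.0.1:8081;   # hypothetical per-site daemon
          }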