• gmhafiz@programming.dev · +58 −3 · 11 months ago

    1.2 million page views per month

    1,200,000 / 30 days / 24 hours / 60 minutes / 60 seconds comes out to about 0.46 requests per second.

    That is crazy low and nothing to shout about. I notice people like to quote this figure per month to inflate the number and make it look bigger, but calculating it down to requests per second puts it into perspective.
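
    For reference, the same arithmetic as a quick sketch (back-of-the-envelope only):

    ```php
    <?php
    // Average request rate implied by 1.2 million page views per month.
    $viewsPerMonth   = 1200000;
    $secondsPerMonth = 30 * 24 * 60 * 60; // 2,592,000
    printf("%.2f requests/second\n", $viewsPerMonth / $secondsPerMonth); // ~0.46
    ```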

    So why not create a website out of really, really old technology?

    PHP 8.0 is no longer supported, so I hope they update the “really, really old technology” to at least PHP 8.1 today.

    • TCB13@lemmy.world · +37 −1 · edited · 11 months ago

      Either way, that VPS will cost $10-20 depending on CPU from a good provider. You can’t get that cheap with a bunch of AWS services for that number of requests.

      Also, if they were using a hyped tech stack (Node.js or whatever) they wouldn’t be able to handle request spikes like they do. 0.46 requests per second means nothing on its own, because I’m sure they have hours of complete inactivity and other hours serving 10 requests/second, which would totally obliterate 2 GB of RAM if done with Node.js and a MySQL DB.

      • myersguy@lemmy.simpl.website · +8 · 11 months ago

        $10-20 is what that VPS costs at a cloud provider. You could also dockerize and use a container service like GCP Cloud Run combined with cloud storage within that budget.

        I’m not a big Node guy, but I also kind of doubt Node.js would fail to handle 10 RPS with 2 GB of memory. I guess it all depends on what the requests are doing.

      • Deckweiss@lemmy.world · +5 −1 · edited · 11 months ago

        $10-20? I think it can be done way cheaper. I doubt they need a powerful CPU; a few vcores will do.


        edit:

        Here is an 8 GB, 6-vcore ARM VPS from a reputable German server host for €7:

        https://www.netcup.de/bestellen/produkt.php?produkt=3564

        Here is a 2 GB, 2-vcore x86 VPS for €3.25:

        https://www.netcup.de/bestellen/produkt.php?produkt=2948

        Not to mention, they have regular deals where you can get them at a permanent 50% off (during Black Friday and winter sales). I have been paying €17 per year for the 2 GB version.

        • TCB13@lemmy.world · +4 · 11 months ago

          If you go for ultra-cheap hosts, they’re most likely not that reliable; I was pointing at companies like DigitalOcean.

          • Deckweiss@lemmy.world · +5 −1 · 11 months ago

            The netcup VPS I have has had 100% uptime over the past 5 years. But no heavy use, of course; just WireGuard.

          • Kissaki@programming.dev · +2 · edited · 11 months ago

            I’ve been using netcup for a decade. They’re very reliable and high quality. (Management/Admin interface, functionality, help wiki. Never had reliability issues.)

            I’ve used other providers before. I’m very satisfied with netcup.

      • QuadriLiteral@programming.dev · +3 · 11 months ago

        Indeed. They say they’ve been repeatedly featured on the front page of HN and the site didn’t fall over; I’ve seen many examples that did.

    • Spectacle8011@lemmy.comfysnug.space · +1 · 11 months ago

      PHP 8.0 is no longer supported, so I hope they update the “really, really old technology” to at least PHP 8.1 today.

      Most likely. This blog post was written in February 2022; support for PHP 8.0 was only dropped in November 2023.

  • bitcrafter@programming.dev · +24 −1 · 11 months ago

    Wait… I just noticed this:

    [XHTML] never took off on the web, in part because in a website context so much HTML is generated by templates and libraries that it’s all too easy to introduce a syntax error somewhere along the line; and unlike HTML, where a syntax error would still render something, the tiniest syntax error in XHTML means the whole thing gets thrown out by the browser and you get the Yellow Screen of Death.

    This confuses me; don’t you want to make sure you are always generating a syntactically valid document, rather than hoping that the browser will make something suitable up to work around your mistake?

    • polakkenak@feddit.dk · +14 −1 · 11 months ago

      The thing with XHTML is that even a minor problem will make the page refuse to render, displaying a full-page error message instead of any content. Having the browser guess how to handle malformed HTML isn’t ideal, but it’s a lot better than showing nothing at all.
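
      As a hypothetical illustration of how template code ends up producing such markup (this snippet is mine, not from the article):

      ```php
      <?php
      // A template loop that forgets a closing tag. Served as text/html,
      // the browser silently repairs the markup; served as
      // application/xhtml+xml, the XML parser aborts and the visitor gets
      // a full-page parse error instead of the content.
      $items = ['one', 'two'];
      echo '<ul>';
      foreach ($items as $item) {
          echo '<li>' . htmlspecialchars($item); // oops: missing </li>
      }
      echo '</ul>';
      ```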

      • atheken@programming.dev · +6 · 11 months ago

        As an end result, maybe. But it also means that you get specific feedback on how to author it correctly, and can fix it before pushing it live.

        IDK, I lived through that whole era, and I’d attribute it more to the fact that HTML is easy enough for complete novices to author in any text editor. XHTML demands a hell of a lot more knowledge of how XML works and what is valid (and more keystrokes). The barrier to entry for XHTML is much, much higher.

        • bitcrafter@programming.dev · +4 · 11 months ago

          I completely agree with that assessment, but what is weird to me is that most people use frameworks so they don’t actually touch any of the markup themselves.

          • atheken@programming.dev · +3 · 11 months ago

            I don’t know if it’s “most people,” but I agree, there is no excuse for frameworks producing sloppy output. That being said, XHTML is a bit more chatty than HTML(5), so there is some minor benefit to using the less verbose standard.

    • DeLift@feddit.nl · +8 · 11 months ago

      I feel the idea was that anyone should be able to make a webpage by just copy-pasting snippets, and to help with that, HTML and JavaScript will attempt to carry on as best they can, even if there are glaring issues.

      • bitcrafter@programming.dev · +6 −2 · 11 months ago

        That approach makes a lot of sense for amateur web sites, but less sense for professional web sites.

        • DeLift@feddit.nl · +6 · 11 months ago

          Oh yes, front-end developers suffer this decision daily. Luckily there are things like TypeScript to ease the pain.

    • adrian783@lemmy.world · +4 · 11 months ago

      Well, no, because broken HTML can still function sometimes. But most importantly, most HTML is not even “broken”, just not “adhering to the complete standards”.

      HTML is just formatting around the content; even completely devoid of HTML you can still see things. We’re not writing LaTeX here, and no one cares if things are a little fucky.

      As far as generated HTML goes, you’re more likely to break it further if you fuck with it anyway.

      • bitcrafter@programming.dev · +2 · 11 months ago

        Sure, but shouldn’t you want your generated markup to adhere to the complete standards so that you know it will be interpreted correctly, rather than hoping that the browser will make the correct guess about what you really meant?

        • adrian783@lemmy.world · +2 · 11 months ago

            I mean, yeah, it would be nice, but software isn’t perfect and validating HTML is not a sexy feature.

    • PixxlMan@lemmy.world · +3 · 11 months ago

      That’s too sensible for the web. It almost makes sense, and there are no fun compatibility problems to revel in!

  • TCB13@lemmy.world · +17 · 11 months ago

    Finally, someone who knows how to do things properly.

    Modern PHP isn’t half bad, and it has at least two major benefits over some of its competitors: Each request is a totally independent request that rebuilds the world. There’s no shared state (unless you want there to be).

    A big benefit is that you’re not stuck with having to learn and maintain a huge bells-and-whistles 3rd-party framework in perpetuity. I think people really underestimate the burden of maintaining a 3rd-party framework even after development of the website is complete.

    Starting on a cloud provider cedes one’s independence because it often leads to vendor lock-in.

    The big benefit of running a basic Linux box on our own VPS is that everything is just files on a generic, well-understood platform (…) a VPS is a low-cost, simple, and lock-in-free way to go. Very classic-web.

    At the end of the day…

    All of this goes to show that you don’t need a whole lot to build a performant, useful website, capable of serving millions of requests a month, on a tiny server that also handles other resource-intensive tasks.

    • expr@programming.dev · +11 · 11 months ago

      Modern PHP isn’t half bad, and it has at least two major benefits over some of its competitors: Each request is a totally independent request that rebuilds the world. There’s no shared state (unless you want there to be).

      …isn’t that how every web framework works?

      • TCB13@lemmy.world · +4 · 11 months ago

        Anything JS/Node.js doesn’t work like that, and that’s precisely one of the issues with it. Node will also keep a process running in the background even if the website/app isn’t ever accessed, whereas PHP won’t be running anything until a request comes in.
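
        To make the contrast concrete, a minimal sketch (the counter is hypothetical):

        ```php
        <?php
        // counter.php: under PHP's shared-nothing model, every request
        // starts from a blank slate, so this "counter" is rebuilt each time.
        $counter = ($counter ?? 0) + 1;
        echo $counter; // prints 1 on every request
        // In a long-lived Node process, a module-level variable would keep
        // incrementing across requests until the process restarts; that is
        // the implicit shared state PHP avoids by default.
        ```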

        • expr@programming.dev · +3 · 11 months ago

          Yikes, pretty bizarre considering stateless endpoints are the gold standard.

          Re: persistent process, that doesn’t seem like a big deal to me. It’s pretty normal since you often want to keep some common stuff going, like metrics. Unless you’re doing something crazy, it should really take next to no resources while idling.

          • TCB13@lemmy.world · +2 · edited · 11 months ago

            Re: persistent process, that doesn’t seem like a big deal to me. It’s pretty normal since you often want to keep some common stuff going, like metrics.

            That shouldn’t require a persistent process; you can do metrics without one, and Matomo is a good example of that. If something requires a persistent process, it is either poorly designed/executed, or we’re talking about an edge case like a chat, a socket, or something similar where keeping a connection open to clients improves things considerably.

            This “always running” thing is a cancer initially found in some Java backends (because Java takes a long time to start anything) and later reintroduced in JS for exactly the same reason.

            Another thing with JS/Node is that it is a single-threaded runtime environment, meaning a program’s code is executed line after line and there can’t be two lines of a program running at the same time. To handle requests from multiple clients, Node simply queues them and processes them sequentially (look up the Node.js event loop). In short, you can’t process and send replies to two users at the same time; one will have to wait before getting a reply. Even worse, if a request from one user manages to crash the daemon, then everyone’s in-flight and queued requests get discarded. If you have some kind of daemon management running it may restart Node; if not, your application will just die.

            There are solutions for this, like PM2, which essentially launches x Node.js processes, can upscale/downscale automatically, and load-balances requests across those instances, but that’s essentially an afterthought rather than a real fix for the underlying issue.

            In PHP this was never an issue: since every request spins up a new process, they all get processed in parallel; some may crash, some may take a long time, and the others won’t be affected. No extra 3rd-party daemons are required to manage things (except for the webserver or PHP-FPM, which comes out of the box).
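
            A trivial way to see that isolation (a sketch, assuming a stock PHP-FPM pool with several workers):

            ```php
            <?php
            // slow.php: each request is handled by its own PHP-FPM worker,
            // so this deliberately slow (or even crashing) request doesn't
            // stall other requests, which the remaining workers keep serving.
            sleep(10);                    // simulate a slow request
            echo "done after 10 seconds"; // others were answered meanwhile
            ```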

            The JS/Node model is fundamentally flawed, yet it is mostly pushed by people who don’t know how things work, funded/sponsored by people who have an interest in wasting resources and selling VPSs for everyone and everything (cloud providers). It is all about mangling the tools, technologies, and education developers have in order to push them into flawed technologies, so they can then sell more resources, load balancers, build processes, and other overly complex stuff that isn’t required for the majority of people.

            Look, I’m not saying there aren’t good and valid use cases for Node and for the single-threaded model, because there are. There are cases where it performs better than PHP and its (usually isolated) processes, but unfortunately people aren’t using it for those use cases; they’re using it to build simple websites and APIs that would’ve been much better, more reliable, and cheaper to run if developed with PHP’s model.

            Re: persistent process, that doesn’t seem like a big deal to me.

            Now imagine someone makes 500 websites with Node.js for small businesses; that’s going to be 500 always-running processes, very prone to memory leaks and crashes, kept alive just to serve the two users a day those businesses get. That’s at least 5 GB of RAM plus CPU load just to keep things running. If you do the same with PHP, you’ll have nginx idling at 15 MB of RAM and a bunch of PHP processes that handle requests and die when the job is done. You’ll probably be able to run those websites on a 1 GB VPS instead of the 8 GB or so that Node would require.

            You may be surprised by this example, but it is more common than you might think, and people don’t even notice it. Sometimes it isn’t a single person managing 500 websites; it’s 500 developers making 500 websites with Node and deploying them to “droplets” on DigitalOcean. And while we’re on the subject, what about power consumption and hardware? Where are all the environmentalists? PHP’s model is more reliable, cheaper, and also more eco-friendly.

          • adrian783@lemmy.world · +2 · 11 months ago

            For content sites, stateless is fine. For web apps you need state of all different kinds; even the smallest detail is a piece of state in an application.

            Endpoints themselves are stateless, but the web application is stateful. You only have to build the world once, and it’s much friendlier for end users.

            • expr@programming.dev · +1 · 10 months ago

              I wasn’t talking about frontend state, just the server. Frontend state is kind of irrelevant, tbh.

    • xigoi@lemmy.sdf.org · +3 −1 · 11 months ago

      Each request is a totally independent request that rebuilds the world. There’s no shared state (unless you want there to be).

      I wish there were a language with this model, but without the language itself being complete garbage.

      • RonSijm@programming.dev · +6 · 11 months ago

        Isn’t that the same as in modern languages? For example, in ASP.NET Core / C#, you can just register all your services with a lifetime scoped to the request, and then there’s no shared state.

        If you want there to be shared state, you just register your services with a longer lifetime, such as singleton scope.

      • TCB13@lemmy.world · +5 −2 · 11 months ago

        And there is… it’s called PHP. JS doesn’t have this model because it is garbage-slow and would never run reasonably well under that model.

      • redcalcium@lemmy.institute · +4 −1 · edited · 11 months ago

        You can still use CGI with Apache. Apache will execute your program on each request and return its output from stdout as the webserver response. If you have a form, it gets POSTed to your program’s stdin when Apache executes it. You can write your program in whatever language you want as long as it can read stdin and write to stdout. It’s just tedious af, so no one really uses it these days. PHP was basically born because people got tired of writing CGI programs in Perl or C and wanted something more convenient. But with modern programming languages, perhaps CGI is not too bad, except for the one-process-per-request model, which will absolutely kill your server the moment you have a visitor spike.
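
        For reference, the CGI contract is small enough to sketch in a few lines (PHP used here just for familiarity; any language works):

        ```php
        #!/usr/bin/env php
        <?php
        // Minimal CGI sketch: the web server passes request metadata via
        // environment variables, a POST body arrives on stdin, and whatever
        // is printed to stdout (headers, blank line, body) becomes the
        // HTTP response.
        $method = getenv('REQUEST_METHOD') ?: 'GET';
        $body   = $method === 'POST' ? file_get_contents('php://stdin') : '';

        echo "Content-Type: text/plain\r\n\r\n";
        echo "Method: $method\n";
        echo "Body: $body\n";
        ```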

        • TCB13@lemmy.world · +1 · 11 months ago

          You can still use CGI with Apache. Apache will execute your program on each request and return its output from stdout as the webserver response. If you have a form, it gets POSTed to your program’s stdin when Apache executes it. You can write your program in whatever language you want as long as it can read stdin and write to stdout. It’s just tedious af, so no one really uses it these days.

          You can’t use the CGI model with JS/Node because… unlike PHP, it isn’t designed for quick startup and shutdown.

          • redcalcium@lemmy.institute · +1 · 11 months ago

            Why not? Is it because a typical Node.js app includes hundreds of npm dependencies? As long as it can launch and finish within 60s (the default timeout for Apache), you should be able to use it.

            • TCB13@lemmy.world · +1 · 11 months ago

              The point is that PHP is much more performant and doesn’t waste resources the way Node.js does. While you can run PHP under CGI with decent bootstrap performance that won’t annoy people, the same can’t be said of Node.js. Nowadays people use PHP-FPM, which is way faster at scaling up than any Node.js process manager out there and doesn’t sit wasting resources when a particular application has no requests.

  • bitcrafter@programming.dev · +6 · 11 months ago

    Thanks, it’s actually kind of nice to hear someone who likes using PHP explain in detail why they like it.