• Zikeji@programming.dev · 22 hours ago

    If this is your take your exposure has been pretty limited. While I agree some devs take it to the extreme, Docker is not a cop out. It (and similar containerization platforms) are invaluable tools.

    Using devcontainers (Docker containers in the IDE, basically) I’m able to get my team developing in a consistent environment in mere minutes, without needing to bother IT.
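    A devcontainer setup is basically one JSON file checked into the repo. A minimal sketch (the image and extension ID here are just illustrative examples, not anything from this thread):

    ```json
    {
      "name": "team-dev",
      "image": "mcr.microsoft.com/devcontainers/base:ubuntu",
      "forwardPorts": [8080],
      "postCreateCommand": "npm install",
      "customizations": {
        "vscode": {
          "extensions": ["dbaeumer.vscode-eslint"]
        }
      }
    }
    ```

    New hires clone the repo, the IDE offers to reopen in the container, and everyone is on identical tooling.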

    Using Docker orchestration I’m able to do a lot in prod, such as automatic scaling, continuous deployment with automated testing, and in worst case near instantaneous reverts to a previously good state.
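    For Swarm-style orchestration, scaling and automatic reverts are mostly declarative. A rough compose-file sketch (service name, image, and numbers are made up):

    ```yaml
    services:
      web:
        image: registry.example.com/web:1.4.2
        deploy:
          replicas: 4                  # horizontal scaling
          update_config:
            order: start-first         # bring new tasks up before old ones stop
            failure_action: rollback   # auto-revert if the update fails
        healthcheck:
          test: ["CMD", "curl", "-f", "http://localhost/healthz"]
          interval: 10s
    ```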

    And that’s just how I use it as a dev.

    As a self-hosting enthusiast I can deploy new OSS projects without stepping through a lengthy install guide listing various obscure requirements, and if I do want to skip the container (which I've only done for a few things) I can simply read the Dockerfile to figure out what I need to do instead of hoping the install guide covers all the bases.
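    That works because a Dockerfile is itself a terse install guide. A hypothetical example (app and packages invented) showing what you can read off it, namely the base OS, system packages, and build steps:

    ```dockerfile
    FROM node:20-alpine
    RUN apk add --no-cache imagemagick   # the "obscure requirement"
    WORKDIR /app
    COPY package*.json ./
    RUN npm ci --omit=dev
    COPY . .
    EXPOSE 3000
    CMD ["node", "server.js"]
    ```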

    And if I need to migrate to a new host? A few DNS updates and SCP/rsync later and I’m done.
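    The rough shape of such a migration, with hostnames and paths made up for illustration:

    ```sh
    # copy volumes and the compose file to the new box
    rsync -az /srv/app/ newhost:/srv/app/
    ssh newhost 'cd /srv/app && docker compose up -d'
    # ...then point DNS at newhost and retire the old server
    ```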

    • Toribor@corndog.social · 20 hours ago

      I’ve been really trying to push for more usage of dev containers at my org. I deal with so much hassle helping people install dependencies and deal with bizarre environment issues. And then doing it all over again every time there is turnover or someone gets a new laptop. We’re an Ops team though so it’s a real struggle to add the additional complexity of running and troubleshooting containers on top of mostly new dev concepts anyway.

      • Zikeji@programming.dev · 19 hours ago

        So far I've helped my team of five get on them. Some other teams are starting as well. We've got developers running Windows, Linux, and macOS on their work machines (for now), and the only container-specific issue we ever encounter is port conflicts, which are well documented and controlled by easy-to-change environment variables.
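        The port fix is usually just a variable with a default; compose uses the same `${VAR:-default}` expansion the shell does, so a mapping like `"${APP_PORT:-8080}:80"` binds 8080 unless you override it. A tiny sketch (the variable name `APP_PORT` is invented):

        ```shell
        # Same expansion compose applies to "${APP_PORT:-8080}:80":
        PORT="${APP_PORT:-8080}"   # falls back to 8080 unless APP_PORT is set
        echo "host port: $PORT"
        ```

        Each developer with a conflict just exports `APP_PORT` (or sets it in `.env`) and moves on.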

        The only real caveat right now is that we have a bunch of microservices, so their supporting services (redis, mariadb, etc.) end up running multiple times, and there is some performance loss from that. But they're all designed to be independent, talking to each other only via their APIs, so the approach works.

      • jjjalljs@ttrpg.network · 15 hours ago

        …what do you mean by using dev containers? Are your people doing development on their host machine?

        • Toribor@corndog.social · 4 hours ago

          Mostly infrastructure as code, with folks installing software natively on their Windows host (Terraform, Ansible, PowerShell modules, but we also do some npm stuff). I'm trying to get people used to running a container instead of installing things on their host so I don't have to chase people down when they run commands with the wrong version or something.

      • Arghblarg@lemmy.ca · edited · 19 hours ago

        Agreed there – it’s good for onboarding devs and ensuring consistent build environment.

        Once an app is ‘stable’ within a docker env, great – but running it outside of a container will inevitably reveal lots of subtle issues that might be worth fixing (assumptions become evident when one’s app encounters a different toolchain version, stdlib, or other libraries/APIs…). In this age of rapid development and deployment, perhaps most shops don’t care about that since containers enable one to ignore such things for a long time, if not forever…

        But like I said, I know my viewpoint is a losing battle. I just wish it wasn’t used so much as a shortcut to deployment where good documentation of dependencies, configuration and testing in varied environments would be my preference.

        And yes, I run a bare-metal ‘pet’ server so I deal with configuration that might otherwise be glossed over by containerized apps. Guess I’m just crazy but I like dealing with app config at one layer (host OS) rather than spread around within multiple containers.

        • Clent@lemmy.dbzer0.com · 11 hours ago

          The container should always be updated to match production. In a non-container environment every developer has to do this independently, but with containers it only has to be done once; developers then pull the update, which transfers only the changed layers, much like a git-style diff.

          Best practice is to have the people who update the production servers be responsible for updating the containers, assuming they aren’t deploying the containers directly.

          It's essentially no different from updating multiple servers, except one of those "servers" is then committed to a local container repository.

          This also means there are snapshots of each update which can be useful in its own way.
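          Concretely, each update pushed under its own tag is one of those snapshots, and a revert is just redeploying an older tag. Illustrative commands (registry and tag names invented):

          ```sh
          docker build -t registry.local/app:2024-06-01 .
          docker push registry.local/app:2024-06-01
          # rolling back = pointing the service at an older snapshot
          docker service update --image registry.local/app:2024-05-20 app
          ```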

    • msage@programming.dev · 19 hours ago

      You know, all this talk about these benefits… when PHP has had this for ages, no BS needed.

      I’ll see myself out.