• 1 Post
  • 20 Comments
Joined 1 year ago
Cake day: June 8th, 2023

  • We follow the principle of doing one thing well instead of all things mediocre, so we use two solutions for what you asked. Like others in the thread, we use Tandoor, but only for recipes and meal planning. It does that exceptionally well, but its shopping list part does not fit our style of shopping.

    As a shopping list, we use David Shay's Groceries / Specifically Clementines. Why?

    • It works offline, for when you are in one of those huge buildings that act like a Faraday cage and you have no reception anymore.
    • It lets my partner attach a picture to a list item, so when I am standing clueless in front of shelves with 500 different cheese brands, I can still find that specific cheese before the shop closes.
    • It works exactly the way we shop. We have always arranged items in the order they appear as you work through the shop from entry to exit. That is super efficient.
    • It supports aisles, meaning your items are assigned to an aisle. The super cool feature here is that you can rearrange the aisles for each shop. Veggies are at the entrance of Shop A, but in the middle of Shop B? Just move that aisle to the start of the list for Shop A and to the middle for Shop B. Since all items are connected to an aisle, they move with it. This way you never have to turn around in a shop to get “those other things”. You just walk from entry to exit in one line and are done with it.
    • With this software I have never forgotten to buy something I did not find in Shop A. It works like this: you create list groups that contain lists for every shop that fits. For example, you group food shops together, or shops for gardening supplies. Within a list group you have your items, and when putting an item on a list, you select on which lists it should appear. Now when you put your favourite cheese on the lists for Shop A and Shop B and you buy it in Shop A, it gets ticked off on Shop B too. Or the other way round; I think you get the idea.
    • I have to repeat that it works offline. A shopping list is useless if you cannot use it while you are shopping.
    • Accidentally ticked off an item (well… touchscreens) and you do not know what it was? No problem. Ticked-off items just move down the list and you can pick them up again. With other apps, stuff just disappears or gets sent back to the global item list, and then you have no idea what you missed. Not so with “Specifically Clementines”.
    • It never let us down. It has always worked, whether offline or online, without any hiccups.

    There is more, but this post got too long already. It also has user management, permissions and live sync. Yes, my partner can see live when I tick off items on the list and can put stuff on the list while I am shopping :-)

    Everything in that software feels like it was created by a person who actually goes shopping.

    It has a very good web interface (which also has the offline mode, AFAIK) and a very good Android app.

    Does it look fancy? No. Does it have everything we ever wanted in a shopping list app? Absolutely!



  • buedi@feddit.de to Selfhosted@lemmy.world · Nextcloud appreciation post · 3 months ago

    I have run Nextcloud for many, many years. For a very long time I hosted it on Hetzner's second-lowest webspace tier. It was not very fast there (you get what you pay for), but fast enough for our needs. Later I moved it to an Azure VM, and after that to my home server, where it runs blazingly fast, especially since the last updates they pushed out.

    In all that time I never reinstalled; I just upgraded to new versions as they came out. The only times I had problems upgrading were on the cheap webspace instance at Hetzner, when an upgrade took longer than the PHP timeout that very cheap hosting plan allowed. So it was never a fault of Nextcloud, just a consequence of hosting it on basically the cheapest plan I could find.
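
    For what it is worth: on a box where you have shell access (which cheap webspace usually does not give you, hence my problem back then), running the upgrade from the CLI side-steps the web-request timeout entirely. A minimal sketch, assuming a standard setup where the web server runs as www-data:

    ```sh
    # run from the Nextcloud installation directory;
    # adjust the user to whatever your web server actually runs as
    sudo -u www-data php occ upgrade
    ```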

    We use it for file sharing, calendar + contacts (+ sync with DAVx), Notes and of course Talk. For Talk, to make full use of voice + video calls you should have a TURN server, but if you do not need that (if you just text), it ran great even on the webspace instance at Hetzner.
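
    In case anyone wants to try the TURN part: a minimal sketch of a coturn compose file. The image is the official coturn/coturn; the realm and the secret are placeholders, not my actual setup:

    ```yaml
    services:
      coturn:
        image: coturn/coturn
        restart: unless-stopped
        network_mode: host   # TURN relays need a wide UDP port range
        command: >
          --listening-port=3478
          --realm=cloud.example.org
          --use-auth-secret
          --static-auth-secret=change-me
    ```

    You then enter the same host and secret in the Talk settings of the Nextcloud admin UI.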

    We are very happy in our family that it exists, that it is free and that it has served us well for so many years.


  • You would think so, yes. But to my surprise, my well over 60 containers so far consume less than 7 GB of RAM, according to htop. Also, containers can of course network and share services: for external access, for example, I run only one instance of Traefik, and one COTURN for both Nextcloud and Synapse.



  • I would absolutely look into it. Many years ago, when Docker emerged, I did not understand it and called it “hipster shit”. But a lot of people around me who used Docker at that time did not understand it either. Some lost data, some had services that stopped working with no idea how to fix them.

    Years passed and containers stayed, so I started to take a closer look and tried to understand it: what you can do with it and what you cannot. As others here said, I also had to learn how to troubleshoot, because stuff now runs inside a container and you don't just copy a new binary or library into a container to try to fix something.

    Today, my homelab runs 50 containers and I am not looking back. When I rebuilt my homelab this year, I went full Docker. The most important reason for me: every application I run dockerized is predictable and isolated from the others (on the binary side; the network side is another story). The issue I had earlier, when running everything directly on the box in Linux, was that, say, one application needs PHP 8.x while another, older one still only runs with PHP 7.x. Or multiple applications depend on a specific library, and after updating it one app works and the other doesn't anymore, because it would need an update too. Running an apt upgrade was always a very exciting moment… and not in a good way. With Docker I do not have these problems. I can update each container on its own, and if something breaks in one container, it does not affect the others.
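
    To illustrate the PHP example with a minimal sketch (service names and image tags are made up, not my actual stack):

    ```yaml
    services:
      legacy-app:
        image: php:7.4-apache   # the old app keeps its PHP 7.x
        volumes:
          - /opt/legacy-app:/var/www/html
      modern-app:
        image: php:8.3-apache   # the new app gets PHP 8.x, no conflict
        volumes:
          - /opt/modern-app:/var/www/html
    ```

    Each one can be updated (or broken) on its own without touching the other.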

    Another big plus is the backups you can do. I back up every docker-compose file plus the data for each container with Kopia. Since barely anything is installed directly in Linux, I can spin up a VM, restore my backups with Kopia and start all containers again to test my backup strategy. Stuff just works. No fiddling with the Linux system itself, adjusting tons of config files and installing hundreds of packages to get all my services up and running again after a hardware failure.
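
    Roughly, the restore test looks like this (paths and the repository location are examples, not my actual layout):

    ```sh
    # on the fresh VM: connect to the existing Kopia repository
    kopia repository connect filesystem --path /mnt/backup/kopia-repo

    # pull the container data and the compose files back
    kopia snapshot list
    kopia restore <snapshot-id-for-/opt> /opt
    kopia restore <snapshot-id-for-compose> /home/me/compose

    # then simply start everything again, service by service
    cd /home/me/compose/<service> && docker-compose up -d
    ```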

    I really started to love Docker, especially in my Homelab.

    Oh, and you would think everything being containerized means big resource usage? My 50 containers right now consume less than 6 GB of RAM, and I run stuff like Jellyfin, Pi-hole, Home Assistant, Mosquitto, multiple Kopia instances, multiple Traefik instances with CrowdSec, Logitech Media Server, Tandoor, Zabbix and a lot of other things.



  • I love Traefik! When I started, I tried NGINX but could not wrap my head around it. So I tried Caddy: pretty easy to understand, and I used it for a while. Then I had demands Caddy could not meet and stumbled upon Traefik. As you said, there is a learning curve, but for me it was much easier than NGINX. I like that you can put the Traefik config inside the compose files, and that a service is only active in Traefik while the actual containers are up and running. I added CrowdSec to my external-facing Traefik instance and use a plain Traefik instance for all my internal services as well. And it can forward HTTP, HTTPS, TCP and UDP.
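
    To show what “config inside the compose file” means, a minimal sketch (router name, hostname, entrypoint and network are placeholders for whatever your Traefik instance defines):

    ```yaml
    services:
      whoami:
        image: traefik/whoami
        networks: [proxy]
        labels:
          - traefik.enable=true
          - traefik.http.routers.whoami.rule=Host(`whoami.example.org`)
          - traefik.http.routers.whoami.entrypoints=websecure
          - traefik.http.routers.whoami.tls=true

    networks:
      proxy:
        external: true
    ```

    Because Traefik reads the route from the container labels, the route simply disappears when the container is down; that is the “only active while the containers are up” part.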



  • buedi@feddit.de to Technology@lemmy.world · Coming to you soon... · 11 months ago

    The thing that stuck with me was the impression that the video quality was much worse than on YouTube. IIRC, when content was available on both platforms, YouTube had the much better picture and sound. But maybe that was just specific to the content I watched back then. There was not THAT much to see in the beginning, not like today, where you can spend 24h straight and always see new stuff :-)




  • Setting up the HMAC key for the CouchDB was indeed the step I struggled with too. The first time, I either made a mistake or used a broken website to generate the Base64 value. The second time, my mistake was putting the Base64 value for the HMAC key into both the jwt.ini AND the docker-compose.yml. But COUCHDB_HMAC_KEY in docker-compose.yml has to be the unencoded value, while hmac:_default in jwt.ini has to be Base64 encoded. Maybe this is what went wrong for you too?
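
    To make that concrete, a small sketch with a made-up passphrase (do not use this one):

    ```sh
    # docker-compose.yml gets the RAW value:
    #   COUCHDB_HMAC_KEY=my-secret-passphrase
    # jwt.ini (hmac:_default) gets the Base64-encoded value:
    echo -n 'my-secret-passphrase' | base64
    # -> bXktc2VjcmV0LXBhc3NwaHJhc2U=
    ```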

    I bet you are close!

    On the other hand, if you are the only person using the shopping list and your current setup offers what you need, maybe it is not worth it for you. For me it was (and updating once it is running is super easy, I promise!). The instant sync across all devices is great, and it keeps working when I lose reception in a shop and syncs again instantly once I have internet again. But what makes Groceries for me are:

    • The ability to have an item on multiple shopping lists if needed; if it is checked off on one list, it is checked off on the other lists too. I stopped forgetting to buy stuff in the second shop that was not available in the first.
    • The ability to assign items to aisles and order the aisles differently for each list (every shop I visit has a slightly different layout). This made shopping super quick for me: I enter the shop and walk through it exactly once and have everything I need, because it is all in the correct order on the respective list.

    Oh, and adding a photo to an item is super useful if you are like me and need very precise instructions on what to get for your partner when you stand in front of a shelf with 100 different types of cheese which all look exactly the same to you… having a photo is sometimes a life saver for me :-)


  • As others mentioned, you probably do not need VMs. If you thought about VMs because of isolation, then yes, that might be a good idea.

    In an ideal world, with the budget and hardware, I would have a server with multiple NICs (network interface cards) connected to different ports on my firewall for LAN and DMZ. Then I would create VMs for LAN and DMZ and run the Docker containers needed for each zone on them. Everything that is accessible from the Internet goes into the DMZ, the rest into the LAN. I could lock it down further by creating two DMZ zones: put only, say, NGINX or Traefik into the zone that gets exposed, and the services behind the reverse proxy into the second DMZ zone, which would still be isolated from the LAN.

    But since I only have a small box with one NIC, I instead created VLANs on my router and a Docker network for each VLAN. Every single service I run is a Docker container sitting in one of the VLANs, appropriate to its level of exposure. I have one VLAN called LAN that is obviously connected to my LAN, and two other VLANs where I basically do what I described above: one holds Traefik with ports exposed to the Internet, and the other hosts the services that are accessible through Traefik. With that setup you at least isolate network traffic, and it is something I would look into if you plan to expose any of your services to the Internet. When you start with Docker, you would usually just expose ports from the containers, which get mapped to the IP of your host… and so all those containers have access to your LAN. At least try to separate that.
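
    I will not paste my exact commands, but one common way to get a Docker network per VLAN is macvlan on a tagged sub-interface; a sketch with placeholder interface names, VLAN IDs and subnets:

    ```sh
    docker network create -d macvlan \
      --subnet=192.168.20.0/24 --gateway=192.168.20.1 \
      -o parent=eth0.20 dmz-proxy      # VLAN 20: the exposed Traefik instance

    docker network create -d macvlan \
      --subnet=192.168.30.0/24 --gateway=192.168.30.1 \
      -o parent=eth0.30 dmz-services   # VLAN 30: services behind the proxy
    ```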

    The next thing I wanted to do was run my containers rootless, so that no container has root permissions: if something within a container decides to make the Docker service do something malicious on the host, it should not be able to run as root. The caveat is that Docker does not support VLANs in rootless mode. I spent half a day converting everything to Podman, because people were praising Podman left and right for rootless operation, only to find out that Podman does not support VLANs in rootless mode either :->

    Using VMs as described above would make the “I cannot use Docker rootless” problem less of an issue, but I decided against VMs because of resources / budget.

    What I can recommend when you start: do not make things too complicated until you are familiar with Docker and understand what you are doing. As you get better, you will want more and learn more stuff as you go.

    You could just install a Linux distribution you are familiar with (I use Ubuntu Server 22.04 LTS), install Docker and play around with it a bit to see how everything works. Only start exposing services to the Internet once you know what you are doing.

    Maybe a few tips or keywords, of stuff I went through step by step, for later use:

    • If you expose services to the Internet, use a reverse proxy you think you will understand (NGINX, Traefik, Caddy…)
    • Try to segment your network, if your hard- and software allow it, to separate LAN services from services exposed to the Internet
    • Start documenting your setup from the beginning! If you are like me, everything is clear as you do it… but when I come back a month later, I wonder how I set up the VLANs or what each environment setting does for a specific container etc ;-)
    • Instead of using Docker volumes, think about mapping container directories to directories on the host. All my containers have their data under /opt/<container> and all my docker-compose files are in another, separate directory (see the sketch after this list).
    • Implement a backup solution early on (I use Kopia, which backs up my compose directory and /opt; that should be everything I need to set everything up again on a new host)
    • Once you have a few containers up and running and are familiar with how they work, start using docker-compose. Having a compose file for each container makes updating and maintaining them super easy. There is an updated image for a container? Just run docker-compose up -d and you are done. You need a variation of a container for testing? Copy the compose file, make adjustments and run it.
    • I use Watchtower to automatically check whether new Docker images are available. I use it in monitoring mode: it will check for and download new images, but will not restart the containers. Instead, I receive an e-mail from Watchtower. I can then check whether the update is for a container exposed to the Internet, let Kopia do another backup run, run docker-compose up -d to restart / update the respective container, check that it still does what it should, and am done (see the sketch after this list).
    • Did I mention that you should document everything you do? If you are like me and have a memory like an earthworm, document your setup from the beginning ;-)
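
    Here is the sketch mentioned in the list above, tying the host-directory and Watchtower tips together (service names, addresses and the mail server are placeholders):

    ```yaml
    services:
      app:
        image: nginx:stable
        volumes:
          - /opt/app/html:/usr/share/nginx/html   # host path instead of a named volume

      watchtower:
        image: containrrr/watchtower
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock
        environment:
          - WATCHTOWER_MONITOR_ONLY=true       # check and notify only, never restart
          - WATCHTOWER_NOTIFICATIONS=email
          - WATCHTOWER_NOTIFICATION_EMAIL_TO=me@example.org
          - WATCHTOWER_NOTIFICATION_EMAIL_FROM=watchtower@example.org
          - WATCHTOWER_NOTIFICATION_EMAIL_SERVER=mail.example.org
    ```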

    All in all: do not rush it, and do not feel pressure to do everything I wrote. You might even come up with other solutions that fit you much better than what I or others here are doing. The most important things? Have fun, and think twice about what you expose to the public and how :-)