• 1 Post
  • 1.29K Comments
Joined 3 years ago
Cake day: June 14th, 2023




  • And so many of these “common men” still seem to really believe that no matter what he actually says or does, all that matters is that he talks like the person they imagine him to be, which they believe means he unequivocally understands and cares about them and can do no wrong. He really does love the poorly educated, and you can see why.

    The reality distortion field Trump supporters seem to be trapped in is rapidly approaching the strength of a black hole. I’m not sure what happens when it all collapses and they all fall past the event horizon, but I’ll certainly be glad if they can’t escape and we never have to hear from most of them ever again.


  • Basically, that’s not where the farmland is (or, when the area was first being settled, the fur, which provided the major economic incentive for settling it in the first place). You also have to think about how the land was settled. Settlers from the east used mountain valleys to get around, and the mountain valleys in that circled area aren’t easily traversable and don’t lead anywhere useful. Settlers from the southwest used ships and followed shipping routes up the coast. Consider both settlement methods together (and they were in fact used almost simultaneously) and you’ll come to the conclusion that these are some of the most remote areas settled in the continental US, and their relative remoteness has a lot to do with why they were settled the way they were.

    Meanwhile, from the perspective of a ship sailing up the coast, there are few good protected anchorages directly along the coast to use as a sheltered waystation or safe harbor in case of inclement weather. But go just a little further and you’ll reach good port lands (it’s literally called “Portland”) or Seattle, so you might as well journey a little further and stop there instead if you possibly can. For people taking the long and perilous journey around Cape Horn (there was no Panama Canal yet), this was almost the end of the line; nobody wants to stop 99% of the way there when they’re that close, so they pushed on to the end, and that’s why Portland, Seattle and Vancouver developed where they did. The farmland got worse and increasingly unsustainable the further north you went, so nobody went much further until the gold rush provided yet another economic incentive to draw people there, but that’s a different story.







  • cecilkorik@lemmy.ca to Programming@programming.dev · I just tried vibe coding with Claude
    edited 3 days ago

    No, I think you do get it. That’s exactly right. Everything you described is absolutely valid.

    Maybe the only piece you’re missing is that “almost right, but critically broken in subtle ways” turns out to actually be more than good enough for many people and many purposes. You’re describing the “success” state.

    /s but also not /s because this is the unfortunate reality we live in now. We’re all going to eat slop and sooner or later we’re going to be forced to like it.


  • You can do all those things with proper routing, and there is no difference for mobile devices (as long as they use DHCP, and what mobile device wouldn’t?). What I’m suggesting changes nothing on the public side. You still authenticate publicly to renew your certificates. You still serve the same certificates on both the public and local networks. They’re still valid. Nothing changes.

    The only difference is that when you’re local, your DNS gives you the correct local IP address where that service is hosted, say 192.168.12.34, instead of using public DNS, getting an external IP that’s on the wrong side of the router, and having to go outside your own network and come back in. Hairpin routing is like that Simpsons episode where Abe goes in the revolving door, takes off his hat, puts his hat back on, and goes back out the same revolving door in the span of 2 seconds. It’s pointless. If you didn’t want to be on the outside of the network, why go to the outside of the network first? Just stay inside the network. Get the right IP. No hairpin routing needed, no certificate madness needed. Everything just works the way it’s supposed to (because this is in fact the way it’s supposed to work).
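    A minimal sketch of what that local DNS override can look like, assuming dnsmasq as the LAN resolver (the hostname and upstream resolver here are examples; 192.168.12.34 is the address from above):

    ```ini
    # /etc/dnsmasq.conf — split-horizon sketch; names are placeholders.
    # Local clients asking for this name get the LAN address directly,
    # instead of the public IP on the wrong side of the router.
    address=/photos.mydomain.com/192.168.12.34

    # Everything else still resolves normally via an upstream resolver.
    server=1.1.1.1
    ```

    Public clients never see this resolver, so the public DNS records and certificates stay exactly as they were.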



  • cecilkorik@lemmy.ca to Selfhosted@lemmy.world · Hairpin dns issue
    edited 3 days ago

    I’m not too familiar with unraid, but from a little research I just did it seems like you’re right. That does seem like a really unfortunate design decision on their part, although the unraid fans seem to defend it. I guess I can’t be an unraid fan, then, and I probably can’t help you in that case. If it were me, I would try to move unraid to its own port (like all the other services) and install a proxy I control on port 443 in its place, and treat it like any other service. But I have no idea if that is possible or practical in unraid. I do make opinionated choices, and my opinion is that unraid is wrong here. Oh well.


  • cecilkorik@lemmy.ca to Selfhosted@lemmy.world · Hairpin dns issue
    edited 4 days ago

  • I’d argue that your internally hosted sites should not be published on ports other than 80/443. Published is the key word here: the sites themselves can run on whatever ports you want, and if you want to access them directly on those ports you can. But when you’re publishing them and exposing them to the public, you don’t want to be dealing with dozens of different services each implementing their own TLS stack and certificate authorities and using god-knows-what rules for security and authentication. You use a proxy server to publish them properly. And there’s no reason you can’t or shouldn’t use that same interface internally too. Even though you technically might be able to reach the actual ports the services run on from your local network, you really probably shouldn’t, for a lot of reasons; if you can, consider locking that down and making those services ONLY listen on 127.0.0.1 or isolated docker networks, so nothing outside the proxy host itself can reach them.

    If you don’t want your services to listen on 80/443 themselves, that’s reasonable and good practice, but something should be listening there, and it should handle those ports responsibly and authoritatively, directing incoming traffic where it needs to go no matter the source. Even if (or especially if) you need to share those ports among various other services, you need something to operate them as a proxy (caddy, nginx, even Apache can all do this easily). 443 is the https port, and in the https-only world we should all be living in, all public https traffic should use that port, with the TLS connection properly terminated there by a service designed to do it. This simplifies all sorts of things, including domain name management and certificate management.

    tl;dr You should have a proxy that publishes all your services on port 443 according to their domain name. When https://photos.mydomain.com/ comes in, it hits port 443 and the service on port 443 sees it’s looking for “photos”, handles the certificates for photos, and then decides that immich is where it is going and proxies it there, which is none of anyone else’s business. Everyone, internal or external, goes through the same, consistent, and secure port 443 entrance to your actual web of services.
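    A minimal sketch of that single entrance, assuming Caddy as the proxy (the domain names and backend addresses/ports are examples, not anything from the thread; Caddy listens on 443 by default and manages the certificate for each name automatically):

    ```caddyfile
    photos.mydomain.com {
        # Caddy terminates TLS here with a certificate for this name,
        # then forwards the request to the backend service. The
        # backend's real port is never exposed to anyone.
        reverse_proxy 127.0.0.1:2283
    }

    music.mydomain.com {
        reverse_proxy 127.0.0.1:4533
    }
    ```

    Internal and external clients both hit the same port-443 entrance; only the proxy knows (or cares) where each service actually lives.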




  • Hot take: Manipulative and mentally destructive social media algorithms are the reason your sleep is disrupted. It’s what is on the screens that is the problem, not what color it is.

    But of course, the tech companies would rather have you blame the color of the screen than their own products. I’m sure they loved adding those color-shifting features to their next products too. Not only do they avoid the blame, they get to sell you the “solution”.


  • This is typically called “continuous deployment” or “CD”, a close neighbor to “continuous integration” or “CI”, and you will find that this is a very deep rabbit hole.

    It’s intentionally roundabout, because making that process too direct and automated has security implications. You don’t really want to give your forgejo repository root command-line access to the machine it’s running on (and it doesn’t want you to do that either). Good software like Forgejo doesn’t trust itself, never mind its users, and sets things like this up so they have to pass through various gates that control what’s happening a little more carefully and explicitly. At the end of the day it’s always potentially dangerous to run automatic code deployments like this, but adding extra hoops to jump through is one way of putting up barriers against someone trying to profoundly violate your machine. There’s a swiss cheese model of security going on here: yes, there are holes in each of the slices, but unless the holes in all the different slices happen to line up, an attacker can’t get through.

    With that said, there are tons of CD options out there and it’s totally possible to roll your own, especially for a simple use-case like this, but forgejo runners are absolutely the easiest and most native way of handling it. They follow the github actions configuration almost perfectly (for better or worse, it’s become the standard now, god save us all). The initial setup is a bit front-loaded, but once you’ve got your runner connected, you’re laughing. Smooth as silk. Don’t worry too much about the “risks” side of the setup: if this is truly a single-user Forgejo where you’re not letting other people create repos, and you’re not blindly copying other people’s repos or accepting dangerous PRs, the risks are minimal. You’re the only one running actions on it, so you can give it access to the same machine Forgejo is on without too much worry. You’re poking a few holes in the swiss-cheese security model, but us self-hosters have gotta do what we’ve gotta do with our limited resources.

    Once the runner’s connected, just pretend you’re dealing with github actions from that point forward. Set the “runs-on” attribute to whatever you tagged your runner with, and either use native github actions directly from github (they’re also mirrored on forgejo.org, for example https://code.forgejo.org/actions/setup-go, or you can mirror them yourself), or avoid pre-packaged actions entirely and just script your heart out with straight bash commands. It’ll run and do whatever you tell it to: it’ll pull down the latest copy of the repo and deploy it wherever and however you want, and you can have it run deployment scripts saved inside the repo itself, whatever you need to do to get it deployed.
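    A minimal sketch of such a workflow, under the assumption that the runner was registered with the label “docker” and that the repo carries its own deploy script (every name and path here is a placeholder; Forgejo picks workflows up from .forgejo/workflows/ in the repo):

    ```yaml
    # .forgejo/workflows/deploy.yml — example only
    on:
      push:
        branches: [main]

    jobs:
      deploy:
        # Must match a label you assigned when registering the runner.
        runs-on: docker
        steps:
          # checkout is one of the actions mirrored on code.forgejo.org.
          - uses: actions/checkout@v4
          # Example deploy step: run a script kept inside the repo itself.
          - name: Deploy
            run: ./scripts/deploy.sh
    ```

    Everything under `steps` is ordinary github-actions syntax, so existing workflow examples translate over almost unchanged.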


  • XMPP/Jabber and Matrix both support full end-to-end encryption. Matrix has more cool modern features and a slicker UI, but a brutally complex architecture if you want to self-host it. Matrix.org is available, but since it’s pretty centralized it’s likely to get blocked. XMPP is simple and self-hostable. Both protocols are pretty niche, and except for matrix.org, most of the providers that use them are extremely niche; I’d say XMPP is on the whole significantly more niche. My condolences on your family being in Russia. The warmongering fascists must be stopped. Good luck, hopefully everyone can stay safe.