• 0 Posts
  • 24 Comments
Joined 1 year ago
Cake day: June 5th, 2023

  • oranki@sopuli.xyz to Selfhosted@lemmy.world · Why docker
    8 points · 8 months ago

    Portability is the key for me, because I tend to switch things around a lot. Containers generally isolate the persistent data from the runtime really well.

    Docker is not the only way, or IMO even the best way, to run containers. If I were providing services for customers, I would definitely build most container images daily in some automated way. Well, I already do that for quite a few.

    The mess is only a mess if you don’t really understand what you’re doing; the same goes for traditional services.



  • There was a good blog post about the real cost of storage, but I can’t find it now.

    The gist was that to store 1TB of data somewhat reliably, you probably need at least:

    • mirrored main storage: 2TB
    • frequent/local backup space, also at least mirrored disks: 2TB, plus more if using a versioned backup system
    • remote/cold storage backup space: about the same as the frequent backups

    That amounts to something like 6TB of disk for 1TB of actual data. In real life you’d probably use some other RAID level, at least for larger amounts, so it’s perhaps not as harsh, and compression can reduce the required backup space too.

    I have around 130G of data in Nextcloud, and the off-site borg repo for it is about 180G. Then there are local backups on a mirrored HDD; with the ZFS snapshots that haven’t been pruned yet, that’s maybe 200G of raw disk space. So 130G becomes 510G in my setup.
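
    As a rough sketch of that multiplier (the ratios are just the assumptions from the list above, not measurements):

    ```python
    # Rough raw-disk estimate per the assumptions above: mirrored main storage,
    # mirrored local backups (plus optional versioning overhead), and a remote
    # or cold copy roughly the same size as the local backups.
    def raw_disk_needed(data_tb: float, versioning_overhead: float = 0.0) -> float:
        main = 2 * data_tb                                      # mirrored main storage
        local_backup = 2 * data_tb * (1 + versioning_overhead)  # mirrored local backup disks
        remote_backup = local_backup                            # remote/cold copy, about the same size
        return main + local_backup + remote_backup

    print(raw_disk_needed(1.0))        # 6.0 TB of raw disk for 1 TB of data
    print(raw_disk_needed(1.0, 0.25))  # 7.0 TB with 25% versioning overhead
    ```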



  • oranki@sopuli.xyz to Selfhosted@lemmy.world · *Permanently Deleted*
    4 points · 11 months ago

    Wireguard runs over UDP, so the port is indistinguishable from a closed port for most common port-scanning bots. Changing the port will obfuscate the traffic a bit. Even if someone manages to guess the port, they’ll still need to use the right key; otherwise the response is the same as from a closed port: no response. Your ISP can still see that it’s Wireguard traffic if they happen to be looking, but they can’t decipher the contents.
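
    If you want to see what a scanner sees, here’s a quick sketch; vpn.example.com and port 51820 are placeholders, not anything from this setup:

    ```python
    import socket

    # Send a bogus datagram to the (assumed) WireGuard endpoint and wait for a
    # reply. WireGuard silently drops packets that aren't a valid handshake, so
    # the probe times out just like it would against a closed UDP port.
    HOST, PORT = "vpn.example.com", 51820  # placeholder host and port

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(2.0)
    sock.sendto(b"definitely not a wireguard handshake", (HOST, PORT))
    try:
        data, _ = sock.recvfrom(4096)
        print("got a reply:", data[:32])
    except socket.timeout:
        print("no reply - looks exactly like a closed port")
    finally:
        sock.close()
    ```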

    I would drop containers from the equation and just run Wireguard on the host. When issues arise, you’ll have a hard time identifying the problem when container networking is in the mix.


    I used to run everything on Pis, but then got an x86 USFF to improve Nextcloud performance.

    With the energy price madness last year in Europe, I moved most things to cloud VPSs.

    One Pi is still running Home Assistant, hooked to my heating/ventilation unit via RS485/modbus.

    I had a ZFS backup server with 2 HDDs hooked up over USB to an 8GB Pi. That is just way too unreliable for anything serious; I think I now have a lot of corrupted files in the backups. I’m looking into getting some Synology unit for that.

    For anything serious that requires file storage, I’d steer clear of USB or SD cards. After getting used to SATA performance, it’s hard to go back anyway. I’d really like to use the Pis, but family photo backups turning gray due to bitflips is unacceptable.

    They are a great entry point to self-hosting and the Linux world though!




  • In my limited experience, when Podman seems more complicated than Docker, it’s because the Docker daemon runs as root and can by default do stuff Podman can’t without explicitly giving it permission to do so.

    99% of the stuff self-hosters run on regular rootful Docker can run with no issues using rootless Podman.

    Rootless Docker is an option, but my understanding is that most people don’t bother with it, whereas with Podman rootless is the default.

    Docker is good, Podman is good. It’s like comparing distros: different tools for roughly the same job.

    Pods are a really powerful feature though.


  • Even though you said “isn’t Nextcloud”, I’d still say it’s perhaps the simplest solution.

    You can disable most of the other apps and set Calendar as the landing page. If you don’t use the other features, the resource usage is very low: just a cron job that does basically nothing. I don’t think disabling the default apps has much effect on the footprint, by the way.

    Calendar, contacts and notes are why I still self-host Nextcloud. Just remember to pay/donate to DAVx5; it’s one of the projects that need to keep running!






    A couple of things I’ve found out:

    • Gmail seems to need your server to have IPv6 with a PTR record, even if the mail is sent over IPv4
    • Even a DMARC record with no ruf or rua helps lower the spam score
    • For Outlook you need to send some mail to yourself or someone else and manually mark the messages as not spam for a while
    • MS365 will even put mail from Gmail in spam initially
    • Some TLDs like .xyz will go to spam even if everything is set up perfectly
    • Outlook also seems to cache DNS for quite a long time; you may need to wait a day for changes to propagate
    • A recently registered domain will land in spam more easily; a domain that has been registered for a while seems to fare better

    If you’re not already familiar with these, https://mxtoolbox.com/SuperTool.aspx (enter smtp:your.mx.record) is a good tool, and I’ve also used https://www.mail-tester.com/. The MXToolbox blacklist check is also good.
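
    If you’d rather script the basic checks, here’s a small sketch using dnspython; example.com is just a placeholder domain:

    ```python
    import dns.resolver  # pip install dnspython

    # Quick check that SPF and DMARC TXT records exist for a domain.
    domain = "example.com"  # placeholder domain, swap in your own

    for name, label in ((domain, "SPF"), (f"_dmarc.{domain}", "DMARC")):
        try:
            answers = dns.resolver.resolve(name, "TXT")
            records = [b"".join(r.strings).decode() for r in answers]
            relevant = [r for r in records if "spf1" in r or "DMARC1" in r]
            print(f"{label}: {relevant or records}")
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            print(f"{label}: no record found for {name}")
    ```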

    I hate that spammers have made hosting email such a hassle. Hope you get everything running!




  • No need to apologize.

    You’d create a CNAME for myservice.mydomain.com that points to proxynearorigin.cloudflare.com.

    proxynearorigin.cloudflare.com contains the A and AAAA records for the reverse proxy servers. When you do a DNS query for myservice.mydomain.com, it will (eventually) resolve to the CF proxy IPs.

    The CF proxies see from the traffic that you originally requested myservice.mydomain.com and serve your content based on that. This still requires you to tell Cloudflare where the origin server is so the reverse proxies can connect to it.

    On the free tier, instead of the CNAME, you set the origin server’s IP as the A and/or AAAA record. Enabling the proxy service then changes this so that when someone makes a DNS query for myservice.mydomain.com, they get the proxy addresses directly as A and AAAA records, leaving the IP you originally configured known only to Cloudflare internally.
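
    To see the chain for yourself, here’s a small dnspython sketch using the myservice.mydomain.com placeholder from above; what you get back depends on how the zone is set up:

    ```python
    import dns.resolver  # pip install dnspython

    # Print what public DNS actually returns for a hostname: a CNAME to the
    # proxy host, or just the proxy's A/AAAA addresses when the record has
    # been flattened, as on the free tier.
    name = "myservice.mydomain.com"  # placeholder from the example above

    for rtype in ("CNAME", "A", "AAAA"):
        try:
            for rdata in dns.resolver.resolve(name, rtype):
                print(f"{name} {rtype} -> {rdata.to_text()}")
        except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
            print(f"{name} {rtype} -> (no record)")
    ```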

    It’s hard to explain this, and since I don’t work at Cloudflare the details may be off too. The best way to get an idea is to play around with something like NGINX and run a local DNS server (Bind, Unbound, dnsmasq, PiHole…) and see for yourself how the DNS system works.

    A CDN isn’t really related to DNS at all. In the case of the CF free tier, it’s actually more like caching static content, which is technically a bit different. A CDN is a service that replicates said static content to multiple locations on high-performance servers, allowing the content to always be served from close to the client. Where DNS comes in is that Anycast is probably used, and cdn.cloudflare.com actually resolves to different IPs depending on where the DNS query is made from.

    There’s also the chance that I don’t actually know what I’m talking about, but luckily someone will most likely correct me if that’s the case. :)



    The reason for having to use their nameservers is probably about getting some data in the process. But DNS queries are quite harmless compared to the MITM issue with the actual traffic.

    Traffic proxied via CF uses their TLS certificates. Look up how HTTPS works, and you’ll understand that it means the encryption is terminated at Cloudflare.
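
    You can verify this from the client side by looking at the certificate a proxied site actually presents. A small sketch with just the standard library; myservice.mydomain.com is a placeholder hostname:

    ```python
    import socket
    import ssl

    # Print the subject and issuer of the certificate a proxied site presents.
    # For a Cloudflare-proxied hostname this is a certificate served by
    # Cloudflare's edge, which is why the TLS session terminates there.
    hostname = "myservice.mydomain.com"  # placeholder hostname

    context = ssl.create_default_context()
    with socket.create_connection((hostname, 443), timeout=5) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname=hostname) as tls_sock:
            cert = tls_sock.getpeercert()
            print("subject:", dict(item[0] for item in cert["subject"]))
            print("issuer: ", dict(item[0] for item in cert["issuer"]))
    ```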

    For the record, CF DNS infrastructure is really solid. For something already public anyway, I’d use their services in a heartbeat. You get some WAF features and can add firewall rules like geoblocking, even on the free tier.

    For sensitive data, I probably wouldn’t use the proxy service.