Here I sit, my heart broken // Tried to take a shit, but only let out a fart

  • 0 Posts
  • 39 Comments
Joined 1 year ago
Cake day: June 5th, 2023

  • No, it’s 100% economics. Why do you think that having “careers, lives and travel” (as if having a family is not having a life?) is more appealing to modern first worlders? Because it doesn’t impact their finances severely. Having more children in impoverished countries is a financial gain because children are free labor and lottery tickets to get the entire family out of poverty. In wealthy countries, children are only a financial loss.

  • I think it’s sad how many of the comments are sharing strategies for gaming the Youtube algorithm, instead of suggesting ways to avoid interacting with the algorithm at all and to curate content on your own.

    The algorithm doesn’t actually care that it’s promoting right-wing or crazy conspiracy content; it promotes whatever keeps people’s eyeballs on Youtube. The fact is that this will always be the most enraging content. Using the “not interested” and “block this channel” buttons doesn’t make the algorithm stop trying to push this content; you’re teaching it to improve its strategy for manipulating you!

    The long-term strategy is to get people away from engagement algorithms. Introduce OP’s mother to a patched Youtube client that blocks ads and algorithmic feeds (Revanced has this). “Youtube with no ads!” is an easy way to convince non-technical people. Help her subscribe to safe channels and monitor what she watches.

  • Why do you have to use NGINX? Caddy does the proxying to the Lemmy containers for you. That docker-compose.yml file is my entire deployment; there is no hidden NGINX container or config file that needs to be added. Just remove your broken Lemmy deployment and its containers with docker compose down, then run docker compose up on my docker-compose.yml (after you edit the postgres variables) with config.hjson in the same folder:
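
    Roughly, something like this, run from the folder holding my docker-compose.yml and your config.hjson (the flags are illustrative):

    # stop the broken deployment and remove its containers and network
    docker compose down --remove-orphans

    # edit the POSTGRES_* variables in docker-compose.yml first, then
    # start the new stack in the background
    docker compose up -d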

  • Oh shit, I forgot that your Caddy would be running on a bridge network by default, because mine is on the host network where all ports are already exposed to it! (It’s generally a bad idea to use the host network, so don’t do this if you’re only using Caddy with containers on the same network.) I edited the Gist to expose 80 and 443 for HTTP/S on that container; the updated file is at the same Github link. Really sorry about that!
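
    For reference, the fix is just publishing those two ports on the caddy service, along these lines:

    services:
      caddy:
        image: caddy:2
        ports:
          - 80:80   # HTTP (Let's Encrypt challenges and redirects)
          - 443:443 # HTTPS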

  • Yeah, the config file in the documentation sucks. I had to poke through several discussions on /c/selfhosting to find a config that wasn’t the extremely minimal one linked in the documentation. Your config.hjson is fine from what I can tell, although I’m not sure why you censored the hostname there, since it’s supposed to be lemmy.emphisia.nl and not anything confidential.

    Honestly, I don’t have enough understanding of NGINX to debug its config, so I’ll just share my docker-compose.yml for leddit.danmark.party, which worked correctly and federated out of the box, with a few adjustments to match your deployment. Note that you’ll have to tear down your existing deployment if you want to use this docker-compose.yml, because they use the same ports. A sketch of a matching config.hjson follows the compose file.

    I should probably self-host my own pastebin
    version: "3.9"
    x-logging:
      &default-logging
      options:
        max-size: '10m'
      driver: json-file
    
    services:
      caddy:
        image: caddy:2
        ports:
          - 80:80   # HTTP
          - 443:443 # HTTPS
        volumes:
          - ./volumes/caddy:/data
          - ./volumes/caddy:/config
        # See Caddy's documentation for customizing this line
        # https://caddyserver.com/docs/quick-starts/reverse-proxy
        command:
          - /bin/sh
          - -c
          - |
            cat <<EOF > /etc/caddy/Caddyfile && caddy run --config /etc/caddy/Caddyfile
            
            {
              debug
            }
            
            (common) {
            	encode gzip
            	header {
            		-Server
            		Strict-Transport-Security "max-age=31536000; include-subdomains;"
            		X-XSS-Protection "1; mode=block"
            		X-Frame-Options "DENY"
            		X-Content-Type-Options nosniff
            		Referrer-Policy no-referrer-when-downgrade
            		X-Robots-Tag "none"
            	}
            }       
            
            # Lemmy instance
            lemmy.emphisia.nl {
              log
              import common
              # Anything not matched below is served by the frontend
              reverse_proxy http://lemmy-ui:1234 # lemmy-ui

              # API, media, feeds, and federation endpoints
              @lemmy {
                path /api/*
                path /pictrs/*
                path /feeds/*
                path /nodeinfo/*
                path /.well-known/*
              }

              # ActivityPub requests are identified by their Accept header
              @lemmy-hdr {
                header Accept application/*
              }

              # Federation deliveries arrive as POSTs
              @lemmy-post {
                method POST
              }

              # 8536 is Lemmy's port inside the container; 8085 is only the
              # published host port and isn't reachable over the bridge network
              handle @lemmy {
                reverse_proxy http://lemmy:8536 # lemmy
              }

              handle @lemmy-hdr {
                reverse_proxy http://lemmy:8536
              }

              handle @lemmy-post {
                reverse_proxy http://lemmy:8536
              }
            }
            EOF
      lemmy:
        image: dessalines/lemmy:0.18.1-rc.9
        ports:
          - 8085:8536 # host port 8085 -> Lemmy's port 8536 inside the container
        volumes:
          # your config.hjson, from the same folder as this file
          - ./config.hjson:/config/config.hjson
        depends_on:
          - postgres
          - pictrs
        restart: always
        logging: *default-logging

      lemmy-ui:
        image: dessalines/lemmy-ui:0.18.1-rc.9
        ports:
          - 1234:1234
        environment:
          # container-to-container traffic uses Lemmy's internal port
          - LEMMY_UI_LEMMY_INTERNAL_HOST=lemmy:8536
          # the public hostname of the instance
          - LEMMY_UI_LEMMY_EXTERNAL_HOST=lemmy.emphisia.nl
        depends_on:
          - lemmy
        volumes:
          - ./volumes/lemmy-ui/extra_themes:/app/extra_themes
        restart: always
        logging: *default-logging

      postgres:
        image: postgres:15-alpine
        ports:
          - 5432:5432
        environment:
          # edit these, and keep them in sync with config.hjson
          - POSTGRES_USER=MyPostgresUser
          - POSTGRES_DB=MyPostgresDb
          - POSTGRES_PASSWORD=MyPostgresPassword
        volumes:
          - ./volumes/postgres:/var/lib/postgresql/data
        restart: always
        logging: *default-logging

      pictrs:
        image: asonix/pictrs:0.4.0-rc.7
        user: 991:991
        hostname: pictrs
        environment:
          - PICTRS__MEDIA__VIDEO_CODEC=vp9
          - PICTRS__MEDIA__GIF__MAX_WIDTH=256
          - PICTRS__MEDIA__GIF__MAX_HEIGHT=256
          - PICTRS__MEDIA__GIF__MAX_AREA=65536
          - PICTRS__MEDIA__GIF__MAX_FRAME_COUNT=400
        volumes:
          - ./volumes/pictrs:/mnt
        restart: always
        logging: *default-logging

      postfix:
        # outbound mail relay for signup and notification emails
        image: mwader/postfix-relay
        environment:
          - POSTFIX_myhostname=lemmy.emphisia.nl
        restart: always
        logging: *default-logging
    
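    And since the example in the documentation is so minimal, here’s a sketch of a config.hjson that matches the compose file above (the credentials are the same placeholders as in the compose file, and the from-address is made up; change both):

    {
      # must match the POSTGRES_* variables in docker-compose.yml
      database: {
        host: postgres
        user: "MyPostgresUser"
        password: "MyPostgresPassword"
        database: "MyPostgresDb"
      }
      # public domain of the instance, no scheme
      hostname: "lemmy.emphisia.nl"
      # internal address of the pictrs container
      pictrs: {
        url: "http://pictrs:8080/"
      }
      # outbound mail through the postfix container
      email: {
        smtp_server: "postfix:25"
        smtp_from_address: "noreply@lemmy.emphisia.nl"
        tls_type: "none"
      }
    }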

  • I don’t use NGINX as my proxy server, but it’s a bit strange that you would need two configs for this when mine runs perfectly with one config and two open ports (:8536 for Lemmy-BE and :1234 for Lemmy-UI). And why are you using different versions of Lemmy-BE (0.18.1-rc.9) and Lemmy-UI (0.18.1-rc.4)?

    If you are using the default docker-compose.yml from the Lemmy repo, that part of the NGINX config uses http:// plus the name of the Docker container. Which port you give NGINX depends on where it runs: as a container on the same Docker network, it needs the container’s internal port (the number on the right side of the colon in ports:, like 5678 in 1234:5678), because the number on the left is the host port, which only exists outside of Docker. See the sketch below.

    If it’s still broken after you correct the NGINX config, what are your docker-compose.yml and config.hjson like? There are several versions of them floating around, and you might have combined incompatible versions with each other.
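
    To make the port mapping concrete, here’s a minimal sketch with made-up ports (the service name and numbers are illustrative):

    services:
      lemmy:
        ports:
          - 1234:5678 # host port 1234 -> container port 5678
    # NGINX on the host proxies to the host port:
    #   proxy_pass http://localhost:1234;
    # NGINX in the same compose network uses the container name and port:
    #   proxy_pass http://lemmy:5678;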

  • I just look to the microblogging side of the network (which has about 10 million total users) as a case study.

    The ideal situation? More nodes are added to the network to spread the load and control away from a few very large and very expensive instances. The realistic situation? Some instances manage to secure external funding (such as mastodon.social) and grow extremely large at the expense of smaller instances that shut down from a lack of users and funding. Decentralized protocols like the fediverse and email are not immune to centralization thanks to lazy users who join the biggest instance. My pessimistic outlook is that the Fediverse will eventually become like email, with a few very big instances and a lot of spam making it difficult for smaller instances to enter the network. Enjoy the fresh new internet feeling while it lasts and move on when the platform starts to decay.