Between basically every process being done on paper, and most of the civil servants having no idea what an operating system is, I’m sure this will go great.
It’s kinda standard, but Pi-hole is how I got into the general realm of home labbing.
Political means more than just parties and institutions of government. Society and the economy are inherently political. Who owns what is produced, and the tools used to produce it, is inherently political. Therefore software development, just like any other type of work or economic interaction, is political.
Because universal surveillance is more profitable than consumer privacy, and surveilling consumers aligns really well with the interests of the billionaires that control telecommunications.
I like btop. It’s pretty. I just use it for checking resource usage, I rarely have the need to kill a process or anything else one may do with a system monitor.
The switch to Forgejo is super easy: if you don’t mind everything being called “Gitea”, you can just swap out the Docker image and carry on (rough sketch below).
I just switched recently, maybe around version 1.19.
Forgejo is also working on federation, which should give it an advantage going forward. They’re also keeping Gitea as an upstream source, so reasonable changes Gitea makes should land in Forgejo pretty quickly.
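For anyone running it straight through Docker, the swap looks roughly like this. A sketch using the Docker SDK for Python; the container name, tag, ports, and volume name are just typical-Gitea-setup placeholders, not anything specific to this thread:

```python
import docker  # pip install docker

client = docker.from_env()

# Reuse the same named volume and port mappings the old Gitea container had;
# the only thing that changes is the image reference.
client.containers.run(
    "codeberg.org/forgejo/forgejo:1.19",   # was e.g. "gitea/gitea:1.19"
    name="forgejo",
    detach=True,
    ports={"3000/tcp": 3000, "22/tcp": 222},
    volumes={"gitea-data": {"bind": "/data", "mode": "rw"}},
)
```

If you use docker-compose instead, it’s the same idea: change the `image:` line and keep your existing volumes.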
Without more info it’ll be hard to help.
I got it working in principle, but the Raspberry Pi I wanted to host it on isn’t powerful enough to handle the necessary computing.
I really like what MikroTik offers. Their gigabit routers start at maybe €40 and ship with the incredibly powerful RouterOS.
A mini-PC with pfSense would offer similar features with more processing power, but if you already have a homelab you don’t need to do much processing on the router itself.
You could rsync to a directory shared on the local network, like a Samba share or similar. It’s a bit slower than SSH, but for regular incremental backups you probably won’t notice any difference, especially when it runs in the background on a schedule.
Alternatively, use an SSH key without a passphrase, as already suggested.
You can also write a shell script that rsyncs all of your desired directories in one go, rather than running one command per file.
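Something along these lines; a minimal Python sketch (a few-line shell script would do the same job), where the source directories and the mounted share path are made-up examples:

```python
#!/usr/bin/env python3
"""Incremental backup of several directories to a mounted network share."""
import subprocess

# Hypothetical source directories and destination (e.g. a mounted Samba share).
SOURCES = ["/home/user/documents", "/home/user/photos", "/etc"]
DEST = "/mnt/backup-share/"

for src in SOURCES:
    # -a: archive mode (recursion, permissions, times); --delete removes files
    # from the destination that no longer exist at the source.
    subprocess.run(["rsync", "-a", "--delete", src, DEST], check=True)
```

Drop it in cron and it runs on whatever schedule you like without you touching it.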
I’m not familiar with how VLC manages LAN streaming, but SMB is the file-sharing protocol and Samba is the server you have to set up yourself. Just search “set up samba share ubuntu”.
I tried migrating my personal services to Docker Swarm a while back. I have a Raspberry Pi as a 24/7 machine but some services could use a bit more power so I thought I’d try Swarm. The idea being that additional machines which are on sometimes could pick up some of the load.
Two weeks later I gave up and rolled everything back to running specific services or instances on specific machines. Making sure the right data was available on all machines at all times, plus the networking between dependencies and, in some cases, specifying which machine a service should prefer, turned out to be far too complex and messy.
That said, if you want to learn Docker Swarm or Kubernetes and distributed filesystems, I can’t think of a better way.
I’d run it with Docker. The official documentation looks sufficient to get it up and running. I’d add a database backup to the stack as well, and save those backups to a separate machine.
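As a rough idea of what that backup step could look like, assuming the stack uses PostgreSQL in a container; every name and path here is a placeholder, not something from the actual docs:

```python
#!/usr/bin/env python3
"""Dump the service's database and copy it to a second machine.

Assumes a PostgreSQL container named "app-db" and an SSH-reachable backup
host with a non-interactive key; all names are placeholders.
"""
import subprocess
from datetime import date

dump_file = f"/tmp/app-{date.today()}.sql"

# Dump the database from inside the container to a dated file on the host.
with open(dump_file, "w") as f:
    subprocess.run(
        ["docker", "exec", "app-db", "pg_dump", "-U", "app", "app"],
        stdout=f, check=True,
    )

# Ship the dump to a separate machine.
subprocess.run(["rsync", "-a", dump_file, "backup-host:/srv/backups/"], check=True)
```

Run it nightly from cron and keep a handful of dated dumps around on the other machine.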
A Pi 4 draws maybe 5 W most of the time. 24/7 operation at 5 W is essentially your running cost (roughly 44 kWh per year), not counting the cost of the Pi itself, your internet connection, and any time you spend on maintenance.
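For a rough sense of the numbers (the electricity price is just an assumed example, plug in your own):

```python
# Back-of-the-envelope running cost for a ~5 W machine running 24/7.
watts = 5
hours_per_year = 24 * 365
kwh_per_year = watts * hours_per_year / 1000   # ≈ 43.8 kWh
price_per_kwh = 0.30                           # assumed electricity price in €/kWh
print(f"{kwh_per_year:.1f} kWh/year ≈ €{kwh_per_year * price_per_kwh:.2f}/year")
```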
I didn’t even look to see if the one I linked was a fork. I’m glad it works!
A cool thing about Dockerfiles is that they’re usually architecture agnostic. I think the one I linked is as well, meaning that the architecture is only locked in when the image is built for a specific one. In this case the repo owner probably only built it for arm machines, but a build for x86_64 should work as well.
Hetzner may have the thing for you. IIRC their VPS options don’t have that much storage, but their storage plans are super cheap and easily connect to the VPS.
Building images is easy enough. It’s pretty similar to how you’d install or compile software directly on the host. Just write a Dockerfile that runs the hide.me install script. I found this repo and image which may work for you as is or as a starting point.
When you run the image as a container you can set it up as the network gateway; just find a tutorial on setting up a WireGuard container and swap WireGuard for your hide.me container.
In terms of kill switches you’d have to see how other people have done it, but it’s not impossible.
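One crude approximation I can imagine (not a proper firewall-level kill switch, and every name here is a placeholder): periodically check the public IP as seen from inside the gateway container, and stop anything that must not leak if it comes back as your home address or not at all. Assumes curl exists in the gateway image.

```python
#!/usr/bin/env python3
"""Crude watchdog: stop dependent containers if traffic stops going through the VPN."""
import subprocess

HOME_IP = "203.0.113.10"    # your ISP-assigned address (example value)
GATEWAY = "hideme-gateway"  # the VPN gateway container (placeholder name)
DEPENDENT = ["qbittorrent"] # containers that must only talk through the VPN

# Ask for the public IP as seen from inside the gateway container.
result = subprocess.run(
    ["docker", "exec", GATEWAY, "curl", "-s", "https://ifconfig.me"],
    capture_output=True, text=True,
)
current_ip = result.stdout.strip()

# If the tunnel is down we either see the home IP or get no answer at all;
# in both cases stop the containers that depend on the VPN.
if result.returncode != 0 or current_ip == HOME_IP:
    for name in DEPENDENT:
        subprocess.run(["docker", "stop", name], check=False)
```

Run it from cron every few minutes; a real kill switch would be firewall rules inside the gateway container instead, which is where looking at how other people did it comes in.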
I started my Linux journey with a Raspberry Pi and Debian based PiOS four years ago and I haven’t felt the need to mess with that. Since then I have added other machines running other distros, but the Pi running PiOS is always on and always reliable.
If compromising the privacy of millions of people is an acceptable trade-off for preventing an incredibly infrequent act, one that can already be perpetrated by other, more anonymous means and can be easily mitigated by various socio-economic policies, have at it I guess.
I set it up using a Docker image based on the older Firefox Sync repo. It’s outdated but it works. What I don’t self-host is authentication, as it’s way more involved than I prefer my self-hosting projects to be, and I’d probably end up frustrated by some little thing not working.
Precisely this. The fuss about Chinese telecom hardware spying on you is made up by US intelligence because they want to be the ones who get to spy on you and keep their backdoors in your products.
Digital surveillance is omnipresent in the west. Apparently nobody cares.