ChaCha20-Poly1305 and CBC with Encrypt-then-MAC ciphers are vulnerable to a MITM attack.
Saved you a click.
Saying you don’t need privacy because you have nothing to hide is the same as saying you don’t need freedom of speech because you have nothing to say.
Came here to say this. fwupd is so good, it’s almost magic, and good vendors will actually support it themselves.
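For anyone who hasn’t tried it, the whole flow is usually just a few commands:

fwupdmgr refresh        # pull the latest firmware metadata
fwupdmgr get-updates    # list devices that have pending firmware updates
fwupdmgr update         # apply them (some updates finish on the next reboot)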
I have never heard of a router phoning home to report traffic.
I wouldn’t be quick to assume that this means a failing disk. There would probably be more sporadic issues if this were the case.
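Before blaming the hardware, it’s worth pulling the SMART data directly. Assuming smartmontools is installed and the disk is /dev/sda (adjust for your setup):

sudo smartctl -H /dev/sda    # quick overall health verdict
sudo smartctl -a /dev/sda    # full attribute dump; reallocated and pending sector counts are the ones to watch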
Oh you mean battery life?
What do you mean by “has a great runtime?”
Or any OS that uses UEFI. Or UEFI without an OS. So basically UEFI and not Windows or Linux at all.
Why search for error codes when your operating system has every opportunity to tell you what’s wrong?
My least favorite thing about Windows, above all things, is that it’s extremely difficult to discover what’s wrong with it. In most cases, people just try random things until something works.
Thanks for reducing the clickbait.
I think Docker is a tool, and it depends on how you implement said tool. You can use Docker in ways that make your infra more complicated, less efficient, and more bloated with little benefit, if not a loss of benefits. You can also use it in a way that promotes high uptime, fail-overs, responsible upgrades, etc. Just “Docker” as-is does not solve problems or introduce problems. It’s how you use it.
Lots of people see Docker as the “just buy a Mac” of infra. It doesn’t make all your issues magically go away. Personally, I have a good understanding of what my OS is doing and of what software generally needs in order to run well. So for personal stuff, where downtime for upgrades means that I, myself, can’t use a service while it’s upgrading, I don’t see much benefit to Docker. I’m also happy to solve problems if I run into them.
However, in high-uptime environments, I would probably set up a k8s environment with heavy use of Docker. I’d implement integration tests with new images and ensure that regressions aren’t being introduced as things go out with a CI/CD pipeline. I’d leverage k8s to do A-B upgrades for zero-downtime deploys, and depending on my needs, I might use an Elastic stack.
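For a rough idea of what that looks like in practice (the deployment and image names here are made up), a rolling update with kubectl is basically:

kubectl set image deployment/myapp myapp=registry.example.com/myapp:v2   # roll out the new image
kubectl rollout status deployment/myapp                                  # watch pods cycle over with no downtime
kubectl rollout undo deployment/myapp                                    # one-line rollback if the new version misbehaves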
So personally, my use of Docker would be for responsible shipping and deploys. Docker or not, I still have an underlying Linux OS to solve problems for; they’re just housed inside a container. It could be argued that you could use a first-party upstream Docker image for less friction, but in my experience, I eventually want to tweak things, and I would rather roll my own images.
For SoC boards, resources are already at a premium, so I prefer to run on metal for most of my personal services. I understand that we have very large SoC boards that we can use now, but I still like to take a simpler, minimalist approach with little bloat. Plus, it’s easier to keep track of things with systemd services and logs anyway, since it uniformly works the way it should.
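That uniformity is the part I appreciate on bare metal: one consistent way to check on everything (service name here is just an example):

systemctl status myservice.service      # is it running, and when did it last restart?
journalctl -u myservice.service -f      # follow that service’s logs live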
Just my $0.02. I know plenty of folks would think differently, and I encourage that. Just do what gives you the most success in the end 👍
NEVER CLICK THESE ↪️
Vim handles remote files over SCP natively:
vim scp://192.168.1.2//data/editme.txt
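(The double slash after the host makes the path absolute; a single slash would be relative to the remote user’s home directory.)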
There is a GUI, but I prefer the terminal:
sudo apt update
sudo apt install steam
“Update” fetches the latest package information, and “install steam” does exactly what you think it does :)
Yup! Install Steam (with your package manager!) and play. Nothing to it.
Enjoy!
Some of these tips are dangerous. You generally don’t want case insensitivity in your shell. Also, ls output should never be parsed in a subshell to feed file names into other commands; it word-splits on spaces and breaks on any unusual file name.
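Roughly what that pitfall looks like, and the safer habits (file names and the wc command are just for illustration):

for f in $(ls *.txt); do wc -l "$f"; done   # fragile: word-splits on spaces and mangles odd names
for f in ./*.txt; do wc -l "$f"; done       # safer: let the shell glob expand the names itself
find . -name '*.txt' -exec wc -l {} +       # or use find when you need recursion or filtering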