A towel
I know what I said. Linux upholds the “don’t break userspace” contract pretty well: most kernels, particularly those from general-purpose distros built with modules, are compatible with whatever userspace binaries you throw at them. Major version changes in glibc (or equivalent) are where incompatibilities start, but those happen quite rarely, and you can often still force multiple glibc versions to run side by side.
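For instance, running a binary against a second glibc mostly comes down to invoking that glibc’s dynamic loader directly; a rough sketch, assuming the alternate glibc has been unpacked under /opt/glibc-2.31 (a made-up path):

```
# Call the alternate glibc loader directly instead of the system one.
# /opt/glibc-2.31 is a hypothetical prefix where that other glibc lives.
/opt/glibc-2.31/lib/ld-linux-x86-64.so.2 \
    --library-path /opt/glibc-2.31/lib:/usr/lib/x86_64-linux-gnu \
    ./some-old-binary
```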
What @exi@feddit.de said. Switching between .deb-based distros is little more than changing sources, maybe some pinning, doing an upgrade, and optionally a cleanup pass to remove any stragglers.
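Very roughly, the whole dance looks something like this (file names and the exact steps vary per distro and release, so take it as a sketch rather than a recipe):

```
# 1. Repoint apt at the target distro's repositories (edit by hand).
sudoedit /etc/apt/sources.list
# 2. Optionally pin packages you want to keep from a specific origin.
sudoedit /etc/apt/preferences.d/pinning
# 3. Upgrade into the new distro.
sudo apt update && sudo apt full-upgrade
# 4. Cleanup pass for the stragglers.
sudo apt autoremove --purge
```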
My main Linux box is a Debian-Ubuntu-Debian upgrade that hasn’t seen a proper reinstall in like 15 years (I’ve switched all the hardware several times, still no clean reinstall).
Switching between non-deb distros is also possible, with a chroot. Like, Gentoo to Fedora. As long as the kernel is compatible with the glibc, it’s basically like running containers, just on slightly hard mode.
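A minimal sketch of that chroot route, with Fedora as the target and /mnt/newroot as the staging directory (both just examples):

```
# Bootstrap the target distro into a directory on the running system.
sudo dnf --installroot=/mnt/newroot --releasever=40 groupinstall core

# Bind the virtual filesystems so things behave sanely inside the chroot.
for fs in dev proc sys; do sudo mount --bind /$fs /mnt/newroot/$fs; done

# "Run" the new distro on top of the old kernel.
sudo chroot /mnt/newroot /bin/bash
```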
Not really, I’m not new to containers.
This might blow yours though: I once booted up from a Tomsrtbt disk, installed Debian, added some RedHat packages, and topped it up with some pinned downgrades from Ubuntu.
On bare metal, no containers, no rebooting.
Debian would not create and maintain a “core Debian” variant just to be installed and then receive the extra packages
A minimal Debian server install is kind of a “core Debian”. There are netinst versions that can be even smaller, and the Debian base image for Docker is smaller than all of that.
There is also an Ubuntu minimal install that you could call “core Ubuntu”.
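Easy to check for yourself, e.g. with the Debian base image mentioned above:

```
# Pull the slim Debian base image and peek inside: no kernel, no installer,
# just the bare package system.
docker run --rm debian:stable-slim cat /etc/os-release
```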
But more importantly, and I can’t stress this enough: YOU CAN SWITCH DISTROS WITHOUT REINSTALLING. You might need to do some cleanup afterwards, but it’s perfectly doable, more so between Debian-based ones.
Could you link some examples?
Also keep in mind that people can release their work under multiple licenses, so they may upload the same work with a different license (like a proprietary one) to other markets.
Because I already had my fingers closer to “su” than to “-s”… but more seriously, because I tend to use sudo -E su on a remote terminal with a PS1 set to colorize the prompt based on whether I’m running as root, and to show the host if it’s remote. sudo -E -s doesn’t run root’s .bashrc (which applies the updated colorization), while at the same time it exports too much of the user’s environment into the root shell.
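The colorization itself is nothing fancy; something along these lines in each .bashrc, as a sketch (the colors and the SSH check are just one way of doing it):

```
# Red prompt when running as root, green otherwise; show the host only over SSH.
if [ "$EUID" -eq 0 ]; then
    color='\[\e[1;31m\]'                 # bold red for root
else
    color='\[\e[1;32m\]'                 # bold green for a regular user
fi
host=''
[ -n "$SSH_CONNECTION" ] && host='\h '   # hostname only on remote sessions
PS1="${color}${host}\w \\$\[\e[0m\] "
```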
Haven’t you heard? The UEFI BIOS can have binaries included by the board manufacturer that Windows will ask for and automatically run on startup… for example to download a GigaByte control center installer to fill your recent install with crapware… which would then proceed to download a self-update from an http (no-s) URL. And the binaries will work even if they’re signed with revoked certificates and have been injected by any device with DMA access!
That’s… like… super cool, isn’t it? If only we could have that on Linux… /s
Also, the modern BIOSes have pretty graphics and mouse support… /s/s
As an AI language model, I can understand and communicate fluently in different languages such as English, 中文, 日本語, Español, Français, Deutsch and others. Here are some examples of how I can complete the sentence:
Which one do you find the most useful? 😊
Just saying, not everyone needs session management…
Xorg, or X11, “used to” do the “minimum necessary” for a remote display system… in the 80s. Graphics tech has changed A LOT in the last 40 years, with most of the stuff getting offloaded to GPUs, so the whole X11 protocol became more and more bloated as it kept getting new and optional features without dropping backwards compatibility.
The point against Wayland was dropping support for remote displays, while kind of having an existential crisis for several years during which it didn’t know what it wanted to become. Hopefully that’s clear now.
OpenRC and runit are indeed working alternatives, but OpenRC is kind of a hack on top of init.d, while runit relies a bit too much on storing all of its state in the filesystem. Systemd has a cleaner approach and a more flexible service configuration.
“do one thing well”
Arguably, Systemd does exactly that: orchestrate the parallel starting of services, and do it well.
The problem with init.d and SysV init is that they were not designed for multi-core systems where multiple services can start at once, and they had no concept of which service depended on which, other than a linear “this before that”. Over the years, they got extended with very dirty hacks and tons of support functions that were not consistent between distributions, and still barely functional.
Systemd cleaned all of that up and added parallel starting that takes service dependencies into account, which meant adding an enhanced journaling system to pull status output from multiple services at once, the same for pulling device updates, plus security and isolation configs.
It’s really the minimum that can be done (well) for a parallel start system.
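For reference, the dependency handling that init.d never had is just a few declarative lines per unit; a minimal sketch with made-up service names:

```
# /etc/systemd/system/myapp.service (hypothetical unit)
[Unit]
Description=Example app that needs the network and a database
Wants=network-online.target
After=network-online.target postgresql.service
Requires=postgresql.service

[Service]
ExecStart=/usr/local/bin/myapp
Restart=on-failure

[Install]
WantedBy=multi-user.target
```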
Their system has “triggered successfully”, great news everyone! 😃👍
/s