If you use a GUI configuration tool for NetworkManager, like virtually every user does, I don't know how that works. Odds are, not well.
Are they so different that it’s justified to have so many different distributions?
Linux isn't a project, it's a source-compatible ecosystem. A parts bin out of which different people assemble different things. The parts being open source means you don't need anyone's permission or justification to make something different out of them.
From these many and varied efforts come life, vitality, interest, intellectual investment. You can't just take the current things you like best and say "well, what if we all worked on THOSE?" when many of them wouldn't even have existed save for the existence of a vital ecosystem that supported experimentation and differentiation.
If we really believed in only pulling together, maybe you would be developing in COBOL on your DOS workstation.
Are we suggesting that rich people who get a product for free, and use it to forklift more piles of money into their Scrooge McDuck-like vault, ought to demand more accountability from the people who provided the free forklift?
How about they pay for that?
"Necessary for performance of such service" is like needing your address to ship you food, or your identity data to connect you with individuals seeking to employ you. That is, the info is necessary and relevant to the performance of the actual task at hand, not "I need all your data so I can sell it to make money." The alternative reading is so expansive that it would automatically authorize all possible data collection, which is obviously not the intent of the law.
TL;DR: 80s: on crack. 90s: being yourself, cool, swagger. 2020s: the alt-right attempts to appropriate this to mean being an unapologetic douchenozzle.
I do not know who that is.
Why would you need to? Apps likely need functional drivers for your hardware to exist, but most things aren't going to directly relate to or depend on a particular version of the Nvidia driver. If yours does, you might be a bad developer.
If you don't know, install a distro and use what comes with it by default, and only worry about digging into the plumbing if something doesn't work for you.
Ideally you let your distro worry about plumbing.
I think Mint is nice if you don't need bleeding-edge stuff. You can use Cinnamon, which runs on X11 but will eventually support Wayland.
I've heard good things about SUSE, which has a rolling-release option and supports GNOME and KDE under Wayland.
Arch, of course, is a thing if you don't mind a manual transmission, as it were.
Personally I might pick Mint to get started.
A few reasons. It's received wisdom that AMD are the good guys: in the Intel/AMD slog they are the underdogs fighting the good fight and bringing good, affordable products to all, versus Intel, which has historically behaved in a sleazy, underhanded, and anti-competitive fashion. And when AMD bought ATi, they moved ATi from a maker of shitty, proprietary, poorly supported pieces of shit to an open-source-friendly maker of acceptable GPUs.
Since Nvidia is the bad guy in that fight, it would be handy if Nvidia were also badly supported, buggy, and inferior. The fact that Nvidia is actually more stable, well supported, and generally better is somewhat of a fly in the ointment.
It's especially humorous when it's coming from users of a permanent-beta distro like Arch, where the kernel update process is that the new kernel is pushed extremely quickly after release. Expert Arch users realize that this means they are their own QA as far as out-of-tree modules go. Actually stable distros express what is known to work as dependencies, such that you trivially get something that is known to work when you press go. They also don't run the kernel release that was cut this morning.
Meanwhile, users of Arch-derived distros, who may or may not claim to be running Arch while believing their distro is Ubuntu with faster updates, yell that Nvidia is broken when 6.3 doesn't work the day it was cut with a driver that doesn't claim to support 6.3. The fact that this dependency is known but not encoded into Arch packages isn't an Nvidia problem.
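For what it's worth, "encoding the dependency" could look something like the sketch below in an Arch-style recipe. This is a hypothetical excerpt, not the real nvidia PKGBUILD; the package name, driver version, and kernel range are made up for illustration.

    # Hypothetical PKGBUILD excerpt for an out-of-tree nvidia module.
    # Pinning the kernel range the driver actually claims to support means
    # the package manager refuses the combination up front, instead of
    # letting you discover the breakage at boot.
    pkgname=nvidia-example
    pkgver=535.86.05
    pkgrel=1
    arch=('x86_64')
    depends=('linux>=6.1' 'linux<6.3')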
Even Manjaro, a distro run by folks who once told their users to set their clocks back because they forgot to renew their SSL cert, figured out that they can avoid almost as much trouble as smart people avoid by actually reading, just by being lazy and not pulling changes instantly.
You have a bunch of duplicated stuff because Flatpak is a piece of shit. With traditional packaging, apps supporting your platform would get exactly one choice: support the fucking version of the Nvidia driver that everyone else gets, or fuck off. In all likelihood all your shit would work with the most recent release, but because they have the option to be lazy fucks and make you download Nvidia seven times, this is your life now. Also, if dkms takes appreciable time, you either need to stop running Linux on a toaster or delete some of the 17 kernels you are hoarding for some reason. You need like two: the one that you know works and the new one you just installed.
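If you want to check whether kernel hoarding is what's making dkms slow, something like the following shows what it is rebuilding against. The apt command at the end is only an illustration; how you remove old kernels depends on your distro.

    # Which module/kernel combinations dkms is tracking and building for
    dkms status

    # How many kernels you are actually keeping around
    ls /lib/modules

    # Removing stale ones is distro-specific, e.g. on an apt-based system
    # (package name is illustrative; check what is actually installed first)
    # sudo apt remove linux-image-5.19.0-old-example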
If you have more than enough RAM, shouldn't the older suggested configuration of low swappiness plus a modest swap be more performant than encouraging the system to swap more and paying the price of compression? E.g., if you are apt to use 8GB in normal usage, 32-64GB is at this point relatively inexpensive.
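For reference, the "low swappiness" half of that older advice is just the vm.swappiness sysctl; the value 10 below is illustrative, not a magic number.

    # Check the current value (60 is the common default)
    cat /proc/sys/vm/swappiness

    # Lower it for the current boot
    sudo sysctl vm.swappiness=10

    # Persist it across reboots
    echo 'vm.swappiness=10' | sudo tee /etc/sysctl.d/99-swappiness.conf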
It’s not my fault if their work is of poor quality. Here is how people actually experience Wayland out of the box.
https://www.reddit.com/r/Fedora/comments/xdvy7z/multimonitor_scaling_in_wayland_is_totally_broken/ Oh, I know it's now 11 months old, and someone even suggested a magic incantation you can insert into the Linux version of the Windows Registry that might fix some of the problems, but this is 2022. In 2015 Wayland proponents were already promoting it as ready for prime time.
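If memory serves, the "magic incantation" in question is probably something like the GNOME/mutter fractional-scaling flag flipped via gsettings. I'm guessing at the exact suggestion from that thread, but this is the one that usually gets passed around:

    # Commonly suggested dconf tweak for multi-monitor scaling on GNOME/Wayland
    gsettings set org.gnome.mutter experimental-features "['scale-monitor-framebuffer']"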
You ably demonstrate your own inability to listen. The monitor on my right-hand side, right here as I type this, isn't blurry. There is no amount of proving that it MUST be blurry that is more persuasive than the fact that, as I type this, I'm looking at it.
Furthermore, I didn't say that the existence of desktops obviated the need to worry about the impact of resolution/scaling on battery life. I said that the impacts on battery life were both minimal and meaningless, because mixed-DPI concerns by definition involve exclusively desktops, and laptops which are plugged into external monitors, at which point logically your computer is also plugged into power. In fact the overwhelmingly common configuration for those who use external monitors is a dock, which delivers both connectivity to peripherals and power. If you are using a desktop OR a plugged-in laptop, the benefit of scaling more efficiently is zero.
I started using Linux with the very first release of Fedora, then denoted Fedora "Core" 1. I'm not sure how you hallucinated that Wayland got 4 years of design and 8 years of implementation. First off, by the end of the month it will be 15 years old, so you fail first at the most basic of math. Next, I'm guessing you want to pretend it got four years of design to make the second number look less egregious.
With graphics programming relatively in its infancy, X11 didn't require 15 years to become usable, and how many years did Apple take to produce their stack? Was it even one? Working with incredibly powerful hardware, with a wide variety of approaches well understood and documented, 15 years is downright embarrassing. Much as I enjoy Linux, the ecosystem is kind of a joke.
It doesn't require a meaningful or measurable difference in CPU/GPU to scale my third monitor. That is to say, in practical effect, actual usage of real apps so dwarfs any overhead that the overhead is immeasurable statistical noise. In all cases nearly all of the CPU power is going to the multitude of applications, not to drawing more pixels.
The concern about battery life is also probably equally pointless. People are normally worrying about scaling multiple monitors in places where they have another exciting innovation available: the power cord. If you are kicking it with portable monitors at the coffee shop, you are infinitely more worried about powering the actual display than about the GPU power required to scale it. Also, some of us have actual desktops.
Furthermore, scaling up and down in multiple passes, instead of letting the clients do it in "one go" and having the compositor scan it directly onto your screen, leads to problems in font rendering.
There are some nasty side effects.
There just aren't. It's not blurry. There aren't artifacts. It doesn't take a meaningful amount of resources. I set literally one env variable and it works without issue. In order for you to feel you are justified, you absolutely NEED this to be a hacky, broken configuration with disadvantages. It's not; it's a perfectly trivial configuration, and Wayland basically offers nothing over it save for running in place to get back to the same spot. You complain about the need to set an env var, but switching to Wayland would be a substantial amount of effort, and you can't articulate one actual benefit, just fictional deficits I can refute by turning my head slightly.
Your responses make me think you aren't actually listening. For instance:
"X11 is utterly broken, just admit it. You are welcome to develop another X11 if you want."
Why would I need to develop another X11? I believe I shall go on using this one, which already supported high and mixed DPI just fine when Wayland was a steaming pile of shit nobody in their right mind would use. Apparently the "nobody" includes GTK, Qt, SDL…
Please attend more carefully. Scaling and high DPI were a thing on X back when Wayland didn't work at all. xrandr supported --scale back in 2001, and high-DPI support was a thing in 2012. Wayland development started in 2008, and in 2018 it was still an unusable, buggy pile of shit. Those of us who aren't in junior high school needed things like high DPI and scaling back when Wayland wasn't remotely usable, and now that it is starting to get semi-usable I for one see nothing but hassle.
I don't have a bunch of screen tearing, I don't have bad battery life, I have working high DPI, I have mixed DPI, and I don't have a blurry mess. These aren't actual disadvantages; this is just you failing to attend to features that already exist.
Imagine if, at the advent of automatic transmissions, you had 500 assholes on car forums claiming that manual-transmission cars can't drive over 50 MPH / 80 KPH and break down constantly, instead of touting actual advantages. It's obnoxious to those of us who discovered Linux 20 years ago rather than last week.
Nothing is set automatically. I run a window manager and it starts what I tell it to start. I observed that, at present, fewer env variables are required to obtain proper scaling. I did not personally dig into the reasoning for same because, frankly, it's an implementation detail. I just noted that Qt apps like Dolphin and Calibre are scaled without the benefit of configuration, while GTK apps like Firefox don't work without GDK_SCALE set.
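Concretely, the setup amounts to a couple of exported variables in whatever file your session sources at startup. ~/.xprofile and the value 2 are just examples for a 2x display; adjust to taste.

    # ~/.xprofile (or wherever your window manager sources environment)
    # GTK apps (Firefox etc.) need to be told explicitly:
    export GDK_SCALE=2
    # Optional: pull fonts back down so text isn't scaled twice
    export GDK_DPI_SCALE=0.5
    # Qt apps (Dolphin, Calibre) mostly pick this up on their own now,
    # but it can be forced if needed:
    # export QT_AUTO_SCREEN_SCALE_FACTOR=1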
X actually exposes both the resolution and physical size of displays. This gives you the DPI, if you happen to have mastered basic math. I've no idea if this is in fact used, but your statement that NOTHING provides that is trivially disprovable by running xrandr --verbose. It is entirely possible that it's picking up on the globally set DPI instead, which in this instance would yield the exact same result, because, wait for it:
You don't in fact even need apps to be aware of different DPIs or to dynamically adjust. You may scale everything up to the exact same DPI and let X scale it down to the physical resolution. This doesn't result in a blurry screen. The 1080p screen, while not as pretty as the higher-res screens, looks neither better nor worse than it looks without scaling.
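As a sketch of that approach: render everything at the same high DPI and let xrandr downscale the low-DPI panel. The output names and modes below are examples, not my actual setup; check the output of xrandr for your own.

    # 4K panel at native resolution; 1080p panel rendered at 2x and
    # downscaled by X to its physical resolution
    xrandr --output DP-0 --mode 3840x2160 --pos 0x0 \
           --output HDMI-0 --mode 1920x1080 --scale 2x2 --pos 3840x0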
Why would I need to develop another X11? I believe I shall go on using this one, which already supported high and mixed DPI just fine when Wayland was a steaming pile of shit nobody in their right mind would use. It probably actually supported it when you yourself were in elementary school.
Why on earth would I develop “another X11” instead of using the one that still works perfectly fine?
I know you live in this weird universe where the screen that is 12 inches from my face actually looks like crap, but it just isn't so; you are merely confused.
It is literally how Wayland is scaling your shit; you just don't know how anything works.
With X/i3 I had to read, and the result works well. With Sway I had to read, and the result works poorly. So is Sway better for the illiterate?
Flatpak isn't going to have every library, CLI tool, or even every GUI tool. I think, in the end, out of date just isn't worth it.