To be honest I’m more concerned by language-humor.
Like not even saying what kind of humour, just any type of humour at all.
Jokes are for adults only!
The article mentions they’ll continue making the eZ80. If you’re in the middle of making a PCB around the Z80, you’ll just have to change the pins, I guess.
Heads up for anyone (like me) who isn’t already familiar with SimpleX: unfortunately its name makes it impossible to search for unless you already know what it is. I was only able to track it down after a few frustrating minutes, once I added “linux” to the search on a lark.
Reminds me a little of the old Jonathan Shapiro research OSes (Coyotos, EROS, CapROS), though toned down a little bit. The EROS family was about eliminating the filesystem entirely at the OS level, since you can simulate files with capabilities anyway. Serenum seems to be toning that down and effectively having file- or directory-level capabilities, which I think is sensible if you’re going to have a capability-based OS, since files end up being a bit more user-visible.
He’s got the same problem every research OS has: zero software. He’s probably smart to ditch the idea of supporting arbitrary hardware and just fix on one hardware platform.
I wish him luck selling his computer systems, but I doubt he’s going to do very well. What would a customer do with one of these? Edit files? And then…edit them again? I guess you can show off how inconvenient it is to edit things due to its security.
I just mean it’s a bit optimistic to try and fund this by selling it. I understand he doesn’t have a research grant, but it’s clearly just a research OS.
You just don’t appreciate how prestigious it is to get a degree from Example U.
It is, but it probably shouldn’t be any more. WebP has good support everywhere now and is slightly better than JPEG and PNG combined. (Better lossy compression than JPEG, plus transparency support, and better lossless compression than PNG). But even WebP is considered lame these days compared to the new crop.
E.g., JXL (JPEG XL) is much better than WebP and is supported by everyone except Google (which is ironic, since Google helped create it). Google seems to want AVIF to be the winner for the new image format, but not many others do.
Anyway, until the Google JXL AVIF hissy fit is dealt with, at least we’ve still got WebP. It’s not super great, but it’s at least better than JPEG and PNG. A lot of web developers are stuck in their old JPEG PNG mindset and are being slow to adapt, so JPEG is still hanging around.
I feel like this should be required reading for a lot of Linux users. That article is a couple of years old now, but I think it is even more true now than it was when it was written. Having a middleman (package maintainer) between the user and the software developer is a tremendous benefit. Maintainers enforce quality, and if you bypass them, you’re going to end up with Linux as the Google Play Store (doubly so if you try and fool yourself into thinking it won’t happen because “Linux is different”).
Linux is the only platform to get native WebGL, too!
It’s in Proverbs 11:20
The C++ developers are an abomination to the Lord,
But the Rustaceans in their Rust-based OSes are His delight.
The search term is censored by DuckDuckGo in Korea. Even robots apparently think it’s going to be an IoT buttplug.
That’s Saturday night in North American time zones. Just a heads up in case you’re planning a boys’ night out a couple hundred billion years in advance, maybe move it to Friday night in case the world ends Saturday night.
Out of curiosity, did you use it as a daily driver? A friend of mine tried it out briefly, and it was pretty cool, but the lack of applications meant we couldn’t really do anything with it (other than marvel at how cool it was). Did it eventually get applications developed for it? Like, did it have an office suite?
It’s not. He was very explicitly not talking about his murder there.
In a certain light, you could argue that Linus doesn’t really have any control at all. He doesn’t write any code for Linux (hasn’t in many years), doesn’t do any real planning or commanding or managing. “All” he does is coordinate merges and maintain his own personal git branch. (And he’s not alone in that: a lot of people maintain their own Linux branches). He has literally no formal authority at all in Linux development.
It just so happens that, by a very large margin, his own personal git branch is the most popular and trusted in the world. People trust his judgment for what goes in and doesn’t go in.
It’s not like Linux development is stopped because Linus goes offline (or goes on vacation or whatever). People keep writing code and discussing and testing and whatnot. It’s just that without Linus’s discerning eye casting judgment on their work, it doesn’t enter the mainstream.
Nothing will really get slowed down. Whether something officially gets labelled by Linus as “6.8” or “6.whatever” doesn’t really matter in the big picture of Linux development.
Ah thanks for that! You can tell how long it’s been since I’ve used Mac OS.
Isn’t it Mac OS X 14? I.e., Mac OS 10.14?
The stat command is using statx, which gives you a slightly different struct.
statx is the cool new Linux-only system call for stat-ing.
Not every filesystem will support the new btime field.
(And, as you correctly say, many of those time fields are wrong, anyway)
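A quick way to see that btime field from the shell, assuming GNU coreutils stat (the %w format specifier asks for birth time; it prints “-” when the filesystem or kernel doesn’t record it):

```shell
# Create a file and ask stat for its birth time.
touch /tmp/btime-demo
stat -c 'Birth: %w' /tmp/btime-demo
# On a filesystem without btime support this prints "Birth: -".
```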
won’t be useful beyond basic word processing and browsing.
Not even that. For most basic users, web browsing is by far the most resource-intensive thing they’ll ever do, and it’ll only get more so. If it weren’t for modern web design, most users could honestly probably be okay with 4GB or 8GB of RAM today. For a laugh, I tried using a 512MB Raspberry Pi 1B for all my work for a few days. I could do absolutely everything (mostly developing code and editing office documents) without any problems at all, except I couldn’t open a single modern web page and was limited to the “retro” web. One web page used up more resources than all of my work combined. I’m guessing it won’t be too many years before web design has evolved to the point where basic webpages will require several GB of RAM per tab.
(I agree with your overall point, by the way. Soldering in 8GB of RAM these days is criminal just based on its effects on the environment.)
I used to run a TFTP server on my router that held the decryption keys. As soon as a machine got far enough in the boot sequence to get network access, it would pull the decryption keys from the router. That way a thief would have to steal the router along with the computer, and have the router running when booting up the computer. It works wirelessly, too!
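A minimal sketch of how such a setup might look on a Debian-style system using dm-crypt’s keyscript hook. The router address, key filename, and script path are all hypothetical; the real details depend on your initramfs bringing the network up before the root volume is unlocked.

```shell
#!/bin/sh
# /usr/local/sbin/tftp-keyscript (hypothetical path)
# A crypttab keyscript is expected to print the key material on stdout.
# Assumption: the router at 192.168.1.1 serves "luks.key" over TFTP,
# and the network is already up at this point in the boot sequence.
tftp -m binary 192.168.1.1 -c get luks.key /run/luks.key
cat /run/luks.key

# Matching /etc/crypttab entry (one line, shown here as a comment):
# cryptroot /dev/sda2 none luks,keyscript=/usr/local/sbin/tftp-keyscript
```

The nice property, as described above, is that the key never lives on the machine’s disk: steal the computer without the router and the volume stays locked.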
This may be super-nitpicky (and I love LocalSend and use it a lot), but there is one difference between LocalSend and AirDrop. LocalSend requires network connectivity (and requires the devices to be on the same network), whereas AirDrop can work without any network connection (using Bluetooth).