• 0 Posts
  • 21 Comments
Joined 1 year ago
Cake day: June 16th, 2023

  • Problem is, in practice I suspect something is pretty wrong in most teams.

    Some common examples come to mind:

    • Management hears “talk about what you’ve done and what you will do,” decides it’s a great time to sit in and take notes for performance review, and it becomes a “make sure management knows you spent all your time on really impressive stuff” meeting. It also throws a kink into “things I need help with,” since there’s always the risk that management decides you aren’t self-sufficient enough if they hear you got stuck, so you also have to defend why you got stuck and how it isn’t your fault.
    • The people who feel like everyone needs to know the minutiae of their trials and tribulations, including all the intermediate dead ends they hit on the way to their final result. Related to the above, but some people do this even without the need to impress management.
    • The people who cannot stand to “take it offline” and will stop everything to fully work a problem while everyone else is still ostensibly supposed to stay in the meeting, despite having nothing to do with the two people talking (sometimes even just one, as a guy starts talking to himself while he tries to do something live).
    • Groups that are organized together but have very little common ground. An “everything must be scrum” company sticks a guy who does stuff like shipping and receiving into a development team; there’s no ‘scrum-like’ interaction to be had, and yet there he is, wasting his time and talking about stuff no one else in that meeting needs to hear either.



  • Had a colleague get diagnosed with cancer. It was caught really early and was one of the kinds considered very treatable, with a high success rate for treatment.

    He said he didn’t trust the medical industry and that vitamin C would take care of it. He died from what was probably a very treatable disease.

    Of course, he had some family trauma: a loved one with colon cancer who tried long-shot treatments but still died, after suffering through the treatments along the way. He even said the doctors told him his circumstances were way better, but he just wouldn’t believe them.


  • I think turn-based is fine, and in fact I like it. However, when no one has a turn it’s annoying to sit around while nothing happens and the timer keeps ticking. Also, to make it “active,” the turn timer doesn’t stop when you open the menu, so if you delay your action the enemy may get to take their turn just because you were slow navigating the menu. I think ATB is actually the worst of both worlds; I would prefer either turn-based or an action RPG rather than being forced to navigate a menu in some facsimile of ‘real time.’

    Where FF7 kind of went south from a gameplay perspective compared to 6 was that in 6, summons were a brief flash. In FF7, by contrast, Knights of the Round would “treat” you to an 80-second spectacle, which was cool the first couple of times but then just a tedious waste of time. Rinse and repeat for any action that was quick in FF6 and earlier but became a slow spectacle in FF7, with no real option to speed up animations you had already seen a dozen times and that had worn out their welcome long ago. Just like that stupid chest opening in OOT.

    Anyway, I did enjoy FF7, but the “game” half was kind of iffy.


  • Thing is, those criticisms also mostly apply to FF7.

    Disconnect between combat and exploration? I see that for Zelda, but FF7 goes harder, with a random encounter jolting you into a different game engine for combat.

    Too much time in combat waiting while nothing happens? FF7’s battle system is mostly waiting for turns to come up, with lots of dead time.

    Exploration largely locked to whatever the narrative allows? Yeah, FF7 had that too, with rare optional destinations, a very prescribed order, and forced stops. It only opens up late in the game.

    The video generally laments that OOT was more a playable story than an organic gameplay experience, and FF7 can be characterized the same way. That can be enjoyable, but it gets a bit annoying when the game half of things is awkward and bogs things down, particularly if you are being subjected to repeated “spectacle” (the slow opening of chests in OOT; the battle swirl, camera swoops, and oh man, the summons in FF7…).

    They both hit some rough growing pains in the industry. OOT went all in on 3D before designers had really figured out how to manage it. FF7 had so much opportunity for spectacle open up that they sometimes let it get in the way, plus the generally untextured characters with three vastly different design variations (field, battle, and pre-rendered) as that team tried to find their footing with visual design in a 3D market.


  • Agreed: as a game, as in fun, FF7 wasn’t very good. The music, the visual designs (the pre-rendered stuff), and the story (though it suffered from bad localization) were compelling. But the random encounters, the fights filled mostly with waiting to be able to do things, the best attacks carrying too much spectacle that was nice the first time but pretty boring on repetition… Materia management also became frustrating as you got more party members, with no way to arrange or search, even with in-game dialogue mentioning what a pain it was…

    Chrono Cross actually had significantly better game design, with enemies on screen and no standing around waiting for some character’s turn to come up before anything would happen. I wish FF7 had clipped the “no action allowed by either side” time; that would have helped immensely. Then it just becomes a matter of whether the player prefers real-time adventure to menu-driven play.


  • Yep, and I see evidence of that over-complication in some ‘getting started’ questions, where people ask about really convoluted design points and others reinforce it by doubling down or mentioning other weird, exotic stuff, when they might be better served by a checkbox in a ‘dumbed down’ self-hosting distribution on a single server, by installing a package and just having it run, or maybe by running a podman or docker command. But if they are struggling with complicated networking and scaling across a set of systems, they are going way beyond what makes sense for a self-hosting scenario.


  • Based on what I’ve seen, I’d also say a homelab is often needlessly complex compared to what I’d consider a sane approach to self-hosting. You throw in all sorts of complexity to imitate the things you are asked to do professionally, which are either actually bad but propped up by hype/marketing, or may bring value but only at scales beyond a household’s hosting needs, where far simpler setups that are nearly zero-touch day to day will suffice.


  • For 90% of static site requirements, it scales fine. That entry-point reverse proxy is faster at fetching content to serve via filesystem calls than it is at making an HTTP call to another HTTP service. For self-hosting types of applications, I’d guess that percentage goes to 99.9%.

    If you are in a situation where serving the files through your reverse proxy directly does not scale, throwing more containers behind that proxy won’t help in the static content scenario. You’ll need to do something like a CDN, and those like to consume straight directory trees, not containers.

    For a dynamic backend, maybe. Mainly because you might screw up, and your backend code needs to be isolated to mitigate security oopsies. It is often also useful for managing dependencies, though that facet matters less for golang, where the resulting binary is pretty well self-contained except for maybe a little light usage of libc.
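
    For that dynamic-backend case, a minimal sketch of the kind of isolation I mean, assuming a hypothetical app image called myapp that listens on port 8080 (the name, image, and port are made up for illustration):

    ```sh
    # Hypothetical dynamic backend, isolated in its own container and only
    # reachable by the local reverse proxy (image name and port are made up).
    podman run -d --name myapp \
      --read-only \
      -p 127.0.0.1:8080:8080 \
      localhost/myapp:latest
    ```

    The static files stay on the filesystem where the entry-point proxy can read them directly; only the part that actually runs code gets wrapped.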


  • But if you already have an nginx or other web server that is otherwise required to start up (which is in all likelihood the case), you don’t need any more auto-startup; the “reverse proxy” that is already started can just serve it. I would say that container orchestration versioning can be helpful in some scenarios, but a simple git repository for a static website is way more useful, since it has the right tooling to annotate changes very specifically on demand.

    That reverse proxy is ultimately also a static file server. There’s really no value in spinning up more web servers for a strictly static site.

    Folks have gone overboard assuming docker or similar should wrap every little thing. It sometimes adds complexity without making anything simpler. It can simplify some scenarios, but adding a static site to a webserver is not a scenario that enjoys any benefit.


  • Because serving static files doesn’t really require any flexibility in web serving code.

    If your setup has an nginx or similar as a reverse proxy entry point, you can just tell it to serve the directory. Why bother making an entire new chroot and proxy hop when you have absolutely zero requirements beyond what the reverse proxy already provides? Now, if you don’t have that entry point, fine, but at least 99% of the time I see some web server acting as the initial arbiter into the services, and it would be perfectly capable of just serving the files.
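
    As a rough sketch of what “just tell it to serve the directory” looks like, assuming nginx as the entry point, with a made-up site directory and backend port:

    ```nginx
    server {
        listen 80;
        server_name example.internal;   # hypothetical name

        # Serve the static site straight off the filesystem
        root /srv/www/example;
        index index.html;

        location / {
            try_files $uri $uri/ =404;
        }

        # Anything that genuinely needs a backend still gets proxied
        location /app/ {
            proxy_pass http://127.0.0.1:8080/;
        }
    }
    ```

    No extra container, chroot, or proxy hop for the static half; the web server that is already running does the serving.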


  • This is consistent with the “Linux is for backend services and command line” mentality. For me those are nice and important, but I prefer the Linux desktop experience, so those options are no solace. The VM is ultimately constrained in what it can do UI-wise.

    I flip the relationship the other way around: Linux on bare metal, Windows in a VM. For people who need Windows games this would be a non-starter, but I’ve got enough games between Linux native, emulators, and Proton with Steam. Windows on a separate box would be my strategy if needed.



  • Of course, the problem is that WingetUI isn’t there by default and isn’t integrated into Windows Update. No matter what, WingetUI basically becomes yet another tray icon, alongside the half dozen other auto-updater tray icons that various vendors have added since there’s no integrated facility to rely upon.

    So sure, it’s a band-aid on winget, but it’s still awkward and the ecosystem is a mess, compared to Linux, where a distribution ships, in the box, an extensible central update facility that may serve two different types of repositories (e.g. apt and snap, or dnf and flatpak).
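
    To make the contrast concrete, this is roughly what the two update flows look like day to day (the dnf/flatpak pairing is just the Fedora-style example above; the winget command is real but, as noted, not wired into Windows Update):

    ```sh
    # Linux side: one habitual flow covers both repository types
    sudo dnf upgrade
    flatpak update

    # Windows side: winget updates what it tracks, separately from Windows
    # Update and from each vendor's own tray updater
    winget upgrade --all
    ```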


  • True, for some uses.

    If you only need command-line use, it’s fine. I personally strongly prefer the environment of, say, a Linux distribution running Plasma, but if you are fine with Windows applications, then fine.

    If you need GUI Linux… WSLg can kind of sort of get you there, but it sucks. If you live in any Linux GUI application for significant periods of time, you’ll want to strangle WSLg and its weird behaviors. VcXsrv can help on this front.

    If you are like me, find dnf+Flathub an appealing strategy for installing and updating software, and like Plasma desktop management, then Linux “for real” is the way to go.


  • Well, it’s making them plenty of money, but they pretty much get that money no matter what (from the device manufacturers when they sell hardware, and from businesses afraid to have their software entitlement coupled to the accident of their hardware).

    Now it’s a game of using that guaranteed footprint to bolster the recurring revenue services (OneDrive, Office, Azure). They still get the money for however the copy got there, but also use the copy to launch folks into recurring revenue options.


  • Well, I don’t think it’s anti-monopoly evidence, but instead a way to intercept a popular search phrase and control the narrative.

    You search for “how to download and install linux” in Google, and the very top link is the Microsoft page. And the narrative is:
    - I just want to get started: Oh, use WSL; that way you are really using Windows, with just a touch of Linux.
    - I need to use it for real: Oh, then use Azure; you can have us set up those scary Linux instances for you, and Microsoft Terminal will hook you right up to those instances.
    - I really, really want to use it: Ok, but remember, you’ll lose access to Windows applications, so there are downsides, and also, we are going to make this hands down the scariest-looking procedure of the three…


  • WSL may be fine for a Windows user to get some access to Linux, but for me it misses the vast majority of what I value in a desktop distribution:

    - Better window managers. This is subjective, but with Windows you are stuck with Microsoft’s implementation; if you like a tiling window manager or Plasma workspaces better, you need to run something other than Windows or OSX.

    - Better networking. I can do all kinds of stuff with networking. Niche relative to most folks, but the Windows networking stack feels awfully inflexible and frustrating after doing a lot of complex networking tasks in Linux.

    - More understanding of and control over the “background” pieces. With Windows, a lot is happening even when you are doing nothing, and it’s not really clear what is happening where. Linux can be daunting too, but the pieces can be inspected more easily and things are more obvious.

    - Easier “repair”. If Windows can’t fix itself, it’s really hard to recover from a lot of scenarios. Generally speaking, a Linux system has to be pretty far gone before it can’t be brought back.

    - Easier license wrangling. Am I allowed to run another copy of Windows? Can I run it in a VM or does it have to be bare metal? Is it tied to the system I bought with it preloaded, or is it bound to my Microsoft account? With most Linux distributions this is a lot easier: the answer is “sure, you can run it”.

    - Better package management. If I use flatpak, dnf, apt, zypper, or snap, I can pretty much find any software I want to run, and by virtue of installing it that way, it also gets updated. Microsoft has added winget, which is a step in the right direction, but the default “update” flow for a lazy user still ignores all winget content, and many applications ignore all of that and push their own self-updater, which is maddening.

    The biggest concern, as this thread suggests, is that WSL sets the tone of “ok, you have enough Linux to do what you need from the comfort of the ‘obviously’ better Microsoft ecosystem” and causes people not to consider actually trying it for real.


  • Indeed, it’s to contain the “Linuxification” of the developer community.

    Before WSL, any developer dealing with backend development almost had to install Linux to have a vaguely decent development environment aligned with what they got to use on the servers. While they were dragged into that world by their requirements, they might find that the packaging and window management were actually pretty cool. Their reluctance to venture out of the Windows world transforms into acceptance and perhaps even liking it.

    Now with WSL, those Windows desktop users say “I just need to click a distribution in the Microsoft Store and I’m golden, and I don’t have to deal with that scary Linux world I don’t know yet.”

    I’ve repeatedly had people notice I’m running a Linux desktop while I’m presenting and offhandedly say “you know you can just run Linux under Windows, you don’t have to endure Linux anymore.” They seem to think I’m absurd for actually preferring Linux when I can get away with it.