I have it running on a raspberry pi at home behind a reverse proxy on my router and backed up to rsync.net. But hosting it on a cheap VPS might be easier.
I host forgejo for myself.
The answer to your question of why it’s so hard to give artists your money is exactly the same as it has been for ages for all media. The few companies who survived the consolidation of the industry have done everything in their power to make sure they are the gatekeepers of content. They buy and merge or kill off any competing companies or technologies.
They weren’t successful with MP3s or with streaming because they didn’t bother to understand the technology, or that the Internet was the new marketplace. They thought they could just do what they had done with physical media: pay for laws that protected their interests and sue everyone. But they ultimately lost control, because you can’t sue hundreds of millions of people the way you can sue a few thousand stores. So they had to give people what they wanted for a while, buying time to buy up all of the companies.
But they’ve now done that, and they’ve paid enough to get the laws, and the precedents interpreting those laws, that they wanted, so courts are enforcing those laws more quickly. That lets them pressure new tech that pushes the limits of those laws out of existence before it can get people hooked. And now that they’ve reconsolidated most of the market and technologies, as capitalism tends to do if you’re patient enough and there’s no real threat of monopoly regulation or market disruption, we’re stuck with a choice: pirate, or use the garbage they feed us. Most artists are back to having to sign their art away and sleep with executives to get marketing and distribution from the gatekeepers, just for a chance at success. The rest have to rely on word of mouth and self-distribution, which even online can be expensive without the advantages of centralized hosting providers, merchant accounts, and bandwidth.
Docker only upgrades automatically if you tell it to, by specifying the “latest” tag or omitting a version tag entirely (which defaults to latest). And even then it only upgrades when you issue a pull or a compose up; commands like start and restart reuse the existing image. So yes, there was warning, and something you did actively told it to upgrade.
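If you want upgrades to be an explicit decision, pin the tag. A minimal compose sketch (the image name and version here are just examples, not a recommendation):

```yaml
# docker-compose.yml
services:
  app:
    # Pinned: "docker compose pull" / "up" will keep fetching exactly this
    # version until you change the tag yourself
    image: nextcloud:27.1.3
    # vs. an implicit latest, which can jump major versions on any pull:
    # image: nextcloud
```

With a pinned tag, an upgrade becomes a one-line diff you can review and test, rather than a surprise on the next `compose up`.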
And it’s really bad practice to update any software without testing, especially between breaking/major version numbers.
Finally, it’s not uncommon for a platform to release its update and then the plugins or addons to follow. Especially with major updates that require lots of testing before release. This allows plugin/add-on makers to fully test their software with the release version of the platform rather than all of the plugin makers having to wait for one that may be lagging behind.
Native OIDC support…something I wish more self hosted apps would prioritize. I shouldn’t need to maintain a bunch of user account systems on my own servers.
There’s really no need to reverse proxy ssh. What are you attempting to accomplish with the reverse proxy exactly? Http proxying allows you to add things like TLS encryption and modify headers. But ssh is a secure protocol already and you can’t really modify much in transit.
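If the goal is just safer exposure, forwarding the SSH port directly and hardening sshd gets you more than a proxy would. A sketch of common sshd_config settings (the values here are examples; adjust to your setup):

```
# /etc/ssh/sshd_config — example hardening, not a complete config
Port 22                     # or a nonstandard port just to cut log noise
PasswordAuthentication no   # keys only
PermitRootLogin no
AllowUsers myuser           # "myuser" is a placeholder username
```

Run `sshd -t` to validate the config before restarting the service, or you can lock yourself out.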
Would only be worth it if you created a system for easily deploying applications on an already set up subnet with routing preconfigured.
Like setting up a single-server Kubernetes distribution such as microk8s or minikube, with MetalLB and ingress preconfigured on the server and router. You could also give instructions on how to install a GUI like Lens and how to use it to deploy a few things. Workstation applications would probably keep the server lighter than a web UI like Portainer, but either might work.
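For the microk8s route, the addon system does most of that preconfiguration. A rough sketch (the IP range is an example and must be a free range on your LAN; addon names are current as of recent microk8s releases):

```sh
# On the server (Ubuntu-ish assumed):
sudo snap install microk8s --classic

# Built-in ingress controller:
microk8s enable ingress

# MetalLB needs a pool of unused LAN addresses to hand out:
microk8s enable metallb:192.168.1.240-192.168.1.250
```

After that, the router only needs ports 80/443 forwarded to whichever address MetalLB assigns to the ingress service.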
It’s also why tablets never really took off. Sure, a lot of people use them, but mostly as a big-screen phone in portrait orientation. They could be so much more if designers actually designed apps to adapt to changing sizes, even something simple like displaying two screens of a normal phone app side by side in landscape mode rather than having to switch back and forth. But ultimately, cost makes developing for multiple screen sizes a “low priority feature”, and those kinds of things never get funding. Instead they’d rather build a feature that looks cool to the investors and executives the product managers are pitching for funding, and on marketing materials to get salespeople on board, but is ultimately useless to the end user. Which comes back to the main problem of late capitalism: the end user is no longer the customer; the corporate overlords and their investors are.
Termius
Not just Android; I want a cross-platform SSH client that shares keys. Termius is probably overkill for that, but I haven’t found anything else that works on both Linux and Android. The real issue that made me stop paying for it is that on RPM-based Linux I have to use the snap version, and snap is buggy as heck with multitasking.
Yeah, you definitely should run it on a separate machine. A home NAS itself probably shouldn’t be doing anything beyond serving files and basic maintenance; using it for too much will reduce its ability to serve data fast enough. Just be sure the media server and NAS have appropriate network cards, preferably gigabit (though even 100Mbit is probably enough if the rest of your network isn’t already too busy), and ideally are connected to the same switch, again preferably gigabit, with good quality network cables.
Use Drive, or if it’s more than 15GB (or whatever the free cap is these days), pay for one month of storage for a couple of dollars on one of the supported platforms and download from there.
It’s reasonably priced. I was in the same boat with the Google domains shutdown. As long as you aren’t a heavy user, it has lots of cool features. But if you get their attention they’ve been known to fleece the crap out of small businesses that were using their free services. Most of my stuff is self hosted applications to move myself off of Google services, so my traffic is minimal.
I agree that it’s the wrong way, but not because of any of this other than the first half of the first sentence.
It’s the hard/wrong way because it makes you responsible for securing the root cert’s private keys, and because most people will do it wrong and set up a root cert that can sign more than just TLS certs. That’s where the problems occur if the keys are compromised and you’ve set up all of your machines to trust it.
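If you do go down that road, you can at least limit the blast radius by constraining the CA itself. An example OpenSSL extension section (the domain is a placeholder; note that nameConstraints is widely enforced by browsers, while extendedKeyUsage on a CA cert is honored inconsistently, so treat the latter as defense in depth):

```
# ca-ext.cnf — example extensions for a TLS-only, name-constrained home CA
[ v3_ca ]
basicConstraints = critical, CA:TRUE, pathlen:0
keyUsage = critical, keyCertSign, cRLSign
extendedKeyUsage = serverAuth
nameConstraints = critical, permitted;DNS:.home.example.com
```

A root like this can’t mint certs for arbitrary internet domains even if the key leaks, which is most of the risk of telling every device to trust it.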
But it’s also not true that you shouldn’t use HTTPS or that you should trust your own network, not because of the router, but because of the devices. People don’t control their devices anymore. Many home automation devices, nanny cams, security devices, water leak detectors, etc., contain firmware that is poorly configured and can easily expose your network traffic if it’s not encrypted. Not to mention a lot of smartphone apps these days are Trojan horses for spyware: Temu, WeChat, etc.
And as for cost, you can get a domain name for a few dollars per year or as mentioned, a subdomain from something like a DDNS service, so it definitely can be totally free to do it the right way.
In my opinion, the difference with Google is that Google is actively using your data and you’re giving them a lot of it. For Cloudflare, what do they have exactly? Depends on what services you use, but really all they get from me is the list of servers that connect to my domains. Google does that too if you use 8.8.8.8, or if you have any of their hardware that overrides router DNS settings like Chromecast and Google TV.
I mean it depends on the intensity of the surge, but basically you’d be making it so your PSU is unable to protect the devices from surges. The more sensitive the electronics, the more critical the ground is and CPUs are pretty darned sensitive among other things. And depending on the type of components in the PSU, “surges” also include things like inrush current. Basically, when you turn on a transformer or certain other devices, there is a surge of sometimes as much as 10 times the rated current to create the initial magnetic flux. Depending on the components, this excess energy may end up getting shunted to the ground to avoid pushing it through your electronics. So if it can’t do that, you likely will blow fuses a lot when switching the power on (hopefully there are fuses), or if you’re touching the case which is supposed to be grounded, you may end up getting that jolt.
Anyway, without grounded outlets, and especially if your electronics are cheaply made (many expect a ground and don’t build in extra components to deal with not having one), you are likely to significantly shorten the life of your electronics, or your own, or start a fire, without even considering major surges. If you have a high-end PSU, you may never have a problem until that surge happens. How stable is your power? Because even a normally small surge, combined with a cheap PSU and no ground, is pretty likely to end in damaged electronics, best case.
I self-host a lot, but mostly on cheap VPSes, in addition to a few services on local hardware.
However, these also don’t take into account the time and money needed to maintain these networks and equipment. Residential electricity isn’t cheap; internet access isn’t cheap, especially if you have to get business-class internet to get upload speeds over 10 or 15 Mbps, or to avoid TOS breaches for running what they consider commercial services even if it’s just for you, mostly because of cable company monopolies; cooling the hardware, especially if you live in a hotter climate, isn’t cheap; and then there’s maintaining the hardware and OS, upgrades, offsite backups for disaster recovery, and all of the other costs. For me, VPSes work, but for others maintaining the OS and software is too much time to put in. And just figuring out what software to host, and then how to set it up and properly secure it, takes a ton of time.
That electric bill, though. 🤣
Mailcow or Mailu have pretty good setups if you don’t want to do anything too different and don’t need to keep resource usage to a minimum.
Docker is nice for things that have complex installations and I want a very specific implementation that I don’t plan to tweak very much. Otherwise, it’s more hassle than it’s worth. There are lots of networking issues like limited/experimental support for IPv6, and too much is hidden and preconfigured, making it difficult to make adjustments that would otherwise just be a config file change.
So it is good for products like a mail server where you want to use the exact software they use, say postfix + dovecot + roundcube + nginx + acme + MySQL + SpamAssassin + amavisd, etc. But if you want to use an existing reverse proxy and cert setup, or a different spam filter or database, it becomes a huge hassle.
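The usual escape hatch is a compose override file, though how far it gets you depends entirely on how the project structured its stack. A sketch (the service name "nginx" here is hypothetical, and the `!reset` tag needs a recent Compose version):

```yaml
# docker-compose.override.yml
services:
  nginx:
    # Drop the published ports so only your existing reverse proxy,
    # on the same Docker network, can reach the bundled web server
    ports: !reset []
```

Swapping out something load-bearing like the database or spam filter usually can’t be done this cleanly, because the other containers are built assuming the bundled one; that’s the hassle.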
Depends on how secure the application is and on the security you put in front of it, such as reverse proxies, load balancers, etc. If you are exposing a web application with no SSL or no two-factor auth, or something in a beta state, or if you can’t trust your ISP not to run man-in-the-middle attacks for advertising and data collection (which also tends to introduce security vulnerabilities), then that could be a problem and a VPN or similar might be a big help.
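For the reverse-proxy case, even a minimal TLS-terminating config keeps the app itself off the open internet. A rough nginx sketch (the hostname, cert paths, and backend port are examples):

```
# /etc/nginx/conf.d/app.conf — example only
server {
    listen 443 ssl;
    server_name app.example.com;

    ssl_certificate     /etc/letsencrypt/live/app.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/app.example.com/privkey.pem;

    location / {
        # The app itself listens only on localhost
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

That gets you encryption in transit regardless of what the app supports natively, and it’s one place to bolt on basic auth or IP allowlists in front of anything beta-quality.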