• 11 Posts
  • 65 Comments
Joined 1 year ago
Cake day: June 21st, 2023



  • Just because the phone is connected to the car doesn’t mean that the driver of said car is using the phone, or that the phone even belongs to the person driving.

    It is Android’s job to provide music and entertainment to my car’s head unit. It is my job to drive safely. It is NOT the job of Android to make sure I’m driving safely. Why in the hell should my passenger have to sit through repeated “safety breaks” while they try to scroll down to play a new song?



  • I’m curious, why does this require OpenSSL in order to compile? I’m not aware of any audio formats that use encryption, but I could be wrong.

    My first thought was for connecting to https streams, but I don’t remember Winamp having this capability. “Back in the day,” I used Winamp for playing local audio and RealPlayer for what little streaming was available.


  • There are really two reasons ECC is a “must-have” for me.

    • I’ve had some variant of a “homelab” for probably 15 years, maybe more. For a long time, I was plagued with crashes, random errors, etc. Once I stopped using consumer-grade parts and switched over to actual server hardware, these problems went away completely. I can actually use my homelab as the core of my home network instead of just something fun to play with. Some of this improvement is probably due to better power supplies, storage, server CPUs, etc, but ECC memory could very well play a part. This is just anecdotal, though.
    • ECC memory has saved me before. One of the memory modules in my NAS went bad; ECC detected the error, corrected it, and TrueNAS sent me an alert. Since most of the RAM in my NAS is used for a ZFS cache, this likely would have caused data loss had I been using non-error-corrected memory. Because I had ECC, I was able to shut down the server, pull the bad module, and start it back up with maybe 10 minutes of downtime as the worst result of the failed module.

    I don’t care about ECC in my desktop PCs, but for anything “mission-critical,” which is basically everything in my server rack, I don’t feel safe without it. pfSense is probably the most critical service, so whatever machine is running it had better have ECC.

    I switched from bare metal to a VM for largely the same reason you did. I was running pfSense on an old-ish Supermicro server, and it was pushing my UPS too close to its power limit. It’s crazy to me that yours only pulled 40 watts, though; I think I saved about 150-175W by switching to a VM. My entire rack contains a NAS, a Proxmox server, a few switches, and a couple of other miscellaneous things. Total power draw is about 600-650W, and it jumps over 700W under heavy load (file transfers, video encoding, etc). I still don’t like the idea of having pfSense on a VM, though; I’d really like to be able to make changes to my Proxmox server without dropping connectivity to the entire property. My UPS tops out at 800W, though, so if I do switch back to bare metal, I realistically only have 50-75W to spare.
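    The headroom math, as a quick Python sketch (wattages taken from the comment above; headroom is just UPS capacity minus current draw):

```python
# Rough power-budget check for moving pfSense back to bare metal.
# All wattages are the figures quoted above; they are estimates, not measurements.

UPS_CAPACITY_W = 800  # rated UPS output

def headroom(rack_draw_w: float, ups_capacity_w: float = UPS_CAPACITY_W) -> float:
    """Watts left before the UPS hits its rated output."""
    return ups_capacity_w - rack_draw_w

# Under heavy load the rack peaks a bit over 700 W, leaving well under
# 100 W for a dedicated pfSense box.
peak_headroom = headroom(700)
idle_headroom = headroom(650)
```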


  • corroded@lemmy.world to Selfhosted@lemmy.world · Low Cost Mini PCs · 2 months ago

    I have a few services running on Proxmox that I’d like to switch over to bare metal. pfSense, for one. No need for an entire 1U server, but running on a dedicated machine would be great.

    Every mini PC I find is always lacking in some regard. ECC memory is non-negotiable, as is an SFP+ port or the ability to add a low-profile PCIe NIC, and I’m done buying off-brand Chinese crap on Amazon.

    If someone with a good reputation makes a reasonably-priced mini PC with ECC memory and at least some way to accept a 10Gb DAC, I’ll probably buy two.



  • I’m okay with the “human-readability,” but I’ve never been happy with the “machine-readability” of XML. Usually I just want to pull a few values from an API response, yet every XML library assumes I want the entire file parsed into a data structure that I can iterate through. It’s a waste of resources and a pain in the ass.

    Even though it’s not the “right” way, most of the time I just use regex to grab whatever exists between an opening and closing tag. If I’m saving/loading data from my own software, I just use a serialization library.
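    Both approaches can be sketched in Python; the sample document and tag names below are made up. The regex trick works for flat, trusted responses, and `iterparse` is the stdlib’s streaming alternative that stops as soon as it finds what you need instead of building the whole tree:

```python
import re
import xml.etree.ElementTree as ET
from io import StringIO

# Hypothetical API response; real payloads will differ.
doc = "<response><status>ok</status><count>42</count><payload>...</payload></response>"

def grab(tag: str, xml_text: str) -> str:
    """The 'not right but works' approach: whatever sits between <tag> and </tag>."""
    m = re.search(rf"<{tag}>(.*?)</{tag}>", xml_text, re.DOTALL)
    return m.group(1) if m else ""

def first_value(tag: str, xml_text: str) -> str:
    """Streaming parse: visit elements as they close and bail out early."""
    for _, elem in ET.iterparse(StringIO(xml_text)):
        if elem.tag == tag:
            return elem.text
    return ""
```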



  • This is only true when you have a single transmission medium and a fixed band. Cable internet is a great example: you only have a few MHz of bandwidth available for data transmission in either direction; the rest is used up by TV channels and whatever else. WiFi is also like this; even though traffic flows both ways, every device shares a very small portion of the 2.4GHz or 5GHz band that your WiFi router can use.

    Ethernet is not like this. Transmit and receive are handled independently: slower standards use dedicated wire pairs for each direction, and faster ones like 10GBASE-T use echo cancellation so each pair carries both directions at full rate, completely isolated from any other signals outside the transmitter and receiver. If your ethernet hardware negotiates a 10Gb connection, you have 10Gb in one direction and 10Gb in the other; saturating one has absolutely no effect on the other.


  • You are absolutely correct; I phrased that badly. Over any kind of RF link, bandwidth is just bandwidth. I was referring more to modern ethernet standards, all of which provide separate capacity for upload and download. As far as I am aware, even bi-directional fiber links still work symmetrically, just with different wavelengths over the same fiber.

    If you have a 10GBASE-T connection, only using 5Gb in one direction doesn’t give you 15Gb in the other. It’s still 10Gb either way.
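    As a toy model in Python (capacity number assumed): each direction is capped independently, and unused capacity in one direction is not transferable to the other.

```python
# Toy model of a full-duplex link: each direction has its own fixed capacity.

LINK_CAPACITY_GBPS = 10  # e.g. a negotiated 10GBASE-T link

def usable(requested_gbps: float, capacity_gbps: float = LINK_CAPACITY_GBPS) -> float:
    """Throughput one direction can carry, regardless of what the other is doing."""
    return min(requested_gbps, capacity_gbps)

# Pushing only 5 Gb/s one way does not raise the ceiling the other way:
# both directions top out at 10 Gb/s, never 15.
```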


  • This is a really good explanation; thank you!

    There is one thing I’m having a hard time understanding, though; I’m going to use my ISP as an example. They primarily serve residential customers and small businesses. They provide VDSL connections, and there isn’t a data center anywhere nearby, so any traffic going over the link to their upstream provider is almost certainly very asymmetrical. Their consumer VDSL service is 40Mb/2Mb, and they own the phone lines (so any restriction on transmit power from the end-user is their own restriction).

    To make the math easy, assume they have 1000 customers and that they’re guaranteeing the full 40Mb even at peak times (this is obviously far from true, but it makes the numbers easy). This means they have at least a 40Gb link to their upstream provider. They’re using the full 40Gb on one side of the link and only 2Gb on the other. I’ve used plenty of fiber SFP+ modules, and I’ve never seen one that supports any kind of asymmetrical connection.
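    The aggregate math from this hypothetical scenario, as a quick Python check:

```python
# Hypothetical ISP from the comment above: 1000 subscribers,
# each guaranteed the full 40/2 Mb/s VDSL profile at peak.

SUBSCRIBERS = 1000
DOWN_MBPS, UP_MBPS = 40, 2

down_total_gbps = SUBSCRIBERS * DOWN_MBPS / 1000  # 40 Gb/s toward customers
up_total_gbps = SUBSCRIBERS * UP_MBPS / 1000      # 2 Gb/s back upstream

# The upstream link is symmetric, so ~38 Gb/s of upload capacity sits idle.
idle_upload_gbps = down_total_gbps - up_total_gbps
```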

    With this scenario, I would think that offering their customers a faster uplink would be free money. Yet for whatever reason, they don’t. I’d even be willing to buy whatever enterprise-grade equipment is on the other end of my 40/2 link to get a symmetrical 40/40; still not an option. Bonded DSL, also not an option.

    With so much unused upload bandwidth on the ISP’s part, I would think they’d have some option to upgrade the connection. The only thing I can think is that having to maintain accounts for multiple customers with different service levels costs more than selling some of their unused upload bandwidth.




  • Like several people here, I’ve also been interested in setting up an SSO solution for my home network, but I’m struggling to understand how it would actually work.

    Let’s say I set up an LDAP server. I log into my PC, and now my PC “knows” my identity from the LDAP server. Then I navigate to the web UI for one of my network switches. How does SSO work in this case? The way I see it, there are two possible solutions.

    • The switch has some built-in authentication mechanism that can authenticate with the LDAP server or something like Keycloak. I don’t see how this would work as it relies upon every single device on the network supporting a particular authentication mechanism.
    • I log into and authenticate with an HTTP forwarding server that then supplies the username/password to the switch. This seems clunky but could be reasonably secure as long as the username/password is sufficiently complex.

    I generally understand how SSO works within a curated ecosystem like a Windows-based corporate network that uses primarily Microsoft software for everything. I have various Linux systems, Windows, a bunch of random software that needs authentication, and probably 10 different brands of networking equipment. What’s the solution here?
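    The second option can be made concrete with a toy Python sketch of forward-auth gateway logic: the gateway checks the user’s SSO session and, only if it’s valid, replays a stored device credential to the switch. Every name and data structure here is hypothetical; real deployments put this logic in a reverse proxy with an auth middleware.

```python
# Toy forward-auth gateway. Session store and credential vault are plain
# dicts here purely for illustration; a real setup would back these with
# an SSO provider and a secrets store.

SESSIONS = {"session-abc": "alice"}                # session cookie -> SSO identity
DEVICE_CREDS = {"switch-1": ("admin", "s3cret")}   # device -> its local login

def forward_auth(session_id: str, device: str):
    """Return credentials to replay to the device, or None (i.e. HTTP 401)."""
    user = SESSIONS.get(session_id)
    if user is None:
        return None  # not logged in to the SSO provider: reject
    return DEVICE_CREDS.get(device)  # logged in: hand back the device's login
```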


  • If you’re concerned about power, I don’t see any reason it should matter where you have your cameras, as long as your PoE switch is rated to supply them. If your NVR has some kind of built-in PoE switch, you can probably avoid a second PoE switch by co-locating the cameras in the same network closet, but PoE switches are so cheap, I’d say set it up however it’s most convenient for you.

    To answer your question of “is it possible”: it absolutely is. I’m doing something similar. I have a lot of cameras, but two of them are PoE and are quite a distance away from my NVR server. They feed into a PoE switch that connects to a second switch that acts as the main switch for the building. That switch has a fiber connection to a third switch that lives in my server rack, and that switch has a DAC connection to my NVR server. They work just as well as the ones plugged directly into my rack switch.

    The only real concern I see is bandwidth. If your cameras and NVR are on the same switch, you avoid having to pass the camera traffic out across your network to the switch that has your NVR. For 4 cameras, though (even at 4K), your total bandwidth is going to be far less than what even a 1Gb network can handle. It’s very easy to saturate a switch, though, so this is going to depend largely on your network topology and what you’re using your network for.
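    A quick back-of-the-envelope in Python, with an assumed per-camera bitrate (4K IP camera streams commonly run on the order of 8-16 Mb/s with modern codecs):

```python
# Rough check: four 4K cameras against a gigabit link.
# The per-camera bitrate is an assumption, not a measured value.

CAMERAS = 4
BITRATE_MBPS = 16    # assumed worst-case per-camera stream
LINK_MBPS = 1000     # one 1 Gb/s switch port

total_mbps = CAMERAS * BITRATE_MBPS   # aggregate camera traffic
utilization = total_mbps / LINK_MBPS  # fraction of the link consumed

# Even at the high end of the assumed bitrate, the cameras use only a few
# percent of a gigabit link, so topology matters more than raw capacity.
```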

    I would highly encourage you to keep your IP cameras on a separate VLAN, though. IP cameras all have a tendency to want to “call home,” and while that might just be for something as simple as checking for firmware updates, I don’t want my cameras connecting to anything outside my network without my permission.


  • Got my two CRS310s, set them up, and they’re working well. I’m amazed with how configurable they are in comparison to my old Zyxel switches.

    I’m not sure I’m setting up VLANs correctly, though. There’s an option to set up VLANs under Interface or Bridge. I have several ports that pass more than one tagged VLAN, and as far as I can tell, that’s only possible on the Bridge. So my Interface -> VLAN setup is completely empty, and my Bridge -> VLAN setup contains all my VLAN assignments.

    I’ve researched this a bit, and it seems like I’m doing it the right way, but I’m a bit concerned I’m passing the VLANs off to the CPU instead of the switch chip. This is the first switch I’ve used with this kind of VLAN setup. Am I on the right track?
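    For reference, the Bridge -> VLAN arrangement described above looks like this in RouterOS CLI syntax; `bridge1`, `ether2`, and `sfp-sfpplus1` are placeholder names for one untagged access port and one tagged trunk on VLAN 10:

```
# Enable VLAN filtering on the bridge itself
/interface bridge set bridge1 vlan-filtering=yes

# VLAN 10: tagged on the trunk (and the bridge CPU port), untagged on ether2
/interface bridge vlan add bridge=bridge1 tagged=bridge1,sfp-sfpplus1 untagged=ether2 vlan-ids=10

# Untagged ingress on ether2 gets classified into VLAN 10
/interface bridge port set [find interface=ether2] pvid=10
```

Whether the bridge is hardware-offloaded on a given port shows up as the “H” flag in `/interface bridge port print`.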

    Also, my 1Gb SFP modules only work if I disable auto-negotiation; then they show as “Up,” with all the lights on, even if no cable is attached. Not a big deal really, but strange. I don’t have this issue with my 10Gb SFP+ modules.




  • I had no idea. MikroTik is definitely new to me. For a long time, I always used surplus or recycled enterprise-level hardware, and that usually ended up being Dell, HP, or Cisco. When I did my most recent upgrade, I replaced most of that with Trendnet or TP-Link; it just made more sense, and I recognized the brand names.

    The fact that MikroTik has a CLI at all is kind of a plus to me, even if it’s horrible. Regardless, though, my network setup usually consists of Factory Default Settings -> Assign a Static IP -> Configure port-based VLANs. It’s not particularly advanced. Most likely I wouldn’t even need to use anything other than the web-based management interface.

    I really appreciate the suggestion. MikroTik makes a few switches that would work perfectly for me, but I had written them off as a “random white-label brand.” I think I’ll probably be replacing my Zyxel switches with MikroTik.