What are your use cases for the tiny home server and the APU? I built a smallish FreeNAS box with an i3 earlier this year, before this year’s AMD announcements. I like the i3 because it supports ECC memory and fits into some Supermicro server boards. IPMI makes it easy to set up over LAN and I don’t ever have to plug in a monitor or keyboard. It would be nice to see more AMD boards with it besides the ASRock X470D4U.
My desktop/Plex server is due for an upgrade next year. Maybe Threadripper prices will come down.
> IPMI makes it easy to setup over LAN and I don’t have to ever plug in a monitor or keyboard.
Hardware BMCs have their place (e.g. low-overhead compute-cluster nodes, where free cores = profit.)
But for most workloads, and especially consumer workloads, there’s no reason that the concept of a “Baseboard Management Controller” needs to be instantiated as hardware; you can just as well set the system up with a hypervisor OS (e.g. a minimal Linux KVM install, or an appliance OS designed for this, like VMware’s ESXi), set your regular workload up as one VM guest (and pass through to it all the nice hardware you have, like GPUs), and then set up another “control plane” guest VM that exposes IPMI management of your regular guest and of the hypervisor itself. As they say, “there’s no problem that can’t be solved with another layer of indirection.” ;)
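To make that concrete, here’s a rough sketch of the “regular workload as a guest, with the good hardware passed through” half, using the libvirt Python bindings against a stock Linux/KVM host. The guest name, disk path, and the GPU’s PCI address (0000:01:00.0) are all hypothetical placeholders; the GPU is assumed to already be bound to vfio-pci.

```python
# Sketch: define and boot a KVM guest with a host GPU passed through.
# Assumes libvirt-python is installed and the GPU at 0000:01:00.0 is
# already bound to the vfio-pci driver on the host.
import libvirt

DOMAIN_XML = """
<domain type='kvm'>
  <name>workload</name>
  <memory unit='GiB'>16</memory>
  <vcpu>8</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/workload.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <!-- hand the host GPU to the guest -->
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
      </source>
    </hostdev>
  </devices>
</domain>
"""

conn = libvirt.open('qemu:///system')  # local hypervisor connection
dom = conn.defineXML(DOMAIN_XML)       # register the guest persistently
dom.create()                           # and boot it
```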
(I should note, this is exactly the setup you get by default if you install ESXi [hypervisor] + a free home license of vCenter Server [the BMC-equivalent appliance] onto a box. I was happily using this exact setup for quite a while, though I eventually moved to Linux+KVM just because I wanted the host to be able to create guest volumes from a thin-provisioned storage pool and then serve them out to the guests over iSCSI, as if I had a teeny-tiny SAN.)
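If anyone wants to replicate the “teeny-tiny SAN” part on Linux, the gist is something like the sketch below: a sparse (thin-provisioned) ZFS zvol exported as an iSCSI LUN through the kernel’s LIO target. The pool name, size, and IQN are all made up, and this assumes ZFS-on-Linux plus targetcli are already installed.

```python
# Sketch: thin-provisioned guest volume served out over iSCSI.
# "tank", the 100G size, and the IQN are placeholder values.
import subprocess

def run(*cmd):
    print('+', ' '.join(cmd))
    subprocess.run(cmd, check=True)

# -s makes the zvol sparse: blocks are only allocated as the guest writes.
run('zfs', 'create', '-s', '-V', '100G', 'tank/vm0')

# Expose it through the LIO iSCSI target via targetcli's batch mode.
run('targetcli', '/backstores/block', 'create', 'vm0', '/dev/zvol/tank/vm0')
run('targetcli', '/iscsi', 'create', 'iqn.2019-11.local.hypervisor:vm0')
run('targetcli', '/iscsi/iqn.2019-11.local.hypervisor:vm0/tpg1/luns',
    'create', '/backstores/block/vm0')
```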
Of course, this has only become a viable approach for IoT integrators very recently, which is why we don’t see any IoT appliances (e.g. NASes) coming set up this way from the manufacturer just yet. Until recently, your choices for building IoT devices were microcontrollers at the low end; old ARM cores in the middle; and Intel’s most “power-efficient”, feature-stripped cores at the high end. None of these were particularly suited to hosting virtualization. But Ryzen is! While it may only be affordable to home-builders today, I expect to see AMD chasing Intel into its “power-efficient embedded” market segment quite soon, with Ryzen-based, many-core, virtualization-capable equivalents to the Intel Atom line sold cheaply enough to get system integrators excited.
The FreeNAS folks recommend against running it in a VM, and I’ve heard about problems with iSCSI :-). I could easily pick up used Dell servers with dual E5 Xeons, 128GB of ECC RAM, and whatever SATA/SAS controllers I want off Craigslist. ESXi costs money, and a yearly cost at that, though I have played around with the trial version.
But! The FreeNAS community is a bunch of grumpy sysadmins. I’m considering going down the Linux-and-ZFS route instead. I’d be able to do more with VMs (I feel more comfortable in Linux vs. FreeBSD), and I’m building some IoT Pis to collect data, so having a Linux box to land it on would be nice.
Raspberry Pis haven't quite got there yet, but I'm hoping the next iteration will have an NVMe or SATA implementation. Although, to be honest, it doesn't have to be the Pi. Any small board that'll run Linux, has at least gigabit Ethernet, and has a fast path to disk will do. At that point it'll be possible to make a Ceph cluster with one Pi per disk; the per-node gist is sketched below.
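Roughly, each board would run a single OSD on its one attached disk, something like this (the device path is a placeholder, and this assumes a conventional Ceph install with ceph-volume available):

```python
# Sketch: one Ceph OSD per small board, backed by its single disk.
# /dev/sda is a placeholder for whatever the board's disk enumerates as.
import subprocess

DISK = '/dev/sda'

# Create and activate a bluestore OSD on the whole device; the OSD
# registers itself with the cluster's monitors on success.
subprocess.run(['ceph-volume', 'lvm', 'create', '--data', DISK], check=True)
```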
A few years ago now, Western Digital demonstrated an onboard controller with two 1GbE NICs and a mini Linux distribution with a single Ceph OSD installed. Unfortunately it never made it out of the lab. I would gladly pay a $50 premium per device for spinning rust to have that onboard. Perhaps the issue is that NVMe-connected devices would be much costlier to build? Or maybe there's no standard for housing network-connected storage devices in a rack?
You can do that, but they're not really replacements for one another, and there are lots of things that can pull you one way or the other for this use case.
- You generally don't want to run storage servers virtualized.
- Tooling matters. There are multiple reasons I generally do things the same way at home as I do at work (within reason).
- Probably a niche concern, but I have some hardware that is only configurable during early boot.
- Virtualization costs performance. Not a huge issue at home, granted, and you have to quantify it for your specific workload (it is usually going to be IO; see the sketch after this list for a quick way to measure it). But it certainly can matter with home workloads; home theater video processing is probably the most common.
I use both for what they're good at. IPMI is for managing hardware. Virtualization is for not needing more of it.
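On quantifying the IO cost: one low-effort approach is to run the same fio job on bare metal and again inside the guest, then compare. The job parameters below (4k random reads against a scratch directory) are just an example, not a benchmark recommendation:

```python
# Sketch: run an identical fio job on host and guest, then diff results.
# Requires fio to be installed; /tmp/fiotest is a throwaway scratch dir.
import os
import subprocess

os.makedirs('/tmp/fiotest', exist_ok=True)
subprocess.run([
    'fio', '--name=randread', '--rw=randread',
    '--bs=4k', '--size=1G', '--iodepth=32', '--ioengine=libaio',
    '--direct=1', '--directory=/tmp/fiotest',
], check=True)
```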
I still have a Haswell i5 quad-core as my Plex/Pi-hole/OpenVPN box. It’s never CPU-pegged. I do have hardware transcoding enabled, which looks like crap but teaches people not to pick such low bitrates on their gigabit connections. Silly Plex defaults. The most CPU used is the deluge/OpenVPN combo out to PIA as it uhhh acquires new content. That and rclone as it pulls off Google Drive. But I see no need to upgrade. What do you use yours for that its CPU needs refreshing?
Intel QSV does degrade quality. The i7-7700K can’t handle a 4K HEVC transcode to 1080p very well on the fly with Plex, and having Plex pre-optimize a 4K video down to a 1080p version takes about as long as the movie itself. I’d like something that’s not dependent on QSV. My Asus motherboard also has a Bluetooth issue from time to time that requires unplugging the motherboard power connection.
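For reference, the CPU-side work being described is roughly the following ffmpeg invocation (filenames and quality settings are placeholders); this is what makes a software 4K HEVC to 1080p H.264 transcode so expensive in real time:

```python
# Sketch: software (CPU-only) 4K HEVC -> 1080p H.264 transcode,
# similar in spirit to what Plex does when not using QSV.
import subprocess

subprocess.run([
    'ffmpeg', '-i', 'movie-2160p-hevc.mkv',
    '-vf', 'scale=-2:1080',                      # downscale, keep aspect
    '-c:v', 'libx264', '-preset', 'veryfast', '-crf', '20',
    '-c:a', 'copy',                              # leave audio untouched
    'movie-1080p.mkv',
], check=True)
```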
I’m not sure I want to spend the money on a 10Gb backbone or wait and upgrade the desktop. My UniFi switches and Supermicro board can do link aggregation, but my Asus mobo only has one port.
If your UniFi switch has SFP+, you can get 10Gb copper SFP+ modules these days for under $60, and a dual-port 10Gb PCIe NIC is another $180-200. Probably not worth the upgrade just for Plex, but most people don't realize how cheap copper 10Gb PHY has gotten. And if you have SFP+ on both ends and just need something to fill in between, you can get fused 10Gb DAC cables for under $20, which is a drop in the bucket for those SFP+ ports that are sitting there collecting dust.
The SFP+ ports on my 150W PoE switch are only 1Gb. The MikroTik CRS305 is the most cost-effective 10Gb switch I’ve found, at ~$130. UniFi has an option but it’s $599.
DAC is the most sensible option considering I need to go from my basement, up 2 more stories to my attic, and drop into my office.
True that. Between the weird "phone home" stuff that Ubiquiti just stopped responding to publicly and some of the odd overlap in their product line, it's hard to know what to make of them. That being said, if you own their stock today you're smiling, because they're up over 30% at the time of this writing.
With regard to the USG, the Dream Machine looks to be a kind-of-sort-of successor to the really old USG. I would guess a USG replacement is coming, but the Dream Machine Pro will have 10Gb as I understand it. They do product rollouts pretty horribly, IMO.
If you're up for a migration project on deluge/ovpn you can probably cut out quite a bit of overhead by moving to WireGuard and Mullvad, WireVPN, etc... I thought PIA donated a bunch to WireGuard dev, but it doesn't look like they support it in GA yet.
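For anyone curious what the WireGuard side of that migration looks like, it's pleasantly small. A minimal client setup is sketched below, assuming the wg tool is installed; the address, endpoint, and peer key are placeholders for whatever your provider hands you:

```python
# Sketch: generate a WireGuard keypair and write a minimal client config.
# Endpoint, addresses, and the peer public key are provider-specific
# placeholders. Bring it up afterwards with: wg-quick up wg0
import subprocess

priv = subprocess.run(['wg', 'genkey'], capture_output=True,
                      text=True, check=True).stdout.strip()

CONF = f"""[Interface]
PrivateKey = {priv}
Address = 10.64.0.2/32

[Peer]
PublicKey = <provider-peer-public-key>
Endpoint = vpn.example.net:51820
AllowedIPs = 0.0.0.0/0
"""

with open('/etc/wireguard/wg0.conf', 'w') as f:
    f.write(CONF)
```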
The thing I love about deluge and PIA is that it’s all in a Docker container, so I didn’t have to muck with any settings on the host machine. Works great. I didn’t see anything like that for WireGuard, sadly.
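That pattern is easy to reproduce for whatever VPN ends up inside the container, since the tunnel only needs NET_ADMIN within the container's own network namespace. A rough sketch with the Docker SDK for Python, where the image name and environment variables are hypothetical stand-ins for a combined VPN+deluge image:

```python
# Sketch: run a combined VPN + torrent client container so the tunnel
# never touches the host's network config. "example/deluge-vpn" and the
# env vars are placeholders for a real image's documented settings.
import docker

client = docker.from_env()
client.containers.run(
    'example/deluge-vpn',
    detach=True,
    cap_add=['NET_ADMIN'],     # VPN interface lives inside the container
    environment={'VPN_USER': 'me', 'VPN_PASS': 'secret'},
    ports={'8112/tcp': 8112},  # expose only the Deluge web UI
    volumes={'/srv/downloads': {'bind': '/downloads', 'mode': 'rw'}},
)
```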
> What are your use cases for the tiny home server
Not the OP, but small-form-factor home servers are so excellent: Pi-hole, UniFi controller, Home Assistant (coffee machine warming up for 30 mins before I get up!), some testing VMs, and a load of Docker containers for various chores.
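The coffee-machine trick, for instance, boils down to a scheduled call against Home Assistant's REST API to flip a smart plug; the host, token, and entity ID below are placeholders:

```python
# Sketch: turn the coffee machine's smart plug on via Home Assistant's
# REST API; run this from a cron job / systemd timer 30 min before wake-up.
# The host, long-lived access token, and entity_id are placeholders.
import requests

HA = 'http://homeassistant.local:8123'
TOKEN = '<long-lived-access-token>'

requests.post(
    f'{HA}/api/services/switch/turn_on',
    headers={'Authorization': f'Bearer {TOKEN}'},
    json={'entity_id': 'switch.coffee_machine'},
    timeout=10,
)
```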