Going to rebuild my home cluster. Goals: balanced redundancy, Plex storage, personal cloud, a pen test lab, and a couple of light websites.
Refresh Hardware:
Server 1: 5900x 64GB ECC, 1TB NVME, 280GB Optane, 8x10TB, 2x10GbE
Server 2: 2400ge 64GB ECC, 2x 1TB NVME, 8x10TB, 10GbE
Server 3/4: 2x NUC 10700u 64GB, 512GB SATA, 1TB NVME, 10GbE
Server 5: 2x E5-2450v2 (8 core ea.) 96GB ECC, Sata SSD: 3x 500GB, 2x 2TB, 4x 4TB, Rust: 8x 6TB, 2x 10GbE
Server 6: Pentium N5105, 16GB, 128GB NVME, spare SATA slot, 4x 2.5GbE
Server 7: i3-10100u 16GB, 256GB NVME, spare SATA slot, 4x1GbE
Server 8: 4 core Intel, 64GB, 1TB NVME, 12x 3.5" usable slots, 4x1GbE
NAS: 6x14TB, 2x 2TB Cache, 2x 500GB root, 10GbE
WS: 3970x 128GB, 2TB NVME, 10GbE
I have a whole slew of smaller 3TB/2TB/1TB drives along with 2x spares ea. for the 6TB and 10TB arrays.
I'm going to use Server 6 as the new OPNsense box, replacing Server 7. No RAID controllers; everything runs off HBAs.
The question: how would you set up the storage on these servers, starting from scratch? I was looking into Ceph, but I don't know how I feel about eating my network bandwidth to distribute writes.

Current thinking: mirrored ZFS root disks where possible, raidz3 on any large spinning arrays, and Server 8 populated with old junk drives running SnapRAID + mergerfs as a final backup tier. I would swap the 5900x and the 3970x for WS duties, but I don't feel like tearing down the water cooling loop. I could probably get a nice replication scheme going across the 1TB NVME drives for containers/VMs, and use the bulk ZFS storage for secondary NAS and backup.
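For reference, a minimal sketch of what the mirrored-root + raidz3 idea could look like on one of the 8x10TB boxes. This is only illustrative: device names are placeholders (you'd use /dev/disk/by-id paths on real hardware), and the mirrored root would normally be handled by the installer (e.g. Proxmox) rather than by hand:

```sh
# Data pool: raidz3 across the eight 10TB drives (placeholder names),
# ashift=12 for 4K-sector disks.
zpool create -o ashift=12 tank \
  raidz3 sda sdb sdc sdd sde sdf sdg sdh

# Attach one of the 10TB spares as a hot spare.
zpool add tank spare sdi

# Separate datasets so snapshots/replication can be scoped per use.
zfs create -o compression=lz4 tank/media
zfs create -o compression=lz4 tank/backup
```

With two cold spares per array on the shelf anyway, whether the hot spare earns its slot is a judgment call.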
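For the Server 8 SnapRAID + mergerfs tier, a sketch of the config under assumed mount paths (adjust drive count and parity to whatever junk drives actually land in the 12 bays; with mixed 1/2/3TB disks the parity drive must be at least as large as the biggest data drive):

```sh
# /etc/snapraid.conf (paths are assumptions)
# parity /mnt/parity1/snapraid.parity
# content /var/snapraid/snapraid.content
# content /mnt/data1/snapraid.content
# data d1 /mnt/data1
# data d2 /mnt/data2
# data d3 /mnt/data3

# /etc/fstab entry pooling the data drives with mergerfs:
# /mnt/data* /mnt/pool fuse.mergerfs category.create=mfs,dropcacheonclose=true 0 0

# Parity is computed on a schedule, not in real time:
snapraid sync
snapraid scrub
```

Scheduled `snapraid sync` (e.g. nightly via cron) is the usual pattern, since SnapRAID only protects data as of the last sync.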
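For replication across the 1TB NVME drives, the basic primitive is incremental zfs send/recv over SSH; dataset and host names below are made up for illustration:

```sh
# One-time full send of the containers dataset to a peer node:
zfs snapshot rpool/containers@sync1
zfs send rpool/containers@sync1 | ssh server2 zfs recv -F rpool/containers

# Subsequent runs only ship the delta between snapshots:
zfs snapshot rpool/containers@sync2
zfs send -i @sync1 rpool/containers@sync2 | ssh server2 zfs recv rpool/containers
```

In practice you'd let a tool automate the snapshot rotation and incrementals (sanoid/syncoid, or Proxmox's built-in storage replication if the NUCs end up in a PVE cluster) rather than scripting this by hand.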
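A quick sanity check on what raidz parity costs in usable space across the proposed arrays (raw capacity only; real-world usable is a bit less after ZFS metadata and padding):

```python
def raidz_usable(ndisks, disk_tb, parity):
    """Raw usable TB of a single raidz vdev: data disks times disk size.

    Ignores ZFS metadata/padding overhead, so treat the result as an
    upper bound rather than what `zfs list` will report.
    """
    return (ndisks - parity) * disk_tb

# Servers 1/2: 8x10TB raidz3 -> 50 TB raw usable each
print(raidz_usable(8, 10, 3))  # 50
# Server 5: 8x6TB raidz3 -> 30 TB raw usable
print(raidz_usable(8, 6, 3))   # 30
```

So raidz3 on the 8-wide 10TB arrays gives up 30 TB per box to parity; if that stings, raidz2 plus the cold spares you already have is the usual counter-argument at 8 drives wide.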
Am I missing anything glaring? Anything interesting I could try?