progress in design


2022-09-10


I've slowly made progress on the cluster design. I've started setting up what could be called the Hashi stack. It's been an interesting, if sometimes frustrating, process.
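

Concretely, the thing I keep checking while bringing agents up is whether each one can see a leader. Here's a minimal sketch of that check, assuming Consul and Nomad agents on their default HTTP ports (8500 and 4646) on localhost; the hosts, ports, and which tools you count as "the stack" will vary with your own layout.

```python
#!/usr/bin/env python3
"""Minimal sketch: poll the leader-status endpoints of local Hashi agents.

Assumes default HTTP ports (Consul 8500, Nomad 4646) on localhost;
adjust hosts/ports for your own cluster layout.
"""
import json
from urllib.request import urlopen

AGENTS = {
    "consul": "http://127.0.0.1:8500/v1/status/leader",
    "nomad": "http://127.0.0.1:4646/v1/status/leader",
}

def check(name: str, url: str) -> None:
    try:
        with urlopen(url, timeout=2) as resp:
            # Both endpoints return the current leader's address as a
            # JSON string; an empty string means no leader is elected.
            leader = json.load(resp)
            status = f"leader={leader}" if leader else "no leader elected"
    except OSError as err:  # covers URLError, connection refused, timeouts
        status = f"unreachable ({err})"
    print(f"{name}: {status}")

if __name__ == "__main__":
    for name, url in AGENTS.items():
        check(name, url)
```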


I've also worked on getting some network kit. I ended up looking at IBM RackSwitch equipment; eBay found me some IBM 8264 switches for cheap (one of them had Ethernet). The cables, of course, were a fortune.


Here is a link to the diagram that represents my progress:


https://static.developing-today.com/image/2022-09-10_cluster_network_diagram.png


If the AM5 motherboards are reasonable, I hope to pick up a few. I hope those motherboard chipsets last most of the generation; I'm betting on it, but there's a small chance that while AM5 lasts until 2025 (and beyond?), X670E could die sooner. We'll see.


I think using a lot of commodity hardware makes sense financially. Between the machines and the switches, though, electricity is going to be a concern.


Even still, I think I can see a need for more PCIe lanes than I can get even from AM5's PCIe 5.0 lanes. If bridges existed that fanned one 5.0 lane out into multiple 4.0 or 3.0 lanes, maybe it would be different. I want to have 2-3 machines that each carry a PCIe 3.0 x16 HBA card, a dual 100Gb card, and a dual 40Gb card. Compute is one thing; mass storage is harder. Even using a storage mesh with the M.2 on each machine, I think there are tasks that will need disks. Because of this, I'm looking at 2-3 Threadripper Pro or EPYC machines, one day. Nothing specific, other than the lanes, and not old Threadripper. That's a future thing, but it's now a plan. I need the lanes.
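

To put numbers on the lane problem, here's a rough budget sketch. The card widths follow the cards I named above (dual-port 100Gb NICs are typically x16, dual 40Gb typically x8), and the platform totals are approximate public figures, roughly 24 usable CPU lanes on AM5 after the chipset link versus 128 on Threadripper Pro or EPYC, so treat every number as an assumption rather than a spec.

```python
# Rough PCIe lane budget for one storage-heavy node.
# Card widths follow the cards named above; platform totals are
# approximate public figures, so treat them as assumptions.

CARDS = {
    "PCIe 3.0 x16 HBA": 16,
    "dual-port 100Gb NIC (typically x16)": 16,
    "dual-port 40Gb NIC (typically x8)": 8,
}

PLATFORMS = {
    "AM5 (usable CPU lanes after chipset link, approx.)": 24,
    "Threadripper Pro / EPYC (approx.)": 128,
}

need = sum(CARDS.values())
print(f"lanes wanted per node: {need}")
for platform, have in PLATFORMS.items():
    verdict = "fits" if have >= need else f"short by {need - have}"
    print(f"  {platform}: {have} lanes -> {verdict}")
```

The three cards alone want 40 lanes before counting a GPU slot or the M.2 drives, so commodity AM5 comes up short no matter how the slots are wired; that's the whole case for the lane-rich platforms.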


eBay for networking hardware, FS for cables, Amazon for a StarTech rack and TRENDnet 2.5Gb gear, plus the commodity PC gear. After this is all built, it's a question of whether my electrical service can handle it all. I may need to call someone. The watts are going to cost, either way.
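

On the electricity question, this is the kind of back-of-the-envelope math I'm doing. Every wattage here is a placeholder guess rather than a measurement of my actual gear, and the $0.14/kWh rate is made up, so plug in real numbers.

```python
# Back-of-the-envelope monthly power cost for the rack, running 24/7.
# All wattages and the $/kWh rate are placeholder assumptions,
# not measurements of actual gear.

LOADS_WATTS = {
    "commodity nodes (6 x ~150 W)": 6 * 150,
    "rackswitches (2 x ~300 W)": 2 * 300,
    "misc (rack fans, 2.5Gb switch)": 100,
}

RATE_USD_PER_KWH = 0.14  # assumed utility rate
HOURS_PER_MONTH = 24 * 30

total_watts = sum(LOADS_WATTS.values())
kwh_per_month = total_watts * HOURS_PER_MONTH / 1000
cost = kwh_per_month * RATE_USD_PER_KWH

print(f"total draw: {total_watts} W")
print(f"monthly energy: {kwh_per_month:.0f} kWh")
print(f"monthly cost: ${cost:.2f}")
```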
