The TL;DR after lots of research: don't use consumer SSDs. Only use enterprise SSDs.
Attempt / Experiment Number 2.
I ended up ordering 5x 1TB Samsung PM863a enterprise SATA drives.
After reinstalling Ceph, I put three of the drives into kube05, and one more into kube01 (no ports / power for adding more than a single SATA disk...).
Then I put the cluster together. At first, performance wasn't great... (though it was still 10x the performance of the first attempt!). But after updating the CRUSH map to set the failure domain to OSD rather than host, performance picked up quite dramatically.
This is due to the current imbalance of storage per host: kube05 has 3TB of drives, kube01 has 1TB, and there is no storage elsewhere.
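For reference, that change boils down to creating a new CRUSH rule with an OSD failure domain and pointing the pool at it. Here's a minimal sketch of that step, assuming the ceph CLI and admin keyring are available on the node; the pool and rule names below are placeholders, not my actual ones.

```python
#!/usr/bin/env python3
"""Sketch: switch a replicated pool's CRUSH failure domain from host to OSD.

Assumes the ceph CLI is installed and reachable. Pool/rule names are placeholders.
"""
import subprocess

POOL = "kube"            # placeholder pool name
RULE = "replicated_osd"  # new rule that treats each OSD as a failure domain


def ceph(*args: str) -> str:
    """Run a ceph CLI command and return its stdout."""
    return subprocess.run(
        ["ceph", *args], check=True, capture_output=True, text=True
    ).stdout


# Create a replicated rule that places copies on distinct OSDs
# instead of distinct hosts (root=default, failure domain=osd).
ceph("osd", "crush", "rule", "create-replicated", RULE, "default", "osd")

# Point the pool at the new rule; Ceph rebalances on its own.
ceph("osd", "pool", "set", POOL, "crush_rule", RULE)

print(ceph("osd", "pool", "get", POOL, "crush_rule"))
```

Worth noting: an OSD-level failure domain means multiple replicas can land on the same host, so this is a stopgap until the drives are spread across more nodes.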
BUT... since this was a very successful test, and it delivered enough IOPS to run my I/O-heavy Kubernetes workloads, I decided to take it up another step.
A few notes:
Can you guess which drive is the Samsung 980 EVO, and which drives are the enterprise SATA SSDs? (Look at the latency column.)
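If you want to pull the same per-OSD latency numbers on your own cluster, `ceph osd perf` reports them. A rough sketch that sorts the output, assuming a reasonably recent Ceph release (the exact JSON layout varies a bit between versions):

```python
#!/usr/bin/env python3
"""Sketch: list per-OSD latency, worst first (same data as `ceph osd perf`)."""
import json
import subprocess

out = subprocess.run(
    ["ceph", "osd", "perf", "--format", "json"],
    check=True, capture_output=True, text=True,
).stdout

data = json.loads(out)
# The JSON shape differs slightly across Ceph releases; handle both.
infos = data.get("osd_perf_infos") or data.get("osdstats", {}).get("osd_perf_infos", [])
infos.sort(key=lambda o: o["perf_stats"]["apply_latency_ms"], reverse=True)

for osd in infos:
    stats = osd["perf_stats"]
    print(f"osd.{osd['id']}: "
          f"commit {stats['commit_latency_ms']} ms, "
          f"apply {stats['apply_latency_ms']} ms")
```

The consumer drive stands out immediately once it's under sustained sync-write load; the enterprise SATA drives stay in the low single-digit milliseconds.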
Future - Attempt #3
The next goal is to properly distribute the OSDs.
Since I am maxed out on the number of 2.5" SATA drives I can deploy, I picked up some NVMe instead.
5x 1TB Samsung PM963 M.2 NVMe.
I picked up a pair of dual-slot, half-height bifurcation cards for kube02. This will allow me to place four of these drives into it, each with dedicated bandwidth to the CPU.
The remaining drive will go into kube01, replacing the 1TB Samsung 980 NVMe.
This should give me a pretty decent distribution of data, and with all enterprise drives, it should deliver acceptable performance.
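Once the NVMe OSDs are in, a quick sanity check that data is actually spreading across hosts is `ceph osd df tree`. Here's the same check scripted; the JSON field names are my assumption based on recent Ceph releases, so treat it as a sketch:

```python
#!/usr/bin/env python3
"""Sketch: show per-OSD utilization grouped by host (same data as `ceph osd df tree`)."""
import json
import subprocess

out = subprocess.run(
    ["ceph", "osd", "df", "tree", "--format", "json"],
    check=True, capture_output=True, text=True,
).stdout

for node in json.loads(out)["nodes"]:
    if node["type"] == "host":
        print(f"-- {node['name']} --")
    elif node["type"] == "osd":
        print(f"  {node['name']}: {node['utilization']:.1f}% used")
```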