Cloudlet: The Beginning
This is the beginning of a tale of adventure, perhaps one of woe, one that has no current end. Major characters are established and there is material for a prequel in which many characters are killed off.
This is the beginning of the tale of my attempt to build a personal cloud, a "cloudlet" to borrow a term from C4 Labs, a "cluster" of Raspberry Pi computers.
I suppose it's worth getting the "why" out of the way.
There is a lot of chatter these days about self-hosting, pulling your content out of the hands of the megacorps and managing your services yourself. There are major pros and cons to the concept. For most folks, self-hosting is untenable because they lack the skills.
For most folks of the geek persuasion who are interested in self-hosting, I strongly recommend heading over to Lollipop Cloud. You need a single computer like a Raspberry Pi and your home network connection. The hardware will run you about $40 and the Lollipop folks have worked hard to make the process as painless as possible.
For the folks of a sysadmin persuasion, for whom a single computer system sounds interesting but kinda boring, well, this tale is for you.
But I still really haven't gotten to why I'm doing this. Partially, it's because I can. I'm not the sort of geek who has rackmount servers at home (yet?) and I really like the cheapness of the Raspberry Pi type boards. They offer a lot of flexibility for not a lot of money. It's also a fun concept for me to be able to tie such cheap boards together into a bit of a powerhouse. A blade server but with RPis.
I'm also building the cloudlet because there's a lot of neat tech out there that I've not had a chance to play with. The software portion of this build is yet to come but I'm looking forward to expanding my horizons.
A couple of disclaimers. First, I work for Joyent, a cloud company. If you are a follower of our smart datacenter products, you'll notice I'm using none of them. Before you take that as me dismissing or disrespecting my employer's products, I want you to read back a bit. Joyent is work. This is play. Also, our kit and software were designed to run on enterprise-grade hardware. All of the SmartOS/Triton approved hardware is rackmount. It's good kit but way overkill even for this project.
Second, I'm not getting any kickbacks for the products I use. The only money for this project flowed out of my bank account, not in. None of the links to the products have affiliate codes either. There are some links to Amazon in here so hover over before you click if that sort of thing concerns you.
Let's get things going.
The goal here is to cluster a bunch of Raspberry Pi boards, preferably in a way that doesn't end up with each Pi in its own individual case sprawled across my desk.
I started this off with four RPi 3B boards from previous projects and a couple of new 3B+ boards. I rounded that up to eight since most of the cases I was running into came in multiples of four. Eight turned out to be the magic number.
For reasons I will rant about later, I strongly recommend that you use Raspberry Pi 3B+ boards only. I will probably upgrade in that direction as time moves on.
For power supplies, I pretty much exclusively use Adafruit's 5V 2.5A 20AWG MicroUSB Power Supply. It's heavy gauge wire and provides extremely clean power.
I settled on the Cloudlet Cluster Case from C4 Labs.
It offers eight vertical clip-in hangers, four 50mm 3.5/5V fans, and the option to physically attach additional cases should the need arise. The truth of the matter is a bit more complicated, as we'll discover shortly.
For the infrastructure node, I went with a Miuzei Raspberry Pi 3B+ Case with Cooling Fan, mostly because it's neat. The infrastructure node shouldn't need any special case beyond protecting its bits.
The network backbone for the cloudlet is based entirely on kit already in my lab.
Up front for firewall (and eventual VPN) duty is a Netgate SG-1000. It runs pfSense, which is its usual level of great.
The core switch is an unmanaged D-Link 16-Port Gigabit Desktop Switch DGS-1016A.
There are two pieces already that are completely optional, namely the separate infrastructure node and the firewall node. I strongly recommend putting the cluster behind a firewall, but you don't need a separate widget for it. Most clustering software needs a control node of some variety but, again, it doesn't need to be a separate Pi. If you want to simplify the build, I recommend allocating one of the eight hanging Pis to be both the firewall and control node, with something like a USB network adapter for the uplink. This does limit your network connectivity a bit, as the Raspberry Pis are not known for their high transfer rates. The 3B+ is a lot better but still not gigabit. If you don't need the speed and/or the complexity, collapsing those functions is a decent place to slim the build.
Further, if the $50 D-Link switch is too pricey and you don't care at all about network speeds, you might use a pair of TP-Link 5 Port Switches. They're very slow (100Mbps only), but they will lower the cost of the network backplane to $20. Using two of the switches also lowers your network port count, such that you need to consolidate nodes as I was just discussing. You end up with eight ports for eight case slots.
Almost immediately, I hit the question of whether to recommend the C4 Labs case to you all. As you can see above, the case looks pretty decent when it's complete. But... But. C4 Labs is one of those acrylic case companies that ships a box full of acrylic plates and bags of screws, with an instruction sheet that essentially says "put all the screws in the holes". That's well enough if there's only one type of screw. This case has several different screw types with minimal differences between them. I found myself comparing screws time and again.
The plates for the computers involve screws and little brass fittings that act both as washers and stand-offs. The fittings are just a few millimeters across and prone to break. My unit came with 10 or so extras so C4 Labs knows this is a problem. Getting the Raspberry Pis on the plates requires a lot of patience, a screwdriver, tweezers, and maybe a Third Hand like this one by Hobby Creek.
Like I said, though, once I grumbled my way through the build, the case works really well. The clipping mechanism for the plates works well and the fans move a lot of air. The fans are powered by GPIO pins and operate just fine on 5V and 3.5V. I configured mine for 3.5V after finding that running the fans on 5V sounds like an aircraft engine. Time will tell if the extra airflow becomes necessary.
As mentioned above, power is delivered via Adafruit's 5V 2.5A 20AWG MicroUSB Power Supplies.
Depending on your level of obsession, the power cables don't really need a lot of cleaning up. Mine is a bit high though. In the picture above, you can see I used cloth velcro ties to hold the cables against the bottom. That was ok but the cables still moved a lot and the ties were a bit too wide for the holes so I couldn't get things as locked-down as I wanted.
In the end, I 3D printed some connectors on my Monoprice Mini Delta. They work really well for me and let me shape the wire bundle before it goes into the (off-screen) neoprene sleeve that routes the wires down the back of the cabinet.
The network cables you see below are six inch patch cables that I apparently got on Amazon.
The network is nothing special. As mentioned above, a Netgate SG-1000 sits in front of the network, providing isolation and a bit of security. A firewall is important for me because all the cloud infrastructure software has network trickery that really shouldn't be exposed to outside parties. I had the SG-1000 laying around so that's what I used. It doesn't really matter what you use as long as it supports port forwarding or some other means of getting the outside world into a set of allowed ports. This is true regardless of what software you decide to use on the backend.
The switch isn't doing anything weird either. The picture above shows a Mikrotik Cloud Router Switch (yeah, that name is awful) because I have one on my project bench. In the end, I went with the aforementioned unmanaged D-Link 16-Port Gigabit Desktop Switch DGS-1016A.
The picture above is the cloudlet in its final physical form, in its final physical home in my workspace, resting on some vibration-isolation foam I had lying around.
Early on in this tome, I strongly recommended the use of Raspberry Pi 3B+ boards. There are a few reasons.
First, the 3B+ is a lot faster than the 3B. The 3B, because the network interface shares the USB 2.0 backend, gets maybe 12Mbps if you're lucky. The 3B+ was redesigned and now claims to get 300Mbps and the reviews indicate that it's a pretty accurate number. Since this is a cloud we're building, network speeds can be important. There will be a lot of cross talk between nodes as services are hosted on different nodes. If you're hosting a small website or something, (a) why did you put it in this cloudlet? (b) the speed of the 3B is probably fine. If you're doing anything serious, you want the extra speed.
Second, if one wishes to network-boot the RPis, there is a significant difference. The 3B, like the models before it, requires you to flash a piece of memory to activate "usb boot" mode, which confusingly includes network booting. It's easy enough to do and you can create a single SD card to carry out the task. However, that means that every 3B in the cluster needs to get booted up onto that SD card before getting installed into the cloudlet. The 3B+ comes with "usb boot" mode already active. No extra work is necessary when installing a node.
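For the curious, enabling that mode on a 3B is a one-time change of a one-time-programmable (OTP) bit. This is a sketch of the usual procedure from the Raspberry Pi documentation, run from a throwaway Raspbian SD card:

```shell
# On the 3B, booted from a scratch Raspbian SD card, set the OTP
# bit that enables "usb boot" (and therefore network boot):
echo 'program_usb_boot_mode=1' | sudo tee -a /boot/config.txt
sudo reboot

# After the reboot, confirm the bit stuck. A value of 0x3020000a
# means usb boot mode is enabled; the change is permanent.
vcgencmd otp_dump | grep '17:'
# 17:3020000a
```

Once the bit is set it survives forever, so you can remove the line from config.txt, wipe the card, and reuse it on the next 3B.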
Third, the 3B and the 3B+ light up differently on boot. Regardless of whether or not the node has an SD card and can boot, the 3B lights the LEDs on the network interface. You can tell if you have link. If you're network booting the Pis, that's super helpful. The 3B+ doesn't light the network interface lights at all. The power light turns on and the Pi just sits there. If you can see the lights on the switch, they'll indicate if the switch has link. But the Pi itself will not tell you anything other than that it's powered on.
The first two points are ways that the 3B+ is much better than the 3B. The last point is super annoying and I wasted a lot of time thinking that I had four bad boards. I think in that case, one is better off being consistent, using only one board model across the whole cloudlet. At least then, the differing behavior isn't so confusing.
Two Roads Diverge
A relative of mine (who was in a dark place, admittedly) once questioned the end of Frost's poem "The Road Not Taken".
Two roads diverged in a wood, and I—
I took the one less traveled by,
And that has made all the difference.
Typically, folks read that last line in a positive light. He took the road less traveled and it was awesome and the folks on the regular path were really missing out.
But, my relative asked, what if "the difference" was awful? What if he took the road less traveled and it was full of alligators and lava pits and snakes hanging from the trees? What if he now realized that people stuck to the well-traveled road for good reason?
As we reach the point in our tale where one must begin looking to software, there are two paths. Both paths have the potential for lava pits.
The two paths forward are, from my perspective:
Manually provision each Raspberry Pi as a node in a Docker Swarm, with the infrastructure node as the leader. For most folks, this is absolutely the right path to take. It's way less fiddly; lets you get going quickly; and lets you leverage the images from Lollipop Cloud immediately. If you've gotten to this point because you want the hardware but also want a fairly straightforward software base, go with Docker Swarm.
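To give a sense of why the Swarm path is less fiddly, standing one up is only a couple of commands. This is a sketch, with 192.168.1.10 standing in as a hypothetical address for the infrastructure node and the token placeholder filled in from Docker's own output:

```shell
# On the infrastructure node (hypothetically 192.168.1.10),
# initialize the swarm. This node becomes the manager/leader
# and the command prints a join token for workers.
docker swarm init --advertise-addr 192.168.1.10

# On each Raspberry Pi, join as a worker using that token:
docker swarm join --token <worker-token> 192.168.1.10:2377

# Back on the manager, confirm every node checked in:
docker node ls
```

From there, deploying services across the cluster is a matter of `docker service create` or a stack file.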
Network-boot the Raspberry Pis and set them up in Kubernetes, probably using k3s, aka "Lightweight Kubernetes". This is the path I'm going to take. This approach will let me learn the most, not having deployed Kubernetes before, and also allows for fun like scheduled tasks, which Docker Swarm does not support.
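For a taste of what that second path looks like, the stock k3s install is roughly the following sketch. Again, 192.168.1.10 is a hypothetical address for the infrastructure node, and the node token comes from the server's own files:

```shell
# On the infrastructure node (hypothetically 192.168.1.10),
# install k3s in server mode:
curl -sfL https://get.k3s.io | sh -

# Grab the token the server generated for joining agents:
sudo cat /var/lib/rancher/k3s/server/node-token

# On each Raspberry Pi, install k3s in agent mode, pointed
# at the server:
curl -sfL https://get.k3s.io | \
    K3S_URL=https://192.168.1.10:6443 K3S_TOKEN=<node-token> sh -

# Back on the server, verify the nodes registered:
sudo k3s kubectl get nodes
```

The details of making this coexist with network booting are exactly the part I haven't figured out yet, hence the tale to come.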
I am sure that my choice will make all the difference. The real question is which interpretation will hold true?
Stay tuned, viewers. There will be more later as I figure out the software side.