Turing Pi 2 – The Ultimate Home Server?

I have three Raspberry Pis in my home network. One is running my Home Assistant instance, as well as deCONZ and MQTT; it’s basically my smart home hub. Another one is running my internal reverse proxy, UniFi controller, AdGuard Home and DDNS, basically stuff that concerns my home network. Finally, my third Raspberry Pi is running my PiKVM instance, which gives me a way to manage my main server remotely, change its BIOS settings, reinstall the operating system, and all of that without having to splurge on an enterprise server motherboard with an extra iKVM chip.

But today, everything changes. Today, I will be replacing all of it with this. This is the Turing Pi 2, a cluster board that can fit up to four Raspberry Pi Compute Modules 4. It’s also compatible with NVIDIA Jetson boards, and you can even mix and match CM4s and Jetsons to have a cluster that can do both general-purpose computing and machine learning tasks. I am really excited to try it, so let’s build up our cluster and see what it can do!

But first, I want to say a huge thanks to the folks from Turing Pi.

They sent me the board, along with three Compute Modules 4 as well as an NVIDIA Jetson board, for free. As usual, this doesn’t mean that I’m only going to say nice things about this board. It’s basically a prototype unit with a fair share of quirks and flaws, all of which we’re gonna talk about in this video.

So let’s talk about the board itself. It’s sized as a standard Mini-ITX board, so it will fit in any standard PC case. One thing that you have to keep in mind is that you’ll need at least 51 millimeters of clearance, or 2 inches for my American friends. For example, it won’t fit in 1U rack enclosures.

As you can see, the Turing Pi 2 has lots of I/O, and each CM4 module gets some of it. The first Pi has access to the miniPCIe slot and GPIO. The second one also gets a miniPCIe slot, but without a SIM card slot. The third Pi has access to two SATA ports. And finally, the fourth Pi gets the USB ports, both internal and external. A Compute Module 4 only has one PCI Express v2 lane, and I think the Turing Pi folks found a very creative way to utilize all of those lanes.

Obviously, it has its drawbacks. For example, a device you connect to the miniPCIe slot of Pi 1 will not be accessible by Pi 2. But at the same time, I imagine that it was necessary for keeping that standard Mini-ITX form factor and making sure that any peripherals you connect to the Pis run at full speed.

The board also has a built-in Ethernet switch, which connects all of the Pis to the network through those two Ethernet ports. The ports are bridged and connected to the same 1 Gbit interface, so you can just use one of them. When you connect the Turing Pi to your network, each Pi will get its own unique IP and will be identified by its own MAC address.

In order to use CM4 modules with the Turing Pi, you’ll need to use these adapters. They kind of look like SO-DIMM memory modules for a laptop, except they can basically fit an entire computer. The CM4 module clips onto the adapter with a satisfying click, and that’s it! The adapters also have microSD card slots for Compute Module Lite models that don’t have built-in storage.

To power the cluster, I’m gonna be using this wide-input PicoPSU and a 60W Lenovo charger that I frankensteined a barrel plug onto. You can use any ATX power supply, but since the board doesn’t need much power, a PicoPSU is the best choice in my opinion. Once all four Compute Modules are clipped onto the adapters, it’s time to assemble the cluster!

Some of you who’re not familiar with the idea of cluster computing might be wondering: why bother with four low-power, low-performance computers instead of just building one high-power machine and using it to run a bunch of VMs?

First, and one of the most important reasons for me: power efficiency. Even with all four nodes running at full blast, the Turing Pi 2 only consumes around 22 watts. While doing normal tasks and running some Docker containers, this number goes down to 11 watts. If I were to use an x86 machine with a similar level of performance, it would consume anywhere from 15 to 65 watts.

Second, redundancy and availability. If you host all of your stuff on one computer, you’re basically putting all of your eggs into one basket. If one day you have to do some maintenance on the computer, or a kernel upgrade breaks your OS, or one of the components fails, you can say goodbye to all of the services hosted on that machine. Of course, with the Turing Pi you also have a single point of failure to some extent: all four nodes use the same power supply and the same Ethernet connection. But at the same time they have independent resources like RAM, storage and CPU, and most importantly, each of them runs an independent operating system. That makes the Turing Pi a great choice for running a high-availability Kubernetes or Docker Swarm cluster. You can even hot-swap nodes on the go, without having to power down the whole cluster, although I’ve been told by Turing Pi engineers not to do that.

Third, flexibility and I/O. With a Raspberry Pi you have access to things like GPIO, SPI, serial ports and DSI. The Turing Pi exposes all of those things, letting you use devices that you wouldn’t be able to easily use on a standard x86 machine, like this Zigbee adapter. Spoiler alert: GPIO doesn’t really work in the current hardware/firmware revision, but it should work on actual production units.

One more point that I would normally include would be price. If, by the time you’re watching this video, Compute Modules 4 are back in stock and sold at MSRP, I guess the point is valid. But as of now, it’s actually cheaper to buy four of these thin clients, which will also have better performance than a Raspberry Pi.
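Whether those cheaper thin clients actually win overall depends on how long it takes a lower power draw to pay back the upfront price difference. Here’s a rough back-of-the-envelope sketch in Python; every number in it (the price gap, the wattages, the electricity rate) is an assumed example for illustration, not a measurement from this board:

```python
# Back-of-the-envelope break-even estimate: how long until a
# cluster's lower power draw pays back a higher purchase price?
# All inputs below are assumed example values, not measurements.

def breakeven_hours(price_gap_eur, cluster_watts, alt_watts, eur_per_kwh):
    """Hours of runtime until the energy savings equal the price gap."""
    extra_kw = (alt_watts - cluster_watts) / 1000  # extra draw in kW
    savings_per_hour = extra_kw * eur_per_kwh      # EUR saved per hour
    return price_gap_eur / savings_per_hour

# Assumed: the cluster costs 300 EUR more, draws 22 W versus
# 50 W for four thin clients, at 0.30 EUR per kWh.
hours = breakeven_hours(300, 22, 50, 0.30)
print(f"Break-even after {hours:.0f} hours (~{hours / 24 / 365:.1f} years)")
```

With these made-up numbers, it would take roughly four years of 24/7 operation before the energy savings catch up with the price difference.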

Sure, the power efficiency is not really there. A thin client like that consumes around 10 to 15 watts, and multiplied by four you get around 40 to 60 watts. But considering the price difference, even when CM4 units are in stock, it will probably take a while to recoup the electricity costs.

So now that our cluster is built, I’ll tell you a little bit about what I’m planning to do with it. I have an actual big “NAS” that runs things like PhotoPrism, Plex, Sonarr and Radarr. It also used to run a Pi-hole instance, Home Assistant and a reverse proxy, but I discovered pretty quickly that running all of my services on one machine is a bad idea. If I have to do some maintenance on my server, I end up with no DNS, no light or heating automation, and no access to any of my services. So I started running my mission-critical services on a separate Raspberry Pi 4.

At the same time, I needed another Raspberry Pi for PiKVM, since the developers don’t have any plans of releasing it as a Docker container of some sort, and only offer PiKVM as a standalone OS image. And then I also added another Raspberry Pi that only runs smart home stuff. In the end, like I said in the beginning, I ended up with three Raspberry Pis.

So instead, I’m planning to run all my stuff on the Turing Pi 2. I also want to use it to learn clustering software, like Docker Swarm and Kubernetes, to make some of my services, like DNS, highly available.
