One of the most exciting recent advances in the computer industry has been the dramatic proliferation of GPUs into all kinds of computing solutions. While the use of GPUs in cryptocurrency mining is getting a lot of press lately (full disclosure: I have an 8x GPU rig as a test system), GPUs are finding more uses solving ever more complex problems. Initially, GPUs were focused solely on graphics processing, and computer gaming experienced a renaissance in complexity, realism, and diversity. Corporate data center workloads, however, were not able to take advantage of this tremendous increase in GPU computing power. As data volumes grew, traditional CPU power could not keep up. The problem became how to throw enough computing resources at a workload without purchasing a multi-million-dollar supercomputer.
The emergence of big data processing approaches, like Hadoop, gave data scientists a platform for storing and analyzing hundreds of TB of data on cheap, commodity hardware. This approach worked for a while, as tens to hundreds of small servers breaking up the work could easily fit in the datacenters that companies have been employing for years. But there’s another shift coming that is causing data to grow at an even faster rate, and that’s data generated by connected machines.
So, what does this have to do with GPUs? Well, it turns out that the math these devices are designed to do when processing high-definition graphics is also really, REALLY good at plowing through machine-generated data. While a lot of attention has been paid to machine learning and artificial intelligence (and crypto-mining…), GPUs are working their way further into the enterprise datacenter. Recently, Kubernetes and Spark have gained support for GPUs, and database applications like SQream are making GPU computing a reality for more general data analysis. This growth is happening so fast that, for example, NVIDIA’s data center business grew from $338M in 2016 to $830M in 2017. That’s a massive jump, and it’s a perfect fit for the Axellio Micro-Datacenter platform.
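To give a concrete sense of the Kubernetes GPU support mentioned above: a pod can request GPUs as a schedulable resource, and Kubernetes will place it on a node that has one available. This is a minimal sketch, assuming the NVIDIA device plugin is installed on the cluster to expose the `nvidia.com/gpu` resource; the pod and container names (and the image tag) are hypothetical.

```yaml
# Hypothetical example: a pod that asks the Kubernetes scheduler for one GPU.
# Assumes the NVIDIA device plugin is deployed so nodes advertise nvidia.com/gpu.
apiVersion: v1
kind: Pod
metadata:
  name: gpu-analytics-example   # hypothetical name
spec:
  containers:
  - name: cuda-worker           # hypothetical container name
    image: nvidia/cuda:12.4.1-base-ubuntu22.04   # example CUDA base image tag
    resources:
      limits:
        nvidia.com/gpu: 1       # request one GPU; the scheduler handles placement
```

The key point is that the GPU is requested declaratively, like CPU or memory, rather than managed by the application itself.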
The Axellio Micro-Datacenter Platform is a next-generation architecture designed for ultra-high-performance components like GPUs and NVMe storage devices. Leveraging a unique, high-bandwidth PCIe fabric (FabricXpress™), Axellio can be customized for the GPU and storage needs of any environment. This flexibility in combining high-performance NVMe storage devices, TBs of RAM, dozens of CPU cores, and the power of multiple GPUs in a simple 2U architecture makes it purpose-built to process huge volumes of data at the edge of the network. Applications like SQream are leveraging this high-performance data architecture to provide the fastest data analytics available on massive data stores.
Recently, we at X-IO completed testing with SQream’s GPU-accelerated database, and the results were outstanding. If you are wondering how GPUs can accelerate your big data analytics projects, I invite you to join us for a webinar I am doing with SQream, or reach out to us directly. We would be happy to talk with you about your project.