Time to get ‘Edge-ucated’
The term ‘Edge Computing’ is used by many different companies to describe a number of different environments, each bending the definition to sell their version of the truth. There is nothing wrong with that, other than it makes things very confusing. Edge Computing is actually quite simple: think of it as moving data processing power closer to where the data is being generated – closer to the source.
This obviously runs counter to what virtually all of the industry has been developing and building over the past ten years: Cloud Computing. Send all the data to the Cloud, use its ‘infinite’ processing power to process and analyze it, then return the results to the user. There is nothing wrong with Cloud Computing until you try to break the laws of physics – bandwidth has finite limits, and the economics of sending massive amounts of data can be cost prohibitive. Even if you could get all the data to the Cloud, you would not be able to process and analyze it in real time, which is becoming more and more necessary in many industries. So the current uses and models of Cloud Computing are now starting to break down – enter Edge Computing!
Edge Computing is poised to resolve the issues now starting to impact Cloud Computing. Being at the Edge is optimal for ingesting big data fast and storing a lot of it – hundreds of terabytes – while simultaneously processing it, for example by comparing the most recently ingested data against the stored data set to detect anomalies in real time. It is now practical to simultaneously run forensic analysis against the entire data set in place, to explore whether detected anomalies represent a current threat, all while continuing to ingest data without slowing down or dropping anything. Requirements like this now arise in applications such as cybersecurity, detecting fraudulent or erroneous activity in financial markets, and identifying that something is going awry in the operation of critical infrastructure or industrial processes. For insights and actions to come soon enough to be truly useful, big data must often be ingested, stored, and processed right there at the Edge where it is created.
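To make the “compare new data against stored history” pattern concrete, here is a minimal sketch of a streaming anomaly detector of the kind an Edge node might run. It is an illustration only, not any vendor’s actual implementation: it assumes a simple rolling-window z-score test, and all names (`StreamingAnomalyDetector`, `ingest`, the window size and threshold) are hypothetical.

```python
from collections import deque
import math

class StreamingAnomalyDetector:
    """Flags readings that deviate sharply from a rolling window of recent history.

    A hypothetical sketch of Edge-side processing: keep recent data locally,
    score each new reading against it, and never stop ingesting.
    """

    def __init__(self, window_size=100, threshold=3.0):
        self.window = deque(maxlen=window_size)  # recent readings kept at the Edge
        self.threshold = threshold               # z-score cutoff for "anomalous"

    def ingest(self, value):
        """Return True if the new reading is anomalous versus the stored window."""
        anomalous = False
        if len(self.window) >= 10:  # need some history before judging
            mean = sum(self.window) / len(self.window)
            var = sum((x - mean) ** 2 for x in self.window) / len(self.window)
            std = math.sqrt(var)
            if std > 0 and abs(value - mean) / std > self.threshold:
                anomalous = True
        self.window.append(value)  # keep ingesting regardless of the verdict
        return anomalous

# Usage: a steady sensor baseline, then a sudden spike.
detector = StreamingAnomalyDetector(window_size=50, threshold=3.0)
baseline = [detector.ingest(9.0 if i % 2 == 0 else 11.0) for i in range(50)]
spike = detector.ingest(500.0)  # scored against the stored window
```

In a real deployment the “window” would be a far larger local store (the hundreds of terabytes described above), and the scoring far more sophisticated, but the shape of the problem – ingest, store, and score in one continuous loop at the Edge – is the same.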