There is no shortage of trends predicting where cloud computing is heading: from multi-cloud to hybrid cloud, these reflect our ambitions for a fluid computing spectrum.
Currently, provisioning resources from multiple infrastructure providers allows us to meet differing technical and business requirements: by combining private and public cloud infrastructure, developers orchestrate the deployment of workloads to make the best use of the compute resources available.
We gained additional flexibility by using cloud infrastructure for sudden bursts of compute demand, scaling on demand as needed. In a cloud native world, this complements a continuous delivery model in which developers build, test, and deploy quickly. As we move towards computing where the cloud delivers exactly the amount of compute needed to run a particular workload for exactly as long as it is needed, our focus turns to how infrastructure players can provide exactly the right resources, seamlessly.
Simple, composable, and modular is what we demand from software architecture: how do we translate this to infrastructure? In the cloud native ideal, a developer should be able to write code once and run it anywhere. Can the centralised cloud really deliver on this? What is missing is access to infrastructure that is distributed everywhere.
There is growing demand for low latency and real-time decision making at the edge of the network, where computing as we know it moves ever closer to where data is generated: we need a greater variety of geographically distributed infrastructure to power and deliver future services in a digital, connected world.
We rely too heavily on the centralised cloud: it has enabled a great deal, but as global, disruptive outages of centralised infrastructure become ever more frequent, we must confront the immense technical challenge of supporting a truly connected future.
We must open up the network edge. The growing pains of cloud computing are already beginning to surface: there is too little variety of infrastructure to deploy on when no two applications share the same requirements. Developers must choose between distant, centralised clouds or invest in capital-heavy infrastructure of their own to run on-premise. Where the cloud falls short is where edge computing must deliver.
"Where the cloud falls short is where edge computing must deliver"
The Age of Reckoning for the Telco
Let’s ask ourselves: who has the most distributed infrastructure of all? The edge is a unique opportunity for the telco network owner, who has sat on the sidelines of the cloud evolution. The industry’s attention is on edge computing, which promises the affordability and scalability of the cloud alongside the performance and convenience of on-premise deployment.
This is the opportunity to rewrite access to previously closed telco infrastructure as a fluid spectrum, running from a centralised cloud down to the edge of the network and, ultimately, end-user devices. Distinct layers of infrastructure are naturally converging: this reflects the direction the industry is taking, with growing demand for hybrid deployments that require a seamless connection from public and private clouds to telco networks.
Because it draws on both telco and cloud, a full end-to-end edge solution must not be defined by one player alone: two once-opposite ends of the infrastructure space are merging. There is a very real need for a global platform to open up and link access to multiple edge resources. The solution to enabling third-party access to infrastructure is an agnostic, federated platform serving the underserved: those who require a truly flexible, distributed deployment model.