Part 2 of 3

Part 2, or the onset of a requirements tsunami

It may be without parallel in the history of information infrastructures.  Emerging threats, emerging business trends, and emerging technologies are rising, falling, and coming into close proximity, at a time when the marketplace is falling behind in developing the talent to address the strengths, weaknesses, opportunities, and threats this tsunami presents.  At one time there were three characteristics, and cost concerns meant you could have any two of the three.  Now there are three (or four, or as many as you can imagine) sets of characteristics, and you need to pick all three, all four, or as many as you can imagine.

As the emerging threats, business trends, and technologies come together within an organization, seemingly at times of their choosing rather than of management's, it can appear as if change is completely at the mercy of rising influences external to the organization.  That is the perception of many.  It is not the perception of all, because there are those who have been through floods before and have found that while tides cannot be turned, they can be buffered with planning.

Planning how to turn a data protection service into a Data as Code service requires a platform engineering overhaul of how we view the ones and zeros: as part of a streaming experience rather than an actuarial accounting of pre-specified points in time.
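A minimal sketch of that shift, with purely illustrative names that do not refer to any particular product: the point-in-time view copies state at scheduled moments, while the streaming view treats data as an ordered log of changes that can be replayed to reconstruct any moment at all.

```python
import time

def take_snapshot(state):
    """Point-in-time view: copy the whole state at a pre-specified moment."""
    return {"taken_at": time.time(), "state": dict(state)}

class ChangeLog:
    """Streaming view: record every change as an event, then replay the log
    to reconstruct state as of *any* moment, not just scheduled ones."""
    def __init__(self):
        self.events = []

    def append(self, key, value, ts):
        self.events.append((ts, key, value))

    def state_as_of(self, ts):
        state = {}
        for event_ts, key, value in self.events:
            if event_ts <= ts:
                state[key] = value
        return state

print(take_snapshot({"orders": 10}))     # one moment, chosen in advance
log = ChangeLog()
log.append("orders", 10, ts=1.0)
log.append("orders", 12, ts=2.0)
print(log.state_as_of(1.5))              # {'orders': 10} -- any moment, on demand
```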

To construct a platform engineering perspective on a Center of Data Experience, we need a couple of things.  The first is a template for the platform, a template for the engineering, and a process for combining the templates into a runbook.  The second is guidance for that runbook, in terms of strategic, operational, and tactical implementation, to reach the first version of the goals for a Data as Code practice; a sketch of what such a runbook might look like follows below.  In this, the Center of Experience takes on something of Information Technology's cyclical evolution: a series of centralizations and decentralizations based on the cost-effective technologies available at the time.
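One way to make the runbook concrete, as a sketch only; the layer names are assumptions drawn from the strategic/operational/tactical framing above, not an established schema:

```python
from dataclasses import dataclass, field

@dataclass
class RunbookStep:
    layer: str         # "strategic" | "operational" | "tactical"
    description: str
    template: str      # which platform/engineering template the step applies

@dataclass
class Runbook:
    goal: str
    steps: list = field(default_factory=list)

    def add(self, layer, description, template):
        self.steps.append(RunbookStep(layer, description, template))

    def by_layer(self, layer):
        return [s for s in self.steps if s.layer == layer]

rb = Runbook(goal="Data as Code, version 1")
rb.add("strategic", "Define the Center of Data Experience charter", "platform-template")
rb.add("operational", "Stand up shared data services", "engineering-template")
rb.add("tactical", "Onboard the first data source", "engineering-template")
print([s.description for s in rb.by_layer("strategic")])
```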

Since the Composable Edge program is working on the confluence of 5G and edge computing, here we can focus on cloud-native computing and the shortage of data-conversant specialists.  At a future point, merging the Composable Edge work with the Data as Code catalog becomes a way to provide a transformation platform for others to leverage.

For purposes of providing solutions that are applicable across vertical, horizontal, and "horizontical" markets, we can use the Cloud Native Computing Foundation's Landscape as the source of solution-builder components.  The CNCF catalog currently consists of products and services that represent a combined market value of $21T.  As an economy, the CNCF catalog would be one of the three largest economies in the world.

With that framework, we can look at the requirement-resolution characteristics the cloud-native catalog has used to address inherited past practices.  Certainly, we have the requirements of cloud computing: support for the four deployment models and the five essential characteristics.  The original NIST definition also included three service models: Software-as-a-Service, Infrastructure-as-a-Service, and Platform-as-a-Service.

Addressing the three service offerings in turn, Software-as-a-Service provides at least an emblematic model for developing a Center of Data Experience.  Certainly, the Center will be cloud-based and will include the most recent requirements for omni-cloud, multi-cloud, and cross-cloud support across a cloud-core-edge spectrum.  While Software as Code is often considered under the heading of 'Everything as Code', for the purposes here low-code and no-code catalog solutions will be referred to as Software as Code.  In the CNCF Landscape, the evolution of Helm as a manifest delivery system is a prototypical example of how Configuration as Code or Experience as Code techniques could be delivered.
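To show the shape of that technique, here is a deliberately minimal sketch of manifest templating in the Helm style, using only the Python standard library; it is not Helm itself, and the manifest values are invented for illustration.  A template plus a set of values yields a rendered, deployable manifest: Configuration as Code in miniature.

```python
from string import Template

# A manifest template, in the spirit of a Helm chart template.
manifest_template = Template("""\
apiVersion: apps/v1
kind: Deployment
metadata:
  name: $name
spec:
  replicas: $replicas
""")

# The values that would normally live in a values file.
values = {"name": "data-experience-api", "replicas": 3}

# Rendering values into the template is the whole trick: the configuration
# becomes an artifact you can version, review, and test like any other code.
print(manifest_template.substitute(values))
```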

Cloud-native methodologies have already been implemented in the form of Infrastructure as Code to support a next-generation Infrastructure-as-a-Service catalog.  Infrastructure as Code serves both as a guide for developing incremental 'as Code' solutions and as a foundation for the platform under consideration.  The Software-as-a-Service model for a Data as Code Center of Experience has to include support for a manager of managers, an ecosystem of ecosystems, and a normalization of operating policies and practices across those three, or four, or as many confluences as one might imagine.
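The pattern Infrastructure as Code contributes is declarative desired state plus an idempotent apply step.  A toy sketch of that pattern follows, with illustrative resource names and no real provider behind it:

```python
# Desired state, declared as data rather than as a sequence of commands.
desired = {
    "network-a": {"cidr": "10.0.0.0/24"},
    "bucket-b": {"versioning": True},
}

# What currently exists (in a real tool, discovered from the provider).
current = {
    "network-a": {"cidr": "10.0.0.0/24"},
}

def plan(desired, current):
    """Diff desired state against current state: the core IaC move."""
    to_create = {k: v for k, v in desired.items() if k not in current}
    to_update = {k: v for k, v in desired.items()
                 if k in current and current[k] != v}
    to_delete = {k: current[k] for k in current if k not in desired}
    return to_create, to_update, to_delete

create, update, delete = plan(desired, current)
print("create:", create)   # {'bucket-b': ...}; applying twice changes nothing
```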

Those last elements of the Infrastructure-as-a-Service discussion become the guidelines for the construction of the platform needed to support a Center of Data Experience.  There was a time, already, when the platform economy was to have been our collective future.  Fortunately, rather than a trend addressing the needs of a momentary market, platforms and their engineering have become a longer-term movement, based not so much on replacing the NIST Platform-as-a-Service as on moving it to another perspective altogether.

We mentioned the increased, and increasing, number of requirements on development teams today.  Cloud computing began with Private, Public, Hybrid, and Community Clouds.  These cloud formations would be capable of providing on-demand self-service, broad network access, resource pooling, rapid elasticity, and measured service.  Certainly, the hyperscaler cloud providers have achieved all of these in Public Cloud environments and are well underway in moving the Public Cloud experience into Private and Hybrid Cloud solutions.

Just as infrastructure virtualization services were centralizing on x86-based components, Docker and Kubernetes began to go mainstream.  Container technology is now found in most information infrastructures and serves as a fourth-generation virtualization experience platform.  Kubernetes supports both centralization and de-centralization trends simultaneously.

Kubernetes has enabled the centralization of legacy information infrastructure tools, rules, and requirements onto a common set of shared and sharable work planes.  At the same time, Kubernetes enables de-centralization off those centralizing components, extending what was the data-center-as-a-glass-house to many organizational edges.  It is consuming legacy infrastructure with the promise of reduced cost of ownership and operation, becoming an effective standard for cloud-based computing of all types.  That it has become a trusted delivery vehicle for the five NIST cloud characteristics in so short a time is remarkable almost beyond words.
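The mechanism underneath both tendencies is the control loop: a centralized declaration of desired state, reconciled continuously wherever the workload actually runs.  A stripped-down sketch of that pattern, not the Kubernetes API itself, just its shape:

```python
def reconcile(desired, observed):
    """One pass of a Kubernetes-style control loop: compare desired
    replica counts with observed ones and emit corrective actions."""
    actions = []
    for name, want in desired.items():
        have = observed.get(name, 0)
        if have < want:
            actions.append(("scale-up", name, want - have))
        elif have > want:
            actions.append(("scale-down", name, have - want))
    return actions

# The declaration can live in one central place...
desired = {"edge-cache": 3, "core-api": 5}
# ...while the observations come from many decentralized edges.
observed = {"edge-cache": 1, "core-api": 5}

print(reconcile(desired, observed))  # [('scale-up', 'edge-cache', 2)]
```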

Cloud-native solutions came about at a time when omni-cloud, multi-cloud, cross-cloud, and cloud-core-edge were moving past the fog computing phase.  Each of these new cloud formations required security, maintenance, and productivity management in order to remain compliant with corporate governance while meeting availability and cost objectives.

All that has been accomplished to date in advanced information infrastructure operation and catalog management can be transposed (or refactored) onto cloud-native formats.  Data as Code, by managing multiple sets of requirements and multiple data sources and types, can become a path to working our way out of technical debt and its continuing lost opportunity.  Data can become a path for organizations to manage transformational modernization and move closer to the edges that represent the future.
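A closing sketch of what "multiple sets of requirements over multiple data sources" can look like as code; the field names and minimums below are assumptions for illustration only.  Each dataset carries its requirements as a declarative record, and compliance becomes a function you can run:

```python
from dataclasses import dataclass

@dataclass
class DatasetSpec:
    name: str
    source_type: str        # e.g. "stream", "table", "object-store"
    retention_days: int
    replicas: int

# Requirements expressed as data, checked in code.
MINIMUMS = {"retention_days": 30, "replicas": 2}

def violations(spec):
    """Return the requirements this dataset fails to meet."""
    found = []
    if spec.retention_days < MINIMUMS["retention_days"]:
        found.append(f"{spec.name}: retention below minimum")
    if spec.replicas < MINIMUMS["replicas"]:
        found.append(f"{spec.name}: insufficient replicas")
    return found

catalog = [
    DatasetSpec("orders", "stream", retention_days=90, replicas=3),
    DatasetSpec("logs", "object-store", retention_days=7, replicas=1),
]
for spec in catalog:
    print(spec.name, violations(spec))
```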
