Convergence being what it is…

One of the opportunities before us is to converge on future trends by combining adjacent catalogs.  Maybe for one organization it’s converging goods and services to build digitalized product services.  Maybe for another it’s converging staff insights into automation solutions.  Maybe for another it’s using machine learning to improve customer experiences by anticipating next journeys.

All these are important facets of getting to digital, and staying digital.  What many organizations still need is a road map onto and into that market so they can expand on their own edge.  In thinking about that, a couple of things came to mind.  Mostly from experience, mind you, not so much thinking about what’s ahead.

The first thing is that once a market segment gets past the early adopter phase, something like a maturity model gets set in place.  Except that it’s a maturity model based on what the lifecycle of the segment needs as it is understood then.  Not as it will need to be supported when the segment matures, but as it looks when many have reviewed the early adopter work and want to get involved.

While great for any provider of goods and services to that market, the rise in interest absorbs any incremental investment into satisfying the rise in demand for the as-built, and as-early-adopted, solution.  To make a point of it, that is where the adoption of Kubernetes is in early 2023.  Its capability to substitute for hypervisors, cloud architectures, and process tools has made adoption cost effective.

The replacement of hypervisors saves organizations licensing and maintenance fees.  Its use as a cloud architecture forces the hyperscale cloud providers into both a race to the bottom (providing services at the least cost) and a race to the top (investing in new capabilities to retain clients).  And since it is a process-agnostic tool chain solution, Kubernetes can sweep a wide set of legacy tool chains, and their costs, out of the data center.

But Kubernetes on its own cannot replace all of the capability and maturity these other solutions have had to adopt along their own lifecycles.  Certainly, the opportunistic phase of substituting Kubernetes into existing markets is not even over yet.  But, left unattended, in the wake of this glorious start is little but a repeat of past single-product organizations.  And, eventually, there will be a Kubernetes II that provides similar cost value, and the race begins all over again.

We need, at some time or other, to understand we have a stake in this, however far we feel removed from what’s going on with an acknowledged helmsman.  If Kubernetes is to continue on as our helmsman (thinking we need ‘helmsing’ as the noun), we need to instantiate around it a support system that is, itself, scalable, flexible, and extensible.  Like DNA, the same strands need to provide different proteins at different phases of the lifecycle. 

That’s why it needs to be convergences of great things that help us build this support system, not just great things.  We need to help create the substrate which becomes the ‘internet of how to build things’ and generate a standard shared across industries, markets, segments, and professions.  We need the leverage of standardization and maybe it’s being provided to us by the Cloud Native Computing Foundation.

For now, the work on the edge can focus on how to generate a value-based devsecops adoption program for container management.  To the point that started this, that would require a convergence of something from the Digital Twin Consortium, NIST, and the Linux Foundation.  In the remainder of this Note, we will describe how the Digital Twin Consortium’s interest in open source can be applied to container management systems.

The first thing to realize is that we can provide platform engineering its own ‘swim lane’ by making it the digital twin of future instantiation.  The object of platform engineering, longer term, is to enable it as an innovation support generator for the ‘as Code’ era.  With that correlation, we can use the Digital Twin Consortium’s document on Infrastructure Digital Twin Maturity to guide the process of getting platform engineering ready to shoulder the burden we are going to put on it.
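To make the ‘digital twin of future instantiation’ idea a bit more tangible, here is a minimal sketch in Python; the names (PlatformTwin, drift) and the capability keys are hypothetical illustrations of mine, not anything drawn from the Consortium’s document.  The twin carries the platform’s intended future state, and platform engineering’s job is to close the drift between that twin and the live platform.

```python
from dataclasses import dataclass, field

@dataclass
class PlatformTwin:
    """Hypothetical digital twin of the platform we intend to instantiate."""
    name: str
    # Intended capabilities, e.g. {"ingress": "v2", "policy": "opa"}
    capabilities: dict[str, str] = field(default_factory=dict)

    def drift(self, observed: dict[str, str]) -> dict[str, tuple]:
        """Report where the live platform differs from the twin's intent."""
        keys = set(self.capabilities) | set(observed)
        return {
            k: (self.capabilities.get(k), observed.get(k))
            for k in keys
            if self.capabilities.get(k) != observed.get(k)
        }

twin = PlatformTwin("as-code-platform", {"ingress": "v2", "policy": "opa"})
# The gap reported below is the platform engineering backlog, in twin terms.
print(twin.drift({"ingress": "v1"}))
# e.g. {'ingress': ('v2', 'v1'), 'policy': ('opa', None)} (key order may vary)
```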

The Digital Twin Consortium’s 2021 publication provides a Dimension structure for its framework, plus four other characteristics that augment the adoption of the Dimensions.  Though similar in intent to the Modalities used here, the two sets of capabilities can also, combined, mutually support converged devsecops topologies.

There is some responsibility on the purveyor of such insight.  Namely, what is it that one brings forward, on behalf of, and in support of, others?  Short form: what do they get from any of this?  Always a hard thing to answer plainly, but in this case, the net net of it is clear.  One can reduce the step functions between the idea of adoption and the idea of what’s next, enabling many new entrants to the field to have more efficient journeys than they might otherwise.

The Dimension structure from the Digital Twin Consortium’s document is shown below:

[Figure: the Dimension structure of the Infrastructure Digital Twin Maturity model]

From: Infrastructure Digital Twin Maturity: A Model for Measuring Progress; published in 2021 by the Digital Twin Consortium.


And now, let’s assume someone might want to adopt the role of Evangelist for a program using this model to enable step-wise adoption of something.  That would consist of providing each Dimension a set of ‘resolvers’.  For the convenience of the reader, the Dimensions and the Evangelist’s role are provided below:

[Figure: the Dimensions paired with the Evangelist’s resolver role for each]
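As a sketch only of what giving each Dimension its ‘resolvers’ might look like in practice: the Dimension names below are hypothetical stand-ins (the real ones live in the figure above), and the resolver functions are invented for the example.

```python
from typing import Callable

# A resolver answers an adoption question on behalf of one Dimension.
Resolver = Callable[[str], str]

def docs_resolver(question: str) -> str:
    return f"Point to the adoption guide covering: {question}"

def pilot_resolver(question: str) -> str:
    return f"Stand up a scoped pilot to answer: {question}"

# The Evangelist's registry: each Dimension gets its own set of resolvers.
# "Dimension A"/"Dimension B" are placeholders for the figure's Dimensions.
resolvers: dict[str, list[Resolver]] = {
    "Dimension A": [docs_resolver],
    "Dimension B": [docs_resolver, pilot_resolver],
}

def resolve(dimension: str, question: str) -> list[str]:
    """Run every resolver registered for a Dimension against a question."""
    return [r(question) for r in resolvers.get(dimension, [])]

print(resolve("Dimension B", "How do we gate container images?"))
```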

In one last graphic for this Note, the correlation of the Dimensions to the Continuous Integration/Continuous Delivery model commonly in use today is provided below.

CI/CD model from: What is DevOps - A development process or a set of tools - DEV Community; accessed 23 March 2023

The reader will note that the Digital Twin Consortium Dimensions become gateways from one part of the process to the next.  This enables developers to target capability for entry into a subsequent phase, and can enable simultaneous work on non-sequential components.  Similar to the use of data planes and control planes in virtual architecture, the CI/CD process can be viewed as a data plane, and the Digital Twin Consortium Dimensions as the control plane.
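One way to read that split, as a minimal sketch with hypothetical stage names and gate logic: the CI/CD stages move the artifact along (the data plane), while a Dimension gate decides at each boundary whether flow proceeds (the control plane).

```python
from typing import Callable

# A gate is the control plane's decision at a stage boundary.
Gate = Callable[[str], bool]

# Hypothetical pairing of CI/CD stages with Dimension gates; in the
# figure above, each Dimension guards entry into the next phase.
pipeline: list[tuple[str, Gate]] = [
    ("build",  lambda artifact: True),
    ("test",   lambda artifact: artifact != "broken-build"),
    ("deploy", lambda artifact: True),
]

def run(artifact: str) -> str:
    for stage, gate in pipeline:
        print(f"data plane: running {stage} on {artifact}")
        if not gate(artifact):  # control plane halts flow at the gateway
            return f"halted at {stage}: Dimension gate not satisfied"
    return "delivered"

print(run("service-v1"))  # passes every gate -> 'delivered'
```

Developers can then target the gate guarding the next phase, or work non-sequential stages in parallel, exactly as the paragraph above describes.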

This convergence, as indicated at the beginning of the Note, provides a way to use concepts from virtualization architecture, best practices from developer environments, and a use case from digital twin experiences to create a scalable, flexible, and extensible system of change methodology.  That’s a long sentence for sure, but it’s as succinct a way as any of bringing combinations forward to enable ‘forward’, itself.
