Part 3, with Data as Code as the Digital Twin of a transformation cycle
Since the origin of the Digital Twin, the idea has served both as a way to mimic the operation of a physical item in software and as a way to model process-engineering tasks. In developing a Data as Code Center of Experience, we can combine those perspectives. Transformation without a goal is stateless and largely unmanageable. Transformation toward a goal (namely, a version of the organization partially transformed) becomes a stateful transition from an as-built organization to an as-envisioned one.
For purposes of transformation management, the as-envisioned organization becomes the output of the investment in the as-built organization. Since the digital twin, as a representation of the organization's transformation lifecycle, includes both the as-built and as-envisioned states (for purposes of mimicry) and the process by which the state change is accomplished (the transition process), we can apply the Digital Twin Capabilities model all along the transition path (Digital Twin Capabilities Periodic Table, digitaltwinconsortium.org; accessed 04 July 2023). Looking further ahead, the laws of cloud thermodynamics may become the context that supports Data as Code modeling, to the point where Data as Code manages the supply-demand curve for data delivery systems.
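As a minimal sketch of the as-built/as-envisioned framing, the fragment below models the transformation as a stateful transition between two organization states and reports the gap the twin must observe and close. The class names, attribute names, and scoring are illustrative assumptions of ours, not definitions taken from the Digital Twin Consortium model.

```python
from dataclasses import dataclass, field

@dataclass
class OrgState:
    """A snapshot of the organization at one point in the transformation lifecycle.
    Attribute names are illustrative, not part of any published model."""
    name: str                                          # e.g. "as-built" or "as-envisioned"
    capabilities: dict = field(default_factory=dict)   # capability -> maturity score (assumed 0-5)

@dataclass
class Transition:
    """A stateful transition from an as-built state toward an as-envisioned state."""
    source: OrgState
    target: OrgState

    def delta(self) -> dict:
        """Gap per capability between where we are and where we intend to be."""
        keys = set(self.source.capabilities) | set(self.target.capabilities)
        return {k: self.target.capabilities.get(k, 0) - self.source.capabilities.get(k, 0)
                for k in keys}

# Usage: the gap report is what the digital twin observes along the transition path.
as_built = OrgState("as-built", {"data_delivery": 1, "observability": 2})
as_envisioned = OrgState("as-envisioned", {"data_delivery": 4, "observability": 3})
print(Transition(as_built, as_envisioned).delta())
```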
Since sequential transition paths form the lifecycle of a transformation, the digital twin capability extends to serve as a foundation for managing inputs, outputs, and the consumption of inputs into outputs for purposes of transition observability. The components of the Digital Twin Capabilities model are provided below.
Though yet to be proven, it appears that within the framework of a Data as Code platform the Digital Twin Capabilities model can serve as the goal-setting component for the Data as Code Center of Experience. Many of the sub-components of the model's six primary components are already formulated in terms of data-centric capability (Digital-Twin-Capabilities-Periodic-Table-Toolkit.xlsx, live.com; accessed 04 July 2023).
With the requirements from Part 2 of this working set, we can create a platform generator framework. The Maturity Model for the six Digital Twin Domains is taken from the CNCF Maturity Model in order to maintain compliance with the Digital Twin Consortium's interest in open-source development. The CNCF Maturity Model consists of Business Outcomes, Policy, Process, People, and Technology. Among current formulations of maturity-model domain sets, the CNCF configuration parallels broader industry developments.
In terms of a Capability Model, the CNCF again provides a solution in the form of Build, Operate, Scale, Improve, and Optimize. The Digital Twin Consortium's Level 2 definitions serve as the components of the Capability Model.
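To make the framework hierarchy concrete, the sketch below crosses the maturity-model domains with the capability levels to produce the skeleton of the platform generator framework. It is a minimal illustration under the assumption that each (domain, capability) cell eventually holds the applicable Digital Twin Consortium Level 2 definitions; the code structure and function names are ours, not the CNCF's or the Consortium's.

```python
from itertools import product

# CNCF maturity-model domains and capability levels as named in the text above.
DOMAINS = ["Business Outcomes", "Policy", "Process", "People", "Technology"]
CAPABILITIES = ["Build", "Operate", "Scale", "Improve", "Optimize"]

def framework_skeleton() -> dict:
    """Return an empty (domain, capability) matrix.

    Each cell is a placeholder for the Digital Twin Consortium Level 2
    definitions assigned to the Capability Model; populating the cells is
    the goal-setting work of the Center of Experience.
    """
    return {(domain, capability): [] for domain, capability in product(DOMAINS, CAPABILITIES)}

# Usage: 5 domains x 5 capability levels = 25 cells to be filled with goals.
skeleton = framework_skeleton()
print(len(skeleton), "cells in the platform generator framework")
```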
In keeping with the macro-, meso-, and micro-level perspectives found in academic disciplines, a third model is needed: the asset base used to formulate cloud-native Data as Code functionality. The CNCF Landscape, along with the products and services supported by the Linux Foundation, provides that asset model. As components are added to the CNCF Landscape, they will be automatically included in the Data as Code Asset Model.
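A minimal sketch of that automatic inclusion follows: it pulls the raw landscape data from the cncf/landscape repository and walks the parsed structure for anything that looks like a named item, without assuming a fixed schema. The URL, branch, and traversal heuristic are assumptions to be verified against the repository; this is not an official CNCF API.

```python
import urllib.request
import yaml  # pip install pyyaml

# Raw landscape data; path and branch assumed, verify against github.com/cncf/landscape.
LANDSCAPE_URL = "https://raw.githubusercontent.com/cncf/landscape/master/landscape.yml"

def collect_assets(node, assets):
    """Recursively gather anything that looks like a named landscape item."""
    if isinstance(node, dict):
        if "name" in node and "homepage_url" in node:
            assets.append(node["name"])
        for value in node.values():
            collect_assets(value, assets)
    elif isinstance(node, list):
        for value in node:
            collect_assets(value, assets)

def load_asset_model() -> list:
    """Rebuild the Data as Code Asset Model from the current CNCF Landscape."""
    with urllib.request.urlopen(LANDSCAPE_URL) as resp:
        data = yaml.safe_load(resp.read())
    assets = []
    collect_assets(data, assets)
    return assets

# Re-running this periodically keeps the Asset Model in step with the Landscape.
if __name__ == "__main__":
    print(len(load_asset_model()), "candidate assets found")
```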
Since the goal of the project is to develop a Data as Code Center of Experience, the framework hierarchy needs amendments. These amendments take the form of a DevSecOps process based on Industry 4.0 supply chain characteristics. The purpose of DevSecOps processes is to ensure that end users have access to the latest features and functions of an application, and that their use of it is provided by, with, and through a scalable, sustainable, and productive environment. To supplement the workbench already described, the NIST advanced manufacturing templates provide a meso-level model for articulating the project, in more than one sense of the word.
Questions about how long it will take to create the framework, the platform, the user experience, and the reliability engineering for such a project are without answers, or even assessments that could lead to answers. The process is subtitled 'zero to platform in 180 days,' though the beginning, the end, and the throughway from one to the other are not yet defined. This becomes the work of future days.
The reader will recognize that there are three levels to the project, and only one level of assets. In the terms of the NIST cloud definition, these assets are the resource pools that will be combined, recombined, and extended through an operator, API, and SDK continuum. The use of APIs, for example, accelerated the success of Amazon's engineering in developing a scalable, flexible, and efficient business platform. We are only attempting to repeat that process.
We also realize that the parallel development of a composable edge platform is critical to the eventual application of the Data as Code Center of Experience in the market. Fortunately, given the relationship defined earlier, the two projects do not affect one another until the point at which they can be merged.
Any project of this scope needs reporting and performance-validation metrics. For measuring the hierarchical attainment of services delivered from the platform built out of assets from the resource pools, the NIST Big Data characteristics will be used: volume, variety, velocity, variability, and value, in the context of a Data as Code experience center. By comparing the requirements demanded of the platform with the output generated by its engineering teams, the delta between demand and supply can be measured and managed.
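As a minimal sketch of that demand-versus-supply comparison, the fragment below scores each of the five characteristics for both the requirements side and the delivered side and reports the shortfall. The 0-5 scoring scale and the example numbers are assumptions for illustration, not a NIST-defined measurement method.

```python
# The five characteristics named in the text; the 0-5 scoring scale is assumed.
CHARACTERISTICS = ["volume", "variety", "velocity", "variability", "value"]

def shortfall(demand: dict, supply: dict) -> dict:
    """Per-characteristic gap between what the platform is asked for and what it delivers.
    Positive values indicate a shortfall to be managed; zero or negative means demand is met."""
    return {c: demand.get(c, 0) - supply.get(c, 0) for c in CHARACTERISTICS}

# Usage with illustrative numbers only.
demand = {"volume": 4, "variety": 3, "velocity": 5, "variability": 2, "value": 4}
supply = {"volume": 3, "variety": 3, "velocity": 3, "variability": 2, "value": 2}
print(shortfall(demand, supply))  # e.g. {'volume': 1, 'variety': 0, 'velocity': 2, ...}
```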
To finish this three-part series, there remains only an expression of gratitude to and for those whose works provide the foundation for these thoughts. Without their successes, another could not be so well contemplated.