Refactor-as-a-Service?
“This won’t be a year to realize grand ambitions, but it marks a moment to refocus, retool and rethink your infrastructure. In every crisis lies opportunity, and in this case, the chance to make positive changes that may be long overdue.”
Paul Delory, VP analyst at Gartner; words in bold highlighted by the Note author
Four trends influencing the future of cloud, data centers, and edge infrastructure | ITPro; accessed 22 May 2023
A moment to reflect on what we have been able to do versus what we wanted to do, and what we might do going forward as a result. In thinking about the statement above, you realize that the ordering of activity might better be rethink, refocus, and retool. Rethinking, because technical debt was incurred when we didn’t have time to think: the frenzy of two years ago amounted to doing more of what we were already doing three years ago. And the years of doing the same, and more of it, compounded that debt.
Everybody, thankfully, knows that.
The contingencies weighing on information management resources for the past three, now going on four, years have been the cause of most of the current need to rethink, refocus, and retool. Fortunately, the opportunities to retool are as great as they ever have been, if only viewed in terms of the potential paths along which organizations can proceed.
Let’s say, for starting purposes, that we still want to sustain productivity and productively continue moving toward sustainability. The use of automation to replace manual tasking is now possible across the data center and across the customer experience. That’s the productivity arc. “Can you…? Will you…? Why won’t you…?” deliver automation into multiple cores of your as-built environment? That’s the productivity question, and if you can’t see how automation frees up time to use in concatenating services into self-services, that’s where your productivity gains will stop until you get another moment to rethink, and so on.
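The idea of concatenating automated tasks into self-services can be sketched in a few lines. This is a minimal illustration, not any real platform’s API; the task names (provisioning, backup, notification) are hypothetical stand-ins for steps that once required manual hand-offs.

```python
from typing import Callable

# A "task" is any automated step: it takes a request, enriches it, returns it.
Task = Callable[[dict], dict]

def provision_storage(request: dict) -> dict:
    # Formerly a manual ticket; now an automated step (illustrative only).
    request["storage"] = f"{request['size_gb']}GB volume allocated"
    return request

def configure_backup(request: dict) -> dict:
    request["backup"] = "nightly snapshot enabled"
    return request

def notify_requester(request: dict) -> dict:
    request["status"] = "ready"
    return request

def self_service(*tasks: Task) -> Task:
    """Concatenate automated tasks into one self-service entry point."""
    def pipeline(request: dict) -> dict:
        for task in tasks:
            request = task(request)
        return request
    return pipeline

# One self-service call replaces three manual hand-offs.
new_volume = self_service(provision_storage, configure_backup, notify_requester)
result = new_volume({"size_gb": 100})
```

The gain is exactly the one described above: once each step is automated, composing them into a self-service is cheap, and the freed time is where the next round of productivity comes from.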
Sustainability from the data center perspective can be approached using four concepts, plus ‘Repeat.’ As innovative solutions become available and practical, we need to be able to adapt to adopt those which might work for us.
1. Optimizing and ‘refactoring’ cloud infrastructure.
2. New application architectures driving infrastructure changes.
3. Data center teams adopting cloud principles on-prem.
4. Skills growth as a key priority for businesses.
5. Repeat.
Four trends influencing the future of cloud, data centers, and edge infrastructure | ITPro; accessed 22 May 2023
The Gartner analysis supporting the trends above provides a methodology that replaces ‘Location. Location. Location.’ in real estate with ‘Refactor. Refactor. Refactor.’ It’s not just applications and the code within them that need to be refactored; it’s the entire technology stack and the entire cloud delivery model. The infrastructure supporting the business needs to be refactored to support business changes.
Let’s say, maybe even we agree, that up to the cloud era, the potential for a data center to replicate the structure of a business was limited to technical selections alone. Mainframe, Unix, and Windows were the choices, and which applications went where was the sum total of the cost-of-ownership decisions that could be made on behalf of the business. One realizes one is making light of a career spent working through these assessments with valued customers and prospects. But those were the choices.
When the internet combined with the hyperscaler cloud providers, another cost model became available in which we could substitute communication bandwidth for on-premise infrastructure. And the need to consider refactoring a data center from on-premise only to on-premise plus some publicly available cloud functionality became a fiscal necessity.
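The fiscal necessity described here is, at bottom, a break-even calculation: amortized on-premise capital plus operating cost versus pay-as-you-go cloud consumption. A toy sketch of that comparison follows; every figure in it is a made-up illustration, not a benchmark or a real pricing model.

```python
def on_prem_cost(months: int, capex: float, opex_per_month: float) -> float:
    # Up-front hardware spend plus ongoing power/space/staff cost.
    return capex + opex_per_month * months

def cloud_cost(months: int, usage_per_month: float) -> float:
    # Pure consumption: bandwidth and hosted infrastructure, no capex.
    return usage_per_month * months

def breakeven_month(capex: float, on_prem_opex: float, cloud_usage: float):
    """Month at which cumulative cloud spend overtakes on-prem spend;
    None if cloud stays cheaper over a ten-year horizon."""
    for m in range(1, 121):
        if cloud_cost(m, cloud_usage) >= on_prem_cost(m, capex, on_prem_opex):
            return m
    return None

# Hypothetical numbers: a $120k refresh with $1k/month opex,
# versus $4k/month of cloud consumption.
m = breakeven_month(capex=120_000, on_prem_opex=1_000, cloud_usage=4_000)
```

With these invented numbers the cloud path stays cheaper for the first forty months, which is the shape of the argument: the substitution of bandwidth for on-premise infrastructure changes the cost model, and whether it pays off is a function of horizon and utilization, not ideology.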
As the offerings in the public clouds became more corporatized (i.e., contained data availability solutions, compliance, and governance functionality), more of the on-premise functionality could be lifted and shifted into the public cloud. And the cost comparisons came to include not only private clouds and public clouds, but also hybrid and multi-cloud offerings.
It is into this environment that cloud native solutions came to be. Just like the internet and hyperscaler clouds came together, at scale, to off-load on-premise data center solutions, cloud native components have come together, at scale, to off-load legacy clouds. Private, Public, Hybrid, and Multi-Cloud solutions can now all be replicated through the use of cloud native components. Assuming, that is, that an existing application can be refactored into a microservices based offering.
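That last precondition, that an existing application can be refactored into a microservices-based offering, is worth making concrete. The sketch below shows the same capability twice: first as a method buried in a monolith, then extracted behind a narrow request/response interface that could sit behind an HTTP endpoint or a queue. The invoice example is purely illustrative.

```python
class Monolith:
    """Before: pricing logic entangled with everything else in one codebase."""

    def invoice_total(self, items):
        # items is a list of (quantity, unit_price) pairs.
        return sum(qty * price for qty, price in items)


class PricingService:
    """After: the same logic isolated behind a message-style interface,
    deployable and scalable independently of the rest of the application."""

    def handle(self, request: dict) -> dict:
        total = sum(qty * price for qty, price in request["items"])
        return {"total": total}


# The refactor is safe only when both paths agree on the same input,
# which is what makes the extraction testable step by step.
items = [(2, 9.99), (1, 30.00)]
legacy = Monolith().invoice_total(items)
extracted = PricingService().handle({"items": items})["total"]
```

The point of the exercise is the one made above: when a capability can be carved out behind a stable interface, it can be re-hosted on cloud native components; when it cannot, it stays welded to whatever infrastructure it was born on.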
The capability to refactor, end to end, becomes essential in an era of digital transformation. As long as the business units continue to require innovation, there will be a need to refactor the supporting information infrastructure. As to why that’s true, any componentry in an information delivery system that cannot be refactored for reasons of cost, extensibility, or decommissioning represents technical debt that cannot be reduced. And those supporting the increased adoption of open-source solutions understand that.
What is harder to get around, through, or even into, is how the constraints on business today are going to be circumvented quickly enough to prevent digital transformation from drowning in technical debt. That is, macroeconomic pressures will force us to make decisions that cannot help but be detrimental to digital transformation.
The message here is that we always need to refocus, retool, and rethink. What we need to do with the insights and competitive advantage we come to realize is act. And acting, in the case of a scalable, flexible, and sustainable information infrastructure, means refactor.
Which is something to deal with in a following note. That is, if we can make refactoring the verb that changes how we view productivity and sustainability projects. That would be an -as-a-Service that enabled future-proofing as well as continuous integration with modernization.