
Cloud has fundamentally changed how organisations store and process data. What we're calling "cloud" isn't just about location; it's a set of operating models that shape cost structures, skill requirements, and how easily a business can adapt over time.

 

We hate technology jargon at Miso. It usually obscures more than it explains. In this case, however, a shared set of terms is useful, so we'll use the industry terms Cloud 1.0, 2.0, 3.0 and Cloud X.0 to describe the different models and illustrate how each shapes the technology.

What are the cloud operating models?

 

Cloud continues to evolve, shaping the needs of organisations and the native state of their data. So, what are the four key cloud models?

Cloud 1.0

Cloud 1.0 is the lift-and-shift approach. Organisations looking to escape the cost, complexity, and operational burden of running on-premises hardware shift their infrastructure to someone else's environment, i.e. Infrastructure as a Service (IaaS).

As the architecture remains infrastructure-centric, the skills required are similar to traditional IT. This makes the transition relatively easy from a capability perspective but ultimately limits the benefits the cloud can deliver.

Cloud 2.0

Cloud 2.0 is cloud native. Instead of running their own servers, organisations adopt cloud-native solutions, such as Salesforce, and Platform as a Service (PaaS) models that provide ready-made components, processes, and management.

This gives speed and convenience but also concentrates data and systems in a single vendor's cloud environment. Cloud 2.0 introduces new delivery practices such as Continuous Integration and Continuous Delivery (CI/CD) and platform management, which in turn create specialist roles across DevOps. As more data moves onto these platforms, costs rise, flexibility reduces, and technology becomes increasingly tied to specific vendors.

Cloud 3.0

Cloud 3.0 is a distributed model. Rather than trying to centralise everything in one place, this model recognises that a more diffused approach can be more efficient. It focuses on communicating and orchestrating across different environments (on-prem, clouds, and the edge) based on where data and processing make most sense.

It's also where AI services naturally land, as training and inference benefit from operating close to data, rather than trying to deliver data-heavy services to remote locations.

While this model is fluid and flexible, it’s also harder to control. As elements in your data pipeline are atomised across environments, orchestration becomes fundamental. Teams rely more heavily on services such as Azure Data Factory and AWS Glue to coordinate activity. Financial control, governance, and transparency become substantially more complex, and skills extend beyond the knowledge of internal teams.

Cloud X.0

Cloud X.0 is the model that comes next. It's a recognition that cloud is a dynamic landscape, and that a combination of cost pressure, data pressure, skills pressure, and market pressure will cause a new model to emerge. Cloud 1.0, 2.0 and 3.0 were each responses to the limits of the models before them. Cloud X.0 will be no different.

What’s happening in the real world?

 

In our conversations with customers, we see very few organisations operating within a single cloud model. Most are running a mix of on-prem, Cloud 1.0, 2.0 and 3.0 patterns, shaped less by long-term strategy and more by skills availability, existing technologies, and the demands of individual projects.

 

On-prem environments still play a significant role, often sitting uncomfortably alongside third-party software that is increasingly cloud-only or tied to specific platforms. This mismatch creates frustration, amplified by exaggerated vendor claims that oversimplify complex realities and downplay difficult trade-offs.

 

We see data and engineering teams under increasing pressure as data volumes grow, platforms shift, and business expectations rise. The fundamental differences between Cloud 1.0, 2.0 and 3.0 mean teams can't realistically develop and maintain deep expertise across an ever-expanding set of specialist approaches and tools. Every additional tool or approach stretches already limited capacity and further fragments knowledge.

From a technology perspective, what does this mean?

 

What organisations actually need is less technology. Rather than adopting a new set of tools for every environment, they need common ones: tools that allow teams to build deep expertise and apply it consistently, regardless of how their infrastructure evolves.

 

These Evolutionary Technologies need to be fully agnostic. They need to work on-prem as well as in fully distributed, multi-cloud environments. They need a broad church of inputs and outputs, and critically they need a licensing structure that doesn't throw up prohibitive cost hurdles as soon as new use cases are commissioned. Finally, they need to integrate seamlessly and silently into orchestration, regardless of the other key technologies around them.

 

It's only when teams have access to this sort of Evolutionary Technology, a consistent and reliable capability that works across the messy reality of today's server rooms and tomorrow's fluid Cloud X.0 environments, that they can be confident in their ability to deliver the environment the business actually needs.
