If an incident occurred or a system went down, engineers could visit their local datacenter, pull plugs and push buttons, undertaking manual procedures to get operations running again. With the scope of resilience all under one roof, they could literally oversee the entire IT lifecycle, and because they were used to getting their hands on the hardware, they knew what to look for.
But when the public cloud became a fixture in the digital ecosystem, this changed. Offering unprecedented elasticity and scalability, the cloud encouraged widespread adoption of an OpEx model for IT spending, permitting smaller upfront investments and easy purchases via the credit card on file. With hosting and management handled by another party, not only off premises but usually in a land far away, the cloud introduced many layers of abstraction and shared service models. This reduced the operational upkeep that had gone into running stacks locally and redirected the attention of engineers from ensuring the resilience of on-premises infrastructure to supporting the velocity of development.
It’s been about 15 years since the work of operations engineers (Ops) merged with that of development engineers (Dev), coalescing into the DevOps model. While DevOps revolutionized how teams collaborate and accelerated the pace of product development and delivery, the need for speed still prevails across organizations. In an unpredictable world of fast-moving markets and increasingly complex IT landscapes, many organizations struggle with the processes that control their daily business. They feel they must choose between the resilience emphasized by Ops and the velocity driven by Dev. Yet, for Schuberg Philis and the enterprises we partner with, it’s not a tradeoff. Organizations can have both by adopting a holistic view of resilience and, within it, reintegrating the role of Ops today.