
IT's job: smoothly make connections


Looking ahead (and not too far), we see IT evolving to make use of the full spectrum of sourcing options available: from in-house data center resources to co-lo and hosted services to cloud, both public and private, spanning SaaS, IaaS, and PaaS. Our research has already shown that making use of a broad range of options is tied to greater success in delivering IT services.

As the spread of options continues to widen, integration and automation will become increasingly critical to successful use of the whole spectrum. To make the smooth integration and fluid application of all these different sourcing strategies work, IT is going to need tools that span the spectrum: tools to manage, automate, and orchestrate the movement of workloads and data, and to ensure appropriate performance and security.
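As a purely illustrative sketch (the class and method names below are hypothetical, not drawn from any product or from this post), the kind of tool in question presents one interface no matter where a workload runs, so that deploying to the in-house data center and deploying to a public IaaS provider are the same operation:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class Workload:
    name: str
    cpu: int
    memory_gb: int
    image: str  # a standardized package, e.g. an OVF appliance


class SourcingTarget(ABC):
    """One interface for every sourcing option: in-house DC, co-lo/hosted, or public cloud."""

    @abstractmethod
    def deploy(self, workload: Workload) -> str:
        """Place the workload and return an identifier for it."""

    @abstractmethod
    def health(self, workload_id: str) -> dict:
        """Report performance and security posture in a common format."""


class InHouseDataCenter(SourcingTarget):
    def deploy(self, workload: Workload) -> str:
        # hypothetical: hand off to the on-premises virtualization manager here
        return f"onprem::{workload.name}"

    def health(self, workload_id: str) -> dict:
        return {"id": workload_id, "status": "running", "location": "onprem"}


class PublicIaaS(SourcingTarget):
    def deploy(self, workload: Workload) -> str:
        # hypothetical: call the provider's standards-based API here
        return f"iaas::{workload.name}"

    def health(self, workload_id: str) -> dict:
        return {"id": workload_id, "status": "running", "location": "iaas"}


def migrate(workload: Workload, source: SourcingTarget, destination: SourcingTarget) -> str:
    """Orchestrated move; in a real tool, export from the source and teardown would happen here."""
    return destination.deploy(workload)
```

The point of the sketch is the shape, not the details: as long as every target speaks the same (ideally standards-based) interface, the orchestration layer above it does not change when a new sourcing option is added.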

Making tools like that possible in anything but a single-vendor environment (an increasingly rare thing) requires the development of, and adherence to, vendor-independent standards. Vendors must embrace standards and standardization efforts, and IT must focus its efforts on standards-based environments and tools. As we have seen in decades past, with the moves to Ethernet and IP, to SQL, and to the Web, the embrace of open standards can be critical to a mass migration to a new way of doing things.

In the world of virtual data centers and cloud resources, standards cover both how services work and how they are secured and provided. Functional standards include the Open Virtualization Format (OVF), for packaging single or grouped virtual machines, and OpenStack, an open-source platform (essentially a cloud operating system) with open APIs for managing pools of compute, storage, and network resources. Provider standards include those of the Cloud Security Alliance and the Open Data Center Alliance, which suggest usage models and security requirements for providers of cloud services.
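To make the value of those open interfaces concrete, here is a minimal sketch using the openstacksdk Python library to stand up a virtual machine; the cloud name, image, flavor, and network names are placeholders, and the details reflect our assumption of typical usage rather than anything prescribed by the standards bodies above. Because the API is open and vendor-independent, the same few lines can drive a private OpenStack deployment or a public OpenStack-based provider.

```python
import openstack  # pip install openstacksdk

# Credentials come from a clouds.yaml entry named "mycloud" (placeholder).
conn = openstack.connect(cloud="mycloud")

# Look up standard building blocks by name (all placeholders).
image = conn.image.find_image("ubuntu-22.04")
flavor = conn.compute.find_flavor("m1.small")
network = conn.network.find_network("private")

# Create the server through the same open API any conforming cloud exposes.
server = conn.compute.create_server(
    name="demo-workload",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)

# Block until the instance is ACTIVE, then report.
server = conn.compute.wait_for_server(server)
print(server.name, server.status)
```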

The key, again, is that everything has to be tied together, from the most closely held, mission-critical data and systems running inside a company-owned data center to the most dynamically relocated systems floating from IaaS provider to IaaS provider based on the best daily rates for the compute and storage required. Data needs to flow freely from system to system; application performance monitoring needs to be able to see into all the components of an application no matter where they live; and security policies need to be shared by all and enforced by each as appropriate.

One-off system-to-system integrations cannot scale to large and complex environments, as we have seen in the past. Just as important now, they cannot expand quickly. To provide business agility, IT needs to be able to stand up new systems quickly, and if the plan is to integrate each new system with existing systems one-off, the speed of deployment plummets. If everything hooks together within a standards framework, adding new features and whole new systems is simple and fast.

Bottom line: Early in the 20th century, a huge part of the city of Baltimore burned to the ground because firefighters coming in from outside the city to help fight the fire could not attach their hoses to the city hydrants; the connectors didn't match up. Don't set your application delivery environment up for a future catastrophe by failing to build it around standards for integration, interoperability, security, and management.

