Should you migrate your datacenters to the Cloud? It depends…

One of the big items in technology news this month was the Netflix announcement that it would be powering down the last of its datacenters, bringing a multi-year Cloud migration initiative to a conclusion. Adrian Cockcroft, who led the Netflix infrastructure team through this migration, predicted that more enterprises would adopt a Cloud-only strategy. Many experts claim that the corporate datacenter is dead, but is that the reality?

At the same time, Salesforce has been busy establishing new datacenters all over the world, as have other players such as Microsoft and Google. Given the stated advantages of the Cloud (lower costs, simpler setup and operation, and easier management), establishing new datacenters does not seem to make sense. Is the Netflix model universally applicable, or is there more here than meets the eye? Let us first take a brief look at what worked well for Netflix.

  1. Business Model: Netflix offers movies and TV shows on demand to subscribers, which are streamed for viewing on PCs, televisions, and mobile devices. Subscribers are likely to be indifferent to the source of their content, since their interface and experience will not change. Not many companies have such a limited range of operations and applications.
  2. Impact of network costs: This factor is highly favorable to Netflix, as the content moves closer to the consumer, reducing both operating costs and download latencies. Most large companies have legacy applications, and will find that their network costs increase drastically when some applications reside in the Cloud; they may also see increased latency with legacy apps running in the Cloud.
  3. IP considerations: Intellectual Property (IP) considerations are paramount when companies consider moving their crown jewels to the Cloud. The content in this case, however, does not belong to Netflix. The software application could be considered their IP, but even this could be treated as replicable. The major IP for Netflix arises from its people and processes, such as its rating and recommendation systems; this information stays in-house, so Cloud migration poses little IP risk. By contrast, companies such as Intel would prefer to keep their cutting-edge chip designs in-house.
  4. Externally generated data vs. internal: Most of the data generated at Netflix arises from subscriber interaction with the application. It is externally generated, in contrast to the internal data generation of manufacturing organizations (such as GE) or research-focused organizations (such as the Pharma industry).
  5. Regulatory and Compliance limitations: There are no regulatory or compliance limitations upon Netflix’s business model. Its liability arises from the privacy of subscriber information, and is the same in the Cloud as it is in-house. Companies subject to regulations such as HIPAA or ITAR are restricted to compliant providers; they will find that their choices are limited and quite expensive.

In addition, the Cloud provides Netflix with increased scalability and elasticity. Since Netflix receives monthly subscription revenue, converting capital expenses into operational expenses provides better financial visibility. This was likely a significant factor in Netflix’s decision.

Before we rush out to compare and shortlist Cloud providers, it is prudent to evaluate our own organization and operations. A simple method is to answer the questions below with a Yes or No.

  1. Are your workloads relatively consistent (irrespective of season, product/market cycle, etc.)?
  2. Do your workloads generate a high volume of data?
  3. Is most of your data internally generated?
  4. Do your workloads need to interact with legacy data or legacy applications?
  5. Are there IP protection considerations that prevent migration of sensitive data?
  6. Is your company or industry subject to regulatory or compliance limitations?
  7. Would your network costs increase significantly with migration to the Cloud?

If you answered “No” to all these questions, congratulations: you are ready to move your applications to the Cloud. Generate a shortlist of vendors and get quotes to establish your ROI.

If you answered “No” to most questions, certain workloads could move to the Cloud while others remain in your datacenter.

If you answered “Yes” to most or all questions, you should continue investing in your datacenters for the time being.
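
To make this rule of thumb concrete, here is a minimal sketch in Python that tallies the answers and maps the count to the recommendations above. The question keys and thresholds are illustrative assumptions, not a validated scoring model.

```python
# Minimal sketch: score the Cloud-readiness questionnaire above.
# Question keys and thresholds are illustrative assumptions.
QUESTIONS = [
    "consistent_workloads",       # Q1
    "high_data_volume",           # Q2
    "data_internally_generated",  # Q3
    "legacy_interaction",         # Q4
    "ip_restrictions",            # Q5
    "regulatory_limitations",     # Q6
    "network_costs_increase",     # Q7
]

def recommend(answers):
    """Map the number of 'Yes' answers to a coarse recommendation."""
    yes_count = sum(answers[q] for q in QUESTIONS)
    if yes_count == 0:
        return "Cloud-ready: shortlist vendors and compute ROI."
    if yes_count <= 3:
        return "Hybrid: migrate suitable workloads, keep the rest in-house."
    return "Keep investing in your datacenter for now."

print(recommend({q: False for q in QUESTIONS}))  # -> Cloud-ready: ...
```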

A note about company size and the scale of application workloads: while the two need not always be correlated, smaller companies generally have smaller workloads, which grow with the company. The economics highly favor the Cloud for smaller companies. However, this advantage diminishes as the workload grows, because operational expenses scale up rapidly.

It makes sense for very large workloads to be retained in-house; all the large players, including Google, Microsoft, Facebook, and Amazon, are focused on building, maintaining, and expanding their datacenter footprints. This means that at some point during a company’s growth, Cloud-based workloads might need to be migrated to in-house datacenters for purely economic reasons; strategic reasons might accelerate this further. For most companies, a hybrid operation makes the most sense: migrate some applications to the Cloud for cost and flexibility, while retaining others in-house for economic, strategic, or compliance reasons.

Still confused? The reasons to migrate to the Cloud and the reasons to keep workloads in-house will continue to evolve, making this a difficult and subjective decision for many companies. There are also many barriers that inhibit rapid adoption of Cloud computing. I dealt with this issue in detail last year; while there have been some improvements, most of the issues continue to be valid and unaddressed by the major providers. Please let me know your thoughts on how you see this evolution progressing and what the industry needs to do to speed it up.


Attend the Silicon Valley VMUG UserCon on April 14, 2015

 

It is almost time for our annual signature event at the Silicon Valley VMUG. There has been much soul-searching about putting together a useful and meaningful agenda for our attendees, hectic planning sessions with VMUG HQ, anticipation and angst regarding participation from members and partners alike, and last-minute preparations to tie up loose ends. It is now time for the event to unfold, and for us to experience it and learn from it for the years ahead.

Participation in UserCons is a key membership benefit, available to all VMUG members at no cost. UserCons offer attendees a wide range of choices: VMware and partner educational sessions, an Exhibit Hall with 50+ partners, and morning and afternoon keynotes delivered by renowned industry speakers. What is more, VMUG has you covered with breakfast, lunch, and coffee, so you can spend the entire day attending sessions, previewing the Solutions Center, and networking with fellow attendees.

Based on member feedback, we have refreshed the agenda with a number of new and updated sessions. We have big names for the keynotes, Kit Colbert and John Troyer, as well as great breakout sessions from VMware and partners with compelling educational content. New this year is a Demo Zone, where selected partners offer short demos that provide a quick technical overview and get the conversation started. Nick Marshall and Josh Atwell, authors of “Mastering VMware vSphere 6”, will be doing a book signing during lunch. You can discuss the book with the authors and get your copy signed; we are also giving away 30 copies to our members. Will you be a lucky winner?

Agenda highlights are below; please follow the registration link to view the entire agenda.

Click here to Register

Some key facts from previous UserCon attendee surveys:

  • 99% of attendee survey respondents plan to return
  • 98% would recommend events to a colleague
  • 85% of attendees are decision makers / influencers within their organizations

Attendees also cited these as their primary reasons for attending:

  1. Build Relationships with VMUG Members
  2. Learn about the latest solutions for their VMware installations
  3. Share their Knowledge

There are also loads of other prizes on offer, including VMUG Advantage subscriptions, gift cards, and two home labs (who wouldn’t like to win those?). The VMware vCloud Air team will be sponsoring a VMUG member reception after the event. If you are still on the fence after reading all of this, watch the video below to hear first-hand from attendees and partners about their experiences.

[Video: Silicon Valley VMUG UserCon]

If you live in the San Francisco Bay Area, I hope to see you there. Follow the tweets at #svvmug and #MyVMUG.


The Paradox of Cloud Computing Adoption – Part III

 

Previously, I dealt with some of the complexities of computing as it exists within enterprises today, as well as intrinsic aspects of these workloads; these factors greatly influence the decision to migrate workloads to the Cloud. Earlier posts can be viewed here and here. In addition, there are a number of factors that inhibit rapid Cloud adoption by customers. Some of these are:

 

1.  Standards, Standardization, and Interoperability between Cloud providers

Today, we have multiple providers selling *aaS services within their public Clouds, as well as products for Private Clouds. Each vendor is attempting to stake out new ground with regard to features and capabilities. There are some basic similarities between these Clouds in terms of feature support, but they do not interoperate with each other. Migrating a workload from one Cloud to another is a lengthy, manual, multi-step process, making it useful only as a one-time migration activity. The process does not permit automation, and significantly increases costs and risks for organizations.

2.  Vendor lock-in

While there are multiple providers offering similar services, each vendor is interested in acquiring new customers and then locking them into its Cloud forever. Amazon makes it easy and inexpensive to migrate in-house workloads to its Cloud, but makes it very difficult and expensive for customers to leave. Other vendors employ similar strategies through a combination of product features and pricing. Customers have experienced the impact of vendor lock-in on many occasions, with many products and vendors; hence they are wary of this strategy.

3.  The rip-and-replace issue

A large organization has many applications that generate data, communicate, and interoperate with each other via automated workflows. En-masse replacement of all applications is not feasible; it must be accomplished one application at a time. Each application migrated to the Cloud needs to be reconfigured to communicate with the relevant in-house applications. There is significant expense and risk involved, with nearly zero ROI until the entire migration is complete; even at that point, the potential ROI is unclear.

4.  The scale-up paradox that accompanies growth

We dealt above with the Cloud adoption curve: as a new startup grows, its variable costs increase, along with the complexity of its applications and the number of providers it uses. At some point, it becomes feasible and necessary to consider investing capital to reduce the variable costs, at least for the most expensive of its providers. This runs contrary to the direction of the industry. The recommendation is quite clear for organizations with large, low-scope applications (in-house) and for small startups (Cloud); when such a startup grows into a mid-sized company, both scale and scope increase, and the recommendation is unclear over a wide range in the middle. Also, is there a strategic reason to bring workloads in-house sooner?

5.  Buy vs. rent

The main selling point for a public Cloud service is that large capital costs are converted into manageable operational costs. This is attractive and meaningful for small companies, where it is easy to acquire a service without needing to build a new skillset in-house. However, Cloud services are rented by the hour, and by the capacity tier. Given a choice, would you buy a car, or rent it on a daily basis, when you plan to use it every day for the next 5 years? What if you needed it only for the next week? The recommendations for these cases appear obvious; what about moderate usage?

Cloud services represent capital costs for the provider, who must recover them by renting this capacity at higher prices to customers. With a relatively stable workload, it might be feasible for organizations to establish in-house systems, using the public Cloud for overflow capacity in a Hybrid model.
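
A rough break-even sketch in Python illustrates the point. Every number below (hourly rate, hardware price, operating cost, lifetime) is a hypothetical assumption chosen for illustration, not a quote from any provider.

```python
# Buy-vs-rent break-even sketch; all prices are hypothetical assumptions.
HOURS_PER_YEAR = 8760

cloud_rate = 0.50        # assumed $/hour to rent a comparable instance
server_capex = 8000.0    # assumed purchase price of comparable hardware ($)
server_opex = 2000.0     # assumed $/year for power, space, administration
lifetime_years = 5

def annual_cost(utilization):
    """Return (rent, buy) annual cost at a given utilization in [0, 1]."""
    rent = cloud_rate * HOURS_PER_YEAR * utilization
    buy = server_capex / lifetime_years + server_opex
    return rent, buy

for u in (0.05, 0.25, 0.50, 1.00):
    rent, buy = annual_cost(u)
    cheaper = "rent" if rent < buy else "buy"
    print(f"utilization {u:>4.0%}: rent ${rent:>7,.0f} vs buy ${buy:>7,.0f} -> {cheaper}")
```

Under these assumptions, renting wins easily at low utilization and buying wins at sustained full utilization; the crossover falls between moderate and full utilization, which is exactly where the decision is hardest.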

6.  Risks of working with startups that could vanish

There are a number of startups offering Cloud-based services that have the potential to replace in-house legacy applications; examples include ERP systems and databases. It is quite likely that they do not offer all the features, or the sophistication, of legacy software. Enterprises might consider migrating such applications with the understanding that more features could be added down the road. However, there is an inherent risk of the provider going out of business, which can happen very suddenly and seriously impact the enterprise’s operations. This is one of the reasons that established companies adopt a cautious approach to migrating critical applications.

7.  Performance and network latency considerations

Cloud-based providers operate multiple datacenters, which can be local to the enterprise (low latency), distributed for a multi-location organization, or remote to aid Disaster Recovery. Each of these has advantages and disadvantages, depending on the customer’s requirements. Since a single provider is unlikely to fulfill the needs of many mid-size and large organizations, it will be necessary to utilize multiple providers in a Cloud-based operational scenario. A multi-provider setup is also recommended to avoid downtime resulting from a single Cloud provider’s failure.

Now there are provider-to-provider latencies to consider as well; if each provider uses a different telecom carrier, the landscape becomes quite complicated to implement and manage. Given the difficulty of enforcing SLAs and OLAs with any single provider, a failure of communication between providers is likely to lead to finger-pointing between them; in the meantime, the organization and its customers are faced with sub-par performance.

8.  Long-term normalization of Cloud Costs

Companies such as Amazon, Google, and Microsoft have been racing to establish datacenters and tout these capabilities to attract new customers. They have been continually lowering their internal costs as well, attempting to match each other’s price reductions. This is a strategic move by large companies, which generate significant profits from their dominant primary businesses, to squeeze out new entrants in this space (Rackspace?) and to establish their designs as the prevailing standard. Customer costs are low today primarily because of this factor. We do not believe any of the large providers is realizing profits commensurate with its investments in this infrastructure.

Also, many of these infrastructure projects have been generously subsidized by governments, and take advantage of low costs for land, electricity, and manpower at the datacenter locations. These incentives will run out, and other costs will rise over time. In addition, these providers could come under pressure (either from a focus on generating ROI for their Cloud investments, or from reduced profits in their primary businesses) and will need to pass some of the costs on to customers. Such price increases could create significant uncertainty about the expected ROI for companies planning to migrate to the Cloud.

9.  Outdated Business Models

The world of economy and business has evolved drastically over the past decades: globalization, use of the Internet, functional (rather than hierarchical) organizations within flatter structures, greater integration with partners, leverage from social media, and much more. However, IT systems still carry the vestiges of older business models, and constrain business processes from changing. While Cloud-based services reflect today’s business models, they represent drastic change for many organizations. Established organizations in the manufacturing, financial, and government sectors are highly prone to this issue, while it is of much less consequence for newer companies in other sectors.

In this context, updating business models is imperative in order to effectively leverage Cloud services; piecemeal adoption will yield limited benefits until the business models are updated. This is a significant opportunity for established organizations, and companies that transform successfully could pose a serious threat to the holdouts. In any case, newer, agile companies in this space will reap the initial benefits of Cloud services simply because they have no legacy systems.

 

Conclusions

Cloud Computing consists of a relatively simple set of tools and technologies, and is expected to make rapid progress toward the Plateau of Productivity within Gartner’s Hype Cycle, shown below.

 

[Figure] Hype Cycle for Emerging Technologies, 2014 (Gartner, August 2014)

However, both the timing and the ultimate scope of Cloud Computing adoption are subject to a number of factors; failure to address them could extend the time frames and/or reduce the viability and adoption of Cloud Computing. These factors include:

  •  Modularity

Vendors need to offer modular services that let customers select the features they need, without having to pay for those they do not. In addition, these services should work with the legacy protocols that customers already use for in-house systems. This will permit incremental replacement of legacy in-house modules with Cloud-based replacements, without extensive customization or integration. Once customers are comfortable with the concept and ease of module replacement, the process would accelerate.

  •  Interoperability

Any effort to address the disjointed landscape of Cloud-based services will increase the comfort level of customers. At present, customers hesitate to even consider deploying systems that utilize multiple Cloud vendors’ services, solely due to the hurdles of integration and the time and effort involved. While the technology workforce and consulting companies have expertise with individual vendors’ services today, they lack the means to integrate these services seamlessly. This is a significant barrier for companies that wish to use compelling services from competing vendors within their application environments. Companies may also wish to arbitrage the costs of Cloud-based services by utilizing multiple vendors, switching among them either statically or dynamically; providers offer dynamic pricing based on their current resource utilization, and it would make sense for customers to leverage this.
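
As a toy illustration of such arbitrage, the sketch below routes work to whichever provider currently quotes the lowest price, switching only when the saving exceeds an assumed migration overhead. The provider names, prices, and overhead figure are all hypothetical.

```python
# Toy cost-arbitrage sketch; provider names and prices are hypothetical.
def should_switch(current, quotes, switch_cost_per_hour=0.03):
    """Return the provider to use: switch only if the hourly saving
    exceeds an assumed amortized cost of moving the workload."""
    best = min(quotes, key=quotes.get)   # provider with the lowest quote
    if quotes[current] - quotes[best] > switch_cost_per_hour:
        return best
    return current

quotes = {"provider_a": 0.42, "provider_b": 0.37, "provider_c": 0.45}
print(should_switch("provider_a", quotes))  # -> provider_b (saving 0.05)
print(should_switch("provider_c", quotes))  # -> provider_b (saving 0.08)
```

Dynamic switching like this is exactly what today’s lack of interoperability makes impractical; the point of the sketch is what becomes possible once migration costs fall.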

  • Reduction of Cloud and Network Costs

While the costs of Cloud services are on the decline today, much of this is due to large vendors staking out positions to establish themselves as market leaders, and deterring new entrants by temporarily reducing market profitability. Long-term costs of these services should therefore rise to match economic costs; some of this would be offset by progressive reductions in technology costs. Unless customers see a permanent reduction in costs, with a falling trend steeper than that of their current technology costs, it will be difficult to persuade medium- to large-sized customers to adopt for mainly cost-based reasons.

Network costs are an important part of this mix, as outlined above. If network charges drop steeply, whether from technology improvements or from increased competition in this space, customers that move large amounts of information would have a greater appetite for Cloud-based services; such a move would be cost-neutral at worst while providing additional benefits.

 

Recommendations

Customer-centric Cloud offerings

Today, we have vendors offering services that represent hardware systems, technology platforms, databases, and the like: all technology services. Customers, on the other hand, utilize and need financial, logistics, manufacturing, engineering, and similar services, which can be categorized as operational services. It is almost as if customers and vendors exist in separate universes. This is probably one of the most critical factors currently restraining adoption: unless the industry offers services focused on customer needs, adoption is likely to remain muted at best. Notable exceptions here are the offerings from companies such as Salesforce and ServiceNow.


The Paradox of Cloud Computing Adoption – Part II

In Part I of this post, I presented an overview of Cloud Computing adoption and the importance of the two primary factors, scale and scope. What other considerations dictate Cloud strategy and have a bearing upon customer adoption? Some of these are listed below.

1. Variability of Workload

Workloads are assumed to be fixed or variable; in reality, they are almost always variable. They can vary at low or high rates, and this variability can be predictable or unpredictable (H&R Block during tax season vs. social media spikes during major events). It is difficult to accommodate a highly variable workload in-house or in a private Cloud; the two available options are to overbuild at large cost or to suffer degraded performance during workload spikes. A public Cloud capitalizes on the fact that workload spikes across its customers are not correlated; if they were, the public Cloud would be susceptible to similar performance issues as well.
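
A small simulation makes the multiplexing argument concrete: the peak of the aggregate of many uncorrelated, spiky workloads is far below the sum of the individual peaks. The workload model below (a flat baseline with rare 10x spikes) is an illustrative assumption, not a measurement.

```python
# Statistical multiplexing of uncorrelated spiky workloads (sketch).
# The workload model is an illustrative assumption.
import random

random.seed(42)
TENANTS, HOURS = 100, 8760

def tenant_load():
    """Baseline of 1 unit, with a 1% chance per hour of a 10x spike."""
    return [10.0 if random.random() < 0.01 else 1.0 for _ in range(HOURS)]

loads = [tenant_load() for _ in range(TENANTS)]

# Capacity needed if every tenant builds for its own peak:
sum_of_peaks = sum(max(load) for load in loads)

# Capacity needed if a shared cloud serves the aggregate:
peak_of_sum = max(sum(load[h] for load in loads) for h in range(HOURS))

print(f"sum of individual peaks: {sum_of_peaks:.0f} units")
print(f"peak of aggregate load:  {peak_of_sum:.0f} units")
```

With uncorrelated spikes, the shared pool needs only a fraction of the capacity that the tenants would collectively build on their own; correlated spikes (everyone surging during the same event) would erase exactly this advantage.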

2. Data Intensity

Enterprises have different characteristics in terms of data generation, based primarily upon the nature of their business. Some organizations generate large amounts of data (social media, insurance, pharmaceutical, and manufacturing companies); others generate relatively little (early-stage startups, mobile app companies). It is easier to migrate low data intensity applications to Cloud providers, and such migrations are likely to incur lower operational costs as well.

3. Externally generated data vs. internal

Companies also differ by where their data is generated. Social media companies receive externally generated data, whereas pharmaceutical or manufacturing companies generate their data in-house via test or manufacturing equipment. Migrating internally generated data to the Cloud is difficult due to the volumes involved, and harder still when Compliance and IP considerations apply (see #5 below).

4. Interoperations between applications, and data movement within the enterprise

Companies could have several standalone applications with relatively limited interaction. For example, Twitter’s operations, internal email, and financial systems interact minimally with each other, if at all. On the other hand, an insurance company’s claims applications, data warehouse, marketing applications, and email systems (workflow) are all linked with well-defined data flows. Here, it would be a significant task to replace any of these components with a Cloud-based service.

Another hurdle with high interoperability of applications is large scale data movement within the organization. Various functions and locations within an organization access data created by each other; internal network connections are created to implement and optimize these data flows, which is a complex task to accomplish across locations and service providers.

5. Legacy issues and IP considerations

Cloud service providers support most modern protocols, such as SOAP and REST, within their PaaS offerings. However, they do not support the legacy protocols still in use at enterprises. This constitutes a significant hurdle that cannot be easily overcome: it is not an issue for new companies, but it is rather significant for established companies that have been in business for many years. Organizations will continue to replace such applications with new services where feasible, but this is a slow process.

6. SLAs and OLAs required by the business

Corporations are increasingly required to provide SLAs and OLAs to their customers for all operational aspects of their business. These flow back to the business applications and IT operations as corresponding SLAs/OLAs. When an application or set of applications is migrated to a Cloud provider, corresponding SLAs/OLAs need to be established with that provider. This may or may not be possible depending on the options the provider offers, may be financially prohibitive, and can be difficult to enforce. Amazon offers a 99.95% availability SLA for its EC2 infrastructure, yet there have been several outages that violated it. The contract, however, is written in the provider’s favor: remedies are usually limited to the amount paid for the service feature that failed, insulating the provider from serious financial impact, while the damage to the customer’s reputation and financials can be immense.

For example, a provider that offers a 99.9% SLA would remain in compliance after a single 8-hour outage during an entire calendar year. Would eBay or Amazon tolerate this one week before Thanksgiving?
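
The arithmetic generalizes; the short sketch below computes the annual downtime budget implied by a few common availability levels (the tiers themselves are just illustrative).

```python
# Annual downtime budget implied by an availability SLA.
HOURS_PER_YEAR = 24 * 365  # 8,760 hours, ignoring leap years

for sla in (0.999, 0.9995, 0.9999, 0.99999):
    downtime_h = (1 - sla) * HOURS_PER_YEAR
    print(f"{sla:.3%} available -> {downtime_h:6.2f} h "
          f"({downtime_h * 60:7.1f} min) of downtime per year")
```

A 99.9% SLA permits roughly 8.76 hours of downtime per year, so even the single 8-hour outage above stays within the provider’s contractual budget.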

7. Regulatory and Compliance limitations

Regulatory and compliance issues are little understood by most players in the Cloud business; even industry experts can be misled by some of the provider certifications. The reality is that companies remain responsible to their customers, shareholders, and regulatory authorities for the privacy, security, and integrity of their customer data. Cloud provider certifications mean that those providers follow best practices; however, if customer data is lost or compromised, it is hard to see the Cloud provider stepping up to take financial responsibility. Even worse, smaller providers might simply close their doors, leaving all their customers in an extremely difficult situation.

8. Impact of network costs

Network costs are a major expense for organizations today, representing a significant chunk of most IT/Telecom budgets. They remain exceedingly high because network capacity requires large capital projects, and because the small number of providers operates as an effective oligopoly.

With migration to the Cloud, network traffic increases, and costs increase correspondingly on multiple counts:

  • All client access moves data to and from external providers
  • Data moves between multiple applications/providers
  • Data flows to/from in-house legacy applications
  • Internally generated data moves to external providers

Each of the above items represents data that used to move within the organization’s Local Area Network (LAN) and will now be routed over the Wide Area Network (WAN) of a telecom provider. LAN costs are relatively very low and provide a stable, high-performance network path, whereas WAN costs are quite high, even at much lower bandwidths. In addition, WANs introduce significant latencies that applications may or may not be able to tolerate, and they are more susceptible to failure than LANs. This represents a reduction in the reliability of the network and application infrastructure.
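
To see how quickly WAN charges accumulate, consider a rough estimate; both the traffic volume and the per-gigabyte rates below are hypothetical assumptions, not any provider’s price list.

```python
# Rough LAN-vs-WAN monthly cost estimate; all figures are hypothetical.
monthly_traffic_gb = 50_000     # data that used to stay on the LAN
wan_rate_per_gb = 0.09          # assumed $/GB WAN/egress charge
lan_rate_per_gb = 0.001         # assumed amortized in-house LAN cost per GB

wan_cost = monthly_traffic_gb * wan_rate_per_gb
lan_cost = monthly_traffic_gb * lan_rate_per_gb
print(f"WAN: ${wan_cost:,.0f}/month vs LAN: ${lan_cost:,.0f}/month "
      f"({wan_cost / lan_cost:.0f}x)")
```

Even with generously low WAN rates, traffic that was effectively free on the LAN becomes a recurring line item once it crosses a telecom provider’s network.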

Let us examine the impact of these factors on a few types of companies. In the chart below, dark colors indicate strong impact along a given dimension, and lighter colors indicate lesser impact.

[Chart: Cloud Computing - Secondary Factor Impact]

 

It is clear that each of these types of companies has a distinct profile based upon the combination of these eight dimensions. Existing companies have operations and IT systems that reflect these profiles, and expect these requirements to be met by Cloud services; where the requirements are not adequately met, that constitutes a barrier to migration. These profiles are likely to be similar for companies within a particular domain, irrespective of size.

In addition, there are a number of factors that inhibit rapid Cloud adoption by customers. In the concluding part of this paper, I will explore these, identify conditions that would speed up the process, and offer some recommendations for Cloud providers.


The Paradox of Cloud Computing Adoption – Part I

Cloud Computing has been a major factor for companies of all sizes and types in determining their IT strategy. The pay-as-you-go model minimizes large capital outlays for technology projects, enhances agility by letting enterprises scale up or down at will, and reduces costs by sharing infrastructure established at large scale; all of this makes the Cloud highly attractive to executives. These factors appear to make a solid case for Cloud adoption; however, the ground reality is quite different. For any enterprise, Cloud strategy is not one-size-fits-all: it depends upon the intrinsic complexities that define the organization, which in turn derive from the nature of its operations.

Looking from the outside at the various applications within an organization, it appears simple to decide which can be moved to the Cloud and which should be retained in-house. Yet this is a simplistic view that ignores the context of these applications and the organization within which they exist. While the nature of any given application seems an obvious driver of the decision to move it to the Cloud, a close look at the current landscape across organizations reveals that the organization’s size, maturity, and integration between applications are also significant drivers of adoption. This paper takes a closer look at the many complexities involved in matching any given enterprise to an appropriate Cloud strategy, and explores the drivers that explain the disparity in adoption of Cloud services.

For a new startup, using the Cloud for computing capacity is a simple proposition. There are no large capital investments, and no processes for acquiring and installing equipment; a Cloud service can provision capacity for a nominal cost. There is no need for expensive infrastructure that might sit underutilized, or for teams to manage it. If more capacity is needed, it can be obtained instantaneously. This increases the agility of a startup while keeping costs down, and as the startup grows, its infrastructure can grow with it.

As organizations grow, the scale and scope of their IT needs and applications also increase. Large scale implies economies of scale from deploying applications in-house, reducing the benefits of migrating to the Cloud. With large scope, multiple Cloud providers would be needed to satisfy the organization’s needs, which makes it preferable to retain these applications in-house; although most of these workloads are unlikely to provide economies of scale individually, the integration between them represents a large barrier to migration.

As with scale, the complexity of the applications used within organizations increases as they grow and mature. In the early stages, most companies just need email and spreadsheets. As they grow, they need tools such as Salesforce, Workday, etc. They might also use computing capacity from AWS or Google, and migrate their email to Office 365. They also begin to establish workflows that integrate these services to mirror their operational processes. While it makes sense to use XaaS services for most needs, other factors accompany growth: gradual acquisition of proprietary information, data movement (between applications, organizational functions, and multiple sites), and regulatory mandates. These factors promote the establishment and usage of complex systems.

Examples of complex systems are ERP systems, MES systems, and data warehouses. They impact many or all functions within the organization, make extensive use of automated workflows, and generate reports at daily or hourly intervals that are needed to make operational decisions. They communicate with many other applications, interact with multiple organizational functions, and impact core operations at various levels. They also communicate with systems and applications belonging to suppliers, partners, customers, and regulatory organizations. They are organization-aware and have been extensively customized to closely match the organization’s operations. Losing such an application has critical impact, bringing most operations to a halt.

It is this combination of organizational and industry awareness, criticality of business impact, and extensive integration with other systems that makes complex systems hard to replicate in a Cloud environment. Where these systems already exist in-house, it is quite difficult to upgrade, replace, or migrate them to the Cloud.

[Chart: application scale and complexity vs. recommended deployment model]

 

Based on the chart above, organizations with large monolithic applications, or with complex applications (even at low scales), should maintain these in-house. On the other hand, smaller companies with low-complexity applications are better served by Cloud-based applications.

What would the recommendation be for an existing enterprise? For a startup, this is a no-brainer: in most cases, leverage the Cloud to improve agility while minimizing expenses. A large enterprise, on the other hand, has the necessary scale to retain its infrastructure and systems in-house. It is easy to recommend that a new startup rent its infrastructure at AWS or build its applications on the Azure platform, and similarly easy to recommend that companies such as GE or Merck keep their systems in-house. The recommendation is not so clear for medium-sized companies: they lack the scale to justify in-house retention, and the decision is further complicated by integration between applications.

How do these recommendations change with growth? When does Cloud Strategy dictate moving Cloud systems in-house or vice-versa? Let us take a look at the chart below to understand the impact.

[Chart: migration of scale and scope through the organizational lifecycle]

Startups score low on both the scale and the scope of their applications, and hence are located at the top right of the chart above. As they grow, the scale and scope of their workloads increase, causing them to resemble mid-sized companies and ultimately large enterprises; this migration is also pictured above. At some point in the organizational lifecycle, it becomes economically imperative for applications to be migrated to in-house systems, as shown in the blue region. However, strategic considerations may pre-empt this move, within the brown region above.

What are some of the other considerations that dictate Cloud strategy and have a bearing upon customer adoption? We will take a look at these aspects in Part 2 of this paper.
