Traditional Cloud vs. Next Generation Cloud: A Forbes Spotlight on NZO Cloud’s Insights
Fueled by mounting storage requirements, ease of use, automatic software updates and users’ thirst for limitless access and maximum flexibility, cloud computing has grown exponentially over the past several years. Since 2009, spending on cloud computing has been growing 4.5 times faster than overall IT spending, and it is expected to grow at more than six times the rate of IT spending from 2015 through 2020. That growth reflects the many situations in which using the cloud makes sense.
The most obvious is a small startup operating on a lean budget where cash flow is tight, because cloud computing provides access to resources without large capital expenditures. The cloud can also be an option for enterprises of all sizes when it comes to disaster recovery. And the cloud excels at providing immediate capacity, allowing companies to spin up separate instances to absorb moderate bursts in demand.
However, the cloud is not always superior to building in-house IT infrastructure. Cloud providers’ slick marketing materials gloss over the technology’s numerous drawbacks, such as skyrocketing fees, poor performance and cybersecurity issues. The decision between using a public cloud and building your own IT infrastructure is not so different from deciding between renting a workspace for your business and buying your own building; both decisions boil down to having total control (and responsibility) over your own environment versus depending on a landlord to provide an adequate workspace and fix problems quickly.
Beyond The Hype: Not All Clouds Have A Silver Lining
Some enterprises choose cloud services for their scalability; the typical pricing model for cloud computing is “pay for what you use,” with organizations having the option to buy more (or less) computing power as their needs change. This may not be as cost-effective as it sounds. Cloud computing can easily result in unexpected costs, especially when dealing with performance and security setbacks. Trading algorithm developer Deep Value found using Amazon’s EC2 service to be 380% more expensive than running its own internal cluster, noting that “For one-off peaks, EC2 makes sense, but given the ongoing nature of our simulated analysis, moving to our own data center is a very clear winner.”
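To make that trade-off concrete, here is a rough back-of-the-envelope comparison. Every figure in it (instance rate, instance count, cluster cost, amortization window) is an illustrative assumption rather than real AWS or vendor pricing; the point is the shape of the math, not the numbers.

```python
# Rough break-even sketch: steady pay-as-you-go cloud usage vs. an owned cluster.
# All figures below are illustrative placeholders, not real AWS or vendor pricing.

HOURS_PER_MONTH = 730            # average hours in a month

# Hypothetical pay-as-you-go assumptions
instance_hourly_rate = 1.50      # $/hour for one on-demand instance
instances_needed = 40            # instances kept running for ongoing analysis

# Hypothetical owned-cluster assumptions
cluster_capex = 500_000          # up-front hardware cost, amortized over 3 years
amortization_months = 36
monthly_opex = 8_000             # power, space and staff attributable to the cluster

cloud_monthly = instance_hourly_rate * instances_needed * HOURS_PER_MONTH
owned_monthly = cluster_capex / amortization_months + monthly_opex

print(f"Cloud (steady use): ${cloud_monthly:,.0f}/month")
print(f"Owned cluster:      ${owned_monthly:,.0f}/month")

# For one-off peaks the cloud wins easily; the crossover appears when usage is
# sustained. Ask how many hours per month of full-cluster cloud use match the owned cost.
breakeven_hours = owned_monthly / (instance_hourly_rate * instances_needed)
print(f"Break-even at roughly {breakeven_hours:,.0f} hours of full-cluster cloud use per month")
```

Occasional bursts favor renting; sustained, always-on workloads push the break-even point toward owned hardware, which is exactly the pattern Deep Value describes.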
Cloud security is another serious issue, especially because it is up to the enterprise — not the cloud provider — to properly configure certain cybersecurity settings. Improperly configured cloud security settings were at fault for the recent massive breach of voter data mined by a data analytics company that had been hired by the Republican National Committee.
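Much of that configuration burden comes down to a handful of explicit settings that the customer, not the provider, must turn on. As a minimal sketch, assuming a hypothetical bucket name and boto3 credentials already configured in the environment, blocking all public access to an S3 bucket looks roughly like this:

```python
# Minimal sketch: enforce "block all public access" on an S3 bucket with boto3.
# The bucket name is a hypothetical placeholder; credentials are assumed to be
# available in the environment (for example via AWS_PROFILE or an instance role).
import boto3

s3 = boto3.client("s3")

s3.put_public_access_block(
    Bucket="example-voter-data-bucket",   # hypothetical bucket name
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,        # reject new public ACLs
        "IgnorePublicAcls": True,       # ignore any existing public ACLs
        "BlockPublicPolicy": True,      # reject public bucket policies
        "RestrictPublicBuckets": True,  # restrict access if a public policy exists
    },
)
```

The lesson is that a single overlooked setting of this kind is all it takes to expose a dataset to the public internet.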
Additionally, enterprises that use a public or shared cloud can experience a cyberattack or performance issue through no fault of their own. This past February, AWS customers experienced a widespread outage caused by a configuration error on Amazon’s side. For these reasons, enterprises running mission-critical applications with high-availability needs and compliance or regulatory requirements may want to think twice about using a public or shared cloud.
Public and shared clouds are also plagued by performance and reliability problems. Using a public cloud means potentially sharing a network with so-called “noisy neighbors” or users who hog resources. Further, since cloud service providers have servers in several dispersed locations, users may experience latency issues and are often forced to pay exorbitant fees to avoid them. In contrast with bare-metal hardware users, public cloud customers have limitations on resource availability and may struggle with application performance and data transfer rates.
When Is In-House Infrastructure A Better Option?
Several factors determine when it is better to deploy in-house infrastructure than to use the cloud. At the top of the list is monthly cost: for some companies, a $30,000 monthly AWS bill exceeds the cost of an in-house solution. For other companies, the decision is tied to performance, reliability and security issues. Organizations that own their infrastructure have total control over their computing environments; if something goes wrong, or if an organization wants to implement new features, it can call on its own staff to make the fixes or changes instead of depending on a cloud provider to do it. An in-house IT infrastructure is probably best for high-scale IT environments that process large amounts of data, especially if that data is constantly growing, and for companies that want maximum flexibility to make changes.
There are also competitive issues to consider. Giving a potential competitor insight into your business model, applications and customer base is a risky endeavor. A recent Wired article outlined Dropbox’s “exodus” from Amazon. While cost was one of the reasons Dropbox cited for building its own infrastructure, so were concerns over Amazon’s foray into file-sharing services, Dropbox’s own domain. Enterprises would do well to be cautious about the prospect of Amazon, which already competes in numerous industries, eventually offering services that compete with those of its own cloud storage customers.
It Doesn’t Have To Be All Or Nothing
The decision of where to store data and run applications doesn’t have to be a strict matter of cloud vs. in-house. In some cases, the best solution is a hybrid that combines both. For some enterprises with limited budgets, a public cloud is the most realistic choice. Other organizations may want to run in-house servers for standard traffic and burst to the cloud for additional capacity.
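As a minimal sketch of that hybrid pattern, assuming hypothetical capacity figures and endpoint names, the burst decision can be as simple as a utilization threshold:

```python
# Minimal sketch of a hybrid "burst to cloud" routing decision.
# Capacity figures and endpoint names are hypothetical placeholders.

ON_PREM_CAPACITY_RPS = 2_000   # requests/second the in-house servers handle comfortably
BURST_THRESHOLD = 0.85         # start offloading above 85% of in-house capacity

ON_PREM_ENDPOINT = "https://app.internal.example.com"
CLOUD_ENDPOINT = "https://app.cloud-provider.example.com"

def choose_backend(current_rps: float) -> str:
    """Send standard traffic to in-house servers; overflow goes to the cloud."""
    utilization = current_rps / ON_PREM_CAPACITY_RPS
    if utilization <= BURST_THRESHOLD:
        return ON_PREM_ENDPOINT
    return CLOUD_ENDPOINT

# Example: a traffic spike beyond the threshold is routed to rented capacity.
for rps in (800, 1_500, 1_900, 2_400):
    print(f"{rps:>5} req/s -> {choose_backend(rps)}")
```

The baseline workload stays on owned hardware, and the cloud is paid for only when demand actually exceeds in-house capacity.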
However, organizations need to seriously consider their individual data needs beyond the “cloud first” hype. This means prioritizing requirements for processing, performance, storage, security and data transfers, and, of course, determining how much they are willing and able to spend. There is inherent value in building and owning in-house infrastructure instead of being at the mercy of a virtual landlord. Enterprises must shift their mindsets and view their IT departments as assets and business drivers rather than cost centers. An investment in appropriate IT infrastructure ultimately drives long-term profits that support the growth of not only the IT department but also the larger business of which it is a part.