Evolution of the Cloud

Taylor Karl

Back when all infrastructure lived on premises, scaling storage was a truly huge and complex problem. A database that began small starts collecting data from hundreds or thousands of collection points. It doesn’t take long before it becomes a big database that exceeds the capacity of the drive it’s stored on. A new storage device needs to be added, and fast. But that’s not a fast process. First, everyone must argue…err… decide which storage device to obtain. Then budget needs to be allocated. Then a purchase order is cut. The drive is shipped and eventually received. Then it must be installed, configured, tested, and approved for production use. Finally, the database has to be spanned across the existing and new drives.

By this time the data collection has a big gaping hole in it.

Then Came the Cloud

One extraordinary solution to this problem arrived with the cloud. When the storage allocated at the start of a database project got close to being exhausted, the team simply requested more, and it was allocated from the shared resource pool. If more was needed later, more was allocated. And so on, and so on. The cloud quickly proved to be a great place to find scalable Big Data solutions.
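
Here is roughly what “simply requested more” can look like in practice: a minimal sketch, assuming an AWS EBS volume and the boto3 SDK. The volume ID and target size below are hypothetical.

    # A sketch of elastic storage in the cloud: one API call grows the volume,
    # with the extra capacity drawn from the provider's shared resource pool.
    # Assumes AWS EBS via the boto3 SDK; the volume ID and size are made up.
    import boto3

    ec2 = boto3.client("ec2")

    # No budget meeting, no purchase order, no shipping: just resize the volume.
    ec2.modify_volume(VolumeId="vol-0123456789abcdef0", Size=1024)  # size in GiB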

Cloud computing also provided choice. You might use a SQL back-end database, or choose a NoSQL option such as MongoDB, or one of many others. You might choose Hadoop to process all the data, or perhaps Power BI or other analytics tools. You could readily change tools, too, if you decided you’d made a bad choice. Very forgiving, this cloud.
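
To make that choice concrete, here is a minimal sketch of the same record landing in either a relational store or a NoSQL document store. The connection string, database, and table and collection names are hypothetical, and the pymongo package is assumed to be installed.

    # The same sensor reading written to a SQL table or to a MongoDB collection.
    import sqlite3
    from pymongo import MongoClient  # assumes the pymongo package is available

    reading = {"sensor_id": "s-001", "value": 21.7}

    # Relational option (SQLite used here purely for illustration).
    sql = sqlite3.connect("readings.db")
    sql.execute("CREATE TABLE IF NOT EXISTS readings (sensor_id TEXT, value REAL)")
    sql.execute("INSERT INTO readings VALUES (?, ?)",
                (reading["sensor_id"], reading["value"]))
    sql.commit()

    # NoSQL option: the same record stored as a document.
    mongo = MongoClient("mongodb://localhost:27017")  # hypothetical endpoint
    mongo.telemetry.readings.insert_one(reading)

Swapping one back-end for the other touches a few lines of code, not the whole project, which is exactly the forgiveness described above.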


But What About Resilience and Reliability?

Here’s where the big clouds started to separate from the small. The big cloud players, such as AWS, Azure, and Google, built redundant data centers that replicated your data across them. If one data center was lost, your workload would fail over to another, often without you even knowing it.

As always, however, the open source community stepped in to democratize cloud resilience with software-defined storage, which segments your data and allocates it in triplicate across multiple cloud data centers. If large file copies are required, the software determines which segments are located nearest to shave milliseconds off each transaction, assembling your copies far faster than before. If one data center is lost, often even two, all the data remains safe at another redundant site and stays available to the entire storage cluster. It’s storage purpose-built for the cloud, and it goes perfectly with containers, microservices, and other cloud-centric constructs.
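
For a sense of how the triplicate placement and “nearest replica” read path fit together, here is a purely conceptual sketch in Python. It is not any specific product’s API; the data center names and latencies are invented for illustration.

    # Conceptual only: each segment is placed on three data centers, and reads
    # come from whichever surviving replica reports the lowest latency.
    import hashlib

    DATA_CENTERS = ["dc-east", "dc-west", "dc-central", "dc-north"]  # hypothetical sites
    LATENCY_MS = {"dc-east": 4, "dc-west": 18, "dc-central": 9, "dc-north": 31}

    def place_segment(segment_id, copies=3):
        """Deterministically pick `copies` distinct data centers for one segment."""
        start = int(hashlib.sha256(segment_id.encode()).hexdigest(), 16) % len(DATA_CENTERS)
        return [DATA_CENTERS[(start + i) % len(DATA_CENTERS)] for i in range(copies)]

    def read_segment(segment_id, failed=frozenset()):
        """Read from the nearest surviving replica; losing one or two sites is fine."""
        replicas = [dc for dc in place_segment(segment_id) if dc not in failed]
        return min(replicas, key=LATENCY_MS.get)

    # Even with two data centers down, every segment still has a live replica.
    print(read_segment("segment-0042", failed={"dc-east", "dc-central"}))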

For Those Who Thought Cloud Displaced You

Many IT professionals have been concerned that the onset of cloud computing has displaced them. Made their jobs obsolete. That may be true in some cases, but your skills are not obsolete. They are easily repurposed when you learn to apply them to the growing Big Data cloud community. Big Data requires high network capacity with non-stop performance; high-capacity, fast-I/O storage; massive processor capacity, perhaps with enough memory to hold entire big databases for faster processing; and experts to code the solutions all this data enables.

Developments like software-defined storage and software-defined networking, along with the far more resilient container and microservice approach to coding, create a broader, more interesting selection of occupations in which to apply your skills. All you need is some guidance and retraining to learn the differences in this new environment. The rest, as they say, is all just syntax.

To get more information on how to make your existing skills more relevant to the world of scalable Big Data cloud solutions, contact your United Training Account Manager today!
