Cloud Cloud Cloud
In the last few weeks, I have been asked by colleagues, customers, and a few industry analysts to comment on the continued trend towards cloud computing. In the last few months, we have seen AWS grow 58% in the second quarter of 2016 as compared to the same period last year. Azure grew 102% in the same period. On-premise stalwart Oracle is expecting 78-82% growth in its SaaS/PaaS revenues, which will help offset declines in new on-premise license revenue. Adding to the frenzy are the pronouncements of Oracle’s Executive Chairman and CTO at Oracle OpenWorld 2016 and Google’s acquisition of Apigee. What are the drivers behind this trend? Will it continue? Isn’t this going to cost customers more?
Well, let me start with the last one first. No, it is actually less costly, especially when one considers the benefits of cloud computing and SaaS: built-in upgrades, elastic compute with pay-per-use pricing, easier backup, and easier DR (*ducks and hides* — easier because DR is usually thought about in the design cycle). However, in my opinion, the biggest benefit is the proximity of data. Well, what do you expect a CTO from a Big Data company to say? Let me explain…
For this, I will borrow liberally from one of my colleagues, Dave McCrory. Dave is the CTO of Basho; we serve as Industry Consultants together, and at least twice a year we get together at our group’s conference. It's a very cool group, with lots of great ideas and insights shared. Dave’s theory of Data Gravity helped me recognize the value of a company like Rubikloud – thanks, Dave! A simplified summary of the Data Gravity theory is that data which is near other data is more useful, and the tendency of data to cling together comes from the usefulness of the resulting knowledge.
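Data Gravity is often illustrated with a Newtonian analogy: larger bodies of data exert a stronger pull on applications and on other data, and distance (latency) weakens that pull. The sketch below is my own loose rendering of that idea, not Dave's published formulation; the function name, units, and the inverse-square weighting are illustrative assumptions only.

```python
def attraction(mass_a_tb: float, mass_b_tb: float, latency_ms: float) -> float:
    """Loose Newtonian-style analogy for Data Gravity:
    bigger datasets pull harder; higher latency between them weakens the pull."""
    return (mass_a_tb * mass_b_tb) / (latency_ms ** 2)

# Two 50 TB datasets in the same cloud region (~1 ms apart)
# versus the same pair separated by a WAN hop (~80 ms):
same_region = attraction(50, 50, 1)    # 2500.0
across_wan = attraction(50, 50, 80)    # ~0.39
```

The absolute numbers are meaningless on their own; the point is the ratio: co-locating the two datasets makes the combination orders of magnitude more "attractive" to build on, which is exactly the clustering effect the theory describes.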
So what? Why does this make the cloud a compelling platform? Well, there are a number of factors. The cloud’s pervasiveness allows for the collection and integration of many additional data sources. Customers benefit in both latency (the data is already there, or close by) and throughput (the public clouds have massive scale and reach). Companies like ours are utilizing machine learning to crunch through these vast pools of data to find insights and make predictions. The public cloud platform makes absolute sense, which is why Rubikloud takes a cloud-first approach.
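To make the proximity argument concrete, here is a rough back-of-the-envelope sketch. The data sizes and link speeds are illustrative assumptions, not figures from any particular provider, and the calculation ignores protocol overhead and contention.

```python
def transfer_hours(data_tb: float, link_gbps: float) -> float:
    """Rough time to move data_tb terabytes over a link_gbps link,
    ignoring protocol overhead and contention."""
    bits = data_tb * 1e12 * 8            # terabytes -> bits
    seconds = bits / (link_gbps * 1e9)   # bits / (bits per second)
    return seconds / 3600

# Shipping 10 TB out to a distant analytics cluster over a 1 Gbps WAN link:
print(f"{transfer_hours(10, 1):.1f} hours")    # roughly 22 hours

# Reading the same 10 TB in place, inside a cloud region with
# 100 Gbps of aggregate bandwidth to storage:
print(f"{transfer_hours(10, 100):.2f} hours")  # well under an hour
```

Even with generous assumptions, hauling data across the network is measured in hours or days, while processing it where it already lives is not; that gap is what makes the data's location, rather than the compute, the deciding factor.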
Cloud vendors are aware of this advantage and are taking steps to improve their platforms. Amazon, the largest cloud provider, has increased Snowball capacity and made network improvements to S3 to enable faster transfers. Google has been flexing its BigQuery and Cloud Dataflow muscle for the last few months and announcing new data centers as it strives to match AWS’s and Azure’s reach. Azure is expanding Data Management Gateways for connecting on-premise systems and expanding the availability of its SQL Data Warehouse. These are all exciting additions (amongst many more) that make it compelling to move data onto these platforms.
As the use of technology increases and becomes more widespread, the volume of data will grow, and its sources will be distributed. The cost of building and operating data centers to house this data will make it more compelling to utilize the cloud. Cloud providers have the network reach and financial resources to build large-scale, efficient data centers to handle this volume. In order to attract the right customers and software vendors, the cloud providers will continue to extend their services and provide a lower-cost platform for their customers.
In conclusion, to my friends, customers, and the analysts, I speculate that the adoption of cloud services will increase. To me, it’s the most viable platform. To the customers, I say that this significantly changes the manner in which software should be consumed. But how? More on that in my next blog post.