In a 1968 letter to Science magazine, British science fiction writer Arthur C. Clarke shared the third of his three laws, which stated that:
Any sufficiently advanced technology is indistinguishable from magic.
It is with this reverence that ‘the cloud’ has often been discussed in recent years, as if it were something so complex and incomprehensible that it could only be considered magic, and any attempt at understanding it a feat best not undertaken by mere mortals.
The truth is altogether different, and in reality the cloud is far from magic, though it is a game-changing advancement. It is, in hindsight - as is so often the case - a seemingly mundane progression, built upon existing technology to nudge the metaphorical boat that little bit further out.
Broadly speaking, the cloud is to datacentres what the internet is to your computer. Where computers are connected together to create the internet, so datacentres are connected to create the cloud. ‘The cloud’ is presented as a service, abstracting away the underlying datacentres. And in the same way that the internet spawned new services and industries, so too has the evolution from datacentres to the cloud resulted in new products and businesses.
First of all, let’s look at what the leap from datacentre to cloud computing has enabled.
Flexibility and scalability go hand in hand. In a traditional datacentre, not only did you require a site to house the servers, you also had to ensure redundant power and connectivity and secure access - and all that before you even install your first server. To get off the ground you then have to procure and install the physical hardware, set it up and install whatever you need on it for your project. Only after that stage are you ready to get started, and that alone can take weeks. Even in an existing datacentre, provisioning additional capacity is a project measured in weeks, not days or hours.
With the cloud, you can provision a server in a couple of minutes and get direct and private access to the new server, which can also be preconfigured to meet specific requirements.
You can even save a copy of a previous server to create a new clone, saving even more setup time. If you then realise halfway through your project that you need additional or complementary resources, you can spin those up with another few clicks and within minutes be ready to use them.
This actively encourages teams to try new ideas and significantly reduces the time required to develop systems. Mistakes made early on regarding resource requirements are not as costly, as they can be swiftly remedied from an infrastructure point of view.
Because the cloud allows you to add or remove resources at will, you can also design solutions to be more cost-effective so that you are only provisioning what you need, rather than the minimum required to keep the proverbial lights on 24/7. This is particularly impactful when hardware requirements are determined by peak events, such as a disproportionately high spike of traffic on Monday mornings that requires resources which are overkill for the other 165 hours of the week.
Traditional datacentres do not provide the flexibility to scale resources on demand like the cloud, meaning that you have to provision for your peak load requirements upfront. This could result in over-provisioning during normal or quiet hours, leading to wasted capacity and thus cost. The fluid nature of the cloud allows for a more responsive resource management approach, which dynamically adds or removes resources to ensure performance requirements are consistently met as traffic volumes ebb and flow.
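To make that dynamic resource management concrete, here is a minimal sketch of the kind of decision a cloud autoscaler makes as traffic ebbs and flows. The function name, capacity figure and thresholds are all illustrative, not any provider's API:

```python
def desired_instances(requests_per_min: int,
                      capacity_per_instance: int = 1000,
                      min_instances: int = 2,
                      max_instances: int = 20) -> int:
    """Decide how many instances to run for the current traffic level."""
    # Round up, so there is always headroom for the current load.
    needed = -(-requests_per_min // capacity_per_instance)
    # Clamp between a safe floor and a budget ceiling.
    return max(min_instances, min(needed, max_instances))

# Quiet overnight traffic barely needs the floor of 2 instances...
print(desired_instances(500))     # 2
# ...while the Monday-morning spike scales out to meet demand.
print(desired_instances(15_000))  # 15
```

In a traditional datacentre you would have to own enough hardware to survive the Monday-morning spike outright; in the cloud, most of those instances exist only while they are needed.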
The other side of scalability is the opportunity to provision a mammoth instance for a short period of time to tackle an otherwise lengthy process, such as processing vast amounts of data. At the time of writing, AWS offers the u-24tb1.112xlarge which boasts 448 vCPUs and 24TiB of memory - yes, that really is terabytes of memory! This type of scalability is simply not viable in the vast majority of datacentres, and requires a crippling upfront investment for all but the largest enterprises.
What? But the cloud is really confusing and complicated!
Sure, it can be. There is a dizzying array of services available, and enough acronyms to fill a periodic table. ML, AI, APIM, EC2, IAM, SQS, VM, ARM, CF… you get the idea.
That said, there are services like Amazon Lightsail or Azure App Service which will host your site with minimal configuration required. These handle the underlying hardware and take most of the heavy lifting out of your hands.
Even if you wanted more control, you can still take advantage of the Shared Responsibility Model, where AWS or Azure own a sliding scale of the solution, depending on whether you require Infrastructure as a Service (IaaS), Platform as a Service (PaaS) or Software as a Service (SaaS). These different approaches offer varying levels of control and responsibility to the customer.
Let’s quickly dive into what IaaS, PaaS and SaaS really mean.
Infrastructure as a Service (IaaS) is where you get access to virtualised resources, such as virtual machines (VMs), blob storage and networking infrastructure. While the cloud provider manages the infrastructure, the customer manages the operating system (OS), any applications installed upon it, the networking configuration and the data being stored.
A likely scenario is where you want to host a website but require specific plugins and versions of SQL Server, so you get a Windows VM, install all the additional features you require as you would on any Windows machine, and then deploy your code to it and have full access to the OS to debug and monitor the system. You’re responsible for maintaining all of the software and the OS patching.
Platform as a Service (PaaS) is a step away from managing the resource as the OS is now firmly taken out of the equation. In our example above, the implementation would instead utilise a service like Azure App Service to host the website with some configurations to ensure the plugins are available, and connect to an instance of Azure SQL. Now the website is hosted entirely “serverless” - which really only means that the customer is not responsible for any part of the underlying hosting. The webserver and SQL configurations are managed by Azure and exposed as an endpoint rather than an OS. Customers are only responsible for configuring the service within the exposed parameters, and ensuring their code or data schemas are functioning correctly.
Software as a Service (SaaS) is fundamentally a web-based solution, normally offered on demand or on a subscription basis. Instead of purchasing Office to download and install on your machine, you can get Office 365 as an online offering. Netflix is another excellent example, providing streaming content via a monthly subscription instead of users needing to download specific movies, or rent them from Blockbuster. These solutions are built in the cloud, and customers subscribe to use the software, but have no part in developing or managing the solution or its underlying infrastructure.
As evidenced by the Shared Responsibility Model, even IaaS - the model closest to an on-premise solution - abstracts everything up to the operating system. This enables teams to quickly spin up servers and retain full control over the OS, installed applications, networking and access, with none of the underlying concerns about the physical hardware it all runs on. Power, cooling, connectivity, space, upfront costs… all swept aside by the cloud IaaS offering.
The further you move up the sliding scale, the less there is to worry about - ultimately simplifying the overall process of standing up infrastructure down to a few clicks rather than hands-on server provisioning. Ramp-up time decreases, and costs relate more to usage than to keeping the lights on.
As discussed, with cloud computing users can pay for only the computing resources they use, which can result in significant cost savings. A classic example is a business that needs to run periodic high-intensity tasks, such as a pharmaceutical company that needs to run models on potential new medicines. With the cloud, this can be automated to take advantage of several forms of cost-saving.
On a basic level, large, powerful instances can be provisioned to run a task and then shut down when it’s complete. This optimises the use of those instances to a very high efficiency - an instance is only running when it’s actively performing the task, so you only pay for the compute time you use. Assuming it’s a daily task that takes 4 hours to run, you save 20 hours of compute time every day - 140 hours a week, or the equivalent of 303.33 days of compute time per year.
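The arithmetic is easy to verify; a few lines of Python, using the 4-hour daily task above and a 52-week year, reproduce the figures:

```python
HOURS_PER_DAY = 24
TASK_HOURS = 4  # the daily task runs for 4 hours

# Hours the instance would otherwise sit idle (and billed) each day.
saved_per_day = HOURS_PER_DAY - TASK_HOURS
saved_per_week = saved_per_day * 7
# Expressed as whole days of compute over a 52-week year.
saved_days_per_year = saved_per_day * 7 * 52 / HOURS_PER_DAY

print(saved_per_day)                  # 20
print(saved_per_week)                 # 140
print(round(saved_days_per_year, 2))  # 303.33
```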
Notching that up yet another step, pricing between different geographical regions varies, as some regions of the cloud are more mature than others. This variable pricing can be exploited by identifying the region with the lowest current pricing for the instance type that you require, and now your daily task is not only saving 20 hours per day, but is also 5% cheaper to run in the 4 hours you are billed for.
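As a sketch of that region comparison - with entirely made-up prices, since real prices vary by instance type and change over time - picking the cheapest region is a one-liner:

```python
# Hypothetical hourly on-demand prices (USD) for the same instance type.
region_prices = {
    "us-east-1": 0.0950,
    "eu-west-1": 0.1000,
    "eu-west-2": 0.1060,
    "ap-southeast-2": 0.1190,
}

# The region with the lowest current price for this instance type.
cheapest = min(region_prices, key=region_prices.get)

def saving_vs(region: str) -> float:
    """Fractional saving of the cheapest region versus another region."""
    return 1 - region_prices[cheapest] / region_prices[region]

print(cheapest)                         # us-east-1
print(f"{saving_vs('eu-west-1'):.0%}")  # 5%
```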
If that’s not enough, then we can look at a feature available in AWS and Azure called Spot Instances/Virtual Machines. These are deeply discounted instances that reflect excess capacity in the AWS or Azure clouds, which are priced to incentivise use and reduce idle time.
An automated deployment process can be triggered when the spot price for your required instance type reaches a desired threshold, deploying and configuring the resources required to run the task for optimal cost-saving. Even sticking to a set schedule for time-sensitive tasks would see a significant reduction in cost over the previous cost-saving initiatives, let alone the standard pay-as-you-go model.
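The trigger condition itself is simple to express. A hedged sketch - the function and parameter names are illustrative, not any provider's API:

```python
def should_launch(spot_price: float, threshold: float,
                  deadline_reached: bool = False) -> bool:
    """Launch the batch job when spot capacity is cheap enough,
    or unconditionally once the job can no longer wait."""
    return spot_price <= threshold or deadline_reached

# Poll the current spot price and fire the deployment when it dips.
print(should_launch(0.031, threshold=0.05))  # True  - cheap: go now
print(should_launch(0.082, threshold=0.05))  # False - too dear: keep waiting
print(should_launch(0.082, threshold=0.05, deadline_reached=True))  # True
```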
These types of cost-savings are simply not available outside of the cloud.
Finally, monthly Pay-As-You-Go (PAYG) billing reduces the barrier for smaller businesses and individuals to get started, prototyping solutions and then progressively building up capacity in line with needs and funding.
Cloud service providers typically offer advanced security features and protocols to protect users' data from unauthorised access and other threats. Due to the sliding scale of responsibility between the customer and the cloud provider, the cloud providers are subject to stringent security regulations in order not just to protect their customers, but simply to meet the requirements many large businesses have relating to security.
If, for example, a customer requires SOC 2 compliance and the cloud provider isn’t compliant, then the provider misses out on that business. This makes it good business sense for cloud providers to adhere to as many regulations as possible. Given that all of the tooling employed to achieve those certifications is also at play protecting customers from cyber attacks and data loss, the cloud becomes a gateway to secure, compliant systems which adhere to strict regulations. Businesses can thereby meet these regulations without needing to invest time and money into securing their own infrastructure to those standards.
To support customers going through bids, audits or certifications of their own, both AWS and Azure provide access to their compliance certifications and audit reports for a range of industry standards such as HIPAA, SOC 2 and ISO 27001. These are available via AWS Artifact and the Microsoft Trust Center, demonstrating the providers’ commitment to a robust and compliant platform for their end users.
Already, companies that adopt the cloud are a step ahead and can now focus efforts on securing the solutions that they deploy into the cloud. Once more, a range of default security features and tools are provided as managed services by the cloud providers to simplify the process.
Both AWS and Azure provide DDoS protection through the AWS Shield and Azure DDoS Protection services. Furthermore, the process of creating secure networks is simplified by the default stance taken by cloud providers to block all access. Unless specific ports are opened, nothing is able to reach instances configured in the cloud. Tools are also available to evaluate the security of solutions: Azure provides Azure Policy, which continually audits resources against a set of pre-defined or custom criteria and reports back on compliance. AWS Config performs a similar function, showcasing how seriously the major cloud providers take security.
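That default-deny stance can be pictured as a simple membership test - an illustrative model, not any provider's actual rule engine:

```python
# An allow list of (protocol, port) pairs; empty by default.
AllowRules = set[tuple[str, int]]

def is_allowed(protocol: str, port: int, rules: AllowRules) -> bool:
    """Default deny: traffic only passes if a rule explicitly permits it."""
    return (protocol, port) in rules

rules: AllowRules = {("tcp", 443), ("tcp", 22)}  # only HTTPS and SSH opened

print(is_allowed("tcp", 443, rules))   # True  - explicitly opened
print(is_allowed("tcp", 3389, rules))  # False - RDP was never opened
```

Everything not explicitly on the list - including that forgotten RDP port - stays closed, which is the opposite of the permissive defaults many on-premise setups start from.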
AWS and Azure also provide tools for threat detection and response, such as Amazon GuardDuty and Azure Security Center. These tools can help detect and automate responses to security threats in real time, reducing the risk of data breaches and other security incidents.
Security best practices require constant evaluation and vigilance, and these tools provide ready-made solutions for monitoring your security posture at all times.
Managed services is an umbrella term for any function - in our case, in the cloud - that is built on top of the hosting offering and further extends its capabilities, such as the baked-in security features described above.
In some cases these managed services simplify the deployment and management of resources. Azure SQL Managed Instance, for example, is a fully managed, scalable cloud database service presented as PaaS, while AWS Backup can automate the management of backups for a range of resource types - both scheduling the backups and implementing lifecycles to manage storage costs and grandfathering strategies.
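To illustrate what a grandfathering (grandfather-father-son) lifecycle actually decides, here is a minimal sketch. The retention windows and rules are illustrative - this is not AWS Backup's actual policy format:

```python
from datetime import date

def keep_backup(backup_date: date, today: date,
                daily: int = 7, weekly: int = 4, monthly: int = 12) -> bool:
    """Grandfather-father-son retention: keep every backup from the last
    `daily` days, Sunday backups for `weekly` weeks, and first-of-month
    backups for roughly `monthly` months; expire everything else."""
    age_days = (today - backup_date).days
    if age_days < daily:
        return True                                   # son: recent dailies
    if backup_date.weekday() == 6 and age_days < weekly * 7:
        return True                                   # father: Sunday weeklies
    if backup_date.day == 1 and age_days < monthly * 31:
        return True                                   # grandfather: monthlies
    return False

today = date(2023, 6, 15)
print(keep_backup(date(2023, 6, 10), today))  # True  - only 5 days old
print(keep_backup(date(2023, 4, 20), today))  # False - expired
```

A managed service applies rules like these across every resource automatically, so storage costs taper off without anyone manually pruning old snapshots.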
Those are useful means to quickly implement routine tasks, and save both setup time and ongoing concerns such as maintenance. The cloud, however, takes things a few steps further.
Take Amazon Comprehend, a fully managed natural-language processing (NLP) service that uses machine learning (ML) to extract insights from text. Comprehend is built on top of the AWS cloud, leveraging services such as EC2 and S3 to deliver an entirely new capability, which is in turn provided as a new cloud offering. Comprehend can then work in conjunction with other AWS services to further extend its capabilities.
These types of managed services are not available from an on-premise solution, and would require significant expertise and investment in software to implement, let alone manage and scale.
Overall, cloud computing is more accessible because it reduces the barriers to entry for organisations and individuals alike. The cumulative impact of these benefits results in a platform that is more cost-effective, scalable and easier to use, which makes it such an attractive option that you cannot afford to overlook it.
Between the three pillars of IaaS, PaaS and SaaS, the cloud offers numerous different means of achieving an end goal. Customers can pick & mix different models for different requirements, thereby creating their own unique blend - something which is not readily achievable without the cloud. This empowers customers of all kinds to level up their businesses, from the ambitious start-up to established global enterprises, and it promotes innovation as the cost of experimentation falls.
By now you’ve probably guessed that the development of cloud computing is pretty universally considered a good thing. It is. We have explored numerous advantages to the cloud, and touched upon how it is not only abstracting but also extending those common hosted services - webservers, databases, cache, etc.
However, much as the Apollo Program is estimated to have generated around $7-8 in economic benefit for every dollar spent, the emergence of the cloud has spawned new business opportunities which were previously either not readily conceivable, or hugely complex and expensive to implement with traditional datacentres.
A great example of this is Dropbox, which is built on top of AWS S3, exploiting low-cost cloud storage at scale. Dropbox was founded as a start-up in 2007 and is now one of the most recognised solutions in the file-hosting space. Without the low-cost entry model coupled with the immense scalability of the cloud, it would likely never have got off the ground.
Before you go running off to start a multi-million pound business however, there are pitfalls to watch out for.
Even though the cloud is simpler, faster, cheaper and more accessible than setting up your own datacentre, it still has that Layer 8 problem (the user): you have to know what you’re doing (mostly). If all you want is to host a personal site with Amazon Lightsail or Azure App Service, it’s probably OK to use the quickstart route and trust the provider to do the rest.
For anything business-critical, whether it’s a cloud migration or a greenfield project, some cloud expertise is going to be vital. First off, how do you know which cloud you want to use? Sometimes this is dictated by existing partnerships or suppliers, but often it’s up for debate, and you need to understand not only your own workloads but also which cloud is best suited to them.
Looking for a solid PaaS offering, or are you a heavy SQL user? Azure it is. But for IaaS and a dizzying array of tools, AWS leads the way. These may seem like trivial considerations, but where pricing differences between hosting a fleet of SQL servers in AWS or Azure are concerned, they’re not. If you’re tied to a specific region for regulatory reasons, it’s also worth spending some time investigating the pricing structures in that region for the various cloud providers.
Now that you’ve settled on your favourite cloud provider, the next problem is understanding it. Generally speaking, there are two approaches to a cloud migration project: lift & shift, and going ‘cloud native’.
Lift & shift does what it says on the tin. If you’ve got 5 webservers and 2 database servers in your datacentre, you can spin up 7 instances in the cloud and configure some as webservers and install SQL on the others, then continue to manage them as if they are bare-metal servers. No clever managed SQL service, no auto-scaling on the webservers or automated deployment solutions. You’ve effectively just replicated your datacentre in the cloud, without taking advantage of any of the real benefits of the cloud.
Option 2, the ‘cloud native’ approach involves some changes to your architecture so that you can re-imagine it in the cloud space. In all likelihood, this won’t involve major changes; it’ll be more a case of identifying how you can drop that fileserver and rely on super-cheap blob storage instead, or make your servers stateless so you can scale them up and down with wild abandon, or run intensive tasks at strategic times / price points to optimise cost. This approach can be a sliding scale, as your specific requirements dictate how far you want to take the cloud optimisation route. Whole hog, or just hit the high (cost-saving) notes.
Moving to the cloud, or optimising your use of the cloud, can bring enormous benefits to your business. The best advice on how to do so effectively is, ironically, to get some good advice. An experienced cloud consultant can advise you on which cloud, services and architecture are likely to return the best gains, based on your parameters (cost, deadline, availability of resources).
Before leveraging the manifold benefits of the cloud, leverage the expertise of someone who deeply understands it.