This comprehensive guide to AWS covers Amazon's expansive portfolio of cloud services, common use cases, technical limitations and what to know when adopting the technology.
The rise of cloud computing provides businesses the ability to quickly provision computing resources without the costly and laborious task of building data centers, and without the costs of running servers with underutilized capacity due to variable workloads.
Amazon Web Services was the first large vendor of easily affordable cloud infrastructure and services, and it remains the single largest player in the cloud computing market. For startups, this low barrier to entry enabled the rise of popular photo-sharing services such as Imgur, while established companies like Netflix have moved their workloads to AWS to reduce both the complexity of their deployments and their costs.
This AWS guide is an introduction to Amazon’s cloud ecosystem that will be updated periodically to keep IT leaders in the loop on new AWS services and ways in which they can be leveraged.
SEE: Hiring kit: Cloud Engineer (TechRepublic Premium)
What is AWS?
AWS is a platform consisting of a variety of cloud computing services offered by Amazon.com. Instead of building an in-house data center, or leasing general purpose servers from traditional data centers, the costs of resource provisioning on AWS reflect actual usage, not reserved capacity. The service in question is also a factor in billing – pricing varies based on the individual product and storage type.
For example, the total cost for S3 level storage will vary depending on how many GB you require and how many transactions you perform during the billing period. Small businesses could easily add terabytes of storage for what amounts to petty cash, while large enterprises could spend millions satisfying their storage needs. Users of AWS systems only pay for what they use during the billing period.
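As a rough illustration of this usage-based model, the arithmetic below estimates a monthly S3-style bill from stored gigabytes and request counts. The per-GB and per-request rates are placeholder figures for illustration only, not current AWS pricing, which varies by region and storage class.

```python
# Rough sketch of S3-style usage-based billing.
# Rates below are illustrative placeholders, NOT current AWS prices.
STORAGE_RATE_PER_GB = 0.023   # USD per GB-month (example figure)
PUT_RATE_PER_1000 = 0.005     # USD per 1,000 PUT requests (example)
GET_RATE_PER_1000 = 0.0004    # USD per 1,000 GET requests (example)

def estimate_s3_bill(gb_stored, put_requests, get_requests):
    """Return an estimated monthly cost in USD for the given usage."""
    storage = gb_stored * STORAGE_RATE_PER_GB
    puts = (put_requests / 1000) * PUT_RATE_PER_1000
    gets = (get_requests / 1000) * GET_RATE_PER_1000
    return round(storage + puts + gets, 2)

# A small business storing 500 GB with modest traffic pays a few
# dollars a month; there is no charge for unused reserved capacity.
print(estimate_s3_bill(500, 10_000, 1_000_000))
```

The point of the sketch is that the bill is a pure function of consumption: double the stored data or the request volume and the cost scales with it, rather than with provisioned capacity.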
Beyond foundational offerings like Elastic Compute Cloud (EC2) virtual servers and S3 storage, the AWS portfolio includes many other services. CloudFront, a content-delivery network (CDN), mirrors resources at “edge locations” to improve page loading times. Relational Database Service (RDS) is a scalable database server that supports MySQL/MariaDB, PostgreSQL, Oracle and Microsoft SQL Server, as well as Amazon’s own Aurora database, which is compatible with MySQL and PostgreSQL. Similarly, DynamoDB offers scalable NoSQL database support. Elastic Beanstalk allows users to quickly deploy and manage applications in the cloud from preconfigured platforms.
AWS also offers specialized resources for specific use cases. Video stored on S3 can be easily transcoded for mobile devices using Elastic Transcoder, and tasks that cannot yet be automated can be completed by remote workers through Mechanical Turk – though this is more crowd computing than cloud computing. Amazon Connect is a cloud-based contact center service delivered through AWS, allowing businesses to scale to thousands of customer support agents.
SEE: AWS: 9 pro tips and best practices (free PDF) (TechRepublic)
At the 2016 AWS re:Invent conference, a substantial emphasis was placed on AI services, with the announcement of Amazon Rekognition, a deep learning-based image recognition system; Amazon Polly, a text-to-speech system that supports 25 languages and differentiates between dialects of English, Spanish and Portuguese; and Amazon Lex, the speech recognition and natural language processing technology that powers the Alexa virtual assistant used in the Amazon Echo speaker and Fire TV digital media player.
AWS has specialized services for Internet of Things (IoT) devices, with particular emphasis on enabling encrypted communication between devices, and transmitting information to the cloud. AWS Greengrass is a service that allows local compute, messaging, data caching and synchronization.
In 2018, Amazon introduced its first blockchain offerings at the AWS re:Invent conference. Amazon Quantum Ledger Database (QLDB) is a fully managed ledger database with a central trusted authority, and Amazon Managed Blockchain allows users to create and manage blockchain networks using Ethereum or Hyperledger. The company also announced AWS Outposts, a collaboration with VMware to bring AWS cloud services on-premises for hybrid cloud deployments.
Other new services introduced include Amazon Elastic Inference and AWS Inferentia for machine learning, FSx for native Windows file systems, Lake Formation for speeding up data lake building, Global Accelerator for increasing performance across regions, SiteWise for industrial data collection, RoboMaker dev service for building intelligent robotics apps, and AWS Ground Station for transmitting and processing data between satellites.
SEE: AWS RoboMaker: A cheat sheet (TechRepublic)
Amazon Web Services has also added non-Intel options for EC2 compute instances. AMD EPYC processors are available for M5 and T3 general purpose instances as well as R5 memory-optimized instances, priced 10 percent below the Intel Xeon-powered versions of those instances. Amazon also announced custom Arm processors, called AWS Graviton, at re:Invent 2018. Graviton was developed internally at Amazon and is based on the 2015-era Cortex-A72 microarchitecture, though it has support for Arm Neoverse server extensions.
At re:Invent 2020, Amazon announced the availability of macOS instances in EC2, along with many other new features covered below. The macOS instances use a combination of Mac mini machines and the AWS Nitro abstraction layer to allow both Catalina and Mojave instances to utilize other AWS resources like S3, EBS and EFS. Unfortunately, instances powered by Apple Silicon M1 chips won’t be available until sometime in 2021. Regardless of the wait, AWS is now the only cloud provider with on-demand macOS instances.
In December 2019, Amazon announced Amazon Braket, a fully managed quantum computing platform designed for building, testing and running quantum algorithms. Braket left preview in mid-2020 and is now generally available.
At AWS re:Invent 2021, AWS chose not to announce much in the way of new services and features and instead concentrated on iterations and improvements to existing cloud services. According to an analysis by TechRepublic contributing writer Matt Asay, AWS seems to be trying to help customers make better use of what they already have.
However, AWS did announce new Graviton3 processors for its EC2 platform. Also announced were AWS Wavelength, a 5G edge computing service, and Amazon SageMaker Canvas, a no-code module that uses a point-and-click interface to walk customers through the entire process of building and using a machine learning workflow.
Why does AWS matter?
AWS, like other cloud service providers, offers the ability to instantly provision computing resources on demand. Compared to the laborious task of planning and building on-site data center infrastructure, along with the requisite hardware upgrades, maintenance costs, server cooling requirements, electricity costs and use of floorspace – particularly for offices in urban centers with associated real estate costs – the savings can add up very quickly.
The benefit of AWS extends beyond cost. AWS managed services reduce the administrative burden on IT teams, freeing them to work on new projects rather than spending time on general system upkeep. For example, in RDS, the administrative console can be used to automatically apply security updates to the underlying software stack, manage backups and snapshots, deploy across multiple availability zones and seamlessly replace an instance in the event of hardware failure.
SEE: AWS Lambda: A guide to the serverless computing framework (free PDF) (TechRepublic)
Amazon has made AWS a leader in cloud-based machine learning. Since launching a long list of AI-based services at re:Invent in 2016, Amazon has extended its offerings to make AWS competitive with Google’s Cloud AI.
Since 2016 Amazon has added services like SageMaker, which rapidly trains machine learning models for faster deployment, and AWS DeepLens, a deep-learning enabled video camera.
As the largest public cloud services provider, Amazon acts as a trendsetter for the industry. Its purchasing power and scale give it the ability to influence the industry at large. As part of this, Amazon’s development of Arm-powered AWS Graviton CPUs is likely to spur open source work on server-side optimizations that harness the full power of the Arm architecture, in a way smaller vendors could not achieve.
Who does AWS affect?
Practically any organization that uses computers has a use case applicable to a service provided by AWS. Even for the most basic uses – such as using S3 Glacier for offsite backups – AWS is a compelling alternative to traditional solutions. While AWS started as a cloud-based replacement for simple storage and compute operations, it has expanded to cover practically every use case imaginable, with targeted services for databases, IoT development, business productivity, messaging, game development, virtual desktops, analytics, machine learning and more.
SEE: Cloud Data Storage Policy (TechRepublic Premium)
Additionally, while established organizations likely have capital for traditional data center deployments, cash-strapped startups benefit from the absence of deployment costs and paying only for resources used, as opposed to paying for capacity provisioned, or being forced to pay for infrastructure hardware. Utilizing cloud service providers such as AWS also allows for scale as a company grows, as well as establishing cloud infrastructure early in its growth.
Of note, Amazon’s largest brick-and-mortar competitor, Walmart, reportedly issued an ultimatum to suppliers and software vendors in 2017 to cease using AWS for their businesses or risk losing business with the big-box retailer. While Walmart has contributed to the open source OpenStack platform, the company is not requiring its partners to use cloud services from any particular vendor.
Since issuing that ultimatum, Walmart has begun using Microsoft Azure, but the company has said it isn’t abandoning its OpenStack investments as of July 2018; as of 2020, the partnership between Microsoft and Walmart continues as well.
When was AWS launched, and what are key features of AWS?
AWS launched in 2006, though various services and geographic service regions have been added continually since then. Presently, AWS services are available in distinct global “regions”: US East (Ohio and Northern Virginia), US West (Oregon and Northern California), Canada (Montreal), Brazil (São Paulo), UK (London), EU (Ireland, France, Italy, Sweden and Germany), Middle East (Bahrain), Africa (Cape Town), Asia Pacific (Singapore, Sydney, Tokyo, Seoul, Osaka, Mumbai and Hong Kong) and Mainland China (Beijing and Ningxia). An additional region exclusively for GovCloud users exists in the Northwestern United States.
In November 2014, Amazon announced a plan to transition AWS to 100% renewable energy. By April 2015, one quarter of consumed energy was provided by renewable sources. At the end of 2016, over 40% of consumed energy was provided by renewable sources, while Amazon planned to reach 50% by the end of 2017. Amazon announced that it had surpassed 50% renewable energy in 2018, and in 2019 the company said it planned to build wind farms in Ireland, the US and Sweden to support AWS infrastructure.
In early 2018 AWS launched a unified auto scaling dashboard that can be found in the AWS Management Console. It allows administrators to control scaling of multiple AWS products easily from one location.
At re:Invent 2020, Andy Jassy revealed a laundry list of new AWS features, many of which revolve around the objective of automating the more tedious aspects of machine learning and data analytics.
Newly announced features at re:Invent 2020 include:
- macOS instances in EC2, making AWS the only cloud provider to offer macOS instances.
- Habana Gaudi-based EC2 instances designed for deep learning, coming in the first half of 2021.
- AWS Trainium, a new AWS-designed machine learning training chip that Jassy said will deliver “the most cost-effective training in the cloud.”
- Amazon ECS and EKS Anywhere, which lets ECS and EKS customers run the two platforms in their own data centers on local hardware.
- Lambda container support, which allows Lambda functions to be built right into container workflows.
- AWS Proton, a fully managed deployment service for containers and apps.
- io2 Block Express, described by Amazon as the first SAN built for the cloud.
- Aurora Serverless v2, which will scale more quickly.
- Babelfish for Amazon Aurora PostgreSQL, which can understand proprietary SQL dialects and translate them into Aurora PostgreSQL queries.
- AWS Glue Elastic Views, which will create more efficient dashboards by copying relevant data from the datastore to the local host while Glue Elastic Views monitors the datastore for any changes.
- SageMaker Data Wrangler, which will examine unstructured data and give recommendations for how to translate it into data SageMaker can use.
- SageMaker Feature Store, a purpose-built store for machine learning features in SageMaker.
- SageMaker Pipelines, for building automated CI/CD pipelines into the SageMaker ML platform.
- Amazon DevOps Guru, which promises to identify operational issues like resources running out of space, misconfigured alarms, etc., before they become serious trouble.
- Amazon QuickSight Q, a natural language business intelligence tool that can generate data models based on regular, everyday questions.
- Amazon Monitron, a machine learning end-to-end equipment monitoring platform that builds operational profiles for industrial equipment and notifies relevant parties of any deviations from the norm.
Other changes from re:Invent 2020 include AWS releasing Babelfish for PostgreSQL as an open source project, a change in Lambda billing increments from 100 ms to 1 ms, and changes to Amazon Connect to improve the user experience with machine learning tools.
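The shift from 100 ms to 1 ms billing increments matters most for short-running functions, since a 5 ms invocation previously billed as a full 100 ms. The sketch below illustrates the difference; the per-millisecond rate is a placeholder for illustration, not an actual AWS price.

```python
# Compare Lambda duration billing at 100 ms vs. 1 ms granularity.
# The rate is an illustrative placeholder, NOT an actual AWS price.
import math

RATE_PER_MS = 0.0000000021  # example USD per millisecond of execution

def billed_ms(duration_ms, increment_ms):
    """Round a duration up to the nearest billing increment."""
    return math.ceil(duration_ms / increment_ms) * increment_ms

# A 5 ms function invoked 10 million times per month:
invocations = 10_000_000
old_cost = billed_ms(5, 100) * invocations * RATE_PER_MS  # billed as 100 ms
new_cost = billed_ms(5, 1) * invocations * RATE_PER_MS    # billed as 5 ms
print(f"100 ms increments: ${old_cost:.2f}; 1 ms increments: ${new_cost:.2f}")
```

For this hypothetical workload the duration charge drops by a factor of 20, while a function that already runs for an exact multiple of 100 ms sees no change at all.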
What services compete with AWS?
According to tech market research firm Canalys, AWS held 32% of the public cloud market as of Q1 2021, after growing 32% during the quarter. Microsoft accounts for 19% of the market and Google Cloud for 7%, with both growing 50% or more in Q1. Together, these top three cloud service providers account for 58% of total cloud spend for the first quarter of 2021.
In much the same way that Amazon as an internet retailer is intended to be everything to everyone, so is AWS. While competing cloud services offer alternatives for general use cases of AWS, no competing cloud service has an exact replacement for every product included in AWS.
In terms of scale, Google, Microsoft, Alibaba and IBM are certainly capable of handling any amount of data or compute tasks you can generate.
SEE: Power checklist: Local email server-to-cloud migration (TechRepublic Premium)
For organizations looking to migrate from an on-premises SharePoint system, or with other deep dependencies on Microsoft products, Azure is likely the most compelling option for a seamless transition to the cloud.
Google Cloud Platform’s core strengths are in machine learning, big data tools and extensive container support.
How do I become an AWS engineer?
Tech professionals interested in becoming an AWS engineer should head over to Amazon’s AWS Certified DevOps Engineer certification portal to see what skills Amazon says are required in order to manage AWS instances, find out how to prep for the certification exam, see sample questions, or schedule an exam session. However, this certification isn’t necessary to perform the job functions of an AWS DevOps engineer.
SEE: How to become a DevOps engineer: A cheat sheet (TechRepublic)
TechRepublic previously covered skills that are essential for cloud engineers to master, and while the list does include AWS, it also mentions several programming languages, automation suites and analytics tools that are essential. In essence, AWS is a proprietary platform, but the basic skills needed to manage it are the same as any other cloud product.
Anyone who wants to learn more about the specifics of being an AWS engineer should check out the list of online courses that Amazon maintains in its AWS Training portal.
How do I get and use AWS?
Developers can get started with AWS using the Free Tier, which is available to anyone without restriction for the first 12 months. It features 750 hours per month of EC2 t2.micro instances of Linux or Windows, as well as 5 GB of standard storage in S3 with 20,000 GET and 2,000 PUT requests.
Also available is 25 GB of storage in DynamoDB with 25 units each of write and read capacity, which Amazon estimates to be sufficient to handle 200 million requests per month. The Free Tier also includes one million free requests per month in Lambda, 20,000 free requests in AWS Key Management Service, and free access grants in a dozen other AWS services. After the 12-month period ends, some restrictions take effect, so Free Tier users should review exactly how the products they rely on are affected once that period expires.
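One practical habit is to track monthly usage against the Free Tier allowances listed above. The sketch below hardcodes the limits cited in this article; they may change, so treat them as illustrative and verify against AWS's current Free Tier page.

```python
# Check monthly usage against the Free Tier allowances cited above.
# Limits are those mentioned in this article and may change over time;
# verify against AWS's current Free Tier page before relying on them.
FREE_TIER_LIMITS = {
    "ec2_hours": 750,            # t2.micro instance hours per month
    "s3_storage_gb": 5,
    "s3_get_requests": 20_000,
    "s3_put_requests": 2_000,
    "lambda_requests": 1_000_000,
}

def over_free_tier(usage):
    """Return only the metrics where usage exceeds the free allowance,
    mapped to the amount of overage."""
    return {metric: usage[metric] - limit
            for metric, limit in FREE_TIER_LIMITS.items()
            if usage.get(metric, 0) > limit}

usage = {"ec2_hours": 744, "s3_storage_gb": 12, "s3_get_requests": 5_000,
         "s3_put_requests": 100, "lambda_requests": 50_000}
print(over_free_tier(usage))  # only S3 storage exceeds its allowance
```

Here one t2.micro running all month (744 hours) stays within the 750-hour allowance, while 12 GB in S3 exceeds the 5 GB of free storage by 7 GB, so only that metric is flagged.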
For startups, various tiers of free credits (up to $100,000) are available depending on your accelerator. These promotional credits can be applied to most AWS products, though they are not usable with Mechanical Turk, AWS Marketplace or some types of support requests.
Note: This article was written and updated by James Sanders. It was also updated by Brandon Vigliarolo and Mark Kaelin.