Amazon EC2 - Elastic Compute Cloud
Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides resizable compute capacity in the cloud. Amazon EC2 reduces the time required to obtain and boot new server instances to minutes, allowing you to quickly scale capacity, both up and down, as your computing requirements change.
Having worked in IT for the last 10 years, I remember a time when, if we needed a new Active Directory server or a new SQL Server, we had to go to HP or Dell and order new servers. They then had to be delivered to our data-centers, racked, networked and made internet accessible, so your provisioning time was anywhere from 5 to 10 business days. Then I started with the public cloud, and it was really exciting to see its capabilities: instead of a 5 to 10 day lead time, you could have that server up and running in literally a couple of minutes. That is how cloud computing has changed the IT industry over the last 5 to 10 years. Amazon EC2 changes the economics of computing by allowing you to pay only for capacity that you actually use, and it provides developers the tools to build failure-resilient applications and isolate themselves from common failure scenarios. The first big advantage of cloud computing is the utility-based model: you pay only by the hour. If you want to spin up a development environment, test on it and then terminate it, you only pay for the 1 or 2 hours the environment is live; in the old model you would buy the server hardware and be stuck with it.
Elastic Compute Cloud Pricing Options
Free Tier –
you get 750 hours free per month on certain micro instances.
On Demand –
Which allows you to pay a fixed rate by the hour with no commitment.
Reserved –
Which provides you with a capacity reservation and offers a significant discount on the hourly charge for an instance, with 1 year or 3 year terms. With Reserved you are saying "I need 10 servers of this size and I am willing to commit, with an upfront contract, for 1 to 3 years." If you do use Reserved Instances you get massive discounts compared with On Demand.
Spot –
This enables you to bid whatever price you want to pay for instance capacity, providing even greater savings if your applications have flexible start and end times.
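As a rough illustration of the Spot option, here is a minimal boto3 (Python) sketch that requests a Spot instance at a chosen bid price; the AMI ID, instance type, region and price are placeholder assumptions, not values from this article.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Bid $0.05/hour for one m4.large Spot instance (all values are examples).
response = ec2.request_spot_instances(
    SpotPrice="0.05",            # maximum hourly price we are willing to pay
    InstanceCount=1,
    Type="one-time",             # no persistent request; run once and finish
    LaunchSpecification={
        "ImageId": "ami-0123456789abcdef0",   # placeholder AMI ID
        "InstanceType": "m4.large",
    },
)
print(response["SpotInstanceRequests"][0]["SpotInstanceRequestId"])
```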
Elastic Compute Cloud On Demand vs Reserved vs Spot
On Demand Instances
Users that want the low cost and flexibility of Amazon EC2 without any up-front payment or long-term commitment. Applications with short-term, spiky, or unpredictable workloads that cannot be interrupted. Applications being developed or tested on Amazon EC2 for the first time.
Reserved Instances
Applications with steady state or predictable usage. Reserved might be your 3 or 4 web servers that you always want turned on, while your On Demand instances might be the ones launched as part of an auto scaling event. Applications that require reserved capacity. Users able to make upfront payments to reduce their total computing costs even further.
Spot Instances
Applications that have flexible start and end times. Applications that are only feasible at very low compute prices. Users with urgent computing needs for large amounts of additional capacity.
Elastic Compute Cloud On Demand Instances
- General Purpose Instances
- Compute Optimized Instances – Compute intensive applications
- Memory Optimized Instances – Database & memory caching applications
- GPU Instances – High performance parallel computing (e.g. Hadoop)
- Storage Optimized Instances – Data warehousing and parallel computing
Local Instance Storage vs Elastic Block Storage
- Local Instance Storage
Data stored on a local instance store will persist only as long as that instance is alive. If you terminate that instance, you lose all the data on that virtual hardware.
- Elastic Block Storage Backed Storage
Data that is stored on an Amazon Elastic Block Storage volume will persist independently of the life of the instance.
Storage backed by Elastic Block Storage
- Provisioned IOPS Solid State Drive
Designed for I/O intensive applications such as large relational or No-SQL databases.
- General purpose Solid State Drive
Designed for 99.999% availability. Offers a ratio of 3 IOPS per GB and single-digit millisecond latency, with the ability to burst up to 3,000 IOPS for short periods.
Magnetic
Lowest cost per gigabyte of all Elastic Block Storage volume types. Magnetic volumes are ideal for workloads where data is accessed infrequently, and applications where the lowest storage cost is important.
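To make the volume types concrete, here is a minimal boto3 sketch creating a Provisioned IOPS volume and a General Purpose SSD volume; the Availability Zone, sizes and IOPS figures are placeholder assumptions.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Provisioned IOPS SSD (io1): I/O intensive databases (example: 100 GB, 3000 IOPS).
ec2.create_volume(AvailabilityZone="us-east-1a", Size=100,
                  VolumeType="io1", Iops=3000)

# General Purpose SSD (gp2): baseline of 3 IOPS per GB, bursting to 3000 IOPS.
ec2.create_volume(AvailabilityZone="us-east-1a", Size=100, VolumeType="gp2")
```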
Amazon Lambda - Serverless technology
What is Lambda?
AWS Lambda is a compute service that runs your code in response to events and automatically manages the underlying compute resources. So you don't have to worry about server infrastructure; all you have to worry about is your code, and you can design your code to respond automatically to events. AWS Lambda can automatically run code in response to modifications to objects in Amazon S3 buckets, messages arriving in Amazon Kinesis streams, or table updates in Amazon DynamoDB.
Lambda runs your code on high-availability compute infrastructure and performs all the administration of the compute resources, including server and operating system maintenance, capacity provisioning and automatic scaling, code and security patch deployment, and code monitoring and logging.
All you need to do is supply the code.
What Events Trigger Lambda?
You can use AWS Lambda to respond to table updates in Amazon DynamoDB, modifications to objects in Amazon S3 buckets, messages arriving in an Amazon Kinesis stream, AWS API call logs created by AWS CloudTrail, and custom events from mobile applications, web applications, or other web services.
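For example, a minimal Python Lambda handler for an S3 event might look like the sketch below; the processing step is hypothetical, and the event shape follows the standard S3 notification format.

```python
def lambda_handler(event, context):
    """Triggered by S3: log each newly created object."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Real code would process the object here (resize, watermark, etc.)
        print(f"New object uploaded: s3://{bucket}/{key}")
    return {"processed": len(event["Records"])}
```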
Lambda Pricing
Pricing is broken down into two parts: requests and duration. Starting with requests, your first 1 million requests to the Lambda service are free, and you pay $0.20 per 1 million requests thereafter.
Duration is calculated from the time your code begins executing until it returns or otherwise terminates, and it’s rounded up to the nearest 100ms. The price depends on the amount of memory you allocate to your function. You are charged $0.00001667 for every GB-second used.
In terms of your free tier
1M free requests per month and 400,000 GB-seconds of compute time per month. The memory size you choose for your Lambda functions determines how long they can run in the free tier. The Lambda free tier does not automatically expire at the end of your 12 month AWS Free Tier term, but is available to both existing and new AWS customers indefinitely.
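As a worked example of these charges, here is a small Python sketch using the prices above; the memory size, average duration and invocation count are assumptions chosen for illustration.

```python
import math

memory_gb = 512 / 1024           # function configured with 512 MB
invocations = 3_000_000          # invocations per month (example)
avg_duration_ms = 120            # billed duration rounds up to nearest 100 ms

billed_seconds = math.ceil(avg_duration_ms / 100) * 100 / 1000   # 0.2 s
gb_seconds = invocations * billed_seconds * memory_gb            # 300,000 GB-s

request_cost = max(0, invocations - 1_000_000) / 1_000_000 * 0.20
duration_cost = max(0, gb_seconds - 400_000) * 0.00001667        # free tier first

print(request_cost, duration_cost)   # -> 0.40 and 0.0 (within free compute tier)
```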
Amazon S3 - Simple storage service
Simple Storage Service (S3):
S3 provides developers and IT teams with secure, durable, highly-scalable object storage. Amazon S3 is easy to use, with a simple web services interface to store and retrieve any amount of data from anywhere on the web. S3 Essentials —
- S3 is object based, i.e. it allows you to upload and store files on the platform.
- Files can be from 1 byte to 5 TB in size.
- There is unlimited storage.
- Files are stored in Buckets. (Buckets are like directories in any Windows or Linux file system.)
- Buckets have a unique, global namespace (e.g. if I create a bucket called izapcloudguru in the eu-west-1 region, that name is then reserved, so somebody using another Amazon account could not go in and create an izapcloudguru bucket. https://s3-eu-west-1.amazonaws.com/bucketname/)
- Amazon guarantees 99.99% availability for the S3 platform. S3 buckets are spread across Availability Zones, so if one Availability Zone goes down you don't have to worry: your S3 data is also stored in the other Availability Zones. Amazon does this automatically on a per-region basis; you don't have to configure anything.
- Amazon also guarantees 99.999999999% durability for S3 information. To understand durability, think of storing files on a disk set such as RAID 1: because a RAID 1 configuration mirrors all your information across two disks, you can survive the loss of one disk. The way Amazon structures S3, if you store 10,000 files, they guarantee those 10,000 files stay there with 99.999999999% durability.
- S3 can have metadata (key-value pairs) on each object (e.g. file).
- S3 allows you to do Lifecycle Management.
- Versioning
- Encryption (S3 also allows you to encrypt your buckets; you can store your files encrypted at rest.)
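A minimal boto3 sketch tying several of these essentials together (the bucket name and metadata are placeholder assumptions): create a bucket in eu-west-1, then upload an object with metadata, encrypted at rest.

```python
import boto3

s3 = boto3.client("s3", region_name="eu-west-1")

# Bucket names are globally unique, so this exact name may already be taken.
s3.create_bucket(
    Bucket="izapcloudguru-demo",
    CreateBucketConfiguration={"LocationConstraint": "eu-west-1"},
)

# Upload an object with metadata (key-value pairs) and server-side encryption.
s3.put_object(
    Bucket="izapcloudguru-demo",
    Key="reports/summary.txt",
    Body=b"hello s3",
    Metadata={"owner": "team-a", "classification": "internal"},
    ServerSideEncryption="AES256",   # S3-managed keys (SSE-S3)
)
```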
S3 Storage Types
- Standard S3 storage which gives you 99.99% availability, and the 99.999999999% durability
- Reduced Redundancy Storage – Still has 99.99% availability, and your buckets are still replicated across different Availability Zones automatically, but it uses different disk sets that only give you 99.99% durability over a given year. It is a little bit cheaper to use Reduced Redundancy Storage, but you should only store files on it that are not important if you lose them.
- Only use Reduced Redundancy Storage for replaceable data. For example, if you have 10,000 files, you could expect to lose about 1 file per year with Reduced Redundancy Storage, as opposed to roughly 0.0000001 files (one file every 10 million years) with standard S3 durability.
S3 Versioning
- Stores all versions of an object (including all writes and even if you delete an object)
- Great backup tool.
- Once enabled, Versioning cannot be disabled, only suspended that’s quite important to know.
S3 Lifecycle Management
- Lifecycle Management can be used in conjunction with versioning.
- Lifecycle Management can be applied to current versions and previous versions.
- The following actions are allowed, in conjunction with or without versioning:
- Archive Only
- Permanently Delete Only
- Archive and then permanently delete.
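For instance, a lifecycle rule that archives objects and then permanently deletes them could be set up like this boto3 sketch (the bucket, prefix and day counts are assumptions):

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="izapcloudguru-demo",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-then-delete-logs",
            "Filter": {"Prefix": "logs/"},
            "Status": "Enabled",
            # Archive to Glacier after 30 days...
            "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
            # ...then permanently delete after 365 days.
            "Expiration": {"Days": 365},
        }]
    },
)
```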
S3 Encryption
- You can upload/download your data to S3 via SSL encrypted endpoints, and S3 can automatically encrypt your data at rest. S3 gives you the choice of managing your keys through the AWS Key Management Service (AWS KMS), having Amazon S3 manage them for you, or providing your own keys.
S3 Security
- All buckets are private by default.
- Allows Access Control Lists (e.g. an individual user can be given access to only 1 bucket, with read-only access).
- Integrates with IAM (using roles, for example, allows EC2 instances to access S3 buckets).
- All endpoints are encrypted by SSL.
S3 Functionality
- Static Websites can be hosted on S3. No need for web servers, you can just upload a static .html to an S3 bucket and take advantage of AWS S3’s durability and High Availability.
- S3 also integrates with CloudFront, which is Amazon's content delivery network.
- Multipart uploads allow you to upload parts of a file concurrently (see the sketch after this list).
- Suggested for files of 100 MB and over. It is required for any file over 5 GB.
- Allows us to resume a stopped file upload.
- S3 is spread across multiple Availability Zones and offers eventual consistency. Just remember that sometimes you might upload a file to an S3 bucket and then try to access that file programmatically so quickly that it has not yet replicated across the other Availability Zones. All AZs will eventually be consistent: PUT/WRITE/DELETE requests are eventually consistent across AZs.
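Here is the multipart upload sketch referred to above: boto3's transfer layer can do the multipart handling for you once a file crosses a threshold (the file name, bucket and threshold are assumptions).

```python
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# Files above 100 MB are split into parts that are uploaded concurrently;
# a stopped transfer can then be resumed part by part.
config = TransferConfig(
    multipart_threshold=100 * 1024 * 1024,  # 100 MB
    max_concurrency=4,
)
s3.upload_file("backup.tar.gz", "izapcloudguru-demo", "backups/backup.tar.gz",
               Config=config)
```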
S3 Use Cases
- File Shares for networks
- Backup/Archiving
- Origin for CloudFront CDN’s
- Hosting Static Files
- Hosting Static Websites
Amazon CloudFront - serve static assets from the closest place.
CDN:-
A Content Delivery Network (CDN) is a system of distributed servers (a network) that delivers web pages and other web content to a user based on the geographic location of the user, the origin of the webpage and a content delivery server. Let's look at a practical example. Say I am in Perth, Australia, and I want to access a server in New York that has image files on it. To get those image files, they have to be served across the Atlantic and then across the Indian Ocean to reach Perth, and every 200 km adds approximately 1 millisecond of latency, so it takes a noticeable amount of time for those files to physically travel from New York to Perth; even operating at the speed of light, it is going to be slower than viewing those files directly from a server in Perth. What a Content Delivery Network does is this: the first time a user in Perth tries to access those files in New York, the CDN caches those files at a server in Perth for a length of time. When a new user goes to access the same files, they can just get them from the Perth server; they don't have to go halfway around the world to pull down the same files. How long files are cached depends on a setting called the time to live (TTL), measured in seconds, which you set on your CDN to say how long files will be cached. So that is a really high-level overview of what a CDN is, and CloudFront is Amazon's CDN.
Amazon CloudFront can be used to deliver your entire website, including dynamic, static, streaming, and interactive content using a global network of edge locations. Requests for your content are automatically routed to the nearest edge location, so the content is delivered with the best possible performance.
Amazon CloudFront is optimized to work with other Amazon Web Services, like Amazon Simple Storage Service (Amazon S3), Amazon Elastic Compute Cloud (Amazon EC2), Amazon Elastic Load Balancing, and Amazon Route 53. Amazon CloudFront also works seamlessly with any non-AWS origin server, which stores the original, definitive versions of your files.
CloudFront Terminology
Origin –
This is the origin of all the files that the CDN will distribute. It can be either an S3 bucket, an EC2 instance, an Elastic Load Balancer or Route 53.
Distribution –
This is the name given to the CDN, which consists of a collection of Edge Locations. You can have 1 distribution with multiple origins. A good example of this would be where you are trying to serve a dynamic website: maybe your image files are flat, static files stored in an S3 bucket, while you are also running a PHP application that does not refresh too often and you want to cache the output of those PHP files. You can create a separate origin server, which could be an EC2 instance for example, so that any PHP files come from your EC2 instance while image files come from your S3 bucket. You can also have multiple S3 buckets with different file types; perhaps you have an S3 bucket for your PDF files and a separate S3 bucket for the applications users will download. So you can have 1 distribution with multiple origins.
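A minimal boto3 sketch of a web distribution with a single S3 origin (the bucket domain and caller reference are placeholders; a real multi-origin setup would add more entries to Origins plus extra cache behaviors):

```python
import boto3

cloudfront = boto3.client("cloudfront")

cloudfront.create_distribution(DistributionConfig={
    "CallerReference": "demo-distribution-001",   # any unique string
    "Comment": "static assets from S3",
    "Enabled": True,
    "Origins": {"Quantity": 1, "Items": [{
        "Id": "s3-assets",
        "DomainName": "izapcloudguru-demo.s3.amazonaws.com",
        "S3OriginConfig": {"OriginAccessIdentity": ""},
    }]},
    "DefaultCacheBehavior": {
        "TargetOriginId": "s3-assets",
        "ViewerProtocolPolicy": "redirect-to-https",
        "ForwardedValues": {"QueryString": False,
                            "Cookies": {"Forward": "none"}},
        "TrustedSigners": {"Enabled": False, "Quantity": 0},
        "MinTTL": 0,
        "DefaultTTL": 86400,   # TTL: cache objects at the edge for one day
    },
})
```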
CloudFront Distribution Types
Web Distribution –
Typically used for websites
Speed up distribution of static and dynamic content, for example, .html, .css, .php, and graphics files. Distribute media files using HTTP or HTTPS. Add, update, or delete objects, and submit data from web forms. Use live streaming to stream an event in real time. You store your files in an origin — either an Amazon S3 bucket or a web server. After you create the distribution, you can add more origins to the distribution.
RTMP –
An RTMP distribution speeds up distribution of your streaming media files using Adobe Flash Media Server's RTMP protocol. An RTMP distribution allows an end user to begin playing a media file before the file has finished downloading from a CloudFront edge location. Note the following:
To create an RTMP distribution, you must store the media files in an Amazon S3 bucket. To use CloudFront live streaming, create a web distribution.
Amazon Storage Gateway - Better storage solution.
Storage Gateway
AWS Storage Gateway is a service that connects an on-premises software appliance with cloud-based storage to provide seamless and secure integration between an organization’s on-premises IT environment and AWS’s storage infrastructure. The service enables you to securely store data to the AWS cloud for scalable and cost-effective storage.
AWS Storage Gateway software appliance is available for download as a virtual machine (VM) image that you install on a host in your data-center. Once you’ve installed your gateway and associated it with your AWS account through our activation process, you can use the AWS Management Console to create either gateway-cached or gateway-stored volumes that can be mounted as iSCSI devices by your on-premises applications.
Storage Gateway comes in two different models: Gateway-cached and Gateway-stored.
Gateway-cached :-
Gateway-cached volumes allow you to utilize Amazon S3 for your primary data, while retaining some portion of it locally in a cache for frequently accessed data. These volumes minimize the need to scale your on-premises storage infrastructure, while still providing your applications with low-latency access to their frequently accessed data. You can create storage volumes up to 32 TB in size and mount them as iSCSI devices from your on-premises application servers. Data written to these volumes is stored in Amazon S3, with only a cache of recently written and recently read data stored locally on your on-premises storage hardware.
Gateway-stored :-
Gateway-stored volumes store your primary data locally, while asynchronously backing up that data to AWS. These volumes provide your on-premises applications with low-latency access to their entire data sets, while providing durable, off-site backups. You can create storage volumes up to 1 TB in size and mount them as iSCSI devices from your on-premises application servers. Data written to your gateway-stored volumes is stored on your on-premises storage hardware, and asynchronously backed up to Amazon S3 in the form of Amazon EBS snapshots.
Storage Gateway Pricing
With AWS Storage Gateway, you pay only for what you use. AWS Storage Gateway has four pricing components: gateway usage (per gateway per month), snapshot storage usage (per GB per month), volume storage usage (per GB per month), and data transfer out (per GB per month).
Amazon RDS - Relational database service
Databases Introduction
Let's start with a brief introduction to the different types of databases, beginning with relational databases.
Relational Databases (OLTP) :-
Online Transaction Processing; these are the databases that we are used to using day in, day out.
RDS :- Amazon has a service called RDS, which stands for Relational Database Service. It consists of several relational database engines, including MySQL, SQL Server, PostgreSQL, Oracle, and Aurora.
Non-Relational Databases (NOSQL) :-
These are relatively new to the industry, having come out around 2004 or so, and Amazon's service for this is:
DynamoDB :- The most famous non-relational databases would be something like MongoDB, or you could look at CouchDB. DynamoDB is slightly different to these databases, so don't compare it directly with MongoDB.
Data Warehousing Databases (OLAP) :-
Online Analytical Processing. These often still use a relational structure from a logical perspective, even though the infrastructure underneath is quite different; these types of databases are known as data warehousing databases. Amazon's product for this is:
RedShift
Compare The Fundamentals
So Let’s start with the Relational Databases or Amazon RDS so it’s for what most of us are used to. Been around since the 1970’s.
- Database
- Tables :- Inside your database you have a number of tables.
- Rows :- Inside your tables you have Rows, otherwise known as Records.
- Fields :- Each Row or Record consists of a number of Fields, which are known as Columns.
RDS includes technologies such as :-
- SQL Server
- Oracle
- MySQL
- Postgres
- Aurora
NoSql Database Structure :-
NoSQL is quite a bit different to relational databases, and there are different types of NoSQL databases: document oriented, tabular, key-value stores, and so on. The type we are going to look at here, which is what DynamoDB is, is document oriented.
- Collection :- Inside your database you have collections.
- Document :- Inside your collection you have a number of documents.
- Key Value Pairs :- Those documents consist of key value pairs.
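To make that structure concrete, here is a boto3 sketch writing one "document" of key-value pairs to a DynamoDB table (the table name and attributes are placeholder assumptions; the table is assumed to already exist):

```python
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("Users")   # assumed to exist with partition key user_id

# One item: a set of key-value pairs, including nested values.
table.put_item(Item={
    "user_id": "u-100",
    "name": "Asha",
    "scores": [9, 7, 10],
    "address": {"city": "Pune", "country": "IN"},
})
```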
Data Warehousing :-
This is often used by a number of different business intelligence software products: tools like Cognos, Jaspersoft, SQL Server Reporting Services, Oracle Hyperion, and SAP NetWeaver.
Amazon DynamoDB - NoSQL database for faster response time
DynamoDB
Amazon DynamoDB is a fast and flexible NoSQL database service for all applications that need consistent, single-digit millisecond latency at any scale. It is a fully managed database and supports both document and key-value data models. Its flexible data model and reliable performance make it a great fit for mobile, web, gaming, ad-tech, IoT, and many other applications.
DynamoDB Configuration
- It’s always going to be stored on SSD storage. So there is no magnetic storage you going to get very good and very high IOPS from it.
- It is spread across 3 geographically distinct data-centers. So when you write a record to a particular AZ, it is going to be replicated across to your other two AZs, and in terms of reading that replicated data you can choose between two options:
- Eventual Consistent Reads
- Strongly Consistent Reads
Difference Between Eventual Consistent Reads and Strongly Consistent Reads
Eventual Consistent Reads :-
Consistency across all copies of data is usually reached within a second. Repeating a read after a short time should return the updated data (best read performance). This means that when you run a read query against your database, you might be querying an Availability Zone that has not yet received the data from the initial write to another AZ. Say you write your data to AZ-(A) and then go to read the database: the data might not yet be in AZ-(B), depending on the length of time between that write and that read. That is eventual consistency.
Strongly Consistent Reads :-
A strongly consistent read returns a result that reflects all writes that received a successful response prior to the read. With strongly consistent reads you do not get the same read performance you get with eventually consistent reads, but you are more or less guaranteed that after somebody has written a record to AZ-(A), nobody will read a stale copy of that record from AZ-(B) before it has been replicated across. So keep that in mind when designing your application (we are talking about milliseconds here, not whole seconds). It is really up to you and your application team which one you choose; most people choose the default, which says "we can afford for my data to be out of date or not replicated within a second", i.e. eventually consistent reads.
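Choosing between the two models is a single flag on the read call in boto3 (the table and key are the same placeholder assumptions as above):

```python
import boto3

table = boto3.resource("dynamodb").Table("Users")

# Default: eventually consistent read (best read performance).
eventual = table.get_item(Key={"user_id": "u-100"})

# Strongly consistent: reflects all writes acknowledged before the read.
strong = table.get_item(Key={"user_id": "u-100"}, ConsistentRead=True)
```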
Pricing
- Provisioned Throughput Capacity
Write throughput: $0.0065 per hour for every 10 units. Read throughput: $0.0065 per hour for every 50 units.
- Storage costs of $0.25 per GB per month.
Pricing Example
Let’s assume that your application needs to perform 1 million writes and 1 million reads per day, while storing 3GB of data.
First, you need to calculate how many writes and reads per second you need. 1 million evenly spread writes per day is equivalent to 1,000,000 (writes) /24 (hours) /60 (minutes) /60 (seconds) = 11.6 writes per second.
A DynamoDB Write Capacity Unit can handle 1 write per second, so you need 12 Writes Capacity Units. Similarly, to handle 1 million strongly consistent reads per day, you need 12 Read Capacity Units.
Using provisioned throughput pricing in the US East (N. Virginia) Region, 12 Write Capacity Units would cost $0.1872 per day and 12 Read Capacity Units would cost $0.0374 per day, so your total cost of provisioned throughput capacity is $0.1872 + $0.0374 = $0.2246 per day. Storage costs $0.25 per GB per month.
Assuming a 30-day month, your 3GB would cost you 3 * $0.25/30 = $0.025 per day. Combining these numbers, the total cost of your DynamoDB table would be $0.2246 (for provisioned throughput capacity) + $0.025 (for storage) = $0.2496 per day, or about $7.50 per month.
Amazon ElastiCache - In-memory data store and caching
ElastiCache
ElastiCache is a web service that makes it easy to deploy, operate, and scale an in-memory cache in the cloud. The service improves the performance of web applications by allowing you to retrieve information from fast, managed, in-memory caches, instead of relying entirely on slower disk-based databases. ElastiCache supports two open-source in-memory caching engines:
ElastiCache – Use Cases
Amazon ElastiCache can be used to significantly improve latency and throughput for many read-heavy application workloads (such as social networking, gaming, media sharing and Q&A portals) or compute-intensive workloads (such as a recommendation engine). Caching improves application performance by storing critical pieces of data in memory for low-latency access. Cached information may include the results of I/O-intensive database queries or the results of computationally-intensive calculations. If data does not change regularly and it is OK to cache it, ElastiCache can take load off your database servers or the computation services you are running; that is why you would use it.
ElastiCache Use Two Different Engines
Memcached
A widely adopted memory object caching system. ElastiCache is protocol compliant with Memcached, so popular tools that you use today with existing Memcached environments will work seamlessly with the service.
Redis
A popular open-source in-memory key-value store that supports data structures such as sorted sets and lists. ElastiCache supports Master/Slave replication and Multi-AZ which can be used to achieve cross AZ redundancy.
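Provisioning either engine is a single call in boto3; here is a hedged sketch (the cluster ID and node type are placeholder assumptions):

```python
import boto3

elasticache = boto3.client("elasticache")

# A single-node Redis cluster (swap Engine="memcached" for Memcached).
elasticache.create_cache_cluster(
    CacheClusterId="demo-cache",
    Engine="redis",
    CacheNodeType="cache.t2.micro",
    NumCacheNodes=1,
)
```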
Amazon Redshift, Big data solution
Amazon Redshift
Amazon Redshift is a fast and powerful, fully managed, petabyte-scale data warehouse service in the cloud. Customers can start small for just $0.25 per hour with no commitments or upfront costs and scale to a petabyte or more for $1,000 per terabyte per year, less than a tenth of most other data warehousing solutions.
Configuration of Redshift
- You start with a Single Node, which is 160 GB.
- Then you can scale to Multi-Nodes
Leader Node (manages client connections and receives queries). Compute Nodes (store data and perform queries and computations). With Redshift you can have up to 128 Compute Nodes, but you can also start with a single node, which combines the leader node and compute node into one, and then scale out later.
10 Times Faster
Columnar Data Storage:-
Instead of storing data as a series of rows, Amazon Redshift organizes the data by column. Unlike row-based systems, which are ideal for transaction processing, column-based systems are ideal for data warehousing and analytics, where queries often involve aggregates performed over large data sets. Since only the columns involved in the queries are processed and columnar data is stored sequentially on the storage media, column-based systems require far fewer I/Os, greatly improving query performance.
Advanced Compression:-
Columnar data stores can be compressed much more than row-based data stores because similar data is stored sequentially on disk. Amazon Redshift employs multiple compression techniques and can often achieve significant compression relative to traditional relational data stores. In addition, Amazon Redshift doesn’t require indexes or materialized views and so uses less space than traditional relational database systems. When loading data into an empty table, Amazon Redshift automatically samples your data and selects the most appropriate compression scheme.
Massively Parallel Processing (MPP):-
Amazon Redshift automatically distributes data and query load across all nodes. Amazon Redshift makes it easy to add nodes to your data warehouse and enables you to maintain fast query performance as your data warehouse grows.
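Spinning up such a cluster is a single call; here is a hedged boto3 sketch for a cluster with 3 compute nodes (the identifier, node type and credentials are placeholders):

```python
import boto3

redshift = boto3.client("redshift")

redshift.create_cluster(
    ClusterIdentifier="demo-warehouse",
    ClusterType="multi-node",        # leader node + compute nodes
    NodeType="dc2.large",
    NumberOfNodes=3,                 # compute nodes (up to 128)
    MasterUsername="admin",
    MasterUserPassword="Str0ngPassw0rd!",  # placeholder only
)
```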
Pricing
- Compute Node Hours (total number of hours you run across all your compute nodes for the billing period. You are billed for 1 unit per node per hour, so a 3-node data warehouse cluster running persistently for an entire month would incur 2,160 instance hours. You will not be charged for leader node hours; only compute nodes will incur charges.)
- Backup
- Data transfer (only within a Virtual Private Cloud, not outside it)
Security
- Encrypted in transit using SSL
- Encrypted at rest using AES-256 encryption
By default Redshift takes care of key management for you. Alternatively, you can manage your own keys through Hardware Security Modules (HSMs) or the AWS Key Management Service.
Availability
- Currently only available in 1 AZ.
- If you do lose an AZ, you can restore snapshots to a new AZ; that is how you get some kind of redundancy, but it is a manual process, not automatic.
Amazon Virtual Private Cloud
Amazon Virtual Private Cloud Definition
Amazon Virtual Private Cloud (Amazon VPC) lets you provision a logically isolated section of the Amazon Web Services (AWS) Cloud where you can launch AWS resources in a virtual network that you define. You have complete control over your virtual networking environment, including selection of your own IP address range, creation of subnets, and configuration of route tables and network gateways.
You can easily customize the network configuration for your Amazon Virtual Private Cloud. For example, you can create a public-facing subnet for your web servers that has access to the Internet, and you can place your back-end systems such as databases or application servers in a private-facing subnet with no Internet access. You can leverage multiple layers of security, including security groups and network access control lists, to help control access to the Amazon EC2 instances in each subnet.
Additionally, you can create a Hardware Virtual Private Network (VPN) connection between your corporate data-center and your VPC and leverage the AWS cloud as an extension of your corporate data-center.
What Can I Do with a Virtual Private Cloud
- You can launch instances into a subnet of your choosing; these might be EC2 or RDS instances etc.
- Assign custom IP address ranges to each subnet; you can also bring your own IP address ranges over if you choose.
- You can configure route tables between subnets.
- Create internet gateways and attach them to subnets (or not). If a subnet has an internet gateway attached, that subnet is publicly accessible from the internet. You can then have other subnets that do not have an internet gateway attached; those subnets do not have internet access, and you cannot reach resources in them directly, only by going in via the other subnets.
- You also get much better security control over your AWS resources; you actually get two levels of security:
- Instance security groups
- Subnet network access control lists (ACLs)
Default VPC vs Custom VPC
- Default VPC is user friendly, allowing you to immediately deploy instances.
- All subnets in the default VPC have an internet gateway attached, so if you put an instance inside any of those subnets, in any AZ, it has internet access by default.
- Each EC2 instance has both a public and private IP address
- If you delete the default VPC the only way to get it back is to contact AWS.
VPC Peering
- Allows you to connect one VPC with another via a direct network route using private IP addresses.
- Instances behave as if they were on the same private network
- You can peer VPC’s with other AWS accounts as well as with other VPCs in the same account.
- Peering is in a star configuration, i.e. 1 central VPC peers with 4 others. There is no transitive peering: if VPC1 peers with VPC2, and VPC2 peers with VPC3, VPC1 cannot communicate directly with VPC3; each can only talk to VPC2, the one in the middle.
VPC Restrictions
- You only get 5 Elastic IP addresses per region by default.
- 5 Internet Gateways.
- You can have 5 VPCs per region (can be increased upon request)
- 50 VPN connections per region.
- 50 Customer Gateways per region.
- 200 Route tables per region.
- 100 Security Groups per VPC.
- 50 Rules per security group.
VPC Creation Summary
- We created a custom VPC.
- Defined our IP address range; we did that using the CIDR 10.0.0.0/16, which was our IP address range. By default this created a Network ACL & Route Table.
- Created a Custom route table.
- Created 3 Subnets: 10.0.1.0/24, 10.0.2.0/24 and 10.0.3.0/24.
- We then created an Internet Gateway.
- Attached our Internet Gateway to our VPC. Then, within our custom route table, we created an outbound route to that Internet Gateway.
- Adjusted our public subnet to use the newly defined route.
- Provisioned an EC2 instance with an Elastic IP address in our public subnet; we also created an EC2 instance in our private subnet. One thing that comes up again and again: just because an EC2 instance is in your public subnet doesn't mean it has access to the internet; you need it to either have an Elastic IP address or have an Elastic Load Balancer attached to it. So just remember that putting an EC2 instance in a public subnet doesn't mean it has internet by default.
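The same summary, expressed as a boto3 sketch (only one subnet shown; the CIDRs mirror the ones above, and the IDs are returned by the calls rather than assumed):

```python
import boto3

ec2 = boto3.client("ec2")

vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]
subnet_id = ec2.create_subnet(VpcId=vpc_id,
                              CidrBlock="10.0.1.0/24")["Subnet"]["SubnetId"]

# Internet Gateway: create it, then attach it to the VPC.
igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)

# Custom route table with an outbound route to the Internet Gateway,
# associated with the public subnet.
rt_id = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]["RouteTableId"]
ec2.create_route(RouteTableId=rt_id, DestinationCidrBlock="0.0.0.0/0",
                 GatewayId=igw_id)
ec2.associate_route_table(RouteTableId=rt_id, SubnetId=subnet_id)
```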
NAT Summary
- Created a security group
- Allowed inbound connections from 10.0.1.0/24 and 10.0.2.0/24 on HTTP and HTTPS
- Allowed outbound connections on HTTP and HTTPS for all traffic.
- Provisioned our NAT instance inside our public subnet.
- We disabled the Source/Destination Check for the NAT instance; that is how you get a NAT instance to work. You have to disable the source/destination check.
- Set up a route on our private subnets to route through the NAT instance.
ACL Summary
- ACLs can be associated with multiple subnets.
- But Subnets can only have 1 NACL.
- ACLs encompass all security groups under the subnets associated with them.
- Rules are evaluated in order of rule number, lowest first.
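For example, adding a rule to a network ACL with boto3 (the ACL ID is a placeholder; RuleNumber=100 would be evaluated before any higher-numbered rule):

```python
import boto3

ec2 = boto3.client("ec2")

# Allow inbound HTTP from anywhere; lower rule numbers are evaluated first.
ec2.create_network_acl_entry(
    NetworkAclId="acl-0123456789abcdef0",   # placeholder ID
    RuleNumber=100,
    Protocol="6",                            # TCP
    RuleAction="allow",
    Egress=False,                            # inbound rule
    CidrBlock="0.0.0.0/0",
    PortRange={"From": 80, "To": 80},
)
```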
Amazon Direct Connect, Data center to Cloud
Direct Connect
AWS Direct Connect makes it easy to establish a dedicated network connection from your premises to AWS. Using AWS Direct Connect, you can establish private connectivity between AWS and your data-center, office, or colocation environment, which in many cases can reduce your network costs, increase bandwidth throughput, and provide a more consistent network experience than Internet-based connections.
AWS Direct Connect lets you establish a dedicated network connection between your network and one of the AWS Direct Connect locations. Using industry standard 802.1Q VLANs, this dedicated connection can be partitioned into multiple virtual interfaces. This allows you to use the same connection to access public resources, such as objects stored in Amazon S3 using public IP address space, and private resources, such as Amazon EC2 instances running within an Amazon Virtual Private Cloud (VPC) using private IP space, while maintaining network separation between the public and private environments. Virtual interfaces can be reconfigured at any time to meet your changing needs.
Main advantage of Direct Connect over VPN?
Bandwidth & a more consistent network experience! With a site-to-site VPN, connections can drop out; your VPN can just terminate and you need to reconnect, so you have less reliability. The other main advantage of Direct Connect is bandwidth: you can get a huge amount of bandwidth compared with a site-to-site VPN.
A VPC VPN connection utilizes IPsec to establish encrypted network connectivity between your intranet and Amazon VPC over the internet. VPN connections can be configured in minutes and are a good solution if you have an immediate need, have low to modest bandwidth requirements, and can tolerate the inherent variability in Internet-based connectivity. AWS Direct Connect does not involve the Internet; instead, it uses dedicated, private network connections between your intranet and Amazon VPC.
Direct Connect Benefits
- Reduce costs when using large volumes of traffic.
- Increase reliability
- Increase the amount of bandwidth that you can use to communicate with AWS.
Direct Connect Connection
Available in 10 Gbps and 1 Gbps; speeds below 1 Gbps can be purchased through AWS Direct Connect Partners. Uses Ethernet VLAN trunking (802.1Q).
Amazon Route53, Powerful control on DNS.
Route 53
Amazon Route 53 is a highly available and scalable cloud Domain Name System (DNS) web service. The name Route 53 is a play on Route 66, the famous first interstate route across the United States, combined with port 53, which is the DNS port; that is how it came to get its name.
What is Domain Name System?
If you’ve used the internet before, you’ve used DNS. DNS is used to convert human friendly domain names (such as http://example.com) into an Internet Protocol (IP) address (such as http://192.0.2.1) IP address are used by computers to identify each other on the network. An IP address commonly come in 2 different forms, IPv4 and IPv6.
Difference between IPv4 and IPv6
The IPv4 space is a 32 bit field and has over 4 billion different addresses (4,294,967,296 addresses to be precise). The problem is that we now have the internet of things, with a lot more devices out there on the internet, and essentially we are running out of IPv4 addresses.
IPv6 was created to solve this depletion issue and has an address space of 128 bits, which in theory gives 340 undecillion addresses; it is massively larger than IPv4. The problem at the moment is that nobody seems to be using IPv6 very prevalently: most ISPs still use IPv4 despite us running out of address space, and it is really down to the ISPs to push users to start using IPv6, a bit of a cart-before-the-horse scenario. Right now there are clever techniques for translating between IPv4 and IPv6 addresses, but as it stands today, despite IPv6 having been out for quite a few years now, most people still use IPv4.
Top Level Domains
If we look at common domain names such as google.com, bbc.co.uk, izap.in etc you will notice a string of characters separated by dots (periods). The last word in a domain name represents the “top level domain”.
.com
.in
.edu
.gov
.co.uk
.gov.uk
.com.au
These top level domain names are controlled by the Internet Assigned Numbers Authority (IANA) in a root zone database, which is essentially a database of all available top level domain names. You can view this database by visiting http://www.iana.org/domains/root/db, where you can see a list of top level domains.
Domain Registrars
Because all of the names in a given domain name space have to be unique, there needs to be a way to organize this all so that domain names aren't duplicated. This is where domain registrars come in. A registrar is an authority that can assign domain names directly under one or more top-level domain names. These domains are registered with InterNIC, a service of ICANN, which enforces uniqueness of domain names across the internet. Each domain name becomes registered in a central database known as the WHOIS database. Popular domain registrars include GoDaddy.com, 123-reg.co.uk etc., and you can now register domain names through Amazon as well.
Common DNS Types
Some common DNS types include:
SOA Records :-
A start of authority (SOA) record is information stored in a DNS zone about that zone. A DNS zone is the part of a domain for which an individual DNS server is responsible (i.e. the bit where you store A records, CNAMEs etc). Each zone contains a single SOA record. The SOA record stores information about:
- The name of the server that supplied the data for the zone.
- The administrator of the zone.
- The current version of the data file.
- The number of seconds a secondary name server should wait before checking for updates.
- The number of seconds a secondary name server should wait before retrying a failed zone transfer.
- The maximum number of seconds that a secondary name server can use data before it must either be refreshed or expire.
- The default number of seconds for the time-to-live (TTL) on resource records.
NS Records :-
NS stands for Name Server. NS records are used by Top Level Domain servers to direct traffic to the content DNS server which contains the authoritative DNS records.
A Records :-
An “A” record is the fundamental type of DNS record, and the “A” in A record stands for “Address”. The A record is used by a computer to translate the name of the domain to an IP address. For example, http://example.com might point to 192.0.2.1.
CNAMEs :-
A Canonical Name (CNAME) can be used to resolve one domain name to another. For example, you may have a mobile website with the domain name http://m.example.com that is used when users browse to your domain name on their mobile devices. You may also want the name http://mobile.example.com to resolve to this same address. CNAMEs are most commonly used when you want http://www.example.com to resolve to http://example.com (i.e. the naked domain name).
MX Records :-
The MX resource record specifies a mail exchange server for a DNS domain name. The information is used by the Simple Mail Transfer Protocol (SMTP) to route emails to the proper hosts. Typically, there is more than one mail exchange server for a DNS domain, and each of them has a set priority.
PTR Records :-
You can think of the PTR record as an opposite of the A record. While the A record points a domain name to an IP address, the PTR record resolves the IP address to a domain name. PTR records are used for the reverse DNS lookup. Using the IP address you can get the associated domain name. An A record should exist for every PTR record.
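Creating one of these records with boto3 looks like this sketch (the hosted zone ID, record name and IP are placeholders):

```python
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0123456789ABCDEFGHIJ",   # placeholder zone ID
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "example.com.",
            "Type": "A",                     # A record: name -> IP address
            "TTL": 300,
            "ResourceRecords": [{"Value": "192.0.2.1"}],
        },
    }]},
)
```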
Route 53 Routing Policy
Simple :-
This is the default routing policy when you create a new record set. It is most commonly used when you have a single resource that performs a given function for your domain, for example, one web server that serves content for the http://example.com website. In this example, Route 53 responds to DNS queries with only the values in the record set (i.e. there is no intelligence built into the response).
Weighted :-
Lets you split your traffic based on different weights assigned. For example you can set 10% of your traffic to go to US-EAST-1 and 90% to go to EU-WEST-1 so you can use weighted based routing.
Latency :-
Latency based routing allows you to route your traffic based on the lowest network latency for your end user (ie which region will give them the fastest response time). To use latency-based routing you create a latency resource record set for the Amazon EC2 (or ELB) resource in each region that hosts your website. When Amazon Route53 receives a query for your site, it selects the latency resource record set for the region that gives the user the lowest latency. Route 53 then responds with the value associated with that resources record set.
Failover :-
Failover routing policies are used when you want to create an active/passive setup. For example you may want your primary site to be in US-West-1 and your secondary DR Site in US-East-1. Route53 will monitor the health of your primary site using a health check. A health check monitors the health of your endpoints.
Geo location :-
Geo location routing lets you choose where your traffic will be sent based on the geographic location of your users (ie the location from which DNS queries originate). For example, you might want all queries from Europe to be routed to a fleet of EC2 instances that are specifically configured for your European customers. These servers may have the local language of your European customers and all prices are displayed in Euros.
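As an example of the weighted policy described above, here is a sketch creating two record sets that split traffic 10/90 (the zone ID, record name and endpoint IPs are placeholders):

```python
import boto3

route53 = boto3.client("route53")

def weighted_record(identifier, weight, value):
    return {"Action": "UPSERT", "ResourceRecordSet": {
        "Name": "app.example.com.", "Type": "A", "TTL": 60,
        "SetIdentifier": identifier,   # distinguishes the weighted variants
        "Weight": weight,              # relative share of DNS responses
        "ResourceRecords": [{"Value": value}],
    }}

route53.change_resource_record_sets(
    HostedZoneId="Z0123456789ABCDEFGHIJ",
    ChangeBatch={"Changes": [
        weighted_record("us-east-1", 10, "192.0.2.10"),
        weighted_record("eu-west-1", 90, "192.0.2.20"),
    ]},
)
```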
Amazon CloudWatch, A monitoring tool.
What is CloudWatch ?
Amazon CloudWatch is a monitoring service to monitor your AWS resources as well as the applications that you run on AWS.
What can CloudWatch Do ?
CloudWatch can monitor things like:
- EC2.
- DynamoDB
- RDS DB Instances etc.
- As well as monitoring the services CloudWatch also has the ability to monitor custom metrics generated by your applications and services.
- CloudWatch can also monitor any log files your applications generate.
CloudWatch & EC2
By default, CloudWatch performs host-level monitoring of EC2. The default host-level metrics are:
- CPU
- Network
- Disk
- Status Check
RAM utilization is a custom metric. By default, EC2 monitoring is at 5 minute intervals, unless you enable detailed monitoring, which makes it 1 minute intervals. Remember: RAM utilization is not a host-level metric, it is a custom metric (see the sketch below).
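A sketch of publishing RAM utilization as a custom metric with boto3 (the namespace, metric name and value are assumptions; an agent on the instance would need to measure the actual memory figure):

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Push one data point for a custom metric; CloudWatch creates the metric
# on first write. Minimum granularity for custom metrics is 1 minute.
cloudwatch.put_metric_data(
    Namespace="Custom/EC2",
    MetricData=[{
        "MetricName": "MemoryUtilization",
        "Value": 62.5,            # percent used, measured by your own agent
        "Unit": "Percent",
    }],
)
```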
EC2 Status Checks
System Status Checks:-
Checks the underlying physical host; just remember that "system" equals the physical host. Your physical host is going to be either a rack-mounted server or a blade inside a blade chassis; it is the actual physical machine hosting your virtual machine. A system status check always checks the host that the virtual machine is sitting on. Examples of system status check failures:
- Loss of network connectivity.
- Loss of system power.
- Software issues on the physical host.
- Hardware issues on the physical host.
The best way to resolve these issues is to stop the virtual machine and start it again; when you stop and start the virtual machine, it can start up on another physical host.
Instance Status Checks :-
It checks the virtual machine itself. Examples of instance status check failures:
- Failed system status checks
- Mis-configured networking or startup configuration
- Exhausted memory
- Corrupted file system
- Incompatible kernel
The best way to troubleshoot is by rebooting the instance or by making modifications in your operating system.
How Long are CloudWatch Metrics Stored ?
By default, CloudWatch metrics are stored for 2 weeks. You can retrieve data older than 2 weeks using the GetMetricStatistics API or by using third-party tools offered by AWS partners.
You can retrieve data from any terminated EC2 or ELB instance for up to 2 weeks after its termination.
Metric Granularity ?
It depends on the AWS service. Many default metrics for many default services are 1 minute, but it can be 3 or 5 minutes depending on the service.
For custom metrics the minimum granularity that you can have is 1 minute.
CloudWatch Alarms
You can create an alarm to monitor any Amazon CloudWatch metric in your account. This can include EC2 CPU utilization, Elastic Load Balancer latency, or even the charges on your AWS bill. Say you have a budget of $50 and you want to be alerted when you get to around $45 for the month: you can set up a custom CloudWatch alarm which will send you a notification saying you have hit $45 and are close to your $50 budget. You can set the appropriate thresholds that trigger the alarms and also set what actions should be taken if an alarm state is reached (see the sketch below).
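A sketch of that billing alarm in boto3 (the SNS topic ARN, account ID and threshold are placeholders; billing metrics live in us-east-1):

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="monthly-bill-over-45-usd",
    Namespace="AWS/Billing",
    MetricName="EstimatedCharges",
    Dimensions=[{"Name": "Currency", "Value": "USD"}],
    Statistic="Maximum",
    Period=21600,                 # evaluate every 6 hours
    EvaluationPeriods=1,
    Threshold=45.0,               # alert at $45 of a $50 budget
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:billing-alerts"],
)
```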
AWS OpsWorks – Configuration Management - Amazon Web Services
What is OpsWorks?
Cloud-based applications usually require a group of related resources – web servers, application servers, database servers, and so on – that must be created and managed collectively. This collection of instances is called a stack.
AWS OpsWorks provides a simple and straightforward way to create and manage stacks and their associated applications and resources.
Amazon Definition – AWS OpsWorks is an application management service that helps you automate operational tasks like code deployment, software configurations, package installations, database setups, and server scaling using Chef. OpsWorks gives you the flexibility to define your application architecture and resource configuration and handles the provisioning and management of your AWS resources for you. OpsWorks includes automation to scale your application based on time or load, monitoring to help you troubleshoot and take automated action based on the state of your resources, and permissions and policy management to make management of multi-user environments easier.
OpsWorks is a GUI to deploy and configure your infrastructure quickly. OpsWorks consists of two elements, Stacks and Layers.
A stack is a container (or group) of resources such as ELBs, EC2 instances, RDS instances etc.
A layer exists within a stack and consists of things like a web application layer, an application processing layer, or a database layer.
When you create a layer, rather than going and configuring everything manually (like installing Apache, PHP etc) OpsWorks takes care of this for you.
Layers
- You need 1 or more layers in the stack
- An instance must be assigned to at least 1 layer. So any EC2 instance you have must be in, for example, the web server layer, the application layer, or the database layer.
- Which Chef recipes run is determined by the layer the instance belongs to; if you push out an update to your code (these updates are essentially Chef recipes), it will only be applied to the layers you push those updates out to.
- OpsWorks gives you a whole bunch of preconfigured layers, including:
- Application layers
- Database layers
- Load balancer layers
- Caching layers
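A boto3 sketch of creating a stack and one preconfigured PHP app layer (both IAM role ARNs and the account ID are placeholders for roles OpsWorks normally sets up for you):

```python
import boto3

opsworks = boto3.client("opsworks")

stack = opsworks.create_stack(
    Name="demo-stack",
    Region="us-east-1",
    ServiceRoleArn="arn:aws:iam::111122223333:role/aws-opsworks-service-role",
    DefaultInstanceProfileArn=(
        "arn:aws:iam::111122223333:instance-profile/aws-opsworks-ec2-role"),
)

# Add a preconfigured application layer to the stack.
opsworks.create_layer(
    StackId=stack["StackId"],
    Type="php-app",              # one of the built-in layer types
    Name="PHP App Server",
    Shortname="php-app",
)
```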
Amazon Simple Workflow Service - Cloud Workflow Development - AWS
Amazon Simple Workflow
Amazon Simple Workflow Service (Amazon SWF) is a web service that makes it easy to coordinate work across distributed application components. Amazon SWF enables applications for a range of use cases, including media processing, web application back-ends, business process workflows, and analytics pipelines, to be designed as a coordination of tasks. Tasks represent invocations of various processing steps in an application, which can be performed by executable code, web service calls, human actions, and scripts. The most important thing to take from that statement is human actions. Whereas SQS is automated (a web server telling a message queue that an image has been uploaded and needs to be watermarked), SWF is broken down into a series of tasks that can involve people. Amazon actually uses SWF inside their distribution centers: a worker is given a task to go and locate a particular object and get it ready for packaging and sending to a person; it might be a DVD or a toy, for example. So SWF is not entirely programmatic; it can be used in warehouses and distribution centers, and it doesn't just instruct computers; it can involve human actions. A task might simply be "go to section 34E of this distribution warehouse, pick up item number 104, and take that item to the posting and packing area", and that task is assigned to a human worker. That is the main difference between Simple Workflow Service and Simple Queue Service.
SWF vs SQS
- Amazon SWF presents a task-oriented API, whereas Amazon SQS offers a message-oriented API.
- Amazon SWF ensures that a task is assigned only once and is never duplicated. With Amazon SQS, you need to handle duplicated messages yourself and ensure that a message is processed only once.
- Amazon SWF keeps track of all the tasks and events in an application. With Amazon SQS, you need to implement your own application-level tracking, especially if your application uses multiple queues.
AWS CloudFormation - Infrastructure as Code & AWS Resource Provisioning
Amazon CloudFormation
AWS CloudFormation is an easy way to create and manage a collection of AWS resources using a templating language, the CloudFormation template language. A template allows you to provision and update AWS resources in an orderly, predictable and repeatable fashion, and enables you to version control your AWS infrastructure. This follows a trend which is now widespread (and was emerging when CloudFormation was first created): defining the infrastructure that supports your application as code, version controlling your infrastructure in the same way you might version control releases of your code base or the static data incorporated into your application. CloudFormation enables you to deploy and update stacks using the Amazon Web Services console, the command line, or the API, and the API is wrapped by a variety of SDKs (Python, .NET, the recently released Go, and others), so you can control CloudFormation through any of those mechanisms. In common with, I think, all AWS deployment and management tools, you only pay for the resources that you create with AWS CloudFormation: the CloudFormation service itself is free of charge, but as and when you create resources within your account, you are charged for the resources the service creates, which is quite important to bear in mind.
The basic characteristics of AWS CloudFormation: firstly, you don't need to reinvent the wheel. Once you create a CloudFormation template, you can use it repeatedly to create identical copies of the same stack, or use it as the foundation to start a new stack. For every resource you want to create, you can capture and control region-specific variations, such as EC2 AMI (Amazon Machine Image) IDs, Elastic Block Storage volumes, and Relational Database Service snapshot names, within your templates, so the template behaves consistently on every use. The templates themselves are simple JSON-formatted text files, which obviously means, as we said earlier, that you can place them under your normal source or version control mechanisms, store them in public or private locations such as Amazon S3, or exchange them via email. You can take a look at a template and see exactly which AWS resources make up the stack that it will create. You have full control to modify any of the resources created as part of the stack, either by changing the template itself and doing an update, or by working directly with the AWS resources themselves.
The templating language is declarative and flexible, so you can create whatever infrastructure you want: essentially any AWS resources, configuration values and interconnections you might need can go within your template, and AWS CloudFormation then does the heavy lifting of creating those resources when you execute that template, via the management console, the command line interface, or a single request to the AWS API. You don't have to remember the details of the resources in that stack for future creation activities; once you have the template, you can use it as many times as you wish. In many cases you won't need to write these templates from scratch: the sample templates provide some really good resources, which you can use either as whole solution stacks or as a baseline from which you derive a modified stack that suits your purpose exactly, so definitely take a look at making use of those samples.
You also have the ability to customize stacks: you can pass execution-time parameters to your template when the stack is built. You could pass in things like the RDS database size, instance types, or database and web server port numbers. This enables you to use parameterized templates to create multiple stacks that differ in controlled ways, which is very useful if you are, for example, using one stack definition to create both development and production environments: in that scenario you may wish to create development with a small EC2 instance type and production with a larger one, to help you cost optimize as effectively as possible. You may also want to do different things in the different regions where you are creating stacks, and you can use these template parameters to tune settings and thresholds in each region independently of one another, while still maintaining consistent infrastructure deployment across different regions or different environments. That is a really helpful feature of this service. (See the sketch below.)
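The sketch below shows that flow with boto3: a tiny JSON template with one parameter, launched with a runtime value (the AMI ID, stack name and parameter value are placeholder assumptions):

```python
import json
import boto3

template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Parameters": {
        "InstanceType": {"Type": "String", "Default": "t2.micro"},
    },
    "Resources": {
        "WebServer": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "ImageId": "ami-0123456789abcdef0",       # placeholder AMI
                "InstanceType": {"Ref": "InstanceType"},  # runtime parameter
            },
        },
    },
}

cloudformation = boto3.client("cloudformation")
cloudformation.create_stack(
    StackName="demo-web",
    TemplateBody=json.dumps(template),
    # Same template, different environments: override the parameter here.
    Parameters=[{"ParameterKey": "InstanceType", "ParameterValue": "t2.small"}],
)
```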
You also get integration-ready capabilities, so you can integrate CloudFormation with the development and management tools of your choice, whether that is a CI/CD deployment tool or other development and management tools you might use; the programmatic access to CloudFormation via the CLI, SDKs and APIs makes that very simple to do. CloudFormation also publishes progress through another service, the Amazon Simple Notification Service, so you can not only initiate stack creation, update or deletion actions using those programmatic interfaces, but also track stack creation and deletion progress by email, or consume those updates programmatically using the other interface mechanisms that are available, which is really helpful for integration. And there is no charge for CloudFormation in its own right; you only pay for the resources that CloudFormation creates and your application uses, so CloudFormation itself is available at no additional charge.
Amazon Simple Notification Service (SNS) | AWS
Simple Notification Service
Amazon Simple Notification Service (Amazon SNS) is a web service that makes it easy to set up, operate, and send notifications from the cloud. It provides developers with a highly scalable, flexible, and cost-effective capability to publish messages from an application and immediately deliver them to subscribers or other applications.
Amazon SNS follows the “publish-subscribe” (pub-sub) messaging paradigm, with notifications being delivered to clients using a “push” mechanism that eliminates the need to periodically check or “poll” for new information and updates. With simple APIs requiring minimal up-front development effort, no maintenance or management overhead and pay-as-you-go pricing, Amazon SNS gives developers an easy mechanism to incorporate a powerful notification system with their applications.
Push notifications to Apple, Google, Fire OS, and Windows devices, as well as Android devices in China with Baidu Cloud Push.
Besides pushing cloud notifications directly to mobile devices, Amazon SNS can also deliver notifications by SMS text message or email, to Amazon Simple Queue Service (SQS) queues, or to any HTTP endpoint.
To prevent messages from being lost, all messages published to Amazon SNS are stored redundantly across multiple availability zones.
SNS is arranged by Topics
SNS allows you to group multiple recipients using topics. A topic is an “access point” for allowing recipients to dynamically subscribe for identical copies of the same notification. One topic can support deliveries to multiple endpoint types – for example, you can group together iOS, Android and SMS recipients. When you publish once to a topic, SNS delivers appropriately formatted copies of your message to each subscriber.
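A boto3 sketch of that flow: create a topic, subscribe an endpoint, and publish once (the topic name and email address are placeholders):

```python
import boto3

sns = boto3.client("sns")

topic_arn = sns.create_topic(Name="image-events")["TopicArn"]

# Each subscriber receives an appropriately formatted copy of every message.
sns.subscribe(TopicArn=topic_arn, Protocol="email",
              Endpoint="ops@example.com")    # confirmed via an emailed link

sns.publish(TopicArn=topic_arn,
            Subject="New upload",
            Message="image.jpg was uploaded and needs watermarking")
```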
Different Benefits of SNS
- Instantaneous, push-based delivery (no polling)
- Simple APIs and easy integration with applications
- Flexible message delivery over multiple transport protocols
- Inexpensive, pay-as-you-go model with no up-front costs
- Web-based AWS Management Console offers the simplicity of a point-and-click interface
Difference between SNS vs SQS
Both Messaging Services in AWS
- SNS – SNS is a push service
- SQS – SQS is a pull (poll) service
SNS Pricing
- Users Pay $0.50 per 1 million Amazon SNS Requests.
- $0.06 per 100,000 Notification deliveries over HTTP.
- $0.75 per 100 Notification deliveries over SMS.
- $2.00 per 100,000 Notification deliveries over Email.
Amazon Simple Queue Service - message queue service
Amazon SQS is a web service that gives you access to a message queue that can be used to store messages while waiting for a computer to process them.
As an example, take a web application: maybe you upload an image file to this web application, and what that application then does is tell SQS that a user has uploaded an image file and a job needs to be executed on it; that message is stored on the SQS system.
Amazon SQS is a distributed queue system that enables web service applications to quickly and reliably queue messages that one component in the application generates to be consumed by another component. A queue is a temporary repository for messages that are awaiting processing.
Using Amazon SQS, you can decouple the components of an application so they run independently, with Amazon SQS easing message management between components. Any component of a distributed application can store messages in a fail-safe queue. Messages can contain up to 256 KB of text in any format. Any component can later retrieve the messages pro-grammatically using the Amazon SQS API.
The queue acts as a buffer between the component producing and saving data, and the component receiving the data for processing. This means the queue resolves issues that arise if the producer is producing work faster than the consumer can process it, or if the producer or consumer are only intermittently connected to the network.
Amazon SQS ensures delivery of each message at least once, and supports multiple readers and writers interacting with the same queue. A single queue can be used simultaneously by many distributed application components, with no need for those components to coordinate with each other to share the queue.
Amazon SQS is engineered to always be available and deliver messages. One of the resulting trade offs is that SQS does not guarantee first in, first out delivery of messages. For many distributed applications, each message can stand on its own, and as long as all messages are delivered, the order is not important. If your system requires that order be preserved, you can place sequencing information in each message, so that you can reorder the messages when the queue returns them.
To illustrate, suppose you have a number of image files to encode. You create an Amazon SQS message for each file specifying the command (jpeg-encode) and the location of the file in Amazon S3. A pool of Amazon EC2 instances running the needed image processing software does the following:
- Asynchronously pulls the task messages from the queue. SQS messages are always pulled; the queue never pushes them out. You have EC2 instances constantly polling the queue, trying to pull messages down.
- Polls the messages from the queue, then retrieves the filename of the image.
- Processes the conversion, e.g. applies a watermark.
- Writes the image back to Amazon S3
- Writes a “task complete” message to another queue
- Deletes the original task message
- Checks for more messages in the worker queue
- 30 second visibility timeout by default; maximum of 12 hours. (See the sketch after this list.)
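Here is a boto3 sketch of that producer/worker flow (the queue name and message body are placeholders): a producer sends a task message, and a worker polls, processes and deletes it.

```python
import boto3

sqs = boto3.client("sqs")
queue_url = sqs.create_queue(QueueName="image-jobs")["QueueUrl"]

# Producer: the web application stores a task message on the queue.
sqs.send_message(QueueUrl=queue_url,
                 MessageBody="jpeg-encode s3://izapcloudguru-demo/photo.jpg")

# Worker: poll the queue (SQS is pull-based), process, then delete.
while True:
    resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1,
                               WaitTimeSeconds=20)   # long polling
    messages = resp.get("Messages", [])
    if not messages:
        break                     # queue drained (for this sketch)
    for msg in messages:
        print("processing:", msg["Body"])
        # Delete within the visibility timeout, or the message reappears.
        sqs.delete_message(QueueUrl=queue_url,
                           ReceiptHandle=msg["ReceiptHandle"])
```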
SQS Pricing
- First 1 million Amazon SQS Requests per month are free.
- $0.50 per 1 million Amazon SQS Requests per month thereafter ($0.00000050 per SQS Request).
- A single request can have from 1 to 10 messages, up to a maximum total payload of 256KB.
- Each 64KB ‘chunk’ of payload is billed as 1 request. For example, a single API call with a 256KB payload will be billed as four requests.