Improve AWS Roadmap (#7403)

* RDS

* DynamoDB

* ECS

* Lambda

* Final Phase

Vedansh committed by GitHub
parent 8535c6eef2
commit 788825fb75
36 files changed (changed line counts in parentheses):

  1. src/data/roadmaps/aws/content/110-rds/100-db-instances.md (4)
  2. src/data/roadmaps/aws/content/110-rds/101-storage-types/100-general-purpose.md (4)
  3. src/data/roadmaps/aws/content/110-rds/101-storage-types/101-provisioned-iops.md (4)
  4. src/data/roadmaps/aws/content/110-rds/101-storage-types/102-magnetic.md (4)
  5. src/data/roadmaps/aws/content/110-rds/101-storage-types/index.md (4)
  6. src/data/roadmaps/aws/content/110-rds/102-backup-restore.md (4)
  7. src/data/roadmaps/aws/content/110-rds/index.md (4)
  8. src/data/roadmaps/aws/content/111-dynamodb/100-tables-items.md (4)
  9. src/data/roadmaps/aws/content/111-dynamodb/101-primary-keys.md (4)
  10. src/data/roadmaps/aws/content/111-dynamodb/102-data-modeling.md (4)
  11. src/data/roadmaps/aws/content/111-dynamodb/103-streams.md (4)
  12. src/data/roadmaps/aws/content/111-dynamodb/104-capacity-settings.md (4)
  13. src/data/roadmaps/aws/content/111-dynamodb/105-limits.md (4)
  14. src/data/roadmaps/aws/content/111-dynamodb/106-backup-restore.md (4)
  15. src/data/roadmaps/aws/content/111-dynamodb/107-dynamo-local.md (4)
  16. src/data/roadmaps/aws/content/111-dynamodb/index.md (4)
  17. src/data/roadmaps/aws/content/112-elasticache/100-quotas.md (4)
  18. src/data/roadmaps/aws/content/112-elasticache/index.md (4)
  19. src/data/roadmaps/aws/content/113-ecs/100-clusters.md (4)
  20. src/data/roadmaps/aws/content/113-ecs/101-tasks.md (4)
  21. src/data/roadmaps/aws/content/113-ecs/102-services.md (4)
  22. src/data/roadmaps/aws/content/113-ecs/103-launch-config.md (4)
  23. src/data/roadmaps/aws/content/113-ecs/104-fargate.md (4)
  24. src/data/roadmaps/aws/content/113-ecs/index.md (4)
  25. src/data/roadmaps/aws/content/114-ecr.md (5)
  26. src/data/roadmaps/aws/content/115-eks.md (5)
  27. src/data/roadmaps/aws/content/116-lambda/100-creating-invoking.md (4)
  28. src/data/roadmaps/aws/content/116-lambda/101-layers.md (4)
  29. src/data/roadmaps/aws/content/116-lambda/102-custom-runtimes.md (4)
  30. src/data/roadmaps/aws/content/116-lambda/103-versioning-aliases.md (4)
  31. src/data/roadmaps/aws/content/116-lambda/104-event-bridge.md (4)
  32. src/data/roadmaps/aws/content/116-lambda/105-cold-start-limitations.md (4)
  33. src/data/roadmaps/aws/content/116-lambda/106-api-gateway.md (4)
  34. src/data/roadmaps/aws/content/116-lambda/107-lambda-edge.md (8)
  35. src/data/roadmaps/aws/content/116-lambda/index.md (4)
  36. src/data/roadmaps/aws/content/index.md (10)

@@ -1,3 +1,7 @@
# DB Instances

The term "DB Instance" is used within the context of Amazon's Relational Database Service (RDS). A DB Instance is essentially an isolated database environment in the cloud, run within Amazon RDS. A DB Instance can contain multiple user-created databases, and can be accessed using the same tools and applications that you might use with a stand-alone database instance. You can create and manage a DB Instance via the AWS Management Console, the AWS RDS Command Line Interface, or through simple API calls.
Visit the following resources to learn more:
- [@official@DB Instances - RDS](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.DBInstance.html)

@@ -1,3 +1,7 @@
# General Purpose

General Purpose Storage in AWS refers to Amazon Elastic Block Store (Amazon EBS) volumes designed for a broad range of workloads, including small to medium-sized databases, development and test environments, and boot volumes. The General Purpose SSD (gp2) volumes offer cost-effective storage that is ideal for a broad range of transactional workloads and delivers consistent baseline performance of 3 IOPS/GB, up to a maximum of 16,000 IOPS. Moreover, General Purpose SSD (gp2) volumes also provide the ability to burst to higher levels of performance when needed.
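As a rough sketch (not from the commit), creating a gp2 volume with boto3; the region and Availability Zone are assumed:

```python
# Hypothetical sketch: create a General Purpose SSD (gp2) EBS volume.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",  # assumed AZ
    Size=100,                       # 100 GiB -> 300 IOPS baseline at 3 IOPS/GB
    VolumeType="gp2",
)
print(volume["VolumeId"])
```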
Visit the following resources to learn more:
- [@official@General Purpose Storage](https://aws.amazon.com/ebs/general-purpose/)

@@ -1,3 +1,7 @@
# Provisioned IOPS

"Provisioned IOPS" is a storage option available in Amazon Web Services' (AWS) Elastic Block Store (EBS). This option is designed to deliver fast, predictable, and consistent I/O performance. It allows you to specify an IOPS rate when creating a volume, and AWS will provision that rate of performance, hence the name. It's primarily suitable for databases and workloads that require high IOPS. An EBS volume with provisioned IOPS is backed by solid-state drives (SSDs), and you can specify up to a maximum of 64,000 IOPS per volume.
Visit the following resources to learn more:
- [@official@Provisioned IOPS](https://docs.aws.amazon.com/ebs/latest/userguide/provisioned-iops.html)

@@ -1,3 +1,7 @@
# Magnetic

"Magnetic" in AWS refers to Magnetic storage, also known as Amazon EBS (Elastic Block Store) Magnetic volumes. These storage types are designed for workloads where data is accessed infrequently, and scenarios where the lowest storage cost is important. Magnetic volumes offer cost-effective storage for applications with moderate or bursty I/O requirements. Though magnetic storage provides the lowest cost per gigabyte of all EBS volume types, it has the poorest performance capability and a higher latency compared to solid-state drive storage options.
Visit the following resources to learn more:
- [@official@Magnetic Storage](https://aws.amazon.com/ebs/previous-generation/)

@@ -1,3 +1,7 @@
# Storage Types

AWS RDS offers three types of storage: General Purpose (SSD), Provisioned IOPS (SSD), and Magnetic. General Purpose (SSD) storage delivers a consistent baseline of 3 IOPS/GB and can burst up to 3,000 IOPS. It's suitable for a broad range of database workloads that have moderate I/O requirements. Provisioned IOPS (SSD) storage is designed to meet the needs of I/O-intensive workloads, particularly database workloads that are sensitive to storage performance and consistency. Magnetic storage, the most inexpensive type, is perfect for applications where the lowest storage cost is important and is best for infrequently accessed data.
Visit the following resources to learn more:
- [@official@RDS Storage Types](https://aws.amazon.com/rds/instance-types/)

@@ -1,3 +1,7 @@
# Backup / Restore

Backup and restore in AWS RDS provides the ability to restore your DB instance to a specific point in time. When you initiate a point-in-time restore, a new DB instance is created, and transactions that occurred after the specified point in time are not part of the new instance. You can restore up to the last restorable time (typically within the last five minutes) as indicated in the AWS RDS Management Console. The time a restore takes depends on the difference between when you initiate it and the time you are restoring to. The process has no impact on the source database, and you can continue using your database during the restore.
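A minimal sketch of a point-in-time restore with boto3 (illustrative; the source and target identifiers are assumptions):

```python
# Hypothetical sketch: point-in-time restore into a new DB instance.
import boto3

rds = boto3.client("rds")

rds.restore_db_instance_to_point_in_time(
    SourceDBInstanceIdentifier="demo-db",          # assumed existing instance
    TargetDBInstanceIdentifier="demo-db-restored", # new instance created by the restore
    UseLatestRestorableTime=True,                  # or pass RestoreTime=<datetime>
)
```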
Visit the following resources to learn more:
- [@official@Backup & Restore - RDS](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_CommonTasks.BackupRestore.html)

@@ -1,3 +1,7 @@
# RDS

Amazon RDS (Relational Database Service) is a web service from Amazon Web Services. It's designed to simplify the setup, operation, and scaling of relational databases in the cloud. This service provides cost-efficient, resizable capacity for an industry-standard relational database and manages common database administration tasks. RDS supports six database engines: Amazon Aurora, PostgreSQL, MySQL, MariaDB, Oracle Database, and SQL Server. These engines give you the ability to run instances with storage ranging from 5 GB to 6 TB, accommodating your specific use case. It also ensures the database is up-to-date with the latest patches, automatically backs up your data, and offers encryption at rest and in transit.
Visit the following resources to learn more:
- [@official@Amazon RDS](https://aws.amazon.com/rds/)

@@ -1,3 +1,7 @@
# Tables / Items / Attributes

In Amazon DynamoDB, tables are a collection of items. An item is a group of attributes that is identified by a primary key. Items are similar to rows or records in other database systems. Each item in a table is uniquely identifiable by a primary key. This key can be simple (partition key only) or composite (partition key and sort key). Every attribute in an item is a name-value pair. The name of the attribute is a string and the value of an attribute can be of the following types: String, Number, Binary, Boolean, Null, List, Map, String Set, Number Set, and Binary Set.
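For illustration (not part of this change), writing and reading one item with boto3; the table and attribute names are made up:

```python
# Hypothetical sketch: put and get an item in an assumed existing table.
import boto3

table = boto3.resource("dynamodb").Table("Music")  # assumed table name

table.put_item(Item={
    "Artist": "Example Artist",   # partition key (String)
    "SongTitle": "Example Song",  # sort key (String)
    "Year": 2024,                 # Number attribute
    "Tags": {"demo", "sample"},   # Python set -> String Set attribute
})
item = table.get_item(Key={"Artist": "Example Artist", "SongTitle": "Example Song"})
print(item["Item"])
```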
Visit the following resources to learn more:
- [@official@Amazon DynamoDB](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/WorkingWithItems.html)

@@ -1,3 +1,7 @@
# Primary Keys / Secondary Indexes

DynamoDB supports two types of primary keys, namely `Partition Key` and `Composite Key` (Partition Key and Sort Key). A `Partition Key`, also known as a hash key, is a simple primary key that has a scalar value (a string, a number, or a binary blob). DynamoDB uses the partition key's value to distribute data across multiple partitions for scalable performance. A `Composite Key` consists of two attributes. The first attribute is the partition key, and the second attribute is the sort key. DynamoDB uses the partition key to spread data across partitions and also uses the sort key to store items in sorted order within those partitions. This sort key provides further granular control over data organization.
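A minimal sketch (table and attribute names assumed) of declaring a composite primary key at table creation:

```python
# Hypothetical sketch: a table with a composite primary key (partition + sort key).
import boto3

dynamodb = boto3.client("dynamodb")

dynamodb.create_table(
    TableName="Music",  # assumed name
    KeySchema=[
        {"AttributeName": "Artist", "KeyType": "HASH"},      # partition key
        {"AttributeName": "SongTitle", "KeyType": "RANGE"},  # sort key
    ],
    AttributeDefinitions=[
        {"AttributeName": "Artist", "AttributeType": "S"},
        {"AttributeName": "SongTitle", "AttributeType": "S"},
    ],
    BillingMode="PAY_PER_REQUEST",
)
```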
Visit the following resources to learn more:
- [@official@Primary Keys / Secondary Indexes](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/SecondaryIndexes.html)

@@ -2,3 +2,7 @@
In AWS DynamoDB, data modeling is a process that involves determining how to organize, access, and understand the data stored in a database. This process is crucial as it outlines how data will be stored and accessed across a wide range of databases and applications. The primary components of data modeling in DynamoDB include tables, items, and attributes. Tables are collections of data. Items are individual pieces of data that are stored in a table. Attributes are elements of data that relate to a particular item. DynamoDB uses a NoSQL model, which means it is schema-less: the data can be structured in any way that your business needs prescribe, and can be changed at any time. This contrasts with traditional relational databases, which require pre-defined schemas.
Visit the following resources to learn more:
- [@official@Data Modeling](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/data-modeling.html)

@@ -1,3 +1,7 @@
# Streams

AWS DynamoDB Streams is a time-ordered sequence of item-level modifications in any DynamoDB table. When you enable a stream on a table, DynamoDB captures information about every modification to data items in the table. The changes are recorded in near real-time and can be set up to trigger AWS Lambda functions immediately after an event has occurred. With DynamoDB Streams, applications can access this log and view the data modifications in the order they occurred. The stream records item-level data modifications such as `Insert`, `Modify`, and `Remove`. Each stream record is then organized into a stream view type, where applications can access up to 24 hours of data modification history.
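As a sketch (the table name is an assumption), enabling a stream and choosing a view type looks like this with boto3:

```python
# Hypothetical sketch: enable a stream capturing both old and new item images.
import boto3

dynamodb = boto3.client("dynamodb")

dynamodb.update_table(
    TableName="Music",  # assumed existing table
    StreamSpecification={
        "StreamEnabled": True,
        "StreamViewType": "NEW_AND_OLD_IMAGES",  # also: KEYS_ONLY, NEW_IMAGE, OLD_IMAGE
    },
)
```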
Visit the following resources to learn more:
- [@official@Streams](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.Lambda.html)

@@ -1,3 +1,7 @@
# Capacity Settings

Amazon DynamoDB capacity settings refer to the read and write capacity of your tables. The read capacity unit is a measure of the number of strongly consistent reads per second, while the write capacity unit is a measure of the number of writes per second. You can set up these capacities either as provisioned or on-demand. Provisioned capacity is where you specify the number of reads and writes per second that you expect your application to require. On the other hand, on-demand capacity allows DynamoDB to automatically manage your read and write capacity to meet the needs of your workload.
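For illustration (table name and unit counts assumed), switching a table between the two modes:

```python
# Hypothetical sketch: switch a table from on-demand to provisioned capacity.
import boto3

dynamodb = boto3.client("dynamodb")

dynamodb.update_table(
    TableName="Music",  # assumed existing table
    BillingMode="PROVISIONED",  # "PAY_PER_REQUEST" for on-demand
    ProvisionedThroughput={
        "ReadCapacityUnits": 10,  # strongly consistent reads per second
        "WriteCapacityUnits": 5,  # writes per second
    },
)
```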
Visit the following resources to learn more:
- [@official@Capacity Settings](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/capacity-mode.html)

@@ -1,3 +1,7 @@
# Limits

In terms of DynamoDB, it's important to be aware of certain limits. There are two types of capacity modes, provisioned and on-demand, with varying read/write capacity units. You have control over the provisioning of throughput for read/write operations. However, there's a maximum limit of 40,000 read capacity units and 40,000 write capacity units for on-demand mode per table. It's also important to note that the partition key value and sort key value can be a maximum of 2,048 bytes and 1,024 bytes respectively. Each item, including its primary key, can be a maximum of 400 KB. The total provisioned throughput for all tables and global secondary indexes in a region cannot exceed 20,000 write capacity units and 20,000 read capacity units for on-demand mode. Remember, you can request to increase these limits by reaching out to AWS Support.
Visit the following resources to learn more:
- [@official@Limit Settings](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/ServiceQuotas.html)

@@ -1,3 +1,7 @@
# Backup / Restore

In AWS, DynamoDB has built-in support for data backup and restore features. This includes both on-demand and continuous backups. On-demand backups allow you to create complete backups of your tables for long-term retention and archival, helping meet corporate and governmental regulatory requirements. Continuous backups enable you to restore your table data to any point in time in the last 35 days, thus offering protection from accidental writes or deletes. During a restore operation, you can choose to restore the data to a new DynamoDB table or overwrite data in an existing table. These backups include all necessary metadata, including DynamoDB global secondary indexes.
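A brief sketch of both flavors with boto3 (table and backup names are placeholders):

```python
# Hypothetical sketch: an on-demand backup plus continuous (point-in-time) backups.
import boto3

dynamodb = boto3.client("dynamodb")

# One-off, long-retention backup
dynamodb.create_backup(TableName="Music", BackupName="music-archive-2024")  # assumed names

# Continuous backups: enables restores to any point in the last 35 days
dynamodb.update_continuous_backups(
    TableName="Music",
    PointInTimeRecoverySpecification={"PointInTimeRecoveryEnabled": True},
)
```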
Visit the following resources to learn more:
- [@official@Backup & Restore](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Backup-and-Restore.html)

@@ -1,3 +1,7 @@
# DynamoDB Local

DynamoDB Local is a downloadable version of Amazon DynamoDB that lets you write and test applications without accessing the real AWS service. It mimics the actual DynamoDB service, and because it runs entirely on your machine, no internet connectivity is required. It supports the same API as DynamoDB and works with your existing DynamoDB API calls. The data is stored locally on your system, not on a network, and persists between restarts of DynamoDB Local.
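Assuming DynamoDB Local is already running on its default port (8000), pointing the SDK at it is a one-line change:

```python
# Hypothetical sketch: connect boto3 to a locally running DynamoDB Local process.
import boto3

dynamodb = boto3.resource(
    "dynamodb",
    endpoint_url="http://localhost:8000",  # DynamoDB Local's default port
    region_name="us-east-1",               # arbitrary; no AWS endpoint is contacted
    aws_access_key_id="fake",              # dummy credentials are accepted locally
    aws_secret_access_key="fake",
)
print(list(dynamodb.tables.all()))
```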
Visit the following resources to learn more:
- [@official@DynamoDB Local](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DynamoDBLocal.html)

@@ -1,3 +1,7 @@
# DynamoDB

Amazon DynamoDB is a fully managed NoSQL database solution that provides fast and predictable performance with seamless scalability. It is a key-value and document database that delivers single-digit millisecond performance at any scale. DynamoDB can handle more than 10 trillion requests per day and support peaks of more than 20 million requests per second. It maintains high durability of data via automatic replication across three Availability Zones within an AWS Region.
Visit the following resources to learn more:
- [@official@Amazon DynamoDB](https://aws.amazon.com/dynamodb/)

@@ -1,3 +1,7 @@
# Quotas

AWS ElastiCache quotas define the limit on the maximum number of clusters, nodes, parameter groups, and subnet groups you can create in an AWS account. These quotas vary by region and can be increased upon request through AWS Support. Quotas for ElastiCache are implemented to prevent unintentional overconsumption of resources. It's important to monitor your current usage and understand your account's quotas to manage your ElastiCache resources efficiently.
Visit the following resources to learn more:
- [@official@Amazon ElastiCache Quotas](https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/quota-limits.html)

@@ -1,3 +1,7 @@
# ElastiCache

Amazon ElastiCache is a fully managed in-memory data store from Amazon Web Services (AWS). It is designed to speed up dynamic web applications by reducing the latency and throughput constraints associated with disk-based databases. ElastiCache supports two open-source in-memory engines: Memcached and Redis. Redis is commonly used for database caching, session management, messaging, and queueing, while Memcached is typically used for caching smaller, simpler datasets. One of the key features of ElastiCache is its uniform performance and scalability, which enables it to handle large datasets and high-traffic websites.
Visit the following resources to learn more:
- [@official@Amazon ElastiCache](https://docs.aws.amazon.com/AmazonElastiCache/latest/dg/WhatIs.html)

@@ -1,3 +1,7 @@
# Clusters / ECS Container Agents

In AWS, an ECS **Cluster** is a logical grouping of tasks or services. If you run tasks or create services, you do it inside a cluster, so it's a vital building block of the Amazon ECS infrastructure. It serves as a namespace for your tasks and services, as these entities cannot span multiple clusters. The Amazon ECS tasks that run in a cluster are fundamentally distributed across all the Container Instances within an ECS Cluster.
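For illustration (cluster name assumed), creating and listing clusters with boto3:

```python
# Hypothetical sketch: create an ECS cluster and list the account's clusters.
import boto3

ecs = boto3.client("ecs")

ecs.create_cluster(clusterName="demo-cluster")  # assumed name
print(ecs.list_clusters()["clusterArns"])
```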
Visit the following resources to learn more:
- [@official@Clusters / ECS Container Agents](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html)

@@ -1,3 +1,7 @@
# Tasks

Tasks in Amazon ECS are the instantiation of a task definition within a cluster. They can be thought of as the running instance of the definition, the same way an object is an instance of a class in object-oriented programming. A task definition is a text file in JSON format that describes one or more containers, up to a maximum of 10. The task definition parameters specify the container image to use, the amount of CPU and memory to allocate for each container, and the launch type to use for the task, among other options. When a task is launched, it is scheduled on an available container instance within the cluster.
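A minimal sketch of registering a one-container task definition (family name, image, and sizes are assumptions):

```python
# Hypothetical sketch: register a single-container Fargate-compatible task definition.
import boto3

ecs = boto3.client("ecs")

ecs.register_task_definition(
    family="demo-task",  # assumed family name
    networkMode="awsvpc",
    requiresCompatibilities=["FARGATE"],
    cpu="256",     # task-level CPU units
    memory="512",  # task-level memory (MiB)
    containerDefinitions=[{
        "name": "web",
        "image": "public.ecr.aws/nginx/nginx:latest",  # assumed image
        "portMappings": [{"containerPort": 80}],
        "essential": True,
    }],
)
```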
Visit the following resources to learn more:
- [@official@Tasks in ECS](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definitions.html)

@@ -1,3 +1,7 @@
# Services

An Amazon ECS service runs and maintains a specified number of instances of a task definition simultaneously in an ECS cluster. If any of your tasks fail or stop for any reason, the Amazon ECS service scheduler launches another instance of your task definition to replace it and maintain the desired count of tasks, ensuring the service's reliability and availability. ECS services can be scaled manually or with automated scaling policies based on CloudWatch alarms. In addition, ECS service scheduling options define how Amazon ECS places and terminates tasks.
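Building on the sketches above (all names and the subnet ID are placeholders), a service that keeps two copies of a task running:

```python
# Hypothetical sketch: maintain two running copies of a task definition as a service.
import boto3

ecs = boto3.client("ecs")

ecs.create_service(
    cluster="demo-cluster",      # assumed cluster
    serviceName="web-service",
    taskDefinition="demo-task",  # latest revision of the assumed family
    desiredCount=2,              # the scheduler replaces failed tasks to hold this count
    launchType="FARGATE",
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],  # placeholder subnet ID
            "assignPublicIp": "ENABLED",
        }
    },
)
```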
Visit the following resources to learn more:
- [@official@Services in ECS](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs_services.html)

@@ -1,3 +1,7 @@
# Launch Config / Autoscaling Groups

`Launch Configuration` is a template that an Auto Scaling group uses to launch EC2 instances. When you create a launch configuration, you specify information for the instances such as the ID of the Amazon Machine Image (AMI), the instance type, a key pair, one or more security groups, and a block device mapping. If you've launched an instance before, you can specify the same parameters for your launch configuration. Any parameters that you don't specify are automatically filled in with the default values that are set by the launch wizard.
Visit the following resources to learn more:
- [@official@Launch Config in EC2](https://docs.aws.amazon.com/autoscaling/ec2/userguide/launch-configurations.html)

@@ -1,3 +1,7 @@
# Fargate

Fargate is a technology used for the deployment of containers in Amazon's Elastic Container Service (ECS). This technology completely removes the need to manage the EC2 instances for your infrastructure; therefore, you would not have to be concerned about selecting the right type of EC2 instances, deciding when to scale your clusters, or optimizing cluster packing. In simple terms, Fargate allows you to focus on designing and building your applications instead of managing the infrastructure.
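A sketch of the "no instances to manage" point: running a one-off task on Fargate (cluster, task definition, and subnet ID are assumptions carried over from the earlier sketches):

```python
# Hypothetical sketch: run a one-off task on Fargate; no container instances involved.
import boto3

ecs = boto3.client("ecs")

ecs.run_task(
    cluster="demo-cluster",      # assumed cluster
    taskDefinition="demo-task",  # assumed task definition
    launchType="FARGATE",
    count=1,
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],  # placeholder subnet ID
            "assignPublicIp": "ENABLED",
        }
    },
)
```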
Visit the following resources to learn more:
- [@official@Amazon Fargate](https://aws.amazon.com/fargate/)

@@ -1,3 +1,7 @@
# ECS

Amazon Elastic Container Service (ECS) is a highly scalable, high-performance container orchestration service that supports Docker containers and allows you to easily run and scale containerized applications on AWS. ECS eliminates the need for you to install, operate, and scale your own cluster management infrastructure. With simple API calls, you can launch and stop Docker-enabled applications, query the complete state of your application, and access many familiar features like Amazon EC2 security groups, EBS volumes and IAM roles. ECS also integrates with AWS services like AWS App Mesh for service mesh, Amazon RDS for database services, and AWS Systems Manager for operational control.
Visit the following resources to learn more:
- [@official@Amazon Elastic Container Service (ECS)](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html)

@@ -1,3 +1,8 @@
# ECR

AWS Elastic Container Registry (ECR) is a fully-managed Docker container registry that makes it easy for developers to store, manage, and deploy Docker container images. ECR eliminates the need to operate your own container repositories or worry about scaling the underlying infrastructure. ECR hosts your images in a highly available and scalable architecture, allowing you to reliably deploy containers for your applications. Integration with AWS Identity and Access Management (IAM) provides resource-level control of each repository.
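For illustration (repository name assumed), creating a repository and obtaining Docker login credentials with boto3:

```python
# Hypothetical sketch: create an ECR repository and fetch a Docker login token.
import base64
import boto3

ecr = boto3.client("ecr")

repo = ecr.create_repository(repositoryName="demo-app")  # assumed name
print(repo["repository"]["repositoryUri"])

token = ecr.get_authorization_token()["authorizationData"][0]
user, password = base64.b64decode(token["authorizationToken"]).decode().split(":")
# `user` is "AWS"; pass `password` to `docker login` against the registry endpoint.
```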
Visit the following resources to learn more:
- [@official@AWS Elastic Container Registry (ECR)](https://aws.amazon.com/ecr/)
- [@official@Concepts of Amazon ECR](https://docs.aws.amazon.com/AmazonECR/latest/userguide/concept-and-components.html)

@@ -1,3 +1,8 @@
# EKS

Amazon Elastic Kubernetes Service (EKS) is a managed service that simplifies the deployment, management, and scaling of containerized applications using Kubernetes, an open-source container orchestration platform. EKS manages the Kubernetes control plane for the user, making it easy to run Kubernetes applications without the operational overhead of maintaining the control plane yourself. With EKS, you can leverage AWS services such as Auto Scaling Groups, Elastic Load Balancer, and Route 53 for resilient and scalable application infrastructure. Additionally, EKS supports both Spot and On-Demand Instances, and includes integrations with AWS App Mesh and AWS Fargate for serverless compute.
Visit the following resources to learn more:
- [@official@Amazon Elastic Kubernetes Service (EKS)](https://aws.amazon.com/eks/)
- [@official@Concepts of Amazon EKS](https://docs.aws.amazon.com/eks/)

@@ -1,3 +1,7 @@
# Creating / Invoking Functions

To create a Lambda function in AWS, navigate to the AWS Management Console, select "Lambda" under "Compute" and then "Create function". Specify the function name, execution role and runtime environment. Once the function is created, you can write or paste the code into the inline editor. To invoke a Lambda function, you can do it manually, via API Gateway, or on a schedule. Manual invocation can be done by selecting your function in the AWS console, choosing "Test", adding the event JSON, and choosing "Test" again. If set up with API Gateway, the function is triggered when its endpoints are hit. Scheduling involves using Amazon EventBridge (formerly CloudWatch Events) to trigger the function periodically.
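Invocation can also be done programmatically. A minimal sketch (the function name and event payload are assumptions):

```python
# Hypothetical sketch: invoke an existing function synchronously with a JSON event.
import json
import boto3

lambda_client = boto3.client("lambda")

response = lambda_client.invoke(
    FunctionName="demo-fn",                # assumed function name
    InvocationType="RequestResponse",      # "Event" for asynchronous invocation
    Payload=json.dumps({"key": "value"}),  # the test event
)
print(json.load(response["Payload"]))
```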
Visit the following resources to learn more:
- [@official@Create Your First Lambda Function](https://docs.aws.amazon.com/lambda/latest/dg/getting-started.html)

@@ -1,3 +1,7 @@
# Layers

AWS Lambda layers are distribution mechanisms for libraries, custom runtimes, and other function dependencies. In other words, they are a distribution mechanism for artifacts. The layers can be versioned, and each version is immutable. An AWS Lambda layer is a ZIP archive that contains libraries, a custom runtime, or other dependencies. Lambda functions can be configured to reference these layers. The layer is then extracted to the `/opt` directory in the function execution environment. Each runtime looks for libraries in a different location under the `/opt` folder, depending on the language.
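As a sketch (layer name, archive path, and function name are assumptions), publishing a layer version and attaching it to a function:

```python
# Hypothetical sketch: publish a layer version and attach it to a function.
import boto3

lambda_client = boto3.client("lambda")

layer = lambda_client.publish_layer_version(
    LayerName="demo-deps",                               # assumed layer name
    Content={"ZipFile": open("deps.zip", "rb").read()},  # assumed local ZIP archive
    CompatibleRuntimes=["python3.12"],
)
lambda_client.update_function_configuration(
    FunctionName="demo-fn",             # assumed function
    Layers=[layer["LayerVersionArn"]],  # layer versions are immutable
)
```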
Visit the following resources to learn more:
- [@official@AWS Lambda Layers](https://docs.aws.amazon.com/lambda/latest/dg/chapter-layers.html)

@@ -1,3 +1,7 @@
# Custom Runtimes

AWS Lambda supports several preconfigured runtimes for you to choose from, including Node.js, Java, Ruby, Python, and Go. However, if your preferred programming language or specific language version isn't supported natively, you can use **custom runtimes**. A custom runtime in AWS Lambda is a Linux executable that handles invocations and communicates with the Lambda service. It enables you to use any programming language to handle AWS Lambda events. The runtime is responsible for running the bootstrap, which is an executable file, to start the execution environment, process incoming requests, and manage interaction between your function code and the infrastructure.
Visit the following resources to learn more:
- [@official@AWS Lambda Runtimes](https://docs.aws.amazon.com/lambda/latest/dg/lambda-runtimes.html)

@@ -1,3 +1,7 @@
# Versioning / Aliases

In AWS Lambda, **Versioning** provides a way to manage distinct, separate iterations of a Lambda function, enabling both risk reduction and more efficient development cycles. An **Alias**, by contrast, is a pointer to a specific Lambda function version. Aliases are mutable: they can be re-pointed to a different version at any time. With aliases, you can avoid directly updating event triggers or downstream services; they point to the alias, and the version behind it can be updated independently, separating infrastructure changes from code changes.
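A minimal sketch of the pattern (function and alias names assumed): publish an immutable version, then point a mutable alias at it.

```python
# Hypothetical sketch: publish a version, then create a "prod" alias pointing at it.
import boto3

lambda_client = boto3.client("lambda")

version = lambda_client.publish_version(FunctionName="demo-fn")  # assumed function
lambda_client.create_alias(
    FunctionName="demo-fn",
    Name="prod",                         # callers reference demo-fn:prod
    FunctionVersion=version["Version"],  # re-point later with update_alias
)
```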
Visit the following resources to learn more:
- [@official@AWS Lambda Versioning](https://docs.aws.amazon.com/lambda/latest/dg/configuration-aliases.html)

@@ -1,3 +1,7 @@
# Event Bridge / Scheduled Execution

Amazon EventBridge is a serverless event bus that makes it easy to connect applications together using data from your own applications, Software-as-a-Service (SaaS) applications, and AWS services. It enables you to build a bridge between your applications, regardless of where they are. With EventBridge, you simply ingest, filter, transform, and deliver events. It simplifies the process of ingesting and delivering events across your application architecture, while also handling event management. EventBridge combines all of the functionality of CloudWatch Events with new and enhanced features.
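For the scheduled-execution side of this topic, a sketch of a rule that fires every five minutes (rule name and target ARN are placeholders):

```python
# Hypothetical sketch: trigger a Lambda function every five minutes via a rule.
import boto3

events = boto3.client("events")

events.put_rule(
    Name="every-5-minutes",                # assumed rule name
    ScheduleExpression="rate(5 minutes)",  # cron(...) expressions also work
)
events.put_targets(
    Rule="every-5-minutes",
    Targets=[{
        "Id": "demo-fn-target",
        "Arn": "arn:aws:lambda:us-east-1:123456789012:function:demo-fn",  # placeholder ARN
    }],
)
# The function also needs a resource-based permission (lambda add_permission)
# allowing events.amazonaws.com to invoke it.
```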
Visit the following resources to learn more:
- [@official@Amazon EventBridge](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-what-is.html)

@@ -1,3 +1,7 @@
# Cold Start and Limitations

AWS Lambda's cold start refers to the delay experienced when Lambda invokes a function for the first time or after its code or configuration has been updated. This happens because Lambda needs to do some initial setup, such as initializing the runtime, before it can execute the function code. This setup adds to the function's execution time and is particularly noticeable in situations where low latency is critical. Cold start times also vary with the function's configuration and deployment size, with larger functions typically taking longer to initialize. Further, idle functions may face a cold start again, as AWS clears out idle execution environments from time to time.
Visit the following resources to learn more:
- [@official@AWS Cold Start and Limitations](https://docs.aws.amazon.com/lambda/latest/dg/snapstart.html)

@@ -1,3 +1,7 @@
# API Gateway

AWS API Gateway is a fully-managed service that makes it easy to create, publish, maintain, monitor, and secure APIs at any scale. It acts as a "front door" for applications to access data, business logic, or functionality from your backend services, like workloads running on Amazon EC2, AWS Lambda, or any web application. API Gateway handles all the tasks involved in accepting and processing up to hundreds of thousands of concurrent API calls, including traffic management, authorization and access control, throttling, and API version management. It provides robust, native capabilities to deliver the API governance and lifecycle management capabilities your business needs to productize its APIs.
Visit the following resources to learn more:
- [@official@AWS API Gateway](https://aws.amazon.com/api-gateway/)

@@ -1,3 +1,7 @@
# Lambda Edge

Lambda@Edge is a feature of AWS Lambda that allows you to run functions at AWS edge locations, closest to your customers. This can result in lower latency response times and better customer experiences. It allows you to customize the content that CloudFront delivers, executing the code in response to CloudFront requests and responses. Lambda@Edge scales automatically, from a few requests per day to thousands per second.
Visit the following resources to learn more:
- [@official@AWS Lambda Edge](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/lambda-at-the-edge.html)

@@ -1,3 +1,7 @@
# Lambda

AWS Lambda is a serverless computing service that runs code in response to events and automatically manages the computing resources required by that code. It lets you run applications and services without thinking about servers. You can execute your back-end application code, run code in response to HTTP requests using Amazon API Gateway, or call your functions using API calls made using AWS SDKs. AWS Lambda automatically scales your applications in response to incoming requests. AWS Lambda supports Java, Go, PowerShell, Node.js, C#, Python, and Ruby code, and provides a Runtime API which allows you to use any additional programming languages to author your functions.
Visit the following resources to learn more:
- [@official@AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html)

@@ -1,9 +1,13 @@
# Introduction

AWS (Amazon Web Services) offers a broad set of global cloud-based products including compute, storage, databases, analytics, networking, mobile, developer tools, management tools, IoT, security, and enterprise applications: on-demand, available in seconds, with pay-as-you-go pricing. From data warehousing to deployment tools, directories to content delivery, over 200 AWS services are available. New services can be provisioned quickly, without the upfront fixed expense. This allows enterprises, start-ups, small and medium-sized businesses, and customers in the public sector to access the building blocks they need to respond quickly to changing business requirements. This whitepaper provides you with an overview of the benefits of the AWS Cloud and introduces you to the services that make up the platform.

Learn more from the following links:

- [@official@Amazon AWS](https://aws.amazon.com/)
- [@official@AWS Documentation](https://docs.aws.amazon.com/)
- [@official@Introduction of AWS](https://docs.aws.amazon.com/whitepapers/latest/aws-overview/introduction.html)
- [@official@Amazon DynamoDB](https://aws.amazon.com/dynamodb/)
- [@official@AWS Elastic Container Registry (ECR)](https://aws.amazon.com/ecr/)
- [@official@Amazon Elastic Kubernetes Service (EKS)](https://aws.amazon.com/eks/)
- [@video@AWS Tutorial for Beginners](https://www.youtube.com/watch?v=zA8guDqfv40)