AWS Compute Services

 

  1. Containers
    1. A task is a logical group of running containers. 
    2. Previously, tasks running on Amazon ECS shared the elastic network interface of their EC2 host. 
    3. Now, the new awsvpc networking mode lets you attach an elastic network interface directly to a task. 
    4. With the default bridge network mode, containers on an instance are connected to each other using the docker0 bridge. Containers use this bridge to communicate with endpoints outside of the instance, using the primary elastic network interface of the instance on which they are running. 
    5. The awsvpc networking mode enables you to run multiple copies of the container on the same instance using the same container port without needing to do any port mapping or translation, simplifying the application architecture. 
    6. Associating security group rules with a container or containers in a task allows you to restrict the ports and IP addresses from which your application accepts network traffic. 
    7.  Task Role =  Instead of creating and distributing your AWS credentials to the containers or using the EC2 instance’s role, you can associate an IAM role with an ECS task definition or RunTask API operation. The applications in the task’s containers can then use the AWS SDK or CLI to make API requests to authorized AWS services.
    8. Accessing credentials via the instance profile metadata should be avoided; prefer task roles.
    9. The task execution role grants the Amazon ECS container and Fargate agents permission to make AWS API calls on your behalf.
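The awsvpc networking mode, task role, and execution role described above come together in the task definition. As a hedged sketch (boto3-style parameters; all names, ARNs, and images are hypothetical placeholders):

```python
# Sketch of an ECS task definition using awsvpc mode with task and execution roles.
task_def = {
    "family": "my-web-app",                      # hypothetical family name
    "networkMode": "awsvpc",                     # an ENI is attached directly to the task
    "requiresCompatibilities": ["FARGATE"],
    "cpu": "256",
    "memory": "512",
    # Task role: the containers call AWS APIs with these permissions (no instance role needed)
    "taskRoleArn": "arn:aws:iam::123456789012:role/MyAppTaskRole",
    # Execution role: lets the ECS/Fargate agent pull images and write logs on your behalf
    "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
    "containerDefinitions": [
        {
            "name": "web",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/web:latest",
            # With awsvpc there is no host-port mapping; only containerPort is specified,
            # so multiple copies of the container can use the same port on one instance.
            "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
        }
    ],
}
# Would be registered with: boto3.client("ecs").register_task_definition(**task_def)
```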
  2. EC2
    1. You can create an Amazon CloudWatch alarm that monitors an Amazon EC2 instance and automatically recovers the instance if it becomes impaired due to an underlying hardware failure or a problem that requires AWS involvement to repair. Terminated instances cannot be recovered.
    2. When the StatusCheckFailed_System alarm is triggered, and the recover action is initiated, you will be notified by the Amazon SNS topic that you selected when you created the alarm and associated the recover action. 
    3. EC2Rescue can help you diagnose and troubleshoot problems on Amazon EC2 Linux and Windows Server instances. You can run the tool manually or you can run the tool automatically by using Systems Manager Automation and the AWSSupport-ExecuteEC2Rescue document. The AWSSupport-ExecuteEC2Rescue SSM document is designed to perform a combination of Systems Manager actions, AWS CloudFormation actions, and Lambda functions that automate the steps normally required to use EC2Rescue.
    4. If your instance has a public IPv4 address, it retains the public IPv4 address after recovery.
    5. A recovered instance is identical to the original instance, including the instance ID, private IP address, Elastic IP addresses, and all instance metadata.
    6. A good general rule for EC2 instances is that if your maximum CPU and memory usage is less than 40% over a four-week period, you can safely halve the instance size. 
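The recovery alarm from points 1 and 2 can be sketched as boto3-style parameters (region, instance ID, and SNS topic ARN below are hypothetical placeholders):

```python
# Sketch of a CloudWatch alarm that auto-recovers an impaired instance
# and notifies an SNS topic when the recover action fires.
alarm = {
    "AlarmName": "recover-web-1",
    "Namespace": "AWS/EC2",
    "MetricName": "StatusCheckFailed_System",    # system (underlying hardware) status check
    "Dimensions": [{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    "Statistic": "Minimum",
    "Period": 60,
    "EvaluationPeriods": 2,                      # two consecutive failed minutes
    "Threshold": 1.0,
    "ComparisonOperator": "GreaterThanOrEqualToThreshold",
    "AlarmActions": [
        "arn:aws:automate:us-east-1:ec2:recover",           # the EC2 recover action
        "arn:aws:sns:us-east-1:123456789012:ops-alerts",    # notification on trigger
    ],
}
# Would be created with: boto3.client("cloudwatch").put_metric_alarm(**alarm)
```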
  3. Spot Strategy
    1. To reduce interruptions to your Spot instances, you can set up a diversified Spot Fleet allocation strategy that uses a range of different EC2 instance types, such as c3.2xlarge, m3.xlarge, and r3.xlarge, instead of just one type. This effectively increases the chances of providing stable compute capacity to your application. In the event of a Spot interruption due to high demand for a specific instance type, say c3.2xlarge, your application can still scale using another instance type such as m3.xlarge or r3.xlarge.
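The diversified strategy above can be sketched as a Spot Fleet request (boto3-style; the fleet role ARN and AMI ID are hypothetical placeholders):

```python
# Sketch of a diversified Spot Fleet request spanning several instance-type pools.
spot_fleet_config = {
    "IamFleetRole": "arn:aws:iam::123456789012:role/aws-ec2-spot-fleet-tagging-role",
    "AllocationStrategy": "diversified",   # spread target capacity across the pools below
    "TargetCapacity": 6,
    "LaunchSpecifications": [
        {"ImageId": "ami-0abcdef1234567890", "InstanceType": t}
        for t in ("c3.2xlarge", "m3.xlarge", "r3.xlarge")
    ],
}
# Would be submitted with:
# boto3.client("ec2").request_spot_fleet(SpotFleetRequestConfig=spot_fleet_config)
```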
  4. Placement Groups
    1. Spread placement groups are recommended for applications that have a small number of critical instances that should be kept separate from each other. Launching instances in a spread placement group reduces the risk of simultaneous failures that might occur when instances share the same racks.
    2. Cluster placement groups are recommended for applications that benefit from low network latency, high network throughput, or both. They are also recommended when the majority of the network traffic is between the instances in the group. To provide the lowest latency and the highest packet-per-second network performance for your placement group, choose an instance type that supports enhanced networking.
    3. Before you move an instance to another placement group, or remove it from its placement group, the instance must be in the stopped state. 
    4. If you set the Launch Configuration Tenancy to default and the VPC Tenancy is set to dedicated, then the instances have dedicated tenancy. If you set the Launch Configuration Tenancy to dedicated and the VPC Tenancy is set to default, then again the instances have dedicated tenancy.
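The two placement strategies above can be sketched as create_placement_group parameters (group names are hypothetical):

```python
# Sketch of creating placement groups for the two strategies discussed above.
spread_group = {"GroupName": "critical-spread", "Strategy": "spread"}   # distinct racks, lower correlated failure
cluster_group = {"GroupName": "hpc-cluster", "Strategy": "cluster"}     # low latency, high throughput
# Would be created with: boto3.client("ec2").create_placement_group(**spread_group)
```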
  5. Auto Scaling Groups
    1. An ASG can span multiple AZs within the same Region, but cannot span Regions. However, the same ASG can mix different instance types and purchase options.
    2. Scaling Policy Types
      • Target tracking scaling—Increase or decrease the current capacity of the group based on a target value for a specific metric.
      • Step scaling—Increase or decrease the current capacity of the group based on a set of scaling adjustments, known as step adjustments, that vary based on the size of the alarm breach.
      • Simple scaling—Increase or decrease the current capacity of the group based on a single scaling adjustment.
    3. Amazon EC2 Auto Scaling creates and manages the CloudWatch alarms that trigger the scaling policy and calculates the scaling adjustment based on the metric and the target value. The scaling policy adds or removes capacity as required to keep the metric at, or close to, the specified target value.
    4. Use Predictive Scaling to forecast your future load demands by analyzing your historical records for a metric. It also allows you to schedule scaling actions that proactively add and remove resource capacity to reflect the load forecast, and control maximum capacity behavior. Only available for EC2 Auto Scaling groups.
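Of the policy types above, target tracking is the simplest to configure. As a hedged sketch (boto3-style parameters; the ASG and policy names are hypothetical):

```python
# Sketch of a target tracking policy that keeps average CPU near 50%.
policy = {
    "AutoScalingGroupName": "my-asg",           # hypothetical ASG name
    "PolicyName": "cpu-target-50",
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingConfiguration": {
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        # Auto Scaling creates the CloudWatch alarms and adds/removes
        # capacity to hold the metric at or near this value.
        "TargetValue": 50.0,
    },
}
# Would be attached with: boto3.client("autoscaling").put_scaling_policy(**policy)
```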
    5. When you create a group, you can specify its minimum, maximum, and desired number of EC2 instances. Your group uses a launch configuration as a template for its EC2 instances. When you create a launch configuration, you can specify information such as the AMI ID, instance type, key pair, security groups, and block device mapping for your instances.
    6. You can only specify one launch configuration for an Auto Scaling group at a time, and you can’t modify a launch configuration after you’ve created it.
    7. as-describe-launch-configs describes all the launch configuration parameters created by the AWS account in the specified region. By default it returns values such as the
      launch configuration name, instance type, and AMI ID. If the user wants additional parameters, such as the IAM profile used in the config, he has to run: as-describe-launch-configs --show-long
    8. EC2 - One way to execute code and actions before terminating an instance is to create a lifecycle hook that puts the instance in Terminating:Wait status. This allows you to perform any desired actions before immediately terminating the instance within the Auto Scaling group. The Terminating:Wait status can be monitored by an Amazon CloudWatch event, which triggers an AWS Systems Manager automation document to perform the action you want.
    9. By adding a launch lifecycle hook to your Auto Scaling group, you can ensure that your bootstrap scripts have completed successfully and the applications on the instances are ready to accept traffic before they are registered to the load balancer at the end of the lifecycle hook.
    10. The communication between Auto Scaling and CodeDeploy during a scale in event is based on Auto Scaling lifecycle hooks.
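The lifecycle hooks described in points 8-10 can be sketched as put_lifecycle_hook parameters (hook and ASG names are hypothetical):

```python
# Sketch of a termination lifecycle hook that holds instances in Terminating:Wait
# so cleanup actions (e.g. an SSM automation triggered by a CloudWatch event) can run.
hook = {
    "LifecycleHookName": "drain-before-terminate",
    "AutoScalingGroupName": "my-asg",
    "LifecycleTransition": "autoscaling:EC2_INSTANCE_TERMINATING",
    "HeartbeatTimeout": 300,        # seconds allowed for cleanup before termination proceeds
    "DefaultResult": "CONTINUE",    # proceed with termination if no completion signal arrives
}
# Would be created with: boto3.client("autoscaling").put_lifecycle_hook(**hook)
# A launch hook (for bootstrap checks before registering with the load balancer)
# is the same call with "autoscaling:EC2_INSTANCE_LAUNCHING".
```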
    11. The cooldown period is a configurable setting that helps ensure the ASG does not launch or terminate additional instances before previous scaling activities take effect.
    12. You can also use instance protection to prevent specific instances from being terminated during automatic scale in.
    13. If you have attached a load balancer to your Auto Scaling group, you can have Auto Scaling include the results of Elastic Load Balancing health checks when it determines the health status of an instance. On the next periodic check, the unhealthy instance is terminated and replaced.
    14.  An Auto Scaling instance that has just come into service needs to warm up before it can pass the Auto Scaling health check. Auto Scaling waits until the health check grace period ends before checking the health status of the instance. While the EC2 status checks and ELB health checks can complete before the health check grace period expires, Auto Scaling does not act on them until the health check grace period expires. To provide ample warm-up time for your instances, ensure that the health check grace period covers the expected startup time for your application.
    15. When a new AZ is added to an ASG, the ASG rebalances by launching instances in the new AZ and terminating instances in other AZs to even out the distribution.
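The ELB health check and grace period behavior from points 13-14 can be sketched as create_auto_scaling_group parameters (all names, subnets, and ARNs are hypothetical):

```python
# Sketch of an ASG that includes ELB health check results, with a grace
# period sized to cover the application's expected startup time.
asg = {
    "AutoScalingGroupName": "my-asg",
    "LaunchConfigurationName": "my-launch-config",   # hypothetical launch configuration
    "MinSize": 2,
    "MaxSize": 6,
    "DesiredCapacity": 2,
    "VPCZoneIdentifier": "subnet-aaa111,subnet-bbb222",   # subnets in two AZs
    "TargetGroupARNs": [
        "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web/abc123"
    ],
    "HealthCheckType": "ELB",        # consider ELB health checks, not just EC2 status checks
    "HealthCheckGracePeriod": 300,   # warm-up window before health results are acted on
}
# Would be created with: boto3.client("autoscaling").create_auto_scaling_group(**asg)
```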
  6. Systems Manager
    1. The SSM Agent is required on each managed machine, whether EC2 or on-prem.
    2. SSM Automation simplifies common maintenance and deployment tasks of Amazon EC2 instances and other AWS resources. Automation enables you to do the following:
      1. Build Automation workflows to configure and manage instances and AWS resources.
      2. Create custom workflows or use pre-defined workflows maintained by AWS
      3. Receive notifications about Automation tasks and workflows by using Amazon CloudWatch Events.
      4. Monitor Automation progress and execution details by using the Amazon EC2 or the AWS Systems Manager console.
    3. AWS Systems Manager State Manager is primarily used as a secure and scalable configuration management service that automates the process of keeping your Amazon EC2 and hybrid infrastructure in a state that you define.
    4. When Session Manager is used to log in to an EC2 instance, it can log all the commands used during the session. Port Forwarding allows you to securely create tunnels to instances deployed in private subnets, without the need to start the SSH service on the server, open the SSH port in the security group, or use a bastion host. Similar to SSH tunnels, Port Forwarding forwards traffic between your laptop and open ports on your instance. Once port forwarding is configured, you can connect to the local port and access the server application running inside the instance. Session Manager's Port Forwarding use is controlled through IAM policies on API access and the Port Forwarding SSM document.
    5. AWS Systems Manager Session Manager is a fully managed Systems Manager capability that lets you manage EC2 instances, on-premises instances, and virtual machines (VMs) without opening inbound ports or managing SSH keys.
      1. A managed node is any machine configured for AWS Systems Manager. You can configure Amazon Elastic Compute Cloud (Amazon EC2) instances; AWS IoT Greengrass core devices; and on-premises servers, edge devices, and virtual machines (VMs) in a hybrid environment as managed nodes.
      2.  The standard-instances tier allows you to register a maximum of 1,000 machines per AWS account per AWS Region. 
      3. If you need to register more than 1,000 machines in a single account and Region, then use the advanced-instances tier. 
      4. Fleet Manager, a capability of AWS Systems Manager, is a unified user interface (UI) experience that helps you remotely manage your nodes running on AWS or on premises. With Fleet Manager, you can view the health and performance status of your entire server fleet from one console. 
      5. AWS Systems Manager Parameter Store provides secure, hierarchical storage for configuration data management and secrets management. You can store data such as passwords, database strings, Amazon Machine Image (AMI) IDs, and license codes as parameter values.
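The Port Forwarding setup described in point 4 can be sketched as start_session parameters (the instance ID and port numbers are hypothetical):

```python
# Sketch of starting a Session Manager port-forwarding session:
# traffic to local port 8080 is tunneled to port 80 inside the instance.
session = {
    "Target": "i-0123456789abcdef0",                   # hypothetical managed instance ID
    "DocumentName": "AWS-StartPortForwardingSession",  # AWS-managed SSM document
    "Parameters": {
        "portNumber": ["80"],         # port open inside the instance
        "localPortNumber": ["8080"],  # local port on your laptop
    },
}
# Would be started with: boto3.client("ssm").start_session(**session)
# CLI equivalent: aws ssm start-session --target <instance-id> \
#   --document-name AWS-StartPortForwardingSession \
#   --parameters portNumber=80,localPortNumber=8080
```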
  7. End User Computing Services - Globally available, fully managed, secure VDI services - hardware, software, and administration are managed by AWS. We need to take care of images, apps, and users. Cost Optimizer analyzes usage patterns.
    1. WorkSpaces - Fully managed Desktop-as-a-Service - eliminates on-prem VDI. Integrated with Microsoft AD.
    2. WorkSpaces Application Manager - Deploy and manage applications as virtualized app containers.
    3. WorkDocs - File storage service
    4. AppStream 2.0 - Desktop App streaming service to web browsers - Ex. Third party desktop gaming software can be streamed through the browser and accessed via browser.
  8. High performance computing (HPC) 
    1. An HPC workload cluster should be placed in a single AZ to minimize inter-node latency.
    2. AWS Batch supports multi-node parallel jobs, which enables you to run single jobs that span multiple EC2 instances. This feature lets you use AWS Batch to easily and efficiently run workloads such as large-scale, tightly-coupled High Performance Computing (HPC) applications or distributed GPU model training. AWS Batch also supports Elastic Fabric Adapter, a network interface that enables you to run applications that require high levels of inter-node communication at scale on AWS.
    3. AWS Batch efficiently and dynamically provisions and scales Amazon EC2 and Spot Instances or leverages Fargate and Fargate Spot based on the requirements of your jobs. 
    4. AWS Batch provides Managed Compute Environments that dynamically provision and scale compute resources based on the volume and resource requirements of your submitted jobs.
    5. With Fargate, every job receives the exact amount of CPU and memory that it requests (within allowed Fargate SKU’s), so there is no wasted resource time or need to wait for EC2 instance launches.
    6. AWS Batch enables you to set up multiple queues with different priority levels. Batch jobs are stored in the queues until compute resources are available to execute the job. 
    7. Enhanced Networking - Enhanced networking uses single root I/O virtualization (SR-IOV), a specification and standard for sharing networking resources (NICs) and securing network traffic in virtualized environments, to provide high-performance networking capabilities. SR-IOV is a method of device virtualization that provides higher I/O performance and lower CPU utilization than traditional virtualized network interfaces. Enhanced networking provides higher bandwidth, higher packet-per-second (PPS) performance, and consistently lower inter-instance latencies. There is no additional charge for using enhanced networking.
    8. CUDA (Compute Unified Device Architecture) is a parallel computing platform and application programming interface (API) that allows software to use certain types of graphics processing units (GPUs) for general-purpose processing, an approach called general-purpose computing on GPUs (GPGPU). G3/G4 is the cost-effective standard. Cluster GPU Instances give customers with HPC workloads an option beyond Cluster Compute Instances to further customize their high-performance clusters in the cloud for applications that can benefit from the parallel computing power of GPUs. Cluster GPU Instances use the same cluster placement group functionality as Cluster Compute Instances for grouping instances into clusters, allowing applications to get the low-latency, high-bandwidth network performance required for tightly coupled node-to-node communication typical of many HPC applications. 
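The multi-node parallel jobs described in point 2 can be sketched as an AWS Batch job definition (boto3-style; the name and image are hypothetical):

```python
# Sketch of a multi-node parallel job definition for a tightly coupled HPC job
# that spans several EC2 instances.
job_def = {
    "jobDefinitionName": "mpi-solver",     # hypothetical name
    "type": "multinode",                   # multi-node parallel job type
    "nodeProperties": {
        "numNodes": 4,
        "mainNode": 0,                     # index of the node that coordinates the job
        "nodeRangeProperties": [
            {
                "targetNodes": "0:",       # these container properties apply to all nodes
                "container": {
                    "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/mpi:latest",
                    "vcpus": 4,
                    "memory": 8192,        # MiB
                },
            }
        ],
    },
}
# Would be registered with: boto3.client("batch").register_job_definition(**job_def)
```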
  9. Fargate - Serverless data plane
    1. With the EC2 launch type for ECS or EKS, the worker nodes must be provisioned and managed by us.
    2. With Fargate it is serverless, and we can deploy our tasks or pods without worrying about worker nodes.
    3. Control plane - managed by AWS in both cases (the ECS control plane and the EKS Kubernetes control plane).
  10. Elastic Beanstalk 
    1. Elastic Beanstalk never deletes source bundles automatically. You should delete source bundles when you no longer need them.
    2. AWS Elastic Beanstalk provides several options for how deployments are processed, including deployment policies (All at once, Rolling, Rolling with additional batch, and Immutable). Default is all-at-once deployments.
    3. Because AWS Elastic Beanstalk performs an in-place update when you update your application versions, your application might become unavailable to users for a short period of time.
    4. With rolling deployments, Elastic Beanstalk splits the environment’s EC2 instances into batches and deploys the new version of the application to one batch at a time, leaving the rest of the instances in the environment running the old version of the application. 
    5. To maintain full capacity during deployments, you can configure your environment to launch a new batch of instances before taking any instances out of service. This option is known as a rolling deployment with an additional batch. When the deployment completes, Elastic Beanstalk terminates the additional batch of instances.
    6. Immutable deployments perform an immutable update to launch a full set of new instances running the new version of the application in a separate Auto Scaling group, alongside the instances running the old version. Immutable deployments can prevent issues caused by partially completed rolling deployments. If the new instances don’t pass health checks, Elastic Beanstalk terminates them, leaving the original instances untouched.
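The deployment policies above are selected per environment through option settings in the aws:elasticbeanstalk:command namespace. As a hedged sketch (boto3-style; the environment name is hypothetical):

```python
# Sketch of switching an Elastic Beanstalk environment's deployment policy.
option_settings = [
    {"Namespace": "aws:elasticbeanstalk:command",
     "OptionName": "DeploymentPolicy",
     "Value": "Immutable"},   # or: AllAtOnce, Rolling, RollingWithAdditionalBatch
    {"Namespace": "aws:elasticbeanstalk:command",
     "OptionName": "BatchSizeType",
     "Value": "Percentage"},  # batch sizing used by the rolling policies
    {"Namespace": "aws:elasticbeanstalk:command",
     "OptionName": "BatchSize",
     "Value": "25"},
]
# Would be applied with:
# boto3.client("elasticbeanstalk").update_environment(
#     EnvironmentName="my-env", OptionSettings=option_settings)
```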
