ElastiCache
- Amazon ElastiCache is a web service that makes it easy to deploy and run Memcached or Redis protocol-compliant server nodes in the cloud. Amazon ElastiCache improves the performance of web applications by allowing you to retrieve information from a fast, managed, in-memory system, instead of relying entirely on slower disk-based databases.
- Once a cluster is provisioned, Amazon ElastiCache automatically detects and replaces failed nodes.
- The in-memory caching provided by Amazon ElastiCache can be used to significantly improve latency and throughput for many read-heavy application workloads (such as social networking, gaming, media sharing and Q&A portals) or compute-intensive workloads (such as a recommendation engine).
- Each node runs an instance of the Memcached or Redis protocol-compliant service and has its own DNS name and port.
- Reserved Nodes (also called Reserved Instances, or RIs) are an offering that provides a significant discount over on-demand usage when you commit to a one-year or three-year term. With Reserved Nodes, you make a one-time, up-front payment to create a one- or three-year reservation to run your node in a specific Region.
- Amazon ElastiCache will repair the node by acquiring new service resources, and will then redirect the node's existing DNS name to point to the new service resources. Thus, the DNS name for a Redis node remains constant, but the IP address of a Redis node can change over time.
- Auto Discovery enables automatic discovery of cache nodes by clients when they are added to or removed from an Amazon ElastiCache cluster.
- An Amazon ElastiCache for Redis node may take on a primary or a read replica role. A primary node can be replicated to multiple read replica nodes (up to five).
- You can create cross-region replicas using the Global Datastore feature in Amazon ElastiCache for Redis. Global Datastore provides fully managed, fast, reliable, and secure cross-region replication. It allows you to write to your Amazon ElastiCache for Redis cluster in one Region and have the data available for reads from up to two other cross-region replica clusters, enabling low-latency reads and disaster recovery across Regions.
- Use of Read Replicas - Excess read traffic can be directed to one or more read replicas, and replicas can continue serving read traffic while the primary is unavailable. A read replica may be provisioned only in the same Region as your primary cache node, in either the same or a different Availability Zone.
- Updates to a primary cache node are automatically replicated to any associated read replicas. The ReplicationLag metric can be used to measure how far a replica has fallen behind its primary.
- Using Redis replication in conjunction with Multi-AZ provides increased availability and fault tolerance.
- A snapshot is a copy of your entire Redis cluster at a specific moment, stored in Amazon S3. Backup and Restore is a feature that allows customers to create snapshots and restore them. Snapshots can be copied from one Region to another.
- Memcached
- For improved fault tolerance, locate your Memcached nodes in various Availability Zones (AZs) within the cluster's AWS Region. That way, a failure in one AZ has minimal impact upon your entire cluster and application.
- Each node in a Memcached cluster has its own endpoint. The cluster also has an endpoint called the configuration endpoint. If you enable Auto Discovery and connect to the configuration endpoint, your application automatically knows each node endpoint, even after adding or removing nodes from the cluster. As long as a cluster is in the available state, you are being charged for it, whether or not you are actively using it. To stop incurring charges, delete the cluster.
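As a sketch of what Auto Discovery does under the hood: a client sends a `config get cluster` command to the configuration endpoint and receives a payload whose first line is a version counter and whose second line lists the nodes as space-separated `hostname|ip|port` entries. The parser below assumes that documented format; the hostnames in the sample are made up, and in practice an Auto Discovery-capable Memcached client library handles this for you.

```python
def parse_cluster_config(payload: str):
    """Parse an ElastiCache configuration-endpoint response.

    Assumed format: line 1 is a config version counter; line 2 lists
    nodes as space-separated "hostname|ip|port" entries.
    """
    lines = payload.strip().splitlines()
    version = int(lines[0])
    nodes = []
    for entry in lines[1].split():
        host, ip, port = entry.split("|")
        nodes.append((host, ip, int(port)))
    return version, nodes


# Hypothetical sample payload in the documented shape:
sample = (
    "12\n"
    "node1.abc.cache.amazonaws.com|10.0.0.1|11211 "
    "node2.abc.cache.amazonaws.com|10.0.0.2|11211"
)
version, nodes = parse_cluster_config(sample)
# version is the counter the client compares to detect topology changes;
# nodes is the current list of (hostname, ip, port) endpoints.
```

A client polls this endpoint periodically and reconnects to any nodes that appear or disappear, which is how applications keep working as the cluster is resized.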
- Whenever your application requests data, it first makes the request to the ElastiCache cache. If the data exists in the cache and is current, ElastiCache returns the data to your application. If the data doesn't exist in the cache or has expired, your application requests the data from your data store. Your data store then returns the data to your application. Your application next writes the data received from the store to the cache. This way, it can be more quickly retrieved the next time it's requested.
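The read path above is the lazy-loading (cache-aside) pattern, and it can be sketched in a few lines. The dictionaries here are stand-ins for ElastiCache and the backing database; the class and key names are illustrative, not any real API.

```python
class LazyCache:
    """Minimal lazy-loading (cache-aside) sketch: read from the cache
    first, fall back to the data store on a miss, then populate the
    cache so the next read is served from memory."""

    def __init__(self, store):
        self.store = store   # stand-in for the slower database
        self.cache = {}      # stand-in for ElastiCache

    def get(self, key):
        if key in self.cache:
            return self.cache[key]   # cache hit: no database round trip
        value = self.store[key]      # cache miss: query the data store
        self.cache[key] = value      # write through to the cache
        return value


db = {"user:1": "alice"}
cache = LazyCache(db)
cache.get("user:1")   # first read: miss, loads from db, populates cache
cache.get("user:1")   # second read: hit, served from the in-memory cache
```

The trade-off: only requested data is cached (no wasted memory), but every first read pays the miss penalty and cached data can go stale until it expires.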
- The write-through strategy adds data or updates data in the cache whenever data is written to the database.
- Time to live (TTL) is an integer value that specifies the number of seconds until the key expires. Memcached specifies this value in seconds. When an application attempts to read an expired key, it is treated as though the key is not found. The database is queried for the key and the cache is updated.
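The expiry behavior can be sketched as follows: each key stores an expiry timestamp, and a read of an expired key behaves as a miss (the caller would then re-query the database and refresh the cache). This is an illustrative in-process model, not the engine's actual implementation.

```python
import time


class TTLCache:
    """Sketch of TTL expiry: an expired key is treated as not found."""

    def __init__(self):
        self._data = {}

    def set(self, key, value, ttl_seconds):
        # Store the value alongside its absolute expiry time.
        self._data[key] = (value, time.monotonic() + ttl_seconds)

    def get(self, key):
        item = self._data.get(key)
        if item is None:
            return None
        value, expires_at = item
        if time.monotonic() >= expires_at:
            del self._data[key]   # expired: behave as a cache miss
            return None
        return value


c = TTLCache()
c.set("session", "abc123", ttl_seconds=2)
# Within 2 seconds get() returns the value; afterwards it returns None,
# and the application would re-query the database and refresh the key.
```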
- ElastiCache allows you to control access to your clusters using security groups. A security group acts like a firewall, controlling network access to your cluster. By default, network access to your clusters is turned off; if you want your applications to access your cluster, you must explicitly allow access from the hosts that need it.
- Reserving one or more nodes may be a way for you to reduce costs. Reserved nodes are charged an up front fee that depends upon the node type and the length of reservation—one or three years. This charge is much less than the hourly usage charge that you incur with On-Demand nodes.
- No support for encryption at rest, snapshots, replication, or pub/sub
- Multi-threaded
- In Amazon ElastiCache, the number of cache nodes in the cluster is a key factor in the availability of your cluster running Memcached. The failure of a single cache node can affect both the availability of your application and the load on your back-end database while ElastiCache provisions a replacement for the failed node and it is repopulated. You can reduce this potential availability impact by spreading your memory and compute capacity over a larger number of smaller cache nodes rather than a few high-capacity nodes.
- Redis
- Multi-AZ
- Single-threaded
- Supports encryption at rest
- A cluster is a collection of one or more cache nodes, all of which run an instance of the Redis cache engine software.
- A single-node Redis (cluster mode disabled) cluster has no shard, and a multi-node Redis (cluster mode disabled) cluster has a single shard. Redis (cluster mode enabled) clusters can have up to 500 shards, with your data partitioned across the shards. Each node has its own Domain Name Service (DNS) name and port.
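As a sketch of how that partitioning works: Redis (cluster mode enabled) hashes each key to one of 16,384 hash slots using the CRC16 checksum (XMODEM variant) modulo 16384, and the slots are distributed across the shards. The minimal implementation below ignores Redis's `{hash tag}` rule, under which only the substring inside braces is hashed.

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC16 with the XMODEM polynomial (0x1021, initial value 0),
    the checksum Redis Cluster uses for key-to-slot mapping."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc


def key_slot(key: bytes, num_slots: int = 16384) -> int:
    """Map a key to one of the 16,384 Redis Cluster hash slots
    (hash-tag handling omitted for brevity)."""
    return crc16_xmodem(key) % num_slots
```

Because every client computes the same slot for the same key, any client can route a request directly to the shard that owns that slot.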
- Write-through caching is a caching strategy in which the cache and database are updated almost simultaneously. When we want to update the information in the cache, we first update the cache itself, and then propagate the same update to the underlying database.
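The write path can be sketched the same way as the lazy-loading example: every write updates the cache and then propagates to the backing store, so reads never see stale data. The dictionaries are stand-ins for ElastiCache and the database; the names are illustrative.

```python
class WriteThroughCache:
    """Write-through sketch: every write updates the cache and then the
    backing store, keeping the two consistent."""

    def __init__(self, store):
        self.store = store   # stand-in for the database
        self.cache = {}      # stand-in for ElastiCache

    def put(self, key, value):
        self.cache[key] = value   # 1. update the cache
        self.store[key] = value   # 2. propagate the same write to the store

    def get(self, key):
        return self.cache.get(key)


db = {}
wt = WriteThroughCache(db)
wt.put("user:1", "alice")   # both the cache and db now hold the value
```

The trade-off versus lazy loading: data in the cache is never stale, but every write pays double the latency, and data may be cached that is never read.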
- The ability for client programs to automatically identify all of the nodes in a cache cluster, and to initiate and maintain connections to all of these nodes.
- Automating common administrative tasks such as failure detection and recovery, and software patching.
- Providing detailed monitoring metrics associated with your Cache Nodes, enabling you to diagnose and react to issues very quickly
- Lazy loading is a caching strategy that loads data into the cache only when necessary.
- Resharding involves adding and removing shards or nodes to your cluster and redistributing key spaces.
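To make the key-space redistribution concrete, here is an illustrative simplification (not ElastiCache's actual slot-assignment algorithm) that divides the 16,384 hash slots into contiguous, equal ranges per shard. When the shard count changes, the same slot can map to a different shard, which is why resharding has to move keys.

```python
def shard_for_slot(slot: int, num_shards: int, total_slots: int = 16384) -> int:
    """Map a hash slot to a shard index, assuming slots are split into
    contiguous, equal-sized ranges (an illustrative simplification)."""
    return slot * num_shards // total_slots


# Resharding from 3 to 4 shards moves some slots to a new owner:
before = shard_for_slot(5000, 3)   # owning shard with 3 shards -> 0
after = shard_for_slot(5000, 4)    # owning shard with 4 shards -> 1
```

Keys whose slot's owner changed (here, slot 5000) must be migrated to the new shard; keys in slots whose owner is unchanged stay where they are.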
- By default, the data in a Redis node on ElastiCache resides only in memory and isn't persistent. If a node is rebooted, or if the underlying physical server experiences a hardware failure, the data in the cache is lost. You can choose the following options to improve the data durability of your ElastiCache cluster:
- Daily automatic backups
- Manual backups using Redis append-only file (AOF) - When this feature is enabled, the node writes all of the commands that change cache data to an append-only file. When a node is rebooted and the cache engine starts, the AOF is "replayed." The result is a warm Redis cache with all of the data intact. AOF is disabled by default. However, the third option (Multi-AZ with Automatic Failover) is the best way to avoid data loss.
- Setting up Multi-AZ with Automatic Failover - Multi-AZ with Automatic Failover provides fault tolerance if your cluster's primary node becomes unreachable or fails. Use this option when data retention, minimal downtime, and application performance are a priority, and when you can't risk losing data as a result of hardware failure or can't afford the downtime.