# D. Cache

## Caching

In computing, the data in a cache is generally stored in **fast-access hardware**, such as random-access memory (RAM), and may also be used in conjunction with a software component. A cache's primary purpose is to **increase data retrieval performance** by reducing the need to access the underlying slower storage layer. Trading off capacity for speed, a cache typically **stores a subset of data** transiently, in contrast to databases whose data is usually complete and durable.

### Benefits of Caching

A cache provides **high-throughput**, **low-latency** access to commonly accessed application data by storing the data in memory. Caching improves the speed of your application and reduces response latency, which improves a user's experience with your application.

Time-consuming and complex database queries often create bottlenecks in applications. In read-intensive applications, caching can provide large performance gains by reducing application processing time and database access time. Write-intensive applications typically do not see as great a benefit from caching. However, even write-intensive applications normally have a read/write ratio greater than 1, which implies that read caching can be beneficial.

In summary, the **benefits** of caching include the following:

* Improve application performance
* Reduce database cost
* Reduce load on the backend database tier
* Facilitate predictable performance
* Eliminate database hotspots
* Increase read throughput (IOPS)

The following **types of information** or applications can often benefit from caching:

* Results of database queries
* Results of intensive calculations
* Results of remote API calls
* Compute-intensive workloads that manipulate large datasets, such as high-performance computing simulations and recommendation engines

Consider caching your data if the following **conditions** apply:

* It is slow or expensive to acquire when compared to cache retrieval.
* It is accessed with sufficient frequency.
* It is relatively static, or it changes rapidly but staleness is not significant for your application.

## Cache levels

### Client-side cache

Caches can be located on the client side (OS or browser), on the server side, or in a distinct cache layer.

### CDN caching

A content delivery network (CDN) is a globally distributed network of proxy servers, serving content from locations closer to the user. Generally, static files such as HTML/CSS/JS, photos, and videos are served from the CDN, although some CDNs such as Amazon's CloudFront support dynamic content. The site's DNS resolution will tell clients which server to contact.

Serving content from CDNs can significantly improve performance in two ways:

* Users receive content from data centers close to them
* Your servers do not have to serve requests that the CDN fulfills

#### Push CDNs

Push CDNs receive new content whenever changes occur on your server. You take full responsibility for providing content, uploading directly to the CDN and rewriting URLs to point to the CDN. You can configure when content expires and when it is updated. Content is uploaded only when it is new or changed, minimizing traffic but maximizing storage.

Sites with a small amount of traffic or sites with content that isn't often updated work well with push CDNs. Content is placed on the CDNs once, instead of being re-pulled at regular intervals.

#### Pull CDNs

Pull CDNs grab new content from your server when the first user requests the content. You leave the content on your server and rewrite URLs to point to the CDN. This results in a slower request until the content is cached on the CDN.

A time-to-live (TTL) determines how long content is cached. Pull CDNs minimize storage space on the CDN, but can create redundant traffic if files expire and are pulled before they have actually changed.

Sites with heavy traffic work well with pull CDNs, as traffic is spread out more evenly, with only recently-requested content remaining on the CDN.

#### Disadvantages

* CDN costs could be significant depending on traffic, although this should be weighed against the additional costs you would incur by not using a CDN.
* Content might be stale if it is updated before the TTL expires it.
* CDNs require changing URLs for static content to point to the CDN.

### Web server caching

Reverse proxies and caches such as Varnish can serve static and dynamic content directly. Web servers can also cache requests, returning responses without having to contact application servers.

### Database caching

A database usually includes some level of caching in its default configuration, optimized for a generic use case. Tweaking these settings for specific usage patterns can further boost performance.

### Application cache

In-memory caches such as **Memcached** and **Redis** are key-value stores that sit between your application and your data storage. Since the data is held in RAM, it is much faster to access than typical databases that store data on disk. RAM is more limited than disk, so cache eviction algorithms such as least recently used (LRU) can help invalidate 'cold' entries and keep 'hot' data in RAM.

Redis has the following additional features:

* Persistence option
* Built-in data structures such as sorted sets and lists

There are multiple levels at which you can cache, falling into two general categories: database queries and objects:

* Row level
* Query level
* Fully-formed serializable objects
* Fully-rendered HTML

Generally, you should try to avoid file-based caching, as it makes cloning and auto-scaling more difficult.

#### Caching at the database query level

Whenever you query the database, hash the query as a key and store the result in the cache.
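For illustration, here is a minimal sketch of this pattern in Python. It assumes a local Redis instance accessed through the redis-py client and a DB-API-style connection such as sqlite3; the `cached_query` helper, the key scheme, and the TTL are illustrative choices, not a standard API:

```python
import hashlib
import json

import redis

cache = redis.Redis(host="localhost", port=6379)  # assumes a local Redis instance
QUERY_TTL_SECONDS = 300  # let cached results age out to bound staleness


def cached_query(db_connection, sql, params=()):
    """Return query results, consulting the cache first.

    The SQL text plus its parameters are hashed into a cache key,
    and the result set is stored under that key as JSON.
    """
    key = "query:" + hashlib.sha256(
        (sql + json.dumps(list(params))).encode("utf-8")
    ).hexdigest()

    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)  # cache hit: skip the database entirely

    rows = db_connection.execute(sql, params).fetchall()
    cache.set(key, json.dumps(rows), ex=QUERY_TTL_SECONDS)  # store on a miss
    return rows
```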
This approach suffers from expiration issues:

* Hard to delete a cached result with complex queries
* If one piece of data changes, such as a table cell, you need to delete all cached queries that might include the changed cell

#### Caching at the object level

See your data as an object, similar to what you do with your application code. Have your application assemble the dataset from the database into a class instance or a data structure(s):

* Remove the object from the cache if its underlying data has changed
* Allows for asynchronous processing: workers assemble objects by consuming the latest cached object

**Suggestions of what to cache:**

* User sessions
* Fully rendered web pages
* Activity streams
* User graph data

## Cache update

### Cache-aside

![](https://i.imgur.com/wdhnoP4.png)

The application is responsible for reading and writing from storage. The cache does not interact with storage directly. The application does the following:

* Look for the entry in the cache, resulting in a cache miss
* Load the entry from the database
* Add the entry to the cache
* Return the entry

**Memcached** and **Redis** are generally used in this manner.

Subsequent reads of data added to the cache are fast. Cache-aside is also referred to as lazy loading. Only requested data is cached, which avoids filling up the cache with data that isn't requested.
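A minimal cache-aside sketch in Python follows; the `cache` client is assumed to be Redis-like (supporting `get` and `set` with a TTL), and `db.find_user` is a hypothetical data-access call:

```python
import json


def get_user(user_id, cache, db):
    """Cache-aside (lazy loading): the application manages the cache."""
    key = f"user:{user_id}"

    cached = cache.get(key)        # 1. look for the entry in the cache
    if cached is not None:
        return json.loads(cached)  # cache hit: no database trip

    user = db.find_user(user_id)   # 2. cache miss: load from the database
    if user is not None:
        # 3. add the entry to the cache with a one-hour TTL
        cache.set(key, json.dumps(user), ex=3600)
    return user                    # 4. return the entry
```

The TTL bounds how stale a cached user can become, which also mitigates the staleness issue listed below.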
#### Disadvantages

* Each cache miss results in three trips, which can cause a noticeable delay.
* Data can become stale if it is updated in the database. This issue is mitigated by setting a time-to-live (TTL), which forces an update of the cache entry, or by using write-through.
* When a node fails, it is replaced by a new, empty node, increasing latency.

### Write-through

![](https://i.imgur.com/FZmyszs.png)

The application uses the cache as the main data store, reading and writing data to it, while the cache is responsible for reading and writing to the database:

* Application adds/updates the entry in the cache
* Cache synchronously writes the entry to the data store
* Return

Write-through is a **slow** overall operation due to the **write** operation, but subsequent **reads** of just-written data are **fast**. Users are generally more tolerant of latency when updating data than when reading data. Data in the cache is not stale.

#### Disadvantages

* When a new node is created due to failure or scaling, the new node will not cache entries until they are updated in the database. Cache-aside in conjunction with write-through can mitigate this issue.
* Most data written might never be read, which can be minimized with a TTL.

### Write-behind (write-back)

![](https://i.imgur.com/yvdkFHp.png)

In write-behind, the application does the following (a minimal sketch appears at the end of this section):

* Add/update the entry in the cache
* Asynchronously write the entry to the data store, improving write performance

#### Disadvantages

* There could be data loss if the cache goes down prior to its contents hitting the data store.
* It is more complex to implement write-behind than it is to implement cache-aside or write-through.

### Refresh-ahead

![](https://i.imgur.com/GGVkbOu.png)

You can configure the cache to automatically refresh any recently accessed cache entry prior to its expiration.

Refresh-ahead can result in reduced latency vs. read-through if the cache can accurately predict which items are likely to be needed in the future.

#### Disadvantages

* Not accurately predicting which items are likely to be needed in the future can result in worse performance than without refresh-ahead.

## Disadvantages

* Need to maintain consistency between caches and the source of truth, such as the database, through cache invalidation.
* Cache invalidation is a difficult problem; there is additional complexity associated with when to update the cache.
* Need to make application changes, such as adding Redis or Memcached.
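To make the write-behind flow described above concrete, here is a toy, self-contained Python sketch using an in-process queue and a daemon worker thread. `FakeCache` and `FakeStore` are stand-ins invented for illustration; a production system would use a real cache client and a durable queue, precisely because of the data-loss disadvantage noted in the write-behind section:

```python
import json
import queue
import threading
import time


class FakeCache:
    """In-memory stand-in for a Redis/Memcached client (illustration only)."""

    def __init__(self):
        self._data = {}

    def set(self, key, value):
        self._data[key] = value


class FakeStore:
    """Stand-in for the backing database (illustration only)."""

    def save(self, key, value):
        time.sleep(0.1)  # simulate a slow durable write
        print(f"persisted {key}")


write_queue = queue.Queue()


def writer_loop(db):
    """Background worker: drain the queue and persist entries to the data store."""
    while True:
        key, user = write_queue.get()
        try:
            db.save(key, user)
        finally:
            write_queue.task_done()


def set_user(user_id, user, cache):
    """Write-behind: update the cache synchronously, persist asynchronously."""
    key = f"user:{user_id}"
    cache.set(key, json.dumps(user))  # fast path: the caller waits only for the cache
    write_queue.put((key, user))      # the durable write happens in the background


if __name__ == "__main__":
    cache, db = FakeCache(), FakeStore()
    threading.Thread(target=writer_loop, args=(db,), daemon=True).start()
    set_user(42, {"name": "Ada"}, cache)  # returns immediately
    write_queue.join()  # wait for the background write before exiting
```

If the process dies before the queue drains, the enqueued writes are lost, which is exactly the data-loss risk listed among the pattern's disadvantages.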