146. LRU Cache

Problem Description

Design a data structure that follows the constraints of a Least Recently Used (LRU) cache.

Implement the LRUCache class:

  • LRUCache(int capacity) Initialize the LRU cache with positive size capacity.
  • int get(int key) Return the value of the key if the key exists, otherwise return -1.
  • void put(int key, int value) Update the value of the key if the key exists. Otherwise, add the key-value pair to the cache. If the number of keys exceeds the capacity from this operation, evict the least recently used key.

The functions get and put must each run in O(1) average time complexity.

Examples

Example 1:

Input
["LRUCache", "put", "put", "get", "put", "get", "put", "get", "get", "get"]
[[2], [1, 1], [2, 2], [1], [3, 3], [2], [4, 4], [1], [3], [4]]
Output
[null, null, null, 1, null, -1, null, -1, 3, 4]

Explanation
LRUCache lRUCache = new LRUCache(2);
lRUCache.put(1, 1); // cache is {1=1}
lRUCache.put(2, 2); // cache is {1=1, 2=2}
lRUCache.get(1);    // return 1
lRUCache.put(3, 3); // LRU key was 2, evicts key 2, cache is {1=1, 3=3}
lRUCache.get(2);    // return -1 (not found)
lRUCache.put(4, 4); // LRU key was 1, evicts key 1, cache is {4=4, 3=3}
lRUCache.get(1);    // return -1 (not found)
lRUCache.get(3);    // return 3
lRUCache.get(4);    // return 4

Constraints:

  • 1 <= capacity <= 3000
  • 0 <= key <= 10^4
  • 0 <= value <= 10^5
  • At most 2 * 10^5 calls will be made to get and put.

Solutions

C++

#include <unordered_map>
using namespace std;

class LRUCache {
public:
    struct Node {
        int key, val;
        Node *prev, *next;
        Node(int k, int v) : key(k), val(v), prev(nullptr), next(nullptr) {}
    };

    Node *head, *tail;   // sentinels: head->next is most recent, tail->prev is least recent
    int size, capacity;
    unordered_map<int, Node*> dict;

    LRUCache(int capacity) {
        head = new Node(-1, -1);
        tail = new Node(-1, -1);
        head->next = tail;
        tail->prev = head;
        size = 0;
        this->capacity = capacity;
    }

    int get(int key) {
        // Case 1: key exists -> move its node to the head and return the value.
        // Case 2: key does not exist -> return -1.
        if (dict.count(key) > 0) {
            int val = dict[key]->val;
            remove(key);
            add2head(key, val);
            return val;
        }
        return -1;
    }

    void put(int key, int val) {
        // If the key already exists, unlink its old node first;
        // add2head evicts the LRU entry once capacity is exceeded.
        if (dict.count(key) > 0) {
            remove(key);
        }
        add2head(key, val);
    }

    void remove(int key) {
        Node* curr = dict[key];
        curr->prev->next = curr->next;
        curr->next->prev = curr->prev;
        dict.erase(key);
        delete curr;   // free the unlinked node
        size--;
    }

    void add2head(int key, int val) {
        Node* newNode = new Node(key, val);
        newNode->next = head->next;
        newNode->prev = head;
        head->next->prev = newNode;
        head->next = newNode;
        dict[key] = newNode;
        size++;
        if (size > capacity) {
            remove(tail->prev->key);   // evict the least recently used key
        }
    }
};

Jerry Wu, 18 July, 2023
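The hand-rolled map-plus-doubly-linked-list design above can also be sketched in Python. This is an illustrative sketch mirroring the C++ structure (the `Node` class and helper names are my own, not from the original post):

```python
class Node:
    """Doubly linked list node holding one cache entry."""
    def __init__(self, key, val):
        self.key, self.val = key, val
        self.prev = self.next = None

class LRUCache:
    """Hash map for O(1) lookup + doubly linked list for O(1) reordering.

    head side = most recently used, tail side = least recently used.
    """
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.dict = {}                                      # key -> Node
        self.head, self.tail = Node(-1, -1), Node(-1, -1)   # sentinels
        self.head.next, self.tail.prev = self.tail, self.head

    def _remove(self, node: Node) -> None:
        # Unlink the node and drop it from the map.
        node.prev.next, node.next.prev = node.next, node.prev
        del self.dict[node.key]

    def _add_to_head(self, key: int, val: int) -> None:
        # Insert a fresh node right after head, evicting if over capacity.
        node = Node(key, val)
        node.next, node.prev = self.head.next, self.head
        self.head.next.prev = node
        self.head.next = node
        self.dict[key] = node
        if len(self.dict) > self.capacity:
            self._remove(self.tail.prev)    # evict least recently used

    def get(self, key: int) -> int:
        if key not in self.dict:
            return -1
        val = self.dict[key].val
        self._remove(self.dict[key])        # move the entry to the head
        self._add_to_head(key, val)
        return val

    def put(self, key: int, value: int) -> None:
        if key in self.dict:
            self._remove(self.dict[key])    # drop the stale entry
        self._add_to_head(key, value)
```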

Python

from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity: int):
        self.size = capacity
        self.cache = OrderedDict()

    def get(self, key: int) -> int:
        if key not in self.cache:
            return -1
        self.cache.move_to_end(key)   # mark as most recently used
        return self.cache[key]

    def put(self, key: int, value: int) -> None:
        if key not in self.cache:
            if len(self.cache) >= self.size:
                self.cache.popitem(last=False)   # evict least recently used
        else:
            self.cache.move_to_end(key)
        self.cache[key] = value

Yen-Chi Chen, Tue, Jul 18, 2023
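As a sanity check, the call sequence from Example 1 can be replayed against an OrderedDict-based cache (the class below restates the same approach as the Python solution, included here so the snippet runs on its own):

```python
from collections import OrderedDict

class LRUCache:
    # Same OrderedDict approach as the Python solution above.
    def __init__(self, capacity: int):
        self.size = capacity
        self.cache = OrderedDict()

    def get(self, key: int) -> int:
        if key not in self.cache:
            return -1
        self.cache.move_to_end(key)   # mark as most recently used
        return self.cache[key]

    def put(self, key: int, value: int) -> None:
        if key not in self.cache:
            if len(self.cache) >= self.size:
                self.cache.popitem(last=False)   # evict least recently used
        else:
            self.cache.move_to_end(key)
        self.cache[key] = value

# Replay Example 1.
lru = LRUCache(2)
lru.put(1, 1)         # cache is {1=1}
lru.put(2, 2)         # cache is {1=1, 2=2}
print(lru.get(1))     # 1
lru.put(3, 3)         # evicts key 2
print(lru.get(2))     # -1
lru.put(4, 4)         # evicts key 1
print(lru.get(1))     # -1
print(lru.get(3))     # 3
print(lru.get(4))     # 4
```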

