chmonk

@chmonk

Joined on Sep 1, 2021

  • C (Consistency): all reads get the same newest response -> Every read receives the most recent write or fails (strong consistency). -> In a distributed system, when data is written to one node, all subsequent reads (from any node) must return the most up-to-date value (a read request may incur some latency while it waits for the up-to-date data, i.e., writes propagate synchronously). Example: if you write X=100 on Node A, a read from Node B should immediately return X=100, not an older version. A (Availability): get a response even if some nodes fail -> The system does not refuse requests as long as there is a reachable node, even if it means returning stale (outdated) data.
  • visibility, compound operations, the function of volatile, atomic variables: volatile guarantees visibility across threads but not atomicity, so a compound operation (e.g. count++) still needs an atomic variable, which performs it as a single-step process (see the sketch below). ref: https://ithelp.ithome.com.tw/articles/10229833
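A minimal sketch of the difference (using the standard java.util.concurrent.atomic API): volatile makes writes visible to other threads, but a compound read-modify-write such as count++ is still not atomic; AtomicInteger performs it as one step.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class VolatileVsAtomic {
    // visible to all threads, but count++ (read, add, write) is NOT atomic
    private volatile int volatileCount = 0;
    // incrementAndGet() is a single-step (atomic) read-modify-write
    private final AtomicInteger atomicCount = new AtomicInteger(0);

    public static void main(String[] args) throws InterruptedException {
        VolatileVsAtomic demo = new VolatileVsAtomic();
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                demo.volatileCount++;               // lost updates possible
                demo.atomicCount.incrementAndGet(); // always counted
            }
        };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        // volatileCount is often < 20000; atomicCount is always 20000
        System.out.println(demo.volatileCount + " vs " + demo.atomicCount.get());
    }
}
```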
  • definition https://www.youtube.com/watch?v=j-DqQcNPGbE http://www.mathcs.emory.edu/~cheung/Courses/171/Syllabus/9-BinTree/heap-delete.html a heap is a complete binary tree in which every parent node is larger (max-heap) or smaller (min-heap) than its children. Delete time is proportional to O(depth) == O(log N), since the depth of a complete binary tree is log N. Heapsort does O(log N) work N times => O(N log N).
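A minimal max-heap delete-root sketch over an array (assuming the usual layout where the children of index i sit at 2i+1 and 2i+2): move the last element to the root, then sift it down, which costs O(depth) = O(log N).

```java
import java.util.Arrays;

public class MaxHeap {
    /** Removes and returns the root (maximum) of a max-heap stored in heap[0..size-1]. */
    static int deleteRoot(int[] heap, int size) {
        int root = heap[0];
        heap[0] = heap[size - 1]; // move the last element to the root
        size--;
        int i = 0;
        while (true) {            // sift down: O(depth) = O(log N)
            int left = 2 * i + 1, right = 2 * i + 2, largest = i;
            if (left < size && heap[left] > heap[largest]) largest = left;
            if (right < size && heap[right] > heap[largest]) largest = right;
            if (largest == i) break;
            int tmp = heap[i]; heap[i] = heap[largest]; heap[largest] = tmp;
            i = largest;
        }
        return root;
    }

    public static void main(String[] args) {
        int[] heap = {9, 7, 8, 3, 5, 6};                   // a valid max-heap
        System.out.println(deleteRoot(heap, heap.length)); // 9
        // logical size is now 5; the trailing slot is stale
        System.out.println(Arrays.toString(heap));
    }
}
```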
  • Using the Event sourcing pattern to develop business logic; implementing an event store; integrating sagas and event sourcing-based business logic; implementing saga orchestrators using event sourcing. 6.1 Developing business logic using event sourcing: an event represents a state change of the aggregate; event sourcing persists an aggregate as a sequence of events and recreates the current state of an aggregate by replaying the events (see the sketch below).
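A minimal sketch of the replay idea (the Account aggregate and event names are hypothetical, not from the book): the current state is never stored directly; it is rebuilt by applying the stored events in order.

```java
import java.util.List;

public class EventSourcingSketch {
    interface Event {}
    record Deposited(long amount) implements Event {}
    record Withdrawn(long amount) implements Event {}

    /** Aggregate whose state is derived purely from events. */
    static class Account {
        private long balance = 0;

        // apply one state-changing event
        void apply(Event e) {
            if (e instanceof Deposited d) balance += d.amount();
            else if (e instanceof Withdrawn w) balance -= w.amount();
        }

        // recreate the current state by replaying the event history
        static Account replay(List<Event> history) {
            Account a = new Account();
            history.forEach(a::apply);
            return a;
        }
    }

    public static void main(String[] args) {
        List<Event> history = List.of(new Deposited(100), new Withdrawn(30), new Deposited(5));
        System.out.println(Account.replay(history).balance); // 75
    }
}
```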
  • how to archive while avoiding PK collision. Avoiding primary key collisions when archiving data in an RDBMS (Relational Database Management System) is crucial to maintain data integrity. Here are some strategies to prevent primary key collisions during the archiving process (a sketch of the surrogate-key approach follows this list):
    1. Use a composite primary key (with timestamp): if your table has a composite primary key (consisting of multiple columns), consider including a timestamp or another unique identifier in the composite key. This way, even if the data is archived and later reintroduced into the system, the combination of the existing primary key columns and the timestamp or unique identifier ensures uniqueness.
    2. Use a surrogate key: employ a surrogate key (an artificial key, often an auto-incremented integer) as the primary key for the archived data. When archiving, you can generate new surrogate keys for the archived records. Upon restoring or accessing the archived data, these surrogate keys can be used without worrying about collisions with the original primary keys.
    3. Adjust primary key values during archiving (if the data is re-queried infrequently): when archiving data, if feasible, you can modify the primary key values of the archived records so that they do not conflict with existing primary keys. For example, you might prefix or append a specific identifier to the primary key values during the archiving process.
    4. Use a different database schema or table: store archived data in a separate schema or table within the same database, or even in a different database. This ensures that the archived data has its own set of primary keys that won't collide with the active dataset. This approach is particularly useful when the archived data follows a different retention policy or is accessed infrequently.
    5. Include a version or archive identifier: add a version or archive identifier column to your table. When archiving, update this column to distinguish between the original dataset and the archived records.
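A minimal JDBC sketch of the surrogate-key strategy (assuming an in-memory H2 database on the classpath; the table and column names are made up): the archive table generates its own identity key and keeps the original id as an ordinary column, so collisions are impossible.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class ArchiveSketch {
    public static void main(String[] args) throws Exception {
        try (Connection c = DriverManager.getConnection("jdbc:h2:mem:demo");
             Statement st = c.createStatement()) {
            st.execute("CREATE TABLE orders (id BIGINT PRIMARY KEY, total INT)");
            // archive table: its own surrogate PK; the original id is a normal column
            st.execute("CREATE TABLE orders_archive (" +
                       " archive_id BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY," +
                       " orig_id BIGINT, archived_at TIMESTAMP, total INT)");
            st.execute("INSERT INTO orders VALUES (1, 100), (2, 250)");
            // move rows: new surrogate keys are generated, so no PK collision is possible
            st.execute("INSERT INTO orders_archive (orig_id, archived_at, total) " +
                       "SELECT id, CURRENT_TIMESTAMP, total FROM orders WHERE id <= 2");
            st.execute("DELETE FROM orders WHERE id <= 2");
        }
    }
}
```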
  • comparing Redis Cluster to a single instance: data is sharded across multiple nodes, each node controlling part of the hash slots. Supports high availability: automatic replicas stand ready for failover, avoiding the single point of failure of a single instance. Auto node discovery: built-in support for automatic node discovery; clients can connect to any node, and the cluster will inform them about the current topology. Easy horizontal scaling by redistributing the hash slots (16384 in total) across more nodes. Eventual consistency, due to replication latency between nodes. Read-write separation: writes go to the master that owns the key's hash slot, while replica nodes can be configured as read-only to serve read traffic.
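A minimal client-side sketch (assuming the Jedis client library; the host and port are placeholders): the client connects to one seed node, and the cluster topology, i.e., which node owns which hash slot, is discovered automatically.

```java
import redis.clients.jedis.HostAndPort;
import redis.clients.jedis.JedisCluster;

public class ClusterSketch {
    public static void main(String[] args) {
        // one reachable seed node is enough; the client learns the full topology
        try (JedisCluster cluster = new JedisCluster(new HostAndPort("127.0.0.1", 7000))) {
            // the key is hashed (CRC16 mod 16384) to a slot; the request is
            // routed to whichever node currently owns that slot
            cluster.set("user:42", "alice");
            System.out.println(cluster.get("user:42"));
        }
    }
}
```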
  • https://www.youtube.com/watch?v=v4u7m2Im7ng&ab_channel=ADev%27Story when moving from a local monolith to microservices, communication becomes a problem: how do you find the right IP of a target service in a cloud-based system, where any service instance might shut down and come back up with a new address? DNS records map IPs to internet domain names. * client-side service discovery: simple, but couples the client to the service registry (sketched below).
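A minimal sketch of client-side discovery (the ServiceRegistry interface and instance list are hypothetical stand-ins for something like a Eureka or Consul client): the client asks the registry for the currently alive instances and does its own load balancing, which is simple but couples the client to the registry.

```java
import java.util.List;
import java.util.concurrent.ThreadLocalRandom;

public class ClientSideDiscovery {
    // hypothetical registry API; in practice this would be e.g. a Eureka or Consul client
    interface ServiceRegistry {
        List<String> lookup(String serviceName); // returns live "host:port" entries
    }

    static String pickInstance(ServiceRegistry registry, String serviceName) {
        List<String> instances = registry.lookup(serviceName);
        if (instances.isEmpty()) throw new IllegalStateException("no live instance of " + serviceName);
        // client-side load balancing: pick a random live instance
        return instances.get(ThreadLocalRandom.current().nextInt(instances.size()));
    }

    public static void main(String[] args) {
        ServiceRegistry registry = name -> List.of("10.0.0.5:8080", "10.0.0.6:8080");
        System.out.println("calling order-service at " + pickInstance(registry, "order-service"));
    }
}
```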
  • 4.1 Transaction management in a microservice architecture. Challenge: updating data across databases; the legacy @Transactional approach would introduce coupling to the application. A SAGA guarantees only ACD, without Isolation; the absent isolation is countered by introducing a data STATUS field, with isolation controlled by the developer (see the sketch below). 4.1.1 The need for distributed transactions in a microservice architecture: moving from a monolithic app to microservices, a transaction goes from spanning a single DB to spanning multiple DBs. 4.1.2 The trouble with distributed transactions: distributed transactions rely on synchronous IPC, which reduces availability, while today we prefer availability over consistency.
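A minimal sketch of the STATUS countermeasure (a semantic lock; the Order class and states are illustrative): an order created by a saga starts as APPROVAL_PENDING, other transactions can see that status and treat the record specially, and a later saga step approves or rejects it.

```java
public class SemanticLockSketch {
    enum OrderState { APPROVAL_PENDING, APPROVED, REJECTED }

    static class Order {
        OrderState state = OrderState.APPROVAL_PENDING; // set by the saga's first step

        // later saga step: the countermeasure's "unlock"
        void approve() {
            if (state != OrderState.APPROVAL_PENDING) throw new IllegalStateException(state.name());
            state = OrderState.APPROVED;
        }

        // compensating path
        void reject() {
            if (state != OrderState.APPROVAL_PENDING) throw new IllegalStateException(state.name());
            state = OrderState.REJECTED;
        }

        // other transactions must check the status instead of relying on ACID isolation
        boolean isVisibleToCustomer() { return state == OrderState.APPROVED; }
    }

    public static void main(String[] args) {
        Order order = new Order();
        System.out.println(order.isVisibleToCustomer()); // false while the saga runs
        order.approve();
        System.out.println(order.isVisibleToCustomer()); // true once the saga completes
    }
}
```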
  • 2.2 Define an application's microservice architecture. A three-step process to decompose: step 1. Identify system operations, i.e., the usage scenarios of the services (an abstraction of a request that the application must handle). step 2. Identify services: group operations of the same nature into services, by business capability or around domain-driven design (DDD) subdomains. step 3. Define service APIs and collaborations.
  • uid gid Although we log in to a Linux host with our account name, the host does not actually recognize the 'account name' directly; it only recognizes IDs, and an ID is just a number. The machine deals better with numbers; the account name exists only so that humans can remember it more easily. The mapping between your ID and your account name is kept in /etc/passwd. The structure of /etc/passwd is as follows: every line represents one account, and however many lines there are is how many accounts exist on your system. Note that many of these accounts are required by the system itself; we can call them system accounts, e.g. bin, daemon, adm, nobody. They are needed for the system to operate normally, so do not delete them casually! The content of the file looks something like the sample line shown below. Every process needs to obtain a uid and gid to resolve permission questions, so the permissions of /etc/passwd must be set to -rw-r--r--. Fields: account name; password: this field's data was later moved to /etc/shadow, so an x here indicates the password has been moved to that encrypted file.
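For illustration, a line in /etc/passwd has seven colon-separated fields: account name, password placeholder, UID, GID, comment, home directory, login shell. A typical line looks like:

```
root:x:0:0:root:/root:/bin/bash
```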
  • token bucket, leaking bucket, fixed window counter, sliding window log, sliding window counter. Purpose: a rate limiter is a tool used to control the amount of traffic that can be sent to a server or application over a specific period of time. By setting a limit on the rate of requests or connections that can be made within a given timeframe, a rate limiter can help mitigate Distributed Denial of Service (DDoS) attacks, but it does not eliminate DDoS: with a great enough number of requests, the system can still be overwhelmed. Token bucket: see the sketch below.
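A minimal token bucket sketch (the capacity and refill rate are arbitrary example values): tokens refill at a fixed rate up to the bucket's capacity, and a request is allowed only if it can take a token.

```java
public class TokenBucket {
    private final long capacity;       // max tokens the bucket can hold (burst size)
    private final double refillPerSec; // tokens added per second (sustained rate)
    private double tokens;
    private long lastRefillNanos;

    TokenBucket(long capacity, double refillPerSec) {
        this.capacity = capacity;
        this.refillPerSec = refillPerSec;
        this.tokens = capacity;
        this.lastRefillNanos = System.nanoTime();
    }

    /** Returns true if the request may proceed (a token was available). */
    synchronized boolean tryAcquire() {
        long now = System.nanoTime();
        // refill proportionally to elapsed time, capped at capacity
        tokens = Math.min(capacity, tokens + (now - lastRefillNanos) / 1e9 * refillPerSec);
        lastRefillNanos = now;
        if (tokens >= 1) {
            tokens -= 1;
            return true;
        }
        return false; // over the limit: reject or queue the request
    }

    public static void main(String[] args) {
        TokenBucket limiter = new TokenBucket(5, 2.0); // burst of 5, 2 requests/sec sustained
        // fired back-to-back: the first 5 pass, the rest are rejected until a refill
        for (int i = 0; i < 7; i++) {
            System.out.println("request " + i + " allowed = " + limiter.tryAcquire());
        }
    }
}
```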
  • AOP aims at method/class-level handling (cf. an interceptor, which handles the request or response at the web-application level). Aspect class declaration: @Aspect. Pointcut: the selection rule pattern that decides which join points match. @Before/@After/@AfterThrowing/@Around: declare the action, i.e., before advice, after advice, around advice, after-throwing advice. Join point: a moment in the working flow, examined against the pointcut rule to check whether it matches.
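A minimal Spring AOP sketch (assuming the AspectJ annotations are on the classpath; com.example.service in the pointcut expression is a placeholder package): the pointcut selects the join points, and the advice declares what to do there.

```java
import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.AfterThrowing;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;
import org.aspectj.lang.annotation.Pointcut;
import org.springframework.stereotype.Component;

@Aspect
@Component
public class LoggingAspect {
    // pointcut: selects every public method in the (placeholder) service package
    @Pointcut("execution(public * com.example.service..*(..))")
    public void serviceMethods() {}

    // before advice: runs at each matching join point before the method body
    @Before("serviceMethods()")
    public void logEntry(JoinPoint jp) {
        System.out.println("entering " + jp.getSignature().toShortString());
    }

    // after-throwing advice: runs only when the matched method throws
    @AfterThrowing(pointcut = "serviceMethods()", throwing = "ex")
    public void logFailure(JoinPoint jp, Exception ex) {
        System.out.println(jp.getSignature().toShortString() + " failed: " + ex.getMessage());
    }
}
```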
  • view and original table: a view creates a table-like structure whose data points to the rows of some underlying table, hence updating data through the view updates the data in the table, too. View: provides dynamic access to data with real-time updates. A view is a virtual table created by a query; it does not store any data itself. It is a saved SQL query that can be referenced and used like a table; the query is executed every time the view is accessed, so the results are dynamically generated. It exposes data from some table, so if we want to expose only some columns to a user, we can grant the user the privilege to read the view rather than the table.
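A minimal JDBC sketch of the column-hiding use case (again assuming an in-memory H2 database; the table and names are made up): the view stores no rows, so a read through it always reflects the table's current contents while exposing only the non-sensitive columns.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ViewSketch {
    public static void main(String[] args) throws Exception {
        try (Connection c = DriverManager.getConnection("jdbc:h2:mem:demo");
             Statement st = c.createStatement()) {
            st.execute("CREATE TABLE employees (id INT PRIMARY KEY, name VARCHAR(50), salary INT)");
            st.execute("INSERT INTO employees VALUES (1, 'amy', 90000)");
            // the view stores no data: it is a saved query that hides the salary column
            st.execute("CREATE VIEW employees_public AS SELECT id, name FROM employees");
            st.execute("INSERT INTO employees VALUES (2, 'bob', 80000)");
            // the view's query runs on each access, so the new row shows up immediately
            try (ResultSet rs = st.executeQuery("SELECT * FROM employees_public")) {
                while (rs.next()) System.out.println(rs.getInt("id") + " " + rs.getString("name"));
            }
        }
    }
}
```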
  • Only unchecked exceptions trigger rollback by default. According to the Spring documentation: in its default configuration, the Spring Framework's transaction infrastructure code marks a transaction for rollback only in the case of runtime, unchecked exceptions; that is, when the thrown exception is an instance or subclass of RuntimeException (Error instances also, by default, result in a rollback). Checked exceptions that are thrown from a transactional method do not result in rollback in the default configuration. Unchecked exception (runtime exception): 1. RuntimeException is a non-checked exception => triggers rollback; in the method declaration you don't need to add a throws clause, and compilation succeeds. 2. Java doesn't check these exceptions at compile time.
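A minimal sketch of the two default behaviors plus the usual override (the service and method names are illustrative; assumes Spring's @Transactional): a checked exception only rolls back when listed in rollbackFor.

```java
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class PaymentService {

    // RuntimeException => transaction is marked for rollback by default
    @Transactional
    public void payUnchecked() {
        throw new IllegalStateException("boom"); // unchecked: rolls back
    }

    // checked exception => committed anyway under the default configuration
    @Transactional
    public void payChecked() throws Exception {
        throw new Exception("boom"); // checked: does NOT roll back by default
    }

    // explicit override: make the checked exception roll back too
    @Transactional(rollbackFor = Exception.class)
    public void payCheckedWithRollback() throws Exception {
        throw new Exception("boom"); // now rolls back
    }
}
```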
  • https://www.youtube.com/watch?v=i2eVTk2Fb40&ab_channel=ADev%27Story complete rebuild, temporal query, event replay: because object state is recorded as a log of events, we can reconstruct the object's status at any point in time before now. Snapshot: a record of the compiled object state at some past point, so replay can start from it instead of from the beginning.
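A minimal sketch of temporal query and snapshot (the integer-increment events are made up for illustration): replay events up to a chosen point for a temporal query, or start from a stored snapshot and replay only the tail.

```java
import java.util.List;

public class TemporalQuerySketch {
    record Snapshot(int value, int version) {} // compiled state as of some event index

    /** Temporal query: the state "at time t" = after the first upTo events. */
    static int replay(List<Integer> increments, int upTo) {
        return increments.subList(0, upTo).stream().mapToInt(Integer::intValue).sum();
    }

    /** Same state, but starting from a snapshot and replaying only the tail. */
    static int fromSnapshot(Snapshot snap, List<Integer> increments, int upTo) {
        List<Integer> tail = increments.subList(snap.version(), upTo);
        return snap.value() + replay(tail, tail.size());
    }

    public static void main(String[] args) {
        List<Integer> events = List.of(5, 3, -2, 7);        // the full event log
        System.out.println(replay(events, 3));               // state as of event 3 -> 6
        Snapshot snap = new Snapshot(8, 2);                   // compiled after 2 events
        System.out.println(fromSnapshot(snap, events, 3));    // also 6, with a cheaper replay
    }
}
```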
  • what is E2E (end-to-end encryption)? https://www.youtube.com/watch?v=jkV1KEJGKRA&ab_channel=Computerphile it makes the message impossible to decrypt in transit from source to target; the server only ever sees the encrypted message. How does the end-to-end flow work? A <-> server <-> B. A and B each create their own key pair (public, private); they exchange public keys via the server, and a sender encrypts with the recipient's public key so that only the recipient's private key can decrypt (sketched below).
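A minimal flow sketch using the JDK's RSA primitives (real messengers use more elaborate schemes; this only illustrates the "server sees ciphertext" property): A encrypts with B's public key, so only B's private key can decrypt.

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import javax.crypto.Cipher;

public class E2eSketch {
    public static void main(String[] args) throws Exception {
        // B creates a key pair and publishes only the public half
        KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
        gen.initialize(2048);
        KeyPair bob = gen.generateKeyPair();

        // A encrypts with B's PUBLIC key; the server merely relays this ciphertext
        Cipher enc = Cipher.getInstance("RSA");
        enc.init(Cipher.ENCRYPT_MODE, bob.getPublic());
        byte[] ciphertext = enc.doFinal("hi bob".getBytes());

        // the server cannot decrypt: only B's PRIVATE key can
        Cipher dec = Cipher.getInstance("RSA");
        dec.init(Cipher.DECRYPT_MODE, bob.getPrivate());
        System.out.println(new String(dec.doFinal(ciphertext))); // "hi bob"
    }
}
```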
  • ref: https://leetcode.com/problems/binary-search/ Constraints: 1 <= nums.length <= 10^4; -10^4 < nums[i], target < 10^4; all the integers in nums are unique; nums is sorted in ascending order. dfs tc: O(n+e), visiting every node once and every edge once.
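A standard iterative solution sketch for the linked problem (not necessarily the note's original code): O(log n) by halving the search range.

```java
class Solution {
    public int search(int[] nums, int target) {
        int lo = 0, hi = nums.length - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2; // avoids int overflow
            if (nums[mid] == target) return mid;
            if (nums[mid] < target) lo = mid + 1; // target is in the right half
            else hi = mid - 1;                    // target is in the left half
        }
        return -1; // not found
    }
}
```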
  • tags: leetcode,bfs,djs,medium ref: https://leetcode.com/problems/number-of-islands/ Given an m x n 2D binary grid grid which represents a map of '1's (land) and '0's (water), return the number of islands. An island is surrounded by water and is formed by connecting adjacent lands horizontally or vertically. You may assume all four edges of the grid are all surrounded by water. tc O(n), sc O(n), where n is the number of grid cells.
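A standard DFS flood-fill solution sketch for the linked problem (the note's own code is not shown in the preview): each land cell starts at most one DFS that sinks the whole island, so every cell is visited a constant number of times.

```java
class Solution {
    public int numIslands(char[][] grid) {
        int count = 0;
        for (int r = 0; r < grid.length; r++) {
            for (int c = 0; c < grid[0].length; c++) {
                if (grid[r][c] == '1') { // unvisited land: a new island
                    count++;
                    sink(grid, r, c);
                }
            }
        }
        return count;
    }

    // flood-fill: mark every connected land cell as water so it is not recounted
    private void sink(char[][] grid, int r, int c) {
        if (r < 0 || r >= grid.length || c < 0 || c >= grid[0].length || grid[r][c] != '1') return;
        grid[r][c] = '0';
        sink(grid, r + 1, c);
        sink(grid, r - 1, c);
        sink(grid, r, c + 1);
        sink(grid, r, c - 1);
    }
}
```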
  • ref: https://leetcode.com/problems/count-unreachable-pairs-of-nodes-in-an-undirected-graph/description/ You are given an integer n. There is an undirected graph with n nodes, numbered from 0 to n - 1. You are given a 2D integer array edges where edges[i] = [ai, bi] denotes that there exists an undirected edge connecting nodes ai and bi. Return the number of pairs of different nodes that are unreachable from each other. class Solution { public long countPairs(int n, int[][] edges) { //dfs
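The preview cuts off at the method signature; a complete version of the DFS approach might look like this (component sizes via iterative DFS; unreachable pairs = all pairs minus within-component pairs):

```java
import java.util.*;

class Solution {
    public long countPairs(int n, int[][] edges) {
        // build adjacency lists
        List<List<Integer>> adj = new ArrayList<>();
        for (int i = 0; i < n; i++) adj.add(new ArrayList<>());
        for (int[] e : edges) {
            adj.get(e[0]).add(e[1]);
            adj.get(e[1]).add(e[0]);
        }
        boolean[] seen = new boolean[n];
        long total = (long) n * (n - 1) / 2; // all pairs of different nodes
        long reachable = 0;
        for (int i = 0; i < n; i++) {
            if (!seen[i]) {
                long size = dfs(i, adj, seen);      // size of this connected component
                reachable += size * (size - 1) / 2; // pairs inside the component
            }
        }
        return total - reachable; // pairs across components are unreachable
    }

    // iterative DFS to avoid stack overflow on large graphs
    private long dfs(int start, List<List<Integer>> adj, boolean[] seen) {
        Deque<Integer> stack = new ArrayDeque<>();
        stack.push(start);
        seen[start] = true;
        long size = 0;
        while (!stack.isEmpty()) {
            int u = stack.pop();
            size++;
            for (int v : adj.get(u)) {
                if (!seen[v]) { seen[v] = true; stack.push(v); }
            }
        }
        return size;
    }
}
```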
  • ref: https://leetcode.com/problems/web-crawler/ tc O(n), where n is the number of URLs; sc O(n) for the queue. /** * // This is the HtmlParser's API interface. * // You should not implement it, or speculate about its implementation * interface HtmlParser { * public List<String> getUrls(String url) {} * } */
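A BFS solution sketch matching the stated complexities (HtmlParser is the interface provided by the problem; the crawler must stay on the start URL's hostname and visit each URL once):

```java
import java.util.*;

class Solution {
    public List<String> crawl(String startUrl, HtmlParser htmlParser) {
        String host = hostname(startUrl);
        Set<String> seen = new HashSet<>();
        Deque<String> queue = new ArrayDeque<>();
        seen.add(startUrl);
        queue.add(startUrl);
        while (!queue.isEmpty()) {
            String url = queue.poll();
            for (String next : htmlParser.getUrls(url)) {
                // only follow links on the same hostname, each URL once
                if (hostname(next).equals(host) && seen.add(next)) {
                    queue.add(next);
                }
            }
        }
        return new ArrayList<>(seen);
    }

    // hostname = the part after "http://" and before the next '/'
    private String hostname(String url) {
        int start = url.indexOf("//") + 2;
        int slash = url.indexOf('/', start);
        return slash == -1 ? url.substring(start) : url.substring(start, slash);
    }
}
```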