Problem:
First, let us start from , dividing the positions of the watered points into three categories:
For case 2 the answer is clearly . Otherwise, suppose the watered point lies to the left of . Observe that poking a point to the left of during an operation can never pull the water to point , while poking (or any point to its right) pulls the water to point immediately. The answer is therefore the expected number of pokes needed until we first poke a point with index , and since this event has probability , the expected value is . By the same reasoning, if the watered point initially lies to the right of , the expected value is .
Next, consider generalizing to the case of . If there is water on both sides of at the start, the answer is clearly ; otherwise we can simply pick any one watered point and reduce to the case .
From the derivation of subtask 1 the following conclusion is natural: let the watered point be , the target point be , and let be the set of vertices in the subtree of when the tree is rooted at . The process ends as soon as we poke a vertex in , and poking any other vertex can never end it. Hence the answer is , where is the size of .
The problem therefore reduces to: there are queries, each giving two vertices , asking for the size of the subtree of when the tree is rooted at .
Under the constraints of subtask 2, answering each query with a brute-force DFS is fast enough; for subtask 3 we can do the following:
(1) If, with as the root, is an ancestor of : first use binary lifting to find the vertex on the path from to whose depth is greater than that of by . Call this vertex ; the answer is then
(2) Otherwise, the answer is
The overall complexity is
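A minimal C++ sketch of this case analysis (names are illustrative, not the official code): root the tree at vertex 0, precompute Euler-tour times, subtree sizes and a binary-lifting table, then answer each query as above.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Sketch of the two cases above.  Root the tree at vertex 0 once, precompute
// Euler-tour times, subtree sizes and binary-lifting tables, then answer
// "size of the subtree of u when the tree is rooted at r".
const int LOG = 20;
int n, timer_ = 0;
vector<int> g[200005];
int tin[200005], tout[200005], sz[200005], dep[200005], up[200005][LOG];

void dfs(int v, int p) {
    tin[v] = timer_++; sz[v] = 1; up[v][0] = p;
    for (int j = 1; j < LOG; j++) up[v][j] = up[up[v][j-1]][j-1];
    for (int to : g[v]) if (to != p) {
        dep[to] = dep[v] + 1;
        dfs(to, v);
        sz[v] += sz[to];
    }
    tout[v] = timer_;
}
bool isAncestor(int u, int v) {           // is u an ancestor of v (root 0)?
    return tin[u] <= tin[v] && tout[v] <= tout[u];
}
int subtreeSize(int r, int u) {           // subtree of u when rooted at r
    if (r == u) return n;
    if (!isAncestor(u, r)) return sz[u];  // case (2): same subtree as with root 0
    // case (1): u is an ancestor of r; find the child c of u on the path to r
    int c = r;
    for (int j = LOG - 1; j >= 0; j--)
        if (dep[c] - (1 << j) > dep[u]) c = up[c][j];
    return n - sz[c];                     // everything except c's subtree
}
```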
Again from subtask 1, for we need to determine whether, with as the root, all watered vertices lie in the same subtree of ; if so the answer is the same as , and otherwise the answer is .
In this subtask we can simply compute the LCA of all watered vertices with as the root; if this LCA is , the watered vertices are not all in the same subtree.
As for the full solution, the approach above fails when some watered vertex lies outside the subtree of (rooted at ), in which case the LCA alone cannot settle the question. However, since all we need to decide is whether the watered vertices all lie in the same subtree of , we can first use ancestor relations to count how many of them lie inside the subtree of . If this count is strictly between , the answer is clearly 0; if the count is , all of them lie outside the subtree; otherwise we can apply the same method as in subtask 4, computing the LCA and then checking it.
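Continuing the sketch, here is one common way to implement the two ingredients used here, the ancestor test and the LCA of two vertices under an arbitrary root r (again, illustrative code rather than the official solution):

```cpp
// Uses dep[] and up[][] from the previous sketch.
int lca(int a, int b) {
    if (dep[a] < dep[b]) swap(a, b);
    for (int j = LOG - 1; j >= 0; j--)
        if (dep[a] - (1 << j) >= dep[b]) a = up[a][j];
    if (a == b) return a;
    for (int j = LOG - 1; j >= 0; j--)
        if (up[a][j] != up[b][j]) a = up[a][j], b = up[b][j];
    return up[a][0];
}
// LCA of a and b when the tree is re-rooted at r: it is the deepest of
// lca(a,b), lca(a,r), lca(b,r).  Folding this over a set of vertices gives
// the LCA of the whole set under root r.
int lcaRooted(int r, int a, int b) {
    int x = lca(a, b), y = lca(a, r), z = lca(b, r);
    int best = x;
    if (dep[y] > dep[best]) best = y;
    if (dep[z] > dep[best]) best = z;
    return best;
}
```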
Note that , so we can simply return a string of length , where indicates that the edge is present.
The black chicken first runs a DFS on the tree rooted at ; then, for every vertex , it writes its parent in base , concatenates all of these, and sends the result to the white chicken, who can then easily reconstruct the tree.
The number of bits used is roughly .
The black chicken sends the Euler tour of the tree to the white chicken, using to denote an edge traversed downwards and to denote an edge traversed upwards. The white chicken can then easily reconstruct the tree.
The number of bits used is roughly .
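A minimal sketch of this encoding and its decoding (assuming the convention '0' = walk down to a new child, '1' = walk back up; the actual bit convention used in the solution may differ, and the decoder recovers the shape of the tree with fresh labels):

```cpp
#include <bits/stdc++.h>
using namespace std;

vector<int> g[100005];
string code;

void encode(int v, int p) {                 // produces 2*(n-1) bits
    for (int to : g[v]) if (to != p) {
        code += '0';                        // walk down the edge
        encode(to, v);
        code += '1';                        // walk back up
    }
}

// Decoding: every '0' creates a new child of the current vertex,
// every '1' moves back to the parent.
vector<pair<int,int>> decode(const string& s) {
    vector<pair<int,int>> edges;
    vector<int> stk = {0};                  // start at the root, labelled 0
    int next_label = 1;
    for (char c : s) {
        if (c == '0') {
            edges.push_back({stk.back(), next_label});
            stk.push_back(next_label++);
        } else {
            stk.pop_back();
        }
    }
    return edges;
}
```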
With a bit of extra compression, this can reach roughly 60 points.
We consider rooting the tree at and mapping every possible tree to a single index.
Define as the number of trees on vertices in which every child subtree of the root has size at least ; two trees that are isomorphic as rooted trees are counted as the same.
With this DP it is not hard to map every tree to a unique index, though there are quite a few details, which we leave to the reader.
The white chicken then runs the same DP and, by a similar procedure, decodes the original tree.
The number of such trees is , so this method uses bits.
In the previous approach we rooted the tree directly at , which seems somewhat wasteful.
Instead, root the tree at its centroid; the subtree sizes are then bounded by , so the number of possible trees becomes smaller.
Concretely, let denote the number of trees on vertices in which the size of every child subtree of the root lies in the range ; again, rooted-isomorphic trees are counted as one.
The black and white chickens then encode and decode with the same kind of procedure.
Since the number of trees is , this method uses bits, which solves the problem.
Observation 1. Once we fix an order in which to stack the blocks, then for every , the center of mass of the top blocks should lie exactly on the boundary of the -th block, and the center of mass of all blocks should lie on the edge of the table.
Proof. Clearly the center of mass of all blocks has to lie on the edge of the table. Suppose there exists a such that the center of mass of the top blocks is not on the boundary of the -th block, and take the smallest such . We can shift the top blocks together to the left until their center of mass lies on the boundary of the -th block, and then shift the top blocks together to the right until their center of mass returns to its original position. After this, the center of mass of the top blocks lies on the boundary of the -th block, and the tower still does not topple. Repeating this makes the tower satisfy the stated condition.
Consequently, if we fix the stacking order and let the weight of the -th block from the top be , we can start with all blocks stacked perfectly aligned, then slide the topmost block to the left as far as possible, then slide the top 2 blocks to the left as far as possible, and so on; this makes the tower extend the farthest.
When we slide the top blocks, the distance we can move them is the distance between the center of mass of the top blocks and the center of mass of the top blocks at that moment, namely .
So the answer is
Observation 2. Heavier blocks should be placed lower.
Proof. If the -th and -th blocks from the top satisfy , then:
which gives
where . Hence swapping the -th and -th blocks makes the answer larger.
Therefore, sorting the blocks by weight and evaluating the formula above gives the answer.
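Since the closed-form expression has been lost from this write-up, here is a sketch under explicit assumptions: unit-length blocks, weights listed from the top of the stack down, and the answer taken to be the total horizontal shift accumulated by the greedy; under these assumptions the shift gained when sliding the top k blocks works out to w_k divided by twice the total weight of the top k blocks.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Sketch only.  Assumptions (not stated explicitly in the original text):
// blocks have unit length, w[k] is the weight of the k-th block from the top,
// and the quantity we sum is the shift gained at each greedy step,
// i.e. w[k] / (2 * (w[1] + ... + w[k])) when sliding the top k blocks.
int main() {
    int n;
    if (!(cin >> n)) return 0;
    vector<double> w(n);
    for (auto& x : w) cin >> x;
    // Observation 2: heavier blocks go lower, i.e. weights increase from the top.
    sort(w.begin(), w.end());
    double prefix = 0, answer = 0;
    for (int k = 0; k < n; k++) {
        prefix += w[k];
        answer += w[k] / (2.0 * prefix);   // shift available for the top k+1 blocks
    }
    printf("%.9f\n", answer);
}
```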
The case where the answer is can be handled by scanning all of the sequences once for the maximum value, so below we only consider the case where the answer is .
In what follows, every sum mentioned is taken max with .
Trivial.
For those who do not know how to compute the maximum contiguous subarray sum, you can refer to this tutorial.
Time Complexity: .
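As a reference, a standard Kadane-style scan (a generic sketch, not necessarily the intended implementation) computes the maximum contiguous subarray sum in one pass:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Kadane's algorithm: maximum sum of a (possibly empty) contiguous subarray.
// Dropping the max with 0 below gives the non-empty version instead.
long long maxSubarraySum(const vector<long long>& a) {
    long long best = 0, cur = 0;
    for (long long x : a) {
        cur = max(0LL, cur + x);   // either extend the current segment or restart
        best = max(best, cur);
    }
    return best;
}
```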
Observe that the answer falls into one of two cases:
If the left and right boundaries lie in the same sequence, the answer is simply the maximum contiguous subarray sum of that sequence.
If the left and right boundaries lie in different sequences, the following quantities are contributed to the answer:
Here you can compute the maximum prefix sum, maximum suffix sum, and total sum of every sequence in ; clearly, including a sequence in the middle only makes sense when its total sum is .
You can then enumerate the left and right boundaries together with the remaining intervals, computing the answer in time.
Time Complexity: .
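A sketch of the per-sequence preprocessing used here and in the later subtasks (hypothetical struct and field names):

```cpp
#include <bits/stdc++.h>
using namespace std;

// For each sequence we keep the quantities the editorial reuses later:
// best prefix sum, best suffix sum, total sum, and best contiguous sum.
struct SeqStats {
    long long pre, suf, sum, best;
};

SeqStats computeStats(const vector<long long>& a) {
    SeqStats s{0, 0, 0, 0};              // empty choices allowed, hence 0 baselines
    long long cur = 0;
    for (long long x : a) {              // prefix sums and Kadane in one pass
        s.sum += x;
        s.pre = max(s.pre, s.sum);
        cur = max(0LL, cur + x);
        s.best = max(s.best, cur);
    }
    long long run = 0;
    for (int i = (int)a.size() - 1; i >= 0; i--) {   // suffix sums
        run += a[i];
        s.suf = max(s.suf, run);
    }
    return s;
}
```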
Continuing the previous subtask's approach: as long as we maintain , the answer can be updated in while enumerating the left and right boundaries.
Time Complexity: .
Continuing further, consider enumerating only the left boundary and computing which sequence, when chosen as the right boundary, increases the answer the most.
Observe that when a sequence is chosen as the left boundary, picking the best right boundary increases the answer by
Terms (1.) and (3.) can be handled entirely during preprocessing.
We can therefore sort all intervals by "maximum prefix sum minus total sum" in decreasing order; term (2.) is then always the first item, or the second item if the left boundary itself is the first item, so it can be found in .
Time Complexity: .
Originally this problem required constructing the permutation itself, but after testing it was felt not to be trivial enough, so that part was removed.
However, since my approach necessarily finds the left and right boundaries anyway, I did not see why removing that part would make the problem easier.
Only after asking the testers about their solutions did I realize that the problem can be solved directly with a DP.
As before, preprocess the maximum prefix sums, suffix sums, interval sums, and total sums; then it suffices to maintain , indicating whether the prefix and the suffix have already been chosen among the first intervals, and the transition is obvious.
Time Complexity: .
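A sketch of this DP, under the interpretation suggested by the discussion above (a suffix of one left-boundary sequence, any subset of later whole sequences in the middle, then a prefix of one right-boundary sequence, or a single subarray inside one sequence); the state is the pair of flags described, and all names are illustrative:

```cpp
#include <bits/stdc++.h>
using namespace std;
// stats[i] holds {pre, suf, sum, best} of the i-th sequence, as computed earlier.
struct SeqStats { long long pre, suf, sum, best; };

long long solve(const vector<SeqStats>& st) {
    const long long NEG = LLONG_MIN / 4;
    long long ans = 0;
    // dp[a][b]: best value over the sequences processed so far, where
    // a = "the left boundary (a suffix of some sequence) has been chosen",
    // b = "the right boundary (a prefix of some sequence) has been chosen".
    long long dp[2][2] = {{0, NEG}, {NEG, NEG}};
    for (const auto& s : st) {
        long long ndp[2][2];
        ndp[0][0] = dp[0][0];                                   // not started yet
        ndp[1][0] = max(dp[1][0] + max(s.sum, 0LL),             // keep extending the middle
                        dp[0][0] + s.suf);                      // open here: take a suffix
        ndp[1][1] = max({dp[1][1],                              // already closed
                         dp[1][0] + s.pre,                      // close here: take a prefix
                         dp[0][0] + s.best});                   // both boundaries inside this sequence
        ndp[0][1] = NEG;                                        // unreachable
        memcpy(dp, ndp, sizeof dp);
        ans = max({ans, dp[1][1], dp[1][0]});
    }
    return ans;
}
```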
Just enumerate all subsets; this probably does not need an editorial (x), and it is trivial to do in time.
We can consider a knapsack-like DP structure. Define as the over all subsets with exactly elements among the first elements of ; the answer is then trivially .
The DP transition can be built as follows:
Some weird NTT; I don't know how to do it, to be filled in later.
Let us go back to the DP formula from Subtask 2.
Thinking about it a bit, the formula does not actually need to be that complicated.
Since in our answer each gets multiplied by a factor of , we can find a way to define directly as the answer when only the first items are considered.
To make the transition work, additionally define , where is the answer over the first items without the factor of the subset size.
The transition for is obviously
How does this help us compute ? Consider whether the -th item is chosen. If it is not chosen, the answer is , which in the Subtask 2 DP is exactly the term ; if it is chosen, its contribution corresponds to the remaining term there. Note that, in terms of the overall answer, when goes from to , ; that is, we are really first adding to and then multiplying the sum by .
Combining the chosen and not-chosen cases, we can derive the final DP formula:
We can thus solve the problem in time.
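A sketch of this linear DP; the elided formulas are filled in under the assumption that the quantity being computed is the sum, over subsets S, of |S| times the product of the elements of S, taken modulo a prime, so treat this as an illustration of the recurrence rather than the official formula:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Assumed interpretation (the exact formulas are elided in the editorial):
//   g[i] = sum over subsets S of the first i items of  prod(S)
//   f[i] = sum over subsets S of the first i items of |S| * prod(S)
// Transitions, as described in the text:
//   g[i] = g[i-1] * (1 + a[i])
//   f[i] = f[i-1] + a[i] * (f[i-1] + g[i-1])   // "add f and g, then multiply by a[i]"
const long long MOD = 998244353;   // hypothetical modulus

int main() {
    int n; cin >> n;
    vector<long long> a(n);
    for (auto& x : a) cin >> x;
    long long f = 0, g = 1;        // empty prefix: only the empty subset
    for (long long x : a) {
        long long nf = (f + x % MOD * ((f + g) % MOD)) % MOD;
        long long ng = g * ((1 + x) % MOD) % MOD;
        f = nf; g = ng;
    }
    cout << f << "\n";
}
```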
I don't know how to do this one QQ
Let be a formal variable and consider the formal polynomial . After expanding, the coefficient of is exactly the sum of the products of the elements over all subsets of size . Since what we want is these sums multiplied by , this suggests differentiating the polynomial : after differentiation, the exponent of each term drops down and multiplies its coefficient. Hence the value we want to compute is .
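Spelling this out under the same assumed interpretation (with the polynomial taken to be the product of the factors 1 + a_i x, which matches the description but is not written out in the original):

$$
F(x)=\prod_{i=1}^{n}\bigl(1+a_i x\bigr)=\sum_{k\ge 0} e_k\,x^{k},
\qquad
F'(1)=\sum_{k\ge 1} k\,e_k=\Bigl(\prod_{i=1}^{n}(1+a_i)\Bigr)\sum_{i=1}^{n}\frac{a_i}{1+a_i},
$$

where $e_k$ denotes the sum of products over all size-$k$ subsets. The last form assumes every $1+a_i$ is invertible; if not, one can evaluate $F'(1)$ directly with prefix and suffix products instead.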
We regard the cities as vertices, the roads as edges, and the health values as edge weights.
First, it is obvious that , so if we run a shortest path with equality assumed everywhere and the resulting length satisfies , the answer must be . A bit of hand-waving suggests this is in fact an if-and-only-if condition for ; implementing it earns points, which "proves" the sufficiency by AC.
The graph is a tree.
The path from to is unique, so we can apply the same check as in the partial solution, find the edges on the path by any method, and raise the weight of the largest edge high enough; this yields a correct construction.
The graph is a cycle.
There are exactly two paths between and . Let be the path containing the edge of maximum weight and the other path; their lengths are and respectively.
There are three cases.
Do the same thing as in Subtask 1 for path B. Remember to adjust the other edges accordingly.
and
Do the same thing as in Subtask 1 for path A.
and
There is no solution.
Consider repeatedly finding a shortest path from to .
If the shortest path has length , the answer must be .
If the shortest path has length , raise the weight of the largest edge on this path just enough so that the path length becomes exactly , and set every edge whose weight is larger than that edge (but which is not on this shortest path) to at least .
If the shortest path has length , we are done; output together with all the adjusted edge weights.
Correctness is straightforward.
Note that each round adjusts at least one edge, so the number of shortest-path computations is , and each run of Dijkstra costs .
The time complexity is .
First, check whether and are connected; if not, the answer is .
If , check whether .
Initially set , the minimum possible weight of each edge, and binary search on the current graph: find the smallest such that, considering only edges of weight , the shortest path from to has length . If no such exists, output ; otherwise the answer must be .
Raise the weight of every edge whose weight is to , i.e. effectively ignore them; from now on we treat edges of weight as non-existent. Find a shortest path from to , of length .
Note that must contain an edge of weight (by the definition of the binary search); call it and change its weight to . We claim this is a valid construction. For any path from to : if does not contain , then by the binary search its length is ; if contains and its original length was , then by the shortest-path property , and after the change the length of becomes .
So every possible path length is at least , hence the construction is correct; setting to gives the answer.
We use Dijkstra to compute the shortest paths, so the complexity is
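A generic sketch of the threshold binary search with Dijkstra (the exact bound being compared against is elided above, so the predicate below uses a placeholder `need`; all names are illustrative):

```cpp
#include <bits/stdc++.h>
using namespace std;
using ll = long long;

struct Edge { int to; ll w; };
vector<vector<Edge>> adj;       // adjacency list, filled elsewhere

// Shortest path from s to t using only edges of weight <= cap.
ll dijkstra(int s, int t, ll cap) {
    const ll INF = LLONG_MAX / 4;
    vector<ll> dist(adj.size(), INF);
    priority_queue<pair<ll,int>, vector<pair<ll,int>>, greater<>> pq;
    dist[s] = 0; pq.push({0, s});
    while (!pq.empty()) {
        auto [d, v] = pq.top(); pq.pop();
        if (d != dist[v]) continue;
        for (auto [to, w] : adj[v]) if (w <= cap && d + w < dist[to]) {
            dist[to] = d + w;
            pq.push({dist[to], to});
        }
    }
    return dist[t];
}

// Binary search for the smallest cap whose restricted shortest path is short
// enough (`need` stands in for the elided bound in the editorial).
ll smallestCap(int s, int t, ll need, ll lo, ll hi) {
    ll ans = -1;                 // -1 encodes "no such cap exists"
    while (lo <= hi) {
        ll mid = (lo + hi) / 2;
        if (dijkstra(s, t, mid) <= need) { ans = mid; hi = mid - 1; }
        else lo = mid + 1;
    }
    return ans;
}
```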
First, a simple greedy argument shows that the problem can be transformed into the following:
Given a positive integer and positive integers .
There are queries; each query gives a positive integer , and asks how many indices satisfy that the digit sum of in base is exactly .
Direct simulation suffices: the number of digits of in base is at most , so this approach has time complexity , where is the value range (2e6).
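For reference, a minimal digit-sum routine for answering a single query by brute force (a generic sketch; the target digit sum and the I/O are problem-specific and omitted here):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Digit sum of x written in base b (b >= 2).
long long digitSum(long long x, long long b) {
    long long s = 0;
    while (x > 0) { s += x % b; x /= b; }
    return s;
}

// Per query (base b), count how many of the given numbers hit the target sum.
int countMatches(const vector<long long>& a, long long b, long long target) {
    int cnt = 0;
    for (long long x : a) if (digitSum(x, b) == target) cnt++;
    return cnt;
}
```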
In the following subtasks we simply precompute the answer for every base within the value range and answer each query in ; below, denotes the upper bound of in each subtask.
Observe that when is greater than , every positive integer below has at most digits in base . So for bases up to we simply convert every number from to into base and check whether the digit sum is ; this part costs . For bases larger than , a number with digit sum admits only these possibilities (enumerate the two digits directly), and we can stop as soon as the represented value exceeds the range, so this part can be shown to cost . The total complexity is therefore .
Following the same idea as Subtask 2, notice further that when computing the base- representation of we do not have to start from scratch: take the representation already computed for , add , and perform the carries to obtain a correct base- representation. The carries amortize to , so the total complexity becomes .
The square-root trick alone will definitely TLE on this subtask, so a further observation is needed.
Notice that in base , if a positive integer has digit sum , then and are congruent modulo . Hence, for every base , every candidate whose digit sum in base could be must be of the form , where is a non-negative integer. Adding each time again makes the carries amortize to , so this approach has complexity .
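A sketch of this full-solution enumeration (the value bound C, the target digit sum M, and the count array layout are assumptions based on the transformed statement above, and bases are assumed to be at most C):

```cpp
#include <bits/stdc++.h>
using namespace std;

// For a fixed base b, every number with digit sum M is congruent to M modulo
// b-1, so we only walk through the candidates M, M+(b-1), M+2(b-1), ...,
// maintaining their base-b digits incrementally.  cnt[x] = how many of the
// given numbers equal x.
vector<long long> solveAllBases(const vector<int>& cnt, long long C, long long M) {
    vector<long long> ans(C + 1, 0);           // ans[b] for every base b
    for (long long b = 2; b <= C; b++) {
        if (M > C) continue;
        vector<long long> dig;                 // base-b digits of the current candidate
        long long x = M, sum = 0;
        for (long long t = x; t > 0; t /= b) { dig.push_back(t % b); sum += t % b; }
        while (x <= C) {
            if (sum == M) ans[b] += cnt[x];
            x += b - 1;                        // next candidate congruent to M mod (b-1)
            if (x > C) break;
            if (dig.empty()) dig.push_back(0);
            dig[0] += b - 1; sum += b - 1;     // add b-1 to the lowest digit
            for (size_t i = 0; dig[i] >= b; i++) {   // propagate carries (amortized O(1))
                dig[i] -= b; sum -= b - 1;
                if (i + 1 == dig.size()) dig.push_back(0);
                dig[i + 1] += 1;
            }
        }
    }
    return ans;
}
```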
Let us turn the whole problem around: fix the root at A; then the answer D obtained by querying C is exactly the LCA of B and C.
If, instead of querying , we query one of its children , we obtain more information.
We only ever need to query leaves.
With Observation 3 in hand, let us consider what happens when we query a leaf.
For ease of explanation, suppose has children , and that the leaf we query lies in the subtree of .
There are then two possible outcomes:
This already smells like a DP: by querying one leaf we either eliminate an entire subtree or learn that the answer lies inside it.
So let us define a state: dp[x] is the maximum number of further queries needed if we already know the answer lies in the subtree of x.
The transition is also clear: since we want to ask as few questions as possible, the strategy must be to probe first the subtree that would require many further questions if the answer were inside it. Thus it suffices to sort the children's DP values, query them in that order, and take the max.
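A hedged sketch of this tree DP; the exact "+1" offsets and the leaf base case depend on details of the query protocol that are not reproduced here, so treat the recurrence below as illustrative only:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Assumption: probing the i-th remaining subtree costs one query, and if the
// answer turns out to be inside it we still need dp[child] more queries.
vector<int> g[200005];
int dp[200005];

void solve(int v, int parent) {
    vector<int> childDp;
    for (int to : g[v]) if (to != parent) {
        solve(to, v);
        childDp.push_back(dp[to]);
    }
    if (childDp.empty()) { dp[v] = 0; return; }          // leaf: assumed base case
    sort(childDp.rbegin(), childDp.rend());              // probe "expensive" subtrees first
    int best = 0;
    for (int i = 0; i < (int)childDp.size(); i++)
        best = max(best, i + 1 + childDp[i]);            // i earlier probes + this probe + dp
    dp[v] = best;
}
```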
Hint: how should the checker for this problem be written?
Observation 1. can be transformed into if and only if the following conditions hold:
- the multiset of unordered pairs , that is, the multiset of unordered pairs of adjacent numbers, is unchanged
Proof. Clearly the conditions are necessary for to be transformable into . For sufficiency, suppose satisfies the conditions and we want to transform into . Let be the smallest index with ; if no such exists then . Otherwise, by the conditions above, .
By the conditions, there exists an satisfying , and there are two cases:
Reversing the interval of makes .
Let , (and note that for , ); looks like this:
If there exists satisfying , then reversing reduces us to the first case. Suppose, for contradiction, that no such exists for any such . View and as graphs on which they are Euler paths (numbers as vertices, adjacent pairs as edges). Since appears twice in , we know looks like
By the conditions the two graphs are isomorphic, so by , the graph contains a cycle through the edge . However, since the we want does not exist, removing any one of the edges from (in fact there is only one) splits it into two vertex-disjoint paths, so removing disconnects the graph, a contradiction. Hence such a must exist. In conclusion, we can always make , and repeating this eventually turns into .
Using Observation 1, a brute-force solution in follows immediately.
Using Observation 1, we can see that the number of alternations between runs of 1's and runs of 2's in the sequence is fixed, and the sequence must start with the same number as the original.
Based on this, first use the minimum number of 1's and 2's to realize the required number of alternations. Then note that every number other than the two ends would like an extra copy of the same number placed next to it, so as to reduce the number of bad triplets.
Indeed, inserting a 2 in the middle of a pattern such as does not change the number of alternations, and it adds one more adjacent pair of .
Therefore we greedily use the remaining adjacent pairs of 1's and 2's we have counted, placing 1's and 2's next to the middle numbers until the pairs run out or there is no more room; whatever is left can be inserted anywhere.
This can be done in time.
Treat every as an undirected edge of a graph; then is an Euler path. Our goal is to find another Euler path with the same starting and ending points that minimizes the number of bad triplets.
After traversing an edge into a vertex, we would like the next edge to not form a bad triplet with the one we came from. So for every vertex we pair up its incident edges, each pair meaning "if I arrive along one edge of the pair, I leave along the other". Intuitively the best pairing matches edges going to smaller vertices with edges going to larger vertices, then uses self-loop edges, and only pairs small with small or large with large when forced; each such forced pair is a bad pair, i.e. it creates one bad melody. If the two endpoints of the path differ, each endpoint may leave one edge unpaired, and we simply leave out an edge that cannot be paired well. The number of bad pairs obtained this way is clearly a lower bound on the answer.
The reason this might not be the true answer is that after pairing, the edges may decompose not into a single path but into at most one path plus several cycles. In fact the lower bound is attainable, because we can always merge a cycle with another cycle or with the path.
In the discussion below we treat the current vertex as the reference point and classify its incident edges as G (pointing to a larger vertex), E (a self-loop), and L (pointing to a smaller vertex); GE means G or E, and LE means L or E.
First consider merging two cycles that share a vertex. Pick, from each of the two cycles, a pair of incident edges at a shared vertex; there are a few cases:
Hence two cycles can be merged without changing the number of bad pairs. Merging the path with a cycle is similar: if the shared vertex appears in the interior of the path, it is the same as the cycle case; otherwise there are a few more cases:
So, likewise, the path and a cycle can be merged without increasing the number of bad pairs. Altogether, everything can be merged into a single path, and the answer equals the lower bound above.
For the implementation, consider the usual Euler path algorithm. Whenever the DFS returns, we have in fact finished a path or a cycle; returning to a vertex and then walking out along another edge is exactly the act of merging two cycles, or a path and a cycle.
So imagine the edges have already been paired by the greedy rule above. During the DFS we first walk along the paired edge; if upon returning we still have to walk along other edges, a merge is happening. In the first case of each of the two situations above it does not matter which edge we take next; otherwise we must keep a good pair good, and the upshot is: always prefer to continue along an edge that forms a good pair with the edge we just came from. Note that if the edge we came from is an E edge (self-loop), we must check whether its original pairing treats it as a small edge (paired with a large one) or a large edge (paired with a small one) in order to decide which edge to walk first after returning from its partner.
Finally, note that the pairings at different vertices are independent, and once an arbitrary optimal pairing is fixed we can always produce an Euler path with the same number of bad pairs; hence any pairing minimizing the number of bad pairs works. (Just make sure not to botch the Euler path implementation.)
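For reference, the standard iterative Hierholzer skeleton that the modification above builds on (a generic sketch without the problem-specific edge pairing):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Standard Hierholzer's algorithm for an Euler path on an undirected multigraph.
// adj[v] holds (neighbor, edge id); used[id] marks traversed edges.  The pairing
// heuristic from the editorial would replace the "pick any unused edge" step
// with "pick the paired / good edge first".  Assumes an Euler path from `start`
// exists and the graph has m edges.
vector<int> eulerPath(int start, vector<vector<pair<int,int>>>& adj, int m) {
    vector<char> used(m, 0);
    vector<size_t> it(adj.size(), 0);
    vector<int> stk = {start}, path;
    while (!stk.empty()) {
        int v = stk.back();
        while (it[v] < adj[v].size() && used[adj[v][it[v]].second]) it[v]++;
        if (it[v] == adj[v].size()) {        // no unused edge left: vertex is finished
            path.push_back(v);
            stk.pop_back();
        } else {
            auto [to, id] = adj[v][it[v]];   // take the next unused edge
            used[id] = 1;
            stk.push_back(to);
        }
    }
    reverse(path.begin(), path.end());       // vertices of the Euler path in order
    return path;
}
```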
In the following, let denote the maximum of .
Notice that the initially black cell can be used as a signal: if cell AND is black, we can put that result onto every cell of the circle of that radius. Doing this for each circle costs about the total circumference of all the circles, roughly operations. In practice, we can use or_col to color each cell, so that we never accidentally color a black cell white.
Clearly every circle is symmetric about the x axis and the y axis, so during preprocessing we can handle only the first quadrant and then use or_col and or_row to fold the picture over twice; this takes roughly 4 times fewer operations.
The following solutions all use the reflection trick from Subtask 2.
Not model solution 1: ~61 pts
Notice that a circle contains many runs of consecutive black cells, so we would like to produce each such run in a single operation. To support this we need a sufficiently long pre-built run of black cells, which can be created with operations of doubling (temporary space in another quadrant can be used); this contributes operations per circle. Afterwards we preprocess as in the previous subtasks.
Not model solution 2: ~55 pts
Treat the grid as a calculator and directly compute which cells need to be colored. This was the original official solution; it is a fair bit harder to implement, and may require implementing things like addition and multiplication.
Model Solution:
Idea: split the first quadrant along the diagonal into an upper half and a right half; the two halves are symmetric, so we only discuss the right half. As in the previous solutions, we process the circles one by one. For every value of , we want to color only the single cell with the smallest in that column, and then push that signal up the whole column.
Concretely, the initially preprocessed cells might look like this:
Then we can use or_row's and or_col's to color every cell to the upper right of those cells black.
Notice that if we XOR two adjacent columns, we obtain all the cells of that lie between two values of . This misses some cells near the top, so we do the same thing once more from bottom to top (XOR-ing adjacent rows for the upper half).
As in the following figure:
We can use the other quadrants as memory to store these two pictures. Finally, OR-ing the two pictures together gives something close to the final answer.
It remains to clean up the extra parts on the axis and the axis. Apart from the preprocessing, the rest takes only operations.
If you implement this you will find it is still wrong: for some values of there is one extra cell on the diagonal; see, for example, the case . This case cannot be fixed by the method above, but fortunately, for , there are only four such values of : , so we simply special-case them (these exceptional values are easy to find with a separate program).
This takes about operations, for about points.
All of the previous solutions process the circles separately; however, it is actually possible to process neighboring circles together. The idea is to draw many vectors from to the points on the -th circle; if neighboring values of share some vector, they can be handled together with a single or_col or or_row. After finding all the vectors, a simple two-pointer scan finishes the problem.
The official solution uses around operations.
It could probably still be optimized.
First let us consider when a starting point can be merged. After some observation, one finds that there must be a Three-Mouthed Sheep in each of the four directions (up, down, left, right) of the starting point, as in the figure; X marks the position of a Three-Mouthed Sheep.
The region "above" a starting point is the area between its upper-left and upper-right diagonal lines (lines included); note that a sheep on the lower-right diagonal counts for both the right and the down directions. For a rigorous proof, consider the distances from 8e7 to the sheep representing the four directions: whenever 8e7 takes a step, the sheep can respond so that none of these distances increases while at least one of them decreases, so at some point in time they must coincide (poor 8e7!). This also yields another property: if 8e7 is safe, he can escape by moving straight up, down, left, or right. Hence Subtask 1 can be solved by enumerating all starting points (or with some weird DFS). From now on, the up/down/left/right scout of a point denotes a Three-Mouthed Sheep covering that direction of the point.
Now observe which points a given Three-Mouthed Sheep can cover. A sheep can serve as the up scout for every point below it (the region between its lower-left and lower-right diagonal lines), and similarly for the other directions, as in the figure:
Therefore, for every point we can maintain whether an up/down/left/right scout exists; if all four exist, the point is dangerous. The updates can be propagated with a DFS, which is enough for Subtask 2, with complexity .
The shapes needed in this problem are all triangles rotated by 45 degrees, so a common trick is to rotate the plane by 45 degrees; the discussion below is in rotated coordinates.
Consider which points of a single column are dangerous. In rotated coordinates the covering condition becomes: the point must have a sheep to its upper-left, lower-left, upper-right, and lower-right. So consider the highest and lowest sheep to the left of the column and the highest and lowest sheep to the right; they project onto the column as two intervals, and the number of dangerous points on the column is exactly the length of the intersection of these two intervals, as in the figure:
The red segment is the intersection and the black circles are the dangerous points; note that because of the 45-degree rotation, the parity of each point has to be taken into account.
With these observations we can maintain, by brute force, the prefix and suffix maxima/minima of the heights ( values) over the columns, and after each insertion scan every column by brute force to compute the answer; this passes Subtask 3 with complexity .
Below is a reference implementation for Subtask 3.
The official solution extends the Subtask 3 idea: after inserting a point, we want to maintain the prefix/suffix maxima and minima quickly. Concretely, we want to compute
This can be done with binary search on a segment tree together with range updates (lazy propagation).
One detail: when computing the answer, columns whose intersection has length must not be counted, so we need to know when each column starts contributing. One option is a global (parallel) binary search, which gives a complexity of roughly and might not fit in the time limit. Another option is to observe when a column's contribution becomes non-negative, which is equivalent to "there is an upper-left/lower-right pair crossing the column and a lower-left/upper-right pair crossing it, or a horizontal pair crossing at least one point of the column". This can be maintained with a segment tree plus a few std::sets, in amortized time.
Finally, consider how to take the min of the prefix maximum and the suffix maximum: there is a point such that and , and this is exactly the position of the highest point, so we can simply maintain this position and handle it with a range sum on the segment tree (with lazy propagation again).
The other solution extends the Subtask 2 approach: notice that every point has only 16 possible states, so a similar method with 16 segment trees works. The complexity is still , but watch out for the constant factor.