# Gruppindelning
## Subtask 1
In this subtask, a slow DP suffices. We keep track of which people have been assigned to a group. Then, to build the groups, we have an additional parameter which is the leader of the group we're building. The transitions are then:
- Add some person to the group
- Finish this group and start a new group with a new leader
To not add too many people, we also need to keep track of the number of people in the group currently. All in all, this runs in something like $O(2^NN^3)$. By adding an entire group at once as the transition, it can also be made to run in $O(3^NN)$.
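The $O(3^NN)$ variant can be sketched as follows, assuming the scoring model used later in this editorial (a group of size $g$ led by person $i$ contributes $a_i \cdot g + b_i$ and requires $g \le c_i$); the function name and input format are illustrative:

```python
# O(3^N * N) bitmask DP: dp[mask] = best score partitioning exactly the
# people in `mask` into complete groups.
def max_strength(people):
    n = len(people)  # people: list of (a, b, c) tuples
    NEG = float("-inf")
    dp = [NEG] * (1 << n)
    dp[0] = 0
    for mask in range(1 << n):
        if dp[mask] == NEG:
            continue
        rest = ((1 << n) - 1) ^ mask
        # Force the lowest-indexed unassigned person into the next group,
        # so each partition is built in exactly one order.
        low = rest & (-rest)
        sub = rest
        while sub:  # enumerate submasks of `rest`
            if sub & low:
                size = bin(sub).count("1")
                # Pick the best valid leader inside the group.
                best = NEG
                for i in range(n):
                    if sub >> i & 1:
                        a, b, c = people[i]
                        if size <= c:
                            best = max(best, a * size + b)
                if best != NEG:
                    dp[mask | sub] = max(dp[mask | sub], dp[mask] + best)
            sub = (sub - 1) & rest
    return dp[(1 << n) - 1]
```

Enumerating all submasks of all masks costs $O(3^N)$, and scanning for the best leader adds the factor $N$.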
## $O(N^5)$
The most important idea is that when a person is not a leader, their properties do not matter; they are only a quantity.
Thus, it seems helpful to know beforehand how many non-leaders we will have. Let's brute-force over the number of non-leaders. Then, we can do a DP
$$\text{DP}[i][j][k] = \text{max strength considering the first } i \text{ people, where } j \text{ non-leaders have been used up}$$
$$\text{and } k \text{ people have been chosen as non-leaders}$$
The transitions are then to either say that person $i$ is not a leader and increase $k$ by 1, or say that person $i$ is a leader and consume some amount of non-leaders for $i$'s group.
To get the answer, we iterate over each possible amount of non-leaders, $k$, and calculate the DP. The score for $k$ is then $DP[n][k][k]$: we have to use up exactly $k$ people as non-leaders, and for that to happen, we need exactly $k$ people to not become leaders.
## Subtask 2: $O(N^4)$
To optimize the previous solution, we can realize that iterating over $k$ is unnecessary: we can simply do a single DP and examine all $DP[n][i][i]$.
## $O(N^3)$
We can further optimize the state: the only thing we're using $j$ and $k$ for is to ensure that $j=k$ at the end. If we instead store the difference $j-k$, the state space shrinks to $O(N^2)$, and we can still restrict to $j=k$: we simply read off $DP[n][0]$. Note that the difference can become negative. To handle this, simply let the inner dimension have size $2n + 10$, redefine index $n+5$ to mean 0, and shift your indexing accordingly.
## $O(N^2\log(N))$
When optimizing a DP, it's often nice to have concrete code before you:
```python
n = int(input())
mid = n + 5
inf = 10**14
dp = [[-inf] * (2 * mid) for _ in range(n + 1)]
dp[0][mid] = 0
for i in range(1, n + 1):
    a, b, c = map(int, input().split())
    for balance in range(2 * mid):
        # Don't make me a leader, increase balance
        if balance > 0:
            dp[i][balance] = dp[i - 1][balance - 1]
        # Make me a leader
        val = -inf
        for g_size in range(1, c + 1):
            if balance + g_size - 1 >= len(dp[0]):
                break
            val = max(val, a * g_size + b + dp[i - 1][balance + g_size - 1])
        dp[i][balance] = max(dp[i][balance], val)
print(dp[n][mid])
```
It seems difficult to get rid of state. Meanwhile, the loop over group size (Make me a leader) seems to be a prime candidate for optimization (note that I took care to write the DP such that this would jump out).
We basically take an interval, add $1a+b$ to its first element, $2a+b$ to its second, and so on, and then take the max. For example, suppose $a=5$, $b=7$ and we operate over the interval $[2,5]$: indices $2, 3, 4, 5$ receive $+12, +17, +22, +27$ respectively, and we want the maximum of the results.
So naively, we would need a data structure over $DP[i-1]$ that can:
- Add an arithmetic progression $1k+m, 2k+m, 3k+m, \ldots$ to an interval
- Take a range maximum
This data structure exists, but is complicated. Instead, we can use the fact that $a$ and $b$ are the same for all states we're computing: we can simply do $DP[i-1][0]+=1a+b$, $DP[i-1][1]+=2a+b$, and so on, over the entire array.

Now, look at the previous range. With the global additions, index $2$ gets $+22$ instead of $+12$, index $3$ gets $+27$ instead of $+17$, and so on: every element differs from the value we want by exactly $10 = 2 \cdot 5 = l \cdot a$. This is because adding over the whole array instead of just the range shifts every element of the range by the same constant. So we can apply the additions to all of $DP[i-1]$, build a max segment tree for the range maximum queries, and subtract the overcounted amount (which equals $l \cdot a$ for a query $[l,r]$).
You can read about segment trees here: https://cp-algorithms.com/data_structures/segment_tree.html
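Before reaching for a segment tree, we can sanity-check the rewrite numerically. The sketch below (function names are illustrative, not from the original solution) compares the direct computation against the globally adjusted array; the range maximum is taken naively here, where a real solution would answer it with a segment tree in $O(\log N)$.

```python
# Direct form: best value when the leader's query starts at index l,
# i.e. max over g of a*g + b + prev[l + g - 1].
def leader_value_direct(prev, a, b, l, c):
    return max(a * g + b + prev[l + g - 1] for g in range(1, c + 1))

# Adjusted form: add (j+1)*a + b to every index j of the previous row,
# take a range maximum over [l, l+c-1], and subtract the overcount l*a.
def leader_value_adjusted(prev, a, b, l, c):
    adjusted = [prev[j] + (j + 1) * a + b for j in range(len(prev))]
    return max(adjusted[l:l + c]) - l * a
```

The two agree for every query, which is exactly what lets us precompute the adjusted array once per row and answer all queries with range maxima.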
## Full solution: $O(N^2)$
The log factor from the segment tree makes it difficult to pass the time limit. We can shave it off by noticing that, for a fixed $i$, every query asks for the maximum over an interval of the same fixed size, which is exactly the setting of the monotone-queue sliding-window-maximum algorithm.
You can read about it here: https://cp-algorithms.com/data_structures/stack_queue_modification.html
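The sliding-window maximum itself is short; here is a standalone sketch (the helper name is ours), which would replace the segment-tree query row by row:

```python
from collections import deque

# Sliding-window maximum via a monotone deque: the front always holds the
# index of the maximum in the current window, and smaller elements behind
# it are discarded since they can never become the maximum again.
def sliding_window_max(values, w):
    result = []
    dq = deque()  # indices whose values are in decreasing order
    for j, v in enumerate(values):
        while dq and values[dq[-1]] <= v:
            dq.pop()
        dq.append(j)
        if dq[0] <= j - w:  # front fell out of the window
            dq.popleft()
        if j >= w - 1:
            result.append(values[dq[0]])
    return result
```

Each index is pushed and popped at most once, giving $O(N)$ total per row and $O(N^2)$ overall.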
# Krokodiler
For simplicity, let's assume that $N=M$.
## Subtask 1-2: $O(N^3)$
Let's model this problem as a graph. For every crocodile, draw an edge to all crocodiles it points towards. Now, we can realize that:
- Every crocodile part of a cycle can never leave the pool
- Every crocodile that can reach a cycle can never leave the pool
- All other crocodiles can leave the pool: we can repeatedly remove crocodiles with no outgoing edges until only crocodiles of the first two types remain
One way to count the crocodiles that can leave is to keep track of each crocodile's number of outgoing edges, repeatedly remove some crocodile with zero outgoing edges, and update all crocodiles pointing into it. This runs in time linear in the number of edges, and since there are $O(N^3)$ edges, this is our final complexity.
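The peeling procedure can be sketched on an explicit adjacency list (building that list from the grid is problem-specific and omitted here; the function name is ours):

```python
from collections import deque

# Repeatedly delete nodes with out-degree zero; the nodes that are never
# deleted are exactly those on a cycle or able to reach one. adj[u] lists
# the nodes that u points towards. Returns how many nodes can "leave".
def count_escaping(n, adj):
    outdeg = [len(adj[u]) for u in range(n)]
    radj = [[] for _ in range(n)]  # reverse edges: who points at me
    for u in range(n):
        for v in adj[u]:
            radj[v].append(u)
    queue = deque(u for u in range(n) if outdeg[u] == 0)
    removed = 0
    while queue:
        v = queue.popleft()
        removed += 1
        for u in radj[v]:
            outdeg[u] -= 1
            if outdeg[u] == 0:
                queue.append(u)
    return removed
```

For example, with a 3-cycle $0 \to 1 \to 2 \to 0$, a node $3 \to 0$ feeding into it, and an isolated node $4$, only node $4$ can leave.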
## Subtask 3: $O(N^2)$
What we did in the last subtask is analogous to computing a topological ordering of the grid graph: more exactly, we're basically running [Kahn's algorithm](https://en.wikipedia.org/wiki/Topological_sorting#Kahn's_algorithm) on the reversed graph (removing out-degree-zero nodes instead of in-degree-zero ones).
Let's instead consider the classical DFS-based algorithm for topological sorting:
```python
def dfs(u, vis, order, adj):
    for e in adj[u]:
        if vis[e]:
            continue
        vis[e] = 1
        dfs(e, vis, order, adj)
    order.append(u)
```
One very nice property of this algorithm is that it doesn't need us to generate all edges explicitly: we only need an oracle that can answer "given a node, give me any unvisited neighbour, and mark it as visited". If we can implement such an oracle in $O(T(N))$ time, we can solve the problem in $O(N^2T(N))$ time. In this problem, when going right, this amounts to: for each row of the visited matrix, given an index $i$, return the first $0$ to the right of $i$, and set it to $1$. The cases for up, down, and left are similar.
### Oracle 1: C++ set, $T(N)=O(\log(N))$
Store every row as a set of non-deleted indices, use `lower_bound` to find the index, and then erase it. This struggles with the time limit.
### Oracle 2: C++ bitsets, $T(N)=O(N/64)$
Despite seeming slower than sets, bitsets have an amazing constant factor. We store everything as ones initially, then use `_Find_next` to locate the index and set it to 0.
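A Python analogue of the same idea uses one big integer per row (Python's arbitrary-precision ints play the role of the C++ bitset; the helper names here are ours, and `(t & -t).bit_length()` finds the lowest set bit):

```python
# Each row is one big integer whose set bits mark the not-yet-visited cells.
def make_row(n):
    return (1 << n) - 1  # all n cells unvisited

def take_next(row, i):
    """Return (index of the first unvisited cell at position >= i, updated
    row), or (-1, row) if there is none."""
    t = row >> i
    if t == 0:
        return -1, row
    j = i + (t & -t).bit_length() - 1  # lowest set bit of t, shifted back
    return j, row & ~(1 << j)  # clear bit j: the cell is now visited
```

This keeps the word-parallel flavour of the bitset oracle without leaving pure Python.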
### Oracle 3: union-find-like, $T(N)=O(1)$ amortized
You can read about this data structure [here](https://hackmd.io/J5FH98KdR6yNsC7qfWTC9A).
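Since the link only describes the structure, here is a sketch in Python (the class name is ours): `nxt[i]` points at some not-yet-deleted index $\ge i$, and path compression makes repeated lookups near-constant amortized.

```python
# "Next alive index" structure: find(i) returns the smallest non-deleted
# index >= i, and delete(i) removes index i.
class NextAlive:
    def __init__(self, n):
        # Index n acts as a sentinel meaning "nothing left".
        self.nxt = list(range(n + 1))

    def find(self, i):
        root = i
        while self.nxt[root] != root:
            root = self.nxt[root]
        # Path compression: point everything on the path at the root.
        while self.nxt[i] != root:
            self.nxt[i], i = root, self.nxt[i]
        return root

    def delete(self, i):
        # Redirect i to whatever alive index comes after it.
        self.nxt[i] = self.find(i + 1)
```

One such structure per row (and per column, for the vertical directions) implements the oracle.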