Dodgeball diplomacy
Subtasks 1 and 2
Simply maintain the list of edges. We can handle add queries in $O(1)$, delete queries in $O(m)$ and dodgeball queries (min unfairness round) in $O(n \log n + m)$, where $m$ is the number of edges currently.
Add and delete queries are trivial. For dodgeball queries, we first need to compute the component sizes. This can be done using either a DFS or the union-find data structure. Then, it turns out that it is optimal to sort all team sizes in decreasing order and create matches from cities whose sizes are adjacent after sorting.
Proof: suppose we have four sizes $a \ge b \ge c \ge d$, and we match them as $(a, c)$ and $(b, d)$, which gives an unfairness of $(a-c)+(b-d)$. If we compare it with what sorting gives, $(a, b)$ and $(c, d)$, which gives $(a-b)+(c-d)$, we can set up the following inequality:

$(a-b)+(c-d) \le (a-c)+(b-d)$

After cancelling, this simplifies to $c \le b$, which is true given the above assumption. Using this exchange argument, we can take any assignment and continually swap pairs to sort it, improving or keeping the same unfairness at every step.
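As a sketch in C++: assuming, as above, that a match's unfairness is the difference between the two team sizes, the minimum total unfairness is obtained by sorting in decreasing order and pairing neighbours (the helper name is illustrative):

```cpp
#include <algorithm>
#include <vector>

// Minimum total unfairness: sort component sizes in decreasing
// order and pair adjacent entries (sizes[0] with sizes[1], etc.).
// Assumes an even number of components take part.
long long minUnfairness(std::vector<long long> sizes) {
    std::sort(sizes.rbegin(), sizes.rend()); // decreasing order
    long long total = 0;
    for (size_t i = 0; i + 1 < sizes.size(); i += 2)
        total += sizes[i] - sizes[i + 1];
    return total;
}
```

For example, sizes $\{5, 3, 2, 2\}$ give $(5-3)+(2-2)=2$.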
Subtask 3
In this subtask, the slow part is the large number of removals. To speed this up, we can use a priority queue, which lets us remove the largest treaty in $O(\log m)$. Insertion now increases to $O(\log m)$, but this passes easily.
Subtask 4
Maintaining component sizes is easy in this subtask. Whenever an edge is added, you either merge two intervals or extend some interval. When an edge is deleted, an interval is either shrunk or split into two parts.
How can we efficiently maintain the sum after these changes? First, realize that given component sizes $s_1 \ge s_2 \ge \dots \ge s_k$, we essentially maintain the sum $(s_1 - s_2) + (s_3 - s_4) + \dots$. Let's create a segment tree, where each index $i$ stores the number of components of size $i$ there are. First, note that we only care about the number of components of size $i$ mod 2, as it's always optimal to pair them up against each other if there is more than 1 (such a match contributes 0 to the sum). To know when to introduce negative signs into the sum, we need to know the number of unpaired components in each subtree (mod 2) in addition to the sum. So each leaf node with index $i$ stores:
$(\text{cnt} = 1, \text{sum} = i)$ if the number of components of size $i$ is odd ($\text{sum} = i$, since that is the contribution to the sum),
$(\text{cnt} = 0, \text{sum} = 0)$ otherwise.
In every interior node, we store $(\text{cnt}, \text{sum})$, where $\text{cnt}$ is the sum of all $\text{cnt}$ in its subtree, and $\text{sum}$ is an alternating $+,-$ sum of all leaves taken right to left (the largest size gets a $+$). Maintaining the correct sum comes down to checking whether the right child's $\text{cnt}$ is even or odd: if it's even, add the left child's sum, otherwise subtract it.
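In code, a sketch of this combine step (assuming each node stores the pair $(\text{cnt}, \text{sum})$ described above; the struct and function names are illustrative):

```cpp
// cnt = number of unpaired components in this node's range,
// sum = alternating +,- sum of its leaves taken right to left.
struct Node {
    long long cnt, sum;
};

// The right child holds the larger sizes, so its signs are already
// correct; the left child's signs flip exactly when the right child
// contains an odd number of components.
Node combine(Node l, Node r) {
    long long sum = r.sum + (r.cnt % 2 == 0 ? l.sum : -l.sum);
    return {l.cnt + r.cnt, sum};
}
```

For example, folding the leaves for sizes $\{5, 3, 2\}$ left to right yields $5 - 3 + 2 = 4$.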
Note that the order of the children we combine is important: we must always call it with (left child, right child), otherwise we get a wrong answer. Notably, KACTL's segment tree doesn't guarantee this. The easiest solution is probably to use a normal recursive segment tree (although it is possible to use a bottom-up segment tree by changing the layout slightly).
Subtask 5
Given the segment tree from subtask 4, the only remaining challenge is maintaining component sizes under updates. It is a classical exercise to solve this offline. However, we can't use that here, since everything must be answered online. I have to mention that there is a beautiful algorithm for fully dynamic connectivity in $O(\log^2 n)$ amortized time per update. However, this is not the intended solution, and it might have problems being fast enough.
Let's think about the special condition we have: we always remove the largest edge. Given the restriction in this subtask that edge insertions are in increasing order, we can think of them as a stack; we only ever remove the last inserted edge. Thus, we will use a common trick: a union-find that can "undo" the last operation. The idea is that whenever we merge two components, only $O(1)$ values change, so we can simply store a stack of changes; whenever we do a merge, we store the information needed to restore the union-find to its state before the merge. It is described here. Using this, we can keep track of the component sizes and update the segment tree accordingly.
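A minimal sketch of such a union-find with rollback, using union by size and no path compression so that each merge changes only $O(1)$ entries (names are illustrative):

```cpp
#include <utility>
#include <vector>

// Union-find with rollback: union by size, no path compression,
// so each unite() touches O(1) entries and can be undone exactly.
struct RollbackDSU {
    std::vector<int> p, sz;
    std::vector<std::pair<int, int>> hist; // (absorbed root, new root)
    RollbackDSU(int n) : p(n), sz(n, 1) {
        for (int i = 0; i < n; i++) p[i] = i;
    }
    int find(int x) {
        while (p[x] != x) x = p[x];
        return x;
    }
    bool unite(int a, int b) {
        a = find(a); b = find(b);
        if (a == b) { hist.push_back({-1, -1}); return false; }
        if (sz[a] < sz[b]) std::swap(a, b);
        p[b] = a;
        sz[a] += sz[b];
        hist.push_back({b, a});
        return true;
    }
    void undo() { // revert the most recent unite(), no-ops included
        auto [b, a] = hist.back();
        hist.pop_back();
        if (b != -1) { p[b] = b; sz[a] -= sz[b]; }
    }
};
```

Recording failed merges as no-ops in `hist` keeps `unite` and `undo` symmetric: every edge removal is exactly one `undo`, whether or not the edge merged anything.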
Subtask 6
In this subtask, we will instead remove the oldest edge, so the edges now behave like a queue rather than a stack. This can be solved by the ideas in the following blogpost.
We might think that some point must lie on a corner of the rectangle, giving $O(n)$ candidate rectangles. Then, each one could be tested in $O(n \log n)$ by using sweepline + segment tree. Unfortunately, this is false:
In this example, the optimal square doesn't have a point on its corner.
However, two of its sides must pass through points.
This gives us $O(n^2)$ candidates, which immediately leads to an $O(n^2 \log n)$ or $O(n^3)$ time algorithm. For full points, we need a different perspective.
Full solution
For full points, let's do a sweep, where we try every possible placement of the $y$-coordinate of the rectangle's bottom side. It suffices to only check the $y$-coordinates of points present in the input.
We will sort all points by their $y$-coordinate. First, we will look at the bottommost point; let it be $p$. We will add all points $q$ that satisfy $q_y \le p_y + h$, where $h$ is the rectangle's height, to some data structure that allows us to find the optimal placement in $O(\log n)$ time. Then, we will remove $p$ from the data structure, and add all points that fit within the rectangle's height for the next lowest point. Repeat until we have tried every possible $y$-coordinate for the bottom side of the rectangle.
Now, how do we design a data structure where we can insert points, remove points, and find the best $x$-coordinate for the rectangle, all in $O(\log n)$ per operation?
Let's think of how adding some point influences the possible placements. It turns out that adding a point with $x$-coordinate $x$ and value $v$ increases the value of all rectangles whose left side is at $x_0$ with $x - w \le x_0 \le x$, where $w$ is the rectangle's width, by $v$. Therefore, we could use a data structure that allows us to add $v$ to all positions in a range, and to find the largest value among all positions. This data structure exists, and is called a lazy, sparse segment tree. (Note that some prefer calling sparse segment trees dynamic or implicit.)
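As a sketch, here is the non-sparse version of that structure: a lazy segment tree over a fixed-size array supporting range add and global max (a sparse/dynamic variant would allocate children on demand instead of all $4n$ nodes up front):

```cpp
#include <algorithm>
#include <vector>

// Lazy segment tree: range add, global max.
struct LazySeg {
    int n;
    std::vector<long long> mx, lz;
    LazySeg(int n) : n(n), mx(4 * n, 0), lz(4 * n, 0) {}
    void push(int i) { // push pending adds down to both children
        for (int c : {2 * i, 2 * i + 1}) { mx[c] += lz[i]; lz[c] += lz[i]; }
        lz[i] = 0;
    }
    void add(int l, int r, long long v, int i, int lo, int hi) {
        if (r < lo || hi < l) return;
        if (l <= lo && hi <= r) { mx[i] += v; lz[i] += v; return; }
        push(i);
        int mid = (lo + hi) / 2;
        add(l, r, v, 2 * i, lo, mid);
        add(l, r, v, 2 * i + 1, mid + 1, hi);
        mx[i] = std::max(mx[2 * i], mx[2 * i + 1]);
    }
    void add(int l, int r, long long v) { add(l, r, v, 1, 0, n - 1); }
    long long best() const { return mx[1]; } // value of the best placement
};
```

Inserting a point becomes `add(x - w, x, v)` and removing it becomes `add(x - w, x, -v)`, with `best()` giving the value of the current optimal rectangle.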
Because of the high time limit, sparse segment trees are fast enough. If the constraints were tighter, coordinate compression might have been needed. Funnily enough, we don't even need lazy segment trees; a normal segment tree suffices. I will leave this as an exercise to the reader (if you can't figure it out, dm matistjati on discord).