<style> .reveal { font-size: 18px; } </style> # CS4234 # TSP <!-- .slide: style="font-size: 24px;" --> TSP is a very interesting COP, and we initially started out with naïve methods. Unfortunately, these did not prove very effective, so we found a bit of success with hybrid search instead. In all methods, we use double bridging (or an equivalent). --- ## 2-OPT and 3-OPT (26) In this method, we first try improving our solution with 2-OPT twice, then with 3-OPT once. The difference between the two 2-OPT calls is that the first call only considers candidate vertices at most 1.2x the length of the current tour edge away, while the second call considers every possible neighbor. ```cpp
opt2Base(1.2, maxnumberneighbor, false); // only vertices <= 1.2x of the current tour edge are considered
opt2Base(-1, maxnumberneighbor, false);  // naive 2-OPT over all neighbors
opt3LocalOptimal(maxnumberneighbor);     // 3-OPT, breaking immediately once an improvement is found
``` --- ## Double Bridging This is a very nice perturbation that we borrow from 4-OPT; it allows for a "big" jump in the search space when perturbing. We use it in all our main TSP solutions. ![](https://i.imgur.com/VUAH70v.png) --- ## Distance optimization (30+) We realized that simply bounding candidates by a multiplier of the current edge length (as in the quick 2-OPT call) was not adequate, as the quadratic search space (~10^6 pairs) was costly. We saw significant gains by considering only the 20 nearest vertices instead, which drastically reduced the search space to a constant size (~400). The intuition is that scrutinizing dense local areas rather than sparse "rural" areas for switching 2-OPT segments tends to find improvements; improvements in rural areas are rare, so searching there wastes CPU time. --- ## 3-OPT + 2-OPT + local search (38) This approach scored 38 points. Like the previous mixed approach, the first 2-OPT call searches only the closer neighbors and the second searches the further ones.
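Conceptually, one pass of such a 2-OPT improvement step can be sketched as follows. This is a minimal self-contained illustration with hypothetical names (`opt2FirstImprove`, `Pt`), not our actual `opt2Base`, which additionally restricts candidates by distance:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Pt { double x, y; };

static double dist(const Pt& a, const Pt& b) {
    return std::hypot(a.x - b.x, a.y - b.y);
}

// One 2-OPT pass: for each edge (tour[i], tour[i+1]), try reconnecting it
// with a later edge (tour[j], tour[j+1]); a move that shortens the tour
// reverses the segment tour[i+1..j]. Returns true if any move was applied.
bool opt2FirstImprove(std::vector<int>& tour, const std::vector<Pt>& p) {
    const int n = (int)tour.size();
    bool improved = false;
    for (int i = 0; i < n - 1; ++i) {
        for (int j = i + 2; j < n; ++j) {
            if (i == 0 && j == n - 1) continue;  // adjacent edges, degenerate move
            int a = tour[i], b = tour[i + 1];
            int c = tour[j], d = tour[(j + 1) % n];
            double before = dist(p[a], p[b]) + dist(p[c], p[d]);
            double after  = dist(p[a], p[c]) + dist(p[b], p[d]);
            if (after + 1e-9 < before) {         // take the improving move
                std::reverse(tour.begin() + i + 1, tour.begin() + j + 1);
                improved = true;
            }
        }
    }
    return improved;
}
```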
However, for 2-OPT we select the first improvement found and continue the local search, rather than searching for the "best" improvement in the local neighborhood. This reduced overfitting and let us reach a slightly better global optimum: as discussed in class, we are sometimes better off not taking the strictly better option. We also use the minimum and maximum link lengths as a speed-up, and we break the loop when there is no further improvement. This brought us to 34 points. For an extra 4-point boost, we perturb with a randomized double bridge: we generate five random double-bridge scenarios and pick the best one to perturb the candidate solution. ```cpp
ll perdis = perturbation(seed10, -1);                         // initial double-bridge candidate
FOR(p, 5) perdis = perturbation(seed10 + p + 1, perdis);      // keep the best of five random scenarios
FOR(i, n) umax(maxneighbor, w[tour[i]][tour[(i + 1) % n]]);   // track the longest tour edge
``` --- ## Naive 3-OPT Naive 3-OPT is really bad because it is O(N^3) in nature, so we end up spending a lot of CPU time on a single pass. Sticking to the first-improvement (fast-return) strategy of the previous methods netted us far more points, as we are able to explore the solution space much more within 2s. --- ## Optimize 2-OPT and 3-OPT (39) Interleaving 2-OPT and 3-OPT, rather than running them separately, improves the result globally. We also do a fast reversal of the segment for a 2-OPT improvement: since the tour is cyclic, there are two segments to choose from, so we always reverse the shorter one. ![](https://i.imgur.com/zYxCtA0.png) --- ## Quick Reset ```cpp
else if (curdis > mindis * 1.5) {
    // new bad local optimum: revert to the old good local optimum
    curdis = mindis;
    memcpy(tour, besttour, n * sizeof(int));
}
``` If we ended up searching a space where the solution was 1.5x worse, we would quickly reset. Although this sounded good in theory, it did not help much in practice, so we got rid of it.
--- ## 4-OPT (not good) We did an initial implementation (covering only 23 of the reconnection cases), but we could not optimize it into a fast naive 4-OPT. Our takeaway was that the double-bridge scenario is good, so we kept it for perturbation. --- ## Don't-look bits We tried keeping a bit per city to indicate that no improvement could be found for it, saving computation by skipping the search for improvements at any city whose bit is set. We set the bit every time we fail to improve the solution at a city, and clear it back to 0 whenever the city is an endpoint of an applied improvement (as an improvement has become possible there again). While sounding promising, this method did not grant us any improvements. --- # MWVC We found a published algorithm online known as FastWVC. We considered implementing it, but we ended up trying our own way, with our own evaluation functions. --- ## Naive local search (31) We started with a valid solution and iteratively searched for a vertex whose removal, followed by adding all of its neighbors, results in a lower total weight. After adding the neighbor vertices, we remove any vertices that have become useless. We keep doing this until there are no more such vertices to remove. No perturbation was used. --- ## Added Perturbation, Randomized Approach (33) To perturb the current answer, we randomly pick five vertices to remove from it. ```cpp
for (int i = 0, cc = 0; i < 200 && cc < 5; ++i) { // up to 200 attempts to find 5 removable vertices
    int u = randint(0, n - 1);
``` We then add the neighbors of each removed vertex to form the new candidate answer. ```cpp
dif = optRemove(u, tempCover);
for (int &v : vec[u]) {
    if (cover[v]) vis[v] = 1;                   // neighbor already in the cover
    else cover[v] = 1, vis[v] = 2, dif += w[v]; // bring the neighbor into the cover
}
``` This variant gave the best result. We also tried shuffling the iteration order of different variables, but did not get better results.
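The core remove-then-cover move can be illustrated with a minimal self-contained sketch (hypothetical names `removeAndCoverNeighbors`, `adj`, `inCover`; not our actual arrays):

```cpp
#include <vector>

// Remove vertex u from the cover and add every neighbor of u that is not
// already in the cover, so that all edges incident to u stay covered.
// Returns the change in total weight (negative means the cover got lighter).
long long removeAndCoverNeighbors(int u,
                                  std::vector<char>& inCover,
                                  const std::vector<std::vector<int>>& adj,
                                  const std::vector<long long>& w) {
    long long diff = -w[u];
    inCover[u] = 0;
    for (int v : adj[u]) {
        if (!inCover[v]) {  // neighbor must enter the cover
            inCover[v] = 1;
            diff += w[v];
        }
    }
    return diff;
}
```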
We also tried randomly flipping some vertices to obtain infeasible solutions and then repairing them, but this did not improve our results. --- ## Better Initialization (34) First, we sort the edges by a comparator, so the problem becomes finding the best comparator. ```cpp
bool operator < (const Edges& c) const {
    //return abs(w[u]-w[v]) > abs(w[c.u]-w[c.v]);
    //return max(childw[u]-w[u], childw[v]-w[v]) > max(childw[c.u]-w[c.u], childw[c.v]-w[c.v]);
    // best
    return min(w[u], w[v]) < min(w[c.u], w[c.v]);
}
``` We take the "smallest" edge according to the comparator and cover it with the endpoint of smaller weight. We keep covering edges in increasing order, skipping edges that are already covered, until all edges are covered. This scored 34 on the first two runs and 33 on the last.
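A self-contained sketch of this greedy initialization under the winning comparator (hypothetical names `greedyInit`, `Edge`; not our actual code):

```cpp
#include <algorithm>
#include <vector>

struct Edge { int u, v; };

// Greedy initialization: process edges in increasing order of the smaller
// endpoint weight; for each still-uncovered edge, add its lighter endpoint
// to the cover. The result is a valid (not necessarily minimal) cover.
std::vector<char> greedyInit(int n, std::vector<Edge> edges,
                             const std::vector<long long>& w) {
    std::sort(edges.begin(), edges.end(), [&](const Edge& a, const Edge& b) {
        return std::min(w[a.u], w[a.v]) < std::min(w[b.u], w[b.v]);
    });
    std::vector<char> cover(n, 0);
    for (const Edge& e : edges) {
        if (cover[e.u] || cover[e.v]) continue;     // edge already covered
        int pick = (w[e.u] <= w[e.v]) ? e.u : e.v;  // lighter endpoint
        cover[pick] = 1;
    }
    return cover;
}
```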