# MKSP for a travelling plane, with an example
## Given
- We have $n$ tasks between the two points.
- Number the tasks $T_{1},T_{2},\dots,T_{n}$.
- For each task:
- We have the distance from the source to the task: $ST=[st_{1},\dots,st_{n}]$
- We have the distance from the task to the destination: $TD=[td_{1},\dots,td_{n}]$
- We have the distances between tasks: $TT=\begin{bmatrix}\delta_{11} & \cdots & \delta_{1n} \\ \vdots & \ddots & \vdots \\ \delta_{n1} & \cdots & \delta_{nn}\end{bmatrix}$
- $\max\sum\limits_{i=1}^{n} R(T_{i})x_{i}$, where $R(T_{i})$ is the reward for task $T_{i}$
- $\sum\limits_{i=1}^{n} d_{i}x_{i} < D$, where $D$ is the total allowed distance and $d_{i}$ is the distance travelled for task $i$
- $\sum\limits_{i=1}^{n} t_{i}x_{i} < T$, where $T$ is the total allowed time and $t_{i}$ is the time required to complete task $i$

**where** $x_{i}\in \{0,1\}$
- Example Data:
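The example data is not reproduced here. A small hypothetical instance, following the $ST$/$TD$/$TT$ definitions from the Given section (every number below is invented purely for illustration), could be encoded as:

```python
# Hypothetical example data -- all values invented for illustration.
n = 4
R = [10, 7, 15, 9]       # reward R(T_i) for each task
t = [2, 3, 2, 4]         # time t_i to complete task i
ST = [4, 6, 9, 12]       # distance from the source to task i
TD = [14, 11, 8, 5]      # distance from task i to the destination
# TT[i][j]: distance between task i and task j (symmetric, 0 on the diagonal)
TT = [
    [0, 3, 6, 9],
    [3, 0, 4, 7],
    [6, 4, 0, 5],
    [9, 7, 5, 0],
]
T = 9                    # total allowed time
D = 30                   # total allowed distance
```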
## Heuristic Algorithm
### Assumptions
- We can only move in the forward direction: for two tasks $v_{i}$ and $v_{j}$, the movement is allowed only if $i<j$.
- Once we leave the ideal route, we don't return to the ideal path (waypoint).
### Initialization
- Efficiency of a task when we are at $i$ ($\epsilon_{i}$):
$$\epsilon_{i} = \frac{R(T_{i})}{\sum\limits_{k=i}^{n} [T-t_{k}]^{+}+\sum\limits_{k=i}^{n} [D-d_{k}]^{+}}$$
Here $[x]^{+}$ means: whenever the value is negative, replace it with 0, and whenever it is exactly 0, replace it with 1.
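The $[x]^{+}$ operator and the efficiency $\epsilon_{i}$ above can be sketched in Python (function names are my own; `R`, `t`, `d`, `T`, `D` follow the definitions in the Given section):

```python
def pos(x):
    """[x]^+ as defined above: negative -> 0, exactly 0 -> 1, else x."""
    if x < 0:
        return 0
    if x == 0:
        return 1
    return x

def efficiency(i, R, t, d, T, D):
    """epsilon_i = R(T_i) / (sum_k [T - t_k]^+ + sum_k [D - d_k]^+),
    with both sums running over k = i .. n (0-indexed here)."""
    n = len(R)
    denom = (sum(pos(T - t[k]) for k in range(i, n)) +
             sum(pos(D - d[k]) for k in range(i, n)))
    return R[i] / denom
```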
### Algorithm
* Start with $i=0$ (at the source).
- Find $\epsilon_{j}$ for all $j$ with $i<j\le n$, using $d_{j}=ST[j]+TD[j]$:
- $$\epsilon_{j} = \frac{R(T_{j})}{(D-ST[j])+\sum\limits_{k=j}^{n} [T-t_{k}]^{+}+\sum\limits_{k=j}^{n} [D-d_{k}]^{+}}$$
- Choose the task $T_{j}$ with the highest $\epsilon_{j}$.
- Remove $T_{j}$ from the list.
- Update $T=T-t_{j}$ and $D=D-d_{j}$.
- Example (here, Task 3 is chosen):
- Selected_Task = [$T_{3}$]
* With $i=j$ (for $j=3$):
- For each task $T_{m}$, where $m$ is in the range $(i, n]$:
- Find $\epsilon_{m}$ using $d_{m}=TT[i][m]+TD[m]$.
- Choose the task $T_{m}$ with the highest $\epsilon_{m}$.
- Remove $T_{m}$ from the list.
- Update $T=T-t_{m}$ and $D=D-d_{m}$.
- Selected_Task = [$T_{3},T_{4}$]
- With $i=j$ (for $j=4$):
- $j<n$ is False, so the algorithm terminates.
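The iteration above can be collected into one greedy loop. The sketch below is one reading of the steps, with my own names: `pos` implements $[x]^{+}$, the detour distance for a candidate $j$ is `ST[j] + TD[j]` from the source or `TT[i][j] + TD[j]` from task $i$, and the denominator sums run over the remaining tasks $k = j..n$ measured from the current position.

```python
def greedy_mksp(R, t, ST, TD, TT, T, D):
    """Greedy heuristic sketch: repeatedly pick the forward task with
    the highest efficiency until no feasible forward task remains."""
    def pos(x):                         # [x]^+ : negative -> 0, zero -> 1
        return 0 if x < 0 else (1 if x == 0 else x)

    def detour(i, j):                   # detour distance for task j from position i
        return (ST[j] if i < 0 else TT[i][j]) + TD[j]

    n = len(R)
    selected, i = [], -1                # i = -1 means "still at the source"
    while True:
        best = None                     # (eps, j) of the best forward candidate
        for j in range(i + 1, n):       # forward movement only (j > i)
            d_j = detour(i, j)
            if t[j] > T or d_j > D:     # task would break a budget
                continue
            denom = (sum(pos(T - t[k]) for k in range(j, n)) +
                     sum(pos(D - detour(i, k)) for k in range(j, n)))
            eps = R[j] / denom
            if best is None or eps > best[0]:
                best = (eps, j)
        if best is None:                # no feasible forward task left
            break
        j = best[1]
        selected.append(j)
        T -= t[j]                       # T = T - t_j
        D -= detour(i, j)               # D = D - d_j
        i = j
    return selected
```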
## Conclusion
- If we use the Brute Force technique on the same data, we see:

- Thus the algorithm gives the same reward with **more** efficiency.
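For reference, the brute-force baseline can be sketched as exhaustive enumeration over all $2^{n}$ subsets of tasks, checking each against the distance and time constraints from the formulation above. The function name and the per-task distance model (`d[i]` is the total detour charged for task $i$, as in $\sum d_{i}x_{i} < D$) are my assumptions:

```python
from itertools import combinations

def brute_force(R, d, t, T, D):
    """Try every subset of tasks; return the best feasible (reward, subset).
    Exponential in n, unlike the greedy heuristic's polynomial loop."""
    n = len(R)
    best_reward, best_set = 0, ()
    for r in range(n + 1):
        for subset in combinations(range(n), r):
            # Feasibility: sum d_i x_i < D and sum t_i x_i < T.
            if (sum(d[i] for i in subset) < D and
                    sum(t[i] for i in subset) < T):
                reward = sum(R[i] for i in subset)
                if reward > best_reward:
                    best_reward, best_set = reward, subset
    return best_reward, best_set
```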