# Multidimensional Knapsack Problem for a Travelling Plane with a Certainty Parameter
## Given
- We are given $n$ tasks between two waypoints, namely the Source and the Destination.
- The Source is defined as $S(x,y,z)$, where $x$, $y$, and $z$ are its coordinates.
- The Destination is defined as $D(x,y,z)$, where $x$, $y$, and $z$ are its coordinates.
- Each task $T_{i}$ is defined as $T_{i}(x_i,y_i,z_i,t_i,r_i)$, where $(x_{i}, y_{i}, z_{i})$ are its coordinates, $t_{i}$ is the time needed to complete task $T_{i}$, and $r_i$ is the reward associated with it.
- We have the distances from the Source to each task: $ST = [ST_{1}, \dots, ST_{n}]$
- We have the distances from each task to the Destination: $TD = [T_{1}D, \dots, T_{n}D]$
- We have the distances between tasks: $TT=\begin{bmatrix}\delta_{11} & \cdots & \delta_{1n} \\ \vdots & \ddots & \vdots \\ \delta_{n1} & \cdots & \delta_{nn} \end{bmatrix}$
- We have the speeds from the Source to each task: $speedST = [sST_{1}, \dots, sST_{n}]$, where each speed of the plane from the Source to a task is drawn by a random number generator.
- We have the speeds from each task to the Destination: $speedTD = [sT_{1}D, \dots, sT_{n}D]$, where each speed of the plane from a task to the Destination is drawn by a random number generator.
- We have the speeds between tasks: $speedTT=\begin{bmatrix}s_{11} & \cdots & s_{1n} \\ \vdots & \ddots & \vdots \\ s_{n1} & \cdots & s_{nn} \end{bmatrix}$, which is random from one task to another and is redrawn whenever a task is chosen. In a real scenario the speed may be random because of environmental factors and changing weather conditions.
- All speeds are re-randomized after a task is chosen for the route.
- We have the times required to reach each task from the Source: $TimeST = [tST_{1}, \dots, tST_{n}]$, where $tST_{i}$ is calculated as $ST_{i} / speedST_{i}$
- We have the times required to reach the Destination from each task: $TimeTD = [tTD_{1}, \dots, tTD_{n}]$, where $tTD_{i}$ is calculated as $TD_{i} / speedTD_{i}$
- We have the times required to reach one task from another: $TimeTT=\begin{bmatrix}tTT_{11} & \cdots & tTT_{1n} \\ \vdots & \ddots & \vdots \\ tTT_{n1} & \cdots & tTT_{nn} \end{bmatrix}$, where $tTT_{ij}$ is calculated as $TT_{ij} / speedTT_{ij}$
- Since the speeds are re-randomized after each task is chosen, the corresponding time arrays must be recalculated.
- The speed and time arrays are randomized to simulate changing weather conditions, which can affect the plane's speed and hence the time it takes to reach a particular task point.
- $Dmax$ is the total distance allowed and $Tmax$ is the total time allowed while travelling from the Source to the Destination, completing tasks so as to maximize the total reward.
- We have a certainty associated with each task:
$ct = [ct_{1}, \dots, ct_{n}]$
such that the higher the certainty, the higher the probability of the task being chosen.
The certainty depends on various factors, such as environmental conditions and the location of the task, and it varies over time; a higher certainty means a more preferable task.
For the sake of simplicity, we re-randomize the certainties with a random number generator after each task point is chosen.
- Maximize $\sum\limits_{i =1}^{n} r_{i}x_{i}$, where $r_{i}$ is the reward for task $T_{i}$
- $\sum\limits_{i =1}^{n} d_{i}x_{i} < Dmax$, where $Dmax$ is the total allowed distance and $d_{i}$ is the distance travelled to reach task $T_i$ from the previously selected task (the Source, while no task has been selected yet)
- $\sum\limits_{i =1}^{n} \tau_{i}x_{i} < Tmax$, where $Tmax$ is the total allowed time and $\tau_{i}$ is the time required to complete task $i$, i.e. $t_{i}$ plus the time $TimeTT_{li}$ to reach it from the previously selected task $l$. To the total we also add $TimeST_{s}$, the time to reach the first chosen task $s$ from the Source, and $TimeTD_{f}$, the time to reach the Destination from the last task $f$ in the path.
**where** $$x_{i}\in \{0,1\}$$
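Under the definitions above, the distance, speed, and time arrays can be built as in this minimal Python sketch. The coordinates, task parameters, the speed range $(5, 15)$, and the uniform certainty draw are hypothetical choices for illustration, not part of the model:

```python
import math
import random

rng = random.Random(42)  # seeded for reproducibility (an assumption)

# Hypothetical instance: Source, Destination, and tasks (x, y, z, t_i, r_i).
S = (0.0, 0.0, 0.0)
D = (10.0, 10.0, 0.0)
tasks = [(2.0, 1.0, 0.0, 1.0, 5.0),
         (5.0, 6.0, 0.0, 2.0, 8.0),
         (8.0, 9.0, 0.0, 1.5, 6.0)]
n = len(tasks)

def dist(a, b):
    """Euclidean distance between two 3-D points (extra tuple fields ignored)."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(a[:3], b[:3])))

# Distance arrays ST, TD and matrix TT.
ST = [dist(S, t) for t in tasks]                    # Source -> task i
TD = [dist(t, D) for t in tasks]                    # task i -> Destination
TT = [[dist(a, b) for b in tasks] for a in tasks]   # task i -> task j

def randomize_speeds(n):
    """Fresh random speeds, redrawn after every task selection."""
    u = lambda: rng.uniform(5.0, 15.0)
    speedST = [u() for _ in range(n)]
    speedTD = [u() for _ in range(n)]
    speedTT = [[u() for _ in range(n)] for _ in range(n)]
    return speedST, speedTD, speedTT

def recalc_times(speedST, speedTD, speedTT):
    """Rebuild TimeST, TimeTD, TimeTT: every entry is distance / speed."""
    TimeST = [d / s for d, s in zip(ST, speedST)]
    TimeTD = [d / s for d, s in zip(TD, speedTD)]
    TimeTT = [[TT[i][j] / speedTT[i][j] for j in range(n)] for i in range(n)]
    return TimeST, TimeTD, TimeTT

speedST, speedTD, speedTT = randomize_speeds(n)
TimeST, TimeTD, TimeTT = recalc_times(speedST, speedTD, speedTT)
ct = [rng.random() for _ in range(n)]  # certainties, also re-randomized
```

After each task is added to the route, calling `randomize_speeds` and `recalc_times` again reproduces the re-randomization step described above.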
## Heuristic Algorithm
#### Assumptions
- We can only move in the forward direction: for two tasks $T_{i}$ and $T_{j}$, movement from $T_i$ to $T_j$ is allowed only if $i<j$.
### Initialization
- Efficiency of task $i$ ($\epsilon_{i}$):
$$\epsilon_{i} = \frac{ct_{i}\, r_i}{d_i+\tau_{i}}$$
where $ct_{i}$ is the certainty associated with task $i$, $d_i$ can be a Source-to-task or task-to-task distance, and $\tau_{i}$ is the time it takes to perform the task plus the time to reach it from the previously selected task $l$.
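The efficiency score above can be sketched as a one-line function; the input values in the example are hypothetical:

```python
def efficiency(ct_i, r_i, d_i, tau_i):
    """epsilon_i = (ct_i * r_i) / (d_i + tau_i): certainty-weighted reward
    per unit of combined distance and time cost."""
    return (ct_i * r_i) / (d_i + tau_i)

# Hypothetical task: certainty 0.8, reward 10, distance 3, time cost 1.
print(efficiency(0.8, 10.0, 3.0, 1.0))  # -> 2.0
```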
### Algorithm
* Chosen tasks: $ta = []$ and total reward $TotalR = 0$
* To start, choose the most efficient task as follows:
    - Compute $\epsilon_{i}$ for all $i$, taking $d_i=ST_{i}$:
    - $$\epsilon_{i}=\frac{ct_{i}\, r_{i}}{ST_i+\tau_{i}}$$
    - Choose the task $T_{i}$ with the highest $\epsilon_{i}$
    - Update $Tmax=Tmax-\tau_{i}$ and $Dmax=Dmax-ST_i$
    - Add the chosen task to $ta$ and set $TotalR = TotalR + r_{i}$
* Choosing further tasks:
    - While tasks remain in the range $l+1$ to $n$, where $l$ is the last chosen task:
        - Re-randomize the certainty values for all task points.
        - Re-randomize all speeds from the Source to the tasks.
        - Re-randomize all speeds between tasks.
        - Re-randomize all speeds from the tasks to the Destination.
        - Recalculate the times to reach each task point from the Source.
        - Recalculate the times to reach each task point from every other task point.
        - Recalculate the times to reach the Destination from each task point.
        - For each task $T_{m}$, with $m$ in range $(l+1, n)$:
            - Compute $\epsilon_{m}$ with $\tau_{m} = t_{m} + TimeTT_{lm}$ and $d_{m}=TT_{lm}$
        - Find the maximum efficiency $\epsilon_{max}$ and the corresponding task $s$
        - If $Dmax - d_{s} - TD_{s} \ge 0$ and $Tmax - \tau_{s} - TimeTD_{s} \ge 0$:
            - Add task $s$ to $ta$ as a chosen task
            - Update $TotalR = TotalR + r_{s}$
            - Update $Tmax=Tmax-\tau_{s}$ and $Dmax=Dmax-TT_{ls}$
* Repeat until no task is left to choose in the range $l+1$ to $n$.
* To finish, update $Dmax = Dmax - TD_{l}$ and $Tmax = Tmax - TimeTD_{l}$, where $l$ is the last task chosen.
* The total reward is in $TotalR$.
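Putting the steps above together, a minimal sketch of the whole heuristic might look like this. The speed range $(5, 15)$, the uniform certainty draw, and the instance in the usage example are assumptions for illustration, not the definitive implementation:

```python
import math
import random

def greedy_route(tasks, S, D, Dmax, Tmax, rng=None):
    """Greedy heuristic sketch. `tasks` is a list of (x, y, z, t_i, r_i)
    tuples. Speeds and certainties are re-randomized on every pass, and
    movement is forward-only (from task i we may only go to j > i)."""
    rng = rng or random.Random(0)
    n = len(tasks)

    def dist(a, b):
        return math.sqrt(sum((p - q) ** 2 for p, q in zip(a[:3], b[:3])))

    def speed():  # fresh random speed, an assumed range
        return rng.uniform(5.0, 15.0)

    chosen, total_reward = [], 0.0
    last = None  # index of the last chosen task; None means we are at S
    while True:
        ct = [rng.random() for _ in range(n)]  # re-randomized certainties
        best = None
        for i in range(0 if last is None else last + 1, n):
            origin = S if last is None else tasks[last]
            d_i = dist(origin, tasks[i])          # travel distance to task i
            tau_i = tasks[i][3] + d_i / speed()   # task time + travel time
            td_i = dist(tasks[i], D)              # final leg if i were last
            # keep the task only if the Destination stays reachable
            if Dmax - d_i - td_i < 0 or Tmax - tau_i - td_i / speed() < 0:
                continue
            eps = ct[i] * tasks[i][4] / (d_i + tau_i)
            if best is None or eps > best_eps:
                best, best_eps, best_d, best_tau = i, eps, d_i, tau_i
        if best is None:
            break  # no feasible task left in range last+1 .. n
        chosen.append(best)
        total_reward += tasks[best][4]
        Dmax -= best_d
        Tmax -= best_tau
        last = best
    if last is not None:  # account for the final leg to the Destination
        Dmax -= dist(tasks[last], D)
        Tmax -= dist(tasks[last], D) / speed()
    return chosen, total_reward

# Hypothetical instance: three tasks on a line between S and D.
S, D = (0.0, 0.0, 0.0), (10.0, 0.0, 0.0)
tasks = [(2.0, 0.0, 0.0, 1.0, 5.0),
         (5.0, 0.0, 0.0, 1.0, 4.0),
         (8.0, 0.0, 0.0, 1.0, 6.0)]
chosen, reward = greedy_route(tasks, S, D, Dmax=100.0, Tmax=100.0)
```

Because candidates are always drawn from indices greater than the last chosen task, the returned route automatically respects the forward-only assumption.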