# Journal paper
#### Problem specifications
- m = 2 objectives (problems with m>2 not considered)
- d = 2D, 5D, 10D
- Benchmarks:
  - Jordan's collection
  - BBOB problem combinations
#### Algorithm specifications
- Algorithms: NSGA-III, MOEA/D-IEpsilon, C-TAEA, AGE, NSGA-II, SPEA2, NSDE, GDE-3, NSDER
- Population size: 100\*m
- Number of generations: 60\*d\*5
#### Features
- ELA features
#### Prediction
- Predicting best performing algorithm
- If other algorithms achieve an I_CMOP similar to that of the best performing algorithm, all of them are treated as best performing algorithms
- Similarity in performance is determined using a statistical test
- Check the best performing algorithm on 5 targets:
  - 1/1 of all evaluations (maximum number of evaluations)
  - 1/2 of all evaluations
  - 1/6 of all evaluations
  - 1/24 of all evaluations
  - 1/120 of all evaluations
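For concreteness, the evaluation budgets implied by these settings can be sketched as follows. This is a minimal sketch: the helper names are ours, and we assume each target budget is simply the stated fraction of the maximum number of evaluations (population size 100\*m times 60\*d\*5 generations).

```python
# Sketch of the evaluation budgets implied by the settings above.
# Helper names are ours, not from the paper plan.

def total_evaluations(m: int, d: int) -> int:
    """Maximum number of function evaluations for an (m, d) problem."""
    pop_size = 100 * m          # population size: 100*m
    generations = 60 * d * 5    # number of generations: 60*d*5
    return pop_size * generations

def target_budgets(m: int, d: int) -> dict:
    """The five evaluation budgets at which the best algorithm is checked."""
    full = total_evaluations(m, d)
    return {f"1/{k}": full // k for k in (1, 2, 6, 24, 120)}

# Example: m = 2 objectives, d = 5 variables -> 200 * 1500 = 300000 evaluations
```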
---
# DNN CMOP algorithm performance prediction
#### Problem specifications
- m = 2 objectives (problems with m>2 not considered)
- d = 2D, 3D, 5D
- Benchmarks:
  - MW (8/14 for 2D, 14/14 for 3D, 14/14 for 5D)
  - CF (0/10 for 2D, 5/10 for 3D, 7/10 for 5D)
  - C-DTLZ (5/6 for 2D, 6/6 for 3D, 6/6 for 5D)
  - CTP (8/8 for 2D, 8/8 for 3D, 8/8 for 5D)
  - DAS-CMOP (6/9 for 2D, 6/9 for 3D, 6/9 for 5D)
  - DC-DTLZ (6/6 for 2D, 6/6 for 3D, 6/6 for 5D)
  - LIR-CMOP (0/14 for 2D, 12/14 for 3D, 12/14 for 5D)
  - NCTP (0/18 for 2D, 18/18 for 3D, 18/18 for 5D)
  - Classic (3/5 for 2D, 0/5 for 3D, 0/5 for 5D)
  <!-- - CRE (0/8 for 2D, 1/8 for 3D, 0/8 for 5D) -->
  <!-- - RCM (3/22 for 2D, 4/22 for 3D, 2/22 for 5D) -->
#### Algorithm specifications
- Algorithms: NSGA-III, MOEA/D-IEpsilon, C-TAEA (each has a specific CHT)
- Population size: 100\*m
- Number of generations: 60\*d
- Performance indicator: ECDF (AUC)
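As a rough illustration of the ECDF (AUC) indicator, assuming it is computed as the fraction of precomputed targets reached at each budget, averaged over a budget grid. The function names and the exact target/grid definitions are ours, not fixed by these notes.

```python
# Sketch of an ECDF-based performance measure with its area under the
# curve (AUC). Assumption: the ECDF value at a budget is the fraction of
# targets already hit by that budget.

def ecdf(hit_budgets, budget):
    """Fraction of targets reached after `budget` evaluations.
    `hit_budgets[i]` is the evaluation count at which target i was first
    hit, or None if it was never reached."""
    hit = sum(1 for b in hit_budgets if b is not None and b <= budget)
    return hit / len(hit_budgets)

def ecdf_auc(hit_budgets, budget_grid):
    """Area under the ECDF curve over the budget grid, normalized to [0, 1]."""
    return sum(ecdf(hit_budgets, b) for b in budget_grid) / len(budget_grid)
```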
#### Features
- Extracted using an autoencoder + FFNN architecture
- Convolutional neural network:
  - Conv2D, Conv3D, Conv5D
  - Conv2D, mapping 3D and 5D inputs to Conv2D
  - Conv2D, dimensionality reduction and mapping to squares using Voronoi matrices
  - Conv2D, dimensionality reduction using a self-organizing map to cover all squares
#### Prediction
- Prediction task:
  - Predicting ECDF (AUC)
  - Reached feasible solution, defined in 5 classes:
    - 1 ... 100m evaluations
    - 100m + 1 ... 500mD
    - 500mD + 1 ... 2000mD
    - 2000mD + 1 ... 6000mD
    - Never
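A minimal sketch of mapping a run to one of the five classes above. The names are ours; note that 6000mD coincides with the full budget of 100m \* 60d evaluations, so any run that reaches feasibility falls into one of the first four classes.

```python
# Sketch: bin the evaluation count at which the first feasible solution
# appeared into the five classes listed above. None = never feasible.

def feasibility_class(evals_to_feasible, m, d):
    if evals_to_feasible is None:
        return "never"
    if evals_to_feasible <= 100 * m:
        return "1 .. 100m"
    if evals_to_feasible <= 500 * m * d:
        return "100m+1 .. 500mD"
    if evals_to_feasible <= 2000 * m * d:
        return "500mD+1 .. 2000mD"
    if evals_to_feasible <= 6000 * m * d:
        return "2000mD+1 .. 6000mD"
    # beyond 6000mD = beyond the full budget (100m * 60d), so never observed
    return "never"
```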
- Leave-one-problem-out
---
# IS paper
#### Problem specifications
- m = 2 objectives (problems with m>2 not considered)
- d = 2D, 3D, 5D
- Benchmarks:
  - MW (8/14 for 2D, 14/14 for 3D, 14/14 for 5D)
  - CF (0/10 for 2D, 5/10 for 3D, 7/10 for 5D)
  - C-DTLZ (5/6 for 2D, 6/6 for 3D, 6/6 for 5D)
#### Algorithm specifications
- Algorithms: NSGA-III, MOEA/D-IEpsilon, C-TAEA (each has a specific CHT)
- Population size: 100\*m
- Number of generations: 60\*d
- Performance indicator: ECDF (AUC)
#### Features
- Features: Alsouly et al. + Vodopija et al. (start with their numerical values)
  - In the Alsouly et al. features, change the HV calculation by applying Aljosa's performance indicator
  - Normalize objective values using the ideal and nadir points
- Sampling method: LHS
- Sample size: 1000d for the Alsouly et al. features; for the Vodopija et al. features, as in their paper
- Lower computational cost is preferable to higher computational cost
- Features should include Aljosa's performance indicator. For features that require the PF/PS (e.g. pf_dist_max), we still need to decide what to do (ask the authors how they dealt with this problem).
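The sampling plan above (LHS with 1000d points, objectives normalized with the ideal and nadir points) might look roughly like this. A pure-Python Latin hypercube is shown for illustration; in practice a library routine such as `scipy.stats.qmc.LatinHypercube` would be preferable. All function names are ours.

```python
import random

def latin_hypercube(n: int, d: int, seed: int = 0):
    """Plain Latin hypercube sample of n points in [0, 1]^d:
    each coordinate gets exactly one point per stratum of width 1/n."""
    rng = random.Random(seed)
    cols = []
    for _ in range(d):
        col = [(i + rng.random()) / n for i in range(n)]  # one point per stratum
        rng.shuffle(col)                                  # random pairing across dims
        cols.append(col)
    return [tuple(cols[j][i] for j in range(d)) for i in range(n)]

def normalize_objectives(F, ideal, nadir):
    """Scale objective vectors with the ideal and nadir points so that
    each objective lies (approximately) in [0, 1]."""
    return [
        tuple((f - z) / (w - z) for f, z, w in zip(row, ideal, nadir))
        for row in F
    ]

# Usage sketch: X = latin_hypercube(1000 * d, d) for a d-dimensional problem,
# then evaluate and normalize the objectives before computing features.
```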
#### Prediction
- Prediction task:
  - Predicting ECDF (AUC)
- Regression methods (from features to ECDF): linear/logistic regression, random forest, SVR (requires tuning)
- Prediction model for each algorithm (How many runs of each algorithm on each problem?)
- Normalize ECDF by scaling it from 0 to 1
- An option: Predicting the number of evaluations until feasible solutions are obtained (also from ECDF, for specific problem)
- Leave-one-problem-out
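The leave-one-problem-out protocol can be sketched as follows: a minimal pure-Python splitter where instances are represented by their benchmark-problem labels (the function name and data layout are ours).

```python
def leave_one_problem_out(problems):
    """Yield (held_out, train_idx, test_idx) splits: the test set holds
    every instance of exactly one benchmark problem, the training set all
    instances of the remaining problems. `problems[i]` is the problem
    label of instance i."""
    for held_out in sorted(set(problems)):
        test = [i for i, p in enumerate(problems) if p == held_out]
        train = [i for i, p in enumerate(problems) if p != held_out]
        yield held_out, train, test
```

With scikit-learn, `sklearn.model_selection.LeaveOneGroupOut` with the problem labels as groups would give the same splits.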
# Ideas from the CI workshop on 20. 10. 2023
We discussed two ways to go from m n-D samples to the 2-D grid representation used by CNNs.
### Cut first, sample next
1. Cut the n-D hypercube with a 2-D plane in any direction (the plane does not have to be axis-parallel, but you do have to take care that the intersection is not too small). What you get should be rectangle-shaped.
2. Sample that 2-D rectangle with m points and evaluate them (compute the objectives and constraints).
3. Use different channels in your CNN:
   - f1
   - f2
   - v (overall constraint violation)
   - dominance rank ratio (together with the constraint violation)
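Steps 1–2 above might be sketched as follows, assuming the decision space is the unit hypercube: we take a random (not necessarily axis-parallel) 2-D plane through the centre and keep only grid points that fall inside the cube. All names are ours; the per-channel values (f1, f2, v, rank ratio) would then come from evaluating the problem at each kept grid cell.

```python
import math
import random

def random_plane_grid(n: int, grid: int, half_width: float = 0.5, seed: int = 0):
    """Grid of points on a random 2-D plane through the centre of [0, 1]^n.
    Returns a list of ((i, j), point) pairs, where (i, j) is the pixel
    position in the grid image; points outside the cube are dropped.
    Assumes grid >= 2."""
    rng = random.Random(seed)
    u = [rng.gauss(0, 1) for _ in range(n)]
    v = [rng.gauss(0, 1) for _ in range(n)]
    # Gram-Schmidt: make u, v an orthonormal basis of the cutting plane
    nu = math.sqrt(sum(x * x for x in u))
    u = [x / nu for x in u]
    dot = sum(a * b for a, b in zip(u, v))
    v = [b - dot * a for a, b in zip(u, v)]
    nv = math.sqrt(sum(x * x for x in v))
    v = [x / nv for x in v]
    centre = [0.5] * n
    pts = []
    for i in range(grid):
        for j in range(grid):
            a = (i / (grid - 1) - 0.5) * 2 * half_width
            b = (j / (grid - 1) - 0.5) * 2 * half_width
            p = [c + a * ux + b * vx for c, ux, vx in zip(centre, u, v)]
            if all(0.0 <= x <= 1.0 for x in p):  # keep only points inside the cube
                pts.append(((i, j), tuple(p)))
    return pts
```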
### Sample first, cut next
1. Sample the n-D space with m samples as usual (use some DoE method).
2. Fit a surrogate model to these points (Gaussian process, Voronoi diagram (if these can be constructed for any dimension), etc.).
3. Cut the surrogate-model-based n-D hypercube one or multiple times (TODO: there are many options for how this could be done; needs further thought if we want to go that way).
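A minimal sketch of this route using the simplest surrogate mentioned above, a nearest-neighbour ("Voronoi") model, which works in any dimension; a Gaussian process would be a drop-in replacement. For simplicity the cut shown is axis-parallel, although step 3 allows arbitrary cuts. All names are ours.

```python
def nearest_neighbour_surrogate(samples):
    """1-NN ('Voronoi') surrogate: predict the value of the closest sampled
    point. `samples` is a list of (point, value) pairs."""
    def predict(x):
        best = min(samples, key=lambda s: sum((a - b) ** 2 for a, b in zip(s[0], x)))
        return best[1]
    return predict

def axis_slice_grid(surrogate, n, grid, fixed=0.5, axes=(0, 1)):
    """Evaluate the surrogate on a 2-D axis-parallel slice of [0, 1]^n:
    the coordinates in `axes` vary over a grid, the rest are held at
    `fixed`. Returns a grid x grid image (list of rows)."""
    img = []
    for i in range(grid):
        row = []
        for j in range(grid):
            x = [fixed] * n
            x[axes[0]] = i / (grid - 1)
            x[axes[1]] = j / (grid - 1)
            row.append(surrogate(x))
        img.append(row)
    return img
```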