# Different initial NNs
The first generation of NEAT is initialized with NNs of minimal structure. So far, these initial NNs were fully connected from all input nodes to all output nodes, with no hidden nodes. Here I examine three different settings:
- *fs_neat*, short for Feature Selection NEAT (introduced in a paper co-authored by Stanley), initializes the first generation with **NNs having a single random input-output connection**
- *partial_direct* is less extreme: each possible input-output connection is present with probability 1/4 (the 4 coming from the size of our output/action space)
- *full* is the case where NNs are fully connected from the input layer to the output layer (no hidden neurons).
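The three settings above can be sketched as follows. This is a minimal illustration of how each scheme picks the initial input-output connections, not the actual implementation used in the runs (the names mirror the options, e.g. `initial_connection`, found in common NEAT libraries such as neat-python):

```python
import random

def initial_connections(n_inputs, n_outputs, scheme, p=0.25, rng=None):
    """Build the initial input->output connection list for one genome.

    scheme: 'fs_neat'        -> a single random input-output connection
            'partial_direct' -> each possible connection kept with probability p
            'full'           -> every input connected to every output

    Hypothetical sketch of the three settings compared in the text.
    """
    rng = rng or random.Random()
    all_pairs = [(i, o) for i in range(n_inputs) for o in range(n_outputs)]
    if scheme == 'fs_neat':
        return [rng.choice(all_pairs)]
    if scheme == 'partial_direct':
        return [pair for pair in all_pairs if rng.random() < p]
    if scheme == 'full':
        return all_pairs
    raise ValueError(f"unknown scheme: {scheme}")
```

With an output/action space of size 4, `partial_direct` with `p = 1/4` keeps on average one connection per input node.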
For each initial population, I did 3 runs of 150 generations each. All simulations share the same parameters apart from the initial population.
### Fitness
The fitness is the average success rate over the range [0.01, 0.15] evaluated on 5000 puzzles for each error rate.
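The fitness computation reduces to averaging per-error-rate success fractions. A minimal sketch, where `success_rate_fn(p)` stands for the (not shown here) routine that returns the fraction of the 5000 puzzles solved at physical error rate `p`:

```python
def average_success_rate(success_rate_fn, error_rates):
    """Fitness = mean success rate across the evaluated error rates.

    success_rate_fn(p): hypothetical interface returning the fraction of
    puzzles solved at error rate p; the actual evaluation code is assumed.
    """
    rates = [success_rate_fn(p) for p in error_rates]
    return sum(rates) / len(rates)
```

The error rates would be sampled from the interval [0.01, 0.15]; the exact grid used in the runs is not specified here.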


### Time

(The differing training durations come from the varying complexity of the NNs encountered during training, which take more or less time to evaluate.)
## Network complexity

The complexity is defined as the number of connections in the best NN encountered during training.
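As a small sketch of this metric, assuming a genome represented as a list of connection genes with an `enabled` flag (a common NEAT convention; the exact genome structure is an assumption here):

```python
def complexity(genome):
    """Complexity = number of enabled connections in a genome.

    Assumes a hypothetical genome dict holding connection genes, each
    with an 'enabled' flag (disabled genes do not count as connections).
    """
    return sum(1 for conn in genome['connections'] if conn['enabled'])
```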
## Generalization ability
I picked one NN from each case above and used it to initialize (transplant) the genomes of a d=5 population (doing 3 runs).
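The transplant step can be sketched as seeding every individual of the new population with a copy of the chosen genome, mutating all but one so the runs do not start from identical individuals (`mutate` is a hypothetical in-place mutation operator; whether the actual runs mutated the seeds is an assumption):

```python
import copy
import random

def transplant_population(seed_genome, pop_size, mutate, rng=None):
    """Initialize a d=5 population from one genome evolved at smaller d.

    Every individual starts as a deep copy of `seed_genome`; all but the
    first are passed through `mutate` (hypothetical in-place operator)
    to introduce some initial diversity.
    """
    rng = rng or random.Random()
    population = [copy.deepcopy(seed_genome)]
    for _ in range(pop_size - 1):
        g = copy.deepcopy(seed_genome)
        mutate(g, rng)
        population.append(g)
    return population
```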

This seems to be the decisive criterion in favor of starting with a fully connected NN.