# Anomaly Detection Roadmap

The anomaly detection (AD) module of aeon is still relatively new and considered experimental. Our primary focus for the AD module in the short term is therefore to flesh out the interfaces by covering all algorithm types and to improve the testing. This will allow us to declare the AD module part of the stable API. In the long term, we want to improve algorithm coverage and add proper benchmarking harnesses.

The following sections present our detailed roadmap. The order of the unordered lists does not imply any priority.

## Short term (within 1 year)

1. Improve algorithm coverage
   - Implement at least one algorithm for each of the subpackages (algorithm types)
   - Add a wrapper/framework class for forecasting algorithms, wrapping the new aeon forecasting module (see the first sketch at the end of this roadmap)
2. Improve interface design and testing
   - Make a decision on learning-type exclusivity
   - Improve general testing for both types of anomaly detectors
   - Finalize interfaces and architectures for the deep learning detectors/algorithms
3. Remove the experimental tag

## Long term

- Implement more recent algorithms (e.g., Monash DL clustering)
- Provide benchmarking code and benchmark implementations
- Add more metrics for anomaly detection to the benchmarking, especially [affiliation-based](https://dl.acm.org/doi/10.1145/3534678.3539339) metrics; more details: https://link.springer.com/article/10.1007/s10618-023-00988-8
- Run a GSoC project implementing key algorithms that are currently missing
- Add a module for thresholding anomaly scores, similar to pythresh (see the second sketch below)
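The first sketch illustrates the forecasting wrapper item from the short-term list: a forecasting-based detector predicts each point from its recent history and uses the forecast error as the anomaly score. This is a minimal, self-contained sketch of that idea only; the function name, the plain-callable `forecaster` argument, and the window loop are illustrative assumptions, not aeon's actual API, and the planned class would instead wrap forecasters from the new aeon forecasting module.

```python
import numpy as np


def forecasting_anomaly_scores(y, forecaster, window_size=10):
    """Score each point by its absolute one-step-ahead forecast error."""
    scores = np.zeros_like(y, dtype=float)
    for t in range(window_size, len(y)):
        # Forecast y[t] from the preceding window; the residual is the score.
        forecast = forecaster(y[t - window_size:t])
        scores[t] = abs(y[t] - forecast)
    return scores


# Usage with a naive "repeat the last value" forecaster on a noisy sine wave
# containing one injected spike; the spike and the point after it score highest.
rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 20, 200)) + rng.normal(0, 0.05, 200)
series[120] += 3.0  # injected anomaly
scores = forecasting_anomaly_scores(series, forecaster=lambda window: window[-1])
print(np.flatnonzero(scores > 1.0))  # [120 121]
```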
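The second sketch illustrates the thresholding-module item: such a module would convert continuous anomaly scores into binary labels via pluggable rules, in the spirit of pythresh. The two rules and their names below are hypothetical examples, not an existing aeon or pythresh API.

```python
import numpy as np


def threshold_by_contamination(scores, contamination=0.05):
    """Flag the top `contamination` fraction of scores as anomalous."""
    cutoff = np.quantile(scores, 1.0 - contamination)
    return scores >= cutoff


def threshold_by_std(scores, n_std=3.0):
    """Flag scores more than `n_std` standard deviations above the mean."""
    return scores >= scores.mean() + n_std * scores.std()


# Usage on a toy score vector: the top 25% of scores (2 of 8) are flagged.
scores = np.array([0.1, 0.2, 0.15, 3.5, 0.05, 0.3, 4.2, 0.25])
print(np.flatnonzero(threshold_by_contamination(scores, contamination=0.25)))  # [3 6]
```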