# HIVE-COTE project

###### tags: `Research`

### People

- Diego Furtado Silva, UFSCar lead
- Jason Lines and Tony Bagnall, UEA leads
- Matthew Middlehurst, postdoc
- Anderson Giacomini, student

#### Anderson Giacomini: internal weighting of HC2

Will work with the postdoc on the sktime grant.

HIVE-COTE 2.0 is a meta-ensemble that combines weighted ensembles from different categories of classification algorithms, where the contribution of each category to the final decision depends on its accuracy, estimated through cross-validation on the training set. This weight-estimation step is a significant bottleneck when applying HIVE-COTE 2.0 to large volumes of data. To circumvent this bottleneck, we propose using simple meta-learning-based models to automatically and efficiently estimate the weights of the algorithms that make up HIVE-COTE 2.0.

### First meeting, 29/11/22

Plan:

1. Install the estimator-evaluation package and get `classification_experiments` running: https://github.com/time-series-machine-learning/tsml-estimator-evaluation
2. Implement HIVE-COTE loaded from file. See the skeleton implementation; talk to us if there are problems.
3. Set it up to use different values of alpha.
4. Try cross-validating alpha on the UCR datasets, to test the hypothesis that tuning the meta-parameter alpha improves overall performance.

### 7/12/22

1. HC2 from file pushed; need to sort out pre-commit.
2. Next goal is to tune HC2 for alpha. To do:
   1. Read the machine learning lecture on classifier evaluation.
   2. Set up a file in here called `Tuning.py`: https://github.com/time-series-machine-learning/tsml-estimator-evaluation/tree/main/tsml_estimator_evaluation/_wip/estimator_from_file
   3. Try tuning HC2 from file on ItalyPowerDemand: http://timeseriesclassification.com/dataset.php

### 5/1/23

1. Experiment 1: Does tuning alpha on the range 1-10 improve performance? Progress: experiments running, with some technical issues; try on the research lab machines.
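For reference, a minimal sketch of the accuracy-based weighting that alpha controls, assuming the CAWPE-style scheme HC2 uses, where each component's class probabilities are weighted by its train-CV accuracy raised to the power alpha (the function name and toy data below are illustrative, not from the codebase):

```python
import numpy as np

def combine_probabilities(probas, accuracies, alpha=4):
    """Combine component class-probability estimates CAWPE-style.

    probas: list of (n_cases, n_classes) arrays, one per HC2 component.
    accuracies: estimated train-CV accuracy of each component.
    alpha: exponent controlling how strongly accuracy drives the weights.
    """
    weights = np.asarray(accuracies) ** alpha
    combined = sum(w * p for w, p in zip(weights, probas))
    # Renormalise so each case's probabilities sum to one.
    return combined / combined.sum(axis=1, keepdims=True)

# Toy example: two components, three test cases, two classes.
p1 = np.array([[0.9, 0.1], [0.4, 0.6], [0.7, 0.3]])
p2 = np.array([[0.2, 0.8], [0.3, 0.7], [0.6, 0.4]])
probs = combine_probabilities([p1, p2], accuracies=[0.9, 0.6], alpha=4)
preds = probs.argmax(axis=1)  # predicted class per case
```

A larger alpha makes the combination trust the more accurate components more sharply; alpha = 0 reduces to an unweighted average, which is why tuning it over a range like 1-10 is a plausible experiment.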
2. On cmp-22dgopc.uea.ac.uk: mount the machine, copy over the results, and run from there.
3. Figure out how to compare results.
4. Arrange a meeting with Diego for the next stages.
5. Think about writing a paper. Look at transfer learning / domain knowledge / data characteristics as priors for the weighting scheme.

### 12/1/23

1. Experiment 1: Does tuning alpha on the range 1-10 improve performance? Results currently show that building from file and tuning make performance worse; this is probably a bug!
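For Experiment 1, one way to tune alpha from stored results is a simple grid search over the components' saved train-set cross-validation probabilities, picking the alpha whose weighted combination best reproduces the train labels. A sketch under those assumptions (the helper name and toy data are hypothetical, not from the repository):

```python
import numpy as np

def tune_alpha(train_probas, cv_accuracies, y_train, alphas=range(1, 11)):
    """Grid-search alpha on 1-10 using stored train-CV probabilities.

    train_probas: list of (n_train, n_classes) arrays, one per component.
    cv_accuracies: estimated train-CV accuracy of each component.
    Returns the alpha with the best combined train accuracy, and that accuracy.
    """
    best_alpha, best_acc = None, -1.0
    for alpha in alphas:
        weights = np.asarray(cv_accuracies) ** alpha
        combined = sum(w * p for w, p in zip(weights, train_probas))
        acc = np.mean(combined.argmax(axis=1) == y_train)
        if acc > best_acc:
            best_alpha, best_acc = alpha, acc
    return best_alpha, best_acc

# Toy check: a mildly correct component should outweigh a confidently
# wrong one once alpha is large enough.
p_good = np.array([[0.6, 0.4], [0.4, 0.6]])  # correct on both cases
p_bad = np.array([[0.1, 0.9], [0.9, 0.1]])   # confidently wrong on both
alpha, acc = tune_alpha([p_good, p_bad], [0.9, 0.5], np.array([0, 1]))
```

If tuning this way makes test results worse than the default alpha, one thing to check is that the probabilities loaded from file line up with the right train cases and class ordering, since a misalignment there would bias the selected alpha.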