# Hands-On 5

###### tags: `homework/lab`

This is the place to ask/share/answer any questions related to the hands-on lab.

::: info
Submission deadline: 6/10 20:00. Late submission deadline: 6/11 20:00 **(10 point penalty)**. **A grade of 0 (zero) will be given for submissions after "Late submission".**
:::

## Readings about Performance Measures

::: info
While our classification task is *multi-label* classification (each sample is labeled with a subset of labels, not just one), you may find the following articles, which I wrote on Medium, useful. They explain precision, recall, and F1 for binary and *multi-class* metrics:

* [Multi-Class Metrics Made Simple, Part I: Precision and Recall](https://towardsdatascience.com/multi-class-metrics-made-simple-part-i-precision-and-recall-9250280bddc2?source=friends_link&sk=aed5c11c0ba28470479f286b2ef239f9)
* [Multi-Class Metrics Made Simple, Part II: the F1-score](https://towardsdatascience.com/multi-class-metrics-made-simple-part-ii-the-f1-score-ebe8b2c2ca1?source=friends_link&sk=e925a881bc07852b50b58bf1fccf7791)
:::

## Discussion

:::success
This forum is used for asking questions and sharing information. **Students are encouraged to share their knowledge or solutions with other students.** In addition, the forum will be checked by the TAs every 12–24 hours.
:::

> I'd like to ask about the `Team` page on the CodaLab site. When I click the button, I am redirected to a page like this. ![](https://i.imgur.com/7U2mA7r.png) How can I get to the original `Team` page?
>> Is this for [Lab 5](https://competitions.codalab.org/competitions/25115?secret_key=e3d89637-74fe-41e4-acb5-c734d91edebe) or the [Final Project](https://competitions.codalab.org/competitions/24761)? [name=Boaz][color=green]
>>> It's for Lab 5.
>>>> Can you email me the exact details, with screenshots and account names? [name=Boaz][color=green]
>>>>> It's working correctly now. Thanks!

> Now it is the evaluation phase of the final competition.
> I noticed that the total number of submissions is 10. Does that mean we can only submit 10 times in this phase? If so, can we still submit `dev.json` an unlimited number of times to practice with our model?
>> Please post questions related to the Final Project on [this HackMD note](https://hackmd.io/@nlp-108/BJ2mgzn9U). [name=Boaz][color=green]
>>> Sorry.

> Hi, Boaz. Which data file is used for submission? Is it `dev_unlabeled.json`, `test_unlabeled`, or testing data split from `train_gold.json`? I tried submitting `dev.json` (the prediction result for `dev_unlabeled.json`), but got the error "Invalid file type (application/json)." What's weird is that our team submitted the exact same format of `dev.json` to **EmotionGIF 2020** and it was accepted.
>> Sounds like you forgot to ZIP the file? For more information about the prediction files to be uploaded, please see [Phase 1](https://sites.google.com/view/emotiongif-2020/shared-task/competition-phases?authuser=0#h.p_5OCFW8s3ored) and [Submission Format](https://sites.google.com/view/emotiongif-2020/shared-task/submission-format). [name=Boaz][color=green]
>>> Yes, that turned out to be the problem! Thank you!

> Hi, Boaz. I also had a problem with submission and got the error "Invalid file type (application/json)." I used the file format `dev.json` and also ZIPped it.
>
> Update: I managed to upload the file successfully. Nothing changed in my file; I'm not sure what happened.
>
> I submitted my `dev.json` in a ZIP file, but I can't see my score. Have I missed anything?
>> Sometimes it takes a few minutes to compute the score. Check back after a few minutes at the [Participate -> Submit/View Results](https://competitions.codalab.org/competitions/25115?secret_key=e3d89637-74fe-41e4-acb5-c734d91edebe#participate-submit_results) tab. Make sure to refresh the page. [name=Boaz][color=green]
>>> This is what my submission looks like: ![](https://i.imgur.com/sps2zOg.png) Although I submitted it yesterday, I still cannot see my score on the secret website.
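The "Invalid file type (application/json)" reports above trace back to uploading the raw JSON file instead of a ZIP archive. Below is a minimal sketch of a local check-and-zip step before uploading; the file names, the `expected_samples` parameter, and the one-JSON-object-per-line format are assumptions for illustration, so follow the official Submission Format page for the authoritative layout.

```python
import json
import zipfile

def package_predictions(json_path="dev.json", zip_path="submission.zip",
                        expected_samples=None):
    """Validate a prediction file and wrap it in a ZIP archive.

    The scorer rejects a raw .json upload ("Invalid file type
    (application/json)"), so the file must be zipped first.
    """
    with open(json_path) as f:
        # Assumes one JSON object per line (JSON Lines); adjust the
        # parsing if your predictions are a single JSON array instead.
        predictions = [json.loads(line) for line in f if line.strip()]

    # Catch a sample-count mismatch locally instead of on the server.
    if expected_samples is not None and len(predictions) != expected_samples:
        raise ValueError(f"expected {expected_samples} samples, "
                         f"found {len(predictions)}")

    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.write(json_path)
    return len(predictions)
```

Running this once before every upload catches both the missing-ZIP mistake and an incomplete prediction file.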
>>>> OK, thank you for the screenshot :) This should be fixed now! [name=Boaz][color=green]
>>>>> Thanks for your help. By the way, could you please announce the details of the final competition once they are available? I am worried about missing anything important. Thanks in advance.

> In Lab 5, should we stick to the steps in the Google slides, or can we start from the slides and modify the model as we want?
>> For the homework (Colab script), you need to provide the two models specified in the homework (Majority and Gaussian Naive Bayes) and also upload the corresponding predictions to the NCTU website. If you like, you can also upload additional predictions to the website if you want to start practicing for the competition, but do not include that code in the homework submission -- keep it for the final project.
>>
>> Note that you can add information to each of your predictions. You may find this option useful, for example, to label your entries (e.g., "My majority-based prediction").
>>
>> ![](https://i.imgur.com/W7Di6CD.png)
>>
>> [name=Boaz][color=green]

> My submission to EmotionGIF got the error message below:
> `WARNING: Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap. Wrong number of samples in submission file (4000)`
> I am not sure what happened.
>> I think you need to check your JSON output carefully, including the output format, the total number of entries, etc.

> Just want to check again: in the Majority Prediction part, we don't need to train on the data, right?
>> Right. [name=Boaz][color=green]

> Do we need to concatenate the reply into each part of the data?
>> In Lab 5 we do not use the ``reply`` data. [name=Boaz][color=green]

> We have many submissions on the secret website. We marked each in the description as "Majority Prediction" or "Naive Bayes Prediction". Is that OK? Or is other information required for the submission?
>> It's up to you.
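As the exchange about the Majority Prediction confirms, that baseline needs no training loop: it counts label frequencies over the training set and predicts the same most-frequent labels for every test sample. A minimal sketch, assuming labels are given as lists of strings per sample and that `k` labels are predicted (both assumptions, not the official lab spec):

```python
from collections import Counter

def majority_prediction(train_labels, k=6):
    """Majority baseline: return the k most frequent labels in the
    training set. Every test sample receives this same prediction,
    so no model fitting is involved. (k=6 is an illustrative
    assumption; use however many labels the task expects.)
    """
    counts = Counter(label for labels in train_labels for label in labels)
    return [label for label, _ in counts.most_common(k)]
```

The Gaussian Naive Bayes part, by contrast, does fit a model to the training data; see the lab slides for the required setup there.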
>> As long as you have the two needed submissions uploaded (Majority and NB), you should be fine. If you wish, you can delete the excess submissions from the secret site. [name=Boaz][color=green]

> Just out of curiosity, is it really possible to get an mAP over 1.0, as shown on the leaderboard yesterday?
>> No. By the way, the metric is mean average recall (MAR), not mean average precision (mAP).
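A score above 1.0 is indeed impossible for a recall-based metric. As a sanity check, here is a simplified per-sample recall average; this is not the competition's official scorer, whose exact definition of MAR may differ, but it illustrates why the score is bounded by 1.0:

```python
def mean_recall(gold_labels, predicted_labels):
    """Average, over samples, of the fraction of gold labels that the
    prediction recovers. Each per-sample recall lies in [0, 1], so the
    mean is bounded by 1.0 -- a leaderboard value above that can only
    come from a scoring glitch.
    """
    total = 0.0
    for gold, pred in zip(gold_labels, predicted_labels):
        if gold:  # skip samples with no gold labels
            total += len(set(gold) & set(pred)) / len(gold)
    return total / len(gold_labels)
```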