# Self-supervised toolkit with PyTorch-Ignite

## Thoughts, Ideas

- Sylvain: usage inside Michelin?
- Ahmed: provide deadlines for sub-tasks

## Kick-off meeting (10/11/2021)

TODO for everyone: please add relevant info for the project here.

### Papers with algorithms we would like to re-implement

Contrastive learning:

- SimCLR: https://arxiv.org/pdf/2002.05709.pdf (Taras, Victor)
- BYOL: https://arxiv.org/pdf/2006.07733.pdf (Victor)
- MoCo: https://arxiv.org/abs/1911.05722
- SwAV: https://arxiv.org/abs/2006.09882
- CLIP?: https://arxiv.org/abs/2103.00020
- DINO: https://arxiv.org/abs/2104.14294
- PAWS?: https://arxiv.org/abs/2104.13963
- A good review of contrastive self-supervised learning (it covers SimCLR, BYOL, and others): https://arxiv.org/pdf/2011.00362v3.pdf

Question: what about NLP?

Tabular (for later):

- VIME: https://vanderschaar-lab.com/papers/NeurIPS2020_VIME.pdf

**Soft deadline: 19/11/2021**
**Hard deadline: 24/11/2021**

#### Notes:

### Existing open-source code/toolkits to take inspiration from

- [facebookresearch/vissl](https://github.com/facebookresearch/vissl)
- Probably MONAI? E.g. how to structure the code:
    - models
    - losses
    - workflows
    - engines
- https://github.com/lightly-ai/lightly
- https://github.com/hankook/AugSelf

**Soft deadline: 24/11/2021**
**Hard deadline: 29/11/2021**

## Toolkit code structure proposals

### Victor's draft

The toolkit could be used like this:

```python
from pytorch_ignite_ssl import DINOTrainer

customization_kwargs = {
    # customization params
}
trainer = DINOTrainer(**customization_kwargs)
# trainer is an ignite.engine.Engine with a predefined training step,
# attached handlers, and evaluators as attributes
ssl_data = trainer.setup_data(dataset1, dataset2)
trainer.run(ssl_data, max_epochs=100)
```

### Taras' draft