# Deep Learning Final Project
The final research project aims to give you a sense of what a deep learning research project entails and, hopefully, to get you excited about doing research in this field. It requires the critical thinking that you will develop by learning the material and doing assignments during the semester.
At the end of the semester, you will share your work with your peers through presentations and a written report. This type of project-based exercise will help you develop skills for independent research, such as:
- Thinking of a project idea
- Conducting a literature survey
- Designing experiments
- Reporting results in a clear and concise manner
:::danger
**Please read this handout in its entirety.** It contains all the information, forms, and **deadlines** you'll need to know about!
:::
## Overview
Students in both 1470 and 2470 will complete final projects in groups of **3-4 people**. These projects are an opportunity for you to apply the knowledge you’ve gained in class to a topic or area of interest. Your group may re-implement research papers, build on existing projects, or create entirely new projects. **The time commitment per student should be *approximately equal regardless of the group size*, so larger groups should justify their size by taking on more ambitious projects.**
Your project group, once assembled, will be assigned a **mentor TA**, who will guide you throughout the remainder of this semester. Your mentor will be the one to give you pointers on how to get started, where to look for ideas, and also be one of the people evaluating your work. You will be graded on completion of goals, punctuality with which you meet your deadlines, and professionalism in reports and presentations.
We do not expect you to build a magical model that solves all of the world’s problems. What we do expect is concentrated, well-thought-out effort toward solving a problem: rather than reporting a single number on a certain metric to show your model just “works”, we expect you to perform **quantitative ablation studies** (e.g. show us which architectural changes, hyperparameters, and regularization techniques you tried, which were helpful, and why you think so) to analyze your models, along with **qualitative evaluation** (e.g. visualizations) to illustrate what the model learns and how it might fail. You might want to check out Prof. Bill Freeman’s awesome talk “How to Write a Good CVPR Paper” ([video](https://www.youtube.com/watch?v=W1zPtTt43LI&t=2681s), [slides](https://billf.mit.edu/sites/default/files/documents/cvprPapers.pdf)). Of course, we do not expect you to finish a project at “top AI conference” level within a semester, but the general high-level principles are helpful.
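To make the ablation idea concrete, here is a minimal, framework-agnostic sketch of one way to organize an ablation loop. The `train_and_evaluate` helper and the configuration grid are hypothetical placeholders, not a prescribed setup; swap in your own training code and a real validation metric.

```python
# A minimal ablation-study sketch (framework-agnostic).
# `train_and_evaluate` is a hypothetical placeholder: replace it with your own
# training loop and return a real validation metric instead of a random number.
import itertools
import random

def train_and_evaluate(config: dict) -> float:
    """Placeholder: train a model under `config`, return a validation metric."""
    return random.random()  # replace with real training + evaluation

# Each axis of the grid is one ablation: vary it while holding the rest fixed.
grid = {
    "learning_rate": [1e-3, 1e-4],
    "dropout": [0.0, 0.5],
    "use_batch_norm": [True, False],
}

results = []
for values in itertools.product(*grid.values()):
    config = dict(zip(grid.keys(), values))
    results.append((config, train_and_evaluate(config)))

# Sort by the metric so the effect of each change is easy to read off.
for config, metric in sorted(results, key=lambda r: -r[1]):
    print(f"{config} -> val_metric={metric:.3f}")
```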
Do **not** be afraid of negative results! Some of the most interesting and well-received presentations from past years were ones that failed to produce “successful” results!
## Requirements
- Groups of 3 to 4 students. (A solo project is possible, but not recommended, for 2470 or 1470-with-capstone students; the student should check with the instructor and get approval.)
- Each group must be completely {1470} or completely {2470, 1470 with capstone}. If a 1470 student works together with 2470/1470-capstone students, the expectation is that every member of the group **contributes at the 2470 capstone level**.
- The project must be related to course material.
- Present and submit a poster or slides recapping your work at the end of the semester. The online forum for sharing your work is Devpost.
- We will host a “Deep Learning Day” at the end of the semester. Student groups will participate in-person for poster presentations (1470 groups) and oral presentations (2470+1470 capstone groups).
- Participate in “Deep Learning Day” by engaging with other students’ projects at the end of the semester.
- Meet all deadlines and check-ins specified below.
- Submit all project-related code via sharing a Github Repo link to your mentor TA. Projects without code submission are considered incomplete and will not be graded. Note: Even if you are implementing an existing paper, your code submission must consist of your **own original work**.
- Your code can be written using any deep learning library, such as TensorFlow, Keras, Jax, or PyTorch. We recommend you take on a project that requires a moderate level of compute (e.g. your desktop or department machines).
- Reports and proposals must be written as PDF files.
:::danger
**NO** Late Days may be used for the final submission date, but if you absolutely need an extra day or two for earlier checkpoints, talk to your TA; they will approve this at their own discretion.
:::
## Scope
The project is “open-ended,” meaning it’s open to interpretation. CS1470 students have two options; CS2470 students must choose the **second** option of solving a new problem.
### Option 1: Re-implement a research paper
Find a paper from a recent machine learning conference that describes a deep-learning-based system, and try to reproduce its results. For this approach to be valid, the re-implementation must not be a trivial effort. If there’s already open-source code that comes with the paper, you can still do it, but you’d need to at least (a) implement the system in a different framework (e.g. PyTorch instead of TensorFlow) **AND** (b) try your implementation on a different dataset than the one(s) in the paper. We’ll also ask you to share links to any public implementations you come across so that we can verify your code is your own work. If you need inspiration for potential papers, look through the recent proceedings of the following conferences:
- AI / Machine Learning / Data Mining:
- Neural Information Processing Systems (NeurIPS)
- The International Conference on Learning Representations (ICLR)
- The International Conference on Machine Learning (ICML)
- Knowledge Discovery and Data Mining (KDD)
- Computer Vision:
- Computer Vision and Pattern Recognition (CVPR)
- The International/European Conference on Computer Vision (ICCV/ECCV)
- Natural Language Processing:
- The Association for Computational Linguistics (ACL)
- Empirical Methods in Natural Language Processing (EMNLP)
- Computational Biology and Health:
- Research in Computational Molecular Biology (RECOMB)
- Intelligent Systems for Molecular Biology / European Conference on Computational Biology (ISMB/ECCB)
- International Workshop on Data Mining in Bioinformatics (BIOKDD)
- Pacific Symposium on Biocomputing (PSB)
- ACM Conference on Bioinformatics, Computational Biology and Biomedicine (ACM-BCB)
Most paper authors will have made pre-prints publicly available on their personal websites or via [arXiv](https://arxiv.org/). Empirically, it is often a good idea to pick research papers whose source code has been released by the authors. This gives you a good sense of how easy the results are to reproduce with the authors’ own code, and of how much work a re-implementation will require.
### Option 2: Try to solve a new problem
You can do this using whatever deep learning methods you can find that get the job done. Ideally, the project would involve more than one major topic we covered in the class (CNNs, RNNs or Transformers, Generative models, Fairness and model interpretability, and Reinforcement Learning). For 1470-capstone students, your capstone requires you to work on a project that connects what you have learned in more than one course (e.g. deep learning, machine learning, and computer vision). You are encouraged to implement your project “from scratch”, but you can also use open-source deep learning projects as components in your framework. Examples of permitted uses of open-source projects:
- If you are building a generative model that turns food videos into recipes, you can extract visual features with an open-sourced model such as [CLIP](https://github.com/openai/CLIP) (see the sketch after these lists).
- If you want to build a reinforcement learning model for a game you invented, you can try out some well-implemented [RL baselines](https://github.com/openai/baselines).
- You can run a thorough analysis of model bias (e.g. gender or racial bias) for one or more popular open-sourced deep learning models.
Example uses that are **not** permitted:
- You take open-sourced model checkpoints and just “fine-tune” them on another dataset.
- You take an open-sourced framework and replace its ResNet-50 with a ResNet-101.
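As an illustration of the first permitted example above, here is a minimal sketch that uses the open-sourced CLIP repository linked above as a frozen feature extractor. The frame filenames are hypothetical placeholders; the `clip.load` / `encode_image` calls follow the CLIP repository’s documented usage, and your own original model would consume the extracted features.

```python
# Minimal sketch: use open-sourced CLIP as a *frozen* feature extractor for
# sampled video frames (install it from https://github.com/openai/CLIP first).
# The frame filenames below are hypothetical placeholders.
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

frame_paths = ["frame_001.jpg", "frame_002.jpg"]  # frames sampled from a video
frames = torch.stack([preprocess(Image.open(p)) for p in frame_paths]).to(device)

with torch.no_grad():                              # CLIP weights stay frozen
    frame_features = model.encode_image(frames)    # (num_frames, 512) for ViT-B/32

# frame_features would then feed the model you build yourself
# (e.g. a recipe-generating decoder), which is where your original work lives.
print(frame_features.shape)
```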
Please cite all the open-source frameworks you used in the final report, and check with your mentor TA to make sure they are okay with your proposal.
You can check out the projects from a [previous Deep Learning Day](https://brown-deep-learning-day-f2021.devpost.com/project-gallery) or from [Stanford CS231n](http://cs231n.stanford.edu/2017/reports.html) to draw some inspiration, but your project needs to differ from previous projects in meaningful ways.
## Compute Resources
We recommend a project that can run on your desktop or department machines. Please check with your TAs if your project proposal might require more compute resources than we can provide. Remember that you will not be judged on absolute performance (e.g. getting the best numbers in the world on some benchmark), but on the creativity of your ideas, the quality of your code and documents, and the thoroughness of your ablation experiments.
If you really want to take on a very ambitious project that requires several GPUs with large memory, please ensure that you have obtained the computational resources **before** you start the project, and get your TA’s permission. Example resources:
- You are working in a research lab and the lab provides you with GPU machines.
- A free exploratory account from Brown CCV (choose the “exploratory account without a PI” option).
- Try cloud service providers that offer free student credits (e.g. GCP). Google Colab notebooks will also let you use one GPU for free (see the quick check below).
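If you go the Colab route, a quick sanity check like the sketch below confirms your notebook actually sees the GPU. This assumes PyTorch; TensorFlow has an equivalent `tf.config.list_physical_devices("GPU")`.

```python
# Quick GPU sanity check for a Colab (or department machine) runtime.
import torch

if torch.cuda.is_available():
    print("GPU available:", torch.cuda.get_device_name(0))
else:
    print("No GPU found; in Colab, enable one via Runtime > Change runtime type.")
```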
## Deliverables
### Forming teams
:::info
**Due:** Feb 16, 2024 (Fri) 6PM EST
:::
Please fill out this [Google form](https://forms.gle/ExFrnEVg5eNAjeSk8) to let us know whether you would like to form your own team or would like us to assign you to a team. **If you have decided to form your own team,** please also submit the names of your team members; one form submission per team is sufficient. Remember you can form a group of 3-4 people. Each group must be completely {1470} or completely {2470, 1470 with capstone}. If a 1470 student works together with 2470/1470-capstone students, the expectation is that every member of the group contributes **at the 2470 capstone level**.
**Final team assignments will be shared by February 28th, 2024 (Wed)**
### Project Check-in #1
:::info
**Due:** Week of March 04, 2024
:::
For your very first check-in, you (with your team members) will meet with your mentor TA and have a brainstorming session. Reach out to your mentor TA to set up a meeting during the week of 03/04 (and no later than 03/14). You should prepare a few ideas in advance so you can discuss the feasibility and scope of the project. These could include application domains that you are interested in, a paper or two that you found interesting, some deep learning model you really want to implement, etc. This check-in is your opportunity to start thinking about your project proposal as a team and to get guidance from your mentor TA.
### Project Proposal
:::info
**Due:** March 15, 2024 (Fri) 6PM EST
:::
With your team members, decide on a team name and submit your final project idea by filling out the form [here](https://docs.google.com/forms/d/e/1FAIpQLSds4AxIBxzbtcmlLqCC9FuBfpYiMw3374ct009M7Ywiul7Vpw/viewform?usp=sf_link)! Only one person from your group needs to submit the form for everyone. If you are re-implementing an existing paper, please cite the paper that you want to implement. If you are trying to solve something new, please describe the problem and your plan of action. We will approve all proposals that are appropriate.
:::warning
Please note that if you do not submit your proposal by the deadline, you will receive a **2% deduction** on your grade for this project. This deadline cannot be extended except in extenuating circumstances, as TAs need to review and approve your proposals before greenlighting your project at check-in #2.
:::
### Project Check-in #2
:::info
**Due:** Week of April 08, 2024
[**Rubric**: TAs will be grading this check-in based on this rubric.](https://docs.google.com/document/d/1W_ugB1oRvtxTngrx7spdCmy_5jB93Qhq_4NAqntrjOA/edit?usp=sharing)
:::
For the second check-in, there are two parts:
1. You will submit an outline that details your plan and the main ideas via Devpost **before your meeting**. The outline requirements are described below. Additionally, submit the URL of your Devpost submission and the URL of your Github repo in [this form](https://forms.gle/Js2yXnjtyK1hLEw1A).
Please do **not** include your Github URL in the Devpost submission.
2. You will meet with your mentor TA and review the work you have done since your project was approved. Reach out to your mentor TA by 04/10 to schedule a meeting within the week of 04/08 - 04/14. Prior to this meeting, please try to have an idea of the following:
- **Understanding**: have as thorough of an understanding as possible of the paper you’re replicating/problem you’re solving
- **Data**: come with ideas on what data you’ll need to use (and how you can access it)
- **Methods**: have a rough idea of what kind of architecture you plan on implementing
- **Metrics**: have a proposal for your base, target, and stretch goals.
:::info
**Note:**
1. **Base goal** = what you think you definitely can achieve by the final due date.
2. **Target goal** = what you think you should be able to achieve by the due date.
3. **Stretch goal** = what you want to do if you exceed your target goal.
Further note that these goals are flexible and can be re-evaluated at later checkpoints.
:::
The outline that you submit/write-up to Devpost should contain the following:
- **Title**: Summarizes the main idea of your project.
- **Who**: Names and logins of all your group members.
- **Introduction**: What problem are you trying to solve and why?
- If you are implementing an existing paper, describe the paper’s objectives and why you chose this paper.
- If you are doing something new, detail how you arrived at this topic and what motivated you.
- What kind of problem is this? Classification? Regression? Structured prediction? Reinforcement Learning? Unsupervised Learning? etc.
- **Related Work**: Are you aware of any, or is there any prior work that you drew on to do your project?
- Please read and briefly summarize (no more than one paragraph) at least one paper/article/blog relevant to your topic beyond the paper you are re-implementing/novel idea you are researching.
- In this section, also include URLs to any public implementations you find of the paper you’re trying to implement. Please keep this as a “living list”: if you stumble across a new implementation later down the line, add it to this list.
- **Data**: What data are you using (if any)?
- If you’re using a standard dataset (e.g. MNIST), you can just mention that briefly. Otherwise, say something more about where your data comes from (especially if there’s anything interesting about how you will gather it).
- How big is it? Will you need to do significant preprocessing?
- **Methodology**: What is the architecture of your model?
- How are you training the model?
- If you are implementing an existing paper, detail what you think will be the hardest part about implementing the model here.
- If you are doing something new, justify your design. Also note some backup ideas you may have to experiment with if you run into issues.
- **Metrics**: What constitutes “success?”
- What experiments do you plan to run?
- For most of our assignments, we have looked at the accuracy of the model. Does the notion of “accuracy” apply for your project, or is some other metric more appropriate? (See the short illustration after this outline.)
- If you are implementing an existing project, detail what the authors of that paper were hoping to find and how they quantified the results of their model.
- If you are doing something new, explain how you will assess your model’s performance.
- What are your base, target, and stretch goals?
- **Ethics**: Choose 2 of the following bullet points to discuss; not all questions will be relevant to all projects so try to pick questions where there’s interesting engagement with your project. (Remember that there’s not necessarily an ethical/unethical binary; rather, we want to encourage you to think critically about your problem setup.)
- What broader societal issues are relevant to your chosen problem space?
- Why is Deep Learning a good approach to this problem?
- What is your dataset? Are there any concerns about how it was collected, or labeled? Is it representative? What kind of underlying historical or societal biases might it contain?
- Who are the major “stakeholders” in this problem, and what are the consequences of mistakes made by your algorithm?
- How are you planning to quantify or measure error or success? What implications does your quantification have?
- Add your own: if there is an issue about your algorithm you would like to discuss or explain further, feel free to do so.
- **Division of labor**: Briefly outline who will be responsible for which part(s) of the project.
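As a short illustration of the metrics question above: on imbalanced data, plain accuracy can look deceptively good, so another metric (here F1, computed with scikit-learn purely as an example) may be more appropriate. The labels below are toy placeholders, not real project data.

```python
# Toy illustration: accuracy vs. F1 on imbalanced labels (placeholder data).
from sklearn.metrics import accuracy_score, f1_score

y_true = [0] * 95 + [1] * 5   # only 5% positives
y_pred = [0] * 100            # a "model" that always predicts the majority class

print("accuracy:", accuracy_score(y_true, y_pred))        # 0.95, looks great
print("f1:", f1_score(y_true, y_pred, zero_division=0))   # 0.0, reveals the failure
```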
### Project Check-in #3
:::info
**Due:** Week of April 22, 2024
[**Rubric**: TAs will be grading this check-in based on this rubric.](https://docs.google.com/document/d/11qbo9pYnqvq5MXqMyG45r9sUXAQ5QXqCfGcjY__-kIE/edit?usp=sharing)
:::
For the third check-in, you will 1) write a one-page reflection on your progress so far and 2) meet with your mentor TA. We expect that you are wrapping up the implementation and performing final experiments. If you have questions before the third check-in, please contact your mentor TA or post questions on Ed.
:::info
**Submit the reflection (as described below) by linking it on your Devpost submission before your meeting.**
:::
For this check-in, we also require you to write a reflection that includes the following:
- **Introduction**: This can be copied from the proposal.
- **Challenges**: What has been the hardest part of the project you’ve encountered so far?
- **Insights**: Are there any concrete results you can show at this point?
- How is your model performing compared with expectations?
- **Plan**: Are you on track with your project?
- What do you need to dedicate more time to?
- What are you thinking of changing, if anything?
This check-in meeting with your mentor TA can be either in person or over Zoom, Google Meet, etc. Reach out to your mentor TA before 04/24 to schedule this meeting.
Regarding what we generally expect you to have **done** by this time:
- You should have collected any data and preprocessed it.
- You should have shared the Devpost link containing a Github repo link with your mentor TA.
- You should have almost finished implementing your model, and should be training your models and running ablation experiments.
- Please make sure you are keeping your list of public implementations you’ve found up-to-date.
### Final Check-in (Optional)
:::info
**Due Date:** Week of April 29, 2024
:::
If you want to, you can meet with your mentor TA one final time to review your project, poster, and presentation. Your mentor TA can give feedback on how best to present your implementation and outcomes.
### Deep Learning Day
:::info
**Date:** May 06/07, 2024 (Mon/Tue)
:::
This is a chance to show off your team’s awesome project and see all of the great work your peers have done! You’ll be expected to attend your theme session to present your work and to ask questions of other groups. More logistical details about the event and participation will be shared as we get closer to the event.
{1470} students should be prepared to give a ~2 minute presentation of their poster, while {2470/1470 capstone} students should prepare a longer ~10 minute presentation describing their project using slides.
Your poster/slides must contain the following information:
- **Title**
- **Names of project group members**
- **Introduction**: what problem you’re solving and why it’s important
- **Methodology**: your dataset and model architecture, etc.
- **Results**: both qualitative *and* quantitative (e.g. if you’re doing an image-related project, we want to see both pictures *and* graphs/tables)
- **Discussion**: lessons learned, lingering problems/limitations with your implementation, future work (i.e. how you, or someone else, might build on what you’ve done)
### Final Projects Due
:::info
**Due:** May 10, 2024 (Fri) 6PM EST
:::
You will need **three final deliverables** by the due date (note this is a **hard deadline**):
1. Poster or slides - {1470} students will post their digital posters on Devpost, while {2470/1470 capstone} students will post their oral presentation slides on Devpost.
2. Finalized code on GitHub
3. Final writeup/reflection
#### Poster
For poster presentations, we require one high-resolution horizontal 4:3 poster (as a JPG) to be displayed on your Devpost submission. Keep the content concise yet complete, and make it visually appealing. We recommend using InDesign, PowerPoint, or LaTeX for your poster.
#### Final Writeup/Reflection
Along with your Devpost submission, provide a final writeup/reflection on the project.
Please be sure to include:
- **Title**
- **Who**
- **Introduction**
- **Methodology**
- **Results**
- **Challenges**
- **Reflection**
Note that most of this writeup should already be largely complete without much extra effort: “Introduction” and “Methodology” should be mostly adaptable from your initial outline (though be sure to modify them if you pivoted or otherwise adjusted along the way), and “Challenges” can build off of what you discussed in your check-in #3 reflection. The “Results” section can summarize your results as they appear on the poster; this is also a space to add any additional results that didn’t make the poster. In your final reflection, please address the following questions (along with any other thoughts you have about the project):
- How do you feel your project ultimately turned out? How did you do relative to your base/target/stretch goals?
- Did your model work out the way you expected it to?
- How did your approach change over time? What kind of pivots did you make, if any? Would you have done differently if you could do your project over again?
- What do you think you can further improve on if you had more time?
- What are your biggest takeaways from this project/what did you learn?
## Grade Breakdown
A project submission is considered complete only if the written report, presentation, **AND** code are all submitted. An incomplete project receives a grade of zero. A complete project will be graded as follows:
- Written reports: 35%
- Poster + Oral Presentation: 35%
- Code: 15%
- DL Day Participation: 10%
- Peer evaluation: 5%
- To encourage a fair distribution of work between group members, each student will fill out a form at the end of the semester in which they describe the contributions that every other group member made to the project. You can find this form here.
## Previous Final Project Posters/Reports
To help you know what’s expected of you for your final project posters and reports, here are the Devpost galleries from DL Days in past offerings (you might have to log in to see the projects):
- [Spring 2023](https://brown-deep-learning-day-s23.devpost.com/)
- [Fall 2022](https://brown-deep-learning-day-f2022.devpost.com/)
We encourage you to also check out some of the projects that current and past TAs implemented in previous years:
- What the f&nt? ([poster](https://drive.google.com/file/d/1jRNYz6BC6JnPVeBxtEyur8Snro_GlECI/view), [writeup](https://docs.google.com/document/d/1G1yaYVTVwvIwT_JxketJEGNGAMkp2BuI5DUtAEiMVeM/edit?usp=sharing))
- Social Media Fake News Detector ([poster](https://docs.google.com/presentation/d/1KwmZb1-IOV0GSeQgFtu1AeXlNpphmkWx6udelo60KLc/edit?usp=sharing), [writeup](https://docs.google.com/document/d/1Dt8ir_jogCRbEcb9aWegED3CayN4NmBlH4ufNvGbDVA/edit?usp=sharing))
- Computational Photography in Extreme Low Light ([poster](https://drive.google.com/file/d/1Xh7U88LvsjTeSgaMy87ppBxmMBSYUudS/view?usp=sharing))
- Colorizer ([poster](https://docs.google.com/presentation/d/1r8hzBGwNVbMGL55a7pP0Hi_MkwZd5hjG6Z2wSg9YOZU/edit), [writeup](https://docs.google.com/document/d/1uWwGWh3g_jzh4wdPc5KjIeos-bYt2A6XXN4p90ZjJk4/edit))