In this course we will look at a handful of ubiquitous algorithms in machine learning. We will cover several classical tools, but the emphasis will be on recent advances and on developing efficient, provable algorithms for learning tasks. A tentative syllabus/schedule can be found below; the topics may change based on student interest. You can also check last year's course notes for more details about what's to come.
Make sure you can answer the questions in Assignment 0 for yourselves before registering for the course.
There will be four assignments, worth 70% of the grade in total, and a final worth 25%; the remaining 5% is for class participation (live-class participation and/or extensive contributions to online discussions). Each assignment will have both written and programming components. We will predominantly use Gradescope for the assignments. The final will follow the university schedule. You may discuss how to solve problems with others (a better option is to ask questions openly on edStem), but see the course policies below and make sure you do not violate them.
Some background in algorithms, probability, and linear algebra (all at a basic undergraduate level) will be quite helpful.
▸ Zoom
▸ Note that each lecture is on a separate page.
▸ Module 5: Graphical Models
▸ Module 4: Privacy in ML
▸ Module 3: PCA
▸ Modules 1 & 2: Optimization and Online Learning
▸ Assignment 0 (not to be submitted)
▸ Assignment 1 (18 points). Out April 12th, due April 19th.
▸ Assignment 2 (18 points). Out April 26th, due May 3rd.
▸ Assignment 3 (18 points). Out May 10th, due May 17th.
▸ Assignment 4 (16 points). Out May 24th, due May 31st.
Solutions will be sketched on the discussion forum after the due date.
We will make use of edStem extensively; videos will also be uploaded there. You should have received an invitation via your official UCLA email ID (the one used on CCLE). If you haven't, let me know immediately. edStem will be our main way of communicating with each other. Please ask questions there so that others may also benefit from the answers.
We will also use it for releasing homework solutions. Homeworks themselves will be posted on Gradescope.
Registration links will be provided soon.
We will use Gradescope for homework; submissions are due by 10 PM on the due date. Things to keep in mind:
1) Within a week of the start of the course, you should receive a registration link from Gradescope; this link gives you access to the website. If you don't receive it before the first homework, contact me.
2) Watch this video with instructions and follow them to the letter! The simple guidelines make the process considerably smoother.
3) Start each problem of a homework on a new page.
4) To generate a PDF scan of an assignment, you can follow the instructions here; you can also use the scanners in the library.
It is strongly recommended to use LaTeX or other word processing software for submitting the homework. Grades will take into account both the correctness and the quality of the solutions. Correctness is a prerequisite but clarity is also important: you are responsible for communicating your solution in a legible and understandable way.
Some helpful guidelines: (1) Start early so that you can make use of office hours. (2) Serious, honest attempts count: there will be reasonable partial credit for attempts that show understanding of the problem and the concepts involved.
▸ Sanjeev Arora's course
▸ Moritz Hardt's course
▸ Elad Hazan's course
▸ Foundations of Data Science by Blum, Hopcroft, and Kannan.
Here is a tentative list of topics for the course. I will adapt some of these topics based on student interest.
▸ Learning as optimization
▸ Gradient descent (a minimal sketch follows this list)
▸ Stochastic gradient descent
▸ Accelerated gradient descent methods
▸ Autograd and AdaGrad
▸ Online optimization and regret minimization
▸ Multiplicative weights (sketch below)
▸ Boosting
▸ Best-fit subspaces, low-rank approximations
▸ Computing and applying the Singular Value Decomposition (sketch below)
▸ Modeling dependencies
▸ Learning and inference on graphical models (sketch below)
▸ GLMs and the Sparsitron
▸ How easily can learning algorithms violate privacy?
▸ Models of privacy: differential privacy and the Laplace mechanism (sketch below)
▸ Differentially private SGD (sketch below)
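To give a flavor of the optimization module, here is a minimal sketch of gradient descent on a least-squares objective. The data, step size, and iteration count are illustrative assumptions, not course material.

    import numpy as np

    def gradient_descent(X, y, step=0.1, iters=500):
        """Minimize f(w) = ||Xw - y||^2 / (2n) by full-batch gradient descent."""
        n, d = X.shape
        w = np.zeros(d)
        for _ in range(iters):
            grad = X.T @ (X @ w - y) / n  # gradient of the average squared error
            w -= step * grad
        return w

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 5))
    w_true = rng.normal(size=5)
    y = X @ w_true
    print(np.linalg.norm(gradient_descent(X, y) - w_true))  # should be near 0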
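Multiplicative weights in a few lines: maintain one weight per expert and scale each weight down exponentially in the loss it incurs. The losses below are synthetic, purely to exercise the update; with eta tuned on the order of sqrt(log(k)/T), the regret grows like sqrt(T log k).

    import numpy as np

    def multiplicative_weights(losses, eta=0.1):
        """losses: (T, k) array of per-round expert losses in [0, 1]."""
        T, k = losses.shape
        w = np.ones(k)
        alg_loss = 0.0
        for t in range(T):
            p = w / w.sum()                # distribution over experts this round
            alg_loss += p @ losses[t]      # expected loss incurred
            w *= np.exp(-eta * losses[t])  # penalize lossy experts
        return alg_loss, losses.sum(axis=0).min()

    rng = np.random.default_rng(1)
    alg, best = multiplicative_weights(rng.uniform(size=(1000, 10)))
    print(alg - best)  # regret relative to the best expert in hindsight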
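For the PCA module: by the Eckart-Young theorem, truncating the SVD to the top r singular triples gives the best rank-r approximation in Frobenius (and spectral) norm. A sketch on a random, purely illustrative matrix:

    import numpy as np

    def low_rank(A, r):
        """Best rank-r approximation of A via truncated SVD."""
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        return U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]

    A = np.random.default_rng(2).normal(size=(50, 30))
    A5 = low_rank(A, 5)
    print(np.linalg.matrix_rank(A5))  # 5
    print(np.linalg.norm(A - A5))     # sqrt of the sum of the discarded s_i^2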
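For the graphical-models module, one classical inference primitive is Gibbs sampling. Below is a toy sketch on a pairwise Ising model, p(x) proportional to exp(x'Jx/2) over x in {-1, +1}^n; the symmetric coupling matrix J is random and purely illustrative.

    import numpy as np

    def gibbs_ising(J, steps, rng):
        """Resample one spin at a time from its conditional distribution."""
        n = J.shape[0]
        x = rng.choice([-1, 1], size=n)
        for _ in range(steps):
            i = rng.integers(n)
            field = J[i] @ x - J[i, i] * x[i]          # net influence of the other spins
            p_plus = 1.0 / (1.0 + np.exp(-2 * field))  # P(x_i = +1 | rest)
            x[i] = 1 if rng.random() < p_plus else -1
        return x

    rng = np.random.default_rng(3)
    J = rng.normal(scale=0.1, size=(5, 5))
    J = (J + J.T) / 2
    print(gibbs_ising(J, 2000, rng))  # one (approximate) sample from the model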
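The Laplace mechanism releases a statistic f(x) with epsilon-differential privacy by adding Laplace noise of scale sensitivity(f)/epsilon. A sketch for a counting query (sensitivity 1) on synthetic data:

    import numpy as np

    def laplace_mechanism(true_value, sensitivity, epsilon, rng):
        """Release true_value with epsilon-DP via additive Laplace noise."""
        return true_value + rng.laplace(scale=sensitivity / epsilon)

    rng = np.random.default_rng(4)
    data = rng.integers(0, 2, size=1000)  # 1000 private bits
    count = data.sum()                    # counting query: sensitivity 1
    print(count, laplace_mechanism(count, sensitivity=1.0, epsilon=0.5, rng=rng))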
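Finally, a simplified sketch of differentially private SGD: clip each per-example gradient to norm at most C, average, and add Gaussian noise before stepping. The squared loss, clip norm, noise multiplier, and data are illustrative assumptions; a real implementation also needs privacy accounting across iterations.

    import numpy as np

    def dp_sgd_step(w, X, y, clip=1.0, noise_mult=1.0, step=0.1, rng=None):
        """One noisy, clipped gradient step on the squared loss."""
        if rng is None:
            rng = np.random.default_rng()
        grads = (X @ w - y)[:, None] * X                  # per-example gradients
        norms = np.linalg.norm(grads, axis=1, keepdims=True)
        grads *= np.minimum(1.0, clip / (norms + 1e-12))  # clip to norm <= clip
        noisy = grads.mean(axis=0) + rng.normal(scale=noise_mult * clip / len(X),
                                                size=w.shape)
        return w - step * noisy

    rng = np.random.default_rng(5)
    X = rng.normal(size=(200, 3))
    w_true = np.array([1.0, -2.0, 0.5])
    y = X @ w_true
    w = np.zeros(3)
    for _ in range(300):
        w = dp_sgd_step(w, X, y, rng=rng)
    print(w)  # should be close to w_true, up to the injected noise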