# Grading Guidelines
We will use [Gradescope](https://www.gradescope.com/) to handle homework submissions and grading. Their [FAQ page](https://help.gradescope.com/category/t4mtl2if8n-instructor-faq) and [YouTube channel](https://www.youtube.com/@Gradescope) have some excellent tips, so do check them out.
## 1. Philosophy
Try to reward ideas and effort. Typos and simple mistakes that are easily fixable should not cost many points. It takes a lot of time and effort for a student to produce a homework submission; grade it as if it were your own.
## 2. Rubrics
Consistent grading requires a rubric. A good rubric should be:
1. General: it covers the most common mistakes.
2. Concise: it has about 10 items or fewer.
Here is an example of a rubric. The maximum number of points for the question is 15.
![](https://i.imgur.com/Cjyuw0M.png)
Because of its generality, the rubric only highlights the kind of mistake a student made, not the precise details.
## 3. Giving Feedback
Giving feedback to students is **very** important for their learning. The rubric only indicates the kind of mistakes they made; typically we need to provide more context for each submission. If a rubric item is truly self-explanatory (for example, an incorrect or blank solution), you don't have to provide further comments. However, that is often not the case, and a short comment helps **tremendously**. Here is an example. (I masked the sensitive part of the comment.)
![](https://i.imgur.com/wW8rNYT.png)
## 4. Using Point Adjustment
There is a point adjustment that you can use for the (rare) submissions where no rubric item applies. If you use a point adjustment, make sure to explain it in a comment so the student knows why.
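The interaction between rubric deductions and a manual point adjustment can be sketched as follows. This is a hypothetical illustration, not Gradescope's actual scoring code (their behavior is configurable per assignment); the function name and clamping to `[0, max_points]` are assumptions.

```python
# Hypothetical sketch of subtractive-rubric scoring with a manual
# point adjustment. Not Gradescope's actual implementation.

def final_score(max_points, deductions, adjustment=0.0):
    """Subtract rubric deductions from the maximum, apply the manual
    point adjustment, and clamp the result to [0, max_points]."""
    score = max_points - sum(deductions) + adjustment
    return max(0.0, min(max_points, score))

# Example: a 15-point question with two rubric deductions (-3 and -2)
# and a +1 adjustment for an approach no rubric item covers.
print(final_score(15, [3, 2], adjustment=1.0))  # → 11.0
```

The clamp ensures that stacking several rubric items on a weak submission never drives the score below zero, which matches the spirit of grading generously.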

Published on **HackMD**