# Rubric for Feedback and Grading
## Scoring Guide (35 Points Total)
### Question 1 (5 points)
**Code (2 points)**: The setup chunk correctly loads the necessary packages, and `glimpse(gapminder)` is run with its output clearly visible.
**Interpretation (3 points)**: The student provides a clear, 3–4 sentence description of the dataset's structure and offers a valid insight into why the variables are useful for visualization.
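For graders' reference, a full-credit setup chunk might look like the following. The rubric does not name the "necessary packages," so the assumption here is that the assignment relies on the tidyverse and gapminder packages:

```r
# Reference sketch only — package choices are an assumption, not a required answer
library(tidyverse)   # loads ggplot2, dplyr, and tibble (which provides glimpse)
library(gapminder)   # provides the gapminder dataset

# Output should be visible in the rendered document:
# row/column counts plus each variable's name, type, and first values
glimpse(gapminder)
```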
### Question 2 (10 points)
**Code (5 points)**: The code correctly uses `ggplot2` to generate a boxplot mapping `continent` to the x-axis and `lifeExp` to the y-axis. The final plot is correctly assigned to `plot_q2` and displayed.
**Interpretation (3 points)**: The student writes 3–4 clear sentences describing the plot's main insight for a general audience.
**Critique (2 points)**: The student provides a thoughtful 2–3 sentence critique of the AI's code.
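A minimal full-credit solution for the code portion might look like the sketch below. Theme and color choices are left open by the rubric, so anything beyond the required aesthetic mappings, the `plot_q2` assignment, and the displayed plot should be treated as optional:

```r
library(ggplot2)
library(gapminder)

# Boxplot of life expectancy by continent, as the rubric specifies
plot_q2 <- ggplot(gapminder, aes(x = continent, y = lifeExp)) +
  geom_boxplot()

# The plot must be displayed, not just assigned
plot_q2
```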
### Question 3 (5 points)
**Answer (3 points)**: The student correctly identifies Oceania as having the highest median life expectancy and explains that this is shown by the horizontal line (the median) inside the box.
**AI Check & Reflection (2 points)**: The student effectively compares their answer with an AI's response and provides a convincing explanation for why they trust their own interpretation.
### Question 4 (15 points)
**Code (7 points)**: The final plot assigned to `plot_q4` is correct and meets all specifications, including the use of `geom_line`, `geom_smooth`, `facet_wrap`, and all required labels.
**Interpretation (5 points)**: The student writes a detailed 4–5 sentence description of the visualization, highlighting at least one clear insight for a general audience.
**Critique & Reflection (3 points)**: The student thoughtfully critiques the AI's output by identifying specific inaccuracies (e.g., incorrect functions, flawed logic) and explaining the steps taken to correct the code. The reflection then connects this iterative process of critique and revision to their understanding of the task.
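As a grading reference, one plausible full-credit structure for the code portion is sketched below. The rubric lists only the required geoms, faceting, and labels, so the specific aesthetic mappings (life expectancy over time, one line per country, faceted by continent) and label text are assumptions for illustration:

```r
library(ggplot2)
library(gapminder)

# Life expectancy over time: one line per country, a smoothed trend per
# facet, and facets by continent — covering geom_line, geom_smooth,
# and facet_wrap as the rubric requires
plot_q4 <- ggplot(gapminder, aes(x = year, y = lifeExp)) +
  geom_line(aes(group = country), alpha = 0.3) +
  geom_smooth(se = FALSE) +
  facet_wrap(~ continent) +
  labs(
    title = "Life expectancy over time by continent",
    x = "Year",
    y = "Life expectancy (years)"
  )

plot_q4
```

When checking submissions against this sketch, the structural requirements (the three functions, the `plot_q4` assignment, and complete labels) are what carry points; alternative mappings that satisfy them should earn full credit.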
## Rubric for Letter Grades
For assignments with letter grades, you could use a rubric like the following to grade AI-integrated coding assignments:
| Grade Range | AI-Enhanced Version |
|:-----------:| ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **A** | - Code runs correctly, fully revised from AI suggestions where needed. <br>- AI workflow log (if used) is transparent, complete, and thoughtfully annotated.<br>- Critiques of AI output are insightful, demonstrating strong evaluative judgment.<br>- Explanations and interpretations are accurate, detailed, and connect visualizations to insights.<br>- Reflection clearly explains how AI use supported or challenged learning.<br>- Submission is polished, well-organized, and free of major errors. |
| **A-** | - Code is correct with minor formatting or labeling issues.<br>- AI workflow log (if used) is complete but lacks some detail.<br>- Critiques are clear but may not fully explore AI limitations.<br>- Explanations and interpretations are accurate but could be more developed.<br>- Reflection discusses AI’s role but is somewhat surface-level. |
| **B+** | - Code runs with correct output but may not fully meet formatting specifications.<br>- AI workflow log is present but minimal or incomplete.<br>- Critiques identify issues but lack depth.<br>- Explanations are mostly correct but somewhat generic.<br>- Reflection is present but superficial. |
| **B** | - Code runs partially or does not fully adhere to assignment requirements.<br>- AI workflow log is incomplete or missing.<br>- Critiques are vague or minimal.<br>- Explanations show gaps in understanding.<br>- Reflection is missing or very minimal. |
| **B-** | - Code contains notable errors or omissions.<br>- Little evidence of AI workflow documentation or critique.<br>- Explanations are inaccurate or incomplete.<br>- Interpretations misstate findings.<br>- Reflection absent or unconvincing. |
| **C** | - Code fails to run correctly or deviates significantly from requirements.<br>- No evidence of AI workflow or critique.<br>- Explanations and interpretations are missing or incorrect.<br>- Reflection absent. |
| **D** | - Minimal attempt to complete the assignment.<br>- Code does not run or is missing.<br>- No evidence of AI workflow, critique, explanations, or reflection. |