# GenAI HW7 Questions
### Explainable AI
1. Why is the explanation and interpretability of AI important?
- [ ] To ensure AI systems are transparent and trustworthy.
- [ ] To increase the computational complexity of AI systems.
- [ ] To enable AI to function without any data.
### Token Importance Analysis
2. How does token importance analysis help in understanding language models?
- [ ] It can speed up model generation by focusing on key tokens.
- [ ] It helps identify which words or tokens in an input sequence are most influential in generating the response.
- [ ] It can reduce the size of the language model by concentrating on important tokens.
- [ ] It can directly replace other evaluation metrics (e.g., Accuracy).
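As background for this question, one simple way to estimate token importance is occlusion: remove each token in turn and measure how much the model's score changes. The sketch below uses a hypothetical toy sentiment scorer (not a real language model) purely to illustrate the idea.

```python
# Occlusion-based token importance sketch.
# `toy_score` is a made-up stand-in for a model's output score; with a real
# model you would re-run inference on each ablated input instead.
POSITIVE = {"compelling", "captivates", "powerful"}

def toy_score(tokens):
    # Fraction of tokens that are "positive" words (toy scoring rule).
    return sum(1 for t in tokens if t in POSITIVE) / max(len(tokens), 1)

review = "this film is a compelling drama".split()
base = toy_score(review)

# Importance of a token = drop in score when that token is removed.
importance = {}
for i, tok in enumerate(review):
    ablated = review[:i] + review[i + 1:]
    importance[tok] = base - toy_score(ablated)

print(sorted(importance.items(), key=lambda kv: -kv[1]))
```

Under this toy rule, removing "compelling" causes the largest score drop, so it receives the highest importance, matching the intuition that importance analysis highlights the tokens most influential for the output.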
3. In the **machine translation** task, when generating the word “machine”, identify the word with the highest importance score according to **gradient-based (saliency)** visualization.
- [ ] 機器
- [ ] 學
- [ ] 習
- [ ] 智慧
4. In the **machine translation** task, when generating the word “artificial”, what is the importance score of the word “機器” using **attention-based** visualization?
- [ ] 0.26
- [ ] 0.233
- [ ] 0.036
- [ ] 0.178
5. In the **sentence completion** task, which method yields results that are more closely aligned with human judgment?
- [ ] gradient (saliency)
- [ ] attention
6. Which of the following does the gradient-based method (saliency map) visualize?
- [ ] The partial derivative of the loss with respect to the model parameter.
- [ ] The partial derivative of the model parameter with respect to the loss.
- [ ] The partial derivative of the output logit with respect to the input tensor.
- [ ] The partial derivative of the input tensor with respect to the output logit.
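For reference, a gradient-based saliency map differentiates an output logit with respect to the input (embedding) tensor and uses the gradient magnitude per token as its importance score. A minimal PyTorch sketch, using a tiny hypothetical model (embedding, mean pooling, and a linear head) rather than a real translation model:

```python
import torch

torch.manual_seed(0)

# Hypothetical toy model: embedding -> mean pool -> single logit.
vocab = ["機器", "學", "習", "智慧"]
emb = torch.nn.Embedding(len(vocab), 8)
head = torch.nn.Linear(8, 1)

ids = torch.tensor([0, 1, 2, 3])
# Detach the embeddings and mark them as the tensor we differentiate w.r.t.
x = emb(ids).detach().requires_grad_(True)

logit = head(x.mean(dim=0))   # scalar output logit
logit.sum().backward()        # d(logit)/d(input tensor)

# Saliency score per token: L2 norm of its embedding gradient.
scores = x.grad.norm(dim=-1)
for tok, s in zip(vocab, scores):
    print(f"{tok}: {s:.4f}")
```

The key point is the direction of the derivative: the output logit is differentiated with respect to the input tensor, not the other way around, and not with respect to the model parameters.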
7. Which of the following does the attention mechanism visualize?
- [ ] The gradient of the loss function with respect to the model parameter
- [ ] The activation values of the neurons in the model's hidden layers
- [ ] The attention weight between the model’s output and the input tokens
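For reference, attention-based visualization plots the attention weights between the token being generated and the input tokens. A minimal sketch of scaled dot-product attention weights with made-up random vectors (one decoder query over four source tokens, not taken from any real model):

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
d = 8
query = torch.randn(1, d)   # the token currently being generated
keys = torch.randn(4, d)    # the four source tokens

# Attention weights: softmax(QK^T / sqrt(d)); each row sums to 1.
weights = F.softmax(query @ keys.T / d ** 0.5, dim=-1).squeeze(0)
for tok, w in zip(["機器", "學", "習", "智慧"], weights):
    print(f"{tok}: {w:.3f}")
```

Because the softmax normalizes the weights, they are non-negative and sum to 1 over the input tokens, which is why attention maps are often read directly as importance distributions.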
### LLM Explanation
8. What is the advantage of Large Language Model (LLM) explanation over other explainable/interpretable methods?
- [ ] LLM explanation can provide natural language explanations that are more intuitive and easier for humans to understand.
- [ ] LLM explanation is always more accurate in identifying feature importance.
- [ ] LLM explanation requires less computational resources compared to other methods.
9. Use ChatGPT to perform sentiment analysis on a movie review by following the prompt provided below. Record your findings, which should include:
- Paste the output from ChatGPT.
- Evaluate whether the results from ChatGPT are reasonable.
```
You are a creative and intelligent movie review analyst, whose purpose is to aid in sentiment analysis of movie reviews. Determine whether the review below is positive or negative, and explain your answer.
Review: This film is a compelling drama that captivates audiences with its intricate storytelling and powerful performances.
```
10. Use ChatGPT to perform sentiment analysis on a movie review, analyzing the importance of each word and punctuation. Follow the prompt provided below and record your findings, which should include:
- Paste the output from ChatGPT.
- Evaluate whether the results from ChatGPT are reasonable.
```
You are a movie review analyst tasked with sentiment analysis. For each review, provide a list of tuples representing the importance of each word and punctuation, with values ranging from -1 (negative) to 1 (positive). Then, classify the review as positive (1) or negative (-1). The review is within <review> tags.
Example output:
[(<word or punctuation>, <float importance>), ...]
<int classification>
<review> This film is a compelling drama that captivates audiences with its intricate storytelling and powerful performances. </review>
```
**Note that if the results differ from the example output, you may need to try multiple times.**