# Generative Image Model Evaluation

## Agenda

1. Human evaluation
2. Pixel-based metrics
3. Feature-based metrics
   i. Inception Score
   ii. CLIP Score
4. Task-based metrics
5. Model Explanation
   i. CLIP
   ii. Stable Diffusion

## Human Evaluation

1. Ask human judges to rate the images on various criteria, such as **realism, diversity, relevance, and creativity**.
2. Human evaluation can **capture the subjective aspects** of image generation that are hard to quantify with automated metrics. However, human evaluation is also costly, time-consuming, and prone to bias and inconsistency.

## Pixel-based metrics

1. Compare the generated images with real images from the same domain using pixel-based metrics, such as **mean squared error (MSE)**, peak signal-to-noise ratio (PSNR), or structural similarity index (SSIM).
2. These metrics **measure the pixel-wise similarity** or difference between two images, assuming that the closer the pixels are, the better the image quality is.
3. Pixel-based metrics have limitations, such as **being sensitive to image transformations**, **ignoring high-level semantic features, and failing to account for diversity and novelty**.

## Feature-based metrics

### Simple Description

1. **Extract high-level features from the images using pre-trained neural networks**, such as convolutional neural networks (CNNs) or generative adversarial networks (GANs).
2. These features **capture the semantic and perceptual aspects** of the images, such as shapes, textures, colors, and styles.
3. Feature-based metrics, such as the [Inception Score (IS)](https://www.techtarget.com/searchenterpriseai/definition/inception-score-IS), Fréchet Inception Distance (FID), [CLIP Score](https://unimatrixz.com/blog/latent-space-clip-score/), or Perceptual Path Length (PPL), compare the feature distributions of the generated and real images, and assess how well the generative model preserves the diversity and quality of the original domain.

### Inception Score

- The IS has a minimum of 1 (worst) and is bounded above by the number of classes the classifier can recognize (1,000 for the standard Inception v3 model). The Inception Score measures two factors:
  1. **Quality:** **How good each generated image is.** Generated images should be believable or realistic, as if a real person painted a picture or took a photograph. For example, if the AI produces images of cats, each image should include a clearly identifiable cat. If the object is not clearly identifiable as a cat, the corresponding IS will be low.
  2. **Diversity:** **How diverse the set of generated images is.** Generated images should have high randomness (entropy), meaning that the generative AI should produce highly varied images. For example, if the AI produces images of cats, each image should be a different cat breed and perhaps a different cat pose. If the AI produces images of the same cat breed in the same pose, the diversity and the corresponding IS will be low.
- **How does the Inception Score work?** (a short code sketch follows this list)
  1. Calculating an Inception Score starts by using an image classification network to ingest a generated image and **return a probability distribution over classes for that image**.
  2. The image classification network is a **pre-trained Inception v3** model, which predicts class probabilities.
  3. The probability distribution helps determine whether the generated image **contains one well-defined thing**, or a mixture of things that are harder (if not impossible) for the image classification network to identify. This is the foundation of the **quality factor**.
  4. Next, the Inception Score process **compares the probability distributions across all the generated images**. There may be as many as 50,000 generated images in a sample. Averaging them gives the **marginal distribution, which indicates the amount of variety** present in the generative AI's images: ![](https://hackmd.io/_uploads/ryNWHozlT.png)
  5. The score is calculated using the **Kullback-Leibler (KL) divergence** between each image's class distribution and the marginal distribution. **KL divergence is high when each image's conditional distribution is sharply peaked while the marginal distribution is even (flat)** -- each image has a distinct label (such as a cat), but the overall set of images covers many different labels. This yields the highest Inception Score.
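Concretely, the five steps above reduce to a few lines of code. The sketch below is a minimal illustration, assuming `probs` is an `(N, C)` NumPy array of class probabilities already produced by a pre-trained Inception v3 classifier for `N` generated images (the classifier call itself is omitted); practical implementations typically split the sample into several subsets and report the mean and standard deviation of the score.

```python
# Minimal Inception Score sketch. `probs` is a hypothetical (N, C) array of
# class probabilities, one row per generated image, taken from a pre-trained
# Inception v3 classifier.
import numpy as np

def inception_score(probs: np.ndarray, eps: float = 1e-12) -> float:
    p_y_given_x = probs                               # p(y|x): per-image class distribution
    p_y = p_y_given_x.mean(axis=0, keepdims=True)     # p(y): marginal over the whole sample
    # KL(p(y|x) || p(y)) for each image, then averaged over the sample
    kl = np.sum(p_y_given_x * (np.log(p_y_given_x + eps) - np.log(p_y + eps)), axis=1)
    return float(np.exp(kl.mean()))                   # exponentiate the mean KL divergence

# Toy call with random "predictions": sharp, varied rows push the score up,
# while flat or identical rows push it towards 1.
rng = np.random.default_rng(0)
fake_probs = rng.dirichlet(np.full(1000, 0.05), size=5000)
print(inception_score(fake_probs))
```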
### CLIP Score

- The goal of CLIP is to enable models to **understand the relationship between visual and textual data** and to use this understanding to perform various tasks, such as image captioning, visual question answering, and image retrieval.
- The CLIP score is based on the cosine similarity between image and text embeddings, which ranges **between -1 and +1** (common implementations rescale it to a 0-100 range).
- **What is CLIP?**
  1. CLIP (Contrastive Language-Image Pretraining) is an OpenAI model that combines computer vision and natural language understanding; among other things, it can act as a **zero-shot image classifier**.
  2. It is trained on a large collection of images paired with captions to **learn representations for images and text in a joint embedding space.** Images and their captions are close together in this space, while unrelated images and captions are further apart.
  3. CLIP can match images to text, and the resulting text can be compared with a given text. "Converting" an image to text means **extracting an embedding of the image and looking up similar text embeddings in the CLIP model.**

#### Text-guided image generation

- The CLIP score measures the compatibility of image-caption pairs. Higher CLIP scores imply higher compatibility.
- How does the CLIP score work?
  - ![](https://hackmd.io/_uploads/SyRcJ0Vbp.png)
  - We can plot the CLIP score against the number of sampling steps and use the resulting curve to choose generation parameters.

#### Image-conditioned text-to-image generation

- **Condition the generation pipeline** with an input image as well as a text prompt.
- For example, it takes an **edit instruction** as an input prompt and an **input image to be edited**.
- Measure the **consistency of the change** between the two images (in CLIP space) against the change between the two image captions.
- This is referred to as the **CLIP directional similarity** (a code sketch for both scores is given after this list): ![](https://hackmd.io/_uploads/SkxbG04ZT.png)
  - Caption 1 corresponds to the input image (image 1) that is to be edited.
  - Caption 2 corresponds to the edited image (image 2). It should reflect the edit instruction.
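The following is a minimal sketch of both quantities, assuming the `openai/clip-vit-base-patch32` checkpoint from the Hugging Face `transformers` library; the checkpoint choice and helper names are illustrative, not prescribed by the notes above. Libraries such as `torchmetrics` also ship a ready-made `CLIPScore` metric that rescales the cosine similarity to 0-100.

```python
# Sketch of CLIP score and CLIP directional similarity using Hugging Face CLIP.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def embed_image(image: Image.Image) -> torch.Tensor:
    # Encode an image into the joint CLIP space and L2-normalise it.
    pixels = processor(images=image, return_tensors="pt")["pixel_values"]
    with torch.no_grad():
        emb = model.get_image_features(pixel_values=pixels)
    return emb / emb.norm(dim=-1, keepdim=True)

def embed_text(text: str) -> torch.Tensor:
    # Encode a caption into the joint CLIP space and L2-normalise it.
    tokens = processor(text=[text], return_tensors="pt", padding=True)
    with torch.no_grad():
        emb = model.get_text_features(input_ids=tokens["input_ids"],
                                      attention_mask=tokens["attention_mask"])
    return emb / emb.norm(dim=-1, keepdim=True)

def clip_score(image: Image.Image, caption: str) -> float:
    # Cosine similarity between image and caption embeddings (-1 .. +1).
    return float((embed_image(image) * embed_text(caption)).sum(dim=-1))

def clip_directional_similarity(image_1, image_2, caption_1: str, caption_2: str) -> float:
    # Cosine similarity between the image-edit direction and the
    # caption-edit direction in CLIP space.
    image_dir = embed_image(image_2) - embed_image(image_1)
    text_dir = embed_text(caption_2) - embed_text(caption_1)
    return float(torch.nn.functional.cosine_similarity(image_dir, text_dir))
```

For example, `clip_score(generated_image, prompt)` measures text-guided compatibility, while `clip_directional_similarity(original, edited, caption_original, caption_edited)` measures whether the edit moved the image in the direction described by the change in caption.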
## Task-based metrics

1. **Measure how well the generated images can be used for downstream tasks**, such as classification, segmentation, captioning, or retrieval.
2. Task-based metrics can reflect the usefulness and applicability of the generative model for specific purposes and domains.
3. For example, one can use **classification accuracy, segmentation accuracy, captioning BLEU score, or retrieval precision and recall as task-based metrics**.
4. However, task-based metrics depend on the choice and performance of the downstream models, and **may not capture the general aspects** of image generation.

## Model Explanation

### CLIP (Contrastive Language–Image Pre-training)

#### Background

- The model learns the **relationship between a whole sentence and the image** it describes.
- Given an input sentence, it can retrieve the images most related to that sentence.
- It is trained on full sentences **instead of single class labels** such as car, dog, etc.
- With prompt engineering, it can also act as a classifier.
- **CLIP is zero-shot by design** and is therefore not restricted to a fixed number of labels; it does not constrain the model to reduce an image to one concept or label.

#### Key factors in CLIP

- **Large dataset:** CLIP is trained on WebImageText (WIT), a diverse dataset of 400M image-text pairs crawled from the internet. More data is better.
- **Contrastive pre-training:** ![](https://hackmd.io/_uploads/BJO0b4L-6.png)
  - Contrastive learning tries to maximise the similarity of the diagonal pairs (I_1, T_1), (I_2, T_2), …, (I_N, T_N) while minimising the similarity of the off-diagonal pairs.
  - Most of the learning comes from negative image-text pairs: in a batch of 32,768, each image has only one positive caption.
  - In other words, CLIP learns mostly from what an image is *not* about.
- **Core implementation of CLIP:** ![](https://hackmd.io/_uploads/H1xrGE8-p.png)
- **Prediction task (application):** ![](https://hackmd.io/_uploads/SyVFz4U-T.png)

#### Applications

- **Zero-shot image classification:** Use CLIP embeddings out of the box for zero-shot image classification.
- **Fine-tuned image classification:** Add a classification head and fine-tune it for fine-grained, domain-specific image classification.
- **Semantic image retrieval:** Both text-to-image search and reverse image search are possible with the rich CLIP embeddings.
- **Image ranking:** CLIP encodes more than factual representations; it also captures qualitative concepts.
- **Image captioning:** Feature vectors from CLIP have been wired into GPT-2 to output an English description for a given image.

### Framework of Stable Diffusion

![](https://hackmd.io/_uploads/SJtImTVWT.png)

### Framework of Stable Diffusion XL

![](https://hackmd.io/_uploads/B1afg8wfa.png)

### Resources

- [Simple Intro of Image Evaluation](https://www.linkedin.com/advice/1/what-most-effective-ways-evaluate-generative)
- [HuggingFace Image Evaluation](https://huggingface.co/docs/diffusers/conceptual/evaluation)
- [FID Score](https://machinelearningmastery.com/how-to-implement-the-frechet-inception-distance-fid-from-scratch/)
- [CLIP from scratch](https://towardsdatascience.com/simple-implementation-of-openai-clip-model-a-tutorial-ace6ff01d9f2)
- [Understanding OpenAI CLIP & Its Applications](https://akgeni.medium.com/understanding-openai-clip-its-applications-452bd214e226)
- [Stable diffusion Easy Explanation](https://stable-diffusion-art.com/how-stable-diffusion-work/)
- [SD-XL Easy Explanation](https://stable-diffusion-art.com/sdxl-model/)