# Computerized Adaptive Testing (CAT) with Item Response Theory (IRT) Models
## Intro
This note aims to provide a summarized explanation and conceptual framework for computerized adaptive testing (CAT) with Item Response Theory (IRT) models, based on the following papers and articles.
* [Computerized Adaptive Test (CAT) Applications and Item Response Theory Models for Polytomous Items](https://www.researchgate.net/publication/318793431_Computerized_Adaptive_Test_CAT_Applications_and_Item_Response_Theory_Models_for_Polytomous_Items)
* [Item Response Theory](https://www.publichealth.columbia.edu/research/population-health-methods/item-response-theory)
* [Short Explanation of CAT](https://medium.com/@aircto/aircto-screening-chatbot-computerised-adaptive-test-bcf5fad0b288)
* [Parameter Calibration](https://itp.education.uiowa.edu/ia/documents/Validity-of-the-Three-Parameter-Item-Response-Theory-Model-for-Field-Test.pdf)
## Computerized Adaptive Testing (CAT)
### Why use CAT?
Adaptive tests are designed to challenge candidates. High-achieving candidates who take a test designed around the average are not challenged by questions below their individual abilities. Likewise, lower-achieving candidates served questions far above their current abilities are left guessing the answers instead of applying what they already know.
Computerized adaptive testing (CAT) is a form of test that adapts to the examinee's ability level. In other words, it is a computer-administered test in which the next item or set of items to be administered depends on the correctness of the test taker's responses to the most recent items.

### Terminology and key concepts:
* True Score/ True Ability: True score is the score an examinee would receive on a perfectly reliable test.
* Since all tests contain error, true scores are a theoretical concept; in an actual testing program, we will never know an individual’s true score.
* True ability is denoted as θ; the true score for examinee j is denoted $\theta_j$.
### Properties of CAT
* Items are selected according to the ability/trait level of test takers.
* The most appropriate items are found to determine the level of test takers.
* Avoiding items that are too difficult or too easy yields a tailored test form.
* Computer administration makes the test simultaneously accessible to many test takers.
The adaptive test is based on a statistical model defined by Item Response Theory: the candidate's ability and the question's difficulty are measured on the same scale.
### Assumptions of Item Response Theory
Some assumptions must be met in order to obtain valid results with the IRT models developed for both dichotomous and polytomous items. These assumptions are unidimensionality (a single latent trait underlies the responses) and local independence (given that trait, responses to different items are independent).
## The Item Response Theory:
**Item Response Theory (IRT)** is a statistical framework in which examinees are described by a set of one or more ability scores that, through mathematical models, link actual performance on test items, item statistics, and examinee abilities.
* Item parameters are independent of the ability level of the group.
* The individual's ability is independent of the item sample taken.
* The error estimate is made according to the ability/trait level of the individual.
* Item scores are defined on a scale.
* Distinctive tests may be developed for all ability levels.
* The probability that an individual will respond correctly to an item can be estimated.

When unidimensional IRT models are examined, a classification is made between dichotomous and polytomous items. Models developed for dichotomous items vary depending on the item difficulty (b), item discrimination (a), and pseudo-guessing $c$ parameters (DeMars, 2010). That is why there are three typical models in the IRT framework: 1PL (one parameter), 2PL (two parameters), and 3PL (three parameters).
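As a quick illustration, the three dichotomous models can be written as probability functions. This is a minimal Python sketch; the function names are ours, not from any particular library:

```python
import math

def p_1pl(theta, b):
    """1PL (Rasch): probability of a correct response given
    ability theta and item difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def p_2pl(theta, a, b):
    """2PL: adds the item discrimination parameter a."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def p_3pl(theta, a, b, c):
    """3PL: adds the pseudo-guessing parameter c as a lower asymptote."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# When theta equals b, the 1PL/2PL probability is exactly 0.5;
# the 3PL raises it to c + (1 - c) / 2 because of guessing.
print(p_1pl(0.0, 0.0))            # 0.5
print(p_3pl(0.0, 1.0, 0.0, 0.2))  # 0.6
```

Each model nests the previous one: setting c = 0 reduces 3PL to 2PL, and setting a = 1 reduces 2PL to 1PL.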
### Dichotomous IRT
* Dichotomous IRT models are those where there are two possible item scores.
* The most common example of a dichotomous item is multiple choice, which typically has 4 to 5 options, but only two possible scores (correct/incorrect)
* Other item types that can be dichotomous are Scored Short Answer and Multiple Response (all or nothing scoring)
### Polytomous IRT
* Polytomous models are for items that have more than two possible scores.
* The most common examples are Likert-type items (Rate on a scale of 1 to 5) and partial credit items (score on an Essay might be 0 to 5 points)
* In addition, categorical and ordinal responses can also be considered polytomous. For instance, answers to the question _How would you describe your satisfaction with our customer service?_ might be _"worst"_, _"bad"_, _"good"_, _"excellent"_.
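One widely used polytomous model (an assumption here, since the text above does not name a specific model) is the graded response model, in which each category boundary is a 2PL curve and each category probability is the difference between adjacent boundary curves. A sketch with invented parameter values:

```python
import math

def boundary(theta, a, b):
    # P(score >= k): one 2PL curve per category boundary
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def grm_category_probs(theta, a, bs):
    """Graded response model: probability of each ordered category.
    bs are ordered boundary difficulties; len(bs) + 1 categories."""
    cum = [1.0] + [boundary(theta, a, b) for b in bs] + [0.0]
    return [cum[k] - cum[k + 1] for k in range(len(bs) + 1)]

# Four categories ("worst" .. "excellent") need three boundaries.
# The parameter values below are illustrative, not calibrated.
probs = grm_category_probs(theta=0.0, a=1.2, bs=[-1.5, 0.0, 1.5])
print([round(p, 3) for p in probs])
print(sum(probs))  # category probabilities sum to 1
```

At θ = 0 the middle categories are the most likely, as expected for a respondent of average trait level.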
### Item parameters:
1. **Item Discrimination Parameter**(a)
* This parameter shows how well an item (question) discriminates between individuals with similar ability levels.
2. **Item Difficulty Parameter**(b)
* b represents an item’s difficulty parameter. This parameter is measured on the same scale as θ.
* An item tends to be answered correctly by individuals whose θ is above the item's difficulty level and incorrectly by the others.
* Since b and θ are measured on the same scale, b follows the same distribution as θ.
* For a CAT, it is good for an item bank to have as many items as possible at all difficulty levels, so that the CAT can select the best item for each individual at every ability level.
3. **Pseudo-Guessing Parameter** $c$
* c represents an item's pseudo-guessing parameter: the probability that individuals with low proficiency answer the item correctly. Since c is a probability, 0 ≤ c < 1; the lower its value, the better the item is considered.
* Let's assume you are given a question with four options and you do not know the correct answer. If you select an answer randomly, there is a 0.25 chance of success. This is the guessing parameter.
* Sometimes, when you partially know the answer, one option may feel more probable than the others. In that case the effective guessing probability is higher than pure chance.
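The role of c as a lower asymptote can be checked numerically: as θ decreases, the 3PL probability approaches c rather than 0. A small sketch with illustrative parameter values:

```python
import math

def p_3pl(theta, a=1.0, b=0.0, c=0.25):
    # 3PL model: c is the lower asymptote (the "guessing floor")
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# Even a very low-ability examinee keeps roughly a 25% chance on a
# four-option multiple-choice item; a high-ability one approaches 1.
print(p_3pl(-6.0))  # close to c = 0.25
print(p_3pl(6.0))   # close to 1.0
```

This is why a low c is desirable: the smaller the guessing floor, the more a correct response actually tells us about ability.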
### Item Calibration
#### Initial item parameter calibration
* These parameters are calculated from prior administrations of the items to a sample population, known as a pilot test; the parameters are estimated from the responses collected in this pilot test.
* Another way to obtain these parameters is to have subject-matter experts rate the items manually.
#### Adding new items
* When adding new items to the item pool, test designers should conduct a field test to examine whether the new items are good enough to enter the pool for the operational test (the real test). The new item parameters are calculated from the responses of test takers drawn from the population (a minimum of 27% of the population is recommended).
* Online calibration is also a method worth considering. In this method, new items are randomly administered within the operational test, and their parameters are updated after each test.
#### Learning IRT parameters using computer algorithms
To learn the item parameters of an IRT model, one has to conduct a field test on a large number of examinees and collect their responses.
* If the examinees' ability parameters are known, the item parameters can be derived using Maximum Likelihood Estimation (MLE).
* If the ability parameters are unknown, item and ability parameters can be estimated simultaneously with the iterative Expectation-Maximization (EM) algorithm.
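A toy illustration of the first case: if abilities are known, the difficulty b of a 1PL item can be recovered by maximizing the log-likelihood, here with a simple grid search over simulated responses. All names, data, and parameter values below are invented for illustration:

```python
import math
import random

def p_correct(theta, b):
    # 1PL (Rasch) probability of a correct response
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def log_likelihood(b, thetas, responses):
    # Sum of Bernoulli log-likelihoods over all examinees
    ll = 0.0
    for theta, u in zip(thetas, responses):
        p = p_correct(theta, b)
        ll += math.log(p) if u == 1 else math.log(1.0 - p)
    return ll

random.seed(0)
true_b = 0.8
thetas = [random.gauss(0.0, 1.0) for _ in range(2000)]  # known abilities
responses = [1 if random.random() < p_correct(t, true_b) else 0
             for t in thetas]

# Grid-search MLE for the difficulty parameter b
grid = [i / 50.0 for i in range(-150, 151)]  # -3.0 .. 3.0, step 0.02
b_hat = max(grid, key=lambda b: log_likelihood(b, thetas, responses))
print(b_hat)  # should land near true_b = 0.8
```

In practice the maximization is done with gradient-based methods rather than a grid, and for 2PL/3PL models all parameters are estimated jointly; the grid search just makes the idea visible.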
### The process of CAT

θ is the examinee's true ability; $\hat{\theta}$ is the estimated ability.
The 3PL IRT model states that the probability of a correct response to item $i$ is a function of the three item parameters and examinee $j$'s true ability $\theta_j$.
Under IRT, the probability that an examinee with a given estimate $\hat{\theta}$ answers item $i$ correctly, given the item parameters, is:

$$P_i(\hat{\theta}) = c_i + (1 - c_i)\,\frac{1}{1 + e^{-a_i(\hat{\theta} - b_i)}}$$
The CAT (Computerized Adaptive Test) algorithm is usually an iterative process with the following steps:
* **Ability Initialisation** (first iteration only): The initial estimate can be based on prior information about the examinee, chosen randomly, or set to a fixed point in the interval (-4, 4).
* **First-item Administration:** The first item selected may affect the psychological state of the test taker. For this reason, the first item should be neither far below nor far above the individual's trait level. If it is far below, the examinee may not take the test seriously; if it is far above, the examinee's anxiety may increase.
* **Ability Estimation:** A new ability estimate is computed based on the responses to all of the administered items. There are two main ways of estimating $\hat{\theta}$: maximum-likelihood methods and Bayesian methods:
* **Maximum-likelihood methods** choose the θ^ value that maximizes the log likelihood of an examinee having a certain response vector, given the corresponding item parameters.
* **Bayesian methods** use a priori information (usually assumed proficiency and parameter distributions) to make new estimations. The new estimations are then used to update the assumed parameter distributions, refining future estimations.
* **Item Selection:** The "best" next item is administered and the examinee responds. One way to find the "best" item is an implementation of the random sequence selector, in which, at every step of the test, an item is randomly chosen from the n most informative items in the item bank, n being a predefined value. These most informative items are identified with the information functions below.
* With IRT, item information can be quantified from the slope of $P_i(\theta)$ at $\hat{\theta}$. For dichotomous items the item information function is:

$$I_i(\theta) = \frac{\left[P_i'(\theta)\right]^2}{P_i(\theta)\left[1 - P_i(\theta)\right]}$$

* or, for polytomous items, using Fisher's information function:

$$I_i(\theta) = \sum_{k=1}^{m} \frac{\left[P_{ik}'(\theta)\right]^2}{P_{ik}(\theta)}$$

**$P_{ik}(\theta)$**: the probability that an individual at ability level θ selects category k of item i.
**m**: the number of categories in the item.
**k**: the category index.
* The selection, administration, and estimation steps are repeated until a **stopping criterion** is met.
* **Stopping rules:** The stopping criterion could be time, the number of items administered, the change in the ability estimate, content coverage, or a precision indicator such as the standard error.
* In case the stopping rule is a precision indicator such as the standard error of the maximum-likelihood estimate, the test designer should choose the threshold carefully: if it is too low, the test may have to use all the items before reaching it. A standard error of 0.5 is recommended, but it can be changed to suit the design of the test.
The standard errors of the maximum-likelihood estimates are obtained by taking the square root of the negative inverse of the second derivative of the log-likelihood function:

$$SE(\hat{\theta}) = \sqrt{\left(-\frac{\partial^2 \ln L(\theta)}{\partial \theta^2}\right)^{-1}}$$
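The loop above can be sketched end to end. This toy CAT (the item bank, parameter values, and thresholds are invented for illustration) uses a 2PL bank, maximum-information item selection, grid-search maximum-likelihood estimation of the ability, and a standard-error stopping rule:

```python
import math
import random

def p_2pl(theta, a, b):
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_info(theta, a, b):
    # Fisher information of a 2PL item: a^2 * P * (1 - P)
    p = p_2pl(theta, a, b)
    return a * a * p * (1.0 - p)

def estimate_theta(administered, responses, grid):
    # Grid-search maximum-likelihood estimate from responses so far
    def ll(theta):
        total = 0.0
        for (a, b), u in zip(administered, responses):
            p = p_2pl(theta, a, b)
            total += math.log(p) if u else math.log(1.0 - p)
        return total
    return max(grid, key=ll)

random.seed(1)
# Simulated 200-item bank: (discrimination a, difficulty b) pairs
bank = [(random.uniform(0.8, 2.0), random.uniform(-3.0, 3.0))
        for _ in range(200)]
true_theta = 1.0  # the examinee we simulate
grid = [i / 50.0 for i in range(-200, 201)]  # -4.0 .. 4.0

theta_hat = 0.0  # ability initialisation (fixed point)
administered, responses = [], []
available = list(bank)
while True:
    # Item selection: maximum information at the current estimate
    item = max(available, key=lambda it: item_info(theta_hat, *it))
    available.remove(item)
    # Administration: simulate the examinee's response
    u = 1 if random.random() < p_2pl(true_theta, *item) else 0
    administered.append(item)
    responses.append(u)
    # Ability estimation
    theta_hat = estimate_theta(administered, responses, grid)
    # Stopping rule: standard error below 0.3, or 30 items used
    se = 1.0 / math.sqrt(sum(item_info(theta_hat, *it)
                             for it in administered))
    if se < 0.3 or len(administered) >= 30:
        break

print(len(administered), theta_hat, se)
```

A production CAT would add content balancing, exposure control, and a less naive first-item rule, but the select/administer/estimate/stop cycle is exactly the one described above.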

## IRT Programs and Packages
* [List of R packages implementing IRT models](https://cran.r-project.org/web/views/Psychometrics.html) _(open source)_
* [BILOG-MG](https://ssicentral.com/index.php/products/bilogmg-gen/) _(commercial)_
* [PARSCALE](https://ssicentral.com/index.php/products/psl-general/) _(commercial)_