[//]: # "Currently, my enthusiasm for psychology and cognition has reached new heights, and I strongly advocate for society to invest greater efforts in comprehending individuals and societies. I firmly believe that enhanced understanding of people holds the key to fostering cooperation, both within local communities and on a global scale. In my pursuit of unraveling the complexities of the human mind, I have delved into literature concerning neurodivergent and non-neurotypical individuals. This article was the product of those thoughts: https://peakd.com/@dexterdev/an-autism-awareness-post-aka-rant-"
[//]: # "The second perspective I can think about topics like cognition is from Neural Network persepctive. Because it is my expertise. And then I saw this paper:"
# [Turning large language models into cognitive models](https://arxiv.org/pdf/2306.03917.pdf)
Large language models possess remarkable capabilities across various tasks, including translation and mathematical reasoning. However, they often exhibit characteristics that diverge from human-like behavior. In this recent arXiv paper, the authors address this disparity and explore the potential of converting large language models (LLMs) into cognitive models by fine-tuning them on data from psychological experiments. They found that these models accurately capture human behavior and even outperform traditional cognitive models in two decision-making domains. They also demonstrated that the models' representations contain the information needed to model behavior at the level of individual subjects. Importantly, after fine-tuning on multiple tasks, these large language models can predict human behavior in previously unseen tasks. Collectively, these findings indicate that adaptable, pre-trained models can serve as comprehensive cognitive models, opening novel research avenues that have the potential to reshape cognitive psychology and the behavioral sciences as a whole.
The researchers fed text-based descriptions of psychological experiments to a large language model and extracted the corresponding embeddings. They then fine-tuned a linear layer on top of these embeddings to predict human choices. The resulting model is named CENTaUR.
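
To make that pipeline concrete, here is a minimal sketch in Python of the idea: embed trial descriptions with a frozen LLM and fit a linear read-out on those embeddings to predict choices. The model name (`gpt2`), the toy trial texts, and the choice labels are placeholders for illustration only, not the authors' exact setup, which uses a much larger model and real experimental data.

```python
# Sketch of a CENTaUR-style pipeline (illustrative, not the paper's code):
# (1) embed text descriptions of trials with a frozen LLM,
# (2) fit a linear read-out on those embeddings to predict human choices.

import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.linear_model import LogisticRegression

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # placeholder stand-in for the paper's LLM
model = AutoModel.from_pretrained("gpt2")
model.eval()

def embed(text: str) -> torch.Tensor:
    """Return the final-token hidden state as a fixed-size embedding."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # shape: (1, seq_len, dim)
    return hidden[0, -1]                            # embedding of the last token

# Toy trial descriptions (two-armed bandit style) with hypothetical human choices.
trials = [
    "Machine A paid 10 points, machine B paid 2 points. You choose machine",
    "Machine A paid 1 point, machine B paid 9 points. You choose machine",
]
choices = [0, 1]  # 0 = chose A, 1 = chose B (placeholder labels)

X = torch.stack([embed(t) for t in trials]).numpy()
readout = LogisticRegression(max_iter=1000).fit(X, choices)  # linear layer on frozen embeddings
print(readout.predict(X))
```

The key design point the paper relies on is that the LLM stays frozen: only the lightweight linear read-out is trained, so the behavioral information must already be present in the embeddings.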

*Large Language Model → Cognitive model*
_Figures used from [here](https://research.aimultiple.com/large-language-model-training/#easy-footnote-bottom-2-63038) and [here](https://www.psy.uni-hamburg.de/en/arbeitsbereiche/allgemeine-psychologie/cognitive-modeling-academy-hamburg.html)_
References:
[1] Paper: https://arxiv.org/pdf/2306.03917.pdf
[2] CENTaUR GitHub repository (documentation appears to be updated soon): https://github.com/marcelbinz/CENTaUR
#cognition #psychology #llm #CENTaUR #chatGPT #GPT #neuroscience #AI #DNN #ML