# Research interests

I am a researcher in computational neuroscience and computational psychiatry. My research focuses on developing computational models that replicate characteristic errors in human behavior. My long-term goal is to better understand the origin of the systematic mistakes people make throughout their education, in order to ease their academic learning.

# Short biography

I studied Maths (BSc) and Computer Science (BSc + MSc), specializing in Machine Learning (MSc). I started my journey in computational modeling during my PhD in the Mnemosyne team, under the mentorship of Nicolas Rougier and Xavier Hinaut, where I built models of working memory using Recurrent Neural Networks (RNNs). I then continued this journey in the Stanford Cognitive & Systems Neuroscience Laboratory (SCSNL), where I am currently a postdoctoral scholar under the mentorship of Vinod Menon, building computational models of mathematical cognition. For more details, please see my Curriculum Vitae.

# Selected publications

- Mistry, P. K., Strock, A., Liu, R., Young, G., & Menon, V. (2023). Learning-induced reorganization of number neurons and emergence of numerical representations in a biologically inspired neural network. *Nature Communications*, *14*(1). https://doi.org/10.1038/s41467-023-39548-5

  Number sense, the ability to decipher quantity, forms the foundation for mathematical cognition. How number sense emerges with learning is, however, not known. Here we use a biologically-inspired neural architecture comprising cortical layers V1, V2, V3, and intraparietal sulcus (IPS) to investigate how neural representations change with numerosity training. Learning dramatically reorganized neuronal tuning properties at both the single unit and population levels, resulting in the emergence of sharply-tuned representations of numerosity in the IPS layer. Ablation analysis revealed that spontaneous number neurons observed prior to learning were not critical to formation of number representations post-learning. Crucially, multidimensional scaling of population responses revealed the emergence of absolute and relative magnitude representations of quantity, including mid-point anchoring. These learnt representations may underlie changes from logarithmic to cyclic and linear mental number lines that are characteristic of number sense development in humans. Our findings elucidate mechanisms by which learning builds novel representations supporting number sense.
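  The core idea, numerosity tuning emerging in hidden units through training, can be illustrated with a toy sketch. This is not the paper's model (which has V1/V2/V3/IPS layers trained on realistic dot displays); it is a minimal NumPy network trained to classify the number of dots in random binary images, with all sizes, learning rates, and epoch counts chosen for illustration only.

  ```python
  import numpy as np

  rng = np.random.default_rng(1)

  def dot_image(n_dots, size=12):
      """Binary image (flattened) with n_dots dots at random positions."""
      img = np.zeros(size * size)
      img[rng.choice(size * size, n_dots, replace=False)] = 1.0
      return img

  # dataset: numerosities 1..5, 200 samples each
  X = np.array([dot_image(n) for n in range(1, 6) for _ in range(200)])
  y = np.repeat(np.arange(5), 200)

  # one hidden layer stands in for the paper's multi-layer hierarchy
  n_in, n_hid, n_out = X.shape[1], 64, 5
  W1 = rng.normal(0, 0.1, (n_in, n_hid)); b1 = np.zeros(n_hid)
  W2 = rng.normal(0, 0.1, (n_hid, n_out)); b2 = np.zeros(n_out)

  def forward(X):
      h = np.maximum(0, X @ W1 + b1)               # ReLU hidden layer
      z = h @ W2 + b2
      p = np.exp(z - z.max(1, keepdims=True))      # stable softmax
      return h, p / p.sum(1, keepdims=True)

  # full-batch gradient descent on the cross-entropy loss
  lr = 0.2
  for epoch in range(500):
      h, p = forward(X)
      grad_z = p.copy(); grad_z[np.arange(len(y)), y] -= 1
      grad_z /= len(y)
      grad_h = (grad_z @ W2.T) * (h > 0)
      W2 -= lr * h.T @ grad_z; b2 -= lr * grad_z.sum(0)
      W1 -= lr * X.T @ grad_h; b1 -= lr * grad_h.sum(0)

  # after training, inspect classification accuracy and per-unit tuning:
  # tuning[k, j] is unit j's mean activation at numerosity k+1
  h, p = forward(X)
  acc = (p.argmax(1) == y).mean()
  tuning = np.array([h[y == k].mean(0) for k in range(5)])
  ```

  Plotting rows of `tuning` against numerosity is a rough analogue of the single-unit tuning-curve analyses in the paper: after training, many hidden units respond preferentially to a particular number of dots.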

- Strock, A., Hinaut, X., & Rougier, N. P. (2020). A Robust Model of Gated Working Memory. *Neural Computation*, *32*(1), 153–181. https://doi.org/10.1162/neco_a_01249

  Gated working memory is defined as the capacity of holding arbitrary information at any time in order to be used at a later time. Based on electrophysiological recordings, several computational models have tackled the problem using dedicated and explicit mechanisms. We propose instead to consider an implicit mechanism based on a random recurrent neural network. We introduce a robust yet simple reservoir model of gated working memory with instantaneous updates. The model is able to store an arbitrary real value at random time over an extended period of time. The dynamics of the model is a line attractor that learns to exploit reentry and a nonlinearity during the training phase using only a few representative values. A deeper study of the model shows that there is actually a large range of hyperparameters for which the results hold (e.g., number of neurons, sparsity, global weight scaling) such that any large enough population, mixing excitatory and inhibitory neurons, can quickly learn to realize such gated working memory. In a nutshell, with a minimal set of hypotheses, we show that we can have a robust model of working memory. This suggests this property could be an implicit property of any random population, that can be acquired through learning. Furthermore, considering working memory to be a physically open but functionally closed system, we give account on some counterintuitive electrophysiological recordings.
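  The gated working memory task above can be sketched as a small echo-state network in NumPy: a random recurrent reservoir receives a value stream and a gate signal, and a linear readout (trained by ridge regression with teacher forcing, its output fed back into the reservoir) learns to hold the value seen at the most recent gate tick. The reservoir size, sparsity, gate probability, and ridge penalty below are illustrative choices, not the paper's hyperparameters.

  ```python
  import numpy as np

  rng = np.random.default_rng(0)
  n = 300  # reservoir size (illustrative)

  # sparse random reservoir, input weights for [value, gate], feedback weights
  W = rng.normal(0, 1, (n, n)) * (rng.random((n, n)) < 0.1)
  W *= 1.0 / max(abs(np.linalg.eigvals(W)))  # rescale spectral radius to ~1
  W_in = rng.uniform(-1, 1, (n, 2))
  W_fb = rng.uniform(-1, 1, n)

  def run(inputs, targets=None, W_out=None):
      """Run the reservoir; teacher-force targets if given, else feed back output."""
      x = np.zeros(n); out = 0.0
      states, outs = [], []
      for t, u in enumerate(inputs):
          fb = targets[t - 1] if targets is not None and t > 0 else out
          x = np.tanh(W @ x + W_in @ u + W_fb * fb)
          states.append(x.copy())
          if W_out is not None:
              out = float(W_out @ x)
          outs.append(out)
      return np.array(states), np.array(outs)

  def make_trial(T=200):
      """Task: output the value observed at the most recent gate=1 step."""
      u = rng.uniform(-1, 1, (T, 1))
      g = (rng.random((T, 1)) < 0.05).astype(float); g[0] = 1.0
      target, held = np.zeros(T), 0.0
      for t in range(T):
          if g[t, 0] == 1.0:
              held = u[t, 0]
          target[t] = held
      return np.hstack([u, g]), target

  # collect teacher-forced reservoir states and fit the readout (ridge regression)
  X, Y = [], []
  for _ in range(20):
      inp, tgt = make_trial()
      s, _ = run(inp, targets=tgt)
      X.append(s); Y.append(tgt)
  X, Y = np.vstack(X), np.concatenate(Y)
  W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n), X.T @ Y)

  # test in closed loop: the readout's own output is now the feedback
  inp, tgt = make_trial()
  _, pred = run(inp, W_out=W_out)
  mse = np.mean((pred - tgt) ** 2)  # closed-loop error (small when memory holds)
  ```

  In the closed-loop test the output-feedback pathway is what sustains the stored value between gate events, a rough analogue of the line-attractor dynamics the paper analyzes.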