Research interests
I am a researcher in computational neuroscience and psychiatry, focusing on developing computational models that replicate and explain biases in human behavior during problem-solving tasks. My goal is to understand the mechanisms underlying the systematic errors people make during their academic education, and to find ways to reduce how often they occur. For example, I design computational models that identify deficits in numerical representation and capture the effect of behavioral training on it, and I validate these models against human behavior and neural recordings. My long-term vision is to collaborate closely with experts in experimental neuroscience and educational science to design personalized behavioral and neural therapies that improve academic learning.
Short biography
I am a former student in Mathematics (BSc) and Computer Science (BSc + MSc), specializing in Machine Learning (MSc).
I started my journey in computational modeling during my PhD in the Mnemosyne team, under the mentorship of Nicolas Rougier and Xavier Hinaut, where I built models of working memory using recurrent neural networks (RNNs).
I then continued my journey in computational modeling in the Stanford Cognitive & Systems Neuroscience Laboratory (SCSNL), where I am currently a postdoctoral scholar under the mentorship of Vinod Menon, building computational models of mathematical cognition.
For more details, I invite you to see my Résumé.
Selected publications
- Strock, A., Mistry, P. K., & Menon, V. (2024). Digital twins for understanding mechanisms of learning disabilities: Personalized deep neural networks reveal impact of neuronal hyperexcitability. bioRxiv.
Learning disabilities affect a significant proportion of children worldwide, with far-reaching consequences for their academic, professional, and personal lives. Here we develop digital twins – biologically plausible personalized Deep Neural Networks (pDNNs) – to investigate the neurophysiological mechanisms underlying learning disabilities in children. Our pDNN reproduces behavioral and neural activity patterns observed in affected children, including lower performance accuracy, slower learning rates, neural hyper-excitability, and reduced neural differentiation of numerical problems. Crucially, pDNN models reveal aberrancies in the geometry of manifold structure, providing a comprehensive view of how neural excitability influences both learning performance and the internal structure of neural representations. Our findings not only advance knowledge of the neurophysiological underpinnings of learning differences but also open avenues for targeted, personalized strategies designed to bridge cognitive gaps in affected children. This work reveals the power of digital twins integrating AI and neuroscience to uncover mechanisms underlying neurodevelopmental disorders.
- Mistry, P. K., Strock, A., Liu, R., Young, G., & Menon, V. (2023). Learning-induced reorganization of number neurons and emergence of numerical representations in a biologically inspired neural network. Nature Communications, 14(1). https://doi.org/10.1038/s41467-023-39548-5
Number sense, the ability to decipher quantity, forms the foundation for mathematical cognition. How number sense emerges with learning is, however, not known. Here we use a biologically-inspired neural architecture comprising cortical layers V1, V2, V3, and intraparietal sulcus (IPS) to investigate how neural representations change with numerosity training. Learning dramatically reorganized neuronal tuning properties at both the single unit and population levels, resulting in the emergence of sharply-tuned representations of numerosity in the IPS layer. Ablation analysis revealed that spontaneous number neurons observed prior to learning were not critical to formation of number representations post-learning. Crucially, multidimensional scaling of population responses revealed the emergence of absolute and relative magnitude representations of quantity, including mid-point anchoring. These learnt representations may underlie changes from logarithmic to cyclic and linear mental number lines that are characteristic of number sense development in humans. Our findings elucidate mechanisms by which learning builds novel representations supporting number sense.
- Strock, A., Hinaut, X., & Rougier, N. P. (2020). A Robust Model of Gated Working Memory. Neural Computation, 32(1), 153–181. https://doi.org/10.1162/neco_a_01249
Gated working memory is defined as the capacity of holding arbitrary information at any time in order to be used at a later time. Based on electrophysiological recordings, several computational models have tackled the problem using dedicated and explicit mechanisms. We propose instead to consider an implicit mechanism based on a random recurrent neural network. We introduce a robust yet simple reservoir model of gated working memory with instantaneous updates. The model is able to store an arbitrary real value at a random time over an extended period of time. The dynamics of the model is a line attractor that learns to exploit reentry and a nonlinearity during the training phase using only a few representative values. A deeper study of the model shows that there is actually a large range of hyperparameters for which the results hold (e.g., number of neurons, sparsity, global weight scaling), such that any large enough population, mixing excitatory and inhibitory neurons, can quickly learn to realize such gated working memory. In a nutshell, with a minimal set of hypotheses, we show that we can have a robust model of working memory. This suggests that such gating could be an implicit property of any random population and can be acquired through learning. Furthermore, considering working memory to be a physically open but functionally closed system, we account for some counterintuitive electrophysiological recordings.
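For readers curious how a reservoir model of gated working memory can be set up in practice, here is a minimal sketch in Python/NumPy in the spirit of the Neural Computation paper above. It is not the published model: the reservoir size, input scaling, gate statistics, and ridge regularization are arbitrary placeholder choices, and the readout is trained with simple ridge regression under teacher forcing.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 500  # reservoir size (placeholder; the paper reports robustness over a wide range)

# Random recurrent weights rescaled to spectral radius ~1, a regime in which
# a line attractor can be learned.
W = rng.standard_normal((N, N)) / np.sqrt(N)
W *= 1.0 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-1, 1, (N, 2))   # two input channels: value and gate
W_fb = rng.uniform(-1, 1, (N, 1))   # feedback of the output (reentry)

def simulate(values, gates, W_out=None):
    """Drive the reservoir with a (value, gate) stream.
    With W_out=None the ideal memory is fed back (teacher forcing);
    otherwise the model's own output is fed back (closed loop)."""
    x = np.zeros(N)
    out = memory = 0.0
    states, targets = [], []
    for v, g in zip(values, gates):
        fb = memory if W_out is None else out
        x = np.tanh(W @ x + W_in @ np.array([v, g]) + W_fb @ np.array([fb]))
        if g > 0.5:
            memory = v            # value the model should now hold
        if W_out is not None:
            out = float(W_out @ x)
        states.append(x.copy())
        targets.append(memory)
    return np.array(states), np.array(targets)

# Training stream: random values with sparse random gate pulses.
T = 5000
values = rng.uniform(-1, 1, T)
gates = (rng.random(T) < 0.02).astype(float)
X, y = simulate(values, gates)

# Ridge regression for the linear readout.
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ y)

# Closed-loop test on a new stream.
Xt, yt = simulate(rng.uniform(-1, 1, 1000), (rng.random(1000) < 0.02).astype(float), W_out)
print("closed-loop RMSE:", np.sqrt(np.mean((Xt @ W_out - yt) ** 2)))
```

The key design choice, as in the paper, is that gating is not implemented by any dedicated mechanism: the random reservoir plus output feedback simply learns to hold the last gated value.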
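The numerosity-training study above (Mistry et al., 2023) can likewise be illustrated with a toy feedforward hierarchy. The sketch below, in Python/PyTorch, only borrows the layer naming (V1, V2, V3, IPS) from the paper; the stimuli (single-pixel "dots"), layer widths, and supervised training procedure are simplified assumptions, not the published architecture. After training, the mean response of each "IPS" unit to each numerosity gives a crude tuning curve of the kind analyzed in the paper.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

def dot_image(n, size=30):
    """Hypothetical stand-in stimulus: a size*size binary image with n 'dots' (single pixels)."""
    img = torch.zeros(size * size)
    img[torch.randperm(size * size)[:n]] = 1.0
    return img

class NumerosityNet(nn.Module):
    """Feedforward hierarchy loosely mirroring the V1 -> V2 -> V3 -> IPS naming of the paper;
    layer widths here are arbitrary placeholders."""
    def __init__(self):
        super().__init__()
        self.v1 = nn.Linear(900, 256)
        self.v2 = nn.Linear(256, 128)
        self.v3 = nn.Linear(128, 64)
        self.ips = nn.Linear(64, 32)
        self.readout = nn.Linear(32, 9)   # classify numerosities 1..9

    def forward(self, x, return_ips=False):
        h = torch.relu(self.v1(x))
        h = torch.relu(self.v2(h))
        h = torch.relu(self.v3(h))
        ips = torch.relu(self.ips(h))
        return ips if return_ips else self.readout(ips)

net = NumerosityNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Supervised numerosity classification as a stand-in for the numerosity training
# described in the abstract.
for step in range(2000):
    ns = torch.randint(1, 10, (64,))
    x = torch.stack([dot_image(int(n)) for n in ns])
    loss = loss_fn(net(x), ns - 1)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Crude tuning curves: mean "IPS" unit response to each numerosity after learning.
with torch.no_grad():
    tuning = torch.stack([
        net(torch.stack([dot_image(n) for _ in range(200)]), return_ips=True).mean(0)
        for n in range(1, 10)
    ])
print(tuning.shape)   # (9 numerosities, 32 IPS units)
```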
See my Résumé for the complete list of publications.
News
2024
[2024-05-23] I’m giving an invited talk at the Neuroscience & Artificial Intelligence meeting of the French Neuroscience Society!
[2024-04-29] Our preprint “Digital twins for understanding mechanisms of learning disabilities: Personalized deep neural networks reveal impact of neuronal hyperexcitability” is out on bioRxiv!
2023
[2023-06-29] Our paper “Learning-induced reorganization of number neurons and emergence of numerical representations in a biologically inspired neural network” is out in Nature Communications!