A PDF version of my full Curriculum Vitae.

Education

2021-now   Stanford University   Postdoc in Computational Psychiatry, Stanford, California, USA

2017-2020   Inria   PhD in Computational Neuroscience, Bordeaux, France

2015-2017   ENS Rennes   Research master’s degree in Computer Science, Rennes, France

2015-2016   ENS Rennes   Bachelor’s degree in Mathematics, Rennes, France

2014-2015   ENS Rennes   Bachelor’s degree in Computer Science, Rennes, France

Publications

Journals

  1. Mistry, P. K., Strock, A., Liu, R., Young, G., & Menon, V. (2023). Learning-induced reorganization of number neurons and emergence of numerical representations in a biologically inspired neural network. Nature Communications, 14(1). https://doi.org/10.1038/s41467-023-39548-5

    Number sense, the ability to decipher quantity, forms the foundation for mathematical cognition. How number sense emerges with learning is, however, not known. Here we use a biologically-inspired neural architecture comprising cortical layers V1, V2, V3, and intraparietal sulcus (IPS) to investigate how neural representations change with numerosity training. Learning dramatically reorganized neuronal tuning properties at both the single unit and population levels, resulting in the emergence of sharply-tuned representations of numerosity in the IPS layer. Ablation analysis revealed that spontaneous number neurons observed prior to learning were not critical to formation of number representations post-learning. Crucially, multidimensional scaling of population responses revealed the emergence of absolute and relative magnitude representations of quantity, including mid-point anchoring. These learnt representations may underlie changes from logarithmic to cyclic and linear mental number lines that are characteristic of number sense development in humans. Our findings elucidate mechanisms by which learning builds novel representations supporting number sense.

  2. Strock, A., Rougier, N. P., & Hinaut, X. (2022). Latent Space Exploration and Functionalization of a Gated Working Memory Model Using Conceptors. Cognitive Computation. https://doi.org/10.1007/s12559-020-09797-3

    Working memory is the ability to maintain and manipulate information. We introduce a method based on conceptors that allows us to manipulate information stored in the dynamics (latent space) of a gated working memory model. This latter model is based on a reservoir: a random recurrent network with trainable readouts. It is trained to hold a value in memory given an input stream when a gate signal is on and to maintain this information when the gate is off. The memorized information results in complex dynamics inside the reservoir that can be faithfully captured by a conceptor. Such conceptors allow us to explicitly manipulate this information in order to perform various, but not arbitrary, operations. In this work, we show (1) how working memory can be stabilized or discretized using such conceptors, (2) how such conceptors can be linearly combined to form new memories, and (3) how these conceptors can be extended to a functional role. These preliminary results suggest that conceptors can be used to manipulate the latent space of the working memory even though several results we introduce are not as intuitive as one would expect.

  3. Boraud, T., & Strock, A. (2021). [Re] A Neurodynamical Model for Working Memory. ReScience C, 7(1). https://doi.org/10.5281/zenodo.4655870

    Neurodynamical models of working memory (WM) should provide mechanisms for storing, maintaining, retrieving, and deleting information. Razvan Pascanu and Herbert Jaeger have suggested that, in terms of nonlinear dynamics, memory states intuitively correspond to attractors in an input-driven system, and proposed a simple WM model obtained by training all performance modes into a Recurrent Neural Network (RNN) of the Echo State Network (ESN) type. In this replication, we reproduced this WM model in Python in order to see whether or not the same observations could be made on a slightly different model built from scratch.

  4. Strock, A., Hinaut, X., & Rougier, N. P. (2020). A Robust Model of Gated Working Memory. Neural Computation, 32(1), 153–181. https://doi.org/10.1162/neco_a_01249

    Gated working memory is defined as the capacity of holding arbitrary information at any time in order to be used at a later time. Based on electrophysiological recordings, several computational models have tackled the problem using dedicated and explicit mechanisms. We propose instead to consider an implicit mechanism based on a random recurrent neural network. We introduce a robust yet simple reservoir model of gated working memory with instantaneous updates. The model is able to store an arbitrary real value at random times over an extended period of time. The dynamics of the model is a line attractor that learns to exploit reentry and a nonlinearity during the training phase using only a few representative values. A deeper study of the model shows that there is actually a large range of hyperparameters for which the results hold (e.g., number of neurons, sparsity, global weight scaling), such that any large enough population mixing excitatory and inhibitory neurons can quickly learn to realize such gated working memory. In a nutshell, with a minimal set of hypotheses, we show that we can have a robust model of working memory. This suggests that gating could be an implicit property of any random population, one that can be acquired through learning. Furthermore, considering working memory to be a physically open but functionally closed system, we account for some counterintuitive electrophysiological recordings.
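
The reservoir model summarized above (and its conceptor-based extension in item 2) lends itself to a compact illustration. The following is a minimal sketch in Python/NumPy of a reservoir trained for gated working memory; it follows the generic reservoir computing recipe rather than reproducing the published code, and the reservoir size, weight scalings, gate probability, and ridge parameter are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_res = 300                                 # reservoir size (illustrative)
W    = rng.normal(0.0, 1.0, (n_res, n_res)) * 0.9 / np.sqrt(n_res)  # recurrent weights
W_in = rng.uniform(-1.0, 1.0, (n_res, 3))   # inputs: value, gate, bias
W_fb = rng.uniform(-1.0, 1.0, n_res)        # feedback from the readout

def gated_target(values, gates):
    """Desired output: the value seen at the most recent gate opening."""
    mem, out = 0.0, []
    for v, g in zip(values, gates):
        mem = v if g > 0.5 else mem
        out.append(mem)
    return np.array(out)

def collect_states(values, gates, teacher):
    """Drive the reservoir with teacher-forced feedback and record its states."""
    x, fb, states = np.zeros(n_res), 0.0, []
    for v, g, m in zip(values, gates, teacher):
        x = np.tanh(W @ x + W_in @ np.array([v, g, 1.0]) + W_fb * fb)
        states.append(x)
        fb = m                              # next step is fed the desired output
    return np.array(states)

def run(values, gates, W_out):
    """Closed-loop run: feedback now comes from the trained readout."""
    x, y, ys = np.zeros(n_res), 0.0, []
    for v, g in zip(values, gates):
        x = np.tanh(W @ x + W_in @ np.array([v, g, 1.0]) + W_fb * y)
        y = x @ W_out
        ys.append(y)
    return np.array(ys)

# Random training stream: continuous value input, sparse gate openings.
T = 5000
values = rng.uniform(-1.0, 1.0, T)
gates  = (rng.uniform(0.0, 1.0, T) < 0.05).astype(float)
target = gated_target(values, gates)

# Ridge-regression readout, the standard training step in reservoir computing.
X = collect_states(values, gates, target)
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ target)

# At test time the readout should hold the last gated value despite the
# ever-present input stream (the "physically open" setting described above).
test_v = rng.uniform(-1.0, 1.0, 1000)
test_g = (rng.uniform(0.0, 1.0, 1000) < 0.05).astype(float)
error  = np.abs(run(test_v, test_g, W_out) - gated_target(test_v, test_g)).mean()
```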

Conferences

  1. Strock, A., Rougier, N. P., & Hinaut, X. (2018, July). A Simple Reservoir Model of Working Memory with Real Values. 2018 International Joint Conference on Neural Networks (IJCNN). https://doi.org/10.1109/ijcnn.2018.8489262

    The prefrontal cortex is known to be involved in many high-level cognitive functions, in particular working memory. Here, we study to what extent a group of randomly connected units (namely an Echo State Network, ESN) can store and maintain (as output) an arbitrary real value from a streamed input, i.e., can act as a sustained working memory unit. Furthermore, we explore to what extent such an architecture can take advantage of the stored value in order to produce non-linear computations. A comparison between different architectures (with and without feedback, with and without a working memory unit) shows that an explicit memory improves performance.

  2. Strock, A., Rougier, N., & Hinaut, X. (2019). Using Conceptors to Transfer Between Long-Term and Short-Term Memory. Artificial Neural Networks and Machine Learning – ICANN 2019: Workshop and Special Sessions, 19–23. https://doi.org/10.1007/978-3-030-30493-5_2

    We introduce a model of working memory combining short-term and long-term components. For the long-term component, we used Conceptors in order to store constant temporal patterns. For the short-term component, we used the Gated-Reservoir model: a reservoir trained to hold information triggered by an input stream and to maintain it in a readout unit. We combined both components in order to obtain a model in which information can go from long-term memory to short-term memory and vice versa.

  3. Évain, A., Argelaguet, F., Strock, A., Roussel, N., Casiez, G., & Lécuyer, A. (2016, June). Influence of Error Rate on Frustration of BCI Users. Proceedings of the International Working Conference on Advanced Visual Interfaces. https://doi.org/10.1145/2909132.2909278

    Brain-Computer Interfaces (BCIs) are still much less reliable than other input devices. The error rates of BCIs range from 5% up to 60%. In this paper, we assess the subjective frustration, motivation, and fatigue of BCI users when confronted with different levels of error rate. We conducted a BCI experiment in which the error rate was artificially controlled. Our results first show that prolonged use of a BCI significantly increases perceived fatigue and induces a drop in motivation. We also found that user frustration increases with the error rate of the system, but this increase does not seem critical for small differences in error rate. Thus, for future BCIs, we would advise favoring user comfort over accuracy when the potential gain in accuracy remains small.

PhD Thesis

  1. Strock, A. (2020). Working memory in random recurrent neural networks [Université de Bordeaux]. https://theses.hal.science/tel-03150354

    Working memory can be defined as the ability to temporarily store and manipulate information of any kind. For example, imagine that you are asked to mentally add a series of numbers. In order to accomplish this task, you need to keep track of the partial sum, which needs to be updated every time a new number is given. Working memory is precisely what makes it possible to maintain (i.e. temporarily store) the partial sum and to update it (i.e. manipulate it). In this thesis, we propose to explore neuronal implementations of working memory using a limited number of hypotheses. To do this, we place ourselves in the general context of recurrent neural networks and, in particular, use the reservoir computing paradigm. This type of very simple model nevertheless makes it possible to produce dynamics that learning can take advantage of to solve a given task.

    In this work, the task to be performed is a gated working memory task. The model receives as input a signal that controls the update of the memory. When the gate is closed, the model should maintain its current memory state, while when it is open, it should update it based on an input. In our approach, this input is present at all times, even when there is no update to do. In other words, we require our model to be an open system, i.e. a system that is always disturbed by its inputs but that must nevertheless learn to keep a stable memory.

    In the first part of this work, we present the architecture of the model and its properties, then show its robustness through a parameter sensitivity study. This study shows that the model is extremely robust over a wide range of parameters: more or less any random population of neurons can be used to perform gating. Furthermore, after learning, we highlight an interesting property of the model, namely that information can be maintained in a fully distributed manner, i.e. without being correlated to any single neuron but only to the dynamics of the group. More precisely, working memory is not correlated with the sustained activity of neurons, which has nevertheless long been reported in the literature and has recently been questioned experimentally. The model confirms these results at the theoretical level.

    In the second part of this work, we show how the models obtained by learning can be extended in order to manipulate the information held in the latent space. We therefore consider conceptors, which can be conceptualized as a set of synaptic weights that constrain the dynamics of the reservoir and direct it towards particular subspaces, for example subspaces corresponding to the maintenance of a particular value. More generally, we show that these conceptors can not only maintain information, they can also maintain functions. In the case of the mental arithmetic mentioned previously, conceptors then make it possible to remember and apply the operation to be carried out on the various inputs given to the system. Conceptors therefore make it possible to instantiate a procedural working memory in addition to the declarative working memory. We conclude this work by putting this theoretical model into perspective with respect to biology and neuroscience.
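
As a companion to the conceptor discussion above, here is a minimal sketch of the standard conceptor construction from the reservoir computing literature: a conceptor is computed from states recorded while the reservoir maintains a value, and is then inserted into the update to constrain the dynamics to that memory subspace. This is a generic illustration under assumed parameter values (reservoir size, aperture, input scaling), not the code developed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)
n_res, aperture = 200, 10.0               # reservoir size and aperture (illustrative)
W    = rng.normal(0.0, 1.0, (n_res, n_res)) * 0.9 / np.sqrt(n_res)
W_in = rng.uniform(-1.0, 1.0, n_res)

def record_states(inputs):
    """Drive the reservoir with an input stream and return the visited states."""
    x, states = np.zeros(n_res), []
    for u in inputs:
        x = np.tanh(W @ x + W_in * u)
        states.append(x)
    return np.array(states)

# Record the dynamics while the reservoir is driven with a constant value.
X = record_states(np.full(500, 0.7))

# Conceptor: C = R (R + aperture^-2 I)^-1, with R the state correlation matrix.
R = X.T @ X / len(X)
C = R @ np.linalg.inv(R + aperture ** -2 * np.eye(n_res))

def run_with_conceptor(C, steps):
    """Autonomous run with the conceptor in the loop: x <- C tanh(W x)."""
    x, states = X[-1], []
    for _ in range(steps):
        x = C @ np.tanh(W @ x)
        states.append(x)
    return np.array(states)

# The conceptor keeps the autonomous dynamics inside the recorded subspace,
# which is how a stored value or pattern can be maintained, and how linear
# combinations of conceptors can form new memories.
maintained = run_with_conceptor(C, 200)
```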

© 2023 Anthony Strock.