Anna Ivanova

Postdoctoral Associate

Massachusetts Institute of Technology

Hi!

I am a postdoctoral researcher at MIT, interested in studying the relationship between language and other aspects of human cognition. In my work, I use tools from cognitive neuroscience (such as fMRI) and artificial intelligence (such as large language models).

To learn more, browse this website or check out this 5-min TEDx talk about applying insights from neuroscience to better understand the capabilities of large language models.

You can contact me at annaiv [at] mit [dot] edu or follow me on Twitter.

UPDATE: In January 2024, I will be starting as an Assistant Professor at Georgia Tech Psychology! You can find more information at my lab website, Language, Intelligence, and Thought. I am hiring at all levels, so don't be shy to reach out!

Interests
  • Neuroscience of language and cognition
  • World knowledge in large language models
Education
  • PhD in Brain & Cognitive Sciences, 2022

    Massachusetts Institute of Technology

  • BS in Neuroscience & Computer Science, 2017

    University of Miami

Position

Postdoctoral Associate
  • Developing a large-scale benchmark to evaluate world knowledge in language models
  • Designing a platform that enables all researchers to study world knowledge in machines using custom tests and models

All Publications

(2023). The language network is not engaged in object categorization. Cerebral Cortex.

(2023). A Better Way to Do Masked Language Model Scoring. Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL).

(2023). Dissociating language and thought in large language models: a cognitive perspective. arXiv.

(2022). Event knowledge in language models: the gap between the impossible and the unlikely. arXiv.

(2022). Convergent Representations of Computer Programs in Human and Artificial Neural Networks. Advances in Neural Information Processing Systems (NeurIPS).

(2022). Beyond linear regression: mapping models in cognitive neuroscience should align with research goals. Neurons, Behavior, Data analysis, and Theory.

(2022). Probabilistic atlas for the language network based on precision fMRI data from >800 individuals. Scientific Data.

(2022). The language network reliably 'tracks' naturalistic meaningful non-verbal stimuli. bioRxiv.

(2021). Probing artificial neural networks: Insights from neuroscience. ICLR 2021 Workshop "How Can Findings About The Brain Improve AI Systems?".

(2021). The language network is recruited but not required for nonverbal event semantics. Neurobiology of Language.

(2020). Comprehension of computer code relies primarily on domain-general executive brain regions. eLife.

(2020). Linguistic overhypotheses in category learning: Explaining the label advantage effect. Proceedings of the 42nd Annual Conference of the Cognitive Science Society.

(2019). The language of programming: a cognitive perspective. Trends in Cognitive Sciences.

(2018). Does the brain represent words? An evaluation of brain decoding studies of language understanding. Computational Cognitive Neuroscience Conference.

(2018). Pragmatic inference of intended referents from binomial word order. Proceedings of the 40th Annual Conference of the Cognitive Science Society.

(2017). Intrinsic functional organization of putative language networks in the brain following left cerebral hemispherectomy. Brain Structure and Function.

(2014). Post-fire succession in the northern pine forest in Russia: a case study. Wulfenia.