Abstract

My research explored the language predictiveness of untrained neural language models to better understand how their activations predict neural data. My results indicate that the results published in Schrimpf for the Blank2014 fMRI dataset are erroneous. The results of the linguistic analysis contradicted expectations, especially for the untrained models: the XL-Untrained model significantly outperformed the GPT2-Untrained model on fMRI prediction, but significantly underperformed it on predicting linguistic targets. Although the trained GPT-2-XL model outperformed the GPT2-Untrained model on fMRI prediction, it performed similarly on next-word n-gram probability prediction and underperformed the base model on part-of-speech prediction. Finally, I propose a possible theory that reconciles the apparent contradiction in these results.
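As context for the fMRI-prediction results described above, the sketch below shows one common way such neural predictivity is scored: model-layer activations are regressed onto voxel responses with cross-validated ridge regression, and performance is the mean held-out Pearson correlation. This is a minimal, hypothetical illustration under assumed shapes and synthetic data, not the thesis's actual pipeline or the exact metric of Schrimpf et al.

```python
# Minimal sketch (illustrative only): score how well language-model
# activations predict fMRI responses via cross-validated ridge regression.
# All sizes and data here are hypothetical stand-ins, not the real stimuli.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n_stimuli, n_features, n_voxels = 120, 768, 50        # assumed sizes
activations = rng.standard_normal((n_stimuli, n_features))  # model layer activations
fmri = rng.standard_normal((n_stimuli, n_voxels))           # voxel responses

def neural_predictivity(X, Y, alpha=1.0, n_splits=5):
    """Mean cross-validated Pearson r between predicted and held-out voxel responses."""
    fold_scores = []
    for train, test in KFold(n_splits, shuffle=True, random_state=0).split(X):
        model = Ridge(alpha=alpha).fit(X[train], Y[train])
        pred = model.predict(X[test])
        # Pearson r per voxel on the held-out fold, then averaged
        r = [np.corrcoef(pred[:, v], Y[test][:, v])[0, 1] for v in range(Y.shape[1])]
        fold_scores.append(np.nanmean(r))
    return float(np.mean(fold_scores))

print(f"neural predictivity (illustrative): {neural_predictivity(activations, fmri):.3f}")
```

The same scaffold applies to the linguistic targets mentioned in the abstract: swapping the fMRI matrix for next-word probabilities or part-of-speech labels (with a classifier in place of the ridge regressor) yields the analogous linguistic-predictivity scores being compared.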