Computational models of speech production must explain both findings from unimpaired speakers and patterns of impairment observed following brain damage. The current project simulates brain damage in a scaled version of the Dark Side model of lexical access (Oppenheim, Dell & Schwartz, 2010). The Dark Side model incorporates several core connectionist assumptions (competitive selection, shared activation, and incremental, error-driven learning) into a computational model of lexical access. The model reliably predicted, at a qualitative level, the patterns of performance observed in unimpaired speakers. However, the original model included only a small vocabulary and an unrealistic representation of semantics, so questions remained about its ability to scale. A scaled version of the model has recently been developed with a larger vocabulary and a more realistic representation of semantics (Oppenheim, 2016). The scaled model matches performance on specific lexical access experiments both qualitatively and quantitatively.
The present study compares the predictions of the scaled Dark Side model to the patterns of naming impairments observed in aphasia. Specifically, three thousand models were individually trained with the same vocabulary of approximately 1500 words. Each model was damaged by adding varying levels of Gaussian noise and given a single administration of the Philadelphia Naming Test (PNT). Patterns of errors produced by the damaged models were then compared to the performance of aphasic patients drawn from the Moss Aphasia Project Database.
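The damage procedure described above can be sketched as follows. This is a minimal illustration, not the study's actual implementation: the function name, the weight-matrix representation, and the use of a single noise parameter are all assumptions for the sake of the example.

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for reproducibility

def damage_weights(weights, noise_sd):
    """Simulate a lesion by adding zero-mean Gaussian noise to every
    connection weight. noise_sd controls lesion severity; larger values
    correspond to more severe damage. (Illustrative only: the actual
    damage procedure in the simulations may differ in detail.)"""
    noise = rng.normal(loc=0.0, scale=noise_sd, size=weights.shape)
    return weights + noise

# Example: damage a small intact weight matrix at a moderate noise level.
intact = np.ones((4, 4))
damaged = damage_weights(intact, noise_sd=0.5)
```

Varying `noise_sd` across models would yield the range of severities needed to match individual patients.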
First, the model simulations show that the error rate increases as the PNT progresses. The patient data match this prediction qualitatively. However, the model overestimates the buildup of errors observed in patients. Figure 1 compares models with patients matched for severity. A mean error position of 87.5 (the midpoint of the PNT) indicates no buildup of errors over the experiment. Both the models and the patients tend to have a mean error position greater than 87.5, but the mean error position of the model is significantly greater than that of the patients. Second, the model simulations produce perseveration errors; that is, the damaged models incorrectly repeat items seen previously in the experiment. This pattern is also observed in the patient data. However, the model predicts many more perseveration errors than are actually produced by patients matched on severity.
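The two measures above can be sketched in code. This is an illustrative sketch only: the function names are hypothetical, and the simple perseveration criterion (an error response that repeats any earlier target) is an assumption; the scoring conventions used for the PNT and the Moss database may be stricter.

```python
def mean_error_position(errors):
    """errors: list of booleans ordered by presentation position,
    True where the trial was an error. Returns the mean (1-indexed)
    position of the errors, or None if there were no errors.
    Errors spread evenly yield a mean near the test's midpoint;
    errors concentrated late in the test yield a larger mean."""
    positions = [i + 1 for i, err in enumerate(errors) if err]
    return sum(positions) / len(positions) if positions else None

def count_perseverations(targets, responses):
    """Count error responses that repeat an item presented earlier
    in the test (a deliberately simple perseveration criterion)."""
    seen = set()
    count = 0
    for target, resp in zip(targets, responses):
        if resp != target and resp in seen:
            count += 1
        seen.add(target)
    return count

# Errors early in a 10-item test vs. late in the test:
early = [True, True] + [False] * 8   # mean error position 1.5
late = [False] * 8 + [True, True]    # mean error position 9.5
```

On this criterion, a response of "cat" to the target "dog" counts as a perseveration if "cat" appeared earlier as a target.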
Simulations of the Dark Side model demonstrate that the theory can account for qualitative trends observed in the aphasic patient data. In its default state, however, the model cannot quantitatively account for either the buildup of errors or the rate of perseverations. Further work will explore how the model can be modified to match patient data both qualitatively and quantitatively.