Research

At Denison University, I run the Language & Visual Cognition Lab.

My research sits at the interface of visual attention and language comprehension. I study various cognitive processes (e.g., phonological coding, ambiguity resolution) and the time course of their influence on behavior. To explore these topics, I primarily use eye tracking, supplemented by response accuracy and reaction time measures.

Phonological Coding During Reading
When reading silently, why do we experience a little voice in our head—the so-called inner voice—saying the words as we read? Proponents of speed-reading will tell you that it is merely a carry-over from when you learned to read, and that suppressing your inner voice is a necessary step toward increasing your reading speed. However, the fact that even highly skilled readers experience an inner voice makes it seem unlikely that it is entirely epiphenomenal. My research seeks to determine what its functions are, lest we do away with something integral to the reading process in the pursuit of more efficient reading. As I laid out in a recent review, there is still considerable debate surrounding what role(s) phonological coding (the recoding of visual words into a sound-based code) actually plays during normal reading (Leinenger, 2014, Psychological Bulletin).

The main goal of my dissertation work was to more precisely characterize the time course of phonological coding during reading, because this is a critical step toward determining the role(s) that it could play. In this research, I use traditional analyses of means as well as survival analyses of eye movement data, which are particularly well-suited to answering questions about the time course of cognitive processes in reading because they can reveal the earliest discernible effect of a manipulation on behavior. Across a number of manipulations, the two approaches converge in support of an early time course for phonological coding during silent reading, early enough that it could be implicated in processes associated with word identification.
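To give a flavor of the logic behind a survival analysis of fixation durations, here is a minimal sketch. It is not the published procedure (which involves bootstrap resampling), and the condition labels, threshold, and run length are all made-up illustrative values: it simply computes the empirical survival curve of first-fixation durations in two conditions and finds the earliest sustained separation between them.

```python
import numpy as np

def survival_curve(durations, t_max=600):
    """Empirical survival function: the proportion of fixations still
    ongoing at each millisecond t, i.e. P(duration > t), for t = 0..t_max."""
    durations = np.asarray(durations)
    t = np.arange(t_max + 1)
    return (durations[:, None] > t).mean(axis=0)

def divergence_point(cond_a, cond_b, threshold=0.015, run=50, t_max=600):
    """Earliest time (ms) at which the survival curve for cond_b sits above
    the curve for cond_a by more than `threshold` for at least `run`
    consecutive milliseconds. A simplified stand-in for published
    divergence point analyses; `threshold` and `run` are arbitrary here."""
    diff = survival_curve(cond_b, t_max) - survival_curve(cond_a, t_max)
    above = diff > threshold
    for i in range(len(above) - run + 1):
        if above[i:i + run].all():
            return i
    return None  # curves never reliably separated

# Simulated first-fixation durations (ms) for two hypothetical conditions:
rng = np.random.default_rng(0)
baseline = rng.normal(250, 40, 5000)  # e.g., an identical-preview control
slowed = rng.normal(290, 40, 5000)    # e.g., a manipulation that delays processing
print(divergence_point(baseline, slowed))
```

The earlier the divergence point, the earlier the manipulated variable must have begun influencing eye movement behavior, which is what makes this style of analysis useful for time-course questions.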

Because phonological codes are generated rapidly and seem to be important for normal reading, it is interesting to consider whether individuals who do not have access to the sound information of the language in which they are reading also generate and use phonological codes. Accordingly, my dissertation has also explored whether there is any evidence for the generation of English phonological codes by congenitally deaf individuals. Results suggest that some skilled deaf readers do in fact generate English phonological codes as rapidly as their hearing counterparts, despite lacking direct access to the sound information of the language they are reading. Finally, in addition to generating English phonological codes, deaf readers, whose primary language is American Sign Language (ASL), might also be rapidly generating ASL phonological codes—visuomanual codes based on ASL phonological features (e.g., hand shape, orientation, movement, location)—while reading English. Indeed, there is prior evidence that deaf readers rely on ASL codes to support working memory, but it is not clear whether these codes are generated rapidly enough to also support lexical access. I explore this possibility directly in my dissertation through a reanalysis of existing eye movement data.

(Click here for a PDF of the poster I presented at Psychonomics 2015)

Ambiguity Resolution
The knowledgeable farmer is an expert in his field. Puns such as this one exploit a natural phenomenon of human language: it is pervasively ambiguous. Although ambiguities can arise at multiple levels of representation, my research investigates the processing of ambiguous words. Many words have more than one potential meaning (e.g., field can mean an area of flat land or a branch of study), yet we are generally able to quickly and accurately identify the meanings of the words and sentences we read. Our ability to do this hinges on two additional properties of language: 1) certain meanings of a word are simply more common (more frequent) than others, and 2) words are rarely encountered in isolation, but rather are almost always embedded in a larger context. A further strand of my research aims to determine the relative influences of these different cues to meaning, and the time courses with which those cues exert their effects.
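To make the interplay of these two cues concrete, here is a toy sketch (with entirely made-up numbers, not parameters estimated in any of the studies discussed here) that treats meaning frequency as a prior and contextual fit as a likelihood in a simple normalized product:

```python
def meaning_posterior(prior, likelihood):
    """Combine meaning frequency (prior) with contextual fit (likelihood)
    via a simple normalized product. Purely illustrative: the sense labels
    and numbers below are hypothetical, not corpus estimates."""
    post = {m: prior[m] * likelihood[m] for m in prior}
    total = sum(post.values())
    return {m: p / total for m, p in post.items()}

# Hypothetical meaning frequencies for the ambiguous word "field":
freq = {"area of land": 0.4, "branch of study": 0.6}

# In a neutral context, only frequency matters:
print(meaning_posterior(freq, {"area of land": 0.5, "branch of study": 0.5}))
# → the more frequent sense wins (0.6 vs. 0.4)

# After "The farmer plowed the ...", context strongly favors the land sense:
print(meaning_posterior(freq, {"area of land": 0.9, "branch of study": 0.1}))
# → context overrides frequency (about 0.86 vs. 0.14)
```

This is only a schematic way of framing the question; the empirical work below asks when each cue actually exerts its influence during reading, not just how the two might combine.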

Despite previous conflicting results, my research demonstrates that, when readers encounter ambiguous words without any prior cues to the intended meaning, they do not attempt to maintain both meanings while waiting for disambiguating information; rather, they hedge their bets and rapidly activate only the more common meaning—a strategy that works most of the time, but that leads to processing difficulty if the less common meaning turns out to have been intended (Leinenger, Myslín, Rayner, & Levy, 2017, Journal of Memory and Language). However, if readers do have strong contextual cues to the intended meaning, they can use that information to rapidly activate and integrate the less common meaning of an ambiguous word (Leinenger & Rayner, 2013, Journal of Cognitive Psychology). Furthermore, my research suggests that bilinguals—who encounter ambiguities not only within each of their languages, but also between them (e.g., arena for Spanish–English bilinguals, since it means “sand” in Spanish and “sports complex” in English)—can use sentence context to select a given meaning of a cross-language ambiguity, even when it is not the meaning consistent with the language currently being read (Leinenger et al., 2013; talk presented at the European Conference on Eye Movements).