Friday, 19 April 2013

Do we really decode words sequentially?

Visual word recognition: the first 250 ms. Oxford-Kobe Symposium on the neurobiology of reading.

One of the most interesting presentations at the Symposium was by Piers Cornelissen, who presented evidence from MEG studies.
These showed that there appears to be a direct coupling from visual areas of the cortex to the left inferior frontal gyrus (LIFG) during reading. The method they used, partial directed coherence (PDC), shows the direction of communication along the link. The visual areas sent information to the LIFG within the first 130 ms of the onset of the visual image.
The LIFG then has direct access to the part of the brain which enables phonological output.

The activation of this part of the brain at the same time as the fusiform gyrus suggests that, in fluent readers, the phonological output is independent of the orthography.

‘Using brain imaging, researchers showed that the speech motor areas of the brain (inferior frontal gyrus) were active at the same time (after a seventh of a second) as the orthographic word form was being resolved within a brain region called the fusiform gyrus.

The finding challenges the conventional view of a temporally serial processing sequence for reading, in which letter forms are initially decoded, interact with their phonological and semantic representations, and only then gain access to a speech code.’

Why do I consider this important?

My colleagues and I have recorded the phonological output of over 11,000 dyslexic adults, both on a default computer screen and on a screen objectively optimised to maximise their phonological output.
We measure their ‘reading speeds’ in several ways.

Aloud and silent:
Oral Reading Fluency (ORF): reading aloud complex text.
Rapid Automatised Naming (RAN): random arrays of a small number of simple, short words; no syntax.

Silent only:
Binocular eye tracking: recording eye movements while reading complex text silently.

If these data are analysed for the frequency of particular speeds, several distinct modes emerge: the distribution is multimodal.

Aloud, default settings (font size 12, background Red 255, Green 255, Blue 255):

Not dyslexic, ORF: 184 words per minute (wpm)
Not dyslexic, RAN: 184 wpm
Dyslexic, ORF: 138 wpm
Dyslexic, RAN: 138 wpm

Silent, default settings:

Not dyslexic, ORF: 460 wpm
Dyslexic, ORF: 158 wpm

Aloud, optimised settings (font size and background):

Not dyslexic, ORF: 219 wpm
Dyslexic, ORF: 158 wpm, 184 wpm and 219 wpm

Silent, optimised settings:

Not dyslexic, ORF: 460 wpm
Dyslexic, ORF: 158 wpm, 219 wpm and 480 wpm

These modes are quite robust, which suggests that they reflect fundamental neurobiological mechanisms driving the reading process.
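One simple way to see such modes in a set of measured speeds is to bin them and look for peaks in the counts. A minimal sketch, using hypothetical speeds for illustration only (not the study's data):

```python
from collections import Counter

# Hypothetical reading speeds (wpm), illustrative only -- not the study's data.
speeds = [136, 138, 140, 156, 158, 160, 182, 184, 186, 218, 220, 458, 462]

# Bin each speed to the nearest 20 wpm and count; peaks are candidate modes.
bins = Counter(20 * round(s / 20) for s in speeds)
for centre in sorted(bins):
    print(centre, "wpm:", "#" * bins[centre])
```

With real data one would use a finer bin width (or a kernel density estimate) chosen to match the measurement noise.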

The data reported by Cornelissen et al. suggest that visual information arrives at the LIFG 130 ms after the image arrives at the retina. This would enable a reading-speed output of about 462 wpm (60/0.130).
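The arithmetic linking a per-word latency to a words-per-minute figure, and back, can be sketched as follows (pure arithmetic; the only relationship assumed is the 60/latency conversion used above):

```python
def latency_to_wpm(latency_s):
    """Convert a per-word processing latency (seconds) to words per minute."""
    return 60.0 / latency_s

def wpm_to_latency_ms(wpm):
    """Inverse: the per-word latency (milliseconds) implied by a reading speed."""
    return 60.0 / wpm * 1000.0

# The 130 ms visual-to-LIFG delay reported by Cornelissen et al.
print(round(latency_to_wpm(0.130)))  # ~462 wpm

# Per-word latencies implied by the observed modes listed above.
for mode in [138, 158, 184, 219, 460]:
    print(mode, "wpm ->", round(wpm_to_latency_ms(mode)), "ms per word")
```

Note that the 460 wpm silent-reading mode maps back to almost exactly 130 ms per word.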

I love the way my work of the last 30 years appears to be converging with the unfolding neurobiology.
I would love to know what the other output modes we have found actually represent.
I have my hypotheses; any suggestions are welcome.

This research can be viewed in the light of the work on visual attention span, referred to in other posts, which may give an insight into the number of fixations needed to deliver the letter strings of words, and hence into the possible speed of phonological output.
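The relationship hinted at here can be made concrete with some hypothetical numbers. In the sketch below, the fixation duration and the words-per-fixation values are illustrative assumptions, not measurements from the studies discussed:

```python
def reading_speed_wpm(fixation_s, words_per_fixation):
    """Words per minute, given a fixation duration (seconds) and how many
    words each fixation delivers to downstream processing."""
    return 60.0 / fixation_s * words_per_fixation

# Illustrative only: a 250 ms fixation delivering one word per fixation.
print(round(reading_speed_wpm(0.250, 1.0)))  # 240 wpm

# A wider visual attention span (two words per fixation) doubles the output.
print(round(reading_speed_wpm(0.250, 2.0)))  # 480 wpm
```

On this toy model, widening the visual attention span raises the output speed without any change in fixation duration, which is the sense in which attention span could gate phonological output speed.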

This is also supported by the work of Facoetti et al., which indicates that letters are processed in parallel rather than decoded and blended serially.

'Non-words' and new words would still need to be processed serially, but the development of automaticity would depend on the visual attention span, as implied by the work of Sylviane Valdois et al.
The visual attention span is likely to be controlled by visual crowding, a visual processing issue.
