Poster Session

#1

Phasic modulation of tonic attentional biases in horizontal and vertical dimensions: A cued visual line bisection study


Yamaya Sosa & Mark E. McCourt
Center for Visual and Cognitive Neuroscience, Department of Psychology
North Dakota State University

Background. Biases of spatial attention occur in both horizontal and vertical dimensions. Normal observers exhibit a tonic leftward error in the perceived midpoint of horizontal lines, thought to result from a surplus of spatial attention directed into the left (contralateral) hemifield (or toward the left half of objects) by the dominant right hemisphere, which induces a left-side size overestimation. Less well studied is the tonic upward error in the perceived midpoint of vertical lines, which, interestingly, is common to both normal observers and neglect patients. Transient visual cues phasically modulate the tonic leftward error for horizontal lines; the effect of cues on vertical bisection error is unknown. Method. In two experiments (E1, E2) observers (N = 34 and 32) bisected horizontal (E1) and vertical (E2) lines (50% contrast; 26.7° by 3°; viewing distance = 57 cm) in both cued (3° diameter circular cosines, 3 c/d, 100% contrast, 30 ms duration, 60 ms cue-line SOA) and uncued conditions. In E1 cues appeared at eight spatial locations within the left and right halves of the horizontal lines. In E2 cues appeared at eight spatial locations within the upper and lower halves of the vertical lines. Results. In E1 a tonic leftward bisection error was significantly modulated by cues. Right cue potency exceeded left cue potency. Cues located inside the line boundaries were equally potent, but those located outside were ineffective in modulating bisection error. In E2 a tonic upward bisection error was modulated by cues. Downward cue potency exceeded upward cue potency. Again, cues located outside the line boundary were ineffective; unlike E1, cue potency varied with location inside the line. Conclusions. Cues modulate tonic biases of spatial attention in both horizontal and vertical dimensions. Cue potency is strongest for cues which antagonize the tonic biases. Cue effects are exerted in object-referenced coordinates.
NIH COBRE P20 GM103505

#2

Tonic and phasic influences on perceived size: Effects of visual field, stimulus eccentricity and smooth pursuit eye movements


Katsumi Minakata, Yamaya Sosa & Mark E. McCourt
Center for Visual and Cognitive Neuroscience, Department of Psychology
North Dakota State University

Background. A tonic leftward bias exists in the perceived midpoint of horizontal lines, due to the differential magnification of left hemispace by a prepotent contralateral vector of visuospatial attention deployed by the dominant right hemisphere. Many early studies reported that visual scanning phasically modulated this tonic bias, such that leftward bisection error increased with rightward scanning and vice versa. More recent studies report the opposite effect: leftward bisection error increases after execution of leftward smooth pursuit eye movements (SPEMs) and vice versa. Companion experiments found that rectangular stimuli presented ahead of an SPEM appeared larger than identical stimuli located the same distance behind the SPEM trajectory. We further explore the effect of SPEMs on size perception. Method. Stimuli were second-order isotropic blobs of Gaussian-enveloped binary noise (100% contrast; 15 comparison blobs, σ = 0.17°-0.31°; standard blob σ = 0.24°). In scanning conditions subjects smoothly tracked (7.7°/s) a target dot which moved 18.9° leftward or rightward toward the center of the display, whereupon standard and comparison blobs were presented for 50 ms to the left and right of fixation, respectively. Subjects (N = 21) judged which blob was larger. A fixed-gaze (no scan) condition was also included, such that there were three levels of scanning (leftward/rightward/none) and two levels of blob eccentricity (1° and 4°). Results. A 3 (Scanning) x 2 (Eccentricity) within-subjects ANOVA revealed a main effect of Scanning (p < .001), in which scanning increased the perceived size of blobs ahead of the scan trajectory; a trend effect of Eccentricity (p = .085), in which at 1° left blobs appeared larger than right blobs (reversed at 4°); and a Scanning x Eccentricity interaction (p < .001), in which the magnitude of the scanning effect was largest at 4° blob eccentricity. Conclusions. Tonic visual field-dependent differences in perceived size are modulated by eccentricity and SPEMs.
NIH COBRE P20 GM103505

#3

The roles of physical and physiological simultaneity in facilitative audiovisual multisensory integration


Lynnette Leone & Mark E. McCourt
Center for Visual and Cognitive Neuroscience, Department of Psychology
North Dakota State University

Using a reaction time (RT)/race model paradigm, we measured the degree of audiovisual multisensory integration as a function of audiovisual stimulus onset asynchrony (SOA_AV) in a series of four experiments. We evaluated: 1) the range of SOA_AV over which facilitation occurred when the unisensory stimuli were weak; 2) whether the range of SOA_AV producing facilitation supported the hypothesis that the physiological simultaneity of unisensory convergence governs multisensory facilitation; and 3) whether AV multisensory facilitation depended on relative stimulus intensity. We compared RT distributions to unisensory auditory (A) and visual (V) stimuli with those to AV stimulus combinations over a wide range (300 ms, in 20 ms increments) of SOA_AV, across four conditions of varying stimulus intensity. In Condition 1, unisensory stimulus intensity was adjusted such that d-prime ≈ 2, creating stimuli sufficiently strong for reliable detection (corresponding to an 87% correct response rate in a 2-alternative forced-choice task, where 75% correct is typically taken as threshold), but sufficiently weak that performance could be meaningfully contrasted with that observed in subsequent experimental conditions where stimulus intensity was highly suprathreshold (d-prime > 4; 99.9% correct response rate). In Condition 2, V stimulus intensity was increased (d-prime > 4), while A stimulus intensity was as in Condition 1. In Condition 3, A stimulus intensity was increased (d-prime > 4) while V stimulus intensity was as in Condition 1. In Condition 4, both A and V stimulus intensities were increased to clearly suprathreshold levels (d-prime > 4). Contrary to predictions based on the rule of inverse effectiveness, multisensory facilitation increased with increasing stimulus intensity. Despite large alterations in unisensory processing speed (mean RT) caused by changing stimulus intensity (Piéron's law), significant multisensory facilitation nevertheless occurred nearly exclusively at physical stimulus simultaneity (SOA_AV = 0 ms), a result which is incompatible with the simple physiological simultaneity rule. Our results imply that multisensory integration involves strong Bayesian priors whereby the latency of unisensory convergence is adjusted for stimulus intensity.
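(As context for the race-model test, which the abstract does not spell out: on the standard formulation, multisensory facilitation beyond probability summation is inferred wherever the cumulative RT distribution for the audiovisual stimulus violates the race-model inequality P(RT_AV ≤ t) ≤ P(RT_A ≤ t) + P(RT_V ≤ t). Piéron's law, cited above for the dependence of unisensory RT on intensity, takes the form RT = RT_0 + k·I^(−β), where I is stimulus intensity and RT_0, k, and β are fitted constants.)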
NIH COBRE P20 GM103505

#4

Chinese-reading expertise modulates the crowding effect of Chinese characters on face categorization


Hsin-Mei Sun & Benjamin Balas
Center for Visual and Cognitive Neuroscience, Department of Psychology
North Dakota State University

Crowding refers to the difficulty of identifying an object when other objects are presented nearby. The phenomenon of "holistic crowding" (in which the recognition of an upright target face is more impaired by upright flanker faces than by inverted ones) suggests that crowding may occur at the level of holistic representations of upright faces rather than at the level of low-level image features (Farzin, Rivera, & Whitney, 2009). Presently, we examined whether the flanker-inversion effect on crowding in face recognition can be observed with non-face flankers that are processed holistically. Visual expertise leads to increased holistic processing of non-face objects, so we opted to use expertise with Chinese characters as a means of studying the generality of holistic crowding. We hypothesized that Chinese characters would induce holistic crowding of faces only in individuals with sufficient expertise. In Experiment 1, a target face was briefly presented in the fovea or the periphery. The target was either presented alone or surrounded by faces or Chinese characters that were presented upright or inverted. Non-Chinese readers and native Chinese readers indicated the sex of the target face. Our results demonstrated that categorization performance was worse when the target was surrounded by faces compared to Chinese characters in both groups. Moreover, native Chinese readers showed a stronger crowding effect when the target was surrounded by upright compared to inverted faces or Chinese characters. However, the orientation of face or Chinese character flankers did not modulate the crowding effect in non-Chinese readers. In Experiment 2, we used the same experimental design with two-tone Mooney faces and obtained the same results. Our data suggest that visual expertise (and possibly holistic processing) affects crowding even when flankers and targets belong to separate categories; therefore, the high-level components of crowding may depend less on stimulus appearance and more on processing strategy.
NIH COBRE P20 GM103505

#5

Incentive effects on behavior and ERPs in a go/nogo paradigm with lottery-type reward


David S. Leland1, Catherine L. Reed2, Paige M. Jablonski1, Jacob L. Bradley1 & Kellyn A. Kroner1
1University of Wisconsin-Eau Claire
2Claremont McKenna College


#6

The effects of 10-Hz rTMS on sustained visual attention and visual short-term memory


Johnson, J.S.1,3, Emrich, S.M.1, Sutterer, D.W.2 & Postle, B.R.1,2
1Department of Psychiatry, University of Wisconsin-Madison
2Department of Psychology, University of Wisconsin-Madison
3Center for Visual and Cognitive Neuroscience, Department of Psychology, North Dakota State University

Recently, an increasing number of studies have revealed that low-frequency oscillations may play a critical role in the maintenance of information in visual short-term memory (VSTM). This relationship has been further established through the use of high-frequency repetitive transcranial magnetic stimulation (rTMS), as stimulating areas critical to VSTM retention at frequencies associated with short-term maintenance (~10 Hz) produces changes in delay-period alpha-band activity that are correlated with changes in memory performance. It is possible, however, that this relationship could reflect changes in sustained attention rather than changes in VSTM maintenance. In the present study, we examined the effects of 10-Hz rTMS on performance of a VSTM task as well as a perceptually and procedurally similar attention task with no mnemonic component. In the memory task, participants had to maintain the colors and locations of four items over a 1,600 ms delay period. In the attention task, the colors remained present in the display, and participants had to monitor the items for small changes in brightness. On half of the trials of both tasks, a 1-second train of 10-Hz rTMS was applied to the inferior intraparietal sulcus (IPS), an area associated with both VSTM maintenance and attentional selection. While the VSTM task and attention task exhibited dramatically different patterns of activity (alpha-band power increased during the delay period of the VSTM task, but decreased during the attention task), across both tasks rTMS-related changes in performance were positively correlated with rTMS-related changes in alpha-band power, suggesting a common underlying mechanism between tasks.
NIH F32 MH088115 (JSJ), NIH R01 MH064498 (BRP)

#7

The cognitive neuroscience of auditory distraction


Tom Campbell
Center for Visual and Cognitive Neuroscience, Department of Psychology
North Dakota State University

Background sound can disrupt cognitive performance, even when the person performing a task tries to ignore the sound and even when the sound is unrelated to the task being performed. There are several forms of auditory distraction, each of which may implicate multiple cognitive processes that, in turn, could be related to multiple brain processes. One form of auditory distraction disrupts performance on a memory task that involves briefly holding lists of to-be-remembered digits in memory before attempting to report those items back in their original order. This auditory distraction effect occurs when the sounds are presented during the retention interval over which those items are held in memory. This disruption of immediate memory by auditory distraction could be related to the generation of brain processes in response to the presented sounds, as may be indexed by an auditory ERP protocol. Support for the N1 hypothesis, namely that distraction can involve factors related to the generation of the N1 component of the auditory ERP, is offered by the finding that increases in token set size produce a disruption of performance alongside an increase in N1. This increase in the disruption of immediate memory by auditory distraction is not associated with the mismatch negativity (MMN), but rather with a spatiotemporally and functionally distinct increase in N1, which has been shown to occur without the concomitant elicitation of P3a.

The work reviewed was, in part, supported by the UK's Engineering and Physical Sciences Research Council, the Academy of Finland (200522), a Marie Curie Fellowship of the European Community programme "Improving the Human Research Potential and the Socio-Economic Knowledge Base" under contract number HPMF-CT-2000-00902, the University of Helsinki, and by the Hungarian National Research Fund (OTKA T048383).

#8

In pursuit of perspective: Does linear perspective disambiguate depth from motion parallax?


Joshua I. Johnson, Jonathon M. George & Mark Nawrot
Center for Visual and Cognitive Neuroscience, Department of Psychology
North Dakota State University

The human visual system must disambiguate the depth of its surroundings using any of several depth cues. Mounting evidence supports the role of an extra-retinal signal from the pursuit system in disambiguating depth. However, along with pursuit movements, vertical perspective information is generated during the lateral image motion of computer-generated motion parallax stimuli and as such is a possible confound. Through careful isolation of vertical perspective and pursuit eye movements, this study evaluates the possibility of confounding effects of vertical perspective during the translation of motion parallax stimuli. In addition, the disambiguating power of each cue is investigated. Results support the efficacy of pursuit eye movements in the disambiguation of depth, while vertical perspective is not salient at the levels present in this paradigm.
NIH COBRE P20 GM103505

#9

Assessing extra-retinal signal magnitude in the perception of depth from motion parallax


Zachary Leonard1, Mik Ratzlaff1, Joshua I. Johnson1, Jonathon George1, Mark Nawrot1 & Keith Stroyan2
1Center for Visual and Cognitive Neuroscience, Department of Psychology, North Dakota State University
2Mathematics Department, University of Iowa

Background: Motion parallax (MP), a monocular depth perception cue, relies on an extra-retinal signal to disambiguate the depth of a scene. There is abundant evidence supporting the role of pursuit eye movements as that disambiguating factor. However, conditions have been observed in which unambiguous depth is perceived while the eyes remain completely stationary. Presumably this is due to the generation of an internal pursuit signal that countermands a reflexive optokinetic response (OKR) induced by a translating background while fixation is maintained. The magnitude of this internal pursuit signal was investigated. Methods: Participants were presented with a "non-translating" random-dot MP stimulus in the center of an OKR-evoking background of square-wave grating translating leftward or rightward at either 5.5 or 11 deg/sec. The MP stimulus was followed by a stationary binocular stereogram on a stationary grey background. Observers made comparisons of perceived depth magnitude between the MP and stereo stimuli using a 2-alternative forced-choice response. Results: For each dot-velocity value of the MP stimulus, a point of subjective "relative depth" equality (PSE) was estimated from the psychometric function. This PSE gives the amount of binocular disparity producing the equivalent magnitude of perceived depth from MP at that dot velocity. From this estimate of perceived depth, the magnitude of the internal pursuit signal could be calculated using the motion/pursuit ratio. The two different OKR background speeds elicited the same perceived depth magnitude, and therefore the same internal pursuit signal, for each respective peak dot-velocity value. Conclusion: This suggests that a uniform internal pursuit signal was being generated rather than one tailored to countermand the OKR.
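(The motion/pursuit ratio invoked here is the relation stated in abstract #10: relative depth is given approximately by d/f ≈ dθ/dα, where dθ is retinal image motion, dα is the pursuit signal, and f is viewing distance. On that reading, the internal pursuit signal can be recovered from the stereo-matched depth estimate by rearranging the ratio, dα ≈ f·dθ/d; this is our paraphrase of the computation, as the exact formulation is not given in the abstract.)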
NIH COBRE P20 GM103505

#10

Modeling human perception of depth from motion parallax


Marcus Mahar1, Mik Ratzlaff1, Zachary Leonard1, Joshua I. Johnson1, Mark Nawrot1 & Keith Stroyan2
1Center for Visual and Cognitive Neuroscience, Department of Psychology
North Dakota State University
2Mathematics Department, University of Iowa

Background: Visual perception of depth from motion parallax (MP) relies on a neural process comparing retinal image motion (dθ), pursuit eye movements (dα), and viewing distance (f) to determine perceived relative depth (d). The relationship d/f ~ dθ/dα represents a quantitative model for MP based on this dynamic geometry. The current study determines its applicability to human depth perception. Methods: Observers compared perceived depth magnitude between MP and binocular stereopsis stimuli. MP stimuli translated laterally, generating pursuit (dα), while dots within the stimulus also shifted laterally (dθ). The stereo stimuli, identical in composition to the MP stimuli, were stationary. Psychophysical procedures were used to determine the point of subjective equality, so that the magnitude of depth from binocular disparity could be used to determine the magnitude of perceived depth from MP. Three viewing distances (f) were used. Results: Perceived depth from MP showed significant depth foreshortening, presumably due to misperception of the retinal motion and pursuit signals. Discussion: Power-law transducers for retinal motion (dθ^r) and pursuit (dα^e) provide a conversion of the physical stimulus values into internal signal values. Values of r = 0.417 and e = 0.192 provide an excellent fit to the perceived depth data collected in this study.
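(A compact statement of the model, in our paraphrase using the abstract's own symbols: the geometric motion/pursuit law predicts relative depth as d/f ≈ dθ/dα, while the perceptual version fitted here replaces the physical signals with power-law-transduced internal estimates, perceived d/f ≈ (dθ)^r / (dα)^e, with r = 0.417 and e = 0.192 providing the best account of the foreshortened depth percepts reported in the Results.)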
NIH P20 GM103505

#11

Motion parallax, pursuit eye movements and night vision goggles


Kyle Sundberg, Jonathon George & Mark Nawrot
Center for Visual and Cognitive Neuroscience, Department of Psychology
North Dakota State University

Single-optic night vision goggles (SONVGs) provide important visual function in dark conditions in which unaided vision fails. However, the visual function they provide is greatly constrained. For instance, the field of view available through the optics is restricted and binocular stereopsis is lost. Even though monocular depth cues, including motion parallax, are still available, SONVG users often complain of perceptual problems involving depth and spatial relationships (Wiley, 1989). Here we investigate the hypothesis that motion parallax is affected with SONVGs because the pursuit signal (necessary for the unambiguous perception of depth from motion parallax; Nawrot & Joyce, 2006) is compromised by the need to maintain ocular alignment with the SONVG optics. The study used a free-viewing task with a modified Howard-Dolman apparatus in which the participant used a string to align two identical black rods in the frontoparallel plane. Unaided binocular (UB), unaided monocular motion parallax (UM) and night vision goggle monocular motion parallax (NV) viewing conditions were compared to psychophysically assess the accuracy of depth perception in each. Each participant completed all three conditions. In both monocular motion parallax conditions the horizontal movements of the right eye were measured using a Skalar IRIS IR eye-tracker as the participant performed the task. The largest mean offset and standard deviation were observed in the NV condition (4 orders of magnitude larger than the UB condition), followed by the UM condition (2 orders of magnitude larger than the UB condition) and the UB condition. Eye movement recordings were analyzed for the magnitude of smooth pursuit eye movements. Smaller-magnitude pursuit eye movements were observed in the NV condition as compared to the UM condition. These findings suggest that disambiguation of depth using motion parallax information is hindered by smaller or absent pursuit eye movements in the NV condition relative to the UM condition.
NIH COBRE P20 GM103505

#12

Eye-hand coordination when reaching around an obstacle


Timothy Graham & Jonathan J. Marotta
Perception and Action Lab, Department of Psychology
University of Manitoba

To interact with the environment, one requires coordination between the visual and motor systems. To investigate this coordination, an experiment was designed in which subjects reached to pick up a target object in the presence of an obstacle located either ipsilateral or contralateral to their reaching hand. Ipsilateral obstacles influenced reaches to a larger degree than did contralateral obstacles, which differed minimally from obstacle-free reaches. This suggests that obstacles ipsilateral to the reaching hand are encoded on a trial-by-trial basis because of their greater relevance to the reach action. Obstacles appear to push gaze and the hand away from their location.
NIH COBRE P20 GM103505

#13

Interaction compresses environmental representations in spatial memory


Laura E. Thomas1, Christopher C. Davoli2 & James R. Brockmole2
1Center for Visual and Cognitive Neuroscience, Department of Psychology
North Dakota State University
2Department of Psychology
University of Notre Dame

People perceive individual objects as closer when they have the ability to interact with them than when they do not. We asked how interaction with multiple objects impacts representations of the environment. Participants studied multiple-object layouts, manually exploring or simply observing each object, and then drew a scaled version of the environment (Experiment 1) or reconstructed a copy of the environment and its boundaries (Experiment 2) from memory. Participants who interacted with multiple objects remembered these objects as closer together and reconstructed smaller environment boundaries than participants who looked without touching. These findings provide evidence that action-based perceptual distortions accumulate over a moving observer's multiple interactions, compressing representations not only of the distances between touched objects, but also of untouched environmental boundaries.
NIH COBRE P20 GM103505

#14

Hand dominance influences outcome predictions when observing self-generated actions


Christopher A. Kuylen, Benjamin Balas & Laura Thomas
Center for Visual and Cognitive Neuroscience, Department of Psychology
North Dakota State University

When people observe an action, how do they predict its outcome? Knoblich and Flach (2001) proposed that people simulate performing observed actions to predict their effects. They found that people were more accurate in predicting the outcome of an action when viewing videos of themselves than when viewing videos of another person. Presumably, when people watched videos of themselves, the system that performed the original action also simulated the action during perception, and this match yielded more accurate predictions. We sought to investigate and extend this simulation hypothesis. We filmed participants throwing Velcro balls at two targets from an allocentric (profile) viewpoint and an egocentric (over-the-shoulder) viewpoint. Participants made throws with their dominant and non-dominant hands. We subsequently asked participants to predict the trajectory of their own throws and another participant's throws as they watched videos spanning from the onset of the throwing motion to the last frame before release. Participants made more accurate predictions when they viewed videos taken from an allocentric viewpoint compared to an egocentric viewpoint, regardless of whether they watched their own throws or another's throws. Interestingly, participants' predictions were also more accurate when they viewed a throw performed with the dominant hand than with the non-dominant hand, but only when they observed their own actions. This result provides new support for the simulation hypothesis: participants were more accurate in predicting throws they made with their own dominant hand because they used the same system to both generate and simulate these actions. The fact that participants were no more accurate in predicting the outcome of throws made with their non-dominant hand than they were in predicting throws made by another actor also suggests that people automatically make predictions that guide perception based on dominant-hand simulations, regardless of which hand they observe.
NIH COBRE P20 GM103505/NSF EPSCoR EPS-0814442

#15

Seeing red: Anger and the perception of red


Adam K. Fetterman, Michael D. Robinson & Robert D. Gordon
Center for Visual and Cognitive Neuroscience, Department of Psychology
North Dakota State University

Background. A class of metaphors links the experience of anger to perceptions of redness (e.g. "seeing red"). Whether such metaphors have significant implications for understanding perception is not known. Metaphoric representation theory contends that people think, rather than merely speak, metaphorically. Accordingly, we hypothesized that priming anger concepts, and inducing anger, would lead to increased subjective perceptions of red. Method. We conducted two experiments, with 177 participants, in which anger concepts were activated or emotional experiences induced. In Experiment 1, participants categorized anger (versus sadness) words and were then presented with a degraded color (red or blue) screen that they identified as red or blue. In Experiment 2, participants were made angry on some trials by noise blasts, with no noise blasts on control trials, following which the same color identification task was performed. Results. In both experiments, priming anger concepts (Experiment 1) or inducing anger (Experiment 2) biased participants toward perceiving the color stimuli as red, independent of the actual color presented. Conclusions. These findings extend the New Look, perceptual, metaphoric, and social cognitive literatures. Most importantly, the results suggest that emotion representation processes of a metaphoric type can be extended to the perceptual realm.
NIH COBRE P20 GM103505

#16

Neurotic individuals see things as more distant


Tianwei Liu1, Scott Ode2 & Michael D. Robinson1
1Center for Visual and Cognitive Neuroscience, Department of Psychology
North Dakota State University
2Medica Research Institute

Background. Three studies examined the novel hypothesis that the trait of neuroticism would predict perceptions consistent with a distance-enhancing dynamic. Study 1 presented participants with ambiguous time phrases (e.g., "moved forward") and found that trait negative affect (the affective core of neuroticism) predicted their interpretation in a distance-enhancing temporal direction. Study 2 asked individuals to estimate the size of words and found that individuals higher in neuroticism generally thought the words were smaller than individuals lower in neuroticism did. In Study 3, people high (but not low) in neuroticism perceived words to be shrinking faster than they were growing. Although normative effects differed by task, the link of neuroticism to greater perceptual distancing was robust across studies. The findings validate but significantly extend avoidance-motivation perspectives of individual differences in neuroticism.
NIH COBRE P20 GM103505

#17

Perceptual egocentrism in primary psychopathy


R.L. Boyd, K. Bresin & Michael D. Robinson
Center for Visual and Cognitive Neuroscience, Department of Psychology
North Dakota State University

Several theories of psychopathy link it to an egocentric mode of perceiving the world. This explanatory perspective is quite plausible given that psychopaths are viewed as callous, uncaring, and narcissistic. This perspective, though, has received insufficient research attention, particularly in basic cognitive tasks. Building on the work of Wapner and Werner (1957), an implicit measure of cognitive egocentrism was developed. Continuous variations in primary and secondary psychopathy were assessed in a sample of college undergraduates (N = 80). Individuals high in primary psychopathy exhibited cognitive egocentrism, whereas individuals low in primary psychopathy did not. On the other hand, variations in secondary psychopathy were non-predictive of performance in the task. Results are discussed in terms of theories of psychopathy, distinctions between its primary and secondary components, and the utility of modeling egocentrism in basic cognitive terms.
NIH COBRE P20 GM103505

#18

Infants' attention to object shape depends on experience


Rebecca J. Woods & Jena Schuler
Department of Human Development and Family Science
North Dakota State University

Background. Infants encode, categorize, and reason about the physical properties of objects based on shape and often ignore or fail to encode other object features. In this study, we propose that infants' attention to object shape during physical reasoning is dependent upon the reliability with which shape predicts object identity. If so, infants' attention to shape would be grounded in experiences that validate shape's stability. The current study examines the flexibility with which infants attend to shape when reasoning about objects following one of two key experiences with shape information. Method. Attention to shape as a basis for object individuation, which requires memory and the ability to reason about objects and their properties, was tested in 32 8.5-month-old infants. Prior to testing, infants participated in one of two learning trials. Half of the infants watched as an experimenter made evident the stable, rigid shape of three objects presented successively. The other infants watched as an experimenter made evident the malleability of three objects. During these demonstrations, infants were allowed to touch and look at the objects. Following the learning trials, infants' attention to shape was assessed using the narrow-screen task of Wilcox & Baillargeon. Infants saw 4 test trials in which a ball moved behind an occluding screen and a box emerged on the other side. The ball and box were identical in every way except their shape. The trajectory was reversed and repeated until the end of the trial. Infants saw the event with one of two test-trial screens. Half of the infants saw the objects hide behind a wide screen and half saw the objects hide behind a screen that was too narrow to simultaneously hide both. If infants understood that two objects were involved in the event based on the difference in shape, they should look longer at (be surprised by) the narrow-screen event, but not the wide-screen event. Results. Infants' mean looking times were analyzed using a 2 (learning trial: rigid or malleable) x 2 (test-screen size: narrow or wide) between-subjects ANOVA and planned comparisons. Results indicated that infants who saw rigid objects during the learning trials looked longer at the narrow- than the wide-screen test event, while infants who participated in the malleable-shape learning trials looked about equally at the two events. Conclusions. This outcome suggests that infants who saw the malleable objects during the learning trials ignored shape information during the task and supports the hypothesis that infants' attention to object features during physical reasoning tasks is based on recent experiences with those features.
NIH COBRE P20 GM103505

#19

Luminance and chromatic negation equally affect human and monkey face recognition in adulthood and early childhood


Kate Stevenson1, Michael Mangini1 & Benjamin Balas2
1Department of Psychology
Concordia College
2Center for Visual and Cognitive Neuroscience, Department of Psychology
North Dakota State University

Adult face processing is limited in generality as a function of experience: discrimination ability and face-specific behavioral effects are reduced for out-group faces. Nonetheless, other-species faces phylogenetically close to our own (e.g. chimpanzees) are processed by face-specific "holistic" mechanisms (Taubert, 2009). Presently, we asked whether the well-known effect of contrast negation on face recognition (Galper, 1970) is exclusive to human faces or generalizes to monkey faces. Negation disrupts face pigmentation substantially (Russell, 2007), allowing us to examine species-specific use of surface cues as a function of extended visual development.
We tested adults (N=24) and children between the ages of 3.5-5 years (N=21) on a 4AFC discrimination task. Trials consisted of four same-species faces: Three identical distractors and a non-matching target. Participants identified the target as quickly as possible using a touchscreen. Adults completed this task using four types of stimuli: (1) The original faces, (2) Luminance-negated faces: Reversed contrast but original chroma, (3) Chroma-negated faces: Reversed chroma, but original contrast, (4) Fully-negated faces: Both contrast and chroma negated. Children completed this task using only the original faces and the fully-negated stimuli.
Adults were highly accurate in all conditions. Further, adults' response latencies revealed a main effect of stimulus category (monkey RTs < human RTs) and an interaction between luminance and chroma negation, such that the original faces were discriminated more quickly than all others, independent of species. Children were also equally accurate across all conditions and displayed the same main effect of species on response time. However, children showed no evidence of a negation effect for either species. Our data thus suggest that human and monkey faces suffer equally from luminance and chroma negation in adult observers, but that a robust negation effect has yet to develop in early childhood.
NIH COBRE P20 GM103505

#20

Attention to dynamic scenes of ambiguous provocation and aggression in youth: It's not just what they look at, it's when


Wendy Troop-Gordon1, Laura Vogel-Ciernia1, Robert D. Gordon1, Elizabeth Ewing Lee1 & Kari J. Visconti2
1Center for Visual and Cognitive Neuroscience, Department of Psychology
North Dakota State University
2Department of Family and Human Development, Arizona State University

Research points to two pathways through which individual differences in attention to social cues contribute to hostile responses to perceived provocation: a) attention to aggressogenic information, resulting in bottom-up processing of cues and hostile attributions of events, and b) top-down processing characterized by little attention to relevant cues except those most proximal (i.e., recent) to the event and interpretation based on personal beliefs rather than information in the immediate context. Support for the latter comes from recent eye-tracking studies suggesting that aggressive individuals make quick, schema-based interpretations of events and then attend to schema-inconsistent information (Horsley, de Castro, & Van der Schoot, 2010; Wilkowski, Robinson, Gordon, & Troop-Gordon, 2007). However, these studies relied on static stimuli and could not address whether aggression is associated with delayed attention to cues, resulting in a lack of attention to earlier cues which might reveal benign intent. The current study addressed this limitation using videotapes of ambiguous provocation. Seventy-two children (38 girls; mean age = 9.15 years) watched 12 video clips portraying an act of ambiguous provocation. In addition to the provocateur and victim, in each scene one child was present who laughed at the provocation and one who showed concern. The time at which participants first fixated on a role (e.g., provocateur) and the total time spent observing each role were calculated. Aggression was assessed using peer-, teacher-, and parent-report measures. Peer beliefs were measured using the Peer Beliefs Inventory (Rabiner, Keane, & Mackinnon-Lewis, 1993). Although few significant correlations emerged between the total time participants spent looking at different roles and their aggression scores, a consistent association emerged between delay in first fixation to the provocateur and each measure of aggression. Furthermore, delayed processing predicted greater aggression when participants held average or high levels of antisocial beliefs and low or average levels of prosocial beliefs. These findings implicate a delay in attention to relevant information as a key predictor of aggression among youth. Cues indicating benign intent may be missed, and, in the absence of complete information, aggressive youth may rely on schemas that others are hostile and low in prosocial traits when interpreting and responding to a perceived provocation.
NIH COBRE P20 GM103505

#21

Allocation of visual attention to scenes of peer harassment accounts for individual differences in peer victimized youths' behavioral and emotional adjustment


Robert D. Gordon1, Wendy Troop-Gordon1, Elizabeth Ewing Lee1 & Kari J. Visconti2
1Center for Visual and Cognitive Neuroscience, Department of Psychology
North Dakota State University
2Department of Family and Human Development, Arizona State University

Despite extensive research into the social-cognitive correlates of aggression and psychopathology, scant attention has been paid to a core assumption of social-information processing theories: that individual differences in attention to social cues contribute to the biased processing of information underlying externalizing and internalizing syndromes. The current study used eye tracking to test the proposition that links between victimization and psychopathology are amplified among children who attend to depressogenic information (i.e., bullies and reinforcers of bullies) and are dampened among children who attend to cues indicating support for the victim (i.e., defenders). Seventy-two children (38 girls; mean age = 9.15 years) participating in a longitudinal study provided data for this investigation. Children watched 36 video clips portraying an interaction between a bully and a victim in the presence of a reinforcer of the bully and a defender of the victim. Eighteen vignettes were acted out twice, once by 4 male child actors and once by 4 female child actors. Assistants coded the location of each eye fixation, and the total time spent observing each role (e.g., bully) was computed. Relations to mental health were examined using a combination of peer-, self-, and teacher-report measures collected the previous spring and parent-report measures obtained during the lab visit. Peer victimization was associated with greater aggression when children attended to bullies and with greater internalizing problems when children attended to reinforcers. Relations between victimization and internalizing problems were reduced when children attended to defenders. These findings provide novel insights into individual differences in the relation between peer victimization and psychopathology.
NIH COBRE P20 GM103505

#22

Impact of road sign distance on driving performance of older and middle-aged drivers on rural highways


Nora D. Gayzur1, Linda K. Langley1, Alyson L. Saville1, Marcus A. Mahar1, Kimberly Vachal2, Enrique Alvarez Vazquez1 & Robert D. Gordon1
1Center for Visual and Cognitive Neuroscience, Department of Psychology
North Dakota State University
2Upper Great Plains Transportation Institute
North Dakota State University

The proportion of drivers who are 65 and over is increasing in rural areas. The purpose of the present study was to investigate the impact of road sign placement on older adults' driving performance on a simulated rural highway during day and night conditions. Destination road signs were located at a distance currently designated by the U.S. Department of Transportation (200 ft from the intersection) and at two greater distances (400 and 600 ft). Some intersections were additionally marked with intersection warning signs. Middle-aged (40-59 years) and older (60-80 years) participants were instructed to turn at an intersection marked with a particular destination city. Older adults drove more slowly than middle-aged adults. Although general speed did not vary by lighting conditions, both middle-aged and older adults went into turns with greater speed at night than during the day. Road sign distance impacted both age groups. When a destination sign was close to the intersection, drivers slowed down more at the sign, but went faster into the turn, than when the sign was at greater distances. When an intersection warning sign was present, middle-aged and older drivers slowed down more at the destination sign than when the warning sign was absent, particularly at night. These findings suggest that, in rural areas, moving the destination road sign farther from the intersection and using warning signs increases preparatory behaviors in middle-aged and older drivers and increases driving safety.
NIH COBRE P20 GM103505

#23

Video game training to enhance visual cognition in older adults


Alyson L. Saville, Linda K. Langley, Marcus Mahar, Nora D. Gayzur, Hannah Ritteman, Sara V. Wyman, Katie A. Sage, Enrique Alvarez Vasquez & Robert D. Gordon
Center for Visual and Cognitive Neuroscience, Department of Psychology
North Dakota State University

The current study examined whether video game training is a useful tool for enhancing older adults' attention skills and daily functioning. Seventeen older adults (60-81 years) completed 14 one-hour training sessions on the video game Medal of Honor: Heroes 2, a high-action, first-person shooter game. Participants showed a general improvement in gameplay skill, taking 22% less time to complete a level, dying 69% less, and killing 4% more enemies in their last session compared to their first. Modest gains were evident across all of the attention tasks (change detection, multiple-object tracking, useful field of view, rapid serial visual presentation), with increased accuracy from pre-test to post-test. Due to ceiling effects at pre-test, we did not observe training effects on daily functioning (life satisfaction, health status, instrumental activities of daily living). Thus, overall, there were small but not trivial cognitive benefits from video game play. Future work will determine whether training benefits were specific to a high-action video game or could be found with other types of video games, and whether certain characteristics (e.g., mental status, vision, initial attention performance, gaming gains) distinguished older adults who benefited from game training from those who did not.
NIH COBRE P20 GM103505