Invited Speakers

The brain of the digital native: Better, faster, dumber?

Paul Atchley
Department of Psychology
University of Kansas

The average age of mobile phone adoption is now eight years old. Will these "digital natives" reap benefits or pay costs for their wired lives? We adapt our brains to fit the demands we place on them. For example, simple visual capacities for understanding quantity have been adapted to perform advanced mathematics, and our ability to develop and use language has led us to consider philosophical concepts that have no basis in physical reality. More recently, our capacity to switch between tasks has been pushed to its limit to allow us to "multitask" in ways we were never "designed" to do. Approaching and exceeding these hardwired limits has costs: reduced safety despite increased feelings of security, reduced face-to-face interaction despite feeling more "connected," and reduced cognitive capacity despite feeling smarter than ever. These patterns raise the question, "Is technology making us less smart?" In my talk, I will explore this issue from a variety of directions, using data from our own work and the work of others. Using the example of task switching between language and attention, I will discuss how sharing neurophysiological resources may produce grave consequences for safety. I will describe our work using delay discounting to assess the compulsiveness of social information, raising the question of whether the reward systems of digital natives are being re-wired for task-switching rather than task completion. Finally, I will present an alternative to the wired culture by discussing our latest work showing that sustained exposure to natural environments may boost our capacity for advanced cognitive processes such as creative verbal output.
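
For readers unfamiliar with delay discounting, the method is conventionally modeled with a hyperbolic value function, V = A / (1 + kD) (Mazur, 1987). The Python sketch below is purely illustrative (the indifference points are invented, and this is not the authors' procedure); it shows how a discount rate k would be fit to such data:

    import numpy as np
    from scipy.optimize import curve_fit

    # Hyperbolic discounting: the subjective value of a delayed reward
    # falls off as V = A / (1 + k*D), where D is the delay and k indexes
    # how steeply the reward is discounted.
    def hyperbolic_value(delay, k, amount=100.0):
        return amount / (1.0 + k * delay)

    # Hypothetical indifference points (delay in days, value in dollars);
    # real studies derive these from participants' choices.
    delays = np.array([0.0, 7.0, 30.0, 90.0, 180.0, 365.0])
    values = np.array([100.0, 80.0, 55.0, 35.0, 22.0, 15.0])

    (k_hat,), _ = curve_fit(lambda d, k: hyperbolic_value(d, k),
                            delays, values, p0=[0.01])
    print(f"Estimated discount rate k = {k_hat:.4f} per day")

A larger fitted k corresponds to steeper discounting of delayed rewards, which is how "compulsiveness" of a reward such as social information can be indexed.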


Maintaining feature bindings in visual working memory: Attention shifts and representation volatility

Melissa R. Beck and Amanda E. van Lamsweerde
Department of Psychology
Louisiana State University

Storing an integrated and complete representation of an object in visual working memory (VWM) involves storing the features of the object (e.g., red and round for an apple) and binding those features together. It remains unclear from previous research whether attention is needed to maintain the feature bindings of an object in VWM. Accordingly, there are two competing hypotheses of feature binding memory: the attention hypothesis, which holds that attention is needed to maintain feature bindings in VWM, and the volatile representation hypothesis, which holds that feature bindings in VWM are volatile and easily overwritten but do not require sustained attention. The typical design used to test these hypotheses presents participants with a task that occupies attention during the maintenance period of a change detection task, but the attention-occupying tasks vary greatly across studies and do not carefully control attention. In the current study, we measured attention during encoding by tracking participants' eye movements and made the change contingent on the number of objects fixated. This allowed us to control when a change occurred based on when the object was attended. If attention is required to maintain bindings, detection of binding changes should be better for the object in the focus of attention at the time of the change than for objects attended prior to the change. However, we found no deficit in the ability to detect binding changes relative to the ability to detect a single feature change, even for items attended several objects prior to the change. In contrast, binding change detection was impaired when the encoding conditions supported only a volatile binding representation. Therefore, the current studies support the volatile representation hypothesis: the volatility of the representation, rather than attention, appears to determine whether binding information will be effectively maintained in VWM.
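
As a sketch of the gaze-contingent logic described above (hypothetical names and thresholds; not the authors' code), the display change fires only once a criterion number of distinct objects has been fixated:

    # Hypothetical gaze-contingent trigger: fire the display change only
    # after a criterion number of distinct objects has been fixated.
    def should_change(fixation_stream, n_objects_before_change):
        fixated = []
        for obj in fixation_stream:                # objects fixated, in order
            if obj is not None and obj not in fixated:
                fixated.append(obj)                # count each object once
            if len(fixated) >= n_objects_before_change:
                return True                        # trigger the change now
        return False

    # Two distinct objects fixated ("apple", "cup") -> change triggers: True
    print(should_change(["apple", "apple", "cup"], 2))

Making the change contingent on fixation count in this way is what allows changes to be timed relative to when each object was attended.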


The role of form processing in the perception of rotational motion

Gideon Caplovitz
Department of Psychology
University of Nevada, Reno

Although form and motion processes have long been thought to be mediated by independent neural mechanisms, a wealth of psychophysical research has demonstrated that form and motion processes interact in numerous and complex ways. A fundamental principle underlying these interactions is that the processing of form provides an additional source of information that is used to overcome ambiguities intrinsic to the basic processing of motion. This is particularly true in the case of an object that is rotating. Unlike translation, in which every point of an object moves with the same velocity, when an object rotates, every point along its contour moves with a distinct velocity. From this set of different motion signals, the visual system faces the challenge of constructing a unified percept of a rotating object: not a trivial task. In this talk, I will describe a series of psychophysical and fMRI experiments that have investigated the role of form processing in the context of rotational object motion. The fundamental take-home message will be that, by integrating distinct form features over space and time, the visual system is able to derive a motion signal that provides a size-invariant cue (angular velocity) to the speed of a rotating object. I will argue that this spatiotemporal form integration represents a fundamental characteristic of visual processing that underlies both object and motion perception.
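
The size-invariance of angular velocity can be made concrete with a small numerical illustration (ours, not from the talk): contour points at different radii move at different local speeds (v = ωr), yet all share the same angular velocity ω, and doubling the object's size leaves ω unchanged:

    import numpy as np

    # For a rigid object rotating at angular velocity omega (rad/s), a
    # contour point at radius r moves with local speed v = omega * r, so
    # local speeds differ across the contour.
    omega = 2.0                             # true angular velocity
    radii = np.array([0.5, 1.0, 2.0])       # contour points at different radii
    speeds = omega * radii                  # local speeds differ: 1.0, 2.0, 4.0

    print(speeds / radii)                   # dividing out radius recovers omega
    print((2 * speeds) / (2 * radii))       # object twice as large: same omega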


The role of holistic processing in the development of specialized face perception in infants

Cara Cashon
Department of Psychological and Brain Sciences
University of Louisville

During the first year of life, infants become attuned to the world around them. Past research indicates that infants start out open to a wide range of speech sounds but, by the end of the first year, begin to specialize in the sounds of their native language. Recent research indicates that infants' face perception system also becomes specialized during the first year: infants' face discrimination and face recognition abilities become attuned to human, upright, own-race faces by the end of the first year. In our work, we have been exploring how infants process such faces and how that processing changes with development. When adults process an upright, own-race face, they process the relations between features (e.g., holistic and second-order configural information); they do not do so for inverted or other-race faces. In this talk, I will discuss several studies in which we investigated the development of holistic processing of own- versus other-race faces and upright versus inverted faces in infants. Although the bulk of the studies discussed will focus on typically developing infants, I will also discuss a recent extension of our work to infants and toddlers with Williams syndrome, a rare genetic developmental disability associated with extreme interest in people and faces.


Higher order modulation of the chromatic visual evoked potential

Michael Crognale
Department of Psychology
University of Nevada, Reno

The visual evoked potential (VEP) is a recording of massed neural activity in the brain in response to visual stimulation. The VEP has been shown to be a convenient, non-invasive, and objective indicator of visual function in humans. VEP measures can be quite rapid and sensitive, with results in agreement with slower behavioral methods. These characteristics make the VEP ideally suited for clinical evaluation of visual function in non-verbal populations such as infants. The VEP can take many forms, and each type of recording can provide different and complementary information. For example, the chromatic onset VEP (cVEP) is useful for detecting anomalies in color vision, and has been shown to be particularly sensitive to acquired changes in color vision from pathology such as diabetes. However, one concern for the clinical application of the VEP to infant visual function is the effect of attentional state. Previous research has shown that some VEP recordings (e.g., the achromatic pattern-reversal VEP) are prone to modulation by attention. Consequently, clinical recommendations are to somehow monitor attentional state, which is difficult. We have therefore conducted a series of experiments to quantify the effects of attention on the cVEP. Our results indicate that the cVEP is in fact largely robust to attentional modulation, obviating the need to monitor attention in the clinic for this measure. These results also suggest that the cVEP arises at a cortical stage that is not easily controlled by feedback from higher-order mechanisms. We report here the results of these experiments as well as some new findings demonstrating that even the cVEP can be modulated by higher cortical areas using techniques that have been shown to gate sensory input (e.g., hypnosis).
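
For context, the "massed neural activity" in a VEP is typically extracted by averaging stimulus-locked EEG epochs. The sketch below uses simulated data and a textbook averaging scheme, not the authors' recording pipeline:

    import numpy as np

    fs = 1000                                  # sampling rate (Hz), assumed
    t = np.arange(-0.1, 0.5, 1 / fs)           # epoch from -100 to +500 ms
    rng = np.random.default_rng(0)

    # Simulated data: a negativity near 120 ms buried in trial noise.
    component = -5e-6 * np.exp(-((t - 0.12) ** 2) / (2 * 0.02**2))
    epochs = component + rng.normal(0.0, 10e-6, (200, t.size))

    vep = epochs.mean(axis=0)                  # averaging shrinks noise ~1/sqrt(N)
    window = (t >= 0.08) & (t <= 0.16)         # latency window for the component
    print(f"Peak amplitude: {vep[window].min() * 1e6:.2f} microvolts")

Comparing such component amplitudes across attention conditions is one generic way an attentional modulation (or its absence) can be quantified.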


Spatial and temporal dynamics of auditory attention

Edward Golob
Department of Psychology and Program in Neuroscience
Tulane University

Spatial hearing is particularly useful for orienting attention to environmental events because it can detect sounds coming from any direction, often from behind obstructions or at a substantial distance. As part of the orienting response, spatial hearing can also supply information on where to orient the eyes for detailed examination of an event. The need for these attentional shifts in the first place may be a byproduct of limited attentional capacity. Behavioral studies suggest that capacity limitations can be expressed as spatial gradients centered on an attended location, and these gradients are observed in both the visual and auditory modalities. Here I will present findings from our lab on how cortical processing of sound location is influenced by spatial attention. In a spatial target detection task, cortical event-related potentials to sounds begin to show gradient-like patterns after ~200 ms that last for at least 1 second. The gradient patterns develop in several distinct phases, which likely reflect different cortical sources engaged over time. Lastly, the impact of non-spatial information on spatial event-related potential gradients will be examined in the context of auditory objects. Taken together, these observations indicate prolonged attentional influences on auditory spatial processing, and bear on the interplay of top-down and bottom-up coding in the human brain.
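
A toy version of such a gradient (illustrative only; the Gaussian form and parameters are our assumption, not the lab's model) treats attention as a gain that falls off with angular distance from the attended location:

    import numpy as np

    # Gaussian attentional gradient: gain is maximal at the attended
    # azimuth and falls off with angular distance from it.
    def gradient_gain(azimuth_deg, attended_deg=0.0, sigma_deg=30.0):
        d = azimuth_deg - attended_deg
        return np.exp(-(d**2) / (2 * sigma_deg**2))

    for az in [0, 30, 60, 90]:                 # probe locations (degrees)
        print(f"{az:3d} deg -> gain {gradient_gain(az):.2f}")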


The need for an integrative theory of film perception and comprehension

Lester Loschky
Department of Psychology
Kansas State University

There has been a recent surge of interest in both the perception of film, from researchers in scene perception, and the comprehension of film, from researchers in discourse comprehension. This interest has led to a number of recent studies of film perception and film comprehension, but what is lacking is a coherent theoretical base capable of integrating findings across both domains. Research and theory in scene perception offer many important insights that can speak to what should be in the front-end of such a theory. Research and theory in discourse comprehension can offer key elements of the back-end. Importantly, however, it is at the border of these front- and back-ends, namely the no-man's-land between perception and comprehension, that some of the most novel insights and predictions can be generated. I will discuss these ideas in the context of a brief clip from the James Bond movie "Moonraker," and sketch out some questions that such a theory should try to explain.


Distinct representation mechanisms of visual features and faces in the Fusiform Face Area

Ming Meng
Department of Psychological and Brain Sciences
Dartmouth College

A hypothetical "grandmother cell" should respond equally to a high-contrast and a low-contrast picture of grandmother, as long as the contrast remains above recognition threshold. Similarly, if the Fusiform Face Area (FFA; Kanwisher, McDermott, & Chun, 1997) is truly face-selective, its response function should be independent of low-level visual features such as contrast. It is known that contrast-negated faces lead to reduced neural activity in the FFA (George et al., 1999; Gilad, Meng, & Sinha, 2009; Ohayon, Freiwald, & Tsao, 2012). Moreover, low-level stimulus features in general modulate FFA activation (Yue et al., 2011). The relationship between feature processing and feature-invariant face representation in the inferior temporal (IT) cortex remains unclear. Here we examine both the average BOLD response and multivariate patterns of fMRI data to ask how category-specific (face versus house) and non-specific (stimulus contrast) information is represented in human IT. The average BOLD response in the FFA is modulated by the contrast of faces, but not houses, while the Parahippocampal Place Area (PPA; Epstein & Kanwisher, 1998) is modulated by the contrast of both stimulus types. Interestingly, contrast can be decoded from activation patterns in the FFA and PPA only at near-chance levels, whereas categorical information can be decoded from the same patterns at levels significantly above chance. Our results agree with previous studies in showing that both face-specific and non-specific information is represented in the FFA. However, stimulus contrast and category-specific information are encoded in qualitatively different ways, suggesting distinct mechanisms underlying feature processing and face/object representation.
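
The pattern-decoding logic referred to here can be sketched as follows (simulated data and a generic linear classifier, not the authors' analysis): cross-validated classification accuracy on voxel patterns is compared with chance, which is 50% for two categories:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_trials, n_voxels = 80, 100
    labels = np.repeat([0, 1], n_trials // 2)     # 0 = face, 1 = house
    patterns = rng.normal(0.0, 1.0, (n_trials, n_voxels))
    patterns[labels == 1, :10] += 0.5             # weak simulated category signal

    clf = LogisticRegression(max_iter=1000)
    acc = cross_val_score(clf, patterns, labels, cv=5).mean()
    print(f"Cross-validated decoding accuracy: {acc:.2f} (chance = 0.50)")

In the same framework, decoding contrast (high versus low) rather than category would test whether contrast information is carried by the spatial pattern of activation.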


The visual and haptic perception of 3-D object shape

J. Farley Norman
Department of Psychology
Western Kentucky University

Visual 3-D shape perception has been the subject of active scientific investigation for the past 175 years (since Wheatstone, 1838). Haptic (i.e., active touch) solid shape perception has been studied for 50 years (since the pioneering research of J. J. Gibson and colleagues in the 1960s). Given that all human adults are subject to aging, and that aging is known to be associated with atrophy and deterioration in the physiological functioning of the cerebral cortex, it would not be surprising to find that increases in age negatively affect object shape perception and discrimination. Over the past 14 years, my laboratory has intensively studied aging and 3-D shape perception. Our psychophysical experiments have investigated both vision and active touch/haptics. Although there are a few exceptions, we have found that the ability to perceive 3-D object shape is remarkably unaffected by increases in age, despite the fact that aging does lead to significant and large reductions in basic sensory abilities (e.g., both visual and tactile acuity decrease significantly with age). Our finding of preserved visual and haptic functionality in older adults demonstrates that common stereotypes about older adults are inaccurate.


Neural correlates of social exclusion

Catherine Norris
Department of Psychological and Brain Sciences
Dartmouth College

Humans are social creatures. Throughout life we rely on relationships with others to meet fundamental needs for food and safety, as well as to fulfill a basic need to belong (Baumeister & Leary, 1995). When our relationships are threatened in some way, we often experience what has become known as social pain. In fact, past research has demonstrated that social pain activates brain regions implicated in the neural network associated with physical pain, in particular the dorsal anterior cingulate cortex (dACC; Eisenberger, Lieberman, & Williams, 2003). This talk will expand on these ideas by exploring what studying the brain can tell us about the effects of social exclusion on (a) how we process social and emotional information and (b) attention to and memory for social cues, as well as (c) how individual differences (e.g., in self-esteem) can moderate these effects. Specifically, both chronic loneliness (Cacioppo, Norris, et al., 2009) and acute social exclusion reduce activation of regions of the social brain network in response to unpleasant images of other people, suggesting that the experience of social pain may decrease the deployment of theory-of-mind or perspective-taking processes when confronting distressed peers. Furthermore, individuals with relatively low self-esteem exhibit better memory for social rejection, and event-related brain potentials (ERPs) have shown that self-esteem moderates neural responses to acute rejection and acceptance experiences (Norris & Caughey, in prep). Thus, studying the brain can shed light on the downstream consequences of social exclusion and how they differ across individuals. I will further argue that although researchers have tended to lump different forms of social exclusion into a single category, the extant data suggest that chronic forms of exclusion (like loneliness) differ from more acute forms (like rejection), and that our focus going forward should be to delineate the psychological processes that are shared versus unique across different forms of exclusion.


Tuning into face gender and race information during infancy

Paul C. Quinn
Department of Psychology
University of Delaware

A program of research will be reviewed that investigates how infants' face processing is tuned by experience with different classes of faces early in development. The research reveals that different degrees of exposure to the categories of gender and race affect how infants organize faces into social groupings, as well as how they attend to and recognize individual faces within these general classes. In particular, early in development, infants may process a broad range of faces from different races and genders with equal facility. As infants develop and are selectively exposed to a limited number of face categories (i.e., their own race and the gender of the primary caregiver), they come to demonstrate certain processing differences for those predominantly experienced categories relative to categories of lesser experience (i.e., increased visual attention, superior recognition, and categorization as opposed to categorical perception). Implications for theories of the acquisition of face expertise, as well as for the origin of social biases that lead to stereotyping and prejudicial belief systems, will be discussed.


How do perception, cognition, and action interact in a complex visual environment?

Joo-Hyun Song
Department of Cognitive, Linguistic and Psychological Sciences
Brown University

Most real-world visual scenes are complex and crowded: instead of a single isolated object, multiple objects compete for attention and directed action. Thus, successful interaction with this complex external world requires seamless coordination among multiple brain systems. To date, this integrated process has typically been broken down into three general components (cognition, perception, and action), each studied independently rather than as part of an integrated whole. Furthermore, the motor system is at present mostly seen merely as a tool that implements action plans chosen by cognitive processes. However, we have demonstrated that visually guided action tasks not only provide insight into visuo-motor behavior itself, but also reveal glimpses of otherwise hidden yet significant dynamic internal representations. We have also shown that motor areas known to be involved in planning and executing eye and hand movements are critically involved in higher-level cognitive processes. Taken together, these findings indicate that perception, cognition, and action are highly flexible and interactive processes.


Adaptation and visual experience

Michael Webster
Department of Psychology
University of Nevada, Reno

Visual perception is constantly regulated by processes of adaptation that adjust sensitivity in response to the current stimulus context. In this talk I will review studies in which we have explored how adaptation adjusts to natural variations in the visual environment (e.g., in the colors or faces we are exposed to) or to natural variations in the observer (e.g., to compensate for the observer's spatial and spectral sensitivity limits). I will also consider the consequences of adaptation for perception and performance when observers are confronted with unnatural environments, such as a radiologist inspecting medical images. To the extent that the perceptual changes induced by adaptation are known, images from new environments can be "pre-adapted" to match the visual coding of the observer. This provides a novel technique for optimizing the visual information in images and for exploring the functional consequences of adaptation, by measuring performance with images that simulate theoretically complete adaptation. It also provides a common framework for evaluating the effects of stimulus versus observer variations in perception. These effects will be illustrated by considering how color perception is predicted to vary when the same observer is exposed to different environments or different observers are exposed to the same environment.
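
One simple, standard model of this kind of sensitivity adjustment is von Kries scaling, in which each channel's gain is set by the reciprocal of its mean response to the environment; "pre-adapting" an image can then be approximated by applying those gains before display. A minimal sketch (ours, not the speaker's implementation; the values are invented):

    import numpy as np

    # Von Kries-style adaptation: each channel's response is normalized by
    # that channel's mean over the prevailing environment, so the average
    # stimulus is mapped toward neutral.
    def von_kries(rgb, environment_mean):
        return rgb / environment_mean          # channel-wise gain control

    env = np.array([0.8, 0.5, 0.3])            # mean R, G, B of an environment
    pixel = np.array([0.6, 0.5, 0.4])
    print(von_kries(pixel, env))               # appearance after adaptation

Under this scheme, the same pixel yields different adapted responses in different environments, and the same environment yields different responses for observers with different channel sensitivities, which is the stimulus-versus-observer comparison described above.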