As we view the world through our senses, most of the incoming information is discarded; some is kept, packaged into mental representations that can be stored in memory. This process is analogous to film editing: raw camera footage is cut down to a compact series of shots and scenes that preserve only the most important information.
Our goal is to understand the neural mechanisms that edit our sense data into remembered percepts. These mechanisms include attentional filters that shape and restrict incoming information, and memory storage processes that encode information at varying levels of precision according to task relevance and memory load. We are especially interested in how these processes interact in both directions, i.e., how attention affects encoding and how encoding affects attention.
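The interplay of attentional weighting and load-dependent precision can be illustrated with a toy resource model. This is a minimal sketch for illustration only, not our actual simulations; the `encode` function, its parameters, and the Gaussian-noise encoding scheme are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(items, relevance, capacity=4.0):
    """Toy sketch: attention-gated encoding with load-dependent precision.

    items:     true feature values of the stimuli (e.g., orientations)
    relevance: attentional weights, one per item (higher = more task-relevant)
    capacity:  total encoding resource shared across attended items
    """
    weights = relevance / relevance.sum()   # attention allocates a fixed resource
    precision = capacity * weights          # more items or lower relevance -> lower precision
    noise_sd = 1.0 / np.sqrt(precision)     # encoding noise grows as precision falls
    return rng.normal(items, noise_sd)      # stored percepts are noisy copies of the input

# Example: three stimuli, with the first cued as most task-relevant
items = np.array([10.0, 45.0, 80.0])
relevance = np.array([0.7, 0.2, 0.1])
print(encode(items, relevance))
```

In this sketch, increasing the number of items (or shifting attention away from an item) lowers its encoding precision, so its stored representation becomes noisier, a simple stand-in for the load and relevance effects described above.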
These mechanisms underlie a broad variety of phenomena in visual cognition, such as attentional cueing, attentional capture, the attentional blink, episodic (event-based) memory, visual working memory, and conscious awareness. We incorporate data from all of these literatures when building neural simulations of attention and memory processes, and we use EEG recordings as an additional constraint on our models. These models in turn inspire new questions that we test experimentally, producing a tight feedback loop between theory and data.
Our work in the Department of Psychology at Penn State University (University Park campus) is supported by the National Science Foundation, the Applied Research Laboratory at Penn State, and the Office of Naval Research. You can read more about specific projects below.