
FALL 2017 Colloquia

Mondays, 12:00 - 1:30 pm

September 11

CCS colloquium: "Intro to the NRT grant and 5-minute presentations by the NRT fellows"

September 18

Sheng He, Psychology

"Studying Human Vision using Invisible Images"

 

September 25

Paul Schrater, Psychology and Computer Science and Engineering

"Merging AI, Behavior and Neural data: using AI models to structure nonparametic statistical analyses of behavioral and neural data."

 

 

October 2

Hyun Soo Park, Computer Science and Engineering

"Towards Learning Skills from First Person Demonstration"

Abstract & Bio

We learn sophisticated skills, e.g., cooking, forehand strokes, and social signaling, from demonstrations by others. A first person camera that records such actions in situ opens up a new opportunity to computationally analyze subtle skills, and further to train personalized robots. In this talk, I will present my team's efforts to measure, model, and predict physical and social skills revealed in first person video. (1) A person exerts his/her intention by applying physical force and torque to scenes and objects, which results in visual sensation. We leverage this first person visual sensation to precisely compute the force and torque that the first person experienced, by integrating visual semantics, 3D reconstruction, and inverse optimal control. Such visual sensation also allows association with our past experiences, which eventually provides a strong cue for predicting future activities. (2) When interacting with other people, social attention is a medium that controls group behaviors, e.g., how people form a group and move. We learn the geometric and visual relationship between group behaviors and social attention measured from first person cameras. Based on the learned relationship, we derive a predictive model to localize social attention from a third person view. (3) Finally, I will introduce a new multiview camera system that produces unprecedented pixel density for measuring skill performance.

Bio:
Hyun Soo Park is an Assistant Professor in the Department of Computer Science and Engineering at the University of Minnesota. He is interested in understanding human visual sensorimotor behaviors from first person cameras. Prior to joining UMN, he was a Postdoctoral Fellow in the GRASP Lab at the University of Pennsylvania. He earned his Ph.D. from Carnegie Mellon University.

Webpage:
http://www-users.cs.umn.edu/~hspark/

Relevant papers:
Park, Hwang, and Shi, "Force from Motion: Decoding Physical Sensation in a First Person Video", CVPR 2016

Park and Shi, "Social Saliency Prediction", CVPR 2015

Park, Hwang, Niu, and Shi, "Egocentric Future Localization", CVPR 2016

 

October 9

Daniel Kersten, Psychology

"Data and speculations on the computational functions of feedback to human V1."

 

 

October 16

Cheryl Olman (Olman Lab)

"What do we expect from fMRI?"

Functional MRI is clearly a useful tool in neuroscience, and the data are a great deal of fun to acquire and analyze. In the last 5 years, it has become routine to acquire images with sub-millimeter resolution over large enough swaths of cortex to be actually useful for a range of cognitive and perceptual questions. While these developments are exciting and encouraging, scrutiny of standard data acquisition, analysis and interpretation procedures does raise questions about what the data can really tell us about the underlying neural responses we want to understand. This presentation will be essentially a guided discussion -- I will present beautiful images and recent results from fMRI research and we will have a frank discussion about what we think fMRI is useful for, what kinds of information we think fMRI will never be able to access, and what caveats we should bear in mind as we read the literature.

 

 

October 23

Barbara Shinn-Cunningham, Boston University

"Cortical networks for controlling auditory (vs. visual) attention?"

 

 

October 30

Gordon Legge, Psychology

 

 

November 6

Zachary Port, Bethel University

"The Rider and the Elephant: An Exploration of Moral Affordance Theory"

Abstract: While moral psychologists have long posited decision-making as a dichotomy between the rational "rider" and the emotional "elephant," this simple two-fold system neglects a large area of the perceptual literature focused on the inferential processes that are deeply wedded to the pairing of decision-making and action. In a fusion of Gibson's affordance theory, medieval Islamic thought, and Haidt's moral detectors, we will examine a perceptual extension of the rider-and-elephant metaphor to see if there is a third faculty by which we make decisions.

 

 

November 13
Sha Li, Jiang Lab, Psychology

"Statistical learning of simulated X-ray images: dissociation between "tumor" detection and "tumor" discrimination"

 

 

November 20
Andrew Oxenham, Otolaryngology, Head and Neck Surgery

"Understanding speech in noise: Are musicians special?"

 


December 4 - Elliott N639

Dr. Natalia Zaretskaya, University of Tübingen, Germany
Hosted by Engel Lab

"Parietal cortex and subjective visual experience"

 

 


December 11:
Kendrick Kay, Neuroscience
Elliott Hall N219

"A perspective on models of neural information processing"
In this somewhat informal talk, which is geared to elicit discussion, I'll start with some general principles for models of neural information processing (including deep neural networks), mention some technical fMRI data issues that we are investigating (high-resolution anatomy, function, and vein-related effects), and then discuss some recent fMRI and ECoG data pertaining to flexible top-down modulation of responses in high-level visual cortex.

