
 

Fall 2015

Mondays, 12:00 pm, Elliott Hall S204. Lunch will be provided.

September 21 - Definitions, descriptions, concepts

"Descriptions: An intermediate stage in memory retrieval" - Norman and Bobrow.

Abstract
It is possible to be either vague or precise in the specification of an idea. For many purposes, loose characterizations can be perfectly adequate; the degree of specificity required depends upon the purpose of the characterization and the form of alternative interpretations. In attempting to retrieve information from memory, the specification of that information can be either vague or precise: just how specific the characterization can be depends upon how much is known of the information that is being sought; how specific the characterization must be depends upon what else within memory might be specified by a weaker characterization.

In this paper we present a semiformal model of memory retrieval based upon the notion of variable levels of specification. We call the specification a description. A description of an entity is a collection of perspectives, each of which is a way of viewing that entity in terms of a previously known prototype. The level of specification of a description is provided both by the choice of prototype and by further specification of the ways in which the described entity differs from the prototype.

We postulate that retrieval starts with a description of the desired information as an initial specification of the records sought from memory. This retrieval description guides the memory search process and helps determine the suitability of retrieved records for the purpose of the retrieval. The initial description can be modified as intermediate information becomes available during the retrieval cycle. Which records are retrieved is determined both by the form of the retrieval description and the form of encoding of these records at acquisition. The effectiveness of the descriptions for retrieval is determined by two properties, discriminability and constructability. Discriminability is the ability of a description to discriminate among all possible records in memory at the time of retrieval. Constructability is the likelihood that an appropriate description will be constructed at the time retrieval is desired.
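
To make the retrieval idea concrete, here is a minimal sketch (the records, features, and matching rule are invented for illustration; the paper's model is semiformal, not code). A description is a partial specification, and it becomes discriminable when it matches exactly one record in memory:

```python
# Illustrative sketch of description-based retrieval.
# The memory contents below are hypothetical examples.

memory = [
    {"kind": "restaurant", "city": "Boston", "cuisine": "seafood"},
    {"kind": "restaurant", "city": "Boston", "cuisine": "italian"},
    {"kind": "restaurant", "city": "Chicago", "cuisine": "seafood"},
]

def retrieve(description, records):
    """Return every record consistent with a partial description."""
    return [r for r in records
            if all(r.get(k) == v for k, v in description.items())]

# A vague description fails to discriminate: two records match.
print(retrieve({"kind": "restaurant", "city": "Boston"}, memory))

# Specifying how the target differs from the prototype makes the
# description discriminable: exactly one record is retrieved.
print(retrieve({"kind": "restaurant", "city": "Boston",
                "cuisine": "seafood"}, memory))
```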

 

September 28 - Hierarchy and complexity

The Architecture of Complexity - Herbert A. Simon

A number of proposals have been advanced in recent years for the development of "general systems theory" which, abstracting from properties peculiar to physical, biological, or social systems, would be applicable to all of them. We might well feel that, while the goal is laudable, systems of such diverse kinds could hardly be expected to have any nontrivial properties in common. Metaphor and analogy can be helpful, or they can be misleading. All depends on whether the similarities the metaphor captures are significant or superficial. It may not be entirely vain, however, to search for common properties among diverse kinds of complex systems. The ideas that go by the name of cybernetics constitute, if not a theory, at least a point of view that has been proving fruitful over a wide range of application. It has been useful to look at the behavior of adaptive systems in terms of the concepts of feedback and homeostasis.

 

October 5 - Replication - Part 1

More Is Different - P.W. Anderson

"The reductionist hypothesis may still be a topic for controversy among philosophers, but among the great majority of active scientists I think it is accepted without question. The workings of our minds and bodies, and of all the animate or inaminate matter of which we have any detailed knowledge, are assumed to be controlled by the same set of fundamental laws, which except under certain extreme conditions we feel we know pretty well.

"It seems inevitable to go on uncritically to what appears at first sight to be an obvious corollary of reductionism: that if everything obeys the same fundamental laws, then the only scientists who are studying anything really fundamental are those who are working on those laws. In practice, that amounts to some astrophysicists, some elementary particle physicists, some logicians and other mathematicians, and few others ..." More ...

 

October 12 - Replication - Part 2

HARKing: Hypothesizing After the Results are Known - Norbert L. Kerr

 

October 19 - Replication - Part 3

False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant - Joseph P. Simmons et al.

In this article, we accomplish two things. First, we show that despite empirical psychologists' nominal endorsement of a low rate of false-positive findings (≤ .05), flexibility in data collection, analysis, and reporting dramatically increases actual false-positive rates. In many cases, a researcher is more likely to falsely find evidence that an effect exists than to correctly find evidence that it does not. We present computer simulations and a pair of actual experiments that demonstrate how unacceptably easy it is to accumulate (and report) statistically significant evidence for a false hypothesis. Second, we suggest a simple, low-cost, and straightforwardly effective disclosure-based solution to this problem. The solution involves six concrete requirements for authors and four guidelines for reviewers, all of which impose a minimal burden on the publication process.
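
The paper's central claim can be reproduced in a short simulation. The sketch below is illustrative, not the authors' exact settings: it draws two groups with no true difference, tests two correlated dependent variables, and counts a "finding" if either test reaches p < .05. The resulting false-positive rate lands well above the nominal .05.

```python
# Simulating how one "researcher degree of freedom" (a second,
# correlated dependent variable) inflates the false-positive rate.
# Parameters are illustrative, not taken from the paper.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, sims, r = 20, 10_000, 0.5          # per-group n, runs, DV correlation
cov = [[1, r], [r, 1]]
false_positives = 0

for _ in range(sims):
    a = rng.multivariate_normal([0, 0], cov, n)   # group A: no true effect
    b = rng.multivariate_normal([0, 0], cov, n)   # group B: no true effect
    p1 = stats.ttest_ind(a[:, 0], b[:, 0]).pvalue
    p2 = stats.ttest_ind(a[:, 1], b[:, 1]).pvalue
    if min(p1, p2) < 0.05:            # flexible reporting: either DV counts
        false_positives += 1

print(false_positives / sims)          # roughly 0.08, not the nominal 0.05
```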

 

October 26

Deep Learning - Yann LeCun, Yoshua Bengio & Geoffrey Hinton

Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.
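
As a pocket illustration of the mechanism the abstract names (a toy sketch, not the authors' systems): backpropagation computes how each layer's weights should change from the error signal passed down by the layer above it, which lets two stacked layers learn XOR, a mapping no single linear layer can represent.

```python
# A two-layer network trained with backpropagation to learn XOR.
# Architecture and hyperparameters are arbitrary toy choices.

import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)   # hidden layer
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)   # output layer
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(5000):
    h = sigmoid(X @ W1 + b1)           # forward pass, layer 1
    out = sigmoid(h @ W2 + b2)         # forward pass, layer 2
    # Backward pass: propagate the error gradient layer by layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(0)

print(out.round(2).ravel())            # typically approaches [0, 1, 1, 0]
```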

LeCun Y, Bengio Y, Hinton G (2015) Deep learning. Nature, 521(7553), 436–444. Retrieved from http://dx.doi.org/10.1038/nature14539

 

November 2 - Neural networks and their history - Distributed processing - Part 1

Parallel Distributed Processing at 25: Further Explorations in the Microstructure of Cognition - Timothy T. Rogers, and James L. McClelland

This paper introduces a special issue of Cognitive Science initiated on the 25th anniversary of the publication of Parallel Distributed Processing (PDP), a two-volume work that introduced the use of neural network models as vehicles for understanding cognition. The collection surveys the core commitments of the PDP framework, the key issues the framework has addressed, and the debates the framework has spawned, and presents viewpoints on the current status of these issues. The articles focus on both historical roots and contemporary developments in learning, optimality theory, perception, memory, language, conceptual knowledge, cognitive control, and consciousness. Here we consider the approach more generally, reviewing the original motivations, the resulting framework, and the central tenets of the underlying theory. We then evaluate the impact of PDP both on the field at large and within specific subdomains of cognitive science and consider the current role of PDP models within the broader landscape of contemporary theoretical frameworks in cognitive science. Looking to the future, we consider the implications for cognitive science of the recent success of machine learning systems called "deep networks" — systems that build on key ideas presented in the PDP volumes.
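
A minimal PDP-flavored sketch (the patterns and sizes are invented for illustration): a delta-rule pattern associator stores knowledge as connection weights over distributed patterns, so a degraded input still evokes an approximately correct output, the graceful degradation the framework emphasizes.

```python
# A delta-rule pattern associator over distributed representations.
# Input/output patterns are made-up feature vectors.

import numpy as np

inputs  = np.array([[1, 0, 1, 0], [0, 1, 0, 1]], dtype=float)
targets = np.array([[1, 0], [0, 1]], dtype=float)

W = np.zeros((4, 2))
for _ in range(50):
    for x, t in zip(inputs, targets):
        W += 0.2 * np.outer(x, t - x @ W)   # delta-rule weight update

# A degraded version of the first input (half its features missing)
# still evokes the first output pattern, at reduced strength.
print((np.array([1, 0, 0, 0]) @ W).round(2))   # ~[0.5, 0.]
```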

 

November 9 - Neural networks and their history - Distributed processing - Part 2

Parallel Distributed Processing at 25: Further Explorations in the Microstructure of Cognition - Timothy T. Rogers, and James L. McClelland

Discussion of the Rogers and McClelland paper continues; see the abstract under November 2.

 

November 16 - Perception and Art - Part 1

Is artists' perception more veridical? - Florian Perdreau and Patrick Cavanagh

Figurative artists spend years practicing their skills, analyzing objects and scenes in order to reproduce them accurately. In their drawings, they must depict distant objects as smaller and shadowed surfaces as darker, just as they are at the level of the retinal image. However, this retinal representation is not what we consciously see. Instead, the visual system corrects for distance, changes in ambient illumination and viewpoint so that our conscious percept of the world remains stable. Does extensive experience modify an artist's visual system so that he or she can access this retinal, veridical image better than a non-artist? We have conducted three experiments testing artists' perceptual abilities and comparing them to those of non-artists. The subjects first attempted to match the size or the luminance of a test stimulus to a standard that could be presented either on a perspective grid (size) or within a cast shadow. They were explicitly instructed to ignore these surrounding contexts and to judge the stimulus as if it were seen in isolation. Finally, in a third task, the subjects searched for an L-shape that either contacted or did not contact an adjacent circle. When in contact, the L-shape appeared as an occluded square behind a circle. This high-level completion camouflaged the L-shape unless subjects could access the raw image. However, in all these tasks, artists were as much affected by visual context as novices. We concluded that artists have no special abilities to access early, non-corrected visual representations and that better accuracy in artists' drawings cannot be attributed to the effects of expertise on early visual processes.

 

 

November 23 - Perception and Art - Part 2

"The Medawar Lecture 2001: Knowledge for vision: vision for knowledge" by Richard L. Gregory

An evolutionary development of perception is suggested — from passive reception to active perception to explicit conception — earlier stages being largely retained and incorporated in later species. A key is innate and then individually learned knowledge, giving meaning to sensory signals. Inappropriate or misapplied knowledge produces rich cognitive phenomena of illusions, revealing normally hidden processes of vision, tentatively classified here in a 'peeriodic table'. Phenomena of physiology are distinguished from phenomena of general rules and specific object knowledge. It is concluded that vision uses implicit knowledge, and provides knowledge for intelligent behaviour and for explicit conceptual understanding including science.

 

December 7

"The Adaptive Character of Thought" - John R. Anderson

"This important volume examines the phenomena of cognition from an adaptive perspective. Rather than adhering to the typical practice in cognitive psychology of trying to predict behavior from a model of cognitive mechanisms, this book develops a number of models that successfully predict behavior from the structure of the environment to which cognition is adapted. The methodology — called rational analysis — involves specifying the information-processing goals of the system, the structure of the environment, and the computational constraints on the system, allowing predictions about behavior to be made by determining what behavior would be optimal under these assumptions.The Adaptive Character of Thought applies this methodology in great detail to four cognitive phenomena: memory, categorization, causal inference, and problem solving." — Amazon review

 



Updated December 11, 2015