Pietro Perona received a Ph.D. in electrical engineering and computer science from the University of California, Berkeley, in 1990. From 1990 to 1991, he was a postdoctoral fellow at the Massachusetts Institute of Technology in the Laboratory for Information and Decision Systems. In the fall of 1991, Perona joined the California Institute of Technology as assistant professor. He became full professor in 1996 and the Allen E. Puckett Professor of Electrical Engineering and Computation and Neural Systems in 2006. From 1999 to 2005, Perona was the director of the National Science Foundation Center for Neuromorphic Systems Engineering. From 2005 to 2016, he led the Computation and Neural Systems program at the California Institute of Technology.

Perona’s research focuses on the computational aspects of vision and learning. He is known for the anisotropic diffusion equation, a partial differential equation that filters image noise while enhancing region boundaries. He is currently interested in visual recognition and in the visual analysis of behavior. In the early 2000s, Perona pioneered the study of visual categorization. Currently, in collaboration with colleagues Markus Meister and David Anderson, he applies machine vision to measuring and analyzing the behavior of laboratory animals as they learn complex tasks and as they engage in social behavior.

Perona is the recipient of the 2013 Longuet-Higgins Prize, the 2010 Koenderink Prize, and the 2021 PAMI Distinguished Researcher Award for fundamental contributions in computer vision. He is also the recipient of the 2003 CVPR Best Paper Award and a 1996 NSF Presidential Young Investigator Award.

The ability to understand and manipulate numbers and quantities emerges during childhood, but the mechanism through which humans acquire and develop this ability is still poorly understood. In particular, it is not known whether for a child, or a machine, acquiring such a number sense, as well as other abstract concepts, is possible without supervision from a teacher.

This question is explored through a model, where a pre-trained motor system is teaching perception. The assumption is that the learner is able to pick and place small objects and will spontaneously engage in undirected manipulation, e.g. playing with small objects. Further assumptions are that the learner’s visual system will monitor the changing arrangements of objects in the scene and will learn to predict the effects of each action by comparing perception with the efferent signal of the motor system. Perception is modelled using standard deep networks for feature extraction and classification, and gradient descent learning.
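The setup described above can be caricatured in a few lines of code. The sketch below is a hypothetical, heavily simplified stand-in for the paper's model: scenes are crude binary grids rather than images, the "deep network" is replaced by a logistic-regression classifier, and the motor system's efferent signal is reduced to a binary pick/place label. The names `render` and the grid size are illustrative assumptions, not details from the paper. The point is only to show the training signal: the learner sees a scene before and after its own action and learns, by gradient descent, to predict which action it took.

```python
import numpy as np

rng = np.random.default_rng(0)

def render(n_objects, size=8):
    # Toy "image": a flat binary grid with n_objects pixels set at random.
    img = np.zeros(size * size)
    if n_objects > 0:
        img[rng.choice(size * size, n_objects, replace=False)] = 1.0
    return img

# Build (before, after) scene pairs; label 1 = "place" adds an object,
# label 0 = "pick" removes one. This mimics comparing perception with
# the efferent copy of the motor command.
X, y = [], []
for _ in range(2000):
    n = rng.integers(1, 3)        # 1 or 2 objects before acting
    action = rng.integers(0, 2)
    pair = np.concatenate([render(n), render(n + 1 if action else n - 1)])
    X.append(pair)
    y.append(action)
X, y = np.array(X), np.array(y, dtype=float)

# Logistic regression trained by full-batch gradient descent stands in
# for the deep feature extractor and classifier.
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

acc = np.mean((p > 0.5) == y)   # training accuracy on action prediction
```

Even this linear learner solves the action-prediction task, because the before/after difference in occupied pixels carries the pick/place signal; the paper's deep network is trained on the same kind of self-generated supervision, with no labels from a teacher.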

The main finding is that, from learning the unrelated task of action prediction, an unexpected image representation emerges exhibiting regularities that foreshadow the perception of numbers and quantities. These include distinct categories for zero and the first few natural numbers, a strict ordering of the numbers, and a one-dimensional signal that correlates with numerical quantity. As a result, the model acquires the ability to estimate numerosity, i.e. the number of objects in the scene, and to subitize, i.e. recognize at a glance the exact number of objects in small scenes. Remarkably, subitization and numerosity estimation extrapolate to scenes containing many objects, far beyond the three objects used during training.
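The extrapolation claim can be illustrated with a toy model. In the sketch below, scenes are again hypothetical binary grids (not the paper's images), and the emergent one-dimensional quantity signal is replaced by a linear readout fitted by least squares on scenes with at most three objects; these are illustrative assumptions, not the paper's method. The sketch shows why a signal that is linear in the evidence can extrapolate to numerosities never seen in training.

```python
import numpy as np

rng = np.random.default_rng(1)

def render(n_objects, size=8):
    # Toy "scene": a flat binary grid with n_objects pixels set at random.
    img = np.zeros(size * size)
    if n_objects > 0:
        img[rng.choice(size * size, n_objects, replace=False)] = 1.0
    return img

# Fit a one-dimensional linear readout of numerosity using only
# scenes with 0-3 objects, mirroring the small training range.
X = np.array([render(n) for n in range(4) for _ in range(200)])
y = np.array([n for n in range(4) for _ in range(200)], dtype=float)
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# Probe extrapolation: scenes with 8 objects, far beyond the training range.
est8 = np.mean([render(8) @ w for _ in range(100)])
```

Because the fitted readout effectively counts occupied pixels, its estimate keeps growing with the number of objects and lands near 8 on the held-out scenes, despite never having seen more than three objects, which is the flavor of extrapolation reported for the learned representation.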

The conclusion is that important aspects of a facility with numbers and quantities may be learned without teacher supervision.

Literature:
Neehar Kondapaneni, Pietro Perona. A Number Sense as an Emergent Property of the Manipulating Brain, arXiv:2012.04132, 2020 – https://arxiv.org/abs/2012.04132

©2022 IARAI - INSTITUTE OF ADVANCED RESEARCH IN ARTIFICIAL INTELLIGENCE
