1

Computational properties of interhemispheric communication between abstract and specific visual-form subsystems

David R. Andersen & Chad J. Marsolek

Elliott Hall, N218, 75 East River Rd, Minneapolis, MN 55455, E-mail: dandrese@levels.psych.umn.edu, Tel. 612-626-1546

The neural processing subsystems underlying vision are only weakly modular. We investigated implications of this neural architecture for visual-form recognition subsystems and interhemispheric communication between them. In a behavioral experiment, participants judged whether two visual dot patterns were the same exemplar (cf. "p" and "p") or not (cf. "p" and "P") more efficiently when the comparison items were presented directly to the same hemisphere than to different hemispheres. However, they judged whether patterns belonged to the same category of shape (cf. "p" and "P") or not (cf. "p" and "S") more efficiently when the items were presented directly to different hemispheres than to the same hemisphere. Concurrently developed neural network models with simulated hemispheres were trained to perform the same tasks as in the human behavioral experiment. Explanations for the human behavioral results are related to architectural and functional aspects of the networks.
2

Binding What, Where, and When in Object Identification

Michael D. Anes, Jacqueline Liederman, and Takeo Watanabe

Vision Science Laboratory, Boston University, Department of Psychology, michanes@bu.edu

Two priming experiments investigated object representations with several conflicting attributes. Participants identified a single numeral after seeing moving numerals. Numeral identity (same as/different from the moving numerals) and location (same as/shifted from the trajectory endpoint) were manipulated in both experiments. In Experiment 2, multiple visual features (numeral frame shape, frame color, and font) were also manipulated simultaneously. Reaction time increased after identity and location changes, with no interaction of these factors in either experiment. Changing multiple visual features had significant negative effects, unlike single-feature changes in other tasks. Location by feature interactions were obtained: identification was facilitated by a) holding features constant if identity and location provided "same object" evidence and b) changing features if identity and location indicated a "new object". Experiment 3 manipulates temporal parameters of the task. We suggest that a) limitations exist in binding some dorsal and ventral attributes, yet b) interactive processing of other attributes provides redundancy gain in cases of both perceptual stability and disruption.


3

An attractor field model of face representation: Effects of typicality and image morphing

Marian Stewart Bartlett, James W. Tanaka

Salk Institute Marian Stewart Bartlett, Ph.D. Computational Neurobiology Lab The Salk Institute 10010 N. Torrey Pines Road La Jolla, CA 92037 marni@salk.edu, Phone: (619)453-4100 x1420, Fax: (619)587-0417 http://www.cnl.salk.edu/~marni

A morphed face image at the midpoint between a typical face and an atypical face tends to be perceived as more similar to the atypical parent than the typical parent (Tanaka, Kremen, & Giles, 1997). One account of this atypicality bias is provided by the hypothesis that face representations are characterized as fields of attraction in face space. Atypical faces have larger attractor fields than typical faces since they are farther from the origin of face space where the density of faces is much lower. We test this hypothesis on a set of graylevel face images. We first demonstrate that feedforward models based on principal component analysis, which have accounted for other face perception phenomena, cannot account for the atypicality bias. Next we show that the density of face space is greater near typical faces than atypical faces, and typical faces are closer to the origin of face space than atypical faces. Finally, an attractor network model of face representations is implemented, and the attractor fields are examined by presenting morphed faces to the network.
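As an editorial illustration of the face-space measurements described above, the sketch below builds a PCA "face space" and computes the two quantities at issue: distance from the origin (a typicality proxy) and local density. It assumes a placeholder image matrix, number of components, and neighbourhood size rather than the authors' stimuli or parameters.

    # Sketch of PCA face-space measures: distance from the origin (typicality)
    # and local density. The data are random placeholders standing in for
    # vectorized gray-level face images.
    import numpy as np

    rng = np.random.default_rng(0)
    faces = rng.normal(size=(100, 64 * 64))        # hypothetical vectorized images

    # Build the face space from centered images (principal component coordinates).
    mean_face = faces.mean(axis=0)
    centered = faces - mean_face
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    coords = centered @ vt[:20].T                  # coordinates on first 20 PCs

    # Typicality proxy: distance from the origin of face space (the mean face).
    dist_from_origin = np.linalg.norm(coords, axis=1)

    # Local density proxy: mean distance to the k nearest neighbours
    # (smaller = denser region of face space).
    k = 5
    pairwise = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
    knn_dist = np.sort(pairwise, axis=1)[:, 1:k + 1].mean(axis=1)

    # The density account above predicts that atypical faces (far from the origin)
    # lie in sparser regions; with real images one would test for a positive
    # correlation between these two measures.
    print(np.corrcoef(dist_from_origin, knn_dist)[0, 1])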
4

Dissociable Mechanisms for Priming of Planar-Rotated Unfamiliar Objects

E. Darcy Burgund & Chad J. Marsolek

Department of Psychology, University of Minnesota, 75 East River Road, Minneapolis, MN 55455, dburgund@levels.psych.umn.edu, Tel. (612) 626-0807, (612) 626-1546, Fax: (612) 626-2079

Typically, recognition is impaired when objects are disoriented in the plane. We investigated whether dissociable neural subsystems underlie recognition of disoriented objects in qualitatively different ways. Participants decided whether laterally presented unfamiliar objects were structurally possible or impossible, after encoding objects in the central visual field. Test objects were presented in the same or different orientation (by 90 or 180 degrees) compared with encoding, and orientation-specific priming was measured as a greater bias to respond "possible" to same- than to different-orientation objects. When objects were presented directly to the right hemisphere, orientation-specific priming (in the relatively fast responses) was observed when objects were re-oriented by 90 degrees, but not by 180 degrees. When objects were presented directly to the left hemisphere, orientation-specific priming (in the relatively slow responses) was observed when objects were re-oriented by 90 or 180 degrees. We relate these findings to current theories of object representation.
5

How well do chess masters remember famous chess positions? Implications for theories of expertise

Christopher F. Chabris, Daniel J. Benjamin, and Daniel J. Simons

Department of Psychology, Harvard University, 33 Kirkland Street, Cambridge, MA 02138 USA, cfc@wjh.harvard.edu, Tel. 1-617-876-5759, Fax 1-617-491-9570

Two experiments tested whether the memory representations used by chess masters (1) are lists of familiar clusters or chunks of chess pieces or (2) include only information about important or relevant pieces. First, 23 masters recognized famous chess positions they were likely to have previously studied. When a position was modified from the correct one, subjects detected the change more often if it affected the "meaning" of the position than if it did not, even if the non-meaningful change affected more pieces. Second, one grandmaster recalled a similar set of positions without prior study; most of his errors did not alter the meaning of the positions. In each case memory for unimportant pieces was retained as well, though not as strongly as for important pieces. These results support a hybrid model in which chunks of pieces, global categories, and meaning are all represented in the visual memory of chess masters.
6

Influences of spatial context on novel object recognition

C.G. Christou, B.S. Tjan and H.H. Bülthoff

Address: Max-Planck-Institute for Biological Cybernetics, Spemannstrasse 38 Tuebingen 72076, Germany, chris@kyb.tuebingen.mpg.de, Telephone: 49 7071 601630, Fax: 49 7071 601616

The visual recognition of novel objects is influenced by the degree of deviation from the studied viewing direction, especially when their geometric structure cannot be decomposed into view-invariant components. In a natural context, changes in the appearance of an object are often caused by changes in the observer's vantage point. Moreover, these changes are almost always apparent to the observer and could in principle be used in the recognition process. We investigated the use of implicit viewer vantage point information by constructing a highly realistic virtual living room which was rich in visual depth information and in which subjects could make simulated movements. On a pedestal in the middle of the room, we placed various 3D geometrical objects. After an initial exploratory period, in which subjects familiarised themselves with the room, training and recognition tests were conducted. In long-term encoding and recognition tasks we observed an improvement in performance when testing occurred with the room visible compared with when testing occurred without the room. To test whether this benefit derives from the room providing implicit vantage-point information, we repeated the object identification task while randomly perturbing the orientation of the room with respect to the objects. We found that performance in this case was poorer than when no room was present at all. These results indicate that in the recognition of novel objects, people can make use of implicit information specifying where they are and where they are looking.
7

Visuospatial Constructive Ability of People with Williams Syndrome

Alexandra Fonaryova Key, John R. Pani, and Carolyn B. Mervis

Department of Psychology, University of Louisville, Louisville, KY 40292, a0fona01@ulkyvm.louisville.edu, Tel. (502) 852-4639, Fax: (502) 852-8904

Williams syndrome is a genetic disorder characterized by mild to moderate mental retardation or learning disabilities. Because Williams syndrome is associated with a characteristic pattern of cognitive strengths and weaknesses, full-scale IQ masks large discrepancies across different types of abilities. Visuospatial construction is the domain of greatest weakness for individuals with Williams syndrome. In our presentation, we briefly summarize three studies of spatial cognition in individuals with Williams syndrome and describe our working hypotheses regarding the nature of their difficulty with visuospatial construction. In particular, it appears that they have difficulty in changing spatial organization.
8

Perceiving Curvilinear Heading in the Presence of Moving Objects

Nam-Gyoon Kim and Brett R. Fajen

Department of Psychology, Box U-20, 406 Babbidge Road, University of Connecticut, Storrs, CT 06269-1020, E-mail: brf93003@uconnvm.uconn.edu, Tel. (860) 486-2212, Fax: (860) 486-2760

Four experiments were directed at understanding the influence of multiple moving objects on curvilinear (i.e., circular and elliptical) heading perception. Displays simulated observer movement over a ground plane in the presence of moving objects, depicted as transparent, opaque, or black cubes. Objects either moved parallel to or intersected the observer's path, and either retreated from or approached the moving observer. Heading judgments were accurate across all conditions, but perceptual biases did occur. Whereas transparent objects elicited a bias in perceived heading direction that depended on the objects' motion, opaque and black objects tended to elicit a weaker bias. Discussion focused on the significance of these results for computational models of heading perception and the possible role of dynamic occlusion.
9

Evaluating the Componential Assessment of Visual Perception test (CAVP) as a measure of visual performance in the learning disabled.

Lemon L, Robertson KM, Jutai J, Steffy R.

School of Optometry, University of Waterloo, Waterloo, Ontario N2L 3G1, sarik@golden.net, Tel: 1-519-742-1579, Fax: 1-519-725-0784

A computerized test, the Componential Assessment of Visual Perception (CAVP), has been designed to measure areas of visual perception most pertinent to academic performance. The CAVP measures the ability to adequately filter out irrelevant information by scoring the response time (search scores) under different memory loads (memory requirements). Search scores are taken in high-distraction mode or in low-distraction mode. The CAVP was applied to evaluate a learning disabled population who had previously demonstrated anomalies in visual perception on a specific battery of tests. Subjects were selected from the Waterloo Psycho-Educational Neurometric Program (WATPEN). Their results on the CAVP were compared to those of a normal control group. The total test time required for the WATPEN-selected subjects to complete the CAVP was significantly greater than the total test time of the normal control group. The WATPEN subjects who were identified as having visual perception difficulties demonstrated a higher search score (total test time) than the normal population. Potentially, the CAVP could be used as a clinical and/or educational tool to screen for possible visual perception problems.
10

Part-set cueing effect in visual object recognition.

Qiang Liu & Michael J. Wenger

Department of Psychology, Social Science II, University of California, Santa Cruz, CA 95064, qxliu@cats.ucsc.edu, Office: (831) 459-5679, FAX: (831) 459-3579

When participants are asked to remember a set of study items and at test are given part of the set as cues to recall the remaining items, provision of these cues often impairs performance on the remaining items (e.g., Slamecka, 1968, 1969; Watkins, 1975; Muller & Watkins, 1977; Rundus, 1973; Roediger, 1973). It has been suggested that this is a relatively general phenomenon, but it has not consistently been demonstrated in memory for visual objects nor across a range of memory tasks. Numerous hypotheses have been formulated to explain this phenomenon; some suggest a decisional basis while others contend that it is a memory effect. One way of assessing this is through the use of signal detection measures in a recognition task. The present experiment employed an old/new recognition task using 36-item sets of photographic images (composed of faces of men and women, cats, dogs, car grills, and geometric figures). After studying a set of images, participants were given either 0, 6, 12, or 18 of the study items as intra-list cues before the old/new recognition task. The overall analysis included the hit and false alarm rates, P(C), d' and c. Analysis of the hit rate demonstrated the part-set cueing effect was obtained, and the false alarm rate also decreased as a function of the number of cues provided. However, d', a measure of memory strength, did not vary across levels of cueing, whereas the measure of decisional criterion, c, did change significantly as a function of the number of cues given. Thus, the inhibitory effect of part-set cues in recognition seems to be a result of changes in decision criterion rather than changes in memory strength.
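As a minimal worked sketch of the signal detection measures used in this analysis (not the authors' code), d' and c can be computed from hit and false-alarm rates as below; the rates are hypothetical, chosen only to illustrate the reported pattern of a constant d' with a shifting criterion.

    # Equal-variance Gaussian signal detection: d' = z(H) - z(FA),
    # c = -0.5 * (z(H) + z(FA)). Hit/false-alarm rates are hypothetical.
    from scipy.stats import norm

    def dprime_and_c(hit_rate, fa_rate):
        zh, zfa = norm.ppf(hit_rate), norm.ppf(fa_rate)
        return zh - zfa, -0.5 * (zh + zfa)

    # Illustrative pattern: hits and false alarms both drop as more cues are
    # given, leaving d' roughly constant while c becomes stricter.
    for n_cues, (h, fa) in {0: (0.80, 0.30), 6: (0.75, 0.25),
                            12: (0.70, 0.20), 18: (0.65, 0.16)}.items():
        d, c = dprime_and_c(h, fa)
        print(f"{n_cues:2d} cues: d' = {d:.2f}, c = {c:.2f}")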


11

Perceived Lightness as a Measure of Perceptual Grouping

Michael C. Mangini, Irving Biederman, and Elizabeth K. Williams

Hedco Neuroscience Building, MC 2520, University of Southern California, Los Angeles, CA 90089-2520, mangini@usc.edu, bieder@usc.edu, Tel. (213) 740-6102, Fax: (213) 740-5687

The standard interpretation of simultaneous contrast, in which, for example, a gray bar on a black background appears lighter than when on a white background, has been that of lateral inhibition. What would happen if the gray bar is flanked with noncontiguous white bars on the black background (so the set of bars resembles a picket fence)? Gilchrist and his associates (e.g., Economou, Annan, & Gilchrist, 1998) have recently demonstrated that, surprisingly, the effect of the background disappears and the contrast is solely produced by the effects of the contextual bars, an effect they termed "reversed contrast" (RC). These investigators reported several manipulations suggesting that the magnitude of the RC effect was a function of the strength of the perceptual grouping of the gray and context bars. We report a series of experiments designed to investigate further whether variations in perceived brightness reflect the presumed organization of complex shapes. In the first two experiments we replicated the RC effect and showed that a contiguous gray bar that is not grouped with the contextual bars (because it is at right angles and partially occluded by them) is unaffected by their presence. However, a possible residual effect of lateral inhibition was also present. A third experiment investigated the relation between the RC effect and whether the context was grouped into the same or different parts of the same figure, by varying whether the parts were joined by T or L vertices. No effect of differences in part grouping strength was observed.
12

Spatial summation of facial images reveals configural processing

Paolo Martini & Ken Nakayama

Vision Sciences Lab, Dept. of Psychology, Harvard University, 33 Kirkland St., 7F, Cambridge MA 02138, pmartini@wjh.harvard.edu, Tel.: 617-495-3884, Fax: 617-495-3764

Humans code faces not as collections of independent features, but in a more synthetic way that sacrifices features' independence. Such holistic coding is intolerant to misorientation: inverted faces are decomposed into parts (see the "Margaret Thatcher illusion"). Using the technique of spatial summation we explore this effect and show that faces degraded by noise are perceived as wholes only when upright. By segmenting faces into blocks, we measured S/N thresholds for the discrimination of face pairs in a 2AFC paradigm as a function of the percentage of face area shown. Spatial integration was poor for inverted and scrambled faces, suggesting decomposition into individual features. Upright faces showed a similar trend over small areas, but with larger areas discrimination became more resistant to noise, suggesting holistic processing. Moreover, discrimination thresholds for complete upright faces, but not for inverted or scrambled faces, approached detection levels.


13

Categorical perception of face identity in noise: A method for isolating configural processing

Elinor McKone, Paolo Martini, & Ken Nakayama

Vision Sciences Lab, Dept. of Psychology, Harvard University, 33 Kirkland St., 7F, Cambridge MA 02138, E-mail: emckone@wjh.harvard.edu, Telephone: 617-495-3884, Fax: 617-495-3764

Identification of an upright face could, in principle, occur via local features (nose shape, hairstyle), configural information about spatial relationships between features, or some combination of both. Inversion is believed to destroy configural processing but, because feature-based identification remains possible, isolation of configural processing requires identifying a phenomenon which disappears completely with inversion. We show that categorical perception (CP) of faces in noise is such a phenomenon. We added noise to morphs between pairs of initially novel target faces, obscuring local information. CP was found for upright faces: pairs of morphs which crossed the category boundary predicted from a binary (Face 1 or Face 2) classification task showed smaller discrimination thresholds and were rated as more dissimilar. Despite up to 10,000 trials of practice, however, no CP emerged for inverted faces on either task. Further data directly demonstrated the role of noise in eliminating feature-based CP.
14

Playing a game can tell a lot about face recognition

Alexa I. Ruppertsberg, Hendrik-Jan van Veen, Galia Givaty, and Heinrich H. Bülthoff

Max-Planck-Institut für biologische Kybernetik, Spemannstr. 38, 72076 Tübingen, Germany, email: alexa/veen@kyb.tuebingen.mpg.de

We implemented an internet version of the well-known memory game on our webserver to study viewpoint influences on face recognition. We were able to attract more than 200 anonymous participants through the website. Players had to find eight face pairs in a 4-by-4 card array. There were three different levels at which the game could be played. Level 1: A pair consisted of two identical frontal faces illuminated from the front. Level 2: A pair consisted of two symmetric views of a face: 45 and -45 deg. The 45 deg view was illuminated from the front, the -45 deg view was illuminated from -45 deg. Level 3: A pair consisted of two different views: frontal and 45 deg, both illuminated from the front. Players could only reach the next level by finishing the previous one. Quitting the game was allowed at any time. We analyzed the number of errors participants made until finishing each level. Result: Players made more errors on level 3 than on level 2, and more on level 2 than on level 1. To test for possible learning effects (the faces were kept the same in all levels), another group of players played the levels in a different order. However, error rates were independent of the order in which the levels were played. Apparently, using the bilateral symmetry inherent in a face is easier (in the sense of fewer errors made) than making use of a common illumination direction. This is consistent with a study by Troje and Bülthoff (Vision Research 38, 1, 1998) in which a same/different paradigm was employed in a precisely timed lab experiment using untextured faces. The results of our game paradigm show that their results can be extended to other paradigms, longer presentation times, and textured faces. When lab members (n=16) who were familiar with the faces (their colleagues) played the game, their errors did not vary over levels, suggesting rather image-independent but semantics-dependent behavior. Conclusion: The use of a game paradigm challenges and motivates participants and allows us to draw conclusions about mechanisms in face recognition.
15

Change Blindness and Exogenous Attentional Capture

Brian J. Scholl

Rutgers Center for Cognitive Science, Rutgers University, Busch Campus, Psychology Bldg, New Annex, New Brunswick, NJ 08903, scholl@ruccs.rutgers.edu, http://ruccs.rutgers.edu/~scholl, Tel : 732-445-6163, FAX : 732-445-6715

When two scenes are alternately displayed, separated by a mask, even large, repeated changes between the scenes often go unnoticed for surprisingly long durations. Change blindness of this sort is attenuated at "centers of interest" in the scenes, however, supporting a theory of scene perception in which attention is necessary to perceive such changes in scenes (Rensink, O'Regan, & Clark, 1997). Problems with this measure of attentional selection -- via verbally described 'centers of interest' -- are discussed, including worries about describability and explanatory impotence. Other forms of attentional selection, not subject to these problems, are employed in two 'flicker' experiments. Attenuated change blindness is observed at attended items when attentional selection is realized via involuntary exogenous capture of visual attention (to late-onset items and color singletons), even when these manipulations are uncorrelated with the loci of the changes, and are thus completely irrelevant to the change detection task. These demonstrations ground the attention-based theory of change blindness in a type of attentional selection which is understood more rigorously than are 'centers of interest'.


16

Memory for spatial layout from visual and tactile experience

Amy L. Shelton and Timothy P. McNamara

Department of Psychology, 301 Wilson Hall, Vanderbilt Univ., Nashville, TN 37240, amy.l.shelton@vanderbilt.edu, Telephone: (615)322-6050, (615)662-4984, Fax: (615) 343-8449

Previous research on orientation dependence in spatial memory has relied primarily on visual experience; however, much of our daily experience also involves the actions we perform within space. This study examined both visual and tactile/motor experience in spatial learning and memory. Participants viewed a display of objects from one perspective and reconstructed the display from either the same perspective or a novel perspective using only tactile information. Performance on judgments of relative direction and scene recognition revealed best performance on the orientation corresponding to the reconstructed perspective. These results indicate that the addition of a generative motor task did not yield an orientation-independent representation. Moreover, people preferred the view generated during tactile reconstruction over the view experienced visually, suggesting that generating a perspective produced a stronger representation in memory than direct viewing.
17

Does human object recognition use independent shape dimensions?

Brian J. Stankiewicz

University of Minnesota

Three experiments investigate whether human object recognition represents three-dimensional object shape using independent shape dimensions. The answer to this question is crucial if one hopes to resolve how the brain represents three-dimensional object shape. Experiment 1 used an object recognition task in which subjects discriminated between two metrically different volumes that differed in their aspect ratio or primary-axis curvature. Shape noise was parametrically added to these two dimensions. Results from Experiment 1 are consistent with the independent processing of an object's aspect ratio and curvature. Experiment 2 asked whether the visual system can represent any orthogonal shape dimensions independently; it used two orthogonal dimensions that were linear combinations of aspect ratio and primary-axis curvature. Results from Experiment 2 suggest that subjects do not readily treat all orthogonal shape dimensions independently. Experiment 3 investigated how these shape dimensions are combined in a cue-combination experiment. Results from this experiment are consistent with the independent processing of primary-axis curvature and aspect ratio.
18

The Roles of the Magnocellular and Parvocellular Systems in the Perception of Dynamic Forms

Yukari Takarae and Lawrence E. Melamed (Kent State University), and Michael K. McBeath (Arizona State University)

Psychology Department, Kent State University, Kent, OH 44242, ytakarae@kent.edu, Telephone: (330)672-2166

Although many studies have demonstrated that there are two distinct functional streams in the human visual system, recent evidence suggests that these systems interact with each other extensively. The current study was designed to investigate the roles of the magnocellular and parvocellular systems in dynamic form perception. A form discrimination task was used in which the target was indicated by dynamic occlusion. The hues and contrast of the display elements were manipulated according to existing models of parallel pathways in the visual system in order to alter the activation levels of the magnocellular and parvocellular pathways. Participants' response accuracy was strongly influenced by target speed and background hue. The results suggest that the perception of target shapes was mediated by two separate systems depending on the target speed, and that the likely candidates for these systems are the magnocellular and parvocellular systems.
19

Pointing to hidden landmarks is similar in real and virtual environments

H.A.H.C. van Veen, K. Sellen, and H.H. Buelthoff

Max-Planck-Institute for Biological Cybernetics, Spemannstrasse 38, 72076 Tuebingen, Germany, hendrik-jan.veen@tuebingen.mpg.de, phone +49 7071 601631 ; fax +49 7071 601616

More and more researchers are acknowledging the significance of virtual environments for the study of human perception and behaviour. Virtual reality techniques are an increasingly popular research tool, especially for studying abilities in which the human-in-the-loop element forms a key ingredient, such as wayfinding and visually guided locomotion. We try to develop a better understanding of the advantages and disadvantages of this approach by comparing perception and spatial behaviour in real and virtual environments. The current study compares pointing accuracy in an outdoor environment (a 600 by 400 m section of the highly irregular centre of Tuebingen) with that in a corresponding virtual environment. Standing near one of eleven well-known landmarks, subjects (n=10) had to turn a pointer in the estimated direction of each of the other ten invisible (occluded) landmarks. This procedure was repeated from all eleven locations along the subjects' route through town (all subjects knew the city very well). The second experiment was conducted in the laboratory with the same group of subjects. They were seated in the middle of a 7 m diameter half-circular projection screen on which 180 by 48 degree fragments of panoramic photographs taken near each of the above-mentioned landmarks were displayed. Pointing was accomplished by rotating the image until the object of interest was thought to be in the straight-ahead direction. The results show that errors made in the virtual environment are quite similar to errors made outdoors: the mean absolute pointing error was only slightly smaller outdoors (10.9±0.4 deg) than in the laboratory (12.9±0.5 deg). The pattern of systematic errors was strikingly similar between the two environments. We conclude that a) subjects are capable of using spatial knowledge acquired in a real environment to orient themselves accurately in the corresponding virtual environment, and b) the role of artificial pictorial cues introduced by the (large) projection screen is negligible for pointing tasks.
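For readers unfamiliar with the error measure, here is a minimal sketch (not the authors' analysis code) of a mean absolute pointing error with circular wraparound; the bearings are invented for illustration.

    # Mean absolute angular error between pointed and true bearings (degrees),
    # with wraparound so that 359 deg vs. 1 deg counts as 2 deg, not 358 deg.
    import numpy as np

    def mean_abs_pointing_error(pointed_deg, true_deg):
        diff = (np.asarray(pointed_deg) - np.asarray(true_deg) + 180.0) % 360.0 - 180.0
        return np.abs(diff).mean()

    pointed = [12.0, 355.0, 98.0, 181.0]   # invented responses
    true = [5.0, 3.0, 110.0, 170.0]        # invented target directions
    print(mean_abs_pointing_error(pointed, true))   # mean of 7, 8, 12, 11 = 9.5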
20

When Does Variation in Contrast Polarity Affect Contour Grouping in Object Recognition?

Edward A. Vessel,Suresh Subramaniam, & Irving Biederman

Hedco Neuroscience Building, MC 2520, University of Southern California, Los Angeles, CA 90089-2520, E-mail: vessel@usc.edu, Tel: (213) 740-6102, (213) 740-6094, Fax: (213) 740-5687

When the sections of a contour can be grouped according to smooth collinearity or curvilinearity, there is no effect of variations in contrast polarity, so that an all-black or all-white object on a gray background is named as rapidly as an object in which half of each contour is white and the other half black (Subramaniam, 1998; see also Economou, Annan, & Gilchrist, 1998; Spehar, 1998). We assessed the effects of variations in contrast polarity at contour endings in a series of object naming experiments in which sections of the objects' contours were deleted in midsegment to produce gaps. Small additional contours were added at right angles to both line endings at each gap to produce a pair of L-vertices in one condition and T-vertices in another. When the additional contours were of the same polarity as the object's contours, e.g., so that the legs of the L were both black on a gray background, naming performance was much worse than that for objects with all-black T-vertices, consistent with Hummel and Biederman's (1992) claim that grouping is suppressed (or not supported) through L-vertices. However, when the additional contours differed in polarity from that of the object (so, for example, one of the legs of the L-vertex was white and the other black), grouping of the legs of the L was suppressed and the grouping of the object's contours was facilitated, so that naming performance was equivalent for objects whose gaps were bridged by L- and T-vertices. These results are derivable from a theory that would build detectors based on the statistics of images, in that changes of contrast polarity are common along a length of contour but extremely rare at the point of a vertex. The results imply that end-stopped cells supporting object recognition--unlike complex cells--should be sensitive to the direction of contrast.
21

Representation of Object Relationships and Environment Shape

Frances Wang & Elizabeth S. Spelke

Massachusetts Institute of Technology

Do humans and other animals use an allocentric reference frame and represent spatial positions independently of their own heading and position, or do they use an egocentric reference frame and form spatial representations that alter as they move? Our experiment studied the effects of disorientation on the accuracy of spatial relationships among objects and among features of the environment's shape. Subjects pointed to objects or to corners of a chamber with eyes covered, before and after disorientation by spinning in a chair. Internal consistency among object positions was significantly reduced after disorientation, suggesting that representations of object relationships rely on one's sense of orientation and thus supporting the egocentric representation hypothesis. In contrast, relative errors among the corners were the same before and after disorientation, supporting an allocentric representation hypothesis. Ongoing studies are investigating whether this distinction reflects single or multiple systems, and whether it is due to the symmetry of the chamber.
22

Geometrical constraints on the perception of 3-D objects

Willems, B. & Wagemans, J. (Department of Psychology, University of Leuven, Belgium)

Bert Willems University of Leuven Department of Psychology Tiensestraat 102 B-3000 Leuven Belgium Bert.Willems@psy.kuleuven.ac.be

We investigated the conditions under which subjects are able to discriminate between an orthogonal cross and an oblique cross (i.e., one with a 75 deg angle). The projection of this 3-D angle depends on the viewpoint from which the cross is seen, which we manipulated experimentally by varying the position of the cross in depth. We found that accuracy deteriorated with increasing foreshortening of the two legs defining the 3-D angle. For each position of the cross we calculated the uncertainty distribution over the 2-D angle, given an uncertainty over the 3-D location of the cross. We found that the variance of these uncertainty distributions was much higher with increasing foreshortening of the two legs. This type of research should help to clarify some of the confusion surrounding the viewpoint dependency of 3-D object recognition (e.g., the distinction between 'good' or canonical and 'bad' or foreshortened views of 3-D objects).
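A minimal sketch, under an assumed orthographic viewing geometry, of the relationship described above: the projected 2-D angle of a 75 deg cross, and its sensitivity to viewpoint uncertainty, both grow as the legs are foreshortened. The slant values and jitter magnitude are illustrative assumptions, not the authors' experimental parameters.

    # Project two coplanar legs separated by a 3-D angle after slanting their
    # plane about the horizontal axis; report how the 2-D image angle and its
    # spread under viewpoint jitter change with slant (foreshortening).
    import numpy as np

    def projected_angle(angle_3d_deg, slant_deg):
        a3, s = np.deg2rad(angle_3d_deg), np.deg2rad(slant_deg)
        # Legs symmetric about the y axis, in the x-y plane before slanting.
        legs = np.array([[np.sin(a3 / 2), np.cos(a3 / 2), 0.0],
                         [-np.sin(a3 / 2), np.cos(a3 / 2), 0.0]])
        rot_x = np.array([[1.0, 0.0, 0.0],
                          [0.0, np.cos(s), -np.sin(s)],
                          [0.0, np.sin(s), np.cos(s)]])
        img = (legs @ rot_x.T)[:, :2]              # orthographic projection (drop z)
        cosang = img[0] @ img[1] / (np.linalg.norm(img[0]) * np.linalg.norm(img[1]))
        return np.rad2deg(np.arccos(np.clip(cosang, -1.0, 1.0)))

    rng = np.random.default_rng(1)
    for slant in (10, 40, 70):                     # larger slant = more foreshortening
        jittered = [projected_angle(75, slant + rng.normal(0, 5)) for _ in range(2000)]
        print(f"slant {slant:2d} deg: mean 2-D angle {np.mean(jittered):5.1f}, "
              f"sd {np.std(jittered):4.1f}")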
23

Recognizing Rotated Faces: Properties of Symmetric Relations

Safa R. Zaki & Thomas A. Busey

Psych Dept., Indiana Univ. Bloomington IN 47405, szaki@indiana.edu, (812) 855-4261, (812) 855-4691 (fax)

Models derived from categorization work are applied to the recognition of rotated faces. We find evidence for the use of symmetric relations when recognizing faces at angles that are near-mirror images of studied views. However, the use of these symmetric relations appears only under specific conditions in which subjects are asked to recognize old faces at any orientation. Multidimensional scaling fits derived from similarity ratings are used as input to a number of quantitative models that describe the recognition processes within a geometric framework.
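A minimal sketch of the general modelling pipeline described above: recover a geometric configuration from dissimilarity ratings with multidimensional scaling, then score recognition by summed similarity to studied items, in the spirit of exemplar-based categorization models. The dissimilarity matrix, the studied set, and the similarity gradient are placeholders, not the authors' data or specific models.

    # MDS coordinates from (hypothetical) dissimilarity ratings, fed into a
    # summed-similarity recognition rule over a studied subset of faces.
    import numpy as np
    from sklearn.manifold import MDS

    rng = np.random.default_rng(2)
    n_faces = 8
    ratings = rng.uniform(0, 1, size=(n_faces, n_faces))   # placeholder ratings
    dissim = (ratings + ratings.T) / 2                      # symmetrize
    np.fill_diagonal(dissim, 0.0)

    coords = MDS(n_components=2, dissimilarity="precomputed",
                 random_state=0).fit_transform(dissim)

    def recognition_strength(test, studied, sensitivity=2.0):
        # Summed exponential similarity of one test point to the studied points.
        d = np.linalg.norm(studied - test, axis=1)
        return np.exp(-sensitivity * d).sum()

    studied = coords[:4]        # pretend the first four faces were studied
    for i, point in enumerate(coords):
        print(f"face {i}: strength {recognition_strength(point, studied):.2f}")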
24

Light from shadow: Is it a perceptual object?

Daniele Zavagno & Manfredo Massironi

Dr. Daniele Zavagno, Dipartimento di Psicologia Generale, Università di Padova, Via Venezia 8, 35131 Padova, Italy, E-mail: dzavagno@psico.unipd.it, Telephone: office: ++39-49-8276671 / Lab: ++39-49-8276922, Fax: ++39-49-8276600

Author List: Daniele Zavagno* & Manfredo Massironi**; *Dipartimento di Psicologia Generale, Università di Padova; **Istituto di Psicologia, Università di Verona

We studied the possibility of seeing light as the illuminating agent in achromatic visual scenes. In some cases light can have very strong phenomenal evidence, and we call such evidence surface light. Our hypothesis is that the perceptual experience of surface light is linked to the organization of cast shadows inside the visual scene. We tested the hypothesis on a group of 19 subjects using Renaissance achromatic engravings as stimuli. Two factors seem to be important for the appearance of surface light: i) high contrast between illuminated areas and shadows; ii) the presence of cast shadows inside the scene. The second factor invokes the role played by margins in visual perception: the collection of cast shadows inside the visual scene determines a new set of margins that overlaps the set of margins that regulates figure-ground organization. When this overlap is transversal and there is a high luminance contrast ratio between light areas and dark areas, surface light is perceived.