The Role of Preattentive Processing in Product Design

BACKGROUND

I conducted a literature review on preattentive processing and why it is critical to designing effective products. Applying these concepts, I completed a design critique of the USAF product C2Core, highlighting the effective design choices and opportunities for improvement within the context of preattentive processing.

Introduction

Preattentive processing is the automatic grouping of perceptual information in preparation for focused attention (Schweizer, 2001). The human ability to detect, group, and discard perceptual information is an evolutionary advantage that has enabled the survival of our species by allowing the visual field to be screened for signs of danger (Schweizer, 2001). Visual information begins its journey in the retina, where it is processed by various specialized cells; it then travels through the optic nerve, is relayed by the lateral geniculate nuclei, and is finally transmitted to the primary and secondary visual cortex (V1 and V2) for reassembly (Cook & McReynolds, 1998). Factors such as common region, proximity, alignment, and similarity are detected and processed by highly specialized cells that contribute to the successful perceptual grouping of elements (Treisman, 1985). In product design, it is important to understand how sensory information is grouped and communicated to the brain so that products can enable users to accomplish critical tasks effectively. Understanding the way the neurological system expects information, and delivering it in that way, creates a good user experience.

Perceptual Grouping

Perceptual grouping is powerful: "if two elements are grouped in the perceptual process to form a single unit, then selective attention to one of the elements in that unit should be difficult, or perhaps impossible without breaking up the unit through some sort of perceptual reorganization" (Pomerantz & Schwaitzberg, 1975). There are various perceptual qualities by which detected signals may be grouped (Treisman, Vieira, & Hayes, 1992). Segmentation begins with the identification of elements belonging to rough categories, including color (e.g., red, green, blue, and yellow), orientation (vertical, horizontal, and left and right diagonal), and aspects of shape (curvature, closure, gaps, or terminators, but not line arrangements) (Treisman, Vieira, & Hayes, 1992). Two levels of encoding processes operate in parallel: the first involves elements defined by luminance and contrast, and the second involves elements whose boundaries are "defined by discontinuities in the spatial and temporal patterning of the first-order properties, giving rise to motion or texture differences or depth from binocular disparity" (Cavanagh, Arguin, & Treisman, 1990).

Common region is the perceptual grouping of elements that are bounded within a common region of space (Palmer, 1992). A border, or a difference between adjacent areas of color, brightness, or texture, can indicate the boundaries of a region (Palmer, 1992; Treisman, 1982). Many segmentation techniques use a low-level edge-detection process to find differences in intensity in the image, as these typically coincide with object boundaries (Fellenz & Hartmann, 1996). After extracting the edges, the segmentation methods group them into region boundaries using edge-following and linking methods (Fellenz & Hartmann, 1996). Common region "strongly influences perceived grouping and is capable of overcoming the effects of other powerful grouping factors such as proximity and similarity" (Palmer, 1992).
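To make this two-stage idea concrete, the short Python sketch below detects intensity discontinuities with a Sobel filter and then links neighbouring edge pixels into connected boundary groups. It is only a minimal illustration of the general edge-detection-and-linking approach described above, not the specific method of Fellenz and Hartmann (1996); the toy image and the threshold value are assumptions made for the example.

```python
import numpy as np
from scipy import ndimage

# Toy grayscale "image": a bright square on a dark background.
image = np.zeros((64, 64), dtype=float)
image[16:48, 16:48] = 1.0

# Stage 1: low-level edge detection -- gradient magnitude via Sobel filters.
gx = ndimage.sobel(image, axis=1)
gy = ndimage.sobel(image, axis=0)
edge_strength = np.hypot(gx, gy)

# Keep only strong discontinuities (threshold chosen for this toy example).
edges = edge_strength > 0.5 * edge_strength.max()

# Stage 2: link neighbouring edge pixels into connected boundary groups.
boundaries, n_boundaries = ndimage.label(edges)

# For this toy image, the square's outline forms a single connected group.
print(f"Found {n_boundaries} connected boundary group(s)")
```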

Proximity is a perceptual grouping cue that involves the space both inside and outside of the group (Pomerantz & Schwaitzberg, 1975; Ben-Av & Sagi, 1995). Elements that are closer together appear grouped, and elements that are farther apart are perceived as unrelated (Pomerantz & Schwaitzberg, 1975): "as the two elements comprising each stimulus are moved further apart, perceptual grouping at some point would surely break down" (Pomerantz & Schwaitzberg, 1975). Similarity, another perceptual grouping cue, can be detected through both similarity of shape and similarity of luminance (Ben-Av & Sagi, 1995). Ben-Av and Sagi (1995) explored the relationship between similarity and proximity in perceptual grouping and found that "proximity grouping was found to be perceived much faster than similarity grouping." However, when subjects were given "longer processing times, similarity-based grouping takes over and dominates performance" (Ben-Av & Sagi, 1995). Overall, when both shape similarity and proximity were presented in a combined display, similarity either cooperated or competed with proximity depending on the spacing between elements (Ben-Av & Sagi, 1995).
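To illustrate how spacing alone can define groups, the minimal Python sketch below clusters hypothetical on-screen elements purely by proximity: any two elements closer than a distance threshold fall into the same group. The coordinates and the threshold are invented for the example and are not taken from the studies cited above.

```python
import numpy as np

# Hypothetical (x, y) centers of on-screen elements, in pixels.
points = np.array([
    (10, 10), (18, 12), (14, 22),  # a tight cluster
    (120, 15), (128, 18),          # a second cluster
    (300, 200),                    # an isolated element
])

THRESHOLD = 30.0  # assumed distance below which elements read as one group

# Pairwise distances, then an adjacency matrix of "close enough" pairs.
diffs = points[:, None, :] - points[None, :, :]
dist = np.linalg.norm(diffs, axis=-1)
adjacent = dist < THRESHOLD

# Flood-fill the adjacency matrix to recover proximity groups.
group = [-1] * len(points)
current = 0
for start in range(len(points)):
    if group[start] != -1:
        continue
    stack = [start]
    while stack:
        i = stack.pop()
        if group[i] != -1:
            continue
        group[i] = current
        stack.extend(j for j in range(len(points)) if adjacent[i, j] and group[j] == -1)
    current += 1

print(group)  # [0, 0, 0, 1, 1, 2]: three perceived groups
```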

Another perceptual grouping cue is alignment (Claessens & Wagemans, 2005). In a 2005 study, Claessens and Wagemans examined the effects of both proximity and alignment on perceptual grouping and found that aligning elements along a specific orientation increases the odds of perceptual grouping, and that "contour saliency increases with element alignment and proximity" (Claessens & Wagemans, 2005). This is supported by Treisman (1982), who stated "that differences in line orientation could be as effective as differences in brightness in segregating two groups of elements." Orientation alignment was even shown to override the grouping cue of proximity in some instances (Claessens & Wagemans, 2005).

Design Case Introduction

C2Core is the modernized version of the Air Force Weapons System; operators using this system must plan, monitor, and update hundreds of air missions in real time.

Design Critique

There are many regions defined within the dashboard view of C2Core. Beginning with the sidebar navigation menu, this panel is defined by a common region indicated by the added background color (1) that defines the edge of the space (Palmer, 1992). Within the menu, there is also a grouping of the navigation options (2) in which alignment plays a strong role (Claessens & Wagemans, 2005). The very clear alignment of options indicates the relation of the different items listed (Claessens & Wagemans, 2005). Proximity contributes secondarily here, as these items have less space between them than other elements within the menu (Ben-Av & Sagi, 1995). Similarity also contributes, as the key pages all have an icon and color that may reinforce perceptual grouping; however, alignment and proximity play a stronger role (Claessens & Wagemans, 2005; Ben-Av & Sagi, 1995). Although there is a strong perceptual group, I believe the icons and blue color are redundant and recommend removing them; perceptual grouping would remain effective without demanding unneeded processing from the user.

The actions in the bottom left of the sidebar (3) are also perceptually grouped due to proximity (Ben-Av & Sagi, 1995). The space between each item is much smaller than the space surrounding the group, thereby forming the perceptual group (Ben-Av & Sagi, 1995). Alignment and similarity also play a role here, although they contribute less than proximity (Claessens & Wagemans, 2005; Ben-Av & Sagi, 1995). I would make the same recommendation here as for the general navigation options: removing the redundant icons and color would still leave a strong perceptual group formed by proximity and alignment (Claessens & Wagemans, 2005; Pomerantz & Schwaitzberg, 1975).

There is a clear grouping of the different time zones (4) near the top of the page. Proximity is the most powerful factor at play here (Pomerantz & Schwaitzberg, 1975), with common region also contributing heavily (Palmer, 1992). However, this redundancy is not necessary, and I recommend removing the bounded region of this section and relying solely on the proximity of the elements for perceptual grouping (Pomerantz & Schwaitzberg, 1975). I believe this would still be effective; it would reduce the unnecessary use of boundaries and increase the active white space on the screen, contributing to more effective identification of the various perceptual groups (Pomerantz & Schwaitzberg, 1975). The two icon buttons (5) are related to the time zones, but their placement does not illustrate this relationship beyond sharing the same bounded container. To improve this relationship, I recommend moving them closer to the grouped time zones, perhaps in close proximity to the title text "Timezones".

There are six elements in the bottom half of the page (6), each bound by its own region. Common region is the strongest contributor to these groups due to the border and background color (Palmer, 1992). However, I have several recommendations that would make the perceptual grouping of this section more effective. First, similarity implies a grouping of the four boxes on the left side of the page and a separate grouping of the rightmost two boxes, due to their background color and shape (Ben-Av & Sagi, 1995). However, based on the text of each region, it appears that the top and bottom boxes in each column are related, with less relationship between columns. To illustrate these relationships more effectively, I recommend combining the top and bottom boxes of each column within a single bounded region, so that rather than six separate groups there are three columns (Palmer, 1992). The separation of the information can still be made with adequate spacing within each column, allowing titles and data to remain in close proximity to each other (Pomerantz & Schwaitzberg, 1975), while more accurately illustrating the relationship of each of the groups.

The two action buttons in the top right corner (7) form a perceptual group, with the greatest contribution from proximity (Pomerantz & Schwaitzberg, 1975) and a secondary contribution from similarity (Ben-Av & Sagi, 1995). The close spacing of the buttons indicates their relationship, and the similar background color further supports this implication (Ben-Av & Sagi, 1995). Overall, I feel this treatment is effective and offer no recommendations.

Finally, the largest region on the page (8) is communicated with a boundary and background fill that indicate the grouping. Common region is the most powerful cue in this instance; however, due to the lack of spacing around the group, as well as throughout the page, this perceptual grouping is not as effective as it could be (Palmer, 1992). My strongest recommendation for improving perceptual grouping within this screen is to increase the space between groups to strengthen proximity (Pomerantz & Schwaitzberg, 1975). In the current state, there is very little space between each group of elements, making it more difficult to distinguish between them. It seems the team has compensated for this by creating bounded regions for each section of information, and while this does help somewhat with perceptual grouping, it would be more effective with greater white space.

Conclusion

The neurological structures of the eye and brain through which visual information travels are specially tuned for the identification of perceptual groups (Treisman, 1985). The human ability to detect, process, filter, and discard perceptual information is crucial to survival, as it allows us to swiftly recognize potential threats in our surroundings (Schweizer, 2001). Understanding this perceptual journey allows product designers to present information in a way that makes perceptual processing most effective, leading to a more pleasant user experience.

References

Atkinson, J. (1992). Early visual development: Differential functioning of parvocellular and magnocellular pathways. Eye, 6(2), 129-135.

Baker, G. E. (2009). Anatomy of vision. Optometry: Science, Techniques and Clinical Management E-Book, 17.

Ben-Av, M. B., & Sagi, D. (1995). Perceptual grouping by similarity and proximity: Experimental results can be predicted by intensity autocorrelations. Vision Research, 35(6), 853-866.

Claessens, P. M., & Wagemans, J. (2005). Perceptual grouping in Gabor lattices: Proximity and alignment. Perception & Psychophysics, 67(8), 1446-1459.

Cook, P. B., & McReynolds, J. S. (1998). Lateral inhibition in the inner retina is important for spatial tuning of ganglion cells. Nature Neuroscience, 1(8), 715.

Euler, T., Haverkamp, S., Schubert, T., & Baden, T. (2014). Retinal bipolar cells: Elementary building blocks of vision. Nature Reviews Neuroscience, 15, 507.

Fellenz, W. A., & Hartmann, G. (1996, August). Preattentive grouping and attentive selection for early visual computation. In Proceedings of 13th International Conference on Pattern Recognition (Vol. 4, pp. 340-345). IEEE.

Hubel, D. H., & Wiesel, T. N. (1979). Brain mechanisms of vision. Scientific American, 241(3), 150-163.

Kihlstrom, J. F. (1987). The cognitive unconscious. Science, 237(4821), 1445-1452.

McElree, B., & Carrasco, M. (1999). The temporal dynamics of visual search: Evidence for parallel processing in feature and conjunction searches. Journal of Experimental Psychology: Human Perception and Performance, 25(6), 1517.

Paik, S. B., & Ringach, D. L. (2012). Link between orientation and retinotopic maps in primary visual cortex. Proceedings of the National Academy of Sciences, 109(18), 7091-7096.

Palmer, S. E. (1992). Common region: A new principle of perceptual grouping. Cognitive Psychology, 24(3), 436-447.

Pomerantz, J. R., & Schwaitzberg, S. D. (1975). Grouping by proximity: Selective attention measures. Perception & Psychophysics, 18(5), 355-361.

Roelfsema, P. R. (2006). Cortical algorithms for perceptual grouping. Annual Review of Neuroscience, 29, 203-227.

Schweizer, K. (2001). Preattentive processing and cognitive ability. Intelligence, 29(2), 169-186.

Sincich, L. C., & Horton, J. C. (2005). The circuitry of V1 and V2: Integration of color, form, and motion. Annual Review of Neuroscience, 28, 303-326.

Treisman, A. (1982). Perceptual grouping and attention in visual search for features and for objects. Journal of Experimental Psychology: Human Perception and Performance, 8(2), 194-214.

Treisman, A. (1985). Preattentive processing in vision. Computer Vision, Graphics, and Image Processing, 31(2), 156-177.

Treisman, A., Vieira, A., & Hayes, A. (1992). Automaticity and preattentive processing. The American Journal of Psychology, 341-362.

Wang, J., Zhou, T., Qiu, M., Du, A., Cai, K., Wang, Z., ... & Chen, L. (1999). Relationship between ventral stream for object vision and dorsal stream for spatial vision: An fMRI+ ERP study. Human Brain Mapping, 8(4), 170.

Weyand, T. G. (2016). The multifunctional lateral geniculate nucleus. Reviews in the Neurosciences, 27(2), 135-157.

Yeonan-Kim, J., & Bertalmío, M. (2016). Retinal lateral inhibition provides the biological basis of long-range spatial induction. PLoS ONE, 11(12), e0168963.

Yoonessi, A., & Yoonessi, A. (2011). Functional assessment of magno, parvo and konio-cellular pathways: Current state and future clinical applications. Journal of Ophthalmic & Vision Research, 6(2), 119.
