
Research Paper: The role of the Eyes and the Mouth on Facial Processing

  • candicesavary1
  • Jul 18

Updated: Aug 16



Abstract


Face processing is distinct from object processing, as previous research has well demonstrated. While previous studies have explored the mechanisms involved in face processing, they have seldom thoroughly examined the contributions of individual facial features within different processing models. Therefore, the present study aimed to investigate the effects of eye-region contrast and mouth-region contrast on face recognition. Participants were presented with either edited or unedited faces and asked to identify which faces were “old” or “new” during a memory task involving old and new images. This assessed the extent to which impairments to specific facial areas influenced recognition. Our results indicated a main effect of eye contrast, which significantly reduced recognition, while no significant effects were observed for mouth contrast or for the interaction between eye and mouth contrast. Thus, these findings did not support the widely accepted theory of holistic processing and instead suggest a mechanism more aligned with feature-based processing. However, limitations such as participant vision, experimental conditions, and participant ethnicity may have influenced the results, potentially affecting their validity and generalisability. Future research could attempt to replicate this study, addressing these limitations and exploring the effects of manipulating other facial features.


Introduction


Which factors make us so good at facial processing? Faces are amongst the most informative stimuli we encounter, playing a crucial role in shaping social interaction, communication, and relationships (Morris, Weickert, & Loughland, 2009). Our ability to efficiently process, and thus efficiently recognise, faces allows us to infer identity, gender, and even emotional states within milliseconds of viewing a face. Remarkably, we can recognise familiar faces even when they are blurred, partially obscured, or lack colour and distinct features (Burton et al., 1999).


It is generally understood that faces are processed more effectively than other visual stimuli. Yin (1969) demonstrated this singularity by assessing participants' ability to recognise and recall faces and objects presented upright or inverted. While inversion impaired recognition for all stimuli, it made faces disproportionately harder to recognise than non-face objects. This suggests that face recognition may involve a unique cognitive mechanism distinct from general object recognition. Moreover, it indicates specialised, face-specific visual processing and dedicated facial perception mechanisms.


This distinctiveness was developed by Kanwisher, McDermott, and Chun (1997), who highlighted the localisation of facial processing in the brain. Using fMRI, they showed that a region in the fusiform gyrus exhibited greater activation when participants viewed faces compared to objects, highlighting the selective involvement of what became termed the fusiform face area (FFA) in facial processing. Such neurological underpinnings were further developed by Tsao and Livingstone (2008) through single-unit experiments, which revealed strong correlations between the activity of face-selective neurones in the FFA and facial processing. Collectively, these findings support the view that faces may be processed via distinct neural mechanisms, separate from general object recognition.


From such research emerged the domain-specific hypothesis, proposing that holistic processing underpins facial recognition, where faces are perceived as unified wholes rather than as a collection of independent features (Duchaine, 2007). Unlike most objects, recognising faces relies on both individual features (e.g., eyes, nose, and mouth) and their spatial relationships, or ‘emergent features’, which arise only when the parts are processed together, underscoring the ‘holistic’ nature of face perception.


Given the potential role of holism as a mechanism for facial processing, research has aimed to understand how individual features contribute to creating a holistic image that facilitates efficient face recognition. For example, Sekiguchi (2011) found that participants with stronger face memory abilities tended to fixate more on the eyes when memorising faces during the Cambridge Face Memory Test (Duchaine & Nakayama, 2006).

This study employed eye-tracking to measure fixation patterns during the memorisation phase, providing insight into the reliance on eye information for recognition. While this method highlights the extent to which participants use the eyes as cues for recognition, it does not establish a causal relationship or the direct impact of the eyes on facial recognition. As a result, this may limit our understanding of the specific contributions of individual facial features to recognition and holistic processing.


Similarly, Royer et al. (2018) demonstrated that individuals with higher face recognition abilities relied more on the eye region, with greater focus on the eyes accounting for a large proportion of the variance in participants’ facial recognition performance. Participants had to identify faces from the partial information provided in each trial: facial areas were concealed by placing ‘bubble’ icons over face regions, allowing the researchers to statistically correlate the visible facial information with recognition performance. These findings underscore the critical role of the eye region in facial recognition, where the eyes seem to aid the extraction and integration of facial information into a cohesive whole. However, this method relied on numerous trials over an extended period, which may have reduced reliability and validity through participant fatigue and cognitive load, leading to possible inconsistent responses, reduced attention, and disengagement. Additionally, potential demand characteristics arising from repetition may have introduced biases, making the results less reflective of natural face-recognition processes.


A renowned example of the effect of facial features on holistic facial processing comes from the Thatcher illusion (Thompson, 1980), where inverting the eyes and mouth of a face makes it appear grotesque when upright but not when upside-down. The paper demonstrated that inversion reduces the grotesque appearance of ‘Thatcherised’ (featurally inverted and upside-down) faces: Thatcherised and distorted faces were judged as more similar to normal smiling faces when inverted but not when upright. These findings suggest that inverting a face disrupts the processing of holistic information: inverted faces are processed more from their individual components, whereas upright faces rely on the relationships between facial features, or ‘holistic’ information.

However, Thompson’s original study primarily manipulated the eyes and lips together, without isolating their independent effects. This limitation constrains our understanding of how specific facial features contribute to holistic processing, as it prevents direct comparison of the individual versus combined effects of the eyes and lips. Consequently, it hinders deeper exploration of whether and how these features interact within a holistic framework of facial recognition.


To summarise, preliminary findings by Yin (1969) proposed a unique mechanism for facial processing, later supported by Kanwisher, McDermott, and Chun’s (1997) neurological research identifying specific brain regions involved in face perception. However, these studies could be seen to lack the experimental evidence needed to fully validate the uniqueness and potential holism of facial processing. Similarly, Sekiguchi (2011) and Royer et al. (2018) examined the role of the eye region in facial recognition, providing insights into attention and fixation patterns but offering little conclusive evidence for or against domain-specific or holistic models; there were also methodological limitations which may have affected the validity and reliability of their results. Finally, Thompson (1980) demonstrated evidence for the eye and mouth areas affecting processing but did not clearly assess their individual contributions. Thus, these studies fall short of fully explaining the role of individual facial features and their potential integration into a holistic mechanism of facial recognition. Therefore, the present experiment aims to examine the roles of two facial features, the eyes and the mouth, both independently and in combination, in contributing to or challenging a holistic model of facial perception. We will ‘impair’ visual information from the eye and mouth regions, given that we know they may impact facial processing.


Our hypotheses are threefold. First, we expect a main effect of eye impairment on facial recognition, in light of previous literature demonstrating the importance of the eye region. Second, we expect a main effect of mouth impairment, given its impact on facial recognition (Thompson, 1980). Third, we expect a potential interaction between these effects, where impairing both features together would hinder recognition beyond their individual contributions (as posited by a holistic model of processing). Testing these predictions will provide insight into whether facial recognition operates through a holistic mechanism or relies more on independent processing of individual features.


To address these predictions, we employ a recognition paradigm (Valentine & Bruce, 1986), a validated method for studying facial processing. In this paradigm, participants first view a series of manipulated target faces and later identify them from a mix of new faces and previously seen, unedited faces. We altered facial features using contrast reversal, a technique demonstrated to reduce the visual information available from a particular feature (Nederhouser et al., 2007). This manipulation was applied selectively to either the eyes, the mouth, or both. We also aim to address previous limitations: this method effectively measures the impact of our facial edits, and short, easy trials should prevent participant fatigue and cognitive load.


Methods


Participants

We recruited 62 participants through opportunistic sampling, with links to the experiment sent out via messaging. Participants were aged 18-70 (M = 25.10, SD = 13.86), with an even gender split: 31 were male and 31 were female. As an exclusion criterion, we would not have included data from any participant who answered only ‘yes’ or only ‘no’ on over 80% of the trials, as this may be indicative of inattention. However, none of the participants needed to be filtered out, and we analysed all the data we collected. Additionally, around 40 participants were of Asian/Chinese ethnicity, 18 were White/White British, 3 were Indian, and one was of Mixed Asian/African ethnicity.


Design

We adopted a 2x2 between-subjects design. This design made the experiment faster for participants and reduced demand characteristics and carryover effects between conditions, given that participants took part in only one of four conditions. We had two independent variables, each with two levels: Mouth Contrast (contrast vs no contrast) and Eye Contrast (contrast vs no contrast). This design also provided a condition with no facial manipulations to serve as a baseline for valid comparison with the experimental conditions. Additionally, it allowed us to separately and jointly evaluate the contributions of the eyes and lips to facial recognition and to assess whether their effects align with a holistic or featural processing framework. Our dependent variable was the accuracy of facial recognition, operationalised by calculating A′ scores, which assessed participants’ sensitivity in distinguishing images they had already seen (old items) from images they had not yet seen (new items).


Materials

Our stimuli were selected from the Chicago Face Database (Ma, Correll, & Wittenbrink, 2015) and consisted of 30 White male faces. Controlling gender and ethnicity added a level of control to the stimuli, allowing us to better test the sole effect of our manipulations on facial recognition, rather than gender differences acting as a cue for recognition. We also included a range of ages within these images to better generalise any results to the wider population. Images were cropped to remove the hair, hairline, and ear areas so that clothing, hair, or other non-face cues could not facilitate recognition. All of our images were converted to greyscale to prevent eye colour or skin tone influencing or aiding recognition. Reverse contrast was then applied to the respective sections of the face. See Figure 1 for examples of contrast.

Our test paradigm was made on Gorilla Experiment Builder (Anwyl-Irvine et al., 2018).
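The contrast-reversal edit itself is straightforward to reproduce. The sketch below is purely illustrative and is not the tool used to prepare the stimuli: it inverts greyscale pixel values inside one rectangular region, which is the essence of reverse-contrasting the eye or mouth area. The function name and the nested-list image representation are our own.

```python
def reverse_contrast(image, top, left, bottom, right):
    """Invert greyscale values (0-255) inside one rectangular region,
    leaving the rest of the image untouched."""
    out = [row[:] for row in image]          # work on a copy of the image
    for r in range(top, bottom):
        for c in range(left, right):
            out[r][c] = 255 - out[r][c]      # contrast reversal: dark <-> light
    return out

# Toy example: reverse-contrast the top-left quadrant of a uniform 4x4 image
face = [[200] * 4 for _ in range(4)]
edited = reverse_contrast(face, 0, 0, 2, 2)
```

Applied to a real stimulus, the region bounds would frame the eye or mouth area; for the combined condition the same operation is simply applied to both regions.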


[Figure 1: Examples of the contrast manipulation]

Procedure

[Figure caption: Contrast manipulation visual]

The experiment started with participants reading an information sheet sharing the necessary details about the experiment they would be taking part in. Participants were then asked to disclose their age, ethnicity, and gender. The experimental stage then began: participants were placed in one of four conditions, where they saw 15 faces with either the lips contrast-reversed, the eyes contrast-reversed, both lips and eyes contrast-reversed, or no manipulations (control condition). Participants were exposed to these faces during a memorisation phase, in which they were asked to memorise the faces presented on screen. Each face was presented for 3 seconds. Subsequently, in the test phase, participants saw 30 faces, 15 of which were the unedited versions of the manipulated faces they initially saw, and 15 of which were new and unedited. See Figure 2 for a simplified experimental procedure. Participants were asked to discern whether the faces were old or new; an example of this test task can be seen in Figure 3. Before the experiment ended, participants were asked if they had guessed the aim of the experiment and to report any glitches. All trials were completed on a computer.



[Figure 2: Simplified experimental procedure]

[Figure 3: Example of the test task]

Statistics

A′ scores were used to measure sensitivity in distinguishing old from new material. These were calculated by comparing the proportion of old items correctly identified as old (hit rate) with the proportion of new items incorrectly identified as old (false-alarm rate). When A′ is equal to 0.5, the participant is performing at chance level; when the A′ score is greater than 0.5, there is higher sensitivity in detecting old from new items. Therefore, the higher the A′, the higher the participant’s sensitivity, meaning an A′ score of 1 is indicative of strong discriminative power for identifying old from new items. Notably, we counted responses that had timed out as ‘false’. A′ scores were calculated in R using the ‘psycho’ package (Makowski, 2018).
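For concreteness, the A′ computation described above can be written out directly. This is a sketch of the standard non-parametric formula (Grier, 1971), on which sensitivity indices of this kind are conventionally based; we have not verified that the ‘psycho’ package uses exactly this variant, and the function name is our own.

```python
def a_prime(hit_rate, fa_rate):
    """Non-parametric sensitivity index A'.
    0.5 = chance; values approaching 1 = strong old/new discrimination."""
    h, f = hit_rate, fa_rate
    if h == f:
        return 0.5                                     # chance-level responding
    if h > f:
        return 0.5 + ((h - f) * (1 + h - f)) / (4 * h * (1 - f))
    # mirror-image formula for below-chance responding
    return 0.5 - ((f - h) * (1 + f - h)) / (4 * f * (1 - h))

# e.g. a hit rate of .8 with a false-alarm rate of .2 gives A' of about 0.875
```

Note how the index depends only on the two proportions, which is why timed-out trials must be assigned to one category (here, counted as incorrect) before the rates are computed.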



Results

We ran a 2 x 2 between-subjects ANOVA to investigate the effect of Eye Contrast (Eye Contrast vs No Eye Contrast) and Mouth Contrast (Mouth Contrast vs No Mouth Contrast) on A′ scores. We found a main effect of Eye Contrast, F(1, 58) = 18.9, p < .001, ηg² = 0.25, indicating that participants recognise faces less well when the eyes are contrast-reversed (M = 0.63, SD = 0.18) compared to the no-contrast control condition (M = 0.82, SD = 0.09). There was no main effect of Lip Contrast, F(1, 58) = 0.006, p = .94, ηg² = 0.0001, despite a difference in A′ score for the Lip Contrast condition (M = 0.78, SD = 0.13) compared to the control. Finally, we found a non-significant interaction between Lip Contrast and Eye Contrast, F(1, 58) = 1.78, p = .18, ηg² = 0.03. A visualisation of our results can be seen in the figure below.



[Figure: Visualisation of the ANOVA results]
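For readers unfamiliar with how F ratios like those above are produced, the sum-of-squares partitioning behind a balanced 2x2 between-subjects ANOVA can be sketched in a few lines. This is illustrative only: it assumes equal cell sizes for simplicity, the reported analysis was run with standard statistical software, and the function name and toy data are our own.

```python
def anova_2x2(cells):
    """Balanced 2x2 between-subjects ANOVA.
    `cells[a][b]` holds the scores for level a of factor A and level b of
    factor B (equal n per cell). Returns (F_A, F_B, F_interaction)."""
    n = len(cells[0][0])
    grand = sum(sum(sum(c) for c in row) for row in cells) / (4 * n)
    cell_m = [[sum(c) / n for c in row] for row in cells]      # cell means
    a_m = [(cell_m[a][0] + cell_m[a][1]) / 2 for a in range(2)]  # A marginals
    b_m = [(cell_m[0][b] + cell_m[1][b]) / 2 for b in range(2)]  # B marginals
    ss_a = 2 * n * sum((m - grand) ** 2 for m in a_m)
    ss_b = 2 * n * sum((m - grand) ** 2 for m in b_m)
    ss_ab = n * sum((cell_m[a][b] - a_m[a] - b_m[b] + grand) ** 2
                    for a in range(2) for b in range(2))
    ss_err = sum((y - cell_m[a][b]) ** 2
                 for a in range(2) for b in range(2) for y in cells[a][b])
    ms_err = ss_err / (4 * (n - 1))            # df_error = 4n - 4
    return ss_a / ms_err, ss_b / ms_err, ss_ab / ms_err

# Toy data: factor A has a large effect; B and the interaction have none
f_a, f_b, f_ab = anova_2x2([[[1, 3], [1, 3]], [[5, 7], [5, 7]]])
```

Each F ratio is the effect's mean square divided by the error mean square; because each effect here has one degree of freedom, its mean square equals its sum of squares.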



Discussion


The above results suggest that we may rely significantly on the eye region for facial recognition, indicating that the eyes play a significant role in facial processing and recognition. In contrast, the mouth region did not have a significant impact on facial processing or recognition. Additionally, the non-significant interaction between eye and lip contrast suggests that the effects of impairing visual information in these regions are independent and do not influence each other. This indicates that the eyes contribute to facial recognition as a separate component rather than through a shared or synergistic mechanism with other features. Consequently, these findings may not support a holistic or domain-specific model of facial processing.


Our finding of a main effect of eye contrast aligns with prior research by Sekiguchi (2011) and Royer et al. (2018), which demonstrated the crucial role of the eyes in processing and memorising faces. Consistent with these studies, our research underscores the significant role of the eyes in facial recognition, suggesting that the eyes may be a more critical feature than others, such as the lips. A potential explanation for this prominence could lie in neural mechanisms. For instance, Engell and McCarthy (2014) identified specific eye-selective areas in the fusiform face area (FFA) through intracranial EEG experiments, indicating a visual processing area reserved for the eyes. This enhanced focus on the eyes may provide particularly detailed information, which is logical given the variability in eye colour, shape, and shading; factors that may make eyes especially useful for distinguishing individuals. Our non-significant effect of mouth contrast suggests that the mouth area has less impact in aiding facial recognition. These findings align with previous research, notably McKelvie (1976), who used a recognition paradigm to show that the mouth plays a less prominent role in facial recognition than the eyes. Similarly, Lestriandoko et al. (2022) reported that while blurring the mouth area caused some differences in recognition tasks, these were attributed to ‘noise’ rather than a significant effect. Based on our results, we also provide some evidence that the mouth region may not make a substantial contribution to facial processing and recognition.


It is also important to consider the lack of an interaction effect in our study and its potential implications for the domain-specific hypothesis. The holistic approach is often described as facial features forming a "gestalt", where the combination of features creates an emergent property: the face as a unified percept. Intrinsic to this theory is the prediction that features interact, meaning their combined effects contribute to facial recognition beyond their individual effects. However, our findings, which did not demonstrate an interaction between impairing the eyes and lips, suggest a more independent, featural approach to facial processing. Therefore, our results could be seen to contradict the idea that holistic mechanisms are central to facial recognition.


Our results may align more closely with the expertise hypothesis (Valentine, 1988), which offers an alternative to the domain-specific theory. The expertise hypothesis argues that our ability to recognise faces exceptionally well stems from our familiarity with and frequent exposure to faces, rather than a specialised facial processing system. It suggests that face processing is similar to object processing, relying heavily on individual features to form a whole percept.


There is also a range of studies supporting this. For example, Engell and McCarthy (2014) found that the eyes independently exert a significant impact on facial recognition, with a disproportionately large role in facial processing compared to other features. Diamond and Carey’s (1986) widely recognised findings, examining whether faces are uniquely vulnerable to inversion, suggested that being an ‘expert’ in, or more familiar with, certain visual stimuli also affected the inversion effect. This presented the potential role of familiarity in aiding facial processing. Therefore, our results, along with these findings, could be seen to provide support for the expertise hypothesis of facial processing.


It is also important to consider some potential limitations of our research. One limitation lies in the lack of ethnic and gender diversity in our stimuli. While using all-White, male, expressionless faces allowed us to control for other facial cues that might aid recognition, it may have limited the validity and reliability of our results. For instance, given the potential influence of familiarity on facial processing, the ethnic background of participants may have affected how they processed and memorised faces. Our study included participants from diverse ethnic backgrounds, such as White, Asian, and Mixed backgrounds. This diversity might have introduced confounds; for example, an Asian participant may have found it more difficult than a White participant to memorise a White face due to a potential lack of familiarity. This could have influenced our findings and introduced bias. Specifically, the generalisability of our results may have been lowered, given that our participants were predominantly of Asian and White ethnicities.


Lastly, there were factors we were unable to control for, such as individual traits and circumstances. For example, participants' eyesight could have affected how clearly they saw the images and, in turn, how they processed the faces. Additionally, as the experiment was computer-based and conducted individually, uncontrolled variables such as distance from the screen, screen size, and screen brightness may have influenced how well participants processed and memorised the images. These factors could pose threats to the validity, reliability, and generalisability of our results, given we were unsure how well participants saw, and thus could process, the images.


The present study also offers insightful directions for future research. Future studies could investigate the role of other facial features, such as the nose or eyebrow regions, in facial recognition. This would provide a broader understanding of which features contribute to facial processing and how their relative importance is weighted. Such research would offer further evidence for or against a holistic model of facial processing. Moreover, future research could address the limitations of this study by placing participants in a controlled laboratory setting. By standardising factors such as screen size, distance, and brightness, researchers could draw more reliable and valid conclusions about the results.



References

Burton, A. M., Wilson, S., Cowan, M., & Bruce, V. (1999). Face recognition in poor-quality video: Evidence from security surveillance. Psychological Science, 10(3), 243–248. https://doi.org/10.1111/1467-9280.00144

Diamond, R., & Carey, S. (1986). Why faces are and are not special: An effect of expertise. Journal of Experimental Psychology: General, 115(2), 107–117. https://doi.org/10.1037//0096-3445.115.2.107

Duchaine, B., & Nakayama, K. (2006). The Cambridge Face Memory Test: Results for neurologically intact individuals and an investigation of its validity using inverted face stimuli and prosopagnosic participants. Neuropsychologia, 44(4), 576–585. https://doi.org/10.1016/j.neuropsychologia.2005.07.001

Engell, A. D., & McCarthy, G. (2014). Face, eye, and body selective responses in fusiform gyrus and adjacent cortex: An intracranial EEG study. Frontiers in Human Neuroscience, 8. https://doi.org/10.3389/fnhum.2014.00642

Kanwisher, N., McDermott, J., & Chun, M. M. (1997). The fusiform face area: A module in human extrastriate cortex specialized for face perception. The Journal of Neuroscience, 17(11), 4302–4311. https://pubmed.ncbi.nlm.nih.gov/9151747/

Lestriandoko, N. H., Veldhuis, R., & Spreeuwers, L. (2022). The contribution of different face parts to deep face recognition. Frontiers in Computer Science, 4. https://doi.org/10.3389/fcomp.2022.958629

Ma, D. S., Correll, J., & Wittenbrink, B. (2015). The Chicago Face Database: A free stimulus set of faces and norming data. Behavior Research Methods, 47(4), 1122–1135.

McKelvie, S. J. (1976). The role of eyes and mouth in the memory of a face. The American Journal of Psychology, 89(2), 311. https://doi.org/10.2307/1421414

Morris, R. W., Weickert, C. S., & Loughland, C. M. (2009). Emotional face processing in schizophrenia. Current Opinion in Psychiatry, 22(2), 140–146. https://doi.org/10.1097/yco.0b013e328324f895

Nederhouser, M., Yue, X., Mangini, M. C., & Biederman, I. (2007). The deleterious effect of contrast reversal on recognition is unique to faces, not objects. Vision Research, 47(16), 2134–2142. https://doi.org/10.1016/j.visres.2007.04.007

Royer, J., Blais, C., Charbonneau, I., Déry, K., Tardif, J., Duchaine, B., Gosselin, F., & Fiset, D. (2018). Greater reliance on the eye region predicts better face recognition ability. Cognition, 181, 12–20. https://doi.org/10.1016/j.cognition.2018.08.004


Sekiguchi, T. (2011). Individual differences in face memory and eye fixation patterns during face learning. Acta Psychologica, 137(1), 1–9. https://doi.org/10.1016/j.actpsy.2011.01.014

Thompson, P. (1980). Margaret Thatcher: A new illusion. Perception, 9(4), 483–484. https://doi.org/10.1068/p090483

Tsao, D. Y., & Livingstone, M. S. (2008). Mechanisms of Face Perception. Annual Review of Neuroscience, 31(1), 411–437. https://doi.org/10.1146/annurev.neuro.30.051606.094238

Valentine, T. (1988). Upside-down faces: A review of the effect of inversion upon face recognition. British Journal of Psychology, 79(4), 471–491. https://doi.org/10.1111/j.2044-8295.1988.tb02747.x

Yin, R. K. (1969). Looking at upside-down faces. Journal of Experimental Psychology, 81(1), 141–145. https://doi.org/10.1037/h0027474

 
 
 
