Although it has been taken for granted in the development of several automatic facial expression recognition tools, the coherence between subjective feelings and facial expressions remains a subject of debate. On one hand, the “Basic Emotion View” holds that emotions are genetically hardwired and therefore genuinely displayed through facial expressions; consequently, emotion recognition is perceiver-independent. On the other hand, the constructivist approach holds that emotions are socially constructed, the emotional meaning of a facial expression being inferred by the perceiver; hence, emotion recognition is perceiver-dependent. In order (1) to evaluate the coherence between the subjective feeling of emotions and their spontaneous facial displays, and (2) to compare the recognition of such displays by human perceivers and by an automatic facial expression classifier, 232 videos of expressers recruited to carry out an emotion elicitation task were annotated by 1,383 human perceivers as well as by Affdex, an automatic classifier. Results show weak consistency between the emotional states self-reported by expressers and their facial emotional displays. They also show that both human perceivers and the automatic classifier were poor at inferring subjective feelings from the spontaneous facial expressions displayed by expressers. However, the results lean more toward a perceiver-dependent view. Based on these results, the hypothesis that genetically hardwired emotions are genuinely displayed is difficult to support, whereas the idea that emotions and facial expressions are socially constructed appears more likely. Accordingly, automatic emotion recognition tools based on facial expressions should be questioned.