Updating the Chieti Affective Action Videos database with older adults

Scientific Data volume 8, Article number: 272 (2021)
Validation of the Chieti Affective Action Videos (CAAV) database was replicated with a sample of older adults (age range 65–93). When designing experimental studies of emotions, it is crucial to take into account the differences in emotional processing between young and older adults. The main goal of the present study was therefore to provide an appropriate dataset for the use of the CAAV in aging research. To this end, the CAAV administration and data collection methodology were faithfully replicated in a sample of 302 older adults. All 360 standardized stimuli were evaluated on the emotional dimensions of valence and arousal. Validating the CAAV in an older adult population increases the potential use of this innovative tool and supports its use in future experimental studies on cognitive functions in healthy and pathological aging.
Machine-accessible metadata file describing the reported data: https://doi.org/10.6084/m9.figshare.14992173
The Chieti Affective Action Videos (CAAV)1 is an innovative database of movie clips developed specifically for experimental research in psychology. The CAAV comprises a large number of emotional action videos rated for valence and arousal. The action videos are homogeneous in terms of length (15 seconds), brightness, and camera angle, and all depict everyday life actions. Crucially, the innovative aspect of this tool lies in the control of two factors: a) the gender of the actors performing the action; and b) the point of view (POV) through which the action is carried out. In particular, an actor and an actress performed the same 90 actions both in first-person POV and in third-person POV, for a total of 360 emotional action videos. For each stimulus, the CAAV provides an emotional rating based on arousal and valence scores. The database validation is based on the Dimensional Model of Emotions2,3 and specifically on the circumplex model of affect4. Briefly, this model postulates that emotions can be identified by their location along two dimensions: valence and arousal. The dimension of valence (i.e., pleasantness) differentiates positive (pleasant) from negative (unpleasant) emotional states. The dimension of arousal (i.e., activation) differentiates highly exciting and arousing states from calm and relaxing states. The rating of the CAAV database is based on this dimensional approach and provides a continuous and balanced distribution of the stimuli across valence and arousal. As a result, the CAAV allows the identification of action videos with intermediate scores on both dimensions, which can be classified as emotionally neutral. The CAAV has innovative features compared to previous emotional databases5,6,7. Several databases have explored the emotional dimensions with static and dynamic stimuli (e.g., words8, pictures9, sounds10, faces11 and movie clips12).
Among databases using dynamic emotional stimuli, many consist of collections of clips extracted from movie scenes, which limits the possibility of standardizing (or experimentally controlling) specific features (e.g., duration, brightness, and camera angle). By contrast, the CAAV's movie clips were tailored to keep these features constant. In addition, each action in the CAAV was performed by a female and a male actor in both first- and third-person POVs; the CAAV therefore provides gender- and POV-balanced material. These features make the CAAV particularly well-suited to analyzing the role of POV in individuals' emotional responses13. Furthermore, the CAAV may be especially useful to explore and avoid evaluation biases due to the actor's gender14,15. These characteristics make the CAAV a highly ecological and immersive tool.
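As an illustration of the dimensional approach underlying the CAAV, the sketch below shows how a video's mean valence and arousal ratings (on the 9-point SAM scale used in the validation) could be used to group stimuli. The band of "intermediate" scores is an assumption chosen for demonstration, not a threshold taken from the database.

```python
# Group a stimulus by its position in the valence/arousal space of the
# circumplex model. Ratings are on the 9-point SAM scale (1-9); the
# [low, high] band defining "intermediate" scores is illustrative.

def classify_stimulus(valence: float, arousal: float,
                      low: float = 4.0, high: float = 6.0) -> str:
    """Place a video in the 2-D valence/arousal space."""
    if low <= valence <= high and low <= arousal <= high:
        return "neutral"  # intermediate on both dimensions
    if valence > high:
        pleasantness = "positive"
    elif valence < low:
        pleasantness = "negative"
    else:
        pleasantness = "mid-valence"
    if arousal > high:
        activation = "high-arousal"
    elif arousal < low:
        activation = "low-arousal"
    else:
        activation = "mid-arousal"
    return f"{pleasantness}/{activation}"

print(classify_stimulus(7.8, 6.9))  # a pleasant, activating video
print(classify_stimulus(5.1, 4.6))  # intermediate on both -> neutral
```

Under this scheme, the emotionally neutral videos mentioned above are exactly those falling inside the intermediate band on both axes.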
The previous validation of the CAAV included only young adults and therefore neglected age-related differences in valence and arousal ratings of emotional stimuli16,17. Indeed, studies have shown that older adults differ from younger adults in several aspects of emotional processing: for instance, older adults show greater stability of mood18, greater capacity for emotional regulation19,20,21, reduced autonomic reactions to emotional stimuli22,23,24,25,26, and reduced emotion recognition27,28. These differences could affect the CAAV ratings of valence and arousal: some emotional action videos may be perceived as more or less positive, or more or less exciting, by a sample of older adults. Supporting this possibility, previous studies found differences between young and older adults in ratings of emotional images from the International Affective Picture System (IAPS)9 database29,30. For this reason, it is crucial that studies on aging use emotional stimuli rated by a sample of older people. Given the growing research interest in the role of emotions in cognitive functions during aging, the goal of this work was to replicate the CAAV validation with a sample of older adults, thereby providing appropriate valence and arousal scores for the use of an innovative, ecological, and immersive tool for experimental psychological research in aging.
The sample included 302 healthy older adults (65–93 years old, mean = 72.67, SD = 6.61) who participated on a voluntary basis, receiving no compensation. All participants were Italian and Caucasian; all were Italian native speakers and able to read and write. Specifically, we recruited 151 males and 151 females with years of education ranging from 0 to 24 (mean = 8.95 years; SD = 4.47). To detect the presence of cognitive impairment, the Mini Mental State Examination (MMSE)31 was administered. The test consists of 30 items covering seven cognitive domains: orientation in time, orientation in space, word encoding, attention and calculation, recall, language, and constructional praxis. The total score ranges from 0 to 30 points; the raw score is corrected according to the participant's age and years of education, and a corrected score below 24 indicates possible impairment of cognitive abilities. All participants achieved a corrected MMSE score of at least 24 (mean = 26.73; SD = 1.51). Participants signed a written informed consent form before starting the experiment. Ethical approval was obtained from the Institutional Review Board of Psychology (IRBP) of the Department of Psychological, Health, and Territorial Sciences of the G. d'Annunzio University of Chieti-Pescara.
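The screening step above can be sketched as follows. The participant records and corrected scores are hypothetical, and the age-by-education correction itself (which relies on Italian normative tables) is not reproduced here; the records below are assumed to already carry corrected scores.

```python
# Minimal sketch of MMSE-based screening: participants whose corrected
# score falls below 24 would be flagged for possible cognitive impairment
# and excluded. All records here are hypothetical.

participants = [
    {"id": "P01", "age": 68, "education": 13, "mmse_corrected": 27.5},
    {"id": "P02", "age": 81, "education": 5,  "mmse_corrected": 23.2},
    {"id": "P03", "age": 74, "education": 8,  "mmse_corrected": 24.0},
]

CUTOFF = 24.0  # corrected scores below this suggest possible impairment

eligible = [p["id"] for p in participants if p["mmse_corrected"] >= CUTOFF]
print(eligible)  # P02 falls below the cutoff and is screened out
```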
All 360 video clips in the CAAV database were used. The video clips presented 90 actions, balanced both by perspective (third-person and first-person POV) and by the actor's gender (male and female). The POV was manipulated to control for the immersivity of the emotional actions; stimuli in the first-person POV have been found to be more immersive and to elicit higher valence and arousal scores32,33,34. Notably, perspective-taking ability changes across the lifespan, and age-related differences have been consistently found in previous studies35,36. Moreover, the actors' gender was manipulated to control for potential gender biases in the evaluations37,38,39. The CAAV database therefore comprises the same 90 actions performed by I) a male actor in the first-person POV, II) a male actor in the third-person POV, III) a female actor in the first-person POV, and IV) a female actor in the third-person POV. Both actors were 24 years old and wore a black shirt and blue jeans in every video.
Finally, several aspects were controlled when developing the CAAV: the number of elements simultaneously present in the scene, the camera angle, the light exposure, the setting, and the background.
Regardless of the action performed, the length of the videos was kept constant (15 seconds). The movie clips contain no sound. Finally, each video shows a single straightforward action, so that all stimuli are easy to encode even for older adult participants. We used the video stimuli in their original format (.mpg extension) at a 1920 × 1080 resolution. For each video, we kept the identification code originally assigned in the database. For more details on the creation of the stimuli, refer to the previous validation study1. An overview of the CAAV stimuli and the entire database are freely available for download on the Figshare platform40.
The rating procedure was the same as that used in the previous validation with the younger adult sample1, since the goal of the present study was to carefully replicate the CAAV validation in an older adult sample. The 360 action videos were divided into the same four lists (A, B, C, D) used in the previous validation study, each containing 90 randomized actions. The videos in each list were balanced by actor gender and POV. The four lists contained the same actions, but each list included one video of each action in one of the following four versions: (1) first-person POV – male actor; (2) first-person POV – female actor; (3) third-person POV – male actor; (4) third-person POV – female actor. Therefore, the same action was never repeated within a list. Based on the two dependent variables considered (valence and arousal), the total sample was divided into two groups (Table 1): one group rated the videos for valence, the other for arousal. The four lists were balanced among the participants of each group. The valence group was composed of 141 participants (69 M/72 F), aged between 65 and 89 (mean = 72.47 years; SD = 6.13), with years of education between 0 and 18 (mean = 8.82 years; SD = 4.08) and an average corrected MMSE score of 26.73 (SD = 1.54). The arousal group was composed of 161 participants (82 M/79 F), aged between 65 and 93 (mean = 72.84 years; SD = 7.01), with years of education between 3 and 24 (mean = 9.07 years; SD = 4.79) and an average corrected MMSE score of 26.73 (SD = 1.49).
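A balanced assignment of this kind can be sketched in Python. The version order and the Latin-square rotation below are illustrative assumptions: the original lists were randomized rather than generated by this exact scheme, but the sketch reproduces the key constraints (every list contains all 90 actions, each action appears once per list, and versions are spread as evenly as 90/4 allows).

```python
# Illustrative reconstruction of the four-list design: each of the 90
# actions exists in four versions (POV x actor gender); lists A-D each
# contain every action exactly once, rotating the version per list.
from collections import Counter

VERSIONS = ["1stPOV-male", "1stPOV-female", "3rdPOV-male", "3rdPOV-female"]
ACTIONS = [f"action_{i:02d}" for i in range(1, 91)]

lists = {name: [] for name in "ABCD"}
for i, action in enumerate(ACTIONS):
    for j, name in enumerate("ABCD"):
        version = VERSIONS[(i + j) % 4]  # Latin-square rotation
        lists[name].append((action, version))

# Each list holds 90 unique actions...
assert all(len({a for a, _ in videos}) == 90 for videos in lists.values())
# ...and since 90 is not divisible by 4, each version appears 22 or 23
# times within a list.
print(Counter(v for _, v in lists["A"]))
```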
The tool used to evaluate the movie clips was the Self-Assessment Manikin (SAM)41, a non-verbal pictorial assessment technique commonly used in the study of emotions. The SAM is simple and rapid to administer and can be used efficiently with the older adult population42. It measures a person's affective reaction to a stimulus based on the Dimensional Model of Emotions, namely the Circumplex Model, according to which emotions are distributed in a two-dimensional circular space defined by valence and arousal4. Valence, on the horizontal axis, expresses the level of pleasure, ranging from negative to positive (left to right on the x-axis); arousal, on the vertical axis, expresses the level of physiological activation, from low to high (bottom to top on the y-axis). The SAM measures both dimensions using two Likert scales. As in the previous CAAV validation, we used the 9-point version: one group used the SAM to rate valence, where 1 corresponded to negative, 5 to neutral, and 9 to positive valence; the other group used the SAM to rate arousal, where 1 corresponded to low, 5 to medium, and 9 to high activation. By using the same rating tool adopted in the previous CAAV validation, older adults' scores can be compared directly with those of their younger counterparts.
All the data obtained for the validation of the CAAV in the older adult sample can be downloaded from the Figshare platform43. The data are reported in an Excel file named “AgingCAAV_Dataset”, arranged in the same way as the previous CAAV dataset to facilitate consultation and comparison between the two datasets40. The file contains the mean scores and standard deviations for both valence and arousal for all 360 videos, available both for the whole sample and separated by participant gender. An additional Excel file named “AgingCAAV_RawData” contains the raw data of all experimental subjects. In the current dataset, the MMSE scores (raw and corrected) and years of education have been added; consequently, the file contains the following variables: subject ID, gender, age, education (in years), MMSE_Raw, MMSE_Correct, list administered, and the valence/arousal rating for each of the 360 videos.
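As a sketch of how the summary file relates to the raw file, the snippet below recomputes per-video means and standard deviations, overall and by participant gender, from made-up raw rows. The row layout loosely mirrors the variables described above, but the data, identifiers, and the `summarize` helper are hypothetical.

```python
# Recompute summary statistics (as in "AgingCAAV_Dataset") from raw
# ratings (as in "AgingCAAV_RawData"). All rows below are fabricated.
from statistics import mean, stdev

raw = [  # (subject_id, participant_gender, video_id, rating)
    ("S01", "F", "V001", 7), ("S02", "M", "V001", 6),
    ("S03", "F", "V001", 8), ("S04", "M", "V001", 5),
]

def summarize(rows, video_id, gender=None):
    """Mean and SD of ratings for one video, optionally by gender."""
    scores = [r for _, g, v, r in rows
              if v == video_id and (gender is None or g == gender)]
    return round(mean(scores), 2), round(stdev(scores), 2)

print(summarize(raw, "V001"))        # whole sample
print(summarize(raw, "V001", "F"))   # female participants only
```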
Regarding methodological reliability, both the administration procedure and the instruments used were the same as in the previous CAAV validation1. The rating was performed on a laptop; the video stimuli were presented using E-Prime 2.0, which allowed the presentation order of the 90 stimuli within each list to be randomized. Each participant carried out the rating task in a quiet room, and lighting conditions were kept constant across participants. Before the rating task, a printed version of the MMSE was administered to each participant. Subsequently, the participant was seated in front of the laptop screen to perform the rating task. Before the task started, three tutorial videos (“play with a balloon”, “waving a fan” and “punch a wall”) were presented in both POVs (first/third) and with both actors (female/male), so that participants could familiarize themselves with the type of stimuli and the rating method. These videos are not included in the official database, as they were used for demonstration purposes only. Once the tutorial session was completed, the participant started the main session of the task. Each video was preceded by the phrase “Please rate the next video”, which stayed on the screen for three seconds; immediately afterwards, the movie clip was presented for its entire duration (15 seconds). When the video ended, it disappeared from the screen and the image of the SAM tool appeared. In the valence group the SAM instructions were “Please rate the video based on valence”, while in the arousal group they were “Please rate the video based on arousal”. Participants expressed their ratings by pressing the corresponding number key (1–9). The interplay between the mean valence and arousal scores is plotted in Fig. 1.
A U-shaped distribution emerged along the valence continuum, with greater arousal for negative (i.e., low valence) and positive (i.e., high valence) actions and lower arousal for neutral (i.e., intermediate valence) actions, similar to what was found in the previous CAAV validation. The entire administration procedure lasted about 45 minutes.
Fig. 1 Scatterplot of the relation between valence and arousal scores of each video. The average valence score is reported on the x-axis and the average arousal score on the y-axis for each video.
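The U-shaped pattern can be checked programmatically by binning videos on mean valence and comparing mean arousal across bins. The summary scores and bin boundaries below are fabricated for illustration, not taken from the dataset.

```python
# Check the U-shaped valence-arousal relation: both valence extremes
# should show higher mean arousal than the intermediate (neutral) bin.
from statistics import mean

videos = [  # (mean_valence, mean_arousal) -- hypothetical summary scores
    (1.8, 6.9), (2.4, 6.2), (4.8, 3.1), (5.2, 3.4), (7.6, 6.0), (8.3, 6.7),
]

def bin_of(valence):
    if valence < 4.0:
        return "negative"
    if valence > 6.0:
        return "positive"
    return "neutral"

arousal_by_bin = {b: mean(a for v, a in videos if bin_of(v) == b)
                  for b in ("negative", "neutral", "positive")}
print(arousal_by_bin)

# U-shape: both extremes are more arousing than the neutral middle.
assert arousal_by_bin["negative"] > arousal_by_bin["neutral"]
assert arousal_by_bin["positive"] > arousal_by_bin["neutral"]
```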
By validating the CAAV database with a sample of older adults, the emotional video stimuli can be selected and used more appropriately in experimental studies on aging. Indeed, the literature shows that the way people process emotional information changes across the life span44,45,46,47,48. The rating data collected in the older adult sample therefore make the CAAV stimuli more suitable for aging research, and the dataset provided encourages new experimental studies on emotions investigating differences between young and older adults. Furthermore, these original and innovative video stimuli could be useful for studies on cognitive functions in general (attention, perception, memory, etc.) and on emotions in both healthy and pathological aging49,50,51. The emotional action videos of the CAAV could also be used to develop mood induction methodologies and emotional regulation training programs52,53. The availability of a well-matched and highly controlled database of video stimuli that explicitly manipulates perspective (third/first-person POV) opens new avenues for ecological studies54,55,56; for instance, it would be interesting to use the CAAV stimuli with augmented or virtual reality instruments. To further increase immersion, the database could be extended with new stimuli in which the actions are performed by older adult actors (both male and female); the FACES database, for example, provides images of facial emotional expressions of young, middle-aged, and older adults57. A shared age between the actors in the videos and the participants observing the stimuli could modulate valence and arousal ratings by further increasing the emotional involvement of older adults.
1. Di Crosta, A. et al. The Chieti Affective Action Videos database, a resource for the study of emotions in psychology. Sci. Data 7, 32 (2020).
2. Wundt, W. M. & Judd, C. H. Outlines of Psychology (Engelmann, 1902).
3. Osgood, C. E., Suci, G. J. & Tannenbaum, P. H. The Measurement of Meaning (University of Illinois Press, 1957).
4. Russell, J. A. A circumplex model of affect. J. Pers. Soc. Psychol. 39, 1161 (1980).
5. Gross, J. J. & Levenson, R. W. Emotion elicitation using films. Cogn. Emot. 9, 87–108 (1995).
6. Ray, R. D. & Gross, J. J. Emotion elicitation using films. Handb. Emot. Elicitation Assess. 9 (2007).
7. Deng, Y., Yang, M. & Zhou, R. A new standardized emotional film database for Asian culture. Front. Psychol. 8, 1941 (2017).
8. Bradley, M. M. & Lang, P. J. Affective norms for English words (ANEW): Instruction manual and affective ratings (1999).
9. Lang, P. J., Bradley, M. M. & Cuthbert, B. N. International affective picture system (IAPS): Instruction manual and affective ratings. Cent. Res. Psychophysiol., Univ. Fla. (1999).
10. Bradley, M. M. & Lang, P. J. The International Affective Digitized Sounds (IADS-2): Affective ratings of sounds and instruction manual. Univ. Fla., Gainesville, FL, Tech. Rep. B-3 (2007).
11. Goeleven, E., De Raedt, R., Leyman, L. & Verschuere, B. The Karolinska directed emotional faces: a validation study. Cogn. Emot. 22, 1094–1118 (2008).
12. Baveye, Y., Dellandrea, E., Chamaret, C. & Chen, L. LIRIS-ACCEDE: A video database for affective content analysis. IEEE Trans. Affect. Comput. 6, 43–55 (2015).
13. Denisova, A. & Cairns, P. First person vs. third person perspective in digital games: do player preferences affect immersion? In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, 145–148 (2015).
14. Herlitz, A. & Lovén, J. Sex differences and the own-gender bias in face recognition: A meta-analytic review. Vis. Cogn. 21, 1306–1336 (2013).
15. Cannito, L. et al. Health anxiety and attentional bias toward virus-related stimuli during the COVID-19 pandemic. Sci. Rep. 10, 16476 (2020).
16. Smith, D. P., Hillman, C. H. & Duley, A. R. Influences of age on emotional reactivity during picture processing. J. Gerontol. B Psychol. Sci. Soc. Sci. 60, P49–P56 (2005).
17. Wieser, M. J., Mühlberger, A., Kenntner-Mabiala, R. & Pauli, P. Is emotion processing affected by advancing age? An event-related brain potential study. Brain Res. 1096, 138–147 (2006).
18. Lawton, M. P., Kleban, M. H., Rajagopal, D. & Dean, J. Dimensions of affective experience in three age groups. Psychol. Aging 7, 171 (1992).
19. Carstensen, L. L., Pasupathi, M., Mayr, U. & Nesselroade, J. R. Emotional experience in everyday life across the adult life span. J. Pers. Soc. Psychol. 79, 644 (2000).
20. Gross, J. J. et al. Emotion and aging: experience, expression, and control. Psychol. Aging 12, 590 (1997).
21. Ceccato, I. et al. Age-related differences in the perception of COVID-19 emergency during the Italian outbreak. Aging Ment. Health 25, 1305–1313 (2021).
22. Levenson, R. W., Carstensen, L. L., Friesen, W. V. & Ekman, P. Emotion, physiology, and expression in old age. Psychol. Aging 6, 28 (1991).
23. Kunzmann, U. & Grühn, D. Age differences in emotional reactivity: the sample case of sadness. Psychol. Aging 20, 47 (2005).
24. Mammarella, N. et al. Is there an affective working memory deficit in patients with chronic schizophrenia? Schizophr. Res. 138, 99–101 (2012).
25. Mammarella, N., Fairfield, B., Frisullo, E. & Di Domenico, A. Saying it with a natural child’s voice! When affective auditory manipulations increase working memory in aging. Aging Ment. Health 17, 853–862 (2013).
26. Mammarella, N. et al. The modulating role of ADRA2B in emotional working memory: Attending the negative but remembering the positive. Neurobiol. Learn. Mem. 130, 129–134 (2016).
27. Ceccato, I., Lecce, S., Cavallini, E., van Vugt, F. T. & Ruffman, T. Motivation and social-cognitive abilities in older adults: Convergent evidence from self-report measures and cardiovascular reactivity. PLoS One 14, e0218785 (2019).
28. Zebrowitz, L. A., Franklin, R. G. & Palumbo, R. Ailing voters advance attractive congressional candidates. Evol. Psychol. 13, 16–28 (2015).
29. Grühn, D. & Scheibe, S. Age-related differences in valence and arousal ratings of pictures from the International Affective Picture System (IAPS): Do ratings become more extreme with age? Behav. Res. Methods 40, 512–521 (2008).
30. Palumbo, R., D’Ascenzo, S., Quercia, A. & Tommasi, L. Adaptation to complex pictures: Exposure to emotional valence induces assimilative aftereffects. Front. Psychol. 8 (2017).
31. Folstein, M. F., Folstein, S. E. & McHugh, P. R. “Mini-mental state”: A practical method for grading the cognitive state of patients for the clinician. J. Psychiatr. Res. 12, 189–198 (1975).
32. Oosterhof, N. N., Tipper, S. P. & Downing, P. E. Viewpoint (in)dependence of action representations: an MVPA study. J. Cogn. Neurosci. 24, 975–989 (2012).
33. Kallinen, K., Salminen, M., Ravaja, N., Kedzior, R. & Sääksjärvi, M. Presence and emotion in computer game players during 1st person vs. 3rd person playing view: Evidence from self-report, eye-tracking, and facial muscle activity data. Proc. PRESENCE 187, 190 (2007).
34. Watanabe, R. & Higuchi, T. Behavioral advantages of the first-person perspective model for imitation. Front. Psychol. 7, 701 (2016).
35. Lecce, S., Ceccato, I. & Cavallini, E. Investigating ToM in aging with the MASC: from accuracy to error type. Aging Neuropsychol. Cogn. 26, 541–557 (2019).
36. Lecce, S., Ceccato, I. & Cavallini, E. Theory of mind, mental state talk and social relationships in aging: The case of friendship. Aging Ment. Health 23, 1105–1112 (2019).
37. Man, T. W. & Hills, P. J. Eye-tracking the own-gender bias in face recognition: Other-gender faces are viewed differently to own-gender faces. Vis. Cogn. 24, 447–458 (2016).
38. Wang, B. Gender difference in recognition memory for neutral and emotional faces. Memory 21, 991–1003 (2013).
39. Palumbo, R., Adams, R. B. Jr., Hess, U., Kleck, R. E. & Zebrowitz, L. Age and gender differences in facial attractiveness, but not emotion resemblance, contribute to age and gender stereotypes. Front. Psychol. 8, 1704 (2017).
40. Di Crosta, A. et al. Chieti Affective Action Video – CAAV: Technical Manual and Affective Rating. figshare https://doi.org/10.6084/m9.figshare.c.4691840.v1 (2020).
41. Bradley, M. M. & Lang, P. J. Measuring emotion: the Self-Assessment Manikin and the Semantic Differential. J. Behav. Ther. Exp. Psychiatry 25, 49–59 (1994).
42. Backs, R. W., da Silva, S. P. & Han, K. A comparison of younger and older adults’ self-assessment manikin ratings of affective pictures. Exp. Aging Res. 31, 421–440 (2005).
43. La Malva, P. et al. Updating the Chieti Affective Action Videos database with older adults. figshare https://doi.org/10.6084/m9.figshare.14988489.v3 (2021).
44. Carstensen, L. L. & Mikels, J. A. At the intersection of emotion and cognition: Aging and the positivity effect. Curr. Dir. Psychol. Sci. 14, 117–121 (2005).
45. Spreng, R. N., Wojtowicz, M. & Grady, C. L. Reliable differences in brain activity between young and old adults: a quantitative meta-analysis across multiple cognitive domains. Neurosci. Biobehav. Rev. 34, 1178–1194 (2010).
46. Orgeta, V. Specificity of age differences in emotion regulation. Aging Ment. Health 13, 818–826 (2009).
47. Fairfield, B., Mammarella, N., Palumbo, R. & Di Domenico, A. Emotional meta-memories: a review. Brain Sci. 5, 509–520 (2015).
48. Di Domenico, A., Palumbo, R., Mammarella, N. & Fairfield, B. Aging and emotional expressions: is there a positivity bias during dynamic emotion recognition? Front. Psychol. 6, 1130 (2015).
49. Palumbo, R. & Di Domenico, A. Editorial: New boundaries between aging, cognition, and emotions. Front. Psychol. 9, 1–2 (2018).
50. Di Domenico, A., Palumbo, R., Fairfield, B. & Mammarella, N. Fighting apathy in Alzheimer’s dementia: A brief emotional-based intervention. Psychiatry Res. 242, 331–335 (2016).
51. Malone, C. et al. False memories in patients with mild cognitive impairment and mild Alzheimer’s disease dementia: Can cognitive strategies help? J. Clin. Exp. Neuropsychol. 41, 204–218 (2019).
52. Palumbo, R., Mammarella, N., Di Domenico, A. & Fairfield, B. When and where in aging: the role of music on source monitoring. Aging Clin. Exp. Res. 30, 669–676 (2018).
53. Malone, C., Turk, K. W., Palumbo, R. & Budson, A. E. The effectiveness of item-specific encoding and conservative responding to reduce false memories in patients with mild cognitive impairment and mild Alzheimer’s disease dementia. J. Int. Neuropsychol. Soc. 27, 227–238 (2021).
54. Maiella, R. et al. The psychological distance and climate change: A systematic review on the mitigation and adaptation behaviors. Front. Psychol. 11, 568899 (2020).
55. Ceccato, I. et al. “What’s next?” Individual differences in expected repercussions of the COVID-19 pandemic. Personal. Individ. Differ. 174, 110674 (2021).
56. Rosi, A. et al. Risk perception in a real-world situation (COVID-19): How it changes from 18 to 87 years old. Front. Psychol. 12, 646558 (2021).
57. Ebner, N. C., Riediger, M. & Lindenberger, U. FACES—A database of facial expressions in young, middle-aged, and older women and men: Development and validation. Behav. Res. Methods 42, 351–362 (2010).
We thank Rocco Martella and Miriam Curti for their assistance in filming the scenes.
These authors contributed equally: Pasquale La Malva, Irene Ceccato.
Department of Psychological, Health and Territorial Sciences (DiSPUTer), University G. d’Annunzio – Via dei Vestini, 31 – 66100, Chieti, Italy
Pasquale La Malva, Nicola Mammarella, Rocco Palumbo & Alberto Di Domenico
Department of Neuroscience, Imaging and Clinical Science, University G. d’Annunzio – Via dei Vestini, 31 – 66100, Chieti, Italy
Irene Ceccato, Adolfo Di Crosta, Mirco Fasolo & Riccardo Palumbo
Behavioral Economics and Neuroeconomics, Center of Advanced Studies and Technology (CAST), G. d’Annunzio University of Chieti-Pescara, Chieti, 66100, Italy
Irene Ceccato & Riccardo Palumbo
Department of Neurology, Boston University, 150 South Huntington Avenue, Boston, MA, 02130, USA
Anna Marin
Pasquale La Malva: Software, Data collection, Data curation, Writing. Irene Ceccato: Methodology, Data collection, Data curation, Writing, Supervision. Adolfo Di Crosta: Software, Data collection, Data curation, Review & Editing. Anna Marin: Data collection, Data curation, Review & Editing. Mirco Fasolo: Review & Editing, Resources, Supervision. Riccardo Palumbo: Review & Editing, Resources, Supervision. Nicola Mammarella: Review & Editing, Resources, Supervision. Rocco Palumbo: Conceptualization, Methodology, Writing, Review & Editing, Supervision. Alberto Di Domenico: Conceptualization, Review & Editing, Resources, Supervision.
Correspondence to Rocco Palumbo.
The authors declare no competing interests.
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
The Creative Commons Public Domain Dedication waiver http://creativecommons.org/publicdomain/zero/1.0/ applies to the metadata files associated with this article.
La Malva, P., Ceccato, I., Di Crosta, A. et al. Updating the Chieti Affective Action Videos database with older adults. Sci Data 8, 272 (2021). https://doi.org/10.1038/s41597-021-01053-z
Received: 15 March 2021
Accepted: 06 September 2021
Published: 20 October 2021
Scientific Data (Sci Data) ISSN 2052-4463 (online)
© 2021 Springer Nature Limited