The THINGS-Mooney database provides ~2,000 Mooney-style (two-tone, distorted) images created from the original license-free THINGS-plus images. This repository also contains a toolbox for creating your own Mooney-style images, as well as subjective recognition scores for each Mooney image, collected from a large sample of participants (n = 947).
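The classic procedure for producing Mooney-style two-tone images is to smooth a grayscale photograph and then binarize it at a threshold. The sketch below illustrates that general idea only; the function name, parameters, and defaults are illustrative assumptions, not the actual API of this repository's toolbox.

```python
import numpy as np

def mooneyize(gray, sigma=4.0, threshold=None):
    """Two-tone a grayscale image: Gaussian-smooth, then binarize.

    `gray` is a 2-D float array in [0, 1]. If `threshold` is None, the
    median of the smoothed image is used, giving roughly balanced black
    and white regions. (Hypothetical helper, not the toolbox's own code.)
    """
    # Build a 1-D Gaussian kernel and blur separably: rows, then columns.
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    blurred = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode="same"), 1, gray)
    blurred = np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode="same"), 0, blurred)
    # Binarize: pixels above threshold become white (1), the rest black (0).
    if threshold is None:
        threshold = np.median(blurred)
    return (blurred > threshold).astype(np.uint8)
```

Varying `sigma` and `threshold` changes how much local detail survives, which is what makes some Mooney images easy and others hard to recognize.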
If you use this dataset in your research, please cite this repository as follows:
Linde-Domingo, J.*, Ortiz-Tudela, J.*, Voeller, J., & González-García, C. (2024). THINGS-Mooney database (v1.0.0). Zenodo. https://doi.org/10.5281/zenodo.14040189
(* denotes equal contribution)
Manuscript here.
This resource includes the visual and linguistic materials used in the study “Evidence for a reversal of the neural information flow between object perception and object reconstruction from memory” (Linde-Domingo, Treder, Kerrén, & Wimber, 2019, Nature Communications). The stimuli consist of a curated set of photographs and line drawings depicting animate and inanimate objects (most obtained from the BOSS database). The set also includes a list of action verbs used to create vivid, interactive encoding scenarios. These materials are useful for research on perception, memory retrieval, and the dynamics of visual and semantic processing. Download here.