A dataset recorded during development of an affective brain-computer music interface: testing session

OpenNeuro/NEMAR Dataset: ds002723 Files: 204 Dataset size: 2.6 GB
Channels: 32 EEG, 1 ECG, 4 Misc
Participants: 8
Event files: 44
HED annotation: No
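
Because the dataset is organised in BIDS, any BIDS-aware reader can load individual recordings. Below is a minimal sketch using MNE-BIDS, assuming the dataset has been downloaded to a local `ds002723` folder and that subject and run labels follow the usual zero-padded convention (`01`, `02`, ...); check the actual filenames before relying on these values.

```python
# Minimal sketch: load one recording from ds002723 with MNE-BIDS.
# Assumptions: the dataset sits in ./ds002723, and subject/run labels are
# zero-padded ("01"..."08", runs "01"..."06"); verify against the real filenames.
from mne_bids import BIDSPath, read_raw_bids

bids_path = BIDSPath(
    subject="01",      # one of the 8 participants (label assumed)
    task="Run",        # task name listed in the dataset metadata
    run="01",          # 6 scans per session, so presumably runs 01-06 (assumed)
    suffix="eeg",
    datatype="eeg",
    root="ds002723",   # local path to the downloaded dataset (assumed)
)

raw = read_raw_bids(bids_path)   # reads the .edf recording plus BIDS sidecars
print(raw.info["sfreq"])         # expected: 1000.0 (1 kHz sampling rate)
print(len(raw.ch_names))         # expected: 37 (32 EEG + 1 ECG + 4 misc)
```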

README
Sections:

  1. Project
  2. Dataset
  3. Terms of Use
  4. Contents
  5. Method and Processing

  1. PROJECT

Title: Brain-Computer Music Interface for Monitoring and Inducing Affective States (BCMI-MIdAS)
Dates: 2012-2017
Funding organisation: Engineering and Physical Sciences Research Council (EPSRC)
Grant no.: EP/J003077/1

  2. DATASET

EEG data from an affective Music Brain-Computer Interface: online real-time control.

Description: This dataset accompanies the publication by Daly et al. (2018) and has been analysed in Daly et al. (2016) (please see Section 5 for full references). The data were collected to investigate the performance of a real-time, online brain-computer interface that identified the user’s emotional state and modified music on the fly in order to induce a target emotional state. For this purpose, participants listened to 60 s music clips targeting different affective states, as defined by valence and arousal. The music clips were generated using a synthetic music generator. The dataset contains the EEG data from 8 healthy adult participants during real-time control of the system while listening to the music clips, together with the reported affective state (valence and arousal values).
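
The reported valence and arousal values are stored alongside the EEG in the BIDS event files. A minimal sketch for inspecting them with pandas is shown below; the file path follows standard BIDS naming, and any column names beyond `onset` and `duration` are assumptions, so check `df.columns` on the real file.

```python
# Minimal sketch: inspect the events (including reported affective state) for one run.
# The path is an assumption based on standard BIDS naming; column names other than
# "onset"/"duration" vary by dataset, so inspect df.columns to find where the
# valence/arousal ratings are stored.
import pandas as pd

events_file = "ds002723/sub-01/eeg/sub-01_task-Run_run-01_events.tsv"
df = pd.read_csv(events_file, sep="\t")

print(df.columns.tolist())  # see which columns encode valence/arousal and targets
print(df.head())
```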

This dataset is connected to 2 additional datasets:

  1. EEG data from an affective Music Brain-Computer Interface: system calibration. doi:
  2. EEG data from an affective Music Brain-Computer Interface: offline training data to induce target emotional states. doi:

Please note that the number of participants varies between datasets; however, participant codes are the same across all three datasets.

Publication Year: 2018

Creators: Nicoletta Nicolaou, Ian Daly
Contributors: Isil Poyraz Bilgin, James Weaver, Asad Malik, Alexis Kirke, Duncan Williams
Principal Investigator: Slawomir Nasuto (EP/J003077/1)
Co-Investigator: Eduardo Miranda (EP/J002135/1)
Organisation: University of Reading
Rights-holders: University of Reading
Source: The synthetic generator used to generate the music clips was presented in Williams et al., “Affective Calibration of Musical Feature Sets in an Emotionally Intelligent Music Composition System”, ACM Trans. Appl. Percept. 14, 3, Article 17 (May 2017), 13 pages. DOI: https://doi.org/10.1145/3059005

  3. TERMS OF USE

Copyright University of Reading, 2018. This dataset is licensed by the rights-holder(s) under a Creative Commons Attribution 4.0 International Licence: https://creativecommons.org/licenses/by/4.0/.

  4. CONTENTS

The dataset comprises data from 8 subjects. The sampling rate is 1 kHz and each music-listening task corresponds to a 60 s music clip (clip duration). During the first 20 s, the music clip places the listener in an initial emotional state, while for the remaining 40 s the clip targets a trajectory towards a second emotional state. Within each 60 s music-listening epoch there are therefore two target affective states: in the first 20 s the music is generated to target the first affective state (target A); for the next 20 s the BCMI attempts to (a) estimate the affective state the participant is actually in and (b) generate music to move them from that state towards the second target affective state (target B), which is then targeted for the last 20 s of the epoch.
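
At a 1 kHz sampling rate each 60 s epoch spans 60 000 samples, so the three 20 s windows (initial target A, estimation/transition window, final target B) can be cut out by sample index. A minimal sketch, assuming `raw` was loaded as in the earlier MNE-BIDS example and that `t0` is the epoch onset in seconds taken from the events file:

```python
# Minimal sketch: split one 60 s listening epoch into its three 20 s windows.
# Assumes `raw` was loaded as in the earlier MNE-BIDS example and that `t0`
# holds the epoch onset (in seconds) read from the corresponding events.tsv row.
sfreq = int(raw.info["sfreq"])   # 1000 samples/s
t0 = 0.0                         # placeholder onset; take the real value from events.tsv
start = int(t0 * sfreq)

data = raw.get_data()            # array of shape (n_channels, n_samples)
windows = {
    "target_A":   data[:, start              : start + 20 * sfreq],
    "transition": data[:, start + 20 * sfreq : start + 40 * sfreq],
    "target_B":   data[:, start + 40 * sfreq : start + 60 * sfreq],
}
for name, segment in windows.items():
    print(name, segment.shape)   # each window should be (n_channels, 20000)
```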

  5. METHOD AND PROCESSING

This information is available in the following publications:

[1] Daly, I., Nicolaou, N., Williams, D., Hwang, F., Kirke, A., Miranda, E., Nasuto, S.J., “Neural and physiological data from participants listening to affective music”, Scientific Data, 2018.

[2] Daly, I., Williams, D., Hwang, F., Kirke, A., Malik, A., Weaver, J., Miranda, E.R., Nasuto, S.J., “Affective Brain-Computer Music Interfacing”, Journal of Neural Engineering, 13:4, July 2016. DOI: http://dx.doi.org/10.1088/1741-2560/13/4/046022

If you use this dataset in your study, please cite these references, as well as the following reference:

[3] Williams, D., Kirke, A., Miranda, E.R., Daly, I., Hwang, F., Weaver, J., Nasuto, S.J., “Affective Calibration of Musical Feature Sets in an Emotionally Intelligent Music Composition System”, ACM Trans. Appl. Percept. 14, 3, Article 17 (May 2017), 13 pages. DOI: https://doi.org/10.1145/3059005

Thank you for your interest in our work.


BIDS Version: 1.0.2 HED Version: 1.1.0

On Brainlife: True Published date: 2020-04-24 15:29:36

Tasks: Run

Available modalities: channels, eeg, events

Format(s): .edf

Sessions: 1 Scans/session: 6 Ages (yrs): 19 - 30 License: CC0

Dataset DOI: 10.18112/openneuro.ds002723.v1.1.0

Uploaded by Ian Daly on 2020-04-22 21:34:22

Last Updated 2020-04-23 11:34:51

Authors
Ian Daly, Nicoletta Nicolaou, Duncan Williams, Faustina Hwang, Alexis Kirke, Eduardo Miranda, Slawomir J. Nasuto

Acknowledgements

How to Acknowledge

Funding

References and Links

Ethics Approvals