Frontiers in Signal Processing
Auditory Saliency Classification Using EEG Data
pp. 17–23. Pub. Date: July 10, 2017
Author(s)
- Silvia Corchs*, Department of Informatics, Systems and Communication, University of Milano-Bicocca, Milan, Italy
- Francesca Gasparini, Department of Informatics, Systems and Communication, University of Milano-Bicocca, Milan, Italy
References
[1] T. Gruber, M. M. Müller, A. Keil, and T. Elbert, “Selective visual-spatial attention alters induced gamma band responses in the human EEG,” Clinical Neurophysiology, vol. 110, no. 12, pp. 2074–2085, 1999.
[2] M. M. Müller, T. Gruber, and A. Keil, “Modulation of induced gamma band activity in the human EEG by attention and visual information processing,” International Journal of Psychophysiology, vol. 38, no. 3, pp. 283–299, 2000.
[3] A. P. Souza, L. B. Felix, A. M. M. de Sá, and E. M. Mendes, “Vision-free brain-computer interface using auditory selective attention: evaluation of training effect,” in XIV Mediterranean Conference on Medical and Biological Engineering and Computing 2016. Springer, 2016, pp. 196–199.
[4] H. Tang, S. Crain, and B. W. Johnson, “Dual temporal encoding mechanisms in human auditory cortex: Evidence from MEG and EEG,” NeuroImage, vol. 128, pp. 32–43, 2016.
[5] H. Higashi, T. M. Rutkowski, Y. Washizawa, A. Cichocki, and T. Tanaka, “EEG auditory steady state responses classification for the novel BCI,” in Engineering in Medicine and Biology Society, EMBC, 2011 Annual International Conference of the IEEE. IEEE, 2011, pp. 4576–4579.
[6] L. Itti, C. Koch, and E. Niebur, “A model of saliency-based visual attention for rapid scene analysis,” IEEE Trans. Pattern Anal. Machine Intell., vol. 20, pp. 1254–1259, 1998.
[7] S. Corchs, G. Ciocca, and R. Schettini, “Video summarization using a neurodynamical model of visual attention,” in Multimedia Signal Processing, 2004 IEEE 6th Workshop on. IEEE, 2004, pp. 71–74.
[8] T. Yubing, F. A. Cheikh, F. F. E. Guraya, H. Konik, and A. Trémeau, “A spatiotemporal saliency model for video surveillance,” Cognitive Computation, vol. 3, no. 1, pp. 241–263, 2011.
[9] C. Kayser, C. I. Petkov, M. Lippert, and N. K. Logothetis, “Mechanisms for allocating auditory attention: an auditory saliency map,” Current Biology, vol. 15, no. 21, pp. 1943–1947, 2005.
[10] I. Choi, S. Rajaram, L. A. Varghese, and B. G. Shinn-Cunningham, “Quantifying attentional modulation of auditory-evoked cortical responses from single-trial electroencephalography,” Frontiers in Human Neuroscience, vol. 7, p. 115, 2013.
[11] N. Hill and B. Schölkopf, “An online brain–computer interface based on shifting attention to concurrent streams of auditory stimuli,” Journal of Neural Engineering, vol. 9, no. 2, p. 026011, 2012.
[12] M. Lopez-Gordo, E. Fernandez, S. Romero, F. Pelayo, and A. Prieto, “An auditory brain–computer interface evoked by natural speech,” Journal of Neural Engineering, vol. 9, no. 3, p. 036013, 2012.
[13] M. Schreuder, T. Rost, and M. Tangermann, “Listen, you are writing! Speeding up online spelling with a dynamic auditory BCI,” Frontiers in Neuroscience, vol. 5, p. 112, 2011.
[14] A. Kübler, A. Furdea, S. Halder, E. M. Hammer, F. Nijboer, and B. Kotchoubey, “A brain–computer interface controlled auditory event-related potential (P300) spelling system for locked-in patients,” Annals of the New York Academy of Sciences, vol. 1157, no. 1, pp. 90–100, 2009.
[15] F. Faugeras and L. Naccache, “Dissociating temporal attention from spatial attention and motor response preparation: A high-density EEG study,” NeuroImage, vol. 124, pp. 947–957, 2016.
[16] X.-W. Wang, D. Nie, and B.-L. Lu, “Emotional state classification from EEG data using machine learning approach,” Neurocomputing, vol. 129, pp. 94–106, 2014.
[17] I. Mehmood, M. Sajjad, S. W. Baik, and S. Rho, “Audio-visual and EEG-based attention modeling for extraction of affective video content,” in Platform Technology and Service (PlatCon), 2015 International Conference on. IEEE, 2015, pp. 17–18.
[18] C. Pokorny, D. S. Klobassa, G. Pichler, H. Erlbeck, R. G. Real, A. Kübler, D. Lesenfants, D. Habbal, Q. Noirhomme, M. Risetti et al., “The auditory P300-based single-switch brain–computer interface: paradigm transition from healthy subjects to minimally conscious patients,” Artificial Intelligence in Medicine, vol. 59, no. 2, pp. 81–90, 2013.
[19] A. Delorme and S. Makeig, “EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics including independent component analysis,” Journal of Neuroscience Methods, vol. 134, no. 1, pp. 9–21, 2004.
[20] L. Venables and S. H. Fairclough, “The influence of performance feedback on goal-setting and mental effort regulation,” Motivation and Emotion, vol. 33, no. 1, pp. 63–74, 2009.
[21] A. R. Clarke, R. J. Barry, R. McCarthy, and M. Selikowitz, “EEG analysis in attention-deficit/hyperactivity disorder: a comparative study of two subtypes,” Psychiatry Research, vol. 81, no. 1, pp. 19–29, 1998.
[22] K.-H. Lee, L. M. Williams, M. Breakspear, and E. Gordon, “Synchronous gamma activity: a review and contribution to an integrative neuroscience model of schizophrenia,” Brain Research Reviews, vol. 41, no. 1, pp. 57–78, 2003.
[23] A. Kübler, B. Kotchoubey, J. Kaiser, J. R. Wolpaw, and N. Birbaumer, “Brain–computer communication: Unlocking the locked in,” Psychological Bulletin, vol. 127, no. 3, p. 358, 2001.
[24] L. Lu, H.-J. Zhang, and H. Jiang, “Content analysis for audio classification and segmentation,” Speech and Audio Processing, IEEE Transactions on, vol. 10, no. 7, pp. 504–516, 2002.
[25] S. Corchs, G. Ciocca, M. Fiori, and F. Gasparini, “Video salient event classification using audio features,” Proc. SPIE 9027, Imaging and Multimedia Analytics in a Web and Mobile World 2014, 90270P (March 3, 2014).
[26] A. F. Rossi, L. Pessoa, R. Desimone, and L. G. Ungerleider, “The prefrontal cortex and the executive control of attention,” Experimental Brain Research, vol. 192, no. 3, pp. 489–497, 2009.
[27] M. Murugappan, N. Ramachandran, and Y. Sazali, “Classification of human emotion from EEG using discrete wavelet transform,” Journal of Biomedical Science and Engineering, vol. 3, pp. 390–396, 2010.
[28] R. W. Picard, E. Vyzas, and J. Healey, “Toward machine emotional intelligence: Analysis of affective physiological state,” Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 23, no. 10, pp. 1175–1191, 2001.
[29] S. Pazhanirajan and P. Dhanalakshmi, “EEG signal classification using linear predictive cepstral coefficient features,” International Journal of Computer Applications, vol. 73, no. 1, 2013.