Gamelan Demung Music Transcription Based on STFT Using Deep Learning
Abstract
References
M. A. Maula and H. Setiawan, “Spectrum identification of peking as a part of traditional instrument of gamelan,” in 2018 8th International Conference on Intelligent Systems, Modelling and Simulation (ISMS), 2018, pp. 72–77.
Y. Suprapto, E. M. Yuniarno, and K. Fithri, “Gamelan notation generating using band pass filter for saron instrument,” in 2019 International Conference on Computer Engineering, Network, and Intelligent Multimedia (CENIM), 2019, pp. 1–6.
A. Klapuri and M. Davy, Eds., Signal Processing Methods for Music Transcription. Springer US, 2006.
P. Yampolsky, “Javanese gamelan and the West by Sumarsam,” Asian Music, vol. 49, no. 2, pp. 158–162, 2018.
Y. K. Suprapto, S. M. Ramadhan, and E. Pramunanto, “Gamelan simulator multiplatform application development,” in 2018 International Conference on Computer Engineering, Network and Intelligent Multimedia (CENIM), 2018, pp. 227–233.
Y. Triwidyastuti, “Saron music onset detection on gamelan orchestra using hidden Markov model for music transcription,” Institut Teknologi Sepuluh Nopember, 2013.
A. Tjahyanto, D. P. Wulandari, Y. K. Suprapto, and M. H. Purnomo, “Gamelan instrument sound recognition using spectral and facial features of the first harmonic frequency,” Acoustical Science and Technology, vol. 36, no. 1, pp. 12–23, 2015.
G. Wendt and R. Bader, “Analysis and perception of Javanese gamelan tunings,” in Computational Phonogram Archiving. Springer, 2019, pp. 129–142.
L. Fitria, Y. K. Suprapto, and M. H. Purnomo, “Music transcription of Javanese gamelan using short time Fourier transform (STFT),” in 2015 International Seminar on Intelligent Technology and Its Applications (ISITIA), May 2015, pp. 279–284.
Y. K. Suprapto and Y. Triwidyastuti, “Saron music transcription based on rhythmic information using HMM on gamelan orchestra,” Telkomnika, vol. 13, no. 1, p. 103, 2015.
Z. Guibin and L. Sheng, “Automatic transcription method for polyphonic music based on adaptive comb filter and neural network,” in 2007 International Conference on Mechatronics and Automation, Aug 2007, pp. 2592–2597.
F. Firdausillah, D. Gilang Mahendra, J. Zeniarja, A. Luthfiarta, H. Agus Santoso, A. Nugraha, E. Yudi Hidayat, and A. Syukur, “Implementation of neural network backpropagation using audio feature extraction for classification of gamelan notes,” in 2018 International Seminar on Application for Technology of Information and Communication, 2018, pp. 570–574.
D. Nurdiyah, Y. K. Suprapto, and E. M. Yuniarno, “Gamelan orchestra transcription using neural network,” in 2020 International Conference on Computer Engineering, Network, and Intelligent Multimedia (CENIM), 2020, pp. 371–376.
O. Barkan and D. Tsiris, “Deep synthesizer parameter estimation,” in ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2019, pp. 3887–3891.
K. W. Cheuk, K. Agres, and D. Herremans, “The impact of audio input representations on neural network based music transcription,” in 2020 International Joint Conference on Neural Networks (IJCNN), 2020, pp. 1–6.
D. Stursa and P. Dolezel, “Comparison of ReLU and linear saturated activation functions in neural network for universal approximation,” in 2019 22nd International Conference on Process Control (PC19), 2019, pp. 146–151.
A. D. Rasamoelina, F. Adjailia, and P. Sinčák, “A review of activation function for artificial neural network,” in 2020 IEEE 18th World Symposium on Applied Machine Intelligence and Informatics (SAMI), 2020, pp. 281–286.
DOI: https://doi.org/10.12962/jaree.v6i2.276
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.