
Peer-Reviewed Publications

[1] A. B. Ma and A. Lerch, “Representation learning for the automatic indexing of sound effects libraries,” in Proceedings of the 23rd International Society for Music Information Retrieval Conference (ISMIR), Dec 2022. (43% acceptance rate)


[2] A. B. Ma, “Engulf,” Society for Electro-Acoustic Music in the United States National Conference (SEAMUS), March 2020. [Online]. Available: https://seamus2020.music.virginia.edu.


Machine Learning Research Projects at the Georgia Institute of Technology (2020-2022)

  1. Deep Learning Approaches to Symbolic Sequential Music Generation and Musical In-painting (2021)
    • Collaborators: Yilun Zha, Alison Ma, Iman Haque, Yufei Xu, Bowen Ran

    • Description: Surveyed deep learning approaches to symbolic sequential music generation and musical in-painting in ABC format, employing LSTMs with attention and Transformer architectures on the folk-rnn data_v2_worepeats dataset (a model sketch appears after this list)

  2. Phoneme Sequence Modeling for Speech (2021)

    • Collaborators: Nikhil Ravi Krishnan, Akhil Shukla, Alison Ma

    • Description: Worked with HMM-GMMs and BiLSTM-CTC networks on the TIMIT dataset (see the sketch after this list)

  3. Audio versus MIDI-based Genre Classification (2021)

    • Collaborators: Alison Ma

    • Description: Conducted ablation-study experiments on the Lakh MIDI Dataset v0.1, the Million Song Dataset, and the topMAGD (MSD Allmusic Genre Dataset) labels to compare MIDI- and audio-based classification with Random Forest, MLP, and CNN architectures (the ablation pattern is sketched after this list)

  4. Automated Image Captioning (2020)

    • Collaborators: Aryan Pariani, Ishaan Mehra, Alison Ma, Jun Chen, Max Rivera

    • Description: Utilized attention-based Mask R-CNNs and LSTMs in Keras on the Flickr30k Kaggle dataset, achieving a BLEU score of 0.795 on the best caption from the test set (BLEU scoring is sketched after this list)

  5. The Relationship Between Stem-Based Feature Combinations and Popularity from 1925 Through the 2010s (2020)

    • Collaborators: Alison Ma

    • Description: Conducted a feature-analysis and statistical study using Billboard Hot 100 metadata and audio stems extracted with SigSep Open-Unmix for songs in the Million Song Dataset (stem extraction is sketched after this list)
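
A minimal sketch of the character-level modeling approach in project 1, assuming a Keras setup; the vocabulary size and hyperparameters are illustrative placeholders, not the project's actual configuration.

    # Sketch: next-character prediction over ABC-notation tunes (Keras).
    import tensorflow as tf
    from tensorflow.keras import layers

    vocab_size = 96  # assumed size of the ABC character vocabulary

    model = tf.keras.Sequential([
        layers.Embedding(vocab_size, 64),
        layers.LSTM(256, return_sequences=True),  # emits a prediction at every step
        layers.Dense(vocab_size, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")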
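
A minimal sketch of the BiLSTM-CTC side of project 2, in PyTorch; the feature and label dimensions are assumptions, not the project's settings.

    # Sketch: BiLSTM encoder with CTC loss for phoneme sequences (PyTorch).
    import torch.nn as nn

    class BiLSTMCTC(nn.Module):
        def __init__(self, n_feats=40, n_classes=62):  # e.g. 61 TIMIT phones + CTC blank
            super().__init__()
            self.lstm = nn.LSTM(n_feats, 128, num_layers=2,
                                bidirectional=True, batch_first=True)
            self.proj = nn.Linear(2 * 128, n_classes)

        def forward(self, x):            # x: (batch, time, n_feats)
            out, _ = self.lstm(x)
            return self.proj(out).log_softmax(-1)

    ctc_loss = nn.CTCLoss(blank=0)  # consumes (time, batch, classes) log-probs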
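
The ablation pattern in project 3 comes down to holding the data fixed and swapping classifiers; a hypothetical scikit-learn sketch, with the feature matrix X and labels y assumed to be precomputed from either MIDI or audio:

    # Sketch: comparing classifiers on one feature representation (scikit-learn).
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import cross_val_score

    def compare(X, y):
        models = [("random_forest", RandomForestClassifier(n_estimators=300)),
                  ("mlp", MLPClassifier(hidden_layer_sizes=(256, 128), max_iter=500))]
        for name, clf in models:
            scores = cross_val_score(clf, X, y, cv=5)
            print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")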
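
Caption scoring as reported in project 4 can be reproduced with NLTK's BLEU implementation; this sketch assumes tokenized captions and says nothing about the project's exact smoothing or reference setup.

    # Sketch: BLEU for one generated caption against reference captions (NLTK).
    from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

    references = [["a", "dog", "runs", "on", "the", "beach"]]   # illustrative tokens
    candidate = ["a", "dog", "is", "running", "on", "the", "beach"]
    score = sentence_bleu(references, candidate,
                          smoothing_function=SmoothingFunction().method1)
    print(round(score, 3))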
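
Stem extraction as in project 5 is available through Open-Unmix's published torch.hub entry point; the audio tensor below is a placeholder, and the surrounding file I/O is assumed.

    # Sketch: separating a stereo mixture into stems with Open-Unmix (PyTorch Hub).
    import torch

    separator = torch.hub.load("sigsep/open-unmix-pytorch", "umxhq")  # pretrained model
    audio = torch.rand(1, 2, 44100 * 10)   # placeholder: (batch, channels, samples)
    stems = separator(audio)               # (batch, targets, channels, samples)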


Musical Map: Location & Weather Sonification (2021)

Collaborators: Nathan Johnson, Kelian Li, Alison Ma

Description: Max for Live MIDI mapping and generative music built with JavaScript and WebSockets (see the sketch after these entries).

ml.lib Machine Learning & Sound Synthesis (2021)

Collaborators: Alison Ma

Description: ml.lib mappings for sound synthesis.
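
A minimal sketch of the WebSocket half of the Musical Map project, written in Python for illustration (the project itself used JavaScript, and a recent version of the websockets library is assumed); the payload fields and port are placeholders.

    # Sketch: a WebSocket server streaming weather values to a sonification client.
    import asyncio, json
    import websockets

    async def stream(ws):
        while True:
            await ws.send(json.dumps({"temp_c": 21.5, "wind_kph": 8.0}))  # placeholder data
            await asyncio.sleep(1.0)

    async def main():
        async with websockets.serve(stream, "localhost", 8765):
            await asyncio.Future()  # serve until cancelled

    asyncio.run(main())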

Max for Live Human-Computer Interaction

Google Chrome Speech-to-Text Performance System for Poetry (2019)

Collaborators: Alison Ma

Description: Designed a real-time performance system using JavaScript, Node.js, and Socket.IO to integrate Google Chrome's Speech-to-Text engine with Max for Live devices at the Berklee College of Music.

Max for Live OSC Phone Sensor MIDI Mapper (2021) - controlling Doppler effects with your phone

Collaborators: Alison Ma

Description: MultiSense OSC application with MIDI mapping in Max for Live (an OSC-to-MIDI sketch follows).
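
A minimal sketch of the mapper's OSC-to-MIDI role using python-osc; the OSC address and sensor value range are assumptions, and a real device would forward MIDI rather than print.

    # Sketch: scale an incoming accelerometer value to a MIDI CC range (python-osc).
    from pythonosc.dispatcher import Dispatcher
    from pythonosc.osc_server import BlockingOSCUDPServer

    def on_accel(address, value):
        cc = int(max(0.0, min(1.0, (value + 1.0) / 2.0)) * 127)  # [-1, 1] -> [0, 127]
        print(address, cc)  # stand-in for sending a MIDI CC message to Live

    dispatcher = Dispatcher()
    dispatcher.map("/multisense/accel/x", on_accel)
    BlockingOSCUDPServer(("0.0.0.0", 9000), dispatcher).serve_forever()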
