
Alison Bernice Ma (b. 1998) is a Data Engineer & Audio Lead working on applied machine learning and text-to-speech at WellSaid Labs. On the side, Alison continues to work on sound design as well as her personal artist project.


2020 - 2022

Georgia Institute of Technology


Master of Science in Music Technology

  • Music Informatics Research Lab

  • Focus on machine learning for audio, music information retrieval (MIR), and
    sound design for video games

2016 - 2019

Berklee College of Music


Bachelor of Music in Electronic Production & Design, Sound Design for Video Games

  • Focus on sound design for video games and interactive media

Highlights from her education include:

At Georgia Tech, Alison received the 2020 College of Design Dean's Fellowship, awarded to one incoming graduate student, and was a member of the Music Informatics Group research lab. At the Berklee College of Music, she received the 2019 Max Mathews Award among other scholarships and honors. After her time at Georgia Tech, Alison's machine learning research, "Representation Learning for the Automatic Indexing of Sound Effects Libraries", was accepted for publication at the 23rd International Society for Music Information Retrieval Conference (ISMIR 2022), held in Bengaluru, India. This work was conducted in collaboration with members of the audio post-production and game audio industries and presented novel research on the Universal Category System.


As a sound designer, Alison is always looking to discover new possibilities for layering, mangling, and morphing sounds. She especially loves applying her sound design skills to interactive media, such as (i) personalization tasks in game audio and (ii) art installations, and finds implementing sound in games just as enjoyable as designing the sound assets themselves. Her work includes projects at Georgia Tech's Sonification Research Lab, where she designed audio for submersible research vessels and for operators with impaired sight at the Woods Hole Oceanographic Institution, as well as contributions to Itoka by OctAI, an AI/web3/NFT solution for music generation and monetization. She also has recording studio experience, including at Prairie Sun Recording. One of her experimental 2-channel sound design works for DSP and voice, Engulf, was presented at the SEAMUS National Conference in 2020.


As a researcher, Alison's interests lie in the fields of digital signal processing, music information retrieval, and machine learning for musical applications. Her efforts are motivated by a desire to discover uncharted heights of sonic inspiration, to reduce repetitive tasks so that creativity always takes precedence, and to understand the fundamental building blocks of sound. Currently, Alison is deepening her knowledge of meta-learning and continual learning on sound databases, as well as generative models for sound synthesis, speech synthesis, and music generation.

Fun facts about Alison:

  • Games I'm playing...

    • Fortnite, Super Smash Bros. Ultimate, Pokémon Legends: Arceus

  • Go-to instrument: Modular synths!

  • Favorite kind of plug-in: Saturation (post-2021)

  • Latest sound design discovery: GRM Tools Shift for robot voices

  • Guilty pleasure: Organizing data...

  • Niche interest: Personalization through sound design or machine learning
