ABOUT THIS EPISODE


I talk through generating 10 melodies, two of which I play at the end, using a model trained on thousands of MIDI examples packaged in a .mag Magenta bundle file. I used the Biaxial RNN (https://github.com/hexahedria/biaxial-rnn-music-composition) by Daniel Johnson and the Basic RNN (https://github.com/tensorflow/magenta/tree/master/magenta/models/melody_rnn#basic) from Google's Magenta group within TensorFlow. Along the way I learned that priming the model with a single note can set the key of each generated melody, and that Anaconda's single 'source activate' line replaces virtualenv and brings in all of the necessary dependencies, making this environment easily reproducible. A few more details are posted at: https://medium.com/@SamPutnam/deep-learning-zero-to-one-music-generation-46c9a7d82c02
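
As a rough sketch of the workflow described above (assuming a conda environment named 'magenta' and a local copy of the pretrained basic_rnn.mag bundle; the environment name and file paths here are illustrative, not from the episode):

    # Activate the prepared Anaconda environment (replaces virtualenv).
    source activate magenta

    # Generate 10 melodies from the pretrained Basic RNN bundle,
    # priming with a single MIDI note (60 = middle C) to set the key.
    melody_rnn_generate \
      --config=basic_rnn \
      --bundle_file=/path/to/basic_rnn.mag \
      --output_dir=/tmp/melody_rnn/generated \
      --num_outputs=10 \
      --num_steps=128 \
      --primer_melody="[60]"

The --primer_melody flag takes a list of MIDI pitches; a single pitch is enough to anchor the key of each generated melody, as discussed in the episode.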