abstract |
A computer-implemented method, preferably using artificial intelligence, of rendering music into audio format, comprising receiving user-defined music production parameters, such as tempo, duration and musical intensity, and using them to produce a custom music piece in a digital musical notation format (e.g. MIDI). The custom music piece is rendered into audio format for output to the user. Before the rendering step is completed, a preview audio render is created using pre-generated music segments stored in audio format. The segments have been generated by producing multiple sections of music according to different predetermined music production parameters, and are stored with associated metadata indicating the production parameters used to produce them. The preview audio render is created by matching sections of the custom music piece to different ones of the pre-generated music segments, based on the user-defined production parameters and the metadata, and sequencing the selected segments. A user device for creating an audio render of a custom music piece, which downloads pre-generated music segments from a remote system, is also disclosed.
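The following is a minimal sketch of the preview-assembly idea described in the abstract: matching each section of the custom piece to a pre-generated segment via its stored metadata and sequencing the matches. The Segment, Section and preview_render names, and the simple distance over tempo and intensity, are illustrative assumptions and are not taken from the disclosure.

```python
# Hypothetical sketch (not the claimed method): build a preview render by
# matching each section of a custom piece to the closest pre-generated
# segment, using the production-parameter metadata stored with each segment.
from dataclasses import dataclass


@dataclass
class Segment:
    audio: bytes        # pre-rendered audio data for this segment
    tempo: float        # production parameters stored as metadata
    intensity: float


@dataclass
class Section:
    tempo: float        # user-defined parameters for this section of the piece
    intensity: float


def match_segment(section: Section, library: list[Segment]) -> Segment:
    """Pick the pre-generated segment whose metadata best matches the section."""
    return min(
        library,
        key=lambda seg: abs(seg.tempo - section.tempo)
        + abs(seg.intensity - section.intensity),
    )


def preview_render(sections: list[Section], library: list[Segment]) -> bytes:
    """Sequence the matched segments into a single preview audio stream."""
    return b"".join(match_segment(s, library).audio for s in sections)
```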