Abstract:
In recent years, there has been growing interest in using machine learning models for music generation. However, existing melody generation models have several limitations: they often produce music that is not properly quantizable, does not adhere to a specific time signature, and cannot be categorized into a specific genre. Additionally, these models do not consider the emotional content of lyrics when generating melodies. To address these limitations, this thesis proposes a novel melody generation model that generates melodies based on the emotions conveyed by the lyrics. The model is designed to produce quantizable music that adheres to a specific time signature and can be categorized into a specific genre. To maintain tonality, the generated melody is constrained to a particular scale, such as the major (diatonic) scale. This thesis aims to provide a comprehensive solution to these limitations of existing melody generation models.