
Music Cortex: Lyrics-Conditional Melody Generation with Reduced Silence using Deep Learning Technique


dc.contributor.author Subendran, Benhanan
dc.date.accessioned 2025-06-05T03:28:39Z
dc.date.available 2025-06-05T03:28:39Z
dc.date.issued 2024
dc.identifier.citation Subendran, Benhanan (2024) Music Cortex: Lyrics-Conditional Melody Generation with Reduced Silence using Deep Learning Technique. BSc. Dissertation, Informatics Institute of Technology en_US
dc.identifier.issn 20191203
dc.identifier.uri http://dlib.iit.ac.lk/xmlui/handle/123456789/2424
dc.description.abstract In recent years, there has been growing interest in using machine learning models for music generation. However, existing melody generation models have several limitations: they often fail to produce music that is properly quantizable, adheres to a specific time signature, or can be categorized into a specific genre. Additionally, these models do not consider the emotional content of lyrics when generating melodies. To address these limitations, this thesis proposes a novel melody generation model that generates melodies based on the emotions conveyed by the lyrics. The model is designed to produce quantizable music that adheres to specific time signatures and can be categorized into specific genres. To maintain tonality, the generated melody is constrained to a particular scale, such as the major (diatonic) scale. This thesis aims to provide a comprehensive solution to the existing limitations of melody generation models. en_US
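
Note: the abstract describes constraining generated melodies to a diatonic scale and quantizing them to a time-signature grid. The following is only a minimal illustrative sketch of such post-processing, not the dissertation's actual implementation; the scale (C major), grid size (sixteenth notes), and function names are assumptions made for illustration.

# Illustrative sketch: snap generated note pitches to the C major (diatonic) scale
# and note timings to a sixteenth-note grid, dropping zero-length (silent) notes.
C_MAJOR = {0, 2, 4, 5, 7, 9, 11}  # pitch classes of the C major scale

def snap_to_scale(midi_pitch: int) -> int:
    """Move a MIDI pitch to the nearest pitch class within the C major scale."""
    pc = midi_pitch % 12
    if pc in C_MAJOR:
        return midi_pitch
    # every out-of-scale pitch class is one semitone from an in-scale one
    return midi_pitch + 1 if (pc + 1) % 12 in C_MAJOR else midi_pitch - 1

def quantize_time(beats: float, grid: float = 0.25) -> float:
    """Snap a time value (in beats) to the nearest grid step (default: sixteenth notes)."""
    return round(beats / grid) * grid

def post_process(notes):
    """notes: list of (midi_pitch, onset_in_beats, duration_in_beats) tuples."""
    cleaned = []
    for pitch, onset, dur in notes:
        dur = max(quantize_time(dur), 0.25)  # enforce a minimum duration to avoid silent notes
        cleaned.append((snap_to_scale(pitch), quantize_time(onset), dur))
    return cleaned

if __name__ == "__main__":
    raw = [(61, 0.07, 0.9), (66, 1.02, 0.4), (70, 2.2, 0.1)]
    print(post_process(raw))  # pitches forced into C major, times aligned to the grid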
dc.language.iso en en_US
dc.subject Melody Generation en_US
dc.subject Lyrics Emotion en_US
dc.subject Quantization en_US
dc.title Music Cortex: Lyrics-Conditional Melody Generation with Reduced Silence using Deep Learning Technique en_US
dc.type Thesis en_US

