dc.description.abstract |
In recent years, there has been growing interest in using machine learning models for music generation. However, existing melody generation models have several limitations: they often fail to produce music that is properly quantizable, that adheres to a specific time signature, or that can be categorized into a specific genre. Additionally, these models do not consider the emotional content of lyrics when generating melodies. To address these limitations, this thesis proposes a novel melody generation model that generates melodies based on the emotions conveyed by the lyrics. The model is designed to produce quantizable music that adheres to specific time signatures and can be categorized into specific genres. To maintain tonality, the generated melody is constrained to a particular scale, such as the major (diatonic) scale. This thesis aims to provide a comprehensive solution to the existing limitations of melody generation models. Delving further into the proposed model, it integrates emotion recognition algorithms that analyze the sentiment of the lyrics, ensuring that the melodic output resonates with the lyrical intent. By incorporating these advancements, the model promises a richer musical experience that bridges the gap between lyrical sentiment and melodic expression. In addition, its architecture is scalable, making it versatile enough to be applied across varied musical traditions and genres. The development process also incorporated feedback from musicians, ensuring that the generated melodies not only meet technical specifications but also resonate with artistic sensibilities. In essence, this thesis does not merely present a technical solution; it reimagines music generation by weaving emotion, structure, and artistry into a harmonious whole. |
en_US |