Digital Repository

An Approach to Improve Machine Reading Comprehension based Question Answering

dc.contributor.author Mohamed, Azim
dc.date.accessioned 2024-02-12T04:15:40Z
dc.date.available 2024-02-12T04:15:40Z
dc.date.issued 2023
dc.identifier.citation Mohamed, Azim (2023) An Approach to Improve Machine Reading Comprehension based Question Answering. MSc. Dissertation, Informatics Institute of Technology en_US
dc.identifier.issn 20200770
dc.identifier.uri http://dlib.iit.ac.lk/xmlui/handle/123456789/1621
dc.description.abstract Question answering is one of the main concepts under natural language processing, and it can also be regarded as a branch of artificial intelligence. As the name suggests, question answering systems focus on identifying or constructing an answer to a given question and context from an unstructured collection of natural-language data. With advances in the NLP domain, question answering has improved by leaps and bounds over the last few years through high-performing models. BERT, XLNet, RoBERTa and ALBERT are a few of the models built in the hope of exceeding human-level performance. Although these models perform well in general, their performance drops when a context with complex scenarios is given as input. This research focuses on building a novel approach to question answering by applying tokenization and machine reading comprehension (MRC), a task that requires a machine to answer questions based on a given context and that has attracted increasing attention with the incorporation of various deep-learning techniques over the past few years. BERT handles this task by encoding the question and the given paragraph into a single sequence of words as the input. Then, it performs the classification task only on the output fragment corresponding to the context. Although BERT shows excellent performance, we argue that there are problems with this approach. Hence, we implement a new, more accurate model by changing the architecture and fine-tuning a BERT model (the underlying span-prediction setup is sketched after this record). The new model achieved an F1 score of 84.71% and an exact match score of 76.01%. en_US
dc.language.iso en en_US
dc.publisher IIT en_US
dc.subject Question Answering en_US
dc.subject Text Processing en_US
dc.subject Natural Language Processing (NLP) en_US
dc.title An Approach to Improve Machine Reading Comprehension based Question Answering en_US
dc.type Thesis en_US
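
The abstract describes the standard BERT formulation of extractive machine reading comprehension: the question and the passage are packed into one input sequence, and the model classifies each token of the context as a possible answer start or end. The following is a minimal illustrative sketch of that setup using the Hugging Face Transformers library; the checkpoint name, question and passage are placeholder assumptions and do not represent the dissertation's modified architecture or its reported results.

# Minimal sketch of span-based extractive QA with a BERT model fine-tuned on SQuAD.
# The public checkpoint below is only a stand-in for the dissertation's modified model.
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

MODEL_NAME = "bert-large-uncased-whole-word-masking-finetuned-squad"  # assumed stand-in
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForQuestionAnswering.from_pretrained(MODEL_NAME)

question = "What does MRC require a machine to do?"
context = (
    "Machine reading comprehension (MRC) requires a machine to answer "
    "questions based on a given context passage."
)

# Question and context are encoded as one sequence: [CLS] question [SEP] context [SEP]
inputs = tokenizer(question, context, return_tensors="pt", truncation=True)

with torch.no_grad():
    outputs = model(**inputs)

# Every token gets a start score and an end score; the predicted answer is the
# span read back from the context portion of the sequence.
start = int(torch.argmax(outputs.start_logits))
end = int(torch.argmax(outputs.end_logits))
answer = tokenizer.decode(inputs["input_ids"][0][start : end + 1])
print(answer)

The reported metrics follow the usual SQuAD convention: exact match counts predictions that equal a reference answer string exactly, while F1 measures token-level overlap between the predicted and reference answer spans.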

