Digital Repository

An Ambiguity Scorer for Chatter Bots in Operation on Nonnative Grounds

dc.contributor.author Peris, Thanthirige Chami Yasasri
dc.date.accessioned 2021-07-06T16:47:57Z
dc.date.available 2021-07-06T16:47:57Z
dc.date.issued 2020
dc.identifier.citation Peris, Thanthirige Chami Yasasri (2020) An Ambiguity Scorer for Chatter Bots in Operation on Nonnative Grounds. BEng. Dissertation, Informatics Institute of Technology en_US
dc.identifier.other 2015163
dc.identifier.uri http://dlib.iit.ac.lk/xmlui/handle/123456789/560
dc.description.abstract Bots are everywhere, offering services tied to almost any day-to-day activity: studying (Study Bot), listening to the news (News Bot), getting a hint about a new medication (Medi Bot), or simply talking until you fall asleep (InsomnoBot, for insomnia patients) are just a few examples. It is evident that bots are becoming part of our lives. For providing so many services across almost all daily needs, bots deserve praise; yet they are blamed for being less than comprehensive. Admittedly, having to answer many follow-up questions to complete one small task because of a poorly comprehending bot is irritating. But an issue always has two sides: the party that blames and the party that is blamed. If the user's query would be hard for anyone to understand, is the bot really the one to blame? Users who interact with bots come from different parts of the world, and not everyone is fluent. Vardi Rachman et al., in their research on "Semantic Role Labeling in Conversational Chat using Deep Bidirectional Long Short Term Memory Networks with Attention Mechanism", identify the emergence of texting language, its informality, and its loose typing as reasons why chatbots tend to misinterpret the loosely phrased words thrown at them. No one sits behind the bot counting the ambiguities a user submits along with a query, and no gauge has yet been devised to measure them. Can a machine be held responsible for errors made by humans? If so, how? And is it believable that deep learning performs overwhelmingly well in this matter?
The proposed solution is a natural language understanding add-on for chatbots deployed on nonnative grounds that verifies the quality of the input fed to them. If the user input carries any ambiguities, uncertainties, or informalities, the add-on responds with a graphical representation of the ambiguity distribution of that input, whereas ordinary virtual speaking assistants tend to produce a response without validating the quality of the input fed into them. en_US
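The add-on described in the abstract can be pictured as a lightweight gate sitting in front of the bot. The sketch below is purely illustrative and is not the author's implementation: the dissertation uses deep learning to score ambiguity, while here the word lists (`INFORMAL`, `AMBIGUOUS`), the tokeniser, and the `threshold` value are all assumptions introduced for illustration only.

```python
import re
from collections import Counter

# Hypothetical token lists; the real system would learn such signals
# with a deep model rather than use fixed vocabularies.
INFORMAL = {"u", "ur", "plz", "thx", "gonna", "wanna", "lol", "idk"}
AMBIGUOUS = {"it", "that", "this", "thing", "stuff"}  # vague referents (assumed)

def ambiguity_distribution(query: str) -> dict:
    """Return the share of informal, ambiguous, and clear tokens in a query."""
    tokens = re.findall(r"[a-z']+", query.lower())
    if not tokens:
        return {"informal": 0.0, "ambiguous": 0.0, "clear": 0.0}
    counts = Counter(
        "informal" if t in INFORMAL
        else "ambiguous" if t in AMBIGUOUS
        else "clear"
        for t in tokens
    )
    total = len(tokens)
    return {k: counts.get(k, 0) / total for k in ("informal", "ambiguous", "clear")}

def gate(query: str, threshold: float = 0.3):
    """Forward the query to the bot only if it is clear enough; otherwise
    ask the user to clarify, returning the distribution (which the add-on
    would render graphically)."""
    dist = ambiguity_distribution(query)
    if dist["informal"] + dist["ambiguous"] > threshold:
        return ("clarify", dist)
    return ("forward", dist)
```

For example, a query such as "plz fix it" would be held back for clarification under these assumed lists, while a fully spelled-out query would pass straight through to the bot.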
dc.title An Ambiguity Scorer for Chatter Bots in Operation on Nonnative Grounds en_US
dc.type Thesis en_US

