Abstract:
"This research presents a novel deep learning-based system with the potential to transform the way the visually impaired engage with and purchase for groceries. To simulate a human capacity that has been underexplored in the field of deep learning so far, the current system makes use of computer vision techniques to assign a freshness quality to digital images of produce. This new technology might drastically reduce food waste while simultaneously encouraging more responsible eating habits.
Despite improvements in fruit recognition methods, identifying fruits by color and shape alone is often unreliable because many fruits share these characteristics. To address this, we propose a new approach to fruit recognition that integrates color, shape, and texture analysis. This holistic method improves recognition accuracy beyond that of more traditional approaches. The system analyzes fruit images with nearest-neighbor classification, then returns the appropriate label and a brief description to the user.
The system has shown encouraging results in real-world testing, achieving an accuracy of 85.35%. Moving forward, the goal is to extend the system to other objects, broadening its practical applications and making it easier for people with vision impairments to shop for fresh produce."
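The abstract describes combining color, shape, and texture features with nearest-neighbor classification. The following is a minimal sketch of that pipeline, not the authors' implementation: the library choices (OpenCV, scikit-image, scikit-learn), the specific descriptors (HSV histograms, Hu moments, GLCM statistics), and the parameter k are assumptions made for illustration only.

```python
# Illustrative sketch (not the paper's code): color + shape + texture features
# fed to a k-nearest-neighbor classifier. All feature and library choices are
# assumptions; the paper does not specify them.
import cv2
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.neighbors import KNeighborsClassifier

def extract_features(image_bgr):
    """Concatenate color, shape, and texture descriptors for one fruit image."""
    # Color: coarse 2D hue/saturation histogram, normalized.
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    color_hist = cv2.calcHist([hsv], [0, 1], None, [8, 8], [0, 180, 0, 256]).flatten()
    color_hist /= color_hist.sum() + 1e-6

    # Shape: Hu moments of an Otsu-thresholded silhouette.
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    hu = cv2.HuMoments(cv2.moments(mask)).flatten()

    # Texture: gray-level co-occurrence matrix contrast and homogeneity.
    glcm = graycomatrix(gray, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    texture = np.array([graycoprops(glcm, "contrast")[0, 0],
                        graycoprops(glcm, "homogeneity")[0, 0]])

    return np.concatenate([color_hist, hu, texture])

def train_classifier(train_images, train_labels, k=3):
    """Fit a k-NN classifier on feature vectors (k is an assumed value)."""
    X = np.stack([extract_features(img) for img in train_images])
    clf = KNeighborsClassifier(n_neighbors=k)
    clf.fit(X, train_labels)
    return clf

def classify(clf, image_bgr):
    """Return the predicted label for a single fruit image."""
    return clf.predict(extract_features(image_bgr).reshape(1, -1))[0]
```

In practice, the feature groups would likely need scaling so that no single descriptor dominates the nearest-neighbor distance; that step is omitted here for brevity.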