Abstract:
The steady rise in diet-related health problems has created an urgent need for real-time nutritional assessment tools. Existing approaches to nutrition estimation either involve time-consuming processing or depend on manual user input, making them ill-suited to providing instant feedback. This study develops a deep learning system that performs real-time nutritional analysis of food images and reports accurate measurements. The primary goal is to give users automated nutrition estimates that require minimal input and deliver immediate results.
The proposed system combines convolutional neural networks (CNNs) with a feature extraction layer to identify food items and compute their nutritional values simultaneously. A broad training dataset spanning many food categories enables the model to estimate nutritional content accurately, including calories and macronutrients. The model couples visual recognition with text processing techniques to improve its ability to distinguish food items that resemble one another. Data augmentation techniques, including rotation, zoom, and horizontal flipping, are employed to enhance generalization and prevent overfitting. The system is designed for real-time performance, making it suitable for mobile applications.
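As an illustration of the augmentation strategy described above, the sketch below implements the three named transforms (rotation, zoom, horizontal flip) in plain NumPy. This is a minimal stand-in for whatever augmentation pipeline the study actually used (e.g. a framework's built-in augmentation layers); the function names and parameter values here are illustrative assumptions, not taken from the paper.

```python
import numpy as np


def horizontal_flip(img):
    # Mirror the image left-to-right along the width axis.
    return img[:, ::-1]


def zoom(img, factor=1.2):
    # Zoom in by taking a center crop, then resizing it back to the
    # original shape with nearest-neighbor sampling. `factor` is an
    # illustrative default, not a value from the paper.
    h, w = img.shape[:2]
    ch, cw = int(h / factor), int(w / factor)
    y0, x0 = (h - ch) // 2, (w - cw) // 2
    crop = img[y0:y0 + ch, x0:x0 + cw]
    ys = (np.arange(h) * ch / h).astype(int)
    xs = (np.arange(w) * cw / w).astype(int)
    return crop[np.ix_(ys, xs)]


def rotate(img, deg):
    # Rotate about the image center using inverse coordinate mapping
    # with nearest-neighbor sampling; edge pixels are clamped.
    h, w = img.shape[:2]
    theta = np.deg2rad(deg)
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w]
    # For each output pixel, find the source pixel it came from.
    sy = np.cos(theta) * (ys - cy) + np.sin(theta) * (xs - cx) + cy
    sx = -np.sin(theta) * (ys - cy) + np.cos(theta) * (xs - cx) + cx
    sy = np.clip(np.rint(sy), 0, h - 1).astype(int)
    sx = np.clip(np.rint(sx), 0, w - 1).astype(int)
    return img[sy, sx]
```

In practice each transform would be applied with randomized parameters at training time, so the network sees a slightly different variant of every food image on each epoch, which is what drives the generalization benefit.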
Experimental results show that the model identifies food items successfully and estimates their nutritional values precisely. Validation accuracy closely matches training accuracy, indicating strong generalization. Further optimization of feature extraction, together with improved data augmentation, has the potential to raise the model's current 96% accuracy. Visually similar food items remain an obstacle for the detection process, causing occasional precision problems in food classification.