Keeping track of dietary habits traditionally requires a written diary of the foods consumed at every meal. To simplify this process, we envision an automated computer vision system that saves people time and effort by logging meals autonomously. In this project, we lay the foundation for that vision: we aggregate a dataset of annotated multi-label food images, then train a PyTorch model to recognize and localize 18 different foods with 91% accuracy. Our work establishes a baseline for future research that we hope can be used to track nutritional intake in hospitals, schools, and correctional facilities.
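The abstract does not specify the model architecture, so the following is only a minimal sketch of how a PyTorch pipeline for recognizing and localizing 18 food classes might look, assuming a torchvision Faster R-CNN detector fine-tuned on annotated meal photos; the class names, image sizes, and optimizer settings below are illustrative, not the authors' actual setup.

```python
# Hedged sketch (not the authors' exact method): fine-tune a COCO-pretrained
# Faster R-CNN from torchvision to detect 18 food classes in meal photos.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_FOOD_CLASSES = 18  # from the abstract; +1 below for the background class

# Start from a detector pre-trained on COCO, then swap its box predictor so it
# outputs scores and boxes for the 18 food classes (plus background).
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_FOOD_CLASSES + 1)

# One training step: each target is a dict of bounding boxes and class labels,
# the format torchvision detection models expect.
optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)
model.train()
images = [torch.rand(3, 480, 640)]                        # placeholder meal photo
targets = [{"boxes": torch.tensor([[50.0, 60.0, 200.0, 220.0]]),
            "labels": torch.tensor([3])}]                 # class index 3 is hypothetical
loss_dict = model(images, targets)                        # classification + box regression losses
loss = sum(loss_dict.values())
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

At inference time the same model, switched to `eval()` mode and called on images alone, returns per-image boxes, labels, and scores, which is the "recognize and localize" behavior the abstract describes.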