Personalized Health Framework for Visually Impaired

Megha Rathi, Shruti Sahu, Ankit Goel, Pramit Gupta

Abstract


Vision is one of the most essential human senses. Modern assistive technologies built on deep learning and computer vision, the science that aims to mimic and automate human vision to give a computer a similar, if not better, capability, can transform the life of a visually impaired person from dependence to that of a productive and functional member of society. However, the solutions and technologies available today have limited outreach, and end users cannot fully realize their benefits. This research work presents an easily operable and affordable Android application, built on the concepts of computer vision and deep learning, that aids the visually impaired in healthcare management and addresses the everyday challenges arising from visual impairment. Broadly, the application comprises the following modules: object recognition in the immediate surroundings using region-based convolutional neural networks, disease prediction from reported symptoms, monitoring of health issues, and a voice assistant for in-app interaction and navigation.



References


Shashua, A., & Aviram, Z. (2015). OrCam MyEye. Retrieved from https://www.orcam.com/en/myeye/.

Andò, B., Lombardo, C. O., & Marletta, V. (2015). Smart homecare technologies for the visually impaired: recent advances. Smart Homecare Technology and TeleHealth, 3, 9-16.

Bradski, G. R. (1998). Computer vision face tracking for use in a perceptual user interface.

Dakopoulos, D., & Bourbakis, N. G. (2010). Wearable obstacle avoidance electronic travel aids for blind: a survey. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), 40(1), 25-35.

Dai, J., He, K., & Sun, J. (2016). Instance-aware semantic segmentation via multi-task network cascades. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 3150-3158).

Datasource1: WebMD, L. L. C. (2010). WebMD.

Datasource2: Shiel Jr, W. C. (2009). MedicineNet. com.

Deng, J., Berg, A., Satheesh, S., Su, H., Khosla, A., & Fei-Fei, L. (2012). ImageNet large scale visual recognition competition (ILSVRC2012).

Parthasarathy, D. (2017). A Brief History of CNNs in Image Segmentation: From R-CNN to Mask R-CNN. Retrieved from https://blog.athelas.com/a-brief-history-of-cnns-in-image-segmentation-from-r-cnn-to-mask-r-cnn-34ea83205de4.

Felzenszwalb, P. F., Girshick, R. B., McAllester, D., & Ramanan, D. (2010). Object detection with discriminatively trained part-based models. IEEE transactions on pattern analysis and machine intelligence, 32(9), 1627-1645.

Fernandes, H., Costa, P., Filipe, V., Hadjleontiadis, L., & Barroso, J. (2010). Stereo vision in blind navigation assistance. In World Automation Congress 2010 (WAC 2010).

Girshick, R. (2015). Fast R-CNN. In Proceedings of the IEEE International Conference on Computer Vision (pp. 1440-1448).

Girshick, R., Donahue, J., Darrell, T., & Malik, J. (2016). Region-based convolutional networks for accurate object detection and segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(1), 142-158.

Gordois, A., Cutler, H., Pezzullo, L., Gordon, K., Cruess, A., Winyard, S., ... & Chua, K. (2012). An estimation of the worldwide economic and health burden of visual impairment. Global Public Health, 7(5), 465-481.

He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 770-778).

Jonas, J. B., Bourne, R. R., White, R. A., Flaxman, S. R., Keeffe, J., Leasher, J., & Resnikoff, S. (2014). Visual impairment and blindness due to macular diseases globally: a systematic review and meta-analysis. American Journal of Ophthalmology, 158(4), 808-815.

Kanuganti, S., Chang, Y., & Bock, L. (2017). U.S. Patent No. 9,836,996. Washington, DC: U.S. Patent and Trademark Office.

Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems (pp. 1097-1105).

Lewis, C. W., Mathers, D. R., Hilkes, R. G., Munger, R. J., & Colbeck, R. P. (2012). U.S. Patent No. 8,135,227. Washington, DC: U.S. Patent and Trademark Office.

Li, Y., Qi, H., Dai, J., Ji, X., & Wei, Y. (2016). Fully convolutional instance-aware semantic segmentation. arXiv preprint arXiv:1611.07709.

Lin, T., Dollár, P., Girshick, R., He, K., Hariharan, B., & Belongie, S. (2017). Feature pyramid networks for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 936-944). Honolulu, HI: IEEE.

Lin, T. Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., & Zitnick, C. L. (2014, September). Microsoft COCO: Common objects in context. In European Conference on Computer Vision (pp. 740-755). Springer, Cham.

Milne, L. R., Bennett, C. L., & Ladner, R. E. (2014). The accessibility of mobile health sensors for blind users.

Ren, S., He, K., Girshick, R., & Sun, J. (2017). Faster R-CNN: Towards real-time object detection with region proposal networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(6), 1137-1149.

Ren, Y., Werner, R., Pazzi, N., & Boukerche, A. (2010). Monitoring patients via a secure and mobile healthcare system. IEEE Wireless Communications, 17(1).

SA, S. (2013). Intelligent heart disease prediction system using data mining techniques. International Journal of Healthcare and Biomedical Research, 1, 94-101.

Sachdeva, N., & Suomi, R. (2013). Assistive technology for totally blind: barriers to adoption. SOURCE IRIS: Selected Papers of the Information Systems Research Seminar, 47.

Segler, M. H., Kogej, T., Tyrchan, C., & Waller, M. P. (2017). Generating focused molecule libraries for drug discovery with recurrent neural networks. ACS central science, 4(1), 120-131.

Szegedy, C., Toshev, A., & Erhan, D. (2013). Deep neural networks for object detection. In Advances in neural information processing systems (pp. 2553-2561).

Tian, Y., Yang, X., & Arditi, A. (2010, July). Computer vision-based door detection for accessibility of unfamiliar environments to blind persons. In International Conference on Computers for Handicapped Persons (pp. 263-270). Springer, Berlin, Heidelberg.

Velázquez, R. (2010). Wearable assistive devices for the blind. In Wearable and autonomous biomedical devices and systems for smart environment (pp. 331-349). Springer, Berlin, Heidelberg.

Xie, S., Girshick, R., Dollár, P., Tu, Z., & He, K. (2017). Aggregated residual transformations for deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 5987-5995). Honolulu, HI: IEEE.

Wexler, Y., & Shashua, A. (2015). U.S. Patent No. 9,025,016. Washington, DC: U.S. Patent and Trademark Office.

World Health Organization. (2016). Visual impairment and blindness, 2014. Retrieved from http://www.who.int/mediacentre/factsheets/fs282/en.




DOI: https://doi.org/10.31449/inf.v46i1.2934

This work is licensed under a Creative Commons Attribution 3.0 License.