Call for Papers

Special Issue on: Deep Learning-Assisted Intelligent Human-Computer Interaction for Next-Generation Internet Applications

Submission Due Date: February 10, 2024

 

In related domains, a wide range of recognition techniques has been proposed and evaluated experimentally, attaining higher recognition accuracy than interactive methods that do not use deep learning. Context is crucial in speech-enabled human-machine interfaces (HMIs) for enhancing user interfaces. When HCI and deep learning are integrated, systems can maintain greater robustness in a variety of applications, including voice navigation, mobile communication, and child speech recognition. The precision and accuracy of action recognition can be increased significantly by combining long short-term memory (LSTM) networks with convolutional neural networks (CNNs). As a result, the HCI application field is expected to expand into additional sectors, and more opportunities are anticipated. HCI encompasses the creation, testing, and assessment of user-friendly, interactive systems that are tailored to the needs of their users. One of the main difficulties in HCI is making user interfaces that are intuitive and natural, especially for non-technical users. Additionally, as technology becomes more pervasive in our daily lives, there is a growing need to create interfaces that are usable by individuals with a variety of abilities and disabilities.
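As an illustrative aside, the CNN+LSTM pairing mentioned above can be sketched in a few lines of code. The example below is a minimal sketch only, assuming PyTorch; the layer sizes, clip length, and number of action classes are hypothetical and are not drawn from any specific system described in this call.

    # Minimal, illustrative CNN+LSTM action-recognition sketch (hypothetical sizes).
    import torch
    import torch.nn as nn

    class CNNLSTMActionRecognizer(nn.Module):
        def __init__(self, num_classes=10, feature_dim=128, hidden_dim=64):
            super().__init__()
            # Per-frame CNN: maps a 3x64x64 RGB frame to a feature vector.
            self.cnn = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(4),
                nn.Flatten(),
                nn.Linear(32 * 4 * 4, feature_dim),
            )
            # LSTM aggregates the per-frame features over time.
            self.lstm = nn.LSTM(feature_dim, hidden_dim, batch_first=True)
            self.classifier = nn.Linear(hidden_dim, num_classes)

        def forward(self, video):
            # video: (batch, frames, channels, height, width)
            b, t, c, h, w = video.shape
            features = self.cnn(video.view(b * t, c, h, w)).view(b, t, -1)
            _, (last_hidden, _) = self.lstm(features)
            return self.classifier(last_hidden[-1])

    # Example: a batch of 2 clips, each with 8 frames of 64x64 RGB video.
    logits = CNNLSTMActionRecognizer()(torch.randn(2, 8, 3, 64, 64))
    print(logits.shape)  # torch.Size([2, 10])

In this arrangement the CNN supplies spatial features for each frame while the LSTM models their temporal ordering, which is the complementary pairing the paragraph above refers to.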

Enhancing interface usability is one of the main advantages of deep learning-assisted intelligent HCI. Deep learning can be used to design interfaces that are easier to understand and use, especially for non-technical users, as well as interfaces that are more usable by people with a variety of abilities and limitations. With the rise of the Internet of Things (IoT), there is an increasing need for interfaces that support multiple modalities of interaction, such as voice, touch, and gesture. Deep learning can be used to create intelligent interfaces that recognise and respond to multiple modalities, making them more versatile and easier to use. Designing interfaces that enable multitasking and many interaction modalities is another challenge, and HCI must also address concerns about security and privacy. Deep learning therefore adds significant value to HCI for developing next-generation internet applications, enabling intelligent interfaces that understand users and respond to them in a way that feels natural and intuitive.

This Special Issue seeks original, high-quality submissions on the development of new security and privacy methods, a significant area of research in deep learning-assisted intelligent HCI. We welcome submissions addressing scalability and sustainability for real-world applications, and we also encourage research that investigates emerging sensing technologies still at the proof-of-concept stage.

 

Topics of interest include, but are not limited to, the following:

 

  • Employing deep learning techniques to improve gesture recognition in HCI
  • Deep learning-based text categorization: a systematic analysis
  • Decentralised systems for human-computer interaction in IoT applications: ubiquitous learning
  • Overviews of vision-based human action recognition and its practical challenges
  • Concept-linking mining techniques for analysing social media advertising by computer service providers
  • Intelligent deep learning models for human-computer interaction in virtual and augmented reality
  • Multimodal human-computer interaction across multiple technologies and artificial systems
  • Deep learning-assisted HCI systems for disease diagnosis in 5G-enabled eHealth systems
  • Intelligent systems for heterogeneous human-computer interaction across multiple platforms
  • Deep learning for resolving granular task ambiguity in HCI systems
  • HCI based on emotion-infused deep learning and neural network systems

Tentative Timeline for this Special Issue:

Submission deadline: February 10, 2024

Author notification: April 30, 2024

Revised papers due: June 30, 2024

Final notification: August 30, 2024

Publication of the special issue will proceed as per the policy of the journal.

 

Guest Editor Information:

Dr. Jungpil Shin [Managing Guest Editor]

Professor,

School of Computer Science and Engineering,

The University of Aizu,

Aizuwakamatsu, Japan

Email: jpshin@u-aizu.ac.jp, jpshin.uoa@gmail.com 

Google Scholar: https://scholar.google.com/citations?user=x8__gM4AAAAJ&hl=ja&oi=ao

 

Biography: Dr. Jungpil Shin is a professor in the School of Computer Science and Engineering at The University of Aizu and supervisor of the Pattern Processing Lab at The University of Aizu. He has served the University of Aizu as an academic since 1999. His current research interests include pattern recognition, human-computer interaction (HCI), image processing, computer vision, and medical diagnosis. He is currently developing algorithms and systems for non-touch input interfaces that recognise and identify humans and gestures, non-touch character input based on hand-tapping gestures, gesture-based non-touch flick character input, automatic diagnosis and clinical evaluation of neurological movement disorders, and lung disease prediction and diagnosis using advanced image processing and machine intelligence techniques.


Dr. Md. Al Mehedi Hasan [Co-Guest Editor]

Professor,

Dept. of Computer Science and Engineering,

Rajshahi University of Engineering and Technology,

Rajshahi, Bangladesh

Email: mehedi_ru@yahoo.com

Google Scholar: https://scholar.google.com/citations?user=kMspjFIAAAAJ&hl=en

Biography: Md. Al Mehedi Hasan received his B.Sc. degree in computer science and engineering from the University of Chittagong, Bangladesh, and his combined M.S. and Ph.D. degrees in electrical engineering from the University of Ulsan, Korea, in 2009 and 2019, respectively. From 2012 to 2014, he worked as a software engineer at two leading software development companies in Bangladesh. He is currently with the American International University - Bangladesh (AIUB) as an Assistant Professor in the Department of Computer Science and Engineering. His current research interests include mobile network optimization, energy-efficient mobile communication, mobility management, traffic offloading, and load balancing.

 

Dr. Yong Seok Hwang [Co-Guest Editor]

Holodigilog Human Media Research Center (HoloDigilog),

Nano Device Application Center (NDAC),

Kwangwoon University, Seoul, Korea

Email: thestone@kw.ac.kr

Google Scholar: https://scholar.google.com/citations?user=xem3aGwAAAAJ&hl=en&oi=sra

Biography: Dr. Yong Seok Hwang received his Ph.D. from the Department of Electronics Engineering, Pusan National University, Pusan, Korea, in 2004. He completed his B.Sc. and M.Sc. degrees in the Department of Electronics Engineering, Pukyong National University, Pusan, Korea. He is currently working as a professor at the Nano Device Application Center (NDAC), Kwangwoon University, Seoul, Korea. His research interests include hologram image processing, machine learning, human-computer interaction, non-touch interfaces, human gesture recognition, and digital therapeutics for autism diagnosis.