Special Issue on Role of Deep Learning in Smart Era

Submission Deadline: May 20, 2020

Please click the link to learn more about manuscript preparation: http://www.ijdst.org/submission

This special issue is currently open for paper submissions and guest editor applications.

Please download the flyer below for full details of the Special Issue.

Special Issue Flyer (PDF)
  • Lead Guest Editor
    • Amit Kumar Tyagi
      School of Computing Science and Engineering, Vellore Institute of Technology, Chennai, India
  • Guest Editor
    Guest Editors play a significant role in a special issue: they maintain the quality of the published research and enhance the special issue’s impact. If you would like to be a Guest Editor, or to recommend a colleague as a Guest Editor of this special issue, please click here to complete the Guest Editor application.
    • Sreenath N
      Department of Computer Science and Engineering, Pondicherry Engineering College, Puducherry, India
  • Introduction

    Due to recent developments in technology and the integration of millions of Internet of Things (IoT) devices, an enormous amount of data is generated every day. This data, known as Big Data, is characterized by seven V’s: Volume, Velocity, Variety, Variability, Veracity, Visualization, and Value. It must be analyzed to support the growth of many organizations and applications, such as e-healthcare (e.g., disease prediction) and satellite systems (e.g., weather prediction). At the same time, we are entering the era of a smart world, in which robotics will feature in most applications aimed at solving real-world problems. Applying robotics in domains such as medicine and the automotive industry is a central goal of computer vision. The objectives of Computer Vision (CV) are achieved through several components, including Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL). Beyond these, reliable deep learning models are needed in applications such as colorization of black-and-white images, adding sound to silent films, automatic machine translation, object classification in photographs, automatic handwriting generation, character-level text generation, image caption generation, and automatic game playing. Hence, reliable deep learning methods for perception are highly effective for many tasks and applications. In practice, however, it is unavoidable to encounter scenarios where the underlying assumptions are violated: data may have different statistics during training and deployment, data may change over time or contain noise, and models may even face adversarial attacks or other critical challenges that must be overcome. We are interested in investigating and guaranteeing the performance of deep learning models (deep neural networks) in such situations and applications. Hence, this special issue invites unpublished research articles covering all areas where deep learning can be applied, now and in the near future.
    Aims and Scope:
    1. Reliable Deep Learning Models
    2. Convolutional Neural Networks
    3. Long Short-Term Memory
    4. Recurrent Neural Networks
    5. Artificial Neural Networks
    6. Natural Image Processing

  • Guidelines for Submission

    Manuscripts may be submitted until the deadline. Submissions must be previously unpublished and may not be under consideration elsewhere.

    Papers should be formatted according to the guidelines for authors (see: http://www.ijdst.org/submission). By submitting your manuscripts to the special issue, you are acknowledging that you accept the rules established for publication of manuscripts, including agreement to pay the Article Processing Charges for the manuscripts. Manuscripts should be submitted electronically through the online manuscript submission system at http://www.sciencepublishinggroup.com/login. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal and will be listed together on the special issue website.

  • Published Papers

    The special issue is currently open for paper submission. Prospective authors are requested to submit an electronic copy of their complete manuscript by clicking here.