IEEE CIS Summer/Winter School

Lecture Material


DAY 1: December 5th, 2018


Prof. Akhil R. Garg

Title: Convolutional Neural Network: the biologically inspired model of deep neural network

Lecture Slide:

Convolutional neural networks (CNNs), an example of deep neural networks, are hierarchical, multi-layered networks that have been used extensively for many pattern recognition tasks such as facial recognition, handwritten character recognition, image classification, medical image analysis, and natural language processing. Convolutional networks were inspired by neurophysiological findings on the visual system of mammals: the simple cells of the primary visual cortex respond to stimuli only in a restricted region of the visual field, known as the receptive field of that cell. Further, these simple cells act as local feature detectors, and their outputs are combined in higher layers of the visual cortex to form higher-order features. It has been shown that this knowledge can easily be built into a multi-layered network of neurons in which each hidden layer consists of a set of feature detectors that detect distinctive features of the object and produce feature maps containing the position information of those features. The first layer of a CNN detects local features; these local features are combined in the next layer to form higher-order features, and this process of combining features is repeated to form successively higher-order features. Finally, these higher-order features are used as input to a multi-layer perceptron (MLP) with a single hidden layer. Each of the hidden layers contains a subsampling (pooling) layer, which gives CNNs their translation and deformation invariance. Starting with an explanation of the neurophysiological findings on the visual system of mammals that are used to construct CNNs, this talk will attempt to explain the architecture, working, and learning mechanisms of CNNs. The talk will culminate with a detailed explanation of one application of CNNs.
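For reference, the layer pattern described above (local feature detectors, pooling for invariance, then an MLP classifier) can be sketched in a few lines of PyTorch; the layer sizes and the 28x28 grayscale input are illustrative assumptions, not taken from the talk.

```python
# Minimal CNN sketch following the structure described above:
# convolutions as local feature detectors, pooling for translation
# invariance, then an MLP with a single hidden layer on top.
# Layer sizes are illustrative assumptions (28x28 grayscale input).
import torch
import torch.nn as nn

cnn = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),   # first layer: local feature maps
    nn.ReLU(),
    nn.MaxPool2d(2),                             # subsampling (pooling) layer
    nn.Conv2d(8, 16, kernel_size=3, padding=1),  # higher-order features
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 7 * 7, 64),                   # MLP with a single hidden layer
    nn.ReLU(),
    nn.Linear(64, 10),                           # class scores
)

x = torch.randn(1, 1, 28, 28)                    # one dummy 28x28 image
print(cnn(x).shape)                              # torch.Size([1, 10])
```

Each Conv2d layer plays the role of the simple-cell feature detectors, and each MaxPool2d layer is the subsampling stage that provides translation tolerance.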



Prof. A. K. Deb

Title: Support Vector Machines and Its Applications

Lecture Slide:


Support Vector Machines (SVMs), which originate from Statistical Learning Theory, have superior generalization capability because they take care not only of the empirical risk but also of the structural risk. This talk will deliberate on a classification architecture that is a hybrid of neural networks and SVMs. The evolution of alternative SVMs that are computationally fast, and hence suitable for large datasets, will be discussed. Applications of SVMs in automotive modelling and in the control framework will be demonstrated.
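As a point of reference, a minimal soft-margin SVM classifier can be fit with scikit-learn as below; the synthetic dataset and RBF kernel are illustrative assumptions and do not represent the hybrid architecture or the fast SVM variants discussed in the talk.

```python
# Minimal SVM sketch with scikit-learn (illustrative assumptions only).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# C trades off empirical risk (training error) against structural risk
# (margin width / model complexity).
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```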



Prof. B. Bhattacharya

Title: On some Electro-Mechanical Aspects of Neuron Model

Lecture Slide:


In this talk, I will discuss the existing Hodgkin-Huxley model of neural impulse flow through the axon and indicate some of the anomalies related to this model. This will be explored further by introducing a new soliton-based wave model of neural impulse transmission. The issues related to both models will be discussed, and a hybrid model based on the co-propagation of electrical and mechanical signals will be presented. The impact of the new model on developing biological neural networks as an advanced model of ANNs will be discussed.
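For background, the classical Hodgkin-Huxley equations referred to above can be integrated numerically in a few lines; the sketch below uses the standard squid-axon parameters of the original 1952 formulation, with the membrane potential in millivolts measured relative to rest (the soliton and hybrid models are not shown).

```python
# Minimal Euler integration of the classical Hodgkin-Huxley equations
# (original 1952 squid-axon parameters, V in mV relative to rest).
import numpy as np

C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3        # uF/cm^2, mS/cm^2
ENa, EK, EL = 115.0, -12.0, 10.6              # mV

a_n = lambda V: 0.01 * (10 - V) / (np.exp((10 - V) / 10) - 1)
b_n = lambda V: 0.125 * np.exp(-V / 80)
a_m = lambda V: 0.1 * (25 - V) / (np.exp((25 - V) / 10) - 1)
b_m = lambda V: 4.0 * np.exp(-V / 18)
a_h = lambda V: 0.07 * np.exp(-V / 20)
b_h = lambda V: 1.0 / (np.exp((30 - V) / 10) + 1)

dt, T = 0.01, 50.0                            # time step and duration (ms)
V, n, m, h = 0.0, 0.32, 0.05, 0.6             # approximate resting state
trace = []
for step in range(int(T / dt)):
    I_ext = 10.0 if step * dt > 5.0 else 0.0  # step current injection (uA/cm^2)
    I_ion = (gNa * m**3 * h * (V - ENa)
             + gK * n**4 * (V - EK)
             + gL * (V - EL))
    V += dt * (I_ext - I_ion) / C             # membrane equation
    n += dt * (a_n(V) * (1 - n) - b_n(V) * n) # gating variable dynamics
    m += dt * (a_m(V) * (1 - m) - b_m(V) * m)
    h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
    trace.append(V)

print("peak membrane potential (mV above rest):", round(max(trace), 1))
```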



Prof. J. C. Bansal

Title: Nature Inspired Optimization Algorithms

Lecture Slide:



It becomes infeasible to apply classical optimization algorithms when the problem under consideration is highly nonlinear, has multiple optima, and gradient information is not available. For such optimization problems, there exist a few intelligent random-search algorithms. These algorithms search for the solution stochastically, and the intelligence in this random search is drawn from nature. Whether it is the foraging behaviour of ants, the swarming behaviour of birds, or the process of human evolution, computational optimization algorithms have been developed based on many natural phenomena. Researchers have found that these nature-inspired optimization algorithms can efficiently tackle problems that are otherwise difficult to solve using traditional optimization algorithms. During this lecture, a few popular and successful algorithms (Particle Swarm Optimization, Artificial Bee Colony Algorithm, Spider Monkey Optimization, and Biogeography Based Optimization) will be introduced. The Artificial Bee Colony Algorithm will be discussed in detail.
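To make the idea of intelligent random search concrete, the sketch below implements basic Particle Swarm Optimization (one of the algorithms listed above) on the sphere function; the inertia and acceleration coefficients are common textbook defaults chosen for illustration, not values from the lecture.

```python
# Minimal Particle Swarm Optimization sketch on the sphere function
# f(x) = sum(x_i^2). No gradient information is used.
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.sum(x**2, axis=1)            # objective

n_particles, dim, iters = 30, 5, 200
w, c1, c2 = 0.7, 1.5, 1.5                     # inertia, cognitive, social weights

x = rng.uniform(-5, 5, (n_particles, dim))    # positions
v = np.zeros_like(x)                          # velocities
pbest, pbest_val = x.copy(), f(x)             # personal bests
gbest = pbest[np.argmin(pbest_val)]           # global best

for _ in range(iters):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v                                 # stochastic position update
    vals = f(x)
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = x[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)]

print("best value found:", f(gbest[None])[0])
```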


DAY 2: December 6th, 2018


Prof. Vipul Arora

Title: Machine Learning for Speech and Audio Applications

Lecture Slide:


Recently, machine learning has been very successful in a large variety of audio applications: speech recognition, speaker identification, hearing aids, and smart home devices, to name a few. This talk will discuss the basics of audio signal processing and the machine learning models commonly used for some of the popular audio applications, such as speech recognition and audio detection.
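As a small illustration of the audio signal processing basics mentioned above, the sketch below computes a magnitude spectrogram of a synthetic tone with a short-time Fourier transform in plain NumPy; the frame length, hop size, and test signal are illustrative assumptions.

```python
# Minimal audio front-end sketch: magnitude spectrogram via the
# short-time Fourier transform, computed on a synthetic tone.
import numpy as np

sr = 16000                                   # sample rate (Hz)
t = np.arange(0, 1.0, 1 / sr)
signal = np.sin(2 * np.pi * 440 * t)         # 1 s of a 440 Hz tone

frame_len, hop = 400, 160                    # 25 ms frames, 10 ms hop
window = np.hanning(frame_len)
frames = [signal[i:i + frame_len] * window
          for i in range(0, len(signal) - frame_len, hop)]
spectrogram = np.abs(np.fft.rfft(frames, axis=1))   # (n_frames, n_bins)

peak_bin = spectrogram.mean(axis=0).argmax()
print("dominant frequency ~", peak_bin * sr / frame_len, "Hz")  # ~440 Hz
```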



Prof. Vrijendra Singh

Title: Time Series Analysis: Recent Advancements & Applications

Lecture Slide:


Time series analysis has always been important for pattern recognition researchers. It comprises methods for analysing time series data in order to extract meaningful information for modelling, classification, clustering, and forecasting. Parametric and non-parametric techniques have been widely used in the study of time series data. Recent advancements, such as anchored time series chains and deep learning models, will be presented along with real-life applications and challenges.
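As a concrete example of the parametric techniques mentioned above, the sketch below fits an AR(2) model to a synthetic series by least squares and produces a one-step-ahead forecast; the data and the model order are illustrative assumptions.

```python
# Minimal parametric time-series example: fit an AR(2) model by least
# squares and forecast one step ahead (synthetic data).
import numpy as np

rng = np.random.default_rng(1)
n = 500
y = np.zeros(n)
for t in range(2, n):                         # true process: AR(2) + noise
    y[t] = 0.6 * y[t - 1] - 0.3 * y[t - 2] + rng.normal(scale=0.1)

# Design matrix of lagged values, then ordinary least squares.
X = np.column_stack([y[1:-1], y[:-2]])        # lags 1 and 2
target = y[2:]
coef, *_ = np.linalg.lstsq(X, target, rcond=None)

forecast = coef[0] * y[-1] + coef[1] * y[-2]  # one-step-ahead forecast
print("estimated AR coefficients:", coef.round(2))
print("next-value forecast:", round(forecast, 3))
```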



Prof. Nishchal K Verma

Title: Deep Learning and Deep Fuzzy Networks

Lecture Slide:


Deep learning has recently attracted a lot of attention. Put simply, one who learns faster is more intelligent. In the basic structure of an artificial learning mechanism, an artificial neuron contains a nonlinear activation function and has several incoming and outgoing weighted connections. A Deep Learning (DL) framework comprises multiple layers of nonlinear processing nodes that are trained over a large set of data. It works on the architecture of a deep neural network (DNN), which is an artificial neural network with multiple hidden layers. However, a DNN is unable to model uncertainty due to vagueness, ambiguity, and imprecision. A Deep Fuzzy Network (DFN) can process uncertainty due to vagueness, ambiguity, and imprecision (fuzziness) in the inputs, so that the actual output remains close to the desired output.
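The neuron and layer structure described above can be written out directly; the sketch below is a forward pass through a small deep network in NumPy, where each layer applies weighted connections followed by a nonlinear activation (layer sizes are illustrative assumptions, and no fuzzy components are shown).

```python
# Minimal sketch of a deep network forward pass: each layer is a set of
# artificial neurons with weighted connections plus a nonlinear
# activation; stacking several hidden layers makes the network "deep".
import numpy as np

rng = np.random.default_rng(0)
sizes = [8, 16, 16, 3]                         # input, two hidden layers, output
weights = [rng.normal(scale=0.1, size=(m, n))
           for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def forward(x):
    """Forward pass with ReLU hidden units and a linear output layer."""
    for W, b in zip(weights[:-1], biases[:-1]):
        x = np.maximum(0.0, x @ W + b)         # weighted sum + nonlinearity
    return x @ weights[-1] + biases[-1]

print(forward(rng.normal(size=8)).shape)       # (3,)
```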



Prof. L. Behera

Title: Selected Applications of Deep Networks and CI

Lecture Slide:


This talk will focus on some interesting applications of CI and deep networks in intelligent control, visual perception, and EEG analysis. The primary motivation is to highlight how computing power, juxtaposed with simple adaptive laws, can extract meaningful information from huge amounts of data. Simultaneously, it will be demonstrated that while control applications are governed by stability constraints, applications in visual grasping and EEG analysis are guided by convergence speed, accuracy, and generalization.


DAY 3: December 7th, 2018


Prof. Jagannathan Sarangapani

Title: Direct Error Driven Deep Learning Scheme for Big Data Classification

Lecture Slide:



Complex systems such as smart environments, fleets of vehicles, machining centers, process control, smart grids, and health-care applications generate large quantities of data. The challenges in big data analytics are noisy data, noisy dimensions, heterogeneity, and computational cost. Big data can be analysed using dimension-reduction-based and learning-based approaches. The learning-based approach involves deep learning techniques using deep neural networks and L1-penalized feature selection.
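As an aside on L1-penalized feature selection, the sketch below shows how a Lasso fit drives the weights of uninformative (noisy) dimensions to zero, leaving only the informative features; the synthetic data and penalty strength are illustrative assumptions and do not reproduce the direct error-driven scheme of the talk.

```python
# Minimal L1-penalized feature selection sketch: the Lasso zeroes out
# coefficients of noisy dimensions (synthetic data, illustrative only).
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, d = 200, 50
X = rng.normal(size=(n, d))
true_w = np.zeros(d)
true_w[:5] = [3.0, -2.0, 1.5, -1.0, 0.5]      # only 5 informative features
y = X @ true_w + rng.normal(scale=0.1, size=n)

model = Lasso(alpha=0.1).fit(X, y)            # L1 penalty strength alpha
selected = np.flatnonzero(model.coef_)        # surviving feature indices
print("selected features:", selected)
```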



Prof. P. K. Kalra

Title: Issues, Myths, and Best Applications of Computational Intelligence




This lecture described the three phases of Artificial Intelligence: the theorem-proving phase of the 1950s-1960s, when there was neither sufficient data nor computing power; the rise of neural networks in the 1980s; and the third, current phase, in which both sufficient data and computing power are available. The lecture grazed through the history of the subject and of neural networks, moving on to some basic yet tricky problems such as the XOR problem. It explained the importance of understanding the data, and how misinterpretation can lead to misleading results. Various other problems were discussed, such as the blind source separation problem, the inverse problem, PCA for complex variables, and so on. Finally, the insightful talk concluded by explaining how important it is to integrate different concepts in order to obtain an efficient and robust network.
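For readers unfamiliar with the XOR problem mentioned above, the sketch below shows why it is considered tricky: no single linear threshold unit can compute XOR, but a network with one hidden layer can; the weights here are hand-picked for illustration rather than learned.

```python
# Minimal XOR illustration: a one-hidden-layer threshold network
# (hand-picked weights) computes XOR, which no single linear unit can.
import numpy as np

step = lambda z: (z > 0).astype(int)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])

# Hidden layer: an OR unit and an AND unit; output: OR AND (NOT AND) = XOR.
h_or  = step(X @ np.array([1, 1]) - 0.5)
h_and = step(X @ np.array([1, 1]) - 1.5)
xor   = step(h_or - h_and - 0.5)

print(xor)   # [0 1 1 0]
```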



Prof. Sandeep Shukla

Title: Machine Learning Applications in Cyber Security

Lecture Slide:


Learning from examples, and using the learned model to classify or predict future data points, has become the raison d'être of machine learning. Deep learning in particular has been useful in unsupervised machine learning. In this talk, we will discuss the various ways machine learning is used in the field of cyber security -- anomaly detection in cyber-physical systems, intrusion detection in networks, malware detection for various classes of malware, malware classification, etc. Unfortunately, as researchers start using machine learning to detect various types of attacks, the attackers are responding by creating techniques for obfuscating the features, so that the models cannot discriminate using regular feature sets. Such attacks on the learning algorithms, and countermeasures against them, will also be discussed in brief. At the Interdisciplinary Center for Cyber Security of Critical Infrastructure (C3I) at IIT Kanpur, a number of tools are being developed in this context. We will provide glimpses of some of those as well.
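As one concrete instance of the anomaly-detection use case mentioned above, the sketch below flags outlying points in synthetic sensor readings with scikit-learn's IsolationForest; the data and contamination rate are illustrative assumptions, and the code is not one of the C3I tools.

```python
# Minimal anomaly-detection sketch on synthetic "sensor" readings.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=0.0, scale=1.0, size=(500, 3))   # normal readings
anomalies = rng.uniform(low=6, high=8, size=(5, 3))      # injected outliers
X = np.vstack([normal, anomalies])

detector = IsolationForest(contamination=0.01, random_state=0).fit(X)
labels = detector.predict(X)                             # -1 = anomaly, 1 = normal
print("flagged points:", np.flatnonzero(labels == -1))
```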



Prof. Swagatam Das

Title: Predictive and Generative Deep Neural Architectures of Recent Interest: Foundations, Perspectives, and Future Challenges

Lecture Slide:


Building from basics, this talk will elaborate on the advent of deep neural networks and the associated gradient-based optimization algorithms. The talk will then discuss Convolutional Neural Networks (CNNs) and Generative Adversarial Networks (GANs) with comprehensive examples. Finally, the talk will unearth some challenging future research avenues for deep learning.
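As a pointer to the gradient-based optimization mentioned above, the sketch below runs plain gradient descent on a one-parameter quadratic loss; the loss function and learning rate are illustrative assumptions, standing in for the losses and optimizers used to train deep networks.

```python
# Minimal gradient-descent sketch: the update rule behind deep network
# training, shown on a one-parameter quadratic loss.
def loss(w):
    return (w - 3.0) ** 2          # minimum at w = 3

def grad(w):
    return 2.0 * (w - 3.0)         # analytic gradient of the loss

w, lr = 0.0, 0.1                   # initial parameter and learning rate
for step in range(100):
    w -= lr * grad(w)              # gradient descent update

print("w after training:", round(w, 4), "loss:", round(loss(w), 6))
```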