DLHUB is a versatile platform for developing deep learning models with a variety of network architectures across different applications.

Please visit the ANSCENTER blog for the following examples (with more to come):

Convolution Neural Network for MNIST Handwriting Recognition

This example shows how to implement the Convolution Neural Network architecture in DLHUB.

MNIST handwriting recognition is a well-known example, and most tutorials for it use deep learning frameworks such as TensorFlow, CNTK, MXNet, PyTorch, or Caffe. To design a deep learning model properly with these frameworks, a user needs both in-depth knowledge of the field and Python programming skills. The purpose of this article is to introduce a new way to design a deep learning model that requires no Python programming, no familiarity with complicated deep learning APIs, and no deep knowledge of deep learning architectures. DLHUB simplifies the design process to just a few clicks.
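For readers curious about what happens under the hood, the core operation of a Convolution Neural Network is the 2D convolution (strictly, cross-correlation) that slides a small kernel over an image. Below is a minimal NumPy sketch of that operation on a toy input; this is illustrative only and is not DLHUB's internal implementation, which requires no such code from the user.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation: the core operation of a convolution layer."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Each output value is the kernel applied to one image patch.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A 3x3 vertical-edge kernel applied to a toy 5x5 "image".
image = np.arange(25, dtype=float).reshape(5, 5)
kernel = np.array([[1.0, 0.0, -1.0]] * 3)
feature_map = conv2d(image, kernel)
print(feature_map.shape)  # (3, 3)
```

A convolution layer in a real network applies many such kernels in parallel and learns their values during training; this boilerplate is exactly what DLHUB replaces with a few clicks.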

It is a step-by-step guide showing how to obtain the dataset, load the training data, set up the convolution network with a few clicks, and train and evaluate the model. DLHUB demonstrates a revolutionary way of developing a deep neural network without programming.

Blog Link: https://www.anscenter.com/Blogs/Blog/BlogPost/convolution_neural_network_for_mnist_hand_writing_recognition

Residual Neural Network: Deep Learning for a CIFAR-10 Classifier

This example shows how to implement Residual Neural Network architecture in DLHUB.

With traditional deep learning models such as multilayer perceptrons (MLPs) and plain convolution layers, training slows down and accuracy saturates as we increase the number of layers. The Residual Neural Network architecture was introduced to solve this: it lets us add many more layers while still improving accuracy and avoiding the vanishing gradient problem.
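The key idea can be sketched in a few lines: a residual block computes ReLU(F(x) + x), where the identity shortcut "+ x" lets information (and gradients) bypass the transform F entirely. The NumPy snippet below is an illustrative toy, not DLHUB's implementation:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """y = ReLU(F(x) + x), where F(x) = w2 @ ReLU(w1 @ x).
    The identity shortcut (+ x) gives gradients a path around F,
    which is what mitigates vanishing gradients in very deep networks."""
    return relu(w2 @ relu(w1 @ x) + x)

rng = np.random.default_rng(0)
x = rng.standard_normal(8)
w1 = rng.standard_normal((8, 8)) * 0.1
w2 = rng.standard_normal((8, 8)) * 0.1
y = residual_block(x, w1, w2)
print(y.shape)  # (8,)

# With F's weights at zero the block reduces to ReLU(x): the shortcut means
# extra layers can default to the identity, so depth can grow safely.
assert np.allclose(residual_block(x, np.zeros((8, 8)), np.zeros((8, 8))), relu(x))
```

Because each block only has to learn a residual correction to the identity, stacking dozens of them no longer degrades training the way stacking plain layers does.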

The example uses the CIFAR-10 dataset provided by the University of Toronto. It consists of 60000 32x32 color images in 10 classes, with 6000 images per class: 50000 training images and 10000 test images. The dataset is structured as an image folder with training and test map files.
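The map files mentioned above are assumed here to follow the common tab-separated convention used by toolkits such as CNTK: one line per image, containing the image path, a tab, and the class index. As an illustration (function name and layout are hypothetical, not DLHUB's API), such a file can be generated from a class-per-folder image directory like this:

```python
import os
import tempfile

def write_map_file(image_root, map_path):
    """Write a tab-separated map file: <image path>\t<class index>.
    Each sub-folder of image_root is treated as one class."""
    classes = sorted(
        d for d in os.listdir(image_root)
        if os.path.isdir(os.path.join(image_root, d))
    )
    with open(map_path, "w") as f:
        for label, cls in enumerate(classes):
            cls_dir = os.path.join(image_root, cls)
            for name in sorted(os.listdir(cls_dir)):
                f.write(f"{os.path.join(cls_dir, name)}\t{label}\n")
    return classes

# Demo with a throwaway directory tree: <root>/<class>/<image files>.
root = tempfile.mkdtemp()
for cls in ("airplane", "bird"):
    os.makedirs(os.path.join(root, cls))
    with open(os.path.join(root, cls, "img0.png"), "wb"):
        pass
classes = write_map_file(root, os.path.join(root, "train_map.txt"))
print(classes)  # ['airplane', 'bird']
```

In the tutorial itself the training and test map files ship with the prepared dataset, so no scripting is needed.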

Blog Link: https://www.anscenter.com/Blogs/Blog/BlogPost/deep_learning_cifar_10_classifier_using_residual_neural_network

Transfer Learning: Image Classification of Fruits

This example shows how to implement Transfer Learning in DLHUB to classify different types of fruit.

Deep learning has become a powerful technique for pattern recognition and regression problems, and the Convolution Neural Network has proven to be the most effective neural network structure for image recognition. However, deep learning often requires a very large dataset to achieve reasonable accuracy and performance, and because deep learning architectures contain many layers, training is often very time-consuming. In a deep learning architecture, the last few layers typically act as the main classifier, while the earlier layers act as a feature extractor that distils the important information in the dataset into features fed to that classifier.

Traditionally, machine learning applications separate feature extraction and classification into two stages. Feature extraction is the difficult part, as it depends on the structure of the dataset, and the feature extractor's parameters stay fixed. In a deep learning architecture, the two stages are combined in one model, and the feature extractor's parameters are adjusted during the training process.

This structure is what makes the transfer learning technique possible, solving both the small-dataset and the training-time problem. The idea is to take an existing deep learning model that has been trained on a similar problem and reuse all the layers from the input layer up to, but not including, the last few as a feature extractor. New output layers are then added on top of this feature extractor for the specific classification task; these layers form a classifier that is trained during the training process.
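The idea above can be sketched numerically. In the toy NumPy example below, a fixed random projection stands in for the frozen, pre-trained feature extractor (an assumption purely for illustration; in practice it would be the early layers of a trained network), and only a small logistic-regression head, playing the role of the new output layers, is trained:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy data: two classes of 4-dimensional inputs.
x = rng.standard_normal((100, 4))
y = (x[:, 0] + x[:, 1] > 0).astype(float)

# "Pre-trained" feature extractor: frozen, never updated during training.
w_frozen = rng.standard_normal((4, 16))
features = np.tanh(x @ w_frozen)

# New output layer (the transferred model's classifier head):
# these are the only parameters we train.
w_head = np.zeros(16)
b_head = 0.0
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
for _ in range(200):
    p = sigmoid(features @ w_head + b_head)
    losses.append(-np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9)))
    grad = p - y                              # cross-entropy gradient w.r.t. logits
    w_head -= lr * features.T @ grad / len(y)
    b_head -= lr * grad.mean()

print(round(losses[0], 3), round(losses[-1], 3))  # loss drops as the head trains
```

Because only the small head is trained while the extractor stays frozen, far fewer examples and far less compute are needed than for training the whole network, which is exactly the benefit transfer learning delivers.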

Blog Link: https://www.anscenter.com/Blogs/Blog/BlogPost/fruit_recognition_using_transfer_learning

Transfer Learning: Image Classification of Super Heroes

This example shows how to implement Transfer Learning in DLHUB.

In this example, we demonstrate how to classify superhero images with only around 10 images per superhero category by exploiting the power of transfer learning.

It is a step-by-step guide showing how to prepare images in folders, import the data, and train and evaluate your model.

Blog Link: https://www.anscenter.com/Blogs/Blog/BlogPost/how_to_teach_a_kid_to_design_a_deep_learning_model_that_can_recognize_marvel_superhuman