Docker guidelines

tmux: for resuming sessions on an Ubuntu server

tmux new -s teja            # create a new session named "teja"
Ctrl+b then d               # detach from the current session
tmux attach                 # attach to the previous session
tmux attach -t teja         # attach to the named session
tmux kill-session -t teja   # kill/delete the named session
tmux list-sessions          # list all sessions
tmux attach-session -t 0    # attach to session 0 by index

Load Docker on GPU

$ NV_GPU='2' nvidia-docker run -it -v /home/dgxusername/:/home/dgxusername/data nvcr.io/nvidia/caffe:17.11

NV_GPU='0' nvidia-docker run -it -v /home/dgxuser104/:/home/dgxuser104/ pytorch/pytorch:latest

$ NV_GPU='1,2'   # select multiple GPUs

Ctrl+p then Ctrl+q to detach from the current container without stopping it

ls -l /dev/nvidia*   # list the NVIDIA device files

$ docker images                        # list all Docker images
$ nvidia-smi                           # show usage of all GPUs
$ watch -n0.5 nvidia-smi               # refresh GPU usage every 0.5 s
$ docker system prune -a -f            # remove stopped containers, unused images and networks
$ sudo service docker restart          # restart the Docker service
$ df -h                                # show disk usage
$ sudo kill -9 9889                    # force-kill the process with PID 9889
$ sudo nvidia-smi --gpu-reset -i 5     # reset the GPU with index 5
$ sudo chmod -R a+rwx /path/to/folder  # give all users full permissions, recursively
$ ls -l /usr/bin/python*               # list installed Python binaries
$ tar xvf Python.tar.xz                # extract the archive

Load Docker with a named container

$ NV_GPU='2' nvidia-docker run -p 5052:8888 --name tejadocker1 -it -v /home/dgxuser125/:/home/dgxuser125 nvcr.io/nvidia/torch:17.11

$ nvidia-docker start tejadocker1
$ nvidia-docker attach tejadocker1

Install Python 3.6 along with Python 2.7

Load the container and run the following commands:
$ apt-get update && apt-get install -y software-properties-common
$ add-apt-repository ppa:jonathonf/python-3.6
$ apt-get update
$ apt install python3.6
$ update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.5 1
$ update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.6 2
$ apt-get install python3-pip

$ pip3 install torchvision
$ python3 demo.py


Top-1 vs Top-5 accuracy and loss

Top-1 vs Top-5 accuracy

Top-1 accuracy is the conventional accuracy: the model's prediction (the class with the highest probability) must exactly match the expected answer.

Top-5 accuracy means that any of the model's five highest-probability answers matches the expected answer.

For example, suppose we are working on a simple image-classification problem with deep learning. We give one picture (of a dog) as input to our model, and these are its outputs:

  • Tiger: 0.4

  • Dog: 0.3

  • Cat: 0.1

  • Cow 0.09

  • Lion: 0.08

  • Deer: 0.02

Given the above output probabilities:

Using top-1 accuracy, we count this output as wrong, because the model predicted Tiger instead of Dog.

Using top-5 accuracy, we count this output as correct, because Dog is among the top-5 guesses (second place, with probability 0.3).

Top-1 vs Top-5 error rate

The error rate is the complement of the accuracy.

Top-1 error: the percentage of the time that the classifier did not give the correct class the highest probability score.

Top-5 error: the percentage of the time that the correct class was not among the classifier's top 5 probabilities or guesses.
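As a quick check of these definitions, here is a minimal NumPy sketch that computes top-k accuracy from a matrix of predicted class probabilities; the probs and labels arrays below are invented for illustration.

import numpy as np

def topk_accuracy(probs, labels, k=5):
    # probs: (n_samples, n_classes) predicted probabilities
    # labels: (n_samples,) true class indices
    topk = np.argsort(probs, axis=1)[:, -k:]         # indices of the k most probable classes
    hits = np.any(topk == labels[:, None], axis=1)   # is the true label among them?
    return hits.mean()

# Tiny example: 3 samples, 6 classes
probs = np.array([[0.4, 0.3, 0.1, 0.09, 0.08, 0.02],   # true class 1 -> top-1 wrong, top-5 correct
                  [0.7, 0.1, 0.1, 0.05, 0.03, 0.02],
                  [0.2, 0.2, 0.2, 0.2,  0.1,  0.1]])
labels = np.array([1, 0, 5])
print("top-1:", topk_accuracy(probs, labels, k=1))
print("top-5:", topk_accuracy(probs, labels, k=5))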

Dataset pre-processing

Add Padding to data/images:

import numpy as np

# Pad 2 pixels of zeros on each side of the height and width dimensions
x_train = np.pad(x_train, ((0,0),(2,2),(2,2),(0,0)), 'constant')
x_test = np.pad(x_test, ((0,0),(2,2),(2,2),(0,0)), 'constant')
print("Updated Image Shape: {}".format(x_train[0].shape))

Image Resizing:

import cv2
import numpy as np
import tensorflow as tf
from keras import backend as K
from keras.utils import np_utils
# Example 1 (OpenCV): K.image_dim_ordering() is the old Keras API; newer Keras versions use K.image_data_format()
if K.image_dim_ordering() == 'th':
   X_train = np.array([cv2.resize(img.transpose(1,2,0), (img_rows,img_cols)).transpose(2,0,1) for img in X_train[:nb_train_samples,:,:,:]])
   X_valid = np.array([cv2.resize(img.transpose(1,2,0), (img_rows,img_cols)).transpose(2,0,1) for img in X_valid[:nb_valid_samples,:,:,:]])
else:
   X_train = np.array([cv2.resize(img, (img_rows,img_cols)) for img in X_train[:nb_train_samples,:,:,:]])
   X_valid = np.array([cv2.resize(img, (img_rows,img_cols)) for img in X_valid[:nb_valid_samples,:,:,:]])

# Example 2 (Keras backend): note that the second and third arguments of resize_images
# are scale factors, not target sizes
x_train = tf.keras.backend.resize_images(x_train, 224, 224, 'channels_last', interpolation='nearest')

# Example 3 (TensorFlow 1.x): tf.image.resize_images was replaced by tf.image.resize in TF 2.x
x_train = tf.image.resize_images(x_train, [224,224], align_corners=False, preserve_aspect_ratio=False, name=None)

# Example 4
nb_train_samples = 50000 # 50000 training samples
nb_test_samples = 10000  # 10000 test samples
img_rows, img_cols = 224, 224
if K.image_dim_ordering() == 'th':
  x_train = np.array([cv2.resize(img.transpose(1,2,0), (img_rows,img_cols)).transpose(2,0,1) for img in x_train[:nb_train_samples,:,:,:]])
  x_test = np.array([cv2.resize(img.transpose(1,2,0), (img_rows,img_cols)).transpose(2,0,1) for img in x_test[:nb_test_samples,:,:,:]])
else:
  x_train = np.array([cv2.resize(img, (img_rows,img_cols)) for img in x_train[:nb_train_samples,:,:,:]])
  x_test = np.array([cv2.resize(img, (img_rows,img_cols)) for img in x_test[:nb_test_samples,:,:,:]])

Evaluation metrics

MAE and RMSE are the two most popular metrics for continuous variables. Both MAE and RMSE can range from 0 to ∞. They are negatively oriented scores: lower values are better.

Mean Squared Error (MSE)

  • MSE is the simplest of these metrics but the least used; it measures the average squared error of the predictions.
  • For each point, it takes the squared difference between the prediction and the target, and then averages those values.
  • The higher this value, the worse the model. It is never negative, since we square the individual prediction errors before summing them, and it is zero for a perfect model.
  • If we make a single very bad prediction, the squaring makes the error even larger, which may skew the metric towards overestimating how bad the model is.

Mean Absolute Error (MAE)

  • The error is calculated as the average of the absolute differences between the target values and the predictions.
  • MAE is a linear score, which means that all the individual differences are weighted equally in the average.
  • It works well when there are outliers in the data. If the outliers are just some unexpected values that we still care about, use MSE instead.

Root Mean Squared Error (RMSE)

  • RMSE is just the square root of MSE; it penalizes large differences more heavily than MAE (see the sketch below).
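For concreteness, a minimal NumPy sketch of the three metrics; the y_true and y_pred arrays below are invented for illustration.

import numpy as np

y_true = np.array([3.0, 5.0, 2.5, 7.0])   # target values
y_pred = np.array([2.5, 5.0, 4.0, 8.0])   # model predictions

mse = np.mean((y_true - y_pred) ** 2)     # average squared error
mae = np.mean(np.abs(y_true - y_pred))    # average absolute error
rmse = np.sqrt(mse)                       # square root of MSE
print(mse, mae, rmse)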

Confusion Matrix

  • A confusion matrix is a table that is often used to describe the performance of a classification model (or “classifier”) on a set of test data for which the true values are known.
  • true positives (TP): These are cases in which we predicted yes (they have the disease), and they do have the disease.
  • true negatives (TN): We predicted no, and they don’t have the disease.
  • false positives (FP): We predicted yes, but they don’t actually have the disease. (Also known as a “Type I error.”)
  • false negatives (FN): We predicted no, but they actually do have the disease. (Also known as a “Type II error.”)

Accuracy: Overall, how often is the classifier correct?

Accuracy = (TP + TN) / (TP + TN + FP + FN)

Precision

  • It measures what proportion of positive identifications is actually correct.
  • It can also be defined as the number of correct positive predictions divided by the total number of positive predictions. Mathematically it is defined as:

Precision = TP / (TP + FP)

Recall

  • It measures what proportion of actual positives is identified correctly.
  • It can also be defined as the number of correct positive predictions divided by the total number of actual positive examples. Mathematically it is defined as:

Recall = TP / (TP + FN)


F1 Score

  • The F1 score is the harmonic mean of precision and recall.
  • Mathematically it is defined as:

F1 Score = 2 * (Precision * Recall) / (Precision + Recall)
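All of these quantities can be computed directly from the predictions; below is a short scikit-learn sketch (the y_true and y_pred label vectors are invented for illustration).

from sklearn.metrics import (confusion_matrix, accuracy_score,
                             precision_score, recall_score, f1_score)

y_true = [1, 0, 1, 1, 0, 1, 0, 0]   # 1 = has the disease, 0 = does not
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]   # classifier output

# For binary labels, confusion_matrix returns [[TN, FP], [FN, TP]]
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("TP, TN, FP, FN:", tp, tn, fp, fn)
print("Accuracy :", accuracy_score(y_true, y_pred))    # (TP+TN)/(TP+TN+FP+FN)
print("Precision:", precision_score(y_true, y_pred))   # TP/(TP+FP)
print("Recall   :", recall_score(y_true, y_pred))      # TP/(TP+FN)
print("F1 score :", f1_score(y_true, y_pred))          # 2*(P*R)/(P+R)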

IoU (Intersection over Union)

  • We use IoU to evaluate object detectors. It is defined as the area of overlap divided by the area of union between two bounding boxes (see the sketch below).
  • We need the ground-truth bounding boxes (i.e., the hand-labeled bounding boxes from the test set that specify where in the image our object is).
  • We also need the predicted bounding boxes from our model.
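A minimal sketch of how IoU can be computed for two axis-aligned boxes given as (x1, y1, x2, y2) corner coordinates; the box values below are made up for illustration.

def bbox_iou(boxA, boxB):
    # (x1, y1) is the top-left corner, (x2, y2) the bottom-right corner
    xA, yA = max(boxA[0], boxB[0]), max(boxA[1], boxB[1])   # intersection top-left
    xB, yB = min(boxA[2], boxB[2]), min(boxA[3], boxB[3])   # intersection bottom-right
    inter = max(0, xB - xA) * max(0, yB - yA)               # area of overlap
    areaA = (boxA[2] - boxA[0]) * (boxA[3] - boxA[1])
    areaB = (boxB[2] - boxB[0]) * (boxB[3] - boxB[1])
    return inter / float(areaA + areaB - inter)             # overlap / union

# Ground-truth box vs predicted box
print(bbox_iou((50, 50, 150, 150), (60, 60, 170, 160)))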

 

mAP (Mean Average Precision)

  • Used for object detection tasks

BLEU (Bilingual Evaluation Understudy)

  • Used for sequence models (e.g., machine translation)

 

Read more:

    https://medium.com/human-in-a-machine-world/mae-and-rmse-which-metric-is-better-e60ac3bde13d
    https://medium.com/usf-msds/choosing-the-right-metric-for-machine-learning-models-part-1-a99d7d7414e4
    https://towardsdatascience.com/how-to-select-the-right-evaluation-metric-for-machine-learning-models-part-1-regrression-metrics-3606e25beae0
    https://www.dataschool.io/simple-guide-to-confusion-matrix-terminology/
    https://classeval.wordpress.com/introduction/basic-evaluation-measures/
    https://www.pyimagesearch.com/2016/11/07/intersection-over-union-iou-for-object-detection/


Useful links

RNN: 

Guide to RNN, LSTM and GRU

 

Data Augmentation:
How to Configure Image Data Augmentation in Keras
Keras ImageDataGenerator and Data Augmentation
Keras data augmentation: CIFAR-10

Classification

Object Detection
Faster R-CNN object detection with PyTorch
A step-by-step introduction to the basic object detection algorithms (Part 1)
OD on Aerial images using RetinaNet
OD with Keras Mask R-CNN
OD with Keras Faster R-CNN

Image Segmentation
Segmentation
Mask R-CNN segmentation with PyTorch
Instance Segmentation Using Mask R-CNN
Semantic segmentation with UNET

Image Analysis
Deep learning and medical image analysis with Keras

Transfer Learning
Transfer learning with Keras and deep learning
TF-learning

Unsupervised learning:

Autoencoder
Auto Encoders 1, Auto Encoders 2, encoder-decoder (text summarization:Keras),
Auto Encoders in Keras, Autoencoder
LSTM Auto-encoders,
Autoencoders
Image Denoising using AE

Generative Adversarial Network (GAN)

GAN1, GAN2, GAN3, GAN4
Why GANs are hard to train, GAN Applications

Clustering Problems: K-Means
Generative models (old): RBM, Naive Bayes, DBN

TensorFlow Lite

Ported to Arduino

How do I change image dimensions of a pre-trained CNN?

CUDA 10, cuDNN and PyTorch with GPU support Installation

To install CUDA 10.1, cuDNN and PyTorch with GPU support on Windows 10, follow these steps in order:

Update current GPU driver

  • Download the appropriate updated driver for your GPU from the NVIDIA site here
  • You can display the name of the GPU you have and select the driver accordingly; run the following command at the command prompt to get the GPU information:
    • wmic path win32_VideoController get name
  • In my case the above command displays “NVIDIA Quadro K1200”

Download and install CUDA and cuDNN

  • Download CUDA Toolkit 10.1 from here and install it.
  • The default installation path would be similar to C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1
  • Download cuDNN for CUDA 10.1 from here and extract the downloaded file.
  • Copy all the folders (bin, include and lib) from the extracted folder and paste to C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1

Update/add environment variables

  • Set/Update environment variables as:
    • CUDA_HOME=C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1
    • CUDA_PATH=C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1
    • CUDA_PATH_V10_1=C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1
  • Open the Path variable under system variables and make sure the CUDA paths above have been added/updated
  • After completing the above steps, the CUDA 10.1 and cuDNN installation is done and your GPU is ready to use

Install PyTorch

  • Go to the official PyTorch site and select the appropriate command for installing PyTorch. The following command is for:
    • PyTorch Build: Stable (1.0)
    • OS: Windows
    • Package: Conda
    • Language: Python 3.6
    • CUDA 10.0
  • conda install pytorch torchvision cudatoolkit=10.0 -c pytorch

If you want to install using pip, the above command changes to:

pip3 install https://download.pytorch.org/whl/cu100/torch-1.0.1-cp36-cp36m-win_amd64.whl
pip3 install torchvision

Verify installation

To verify the installation, execute the following lines. If everything is fine, you will see the CUDA device (cuda:0) as output; otherwise you will see cpu.

import torch

# Prints the CUDA device if a GPU is available, otherwise the CPU
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print(device)

Text data pre-processing in Python

Remove all special characters & numbers other than a-z, A-Z

import re

# Keep only letters (a-z, A-Z); everything else is replaced with a space
data = re.sub('[^A-Za-z]+', ' ', "[{(Python is $#! great]  for learning 5678").strip()
print(data)  # Python is great for learning

Remove all the blank lines from text file

outputfile= open("outputfile.txt",'w') # output file

with open("inputfile.txt") as f:
    for line in f:
        if not line.isspace():
            outputfile.write(line)
outputfile.close()
print("done")

Remove only the first (specific) word from a string

text = 'us he is right in us.'
#text = 'int he is int in'
st = text.split()

# Drop the first word only if it is one of the specific words,
# otherwise keep the string unchanged
if st[0] in ('us', 'int'):
    data = st[1:]
else:
    data = st

finaltext = ' '.join(data)
print(finaltext)

Remove all special symbols, empty lines and specific first word (from each line)

import re

# Keep only letters and whitespace. Note: the character class must be [^a-zA-Z\s];
# the range [A-z] would also match characters such as '[', ']' and '^'.
fun = lambda x: re.sub(r'[^a-zA-Z\s]', '', x)

newfile = open("output.txt", 'w')
with open("input.txt") as f:
    for line in f:
        if not line.isspace():
            line = fun(line)                 # drop special symbols
            line = " ".join(line.split())    # collapse repeated whitespace
            words = line.split()
            # Drop the specific first word ('us' or 'int'), otherwise keep the line as-is
            if words and words[0] in ('us', 'int'):
                words = words[1:]
            line = ' '.join(words) + '\n'
            newfile.write(line)
newfile.close()