Friday, June 14, 2019

Deep CNN using Low level Graph API

In [1]:
try:
    import tensorflow as tf
    from tensorflow.examples.tutorials.mnist import input_data
    print("all Modules Loaded successfully .....")
except ImportError:
    print("Some Modules are missing .... ")
all Modules Loaded successfully .....
In [69]:
from IPython.display import YouTubeVideo
YouTubeVideo('QpBN1YH27q4')
Out[69]:

Layer 1

Here you can see we have an input image of 28x28 and a kernel of 5x5. Depending on what type of convolution you are doing, the output shape after a VALID convolution is N-M+1 ----> 28-5+1 = 24. After that we add a bias and apply an activation function. The next step is to apply a pooling function; I am using max pooling. Once that is done, you can see we apply 6 kernels in total, giving 6 feature maps, and repeat the procedure.
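
As a quick sanity check (a minimal sketch, not part of the original notebook), the N-M+1 arithmetic can be verified directly with tf.nn.conv2d: VALID padding gives 28-5+1 = 24, while the SAME padding used in the code below keeps the 28x28 size.

import tensorflow as tf

# Shape check only: one 28x28 grayscale image, one 5x5 kernel.
x = tf.zeros([1, 28, 28, 1])   # [batch, height, width, channels]
k = tf.zeros([5, 5, 1, 1])     # [filter_h, filter_w, channels_in, channels_out]
valid = tf.nn.conv2d(x, k, strides=[1, 1, 1, 1], padding='VALID')
same = tf.nn.conv2d(x, k, strides=[1, 1, 1, 1], padding='SAME')
print(valid.get_shape())       # (1, 24, 24, 1)  ->  28 - 5 + 1 = 24
print(same.get_shape())        # (1, 28, 28, 1)  ->  zero padding keeps the size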

Layer 2

Once the convolution is performed and pooling is done, the shape of each feature map is 4x4 in the example shown in the picture. Since we have 6 feature maps, we need to flatten the output ----> 4x4x6 ----> size x size x NumberOfFeatureMaps.
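
A minimal sketch of that flattening step, using the 4x4x6 numbers from the picture (the code below uses 7x7x64 instead):

pooled = tf.zeros([1, 4, 4, 6])             # [batch, size, size, feature_maps]
flat = tf.reshape(pooled, [-1, 4 * 4 * 6])  # size x size x NumberOfFeatureMaps = 96
print(flat.get_shape())                     # (1, 96)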

Layer 3

Once we have flattened the output we can connect a regular neural network. I am using 50 neurons in the hidden layer followed by 10 output neurons, so the size of the weight matrix is

W ----> output x input ----> 50 x FlattenedOutput

For the example in the picture it will be 50x96 ------> 50 (neurons) x (4x4x6), where 4x4 is the feature map size after pooling in the second layer and 6 is the number of feature maps.
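
One detail worth noting: the math above writes W as output x input (50x96), but with tf.matmul(flat, W), as in the code below, the weight is stored transposed as [input, output]. A tiny sketch with the picture's numbers:

flat = tf.zeros([1, 96])   # flattened 4x4x6 input
W = tf.zeros([96, 50])     # stored as [input, output], i.e. the transpose of 50x96
B = tf.zeros([50])
hidden = tf.nn.relu(tf.matmul(flat, W) + B)
print(hidden.get_shape())  # (1, 50)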

Layer 4

Finally, we use 10 neurons for the output layer and compute the loss.

For the code below I am using 32 feature maps in Layer 1, 64 feature maps in Layer 2, and a 1024-neuron hidden layer with dropout, followed by 10 output neurons.
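
With those choices the shapes work out as follows (a sketch derived from the SAME padding and 2x2 pooling used in the code; this is where the 7*7*64 in the flatten step comes from):

# input             [batch, 28, 28, 1]
# conv1 (5x5, 32)   [batch, 28, 28, 32]   SAME padding keeps 28x28
# pool1 (2x2)       [batch, 14, 14, 32]
# conv2 (5x5, 64)   [batch, 14, 14, 64]
# pool2 (2x2)       [batch, 7, 7, 64]
# flatten           [batch, 7*7*64] = [batch, 3136]
# dense + dropout   [batch, 1024]
# output logits     [batch, 10]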

In [64]:
class Neural_Network(object):
    def __init__(self, learning_rate=0.001, Epoch=1000):
        # Note: "Epoch" here counts training steps (mini-batches),
        # not full passes over the dataset.
        self.learning_rate = learning_rate
        self.Epoch = Epoch
        self.FeatureMap1 = 32
        self.FeatureMap2 = 64
        self.hidden_layer1 = 1024 
        self.output = 10

    def load_dataset(self):
        # Download (if needed) and load MNIST with one-hot encoded labels.
        self.mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
        return self.mnist
    
    def init_weight(self, shape):
        # Small random weights break the symmetry between neurons.
        init_random_dist = tf.truncated_normal(shape=shape, stddev=0.1)
        return tf.Variable(init_random_dist)

    def init_bias(self, shape):
        # A small positive bias works well with ReLU units.
        init_bias_val = tf.constant(0.1, shape=shape)
        return tf.Variable(init_bias_val)
    
    def conv2d(self, x, W):
        # x ----> [batch, height, width, channels]
        # W ----> [filter_height, filter_width, channels_in, channels_out]
        return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')
    
    def max_pooling_2by2(self, x):
        # x ----> [batch, height, width, channels]; a 2x2 pool halves height and width.
        return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
    
    def forward_pass(self):
        mnist = self.load_dataset()
        
        x = tf.placeholder(tf.float32, shape=[None, 784])
        y_true = tf.placeholder(tf.float32, shape=[None, 10])
        # Reshape flat 784-pixel rows into [batch, 28, 28, 1] images.
        x_image = tf.reshape(x, [-1, 28, 28, 1])
        
        self.K1_Shape = [5, 5, 1, self.FeatureMap1]
        self.K2_Shape = [5, 5, self.FeatureMap1, self.FeatureMap2]

        # Layer 1: convolution -> bias -> ReLU -> 2x2 max pool -----------------
        K1 = self.init_weight(shape=self.K1_Shape)
        B1 = self.init_bias(shape=[self.FeatureMap1])
        CC1_convo = self.conv2d(x_image, K1)
        CC1_Temp = tf.add(CC1_convo, B1)
        CC1 = tf.nn.relu(CC1_Temp)
        CP1 = self.max_pooling_2by2(CC1)       # 28x28 -> 14x14
        
        # Layer 2: convolution -> bias -> ReLU -> 2x2 max pool -----------------
        K2 = self.init_weight(shape=self.K2_Shape)
        B2 = self.init_bias(shape=[self.FeatureMap2])
        CC2_convo = tf.add(self.conv2d(CP1, K2), B2)
        CC2 = tf.nn.relu(CC2_convo)
        CP2 = self.max_pooling_2by2(CC2)       # 14x14 -> 7x7
        
        # Flatten + hidden dense layer ------------------------------------------
        # After two 2x2 poolings 28 -> 14 -> 7, so the flat size is 7*7*FeatureMap2.
        convo_2_flat = tf.reshape(CP2, [-1, 7 * 7 * self.FeatureMap2])
        input_size = int(convo_2_flat.get_shape()[1])

        W3 = self.init_weight(shape=[input_size, self.hidden_layer1])
        B3 = self.init_bias(shape=[self.hidden_layer1])
        S3 = tf.add(tf.matmul(convo_2_flat, W3), B3)
        A3 = tf.nn.relu(S3)
        dropout = tf.placeholder(tf.float32)        # keep probability for dropout
        A3 = tf.nn.dropout(A3, keep_prob=dropout)
        
     
        # Output dense layer: 10 neurons, one per digit class -------------------
        input_size1 = int(A3.get_shape()[1])
        W4 = self.init_weight(shape=[input_size1, self.output])
        B4 = self.init_bias(shape=[self.output])
        S4 = tf.add(tf.matmul(A3, W4), B4)          # raw logits; softmax is applied in the loss
        
        # -- LOSS ----
        loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(labels=y_true, logits=S4))
        optimizer = tf.train.AdamOptimizer(learning_rate=self.learning_rate)
        train = optimizer.minimize(loss)

        # Build the accuracy ops once, outside the training loop, so the graph
        # does not grow with every evaluation.
        matches = tf.equal(tf.math.argmax(S4, 1), tf.math.argmax(y_true, 1))
        acc = tf.reduce_mean(tf.cast(matches, tf.float32))

        init = tf.global_variables_initializer()
        
        with tf.Session() as sess:
            sess.run(init)
            for i in range(self.Epoch):
                batch_x, batch_y = mnist.train.next_batch(100)
                sess.run(train, feed_dict={x: batch_x, y_true: batch_y, dropout: 0.7})

                if i % 100 == 0:
                    print("ON STEP {}:".format(i))
                    print("Accuracy")
                    print(sess.run(acc, feed_dict={x: mnist.test.images, y_true: mnist.test.labels, dropout: 1.0}))
                    print("\n")
In [65]:
c =Neural_Network()
In [66]:
c.forward_pass()
Extracting MNIST_data/train-images-idx3-ubyte.gz
Extracting MNIST_data/train-labels-idx1-ubyte.gz
Extracting MNIST_data/t10k-images-idx3-ubyte.gz
Extracting MNIST_data/t10k-labels-idx1-ubyte.gz
ON STEP 0:
Accuracy
0.1301


ON STEP 100:
Accuracy
0.9603


ON STEP 200:
Accuracy
0.9704


ON STEP 300:
Accuracy
0.9799


ON STEP 400:
Accuracy
0.9761


ON STEP 500:
Accuracy
0.984


ON STEP 600:
Accuracy
0.9841


ON STEP 700:
Accuracy
0.9851


ON STEP 800:
Accuracy
0.9873


ON STEP 900:
Accuracy
0.9859

