TensorFlow 2.x

I was bedridden due to an illness. To keep myself engaged during that time, I decided to learn a new ML framework and picked TensorFlow. Here I am compiling the TensorFlow Keras APIs as a quick reference.

It is assumed that the reader has basic knowledge of TensorFlow and the Keras APIs.

Certification Detail: https://www.tensorflow.org/extras/cert/TF_Certificate_Candidate_Handbook.pdf

Environment set up: before we start, let's set up the environment.

### Set Up
   * Install Python3 from  https://www.python.org/downloads/

#### Create Virtual env 
   * Create Virtual env 
     * python3 -m venv venv
   * Activate the source
     * source venv/bin/activate
   * Install the dependencies
     * pip3 install -r requirements.txt
     
#### Pycharm set up
   * Open the project in pycharm
   * Change Python interpreter to your venv path
       * Pycharm -> Preferences -> Python Interpreter 
       * Change Python Interpreter to <project_root>/venv/bin/python

        

requirements.txt:

tensorflow==2.2.0rc3
matplotlib==3.3.3
tensorflow_datasets
pandas==1.3.0        

API Details:

  • Build, compile and train machine learning (ML) models using TensorFlow.

model = tf.keras.models.Sequential([tf.keras.layers.Dense(units=1, input_shape=[1])])

model.compile(optimizer='sgd', loss='mean_squared_error')

model.fit(x, y, epochs=10)

model.predict([10])        
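For intuition, here is a plain-Python sketch (illustrative only, not the actual TensorFlow implementation) of what fitting this one-neuron model with SGD amounts to: gradient descent on w and b for y = wx + b, using made-up data that follows y = 2x - 1.

```python
# Illustrative sketch of what model.fit does for Dense(1) with SGD:
# minimize mean squared error over w and b by gradient descent.
xs = [-1.0, 0.0, 1.0, 2.0, 3.0, 4.0]
ys = [-3.0, -1.0, 1.0, 3.0, 5.0, 7.0]   # y = 2x - 1

w, b = 0.0, 0.0
lr = 0.01
for _ in range(2000):  # "epochs"
    # gradients of mean squared error with respect to w and b
    dw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    db = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    w -= lr * dw
    b -= lr * db

print(round(w, 2), round(b, 2))  # close to 2.0 and -1.0
```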

  • Build sequential models with multiple layers.

model = tf.keras.models.Sequential([tf.keras.layers.Flatten(),
                                    tf.keras.layers.Dense(128, activation='relu'),
                                    tf.keras.layers.Dense(10, activation='softmax'),
])

# Another example

model = tf.keras.models.Sequential([tf.keras.layers.Conv2D(64, (3,3), input_shape=(28, 28, 1), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(10, activation='softmax')
])

  • Build and train models for binary classification.

model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])        

  • Build and train models for multi-class categorization

model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])        
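Note that categorical_crossentropy expects one-hot encoded labels (sparse_categorical_crossentropy takes the raw integer labels instead). A plain-Python sketch of the one-hot encoding, using a hypothetical to_one_hot helper:

```python
# Sketch: one-hot encoding integer class labels, which is what
# categorical_crossentropy expects. (to_one_hot is a hypothetical
# helper; Keras provides tf.keras.utils.to_categorical for this.)
def to_one_hot(labels, num_classes):
    return [[1.0 if i == label else 0.0 for i in range(num_classes)]
            for label in labels]

print(to_one_hot([0, 2, 1], 3))
# [[1.0, 0.0, 0.0], [0.0, 0.0, 1.0], [0.0, 1.0, 0.0]]
```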

  • Plot loss and accuracy of a trained model.

import matplotlib.pyplot as plt

history = model.fit(...)
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))

# accuracy

plt.plot(epochs, acc)
plt.plot(epochs, val_acc)
plt.title('Training and validation accuracy')
plt.figure()

# loss

plt.plot(epochs, loss)
plt.plot(epochs, val_loss)
plt.title('Training and validation loss')
plt.show()
        

  • Image Augmentation:


from tensorflow.keras.preprocessing.image import ImageDataGenerator
 
train_datagen = ImageDataGenerator(rescale=1./255, width_shift_range=0.2, height_shift_range=0.2, rotation_range=40, shear_range=0.2, zoom_range=0.2, horizontal_flip=True,fill_mode='nearest')

# Data Preprocessing

train_generator = train_datagen.flow_from_directory('train_dir', target_size=(150, 150), batch_size=10, class_mode='binary')

# Training

history = model.fit(train_generator, steps_per_epoch=10, epochs=20, validation_data=valid_generator, validation_steps=10)

  • Use pretrained models (transfer learning).

from tensorflow.keras.applications.inception_v3 import InceptionV3
from tensorflow.keras.optimizers import RMSprop
from tensorflow.keras import layers, Model

local_weights_file = ''

pre_trained_model = InceptionV3(input_shape=(150, 150, 3), include_top=False, weights=None)

pre_trained_model.load_weights(local_weights_file)
for layer in pre_trained_model.layers:
  layer.trainable = False

last_layer = pre_trained_model.get_layer('mixed7')
last_output = last_layer.output

x = layers.Flatten()(last_output)

x = layers.Dense(1024, activation='relu')(x)
x = layers.Dropout(0.2)(x)

x = layers.Dense(1, activation='sigmoid')(x)
model = Model(pre_trained_model.input, x)

model.compile(optimizer=RMSprop(learning_rate=0.0001), loss='binary_crossentropy', metrics=['accuracy'])

  • Use callbacks to trigger the end of training cycles.

class MyCallback(tf.keras.callbacks.Callback):

   def on_epoch_end(self, epoch, logs=None):
      # stop once the loss drops below the target;
      # pass the callback via model.fit(..., callbacks=[MyCallback()])
      if logs.get('loss') < 0.4:
            self.model.stop_training = True
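To see what this hooks into, here is a plain-Python sketch (illustrative only, not Keras internals) of a training loop that calls the callback after every epoch and stops once the callback's flag is set:

```python
# Illustrative sketch (not Keras internals): how an on_epoch_end hook
# can cut a training loop short once loss crosses a threshold.
class StopAtLoss:
    def __init__(self, threshold):
        self.threshold = threshold
        self.stop_training = False

    def on_epoch_end(self, epoch, logs):
        if logs.get('loss', float('inf')) < self.threshold:
            self.stop_training = True

cb = StopAtLoss(0.4)
losses = [0.9, 0.7, 0.5, 0.35, 0.2]  # pretend per-epoch losses
epochs_run = 0
for epoch, loss in enumerate(losses):
    epochs_run += 1
    cb.on_epoch_end(epoch, {'loss': loss})  # the "framework" reports logs
    if cb.stop_training:
        break

print(epochs_run)  # stops after the epoch with loss 0.35 -> 4
```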

  • Use datasets from different sources.

# keras dataset
fashion_mnist = tf.keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()


# tensorflow_datasets 

import tensorflow_datasets as tfds
imdb, info = tfds.load('imdb_reviews', with_info=True, as_supervised=True)

train_data =  imdb['train']
test_data = imdb['test']        

Image Classification

  •  Define Convolutional neural networks with Conv2D and pooling layers.

tf.keras.models.Sequential([tf.keras.layers.Conv2D(16, (3,3), input_shape=(150, 150, 3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Conv2D(32, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(512, activation='relu'),
tf.keras.layers.Dense(1, activation='sigmoid')
])

  • Use image augmentation to prevent overfitting

train_datagen = ImageDataGenerator(rescale=1./255, width_shift_range=0.2, height_shift_range=0.2, rotation_range=40, shear_range=0.2, zoom_range=0.2, horizontal_flip=True,fill_mode='nearest')        

  • Use ImageDataGenerator

from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.optimizers import RMSprop

train_datagen = ImageDataGenerator(rescale=1./255)
test_datagen = ImageDataGenerator(rescale=1./255)

train_generator = train_datagen.flow_from_directory('train_dir', batch_size=10, target_size=(150, 150), class_mode='binary')

validation_generator = test_datagen.flow_from_directory('validation_dir', batch_size=10, target_size=(150, 150), class_mode='binary')



model.compile(optimizer=RMSprop(learning_rate=0.001),
              loss='binary_crossentropy',
              metrics = ['accuracy'])

model.fit(train_generator, steps_per_epoch=100, epochs=15, validation_data=validation_generator, validation_steps=15)

classes = model.predict(images)        

Natural Language Processing

  • Prepare text to use in TensorFlow models.

from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

sentences = []

tokenizer = Tokenizer(num_words=100, oov_token='<OOV>')
tokenizer.fit_on_texts(sentences)
word_index = tokenizer.word_index
sequences = tokenizer.texts_to_sequences(sentences)
padded = pad_sequences(sequences, maxlen=5, padding='post')
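What the Tokenizer and pad_sequences produce can be sketched in plain Python (illustrative, not the Keras implementation): build a 1-based word index, map each word to its index, and pad every sequence with zeros up to maxlen:

```python
# Plain-Python sketch of fit_on_texts / texts_to_sequences / pad_sequences.
sentences = ['i love my dog', 'i love my cat']

# build the word index (1-based; 0 is reserved for padding)
word_index = {}
for s in sentences:
    for w in s.split():
        if w not in word_index:
            word_index[w] = len(word_index) + 1

# texts_to_sequences: words -> indices
sequences = [[word_index[w] for w in s.split()] for s in sentences]

# pad_sequences(..., maxlen=5, padding='post'): pad with trailing zeros
maxlen = 5
padded = [seq[:maxlen] + [0] * (maxlen - len(seq)) for seq in sequences]
print(padded)  # [[1, 2, 3, 4, 0], [1, 2, 3, 5, 0]]
```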

        

  • Use word embeddings in your TensorFlow model.

tf.keras.models.Sequential([tf.keras.layers.Embedding(vocab_size, embedding_dimension, input_length=max_length),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(6, activation='relu'),
tf.keras.layers.Dense(1, activation='sigmoid')])
        

  • Use LSTMs in your model

tf.keras.models.Sequential([tf.keras.layers.Embedding(vocab_size, 64),
                    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
                    tf.keras.layers.Dense(6, activation='relu'),
                    tf.keras.layers.Dense(1, activation='sigmoid')])

  • Using CNN for text

tf.keras.models.Sequential([tf.keras.layers.Embedding(vocab_size, embedding_dim, input_length=max_length),
tf.keras.layers.Conv1D(128, 5, activation='relu'),
tf.keras.layers.GlobalMaxPooling1D(),
tf.keras.layers.Dense(24, activation='relu'),
tf.keras.layers.Dense(1, activation='sigmoid')])

  • Train LSTMs on existing text to generate text (such as songs and poetry)

import numpy as np
import tensorflow as tf
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense

data = ''
corpus = data.lower().split('\n')
tokenizer = Tokenizer()
tokenizer.fit_on_texts(corpus)

total_words = len(tokenizer.word_index) + 1


# prepare the training data: every n-gram prefix of each line is a sample

input_sequences = []
for line in corpus:
    token_list = tokenizer.texts_to_sequences([line])[0]
    for i in range(1, len(token_list)):
      n_gram_sequence = token_list[:i+1]
      input_sequences.append(n_gram_sequence)

max_sequence_length = max([len(x) for x in input_sequences])

input_sequences = np.array(pad_sequences(input_sequences, padding='pre', maxlen=max_sequence_length))

xs = input_sequences[:, :-1]
labels = input_sequences[:, -1]
ys = tf.keras.utils.to_categorical(labels, num_classes=total_words)

model = Sequential()

model.add(Embedding(total_words, 64, input_length=max_sequence_length-1))
model.add(LSTM(20))
model.add(Dense(total_words, activation='softmax'))

model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(xs, ys, epochs=500, verbose=1)

# predicting the next words

seed_text = ''
next_words = 10

for _ in range(next_words):

   token_list = tokenizer.texts_to_sequences([seed_text])[0]
   token_list = pad_sequences([token_list], maxlen=max_sequence_length-1, padding='pre')
   predicted = np.argmax(model.predict(token_list, verbose=0), axis=-1)[0]

   for word, index in tokenizer.word_index.items():
        if index == predicted:
           output_word = word
           break
   seed_text += ' ' + output_word

print(seed_text)

Time Series

  • Time Series Data preparation:

dataset = tf.data.Dataset.range(10)
dataset = dataset.window(5, shift=1, drop_remainder=True)
dataset = dataset.flat_map(lambda window: window.batch(5))
dataset = dataset.map(lambda window: (window[:-1], window[-1:]))
dataset = dataset.shuffle(buffer_size=10)  # avoid sequence bias
dataset = dataset.batch(10).prefetch(1)
for x, y in dataset:
   print(x.numpy(), y.numpy())
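The tf.data pipeline above is equivalent to the following plain-Python sketch (leaving out the shuffling and batching): slide a window of 5 over the series and split each window into 4 input values and 1 label:

```python
# Plain-Python sketch of the windowing step: 10 values, windows of 5
# with shift 1, each split into (features, label).
series = list(range(10))
window_size = 5

pairs = []
for i in range(len(series) - window_size + 1):
    window = series[i:i + window_size]
    pairs.append((window[:-1], window[-1:]))

print(pairs[0])   # ([0, 1, 2, 3], [4])
print(pairs[-1])  # ([5, 6, 7, 8], [9])
```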

Single Layer Neural Network

split_time = 1000
time_train = time[:split_time]
x_train = series[:split_time]
time_valid = time[split_time:]
x_valid = series[split_time:]

window_size = 10
batch_size=20
shuffle_buffer_size = 1000

dataset = windowed_dataset(series, window_size, batch_size, shuffle_buffer_size) #code above

l0 = tf.keras.layers.Dense(1, input_shape=[window_size])
model = tf.keras.models.Sequential([l0])

model.compile(loss='mse', optimizer=tf.keras.optimizers.SGD(learning_rate=1e-6, momentum=0.9))

model.fit(dataset, epochs=100, verbose=0)

forecast = []

for t in range(len(series) - window_size):
    forecast.append(model.predict(series[t:t+window_size][np.newaxis]))

forecast = forecast[split_time-window_size:]
results = np.array(forecast)[:, 0, 0]

mae = tf.keras.metrics.mean_absolute_error(x_valid, results).numpy()
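The metric computed here is the mean absolute error, mean(|y_true - y_pred|); in plain Python:

```python
# Plain-Python sketch of tf.keras.metrics.mean_absolute_error:
# the mean of |y_true - y_pred| over the validation window.
def mean_absolute_error(y_true, y_pred):
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

print(mean_absolute_error([1.0, 2.0, 3.0], [1.5, 2.0, 2.0]))  # 0.5
```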
        


