
Text generation using LSTM

Credit: AITS Cainvas Community

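This notebook trains a word-level LSTM language model on the transcript of a speech. The text is cleaned and tokenized, cut into 51-word sliding windows (50 input words and 1 target word), and used to train a stacked-LSTM network that predicts the next word; the trained model then generates new text one word at a time from a seed sequence.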

In [1]:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import tensorflow as tf
import string
%matplotlib inline
In [2]:
!wget -N "https://cainvas-static.s3.amazonaws.com/media/user_data/cainvas-admin/Speech.txt"
text = ""
with open('Speech.txt', 'r') as f:
    text = f.read()
print(text)
--2021-06-28 11:05:24--  https://cainvas-static.s3.amazonaws.com/media/user_data/cainvas-admin/Speech.txt
Resolving cainvas-static.s3.amazonaws.com (cainvas-static.s3.amazonaws.com)... 52.219.62.68
Connecting to cainvas-static.s3.amazonaws.com (cainvas-static.s3.amazonaws.com)|52.219.62.68|:443... connected.
HTTP request sent, awaiting response... 304 Not Modified
File ‘Speech.txt’ not modified on server. Omitting download.

“ Good afternoon, and thank you and, wow. I am so privileged and so honored to be sharing this afternoon with all of you and these incredibly amazing women that are being honored today.
I’d like to extend my congratulations to each one of you, Octavia, Michelle, Kelly, Patty, and all fifty women that have been included in the impact report.
Your achievements not just inspire me but also so many others to work harder to be better and to make a dent wherever we can.
So, I’m very, very proud to be standing alongside you.
So, in life you know there are moments when you stop and ask yourself: “How did I get here?”
Like: “Why am I standing here?”
Well, this is definitely one of those moments for me and I find myself going back to the beginning, back to my roots.
I was born to incredible parents, amazing parents who served as doctors in the Indian Army.
I was the first born and as far back as I can remember I made my parents very proud and happy 99% of the time.
Okay, slight exaggerations of personal achievements are allowed from time to time, don’t you think?
My brother was born a few years later and even then, nothing changed for me.
We were both given equal opportunities, and I want to emphasize this, I want to really emphasize this for you because I don’t think a lot of people might understand that being equal might seem very normal but where I come from India and a lot of developing countries around the world more of not this is an exception.
It’s actually a privilege.
My first experience of the glaring disparity between boys and girls came at a very, very young age.
I grew up in a middle-class family with extremely philanthropic parents who constantly reminded me and my brother how lucky we were and how giving back to those who were less fortunate was not a choice it was a way of life.
Simple.
I was seven or eight years old when my parents started taking me on these visits in a traveling clinic to developing communities around and villages around the city that we lived in called Bareilly.
We were packed into this ambulance and would my parents would provide free medical care to people who couldn’t afford it.
My job at the age of eight was an assistant pharmacist.
I would count all the medicines put them in an envelope and give it out to patients, and I really took my job very seriously, very seriously.
But the more I went on these expeditions, the more I began to notice the simplest things that distinguished a boy from a girl or a man from a woman.
For example, girls were pulled out of school when they hit puberty because they were considered ready for marriage and babies.
That’s 12 and 13 while boys still enjoyed their childhood.
Or basic human rights such as health care were denied just because they were women.
Let this, let’s call this whole experience trigger number one for me.
Fast-forward a few years and many, many triggers in between.
Like a producer-director for example early on in my career, I must have been about 18 or 19, telling me that if I didn’t agree to the ridiculous terms or painfully low salary in his movie that he would just replace me because girls are replaceable in the entertainment business.
That was a memorable one.
Made me decide to make myself irreplaceable.
But I think what really moved the needle for me and ultimately led me to create the Priyanka Chopra foundation for health and education and around the same time partner with UNICEF was an encounter with my housekeeper’s daughter.
About 12 years ago I came home from set early one day and she was sitting in my library reading a book and she must have been eight or nine years old and I knew she loved reading.
So, I asked her, I was like, this is, I mean, it’s a weekday why aren’t you in school?
And she said: “Oh, I don’t go to school anymore.”
So, I went and asked her mother and I said, you know: “Why isn’t she in school?”
And her mom said that her family couldn’t afford to send her and her brother’s to school, so they chose the boys.
The reason, she would eventually get married and it would be a waste of money.
I was completely blown, and it shook me to my core.
Eventually, I decided to cover the cost of her education so that she could continue to learn because education is a basic human right.
And a huge necessity especially today.
From that point on I was determined to make a difference and as many children’s lives as I could.
In whatever big or small way that I could contribute.
There’s a really, really beautiful quote that I read recently, and I think it’s absolutely appropriate to say, to explain what I’m trying to say today.
“The hand that rocks the cradle, the procreator, the mother of tomorrow; a woman shapes the destiny of civilization. Such is the tragic irony of fate, that a beautiful creation such as a girl child is today one of the gravest concerns facing humanity.”
Girls have the power to change the world.
It is a fact and yet today girls are more likely than boys never to set foot in a classroom.
Despite all the efforts and progress made over the last two decades. More than, I’m just gonna give you a stat, more than 15 million girls of primary school age will never learn how to read or write compared to 10 million boys.
Primary school it’s the beginning of our future.
Over the last 11 years, I have witnessed firsthand the incredible work that UNICEF does for children around the world. Especially victims and survivors of child marriage, displacement, war, sexual violence.
But there is still so much work to do.
And for me, that is the fuel to my fire.
The reason I’m so committed to this cause and that is where my passion stems from because I know that a girl’s education not just empowers families but communities and economies.
A result of her education we all do better. It’s just as simple as that.
As entertainers and influencers sitting in this room I feel that is our social responsibility to be a voice for the voiceless, which is why I applaud each and every woman in this room for being such a badass.
For using your platform and your voice to contribute to change and for ensuring that there is not even one lost generation as long as we are alive.
I’d like to thank variety and all of you for encouraging me and all of us in this room to keep going and fighting on.
Thank you so much.
In [3]:
data = text.split("\n")
data[:6]
Out[3]:
['“ Good afternoon, and thank you and, wow. I am so privileged and so honored to be sharing this afternoon with all of you and these incredibly amazing women that are being honored today.',
 'I’d like to extend my congratulations to each one of you, Octavia, Michelle, Kelly, Patty, and all fifty women that have been included in the impact report.',
 'Your achievements not just inspire me but also so many others to work harder to be better and to make a dent wherever we can.',
 'So, I’m very, very proud to be standing alongside you.',
 'So, in life you know there are moments when you stop and ask yourself: “How did I get here?”',
 'Like: “Why am I standing here?”']
In [4]:
print("Total lines:", len(data))
Total lines: 56
In [5]:
data = " ".join(data)
data[:20]
Out[5]:
'“ Good afternoon, an'
In [6]:
def clean(doc):
    # split on whitespace, strip ASCII punctuation, keep only purely
    # alphabetic tokens, and lowercase everything
    tokens = doc.split()
    table = str.maketrans("", "", string.punctuation)
    tokens = [w.translate(table) for w in tokens]
    tokens = [word for word in tokens if word.isalpha()]
    tokens = [word.lower() for word in tokens]
    return tokens
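A quick sanity check of what clean() produces (a hypothetical snippet, not a cell from the original run). Note that string.punctuation covers only ASCII punctuation, so tokens containing curly quotes, such as “I’d”, keep the quote character, fail isalpha(), and are dropped entirely; that is why contractions are missing from the token stream.

clean("Good afternoon, and thank you!")  # -> ['good', 'afternoon', 'and', 'thank', 'you']
clean("I’d like to; 99% of the time.")   # -> ['like', 'to', 'of', 'the', 'time']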
In [7]:
tokens = clean(data)
In [8]:
length = 50 + 1   # 50 input words plus 1 target word per training sample
lines = []
for i in range(length, len(tokens)):
    sequence = tokens[i - length:i]   # sliding window of 51 consecutive tokens
    line = " ".join(sequence)
    lines.append(line)
print(len(lines))
print(lines[:6])
1067
['good afternoon and thank you and wow i am so privileged and so honored to be sharing this afternoon with all of you and these incredibly amazing women that are being honored today like to extend my congratulations to each one of you octavia michelle kelly patty and all fifty women', 'afternoon and thank you and wow i am so privileged and so honored to be sharing this afternoon with all of you and these incredibly amazing women that are being honored today like to extend my congratulations to each one of you octavia michelle kelly patty and all fifty women that', 'and thank you and wow i am so privileged and so honored to be sharing this afternoon with all of you and these incredibly amazing women that are being honored today like to extend my congratulations to each one of you octavia michelle kelly patty and all fifty women that have', 'thank you and wow i am so privileged and so honored to be sharing this afternoon with all of you and these incredibly amazing women that are being honored today like to extend my congratulations to each one of you octavia michelle kelly patty and all fifty women that have been', 'you and wow i am so privileged and so honored to be sharing this afternoon with all of you and these incredibly amazing women that are being honored today like to extend my congratulations to each one of you octavia michelle kelly patty and all fifty women that have been included', 'and wow i am so privileged and so honored to be sharing this afternoon with all of you and these incredibly amazing women that are being honored today like to extend my congratulations to each one of you octavia michelle kelly patty and all fifty women that have been included in']
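Each line is a window of 51 consecutive tokens that advances one word at a time, so the 1118 tokens in the cleaned speech yield 1118 − 51 = 1067 training sequences, matching the count printed above. The first 50 words of each window become the model input and the 51st word the prediction target.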
In [9]:
print("First line:", lines[0])
print("First token:", tokens[0])
First line: good afternoon and thank you and wow i am so privileged and so honored to be sharing this afternoon with all of you and these incredibly amazing women that are being honored today like to extend my congratulations to each one of you octavia michelle kelly patty and all fifty women
First token: good
In [10]:
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense,LSTM,Embedding
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.models import load_model
In [11]:
tokenizer = Tokenizer()
tokenizer.fit_on_texts(lines)
sequences = tokenizer.texts_to_sequences(lines)
sequences = np.array(sequences)   # every line has exactly 51 tokens, so this forms a 2-D array
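To see what the Tokenizer does (a tiny hypothetical example, not from the original run): fit_on_texts builds a word-to-index map, 1-indexed and ordered by descending word frequency, and texts_to_sequences applies that map.

demo = Tokenizer()
demo.fit_on_texts(["the cat sat on the mat"])
demo.word_index                          # {'the': 1, 'cat': 2, 'sat': 3, 'on': 4, 'mat': 5}
demo.texts_to_sequences(["the cat sat"]) # [[1, 2, 3]]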
In [12]:
X, y = sequences[:, :-1], sequences[:, -1]   # first 50 indices are the input, the last is the target
In [13]:
print(X[0])
print(y[0])
[434 426   1 421  16   1 432   3 156  17 430   1  17 412   2  45 428  14
 426  56  37   6  16   1  76 424 155  75   7  36  74 412  44  46   2 418
   9 417   2 153  25   6  16 415 414 413 411   1  37 410]
75
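The 50 integers in X[0] encode the first 50 words of the first window, and y[0] = 75 is the index the tokenizer assigned to “women”, the 51st word of that window.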
In [14]:
vocab_size = len(tokenizer.word_index) + 1   # +1 because index 0 is reserved for padding
In [15]:
y = to_categorical(y, num_classes=vocab_size)   # one-hot targets for categorical cross-entropy
In [16]:
model = Sequential()
model.add(Embedding(vocab_size, 50, input_length=50))   # 50-dimensional word embeddings
model.add(LSTM(100, return_sequences=True))             # first LSTM feeds its full sequence to the next
model.add(LSTM(100))                                    # second LSTM returns only its final state
model.add(Dense(100, activation="relu"))
model.add(Dense(vocab_size, activation="softmax"))      # distribution over the whole vocabulary
In [17]:
model.summary()
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
embedding (Embedding)        (None, 50, 50)            21750     
_________________________________________________________________
lstm (LSTM)                  (None, 50, 100)           60400     
_________________________________________________________________
lstm_1 (LSTM)                (None, 100)               80400     
_________________________________________________________________
dense (Dense)                (None, 100)               10100     
_________________________________________________________________
dense_1 (Dense)              (None, 435)               43935     
=================================================================
Total params: 216,585
Trainable params: 216,585
Non-trainable params: 0
_________________________________________________________________
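These counts follow directly from the layer shapes: the embedding holds 435 × 50 = 21,750 weights; an LSTM layer has 4 × (input_dim × units + units² + units) parameters, giving 4 × (50·100 + 100² + 100) = 60,400 for the first LSTM and 4 × (100·100 + 100² + 100) = 80,400 for the second; the dense layers add 100 × 100 + 100 = 10,100 and 100 × 435 + 435 = 43,935.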
In [18]:
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
In [19]:
record = model.fit(X, y, epochs=150)
Epoch 1/150
34/34 [==============================] - 0s 10ms/step - loss: 5.9143 - accuracy: 0.0262
Epoch 2/150
34/34 [==============================] - 0s 7ms/step - loss: 5.5259 - accuracy: 0.0431
...
Epoch 149/150
34/34 [==============================] - 0s 7ms/step - loss: 0.0697 - accuracy: 0.9972
Epoch 150/150
34/34 [==============================] - 0s 7ms/step - loss: 0.0645 - accuracy: 0.9953
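The loss falls from about 5.91 at epoch 1 to about 0.065 at epoch 150, with training accuracy above 99%. With only 1067 sequences drawn from a single speech and no validation split, the model is essentially memorizing the text, which is expected for a demonstration corpus this small.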
In [20]:
model.save('text_generate.h5')
In [21]:
model = load_model('text_generate.h5')
In [22]:
print(f"loss at epoch 1: {record.history['loss'][0]}")
print(f"loss at epoch 150: {record.history['loss'][149]}")
plt.plot(record.history['loss'])
loss at epoch 1: 5.914290428161621
loss at epoch 150: 0.06450717896223068
Out[22]:
[<matplotlib.lines.Line2D at 0x7f44587f3438>]
In [23]:
seed_text = lines[10]
In [24]:
def generate_text_seq(model, tokenizer, text_seq_length, seed_text, n_words):
    text = []
    for _ in range(n_words):
        # encode the current seed and keep only its last text_seq_length words
        encoded = tokenizer.texts_to_sequences([seed_text])[0]
        encoded = pad_sequences([encoded], maxlen=text_seq_length, truncating='pre')

        # pick the most likely next word (predict_classes is deprecated,
        # so take the argmax of the softmax output instead)
        y_predict = np.argmax(model.predict(encoded), axis=-1)[0]

        predicted_word = tokenizer.index_word.get(y_predict, "")
        seed_text = seed_text + " " + predicted_word
        text.append(predicted_word)
    return " ".join(text)
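Because the loop always takes the argmax, generation is deterministic and tends to replay the training text verbatim. A common variation, sketched below as an illustration rather than as part of the original notebook, is to sample the next word from the temperature-scaled softmax distribution; sample_next_word and its temperature parameter are hypothetical names.

def sample_next_word(model, tokenizer, text_seq_length, seed_text, temperature=1.0):
    # hypothetical helper, not in the original notebook:
    # encode and pad the seed exactly as generate_text_seq does
    encoded = tokenizer.texts_to_sequences([seed_text])[0]
    encoded = pad_sequences([encoded], maxlen=text_seq_length, truncating='pre')
    probs = model.predict(encoded)[0]
    # temperature < 1 sharpens the distribution, > 1 flattens it
    logits = np.log(probs + 1e-8) / temperature
    probs = np.exp(logits) / np.sum(np.exp(logits))
    idx = np.random.choice(len(probs), p=probs)
    return tokenizer.index_word.get(idx, "")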
In [25]:
generate_text_seq(model, tokenizer, 50, seed_text, 100)
Out[25]:
'not just inspire me but also so many others to work harder to be better and to make a dent wherever we can so very very proud to be standing alongside you so in life you know there are moments when you stop and ask yourself did i get like am i standing well this is definitely one of those moments for me and i find myself going back to the beginning back to my roots i was born to incredible parents amazing parents who served as doctors in the indian army i was the first born and as far'
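The generated continuation reproduces the opening of the speech almost word for word, which is consistent with the near-perfect training accuracy above: the greedy decoder simply replays what the overfit model memorized. Sampling with a temperature, as sketched earlier, would trade some of that fluency for variety.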