
Speech Emotion Recognition

Credit: AITS Cainvas Community

Photo by Gleb Kuznetsov on Dribbble

Speech Emotion Recognition is the task of recognizing a speaker's emotion from their speech. It can be used, for example, to recommend songs based on the listener's mood, and it has many other applications in which a person's mood plays a vital role.

Importing the Dataset and Trained Model

This notebook contains the training part as well, but if you want to skip training and assess the model's performance directly, you can use the pre-trained model.
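If you go that route, the pre-trained model used near the end of this notebook can be downloaded and loaded right away. A minimal sketch (the same download and load calls appear again in the later cells):

!wget -N "https://cainvas-static.s3.amazonaws.com/media/user_data/cainvas-admin/speech_emotion_full.h5"

import tensorflow as tf
pre_trained_model = tf.keras.models.load_model('speech_emotion_full.h5')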

Running the command below downloads the dataset into your workspace.

In [1]:
# This will download the dataset. You will see a folder called ALL in your workspace.
!wget -N "https://cainvas-static.s3.amazonaws.com/media/user_data/cainvas-admin/SER.zip"
!unzip -qo SER.zip 
!rm SER.zip
--2020-10-30 13:48:49--  https://cainvas-static.s3.amazonaws.com/media/user_data/cainvas-admin/SER.zip
Resolving cainvas-static.s3.amazonaws.com (cainvas-static.s3.amazonaws.com)... 52.219.66.108
Connecting to cainvas-static.s3.amazonaws.com (cainvas-static.s3.amazonaws.com)|52.219.66.108|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 100987649 (96M) [application/zip]
Saving to: ‘SER.zip’

SER.zip             100%[===================>]  96.31M  72.6MB/s    in 1.3s    

2020-10-30 13:48:51 (72.6 MB/s) - ‘SER.zip’ saved [100987649/100987649]

Importing Libraries

In [2]:
import pandas as pd
import numpy as np

import os
import sys

# librosa is a Python library for analyzing audio and music. We will use it later to load the audio files and extract features from them.
import librosa
import librosa.display
import seaborn as sns
import matplotlib.pyplot as plt

from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.metrics import confusion_matrix, classification_report
from sklearn.model_selection import train_test_split

# to play the audio files
from IPython.display import Audio

import tensorflow as tf
import keras
from keras.callbacks import ReduceLROnPlateau
from keras.models import Sequential
from keras.layers import Dense, Conv1D, MaxPooling1D, Flatten, Dropout, BatchNormalization ,Activation
from keras.utils import np_utils, to_categorical
from keras.callbacks import ModelCheckpoint

import warnings
if not sys.warnoptions:
    warnings.simplefilter("ignore")
warnings.filterwarnings("ignore", category=DeprecationWarning) 

Data Preparation

Creating a DataFrame for the dataset. In the SAVEE dataset, each filename encodes the speaker's initials and an emotion prefix (a, d, f, h, n, sa, su), which we parse below.

In [3]:
Savee = "ALL/"
In [4]:
savee_directory_list = os.listdir(Savee)

file_emotion = []
file_path = []

for file in savee_directory_list:
    file_path.append(Savee + file)
    part = file.split('_')[1]
    ele = part[:-6]
    if ele=='a':
        file_emotion.append('angry')
    elif ele=='d':
        file_emotion.append('disgust')
    elif ele=='f':
        file_emotion.append('fear')
    elif ele=='h':
        file_emotion.append('happy')
    elif ele=='n':
        file_emotion.append('neutral')
    elif ele=='sa':
        file_emotion.append('sad')
    else:
        file_emotion.append('surprise')
        
# dataframe for emotion of files
emotion_df = pd.DataFrame(file_emotion, columns=['Emotions'])

# dataframe for path of files.
path_df = pd.DataFrame(file_path, columns=['Path'])
data_path = pd.concat([emotion_df, path_df], axis=1)
data_path.head()
Out[4]:
Emotions Path
0 fear ALL/DC_f07.wav
1 disgust ALL/DC_d13.wav
2 fear ALL/JE_f05.wav
3 sad ALL/KL_sa15.wav
4 surprise ALL/JK_su03.wav

Data Visualisation and Exploration

First, let's plot the count of each emotion in our dataset.

In [5]:
plt.title('Count of Emotions', size=16)
sns.countplot(data_path.Emotions)
plt.ylabel('Count', size=12)
plt.xlabel('Emotions', size=12)
sns.despine(top=True, right=True, left=False, bottom=False)
plt.show()

We can also plot waveplots and spectrograms for the audio signals:

  • Waveplots - Waveplots let us know the loudness of the audio at a given time.
  • Spectrograms - A spectrogram is a visual representation of the spectrum of frequencies of a sound or other signal as it varies with time, i.e. how the frequency content of an audio signal changes over time.
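Note: the plotting helpers below call librosa.display.waveplot, which matches the librosa version this notebook was written against. On newer librosa releases (0.10 and later) that function was renamed to waveshow, so a small compatibility shim like the following sketch may be needed (an assumption about your installed version, not something the notebook requires):

import librosa.display

# Alias the old name to the new one only when waveplot is missing
# (i.e. on librosa >= 0.10); otherwise this does nothing.
if not hasattr(librosa.display, 'waveplot'):
    librosa.display.waveplot = librosa.display.waveshow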
In [6]:
def create_waveplot(data, sr, e):
    plt.figure(figsize=(10, 3))
    plt.title('Waveplot for audio with {} emotion'.format(e), size=15)
    librosa.display.waveplot(data, sr=sr)
    plt.show()

def create_spectrogram(data, sr, e):
    # librosa.stft computes the short-time Fourier transform (STFT) of the signal
    X = librosa.stft(data)
    Xdb = librosa.amplitude_to_db(abs(X))
    plt.figure(figsize=(12, 3))
    plt.title('Spectrogram for audio with {} emotion'.format(e), size=15)
    librosa.display.specshow(Xdb, sr=sr, x_axis='time', y_axis='hz')   
    #librosa.display.specshow(Xdb, sr=sr, x_axis='time', y_axis='log')
    plt.colorbar()
In [7]:
emotion='fear'
path = np.array(data_path.Path[data_path.Emotions==emotion])[1]
data, sampling_rate = librosa.load(path)
create_waveplot(data, sampling_rate, emotion)
create_spectrogram(data, sampling_rate, emotion)
Audio(path)
Out[7]:
In [8]:
emotion='angry'
path = np.array(data_path.Path[data_path.Emotions==emotion])[1]
data, sampling_rate = librosa.load(path)
create_waveplot(data, sampling_rate, emotion)
create_spectrogram(data, sampling_rate, emotion)
Audio(path)
Out[8]:
In [9]:
emotion='sad'
path = np.array(data_path.Path[data_path.Emotions==emotion])[1]
data, sampling_rate = librosa.load(path)
create_waveplot(data, sampling_rate, emotion)
create_spectrogram(data, sampling_rate, emotion)
Audio(path)
Out[9]:
In [10]:
emotion='happy'
path = np.array(data_path.Path[data_path.Emotions==emotion])[1]
data, sampling_rate = librosa.load(path)
create_waveplot(data, sampling_rate, emotion)
create_spectrogram(data, sampling_rate, emotion)
Audio(path)
Out[10]:

Data Augmentation

  • We will apply some data augmentation, such as injecting noise and stretching the audio signals, to improve training results.
In [11]:
def noise(data):
    noise_amp = 0.035*np.random.uniform()*np.amax(data)
    data = data + noise_amp*np.random.normal(size=data.shape[0])
    return data

def stretch(data, rate=0.8):
    return librosa.effects.time_stretch(data, rate)

def shift(data):
    shift_range = int(np.random.uniform(low=-5, high = 5)*1000)
    return np.roll(data, shift_range)

def pitch(data, sampling_rate, pitch_factor=0.7):
    return librosa.effects.pitch_shift(data, sampling_rate, pitch_factor)

# taking one example file to demonstrate the augmentation techniques.
path = np.array(data_path.Path)[1]
data, sample_rate = librosa.load(path)

1. Simple Audio

In [12]:
plt.figure(figsize=(14,4))
librosa.display.waveplot(y=data, sr=sample_rate)
Audio(path)
Out[12]:

2. Noise Injection

In [13]:
x = noise(data)
plt.figure(figsize=(14,4))
librosa.display.waveplot(y=x, sr=sample_rate)
Audio(x, rate=sample_rate)
Out[13]:

Noise injection is a useful augmentation technique because it helps keep the model from overfitting to the training data.

3. Stretching

In [14]:
x = stretch(data)
plt.figure(figsize=(14,4))
librosa.display.waveplot(y=x, sr=sample_rate)
Audio(x, rate=sample_rate)
Out[14]:

4. Shifting

In [15]:
x = shift(data)
plt.figure(figsize=(14,4))
librosa.display.waveplot(y=x, sr=sample_rate)
Audio(x, rate=sample_rate)
Out[15]:

5. Pitch

In [16]:
x = pitch(data, sample_rate)
plt.figure(figsize=(14,4))
librosa.display.waveplot(y=x, sr=sample_rate)
Audio(x, rate=sample_rate)
Out[16]:
  • From the augmentation techniques above, I am using noise injection, stretching (i.e. changing the speed), and pitch shifting.

Feature Extraction

  • Feature extraction is a very important part of analyzing audio and finding relations within it. The raw audio data cannot be understood by the model directly, so we need to convert it into an understandable format, which is what feature extraction is for.

In this project I am not going deep into the feature selection process to check which features work best for this dataset; instead I am extracting just five audio features to train the model (a quick check of their combined size follows the list):

  • Zero Crossing Rate
  • Chroma_stft
  • MFCC
  • RMS (root mean square) value
  • Mel spectrogram
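Before running the extraction, here is a hedged sanity check of where the 162 feature columns seen later come from, using one second of a synthetic tone and librosa's default settings (20 MFCCs, 12 chroma bins, 128 mel bands, plus one ZCR and one RMS value):

# 1 + 12 + 20 + 1 + 128 = 162 values per clip, matching the 162 feature
# columns (plus a labels column) in Features.head() further down.
demo_sr = 22050
demo = np.sin(2 * np.pi * 440 * np.arange(demo_sr) / demo_sr).astype(np.float32)
parts = {
    'zcr': librosa.feature.zero_crossing_rate(y=demo),
    'chroma_stft': librosa.feature.chroma_stft(y=demo, sr=demo_sr),
    'mfcc': librosa.feature.mfcc(y=demo, sr=demo_sr),
    'rms': librosa.feature.rms(y=demo),
    'mel': librosa.feature.melspectrogram(y=demo, sr=demo_sr),
}
print({name: feat.shape[0] for name, feat in parts.items()})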
In [17]:
def extract_features(data):
    # ZCR
    result = np.array([])
    zcr = np.mean(librosa.feature.zero_crossing_rate(y=data).T, axis=0)
    result=np.hstack((result, zcr)) # stacking horizontally

    # Chroma_stft
    stft = np.abs(librosa.stft(data))
    chroma_stft = np.mean(librosa.feature.chroma_stft(S=stft, sr=sample_rate).T, axis=0)
    result = np.hstack((result, chroma_stft)) # stacking horizontally

    # MFCC
    mfcc = np.mean(librosa.feature.mfcc(y=data, sr=sample_rate).T, axis=0)
    result = np.hstack((result, mfcc)) # stacking horizontally

    # Root Mean Square Value
    rms = np.mean(librosa.feature.rms(y=data).T, axis=0)
    result = np.hstack((result, rms)) # stacking horizontally

    # Mel spectrogram
    mel = np.mean(librosa.feature.melspectrogram(y=data, sr=sample_rate).T, axis=0)
    result = np.hstack((result, mel)) # stacking horizontally
    
    return result

def get_features(path):
    # duration and offset are used to skip the silence at the start and end of each audio file, as seen in the plots above.
    data, sample_rate = librosa.load(path, duration=2.5, offset=0.6)
    
    # without augmentation
    res1 = extract_features(data)
    result = np.array(res1)
    
    # data with noise
    noise_data = noise(data)
    res2 = extract_features(noise_data)
    result = np.vstack((result, res2)) # stacking vertically
    
    # data with stretching and pitching
    new_data = stretch(data)
    data_stretch_pitch = pitch(new_data, sample_rate)
    res3 = extract_features(data_stretch_pitch)
    result = np.vstack((result, res3)) # stacking vertically
    
    return result
In [18]:
# Extracting features and performing augmentations
X, Y = [], []
for path, emotion in zip(data_path.Path, data_path.Emotions):
    feature = get_features(path)
    for ele in feature:
        X.append(ele)
        Y.append(emotion)
In [19]:
len(X), len(Y), data_path.Path.shape
Out[19]:
(1260, 1260, (420,))
In [20]:
Features = pd.DataFrame(X)
Features['labels'] = Y
Features.to_csv('features.csv', index=False)
Features.head()
Out[20]:
0 1 2 3 4 5 6 7 8 9 ... 153 154 155 156 157 158 159 160 161 labels
0 0.023844 0.376802 0.544572 0.515236 0.393183 0.371867 0.482094 0.712659 0.718695 0.479259 ... 4.097942e-06 9.281065e-07 2.970960e-07 2.264097e-07 2.703385e-07 3.726052e-07 5.253090e-07 7.502915e-07 8.361687e-07 fear
1 0.037516 0.477081 0.646359 0.631130 0.514678 0.485487 0.541560 0.742734 0.745133 0.556933 ... 1.199139e-02 1.294791e-02 1.197391e-02 1.192141e-02 1.171071e-02 1.117233e-02 1.146653e-02 1.185100e-02 1.183047e-02 fear
2 0.027655 0.288429 0.300280 0.476859 0.604356 0.408313 0.317014 0.379047 0.622897 0.778989 ... 5.480809e-06 2.006418e-06 9.367810e-07 7.464228e-07 7.515688e-07 8.350502e-07 1.116672e-06 1.503249e-06 1.655530e-06 fear
3 0.025920 0.468795 0.591989 0.610804 0.533011 0.498842 0.457363 0.418604 0.356388 0.357391 ... 3.380302e-07 2.274208e-07 1.679705e-07 1.774402e-07 2.407547e-07 3.790489e-07 6.543595e-07 9.678180e-07 1.121691e-06 disgust
4 0.026489 0.480656 0.601077 0.624142 0.550096 0.518418 0.474122 0.427073 0.367379 0.367936 ... 1.245124e-04 1.291739e-04 1.313194e-04 1.254868e-04 1.205003e-04 1.303233e-04 1.353458e-04 1.278713e-04 1.316384e-04 disgust

5 rows × 163 columns

Data Preparation

  • Now that we have extracted the features, we need to normalize the data and split it into training and testing sets.
In [21]:
X = Features.iloc[: ,:-1].values
Y = Features['labels'].values
In [22]:
# As this is a multiclass classification problem, one-hot encode the labels Y.
encoder = OneHotEncoder()
Y = encoder.fit_transform(np.array(Y).reshape(-1,1)).toarray()
In [23]:
# splitting data
x_train, x_test, y_train, y_test = train_test_split(X, Y, random_state=0, shuffle=True)
x_train.shape, y_train.shape, x_test.shape, y_test.shape
Out[23]:
((945, 162), (945, 7), (315, 162), (315, 7))
In [24]:
# scaling our data with sklearn's Standard scaler
scaler = StandardScaler()
x_train = scaler.fit_transform(x_train)
x_test = scaler.transform(x_test)
x_train.shape, y_train.shape, x_test.shape, y_test.shape
Out[24]:
((945, 162), (945, 7), (315, 162), (315, 7))
In [25]:
# adding a channel dimension so the data is compatible with the Conv1D model.
x_train = np.expand_dims(x_train, axis=2)
x_test = np.expand_dims(x_test, axis=2)
x_train.shape, y_train.shape, x_test.shape, y_test.shape
Out[25]:
((945, 162, 1), (945, 7), (315, 162, 1), (315, 7))

Model Architecture

In [26]:
# building the model:
model = Sequential()
model.add(Conv1D(256, 8, padding='same',activation = 'relu',input_shape=(x_train.shape[1],1)))  
model.add(Conv1D(256, 8, padding='same', activation='relu'))
model.add(BatchNormalization())
model.add(Dropout(0.4))
model.add(MaxPooling1D(pool_size=(8)))
model.add(Conv1D(128, 8, padding='same', activation='relu'))
model.add(Conv1D(128, 8, padding='same', activation='relu'))
model.add(Dropout(0.4))
model.add(Conv1D(128, 8, padding='same', activation='relu'))
model.add(Conv1D(128, 8, padding='same', activation='relu'))
model.add(BatchNormalization())
model.add(Dropout(0.4))
model.add(MaxPooling1D(pool_size=(8)))
model.add(Conv1D(64, 8, padding='same', activation='relu'))
model.add(Conv1D(64, 8, padding='same', activation='relu'))
model.add(Flatten())
model.add(Dense(7, activation='softmax')) 
opt = keras.optimizers.Adam(learning_rate=0.001)
model.compile(optimizer= opt ,loss='categorical_crossentropy',metrics=['acc'])

model.summary()
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv1d (Conv1D)              (None, 162, 256)          2304      
_________________________________________________________________
conv1d_1 (Conv1D)            (None, 162, 256)          524544    
_________________________________________________________________
batch_normalization (BatchNo (None, 162, 256)          1024      
_________________________________________________________________
dropout (Dropout)            (None, 162, 256)          0         
_________________________________________________________________
max_pooling1d (MaxPooling1D) (None, 20, 256)           0         
_________________________________________________________________
conv1d_2 (Conv1D)            (None, 20, 128)           262272    
_________________________________________________________________
conv1d_3 (Conv1D)            (None, 20, 128)           131200    
_________________________________________________________________
dropout_1 (Dropout)          (None, 20, 128)           0         
_________________________________________________________________
conv1d_4 (Conv1D)            (None, 20, 128)           131200    
_________________________________________________________________
conv1d_5 (Conv1D)            (None, 20, 128)           131200    
_________________________________________________________________
batch_normalization_1 (Batch (None, 20, 128)           512       
_________________________________________________________________
dropout_2 (Dropout)          (None, 20, 128)           0         
_________________________________________________________________
max_pooling1d_1 (MaxPooling1 (None, 2, 128)            0         
_________________________________________________________________
conv1d_6 (Conv1D)            (None, 2, 64)             65600     
_________________________________________________________________
conv1d_7 (Conv1D)            (None, 2, 64)             32832     
_________________________________________________________________
flatten (Flatten)            (None, 128)               0         
_________________________________________________________________
dense (Dense)                (None, 7)                 903       
=================================================================
Total params: 1,283,591
Trainable params: 1,282,823
Non-trainable params: 768
_________________________________________________________________

Creating a Checkpoint and Training the Model

In [27]:
checkpointer = ModelCheckpoint('speech_emotion.h5', monitor='val_acc', mode='max', verbose=2, save_best_only=True)
history=model.fit(x_train, y_train, batch_size=64, epochs=100, validation_data=(x_test, y_test), callbacks=[checkpointer])
Epoch 1/100
15/15 [==============================] - ETA: 0s - loss: 1.9121 - acc: 0.1651
Epoch 00001: val_acc improved from -inf to 0.17460, saving model to speech_emotion.h5
15/15 [==============================] - 8s 507ms/step - loss: 1.9121 - acc: 0.1651 - val_loss: 1.8846 - val_acc: 0.1746
Epoch 2/100
15/15 [==============================] - ETA: 0s - loss: 1.8414 - acc: 0.2074
Epoch 00002: val_acc improved from 0.17460 to 0.23492, saving model to speech_emotion.h5
15/15 [==============================] - 7s 461ms/step - loss: 1.8414 - acc: 0.2074 - val_loss: 1.8850 - val_acc: 0.2349
Epoch 3/100
15/15 [==============================] - ETA: 0s - loss: 1.7774 - acc: 0.2497
Epoch 00003: val_acc did not improve from 0.23492
15/15 [==============================] - 7s 434ms/step - loss: 1.7774 - acc: 0.2497 - val_loss: 1.8935 - val_acc: 0.2032
Epoch 4/100
15/15 [==============================] - ETA: 0s - loss: 1.6597 - acc: 0.2815
Epoch 00004: val_acc did not improve from 0.23492
15/15 [==============================] - 7s 436ms/step - loss: 1.6597 - acc: 0.2815 - val_loss: 1.9472 - val_acc: 0.1714
Epoch 5/100
15/15 [==============================] - ETA: 0s - loss: 1.5704 - acc: 0.3111
Epoch 00005: val_acc did not improve from 0.23492
15/15 [==============================] - 7s 436ms/step - loss: 1.5704 - acc: 0.3111 - val_loss: 2.0164 - val_acc: 0.1746
Epoch 6/100
15/15 [==============================] - ETA: 0s - loss: 1.5018 - acc: 0.3640
Epoch 00006: val_acc did not improve from 0.23492
15/15 [==============================] - 7s 435ms/step - loss: 1.5018 - acc: 0.3640 - val_loss: 2.1091 - val_acc: 0.1810
Epoch 7/100
15/15 [==============================] - ETA: 0s - loss: 1.4256 - acc: 0.3651
Epoch 00007: val_acc did not improve from 0.23492
15/15 [==============================] - 7s 434ms/step - loss: 1.4256 - acc: 0.3651 - val_loss: 2.2415 - val_acc: 0.1587
Epoch 8/100
15/15 [==============================] - ETA: 0s - loss: 1.3699 - acc: 0.3937
Epoch 00008: val_acc did not improve from 0.23492
15/15 [==============================] - 7s 434ms/step - loss: 1.3699 - acc: 0.3937 - val_loss: 2.4747 - val_acc: 0.1619
Epoch 9/100
15/15 [==============================] - ETA: 0s - loss: 1.3131 - acc: 0.4402
Epoch 00009: val_acc did not improve from 0.23492
15/15 [==============================] - 6s 433ms/step - loss: 1.3131 - acc: 0.4402 - val_loss: 2.3424 - val_acc: 0.1683
Epoch 10/100
15/15 [==============================] - ETA: 0s - loss: 1.2421 - acc: 0.4646
Epoch 00010: val_acc did not improve from 0.23492
15/15 [==============================] - 6s 432ms/step - loss: 1.2421 - acc: 0.4646 - val_loss: 2.4025 - val_acc: 0.1714
Epoch 11/100
15/15 [==============================] - ETA: 0s - loss: 1.1481 - acc: 0.5291
Epoch 00011: val_acc did not improve from 0.23492
15/15 [==============================] - 7s 434ms/step - loss: 1.1481 - acc: 0.5291 - val_loss: 2.2960 - val_acc: 0.1778
Epoch 12/100
15/15 [==============================] - ETA: 0s - loss: 1.1002 - acc: 0.5132
Epoch 00012: val_acc did not improve from 0.23492
15/15 [==============================] - 7s 434ms/step - loss: 1.1002 - acc: 0.5132 - val_loss: 2.3074 - val_acc: 0.1651
Epoch 13/100
15/15 [==============================] - ETA: 0s - loss: 1.0516 - acc: 0.5492
Epoch 00013: val_acc did not improve from 0.23492
15/15 [==============================] - 6s 433ms/step - loss: 1.0516 - acc: 0.5492 - val_loss: 2.0155 - val_acc: 0.2222
Epoch 14/100
15/15 [==============================] - ETA: 0s - loss: 1.0005 - acc: 0.5661
Epoch 00014: val_acc did not improve from 0.23492
15/15 [==============================] - 6s 433ms/step - loss: 1.0005 - acc: 0.5661 - val_loss: 1.9065 - val_acc: 0.2317
Epoch 15/100
15/15 [==============================] - ETA: 0s - loss: 0.8992 - acc: 0.6370
Epoch 00015: val_acc did not improve from 0.23492
15/15 [==============================] - 6s 431ms/step - loss: 0.8992 - acc: 0.6370 - val_loss: 1.9427 - val_acc: 0.2349
Epoch 16/100
15/15 [==============================] - ETA: 0s - loss: 0.8721 - acc: 0.6444
Epoch 00016: val_acc improved from 0.23492 to 0.24444, saving model to speech_emotion.h5
15/15 [==============================] - 7s 440ms/step - loss: 0.8721 - acc: 0.6444 - val_loss: 1.8916 - val_acc: 0.2444
Epoch 17/100
15/15 [==============================] - ETA: 0s - loss: 0.8656 - acc: 0.6466
Epoch 00017: val_acc improved from 0.24444 to 0.27619, saving model to speech_emotion.h5
15/15 [==============================] - 7s 440ms/step - loss: 0.8656 - acc: 0.6466 - val_loss: 2.0221 - val_acc: 0.2762
Epoch 18/100
15/15 [==============================] - ETA: 0s - loss: 0.8151 - acc: 0.6688
Epoch 00018: val_acc improved from 0.27619 to 0.32381, saving model to speech_emotion.h5
15/15 [==============================] - 7s 446ms/step - loss: 0.8151 - acc: 0.6688 - val_loss: 1.8714 - val_acc: 0.3238
Epoch 19/100
15/15 [==============================] - ETA: 0s - loss: 0.7084 - acc: 0.7323
Epoch 00019: val_acc improved from 0.32381 to 0.34286, saving model to speech_emotion.h5
15/15 [==============================] - 7s 440ms/step - loss: 0.7084 - acc: 0.7323 - val_loss: 2.0141 - val_acc: 0.3429
Epoch 20/100
15/15 [==============================] - ETA: 0s - loss: 0.6611 - acc: 0.7259
Epoch 00020: val_acc did not improve from 0.34286
15/15 [==============================] - 7s 434ms/step - loss: 0.6611 - acc: 0.7259 - val_loss: 2.0259 - val_acc: 0.3048
Epoch 21/100
15/15 [==============================] - ETA: 0s - loss: 0.6844 - acc: 0.7291
Epoch 00021: val_acc did not improve from 0.34286
15/15 [==============================] - 7s 434ms/step - loss: 0.6844 - acc: 0.7291 - val_loss: 1.9659 - val_acc: 0.3175
Epoch 22/100
15/15 [==============================] - ETA: 0s - loss: 0.6233 - acc: 0.7418
Epoch 00022: val_acc did not improve from 0.34286
15/15 [==============================] - 6s 433ms/step - loss: 0.6233 - acc: 0.7418 - val_loss: 1.6875 - val_acc: 0.3302
Epoch 23/100
15/15 [==============================] - ETA: 0s - loss: 0.5378 - acc: 0.7810
Epoch 00023: val_acc improved from 0.34286 to 0.38095, saving model to speech_emotion.h5
15/15 [==============================] - 7s 438ms/step - loss: 0.5378 - acc: 0.7810 - val_loss: 1.6347 - val_acc: 0.3810
Epoch 24/100
15/15 [==============================] - ETA: 0s - loss: 0.5319 - acc: 0.7968
Epoch 00024: val_acc did not improve from 0.38095
15/15 [==============================] - 6s 433ms/step - loss: 0.5319 - acc: 0.7968 - val_loss: 1.6358 - val_acc: 0.3619
Epoch 25/100
15/15 [==============================] - ETA: 0s - loss: 0.5254 - acc: 0.7968
Epoch 00025: val_acc improved from 0.38095 to 0.43175, saving model to speech_emotion.h5
15/15 [==============================] - 7s 440ms/step - loss: 0.5254 - acc: 0.7968 - val_loss: 1.5051 - val_acc: 0.4317
Epoch 26/100
15/15 [==============================] - ETA: 0s - loss: 0.4699 - acc: 0.8265
Epoch 00026: val_acc improved from 0.43175 to 0.50794, saving model to speech_emotion.h5
15/15 [==============================] - 7s 438ms/step - loss: 0.4699 - acc: 0.8265 - val_loss: 1.2330 - val_acc: 0.5079
Epoch 27/100
15/15 [==============================] - ETA: 0s - loss: 0.3984 - acc: 0.8466
Epoch 00027: val_acc did not improve from 0.50794
15/15 [==============================] - 7s 435ms/step - loss: 0.3984 - acc: 0.8466 - val_loss: 1.2430 - val_acc: 0.5048
Epoch 28/100
15/15 [==============================] - ETA: 0s - loss: 0.4189 - acc: 0.8413
Epoch 00028: val_acc improved from 0.50794 to 0.53333, saving model to speech_emotion.h5
15/15 [==============================] - 7s 437ms/step - loss: 0.4189 - acc: 0.8413 - val_loss: 1.2516 - val_acc: 0.5333
Epoch 29/100
15/15 [==============================] - ETA: 0s - loss: 0.4401 - acc: 0.8349
Epoch 00029: val_acc did not improve from 0.53333
15/15 [==============================] - 7s 434ms/step - loss: 0.4401 - acc: 0.8349 - val_loss: 1.4101 - val_acc: 0.4857
Epoch 30/100
15/15 [==============================] - ETA: 0s - loss: 0.4522 - acc: 0.8233
Epoch 00030: val_acc improved from 0.53333 to 0.59683, saving model to speech_emotion.h5
15/15 [==============================] - 7s 439ms/step - loss: 0.4522 - acc: 0.8233 - val_loss: 1.0566 - val_acc: 0.5968
Epoch 31/100
15/15 [==============================] - ETA: 0s - loss: 0.4081 - acc: 0.8571
Epoch 00031: val_acc improved from 0.59683 to 0.60635, saving model to speech_emotion.h5
15/15 [==============================] - 7s 439ms/step - loss: 0.4081 - acc: 0.8571 - val_loss: 1.0815 - val_acc: 0.6063
Epoch 32/100
15/15 [==============================] - ETA: 0s - loss: 0.3961 - acc: 0.8455
Epoch 00032: val_acc improved from 0.60635 to 0.61587, saving model to speech_emotion.h5
15/15 [==============================] - 7s 439ms/step - loss: 0.3961 - acc: 0.8455 - val_loss: 1.0028 - val_acc: 0.6159
Epoch 33/100
15/15 [==============================] - ETA: 0s - loss: 0.3941 - acc: 0.8487
Epoch 00033: val_acc improved from 0.61587 to 0.65714, saving model to speech_emotion.h5
15/15 [==============================] - 7s 439ms/step - loss: 0.3941 - acc: 0.8487 - val_loss: 0.9496 - val_acc: 0.6571
Epoch 34/100
15/15 [==============================] - ETA: 0s - loss: 0.3617 - acc: 0.8698
Epoch 00034: val_acc did not improve from 0.65714
15/15 [==============================] - 7s 434ms/step - loss: 0.3617 - acc: 0.8698 - val_loss: 0.9374 - val_acc: 0.6286
Epoch 35/100
15/15 [==============================] - ETA: 0s - loss: 0.2218 - acc: 0.9122
Epoch 00035: val_acc improved from 0.65714 to 0.66032, saving model to speech_emotion.h5
15/15 [==============================] - 7s 438ms/step - loss: 0.2218 - acc: 0.9122 - val_loss: 0.9874 - val_acc: 0.6603
Epoch 36/100
15/15 [==============================] - ETA: 0s - loss: 0.2900 - acc: 0.9016
Epoch 00036: val_acc did not improve from 0.66032
15/15 [==============================] - 7s 439ms/step - loss: 0.2900 - acc: 0.9016 - val_loss: 1.0561 - val_acc: 0.6254
Epoch 37/100
15/15 [==============================] - ETA: 0s - loss: 0.2807 - acc: 0.9005
Epoch 00037: val_acc improved from 0.66032 to 0.68889, saving model to speech_emotion.h5
15/15 [==============================] - 7s 439ms/step - loss: 0.2807 - acc: 0.9005 - val_loss: 0.9161 - val_acc: 0.6889
Epoch 38/100
15/15 [==============================] - ETA: 0s - loss: 0.2294 - acc: 0.9111
Epoch 00038: val_acc did not improve from 0.68889
15/15 [==============================] - 6s 432ms/step - loss: 0.2294 - acc: 0.9111 - val_loss: 1.0402 - val_acc: 0.6698
Epoch 39/100
15/15 [==============================] - ETA: 0s - loss: 0.2287 - acc: 0.9037
Epoch 00039: val_acc did not improve from 0.68889
15/15 [==============================] - 6s 433ms/step - loss: 0.2287 - acc: 0.9037 - val_loss: 0.9622 - val_acc: 0.6762
Epoch 40/100
15/15 [==============================] - ETA: 0s - loss: 0.2432 - acc: 0.9101
Epoch 00040: val_acc improved from 0.68889 to 0.72381, saving model to speech_emotion.h5
15/15 [==============================] - 7s 437ms/step - loss: 0.2432 - acc: 0.9101 - val_loss: 0.8855 - val_acc: 0.7238
Epoch 41/100
15/15 [==============================] - ETA: 0s - loss: 0.2192 - acc: 0.9259
Epoch 00041: val_acc did not improve from 0.72381
15/15 [==============================] - 6s 433ms/step - loss: 0.2192 - acc: 0.9259 - val_loss: 1.2175 - val_acc: 0.6571
Epoch 42/100
15/15 [==============================] - ETA: 0s - loss: 0.2019 - acc: 0.9228
Epoch 00042: val_acc did not improve from 0.72381
15/15 [==============================] - 6s 433ms/step - loss: 0.2019 - acc: 0.9228 - val_loss: 1.0919 - val_acc: 0.6317
Epoch 43/100
15/15 [==============================] - ETA: 0s - loss: 0.2195 - acc: 0.9217
Epoch 00043: val_acc did not improve from 0.72381
15/15 [==============================] - 7s 433ms/step - loss: 0.2195 - acc: 0.9217 - val_loss: 1.1018 - val_acc: 0.6635
Epoch 44/100
15/15 [==============================] - ETA: 0s - loss: 0.2229 - acc: 0.9206
Epoch 00044: val_acc did not improve from 0.72381
15/15 [==============================] - 6s 433ms/step - loss: 0.2229 - acc: 0.9206 - val_loss: 0.9882 - val_acc: 0.6857
Epoch 45/100
15/15 [==============================] - ETA: 0s - loss: 0.2151 - acc: 0.9175
Epoch 00045: val_acc did not improve from 0.72381
15/15 [==============================] - 6s 433ms/step - loss: 0.2151 - acc: 0.9175 - val_loss: 1.0946 - val_acc: 0.6508
Epoch 46/100
15/15 [==============================] - ETA: 0s - loss: 0.1891 - acc: 0.9249
Epoch 00046: val_acc did not improve from 0.72381
15/15 [==============================] - 6s 433ms/step - loss: 0.1891 - acc: 0.9249 - val_loss: 1.5446 - val_acc: 0.5619
Epoch 47/100
15/15 [==============================] - ETA: 0s - loss: 0.2002 - acc: 0.9323
Epoch 00047: val_acc did not improve from 0.72381
15/15 [==============================] - 7s 436ms/step - loss: 0.2002 - acc: 0.9323 - val_loss: 1.0510 - val_acc: 0.6921
Epoch 48/100
15/15 [==============================] - ETA: 0s - loss: 0.1530 - acc: 0.9503
Epoch 00048: val_acc did not improve from 0.72381
15/15 [==============================] - 7s 434ms/step - loss: 0.1530 - acc: 0.9503 - val_loss: 1.1195 - val_acc: 0.7238
Epoch 49/100
15/15 [==============================] - ETA: 0s - loss: 0.1229 - acc: 0.9566
Epoch 00049: val_acc did not improve from 0.72381
15/15 [==============================] - 7s 435ms/step - loss: 0.1229 - acc: 0.9566 - val_loss: 0.8553 - val_acc: 0.7206
Epoch 50/100
15/15 [==============================] - ETA: 0s - loss: 0.1522 - acc: 0.9556
Epoch 00050: val_acc did not improve from 0.72381
15/15 [==============================] - 7s 433ms/step - loss: 0.1522 - acc: 0.9556 - val_loss: 1.1763 - val_acc: 0.7048
Epoch 51/100
15/15 [==============================] - ETA: 0s - loss: 0.1263 - acc: 0.9534
Epoch 00051: val_acc did not improve from 0.72381
15/15 [==============================] - 7s 437ms/step - loss: 0.1263 - acc: 0.9534 - val_loss: 0.9988 - val_acc: 0.7111
Epoch 52/100
15/15 [==============================] - ETA: 0s - loss: 0.1115 - acc: 0.9577
Epoch 00052: val_acc did not improve from 0.72381
15/15 [==============================] - 7s 434ms/step - loss: 0.1115 - acc: 0.9577 - val_loss: 1.2235 - val_acc: 0.6762
Epoch 53/100
15/15 [==============================] - ETA: 0s - loss: 0.1285 - acc: 0.9598
Epoch 00053: val_acc did not improve from 0.72381
15/15 [==============================] - 7s 440ms/step - loss: 0.1285 - acc: 0.9598 - val_loss: 1.1789 - val_acc: 0.7238
Epoch 54/100
15/15 [==============================] - ETA: 0s - loss: 0.1193 - acc: 0.9608
Epoch 00054: val_acc did not improve from 0.72381
15/15 [==============================] - 6s 433ms/step - loss: 0.1193 - acc: 0.9608 - val_loss: 1.4135 - val_acc: 0.6730
Epoch 55/100
15/15 [==============================] - ETA: 0s - loss: 0.1270 - acc: 0.9598
Epoch 00055: val_acc did not improve from 0.72381
15/15 [==============================] - 7s 434ms/step - loss: 0.1270 - acc: 0.9598 - val_loss: 1.0638 - val_acc: 0.7079
Epoch 56/100
15/15 [==============================] - ETA: 0s - loss: 0.1023 - acc: 0.9672
Epoch 00056: val_acc did not improve from 0.72381
15/15 [==============================] - 7s 434ms/step - loss: 0.1023 - acc: 0.9672 - val_loss: 1.2893 - val_acc: 0.6984
Epoch 57/100
15/15 [==============================] - ETA: 0s - loss: 0.0868 - acc: 0.9704
Epoch 00057: val_acc did not improve from 0.72381
15/15 [==============================] - 7s 434ms/step - loss: 0.0868 - acc: 0.9704 - val_loss: 1.1409 - val_acc: 0.7175
Epoch 58/100
15/15 [==============================] - ETA: 0s - loss: 0.1181 - acc: 0.9587
Epoch 00058: val_acc did not improve from 0.72381
15/15 [==============================] - 7s 435ms/step - loss: 0.1181 - acc: 0.9587 - val_loss: 1.1485 - val_acc: 0.7111
Epoch 59/100
15/15 [==============================] - ETA: 0s - loss: 0.1334 - acc: 0.9524
Epoch 00059: val_acc did not improve from 0.72381
15/15 [==============================] - 6s 432ms/step - loss: 0.1334 - acc: 0.9524 - val_loss: 1.1942 - val_acc: 0.7143
Epoch 60/100
15/15 [==============================] - ETA: 0s - loss: 0.1239 - acc: 0.9640
Epoch 00060: val_acc did not improve from 0.72381
15/15 [==============================] - 7s 434ms/step - loss: 0.1239 - acc: 0.9640 - val_loss: 1.1114 - val_acc: 0.7175
Epoch 61/100
15/15 [==============================] - ETA: 0s - loss: 0.1295 - acc: 0.9598
Epoch 00061: val_acc did not improve from 0.72381
15/15 [==============================] - 6s 433ms/step - loss: 0.1295 - acc: 0.9598 - val_loss: 1.3954 - val_acc: 0.6762
Epoch 62/100
15/15 [==============================] - ETA: 0s - loss: 0.1251 - acc: 0.9608
Epoch 00062: val_acc did not improve from 0.72381
15/15 [==============================] - 6s 432ms/step - loss: 0.1251 - acc: 0.9608 - val_loss: 1.4514 - val_acc: 0.6857
Epoch 63/100
15/15 [==============================] - ETA: 0s - loss: 0.1149 - acc: 0.9619
Epoch 00063: val_acc did not improve from 0.72381
15/15 [==============================] - 7s 434ms/step - loss: 0.1149 - acc: 0.9619 - val_loss: 1.3677 - val_acc: 0.7048
Epoch 64/100
15/15 [==============================] - ETA: 0s - loss: 0.1204 - acc: 0.9524
Epoch 00064: val_acc did not improve from 0.72381
15/15 [==============================] - 6s 431ms/step - loss: 0.1204 - acc: 0.9524 - val_loss: 1.2145 - val_acc: 0.6762
Epoch 65/100
15/15 [==============================] - ETA: 0s - loss: 0.0964 - acc: 0.9651
Epoch 00065: val_acc did not improve from 0.72381
15/15 [==============================] - 6s 433ms/step - loss: 0.0964 - acc: 0.9651 - val_loss: 1.6868 - val_acc: 0.6222
Epoch 66/100
15/15 [==============================] - ETA: 0s - loss: 0.1112 - acc: 0.9577
Epoch 00066: val_acc did not improve from 0.72381
15/15 [==============================] - 6s 433ms/step - loss: 0.1112 - acc: 0.9577 - val_loss: 1.2562 - val_acc: 0.6698
Epoch 67/100
15/15 [==============================] - ETA: 0s - loss: 0.0737 - acc: 0.9725
Epoch 00067: val_acc did not improve from 0.72381
15/15 [==============================] - 7s 434ms/step - loss: 0.0737 - acc: 0.9725 - val_loss: 1.2060 - val_acc: 0.7238
Epoch 68/100
15/15 [==============================] - ETA: 0s - loss: 0.1000 - acc: 0.9683
Epoch 00068: val_acc did not improve from 0.72381
15/15 [==============================] - 6s 432ms/step - loss: 0.1000 - acc: 0.9683 - val_loss: 1.2031 - val_acc: 0.7143
Epoch 69/100
15/15 [==============================] - ETA: 0s - loss: 0.1213 - acc: 0.9577
Epoch 00069: val_acc did not improve from 0.72381
15/15 [==============================] - 6s 433ms/step - loss: 0.1213 - acc: 0.9577 - val_loss: 1.1675 - val_acc: 0.7175
Epoch 70/100
15/15 [==============================] - ETA: 0s - loss: 0.0664 - acc: 0.9820
Epoch 00070: val_acc did not improve from 0.72381
15/15 [==============================] - 7s 434ms/step - loss: 0.0664 - acc: 0.9820 - val_loss: 1.3688 - val_acc: 0.6825
Epoch 71/100
15/15 [==============================] - ETA: 0s - loss: 0.0502 - acc: 0.9841
Epoch 00071: val_acc did not improve from 0.72381
15/15 [==============================] - 7s 439ms/step - loss: 0.0502 - acc: 0.9841 - val_loss: 1.4669 - val_acc: 0.7048
Epoch 72/100
15/15 [==============================] - ETA: 0s - loss: 0.0558 - acc: 0.9788
Epoch 00072: val_acc did not improve from 0.72381
15/15 [==============================] - 6s 432ms/step - loss: 0.0558 - acc: 0.9788 - val_loss: 1.2644 - val_acc: 0.7016
Epoch 73/100
15/15 [==============================] - ETA: 0s - loss: 0.0833 - acc: 0.9767
Epoch 00073: val_acc did not improve from 0.72381
15/15 [==============================] - 6s 431ms/step - loss: 0.0833 - acc: 0.9767 - val_loss: 1.6786 - val_acc: 0.6635
Epoch 74/100
15/15 [==============================] - ETA: 0s - loss: 0.1017 - acc: 0.9619
Epoch 00074: val_acc did not improve from 0.72381
15/15 [==============================] - 6s 433ms/step - loss: 0.1017 - acc: 0.9619 - val_loss: 1.1855 - val_acc: 0.7111
Epoch 75/100
15/15 [==============================] - ETA: 0s - loss: 0.1274 - acc: 0.9524
Epoch 00075: val_acc did not improve from 0.72381
15/15 [==============================] - 7s 453ms/step - loss: 0.1274 - acc: 0.9524 - val_loss: 1.6144 - val_acc: 0.6571
Epoch 76/100
15/15 [==============================] - ETA: 0s - loss: 0.0966 - acc: 0.9651
Epoch 00076: val_acc did not improve from 0.72381
15/15 [==============================] - 7s 444ms/step - loss: 0.0966 - acc: 0.9651 - val_loss: 1.4843 - val_acc: 0.6603
Epoch 77/100
15/15 [==============================] - ETA: 0s - loss: 0.0688 - acc: 0.9810
Epoch 00077: val_acc did not improve from 0.72381
15/15 [==============================] - 6s 431ms/step - loss: 0.0688 - acc: 0.9810 - val_loss: 1.3566 - val_acc: 0.6762
Epoch 78/100
15/15 [==============================] - ETA: 0s - loss: 0.0585 - acc: 0.9831
Epoch 00078: val_acc did not improve from 0.72381
15/15 [==============================] - 6s 431ms/step - loss: 0.0585 - acc: 0.9831 - val_loss: 1.5050 - val_acc: 0.6857
Epoch 79/100
15/15 [==============================] - ETA: 0s - loss: 0.0600 - acc: 0.9810
Epoch 00079: val_acc did not improve from 0.72381
15/15 [==============================] - 6s 432ms/step - loss: 0.0600 - acc: 0.9810 - val_loss: 1.5703 - val_acc: 0.6794
Epoch 80/100
15/15 [==============================] - ETA: 0s - loss: 0.0595 - acc: 0.9831
Epoch 00080: val_acc did not improve from 0.72381
15/15 [==============================] - 6s 433ms/step - loss: 0.0595 - acc: 0.9831 - val_loss: 1.5127 - val_acc: 0.6508
Epoch 81/100
15/15 [==============================] - ETA: 0s - loss: 0.0838 - acc: 0.9735
Epoch 00081: val_acc did not improve from 0.72381
15/15 [==============================] - 6s 432ms/step - loss: 0.0838 - acc: 0.9735 - val_loss: 1.5687 - val_acc: 0.6762
Epoch 82/100
15/15 [==============================] - ETA: 0s - loss: 0.0544 - acc: 0.9831
Epoch 00082: val_acc did not improve from 0.72381
15/15 [==============================] - 6s 433ms/step - loss: 0.0544 - acc: 0.9831 - val_loss: 1.2932 - val_acc: 0.7143
Epoch 83/100
15/15 [==============================] - ETA: 0s - loss: 0.0396 - acc: 0.9905
Epoch 00083: val_acc did not improve from 0.72381
15/15 [==============================] - 6s 431ms/step - loss: 0.0396 - acc: 0.9905 - val_loss: 1.3005 - val_acc: 0.7111
Epoch 84/100
15/15 [==============================] - ETA: 0s - loss: 0.0641 - acc: 0.9788
Epoch 00084: val_acc improved from 0.72381 to 0.72698, saving model to speech_emotion.h5
15/15 [==============================] - 7s 439ms/step - loss: 0.0641 - acc: 0.9788 - val_loss: 1.0786 - val_acc: 0.7270
Epoch 85/100
15/15 [==============================] - ETA: 0s - loss: 0.0555 - acc: 0.9788
Epoch 00085: val_acc did not improve from 0.72698
15/15 [==============================] - 6s 433ms/step - loss: 0.0555 - acc: 0.9788 - val_loss: 1.3599 - val_acc: 0.7048
Epoch 86/100
15/15 [==============================] - ETA: 0s - loss: 0.0378 - acc: 0.9852
Epoch 00086: val_acc improved from 0.72698 to 0.74286, saving model to speech_emotion.h5
15/15 [==============================] - 7s 439ms/step - loss: 0.0378 - acc: 0.9852 - val_loss: 1.3505 - val_acc: 0.7429
Epoch 87/100
15/15 [==============================] - ETA: 0s - loss: 0.0396 - acc: 0.9862
Epoch 00087: val_acc did not improve from 0.74286
15/15 [==============================] - 6s 432ms/step - loss: 0.0396 - acc: 0.9862 - val_loss: 1.2370 - val_acc: 0.7365
Epoch 88/100
15/15 [==============================] - ETA: 0s - loss: 0.0399 - acc: 0.9884
Epoch 00088: val_acc did not improve from 0.74286
15/15 [==============================] - 7s 437ms/step - loss: 0.0399 - acc: 0.9884 - val_loss: 1.5999 - val_acc: 0.7175
Epoch 89/100
15/15 [==============================] - ETA: 0s - loss: 0.0696 - acc: 0.9778
Epoch 00089: val_acc did not improve from 0.74286
15/15 [==============================] - 6s 431ms/step - loss: 0.0696 - acc: 0.9778 - val_loss: 1.5915 - val_acc: 0.7048
Epoch 90/100
15/15 [==============================] - ETA: 0s - loss: 0.0634 - acc: 0.9810
Epoch 00090: val_acc did not improve from 0.74286
15/15 [==============================] - 6s 432ms/step - loss: 0.0634 - acc: 0.9810 - val_loss: 1.4696 - val_acc: 0.7048
Epoch 91/100
15/15 [==============================] - ETA: 0s - loss: 0.0564 - acc: 0.9820
Epoch 00091: val_acc did not improve from 0.74286
15/15 [==============================] - 7s 434ms/step - loss: 0.0564 - acc: 0.9820 - val_loss: 1.5705 - val_acc: 0.7016
Epoch 92/100
15/15 [==============================] - ETA: 0s - loss: 0.0674 - acc: 0.9757
Epoch 00092: val_acc did not improve from 0.74286
15/15 [==============================] - 6s 432ms/step - loss: 0.0674 - acc: 0.9757 - val_loss: 1.3960 - val_acc: 0.6921
Epoch 93/100
15/15 [==============================] - ETA: 0s - loss: 0.0614 - acc: 0.9831
Epoch 00093: val_acc did not improve from 0.74286
15/15 [==============================] - 7s 435ms/step - loss: 0.0614 - acc: 0.9831 - val_loss: 1.4067 - val_acc: 0.6952
Epoch 94/100
15/15 [==============================] - ETA: 0s - loss: 0.0829 - acc: 0.9746
Epoch 00094: val_acc did not improve from 0.74286
15/15 [==============================] - 6s 433ms/step - loss: 0.0829 - acc: 0.9746 - val_loss: 1.4459 - val_acc: 0.7016
Epoch 95/100
15/15 [==============================] - ETA: 0s - loss: 0.0860 - acc: 0.9831
Epoch 00095: val_acc did not improve from 0.74286
15/15 [==============================] - 6s 433ms/step - loss: 0.0860 - acc: 0.9831 - val_loss: 1.3829 - val_acc: 0.7143
Epoch 96/100
15/15 [==============================] - ETA: 0s - loss: 0.0879 - acc: 0.9714
Epoch 00096: val_acc did not improve from 0.74286
15/15 [==============================] - 7s 434ms/step - loss: 0.0879 - acc: 0.9714 - val_loss: 1.6529 - val_acc: 0.6889
Epoch 97/100
15/15 [==============================] - ETA: 0s - loss: 0.0746 - acc: 0.9704
Epoch 00097: val_acc did not improve from 0.74286
15/15 [==============================] - 6s 432ms/step - loss: 0.0746 - acc: 0.9704 - val_loss: 1.0296 - val_acc: 0.7302
Epoch 98/100
15/15 [==============================] - ETA: 0s - loss: 0.0689 - acc: 0.9757
Epoch 00098: val_acc did not improve from 0.74286
15/15 [==============================] - 6s 432ms/step - loss: 0.0689 - acc: 0.9757 - val_loss: 1.4023 - val_acc: 0.6952
Epoch 99/100
15/15 [==============================] - ETA: 0s - loss: 0.0524 - acc: 0.9894
Epoch 00099: val_acc did not improve from 0.74286
15/15 [==============================] - 7s 434ms/step - loss: 0.0524 - acc: 0.9894 - val_loss: 1.6762 - val_acc: 0.6984
Epoch 100/100
15/15 [==============================] - ETA: 0s - loss: 0.0421 - acc: 0.9905
Epoch 00100: val_acc did not improve from 0.74286
15/15 [==============================] - 6s 432ms/step - loss: 0.0421 - acc: 0.9905 - val_loss: 1.4737 - val_acc: 0.7175

Loading the model which we trained just now

In [28]:
present_model = tf.keras.models.load_model('speech_emotion.h5')
present_model.summary()
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv1d (Conv1D)              (None, 162, 256)          2304      
_________________________________________________________________
conv1d_1 (Conv1D)            (None, 162, 256)          524544    
_________________________________________________________________
batch_normalization (BatchNo (None, 162, 256)          1024      
_________________________________________________________________
dropout (Dropout)            (None, 162, 256)          0         
_________________________________________________________________
max_pooling1d (MaxPooling1D) (None, 20, 256)           0         
_________________________________________________________________
conv1d_2 (Conv1D)            (None, 20, 128)           262272    
_________________________________________________________________
conv1d_3 (Conv1D)            (None, 20, 128)           131200    
_________________________________________________________________
dropout_1 (Dropout)          (None, 20, 128)           0         
_________________________________________________________________
conv1d_4 (Conv1D)            (None, 20, 128)           131200    
_________________________________________________________________
conv1d_5 (Conv1D)            (None, 20, 128)           131200    
_________________________________________________________________
batch_normalization_1 (Batch (None, 20, 128)           512       
_________________________________________________________________
dropout_2 (Dropout)          (None, 20, 128)           0         
_________________________________________________________________
max_pooling1d_1 (MaxPooling1 (None, 2, 128)            0         
_________________________________________________________________
conv1d_6 (Conv1D)            (None, 2, 64)             65600     
_________________________________________________________________
conv1d_7 (Conv1D)            (None, 2, 64)             32832     
_________________________________________________________________
flatten (Flatten)            (None, 128)               0         
_________________________________________________________________
dense (Dense)                (None, 7)                 903       
=================================================================
Total params: 1,283,591
Trainable params: 1,282,823
Non-trainable params: 768
_________________________________________________________________

Getting the accuracy of the model that we trained.

In [29]:
print("Accuracy of our model on test data : " , present_model.evaluate(x_test,y_test)[1]*100 , "%")
10/10 [==============================] - 0s 48ms/step - loss: 1.3505 - acc: 0.7429
Accuracy of our model on test data :  74.28571581840515 %
In [30]:
# plot the training and validation accuracy curves

plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.title('Model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train_acc','val_acc'], loc = 'upper right')
plt.show()
In [31]:
# plot the training and validation loss curves

plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('Model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train_loss','val_loss'], loc = 'upper right')
plt.show()

Assessing the Model's Performance

In [32]:
# predicting on test data.
pred_test = present_model.predict(x_test)
y_pred = encoder.inverse_transform(pred_test)

y_test_ = encoder.inverse_transform(y_test)
In [33]:
df = pd.DataFrame(columns=['Predicted Labels', 'Actual Labels'])
df['Predicted Labels'] = y_pred.flatten()
df['Actual Labels'] = y_test_.flatten()

df.head(10)
Out[33]:
Predicted Labels Actual Labels
0 disgust disgust
1 disgust sad
2 fear fear
3 happy happy
4 fear fear
5 angry angry
6 happy angry
7 sad sad
8 disgust disgust
9 disgust sad
In [34]:
print(classification_report(y_test_, y_pred))
              precision    recall  f1-score   support

       angry       0.86      0.59      0.70        51
     disgust       0.60      0.93      0.73        42
        fear       0.76      0.85      0.80        40
       happy       0.59      0.69      0.63        48
     neutral       0.84      0.76      0.80        49
         sad       0.91      0.73      0.81        44
    surprise       0.83      0.71      0.76        41

    accuracy                           0.74       315
   macro avg       0.77      0.75      0.75       315
weighted avg       0.77      0.74      0.74       315
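Since confusion_matrix and seaborn were imported at the top of the notebook, the per-class confusions can also be visualised. A minimal sketch, assuming the encoder and predictions from the cells above are still in memory:

# Confusion matrix heatmap for the predictions above; the class order is
# taken from the fitted OneHotEncoder.
labels = encoder.categories_[0]
cm = confusion_matrix(y_test_.flatten(), y_pred.flatten(), labels=labels)
plt.figure(figsize=(10, 8))
sns.heatmap(cm, annot=True, fmt='d', cmap='Blues', xticklabels=labels, yticklabels=labels)
plt.title('Confusion Matrix', size=16)
plt.xlabel('Predicted Labels', size=12)
plt.ylabel('Actual Labels', size=12)
plt.show()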

As we can see, the model attained roughly 74% test accuracy. Now we will load a pre-trained model that has been trained on a larger dataset and for a larger number of epochs. If you want, you can also skip this step.

In [35]:
!wget -N "https://cainvas-static.s3.amazonaws.com/media/user_data/cainvas-admin/speech_emotion_full.h5"
--2020-10-30 14:03:00--  https://cainvas-static.s3.amazonaws.com/media/user_data/cainvas-admin/speech_emotion_full.h5
Resolving cainvas-static.s3.amazonaws.com (cainvas-static.s3.amazonaws.com)... 52.219.62.84
Connecting to cainvas-static.s3.amazonaws.com (cainvas-static.s3.amazonaws.com)|52.219.62.84|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 15474968 (15M) [application/x-hdf]
Saving to: ‘speech_emotion_full.h5’

speech_emotion_full 100%[===================>]  14.76M  26.2MB/s    in 0.6s    

2020-10-30 14:03:01 (26.2 MB/s) - ‘speech_emotion_full.h5’ saved [15474968/15474968]

In [36]:
pre_trained_model = tf.keras.models.load_model('speech_emotion_full.h5')
pre_trained_model.summary()
Model: "sequential_3"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv1d_17 (Conv1D)           (None, 162, 256)          2304      
_________________________________________________________________
conv1d_18 (Conv1D)           (None, 162, 256)          524544    
_________________________________________________________________
batch_normalization_5 (Batch (None, 162, 256)          1024      
_________________________________________________________________
dropout_7 (Dropout)          (None, 162, 256)          0         
_________________________________________________________________
max_pooling1d_5 (MaxPooling1 (None, 20, 256)           0         
_________________________________________________________________
conv1d_19 (Conv1D)           (None, 20, 128)           262272    
_________________________________________________________________
conv1d_20 (Conv1D)           (None, 20, 128)           131200    
_________________________________________________________________
dropout_8 (Dropout)          (None, 20, 128)           0         
_________________________________________________________________
conv1d_21 (Conv1D)           (None, 20, 128)           131200    
_________________________________________________________________
conv1d_22 (Conv1D)           (None, 20, 128)           131200    
_________________________________________________________________
batch_normalization_6 (Batch (None, 20, 128)           512       
_________________________________________________________________
dropout_9 (Dropout)          (None, 20, 128)           0         
_________________________________________________________________
max_pooling1d_6 (MaxPooling1 (None, 2, 128)            0         
_________________________________________________________________
conv1d_23 (Conv1D)           (None, 2, 64)             65600     
_________________________________________________________________
conv1d_24 (Conv1D)           (None, 2, 64)             32832     
_________________________________________________________________
flatten_3 (Flatten)          (None, 128)               0         
_________________________________________________________________
dense_3 (Dense)              (None, 7)                 903       
=================================================================
Total params: 1,283,591
Trainable params: 1,282,823
Non-trainable params: 768
_________________________________________________________________

As we can see below, the pre-trained model attains about 90% accuracy on the same test data.

In [37]:
print("Accuracy of our model on test data : " , pre_trained_model.evaluate(x_test,y_test)[1]*100 , "%")
10/10 [==============================] - 0s 49ms/step - loss: 0.5528 - acc: 0.9016
Accuracy of our model on test data :  90.15873074531555 %
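Putting the pieces together, a single clip can be classified end to end. A minimal sketch, assuming the feature helpers, fitted scaler, and encoder from the cells above are still in memory, and simply reusing the first file in data_path as an example:

# Load one clip, build its 162-dimensional feature vector, scale it with the
# fitted StandardScaler, reshape for the Conv1D input, and decode the prediction.
sample_path = data_path.Path[0]
audio, _ = librosa.load(sample_path, duration=2.5, offset=0.6)
features = extract_features(audio).reshape(1, -1)      # shape (1, 162)
features = scaler.transform(features)
features = np.expand_dims(features, axis=2)            # shape (1, 162, 1)
pred = pre_trained_model.predict(features)
print("Predicted:", encoder.inverse_transform(pred)[0][0])
print("Actual   :", data_path.Emotions[0])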

Compiling the model with DeepC

In [38]:
!deepCC speech_emotion_full.h5
reading [keras model] from 'speech_emotion_full.h5'
Saved 'speech_emotion_full.onnx'
reading onnx model from file  speech_emotion_full.onnx
Model info:
  ir_vesion :  4 
  doc       : 
WARN (ONNX): graph-node conv1d_17's attribute auto_pad has no meaningful data.
WARN (ONNX): graph-node conv1d_18's attribute auto_pad has no meaningful data.
WARN (ONNX): spatial is not a valid graph-node attribute.
             operator BatchNormalization will be added without this attribute.
WARN (ONNX): graph-node conv1d_19's attribute auto_pad has no meaningful data.
WARN (ONNX): graph-node conv1d_20's attribute auto_pad has no meaningful data.
WARN (ONNX): graph-node conv1d_21's attribute auto_pad has no meaningful data.
WARN (ONNX): graph-node conv1d_22's attribute auto_pad has no meaningful data.
WARN (ONNX): spatial is not a valid graph-node attribute.
             operator BatchNormalization will be added without this attribute.
WARN (ONNX): graph-node conv1d_23's attribute auto_pad has no meaningful data.
WARN (ONNX): graph-node conv1d_24's attribute auto_pad has no meaningful data.
WARN (ONNX): terminal (input/output) conv1d_17_input's shape is less than 1.
             changing it to 1.
WARN (ONNX): terminal (input/output) dense_3's shape is less than 1.
             changing it to 1.
WARN (GRAPH): found operator node with the same name (dense_3) as io node.
running DNNC graph sanity check ... passed.
Writing C++ file  speech_emotion_full_deepC/speech_emotion_full.cpp
INFO (ONNX): model files are ready in dir speech_emotion_full_deepC
g++ -std=c++11 -O3 -I. -I/opt/tljh/user/lib/python3.7/site-packages/deepC-0.13-py3.7-linux-x86_64.egg/deepC/include -isystem /opt/tljh/user/lib/python3.7/site-packages/deepC-0.13-py3.7-linux-x86_64.egg/deepC/packages/eigen-eigen-323c052e1731 speech_emotion_full_deepC/speech_emotion_full.cpp -o speech_emotion_full_deepC/speech_emotion_full.exe
Model executable  speech_emotion_full_deepC/speech_emotion_full.exe