
Detecting emotions using EEG brainwave data

Credit: AITS Cainvas Community

Photo by Lobster on Dribbble

This notebook uses EEG brainwave data to predict human emotions.

What is EEG?

Brain cells communicate with each other through electrical signals.

Electroencephalography (EEG) is an electrophysiological monitoring method that records the electrical activity of the brain. It is used in diagnosing and monitoring various disorders such as brain tumors, stroke, and sleep disorders.

Source - Wikipedia - Electroencephalography

In [1]:
import numpy as np
import pandas as pd
import tensorflow as tf
from tensorflow.keras import layers
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
import random
import matplotlib.pyplot as plt

The dataset

The data was collected from two people (1 male, 1 female) for 3 minutes per emotional state: positive, neutral, and negative. A Muse EEG headband recorded the TP9, AF7, AF8 and TP10 EEG placements via dry electrodes. Six minutes of resting neutral data was also recorded.

The stimuli used to evoke the emotions are listed below:

1. Marley and Me - Negative (Twentieth Century Fox) Death Scene
2. Up - Negative (Walt Disney Pictures) Opening Death Scene
3. My Girl - Negative (Imagine Entertainment) Funeral Scene
4. La La Land - Positive (Summit Entertainment) Opening musical number
5. Slow Life - Positive (BioQuest Studios) Nature timelapse
6. Funny Dogs - Positive (MashupZone) Funny dog clips

Dataset citation:

J. J. Bird, L. J. Manso, E. P. Ribeiro, A. Ekart, and D. R. Faria, “A study on mental state classification using EEG-based brain-machine interface,” in 9th International Conference on Intelligent Systems, IEEE, 2018.

J. J. Bird, A. Ekart, C. D. Buckingham, and D. R. Faria, “Mental emotional sentiment classification with an EEG-based brain-machine interface,” in The International Conference on Digital Image and Signal Processing (DISP’19), Springer, 2019.

In [2]:
eeg = pd.read_csv('https://cainvas-static.s3.amazonaws.com/media/user_data/cainvas-admin/emotions.csv')
eeg
Out[2]:
# mean_0_a mean_1_a mean_2_a mean_3_a mean_4_a mean_d_0_a mean_d_1_a mean_d_2_a mean_d_3_a mean_d_4_a ... fft_741_b fft_742_b fft_743_b fft_744_b fft_745_b fft_746_b fft_747_b fft_748_b fft_749_b label
0 4.620 30.3 -356.0 15.60 26.3 1.070 0.411 -15.700 2.060 3.15 ... 23.50 20.300 20.300 23.50 -215.0 280.00 -162.00 -162.00 280.00 NEGATIVE
1 28.800 33.1 32.0 25.80 22.8 6.550 1.680 2.880 3.830 -4.82 ... -23.30 -21.800 -21.800 -23.30 182.0 2.57 -31.60 -31.60 2.57 NEUTRAL
2 8.900 29.4 -416.0 16.70 23.7 79.900 3.360 90.200 89.900 2.03 ... 462.00 -233.000 -233.000 462.00 -267.0 281.00 -148.00 -148.00 281.00 POSITIVE
3 14.900 31.6 -143.0 19.80 24.3 -0.584 -0.284 8.820 2.300 -1.97 ... 299.00 -243.000 -243.000 299.00 132.0 -12.40 9.53 9.53 -12.40 POSITIVE
4 28.300 31.3 45.2 27.30 24.5 34.800 -5.790 3.060 41.400 5.52 ... 12.00 38.100 38.100 12.00 119.0 -17.60 23.90 23.90 -17.60 NEUTRAL
... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ...
2127 32.400 32.2 32.2 30.80 23.4 1.640 -2.030 0.647 -0.121 -1.10 ... -21.70 0.218 0.218 -21.70 95.2 -19.90 47.20 47.20 -19.90 NEUTRAL
2128 16.300 31.3 -284.0 14.30 23.9 4.200 1.090 4.460 4.720 6.63 ... 594.00 -324.000 -324.000 594.00 -35.5 142.00 -59.80 -59.80 142.00 POSITIVE
2129 -0.547 28.3 -259.0 15.80 26.7 9.080 6.900 12.700 2.030 4.64 ... 370.00 -160.000 -160.000 370.00 408.0 -169.00 -10.50 -10.50 -169.00 NEGATIVE
2130 16.800 19.9 -288.0 8.34 26.0 2.460 1.580 -16.000 1.690 4.74 ... 124.00 -27.600 -27.600 124.00 -656.0 552.00 -271.00 -271.00 552.00 NEGATIVE
2131 27.000 32.0 31.8 25.00 28.9 4.990 1.950 6.210 3.490 -3.51 ... 1.95 1.810 1.810 1.95 110.0 -6.71 22.80 22.80 -6.71 NEUTRAL

2132 rows × 2549 columns

In [3]:
# Number of samples in each class

eeg['label'].value_counts()
Out[3]:
NEUTRAL     716
NEGATIVE    708
POSITIVE    708
Name: label, dtype: int64

The three classes are almost equally represented, so the dataset is balanced.

In [4]:
# Checking the datatypes of the columns

eeg.dtypes
Out[4]:
# mean_0_a    float64
mean_1_a      float64
mean_2_a      float64
mean_3_a      float64
mean_4_a      float64
               ...   
fft_746_b     float64
fft_747_b     float64
fft_748_b     float64
fft_749_b     float64
label          object
Length: 2549, dtype: object
In [5]:
# defining the input and output columns to separate the dataset in the later cells.

input_columns = list(eeg.columns[:-1])
output_columns = ['NEGATIVE', 'NEUTRAL', 'POSITIVE']    # column names to be used after one-hot encoding

print("Number of input columns: ", len(input_columns))
#print("Input columns: ", ', '.join(input_columns))

print("Number of output columns: ", len(output_columns))
#print("Output columns: ", ', '.join(output_columns))
Number of input columns:  2548
Number of output columns:  3
In [6]:
# One hot encoding the labels

y = pd.get_dummies(eeg.label)
print(y)

# Adding the one hot encodings to the dataset
for x in output_columns:
    eeg[x] = y[x]
      NEGATIVE  NEUTRAL  POSITIVE
0            1        0         0
1            0        1         0
2            0        0         1
3            0        0         1
4            0        1         0
...        ...      ...       ...
2127         0        1         0
2128         0        0         1
2129         1        0         0
2130         1        0         0
2131         0        1         0

[2132 rows x 3 columns]
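The one-hot columns pair with the categorical cross-entropy loss used when compiling the model later. As an equivalent alternative (a sketch only, not what this notebook does), the labels could be kept as integers and paired with sparse categorical cross-entropy instead:

# Sketch only: integer-encoded labels instead of one-hot columns
label_to_int = {'NEGATIVE': 0, 'NEUTRAL': 1, 'POSITIVE': 2}
y_int = eeg['label'].map(label_to_int).values

# the model would then be compiled with
# model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])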
In [7]:
# Viewing the dataset again

eeg
Out[7]:
# mean_0_a mean_1_a mean_2_a mean_3_a mean_4_a mean_d_0_a mean_d_1_a mean_d_2_a mean_d_3_a mean_d_4_a ... fft_744_b fft_745_b fft_746_b fft_747_b fft_748_b fft_749_b label NEGATIVE NEUTRAL POSITIVE
0 4.620 30.3 -356.0 15.60 26.3 1.070 0.411 -15.700 2.060 3.15 ... 23.50 -215.0 280.00 -162.00 -162.00 280.00 NEGATIVE 1 0 0
1 28.800 33.1 32.0 25.80 22.8 6.550 1.680 2.880 3.830 -4.82 ... -23.30 182.0 2.57 -31.60 -31.60 2.57 NEUTRAL 0 1 0
2 8.900 29.4 -416.0 16.70 23.7 79.900 3.360 90.200 89.900 2.03 ... 462.00 -267.0 281.00 -148.00 -148.00 281.00 POSITIVE 0 0 1
3 14.900 31.6 -143.0 19.80 24.3 -0.584 -0.284 8.820 2.300 -1.97 ... 299.00 132.0 -12.40 9.53 9.53 -12.40 POSITIVE 0 0 1
4 28.300 31.3 45.2 27.30 24.5 34.800 -5.790 3.060 41.400 5.52 ... 12.00 119.0 -17.60 23.90 23.90 -17.60 NEUTRAL 0 1 0
... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ...
2127 32.400 32.2 32.2 30.80 23.4 1.640 -2.030 0.647 -0.121 -1.10 ... -21.70 95.2 -19.90 47.20 47.20 -19.90 NEUTRAL 0 1 0
2128 16.300 31.3 -284.0 14.30 23.9 4.200 1.090 4.460 4.720 6.63 ... 594.00 -35.5 142.00 -59.80 -59.80 142.00 POSITIVE 0 0 1
2129 -0.547 28.3 -259.0 15.80 26.7 9.080 6.900 12.700 2.030 4.64 ... 370.00 408.0 -169.00 -10.50 -10.50 -169.00 NEGATIVE 1 0 0
2130 16.800 19.9 -288.0 8.34 26.0 2.460 1.580 -16.000 1.690 4.74 ... 124.00 -656.0 552.00 -271.00 -271.00 552.00 NEGATIVE 1 0 0
2131 27.000 32.0 31.8 25.00 28.9 4.990 1.950 6.210 3.490 -3.51 ... 1.95 110.0 -6.71 22.80 22.80 -6.71 NEUTRAL 0 1 0

2132 rows × 2552 columns

In [8]:
# Splitting into train, val and test set -- 80-10-10 split

# First, an 80-20 split
train_df, val_test_df = train_test_split(eeg, test_size = 0.2)

# Then split the 20% into half
val_df, test_df = train_test_split(val_test_df, test_size = 0.5)

print("Number of samples in...")
print("Training set: ", len(train_df))
print("Validation set: ", len(val_df))
print("Testing set: ", len(test_df))
Number of samples in...
Training set:  1705
Validation set:  213
Testing set:  214
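The split above is purely random, so the class proportions in the three subsets can drift slightly and the counts change on every run. A minimal sketch of a reproducible, class-stratified split using the same train_test_split API (not used in this notebook; the random_state value is arbitrary):

# Sketch only: stratified, reproducible 80-10-10 split
train_df_s, val_test_df_s = train_test_split(eeg, test_size = 0.2, stratify = eeg['label'], random_state = 42)
val_df_s, test_df_s = train_test_split(val_test_df_s, test_size = 0.5, stratify = val_test_df_s['label'], random_state = 42)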
In [9]:
# Splitting into X (input) and y (output)

Xtrain, ytrain = np.array(train_df[input_columns]), np.array(train_df[output_columns])

Xval, yval = np.array(val_df[input_columns]), np.array(val_df[output_columns])

Xtest, ytest = np.array(test_df[input_columns]), np.array(test_df[output_columns])
In [10]:
print("Range of values in X")

min(Xtrain[0]), max(Xtrain[0])
Range of values in X
Out[10]:
(-60200000.0, 544000000000000.0)
In [11]:
# Each feature has a different range. 
# Using min_max_scaler to scale them to values in the range [0,1].

min_max_scaler = MinMaxScaler()

# Fit on training set alone
Xtrain = min_max_scaler.fit_transform(Xtrain)

# Use it to transform val and test input
Xval = min_max_scaler.transform(Xval)
Xtest = min_max_scaler.transform(Xtest)
In [12]:
print("Range of values in X")

min(Xtrain[0]), max(Xtrain[0])
Range of values in X
Out[12]:
(0.0, 1.0)
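For each feature, MinMaxScaler computes x_scaled = (x - min) / (max - min) with the minimum and maximum taken from the training set only, which is why it is fit on Xtrain alone. A small sketch of the equivalent manual computation on a hypothetical toy column:

# Sketch only: what MinMaxScaler does to a single feature column
col = np.array([2.0, 5.0, 11.0])                         # hypothetical feature values
col_scaled = (col - col.min()) / (col.max() - col.min())
print(col_scaled)                                        # [0.         0.33333333 1.        ]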

Model

In [13]:
model = tf.keras.Sequential([
    layers.Dense(512, activation = 'relu', input_shape = Xtrain[0].shape),
    layers.Dense(256, activation = 'relu'),
    layers.Dense(128, activation = 'relu'),
    layers.Dense(3, activation = 'softmax')
])

model.compile(optimizer='adam', loss=tf.losses.CategoricalCrossentropy(), metrics=['accuracy'])
model.summary()
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense (Dense)                (None, 512)               1305088   
_________________________________________________________________
dense_1 (Dense)              (None, 256)               131328    
_________________________________________________________________
dense_2 (Dense)              (None, 128)               32896     
_________________________________________________________________
dense_3 (Dense)              (None, 3)                 387       
=================================================================
Total params: 1,469,699
Trainable params: 1,469,699
Non-trainable params: 0
_________________________________________________________________
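The parameter counts in the summary follow directly from the Dense layer shapes: each layer has inputs × units weights plus units biases, so the first layer contributes 2548 × 512 + 512 = 1,305,088 parameters. A quick sanity check:

# Verifying the Dense layer parameter counts: inputs * units (weights) + units (biases)
layer_dims = [(2548, 512), (512, 256), (256, 128), (128, 3)]
params = [i * u + u for i, u in layer_dims]
print(params, sum(params))    # [1305088, 131328, 32896, 387] 1469699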
In [14]:
history = model.fit(Xtrain, ytrain, validation_data = (Xval, yval), epochs=128)
Epoch 1/128
54/54 [==============================] - 0s 4ms/step - loss: 0.8533 - accuracy: 0.5871 - val_loss: 0.6454 - val_accuracy: 0.6901
Epoch 2/128
54/54 [==============================] - 0s 2ms/step - loss: 0.5063 - accuracy: 0.8029 - val_loss: 0.4437 - val_accuracy: 0.8122
Epoch 3/128
54/54 [==============================] - 0s 2ms/step - loss: 0.4669 - accuracy: 0.8041 - val_loss: 0.5076 - val_accuracy: 0.7230
Epoch 4/128
54/54 [==============================] - 0s 2ms/step - loss: 0.3563 - accuracy: 0.8657 - val_loss: 0.3008 - val_accuracy: 0.9108
Epoch 5/128
54/54 [==============================] - 0s 2ms/step - loss: 0.3175 - accuracy: 0.8839 - val_loss: 0.3966 - val_accuracy: 0.8075
Epoch 6/128
54/54 [==============================] - 0s 2ms/step - loss: 0.3157 - accuracy: 0.8821 - val_loss: 0.3169 - val_accuracy: 0.8592
Epoch 7/128
54/54 [==============================] - 0s 2ms/step - loss: 0.3094 - accuracy: 0.8845 - val_loss: 0.2527 - val_accuracy: 0.9155
Epoch 8/128
54/54 [==============================] - 0s 2ms/step - loss: 0.2692 - accuracy: 0.9038 - val_loss: 0.4311 - val_accuracy: 0.8451
Epoch 9/128
54/54 [==============================] - 0s 2ms/step - loss: 0.3458 - accuracy: 0.8610 - val_loss: 0.3826 - val_accuracy: 0.8451
Epoch 10/128
54/54 [==============================] - 0s 2ms/step - loss: 0.2568 - accuracy: 0.9073 - val_loss: 0.2272 - val_accuracy: 0.9202
Epoch 11/128
54/54 [==============================] - 0s 2ms/step - loss: 0.2591 - accuracy: 0.9091 - val_loss: 0.2254 - val_accuracy: 0.9343
Epoch 12/128
54/54 [==============================] - 0s 2ms/step - loss: 0.2756 - accuracy: 0.8979 - val_loss: 0.3069 - val_accuracy: 0.8685
Epoch 13/128
54/54 [==============================] - 0s 2ms/step - loss: 0.2563 - accuracy: 0.9079 - val_loss: 0.2411 - val_accuracy: 0.9155
Epoch 14/128
54/54 [==============================] - 0s 2ms/step - loss: 0.2559 - accuracy: 0.9155 - val_loss: 0.2488 - val_accuracy: 0.9014
Epoch 15/128
54/54 [==============================] - 0s 2ms/step - loss: 0.2719 - accuracy: 0.8968 - val_loss: 0.2577 - val_accuracy: 0.8967
Epoch 16/128
54/54 [==============================] - 0s 2ms/step - loss: 0.3083 - accuracy: 0.8821 - val_loss: 0.2745 - val_accuracy: 0.8873
Epoch 17/128
54/54 [==============================] - 0s 2ms/step - loss: 0.2209 - accuracy: 0.9249 - val_loss: 0.2364 - val_accuracy: 0.8967
Epoch 18/128
54/54 [==============================] - 0s 2ms/step - loss: 0.2584 - accuracy: 0.8997 - val_loss: 0.2546 - val_accuracy: 0.8967
Epoch 19/128
54/54 [==============================] - 0s 2ms/step - loss: 0.2131 - accuracy: 0.9232 - val_loss: 0.2269 - val_accuracy: 0.9155
Epoch 20/128
54/54 [==============================] - 0s 2ms/step - loss: 0.2200 - accuracy: 0.9167 - val_loss: 0.2623 - val_accuracy: 0.8873
Epoch 21/128
54/54 [==============================] - 0s 2ms/step - loss: 0.2051 - accuracy: 0.9296 - val_loss: 0.2586 - val_accuracy: 0.8920
Epoch 22/128
54/54 [==============================] - 0s 2ms/step - loss: 0.2270 - accuracy: 0.9144 - val_loss: 0.3872 - val_accuracy: 0.8310
Epoch 23/128
54/54 [==============================] - 0s 2ms/step - loss: 0.2105 - accuracy: 0.9132 - val_loss: 0.2432 - val_accuracy: 0.9343
Epoch 24/128
54/54 [==============================] - 0s 2ms/step - loss: 0.1935 - accuracy: 0.9320 - val_loss: 0.1929 - val_accuracy: 0.9343
Epoch 25/128
54/54 [==============================] - 0s 2ms/step - loss: 0.1671 - accuracy: 0.9390 - val_loss: 0.1857 - val_accuracy: 0.9296
Epoch 26/128
54/54 [==============================] - 0s 2ms/step - loss: 0.2471 - accuracy: 0.9132 - val_loss: 0.2787 - val_accuracy: 0.8638
Epoch 27/128
54/54 [==============================] - 0s 2ms/step - loss: 0.2399 - accuracy: 0.9144 - val_loss: 0.2583 - val_accuracy: 0.9014
Epoch 28/128
54/54 [==============================] - 0s 2ms/step - loss: 0.2087 - accuracy: 0.9155 - val_loss: 0.2547 - val_accuracy: 0.9061
Epoch 29/128
54/54 [==============================] - 0s 2ms/step - loss: 0.1657 - accuracy: 0.9372 - val_loss: 0.2318 - val_accuracy: 0.9108
Epoch 30/128
54/54 [==============================] - 0s 2ms/step - loss: 0.1869 - accuracy: 0.9284 - val_loss: 0.5585 - val_accuracy: 0.7277
Epoch 31/128
54/54 [==============================] - 0s 2ms/step - loss: 0.1961 - accuracy: 0.9296 - val_loss: 0.2427 - val_accuracy: 0.9296
Epoch 32/128
54/54 [==============================] - 0s 2ms/step - loss: 0.2028 - accuracy: 0.9320 - val_loss: 0.2992 - val_accuracy: 0.9061
Epoch 33/128
54/54 [==============================] - 0s 2ms/step - loss: 0.1562 - accuracy: 0.9390 - val_loss: 0.2805 - val_accuracy: 0.8920
Epoch 34/128
54/54 [==============================] - 0s 2ms/step - loss: 0.1703 - accuracy: 0.9349 - val_loss: 0.3362 - val_accuracy: 0.8826
Epoch 35/128
54/54 [==============================] - 0s 2ms/step - loss: 0.1837 - accuracy: 0.9308 - val_loss: 0.2098 - val_accuracy: 0.9437
Epoch 36/128
54/54 [==============================] - 0s 2ms/step - loss: 0.1912 - accuracy: 0.9302 - val_loss: 0.2187 - val_accuracy: 0.9155
Epoch 37/128
54/54 [==============================] - 0s 2ms/step - loss: 0.1966 - accuracy: 0.9290 - val_loss: 0.2033 - val_accuracy: 0.9202
Epoch 38/128
54/54 [==============================] - 0s 2ms/step - loss: 0.1320 - accuracy: 0.9525 - val_loss: 0.1485 - val_accuracy: 0.9531
Epoch 39/128
54/54 [==============================] - 0s 2ms/step - loss: 0.1076 - accuracy: 0.9543 - val_loss: 0.1623 - val_accuracy: 0.9624
Epoch 40/128
54/54 [==============================] - 0s 2ms/step - loss: 0.1417 - accuracy: 0.9466 - val_loss: 0.3513 - val_accuracy: 0.8451
Epoch 41/128
54/54 [==============================] - 0s 2ms/step - loss: 0.2544 - accuracy: 0.8950 - val_loss: 0.1601 - val_accuracy: 0.9671
Epoch 42/128
54/54 [==============================] - 0s 2ms/step - loss: 0.1287 - accuracy: 0.9496 - val_loss: 0.2692 - val_accuracy: 0.9249
Epoch 43/128
54/54 [==============================] - 0s 2ms/step - loss: 0.1075 - accuracy: 0.9584 - val_loss: 0.2589 - val_accuracy: 0.8967
Epoch 44/128
54/54 [==============================] - 0s 2ms/step - loss: 0.1538 - accuracy: 0.9455 - val_loss: 0.2894 - val_accuracy: 0.8920
Epoch 45/128
54/54 [==============================] - 0s 2ms/step - loss: 0.1713 - accuracy: 0.9402 - val_loss: 0.3270 - val_accuracy: 0.8638
Epoch 46/128
54/54 [==============================] - 0s 2ms/step - loss: 0.1410 - accuracy: 0.9537 - val_loss: 0.1503 - val_accuracy: 0.9531
Epoch 47/128
54/54 [==============================] - 0s 2ms/step - loss: 0.1110 - accuracy: 0.9619 - val_loss: 0.4532 - val_accuracy: 0.8732
Epoch 48/128
54/54 [==============================] - 0s 2ms/step - loss: 0.1625 - accuracy: 0.9449 - val_loss: 0.2965 - val_accuracy: 0.9014
Epoch 49/128
54/54 [==============================] - 0s 2ms/step - loss: 0.1396 - accuracy: 0.9543 - val_loss: 0.4053 - val_accuracy: 0.8498
Epoch 50/128
54/54 [==============================] - 0s 2ms/step - loss: 0.2413 - accuracy: 0.8997 - val_loss: 0.1843 - val_accuracy: 0.9531
Epoch 51/128
54/54 [==============================] - 0s 2ms/step - loss: 0.0844 - accuracy: 0.9718 - val_loss: 0.1953 - val_accuracy: 0.9484
Epoch 52/128
54/54 [==============================] - 0s 2ms/step - loss: 0.0934 - accuracy: 0.9660 - val_loss: 0.2108 - val_accuracy: 0.9343
Epoch 53/128
54/54 [==============================] - 0s 2ms/step - loss: 0.1020 - accuracy: 0.9636 - val_loss: 0.3523 - val_accuracy: 0.9249
Epoch 54/128
54/54 [==============================] - 0s 2ms/step - loss: 0.1349 - accuracy: 0.9595 - val_loss: 0.1537 - val_accuracy: 0.9577
Epoch 55/128
54/54 [==============================] - 0s 2ms/step - loss: 0.0891 - accuracy: 0.9648 - val_loss: 0.1702 - val_accuracy: 0.9671
Epoch 56/128
54/54 [==============================] - 0s 2ms/step - loss: 0.0930 - accuracy: 0.9642 - val_loss: 0.2277 - val_accuracy: 0.9296
Epoch 57/128
54/54 [==============================] - 0s 2ms/step - loss: 0.0803 - accuracy: 0.9707 - val_loss: 0.2380 - val_accuracy: 0.9296
Epoch 58/128
54/54 [==============================] - 0s 2ms/step - loss: 0.1267 - accuracy: 0.9519 - val_loss: 0.2007 - val_accuracy: 0.9390
Epoch 59/128
54/54 [==============================] - 0s 2ms/step - loss: 0.0748 - accuracy: 0.9748 - val_loss: 0.2188 - val_accuracy: 0.9343
Epoch 60/128
54/54 [==============================] - 0s 2ms/step - loss: 0.0709 - accuracy: 0.9742 - val_loss: 0.2093 - val_accuracy: 0.9437
Epoch 61/128
54/54 [==============================] - 0s 2ms/step - loss: 0.1142 - accuracy: 0.9548 - val_loss: 0.1543 - val_accuracy: 0.9624
Epoch 62/128
54/54 [==============================] - 0s 2ms/step - loss: 0.0737 - accuracy: 0.9742 - val_loss: 0.2103 - val_accuracy: 0.9531
Epoch 63/128
54/54 [==============================] - 0s 2ms/step - loss: 0.0902 - accuracy: 0.9713 - val_loss: 0.4497 - val_accuracy: 0.9014
Epoch 64/128
54/54 [==============================] - 0s 2ms/step - loss: 0.1032 - accuracy: 0.9689 - val_loss: 0.1697 - val_accuracy: 0.9577
Epoch 65/128
54/54 [==============================] - 0s 2ms/step - loss: 0.1234 - accuracy: 0.9589 - val_loss: 0.4732 - val_accuracy: 0.8451
Epoch 66/128
54/54 [==============================] - 0s 2ms/step - loss: 0.2070 - accuracy: 0.9255 - val_loss: 0.1653 - val_accuracy: 0.9671
Epoch 67/128
54/54 [==============================] - 0s 2ms/step - loss: 0.0727 - accuracy: 0.9771 - val_loss: 0.1580 - val_accuracy: 0.9624
Epoch 68/128
54/54 [==============================] - 0s 2ms/step - loss: 0.0947 - accuracy: 0.9654 - val_loss: 0.2051 - val_accuracy: 0.9531
Epoch 69/128
54/54 [==============================] - 0s 2ms/step - loss: 0.0906 - accuracy: 0.9677 - val_loss: 0.2290 - val_accuracy: 0.9296
Epoch 70/128
54/54 [==============================] - 0s 2ms/step - loss: 0.1518 - accuracy: 0.9460 - val_loss: 0.1845 - val_accuracy: 0.9390
Epoch 71/128
54/54 [==============================] - 0s 2ms/step - loss: 0.0799 - accuracy: 0.9730 - val_loss: 0.2484 - val_accuracy: 0.9296
Epoch 72/128
54/54 [==============================] - 0s 2ms/step - loss: 0.0694 - accuracy: 0.9736 - val_loss: 0.1713 - val_accuracy: 0.9577
Epoch 73/128
54/54 [==============================] - 0s 2ms/step - loss: 0.0543 - accuracy: 0.9812 - val_loss: 0.2234 - val_accuracy: 0.9577
Epoch 74/128
54/54 [==============================] - 0s 2ms/step - loss: 0.0987 - accuracy: 0.9677 - val_loss: 0.1647 - val_accuracy: 0.9577
Epoch 75/128
54/54 [==============================] - 0s 2ms/step - loss: 0.0688 - accuracy: 0.9730 - val_loss: 0.1843 - val_accuracy: 0.9577
Epoch 76/128
54/54 [==============================] - 0s 2ms/step - loss: 0.0742 - accuracy: 0.9748 - val_loss: 0.2604 - val_accuracy: 0.9531
Epoch 77/128
54/54 [==============================] - 0s 2ms/step - loss: 0.1487 - accuracy: 0.9402 - val_loss: 0.2134 - val_accuracy: 0.9390
Epoch 78/128
54/54 [==============================] - 0s 2ms/step - loss: 0.0819 - accuracy: 0.9707 - val_loss: 0.1971 - val_accuracy: 0.9624
Epoch 79/128
54/54 [==============================] - 0s 2ms/step - loss: 0.1282 - accuracy: 0.9548 - val_loss: 0.1896 - val_accuracy: 0.9624
Epoch 80/128
54/54 [==============================] - 0s 2ms/step - loss: 0.0706 - accuracy: 0.9748 - val_loss: 0.1688 - val_accuracy: 0.9718
Epoch 81/128
54/54 [==============================] - 0s 2ms/step - loss: 0.0562 - accuracy: 0.9771 - val_loss: 0.4881 - val_accuracy: 0.8545
Epoch 82/128
54/54 [==============================] - 0s 2ms/step - loss: 0.1604 - accuracy: 0.9496 - val_loss: 0.3882 - val_accuracy: 0.8967
Epoch 83/128
54/54 [==============================] - 0s 2ms/step - loss: 0.1078 - accuracy: 0.9572 - val_loss: 0.1557 - val_accuracy: 0.9765
Epoch 84/128
54/54 [==============================] - 0s 2ms/step - loss: 0.0512 - accuracy: 0.9848 - val_loss: 0.2343 - val_accuracy: 0.9531
Epoch 85/128
54/54 [==============================] - 0s 2ms/step - loss: 0.0953 - accuracy: 0.9666 - val_loss: 0.2269 - val_accuracy: 0.9296
Epoch 86/128
54/54 [==============================] - 0s 2ms/step - loss: 0.0554 - accuracy: 0.9783 - val_loss: 0.5062 - val_accuracy: 0.8357
Epoch 87/128
54/54 [==============================] - 0s 2ms/step - loss: 0.1341 - accuracy: 0.9507 - val_loss: 0.2171 - val_accuracy: 0.9624
Epoch 88/128
54/54 [==============================] - 0s 2ms/step - loss: 0.0360 - accuracy: 0.9906 - val_loss: 0.2578 - val_accuracy: 0.9343
Epoch 89/128
54/54 [==============================] - 0s 2ms/step - loss: 0.1147 - accuracy: 0.9543 - val_loss: 0.2061 - val_accuracy: 0.9531
Epoch 90/128
54/54 [==============================] - 0s 2ms/step - loss: 0.1465 - accuracy: 0.9478 - val_loss: 0.2452 - val_accuracy: 0.9061
Epoch 91/128
54/54 [==============================] - 0s 2ms/step - loss: 0.0759 - accuracy: 0.9730 - val_loss: 0.1832 - val_accuracy: 0.9577
Epoch 92/128
54/54 [==============================] - 0s 2ms/step - loss: 0.0377 - accuracy: 0.9912 - val_loss: 0.1735 - val_accuracy: 0.9671
Epoch 93/128
54/54 [==============================] - 0s 2ms/step - loss: 0.0568 - accuracy: 0.9771 - val_loss: 0.3416 - val_accuracy: 0.9108
Epoch 94/128
54/54 [==============================] - 0s 2ms/step - loss: 0.0606 - accuracy: 0.9818 - val_loss: 0.1842 - val_accuracy: 0.9624
Epoch 95/128
54/54 [==============================] - 0s 2ms/step - loss: 0.0423 - accuracy: 0.9853 - val_loss: 0.2244 - val_accuracy: 0.9437
Epoch 96/128
54/54 [==============================] - 0s 2ms/step - loss: 0.0962 - accuracy: 0.9584 - val_loss: 0.2435 - val_accuracy: 0.9390
Epoch 97/128
54/54 [==============================] - 0s 2ms/step - loss: 0.0572 - accuracy: 0.9812 - val_loss: 0.2078 - val_accuracy: 0.9531
Epoch 98/128
54/54 [==============================] - 0s 2ms/step - loss: 0.0535 - accuracy: 0.9836 - val_loss: 0.4658 - val_accuracy: 0.8545
Epoch 99/128
54/54 [==============================] - 0s 2ms/step - loss: 0.0735 - accuracy: 0.9707 - val_loss: 0.1858 - val_accuracy: 0.9624
Epoch 100/128
54/54 [==============================] - 0s 2ms/step - loss: 0.0461 - accuracy: 0.9853 - val_loss: 0.3178 - val_accuracy: 0.9296
Epoch 101/128
54/54 [==============================] - 0s 2ms/step - loss: 0.1156 - accuracy: 0.9601 - val_loss: 0.2230 - val_accuracy: 0.9484
Epoch 102/128
54/54 [==============================] - 0s 2ms/step - loss: 0.0975 - accuracy: 0.9648 - val_loss: 0.3749 - val_accuracy: 0.9202
Epoch 103/128
54/54 [==============================] - 0s 2ms/step - loss: 0.0663 - accuracy: 0.9718 - val_loss: 0.2498 - val_accuracy: 0.9108
Epoch 104/128
54/54 [==============================] - 0s 2ms/step - loss: 0.0904 - accuracy: 0.9654 - val_loss: 0.4116 - val_accuracy: 0.8216
Epoch 105/128
54/54 [==============================] - 0s 2ms/step - loss: 0.1266 - accuracy: 0.9572 - val_loss: 0.1667 - val_accuracy: 0.9624
Epoch 106/128
54/54 [==============================] - 0s 2ms/step - loss: 0.0401 - accuracy: 0.9900 - val_loss: 0.2209 - val_accuracy: 0.9484
Epoch 107/128
54/54 [==============================] - 0s 2ms/step - loss: 0.0897 - accuracy: 0.9695 - val_loss: 0.1867 - val_accuracy: 0.9624
Epoch 108/128
54/54 [==============================] - 0s 2ms/step - loss: 0.0542 - accuracy: 0.9806 - val_loss: 0.1672 - val_accuracy: 0.9718
Epoch 109/128
54/54 [==============================] - 0s 2ms/step - loss: 0.0414 - accuracy: 0.9871 - val_loss: 0.1632 - val_accuracy: 0.9718
Epoch 110/128
54/54 [==============================] - 0s 2ms/step - loss: 0.0942 - accuracy: 0.9672 - val_loss: 0.2877 - val_accuracy: 0.9249
Epoch 111/128
54/54 [==============================] - 0s 2ms/step - loss: 0.0561 - accuracy: 0.9806 - val_loss: 0.1917 - val_accuracy: 0.9624
Epoch 112/128
54/54 [==============================] - 0s 2ms/step - loss: 0.0593 - accuracy: 0.9765 - val_loss: 0.2367 - val_accuracy: 0.9390
Epoch 113/128
54/54 [==============================] - 0s 2ms/step - loss: 0.0443 - accuracy: 0.9853 - val_loss: 0.2082 - val_accuracy: 0.9671
Epoch 114/128
54/54 [==============================] - 0s 2ms/step - loss: 0.0623 - accuracy: 0.9742 - val_loss: 0.2495 - val_accuracy: 0.9484
Epoch 115/128
54/54 [==============================] - 0s 2ms/step - loss: 0.0369 - accuracy: 0.9906 - val_loss: 0.1702 - val_accuracy: 0.9718
Epoch 116/128
54/54 [==============================] - 0s 2ms/step - loss: 0.1077 - accuracy: 0.9683 - val_loss: 0.2269 - val_accuracy: 0.9296
Epoch 117/128
54/54 [==============================] - 0s 2ms/step - loss: 0.1294 - accuracy: 0.9548 - val_loss: 0.2912 - val_accuracy: 0.9014
Epoch 118/128
54/54 [==============================] - 0s 2ms/step - loss: 0.0707 - accuracy: 0.9754 - val_loss: 0.1796 - val_accuracy: 0.9718
Epoch 119/128
54/54 [==============================] - 0s 2ms/step - loss: 0.0312 - accuracy: 0.9894 - val_loss: 0.2258 - val_accuracy: 0.9390
Epoch 120/128
54/54 [==============================] - 0s 2ms/step - loss: 0.0636 - accuracy: 0.9765 - val_loss: 0.2752 - val_accuracy: 0.9249
Epoch 121/128
54/54 [==============================] - 0s 2ms/step - loss: 0.0605 - accuracy: 0.9771 - val_loss: 0.2092 - val_accuracy: 0.9484
Epoch 122/128
54/54 [==============================] - 0s 2ms/step - loss: 0.0410 - accuracy: 0.9859 - val_loss: 0.1652 - val_accuracy: 0.9718
Epoch 123/128
54/54 [==============================] - 0s 2ms/step - loss: 0.0452 - accuracy: 0.9818 - val_loss: 0.1918 - val_accuracy: 0.9577
Epoch 124/128
54/54 [==============================] - 0s 2ms/step - loss: 0.1569 - accuracy: 0.9472 - val_loss: 0.1654 - val_accuracy: 0.9624
Epoch 125/128
54/54 [==============================] - 0s 2ms/step - loss: 0.0267 - accuracy: 0.9947 - val_loss: 0.1965 - val_accuracy: 0.9671
Epoch 126/128
54/54 [==============================] - 0s 2ms/step - loss: 0.0405 - accuracy: 0.9871 - val_loss: 0.2020 - val_accuracy: 0.9624
Epoch 127/128
54/54 [==============================] - 0s 2ms/step - loss: 0.0394 - accuracy: 0.9848 - val_loss: 0.4939 - val_accuracy: 0.8498
Epoch 128/128
54/54 [==============================] - 0s 2ms/step - loss: 0.1142 - accuracy: 0.9595 - val_loss: 0.1922 - val_accuracy: 0.9484
In [15]:
model.evaluate(Xtest, ytest)
7/7 [==============================] - 0s 1ms/step - loss: 0.3845 - accuracy: 0.9579
Out[15]:
[0.3845388889312744, 0.9579439163208008]
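Overall test accuracy does not show which emotions get confused with which. A short sketch (not part of the original notebook) that breaks the test predictions down per class with scikit-learn:

# Sketch only: per-class breakdown of the test predictions
from sklearn.metrics import classification_report, confusion_matrix

ypred = np.argmax(model.predict(Xtest), axis = 1)
ytrue = np.argmax(ytest, axis = 1)

print(confusion_matrix(ytrue, ypred))
print(classification_report(ytrue, ypred, target_names = output_columns))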
In [16]:
def plot(history, variable):
    # plot the recorded metric (e.g. "accuracy" or "loss") against the epoch number
    plt.plot(range(len(history[variable])), history[variable])
    plt.title(variable)
In [17]:
plot(history.history, "accuracy")
In [18]:
plot(history.history, "loss")

deepC

The deepCC command below reads the saved Keras model, converts it to ONNX, generates C++ from it, and compiles that into a standalone executable, as the log shows.

In [19]:
model.save('eeg_emotion.h5')

!deepCC eeg_emotion.h5
[INFO]
Reading [keras model] 'eeg_emotion.h5'
[SUCCESS]
Saved 'eeg_emotion_deepC/eeg_emotion.onnx'
[INFO]
Reading [onnx model] 'eeg_emotion_deepC/eeg_emotion.onnx'
[INFO]
Model info:
  ir_vesion : 4
  doc       : 
[WARNING]
[ONNX]: terminal (input/output) dense_input's shape is less than 1. Changing it to 1.
[WARNING]
[ONNX]: terminal (input/output) dense_3's shape is less than 1. Changing it to 1.
WARN (GRAPH): found operator node with the same name (dense_3) as io node.
[INFO]
Running DNNC graph sanity check ...
[SUCCESS]
Passed sanity check.
[INFO]
Writing C++ file 'eeg_emotion_deepC/eeg_emotion.cpp'
[INFO]
deepSea model files are ready in 'eeg_emotion_deepC/' 
[RUNNING COMMAND]
g++ -std=c++11 -O3 -fno-rtti -fno-exceptions -I. -I/opt/tljh/user/lib/python3.7/site-packages/deepC-0.13-py3.7-linux-x86_64.egg/deepC/include -isystem /opt/tljh/user/lib/python3.7/site-packages/deepC-0.13-py3.7-linux-x86_64.egg/deepC/packages/eigen-eigen-323c052e1731 "eeg_emotion_deepC/eeg_emotion.cpp" -D_AITS_MAIN -o "eeg_emotion_deepC/eeg_emotion.exe"
[RUNNING COMMAND]
size "eeg_emotion_deepC/eeg_emotion.exe"
   text	   data	    bss	    dec	    hex	filename
6003341	   2984	    760	6007085	 5ba92d	eeg_emotion_deepC/eeg_emotion.exe
[SUCCESS]
Saved model as executable "eeg_emotion_deepC/eeg_emotion.exe"
In [20]:
# Pick random test sample
i = random.randint(0, len(test_df)-1)

np.savetxt('sample.data', Xtest[i])

# run exe with input
!eeg_emotion_deepC/eeg_emotion.exe sample.data

# show predicted output
# the compiled executable writes its result to 'deepSea_result_1.out' (see the message below)
nn_out = np.loadtxt('deepSea_result_1.out')
print ("\nModel predicted the emotion: ", output_columns[np.argmax(nn_out)])

# actual output
print("Actual emotion: ", output_columns[np.argmax(ytest[i])])
writing file deepSea_result_1.out.
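As a cross-check (a sketch only, not part of the original notebook), the same sample can also be run through the Keras model in memory and the two predictions compared:

# Sketch only: compare against the original Keras model on the same sample
keras_out = model.predict(Xtest[i:i+1])[0]
print("Keras model predicted the emotion: ", output_columns[np.argmax(keras_out)])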