
Detecting emotions using EEG brainwave data

Credit: AITS Cainvas Community

Photo by Lobster on Dribbble

This notebook uses EEG brainwave data to predict human emotions.

What is EEG?

Brain cells communicate with each other through electrical signals.

Electroencephalography (EEG) is an electrophysiological monitoring method used to record the electrical activity of the brain. It is used in diagnosing and treating disorders such as brain tumors, stroke, and sleep disorders.

Source - Wikipedia - Electroencephalography

In [1]:
import numpy as np
import pandas as pd
import tensorflow as tf
from tensorflow.keras import layers
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
import random
import matplotlib.pyplot as plt

The dataset

The data was collected from two people (1 male, 1 female) for 3 minutes per state - positive, neutral, negative. A Muse EEG headband recorded the TP9, AF7, AF8 and TP10 EEG placements via dry electrodes. Six minutes of resting neutral data was also recorded.

The stimuli used to evoke the emotions are listed below:

1. Marley and Me - Negative (Twentieth Century Fox) Death Scene
2. Up - Negative (Walt Disney Pictures) Opening Death Scene
3. My Girl - Negative (Imagine Entertainment) Funeral Scene
4. La La Land - Positive (Summit Entertainment) Opening musical number
5. Slow Life - Positive (BioQuest Studios) Nature timelapse
6. Funny Dogs - Positive (MashupZone) Funny dog clips

Dataset citation:

J. J. Bird, L. J. Manso, E. P. Ribeiro, A. Ekart, and D. R. Faria, “A study on mental state classification using EEG-based brain-machine interface,” in 9th International Conference on Intelligent Systems, IEEE, 2018.

J. J. Bird, A. Ekart, C. D. Buckingham, and D. R. Faria, “Mental emotional sentiment classification with an EEG-based brain-machine interface,” in The International Conference on Digital Image and Signal Processing (DISP’19), Springer, 2019.

In [2]:
eeg = pd.read_csv('https://cainvas-static.s3.amazonaws.com/media/user_data/cainvas-admin/emotions.csv')
eeg
Out[2]:
# mean_0_a mean_1_a mean_2_a mean_3_a mean_4_a mean_d_0_a mean_d_1_a mean_d_2_a mean_d_3_a mean_d_4_a ... fft_741_b fft_742_b fft_743_b fft_744_b fft_745_b fft_746_b fft_747_b fft_748_b fft_749_b label
0 4.620 30.3 -356.0 15.60 26.3 1.070 0.411 -15.700 2.060 3.15 ... 23.50 20.300 20.300 23.50 -215.0 280.00 -162.00 -162.00 280.00 NEGATIVE
1 28.800 33.1 32.0 25.80 22.8 6.550 1.680 2.880 3.830 -4.82 ... -23.30 -21.800 -21.800 -23.30 182.0 2.57 -31.60 -31.60 2.57 NEUTRAL
2 8.900 29.4 -416.0 16.70 23.7 79.900 3.360 90.200 89.900 2.03 ... 462.00 -233.000 -233.000 462.00 -267.0 281.00 -148.00 -148.00 281.00 POSITIVE
3 14.900 31.6 -143.0 19.80 24.3 -0.584 -0.284 8.820 2.300 -1.97 ... 299.00 -243.000 -243.000 299.00 132.0 -12.40 9.53 9.53 -12.40 POSITIVE
4 28.300 31.3 45.2 27.30 24.5 34.800 -5.790 3.060 41.400 5.52 ... 12.00 38.100 38.100 12.00 119.0 -17.60 23.90 23.90 -17.60 NEUTRAL
... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ...
2127 32.400 32.2 32.2 30.80 23.4 1.640 -2.030 0.647 -0.121 -1.10 ... -21.70 0.218 0.218 -21.70 95.2 -19.90 47.20 47.20 -19.90 NEUTRAL
2128 16.300 31.3 -284.0 14.30 23.9 4.200 1.090 4.460 4.720 6.63 ... 594.00 -324.000 -324.000 594.00 -35.5 142.00 -59.80 -59.80 142.00 POSITIVE
2129 -0.547 28.3 -259.0 15.80 26.7 9.080 6.900 12.700 2.030 4.64 ... 370.00 -160.000 -160.000 370.00 408.0 -169.00 -10.50 -10.50 -169.00 NEGATIVE
2130 16.800 19.9 -288.0 8.34 26.0 2.460 1.580 -16.000 1.690 4.74 ... 124.00 -27.600 -27.600 124.00 -656.0 552.00 -271.00 -271.00 552.00 NEGATIVE
2131 27.000 32.0 31.8 25.00 28.9 4.990 1.950 6.210 3.490 -3.51 ... 1.95 1.810 1.810 1.95 110.0 -6.71 22.80 22.80 -6.71 NEUTRAL

2132 rows × 2549 columns

In [3]:
# Number of samples in each class

eeg['label'].value_counts()
Out[3]:
NEUTRAL     716
NEGATIVE    708
POSITIVE    708
Name: label, dtype: int64

This is a balanced dataset.
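
The balance can also be confirmed numerically by normalizing the counts. A minimal sketch, using a stand-in Series built with the same class counts as `eeg['label']`:

```python
import pandas as pd

# Stand-in for eeg['label']; the real column has the same 2132 entries
labels = pd.Series(['NEUTRAL'] * 716 + ['NEGATIVE'] * 708 + ['POSITIVE'] * 708)

# normalize=True turns counts into proportions; a balanced set hovers near 1/3 each
proportions = labels.value_counts(normalize=True)
print(proportions.round(3))
```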

In [4]:
# Checking the datatypes of the columns

eeg.dtypes
Out[4]:
# mean_0_a    float64
mean_1_a      float64
mean_2_a      float64
mean_3_a      float64
mean_4_a      float64
               ...   
fft_746_b     float64
fft_747_b     float64
fft_748_b     float64
fft_749_b     float64
label          object
Length: 2549, dtype: object
In [5]:
# Defining the input and output columns used to separate the dataset in later cells

input_columns = list(eeg.columns[:-1])
output_columns = ['NEGATIVE', 'NEUTRAL', 'POSITIVE']    # column names to be used after one-hot encoding

print("Number of input columns: ", len(input_columns))
#print("Input columns: ", ', '.join(input_columns))

print("Number of output columns: ", len(output_columns))
#print("Output columns: ", ', '.join(output_columns))
Number of input columns:  2548
Number of output columns:  3
In [6]:
# One hot encoding the labels

y = pd.get_dummies(eeg.label)
print(y)

# Adding the one hot encodings to the dataset
for x in output_columns:
    eeg[x] = y[x]
      NEGATIVE  NEUTRAL  POSITIVE
0            1        0         0
1            0        1         0
2            0        0         1
3            0        0         1
4            0        1         0
...        ...      ...       ...
2127         0        1         0
2128         0        0         1
2129         1        0         0
2130         1        0         0
2131         0        1         0

[2132 rows x 3 columns]
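
`pd.get_dummies` sorts the class columns alphabetically, which is why `output_columns` was declared as NEGATIVE, NEUTRAL, POSITIVE above. An equivalent encoding, sketched here with scikit-learn's `LabelBinarizer` on a toy label array, follows the same ordering:

```python
import numpy as np
from sklearn.preprocessing import LabelBinarizer

# Toy labels standing in for eeg['label']
labels = np.array(['NEGATIVE', 'NEUTRAL', 'POSITIVE', 'POSITIVE', 'NEUTRAL'])

lb = LabelBinarizer()
onehot = lb.fit_transform(labels)   # one 0/1 row per sample, columns in lb.classes_ order

print(lb.classes_)   # alphabetical: NEGATIVE, NEUTRAL, POSITIVE
print(onehot)
```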
In [7]:
# Viewing the dataset again

eeg
Out[7]:
# mean_0_a mean_1_a mean_2_a mean_3_a mean_4_a mean_d_0_a mean_d_1_a mean_d_2_a mean_d_3_a mean_d_4_a ... fft_744_b fft_745_b fft_746_b fft_747_b fft_748_b fft_749_b label NEGATIVE NEUTRAL POSITIVE
0 4.620 30.3 -356.0 15.60 26.3 1.070 0.411 -15.700 2.060 3.15 ... 23.50 -215.0 280.00 -162.00 -162.00 280.00 NEGATIVE 1 0 0
1 28.800 33.1 32.0 25.80 22.8 6.550 1.680 2.880 3.830 -4.82 ... -23.30 182.0 2.57 -31.60 -31.60 2.57 NEUTRAL 0 1 0
2 8.900 29.4 -416.0 16.70 23.7 79.900 3.360 90.200 89.900 2.03 ... 462.00 -267.0 281.00 -148.00 -148.00 281.00 POSITIVE 0 0 1
3 14.900 31.6 -143.0 19.80 24.3 -0.584 -0.284 8.820 2.300 -1.97 ... 299.00 132.0 -12.40 9.53 9.53 -12.40 POSITIVE 0 0 1
4 28.300 31.3 45.2 27.30 24.5 34.800 -5.790 3.060 41.400 5.52 ... 12.00 119.0 -17.60 23.90 23.90 -17.60 NEUTRAL 0 1 0
... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ...
2127 32.400 32.2 32.2 30.80 23.4 1.640 -2.030 0.647 -0.121 -1.10 ... -21.70 95.2 -19.90 47.20 47.20 -19.90 NEUTRAL 0 1 0
2128 16.300 31.3 -284.0 14.30 23.9 4.200 1.090 4.460 4.720 6.63 ... 594.00 -35.5 142.00 -59.80 -59.80 142.00 POSITIVE 0 0 1
2129 -0.547 28.3 -259.0 15.80 26.7 9.080 6.900 12.700 2.030 4.64 ... 370.00 408.0 -169.00 -10.50 -10.50 -169.00 NEGATIVE 1 0 0
2130 16.800 19.9 -288.0 8.34 26.0 2.460 1.580 -16.000 1.690 4.74 ... 124.00 -656.0 552.00 -271.00 -271.00 552.00 NEGATIVE 1 0 0
2131 27.000 32.0 31.8 25.00 28.9 4.990 1.950 6.210 3.490 -3.51 ... 1.95 110.0 -6.71 22.80 22.80 -6.71 NEUTRAL 0 1 0

2132 rows × 2552 columns

In [8]:
# Splitting into train, val and test set -- 80-10-10 split

# First, an 80-20 split
train_df, val_test_df = train_test_split(eeg, test_size = 0.2)

# Then split the 20% into half
val_df, test_df = train_test_split(val_test_df, test_size = 0.5)

print("Number of samples in...")
print("Training set: ", len(train_df))
print("Validation set: ", len(val_df))
print("Testing set: ", len(test_df))
Number of samples in...
Training set:  1705
Validation set:  213
Testing set:  214
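
Note that `train_test_split` shuffles randomly, so the class ratios in each subset can drift slightly from the overall balance. An alternative sketch on a toy balanced array (not the notebook's approach) uses the `stratify` parameter to preserve the ratios exactly:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Toy stand-in: 90 samples, 3 perfectly balanced classes
y = np.repeat(['NEGATIVE', 'NEUTRAL', 'POSITIVE'], 30)
X = np.arange(90).reshape(-1, 1)

# Same 80-10-10 scheme as above, but stratified so every subset keeps the class ratio
X_train, X_rest, y_train, y_rest = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(
    X_rest, y_rest, test_size=0.5, stratify=y_rest, random_state=0)

print(len(X_train), len(X_val), len(X_test))  # 72 9 9
```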
In [9]:
# Splitting into X (input) and y (output)

Xtrain, ytrain = np.array(train_df[input_columns]), np.array(train_df[output_columns])

Xval, yval = np.array(val_df[input_columns]), np.array(val_df[output_columns])

Xtest, ytest = np.array(test_df[input_columns]), np.array(test_df[output_columns])
In [10]:
print("Range of values in the first training sample")

min(Xtrain[0]), max(Xtrain[0])
Range of values in the first training sample
Out[10]:
(-36000000.0, 12500000000.0)
In [11]:
# Each feature has a different range. 
# Using min_max_scaler to scale them to values in the range [0,1].

min_max_scaler = MinMaxScaler()

# Fit on training set alone
Xtrain = min_max_scaler.fit_transform(Xtrain)

# Use it to transform val and test input
Xval = min_max_scaler.transform(Xval)
Xtest = min_max_scaler.transform(Xtest)
In [12]:
print("Range of values in the first training sample")

min(Xtrain[0]), max(Xtrain[0])
Range of values in the first training sample
Out[12]:
(0.0, 1.0)
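
`MinMaxScaler` learns the per-feature minimum and maximum from the training set and applies x' = (x - min) / (max - min) everywhere else. A small sketch with toy matrices (values here are illustrative, not from the dataset) confirms the formula and shows why transformed test values can land outside [0, 1]:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# Toy train/test matrices with two features on very different scales
Xtr = np.array([[0.0, 100.0], [5.0, 300.0], [10.0, 200.0]])
Xte = np.array([[2.5, 400.0]])

scaler = MinMaxScaler().fit(Xtr)   # learns per-column min and max from train only
manual = (Xte - Xtr.min(axis=0)) / (Xtr.max(axis=0) - Xtr.min(axis=0))

print(scaler.transform(Xte))  # matches the manual formula
print(manual)                 # second feature is 1.5 -- outside [0, 1]
```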

Model

In [13]:
model = tf.keras.Sequential([
    layers.Dense(1024, activation = 'relu', input_shape = Xtrain[0].shape),
    layers.Dense(256, activation = 'relu'),
    layers.Dense(128, activation = 'relu'),
    layers.Dense(3, activation = 'softmax')
])

model.compile(optimizer='adam', loss=tf.losses.CategoricalCrossentropy(), metrics=['accuracy'])
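
A quick back-of-the-envelope count of the trainable parameters, assuming the layer sizes above (each Dense layer has inputs × units weights plus one bias per unit):

```python
# Layer widths from the model above: 2548 input features, then 1024, 256, 128, 3 units
sizes = [2548, 1024, 256, 128, 3]

# Dense layer parameters = n_in * n_out weights + n_out biases
params = sum(n_in * n_out + n_out for n_in, n_out in zip(sizes, sizes[1:]))
print(params)  # 2,905,859 trainable parameters
```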
In [14]:
history = model.fit(Xtrain, ytrain, validation_data = (Xval, yval), epochs=128)
Epoch 1/128
54/54 [==============================] - 1s 15ms/step - loss: 0.9714 - accuracy: 0.5795 - val_loss: 0.4979 - val_accuracy: 0.8826
Epoch 2/128
54/54 [==============================] - 1s 11ms/step - loss: 0.5090 - accuracy: 0.7988 - val_loss: 0.5934 - val_accuracy: 0.7418
Epoch 3/128
54/54 [==============================] - 1s 11ms/step - loss: 0.4057 - accuracy: 0.8440 - val_loss: 0.3410 - val_accuracy: 0.8873
Epoch 4/128
54/54 [==============================] - 1s 11ms/step - loss: 0.4965 - accuracy: 0.7865 - val_loss: 0.3648 - val_accuracy: 0.8826
Epoch 5/128
54/54 [==============================] - 1s 11ms/step - loss: 0.3395 - accuracy: 0.8903 - val_loss: 0.2605 - val_accuracy: 0.9202
Epoch 6/128
54/54 [==============================] - 1s 11ms/step - loss: 0.2878 - accuracy: 0.9021 - val_loss: 0.2552 - val_accuracy: 0.8967
Epoch 7/128
54/54 [==============================] - 1s 11ms/step - loss: 0.3104 - accuracy: 0.8874 - val_loss: 0.2450 - val_accuracy: 0.9108
Epoch 8/128
54/54 [==============================] - 1s 11ms/step - loss: 0.3365 - accuracy: 0.8663 - val_loss: 0.4267 - val_accuracy: 0.8451
Epoch 9/128
54/54 [==============================] - 1s 11ms/step - loss: 0.3338 - accuracy: 0.8733 - val_loss: 0.2677 - val_accuracy: 0.9155
Epoch 10/128
54/54 [==============================] - 1s 11ms/step - loss: 0.2598 - accuracy: 0.9091 - val_loss: 0.3116 - val_accuracy: 0.8404
Epoch 11/128
54/54 [==============================] - 1s 11ms/step - loss: 0.2944 - accuracy: 0.8933 - val_loss: 0.4750 - val_accuracy: 0.7465
Epoch 12/128
54/54 [==============================] - 1s 11ms/step - loss: 0.3181 - accuracy: 0.8780 - val_loss: 0.3040 - val_accuracy: 0.8920
Epoch 13/128
54/54 [==============================] - 1s 12ms/step - loss: 0.2813 - accuracy: 0.8968 - val_loss: 0.2228 - val_accuracy: 0.9108
Epoch 14/128
54/54 [==============================] - 1s 11ms/step - loss: 0.2878 - accuracy: 0.8909 - val_loss: 0.2074 - val_accuracy: 0.9390
Epoch 15/128
54/54 [==============================] - 1s 11ms/step - loss: 0.2608 - accuracy: 0.9079 - val_loss: 0.2397 - val_accuracy: 0.9014
Epoch 16/128
54/54 [==============================] - 1s 12ms/step - loss: 0.2816 - accuracy: 0.8991 - val_loss: 0.4713 - val_accuracy: 0.7887
Epoch 17/128
54/54 [==============================] - 1s 11ms/step - loss: 0.3382 - accuracy: 0.8798 - val_loss: 0.2361 - val_accuracy: 0.9343
Epoch 18/128
54/54 [==============================] - 1s 11ms/step - loss: 0.2365 - accuracy: 0.9173 - val_loss: 0.3367 - val_accuracy: 0.8873
Epoch 19/128
54/54 [==============================] - 1s 11ms/step - loss: 0.2991 - accuracy: 0.8827 - val_loss: 0.3219 - val_accuracy: 0.8732
Epoch 20/128
54/54 [==============================] - 1s 11ms/step - loss: 0.3697 - accuracy: 0.8633 - val_loss: 0.2194 - val_accuracy: 0.9249
Epoch 21/128
54/54 [==============================] - 1s 11ms/step - loss: 0.2612 - accuracy: 0.9097 - val_loss: 0.2001 - val_accuracy: 0.9390
Epoch 22/128
54/54 [==============================] - 1s 11ms/step - loss: 0.2366 - accuracy: 0.9132 - val_loss: 0.3147 - val_accuracy: 0.8779
Epoch 23/128
54/54 [==============================] - 1s 11ms/step - loss: 0.2753 - accuracy: 0.8927 - val_loss: 0.2137 - val_accuracy: 0.9343
Epoch 24/128
54/54 [==============================] - 1s 11ms/step - loss: 0.2720 - accuracy: 0.8956 - val_loss: 0.2170 - val_accuracy: 0.9343
Epoch 25/128
54/54 [==============================] - 1s 11ms/step - loss: 0.2290 - accuracy: 0.9191 - val_loss: 0.2612 - val_accuracy: 0.9108
Epoch 26/128
54/54 [==============================] - 1s 11ms/step - loss: 0.2251 - accuracy: 0.9167 - val_loss: 0.2070 - val_accuracy: 0.9202
Epoch 27/128
54/54 [==============================] - 1s 12ms/step - loss: 0.2491 - accuracy: 0.9073 - val_loss: 0.2249 - val_accuracy: 0.9296
Epoch 28/128
54/54 [==============================] - 1s 11ms/step - loss: 0.2404 - accuracy: 0.9155 - val_loss: 0.2316 - val_accuracy: 0.9249
Epoch 29/128
54/54 [==============================] - 1s 11ms/step - loss: 0.2327 - accuracy: 0.9179 - val_loss: 0.1931 - val_accuracy: 0.9249
Epoch 30/128
54/54 [==============================] - 1s 11ms/step - loss: 0.2094 - accuracy: 0.9249 - val_loss: 0.1924 - val_accuracy: 0.9249
Epoch 31/128
54/54 [==============================] - 1s 11ms/step - loss: 0.2125 - accuracy: 0.9220 - val_loss: 0.1836 - val_accuracy: 0.9390
Epoch 32/128
54/54 [==============================] - 1s 12ms/step - loss: 0.2479 - accuracy: 0.9032 - val_loss: 0.2953 - val_accuracy: 0.8873
Epoch 33/128
54/54 [==============================] - 1s 11ms/step - loss: 0.2168 - accuracy: 0.9226 - val_loss: 0.2104 - val_accuracy: 0.9108
Epoch 34/128
54/54 [==============================] - 1s 11ms/step - loss: 0.2964 - accuracy: 0.8874 - val_loss: 0.4511 - val_accuracy: 0.7981
Epoch 35/128
54/54 [==============================] - 1s 11ms/step - loss: 0.2814 - accuracy: 0.8891 - val_loss: 0.2194 - val_accuracy: 0.9296
Epoch 36/128
54/54 [==============================] - 1s 11ms/step - loss: 0.2205 - accuracy: 0.9232 - val_loss: 0.4094 - val_accuracy: 0.8404
Epoch 37/128
54/54 [==============================] - 1s 11ms/step - loss: 0.2090 - accuracy: 0.9202 - val_loss: 0.1709 - val_accuracy: 0.9296
Epoch 38/128
54/54 [==============================] - 1s 11ms/step - loss: 0.2102 - accuracy: 0.9202 - val_loss: 0.3189 - val_accuracy: 0.8685
Epoch 39/128
54/54 [==============================] - 1s 11ms/step - loss: 0.2777 - accuracy: 0.8933 - val_loss: 0.2888 - val_accuracy: 0.8451
Epoch 40/128
54/54 [==============================] - 1s 12ms/step - loss: 0.2210 - accuracy: 0.9155 - val_loss: 0.1682 - val_accuracy: 0.9390
Epoch 41/128
54/54 [==============================] - 1s 11ms/step - loss: 0.1866 - accuracy: 0.9249 - val_loss: 0.1723 - val_accuracy: 0.9390
Epoch 42/128
54/54 [==============================] - 1s 11ms/step - loss: 0.1660 - accuracy: 0.9402 - val_loss: 0.2277 - val_accuracy: 0.9108
Epoch 43/128
54/54 [==============================] - 1s 11ms/step - loss: 0.2406 - accuracy: 0.9079 - val_loss: 0.2060 - val_accuracy: 0.9296
Epoch 44/128
54/54 [==============================] - 1s 11ms/step - loss: 0.1967 - accuracy: 0.9261 - val_loss: 0.2575 - val_accuracy: 0.9108
Epoch 45/128
54/54 [==============================] - 1s 11ms/step - loss: 0.2021 - accuracy: 0.9220 - val_loss: 0.1851 - val_accuracy: 0.9249
Epoch 46/128
54/54 [==============================] - 1s 11ms/step - loss: 0.2418 - accuracy: 0.9062 - val_loss: 0.3877 - val_accuracy: 0.8310
Epoch 47/128
54/54 [==============================] - 1s 11ms/step - loss: 0.1798 - accuracy: 0.9331 - val_loss: 0.2936 - val_accuracy: 0.9390
Epoch 48/128
54/54 [==============================] - 1s 12ms/step - loss: 0.2139 - accuracy: 0.9226 - val_loss: 0.2279 - val_accuracy: 0.9202
Epoch 49/128
54/54 [==============================] - 1s 11ms/step - loss: 0.1729 - accuracy: 0.9326 - val_loss: 0.1846 - val_accuracy: 0.9343
Epoch 50/128
54/54 [==============================] - 1s 11ms/step - loss: 0.1561 - accuracy: 0.9390 - val_loss: 0.2145 - val_accuracy: 0.9155
Epoch 51/128
54/54 [==============================] - 1s 11ms/step - loss: 0.1593 - accuracy: 0.9478 - val_loss: 0.2124 - val_accuracy: 0.9390
Epoch 52/128
54/54 [==============================] - 1s 11ms/step - loss: 0.1846 - accuracy: 0.9302 - val_loss: 0.1829 - val_accuracy: 0.9296
Epoch 53/128
54/54 [==============================] - 1s 11ms/step - loss: 0.1506 - accuracy: 0.9501 - val_loss: 0.1937 - val_accuracy: 0.9390
Epoch 54/128
54/54 [==============================] - 1s 11ms/step - loss: 0.1697 - accuracy: 0.9437 - val_loss: 0.1390 - val_accuracy: 0.9437
Epoch 55/128
54/54 [==============================] - 1s 11ms/step - loss: 0.1589 - accuracy: 0.9355 - val_loss: 0.1475 - val_accuracy: 0.9437
Epoch 56/128
54/54 [==============================] - 1s 11ms/step - loss: 0.1539 - accuracy: 0.9455 - val_loss: 0.1979 - val_accuracy: 0.9343
Epoch 57/128
54/54 [==============================] - 1s 11ms/step - loss: 0.1394 - accuracy: 0.9478 - val_loss: 0.1103 - val_accuracy: 0.9531
Epoch 58/128
54/54 [==============================] - 1s 11ms/step - loss: 0.1844 - accuracy: 0.9261 - val_loss: 0.3156 - val_accuracy: 0.8920
Epoch 59/128
54/54 [==============================] - 1s 11ms/step - loss: 0.1798 - accuracy: 0.9326 - val_loss: 0.1811 - val_accuracy: 0.9390
Epoch 60/128
54/54 [==============================] - 1s 11ms/step - loss: 0.0973 - accuracy: 0.9648 - val_loss: 0.1157 - val_accuracy: 0.9531
Epoch 61/128
54/54 [==============================] - 1s 12ms/step - loss: 0.1154 - accuracy: 0.9607 - val_loss: 0.2575 - val_accuracy: 0.9296
Epoch 62/128
54/54 [==============================] - 1s 11ms/step - loss: 0.2131 - accuracy: 0.9349 - val_loss: 0.1640 - val_accuracy: 0.9390
Epoch 63/128
54/54 [==============================] - 1s 11ms/step - loss: 0.1266 - accuracy: 0.9537 - val_loss: 0.1545 - val_accuracy: 0.9484
Epoch 64/128
54/54 [==============================] - 1s 11ms/step - loss: 0.1267 - accuracy: 0.9490 - val_loss: 0.3801 - val_accuracy: 0.8357
Epoch 65/128
54/54 [==============================] - 1s 11ms/step - loss: 0.1661 - accuracy: 0.9331 - val_loss: 0.3611 - val_accuracy: 0.8404
Epoch 66/128
54/54 [==============================] - 1s 11ms/step - loss: 0.1024 - accuracy: 0.9625 - val_loss: 0.1040 - val_accuracy: 0.9437
Epoch 67/128
54/54 [==============================] - 1s 11ms/step - loss: 0.0859 - accuracy: 0.9724 - val_loss: 0.1437 - val_accuracy: 0.9484
Epoch 68/128
54/54 [==============================] - 1s 11ms/step - loss: 0.1273 - accuracy: 0.9560 - val_loss: 0.1431 - val_accuracy: 0.9437
Epoch 69/128
54/54 [==============================] - 1s 11ms/step - loss: 0.1580 - accuracy: 0.9390 - val_loss: 0.1193 - val_accuracy: 0.9531
Epoch 70/128
54/54 [==============================] - 1s 11ms/step - loss: 0.1037 - accuracy: 0.9619 - val_loss: 0.1350 - val_accuracy: 0.9437
Epoch 71/128
54/54 [==============================] - 1s 11ms/step - loss: 0.0712 - accuracy: 0.9724 - val_loss: 0.1970 - val_accuracy: 0.9155
Epoch 72/128
54/54 [==============================] - 1s 11ms/step - loss: 0.1207 - accuracy: 0.9531 - val_loss: 0.1198 - val_accuracy: 0.9577
Epoch 73/128
54/54 [==============================] - 1s 11ms/step - loss: 0.1116 - accuracy: 0.9525 - val_loss: 0.0826 - val_accuracy: 0.9718
Epoch 74/128
54/54 [==============================] - 1s 12ms/step - loss: 0.0779 - accuracy: 0.9771 - val_loss: 0.1000 - val_accuracy: 0.9671
Epoch 75/128
54/54 [==============================] - 1s 11ms/step - loss: 0.0621 - accuracy: 0.9801 - val_loss: 0.1037 - val_accuracy: 0.9671
Epoch 76/128
54/54 [==============================] - 1s 11ms/step - loss: 0.0865 - accuracy: 0.9689 - val_loss: 0.2319 - val_accuracy: 0.9061
Epoch 77/128
54/54 [==============================] - 1s 11ms/step - loss: 0.1430 - accuracy: 0.9531 - val_loss: 0.0986 - val_accuracy: 0.9765
Epoch 78/128
54/54 [==============================] - 1s 11ms/step - loss: 0.0911 - accuracy: 0.9654 - val_loss: 0.1077 - val_accuracy: 0.9671
Epoch 79/128
54/54 [==============================] - 1s 11ms/step - loss: 0.1034 - accuracy: 0.9607 - val_loss: 0.1272 - val_accuracy: 0.9437
Epoch 80/128
54/54 [==============================] - 1s 11ms/step - loss: 0.1527 - accuracy: 0.9449 - val_loss: 0.1658 - val_accuracy: 0.9343
Epoch 81/128
54/54 [==============================] - 1s 11ms/step - loss: 0.0941 - accuracy: 0.9666 - val_loss: 0.1195 - val_accuracy: 0.9577
Epoch 82/128
54/54 [==============================] - 1s 12ms/step - loss: 0.0629 - accuracy: 0.9771 - val_loss: 0.0828 - val_accuracy: 0.9671
Epoch 83/128
54/54 [==============================] - 1s 11ms/step - loss: 0.0558 - accuracy: 0.9806 - val_loss: 0.0949 - val_accuracy: 0.9671
Epoch 84/128
54/54 [==============================] - 1s 11ms/step - loss: 0.0997 - accuracy: 0.9654 - val_loss: 0.0752 - val_accuracy: 0.9671
Epoch 85/128
54/54 [==============================] - 1s 11ms/step - loss: 0.0473 - accuracy: 0.9830 - val_loss: 0.1549 - val_accuracy: 0.9484
Epoch 86/128
54/54 [==============================] - 1s 11ms/step - loss: 0.0530 - accuracy: 0.9812 - val_loss: 0.1515 - val_accuracy: 0.9390
Epoch 87/128
54/54 [==============================] - 1s 12ms/step - loss: 0.1456 - accuracy: 0.9589 - val_loss: 0.1246 - val_accuracy: 0.9484
Epoch 88/128
54/54 [==============================] - 1s 13ms/step - loss: 0.0832 - accuracy: 0.9718 - val_loss: 0.3178 - val_accuracy: 0.8920
Epoch 89/128
54/54 [==============================] - 1s 11ms/step - loss: 0.2499 - accuracy: 0.9202 - val_loss: 0.0992 - val_accuracy: 0.9577
Epoch 90/128
54/54 [==============================] - 1s 12ms/step - loss: 0.0851 - accuracy: 0.9689 - val_loss: 0.2963 - val_accuracy: 0.8779
Epoch 91/128
54/54 [==============================] - 1s 12ms/step - loss: 0.1391 - accuracy: 0.9578 - val_loss: 0.2097 - val_accuracy: 0.9296
Epoch 92/128
54/54 [==============================] - 1s 11ms/step - loss: 0.0589 - accuracy: 0.9812 - val_loss: 0.2824 - val_accuracy: 0.9390
Epoch 93/128
54/54 [==============================] - 1s 11ms/step - loss: 0.0930 - accuracy: 0.9660 - val_loss: 0.2457 - val_accuracy: 0.9484
Epoch 94/128
54/54 [==============================] - 1s 11ms/step - loss: 0.0792 - accuracy: 0.9695 - val_loss: 0.1251 - val_accuracy: 0.9577
Epoch 95/128
54/54 [==============================] - 1s 11ms/step - loss: 0.0593 - accuracy: 0.9783 - val_loss: 0.1078 - val_accuracy: 0.9624
Epoch 96/128
54/54 [==============================] - 1s 11ms/step - loss: 0.1059 - accuracy: 0.9625 - val_loss: 0.1521 - val_accuracy: 0.9390
Epoch 97/128
54/54 [==============================] - 1s 11ms/step - loss: 0.0679 - accuracy: 0.9765 - val_loss: 0.0771 - val_accuracy: 0.9718
Epoch 98/128
54/54 [==============================] - 1s 11ms/step - loss: 0.0546 - accuracy: 0.9812 - val_loss: 0.1253 - val_accuracy: 0.9577
Epoch 99/128
54/54 [==============================] - 1s 11ms/step - loss: 0.1030 - accuracy: 0.9601 - val_loss: 0.0874 - val_accuracy: 0.9765
Epoch 100/128
54/54 [==============================] - 1s 11ms/step - loss: 0.0565 - accuracy: 0.9789 - val_loss: 0.3319 - val_accuracy: 0.8967
Epoch 101/128
54/54 [==============================] - 1s 11ms/step - loss: 0.0777 - accuracy: 0.9689 - val_loss: 0.0804 - val_accuracy: 0.9765
Epoch 102/128
54/54 [==============================] - 1s 11ms/step - loss: 0.0470 - accuracy: 0.9842 - val_loss: 0.0841 - val_accuracy: 0.9718
Epoch 103/128
54/54 [==============================] - 1s 11ms/step - loss: 0.0972 - accuracy: 0.9642 - val_loss: 0.2817 - val_accuracy: 0.8873
Epoch 104/128
54/54 [==============================] - 1s 11ms/step - loss: 0.1282 - accuracy: 0.9560 - val_loss: 0.1970 - val_accuracy: 0.9390
Epoch 105/128
54/54 [==============================] - 1s 11ms/step - loss: 0.0578 - accuracy: 0.9783 - val_loss: 0.0830 - val_accuracy: 0.9718
Epoch 106/128
54/54 [==============================] - 1s 11ms/step - loss: 0.0442 - accuracy: 0.9824 - val_loss: 0.1195 - val_accuracy: 0.9577
Epoch 107/128
54/54 [==============================] - 1s 11ms/step - loss: 0.1273 - accuracy: 0.9519 - val_loss: 0.1049 - val_accuracy: 0.9671
Epoch 108/128
54/54 [==============================] - 1s 11ms/step - loss: 0.0677 - accuracy: 0.9754 - val_loss: 0.0690 - val_accuracy: 0.9718
Epoch 109/128
54/54 [==============================] - 1s 13ms/step - loss: 0.0633 - accuracy: 0.9765 - val_loss: 0.1208 - val_accuracy: 0.9531
Epoch 110/128
54/54 [==============================] - 1s 13ms/step - loss: 0.1219 - accuracy: 0.9501 - val_loss: 0.0773 - val_accuracy: 0.9765
Epoch 111/128
54/54 [==============================] - 1s 13ms/step - loss: 0.1063 - accuracy: 0.9630 - val_loss: 0.0790 - val_accuracy: 0.9718
Epoch 112/128
54/54 [==============================] - 1s 12ms/step - loss: 0.0478 - accuracy: 0.9848 - val_loss: 0.1044 - val_accuracy: 0.9531
Epoch 113/128
54/54 [==============================] - 1s 12ms/step - loss: 0.0764 - accuracy: 0.9754 - val_loss: 0.3792 - val_accuracy: 0.8732
Epoch 114/128
54/54 [==============================] - 1s 13ms/step - loss: 0.0894 - accuracy: 0.9642 - val_loss: 0.1278 - val_accuracy: 0.9531
Epoch 115/128
54/54 [==============================] - 1s 12ms/step - loss: 0.0341 - accuracy: 0.9853 - val_loss: 0.0762 - val_accuracy: 0.9812
Epoch 116/128
54/54 [==============================] - 1s 11ms/step - loss: 0.0292 - accuracy: 0.9912 - val_loss: 0.0752 - val_accuracy: 0.9812
Epoch 117/128
54/54 [==============================] - 1s 11ms/step - loss: 0.1674 - accuracy: 0.9390 - val_loss: 0.1039 - val_accuracy: 0.9577
Epoch 118/128
54/54 [==============================] - 1s 11ms/step - loss: 0.0873 - accuracy: 0.9666 - val_loss: 0.0775 - val_accuracy: 0.9671
Epoch 119/128
54/54 [==============================] - 1s 11ms/step - loss: 0.0371 - accuracy: 0.9883 - val_loss: 0.0553 - val_accuracy: 0.9718
Epoch 120/128
54/54 [==============================] - 1s 11ms/step - loss: 0.0801 - accuracy: 0.9683 - val_loss: 0.0962 - val_accuracy: 0.9671
Epoch 121/128
54/54 [==============================] - 1s 11ms/step - loss: 0.0250 - accuracy: 0.9930 - val_loss: 0.1030 - val_accuracy: 0.9624
Epoch 122/128
54/54 [==============================] - 1s 11ms/step - loss: 0.0243 - accuracy: 0.9912 - val_loss: 0.1747 - val_accuracy: 0.9484
Epoch 123/128
54/54 [==============================] - 1s 11ms/step - loss: 0.0526 - accuracy: 0.9818 - val_loss: 0.2590 - val_accuracy: 0.9061
Epoch 124/128
54/54 [==============================] - 1s 11ms/step - loss: 0.0785 - accuracy: 0.9718 - val_loss: 0.0789 - val_accuracy: 0.9671
Epoch 125/128
54/54 [==============================] - 1s 11ms/step - loss: 0.0284 - accuracy: 0.9912 - val_loss: 0.1020 - val_accuracy: 0.9577
Epoch 126/128
54/54 [==============================] - 1s 11ms/step - loss: 0.0347 - accuracy: 0.9865 - val_loss: 0.0652 - val_accuracy: 0.9765
Epoch 127/128
54/54 [==============================] - 1s 11ms/step - loss: 0.0418 - accuracy: 0.9865 - val_loss: 0.1125 - val_accuracy: 0.9531
Epoch 128/128
54/54 [==============================] - 1s 11ms/step - loss: 0.0977 - accuracy: 0.9689 - val_loss: 0.2830 - val_accuracy: 0.8779
In [15]:
model.evaluate(Xtest, ytest)
7/7 [==============================] - 0s 4ms/step - loss: 0.3656 - accuracy: 0.8879
Out[15]:
[0.3656277656555176, 0.8878504633903503]
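
Overall accuracy hides which emotions get confused with each other. A per-class confusion matrix makes that visible; the sketch below uses toy one-hot labels and softmax-like scores as stand-ins for `ytest` and `model.predict(Xtest)`:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

output_columns = ['NEGATIVE', 'NEUTRAL', 'POSITIVE']

# Toy one-hot ground truths and softmax-like predictions
ytrue = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 0, 0]])
yprob = np.array([[0.8, 0.1, 0.1], [0.2, 0.7, 0.1],
                  [0.1, 0.2, 0.7], [0.3, 0.5, 0.2]])

# argmax turns one-hot rows and score rows into class indices
cm = confusion_matrix(ytrue.argmax(axis=1), yprob.argmax(axis=1))
print(cm)  # rows: actual class, columns: predicted class
```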
In [16]:
def plot(history, variable):
    plt.plot(range(len(history[variable])), history[variable])
    plt.title(variable)
In [17]:
plot(history.history, "accuracy")
In [18]:
plot(history.history, "loss")
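
The `plot` helper above shows only the training curve. Overlaying the corresponding `val_` metric on the same axes makes overfitting easier to spot; a sketch with a stand-in history dict (the real `history.history` holds 128 values per key):

```python
import matplotlib
matplotlib.use('Agg')   # headless backend so the sketch runs outside a notebook
import matplotlib.pyplot as plt

# Stand-in for history.history; the values here are illustrative
history = {'accuracy':     [0.60, 0.80, 0.90, 0.95],
           'val_accuracy': [0.88, 0.74, 0.89, 0.92]}

def plot_with_val(history, variable):
    # Plot the training metric and its validation counterpart together
    plt.plot(history[variable], label=variable)
    plt.plot(history['val_' + variable], label='val_' + variable)
    plt.title(variable)
    plt.legend()

plot_with_val(history, 'accuracy')
```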

deepC

In [19]:
model.save('eeg_emotion.h5')

!deepCC eeg_emotion.h5
reading [keras model] from 'eeg_emotion.h5'
Saved 'eeg_emotion.onnx'
reading onnx model from file  eeg_emotion.onnx
Model info:
  ir_vesion :  3 
  doc       : 
WARN (ONNX): terminal (input/output) dense_input's shape is less than 1.
             changing it to 1.
WARN (ONNX): terminal (input/output) dense_3's shape is less than 1.
             changing it to 1.
WARN (GRAPH): found operator node with the same name (dense_3) as io node.
running DNNC graph sanity check ... passed.
Writing C++ file  eeg_emotion_deepC/eeg_emotion.cpp
INFO (ONNX): model files are ready in dir eeg_emotion_deepC
g++ -std=c++11 -O3 -I. -I/opt/tljh/user/lib/python3.7/site-packages/deepC-0.13-py3.7-linux-x86_64.egg/deepC/include -isystem /opt/tljh/user/lib/python3.7/site-packages/deepC-0.13-py3.7-linux-x86_64.egg/deepC/packages/eigen-eigen-323c052e1731 eeg_emotion_deepC/eeg_emotion.cpp -o eeg_emotion_deepC/eeg_emotion.exe
Model executable  eeg_emotion_deepC/eeg_emotion.exe
In [20]:
# Pick random test sample
i = random.randint(0, len(test_df)-1)

np.savetxt('sample.data', Xtest[i])

# run exe with input
!eeg_emotion_deepC/eeg_emotion.exe sample.data

# show predicted output
nn_out = np.loadtxt('dense_3.out')
print ("\nModel predicted the emotion: ", output_columns[np.argmax(nn_out)])

# actual output
print("Actual emotion: ", output_columns[np.argmax(ytest[i])])
reading file sample.data.
writing file dense_3.out.

Model predicted the emotion:  NEUTRAL
Actual emotion:  NEUTRAL