Facial Emotion Recognition With Neural Networks - OpenCV and Keras

This is a very cool project that detects facial emotions with a neural network. It's super cool and easy to code. It's a Python project, and we are going to use libraries like Keras, TensorFlow, cv2 (OpenCV), NumPy, and pandas.
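
If you don't have these libraries installed yet, the usual pip package names should cover it (OpenCV is published as opencv-python, and Keras ships inside TensorFlow, so no separate install is needed):

        pip install tensorflow opencv-python numpy pandas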

We will build a convolutional neural network and train it on a facial emotions dataset; the image counts used in the code below match the FER-2013 dataset. You can download the dataset from here.

Just download the dataset from the link above and separate it into train and test directories, keeping the large majority of the data for training; the code below assumes 28,709 training images and 7,178 test images, roughly an 80/20 split. OpenCV (cv2) handles the live video capture for emotion detection. We have 7 different emotion categories.
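
If your copy of the dataset comes as one folder per emotion rather than pre-split, here is a minimal splitting sketch; the source directory name and the fixed seed are my assumptions, and flow_from_directory (used below) expects one subdirectory per emotion inside each of train and test:

        import os, random, shutil

        src = r'C:\data\emotions\all'   # assumed layout: one subdirectory per emotion
        random.seed(42)
        for emotion in os.listdir(src):
            files = os.listdir(os.path.join(src, emotion))
            random.shuffle(files)
            split = int(0.8 * len(files))   # ~80/20, matching the counts used below
            for subset, names in (('train', files[:split]), ('test', files[split:])):
                out_dir = os.path.join(r'C:\data\emotions', subset, emotion)
                os.makedirs(out_dir, exist_ok=True)
                for name in names:
                    shutil.copy(os.path.join(src, emotion, name), out_dir)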

Use a power-of-two batch size for training. I've used a batch size of 64, but a batch size of 32 would be good too. Just play with the batch size and the number of epochs and try to find the maximum accuracy. My model reached an accuracy of 88.12%, but I think you can do even better.
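
Keep in mind that the batch size also sets how many optimizer steps make up one epoch, since the training call below uses steps_per_epoch = num_train // batch_size:

        28709 // 64   # batch_size 64 -> 448 steps per epoch
        28709 // 32   # batch_size 32 -> 897 steps per epoch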

Just remember this: for training, set mode = "train", and for live emotion detection, change it to mode = "display".

Don't forget to add the dropout layers to avoid overfitting.

I hope you all enjoy this super cool project.

Here's the source code for the above emotion detection project - 

        import numpy as np
        import cv2
        from tensorflow.keras.models import Sequential
        from tensorflow.keras.layers import Dense, Dropout, Flatten, Conv2D, MaxPooling2D
        from tensorflow.keras.optimizers import Adam
        from tensorflow.keras.preprocessing.image import ImageDataGenerator

        # set up the training and validation data generators

        train_dir = r'C:\data\emotions\train'   # raw strings keep Windows backslashes from being read as escapes
        val_dir = r'C:\data\emotions\test'
        num_train = 28709
        num_val = 7178
        batch_size = 64
        num_epoch = 50
        train_datagen = ImageDataGenerator(rescale=1./255)
        val_datagen = ImageDataGenerator(rescale=1./255)
        train_generator = train_datagen.flow_from_directory(
                train_dir,
                target_size=(48,48),
                batch_size=batch_size,
                color_mode="grayscale",
                class_mode='categorical')
        validation_generator = val_datagen.flow_from_directory(
                val_dir,
                target_size=(48,48),
                batch_size=batch_size,
                color_mode="grayscale",
                class_mode='categorical')

        # Build the CNN that classifies 48x48 grayscale faces into 7 emotions

        model = Sequential()
        model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=(48,48,1)))
        model.add(Conv2D(64, kernel_size=(3, 3), activation='relu'))
        model.add(MaxPooling2D(pool_size=(2, 2)))
        model.add(Dropout(0.25))
        model.add(Conv2D(128, kernel_size=(3, 3), activation='relu'))
        model.add(MaxPooling2D(pool_size=(2, 2)))
        model.add(Conv2D(128, kernel_size=(3, 3), activation='relu'))
        model.add(MaxPooling2D(pool_size=(2, 2)))
        model.add(Dropout(0.25))
        model.add(Flatten())
        model.add(Dense(1024, activation='relu'))
        model.add(Dropout(0.5))
        model.add(Dense(7, activation='softmax'))


        mode = "display"  # set mode = "train" to train first; switch back to "display" for the webcam demo


        if mode == "train":
            model.compile(loss='categorical_crossentropy',
                          optimizer=Adam(learning_rate=0.0001),  # 'lr' and 'decay' are deprecated argument names
                          metrics=['accuracy'])
            model_info = model.fit(  # model.fit accepts generators; fit_generator is deprecated
                    train_generator,
                    steps_per_epoch=num_train // batch_size,
                    epochs=num_epoch,
                    validation_data=validation_generator,
                    validation_steps=num_val // batch_size)
            
            model_json = model.to_json()
            with open("model.json", "w") as json_file:
                json_file.write(model_json)
            model.save_weights("model.h5")
            print("Saved model to disk")

        # emotions will be displayed on your face from the webcam feed
        elif mode == "display":
            model.load_weights('model.h5')
            # prevents openCL usage and unnecessary logging messages
            cv2.ocl.setUseOpenCL(False)
            # dictionary which assigns each label an emotion (alphabetical order)
            emotion_dict = {0: "Angry", 1: "Disgusted", 2: "Fearful", 3: "Happy", 4: "Neutral", 5: "Sad", 6: "Surprised"}
            # start the webcam feed
            cap = cv2.VideoCapture(0)
            # load the Haar face cascade once, before the frame loop
            facecasc = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
            while True:
                ret, frame = cap.read()
                if not ret:
                    break
                gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
                faces = facecasc.detectMultiScale(gray,scaleFactor=1.3, minNeighbors=5)
                for (x, y, w, h) in faces:
                    cv2.rectangle(frame,(x,y),(x+w,y+h),(255,0,0),2)
                    roi_gray = gray[y:y+h, x:x+w] 
                    # rescale to [0, 1] so inference matches the training-time preprocessing
                    cropped_img = np.expand_dims(np.expand_dims(cv2.resize(roi_gray, (48, 48)), -1), 0) / 255.0
                    prediction = model.predict(cropped_img)
                    maxindex = int(np.argmax(prediction))
                    cv2.putText(frame, emotion_dict[maxindex], (x+20, y-60), cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 255), 2, cv2.LINE_AA)
                cv2.imshow('Video', cv2.resize(frame,(1200,960),interpolation = cv2.INTER_CUBIC))
                if cv2.waitKey(1) & 0xFF == ord('q'):
                    break
            cap.release()
            cv2.destroyAllWindows()
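
The script saves the architecture to model.json but rebuilds the network in code before calling load_weights. If you'd rather reconstruct the model from the saved JSON in a separate script, here's a minimal sketch using the file names saved above:

        from tensorflow.keras.models import model_from_json

        # rebuild the architecture from the saved JSON, then load the trained weights
        with open("model.json", "r") as json_file:
            model = model_from_json(json_file.read())
        model.load_weights("model.h5")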
