As universities and high schools shut down indefinitely, college professors and educators the world over have been forced to abruptly move their lectures onto video streaming applications. One of the most jarring shifts is the loss of the ability to "read the room": most professors and public speakers actively gauge the mood of their audience from body language and facial cues. If students seem drowsy in the classroom, they adjust accordingly, speeding up, slowing down, or pausing for questions.
On a video conference call, however, the professor must focus on sharing their own screen and moving around the elements of their digital presentation. Even if they choose to view all the students' webcams, it is unrealistic for a professor to quickly and accurately gauge the mood of multiple webcam feeds on one screen, and the call itself gives no signal of the students' interest and awareness.
I wanted to do anything I could to help our professors, giving them the tools to teach as comfortably and effectively as possible while ensuring students give their full attention.
The tool also provides performance analysis for students while they work from home, showing how their mood affects their overall performance, and can offer recommendations to help maximize that performance.
What it does

It detects when a person is drowsy and alerts them. It also detects the user's emotions, giving professors an idea of which students are attentive, and can generate a report for the professor during an online lecture. Finally, it calculates an interest score for each student.

How I built it

The first part continuously takes screenshots throughout a video conferencing call and feeds them into a Google Vision API implementation that detects moods. I also built my own emotion detector, which can detect a person's emotions in case the Google API cannot be reached for any reason. Finally, I built drowsiness detection with OpenCV and Python libraries, which tracks the movement of the eyes.
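The usual signal behind this kind of eye-movement drowsiness detection is the eye aspect ratio (EAR), which drops toward zero as the eye closes. Below is a minimal sketch of that idea; the six-landmark ordering follows the common dlib convention, and the threshold and frame-count values are illustrative assumptions, not the project's actual parameters:

```python
import numpy as np

def eye_aspect_ratio(eye):
    """Eye aspect ratio (EAR) for six (x, y) landmarks.

    eye[0] and eye[3] are the horizontal eye corners; eye[1]/eye[2]
    sit on the upper lid and eye[5]/eye[4] on the lower lid
    (the common dlib landmark ordering).
    """
    eye = np.asarray(eye, dtype=float)
    a = np.linalg.norm(eye[1] - eye[5])  # first vertical distance
    b = np.linalg.norm(eye[2] - eye[4])  # second vertical distance
    c = np.linalg.norm(eye[0] - eye[3])  # horizontal distance
    return (a + b) / (2.0 * c)

def is_drowsy(ear_values, threshold=0.21, min_consecutive=15):
    """Flag drowsiness when the EAR stays below the threshold for
    min_consecutive frames in a row (both values are tunable guesses)."""
    run = 0
    for ear in ear_values:
        run = run + 1 if ear < threshold else 0
        if run >= min_consecutive:
            return True
    return False

# An open eye has a clearly positive EAR; a nearly closed eye is near 0.
open_eye = [(0, 2), (2, 4), (4, 4), (6, 2), (4, 0), (2, 0)]
closed_eye = [(0, 2), (2, 2.2), (4, 2.2), (6, 2), (4, 1.8), (2, 1.8)]
print(round(eye_aspect_ratio(open_eye), 3))
print(round(eye_aspect_ratio(closed_eye), 3))
```

In a real pipeline the landmarks would come from a face/eye detector (e.g. OpenCV cascades) on each webcam frame, and the per-frame EAR values would feed `is_drowsy` to trigger the alert.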
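The per-student interest score and the professor's report can be derived by weighting the emotion labels detected over the lecture. A minimal sketch, assuming a hypothetical weighting of common emotion labels (the weights, labels, and student names below are illustrative, not the project's actual values):

```python
from collections import Counter

# Hypothetical weights: engaged emotions raise the interest score,
# drowsy or bored expressions lower it.
EMOTION_WEIGHTS = {
    "happy": 1.0,
    "surprised": 0.8,
    "neutral": 0.5,
    "sad": 0.2,
    "drowsy": 0.0,
}

def interest_score(emotions):
    """Average the weights of the emotions detected per frame (0..1)."""
    if not emotions:
        return 0.0
    return sum(EMOTION_WEIGHTS.get(e, 0.5) for e in emotions) / len(emotions)

def lecture_report(per_student):
    """Build the report a professor would see: interest score and the
    most frequent emotion for each student, sorted by score."""
    rows = []
    for name, emotions in per_student.items():
        top = Counter(emotions).most_common(1)[0][0] if emotions else "n/a"
        rows.append((name, round(interest_score(emotions), 2), top))
    return sorted(rows, key=lambda row: row[1], reverse=True)

report = lecture_report({
    "alice": ["happy", "neutral", "happy", "surprised"],
    "bob": ["drowsy", "drowsy", "neutral", "sad"],
})
print(report)
```

The emotion lists would be filled by the Vision API or the fallback detector, one label per screenshot, so the report reflects the whole lecture rather than a single moment.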
Challenges I ran into

My original plan was to use the Zoom/Skype API, but that wasn't possible, so I decided to use the default camera feed to run the machine learning algorithms.
Accomplishments that I'm proud of

I was able to get very good accuracy with the trained model, which is especially useful during the coronavirus outbreak.
What I learned

I learned about Google Cloud and gained more detailed experience with OpenCV.
What's next for MoodEduCare

Integrating this across all video streaming platforms and online courses to let teachers/professors know about the performance of their students. This can be expanded to the workplace as well, especially for people working from home.
Built With

cascades, keras, numpy, opencv, python, tensorflow
- Decoded Hacks: Adjusting to COVID-19
- COVID-19 Global Church Hackathon
- Hack Kosice Digital
- COVID-19 Global Hackathon 1.0
- Hack the Crisis Denmark
- HackCOVID 3: Logistics
- HackFromHome - Round 2