Inspiration

As four high school students, we have firsthand experience with the problems of online learning. Video-conferencing platforms like Zoom, WebEx, and Google Hangouts are tools built for businesses, not a replacement for the classroom. Although they support basic conversations and lectures, they lack the features needed to deliver the kind of personalized learning available in physical schools: teachers can barely see the class while presenting, and students struggle to ask for clarification. I (Amrit) have a learning disability, and it is extremely difficult for me and my peers to get the academic support we need over Zoom, which our high school has adopted. If a student gets distracted in a physical classroom, a gentle reminder from the teacher is all that's needed, but no such option exists online. Likewise, if a student isn't understanding the material, the teacher can slow down mid-class, but this too is impossible through the current system. We propose InsideScoop: a conferencing-system-agnostic platform that uses convolutional neural networks to give teachers insight into the learning and engagement of every student in the class, so no student is left behind.

What it does

InsideScoop uses PyTorch deep-learning models to personalize each student's experience in class by giving teachers actionable, real-time reports. In a simple, intuitive teacher application, InsideScoop displays student attentiveness, engagement, participation, and understanding of the material presented in class, so that students who need more help can get it. Because our C++-based video-analysis engine runs on the student's own machine, InsideScoop can deliver insights regardless of the student's conferencing software (Zoom, WebEx, Hangouts, MS Teams, etc.) or operating system (Windows, Linux, macOS). The insights in the teacher application are powered by convolutional neural networks that analyze student engagement in a privacy-respecting way.

Before InsideScoop alerts the teacher that a specific student might be getting off task, it first nudges the student, giving them a chance to re-engage with the content without teacher intervention. We hope this reinforces self-correcting behavior in the future. InsideScoop's insights extend beyond distracted students: if our machine-learning models predict that a student is disengaged, we alert the teacher to slow their pace. This removes the need for teachers to stop teaching and ask the entire class whether or not they understand the content (a question that shy students rarely answer).

In addition to engagement, we also measure each student's participation through audio input, and we even generate a participation grade by curving speaking time against a Gaussian distribution of the whole class. This sort of personalized education has been a massive casualty of the transition to online school, especially for students with learning disabilities. We believe that InsideScoop can bridge the gap of personalized learning in a very impersonal education system.
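
Here is a minimal sketch of that grading curve; the function, thresholds, and letter bands are illustrative rather than our exact production logic:

```python
import statistics

def participation_grade(seconds_spoken: float, class_times: list[float]) -> str:
    """Curve one student's speaking time against the class distribution."""
    mu = statistics.mean(class_times)
    # Guard against a tiny class or zero variance.
    sigma = statistics.pstdev(class_times) or 1.0
    z = (seconds_spoken - mu) / sigma
    # Map the z-score onto letter bands (cutoffs are illustrative).
    if z >= 0.5:
        return "A"
    if z >= -0.5:
        return "B"
    if z >= -1.5:
        return "C"
    return "D"

# Example: 90 seconds of speaking in a class averaging 60 seconds earns an A.
print(participation_grade(90, [30, 45, 60, 75, 90]))
```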

How I built it

We implemented two applications: a Swift macOS app for teachers that displays student engagement and class participation in real time, and a cross-platform student app in C++ that processes data and uploads it to a WebSocket backend server on AWS that we wrote ourselves. The student app analyzes streaming audio from the user's microphone during class to calculate how long the student participated in the conversation. It sends this to the socket along with periodic images captured from the student's camera, which are used to recognize their engagement during class. On the AWS backend, a PyTorch convolutional neural network that we trained classifies each student's engagement, and the results are forwarded to the teacher app, which displays all of this information in an intuitive dashboard. The dashboard features a participation letter grade computed by curving each student's speaking time against a normal distribution fitted to the whole class, as sketched in the previous section.

In terms of project-management tools, we used GitHub for version control, Slack for communication, and Trello for issue tracking. Most of our engineering was done pair-programming style over Zoom, which helped us work through bugs faster. We took the entire platform from idea to app during the hackathon, and we look forward to seeing where it takes us!
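
The student app itself is written in C++, but the message flow is easy to sketch in Python with the websockets library; the endpoint and field names below are illustrative, not our exact schema:

```python
import asyncio
import base64
import json

import websockets  # pip install websockets

async def report(uri: str, student_id: str, frame_jpeg: bytes, seconds_spoken: float):
    """Send one sample: a camera frame plus the participation time so far."""
    async with websockets.connect(uri) as ws:
        await ws.send(json.dumps({
            "student": student_id,
            "frame": base64.b64encode(frame_jpeg).decode("ascii"),
            "participation_seconds": seconds_spoken,
        }))

# Hypothetical usage from the capture loop:
# asyncio.run(report("wss://example.amazonaws.com/class", "amrit", frame, 42.0))
```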

Challenges I ran into

Backend: Since we were using WebSockets to facilitate communication between the student and teacher clients, sharing information across these sockets, which are isolated to their own threads, was quite difficult. Ultimately, Amrit ended up using RabbitMQ as a message broker between the student and teacher connections. Additionally, there was very little training data for classifying student engagement with convolutional neural networks, so rather than train from scratch, we fine-tuned an existing VGGNet model.
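
In PyTorch, that fine-tuning setup amounts to freezing the pretrained convolutional layers and swapping out the final classifier layer. A minimal sketch, with a simplified two-class head:

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from VGG16 pretrained on ImageNet.
model = models.vgg16(pretrained=True)

# Freeze the convolutional feature extractor so only the head trains.
for param in model.features.parameters():
    param.requires_grad = False

# Replace the final 1000-way ImageNet layer with our engagement classes.
model.classifier[6] = nn.Linear(4096, 2)  # e.g. engaged vs. distracted

# Optimize only the parameters that still require gradients.
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
```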

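As for the broker fix above, the pattern looks roughly like this with pika (the queue name and payload are illustrative):

```python
import json

import pika  # pip install pika

# Publisher side: a student socket handler pushes an update to the broker.
conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = conn.channel()
channel.queue_declare(queue="engagement_updates")
channel.basic_publish(
    exchange="",
    routing_key="engagement_updates",
    body=json.dumps({"student": "amrit", "state": "distracted"}),
)

# Consumer side: the teacher-facing thread drains the queue.
def on_update(ch, method, properties, body):
    print("forward to teacher app:", json.loads(body))

channel.basic_consume(
    queue="engagement_updates", on_message_callback=on_update, auto_ack=True
)
# channel.start_consuming()  # blocks, so it runs on its own thread
```

In practice each thread opens its own connection, since pika connections are not thread-safe.
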
Student App: Since C++ is a very low-level language, Sasha had to debug a lot of cryptic errors and unexpected behavior. Stepping through the code with a debugger like gdb made this somewhat easier.

Teacher App: For us, the biggest challenge was integration with the AWS WebSocket server. Driving the UX/UI from a socket that receives information in real time was daunting at first, but we eventually powered through it.

Accomplishments that I'm proud of

The biggest thing we are proud of is the product we were able to put together in such a limited time. We came in without a single line of code and finished with a macOS teacher app, a cross-platform student app in C++, an AWS WebSocket backend server that we wrote ourselves, and an accurate computer-vision model. That feels truly great.

Another thing we are very proud of is the way it looks. We believe that engineers sometimes get too caught up in the technical details and forget about what the user actually interacts with. As a team, we made sure that wasn't the case by building a clean, intuitive UX/UI.

What I learned

This hackathon taught us a lot about the difference between assumption and reality. That was especially true on the backend, where the implementation of the machine-learning model ended up completely different from what we started with, and that's okay.

What's next for InsideScoop

Since we are coming out of the hackathon with a fully functioning product, we would love to pilot it at our local high schools! To do this successfully, we plan to establish student ambassadors at each of our partner schools, which should make the transition to our product a smooth one.

We would also like to continue improving our tech stack by refining the ML model to predict more states, thereby providing teachers with even more useful and actionable data. We see this product not only as a solution for online classes suffering due to COVID-19 but also as a tool for online schools as the education sector becomes increasingly digitized.

Technologies

amazon-web-services, c++, convolutional-neural-networks, keras, project-catalyst, python, qt, speech-recognition, swift, tensorflow, xcode
