Inspiration

My teammate and I are both into learning and reading, and we both come from immigrant backgrounds. For many immigrants, reading in English is one of the most difficult things to do, since we constantly have to look up words on every line of the page. It was always hard to finish a book, especially one that is intellectually challenging. We were eager to learn and read faster, so we came up with a solution for a faster reading and learning experience.

What it does

Word Assistant is an application that makes reading and learning more accessible. A user takes a picture of a page in a book with our app, and the app displays the exact words in the picture. The user can then tap any individual word to inspect its definition. A user can also save words to learn or look up again later into a word directory, which stores all the words the user is interested in learning. We think the app already offers a good experience for deaf users, since definitions are looked up visually, and we envision developing a voice assistant to help blind users learn faster as well.
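For a concrete picture of the word directory, here is a minimal sketch of how saving a tapped word could look on the client, assuming the modular Firebase JavaScript SDK and a hypothetical `users/{uid}/words/{word}` Firestore layout. The collection names, fields, and config are illustrative, not our actual schema:

```javascript
// Sketch only: saving a tapped word into the user's word directory.
import { initializeApp } from 'firebase/app';
import { getFirestore, doc, setDoc, serverTimestamp } from 'firebase/firestore';

const app = initializeApp({ projectId: 'word-assistant-demo' }); // placeholder config
const db = getFirestore(app);

// One document per saved word, keyed by the word itself, so repeated saves
// simply overwrite instead of creating duplicate entries.
async function saveWord(uid, word, definition) {
  await setDoc(doc(db, 'users', uid, 'words', word.toLowerCase()), {
    word: word.toLowerCase(),
    definition,
    savedAt: serverTimestamp(),
  });
}
```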

How we built it

We used many different technologies in this application. First, the app is split into a front end and a back end to balance the workload while keeping good usability, security, and speed. We store structured data in Firestore, a NoSQL database, and files and images in Cloud Storage, with Cloud Functions as a security middle layer where we developed several asynchronous HTTP APIs for a faster navigation experience. We also incorporated computer vision, which analyzes the image and extracts the words from it. To look up the definition of each word, we call the API of one of the most trustworthy and reliable dictionaries, the Oxford Dictionary; and to make each definition-lookup request correctly, we resolved deadlocks caused by circular dependencies by multi-threading and running functions asynchronously. Finally, we built the app with React Native, a cutting-edge cross-platform framework, so we can offer users both iOS and Android apps.
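To illustrate the back-end flow, here is a minimal sketch of an HTTPS Cloud Function that takes an image and returns the words it contains, using the Cloud Vision client for Node.js. The endpoint name, request shape, and word-cleaning rules are our illustration, not the app's exact API:

```javascript
// Sketch only: an HTTPS Cloud Function that OCRs a page photo into words.
const functions = require('firebase-functions');
const vision = require('@google-cloud/vision');

const client = new vision.ImageAnnotatorClient();

exports.extractWords = functions.https.onRequest(async (req, res) => {
  const { imageUrl } = req.body; // e.g. a Cloud Storage or public image URL
  if (!imageUrl) {
    res.status(400).send({ error: 'imageUrl is required' });
    return;
  }
  // Cloud Vision OCR: fullTextAnnotation holds the whole page's text.
  const [result] = await client.textDetection(imageUrl);
  const fullText = result.fullTextAnnotation ? result.fullTextAnnotation.text : '';
  // Split the page into clean lowercase words for the client to render.
  const words = fullText
    .split(/\s+/)
    .map((w) => w.replace(/[^a-zA-Z'-]/g, '').toLowerCase())
    .filter((w) => w.length > 0);
  res.status(200).send({ words });
});
```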

Challenges we ran into

There were a few challenges that we ran into while developing this project. First, it was somewhat hard to divide the work proportionally so that both of us carried a fair share. Second, we got stuck on deadlock problems: we were making network requests inside nested loops, which created latency and sometimes returned empty values. Moreover, the dictionary API was surprisingly hard to deal with, since the free versions are usually buggy and unreliable, so we went with an API that costs very little but offers fast, reliable, and stable definition lookups. Finally, it is always hard to lay data out perfectly in the UI, especially when we want to create many screens within the application.
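To show the shape of the fix for the nested-loop problem, here is a minimal sketch of the concurrent-lookup pattern, assuming a hypothetical fetchDefinition(word) helper that calls the dictionary API. Firing the requests together and awaiting them as a batch removes the long sequential waits, and the empty results they caused, that come from awaiting inside nested loops:

```javascript
// Sketch only: look up many words concurrently instead of one at a time.
async function lookupAll(words, fetchDefinition) {
  const lookups = words.map(async (word) => {
    try {
      // fetchDefinition is a hypothetical helper that calls the dictionary API.
      const definition = await fetchDefinition(word);
      return { word, definition };
    } catch (err) {
      // One failed lookup should not sink the whole batch.
      return { word, definition: null };
    }
  });
  // All requests are already in flight; this just waits for them to finish.
  return Promise.all(lookups);
}
```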

Accomplishments that we're proud of

First, it was extremely fun and exciting for both my partner and me to work together over two short days and ship the product immediately. We were both eager to get it done and communicated well along the way. We split the work into front end and back end, so working together was pretty smooth overall. We are also proud that we overcame the obstacles along the way: we had to read a lot of documentation and learn about different areas of software engineering to pick the best technologies to work with.

What we learned

Most importantly, we learned how valuable it is to split the work in two so that each of us could make the most of our time and ship a product we are proud of. We also learned how to build a serverless back end, work with a NoSQL database, use React Native and multi-threading, and weigh the pros and cons of each technology, and we learned how critical it is to communicate constantly to make sure we are both on the same page.

What's next for Word Assistant

Our app does not stop at the hackathon. We are both eager to use technology to help people learn. After the features discussed above are done, we will develop more features that offer even more complete accessibility, not only for typical English learners but also for blind users and people who prefer learning or looking up words by voice. We will let them look up words just by speaking, and we will also add a feature that most voice assistants lack: saving the words and preparing quizzes so users can review the words they want to understand and remember.

Where is this project going?

Going forward, accessibility is our top priority. We don't want people to be excluded from using our application, regardless of their background or circumstances.

As a result, we intend to rework our project to integrate better with built-in phone screen readers, voice commands, and older mobile devices.
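As a hint of what the screen-reader integration could look like, here is a minimal sketch of a tappable word exposing itself to VoiceOver and TalkBack through React Native's accessibility props; the component and handler names are illustrative, not our actual code:

```javascript
// Sketch only: a word that screen readers can announce and activate.
import React from 'react';
import { Text, TouchableOpacity } from 'react-native';

export default function TappableWord({ word, onLookup }) {
  return (
    <TouchableOpacity
      accessible={true}
      accessibilityRole="button"
      accessibilityLabel={`Look up the word ${word}`}
      accessibilityHint="Shows the definition and lets you save the word"
      onPress={() => onLookup(word)}
    >
      <Text>{word}</Text>
    </TouchableOpacity>
  );
}
```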

What did we not have time to finish?

We wanted to integrate Google Assistant so that we could store every definition a user asks for and set up daily routines where users practice the vocabulary they have looked up.

Additionally, we intended to incorporate AR into our application, but it was technologically expensive to build and would impose a barrier on phones without AR support.

We wanted to find a way to run our image-to-text API on the client so that communities with low bandwidth wouldn't have to wait excessively long.

Additionally, we wanted to reduce the application size so that more rural communities could more readily access our application.

Lastly, we wanted to be able to run our application entirely on the client so that internet access is not required.

Technologies

computer-vision, firebase, google-cloud, google-cloud-computer-vision, google-cloud-function, google-cloud-serverless-backend, google-firebase-backend, google-storage, javascript, machine-learning, natural-language-processing, nosql, react-native, wordsapi

Devpost Software Identifier

256387