Our inspiration was research published about the Covid-19 pandemic. The rate of transmission of the Covid-19 virus depends on how quickly we can test and diagnose patients, and current testing methods can take days to return results. While patients wait for results, they may already have spread the disease to other patients, doctors, or people in their community.
We wanted to create a faster method of detection that could be performed by anyone, not just a doctor or test facility. This way, we can get infected patients the assistance they need without the wait, as every minute wasted could mean a life lost.
We decided to put our machine learning skills to use and build an application where anyone could use a CT scan to determine whether a patient has Covid-19, Pneumonia (a disease with very similar symptoms), or no disease at all. This method of detection would allow for easy diagnosis in a timely manner.
What it does
It's very simple: the user takes a picture of a CT scan, and our image classification ML model determines whether the image shows no disease, Covid-19, Bacterial Pneumonia, or Viral Pneumonia. Then, the user can read more information about the disease and add a patient to the management system, including general information, images, and notes that could be handy.
How we built it
Our app is built around two of Apple's machine learning frameworks: Core ML and Create ML. We chose these over TensorFlow because of our experience with them and their easier integration with Swift and Xcode. We built a real-time camera screen using Apple's AVCam sample, then added our machine learning model to perform real-time detection. We also added a prediction label, where our model outputs its prediction and how confident it is.
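The real-time detection described above can be sketched with Vision running a Core ML classifier on a camera frame. This is a minimal sketch, not our exact code; `CovidScanClassifier` stands in for whatever name Xcode generates for the compiled `.mlmodel`.

```swift
import Vision
import CoreML
import UIKit

// Classify one frame (as a CGImage) and report the top label and its confidence.
// `CovidScanClassifier` is a placeholder for the Xcode-generated model class.
func classify(_ image: CGImage, completion: @escaping (String, Float) -> Void) {
    guard let model = try? VNCoreMLModel(for: CovidScanClassifier().model) else { return }
    let request = VNCoreMLRequest(model: model) { request, _ in
        // Vision returns classification results sorted by confidence.
        guard let top = (request.results as? [VNClassificationObservation])?.first else { return }
        completion(top.identifier, top.confidence)   // feeds the prediction label
    }
    let handler = VNImageRequestHandler(cgImage: image)
    try? handler.perform([request])
}
```

In the live camera screen this runs once per captured frame, and the completion handler updates the prediction label on the main queue.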
We also used SnapKit and Swift's data structures to build a patient management system, where a doctor can input key information about a patient, such as demographics, condition, and an image of their CT scan, for easy recall and analysis.
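The patient record can be sketched as a plain Swift struct held in an array; field names here are illustrative, not our exact schema.

```swift
import UIKit

// A hypothetical patient record for the management system.
struct Patient {
    let name: String
    let age: Int
    let condition: String     // e.g. "Covid-19", "Viral Pneumonia"
    let ctScan: UIImage?      // the scan the doctor attached, if any
    var notes: [String]       // free-form notes for recall later
}

// The management system can simply keep an array of records.
var patients: [Patient] = []
patients.append(Patient(name: "Jane Doe", age: 54,
                        condition: "Covid-19", ctScan: nil,
                        notes: ["Admitted, awaiting follow-up scan"]))
```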
Our user interface was built entirely with UIKit and UITableView, as we wanted the GUI to be easy to use and pleasant to the eye. UIKit is the standard UI framework for iOS, and UITableView let us build simple tables for our management database.
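Listing patients in a table comes down to the standard UITableView data source pattern. A minimal sketch (record fields and the cell identifier are assumptions):

```swift
import UIKit

// A table screen backed by an array of (name, condition) pairs.
class PatientListController: UITableViewController {
    var patients: [(name: String, condition: String)] = []

    override func tableView(_ tableView: UITableView,
                            numberOfRowsInSection section: Int) -> Int {
        patients.count
    }

    override func tableView(_ tableView: UITableView,
                            cellForRowAt indexPath: IndexPath) -> UITableViewCell {
        // "PatientCell" is a hypothetical reuse identifier registered elsewhere.
        let cell = tableView.dequeueReusableCell(withIdentifier: "PatientCell",
                                                 for: indexPath)
        let patient = patients[indexPath.row]
        cell.textLabel?.text = patient.name
        cell.detailTextLabel?.text = patient.condition
        return cell
    }
}
```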
Challenges we ran into
Our main challenge was implementing the machine learning model, as it required a huge number of labeled images, which were hard to locate. After hours of searching, we found datasets we could use to train the model. Training was also slow due to the large volume of images, and we had to wait overnight for it to produce a coherent model we could use.
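Training an image classifier with Create ML (which runs on macOS) can be sketched as below. The paths and iteration count are assumptions; Create ML expects the training folder to contain one subfolder per class label (e.g. `Covid-19`, `Bacterial Pneumonia`).

```swift
import CreateML
import Foundation

// Hypothetical path: one subfolder per class inside "train".
let trainingDir = URL(fileURLWithPath: "/data/ct-scans/train")

// Train an image classifier; other parameters keep their defaults.
let classifier = try MLImageClassifier(
    trainingData: .labeledDirectories(at: trainingDir),
    parameters: MLImageClassifier.ModelParameters(maxIterations: 25)
)

print(classifier.trainingMetrics)   // accuracy on the training set

// Export the .mlmodel for the iOS app to bundle.
try classifier.write(to: URL(fileURLWithPath: "/data/CovidScanClassifier.mlmodel"))
```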
We also had some minor challenges with UITableView and our patient system, as storing an image and recalling it later was not something we were familiar with, and it took some time to learn. Once we figured it out, though, it was straightforward.
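One way to persist a CT scan image and recall it later is to write it as a JPEG into the app's documents directory. A minimal sketch (function names are ours, not a library API):

```swift
import UIKit

// Save a scan under a given name; returns the file URL on success.
func saveScan(_ image: UIImage, named name: String) -> URL? {
    guard let data = image.jpegData(compressionQuality: 0.9) else { return nil }
    let docs = FileManager.default.urls(for: .documentDirectory,
                                        in: .userDomainMask)[0]
    let url = docs.appendingPathComponent("\(name).jpg")
    try? data.write(to: url)
    return url
}

// Recall a previously saved scan by name.
func loadScan(named name: String) -> UIImage? {
    let docs = FileManager.default.urls(for: .documentDirectory,
                                        in: .userDomainMask)[0]
    return UIImage(contentsOfFile: docs.appendingPathComponent("\(name).jpg").path)
}
```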
Accomplishments that we're proud of
We're proud of our machine learning model, because it needs to be highly accurate if it is to be used in hospitals. Finding and classifying a large number of images was a daunting task, but it was well worth the effort, as it made our model more accurate at diagnosis.
We are also very proud of our UI, as we wanted to make design a priority, and we can safely say that we did. We were not very familiar with UITableView, so implementing it for our patient system was also something new and fun to experience.
What we learned
We learned a lot about design and proper data storage in Swift. We usually treat our UI as an afterthought, but making it a real priority taught us something new about the challenges that come with design. It was frustrating when an image was just off-center, and fixing those details showed us the trouble designers go through and deepened our respect for them immensely. We also finally understood how to store information in Swift using arrays and dictionaries, and how useful such data structures can be for complicated systems.
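The kind of storage we mean can be shown in a few lines: a dictionary gives fast lookup by a key, while an array preserves ordering. Names here are illustrative.

```swift
// Dictionary: patient ID → that patient's notes (fast keyed lookup).
var notesByPatientID: [String: [String]] = [:]
notesByPatientID["P-001", default: []].append("Follow-up CT in 2 weeks")

// Array: preserves the order patients were admitted in.
var admissionOrder: [String] = []
admissionOrder.append("P-001")

print(notesByPatientID["P-001"] ?? [])   // ["Follow-up CT in 2 weeks"]
```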
We also grasped how much of a difference high-resolution images can make: the more data an image contains, the better our model was able to train, which was a new discovery for both of us. While it seems obvious, we had never prioritized high-resolution images before this project.
What's next for CovidScan
We definitely want to add more diseases that our model can recognize, so the app can be useful beyond the Covid-19 pandemic. We also hope to keep training our model as more data is published, since that would allow for more accurate diagnosis, which is extremely important when a patient's health is at stake.
Built with
coreml, createml, ios, machine-learning, snapkit, swift, uikit