Inspiration
Whenever my friends and I came back from a big trip, family and friends would ask us, "Can I see pictures?". Filtering through the photos was always a hassle, and the filter changed with the audience. With my friends, I had to scroll through hundreds of the same boring photos just to show them the crazy highlights of our trip. With my family, I had to make sure they never caught sight of our wild nights.
What it does
picHoo filters photos based on the intended audience and a custom search. The program takes two inputs from the user. For time's sake, I pre-uploaded some photos, but ideally the program would accept any photos of the user's choosing. The user is first prompted to enter a keyword describing the kind of photos they want to view (e.g., "winning a soccer game"). The program then checks whether any of the photos contain objects related to that keyword; for the example above, common objects might be a soccer ball, a soccer goal, or even grass. The user is also asked who the photos are meant for: family or friends.

Based on those responses, the program determines whether the given keyword has a positive or negative connotation. Although the program categorizes all photos either way, it then sifts through them for emotions matching that connotation and sets those as the default emotions to look for. For example, since the phrase "winning a soccer game" has an overall positive connotation (via "winning"), the program sets the default to joyful, happy emotions in photos. On the other hand, for a keyword with a negative connotation such as "funeral", the program sets the default to angry emotions. The reasoning behind tying the default to the keyword's connotation is that those are the emotions usually found at those events: you wouldn't expect someone smiling at a funeral unless some incident occurred, and similarly, finding someone angry after winning a soccer game is not something you see every day.
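The selection logic described above can be sketched in plain Python. This is a minimal illustration, not the actual repo code: it assumes each photo has already been annotated with object labels and a dominant face emotion (as the Vision API would provide), and the `Photo`, `pick_default_emotion`, and `filter_photos` names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Photo:
    name: str
    labels: set = field(default_factory=set)  # objects detected in the photo
    emotion: str = "neutral"                  # dominant face emotion

def pick_default_emotion(sentiment_score):
    """Map a sentiment score (-1.0 to 1.0) to the default emotion to look for."""
    return "joy" if sentiment_score >= 0 else "anger"

def filter_photos(photos, keyword_objects, sentiment_score):
    """Keep photos whose labels match the keyword AND whose emotion fits its connotation."""
    target = pick_default_emotion(sentiment_score)
    return [p for p in photos
            if p.labels & keyword_objects and p.emotion == target]

photos = [
    Photo("goal.jpg", {"soccer ball", "grass"}, "joy"),
    Photo("redcard.jpg", {"soccer ball", "referee"}, "anger"),
    Photo("beach.jpg", {"sand", "ocean"}, "joy"),
]
# "winning a soccer game" scores positive, so the default emotion is joy:
matches = filter_photos(photos, {"soccer ball", "soccer goal", "grass"}, 0.8)
# matches contains only goal.jpg
```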
PLEASE REFER TO THE COMMENT BELOW FOR THE GITHUB REPO/CODE!!!
How I built it
The coding portion of the program was built in Python using two Google Cloud APIs: the Cloud Vision API and the Cloud Natural Language API. The Vision API identifies the emotions of the people in a photo as well as the physical objects it contains. The Natural Language API performs sentiment analysis to determine the connotation of the keyword the user supplies to describe the event-related photos they want to view. The user interface was designed in Figma.
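A rough sketch of how the two APIs could be wired together, based on the description above. The cloud calls require the `google-cloud-vision` and `google-cloud-language` packages plus credentials, so they live inside an `annotate` function that is never run here; `annotate`, `LIKELY_NAMES`, and `is_joyful` are illustrative names, not part of the actual repo.

```python
# Vision face annotations report joy as a likelihood enum; treating the top
# two levels as "joyful" is an assumption for this sketch.
LIKELY_NAMES = ("LIKELY", "VERY_LIKELY")

def is_joyful(joy_likelihood_name):
    return joy_likelihood_name in LIKELY_NAMES

def annotate(photo_path, keyword):
    """Return the keyword's sentiment score, the photo's labels, and face joy flags."""
    # Deferred imports: needs google-cloud-vision / google-cloud-language
    # and application-default credentials to actually run.
    from google.cloud import language_v1, vision

    # Sentiment of the user's keyword (positive score -> positive connotation)
    lang_client = language_v1.LanguageServiceClient()
    doc = language_v1.Document(content=keyword,
                               type_=language_v1.Document.Type.PLAIN_TEXT)
    sentiment = lang_client.analyze_sentiment(
        request={"document": doc}).document_sentiment

    # Objects and face emotions in one photo
    vision_client = vision.ImageAnnotatorClient()
    with open(photo_path, "rb") as f:
        image = vision.Image(content=f.read())
    labels = vision_client.label_detection(image=image).label_annotations
    faces = vision_client.face_detection(image=image).face_annotations

    return (sentiment.score,
            [label.description for label in labels],
            [is_joyful(face.joy_likelihood.name) for face in faces])
```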
Challenges I ran into
On the coding side, learning to implement the different Google Cloud APIs was very difficult. On the design side, mastering the loading animations and vertical photo scrolling proved a challenge.
Accomplishments that I'm proud of
Being able to successfully run the Cloud APIs on my own computer felt like a real accomplishment.
What I learned
What's next for picHoo
Try it out
figma, google-cloud-natural-language-api, google-cloud-vision-api, python