Implementing an image-based US State recognition tool using a machine learning model

God's Eye

Machine Learning Fall 2019 Project 1

Background

Humans are exceptional at determining location from cues like driving direction, vegetation, weather, language, and other less obvious features. We attempt to train a deep neural network that applies similar reasoning to Google Street View images to determine which U.S. state a picture was taken in.

The Data

The dataset we are using is a collection of Google Street View images from DeepGeo: Photo Localization with Deep Neural Network. Each state contributes 10,000 images, divided into a training (90%) and test (10%) set. Each input sample consists of 4 images, one facing each cardinal direction.
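To make the sample structure concrete, the per-state 90/10 split described above could be sketched as follows. The file layout and helper name here are hypothetical illustrations, not the dataset's actual format:

```python
import random

def split_state_images(samples, test_fraction=0.1, seed=0):
    """Shuffle one state's samples and split them into train/test sets."""
    samples = list(samples)
    random.Random(seed).shuffle(samples)
    n_test = int(len(samples) * test_fraction)
    return samples[n_test:], samples[:n_test]

# Hypothetical layout: 10,000 locations per state, each with 4 views.
paths = [f"ohio/{i:05d}_{d}.jpg" for i in range(10_000) for d in "NESW"]
# Group the four cardinal-direction views so one sample = one location.
samples = [paths[i:i + 4] for i in range(0, len(paths), 4)]
train, test = split_state_images(samples)
print(len(train), len(test))  # 9000 1000
```

Grouping the four views before shuffling keeps all images from one location in the same split, avoiding leakage between train and test.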

The Network

In the works! Check back in a week :)

Questions we hope to answer

How can we capture the effects of factors that are not binary, such as architectural styles or the text on signboards? Do we have to compose binary features that add up to these multi-faceted ones?
