This code pattern shows how to classify American Sign Language (ASL) alphabet images using PyTorch and deep learning networks. It uses a pretrained model from the PyTorch model zoo and retrains the last part of the network.
The code pattern uses PyTorch to build and train a deep learning model that classifies images into 29 classes (the 26 ASL alphabet letters, plus space, delete, and nothing), which can later help hard-of-hearing people communicate with others as well as with computers. The pattern uses a pretrained MobileNet model, defines a new classifier, and connects it to the network. It then trains this classifier, along with some of the last blocks of the network, on the data set. The pattern uses the Python and GPU environment in IBM® Watson™ Studio for faster training, which allows you to download, explore, build, and train your model. Learn more about available Watson Studio environments.
After completing this pattern, you will understand how to:
- Obtain a data set from Kaggle
- Explore the data and define transforms to preprocess images before training
- Define a classifier with an output layer of 29 classes
- Train the last blocks of the network along with the classifier you defined
- Test the trained model
To complete this pattern:
- Log in to Watson Studio.
- Get your Kaggle API credentials.
- Run the Jupyter Notebook in Watson Studio.
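With your Kaggle API credentials in hand, downloading the data set from inside the notebook environment might look like this. The dataset slug below refers to the public "ASL Alphabet" data set on Kaggle; adjust it if your data set differs.

```shell
# Place your kaggle.json API token where the Kaggle CLI expects it.
mkdir -p ~/.kaggle
cp kaggle.json ~/.kaggle/
chmod 600 ~/.kaggle/kaggle.json

# Install the Kaggle CLI, then download and unpack the data set.
pip install kaggle
kaggle datasets download -d grassknoted/asl-alphabet
unzip -q asl-alphabet.zip -d data/
```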
Get detailed steps in the README file. Those steps show how to:
- Sign up for Watson Studio.
- Create a new project.
- Create the notebook.
- Run the notebook.
- Test your model.