# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python Docker image: https://github.com/kaggle/docker-python
# For example, here are several helpful packages to load

import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)

# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory

import os
for dirname, _, filenames in os.walk('/kaggle/input'):
    for filename in filenames:
        print(os.path.join(dirname, filename))

# You can write up to 5GB to the current directory (/kaggle/working/), which is preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
/kaggle/input/sign-language-mnist/american_sign_language.PNG
/kaggle/input/sign-language-mnist/amer_sign2.png
/kaggle/input/sign-language-mnist/sign_mnist_train.csv
/kaggle/input/sign-language-mnist/amer_sign3.png
/kaggle/input/sign-language-mnist/sign_mnist_test.csv
/kaggle/input/sign-language-mnist/sign_mnist_train/sign_mnist_train.csv
/kaggle/input/sign-language-mnist/sign_mnist_test/sign_mnist_test.csv

Image classification with Sign Language MNIST using PyTorch

This is part of a course project conducted by jovian.ml with freeCodeCamp. In this project, I have used the Sign Language MNIST dataset to classify sign language images using different models: logistic regression, a feed-forward neural network, and a convolutional neural network.
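
To make the three model families concrete, here is a minimal sketch of what each might look like in PyTorch. These are illustrative stand-ins, not the exact architectures used later in the notebook; the 26 output units follow the 0-25 label range described below (J and Z are unused).

import torch.nn as nn

# Logistic regression: a single linear layer over the flattened 28x28 pixels
logistic_model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 26),
)

# Feed-forward network: one hidden layer with a ReLU nonlinearity
feedforward_model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 128),
    nn.ReLU(),
    nn.Linear(128, 26),
)

# Convolutional network: one conv block, then a linear classifier
cnn_model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),  # 1x28x28 -> 16x28x28
    nn.ReLU(),
    nn.MaxPool2d(2),                             # 16x28x28 -> 16x14x14
    nn.Flatten(),
    nn.Linear(16 * 14 * 14, 26),
)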

project_name = 'final-project-jovian.ml'

The Sign Language MNIST dataset

The American Sign Language letter database of hand gestures represents a multi-class problem with 24 classes of letters (excluding J and Z, which require motion).

The dataset format is patterned to match the classic MNIST closely. Each training and test case carries a label (0-25) as a one-to-one map to the alphabetic letters A-Z (with no cases for 9=J or 25=Z because those gestures involve motion). The training data (27,455 cases) and test data (7,172 cases) are approximately half the size of standard MNIST but otherwise similar, with a header row of label, pixel1, pixel2, ..., pixel784, where each row represents a single 28x28 pixel image with grayscale values between 0 and 255. The original hand gesture images showed multiple users repeating each gesture against different backgrounds. The Sign Language MNIST data was created by greatly extending a small set (1,704) of color images that had not been cropped around the hand region of interest.
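
As a quick sketch of how that CSV layout translates into image arrays (assuming the label / pixel1..pixel784 column names given above):

import numpy as np
import pandas as pd

# Load the training CSV: first column is the label, remaining 784 columns are pixels
train_df = pd.read_csv('/kaggle/input/sign-language-mnist/sign_mnist_train.csv')

labels = train_df['label'].values                       # integers 0-25 (9 and 25 unused)
images = train_df.drop('label', axis=1).values          # shape (27455, 784)
images = images.reshape(-1, 28, 28).astype(np.float32) / 255.0  # scale pixels to [0, 1]

print(labels.shape, images.shape)  # expect (27455,) and (27455, 28, 28)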