# Project Description
## Points
* The field: Computer Vision.
* Within computer vision, a new technology called Human Pose Estimation.
* What is Human Pose Estimation?
* [Sign Language]: we see that there is a need for Sign Language recognition.
    * Recognition in emergency situations would create particular value.
* We want to investigate if it is possible to recognise Auslan signs using Human Pose Estimation.
* What did we do?
    * Built a proof-of-concept system.
    * To show that it is possible to use human pose estimation to perform Australian Sign Language Recognition.
* Features:
    * Managed to build a web application.
    * Recognises Auslan signs with a delay of 1-2 seconds after signing.
    * Built our own sign recognition machine learning model:
        * Recorded ourselves as data for our model.
        * Developed, trained and optimised our own model.
        * Achieved accuracy of about 90%.
* For reference:
    * Check out our Road to Endeavour video to see our journey.
    * Check out our Presentation video for an overview of what it is about.
    * Check out our Poster for more technical detail.
    * Check out our technical documentation for details on:
        * Data Processing
        * Model Development
        * System Design and Application
## Output
As humans, we influence the world through our bodies. We express our emotions and actions through body posture and orientation. For computers to be partners with humans, they have to perceive us and understand our behaviour - recognising our facial expressions, our gestures and our movements.
This is where Human Pose Estimation comes in.
Human Pose Estimation is a field of research in developing robust algorithms and expressive representations that can capture human pose and motion. The application of human pose estimation has been extended to many fields such as computer 3D animation, medical technology and sports.
But, can we leverage this powerful technology for Sign Language Recognition?
Approximately 20,000 Australians rely on Australian Sign Language (Auslan) to communicate every day. Yet we still see problems in communication due to the large gap between Auslan users and non-Auslan users. This problem becomes worse when both parties encounter emergency-like situations.
Our group aims to investigate the possibility of performing Sign Language Recognition using Human Pose Estimation.
To achieve this, we built a proof-of-concept system that recognises Auslan signs in real time using only a laptop and a webcam!
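As a rough illustration of how such a real-time pipeline can be structured (the class, parameter values and `classify` function below are illustrative sketches, not the project's actual code): each webcam frame is reduced to a vector of pose keypoints, the vectors are buffered into a sliding window, and the window is classified once enough frames have accumulated. At a typical 30 frames per second, a window of around 45 frames would account for a delay on the order of the 1-2 seconds reported for the system.

```python
from collections import deque

class SlidingWindowRecogniser:
    """Buffers per-frame pose keypoints and classifies fixed-size windows.

    `classify` is any function mapping a list of keypoint vectors to a
    label; it is injected here so the sketch stays self-contained.
    """

    def __init__(self, classify, window_size=45, stride=15):
        self.classify = classify
        self.window_size = window_size   # frames per prediction (~1.5 s at 30 fps)
        self.stride = stride             # frames between consecutive predictions
        self.buffer = deque(maxlen=window_size)
        self._since_last = 0

    def push_frame(self, keypoints):
        """Feed one frame's keypoints; return a label when a window is ready."""
        self.buffer.append(keypoints)
        self._since_last += 1
        if len(self.buffer) == self.window_size and self._since_last >= self.stride:
            self._since_last = 0
            return self.classify(list(self.buffer))
        return None

# Toy usage: tiny window, and a stand-in classifier that thresholds the
# mean of each frame's first keypoint coordinate.
recogniser = SlidingWindowRecogniser(
    classify=lambda w: "help" if sum(f[0] for f in w) / len(w) > 0.5 else "pain",
    window_size=4, stride=2,
)
labels = [recogniser.push_frame([x]) for x in (0.9, 0.9, 0.9, 0.9, 0.1, 0.1)]
# labels → [None, None, None, "help", None, "pain"]
```

The stride-versus-window trade-off is what lets a system like this feel "real time": predictions are emitted every few frames while each prediction still sees a second or two of context.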
Throughout our project journey, we achieved two notable outcomes.
First, we were able to launch our system as a web application for users at Endeavour to experience our project. Hop on to our application to have a look: https://real-time-pose-capstone.eee.unimelb.edu.au.
Second, we trained, developed and optimised our own sign language recognition model - achieving sign recognition with a delay of about 1-2 seconds and model accuracy of about 90%.
Feel free to contact us during the video sessions and shoot us any questions about our exciting project. For a more technical description of our project, please visit our project documentation website at the following link: https://relientm96.github.io/capstone2020/docs.html.
Looking forward to speaking with you about our project!
[17 October 2020]
[Modified by: yick]
**Project Title**
Gesture Recognition using Machine Learning
**Project Description**
As humans, we influence the world through our bodies. We express our emotions and actions through body posture and orientation. For computers to be partners with humans, they have to perceive us and understand our behaviour - recognising our facial expressions, our gestures and our movements.
This is where Human Pose Estimation comes in.
Human Pose Estimation is a field of research in developing robust algorithms and expressive representations that can capture human pose and motion. The application of human pose estimation has been extended to many fields such as computer 3D animation, medical technology and sports.
**Can we use this for Human Gesture Recognition?**
Our group aims to investigate the possibility of performing Gesture Recognition using Human Pose Estimation.
To achieve this, we have built a proof-of-concept system that recognises human gestures in real time using only a laptop and a webcam!
As example gestures for our system, we chose four unique Auslan signs: help, pain, ambulance and hospital. Our choice of gestures, and the project as a whole, do not represent the Auslan Community whatsoever, and should not be construed as an attempt to deploy it in any real-world scenario.
Throughout our project journey, we achieved two notable outcomes.
First, we were able to launch our system as a web application for users at Endeavour to experience our project. Hop on to our application to have a look: https://real-time-pose-capstone.eee.unimelb.edu.au.
Second, we trained, developed and optimised our own gesture recognition model - achieving recognition with a delay of about 1-2 seconds and model accuracy of about 90%.
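To give a flavour of what a gesture recognition model over pose keypoints looks like (a deliberately simplified stand-in, not the actual trained model, whose details are in the project documentation): the model maps a fixed-length sequence of keypoint vectors to one of the four labels. The sketch below uses a nearest-centroid rule over flattened sequences; the toy data shapes are invented for illustration.

```python
import numpy as np

LABELS = ["help", "pain", "ambulance", "hospital"]  # the four example signs

class CentroidGestureClassifier:
    """Nearest-centroid classifier over flattened keypoint sequences.

    A minimal stand-in for a trained network: `fit` averages the training
    sequences per label, `predict` returns the closest centroid's label.
    """

    def fit(self, sequences, labels):
        # sequences: shape (n_samples, n_frames * n_keypoint_coords)
        labels = np.array(labels)
        self.centroids = {
            label: sequences[labels == label].mean(axis=0)
            for label in set(labels.tolist())
        }
        return self

    def predict(self, sequence):
        # Pick the label whose centroid is nearest in Euclidean distance.
        return min(self.centroids,
                   key=lambda label: np.linalg.norm(sequence - self.centroids[label]))

# Toy data: each "gesture" is a 2-frame, 2-coordinate sequence flattened
# to length 4; only two of the four labels are exercised here.
train = np.array([[0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.1, 0.0],
                  [1.0, 1.0, 1.0, 1.0], [1.0, 1.0, 0.9, 1.0]])
clf = CentroidGestureClassifier().fit(train, ["help", "help", "pain", "pain"])
# clf.predict(np.array([0.9, 1.0, 1.0, 1.0])) → "pain"
```

A real model would replace the centroid rule with a learned sequence classifier, but the interface - keypoint sequence in, one of a small set of gesture labels out - is the same.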
Feel free to contact us during the video sessions and shoot us any questions about our exciting project. For a more technical description of our project, please visit our project documentation website at the following link: https://relientm96.github.io/capstone2020/docs.html.
Looking forward to speaking with you about our project!