Elderly people and motor-impaired users have difficulty interacting with touch-screen devices. Commonly used mobile systems rely on a general model for gesture recognition, but such a general threshold-based model may not meet these users' special needs. Hence, we present DeepGesture, a two-stage model that provides self-learning gesture recognition. In the first stage, a three-dimensional convolutional neural network recognizes the user's intended gesture, remarkably improving the success rate for common gestures such as tap and pan. In the second stage, a novel tapping optimizer captures the most important touch location. The results show that DeepGesture achieves a higher success rate than the iOS default system.
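The abstract does not specify the network's architecture, so as a hedged illustration only, the core operation of the first stage's 3D CNN can be sketched as a valid three-dimensional convolution over a (time, height, width) volume of touch samples. All shapes and values below are hypothetical:

```python
def conv3d(volume, kernel):
    """Valid 3D convolution (no padding, stride 1) in pure Python.

    `volume` is indexed as [t][i][j]: a stack of 2D touch frames over
    time, which is how a 3D CNN sees a gesture trajectory.
    """
    T, H, W = len(volume), len(volume[0]), len(volume[0][0])
    kT, kH, kW = len(kernel), len(kernel[0]), len(kernel[0][0])
    out = []
    for t in range(T - kT + 1):
        plane = []
        for i in range(H - kH + 1):
            row = []
            for j in range(W - kW + 1):
                # Sum of elementwise products over the kernel window.
                s = 0.0
                for dt in range(kT):
                    for di in range(kH):
                        for dj in range(kW):
                            s += volume[t + dt][i + di][j + dj] * kernel[dt][di][dj]
                row.append(s)
            plane.append(row)
        out.append(plane)
    return out

# Toy example: a 3x3x3 volume of ones convolved with a 2x2x2 kernel
# of ones gives a 2x2x2 output where every entry is 8.0.
ones = [[[1.0] * 3 for _ in range(3)] for _ in range(3)]
k = [[[1.0] * 2 for _ in range(2)] for _ in range(2)]
result = conv3d(ones, k)
```

In a real model this operation would be stacked with nonlinearities and learned filters; the sketch only shows why a 3D (rather than 2D) convolution suits gestures, since it mixes information across the time axis as well as the screen plane.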