In this paper, a robust system is proposed to help people with hearing and speech impairments lead their lives more easily. The proposed system consists of three phases. In the first phase, the hearing-impaired user communicates using sign language, and this serves as the input to the system. The sign-to-text conversion is then performed by an RCNN classifier, which includes a Region Proposal Network (RPN) for detecting the hand contour. The RCNN classifier is trained on images from the Kaggle database, and its output is an individual character. In the second phase, the text is converted to speech with the aid of the WaveNet algorithm; the resulting speech enables individuals with hearing and speech impairments to communicate with others. Additionally, they can receive daily updates through an internet-connected voice assistant, which takes speech as input and returns the processed information. In the third phase, the processed information is conveyed back to the user as text with the help of the Google voice recognition system. The experiment is conducted on a total of 64,148 sign images, drawn from a combination of a benchmark dataset and a custom dataset developed by the authors. The results show that the proposed system achieves better performance than comparable approaches.
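The three-phase flow described above (sign-to-text, text-to-speech, speech-to-text) can be sketched as a simple pipeline. The sketch below is purely illustrative: every function is a hypothetical placeholder standing in for the corresponding component (the RCNN+RPN classifier, WaveNet synthesis, and the Google voice recognition step), not the paper's actual implementation.

```python
# Illustrative sketch of the three-phase pipeline.
# All functions are hypothetical stand-ins, not the paper's real models.

def sign_to_text(frame):
    """Phase 1: classify a hand-sign image into a character
    (stand-in for the RCNN classifier with RPN hand-contour detection)."""
    # A real system would run region proposal + classification here;
    # we assume a pre-labelled frame for illustration.
    return frame["label"]

def text_to_speech(text):
    """Phase 2: synthesize speech from text (stand-in for WaveNet)."""
    return "<audio:" + text + ">"  # placeholder waveform token

def speech_to_text(audio):
    """Phase 3: transcribe the assistant's speech back to text
    (stand-in for the Google voice recognition system)."""
    return audio.removeprefix("<audio:").removesuffix(">")

def pipeline(frames):
    """Chain the three phases: sign images -> text -> speech -> text."""
    text = "".join(sign_to_text(f) for f in frames)
    audio = text_to_speech(text)
    return speech_to_text(audio)

print(pipeline([{"label": "H"}, {"label": "I"}]))  # -> HI
```

In the actual system, each stage would be replaced by the trained model it stands in for; the sketch only shows how the outputs of one phase feed the next.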