Deep learning-based multi-view 3D-human action recognition using skeleton and depth data

  • Sampat Kumar Ghosh*
  • M. Rashmi*
  • Biju R. Mohan
  • Ram Mohana Reddy Guddeti

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Human Action Recognition (HAR) is a fundamental challenge that smart surveillance systems must overcome. With the rising affordability of advanced depth cameras for capturing human actions, HAR has garnered increased interest over the years; however, the majority of these efforts have focused on single-view HAR. Recognizing human actions from arbitrary viewpoints is more challenging, as the same action is observed differently from different angles. This paper proposes a multi-stream Convolutional Neural Network (CNN) model for multi-view HAR using depth and skeleton data. We also propose a novel and efficient depth descriptor, Edge Detected-Motion History Image (ED-MHI), based on Canny Edge Detection and Motion History Image. In addition, the proposed skeleton descriptor, Motion and Orientation of Joints (MOJ), represents an action through joint motion and orientation. Experimental results on two human action datasets, NUCLA Multiview Action3D and NTU RGB-D, using a cross-subject evaluation protocol demonstrate that the proposed system outperforms state-of-the-art works, achieving 93.87% and 85.61% accuracy, respectively.
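To make the two descriptors concrete, the sketch below illustrates one plausible reading of ED-MHI: Canny edges are extracted from each depth frame, and edge changes between frames are accumulated into a motion history image whose recency-weighted values can feed a CNN stream. This is a minimal illustration, not the authors' implementation; the parameter values (tau, Canny thresholds) and the frame-differencing step are assumptions.

```python
# Illustrative ED-MHI-style descriptor: Canny edges per depth frame,
# accumulated into a motion history image (MHI). Parameters are
# assumptions, not the paper's exact settings.
import numpy as np
import cv2

def ed_mhi(depth_frames, tau=30, low=50, high=150):
    """depth_frames: iterable of 2-D uint8 depth maps of identical shape."""
    mhi = None
    prev_edges = None
    for frame in depth_frames:
        edges = cv2.Canny(frame, low, high)          # edge map of this frame
        if mhi is None:
            mhi = np.zeros(frame.shape, dtype=np.float32)
            prev_edges = edges
            continue
        moving = cv2.absdiff(edges, prev_edges) > 0  # edge pixels that changed
        # MHI update: fresh motion is stamped with tau,
        # older motion decays by 1 per frame.
        mhi = np.where(moving, float(tau), np.maximum(mhi - 1.0, 0.0))
        prev_edges = edges
    # Normalize to [0, 255] so the map can be fed to a CNN stream.
    return cv2.normalize(mhi, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
```

Similarly, a MOJ-style skeleton descriptor can be read as per-joint displacement between consecutive frames (motion) plus the angle of each bone vector (orientation). The feature layout below, including the choice of the vertical axis as the orientation reference, is a hypothetical encoding for illustration only.

```python
# Illustrative MOJ-style skeleton descriptor: joint motion plus bone
# orientation. The exact encoding in the paper may differ.
import numpy as np

def moj_features(joints, bones):
    """joints: (T, J, 3) array of 3-D joint positions over T frames.
    bones: list of (parent, child) joint-index pairs."""
    # Motion: Euclidean displacement of each joint between frames -> (T-1, J)
    motion = np.linalg.norm(np.diff(joints, axis=0), axis=2)
    # Bone vectors from parent to child joint -> (T, len(bones), 3)
    vecs = joints[:, [c for _, c in bones]] - joints[:, [p for p, _ in bones]]
    # Orientation: angle of each bone relative to the vertical (y) axis.
    cosang = vecs[..., 1] / (np.linalg.norm(vecs, axis=2) + 1e-8)
    orientation = np.arccos(np.clip(cosang, -1.0, 1.0))  # (T, len(bones))
    return motion, orientation
```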

Original language: English
Pages (from-to): 19829-19851
Number of pages: 23
Journal: Multimedia Tools and Applications
Volume: 82
Issue number: 13
DOIs
Publication status: Published - May 2023

All Science Journal Classification (ASJC) codes

  • Software
  • Media Technology
  • Hardware and Architecture
  • Computer Networks and Communications
