TY - GEN
T1 - Continuous human action segmentation and recognition using a spatio-temporal probabilistic framework
AU - Chen, Duan Yu
AU - Liao, Hong Yuan Mark
AU - Shih, Sheng Wen
PY - 2006
Y1 - 2006
N2 - In this paper, a framework for automatic human action segmentation and recognition in continuous action sequences is proposed. A star-like figure is proposed to effectively represent the extremities in the silhouette of the human body. Each human action is thus recorded as a sequence of star-like figure parameters, which is used for action modeling. To model human actions compactly while characterizing their spatio-temporal distributions, the star-like figure parameters are represented by Gaussian mixture models (GMMs). In addition, to address the intrinsic temporal variations in a continuous action sequence, we transform the time sequence of star-like figure parameters into the frequency domain by the discrete cosine transform (DCT) and use only the first few coefficients to represent different temporal patterns with significant discriminating power. The results show that the proposed framework can recognize continuous human actions efficiently.
AB - In this paper, a framework for automatic human action segmentation and recognition in continuous action sequences is proposed. A star-like figure is proposed to effectively represent the extremities in the silhouette of the human body. Each human action is thus recorded as a sequence of star-like figure parameters, which is used for action modeling. To model human actions compactly while characterizing their spatio-temporal distributions, the star-like figure parameters are represented by Gaussian mixture models (GMMs). In addition, to address the intrinsic temporal variations in a continuous action sequence, we transform the time sequence of star-like figure parameters into the frequency domain by the discrete cosine transform (DCT) and use only the first few coefficients to represent different temporal patterns with significant discriminating power. The results show that the proposed framework can recognize continuous human actions efficiently.
UR - http://www.scopus.com/inward/record.url?scp=46249083364&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=46249083364&partnerID=8YFLogxK
U2 - 10.1109/ISM.2006.53
DO - 10.1109/ISM.2006.53
M3 - Conference contribution
AN - SCOPUS:46249083364
SN - 0769527469
SN - 9780769527468
T3 - ISM 2006 - 8th IEEE International Symposium on Multimedia
SP - 275
EP - 282
BT - ISM 2006 - 8th IEEE International Symposium on Multimedia
T2 - ISM 2006 - 8th IEEE International Symposium on Multimedia
Y2 - 11 December 2006 through 13 December 2006
ER -