Dynamic visual saliency modeling for video semantics

Duan Yu Chen, Hsiao Rong Tyan, Sheng Wen Shih, Hong Yuan Mark Liao

Research output: Contribution to book/report › Conference contribution

1 Citation (Scopus)

Abstract

In this work, we propose a novel approach for modeling dynamic visual attention based on spatiotemporal analysis. Our model first detects salient points in three-dimensional video volumes and then uses them as seeds to search for the extent of salient regions in a motion attention map. To determine the extent of attended regions, maximum entropy in the spatial domain is used to analyze the dynamics obtained from the spatiotemporal analysis. To annotate video semantics, the attended regions are further classified into two predefined categories, cars and people, using orientation filters. The experimental results show that the proposed dynamic visual attention model can effectively detect visual saliency across successive video volumes.
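
As a rough illustration of the pipeline sketched in the abstract, the following is a minimal NumPy example, not the authors' implementation: frame-difference energy stands in for the motion attention map, and a Kapur-style maximum-entropy threshold is used to delimit the attended regions. The function names, parameters, and the synthetic test clip are illustrative assumptions.

```python
# Minimal sketch of a spatiotemporal saliency pipeline in the spirit of the
# abstract, using only NumPy. The attention map, seed-free thresholding, and
# the maximum-entropy criterion below are simplifying assumptions, not the
# paper's exact formulation.
import numpy as np

def motion_attention_map(volume):
    """Temporal-gradient magnitude as a crude motion attention map.

    volume: (T, H, W) grayscale video volume with values in [0, 1].
    Returns a (T, H, W) array of temporal gradient magnitudes.
    """
    dt = np.gradient(volume.astype(np.float64), axis=0)
    return np.abs(dt)

def max_entropy_threshold(values, bins=64):
    """Pick the threshold that maximizes the sum of foreground and
    background entropies of the value histogram (Kapur-style)."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist / max(hist.sum(), 1)
    best_t, best_h = edges[1], -np.inf
    for k in range(1, bins):
        p0, p1 = p[:k].sum(), p[k:].sum()
        if p0 <= 0 or p1 <= 0:
            continue
        q0, q1 = p[:k] / p0, p[k:] / p1
        h = -(q0[q0 > 0] * np.log(q0[q0 > 0])).sum() \
            - (q1[q1 > 0] * np.log(q1[q1 > 0])).sum()
        if h > best_h:
            best_h, best_t = h, edges[k]
    return best_t

def attended_regions(volume):
    """Threshold the motion attention map to obtain a binary mask of
    attended (salient) regions per frame."""
    amap = motion_attention_map(volume)
    t = max_entropy_threshold(amap.ravel())
    return amap > t

if __name__ == "__main__":
    # Synthetic clip: a bright block moving across a noisy background.
    rng = np.random.default_rng(0)
    clip = rng.normal(0.1, 0.02, size=(16, 64, 64))
    for f in range(16):
        clip[f, 20:30, 2 + 3 * f:12 + 3 * f] = 1.0
    mask = attended_regions(clip)
    print("salient pixels per frame:", mask.reshape(16, -1).sum(axis=1))
```
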
Original language: English
Title of host publication: Proceedings - 2008 4th International Conference on Intelligent Information Hiding and Multimedia Signal Processing, IIH-MSP 2008
Pages: 188-191
Number of pages: 4
DOIs
Publication status: Published - 2008
Event: 2008 4th International Conference on Intelligent Information Hiding and Multimedia Signal Processing, IIH-MSP 2008 - Harbin, China
Duration: 15 Aug 2008 - 17 Aug 2008

Publication series

Name: Proceedings - 2008 4th International Conference on Intelligent Information Hiding and Multimedia Signal Processing, IIH-MSP 2008

Conference

Conference: 2008 4th International Conference on Intelligent Information Hiding and Multimedia Signal Processing, IIH-MSP 2008
Country/Territory: China
City: Harbin
Period: 8/15/08 - 8/17/08

ASJC Scopus subject areas

  • Artificial Intelligence
  • Computer Graphics and Computer-Aided Design
  • Signal Processing
