Enhancing Next-Active-Object based Egocentric Action Anticipation with Guided Attention

Sanket Thakur
Cigdem Beyan
Pietro Morerio
Vittorio Murino
Alessio Del Bue

[Paper]
[Poster]
[GitHub - CVPR23 challenge submission, GANOv2]


The proposed Guided Attention fusion mechanism combines object detections with the spatio-temporal features extracted from videos to create a better feature representation of video frames with enhanced contextual information. This ''attended'' feature representation yields better results than simple fusion-based methods.
Also, read about our winning submission, based on Guided Attention, to the CVPR23 EGO4D STA challenge.
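
For a concrete picture of the fusion step, below is a minimal PyTorch sketch of guided cross-attention between detected-object embeddings and spatio-temporal patch features. The module name, feature dimensions, and the residual/normalization choices are illustrative assumptions, not the released GANO code.

    import torch
    import torch.nn as nn

    class GuidedAttentionFusion(nn.Module):
        def __init__(self, dim: int = 256, num_heads: int = 8):
            super().__init__()
            self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
            self.norm = nn.LayerNorm(dim)

        def forward(self, patch_feats: torch.Tensor, obj_embeds: torch.Tensor) -> torch.Tensor:
            # Frame-patch features (queries) attend to detected-object embeddings
            # (keys/values), so each patch is re-weighted by the objects it relates to.
            attended, _ = self.cross_attn(patch_feats, obj_embeds, obj_embeds)
            # Residual connection preserves the original spatio-temporal information.
            return self.norm(patch_feats + attended)

    if __name__ == "__main__":
        fusion = GuidedAttentionFusion(dim=256)
        patch_feats = torch.randn(2, 196, 256)   # 2 clips, 196 patches, 256-d features
        obj_embeds = torch.randn(2, 10, 256)     # up to 10 detected objects per clip
        fused = fusion(patch_feats, obj_embeds)  # (2, 196, 256) attended object-patch features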

Abstract

Short-term action anticipation (STA) in first-person videos is a challenging task that involves understanding the next active object interactions and predicting future actions. Existing action anticipation methods have primarily focused on utilizing features extracted from video clips, but often overlooked the importance of objects and their interactions. To this end, we propose a novel approach that applies a guided attention mechanism between the objects and the spatiotemporal features extracted from video clips, enhancing the motion and contextual information, and further decoding the object-centric and motion-centric information to address the problem of STA in egocentric videos. Our method, GANO (Guided Attention for Next active Objects), is a multi-modal, end-to-end, single transformer-based network. The experimental results performed on the largest egocentric dataset demonstrate that GANO outperforms the existing state-of-the-art methods for the prediction of the next active object label, its bounding box location, the corresponding future action, and the time to contact the object. The ablation study shows the positive contribution of the guided attention mechanism compared to other fusion methods. Moreover, it is possible to improve the next active object location and class label prediction results of GANO by just appending the learnable object tokens with the region of interest embeddings.


Code


Our GANO model uses a 3D convolution layer to extract frame patch features and an object detector to extract object embeddings from the corresponding frames. These features are then fused using a guided attention network to generate attended object-patch features. The attended features, Fi, are given to a transformer encoder layer, along with positional encoding, to obtain features (Fe) from the last encoder layer. Fe are then used to extract Regions of Interest (ROIs) from the last observed frame, which are used to predict the future action and Time to Contact (TTC) for the detected objects (v̂ and δ, respectively). Additionally, we append learnable tokens to the ROI embeddings, creating a fixed query length, and use them to generate the next-active-object predictions.
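
The following sketch assembles the pipeline described above into a single simplified PyTorch module: a 3D convolution stem, guided attention between patch features and object embeddings, a transformer encoder with positional encodings, and prediction heads applied to ROI embeddings concatenated with learnable object tokens. All shapes, class counts, and the plain transformer-decoder readout are assumptions made for readability; this is not the authors' released implementation (see the GitHub link below for that).

    import torch
    import torch.nn as nn

    class GANOSketch(nn.Module):
        def __init__(self, dim=256, num_queries=20, num_nouns=100, num_verbs=100, max_patches=2048):
            # num_nouns / num_verbs are placeholder class counts.
            super().__init__()
            # 3D convolution stem: video clip (B, 3, T, H, W) -> frame patch features.
            self.patch_embed = nn.Conv3d(3, dim, kernel_size=(2, 16, 16), stride=(2, 16, 16))
            # Guided attention: patch features attend to detected-object embeddings.
            self.guided_attn = nn.MultiheadAttention(dim, 8, batch_first=True)
            self.pos_embed = nn.Parameter(torch.zeros(1, max_patches, dim))
            self.encoder = nn.TransformerEncoder(
                nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True), num_layers=6)
            # Learnable object tokens appended to ROI embeddings for a fixed query length.
            self.object_tokens = nn.Parameter(torch.randn(1, num_queries, dim))
            # A plain transformer decoder stands in for the readout of encoder features.
            self.decoder = nn.TransformerDecoder(
                nn.TransformerDecoderLayer(dim, nhead=8, batch_first=True), num_layers=2)
            self.noun_head = nn.Linear(dim, num_nouns)  # next active object label
            self.box_head = nn.Linear(dim, 4)           # bounding box (cx, cy, w, h)
            self.verb_head = nn.Linear(dim, num_verbs)  # future action
            self.ttc_head = nn.Linear(dim, 1)           # time to contact

        def forward(self, clip, obj_embeds, roi_embeds):
            # clip: (B, 3, T, H, W); obj_embeds: (B, M, dim) from an object detector;
            # roi_embeds: (B, R, dim), ROI features pooled from the last observed frame.
            patches = self.patch_embed(clip).flatten(2).transpose(1, 2)   # (B, N, dim)
            attended, _ = self.guided_attn(patches, obj_embeds, obj_embeds)
            fi = patches + attended                                       # attended features Fi
            fe = self.encoder(fi + self.pos_embed[:, : fi.size(1)])       # encoder features Fe
            # Append learnable tokens to the ROI embeddings to reach a fixed query length.
            queries = torch.cat(
                [roi_embeds, self.object_tokens.expand(clip.size(0), -1, -1)], dim=1)
            out = self.decoder(queries, fe)                               # queries attend to Fe
            return {
                "noun": self.noun_head(out),          # next active object class logits
                "box": self.box_head(out).sigmoid(),  # normalized box coordinates
                "verb": self.verb_head(out),          # future action (v̂) logits
                "ttc": self.ttc_head(out),            # time to contact (δ)
            }

Appending the learnable tokens to the ROI embeddings in the last step gives the decoder a fixed-length query set regardless of how many ROIs the detector returns.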

 [GitHub - CVPR23 challenge submission] |  [GANOv2 Technical Report]



Paper and Supplementary Material

S. Thakur, C. Beyan, P. Morerio, V. Murino, A. Del Bue
Enhancing Next-Active-Object based Egocentric Action Anticipation with Guided Attention
In International Conference on Image Processing (ICIP), 2023
(hosted on arxiv)

[Bibtex]


Acknowledgements

This template was originally made by Phillip Isola and Richard Zhang for a colorful ECCV project; the code can be found here.