Recent Projects

Histogram of Oriented 4D Normals (CVPR 2013)
We present a new descriptor for activity recognition from videos acquired by a depth sensor. Previous descriptors mostly compute shape and motion features independently; thus, they often fail to capture the complex joint shape-motion cues at the pixel level. In contrast, we describe the depth sequence using a histogram capturing the distribution of surface normal orientations in the 4D space of time, depth, and spatial coordinates. To build the histogram, we create 4D projectors, which quantize the 4D space and represent the possible directions of the 4D normal...
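
A minimal sketch of the idea, assuming the depth video is a (T, H, W) array and using random unit vectors as projectors (the paper derives its projectors geometrically; the function names here are illustrative):

```python
import numpy as np

def hon4d_descriptor(depth_seq, projectors):
    """Sketch: histogram of 4D surface-normal orientations for a depth video.

    depth_seq:  (T, H, W) array of depth values z(x, y, t)
    projectors: (P, 4) array of unit vectors quantizing the 4D orientation space
                (random directions here; the paper uses a geometric construction).
    """
    dz_dt, dz_dy, dz_dx = np.gradient(depth_seq.astype(float))
    # 4D normal of the surface z = f(x, y, t): (-dz/dx, -dz/dy, -dz/dt, 1)
    normals = np.stack([-dz_dx, -dz_dy, -dz_dt, np.ones_like(dz_dt)], axis=-1)
    normals /= np.linalg.norm(normals, axis=-1, keepdims=True)

    # Distribute each normal over the projectors it aligns with (soft voting).
    votes = normals.reshape(-1, 4) @ projectors.T   # (N, P) dot products
    votes = np.clip(votes, 0.0, None)               # keep only aligned directions
    hist = votes.sum(axis=0)
    return hist / (hist.sum() + 1e-12)

# Illustrative usage with random projectors and a random depth clip.
rng = np.random.default_rng(0)
projectors = rng.normal(size=(120, 4))
projectors /= np.linalg.norm(projectors, axis=1, keepdims=True)
descriptor = hon4d_descriptor(rng.random((16, 64, 64)), projectors)
```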

Seeing Through Turbulence (TPAMI 2012)
Turbulence mitigation refers to the stabilization of videos with non-uniform deformations caused by optical turbulence. Typical approaches to turbulence mitigation follow averaging or de-warping techniques. Although these methods can reduce the turbulence, they distort independently moving objects, which are often of great interest. In this paper, we address the novel problem of simultaneous turbulence mitigation and moving object detection. We propose a three-term low-rank matrix decomposition approach in which we decompose the turbulence sequence into three components: the background, the turbulence, and the object...
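
A simplified sketch of a three-term decomposition in this spirit, using block-coordinate proximal updates (singular value thresholding for the low-rank background, soft thresholding for the sparse object, a quadratic penalty for the turbulence); this is an illustrative solver, not the paper's exact algorithm:

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: prox of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(M, tau):
    """Soft thresholding: prox of tau * l1 norm."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def three_term_decomposition(F, lam_rank=1.0, lam_sparse=0.05, lam_turb=0.5, iters=50):
    """Sketch: split the frame matrix F (pixels x frames) into
    background A (low rank) + object O (sparse) + turbulence E (small Frobenius norm)
    by minimizing 0.5||F - A - O - E||^2 + lam_rank||A||_* + lam_sparse||O||_1
    + 0.5*lam_turb||E||^2 with block-coordinate proximal updates."""
    A = np.zeros_like(F, dtype=float)
    O = np.zeros_like(F, dtype=float)
    E = np.zeros_like(F, dtype=float)
    for _ in range(iters):
        A = svt(F - O - E, lam_rank)          # low-rank background
        O = soft(F - A - E, lam_sparse)       # sparse moving object
        E = (F - A - O) / (1.0 + lam_turb)    # small, dense turbulence residual
    return A, O, E
```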

Seeing Through Water (CVPR 2011)
Several approaches have recently been proposed to tackle the problem of recovering the original image of an underwater scene from a sequence distorted by water waves. The main drawback of the state of the art is that it depends heavily on modelling the waves, which is in fact ill-posed: the actual behavior of the waves, together with the imaging process, is more complicated and includes several noise components, so the results are not satisfactory. In this paper, we revisit the problem by proposing a data-driven two-stage approach, where each stage is targeted at a certain type of noise...


Action Recognition By Motion Decomposition (ICCV 2011)
Recognition of actions in a video acquired by a moving camera typically requires standard steps such as motion compensation, moving object detection, and object tracking. Errors from the motion compensation propagate to the object detection stage, resulting in missed detections, which further complicate the tracking stage and produce cluttered and incorrect tracks. In this project, we propose a novel approach that does not follow the standard steps and avoids the aforementioned difficulties. Our approach is based on Lagrangian particle trajectories, a set of dense trajectories obtained by advecting optical flow over time, thus capturing the ensemble motions of a scene...
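
A minimal sketch of how dense particle trajectories could be advected from precomputed optical flow fields (nearest-neighbor flow sampling for brevity; names and conventions are illustrative):

```python
import numpy as np

def advect_particles(flows, step=1.0):
    """Sketch: Lagrangian particle trajectories from a stack of dense optical flow fields.

    flows: (T, H, W, 2) per-frame flow vectors (dx, dy).
    Returns a (T+1, H, W, 2) array of particle positions seeded on the pixel grid.
    """
    T, H, W, _ = flows.shape
    ys, xs = np.mgrid[0:H, 0:W].astype(float)
    pos = np.stack([xs, ys], axis=-1)            # current particle positions (x, y)
    traj = [pos.copy()]
    for t in range(T):
        # Sample the flow at each particle's (possibly sub-pixel) location;
        # nearest-neighbor sampling keeps the sketch short (bilinear is more accurate).
        xi = np.clip(np.round(pos[..., 0]).astype(int), 0, W - 1)
        yi = np.clip(np.round(pos[..., 1]).astype(int), 0, H - 1)
        pos = pos + step * flows[t, yi, xi]      # advect forward in time
        traj.append(pos.copy())
    return np.stack(traj)
```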

Identity Recognition in Aerial Images (CVPR 2010)
Human identity recognition is an important yet under-addressed problem. Previous methods were strictly limited to high-quality photographs, where the principal techniques rely heavily on body details such as the face. In this paper, we propose an algorithm to address the novel problem of human identity recognition over a set of unordered, low-quality aerial images...

Part-based Multiple-Person Tracking (CVPR 2012)
The performance of state-of-the-art methods in single-camera human tracking is often hindered by difficulties such as occlusion and changes in appearance, which frequently occur in surveillance videos. In this project, we address such problems by proposing a robust part-based tracking-by-detection framework. Human detection using part models has become quite popular, yet its extension to tracking has not been fully explored. Our approach learns part-based, person-specific SVM classifiers which capture the articulation of the human body under dynamically changing appearance and background....
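
A hedged sketch of the per-part classifier idea, assuming per-part feature vectors (e.g. HOG) are already extracted; the interfaces below are illustrative, not the project's actual code:

```python
import numpy as np
from sklearn.svm import LinearSVC

def train_part_classifiers(part_features, background_features):
    """Sketch: one linear SVM per body part, separating that part's appearance
    (positives from the tracked person) from background patches.

    part_features: dict part_name -> (N_pos, D) feature array (e.g. HOG).
    background_features: (N_neg, D) array of negative patches.
    """
    classifiers = {}
    for part, pos in part_features.items():
        X = np.vstack([pos, background_features])
        y = np.concatenate([np.ones(len(pos)), np.zeros(len(background_features))])
        classifiers[part] = LinearSVC(C=1.0).fit(X, y)
    return classifiers

def score_candidate(classifiers, candidate_parts):
    """Sum the per-part SVM margins for a candidate detection.
    candidate_parts: dict part_name -> (D,) feature vector for that part."""
    return sum(clf.decision_function(candidate_parts[part].reshape(1, -1))[0]
               for part, clf in classifiers.items())
```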


Horizon Constraint for Unambiguous UAV Navigation (ICRA 2011)
When the UAV flies at altitudes high enough that the observed surface of the earth becomes planar, the structure and motion recovery of the earth's moving plane becomes ambiguous. This planar degeneracy has been pointed out frequently in the literature; consequently, current navigation methods either fail completely or return many confusing solutions in such scenarios. Interestingly, the horizon line in planar scenes is straight and distinctive, and hence easily detected. We therefore show in this paper that the horizon line provides two degrees of freedom that control the relative orientation between the camera coordinate system and the local surface of the earth. The recovered degrees of freedom help linearize and disambiguate the planar flow, and we therefore obtain a unique solution for the UAV motion estimation....
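
A small worked sketch of the geometry, assuming a calibrated camera: the horizon is the vanishing line l of the ground plane, so the plane normal in camera coordinates is n ∝ Kᵀl, which fixes the camera's roll and pitch relative to the plane (the axis and sign conventions below are illustrative):

```python
import numpy as np

def plane_normal_from_horizon(horizon_line, K):
    """Sketch: for a calibrated camera the ground-plane normal in camera
    coordinates is n ~ K^T l, where l = (a, b, c) is the horizon line
    with a*u + b*v + c = 0 in pixel coordinates."""
    n = K.T @ np.asarray(horizon_line, dtype=float)
    return n / np.linalg.norm(n)

def roll_pitch_from_normal(n):
    """The two degrees of freedom fixed by the horizon: camera roll and pitch
    relative to the local ground plane (yaw about the normal remains free)."""
    roll = np.arctan2(n[0], n[1])                    # rotation about the optical axis
    pitch = np.arctan2(n[2], np.hypot(n[0], n[1]))   # tilt toward/away from the plane
    return roll, pitch

# Illustrative use with an assumed intrinsic matrix and a detected horizon line.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])
roll, pitch = roll_pitch_from_normal(plane_normal_from_horizon((0.01, 1.0, -250.0), K))
```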