Research Areas

Machine Learning and Pattern Recognition

•    Unsupervised and Semi-Supervised Deep Learning

•    Multimodal Deep Learning

•    Generative Adversarial Network Theory and Applications

 

Computer Vision and Multimedia Computing

•    Visual Scene Understanding

•    Video Analysis and Understanding

•    Multimodal Analysis and Applications

 

Internet of Things (IoT) Information Processing and Analytics

•    Discovery of trustworthy sources/sensors

•    Multi-source information fusion

•    Heterogeneous sensor data analysis



Recent Projects (2017 - present)

  • [Deep Network Self-Supervised Pretraining] Using self-supervised methods for unsupervised, semi-supervised, and/or supervised (pre-)training of CNNs, GCNs, and GANs. We developed two novel paradigms of self-supervised learning: a) Auto-Encoding Transformations (AET) [pdf], which learns transformation-equivariant representations; and b) Adversarial Contrast (AdCo), which directly self-trains negative pairs in a contrastive learning framework. A minimal AET sketch follows the list below.
    • 1) Unsupervised training of CNNs: AETv1 [link][pdf][github] and AETv2 [link], 
    • 2) Variational AET and the connection to transformation-equivariant representation learning [link][pdf][github], 
    • 3) (Semi-)Supervised AET training with an ensemble of spatial and non-spatial transformations [pdf][github], 
    • 4) Unsupervised training of Graph Convolutional Networks (GCNs) [pdf][github],
    • 5) Transformation GAN (TrGAN), which uses the AET loss to train the discriminator for better generalization in generating new images [pdf].
    • 6) Adversarial Contrast (AdCo) [pdf][github]: An adversarial contrastive learning method that directly trains negative samples end-to-end. It pre-trains ResNet-50 on ImageNet with 20% fewer epochs than the SOTA methods (e.g., MoCo v2 and BYOL) while achieving even better top-1 accuracy. The model is easy to implement and can be used as a plug-in algorithm to combine with many pre-training tasks.
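
  As a quick illustration of the AET idea, the PyTorch-style sketch below embeds an image and its transformed copy with a shared encoder and regresses the transformation parameters from the pair of representations. This is a minimal sketch under our own naming (AET, feat_dim, and n_params are illustrative), not the released implementation; see the [github] links above for the latter.

      import torch
      import torch.nn as nn
      import torch.nn.functional as F

      class AET(nn.Module):
          # A shared encoder embeds the original and the transformed image;
          # a light decoder regresses the transformation parameters from the pair.
          def __init__(self, encoder, feat_dim, n_params):
              super().__init__()
              self.encoder = encoder
              self.decoder = nn.Sequential(
                  nn.Linear(2 * feat_dim, 256), nn.ReLU(),
                  nn.Linear(256, n_params))

          def forward(self, x, x_t):
              z = self.encoder(x)       # representation of the original image
              z_t = self.encoder(x_t)   # representation of the transformed copy
              return self.decoder(torch.cat([z, z_t], dim=1))

      # One training step: sample transformation parameters theta, apply the
      # transformation t to x, and reconstruct theta from the representations:
      #   pred_theta = model(x, t(x, theta))
      #   loss = F.mse_loss(pred_theta, theta)   # the AET loss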

  • [Regularized GANs] We presented the regularized Loss-Sensitive GAN (LS-GAN) and extended it to a generalized version (GLS-GAN) that includes many variants of regularized GANs as special cases. We proved both the distributional consistency and the generalizability of the LS-GAN, with polynomial sample complexity, in generating new content. A sketch of the LS-GAN objectives follows the list below. See more details about
    • 1) LS-GAN and GLS-GAN [pdf][github],
    • 2) A big-picture landscape of regularized GANs [url],
    • 3) An extension that directly learns an encoder of input samples with manifold margins through the loss-sensitive GAN [github: torch, blocks],
    • 4) The LS-GAN has been adopted by Microsoft CNTK (Cognitive Toolkit) as a reference regularized GAN model [link].
    • 5) Localized GAN, which models the manifold of images along their tangent vector spaces. It captures and/or generates local variants of input images so that image attributes can be edited by manipulating the input noise. The local variants along the tangents can also be used to approximate the Laplace-Beltrami operator for semi-supervised representation learning [pdf].
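
  To make the margin idea concrete, here is a rough PyTorch sketch of the LS-GAN objectives under our own assumptions (the names ls_gan_losses and loss_fn are ours, and the margin Delta is taken as a per-sample L1 distance): the critic learns a loss function that assigns real samples a loss smaller than generated ones by a data-dependent margin, while the generator minimizes the loss assigned to its samples.

      import torch
      import torch.nn.functional as F

      def ls_gan_losses(loss_fn, G, x_real, z, lam=1.0):
          # loss_fn maps a batch of samples to per-sample scalar losses L(x);
          # G is the generator.
          x_fake = G(z)
          L_real = loss_fn(x_real)
          L_fake = loss_fn(x_fake.detach())
          # Margin Delta(x, G(z)): a per-sample L1 distance (our assumption).
          delta = (x_real - x_fake.detach()).abs().flatten(1).mean(dim=1)
          # Critic objective: the hinge penalizes real losses that fail to
          # undercut fake losses by the margin.
          d_loss = L_real.mean() + lam * F.relu(delta + L_real - L_fake).mean()
          # Generator objective: produce samples assigned a small loss.
          g_loss = loss_fn(x_fake).mean()
          return d_loss, g_loss

  In practice the two objectives are optimized alternately over the critic's and the generator's parameters, as in standard GAN training.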
 
  • [Deep Learning for IoT and Multimodal Analysis] We developed 1) State-Frequency Memory RNNs [pdf] for multi-frequency analysis of signals, 2) Spatial-Temporal Transformers [pdf] that integrate self-attention over spatial topology and temporal dynamics for traffic forecasting, and 3) First-Take-All Hashing [pdf] to efficiently index and retrieve multimodal sensor signals at scale (a sketch of the FTA idea follows the list below).
    • 1) State-Frequency Memory (SFM) RNNs for Multi-Source/Sensor Signal Analysis. SFM RNNs explore multiple frequencies of dynamic memory for time-series analysis. The multi-frequency memory enables more accurate signal predictions than the LSTM across various ranges of dynamic contexts. For example, in financial analysis [pdf], long-term investors use low-frequency information to forecast asset prices, while high-frequency traders rely more on high-frequency pricing signals to make investment decisions.
    • 2) Spatial-Temporal Transformer for Traffic Forecasting. The spatial-temporal transformer [pdf] is among the first works to apply self-attention to dynamic graph neural networks, exploring both the network topology and temporal dynamics to forecast traffic flows from city-scale IoT data.
    • 3) First-Take-All Hashing and Device-Enabled Healthcare. First-Take-All (FTA) hashing was developed to efficiently index dynamic activities captured by multimodal sensors (cameras and depth sensors) [pdf] for eldercare, as well as for image [pdf] and cross-modal retrieval [pdf]. It has also been applied to classify signals of brain neural activity for early diagnosis of ADHD [pdf], running one order of magnitude faster than the SOTA methods on the multi-facility dataset in a Kaggle Challenge.
    • 4) Aligning Multi-Source/Device Signals. We proposed Dynamically Programmable Layers to automatically align signals from multiple sources/devices, and successfully demonstrated their application to predicting the connectivity between neurons in the brain [pdf].
    • 5) E-Optimal Sensor Deployment and Selection. We developed an optimal online sensor selection approach with the restricted isometry property based on E-optimality [link]. It was successfully applied to collaborative spectrum sensing in cognitive radio networks (CRNs) and to selecting the most informative features from large volumes of data/signals. The paper will be featured in IEEE Computer's "Spotlight on Transactions" column.
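
  To give a flavor of the FTA code construction, below is a hedged NumPy sketch of the temporal order-preserving idea: project each time step of a sequence onto a small group of random directions and record which projection peaks first in time. The function name fta_hash and all parameters are illustrative; consult the papers above for the exact published algorithm.

      import numpy as np

      def fta_hash(seq, n_codes=16, group=4, seed=None):
          # seq: a (T, D) array of per-time-step features.
          # Each hash digit is the index of the projection, within a random
          # group, whose maximum response occurs earliest ("first take all").
          rng = np.random.default_rng(seed)
          T, D = seq.shape
          codes = []
          for _ in range(n_codes):
              W = rng.standard_normal((D, group))   # one group of projections
              resp = seq @ W                        # (T, group) responses
              t_peak = resp.argmax(axis=0)          # time each projection peaks
              codes.append(int(t_peak.argmin()))    # projection peaking first
          return np.array(codes)

  Two sequences can then be compared by the Hamming distance between their code arrays, which preserves the temporal order of dominant events rather than their absolute magnitudes.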

  • [Small Data Challenges] Take a look at our survey, "Small Data Challenges in Big Data Era: A Survey of Recent Progress on Unsupervised and Semi-Supervised Methods" [pdf], and our tutorial presented at IJCAI 2019 [link], with the presentation slides [pdf]. Also see our recent works on
    • 1) Unsupervised Learning: Auto-Encoding Transformations (AET) [pdf], Autoencoding Variational Transformations (AVT) [pdf], GraphTER (Graph Transformation Equivariant Representations) [pdf], and TrGANs (Transformation GANs) [pdf];
    • 2) Semi-Supervised Learning: Localized GANs (see how to compute the Laplace-Beltrami operator directly for semi-supervised learning) [pdf] and Ensemble AET [pdf];
    • 3) Few-Shot Learning: FLAT (Few-Shot Learning via AET) [pdf], knowledge transfer for few-shot learning [pdf], and task-agnostic meta-learning [pdf].

  • [MAPLE GitHub] We are releasing the source code of our research projects on our MAPLE GitHub homepage [url]. Everyone interested in our work is invited to try them out. Feedback and pull requests are warmly welcome.

 

