David Mohaisen

Research Overview

My research explores the intersection of systems and network security, online privacy, and applied machine learning. I am broadly interested in how large-scale networked systems, ranging from the Internet and the Internet of Things (IoT) to mobile, blockchain, and AI-driven ecosystems, can be designed to be secure, trustworthy, and privacy-preserving. My work combines theory and practice, employing exploratory, constructive, and empirical methods to design, analyze, and evaluate secure systems and intelligent defense mechanisms. Over time, my research has evolved from network and Internet security and privacy primitives to include security analytics using both traditional and deep learning approaches, social network privacy, blockchain trust and resilience, and AI robustness. A consistent theme is leveraging data-driven analysis to understand real-world adversarial behavior and to develop accountable, machine-assisted defense systems.

Vision and Impact

The long-term vision of my research is to advance trustworthy, privacy-preserving, and explainable computing systems. By combining rigorous system analysis with intelligent, data-driven methods, my group seeks to design resilient architectures that withstand emerging threats, develop privacy-enhancing technologies that strengthen user autonomy, and create analytics frameworks that proactively identify and mitigate risks across digital ecosystems. Our work contributes both fundamental insights to the research community and deployable solutions that enhance the safety, accountability, and reliability of modern computing.

Large Language Models (LLMs)

My recent research focuses on leveraging and securing large language models (LLMs) for cybersecurity and software engineering applications. On one front, my group explores how LLMs can be utilized to understand, generate, and repair code, emphasizing areas such as pseudocode-to-code translation, automated vulnerability detection, and feedback-directed code repair. On the other front, we examine the adversarial and misuse dimensions of LLMs, studying how these models can be exploited to produce malicious or evasive content, such as phishing campaigns or obfuscated code, and developing countermeasures to prevent such misuse. Our work integrates semantic reasoning, formal and symbolic verification, and uncertainty-aware learning to ensure that LLM-driven systems are both reliable and accountable, advancing the trustworthy and responsible deployment of generative AI in security-critical environments.
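
To make the feedback-directed repair loop concrete, the sketch below alternates between running a candidate module's tests and asking a model to revise the code given the concrete failure output. It is a minimal illustration of the general pattern under stated assumptions, not our actual pipeline: the query_llm stub and the pytest harness are hypothetical placeholders to be wired to a model provider and test suite of choice.

    import os
    import shutil
    import subprocess
    import tempfile

    def query_llm(prompt: str) -> str:
        """Hypothetical placeholder; wire this to any chat/completions API."""
        raise NotImplementedError

    def run_tests(code: str, test_file: str) -> tuple[bool, str]:
        """Write the candidate module beside a copy of the tests and run pytest."""
        with tempfile.TemporaryDirectory() as tmp:
            with open(os.path.join(tmp, "candidate.py"), "w") as f:
                f.write(code)
            shutil.copy(test_file, tmp)
            proc = subprocess.run(["pytest", "-q"], cwd=tmp,
                                  capture_output=True, text=True)
            return proc.returncode == 0, proc.stdout + proc.stderr

    def repair_loop(code: str, test_file: str, max_rounds: int = 3) -> str:
        """Feedback-directed repair: each round feeds the test failures back
        to the model and asks for a revised candidate."""
        for _ in range(max_rounds):
            ok, report = run_tests(code, test_file)
            if ok:
                return code  # all tests pass; accept this candidate
            code = query_llm("Fix this Python module so its tests pass.\n\n"
                             f"Module:\n{code}\n\nTest output:\n{report}")
        return code  # best effort after max_rounds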

Adversarial and Applied Machine Learning

My group develops security analytics tools based on machine learning and deep learning for malware detection, vulnerability assessment, intrusion detection, and behavioral analysis. We use supervised and unsupervised learning methods, such as SVMs, random forests, neural networks, and clustering algorithms, to automate the labeling and classification of threat indicators and malicious activities. As data complexity grows, we adopt deep neural models for automatic feature extraction and pattern recognition in applications such as authorship identification, IoT malware analysis, and website fingerprinting. We also study adversarial learning to understand and mitigate attacks that intentionally fool classifiers, building more robust and interpretable defense models in the process.
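
As a self-contained illustration of this style of analytics, the following sketch trains a random forest to separate two classes of synthetic feature vectors standing in for static malware features (for example, API-call counts per binary). The data, feature dimensions, and class structure are fabricated for illustration and do not correspond to any dataset used in our studies.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import classification_report
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Synthetic stand-ins for static features: "benign" samples cluster near
    # low counts, "malicious" ones near higher counts.
    benign = rng.poisson(lam=2.0, size=(500, 16))
    malicious = rng.poisson(lam=5.0, size=(500, 16))
    X = np.vstack([benign, malicious]).astype(float)
    y = np.array([0] * 500 + [1] * 500)  # 0 = benign, 1 = malicious

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                              random_state=0)

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_tr, y_tr)

    print(classification_report(y_te, clf.predict(X_te),
                                target_names=["benign", "malicious"]))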

Blockchain and Decentralized Security

Our work on blockchain and decentralized systems bridges foundational research with applied system design. We investigate consensus algorithms that ensure privacy, fairness, and decentralization, while analyzing the resilience of blockchain-based infrastructures to attacks and abuse. We translate distributed system requirements into secure and composable blockchain architectures and employ predictive modeling to study the sustainability of system properties and security trade-offs. This research contributes to the design of transparent, accountable, and verifiable decentralized systems.
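
For readers unfamiliar with consensus primitives, the toy proof-of-work search below finds a nonce whose SHA-256 digest falls under a difficulty target, showing how the expected work doubles with each added difficulty bit. This is a textbook sketch rather than a model of any particular system we build or analyze.

    import hashlib
    import time

    def mine(block_data: bytes, difficulty_bits: int) -> tuple[int, str]:
        """Search for a nonce such that SHA-256(block_data || nonce) has at
        least `difficulty_bits` leading zero bits."""
        target = 1 << (256 - difficulty_bits)
        nonce = 0
        while True:
            digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
            if int.from_bytes(digest, "big") < target:
                return nonce, digest.hex()
            nonce += 1

    for bits in (8, 12, 16, 18):
        start = time.perf_counter()
        nonce, digest = mine(b"example block", bits)
        print(f"{bits:2d} bits: nonce={nonce:>7d} "
              f"time={time.perf_counter() - start:.3f}s hash={digest[:16]}...")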

Mobile and IoT Security and Privacy

We design practical mechanisms for protecting mobile devices and IoT environments from compromise and data leakage. Our projects include developing efficient malware detection and traffic classification systems, privacy-preserving continuous authentication mechanisms, and cross-layer frameworks for smart home and industrial IoT networks. These systems integrate behavioral profiling, intrusion detection, and lightweight cryptographic functions to enable adaptive and secure operation across connected devices. A key focus is achieving strong protection while preserving usability and performance.
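
The sketch below illustrates the kind of lightweight, flow-level features such on-device classifiers can compute from packet metadata alone; the trace and the feature set are fabricated for illustration.

    from statistics import mean, pstdev

    def flow_features(packets):
        """Compute simple flow statistics from a list of
        (timestamp_s, size_bytes) tuples; per-flow summaries like these are
        typical inputs to a lightweight traffic classifier."""
        sizes = [size for _, size in packets]
        times = [t for t, _ in packets]
        gaps = [b - a for a, b in zip(times, times[1:])] or [0.0]
        return {
            "pkt_count": len(packets),
            "mean_size": mean(sizes),
            "std_size": pstdev(sizes),
            "mean_gap_s": mean(gaps),
            "duration_s": times[-1] - times[0],
        }

    # A fabricated trace resembling a periodic sensor report (small, regular packets).
    trace = [(i * 0.5, 64 + (i % 3)) for i in range(20)]
    print(flow_features(trace))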

Distributed Denial-of-Service (DDoS) Analysis and Defense

We study distributed denial-of-service (DDoS) attacks through large-scale, data-driven analysis to uncover patterns and relationships among botnets, attackers, and victims. By applying model-guided and learning-based techniques, our work seeks to improve the robustness of DDoS detection systems and predict emerging attack trends. This line of research integrates traffic intelligence and behavioral modeling to guide the design of proactive and scalable defenses against botnet-driven Internet threats.
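
As a minimal example of relationship mining in this space, the sketch below scores the overlap between the victim sets of fabricated botnet families using the Jaccard index; substantial overlap across families is the kind of signal that can suggest shared targeting or infrastructure.

    from itertools import combinations

    # Fabricated attack records: botnet family -> set of victim network prefixes.
    victims = {
        "family_a": {"203.0.113.0/24", "198.51.100.0/24", "192.0.2.0/24"},
        "family_b": {"203.0.113.0/24", "198.51.100.0/24"},
        "family_c": {"192.0.2.0/24"},
    }

    def jaccard(a: set, b: set) -> float:
        """Jaccard index: size of the intersection over size of the union."""
        return len(a & b) / len(a | b)

    # Report family pairs whose victim sets overlap substantially.
    for x, y in combinations(victims, 2):
        sim = jaccard(victims[x], victims[y])
        if sim > 0.3:
            print(f"{x} ~ {y}: Jaccard = {sim:.2f}")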

Wearable, AR/VR, and Human-Centric Security

Our research investigates privacy and security in human-centric and immersive technologies such as wearables, smartphones, and augmented or virtual reality systems. We analyze how metadata and sensor signals—such as GPS elevation profiles or motion traces—can leak sensitive information about users. We also examine spatial side-channel attacks in AR/VR settings, where adversaries can infer user actions or inputs through motion capture. These studies highlight the privacy risks of emerging interfaces and inform the design of safer, privacy-aware human-technology interactions.
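
As a toy demonstration of this kind of leakage, the sketch below re-identifies a running route from an elevation trace whose GPS coordinates have been stripped, by matching it against candidate route profiles with a simple sum-of-squared-differences score. The profiles are fabricated, and real attacks in this space use far richer signal models.

    def ssd(a, b):
        """Sum of squared differences between two equal-length elevation profiles."""
        return sum((x - y) ** 2 for x, y in zip(a, b))

    # Fabricated elevation profiles (meters) for three candidate running routes.
    routes = {
        "riverside_loop": [10, 10, 11, 12, 12, 11, 10],
        "hill_climb":     [10, 20, 35, 50, 60, 55, 40],
        "campus_path":    [15, 16, 18, 17, 16, 15, 15],
    }

    # An elevation trace shared by a fitness app with coordinates stripped.
    observed = [11, 19, 36, 49, 61, 54, 41]

    # The best-scoring candidate recovers the likely route despite the missing
    # coordinates, illustrating the leakage channel.
    best = min(routes, key=lambda name: ssd(routes[name], observed))
    print("most likely route:", best)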