Research
Core Philosophy
Formal Verification
Traditional software testing can demonstrate the presence of bugs but can never guarantee their absence. Our research moves beyond heuristic debugging to establish mathematical proofs of correctness, ensuring that a system strictly satisfies its critical safety properties.
We view verification not as a secondary check, but as the architectural bedrock of system design. We envision a principled development cycle where safe design and rigorous formal proofs evolve in tandem.
Mechanized Proofs
Mathematical reasoning can be expressed in a precise, machine-checkable form. Proof Assistants are the computer programs that realize this idea.
In this paradigm, a proof is a structured program, and checking a proof is as objective and definitive as a compiler type-checking a program. Our goal is to tame the complexity of formalization so that, through Proof Engineering, the verification of large-scale systems becomes both feasible and scalable.
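As a small illustration of this paradigm, the following Lean 4 snippet proves two properties of natural numbers; each theorem is a term whose type is the statement being proved, and the kernel checks it exactly as a type checker would (an illustrative sketch, not drawn from our own developments):

```lean
-- A proof is a program: each theorem below is a term whose type is the
-- statement being proved, and the kernel checks it like a type checker.
theorem add_flip (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b          -- reuse a library proof, like calling a function

theorem succ_is_pos (n : Nat) : 0 < n + 1 :=
  Nat.succ_pos n            -- the successor of any natural number is positive
```

If either term had the wrong type, checking would fail in the same definitive way that a compiler rejects an ill-typed program.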
Research Topics
[Topic 1] Formal Theory of Real-Time Systems
Computer systems have become an inseparable part of our lives, yet we continue to witness catastrophic failures caused by software flaws. As these systems increasingly integrate AI, the unpredictability of software behavior has reached a critical point.
Modern safety-critical systems are inherently distributed and real-time, which introduces immense complexity to system execution. This complexity creates a massive hurdle for both safe design and rigorous verification, as timing and synchronization errors can lead to fatal consequences.
Our research addresses these fundamental challenges. Through our work, we have established:
- Faithful Timed Semantics: A methodology to extend standard programming language semantics into rigorous, timed representations that reflect actual system execution.
- General Verification Principles: A foundational framework providing the reasoning power necessary to verify complex real-time properties at scale.
Building on these theoretical foundations, we are expanding our reach toward practical, large-scale platforms such as ROS 2. Our goal is to bridge the gap between formal theory and industrial practice, ensuring that the next generation of robotic and autonomous systems is built upon a provably secure and predictable architecture.
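The first of these ideas can be sketched in a few lines of Lean 4: given an untimed small-step semantics, pair each state with a clock and charge every step a duration. The names below (`Timed`, `timedStep`, `cost`) are invented for this illustration and do not come from our formalization:

```lean
-- Sketch: lifting an untimed semantics `step : S → S` into a timed one
-- by pairing states with a clock and charging each step its duration.
structure Timed (S : Type) where
  state : S
  clock : Nat   -- elapsed time, in abstract time units

def timedStep {S : Type} (step : S → S) (cost : S → Nat) :
    Timed S → Timed S
  | ⟨s, t⟩ => ⟨step s, t + cost s⟩

-- Timing requirements (e.g. deadlines) then become ordinary propositions
-- about `clock`, amenable to the same proof principles as untimed code.
def meetsDeadline {S : Type} (c : Timed S) (deadline : Nat) : Prop :=
  c.clock ≤ deadline
```

A faithful timed semantics for a real platform is far richer than this, but the pattern is the same: timing becomes part of the state, so timing properties become provable statements.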
[Topic 2] Scalable Verification of Complex Systems
We develop scalable verification methodologies to manage the exponential complexity of foundational computing layers. By taming this complexity, we ensure that rigorous formalization remains feasible and effective for large-scale, real-world systems.
- Compiler Verification: Compilers are the bedrock of software development, yet they are notoriously prone to bugs that can remain hidden for years. Since a single compiler error can silently introduce vulnerabilities, verifying the translation process is paramount. We apply formal methods to ensure that the executable code remains perfectly faithful to its source.
- Consensus Protocol Verification: Consensus protocols are the core mechanisms for maintaining data integrity in distributed systems—a critical technology in blockchain and financial networks, but not limited to them. We utilize principled formal methods to prove the safety and liveness of these algorithms, ensuring that distributed trust remains inviolable.
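To make the compiler-verification claim concrete, here is a textbook-style sketch in Lean 4: a compiler from arithmetic expressions to a stack machine, with a machine-checked theorem that compiled code computes exactly the source-level value. The toy language and proof are illustrative, not our actual development:

```lean
-- Source language and its meaning.
inductive Expr where
  | const (n : Nat)
  | plus  (a b : Expr)

def eval : Expr → Nat
  | .const n  => n
  | .plus a b => eval a + eval b

-- Target: a stack machine.
inductive Instr where
  | push (n : Nat)
  | add

def exec : List Instr → List Nat → List Nat
  | [],            s           => s
  | .push n :: is, s           => exec is (n :: s)
  | .add    :: is, a :: b :: s => exec is ((b + a) :: s)
  | .add    :: _,  s           => s   -- stack underflow: halt

def compile : Expr → List Instr
  | .const n  => [.push n]
  | .plus a b => compile a ++ compile b ++ [.add]

-- Key lemma: compiled code pushes exactly the value of the expression.
theorem exec_compile (e : Expr) :
    ∀ is s, exec (compile e ++ is) s = exec is (eval e :: s) := by
  induction e with
  | const n => intro is s; simp [compile, exec, eval]
  | plus a b iha ihb =>
      intro is s
      simp [compile, eval, List.append_assoc, iha, ihb, exec]

-- Correctness: the compiled program yields the source-level result.
theorem compile_correct (e : Expr) :
    exec (compile e) [] = [eval e] := by
  simpa [exec] using exec_compile e [] []
```

Real compiler verification (as in CompCert) scales this idea to full languages and optimizing passes, but the shape of the guarantee is the same simulation between source and target semantics.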
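A minimal example of the kind of safety argument involved: with majority quorums, any two quorums must share at least one node, which is the pigeonhole fact underlying agreement in many consensus protocols. The statement below is a Lean 4 sketch assuming Mathlib's `Finset` library:

```lean
import Mathlib.Data.Finset.Card

open Finset

-- Quorum intersection: if two quorums each contain a strict majority of
-- the n nodes, they overlap — the pigeonhole fact behind agreement.
theorem quorum_intersection (n : Nat) (Q₁ Q₂ : Finset Nat)
    (h₁ : Q₁ ⊆ range n) (h₂ : Q₂ ⊆ range n)
    (m₁ : n < 2 * Q₁.card) (m₂ : n < 2 * Q₂.card) :
    (Q₁ ∩ Q₂).Nonempty := by
  rw [← card_pos]
  -- the union of the two quorums fits inside the n nodes ...
  have hu : (Q₁ ∪ Q₂).card ≤ n := by
    simpa using card_le_card (union_subset h₁ h₂)
  -- ... while inclusion–exclusion forces the intersection to be nonempty
  have hie := card_union_add_card_inter Q₁ Q₂
  omega
```

Full protocol proofs also handle asynchrony, faults, and liveness, but they bottom out in combinatorial lemmas of exactly this kind.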
[Topic 3] AI meets Formal Verification
The rapid integration of AI into safety-critical systems presents both a formidable challenge and a transformative opportunity for formal verification. Our research explores this dual relationship:
- Formal Verification for AI: Traditional verification analyzes human-written logic, but AI components often operate as "black boxes," making their behavior unpredictable. This lack of transparency poses a new challenge to safety guarantees. To address it, we explore architectural safeguards such as the Simplex Architecture, which provides runtime assurance by pairing an untrusted high-performance controller with a verified fallback. Our goal is to develop robust frameworks that bound the uncertainty of AI controllers, ensuring autonomous systems remain within safe operational limits.
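The Simplex pattern can be sketched in a few lines: a decision module forwards the untrusted controller's command only when a one-step prediction stays inside a verified safe envelope, and otherwise falls back to a simple certified controller. Everything below (the plant model, the limits, and both controllers) is invented for illustration:

```python
# Simplex-style runtime assurance (illustrative sketch; the plant model,
# limits, and controllers here are invented for the example).

SAFE_LIMIT = 10.0   # verified safe set: |x| <= SAFE_LIMIT
DT = 0.1            # control period

def ai_controller(x: float) -> float:
    """Untrusted, high-performance controller (stand-in for a learned policy)."""
    return 25.0  # aggressive: always pushes in the positive direction

def baseline_controller(x: float) -> float:
    """Simple verified fallback: steer the state back toward the origin."""
    return -2.0 * x

def step(x: float, u: float) -> float:
    """Plant model: single integrator, x' = x + u * DT."""
    return x + u * DT

def decision_module(x: float) -> float:
    """Forward the AI command only if the predicted next state stays safe."""
    u = ai_controller(x)
    if abs(step(x, u)) <= SAFE_LIMIT:
        return u
    return baseline_controller(x)

# Despite the unsafe AI policy, the closed loop never leaves the envelope.
x = 0.0
trace = []
for _ in range(100):
    x = step(x, decision_module(x))
    trace.append(x)

assert all(abs(v) <= SAFE_LIMIT for v in trace)
```

The safety argument rests only on the plant model, the envelope, and the fallback controller; the AI policy can be arbitrarily bad without violating the invariant, which is exactly the kind of bounded-uncertainty guarantee described above.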
- AI for Formal Verification: While mechanized proof provides absolute certainty, the human effort required for formalization is a major bottleneck. We investigate using AI to automate proof generation and assist in constructing complex formal arguments. As autonomous proving is in its infancy, we focus on foundational techniques to bridge this gap—such as narrowing problem domains or providing structured guidance to AI models.