I'm a PhD student at the University of Queensland 🎓, deeply immersed in the fascinating world of neural networks 🤖—a constantly evolving field that pushes me to think outside the box every single day!
My research focuses on neural network verification (NNV) 🧠💪. I'm passionate about ensuring these powerful models are robust and reliable, regardless of the conditions or inputs they encounter.
Want to know more about me? Visit my website: zhongkuima.github.io
Tool Overview:
- wraact: Approximate activation function hulls with convex polytopes. Supports ReLU, Sigmoid, Tanh, GELU, and more (a minimal ReLU-relaxation sketch follows this list). 🛠
- shapeonnx: Infer the shapes of ONNX models. A simple yet powerful tool for understanding model dimensions (see the shape-inference sketch after this list). 📏
- slimonnx: Optimize and simplify ONNX models by removing redundant operations and resolving version issues (an Identity-removal sketch follows this list). 🚀
- torchonnx: Convert ONNX models to PyTorch format (.pth for parameters, .py for structure). 🔄
- torchvnnlib: Convert VNN-LIB verification benchmarks (.vnnlib) to PyTorch tensors (.pth files); a toy VNN-LIB parsing sketch follows this list. 🚀
- propdag: Bound propagation framework for neural network verification, supporting DAG structures and both forward and backward propagation (an interval-propagation sketch closes out the examples below). 💪
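
To give a flavor of what wraact computes: for an unstable ReLU (an input range straddling zero), the tightest single-neuron convex relaxation is the classic triangle. The sketch below is my own minimal illustration of that triangle, not wraact's API; wraact generalizes the idea to multi-neuron hulls and to curved activations like Sigmoid, Tanh, and GELU.

```python
def relu_triangle_relaxation(l: float, u: float):
    """Linear constraints over-approximating y = ReLU(x) for x in [l, u].

    Each constraint is a triple (a, b, c) meaning a*x + b*y <= c.
    Assumes the unstable case l < 0 < u; otherwise ReLU is exactly
    linear on [l, u] and needs no relaxation.
    """
    assert l < 0 < u
    slope = u / (u - l)  # slope of the secant from (l, 0) to (u, u)
    return [
        (0.0, -1.0, 0.0),           # y >= 0
        (1.0, -1.0, 0.0),           # y >= x
        (-slope, 1.0, -slope * l),  # y <= slope * (x - l)
    ]

for a, b, c in relu_triangle_relaxation(-1.0, 2.0):
    print(f"{a:+.3f}*x {b:+.3f}*y <= {c:+.3f}")
```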
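
For context on shape inference, the stock onnx package ships a basic pass; the calls below are real onnx APIs, though "model.onnx" is just a placeholder path. shapeonnx has its own API and exists because the built-in pass is best-effort and can leave intermediate shapes unresolved.

```python
import onnx
from onnx import shape_inference

# "model.onnx" is a placeholder; point this at any ONNX file.
model = onnx.load("model.onnx")
inferred = shape_inference.infer_shapes(model)

# Print the inferred shape of every intermediate tensor.
# Unresolved dims show up as 0 or symbolic names, which is where a
# dedicated tool earns its keep.
for vi in inferred.graph.value_info:
    dims = [d.dim_value or d.dim_param for d in vi.type.tensor_type.shape.dim]
    print(vi.name, dims)
```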
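
As a taste of the kind of cleanup slimonnx automates, this sketch strips Identity nodes using the stock onnx package. It is a minimal illustration of one redundancy, not slimonnx's implementation, and it ignores edge cases such as a graph output fed directly by an Identity node.

```python
import onnx

def strip_identity_nodes(model: onnx.ModelProto) -> onnx.ModelProto:
    """Remove Identity nodes, rewiring each consumer to the node's input."""
    graph = model.graph
    # Map each Identity output name to the name it merely forwards.
    alias = {n.output[0]: n.input[0]
             for n in graph.node if n.op_type == "Identity"}
    kept = [n for n in graph.node if n.op_type != "Identity"]
    for node in kept:
        for i, name in enumerate(node.input):
            while name in alias:  # follow chains of Identity nodes
                name = alias[name]
            node.input[i] = name
    del graph.node[:]
    graph.node.extend(kept)
    return model
```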
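
VNN-LIB properties are SMT-LIB-style text, so a box (per-input interval) specification is just a list of `(assert (<= X_i c))` and `(assert (>= X_i c))` lines. The toy parser below is my sketch, not torchvnnlib's code; real benchmarks also carry output constraints and disjunctions that need a proper parser.

```python
import re
import torch

def parse_vnnlib_box(path: str):
    """Extract per-input lower/upper bounds from a simple VNN-LIB file.

    Handles only flat assertions of the form (assert (<= X_i c)) and
    (assert (>= X_i c)); anything fancier needs a real parser.
    """
    with open(path) as f:
        text = f.read()
    lo, hi = {}, {}
    pattern = r"\(assert \((<=|>=) X_(\d+) ([-\d.eE+]+)\)\)"
    for op, idx, val in re.findall(pattern, text):
        (hi if op == "<=" else lo)[int(idx)] = float(val)
    n = max(lo) + 1
    return (torch.tensor([lo[i] for i in range(n)]),
            torch.tensor([hi[i] for i in range(n)]))
```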
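
Finally, the core idea behind bound propagation fits in a few lines. The sketch below is plain PyTorch, not propdag's API: it pushes an input box through linear layers and a ReLU with interval arithmetic. propdag's job is orchestrating such propagations, forward and backward, over arbitrary DAG-shaped networks.

```python
import torch

def interval_linear(W, b, l, u):
    """Propagate the box [l, u] through y = W x + b.

    Splitting W into positive and negative parts picks, for each
    output, the input bound that extremizes it.
    """
    W_pos, W_neg = W.clamp(min=0), W.clamp(max=0)
    return W_pos @ l + W_neg @ u + b, W_pos @ u + W_neg @ l + b

# Toy two-layer network with a ReLU in between.
torch.manual_seed(0)
W1, b1 = torch.randn(4, 2), torch.randn(4)
W2, b2 = torch.randn(1, 4), torch.randn(1)

l, u = torch.tensor([-1.0, -1.0]), torch.tensor([1.0, 1.0])
l, u = interval_linear(W1, b1, l, u)
l, u = l.clamp(min=0), u.clamp(min=0)  # ReLU is monotone
l, u = interval_linear(W2, b2, l, u)
print(f"verified output range: [{l.item():.3f}, {u.item():.3f}]")
```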
I'm thrilled to announce the stable release of my neural network verification toolkit! All six core packages are now production-ready and actively maintained.
| Package | Description | Version |
|---|---|---|
| propdag | Bound propagation framework | v2026.1.1 |
| wraact | Activation hull approximation | v2026.1.1 |
| shapeonnx | ONNX shape inference | v2026.1.0 |
| slimonnx | ONNX optimization | v2026.1.0 |
| torchonnx | ONNX-to-PyTorch conversion | v2026.1.0 |
| torchvnnlib | VNN-LIB-to-PyTorch conversion | v2026.1.0 |
I've worked on several exciting projects on neural networks and model security, published at top-tier venues:
- GHOST - "Mitigating Gradient Inversion Risks in Language Models via Token Obfuscation" (AsiaCCS'26)
- WraAct - "Convex Hull Approximation for Activation Functions" (OOPSLA'25)
- AIM - "Model Modulation with Logits Redistribution" (WWW'25)
- GRAB - "Uncovering Gradient Inversion Risks in Practical Language Model Training" (CCS'24)
- CoreLocker - "CORELOCKER: Neuron-level Usage Control" (S&P'24)
- WraLU - "ReLU Hull Approximation" (POPL'24)
- PdD - "Formalizing Robustness Against Character-Level Perturbations for Neural Network Language Models" (ICFEM'23)
Thanks to my friends and collaborators, including Xinguo Feng and Zihan Wang, with whom I'm honored to work. You can find more of their work via their scholar profiles.
If you find these tools useful, please consider:
- ⭐ Star the repositories on GitHub to show your support
- 🐛 Report issues if you encounter any bugs or have feature requests
- 💡 Contribute improvements through pull requests
- 📢 Share with colleagues who might benefit from these tools
Your feedback and contributions help make these tools better for everyone!
