Jian Xu (徐健)

jian.xu@ia.ac.cn  

     

I am currently an Associate Professor in the PAL group at the Institute of Automation, Chinese Academy of Sciences (CASIA).

Before joining CASIA, I gained three years of industry experience at the AI companies HUAWEI and XREAL.

I obtained my Ph.D. in Pattern Recognition and Intelligent Systems from the Institute of Automation, Chinese Academy of Sciences in 2020, under the supervision of Prof. Chunheng Wang. Previously, I received my B.S. in Control Science and Engineering from Shandong University in 2015.

My research interests lie in Large Multimodal Models, AI4Science, Pose Estimation, and Image Retrieval.

profile photo
Publications
Recoverable Compression: A Multimodal Vision Token Recovery Mechanism Guided by Text Information
Yi Chen, Jian Xu, Xu-Yao Zhang, Wen-Zhuo Liu, Yang-Yang Liu, Cheng-Lin Liu
arXiv
[PDF]
StylePrompter: Enhancing Domain Generalization with Test-Time Style Priors
Jiao Zhang, Jian Xu, Xu-Yao Zhang, Cheng-Lin Liu
arXiv
[PDF]
CMMaTH: A Chinese Multi-modal Math Skill Evaluation Benchmark for Foundation Models
Zhong-Zhi Li, Ming-Liang Zhang, Fei Yin, Zhi-Long Ji, Jin-Feng Bai, Zhen-Ru Pan, Fan-Hu Zeng, Jian Xu, Jia-Xin Zhang, Cheng-Lin Liu
arXiv
[PDF]
FAVOR: Full-Body AR-Driven Virtual Object Rearrangement Guided by Instruction Text
Kailin Li, Lixin Yang, Zenan Lin, Jian Xu, Xinyu Zhan, Yifei Zhao, Pengxiang Zhu, Wenxiong Kang, Kejian Wu, Cewu Lu
AAAI 2024
[PDF] [Project]
ACR-Pose: Adversarial Canonical Representation Reconstruction Network for Category Level 6D Object Pose Estimation
Zhaoxin Fan, Zhenbo Song, Zhicheng Wang, Jian Xu, Kejian Wu, Hongyan Liu, Jun He
ICMR 2024
[PDF]
CHORD: Category-level Hand-held Object Reconstruction via Shape Deformation
Kailin Li, Lixin Yang, Haoyu Zhen, Zenan Lin, Xinyu Zhan, Licheng Zhong, Jian Xu, Kejian Wu, Cewu Lu
ICCV 2023
[PDF] [Project]
POEM: Reconstructing Hand in a Point Embedded Multi-view Stereo
Lixin Yang, Jian Xu, Licheng Zhong, Xinyu Zhan, Zhicheng Wang, Kejian Wu, Cewu Lu
CVPR 2023
[PDF] [Project]
Object-Level Depth Reconstruction for Category-Level 6D Object Pose Estimation from Monocular RGB Image
Zhaoxin Fan, Zhenbo Song, Jian Xu, Zhicheng Wang, Kejian Wu, Hongyan Liu, Jun He
ECCV 2022
[PDF] [Project]
Unsupervised Semantic-Based Aggregation of Deep Convolutional Features
Jian Xu, Chunheng Wang, Cunzhao Shi, Baihua Xiao
IEEE Transactions on Image Processing, 2019
[PDF] [Project]
Iterative Manifold Embedding Layer Learned by Incomplete Data for Large-Scale Image Retrieval
Jian Xu, Chunheng Wang, Chengzuo Qi, Cunzhao Shi, Baihua Xiao
IEEE Transactions on Multimedia, 2019
[PDF] [Code]
Unsupervised Part-Based Weighting Aggregation of Deep Convolutional Features for Image Retrieval
Jian Xu, Cunzhao Shi, Chengzuo Qi, Chunheng Wang, Baihua Xiao
AAAI 2018
[PDF] [Code]
Research Projects

[1] National Natural Science Foundation of China (NSFC) Key Program, "Research on Mathematical Reasoning Based on Neuro-Symbolic Systems", 2025.1-2029.12, core participant.

[2] CAS Strategic Priority Research Program (Category A), "Intelligent Analysis of Ground-Air Multimodal Sugarcane Phenotype Data and Breeding of Elite Varieties", 2023.10-2028.10, core participant.

[3] Beijing Science and Technology Plan, "Research Cases and Intelligent Component Development in Key Areas of AI for Science", 2023.12-2025.12, core participant.

[4] 2035 Innovation Task, "Theory and Methods for Building Scientific Foundation Models", 2024.03-2026.03, core participant.

[5] Huawei, "Research on Efficient Fine-Tuning Techniques for Large Time-Series Forecasting Models", 2024.05-2025.05, core participant.

[6] Deep Earth Program, "Development and Application of Remote Sensing Foundation Models", 2023.11-2024.11, core participant.

Patents

[1] Jian Xu, Kejian Wu, Lixin Yang. Method, apparatus, electronic device, medium, and product for determining hand shape, 2023-03-31, invention patent, CN202310344403.

[2] Jian Xu, Zhicheng Wang, Kejian Wu. Method, apparatus, device, medium, and product for outputting keypoint data, 2022-12-20, invention patent, CN202211646062.

[3] Gaotong Yu, Jian Xu, Zhicheng Wang, Kejian Wu. Processing method, apparatus, device, and medium for determining gesture type, 2022-12-12, invention patent, CN202211592961.

[4] Jian Xu, Zhicheng Wang, Kejian Wu. Control apparatus, method, device, and storage medium for a head-mounted display device, 2022-09-07, invention patent, CN202211089235.

[5] Jian Xu, Zhicheng Wang, Kejian Wu. Display method and apparatus for a virtual keyboard, 2022-07-15, invention patent, CN202210833540.

[6] Jian Xu, Yaqi Zhang, Hongma Liu. Image rectification method, electronic device, medium, and system-on-chip, 2021-09-01, invention patent, CN202111020078.

[7] Jian Xu, Chao Zhang, Yaqi Zhang, Hongma Liu, Zhiping Jia. Target tracking method and apparatus, 2021-03-29, invention patent, CN202110336639.

[8] Chao Zhang, Jian Xu, Yaqi Zhang, Hongma Liu, Zhiping Jia, Shuailin Lyu. Method for determining a tracking target and electronic device, 2020-12-29, invention patent, CN202011607731, granted 2023-06-06.

[9] Yaqi Zhang, Chao Zhang, Jian Xu, Hongma Liu. Target tracking method and electronic device, 2020-09-30, invention patent, CN202011066347.

[10] Chunheng Wang, Jian Xu, Baihua Xiao. Satellite cloud image classification method and system, 2020-06-12, invention patent, CN202010024821, granted 2023-04-28.

[11] Chunheng Wang, Jian Xu, Baihua Xiao. Image retrieval method and system, 2020-05-26, invention patent, CN202010026336, granted 2023-04-25.


The website template was adapted from Xingyu Chen.