Hello! I am Liu Xiaoqing, a CV Alchemist.
I received the B.S. degree in network engineering from Anhui University, Hefei, China, in 2019. I am currently working toward the M.S. degree at the College of Information Science and Engineering, Huaqiao University, Xiamen, China. My research interests include computer vision and cross-modal retrieval.
2020 - Now
Supervisor: Prof. Huanqiang Zeng @ HQU
Since the fall of 2020, I have been enrolled in the College of Information Science and Engineering at Huaqiao University.
Major Courses: Matrix Analysis, Machine Vision, Stochastic Processes, Image Analysis, etc.
2015 - 2019
I graduated from the Department of Network Engineering, School of Computer Science and Technology, Anhui University, with a Bachelor of Engineering degree.
Major Courses: Advanced Language Programming, Digital Logic, Discrete Mathematics, Data Structures, Computer Organization Principles, Operating Systems, Database Principles, Data Communication Principles, Computer Network Principles, Network and Information Security, etc.
Cross-modal hashing retrieval approaches map heterogeneous multi-modal data into a common Hamming space to achieve efficient and flexible retrieval. However, existing cross-modal methods mainly exploit feature-level similarity between multi-modal data, while the label-level similarity and the relative ranking relationship between adjacent instances are ignored. To address these problems, we propose a novel Deep Rank Cross-modal Hashing (DRCH) method that fully explores the intra-modal semantic similarity relationship.
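To illustrate why hashing into a common Hamming space enables efficient retrieval, here is a minimal sketch (not the DRCH implementation itself; the function name and toy codes are illustrative) of ranking database items by Hamming distance to a query's binary code:

```python
import numpy as np

def hamming_rank(query_code, db_codes):
    """Rank database items by Hamming distance to a binary query code.

    query_code: (K,) array of {0, 1} bits for one query.
    db_codes:   (N, K) array of {0, 1} bits for N database items.
    Returns indices of db_codes sorted from nearest to farthest.
    """
    # Hamming distance = number of differing bit positions.
    dists = np.count_nonzero(db_codes != query_code, axis=1)
    return np.argsort(dists, kind="stable")

# Toy example with 4-bit codes.
query = np.array([1, 0, 1, 1])
db = np.array([
    [1, 0, 1, 1],   # distance 0
    [0, 0, 1, 1],   # distance 1
    [0, 1, 0, 0],   # distance 4
])
order = hamming_rank(query, db)  # nearest items first
```

Because the distance is a bit-count over short binary codes, the same ranking can be computed with XOR and popcount at scale, which is the source of the efficiency claim above.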
We propose a new method called DCH-SCR for cross-modal retrieval, which addresses a limitation of existing methods that consider only feature-level similarity and ignore fine-grained label-level similarity. Our method preserves semantic similarity by combining label-level and feature-level information, narrows the gap between modalities with a ranking alignment loss function, and optimizes hash codes based on a common semantic space. We use the gradient and Normalized Discounted Cumulative Gain (NDCG) to apply varying optimization strengths to data pairs with different similarities. Experiments on three image-text retrieval datasets show that DCH-SCR outperforms state-of-the-art methods.
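The NDCG weighting mentioned above builds on the standard NDCG measure for ranked lists. As a point of reference, here is a minimal sketch of the textbook NDCG computation (the standard definition, not the DCH-SCR loss itself):

```python
import math

def dcg(relevances):
    """Discounted cumulative gain for a ranked list of relevance grades.

    Uses the common (2^rel - 1) gain with a log2(rank + 2) discount,
    so items placed earlier in the list contribute more.
    """
    return sum((2 ** rel - 1) / math.log2(rank + 2)
               for rank, rel in enumerate(relevances))

def ndcg(relevances):
    """Normalize DCG by the DCG of the ideal (descending) ordering."""
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0
```

A perfectly ordered list scores 1.0, and any misordering scores strictly less, which is what makes NDCG usable as a graded signal for weighting data pairs by how badly they are ranked.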
No.668 Jimei Avenue, Xiamen, Fujian, China 361021