Today, with the growing number and diversity of data sources and media content, cross-media retrieval has become an increasingly important problem. Cross-media retrieval refers to searching for related data across different media types, including image retrieval, speech retrieval, text retrieval, video retrieval, etc., for example, finding related information between images and text. To achieve this goal, cross-modal hashing has emerged as a practical direction. The goal of cross-modal hashing is to map data of different media types into a common hash code space and to measure the relevance of data from different media types by computing similarity in that space. However, traditional cross-modal hashing methods only consider offline scenarios and require all training data to be available before training, which limits their applicability in online scenarios. In recent years, online cross-modal hashing methods have emerged that train the model incrementally without requiring all training data in advance, thereby enabling efficient cross-media retrieval in online settings. However, when existing online cross-modal hashing methods directly generate short hash codes, they often cannot guarantee code quality, because short hash codes lose information more easily than long ones. As datasets grow, short hash codes become increasingly important: they can significantly reduce computation and storage costs, thereby improving the efficiency of cross-media retrieval. Therefore, in this paper we propose a new online short-length hashing method, called Low-dimensional Compact Hashing for online cross-modal retrieval (LCH). Unlike existing online cross-modal hashing methods, LCH can generate high-quality short hash codes, thereby achieving efficient cross-media retrieval in online scenarios. LCH is an unsupervised method that exploits the inherent attributes of the data and generates hash codes by building strong
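As background, retrieval in a common hash code space can be sketched as follows. This is an illustrative example only, not the LCH algorithm: the random sign projections stand in for learned hash functions, and the toy feature dimensions and 16-bit code length are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy features: a small image database (4-dim features)
# and a single text query (6-dim features).
image_feats = rng.normal(size=(5, 4))
text_query = rng.normal(size=(6,))

# Illustrative encoders: random sign projections into a shared
# 16-bit hash code space (a stand-in for learned hash functions).
bits = 16
W_img = rng.normal(size=(4, bits))
W_txt = rng.normal(size=(6, bits))

img_codes = (image_feats @ W_img > 0).astype(np.uint8)  # shape (5, 16)
txt_code = (text_query @ W_txt > 0).astype(np.uint8)    # shape (16,)

# Cross-media relevance is measured by Hamming distance
# between codes in the shared space; smaller means more relevant.
hamming = np.count_nonzero(img_codes != txt_code, axis=1)
ranking = np.argsort(hamming)  # most similar images first
print(ranking)
```

The key point is that once both modalities live in the same binary code space, retrieval reduces to cheap bitwise comparisons rather than distance computations between heterogeneous real-valued features.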
connections between the original features and the hash codes. The main contributions of this paper are as follows: (1) We propose a new unsupervised online cross-modal hashing method, LCH, which generates discriminative compact hash codes by fully exploiting the original information in the data, thereby alleviating severe information loss. (2) To capture the correlation between different modalities and the dynamic changes of multi-modal data streams in a timely manner, we adopt an effective self-weighting strategy to assist the learning of the hash code space. (3) LCH is the first hashing method that uses compact hash codes to achieve efficient online cross-modal retrieval; learning short hash codes yields significant savings in computation and storage for large-scale data. (4) We conduct extensive experiments on three public benchmark datasets and compare LCH with state-of-the-art online cross-modal hashing baselines. The results show that LCH significantly outperforms these baselines, verifying its effectiveness and practicality.
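The storage argument behind contribution (3) can be made concrete with simple arithmetic; the database size and the 128-bit versus 16-bit comparison below are illustrative assumptions, not figures from the paper.

```python
# Binary hash codes cost one bit of storage per code bit.
n_items = 10_000_000  # illustrative database size

def storage_mb(n, bits):
    """Megabytes needed to store n binary codes of the given length."""
    return n * bits / 8 / 1e6

long_code = storage_mb(n_items, 128)   # 128-bit codes -> 160.0 MB
short_code = storage_mb(n_items, 16)   # 16-bit codes  ->  20.0 MB
print(long_code, short_code)           # prints 160.0 20.0
```

The same factor applies to query time: a 16-bit Hamming distance is a single XOR plus popcount on one machine word, whereas longer codes need proportionally more word operations per database item.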