This page provides some fundamental and essential computer vision (CV) related terms, concepts, and algorithms.
Terms/Concepts/Algorithms
- SIFT (Scale-Invariant Feature Transform)
- SURF (Speeded Up Robust Features)
In computing, indexed color is a technique to manage digital images' colors in a limited fashion, in order to save computer memory and file storage, while speeding up display refresh and file transfers. It is a form of vector quantization compression.
When an image is encoded in this way, color information is not directly carried by the image pixel data, but is stored in a separate piece of data called a palette: an array of color elements. Every element in the array represents a color, indexed by its position within the array. The individual entries are sometimes known as color registers. Each image pixel does not contain the full specification of its color, but only its index in the palette. This technique is sometimes referred to as pseudocolor[1] or indirect color,[2] as colors are addressed indirectly.
Perhaps the first device that supported palette colors was a random-access frame buffer, described in 1975 by Kajiya, Sutherland and Cheadle.[3][4] This supported a palette of 256 36-bit RGB colors.
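The indexing scheme described above can be sketched in a few lines of Python with numpy; the 4-entry palette and the sample image here are hypothetical examples, not any standard format:

```python
import numpy as np

# A hypothetical 4-entry RGB palette (real palettes often hold 16 or 256 colors).
palette = np.array([
    [0, 0, 0],        # index 0: black
    [255, 0, 0],      # index 1: red
    [0, 255, 0],      # index 2: green
    [255, 255, 255],  # index 3: white
], dtype=np.uint8)

# The pixel data stores only palette indices, not full RGB triples;
# with 4 colors, 2 bits per pixel would suffice.
indexed_image = np.array([
    [0, 1, 1, 0],
    [2, 3, 3, 2],
], dtype=np.uint8)

# Decoding: look up each index in the palette to recover full RGB values.
rgb_image = palette[indexed_image]  # shape (2, 4, 3)

print(rgb_image[1, 1])  # the pixel at row 1, column 1 decodes to white
```

The memory saving comes from storing one small index per pixel plus a single shared palette, instead of three color channels per pixel.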



An image retrieval system is a computer system for browsing, searching and retrieving images from a large database of digital images. Most traditional and common methods of image retrieval rely on adding metadata such as captions, keywords, titles or descriptions to the images so that retrieval can be performed over the annotation words. Manual image annotation is time-consuming, laborious and expensive; to address this, there has been a large amount of research done on automatic image annotation. Additionally, the growth of social web applications and the semantic web has inspired the development of several web-based image annotation tools.
The first microcomputer-based image database retrieval system was developed at MIT, in the 1990s, by Banireddy Prasaad, Amar Gupta, Hoo-min Toong, and Stuart Madnick.[1]
A 2008 survey article documented progress made after 2007.[2]
CBIR (content-based image retrieval) — the application of computer vision to image retrieval. CBIR aims at avoiding the use of textual descriptions and instead retrieves images based on similarities in their contents (textures, colors, shapes etc.) to a user-supplied query image or user-specified image features.
List of CBIR Engines – list of engines which search for images based on image visual content such as color, texture, shape/object, etc.
Further information: Visual search engine and Reverse image search
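As a rough illustration of content-based matching, the sketch below compares images by their color histograms; the function names, bin count, and similarity measure are illustrative choices, not part of any particular CBIR engine:

```python
import numpy as np

def color_histogram(image, bins=8):
    """Quantize each RGB channel into `bins` levels and count occurrences,
    normalized so images of any size can be compared fairly."""
    hist, _ = np.histogramdd(
        image.reshape(-1, 3), bins=(bins, bins, bins), range=[(0, 256)] * 3
    )
    hist = hist.ravel()
    return hist / hist.sum()

def similarity(h1, h2):
    """Histogram intersection: 1.0 means identical color distributions."""
    return np.minimum(h1, h2).sum()

# Synthetic stand-ins for database images (random pixels, fixed seed).
rng = np.random.default_rng(0)
query = rng.integers(0, 256, (32, 32, 3), dtype=np.uint8)
duplicate = query.copy()
unrelated = rng.integers(0, 256, (32, 32, 3), dtype=np.uint8)

hq = color_histogram(query)
sim_same = similarity(hq, color_histogram(duplicate))
sim_diff = similarity(hq, color_histogram(unrelated))
# An exact duplicate scores higher than an unrelated image.
```

A real engine would combine several such descriptors (texture, shape) and index them for fast nearest-neighbor search over millions of images.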
Image collection exploration is a mechanism to explore large digital image repositories. The huge amount of digital images produced every day through different devices such as mobile phones brings challenges for the storage, indexing and access to these repositories. Content-based image retrieval (CBIR) has been the traditional paradigm to index and retrieve images. However, this paradigm suffers from the well-known semantic gap problem. Image collection exploration consists of a set of computational methods to represent, summarize, visualize and navigate image repositories in an efficient, effective and intuitive way.[1]
Summarization
Automatic summarization consists of finding a subset of images from a larger image collection that represents the whole collection.[2] Different methods based on clustering have been proposed to select these image prototypes (the summary). The summarization process addresses the problem of selecting a representative set of images for a search query or, in some cases, providing an overview of an image collection.
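A minimal sketch of such a clustering-based summary, assuming precomputed feature vectors per image (the function names, k-means formulation, and initialization are illustrative assumptions, not a specific published method):

```python
import numpy as np

def summarize(features, k, iters=10, seed=0):
    """Return the indices of k prototype images: cluster the feature
    vectors with k-means and pick the image nearest each centroid."""
    features = np.asarray(features, dtype=float)
    rng = np.random.default_rng(seed)
    # Farthest-point initialization: spreads the initial centroids out.
    centroids = [features[rng.integers(len(features))]]
    for _ in range(1, k):
        d = np.min([np.linalg.norm(features - c, axis=1) for c in centroids],
                   axis=0)
        centroids.append(features[d.argmax()])
    centroids = np.array(centroids)
    for _ in range(iters):
        dists = np.linalg.norm(features[:, None] - centroids[None], axis=2)
        labels = dists.argmin(axis=1)  # assign each image to nearest centroid
        for j in range(k):
            if (labels == j).any():
                centroids[j] = features[labels == j].mean(axis=0)
    # The summary: the most central member of each cluster.
    dists = np.linalg.norm(features[:, None] - centroids[None], axis=2)
    prototypes = []
    for j in range(k):
        members = np.where(dists.argmin(axis=1) == j)[0]
        if members.size:
            prototypes.append(int(members[dists[members, j].argmin()]))
    return prototypes

# Two tight, well-separated groups standing in for image feature vectors.
rng = np.random.default_rng(1)
group_a = rng.normal([0, 0], 0.1, (10, 2))
group_b = rng.normal([100, 100], 0.1, (10, 2))
features = np.vstack([group_a, group_b])
protos = summarize(features, k=2)  # one prototype per group
```

In practice the feature vectors would come from descriptors such as color histograms or SIFT-based representations, and k controls the summary size.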