Seven Segment Digital Data Recognition using Tesseract / Java

java,tesseract,image-recognition,tess4j,seven-segment-display
I am trying to recognize seven-segment digital text from an image using Tess4J. My input is here. I have applied some normalization as follows: 1) cropped the image; 2) converted it to binary. I wish to remove the jagged edges of the text in the image. How can I...
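The excerpt is cut off, but for the stated goal of smoothing jagged binary text before OCR, a minimal sketch follows (in Python/OpenCV rather than the Java/Tess4J stack the question uses; the file name digits.png is hypothetical): upscale, re-binarize, then apply a morphological close and a median blur.

```python
import cv2

img = cv2.imread("digits.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input

# Upscale so the smoothing works on sub-stroke detail, not whole strokes.
img = cv2.resize(img, None, fx=2, fy=2, interpolation=cv2.INTER_CUBIC)

# Re-binarize; Otsu picks the threshold automatically.
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Close small notches along stroke edges, then round off remaining jaggies.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
smooth = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
smooth = cv2.medianBlur(smooth, 3)

cv2.imwrite("digits_smooth.png", smooth)
```

The smoothed image can then be handed to the OCR engine as before.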

What is the best way to implement Color Detection in Android?

android,opencv,colors,image-recognition
I want to implement color detection in Android. What I exactly want to do is, after taking a picture with the Android camera, detect the color of the object in that picture. My goal is to detect colors according to color intensity. At this point, I searched and saw...
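The excerpt trails off, but one common approach to the described task is to classify the dominant hue in HSV space. A minimal sketch, in Python/OpenCV rather than the Android SDK; the input file name and the hue bands below are rough assumptions, not calibrated values:

```python
import cv2
import numpy as np

img = cv2.imread("object.jpg")                 # hypothetical input image
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
h, s, _ = cv2.split(hsv)

saturated = h[s > 50]                          # drop grey/washed-out pixels
mean_hue = float(saturated.mean()) if saturated.size else -1.0

# OpenCV hue runs 0-179; these bands are assumed, not calibrated.
if mean_hue < 0:
    name = "grey/unsaturated"
elif mean_hue < 10 or mean_hue > 170:
    name = "red"
elif mean_hue < 35:
    name = "yellow/orange"
elif mean_hue < 85:
    name = "green"
elif mean_hue < 130:
    name = "blue"
else:
    name = "purple/magenta"
print(name)
```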

ORB FeatureDetector with Bag Of Words

opencv,image-recognition,orb
BOWImgDescriptorExtractor has to receive 32F descriptors, so SURF or SIFT have to be used for the DescriptorExtractor, but surely the FeatureDetector can be anything you wish, right? I just need some clarification here; I've only ever seen people say "You can't use ORB with BoW", but when detecting...
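For reference, the split the question describes does work: ORB can fill the FeatureDetector role while SIFT produces the float (32F) descriptors BoW needs. A minimal sketch, assuming an OpenCV build that includes SIFT and a hypothetical image.jpg:

```python
import cv2

img = cv2.imread("image.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical input

orb = cv2.ORB_create(nfeatures=500)
sift = cv2.SIFT_create()

keypoints = orb.detect(img, None)               # ORB as the FeatureDetector
keypoints, desc = sift.compute(img, keypoints)  # SIFT as the extractor

print(desc.shape, desc.dtype)  # (n, 128) float32 — the 32F that BoW requires
```

It is only ORB's binary (8U) descriptors that the k-means-based vocabulary cannot consume directly; the keypoint locations themselves are fine.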

Select complex colour range for skin-colour detection in OpenCV with Python

python,opencv,numpy,computer-vision,image-recognition
I am trying to make a skin-colour detection program. Basically, it takes video from a webcam and then creates a mask, after which only the skin should be visible. I have found a criterion for detecting skin-colour ranges in a paper. It looks like this: The skin colour at uniform daylight illumination...
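Since the excerpt stops before the criterion itself, here is a minimal sketch of building such a mask with NumPy, using one commonly cited daylight skin rule (R > 95, G > 40, B > 20, etc.) as a stand-in; substitute the exact thresholds from the paper being followed:

```python
import cv2
import numpy as np

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Work in int32 so the subtractions below cannot wrap around uint8.
    b = frame[:, :, 0].astype(np.int32)
    g = frame[:, :, 1].astype(np.int32)
    r = frame[:, :, 2].astype(np.int32)

    rule = ((r > 95) & (g > 40) & (b > 20)
            & (np.maximum(np.maximum(r, g), b)
               - np.minimum(np.minimum(r, g), b) > 15)
            & (np.abs(r - g) > 15) & (r > g) & (r > b))
    mask = rule.astype(np.uint8) * 255

    cv2.imshow("skin only", cv2.bitwise_and(frame, frame, mask=mask))
    if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```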

Differences between Training Data and Vocabulary - Bag Of Words

opencv,image-recognition
When creating a Bag of Words, you need to create a Vocabulary to give to the BOWImgDescriptorExtractor, which you then use on the images you wish to input. This creates the testing data. So where does the training data come from, and where do you use it? What's the difference...
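To make the distinction concrete: the vocabulary is clustered from raw descriptors of the training images, and the training data for the classifier is those same images re-encoded as BoW histograms. A minimal sketch, with hypothetical train_paths and labels:

```python
import cv2
import numpy as np

sift = cv2.SIFT_create()
train_paths = ["a.jpg", "b.jpg"]              # hypothetical training set
labels = np.array([0, 1], dtype=np.int32)     # hypothetical class labels

# 1) Collect raw descriptors from the training images.
bow_trainer = cv2.BOWKMeansTrainer(100)       # 100 visual words
for p in train_paths:
    img = cv2.imread(p, cv2.IMREAD_GRAYSCALE)
    _, desc = sift.detectAndCompute(img, None)
    bow_trainer.add(desc)

# 2) Cluster them into the vocabulary (the visual "words").
vocabulary = bow_trainer.cluster()

# 3) Encode any image as a fixed-length histogram over those words.
matcher = cv2.BFMatcher(cv2.NORM_L2)
bow_extract = cv2.BOWImgDescriptorExtractor(sift, matcher)
bow_extract.setVocabulary(vocabulary)

train_data = []
for p in train_paths:
    img = cv2.imread(p, cv2.IMREAD_GRAYSCALE)
    kp = sift.detect(img, None)
    train_data.append(bow_extract.compute(img, kp)[0])

# 4) Histograms + labels are the training data for a classifier;
#    test images only go through step 3 before prediction.
svm = cv2.ml.SVM_create()
svm.train(np.array(train_data, np.float32), cv2.ml.ROW_SAMPLE, labels)
```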

Why is a negative image used in preprocessing?

c++,opencv,image-processing,ocr,image-recognition
I've observed that for many preprocessing operations (mainly preprocessing for OCR) a negative image is usually used, for example: http://felix.abecassis.me/2011/10/opencv-rotation-deskewing/ http://felix.abecassis.me/2011/09/opencv-detect-skew-angle/ I've also found this where objects are matched using the kNN algorithm. Why are inverted images used? Is that only to show it is just a preprocessing step? Are there...
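The usual answer is mechanical rather than cosmetic: OpenCV's contour, moment, and deskew routines treat non-zero pixels as foreground, so black-on-white text is inverted to white-on-black first. A minimal sketch of the deskew case from the linked posts, with a hypothetical scan.png:

```python
import cv2
import numpy as np

img = cv2.imread("scan.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Invert so the text pixels become non-zero ("on") foreground.
negative = cv2.bitwise_not(binary)

# Fitting a rotated box around the foreground only works once text is "on".
coords = np.column_stack(np.where(negative > 0)).astype(np.float32)
angle = cv2.minAreaRect(coords)[-1]
print("skew angle:", angle)
```

Without the inversion, the "foreground" would be the white page, and the fitted box would just hug the whole image.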