

Caffe convert_imageset with one classifier

Question:

Tag: neural-network,lmdb,caffe

I want to create an LMDB dataset from images, some of which contain the feature I want Caffe to learn and some of which don't.
My question is: in the text input file passed to convert_imageset, how should I label the images that don't contain the feature?
I know the format is

PATH_TO_IMAGE LABEL
PATH_TO_IMAGE LABEL
PATH_TO_IMAGE LABEL

But which label should I assign to images without the feature?
For example, img1.jpg contains the feature, while img2.jpg and img3.jpg don't. So should the text file look like this:

img1.jpg 0
img2.jpg 1?
img3.jpg 1?

Thanks!


Answer:

Got an answer from the Caffe-users Google Group: yes, creating a dummy label for the images that don't contain the feature is the right way to do this.
So it is:

img1.jpg 0
img2.jpg 1
img3.jpg 1
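
For completeness, here is a minimal sketch (in Python; the folder names pos/ and neg/ are hypothetical) of generating such a listing file, followed by the usual convert_imageset invocation:

import os

# Hypothetical layout: "pos" holds images with the feature (label 0),
# "neg" holds images without it (label 1).
with open('listing.txt', 'w') as f:
    for name in sorted(os.listdir('pos')):
        f.write('pos/%s 0\n' % name)
    for name in sorted(os.listdir('neg')):
        f.write('neg/%s 1\n' % name)

# Then build the LMDB with Caffe's tool, e.g.:
#   convert_imageset --shuffle /path/to/image/root/ listing.txt my_dataset_lmdb

Note that the paths in the listing file are interpreted relative to the root folder passed to convert_imageset.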

Related:


FeedForward Neural Network: Using a single Network with multiple output neurons for many classes


machine-learning,neural-network,backpropagation,feed-forward
I am currently working on the MNIST handwritten digits classification. I built a single FeedForward network with the following structure: Inputs: 28x28 = 784 inputs Hidden Layers: A single hidden layer with 1000 neurons Output Layer: 10 neurons All the neurons have Sigmoid activation function. The reported class is the...

Wrong values for partial derivatives in neural network python


python,numpy,neural-network
I am implementing a simple neural network classifier for the iris dataset. The NN has 3 input nodes, 1 hidden layer with two nodes, and 3 output nodes. I have implemented everything, but the values of the partial derivatives are not calculated correctly. I have exhausted myself looking for the...
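
A standard way to debug wrong partial derivatives (a generic numpy sketch, not code from the question) is gradient checking: compare each analytic derivative against a central-difference estimate.

import numpy as np

def numerical_gradient(loss_fn, params, eps=1e-5):
    # Central-difference estimate of d(loss)/d(param) for each parameter.
    grad = np.zeros_like(params)
    for i in range(params.size):
        orig = params.flat[i]
        params.flat[i] = orig + eps
        loss_plus = loss_fn()
        params.flat[i] = orig - eps
        loss_minus = loss_fn()
        params.flat[i] = orig  # restore the original value
        grad.flat[i] = (loss_plus - loss_minus) / (2 * eps)
    return grad

# If backprop is correct, this should closely match the analytic gradient:
# np.allclose(numerical_gradient(lambda: compute_loss(W1), W1), grad_W1, atol=1e-6)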

batch normalization in neural network


machine-learning,neural-network,normalization
I'm still fairly new to ANNs and I was just reading the Batch Normalization paper (http://arxiv.org/pdf/1502.03167.pdf), but I'm not sure I'm getting what they are doing (and, more importantly, why it works). So let's say I have two layers L1 and L2, where L1 produces outputs and sends them to...
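
For reference, the core transform from the paper (training-time batch statistics only), as a minimal numpy sketch where gamma and beta are the learned scale and shift:

import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    # x: (batch_size, features). Normalize each feature over the batch,
    # then let the network re-scale via gamma and re-shift via beta.
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta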

How to test a trained neural network to predict outputs for new inputs


python,neural-network
I'm new to neural networks and Python and have just started to learn. On the web I found this Back-Propagation Neural Network class that I'm trying to use for classification. Link to the class: http://arctrix.com/nas/python/bpnn.py I added to the network 11 inputs with corresponding labeled data [0] or [1], creating a network with...
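
In general, predicting for new inputs just means running them through the trained weights once; a generic numpy sketch (not specific to bpnn.py) for a single-hidden-layer network:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x, W1, b1, W2, b2):
    # Forward pass only: input -> hidden -> output, using the trained weights.
    hidden = sigmoid(x @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)
    return output  # threshold at 0.5 to get a 0/1 label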

Continue training a Doc2Vec model


neural-network,gensim
Gensim's official tutorial explicitly states that it is possible to continue training a (loaded) model. I'm aware that according to the documentation it is not possible to continue training a model that was loaded from the word2vec format. But even when one generates a model from scratch and then tries...

Fitnet function analogue in Octave


matlab,neural-network,octave
Octave is considered an open-source implementation of MATLAB. In MATLAB there is a function fitnet. Does anybody know a corresponding function in Octave? P.S.: I have also installed Octave's neural network package. Or maybe somebody knows about some other package which has this...

Neuroph: Multi Layer Perceptron Backpropagation learning not working


java,neural-network
This question is related to Neuroph Java library. I have the following program which creates a multi layer perceptron containing a single hidden layer of 20 nodes. The function being learnt is x^2. Backpropagation learning rule is used. However, as is evident from the output, the program doesn't seem to...

How to train a RNN for word replacement?


text,replace,neural-network
I have some understanding of how to use a simple recurrent neural network that reads a sequence of characters and produces another sequence where each character is a function of the previous ones. However, I have no idea how to implement the sort of delayed output generation required to do...

Convolutional Neural Network in Torch. Error when training the network


lua,neural-network,torch
I am trying to base my convolutional neural network on the following tutorial: https://github.com/torch/tutorials/tree/master/2_supervised The issue is that my images have different dimensions (3x200x200) than those used in the tutorial. Also, I have only two classes. The following are the changes that I made: Changing the dataset to...

Train neural network to determine color image quality [closed]


machine-learning,artificial-intelligence,neural-network
I'm looking for someone who knows whether it is possible to train a neural network to tell if a provided image lives up to the trained expectation. Let's say we have a neural network trained to read an 800x800 pixel color image. Therefore, I will have 1,920,000 inputs and...

Having trouble creating my Neural Network inputs


machine-learning,artificial-intelligence,neural-network
I'm currently working on a neural network that should take N parameters as input. Each parameter can have M different (discrete) values, let's say {A,B,C,…,M}. It also has a discrete number of outputs. How can I create my inputs in this situation? Should I have N×M inputs (each with 0 or 1 as its value), or should I think of a different...
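
The N×M option amounts to one-hot encoding each parameter; a small numpy sketch (hypothetical N=3 parameters with M=4 values each, categories mapped to indices 0..3):

import numpy as np

def one_hot_encode(sample, M):
    # sample: length-N list of category indices in [0, M).
    # Returns an N*M binary vector with exactly one 1 per parameter.
    x = np.zeros(len(sample) * M)
    for i, v in enumerate(sample):
        x[i * M + v] = 1.0
    return x

print(one_hot_encode([2, 0, 3], M=4))  # 12 inputs for N=3, M=4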

FANN Neural Network - constant result


c,neural-network,fann
I'm using the FANN Library with the given code. #include <stdio.h> #include "doublefann.h" int main() { const NUM_ITERATIONS = 10000; struct fann *ann; int topology[] = { 1, 4, 1 }; fann_type d1[1] = { 0.5 }; fann_type d2[1] = { 0.0 }; fann_type *pres; int i; /* Create network...

In neural networks, why is the bias seen as either a “b” parameter or as an additional “wx” neuron?


machine-learning,neural-network,backpropagation
In other words, what is the main reason for representing the bias either as a b_j term or as an additional w_ij*x_i term in the neuron summation formula before the sigmoid? Performance? Which method is best, and why? Note: j is a neuron of the current layer and i a neuron of...
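
The two forms are mathematically identical: the extra-neuron version just folds b into the weight vector by appending a constant input of 1. A small numpy sketch:

import numpy as np

x = np.array([0.5, -1.2, 2.0])
w = np.array([0.1, 0.4, -0.3])
b = 0.7

z1 = w @ x + b                  # bias as a separate parameter b

w_aug = np.append(w, b)         # bias as a weight on a constant input of 1
x_aug = np.append(x, 1.0)
z2 = w_aug @ x_aug

print(np.isclose(z1, z2))       # True: same pre-activation either way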

C++ FANN fann_run always produce same output


c++,neural-network,fann
I am using the FANN Library to build neural networks for a regression problem. The thing is, once the network has been trained on the relevant training set (which seems to work quite well), every single test produces the exact same output. In other words, given any state of...

Feature Vectors in Radial Basis Function Network


machine-learning,neural-network,point-clouds
I am trying to use an RBFNN for point-cloud-to-surface reconstruction, but I couldn't understand what my feature vectors in the RBFNN would be. Can anyone please help me understand this? The goal is to get to this: [surface image omitted] from inputs like this: [point cloud image omitted] ...

torch7 : how to connect the neurons of the same layer?


neural-network,torch
Is it possible to implement, using torch, an architecture that connects the neurons of the same layer?

Error in creating LMDB database file in Python for Caffe


python,numpy,anaconda,caffe,lmdb
I'm trying to create an LMDB database file in Python to be used with Caffe according to this tutorial. The commands import numpy as np and import caffe run perfectly fine. However, when I try to run import lmdb and import deepdish as dd, I'm getting the following errors:...

How does Caffe determine the number of neurons in each layer?


neural-network,deep-learning,caffe
Recently, I've been trying to use Caffe for some of the deep learning work that I'm doing. Although writing the model in Caffe is very easy, I haven't been able to find the answer to this question: how does Caffe determine the number of neurons in a hidden layer? I...

How to install the Lasagne package with Python on Windows


python,package,neural-network
I'm new to Python and I'm running a script on Python 3.4. I'm getting the following error: ImportError: No module named 'lasagne'. Does someone know how to install this package for Python, please? ...

Don't understand train data from convnetjs


javascript,neural-network,conv-neural-network
I'm trying to predict some data using a neural network in JavaScript. For that I found convnetjs, which seems easy to use. In the example, they use something they call MagicNet, so you don't need to know about NNs to work with it. This is the example of...

(Java) Partial Derivatives for Back Propagation of Hidden Layer


java,machine-learning,artificial-intelligence,neural-network
Yesterday I posted a question about the first piece of the backpropagation algorithm. Today I'm working to understand the hidden layer. Sorry for all the questions; I've read several websites and papers on the subject, but no matter how much I read, I still have a hard time...

Trouble with backpropagation in a vectorized implementation of a simple neural network


matlab,neural-network
I have been going through the UFLDL tutorials. In the vectorized implementation of a simple neural net, the tutorials suggest that one way to do this would be to go through the entire training set instead of the iterative approach. In the backpropagation part, this would mean replacing: gradW1 = zeros(size(W1)); gradW2...

Multilayer Perceptron replaced with Single Layer Perceptron


math,machine-learning,neural-network,linear-algebra,perceptron
I have a problem understanding the difference between an MLP and an SLP. I know that in the first case the MLP has more than one layer (the hidden layers) and that the neurons have a nonlinear activation function, like the logistic function (needed for gradient descent). But I...

OpenCL / AMD: Deep Learning


sdk,opencl,neural-network,gpgpu,deep-learning
While "googl'ing" and doing some research I were not able to find any serious/popular framework/sdk for scientific GPGPU-Computing and OpenCL on AMD hardware. Is there any literature and/or software I missed? Especially I am interested in deep learning. For all I know deeplearning.net recommends NVIDIA hardware and CUDA frameworks. Additionally...

Error while installing deepdish


python,pip,ubuntu-14.04,caffe,lmdb
I'm trying to create an LMDB database file to be used with Caffe according to this tutorial on an Ubuntu 14.04 machine using Anaconda Python 2.7.9. However, when I do pip install deepdish, I'm getting the following error: Collecting deepdish Using cached deepdish-0.1.4.tar.gz Complete output from command python setup.py egg_info:...

Programming the Back Propagation Algorithm


java,machine-learning,neural-network
I'm trying to implement the backpropagation algorithm in my own net. I understand the idea of the backprop algorithm; however, I'm not strong in math. I'm just working on the first half of the algorithm, computing the output layer (not worrying about partial derivatives in the hidden layer(s) yet)....

How to read Torch Tensor from C [closed]


c,lua,neural-network,luajit,torch
I have to train a convolutional neural network using the Torch framework and then write the same network in C. To do so, I have to somehow read the learned parameters of the net from my C program, but I can't find a way to convert or write to a...

compute with neural network in R?


r,neural-network
All values in the allClassifiers tuples are either 1 or 2, e.g.:

naiveBayesPrediction knnPred5 knnPred10 dectreePrediction logressionPrediction correctClass
1 2 1 1 1 1
1 2 1 1 1 1
1 2 1 1 1 1
1 2 1 2 1 1

I trained the ensembler: ensembleModel <- neuralnet(correctClass ~ naiveBayesPrediction...

Opencv mlp Same Data Different Results


c++,opencv,machine-learning,neural-network,weight
Let me simplify this question: if I run OpenCV MLP training and classification consecutively on the same data, I get different results. That is, if I put training a new MLP on the same training data and classifying the same test data in a for loop, each iteration will give...

Supervised machine learning for several coefficient


machine-learning,neural-network
I have a set of items that are each described by 10 precise numbers n1, .., n10. I would like to learn the coefficients k1, .., k10 that should be associated with those numbers to rank the items according to my criteria. For that purpose, I created a web application (in...

What are units in neural network (backpropagation algorithm)


machine-learning,artificial-intelligence,neural-network,classification,backpropagation
Please help me understand the "unit" concept in neural networks. From the book I understood that a unit in the input layer represents an attribute of a training tuple. However, it is left unclear how exactly it does so. Here is the diagram: [diagram omitted] There are two "thinking paths" about the input units. The...

Theano: how to efficiently undo/reverse max-pooling


python,optimization,neural-network,theano
I'm using Theano 0.7 to create a convolutional neural net which uses max-pooling (i.e. shrinking a matrix down by keeping only the local maxima). In order to "undo" or "reverse" the max-pooling step, one method is to store the locations of the maxima as auxiliary data, then simply recreate the...
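
The store-the-argmax method described above, sketched in plain numpy (Theano specifics omitted; 2x2 non-overlapping pooling assumed):

import numpy as np

def max_pool_with_argmax(x, k=2):
    # x: (H, W) with H, W divisible by k. Returns the pooled map plus the
    # coordinates of each maximum, saved as auxiliary data.
    H, W = x.shape
    pooled = np.zeros((H // k, W // k))
    argmax = np.zeros((H // k, W // k, 2), dtype=int)
    for i in range(H // k):
        for j in range(W // k):
            window = x[i*k:(i+1)*k, j*k:(j+1)*k]
            r, c = np.unravel_index(np.argmax(window), window.shape)
            pooled[i, j] = window[r, c]
            argmax[i, j] = (i*k + r, j*k + c)
    return pooled, argmax

def unpool(pooled, argmax, shape):
    # Reverse the pooling: put each max back where it came from, zeros elsewhere.
    out = np.zeros(shape)
    for i in range(pooled.shape[0]):
        for j in range(pooled.shape[1]):
            r, c = argmax[i, j]
            out[r, c] = pooled[i, j]
    return out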

How can I pause/serialize a genetic algorithm in Encog?


java,algorithm,neural-network,genetic-algorithm,encog
How can I pause a genetic algorithm in Encog 3.4 (the version currently under development in Github)? I am using the Java version of Encog. I am trying to modify the Lunar example that comes with Encog. I want to pause/serialize the genetic algorithm and then continue/deserialize at a later...

Can the validation error of a dataset be higher than the test error during the whole process of training a neural network?


machine-learning,computer-vision,neural-network,deep-learning,pylearn
I'm training a convolutional neural network using the pylearn2 library, and during all the epochs my validation error is consistently higher than the testing error. Is that possible? If so, in what kind of situations?

Why is there only one hidden layer in a neural network?


machine-learning,neural-network,genetic-algorithm,evolutionary-algorithm
I recently made my first neural network simulation which also uses a genetic evolution algorithm. It's simple software that just simulates simple organisms collecting food, and they evolve, as one would expect, from organisms with random and sporadic movements into organisms with controlled, food-seeking movements. Since this kind of organism...

how Weka calculates Sigmoid function c#


c#,neural-network,weka
I am using Weka with my dataset to train a neural network, and now I want to use the results (the weights and thresholds produced by Weka) in my application and implement only the forward pass. Now the problem is that I don't know how exactly Weka calculates the sigmoid function,...

Any Ideas for Predicting Multiple Linear Regression Coefficients by using Neural Networks (ANN)?


matlab,neural-network,linear-regression,backpropagation,perceptron
Suppose there are 2 inputs (X1 and X2) and 1 target output (t) to be estimated by a neural network (each node has 6 samples): X1 = [2.765405915 2.403146899 1.843932529 1.321474515 0.916837222 1.251301467]; X2 = [84870 363024 983062 1352580 804723 845200]; t = [-0.12685144347197 -0.19172223428950 -0.29330584684934 -0.35078062276141 0.03826908777226 0.06633047875487]; I...

XOR neural network backprop


python,machine-learning,neural-network
I'm trying to implement a basic XOR NN with 1 hidden layer in Python. I don't fully understand the backprop algorithm, so I've been stuck on getting delta2 and updating the weights... help? import numpy as np def sigmoid(x): return 1.0 / (1.0 + np.exp(-x)) vec_sigmoid = np.vectorize(sigmoid) theta1 = np.matrix(np.random.rand(3,3)) theta2...
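
For reference, the textbook deltas for a sigmoid network with squared error (a generic numpy sketch, not the poster's exact variable layout; each theta row includes a bias column):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def backprop_step(x, y, theta1, theta2, lr=0.5):
    a1 = np.append(1.0, x)                     # input plus bias unit
    a2 = np.append(1.0, sigmoid(theta1 @ a1))  # hidden plus bias unit
    a3 = sigmoid(theta2 @ a2)                  # output

    delta3 = (a3 - y) * a3 * (1 - a3)          # output-layer delta
    delta2 = (theta2[:, 1:].T @ delta3) * a2[1:] * (1 - a2[1:])  # hidden delta

    theta2 -= lr * np.outer(delta3, a2)        # gradient = delta x activation
    theta1 -= lr * np.outer(delta2, a1)
    return theta1, theta2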

brain.js: XOR example does not work


javascript,machine-learning,neural-network
I'm trying to understand brain.js. This is my code; it does not work. (Explanation of what I expect it to do is below.) <script src="https://cdn.rawgit.com/harthur/brain/gh-pages/brain-0.6.3.min.js"> <script> var net = new brain.NeuralNetwork(); net.train([{input: [0, 0], output: [0]}, {input: [0, 1], output: [1]}, {input: [1, 0], output: [1]}, {input: [1, 1], output: [0]}]);...

ArrayIndexOutOfBoundsException in 3D array


java,arrays,multidimensional-array,neural-network
I'm trying to make a jagged array for a neural network and this is giving me an out of bounds error... int[] sizes = { layer1, layer2, layer3 }; int k = sizes.length - 1; double[][][] net = new double[k][][]; int i; for (i = 0; i < k; i++)...

MATLAB - How to change “Validation Check” count


matlab,neural-network
How can I change "Validation Checks" value from 6 to higher or lower values using code? I have following code: % Create a Pattern Recognition Network hiddenLayerSize = ns; net = patternnet(hiddenLayerSize); net.divideParam.trainRatio = trRa/100; net.divideParam.valRatio = vaRa/100; net.divideParam.testRatio = teRa/100; % Train the Network [net,tr] = train(net,inputs,targets); % Test...

Torch Lua: Why is my gradient descent not optimizing the error?


lua,neural-network,backpropagation,training-data,torch
I've been trying to implement a siamese neural network in Torch/Lua, as I already explained here. Now I have my first implementation, which I suppose is good. Unfortunately, I'm facing a problem: during back-propagation training, gradient descent does not reduce the error. That is, it always computes the...

How to use R's neuralnet package in a Kaggle competition about the Titanic


r,machine-learning,neural-network
I am trying to run this code for the Kaggle competition about the Titanic, for exercise. It's free and a beginner case. I am using the neuralnet package within R. This is the training data from the website: train <- read.csv("train.csv") m <- model.matrix( ~ Survived + Pclass...

What is the definition of “feature” in neural network?


neural-network
I am a beginner with neural networks. I am very confused about the word "feature". Can you give me a definition of "feature"? Are the features the neurons in the hidden layers?

Does Andrew Ng's ANN from Coursera use SGD or batch learning?


machine-learning,neural-network
What type of learning is Andrew Ng using in his neural network exercise on Coursera? Is it stochastic gradient descent or batch learning? I'm a little confused right now......
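
The distinction itself, in a tiny sketch (linear model with squared error; all names illustrative): batch learning averages the gradient over the whole training set before each update, while SGD updates after every single example. (If I remember the course exercise correctly, it optimizes the cost over the full training set with fmincg, i.e. batch learning, but verify against the course materials.)

import numpy as np

X = np.random.randn(100, 3)
y = X @ np.array([1.0, -2.0, 0.5])
lr = 0.01

w = np.zeros(3)
for epoch in range(100):              # batch: one update per full pass
    grad = X.T @ (X @ w - y) / len(X)
    w -= lr * grad

w = np.zeros(3)
for epoch in range(100):              # stochastic: one update per example
    for xi, yi in zip(X, y):
        w -= lr * (xi @ w - yi) * xi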

Malfunctioning perceptron


machine-learning,neural-network,perceptron
I am a newbie to machine learning and have been experimenting with basic perceptrons before moving on to multilayer networks. The problem I have is with the code below. I have a training data generator which uses a set of weights to generate a truth table. The problem I have...

Object categories of pretrained imagenet model in caffe


machine-learning,neural-network,deep-learning,caffe,matcaffe
I'm using the pretrained ImageNet model provided with the Caffe (CNN) library ('bvlc_reference_caffenet.caffemodel'). I can output a 1000-dim vector of object scores for any image using this model. However, I don't know what the actual object categories are. Did someone find a file where the corresponding object categories are...
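
If it helps, the Caffe repository ships a script, data/ilsvrc12/get_ilsvrc_aux.sh, that downloads synset_words.txt, which maps the 1000 output indices to category names. A minimal sketch of using it:

import numpy as np

# synset_words.txt comes from Caffe's data/ilsvrc12/get_ilsvrc_aux.sh
with open('synset_words.txt') as f:
    labels = [line.strip() for line in f]

scores = np.random.rand(1000)   # stand-in for the net's 1000-dim score vector
for i in np.argsort(scores)[::-1][:5]:
    print(labels[i], scores[i])  # top-5 category names with their scores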

Neural Network Error oscillating with each training example


machine-learning,artificial-intelligence,neural-network,backpropagation
I've implemented a back-propagating neural network and trained it on my data. The data alternates between sentences in English & Afrikaans. The neural network is supposed to identify the language of the input. The structure of the network is 27 × 16 × 2. The input layer has 26 inputs for...

What is cost function in neural network?


neural-network
Could someone please explain to me why the cost function in a neural network is so important, and what its purpose is? Note: I'm just getting introduced to the subject of neural networks and have failed to understand this perfectly....
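
As a concrete anchor: the cost function is the single number training tries to minimize; it measures how far the network's outputs are from the targets. One common choice, mean squared error, in a minimal numpy sketch:

import numpy as np

def mse_cost(predictions, targets):
    # Average squared gap between what the network outputs and what it
    # should output; gradient descent adjusts the weights to shrink this.
    return np.mean((predictions - targets) ** 2)

print(mse_cost(np.array([0.9, 0.1]), np.array([1.0, 0.0])))  # 0.01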

Print output of a Theano network


python,debugging,neural-network,theano
I am sorry, very newbie question... I trained a neural network with Theano and now I want to see what it outputs for a certain input. So I can write: test_pred = lasagne.layers.get_output(output_layer, dataset['X_test']) where output_layer is my network. Now, the last layer happens to be a softmax, so if...
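
Since get_output only returns a symbolic Theano expression, it has to be compiled before it yields numbers; a sketch reusing output_layer and dataset from the question (use T.tensor4 instead of T.matrix for image input):

import theano
import theano.tensor as T
import lasagne

X = T.matrix('X')
# deterministic=True disables dropout and similar stochastic layers.
test_pred = lasagne.layers.get_output(output_layer, X, deterministic=True)

predict_fn = theano.function([X], test_pred)  # compile once...
print(predict_fn(dataset['X_test']))          # ...then call with real data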