
MATLAB - How to change “Validation Check” count

matlab,neural-network
How can I change "Validation Checks" value from 6 to higher or lower values using code? I have following code: % Create a Pattern Recognition Network hiddenLayerSize = ns; net = patternnet(hiddenLayerSize); net.divideParam.trainRatio = trRa/100; net.divideParam.valRatio = vaRa/100; net.divideParam.testRatio = teRa/100; % Train the Network [net,tr] = train(net,inputs,targets); % Test...

Torch Lua: Why is my gradient descent not optimizing the error?

lua,neural-network,backpropagation,training-data,torch
I've been trying to implement a Siamese neural network in Torch/Lua, as I already explained here. Now I have my first implementation, which I believe to be good. Unfortunately, I'm facing a problem: during training back-propagation, the gradient descent does not update the error. That is, it always computes the...

How to determine which neurons to connect between layers in an artificial neural network?

artificial-intelligence,neural-network
Say for my first input layer I have 10 input nodes/neurons. Say my hidden layer has 10 neurons as well. My third and final layer is one output neuron. How do I connect the layers? Is there a technique for determining the best way to do this or do you...

Neural Network Error oscillating with each training example

machine-learning,artificial-intelligence,neural-network,backpropagation
I've implemented a back-propagating neural network and trained it on my data. The data alternates between sentences in English & Afrikaans. The neural network is supposed to identify the language of the input. The structure of the network is 27 * 16 * 2. The input layer has 26 inputs for...

Does Convolutional Neural Network possess localization abilities on images?

computer-vision,neural-network,feature-detection,deep-learning
As far as I know, CNN rely on sliding window techniques and can only indicate if a certain pattern is present or not anywhere in given bounding boxes. Is that true? Can one achieve localization with CNN without any help of such techniques?...

How to train a neural network to detect presence of a pattern?

neural-network,training-data
The question phrasing is vague - and I'm happy to change it based on feedback. But, I am trying to train a neural network to detect fraudulent transactions on a website. I have a lot of parameters as inputs (time of day, country of origin, number of visits in the...

A method or delegate does not match other delegate parameters

c#,artificial-intelligence,neural-network
I'm creating an AI system for the Google Science Fair but I've hit a roadblock: I can't seem to find the cause of the error stated in the title, and a search on Google returns no answers on the topic. I'm using MonoDevelop. Here is my code: using UnityEngine;...

Using machine learning to make a computer learn calculus

machine-learning,integration,neural-network,implementation,calculus
Are there any known approaches to making a machine learn calculus? I've learnt that it is quite simple to teach calculating derivatives because it is possible to implement an algorithm. Meanwhile, an implementation of integration is possible but is rarely or never fully implemented due to the algorithmic complexity. I...

FANN Neural Network - constant result

c,neural-network,fann
I'm using the FANN library with the given code. #include <stdio.h> #include "doublefann.h" int main() { const int NUM_ITERATIONS = 10000; struct fann *ann; int topology[] = { 1, 4, 1 }; fann_type d1[1] = { 0.5 }; fann_type d2[1] = { 0.0 }; fann_type *pres; int i; /* Create network...

confusion matrix as the result of neural network in matlab

neural-network
Two questions: 1. I used the MATLAB neural network toolbox to train a neural network for classification, but each time I close the program and train and test the NN, I get different results! Do you know what happened? 2. Which value in the confusion matrix would be my final accuracy of...
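Run-to-run variation like this usually comes from random weight initialization and the random division of the data into train/validation/test sets. The question is about MATLAB, but the idea is language-independent; here is a minimal NumPy sketch (the function name is illustrative, not from any toolbox) of how fixing the RNG seed makes results repeatable:

```python
import numpy as np

def init_weights(n_in, n_out, seed=None):
    # Random initial weights (and random data splits) are the usual
    # source of run-to-run variation in training results.
    rng = np.random.default_rng(seed)
    return rng.normal(0.0, 0.1, size=(n_in, n_out))

# Without a seed, two "identical" trainings start from different weights:
a = init_weights(4, 3)
b = init_weights(4, 3)

# With a fixed seed, initialization (and hence the whole run) is repeatable:
c = init_weights(4, 3, seed=42)
d = init_weights(4, 3, seed=42)
print(np.allclose(c, d))  # True
```

In MATLAB, seeding the global random number generator before training has the same effect.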

Encog regularization for C# samples/usage

c#,neural-network,encog
I wonder whether the Encog developers have implemented regularization for the backpropagation algorithm. I've seen a RegularizationStrategy class for Java, but didn't find anything similar for C#.

Continue training a Doc2Vec model

neural-network,gensim
Gensim's official tutorial explicitly states that it is possible to continue training a (loaded) model. I'm aware that according to the documentation it is not possible to continue training a model that was loaded from the word2vec format. But even when one generates a model from scratch and then tries...

Unit testing backpropagation neural network code

unit-testing,neural-network,backpropagation
I am writing a backprop neural net mini-library from scratch and I need some help with writing meaningful automated tests. Up until now I have automated tests that verify that weight and bias gradients are calculated correctly by the backprop algorithm, but no test on whether the training itself actually...

How can I pause/serialize a genetic algorithm in Encog?

java,algorithm,neural-network,genetic-algorithm,encog
How can I pause a genetic algorithm in Encog 3.4 (the version currently under development in Github)? I am using the Java version of Encog. I am trying to modify the Lunar example that comes with Encog. I want to pause/serialize the genetic algorithm and then continue/deserialize at a later...

How to install the Lasagne package with Python on Windows

python,package,neural-network
I'm new to Python and I'm running some scripts on Python 3.4. I'm getting the following error: ImportError: No module named 'lasagne'. Does someone know how to install this package for Python, please? ...

How to encode the complex data for the neural network in the best way?

neural-network,bitvector
The data consist of several records. A record is as follows: [bit vector, numeric vector, a few numeric values]. The bit vector has a different length for each record, and the same is true for the numeric vector. The number of numeric values per record is constant...

Multiclass Neural Network Issue

java,neural-network,backpropagation
I have been trying to implement back-propagation neural networks for a while now and I am facing issues time after time. The progress so far is that my neural network works fine for XOR, AND and OR. The following image shows the training of my neural network for XOR over 100000...

C++ FANN fann_run always produce same output

c++,neural-network,fann
I am using the FANN library to build neural networks for a regression problem. The thing is, once the network has been trained on the relevant training set (which seems to work quite well), every single test produces the exact same output. In other words, given any state of...

compute with neural network in R?

r,neural-network
All tuples in allClassifiers are either 1 or 2, e.g.:

naiveBayesPrediction knnPred5 knnPred10 dectreePrediction logressionPrediction correctClass
1 2 1 1 1 1
1 2 1 1 1 1
1 2 1 1 1 1
1 2 1 2 1 1

I trained the ensembler: ensembleModel <- neuralnet(correctClass ~ naiveBayesPrediction...

(Java) Partial Derivatives for Back Propagation of Hidden Layer

java,machine-learning,artificial-intelligence,neural-network
Yesterday I posted a question about the first piece of the back-propagation algorithm. Today I'm working to understand the hidden layer. Sorry for a lot of questions; I've read several websites and papers on the subject, but no matter how much I read, I still have a hard time...

Convolutional Deep Belief Networks (CDBN) vs. Convolutional Neural Networks (CNN)

machine-learning,neural-network,deep-learning,dbn,conv-neural-network
Recently, I started to learn neural networks and I would like to know the difference between Convolutional Deep Belief Networks and Convolutional Neural Networks. Here there is a similar question, but there is no exact answer for it. We know that Convolutional Deep Belief Networks are CNNs + DBNs. So, I...

Any Ideas for Predicting Multiple Linear Regression Coefficients by using Neural Networks (ANN)?

matlab,neural-network,linear-regression,backpropagation,perceptron
In case, there are 2 inputs (X1 and X2) and 1 target output (t) to be estimated by neural network (each nodes has 6 samples): X1 = [2.765405915 2.403146899 1.843932529 1.321474515 0.916837222 1.251301467]; X2 = [84870 363024 983062 1352580 804723 845200]; t = [-0.12685144347197 -0.19172223428950 -0.29330584684934 -0.35078062276141 0.03826908777226 0.06633047875487]; I...

What is the way to feed multidimensional input data to encog ANN in java?

java,neural-network,encog
I am trying to feed some input (IP) v/s ideal (ID) data to encog neural network (BasicNetwork class). All the tutorials show the input format (MLData) to be like this: IP11,IP12,IP13 ID11,ID12 IP21,IP22,IP23 ID21,ID22 some more values... But I want to feed the data like this: IP11,IP12,IP13 IP21,IP22,IP23 ID11,ID12 IP11,IP12,IP13...

Get argument value from function call

r,neural-network
How do I get (potentially un-filled) value of argument in function call? I am trying to get an information if linout is true or false for fitted nnet model. Example: library(nnet) df <- data.frame(a = runif(10), b = runif(10), c = runif(10) > .5) fit <- nnet(c ~ ., data...

Neural Networks: Does the input layer consist of neurons?

machine-learning,neural-network
I am currently studying neural network theory and I see that everywhere it is written that a network consists of the following layers: Input Layer, Hidden Layer(s), Output Layer. I see some graphical descriptions that show the input layer as real nodes in the net, while others show this layer as...

C++ heap corruption on new

c++,memory,neural-network
I'm writing a simple ANN (neural network) for function approximation. I got a crash with the message "Heap corrupted". I found a few pieces of advice on how to resolve it, but nothing helped. I get the error at the first line of this function: void LU(double** A, double** &L, double** &U, int s){ U = new double*[s]; L...

OpenCL / AMD: Deep Learning

sdk,opencl,neural-network,gpgpu,deep-learning
While googling and doing some research I was not able to find any serious/popular framework/SDK for scientific GPGPU computing and OpenCL on AMD hardware. Is there any literature and/or software I missed? I am especially interested in deep learning. For all I know, deeplearning.net recommends NVIDIA hardware and CUDA frameworks. Additionally...

How to avoid loops by Vectorizing below code?

matlab,neural-network,vectorization,bsxfun
The code below is correct, but I want to vectorize it (and may convert to GPU) to increase the speed. How can I convert it to vector form? RF = 4; inhibatory = 0; overlap=3; act_funct = 'sig'; gap = RF-overlap; Image1 = rand(30,22); Image2 = rand(27,19); % size_image2 is...

XOR neural network backprop

python,machine-learning,neural-network
I'm trying to implement basic XOR NN with 1 hidden layer in Python. I'm not understanding the backprop algo specifically, so I've been stuck on getting delta2 and updating the weights...help? import numpy as np def sigmoid(x): return 1.0 / (1.0 + np.exp(-x)) vec_sigmoid = np.vectorize(sigmoid) theta1 = np.matrix(np.random.rand(3,3)) theta2...
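For reference, here is a minimal full-batch XOR network in NumPy — a sketch, not the questioner's code; the hidden-layer size and learning rate are arbitrary choices — showing where delta2 comes from and how it flows back into the weight updates:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# 2 inputs -> 8 hidden -> 1 output, all sigmoid
W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(20000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # output delta: error times the sigmoid derivative out*(1-out)
    delta2 = (out - y) * out * (1 - out)
    # hidden delta: back-propagate delta2 through W2
    delta1 = (delta2 @ W2.T) * h * (1 - h)
    # gradient-descent updates
    W2 -= lr * h.T @ delta2; b2 -= lr * delta2.sum(axis=0)
    W1 -= lr * X.T @ delta1; b1 -= lr * delta1.sum(axis=0)

pred = (out > 0.5).astype(int).flatten()
print(pred.tolist())
```

With this seed and enough iterations the predictions converge to [0, 1, 1, 0].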

Which method should theoretically be best?

neural-network,ocr
Say I want to recognize my characters using neural network(s). Let's cut it down to 5 letters, the binary form of image to 16x16, input + 2 layers network, unipolar function inside both layers. Momentum backpropagation is used in the process of learning. Which of the following approaches should give...

Feature Vectors in Radial Basis Function Network

machine-learning,neural-network,point-clouds
I am trying to use RBFNN for point cloud to surface reconstruction but I couldn't understand what would be my feature vectors in RBFNN. Can any one please help me to understand this one. A goal to get to this: From inputs like this: ...

Malfunctioning perceptron

machine-learning,neural-network,perceptron
I am a newbie to machine learning and have been experimenting with basic perceptrons before moving on to multilayer networks. The problem I have is with the code below. I have a training data generator which uses a set of weights to generate a truth table. The problem I have...

Programming the Back Propagation Algorithm

java,machine-learning,neural-network
I'm trying to implement the backpropagation algorithm in my own net. I understand the idea of the backprop algorithm; however, I'm not strong with math. I'm just working on the first half of the backprop algorithm, computing the output layer (not worrying about partial derivatives in the hidden layer(s) yet)....

Having trouble creating my Neural Network inputs

machine-learning,artificial-intelligence,neural-network
I'm currently working on a neural network that should have N parameters in input. Each parameters can have M different values (discrete values), let's say {A,B,C,…,M}. It also has a discrete number of outputs. How can I create my inputs from this situation? Should I have N×M inputs (having 0 or 1 as value), or should I think of a different...
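One common answer to the situation described above is one-hot encoding: each of the N parameters becomes M binary inputs, exactly one of which is 1. A small sketch (the parameter values A–D are made up for illustration):

```python
def one_hot_encode(sample, categories):
    # categories: the M possible values each of the N parameters can take.
    # Produces an N*M binary vector with exactly one 1 per parameter.
    vec = []
    for value in sample:
        vec.extend(1 if value == c else 0 for c in categories)
    return vec

# 3 parameters, each taking one of the values A..D -> 12 binary inputs
encoded = one_hot_encode(["B", "D", "A"], ["A", "B", "C", "D"])
print(encoded)  # [0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0]
```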

about backpropagation and sigmoid function

neural-network
I have been reading this ebook about ANNs: https://www4.rgu.ac.uk/files/chapter3%20-%20bp.pdf and have a doubt about the effect of the sigmoid function when calculating the errorB. The text says that if I have a threshold neuron I can use Target-Output, but because I have a sigmoid function involved I should add Output(1-Output), and...
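The extra factor the book adds is just the derivative of the sigmoid, conveniently written in terms of the unit's own output. A tiny sketch of the two error terms:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def output_delta(target, output):
    # Threshold unit:  delta = target - output
    # Sigmoid unit:    delta = output * (1 - output) * (target - output)
    # The extra factor is sigmoid'(z) expressed via the output itself.
    return output * (1.0 - output) * (target - output)

out = sigmoid(0.0)              # 0.5, the steepest point of the sigmoid
print(output_delta(1.0, out))   # 0.125
```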

Liquid State Machine: How it works and how to use it?

machine-learning,neural-network
I am now learning about LSM (Liquid State Machines), and I try to understand how they are used for learning. I am pretty confused from what I read over the web. I'll write what I understood -> It may be incorrect and I'll be glad if you can correct me...

Supervised machine learning for several coefficient

machine-learning,neural-network
I have a set of items that are each described by 10 precise numbers n1, .., n10. I would like to learn the coefficients k1, .., k10 that should be associated to those numbers to rank them according to my criteria. In that purpose I created a web application (in...

PyBrain: passing empty floats or switching them with neutral values?

artificial-intelligence,neural-network,pybrain
Right now, I am trying to pass this for a dataset sample: 7/2/2014,7:30,138.885,138.87,138.923,,,,138.88067,138.91434,138.895,,,,138.89657 14,138.9186042,138.8745387,138.923,138.9046667,138.895,138.8696667 But predictably, it gives me a value error since empty strings can't be converted into floats. What I want to do is to pass those empty variables in such a way the associated nodes will do...

Can the validation error of a dataset be higher than the test error during the whole process of training a neural network?

machine-learning,computer-vision,neural-network,deep-learning,pylearn
I'm training a convolutional neural network using the pylearn2 library, and during all the epochs my validation error is consistently higher than the testing error. Is that possible? If so, in what kind of situations?

Why is there only one hidden layer in a neural network?

machine-learning,neural-network,genetic-algorithm,evolutionary-algorithm
I recently made my first neural network simulation which also uses a genetic evolution algorithm. It's simple software that just simulates simple organisms collecting food, and they evolve, as one would expect, from organisms with random and sporadic movements into organisms with controlled, food-seeking movements. Since this kind of organism...

How do I interpret the output of a neurolab simulation?

python,neural-network,data-mining
I am using neurolab to simulate a neural network to classify a dataset into a binary classification. I have the data in a dataframe. I am creating a neural network with one input value, one output value and 10 hidden nodes. df_train = pd.read_csv("training.csv") target = df_train['outputcol'] # already encoded...

What are units in neural network (backpropagation algorithm)

machine-learning,artificial-intelligence,neural-network,classification,backpropagation
Please help me understand the "unit" concept in neural networks. From the book I understood that a unit in the input layer represents an attribute of a training tuple. However, it is left unclear how exactly it does so. Here is the diagram: There are two "thinking paths" about the input units. The...

Getting different predicted values of time series every time I re-train a neural network on R

r,neural-network,nnet
I'm trying to fit some data with the nnet package in R. After I train the neural network, I want to predict some values, but if I re-train the net and predict again, I get significantly different values. Here's a reproducible code to copy/paste and see what I'm talking about....

Wrong values for partial derivatives in neural network python

python,numpy,neural-network
I am implementing a simple neural network classifier for the iris dataset. The NN has 3 input nodes, 1 hidden layer with two nodes, and 3 output nodes. I have implemented everything, but the values of the partial derivatives are not calculated correctly. I have exhausted myself looking for the...

How to train an RNN for word replacement?

text,replace,neural-network
I have some understanding of how to use a simple recurrent neural network that reads a sequence of characters and produces another sequence where each character is a function of the previous ones. However, I have no idea how to implement the sort of delayed output generation required to do...

In neural networks, why is the bias seen as either a “b” parameter or as an additional “wx” neuron?

machine-learning,neural-network,backpropagation
In other words, what is the main reason for switching the bias from a b_j to an additional w_ij*x_i in the neuron summation formula before the sigmoid? Performance? Which method is best and why? Note: j is a neuron of the current layer and i a neuron of...
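The two formulations are mathematically identical; folding the bias into the weight vector just lets one matrix multiplication handle everything. A quick numeric check (the values are arbitrary):

```python
import numpy as np

x = np.array([0.3, -0.7, 1.2])
w = np.array([0.5, 0.1, -0.4])
b = 0.25

# Form 1: explicit bias term b_j
z_explicit = w @ x + b

# Form 2: bias folded in as a weight w_0 on a constant input x_0 = 1
x_aug = np.concatenate(([1.0], x))
w_aug = np.concatenate(([b], w))
z_folded = w_aug @ x_aug

print(np.isclose(z_explicit, z_folded))  # True
```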

Feedforward Neural Network Training

java,machine-learning,neural-network
I am trying to write Feed Forward NN, and I am testing it to learn x*y function using particle swarm optimisation to learn (the PSO algorithm is working) but it won't even get close to learning the function. I have looked over my code so many times, so I don't...

Fitnet function analogue in Octave

matlab,neural-network,octave
Octave is considered an open-source implementation of MATLAB. In MATLAB there is a function fitnet. Does anybody know a corresponding function in Octave? P.S.: I have also installed Octave's neural network package in my Octave edition. Or, maybe, does somebody know about some other package which has this...

What is cost function in neural network?

neural-network
Could someone please explain to me why the cost function in a neural network is so important? What is its purpose? Note: I'm just getting introduced to the subject of neural networks and have failed to understand it perfectly....
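In short, the cost function turns "how wrong is the network?" into a single number that gradient descent can minimize. A minimal sketch using mean squared error (one of several common choices):

```python
def mse_cost(predictions, targets):
    # Mean squared error: the quantity that training tries to minimize.
    n = len(predictions)
    return sum((p - t) ** 2 for p, t in zip(predictions, targets)) / n

# A network that is closer to the targets has a lower cost,
# which is what gives gradient descent a direction to move in.
print(mse_cost([0.9, 0.1], [1.0, 0.0]))  # ~0.01
print(mse_cost([0.5, 0.5], [1.0, 0.0]))  # 0.25
```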

Train neural network to determine color image quality [closed]

machine-learning,artificial-intelligence,neural-network
I'm looking for someone who knows whether it is possible to train a neural network to tell if a provided image lives up to the trained expectation. Let's say we have a neural network trained to read an 800x800 pixel color image. Therefore, I will have 1,920,000 inputs and...

Don't understand train data from convnetjs

javascript,neural-network,conv-neural-network
I'm trying to predict some data using a neural network in javascript. For that I found convnetjs that seems easy to use. In the example, they use one thing that they call MagicNet, so you don't need to know about NN to work with it. This is the example of...

the neural networks are too sensitive for the input

machine-learning,artificial-intelligence,neural-network
I have the following two feature vectors:
0.2567 0.2567 0.0105 0.0105 0.0000 -0.0000 -0.0000 0.0000 0.0000 0.0000
0.2567 0.2567 0.0105 0.0105 0.0000 -0.0000 -0.0000 0.0000 0.0000 0.0000
0.2567 0.2567 0.0105 0.0105 0.0000 -0.0000 -0.0000 0.0000 0.0000 0.0000
0.2567 0.2567 0.0105 0.0105 0.0000 -0.0000 -0.0000 0.0000 0.0000 0.0000
0.2567 0.2567 0.0105...

Neuroph: Multi Layer Perceptron Backpropagation learning not working

java,neural-network
This question is related to Neuroph Java library. I have the following program which creates a multi layer perceptron containing a single hidden layer of 20 nodes. The function being learnt is x^2. Backpropagation learning rule is used. However, as is evident from the output, the program doesn't seem to...

Neural network back propagation weight change effect on predictions

networking,neural-network
I am trying to understand how a neural network can predict different outputs by learning different input/output patterns. I know that weight changes are the mode of learning... but if an input brings about weight adjustments to achieve a particular output in the back-propagation algorithm, won't this knowledge (the weight updates) be knocked off when...

batch normalization in neural network

machine-learning,neural-network,normalization
I'm still fairly new to ANNs and I was just reading the Batch Normalization paper (http://arxiv.org/pdf/1502.03167.pdf), but I'm not sure I'm getting what they are doing (and more importantly, why it works). So let's say I have two layers L1 and L2, where L1 produces outputs and sends them to...
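As a rough sketch of the core operation from the paper (training-time batch statistics only; the running averages used at inference time, and the learned per-feature gamma/beta, are simplified to scalars here):

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    # Normalize each feature over the mini-batch to zero mean / unit variance,
    # then let the learnable gamma/beta rescale and shift it back if useful.
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

batch = np.array([[1.0, 100.0],
                  [2.0, 200.0],
                  [3.0, 300.0]])
out = batch_norm(batch)
print(out.mean(axis=0))  # approximately [0, 0]
print(out.std(axis=0))   # approximately [1, 1]
```

Note how the two features end up on the same scale regardless of their original magnitudes, which is part of why the layers downstream train more stably.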

Opencv mlp Same Data Different Results

c++,opencv,machine-learning,neural-network,weight
Let me simplify this question. If I run OpenCV MLP train and classify consecutively on the same data, I get different results. Meaning, if I put training a new MLP on the same training data and classifying on the same test data in a for loop, each iteration will give...

FeedForward Neural Network: Using a single Network with multiple output neurons for many classes

machine-learning,neural-network,backpropagation,feed-forward
I am currently working on the MNIST handwritten digits classification. I built a single FeedForward network with the following structure: Inputs: 28x28 = 784 inputs Hidden Layers: A single hidden layer with 1000 neurons Output Layer: 10 neurons All the neurons have Sigmoid activation function. The reported class is the...

ArrayIndexOutOfBoundsException in 3D array

java,arrays,multidimensional-array,neural-network
I'm trying to make a jagged array for a neural network and this is giving me an out of bounds error... int[] sizes = { layer1, layer2, layer3 }; int k = sizes.length - 1; double[][][] net = new double[k][][]; int i; for (i = 0; i < k; i++)...

Does Andrew Ng's ANN from Coursera use SGD or batch learning?

machine-learning,neural-network
What type of learning is Andrew Ng using in his neural network exercise on Coursera? Is it stochastic gradient descent or batch learning? I'm a little confused right now...

How to Get the Weight Matrix from a FANN NN?

c,artificial-intelligence,neural-network,fann
I'm using FANN for neural networks. (Link to FANN) I need to get the matrix of weights after training the network, but I didn't find anything in the documentation. (Link to documentation) Do you know how to get that matrix? Thank you!...

How does a convolutional neural network connect to the multi-layered perceptron?

machine-learning,artificial-intelligence,neural-network,convolution
Which operation takes place to produce the output from say a 9x9 filter and pass that output as the input to MLP.

How to test a trained neural network to predict outputs for new inputs

python,neural-network
I'm new to neural networks and Python, and I just started to learn. On the web I found this Back-Propagation Neural Network class that I'm trying to use for classification. Link to the class: http://arctrix.com/nas/python/bpnn.py I added to the network 11 inputs with corresponding labeled data [0] or [1], creating a network with...

How to use R's neuralnet package in a Kaggle competition about Titanic

r,machine-learning,neural-network
I am trying to run this code for the Kaggle competition about Titanic for exercise. It's free and a beginner case. I am using the neuralnet package within R. This is the train data from the website: train <- read.csv("train.csv") m <- model.matrix( ~ Survived + Pclass...

MLP with sliding windows = TDNN

neural-network
I need some confirmation on a statement: are these two equivalent? 1. MLP with sliding time windows 2. Time delay neural network (TDNN) Can anyone confirm the statement, possibly with a reference? Thanks...

how Weka calculates Sigmoid function c#

c#,neural-network,weka
I am using Weka with my dataset to train a neural network, and now I want to use the results (weights and thresholds produced by Weka) in my application and implement only the forward pass. Now the problem is that I don't know exactly how Weka calculates the sigmoid function,...

How to read Torch Tensor from C [closed]

c,lua,neural-network,luajit,torch
I have to train a convolutional neural network using the Torch framework and then write the same network in C. To do so, I have to read somehow the learned parameters of the net from my C program, but I can't find a way to convert or write to a...

How does Caffe determine the number of neurons in each layer?

neural-network,deep-learning,caffe
Recently, I've been trying to use Caffe for some of the deep learning work that I'm doing. Although writing the model in Caffe is very easy, I've not been able to know the answer to this question. How does Caffe determine the number of neurons in a hidden layer? I...

What is the difference between Backpropagation and a feed-forward Neural Network

machine-learning,neural-network,classification,backpropagation
What is the difference between backpropagation and a feed-forward neural network? By googling and reading I found that in feed-forward there is only a forward direction, but in back-propagation we first need to do a forward propagation and then a back propagation. I referred to this link. Any other difference other...

Using Workspace variables in a GUI matlab

matlab,neural-network,workspace,matlab-guide
I have a workspace called finalnet. Inside the workspace I have a neural network called net, and I want to use the network in one of the functions in my GUI. Is there a way to do that? I tried to use the evalin function: network = evalin('finalnet','net') but I...

ai junkie neural networks tutorial - Not getting it

c++,machine-learning,neural-network,genetic-algorithm
I've been trying to understand the neural networks tutorial at http://www.ai-junkie.com/ann/evolved/nnt1.html I think I follow most of the tutorial up to page 8 (the last page), although maybe I don't because if I did, I'd probably understand the last page wouldn't I? Unfortunately for me, this page is not well...

Print output of a Theano network

python,debugging,neural-network,theano
I am sorry, very newbee question... I trained a neural network with Theano and now I want to see what it outputs for a certain input. So I can say: test_pred = lasagne.layers.get_output(output_layer, dataset['X_test']) where output_layer is my network. Now, the last layer happens to be a softmax, so if...

How to calculate the number of parameters of Convolutional Neural Networks(CNNs) correctly?

machine-learning,computer-vision,neural-network
I can't work out the correct number of parameters of AlexNet or VGG Net. For example, to calculate the number of parameters of a conv3-256 layer of VGG Net, the answer is 0.59M = (3*3)*(256*256), that is (kernel size) * (product of the channel counts of the two adjoining layers); however...
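For a standard convolutional layer, the parameter count is kernel_height × kernel_width × input_channels × output_channels, plus one bias per output channel. A quick check of the conv3-256 figure (the ~0.59M number quoted above ignores biases):

```python
def conv_params(kernel, c_in, c_out, bias=True):
    # weights: kernel*kernel*c_in per output channel, plus one bias each
    return kernel * kernel * c_in * c_out + (c_out if bias else 0)

# VGG "conv3-256" fed by a 256-channel layer, ignoring biases:
print(conv_params(3, 256, 256, bias=False))  # 589824, i.e. ~0.59M

# A small first layer: 3x3 kernels, RGB input, 64 output channels, with biases:
print(conv_params(3, 3, 64))  # 1792
```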

Which GPU model/brand is optimal for Neural Networks? [closed]

machine-learning,neural-network,gpu
This is not an unreasonable question. Nvidia and ATI architectures differ, enough so that for certain tasks (such as bitcoin mining) ATI is vastly better than Nvidia. The same could be true for Neural Network related processing. I have attempted to find comparisons of the 2 GPU brands in such...

Genetic Algorithm & Neural Networks: taking address of temporary [-fpermissive]

c++,neural-network,genetic-algorithm,temporary-objects
I am working on genetically evolved neural networks. I wrote a program using Visual Studio 2005 in 2008. Now I have converted the program into Eclipse (Linux) and VS 2013 (Windows) projects with C++11 support. After running, both projects gave the same error: taking address of temporary [-fpermissive] After searching a lot I found...

brain.js: XOR example does not work

javascript,machine-learning,neural-network
I'm trying to understand brain.js. This is my code; it does not work. (Explaination of what I expect it to do below) <script src="https://cdn.rawgit.com/harthur/brain/gh-pages/brain-0.6.3.min.js"> <script> var net = new brain.NeuralNetwork(); net.train([{input: [0, 0], output: [0]}, {input: [0, 1], output: [1]}, {input: [1, 0], output: [1]}, {input: [1, 1], output: [0]}]);...

Theano: how to efficiently undo/reverse max-pooling

python,optimization,neural-network,theano
I'm using Theano 0.7 to create a convolutional neural net which uses max-pooling (i.e. shrinking a matrix down by keeping only the local maxima). In order to "undo" or "reverse" the max-pooling step, one method is to store the locations of the maxima as auxiliary data, then simply recreate the...
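Theano specifics aside, the store-the-maxima idea can be sketched framework-independently in NumPy: pool while recording a boolean mask of argmax locations, then "unpool" by scattering each value back to its recorded spot (the function names are made up for illustration):

```python
import numpy as np

def max_pool_with_argmax(x, size=2):
    # Non-overlapping pooling; remember where each maximum came from.
    h, w = x.shape
    pooled = np.zeros((h // size, w // size))
    mask = np.zeros_like(x, dtype=bool)
    for i in range(0, h, size):
        for j in range(0, w, size):
            window = x[i:i+size, j:j+size]
            r, c = np.unravel_index(np.argmax(window), window.shape)
            pooled[i // size, j // size] = window[r, c]
            mask[i + r, j + c] = True
    return pooled, mask

def unpool(pooled, mask, size=2):
    # "Undo" pooling: put each value back at its recorded location, zeros elsewhere.
    upsampled = np.repeat(np.repeat(pooled, size, axis=0), size, axis=1)
    return upsampled * mask

x = np.array([[1., 2., 5., 6.],
              [3., 4., 7., 8.],
              [0., 1., 2., 3.],
              [9., 2., 1., 0.]])
pooled, mask = max_pool_with_argmax(x)   # [[4, 8], [9, 3]]
restored = unpool(pooled, mask)          # maxima restored in place, zeros elsewhere
```

The loops are for clarity only; a real implementation would vectorize or use the framework's own pooling-with-indices operation.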

How can I add concurrency to neural network processing?

neural-network,backpropagation,gradient-descent
The basics of neural networks, as I understand them, is there are several inputs, weights and outputs. There can be hidden layers that add to the complexity of the whole thing. If I have 100 inputs, 5 hidden layers and one output (yes or no), presumably, there will be a...

Trouble with backpropogation in a vectorized implementation of a simple neural network

matlab,neural-network
I have been going through the UFLDL tutorials. In the vectorized implementation of a simple neural net, the tutorials suggest that one way to do this would be to go through the entire training set instead of the iterative approach. In the back-propagation part, this would mean replacing: gradW1 = zeros(size(W1)); gradW2...

Backpropagation algorithm in neural network

c++,machine-learning,neural-network
I have some troubles implementing backpropagation in neural network. This implementation is using ideas from slides of Andrew Ng's course on machine learning from Coursera (here is the link https://www.coursera.org/course/ml). I think that I have understood the algorithm, but there is some subtle error in the code. I'm using a...

Convolutional Neural Network in Torch: error when training the network

lua,neural-network,torch
I am trying to base my convolutional neural network on the following tutorial: https://github.com/torch/tutorials/tree/master/2_supervised The issue is that my images are of different dimensions (3x200x200) than those used in the tutorial. Also, I have only two classes. The following are the changes that I made: changing the dataset to...

Theano: Reconstructing convolutions with stride (subsampling) in an autoencoder

neural-network,convolution,theano,conv-neural-network
I want to train a simple convolutional auto-encoder using Theano, which has been working great. However, I don't see how one can reverse the conv2d command when subsampling (stride) is used. Is there an efficient way to "invert" the convolution command when stride is used, like in the image below?...

What is the definition of “feature” in neural network?

neural-network
I am a beginner with neural networks. I am very confused about the word "feature". Can you give me a definition of "feature"? Are the features the neurons in the hidden layers?

Can a neural network with random connections still work correctly?

neural-network
Let's say we have a neural network with n layers where connections do not simply go from layer i to layer i+1, but can go from any layer i to any layer k such that k > i. For example; connections from layer 1 directly to layer 3, or layer...

Stabilizing Neural Network

matlab,neural-network
I am trying to build a neural network and have the following code: for i = 1:num_samples-num_reserved % Getting the sample and transposing it so it can be multiplied sample = new_data_a(:,i)'; % Normalizing the input vector sample = sample/norm(sample); % Calculating output outputs = sample*connections; % Neuron that fired...

torch7 : how to connect the neurons of the same layer?

neural-network,torch
Is it possible to implement, using torch, an architecture that connects the neurons of the same layer?

Multilayer Perceptron replaced with Single Layer Perceptron

math,machine-learning,neural-network,linear-algebra,perceptron
I have a problem understanding the difference between an MLP and an SLP. I know that in the first case the MLP has more than one layer (the hidden layers) and that the neurons have a non-linear activation function, like the logistic function (needed for gradient descent). But I...

update of weights in a neural network

python,algorithm,neural-network,perceptron
I was trying to program the perceptron learning rule for the case of an AND example. Graphically we will have: where the value of x0=1, the algorithm for updating the weights is: and I have made the following program in Python: import math def main(): theta=[-0.8,0.5,0.5] learnrate=0.1 target=[0,0,0,1] output=[0,0,0,0] x=[[1,0,0],[1,0,1],[1,1,0],[1,1,1]]...
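For comparison, here is a compact version of that perceptron update rule, w <- w + lr*(target - output)*x, trained on the same AND truth table. Unlike the question's code, this sketch starts from zero weights so the learning is visible, and it uses a learning rate of 0.25 (instead of 0.1) so all the arithmetic stays exact in floating point:

```python
def step(weighted_sum):
    # Threshold activation: fire iff the weighted sum is non-negative.
    return 1 if weighted_sum >= 0 else 0

def train_perceptron(samples, targets, lr=0.25, epochs=10):
    theta = [0.0, 0.0, 0.0]   # [bias weight, w1, w2]; x0 = 1 is the bias input
    for _ in range(epochs):
        for x, t in zip(samples, targets):
            out = step(sum(w * xi for w, xi in zip(theta, x)))
            # Perceptron rule: w <- w + lr * (target - output) * x
            theta = [w + lr * (t - out) * xi for w, xi in zip(theta, x)]
    return theta

samples = [[1, 0, 0], [1, 0, 1], [1, 1, 0], [1, 1, 1]]   # AND, with leading bias input
targets = [0, 0, 0, 1]
theta = train_perceptron(samples, targets)
preds = [step(sum(w * xi for w, xi in zip(theta, x))) for x in samples]
print(preds)  # [0, 0, 0, 1]
```

Because AND is linearly separable, the perceptron convergence theorem guarantees this converges; here it does so within the first few epochs.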

neural network for handwritten recognition?

matlab,machine-learning,neural-network
I have been following Andrew Ng's course about Machine Learning, and I currently have some doubts about the implementation of a handwriting recognition tool. First, he says that he uses a subset of the MNIST dataset, which contains 5000 training examples, and each training example is an image...

Object categories of pretrained imagenet model in caffe

machine-learning,neural-network,deep-learning,caffe,matcaffe
I'm using the pretrained imagenet model provided along the caffe (CNN) library ('bvlc_reference_caffenet.caffemodel'). I can output a 1000 dim vector of object scores for any images using this model. However I don't know what the actual object categories are. Did someone find a file, where the corresponding object categories are...