I am trying to save an SVR model using pickle in Python. However, on the first attempt a ValueError was raised: ValueError: pickle protocol must be <= 2. I attempted to resolve this error by explicitly passing an argument, like so: s = pickle.dumps(w, open('svm.p', 'wb'), protocol=pickle.HIGHEST_PROTOCOL) But I now receive...
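For reference, `pickle.dumps` serializes to a bytes string and takes only the object and a protocol, while `pickle.dump` writes to an open file handle; passing a file object to `dumps` ends up in the protocol slot. A minimal sketch of the two correct call shapes, using a stand-in dict rather than the actual SVR model:

```python
import pickle

model = {"C": 1.0}  # stand-in for the fitted SVR model

# Option 1: serialize to a bytes string (no file argument)
s = pickle.dumps(model, protocol=pickle.HIGHEST_PROTOCOL)

# Option 2: write straight to a file handle
with open('svm.p', 'wb') as f:
    pickle.dump(model, f, protocol=pickle.HIGHEST_PROTOCOL)

# round-trip check
with open('svm.p', 'rb') as f:
    restored = pickle.load(f)
```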

I want to develop a human emotion recognition application by analyzing voice features. How do I start? I have no idea where to begin. http://www.personal.rdg.ac.uk/~llsroach/phon2/freespeech.htm http://web.stanford.edu/dept/linguistics/corpora/material/PRAAT_workshop_manual_v421.pdf...

Let's say I have a dataset of about 350 positive images and more than 400 negative images. They aren't the same size, and they are all bigger than 640x320. What should I do to create a better dataset? Do I need the images to be smaller? If yes, why? Should...

I have a problem with using the apply function in R. I made the following function: TrainSupportVectorMachines <- function(trainingData,kernel,G,C){ #### train the model fit<-svm(Device~.,data=trainingData,kernel=kernel,probability=TRUE, gamma=G, cost=C) return(fit); } I want to train the model with different values of cost (C). Therefore, I tried the following command: cst = matrix(2^(-4:-2),ncol=3) kernl =...

I'm trying to get predictions from an SVM using a precomputed chi-squared kernel. However, I run into an error when calling clf.predict(). min_max_scaler = preprocessing.MinMaxScaler() X_train_scaled = min_max_scaler.fit_transform(features_train) X_test_scaled = min_max_scaler.transform(features_test) K = chi2_kernel(X_train_scaled) svm = SVC(kernel='precomputed', cache_size=1000).fit(K, labels_train) y_pred_chi2 = svm.predict(X_test_scaled) The error I am getting is the...
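With a precomputed kernel, `predict` expects a kernel matrix between the test rows and the training rows, not the raw test features. A sketch with stand-in data (iris, split arbitrarily; MinMax scaling also keeps features non-negative, which the chi-squared kernel requires):

```python
from sklearn.datasets import load_iris
from sklearn.metrics.pairwise import chi2_kernel
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
features_train, features_test = X[:100], X[100:]
labels_train = y[:100]

scaler = MinMaxScaler()
X_train_scaled = scaler.fit_transform(features_train)
X_test_scaled = scaler.transform(features_test)

K_train = chi2_kernel(X_train_scaled)                # (n_train, n_train)
svm = SVC(kernel='precomputed').fit(K_train, labels_train)

# predict() also needs a kernel matrix: test rows vs TRAINING rows
K_test = chi2_kernel(X_test_scaled, X_train_scaled)  # (n_test, n_train)
y_pred = svm.predict(K_test)
```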

I am trying to understand the Vowpal Wabbit data format for training and test data, but cannot seem to make sense of it. I have some training data like: Feature 1: 0 Feature 2: 1 Feature 3: 10 Feature 4: 5 Class label: A Feature 1: 0 Feature 2: 2 Feature 3:...
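For reference, VW's plain-text format puts the label first, followed by `|`-separated namespaces of feature:value pairs. The first example above might look like this (the feature names are placeholders, and class A is mapped to the integer 1, since VW's `--oaa` multiclass mode requires integer labels 1..k):

```text
<label> |<namespace> <feature>:<value> ...
1 | f1:0 f2:1 f3:10 f4:5
```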

I'm attempting to calculate the decision_function of an SVC classifier manually (as opposed to using the built-in method) using the Python library scikit-learn. I've tried several methods; however, I can only ever get the manual calculation to match when I don't scale my data. z is a test datum...
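A sketch of the manual computation with hypothetical toy data; the key point is that the test datum must pass through the same fitted scaler as the training data before the kernel is evaluated against the (scaled) support vectors:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# toy stand-in data
X, y = make_classification(n_samples=100, n_features=4, random_state=0)
scaler = StandardScaler().fit(X)
X_scaled = scaler.transform(X)
clf = SVC(kernel='rbf', gamma=0.1).fit(X_scaled, y)

z = X[:1]                       # a test datum in the original feature space
z_scaled = scaler.transform(z)  # scale it with the SAME fitted scaler

# decision_function(z) = sum_i dual_coef_i * K(sv_i, z) + intercept
k = np.exp(-clf.gamma * np.sum((clf.support_vectors_ - z_scaled) ** 2, axis=1))
manual = k @ clf.dual_coef_.ravel() + clf.intercept_[0]
builtin = clf.decision_function(z_scaled)[0]
print(manual, builtin)  # the two values should agree
```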

I want to train an SVM classifier for image categorization with scikit-learn, and I want to use opencv-python's SIFT algorithm to extract image features. The situation is as follows: 1. scikit-learn's SVM classifier takes a 2-D array as input, which means each row represents one image, and the feature amount...
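One common way to turn a variable number of SIFT descriptors per image into a single fixed-length row is a bag-of-visual-words histogram. A sketch with random stand-in descriptors (128-dimensional, like SIFT; the cluster count and data are illustrative):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.RandomState(0)
# stand-in for SIFT output: each image yields a variable number of 128-dim descriptors
descriptors_per_image = [rng.rand(rng.randint(20, 40), 128) for _ in range(10)]
labels = [0] * 5 + [1] * 5

# 1) build a visual vocabulary by clustering ALL descriptors from all images
k = 8
vocab = KMeans(n_clusters=k, n_init=10, random_state=0).fit(np.vstack(descriptors_per_image))

# 2) represent each image as a fixed-length histogram over the k visual words
def bow_histogram(desc):
    return np.bincount(vocab.predict(desc), minlength=k)

X = np.array([bow_histogram(d) for d in descriptors_per_image])  # shape (n_images, k)
clf = SVC().fit(X, labels)  # now a valid 2-D input for scikit-learn
```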

I'm using Accord.NET in my research. I have vector sequences of variable size as input, so I use DynamicTimeWarping as the kernel for a MulticlassSupportVectorMachine. IKernel kernel = new DynamicTimeWarping(dimension); var machine = new MulticlassSupportVectorMachine(0, kernel, 2); // Create the Multi-class learning algorithm for the machine var teacher = new...

I'd like to know whether the order of training data for svmtrain in MATLAB is important and affects classifier performance or not. E.g., I have two classes, labeled 0 and 1. The first 500 elements of the training array are from class 0 and the rest are from class...

I'm trying to create a car plate recognition system using OpenCV (C++). I've already seen this example on GitHub, but I want to use an SVM instead of k-nearest neighbours or artificial neural networks. I trained an SVM for only two classes (positive or negative), so how can I train to...

Can you suggest any MATLAB implementation of a multi-class classification algorithm for large databases? I tried libsvm; it's good except for large databases, and liblinear I can't use for multi-class classification.

I am trying to use OneClassSVM from sklearn for outlier detection. A user visits websites every day, but one day he visits a website that has never been visited before. I want to catch this outlier using OneClassSVM. Below is a sample of the data: `([[www.makeuseof.com, www.kickstater.com, www.google.com, www.mashable.com` Below is sample...
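A sketch of one possible encoding, assuming each day's visited sites are joined into one string and count-vectorized (the site names and the `nu` value are placeholders). Note a caveat this exposes: a site never seen during training gets no column in the vocabulary at all, so it silently vanishes from the vector; handling truly unseen tokens needs a different encoding (e.g. a HashingVectorizer):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import OneClassSVM

# hypothetical history: one space-separated string of visited sites per day
history = [
    "www.news.example www.search.example www.blog.example",
    "www.search.example www.blog.example",
    "www.news.example www.search.example",
] * 5

vect = CountVectorizer(token_pattern=r"\S+")  # keep full site names as tokens
X_train = vect.fit_transform(history)

clf = OneClassSVM(nu=0.1, kernel='linear').fit(X_train)

# a familiar day vs. a day including a never-before-seen site
X_new = vect.transform([
    "www.search.example www.blog.example",
    "www.unknown.example www.search.example",
])
print(clf.predict(X_new))  # +1 = inlier, -1 = outlier
```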

I'm trying to find class probabilities for new input vectors with support vector machines in R. Training the model shows no errors. fit <-svm(device~.,data=dataframetrain, kernel="polynomial",probability=TRUE) But predicting on some input vectors raises an error. predict(fit,dataframetest,probability=prob) Error in `contrasts<-`(`*tmp*`, value = contr.funs[1 + isOF[nn]]) : contrasts can be applied only to factors...

My problem is the following: I need to classify a data stream coming from a sensor. I have managed to get a baseline using the median of a window, and I subtract the values from that baseline (I want to avoid negative peaks, so I only use the absolute value...

I want to use the nursery data to train an SVM (8 attributes and 5 classes), using the same logic as the C45Learning class shown in the example: In the example, data is loaded from the nursery dataset containing the 8 attributes "parents", "has_nurs", "form", "children", "housing", "finance", "social", "health", and combinations of these attributes result...

I am very new to MATLAB and SVMs, and I am reproducing this experiment from scratch: http://bioinformatics.oxfordjournals.org/content/19/13/1656.full.pdf There they say: "In the training of SVMs, we use the method of one versus the others, or one versus the rest". OK, there are 12 classes, so they produce 12 SVMs....

I am using SVM Rank, which has multiple parameters; changing them gives me a variety of results. Is there some mechanism to tune and find the best parameters, as judged by results on a validation set? Below are the different parameters: Learning Options: -c float -> C:...

In the documentation (SVMClassifier Documentation) there is a class SimpleCV.MachineLearning.SVMClassifier.SVMClassifier(featureExtractors, properties=None). Code of the class SVMClassifier: mSVMProperties = { 'KernelType':'RBF', #default is a RBF Kernel 'SVMType':'NU', #default is C 'nu':None, # NU for SVM NU 'c':None, #C for SVM C - the slack variable 'degree':None, #degree for poly kernels - defaults to...

In my cross-validation in MATLAB with libSVM, I found that these are the best parameters to use: model = svmtrain( labels, training, '-s 0 -t 2 -c 10000 -g 100'); Now I want to replicate the classification in C++ with OpenCV, but I don't understand how to set the C++...

I have the following code: for x = 1:100 path = sprintf('C:\Users\hasan_000\Documents\MATLAB\Project\Images\%d.jpg', x); imgarray = imread(sprintf(path)); end I have a folder containing 100 pictures. I want to convert them to matrices by loading them automatically in a loop. But I get this error: Can't open file "C:" for reading; you may...

I got multiple curves from different sensors, all attached to the same moving object. Now I want to extract features from them. Let's say I have cut 0-10 as window 1, so in window 1 I have 5 graphs; each graph represents one sensor in a particular position, each...

The documentation on SVMs implies that an attribute called 'classes_' exists, which hopefully reveals how the model represents classes internally. I would like to get that information in order to interpret the output of functions like 'predict_proba', which generates class probabilities for a number of samples. Hopefully, knowing that...
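For reference, `classes_` holds the training labels in sorted order, and the columns of `predict_proba` follow exactly that order. A quick check on iris:

```python
from sklearn.datasets import load_iris
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
clf = SVC(probability=True, random_state=0).fit(X, y)

print(clf.classes_)  # the sorted unique training labels
proba = clf.predict_proba(X[:1])
# proba[:, i] is the estimated probability of class clf.classes_[i]
```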

I'm using this site http://scikit-learn.org/stable/datasets/ (section 5.5) to create my custom dataset for performing SVM with scikit. Summary of my day: I basically have no idea what I'm doing. For my thesis I want to predict stock return direction, i.e. the output of the SVM should be 1 (UP) or -1...

Good afternoon, guys. I'm learning SVMs and trying to finish an exercise at openclassroom.stanford.edu. My question is: what is the Octave/MATLAB code to plot the following? If I have a set of 2D feature points {(x_11, x_12), (x_21, x_22), ..., (x_i1, x_i2)}, and the corresponding label set is {1, -1,...

I have a sentiment analysis task, for which I'm using this corpus; the opinions have 5 classes (very neg, neg, neu, pos, very pos), from 1 to 5. So I do the classification as follows: from sklearn.feature_extraction.text import TfidfVectorizer import numpy as np tfidf_vect= TfidfVectorizer(use_idf=True, smooth_idf=True, sublinear_tf=False, ngram_range=(2,2)) from sklearn.cross_validation...

I'm using CvSVM from OpenCV for a regression task. For reasons related to legacy code, I currently have to train the model in MATLAB, but then I'd like to load it into CvSVM and perform prediction in C++ code with OpenCV, due to application constraints. I did not find...

I have this iris data ... 5.1 3.5 1.4 0.2 Iris-setosa 4.9 3 1.4 0.2 Iris-setosa 7 3.2 4.7 1.4 Iris-versicolor 6.4 3.2 4.5 1.5 Iris-versicolor 7.1 3 5.9 2.1 Iris-virginica 6.3 2.9 5.6 1.8 Iris-virginica . . . and I got a graph using gnuplot (plot 'c:\iris.data'), but I want...

I have a dataset with three classes: positive, neutral, and negative. I am trying to create a classifier using SVM. My dataset: my code in RapidMiner: <?xml version="1.0" encoding="UTF-8" standalone="no"?> <process version="5.3.015"> <context> <input/> <output/> <macros/> </context> <operator activated="true" class="process" compatibility="5.3.015" expanded="true" name="Process"> <parameter key="parallelize_main_process" value="true"/> <process expanded="true"> <operator activated="true"...

I am trying to build predictive models from text data. I built a document-term matrix from the text data (unigrams and bigrams) and built different types of models on it (SVM, random forest, nearest neighbour, etc.). All the techniques gave decent results, but I want to improve them. I...

Why is the plot not appearing? There is also no error. usd28 = read.csv("~/ICICI_nse_train_head") usd30= usd28[1:2000,] index<-1:nrow(usd30) testindex<-sample(index,trunc(length(index)/3)) testset<-usd30[testindex,] trainset<-usd30[-testindex,] svm.model<-svm(sprd_cross_dir~bid_sprd0+ask_sprd0,data=trainset,cost=5,gamma=1) svm.pred<-predict(svm.model,testset) summary(svm.model) x<-table(pred=svm.pred,true=testset$sprd_cross_dir) plot(x=svm.model,data=trainset,formula...

I've been using a small number of images (around 20) to train an SVM, and I've noticed that when it comes to training, one picture can make a really big difference to the outcome. Sometimes it will be fairly accurate, other times it'll say everything is a match, other...

I am training an SVM with features obtained from a TfidfVectorizer. When testing the SVM by asking for a prediction, even feature vectors from entries that were used for training and were labelled 'negative' lead to 'positive' predictions. I have the feeling I am doing something basic wrong...
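One frequent cause of this symptom is transforming the test text with a second, freshly fitted vectorizer instead of reusing the one fitted on the training data, which misaligns the feature columns. A minimal round-trip sketch with made-up texts:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

# made-up training data
train_texts = ["good product", "great service", "terrible support", "awful quality"]
labels = ["positive", "positive", "negative", "negative"]

vect = TfidfVectorizer()
X_train = vect.fit_transform(train_texts)  # fit the vocabulary ONCE, on training text
clf = LinearSVC().fit(X_train, labels)

# at prediction time, call transform() on the SAME fitted vectorizer;
# a second fit_transform() would build a new vocabulary with shuffled columns
X_new = vect.transform(["terrible support"])
print(clf.predict(X_new))
```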

I'm having a weird problem training an SVM with an RBF kernel in MATLAB. The issue is that, when doing a grid search over the C and sigma values using 10-fold cross-validation, I always get AUC values of approximately 0.50 (varying between 0.48 and 0.54) -- I...

I want to use sklearn's CalibratedClassifierCV in conjunction with sklearn's SVC to make predictions for a multiclass (9-class) prediction problem. However, when I run it, I get the following error. The same code runs without problems with a different model (e.g. RandomForestClassifier). kf = StratifiedShuffleSplit(y, n_iter=1, test_size=0.2) clf...
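For reference, the combination itself does work: CalibratedClassifierCV wraps SVC's decision_function and exposes a calibrated predict_proba. A minimal sketch on the digits data (10 classes, as a stand-in for the 9-class problem; the `cv=3` split is illustrative):

```python
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import load_digits
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)

# SVC exposes only decision_function by default; the calibration wrapper
# fits it on internal folds and produces per-class probabilities
clf = CalibratedClassifierCV(SVC(), cv=3).fit(X, y)
proba = clf.predict_proba(X[:5])
print(proba.shape)  # one row per sample, one column per class
```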

I would like to know: what are the default settings for the SVM in the Weka library? As I understand it, Weka wraps LIBSVM, and the default kernel for LIBSVM is the RBF kernel; does this hold true for Weka?

I am using Eigenjoints skeleton features to perform human action recognition in MATLAB. I have 320 videos, so the training data is a 320x1 cell array; each cell contains an Nx2970 double array, where N is the number of frames (it varies because each video contains a different number of...

I'm using multiclass.OneVsRestClassifier and cross_validation.StratifiedKFold. When I do cross-validation on a multi-label problem, it fails. Is it possible to perform cross-validation on a multi-label problem in scikit-learn? I think the problem is in the tuples of class-label lists, e.g. ([1], [2], [2], [1], [1,2], [3], [1,2,3], ...). Code in...
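StratifiedKFold cannot stratify multi-label targets, but cross-validation itself is possible once the label lists are binarized into an indicator matrix. A sketch with made-up data (the labels are arranged so that every class appears in each fold; the modern `model_selection` module replaces the old `cross_validation` one):

```python
import numpy as np
from sklearn.model_selection import KFold, cross_val_score
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.svm import LinearSVC

# made-up multi-label targets, arranged so each class occurs in both folds
y_lists = [[1], [2], [3], [1, 2], [1], [2], [3], [2, 3]]
Y = MultiLabelBinarizer().fit_transform(y_lists)  # (8, 3) binary indicator matrix
X = np.random.RandomState(0).rand(8, 4)           # made-up features

clf = OneVsRestClassifier(LinearSVC())
scores = cross_val_score(clf, X, Y, cv=KFold(n_splits=2))  # plain KFold, not stratified
print(scores)
```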

I am trying to implement a support vector machine to understand the ins and outs of it, but I am stuck on how to implement it. Everywhere it is explained how to get a hyperplane such that we are able to separate different classes. My question is how to get the...
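For intuition, the primal problem (regularized hinge loss) can be optimized directly with sub-gradient descent; once the weight vector w and bias b are found, classification is just the sign of w·x + b. A minimal linear-SVM sketch on toy data (all hyperparameters are illustrative, not tuned):

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, lr=0.1):
    """Sub-gradient descent on the primal hinge loss.
    y must be in {-1, +1}; returns weights w and bias b."""
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(epochs):
        for i in range(n):
            margin = y[i] * (X[i] @ w + b)
            if margin < 1:
                # point violates the margin: step toward it, plus regularization
                w = (1 - lr * lam) * w + lr * y[i] * X[i]
                b += lr * y[i]
            else:
                # only the regularization term contributes
                w = (1 - lr * lam) * w
    return w, b

# toy separable data
X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -2.0], [-3.0, -3.0]])
y = np.array([1, 1, -1, -1])
w, b = train_linear_svm(X, y)
pred = np.sign(X @ w + b)  # classify with the learned hyperplane
```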

I am trying to classify human activities in videos (six classes and almost 100 videos per class, 6*100=600 videos). I am using 3D SIFT (both xy and t scale = 1) from UCF. for f= 1:20 f offset = 0; c=strcat('running',num2str(f),'.mat'); load(c) pix=video3Dm; % Generate descriptors at locations given by subs matrix for i=1:100...

I want to make predictions from a simple time series. The observations are y=[11,22,33,44,55,66,77,88,99,110] at times x=[1,2,3,4,5,6,7,8,9,10]. I am using epsilon-SVR from the libsvm toolbox. My code is as follows: x1 = (1:7)'; #' training set y1 = [11, 22, 33, 44, 55, 66, 77]'; #' observations from time series options...
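The same experiment can be sketched with scikit-learn's SVR in place of the MATLAB libsvm toolbox (the hyperparameters are illustrative): a linear kernel with a large C should fit the y = 11x trend within the epsilon tube and extrapolate to x = 8, 9, 10:

```python
import numpy as np
from sklearn.svm import SVR

x = np.arange(1, 8).reshape(-1, 1)          # training times 1..7
y = np.array([11, 22, 33, 44, 55, 66, 77])  # observations from the series

model = SVR(kernel='linear', C=1000, epsilon=0.1).fit(x, y)
pred = model.predict(np.array([[8], [9], [10]]))
print(pred)  # should be close to 88, 99, 110
```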

I have installed Spark on AWS Elastic MapReduce (EMR) and have been running SVM using the packages in MLlib. But there are no options to choose parameters for building the model, such as kernel selection or cost of misclassification (as in the e1071 package in R). Can someone please tell me how...

I am new to machine learning. When I tried learning through GATE, it shows an error. The learning configuration file is given below. <?xml version="1.0"?> <ML-CONFIG> <SURROUND value="false"/> <FILTERING ratio='0.2' dis='far'/> <EVALUATION method="holdout" runs="2" ratio="0.66"/> <multiClassification2Binary method="one-vs-anothers" thread-pool-size="2"/> <PARAMETER name="thresholdProbabilityBoundary" value="1.0"/> <PARAMETER name="thresholdProbabilityEntity" value="1.0"/>...

I am trying to classify different concepts in a text using n-grams. My data typically consists of six columns: 1) the word that needs classification, 2) the classification, 3) the first word on the left of 1), 4) the second word on the left of 1), 5) the first word on the right of 1), 6) the second word on...

I want to tune the parameter C in ksvm, and I'm wondering how this C is defined. The definition of C is: "cost of constraints violation (default: 1); this is the 'C'-constant of the regularization term in the Lagrange formulation". Does this mean that the larger C is, the more...
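For context: a larger C penalizes margin violations more heavily (approaching a hard margin), while a small C tolerates more violations and typically leaves more points on or inside the margin as support vectors. A quick illustration on two iris classes (the sketch uses scikit-learn's SVC, whose C has the same meaning as ksvm's; the C values are illustrative):

```python
from sklearn.datasets import load_iris
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
X, y = X[y < 2], y[y < 2]  # two linearly separable classes

loose = SVC(kernel='linear', C=0.01).fit(X, y)  # weak penalty, wide soft margin
strict = SVC(kernel='linear', C=100).fit(X, y)  # heavy penalty, near-hard margin

# a weak penalty usually leaves far more support vectors than a heavy one
print(loose.n_support_.sum(), strict.n_support_.sum())
```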