Something More for Research

Explorer of Research #HEMBAD

Archive for the ‘Computer Languages’ Category

Deep Learning Software/ Framework links

Posted by Hemprasad Y. Badgujar on July 15, 2016


  1. Theano – CPU/GPU symbolic expression compiler in python (from MILA lab at University of Montreal)
  2. Torch – provides a Matlab-like environment for state-of-the-art machine learning algorithms in lua (from Ronan Collobert, Clement Farabet and Koray Kavukcuoglu)
  3. Pylearn2 – Pylearn2 is a library designed to make machine learning research easy.
  4. Blocks – A Theano framework for training neural networks
  5. Tensorflow – TensorFlow™ is an open source software library for numerical computation using data flow graphs.
  6. MXNet – MXNet is a deep learning framework designed for both efficiency and flexibility.
  7. Caffe – Caffe is a deep learning framework made with expression, speed, and modularity in mind.
  8. Lasagne – Lasagne is a lightweight library to build and train neural networks in Theano.
  9. Keras – A Theano-based deep learning library.
  10. Deep Learning Tutorials – examples of how to do Deep Learning with Theano (from LISA lab at University of Montreal)
  11. Chainer – A GPU based Neural Network Framework
  12. DeepLearnToolbox – A Matlab toolbox for Deep Learning (from Rasmus Berg Palm)
  13. Cuda-Convnet – A fast C++/CUDA implementation of convolutional (or more generally, feed-forward) neural networks. It can model arbitrary layer connectivity and network depth. Any directed acyclic graph of layers will do. Training is done using the back-propagation algorithm.
  14. Deep Belief Networks. Matlab code for learning Deep Belief Networks (from Ruslan Salakhutdinov).
  15. RNNLM– Tomas Mikolov’s Recurrent Neural Network based Language models Toolkit.
  16. RNNLIB – RNNLIB is a recurrent neural network library for sequence learning problems. Applicable to most types of spatiotemporal data, it has proven particularly effective for speech and handwriting recognition.
  17. matrbm. Simplified version of Ruslan Salakhutdinov’s code, by Andrej Karpathy (Matlab).
  18. deeplearning4j– Deeplearning4J is an Apache 2.0-licensed, open-source, distributed neural net library written in Java and Scala.
  19. Estimating Partition Functions of RBM’s. Matlab code for estimating partition functions of Restricted Boltzmann Machines using Annealed Importance Sampling (from Ruslan Salakhutdinov).
  20. Learning Deep Boltzmann Machines Matlab code for training and fine-tuning Deep Boltzmann Machines (from Ruslan Salakhutdinov).
  21. The LUSH programming language and development environment, which is used @ NYU for deep convolutional networks
  22. Eblearn.lsh is a LUSH-based machine learning library for doing Energy-Based Learning. It includes code for “Predictive Sparse Decomposition” and other sparse auto-encoder methods for unsupervised learning. Koray Kavukcuoglu provides Eblearn code for several deep learning papers on this page.
  23. deepmat– Deepmat, Matlab based deep learning algorithms.
  24. MShadow – MShadow is a lightweight CPU/GPU Matrix/Tensor Template Library in C++/CUDA. The goal of MShadow is to provide an efficient, device-invariant, and simple tensor library for machine learning projects that aim for both simplicity and performance. It supports CPU/GPU/multi-GPU and distributed systems.
  25. CXXNET – CXXNET is a fast, concise, distributed deep learning framework based on MShadow. It is a lightweight and easily extensible C++/CUDA neural network toolkit with a friendly Python/Matlab interface for training and prediction.
  26. Nengo – Nengo is a graphical and scripting based software package for simulating large-scale neural systems.
  27. Eblearn is a C++ machine learning library with a BSD license for energy-based learning, convolutional networks, vision/recognition applications, etc. EBLearn is primarily maintained by Pierre Sermanet at NYU.
  28. cudamat is a GPU-based matrix library for Python. Example code for training Neural Networks and Restricted Boltzmann Machines is included.
  29. Gnumpy is a Python module that interfaces in a way almost identical to numpy, but does its computations on your computer’s GPU. It runs on top of cudamat.
  30. The CUV Library (github link) is a C++ framework with python bindings for easy use of Nvidia CUDA functions on matrices. It contains an RBM implementation, as well as annealed importance sampling code and code to calculate the partition function exactly (from AIS lab at University of Bonn).
  31. 3-way factored RBM and mcRBM is python code calling CUDAMat to train models of natural images (from Marc’Aurelio Ranzato).
  32. Matlab code for training conditional RBMs/DBNs and factored conditional RBMs (from Graham Taylor).
  33. mPoT is python code using CUDAMat and gnumpy to train models of natural images (from Marc’Aurelio Ranzato).
  34. neuralnetworks is a Java-based GPU library for deep learning algorithms.
  35. ConvNet is a Matlab-based convolutional neural network toolbox.
  36. Elektronn is a deep learning toolkit that makes powerful neural networks accessible to scientists outside the machine learning community.
  37. OpenNN is an open source class library written in C++ which implements neural networks, a main area of deep learning research.
  38. NeuralDesigner is an innovative deep learning tool for predictive analytics.
  39. Theano Generalized Hebbian Learning.

Posted in C, Computing Technology, CUDA, Deep Learning, GPU (CUDA), JAVA, OpenCL, PARALLEL, PHP, Project Related | Leave a Comment »

CUDA Unified Memory

Posted by Hemprasad Y. Badgujar on October 6, 2015



CUDA is the language of Nvidia GPUs.  To extract maximum performance from GPUs, you’ll want to develop applications in CUDA.

CUDA Toolkit is the primary IDE (integrated development environment) for developing CUDA-enabled applications.  The main roles of the Toolkit IDE are to simplify the software development process, maximize software developer productivity, and provide features that enhance GPU performance.  The Toolkit has been steadily evolving in tandem with GPU hardware and currently sits at Version 6.5.

One of the most important features of CUDA 6.5 is Unified Memory (UM).  (UM was actually first introduced in CUDA v.6.0).  CPU host memory and GPU device memory are physically separate entities, connected by a relatively slow PCIe bus.  Prior to v.6.0, data elements shared in both CPU and GPU memory required two copies – one copy in CPU memory and one copy in GPU memory.  Developers had to allocate memory on the CPU, allocate memory on the GPU, and then copy data from CPU to GPU and from GPU to CPU.  This dual data management scheme added complexity to programs, opportunities for the introduction of software bugs, and an excessive focus of time and energy on data management tasks.

UM corrects this.  UM creates a memory pool that is shared between CPU and GPU, with a single memory address space and single pointers accessible to both host and device code.  The CUDA driver and runtime libraries automatically handle data transfers between host and device memory, thus relieving developers from the burden of explicitly managing those data transfers.  UM improves performance by automatically providing data locality on the CPU or GPU, wherever it might be required by the application algorithm.  UM also guarantees global coherency of data on host and device, thus reducing the introduction of software bugs.

Let’s explore some sample code that illustrates these concepts.  We won’t concern ourselves with the function of this algorithm; instead, we’ll just focus on the syntax. (Credit to Nvidia for this C/CUDA template example).

Without Unified Memory

#include <string.h>
#include <stdio.h>
struct DataElement
{
  char *name;
  int value;
};
__global__
void Kernel(DataElement *elem) {
  printf("On device: name=%s, value=%d\n", elem->name, elem->value;
  elem->name[0] = 'd';
  elem->value++;
}
void launch(DataElement *elem) {
  DataElement *d_elem;
  char *d_name;
  int namelen = strlen(elem->name) + 1;
  // Allocate memory on GPU
  cudaMalloc(&d_elem, sizeofDataElement());
  cudaMalloc(&d_name, namelen);
  // Copy data from CPU to GPU
  cudaMemcpy(d_elem, elem, sizeof(DataElement),
     cudaMemcpyHostToDevice);
  cudaMemcpy(d_name, elem->name, namelen, cudaMemcpyHostToDevice);
  cudaMemcpy(&(d_elem->name), &d_name, sizeof(char*),
     cudaMemcpyHostToDevice);
  // Launch kernel
  Kernel<<< 1, 1 >>>(d_elem);
  // Copy data from GPU to CPU
  cudaMemcpy(&(elem->value), &(d_elem->value), sizeof(int),
     cudaMemcpyDeviceToHost);
  cudaMemcpy(elem->name, d_name, namelen, cudaMemcpyDeviceToHost);
  cudaFree(d_name);
  cudaFree(d_elem);
}
int main(void)
{
  DataElement *e;
  // Allocate memory on CPU
  e = (DataElement*)malloc(sizeof(DataElement));
  e->value = 10;
  // Allocate memory on CPU
  e->name = (char*)malloc(sizeof(char) * (strlen("hello") + 1));
  strcpy(e->name, "hello");
  launch(e);
  printf("On host: name=%s, value=%d\n", e->name, e->value);
  free(e->name);
  free(e);
  cudaDeviceReset();
}

Note these key points:

  • malloc(): allocate memory on the CPU
  • cudaMalloc(): allocate memory on the GPU
  • cudaMemcpy(..., cudaMemcpyHostToDevice): copy data from CPU to GPU
  • Kernel<<< 1, 1 >>>(d_elem): run the kernel
  • cudaMemcpy(..., cudaMemcpyDeviceToHost): copy data from GPU to CPU

With Unified Memory 

#include <string.h>
#include <stdio.h>
struct DataElement
{
  char *name;
  int value;
};
__global__
void Kernel(DataElement *elem) {
  printf("On device: name=%s, value=%d\n", elem->name, elem->value;
  elem->name[0] = 'd';
  elem->value++;
}
void launch(DataElement *elem) {
  // Launch kernel
  Kernel<<< 1, 1 >>>(elem);
  cudaDeviceSynchronize();
}
int main(void)
{
  DataElement *e;
  // Allocate unified memory on CPU and GPU
  cudaMallocManaged((void**)&e, sizeof(DataElement));
  e->value = 10;
  // Allocate unified memory on CPU and GPU
  cudaMallocManaged((void**)&(e->name), sizeof(char) *
     (strlen("hello") + 1) );
  strcpy(e->name, "hello");
  launch(e);
  printf("On host: name=%s, value=%d\n", e->name, e->value);
  cudaFree(e->name);
  cudaFree(e);
  cudaDeviceReset();
}
 

Note these key points:

  • cudaMallocManaged(): allocate unified memory accessible from both CPU and GPU
  • Kernel<<< 1, 1 >>>(e) followed by cudaDeviceSynchronize(): run the kernel and wait for it to finish before the host reads the results

With UM, memory is allocated on the CPU and GPU in a single address space and managed with a single pointer.  Note how the “malloc’s” and “cudaMalloc’s” are condensed into single calls to cudaMallocManaged().  Furthermore, explicit cudaMemcpy() data transfers between CPU and GPU are eliminated, as the CUDA runtime handles these transfers automatically in the background. Collectively these actions simplify code development, code maintenance, and data management.

As software project managers, we like UM for the productivity enhancements it provides for our software development teams.  It improves software quality, reduces coding time, effort and cost, and enhances overall performance. As software engineers, we like UM because of the reduced coding effort and the fact that we can focus time and effort on writing CUDA kernel code, where all the parallel performance comes from, instead of spending time on memory management tasks.  Unified Memory is a major step forward in GPU programming.
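The listings above drop CUDA return codes for brevity. In real projects it helps to check every CUDA call; a minimal sketch of such a helper (the macro name and formatting are arbitrary, not part of the CUDA API) might look like this:

#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Wrap every CUDA runtime call so failures are reported with file and line.
#define CUDA_CHECK(call)                                                   \
  do {                                                                     \
    cudaError_t err = (call);                                              \
    if (err != cudaSuccess) {                                              \
      fprintf(stderr, "CUDA error '%s' at %s:%d\n",                        \
              cudaGetErrorString(err), __FILE__, __LINE__);                \
      exit(EXIT_FAILURE);                                                  \
    }                                                                      \
  } while (0)

// Example usage with the Unified Memory listing above:
//   CUDA_CHECK(cudaMallocManaged((void**)&e, sizeof(DataElement)));
//   Kernel<<< 1, 1 >>>(e);
//   CUDA_CHECK(cudaGetLastError());       // catches launch errors
//   CUDA_CHECK(cudaDeviceSynchronize());  // catches errors raised during execution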

Posted in CUDA, CUDA TUTORIALS, GPU (CUDA), PARALLEL | Leave a Comment »

Bilateral Filtering

Posted by Hemprasad Y. Badgujar on September 14, 2015


Popular Filters

When smoothing or blurring images (most often to reduce noise), we can use various simple filters, because they are easy to implement and reasonably fast. The most commonly used are the homogeneous (box) filter, the Gaussian filter, and the median filter.

When applying a linear filter, each output pixel value g(i, j) is simply a weighted sum of input pixel values f(i+k, j+l):

g(i, j) = SUM[ f(i+k, j+l) * h(k, l) ]

where h(k, l) is called the kernel, which is nothing more than the coefficients of the filter.

The homogeneous (box) filter is the simplest filter: each output pixel is the mean of its kernel neighbors (all of them contribute with equal weight), so its kernel K is just a matrix of ones scaled by 1/(kernel width × kernel height).

The Gaussian filter uses a kernel with different weights, in both the x and y directions: pixels located in the middle have a bigger weight, and the weights decrease with distance from the neighborhood center, so pixels located toward the sides have smaller weights. (The original post shows a 5×5 Gaussian kernel image here.)

The median filter replaces each pixel's value with the median of its neighboring pixels. This method is great when dealing with "salt and pepper" noise.

Bilateral Filter

All three filters above dissolve noise, but they also smooth edges, making them less sharp or even washing them out entirely. To solve this problem we can use the bilateral filter, an advanced version of the Gaussian filter: it introduces another weight that represents how close (or similar) two pixels are in value, and by considering both weights the bilateral filter can keep edges sharp while blurring the image.
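For reference, the bilateral filter weight is commonly written as the product of a spatial (domain) Gaussian and an intensity (range) Gaussian, and the output is the normalized weighted sum (using the same notation as above):

w(k, l) = exp( -(k^2 + l^2) / (2*sigma_d^2) - (f(i, j) - f(i+k, j+l))^2 / (2*sigma_r^2) )

g(i, j) = SUM[ f(i+k, j+l) * w(k, l) ] / SUM[ w(k, l) ]

Here sigma_d controls the spatial extent of the kernel (like an ordinary Gaussian blur) and sigma_r controls how large an intensity difference still counts as "similar"; pixels on the other side of a strong edge get a very small w(k, l) and therefore barely contribute.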

Let me show you the process using an image with a sharp edge (shown in the original post).

Say we are smoothing this image (noise is visible in it), and we are currently dealing with the pixel in the middle of the blue rectangle.

The left picture above shows the Gaussian kernel, and the right one shows the bilateral filter kernel, which takes both weights into account.

We can also see the difference between the Gaussian filter and the bilateral filter in these pictures:

Say we have an original image with noise (image omitted).

After a Gaussian filter, the image is smoother than before, but the edge is no longer sharp: a slope has appeared between the white and black pixels (image omitted).

After a bilateral filter, however, the image is smoother and the edge stays sharp as well (image omitted).

OpenCV code

It is super easy to make these kind of filters in OpenCV:

//Homogeneous blur:
blur(image, dstHomo, Size(kernel_length, kernel_length), Point(-1,-1));
//Gaussian blur:
GaussianBlur(image, dstGaus, Size(kernel_length, kernel_length), 0, 0);
//Median blur:
medianBlur(image, dstMed, kernel_length);
//Bilateral blur:
bilateralFilter(image, dstBila, kernel_length, kernel_length*2, kernel_length/2);

and for each function, you can find more details in the OpenCV documentation.
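As a minimal, self-contained sketch (the file names and the kernel_length value are just placeholders), the snippets above can be wrapped into a small program that loads one image and writes all four results:

#include <opencv2/opencv.hpp>

int main()
{
    // Load the input image (path is a placeholder).
    cv::Mat image = cv::imread("input.jpg");
    if (image.empty()) return 1;

    int kernel_length = 9;  // must be odd for medianBlur
    cv::Mat dstHomo, dstGaus, dstMed, dstBila;

    // Homogeneous (box) blur.
    cv::blur(image, dstHomo, cv::Size(kernel_length, kernel_length), cv::Point(-1, -1));
    // Gaussian blur (sigma is derived from the kernel size when 0 is passed).
    cv::GaussianBlur(image, dstGaus, cv::Size(kernel_length, kernel_length), 0, 0);
    // Median blur.
    cv::medianBlur(image, dstMed, kernel_length);
    // Bilateral blur: neighborhood diameter, sigma in color space, sigma in coordinate space.
    cv::bilateralFilter(image, dstBila, kernel_length, kernel_length * 2.0, kernel_length / 2.0);

    cv::imwrite("homogeneous.jpg", dstHomo);
    cv::imwrite("gaussian.jpg", dstGaus);
    cv::imwrite("median.jpg", dstMed);
    cv::imwrite("bilateral.jpg", dstBila);
    return 0;
}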

Test Images

I am glad to use my favorite Van Gogh image for the tests.

In each result below, from left to right: homogeneous blur, Gaussian blur, median blur, bilateral blur.

Results were generated for kernel lengths 3, 9, 15, 23, 31, 49, and 99 (one set of homogeneous, Gaussian, median, and bilateral results for each; images omitted here).


Posted in C, Image / Video Filters, Image Processing, OpenCV, OpenCV, OpenCV Tutorial | Leave a Comment »

Building VTK with Visual Studio 2013

Posted by Hemprasad Y. Badgujar on April 30, 2015


Building VTK5 with Visual Studio

Download

  1. Download VTK 5.10.1 (VTK-5.10.1.zip) and unzip it (e.g. to C:\VTK-5.10.1).
    http://www.vtk.org/VTK/resources/software.html#previous
    https://github.com/Kitware/VTK/tree/v5.10.1

CMake

  1. Specify the source code location and the location where the binaries (solution files) will be built.
    • Where is the source code: C:\VTK-5.10.1
    • Where is build the binaries: C:\VTK-5.10.1\build
  2. Press [Configure] and select the target Visual Studio version.
  3. Configure the following settings.
    • BUILD_SHARED_LIBS ☑ (check)
    • BUILD_TESTING ☐ (uncheck)
    • CMAKE_CONFIGURATION_TYPES Debug;Release
    • CMAKE_INSTALL_PREFIX C:\Program Files\VTK (or C:\Program Files (x86)\VTK)
  4. Press [Add Entry] to add the following setting.
    Name: CMAKE_DEBUG_POSTFIX
    Type: STRING
    Value: -gd
    Description:

    * A string appended to the end of the file names generated by the Debug build.

  5. Press [Generate] to output the solution file.

Build

  1. Start Visual Studio with administrator privileges and open the VTK solution file (C:\VTK-5.10.1\build\VTK.sln).
    (If Visual Studio is not started with administrator privileges, the INSTALL step will fail.)
  2. Modify the source code as follows.
    • vtkOStreamWrapper.cxx
      60 line

      //VTKOSTREAM_OPERATOR(ostream&);
      vtkOStreamWrapper& vtkOStreamWrapper::operator << (ostream& a) {
        this->ostr << (void *)&a;
        return *this;
      }
      
    • vtkEnSightGoldBinaryReader.cxx
      3925 line

      if (this->IFile->read(result, 80).fail())
      

      3944 line

      if (this->IFile->read(dummy, 8).fail())
      

      4001 line

      if (this->IFile->read(dummy, 4).fail())
      

      4008 line

      if (this->IFile->read((char*)result, sizeof(int)).fail())
      

      4025 line

      if (this->IFile->read(dummy, 4).fail())
      

      4048 line

      if (this->IFile->read(dummy, 4).fail())
      

      4055 line

      if (this->IFile->read((char*)result, sizeof(int)*numInts).fail())
      

      4072 line

      if (this->IFile->read(dummy, 4).fail())
      

      4095 line

      if (this->IFile->read(dummy, 4).fail())
      

      4102 line

      if (this->IFile->read((char*)result, sizeof(float)*numFloats).fail())
      

      4119 line

      if (this->IFile->read(dummy, 4).fail())
      
    • vtkConvexHull2D.cxx
      31 lines

      #include <algorithm>
      
    • vtkAdjacencyMatrixToEdgeTable.cxx
      31 lines

      #include <algorithm>
      
    • vtkNormalizeMatrixVectors.cxx
      30 Line

      #include <algorithm>
      
    • vtkPairwiseExtractHistogram2D.cxx
      39 line

      #include <algorithm>
      
    • vtkControlPointsItem.cxx
      35 lines

      #include <algorithm>
      
    • vtkPiecewisePointHandleItem.cxx
      31 lines

      #include <algorithm>
      
    • vtkParallelCoordinatesRepresentation.cxx
      83 line

      #include <algorithm>
      
  1. Build VTK. (ALL_BUILD)
    1. Set the solution configuration (Debug or Release).
    2. Choose the ALL_BUILD project from Solution Explorer.
    3. Select [Build] > [Build Solution] to build VTK.
  2. Install VTK. (INSTALL)
    1. Choose the INSTALL project from Solution Explorer.
    2. Select [Build] > [Projects Only] > [Build INSTALL Only] to install VTK. The necessary files are copied to the output destination specified by CMAKE_INSTALL_PREFIX.

Environment Variable

  1. Create an environment variable VTK_ROOT and set it to the VTK install path (C:\Program Files\VTK).
  2. Add %VTK_ROOT%\bin; to the Path environment variable.

Building VTK6 with Visual Studio

Download

  1. Download VTK 6.1.0 (VTK-6.1.0.zip) and unzip it (e.g. to C:\VTK-6.1.0).
    http://www.vtk.org/VTK/resources/software.html#latestcand
    https://github.com/Kitware/VTK/tree/v6.1.0

CMake

  1. Specify the source code location and the location where the binaries (solution files) will be built.
    • Where is the source code: C:\VTK-6.1.0
    • Where is build the binaries: C:\VTK-6.1.0\build
  2. Press [Configure] and select the target Visual Studio version.
  3. Configure the following settings.
    • BUILD_SHARED_LIBS ☑ (check)
    • BUILD_TESTING ☐ (uncheck)
    • CMAKE_CONFIGURATION_TYPES Debug;Release
    • CMAKE_INSTALL_PREFIX C:\Program Files\VTK (or C:\Program Files (x86)\VTK)
  4. Press [Add Entry] to add the following setting.
    Name: CMAKE_DEBUG_POSTFIX
    Type: STRING
    Value: -gd
    Description:

    * A string appended to the end of the file names generated by the Debug build.

  5. Press [Generate] to output the solution file.

Build

  1. Start Visual Studio with administrator privileges and open the VTK solution file (C:\VTK-6.1.0\build\VTK.sln).
    (If Visual Studio is not started with administrator privileges, the INSTALL step will fail.)
  2. Build VTK. (ALL_BUILD)
    1. Set the solution configuration (Debug or Release).
    2. Choose the ALL_BUILD project from Solution Explorer.
    3. Select [Build] > [Build Solution] to build VTK.
  3. Install VTK. (INSTALL)
    1. Choose the INSTALL project from Solution Explorer.
    2. Select [Build] > [Projects Only] > [Build INSTALL Only] to install VTK. The necessary files are copied to the output destination specified by CMAKE_INSTALL_PREFIX.

Environment Variable

  1. Create an environment variable VTK_DIR and set it to the VTK install path (C:\Program Files\VTK).
  2. Add %VTK_DIR%\bin; to the Path environment variable.

Building VTK6 + Qt5 with Visual Studio

Download

  1. Download VTK 6.1.0 (VTK-6.1.0.zip) and unzip it (e.g. to C:\VTK-6.1.0).
    http://www.vtk.org/VTK/resources/software.html#latestcand
    https://github.com/Kitware/VTK/tree/v6.1.0
  2. Download and install Qt 5.4.0 with OpenGL (to C:\Qt).
    http://www.qt.io/download-open-source/#

    • Qt 5.4.0 for Windows 32-bit (VS 2013, OpenGL, 694 MB)
      (qt-opensource-windows-x86-msvc2013_opengl-5.4.0.exe)
    • Qt 5.4.0 for Windows 64-bit (VS 2013, OpenGL, 709 MB)
      (qt-opensource-windows-x86-msvc2013_64_opengl-5.4.0.exe)

CMake

  1. Specify the source code location and the location where the binaries (solution files) will be built.
    • Where is the source code: C:\VTK-6.1.0
    • Where is build the binaries: C:\VTK-6.1.0\build
  2. Press [Configure] and select the target Visual Studio version.
  3. Configure the following settings.
    (Checking [Grouped] and [Advanced] makes the entries easier to find.) * For Win32 specify msvc2013_opengl; for x64 specify msvc2013_64_opengl. Ungrouped Entries

    • Qt5Core_DIR C:/Qt/Qt5.4.0/5.4/msvc2013_64_opengl/lib/cmake/Qt5Core
    • Qt5Designer_DIR C:/Qt/Qt5.4.0/5.4/msvc2013_64_opengl/lib/cmake/Qt5Designer
    • Qt5Gui_DIR C:/Qt/Qt5.4.0/5.4/msvc2013_64_opengl/lib/cmake/Qt5Gui
    • Qt5Network_DIR C:/Qt/Qt5.4.0/5.4/msvc2013_64_opengl/lib/cmake/Qt5Network
    • Qt5OpenGL_DIR C:/Qt/Qt5.4.0/5.4/msvc2013_64_opengl/lib/cmake/Qt5OpenGL
    • Qt5Sql_DIR C:/Qt/Qt5.4.0/5.4/msvc2013_64_opengl/lib/cmake/Qt5Sql
    • Qt5WebKit_DIR C:/Qt/Qt5.4.0/5.4/msvc2013_64_opengl/lib/cmake/Qt5WebKit
    • Qt5WebKitWidgets_DIR C:/Qt/Qt5.4.0/5.4/msvc2013_64_opengl/lib/cmake/Qt5WebKitWidgets
    • Qt5Widgets_DIR C:/Qt/Qt5.4.0/5.4/msvc2013_64_opengl/lib/cmake/Qt5Widgets
    • Qt5Xml_DIR C:/Qt/Qt5.4.0/5.4/msvc2013_64_opengl/lib/cmake/Qt5Xml

    BUILD

    • BUILD_SHARED_LIBS ☑ (check)
    • BUILD_TESTING ☐ (uncheck)

    CMAKE

    • CMAKE_CONFIGURATION_TYPES Debug;Release
    • CMAKE_INSTALL_PREFIX C:\Program Files\VTK (or C:\Program Files (x86)\VTK)

    Module

    • Module_vtkGUISupportQt ☑ (check)
    • Module_vtkGUISupportQtOpenGL ☑ (check)
    • Module_vtkGUISupportQtSQL ☑ (check)
    • Module_vtkGUISupportQtWebkit ☑ (check)
    • Module_vtkRenderingQt ☑ (check)
    • Module_vtkViewsQt ☑ (check)

    OPENGL

    • OPENGL_gl_LIBRARY opengl
    • OPENGL_glu_LIBRARY glu32

    QT

    • QT_MKSPECS_DIR C:/Qt/Qt5.4.0/5.4/msvc2013_64_opengl/mkspecs/win32-msvc2013
    • QT_QMAKE_EXECUTABLE C:/Qt/Qt5.4.0/5.4/msvc2013_64_opengl/bin/qmake.exe
    • QT_QTCORE_LIBRARY_DEBUG C:/Qt/Qt5.4.0/5.4/msvc2013_64_opengl/lib/Qt5Cored.lib
    • QT_QTCORE_LIBRARY_RELEASE C:/Qt/Qt5.4.0/5.4/msvc2013_64_opengl/lib/Qt5Core.lib

    VTK

    • VTK_Group_Qt ☑ (check)
    • VTK_INSTALL_QT_PLUGIN_DIR ${CMAKE_INSTALL_PREFIX}/${VTK_INSTALL_QT_DIR}
    • VTK_QT_VERSION 5
  4. Press [Add Entry] to add the following settings.
    Name: CMAKE_PREFIX_PATH
    Type: PATH
    Value: C:\Program Files (x86)\Windows Kits\8.1\Lib\winv6.3\um\x64
    (or C:\Program Files (x86)\Windows Kits\8.1\Lib\winv6.3\um\x86)
    Description:

    * For Visual Studio 2013 the Windows Kits path is 8.1\Lib\winv6.3; for Visual Studio 2012 specify 8.0\Lib\Win8.

    Name: CMAKE_DEBUG_POSTFIX
    Type: STRING
    Value: -gd
    Description:

    * A string appended to the end of the file names generated by the Debug build.

  5. Press [Generate] to output the solution file.

Build

  1. Start Visual Studio with administrator privileges and open the VTK solution file (C:\VTK-6.1.0\build\VTK.sln).
    (If Visual Studio is not started with administrator privileges, the INSTALL step will fail.)
  2. Build VTK. (ALL_BUILD)
    1. Set the solution configuration (Debug or Release).
    2. Choose the ALL_BUILD project from Solution Explorer.
    3. Select [Build] > [Build Solution] to build VTK.
  3. Install VTK. (INSTALL)
    1. Choose the INSTALL project from Solution Explorer.
    2. Select [Build] > [Projects Only] > [Build INSTALL Only] to install VTK. The necessary files are copied to the output destination specified by CMAKE_INSTALL_PREFIX.

Environment Variable

  1. Create an environment variable VTK_DIR and set it to the VTK install path (C:\Program Files\VTK).
  2. Create an environment variable QTDIR and set it to the Qt path (C:\Qt\Qt5.4.0\5.4\msvc2013_64_opengl\ or C:\Qt\Qt5.4.0\5.4\msvc2013_opengl\).
  3. Add %VTK_DIR%\bin; and %QTDIR%\bin; to the Path environment variable.
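To check that the build and the environment variables work, a minimal "render a cone" program like the sketch below can be compiled and linked against the installed VTK (the class names are the standard VTK rendering classes; the project's include/lib settings and the .lib files to link must still be set up by hand, and the Qt modules are not needed for this test):

#include <vtkSmartPointer.h>
#include <vtkConeSource.h>
#include <vtkPolyDataMapper.h>
#include <vtkActor.h>
#include <vtkRenderer.h>
#include <vtkRenderWindow.h>
#include <vtkRenderWindowInteractor.h>

int main()
{
  // Source -> mapper -> actor -> renderer -> window pipeline.
  vtkSmartPointer<vtkConeSource> cone = vtkSmartPointer<vtkConeSource>::New();
  cone->SetResolution(32);

  vtkSmartPointer<vtkPolyDataMapper> mapper = vtkSmartPointer<vtkPolyDataMapper>::New();
  mapper->SetInputConnection(cone->GetOutputPort());

  vtkSmartPointer<vtkActor> actor = vtkSmartPointer<vtkActor>::New();
  actor->SetMapper(mapper);

  vtkSmartPointer<vtkRenderer> renderer = vtkSmartPointer<vtkRenderer>::New();
  renderer->AddActor(actor);

  vtkSmartPointer<vtkRenderWindow> window = vtkSmartPointer<vtkRenderWindow>::New();
  window->AddRenderer(renderer);
  window->SetSize(400, 300);

  vtkSmartPointer<vtkRenderWindowInteractor> interactor =
      vtkSmartPointer<vtkRenderWindowInteractor>::New();
  interactor->SetRenderWindow(window);

  window->Render();
  interactor->Start();  // close the window to exit
  return 0;
}

If a window with a cone appears, the libraries and the Path entries are set up correctly.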

Posted in CLOUD, Computer Languages, Computer Softwares, Computer Vision, Computing Technology, CUDA, GPU (CUDA), OpenCV | Tagged: , , , | 4 Comments »

Project Template in Visual Studio

Posted by Hemprasad Y. Badgujar on March 5, 2015


 


Introduction

This article describes the step by step process of creating project template in Visual Studio 2012 and VSIX installer that deploys the project template. Each step contains an image snapshot that helps the reader to keep focused.

Background

A number of predefined project and project item templates are installed when you install Visual Studio. You can use one of the many project templates to create the basic project container and a preliminary set of items for your application, class, control, or library. You can also use one of the many project item templates to create, for example, a Windows Forms application or a Web Forms page to customize as you develop your application.

You can create custom project templates and project item templates and have these templates appear in the New Project and Add New Item dialog boxes. The article describes the complete process of creating and deploying the project template.

Using the Code

Here, I have taken a very simple example which contains nearly no code but this can be extended as per your needs.

Create Project Template

First of all, create the project (or item) that resembles what you want the template to generate; it is the starting point for the template we are going to create.

Then, export the template (we are going to use the exported template as a shortcut to build our Visual Studio template package):

Visual Studio Project Templates

We are creating a project template here.

Fill all the required details:

A zip file should get created:

Creating Visual Studio Package Project

To use VSIX projects, you need to install the Visual Studio 2012 VSSDK.

Download the Visual Studio 2012 SDK.

You should see new project template “Visual Studio Package” after installing SDK.

Select C# as our project template belongs to C#.

Provide details:

Currently, we don’t need unit test project but they are good to have.

In the solution, double-click the manifest, so designer opens.

Fill all the tabs. The most important is Assets. Here you give the path of our project template (DummyConsoleApplication.zip).

As a verification step, build the solution, you should see a .vsix being generated after its dependency project:

Installing the Extension

Project template is located under “Visual C#” node.

Uninstalling the Project Template

References

Posted in .Net Platform, C, Computer Languages, Computer Software, Computer Softwares, Computer Vision, CUDA, GPU (CUDA), Installation, OpenMP, PARALLEL | Tagged: , , | Leave a Comment »

Professional ways of tracking GPU memory leakage

Posted by Hemprasad Y. Badgujar on January 25, 2015


Depending on what I am doing and what I need to track/trace and profile I utilise all 4 packages above. They also have the added benefit of being a: free; b: well maintained; c: free; d: regularly updated; e: free.

In case you hadn’t guessed, I like the free part. :)

In regards to object management, I would recommend an old C++ coding principle: as soon as you create an object, add the line that deletes it; every new should always (eventually) have a matching delete. That way you know that you are destroying the objects you create. However, it will not save you from orphaned memory block leaks, where you change where pointers are pointing, for example:

myclass* firstInstance = new myclass();
myclass* secondInstance = new myclass();
firstInstance = secondInstance;
delete firstInstance;
delete secondInstance;

You will now have created a small memory leak: the data for the real firstInstance is no longer pointed at by any pointer, and on top of that both delete calls now target the same object, which is undefined behavior. This is very hard to detect when it happens in a large code-base, and more common than it should be.

Generally, these are the pairings you need to be aware of to ensure you properly dispose of all your objects:

new -> delete
new[] -> delete[]
malloc() -> free() // or you can use realloc(0) instead of free()
calloc() -> free() // or you can use realloc(0) instead of free()
realloc(nonzero) -> free() // or you can use realloc(0) instead of free()

If you are coming to C++ from a language with garbage collection, it can take a while to get used to, but it quickly becomes habit. :)
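In modern C++ (C++11 and later) the same discipline can be delegated to smart pointers, so ownership is released automatically; a small sketch, assuming the same myclass as above:

#include <memory>
#include <utility>

class myclass { /* ... */ };

int main()
{
    // unique_ptr deletes the object it owns when it is reassigned or goes out of scope.
    std::unique_ptr<myclass> firstInstance(new myclass());
    std::unique_ptr<myclass> secondInstance(new myclass());

    // Reassigning releases the object firstInstance used to own, so nothing is orphaned,
    // and there is no double delete: secondInstance is left empty by the move.
    firstInstance = std::move(secondInstance);

    // No explicit delete calls; the remaining object is destroyed automatically here.
    return 0;
}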

Posted in C, Computer Languages, Computer Vision, Computing Technology, CUDA | Tagged: , , , , , | Leave a Comment »

Building Static zlib v1.2.7 with MSVC 2012

Posted by Hemprasad Y. Badgujar on January 22, 2015


This post will explain how to obtain and build the zlib C programming library from source, using MS Visual Studio 2012 on Windows 7. The result will be a static release version that you can use in your C/C++ projects for data compression, or as a dependency for other libraries.

Downloads

The Environment

  1. Decompress and untar the library with 7zip and you'll end up with a directory path similar to this:
    C:\Users\%USERNAME%\Downloads\lib\zlib-1.2.7\

Building

  1. Modify “libs\zlib-1.2.7\contrib\masmx86\bld_ml32.bat,” adding “/safeseh” to the following two lines.
    Before:

    ml /coff /Zi /c /Flmatch686.lst match686.asm
    ml /coff /Zi /c /Flinffas32.lst inffas32.asm

    After:

    ml /safeseh /coff /Zi /c /Flmatch686.lst match686.asm
    ml /safeseh /coff /Zi /c /Flinffas32.lst inffas32.asm
  2. Open the solution file that came with the package, “libs\zlib-1.2.7\contrib\vstudio\vc10\zlibvc.sln,” and upgrade the solution file if necessary to MSVC 2012.
  3. Change to “Release” configuration.
  4. Remove “ZLIB_WINAPI;” from the “zlibstat” project’s property page: “Configuration Properties → C/C++ → Preprocessor → Preprocessor Definitions
  5. Build the solution.
  6. The new static library file is created in a new subfolder:
    C:\Users\%USERNAME%\Downloads\lib\zlib-1.2.7\contrib\vstudio\vc10\x86\ZlibStatRelease\zlibstat.lib

Installing

  1. Create a place for the zlib library with "zlib" and "lib" subfolders.
    mkdir "C:\workspace\lib\zlib\zlib-1.2.7\zlib"
    mkdir "C:\workspace\lib\zlib\zlib-1.2.7\lib"
  2. Copy the header files.
    xcopy "C:\Users\%USERNAME%\Downloads\lib\zlib-1.2.7\*.h" "C:\workspace\lib\zlib\zlib-1.2.7\zlib"
  3. Copy the library file.
    xcopy "C:\Users\%USERNAME%\Downloads\lib\zlib-1.2.7\contrib\vstudio\vc10\x86\ZlibStatRelease\zlibstat.lib" "C:\workspace\lib\zlib\zlib-1.2.7\lib\zlibstat.lib"
  4. Add the include and lib paths to the default project property page in MSVC 2012:
    View → Other Windows → Property Manager → Release/Debug → Microsoft.Cpp.Win32.user
    Be sure to save the property sheet so that the changes take effect.

Testing

  1. Create a new project, “LibTest” in MSVC 2012.
  2. Explicitly add the zlib library to the project: Project → Properties → Linker → Input → Additional Dependencies = "zlibstat.lib;"
  3. Create a source file in the project and copy the “zpipe.c” example code.

Build the project. It should compile and link successfully.
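If you want an even smaller test than zpipe.c, a minimal in-memory round trip like the sketch below also verifies the build (it only assumes that zlib.h and zlibstat.lib are on the include/lib paths set up earlier):

#include <cstdio>
#include <cstring>
#include <zlib.h>

int main()
{
    const char text[] = "Hello, zlib! Hello, zlib! Hello, zlib!";
    Bytef compressed[256];
    Bytef restored[256];
    uLongf compressedLen = sizeof(compressed);
    uLongf restoredLen = sizeof(restored);

    // Compress the buffer at the default compression level.
    if (compress2(compressed, &compressedLen,
                  reinterpret_cast<const Bytef*>(text), sizeof(text),
                  Z_DEFAULT_COMPRESSION) != Z_OK) {
        std::fprintf(stderr, "compress2 failed\n");
        return 1;
    }

    // Decompress and check that the original text comes back.
    if (uncompress(restored, &restoredLen, compressed, compressedLen) != Z_OK) {
        std::fprintf(stderr, "uncompress failed\n");
        return 1;
    }

    std::printf("zlib %s: %u -> %lu -> %lu bytes, match = %d\n",
                zlibVersion(), (unsigned)sizeof(text),
                (unsigned long)compressedLen, (unsigned long)restoredLen,
                std::memcmp(text, restored, sizeof(text)) == 0);
    return 0;
}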

Potential Issues

These are some of the problems that you might run into while trying to build zlib.

LNK2026: module unsafe for SAFESEH image

Need to include support for safe exception handling. Modify “libs\zlib-1.2.7\contrib\masmx86\bld_ml32.bat,” adding “/safeseh” to the following two lines.
Before:

ml /coff /Zi /c /Flmatch686.lst match686.asm
ml /coff /Zi /c /Flinffas32.lst inffas32.asm

After:

ml /safeseh /coff /Zi /c /Flmatch686.lst match686.asm
ml /safeseh /coff /Zi /c /Flinffas32.lst inffas32.asm

LNK2001: unresolved external symbol _inflateInit_

The code is trying to link with the DLL version of the library instead of the static version. Remove “ZLIB_WINAPI;” from the “zlibstat” project’s property page: “Configuration Properties → C/C++ → Preprocessor → Preprocessor Definitions

Posted in C, Computer Languages, Computing Technology, Cryptography, Free Tools | Tagged: , , , | Leave a Comment »

Tutorial: OpenCV haartraining

Posted by Hemprasad Y. Badgujar on January 18, 2015


Rapid Object Detection With A Cascade of Boosted Classifiers Based on Haar-like Features

The OpenCV library provides a very interesting demonstration of face detection. Furthermore, it provides the programs (or functions) that its developers used to train the classifiers for their face detection system, called HaarTraining, so that we can create our own object classifiers using these functions.

Objective

However, I could not follow exactly how the OpenCV developers performed the haartraining for their face detection system, because they did not provide some information such as which images and parameters they used for training. The objective of this report is to provide step-by-step procedures for people who want to follow them.

My working environment is Visual Studio + cygwin on Windows XP, or Linux. Cygwin is required because I use several UNIX commands. I am sure that you will use cygwin (that is, UNIX commands) not only for this haartraining but also for other work in the future if you do engineering or science.

FYI: I recommend working on something else concurrently with haartraining, because you have to wait many days during training (it could take a week). I typically experimented like this: 1. run haartraining on Friday, 2. forget about it completely, 3. see the results the next Friday, 4. run another haartraining (loop).

 

Data Prepartion

FYI: There are database lists on Face Recognition Homepage – Databases and Computer Vision Test Images.

Positive (Face) Images

We need to collect positive images that contain only objects of interest, e.g., faces.

Kuranov et al. [3] mention that they used 5000 positive frontal face patterns, and that these 5000 patterns were derived from 1000 original faces. I describe how to increase the number of samples in a later chapter.

Before, I downloaded and used The UMIST Face Database (Dead Link) because cropped face images were available there. The UMIST Face Database has video-like image sequences from side faces to frontal faces. I thought training with such images would generate a face detector robust to facial pose. However, the generated face detector did not work well. Probably, I dreamed too much. That was a story from 2006.

I obtained a cropped frontal face database based on the CMU PIE Database. I use it too. This dataset has large illumination variations, so it could lead to the same bad result as in the case of the UMIST Face Database, which had large variations in pose.
#Sorry, it looks like redistribution (of modifications) of the PIE database is not allowed. I made only a generated (distorted and diminished) .vec file available in the Download section. The PIE database is free (send a request e-mail), but it does not originally include the cropped faces.

MIT CBCL Face Data is another choice. They have 2,429 frontal faces with few illumination variations and pose variations. This data would be good for haartraining. However, the size of image is originally small 19 x 19. So, we can not perform experiments to determine good sizes.

Probably, the OpenCV developers used the FERET database. It looks that the FERET database became available to download over internet from Jan. 31, 2008(?).

Negative (Background) Images

We need to collect negative images that do not contain objects of interest (e.g., faces) to train the haarcascade classifier.

Kuranov et al. [3] state that they used 3000 negative images.

Fortunately, I found http://face.urtho.net/ (Negatives sets, Set 1 – Various negatives), which had about 3500 images (Dead Link). But this collection was used for eye detection and includes some faces in some pictures. Therefore, I deleted all suspicious images that looked like they included faces. About 2900 images remained, and I added 100 images to them. The number should be enough.

The collection is available at the Download section (But, it may take forever to download.)

Natural Test (Face in Background) Images

We can synthesize testing image sets using the createsamples utility, but having a natural testing image dataset is still good.

There is a CMU-MIT Frontal Face Test Set that the OpenCV developers used for their experiments. This dataset has a ground truth text including information for locations of eyes, noses, and lip centers and tips, however, it does not have locations of faces expressed by rectangle regions required by the haartraining utilities as default.

I created a simple script to compute facial regions from given ground truth information. My computation works as follows:

1. Get a margin as (nose height - mouth height)
Lower boundary is located the margin below the mouth
Upper boundary is located the margin above the eyes
2. Get a margin as (left mouth tip - right mouth tip)
Right boundary is located the margin to the right of the right eye
Left boundary is located the margin to the left of the left eye

This was not perfect, but looked okay.

The generated ground truth text and image dataset is available at the Download section, you may download only the ground truth text. By the way, I converted GIF to PNG because OpenCV does not support GIF. The mogrify (ImageMagick) command would be useful to do such conversion of image types

$ mogrify -format png *.gif

How to Crop Images Manually Fast

To collect positive images, you may have to crop images a lot by your hand.

I created a multi-platform software tool, imageclipper, to help with this. This software is useful not only for haartraining but also for other computer vision/machine learning research. It has the following characteristics:

  • You can open images in a directory sequentially
  • You can open a video file too, frame by frame
  • Clipping and moving to the next image can be done by one button (SPACE)
  • You will select a region to clip by dragging left mouse button
  • You can move or resize your selected region by dragging right mouse button
  • Your selected region is shown on the next image too.

Create Samples (Reference)

We can create training samples and testing samples with the createsamples utility. In this section, I describe functionalities of the createsamples software because the Tutorial [1] did not explain them clearly for me (but please see the Tutorial [1] also for further options).

This is the list of options, but there are mainly four functions, and the meanings of the options differ between the functions, which is confusing.

Usage: ./createsamples
  [-info <description_file_name>]
  [-img <image_file_name>]
  [-vec <vec_file_name>]
  [-bg <background_file_name>]
  [-num <number_of_samples = 1000>]
  [-bgcolor <background_color = 0>]
  [-inv] [-randinv] [-bgthresh <background_color_threshold = 80>]
  [-maxidev <max_intensity_deviation = 40>]
  [-maxxangle <max_x_rotation_angle = 1.100000>]
  [-maxyangle <max_y_rotation_angle = 1.100000>]
  [-maxzangle <max_z_rotation_angle = 0.500000>]
  [-show [<scale = 4.000000>]]
  [-w <sample_width = 24>]
  [-h <sample_height = 24>]

1. Create training samples from one

The 1st function of the createsamples utility is to create training samples from one image applying distortions. This function (cvhaartraining.cpp#cvCreateTrainingSamples) is launched when options, -img, -bg, and -vec were specified.

  • -img <one_positive_image>
  • -bg <collection_file_of_negatives>
  • -vec <name_of_the_output_file_containing_the_generated_samples>

For example,

$ createsamples -img face.png -num 10 -bg negatives.dat -vec samples.vec -maxxangle 0.6 -maxyangle 0 -maxzangle 0.3 -maxidev 100 -bgcolor 0 -bgthresh 0 -w 20 -h 20

This generates <num> number of samples from one <positive_image> applying distortions. Be careful that only the first <num> negative images in the <collection_file_of_negatives> are used.

The file of the <collection_file_of_negatives> is as follows:

[filename]
[filename]
[filename]
...

such as

img/img1.jpg
img/img2.jpg

Let me call this file format as collection file format.

How to create a collection file

This format can easily be created with the find command as

$ cd [your working directory]
$ find [image dir] -name '*.[image ext]' > [description file]

such as

$ find ../../data/negatives/ -name '*.jpg' > negatives.dat

2. Create training samples from some

The 2nd function is to create training samples from some images without applying distortions. This function (cvhaartraining.cpp#cvCreateTestSamples) is launched when options, -info, and -vec were specified.

  • -info <description_file_of_samples>
  • -vec <name_of_the_output_file_containing_the_generated_samples>

For example,

$ createsamples -info samples.dat -vec samples.vec -w 20 -h 20

This generates samples without applying distortions. You may think of this function as a file format conversion function.

The format of the <description_file_of_samples> is as follows:

[filename] [# of objects] [[x y width height] [... 2nd object] ...]
[filename] [# of objects] [[x y width height] [... 2nd object] ...]
[filename] [# of objects] [[x y width height] [... 2nd object] ...]
...

where (x,y) is the left-upper corner of the object where the origin (0,0) is the left-upper corner of the image such as

img/img1.jpg 1 140 100 45 45
img/img2.jpg 2 100 200 50 50 50 30 25 25
img/img3.jpg 1 0 0 20 20

Let me call this format the description file format, as opposed to the collection file format, although the manual [1] does not differentiate them.

This function crops the specified regions, resizes the images, and converts them into the .vec format, but (let me say again) it does not generate many samples from one image (one cropped image) by applying distortions. Therefore, you may use this 2nd function only when you already have a sufficient number of natural images and their ground truths (in total, 5000 or 7000 would be required).

Note that the option -num is used only to restrict the number of samples to generate, not to increase number of samples applying distortions in this case.

How to create a description file

Here I describe how to create a description file when already-cropped image files are available, because some people asked at the OpenCV forum how to create one. Note that my tutorial steps do not require this.

For such a situation, you can use the find command and the identify command (cygwin should have identify (ImageMagick) command) to create a description file as

$ cd <your working directory>
$ find <dir> -name '*.<ext>' -exec identify -format '%i 1 0 0 %w %h' \{\} \; > <description_file>

such as

$ find ../../data/umist_cropped -name '*.pgm' -exec identify -format '%i 1 0 0 %w %h' \{\} \; > samplesdescription.dat

If all images have the same size, it becomes simpler and faster,

$ find <dir> -name '*.<ext>' -exec echo \{\} 1 0 0 <width> <height> \; > <description_file>

such as

$ find ../../data/umist_cropped -name '*.pgm' -exec echo \{\} 1 0 0 20 20 \; > samplesdescription.dat

How do you automate cropping images? If you can do that, you do not need haartraining; you already have an object detector (^-^

3. Create test samples

The 3rd function is to create test samples and their ground truth from single image applying distortions. This function (cvsamples.cpp#cvCreateTrainingSamplesFromInfo) is triggered when options, -img, -bg, and -info were specified.

  • -img <one_positive_image>
  • -bg <collection_file_of_negatives>
  • -info <generated_description_file_for_the_generated_test_images>

In this case, -w and -h are used to determine the minimal size of positives to be embedded in the test images.

$ createsamples -img face.png -num 10 -bg negatives.dat -info test.dat -maxxangle 0.6 -maxyangle 0 -maxzangle 0.3 -maxidev 100 -bgcolor 0 -bgthresh 0

Be careful that only the first <num> negative images in the <collection_file_of_negatives> are used.

This generates tons of jpg files

The output image filename format is as <number>_<x>_<y>_<width>_<height>.jpg, where x, y, width and height are the coordinates of placed object bounding rectangle.

Also, this generates <description_file_for_test_samples> of the description file format (the same format with <description_file_of_samples> at the 2nd function).

4. Show images

The 4th function is to show images within a vec file. This function (cvsamples.cpp#cvShowVecSamples) is triggered when only an option, -vec, was specified (no -info, -img, -bg). For example,

$ createsamples -vec samples.vec -w 20 -h 20

EXTRA: random seed

The createsamples software applies the same sequence of distortions to each image. We may want to apply a different sequence of distortions to each image because, otherwise, the resulting detector may work only for those specific distortions.

This can be done by modifying createsamples slightly as:

Add the following at the top:

#include<time.h>

Add the following in the main function:

srand(time(NULL));

The modified source code is available at svn:createsamples.cpp

Create Samples

Create Training Samples

Kuranov et al. [3] mention that they used 5000 positive frontal face patterns and 3000 negatives for training, and that the 5000 positive frontal face patterns were derived from 1000 original faces.

However, you may have noticed that none of the 4 functions of the createsamples utility generates 5000 positive images from 1000 images in one shot. We have to use the 1st function of createsamples to generate 5 (or so) positives from 1 image, repeat the procedure 1000 (or so) times, and finally merge the generated output vec files. *1

I wrote a program, mergevec.cpp, to merge vec files. I also wrote a script, createtrainsamples.pl, to repeat the procedure 1000 (or so) times. I specified 7000 instead of 5000 as the default because the Tutorial [1] states that “the reasonable number of positive samples is 7000.” Please modify the path to createsamples and its option parameters, which are written directly in the file.

The input format of createtrainsamples.pl is

$ perl createtrainsamples.pl <positives.dat> <negatives.dat> <vec_output_dir> [<totalnum = 7000>] [<createsample_command_options = "./createsamples -w 20 -h 20...">]

And, the input format of mergevec is

$ mergevec <collection_file_of_vecs> <output_vec_file_name>

A collection file (a file containing list of filenames) can be generated as

$ find [dir_name] -name '*.[ext]' > [collection_file_name]

Example)

$ cd HaarTraining/bin 
$ find ../../data/negatives/ -name '*.jpg' > negatives.dat
$ find ../../data/umist_cropped/ -name '*.pgm' > positives.dat

$ perl createtrainsamples.pl positives.dat negatives.dat samples 7000 "./createsamples  -bgcolor 0 -bgthresh 0 -maxxangle 1.1 -maxyangle 1.1 -maxzangle 0.5 -maxidev 40 -w 20 -h 20"
$ find samples/ -name '*.vec' > samples.dat # to create a collection file for vec files
$ mergevec samples.dat samples.vec
$ # createsamples -vec samples.vec -show -w 20 -h 20 # Extra: If you want to see inside

Kuranov et al. [3] state that a sample size of 20×20 achieved the highest hit rate. Furthermore, they state that “For 18×18 four split nodes performed best, while for 20×20 two nodes were slightly better.” Thus, -w 20 -h 20 would be good.

Create Testing Samples

Testing samples are images which embed positives in negative background images, where the locations of the positives in the images are known. It is possible to create such testing images by hand. We can also use the 3rd function of createsamples to synthesize such images, but since only one image can be specified with it, a script to repeat the procedure helps us. The script is available at svn:createtestsamples.pl. Please modify the path to createsamples and its option parameters directly in the file.

The input format of the createtestsamples.pl is as

$ perl createtestsamples.pl <positives.dat> <negatives.dat> <output_dir> [<totalnum = 1000>] [<createsample_command_options = "./createsamples -w 20 -h 20...">]

This generates lots of jpg files and info.dat in the <output_dir>. The jpg file name format is as <number>_<x>_<y>_<width>_<height>.jpg, where x, y, width and height are the coordinates of placed object bounding rectangle.

Example)

$ # cd HaarTraining/bin 
$ # find ../../data/negatives/ -name '*.jpg' > negatives.dat 
$ # find ../../data/umist_cropped/ -name '*.pgm' > positives.dat
$ perl createtestsamples.pl positives.dat negatives.dat tests 1000 "./createsamples -bgcolor 0 -bgthresh 0 -maxxangle 1.1 -maxyangle 1.1 -maxzangle 0.5 -maxidev 40"
$ find tests/ -name 'info.dat' -exec cat \{\} \; > tests.dat # merge info files

Training

Haar Training

Now, we train our own classifier using the haartraining utility. Here is the usage of the haartraining.

Usage: ./haartraining
  -data <dir_name>
  -vec <vec_file_name>
  -bg <background_file_name>
  [-npos <number_of_positive_samples = 2000>]
  [-nneg <number_of_negative_samples = 2000>]
  [-nstages <number_of_stages = 14>]
  [-nsplits <number_of_splits = 1>]
  [-mem <memory_in_MB = 200>]
  [-sym (default)] [-nonsym]
  [-minhitrate <min_hit_rate = 0.995000>]
  [-maxfalsealarm <max_false_alarm_rate = 0.500000>]
  [-weighttrimming <weight_trimming = 0.950000>]
  [-eqw]
  [-mode <BASIC (default) | CORE | ALL>]
  [-w <sample_width = 24>]
  [-h <sample_height = 24>]
  [-bt <DAB | RAB | LB | GAB (default)>]
  [-err <misclass (default) | gini | entropy>]
  [-maxtreesplits <max_number_of_splits_in_tree_cascade = 0>]
  [-minpos <min_number_of_positive_samples_per_cluster = 500>]

Kuranov et al. [3] state that a sample size of 20×20 achieved the highest hit rate. Furthermore, they state that “For 18×18 four split nodes performed best, while for 20×20 two nodes were slightly better. The difference between weak tree classifiers with 2, 3 or 4 split nodes is smaller than their superiority with respect to stumps.”

Furthermore, there was a description: “20 stages were trained. Assuming that my test set is representative for the learning task, I can expect a false alarm rate of about 0.5^20 ≈ 9.6e-07 and a hit rate of about 0.999^20 ≈ 0.98.”

Therefore, using a 20×20 sample size with nsplit = 2, nstages = 20, minhitrate = 0.999 (default: 0.995), maxfalsealarm = 0.5 (default: 0.5), and weighttrimming = 0.95 (default: 0.95) would be good, such as

$ haartraining -data haarcascade -vec samples.vec -bg negatives.dat -nstages 20 -nsplits 2 -minhitrate 0.999 -maxfalsealarm 0.5 -npos 7000 -nneg 3019 -w 20 -h 20 -nonsym -mem 512 -mode ALL

The “-nonsym” option is used when the object class does not have vertical (left-right) symmetry. If object class has vertical symmetry such as frontal faces, “-sym (default)” should be used. It will speed up processing because it will use only the half (the centered and either of the left-sided or the right-sided) haar-like features.

The “-mode ALL” uses Extended Sets of Haar-like Features [2]. Default is BASIC and it uses only upright features, while ALL uses the full set of upright and 45 degree rotated feature set[1].

The “-mem 512” is the available memory in MB for precalculation [1]. Default is 200MB, so increase if more memory is available. We should not specify all system RAM because this number is only for precalculation, not for all. The maximum possible number to be specified would be 2GB because there is a limit of 4GB on the 32bit CPU (2^32 ≒ 4GB), and it becomes 2GB on Windows (kernel reserves 1GB and windows does something more).

There are other options that [1] does not list such as

 [-bt <DAB | RAB | LB | GAB (default)>]
 [-err <misclass (default) | gini | entropy>]
 [-maxtreesplits <max_number_of_splits_in_tree_cascade = 0>]
 [-minpos <min_number_of_positive_samples_per_cluster = 500>]

Please see my modified version of haartraining document [5] for details.

#Even if you increase the number of stages, the training may finish at an intermediate stage when it exceeds your desired minimum hit rate or false alarm rate, because more cascading always decreases these rates (0.99 up to the current stage * 0.99 for the next stage = 0.9801 up to the next stage). Or the training may finish because all samples were rejected; in that case, you must increase the number of training samples.

#You can use OpenMP (multi-processing) with compilers such as Intel C++ compiler and MS Visual Studio 2005 Professional Edition or better. See How to enable OpenMP section.

#One training took three days.

Generate a XML File

The haartraining utility generates an xml file when the process is completely finished (since OpenCV beta5).

If you want to convert an intermediate haartraining output directory tree into an xml file, there is a program at OpenCV/samples/c/convert_cascade.c (that is, in your installation directory). Compile it.

The input format is as

$ convert_cascade --size="<sample_width>x<sample_height>" <haartraining_output_dir> <output_file>

Example)

$ convert_cascade --size="20x20" haarcascade haarcascade.xml

Testing

Performance Evaluation

We can evaluate the performance of the generated classifier using the performance utility. Here is the usage of the performance utility.

Usage: ./performance
  -data <classifier_directory_name>
  -info <collection_file_name>
  [-maxSizeDiff <max_size_difference = 1.500000>]
  [-maxPosDiff <max_position_difference = 0.300000>]
  [-sf <scale_factor = 1.200000>]
  [-ni]
  [-nos <number_of_stages = -1>]
  [-rs <roc_size = 40>]
  [-w <sample_width = 24>]
  [-h <sample_height = 24>]

Please see my modified version of haartraining document [5] for details of options.

I cite how the performance utility works here:

During detection, a sliding window was moved pixel by pixel over the picture at each scale. Starting with the original scale, the features were enlarged by 10% and 20%, respectively (i.e., representing a rescale factor of 1.1 and 1.2, respectively) until exceeding the size of the picture in at least one dimension. Often multiple faces are detect at near by location and scale at an actual face location. Therefore, multiple nearby detection results were merged. Receiver Operating Curves (ROCs) were constructed by varying the required number of detected faces per actual face before merging into a single detection result. During experimentation only one parameter was changed at a time. The best mode of a parameter found in an experiment was used for the subsequent experiments. [3]

Execute the performance utility as

$ performance -data haarcascade -w 20 -h 20 -info tests.dat -ni
or
$ performance -data haarcascade.xml -info tests.dat -ni

Be careful that you have to tell the size of training samples when you specify the classifier directory although the classifier xml file includes the information inside *2.

The -ni option suppresses creation of the resulting detection images. By default, the performance utility creates detection result images and stores them in directories whose names are the test image directories prefixed with ‘det-’. If you want to use this feature, you have to create the destination directories beforehand yourself. Execute the next command to create them:

$ cat tests.dat | perl -pe 's!^(.*)/.*$!det-$1!g' | xargs mkdir -p

where tests.dat is the collection file for the test images which you created in the createtestsamples.pl step. Now you can execute the performance utility without the ‘-ni’ option.

An output of the performance utility is as follows:

+================================+======+======+======+
|            File Name           | Hits |Missed| False|
+================================+======+======+======+
|tests/01/img01.bmp/0001_0153_005|     0|     1|     0|
+--------------------------------+------+------+------+
....
+--------------------------------+------+------+------+
|                           Total|   874|   554|    72|
+================================+======+======+======+
Number of stages: 15
Number of weak classifiers: 68
Total time: 115.000000
15
        874     72      0.612045        0.050420
        874     72      0.612045        0.050420
        360     2       0.252101        0.001401
        115     0       0.080532        0.000000
        26      0       0.018207        0.000000
        8       0       0.005602        0.000000
        4       0       0.002801        0.000000
        1       0       0.000700        0.000000
        ....

‘Hits’ is the number of correct detections. ‘Missed’ is the number of missed detections, i.e. false negatives (a face truly exists, but the detector failed to detect it). ‘False’ is the number of false alarms, i.e. false positives (no face exists there, but the detector reported one).

The latter table is the data for an ROC plot: each row lists the hits and false alarms at an increasingly strict detection requirement, followed by the same two numbers divided by the total number of annotated faces (874 + 554 = 1428 here, so 874/1428 ≈ 0.612 and 72/1428 ≈ 0.050 in the first row). Please see my modified version of the haartraining document [5] for more.

Fun with a USB camera

Have fun with a USB camera, or with some image files, using the facedetect utility.

$ facedetect --cascade=<xml_file> [filename(image or video)|camera_index]

I modified facedetect.c slightly because the facedetect utility did not behave in the same manner as the performance utility. I added options to change the parameters on the command line. The source code is available at the Download section (or direct link facedetect.c). Now the usage is as follows:

Usage: facedetect  --cascade="<cascade_xml_path>" or -c <cascade_xml_path>
  [ -sf < scale_factor = 1.100000 > ]
  [ -mn < min_neighbors = 1 > ]
  [ -fl < flags = 0 > ]
  [ -ms < min_size = 0 0 > ]
  [ filename | camera_index = 0 ]
See also: cvHaarDetectObjects() about option parameters.

FYI: The original facedetect.c used min_neighbors = 2 although performance.cpp uses min_neighbors = 1. It affected face detection results considerably.
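
For reference, here is a minimal sketch of such a detection call using the OpenCV 1.x C API. This is my own illustrative example, not the author's facedetect.c; the cascade file name "haarcascade.xml" is a placeholder, and newer OpenCV versions add an extra max_size argument to cvHaarDetectObjects.

#include <cv.h>
#include <highgui.h>
#include <stdio.h>

int main(int argc, char** argv)
{
    CvHaarClassifierCascade* cascade;
    IplImage* img;
    CvMemStorage* storage;
    CvSeq* faces;
    int i;

    if (argc < 2) { fprintf(stderr, "usage: detect <image>\n"); return 1; }

    /* load a cascade trained as described above ("haarcascade.xml" is a placeholder) */
    cascade = (CvHaarClassifierCascade*)cvLoad("haarcascade.xml", 0, 0, 0);
    img = cvLoadImage(argv[1], 1);   /* 1 = load as a colour image */
    storage = cvCreateMemStorage(0);
    if (!cascade || !img) { fprintf(stderr, "failed to load the cascade or the image\n"); return 1; }

    /* scale_factor = 1.1, min_neighbors = 1, flags = 0, min_size = 0x0:
       the same values the modified facedetect above uses by default */
    faces = cvHaarDetectObjects(img, cascade, storage, 1.1, 1, 0, cvSize(0, 0));

    for (i = 0; i < (faces ? faces->total : 0); i++) {
        CvRect* r = (CvRect*)cvGetSeqElem(faces, i);
        printf("face at (%d, %d), size %dx%d\n", r->x, r->y, r->width, r->height);
    }

    cvReleaseMemStorage(&storage);
    cvReleaseImage(&img);
    return 0;
}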

Experiments

PIE Experiment 1

The PIE dataset contains only frontal faces, but with large illumination variations. The data used in the PIE experiments look as follows:

[Example images: img01_01.png (1st), img01_10.png (10th), img01_21.png (21st)]
  • List of commands: haarcascade_frontalface_pie1.sh
    • I used -w 18 -h 20 because the original images were not square but rectangular, with an aspect ratio of about 18:20. I applied only small distortions in this experiment.
    • The training took 3 days on an Intel Xeon 2 GHz machine with 1 GB of memory.
  • Performance evaluation with pie_test (synthesized tests): haarcascade_frontalface_pie1.performance_pie_tests.txt
    +================================+======+======+======+
    |            File Name           | Hits |Missed| False|
    +================================+======+======+======+
    |                           Total|   847|   581|    67|
    +================================+======+======+======+
    Number of stages: 16
    Number of weak classifiers: 113
    Total time: 123.000000
    16
            847     67      0.593137        0.046919
            847     67      0.593137        0.046919
            353     2       0.247199        0.001401
            110     0       0.077031        0.000000
            15      0       0.010504        0.000000
            1       0       0.000700        0.000000
    
  • Performance evaluation with cmu_tests (natural tests): haarcascade_frontalface_pie1.performance_cmu_tests.txt
    +================================+======+======+======+
    |            File Name           | Hits |Missed| False|
    +================================+======+======+======+
    |                           Total|    20|   491|     9|
    +================================+======+======+======+
    Number of stages: 16
    Number of weak classifiers: 113
    Total time: 5.830000
    16
    	20	9	0.039139	0.017613
    	20	9	0.039139	0.017613
    	2	0	0.003914	0.000000
    

PIE Experiment 2

PIE Experiment 3

PIE Experiment 4

PIE Experiment 5

PIE Experiment 6

UMIST Experiment 1

UMIST is a multi-view face dataset.

[Example images: 1a000.png (0th frame), 1a021.png (21st frame), 1a033.png (33rd frame)]

UMIST Experiment 2

CBCL Experiment 1

haarcascade_frontalface_alt2.xml

Discussion

The created detectors outperformed the OpenCV default xml on synthesized test samples created from the training samples. This shows that the training itself was performed successfully. However, the detectors did not work well on general test samples. This probably means that the detectors were over-trained, or over-fitted, to the specific training samples. I still don't know which parameters or training samples would make the detectors generalize well.

The false alarm rates of all of my generated detectors were quite low compared with the OpenCV default detector. I don't know which parameter makes the difference; I set the per-stage maximum false alarm rate to 0.5, which makes sense theoretically, so I am not sure.

Training illumination-varying faces in one detector gave pretty poor results. The generated detector became sensitive to illumination rather than robust to it, and it does not detect normally lit frontal faces. This makes sense because normally lit frontal faces were rare in the training set. Training multi-view faces in a single detector gave the same result.

We should train a separate detector for each face pose or illumination state to construct a multi-view or illumination-varied face detector, as in Fast Multi-view Face Detection. Viola and Jones extended their work to multi-view detection by training 12 separate face-pose detectors. To keep detection fast, they also built a pose estimator from a C4.5 decision tree re-using the haar-like features, and cascaded the pose estimator with the face detectors (which of course means that if the pose estimation fails, the face detection also fails).

Theory behind

The advantage of the haar-like features is their speed in the detection phase, not their accuracy. We can of course construct a face detector with better accuracy using, e.g., PCA or LDA, although it would be slower at detection time. Use such features when you do not require speed. PCA does not require AdaBoost training, so the training phase would finish quickly. I am fairly sure such face detection methods already exist, although I did not search for them.

Download

The files are available at http://tutorial-haartraining.googlecode.com/svn/trunk/ (old repository)

Directory Tree

  • HaarTraining: the haartraining code and utilities
    • src: source code; the haartraining sources and my additional C++ files are here.
    • src/Makefile: Makefile for Linux; please read the comments inside.
    • bin: ready-built Windows binaries; my perl scripts are also here. This directory can be used as a working directory.
    • make: Visual Studio project files
  • data: the collected image datasets
  • result: generated files (vec, xml, etc.) and results

This is an svn repository, so you can download all the files at once if you have an svn client (you should already have one on cygwin or Linux). For example,

$ svn co http://tutorial-haartraining.googlecode.com/svn/trunk/ tutorial-haartraining

Sorry, but downloading (checking out) the image datasets may take forever…. I created a zip file once, but the Google Code repository did not allow me to upload such a big file (100 MB). I recommend checking out only the HaarTraining directory first:

$ svn co http://tutorial-haartraining.googlecode.com/svn/trunk/HaarTraining/ HaarTraining

Here is the list of my additional utilities (I put them in the HaarTraining/src and HaarTraining/bin directories):

The following additional utilities can be obtained from OpenCV/samples/c in your OpenCV install directory (I also put them in the HaarTraining/src directory).

How to enable OpenMP

I bundled Windows binaries in the Download section, but I did not enable OpenMP (multi-threading) support. Therefore, here is how to compile the haartraining utility with OpenMP using Visual Studio 2005 Professional Edition, based on my distribution files (the procedure should be the same for the originals too, but I did not verify it).

The solution file is in HaarTraining\make\haartraining.sln. Open it.

Right-click the cvhaartraining project > Properties to open the project properties dialog.

Go to Configuration Properties > C/C++ > Language and change ‘OpenMP Support’ to ‘Yes (/openmp)’. If you cannot see this option, your environment probably does not support OpenMP.

Build cvhaartraining only (right-click the project > Project Only > Rebuild Only cvhaartraining) and repeat the same procedure (enabling OpenMP) for the haartraining project. Now haartraining.exe should run with OpenMP.

You may use Process Explorer to verify whether it is utilizing OpenMP or not.

Run Process Explorer > View > Show Lower Pane (Ctrl+L), choose the ‘haartraining.exe’ process and look at the lower pane. If you see more than one thread, it is utilizing OpenMP.
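
If you prefer to check from within the code itself, here is a tiny sketch (my own illustrative example, not part of the haartraining sources) that reports how many OpenMP threads are available when the program is built with the /openmp option:

#include <stdio.h>
#include <omp.h>

int main(void)
{
    /* With /openmp enabled this should report more than one thread on a
       multi-core machine, and the parallel region prints one line per thread. */
    printf("OpenMP max threads: %d\n", omp_get_max_threads());

    #pragma omp parallel
    {
        #pragma omp critical
        printf("hello from thread %d\n", omp_get_thread_num());
    }
    return 0;
}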

References


*1 There was an option to modify the code of the 2nd function so that it applies distortions and generates many images from one image, but I chose to write scripts that repeat the 1st function instead, because the same method can also be used for creating test samples.
*2 The performance utility supports both a classifier directory and a haarcascade xml file; more precisely, the cvLoadHaarClassifierCascade() function supports both.

Posted in Computer Languages, OpenCV, OpenCV Tutorial | Tagged: , , | Leave a Comment »

Computer Vision Source Code

Posted by Hemprasad Y. Badgujar on January 17, 2015



Matlab code for Skin Detection:
Info: Readme.TXT
Full Matlab code and demo: skindetector.zip

If you publish work which uses this code, please reference:
Ciarán Ó Conaire, Noel E. O’Connor and Alan F. Smeaton, “Detector adaptation by maximising agreement between independent data sources”, IEEE International Workshop on Object Tracking and Classification Beyond the Visible Spectrum 2007

[Figure: (1) original image, (2) skin likelihood image, (3) detected skin (using a threshold of zero)]

 

Command line Face Detection

This uses the face-detector in the Blepo computer vision library, which in turn uses the OpenCV implementation of the Viola-Jones face detection method.

Info and usage: README

 

Simple MATLAB scripts

These small scripts are required for some of the other code on this page to work.
Code: makelinear.m – convert any size array/matrix into a Nx1 vector, where N = prod(size(inputMatrix))
Code: shownormimage.m – display any single-band or triple-band image, by normalising each band so that the darkest pixel is black and the brightest is white
Code: filter3.m – uses filter2 to perform filtering on each image band separately
Code: removezeros.m – removes zero values from a vector.
Code: integrate.m – compute the cumulative sum of values in the rows of a matrix.

Layered Background Model

Reference: Performance analysis and visualisation in tennis using a low-cost camera network, Philip Kelly, Ciarán Ó Conaire, David Monaghan, Jogile Kuklyte, Damien Connaghan, Juan Diego Pérez-Moneo Agapito, Petros Daras, Multimedia Grand Challenge Track at ACM Multimedia 2010, 25-29 October 2010, Firenze, Italy.
PDF Version

Download code: layered_background_model_code.zip

Usage:

Model creation:

N = 5; % number of layers
T = 20; % RGB Euclidian threshold (for determining if the pixel matches a layer)
U = 0.999; % update rate
A = 0.85; % fraction of observed colour to account for in the background

im = imread('image.jpg');
bgmodel = initLayeredBackgroundModel(im, N, T, U, A);

Model updating:

im = imread('image_new.jpg');
[bgmodel, foreground, bridif, coldif, shadows] = updateLayeredBackgroundModel(bgmodel, im);

This returns the updated background model, the foreground image, the brightness difference, the colour difference, and the shadow-pixels image.

Filtering/Image blurring

Code: gaussianFilter.m

Example:

% setup filter
filt = gaussianFilter(31,5);
% read in an image
im = double(imread('lena.jpg'));
% filter image
imf = filter3(filt, im);
% show images (get the code for 'shownormimage.m'/'filter3.m' above)
shownormimage(im); pause
shownormimage(imf); pause

 

Adaptive Image Thresholding

The following pieces of code implement the adaptive thresholding methods of Otsu, Kapur and Rosin.
References to the original papers given below.
Code: otsuThreshold.m
Code: kapurThreshold.m
Code: rosinThreshold.m
Code: dootsuthreshold.m
Code: dokapurthreshold.m
Code: dorosinthreshold.m

Example usage:

% read in image 0000 of an image sequence
im1 = double(imread('image0000.jpg'));
% read in image 0025 of an image sequence
im2 = double(imread('image0025.jpg'));

% compute the difference image (Euclidian distance in RGB space)
dif = sqrt(sum((im1-im2).^2,3));

% compute difference image histogram
[h, hc] = hist(makelinear(dif), 256);

% perform thresholding to detect motion
To = hc(otsuThreshold(h));
Tk = hc(kapurThreshold(h));
Tr = hc(rosinThreshold(h));

% display results
shownormimage(dif >= To); title('Otsu result'); pause
shownormimage(dif >= Tk); title('Kapur result'); pause
shownormimage(dif >= Tr); title('Rosin result'); pause

% Alternatively, you can use
% shownormimage(dokapurthreshold(dif)); % ... etc

 
[Figures: the source images, and the thresholded difference images (from left to right): Rosin, Kapur and Otsu’s method.]

References:

  • N. Otsu, A threshold selection method from gray-level histogram, IEEE Trans on System Man Cybernetics 9 (1979), no. 1, 62-66.
  • J. Kapur, P. Sahoo, and A. Wong, A new method for graylevel picture thresholding using the entropy of the histogram, Computer Graphics and Image Processing 29 (1985), no. 3, 273-285
  • P. L. Rosin, Unimodal thresholding, Pattern Recognition 34 (2001), no. 11, 2083-2096
Image Descriptors and Image Similarity

Code to extract global descriptors for images and to compare these descriptors.
Can be used for image retrieval, tracking, etc.

Image colour histogram extraction: getPatchHist.m
Histogram comparison using the Bhattacharyya coefficient: compareHists.m
Image colour spatiogram extraction: getPatchSpatiogram_fast.m
Spatiogram comparison: compareSpatiograms_new_fast.m
MPEG-7 Edge Orientation Histogram extraction: edgeOrientationHistogram.m
(Note: code for histogram comparison can be used with both colour histograms and edge orientation histograms)
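
For reference (this is the standard definition, not anything specific to this code): for two normalised histograms h1 and h2 with B bins, the Bhattacharyya coefficient used in the comparison is

sim = Σ_{b=1..B} √( h1(b) · h2(b) )

which equals 1 for identical histograms and approaches 0 for histograms that do not overlap.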

Sample code:

% ---- Code to compare image histograms ----

% read in database image
im1 = double(imread('flower.jpg'));
% read in query image
im2 = double(imread('garden.jpg'));

% Both images are RGB colour images
% Extract an 8x8x8 colour histogram from each image
bins = 8;
h1 = getPatchHist(im1, bins);
h2 = getPatchHist(im2, bins);

% compare their histograms using the Bhattacharyya coefficient
sim = compareHists(h1,h2);

% 0 = very low similarity
% 0.9 = good similarity
% 1 = perfect similarity
disp(sprintf('Image histogram similarity = %f', sim));

% ---- Code to compare image SPATIOGRAMS ----

% Both images are RGB colour images
% Extract an 8x8x8 colour SPATIOGRAM from each image
bins = 8;
[h1,mu1,sigma1] = getPatchSpatiogram_fast(im1, bins);
[h2,mu2,sigma2] = getPatchSpatiogram_fast(im2, bins);

% compare their spatiograms using the Bhattacharyya coefficient
sim = compareSpatiograms_new_fast(h1,mu1,sigma1,h2,mu2,sigma2);

% 0 = very low similarity
% 0.9 = good similarity
% 1 = perfect similarity
disp(sprintf('Image spatiogram similarity = %f', sim));

 

Nearest-Neighbour search using KD-Trees

(coming soon…)

Hierarchical Clustering using K-means
Finding the nearest neighbour of a data point in high-dimensional space is known to be a hard problem [1].
This code clusters the data into a hierarchy of clusters and does a depth-first search to find the approximate nearest-neighbour to the query point. This technique was used in [2] to match 128-dimensional SIFT descriptors to their visual words.

K-means clustering: kmeans.m
Building a hierarchical tree of D-dimensional points: kmeanshierarchy.m
Approximate Nearest-Neighbour using a hierarchical tree: kmeanshierarchy_findpoint.m

Sample code:

D=2; % dimensionality
K=2; % branching factor
N=50; % size of each dataset
iters = 3; % number of clustering iterations

% setup datasets, first column is the ID
dataset1 = [ones(N,1) randn(N,D)];
dataset2 = [2*ones(N,1) 2*randn(N,2)+repmat([-5 3],[N 1])];
dataset3 = [3*ones(N,1) 1.5*randn(N,2)+repmat([5 3],[N 1])];
data = [dataset1; dataset2; dataset3];

% build the tree structure
% select columns 2 to D+1, column 1 stores the dataset-ID that the point came from.
[hierarchy] = kmeanshierarchy(data, 2, D+1, iters, K);

for test = 1:16

    % Generate a random point
    point = [rand*16-8 randn(1,D-1)];

    % plot the data
    cols = ['bo';'rx';'gv'];
    hold off
    for i = 1:size(data,1)
        plot(data(i,2),data(i,3),cols(data(i,1),:));
        hold on
    end
    hold off

    if (test==1)
        title('Data Clusters (1=Blue, 2=Red, 3=Green)')
        pause
    end

    hold on
    plot(point(1), point(2), 'ks');
    hold off
    title('New Point (shown as a black square)');
    pause

    % Find its approximate nearest-neighbour in the tree
    nn = kmeanshierarchy_findpoint(point, hierarchy, 2, D+1, K);
    nearest_neighbour = nn(2:(D+1));

    % Which set did it come from?
    set_id = nn(1);

    line([point(1) nearest_neighbour(1)],[point(2) nearest_neighbour(2)])
    title(sprintf('Approx Nearest Neighbour: Set %d', set_id))
    pause

end

 

 

[1] Piotr Indyk. Nearest neighbors in high-dimensional spaces.
Handbook of Discrete and Computational Geometry, chapter 39.
Editors: Jacob E. Goodman and Joseph O’Rourke, CRC Press, 2nd edition, 2004. CITESEER LINK

[2] David Nistér and Henrik Stewenius, Scalable Recognition with a Vocabulary Tree, CVPR’06 CITESEER LINK

Mutual Information Thresholding

The idea of selecting thresholds for data “adaptively” has been around for a long time. The standard paradigm is to observe some property of the data (such as histogram shape, entropy, spatial layout, etc) and to choose a threshold that maximises some proposed performance measure. Mutual Information (MI) Thresholding takes a different approach. Instead of examining a single property of the data, it looks instead at how choices of threshold for two sources of data will affect how well they “agree” with each other.

More formally: Given two sources of data that have uncorrelated noise, choose a threshold for each of the sources, such that the mutual information between the resulting binary signals is maximised. This search through threshold-space can be done very efficiently using integral-images.
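
For reference, the mutual information being maximised is the standard quantity for the two thresholded (binary) signals A and B:

I(A;B) = Σ_{a∈{0,1}} Σ_{b∈{0,1}} p(a,b) · log( p(a,b) / ( p(a) · p(b) ) )

where the joint and marginal probabilities are estimated from how often the two binary images agree and disagree.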

Matlab Code Download: ZIP FILE

Sample code:

N=100;
signal = 30;
noise = 5;
im = zeros(N,N);
im(round(N/3:N/2),round(N/3:N/2)) = signal;
im1 = abs(im + noise * randn(N,N));
im2 = abs(im + 1.5*noise * randn(N,N));
[T1, T2, mi, imT1, imT2, imF, quality, miscore, mii1, mii2] = mutualinfoThreshold(im1, im2);
subplot(1,2,1); shownormimage2(im1);
subplot(1,2,2); shownormimage2(im2);
pause
subplot(1,2,1); shownormimage2(imT1); title(sprintf('Threshold = %f', T1));
subplot(1,2,2); shownormimage2(imT2); title(sprintf('Threshold = %f', T2));
pause
subplot(1,1,1);
shownormimage2(mi); title('Mutual Information Surface');
pause

References:

  • C. Ó Conaire, N. O’Connor, E. Cooke and A. Smeaton, Detection Thresholding using Mutual Information, VISAPP 2006
    PDF Available
  • C. Ó Conaire and N. O’Connor, Unsupervised feature selection for detection using mutual information thresholding, WIAMIS 2008

 

Posted in C, Computer Languages, Computer Vision | Tagged: | Leave a Comment »

Step By Step Installing Visual Studio Professional 2012

Posted by Hemprasad Y. Badgujar on January 5, 2015


1. Mount the .iso file and run “Setup.exe”. Agree to the terms and conditions and click the “Next” button.

2. Select the required features from the list and click the “Install” button. The installation takes around 7.90 GB of disk space if all features are selected.

3. Setup will create a “System Restore Point” before starting the installation process.

4. Once that is done, the installation process will start.

5. Partway through, setup will ask you to restart the system. Click the “Restart” button to restart.

6. Setup will resume once the system has restarted.

7. The installation will now take some time, around 20-30 minutes.

8. Once setup is complete, you can launch Visual Studio.

Posted in Computer Languages, Computer Network & Security, Computer Softwares, CUDA, GPU (CUDA), Installation, PARALLEL, Windows OS | Tagged: | 1 Comment »

 