Something More for Research

Explorer of Research #HEMBAD

Archive for the ‘Computer Vision’ Category

OpenCV 3.1 with CUDA, Qt, and Python: Complete Installation on Windows with Extra Modules

Posted by Hemprasad Y. Badgujar on May 13, 2016



The description here was tested on Windows 8.1 Pro. Nevertheless, it should also work on any other relatively modern version of Windows OS. If you encounter errors after following the steps described below, feel free to contact me.

Note: To use the OpenCV library you have two options: installation using the pre-built libraries, or installation by building your own libraries from the source files. While the first one is easier to complete, it only works if you are coding with the latest Microsoft Visual Studio IDE and does not take advantage of the most advanced technologies integrated into the library.

I am going to skip installation using the pre-built libraries, as it is easy even for a new user. So let's work on installation by building your own libraries from the source files. (If you are building your own libraries, you can take the source files from the OpenCV Git repository.) Building the OpenCV library from scratch requires a couple of tools installed beforehand.

Prerequisite Tools

Step By Step Prerequisites Setup

  • IDE: Microsoft Visual Studio. However, you can use any other IDE that has a valid C/C++ compiler.

    Installing by downloading from the product website: start installing Visual Studio by going to Visual Studio Downloads on the MSDN website and choosing the edition you want to download. Here we are going to use Visual Studio 2012 / ISO keys with Visual Studio 2012 Update 4 / ISO, and Step By Step Installing Visual Studio Professional 2012.
  • Make tool: CMake is a cross-platform, open-source build system.

    Download and install the latest stable binary version; here we are going to use CMake 3. Choose the Windows installer (cmake-x.y.z-win32.exe) and install it. Letting the CMake installer add itself to your path will make things easier, but is not required.

  • Download the OpenCV source files via Git/SourceForge: use TortoiseGit, or download the source files from the page on SourceForge.

    The Open Source Computer Vision Library has >2500 algorithms, extensive documentation and sample code for real-time computer vision. It works on Windows, Linux, Mac OS X, Android and iOS.

  • Python and Python libraries: installation notes

    • It is recommended to uninstall any other Python distribution before installing Python(x,y)
    • You may update your Python(x,y) installation via individual package installers which are updated more frequently — see the plugins page
    • Please use the Issues page to request new features or report unknown bugs
    • Python(x,y) can be easily extended with other Python libraries because Python(x,y) is compatible with all Python modules installers: distutils installers (.exe), Python eggs (.egg), and all other NSIS (.exe) or MSI (.msi) setups which were built for Python 2.7 official distribution – see the plugins page for customizing options
    • Another Python(x,y) exclusive feature: all packages are optional (i.e. install only what you need)
    • Basemap users (data plotting on map projections): please see the AdditionalPlugins
  • Sphinx is a Python documentation generator.

    After installation, you should add the Python executable directories to the PATH environment variable so that Python and package commands such as sphinx-build can be run easily from the Command Prompt:

    1. Right-click the "My Computer" icon and choose "Properties".

    2. Click the "Environment Variables" button under the "Advanced" tab.

    3. If "Path" (or "PATH") is already an entry in the "System variables" list, edit it. If it is not present, add a new variable called "PATH".

    4. Add these paths, separating entries by ";":

      • C:\Python27 – this folder contains the main Python executable
      • C:\Python27\Scripts – this folder will contain executables added by Python packages installed with pip or easy_install (see below)

      This is for Python 2.7. If you use another version of Python or installed to a non-default location, change the digits "27" accordingly.

    5. Now run the Command Prompt. After the command prompt window appears, type python and press Enter. If the Python installation was successful, the installed Python version is printed and you are greeted by the >>> prompt. Type Ctrl+Z and Enter to quit.

      • Install the pip command

      Python has a very useful pip command which can download and install third-party libraries with a single command. It is provided by the Python Packaging Authority (PyPA): https://groups.google.com/forum/#!forum/pypa-dev

      To install pip, download https://bootstrap.pypa.io/get-pip.py and save it somewhere. After download, invoke the command prompt, go to the directory with get-pip.py and run this command:

      C:\> python get-pip.py
      

      Now the pip command is installed. From there we can go to the Sphinx install.

      Note: pip is included in the official Python installers since Python 3.4.0 and Python 2.7.9.
  • Installing Sphinx with pip

    If you finished the installation of pip, type this line in the command prompt:

    C:\> pip install sphinx
    

    After installation, type sphinx-build -h on the command prompt. If everything worked fine, you will get a Sphinx version number and a list of options for this command.

    That's it. Installation is over. Head to First Steps with Sphinx to make a Sphinx project.


  • Install the easy_install command

    Python has a very useful easy_install command which can download and install 3rd-party libraries with a single command. This is provided by the “setuptools” project: https://pypi.python.org/pypi/setuptools.

    To install setuptools, download https://bitbucket.org/pypa/setuptools/raw/bootstrap/ez_setup.py and save it somewhere. After download, invoke the command prompt, go to the directory with ez_setup.py and run this command:

    C:\> python ez_setup.py
    

    Now setuptools and its easy_install command are installed. From there we can go to the Sphinx install.

    Installing Sphinx with easy_install

    If you finished the installation of setuptools, type this line in the command prompt:
    C:\> easy_install sphinx
    

    After installation, type sphinx-build on the command prompt. If everything worked fine, you will get a Sphinx version number and a list of options for this command.

  • Numpy is a scientific computing package for Python. Required for the Python interface.

Try the (unofficial) binaries in this site: http://www.lfd.uci.edu/~gohlke/pythonlibs/#numpy
You can get NumPy 1.6.2 x64, with or without Intel MKL libs, for Python 2.7.

I suggest WinPython, a Python 2.7 distribution for Windows with both 32- and 64-bit versions.

It is also worth considering the Anaconda Python distribution. http://continuum.io/downloads

  • NumPy: required for the Python interface. The Python installation above already includes the NumPy and SciPy libraries.

  • Intel® Threading Building Blocks (TBB) is used inside OpenCV for parallel code snippets. Download here.

    • Download TBB
      • Go to the TBB download page to download the open-source binary releases. I chose the Commercial Aligned Release, because it has the most stable releases; specifically, I downloaded tbb43_20141204os (TBB 4.3 Update 3) for Windows. The release has the header files as well as the import library and DLL files prebuilt for Microsoft Visual C++ on both x86 (IA32) and x64 (Intel 64). If you need the source code of TBB, you can try the stable or development releases.
    • Install TBB
      • Extract the files in the zip file to a local directory, for example C:\TBB. You should find the versioned folder (tbb43_20141204os for the release above) under it. This is the installation directory, and doc, examples, include, etc. should be directly under it.
      • Set a Windows environment variable TBB_INSTALL_DIR to the above directory, e.g., C:\TBB\tbb43_20141204os.
    • Develop with TBB
      • Add $(TBB_INSTALL_DIR)\include to your C++ project's additional include directories.
      • Add $(TBB_INSTALL_DIR)\<arch>\<compiler>\lib (e.g., $(TBB_INSTALL_DIR)\ia32\vc11\lib for 32-bit Visual Studio 2012) to your project's additional library directories.
      • Add tbb.lib (Release) or tbb_debug.lib (Debug) to your project's additional dependencies.
      • Write your C++ code to use TBB; see the sketch below as an example.
    • Deploy with TBB
      • The TBB runtime is in the TBB DLLs (tbb.dll/tbbmalloc.dll/tbbmalloc_proxy.dll for Release, tbb_debug.dll/tbbmalloc_debug.dll/tbbmalloc_proxy_debug.dll for Debug). They can be found in $(TBB_INSTALL_DIR)\<arch>\<compiler>\bin.
      • Your executable should have these DLLs in the same folder to run.
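
    As referenced in the Develop step above, here is a minimal sketch (not from the original post) that squares an array in parallel with tbb::parallel_for:

      #include <iostream>
      #include <tbb/parallel_for.h>
      #include <tbb/blocked_range.h>

      int main() {
          const int n = 1000;
          float data[1000];
          for (int i = 0; i < n; ++i) data[i] = static_cast<float>(i);

          // TBB splits the range into chunks and processes them on worker threads
          tbb::parallel_for(tbb::blocked_range<int>(0, n),
              [&](const tbb::blocked_range<int>& r) {
                  for (int i = r.begin(); i != r.end(); ++i)
                      data[i] = data[i] * data[i];
              });

          std::cout << "data[10] = " << data[10] << std::endl;  // prints 100
          return 0;
      }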

    Intel® Integrated Performance Primitives (IPP) may be used to improve the performance of color conversion. (Paid)

    Intel Parallel Studio XE 2015 – Cluster Edition includes everything in the Professional edition (compilers, performance libraries, parallel models, performance profiler, threading design/prototyping, and memory & thread debugger). It adds a MPI cluster communications library, along with MPI error checking and tuning to design, build, debug and tune fast parallel code that includes MPI.

  • Eigen is a C++ template library for linear algebra.

     How to "install" Eigen

    In order to use Eigen, you just need to download and extract Eigen's source code (see the wiki for download instructions). In fact, the header files in the Eigen subdirectory are the only files required to compile programs using Eigen. The header files are the same for all platforms. It is not necessary to use CMake or install anything.

     A simple first program

    Here is a rather simple program to get you started.

    #include <iostream>
    #include <Eigen/Dense>

    using Eigen::MatrixXd;

    int main()
    {
      MatrixXd m(2,2);
      m(0,0) = 3;
      m(1,0) = 2.5;
      m(0,1) = -1;
      m(1,1) = m(1,0) + m(0,1);
      std::cout << m << std::endl;
    }
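
    Since Eigen is header-only, compiling the program above only requires adding the directory where the Eigen sources were extracted to the include path (the path below is an example):

    C:\> cl /EHsc /I C:\eigen-3.2 simple_program.cpp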
  • Installing CUDA Development Tools

    The setup of CUDA development tools on a system running the appropriate version of Windows consists of a few simple steps:

    • Verify the system has a CUDA-capable GPU.
    • Download the NVIDIA CUDA Toolkit.
    • Install the NVIDIA CUDA Toolkit.
    • Test that the installed software runs correctly and communicates with the hardware.
    • CUDA Toolkit will allow you to use the power inside your GPU. We are going to use the CUDA 7.5 Toolkit.

      To verify that your GPU is CUDA-capable, open the Control Panel (Start > Control Panel) and double-click on System. In the System Properties window that opens, click the Hardware tab, then Device Manager. Expand the Display adapters entry. There you will find the vendor name and model of your graphics card. If it is an NVIDIA card listed at http://developer.nvidia.com/cuda-gpus, your GPU is CUDA-capable.

      The Release Notes for the CUDA Toolkit also contain a list of supported products.

       Download the NVIDIA CUDA Toolkit

      The NVIDIA CUDA Toolkit is available at http://developer.nvidia.com/cuda-downloads.

      Choose the platform you are using and download the NVIDIA CUDA Toolkit

      The CUDA Toolkit contains the CUDA driver and tools needed to create, build and run a CUDA application as well as libraries, header files, CUDA samples source code, and other resources.

      Download Verification

      The download can be verified by comparing the MD5 checksum posted at http://developer.nvidia.com/cuda-downloads/checksums with that of the downloaded file. If either of the checksums differ, the downloaded file is corrupt and needs to be downloaded again.

      To calculate the MD5 checksum of the downloaded file, follow the instructions at http://support.microsoft.com/kb/889768.

      Install the CUDA Software

      Before installing the toolkit, you should read the Release Notes, as they provide details on installation and software functionality.

      Note: The driver and toolkit must be installed for CUDA to function. If you have not installed a stand-alone driver, install the driver from the NVIDIA CUDA Toolkit.

      Graphical Installation

      Install the CUDA Software by executing the CUDA installer and following the on-screen prompts.

      Silent Installation

      Alternatively, the installer can be executed in silent mode by running the package with the -s flag. Additional flags can be passed to install specific subpackages instead of all packages. The allowed subpackage names (shown here for CUDA 6.5; adjust the version suffix to your toolkit, e.g. 7.5) are: CUDAToolkit_6.5, CUDASamples_6.5, CUDAVisualStudioIntegration_6.5, and Display.Driver. For example, to install only the driver and the toolkit components:

      .exe -s CUDAToolkit_6.5 Display.Driver

      This will drastically improve performance for some algorithms (e.g. the HOG descriptor). Getting more and more of our algorithms to work on the GPUs is a constant effort of the OpenCV team.
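
      To confirm that the installed toolkit runs and communicates with the hardware (the fourth step above), a minimal check program might look like the sketch below (not part of the toolkit; compile it with nvcc verify_cuda.cu):

      // verify_cuda.cu - minimal check that the CUDA runtime sees a GPU
      #include <cstdio>
      #include <cuda_runtime.h>

      int main() {
          int count = 0;
          cudaError_t err = cudaGetDeviceCount(&count);
          if (err != cudaSuccess) {
              printf("CUDA error: %s\n", cudaGetErrorString(err));
              return 1;
          }
          printf("Found %d CUDA-capable device(s)\n", count);
          for (int i = 0; i < count; ++i) {
              cudaDeviceProp prop;
              cudaGetDeviceProperties(&prop, i);
              printf("Device %d: %s, compute capability %d.%d\n",
                     i, prop.name, prop.major, prop.minor);
          }
          return 0;
      }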

  • JRE: Java Runtime Environment

    Installing Ant

    The binary distribution of Ant consists of the following directory layout:

      ant
       +--- README, LICENSE, fetch.xml, other text files. //basic information
       +--- bin  // contains launcher scripts
       |
       +--- lib  // contains Ant jars plus necessary dependencies
       |
       +--- docs // contains documentation
       |      |
       |      +--- images  // various logos for html documentation
       |      |
       |      +--- manual  // Ant documentation (a must read ;-)
       |
       +--- etc // contains xsl goodies to:
                //   - create an enhanced report from xml output of various tasks.
                //   - migrate your build files and get rid of 'deprecated' warning
                //   - ... and more ;-)
    

    Only the bin and lib directories are required to run Ant. To install Ant, choose a directory and copy the distribution files there. This directory will be known as ANT_HOME.

Before you can run Ant there is some additional set up you will need to do unless you are installing the RPM version from jpackage.org:

  • Add the bin directory to your path.
  • Set the ANT_HOME environment variable to the directory where you installed Ant. On some operating systems, Ant's startup scripts can guess ANT_HOME (Unix dialects and Windows NT/2000), but it is better not to rely on this behavior.
  • Optionally, set the JAVA_HOME environment variable (see the Advanced section below). This should be set to the directory where your JDK is installed.

Operating System-specific instructions for doing this from the command line are in the Windows, Linux/Unix (bash), and Linux/Unix (csh) sections. Note that using this method, the settings will only be valid for the command line session you run them in. Note: Do not install Ant’s ant.jar file into the lib/ext directory of the JDK/JRE. Ant is an application, whilst the extension directory is intended for JDK extensions. In particular there are security restrictions on the classes which may be loaded by an extension.

Windows Note:
The ant.bat script makes use of three environment variables – ANT_HOME, CLASSPATH and JAVA_HOME. Ensure that ANT_HOME and JAVA_HOME variables are set, and that they do not have quotes (either ‘ or “) and they do not end with \ or with /. CLASSPATH should be unset or empty.
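
For example, the variables can be set for the current command-line session as follows (the paths are examples; adjust them to your install locations):

C:\> set ANT_HOME=C:\apache-ant
C:\> set JAVA_HOME=C:\Program Files\Java\jdk1.7.0
C:\> set PATH=%PATH%;%ANT_HOME%\bin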

Check Installation

You can check the basic installation by opening a new shell and typing ant. You should get a message like this:

Buildfile: build.xml does not exist!
Build failed

So Ant works. This message appears because you need to write an individual buildfile for your project. With ant -version you should get an output like:

Apache Ant(TM) version 1.9.2 compiled on July 8 2013

If this does not work ensure your environment variables are set right. They must resolve to:

  • required: %ANT_HOME%\bin\ant.bat
  • optional: %JAVA_HOME%\bin\java.exe
  • required: %PATH%=…maybe-other-entries…;%ANT_HOME%\bin;…maybe-other-entries

ANT_HOME is used by the launcher script for finding the libraries. JAVA_HOME is used by the launcher for finding the JDK/JRE to use. (The JDK is recommended, as some tasks require the Java tools.) If not set, the launcher tries to find one via the %PATH% environment variable. PATH is set for user convenience: with it set you can just start ant instead of always typing the/complete/path/to/your/ant/installation/bin/ant.

Posted in GPU (CUDA), Image / Video Filters, Mixed, OpenCV, OpenCV | Leave a Comment »

Databases for Multi-camera , Network Camera , E-Surveillace

Posted by Hemprasad Y. Badgujar on February 18, 2016


Multi-view, Multi-Class Dataset: pedestrians, cars and buses

This dataset consists of 23 minutes and 57 seconds of synchronized frames taken at 25fps from 6 different calibrated DV cameras.
One camera was placed about 2 m above the ground, two others were located at first-floor height, and the rest on a second floor, covering an area of 22 m x 22 m.
The sequence was recorded at the EPFL university campus, where there is a road with a bus stop, parking slots for cars, and a pedestrian crossing.

Download

Ground truth images
Ground truth annotations

References

The dataset on this page has been used for our multiview object pose estimation algorithm described in the following paper:

G. Roig, X. Boix, H. Ben Shitrit and P. Fua, Conditional Random Fields for Multi-Camera Object Detection, ICCV 2011.

Multi-camera pedestrians video

“EPFL” data set: Multi-camera Pedestrian Videos

If you use this data set in your people tracking results, please cite one of the references below.

On this page you can download a few multi-camera sequences that we acquired for developing and testing our people detection and tracking framework. All of the sequences feature several synchronised video streams filming the same area under different angles. All cameras are located about 2 meters from the ground. All pedestrians on the sequences are members of our laboratory, so there is no privacy issue. For the Basketball sequence, we received consent from the team.

Laboratory sequences

These sequences were shot inside our laboratory by 4 cameras. Four (respectively six) people are sequentially entering the room and walking around for 2 1/2 minutes. The frame rate is 25 fps and the videos are encoded using MPEG-4 codec.

[Camera 0] [Camera 1] [Camera 2] [Camera 3]

Calibration file for the 4 people indoor sequence.

[Camera 0] [Camera 1] [Camera 2] [Camera 3]

Calibration file for the 6 people indoor sequence.

Campus sequences

These two sequences called campus were shot outside on our campus with 3 DV cameras. Up to four people are simultaneously walking in front of them. The white line on the screenshots shows the limits of the area that we defined to obtain our tracking results. The frame rate is 25 fps and the videos are encoded using Indeo 5 codec.

[Seq.1, cam. 0] [Seq.1, cam. 1] [Seq.1, cam. 2]
[Seq.2, cam. 0] [Seq.2, cam. 1] [Seq.2, cam. 2]

Calibration file for the two above outdoor scenes.

Terrace sequences

The sequences below, called terrace, were shot outside our building on a terrace. Up to 7 people evolve in front of 4 DV cameras, for around 3 1/2 minutes. The frame rate is 25 fps and the videos are encoded using Indeo 5 codec.

[Seq.1, cam. 0] [Seq.1, cam. 1] [Seq.1, cam. 2] [Seq.1, cam. 3]
[Seq.2, cam. 0] [Seq.2, cam. 1] [Seq.2, cam. 2] [Seq.2, cam. 3]

Calibration file for the terrace scene.

Passageway sequence

This sequence dubbed passageway was filmed in an underground passageway to a train station. It was acquired with 4 DV cameras at 25 fps, and is encoded with Indeo 5. It is a rather difficult sequence due to the poor lighting.

[Seq.1, cam. 0] [Seq.1, cam. 1] [Seq.1, cam. 2] [Seq.1, cam. 3]

Calibration file for the passageway scene.

Basketball sequence

This sequence was filmed at a training session of a local basketball team. It was acquired with 4 DV cameras at 25 fps, and is encoded with Indeo 5.

[Seq.1, cam. 0] [Seq.1, cam. 1] [Seq.1, cam. 2] [Seq.1, cam. 3]

Calibration file for the basketball scene.

Camera calibration

POM only needs a simple calibration consisting of two homographies per camera view, which project the ground plane in top view to the ground plane in camera views and to the head plane in camera views (a plane parallel to the ground plane but located 1.75 m higher). Therefore, the calibration files given above consist of 2 homographies per camera. In degenerate cases where the camera is located inside the head plane, this one will project to a horizontal line in the camera image. When this happens, we do not provide a homography for the head plane, but instead we give the height of the line in which the head plane will project. This is expressed in percentage of the image height, starting from the top.

The homographies given in the calibration files project points in the camera views to their corresponding location on the top view of the ground plane, that is

H * X_image = X_topview .

We have also computed the camera calibration using the Tsai calibration toolkit for some of our sequences. We also make them available for download. They consist of an XML file per camera view, containing the standard Tsai calibration parameters. Note that the image size used for calibration might differ from the size of the video sequences. In this case, the image coordinates obtained with the calibration should be normalized to the size of the video.

Ground truth

We have created ground truth data for some of the video sequences presented above, by locating and identifying the people in frames at a regular interval.

To use these ground truth files, you must rely on the same calibration with the exact same parameters that we used when generating the data. We call top view the rectangular area of the ground plane in which we perform tracking.

This area is of dimensions tv_width x tv_height and has top left coordinate (tv_origin_x, tv_origin_y). Besides, we call grid our discretization of the top view area into grid_width x grid_height cells. An example is illustrated by the figure below, in which the grid has dimensions 5 x 4.

The people's positions in the ground truth are expressed in discrete grid coordinates. In order to be projected into the images with the homographies or the Tsai calibration, these grid coordinates need to be translated into top view coordinates. We provide below a simple C function that performs this translation. This function takes the following parameters:

  • pos : the person position coming from the ground truth file
  • grid_width, grid_height : the grid dimension
  • tv_origin_x, tv_origin_y : the top left corner of the top view
  • tv_width, tv_height : the top view dimension
  • tv_x, tv_y : the top view coordinates, i.e. the output of the function
  void grid_to_tv(int pos, int grid_width, int grid_height,
                  float tv_origin_x, float tv_origin_y, float tv_width,
                  float tv_height, float &tv_x, float &tv_y) {
      tv_x = ( (pos % grid_width) + 0.5 ) * (tv_width / grid_width) + tv_origin_x;
      tv_y = ( (pos / grid_width) + 0.5 ) * (tv_height / grid_height) + tv_origin_y;
  }

The table below summarizes the aforementioned parameters for the ground truth files we provide. Note that the ground truth for the terrace sequence has been generated with the Tsai calibration provided in the table. You will need to use this one to get a proper bounding box alignment.

Ground Truth        | Grid dimensions | Top view origin | Top view dimensions | Calibration
6-people laboratory | 56 x 56         | (0, 0)          | 358 x 360           | file
terrace, seq. 1     | 30 x 44         | (-500, -1,500)  | 7,500 x 11,000      | file (Tsai)
passageway, seq. 1  | 40 x 99         | (0, 38.48)      | 155 x 381           | file

The format of the ground truth file is the following:

  <number of frames> <number of people> <grid width> <grid height> <step size> <first frame> <last frame>
  <pos> <pos> <pos> ...
  <pos> <pos> <pos> ...
  ...

where <number of frames> is the total number of frames, <number of people> is the number of people for which we have produced a ground truth, <grid width> and <grid height> are the ground plane grid dimensions, <step size> is the frame interval between two ground truth labels (i.e. if set to 25, there is a label once every 25 frames), and <first frame> and <last frame> are the first and last frames for which a label has been entered.

After the header, every line represents the positions of people at a given frame. <pos> is the position of a person in the grid. It is normally an integer >= 0, but can be -1 if undefined (i.e. no label has been produced for this frame) or -2 if the person is currently out of the grid.
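
As an illustration of this format, the sketch below (the file name is hypothetical) reads a ground truth file and converts every defined position to top view coordinates with the grid_to_tv() function above, using the terrace, seq. 1 parameters from the table:

  #include <fstream>
  #include <iostream>

  // same translation function as given above
  void grid_to_tv(int pos, int grid_width, int grid_height,
                  float tv_origin_x, float tv_origin_y, float tv_width,
                  float tv_height, float &tv_x, float &tv_y) {
      tv_x = ( (pos % grid_width) + 0.5 ) * (tv_width / grid_width) + tv_origin_x;
      tv_y = ( (pos / grid_width) + 0.5 ) * (tv_height / grid_height) + tv_origin_y;
  }

  int main() {
      std::ifstream in("terrace1-gt.txt");   // hypothetical file name
      int n_frames, n_people, grid_w, grid_h, step, first, last;
      in >> n_frames >> n_people >> grid_w >> grid_h >> step >> first >> last;

      int pos;
      float x, y;
      while (in >> pos) {
          if (pos < 0) continue;             // -1: undefined, -2: out of the grid
          grid_to_tv(pos, grid_w, grid_h, -500.f, -1500.f, 7500.f, 11000.f, x, y);
          std::cout << "top view: (" << x << ", " << y << ")\n";
      }
      return 0;
  }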

References

Multiple Object Tracking using K-Shortest Paths Optimization

Jérôme Berclaz, François Fleuret, Engin Türetken, Pascal Fua
IEEE Transactions on Pattern Analysis and Machine Intelligence
2011
pdf | show bibtex

Multi-Camera People Tracking with a Probabilistic Occupancy Map

François Fleuret, Jérôme Berclaz, Richard Lengagne, Pascal Fua
IEEE Transactions on Pattern Analysis and Machine Intelligence
pdf | show bibtex

MuHAVi: Multicamera Human Action Video Data

including selected action sequences with

MAS: Manually Annotated Silhouette Data

for the evaluation of human action recognition methods

Figure 1. The top view of the configuration of 8 cameras used to capture the actions in the blue action zone (which is marked with white tapes on the scene floor).

camera symbol | camera name
V1            | Camera_1
V2            | Camera_2
V3            | Camera_3
V4            | Camera_4
V5            | Camera_5
V6            | Camera_6
V7            | Camera_7
V8            | Camera_8

Table 1. Camera view names appearing in the MuHAVi data folders and the corresponding symbols used in Fig. 1.

 

In the table below, you can click on the links to download the data (JPG images) for the corresponding action.

Important: we noted that some earlier versions of MS Internet Explorer could not download files over 2GB in size, so we recommend using an alternative browser such as Firefox or Chrome.

Each tar file contains 7 folders corresponding to 7 actors (Person1 to Person7) each of which contains 8 folders corresponding to 8 cameras (Camera_1 to Camera_8). Image frames corresponding to every combination of action/actor/camera are named with image frame numbers starting from 00000001.jpg for simplicity. The video frame rate is 25 frames per second and the resolution of image frames (except for Camera_8) is 720 x 576 Pixels (columns x rows). The image resolution is 704 x 576 for Camera_8.

action class | action name       | size
C1           | WalkTurnBack      | 2.6GB
C2           | RunStop           | 2.5GB
C3           | Punch             | 3.0GB
C4           | Kick              | 3.4GB
C5           | ShotGunCollapse   | 4.3GB
C6           | PullHeavyObject   | 4.5GB
C7           | PickupThrowObject | 3.0GB
C8           | WalkFall          | 3.9GB
C9           | LookInCar         | 4.6GB
C10          | CrawlOnKnees      | 3.4GB
C11          | WaveArms          | 2.2GB
C12          | DrawGraffiti      | 2.7GB
C13          | JumpOverFence     | 4.4GB
C14          | DrunkWalk         | 4.0GB
C15          | ClimbLadder       | 2.1GB
C16          | SmashObject       | 3.3GB
C17          | JumpOverGap       | 2.6GB

MIT Trajectory Data Set – Multiple Camera Views

Download

The MIT trajectory data set is for research on activity analysis in multiple single camera views, using the trajectories of objects as features. Object tracking is based on background subtraction using an adaptive Gaussian mixture model. There are four camera views in total. Trajectories in different camera views have been synchronized. The data can be downloaded from the following link:

MIT trajectory data set

Background image

Reference

Please cite as:

X. Wang, K. Tieu and E. Grimson, Correspondence-Free Activity Analysis and Scene Modeling in Multiple Camera Views, IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), Vol. 32, pp. 56-71, 2010.

Details

The MIT traffic data set is for research on activity analysis and crowded scenes. It includes a 90-minute traffic video sequence recorded by a stationary camera. The size of the scene is 720 by 480. It is divided into 20 clips, which can be downloaded from the following links.

Ground Truth

In order to evaluate the performance of human detection on this data set, the ground truth of pedestrians in some sampled frames has been manually labeled. It can be downloaded below. A readme file provides instructions on how to use it.
Ground truth of pedestrians

References

  1. Unsupervised Activity Perception in Crowded and Complicated scenes Using Hierarchical Bayesian Models
    X. Wang, X. Ma and E. Grimson
    IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), Vol. 31, pp. 539-555, 2009
  2. Automatic Adaptation of a Generic Pedestrian Detector to a Specific Traffic Scene
    M. Wang and X. Wang
    IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), 2011

Description

This dataset is presented in our CVPR 2015 paper,
Linjie Yang, Ping Luo, Chen Change Loy, Xiaoou Tang. A Large-Scale Car Dataset for Fine-Grained Categorization and Verification, In Computer Vision and Pattern Recognition (CVPR), 2015. PDF

The Comprehensive Cars (CompCars) dataset contains data from two scenarios, including images from web-nature and surveillance-nature. The web-nature data contains 163 car makes with 1,716 car models. There are a total of 136,726 images capturing the entire cars and 27,618 images capturing the car parts. The full car images are labeled with bounding boxes and viewpoints. Each car model is labeled with five attributes, including maximum speed, displacement, number of doors, number of seats, and type of car. The surveillance-nature data contains 50,000 car images captured in the front view. Please refer to our paper for the details.

The dataset is well prepared for the following computer vision tasks:

  • Fine-grained classification
  • Attribute prediction
  • Car model verification

The train/test subsets of these tasks introduced in our paper are included in the dataset. Researchers are also welcome to utilize it for any other tasks such as image ranking, multi-task learning, and 3D reconstruction.

Note

  1. You need to complete the release agreement form to download the dataset. Please see below.
  2. The CompCars database is available for non-commercial research purposes only.
  3. All images of the CompCars database were obtained from the Internet and are not the property of MMLAB, The Chinese University of Hong Kong. MMLAB is not responsible for the content nor the meaning of these images.
  4. You agree not to reproduce, duplicate, copy, sell, trade, resell or exploit for any commercial purposes, any portion of the images and any portion of derived data.
  5. You agree not to further copy, publish or distribute any portion of the CompCars database. However, making copies of the database for internal use at a single site within the same organization is allowed.
  6. The MMLAB reserves the right to terminate your access to the database at any time.
  7. All submitted papers or any publicly available text using the CompCars database must cite the following paper:
    Linjie Yang, Ping Luo, Chen Change Loy, Xiaoou Tang. A Large-Scale Car Dataset for Fine-Grained Categorization and Verification, In Computer Vision and Pattern Recognition (CVPR), 2015.

Download instructions

Download the CompCars dataset Release Agreement, read it carefully, and complete it appropriately. Note that the agreement should be signed by a full-time staff member (that is, a student is not acceptable). Then scan the signed agreement and send it to Mr. Linjie Yang (yl012(at)ie.cuhk.edu.hk), with a cc to Chen Change Loy (ccloy(at)ie.cuhk.edu.hk). We will verify your request and contact you on how to download the database.

Stanford Cars Dataset

Overview

       The Cars dataset contains 16,185 images of 196 classes of cars. The data is split into 8,144 training images and 8,041 testing images, where each class has been split roughly in a 50-50 split. Classes are typically at the level of Make, Model, Year, e.g. 2012 Tesla Model S or 2012 BMW M3 coupe.

Download

       Training images can be downloaded here.
Testing images can be downloaded here.
A devkit, including class labels for training images and bounding boxes for all images, can be downloaded here.
If you’re interested in the BMW-10 dataset, you can get that here.

Update: For ease of development, a tar of all images is available here and all bounding boxes and labels for both training and test are available here. If you were using the evaluation server before (which is still running), you can use test annotations here to evaluate yourself without using the server.

Evaluation

       An evaluation server has been set up here. Instructions for the submission format are included in the devkit. This dataset was featured as part of FGComp 2013, and competition results are directly comparable to results obtained from evaluating on images here.

Citation

       If you use this dataset, please cite the following paper:

3D Object Representations for Fine-Grained Categorization
Jonathan Krause, Michael Stark, Jia Deng, Li Fei-Fei
4th IEEE Workshop on 3D Representation and Recognition, at ICCV 2013 (3dRR-13). Sydney, Australia. Dec. 8, 2013.
[pdf]   [BibTex]   [slides]

Note that the dataset, as released, has 196 categories, one less than in the paper, as it has been cleaned up slightly since publication. Numbers should be more or less comparable, though.

The HDA dataset is a multi-camera, high-resolution image sequence dataset for research on high-definition surveillance. 18 cameras (including VGA, HD and Full HD resolutions) were recorded simultaneously for 30 minutes in a typical indoor office scenario at a busy hour (lunch time), involving more than 80 persons. In the current release (v1.1), 13 cameras have been fully labeled.

 

The venue spans three floors of the Institute for Systems and Robotics (ISR-Lisbon) facilities. The following pictures show the placement of the cameras. The 18 recorded cameras are identified with a small red circle. The 13 cameras with a coloured view field have been fully labeled in the current release (v1.1).

 

Each frame is labeled with the bounding boxes tightly adjusted to the visible body of the persons, the unique identification of each person, and flag bits indicating occlusion and crowd:

  • The bounding box is drawn so that it completely and tightly encloses the person.
  • If the person is occluded by something (except image boundaries), the bounding box is drawn by estimating the whole body extent.
  • People partially outside the image boundaries have their bounding boxes cropped to the image limits. Partially occluded people and people partially outside the image boundaries are marked as 'occluded'.
  • A unique ID is associated to each person, e.g., ‘person01’. In case of identity doubt, the special ID ‘personUnk’ is used.
  • Groups of people that are impossible to label individually are labelled collectively as ‘crowd’. People in front of a ’crowd’ area are labeled normally.

The following figures show examples of labeled frames: (a) an unoccluded person; (b) two occluded people; (c) a crowd with three people in front.

 

Data formats:

For each camera we provide the .jpg frames sequentially numbered and a .txt file containing the annotations according to the “video bounding box” (vbb) format defined in the Caltech Pedestrian Detection Database. Also on this site there are tools to visualise the annotations overlapped on the image frames.

 

Some statistics:

Labeled Sequences: 13

Number of Frames: 75207

Number of Bounding Boxes: 64028

Number of Persons: 85

 

Repository of Results:

We maintain a public repository of re-identification results in this dataset. Send us your CMC curve to be uploaded  (alex at isr ist utl pt).
Click here to see the full list and detailed experiments.


Posted in Computer Network & Security, Computer Research, Computer Vision, Image Processing, Multimedia | Leave a Comment »

Bilateral Filtering

Posted by Hemprasad Y. Badgujar on September 14, 2015


Popular Filters

When smoothing or blurring images (the most common goal of smoothing is to reduce noise), we can use diverse linear filters, because linear filters are easy to implement and relatively fast. The most used ones are the homogeneous filter, the Gaussian filter, and the median filter.

When performing a linear filter, the output pixel value g(i,j) is determined as a weighted sum of input pixel values f(i+k, j+l):

g(i, j) = SUM[ f(i+k, j+l) * h(k, l) ]

in which h(k, l) is called the kernel, which is nothing more than the coefficients of the filter.

The homogeneous filter is the simplest filter: each output pixel is the mean of its kernel neighbors (all of them contribute with equal weights), so for a kernel of size w x h every coefficient of its kernel K is:

K = (1 / (w*h)) * (a w x h matrix of ones)

The Gaussian filter uses a different-weight kernel, in both the x and y directions: pixels located in the middle have the biggest weight, and the weights decrease with distance from the neighborhood center, so pixels located on the sides have smaller weights. Its kernel K (for a 5*5 kernel) looks like:

[image: 5x5 Gaussian kernel weights]

The median filter replaces each pixel's value with the median of its neighboring pixels. This method is great when dealing with "salt and pepper" noise.

Bilateral Filter

By using any of the three filters above to smooth an image, we not only dissolve noise but also smooth edges, making them less sharp or even making them disappear. To solve this problem, we can use a filter called the bilateral filter, an advanced version of the Gaussian filter: it introduces another weight that represents how close (or similar) two pixels are to one another in value, and by considering both weights the bilateral filter can keep edges sharp while blurring the image.
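
In the standard notation from the literature (this definition is not from the original post), the bilateral filter output at a pixel p is

    BF[I]_p = \frac{1}{W_p} \sum_{q \in S} G_{\sigma_s}(\lVert p - q \rVert) \, G_{\sigma_r}(\lvert I_p - I_q \rvert) \, I_q

where G_{\sigma_s} is the spatial Gaussian that lowers the weight of distant pixels, G_{\sigma_r} is the range Gaussian that lowers the weight of pixels whose intensity differs from I_p, and W_p is a normalization factor that makes the weights sum to one.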

Let me show you the process using this image, which has a sharp edge.

[image: example image with a sharp edge]

 

Say we are smoothing this image (we can see noise in it), and we are now dealing with the pixel at the middle of the blue rectangle.

[images: Gaussian kernel (left) and bilateral filter kernel (right)]

The left picture above shows a Gaussian kernel, and the right one a bilateral filter kernel, which considers both weights.

We can also see the difference between the Gaussian filter and the bilateral filter in these pictures:

Say we have an original image with noise like this

[image: original noisy image]

 

By using the Gaussian filter, the image is smoother than before, but we can see the edge is no longer sharp: a slope has appeared between the white and black pixels.

[image: result of Gaussian filtering]

 

However, by using the bilateral filter, the image is smoother and the edge remains sharp.

[image: result of bilateral filtering]

OpenCV code

It is super easy to apply these kinds of filters in OpenCV:

//Homogeneous blur:
blur(image, dstHomo, Size(kernel_length, kernel_length), Point(-1,-1));
//Gaussian blur:
GaussianBlur(image, dstGaus, Size(kernel_length, kernel_length), 0, 0);
//Median blur:
medianBlur(image, dstMed, kernel_length);
//Bilateral blur:
bilateralFilter(image, dstBila, kernel_length, kernel_length*2, kernel_length/2);

and for each function, you can find more details in OpenCV Documentation

Test Images

Glad to use my favorite Van Gogh image :

[image: original Van Gogh test image]

 

From left to right: Homogeneous blur, Gaussian blur, Median blur, Bilateral blur.


kernel length = 3: [images]

kernel length = 9: [images]

kernel length = 15: [images]

kernel length = 23: [images]

kernel length = 31: [images]

kernel length = 49: [images]

kernel length = 99: [images]


Posted in C, Image / Video Filters, Image Processing, OpenCV, OpenCV, OpenCV Tutorial | Leave a Comment »

cppconlib: A C++ library for working with the Windows console

Posted by Hemprasad Y. Badgujar on July 20, 2015


cppconlib is built with C++11 features and requires Visual Studio 2012 or newer. The library is available in a single header called conmanip.h and provides a set of helper classes, functions and constants for manipulating a Windows console (using the Windows console functions). The library features the following components:

  • console_context<T>: represents a context object for console operations; its main purpose is restoring console settings; typedefs for the three consoles are available (console_in_context, console_out_context and console_err_context)
  • console<T>: represents a console object providing operations such as changing the foreground and background colors, the input mode, screen buffer size, title, and others; typedefs for the three consoles are available (console_in, console_out and console_err)
  • manipulating functions that can be used with cout/wcout and cin/wcin: settextcolor()/restoretextcolor(), setbgcolor()/restorebgcolor(), setcolors(), setmode()/clearmode(), setposx()/setposy()/setpos().

The library can be downloaded from here. Detailed documentation is available here.


Examples

The following example prints some text in custom colors and then reads text in a different set of colors.

[screenshot of the example program and its colored output]
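
Since the screenshot is not reproduced here, the following is a sketch of such a program based on the components listed above (the exact enumeration and member names, such as console_text_colors::light_yellow and settitle(), should be checked against the library documentation):

#include <iostream>
#include "conmanip.h"
using namespace conmanip;

int main() {
    console_out_context ctxout;          // restores the console settings on destruction
    console_out conout(ctxout);

    conout.settitle("cppconlib demo");   // set the console window title

    std::cout << settextcolor(console_text_colors::light_yellow)
              << "Hello, "
              << settextcolor(console_text_colors::light_cyan)
              << "console!" << std::endl
              << restoretextcolor();
    return 0;
}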

The following code prints a rhomb to the console:

[screenshot of the rhomb-printing code]

For more details and updates check the project at codeplex: https://cppconlib.codeplex.com.

UPDATE: A NuGet package for cppconlib is available.

Posted in Computer Software, Computer Vision | Tagged: , , , , , | Leave a Comment »

OpenCV: Color-spaces and splitting channels

Posted by Hemprasad Y. Badgujar on July 18, 2015


Conversion between color-spaces

Our goal here is to visualize each of the three channels of these color-spaces: RGB, HSV, YCrCb and Lab. In general, none of them are absolute color-spaces and the last three (HSV, YCrCb and Lab) are ways of encoding RGB information. Our images will be read in BGR (Blue-Green-Red), because of OpenCV defaults. For each of these color-spaces there is a mapping function and they can be found at OpenCV cvtColor documentation.
One important point is that the OpenCV imshow() function will always assume that the Mat shown is in the BGR color-space, which means we will always need to convert back to see what we want. Let's start.

OpenCV Program: Split Channels

HSV

While in BGR, an image is treated as an additive result of three base colors (blue, green and red), HSV stands for Hue, Saturation and Value (Brightness). We can say that HSV is a rearrangement of RGB in a cylindrical shape. The HSV ranges are:

  • 0 ≤ H ≤ 360 ⇒ OpenCV range = H/2 (0 ≤ H ≤ 180)
  • 0 ≤ S ≤ 1 ⇒ OpenCV range = 255*S (0 ≤ S ≤ 255)
  • 0 ≤ V ≤ 1 ⇒ OpenCV range = 255*V (0 ≤ V ≤ 255)

YCrCb or YCbCr

It is used widely in video and image compression schemes. YCrCb stands for Luminance (sometimes you can see Y', called luma), Red-difference and Blue-difference chroma components. The YCrCb ranges are:

  • 0 ≤ Y ≤ 255
  • 0 ≤ Cr ≤ 255
  • 0 ≤ Cb ≤ 255

L*a*b

In this color-opponent space, L stands for the Luminance dimension, while a and b are the color-opponent dimensions. The Lab ranges are:

  • 0 ≤ L ≤ 100 ⇒ OpenCV range = L*255/100 (0 ≤ L ≤ 255)
  • -127 ≤ a ≤ 127 ⇒ OpenCV range = a + 128 (1 ≤ a ≤ 255)
  • -127 ≤ b ≤ 127 ⇒ OpenCV range = b + 128 (1 ≤ b ≤ 255)

Splitting channels

All the color-spaces mentioned above were constructed using three channels (dimensions). It is a good exercise to visualize each of these channels and realize what they really store, because when I say that the third channel of HSV stores the brightness, what do you expect to see? Remember: a colored image is made of three-channels (in our cases) and when we see each of them separately, what do you think the output will be? If you said a grayscale image, you are correct! However, you might have seen these channels as colored images out there. So, how? For that, we need to choose a fixed value for the other two channels. Let’s do this!
To visualize each channel with color, I used the same values used on the Slides 53 to 65 from CS143, Lecture 03 from Brown University.
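
A minimal sketch of such a split (using the OpenCV 3 constant names; in OpenCV 2.x, CV_BGR2HSV would be used instead of COLOR_BGR2HSV, and the input file name is an example):

#include <opencv2/opencv.hpp>
#include <vector>
using namespace cv;

int main() {
    Mat bgr = imread("input.jpg");       // OpenCV reads images in BGR by default
    Mat hsv;
    cvtColor(bgr, hsv, COLOR_BGR2HSV);   // map BGR to HSV

    std::vector<Mat> channels;
    split(hsv, channels);                // channels[0]=H, [1]=S, [2]=V

    imshow("Hue", channels[0]);          // each shown as a single-channel (grayscale) image
    imshow("Saturation", channels[1]);
    imshow("Value", channels[2]);
    waitKey(0);
    return 0;
}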

RGB or BGR

Original image (a) and its channels with color: blue (b), green (c) and red (d). On the second row, each channel in grayscale (single channel image), respectively.

HSV

Original image (a) and its channels with color: hue (b), saturation (c) and value or brightness (d). On the second row, each channel in grayscale (single channel image), respectively.

YCrCb or YCbCr

Original image (a) and its channels with color: luminance (b), red-difference (c) and blue difference (d). On the second row, each channel in grayscale (single channel image), respectively.

Lab or CIE Lab

Original image (a) and its channels with color: luminance (b), a-dimension (c) and b-dimension (d). On the second row, each channel in grayscale (single channel image), respectively.

Posted in Computer Vision, GPU (CUDA), OpenCV, OpenCV, OpenCV Tutorial | Tagged: , , | Leave a Comment »

Open road databases for lane tracking and vehicle detection

Posted by Hemprasad Y. Badgujar on May 16, 2015


These databases are "free" for a researcher willing to test his own algorithms for lane tracking or vehicle detection.

Although it is quite easy to find webpages with huge databases of vehicle images, it is not so easy to find sites with videos of the road ahead captured by a camera installed in a vehicle.

We finally found some, which in some cases include both the original videos and the videos with the superimposed detections of vehicles, pedestrians and the like.

Here is the list with the links and a short description of the owner:

Thanks to the researchers that share their databases! You support the whole research community with your effort!!

Posted in Computer Vision, OpenCV, Project Related | Tagged: , , , , , | Leave a Comment »

The conversion and copy CvMat, Mat and between IplImage

Posted by Hemprasad Y. Badgujar on May 15, 2015


The conversion and copy CvMat, Mat and between IplImage

In OpenCV, the Mat, CvMat and IplImage types can all represent and display an image. IplImage is derived from CvMat, and CvMat from CvArr (CvArr -> CvMat -> IplImage); Mat is the C++ version of the matrix type. (Where CvArr is used as a function parameter, either a CvMat or an IplImage can be passed; internally it is handled as a CvMat.)

The Mat type focuses on computation, and its mathematical operations are highly optimized in OpenCV, while the CvMat and IplImage types are more focused on the "image", and the image operations on them (scaling, single-channel extraction, thresholding, etc.) are optimized. Conversions among the three types are often needed, so here is a brief overview.

Conversion and copying

Between CvMat and Mat

1. Copying between CvMat:

  // Note: deep copy - space is allocated separately; the two are independent
  CvMat* a;
  CvMat* b = cvCloneMat(a);   //copy a to b

2. Copying between Mat:

  // Note: shallow copy - only a matrix header is created and the data is shared
  // (a change made through any of a, b, c is visible through the other two)
  Mat a;
  Mat b = a; //a "copy" to b
  Mat c(a); //a "copy" to c

  // Note: deep copy
  Mat a;
  Mat b = a.clone(); //a copy to b
  Mat c;
  a.copyTo(c); //a copy to c

3. CvMat to Mat:

  // Use the Mat constructor: Mat::Mat(const CvMat* m, bool copyData = false); copyData defaults to false
  CvMat* a;
  // Note: the following three are equivalent, all shallow copies
  Mat b(a);   //a "copy" to b
  Mat b(a, false);    //a "copy" to b
  Mat b = a;  //a "copy" to b
  // Note: when copyData is set to true, it is a deep copy (the whole image data is copied)
  Mat b = Mat(a, true); //a copy to b

4. Mat to CvMat:

  // Note: shallow copy
  Mat a;
  CvMat b = a; //a "copy" to b

  // Note: deep copy
  Mat a;
  CvMat temp = a;  // convert to a CvMat header; no data is copied
  CvMat* b = cvCreateMat(temp.rows, temp.cols, CV_MAT_TYPE(temp.type));
  cvCopy(&temp, b);  // actually copy the data

Conversion and copying between IplImage and the two types above:

1. Copying between IplImage images: this is not detailed here; it comes down to the difference between cvCopy and cvCloneImage (a comparison chart has been posted online).

2. IplImage to Mat:

  // Use the Mat constructor: Mat::Mat(const IplImage* img, bool copyData = false); copyData defaults to false
  IplImage* srcImg = cvLoadImage("Lena.jpg");
  // Note: the following three are equivalent, all shallow copies
  Mat M(srcImg);
  Mat M(srcImg, false);
  Mat M = srcImg;
  // Note: when copyData is set to true, it is a deep copy (the whole image data is copied)
  Mat M(srcImg, true);

3. Mat to IplImage:

  // Note: shallow copy - again, only an image header is created; the data is not copied
  Mat M;
  IplImage img = M;
  IplImage img = IplImage(M);

4. IplImage to CvMat:

  // Method one: the cvGetMat function
  IplImage* img;
  CvMat temp;
  CvMat* mat = cvGetMat(img, &temp);  // fills the CvMat header; the data is shared with img
  // Method two: the cvConvert function
  CvMat* mat = cvCreateMat(img->height, img->width, CV_64FC3);  // note the order of height and width
  cvConvert(img, mat);  // deep copy

5. CvMat to IplImage:

  // Method one: the cvGetImage function
  CvMat M;
  IplImage* img = cvCreateImageHeader(M.size(), M.depth(), M.channels());
  cvGetImage(&M, img);  // fills the image header; the function returns img
  // which can also be written as
  CvMat M;
  IplImage* img = cvGetImage(&M, cvCreateImageHeader(M.size(), M.depth(), M.channels()));
  // Method two: the cvConvert function
  CvMat M;
  IplImage* img = cvCreateImage(M.size(), M.depth(), M.channels());
  cvConvert(&M, img);  // deep copy

 

A final note:

1. The Mat type manages memory automatically and needs no explicit release (though you can call the release() method to force the Mat matrix data to be released); a CvMat must be released with cvReleaseMat(&cvmat), and an IplImage with cvReleaseImage(&iplimage).
2. When creating a CvMat matrix, the first parameter is the number of rows and the second is the number of columns: CvMat* cvCreateMat(int rows, int cols, int type);
3. When creating an IplImage, the first parameter of CvSize is the width (the number of columns), and the second is the height (the number of rows): IplImage* cvCreateImage(CvSize size, int depth, int channels); CvSize cvSize(int width, int height);
4. The internal buffer of an IplImage is 4-byte aligned per row; CvMat has no such restriction.
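
A short sketch of the pitfall in points 2 and 3 (the same 640 x 480 image created both ways; note the swapped argument order, and the explicit releases from point 1). The legacy C API headers are assumed to be included:

  // 480 rows (height) by 640 columns (width)
  CvMat* mat = cvCreateMat(480, 640, CV_8UC3);
  // cvSize takes (width, height)
  IplImage* img = cvCreateImage(cvSize(640, 480), IPL_DEPTH_8U, 3);

  // both must be released explicitly
  cvReleaseMat(&mat);
  cvReleaseImage(&img);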

Posted in OpenCV, OpenCV Tutorial | Tagged: , | Leave a Comment »

Building VTK with Visual Studio 2013

Posted by Hemprasad Y. Badgujar on April 30, 2015


Building VTK5 with Visual Studio

Download

  1. Download VTK 5.10.1 (VTK-5.10.1.zip) and unzip it (e.g. to C:\VTK-5.10.1).
    http://www.vtk.org/VTK/resources/software.html#previous
    https://github.com/Kitware/VTK/tree/v5.10.1

CMake

  1. Specify the source code location and the destination for the generated solution files:
    • Where is the source code: C:\VTK-5.10.1
    • Where to build the binaries: C:\VTK-5.10.1\build
  2. Press [Configure] and select the target Visual Studio version.
  3. Configure the following settings:
    • BUILD_SHARED_LIBS ☑ (check)
    • BUILD_TESTING ☐ (uncheck)
    • CMAKE_CONFIGURATION_TYPES Debug;Release
    • CMAKE_INSTALL_PREFIX C:\Program Files\VTK (or C:\Program Files (x86)\VTK)
  4. Press [Add Entry] to add the following setting:
    Name: CMAKE_DEBUG_POSTFIX
    Type: STRING
    Value: -gd
    Description:

    * The postfix string appended to the names of files generated by Debug builds.

  5. Press [Generate] to output the solution files.
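
  The same configuration can also be produced without the GUI, from the command prompt (a sketch; the generator name shown is for Visual Studio 2013, so adjust it to your version):

    C:\> cd C:\VTK-5.10.1\build
    C:\> cmake -G "Visual Studio 12 2013" -DBUILD_SHARED_LIBS=ON -DBUILD_TESTING=OFF -DCMAKE_INSTALL_PREFIX="C:\Program Files\VTK" -DCMAKE_DEBUG_POSTFIX=-gd ..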

Build

  1. Start Visual Studio with administrator privileges and open the VTK solution file (C:\VTK-5.10.1\build\VTK.sln).
    (If Visual Studio is not started with administrator privileges, the INSTALL step will fail.)
  2. Modify the source code as follows:
    • vtkOStreamWrapper.cxx
      line 60

      //VTKOSTREAM_OPERATOR(ostream&);
      vtkOStreamWrapper& vtkOStreamWrapper::operator << (ostream& a) {
        this->ostr << (void *)&a;
        return *this;
      }
      
    • vtkEnSightGoldBinaryReader.cxx
      line 3925

      if (this->IFile->read(result, 80).fail())
      

      line 3944

      if (this->IFile->read(dummy, 8).fail())
      

      line 4001

      if (this->IFile->read(dummy, 4).fail())
      

      line 4008

      if (this->IFile->read((char*)result, sizeof(int)).fail())
      

      line 4025

      if (this->IFile->read(dummy, 4).fail())
      

      line 4048

      if (this->IFile->read(dummy, 4).fail())
      

      line 4055

      if (this->IFile->read((char*)result, sizeof(int)*numInts).fail())
      

      line 4072

      if (this->IFile->read(dummy, 4).fail())
      

      line 4095

      if (this->IFile->read(dummy, 4).fail())
      

      line 4102

      if (this->IFile->read((char*)result, sizeof(float)*numFloats).fail())
      

      line 4119

      if (this->IFile->read(dummy, 4).fail())
      
    • vtkConvexHull2D.cxx
      line 31

      #include <algorithm>
      
    • vtkAdjacencyMatrixToEdgeTable.cxx
      line 31

      #include <algorithm>
      
    • vtkNormalizeMatrixVectors.cxx
      line 30

      #include <algorithm>
      
    • vtkPairwiseExtractHistogram2D.cxx
      line 39

      #include <algorithm>
      
    • vtkControlPointsItem.cxx
      line 35

      #include <algorithm>
      
    • vtkPiecewisePointHandleItem.cxx
      line 31

      #include <algorithm>
      
    • vtkParallelCoordinatesRepresentation.cxx
      line 83

      #include <algorithm>
      
  1. Build VTK (ALL_BUILD):
    1. Set the solution configuration (Debug or Release).
    2. Choose the ALL_BUILD project in Solution Explorer.
    3. Press [Build] > [Build Solution] to build VTK.
  2. Install VTK (INSTALL):
    1. Choose the INSTALL project in Solution Explorer.
    2. Press [Build] > [Project Only] > [Build Only INSTALL] to install VTK. The necessary files are copied to the destination specified by CMAKE_INSTALL_PREFIX.

Environment Variable

  1. Create an environment variable VTK_ROOT and set it to the VTK path (C:\Program Files\VTK).
  2. Add %VTK_ROOT%\bin; to the Path environment variable.

Building VTK6 with Visual Studio

Download

  1. Download VTK 6.1.0 (VTK-6.1.0.zip) and unzip it (e.g. to C:\VTK-6.1.0).
    http://www.vtk.org/VTK/resources/software.html#latestcand
    https://github.com/Kitware/VTK/tree/v6.1.0

CMake

  1. Specify the source code location and the destination for the generated solution files:
    • Where is the source code: C:\VTK-6.1.0
    • Where to build the binaries: C:\VTK-6.1.0\build
  2. Press [Configure] and select the target Visual Studio version.
  3. Configure the following settings:
    • BUILD_SHARED_LIBS ☑ (check)
    • BUILD_TESTING ☐ (uncheck)
    • CMAKE_CONFIGURATION_TYPES Debug;Release
    • CMAKE_INSTALL_PREFIX C:\Program Files\VTK (or C:\Program Files (x86)\VTK)
  4. Press [Add Entry] to add the following setting:
    Name: CMAKE_DEBUG_POSTFIX
    Type: STRING
    Value: -gd
    Description:

    * The postfix string appended to the names of files generated by Debug builds.

  5. Press [Generate] to output the solution files.

Build

  1. Start Visual Studio with administrator privileges and open the VTK solution file (C:\VTK-6.1.0\build\VTK.sln).
    (If Visual Studio is not started with administrator privileges, the INSTALL step will fail.)
  2. Build VTK (ALL_BUILD):
    1. Set the solution configuration (Debug or Release).
    2. Choose the ALL_BUILD project in Solution Explorer.
    3. Press [Build] > [Build Solution] to build VTK.
  3. Install VTK (INSTALL):
    1. Choose the INSTALL project in Solution Explorer.
    2. Press [Build] > [Project Only] > [Build Only INSTALL] to install VTK. The necessary files are copied to the destination specified by CMAKE_INSTALL_PREFIX.

Environment Variable

  1. Create an environment variable VTK_DIR and set it to the VTK install path (C:\Program Files\VTK).
  2. Append %VTK_DIR%\bin; to the Path environment variable.

Building VTK6 + Qt5 with Visual Studio

Download

  1. Download VTK 6.1.0 (VTK-6.1.0.zip) and unzip it (to C:\VTK-6.1.0).
    http://www.vtk.org/VTK/resources/software.html#latestcand
    https://github.com/Kitware/VTK/tree/v6.1.0
  2. Download and install Qt 5.4.0 with OpenGL (to C:\Qt).
    http://www.qt.io/download-open-source/#

    • Qt 5.4.0 for Windows 32-bit (VS 2013, OpenGL, 694 MB)
      (qt-opensource-windows-x86-msvc2013_opengl-5.4.0.exe)
    • Qt 5.4.0 for Windows 64-bit (VS 2013, OpenGL, 709 MB)
      (qt-opensource-windows-x86-msvc2013_64_opengl-5.4.0.exe)

CMake

  1. Specify the source code location and the build output directory:
    • Where is the source code: C:\VTK-6.1.0
    • Where to build the binaries: C:\VTK-6.1.0\build
  2. Press [Configure] and select the target Visual Studio version.
  3. Set the following options.
    (Checking Grouped and Advanced makes them easier to find.) * For Win32 specify msvc2013_opengl; for x64 specify msvc2013_64_opengl. Ungrouped Entries

    • Qt5Core_DIR C:/Qt/Qt5.4.0/5.4/msvc2013_64_opengl/lib/cmake/Qt5Core
    • Qt5Designer_DIR C:/Qt/Qt5.4.0/5.4/msvc2013_64_opengl/lib/cmake/Qt5Designer
    • Qt5Gui_DIR C:/Qt/Qt5.4.0/5.4/msvc2013_64_opengl/lib/cmake/Qt5Gui
    • Qt5Network_DIR C:/Qt/Qt5.4.0/5.4/msvc2013_64_opengl/lib/cmake/Qt5Network
    • Qt5OpenGL_DIR C:/Qt/Qt5.4.0/5.4/msvc2013_64_opengl/lib/cmake/Qt5OpenGL
    • Qt5Sql_DIR C:/Qt/Qt5.4.0/5.4/msvc2013_64_opengl/lib/cmake/Qt5Sql
    • Qt5WebKit_DIR C:/Qt/Qt5.4.0/5.4/msvc2013_64_opengl/lib/cmake/Qt5WebKit
    • Qt5WebKitWidgets_DIR C:/Qt/Qt5.4.0/5.4/msvc2013_64_opengl/lib/cmake/Qt5WebKitWidgets
    • Qt5Widgets_DIR C:/Qt/Qt5.4.0/5.4/msvc2013_64_opengl/lib/cmake/Qt5Widgets
    • Qt5Xml_DIR C:/Qt/Qt5.4.0/5.4/msvc2013_64_opengl/lib/cmake/Qt5Xml

    BUILD

    • BUILD_SHARED_LIBS ☑ (check)
    • BUILD_TESTING ☐ (uncheck)

    CMAKE

    • CMAKE_CONFIGURATION_TYPES Debug;Release
    • CMAKE_INSTALL_PREFIX C:\Program Files\VTK (or C:\Program Files (x86)\VTK)

    Module

    • Module_vtkGUISupportQt ☑ (check)
    • Module_vtkGUISupportQtOpenGL ☑ (check)
    • Module_vtkGUISupportQtSQL ☑ (check)
    • Module_vtkGUISupportQtWebkit ☑ (check)
    • Module_vtkRenderingQt ☑ (check)
    • Module_vtkViewsQt ☑ (check)

    OPENGL

    • OPENGL_gl_LIBRARY opengl32
    • OPENGL_glu_LIBRARY glu32

    QT

    • QT_MKSPECS_DIR C:/Qt/Qt5.4.0/5.4/msvc2013_64_opengl/mkspecs/win32-msvc2013
    • QT_QMAKE_EXECUTABLE C:/Qt/Qt5.4.0/5.4/msvc2013_64_opengl/bin/qmake.exe
    • QT_QTCORE_LIBRARY_DEBUG C:/Qt/Qt5.4.0/5.4/msvc2013_64_opengl/lib/Qt5Cored.lib
    • QT_QTCORE_LIBRARY C:/Qt/Qt5.4.0/5.4/msvc2013_64_opengl/lib/Qt5Core.lib

    VTK

    • VTK_Group_Qt ☑ (check)
    • VTK_INSTALL_QT_PLUGIN_DIR ${CMAKE_INSTALL_PREFIX}/${VTK_INSTALL_QT_DIR}
    • VTK_QT_VERSION 5
  4. Press the [Add Entry] to add the following settings.
    Name: CMAKE_PREFIX_PATH
    Type: PATH
    Value: C:\Program Files (x86)\Windows Kits\8.1\Lib\winv6.3\um\x64
    (or C:\Program Files (x86)\Windows Kits\8.1\Lib\winv6.3\um\x86)
    Description:

    * For Visual Studio 2013, specify the Windows Kits 8.1 path (…\8.1\Lib\winv6.3); for Visual Studio 2012, specify the Windows Kits 8.0 path (…\8.0\Lib\Win8).

    Name: CMAKE_DEBUG_POSTFIX
    Type: STRING
    Value: -gd
    Description:

    * The string appended to the end of the file names that Debug builds generate.

  5. Press [Generate] to output the solution file. (A command-line equivalent follows.)
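A command-line sketch of the same Qt-enabled configuration (assuming the 64-bit msvc2013_opengl kit; pointing CMAKE_PREFIX_PATH at the Qt kit root is an alternative way to let CMake discover all the Qt5*_DIR entries listed above automatically):

    cd /d C:\VTK-6.1.0\build
    cmake -G "Visual Studio 12 2013 Win64" ^
          -DBUILD_SHARED_LIBS=ON ^
          -DBUILD_TESTING=OFF ^
          -DCMAKE_INSTALL_PREFIX="C:\Program Files\VTK" ^
          -DCMAKE_DEBUG_POSTFIX=-gd ^
          -DVTK_Group_Qt=ON ^
          -DVTK_QT_VERSION=5 ^
          -DCMAKE_PREFIX_PATH="C:/Qt/Qt5.4.0/5.4/msvc2013_64_opengl" ^
          ..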

Build

  1. Start Visual Studio with administrator privileges and open the VTK solution file (C:\VTK-6.1.0\build\VTK.sln).
    (If Visual Studio is not started with administrator privileges, the INSTALL step will fail.)
  2. Build VTK (ALL_BUILD).
    1. Set the solution configuration (Debug or Release).
    2. Select the ALL_BUILD project in Solution Explorer.
    3. Choose [Build] > [Build Solution] to build VTK.
  3. Install VTK (INSTALL).
    1. Select the INSTALL project in Solution Explorer.
    2. Choose [Build] > [Project Only] > [Build Only INSTALL]. The required files are copied to the destination specified by CMAKE_INSTALL_PREFIX.

Environment Variable

  1. Create an environment variable VTK_DIR and set it to the VTK install path (C:\Program Files\VTK).
  2. Create an environment variable QTDIR and set it to the Qt path (C:\Qt\Qt5.4.0\5.4\msvc2013_64_opengl\ or C:\Qt\Qt5.4.0\5.4\msvc2013_opengl\).
  3. Append %VTK_DIR%\bin;%QTDIR%\bin to the Path environment variable. (A setx sketch follows.)
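The same variables can be set from an elevated Command Prompt with setx (a sketch; note that setx updates the registry but not the current console session, and it silently truncates values longer than 1024 characters, so the GUI editor is safer for a long Path):

    setx VTK_DIR "C:\Program Files\VTK"
    setx QTDIR "C:\Qt\Qt5.4.0\5.4\msvc2013_64_opengl"
    setx Path "%Path%;C:\Program Files\VTK\bin;C:\Qt\Qt5.4.0\5.4\msvc2013_64_opengl\bin"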

Posted in CLOUD, Computer Languages, Computer Softwares, Computer Vision, Computing Technology, CUDA, GPU (CUDA), OpenCV | 4 Comments »

Assessing the pixel values of an image

Posted by Hemprasad Y. Badgujar on March 14, 2015



Assessing the pixel values of an image

#include "opencv2/core/core.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <iostream>
 
using namespace cv;
using namespace std;
 
int main( )
{
 
 Mat src1;
 src1 = imread("lena.jpg", CV_LOAD_IMAGE_COLOR); 
 namedWindow( "Original image", CV_WINDOW_AUTOSIZE ); 
 imshow( "Original image", src1 ); 
 
 Mat gray;
 cvtColor(src1, gray, CV_BGR2GRAY);
 namedWindow( "Gray image", CV_WINDOW_AUTOSIZE ); 
 imshow( "Gray image", gray );
 
 // know the number of channels the image has
 cout << "original image channels: " << src1.channels()
      << " gray image channels: " << gray.channels() << endl;
 
 /* ******************* Read the pixel intensity ********************* */
 // Single-channel grayscale image (type 8UC1), coordinates x = 5, y = 2.
 // By convention, {row number = y} and {column number = x};
 // intensity1.val[0] contains a value from 0 to 255.
 Scalar intensity1 = gray.at<uchar>(2, 5);
 cout << "intensity: " << endl << " " << intensity1.val[0] << endl << endl;
 
 // 3-channel image with BGR color (type 8UC3). Values can be stored
 // in "int" or "uchar"; here int is used.
 Vec3b intensity2 = src1.at<Vec3b>(10, 15);
 int blue = intensity2.val[0];
 int green = intensity2.val[1];
 int red = intensity2.val[2];
 
 /* ******************* Write to pixel intensity ********************* */
 // This is an example from the OpenCV 2.4.6.0 documentation.
 Mat H(10, 10, CV_64F);
 for (int i = 0; i < H.rows; i++)
  for (int j = 0; j < H.cols; j++)
   H.at<double>(i, j) = 1. / (i + j + 1);
 cout << H << endl << endl;
 
 // Modify pixels of the image
 for (int i = 0; i < src1.rows; i++)
 {
  for (int j = 0; j < src1.cols; j++)
  {
   src1.at<Vec3b>(i, j)[0] = 0;
   src1.at<Vec3b>(i, j)[1] = 200;
   src1.at<Vec3b>(i, j)[2] = 0;
  }
 }
 namedWindow( "Modify pixel", CV_WINDOW_AUTOSIZE ); 
 imshow( "Modify pixel", src1 );
 
 waitKey(0); 
 return 0;
}
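Note: CV_LOAD_IMAGE_COLOR, CV_WINDOW_AUTOSIZE, and CV_BGR2GRAY are legacy C-API constants; they should still compile against OpenCV 3.x through its compatibility headers, but the modern equivalents are cv::IMREAD_COLOR, cv::WINDOW_AUTOSIZE, and cv::COLOR_BGR2GRAY.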

Posted in OpenCV, OpenCV Tutorial | Leave a Comment »

Project Template in Visual Studio

Posted by Hemprasad Y. Badgujar on March 5, 2015


 


Introduction

This article describes the step by step process of creating project template in Visual Studio 2012 and VSIX installer that deploys the project template. Each step contains an image snapshot that helps the reader to keep focused.

Background

A number of predefined project and project item templates are installed when you install Visual Studio. You can use one of the many project templates to create the basic project container and a preliminary set of items for your application, class, control, or library. You can also use one of the many project item templates to create, for example, a Windows Forms application or a Web Forms page to customize as you develop your application.

You can create custom project templates and project item templates and have these templates appear in the New Project and Add New Item dialog boxes. The article describes the complete process of creating and deploying the project template.

Using the Code

Here, I have taken a very simple example that contains almost no code, but it can be extended to fit your needs.

Create Project Template

First of all, create the piece (project or item) that looks like what you want to be generated from the template we are going to create.

Then, export the template (we are going to use the exported template as a shortcut to build our Visual Studio template package):

Visual Studio Project Templates

We are creating a project template here.

Fill all the required details:

A zip file should get created:

Creating Visual Studio Package Project

To use VSIX projects, you need to install the Visual Studio 2012 VSSDK.

Download the Visual Studio 2012 SDK.

You should see a new project template, “Visual Studio Package”, after installing the SDK.

Select C#, since our project template belongs to C#.

Provide details:

Currently, we don’t need the unit test projects, but they are good to have.

In the solution, double-click the manifest so the designer opens.

Fill in all the tabs. The most important is Assets: here you give the path of our project template (DummyConsoleApplication.zip).

As a verification step, build the solution; you should see a .vsix file generated after its dependency project builds:

Installing the Extension

The project template is located under the “Visual C#” node.

Uninstalling the Project Template


Posted in .Net Platform, C, Computer Languages, Computer Software, Computer Softwares, Computer Vision, CUDA, GPU (CUDA), Installation, OpenMP, PARALLEL | Leave a Comment »

 