Something More for Research

Explorer of Research #HEMBAD

Archive for the ‘My Research Related’ Category

How to use Twitter as a scientist

Posted by Hemprasad Y. Badgujar on February 5, 2016


If you are a scientist with “joining Twitter” and then “doing stuff with Twitter” on your to-do list, you might feel a little intimidated by the long list of possible people to follow. Moreover, following @CNN and @BarackObama might be the first thing you do, and might be suggested to you, but these are not your main sources of scientific joy and information.

So let’s take this step by step, from setting up a profile and following people to building an academic network on Twitter. I don’t want this to become a tutorial (there are plenty of videos on YouTube to take you through any step you might have difficulties with), but I do want to give you some tips and tricks at every step along the way.

1. Crafting a bio
One of the first things you need to do when you sign up on Twitter is to write a bio. I recommend that you make your Twitter profile publicly accessible instead of private: if you join Twitter to enter the realm of scientists there, you want them to be able to find you and follow you. Make sure your bio mentions your field and institution(s). You can add a warning that retweets are not endorsements, but, really, most of the Twitterverse is aware of that.

Keep in mind as well that Twitter is a lighter type of platform. There’s no need to cite your recent publications in your bio. I like to add a bit of lightness to mine with “Blogs. Pets cats. Drinks tea.” I’m assuming that also prepares people for the fact that, besides the concrete and the science, I might blurt out the odd complaint or random observation, or retweet cute cat pictures if I feel like it. Does that make me unprofessional? I’m on the border of Gen Y, and I don’t think so…

2. Choosing a profile picture
Your standard profile picture is an egg. Whenever I get followed by an egg, I don’t even make the effort to read that person’s profile description: the sole fact that they didn’t even finish their profile makes me doubt they have any real interest in interacting on Twitter.

Since Twitter profile pictures show up very small, I recommend you use a headshot. If you put a full body picture of yourself presenting your work somewhere, you’ll be reduced to the size of a stickman in people’s timelines. Use a clear, recognizable headshot, so that the odd fellow researcher might be able to recognize you at a conference.

3. Following people

So now that we have the basics covered, let’s start to move forward into the actual use of Twitter. Your first recommended people to follow will typically be @CNN and @BarackObama. While I like using Twitter as a source for the news, I’m going to assume you came here in the first place for the scientific community. How do you start following people?

Here are a few types of accounts that you can/should start following:
– the accounts of your university and department. These accounts will also retweet tweets from fellow academics at your institution.
– the accounts of universities and research groups worldwide that you are interested in
– the accounts of academic publishers
– the accounts of news websites and blogs related to higher education, such as @insidehighered
– the results of a search for your field: see what and who shows up
– organizations in your field
– Twitter lists about your field or with people from your institution

Keep in mind that, just like growing followers, growing a list of interesting people to follow happens over time. You might see a retweet, check out that person’s profile and then decide to follow this tweep. If you aggressively follow a lot of people in a short amount of time, Twitter will ban you from following more anyway.

4. Creating content
Now you can start creating content. You can tweet about your recent publications, retweet information from the accounts you follow and more. If you have a blog, Twitter is an excellent place to share your recent blog posts. You can also tweet a series of posts (indicated by (1/3), (2/3) and (3/3) if you spread the content over 3 tweets, for example) if what you want to share is too long to squeeze into 140 characters.
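If you ever script your tweets, the thread-numbering convention above is easy to automate. A minimal sketch (the function name `split_into_tweets` and the suffix handling are my own, not from any Twitter library):

```python
def split_into_tweets(text, limit=140):
    """Split text into tweet-sized chunks, each suffixed '(i/n)'."""
    suffix_room = len(" (99/99)")  # reserve space for the counter
    chunks, current = [], ""
    for word in text.split():
        candidate = (current + " " + word).strip()
        if current and len(candidate) + suffix_room > limit:
            chunks.append(current)   # current chunk is full; start a new one
            current = word
        else:
            current = candidate
    if current:
        chunks.append(current)
    n = len(chunks)
    return ["%s (%d/%d)" % (chunk, i + 1, n) for i, chunk in enumerate(chunks)]
```

Single words longer than the limit are not handled; for a blog-post teaser that is rarely an issue.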

Some ideas on what to share with the world:
– tweet about the topic you will discuss in class
– tweet about the conference you are planning to attend
– share your progress in writing
– talk about a recent publication
– join the discussion about higher education policies (I know you have an opinion – we all do)

5. Getting the discussion started
If you see a topic of interest, you don’t need to wait for anyone to invite you to take part in the discussion – you can simply barge right into it. You wouldn’t do it in real life, but on Twitter, nobody knows you are reading along. So comment on what fellow researchers are sharing, ask for ideas and opinions, and interact.

You can also tag people in a post by adding their @name when you share an article and asking what they think. That is another way to get involved in the academic discussion online.

6. Using hashtags
Hashtags – those #selfie #dinner #random tags that you see showing up around most social media platforms – come from Twitter, where feeds and discussions center around certain hashtags. In the academic world, I recommend checking out #phdchat, #ecrchat (for early career researchers), #scholarsunday (on Sundays, to learn who to follow), #acwri (for academic writing) and #acwrimo (in November, the month in which academics worldwide pledge to get their manuscripts out and post their daily word counts).

Some hashtags have a weekly fixed hour to chat. Other hashtags are continuous streams of information. Figure out what the important hashtags are in your field and in academia in general, listen in and contribute.

7. Saving conversations with Storify

If you had a particularly interesting conversation on Twitter that you would like to save for future reference, you can use Storify. Storify is a website on which you can save stories by adding social media content. You can, for example, add tweets and replies to tweets in a logical order, to save a discussion you had. Once you have finished compiling your story, you can share it again through social media. Stories also remain saved and accessible in Storify for the future.

8. Curating content
Retweeting, sharing articles, hosting people to write on your blog, … all these activities are related to curating content and broadcasting it to your audience. I enjoy interviewing fellow academics that I meet through Twitter. I post the interview then on my blog, and share that link on Twitter (going full circle). From a number of newsletters that I read, I also share articles and interesting documents. Find out what type of content you and your followers find relevant, and start distributing interesting information.

Posted in Computing Technology, Mixed, My Research Related

Image Processing Algorithms and Codes

Posted by Hemprasad Y. Badgujar on February 5, 2015

Algorithms The Image Processing and Measurement Cookbook by Dr. John C. Russ

Conference Papers

Computer Vision Source Code


Basic image processing demos showing some basic filters: thresholding, a Gaussian filter, and the Canny edge detector, using MATLAB.

Gaussian Masks, Scale Space and Edge Detection

The SUSAN algorithms cover image noise filtering, edge finding and corner finding.

Vision Systems Course

Various Simple Image Processing Techniques

Algorithms commonly used in spectral processing methods

Beyond Photography — The Digital Darkroom  (online, Jan 2003)

Alpha blending pp. 150-154,  High Performance Computer Imaging
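The alpha-blending entry boils down to one formula per channel: out = α·fg + (1 − α)·bg. A minimal sketch, assuming a single 8-bit channel (the function name is mine):

```python
def alpha_blend(fg, bg, alpha):
    """Blend a foreground value over a background value.

    fg, bg: 8-bit channel values (0..255); alpha: opacity in [0.0, 1.0].
    """
    return round(alpha * fg + (1.0 - alpha) * bg)
```

Applied per pixel and per channel, this is the whole technique; premultiplied-alpha variants only rearrange where the multiplication happens.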
Antialiasing Aliasing and Antialiasing

Antialiasing with Line Samples

Area efg’s Polygon Area and Centroid Lab Report
Astrophotos Post-Processing Astrophotos
How to Put the Astro in Photographer
Star Shaping
Authentication Image Authentication for a Slippery New Age, Dr. Dobb’s Journal, April 1995.
Bar Codes Introduction to Bar Coding
Biometrics Intelligent Biometric Techniques in Fingerprint and Face Recognition (book)

Also see Recognition, Faces and Recognition, Fingerprints below.

Books Digital Image Processing Algorithms and Applications
Centroid efg’s Polygon Area and Centroid Lab Report
Also see “Moments of Inertia” below
Chain Codes 7. Processing Line Drawings
Chromaticity Charts efg’s Color Reference Library
Classification Jean Vezina’s automatic tree species identification from digitized aerial photographs. PowerPoint presentation for a forestry audience, TreeID.ZIP. Abstract.

Fourier Descriptors Allow Web-Inspection System to Classify Plastic Shapes
Vision Systems Design, Dec. 98, pp. 25-35

Image Processing and Neural Networks Classify Complex Defects
Vision Systems Design, Mar. 99, pp. 63-70

Chapter 17, Classification, pp. 509-520
Practical Handbook on Image Processing for Scientific Applications

Compression Image Compression, pp. 179-215
Digital Image Processing:  Principles and Applications

Image Compression, Chapter 9
Simplified Approach to Image Processing

Compression Links

Chapter 6, Image Compression, pp. 307-412
Digital Image Processing

Chapter 9, Image data compression, High Performance Computer Imaging
Huffman, run-length, DCT, JPEG (lossy and lossless), MPEG

Mitsuharu ARIMURA’s Bookmarks on Source Coding/Data Compression

Condensation The Condensation Algorithm
Contrast Contrast Stretching, pp. 54-59
Simplified Approach to Image Processing

Contrast Perception

 efg’s HistoStretchGrays Lab Report

Contrast Manipulation, pp. 228-232
The Image Processing Handbook

Also see Histograms below.

Counting Peaks Algorithm for counting peaks on chart
efg’s UseNet Post
efg’s Pixel Profile Lab Report
Deconvolution Deconvolution, Fourier-Self

Blind Deconvolution Page

De-mosaicing Digital Camera Designers Face a Maze of Trade-Offs
Detection, Buildings Building Detection in Aerial Images
Detection, Corners The SUSAN algorithms cover image noise filtering, edge finding and corner finding.
Detection, Edges Edge Detection, pp. 79-85
Simplified Approach to Image Processing

A New Method of Edge Detection

3. Differentiation, Sharpening, Enhancement, Caricatures and Shape Morphing

Evaluation of Subpixel Line and Edge Detection Precision and Accuracy

Contour Extraction

Edge Detection

Canny Edge Detector Code

The Canny Edge Detector

Edges — The Canny Edge Detector

“An Imaging Edge:  Tips and Technique for Edge Extraction”
Advanced Imaging, Jan 99, pp. 36-40

Understanding Edge- and Line-Based Segmentation
Vision Systems Design, Mar. 99, pp. 27-32

“Data Structures:  Your Mind Doesn’t Process Pixels, so why Should Your Software?”
Advanced Imaging, Mar 99, pp. 34-35, 55
Gives example of Kanizsa Square that has illusory edges.

J.F. Canny, “A computational approach to edge detection”, IEEE Patt. Anal.
Machine Intell., Vol. 8, No. 6, pp. 679-698, 1986.

Sobel Masks for Edge Detection

Line and edge detection:  One simple test image

The SUSAN algorithms cover image noise filtering, edge finding and corner finding.

Edges:  The Occurrence of Local Edges

Chapter 1, Advanced Edge-Detection Techniques
Algorithms for Image Processing and Computer Vision

Chapter 12, Edges and Lines, pp. 387-413
Practical Handbook on Image Processing for Scientific Applications

pp. 177-179, High Performance Computer Imaging

Distortion Eliminating Distortion in Your Imaging System

Nonlinear Lens Distortion

Dithering Gernot Hoffmann’s “Dithering + Halftoning” (includes Hilbert-Peano dithering, Floyd-Steinberg dithering)



Average, Floyd-Steinberg, Ordered, Random Dithering

Test Page for Color Map Quantization (including Floyd-Steinberg error diffusion)


“A Balanced Dithering Technique,” C/C++ Users Journal, December 1998

“Classic” dithering notes by Lee Cocker
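Floyd-Steinberg error diffusion, which several of the links above cover, quantizes each pixel in scan order and pushes the rounding error onto not-yet-visited neighbors with weights 7/16, 3/16, 5/16 and 1/16. A rough, unoptimized sketch for binarizing a grayscale image (pure Python, my own helper name):

```python
def floyd_steinberg(pixels):
    """Binarize a 2-D grayscale image (values 0..255) with
    Floyd-Steinberg error diffusion. Returns a new image of 0s and 255s."""
    h, w = len(pixels), len(pixels[0])
    img = [list(row) for row in pixels]   # working copy
    for y in range(h):
        for x in range(w):
            old = img[y][x]
            new = 255 if old >= 128 else 0
            img[y][x] = new
            err = old - new
            # Push the quantization error onto not-yet-visited neighbors.
            if x + 1 < w:
                img[y][x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1][x - 1] += err * 3 / 16
                img[y + 1][x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1][x + 1] += err * 1 / 16
    return img
```

Serpentine (boustrophedon) scanning, mentioned in some of the dithering notes, reduces the diagonal “worm” artifacts of this plain left-to-right version.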

Encryption / Decryption    efg’s ImageCrypt Lab Report
Enhancement “Understanding Image Enhancement”
median filter, rank filter, Nagao filter (edge-enhancement and image smoothing), Weymouth/Overton filter, contrast enhancement
Vision Systems Design, July 1998, pp. 23-25; August 1998, pp. 21-24

Chapter 4, Image Enhancement, pp. 227-304
The Image Processing Handbook

Chapter 4, pp. 161-251
Digital Image Processing, 2nd edition

Erosion Dr. John Russ’ UseNet Post about erosion
Also see Skeletonization

Erosion and Dilation, pp. 129-135
Digital Image Processing:  Principles and Applications

Feature Extraction Digital Image Processing:  Principles and Applications, pp. 153-167
Filters Digital Filters

“Understanding Image-Filtering Algorithms”
spatial frequency, low-pass filtering, median filtering, high-pass, low-stop
Vision Systems Design, June 1998, pp. 19-24

Chapter 6, Image neighborhood filtering
High Performance Computer Imaging

Section 4.3, Spatial Filtering, pp. 189-200
Section 4.4, Enhancement in the Frequency Domain, pp. 201-218
Digital Image Processing, 2nd edition

Image Filtering in the Frequency Domain

Image Transformations and Filters

Filters, Convolution Note from Lazikas o Pontios about convolution filters.

Dr. John Russ’ UseNet Post (19 July 2000):
“… applying convolutions to the RGB channels in an image is usually wrong. For most purposes processing the image in an HSI space and modifying only the intensity channel, leaving the color information alone, produces the best results”


Convolution, pp. 67-73
Simplified Approach to Image Processing

Section 6.3, Linear filtering using convolution, High Performance Computer Imaging.
Discusses alternative ways to handle boundary pixels:
zero fill the edges, don’t write the edges, extend the edges, reflect the edges
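Of the boundary strategies listed, “extend the edges” is the simplest to sketch: clamp any out-of-range index to the border. A toy 3×3 convolution illustrating it (function names are mine; a real implementation would precompute padding instead of clamping per pixel):

```python
def clamp(i, n):
    """'Extend the edges': clamp an out-of-range index to the border."""
    return max(0, min(i, n - 1))

def convolve3x3(img, kernel):
    """3x3 convolution over a 2-D list, extending edge pixels outward."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for ky in range(3):
                for kx in range(3):
                    sy = clamp(y + ky - 1, h)   # source row, clamped
                    sx = clamp(x + kx - 1, w)   # source column, clamped
                    acc += img[sy][sx] * kernel[ky][kx]
            out[y][x] = acc
    return out
```

Swapping `clamp` for a zero-returning or reflecting index function gives the other boundary options from the list above.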

Chris Russ’ UseNet Post about Fast Convolution Algorithms

Filters, Derivatives pp. 250-254, The Image Processing Handbook

Filters, Sobel and Kirsch
pp. 255-268, The Image Processing Handbook
pp. 178-179, High Performance Computer Imaging

First Order Derivatives Operators, pp. 86-88
Roberts, Prewitt, Sobel, Frei-Chen;
Second Order Derivative Operators, pp. 88-93
Prewitt, Kirsch, Robinson (3-level, 5-level)
Simplified Approach to Image Processing
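As a concrete instance of the first-order operators listed above, the Sobel masks estimate horizontal and vertical gradients, and |Gx| + |Gy| is a common cheap approximation of the gradient magnitude. A sketch over interior pixels only (borders left at zero; names are mine):

```python
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_magnitude(img):
    """Approximate gradient magnitude |Gx| + |Gy| with Sobel masks."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = gy = 0.0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    p = img[y + dy][x + dx]
                    gx += p * SOBEL_X[dy + 1][dx + 1]
                    gy += p * SOBEL_Y[dy + 1][dx + 1]
            out[y][x] = abs(gx) + abs(gy)
    return out
```

Thresholding this output is the crudest possible edge detector; Canny adds smoothing, non-maximum suppression and hysteresis on top of the same gradients.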

Filters, Laplacian pp. 242-250,  The Image Processing Handbook

Gaussian Filter

LoG (Laplacian of Gaussian):  Chris Russ’ UseNet Post about LoG operators of various sizes

Median and Rank
Optimal median smoothing, Applied Statistics, 44(2): 258-264, W. Haerdle and M. Steiger, 1995.
Haerdle and Steiger’s approach is O(log p) per pixel, where p is the width of the kernel.

Median Filtering, pp. 95-107 (includes color median filtering, p. 103)
Simplified Approach to Image Processing

Fast Median Search: an ANSI C implementation

Median Filter

Adaptive Center Weighted Median Filter

Filtre Médian (3×3, 5×5)

Diagram of optimal way to compute median value from 3×3 array in hardware. Use the same logic in software.

Median filters are useful tools in digital signal processing. Wesley examines their use for removing impulsive signal noise while maintaining signal trends. Additional resources include aa1099.txt (listings).
Dr. Dobb’s Journal, October 1999.

Spyros’ UseNet Post about  Huang’s Algorithm

Rank operations, pp. 268-277
The Image Processing Handbook

Section 6.4, Nonlinear filtering I:  the median filter and its variations
High Performance Computer Imaging

Filters, Morphological Section 6.5, High Performance Computer Imaging
Nonlinear filtering II:  morphological filters
Filters, Sharpening p. 177, High Performance Computer Imaging. Includes discussion of unsharp masking.

3. Differentiation, Sharpening, Enhancement, Caricatures and Shape Morphing

Sharpening, pp. 77-79
Simplified Approach to Image Processing

Filters, Smoothing pp. 176-177, High Performance Computer Imaging

Gaussian Smoothing

Filters, Unsharp Mask Chris Russ’ UseNet Post about implementing unsharp mask
Chris Russ’ UseNet Post about two common uses for unsharp mask

Unsharp masking is a photographic technique that increases the apparent sharpness of an image. Tim presents an algorithm that implements this concept. Additional resources include (source code).
Dr. Dobb’s Journal, Nov. 1999.
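The idea behind the unsharp-mask entries above reduces to: sharpened = original + amount × (original − blurred). A sketch using a 3×3 box blur as the smoothing step (a Gaussian is more usual; the box keeps the sketch short; names are mine):

```python
def box_blur3(img):
    """3x3 box blur over a 2-D list, clamping indices at the edges."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    acc += img[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
            out[y][x] = acc / 9.0
    return out

def unsharp_mask(img, amount=1.0):
    """sharpened = original + amount * (original - blurred)."""
    blurred = box_blur3(img)
    return [[img[y][x] + amount * (img[y][x] - blurred[y][x])
             for x in range(len(img[0]))] for y in range(len(img))]
```

Flat regions pass through unchanged (original equals blurred there), while edges get an overshoot on each side, which is what reads as extra sharpness.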

Fingerprints See Recognition, Fingerprints
Fluorescence Imaging Fluorescence Imaging Applications Guide
Fluorescence Imaging Principles and Methods
Focus Chris Russ’ UseNet Post about AutoFocus methods
Fractal analysis pp. 282-288, The Image Processing Handbook
Gray Scales Perceptually Optimized Grayscales
Gamma correction efg’s Color Reference Library
Gray Scale Images Fast Gaussian blurring; Operations on gray scale images; Generalized order statistic filters; Contour construction for 2D image.  (Requires Magic Software)
Halftoning A Review of Halftoning Techniques

Digital Halftoning, Chapter 6
Simplified Approach to Image Processing

Gernot Hoffmann’s “Dithering + Halftoning”

Histogram equalization Use of computer graphic simulation to explain color histogram structure

pp. 233- 241, The Image Processing Handbook
pp. 146-150, High Performance Computer Imaging; includes discussion of adaptive histogram equalization
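Histogram equalization maps each gray level through the image’s normalized cumulative histogram. A sketch for a flat list of 8-bit intensities, using the classic (cdf(v) − cdf_min)/(n − cdf_min) remapping (function name is mine):

```python
def equalize(gray, levels=256):
    """Histogram-equalize a flat list of integer intensities in [0, levels)."""
    hist = [0] * levels
    for v in gray:
        hist[v] += 1
    # Cumulative distribution function of the histogram.
    cdf, total = [], 0
    for count in hist:
        total += count
        cdf.append(total)
    n = len(gray)
    cdf_min = next(c for c in cdf if c > 0)
    if n == cdf_min:              # constant image: nothing to spread out
        return list(gray)
    # Stretch the CDF so the darkest used level maps to 0, brightest to levels-1.
    return [round((cdf[v] - cdf_min) / (n - cdf_min) * (levels - 1)) for v in gray]
```

The adaptive variant mentioned above applies the same mapping per local tile instead of globally.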

Histograms Display/Print Histograms (R, G, B, H, S, V)
 Show Image

  efg’s HistoStretchGrays Lab Report

Histogram-based Operations: Contrast stretching, equalization

Histogram Specification, pp. 49-54
Simplified Approach to Image Processing

Histogram-based operations, pp. 143-150
High Performance Computer Imaging

Three-way histograms (RGB, YUV, HSI)
The Image Processing Handbook

Section 4.2.2, Histogram Processing, pp. 171-185
Digital Image Processing, 2nd edition

Hough Transform Hough Transform:  Journal and Conference Papers

8. Detection of Structure in Noisy Pictures and Dot Patterns

On the Efficient Sampling Interval of the Parameter in Hough Transform

Hough Transforms


Kim Madsen’s UseNet Post about Hough Transform

Image Analogies
Image Query Fast Multiresolution Image Querying

Shape Queries Using Image Databases

Also see book Wavelets for Computer Graphics

Interpolation and Extrapolation See Resampling

Michel Chabroux’s UseNet Post explaining bilinear interpolation

Jitter Infrared jitter imaging data reduction algorithms
Lens Transformation

Nonlinear Lens Distortion

Computer Generated Angular Fisheye Projections

Eric Rudd’s UseNet Post about fisheye lens simulation

Lighting efg’s Color Reference Library
Masks Chapter 7, Processing Binary Images, pp. 431-508
The Image Processing Handbook
Measurements Chapter 8, Image Measurements, pp. 509-574
The Image Processing Handbook

Chapter 16, Size and Shape, pp. 485-508
Practical Handbook on Image Processing for Scientific Applications

Metamorphosis Feature-Based Image Metamorphosis
Metrology Metrology based on Computer Vision
Moiré Methods “Moiré Methods Make Shape Recognition Easier”
Vision Systems Design, March 1997, pp. 32-37

Modelization of the Moiré Phenomenon

Moiré Effects With Overlayed Line Screens

Moments of Inertia

On the Calculation of Arbitrary Moments of Polygons

Morphology Understanding mathematical morphology
Vision Systems Design, May 1999

Understanding more mathematical morphology 
Vision Systems Design, June 1999

Morphological Image Analysis:  Principles and Applications (June 1999). Author’s page

The Morphology Digest is intended as a forum between workers in the field of Mathematical Morphology and related fields (stochastic geometry, random set theory, image algebra, etc.).

Mathematical Morphology and Image Interpolation

SDC Morphology Toolbox for MATLAB:   includes fast queue-based algorithms for distance transform, watershed, reconstruction, labeling, area-opening, etc.

Mosaicing Rho Ophiuchi Mosaic Processing Example

An Introduction to Image Mosaicing

Image Registration and Mosaicking

Automatic Panoramic Image Merging

Mosaicing with Super Resolution

Motion Image Sequence Segmentation

Estimation of Visual Motion in Image Sequences

Chris Russ’ UseNet Post with suggestion on how to detect motion by image difference

Section 5.4.2, Removal of Blur Caused by Uniform Linear Motion, pp. 272-278
Section 7.5, The Use of Motion in Segmentation
Digital Image Processing


Introduction to Active Contours and Visual Dynamics

Motion and Time Sequence Analysis

Thomas Kragh’s UseNet Post about Motion Blur

Chapter 13, Orientation and Velocity, 415-440
Practical Handbook on Image Processing for Scientific Applications

Neural Networks Neural Network as a Tool for Feature Selection
Noise Removal Noise Removal from Images

The SUSAN algorithms cover image noise filtering, edge finding and corner finding.

Also see Median Filters

Nyquist Limit Using a Nyquist Chart to Evaluate Digital Camera Systems
Part 1.

Part 2.

Optimization Using MMX Technology to Speed Up Machine Vision Algorithms
Part 1.
Part 2.
Panoramic Images Helmut Dersch’s “Panorama Tools: Documentation, Info and More Uses”

(Be wary of IPIX, however.)

Perimeter efg’s E-mail to Engineering Student at kmutt about how to compute the perimeter of an object
Photogrammetry See General Info page
Point Operations Monadic image operations:  add constant, subtract constant, multiply constant, divide into constant, divide by constant, or constant, and constant, xor constant

Dyadic image operations:  add, subtract, multiply, divide, min, max, or, and, xor
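Monadic and dyadic point operations are just per-pixel maps over one or two images; a sketch over flat pixel lists (helper names are mine):

```python
def monadic(img, op):
    """Single-image point operation, e.g. add-constant or xor-constant."""
    return [op(p) for p in img]

def dyadic(a, b, op):
    """Two-image point operation applied pixel by pixel."""
    return [op(p, q) for p, q in zip(a, b)]
```

For example, `monadic(img, lambda p: min(p + 50, 255))` is add-constant with saturation, and `dyadic(a, b, max)` is the per-pixel max of two images.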

Radon Transform
Recognition Recognizing Flexible Objects

Point Pattern Matching (using the Wayback Machine)

Chris Russ’ UseNet Post about fixing broken lines in image recognition

Pattern Classification and Scene Analysis (book)

“Moiré Methods Make Shape Recognition Easier”
Vision Systems Design, March 1997, pp. 32-37

Chapter 9, Recognition and Interpretation, pp. 571-661
Digital Image Processing

See also Skeletonization

Recognition, Faces The Face Detection Home Page

Face Recognition Home Page

Multi-Modal System for Locating Heads and Faces

Locating Faces and Facial Parts

Bayesian Modeling of Facial Similarity

Computer Vision Face Tracking For Use in a Perceptual User Interface

The Biometric Consortium, Research and Databases
Face, Fingerprints, Handwriting, Voice

Face detection, recognition and analysis

Intelligent Biometric Techniques in Fingerprint and Face Recognition 

Recognition, Fingerprints Biometrics

The Biometric Consortium, Research and Databases
Face, Fingerprints, Handwriting, Voice

Fingerprint Enhancement,

FBI Fingerprint Image Compression Standard

FAQ about FBI’s Wavelet/Scalar Quantization Specification for compression of digitized gray-scale fingerprint images.

Intelligent Biometric Techniques in Fingerprint and Face Recognition    US  UK

Recognition, Handwriting The Biometric Consortium, Research and Databases
Face, Fingerprints, Handwriting, Voice

Fingerprints and handwriting

NIST Form-Based Handprint Recognition System

Recognition, Iris Iris Recognition Homepage
Recognition, License Plates Vehicle Number Plate Recognition Home Page
Universidade de Trás-os-Montes e Alto Douro

A Neural Network Based Artificial Vision System for Licence Plate Recognition

Recognition, Optical Character (OCR) Character Recognition

Optical Character Recognition:  Journal and Conference Papers

Character Recognition

OCR/ICR Documents

Character Recognition by Feature Point Extraction

Document Understanding and  Character Recognition WWW Server

Chapter 8, Optical Character Recognition, pp. 275-304
Chapter 9, Symbol Recognition, pp. 305-356
Algorithms for Image Processing and Computer Vision

Geometry in Action

Recognition, Pattern Pattern Recognition Links

Understanding Pattern Recognition, Vision Systems Design, July 1999.
Understanding More Pattern-Recognition Techniques, Vision Systems Design, Aug 1999, pp. 21-25

Pattern Recognition Resources,

Pattern Recognition Information,

Statistical Pattern Recognition & Artificial Neural Network Library

Optimizing Vision Applications: Which is Better, Blob-Centroid or Grayscale Search?

“Red Eye” See efg’s  Color and Computers page
Representation Chapter 8, Representation and Description, pp. 483-569
Digital Image Processing
Resampling Advanced Image Processing:  Image Interpolation and Filtering

Interpolation for Scaling, Rotation, Perspective and Morphing

Interpolation (Nearest Neighbor, Bilinear, Cubic Convolution, B-Spline), pp. 110-123
Simplified Approach to Image Processing

Digital Image Processing:  Principles and Applications, pp. 117-122

Image Processing By Interpolation and Extrapolation

Efficient Image Magnification by Bicubic Spline Interpolation

Mathematical Morphology and Image Interpolation

Section 8.3.2, “Interpolation,” p. 269
Practical Handbook on Image Processing for Scientific Applications

Paul Heckbert’s “zoom” program

Non-Linear Magnification Home Page

Note from Lazikas o Pontios about Resampling to zoom.

Testing Interpolator Quality

Restoration, Reconstruction Electronic Imaging, a Tool for the Reconstruction of Faded Color Photographs

Image Restoration

“Novel Blind Deconvolution Techniques Restore Blurred Images”
Researchers are using blind-image deconvolution to automatically deblur telescope and microscope images.
Vision Systems Design, Nov. 98, pp. 35-41

Chapter 3, Correcting Image Defects, pp. 161-226
The Image Processing Handbook

Chapter 5, Image Restoration, pp. 253-306
Digital Image Processing

Chapter 6, Image Restoration, pp. 220-249
Algorithms for Image Processing and Computer Vision

Chapter 9, Restoration and Reconstruction, pp. 287-306
Practical Handbook on Image Processing for Scientific Applications

Rotation Turn, Turn, Turn:  Using the Graphics Class to Rotate Images

HOWTO: Display a Bitmap into a Rotated or Non-rectangular Area
(Using Windows 2000 “WarpBlt” API call)

“High Accuracy Rotation of Images,” in Computer Vision, Graphics and Image Processing, Vol. 54, No. 4, July 1992, pp. 340-344.

One-pass and multipass rotation, Section 8.5
High Performance Computer Imaging.

2-pass and 3-pass rotations, p. 107
Practical Handbook on Image Processing for Scientific Applications

FAQ, Section 3.01: In Windows NT the PlgBlt API call can be used for bitmap rotation if RC_BITBLT is supported by the device.

See efg’s RotateScanline, RotatePixels and FlipReverseRotate Lab Reports.

Scaling Bitmap Scaling

Section 8.4, Image Scaling, High Performance Computer Imaging

Segmentation Digital Image Processing:  Principles and Applications, pp. 124-152

Image Segmentation and Mathematical Morphology

Image Sequence Segmentation

Many image-analysis tasks must first separate the image into clearly defined regions. Lee’s algorithm performs such a separation and presents the results in a fashion amenable to further study. Additional resources include aa798.txt (listings) and (source code).   July 1998, Dr. Dobb’s Journal.

Skin Cancer Segmentation program

Efficiently Computing a Good Segmentation

Understanding Image Segmentation Basics
Vision Systems Design, Sept. 98; Oct. 98, pp. 20-22

Understanding Region-Based Segmentation
Vision Systems Design, Nov. 98, pp. 21-23

Understanding Oversegmentation and Region Merging
Vision Systems Design, Dec. 98, pp. 21-23

Understanding Undersegmentation and Region Splitting
Vision Systems Design, Feb. 99, pp. 16-19

Understanding Edge- and Line-Based Segmentation
Vision Systems Design, Mar. 99, pp. 27-32

Understanding other edge- and line-based segmentation techniques 
Vision Systems Design
, Apr. 99


Color Image Segmentation (with C++ code)

Unsupervised Segmentation

Chapter 6, Segmentation and Thresholding, pp. 371-430
The Image Processing Handbook

Chapter 7, Image Segmentation, pp. 413-482
Digital Image Processing

Chapter 15, Segmentation, pp. 474-484
Practical Handbook on Image Processing for Scientific Applications

Also see Segmentation on efg’s Color Reference Library page

Shape from Shading A method for determining the shape of a surface from its image
Sharpness see links under MTF
Skeletonization Digital Image Processing:  Principles and Applications, pp. 137-139

The Scale Space Skeletonization Page

Hilditch’s Algorithm for Skeletonization

Skeletonization in 2D, 3D and 4D images

Comparison of Skeletonization Methods

Skew Skew Correction
Snakes Gradient Vector Flow (GVF) snake.  Active contours — or snakes — are computer-generated curves that move within images to find object boundaries.

GVF snake for *nix boxes source code


Active Snakes

Active Contours (Snakes)

Active contour models of shape or “snakes”

Snakes:  Active Contour Models


Special Effects Beyond Photography — The Digital Darkroom (Out of Print, 1995)
Publisher Web Site:
Spectroscopy USGS Imaging spectroscopy analysis: identify and map materials through spectroscopic remote sensing, on the earth and throughout the solar system.

About Imaging Spectroscopy

Multispectral Scanner Landsat Data

Steganography The information hiding homepage – digital watermarking & steganography


Steganography & Digital Watermarking

Steganography/Watermarking Information

Steganography and Digital Watermarks

Also see watermarking

Stereoscopic Vision Stereo pair displays of surface range images

Single-Image Stereograms, July 1995, Dr. Dobb’s Journal.

Stereoscopic, or true 3-D, images take into account depth information that’s lost when conventional 3-D images are projected onto a PC’s 2-D screen. In addition to discussing hardware and software stereoscopic requirements, our authors present and implement algorithms for generating left- and right-eye views fundamental to stereoscopic viewing.   April 1994, Dr. Dobb’s Journal.

Stereoscopic Vision and Perspective Projection

Stereo Vision

Texture Fast Marble Texture Algorithm

pp. 278-282, The Image Processing Handbook

Chapter 4, Texture, pp 150-175
Algorithms for Image Processing and Computer Vision

Chapter 14, Scale and Texture, pp. 441-469
Practical Handbook on Image Processing for Scientific Applications

Thermal Imaging “Thermal Imaging Is Gaining Acceptance as a Diagnostic Tool”
Biophotonics International, Nov/Dec 1998, pp. 48-53
Thresholding Chapter 6, Segmentation and Thresholding, pp. 371-430
The Image Processing Handbook

Section 7.3, Thresholding, pp. 443-457
Digital Image Processing

Transformation Image Transformations (Chapter 7)
Frequency Domain; Discrete Fourier Transform; FFT; Discrete Cosine Transform
Simplified Approach to Image Processing
Transformation, Affine Affine transformation software

Affine texture mapping is fundamental to many forms of 3D rendering, including light interpolation and other sampling-type operations. Additional resources include tmapper.txt (listings) and (source code).  Dr. Dobb’s Journal, July 1998.

Section 8.6, Affine transformation, High Performance Computer Imaging

Curvature Scale Space image under affine transforms

Transformation, Special effects Section 8.8, Special-effects filters, High Performance Computer Imaging.
Shows radial transformation

Spatial Transformations (affine, perspective, bilinear, meshwarp)
Simplified Approach to Image Processing

Transparency Sean Dockery’s UseNet Post about Transparency including links to Microsoft Technical Reports
Triangle Intersection Triangle Intersection Tests
Dr. Dobb’s Journal, August 2000
Unsharp Masking “Real” Digital Unsharp Masking:  A Digital Equivalent to a Film Technique
Warping Digital Image Processing:  Principles and Applications, pp. 115-117

Fields-Based Warping, pp. 234-243
Simplified Approach to Image Processing

Watermarking nformation Hiding Techniques for Steganography and Digital Watermarking
by Fabien Petitcolas and Stefan Katzenbeisser  US  UK
Author’s Web Site

The information hiding homepage – digital watermarking & steganography

Digital Watermarking World

Invisible Watermarking:  Protecting Digital Pictures

References on Multimedia Watermarking and Data Hiding Research & Technology

Watermarking of Video and Multimedia Data

Digital watermarking: perfecting the art of security

Digital Watermarking

Digital Watermarking: a solution to Electronic Copyright Management Systems Requirements

Watermarking of Digital Images

Also see Steganography
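In its simplest form, invisible watermarking hides payload bits in the least significant bit (LSB) of each pixel. The toy sketch below is my own illustration of that idea only; the references above describe schemes that are actually robust to compression, cropping, and other attacks, which plain LSB embedding is not.

```python
# LSB watermark embedding: hide one payload bit in the least
# significant bit of each pixel.  Toy sketch, not a robust scheme.

def embed(pixels, bits):
    """Hide one bit in the LSB of each of the first len(bits) pixels."""
    out = list(pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | b
    return out

def extract(pixels, n):
    """Recover the first n hidden bits."""
    return [p & 1 for p in pixels[:n]]

cover = [200, 201, 97, 54, 120, 33]
mark = [1, 0, 1, 1]
stego = embed(cover, mark)
assert extract(stego, 4) == mark
# Each pixel changes by at most 1, so the mark is invisible:
assert all(abs(a - b) <= 1 for a, b in zip(cover, stego))
```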

Watershed Transformation Watershed Edge Detection, pp. 148-153
Digital Image Processing:  Principles and Applications

Several papers on Watershed algorithms

SDC Morphology Toolbox for MATLAB:   includes fast queue-based algorithms for distance transform, watershed, reconstruction, labeling, area-opening, etc.

Watershed Transform

Image segmentation problems in mathematical morphology

Zoom See Resampling

Image Registration
Automated Image Registration
Automatic Panoramic Image Merging
Automatic Registration of SAR Images and Digitized Maps
Bibliography Michael Jacobs’ UseNet Post with many references to journal articles
CISG Registration Toolkit
Cross-Correlation Fast Normalized Cross-Correlation
Elastic Image Registration

Elastic Imaging Registration and Pathology Detection

FFT Reddy, B. Srinivasa and B. N. Chatterji, “An FFT-Based Technique for Translation, Rotation, and Scale-Invariant Image Registration,” IEEE Trans. Image Processing vol. 5, no. 8 (1996 August) pp. 1266-1271.
FLIRT FLIRT is software for linear image registration, which is part of FSL
Image Matching and Registration
Image Matching by Maximisation of Mutual Information
Image Registration Special Issue of Pattern Recognition on Image Registration
Image Registration and Mosaicking
Image Registration Technology KT-Tech, Inc.
Matching Algorithms for Medical Image Processing
Medical Image Registration New Book (June 2001)
Medical Image Registration using Geometric Hashing
Multimodality Medical Image Registration
Raw Image Registration NewSips System


Retrospective Registration Evaluation Project
Role of Image Registration in Brain Mapping
Survey of image registration techniques Abstract from ACM Computing Surveys, Vol 24., No. 4, Dec 92, pp. 325-376
Survey of Medical Image Registration J.B.A. Maintz, Medical Image Analysis, 2(1):1-36, 1998.

An overview of medical image registration methods. In Symposium of the Belgian hospital physicists association (SBPH/BVZF), volume 12, pages V:1-22, 1996/1997.


3D Image Registration of CT Angiography
3D Image Registration for Sculptors

Mathematical Techniques
Also see Fourier Analysis and Wavelets Sections of efg’s Mathematics Page

DCTs Implementing Fast DCTs (Discrete Cosine Transforms)
Dr. Dobb’s Journal, March 1999, pp. 115-119
Fast Hartley Transform Hartley Transform

Note:  According to a letter to the editor in the July 1999 “Embedded Systems Programming” (p. 7) the Fast Hartley Transform is covered under U.S. Patent Number 4,646,256.  Use of this algorithm for noncommercial research must be negotiated with the Office of Technology Licensing at Stanford University.

Geometry Vision Geometry and Mathematics

Section 2.5, Imaging Geometry, pp. 51-71
Digital Image Processing
, 2nd edition

Chapter 8, Geometry, pp. 263-286
Practical Handbook on Image Processing for Scientific Applications

Chapter 8, Image geometric operations
High Performance Computer Imaging
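The basic imaging-geometry relation treated in these chapters is perspective (pinhole) projection: a 3-D point (X, Y, Z) maps to image coordinates (f·X/Z, f·Y/Z) for focal length f. A minimal sketch of my own, not code from the cited books:

```python
# Perspective (pinhole) projection of a 3-D point onto the image plane.

def project(point3d, f):
    """Project (X, Y, Z) to (f*X/Z, f*Y/Z) for focal length f."""
    X, Y, Z = point3d
    if Z <= 0:
        raise ValueError("point must lie in front of the camera")
    return (f * X / Z, f * Y / Z)

# A point twice as far away projects half as large:
print(project((2.0, 1.0, 4.0), f=1.0))  # (0.5, 0.25)
print(project((2.0, 1.0, 8.0), f=1.0))  # (0.25, 0.125)
```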

Image Quality IQM Approach:  Obtain Quality from Image Power Spectrum

Objective Image Quality Measure Derived from Digital Image Power Spectra
Norman B. Nill, Optical Engineering, April 1992, Vol. 31, No. 4, pp. 813-825

Also see Modulation Transfer Function below

Image Transforms Chapter 3, pp. 81-159
Fourier Transform (3.1), Walsh Transform (3.5.1),
Hadamard Transform (3.5.2), Discrete Cosine Transform (3.5.3),
Haar Transform (3.5.4), Slant Transform (3.5.5), Hotelling Transform (3.6)
Digital Image Processing
, 2nd edition
Modulation Transfer Function Random Test Patterns To Evaluate MTF

Understanding image sharpness part 1:  resolution and MTF curves in film and lenses.

Understanding image sharpness part 2:  resolution and MTF curves in scanners and sharpening

Understanding MTF Testing

How to interpret MTF Graphs

Use of Sinusoidal Test Patterns for MTF Evaluation

Image Quality Evaluation:  Modulation Transfer Function

What is a MTF Curve?
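The central quantity behind all of these MTF resources can be stated in two lines: the modulation of a sinusoidal pattern is (Imax − Imin)/(Imax + Imin), and the MTF at a given spatial frequency is the output modulation divided by the input modulation. A small sketch with invented example values (my own illustration, not from the linked articles):

```python
# Modulation of a sinusoidal pattern and the resulting MTF value
# at one spatial frequency.

def modulation(samples):
    """(Imax - Imin) / (Imax + Imin) for a sampled sinusoidal pattern."""
    return (max(samples) - min(samples)) / (max(samples) + min(samples))

# Target pattern swings 50..150; after imaging/blur it swings 80..120.
target = [100 + 50 * s for s in (1, 0, -1, 0)]
imaged = [100 + 20 * s for s in (1, 0, -1, 0)]

mtf = modulation(imaged) / modulation(target)
print(round(mtf, 2))  # 0.4
```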

Moments Section 8.3.4, Moments, pp. 514-518
Digital Image Processing
Matrices Matrix Operations for Image Processing
Point Spread Functions The point-spread function (PSF) model of blurring

Point Spread Function of Imaging System (diagram)

Point spread function of the human eye obtained by a dual double-pass method

Point and Line Spread Functions

ACIS/HRMA Point Spread Function

Statistics Statistics and Image Processing
White Balance White Balance Patents


Posted in Computer Vision, Computing Technology, Entertainment, Free Tools, Journals & Conferences, My Research Related, Research Menu

Research Writing Up & Publishing

Posted by Hemprasad Y. Badgujar on February 5, 2015

Research Writing Up & Publishing

The Sense of Style: The Thinking Person’s Guide to Writing in the 21st Century
– Steven Pinker, Harvard University
Writing your thesis – Champion et al.
How to read a scientific paper.
Top Ten Tips for doing your PhD – only 10?
Advice on Research and writing
Writing tips – covering a wide range of issues, from abbreviations, to punctuation, to writing style.
Guide to Grammar and Writing by Charles Darling
How to have a bad career in Research/Academia by David A. Patterson.
How to Write a Master’s Thesis in Computer Science by William D. Shoaff
Writing and Presenting Your Thesis or Dissertation by S. Joseph Levine, Ph.D.
How To Write A Dissertation or Bedtime Reading For People Who Do Not Have Time To Sleep
List of links on being a graduate student
Notes On The PhD Degree by D. Comer.
On Being A Scientist: Responsible Conduct In Research by the National Academy of Sciences.
You and Your Research.
Library notes for Engineering Researchers.
PhD Thesis Structure and Content by Christopher Clack.
Discussion on Ph.D. thesis proposals in computing science by H. Lauer.
Tips for a PhD and here.
Guide for writing a funding proposal by J. Levine.
How to publish in top journals.
How to Write Publishable Papers and here is a list of journals (mostly non-IT).
Networking on the Network – A Guide to Professional Skills for PhD Students by Phil Agre.
PhD writing links
Your PhD Thesis: How to Plan, Draft, Revise and Edit Your Thesis – a book by Brewer et al.

Posted in Documentations, Journals & Conferences, My Research Related, Research Menu

How to get started with wxWidgets on Windows

Posted by Hemprasad Y. Badgujar on February 3, 2015

How to get started with wxWidgets on Windows

wxWidgets is a cross-platform GUI library that is also available for Windows. You can get started with using wxWidgets in a few steps:

  1. Download and install the Windows installer for the current stable release of wxWidgets from its download page. It installs the source and build files under C:\. For example, in C:\wxWidgets-3.0.2\
  2. wxWidgets needs to be built before it can be used with your application. Go to C:\wxWidgets-3.0.2\build\msw and open the .sln file that matches the Visual Studio version you intend to use for your application. For example, I open wx_vc10.sln using Visual Studio 2012.
  3. Choose one of the build types: Debug, Release, DLL Debug or DLL Release and build the solution. The resulting .lib files are placed in C:\wxWidgets-3.0.2\lib\vc_lib
  4. Create a new Visual Studio solution for your C++ application. Remember that it has to be a Win32 Project, not a Win32 Console Project. The difference is that the main function is defined inside wxWidgets and does not need to be defined in your application code.
  5. Add a .cpp file to your solution and copy the Hello World code into it.
  6. Add C:\wxWidgets-3.0.2\include and C:\wxWidgets-3.0.2\include\msvc as additional include directories to the solution.
  7. Add C:\wxWidgets-3.0.2\lib\vc_lib as an additional library directory to the solution.
  8. Build the solution and run it to see an empty wxWidgets window.

Posted in Computer Vision, Entertainment, Free Tools, My Research Related, OpenCV

Building a Beowulf cluster with Ubuntu

Posted by Hemprasad Y. Badgujar on December 25, 2014

Building a Beowulf cluster with Ubuntu

The beowulf cluster article on Wikipedia describes the Beowulf cluster as follows:

“A Beowulf cluster is a group of what are normally identical, commercially available computers, which are running a Free and Open Source Software (FOSS), Unix-like operating system, such as BSD, GNU/Linux, or Solaris. They are networked into a small TCP/IP LAN, and have libraries and programs installed which allow processing to be shared among them.” – Wikipedia, Beowulf cluster, 28 February 2011.

This means a Beowulf cluster can easily be built with “off the shelf” computers running GNU/Linux in a simple home network. So building a Beowulf-like cluster is within reach if you already have a small TCP/IP LAN at home with desktop computers running Ubuntu Linux (or any other Linux distribution).

There are many ways to install and configure a cluster. There is OSCAR(1), which allows any user, regardless of experience, to easily install a Beowulf type cluster on supported Linux distributions. It installs and configures all required software according to user input.

There is also the NPACI Rocks toolkit(2), which incorporates the latest Red Hat distribution and cluster-specific software. Rocks addresses the difficulties of deploying manageable clusters. Rocks makes clusters easy to deploy, manage, upgrade and scale.

Both of the aforementioned toolkits for deploying clusters were made to be easy to use and require minimal expertise from the user. But the purpose of this tutorial is to explain how to manually build a Beowulf-like cluster. Basically, the toolkits mentioned above do most of the installing and configuring for you, which renders the learning experience moot. So it would not make much sense to use any of these toolkits if you want to learn the basics of how a cluster works. This tutorial therefore explains how to manually build a cluster, by manually installing and configuring the required tools. I assume that you have some basic knowledge of Linux-based operating systems and know your way around the command line, but I have tried to make this as easy as possible to follow. Keep in mind that this is new territory for me as well, and there’s a good chance that this tutorial shows methods that may not be the best.

I myself started off with the clustering tutorial from SCFBio which gives a great explanation on how to build a simple Beowulf cluster.(3) It describes the prerequisites for building a Beowulf cluster and why these are needed.


  • What’s a Beowulf Cluster, exactly?
  • Building a virtual Beowulf Cluster
  • Building the actual cluster
  • Configuring the Nodes
    • Add the nodes to the hosts file
    • Defining a user for running MPI jobs
    • Install and setup the Network File System
    • Setup passwordless SSH for communication between nodes
    • Setting up the process manager
      • Setting up Hydra
      • Setting up MPD
  • Running jobs on the cluster
    • Running MPICH2 example applications on the cluster
    • Running bioinformatics tools on the cluster
  • Credits
  • References

What’s a Beowulf Cluster, exactly?

The typical setup of a beowulf cluster

The definition I cited before is not very complete. The book “Engineering a Beowulf-style Compute Cluster”(4) by Robert G. Brown gives a more detailed answer to this question (if you’re serious about this, this book is a must read). According to this book, there is an accepted definition of a beowulf cluster. This book describes the true beowulf as a cluster of computers interconnected with a network with the following characteristics:

  1. The nodes are dedicated to the beowulf cluster.
  2. The network on which the nodes reside is dedicated to the beowulf cluster.
  3. The nodes are Mass Market Commercial-Off-The-Shelf (M2COTS) computers.
  4. The network is also a COTS entity.
  5. The nodes all run open source software.
  6. The resulting cluster is used for High Performance Computing (HPC).

Building a virtual Beowulf Cluster

It is not a bad idea to start by building a virtual cluster using virtualization software like VirtualBox. I simply used my laptop running Ubuntu as the master node and created two virtual compute nodes running Ubuntu Server Edition in VirtualBox. A virtual cluster allows you to build and test everything without the extra hardware. However, this method is only meant for testing and is not suited if you want increased performance.

When it comes to configuring the nodes for the cluster, building a virtual cluster is practically the same as building a cluster with actual machines. The difference is that you don’t have to worry about the hardware as much. You do have to properly configure the virtual network interfaces of the virtual nodes. They need to be configured in a way that the master node (e.g. the computer on which the virtual nodes are running) has network access to the virtual nodes, and vice versa.

Building the actual cluster

It is good practice to first build and test a virtual cluster as described above. If you have some spare computers and network parts lying around, you can use those to build the actual cluster. The nodes (the computers that are part of the cluster) and the network hardware are the usual kind available to the general public (beowulf requirements 3 and 4). In this tutorial we’ll use the Ubuntu operating system to power the machines and open source software to allow for distributed parallel computing (beowulf requirement 5). We’ll test the cluster with cluster-specific versions of bioinformatics tools that perform some sort of heavy calculations (beowulf requirement 6).

The cluster consists of the following hardware parts:

  • Network
  • Server / Head / Master Node (common names for the same machine)
  • Compute Nodes
  • Gateway

All nodes (including the master node) run the following software:

  • Ubuntu (or another GNU/Linux distribution)
  • An SSH server
  • MPICH2
  • NFS support (nfs-kernel-server on the master node, nfs-common on the compute nodes)

I will not focus on setting up the network (parts) in this tutorial. I assume that all nodes are part of the same private network and that they are properly connected.

Configuring the Nodes

Some configurations need to be made to the nodes. I’ll walk you through them one by one.

Add the nodes to the hosts file

It is easier if the nodes can be accessed with their host name rather than their IP address. It will also make things a lot easier later on. To do this, add the nodes to the hosts file of all nodes.(8) (9) All nodes should have a static local IP address set. I won’t go into details here as this is outside the scope of this tutorial. For this tutorial I assume that all nodes are already properly configured to have a static local IP address.

Edit the hosts file (sudo vim /etc/hosts) like below, and remember that you need to do this for all nodes. The 192.168.1.* addresses are examples; use the static IP addresses of your own nodes.	localhost	master	node1	node2	node3

Make sure it doesn’t look like this:	localhost	master	node1	node2	node3

nor like this:	localhost	master	master	node1	node2	node3

Otherwise other nodes will try to connect to localhost when trying to reach the master node.

Once saved, you can use the host names to connect to the other nodes,

$ ping -c 3 master
PING master ( 56(84) bytes of data.
64 bytes from master ( icmp_req=1 ttl=64 time=0.606 ms
64 bytes from master ( icmp_req=2 ttl=64 time=0.552 ms
64 bytes from master ( icmp_req=3 ttl=64 time=0.549 ms

--- master ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 0.549/0.569/0.606/0.026 ms

Try this with different nodes on different nodes. You should get a response similar to the above.

In this tutorial, master is used as the master node. Once the cluster has been set up, the master node will be used to start (spawn) jobs on the cluster. The compute nodes are node1 to node3 and will thus execute the jobs.

Defining a user for running MPI jobs

Several tutorials explain that all nodes need a separate user for running MPI jobs.(8) (9) (6) I haven’t found a clear explanation of why this is necessary, but there could be several reasons:

  1. There’s no need to remember different user names and passwords if all nodes use the same username and password.
  2. MPICH2 can use SSH for communication between nodes. Passwordless login with the use of authorized keys only works if the username matches the one set for passwordless login. You don’t have to worry about this if all nodes use the same username.
  3. The NFS directory can be made accessible for the MPI users only. The MPI users all need to have the same user ID for this to work.
  4. The separate user might require special permissions.

The command below creates a new user with username “mpiuser” and user ID 999. Giving a user ID below 1000 prevents the user from showing up in the login screen for desktop versions of Ubuntu. It is important that all MPI users have the same username and user ID. The user IDs for the MPI users need to be the same because we give access to the MPI user on the NFS directory later. Permissions on NFS directories are checked with user IDs. Create the user like this,

$ sudo adduser mpiuser --uid 999

You may use a different user ID (as long as it is the same for all MPI users). Enter a password for the user when prompted. It’s recommended to give the same password on all nodes so you have to remember just one password. The above command should also create a new directory /home/mpiuser. This is the home directory for user mpiuser and we will use it to execute jobs on the cluster.

Install and setup the Network File System

Files and programs used for MPI jobs (jobs that are run in parallel on the cluster) need to be available to all nodes, so we give all nodes access to a part of the file system on the master node. Network File System (NFS) enables you to mount part of a remote file system so you can access it as if it is a local directory. To install NFS, run the following command on the master node:

master:~$ sudo apt-get install nfs-kernel-server

And in order to make it possible to mount a Network File System on the compute nodes, the nfs-common package needs to be installed on all compute nodes:

$ sudo apt-get install nfs-common

We will use NFS to share the MPI user’s home directory (i.e. /home/mpiuser) with the compute nodes. It is important that this directory is owned by the MPI user so that all MPI users can access this directory. But since we created this home directory with the adduser command earlier, it is already owned by the MPI user,

master:~$ ls -l /home/ | grep mpiuser
drwxr-xr-x   7 mpiuser mpiuser  4096 May 11 15:47 mpiuser

If you use a different directory that is not currently owned by the MPI user, you must change its ownership as follows,

master:~$ sudo chown mpiuser:mpiuser /path/to/shared/dir

Now we share the /home/mpiuser directory of the master node with all other nodes. For this the file /etc/exports on the master node needs to be edited. Add the following line to this file,

/home/mpiuser *(rw,sync,no_subtree_check)

You can read the man page to learn more about the exports file (man exports). After the first install you may need to restart the NFS daemon:

master:~$ sudo service nfs-kernel-server restart

This also exports the directories listed in /etc/exports. In the future, when /etc/exports is modified, you need to run the following command to re-export the directories listed in it:

master:~$ sudo exportfs -a

The /home/mpiuser directory should now be shared through NFS. In order to test this, you can run the following command from a compute node:

$ showmount -e master

In this case this should print the path /home/mpiuser. All data files and programs that will be used for running an MPI job must be placed in this directory on the master node. The other nodes will then be able to access these files through NFS.

The firewall is enabled by default on Ubuntu, and it will block access when a client tries to access an NFS shared directory. So you need to add a rule with UFW (a tool for managing the firewall) to allow access from a specific subnet. If the IP addresses in your network have the format 192.168.1.*, then is the subnet. Run the following command to allow incoming access from that subnet,

master:~$ sudo ufw allow from

You need to run this on the master node and replace “” by the subnet for your network.

You should then be able to mount master:/home/mpiuser on the compute nodes. Run the following commands to test this,

node1:~$ sudo mount master:/home/mpiuser /home/mpiuser
node2:~$ sudo mount master:/home/mpiuser /home/mpiuser
node3:~$ sudo mount master:/home/mpiuser /home/mpiuser

If this fails or hangs, restart the compute node and try again. If the above command runs without a problem, you should test whether /home/mpiuser on any compute node actually has the content from /home/mpiuser of the master node. You can test this by creating a file in master:/home/mpiuser and checking if that same file appears in node*:/home/mpiuser (where node* is any compute node).

If mounting the NFS shared directory works, we can make it so that the master:/home/mpiuser directory is automatically mounted when the compute nodes are booted. For this the file /etc/fstab needs to be edited. Add the following line to the fstab file of all compute nodes,

master:/home/mpiuser /home/mpiuser nfs defaults 0 0

Again, read the man page of fstab if you want to know the details (man fstab). Reboot the compute nodes and list the contents of the/home/mpiuser directory on each compute node to check if you have access to the data on the master node,

$ ls /home/mpiuser

This should list the files from the /home/mpiuser directory of the master node. If it doesn’t immediately, wait a few seconds and try again. It might take some time for the system to initialize the connection with the master node.

Setup passwordless SSH for communication between nodes

For the cluster to work, the master node needs to be able to communicate with the compute nodes, and vice versa.(8) Secure Shell (SSH) is usually used for secure remote access between computers. By setting up passwordless SSH between the nodes, the master node is able to run commands on the compute nodes. This is needed to run the MPI daemons on the available compute nodes.

First install the SSH server on all nodes:

$ sudo apt-get install ssh

Now we need to generate an SSH key for all MPI users on all nodes. The SSH key is by default created in the user’s home directory. Remember that in our case the MPI user’s home directory (i.e. /home/mpiuser) is actually the same directory for all nodes: /home/mpiuser on the master node. So if we generate an SSH key for the MPI user on one of the nodes, all nodes will automatically have an SSH key. Let’s generate an SSH key for the MPI user on the master node (but any node should be fine),

$ su mpiuser
$ ssh-keygen

When asked for a passphrase, leave it empty (hence passwordless SSH).

When done, all nodes should have an SSH key (the same key, actually). The master node needs to be able to automatically login to the compute nodes. To enable this, the public SSH key of the master node needs to be added to the list of authorized keys (usually the file ~/.ssh/authorized_keys) of all compute nodes. But this is easy, since all SSH key data is stored in one location: /home/mpiuser/.ssh/ on the master node. So instead of having to copy master’s public SSH key to all compute nodes separately, we just have to copy it to master’s own authorized_keys file. There is a command to push the public SSH key of the currently logged in user to another computer. Run the following command on the master node as user “mpiuser”,

mpiuser@master:~$ ssh-copy-id localhost

Master’s own public SSH key should now be copied to /home/mpiuser/.ssh/authorized_keys. But since /home/mpiuser/ (and everything under it) is shared with all nodes via NFS, all nodes should now have master’s public SSH key in their list of authorized keys. This means that we should now be able to login on the compute nodes from the master node without having to enter a password,

mpiuser@master:~$ ssh node1
mpiuser@node1:~$ echo $HOSTNAME

You should now be logged in on node1 via SSH. Make sure you’re able to login to the other nodes as well.

Setting up the process manager

In this section I’ll walk you through the installation of MPICH and configuring the process manager. The process manager is needed to spawn and manage parallel jobs on the cluster. The MPICH wiki explains this nicely:

“Process managers are basically external (typically distributed) agents that spawn and manage parallel jobs. These process managers communicate with MPICH processes using a predefined interface called as PMI (process management interface). Since the interface is (informally) standardized within MPICH and its derivatives, you can use any process manager from MPICH or its derivatives with any MPI application built with MPICH or any of its derivatives, as long as they follow the same wire protocol.” – Frequently Asked Questions – Mpich.

The process manager is included with the MPICH package, so start by installing MPICH on all nodes with,

$ sudo apt-get install mpich2

MPD was the traditional default process manager for MPICH up to the 1.2.x release series. Starting with the 1.3.x series, Hydra is the default process manager.(10) So depending on the version of MPICH you are using, you should use either MPD or Hydra for process management. You can check the MPICH version by running mpich2version in the terminal. Then follow either the steps for MPD or Hydra in the following subsections.

Setting up Hydra

This section explains how to configure the Hydra process manager and is for users of the MPICH 1.3.x series and up. In order to set up Hydra, we need to create one file on the master node. This file contains all the host names of the compute nodes.(11) You can create this file anywhere you want, but for simplicity we create it in the MPI user’s home directory,

mpiuser@master:~$ cd ~
mpiuser@master:~$ touch hosts

In order to be able to send out jobs to the other nodes in the network, add the host names of all compute nodes to the hosts file,


You may choose to include master in this file, which would mean that the master node would also act as a compute node. The hosts file only needs to be present on the node that will be used to start jobs on the cluster, usually the master node. But because the home directory is shared among all nodes, all nodes will have the hosts file. For more details about setting up Hydra see this page: Using the Hydra Process Manager.

Setting up MPD

This section explains how to configure the MPD process manager and is for users of MPICH 1.2.x series and down. Before we can start any parallel jobs with MPD, we need to create two files in the home directory of the MPI user. Make sure you’re logged in as the MPI user and create the following two files in the home directory,

mpiuser@master:~$ cd ~
mpiuser@master:~$ touch mpd.hosts
mpiuser@master:~$ touch .mpd.conf

In order to be able to send out jobs to the other nodes in the network, add the host names of all compute nodes to the mpd.hosts file,


You may choose to include master in this file, which would mean that the master node would also act as a compute node. The mpd.hosts file only needs to be present on the node that will be used to start jobs on the cluster, usually the master node. But because the home directory is shared among all nodes, all nodes will have the mpd.hosts file.

The configuration file .mpd.conf (mind the dot at the beginning of the file name) must be accessible to the MPI user only (in fact, MPD refuses to work if you don’t do this),

mpiuser@master:~$ chmod 600 .mpd.conf

Then add a line with a secret passphrase to the configuration file,


The secretword can be set to any random passphrase. You may want to use a random password generator to generate a passphrase.

All nodes need to have the .mpd.conf file in the home directory of mpiuser with the same passphrase. But this is automatically the case since /home/mpiuser is shared through NFS.

The nodes should now be configured correctly. Run the following command on the master node to start the mpd daemon on all nodes,

mpiuser@master:~$ mpdboot -n 3

Replace “3” by the number of compute nodes in your cluster. If this was successful, all nodes should now be running the mpd daemon. Run the following command to check if all nodes entered the ring (and are thus running the mpd daemon),

mpiuser@master:~$ mpdtrace -l

This command should display a list of all nodes that entered the ring. Nodes listed here are running the mpd daemon and are ready to accept MPI jobs. This means that your cluster is now set up and ready to rock!

Running jobs on the cluster

Running MPICH2 example applications on the cluster

The MPICH2 package comes with a few example applications that you can run on your cluster. To obtain these examples, download the MPICH2 source package from the MPICH website and extract the archive to a directory. The directory to where you extracted the MPICH2 package should contain an “examples” directory. This directory contains the source codes of the example applications. You need to compile these yourself.

$ sudo apt-get build-dep mpich2
$ wget
$ tar -xvzf mpich2-1.4.1.tar.gz
$ cd mpich2-1.4.1/
$ ./configure
$ make
$ cd examples/

The example application cpi is compiled by default, so you can find the executable in the “examples” directory. Optionally you can build the other examples as well,

$ make hellow
$ make pmandel

Once compiled, place the executables of the examples somewhere inside the /home/mpiuser directory on the master node. It’s common practice to place executables in a “bin” directory, so create the directory /home/mpiuser/bin and place the executables in this directory. The executables should now be available on all nodes.

We’re going to run an MPI job using the example application cpi. Make sure you’re logged in as the MPI user on the master node,

$ su mpiuser

And run the job like this,

When using MPD:

mpiuser@master:~$ mpiexec -n 3 /home/mpiuser/bin/cpi

When using Hydra:

mpiuser@master:~$ mpiexec -f hosts -n 3 /home/mpiuser/bin/cpi

Replace “3” by the number of nodes on which you want to run the job. When using Hydra, the -f switch should point to the file containing the host names. When using MPD, it’s important that you use the absolute path to the executable in the above command, because only then does MPD know where to look for the executable on the compute nodes. The absolute path used should thus be correct for all nodes. But since /home/mpiuser is the NFS shared directory, all nodes have access to this path and the files within it.

The example application cpi is useful for testing because it shows on which nodes each sub process is running and how long it took to run the job. This application is however not useful to test performance because this is a very small application which takes only a few milliseconds to run. As a matter of fact, I don’t think it actually computes pi. If you look at the source, you’ll find that the value of pi is hard coded into the program.
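For comparison, the way such demo programs usually approximate pi is midpoint-rule integration of 4/(1+x²) over [0, 1]: each MPI rank sums every size-th strip and the partial results are reduced at the end. Below is a serial Python sketch of that computation (my own illustration, not MPICH code), with the rank/size split simulated in-process:

```python
import math

# Midpoint-rule integration of 4/(1+x^2) over [0, 1], which equals pi.
# rank/size mimic how an MPI implementation would split the work:
# rank r of `size` processes sums strips r, r+size, r+2*size, ...

def compute_pi(n, rank=0, size=1):
    h = 1.0 / n
    partial = sum(4.0 / (1.0 + ((i + 0.5) * h) ** 2)
                  for i in range(rank, n, size))
    return h * partial

pi_serial = compute_pi(100000)
# Simulate a 3-process run by adding the three partial results:
pi_split = sum(compute_pi(100000, rank=r, size=3) for r in range(3))

assert abs(pi_serial - math.pi) < 1e-8
assert abs(pi_split - pi_serial) < 1e-9
```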

Running bioinformatics tools on the cluster

By running actual bioinformatics tools you can give your cluster a more realistic test run. There are several parallel implementations of bioinformatics tools that are based on MPI. There are two that I currently know of:

  • mpiBLAST
  • ClustalW-MPI

It would be nice to test mpiBLAST, but because of a compilation issue, I was not able to do so. After some asking around at the mpiBLAST-Users mailing list, I got an answer:

“That problem is caused by a change in GCC version 4.4.X. We don’t have a fix to give out for the issue as yet, but switching to 4.3.X or lower should solve the issue for the time being.”(7)

Basically, I’m using a newer version of the GCC compiler, which fails to build mpiBLAST. In order to compile it, I’d have to use an older version. But instructing mpicc to use GCC 4.3 requires that MPICH2 itself be compiled with GCC 4.3. Instead of going through that trouble, I decided to give ClustalW-MPI a try.
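ClustalW-MPI takes multi-sequence FASTA files as input, and for a first smoke test the sequences don’t need to be biologically meaningful. A sketch that generates a throwaway input file (the file name and sequences are invented; on the cluster you would write into /home/mpiuser/data rather than a relative data/ directory):

```shell
# Create a toy multi-sequence FASTA file for a first test run.
# The sequences are made up; substitute real Entrez downloads for
# biologically meaningful alignments.
mkdir -p data
cat > data/seq_test.fasta <<EOF
>seq1 made-up fragment
ATGGCTTACCGTGGCA
>seq2 made-up fragment
ATGGCATACCGAGGCA
>seq3 made-up fragment
ATGGCTTATCGTGGCA
EOF
# FASTA headers start with '>', so this counts the sequences: 3
grep -c '^>' data/seq_test.fasta
```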

The MPI implementation of ClustalW is fairly outdated, but it’s good enough to perform a test run on your cluster. Download the source from the website, extract the package, and compile the source. Copy the resulting executable to the /home/mpiuser/bin directory on the master node. Use, for example, Entrez to search for some DNA/protein sequences and put these in a single FASTA file (the NCBI website can do that for you). Create several multi-sequence FASTA files to test with, and copy them to a data directory inside the shared directory (e.g. /home/mpiuser/data). Then run a job like this,

When using MPD:

mpiuser@master:~$ mpiexec -n 3 /home/mpiuser/bin/clustalw-mpi /home/mpiuser/data/seq_tyrosine.fasta

When using Hydra:

mpiuser@master:~$ mpiexec -f hosts -n 3 /home/mpiuser/bin/clustalw-mpi /home/mpiuser/data/seq_tyrosine.fasta

and let the cluster do the work. Again, notice that we must use absolute paths. You can check whether the nodes are actually doing anything by logging into them (ssh node*) and running the top command. This displays a list of running processes with the most CPU-intensive ones at the top. You should see the clustalw-mpi process somewhere near the top.
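top is interactive and wants a terminal per node; for a quick scripted glance, a batch-mode process snapshot works over ssh too. A minimal local sketch (the node names in the commented loop are assumptions for your cluster):

```shell
# Snapshot the busiest processes without an interactive session:
# -e all processes, -o chosen columns, sorted by CPU usage descending.
ps -eo pid,pcpu,comm --sort=-pcpu | head -n 5

# On a real cluster, run the same snapshot on each node over ssh:
# for n in node1 node2 node3; do
#   echo "== $n =="
#   ssh "$n" "ps -eo pid,pcpu,comm --sort=-pcpu | head -n 5"
# done
```

While a ClustalW-MPI job is running, clustalw-mpi should appear in the first few lines of each node’s snapshot.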


Thanks to Reza Azimi for mentioning the nfs-common package.


  1. OpenClusterGroup. OSCAR.
  2. Philip M. Papadopoulos, Mason J. Katz, and Greg Bruno. NPACI Rocks: Tools and Techniques for Easily Deploying Manageable Linux Clusters. October 2001, Cluster 2001: IEEE International Conference on Cluster Computing.
  3. Supercomputing Facility for Bioinformatics & Computational Biology, IIT Delhi. Clustering Tutorial.
  4. Robert G. Brown. Engineering a Beowulf-style Compute Cluster. 2004. Duke University Physics Department.
  5. Pavan Balaji, et al. MPICH2 User’s Guide, Version 1.3.2. 2011. Mathematics and Computer Science Division, Argonne National Laboratory.
  6. Kerry D. Wong. A Simple Beowulf Cluster.
  7. mpiBLAST-Users: unimplemented: inlining failed in call to ‘int fprintf(FILE*, const char*, …)’
  8. Ubuntu Wiki. Setting Up an MPICH2 Cluster in Ubuntu.
  9. Building a Beowulf Cluster in just 13 steps.
  10. Frequently Asked Questions – Mpich.
  11. Using the Hydra Process Manager – Mpich.

Posted in CLUSTER, Computer Hardware, Computer Hardwares, Computer Languages, Computer Vision, Computing Technology, CUDA, Free Tools, GPU (CUDA), Linux OS, Mixed, My Research Related, Open CL, OpenCV, OpenMP, OPENMPI, PARALLEL | 2 Comments »

How to run CUDA 6.5 in Emulation Mode

Posted by Hemprasad Y. Badgujar on December 20, 2014


Some beginners feel a little dejected when they find that their systems do not contain GPUs to learn and work with CUDA. In this blog post, I will cover the step-by-step process of installing and executing CUDA programs in emulation mode on a system with no GPU installed. Note that you will not gain any of the performance advantage expected of a GPU (obviously); in fact, performance will be worse than a CPU implementation. However, emulation mode provides an excellent tool to compile and debug your CUDA code for more advanced purposes. Please note that I performed the following steps on a Dell Xeon system running Windows 7 (32-bit).

1. Acquire and install Microsoft Visual Studio 2008 on your system.

2. Access the CUDA Toolkit Archives  page and select CUDA Toolkit 6.0 / 6.5 version. (It is the last version that came with emulation mode. Emulation mode was discontinued in later versions.)

3. Download and install the following on your machine:

  • Developer Drivers for Win8/win7 X64  – (Select the one as required for your machine.)
  • CUDA Toolkit
  • CUDA SDK Code Samples
  • CUBLAS and CUFFT (If required)

4. The next step is to check whether the sample codes run properly on the system. This will ensure that nothing is missing from the required installations. Browse to the NVIDIA GPU Computing SDK using the Windows Start menu, or by entering the path below that matches your platform in the My Computer address bar:
“C:\ProgramData\NVIDIA Corporation\NVIDIA GPU Computing SDK\C\bin\win32\Release”
“C:\ProgramData\NVIDIA Corporation\NVIDIA GPU Computing SDK\C\bin\win64\Release”

(Also note that the ProgramData folder has the “Hidden” attribute set by default. It is a good idea to unhide this folder, as it will be frequently used later on as you progress with your CUDA learning.)

5. Run the “deviceQuery” program and it should output something similar to what is shown in Fig. 1. Upon visual inspection of the output, you can see that “there is no GPU device found”, yet the test has PASSED. This means that all the required installations for CUDA in emulation mode have been completed, and we can now proceed with writing, compiling and executing CUDA programs in emulation mode.

Figure 1. Successful execution of deviceQuery.exe (demo example only)

6. Open Visual Studio and create a new Win32 console project. Let’s name it “HelloCUDAEmuWorld”. Remember to select the “Empty project” option in Application Settings. Now right-click on “Source Files” in the project tree and add a new C++ code item. Remember to give it the extension “.cu” instead of “.cpp”. Let’s name this item “HelloCUDAEmuWorld.cu”. (If you forget the file extension, the file can always be renamed via the project tree on the left.)

7. Add the CUDA include, lib and bin paths to MS Visual Studio. They were located at “C:\CUDA” on my system.

The next steps need to be performed for every new CUDA project when created.

8. Right-click on the project and select Custom Build Rules. Check the Custom Build Rules v6.0.0 option if available. Otherwise, click on Find Existing… and navigate to “C:\ProgramData\NVIDIA Corporation\NVIDIA GPU Computing SDK\C\common” and select Cuda.rules. This will add the build rules for CUDA v6.0 to VS 2012.

9. Right-click on the project and select Properties. Navigate to Configuration Properties –> Linker –> Input. Type cudart.lib into the Additional Dependencies field and click OK. Now we are ready to compile and run our first ever CUDA program in emulation mode. But first we need to activate emulation mode for .cu files.

10. Once again, right-click on the project and select Properties. Navigate to Configuration Properties –> CUDA Build Rule v6.0.0 –> General. Set Emulation Mode from No to Yes in the right-hand column of the window. Click OK.

11. Type the following into the code editor, then build and compile the project. And there it is: your first ever CUDA program, in emulation mode. Something to brag about among friends.

int main(void)
{
    return 0;
}

I hope this effort does not go in vain and offers some help to anyone who is stuck on this issue. Do contact me if there is any query regarding the above procedure.

Posted in Computer Vision, Computing Technology, CUDA, GPU (CUDA), GPU Accelareted, Image / Video Filters, My Research Related, OpenCV, PARALLEL, Project Related | Leave a Comment »

Parallel Code: Maximizing your Performance Potential

Posted by Hemprasad Y. Badgujar on December 19, 2014

No matter what the purpose of your application is, one thing is certain. You want to get the most bang for your buck. You see research papers being published and presented making claims of tremendous speed increases by running algorithms on the GPU (e.g. NVIDIA Tesla), in a cluster, or on a hardware accelerator (such as the Xeon Phi or Cell BE). These architectures allow for massively parallel execution of code that, if done properly, can yield lofty performance gains.

Unlike most aspects of programming, the actual writing of the programs is (relatively) simple. Most hardware accelerators support (or are very similar to) C based programming languages. This makes hitting the ground running with parallel coding an actually doable task. While mastering the development of massively parallel code is an entirely different matter, with a basic understanding of the principles behind efficient, parallel code, one can obtain substantial performance increases compared to traditional programming and serial execution of the same algorithms.

In order to ensure that you’re getting the most bang for your buck in terms of performance increases, you need to be aware of the bottlenecks associated with coprocessor/GPU programming. Fortunately for you, I’m here to make this an easier task. By simply avoiding these programming “No-No’s” you can optimize the performance of your algorithm without having to spend hundreds of hours learning about every nook and cranny of the architecture of your choice. This series will discuss and demystify these performance-robbing bottlenecks, and provide simple ways to make these a non-factor in your application.

Parallel Thread Management – Topic #1

First and foremost, the most important aspect of parallel programming is the proper management of threads. A thread is the smallest sequence of programmed instructions that can be managed by an operating system scheduler. Your application’s threads must be kept busy (not waiting) and non-divergent. Properly scheduling and directing threads is imperative to avoid wasting precious computing time.

Posted in Computer Hardwares, Computer Languages, Computing Technology, GPU (CUDA), GPU Accelareted, My Research Related, PARALLEL, Research Menu | Tagged: | Leave a Comment »

Estimation vs Prediction

Posted by Hemprasad Y. Badgujar on December 14, 2014

“Prediction” and “estimation” indeed are sometimes used interchangeably in non-technical writing and they seem to function similarly, but there is a sharp distinction between them in the standard model of a statistical problem. An estimator uses data to guess at a parameter while a predictor uses the data to guess at some random value that is not part of the dataset. For those who are unfamiliar with what “parameter” and “random value” mean in statistics, the following provides a detailed explanation.

In this standard model, data are assumed to constitute a (possibly multivariate) observation x of a random variable X whose distribution is known only to lie within a definite set of possible distributions, the “states of nature”. An estimator t is a mathematical procedure that assigns to each possible value of x some property t(x) of a state of nature θ, such as its mean μ(θ). Thus an estimate is a guess about the true state of nature. We can tell how good an estimate is by comparing t(x) to μ(θ).

A predictor p(x) concerns the independent observation of another random variable Z whose distribution is related to the true state of nature. A prediction is a guess about another random value. We can tell how good a particular prediction is only by comparing p(x) to the value realized by Z. We hope that on average the agreement will be good (in the sense of averaging over all possible outcomes x and simultaneously over all possible values of Z).

Ordinary least squares affords the standard example. The data consist of pairs (xi, yi) associating values yi of the dependent variable with values xi of the independent variable. The state of nature is specified by three parameters α, β, and σ: it says that each yi is like an independent draw from a normal distribution with mean α+βxi and standard deviation σ. α, β, and σ are parameters (numbers) believed to be fixed and unvarying. Interest focuses on α (the intercept) and β (the slope). The OLS estimate, written (α^,β^), is good in the sense that α^ tends to be close to α and β^ tends to be close to β, no matter what the true (but unknown) values of α and β might be.

OLS prediction consists of observing a new value Z=Y(x) of the dependent variable associated with some value x of the independent variable. x might or might not be among the xi in the dataset; that is immaterial. One intuitively good prediction is that this new value is likely to be close to α^+β^x. Better predictions say just how close the new value might be (they are called prediction intervals). They account for the fact that α^ and β^ are uncertain (because they depend mathematically on the random values (yi)), that σ is not known for certain (and therefore has to be estimated), as well as the assumption that Y(x) has a normal distribution with standard deviation σ and mean α+βx (note the absence of any hats!).

Note especially that this prediction has two separate sources of uncertainty: uncertainty in the data (xi,yi) leads to uncertainty in the estimated slope, intercept, and residual standard deviation (σ); in addition, there is uncertainty in just what value of Y(x) will occur. This additional uncertainty–because Y(x) is random–characterizes predictions. A prediction may look like an estimate (after all, α^+β^x estimates α+βx) and may even have the very same mathematical formula (p(x) can sometimes be the same as t(x)), but it will come with a greater amount of uncertainty than the estimate.

Here, then, in the example of OLS, we see the distinction clearly: an estimate guesses at the parameters (which are fixed but unknown numbers), while a prediction guesses at the value of a random quantity. The source of potential confusion is that the prediction usually builds on the estimated parameters and might even have the same formula as an estimator.

In practice, you can distinguish estimators from predictors in two ways:

  1. purpose: an estimator seeks to know a property of the true state of nature, while a prediction seeks to guess the outcome of a random variable; and
  2. uncertainty: a predictor usually has larger uncertainty than a related estimator, due to the added uncertainty in the outcome of that random variable. Well-documented and described predictors therefore usually come with uncertainty bands–prediction intervals–that are wider than the uncertainty bands of estimators, known as confidence intervals. A characteristic feature of prediction intervals is that they can (hypothetically) shrink as the dataset grows, but they will not shrink to zero width–the uncertainty in the random outcome is “irreducible”–whereas the widths of confidence intervals will tend to shrink to zero, corresponding to our intuition that the precision of an estimate can become arbitrarily good with sufficient amounts of data.
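The widths of these two intervals can be written down explicitly for the OLS example. With n data points, residual standard error s (the estimate of σ), Sxx = Σ(xi − x̄)², and the usual t quantile, the standard textbook formulas for the confidence interval for the mean α+βx and the prediction interval for a new Y(x) are:

```latex
% Confidence interval for the mean response \alpha + \beta x:
(\hat\alpha + \hat\beta x) \;\pm\; t^{*}_{\,n-2}\, s \sqrt{\frac{1}{n} + \frac{(x-\bar{x})^2}{S_{xx}}}

% Prediction interval for a new observation Y(x):
(\hat\alpha + \hat\beta x) \;\pm\; t^{*}_{\,n-2}\, s \sqrt{1 + \frac{1}{n} + \frac{(x-\bar{x})^2}{S_{xx}}}
```

The extra 1 under the second square root is the irreducible variance of Y(x) itself: as n grows, the 1/n and (x−x̄)²/Sxx terms vanish, so the confidence interval shrinks toward zero width while the prediction interval’s half-width tends to t*·s.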

In applying this to assessing potential investment loss, first consider the purpose: do you want to know how much you might actually lose on this investment (or this particular basket of investments) during a given period, or are you really just guessing what is the expected loss (over a large universe of investments, perhaps)? The former is a prediction, the latter an estimate. Then consider the uncertainty. How would your answer change if you had nearly infinite resources to gather data and perform analyses? If it would become very precise, you are probably estimating the expected return on the investment, whereas if you remain highly uncertain about the answer, you are making a prediction.

Posted in Apps Development, Computer Languages, My Research Related | Leave a Comment »

Event Planning and Productivity Tools

Posted by Hemprasad Y. Badgujar on November 23, 2014

  • Invitation, Registration, & RSVP
  • Calendaring & Time Visualizations
  • Task Management
  • Collaboration
  • Bonus Tools


Invitation, Registration, & RSVP

Stop mailing out those invitations by hand and start using one of these savvy online services. Here are tools that will give you beautifully-designed invitation email, powerful registration tools, and guest management.

1. Acteva
An online ticketing, registration, and payment management system for events, both online and off.

2. Amiando
The European solution for large event registration and ticketing. Encourage registrations, track, email guests, sell tickets and manage your data in one place.

3. Anyvite
A great back-to-basics invite site that is optimized for viewing across multiple platforms including excellent mobile functionality.

4. ArrangeMySeat
More than just online invitations, they offer customer ticket sales and seating plans so your guests can RSVP, register, and choose their seats, all in one go!

5. Ejovo
More than just an online invitation site, Ejovo offers a complete solution for your party-planning needs including invites (of course), party themes, decor, recipes, and more.

6. EventWax
A completely free solution for managing your event from invitation to day-of guest tracking. You can even sell tickets to your event (they’ll take a small cut in that case), create custom pages for your event, and manage attendees.

7. Eventbrite
Create events, sell tickets, track registrations, and even manage your event entries. It’s like TicketMaster, but not evil!

8. Evite
The Grandaddy of online invitations. Simple interface and lots of great designs for invitations, especially for kids events or informal parties.

9. Facebook Events
Create your event, invite, track, and message your guests all in the place where most of us spend our virtual free time anyway! It’s convenient to use, if not the most comprehensive.

10. Google+ Events
The new kid on the block allows you to send beautiful, personalized invitation through Google+. Better yet, this is a great new way to show off your event as it’s happening with “party mode.”

11. Orchid Event Solutions
Large event (think conventions) registration and housing service. You can set up online registration for attendees, exhibitors, and more, and coordinate hotel stays for guests.


12. Paperless Post
Online Invitations with a chic, grown-up aesthetic. Send your invite through Paperless Post using personalized digital “envelopes” or share via social media.

13. Phonevite
Call it a throw-back, but sometimes the phone is a much faster way to communicate timely info. Phonevite allows you to record a message to be sent, en masse, to a multitude of different people. Perfect for last-minute changes to parties, meetings, practices, or just saying “hi” to a whole group.

14. Pingg
The only online resource that mails postmarked versions of your invitations and still allows you to collect online RSVPs. You can send your invitations online, via social media or email, or through the post office, everything is tracked in one place!

15. PurpleTrail
Create customized event invites or cards to send out via snail mail or email. They have LOTS of really cute designs!

16. Socializr
Originally started by Friendster founder Jonathan Abrams, Socializr has been acquired by start-to-finish party planning site Punchbowl. Send out online invites or save-the-dates. You can even poll your guests to find out what date works best for them!

17. TimeBridge
The perfect solution for meeting invitations that means you can stop juggling your meeting times. Completely integrable with Google Calendar, Outlook, and iCal.

18. Twtvite
The Twitter invitation solution! Create your invite, push it to Twitter (complete with hashtags!) and track responses. Paid features include ticket sales and multiple events per month.

Calendaring & Time Visualizations

Save time and get your deadlines straight. Here are creative ways to see projects at a glance and easy ways to manage your calendar.

19. Boomerang Calendar
This is a simple, intuitive app that can be built right into Gmail. It will, from within Gmail, highlight dates and times that you are free or busy, and automatically add events to your Google calendar if you choose.

20. Doodle
Doodle’s calendar software integrates easily with a huge variety of calendar applications and allows you to propose several dates and times, and the participants can indicate their availability online. You’ll find the perfect time to meet in a quick and easy way, no matter how many people and calendars are involved.

21. Gantter
Gantter is a free, cloud-based solution for project management. Completely integrated with Google Drive, you can attach Google Docs to your projects and timeline and easily see and share exactly where you are with any given project.

22. Ganttic
Estonia’s solution to MS Project, Ganttic features a visual approach to resource scheduling with simple drag-and-drop rescheduling, multi-project management, and almost no learning curve for team members.

23. Gantto
Gantto reproduces the experience of sketching a project plan on a whiteboard, but using your computer so your projects are easy to modify and share with your team. The best part, it’s both PC and Mac compatible, just open up your web browser and start charting!

24. Liquid Planner
Easy and professional online software with integrated features for scheduling, collaboration, time tracking, analysis and reporting. Just assign and estimate your tasks, then put them in priority order, and LiquidPlanner tells you when you’re likely to complete the work.

25. TeamGantt
Create online, fully sharable Gantt charts. Drag-and-drop functionality, fill in completion percentages, upload files, and even communicate from within the application.

26. Timetoast
Create beautiful, dynamic timelines that you can interact with. Create a timeline of your company, your family history, or help your middle schooler on that tricky Boer War project.

27. Tom’s Planner

Tom’s Planner is online Gantt chart software that allows anyone to create, collaborate and share Gantt Charts online with drag and drop simplicity. It’s web based, extremely intuitive and easy-to-use.

28. Which Date Works
This tool allows you to plan events quickly and easily by finding out your guests availability. It’s free, easy-to-use, and doesn’t require any registration. Woo!

Task management

Get your ideas and tasks out of your head and off your wall. Harness your creativity and gain power with these essential to-do tools.

29. Coolendar
Forget a boring grid-style calendar: Coolendar lays out everything in an easy-to-use list that features alerts, custom hashtags, and a smooth, intuitive interface. Coolendar exists across almost all platforms, including mobile (even Kindle!), so your Coolendar travels with you wherever you go.


30. Daylite
This is a true all-in-one solution for business. Projects, sales, emails, meetings, calendars, contacts, notes, and more. More than a CRM, more than a calendar, more than a to-do list. It’s a little bit of everything and a whole lot of useful!

31. Diigo
Finally you can collect your bookmarks from across multiple browsers and platforms with this unique service. Create notes, highlights, collections and then share them (or keep them to yourself) across social media.

32. Evernote
Throw away your pile of sticky notes or scribbled notepad and move over to Evernote. Save your ideas, things you like, things you hear, and things you see. Whether you type it in or snap a picture, everything you do is fully searchable so you can come back to it later.

33. Kuandoo
This app is like a party planner in your pocket. Get specific by tracking which guests are bringing which potluck dishes, who likes to drink what kind of booze, and who’s driving so you can even coordinate designated drivers.

34. OmniFocus
OmniFocus is designed to quickly capture your thoughts and allow you to store, manage, and process them into actionable to-do items. OmniFocus helps you work smarter by giving you powerful tools for staying on top of all the things you need to do.

35. Producteev
Producteev helps you organize your tasks and projects in the simplest way. With web, desktop and mobile apps, you can access your to-dos from anywhere.

36. Remember the Milk
This is truly a reinvented to-do list. With text-message reminders, categorizations, and integration across a variety of platforms, RTM is the original reinvented to-do list!

37. Things
Things is a Mac task-manager that seamlessly flows from work to home with iCloud integration.

38. Trello
Welcome to a “whiteboard with superpowers”. Organize anything, from life goals and to-do lists to group projects. You can add images, checklists, due dates, attachments, and more to your Trello board and then assign tasks, streamline communication, and get everyone on the same page, fast.

39. TeuxDeux
When all you need is a simple to-do list but want to look stylish while feeling productive, then TeuxDeux is your perfect solution. Sync up with the mobile app and you’ll look good and feel productive while on the go.

40. Wunderlist
Many tools offer seamless integration between devices but Wunderlist actually comes through on their promise. With customizable backgrounds your to-do list can be as beautiful as you are.


Collaboration

Sometimes you need a team to pull off the impossible. Delegate tasks, check on progress, and brainstorm together in one location with one of these collaboration tools.

41. Basecamp
Get all of your team together, on one page, to share project timelines, deadlines, statuses, and communications.

42. Gatherball
This is the cooler, more laid back brother of Basecamp, designed to help plan group trips, outings, vacations, parties, or reunions. Anything where multiple opinions can make planning a headache. This is a seriously cool site.

43. Gliffy
Create professional charts, diagrams and info-graphics with this simple, web-based program.

44. Google Drive
An incredible online suite of tools that does everything from word processing, spreadsheets, presentations and more. You can save, share, collaborate and create intricate document databases.

45. GroupSite
Communicate easily via subgroups; share calendars, files, and media; and build deeper connections between team members.

46. MindMeister
Map the minds of your entire team and get everyone on the same page. Perfect for online brainstorming sessions, keeping track of projects and ideas, and building your team connections.

47. Paam
When your event stretches beyond the capabilities of being run by just you or your staff, it’s time to recruit volunteers. Paam allows you to recruit and manage staff and volunteers for expeditions, projects, festivals, and events.

48. PowerNoodle
Powernoodle is a complete internal business communications solution that facilitates collaboration and brainstorming. It helps encourage feedback, allocate resources, and determine your next best steps. Practically the only thing it doesn’t do is rub your feet and make you dinner.

49. ProofHub
This web-based project management software helps your team stay on target, communicate more effectively, and increase productivity. Now if only it came with rainbows and baby bunnies!

50. SmartSheet
Spreadsheets that have the capability to incorporate sites, alerts, Gantt charts and more? Yes, please!

51. WhoDoes
WhoDoes 2.0 is a web-based project management tool that helps team members collaborate with each other, share files and emails, and manage milestones and tasks.

52. Zoho Projects
Organize tasks, track time, track bugs, analyze reports, sync with Dropbox and keep everyone on the same page. Coordinate, unify, and work smarter and faster.

Bonus Tools

Either these tools do it all, help you do it all, or fit in a category all their own.

53. FluidTables
Planning a wedding is every bit as complicated as running a business, so why wouldn’t you use some of the same tools? FluidTables focuses on the minutiae of wedding planning including invitations, seating plans, menus, volunteer coordination and much more.

54. MeetingApps
A clearinghouse of useful apps that will help your meeting run smoothly, from helping attendees find hotels and restaurants to communicating with each other during your event.

55. Moredays
Replace your paper calendars with this beautifully designed digital calendar. Save photos, add in sketches, and create a virtual scrapbook that you can look back on.

56. Rapportive
Pulling information in from your Facebook, Twitter, and LinkedIn accounts, Rapportive makes your Gmail experience much more comprehensive and gives you information at your fingertips to make you a better communicator.

57. SocialTables
Imagine all of your floor plans, seating charts, and guest lists living in one place where clients and teammates can access them in REAL-TIME.

58. Welcu
Welcu gives you all the tools you need to plan your event so that your team and your guests will love you. Guest management, digital invitations and RSVPs, all while allowing you to measure your results every step of the way.

59. PlaceFull
Plan and book all of your parties, meetings and events without ever having to pick up the phone. Weddings, corporate dinners, offsite meetings, birthday parties and more. Even book your event services!

Posted in International Conferences, Journals & Conferences, My Life, My Research Related, National Conferences | Tagged: , , | Leave a Comment »

Literature Review versus Literature Survey. What is the difference?

Posted by Hemprasad Y. Badgujar on November 3, 2014

Literature Survey: Is the process of analyzing, summarizing, organizing, and presenting novel conclusions from the results of technically reviewing a large number of recently published scholarly articles. The results of a literature survey can contribute to the body of knowledge when peer-reviewed and published as a survey article.

Literature Review: Is the process of technically and critically reviewing published papers to extract technical and scientific metadata from the presented contents. The metadata are usually used during literature survey to technically compare different but relevant works and draw conclusions on weaknesses and strengths of the works.

Second View: A second view of literature survey versus review is that in a survey, researchers usually utilize the author-provided contents available in the published works to qualitatively analyze and compare them with other related works. In a review, by contrast, the analysis should not be qualitative but quantitative, meaning that every research work under study is implemented and benchmarked under certain criteria. The results of this benchmarking study can then be used to compare the works and criticize or appreciate them.

So basically you can look at the current literature and find which approach is dominating in your field. Hope it helps. I will revise this if I come across other points or useful comments.

we can use the following definitions from CS journals.

  • According to the definition of survey paper provided by IEEE Communications Surveys & Tutorials journal (one of the best CS journals), “The term survey, as applied here, is defined to mean a survey of the literature. A survey article should provide a comprehensive review of developments in a selected area“.
  • In ACM Computing Survey (another prestigious CS journal), survey paper is described as “A paper that summarizes and organizes recent research results in a novel way that integrates and adds understanding to work in the field. A survey article emphasizes the classification of the existing literature, developing a perspective on the area, and evaluating trends.”
  • In the Elsevier journal Computer Science Review, you will see that a “critical review of the relevant literature” is a required component of every typical survey paper.


Posted in Computer Research, Documentations, Journals & Conferences, My Research Related, Placement, Project Related, Research Menu | Tagged: , , , , , , | Leave a Comment »
