Something More for Research

Explorer of Research #HEMBAD

Archive for the ‘Apps Development’ Category

20 Compelling Reasons to Spend Less Time on Facebook and More Time on LinkedIn

Posted by Hemprasad Y. Badgujar on April 8, 2015


If you're like most college students, chances are good that you spend more time on Facebook than you do on LinkedIn. But if you're concerned with furthering your career (and you should be), it's time to switch over to a more professional network. We've shared 20 great reasons why you should spend much more of your time on LinkedIn than on Facebook, and we hope they motivate you to make a change for the better. These reasons should be especially compelling for students earning online bachelor's degrees, as they will have fewer face-to-face networking opportunities and will need to capitalize on their online networking skills to bolster their job hunt.

  1. LINKEDIN IS PROFESSIONAL AT ITS CORE

    LinkedIn was created to connect professionals in online networking; Facebook was not. Although both services have evolved to include elements of each other, they do still remain true to their original purpose, and LinkedIn excels at presenting a professional front.

  2. LINKEDIN IS A GREAT PLACE TO GAIN EXPERT STATUS

    Although experts are increasingly flocking to Facebook, it’s still hard for some people to take the site seriously. On LinkedIn, the setting is much more open to gaining expert status and credibility. Forums, question and answer sections, and groups make it simpler to connect and share your knowledge in a credible way. Students working toward a graduate degree can even share their research with other experts in the field and receive valuable feedback as they complete master’s theses and doctoral dissertations.

  3. YOUR COLLEGE PROFESSORS MIGHT ACTUALLY CONNECT ON LINKEDIN

    Although some colleges take a lax approach to social media, many still frown on Facebook connections between students and professors. But on LinkedIn, connections are typically seen as a positive thing, opening you up to the resources that your professors can share with you, including positive recommendations.

  4. LINKEDIN REPRESENTS A MORE TARGETED AUDIENCE

    Facebook is on track to hit the 1 billion-user mark this year, a figure that basically obliterates LinkedIn's comparatively small 135 million-plus users. One might think that more users mean more exposure, and that is true to a point, but on Facebook you can't be sure that those millions of users are actually online to hear about your professional life. On LinkedIn, you can expect to reach a more targeted audience that is connected to you, interested in your work, and willing to listen to what you have to say.

  5. YOU’RE MORE LIKELY TO GET A RECOMMENDATION ON LINKEDIN

    A recommendation on either LinkedIn or Facebook is a great way to put your best foot forward, but you're simply more likely to land one on LinkedIn. Recent stats show that 36% of LinkedIn users make a recommendation, compared to 27% of Facebook users. LinkedIn also shows a 57% response rate to recommendation requests, compared with 42% on Facebook.

  6. LINKEDIN USERS LOG IN WITH A SENSE OF PURPOSE

    While on Facebook, you may be surfing to find out about the latest cat video or your friend’s wedding photos, but LinkedIn tends to lead to a more task-driven visit. Users log in to check out job and collaboration opportunities, people to hire, and relevant industry news.

  7. LINKEDIN IS A GREAT PLACE TO SHOWCASE YOUR UNPAID WORK

    Even if you haven’t been hired for a job in your life, chances are you’ve volunteered or done an internship before graduation. LinkedIn is specifically designed to help you showcase this experience.

  8. LINKEDIN IS AN ONLINE RESUME

    LinkedIn is a great place to collect references, share your work experience, professional samples, and more. Your Facebook Timeline is much more like a digital scrapbook of personal experiences.

  9. LINKEDIN SEARCHING IS MORE ROBUST

    While you can search for people and terms on Facebook, LinkedIn really shines in this category. You can search for companies, find people to connect with, get news, and more on LinkedIn. Your profile is also highly searchable, and represents a great tool for allowing recruiters to find you.

  10. YOU CAN ACTUALLY TURN YOUR LINKEDIN PROFILE INTO A RESUME

    Although LinkedIn functions as an online resume, it's also a time saver when it comes to creating one that you can print and hand out. Use this feature so you can stop neglecting your paper resume and always have something ready to hand over.

  11. FACEBOOK CAN MAKE YOUR SCHOOLWORK SUFFER

    Experts report that students who regularly surf Facebook do not do as well on tests. In fact, some students suffered by as much as an entire grade. They believe that using the social media site takes up valuable study time.

  12. RECRUITERS ARE MORE LIKELY TO SHARE APPLICATIONS ON LINKEDIN

    Facebook and LinkedIn are both experiencing growth in applications shared on their sites. But LinkedIn stands out for the number of candidates who actually apply. You can expect recruiters to go where the interest is, which clearly rests with LinkedIn.

  13. FACEBOOK IS A MAJOR TIME SUCK

    Facebook is fun, but for most users, it takes up much more time than it should. In a comparison, researchers found that Facebook visits resulted in stays of 405 minutes per visitor, compared with 17 minutes on LinkedIn. It is much wiser to spend 17 focused minutes on LinkedIn than several hours frittering your time away on Facebook.

  14. GROUPS ON LINKEDIN ARE HIGHLY EFFECTIVE

    Facebook has groups, but not on the level that LinkedIn does. LinkedIn remains an incredible resource for connecting and networking in industry groups on the site.

  15. YOU’RE MORE LIKELY TO GET HIRED ON LINKEDIN

    In a recent comparison of job search markers on Facebook and LinkedIn, LinkedIn beat Facebook handily in every category. The most interesting and revealing, however, was social employee hires, with LinkedIn earning 73% and Facebook at a low 22%.

  16. LINKEDIN IS A GREAT PLACE FOR BUSINESS INTRODUCTIONS

    One of the best features of LinkedIn is the ability to be introduced to new business contacts through the site, especially through contacts you already know. So if you've recently completed a business degree and want to expand your professional connections, LinkedIn is the place to be.

  17. LINKEDIN USERS HAVE MORE MONEY

    Out of all the popular social media sites, LinkedIn users have the highest average income of $89K. If you’re looking to earn a good salary, you’ll be in great company on LinkedIn.

  18. LINKEDIN ACTIVITIES ARE MORE ATTUNED TO JOB PROMOTION

    The top activities on LinkedIn are industry networking, keeping in touch, and networking between coworkers.

  19. LINKEDIN REALLY SHINES WITH RELEVANCE

    While your friends on Facebook may be sharing music videos that you scroll right past, LinkedIn works hard to bring you content that is the most relevant to you. The site sends emails to users with the most-shared news, groups that belong to your job focus, and contacts you’re likely to be interested in getting to know.

  20. LINKEDIN IS AWESOME FOR RESEARCH

    Facebook is growing in this respect with better Pages, but LinkedIn still wins the battle of employer research. You find out who works there, who used to work there, whether or not you have any connections within the company, and more. For example, if you recently earned a master’s degree in finance, and are looking for employment with major financial services companies, you can search for employment leads by networking with a company’s current and former employees.

Posted in Apps Development, Computer Softwares, Installation, Other

Estimation vs Prediction

Posted by Hemprasad Y. Badgujar on December 14, 2014


“Prediction” and “estimation” indeed are sometimes used interchangeably in non-technical writing and they seem to function similarly, but there is a sharp distinction between them in the standard model of a statistical problem. An estimator uses data to guess at a parameter while a predictor uses the data to guess at some random value that is not part of the dataset. For those who are unfamiliar with what “parameter” and “random value” mean in statistics, the following provides a detailed explanation.

In this standard model, data are assumed to constitute a (possibly multivariate) observation $x$ of a random variable $X$ whose distribution is known only to lie within a definite set of possible distributions, the "states of nature." An estimator $t$ is a mathematical procedure that assigns to each possible value of $x$ some property $t(x)$ of a state of nature $\theta$, such as its mean $\mu(\theta)$. Thus an estimate is a guess about the true state of nature. We can tell how good an estimate is by comparing $t(x)$ to $\mu(\theta)$.

A predictor $p(x)$ concerns the independent observation of another random variable $Z$ whose distribution is related to the true state of nature. A prediction is a guess about another random value. We can tell how good a particular prediction is only by comparing $p(x)$ to the value realized by $Z$. We hope that on average the agreement will be good (in the sense of averaging over all possible outcomes $x$ and simultaneously over all possible values of $Z$).

Ordinary least squares affords the standard example. The data consist of pairs $(x_i, y_i)$ associating values $y_i$ of the dependent variable with values $x_i$ of the independent variable. The state of nature is specified by three parameters $\alpha$, $\beta$, and $\sigma$: it says that each $y_i$ is like an independent draw from a normal distribution with mean $\alpha + \beta x_i$ and standard deviation $\sigma$. Here $\alpha$, $\beta$, and $\sigma$ are parameters (numbers) believed to be fixed and unvarying. Interest focuses on $\alpha$ (the intercept) and $\beta$ (the slope). The OLS estimate, written $(\hat\alpha, \hat\beta)$, is good in the sense that $\hat\alpha$ tends to be close to $\alpha$ and $\hat\beta$ tends to be close to $\beta$, no matter what the true (but unknown) values of $\alpha$ and $\beta$ might be.
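
For reference, the textbook least-squares formulas behind these estimates (standard results stated here for completeness, not taken from the original discussion) are

$$\hat\beta = \frac{\sum_i (x_i - \bar x)(y_i - \bar y)}{\sum_i (x_i - \bar x)^2}, \qquad \hat\alpha = \bar y - \hat\beta\,\bar x, \qquad \hat\sigma^2 = \frac{1}{n-2}\sum_i \bigl(y_i - \hat\alpha - \hat\beta x_i\bigr)^2 ,$$

where $\bar x$ and $\bar y$ are the sample means and $n$ is the number of pairs.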

OLS prediction consists of observing a new value $Z = Y(x)$ of the dependent variable associated with some value $x$ of the independent variable. $x$ might or might not be among the $x_i$ in the dataset; that is immaterial. One intuitively good prediction is that this new value is likely to be close to $\hat\alpha + \hat\beta x$. Better predictions say just how close the new value might be (they are called prediction intervals). They account for the fact that $\hat\alpha$ and $\hat\beta$ are uncertain (because they depend mathematically on the random values $(y_i)$), that $\sigma$ is not known for certain (and therefore has to be estimated), as well as the assumption that $Y(x)$ has a normal distribution with standard deviation $\sigma$ and mean $\alpha + \beta x$ (note the absence of any hats!).

Note especially that this prediction has two separate sources of uncertainty: uncertainty in the data $(x_i, y_i)$ leads to uncertainty in the estimated slope, intercept, and residual standard deviation ($\sigma$); in addition, there is uncertainty in just what value of $Y(x)$ will occur. This additional uncertainty, because $Y(x)$ is random, characterizes predictions. A prediction may look like an estimate (after all, $\hat\alpha + \hat\beta x$ estimates $\alpha + \beta x$) and may even have the very same mathematical formula ($p(x)$ can sometimes be the same as $t(x)$), but it will come with a greater amount of uncertainty than the estimate.
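
To make the two sources of uncertainty concrete, the standard OLS results (again textbook formulas, included here for illustration) give, at a chosen point $x$,

$$\operatorname{Var}\bigl[\hat\alpha + \hat\beta x\bigr] = \sigma^2\left(\frac{1}{n} + \frac{(x - \bar x)^2}{\sum_i (x_i - \bar x)^2}\right), \qquad \operatorname{Var}\bigl[Y(x) - \hat\alpha - \hat\beta x\bigr] = \sigma^2\left(1 + \frac{1}{n} + \frac{(x - \bar x)^2}{\sum_i (x_i - \bar x)^2}\right).$$

The first quantity sets the width of a confidence interval and shrinks toward zero as $n$ grows; the extra $\sigma^2$ term in the second is the irreducible part that keeps a prediction interval from ever shrinking to zero width.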

Here, then, in the example of OLS, we see the distinction clearly: an estimate guesses at the parameters (which are fixed but unknown numbers), while a prediction guesses at the value of a random quantity. The source of potential confusion is that the prediction usually builds on the estimated parameters and might even have the same formula as an estimator.

In practice, you can distinguish estimators from predictors in two ways:

  1. purpose: an estimator seeks to know a property of the true state of nature, while a prediction seeks to guess the outcome of a random variable; and
  2. uncertainty: a predictor usually has larger uncertainty than a related estimator, due to the added uncertainty in the outcome of that random variable. Well-documented and described predictors therefore usually come with uncertainty bands–prediction intervals–that are wider than the uncertainty bands of estimators, known as confidence intervals. A characteristic feature of prediction intervals is that they can (hypothetically) shrink as the dataset grows, but they will not shrink to zero width–the uncertainty in the random outcome is “irreducible”–whereas the widths of confidence intervals will tend to shrink to zero, corresponding to our intuition that the precision of an estimate can become arbitrarily good with sufficient amounts of data.

In applying this to assessing potential investment loss, first consider the purpose: do you want to know how much you might actually lose on this investment (or this particular basket of investments) during a given period, or are you really just guessing what is the expected loss (over a large universe of investments, perhaps)? The former is a prediction, the latter an estimate. Then consider the uncertainty. How would your answer change if you had nearly infinite resources to gather data and perform analyses? If it would become very precise, you are probably estimating the expected return on the investment, whereas if you remain highly uncertain about the answer, you are making a prediction.

Posted in Apps Development, Computer Languages, My Research Related

Posted by Hemprasad Y. Badgujar on December 11, 2014


Cloud scaling, Part 1: Build a compute node or small cluster application and scale with HPC

Leveraging warehouse-scale computing as needed

Discover methods and tools to build a compute node and small cluster application that can scale with on-demand high-performance computing (HPC) by leveraging the cloud. This series takes an in-depth look at how to address unique challenges while tapping and leveraging the efficiency of warehouse-scale on-demand HPC. The approach allows the architect to build locally for expected workload and to spill over into on-demand cloud HPC for peak loads. Part 1 focuses on what system builders and HPC application developers can do to most efficiently scale their systems and applications.

Exotic HPC architectures with custom-scaled processor cores and shared memory interconnection networks are being rapidly replaced by on-demand clusters that leverage off-the-shelf general-purpose vector coprocessors, converged Ethernet at 40 Gbit/s per link or more, and multicore headless servers. These new HPC on-demand cloud resources resemble what has been called warehouse-scale computing, where each node is homogeneous and headless and the focus is on total cost of ownership and overall power use efficiency. However, HPC has unique requirements that go beyond social networks, web search, and other typical warehouse-scale computing solutions. This article focuses on what system builders and HPC application developers can do to most efficiently scale their systems and applications.

Moving to high-performance computing

Since 1994, the TOP500 and Green500 supercomputers (see Resources) have more often been not custom designs but systems designed and integrated from off-the-shelf headless servers, converged Ethernet or InfiniBand clustering, and general-purpose graphics processing unit (GP-GPU) coprocessors intended not for graphics but for single program, multiple data (SPMD) workloads. The trend in high-performance computing (HPC) away from exotic custom processor and memory interconnection designs toward off-the-shelf, warehouse-scale computing is driven by the need to control total cost of ownership, increase power efficiency, and balance operational expenditure (OpEx) and capital expenditure (CapEx) for both start-up and established HPC operations. This means that you can build your own small cluster with similar methods and use HPC warehouse-scale resources on demand when you need them.

The famous 3D torus interconnection that Cray and others used may never fully go away (today, the TOP500 is one-third massively parallel processors [MPPs] and two-thirds cluster architectures among the top performers), but the focus on efficiency and on new OpEx metrics like the Green500's floating point operations per second (FLOPS)/Watt is keeping HPC architecture centered on clusters. Furthermore, many applications of interest today are data driven (for example, digital video analytics), so many systems need not only traditional sequential high-performance storage for HPC checkpoints (saved state of a long-running job) but also more random access to structured (database) and unstructured (file) large data sets. Big data access is a common need of traditional warehouse-scale computing for cloud services as well as of current and emergent HPC workloads. So, warehouse-scale computing is not HPC, but HPC applications can leverage data center-inspired technology for cloud HPC on demand, if designed to do so from the start.

Power to computing

Power to computing can be measured in terms of a typical performance metric per Watt, for example, FLOPS/Watt for computing or I/O operations per second/Watt for I/O. Furthermore, any computing facility can be seen as a plant for converting Watts into computational results, and a gross measure of good plant design is power use efficiency (PUE), which is simply the ratio of total facility power to the power delivered to computing equipment. A good value today is 1.2 or less. Reasons for higher PUEs include inefficient cooling methods, administrative overhead, and the lack of purpose-built facilities compared to cloud data centers (see Resources for a link to more information).
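
In equation form, with illustrative (assumed) numbers:

$$\mathrm{PUE} = \frac{P_{\text{total facility}}}{P_{\text{IT equipment}}}, \qquad \text{for example,} \quad \frac{1.2\ \text{MW drawn by the facility}}{1.0\ \text{MW delivered to computing equipment}} = 1.2 .$$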

Changes in scalable computing architecture focus over time include:

  • Early focus on a fast single processor (uniprocessor) to push the stored-program arithmetic logic unit central processor to the highest clock rates and instruction throughput possible:
    • John von Neumann, Alan Turing, Robert Noyce (co-founder of Intel), and Ted Hoff (Intel universal processor proponent), along with Gordon Moore, saw the initial challenge as scaling digital logic and clocking a processor as fast as possible.
    • Up to at least 1984 (and maybe longer), the general rule was “the processor makes the computer.”
    • Cray designed vector processors (X-MP, Y-MP) and distributed-memory multiprocessors interconnected by a six-way 3D torus for custom MPP machines, but this remained unique to the supercomputing world.
    • IBM’s focus early on was scalable mainframes and fast uniprocessors until the announcement of the IBM® Blue Gene® architecture in 1999 using a multicore IBM® POWER® architecture system-on-a-chip design and a 3D torus interconnection. The current TOP500 includes many Blue Gene systems, which have often occupied the LINPACK-measured TOP500 number one spot.
  • More recently, since 1994, HPC has evolved toward a few custom MPPs and mostly off-the-shelf clusters, using both custom interconnections (for example, Blue Gene and Cray) and off-the-shelf converged Ethernet (10G, 40G) and InfiniBand:
    • The TOP500 has become dominated by clusters, which comprise the majority of top-performing HPC solutions (two-thirds) today.
    • As shown in the TOP500 chart by architecture since 1994, clusters and MPP dominate today (compared to single instruction, multiple data [SIMD] vector; fast uniprocessors; symmetric multiprocessing [SMP] shared memory; and other, more obscure architectures).
    • John Gage at Sun Microsystems (now Oracle) stated that “the network is the computer,” referring to distributed systems and the Internet, but low-latency networks in clusters likewise become core to scaling.
    • Coprocessors interfaced to cluster nodes via memory-mapped I/O, including GP-GPU and even hybrid field-programmable gate array (FPGA) processors, are used to accelerate specific computing workloads on each cluster node.
  • Warehouse-scale computing and the cloud emerge with focus on MapReduce and what HPC would call embarrassingly parallel applications:
    • The TOP500 is measured with LINPACK and FLOPs and so is not focused on cost of operations (for example, FLOPs/Watt) or data access. Memory access is critical, but storage access is not so critical, except for job checkpoints (so a job can be restarted, if needed).
    • Many data-driven applications have emerged in the new millennium, including social networks, Internet search, global geographical information systems, and analytics associated with more than a billion Internet users. This is not HPC in the traditional sense but warehouse-scale computing operating at massive scale.
    • Luiz André Barroso states that “the data center is the computer,” a second shift away from processor-focused design. The data center is highly focused on OpEx as well as CapEx, and so is a better fit for HPC where FLOPs/Watt and data access matter. These Google data centers have a PUE less than 1.2—a measure of total facility power consumed divided by power used for computation. (Most computing enterprises have had a PUE of 2.0 or higher, so, 1.2 is very low indeed. See Resources for more information.)
    • Amazon launched Amazon Elastic Compute Cloud (Amazon EC2), which is best suited to web services but has some scalable and at least high-throughput computing features (see Resources).
  • On-demand cloud HPC services expand, with an emphasis on clusters, storage, coprocessors and elastic scaling:
    • Many private and public HPC clusters occupy TOP500, running Linux® and using common open source tools, such that users can build and scale applications on small clusters but migrate to the cloud for on-demand large job handling. Companies like Penguin Computing, which features Penguin On-Demand, leverage off-the-shelf clusters (InfiniBand and converged 10G/40G Ethernet), Intel or AMD multicore headless nodes, GP-GPU coprocessors, and scalable redundant array of independent disks (RAID) storage.
    • IBM Platform computing provides IBM xSeries® and zSeries® computing on demand with workload management tools and features.
    • Numerous universities and start-up companies leverage HPC on demand with cloud services or off-the-shelf clusters to complement their own private services. Two that I know well are the University of Alaska Arctic Region Supercomputing Center (ARSC) Pacman (Penguin Computing) and the University of Colorado JANUS cluster supercomputer. A common Red Hat Enterprise Linux (RHEL) open source workload tool set and open architecture allow for migration of applications from private to public cloud HPC systems.

Figure 1 shows the TOP500 move to clusters and MPP since the mid-1990s.

Figure 1. TOP500 evolution to clusters and MPP since 1994

The cloud HPC on-demand approach requires well-defined off-the-shelf clustering, compute nodes, and tolerance for WAN latency to transfer workload. As such, these systems are not likely to take the top spots in the TOP500, but they are likely to rank well on the Green500, provide efficient scaling for many workloads, and now comprise the majority of the TOP500.

High-definition digital video computer vision: a scalable HPC case study

Most of us deal with compressed digital video, often in Moving Picture Experts Group (MPEG) 4 format, and don't think of the scale of even a high-definition (HD) web cam in terms of data rates and the processing needed to apply simple image processing analysis. Digital cinema workflow and post-production experts know the challenges well. They deal with individual 4K frames (roughly 4 megapixels) or much higher resolutions. These frames might be compressed, but they are not compressed over time in groups of pictures as MPEG does, and they typically use lossless rather than lossy compression.

To start to understand an HPC problem that involves FLOPs, uncompressed data, and tools that can be used for scale-up, let’s look at a simple edge-finder transform. The transform-example.zip includes Open Computer Vision (OpenCV) algorithms to transform a real-time web cam stream into a Sobel or Canny edge view in real time. See Figure 2.

Figure 2. HD video Canny edge transform

Leveraging cloud HPC for video analytics allows for deployment of more intelligent smart phone applications. Perhaps phone processors will someday be able to handle real-time HD digital video facial recognition, but in the meantime, cloud HPC can help. Likewise, data that originates in data centers, like geographic information systems (GIS) data, needs intensive processing for analytics to segment scenes, create point clouds of 3D data from stereo vision, and recognize targets of interest (such as well-known landmarks).
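
A minimal sketch of the capture-and-transform loop behind Figure 2, assuming OpenCV's C++ API and a default web cam, follows; it is an illustration in the spirit of transform-example.zip, not the code shipped in it:

```cpp
// canny_cam.cpp: show a real-time Canny edge view of a web cam stream.
// Assumed build line: g++ canny_cam.cpp -o canny_cam `pkg-config --cflags --libs opencv4`
#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture cap(0);                  // open the default camera
    if (!cap.isOpened()) return 1;            // no camera available

    cv::Mat frame, gray, edges;
    for (;;) {
        cap >> frame;                         // grab one frame
        if (frame.empty()) break;
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        cv::GaussianBlur(gray, gray, cv::Size(5, 5), 1.5);  // reduce noise before edge detection
        cv::Canny(gray, edges, 50, 150);      // hysteresis thresholds are guesses; tune per camera
        cv::imshow("Canny edges", edges);
        if (cv::waitKey(1) == 27) break;      // ESC exits
    }
    return 0;
}
```

Running it alongside a tool like dstat makes the point: even this single-threaded loop keeps a core busy at HD resolutions.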

Augmented reality and video analytics

Video analytics involves the collection of structured (database) information from unstructured video (files) and video streams, for example, facial recognition. Much of the early focus has been on security and automation of surveillance, but applications are growing fast and are now being used for more social purposes, for example, facial recognition used not to identify a person but to capture and record facial expression and mood (while shopping). This technology can be coupled with augmented reality, whereby the analytics are used to update a scene with helpful information (such as navigation data). Video data can be compressed and uplinked to warehouse-scale data centers for processing so that the analytics can be collected and information not available on a user's smart phone can be provided in return. The image processing is compute intensive, involves big data storage, and is likely a scaling challenge (see Resources for a link to more information).

Sometimes, when digital video is collected in the field, the data must be brought to the computational resources; but if possible, digital video should only be moved when necessary, to avoid encoding to compress and decoding to decompress for viewing. Specialized coprocessors known as codecs (coder/decoders) are designed to decode video without software, and coprocessors to render graphics (GPUs) exist, but to date, no CV coprocessors are widely available. Khronos announced an initiative (OpenVX) in late 2012 to define hardware acceleration for computer vision, but work has only just begun (see Resources). So, to date, CV remains more of an HPC application that has had attention primarily from digital cinema, but this is changing rapidly based on interest in CV on mobile devices and in the cloud.

Although all of us imagine CV being implemented on mobile robots, in heads-up displays for intelligent transportation, and on visors (like Google Goggles that are now available) for personal use, it's not clear that all of the processing must be done on the embedded devices, or that it should be even if it could. The reason is data: Without access to correlated data center data, CV information has less value. For example, how much value is there in knowing where you are without more mapping and GIS data to help you with where you want to go next? Real-time CV and video analytics are making progress, but they face many challenges, including huge storage requirements, high network bit rates for transport, and significant processing demands for interpretation. Whether the processing is done by cloud HPC clusters or embedded systems, it's clear that concurrency and parallel processing will play a huge role. Try running a simple Hough linear transform on the 12-megapixel cactus photo I took, and you'll see why HPC might be needed just to segment a scene at 60 frames/s.
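
For a feel of the cost, here is a hedged sketch of that experiment; cactus.jpg stands in for the author's 12-megapixel photo, and the Hough thresholds are guesses:

```cpp
// hough_lines.cpp: time a probabilistic Hough line transform on a large still image.
#include <opencv2/opencv.hpp>
#include <cstdint>
#include <cstdio>
#include <vector>

int main() {
    cv::Mat img = cv::imread("cactus.jpg", cv::IMREAD_GRAYSCALE);  // placeholder file name
    if (img.empty()) return 1;

    cv::Mat edges;
    cv::Canny(img, edges, 50, 150);           // Hough operates on an edge map

    std::vector<cv::Vec4i> lines;
    int64_t t0 = cv::getTickCount();
    cv::HoughLinesP(edges, lines, 1, CV_PI / 180, 100, 50, 10);    // rho, theta, threshold, min length, max gap
    double sec = (cv::getTickCount() - t0) / cv::getTickFrequency();
    std::printf("%zu line segments found in %.3f s\n", lines.size(), sec);
    return 0;
}
```

Multiply whatever time this takes on a 12-megapixel frame by 60 frames/s, and the case for threads, clusters, or coprocessors makes itself.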

The challenge of making algorithms parallel

HPC with both clusters and MPP requires coding methods that employ many threads of execution on each multicore node and that use message-passing interfaces (MPIs) and basic methods to map data and code to processing resources and collect results. For digital video, the mapping can be simple if done at the frame level. Mapping within a frame is more difficult but still manageable, apart from the steps of segmenting and re-stitching frames back together.

The power of MapReduce

The MapReduce concept is generally associated with Google and the open source Hadoop project (from Apache Software Foundation), but any parallel computation must employ this concept to obtain speed-up, whether done at a node or cluster level with Java™ technology or at a thread level for a nonuniform memory access (NUMA) shared memory. For applications like digital video analytics, the mapping is data intensive, so it makes sense to move the function to the data (in the mapping stage), but either way, the data to be processed must be mapped and processed and the results combined. A clever mapping avoids data dependencies and the need for synchronization as much as possible. In the case of image processing, for CV, the mapping could be within a frame, at the frame level, or by groups of pictures (see Resources).

Key tools for designing cluster scaling applications for cloud HPC on demand include the following:

  • Threading is the way in which a single application (a Linux process, with one address space on one cluster node) can be designed to use all processor cores on that node. Most often, this is done with Portable Operating System Interface for UNIX® (POSIX) Pthreads or with a library like OpenMP, which abstracts the low-level details of POSIX threading. I find POSIX threading to be fairly simple and typically write Pthread code, as can be seen in the hpc_cloud_grid.tar.gz example, which maps threads across the number space for prime number searching; a minimal sketch of this kind of thread mapping appears after this list.
  • MPI is a library that can be linked into a cluster parallel application to assist with mapping of processing to each node, synchronization, and reduction of results. Although you can use MPI to implement MapReduce, unlike Hadoop, it typically moves data (in messages) to program functions running on each node (rather than moving code to the data). In the final video analytics article in this series, I will provide a thread and MPI cluster-scalable version of the capture-transform code. Here, I provide the simple code for a single thread and node to serve as a reference. Run it and Linux dstat at the same time to monitor CPU, I/O, and storage use. It is a resource-intensive program that computes Sobel and Canny transforms on a 2560×1920-pixel image. It should run on any Linux system with OpenCV and a web cam.
  • Vector SIMD and SPMD processing can be accomplished on Intel and AMD nodes with a compiler switch enabled during compilation or, with more work, by creating transform kernels in CUDA or OpenCL for off-load to a GPU or GP-GPU coprocessor.
  • OpenCV is highly useful for video analytics, as it includes not only convenient image capture, handling, and display functions but also most of the best image processing transforms used in CV.
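
As promised above, here is a minimal Pthreads sketch of that kind of thread-to-number-space mapping. It is written in the spirit of the hpc_cloud_grid.tar.gz example rather than reproducing it; the range, thread count, and trial-division test are all simplifications:

```cpp
// prime_threads.cpp: split a number range across POSIX threads and count primes.
// Assumed build line: g++ -O2 prime_threads.cpp -o prime_threads -lpthread
#include <pthread.h>
#include <cstdio>

#define NTHREADS 8
#define LIMIT    10000000UL

struct range_t { unsigned long start, end, count; };

static int is_prime(unsigned long n) {
    if (n < 2) return 0;
    for (unsigned long d = 2; d * d <= n; ++d)
        if (n % d == 0) return 0;
    return 1;
}

static void *count_primes(void *arg) {
    range_t *r = static_cast<range_t *>(arg);
    for (unsigned long n = r->start; n < r->end; ++n)
        r->count += is_prime(n);              // each thread works on its own slice, no locking needed
    return nullptr;
}

int main() {
    pthread_t tid[NTHREADS];
    range_t ranges[NTHREADS];
    unsigned long chunk = LIMIT / NTHREADS, total = 0;

    for (int i = 0; i < NTHREADS; ++i) {      // map: assign a contiguous slice to each thread
        ranges[i].start = i * chunk;
        ranges[i].end   = (i == NTHREADS - 1) ? LIMIT : (i + 1) * chunk;
        ranges[i].count = 0;
        pthread_create(&tid[i], nullptr, count_primes, &ranges[i]);
    }
    for (int i = 0; i < NTHREADS; ++i) {      // reduce: join threads and sum the per-thread counts
        pthread_join(tid[i], nullptr);
        total += ranges[i].count;
    }
    std::printf("primes below %lu: %lu\n", LIMIT, total);
    return 0;
}
```

The same map-then-reduce shape carries over to the MPI version promised for the video analytics article, with nodes taking the place of threads.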

The future of on-demand cloud HPC

This article makes an argument for cloud HPC. The goal here is to acquaint you with the idea and some of the challenging yet compelling applications (like CV), as well as to introduce you to methods for programming applications that can scale on clusters and MPP machines. In future articles, I will take the CV example further and adapt it not only for threading but also for MPI so that we can examine how well it scales on cloud HPC (in my case, at ARSC on Pacman or JANUS). My research involves comparison of tightly coupled CV coprocessors (which I am building using an Altera Stratix IV FPGA and call a computer vision processing unit [CVPU]) with what I can achieve with CV on ARSC, for the purpose of understanding whether environmental sensing and GIS data are best processed like graphics, with a coprocessor, on a cluster, or perhaps with a combination of the two. The goals for this research are lofty. In the case of the CVPU, the CV/graphics Turing-like test I imagine is one in which the scene that the CVPU parses can then be sent to a GPU for rendering. Ideally, the parsed/rendered image would be indistinguishable from the true digital video stream. When rendered scenes and the ability to analyze them reach a common level of fidelity, augmented reality, perceptual computing, and video analytics will have amazing power to transform our lives.

Cloud scaling, Part 2: Tour high-performance cloud system design advances

Learn how to leverage co-processing, nonvolatile memory, interconnection, and storage

Breakthrough device technology requires the system designer to re-think operating and application software design in order to realize the potential benefits of closing the access gap or pushing processing into the I/O path with coprocessors. Explore and consider how the latest memory, compute, and interconnection devices and subsystems can affect your scalable, data-centric, high-performance cloud computing system design. Breakthroughs in device technology can be leveraged for transition between compute-centric and the more balanced data-centric compute architectures.

The author examines storage-class memory and demonstrates how to fill the long-standing performance gap between RAM and spinning disk storage; details the use of I/O bus coprocessors (for processing closer to data); explains how to employ InfiniBand to build low-cost, high performance interconnection networks; and discusses scalable storage for unstructured data.

Computing systems engineering has historically been dominated by scaling processors and dynamic RAM (DRAM) interfaces to working memory, leaving a huge gap between data-driven and computational algorithms (see Resources). Interest in data-centric computing is growing rapidly, along with novel system design software and hardware devices to support data transformation with large data sets.

The data focus in software is no surprise given applications of interest today, such as video analytics, sensor networks, social networking, computer vision and augmented reality, intelligent transportation, machine-to-machine systems, and big data initiatives like IBM’s Smarter Planet and Smarter Cities.

The current wave of excitement is about collecting, processing, transforming, and mining the big data sets:

  • The data focus is leading toward new device-level breakthroughs in nonvolatile memory (storage-class memory, SCM) which brings big data closer to processing.
  • At the same time, input/output coprocessors are bringing processing closer to the data.
  • Finally, low-latency, high-bandwidth off-the-shelf interconnections like InfiniBand are allowing researchers to quickly build 3D torus and fat-tree clusters that used to be limited to the most exotic and expensive custom high-performance computing (HPC) designs.

Yet, the systems software and even system design often remain influenced by out-of-date bottlenecks and thinking. For example, consider threading and multiprogramming. The whole idea came about because of slow disk drive access: what else can a program do while waiting on data but run another one? Sure, we have redundant array of independent disks (RAID) scaling and NAND flash solid-state disks (SSDs), but as IBM Almaden Research has noted, the time-scale differences of the access-time gap are massive in human terms.

The access time gap between a CPU, RAM, and storage can be measured in terms of typical performance for each device, but perhaps the gap is more readily understood when put into human terms (as IBM Almaden has done for illustrative purposes).

If a typical CPU operation is similar to what a human can do in seconds, then RAM access at 100 times more latency is much like taking a few minutes to access information. However, by the same comparison, disk access at 100,000 times more latency compared to RAM is on the order of months (100 days). (See Figure 1.)

Figure 1. The data access gap

Many experienced computer engineers have not really thought hard about the 100 to 200 random I/O operations per second (IOPS) that represent the mechanical boundary for a disk drive. (Sure, sequential access can reach hundreds of megabytes per second, but random access remains roughly what it has been for decades, bounded by seek and rotational access latency even at 15K RPM.)
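
A quick back-of-the-envelope check of that mechanical bound, using assumed but typical numbers for a 15K RPM enterprise drive (about 2 ms average rotational latency and roughly 3.5 ms average seek):

$$t_{\text{rotate}} = \frac{0.5\ \text{rev}}{15{,}000/60\ \text{rev/s}} = 2\ \text{ms}, \qquad \text{IOPS} \approx \frac{1}{t_{\text{seek}} + t_{\text{rotate}}} \approx \frac{1}{3.5\ \text{ms} + 2\ \text{ms}} \approx 180 .$$

Several milliseconds per random access is also what makes the human-scale analogy above so lopsided: it is roughly five orders of magnitude slower than a RAM reference.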

Finally, as Almaden notes, tape is therefore glacially slow. So, why do we bother? For the capacity, of course. But how can we get processing to the data or data to the processing more efficiently?

Look again at Figure 1. Improvements to NAND flash memory for use in mobile devices and, more recently, in SSDs have helped to close the gap; however, it is widely believed that NAND flash device technology will be pushed to its limits fairly quickly, as noted by numerous systems researchers (see Resources). The floating-gate transistor technology used is already at scaling limits, and pushing it further is leading to lower reliability, so although it has been a stop-gap for data-centric computing, it is likely not the solution.

Instead, several new nonvolatile RAM (NVRAM) device technologies are likely solutions, including:

  • Phase change RAM (PCRAM): This memory uses a heating element to turn a class of materials known as chalcogenides into either a crystallized or amorphous glass state, thereby storing two states that can be programmed and read, with state retained even when no power is applied. PCRAM appears to show the most promise in the near term for M-type synchronous nonvolatile memory (NVM).
  • Resistive RAM (RRAM): Most often described as a circuit element unlike a capacitor, inductor, or resistor, RRAM provides a unique relationship between current and voltage, one not found in other well-known devices that store charge or magnetic energy or provide linear resistance to current flow. Materials with these memristive properties have been tested for many decades, but engineers usually avoided them because of their nonlinear behavior and the lack of applications for it. IEEE fellow Leon Chua describes them in "Memristor: The Missing Circuit Element." A memristor's behavior can be summarized as follows: Current flow in one direction causes electrical resistance to increase, and flow in the opposite direction causes resistance to decrease, but the memristor retains the last resistance it had when flow is restarted. As such, it can store a nonvolatile state, be programmed, and have its state read. For details, and even some controversy on what is and is not a memristor, see Resources.
  • Spin transfer torque RAM (STT-RAM): A current passed through a magnetic layer can produce a spin-polarized current that, when directed into a magnetic layer, can change its orientation via angular momentum. This behavior can be used to excite oscillations and flip the orientation of nanometer-scale magnetic devices. The main drawback is the high current needed to flip the orientation.

Consult the many excellent entries in Resources for more in-depth information on each device technology.

From a systems perspective, as these devices evolve, where they can be used and how well each might fill the access gap depends on the device’s:

  • Cost
  • Scalability (device integration size must be smaller than a transistor to beat flash; less than 20 nanometers)
  • Latency to program and read
  • Device reliability
  • Perhaps most importantly, durability (how often it can be programmed and erased before it becomes unreliable).

Based on these device performance considerations, IBM has divided SCM into two main classes:

  • S-type: Asynchronous access via an I/O controller. Threading or multiprogramming is used to hide the I/O latency to the device.
  • M-type: Synchronous access via a memory controller. Think about this as wait-states for RAM access in which a CPU core stalls.

Further, NAND SSD would be considered fast storage, accessed via a block-oriented storage controller (much higher I/O rates but similar bandwidth to a spinning disk drive).

It may seem like the elimination of asynchronous I/O for data processing (except, of course, for archive access or cluster scaling) might be a cure-all for data-centric processing. In some sense it is, but systems designers and software developers will have to change habits. The need for I/O latency hiding will largely go away on each node in a system, but it won’t go away completely. Clusters built from InfiniBand deal with node-to-node data-transfer latency with Message Passing Interface or MapReduce schemes and enjoy similar performance to this envisioned SCM node except when booting or when node data exceeds node working RAM size.

So, for scaling purposes, cluster interconnection and I/O latency hiding among nodes in the cluster is still required.

Moving processing closer to data with coprocessors

Faster access to big data is ideal and looks promising, but some applications will always benefit from the alternative approach of moving processing closer to data interfaces. Many examples exist, such as graphics (graphics processing units, GPUs), network processors, protocol-offload engines like the TCP/IP Offload Engine, RAID on chip, encryption coprocessors, and more recently, the idea of computer vision coprocessors. My research involves computer vision and graphics coprocessors, both at scale in clusters and embedded. I am working on what I call a computer vision processing unit, comparing several coprocessors that became more widely pursued with the 2012 announcement of OpenVX by Khronos (see Resources).

In the embedded world, such a method might be described as an intelligent sensor or smart camera, methods in which preliminary processing of raw data is provided by the sensor interface and an embedded logic device or microprocessor, perhaps even a multicore system on a chip (SoC).

In the scalable world, this most often involves use of a coprocessor bus or channel adapter (like PCI Express, PCIe, and Ethernet or InfiniBand); it provides data processing between the data source (network side) and the node I/O controller (host side).

Whether processing should be done, or is more efficient when done, in the I/O path or on a CPU core has always been a topic of hot debate, but based on an existence proof (GPUs and network processors), coprocessors can clearly be useful, waxing and waning in popularity as coprocessor technology advances relative to processors. So, let's take a quick look at some of the methods:

Vector processing for single program, multiple data
Provided today by GPUs, general-purpose GPUs (GP-GPUs), and application processing units (APUs), the idea is that data can be transformed on its way to an output device like a display or sent to a GP-GPU/APU and transformed on a round trip from the host. “General purpose” implies more sophisticated features like double-precision arithmetic compared to single precision only for graphics-specific processing.
Many core
Traditional many-core coprocessor cards (see Resources) are available from various vendors. The idea is to lower cost and power consumption by using simpler, yet numerous cores on the I/O bus, with round-trip offloading of processing to the cards for a more capable but power-hungry and costly full-scale multicore host. Typically, the many-core coprocessor might have an order of magnitude more cores than the host and often includes gigabit or 10G Ethernet and other types of network interfaces.
I/O bus field-programmable gate arrays (FPGAs)
FPGA cards, most often used to prototype a new coprocessor in the early stages of development, can perhaps be used as a solution for low-volume coprocessors as well.
Embedded SoCs
A multicore solution can be used in an I/O device to create an intelligent device like a stereo ranging or time-of-flight camera.
Interface FPGA/configurable programmable logic devices
A digital logic state machine can provide buffering and continuous transformation of I/O data, such as digital video encoding.

Let's look at an example based on offload and the I/O path. Data transformation has obvious value for applications like the decoding of MPEG-4 digital video, with a GPU coprocessor in the path between the player and a display, as shown in Figure 2 for the Linux® MPlayer Video Decode and Presentation API for Unix (VDPAU) software interface to NVIDIA MPEG decoding on the GPU.

Figure 2. Simple video decode offload example

Likewise, any data processing or transformation that can be done in-bound or out-bound from a CPU host may have value, especially if the coprocessor can provide processing at lower cost, with greater efficiency, or with lower power consumption based on purpose-built processors compared to general-purpose CPUs.

To start to understand a GP-GPU compared to a multicore coprocessor approach, try downloading the two examples of a point spread function used to sharpen the edges in an image: the threaded transform example and the GPU transform example. Both perform the same 320×240-pixel transformation, but in one case the Compute Unified Device Architecture (CUDA) C code provided requires a GPU or GP-GPU coprocessor, and in the other case, either a multicore host or a many-core (for example, MICA) coprocessor.
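
As a rough illustration of the CPU-side variant of such a sharpening (point spread function) step, here is a minimal sketch; it is my own illustration of the technique, not the downloadable example, and it uses the common 3×3 sharpening kernel on an 8-bit grayscale buffer:

```cpp
// sharpen.cpp: apply a 3x3 sharpening (point spread function) kernel to a grayscale image buffer.
#include <algorithm>
#include <cstdint>
#include <vector>

// Sharpen a width x height 8-bit grayscale buffer; border pixels are left unmodified for brevity.
std::vector<std::uint8_t> sharpen(const std::vector<std::uint8_t> &in, int width, int height) {
    static const int k[3][3] = { {  0, -1,  0 },
                                 { -1,  5, -1 },
                                 {  0, -1,  0 } };   // classic sharpening point spread function
    std::vector<std::uint8_t> out(in);
    for (int y = 1; y < height - 1; ++y) {           // this row loop is the natural unit to split
        for (int x = 1; x < width - 1; ++x) {        // across host threads or GPU threads
            int acc = 0;
            for (int ky = -1; ky <= 1; ++ky)
                for (int kx = -1; kx <= 1; ++kx)
                    acc += k[ky + 1][kx + 1] * in[(y + ky) * width + (x + kx)];
            out[y * width + x] = static_cast<std::uint8_t>(std::clamp(acc, 0, 255));
        }
    }
    return out;
}
```

In the threaded version, each host thread takes a band of rows; in the CUDA version, each GPU thread computes one output pixel. That is exactly the round-trip offload trade-off the two downloads let you compare.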

So which is better?

Neither approach is clearly better, mostly because the NVRAM solutions have not yet been made widely available (except as expensive battery-backed DRAM or as S-type SCM from the IBM Texas Memory Systems Division) and moving processing into the I/O data path has traditionally involved less friendly programming. Both are changing, though: Coprocessors are adopting higher-level languages like the Open Computing Language (OpenCL), in which code written for multicore hosts runs equally well on Intel MICA or Altera Stratix IV/V architectures.

Likewise, all of the major computer systems companies are working feverishly to release SCM products, with PCRAM the most likely to be available first. My advice is to assume that both will be with us for some time and operating systems and applications must be able to deal with both. The memristor, or RRAM, includes a vision that resembles Isaac Asimov’s fictional positronic brain in which memory and processing are fully integrated as they are in a human neural system but with metallic materials. The concept of fully integrated NVM and processing is generally referred to as processing in memory (PIM) or neuromorphic processing (see Resources). Scalable NVM integrated processing holds extreme promise for biologically inspired intelligent systems similar to the human visual cortex, for example. Pushing toward the goal of integrated NVM, with PIM from both sides, is probably a good approach, so I plan to keep up with and keep working on systems that employ both methods—coprocessors and NVM. Nature has clearly favored direct, low-level, full integration of PIM at scale for intelligent systems.

Scaling nodes with InfiniBand interconnection

System designers always have to consider the trade-off between scaling up each node in a system and scaling out a solution that uses networking or more richly interconnected clustering to scale processing, I/O, and data storage. At some point, scaling the memory, processing, and storage a single node can integrate hits a practical limit in terms of cost, power efficiency, and size. It is also often more convenient from a reliability, availability, and servicing perspective to spread capability over multiple nodes so that if one needs repair or upgrade, others can continue to provide service with load sharing.

Figure 3 shows a typical InfiniBand 3D torus interconnection.

Figure 3. Example of InfiniBand 4x4x4 3D torus with 1152 nodes (SDSC Gordon)

In Figure 3, the 4x4x4 torus shown is for the San Diego Supercomputer Center (SDSC) Gordon supercomputer, as documented by Mellanox, which uses a 36-port InfiniBand switch to connect nodes to each other and to storage I/O.

InfiniBand, iSCSI over Converged Enhanced Ethernet (CEE), or Fibre Channel is most often used as the scalable storage interface for access to big data. This storage area network (SAN) scaling for RAID arrays is used to host distributed, scalable file systems like Ceph, Lustre, Apache Hadoop, or the IBM General Parallel File System (GPFS). Use of CEE and InfiniBand for storage access via the OpenFabrics Alliance SCSI Remote Direct Memory Access (RDMA) Protocol and iSCSI Extensions for RDMA is a natural fit for SAN storage integrated with an InfiniBand cluster. Storage is viewed more as a distributed archive of unstructured data that is searched or mined and loaded into node NVRAM for cluster processing. Higher-level data-centric cluster processing methods like Hadoop MapReduce can also be used to bring code (software) to the data at each node. These are big-data topics that I describe in more detail in the last part of this four-part series.

The future of data-centric scaling

This article makes an argument for systems design and architecture that move processors closer to data-generating and data-consuming devices, as well as for simplification of the memory hierarchy to include fewer levels, leveraging lower-latency, scalable NVM devices. This defines a data-centric node design that can be further scaled with low-latency, off-the-shelf interconnection networks like InfiniBand. The main challenge with data-centric computing is not instructions per second or floating-point operations per second alone, but rather IOPS and the overall power efficiency of data processing.

In Part 1 of this series, I uncovered methods and tools to build a compute node and small cluster application that can scale with on-demand HPC by leveraging the cloud. In this article I detailed such high-performance system design advances as co-processing, nonvolatile memory, interconnection, and storage.

In Part 3 in this series I provide more in-depth coverage of a specific data-centric computing application — video analytics. Video analytics includes applications such as facial recognition for security and computer forensics, use of cameras for intelligent transportation monitoring, retail and marketing that involves integration of video (for example, visualizing yourself in a suit you’re considering from a web-based catalog), as well as a wide range of computer vision and augmented reality applications that are being invented daily. Although many of these applications involve embedded computer vision, most also require digital video analysis, transformation, and generation in cloud-based scalable servers. Algorithms like Sobel transformation can be run on typical servers, but algorithms like the generalized Hough transform, facial recognition, image registration, and stereo (point cloud) mapping, for example, require the NVM and coprocessor approaches this article discussed for scaling.

In the last part of the series, I deal with big data issues.

Cloud scaling, Part 3: Explore video analytics in the cloud

Using methods, tools, and system design for video and image analysis, monitoring, and security

Explore and consider methods, tools, and system design for video and image analysis with cloud scaling. As described in earlier articles in this series, video analytics requires a more balanced data-centric compute architecture compared to traditional compute-centric, scalable, high-performance computing. The author examines the use of OpenCV and similar tools for digital video analysis and methods to scale this analysis using cluster and distributed system design.

The use of coprocessors designed for video analytics and the new OpenVX hardware acceleration discussed in previous articles can be applied to the computer vision (CV) examples presented in this article. This new data-centric technology for CV and video analytics requires the system designer to re-think application software and system design to meet demanding requirements, such as real-time monitoring and security for large, public facilities and infrastructure as well as a more entertaining, interactive, and safer world.

Public safety and security

The integration of video analytics in public places is perhaps the best way to ensure public safety, providing digital forensic capabilities to law enforcement and the potential to increase detection of threats and prevention of public safety incidents. At the same time, this need has to be balanced with rights to privacy, which can become a contentious issue if these systems are abused or not well understood. For example, the extension of facial detection, as shown in Figure 1, to facial recognition provides obvious identification capability and can be used to track an individual as he or she moves from one public place to another. To many people, facial analytics might be seen as an invasion of privacy, and use of CV and video analytics should certainly adhere to surveillance and privacy-rights laws and policies; any product or service developer might want to start by considering the best practices outlined by the Federal Trade Commission (FTC; see Resources).

Digital video based on standards such as those from the Moving Picture Experts Group (MPEG) for encoding video to compress, transport, uncompress, and display it has led to a revolution in computing, ranging from social networking media and amateur digital cinema to improved training and education. Tools for decoding and consuming digital video are widely used every day, but tools to encode and analyze uncompressed video frames, such as Open Computer Vision (OpenCV), are needed for video analytics. One of the readily available and quite capable tools for encoding and decoding digital video is FFmpeg; for still images, the GNU Image Manipulation Program (GIMP) is quite useful (see Resources for links). With these three basic tools, an open source developer is fully equipped to start exploring computer vision (CV) and video analytics. Before exploring these tools and development methods, however, let's first define these terms better and consider applications.

The first article in this series, Cloud scaling, Part 1: Build your own and scale with HPC on demand, provided a simple example using OpenCV that implements a Canny edge transformation on continuous real-time video from a Linux® web cam. This is an example of a CV application that you could use as a first step in segmenting an image. In general, CV applications involve acquisition, digital image formats for pixels (picture elements that represent points of illumination), images and sequences of them (movies), processing and transformation, segmentation, recognition, and ultimately scene descriptions. The best way to understand what CV encompasses is to look at examples. Figure 1 shows face and facial feature detection analysis using OpenCV. Note that in this simple example, using the Haar Cascade method (a machine learning algorithm) for detection analysis, the algorithm best detects faces and eyes that are not occluded (for example, my youngest son’s face is turned to the side) or shadowed and when the subject is not squinting. This is perhaps one of the most important observations that can be made regarding CV: It’s not a trivial problem. Researchers in this field often note that although much progress has been made since its advent more than 50 years ago, most applications still can’t match the scene segmentation and recognition performance of a 2-year-old child, especially when the ability to generalize and perform recognition in a wide range of conditions (lighting, size variation, orientation and context) is considered.

Figure 1. Using OpenCV for facial recognition

To help you understand the analytical methods used in CV, I have created a small test set of images from the Anchorage, Alaska area that is available for download. The images have been processed using GIMP and OpenCV. I developed C/C++ code to use the OpenCV application programming interface with a Linux web cam, precaptured images, or MPEG movies. The use of CV to understand video content (sequences of images), either in real time or from precaptured databases of image sequences, is typically referred to as video analytics.
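
A minimal sketch of the kind of OpenCV call behind the face detection in Figure 1, assuming OpenCV's C++ API and the stock haarcascade_frontalface_default.xml file that ships with OpenCV (an illustration, not the author's exact code):

```cpp
// face_detect.cpp: detect faces in a still image with an OpenCV Haar cascade.
#include <opencv2/opencv.hpp>
#include <cstdio>
#include <vector>

int main(int argc, char **argv) {
    if (argc < 2) { std::fprintf(stderr, "usage: face_detect <image>\n"); return 1; }

    // Path to the cascade file is an assumption; adjust it to your OpenCV install.
    cv::CascadeClassifier faces("haarcascade_frontalface_default.xml");
    cv::Mat img = cv::imread(argv[1]);
    if (faces.empty() || img.empty()) return 1;

    cv::Mat gray;
    cv::cvtColor(img, gray, cv::COLOR_BGR2GRAY);
    cv::equalizeHist(gray, gray);                    // improve contrast before detection

    std::vector<cv::Rect> found;
    faces.detectMultiScale(gray, found, 1.1, 3);     // scale step and neighbor count are typical defaults
    for (const cv::Rect &r : found)
        cv::rectangle(img, r, cv::Scalar(0, 255, 0), 2);  // draw a green box around each detection

    cv::imwrite("faces_out.jpg", img);
    std::printf("%zu face(s) detected\n", found.size());
    return 0;
}
```

The same pattern, with an eye cascade run inside each detected face rectangle, produces the nested boxes shown in the figure, and it runs into the occlusion and shadowing limits noted above.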

Defining video analytics

Video analytics is broadly defined as analysis of digital video content from cameras (typically visible light, but it could be from other parts of the spectrum, such as infrared) or stored sequences of images. Video analytics involves several disciplines but at least includes:

  • Image acquisition and encoding. As a sequence of images or groups of compressed images. This stage of video analytics can be complex, including photometer (camera) technology, analog decoding, digital formats for arrays of light samples (pixels) in frames and sequences, and methods of compressing and decompressing this data.
  • CV. The inverse of graphical rendering, where acquired scenes are converted into descriptions compared to rendering a scene from a description. Most often, CV assumes that this process of using a computer to “see” should operate wherever humans do, which often distinguishes it from machine vision. The goal of seeing like a human does most often means that CV solutions employ machine learning.
  • Machine vision. Again, the inverse of rendering but most often in a well-controlled environment for the purpose of process control—for example, inspecting printed circuit boards or fabricated parts to make sure they are geometrically correct within tolerances.
  • Image processing. A broad application of digital signal processing methods to samples from photometers and radiometers (detectors that measure electromagnetic radiation) to understand the properties of an observation target.
  • Machine learning. Algorithms that are refined with training data so that their performance improves and generalizes when they are tested on new data.
  • Real-time and interactive systems. Systems that must respond by a deadline relative to a request for service, or at least with a quality of service that meets service-level agreements (SLAs) with customers or users of the services.
  • Storage, networking, database, and computing. All required to process digital data used in video analytics, but a subtle, yet important distinction is that this is an inherently data-centric compute problem, as was discussed in Part 2 of this series.

Video analytics, therefore, is broader in scope than CV and is a system design problem that might include mobile elements like a smart phone (for example, Google Goggles) and cloud-based services for the CV aspects of the overall system. For example, IBM has developed a video analytics system known as the video correlation and analysis suite (VCAS), for which the IBM Travel and Transportation Solution Brief, Smarter Safety and Security Solution for Rail [PDF], is available; it is a good example of a system design concept. Detailed focus on each system design discipline involved in a video analytics solution is beyond the scope of this article, but many pointers to more information for system designers are available in Resources. The rest of this article focuses on CV processing examples and applications.

Basic structure of video analytics applications

You can break the architecture of cloud-based video analytics systems down into two major segments: embedded intelligent sensors (such as smart phones, tablets with a camera, or customized smart cameras) and cloud-based processing for analytics that can't be computed directly on the embedded device. Why break the architecture into two segments rather than solving everything on the smart embedded device? Embedding CV in transportation systems, smart phones, and products is not always practical. Even when a smart camera is embedded, the compressed video or scene description is often back-hauled to a cloud-based video analytics system simply to offload the resource-limited embedded device. Perhaps more important than resource limits, though, is that transporting video to the cloud for analysis allows correlation with larger data sets and annotation with up-to-date global information for augmented reality (AR) views returned to the devices.

The smart camera devices for applications like gesture and facial expression recognition must be embedded. However, more intelligent inference to identify people and objects and fully parse scenes is likely to require scalable data-centric systems that can be more efficiently scaled in a data center. Furthermore, data processing acceleration at scale ranging from the Khronos OpenVX CV acceleration standards to the latest MPEG standards and feature-recognition databases are key to moving forward with improved video analytics, and two-segment cloud plus smart camera solutions allow for rapid upgrades.

With sufficient data-centric computing capability leveraging the cloud and smart cameras, the dream of inverse rendering can perhaps be realized where, in the ultimate “Turing-like” test that can be demonstrated for CV, scene parsing and re-rendered display and direct video would be indistinguishable for a remote viewer. This is essentially done now in digital cinema with photorealistic rendering, but this rendering is nowhere close to real time or interactive.

Video analytics apps: Individual scenarios

Killer applications for CV and video analytics are being thought of every day, some perhaps years from realization because of computing requirements or implementation cost. Nevertheless, here is a list of interesting applications:

  • AR views of scenes for improved understanding. If you have ever looked at, for example, a landing plane and thought, I wish I could see the cockpit view with instrumentation, this is perhaps possible. I worked in Space Shuttle mission control long ago, where a large development team meticulously re-created a view of the avionics for ground controllers that shadowed what the astronauts could see, all graphical; but imagine fusion of both video and graphics to annotate and re-create scenes with metadata. A much simplified example is presented in concept later in this article to show how an aircraft observed through a tablet computer's camera could be annotated with attitude and altitude estimates.
  • Skeletal transformations to track the movement and estimate the intent and trajectory of an animal that might jump onto a highway. See the example in this article.
  • Fully autonomous or mostly autonomous vehicles with human supervisory control only. Think of the steps between today’s cruise control and tomorrow’s full autonomous car. Cars that can parallel park themselves today are a great example of this stepwise development.
  • Beyond face detection to reliable recognition and, perhaps more importantly, for expression feedback. Is the driver of a semiautonomous vehicle aggravated, worried, surprised?
  • Virtual shopping (AR to try products). Shoppers can see themselves in that new suit.
  • Signage that interacts with viewers. This is based on expressions, likes and dislikes, and data that the individual has made public.
  • Two-way television and interactive digital cinema. Entertainment for which viewers can influence the experience, almost as if they were actors in the content.
  • Interactive telemedicine. This is available any time with experts from anywhere in the world.

I make no attempt in this article to provide an exhaustive list of applications, but I explore more by looking closely at both AR (annotated views of the world through a camera and display—think heads-up displays such as fighter pilots have) and skeletal transformations for interactive tracking. To learn more beyond these two case studies and for more in-depth application-specific uses of CV and video analytics in medicine, transportation safety, security and surveillance, mapping and remote sensing, and an ever-increasing list of system automation that includes video content analysis, consult the many entries in Resources. The tools available can help anyone with computer engineering skills get started. You can also download a larger set of test images as well as all OpenCV code I developed for this article.

Example: Augmented reality

Real-time video analytics can change the face of reality by augmenting the view a consumer has with a smart phone held up to products or our view of the world (for example, while driving a vehicle) and can allow for a much more interactive experience for users for everything from movies to television, shopping, and travel to how we work. In AR, the ideal solution provides seamless transition from scenes captured with digital video to scenes generated by rendering for a user in real time, mixing both digital video and graphics in an AR view for the user. Poorly designed AR systems distract a user from normal visual cues, but a well-designed AR system can increase overall situation awareness, fusing metrics with visual cues (think fighter pilot heads-up displays).

The use of CV and video analytics in intelligent transportation systems has significant value for safety improvement, and perhaps eventually CV may be the key technology for self-driving vehicles. This appears to be the case based on the U.S. Defense Advanced Research Projects Agency challenge and the Google car, although use of the full spectrum with forward-looking infrared and instrumentation in addition to CV has made autonomous vehicles possible. Another potentially significant application is air traffic safety, especially for airports to detect and prevent runway incursion scenarios. The imagined AR view of an aircraft on final approach at Ted Stevens airport in Anchorage shows a Hough linear transform that might be used to segment and estimate aircraft attitude and altitude visually, as shown in Figure 2. Runway incursion safety is of high interest to the U.S. Federal Aviation Administration (FAA), and statistics for these events can be found in Resources.

Figure 2. AR display example

Image showing an example of video augmentation

For intelligent transportation, drivers will most likely want to participate even as systems become more intelligent, so a balance of automation and human participation and intervention should be kept in mind (for autonomous or semiautonomous vehicles).
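To make the idea behind Figure 2 concrete, here is a hedged sketch of how the dominant line angle in an edge map could serve as a crude attitude estimate using OpenCV's probabilistic Hough transform. The file name, Canny thresholds, and Hough parameters are illustrative assumptions, not values from the article's own code, and a real system would fuse this with many other cues.

#include <opencv2/opencv.hpp>
#include <cmath>
#include <cstdio>
#include <vector>

int main(int argc, char** argv) {
    // Hypothetical still frame of an aircraft on final approach.
    cv::Mat gray = cv::imread(argc > 1 ? argv[1] : "approach.jpg", cv::IMREAD_GRAYSCALE);
    if (gray.empty()) return 1;

    cv::Mat edges;
    cv::Canny(gray, edges, 50, 150);              // edge map feeds the Hough transform

    std::vector<cv::Vec4i> segments;              // each segment is (x1, y1, x2, y2)
    cv::HoughLinesP(edges, segments, 1, CV_PI / 180, 80, 60, 10);

    // Use the longest near-horizontal segment (a wing or fuselage line) as a rough
    // proxy for the aircraft's roll angle relative to the image frame.
    double bestLen = 0.0, bestAngle = 0.0;
    for (const cv::Vec4i& s : segments) {
        double dx = s[2] - s[0], dy = s[3] - s[1];
        double len = std::hypot(dx, dy);
        double angle = std::atan2(dy, dx) * 180.0 / CV_PI;
        if (std::fabs(angle) < 30.0 && len > bestLen) { bestLen = len; bestAngle = angle; }
    }
    std::printf("Estimated attitude angle: %.1f degrees\n", bestAngle);
    return 0;
}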

Skeletal transformation examples: Tracking movement for interactive systems

Skeletal transformations are useful for applications like gesture recognition or gait analysis of humans or animals; any application in which the motion of a body's skeleton (its rigid members) must be tracked can benefit from a skeletal transformation. Most often, the transformation is applied to bodies or limbs in motion, which further enables the use of background elimination for foreground tracking. However, it can also be applied to a single snapshot, as shown in Figure 3, where a picture of a moose is first converted to a gray map, then to a threshold binary image; finally, the medial distance is found for each contiguous region and thinned to a single pixel, leaving just the skeletal structure of each object. Notice that the ears on the moose are back, an indication of the animal's intent (a higher-resolution skeletal transformation might be able to detect this as well as the gait of the animal).

Figure 3. Skeletal transformation of a moose

Image showing an example of a skeletal transformation

Skeletal transformations can certainly be useful for tracking animals that might cross highways or charge a hiker, but the transformation has also become of high interest for gesture recognition in entertainment, such as in the Microsoft® Kinect® software development kit (SDK). Gesture recognition can be used for entertainment but also has many practical purposes, such as automatic sign language recognition (not yet available as a product, but a concept in research). Certainly skeletal-transformation CV can analyze the human gait for diagnostic or therapeutic purposes in medicine or capture human movement for animation in digital cinema.
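The pipeline behind Figure 3 (gray map, binary threshold, then thinning to a one-pixel-wide skeleton) can be approximated with a classic morphological skeleton. The sketch below assumes only OpenCV and a hypothetical input file; it uses iterative erosion and opening rather than the exact medial-distance thinning described above, so treat it as one workable approximation rather than the figure's precise method.

#include <opencv2/opencv.hpp>

int main(int argc, char** argv) {
    // Hypothetical input; any photo with a clear foreground subject works.
    cv::Mat src = cv::imread(argc > 1 ? argv[1] : "moose.jpg", cv::IMREAD_GRAYSCALE);
    if (src.empty()) return 1;

    // Step 1: gray map to binary image (Otsu picks the threshold level).
    cv::Mat bin;
    cv::threshold(src, bin, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);

    // Step 2: accumulate the residue of repeated erode/open operations; the result
    // approximates the one-pixel-wide skeleton of each contiguous region.
    cv::Mat skel = cv::Mat::zeros(bin.size(), CV_8UC1);
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_CROSS, cv::Size(3, 3));
    cv::Mat eroded, opened, residue;
    while (cv::countNonZero(bin) > 0) {
        cv::erode(bin, eroded, kernel);
        cv::dilate(eroded, opened, kernel);     // erode then dilate = morphological opening
        cv::subtract(bin, opened, residue);     // pixels removed by the opening
        cv::bitwise_or(skel, residue, skel);    // add them to the skeleton
        eroded.copyTo(bin);                     // iterate on the eroded image
    }

    cv::imwrite("skeleton.png", skel);
    return 0;
}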

Skeletal transformations are widely used in gesture-recognition systems for entertainment. Creative and Intel have teamed up to create an SDK for Windows® called the Creative* Interactive Gesture Camera Developer Kit (see Resources for a link) that uses a time-of-flight light detection and ranging sensor, a camera, and a stereo microphone. This SDK is similar to the Kinect SDK but is intended to give developers early access so they can build gesture-recognition applications for the device. The SDK is amazingly affordable and could become the basis for some breakthrough consumer devices now that it is in the hands of a broad development community. To get started, you can purchase the device from Intel and then download the Intel® Perceptual Computing SDK. The demo images are included as an example, along with numerous additional SDK examples, to help developers understand what the device can do. You can try the finger-tracking example shown in Figure 4 right away just by installing the SDK for Microsoft Visual Studio® and running the Gesture Viewer sample.

Figure 4. Skeletal transformation using the Intel Perceptual Computing SDK and Creative Interactive Gesture Camera Developer Kit

Image showing a skeletal and blob transformation of a hand


The future of video analytics

This article makes an argument for using video analytics primarily to improve public safety; for entertainment, social networking, telemedicine, and medically augmented diagnostics; and to help consumers envision products and services. Machine vision has quietly helped automate industry and process control for years, but CV and video analytics in the cloud now show promise for providing vision-based automation in the everyday world, where the environment is not well controlled. This will be a challenge both in terms of algorithms for image processing and machine learning and in terms of the data-centric computer architectures discussed in this series. The challenges for high-performance video analytics (in terms of receiver operating characteristics and throughput) should not be underestimated, but with careful development, this rapidly growing technology promises a wide range of new products and even human vision system prosthetics for people with sight impairments or loss of vision. Given the value of vision to humans, this capability is no doubt also fundamental to intelligent computing systems.

Downloads

Description | Name | Size
OpenCV Video Analytics Examples | va-opencv-examples.zip | 600KB
Simple images for use with OpenCV | example-images.zip | 6474KB

Resources

Learn

Get products and technologies


Posted in Apps Development, CLOUD, Computer Languages, Computer Software, Computer Vision, GPU (CUDA), GPU Accelareted, Image Processing, OpenCV, OpenCV, PARALLEL, Project Related, Video | Leave a Comment »

15 Easy Ways To Speed Up WordPress: Why Slow Page Load Equals Slow Blog Growth

Posted by Hemprasad Y. Badgujar on December 7, 2014


WordPress is a great platform.

You’re seeing it in action right now: my site (which has tens of thousands of readers) is run entirely on this crazy-powerful platform.

One weakness that WordPress suffers from, however, is that out of the box it can be very slow.

Without taking the right precautions, you could end up with a sluggish site that will not only be a hassle for repeat visitors, but will most certainly lose you subscribers and customers due to the impatience of web users.

First I want to go over why your WordPress site's speed is important to your success, and then I want to go over ALL of the best ways that I've found to consistently speed up a WordPress site.

If you want to get right to how you can speed up your site, scroll down.

If you want to learn why you should, read below.

Why Site Speed Is Important

You’ve probably heard this before, but when a person lands on your site for the first time, you only have a few seconds to capture their attention to convince them to hang around.

If you’ve been doing this online business thing for a while, you’ll recognize the importance of branding, a nice layout, putting important things above the fold, and all of that good stuff in order to try to capture visitors into staying.

But if your page loads slowly, you may lose people before you even have the chance to convert them.

Most studies have confirmed that you have a very short time to load your site before people click away, especially if they’ve been linked there from another site that they visit.

Think about that.

Someone just gave you a good reference with a link, and yet you are doing both of you a disservice by having a slow loading site that nobody would want to wait around for.

Not only that, you are stunting your own growth by losing these potential subscribers, especially early on.

You have, on average, only a single-digit number of seconds before you lose somebody to a slow-loading page.

That means if your site takes longer than 10 seconds to load, most people are gone, lost before you even had the chance to convince them to stick around and give your blog or website a chance.

Not only that, but Google now includes site speed in its ranking algorithm. That means that your site's speed affects its SEO, so if your site is slow, you are not only losing visitors to impatience, but you are also losing them through reduced rankings in search engines.

So let’s see how we can fix that.

How To Speed Up WordPress

As a side note, these are not ordered by importance or any criteria, I’ve just gathered everything I’ve learned about speeding up page loads on WordPress and compiled them here.

I guarantee that using even a few of these will drastically speed up your site.

1. Choose a good host

While a shared host might seem like a bargain when you're starting out (“Unlimited page views, wowie zowie!”), it comes at another price: incredibly slow site speed and frequent downtime during high-traffic periods.

If you plan on doing awesome stuff (aka the kind of stuff that creates high traffic periods), you’re killing yourself by running your WordPress site on shared hosting.

The stress of your site going down after getting a big feature is enough to create a few early gray hairs: don’t be a victim, invest in proper hosting.

The only WordPress host I continually recommend is below… (drum roll please…)

My sites are always blazingly fast, never have downtime when I get huge features (like when I was featured on the Discovery Channel blog!), and the backend is stupidly simple.

Last but not least, support is top notch, which is a must when it comes to hosting… take it from a guy who’s learned that the hard way!

Head on over to the WP Engine homepage and check out their offerings, you’ll be happy you did.

2. Start with a solid framework/theme

You might be surprised to hear this, but the Twenty Ten/Twenty Eleven “framework” (aka the default WP themes) are quite speedy frameworks to use.

That's because they keep the “guts” simple, and light frameworks are always the way to go if you want a speedy site.

From my experience, the fastest loading premium framework is definitely the Thesis Theme Framework (aff).

Whatever you might say about its SEO abilities (I prefer to use plugins and my own edits), it is definitely a solid framework for quick page loads; I've always had this experience, as have many others.

3. Use an effective caching plugin

WordPress plugins are obviously quite useful, but some of the best fall under the caching category, as they drastically improve page load times; best of all, all of them on WP.org are free and easy to use.

By far my favorite, bar none, is W3 Total Cache. I wouldn't recommend or use any other caching plugin; it has all of the features you need and is extremely easy to install and use.

Simply install and activate, and watch your pages load faster as elements are cached.

4. Use a content delivery network (CDN)

All of your favorite big blogs are making use of this, and if you are into online marketing using WordPress (as I'm sure many of my readers are), you won't be surprised to hear that some of your favorite blogs like Copyblogger are making use of CDNs.

Essentially, a CDN, or content delivery network, takes all of the static files on your site (CSS, JavaScript, images, etc.) and lets visitors download them as fast as possible by serving the files from servers as close to them as possible.

I personally use the Max CDN Content Delivery Network on my WordPress sites, as I’ve found that they have the most reasonable prices and their dashboard is very simple to use (and comes with video tutorials for setting it up, takes only a few minutes).

There is a plugin called Free-CDN that promises to do the same, although I haven’t tested it.

5. Optimize images (automatically)

Yahoo! has an image optimizer called Smush.it that will drastically reduce the file size of an image, while not reducing quality.

However, if you are like me, doing this to every image would be beyond a pain, and incredibly time consuming.

Fortunately, there is an amazing, free plugin called WP-SmushIt which will do this process to all of your images automatically, as you are uploading them. No reason not to install this one.

6. Optimize your homepage to load quickly

This isn’t one thing but really a few easy things that you can do to ensure that your homepage loads quickly, which probably is the most important part of your site because people will be landing there the most often.

Things that you can do include:

  • Show excerpts instead of full posts
  • Reduce the number of posts on the page (I like showing between 5 and 7)
  • Remove unnecessary sharing widgets from the home page (include them only in posts)
  • Remove inactive plugins and widgets that you don’t need
  • Keep it minimal! Readers are here for content, not 8,000 widgets on the homepage
Overall, a clean and focused homepage design will help your page not only look good, but load quicker as well.

7. Optimize your WordPress database

I’m certainly getting a lot of use out of the word “optimize” in this post!

This can be done in a very tedious, extremely boring manual fashion, or…

You can simply use the WP-Optimize plugin, which I run on all of my sites.

This plugin lets you do just one simple task: optimize your database (spam, post revisions, drafts, tables, etc.) to reduce its overhead.

I would also recommend the WP-DB Manager plugin, which can schedule dates for database optimization.

8. Disable hotlinking and leeching of your content

Hotlinking is a form of bandwidth “theft.” It occurs when other sites link directly to the images on your site from their articles, making your server load increasingly high.

This can add up as more and more people “scrape” your posts or as your site (and especially its images) becomes more popular, which it must do if you create custom images for your site on a regular basis.

Place this code in your root .htaccess file:

# Disable hotlinking of images, returning 403 Forbidden (or swap in a custom image)
RewriteEngine on
RewriteCond %{HTTP_REFERER} !^$
RewriteCond %{HTTP_REFERER} !^http(s)?://(www\.)?hemprasad\.wordpress\.com [NC]
RewriteCond %{HTTP_REFERER} !^http(s)?://(www\.)?google\.com [NC]
RewriteCond %{HTTP_REFERER} !^http(s)?://(www\.)?feeds2\.feedburner\.com/SomethingMoreForResearch [NC]
RewriteRule \.(jpg|jpeg|png|gif)$ - [NC,F,L]

You'll notice I included my feed (from FeedBurner); you'll need to replace it with your own feed's name, otherwise your images won't appear correctly there.

9. Add an expires header to static resources

An Expires header is a way to specify a time far enough in the future that clients (browsers) don't have to re-fetch static content (such as CSS files, JavaScript, and images).

This can cut your load time significantly for your regular visitors.

You need to copy and paste the following code in your root .htaccess file:

<IfModule mod_expires.c>
# Cache static images in the browser for 30 days after access (A = access time, in seconds)
ExpiresActive On
ExpiresByType image/gif A2592000
ExpiresByType image/png A2592000
ExpiresByType image/jpg A2592000
ExpiresByType image/jpeg A2592000
</IfModule>

The above values are set to 2,592,000 seconds after access, that is, 30 days; you can change them as you wish.

10. Adjust Gravatar images

You’ll notice on this site that the default Gravatar image is set to… well, nothing.

This is not an aesthetic choice, I did it because it improves page loads by simply having nothing where there would normally be a goofy looking Gravatar logo or some other nonsense.

Some blogs go as far as to disable them throughout the site, and for everyone.

You can do either; just know that it will at least benefit your site speed if you set the default image (found in “Discussion” under the Settings tab in the WordPress dashboard) to a blank space rather than a default image.

11. Add LazyLoad to your images

LazyLoad is the process of having only the images above the fold load (i.e., only the images visible in the visitor's browser window); then, when the reader scrolls down, the other images begin to load, just before they come into view.

This will not only speed up your page loads, it can also save bandwidth by loading less data for users who don't scroll all the way down on your pages.

To do this automatically, install the jQuery Image Lazy Load plugin.

12. Control the amount of post revisions stored

I saved this post to draft about 8 times.

WordPress, left to its own devices, would store every single one of these drafts, indefinitely.

Now, when this post is done and published, why would I need all of those drafts stored?

That's why I use the Revision Control plugin to keep post revisions to a minimum. Set it to 2 or 3 so you have something to fall back on in case you make a mistake, but not so high that you clutter your backend with unnecessary amounts of drafted posts.

13. Turn off pingbacks and trackbacks

By default, WordPress interacts with other blogs that are equipped with pingbacks and trackbacks.

Every time another blog mentions you, it notifies your site, which in turn updates data on the post. Turning this off will not destroy the backlinks to your site, just the setting that generates a lot of work for your site.

For more detail, read this explanation of WordPress Pingbacks, Trackbacks and Linkbacks.

14. Replace PHP with static HTML, when necessary

This one is a little bit advanced, but it can drastically cut down your load time if you are desperate to improve page load speeds, so I included it.

I’d be doing this great post injustice if I didn’t link to it for this topic, as it taught me how to easily do this myself, in a few minutes.

So go there and check it out; the author explains it in plainer terms than I ever could!

15. Use CloudFlare

This is similar to the section above on using CDNs, but I've become so fond of CloudFlare since I discussed it in my best web analytics post that I've decided to include it separately here.

To put it bluntly, CloudFlare and the W3 Total Cache plugin discussed above are a really potent combination (they integrate with each other) that will greatly improve not only the speed but also the security of your site.

And, both are free, so you have no excuse!

Posted in Apps Development, Computer Softwares | Tagged: , , | Leave a Comment »

SHOULD YOU LEARN TO CODE?

Posted by Hemprasad Y. Badgujar on December 5, 2014


Literacy in any computer language, from simple HTML to complex C++, requires dedication not only to the technology, but to changes in the technology. There’s a reason HTML5 ends in a number. When enough browsers support HTML6, developers will have new things to learn.

Possible reasons to put yourself through the learning process include:

  • To gain confidence: I’ve had rare clients who think that if they master a language then computers will intimidate them less. While that may be the case, it rarely sticks without dedicated practice.
  • Necessity: technical problems will arise whether or not one’s job description fits the bill. When problems must get solved, there’s a time to pass the buck and a time to buckle down and solve it.
  • The thrill of it: some people just like to learn new skills.
  • To understand what’s possible: a developer says “it can’t be done.” Do they mean it’s impossible? Or that it’s more trouble than it’s worth? A designer says “I want it to do this.” Did he or she just give someone a week’s worth of headaches? Can technology be used in a more appropriate way?

STAY CURIOUS

I’ve seen it. You know, that look. Not quite panic, not quite despair. It’s the look someone gets when they realize the appeal of letting someone else do the heavy lifting. The look that says, “That’s a windshield; I don’t have to be the bug.” I’ve seen it in co-workers’ eyes, students’ postures, and staring back from the mirror.

In my experience, it isn’t fear of failure that intimidates people. It’s fear of getting lost. Overwhelming hopelessness encourages feelings of inadequacy. That cycle will beat anyone down.

Courage or persistence are not antidotes for feeling overwhelmed. Stopping before feeling overwhelmed is the solution.

Pressure

Pressure image via Shutterstock.

My favorite technique is to tackle a project with three traits.

1. Find a topic that irks you

Deadlines and paychecks are fine. But nothing drives people like an itch they can’t scratch. In the long run, learning code must not be an end in itself. It must become a salve for some irritation.

Way back when, I got frustrated that I couldn’t find a good book. There’s no shortage of book discovery websites, but intuition told me there was a better way. So I started my own website. I never finished the project, but I learned many ways to organize novels. On the way, almost incidentally, I learned more code.

2. You should be rewarded for incremental effort

Having found that proverbial itch, people learning to code should also find relief.

No tutorials, tools, or outside praise will give people the mindset to conquer code better than saying “I wrote this and… look what I did!” and walking away with a sense of being greater than the obstacle you overcame.

It sounds silly until you try it. Seeing code perform gives people a micro-rush of self confidence, a validation that they can master the machine.

Code

Code image via Shutterstock.

Last week someone looked at my screen and shook his head. It was full of code. Three open windows of colored tags and function calls. He said: “I could never do that.” Years ago I would have agreed. I didn’t want to look stupid or break something that I could not fix. Who knows what damage one wrong keystroke would cause?

3. Your project should conclude while your brain still has an appetite

This one’s critical. When learning something that intimidates you, you must approach but do not exceed your limit.

“Exercising your brain” isn’t an appropriate analogy. When working out, trainers encourage people to push just past their limits. But learning is a hunger. Your brain has an appetite for knowledge. Filling your brain to the brim (or worse, exceeding its limit) will hamper your ability to learn, erode your self-confidence, and kill a kitten. Please, think of the kittens.

Better yet, think of mental exercise as one workout that happens to last a while. Say, one week. Sure, you take breaks between reps (called “getting sleep”). But rushing ahead works against your goal. The kittens will never forgive you.

  • Part one: warm up by mixing something you already learned with something you don't know. Leave yourself at least one question. 1 day.
  • Part two: practice. Experiment. Practice repeating experiments. And always end on a cliffhanger. The goal is to hit your stride and break on a high note. By “break” I mean sleep, eat, or talk to fellow humans. 3 days.
  • Part three: cool down by improving what you've already covered. As always, get your brain to a point of enjoying the exercise, then let go for a while. 1 day.

Sprinting does not train you for a marathon. A hundred pushups will improve your shoulders better than trying to lift a truck once. And cramming tutorial books like shots of tequila will impair your ability to think.

PRACTICE DAILY

In my newspaper days, I refused to use stock art. Deadlines came five days a week, but I insisted on hand-crafting my own vector art. Six months later I was the go-to guy for any custom graphic work. That one skill earned me a senior position at a startup company. Even today I love fiddling with bezier paths.

Learning any skill, including how to debug code, works much the same.

The only way to learn code — and make it stick — is to practice every day. Like learning any new skill, a consistent schedule with manageable goals gradually improves performance to the point of expertise.

“I CAN” IS NOT “I SHOULD”

Part of learning to read and write code, be it HTML, jQuery, or C++, is learning one’s limits. Another part is explaining one’s limits. The curse of understanding a language … rather, the curse of people thinking you “know code” is they’ll expect you to do it.

Technology

Code image via Shutterstock.

HTML is not CSS. CSS is not PHP. PHP is not WordPress. WordPress is not server administration. Server administration is not fixing people’s clogged Outlook inboxes. Yet I’ve been asked to do all of that. Me, armed with my expired Photoshop certificate and the phrase “I don’t know, but maybe I can help….”

Those without code experience often don't differentiate between one $(fog-of).squiggles+and+acronyms; and another. Not that we can blame them. Remember what it was like before you threw yourself into learning by

  • finding a topic that interests you;
  • getting incremental rewards;
  • learning without getting overwhelmed.

Knowledge of code is empowering. Reputation as a coder is enslaving. At least both pay the bills.

Posted in Apps Development, Computer Languages, Computer Software | Tagged: , , , | Leave a Comment »

Install PHP 5.5 and Apache 2.4

Posted by Hemprasad Y. Badgujar on November 25, 2014


apt-add-repository ppa:ptn107/apache
apt-add-repository ppa:ondrej/php5
apt-get update

Then install Apache 2.4:

apt-get install apache2-mpm-worker

checking apache version:

# apache2 -v
Server version: Apache/2.4.6 (Ubuntu)
Server built:   Sep 23 2013 07:23:34

Installing PHP 5.5

apt-get install php5-common php5-mysqlnd php5-xmlrpc php5-curl php5-gd php5-cli php5-fpm php-pear php5-dev php5-imap php5-mcrypt

Checking php version

php -v
PHP 5.5.8-3+sury.org~precise+1 (cli) (built: Jan 24 2014 10:15:11) 
Copyright (c) 1997-2013 The PHP Group
Zend Engine v2.5.0, Copyright (c) 1998-2013 Zend Technologies
     with Zend OPcache v7.0.3-dev, Copyright (c) 1999-2013, by Zend Technologies

So everything seems OK; the one remaining problem is that I need mod_fastcgi, but it can't be installed.

Posted in Apps Development, Computer Network & Security, Installation | Tagged: , | Leave a Comment »

Posted by Hemprasad Y. Badgujar on October 12, 2014


Mathematics Function handbook

Abstract. Function handbook contains all elementary, trigonometric and hyperbolic functions — their definitions, graphs, properties and identities.

 

Function handbook

1. Introduction

While working on the scientific calculator Li-L project, we wanted to give a gift to its future users. This scientific calculator has embedded Help available within two clicks from every part of the application, and we had the idea of creating a function handbook and including it as part of the Help, so that a click on the Li-L calculator keypad takes the user to the handbook page with that function's description.

The most challenging part of the idea was the graphics. We wanted every function to have its graph, and we set high standards for the graphics: they had to be properly scaled, numbered, and antialiased. It was an interesting and difficult job: every figure was created in several stages, with a final stage of editing at the pixel level. When the task was completed, perhaps for the first time in my life I saw all the functions at the same scale, properly curved and cut, so that all their features were clearly visible.

The result was so great that we decided to put this beauty online. And now you can access what Li-L calculator users have on their desktops. Below you will find the functions organized into several lists; every list holds all of the functions, but the ordering differs, so that you can look up a function by its notation, name, or article head.

The Li-L calculator handbook contains all elementary, trigonometric, and hyperbolic functions. The scientific calculator Li-X handbook is extended with the gamma and Bessel functions, and the scientific calculator Li-Xc handbook is further extended with the modified Bessel functions.

2. Content

  1. √ or sqrt — square root
  2. ^ — power
  3. Γ — gamma function
  4. abs — absolute value
  5. arccos — trigonometric arc cosine
  6. arccot or arcctg — trigonometric arc cotangent
  7. arccsc or arccosec — trigonometric arc cosecant
  8. arcosh or arch — arc-hyperbolic cosine
  9. arcoth or arcth — arc-hyperbolic cotangent
  10. arcsch — arc-hyperbolic cosecant
  11. arcsec — trigonometric arc secant
  12. arcsin — trigonometric arc sine
  13. arctan or arctg — trigonometric arc tangent
  14. arsech or arsch — arc-hyperbolic secant
  15. arsinh or arsh — arc-hyperbolic sine
  16. artanh or arth — arc-hyperbolic tangent
  17. ceil — ceiling
  18. cos — trigonometric cosine
  19. cosh or ch — hyperbolic cosine
  20. cot or ctg — trigonometric cotangent
  21. coth or cth — hyperbolic cotangent
  22. csc or cosec — trigonometric cosecant
  23. csch — hyperbolic cosecant
  24. exp — exponent
  25. floor — floor
  26. Iν — modified Bessel function of the first kind
  27. Jν — Bessel function of the first kind
  28. Kν — modified Bessel function of the second kind
  29. ln — natural logarithm
  30. log or lg — decimal logarithm
  31. n! — factorial
  32. sec — trigonometric secant
  33. sech — hyperbolic secant
  34. sign — signum
  35. sin — trigonometric sine
  36. sinh or sh — hyperbolic sine
  37. tan or tg — trigonometric tangent
  38. tanh or th — hyperbolic tangent
  39. Yν — Bessel function of the second kind

3. Function index by name

  1. Absolute value abs
  2. Arc-hyperbolic cosecant arcsch
  3. Arc-hyperbolic cosine arcosh or arch
  4. Arc-hyperbolic cotangent arcoth or arcth
  5. Arc-hyperbolic secant arsech or arsch
  6. Arc-hyperbolic sine arsinh or arsh
  7. Arc-hyperbolic tangent artanh or arth
  8. Bessel function of the first kind Jν
  9. Bessel function of the second kind Yν
  10. Ceiling function ceil
  11. Decimal logarithm log or lg
  12. Exponent exp
  13. Factorial n!
  14. Floor function floor
  15. Gamma function Γ
  16. Hyperbolic cosecant csch
  17. Hyperbolic cosine cosh or ch
  18. Hyperbolic cotangent coth or cth
  19. Hyperbolic secant sech
  20. Hyperbolic sine sinh or sh
  21. Hyperbolic tangent tanh or th
  22. Modified Bessel function of the first kind Iν
  23. Modified Bessel function of the second kind Kν
  24. Natural logarithm ln
  25. Power ^
  26. Signum sign
  27. Square root √ or sqrt
  28. Trigonometric arc cosecant arccsc or arccosec
  29. Trigonometric arc cosine arccos
  30. Trigonometric arc cotangent arccot or arcctg
  31. Trigonometric arc secant arcsec
  32. Trigonometric arc sine arcsin
  33. Trigonometric arc tangent arctan or arctg
  34. Trigonometric cosecant csc or cosec
  35. Trigonometric cosine cos
  36. Trigonometric cotangent cot or ctg
  37. Trigonometric secant sec
  38. Trigonometric sine sin
  39. Trigonometric tangent tan or tg

4. Function index by notation

  1. √ — square root
  2. ^ — power
  3. Γ — gamma function
  4. abs — absolute value
  5. arccos — trigonometric arc cosine
  6. arccosec — trigonometric arc cosecant
  7. arccot — trigonometric arc cotangent
  8. arccsc — trigonometric arc cosecant
  9. arcctg — trigonometric arc cotangent
  10. arch — arc-hyperbolic cosine
  11. arcosh — arc-hyperbolic cosine
  12. arcoth — arc-hyperbolic cotangent
  13. arcsch — arc-hyperbolic cosecant
  14. arcsec — trigonometric arc secant
  15. arcsin — trigonometric arc sine
  16. arctan — trigonometric arc tangent
  17. arctg — trigonometric arc tangent
  18. arcth — arc-hyperbolic cotangent
  19. arsch — arc-hyperbolic secant
  20. arsech — arc-hyperbolic secant
  21. arsh — arc-hyperbolic sine
  22. arsinh — arc-hyperbolic sine
  23. artanh — arc-hyperbolic tangent
  24. arth — arc-hyperbolic tangent
  25. ceil — ceiling
  26. ch — hyperbolic cosine
  27. cos — trigonometric cosine
  28. cosec — trigonometric cosecant
  29. cosh — hyperbolic cosine
  30. cot — trigonometric cotangent
  31. coth — hyperbolic cotangent
  32. csc — trigonometric cosecant
  33. csch — hyperbolic cosecant
  34. ctg — trigonometric cotangent
  35. exp — exponent
  36. floor — floor
  37. Iν — modified Bessel function of the first kind
  38. Jν — Bessel function of the first kind
  39. Kν — modified Bessel function of the second kind
  40. lg — decimal logarithm
  41. ln — natural logarithm
  42. log — decimal logarithm
  43. n! — factorial
  44. sec — trigonometric secant
  45. sech — hyperbolic secant
  46. sh — hyperbolic sine
  47. sign — signum
  48. sin — trigonometric sine
  49. sinh — hyperbolic sine
  50. sqrt — square root
  51. tan — trigonometric tangent
  52. tanh — hyperbolic tangent
  53. tg — trigonometric tangent
  54. th — hyperbolic tangent
  55. Yν — Bessel function of the second kind

References. The handbook is a part of scientific calculator Li-L and scientific calculator Li-X products.

Posted in Apps Development, Computer Languages, Computer Research, Free Tools, Image / Video Filters, My Research Related, PARALLEL | Leave a Comment »

Screencasting Tools For Creating Video Tutorials

Posted by Hemprasad Y. Badgujar on June 3, 2014


Ever wondered how people show you so clearly what is happening on their computer, like in the Photoshop Video Tutorials we shared with you? Thanks to screencasting software, anyone can do it. So what’s stopping you now from making your own how-to videos? Try out one of these 12 tools and get to making your first video!

 

Free

 

http://www.bobyte.com/

AviScreen – As the name would imply, this capture program records the video into AVI files, but can also do BMP photos. It’s Windows only and does not record audio.

 

http://camstudio.org/

CamStudio.org – An open source program for capturing your on-screen video and audio as AVI files. Windows only, and absolutely free.

 

http://danicsoft.com/projects/copernicus/

Copernicus – A free program for Macs that focuses heavily on making quick and speedy films by recording the video to your RAM for quicker access. Does not include any support for audio.

 

http://www.jingproject.com/

JingProject.com – Beyond recording video, Jing allows you to take a picture of any portion of your desktop, draw on it, add a message, and immediately upload your media to a free hosting account. You are then given a small URL that you can give to whoever needs to see the image or video. Works with Macs and Windows machines.

 

http://www.screencast-o-matic.com/

Screencast-O-Matic.com – A Java-based screencasting tool that requires no downloads and will allow you to automatically upload to hosting. According to their site it works well with Macs and Windows machines, but does have some issues with Linux.

 

http://www.debugmode.com/wink/

Wink – Screencasting software that focuses on making tutorials with audio and text annotation abilities. Outputs to Flash, PDF, HTML, EXE files and more.

ScreenToaster (Web-based, Free)


ScreenToaster is the only web-based offering in this week’s Hive Five, and it definitely fills a handy niche. Whether you don’t screencast enough to want to install a dedicated application or you just need to crank out a quick screencast wherever you are, ScreenToaster can help. You don’t get any advanced editing tools—screw up and you’re redoing it—but you do get full screen capture, support for picture-in-picture webcam video in the lower right corner, and audio for voice-over. When you’re done recording and previewing your clip, you can upload the video to ScreenToaster or YouTube, or download it as a MOV or SWF file. ScreenToaster is free and works with any Java-enabled web browser.

Commercial

 

http://www.adobe.com/products/captivate/

Adobe Captivate – While Adobe is almost always synonymous with quality, it also always means it’s going to be expensive. Pricing starts at nearly $700.

 

http://www.allcapture.com/eng/index.php

AllCapture – Capture in real time, add audio during recording or after completion. Can output to Flash, EXE, ASF, DVD, SVCD and VCD. Free trial available, Windows only.

 

http://www.hyperionics.com/

HyperCam – Windows only system for recording screen activity to AVI files along with system audio. Free trial with $39.95 for full version.

 

http://www.shinywhitebox.com/home/home.html

iShowU – Offers a wide-range of presets that allows you to record directly into Quicktime and up to 1080P in both NTSC and PAL formats. Also does audio and the file is ready to be published as soon as hit stop. Mac only.

 

http://www.polarian.com/products/ScreenMimic.php

ScreenMimic – Software for the Mac that offers transitions, audio dubbing, can output to HTML, Quicktime and Flash. Free download and $64.95 for the paid version.

 

http://www.miensoftware.com/screenrecord.html

ScreenRecord – Outputs to Quicktime directly and can record your clicks and all on-screen activities. Offers a free trial and then $19.95 to purchase.


Adobe Captivate : Rapidly create simulations, software demonstrations, and scenario-based training. Free Trial, Download 

AllCapture : Capture your desktop activities in real-time and create your demos, software simulations and tutorials. Free Trial, Download

Bulent’s Screen Recorder : Captures video, sound and pictures of anything you see on your screen. Any part of the screen or a window or the entire desktop can be recorded
Free trial, Download

 CamStudio : Record all screen and audio activity on your computer and create industry-standard AVI video files and using its built-in SWF Producer can turn those AVIs into Streaming Flash videos (SWFs). Open Source, Download

Camtasia : Record your screen to create training, demo, and presentation videos, aka screencasts. Free Trial, Download 

CaptureWizPro : A professional tool for capturing anything on your screen, even tricky items like the entire contents of scrolling areas, drop-down lists, tool tips, mouse pointers and screen savers. There’s also a high-performance recorder for capturing streaming video or creating demos. Free Trial, Download

DemoBuilder : Capture your activities in a running application and then edit the recorded material to add a voice-over narration track or background music, visual effects, annotations, comments and other elements that will add to the efficiency of your presentation. Free Trial, Download

Flash Demo Builder : Its powerful screen capture records keypresses and mouse movements to show clearly how some processes and applications work. Free Trial, Download

FlashDemo Studio : Record your PC screen activities in real time.  Publish as Flash movies. Download

 FreeScreencast : Record your screen, upload, and share with ease. Hosted

GoVIew : Capture your computer screen and audio, then instantly share your recording online. Hosted

Instant Demo : Screen Recorder Software. Free Trial, Download

iShowU HD Pro : Realtime screen capture from your Mac. Free Trial, Download

Jing : Always-ready program that instantly captures and shares images and video…from your computer to anywhere. Pro version available. Download 

Playback : For creating screencast tutorial videos in any subject using an iPad. Annotate your presentation slides as you talk.

 Screenbird – “record your screen like a boss” Hosted

Screencast-O-matic : online screen recorder. Hosted

ScreenCastle : One-click screencasting. Hosted

ScreenFlow : Professional screencasting studio (for the Mac). Free Trial, Download

ScreenJelly : Records your screen activity with your voice so you can spread it as a video via Twitter or email. Hosted

Screenr : Record your screen. Hosted 

ScreenRecord : A screen recording tool (for Mac) that allows the user to capture continuous images on the screen as a Quicktime movie. Free Trial, Download

ScreenToaster : Record your screen online. Hosted 

Screeny : capture your videos or images at any size (for Mac) Download

 TipCam – easy-to-use professional screen recording software for Windows. TipCam Pro available for purchase. Download

TurboDemo : Capture screenshots and explain software, PC applications, websites and products with animated demos and tutorials. Free Trial, Download

Viewlet Builder : Automatically reproduces the movement of your cursor, allowing you to create Flash tutorials or simulations that exactly mirror the way your product or web site works. Publish your finished tutorials as small, secure Flash files that can be delivered over the Internet. Free Trial, Hosted

ViewletCam : record PC applications, PowerPoint presentations, animations, and video directly from your PC screen and generate Flash movies for use in demos, troubleshooting, training classes, and presentations. Free Trial, Hosted

Webinaria : Create software demos and share online. Hosted

Wink Tutorial and Presentation creation software, primarily aimed at creating tutorials on how to use software (like a tutor for MS-Word/Excel etc). Using Wink you can capture screenshots, add explanations boxes, buttons, titles etc and generate a highly effective tutorial for your users. Download 

Wondershare DemoCreator : Screen Recorder to Record Screen Activities as Video Demos. Free Trial, Hosted

 

Product Name | Publisher | Latest stable version | Latest release date | OS | Software license | Open source
ActivePresenter Free Edition | Atomi Systems | 3.9 | 2013-06-19 | Windows | Freeware | No
BB FlashBack Express | Blueberry Software | 4.0 | 2012-09-18 | Windows | Freeware | No
Capture Fox | Zafer Gurel | 0.7.0 | 2009-11-25 | Windows | Freeware | Yes
Jing | TechSmith | 2.8 | | Windows, Mac OS X | Freeware | No
Windows Media Encoder | Microsoft Corporation | 9.00.00.3352 (x86), 10.00.00.3809 (x64) | 2002 | Windows | Freeware | No
Wink | Satish Kumar | 2.0 | 2008-07-14 | Windows, Linux | Freeware | No
CamStudio | CamStudio.org | 2.7 r316 | 2013-02-15 | Windows | GPL | Yes
RecordMyDesktop | SourceForge | 0.3.8.1 | 2008-12-13 | Linux | GPL | Yes
VirtualDub | SourceForge | 1.9.11 | 2012-12-27 | Windows | GPL | Yes
VLC media player | VideoLAN | 2.1.4 | 2014-02-21 | Cross-platform | GPL | Yes
XVidCap | SourceForge | 1.1.7 | 2008-07-13 | Unix-like | GPL | Yes
Freeseer | FOSSLC | 3.0.0 | 2013-08-30 | Windows, Mac OS X, Linux | GPL v3 | Yes
ShareX | GitHub | 9.0.0 | 2014-05-16 | Windows | GPL v3 | Yes
SimpleScreenRecorder | maartenbaert | 0.0.7-1 | 2013-06-02 | Linux | GPL v3 | Yes
FFmpeg | FFmpeg.org | 2.2.2 | 2014-05-05 | Linux, OS supporting X11, Windows | LGPL | Yes

 

 

Comparison by features

The following table compares features of screencasting software. The table has seven fields, as follows:

  1. Name: The product's name; sometimes includes the edition if a specific edition is targeted
  2. Audio: Specifies whether the product supports recording audio commentary with the video
  3. Entire desktop: Specifies whether the product supports recording the entire desktop
  4. OpenGL: Specifies whether the product supports recording from video games and software that use OpenGL to render the image
  5. DirectX: Specifies whether the product supports recording from video games or software that use DirectX (particularly Direct3D) to render the image
  6. Editing: Specifies whether the product supports editing the recorded video at least to some small extent, such as cropping, trimming, or splitting
  7. Output: Specifies the file formats in which the software saves the final video (non-video output types are omitted)
Product name | Audio | Entire desktop | OpenGL | DirectX | Editing | Output
ActivePresenter | Yes | Yes | No | No | Yes | AVI, FLV, MP4, SWF, HTML, WebM, WMV
ActivePresenter free edition | Yes | Yes | No | No | Yes | AVI, MP4, WebM, WMV
Adobe Captivate | Yes | Yes | ? | ? | Yes | SWF, EXE, MP4, HTML5
Bandicam | Yes | Yes | Yes | Yes | No |
BB FlashBack | Yes | Yes | ? | ? | Yes |
BB FlashBack express | Yes | Yes | ? | ? | No |
CamStudio | Yes | Yes | ? | ? | Yes | AVI, SWF
Camtasia Studio | Yes | Yes | Yes | Yes | Yes | .camrec, AVI
Camtasia for Mac | Yes | Yes | Yes | No | Yes |
Capture Fox | Yes | Yes | ? | ? | No | Motion JPEG or Xvid in AVI
Epiplex500 | Yes | Yes | Yes | Yes | Yes | Motion JPEG or Xvid in AVI
FFmpeg | Yes | Yes | Yes | ? | Yes | Many
Fraps | Yes | Yes | Yes | Yes | No | FPS1 in AVI
Freeseer | Yes | Yes | ? | ? | No | Ogg
HyperCam | Yes | Yes | ? | ? | No | AVI, WMV
Jing | Yes | Yes | ? | ? | No | SWF
Microsoft Expression Encoder | Yes | Yes | Yes | No | Yes |
Nero Vision | Yes | ? | ? | ? | Yes |
Pixetell | Yes | Yes | Yes | Yes | Yes |
QuickTime X | Yes | Yes | ? | ? | No |
RecordMyDesktop | Yes | Yes | ? | N/A | No | Theora in Ogg
Screencam | Yes | Yes | Yes | Yes | Yes |
ScreenFlow | Yes | Yes | Yes | N/A | Yes |
ShareX | Yes | Yes | No | No | No | AVI, MP4, GIF
SimpleScreenRecorder | Yes | Yes | Yes | N/A | No | Formats supported by libavformat
SmartPixel | Yes | Yes | ? | ? | Yes | FLV, AVI, MP4, GIF
Snagit | Yes | Yes | Yes | Yes | No | MP4
Snapz Pro X | Yes | Yes | ? | ? | No |
VirtualDub | Yes | ? | ? | ? | Yes |
VLC | Yes | Yes | Yes | ? | Yes |
Windows Media Encoder | Yes | Yes | ? | ? | No |
Wink | Yes | Yes | ? | No | Yes | SWF, PDF
XVidCap | Yes | ? | ? | N/A | No |

Posted in Apps Development, Documentations, Project Related | Tagged: , | Leave a Comment »

Setting Global C++ Include Paths in Visual Studio 2012 (and 2011, and 2010)

Posted by Hemprasad Y. Badgujar on May 14, 2014


Setting Global C++ Include Paths in Visual Studio 2012 (and 2011, and 2010)

Starting with Visual Studio 2010, Microsoft decided to make life hard on C++ developers.  System-wide include path settings used to be accessed through Tools | Options | Projects and Solutions | VC++ Directories.  However, that option is gone:

VS2012_ToolsOptionsDirectories

Instead, the system-wide include paths are now located within the ‘Properties’ interface.  To access it, select View | Property Manager.  No dialog will appear yet. Instead, the Property Manager appears as a tab along with the Solution Explorer:

VS2012_OptionsPropertyManager

Note:  The Property Manager won’t contain anything unless a solution is loaded.

Now, expand one of your projects, then expand Debug | Win32 or Release | Win32:

VS2012_PropertyPageProjectExpanded

Right click Microsoft.Cpp.Win32.user and select Properties:

VS2012_PropertyPageProjectExpandedMenu

This brings up the Microsoft.Cpp.Win32.User Property Pages dialog, which should look familiar enough:

VS2012_Win32UserPropertyPage

Alternate Access

The properties can be accessed directly as an XML file by editing %LOCALAPPDATA%\Microsoft\MSBuild\v4.0\Microsoft.Cpp.Win32.user.props

VS2012_Win32UserPropsXML

 

Posted in Apps Development, Computer Network & Security, Computer Softwares, Computer Vision, CUDA, Installation, OpenCL, OpenCV | Tagged: , , , , | Leave a Comment »

Computer Vision Algorithm Implementations

Posted by Hemprasad Y. Badgujar on May 6, 2014


Participate in Reproducible Research

General Image Processing

OpenCV
(C/C++ code, BSD lic) Image manipulation, matrix manipulation, transforms
Torch3Vision
(C/C++ code, BSD lic) Basic image processing, matrix manipulation and feature extraction algorithms: rotation, flip, photometric normalisations (Histogram Equalization, Multiscale Retinex, Self-Quotient Image or Gross-Brajovic), edge detection, 2D DCT, 2D FFT, 2D Gabor, PCA to do Eigen-Faces, LDA to do Fisher-Faces. Various metrics (Euclidean, Mahanalobis, ChiSquare, NormalizeCorrelation, TangentDistance, …)
ImLab
(C/C++ code, MIT lic) A Free Experimental System for Image Processing (loading, transforms, filters, histogram, morphology, …)
CIMG
(C/C++ code, GPL and LGPL lic) CImg Library is an open source C++ toolkit for image processing
Generic Image Library (GIL) with Boost integration
(C/C++ code, MIT lic) Adobe open source C++ Generic Image Library (GIL)
SimpleCV a kinder, gentler machine vision library
(python code, MIT lic) SimpleCV is a Python interface to several powerful open source computer vision libraries in a single convenient package
PCL, The Point Cloud Library
(C/C++ code, BSD lic) The Point Cloud Library (or PCL) is a large scale, open project for point cloud processing. The PCL framework contains numerous state-of-the art algorithms including filtering, feature estimation, surface reconstruction, registration, model fitting and segmentation.
Population, imaging library in C++ for processing, analysing, modelling and visualising
(C/C++ code, CeCill lic) Population is an open-source imaging library in C++ for processing, analysing, modelling and visualising including more than 200 algorithms designed by V. Tariel.
qcv
(C/C++ code, LGPL 3) A computer vision framework based on Qt and OpenCV that provides an easy to use interface to display, analyze and run computer vision algorithms. The library is provided with multiple application examples including stereo, SURF, Sobel and Hough transform.
Machine Vision Toolbox
(MATLAB/C, LGPL lic) image processing, segmentation, blob/line/point features, multiview geometry, camera models, colorimetry.
BoofCV
(Java code, Apache lic) BoofCV is an open source Java library for real-time computer vision and robotics applications. BoofCV is organized into several packages: image processing, features, geometric vision, calibration, visualize, and IO.
Simd
(C++ code, MIT lic) Simd is free open source library in C++. It includes high performance image processing algorithms. The algorithms are optimized with using of SIMD CPU extensions such as SSE2, SSSE3, SSE4.2 and AVX2.
Free but not open source – ArrayFire (formerly LibJacket) is a matrix library for CUDA
(CUDA/C++, free lic) ArrayFire offers hundreds of general matrix and image processing functions, all running on the GPU. The syntax is very Matlab-like, with the goal of offering easy porting of Matlab code to C++/ArrayFire.

Image Acquisition, Decoding & encoding

FFMPEG
(C/C++ code, LGPL or GPL lic) Record, convert and stream audio and video (lots of codecs)
OpenCV
(C/C++ code, BSD lic) PNG, JPEG, … images, AVI video files, USB webcams, … (a minimal capture sketch follows this list)
Torch3Vision
(C/C++ code, BSD lic) Video file decoding/encoding (ffmpeg integration), image capture from a frame grabber or from USB, Sony pan/tilt/zoom camera control using VISCA interface
libVLC
(C/C++ code, GPL lic) Used by VLC player: record, convert and stream audio and video
Live555
(C/C++ code, LGPL lic) RTSP streams
ImageMagick
(C/C++ code, GPL lic) Loading & saving DPX, EXR, GIF, JPEG, JPEG-2000, PDF, PhotoCD, PNG, Postscript, SVG, TIFF, and more
DevIL
(C/C++ code, LGPL lic) Loading & saving of various image formats
FreeImage
(C/C++ code, GPL & FPL lic) PNG, BMP, JPEG, TIFF loading
VideoMan
(C/C++ code, LGPL lic) VideoMan aims to simplify image capture from cameras, video files or image sequences.
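
A minimal capture sketch for the OpenCV entry above, assuming a webcam on device 0 (a file path or stream URL can be passed instead):

#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture cap(0);               // device 0; a video file path also works here
    if (!cap.isOpened()) return 1;

    cv::Mat frame;
    while (cap.read(frame))                // read() returns false when no more frames arrive
    {
        cv::imshow("frame", frame);
        if (cv::waitKey(30) >= 0) break;   // stop on any key press
    }
    return 0;
}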

Segmentation

OpenCV
(C/C++ code, BSD lic) Pyramid image segmentation (a related segmentation sketch follows this list)
Branch-and-Mincut
(C/C++ code, Microsoft Research Lic) Branch-and-Mincut Algorithm for Image Segmentation
Efficiently solving multi-label MRFs (Readme)
(C/C++ code) Segmentation, object category labelling, stereo
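
The pyramid segmentation mentioned for OpenCV lives in its legacy C API; as a hedged, minimal alternative in the current C++ API, here is a GrabCut sketch (the rectangle is an assumed bounding box, not taken from any of the libraries above):

#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat img = cv::imread("input.jpg");       // placeholder input image
    if (img.empty()) return 1;

    cv::Rect roi(50, 50, 200, 200);              // assumed box roughly containing the object
    cv::Mat mask, bgdModel, fgdModel;
    cv::grabCut(img, mask, roi, bgdModel, fgdModel, 5, cv::GC_INIT_WITH_RECT);

    cv::Mat foreground;
    img.copyTo(foreground, mask & 1);            // GC_FGD and GC_PR_FGD both have bit 0 set
    cv::imwrite("foreground.png", foreground);
    return 0;
}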

Machine Learning

Torch
(C/C++ code, BSD lic) Gradient machines (multi-layered perceptrons, radial basis functions, mixtures of experts, convolutional networks and even time-delay neural networks), support vector machines, ensemble models (bagging, AdaBoost), non-parametric models (K-nearest-neighbours, Parzen regression and Parzen density estimation), distributions (K-means, Gaussian mixture models, hidden Markov models, input-output hidden Markov models, and Bayes classifiers), speech recognition tools

Object Detection

OpenCV
(C/C++ code, BSD lic) Viola-Jones face detection (Haar features); a minimal cascade-classifier sketch follows this list
Torch3Vision
(C/C++ code, BSD lic) MLP & cascade of Haar-like classifiers face detection
Hough Forests
(C/C++ code, Microsoft Research Lic) Class-Specific Hough Forests for Object Detection
Efficient Subwindow Object Detection
(C/C++ code, Apache Lic) Christoph Lampert's "Efficient Subwindow Search" algorithm for object detection
INRIA Object Detection and Localization Toolkit
(C/C++ code, Custom Lic) Histograms of Oriented Gradients library for Object Detection
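
A minimal Viola-Jones sketch for the OpenCV entry above; the cascade XML path is a placeholder for wherever the bundled haarcascade file lives on your system:

#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    cv::CascadeClassifier faces_cascade;
    if (!faces_cascade.load("haarcascade_frontalface_alt.xml")) return 1;  // placeholder path

    cv::Mat img = cv::imread("people.jpg"), gray;        // placeholder input image
    if (img.empty()) return 1;
    cv::cvtColor(img, gray, cv::COLOR_BGR2GRAY);
    cv::equalizeHist(gray, gray);                        // histogram equalization helps detection

    std::vector<cv::Rect> faces;
    faces_cascade.detectMultiScale(gray, faces, 1.1, 3); // scale factor 1.1, 3 min neighbours

    for (const cv::Rect& r : faces)
        cv::rectangle(img, r, cv::Scalar(0, 255, 0), 2); // draw a green box per detection
    cv::imwrite("faces.png", img);
    return 0;
}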

Object Category Labelling

Efficiently solving multi-label MRFs (Readme)
(C/C++ code) Segmentation, object category labelling, stereo
Multi-label optimization
(C/C++/MATLAB code) The gco-v3.0 library is for optimizing multi-label energies. It supports energies with any combination of unary, pairwise, and label cost terms.

Optical flow

OpenCV
(C/C++ code, BSD lic) Horn & Schunck algorithm, Lucas & Kanade algorithm, pyramidal Lucas-Kanade optical flow, block matching (a pyramidal Lucas-Kanade sketch follows this list)
GPU-KLT+FLOW
(C/C++/OpenGL/Cg code, LGPL) Gain-Adaptive KLT Tracking and TV-L1 optical flow on the GPU.
RLOF
(C/C++/Matlab code, Custom Lic.) The RLOF library provides GPU/CPU implementations of optical flow and feature tracking methods.
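
A pyramidal Lucas-Kanade sketch for the OpenCV entry above, tracking Shi-Tomasi corners from one frame to the next (frame file names are placeholders):

#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    cv::Mat prev = cv::imread("frame1.png", cv::IMREAD_GRAYSCALE);
    cv::Mat next = cv::imread("frame2.png", cv::IMREAD_GRAYSCALE);
    if (prev.empty() || next.empty()) return 1;

    std::vector<cv::Point2f> prevPts, nextPts;
    cv::goodFeaturesToTrack(prev, prevPts, 500, 0.01, 10);        // Shi-Tomasi corners

    std::vector<unsigned char> status;
    std::vector<float> err;
    cv::calcOpticalFlowPyrLK(prev, next, prevPts, nextPts, status, err);

    for (size_t i = 0; i < prevPts.size(); ++i)
        if (status[i])                                            // point i tracked successfully
            cv::line(prev, prevPts[i], nextPts[i], cv::Scalar(255));
    cv::imwrite("flow.png", prev);
    return 0;
}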

Features Extraction & Matching

SIFT by R. Hess
(C/C++ code, GPL lic) SIFT feature extraction & RANSAC matching (an OpenCV-based SIFT matching sketch follows this list)
OpenSURF
(C/C++ code) SURF feature extraction algorithm (a faster, SIFT-like detector/descriptor)
ASIFT (from IPOL)
(C/C++ code, École Polytechnique and ENS Cachan license for commercial use) Affine-SIFT (ASIFT)
VLFeat (formerly SIFT++)
(C/C++ code) SIFT, MSER, k-means, hierarchical k-means, agglomerative information bottleneck, and quick shift
SiftGPU
A GPU Implementation of Scale Invariant Feature Transform (SIFT)
Groupsac
(C/C++ code, GPL lic) An enhanced version of RANSAC that considers the correlation between data points
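
An OpenCV-based SIFT extraction and matching sketch, assuming a build where SIFT is available (the nonfree module in 2.4.x, or the main features2d module from 4.4 onwards; the code below uses the 4.4+ interface), with Lowe's ratio test to drop ambiguous matches:

#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    cv::Mat img1 = cv::imread("left.jpg",  cv::IMREAD_GRAYSCALE);   // placeholder image pair
    cv::Mat img2 = cv::imread("right.jpg", cv::IMREAD_GRAYSCALE);
    if (img1.empty() || img2.empty()) return 1;

    cv::Ptr<cv::SIFT> sift = cv::SIFT::create();
    std::vector<cv::KeyPoint> kp1, kp2;
    cv::Mat desc1, desc2;
    sift->detectAndCompute(img1, cv::noArray(), kp1, desc1);
    sift->detectAndCompute(img2, cv::noArray(), kp2, desc2);

    cv::BFMatcher matcher(cv::NORM_L2);                  // brute-force matcher for float descriptors
    std::vector<std::vector<cv::DMatch>> knn;
    matcher.knnMatch(desc1, desc2, knn, 2);              // two nearest neighbours per descriptor

    std::vector<cv::DMatch> good;
    for (const auto& m : knn)
        if (m.size() == 2 && m[0].distance < 0.75f * m[1].distance)   // Lowe's ratio test
            good.push_back(m[0]);
    return 0;
}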

Nearest Neighbors matching

FLANN
(C/C++ code, BSD lic) Fast approximate nearest neighbours with automatic algorithm configuration (a matching sketch using OpenCV's FLANN wrapper follows this list)
ANN
(C/C++ code, LGPL lic) Approximate Nearest Neighbor Searching
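
OpenCV bundles FLANN; here is a minimal sketch of approximate nearest-neighbour matching through its FlannBasedMatcher (descriptors are assumed to be float matrices, e.g. from SIFT/SURF):

#include <opencv2/opencv.hpp>
#include <vector>

// Return, for each row of desc1, its (approximate) nearest neighbour in desc2.
std::vector<cv::DMatch> flannMatch(const cv::Mat& desc1, const cv::Mat& desc2)
{
    cv::FlannBasedMatcher matcher;               // builds a KD-tree index for float descriptors
    std::vector<cv::DMatch> matches;
    matcher.match(desc1, desc2, matches);
    return matches;
}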

Tracking

OpenCV
(C/C++ code, BSD lic) Kalman filter, Condensation, CAMSHIFT, mean shift, snakes (a minimal Kalman-filter sketch follows this list)
KLT: An Implementation of the Kanade-Lucas-Tomasi Feature Tracker
(C/C++ code, public domain) Kanade-Lucas-Tomasi Feature Tracker
GPU_KLT
(C/C++/OpenGL/Cg code) A GPU-based Implementation of the Kanade-Lucas-Tomasi Feature Tracker
GPU-KLT+FLOW
(C/C++/OpenGL/Cg code, LGPL) Gain-Adaptive KLT Tracking and TV-L1 optical flow on the GPU
On-line boosting trackers
(C/C++, LGPL) On-line boosting tracker, semi-supervised tracker, beyond semi-supervised tracker
Single Camera background subtraction tracking
(C/C++, LGPL) Background subtraction based tracking algorithm using OpenCV.
Multi-camera tracking
(C/C++, LGPL) Multi-camera particle filter tracking algorithm using OpenCV and Intel IPP.
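
A minimal Kalman-filter sketch for the OpenCV entry above, tracking a 2-D point with a constant-velocity model (state = x, y, vx, vy; measurement = x, y; the per-frame observations are left as placeholders):

#include <opencv2/opencv.hpp>

int main()
{
    cv::KalmanFilter kf(4, 2, 0);                        // 4 state variables, 2 measured
    kf.transitionMatrix = (cv::Mat_<float>(4, 4) <<
        1, 0, 1, 0,
        0, 1, 0, 1,
        0, 0, 1, 0,
        0, 0, 0, 1);                                     // constant-velocity motion model
    cv::setIdentity(kf.measurementMatrix);
    cv::setIdentity(kf.processNoiseCov, cv::Scalar::all(1e-4));
    cv::setIdentity(kf.measurementNoiseCov, cv::Scalar::all(1e-1));

    cv::Mat measurement(2, 1, CV_32F);
    for (int t = 0; t < 100; ++t)                        // hypothetical 100-frame sequence
    {
        cv::Mat predicted = kf.predict();                // a-priori estimate for this frame
        measurement.at<float>(0) = 0.f;                  // replace with the observed x
        measurement.at<float>(1) = 0.f;                  // replace with the observed y
        cv::Mat corrected = kf.correct(measurement);     // a-posteriori estimate (x, y, vx, vy)
    }
    return 0;
}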

Simultaneous localization and mapping

Real-Time SLAM – SceneLib
(C/C++ code, LGPL lic) Real-time vision-based SLAM with a single camera
PTAM
(C/C++ code, Isis Innovation Limited lic) Parallel Tracking and Mapping for Small AR Workspaces
GTSAM
(C/C++ code, BSD lic) GTSAM is a library of C++ classes that implement smoothing and mapping (SAM) in robotics and vision, using factor graphs and Bayes networks as the underlying computing paradigm rather than sparse matrices

Camera Calibration & constraint

OpenCV
(C/C++ code, BSD lic) Chessboard calibration, calibration with a rig or pattern (a minimal calibration sketch follows this list)
Geometric camera constraint – Minimal Problems in Computer Vision
Minimal problems in computer vision arise when computing geometrical models from image data. They often lead to solving systems of algebraic equations.
Camera Calibration Toolbox for Matlab
(Matlab toolbox) Camera Calibration Toolbox for Matlab by Jean-Yves Bouguet (C implementation in OpenCV)
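
A minimal chessboard calibration sketch for the OpenCV entry above; the board size, square size and image file names are assumptions, not values from any toolbox listed here:

#include <opencv2/opencv.hpp>
#include <iostream>
#include <string>
#include <vector>

int main()
{
    const cv::Size boardSize(9, 6);                      // inner corners per row / column (assumed)
    const float squareSize = 0.025f;                     // square edge length in metres (assumed)

    std::vector<std::vector<cv::Point2f>> imagePoints;
    std::vector<std::vector<cv::Point3f>> objectPoints;
    cv::Size imageSize;

    for (int i = 0; i < 20; ++i)                         // hypothetical set of 20 chessboard views
    {
        cv::Mat img = cv::imread("calib_" + std::to_string(i) + ".jpg", cv::IMREAD_GRAYSCALE);
        if (img.empty()) continue;
        imageSize = img.size();

        std::vector<cv::Point2f> corners;
        if (!cv::findChessboardCorners(img, boardSize, corners)) continue;
        cv::cornerSubPix(img, corners, cv::Size(11, 11), cv::Size(-1, -1),
                         cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT, 30, 0.01));
        imagePoints.push_back(corners);

        std::vector<cv::Point3f> obj;                    // planar 3-D corner coordinates (z = 0)
        for (int y = 0; y < boardSize.height; ++y)
            for (int x = 0; x < boardSize.width; ++x)
                obj.push_back(cv::Point3f(x * squareSize, y * squareSize, 0.f));
        objectPoints.push_back(obj);
    }

    cv::Mat cameraMatrix, distCoeffs;
    std::vector<cv::Mat> rvecs, tvecs;
    double rms = cv::calibrateCamera(objectPoints, imagePoints, imageSize,
                                     cameraMatrix, distCoeffs, rvecs, tvecs);
    std::cout << "RMS reprojection error: " << rms << "\n" << cameraMatrix << std::endl;
    return 0;
}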

Multi-View Reconstruction

Bundle Adjustment – SBA
(C/C++ code, GPL lic) A Generic Sparse Bundle Adjustment Package Based on the Levenberg-Marquardt Algorithm
Bundle Adjustment – SSBA
(C/C++ code, LGPL lic) Simple Sparse Bundle Adjustment (SSBA)

Stereo

Efficiently solving multi-label MRFs (Readme)
(C/C++ code) Segmentation, object category labelling, stereo
LIBELAS: Library for Efficient LArge-scale Stereo Matching
(C/C++ code) Disparity maps, stereo matching (a related OpenCV block-matching sketch follows this list)
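
Neither library above is OpenCV, but for comparison here is a minimal block-matching disparity sketch with OpenCV's StereoBM (3.x/4.x interface; a rectified image pair is assumed):

#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat left  = cv::imread("left.png",  cv::IMREAD_GRAYSCALE);   // rectified pair (placeholders)
    cv::Mat right = cv::imread("right.png", cv::IMREAD_GRAYSCALE);
    if (left.empty() || right.empty()) return 1;

    cv::Ptr<cv::StereoBM> bm = cv::StereoBM::create(64, 21);   // 64 disparities, 21x21 block
    cv::Mat disp16, disp8;
    bm->compute(left, right, disp16);                          // 16-bit disparities (scaled by 16)
    disp16.convertTo(disp8, CV_8U, 255.0 / (64 * 16));         // rescale to 8-bit for viewing

    cv::imwrite("disparity.png", disp8);
    return 0;
}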

Structure from motion

Bundler
(C/C++ code, GPL lic) A structure-from-motion system for unordered image collections
Patch-based Multi-view Stereo Software (Windows version)
(C/C++ code, GPL lic) Multi-view stereo software that takes a set of images and camera parameters and reconstructs the 3D structure of an object or scene visible in the images
libmv – work in progress
(C/C++ code, MIT lic) A structure from motion library
Multicore Bundle Adjustment
(C/C++/GPU code, GPL3 lic) Design and implementation of new inexact Newton-type bundle adjustment algorithms that exploit hardware parallelism to efficiently solve large-scale 3D scene reconstruction problems.
openMVG
(C/C++/GPU code, MPL2 lic) OpenMVG ("open Multiple View Geometry") is a library for computer-vision scientists, especially targeted at the multiple view geometry community. It is designed to provide easy access to classical problem solvers in multiple view geometry and to solve them accurately.

Visual odometry

LIBVISO2: Library for VISual Odometry 2
(C/C++ code, Matlab, GPL lic) LIBVISO2 is a very fast, cross-platform (Linux, Windows) C++ library with MATLAB wrappers for computing the 6-DOF motion of a moving mono/stereo camera.

Posted in Apps Development, C, Computer Hardware, Computer Network & Security, CUDA, Game Development, GPU (CUDA), GPU Accelerated, Graphics Cards, Image Processing, OpenCV, PARALLEL, Simulation, Virtualization

 