Something More for Research


cppconlib: A C++ library for working with the Windows console

Posted by Hemprasad Y. Badgujar on July 20, 2015


cppconlib is built with C++11 features and requires Visual Studio 2012 or newer. The library is available in a single header called conmanip.h and provides a set of helper classes, functions and constants for manipulating a Windows console (using the Windows console functions). The library features the following components:

  • console_context<T>: represents a context object for console operations; its main purpose is restoring console settings; typedefs for the three consoles are available (console_in_context, console_out_context and console_err_context)
  • console<T>: represents a console object providing operations such as changing the foreground and background colors, the input mode, screen buffer size, title, and others; typedefs for the three consoles are available (console_in, console_out and console_err)
  • manipulating functions that can be used with cout/wcout and cin/wcin: settextcolor()/restoretextcolor(), setbgcolor()/restorebgcolor(), setcolors(), setmode()/clearmode(), setposx()/setposy()/setpos()

The library can be downloaded from the CodePlex project page linked below, where detailed documentation is also available.


Examples

The following example prints some text in custom colors and then reads text in a different set of colors.


The following code prints a rhomb to the console:


For more details and updates check the project at codeplex: https://cppconlib.codeplex.com.

UPDATE: A NuGet package for cppconlib is available.

Posted in Computer Software, Computer Vision

Project Template in Visual Studio

Posted by Hemprasad Y. Badgujar on March 5, 2015


 


Introduction

This article describes, step by step, how to create a project template in Visual Studio 2012 and a VSIX installer that deploys it. Each step includes a screenshot to help the reader stay focused.

Background

A number of predefined project and project item templates are installed when you install Visual Studio. You can use one of the many project templates to create the basic project container and a preliminary set of items for your application, class, control, or library. You can also use one of the many project item templates to create, for example, a Windows Forms application or a Web Forms page to customize as you develop your application.

You can create custom project templates and project item templates and have these templates appear in the New Project and Add New Item dialog boxes. The article describes the complete process of creating and deploying the project template.

Using the Code

Here, I have taken a very simple example which contains nearly no code but this can be extended as per your needs.

Create Project Template

First of all, create the project (or item) that you want the template to reproduce; new projects will start from it.

Then, export the template (we are going to use the exported template as a shortcut to build our Visual Studio template package):

Visual Studio Project Templates

We are creating a project template here.

Fill all the required details:

A zip file should get created:

Creating Visual Studio Package Project

To use VSIX projects, you need to install the Visual Studio 2012 VSSDK.

Download the Visual Studio 2012 SDK.

You should see a new project template, "Visual Studio Package", after installing the SDK.

Select C#, since our project template is a C# template.

Provide details:

Currently, we don’t need the unit test projects, but they are good to have.

In the solution, double-click the manifest so that the designer opens.

Fill in all the tabs. The most important is Assets: here you give the path of our project template (DummyConsoleApplication.zip).

As a verification step, build the solution; you should see a .vsix file generated after its dependency project builds:

Installing the Extension

The project template is located under the “Visual C#” node.

Uninstalling the Project Template

References

Posted in .Net Platform, C, Computer Languages, Computer Software, Computer Softwares, Computer Vision, CUDA, GPU (CUDA), Installation, OpenMP, PARALLEL

System Image Backup in Windows 8.1

Posted by Hemprasad Y. Badgujar on January 11, 2015


There is no traditional backup and restore functionality in Windows 8.1, but there is still a way to create a full image of the system drive (the disk/partition where Windows is installed) that can later be restored from the Recovery Environment.

Please remember to create Recovery Drive for easy access to Repair your PC options!
You might also want to create a Custom Recovery Image for better Refresh your PC functionality.

Caveats for System Image Backups in Windows 8.1

Clearly, Microsoft wants you to use File History, OneDrive (aka SkyDrive) and maybe even Storage Spaces for storing, syncing and backing up your personal files, and Refresh your PC or Reset your PC for restoring Windows to a working state. Maybe that is why System Image Backup is so difficult to find in Windows 8.1.

Here are a few things you should know about system images in Windows 8.1:

  • You can create only one System Image Backup on a drive: any previous versions will be overwritten.
  • There is no easy way of scheduling image backups, and for the previous reason, it is not really recommended either: you do not want to automatically overwrite a good system image with an image of a computer that does not run properly.
  • System Image Backup cannot be used for restoring individual files or folders: restoring the image means overwriting everything on the target drive. File History is the proper solution for backing up and restoring personal data in Windows 8.1.

Using DISM to verify that Windows Component Store is intact

Before you create a full backup, it is strongly recommended to check for corruption in Windows Component Store – there is no point in backing up a broken installation that will probably fail in the near future.

Open an elevated Command Prompt: either open the Start screen, type cmd, right-click Command Prompt and select Run as administrator; or, if you have set Command Prompt to display in Taskbar Navigation settings, use keyboard shortcut WINDOWS KEY+X to bring up the Quick Links menu (a list of commands for power users) and click Command Prompt (Admin).
In the console window, type or copy-paste the following command to have the DISM (Deployment Image Servicing and Management) tool verify the integrity of the Component Store: Dism /Online /Cleanup-Image /ScanHealth. Press the ENTER key to launch the command.

The check takes up to 15 minutes to complete, and if the result reads “No component store corruption detected”, you have the green light to create the System Image Backup.
If the result instead reads “The component store is repairable”, type Dism /Online /Cleanup-Image /RestoreHealth and press the ENTER key to fix the corruption. The process can again take up to 15 minutes, and a positive result reads: “The restore operation completed successfully. The component store corruption was repaired.” Then move on to creating the System Image Backup.

In case the RestoreHealth command fails no matter what, it is best to perform a non-destructive reinstall of Windows 8.1. This seems to be the only solution to the infamous DISM error 0x800f081f.

Creating a System Image Backup in Windows 8.1

To access the feature, open Search everywhere (keyboard shortcut WINDOWS KEY+Q), type File History and click the result.
Yes, you read that right: “File History”. Also, connect your external hard drive with plenty of available disk space now.
Click the link titled System Image Backup in the bottom left corner of the File History window.

First, System Image Backup looks for available DVD-writers and hard drives. While you can use network drives for backing up your PC, it is not recommended because backed up data cannot be securely protected for a network target.
I cannot recommend using DVDs for backups, either – optical media is vulnerable to scratches that might ruin the whole backup set, so the only usable option here is a hard disk drive.

Common sense dictates that you cannot create a system image on the same physical drive where Windows is installed: if that drive went bad, you would lose both Windows and all backups.

In the Create a system image window, select On a hard disk. The best one might be already selected, but you can change the target drive using the combo box.
I recommend using destination drives that are connected to standard controllers (not SCSI, SAS, RAID and other controllers that Windows cannot automatically recognize or find driver for) or standard USB ports.

If you’ve created a system image on the selected drive before, there will be a line stating “Most recent backup on drive:” beneath the combo box. Here’s the catch: previous system image will be overwritten, so you can really have only one backup at a time on the same drive.
Click Next.
Windows then lists your backup location and size, plus drives/partitions that will be backed up.
Again, if there is a previous system image on the drive where you want to back up your PC, a yellow warning sign with the text “Any existing system images for this machine might be overwritten” appears.
Click Start backup if you’re satisfied with the settings.
Depending on the size of selected drive(s), the backup might take several hours. Click Close after it is complete.

Scheduling System Image Backup in Windows 8.1

While it is not recommended to schedule System Image Backups in Windows 8.1, you might prefer to do so if you have more than one external hard drive dedicated for backups.
In such case, you can manually create one backup on the first drive (for example, drive F:) and leave it untouched forever – this will be your fail-safe backup right after installing and updating Windows and necessary software (you should use File History for backing up your personal files and folders). Then schedule a PowerShell command that creates and updates backup on a different physical drive (for example, drive E:) on weekly basis.

To use this advanced scenario, use keyboard shortcut WINDOWS KEY+Q to open Search everywhere, type schedule and click Schedule tasks.
Right-click Task Scheduler Library and select Create Basic Task from the menu.
In the Create Basic Task Wizard window, type Name for the new task. Description is optional.
Click Next after you’re done.
Set Task Trigger to Weekly and click Next. If programs and apps on your Windows device change rarely, you can select Monthly instead.
Because creating a system image slows your PC down for quite some time, choose a start time when your machine is most probably not in heavy use.
Click Next again.
Leave Start a program selected for Action and click Next.
Type powershell.exe into Program/script field and then copy and paste the following line into Add arguments (optional) field:
wbAdmin start backup -backupTarget:E: -include:C: -allCritical -quiet
Replace drive letter E: in the -backupTarget argument with the appropriate one for your backup destination disk if necessary.
Because Windows 8.1 always assigns drive letter C: to system drive (the one where Windows is installed), changing this one is not needed.
The -allCritical option includes everything (additional partitions/volumes or drives) required to start and run Windows properly in the backup. I guess you all know what -quiet means.
In the final screen of Create Basic Task Wizard, tick the Open the Properties dialog for this task when I click Finish check box and click Finish.
In the Security options section of Task Properties window, select the Run whether user is logged on or not option and tick the Run with highest privileges check box. Then click Change user or Group button next to the When running the task, use the following user account field.
Type system into Enter the object name to select field and click Check Names. The name turns into all capital letters and gets underlined. Click OK.
Back in the Task Properties window, open Settings tab and enable the Run task as soon as possible after a scheduled start is missed option. This ensures that the backup is always created.
Finally, click OK to save the task changes. Make sure that the destination drive is always connected during the scheduled time.

To verify that the backup task runs and finishes properly, open WindowsImageBackup folder on the target drive. There should be a subfolder with your computer’s name – open it and then open another subfolder, Logs, and see if the Backup_error_<date and time>.log file is empty. If it is, the backup finished successfully.
Please note that you might have to use administrative privileges to open the folders for the first time.

Another way is to check backup log in Event Viewer. Use keyboard shortcut WINDOWS KEY+X to open Quick Links menu and click Event Viewer. Alternatively, right-click or tap and hold the Start tip on Taskbar.
Expand Applications and Services Logs, Microsoft, Windows, Backup items and click Operational. You’ll then see the list of events related to backing up your device. Here are some most common backup events in Windows:

  • Event ID 1 – The backup operation has started.
  • Event ID 4 – The backup operation has finished successfully.
  • Event ID 5 – Backup started at <date and time> failed with following error code <number>.
  • Event ID 8 – Backup cancelled.
  • Event ID 14 – The backup operation has completed. This event appears even if backup was cancelled or did not finish successfully.
  • Event ID 20 – Backup started at <date and time> failed as another backup or recovery is in progress.
  • Event ID 50 – Backup failed as required space was not available on the backup target. Free up some disk space on the target drive or increase available disk space on Windows disk.

 

How to restore Windows 8.1 from a System Image Backup

First, you need to get into the Windows 8.1 Recovery Environment (WinRE) using a Recovery Drive or bootable Windows 8.1 installation DVD. If Windows is running, you can invoke the Settings charm (keyboard shortcut WINDOWS KEY+I), click Power and hold down the SHIFT key while clicking Restart.
Detailed instructions are included in Repair your computer in Windows 8 and 8.1 tutorial.

Click or tap Troubleshoot in Choose an option screen, then choose Advanced Options in Troubleshoot screen.
 

Next, in Advanced options screen, click or tap System Image Recovery, and choose Windows 8.1.

 

How to use Recovery Environment for refreshing, restoring or resetting Windows 8 and 8.1

First part of this article describes how to get into Windows 8 or 8.1 Recovery Environment (WinRE) and repair smaller problems such as file system corruption and corrupted Boot Configuration Data.

Options to try before Refresh your PC, Reset your PC or restoring backup image in Windows 8 and 8.1
  • Always boot to Safe Mode at least once – this often repairs corrupted file system and essential system files.
  • If Windows is able to boot, use System File Checker and icacls.exe to repair corrupted system files.
  • While Windows is running, use free WhoCrashed for determining BSOD (Blue Screen Of Death) causes.
    Reliability Monitor might also reveal faulty drivers or software.
  • Try a non-destructive reinstall of Windows 8 or 8.1. It certainly takes a lot of time, but it often works much better than Refresh Your PC. This repair method leaves all your files, settings, installed programs and apps intact. It is also about the only option for fixing DISM RestoreHealth failure 0x800f081f.
Step 3 – Refresh your PC

In case Automatic Repair and System Restore did not help and you do not have any system image backups available, you can use the brand new option in Windows 8 and 8.1 – Refresh your PC. This method is pretty close to Non-destructive reinstall of Windows 8 or 8.1, but you will lose all apps and Desktop programs that were not installed from Windows Store unless you have created a Custom Recovery Image.

Please note that you cannot use the Refresh Your PC feature if Windows 8 or 8.1 is installed on a drive with GPT (not MBR) partition table until you force the “UEFI only” boot setting in BIOS/EFI. Windows will not detect GPT partition alignment correctly if BIOS booting is enabled.

All your personal files, documents and most of personalization settings will remain intact, and a list of removed programs will be available on your Desktop. Windows settings and all installed app settings will revert back to defaults to avoid possible conflicts.

Please be aware that even if using a Custom Recovery Image, Desktop programs will lose their custom settings and revert back to defaults.

Those who upgraded from Windows 8 to 8.1 without clean install/removing everything, can encounter a problem where Windows 8, not 8.1 is restored. This is because you need to update Custom Recovery Image after upgrading to Windows 8.1 – the image on Recovery Partition is still Windows 8.

Refresh your PC restores default contents of the following folders on system drive (the one where Windows 8/8.1 is installed):

  • Windows
  • Program Files
  • Program Files (x86)
  • ProgramData
  • Users\<user name>\AppData

In most cases, you must have Windows 8/8.1 installation or recovery media (DVD) available. Media prompt will not appear only if a custom recovery image is available.

Some users report “Unable to refresh your PC. A required drive partition is missing” and “The drive where Windows is installed is locked. Unlock the drive and try again” errors during the refresh process. In most cases, rebuilding Boot Configuration Data helps. This might also resolve the problems that made Windows unable to boot.
A less common cause is that Windows 8 or 8.1 cannot locate a proper driver for the hard drive controller and therefore cannot access partitions. If Windows is able to start, try installing proper chipset drivers (such as Intel or AMD) before refreshing your PC.

You can also launch Refresh your PC while Windows 8 or 8.1 is running – use keyboard shortcut WINDOWS KEY+I to open Settings charm and click Change PC settings.
In Windows 8, open General tab on the left and click Get started in the Refresh your PC without affecting your files section.
In Windows 8.1, open Update & recovery tab on the left, then open Recovery tab and click Get started in the Refresh your PC without affecting your files section.
The following process is nearly identical to the one described below.

To start, click Refresh your PC in Troubleshoot screen. If you have a custom recovery image on some external drive, make sure the drive is connected. If Windows 8 or 8.1 is running in normal mode, not Recovery Environment, you can also verify the custom image is available.
If Refresh your PC does not detect a custom recovery image, or one has not been created, it will use defaults and all installed Desktop programs and non-Windows Store apps will be removed.

An overview of refreshing will appear. Click Next if you are satisfied with the consequences.
If you started this operation some other way, you might have to sign in first.

As usual, you must choose a target operating system. Click the correct Windows 8/8.1 installation in the list. In most cases, there is just one.

If the process asks you to insert your Windows installation or recovery media, insert it and the process will continue automatically.

Windows 8 and 8.1 will remind you that you must have your PC plugged in. Click Refresh to start the process.

The process will take from 15-20 minutes to several hours, depending on the number of installed programs and the speed of hard drive or SSD. It has several stages, such as “Preparing”, “Getting devices ready”, “System” and “Welcome”.

In most cases, this action solves all problems and Windows 8/8.1 is able to boot and run normally.
Windows 8.1 will restore your synced settings and apps after signing in (if you had syncing to OneDrive enabled before refreshing) – this takes some time and your device might be slower than usual during this.
If necessary, reinstall all removed programs after this – the list is available on your Desktop as an HTML document titled “Removed Apps” and it contains links to program downloads.
If you restored a Custom Recovery Image, you must reconfigure all Desktop programs and non-Windows Store apps. Your File History is intact, but you must register the recovery image again.

Step 4 – restore a disk image backup or recover files

Windows 8 does have a traditional backup program that is well hidden under the name Windows 7 File Recovery. If you automated it properly, you can click System Image Recovery in Advanced options screen and follow instructions in Restore a System Image in Windows 7 and 8 article.

Windows 8.1 has the disk imaging backup hidden even better. In case you’ve created System Image Backup, you can restore it here.
If you are using free AOMEI Backupper instead, read this article about restoring disk image using bootable rescue media.

EaseUS Todo Backup Free users should follow the Restore disk image tutorial.

In case you do not have any backups and you have not turned on File History in Windows 8 or 8.1, you can use my Data Recovery CD/USB or Puppy Linux to copy your documents, pictures, videos, music, settings, etc. to a flash drive or external hard disk.
After copying is complete, run Step 3 (Refresh your PC, does not affect your documents or personalization settings) or Step 5 (Reset your PC, removes everything and installs a clean copy of Windows), copy your rescued files back to your computer if needed, and do start making regular backups this time.

Step 5 – Reset your PC

Reset your PC is a last resort – you should definitely try restoring a backup image or copy important files to an external drive first. Also, make sure your File History drive is not connected if you have turned the feature on and need to keep your personal files and Libraries.

Resetting means removing all user accounts, settings, personal files, installed apps and Desktop programs and reverting to a clean (default) Windows 8/8.1 installation.

This option is useful if you want to sell, donate or recycle your PC and make sure no one can recover your personal data.

Please note that you cannot use the Reset Your PC feature if Windows 8 or 8.1 is installed on a drive with GPT (not MBR) partition table until you force the “UEFI only” boot setting in BIOS/EFI. Windows will not detect GPT partition alignment correctly if BIOS booting is enabled.

Those who upgraded from Windows 8 to 8.1 without clean install/removing everything, can encounter a problem where Windows 8, not 8.1 is restored. This is because you need to update Custom Recovery Image after upgrading to Windows 8.1 – the image on Recovery Partition is still Windows 8.

You’ll need your Windows 8/8.1 installation or recovery media (DVD) and product key to run Reset your PC.

If you encounter the “Unable to reset your PC. A required drive partition is missing” and “The drive where Windows is installed is locked. Unlock the drive and try again” errors during the reset process, try rebuilding Boot Configuration Data first. This might also resolve the problems that made Windows unable to boot.
A far less common cause is that Windows 8 or 8.1 cannot locate a proper driver for the hard drive controller and therefore cannot access partitions. If Windows is able to start, try installing proper chipset drivers (such as Intel or AMD) before refreshing your PC.

You can also launch Reset your PC while Windows 8/8.1 is running – use keyboard shortcut WINDOWS KEY+I to open the Settings charm and click Change PC settings.
In Windows 8, open General tab on the left and click Get started in the Remove everything and reinstall Windows section.
In Windows 8.1, open Update & recovery tab on the left, then open Recovery tab and click Get started in the Remove everything and reinstall Windows section.
The following process is nearly identical to the one described below.

To reset Windows 8 or 8.1, click Reset your PC in Troubleshoot screen.
An overview of resetting will appear. Click Next if you are satisfied with the consequences.
As usual, you must choose a target operating system. Click the correct Windows 8/8.1 installation in the list. In most cases there is only one, anyway.
If the process asks you to insert your Windows installation or recovery media, insert it and the process will continue automatically.
If your PC has more than one drive (e.g. two internal hard disks), Reset your PC asks whether you want to remove all files from all drives.
Click Only the drive where Windows is installed if you are repairing your Windows 8 or 8.1 installation.
If you’re about to sell, donate or recycle the PC, click All drives instead.
Next, two options for removing files appear:

  • Just remove my files will delete all files normally. This is a quick process and is suggested if you just want to reinstall Windows 8 or 8.1 and continue using this computer.
  • Fully clean the drive will delete all files securely so that recovery programs are not able to restore them. This option is recommended if you are planning to sell, donate or recycle your computer – and it can take from several hours to much longer to complete.

 

A reminder of consequences appears with a suggestion to keep your computer plugged in. To start the process, click Reset.
If you chose the quick reset option, the process will not take long – much less time than refreshing Windows 8 or 8.1. Plan about 10-30 minutes for the process.
In case you chose to fully clean all drives, the process will certainly take at least a few hours.
Just like Refresh your PC, the process has several stages, such as “Preparing”, “Getting devices ready”, “System” and “Welcome”. Please stand by.
As warned above, you will need your Windows product key after Reset your PC is complete. Type it in and click Next. If you do not have one right away, click Skip instead – but please remember that Windows 8 and 8.1 will work for only the next 30 days without activation.

The process will continue exactly as on a brand-new PC: License terms, Settings, Personalization, User Account, etc.
Windows 8.1 allows restoring your apps and settings from another synced PC (if you had syncing to OneDrive enabled before resetting), or setting it up as a brand new PC. Please note that syncing takes some time and your PC might be slower during that time.

After all this is done, Windows 8/8.1 should run flawlessly.

If you’re not planning to get rid of the device, remember to configure Windows Update, System Restore, File History and backups; reinstall free anti-malware program(s) and other free security apps, such as WOT Safe Surfing Tool and Secunia PSI. Also, do not forget to create a custom recovery image after reinstalling all apps and Desktop programs.

Posted in Computer Software, Computer Softwares, Computing Technology, Free Tools, Installation, Operating Systems, Windows OS

Posted by Hemprasad Y. Badgujar on December 11, 2014


Cloud scaling, Part 1: Build a compute node or small cluster application and scale with HPC

Leveraging warehouse-scale computing as needed

Discover methods and tools to build a compute node and small cluster application that can scale with on-demand high-performance computing (HPC) by leveraging the cloud. This series takes an in-depth look at how to address unique challenges while tapping and leveraging the efficiency of warehouse-scale on-demand HPC. The approach allows the architect to build locally for expected workload and to spill over into on-demand cloud HPC for peak loads. Part 1 focuses on what the system builder and HPC application developer can do to most efficiently scale your system and application.

Exotic HPC architectures with custom-scaled processor cores and shared memory interconnection networks are being rapidly replaced by on-demand clusters that leverage off-the-shelf general purpose vector coprocessors, converged Ethernet at 40 Gbit/s per link or more, and multicore headless servers. These new HPC on-demand cloud resources resemble what has been called warehouse-scale computing, where each node is homogeneous and headless and the focus is on total cost of ownership and power use efficiency overall. However, HPC has unique requirements that go beyond social networks, web search, and other typical warehouse-scale computing solutions. This article focuses on what the system builder and HPC application developer can do to most efficiently scale your system and application.

Moving to high-performance computing

The TOP500 and Green500 supercomputers (see Resources) since 1994 are more often not custom designs, but rather designed and integrated with off-the-shelf headless servers, converged Ethernet or InfiniBand clustering, and general-purpose graphics processing unit (GP-GPU) coprocessors that aren’t for graphics but rather for single program, multiple data (SPMD) workloads. The trend in high-performance computing (HPC) away from exotic custom processor and memory interconnection design to off-the-shelf—warehouse-scale computing—is based on the need to control total cost of ownership, increase power efficiency, and balance operational expenditure (OpEx) and capital expenditure (CapEx) for both start-up and established HPC operations. This means that you can build your own small cluster with similar methods and use HPC warehouse-scale resources on-demand when you need them.

The famous 3D torus interconnection that Cray and others used may never fully go away (today, the TOP500 is one-third massively parallel processors [MPPs] and two-thirds cluster architecture for top performers), but focus on efficiency and new OpEx metrics like Green500 Floating Point Operations (FLOPs)/Watt are driving HPC and keeping architecture focused on clusters. Furthermore, many applications of interest today are data driven (for example, digital video analytics), so many systems not only need traditional sequential high performance storage for HPC checkpoints (saved state of a long-running job) but more random access to structured (database) and unstructured (files) large data sets. Big data access is a common need of traditional warehouse-scale computing for cloud services as well as current and emergent HPC workloads. So, warehouse-scale computing is not HPC, but HPC applications can leverage data center-inspired technology for cloud HPC on demand, if designed to do so from the start.

Power to computing

Power to computing can be measured in terms of a typical performance metric per Watt—for example, FLOPS/Watt or input/output per second/Watt for computing and I/O, respectively. Furthermore, any computing facility can be seen as a plant for converting Watts into computational results, and a gross measure of good plant design is power use efficiency (PUE), which is simply the ratio of total facility power over that delivered to computing equipment. A good value today is 1.2 or less. Reasons for higher PUEs include inefficient cooling methods, administrative overhead, and lack of purpose-built facilities compared to cloud data centers (see Resources for a link to more information).

Changes in scalable computing architecture focus over time include:

  • Early focus on a fast single processor (uniprocessor) to push the stored-program arithmetic logic unit central processor to the highest clock rates and instruction throughput possible:
    • John von Neumann, Alan Turing, Robert Noyce (founder of Intel), Ted Hoff (Intel universal processor proponent), along with Gordon Moore see initial scaling as a challenge to scaling digital logic and clock a processor as fast as possible.
    • Up to at least 1984 (and maybe longer), the general rule was “the processor makes the computer.”
    • Cray designed vector processors (X-MP, Y-MP) and distributed-memory multiprocessors interconnected by a six-way 3D torus for custom MPP machines, but this was unique to the supercomputing world.
    • IBM’s focus early on was scalable mainframes and fast uniprocessors until the announcement of the IBM® Blue Gene® architecture in 1999 using a multicore IBM® POWER® architecture system-on-a-chip design and a 3D torus interconnection. The current TOP500 includes many Blue Gene systems, which have often occupied the LINPACK-measured TOP500 number one spot.
  • More recently since 1994, HPC is evolving to a few custom MPP and mostly off-the-shelf clusters, using both custom interconnections (for example, Blue Gene and Cray) and off-the-shelf converged Ethernet (10G, 40G) and InfiniBand:
    • The TOP500 has become dominated by clusters, which comprise the majority of top-performing HPC solutions (two-thirds) today.
    • As shown in the TOP500 chart by architecture since 1994, clusters and MPP dominate today (compared to single instruction, multiple data [SIMD] vector; fast uniprocessors; symmetric multiprocessing [SMP] shared memory; and other, more obscure architectures).
    • John Gage at Sun Microsystems (now Oracle) stated that “the network is the computer,” referring to distributed systems and the Internet, but low-latency networks in clusters likewise become core to scaling.
    • Coprocessors interfaced to cluster nodes via memory-mapped I/O, including GP-GPU and even hybrid field-programmable gate array (FPGA) processors, are used to accelerate specific computing workloads on each cluster node.
  • Warehouse-scale computing and the cloud emerge with focus on MapReduce and what HPC would call embarrassingly parallel applications:
    • The TOP500 is measured with LINPACK and FLOPs and so is not focused on cost of operations (for example, FLOPs/Watt) or data access. Memory access is critical, but storage access is not so critical, except for job checkpoints (so a job can be restarted, if needed).
    • Many data-driven applications have emerged in the new millennium, including social networks, Internet search, global geographical information systems, and analytics associated with more than a billion Internet users. This is not HPC in the traditional sense but warehouse-scale computing operating at massive scale.
    • Luiz André Barroso states that “the data center is the computer,” a second shift away from processor-focused design. The data center is highly focused on OpEx as well as CapEx, and so is a better fit for HPC where FLOPs/Watt and data access matter. These Google data centers have a PUE less than 1.2—a measure of total facility power consumed divided by power used for computation. (Most computing enterprises have had a PUE of 2.0 or higher, so, 1.2 is very low indeed. See Resources for more information.)
    • Amazon launched Amazon Elastic Compute Cloud (Amazon EC2), which is best suited to web services but has some scalable and at least high-throughput computing features (see Resources).
  • On-demand cloud HPC services expand, with an emphasis on clusters, storage, coprocessors and elastic scaling:
    • Many private and public HPC clusters occupy TOP500, running Linux® and using common open source tools, such that users can build and scale applications on small clusters but migrate to the cloud for on-demand large job handling. Companies like Penguin Computing, which features Penguin On-Demand, leverage off-the-shelf clusters (InfiniBand and converged 10G/40G Ethernet), Intel or AMD multicore headless nodes, GP-GPU coprocessors, and scalable redundant array of independent disks (RAID) storage.
    • IBM Platform computing provides IBM xSeries® and zSeries® computing on demand with workload management tools and features.
    • Numerous universities and start-up companies leverage HPC on demand with cloud services or off-the-shelf clusters to complement their own private services. Two that I know well are the University of Alaska Arctic Region Supercomputing Center (ARSC) Pacman (Penguin Computing) and the University of Colorado JANUS cluster supercomputer. A common Red Hat Enterprise Linux (RHEL) open source workload tool set and open architecture allow for migration of applications from private to public cloud HPC systems.

Figure 1 shows the TOP500 move to clusters and MPP since the mid-1990s.

Figure 1. TOP500 evolution to clusters and MPP since 1994

Image showing the evolution to clusters

The cloud HPC on-demand approach requires well-defined off-the-shelf clustering, compute nodes, and tolerance for WAN latency in transferring workloads. As such, these systems are not likely to overtake the top spots in the TOP500, but they are likely to occupy the Green500, provide efficient scaling for many workloads, and now comprise the majority of the TOP500.

High-definition digital video computer vision: a scalable HPC case study

Most of us deal with compressed digital video, often in Moving Picture Experts Group (MPEG) 4 format, and don’t think of the scale of even a high-definition (HD) web cam in terms of data rates and the processing needed to apply even simple image analysis. Digital cinema workflow and post-production experts know the challenges well. They deal with individual 4K frames (4096 pixels across, roughly 8 megapixels) or much higher resolutions. These frames might be compressed, but they are not compressed over time in groups of pictures as MPEG does, and the compression is often lossless rather than lossy.
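To get a feel for these rates, consider the raw (uncompressed) bandwidth of a video stream; the frame sizes and rates below are illustrative:

```python
def raw_rate_bytes_per_sec(width, height, bytes_per_pixel, fps):
    """Uncompressed video bandwidth: every pixel of every frame, every second."""
    return width * height * bytes_per_pixel * fps

# 1080p HD at 30 frames/s, 3 bytes/pixel (24-bit RGB):
hd = raw_rate_bytes_per_sec(1920, 1080, 3, 30)
print(hd / 1e6, "MB/s")      # roughly 186.6 MB/s for one camera

# A 4K digital cinema frame (4096x2160) at 24 frames/s:
cinema = raw_rate_bytes_per_sec(4096, 2160, 3, 24)
print(cinema / 1e6, "MB/s")  # roughly 637 MB/s
```

A single uncompressed HD camera therefore saturates gigabit Ethernet on its own, which is why analytics pipelines either compress first or move the processing to the data.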

To start to understand an HPC problem that involves FLOPs, uncompressed data, and tools that can be used for scale-up, let’s look at a simple edge-finder transform. The transform-example.zip includes Open Computer Vision (OpenCV) algorithms to transform a real-time web cam stream into a Sobel or Canny edge view in real time. See Figure 2.

Figure 2. HD video Canny edge transform

Image showing a Canny edge transform

Leveraging cloud HPC for video analytics allows for deployment of more intelligent smart phone applications. Perhaps phone processors will someday be able to handle real-time HD digital video facial recognition, but in the meantime, cloud HPC can help. Likewise, data that originates in data centers, like geographic information systems (GIS) data, needs intensive processing for analytics to segment scenes, create point clouds of 3D data from stereo vision, and recognize targets of interest (such as well-known landmarks).
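The transform example uses OpenCV's built-in edge finders, but the heart of a Sobel edge detector is just a small convolution. A plain-Python sketch on a toy grayscale image (OpenCV's `cv2.Sobel` does the same work, vectorized and at scale):

```python
# Sobel horizontal-gradient kernel: responds strongly to vertical edges.
GX = [[-1, 0, 1],
      [-2, 0, 2],
      [-1, 0, 1]]

def sobel_gx(image):
    """Convolve a 2D grayscale image (list of lists) with GX.
    Border pixels are left at zero for simplicity."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            acc = 0
            for ky in range(3):
                for kx in range(3):
                    acc += GX[ky][kx] * image[y + ky - 1][x + kx - 1]
            out[y][x] = abs(acc)
    return out

# A hard vertical edge: dark left columns, bright right columns.
img = [[0, 0, 255, 255, 255] for _ in range(5)]
grad = sobel_gx(img)
print(grad[2][1])  # strong response straddling the edge (4 * 255 = 1020)
print(grad[2][3])  # zero in the flat bright region
```

Multiply those four nested loops by millions of pixels and 30 or 60 frames per second and the appeal of parallel, per-frame mapping across cluster nodes becomes obvious.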

Augmented reality and video analytics

Video analytics involves the collection of structured (database) information from unstructured video (files) and video streams—for example, facial recognition. Much of the early focus has been on security and automation of surveillance, but applications are growing fast and now extend to more social uses: facial analysis, for instance, perhaps not to identify a person but to capture and record facial expression and mood (while shopping). This technology can be coupled with augmented reality, whereby the analytics are used to update a scene with helpful information (such as navigation data). Video data can be compressed and uplinked to warehouse-scale data centers for processing so that analytics can be collected and information returned that is not available on the user's smart phone alone. The image processing is compute intensive, involves big data storage, and is likely a scaling challenge (see Resources for a link to more information).

Sometimes, when digital video is collected in the field, the data must be brought to the computational resources; where possible, though, digital video should be moved only when necessary, avoiding needless encoding (compression) and decoding (decompression) for viewing. Specialized codec (coder/decoder) hardware can decode video without software, and coprocessors that render graphics (GPUs) exist, but to date, no CV coprocessors are widely available. Khronos announced an initiative to define hardware acceleration for OpenCV in late 2012, but that work has only just begun (see Resources). So, for now, CV remains more of an HPC application that has had attention primarily from digital cinema, though this is changing rapidly based on interest in CV on mobile devices and in the cloud.

Although we all imagine CV implemented on mobile robotics, in heads-up displays for intelligent transportation, and on visors (like the Google Glass device now available) for personal use, it's not clear that all of the processing must be done on the embedded devices, or that it should be even if it could. The reason is data: Without access to correlated data center data, CV information has less value. For example, how much value is there in knowing where you are without more mapping and GIS data to help you with where you want to go next? Real-time CV and video analytics are making progress, but they face many challenges, including huge storage requirements, high network bit rates for transport, and significant processing demands for interpretation. Whether the processing is done by cloud HPC clusters or embedded systems, it's clear that concurrency and parallel processing will play a huge role. Try running a simple Hough linear transform on the 12-megapixel cactus photo I took, and you'll see why HPC might be needed just to segment a scene at 60 frames/s.
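To see why the Hough transform is so compute-heavy, consider its inner loop: every edge pixel votes for every candidate line through it. A minimal accumulator sketch with coarse 1-degree steps (a production implementation such as OpenCV's `cv2.HoughLines` is heavily optimized, but the vote count scales the same way):

```python
import math

def hough_lines(points, theta_steps=180):
    """Accumulate votes in (theta, rho) space for a set of edge pixels.
    Returns the accumulator as a dict {(theta_index, rho): votes}."""
    acc = {}
    for x, y in points:
        for t in range(theta_steps):
            theta = t * math.pi / theta_steps
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            acc[(t, rho)] = acc.get((t, rho), 0) + 1
    return acc

# Edge pixels lying on the vertical line x = 2:
pts = [(2, y) for y in range(5)]
acc = hough_lines(pts)
print(acc[(0, 2)])  # 5 votes at theta=0, rho=2 -- the line x = 2
```

Five points cost 5 × 180 votes; a 12-megapixel frame with even 1% edge pixels costs tens of millions of trigonometric votes per frame, before any peak finding.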

The challenge of making algorithms parallel

HPC with both clusters and MPP requires coding methods that employ many threads of execution on each multicore node and that use message-passing interfaces (MPIs) and basic methods to map data and code to processing resources and collect results. For digital video, the mapping can be simple if done at the frame level. Mapping within a frame is more difficult, but still manageable apart from the extra steps of segmenting each frame and stitching the results back together.

The power of MapReduce

The MapReduce concept is generally associated with Google and the open source Hadoop project (from Apache Software Foundation), but any parallel computation must employ this concept to obtain speed-up, whether done at a node or cluster level with Java™ technology or at a thread level for a nonuniform memory access (NUMA) shared memory. For applications like digital video analytics, the mapping is data intensive, so it makes sense to move the function to the data (in the mapping stage), but either way, the data to be processed must be mapped and processed and the results combined. A clever mapping avoids data dependencies and the need for synchronization as much as possible. In the case of image processing, for CV, the mapping could be within a frame, at the frame level, or by groups of pictures (see Resources).
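The map/shuffle/reduce shape is easy to show in miniature. In the sketch below, frames stand in for data blocks and the "analysis" is a toy per-frame classification; in Hadoop, the map tasks would run where the data lives:

```python
from collections import defaultdict
from functools import reduce

def map_phase(frame):
    """Map: emit (key, value) pairs from one unit of data --
    here, label each frame bright or dark by its mean pixel value."""
    mean = sum(frame) / len(frame)
    return [("bright" if mean > 127 else "dark", 1)]

def shuffle(pairs):
    """Shuffle: group intermediate values by key."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: combine each key's values into a final result."""
    return {key: reduce(lambda a, b: a + b, vals)
            for key, vals in groups.items()}

frames = [[0, 10, 20], [200, 220, 210], [5, 5, 5], [255, 255, 0]]
pairs = [pair for frame in frames for pair in map_phase(frame)]
print(reduce_phase(shuffle(pairs)))  # {'dark': 2, 'bright': 2}
```

The map calls are independent, which is exactly what makes the pattern scale: each frame (or group of pictures) can be handled by a different thread, core, or node, with only the small reduce step needing coordination.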

Key tools for designing cluster scaling applications for cloud HPC on demand include the following:

  • Threading is the way in which a single application (or Linux process) occupies one address space on one cluster node and can be designed to use all processor cores on that node. Most often, this is done with Portable Operating System Interface for UNIX® (POSIX) threads (Pthreads) or with a library like OpenMP, which abstracts the low-level details of POSIX threading. I find POSIX threading fairly simple and typically write Pthread code directly, as can be seen in the hpc_cloud_grid.tar.gz example. This example maps threads over the number space for prime number searching.
  • MPI is a library that can be linked into a cluster parallel application to assist with mapping of processing to each node, synchronization, and reduction of results. Although you can use MPI to implement MapReduce, unlike Hadoop, it typically moves data (in messages) to program functions running on each node (rather than moving code to the data). In the final video analytics article in this series, I will provide a thread and MPI cluster-scalable version of the capture-transform code. Here, I provide the simple code for a single thread and node to serve as a reference. Run it and Linux dstat at the same time to monitor CPU, I/O, and storage use. It is a resource-intensive program that computes Sobel and Canny transforms on a 2560×1920-pixel image. It should run on any Linux system with OpenCV and a web cam.
  • Vector SIMD and SPMD processing can be accomplished on Intel and AMD nodes with a switch to enable during compilation or, with more work, by creation of transform kernels in CUDA or OpenCL for off-load to a GPU or GP-GPU coprocessor.
  • OpenCV is highly useful for video analytics, as it includes not only convenient image capture, handling, and display functions but also most of the best image processing transforms used in CV.
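The partitioning idea behind the threaded prime search in the first bullet can be sketched with Python's standard library in place of Pthreads (the grid example itself is C with Pthreads; the structure, not the speedup, is the point here, since Python's GIL serializes this CPU-bound work): split the number space into contiguous ranges, give one worker each, then reduce the partial counts.

```python
from concurrent.futures import ThreadPoolExecutor

def is_prime(n):
    """Trial division -- deliberately simple, CPU-bound work."""
    if n < 2:
        return False
    f = 2
    while f * f <= n:
        if n % f == 0:
            return False
        f += 1
    return True

def count_primes(lo, hi):
    """Count primes in [lo, hi) -- one worker's share of the number space."""
    return sum(1 for n in range(lo, hi) if is_prime(n))

def parallel_prime_count(limit, workers=4):
    """Partition [0, limit) into one contiguous range per worker,
    then reduce the partial counts -- the same map/reduce shape as MPI rank work."""
    step = limit // workers
    bounds = [(i * step, (i + 1) * step if i < workers - 1 else limit)
              for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(lambda b: count_primes(*b), bounds))

print(parallel_prime_count(100))  # 25 primes below 100
```

In the Pthread or MPI version, the same bounds computation decides what each thread or rank scans, and the final `sum` becomes a join or an `MPI_Reduce`.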

The future of on-demand cloud HPC

This article makes an argument for cloud HPC. The goal here is to acquaint you with the idea and some of its challenging yet compelling applications (like CV), as well as to introduce you to methods for programming applications that can scale on clusters and MPP machines. In future articles, I will take the CV example further and adapt it not only for threading but also for MPI so that we can examine how well it scales on cloud HPC (in my case, at ARSC on Pacman or JANUS). My research involves comparison of tightly coupled CV coprocessors (which I am building using an Altera Stratix IV FPGA and call a computer vision processing unit [CVPU]) with what I can achieve with CV on ARSC, for the purpose of understanding whether environmental sensing and GIS data are best processed like graphics, with a coprocessor, on a cluster, or perhaps with a combination of the two. The goals for this research are lofty. In the case of the CVPU, the CV/graphics Turing-like test I imagine is one in which the scene that the CVPU parses can then be sent to a GPU for rendering. Ideally, the parsed and rendered image would be indistinguishable from the true digital video stream. When rendered scenes and the ability to analyze them reach a common level of fidelity, augmented reality, perceptual computing, and video analytics will have amazing power to transform our lives.

Cloud scaling, Part 2: Tour high-performance cloud system design advances

Learn how to leverage co-processing, nonvolatile memory, interconnection, and storage

Breakthrough device technology requires the system designer to re-think operating and application software design in order to realize the potential benefits of closing the access gap or pushing processing into the I/O path with coprocessors. Explore and consider how the latest memory, compute, and interconnection devices and subsystems can affect your scalable, data-centric, high-performance cloud computing system design. Breakthroughs in device technology can be leveraged for the transition from compute-centric to more balanced, data-centric compute architectures.

The author examines storage-class memory and demonstrates how to fill the long-standing performance gap between RAM and spinning disk storage; details the use of I/O bus coprocessors (for processing closer to data); explains how to employ InfiniBand to build low-cost, high performance interconnection networks; and discusses scalable storage for unstructured data.

Computing systems engineering has historically been dominated by scaling processors and dynamic RAM (DRAM) interfaces to working memory, leaving a huge gap between data-driven and computational algorithms (see Resources). Interest in data-centric computing is growing rapidly, along with novel system design software and hardware devices to support data transformation with large data sets.

The data focus in software is no surprise given applications of interest today, such as video analytics, sensor networks, social networking, computer vision and augmented reality, intelligent transportation, machine-to-machine systems, and big data initiatives like IBM’s Smarter Planet and Smarter Cities.

The current wave of excitement is about collecting, processing, transforming, and mining the big data sets:

  • The data focus is leading toward new device-level breakthroughs in nonvolatile memory (storage-class memory, or SCM), which brings big data closer to processing.
  • At the same time, input/output coprocessors are bringing processing closer to the data.
  • Finally, low-latency, high-bandwidth off-the-shelf interconnections like InfiniBand are allowing researchers to quickly build 3D torus and fat-tree clusters that used to be limited to the most exotic and expensive custom high-performance computing (HPC) designs.

Yet, the systems software and even system design often remain influenced by out-of-date bottlenecks and thinking. For example, consider threading and multiprogramming. The whole idea came about because of slow disk drive access; what else can a program do when waiting on data but run another one. Sure, we have redundant array of independent disks (RAID) scaling and NAND flash solid-state disks (SSDs), but as noted by IBM Almaden Research, the time scale differences of the access time gap are massive in human terms.

The access time gap between a CPU, RAM, and storage can be measured in terms of typical performance for each device, but perhaps the gap is more readily understood when put into human terms (as IBM Almaden has done for illustrative purposes).

If a typical CPU operation is similar to what a human can do in seconds, then RAM access at 100 times more latency is much like taking a few minutes to access information. However, by the same comparison, disk access at 100,000 times more latency compared to RAM is on the order of months (100 days). (See Figure 1.)

Figure 1. The data access gap

Image showing the data access gap

Many experienced computer engineers have not really thought hard about the limit of 100 to 200 random I/O operations per second (IOPS); it is the mechanical boundary for a disk drive. (Sure, sequential access bandwidth is as high as hundreds of megabytes per second, but random access remains what it was more than 50 years ago, with the same 15K RPM seek-and-rotate access latency.)
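Almaden's analogy is just a change of time scale. With illustrative round-number latencies (assumptions for the sketch, not measurements), the rescaling works like this:

```python
def human_time_seconds(device_latency_s, cpu_op_latency_s=1e-9):
    """Rescale a latency so that one ~1 ns CPU operation takes one human second."""
    return device_latency_s / cpu_op_latency_s

ram = human_time_seconds(100e-9)   # ~100 ns RAM access
disk = human_time_seconds(10e-3)   # ~10 ms random disk access

print(ram, "human seconds")        # about 100 s: minutes
print(disk / 86400, "human days")  # about 116 days: months
```

The same arithmetic explains why threading exists at all: a core that "waits months" for a random disk read had better have other work queued, and why closing the gap with SCM changes the calculus.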

Finally, as Almaden notes, tape is therefore glacially slow. So, why do we bother? For the capacity, of course. But how can we get processing to the data or data to the processing more efficiently?

Look again at Figure 1. Improvements to NAND flash memory for use in mobile devices and more recently SSD has helped to close the gap; however, it is widely believed that NAND flash device technology will be pushed to its limits fairly quickly, as noted by numerous system researchers (see Resources). The transistor floating gate technology used is already at scaling limits and pushing it farther is leading to lower reliability, so although it has been a stop-gap for data-centric computing, it is likely not the solution.

Instead, several new nonvolatile RAM (NVRAM) device technologies are likely solutions, including:

  • Phase change RAM (PCRAM): This memory uses a heating element to turn a class of materials known as chalcogenides into either a crystallized or amorphous glass state, thereby storing two states that can be programmed and read, with state retained even when no power is applied. PCRAM appears to show the most promise in the near term for M-type synchronous nonvolatile memory (NVM).
  • Resistive RAM (RRAM): Most often described as a circuit element that is unlike a capacitor, inductor, or resistor, RRAM provides a unique relationship between current and voltage, unlike other well-known devices that store charge or magnetic energy or provide linear resistance to current flow. Materials with these properties, called memristors, have been tested for many decades, but engineers usually avoided them because of their nonlinear properties and the lack of applications for them. IEEE Fellow Leon Chua describes them in “Memristor: The Missing Circuit Element.” A memristor’s behavior can be summarized as follows: Current flow in one direction causes electrical resistance to increase, and flow in the opposite direction causes resistance to decrease, but the memristor retains the last resistance it had when flow is restarted. As such, it can store a nonvolatile state that can be programmed and read. For details and even some controversy on what is and is not a memristor, see Resources.
  • Spin transfer torque RAM (STT-RAM): A current passed through a magnetic layer can produce a spin-polarized current that, when directed into a magnetic layer, can change its orientation via angular momentum. This behavior can be used to excite oscillations and flip the orientation of nanometer-scale magnetic devices. The main drawback is the high current needed to flip the orientation.

Consult the many excellent entries in Resources for more in-depth information on each device technology.

From a systems perspective, as these devices evolve, where they can be used and how well each might fill the access gap depends on the device’s:

  • Cost
  • Scalability (device integration size must be smaller than a transistor to beat flash; less than 20 nanometers)
  • Latency to program and read
  • Device reliability
  • Perhaps most importantly, durability (how often it can be programmed and erased before it becomes unreliable).

Based on these device performance considerations, IBM has divided SCM into two main classes:

  • S-type: Asynchronous access via an I/O controller. Threading or multiprogramming is used to hide the I/O latency to the device.
  • M-type: Synchronous access via a memory controller. Think about this as wait-states for RAM access in which a CPU core stalls.

Further, NAND SSD would be considered fast storage, accessed via a block-oriented storage controller (much higher I/O rates but similar bandwidth to a spinning disk drive).

It may seem like the elimination of asynchronous I/O for data processing (except, of course, for archive access or cluster scaling) might be a cure-all for data-centric processing. In some sense it is, but systems designers and software developers will have to change habits. The need for I/O latency hiding will largely go away on each node in a system, but it won’t go away completely. Clusters built from InfiniBand deal with node-to-node data-transfer latency with Message Passing Interface or MapReduce schemes and enjoy similar performance to this envisioned SCM node except when booting or when node data exceeds node working RAM size.

So, for scaling purposes, cluster interconnection and I/O latency hiding among nodes in the cluster is still required.

Moving processing closer to data with coprocessors

Faster access to big data is ideal and looks promising, but some applications will always benefit from the alternative approach of moving processing closer to data interfaces. Many examples exist, such as graphics (graphics processing units, GPUs), network processors, protocol-offload engines like the TCP/IP Offload Engine, RAID on chip, encryption coprocessors, and more recently, the idea of computer vision coprocessors. My research involves computer vision and graphics coprocessors, both at scale in clusters and embedded. I am working on what I call a computer vision processing unit, comparing several coprocessors that became more widely pursued with the 2012 announcement of OpenVX by Khronos (see Resources).

In the embedded world, such a method might be described as an intelligent sensor or smart camera, methods in which preliminary processing of raw data is provided by the sensor interface and an embedded logic device or microprocessor, perhaps even a multicore system on a chip (SoC).

In the scalable world, this most often involves use of a coprocessor bus or channel adapter (like PCI Express, PCIe, and Ethernet or InfiniBand); it provides data processing between the data source (network side) and the node I/O controller (host side).

Whether processing should be done or is more efficient when done in the I/O path or on a CPU core has always been a topic of hot debate, but based on an existence proof (GPUs and network processors), clearly they can be useful, waxing and waning in popularity based on coprocessor technology compared to processor. So, let’s take a quick look at some of the methods:

  • Vector processing for single program, multiple data (SPMD): Provided today by GPUs, general-purpose GPUs (GP-GPUs), and accelerated processing units (APUs). The idea is that data can be transformed on its way to an output device like a display or sent to a GP-GPU/APU and transformed on a round trip from the host. "General purpose" implies more sophisticated features, such as double-precision arithmetic, compared to the single precision that suffices for graphics-specific processing.
  • Many core: Traditional many-core coprocessor cards (see Resources) are available from various vendors. The idea is to lower cost and power consumption by using simpler yet numerous cores on the I/O bus, with round-trip offloading of processing to the cards from a more capable but power-hungry and costly full-scale multicore host. Typically, the many-core coprocessor might have an order of magnitude more cores than the host and often includes gigabit or 10G Ethernet and other types of network interfaces.
  • I/O bus field-programmable gate arrays (FPGAs): FPGA cards, most often used to prototype a new coprocessor in the early stages of development, can perhaps be used as a solution for low-volume coprocessors as well.
  • Embedded SoCs: A multicore solution can be used in an I/O device to create an intelligent device like a stereo-ranging or time-of-flight camera.
  • Interface FPGA/configurable programmable logic devices: A digital logic state machine can provide buffering and continuous transformation of I/O data, such as digital video encoding.

Let’s look at an example based on offload and I/O path. Data transformation has obvious value for applications like the decoding of MPEG4 digital video, consisting of a GPU coprocessor in the path between the player and a display as shown in Figure 2 for the Linux® MPlayer video decoder and presentation acceleration unit (VDPAU) software interface to NVIDIA MPEG decoding on the GPU.

Figure 2. Simple video decode offload example

Image showing an example of a simple video decode offload

Likewise, any data processing or transformation done inbound to or outbound from a CPU host may have value, especially if the coprocessor can provide the processing at lower cost, with greater efficiency, or with lower power consumption, based on purpose-built processors compared to general-purpose CPUs.
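Whether a round trip to a coprocessor pays off reduces to a simple cost model: the transfer overhead must be smaller than the compute time saved. A sketch with hypothetical timings:

```python
def offload_wins(host_time_s, accel_speedup, transfer_time_s):
    """True if a round-trip offload beats computing on the host:
    transfer + (host_time / speedup) < host_time."""
    return transfer_time_s + host_time_s / accel_speedup < host_time_s

# A 100 ms kernel, 10x faster on the coprocessor, 5 ms PCIe round trip:
print(offload_wins(0.100, 10, 0.005))  # True: 15 ms total beats 100 ms

# A 1 ms kernel with the same 5 ms transfer: not worth the trip.
print(offload_wins(0.001, 10, 0.005))  # False: 5.1 ms loses to 1 ms
```

This is why offloading waxes and wanes with technology: faster buses shrink the transfer term, while faster host CPUs shrink the time available to save.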

To start to understand a GP-GPU compared to a many-core coprocessor approach, try downloading the two examples of a point spread function that sharpens the edges in an image: the threaded transform example compared with the GPU transform example. Both provide the same 320×240-pixel transformation, but in one case, the Compute Unified Device Architecture (CUDA) C code provided requires a GPU or GP-GPU coprocessor and, in the other case, either a multicore host or a many-core coprocessor (for example, Intel Many Integrated Core [MIC]).

So which is better?

Neither approach is clearly better, mostly because the NVRAM solutions have not yet been made widely available (except as expensive battery-backed DRAM or as S-type SCM from the IBM Texas Memory Systems Division) and because moving processing into the I/O data path has traditionally involved less friendly programming. Both are changing, though: Coprocessors are adopting higher-level languages like the Open Computing Language (OpenCL), in which code written for multicore hosts runs equally well on Intel MIC or Altera Stratix IV/V architectures.

Likewise, all of the major computer systems companies are working feverishly to release SCM products, with PCRAM the most likely to be available first. My advice is to assume that both will be with us for some time and operating systems and applications must be able to deal with both. The memristor, or RRAM, includes a vision that resembles Isaac Asimov’s fictional positronic brain in which memory and processing are fully integrated as they are in a human neural system but with metallic materials. The concept of fully integrated NVM and processing is generally referred to as processing in memory (PIM) or neuromorphic processing (see Resources). Scalable NVM integrated processing holds extreme promise for biologically inspired intelligent systems similar to the human visual cortex, for example. Pushing toward the goal of integrated NVM, with PIM from both sides, is probably a good approach, so I plan to keep up with and keep working on systems that employ both methods—coprocessors and NVM. Nature has clearly favored direct, low-level, full integration of PIM at scale for intelligent systems.

Scaling nodes with InfiniBand interconnection

System designers always have to consider the trade-off between scaling up each node in a system and scaling out a solution that uses networking or more richly interconnected clustering to scale processing, I/O, and data storage. At some point, scaling the memory, processing, and storage a single node can integrate hits a practical limit in terms of cost, power efficiency, and size. It is also often more convenient from a reliability, availability, and servicing perspective to spread capability over multiple nodes so that if one needs repair or upgrade, others can continue to provide service with load sharing.

Figure 3 shows a typical InfiniBand 3D torus interconnection.

Figure 3. Example of InfiniBand 4x4x4 3D torus with 1152 nodes (SDSC Gordon)

Image showing an example of InfiniBand 4x4x4 3D torus with 1152 nodes (SDSC Gordon)

In Figure 3, the 4x4x4 torus shown is for the San Diego Supercomputing Center (SDSC) Gordon supercomputer, as documented by Mellanox, which uses a 36-port InfiniBand switch to connect nodes to each other and to storage I/O.

InfiniBand, Converged Enhanced Ethernet (CEE) with iSCSI, or Fibre Channel are the most often used scalable storage interfaces for access to big data. This storage area network (SAN) scaling for RAID arrays is used to host distributed, scalable file systems like Ceph, Lustre, the Hadoop Distributed File System (HDFS), or the IBM General Parallel File System (GPFS). Use of CEE and InfiniBand for storage access through the OpenFabrics Alliance SCSI Remote Direct Memory Access (RDMA) Protocol and iSCSI Extensions for RDMA is a natural fit for SAN storage integrated with an InfiniBand cluster. Storage is viewed more as a distributed archive of unstructured data that is searched or mined and loaded into node NVRAM for cluster processing. Higher-level data-centric cluster processing methods like Hadoop MapReduce can also be used to bring code (software) to the data at each node. These big-data-related topics are covered in more depth in the last part of this four-part series.

The future of data-centric scaling

This article makes an argument for systems design and architecture that move processors closer to data-generating and data-consuming devices, as well as for simplification of the memory hierarchy to include fewer levels, leveraging lower-latency, scalable NVM devices. This defines a data-centric node design that can be further scaled with low-latency, off-the-shelf interconnection networks like InfiniBand. The main challenge with data-centric computing is not instructions per second or floating-point operations per second alone but rather IOPS and the overall power efficiency of data processing.

In Part 1 of this series, I uncovered methods and tools to build a compute node and small cluster application that can scale with on-demand HPC by leveraging the cloud. In this article I detailed such high-performance system design advances as co-processing, nonvolatile memory, interconnection, and storage.

In Part 3 in this series I provide more in-depth coverage of a specific data-centric computing application — video analytics. Video analytics includes applications such as facial recognition for security and computer forensics, use of cameras for intelligent transportation monitoring, retail and marketing that involves integration of video (for example, visualizing yourself in a suit you’re considering from a web-based catalog), as well as a wide range of computer vision and augmented reality applications that are being invented daily. Although many of these applications involve embedded computer vision, most also require digital video analysis, transformation, and generation in cloud-based scalable servers. Algorithms like Sobel transformation can be run on typical servers, but algorithms like the generalized Hough transform, facial recognition, image registration, and stereo (point cloud) mapping, for example, require the NVM and coprocessor approaches this article discussed for scaling.

In the last part of the series, I deal with big data issues.

Cloud scaling, Part 3: Explore video analytics in the cloud

Using methods, tools, and system design for video and image analysis, monitoring, and security

Explore and consider methods, tools, and system design for video and image analysis with cloud scaling. As described in earlier articles in this series, video analytics requires a more balanced data-centric compute architecture compared to traditional compute-centric, scalable, high-performance computing. The author examines the use of OpenCV and similar tools for digital video analysis and methods to scale this analysis using cluster and distributed system design.

The use of coprocessors designed for video analytics and the new OpenVX hardware acceleration discussed in previous articles can be applied to the computer vision (CV) examples presented in this article. This new data-centric technology for CV and video analytics requires the system designer to re-think application software and system design to meet demanding requirements, such as real-time monitoring and security for large, public facilities and infrastructure as well as a more entertaining, interactive, and safer world.

Public safety and security

The integration of video analytics in public places is perhaps the best way to ensure public safety, providing digital forensic capabilities to law enforcement and the potential to increase detection of threats and prevention of public safety incidents. At the same time, this need has to be balanced with rights to privacy, which can become a contentious issue if these systems are abused or not well understood. For example, the extension of facial detection, as shown in Figure 1, to facial recognition has obvious identification capability and can be used to track an individual as he or she moves from one public place to another. To many people, facial analytics might be seen as an invasion of privacy, and use of CV and video analytics should adhere to surveillance and privacy-rights laws and policies; any product or service developer might want to start by considering the best practices outlined by the Federal Trade Commission (FTC; see Resources).

Digital video standards such as those from the Motion Picture Experts Group (MPEG) for encoding, compressing, transporting, uncompressing, and displaying video have led to a revolution in computing, ranging from social networking media and amateur digital cinema to improved training and education. Tools for decoding and consuming digital video are in everyday use by everyone, but video analytics requires tools to encode and analyze uncompressed video frames, such as Open Computer Vision (OpenCV). One of the readily available and quite capable tools for encoding and decoding digital video is FFmpeg; for still images, the GNU Image Manipulation Program (GIMP) is quite useful (see Resources for links). With these three basic tools, an open source developer is fully equipped to start exploring computer vision (CV) and video analytics. Before exploring these tools and development methods, however, let's first define the terms better and consider applications.

The first article in this series, Cloud scaling, Part 1: Build your own and scale with HPC on demand, provided a simple example using OpenCV that implements a Canny edge transformation on continuous real-time video from a Linux® web cam. This is an example of a CV application that you could use as a first step in segmenting an image. In general, CV applications involve acquisition, digital image formats for pixels (picture elements that represent points of illumination), images and sequences of them (movies), processing and transformation, segmentation, recognition, and ultimately scene descriptions. The best way to understand what CV encompasses is to look at examples. Figure 1 shows face and facial feature detection analysis using OpenCV. Note that in this simple example, using the Haar Cascade method (a machine learning algorithm) for detection analysis, the algorithm best detects faces and eyes that are not occluded (for example, my youngest son’s face is turned to the side) or shadowed and when the subject is not squinting. This is perhaps one of the most important observations that can be made regarding CV: It’s not a trivial problem. Researchers in this field often note that although much progress has been made since its advent more than 50 years ago, most applications still can’t match the scene segmentation and recognition performance of a 2-year-old child, especially when the ability to generalize and perform recognition in a wide range of conditions (lighting, size variation, orientation and context) is considered.

Figure 1. Using OpenCV for facial recognition

Image showing facial recognition analysis

To help you understand the analytical methods used in CV, I have created a small test set of images from the Anchorage, Alaska, area that is available for download. The images have been processed using GIMP and OpenCV. I developed C/C++ code to use the OpenCV application programming interface with a Linux web cam, precaptured images, or MPEG movies. The use of CV to understand video content (sequences of images), either in real time or from precaptured databases of image sequences, is typically referred to as video analytics.
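Edge transforms like the Canny example from Part 1 start from an image gradient. The following pure-Python sketch (no OpenCV required; the function name is mine, for illustration) applies the two Sobel kernels and takes the gradient magnitude, the step that Canny then refines with smoothing, non-maximum suppression, and hysteresis thresholding.

```python
# Minimal sketch of the gradient step that edge detectors like Canny build on:
# convolve with the Sobel kernels, then take the gradient magnitude.
GX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]  # horizontal gradient kernel
GY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]  # vertical gradient kernel

def sobel_magnitude(img):
    """Return gradient magnitude for interior pixels of a 2-D grayscale list."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(GX[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(GY[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# A dark-to-bright vertical step: the edge shows up as a strong response.
img = [[0, 0, 0, 255, 255, 255] for _ in range(5)]
mag = sobel_magnitude(img)
print(max(mag[2]))  # → 1020.0, on the step between the dark and bright columns
```

OpenCV performs the same computation in optimized native code; this version just makes the arithmetic visible.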

Defining video analytics

Video analytics is broadly defined as analysis of digital video content from cameras (typically visible light, but it could be from other parts of the spectrum, such as infrared) or stored sequences of images. Video analytics involves several disciplines but at least includes:

  • Image acquisition and encoding. As a sequence of images or groups of compressed images. This stage of video analytics can be complex, including photometer (camera) technology, analog decoding, digital formats for arrays of light samples (pixels) in frames and sequences, and methods of compressing and decompressing this data.
  • CV. The inverse of graphical rendering, where acquired scenes are converted into descriptions compared to rendering a scene from a description. Most often, CV assumes that this process of using a computer to “see” should operate wherever humans do, which often distinguishes it from machine vision. The goal of seeing like a human does most often means that CV solutions employ machine learning.
  • Machine vision. Again, the inverse of rendering but most often in a well-controlled environment for the purpose of process control—for example, inspecting printed circuit boards or fabricated parts to make sure they are geometrically correct within tolerances.
  • Image processing. A broad application of digital signal processing methods to samples from photometers and radiometers (detectors that measure electromagnetic radiation) to understand the properties of an observation target.
  • Machine learning. Algorithms developed based on the refinement of the algorithm through training data, whereby the algorithm improves performance and generalizes when tested with new data.
  • Real-time and interactive systems. Systems that must respond by a deadline relative to a request for service or at least provide a quality of service that meets service-level agreements (SLAs) with customers or users of the services.
  • Storage, networking, database, and computing. All required to process digital data used in video analytics, but a subtle, yet important distinction is that this is an inherently data-centric compute problem, as was discussed in Part 2 of this series.

Video analytics, therefore, is broader in scope than CV and is a system design problem that might include mobile elements like a smart phone (for example, Google Goggles) and cloud-based services for the CV aspects of the overall system. For example, IBM has developed a video analytics system known as the video correlation and analysis suite (VCAS), for which the IBM Travel and Transportation Solution Brief, Smarter Safety and Security Solution for Rail [PDF], is available; it is a good example of a system design concept. Detailed focus on each system design discipline involved in a video analytics solution is beyond the scope of this article, but many pointers to more information for system designers are available in Resources. The rest of this article focuses on CV processing examples and applications.

Basic structure of video analytics applications

You can break the architecture of cloud-based video analytics systems down into two major segments: embedded intelligent sensors (such as smart phones, tablets with a camera, or customized smart cameras) and cloud-based processing for analytics that can't be computed directly on the embedded device. Why break the architecture into two segments rather than solve everything in the smart embedded device? Embedding CV in transportation, smart phones, and products is not always practical. Even when embedding a smart camera makes sense, the compressed video or scene description is often back-hauled to a cloud-based video analytics system simply to offload the resource-limited embedded device. Perhaps more important than resource limitations, though, is that video transported to the cloud for analysis allows for correlation with larger data sets and annotation with up-to-date global information for augmented reality (AR) returned to the devices.

The smart camera devices for applications like gesture and facial expression recognition must be embedded. However, more intelligent inference to identify people and objects and fully parse scenes is likely to require scalable data-centric systems that can be scaled more efficiently in a data center. Furthermore, data-processing acceleration at scale, ranging from the Khronos OpenVX CV acceleration standard to the latest MPEG standards and feature-recognition databases, is key to moving forward with improved video analytics, and two-segment cloud-plus-smart-camera solutions allow for rapid upgrades.

With sufficient data-centric computing capability leveraging the cloud and smart cameras, the dream of inverse rendering can perhaps be realized: in an ultimate "Turing-like" test for CV, a scene that is parsed and then re-rendered for display would be indistinguishable from direct video to a remote viewer. This is essentially done now in digital cinema with photorealistic rendering, but that rendering is nowhere close to real time or interactive.

Video analytics apps: Individual scenarios

Killer applications for CV and video analytics are being thought of every day, some perhaps years from realization because of computing requirements or implementation cost. Nevertheless, here is a list of interesting applications:

  • AR views of scenes for improved understanding. If you have ever looked at, for example, a landing plane and thought, I wish I could see the cockpit view with instrumentation, this is perhaps possible. I worked in Space Shuttle mission control long ago, where a large development team meticulously re-created a view of the avionics for ground controllers that shadowed what astronauts could see—all graphical, but imagine fusion of both video and graphics to annotate and re-create scenes with metadata. A much simplified example is presented in concept to show how an aircraft observed via a tablet computer camera could be annotated with attitude and altitude estimation data (see the example in this article).
  • Skeletal transformations to track the movement and estimate the intent and trajectory of an animal that might jump onto a highway. See the example in this article.
  • Fully autonomous or mostly autonomous vehicles with human supervisory control only. Think of the steps between today’s cruise control and tomorrow’s full autonomous car. Cars that can parallel park themselves today are a great example of this stepwise development.
  • Beyond face detection to reliable recognition and, perhaps more importantly, for expression feedback. Is the driver of a semiautonomous vehicle aggravated, worried, surprised?
  • Virtual shopping (AR to try products). Shoppers can see themselves in that new suit.
  • Signage that interacts with viewers. This is based on expressions, likes and dislikes, and data that the individual has made public.
  • Two-way television and interactive digital cinema. Entertainment for which viewers can influence the experience, almost as if they were actors in the content.
  • Interactive telemedicine. This is available any time with experts from anywhere in the world.

I make no attempt in this article to provide an exhaustive list of applications, but I explore more by looking closely at both AR (annotated views of the world through a camera and display—think heads-up displays such as fighter pilots have) and skeletal transformations for interactive tracking. To learn more beyond these two case studies and for more in-depth application-specific uses of CV and video analytics in medicine, transportation safety, security and surveillance, mapping and remote sensing, and an ever-increasing list of system automation that includes video content analysis, consult the many entries in Resources. The tools available can help anyone with computer engineering skills get started. You can also download a larger set of test images as well as all OpenCV code I developed for this article.

Example: Augmented reality

Real-time video analytics can change the face of reality by augmenting the view a consumer has with a smart phone held up to products or our view of the world (for example, while driving a vehicle) and can allow for a much more interactive experience for users for everything from movies to television, shopping, and travel to how we work. In AR, the ideal solution provides seamless transition from scenes captured with digital video to scenes generated by rendering for a user in real time, mixing both digital video and graphics in an AR view for the user. Poorly designed AR systems distract a user from normal visual cues, but a well-designed AR system can increase overall situation awareness, fusing metrics with visual cues (think fighter pilot heads-up displays).

The use of CV and video analytics in intelligent transportation systems has significant value for safety improvement, and perhaps eventually CV may be the key technology for self-driving vehicles. This appears to be the case based on the U.S. Defense Advanced Research Projects Agency challenge and the Google car, although use of the full spectrum with forward-looking infrared and instrumentation in addition to CV has made autonomous vehicles possible. Another potentially significant application is air traffic safety, especially for airports to detect and prevent runway incursion scenarios. The imagined AR view of an aircraft on final approach at Ted Stevens airport in Anchorage shows a Hough linear transform that might be used to segment and estimate aircraft attitude and altitude visually, as shown in Figure 2. Runway incursion safety is of high interest to the U.S. Federal Aviation Administration (FAA), and statistics for these events can be found in Resources.

Figure 2. AR display example

Image showing an example of video augmentation

For intelligent transportation, drivers will most likely want to participate even as systems become more intelligent, so a balance of automation and human participation and intervention should be kept in mind (for autonomous or semiautonomous vehicles).
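To show what the Hough linear transform behind Figure 2 is doing, here is a toy pure-Python version of the voting scheme (OpenCV's cv2.HoughLines is the production equivalent; this sketch and its function name are mine). Each edge pixel votes for every line through it in normal form, and the most-voted (theta, rho) bin gives the dominant line, from which an attitude angle could be estimated.

```python
import math

def hough_dominant_line(points, width, height, n_theta=180):
    """Vote in (theta, rho) space; return the best-supported line's normal
    angle theta in degrees, using rho = x*cos(theta) + y*sin(theta)."""
    diag = int(math.hypot(width, height)) + 1  # offset so rho bins are >= 0
    votes = {}
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = int(round(x * math.cos(theta) + y * math.sin(theta))) + diag
            key = (t, rho)
            votes[key] = votes.get(key, 0) + 1
    t_best = max(votes, key=votes.get)[0]  # bin with the most votes wins
    return 180.0 * t_best / n_theta

# Edge pixels along the horizontal line y = 3; its normal points straight
# down the image, so the dominant vote lands at theta = 90 degrees.
pts = [(x, 3) for x in range(0, 100, 10)]
print(hough_dominant_line(pts, 100, 10))  # → 90.0
```

A real pipeline would first run an edge detector to produce the point set, then read line orientations out of the accumulator peaks.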

Skeletal transformation examples: Tracking movement for interactive systems

Skeletal transformations are useful for applications like gesture recognition or gait analysis of humans or animals—any application where the motion of a body's skeleton (rigid members) must be tracked can benefit from a skeletal transformation. Most often, this transformation is applied to bodies or limbs in motion, which further enables the use of background elimination for foreground tracking. However, it can still be applied to a single snapshot, as shown in Figure 3, where a picture of a moose is first converted to a gray map, then a threshold binary image, and finally the medial distance is found for each contiguous region and thinned to a single pixel, leaving just the skeletal structure of each object. Notice that the ears on the moose are back—an indication of the animal's intent (a higher-resolution skeletal transformation might be able to detect this as well as the gait of the animal).

Figure 3. Skeletal transformation of a moose

Image showing an example of a skeletal transformation

Skeletal transformations can certainly be useful in tracking animals that might cross highways or charge a hiker, but the transformation has also become of high interest for gesture recognition in entertainment, such as in the Microsoft® Kinect® software developer kit (SDK). Gesture recognition can be used for entertainment but also has many practical purposes, such as automatic sign language recognition—not yet available as a product but a concept in research. Certainly, skeletal transformation CV can analyze the human gait for diagnostic or therapeutic purposes in medicine or capture human movement for animation in digital cinema.
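The Figure 3 pipeline (gray map, binary threshold, thinning to a one-pixel medial axis) can be illustrated with the classic Zhang-Suen thinning algorithm. This pure-Python sketch assumes the grayscale and threshold steps have already produced a 0/1 image (in OpenCV, cv2.cvtColor and cv2.threshold would do that) and thins a small synthetic bar:

```python
def neighbours(y, x, img):
    """P2..P9: the 8 neighbours of (y, x), clockwise from the pixel above."""
    return [img[y-1][x], img[y-1][x+1], img[y][x+1], img[y+1][x+1],
            img[y+1][x], img[y+1][x-1], img[y][x-1], img[y-1][x-1]]

def zhang_suen(img):
    """Thin a binary image (lists of 0/1 with a zero border) in place
    until only a roughly one-pixel-wide skeleton remains."""
    changed = True
    while changed:
        changed = False
        for step in (0, 1):  # the two alternating sub-iterations
            to_clear = []
            for y in range(1, len(img) - 1):
                for x in range(1, len(img[0]) - 1):
                    if img[y][x] == 0:
                        continue
                    p = neighbours(y, x, img)
                    b = sum(p)  # count of foreground neighbours
                    # a = number of 0->1 transitions around the pixel
                    a = sum((p[i] == 0 and p[(i + 1) % 8] == 1)
                            for i in range(8))
                    if step == 0:
                        c1, c2 = p[0] * p[2] * p[4], p[2] * p[4] * p[6]
                    else:
                        c1, c2 = p[0] * p[2] * p[6], p[0] * p[4] * p[6]
                    if 2 <= b <= 6 and a == 1 and c1 == 0 and c2 == 0:
                        to_clear.append((y, x))
            for y, x in to_clear:  # delete simultaneously per sub-iteration
                img[y][x] = 0
            changed = changed or bool(to_clear)
    return img

# A 3-pixel-thick horizontal bar thins down to a single-pixel line.
img = [[0] * 10 for _ in range(7)]
for y in (2, 3, 4):
    for x in range(1, 9):
        img[y][x] = 1
zhang_suen(img)
print(sum(map(sum, img)), "skeleton pixels remain")
```

OpenCV-based code would reach the same result with far less work; the point here is to show the boundary-peeling rule that produces the medial axis seen in Figure 3.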

Skeletal transformations are widely used in gesture-recognition systems for entertainment. Creative and Intel have teamed up to create an SDK for Windows® called the Creative* Interactive Gesture Camera Developer Kit (see Resources for a link) that uses a time-of-flight light detection and ranging sensor, camera, and stereo microphone. This SDK is similar to the Kinect SDK but intended to give developers early access to build gesture-recognition applications for the device. The SDK is affordable and could become the basis for some breakthrough consumer devices now that it is in the hands of a broad development community. To get started, you can purchase the device from Intel and then download the Intel® Perceptual Computing SDK. The demo images are included as an example, along with numerous additional SDK examples to help developers understand what the device can do. You can use the finger-tracking example shown in Figure 4 right away just by installing the SDK for Microsoft Visual Studio® and running the Gesture Viewer sample.

Figure 4. Skeletal transformation using the Intel Perceptual Computing SDK and Creative Interactive Gesture Camera Developer Kit

Image showing a skeletal and blob transformation of a hand


The future of video analytics

This article makes an argument for the use of video analytics primarily to improve public safety; for entertainment, social networking, telemedicine, and medical augmented diagnostics; and to envision products and services as a consumer. Machine vision has quietly helped automate industry and process control for years, but CV and video analytics in the cloud now show promise for providing vision-based automation in the everyday world, where the environment is not well controlled. This will be a challenge both in terms of algorithms for image processing and machine learning and in terms of the data-centric computer architectures discussed in this series. The challenges for high-performance video analytics (in terms of receiver operating characteristics and throughput) should not be underestimated, but with careful development, this rapidly growing technology promises a wide range of new products and even human vision system prosthetics for those with sight impairments or loss of vision. Given the value of vision to humans, this is no doubt also fundamental to intelligent computing systems.

Downloads

Description Name Size
OpenCV Video Analytics Examples va-opencv-examples.zip 600KB
Simple images for use with OpenCV example-images.zip 6474KB

Downloads

Description Name Size
GPU accelerated image transform sharpenCUDA.zip 644KB
Grid threaded comparison hpc_dm_cloud_grid.zip 1.08MB
Simple image for transform benchmark Cactus-320×240-pixel.ppm.zip 206KB

Downloads

Description Name Size
Continuous HD digital camera transform example transform-example.zip 123KB
Grid threaded prime generator benchmark hpc_cloud_grid.tar.gz 3KB
High-resolution image for transform benchmark Cactus-12mpixel.zip 12288KB


Posted in Apps Development, CLOUD, Computer Languages, Computer Software, Computer Vision, GPU (CUDA), GPU Accelareted, Image Processing, OpenCV, OpenCV, PARALLEL, Project Related, Video | Leave a Comment »

SHOULD YOU LEARN TO CODE?

Posted by Hemprasad Y. Badgujar on December 5, 2014


Literacy in any computer language, from simple HTML to complex C++, requires dedication not only to the technology, but to changes in the technology. There’s a reason HTML5 ends in a number. When enough browsers support HTML6, developers will have new things to learn.

Possible reasons to put yourself through the learning process include:

  • To gain confidence: I’ve had rare clients who think that if they master a language then computers will intimidate them less. While that may be the case, it rarely sticks without dedicated practice.
  • Necessity: technical problems will arise whether or not one's job description fits the bill. When problems must get solved, there's a time to pass the buck and a time to buckle down and solve them.
  • The thrill of it: some people just like to learn new skills.
  • To understand what’s possible: a developer says “it can’t be done.” Do they mean it’s impossible? Or that it’s more trouble than it’s worth? A designer says “I want it to do this.” Did he or she just give someone a week’s worth of headaches? Can technology be used in a more appropriate way?

STAY CURIOUS

I’ve seen it. You know, that look. Not quite panic, not quite despair. It’s the look someone gets when they realize the appeal of letting someone else do the heavy lifting. The look that says, “That’s a windshield; I don’t have to be the bug.” I’ve seen it in co-workers’ eyes, students’ postures, and staring back from the mirror.

In my experience, it isn’t fear of failure that intimidates people. It’s fear of getting lost. Overwhelming hopelessness encourages feelings of inadequacy. That cycle will beat anyone down.

Neither courage nor persistence is an antidote to feeling overwhelmed. Stopping before you feel overwhelmed is the solution.

Pressure

Pressure image via Shutterstock.

My favorite technique is to tackle a project with three traits.

1. Find a topic that irks you

Deadlines and paychecks are fine. But nothing drives people like an itch they can’t scratch. In the long run, learning code must not be an end in itself. It must become a salve for some irritation.

Way back when, I got frustrated that I couldn’t find a good book. There’s no shortage of book discovery websites, but intuition told me there was a better way. So I started my own website. I never finished the project, but I learned many ways to organize novels. On the way, almost incidentally, I learned more code.

2. You should be rewarded for incremental effort

Having found that proverbial itch, people learning to code should also find relief.

No tutorials, tools, or outside praise will give people the mindset to conquer code better than saying "I wrote this and… look what I did!" and walking away with a sense of being greater than the obstacle you overcame.

It sounds silly until you try it. Seeing code perform gives people a micro-rush of self confidence, a validation that they can master the machine.

Code

Code image via Shutterstock.

Last week someone looked at my screen and shook his head. It was full of code. Three open windows of colored tags and function calls. He said: “I could never do that.” Years ago I would have agreed. I didn’t want to look stupid or break something that I could not fix. Who knows what damage one wrong keystroke would cause?

3. Your project should conclude while your brain still has an appetite

This one’s critical. When learning something that intimidates you, you must approach but do not exceed your limit.

“Exercising your brain” isn’t an appropriate analogy. When working out, trainers encourage people to push just past their limits. But learning is a hunger. Your brain has an appetite for knowledge. Filling your brain to the brim (or worse, exceeding its limit) will hamper your ability to learn, erode your self-confidence, and kill a kitten. Please, think of the kittens.

Better yet, think of mental exercise as one workout that happens to last a while. Say, one week. Sure, you take breaks between reps (called “getting sleep”). But rushing ahead works against your goal. The kittens will never forgive you.

  • Part one: warm up by mixing something you already learned with something you don't know. Leave yourself at least one question. 1 day.
  • Part two: practice. Experiment. Practice repeating experiments. And always end on a cliffhanger. The goal is to hit your stride and break on a high note. By “break” I mean sleep, eat, or talk to fellow humans. 3 days.
  • Part three: cool down by improving what you've already covered. As always, get your brain to a point of enjoying the exercise, then let go for a while. 1 day.

Sprinting does not train you for a marathon. A hundred pushups will improve your shoulders better than trying to lift a truck once. And cramming tutorial books like shots of tequila will impair your ability to think.

PRACTICE DAILY

In my newspaper days, I refused to use stock art. Deadlines came five days a week, but I insisted on hand-crafting my own vector art. Six months later I was the go-to guy for any custom graphic work. That one skill earned me a senior position at a startup company. Even today I love fiddling with bezier paths.

Learning any skill, including how to debug code, works much the same.

The only way to learn code — and make it stick — is to practice every day. Like learning any new skill, a consistent schedule with manageable goals gradually improves performance to the point of expertise.

“I CAN” IS NOT “I SHOULD”

Part of learning to read and write code, be it HTML, jQuery, or C++, is learning one's limits. Another part is explaining one's limits. The curse of understanding a language … rather, the curse of people thinking you "know code" is that they'll expect you to do it.

Technology

Code image via Shutterstock.

HTML is not CSS. CSS is not PHP. PHP is not WordPress. WordPress is not server administration. Server administration is not fixing people’s clogged Outlook inboxes. Yet I’ve been asked to do all of that. Me, armed with my expired Photoshop certificate and the phrase “I don’t know, but maybe I can help….”

Those without code experience often don't differentiate between one $(fog-of).squiggles+and+acronyms; and another. Not that we can blame them. Remember what it was like before you threw yourself into learning by

  • finding a topic that interests you;
  • getting incremental rewards;
  • learning without getting overwhelmed.

Knowledge of code is empowering. Reputation as a coder is enslaving. At least both pay the bills.

Posted in Apps Development, Computer Languages, Computer Software | Tagged: , , , | Leave a Comment »

How to add Twitter and Facebook buttons to your Moodle site

Posted by Hemprasad Y. Badgujar on November 18, 2014


Adding Twitter and Facebook like buttons to a website is always a good idea if you’d like to spread the word about your site through social networking. In this tutorial I will walk you through how to add the commonly seen buttons to your Moodle site.

First things first. Before we do anything in Moodle, let's get the buttons from the official Twitter and Facebook websites.

Get Twitter “Share a link” button

1) Go to Twitter’s official buttons page: http://twitter.com/about/resources/buttons

2) Select “Share a link” and enter the desired options as shown in the figure below.

screenshot

3) Once you are happy about the button’s preview, you can keep the page open for later use.

Get Facebook “like” button

1) Go to the Facebook developers plugin page: http://developers.facebook.com/docs/reference/plugins/like/

2) Configure the like button as shown in the figure below. For the like button to work in Moodle you can only use the IFRAME version rather than the HTML5/XFBML version.

screenshot

3) Once you are happy with the button’s preview, you can click the “Get Code” button. In the popup window you need to choose the IFRAME option as shown in the figure below. Keep the page open for later use.

screenshot

Ok, now the buttons are ready for use we can dive into Moodle to add the buttons.

Step 1

In a new window, log in to Moodle as an administrator. Select the "HTML" option from the "Add a block" drop-down menu.

screenshot

Step 2

Now you should see that a new HTML block has been added. Click the configuration icon (the second icon), as shown in the figure below. (Your Moodle site's configuration icon may look different.)

screenshot

Step 3

On the “Configuring a (new HTML block) block” page, turn on the HTML Source Editor for the “Content” text field by clicking the HTML icon in the editor menu as shown below.

screenshot

Step 4

Copy and paste the relevant buttons’ code from the previous Twitter and Facebook pages into the HTML Source Editor and click the “Update” button.

screenshot

Step 5

Enter a block title and configure other options before saving.

Step 6

You need to turn off editing to see the changes.

screenshot

Conclusion:

Using a Moodle HTML block to add Twitter and Facebook buttons is only one way of doing it. If your Moodle theme offers you extra block/widget areas to enter HTML code you can take advantage of those as well.

For example, in our premium Moodle Theme “Ace”, you can go to the theme settings page and add the Twitter and Facebook buttons to the page’s header as shown below.

screenshot

Posted in Computer Software, Computer Softwares, Installation | Leave a Comment »

Architectures of Mobile Cloud Computing

Posted by Hemprasad Y. Badgujar on August 30, 2014


“Mobile Cloud Computing at its simplest, refers to an infrastructure where both the data storage and the data processing happen outside of the mobile device. Mobile cloud applications move the computing power and data storage away from mobile phones and into the cloud, bringing applications and mobile computing to not just smartphone users but a much broader range of mobile subscribers”.

From this concept of MCC, the general architecture of MCC is shown in the figure. Mobile devices are connected to the mobile networks via base stations (e.g., base transceiver station (BTS), access point, or satellite) that establish and control the connections (air links) and functional interfaces between the networks and mobile devices. Mobile users' requests and information (e.g., ID and location) are transmitted to the central processors that are connected to servers providing mobile network services. Here, mobile network operators can provide services to mobile users as AAA (authentication, authorization, and accounting) based on the home agent (HA) and subscribers' data stored in databases. After that, the subscribers' requests are delivered to a cloud through the Internet. In the cloud, cloud controllers process the requests to provide mobile users with the corresponding cloud services. These services are developed with the concepts of utility computing, virtualization, and service-oriented architecture (e.g., web, application, and database servers).

http://onlinelibrary.wiley.com/doi/10.1002/wcm.1203/abstract

Posted in CLOUD, Computer Network & Security, Computer Software, Computing Technology | Tagged: , | Leave a Comment »

What’s New In Microsoft PowerPoint 2013 ?

Posted by Hemprasad Y. Badgujar on July 9, 2014


Revamped Landing Page

The landing page of PowerPoint 2013 has received the much needed facelift. The landing page of the previous version, Microsoft Office 2010, looked very bland and even confusing to some users. In the newer version, the landing page has been revamped to provide users with quick access to locally available templates, as well as the online database. The online templates are divided into several categories such as Business, Industry, Small Business, Presentation, Orientation, Design Sets, 4:3, Media, Nature, Marketing etc. While new presentations can be created from the main window, the left sidebar shows all the recently accessed presentations.

 

Color Themes For Templates

The templates can be used with different color themes. For instance, if the theme comes with a light color scheme, and you want to use darker colors, just click the template and all the available color schemes will be displayed. You can select the required one to use in your presentation. Just like the previous versions, you can also manually change the color and style of elements in a template.

 

Enhanced Presenter View

The Presenter View in PowerPoint 2013 displays the Active slide on the left side, the Next slide at the top right, while the Notes for the current slide are displayed in the bottom right corner. A timer appears above the preview of the current slide, and extra controls are available at the top and bottom of the Presenter View window.

Even though Presenter View was available in previous versions of Microsoft PowerPoint, it was not activated by default. Users had to navigate to the Slide Show tab and enable Presenter View in order to display it on a secondary screen. For this reason, a lot of people were not aware of its existence. Microsoft seems to have realized this in the latest release and has enabled it by default. Now, whenever you run the slide show, Presenter View is displayed if multiple display devices are connected to the computer. Some changes have also been made to the console: there is now a Laser Pointer pen tool, an option to zoom into parts of a slide, a view of all slides at once, and the ability to switch the Slide Show and Presenter views between the connected monitors.

 

Account Management

The Account Management window allows you to connect to your SkyDrive account, and add services to use with PowerPoint. Sign in to your Microsoft Account, and it will automatically connect to your SkyDrive account. You can use the same account to sign in to Microsoft Office 2013 on different devices. This way, all your saved documents will be synced to the cloud and will be available for viewing and editing from any device. This eliminates the need to carry your documents in removable storage drives. Using the SkyDrive account, you can easily share your presentations and invite others to collaborate on required presentation projects.

 

Share Documents To View & Edit In The Browser

The Share option offers a number of ways to share the document with others. You can Invite people by specifying their email addresses; Send them a link to View and Edit the document; Post the document to social networks; Email it to others as an attachment (PPTX), as a URL, as PDF, as XPS, or as an Internet fax; Present it Online so that others can view your presentation from their browsers; and Publish Slides to any library or a SharePoint site. The person on the receiving end does not need to have Microsoft Office installed in order to view or edit the document; with a Windows Live ID, everything can be done from inside the browser.

 

 

Widescreen & Fullscreen Support

PowerPoint 2013 offers a slew of widescreen templates and themes. The previous version also allowed you to switch to widescreen mode; however, you had to manually change the aspect ratio of the slide, which also changed the size of the slide elements. The new version of PowerPoint has built-in support for widescreen monitors. Moreover, there is also a new full-screen mode for editing, which lets you view and edit your slides while using all the available screen space. The Ribbon, containing all the editing options, can be shown and hidden from a conveniently placed button at the top right corner.

 

UI Changes & Pane View

There are various UI related changes in PowerPoint 2013. First of all, everything feels smoother, from the movement of the cursor when you type, to the way animations appear in your presentation. Microsoft has also tried to improve the look and feel of the interface. There are now buttons available on the main interface to switch to the aforementioned Fullscreen View, and to access Notes and Comments.

Another welcome change to the UI is that a lot of options that used to appear in separate dialog boxes are now accessible through panes on the right side. For instance, in PowerPoint 2010, if you right-click a slide and select Format Background, a separate dialog box opens; it covers the slide, and you have to move it manually in order to view all the slide elements. In PowerPoint 2013, selecting the Format Background option adds a pane to the right side instead of opening a separate dialog box. Anything that you change from the pane is reflected on the slide in real time, which means you don’t have to open and close a dialog box again and again to view the changes. Just like the other Office 2013 suite applications, it includes an Online Pictures option to let you quickly add a background to the slide from your favorite online image resource; you can choose an image from the Office.com Clip Art library, Bing Image Search, or your own SkyDrive and Flickr accounts.

 

Alignment Guides, Merge Shapes & Auto-Text Wrapping

A new feature included in PowerPoint 2013, as well as Word 2013, is Alignment Guides. They let you easily align objects and text in a slide relative to each other. You can use the object alignment option to merge different shapes with each other; for instance, if you want to merge two shapes, the alignment guides help you quickly adjust them according to the top, bottom, left and right margins. Another very useful, and much-needed, feature added to PowerPoint 2013 is auto-text wrapping. When an image is added to a slide with text in it, the text automatically readjusts itself around the image so that there is no overlapping of any kind.

 

Insert Online Video, Image And Audio

PowerPoint 2013 now allows you to add videos, images and audio files directly from the internet, without first downloading them to your PC. Think of it as the object being embedded in your presentation. The previous version of PowerPoint also had the option to add videos from the web, however you had to copy the embed code of required video and paste it into PowerPoint. The latest version allows you to Insert an online video in your presentation using the integrated Bing Video Search, SkyDrive Account, YouTube, or From a Video Embed Code. For instance, to add a YouTube video, just search for it, select the required one from the search results and click OK to embed it into your presentation.

 

The image results are, by default, set to show images licensed under Creative Commons, which reduces the chance of copyright violation when you use an online image in your presentation. You can also choose to view all the web results for your search.

 

Export Presentation As WMV & MPEG-4 Video

PowerPoint 2010 also lets you save a presentation as a video, but only in WMV format. PowerPoint 2013 adds MPEG-4 as a second video output format. Thanks to MPEG-4, the presentation video can be played directly on many media players and devices: users no longer need a Windows Media codec installed on non-Windows devices to watch the presentation, and portable devices, as well as many LCD/LED TVs, have built-in support for MPEG-4 playback. Just go to Export and select Create a Video. All the other options, including the resolution and whether to use recorded timings and narrations, are available with the MPEG-4 format.

 

Start at the new Start screen

As with the other key Office 2013 applications, PowerPoint 2013 shares the new Modern-style interface and a revamped Start screen. Instead of the blank presentation you started with in PowerPoint 2010, this screen is packed with options including a range of templates. Also on the Start screen is a link to your current online SharePoint or SkyDrive account, a list of recently accessed PowerPoint files, and an Open Other Presentations link which you use to access files on disk or stored in the cloud.

You can also search online for templates and themes from the Start screen; a list of suggested searches helps here.

Now you can preview layouts before selecting a Theme to use.

Themes are sleeker, and Variants more varied

PowerPoint Themes are predesigned slide designs that spare you from doing the design work yourself. In PowerPoint 2010 there was a plethora of Themes, Color Schemes, Font Schemes and Effects to choose from. PowerPoint 2013 simplifies everything. The new Themes default to a 16:9 aspect ratio and each has a small subset of Variants, which provide variations in color and some design elements for that Theme.

You’ll find Themes from both the Start screen and the new Design tab. On the Start screen you can click a Theme, preview its variants, and scroll through previews of the Theme Title, Title and Content, Smart Chart and Photo layouts before committing to one to use.

The old Merge Shape tools are now easier to find.

Shape tools get improvements

Although some of the Merge Shapes features that are touted as being new in PowerPoint 2013 were in PowerPoint 2010, they weren’t accessible from the Ribbon toolbar. In PowerPoint 2013, though, the Join, Combine, Fragment, Intersect and Subtract tools are accessible by selecting the Drawing Tools Format tab and clicking the Merge Shapes button. You’ll use these to create your own custom shapes by combining and merging simple shapes to make more complex ones. These tools have a handy live preview as well.

In addition, new alignment guides show when shapes are lined up to each other, to slide elements, and to borders and they make it easier to line up and space objects evenly on your slides.

Formatting options have become more visible.

Find new formatting tools

In PowerPoint 2013, you’ll find many formatting features in task panes docked to the right of the screen as you work. In earlier versions of PowerPoint, these options appeared in dialogs over the slide, which you had to move or close to continue working.

To access these new task panes, right-click a shape, for example, and choose Format Shape to see the available options for a shape in the task pane. Click a picture and the task pane changes to show picture formatting options. While most of the formatting options are not new, this makes them easier to find.

New is the Eyedropper tool, available when you are making a color choice. Use this to match colors by sampling a color to use from a shape or photo.

Look online for videos to include in presentations from within PowerPoint.

Video input and output improve

PowerPoint 2013 supports additional video formats so it’s more likely videos will play in your presentation without you needing to install additional codecs.  For example, PowerPoint 2013 supports the MP4 and MOV formats for playing video, and you can export a PowerPoint presentation to video in MP4 or WMV formats.

The new Video button on the Insert tab includes options that let you search for a video from an online source and drop it into your deck without first downloading it to your computer.

At long last, there’s a button to play audio tracks in the background and across slides.

Audio playback options expand

PowerPoint 2013 supports a wide range of audio formats without requiring you to download and install additional codecs. Supported formats now include AIFF, AU, MID, MIDI, MP3, M4A, MP4, WAV, and WMA.

You can click a button in PowerPoint 2013 to play audio tracks across the entire slideshow or across slides. While this has always been possible, it was ridiculously annoying to set up. Now all you need to do is insert the audio file, select it, choose the Audio Tools Playback tab, and click the Play in Background option.

Only have one monitor? You can finally take advantage of Presentation View.

Presentation View becomes rosier

While the PowerPoint Presenter View was available in earlier versions of PowerPoint, most users didn’t know it existed. Plus, if your computer had only one monitor, you couldn’t access it at all, not even to rehearse your presentation!

Now you can access Presenter View even on a single monitor by pressing Alt + F5. In Presenter View you can swap monitors for Presenter View and Slide Show View if desired. You can also view a thumbnail view of your slides, and click to view a slide out of sequence.

The new Zoom option lets you look close-up into an area on a slide to draw attention to it. There’s a new laser pointer tool here, too.

The new Comments task pane makes it easier to converse when working with others.

Work better with your team

When you’re designing a presentation with others, the new Comments feature will make it easier to discuss your slideshow with collaborators. When you add a comment, it appears in a Comments task pane down the right of the screen and stays visible while you work.

There are also options to add a comment from the Insert tab or the Comments task pane. The Comments task pane lets you navigate through comments and see whether there are comments on other slides. You can show or hide comments in your presentation by toggling Show Comments on the Review tab.

The new Office Presentation Service expands features for Presentation View and video in online presentations.

Bring your presentation online

Now you can present a deck stored in the cloud or on your PC to the Web in real time. To use the new Office Presentation Service, choose File, Share, Present Online. You can also allow attendees to download the presentation to their own PC.

You’ll also see Presenter View while making your presentation. Plus, you can play video at presentation time, and viewers get their own set of video controls. In addition, viewers can navigate back to previous slides if they need to check or follow up on something.

 

Switching accounts / SkyDrive integration

I’ll admit, I really wasn’t crazy about the idea of “logging in” to Office initially. I also admit that this isn’t the most exciting or even impressive feature, but it is one that I am thankful for. As someone with several Microsoft accounts, a couple of Office 365 accounts, and therefore many SkyDrive accounts, I found it a bit inconvenient to have to go to the web, sign in to a SkyDrive account, and then download whatever file I needed. I really love being able to quickly switch between profiles and access files in the cloud right from PowerPoint.

Having two Microsoft Accounts gives me a nice little “fence” to separate my personal and work files. All I have to do is click on “Switch account” to access my other accounts.

If I didn’t want to separate files via multiple Microsoft Accounts, I can also just add two different SkyDrive accounts to one profile. In other words, I sign into PowerPoint with one Microsoft account, but add all my SkyDrive accounts by clicking on “Add a Place” from the backstage open screen.

The only thing I don’t like about this second method is that at first glance there is no way to distinguish between my two SkyDrive folders. As you can see in the picture above, PowerPoint only displays the user name (which is the same) next to each account. On the Open screen, I would love to see the email address displayed below the name, as on the Accounts screen. Other than that, this is a wonderful addition, one that makes me use my free cloud storage more than ever before and limits my need to “remote desktop” into my work computer.

Threaded comments

When collaborating with others, it is now a lot less complicated to follow conversations. Comments are now “threaded” and a lot easier on the eye.

Play From and Motion Path End

Technically, these are two separate but similar features that tie for third place in my book. I work with a lot of animations, and these two new additions have saved me a ton of time when working with and creating them.

Play From

The old Play button in the Animation Pane is now a Play From button, allowing you to preview a portion of the animations on a PowerPoint slide. Simply select an animation in the animation pane before pressing the Play From button.

Motion Path End

When drawing motion paths, PowerPoint now “ghosts” your object so you can see exactly where that object will appear when the animation completes, so no more guessing!

Color Picker

PowerPoint now includes a color picker! Better late than never, right?

The Eyedropper tool is found in the Shape Fill drop menu located from both the Home tab and the Drawing Tools Format tab. To select a color on the slide, simply click on the Eyedropper button, and then click on the desired color. To select a color from outside of the PowerPoint application window, click and drag.

Presenter View

The presenter view received quite the overhaul. It is now much darker, so presenting from behind a computer screen will not create a creepy glow.

It also includes three resizable panes: a slide preview, a next slide preview, and a notes area. To resize any of these areas, simply hover your mouse over any of the divider bars, then just click and drag.

Personally, I don’t need to see my current slide or the next slide. So my view usually looks like this:

In the above picture, I’ve completely collapsed the current slide view, resized the next slide view to a teeny-tiny thumbnail, and maximized my notes area to act as a kind of teleprompter.

There are also a lot of tools at your disposal that were once buried in hard-to-reach menus. All buttons are touch-friendly sized, making it easier to navigate a presentation from a touch-enabled monitor or tablet. The only problem is that these buttons appear in the Current Slide pane, so if you are like me and minimize that area, they are no longer easily accessible; however, you can still get to those options by right-clicking.

Also very useful, you can now jump to any slide or section in your presentation by clicking the Slide Sorter button (the one next to the pen tool) or by right-clicking and selecting “See All Slides.”

Your view will change, but your audience will still see your previously selected slide. As you select a different slide, your audience will just see a flawless transition to a new slide and will never know you are presenting out of order.

But perhaps the best addition to the presenter view is the ability to zoom into a portion of a slide.

Simply select the Zoom In button (Magnifying Glass icon), hover your mouse over the area you’d like to zoom into, and click.

Well, now that PowerPoint 2013 has released to manufacturing, it’s time to publish my big list of new features. This is my list of new stuff in PowerPoint 2013, definitely not the same list Microsoft marketing publishes. So here we go…

Start UI. PowerPoint 2013 gives you a whole new experience from the get-go. Choose from a bunch of new templates and variants and see previews of a few slide layouts before you begin your presentation.

16×9. This is the new default slide aspect ratio. (The old one was 4×3.) Don’t worry, you can still set your default template to 4×3 if you want.

13.33″ x 7.5″. This is the new default slide size. (The old 4×3 was 10″ x 7.5″, and the old 16×9 was 10″ x 5.76″.) Personally, I think this is a very good thing.
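The arithmetic is easy to verify. Here is a quick illustrative check in plain Python (nothing PowerPoint-specific, just the numbers from above):

```python
# Illustrative check: the new default slide size keeps the old 7.5" height
# but widens the slide so the aspect ratio becomes 16:9.
new_width, new_height = 13.333, 7.5
old_width, old_height = 10.0, 7.5   # the old 4:3 default

assert abs(new_width / new_height - 16 / 9) < 1e-3   # ≈ 1.7777
assert abs(old_width / old_height - 4 / 3) < 1e-9    # ≈ 1.3333
```

Keeping the 7.5″ height is why Scale to Fit Paper matters more now: the slide is wider than a 10″ print area.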

Before I forget, Scale to Fit Paper is now ON by default in the File | Print dialog. I’m sure this is directly related to the 13.33×7.5 slide size feature above. (So the whole 16×9 slide will print on the page.)

Slide Size tool. There’s a new tool on the Design tab to help you switch your slides from 4×3 to 16×9 and back without completely wrecking all your content. Yay!

Variants and SuperThemes. We now have variations of a theme that are built-in. Most variants are very similar to the “base” theme, with changes to the color or font set. Themes that include variants are called SuperThemes.

Format panes. Instead of having a Format dialog, we now have a Format pane that is docked to the right side of the work space.

Insert Online Pictures. The Office programs now distinguish between inserting pictures from your hard drive and inserting them from online. Similar settings exist for Video and Audio.

Logging in. Log into your Microsoft.com account, and you’ll see more content and have more options. For example, if I’ve logged onto my MSFT account, my SkyDrive will show up (along with office.com, Flickr and Bing image search) when I click Insert Online Pictures.

Saving. When you save, online locations such as SharePoint team sites and SkyDrive are in the forefront. Don’t forget to click Computer before browsing to a location if you’re saving to your hard drive!

Present Online. This is really the equivalent of Broadcast Slide Show, but the presenter has the option of letting people download the presentation as well (or not). Be aware — if you allow the audience to download, then they’ll also have the ability to navigate through the broadcast presentation at their own pace while you’re presenting.

Save as Video. By default this now creates an MPEG-4 Video. WMV (Windows Media Video) is still an option.

New Slide button. They finally added this to the Insert tab! (Only took three versions, sheesh. Unfortunately it’s still in the wrong place — it should be on the other side of the Images group, but nobody listens to me!) Don’t worry, it’s still on the Home tab also.

Popup menu in Slide Show View. The buttons that show in the lower left corner during slide show view have been tweaked for a better touch experience. They’re not as subtle as they could be, but they’re not as bad as they could be, either.

See All Slides. When in a slide show, we now have a view that looks kind of like Slide Sorter View. (There’s no longer a Go to Slide menu with an option to navigate by slide title, though.)

Presenter View. This is all kinds of new and all kinds of cool. And if you only have one monitor, use Alt+F5 to see and practice with Presenter View!

Page Curl transition. Yes, you heard (read) me right — we finally have a page turn transition! It’s actually called Peel Off, but what’s in a name? Actually, we have quite a few new transitions, including Page Curl, Curtains, and Fracture (among others). Also, while we’re on the subject of transitions, the bounce has been removed from the end of the Pan transition.

Play From. The animation pane now lets you play from the selected animation.

Motion Path End. A ghosted object now shows up to show you the end position of a motion path. Very, very helpful!

Animation Zombies. Some of the old animations (Stretch and Collapse, for example) are baaaaack!

Threaded Comments. Comments have been enhanced with a Comments Pane that shows the comments thread and avatars for those commenting.

Enhanced Smart Guides. Those whisker things that showed up in PowerPoint 2010 to help you align and position objects on a slide? Well, they got even better in 2013 because now they also help with distribution.

Enhanced Guides. We now have the equivalent of lockable, colorable guidelines, people! Wahooo! Put one set of guides on your slide master (to indicate margins, for example). Add others to any layouts that might require different guides. And add even more to the regular slides as you’ve always done. When you’re in Normal (editing) View, only the guides on the slides will be selectable — otherwise you’ll need to go to Master View to move them. Oh, and did I mention that you can recolor all of these? Just right-click a guide…

Color Picker. We now have eyedroppers to pick up and apply fill, outline and font colors. All together now: Thank you, PowerPoint Team!

Merge Shapes. These tools, which are similar to the Pathfinder tools in Illustrator, are now on the Ribbon (on the Drawing Tools Format tab). The group is called Merge Shapes instead of Combine Shapes. There is also a new tool, Fragment, to complement the other four.

Semantic Zoom. We can zoom and pan in Slide Show View now.

Charts. Charting is a lot better in many ways and a lot worse in others. Now a small Excel datasheet opens above the chart instead of Excel opening and taking up half your screen. The interface is vastly improved. They added a combo chart to the types of charts (yay!). They added new chart styles (good) but removed the 2007/2010 chart styles (bad). They made the default chart font size 12 points (good or bad, depending if you like it or not) and the default chart font color a tint/shade of Dark 1/Light 1 (horrible if Dark 1/Light 1 is anything besides black or white).

PowerPoint Web App. This has lots of new features. We can now add, edit and format shapes, apply a new theme, and use animations and transitions. We also have audio and video playback in both Reading and Slide Show views. It still supports co-authoring, but now it supports co-authoring with regular ol’ PowerPoint, too. And if you embed your presentation into a web page or blog, it’s no longer just static pictures — it’s actually like a regular presentation with animations, transitions, audio and video. (Old embedded presentations will automatically update to behave this way, too.)

Default Office Theme is a bit different. The colors are different and the default effects set is way more subtle.

SmartArt graphics. We got some new SmartArt diagrams.

Backstage. Along with the overall interface overhaul to a newer, flatter look, Backstage has been reorganized once again.

WHAT’S MISSING (WELL, KIND OF…)

Save as HTML. Gone. Done. Kaput. It’s not in the interface, and it’s not accessible with VBA either.

Insert ClipArt. This has been replaced with Insert Online Pictures. No clipart or picture collections are installed with Office 2013.

Not missing, just moved. The Theme Colors, Fonts and Effects controls are no longer on the Design tab, but they are available in Slide Master View, as are Background Styles.

Broadcast Slide Show. This isn’t really gone — it’s just morphed into Present Online.

Outline pane. Again, this isn’t actually gone, it just doesn’t show up any more next to the Slides pane in Normal (editing) View. Go to the View tab to turn the Outline pane on and off.

Combine Shapes. For those of you who used these, they’re not gone. They’ve been promoted to the Drawing Tools Format tab of the Ribbon and are now called Merge Shapes.

In Slide Show View, there’s no longer a Go to Slide menu with an option to navigate by slide title. Instead, we have the new See All Slides view, which looks similar to Slide Sorter view.

Posted in Computer Software, Computer Softwares, Documentations, My Research Related, Project Related, Research Menu | Tagged: , , | 1 Comment »

What’s New In Microsoft Excel 2013?

Posted by Hemprasad Y. Badgujar on July 9, 2014


Microsoft’s updated spreadsheet tool isn’t getting a lot of new, whiz-bang features, but it is becoming more functional. That’s something both new and experienced users will enjoy—especially a new approach to an old problem that used to require a cumbersome workaround. Complex tasks become easier to perform, thanks to tools such as Recommended Charts and Recommended PivotTables tools. Other changes place choices closer to your data, and use big-business brawn to crunch data right into Excel.

To help you get up to speed, read on for 10 new features that make your work easier in the new Excel. Want to know more about the new Office suite? You’ll find our full review of Office 2013 here, as well as 10 killer features in the new Word 2013 here.

Start screen sets the scene

Excel’s new Start Screen helps you get to work more quickly. Along its left edge are the most recently used worksheets, any of which can be pinned to your Recent list so they will always be visible. Here, too, you can click Open Other Workbooks to access your files from a disk or the cloud. The Start Screen’s top-right corner also shows the SkyDrive (or SharePoint) account you are currently connected to.

A range of templates appears here to help you quick-start a project. These can also be pinned, or you can use the search feature to look online for other templates. A list of suggested searches can help you get started.

New users will appreciate the template choices, and existing users will like the Recent file list and quick access to existing files. Although the Start Screen can be disabled, I find it useful enough to stick with it.

The Open tab has links to recently accessed files and locations.

Enjoy a new Backstage View

The Backstage View, introduced in Office 2010, is accessible from the File menu. In Excel this has been revamped to show exactly what you’re doing so you can choose the appropriate task.

The Open tab now gives you access to recently accessed workbooks, making it a combination of the Open and Recent tabs from Excel 2010. You can pin worksheets to this list or click Computer to access recently accessed locations (any of which you can pin permanently here, too). There’s also access to your SkyDrive account, and the option to set up additional SkyDrive or SharePoint accounts.

Want to split first and last names into two columns? Look to the new Flash Fill feature.

Make Flash Fill magic

The most whiz-bang new feature is the Flash Fill tool. Its predictive data entry can detect patterns and extract and enter data that follows a recognizable pattern. It solves some common problems that currently require cumbersome workarounds to achieve.

One such problem is extracting a person’s first name from a column of full names. In a blank column adjacent to the one that contains full names, you simply type the first name, then click the Home tab and select Fill, Flash Fill. The first names of everyone in the list are entered into that column immediately. You can use the same process to extract last names, join first and last names, extract months, days or years from dates, and even extract values from cells. While you could always have done this with formulas, Flash Fill ensures anyone can do it very quickly and easily.
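Flash Fill's pattern matching is proprietary, but the core idea can be sketched in a few lines of Python. This is purely illustrative: the names and the inferred rule are made up, and this is not Microsoft's implementation.

```python
# Illustrative sketch, not Microsoft's actual algorithm: from one example
# ("Jane" typed next to "Jane Smith"), infer a simple split-based rule and
# apply it down the rest of the column, the way Flash Fill fills it in.
full_names = ["Jane Smith", "Rahul Patel", "Maria Garcia"]
example = "Jane"  # the single value the user typed in the adjacent column

def infer_rule(source, target):
    # Find which whitespace-separated part of the source matches the example.
    for i, part in enumerate(source.split()):
        if part == target:
            return lambda s, i=i: s.split()[i]
    raise ValueError("no simple split-based rule fits the example")

rule = infer_rule(full_names[0], example)
first_names = [rule(name) for name in full_names]
print(first_names)  # ['Jane', 'Rahul', 'Maria']
```

The real feature handles far richer patterns (substrings, joins, date parts), but the shape is the same: generalize from the user's example, then apply the rule to every row.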

Take the guess work out of which chart to choose to best display your data.

Simplify choices with Recommended Charts

This falls somewhere between a whiz-bang new feature and something that makes working in Excel more intuitive. Recommended Charts shows only a subset of chart types that are appropriate to the data you’ve selected. It will help inexperienced users create charts that help explain the data and don’t confuse the viewer.

To use the tool, select the data that you want to chart, click the Insert tab and select Recommended Charts. A dialog appears with a range of charts to choose from—click each in turn to see how your data will look plotted on that chart. Select the desired option and click OK, and the chart is created automatically.

Change the look of your chart by selecting options from the pop-up menu.

Chart tools get smarter

In previous versions of Excel, when a chart is selected, the Chart Tools tab revealed three additional tabs: Design, Layout, and Format. The interface is simpler in Excel 2013, with only the Design and Format tabs to choose from.

In addition, a set of icons appears outside the top right edge of a chart when it is selected. Click any of these buttons—Chart Elements, Chart Styles or Chart Filters—to reveal additional chart formatting options. Click Chart Elements to add or remove elements, such as axis titles and legends; click Chart Styles to change the style and color of your chart; or click Chart Filters to view filtered data using a live preview.

Quick Analysis offers formatting, totals and charts for analyzing your data.

Quickly analyze your data

The new Quick Analysis tool can help both new and experienced users find options for working with selected data. To use it, select the data to analyze, and the Quick Analysis icon appears in the bottom-right corner of the selected data.

Click that icon, and a dialog appears showing a range of tools for analyzing the data, such as Formatting, Charts, Totals, Tables and Sparklines. Click any option, and a series of selectable choices appear; preview those choices by mousing over them. Next, click the option you like to apply it to your data. This feature speeds up the process of formatting, charting and writing formulas.

PivotTables just became ridiculously simple to create.

Answer questions instantly with Pivot Tables

Pivot Tables are a powerful tool for analyzing and answering questions about your data, but they’re not easy for new users to create. For the first time, though, if you can click a mouse button, you can create a meaningful Pivot Table, thanks to the new Recommended PivotTables. To use it, select your data, including headings, and choose Insert, Recommended PivotTables. A dialog appears showing a series of PivotTables with explanations of what they show. All you need do is select the table that shows what you want to see and click OK, and the PivotTable is automatically drawn for you.

Excel 2013 now integrates Power View for beefy analysis and reporting.

Timelines

A timeline lets you filter records in a PivotTable—it works much like a slicer, but you filter by dates. For instance, Figure E shows a PivotTable and timeline. (I used the same data range used in #3.) Once you have a PivotTable arranged, adding the timeline is simple:

  1. With the PivotTable selected, click the contextual Analyze tab.
  2. In the Filter group, click Insert Timeline.
  3. In the resulting dialog, check the date field (in this case, that’s Date) and click OK. Excel will embed the timeline alongside the PivotTable.

 

Excel_New_Ftrs.FigE.jpg

 

Use the new Timeline with a PivotTable.

To use the timeline, just drag the scroll bar or click a tile to further filter personnel totals by specific months. In the upper-right corner, you can change to years, quarters, months, and days. To clear the timeline filter, click the Clear button in the upper-right corner.
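Conceptually, a timeline is just an interactive date filter over the PivotTable’s source records. A rough Python sketch of what dragging the scroll bar to a single month does (the personnel records are invented for illustration):

```python
from datetime import date

# Hypothetical source records behind the PivotTable
records = [
    {"date": date(2013, 1, 15), "personnel": 4},
    {"date": date(2013, 1, 28), "personnel": 2},
    {"date": date(2013, 2, 3),  "personnel": 5},
]

# Dragging the timeline to "January 2013" keeps only that month's records,
# and the PivotTable re-totals over the filtered set
january = [r for r in records if (r["date"].year, r["date"].month) == (2013, 1)]
print(sum(r["personnel"] for r in january))  # 6
```

Switching the timeline to years or quarters just changes the grouping key in the filter; clearing it restores the full record set.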

Make quick reports with Power View

The Power View add-in, available for previous versions of Excel, is now integrated inside Excel 2013. Power View is typically used for analyzing large quantities of data brought in from external data sources—just the sort of tool that big business might use.

Incorporated within Excel, it’s now accessible to anyone. To see it at work, select your data and choose Insert, Power View. The first time you use it, the feature installs automatically. Then a Power View sheet will be added to your workbook, and the analysis report will be created.

You can add a title and then filter the data and organize it to display the way you like. The Power View tab on the Ribbon toolbar displays report format options, such as Theme and text formats, as well View options for Field List and Filters Area panels that you can use to filter and sort your data.

Try to work on a worksheet that someone else is editing? You’ll be warned that it’s locked. You can view and download it, but can’t change it.

Share files and work with other people

Working with other people on shared files in real time is a double-edged sword. While it’s useful to do this, you will face problems when two people try to change the same item at the same time. In Excel 2013 you can share and work collaboratively on files with others via SkyDrive using the Excel WebApp, and multiple people can work on the same file at the same time. However, you cannot open a worksheet from SkyDrive in Excel 2013 on your local machine if someone else is currently working in the same worksheet. This protects the worksheet against conflicting changes.

Instead, if one person is editing an Excel file that’s stored online, others with permission can view and download it, but they cannot change the original, which is locked until the person working with it is finished.

Like other applications in the Office 2013 suite, Excel 2013 saves files by default to the cloud. You can open, view, and edit Excel files online in a browser using the Excel WebApp without having Excel 2013 on the local hard drive.

Share your cloud-stored worksheets with friends on Facebook, Twitter, or LinkedIn.

Features to explore

Get started quickly

Some of the templates that are available in Excel

Templates do most of the set-up and design work for you, so you can focus on your data. When you open Excel 2013, you’ll see templates for budgets, calendars, forms, and reports, and more.

Instant data analysis

Data Analysis Lens

The new Quick Analysis tool lets you convert your data into a chart or table in two steps or less. Preview your data with conditional formatting, sparklines, or charts, and make your choice stick in just one click. To use this new feature, see Analyze your data instantly.

Fill out an entire column of data in a flash

Flash Fill in action

Flash Fill is like a data assistant that finishes your work for you. As soon as it detects what you want to do, Flash Fill enters the rest of your data in one fell swoop, following the pattern it recognizes in your data. To see when this feature comes in handy, see Split a column of data based on what you type.

Create the right chart for your data

Recommended Charts

With Chart recommendations, Excel recommends the most suitable charts for your data. Get a quick peek to see how your data looks in the different charts, and then simply pick the one that shows the insights you want to present. Give this feature a try when you create your first chart.

Filter table data by using slicers

Table slicer

First introduced in Excel 2010 as an interactive way to filter PivotTable data, slicers can now also filter data in Excel tables, query tables, and other data tables. Simpler to set up and use, slicers show the current filter so you’ll know exactly what data you’re looking at.

One workbook, one window

Two workbooks, two windows

In Excel 2013 each workbook opens in its own window, making it easier to work on two workbooks at once. It also makes life easier when you’re working on two monitors.

New Excel functions

New Web functions

You’ll find several new functions in the math and trigonometry, statistical, engineering, date and time, lookup and reference, logical, and text function categories. Also new are a few Web service functions for referencing existing Representational State Transfer (REST)-compliant Web services. Look for details in New functions in Excel 2013.
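The new Web service functions in Excel 2013 are ENCODEURL, WEBSERVICE, and FILTERXML: escape a query term, fetch a REST response, and pull values out of the returned XML. The escaping and extraction steps look roughly like this Python sketch (the exchange-rate XML payload is invented for illustration):

```python
from urllib.parse import quote
import xml.etree.ElementTree as ET

# ENCODEURL-style escaping of a query term before building a request URL
term = quote("Excel 2013")  # 'Excel%202013'

# FILTERXML-style extraction: pull a value out of an XML payload such as
# the one a REST service (reached via WEBSERVICE in Excel) might return
payload = "<rates><rate currency='EUR'>0.92</rate><rate currency='GBP'>0.79</rate></rates>"
root = ET.fromstring(payload)
eur = root.find("rate[@currency='EUR']").text
print(term, eur)  # Excel%202013 0.92
```

In a worksheet the same pipeline would be spelled as nested formulas, with FILTERXML taking an XPath expression much like the `find()` query above.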

Save and share files online

Online places to save your workbook

Excel makes it easier to save your workbooks to your own online location, like your free OneDrive or your organization’s Office 365 service. It’s also simpler to share your worksheets with other people. No matter what device they’re using or where they are, everyone works with the latest version of a worksheet— and you can even work together in real time. To learn more about it, see Save a workbook to the Web.

Embed worksheet data in a web page

To share part of your worksheet on the web, you can simply embed it on your web page. Other people can then work with the data in Excel Online or open the embedded data in Excel.

Share an Excel worksheet in an online meeting

No matter where you are or what device you’re on—be it your smartphone, tablet, or PC—as long as you have Lync installed, you can connect to and share a workbook in an online meeting. To learn more about it, see Present a workbook online.

Save to a new file format

Now you can save to and open files in the new Strict Open XML Spreadsheet (*.xlsx) file format. This file format lets you read and write ISO 8601 dates to resolve a leap year issue for the year 1900. To learn more about it, see Save a workbook in another file format.
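The leap-year issue in question is the legacy serial-date system’s assumption, inherited from Lotus 1-2-3 for compatibility, that 1900 was a leap year; ISO 8601 text dates carry no such assumption. A quick check in Python:

```python
import calendar
from datetime import date

# 1900 is NOT a leap year (divisible by 100 but not by 400), even though
# Excel's legacy serial-date system pretends it is for compatibility.
print(calendar.isleap(1900))          # False
print(calendar.isleap(2000))          # True

# An ISO 8601 date is unambiguous text, with no serial-number quirks
print(date(2013, 7, 20).isoformat())  # '2013-07-20'
```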


New charting features

Changes to the ribbon for charts

Chart Tools

The new Recommended Charts button on the Insert tab lets you pick from a variety of charts that are right for your data. Related types of charts like scatter and bubble charts are under one umbrella. And there’s a brand new button for combo charts—a favorite chart you’ve asked for. When you click a chart, you’ll also see a simpler Chart Tools ribbon. With just a Design and Format tab, it should be easier to find what you need.

Fine tune charts quickly

Chart buttons to change chart elements, layout, or chart filters

Three new chart buttons let you quickly pick and preview changes to chart elements (like titles or labels), the look and style of your chart, or to the data that is shown. To learn more about it, see Format your chart.

Richer data labels

Bubble chart with data labels

Now you can include rich and refreshable text from data points or any other text in your data labels, enhance them with formatting and additional freeform text, and display them in just about any shape. Data labels stay in place, even when you switch to a different type of chart. You can also connect them to their data points with leader lines on all charts, not just pie charts. To work with rich data labels, see Change the format of data labels in a chart.

View animation in charts

See a chart come alive when you make changes to its source data. This isn’t just fun to watch—the movement in the chart also makes the changes in your data much clearer.

Powerful data analysis

Create a PivotTable that suits your data

Recommended PivotTables for your data

Picking the right fields to summarize your data in a PivotTable report can be a daunting task. Now you can get some help with that. When you create a PivotTable, Excel recommends several ways to summarize your data, and shows you a quick preview of the field layouts so you can pick the one that gives you the insights you’re looking for. To learn more about it, see Create a PivotTable to analyze worksheet data.

Use one Field List to create different types of PivotTables

Add more Tables in the Field List

Create the layout of a PivotTable that uses one table or multiple tables by using one and the same Field List. Revamped to accommodate both single and multi-table PivotTables, the Field List makes it easier to find the fields you want in your PivotTable layout, switch to the new Excel Data Model by adding more tables, and explore and navigate to all of the tables. To learn more about it, see Use the Field List to arrange fields in a PivotTable.

Use multiple tables in your data analysis

The new Excel Data Model lets you tap into powerful analysis features that were previously only available by installing the Power Pivot add-in. In addition to creating traditional PivotTables, you can now create PivotTables based on multiple tables in Excel. By importing different tables, and creating relationships between them, you’ll be able to analyze your data with results you aren’t able to get from traditional PivotTable data. To learn more about it, see Create a Data Model in Excel.

Power Query

If you’re using Office Professional Plus 2013 or Office 365 Pro Plus, you can take advantage of Power Query for Excel. Use Power Query to easily discover and connect to data from public and corporate data sources. This includes new data search capabilities, as well as capabilities to easily transform and merge data from multiple data sources so that you can continue to analyze it in Excel. To learn more about it, see Discover and combine with Power Query for Excel.

Power Map

Power Map

If you’re using Office 365 Pro Plus, Office 2013, or Excel 2013, you can take advantage of Power Map for Excel. Power Map is a three-dimensional (3-D) data visualization tool that lets you look at information in new ways by using geographic and time-based data. You can discover insights that you might not see in traditional two-dimensional (2-D) tables and charts. Power Map is built into Office 365 Pro Plus, but you’ll need to download a preview version to use it with Office 2013 or Excel 2013. See Power Map for Excel for details about the preview. To learn more about using Power Map to create a visual 3-D tour of your data, see Get started with Power Map.

Connect to new data sources

To use multiple tables in the Excel Data Model, you can now connect to and import data from additional data sources into Excel as tables or PivotTables. For example, connect to data feeds like OData, Windows Azure DataMarket, and SharePoint data feeds. You can also connect to data sources from additional OLE DB providers.

Create relationships between tables

When you’ve got data from different data sources in multiple tables in the Excel Data Model, creating relationships between those tables makes it easy to analyze your data without having to consolidate it into one table. By using MDX queries, you can further leverage table relationships to create meaningful PivotTable reports. To learn more about it, see Create a relationship between two tables.

Use a timeline to show data for different time periods

A timeline makes it simpler to compare your PivotTable or PivotChart data over different time periods. Instead of grouping by dates, you can now simply filter dates interactively or move through data in sequential time periods, like rolling month-to-month performance, in just one click. To learn more about it, see Create a PivotTable timeline to filter dates.

Use Drill Down, Drill Up, and Cross Drill to get to different levels of detail

Drilling down to different levels of detail in a complex set of data is not an easy task. Custom sets are helpful, but finding them among a large number of fields in the Field List takes time. In the new Excel Data Model, you’ll be able to navigate to different levels more easily. Use Drill Down into a PivotTable or PivotChart hierarchy to see granular levels of detail, and Drill Up to go to a higher level for “big picture” insights. To learn more about it, see Drill into PivotTable data.

Use OLAP calculated members and measures

Tap into the power of self-service Business Intelligence (BI) and add your own Multidimensional Expression (MDX)-based calculations in PivotTable data that is connected to an Online Analytical Processing (OLAP) cube. No need to reach for the Excel Object Model—now you can create and manage calculated members and measures right in Excel.

Create a standalone PivotChart

A PivotChart no longer has to be associated with a PivotTable. A standalone or de-coupled PivotChart lets you experience new ways to navigate to data details by using the new Drill Down, and Drill Up features. It’s also much easier to copy or move a de-coupled PivotChart. To learn more about it, see Create a PivotChart.

Power View

Power View

If you’re using Office Professional Plus, you can take advantage of Power View. Simply click the Power View button on the ribbon to discover insights about your data with highly interactive, powerful data exploration, visualization, and presentation features that are easy to apply. Power View lets you create and interact with charts, slicers, and other data visualizations in a single sheet. Learn more about Power View in Excel 2013.

New and improved add-ins and converters

Power Pivot for Excel add-in

If you’re using Office Professional Plus 2013 or Office 365 Pro Plus, the Power Pivot add-in comes installed with Excel. The Power Pivot data analysis engine is now built into Excel so that you can build simple data models directly in Excel. The Power Pivot add-in provides an environment for creating more sophisticated models. Use it to filter out data when importing it, define your own hierarchies, calculation fields, and key performance indicators (KPIs), and use the Data Analysis Expressions (DAX) language to create advanced formulas. Learn more about the Power Pivot in Excel 2013 add-in.

Inquire add-in

If you’re using Office Professional Plus 2013 or Office 365 Pro Plus, the Inquire add-in comes installed with Excel. It helps you analyze and review your workbooks to understand their design, function, and data dependencies, and to uncover a variety of problems including formula errors or inconsistencies, hidden information, broken links and others. From Inquire, you can start a new Microsoft Office tool, called Spreadsheet Compare, to compare two versions of a workbook, clearly indicating where changes have occurred. During an audit, you have full visibility of the changes in your workbooks.

Cloud support

Microsoft claims that its cloud support is the true shining star of the Office 2013 suite. If you need it, you probably agree; many organizations aren’t taking full advantage of it yet. If you’re curious, you can quickly hook up to SkyDrive or your organization’s SharePoint team site by using the Save As (or Open) screen, as shown in Figure F. Doing so has two advantages:

  • You have quick and easy access to your Excel files on any device that runs Excel 2013 (including a Windows tablet and smartphone).
  • Using Office 365 (you’ll need a subscription), you can review and edit your workbooks online using almost any web browser.
    Excel_New_Ftrs.FigF.jpg

Data Model and Relationships

Excel 2013’s new integrated data model support is well beyond a simple recommendation tip like this. You’ll want to study and familiarize yourself with all of the possibilities:

  • Create PivotTables based on multiple tables.
  • Create one-to-one and one-to-many relations between tables.
  • Easily connect to OData, Windows Azure DataMarket, and SharePoint.
  • Drill down to detail levels in a PivotTable or PivotChart.
  • Drill up for a high-end view.

Apps for Office

This new feature provides quick access to specialized programs in the Office Store. Just a quick click and you’re shopping! To install an app, click the Insert tab and then click Apps for Office in the Apps group. You’ll need an account at the store, which the feature will help you create the first time you use it. Figure G shows Bing Maps as an installed app.

Excel_New_Ftrs.FigG.jpg

 

After creating an Office Store account, adding Bing Maps took just a couple of clicks.

Present online

Sharing a workbook online used to take a bit of preparation, but in Excel 2013, on-the-fly sharing is no problem. First, install Lync. If you have Office Professional Plus, you already have it, but you’ll need to configure it. Before sharing, sign into Lync. Then, return to Excel 2013, close all workbooks that you don’t want to share, and do the following:

  1. Click the File tab.
  2. Choose Share in the left pane.
  3. Click Present Online (in the Share section).
  4. Click Present.
  5. Choose a Lync meeting or create one, and click OK.

At this point, you can share the workbook and even allow others to update it.

Share work to your social networks

Here’s a handy way to share a to-do list, an event planning worksheet, or whatever spreadsheet you desire with your social network. You can now share Excel workbooks with Facebook and more from within Excel 2013 itself. To see the Post to Social Networks option, it’s best to save the file first to SkyDrive.

If you haven’t saved your file to SkyDrive, then choose File, Share, and click Invite People. You’ll be stepped through the process of saving the file to the cloud so that the Save As options later appear automatically. Once this is done, you are returned to the Share panel, where the Post to Social Networks option now appears. Here you can select any social network that you have linked to your Office 2013 account, choose whether viewers can view or edit your shared worksheet, include a message, and then post it for review.

Posted in Computer Software, Computer Softwares, Documentations, Free Tools, My Research Related, Research Menu | Tagged: , , | 1 Comment »

What’s New In Microsoft Word 2013 ?

Posted by Hemprasad Y. Badgujar on July 9, 2014


A word processor is indispensable for anyone who creates documents, be it for work, school, or writing angry letters to your representatives in Congress. Now that Microsoft has finally released Office 2013 to the general public, we’re naming what we think are the 10 best new features in Word 2013. (We reviewed the whole enchilada last December, when it became available to Microsoft TechNet subscribers. You can read our opinion here.)

Word 2013 boasts new and improved features across the board, spanning document creation to reading, editing, and collaboration. What’s even better is that Microsoft has made these advanced features easier for everyone to use.

The new Design tab includes document formatting options to format the entire document.

A New Look for Word

The first change you’ll see when you fire up Word 2013 is a new landing page (rather than a blank document, as in older versions of Word). In the left pane, you’ll see a list of your most recent Word documents as well as the option to open previously viewed documents. In the right pane, you can pick from various templates, such as blank, invoice, blog post, and so on. You can also search through Microsoft’s library of templates using certain keywords, such as “fundraiser” or “proposal.” The new landing page may take some getting used to, but will prove helpful in accessing templates you might have otherwise overlooked.

Microsoft Word has a new landing page.

Integrated Account Management & Connected Services

The landing page provides you with a sleek interface organized into three sections: the navigation sidebar, account information, and product information. The navigation sidebar gives you access to essential word processing functions, including sharing, exporting, and return buttons, and the overall interface is highly responsive. As illustrated below, Word 2013 comes with customizable themes that can be selected from the Office Background drop-down menu. Moreover, understanding the reach of social media, Microsoft offers Connected Services that let you access documents from virtually any device on the go. Just use your Microsoft or SkyDrive account, or connect using YouTube. Still not satisfied? You can Add a service and connect your work to your favorite online hot spots. The Product Information section on the left allows you to Manage account or review your overall Office Suite subscription, with update details.

Microsoft Word Account Preview

When adding a new service, Word 2013 allows you to link your existing Microsoft account with another online service like LinkedIn. If you don’t have a LinkedIn account, just click the Join Now button in the top right corner to create one. You can specify the access duration, and upon approving the link, the new service is connected to Word.

Word 2013 Connected Services

The new Design tab

Word 2007 and Word 2010 added interesting features for styling a document, but the tools were scattered throughout the user interface, and they were difficult to use. The new Word 2013 Design tab consolidates all these tools onto one tab, so they’re easy to find. Microsoft has also added a visual element to its Document Formatting tool that allows you to preview a document style before applying it to the entire document, and you’ll find a range of new document format designs to choose from. Document formats can be further extended by choosing Themes, Colors, and Fonts to use with them. If you come up with something you’d like to use all the time, the new Set as Default option allows you to make the current combination of formatting settings the default for all new documents.

The new Alignment Guides in Word 2013 show you when an object is lined up with another object or page element.

Navigation Task Pane

Word 2013 - Navigation Task Pane

 

Bookmarks

When you reopen a document, a bookmark is placed in the last location you scrolled to, and you can keep reading right where you left off.

Bookmarks show the last location you scrolled to.

Object Placement Beyond The Right Click

In all previous versions, placement options for objects like pictures and figures were accessible from the right-click menu: the Wrap Text feature, placement and adjustment with text, and resizing and rotation utilities. In Word 2013, a simple click reveals all the relevant functionality, with the layout options floating on the right and resizing handles on and around the object. The options can be expanded by clicking See more. Double-click the picture to zoom in for a better view. With live layout and alignment guides, you can drag your image wherever you want, and the text adapts in real time.

Word Picture Placement

Enhanced Templates Directory

A comprehensive template directory comes into view when you click New. It is advisable to load the Welcome to Word document for a quick tour of Word 2013. A large number of useful and popular templates are organized in the New tab as user-friendly tiles. Moreover, the search bar allows you to browse, view, and select from hundreds of online templates in the Office Library. Suggested searches enhance the experience by highlighting frequently used categories.

Word 2013 Landing

Office Apps: Redefining Creativity

Office Apps are a new way of adding creative and useful applications to Microsoft Office 2013 suite. The Merriam-Webster Dictionary and eFax app for Word 2013 are useful ways of increasing productivity while creating and managing documents. Moreover, there are loads of free featured apps and a huge collection in the Office Store awaiting your click. You can manage your apps and refresh to keep track of any updates.

 Insert Pictures From The Web, Instantly

The Insert tab on the ribbon reveals some new and useful additions. One such feature is the option to insert online pictures. Microsoft has updated its Royalty Free Photos and Illustrations directory, which can be accessed using the search bar in the Insert Pictures window. You can also browse your online SkyDrive storage for clipart stored in the cloud. Too often we hunt for photos in a web browser and paste them into Microsoft Word; now you can use Bing Image Search or your Flickr account to find and insert online pictures from within Word 2013.

Insert Online Pictures

Alignment with Alignment Guides

This new feature makes lining up images and other objects a cinch in Word 2013. When you move an object such as an image, chart, or SmartArt illustration around in a document, Alignment Guides automatically appear to show you when the object is lined up with other elements on the page. The guides also show you when the object is lined up with key page locations, such as the edge of the page and the left and right margins. If you have text wrapping set to an option such as Square, the Alignment Guides also show when the object is aligned with the top of a paragraph or a heading.

Read mode provides a superior experience for anyone who uses Word primarily to read documents others have created.

Comfortable reading in Read mode

If you use Word more to read documents than to create them, you’ll like Word 2013’s new Read mode. It automatically resizes a document to the full window. Click the on-screen arrows to flip through the pages, or swipe the screen from either edge of the display if you’re using a touch-screen monitor. Switch to page view for vertical scrolling. Right-click any unfamiliar word to display a definition without exiting Read mode. You can also click on any image, table, or chart to enlarge it for easier reading.

The new comments tool encapsulates related comments into a single bubble, which makes them much easier to follow.

Smarter collaboration

 

If you collaborate with others on Word documents, you know how quickly conversations can become difficult to follow, because Word’s comments tool treats every utterance as a new comment.

In Word 2013, you can reply to a comment within that comment by clicking the Comment Reply button. This captures the entire discussion of a given point inside a single comment box, which will appear as a small bubble in the document’s margin.

You can also lock the change-tracking feature, so it can’t be bypassed unless the collaborator provides the correct password.

And with the new Simple Markup option, you can hide complex markups and view the final version of the document. Switch between this and All Markup view from the Review tab or by double clicking the line in the left margin beside a tracked change.

Word can now open PDF files so you can edit and complete them in Word, including working with table data in the file.

Open and edit PDFs inside Word


Word 2013 can not only open a PDF document, it also enables you to edit it—without the need for a third-party application. You can also edit the data inside tables and move images around the document. When you’re finished, you can save the document as either a PDF or a Word file. This is a must-have feature for anyone who works with PDFs frequently.

Select a picture, chart, or SmartArt object, and the new Layout Options icon lets you configure placement and text wrapping options for it.

Discoverable layout options

New layout options in Word 2013 make features such as wrapping text around an illustration much easier to use. When you click an image, a chart, or a SmartArt object in a Word document, a Layout Options icon appears outside its top right corner. Click it to select text wrapping options such as Tight, Square, and Through. You can also select Move with text or Fix position on page to control the location of the object. Click See more to open the old Layout dialog, which offers other options for positioning the object on the page.

As with the other applications in the Office 2013 suite, a formatting task pane opens when you right-click an object and choose, for example, Format Picture or Format Shape. This stays open as you work and shows formatting options relevant to the currently selected object.

If you use tables in your documents, the new Border Painter tool and Border Styles feature simplify and speed up formatting.

New table border tools

Formatting a Word table by adding borders of different widths and styles has always been a pain point, and Word 2013’s handy Border Painter tool makes this task supremely easy. To access it, choose Table Tools, Design, Border Painter. Select a Line Style, Line Weight, and Pen Color, or choose a preset from the Border Styles list, and paint the borders onto the table. You can also sample an existing border, using the Border Sampler tool in the Border Styles panel, and then use the Border Painter to paint that style elsewhere in the table.

There are new icons for inserting rows and columns in tables and options on the Mini Toolbar for deleting them, too.

Insert Online Videos And Interactive Content Easily

In an attempt to promote dynamic content in documents, Word 2013 presents the option to add online videos, be it from social media sites like YouTube, search engines like Bing video search, or any other website (using embed code). To insert a video, type a keyword in the relevant search bar to view results.

Word 2013 displays all results, mentioning the total number of links. Just click a result to preview the video before actually inserting it into the document. Multiple videos can be added by selecting, previewing, and inserting each in turn. Text reflow allows you to fit the interactive content in the most appropriate manner.

Video Search from Word 2013

Simplified Markup View For Better Collaborations

Working with text has never been so interactive. Online pictures and videos already add color and dynamic content to a Word document; now a simplified markup view highlights changes in your document in a neat, effective manner that lets you focus on the collaborative work itself. The left sidebar indicates changes, while a small cloud on the right indicates comments at the respective places. With Word 2013, you can instantly reply to comments in an organized manner, giving rise to useful discussion threads. With Microsoft SkyDrive and SharePoint, working collaboratively on projects and documents online could not be simpler: these markers and comment threads let you highlight necessary details, corrections, and pointers for the rest of your team, and make it possible to keep track of the activity around your workspace.

Word 2013 guide - Commenting feature

More new table features

Word has always had weak table tools, and Word 2013 finally addresses the problem. You can now add a new row to a table by hovering your mouse just outside the left edge of the table at the point at which the row is to be inserted. A small icon will appear; click on it and you’re done. There’s a similar icon for easily adding a new column. New Delete buttons on the Mini Toolbar make it easy to delete columns and rows; if the table itself is selected, the option lets you delete the entire table.

New Expand/Collapse options let you collapse and expand a document to make it easier to work on.

Collapse and expand a document

Long documents can become unruly to manage, especially if you’re working in just a small portion of it. Word 2013 lets you collapse and expand a document, so you see only the portion you need. To do this, you must format the document’s headings using the built-in styles Heading 1, Heading 2, and so on.

Switch to Print Layout view and you can collapse the document by hovering your mouse to the left of a formatted heading. Click the small disclosure triangle to hide the paragraphs between this heading and the next, leaving just the heading text visible.

Right-click a heading formatted with one of the heading styles to access the Expand/Collapse option, which gives you menu control for this feature.
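Under the hood, a .docx file stores each paragraph's style in WordprocessingML, and the collapse feature keys off the built-in Heading styles. As a sketch of how that looks (the XML fragment below is a hand-made minimal sample, not extracted from a real document), a short standard-library Python script can list which paragraphs carry a heading style:

```python
import xml.etree.ElementTree as ET

# WordprocessingML main namespace
W = "http://schemas.openxmlformats.org/wordprocessingml/2006/main"

# Minimal hand-made fragment: collapse/expand relies on paragraphs
# whose w:pStyle is a built-in heading style (Heading1, Heading2, ...).
sample = f"""
<w:document xmlns:w="{W}"><w:body>
  <w:p><w:pPr><w:pStyle w:val="Heading1"/></w:pPr>
       <w:r><w:t>Introduction</w:t></w:r></w:p>
  <w:p><w:r><w:t>Body text that collapses under the heading.</w:t></w:r></w:p>
  <w:p><w:pPr><w:pStyle w:val="Heading2"/></w:pPr>
       <w:r><w:t>Details</w:t></w:r></w:p>
</w:body></w:document>
"""

def headings(xml_text):
    """Return (style, text) pairs for paragraphs styled with a heading style."""
    ns = {"w": W}
    out = []
    for p in ET.fromstring(xml_text).iter(f"{{{W}}}p"):
        style = p.find("w:pPr/w:pStyle", ns)
        val = style.get(f"{{{W}}}val") if style is not None else ""
        if val.startswith("Heading"):
            text = "".join(t.text or "" for t in p.iter(f"{{{W}}}t"))
            out.append((val, text))
    return out

print(headings(sample))  # → [('Heading1', 'Introduction'), ('Heading2', 'Details')]
```

The plain body paragraph carries no pStyle, so it is skipped, which mirrors how Word decides what collapses under a heading.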

Now you can present a document online to others in real time.

Understand The Impact, Definitely

The Review tab has a new Define feature that presents definitions of words and phrases instantly, using relevant Word apps such as the Merriam-Webster Dictionary. Results are displayed in the right sidebar as soon as you select the text. Say goodbye to the right-click menu and separate dictionary lookups when your results appear with a single click. Now you can truly understand the impact of your content.

Word 2013 guide - Define feature

Verdict

It is always beneficial to look at new features objectively, and Microsoft Word 2013 has indeed brought real progress to word processing. You can now present documents online even to people who do not have the latest version of Microsoft Office. This is achieved through cloud-based storage and synchronization of documents: just provide your team members with a link they can paste into a browser for viewing. With a modern, polished, and internet-friendly Word 2013, the new release spells out productivity for us all.

Present a document online

Office 2013's new Office Presentation Service allows you to present Word documents online. You must be signed into your Microsoft account to use this feature. When you're ready to share your document, choose File, Share, Present Online, and click the Present Online button to upload your document to the cloud. You will get a link that you can email or share with others so they can join the presentation. Once everyone is connected to the service, which is run via the Microsoft Word Web App, they'll be able to follow along as you present the document. The interface supports comments being made during the presentation, and participants can create a printable and downloadable PDF of the document if desired.

There’s a lot to like about the new Microsoft Word 2013. The new features collectively will make your day-to-day work much easier to perform whatever that happens to be.

Posted in Computer Research, Computer Software, Computer Softwares, Documentations, Free Tools, My Research Related, Research Menu | Tagged: , , , , | Leave a Comment »
