Something More for Research

Explorer of Research #HEMBAD

CUDA Installation

Installing CUDA 6 / 6.5 on Ubuntu 12.04

Set Up a New Ubuntu 12.04

  • Clean Install
  • You can update manually from a terminal by running:
    sudo apt-get update
    sudo apt-get upgrade

    Additionally you can run:

    sudo apt-get dist-upgrade
  • Enable root login:
     sudo passwd root
     sudo sh -c 'echo "greeter-show-manual-login=true" >> /etc/lightdm/lightdm.conf'

Root won’t show up as a user, but “Login” will, which is how you manually log in with users not shown in the greeter.

Reboot, and then you should be able to log in as root.

  • Download the NVIDIA CUDA Toolkit.


Pre-installation Actions

Some actions must be taken before the CUDA Toolkit and Driver can be installed on Linux:

  • Verify the system has a CUDA-capable GPU.
  • Verify the system is running a supported version of Linux.
  • Verify the system has gcc installed.
  • Download the NVIDIA CUDA Toolkit.
Note: You can override the install-time prerequisite checks by running the installer with the -override flag. Remember that the prerequisites will still be required to use the NVIDIA CUDA Toolkit.

Verify You Have a CUDA-Capable GPU

To verify that your GPU is CUDA-capable, go to your distribution’s equivalent of System Properties, or, from the command line, enter:

lspci | grep -i nvidia

If you do not see any settings, update the PCI hardware database that Linux maintains by entering update-pciids (generally found in /sbin) at the command line and rerun the previous lspci command.

If your graphics card is from NVIDIA and it is listed, your GPU is CUDA-capable.

The Release Notes for the CUDA Toolkit also contain a list of supported products.

 Verify You Have a Supported Version of Linux

The CUDA Development Tools are only supported on some specific distributions of Linux. These are listed in the CUDA Toolkit release notes.

To determine which distribution and release number you’re running, type the following at the command line:

uname -m && cat /etc/*release

You should see output similar to the following, modified for your particular system:

i386 Red Hat Enterprise Linux WS release 4 (Nahant Update 6)

The i386 line indicates you are running on a 32-bit system. On 64-bit systems running in 64-bit mode, this line will generally read: x86_64. The second line gives the version number of the operating system.
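Because the library path used later in this guide depends on whether the system is 32-bit or 64-bit, a small sketch like the following can pick the right directory automatically (the cuda-6.0 paths are an assumption based on the default install location used below):

```shell
# Select the CUDA library directory based on the machine architecture.
arch=$(uname -m)
if [ "$arch" = "x86_64" ]; then
    cuda_lib=/usr/local/cuda-6.0/lib64   # 64-bit systems
else
    cuda_lib=/usr/local/cuda-6.0/lib     # 32-bit systems
fi
echo "$cuda_lib"
```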

Verify the System Has gcc Installed

The gcc compiler and toolchain generally are installed as part of the Linux installation, and in most cases the version of gcc installed with a supported version of Linux will work correctly.

To verify the version of gcc installed on your system, type the following on the command line:

gcc --version

If an error message displays, you need to install the development tools from your Linux distribution or obtain a version of gcc and its accompanying toolchain from the Web.

For ARMv7 cross development, a suitable cross compiler is required. For example, performing the following on Ubuntu 12.04:

sudo apt-get install g++-4.6-arm-linux-gnueabihf

will install the gcc 4.6 cross compiler on your system, which will be used by nvcc. Please refer to the NVCC manual for how to use nvcc to cross-compile to the ARMv7 architecture.

Choose an Installation Method

The CUDA Toolkit can be installed using either of two different installation mechanisms: distribution-specific packages, or a distribution-independent package. The distribution-independent package has the advantage of working across a wider set of Linux distributions, but does not update the distribution’s native package management system. The distribution-specific packages interface with the distribution’s native package management system. It is recommended to use the distribution-specific packages, where possible.

Note: Distribution-specific packages and repositories are not provided for Redhat 5 and Ubuntu 10.04. For those two Linux distributions, the stand-alone installer must be used.
Note: Standalone installers are not provided for the ARMv7 release. For both native ARMv7 and cross development, the toolkit must be installed using the distribution-specific installer.

Download the NVIDIA CUDA Toolkit

The NVIDIA CUDA Toolkit is available at

Choose the platform you are using and download the NVIDIA CUDA Toolkit.

The CUDA Toolkit contains the CUDA driver and tools needed to create, build and run a CUDA application as well as libraries, header files, CUDA samples source code, and other resources.

Download Verification

The download can be verified by comparing the posted MD5 checksum with that of the downloaded file. If the checksums differ, the downloaded file is corrupt and needs to be downloaded again.

To calculate the MD5 checksum of the downloaded file, run md5sum on it:

$ md5sum <downloaded file>
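As a sketch of the comparison step, the snippet below computes a checksum and compares it against an expected value (the file and checksum here are illustrative stand-ins, not a real CUDA download):

```shell
# Create a stand-in file; in practice this would be the downloaded .run file.
printf 'hello\n' > /tmp/sample.run

actual=$(md5sum /tmp/sample.run | awk '{print $1}')
expected=b1946ac92492d2347c6235b4d2611184   # the posted checksum (here: md5 of "hello\n")

if [ "$actual" = "$expected" ]; then
    echo "checksum OK"
else
    echo "checksum MISMATCH - download the file again"
fi
```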

 Runfile Installation

This section describes the installation and configuration of CUDA when using the standalone installer.

Pre-installation Setup

Before the stand-alone installation can be run, perform the pre-installation actions.


If you have already installed a standalone CUDA driver and desire to keep using it, you need to make sure it meets the minimum version requirement for the toolkit. This requirement can be found in the CUDA Toolkit release notes. With many distributions, the driver version number can be found in the graphical interface menus under Applications > System Tools > NVIDIA X Server Settings. On the command line, the driver version number can be found by running /usr/bin/nvidia-settings.

The package manager installations (RPM/DEB packages) and the stand-alone installer installations (.run file) are incompatible. See below about how to uninstall any previous RPM/DEB installation.

Copy the cuda_6.0.37_linux_*.run file to the root home folder for easy access.


The standalone installer can install any combination of the NVIDIA Driver (which includes the CUDA Driver), the CUDA Toolkit, or the CUDA Samples. If needed, each individual installer can be extracted by using the -extract=/absolute/path/to/extract/location/ flag. The extraction path must be an absolute path.

The CUDA Toolkit installation includes a read-only copy of the CUDA Samples. The read-only copy can be used to create a writable copy of the CUDA Samples at some other location at any point in time. To create this writable copy, use the script provided with the toolkit; it is equivalent to installing the CUDA Samples with the standalone installer.

Extra Libraries

If you wish to build all the samples, including those with graphical rather than command-line interfaces, additional system libraries or headers may be required. While every Linux distribution is slightly different with respect to package names and package installation procedures, the libraries and headers most likely to be necessary are OpenGL (e.g., Mesa), GLU, GLUT, and X11 (including Xi, Xmu, and GLX).

On Ubuntu, those can be installed as follows:

sudo apt-get install freeglut3-dev build-essential libx11-dev libxmu-dev libgl1-mesa-dri libxi-dev libgl1-mesa-glx libglu1-mesa libglu1-mesa-dev

sudo apt-get install libwxgtk2.8-0 libwxbase2.8-0 wx-common libglu1-mesa libgl1-mesa-glx zlib1g bzip2 gpsd gpsd-clients xcalib libportaudio2

Interaction with Nouveau

Proprietary Video Driver

The built-in nouveau video driver in Ubuntu is incompatible with the CUDA Toolkit, and you have to replace it with the proprietary NVIDIA driver.

$ sudo apt-get remove --purge xserver-xorg-video-nouveau

The Nouveau drivers may be installed into your root filesystem (initramfs) and may cause the Display Driver installation to fail. To fix the situation, the initramfs image must be rebuilt with:

sudo mv /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r)-nouveau.img
sudo dracut /boot/initramfs-$(uname -r).img $(uname -r)

If Grub2 is used as the bootloader, the rdblacklist=nouveau nouveau.modeset=0 options must be added at the end of the GRUB_CMDLINE_LINUX entry in /etc/default/grub. Then, the Grub configuration must be regenerated by running:

sudo grub2-mkconfig -o /boot/grub2/grub.cfg

Once this is done, the machine must be rebooted and the installation attempted again.
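For reference, the resulting entry in /etc/default/grub would look something like the excerpt below (the quiet splash options are illustrative; keep whatever options your file already has):

```shell
# /etc/default/grub (excerpt; existing options shown here are placeholders)
GRUB_CMDLINE_LINUX="quiet splash rdblacklist=nouveau nouveau.modeset=0"
```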

Graphical Interface Shutdown

Exit the GUI if you are in a GUI environment by pressing Ctrl-Alt-Backspace. Some distributions require you to press this sequence twice in a row; others have disabled it altogether in favor of a command such as sudo service lightdm stop. Still others require changing the system runlevel using a command such as /sbin/init 3. Consult your distribution’s documentation to find out how to properly exit the GUI. This step is only required if you want to install the NVIDIA Display Driver included in the standalone installer.

NVIDIA Driver RPM/Deb package uninstallation

If you want to install the NVIDIA Display Driver included in the standalone installer, any previous driver installed through RPM or DEB packages MUST be uninstalled first. Such an installation may be part of the default installation of your Linux distribution, or it could have been installed as part of the package installation described in the previous section. To uninstall a DEB package, use sudo apt-get --purge remove package_name or equivalent. To uninstall an RPM package, use sudo yum remove package_name or equivalent.


To install any combination of the driver, toolkit, and the samples, simply execute the .run script. The installation of the driver requires the script to be run with root privileges. Depending on the target location, the toolkit and samples installations may also require root privileges.

Shut Down the Graphical Interface

Ubuntu uses LightDM, so you need to stop this service:

$ sudo service lightdm stop

Press Ctrl+Alt+F1 to switch to a virtual terminal.

Run the Installer

Go to (using cd) the directory where you have the CUDA installer (a file with *.run extension) and type the following:

$ sudo chmod +x *.run
$ sudo ./*.run

By default, the toolkit and the samples will install under /usr/local/cuda-6.0 and $(HOME)/NVIDIA_CUDA-6.0_Samples, respectively. In addition, a symbolic link is created from /usr/local/cuda to /usr/local/cuda-6.0. The symbolic link is created in order for existing projects to automatically make use of the newly installed CUDA Toolkit.

If the target system includes both an integrated GPU (iGPU) and a discrete GPU (dGPU), the --no-opengl-libs option must be used. Otherwise, the OpenGL library used by the graphics driver of the iGPU will be overwritten and the GUI will not work. In addition, the xorg.conf update at the end of the installation must be declined.

Note: Installing Mesa may overwrite OpenGL libraries under /usr/lib that were previously installed by the NVIDIA driver, so a reinstallation of the NVIDIA driver might be required after installing these libraries.


Environment Setup

The PATH variable needs to include /usr/local/cuda-6.0/bin

The LD_LIBRARY_PATH variable needs to contain /usr/local/cuda-6.0/lib on a 32-bit system, and /usr/local/cuda-6.0/lib64 on a 64-bit system

  • To change the environment variables for 32-bit operating systems:

    $ export PATH=/usr/local/cuda-6.0/bin:$PATH
    $ export LD_LIBRARY_PATH=/usr/local/cuda-6.0/lib:$LD_LIBRARY_PATH
  • To change the environment variables for 64-bit operating systems:

    $ export PATH=/usr/local/cuda-6.0/bin:$PATH
    $ export LD_LIBRARY_PATH=/usr/local/cuda-6.0/lib64:$LD_LIBRARY_PATH
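Because a typo in these exports is easy to miss, a quick check such as the following confirms that the CUDA bin directory actually made it into PATH (the 6.0 path matches the default install location above):

```shell
export PATH=/usr/local/cuda-6.0/bin:$PATH

# Look for the CUDA bin directory among the colon-separated PATH components.
case ":$PATH:" in
    *:/usr/local/cuda-6.0/bin:*) echo "PATH OK" ;;
    *)                           echo "PATH missing CUDA" ;;
esac
```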


Check that the device files /dev/nvidia* exist and have the correct (0666) file permissions. These files are used by the CUDA Driver to communicate with the kernel-mode portion of the NVIDIA Driver. Applications that use the NVIDIA driver, such as a CUDA application or the X server (if any), will normally create these files automatically if they are missing, using the setuid nvidia-modprobe tool that is bundled with the NVIDIA Driver. Some systems disallow setuid binaries, however, so if these files do not exist, you can create them manually, either by running the command nvidia-smi as root at boot time or by using a startup script such as the one below:


#!/bin/bash

/sbin/modprobe nvidia

if [ "$?" -eq 0 ]; then
  # Count the number of NVIDIA controllers found.
  NVDEVS=`lspci | grep -i NVIDIA`
  N3D=`echo "$NVDEVS" | grep "3D controller" | wc -l`
  NVGA=`echo "$NVDEVS" | grep "VGA compatible controller" | wc -l`

  N=`expr $N3D + $NVGA - 1`
  for i in `seq 0 $N`; do
    mknod -m 666 /dev/nvidia$i c 195 $i
  done

  mknod -m 666 /dev/nvidiactl c 195 255
else
  exit 1
fi

/sbin/modprobe nvidia-uvm

if [ "$?" -eq 0 ]; then
  # Find out the major device number used by the nvidia-uvm driver
  D=`grep nvidia-uvm /proc/devices | awk '{print $1}'`

  mknod -m 666 /dev/nvidia-uvm c $D 0
else
  exit 1
fi

Graphical Interface Restart

Restart the GUI environment using the command startx, init 5, sudo service lightdm start, or the equivalent command on your system.


Post-installation Actions

Some actions must be taken after installing the CUDA Toolkit and Driver before they can be fully used:

  • Set up environment variables.
  • Install a writable copy of the CUDA Samples.
  • Verify the installation.


(Optional) Install Writable Samples

CUDA Repository

Retrieve the CUDA repository package for Ubuntu 14.04 from the CUDA download site and install it in a terminal.

$ sudo dpkg -i cuda-repo-ubuntu1404_6.5-14_amd64.deb
$ sudo apt-get update

CUDA Toolkit

Then you can install the CUDA Toolkit using apt-get.

$ sudo apt-get install cuda

You should reboot the system afterwards and verify the driver installation with the nvidia-settings utility.

Environment Variables

As part of the CUDA environment, you should add the following in the .bashrc file of your home folder.

export CUDA_HOME=/usr/local/cuda-6.5
export PATH=$CUDA_HOME/bin:$PATH
export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$LD_LIBRARY_PATH
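As a sketch, the exports can be appended to ~/.bashrc and then sourced; here they are written to a temporary file so the effect can be checked without touching the real ~/.bashrc:

```shell
rc=$(mktemp)
cat >> "$rc" <<'EOF'
export CUDA_HOME=/usr/local/cuda-6.5
export PATH=$CUDA_HOME/bin:$PATH
export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$LD_LIBRARY_PATH
EOF

# Source the file, as a new login shell would with ~/.bashrc.
. "$rc"
echo "$CUDA_HOME"   # → /usr/local/cuda-6.5
```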

CUDA SDK Samples

Now you can copy the SDK samples into your home directory, and proceed with the build process.

$ cuda-install-samples-6.5.sh ~
$ cd ~/NVIDIA_CUDA-6.5_Samples
$ make

If everything goes well, you should be able to verify your CUDA installation by running the deviceQuery sample in bin/x86_64/linux/release.

In order to modify, compile, and run the samples, the samples must be installed with write permissions. A convenience installation script is provided:

$ cuda-install-samples-6.0.sh <dir>

This script is installed with the cuda-samples-60 package. The cuda-samples-60 package installs only a read-only copy in /usr/local/cuda-6.0/samples.

 Verify the Installation

Before continuing, it is important to verify that the CUDA toolkit can find and communicate correctly with the CUDA-capable hardware. To do this, you need to compile and run some of the included sample programs.

Note: Ensure the PATH and LD_LIBRARY_PATH variables are set correctly.

Verify the Driver Version

If you installed the driver, verify that the correct version of it is installed.

This can be done through your System Properties (or equivalent) or by executing the command

cat /proc/driver/nvidia/version

Note that this command will not work on an iGPU/dGPU system.

Compiling the Examples

The version of the CUDA Toolkit can be checked by running nvcc -V in a terminal window. The nvcc command runs the compiler driver that compiles CUDA programs. It calls the gcc compiler for C code and the NVIDIA PTX compiler for the CUDA code.
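For scripting purposes, the release number can be extracted from that output with a little sed; the sample string below is illustrative, and in practice it would come from `nvcc -V`:

```shell
# A line in the style of nvcc -V output (illustrative sample, not captured output).
sample='Cuda compilation tools, release 6.0, V6.0.1'

# Pull out the "X.Y" that follows the word "release".
version=$(printf '%s\n' "$sample" | sed -n 's/.*release \([0-9.]*\),.*/\1/p')
echo "$version"   # → 6.0
```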

The NVIDIA CUDA Toolkit includes sample programs in source form. You should compile them by changing to ~/NVIDIA_CUDA-6.0_Samples and typing make. The resulting binaries will be placed under ~/NVIDIA_CUDA-6.0_Samples/bin.

Running the Binaries

After compilation, find and run deviceQuery under ~/NVIDIA_CUDA-6.0_Samples. If the CUDA software is installed and configured correctly, the output for deviceQuery should look similar to that shown in Figure 1.

Figure 1. Valid Results from deviceQuery CUDA Sample



The exact appearance and the output lines might be different on your system. The important outcomes are that a device was found (the first highlighted line), that the device matches the one on your system (the second highlighted line), and that the test passed (the final highlighted line).

If a CUDA-capable device and the CUDA Driver are installed but deviceQuery reports that no CUDA-capable devices are present, this likely means that the /dev/nvidia* files are missing or have the wrong permissions.

On systems where SELinux is enabled, you might need to temporarily disable this security feature to run deviceQuery. To do this, type:

# setenforce 0

from the command line as the superuser.

Running the bandwidthTest program ensures that the system and the CUDA-capable device are able to communicate correctly. Its output is shown in Figure 2.

Figure 2. Valid Results from bandwidthTest CUDA Sample



Note that the measurements for your CUDA-capable device description will vary from system to system. The important point is that you obtain measurements, and that the second-to-last line (in Figure 2) confirms that all necessary tests passed.

Should the tests not pass, make sure you have a CUDA-capable NVIDIA GPU on your system and make sure it is properly installed.

If you run into difficulties with the link step (such as libraries not being found), consult the Linux Release Notes found in the doc folder in the CUDA Samples directory.

Programming in CUDA

A simple “Hello, world” program using CUDA C is given here

In a file (say, hello.cu):

#include <stdio.h>

int main()
{
    printf("Hello, world\n");
    return 0;
}


On your machine, you can compile and run this with:

$ nvcc hello.cu

$ ./a.out

You can change the output file name with the -o flag: nvcc -o hello hello.cu

Programming using OpenCL

We should write the header includes as follows so the code builds on Linux (the <CL/cl.h> path) as well as Mac OS X, which keeps the header elsewhere:

#ifdef __APPLE__
#include <OpenCL/opencl.h>
#else
#include <CL/cl.h>
#endif

To compile “hello.c” on 64-bit Linux:

> gcc -I /path-to-NVIDIA/OpenCL/common/inc -L /path-to-NVIDIA/OpenCL/common/lib/Linux64 -o hello hello.c -lOpenCL

Additional Considerations

Now that you have CUDA-capable hardware and the NVIDIA CUDA Toolkit installed, you can examine and enjoy the numerous included programs. To begin using CUDA to accelerate the performance of your own applications, consult the CUDA C Programming Guide, located in /usr/local/cuda-6.0/doc.

A number of helpful development tools are included in the CUDA Toolkit to assist you as you develop your CUDA programs, such as NVIDIA® Nsight™ Eclipse Edition, NVIDIA Visual Profiler, cuda-gdb, and cuda-memcheck.

For technical support on programming questions, consult and participate in the NVIDIA developer forums.
