Archive for the ‘CLOUD’ Category
Posted by Hemprasad Y. Badgujar on December 22, 2014
Some of the fastest computers in the world are cluster computers. A cluster is a computer system comprising two or more computers (“nodes”) connected with a high-speed network. Cluster computers can achieve higher availability, reliability, and scalability than is possible with an individual computer. With the increasing adoption of GPUs in high performance computing (HPC), NVIDIA GPUs are becoming part of some of the world’s most powerful supercomputers and clusters. The most recent Top 500 list of the world’s fastest supercomputers included nearly 50 supercomputers powered by NVIDIA GPUs, and the current world’s fastest supercomputer, Oak Ridge National Lab’s TITAN, utilizes more than 18,000 NVIDIA Kepler GPUs.
In this post I will take you step by step through the process of designing, deploying, and managing a small research prototype GPU cluster for HPC. I will describe all the components needed for a GPU cluster as well as the complete cluster management software stack. The goal is to build a research prototype GPU cluster using all open source and free software and with minimal hardware cost.
I gave a talk on this topic at GTC 2013 (session S3516 – Building Your Own GPU Research Cluster Using Open Source Software Stack). The slides and a recording are available at that link so please check it out!
There are multiple motivating reasons for building a GPU-based research cluster.
- Get a feel for production systems and performance estimates;
- Port your applications to GPUs and distributed computing (using CUDA-aware MPI);
- Tune GPU and CPU load balancing for your application;
- Use the cluster as development platform;
- Early experience means increased readiness;
- The investment is relatively small for a research prototype cluster
Figure 1 shows the steps to build a small GPU cluster. Let’s look at the process in more detail.
1. Choose Your Hardware
There are two steps to choosing the correct hardware.
- Node Hardware Details. This is the specification of the machine (node) for your cluster. Each node has the following components.
- CPU processor from any vendor;
- A motherboard with the following PCI-express connections:
- 2x PCIe x16 Gen2/3 connections for Tesla GPUs;
- 1x PCIe x8 connection for an InfiniBand HCA (host channel adapter) card;
- 2 available network ports;
- A minimum of 16-24 GB DDR3 RAM. (It is good to have more RAM in the system).
- A power-supply unit (SMPS) with ample power rating. The total power supply needed includes power taken by the CPU, GPUs and other components in the system.
- Secondary storage (HDD / SSD) based on your needs.
GPU boards are wide enough to cover two physically adjacent PCI-e slots, so make sure that the PCIe x16 and x8 slots are physically separated on the motherboard so that you can fit a minimum of 2 PCI-e x16 GPUs and 1 PCIe x8 network card.
- Choose the right form factor for GPUs. Once you decide your machine specs you should also decide which model GPUs to consider for your system. The form factor of GPUs is an important consideration. Kepler-based NVIDIA Tesla GPUs are available in two main form factors.
- Tesla workstation products (C Series) are actively cooled GPU boards (this means they have a fan cooler over the GPU chip) that you can just plug in to your desktop computer in a PCI-e x16 slot. These use either two 6-pin or one 8-pin power supply connector.
- Server products (M Series) are passively cooled GPUs (no fans) installed in standard servers sold by various OEMs.
There are three different options for adding GPUs to your cluster:
- you can buy C-series GPUs and install them in existing workstations or servers with enough space;
- you can buy workstations from a vendor with C-series GPUs installed; or
- you can buy servers with M-series GPUs installed.
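The power-supply sizing mentioned in the node hardware list above can be sketched as a quick back-of-the-envelope calculation. All wattages below are hypothetical placeholders; check the actual specs of your CPU, GPUs, and other components.

```python
# Rough PSU sizing sketch: sum component draw and add headroom.
# All wattages are illustrative placeholders, not vendor figures.

COMPONENT_WATTS = {
    "cpu": 130,             # e.g. a high-end workstation CPU (TDP)
    "gpu": 235,             # e.g. one Tesla-class board; multiplied by GPU count
    "motherboard_ram": 75,  # board, RAM, NIC/HCA
    "disks_fans": 40,       # secondary storage and cooling
}

def psu_watts(num_gpus=2, headroom=0.25):
    """Return a recommended PSU rating (watts) with a safety margin."""
    base = (COMPONENT_WATTS["cpu"]
            + num_gpus * COMPONENT_WATTS["gpu"]
            + COMPONENT_WATTS["motherboard_ram"]
            + COMPONENT_WATTS["disks_fans"])
    return base * (1 + headroom)

# Two GPUs: (130 + 470 + 75 + 40) * 1.25 = 893.75 W
```

Pick the next standard PSU rating above the computed figure (here, a 1000 W unit).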
2. Allocate Space, Power and Cooling
The goal for this step is to assess your physical infrastructure, including space, power and cooling needs, network considerations and storage requirements to ensure optimal system choices with room to grow your cluster in the future. You should make sure that you have enough space, power and cooling for your cluster. Clusters are mainly rack mounted, with multiple machines installed in a vertical rack. Vendors offer many server solutions that minimize the use of rack space.
3. Assembly and Physical Deployment
After deciding the machine configuration and real estate the next step is to physically deploy your cluster. Figure 2 shows the cluster deployment connections. The head node is the external interface to the cluster; it receives all external network connections, processes incoming requests, and assigns work to compute nodes (nodes with GPUs that perform the computation).
In a research prototype cluster you can also use one of the compute nodes as the head node, but routing all traffic through the head node while also making it a compute node is not a good idea for production clusters because of performance and security issues. Production and large clusters mostly have a dedicated node to handle all incoming traffic while the head node just manages the work distribution for the compute nodes.
4. Head Node Installation
I recommend installing the head node with the open source Rocks Linux distribution. Rocks is a customizable, quick and easy way to install cluster nodes. The Rocks installation package includes essential components for clusters, such as MPI. Rocks head node installation is well-documented in the Rocks user guide, but here is a summary of the steps.
- Follow the steps in Chapter 3 of the Rocks user guide and do a CD-based installation.
- Install the NVIDIA drivers and CUDA Toolkit on the head node. (CUDA 5 provides a unified package that contains the NVIDIA driver, toolkit and CUDA Samples.)
- Install network interconnect drivers (e.g. Infiniband) on the head node. These drivers are available from your interconnect manufacturer.
- Nagios® Core™ is an open source system and network monitoring application. It watches hosts and services that you specify, alerting you when things go wrong and when they get better. To install, follow the instructions given in the Nagios installation guide.
- The NRPE Nagios add-on allows you to execute Nagios plugins on remote Linux machines. This lets you use Nagios to monitor local resources on remote machines, such as CPU load and memory usage, which are not usually exposed to external machines. Install NRPE following the install guide.
5. Compute Node Installation
After you have completed the head node installation, you will install the compute node software with the help of Rocks and the following steps.
- On the head node: in a terminal shell run the command:

insert-ethers

Choose “Compute Nodes” as the new node type to add.
- Power on the compute node with the Rocks CD as the first boot device or do a network installation.
- The compute node will connect to the head node and start the installation.
- Install the NRPE package as described in the NRPE guide.
6. Management and Monitoring
Once you finish the head node and all compute node installations, your cluster is ready to use! Before you start using it to run applications of interest, you should set up management and monitoring tools on the cluster. These tools are necessary for proper management and monitoring of all resources available in the cluster. In this section, I will describe various tools and software packages for GPU management and monitoring.
GPU SYSTEM MANAGEMENT
The NVIDIA System Management Interface (NVIDIA-SMI) is a tool distributed as part of the NVIDIA GPU driver. NVIDIA-SMI provides a variety of GPU system information including
- thermal monitoring metrics: GPU temperature, chassis inlet/outlet temperatures;
- system information: firmware revision, configuration information;
- system state: fan states, GPU faults, power system faults, ECC errors, etc.
NVIDIA-SMI allows you to configure the compute mode for any device in the system (Reference: CUDA C Programming Guide):
- Default compute mode: multiple host threads can use the device at the same time.
- Exclusive-process compute mode: Only one CUDA context may be created on the device across all processes in the system and that context may be current to as many threads as desired within the process that created the context.
- Exclusive-process-and-thread compute mode: Only one CUDA context may be created on the device across all processes in the system and that context may only be current to one thread at a time.
- Prohibited compute mode: No CUDA context can be created on the device.
NVIDIA-SMI also allows you to turn ECC (Error Correcting Code memory) mode on and off. The default is ON, but applications that do not need ECC can get higher memory bandwidth by disabling it.
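For reference, the compute-mode and ECC settings above map to nvidia-smi invocations roughly like the following sketch. It needs the NVIDIA driver installed, so it is guarded to do nothing elsewhere.

```shell
# Sketch: setting compute mode and ECC with nvidia-smi (requires the NVIDIA driver).
if command -v nvidia-smi >/dev/null 2>&1; then
    nvidia-smi -i 0 -c 3      # compute mode 3 = exclusive-process, on GPU 0
    nvidia-smi -i 0 -e 0      # disable ECC on GPU 0 (takes effect after a reboot)
    nvidia-smi -q -d ECC      # query current and pending ECC state
else
    echo "nvidia-smi not found; skipping"
fi
```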
GPU MONITORING WITH THE TESLA DEPLOYMENT KIT
The Tesla Deployment Kit is a collection of tools provided to better manage NVIDIA Tesla™ GPUs. These tools support Linux (32-bit and 64-bit), Windows 7 (64-bit), and Windows Server 2008 R2 (64-bit). The current distribution contains NVIDIA-healthmon and the NVML API.
The NVML API is a C-based API which provides programmatic state monitoring and management of NVIDIA GPU devices. The NVML dynamic run-time library ships with the NVIDIA display driver, and the NVML SDK provides headers, stub libraries and sample applications. NVML can be used from Python or Perl (bindings are available) as well as C/C++ or Fortran.
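As a sketch of what programmatic monitoring through NVML looks like from Python: the snippet below assumes the pynvml bindings (e.g. from the nvidia-ml-py package) and simply returns an empty list where NVML is unavailable.

```python
# Sketch of NVML-based GPU monitoring via the pynvml bindings (an assumption:
# install with `pip install nvidia-ml-py`). Safe to run on machines without GPUs.

def gpu_summary():
    """Return a list of (name, temperature in C) tuples, one per GPU."""
    try:
        import pynvml
        pynvml.nvmlInit()
    except Exception:            # bindings missing or driver unavailable
        return []
    try:
        gpus = []
        for i in range(pynvml.nvmlDeviceGetCount()):
            handle = pynvml.nvmlDeviceGetHandleByIndex(i)
            name = pynvml.nvmlDeviceGetName(handle)
            temp = pynvml.nvmlDeviceGetTemperature(handle, pynvml.NVML_TEMPERATURE_GPU)
            gpus.append((name, temp))
        return gpus
    finally:
        pynvml.nvmlShutdown()

if __name__ == "__main__":
    for name, temp in gpu_summary():
        print(name, temp, "C")
```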
Ganglia is an open-source scalable distributed monitoring system used for clusters and grids with very low per-node overhead and high concurrency. An NVML-based Python module for Ganglia’s gmond daemon is available for monitoring NVIDIA GPUs in the Ganglia interface.
NVIDIA-healthmon provides quick health checking of GPUs in cluster nodes. The tool detects issues and suggests remedies for software and system configuration problems, but it is not a comprehensive hardware diagnostic tool. Features include:
- basic CUDA and NVML sanity check;
- diagnosis of GPU failures;
- check for conflicting drivers;
- poorly seated GPU detection;
- check for disconnected power cables;
- ECC error detection and reporting;
- bandwidth test;
- infoROM validation.
7. Run Benchmarks and Applications
Once your cluster is up and running you will want to validate it by running some benchmarks and sample applications. There are various benchmarks and code samples for GPUs and the network as well as applications to run on the entire cluster. For GPUs, you need to run two basic tests.
- devicequery: This sample code is available with the CUDA Samples included in the CUDA Toolkit installation package. devicequery simply enumerates the properties of the CUDA devices present in a node. This is not a benchmark but successfully running this or any other CUDA sample serves to verify that you have the CUDA driver and toolkit properly installed on the system.
- bandwidthtest: This is another of the CUDA Samples included with the Toolkit. This sample measures the cudaMemcpy bandwidth of the GPU across PCI-e as well as internally. You should measure device-to-device copy bandwidth, host-to-device copy bandwidth for pageable and page-locked memory, and device-to-host copy bandwidth for pageable and page-locked memory.
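Building and running the two checks from a CUDA 5 samples tree might look like the following sketch. The samples path and subdirectories are assumptions based on a default install; adjust them for your toolkit version. The block is a no-op where the samples are absent.

```shell
# Sketch: run the two basic GPU sanity checks from the CUDA Samples.
SAMPLES="$HOME/NVIDIA_CUDA-5.0_Samples"     # assumed default install location
if [ -d "$SAMPLES" ]; then
    (cd "$SAMPLES/1_Utilities/deviceQuery"   && make && ./deviceQuery)
    (cd "$SAMPLES/1_Utilities/bandwidthTest" && make && ./bandwidthTest --memory=pinned)
else
    echo "CUDA samples not found; skipping"
fi
```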
To benchmark network performance, you should run the bandwidth and latency tests for your installed MPI distribution. Standard MPI installations include benchmarks such as the OSU micro-benchmarks (e.g. in /tests/osu_benchmarks-3.1.1). You should consider using an open source CUDA-aware MPI implementation like MVAPICH2, as described in the earlier Parallel Forall posts An Introduction to CUDA-Aware MPI and Benchmarking CUDA-Aware MPI.
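With a CUDA-aware MVAPICH2 build, launching the OSU tests between two nodes might look like this sketch. The hostnames and benchmark paths are placeholders; the `D D` arguments request device-to-device transfers in the CUDA-enabled benchmarks. The block skips itself where MPI is absent.

```shell
# Sketch: OSU latency/bandwidth between two nodes using MVAPICH2's launcher.
# node1 and node2 are placeholder hostnames for two compute nodes.
if command -v mpirun_rsh >/dev/null 2>&1; then
    mpirun_rsh -np 2 node1 node2 ./osu_latency D D   # GPU-to-GPU latency
    mpirun_rsh -np 2 node1 node2 ./osu_bw D D        # GPU-to-GPU bandwidth
else
    echo "mpirun_rsh not found; skipping"
fi
```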
To benchmark the entire cluster, you should run the LINPACK numerical linear algebra application. The top 500 supercomputers list uses the HPL benchmark to decide the fastest supercomputers on Earth. The CUDA-enabled version of HPL (High-Performance LINPACK) optimized for GPUs is available from NVIDIA on request, and there is a Fermi-optimized version available to all NVIDIA registered developers.
In this post I have provided an overview of the basic steps to build a GPU-accelerated research prototype cluster. For more details on GPU-based clusters and some of the best practices for production clusters, please refer to Dale Southard’s GTC 2013 talk S3249 – Introduction to Deploying, Managing, and Using GPU Clusters.
Posted by Hemprasad Y. Badgujar on August 30, 2014
“Mobile Cloud Computing at its simplest, refers to an infrastructure where both the data storage and the data processing happen outside of the mobile device. Mobile cloud applications move the computing power and data storage away from mobile phones and into the cloud, bringing applications and mobile computing to not just smartphone users but a much broader range of mobile subscribers”.
From the concept of MCC, the general architecture of MCC can be shown as in the figure. Mobile devices are connected to the mobile networks via base stations (e.g., a base transceiver station (BTS), access point, or satellite) that establish and control the connections (air links) and functional interfaces between the networks and mobile devices. Mobile users’ requests and information (e.g., ID and location) are transmitted to the central processors that are connected to servers providing mobile network services. Here, mobile network operators can provide AAA services (authentication, authorization, and accounting) to mobile users based on the home agent (HA) and subscribers’ data stored in databases. After that, the subscribers’ requests are delivered to a cloud through the Internet. In the cloud, cloud controllers process the requests to provide mobile users with the corresponding cloud services. These services are developed with the concepts of utility computing, virtualization, and service-oriented architecture (e.g., web, application, and database servers).
Posted by Hemprasad Y. Badgujar on January 30, 2014
Monitoring your network can be a real pain. First and foremost, what tool should you use? Everyone you ask will give you a different answer. Each answer will reflect a different set of requirements and, in some cases, fill completely different needs. Here are the five network monitors I prefer, based on two criteria: They’re free (as in cost) and easy to use. You might not agree with the choices, but at the price point, you’d be hard pressed to find better solutions.
1: Wireshark
Wireshark (Figure A) has always been my go-to monitor. When most other monitors fail to find what I want, Wireshark doesn’t let me down. Wireshark is a cross-platform analyzer that does deep inspection of hundreds of protocols. It does live capture and capture save (for offline browsing), which can be viewed in GUI or tty mode. Wireshark also does VoIP analysis and can read/write many capture formats (tcpdump, Pcap NG, Catapult DCT2000, Cisco Secure IDS iplog, Microsoft Network Monitor, and many more).
2: Angry IP Scanner
Angry IP Scanner (Figure B) is one of the easiest to use of all the IP scanners. It has a user-friendly GUI that can scan IP addresses (and their ports) in any range. Angry IP Scanner is cross platform and doesn’t require installation, so you can use it as a portable scanner. It offers NetBIOS information gathering, favorite IP address ranges, web server detection, customizable openers, and much more. This little scanner makes use of multithreading, so it’s fairly fast. Source code is available on the download page.
3: Zenmap
Zenmap (Figure C) is a graphical front end to the cross-platform Nmap tool. Nmap can scan huge networks, is portable, free, and well documented. It’s one of the most powerful IP traffic monitors, but that power comes with a price: complexity. Zenmap takes Nmap and makes it more accessible to users who prefer to avoid the command line. That does not mean Zenmap is the easiest of the lot. You still need to use some commands. But Zenmap offers a powerful wizard-like tool to help you through the process.
4: Colasoft Capsa Free
If you’re an admin used to more Windows-like tools, Capsa Free (Figure D) might be the perfect tool for you. There are actually two versions of Capsa: paid and free. The free version should be enough in most cases. It provides an easy-to-use dashboard you can use to create various types of captures. Capsa Free also offers plenty of alarm configurations so you can be alerted when something occurs. And it can capture more than 300 network protocols, so you won’t be missing out on anything with this free tool.
5: EtherApe
EtherApe is a Linux-only tool and is molded after the classic etherman monitor. It’s unique in that it offers an easy-to-use mapping of IP traffic on your network. It does this in real time and gives you a clear picture of the overall look of your network traffic. You can create filters (using pcap syntax) to make reading the map easier. As you can see in Figure E, a busy network can get rather challenging to read. EtherApe will display both the node and link color with the most-used protocol so it’s easier to take a quick glance, even on a busy network.
A lot of networking monitoring tools are out there, and some of them do more auditing than the tools listed here. But when you really need to know what’s going on with your network, one of the above tools will do a great job.
Have you used any of these tools? What other free scanners have you tried?
Posted by Hemprasad Y. Badgujar on November 29, 2013
We all know and love Apache. It’s great: it allows us to run websites on the Internet with minimal configuration and administration.
However, this same flexibility and lack of tuning is typically what leads Apache to become a memory hog. By applying these easy-to-understand tips, you can gain a significant performance boost from Apache.
1. Remove unused modules – save memory by not loading modules that you do not need, including but not limited to mod_php, mod_ruby, mod_perl, etc.
2. Use mod_disk_cache NOT mod_mem_cache – mod_mem_cache will not share its cache amongst different apache processes, which results in high memory usage with little performance gain since on an active server, mod_mem_cache will rarely serve the same page twice in the same apache process.
3. Configure mod_disk_cache with a flat hierarchy – ensure that you are using CacheDirLength=2 and CacheDirLevels=1 to ensure htcacheclean will not take forever when cleaning up your cache directory.
4. Setup appropriate Expires, Etag, and Cache-Control Headers – to utilize your cache, you must tell it when a file expires, otherwise your client will not experience the caching benefits.
5. Put Cache on separate disk – place your cache on a separate physical disk for fastest access without slowing down other processes.
6. Use Piped Logging instead of direct logging – logging directly to a file has issues when you want to rotate the log file: Apache must be restarted to switch to the next log file, which causes significant slowness for your users during the restart, particularly if you are using Passenger or some other app loader. Piped logging (e.g. through Apache’s rotatelogs utility) avoids the restart.
7. Log to a different disk than disk serving pages – put your logs on physically different disks than the files you are serving.
8. Utilize mod_gzip/mod_deflate – gzip your content before sending it off; the client will ungzip it upon receipt. This minimizes the size of file transfers and generally improves the user experience.
9. Turn HostnameLookups Off – stop doing expensive DNS lookups. You will rarely ever need them and when you do, you can look them up after the fact.
10. Avoid using hostname in configs – if you have HostnameLookups off, this will prevent you from having to wait for the DNS resolve of the hostnames in your configs, use IP addresses instead.
11. Use Persistent Connections – Set KeepAlive On and then set KeepAliveTimeout and KeepAliveRequests. KeepAliveTimeout is how long apache will wait for the next request, and KeepAliveRequests is the max number of requests for a client prior to resetting the connection. This will prevent the client from having to reconnect between each request.
12. Do Not set KeepAliveTimeout too high – if you have more requests than apache children, this setting can starve your pool of available clients.
13. Disable .htaccess – i.e. AllowOverride None. This will prevent Apache from having to check for a .htaccess file on each request.
14. Allow symlinks – i.e. Options +FollowSymLinks -SymLinksIfOwnerMatch. Otherwise, apache will make a separate call on each filename to ensure it is not a symlink.
15. Set ExtendedStatus Off – Although very useful, the ExtendedStatus will produce several system calls for each request to gather statistics. Better to utilize for a set time period in order to benchmark, then turn back off.
16. Avoid Wildcards in DirectoryIndex – use a specific DirectoryIndex, i.e. index.html or index.php, not index
17. Increase Swappiness – particularly on single site hosts this will increase performance. On linux systems increase /proc/sys/vm/swappiness to at least 60 if not greater. This will try to load as many files as possible into the memory cache for faster access.
18. Increase Write Buffer Size – increase your write buffer size for tcp/ip buffers. On linux systems increase /proc/sys/net/core/wmem_max and /proc/sys/net/core/wmem_default. If your pages fit within this buffer, apache will complete a process in one call to the tcp/ip buffer.
19. Increase Max Open Files – if you are handling high loads increase the number of allowed open files. On linux, increase /proc/sys/fs/file-max and run ulimit -H -n 4096.
20. Setup Frontend proxy for images and stylesheets – allow your main web servers to process the application while images and stylesheets are served from frontend webservers
21. Use mod_passenger for rails – mod_passenger is able to share memory and resources amongst several processes, allowing for faster spawning of new application instances. It will also monitor these processes and remove them when they are unnecessary.
22. Turn off safe_mode for PHP – PHP will spend about 50-70% of your script time checking against these safe directives. Instead, configure open_basedir properly and utilize plugins such as mod_itk.
23. Don’t use threaded mpm with mod_php – look at using mod_itk, mod_php tends to segfault with threaded mpm.
24. Flush buffers early for pre-render – it takes a relatively long time to create a web page on the backend, flush your buffer prior to page completion to send a partial page to the client, so it can start rendering. A good place to do this is right after the HEAD section – so that the browser can start fetching other objects.
25. Use a Cache for frequently accessed data – memcached is a great for frequently used data and sessions. It will speed up your apache render time as databases are slow.
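As an illustration, several of the tips above (4, 8, 9, and 11-15 in particular) translate into a configuration fragment along these lines. This is a sketch to adapt, not a drop-in config; it assumes mod_deflate and mod_expires are loaded in your build, and the paths and timeouts are placeholders.

```apache
# Persistent connections (tips 11-12): keep the timeout short
KeepAlive On
KeepAliveTimeout 2
MaxKeepAliveRequests 100

# Skip DNS lookups and .htaccess checks (tips 9, 13)
HostnameLookups Off
<Directory "/var/www/html">
    AllowOverride None
    Options +FollowSymLinks -SymLinksIfOwnerMatch   # tip 14
</Directory>

# Status without the extra system calls (tip 15)
ExtendedStatus Off

# Compression and expiry headers (tips 4, 8)
AddOutputFilterByType DEFLATE text/html text/css application/javascript
ExpiresActive On
ExpiresByType image/png "access plus 1 month"
```

Tune the timeouts and expiry windows for your own workload before deploying.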
Posted by Hemprasad Y. Badgujar on July 18, 2013
It is hard to fairly compare the performance of GPUs and CPUs. Any comparison has to be quite a compromise, since actual performance depends heavily on the suitability of the chip to a particular problem/algorithm among many other specifics. The simplest method is to plot theoretical peak performance over time; I chose to show it for single and double precision for NVIDIA GPUs and Intel CPUs.
In the past, I have used the graph in the CUDA C Programming Guide, but that is frequently out of date, I have no control of the formatting, I have to settle for a screenshot instead of vector output, and, until I did my own research, I wasn’t sure if it was biased.
Below is Michael Galloy’s attempt (click to enlarge).
by Michael Galloy
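The numbers behind a chart like this come from a simple formula: peak GFLOP/s = cores × clock (GHz) × FLOPs per core per cycle. Here is a sketch using published specs for two chips of that era; the figures are illustrative, and real application performance will be well below these peaks.

```python
# Theoretical peak in GFLOP/s: cores * clock (GHz) * FLOPs issued per core per cycle.

def peak_gflops(cores, clock_ghz, flops_per_cycle=2):
    """Default of 2 FLOPs/cycle assumes one fused multiply-add per core per cycle."""
    return cores * clock_ghz * flops_per_cycle

# NVIDIA Tesla K20X: 2688 CUDA cores at 732 MHz, FMA => 2 SP FLOPs/cycle per core
k20x_sp = peak_gflops(2688, 0.732)                    # ~3935 GFLOP/s single precision

# Intel Xeon E5-2687W: 8 cores at 3.1 GHz; 256-bit AVX issues one 8-wide add plus
# one 8-wide multiply per cycle => 16 SP FLOPs/cycle per core
xeon_sp = peak_gflops(8, 3.1, flops_per_cycle=16)     # ~397 GFLOP/s single precision
```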
Posted by Hemprasad Y. Badgujar on March 4, 2013
Remote Desktop Connection in Windows 7
Remote Desktop Connection, a utility included in all versions of Windows 7, allows you to use a laptop or home computer to remotely control the Windows-based desktop computer in your on-campus office or lab. When using Remote Desktop Connection from a laptop on a wireless network (including Purdue’s AirLink network and free public WiFi networks in coffee shops, hotels, etc.) or a home computer on a broadband Internet connection, it’s as if you’re sitting at the desk in your office using your computer’s keyboard and mouse — even if you’re two buildings, two miles, or two continents away.
By remotely accessing an ECN-supported desktop computer and refraining from storing your Purdue files locally on your laptop or home computer, your data remains safely stored in your home directory on ECN’s network servers — which receive daily backups.
- If you’re using Windows XP Professional rather than Windows 7, please see Remote Desktop Connection in Windows XP instead.
- If you have a Macintosh desktop at home or a Mac laptop but have a Windows-based desktop computer in your office, Microsoft also provides a free Mac version of Remote Desktop Connection; please see Remote Desktop Connection in Mac OS X. (The instructions on the page you’re reading now focus on the Windows 7 version.)
You’ll want to follow these instructions on your laptop and/or home computer, not on the on-campus desktop computer!
When connecting from off-campus, please don’t miss step #6! Connecting first to Purdue’s Virtual Private Network is required.
Who can use Remote Desktop Connection?
A remote-controlled computer can be used by only one person at a time. As such, it is recommended for use only by those who do not share the same office computer with other people. A graduate student may use Remote Desktop Connection with the permission of his or her supervisor.
Creating a Remote Desktop shortcut
1. Getting started on your Windows 7-based laptop or home computer.
On your laptop or home computer, click on the Start menu, navigate to All Programs, then to Accessories, and then launch “Remote Desktop Connection.”
2. Computer address.
2A. In the “Computer” field, enter the IP number of the desktop computer in your office. It will look similar to the following:
where both xxx and yyy are a specific number between 1 and 255. No two computers have the same full number; please obtain this number from ECN.
You may either skip to step #6 (to connect to the remote computer immediately) or proceed with step #2B (to set program options and create a shortcut for future use).
2B. Then click on the “Options” button. The window will expand to show several tabs, each with various program settings.
3. The “Experience” tab.
This step is optional. These settings might help improve your remote connection’s performance.
3A. Click on the “Experience” tab.
3B. Click the menu beneath “Choose your connection speed to optimize performance” and select one of the following:
- For most public WiFi services or home DSL connections, try “Low-speed broadband (256 Kbps – 2 Mbps)”.
- For home cable modem connections, try “High-speed broadband (2 Mbps – 10 Mbps)”.
4. The “General” tab.
4A. In the “User name” field, type your Purdue Career Account username.
Leave the “Allow me to save credentials” box unchecked.
4B. Click on the “Save As” button to proceed to the next step. The “Save As” dialog will appear.
5. Saving your shortcut file.
In this step, you’ll create a shortcut file which you will later begin using routinely to launch a remote control session to your office PC. You may save this shortcut wherever you prefer; we suggest saving a copy to your desktop.
5A. In the “Save As” dialog, click on the “Desktop” icon in the left-hand column. This will set the “Save in” location to the desktop.
5B. In the “File name” field, type a name that you’ll recognize. We suggest something like the following:
Remote Desktop to my office PC
If you’ll be creating shortcuts to multiple remote computers (say, one for each person who uses a shared home computer, each pointing to his or her unique office PC), you could enter a more specific name, e.g.:
Remote Desktop to John's office PC Remote Desktop to arms3403pc1
5C. Click the “Save” button.
The new shortcut file will be created on the desktop.
5D. (This step is optional.) If you’d like the shortcut to appear in more places, this would be a good time to make copies of it. You could drag the icon from the desktop to the Start button, for example, to place a copy of the shortcut in your Start menu.
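Under the hood, the shortcut saved in step 5 is a plain-text .rdp file that you can open in Notepad. A minimal one looks roughly like this; the values below are placeholders, and your file will contain the IP number and username you entered:

```text
screen mode id:i:2
full address:s:xxx.yyy.zzz.www
username:s:yourCareerAccount
```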
Connecting to the desktop computer in your office
These instructions assume that your computer is connected to the Internet, either wirelessly or via a broadband connection (e.g. cable modem or DSL).
6. Connect to Purdue’s Virtual Private Network. When using a computer off-campus, this step is required. Establish a connection to Purdue’s Virtual Private Network (https://webvpn.purdue.edu). For a description of this service, please see ITaP’s VPN “Getting Started” page.
7. Starting the remote connection.
7A. If you saved the icon to the desktop in step #5, locate it there and double-click the icon now.
Alternately, repeat steps #1 and #2A, and then click the “Connect” button.
Your laptop or home computer will connect via the Internet to your desktop computer in your office.
8. Remote computer verification.
You might see a dialog (like the one shown at right) noting that the remote computer’s identity cannot be verified.
8A. You may optionally enable (place a check mark in) the “Don’t ask me again for connections to this computer” box.
8B. Then click the “Yes” button.
9. Password prompt.
A password prompt will appear. Because you are connecting to an ECN-supported PC which is a member of an Active Directory domain, you might need to do a couple extra steps.
If the remote computer is running Windows 7, the login prompt will look like the one on the left in the illustration, below:
9A. If the dialog appears as above, click the “Use another account” button.
9B. Enter your username as follows, substituting your own Purdue Career Account username:
9C. Enter your Purdue Career Account password.
9D. Then click the “OK” button.
Your office computer’s desktop will appear. If you had left programs running and/or files open on your office computer, they’ll appear now, just as they were. If you had logged out of Windows before you left your office, your ECN-supported office computer will go through the typical startup process, finishing with the Message of the Day window — just as when you’re in the office.
Now, while your remote connection is open, when you type or use your mouse, it’ll be like using the keyboard and mouse at your office computer.
Minimizing and/or disconnecting
10. Using the top-central tool bar.
While connected to the remote computer, a toolbar appears at the top of your screen like the one shown here:
10A. If you need to access a file or program on your local computer (the laptop or home computer you’re using), click the minimize button on the top-central tool bar. Remote Desktop Connection will stay running (as will all programs you have open on your office PC); restore it by clicking its button on the task bar (at the bottom of your screen, usually).
10B. When you’re ready to disconnect from your office PC, you may end the session one of these ways:
- Click on the “X” button at the right edge of the top-central toolbar. This will end the remote session but leave files and programs open and running on your office PC.
- Or, as shown in the illustration below, click on the (remote computer’s) Start menu and select “Log off.” This will close all open files and programs on your office PC and also end the remote session.
Posted by Hemprasad Y. Badgujar on February 25, 2013
Make changes in CloudSim and run it
Now simply copy the “org” folder in “cloudsim-2.1.1\examples” and paste it into the NetBeans source folder as shown. Go to Source, right-click, and select Paste.
Posted by Hemprasad Y. Badgujar on February 25, 2013
Open NetBeans (any version later than 5.0) and go to File > New Project.
Select the “Java” category, then select the first option, “Java Application”, and press Next.
Your project has now been created, as shown.
Go to Libraries, right-click on it, and from the menu that appears click “Add JAR/Folder”.
Now browse to the CloudSim folder you extracted from the zip file, go to “cloudsim-2.1.1\jars”, and select “cloudsim-2.1.1.jar”.