Rosenwinkel’s article “Project Plans in the New World” outlines several general factors that, in the writer’s view, most projects in organizational environments share and that project leaders and managers should address to achieve targeted success. The author argues that most projects in modern organizations, particularly teamwork projects, face managerial and operational control challenges arising from the greater diversity of groups, individuals, and personal and environmental resources that contemporary technological, social, and business developments have produced.
Rosenwinkel advances the view that a basic guidance system, in the form of a fitting project plan, is necessary for a project leader to steer an organizational teamwork project toward its objectives. He provides criteria for assessing whether a project plan is necessary and outlines such a plan’s key components and dimensions (Rosenwinkel, 1995, pp. 34-37). Plans are necessary to organize large projects, in which great diversity of detail, uncertainty, team members, and resources must be controlled and balanced. They are also important where management planning is a necessary investment for project success and future organizational objectives, especially with regard to factors such as time, targets, and available resources.
Once it has been decided that a plan is necessary, Rosenwinkel recommends reviewing a variety of applicable people and project variables to develop a suitable plan. The two sets of variables should be reviewed with a focus on the project’s process and targets, to establish a risk level for each variable. Rosenwinkel provides a 1-10 rating scale for each variable and observes that ratings of 1-3 decrease overall project risk and thus reduce the formality required of the project plan. People variables, whose ratings range from a single project owner (1) to great group diversity (10), include singular person interest, project owner diversity, owner expertise, owner experience, number of participants, skill-set diversity, previous teamwork experience, and skill-set match issues.
Project variables, which Rosenwinkel rates from 1 (simple) to 10 (complex), include clarity and stability of objective definitions, time-frame flexibility, project size, previous team/organization experience in similar projects, project technology levels, and stability of the prevailing organizational structure (Rosenwinkel, 1995, pp. 35-37). Despite these assessment and planning recommendations, Rosenwinkel rates dedication and responsibility among participants as the overriding determinant of project success.
Rosenwinkel’s views are comparable to a teamwork scenario from my own experience. I was a member of a team tasked with researching, assessing, and advising a small enterprise’s senior management on the viability, suitability, and usefulness of interactive and communication technology software for office, customer relations, and general business activities. Two of Rosenwinkel’s people variables, number of participants and skill-set diversity, and two of his project variables, clarity and stability of objective definitions and time-frame flexibility, are particularly applicable to this scenario. The team comprised six members, including the leader, which posed a significant challenge to harmonious teamwork because of individual differences and preferences in matters such as leadership style, work assignment patterns, task schedules, and contributions. The team experienced role, discipline, and contribution-pattern challenges because of its size.
The team required about four different skill-sets for coordinated success in the task. These included leadership and organization skills for the leader, field data collection/survey skills for two members, and computer and online research, logistical/technical support, and presentation skills for the remaining three members. These skill-sets added to the complexity of coordinating the team and assigning responsibilities (West, 2012, pp. 3-10, 41-57). On Rosenwinkel’s 1-10 scale, I would rate the number-of-participants and skill-set-diversity people variables in our team at 5 and 6 respectively. The scope and objectives of our team’s task were clearly defined. Our brief was to research potential technology software options with their various advantages and limitations and to recommend to the firm’s management, in a final report within the set time-frame, the two options that promised the greatest value for office, customer relations, and general business activity, based on issues such as cost and suitability for the firm’s model and objectives.
This promoted focus in the team’s task and made our activities easier. In terms of time-frame, the task needed to be completed within two months, a deadline senior firm authorities were reluctant to extend. Team members needed to organize and carry out their duties early in order to allow the member responsible for compiling and presenting the report adequate time before the deadline (West, 2012, pp. 3-10, 41-57). On the author’s scale, I would rate our team’s clarity-and-stability-of-objective-definitions and time-frame-flexibility project variables at 3 and 9 respectively. Overall, the task was significantly complex, especially given the inflexible time-frame and personality differences, which made leadership, coordination, and team progress challenging.
To improve the project’s management and results, I would have made two major changes. The time-frame constraint was a major factor in the team’s assignment. I would have allowed greater flexibility in the time-frame, to permit greater thoroughness and quality in members’ individual responsibilities and in the team’s overall results. More time, probably three or four months, would have allowed individual members and the team to review their activity results and responsibilities, adding value to the final report. The second change would be to incorporate technology into members’ responsibilities and their coordination with one another and with important research and field contacts.
Introducing internet-based and other modern communication technology into the team’s work would have enabled members to contact each other at any time and would have allowed quicker and better research, execution of individual roles, coordination, cooperation, and report preparation (West, 2012, pp. 107-117, 119-127). It would have made managerial and operational responsibilities and processes much easier, freeing time and other resources to add value to the team’s brief.
Rosenwinkel, J. (1995). Project plans in the new world. Journal of Systems Management, 46(2), 34-37.
West, M. (2012). Effective teamwork: Practical lessons from organizational research. Hoboken, NJ: John Wiley & Sons.
Article Review 5
Grid Enabled Condor Cluster
Final Year Project
This project will use Condor to establish a grid-enabled/ready HPC cluster. Research will be required to compare the Condor cluster model with other models such as OSCAR/Rocks and TORQUE/SGE/LSF. As a high performance cluster, the designed system should be able to handle parallel, vanilla, and standard jobs. To improve the usability and efficiency of the system, scheduling policies and queuing systems will need to be configured. To ensure the system is grid-enabled, research into the design and implementation of Globus and Condor-G will be required to allow uniform job submission.
Draft Version (13-1-2012)
The invention of the computer has made life easier in many fields. People have come to use computers and rely on them for much of their work, and this innovation opened the door to the development of hardware and software. According to Moore’s law, the number of transistors on a chip doubles roughly every two years, doubling processing power, and software developers have tried to benefit from the huge resources available, which were predicted to keep increasing. As transistors became smaller, however, Moore’s law became harder to sustain. Scientists and engineers tried to keep pace with it by combining the resources of many computers to act as one supercomputer, using a middleware layer on top of the operating system.
Such a combination of computers is called a cluster. Clusters can be grouped, according to the way they work, into three categories: High Performance Computing (HPC) clusters, High Throughput Computing (HTC) clusters, and High Availability (HA) clusters. An HPC cluster combines a number of computers or servers and is used where time is significant. In contrast, an HTC cluster provides high throughput computing by harnessing the unused power of non-dedicated machines. The performance of an HPC cluster is measured in GFLOPS, i.e. how many billions of floating-point calculations it can perform per second. HPC clusters have given many manufacturers the ability to replace physical prototypes with virtual prototypes. A cluster can also be enabled to receive jobs from, or send jobs to, external machines, which is called grid computing; grid technology helps organizations share their computing power. Different middleware are available for clusters, some commercial and others from the open source community. One well-known middleware is Condor, which is widely used to harness the unused power of non-dedicated computers or servers (HTC clusters) (Silva, Barreira and Ribeiro 96). Condor can also be used as a grid middleware through its Condor-G extension, which integrates the job management part of the Condor project with the security and resource access of the Globus Toolkit.
In this project, Condor will be used to build a grid-enabled HPC cluster on three IBM eServer machines. Condor will run on the CentOS operating system, a free open source Linux distribution. This report includes background on Condor and Condor-G, the grid extension of Condor; a comparison of some common cluster middleware; and the process of deploying and testing the HPC cluster (Silva, Barreira and Ribeiro 96).
The aim of this project is to establish a grid-enabled/ready HPC cluster using Condor on three IBM eServers. Condor will be examined as an HPC cluster middleware and its features compared against other existing HPC cluster middleware.
To achieve the aim of the project, the following objectives must be completed:
1- Understanding the architecture and installation process of Condor.
2- Understanding the Condor-G architecture and how it is deployed.
3- Setting up the cluster environment and defining each node’s role.
4- Installing a supported operating system.
5- Installing and configuring Condor and the specified universes (standard, vanilla, and parallel).
6- Installing and configuring Condor-G.
7- Testing the cluster to make sure it works as it should.
8- Benchmarking the cluster to measure its performance.
2.0. Background study:
2.1- Condor:
Condor is a specialized workload management system developed at the University of Wisconsin-Madison. It is a full-featured batch system that provides a job queuing mechanism, a priority scheme, scheduling policy, resource monitoring, and resource management. Condor is mainly used in high throughput computing (HTC), where cycle stealing can exploit the wasted time on non-dedicated machines; however, it can also manage dedicated machines and be used for high performance computing (Silva, Barreira and Ribeiro 96). The outcome is one system that manages all dedicated and non-dedicated machines. When a user submits a job to Condor, the job is placed in a queue, and Condor decides, according to a policy, where and when to run it. The job’s progress can be monitored, and the user can be informed of its status. A typical Condor system consists of four components:
Condor pool: the machines that form the cluster together with the jobs submitted to it (Hussain 273).
Central manager: a dedicated machine that collects information about the available resources and the submitted jobs and maps each queued job to an available machine that meets its requirements. A Condor pool has exactly one central manager, which may additionally act as an execution machine, a submitting machine, or both (Hussain 273).
Submitting machine: a machine in the Condor pool from which a user can submit a job to the cluster.
Execution machine: a machine in the Condor pool that executes submitted jobs.
Each machine can be a submitting machine, an execution machine, or both, by having both the job management and the resource management tools installed. Condor runs several daemons that enable it to work correctly (Hussain 273).
Condor-Master: this daemon keeps the rest of the Condor daemons running on the various machines in the pool. It spawns the other daemons and periodically checks for newly installed binaries; when new binaries are found, the master restarts the affected daemons (Lawrence 327). If any daemon crashes, the master restarts it and sends email to the Condor administrator of the pool. Condor-Master also supports various administrative commands that allow daemons to be started, stopped, or reconfigured remotely. It runs on every machine in the pool, regardless of the machine’s function.
Condor-startd: this daemon represents a resource (a machine) to the Condor pool. It advertises features of the resource that facilitate matching the resource with pending requests. Condor-startd runs on the machines in the pool selected to perform jobs and is responsible for enforcing the policy, configured by the resource owner, that determines the conditions under which remote jobs are started, resumed, suspended, killed, or vacated. Startd spawns a Condor-Starter when it is ready to execute a Condor job (Lawrence 327).
Condor-Starter: this program spawns the remote Condor job on a machine, sets up the execution environment, and monitors the job while it runs. Upon completion of the job, the Starter sends a status notification back to the submitting machine and exits (Lawrence 327).
Condor-schedd: this daemon represents resource requests to the Condor pool. Users submit jobs to the Condor-schedd, which stores them in a queue and manages them. Various tools connect to the schedd to view and manipulate the job queue, including condor-submit, condor-rm, and condor-q (Lawrence 327); these commands cannot work if the Condor-schedd on the machine is down. It is the Condor-schedd that advertises the waiting jobs in the job queue and claims available resources in order to serve the requests. Once the schedd is matched with a particular resource, it spawns a Condor-shadow to serve the request.
Condor-shadow: this program runs on the machine where requests are submitted. It manages resource requests and links standard-universe jobs to Condor’s remote system call mechanism: system calls are performed on the submit machine and the results are sent back to the remote job over the network. Condor-shadow also makes decisions about the resource request, such as which files may be accessed and where checkpoint files are stored (Jia and Zhou 453).
Condor-collector: this daemon collects status information about the Condor pool. The other daemons periodically send ClassAd updates to the Condor-collector; these ClassAds contain information about the daemons, the resources they represent, and the resource requests in the pool (such as submitted jobs). Query tools such as condor-status ask the collector for specific information about the various parts of Condor, and the Condor daemons themselves also query the collector for information they need, including the addresses used to send commands to remote machines (Jia and Zhou 453).
Condor-negotiator: this daemon performs matchmaking in the Condor system. A negotiation cycle starts when the negotiator queries the collector for the current state of the resources available in the pool. The negotiator then contacts each schedd with waiting resource requests, which are prioritized and matched with available resources. The negotiator enforces user priorities within the system: users who already hold more resources are given lower priority when acquiring further resources, ensuring that resources are made available to users with better priority (Jia and Zhou 453).
Condor-gridmanager: this daemon handles the execution and management of all jobs in the grid universe. It is invoked by the Condor-schedd when grid-universe jobs are in the queue and exits when no such jobs remain (Jia and Zhou 453).
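As an illustration of how these daemons are assigned to machines, a Condor pool is typically configured through the DAEMON_LIST setting in the condor_config file. The fragment below is a minimal sketch, not taken from this project; the role assignments shown are the conventional ones.

```
## Central manager: runs the matchmaking daemons
## (here it also submits and executes jobs)
DAEMON_LIST = MASTER, COLLECTOR, NEGOTIATOR, SCHEDD, STARTD

## Dedicated execution node: only the master and startd are needed
# DAEMON_LIST = MASTER, STARTD

## Submit-only node: the master plus the schedd that holds the job queue
# DAEMON_LIST = MASTER, SCHEDD
```

The Condor-Master on each machine reads this list and keeps exactly those daemons running, which is why it is the one daemon present on every machine regardless of role.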
Condor uses ClassAds, a flexible representation of the characteristics and constraints of the machines and jobs in a Condor pool, to match submitted jobs with available machines; this mechanism is called matchmaking. Each execution machine in the pool has its own ClassAd based on its specifications, and each job submitted to the pool has its own ClassAd based on the user’s preferences. As matchmaker, Condor continually reads the ClassAds of submitted jobs and of execution machines, and maps each job to an execution machine according to its requirements.
Condor supports checkpointing, a mechanism that saves the work already computed by the cluster, so that if a job stops after processing part of its calculation, it can be restarted from that checkpoint rather than from scratch; this mechanism helps Condor work in HTC environments. Condor also supports remote system calls, a mechanism that allows a program to access data files from any machine in the Condor pool. These mechanisms are supported only in some universes, Condor’s term for its runtime environments (Hussain 271).
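As a sketch of what matchmaking operates on, the following is a simplified pair of ClassAds in Condor’s usual attribute syntax: a job ad and a machine ad. The attribute values are invented for illustration; the negotiator matches the pair when each ad’s Requirements expression evaluates to true against the other ad.

```
# Job ClassAd (simplified, hypothetical values)
MyType       = "Job"
Owner        = "student"
Requirements = (Arch == "X86_64") && (OpSys == "LINUX") && (Memory >= 512)

# Machine ClassAd (simplified, hypothetical values)
MyType       = "Machine"
Arch         = "X86_64"
OpSys        = "LINux"
Memory       = 2048
Requirements = (LoadAvg < 0.3)   # owner policy: accept jobs only when idle
```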
In addition, Condor provides job checkpointing and migration for jobs that run under the standard universe. This gives Condor the ability to resume a job from the point where it previously stopped, a form of fault tolerance. Remote system calls, another feature of the standard universe, give Condor the ability to redirect all of a job’s input/output-related system calls to the submitting machine (Silva, Barreira and Ribeiro 97), so the user no longer needs to make the job’s data files available on the executing node, even if there are no shared folders between the nodes. Condor has several runtime environments, called universes; the following are some of them:
Standard universe: Condor offers remote system calls and checkpointing within the standard universe, which makes jobs more reliable and allows uniform access to resources from the pool. A standard-universe job is prepared by re-linking the program with condor-compile; most programs can be made standard-universe jobs, though with a few restrictions (Lawrence 323).
Vanilla universe: the vanilla universe is used for programs that cannot be re-linked successfully; it is also useful for shell scripts. Unfortunately, jobs run under the vanilla universe cannot use remote system calls or checkpointing, so work partially completed on a remote machine is at risk of being lost. Condor has two choices for such jobs: they can be suspended and completed later, or given up and restarted from scratch on another machine in the pool (Lawrence 323).
Parallel universe: this universe executes parallel programs, whose parts run collectively across multiple processors to solve a problem faster. The parts execute on different computers at the same time and communicate with one another during execution, which is one of the key aspects of parallel programming (Lawrence 327).
Globus universe: this is used to submit jobs to Condor-G in the grid. The Globus universe does not allow matchmaking or checkpointing; it has a limited file-transfer mechanism, supports few platforms, and provides no job exit code (Lawrence 323).
Java universe: this is used to execute Java programs. Regardless of owner, location, or JVM version, programs submitted to the Java universe can run on any machine that has a JVM; Condor takes care of details such as setting the class path and finding the JVM binary (Lawrence 323).
Scheduler universe: through the scheduler universe, users submit lightweight jobs that run immediately, beside the Condor-schedd on the submit host itself. Scheduler-universe jobs are not matched with remote machines and are not preempted; the Condor-schedd’s ClassAd is used to evaluate each job’s requirements expression. The scheduler universe was originally intended for meta-schedulers such as Condor-dagman, but it can also manage jobs on the submit host. Unlike the local universe, the scheduler universe uses no Condor-Starter daemon to manage jobs and therefore offers limited policy support and features (Lawrence 323). The local universe is a better choice for most jobs that must run on the submit host, since it offers a rich set of management features and is more consistent with other universes such as the vanilla universe; the scheduler universe is likely to be retired in the future in favor of the local universe.
Local universe: the local universe allows job submission and execution under different execution conditions and assumptions. Jobs do not wait to be matched with a machine but are executed right away, without preemption, on the machine where they are submitted; the Condor-schedd’s ClassAd is used to evaluate each job’s requirements expression (Lawrence 324).
VM universe: the VM universe enables Condor to execute Xen and VMware virtual machines.
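The universe a job runs in is selected in its submit description file, the file passed to condor-submit. The following is a minimal sketch for a vanilla-universe job, not taken from this project; the executable and file names are hypothetical.

```
# Minimal submit description file (hypothetical job)
universe   = vanilla
executable = my_program
arguments  = input.dat
output     = job.out
error      = job.err
log        = job.log
queue
```

Changing `universe = vanilla` to `standard`, `parallel`, or `grid` selects the corresponding runtime environment (a standard-universe job must first be re-linked with condor-compile).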
The main advantage of Condor over other middleware is its ability to run compute-intensive jobs on both dedicated and non-dedicated machines. In addition, it supports heterogeneous environments and is compatible with most OS platforms.
2.2- Condor-G:
Condor-G is the form of Condor in which Globus software is used to execute grid-universe jobs sent to grid resources. The Globus Toolkit offers a framework for developing grid applications and systems, while Condor offers corresponding job management capabilities: the user submits jobs, manages them, and has them executed on widely distributed machines. Condor-G can be seen as a replacement for globusrun, a Globus Toolkit tool, except that Condor-G does more than globusrun does (Joseph and Fellenstein 37). Condor-G allows many jobs to be submitted at once and monitored through a convenient interface. While jobs run, the user is notified of completed jobs, of required maintenance, and of failure or expiry of the Globus credentials, which may expire during the run. Condor-G is also a fault-tolerant system: if a machine crashes, all functions become available again once the machine resumes running.
2.2.1 Globus Terminology and Protocols
Globus software provides a distinct set of protocols aimed at facilitating remote job execution, data transfer, and authentication. Authentication means verifying an identity and may include obtaining authorization to use a given resource. Condor and Globus use several protocols and terms that allow interaction with grid machines when executing jobs (Wilkinson 179):
GSI: the Grid Security Infrastructure provides the relevant building blocks for Condor-G and other grid protocols. It authenticates the user once, using public key infrastructure (PKI) mechanisms that verify a user-supplied credential in the grid, and then maps that credential to the local credentials and authorization/authentication mechanisms applied at each site (Wilkinson 194).
GRAM: the Grid Resource Allocation Management protocol supports remote submission of a computational request, such as running a program, to a remote computational resource, as well as monitoring and controlling the resulting computation. Condor-G uses GRAM to reach remote Globus job managers.
GASS: Globus Access to Secondary Storage provides mechanisms for transferring data to and from a remote HTTP, GASS, or FTP server. Condor uses GASS in the gt2 grid type to transfer job files between the remote resource and the machine where the job is submitted.
RSL: the Resource Specification Language is the language in which GRAM accepts and specifies job information.
GridFTP: an extension of FTP that provides high performance and security options for transferring large amounts of data.
Gatekeeper: a software daemon that runs on the remote machine in the grid and is relevant only to gt2 grid types. The gatekeeper handles the initial communication with the remote resource.
Job manager: a Globus service initiated at the remote resource to submit, track, and manage grid jobs that run on an underlying batch system supported by Globus; examples include LSF, PBS, and Condor (Joseph and Fellenstein 44).
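To illustrate the notation, a gt2-style RSL request might look like the fragment below. This is a sketch with hypothetical values, not taken from the project; GRAM parses the attribute list and passes the request to the job manager.

```
& (executable  = /bin/echo)
  (arguments   = "hello grid")
  (count       = 1)
  (maxWallTime = 10)
```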
Figure 2.1: Remote execution by Condor-G on remote Globus resources
2.2.2- Grid Universe’s matchmaking
The grid universe permits the user to specify the particular grid site to which jobs are destined. This is sufficient where the user knows the jobs’ destination, or where a higher-level resource broker decides which grid site to use. When there are several grid sites the user could use, Condor enables the grid universe’s matchmaking to select the grid resource a given job will run on. This matchmaking is relatively new, and rough edges are likely while it is being improved. To have Condor match jobs to grid resources, the user provides a submit description file containing the commands needed for the job to be matched with a grid resource (Lawrence 313). The grid resource is identified to Condor by advertising, via condor-advertise, a ClassAd that specifies the attributes Condor needs to make matches properly; the ClassAd representing the grid resource is then sent to Condor and used in matchmaking.
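When the user does name the destination directly, the grid-universe submit description file identifies the remote resource with the grid_resource command. The sketch below targets a gt2 gatekeeper; the host name and executable are hypothetical, and the syntax follows the Condor versions current at the time of writing.

```
# Grid-universe job aimed at a specific gt2 resource (hypothetical host)
universe      = grid
grid_resource = gt2 gatekeeper.example.edu/jobmanager-pbs
executable    = analysis
output        = analysis.out
log           = analysis.log
queue
```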
Figure 5.1: Interaction between Globus-managed resources and Condor-G
2.3- Cluster Middleware:
There are a number of cluster middleware; some are commercial and others are free and open source. Some middleware are packaged as cluster installation suites, such as OSCAR, Rocks, and Windows HPC Server. These packages contain all the applications needed to run and manage a dedicated HPC cluster, the aim being to simplify the process of building HPC clusters. However, this feature can be seen as a drawback if the user lacks knowledge of the cluster’s architecture and structure, the different applications used to run and manage it, and the function of each part. The other middleware are called cluster batch schedulers (Lawrence 323); these are the most effective tools in the cluster tool kits.
A scheduler’s main work is to manage the available resources and the available jobs and to map them together by assigning each job to an available machine. To do this, the scheduler uses a scheduling algorithm, sometimes combined with specific policies from which the system administrator can choose to obtain a high-performance cluster. Some of the most used scheduling algorithms are First Come First Serve (FCFS), First In First Out (FIFO), Round Robin (RR), Shortest Job First (SJF), and Longest Job First (LJF). These algorithms can be applied together with scheduling policies such as backfilling and fair share to increase cluster performance. The following subsections give a quick review of various middleware, including how each works (Etsion and Tsafrir 3).
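As a small illustration of why the choice of algorithm matters, the sketch below (not part of the report, with invented job runtimes) compares two of the algorithms named above, FCFS and SJF, by the average time jobs wait on a single machine:

```python
# Compare FCFS and SJF by average waiting time on one machine.
# Job runtimes are hypothetical values chosen for illustration.

def avg_wait(runtimes):
    """Average waiting time when jobs run back-to-back in the given order."""
    wait = 0      # total waiting time accumulated across all jobs
    elapsed = 0   # time already spent running earlier jobs
    for r in runtimes:
        wait += elapsed   # this job waited for every job scheduled before it
        elapsed += r
    return wait / len(runtimes)

jobs = [6, 2, 8, 3]              # hypothetical job runtimes in minutes

fcfs = avg_wait(jobs)            # First Come First Serve: submission order
sjf = avg_wait(sorted(jobs))     # Shortest Job First: shortest runtime first

print(fcfs, sjf)                 # prints 7.5 4.5
```

Running short jobs first minimizes the average wait, which is why SJF never does worse than FCFS on this measure; FCFS, on the other hand, is trivially fair to submission order, which is the trade-off the policies above (backfilling, fair share) try to balance.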
2.3.1- OSCAR:
OSCAR (Open Source Cluster Application Resources) is a package containing most of the tools needed to run and manage HPC clusters; the tools belong to the open source community. OSCAR must be installed on top of a supported Linux distribution: the OS and OSCAR are installed on the head node, and during OSCAR installation a client image is created and sent to the client nodes. The client image includes the minimum parts of the OS needed by the client nodes (Wilkinson 199).
OSCAR consists of the following packages:
System Installation Suite (SIS): a tool for installing an image of the operating system on the client nodes.
Cluster Command Control (C3): a suite of tools used for cluster administration and application support.
Switcher Environment Manager: allows the user to switch between different environments.
OSCAR Password Installer and User Management (OPIUM): used to synchronize the cluster’s accounts and configure ssh for users.
LAM/MPI, MPICH, OpenMPI, and PVM: the standard interfaces for HPC parallel programming.
Ganglia Monitoring System: a monitoring system for HPC that can gather information about each node in the cluster.
Torque Resource Manager: a flexible workload manager based on OpenPBS. Torque consists of three main components:
– the Torque server, which runs on the head node to control the jobs and track the cluster resources;
– the mom daemon, which runs on each client node to start and stop submitted jobs;
– the Torque scheduler, a simple FIFO scheduler. This scheduler can be used, but by default OSCAR uses the Moab/Maui scheduler (Ajdari 3).
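For comparison with Condor’s submit description files, a Torque job is described by a batch script whose #PBS directives state the resources requested; the server queues it and a mom daemon starts it on a client node. The sketch below uses hypothetical names and is not taken from the project.

```
#!/bin/sh
#PBS -N test_job              # job name (hypothetical)
#PBS -l nodes=1:ppn=2         # one node, two processors per node
#PBS -l walltime=00:10:00     # ten-minute wall-clock limit
cd $PBS_O_WORKDIR             # run from the directory of submission
./my_program input.dat        # hypothetical executable
```

Such a script would be handed to the server with qsub and its status inspected with qstat.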
2.3.2- Rocks:
Like OSCAR, the Rocks package contains various open source tools used to run and manage HPC clusters. The main difference is that Rocks comes with the supported OS in one distribution, instead of installing the cluster tools on top of an existing OS. This feature can make the installation process easier, since there is no need to check compatibility between the OS and the cluster tool kit (Wilkinson 201). Earlier versions of Rocks were based on the Red Hat distribution, while more recent ones are based on CentOS.
2.3.3- Windows HPC Server:
Windows HPC Server is Microsoft’s solution for high performance clusters. Its advantage is ease of use for people who are familiar with Microsoft products. It has its own cluster manager and job manager, which can be installed on any supported computer in addition to the head node, to support remote cluster management and job submission. As a Microsoft solution it supports Microsoft products, for example by allowing high performance computing to be used for Excel sheet calculations. Microsoft states that Windows HPC Server 2008 R2 “provides the ability to switch between traditional first-come, first-serve scheduling and a new service-balanced scheduling policy designed for SOA/dynamic (grid) workloads, with support for preemption, heterogeneous matchmaking (targeting of jobs to specific types of nodes), growing and shrinking of jobs, backfill, exclusive scheduling, and task dependencies for creating workflows” (Microsoft, 2011). On the other hand, it requires Windows Server as a platform, so it is compatible only with Windows; on that platform, Windows Active Directory can make the installation process easier.
2.3.4- Moab:
Moab is a workload manager based on the open source Maui batch scheduler, so it has all of Maui’s features and flexibility plus extra features. Moab provides a simple FCFS scheduler with a backfilling policy. It supports UNIX and most common Linux distributions, and it can be integrated with the Torque resource manager and the Ganglia monitoring system (Etsion and Tsafrir 1).
2.3.5- Load Leveler:
Load Leveler is the workload manager used on the IBM SP. Miron Livny, one of the principal investigators of the Condor Project, when asked about moving Condor into a commercial context, stated that "IBM's Load Leveler, which runs on SP, is already a commercial offspring of Condor". Like Condor, Load Leveler has remote system call and checkpointing features, and it has daemons similar to those available in Condor. Load Leveler supports FCFS, FCFS with backfilling, and gang scheduling, and it can also be interfaced with an external scheduler (Etsion and Tsafrir 1).
2.3.6- Load Sharing Facility (LSF):
LSF is a workload manager for HPC. It supports FCFS, preemptive, fair-share, Service Level Agreement (SLA), and backfilling policies, and it can also interface with an external scheduler. It supports automatic and manual checkpoints, migration, job reruns, and automatic job dependencies (Etsion and Tsafrir 1).
2.3.7- Portable Batch System (PBS):
PBS was developed for NASA; it later branched into OpenPBS, the original open-source version, and PBS Pro, the commercial version. It supports a number of scheduling policies, including FCFS, SJF, user/group priority, and fair share. A specific scheduler can be implemented using the C and Tcl programming languages or a special language called BaSL. PBS supports checkpointing, rerunning failed or stopped jobs, and failed-node recovery (Etsion and Tsafrir 2).
2.3.8- Sun Grid Engine (SGE):
The default scheduling policy of SGE (now Oracle Grid Engine) is FCFS. In addition, it has an equal-share scheduler, a simple fair-share policy that the system administrator can set to distribute the resources equally among all users and groups. A new job queue can be placed among the queues with a specific dispatch order. SGE currently does not support a backfilling policy (Etsion and Tsafrir 2).
2.4- How other sites use Condor:
During the research, it was found that most sites use Condor for high-throughput computing. However, some universities use Condor for high-performance computing in addition to HTC. Some of these universities aim to connect their clusters so that they appear as one cluster: a user submits a job to this combined cluster, which decides, depending on the job requirements, where the job should run. Other sites use Condor-G to benefit from its advantages in a grid environment. However, most of these sites do not use Condor as cluster middleware. The use of Condor at three universities will be explained: Lehigh University, the University of Manchester, and the University of Texas (Etsion and Tsafrir 2).
2.4.1- University of Lehigh:
The University of Lehigh in the USA has Beowulf clusters with approximately 1400 computing cores equipped with distributed memory. These are formed by two main clusters, whose specifications can be seen in Table (2.1). According to the university's website, the clusters are managed using only Condor; however, one of the clusters uses the PBS scheduler while the other uses the Condor scheduler. They normally use the vanilla, standard, and MPI universes, as stated on the website. In addition to the Condor HPC cluster, Lehigh has a wide HTC Condor pool (University of Lehigh 2011).
Name                 | Inferno                        | Corona
Nodes                | 40                             | 64 (16×4 nodes/2U)
CPU in each node     | Double quad-core Xeon 1.8GHz   | 2 × AMD Opteron 8-core 6128 (16 cores/node)
Compute cores        | 320                            | 1040
Cache                | 4MB                            | L1: 8×128KB, L2: 8×512KB, L3: 12288KB
Architecture         | 64-bit x86_64                  | 64-bit x86_64
Memory per node      | 16GB                           | 32GB or 64GB
Operating system     | CentOS release 5 (Final)       | CentOS release 5.5 (Final)
Disk                 | 800GB /home mounted on all nodes; quota for individual users is 10GB
Networks             | All connections are at 1000Mbps
Job submission       | Condor submission              | PBS submission
Scheduler            | Condor                         | PBS
Table (2.1): University of Lehigh HPC cluster specifications
2.4.2- University of Manchester:
Manchester University in the UK has an HPC Linux cluster called Mace01. This cluster uses SGE as its batch scheduler, but it uses Condor to backfill the system with as many short jobs as possible. The pool consists of a number of computing nodes and only one submit node, due to the network topology used, which relies on an IP tunnel and requires significant network routing changes (Manchester University 2011).
2.4.3- University of Texas:
The University of Texas has a large Condor cluster that uses dedicated computing nodes in addition to harnessing idle desktop machines. The cluster is used to run serial jobs, since its network connections are not fast enough to handle heavy parallel jobs; however, small MPI jobs are run on it. The dedicated machines consist of a central manager, checkpoint servers, submit nodes, compute nodes, and file storage (University of Texas 2012).
3.1- Cluster Design:
The first step in designing the Condor cluster is to decide each machine's job. Since three machines were provided for this project, the decision was to keep one as the central manager, submitting machine, and computing machine. The machine with the best specifications was selected for this role, as can be seen in Table (3.1).
No | Machine | CPU                                        | Memory | Hard Drive
1  | CM      | 2 × dual-core AMD Opteron 257 (2.2GHz)     | 3GB    | 70GB
2  | N1      | 2 × AMD Opteron 248 (2.2GHz)               | 2GB    | 33GB
3  | N2      | 2 × AMD Opteron 248 (2.2GHz)               | 2GB    | 33GB
Table (3.1): Machine specifications
Naming scheme: all machines use the domain CHPC.hud.ac.uk. The abbreviation CHPC is taken from "Condor HPC", while the rest of the domain name is the university's domain name. The following table shows the machine names and IP addresses.
No. | Machine Name       | IP address   | Job
1   | CM.CHPC.hud.ac.uk  | 192.168.0.10 | Central manager + (submitting + executing) node
2   | N1.CHPC.hud.ac.uk  | 192.168.0.11 | Executing node
3   | N2.CHPC.hud.ac.uk  | 192.168.0.12 | Executing node
Table (3.2): Machine names and IP addresses
Three IBM machines are used to form the cluster. One was set up as the central manager, submitting, and computing node; this machine has the best specifications of the three. The other two machines are set up as computing nodes (IBM 2011).
Figure (3.1): Cluster Design
CentOS 5.4 was chosen as the operating system for the three dedicated machines. CentOS is a Linux flavour based on Red Hat, which became commercial, while CentOS is totally free. This selection helps in getting public support, since most of the sites that use Condor use CentOS as their operating system, and the native packages are available since it is based on Red Hat (CentOS 2009). Being based on Red Hat, CentOS benefits from the wide support and knowledge base of one of the most popular distributions, as well as from the security updates provided by Red Hat: any fixes to RHEL normally apply to CentOS within 24 hours. CentOS is a very stable distribution and its release cycle is slow, normally 2-3 years.
3.2- Cluster Middleware:
Condor: Condor was installed using the native Red Hat packages (RPMs), since CentOS is based on Red Hat. Condor was installed first on the head node and then on the executing nodes; the installation process is the same, but each machine has its own Condor configuration file depending on the job the machine does. The following standard steps were used to install Condor on all three machines; the difference is in the daemon list running on the central manager machine versus the daemons running on the executing nodes (CentOS 2009).
Add a condor user.
Set the host names and add them to /etc/sysconfig/network and /etc/hosts on each machine.
Install condor using the following commands:
Set the Condor configuration by editing /etc/condor/condor_config and /etc/condor/condor_config.local.
The following changes were made to the condor_config.local file:
Run Condor using the following command, then check that it works correctly by listing the running daemons with the command
The standard and vanilla universes work directly when Condor is installed correctly and need no additional configuration or installation (Lawrence 334). The previous steps were applied to the executing nodes, except that the DAEMON_LIST contains only MASTER and STARTD.
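To make the role split concrete, the following sketch shows illustrative file contents. The host names and addresses are those from Table (3.2), while the macro values are typical Condor settings and not necessarily the project's exact files.

```
# /etc/hosts entries on every machine (names and IPs from Table 3.2):
192.168.0.10   CM.CHPC.hud.ac.uk   CM
192.168.0.11   N1.CHPC.hud.ac.uk   N1
192.168.0.12   N2.CHPC.hud.ac.uk   N2

# condor_config.local on the central manager:
CONDOR_HOST = CM.CHPC.hud.ac.uk
DAEMON_LIST = MASTER, COLLECTOR, NEGOTIATOR, SCHEDD, STARTD

# condor_config.local on an executing node:
CONDOR_HOST = CM.CHPC.hud.ac.uk
DAEMON_LIST = MASTER, STARTD
```

The daemon lists reflect the split described above: the central manager also submits and executes, so it runs the collector, negotiator, schedd, and startd, while the executing nodes run only the master and startd.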
Running the parallel universe:
Condor supports parallel jobs, including MPI, but additional configuration is needed: an MPI implementation must be installed first. Two libraries are commonly used to run MPI jobs, Open MPI and MPICH. MPICH2 was selected first because clear documentation was found on how to install and configure this library (Lawrence 343).
Unpack the file.
Configure and build MPICH2.
Add the bin directory to the PATH.
Make sure MPI is properly installed:
All these commands should return a path to the MPICH2 binaries exported in step 4. After MPICH2 was installed, Condor was configured to use MPI by using the following script. However, because the configuration could not be set up as required, MPICH2 was replaced with Open MPI, which was installed using the following steps:
The path to the Open MPI binaries and libraries was then added to .bashrc.
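An illustrative source build of Open MPI follows. The archive name and version are assumptions; the install prefix matches the MPdir path used later in the HPL makefile. The export lines are the kind added to .bashrc.

```shell
# Build steps (run once, as appropriate users; version is an assumption):
#   tar xzf openmpi-1.4.tar.gz && cd openmpi-1.4
#   ./configure --prefix=/usr/local/openmpi
#   make all && make install     # install step as root
# Lines added to ~/.bashrc so the Open MPI binaries and libraries are found:
export PATH=/usr/local/openmpi/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/openmpi/lib:$LD_LIBRARY_PATH
```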
For MPI to run in a cluster environment, passwordless SSH is required. To SSH without a password, the following standard steps were performed on all the nodes:
Generate the public and private key
Copy the local host’s public key to the remote host’s authorized_keys file.
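The two steps above can be sketched as follows. On the real cluster the public key would be copied to each node (e.g. with ssh-copy-id to condor@N1.CHPC.hud.ac.uk); here the key pair is generated into a temporary directory and the "remote" copy step is simulated locally so the commands are self-contained.

```shell
# Generate an RSA key pair with an empty passphrase in a temp directory
KEYDIR=$(mktemp -d)
ssh-keygen -t rsa -N "" -f "$KEYDIR/id_rsa" -q
# Simulate appending the public key to the remote host's authorized_keys
mkdir -p "$KEYDIR/remote"
cat "$KEYDIR/id_rsa.pub" >> "$KEYDIR/remote/authorized_keys"
chmod 600 "$KEYDIR/remote/authorized_keys"
```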
Configuring Network File System (NFS):
A Network File System (NFS) allows remote hosts to mount file systems over a network and interact with them as though they were mounted locally. To use NFS successfully, both the server and the clients need to be configured (Lawrence 346).
The server configurations:
Create a directory to be the shared folder (/work).
Write the clients' IPs in the /etc/exports file:
Write the clients' IPs in /etc/hosts.allow.
Start NFS and portmap:
The client configurations:
Create a directory to mount to (/work).
Mount the shared directory onto the required directory:
mount 192.168.0.10:/work /work
To check that the shared folder is mounted, the mount command shows the following output.
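The server-side files can be sketched as follows; the client IPs are those from Table (3.2), while the export options are typical assumptions rather than the project's exact entries.

```
# /etc/exports on the server (export options are illustrative):
/work 192.168.0.11(rw,sync) 192.168.0.12(rw,sync)

# /etc/hosts.allow on the server:
portmap: 192.168.0.11 , 192.168.0.12

# Services started with:
#   service portmap start
#   service nfs start

# On each client, the share is then mounted:
#   mount 192.168.0.10:/work /work
```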
3.3- Grid Middleware:
To run the grid universe in Condor, the Globus Toolkit should be installed. This can be done by installing the Globus Toolkit itself, VDT, or NMI. In this project, Globus 4.2.0 was installed from source. Before installing the Globus Toolkit, some software requirements need to be installed first (Lawrence 349): Java, Ant, Perl, PostgreSQL, and xinetd. The native packages for PostgreSQL, Perl, and xinetd were installed using the following command:
Java and Ant were installed from source. According to Wilkinson (203), it is recommended to use Sun's Java virtual machine, because some versions, such as the GNU virtual machine, can cause errors. He also recommends installing Java and Ant from source, which only requires unpacking the files and adding the paths to the binaries to the PATH environment variable. The following commands were used to install Java:
The following commands were used to install Ant:
The paths to the Ant and Java binaries and the folder locations were added to the environment variables in /etc/profile:
To test the installation, /etc/profile was sourced without any errors.
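The /etc/profile additions can be sketched as follows; the unpack locations are assumptions, since the actual paths depend on where the archives were extracted.

```shell
# Hypothetical install locations for the unpacked JDK and Ant archives:
export JAVA_HOME=/usr/local/jdk1.6.0
export ANT_HOME=/usr/local/apache-ant
# Make the java and ant binaries available on the PATH:
export PATH=$PATH:$JAVA_HOME/bin:$ANT_HOME/bin
```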
Firewall requirements: various configuration and installation problems with the Globus package are attributable to firewalls. Globus makes use of ports that are closed by default. The ports that should be opened on the firewall for Globus 4 include GRAM (8443), GSIFTP (2811), and a high port range (ephemeral ports). Ephemeral ports play an important role in communicating job status and in transferring files with GSIFTP. These ports are normally in high ranges (30,000+), and it is recommended to set a range for them, using the GLOBUS_TCP_PORT_RANGE environment variable, to ensure that they are open on the firewall. Installing the Globus package is important for the grid to be in a position to receive work from outside the university framework (Joseph and Fellenstein 82).
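A sketch of the firewall setup follows; the chosen range 40000-45000 is an assumption for illustration, and the iptables rules are shown as the kind of entries an administrator would add as root.

```shell
# Restrict Globus ephemeral ports to a known range (range is an assumption):
export GLOBUS_TCP_PORT_RANGE=40000,45000
# Firewall rules to open the fixed Globus ports (run as root):
#   iptables -A INPUT -p tcp --dport 2811 -j ACCEPT          # GSIFTP
#   iptables -A INPUT -p tcp --dport 8443 -j ACCEPT          # GRAM
#   iptables -A INPUT -p tcp --dport 40000:45000 -j ACCEPT   # ephemeral range
```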
Install Globus package:
Globus Toolkit 4.2.0 was used since there is more support on how to deploy it, and it was installed from source (Lawrence 336). A user named globus was added to control the Globus tools and to authenticate and validate use of the resources; it is not recommended to use root to manage the Globus tools. A directory was created for the Globus software and its ownership was changed to the globus user. As the globus user, the Globus Toolkit 4.2.0 was retrieved from the Globus website and unpacked in the globus user's home directory. The resulting globus-4.2.0 directory was set in the environment variable GLOBUS_LOCATION, and then, from inside the unpacked directory, the software was configured and compiled (Ferreira 176).
The GLOBUS_LOCATION and GLOBUS_TCP_PORT_RANGE variables were added to /etc/profile and sourced, along with two scripts used to set other variables.
3.3.1- Modifying the sudo file:
Sudo helps the administrator give users special privileges to run commands that can otherwise only be run by root. Globus requires root privileges in order to sudo to other accounts when running file transfers or submitting jobs. Globus reads from the grid-mapfile when mapping certificates to user names. This process offers more accountability for users' actions, because the users behind file transfers or issued commands can be identified. It also offers users security and accuracy, since the Linux permission system does not allow access to other users' account data or processes without explicit permission (Lawrence 329). To modify the sudo file, the command visudo is typed as root and the required lines are added.
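The kind of entry added via visudo is sketched below. The paths and arguments are assumptions for a typical GT4 install under the globus user's home directory, not the project's actual sudoers lines.

```
# Hypothetical sudoers entry allowing the globus user to run the GT4
# gridmap helper as other accounts without a password (paths assumed):
globus ALL=(ALL) NOPASSWD: \
    /home/globus/globus-4.2.0/libexec/globus-gridmap-and-execute \
    -g /etc/grid-security/grid-mapfile \
    /home/globus/globus-4.2.0/libexec/globus-job-manager-script.pl *
```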
3.3.2- Certificate authority:
This is important for ensuring a high level of security within the grid, since the grid executes code rather than merely sharing data: if compromised, the grid could become a source of malware such as Trojan horses. A certificate authority is thus an important aspect of maintaining strong grid security (Grandinetti 242). An external certificate authority can be used, or an organization can operate one itself; one must, however, trust the certificate authority, and it must adhere to its responsibilities. The main responsibilities of a certificate authority include positively identifying entities requesting certificates; issuing, archiving, and removing certificates; protecting the certificate authority server; maintaining a namespace of unique names for certificate owners; serving the signed certificates to the people needing to authenticate entities; and monitoring logging activity.
A certificate authority is built on a public-key encryption system, in which keys are generated in pairs of a private key and a public key, so that one key can be used to encrypt data and the other to decrypt it. The private key is guarded by its owner and not revealed to anyone else, while the public key is issued to anyone who may require it. The certificate authority holds the public keys and guarantees to whom they belong. Every time a user encrypts something with a private key, the receiver uses the corresponding public key to decrypt it; only that particular user's public key will correctly decrypt the message.
However, anyone can intercept such a message and succeed in decrypting it, since they may obtain the originator's public key. Encrypting twice, with the sender's private key and the corresponding recipient's public key, forms a secure connection: the receiver uses their own private key for the first decryption and the sender's public key for the second. Proper decryption of the message tells the recipient that only the sender could have sent it, while the sender knows that only the specific receiver can decrypt it. This frees people from having to pass an encryption key from sender to receiver, as in conventional encryption systems. In addition, tampering with the communication can be revealed, so a user can know if the desired user's public key has been altered (Grandinetti 242).
3.4- Configuring Condor-G:
Condor-G helps in submitting jobs to remote resources that run the Globus Toolkit's GRAM or Pre-WS GRAM service. Condor-G jobs are submitted like other Condor jobs. For the Pre-WS GRAM protocol, the grid universe specifies the grid type gt2 within the grid_resource command. Credentials are required for successful job submission in the grid universe where gt2 is used: an X.509 certificate is utilized in creating a proxy, authorizing the account, and allocating the required grid resource. grid-proxy-init is used to create the proxy before a job is submitted to Condor in the grid universe. The submit description file comprises the executable (test), the grid universe, the grid_resource (gt2 modi4.ncsa.uiuc.edu/jobmanager), the output (test.out), the log (log.test), and queue.
The executable test is transferred from the local machine to the remote machine by Condor, together with any other files specified as input by the input command (Joseph and Fellenstein 44). The executable has to be compiled for the intended platform. A grid resource is also needed for grid universe jobs. The second field specifies the scheduling software used on the remote resource, with a specific jobmanager for the Globus batch system. The port number should be included where the jobmanager listens on a nonstandard port. The site-specific string is called jobmanager-fork or jobmanager; other names include jobmanager-sge, jobmanager-pbs, jobmanager-condor, and jobmanager-lsf. The Globus software running on the remote resource uses this string to select and identify the correct service to perform. The submit machine maintains the job log file.
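Putting the fields described above together, a submit description file of this shape results; the GRAM host is the example host referenced in the text, and the file is illustrative rather than the project's exact file.

```
# Condor-G submit description file (gt2 / Pre-WS GRAM):
executable    = test
universe      = grid
grid_resource = gt2 modi4.ncsa.uiuc.edu/jobmanager
output        = test.out
log           = log.test
queue
```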
3.5- Connecting Condor Pools with Flocking:
Condor's flocking allows jobs that cannot run immediately to run in a separate Condor pool. Configuration variables allow the condor_schedd daemon, which runs on each machine submitting jobs, to implement flocking (Ferreira 246).
3.5.1- Flocking Configuration:
The Condor pool was flocked, in both directions, with a test-bed HTC Condor pool built by Adly Alshareef. The following configuration was changed in the condor_config file (Condor Project Homepage 2011).
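An illustrative condor_config fragment for flocking follows. The remote pool's central manager name is the one used in the flocking test; the exact macro set is an assumption based on Condor's standard flocking variables.

```
# On this pool's submit machines: where jobs may flock to
FLOCK_TO = cm.htc.com

# On this pool's central manager: which remote schedds may flock in
# (needed because the flocking was set up in both directions)
FLOCK_FROM = cm.htc.com
HOSTALLOW_WRITE = $(HOSTALLOW_WRITE), cm.htc.com
```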
3.6- Installing LINPACK and ATLAS Library:
To determine the cluster performance, the cluster was benchmarked using LINPACK with the ATLAS library. The LINPACK benchmark has been widely used as a numerical test since 1979; it measures floating-point computer performance. It enables users to know the length of time taken in solving certain matrix problems and to determine the accuracy of the results. The High Performance Linpack (HPL) software package is used on distributed-memory computers to solve dense linear systems in double-precision arithmetic; the software is a freely available and portable implementation (Condor Project Homepage 2011). HPL solves and checks a dense linear equation system on distributed-memory computers and also times the process.
The HPL package applies 64-bit floating-point arithmetic and portable routines for message passing and linear algebra operations; these algebra operations can use the Vector Signal Image Processing Library or BLAS. HPL allows an assortment of numerous factorization algorithms. Figure 3 shows the HPL driver code, which is modeled after the Linpack 100 version of the benchmark. The diagram was retrieved from http://www.netlib.org/utk/people/JackDongarra/PAPERS/hplpaper.pdf
The library used for benchmarking the system is ATLAS (Automatically Tuned Linear Algebra Software). This library is used in many programs for the automatic generation and optimization of numerical routines for processors with deep memory hierarchies and pipelined functional units. After ATLAS was installed, LINPACK version 2 (hpl-2.0) was installed using the following standard steps:
tar xfm hpl-2.0.tar.gz
mv Make.Linux_ATHLON_FBLAS Make.LinuxGeneric
make arch=LinuxGeneric
ARCH = LinuxGeneric
TOPdir = /work/LINPACK/hpl-2.0
MPdir = /usr/local/openmpi
MPlib = $(MPdir)/lib/libmpi.so
LAdir = /work/LINPACK/ATLAS/Linux_BLD/lib
LAlib = $(LAdir)/libf77blas.a $(LAdir)/libatlas.a
CC = /usr/local/openmpi/bin/mpicc
LINKER = /usr/local/openmpi/bin/mpif77
After HPL is compiled, two files, xhpl and HPL.dat, should be found in the path hpl-2.0/bin/LinuxGeneric. xhpl is the program used to benchmark the system, and HPL.dat is the tuning file for xhpl, where the problem size, the number of blocks, and the other parameters can be set.
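The key lines of HPL.dat follow the value-then-label format shown below. The problem size 23500 is the maximum found later in the text; the block size and the 2×4 process grid (8 MPI ranks, matching the 8 processors) are illustrative values, not the project's exact tuning.

```
1            # of problems sizes (N)
23500        Ns
1            # of NBs
128          NBs
1            # of process grids (P x Q)
2            Ps
4            Qs
```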
First, Condor was tested to check whether all nodes can see each other, using the command condor_status, which gave the expected result (8 processors), as can be seen in Figure (4.1).
Figure (4.1): Condor Status
4.1- Test cases:
Since the cluster was configured to run jobs under three Condor universes (standard, vanilla, and MPI), those three universes were tested.
4.1.1- Vanilla universe test cases:
The vanilla universe is the alternative Condor universe for jobs that are not compiled with the Condor compiler. It supports neither checkpointing nor remote system calls. In the test of this universe, a job compiled with a C compiler was submitted to the Condor pool.
Figure () shows that the job was submitted to the Condor pool and executed on the same node (192.168.0.10), which is the submitting node as well as the central manager. The expected output appears in the last two lines (Lawrence 330). The other test in the same universe checked whether this universe can execute the job on another machine when working in a non-shared directory, by specifying in the Condor job submission file the requirement that the job not run on the submitting node; in this case, Condor tries to match the job with the other available machines, N1 and N2.
Figure () shows that Condor tried to match the job but failed, because the other nodes could not create the output file, since they did not have permission to do so. However, submitting the same job with the submit and executable files on the shared file system did not cause an error.
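The second test's submit description file would look something like the sketch below; the executable and file names are illustrative, while the requirements expression captures the "do not run on the submitting node" condition using the host name from Table (3.2).

```
# Hypothetical vanilla-universe submit description file:
universe     = vanilla
executable   = simple            # compiled with gcc, not condor_compile
output       = simple.out
error        = simple.err
log          = simple.log
requirements = Machine != "CM.CHPC.hud.ac.uk"   # force execution on N1/N2
queue
```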
4.1.2- Standard universe test cases:
The standard universe is installed with Condor by default; it supports checkpointing and remote system calls. For a job to be submitted in the standard universe, it must be compiled with the Condor compiler. The job was compiled and submitted to the Condor pool, once without specifying a machine requirement and a second time with a requirement that it not execute on the submitting node, in order to exercise the features provided by the standard universe. As can be seen in Figure (), the second job executed on node 192.168.0.11. This test was chosen to check the support of remote system calls under the standard universe and also to check that Condor can submit from a submitting node to another node.
4.1.3- Parallel universe test cases:
The Condor parallel universe is used to submit MPI jobs. To run MPI jobs, a message-passing interface must be installed on the Condor nodes; the popular MPI implementations are MPICH2 and Open MPI. Open MPI was installed on all nodes after MPICH2 did not work as expected, because it needed extra configuration. As can be seen in Figure (), the job was submitted from the submitting machine (192.168.0.10) and executed on all 8 processors. However, the expected result was eight lines, one from each processor, while the actual result varied from run to run; the outputs for the same job can be seen in Appendix B.
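A parallel-universe submit file for such a test would look roughly as follows; the wrapper script name and file names are assumptions (Condor ships example wrapper scripts for launching Open MPI jobs), while the count of 8 matches the pool's 8 processors.

```
# Hypothetical parallel-universe submit file for an MPI job:
universe      = parallel
executable    = openmpiscript    # assumed Open MPI wrapper script
arguments     = my_mpi_job       # the actual MPI binary (illustrative)
machine_count = 8
output        = mpi.out
error         = mpi.err
log           = mpi.log
queue
```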
4.1.4- Globus universe test cases:
The Globus universe could not be tested, because the following error appeared when submitting a job to the OSCAR head node.
4.1.5- Flocking test cases:
A specific job flocks to another pool only when it cannot currently run in its own pool. In the flocking test, a job was sent to cm.htc.com with a particular executing machine specified; the submitted job ran on the given machine and returned the expected output (Condor Project Homepage 2011).
4.2- Cluster performance:
Two factors needed to be determined: the first is the highest performance the cluster can reach, and the second is the problem size that the cluster can process more efficiently than a single PC. The first factor was tested by changing the problem size, the number of blocks, and the P×Q grid (Condor Project Homepage 2011). The problem size was determined using the following equation:
Ns = sqrt(0.75 × Number of nodes × RAM per node / 8) = sqrt(0.75 × 3 × 2×10^9 / 8) ≈ 23717
This formula assumes that each node has the same specifications; in the cluster used here, however, the head node has different specifications, which limits the overall LINPACK performance of the cluster. To avoid using virtual memory (swap) in the calculation, the RAM of each node was assumed to be 2GB, the minimum RAM among the cluster nodes. Various tests were run on the cluster to check the maximum problem size that can be run without using swap; the maximum problem size that could be run over the cluster was 23500.
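Taking the RAM per node as 2 GB (2×10^9 bytes, which reproduces the 23717 figure), the calculation can be checked with a one-line script:

```shell
# N = sqrt(0.75 * nodes * RAM_per_node / 8 bytes per double)
awk 'BEGIN { printf "%d\n", sqrt(0.75 * 3 * 2e9 / 8) }'   # prints 23717
```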
The tests show that the cluster ran the vanilla and standard universes as expected. In the case of the parallel universe, the test-case logs show that the job was submitted to the required machines, but the output did not appear as expected (Ferreira 219). When investigating this problem, it was found that the same problem appears in other clusters that use Open MPI as the message-passing interface.
5.1- Standard universe:
The standard universe test demonstrates Condor's remote system calls, which allow Condor to communicate with its nodes (Lawrence 323). The test shows that even if the executable is not on the executing node, Condor can still deal with it.
5.2- Vanilla universe:
The vanilla universe does not support remote system calls or checkpointing. This universe is the alternative to the standard universe when a program cannot be compiled with the Condor compiler (Lawrence 323). The test shows that this universe works when executing the job on the submitting machine itself, and also when submitting a job whose executable is in the NFS folder.
5.3- Parallel universe (MPI):
When the same job was run directly using Open MPI without Condor, the job ran and returned 8 answers. The problem might be solved by configuring MPICH2 for use on the cluster instead of Open MPI; MPICH2 was already installed on the cluster, but because of its configuration it had been replaced by Open MPI (Lawrence 323).
5.4- Grid universe: runs the job effectively.
5.5- Flocking: The flocking test demonstrates the features of flocking.
5.6- Cluster performance:
The cluster performance tests show that the system worked effectively.
5.7- General Discussion:
One week was spent trying to install the OS on the executing nodes, because the IBM machines' CD drives did not work; the assumption was that the machines needed to be configured to use the CD drive in the boot sequence, since 5 machines had the same problem. A search was made for alternative installation methods, one of which is to install the OS over the network (Condor Project Homepage 2011). In the end, the OS was installed using an external CD drive, a suggestion from a Stack Overflow community member, who stated that these machines' CD drives stop working after a while if they are not used regularly, and who suggested keeping an empty CD in the drive to save it from damage. The Condor installation itself was not hard; the hardest part was configuring MPI and the grid universe to run under Condor, since those universes require installing other components and configuring them with Condor. Some of the problems that appeared were hard to solve, since the main available support is the mailing list, which is a good source of information but depends on cooperation: if no one else has faced the problem, it may be hard to get an answer.
5.8- Project costs:
The project cost is zero, because all the servers used are already owned by the university and available for testing. The software used is all under open-source licenses and totally free to use, from the operating system (CentOS) to the cluster and grid middleware (Condor and the Globus Toolkit).
In summary, the cluster worked as expected, but the MPI universe needs improvement, because Condor currently submits the job to the machines but the output is not as expected. The cluster has Globus Toolkit 4.2.0 installed and also acts as a certificate authority, so it should be able to send and receive jobs over the grid; there is still a problem in sending a job to GRAM, although GSISSH works and files can be copied to and from a remote host. The cluster can be improved by adding more computing nodes, a dedicated checkpoint server, and faster network connections.
Flocking the cluster with the HTC pool that the university already has will give the university a huge computing resource. In addition, it will reduce maintenance costs by having a single scheduling system, giving the university the opportunity to join all its computing resources. The cluster currently uses Condor's default scheduling policy, first come, first served (FCFS), but another policy can be defined in the future. Through this project, knowledge has been gained on how to build an HPC cluster, implement the Condor batch scheduler, implement the Globus Toolkit to make the cluster accessible over the grid, and send jobs to another cluster using Condor-G. Good knowledge of using the Linux CentOS terminal has also been gained (Grandinetti 224), along with knowledge of MPI and how to implement it, and of the existing cluster middleware options and the differences between them.
Ajdari, Indiana. Open Source Cluster Application Resources (OSCAR), South East European University, 2002. Available at: http://oscar.openclustergroup.org/public/docs/oscar5.0/OSCAR5.0_Users_Manual.pdf.
CentOS. The Community Enterprise Operating System, 2009. Available at: http://www.centos.org/.
Condor Project Homepage. Condor High Throughput Computing, 2011. Available at: http://www.cs.wisc.edu/condor/.
Etsion, Yoav and Tsafrir, Dan. A Short Survey of Commercial Cluster Batch Schedulers, 2005. Available at: http://leibniz.cs.huji.ac.il/tr/742.pdf.
Ferreira, Luis. Introduction to Grid computing with Globus. 2003, IBM.
Grandinetti, Lucio. High Performance Computing and Grids in Action, IOS, 2008.
Hussain, Akbar. Wireless Networks Information Processing and Systems: First International …, New York, Springer, 2009.
IBM. IBM eServer 325, 2011. Available at: http://www-304.ibm.com/shop/americas/webapp/wcs/stores/servlet/default/CategoryDisplay?categoryId=2583808&storeId=1&catalogId=-840&langId=-1
Jia, Weijia and Zhou, Wanlei. Distributed Network Systems: From Concepts To Implementations, New York, Springer, 2005.
Joseph, Joshy and Fellenstein, Craig. Grid Computing, New York, Prentice Hall Professional, 2004.
Lawrence, Thomas. Beowulf Cluster Computing With Windows, MIT Press, 2002.
Manchester University. Condor High Throughput Computing, 2011. Available at: http://wiki.rcs.manchester.ac.uk/community/Mace01Condor.
Microsoft. Technical Overview of Windows HPC Server 2008 R2, Microsoft, 2011.
Silva, Fernando, Barreira, Graspa and Ribeiro, Ligia. 2nd Iberian Grid Infrastructure Conference Proceedings. Netbibo, 2008.
University of Lehigh. Condor High Throughput Computing, 2011. Available at: http://www.lehigh.edu/computing/hpc/running/condor.html.
University of Texas. Condor cluster, 2012. Available at: http://www.cs.utexas.edu/facilities/documentation/condor.
Wilkinson, Barry. Grid Computing: Techniques and Applications, CRC Press, 2010.
In this project, a grid-enabled HPC Condor cluster that runs jobs under the vanilla, standard, parallel, and grid universes was designed, built, and tested. The cluster was formed of three IBM eServer 325 machines. The machines were connected together as a private network using a dedicated switch and Ethernet cables. The submission node is connected to the university network via its second NIC. CentOS 5.5, a Linux distribution, was installed on the three machines as the operating system. Condor 7.6.6 was installed on all three machines as the cluster middleware. MPICH2 and Open MPI were installed on the three machines as message passing interface implementations. Globus Toolkit 4.2.0 was installed on the submission node as the grid middleware. To install the Globus Toolkit, the following software was first installed on the submission node: the JDK (Java), Apache Ant, PostgreSQL, Perl, and xinetd.
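Jobs are handed to Condor through a submit description file that names the universe and the files involved. A minimal sketch for the vanilla universe is shown below; the executable and file names are hypothetical examples, not the actual jobs run in this project:

```
# job.sub -- minimal vanilla-universe submit description file
universe   = vanilla
executable = my_program
arguments  = input.dat
output     = job.out
error      = job.err
log        = job.log
should_transfer_files   = YES
when_to_transfer_output = ON_EXIT
queue
```

A file like this is submitted with `condor_submit job.sub`, and the job’s progress can then be followed with `condor_q` or by watching the log file.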
The project’s main objective has been achieved by running jobs under the vanilla, standard, parallel, and grid universes. In addition, the pool was flocked with the HTC test-bed pool that was built by Adly Alshareef. The cluster is also able to receive jobs over the grid, since the grid installation and configuration allow it to do so. Furthermore, the head node also acts as a simple certificate authority for the cluster, which allows the administrator to authorize other users to send jobs to the cluster via the grid.
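Sending a job to a remote resource over the grid uses Condor-G’s grid universe, where the submit file names the remote Globus service instead of a local universe. A sketch under the assumption of a Globus Toolkit 4 (gt4) endpoint follows; the head-node address is a hypothetical placeholder:

```
# grid_job.sub -- sketch of a Condor-G grid-universe submission
universe      = grid
# gt4 resource string: service URL of the remote ManagedJobFactoryService
# followed by the job manager type (Fork, Condor, PBS, ...).
grid_resource = gt4 https://headnode.example.ac.uk:8443/wsrf/services/ManagedJobFactoryService Fork
executable    = my_program
output        = job.out
error         = job.err
log           = job.log
queue
```

Before submitting, the user would obtain a proxy credential (e.g. with `grid-proxy-init`) signed by a certificate authority trusted by the remote cluster, which is the role the head node’s simple CA plays here.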
First of all, I would like to thank God Almighty for giving me the strength and power to finish my project, and also for directing my spirit in the right way at the times when I was confused and in trouble. With his blessing I will keep on working until I get the best results. I would like to thank my supervisor Dr. Violleta Holmes, who gave me the opportunity to do this project and guided me throughout it, and I would also like to thank Mr. Ibad Kuraishi, who helped me organize the report and gave me guidance in this project. I would also like to thank my parents, my wife, my daughter, and my brothers and sisters, who helped me to focus on my studies. Finally, I would like to thank my sponsor TVTC and the ministry of higher education in Saudi Arabia for their financial support throughout the whole period of my studies in the UK.
Green Computing Research Project
The above project will be implemented in phases, from initial design through implementation. The company will recruit administrative staff, a project implementation team, and experienced design engineers, and will hire a consultant with strong project management skills, who will ensure adherence to project management procedures to make the project successful.
The new system will undergo instrumentation tests to detect faults that may interfere with the normal operations of the system before system handover.
The cost is not expected to exceed $500,000. This will include hardware and software costs, consultancy costs, salaries, and equipment transportation costs. There will be a project manager on site to oversee the overall project through the design, testing, and implementation phases. He will be expected to participate in formal meetings with the stakeholders and to ensure reporting is done at all stages so that the major deliverables are successful.
The project manager will monitor and evaluate the progress in every stage to ensure timely production of reports for assessment purposes by the overall management. The entire project will take a maximum duration of 6 months upon its commencement.
PROJECT BUDGET SUMMARY
Budget Item                 Cost Estimate
Consultancy services        $5,000
Hardware and Software       $400,000
Overtime on Salaries        $75,000
Equipment Transportation    $20,000
Total                       $500,000
To design a system that will increase the overall performance of servers, increasing efficiency in service delivery, and to enable the business to align its strategic goals with technology
To scale up rental space for servers to enable the business to tap into an unexploited market, increasing market share by becoming a leader in the provision of cloud infrastructure.
To strengthen the firm’s competitiveness by increasing the business value of its IT systems, offering uncompromising quality of service that outperforms competitors
To design a system and computing infrastructure that is ecologically friendly to conserve energy
To complete implementing the new IT infrastructure within 180 days
PROJECT SUCCESS CRITERIA
The project will be deemed successful if it meets the following:
The project is completed within its budget without straining other departmental budgets
The project is completed within the set time frame and does not overlap to the next financial year
The system designed meets its business goals to ensure an efficiently managed business process
The design implemented adheres to design rules for the construction of green computing systems
The system reduces power consumption by an average of 10%
The project conforms to the rules used in designing sustainable technologies
MANAGING THE PROJECT
The project will undergo distinct stages, each overseen by the manager responsible for the relevant functional unit. The unit manager will have the overall responsibility of overseeing the operations and activities of the specific unit. He will interface with other unit managers to ensure each deliverable at each stage is met.
ROLES AND RESPONSIBILITIES
The overall design and implementation of the project will be overseen by a team of experienced personnel, who will ensure design procedures are adhered to. Each staff member is expected to be well qualified and to understand his/her duties, as this will be critical and will directly influence the success of the project (Letavec, 2006). The Project Management Office will be responsible for stipulating and enforcing the roles and responsibilities of all project personnel.
The person will have the overall authority pertaining to the project while at the same time will approve and make changes to the project scope. He/she will approve the project budget and control the business aspects of the project by approving work products and providing resources to project personnel in various teams.
Executive Steering Committee
The committee has the main authority of guiding and mentoring the project sponsors, project managers and teams. The steering committee will be responsible for facilitating communication. They will also be involved in the reviewing of the budget in case there are changes within the project scope. The committee is also expected to make recommendations and monitor the overall progress of the project.
The person will be vested with the main overall responsibility of managing the entire project (Harrison and Lock, 2004). He/she is expected to develop and maintain the project charter and project plans, and at the same time steer the project by giving direction and controlling the operations of the project.
The person will have the responsibility of managing the functional aspects of the project. He/she will assist the project manager in research, help in developing the work breakdown structure and to assist the project manager develop and maintain the overall project scope estimates.
Project Management Office
The office will facilitate and effectively coordinate all aspects of communication regarding the project status in the whole organization. The office will also assist the project team members to adhere to quality management principles by establishing standards, which will direct the project performance to help the design team implement the project to conform to industry standards and specifications.
Harrison, F. L., & Lock, D. (2004). Advanced project management: A structured approach. Aldershot, England: Gower.
Letavec, C. J. (2006). The program management office: Establishing, managing and growing the value of a PMO. Ft. Lauderdale, FL: J. Ross Pub.
Managing a Six Sigma Project for Southern Care Hospital
The Primary Roles and Responsibilities of a Project Manager
The project manager is the person entrusted with overseeing the initiation to completion stages of the project. He is at the center of the project and its success is highly dependent on how he carries out his duties. He is involved in initiating the project, defining the objectives of the project, designing the organizational structure, selecting the necessary resources for the project and constituting a team with crucial skills and capabilities to carry out the project.
According to the results of the 2012 PMI report, Pulse of the Profession, the most crucial skill for a project manager that would enable them to manage today’s highly complex and intricate projects and programs is the capability to align the team with the overall objective of the project as well as designing the organizational structure of the project to align with the individual goals of team members.
I would recommend the functional organizational structure for the project since this is a project carried out on part-time basis and by team members who are part of the larger organization, working in a stable environment. The different parts of the project have to be linked together with different members working on their individual tasks, who as a whole complete the project.
Similarities and Differences between the Various Organizational Structures
The difference between the functional and the pure organizational structures depends on how the authority is structured and who is responsible for what or to whom. In a functional organizational structure, there are different sub-groups within a larger team, each with their own functional heads and carrying out a separate task. The different functional heads are answerable to the project manager, who defines goals and allocates resources. In a pure organizational structure, there exists only one team, headed by one main superior, the project manager, and the team carries out a single project together. The matrix organizational structure is a combination of the other two: even though there is a main superior at the top, the functional managers determine their own goals, manage their own teams independently, and even allocate the necessary resources to their projects. The similarity is that in all of them there exists a main authority from whom the team members take orders.
Criteria to Select Resources to Serve on a Project Team
In selecting resources to serve in a project team, it is important to determine the project needs in order to match these needs with members possessing the necessary skills and capabilities. There should be diversity of talents and experience. Team members selected must be the most qualified with years of proven track record. We also need members who will avail more time for the project as we intend to reduce the project’s lead time. The members should have a high degree of proficiency and efficiency in resource allocation and who can save on the project’s resources.
Selecting the Team Members
Some of the members I would recommend for the project are:
Hazel Vaughn, as a supervisor of the project. Even though she does not have many years of experience, her proficiency record is striking, showing she is very efficient. She will also save the project $74,000, which is a good amount. Also, since she is available 30%, she will have time for the project.
Peggy Moss: This is a Black Belt, Radiologist Senior Bus Analyst who holds an MBA and with years of experience. She will save the company $240,000, which is a very good amount, and she is available 50% of the time, thus, is more available for the project.
Joyce Synder: Being the project’s manager, she has many years of experience and is available 50% of the time, thus having more time for the project. She will also save the project $112,000.
Susan Osborne: Being the staff nurse, it is important that she be available most of the time to carry out the requirements of the project, and since she is available 50% of the time, she is the most viable person. It is also important for a staff nurse to have long experience in order to increase efficiency in carrying out her duties. Since she has 8 years of proven experience, she becomes an important part of the team.
A Research Project Submitted in Partial Fulfillment of the Requirements for the Degree of —
This thesis describes e-money and provides knowledge on the use of e-money among international students. The international students in this case include students who travel from other countries to study at Cal Poly Pomona University. These include students from Asia, Mexico, Egypt, Venezuela, Portugal, Nigeria and Albania. The proposed study will include all international students irrespective of the country of origin. A quantitative research will be conducted among these students to investigate if they use e-money, why they use e-money and the factors that motivate them to use e-money. The study will also investigate how international students use e-money. In addition, the study will provide insights into perceptions of international students towards e-money.
Chapter 1: Introduction
This chapter outlines the background information of this study, the problem statement, and the research objectives. The significance of the study, the research questions, and the limitations are outlined as well.
Electronic money (e-money) can be defined as the form of cash that is exchanged electronically. Electronic money is a replacement of coins and notes with an electronic equivalent (Rahman & Raisinghani, 2000; Gasper, 2006). Electronic money can be stored on mobile phones or in an account that is accessible to the user through the internet. Exchanging electronic money requires access to the internet. Electronic money enables individuals to make payments and engage in financial transactions without withdrawing coins and notes from their bank accounts (Good, 2000).
Examples of electronic money include debit cards, credit cards, and electronic funds transfer. Credit and debit cards are considered forms of e-money when used in online transactions (Good, 2000). The increased use of the internet in commerce has revolutionized the way consumers purchase goods and services (Good, 2000). Consumers shop for products via the internet and may make purchases without visiting the stores physically. Some business activities such as sales, payments, and marketing can easily be conducted via the internet as long as the sellers and buyers have internet connections (Gasper, 2006).
E-money involves transferring funds from one bank account to another and using digital payment services such as PayPal to make online payments. Credit cards can be used to complete online transactions or in conjunction with other forms of electronic money (Gup, 2003). Electronic money provides more privacy to buyers compared to credit cards because a credit card provides the personal information of the buyer to the seller (Gwartney, Sobel, & Macpherson, 2006). Issuing debit and credit cards is tedious and costly for banks. Thus, such services are limited to consumers who can prove their creditworthiness. This requirement prevents many individuals from accessing debit and credit cards if they cannot prove their creditworthiness (Gup, 2003). In addition, the cards are not efficient for micro-payments due to the high costs involved. A buyer incurs high charges for using credit or debit cards to pay for purchases involving small amounts of cash. Electronic money enables buyers to overcome such challenges and make fast, convenient, and cheap transfers of funds to the seller (Gwartney, Sobel, & Macpherson, 2006).
E-commerce has led to increased use of e-money, and in some cases e-money is the only way of paying for online transactions (Qin, 2009; Rouibah, 2009). Wild et al. (2011) indicate that the convenience of e-money attracts many buyers and sellers to online transactions. The authors outline some advantages of e-money and indicate that the transfer of funds is not limited by the geographical distance between two individuals. In addition, one does not have to wait in queues to withdraw cash from his/her bank account or make cash payments at a store. Electronic money is more secure to carry around relative to real cash (Wild et al., 2011). These advantages have drawn many people to turn to electronic money. Some of the shortcomings associated with e-money include fraud, technological failure during transactions, and the loss of human interaction (Wild et al., 2011).
This study will involve the international students at Cal Poly Pomona University. There are about 1,000 international students at the university in different fields. The international students join the university as first-year students or transfer from other universities to complete their studies at Cal Poly Pomona. Some of the international students join the university from language schools while some travel directly from their home countries. The largest number of international students at Cal Poly Pomona University comes from Asia. Other international students come from Mexico, Egypt, Venezuela, Portugal, Nigeria and Albania (Cal Poly Pomona, 2012). This study will investigate whether these international students use electronic money. The study will also investigate the factors that motivate the international students to use electronic money within and outside the university and the ways in which they use e-money. The study will include all the international students irrespective of their country of origin.
For their safety, international students are advised against carrying large amounts of cash and against revealing their identity as international students when paying their bills. Many universities advise their international students to use checks, ATM cards, or credit cards to make payments for goods and services in and out of the university (Lipson, 2008). Many banks in the United States are reluctant to issue international students with credit cards because they do not have their credit history (Rao & Rao, 2009). In addition, international students have no regular income and may return to their countries before clearing their debts. However, international students have the option of using store or secured credit cards and debit cards before they can qualify for major credit cards (Rao & Rao, 2009).
International students are vulnerable to fraudulent online transactions especially if they are new in the United States. Such online transactions require international students to use e-money to make payments and students end up losing their money (Rao & Rao, 2009). The policies adopted by US banks on issuing credit cards and the vulnerability of international students to fraudulent online transactions may discourage international students from using e-money. This study will investigate if international students overcome such limitations and use e-money to pay for goods and services in and out of the university.
The main objective of this study is to find out if international students in Cal Poly Pomona University use e-money. If international students at University indicate that they use e-money, the study will investigate why they use e-money and the factors that motivate them to use e-money. Thus, the objectives of this study include:
To investigate whether international students at Cal Poly Pomona University use e-money
To investigate the reasons why international students use e-money
To identify the factors that motivate international students to use e money
To find out how international students use e-money
The achievement of these objectives will help in understanding whether international students are able to overcome the challenges of accessing and using e-money in a foreign country. As indicated earlier, domestic banks are reluctant to issue credit cards to international students, which are the major form of e-money used by students (Rao & Rao, 2009).
Significance of Research
The study will indicate whether international students use e-money and the factors that motivate them to use e-money. The study will also indicate the ways in which international students use e-money. The findings of this study will provide insights on the perceptions of international students towards e-money. This information is valuable to domestic banks. Banks can use this knowledge to design appropriate marketing strategies that will reach international students with their products.
Banks can develop informative marketing messages on different kinds of e-money that are available to international students and how these students can access such products. The knowledge of the ways in which international students use e-money will indicate if international students are aware of all the possible uses of e-money or if they require more information on the same. Banks and the International Centre at the University can educate new and future international Students on how to use e-money based on the findings of this study. Domestic banks can also use the findings of this study to develop existing and new products that meet the needs of international students more effectively.
The main research questions in this study include:
Do international students at Cal Poly Pomona University use e-money?
Why do the international students use e-money?
What are the specific factors that motivate international students to use e-money?
How do international students use e-money?
The study will involve international students in Cal Poly Pomona University irrespective of their origin. According to the statistics provided on the University’s website, there are approximately 1,000 international students in the University. The study is limited by the availability of these students, and their different class schedules may pose a challenge to the researcher when trying to get the students to fill in the questionnaires at the same time. The international students come from different countries including Asia, Mexico, Egypt, Venezuela, Portugal, Nigeria and Albania. This means that most of them have learnt English as a second language, which may limit their understanding of the research questions.
Electronic money provides an easy, convenient and fast way of making payments for goods and services. Individuals using e-money do not have to queue in banks to withdraw notes or coins to pay for goods or services (Wild et al., 2011). The proposed study will investigate whether internationals students in Cal Poly Pomona University use e-money, why they use e-money, the factors that motivate them to use e-money, and the ways in which they use e-money. The study will involve international students from Asia, Mexico, Egypt, Venezuela, Portugal, Nigeria and Albania who are studying at the university. The findings of the students will outline the perceptions of international students towards e-money. The knowledge on the use of e-money among international students will help banks in designing their products and marketing strategies to reach international students.
Chapter 2: Literature Review
In this chapter, literature relevant to the research topic is analyzed. The chapter outlines various reasons why people use e-money and the factors that motivate students to use e-money. The chapter will also outline how people use e-money and various issues associated with its use.
Uses of Electronic Money
People use electronic money for different reasons. A buyer does not have to carry large amount of cash around to make payments for business transactions. Sometimes an individual may be in a geographical location where accessing their bank account for cash is impossible or inconvenient. Electronic money provides a channel of making payments in such circumstances (Hespeler, 2008).
Another reason why people use electronic cash is that one can carry out multiple transactions without carrying cash or issuing checks (Hespeler, 2008). E-money allows an individual to purchase different items from different suppliers and pay at once without moving from one location to another to make payments. A buyer can engage in multiple online purchases at the same time and pay using e-money.
Carrying physical cash in such case is unsafe and time consuming because the buyer has to walk from one store to another. Electronic money is widely accepted (Guttman, 2003). Electronic payments are fast and convenient as long as there is a reliable network connection. With the current developments in technology, reliable internet connections are easily accessible (Hespeler, 2008; Guttman, 2003).
Using electronic money involves converting a certain amount of money into digital cash, which is then transferred electronically from one party to another. This electronic transfer allows individuals to use e-money at any time or location. The electronic transfer of funds makes e-money convenient for international transactions despite differences in time zones and geographical location. The hassles of currency exchange are eliminated with this form of payment (Rosston & Waterman, 1997). Thus, individuals intending to carry out multiple international transactions may choose e-money to make payments. Dorn (1997) outlines the process of making payments using e-money. If an individual intends to pay for any transaction using e-money, they simply register for an account with service providers such as Moneybookers, PayPal and Google among others. These providers charge the buyer for the service of converting cash into digital cash and keep a record of all transactions that an individual pays for through their site. The buyer does not have to provide any credit or debit card information to sellers and can claim their money back if the goods or services delivered do not meet the specified requirements (Dorn, 1997).
Dorn (1997) indicates that any payment done using e-money is recorded and these records are accessible to users. Thus, an individual can easily control and monitor their payments when using e-money. The availability and expansion of digital technologies motivates people to shift to electronic payments for goods and services. In areas where digital technology is available, reliable, and accessible, the use of e-money is extensive. In the absence of modern technology, individuals turn to other ways of making payments.
According to Rosston and Waterman (1997), money is a cultural and social phenomenon. Cultures differ in their attitudes and perceptions of money. Cultural changes that allow societies to embrace modernism have led to changes in perception and attitudes towards money (Rosston & Waterman, 1997). The current shift in many societies to electronic commerce and digital technology has intensified the use of e money. Electronic payments suit the modern lifestyle of convenience and increased efficiency (Rosston & Waterman, 1997).
Garman and Forgue (2011) indicate that individuals use e-money to pay for online transactions and make payments for goods and services at retail outlets. Garman and Forgue (2011) continue to indicate that students may use e-money to pay for goods and services and as a way of controlling their spending.
Factors That Motivate Students to Use E Money
Bold, Patron, and Smith (2011) indicate that university students are shifting towards using electronic payment options. The popularity of using cash and checks to make payments is slowly declining in institutions of higher learning (Bold, Patron & Smith, 2011). This is because of technological innovations that have changed the methods of paying for goods and services. Students in institutions of higher learning have access to the internet through their personal computers, iPads and mobile phones, and thus making payments using e-money is convenient to them. Students prefer electronic payments that are internationally recognized (Bold, Patron & Smith, 2011). Electronic money is one of these kinds of payment options. The desire to keep up with technological innovations has led to a shift to e-money among students (Bold, Patron & Smith, 2011).
Bold, Patron and Smith (2011) argue that the introduction and discussion of various methods of payments in class encourages students to explore electronic payment options. As students gain new knowledge on the use of electronic methods of payment, they shift from traditional methods of making payments to the electronic methods. Pelilli (n.d.) examined the reasons why university students use e-money and discovered that personal attitudes towards modern technology and e-money had a significant influence on the decisions of university students to use e-money. In addition, Pelilli (n.d.) discovered that the features of e-money and social beliefs influenced the decisions of university students to use e-money. The studies conducted by Bold, Patron and Smith (2011) and Pelilli (n.d.) provide useful information on why university students use e-money. However, the studies do not indicate whether the reasons outlined apply to international students as well. The proposed research will cover these research gaps.
People use electronic money for different reasons, including the convenience of completing transactions with no physical cash. Individuals can make multiple payments in different geographical locations with e-money. Electronic payments are fast, convenient, cheap, and relatively secure. The expansion in technological innovations and use has encouraged individuals, including students, to shift to convenient methods of making payments. The individual attitudes and behavior of students influence their decisions to use e-money. Other motivating factors include the features of e-money, social beliefs, control beliefs, and money lessons learnt in class. The literature analyzed in this section does not indicate motivating factors that are specific to international students, or how international students make decisions about e-money.
Chapter 3: Methodology
This chapter outlines the research design, sampling techniques and sample size, measures and procedures of the study.
A quantitative survey will be conducted to establish whether international students in Cal Poly Pomona University use e-money, their specific reasons for using e-money, and the factors that motivate them to use e-money. A quantitative research design is chosen because it will allow the researcher to establish relationships between various variables and the use of e-money among international students at the University. Quantitative research allows a researcher to test the strength of relationships between variables and present final results in a comprehensive and systematic way (Luton, 2010). Thus, the researcher can establish the significance of the relationship between the various factors and reasons given by participants and their use of e-money.
A survey allows a researcher to collect large amounts of data at once. This is because all participants can fill in questionnaires simultaneously. In addition, it is possible to test correlation among multiple variables and examine the trend of relationships between variables (Miller, Strang, & Miller, 2010). This study will involve collecting information on all research questions, and using a survey will enable the researcher to obtain answers for all the research questions simultaneously.
A stratified random sampling technique will be used to select the international students who will participate in the study. Stratified random sampling involves dividing the target population into different groups or strata based on their unique characteristics. Participants are then picked at random from each stratum to form the final sample (Anderson, Sweeney, & Williams, 2011). Stratified random sampling is chosen for this research because international students at the University come from different parts of the world. The population of international students will be divided into different strata based on their countries of origin including Asia, Mexico, Egypt, Venezuela, Portugal, Nigeria and Albania. Students will be picked at random from each stratum to form the final sample of 200 international students. The number of students picked from each stratum will depend on the total number of students from that country of origin in the university. For instance, the majority of international students come from Asia while international students from Nigeria and Egypt are few. Thus, a larger number of Asian students will participate in the study compared to international students from Egypt and Nigeria.
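The proportional allocation described above can be sketched in Python. The stratum sizes below are hypothetical illustrations (only the ~1,000 total and the dominance of Asian students come from the study description), not actual enrollment figures:

```python
import random

def stratified_sample(population, total_n):
    """Draw a proportionally allocated stratified random sample.

    population: dict mapping stratum name -> list of member IDs.
    total_n: desired overall sample size.
    """
    pop_size = sum(len(members) for members in population.values())
    sample = {}
    for stratum, members in population.items():
        # Proportional allocation: each stratum contributes in
        # proportion to its share of the whole population.
        n = round(total_n * len(members) / pop_size)
        sample[stratum] = random.sample(members, min(n, len(members)))
    return sample

# Hypothetical strata sizes summing to 1,000 students.
population = {
    "Asia": [f"asia_{i}" for i in range(600)],
    "Mexico": [f"mex_{i}" for i in range(150)],
    "Venezuela": [f"ven_{i}" for i in range(80)],
    "Portugal": [f"por_{i}" for i in range(60)],
    "Egypt": [f"egy_{i}" for i in range(50)],
    "Nigeria": [f"nig_{i}" for i in range(40)],
    "Albania": [f"alb_{i}" for i in range(20)],
}
sample = stratified_sample(population, 200)
```

With these illustrative numbers, the Asian stratum would contribute 120 of the 200 participants and the Albanian stratum 4, mirroring how the final sample is weighted towards the larger groups.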
The measures for each research question are outlined below.
Do international students at Cal Poly Pomona University use e-money?
This will be indicated by a positive response (agree or strongly agree) to statements in the questionnaire indicating that the participant uses e-money regularly or on several occasions.
Why do the international students use e-money?
The reasons why students use e-money will be indicated by their agreement with the statements describing the various ways in which e-money is used. Questions 1 and 2 in section B of the questionnaire will give further reasons why international students use e-money.
What are the specific factors that motivate international students to use e-money?
Various factors are outlined in section A of the questionnaire, and the number of participants who agree with each corresponding statement will indicate the relationship between these factors and the use of e-money among international students. Section B also requires participants to outline any additional factors that motivate them to use e-money.
How do international students use e-money?
The various ways in which international students use e-money will be indicated by their responses to the statements corresponding to uses of e-money and the uses outlined in section B of the questionnaire.
The frequency of each response to each statement will help indicate the strength of the relationship between various factors and the use of e-money by international students at the University.
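The frequency tabulation described above is the kind of summary that Excel or SPSS would produce; a minimal sketch of the same computation in Python is shown below. The response data are invented for illustration, not collected results:

```python
from collections import Counter

# The four scale points used throughout section A of the questionnaire.
SCALE = ["Strongly agree", "Agree", "Disagree", "Strongly disagree"]

# Hypothetical coded responses to one questionnaire statement
# (illustrative data only, not actual survey results).
responses = ["Strongly agree", "Agree", "Agree", "Disagree",
             "Strongly agree", "Agree", "Strongly disagree", "Agree"]

freq = Counter(responses)
for category in SCALE:
    count = freq.get(category, 0)
    # Report each scale point as a count and a percentage of all responses.
    print(f"{category}: {count} ({100 * count / len(responses):.1f}%)")
```

Repeating this per statement yields the frequency distributions from which the strength of each factor's association with e-money use can be assessed.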
Questionnaires will be used in the survey to collect data from participants. The questionnaires will consist of open- and closed-ended questions related to the research topic. This will ensure that the researcher collects as much information from the participants as possible. International students at Cal Poly Pomona University have regular meetings and activities at the international students' lounge. The researcher, with the help of the advisor of international students, will introduce the study at one of the regular meetings where most of the students are in attendance. This verbal communication will be followed by an email to all international students, authorized by the advisor. In this email, the researcher will state the purpose and objectives of the proposed study and what is expected of the participants. Instructions on how to fill in the questionnaires and a sample questionnaire will be included in the email. Email recipients will be required to indicate their willingness to participate in the study. The questionnaires will be issued to willing participants in hard and soft copies (email).
Hard copy questionnaires will be issued after one of the regular international students' meetings, while soft copy questionnaires will be sent to participants via email. All filled questionnaires will be submitted to the researcher directly. The students will not be allowed to discuss the questions while filling in the questionnaires. The participating students will not be required to indicate their names or identification numbers on the questionnaires. Participants are free to withdraw from the study at any stage, and their identities will remain concealed throughout the study. The participants will be encouraged to give their honest opinions and responses. The data collected will be coded into different categories and analyzed using Excel spreadsheets and the SPSS program.
A quantitative survey will be used to collect information from 200 international students at Cal Poly Pomona University. The survey will include students from all countries of origin represented in the university. A stratified random sampling technique will be used to pick participants, and the final sample will represent the entire population of international students. The participants will take part willingly, and their identities will remain confidential throughout the study. Data will be collected using questionnaires containing open- and closed-ended questions. The collected data will be coded and analyzed using Excel spreadsheets and the SPSS software.
Anderson, D. R., Sweeney, D. J., & Williams, T. A. (2011). Essentials of Statistics for Business and Economics. Connecticut: Cengage Learning.
Boldt, D., Patron, H., & Smith, W. J. (2011). Payment Methods and Practices among College Students: A Classroom Discussion Tool. Journal for Economic Educators, 11(2), 25-34.
Cal Poly Pomona. (2012). International Center. Retrieved 7 May 2012 from http://www.csupomona.edu/
Dorn, J. A. (1997). The Future of Money in the Information Age. Washington: Cato Institute.
Garman, E. T., & Forgue, R. (2011). Personal Finance. Connecticut: Cengage Learning.
Gaspar, J. (2006). Introduction to Business. Connecticut: Cengage Learning.
Good, B. A. (2000). The Changing Face of Money: Will Electronic Money Be Adopted in the United States? London: Routledge.
Gup, B. E. (2003). The Future of Banking. Connecticut: Greenwood Publishing Group.
Guttmann, R. (2003). Cybercash: The Coming Era of Electronic Money. Hampshire: Palgrave Macmillan.
Gwartney, J. D., Sobel, R. S., & Macpherson, D. A. (2006). Economics: Private & Public Choice. Connecticut: Cengage Learning.
Hespeler, F. (2008). Electronic Money and the Monetary Transmission Process. Gottingen: Cuvillier Verlag.
Lipson, C. (2008). Succeeding as an International Student in the United States and Canada. Chicago: University of Chicago Press.
Luton, L. S. (2010). Qualitative Research Approaches for Public Administration. New York, NY: M. E. Sharpe.
Miller, P. G., Strang, J., & Miller, P. M. (2010). Addiction Research Methods. New Jersey: John Wiley & Sons.
Pelilli, D. (n.d.). What Moves Payment Card Use? Evidence from Survey on Italian University Students. Retrieved 22 February 2012 from http://www.adeimf.it/new/images/stories/Convegni/Lecce/David_Pelilli.pdf
Qin, Z. (2009). Introduction to E-commerce. New York, NY: Springer.
Rahman, S., & Raisinghani, M. (2000). Electronic Commerce: Opportunity and Challenges. Hershey: Idea Group Inc (IGI).
Rao, & Rao, R. (2009). Study in America: The Definitive Guide for Aspiring Students. Delhi: Pearson Education India.
Rosston, G. L., & Watermann, D. (1997). Interconnection and the Internet. London: Routledge.
Rouibah, K. (2009). Emerging Markets and E-commerce in Developing Economies. Hershey: Idea Group Inc (IGI).
Wild, C., Weinstein, S., MacEvan, N., & Geach, N. (2011). Electronic and Mobile Commerce Law: An Analysis of Trade, Finance, Media and Cybercrime in the Digital Age. Hertfordshire: University of Hertfordshire Press.
The purpose of this survey is to investigate whether international students at Cal Poly Pomona University use electronic money (e-money). The survey will investigate why international students use e-money, how they use it, and the factors that motivate them to use it. Thus, your participation is important. Please note that your responses and identity will remain confidential throughout the study. Do not indicate your name or student identification number on this questionnaire. Please give your honest answer to every question.
What is your country of origin?
For each of these statements, please indicate whether you strongly agree, agree, disagree, or strongly disagree.
For each statement: Strongly agree / Agree / Disagree / Strongly disagree

The use of e-money
- I use e-money regularly to make payments
- I rarely use e-money
- I have never used e-money

Reasons for using e-money
- I use e-money because it is convenient, cheap and fast
- I use e-money for safety reasons
- I use e-money because it is the only method of payment for some transactions
- I use e-money because my friends use it

Motivating factors
- My attitude towards e-money influences my decision to use e-money
- The social beliefs in the university influence my decision to use e-money
- My cultural background influences my use of e-money
- The lessons taught in class have caused me to shift from traditional methods of payment to e-money
- The availability of modern technology motivates me to use e-money
- My desire to keep up with advances in technology motivates me to use e-money
- The advantages of e-money motivate me to use this method of payment

Uses of e-money
- I use e-money to pay for goods and services at all retail outlets
- I use e-money to pay for goods and services at retail outlets within the university
- I only use e-money for online transactions
- I use e-money for online and real transactions
1. Apart from the reasons outlined in section B above, what are your other reasons for using e-money?
2. Kindly explain how frequently you use e-money.
3. In addition to the factors outlined above, what other factors motivate you to use e-money?
4. Would you encourage other international students to use e-money? Why?
5. Apart from the uses outlined above, how else do you use e-money?
Thank You For Your Participation!
The Responsibility Project: No Phone Zone Day
ETH/316: Ethics and Social Responsibility, University of Phoenix
Members of a community have certain responsibilities. Of the many responsibilities we have, social responsibility is the most important, because it is through it that we make a difference in other people's lives. The video addresses the issue of using cell phones while driving. Using a cell phone while driving can cause traffic disruptions and accidents, and these accidents have maimed and killed many victims. The No Phone Zone video gives damning statistics: eight hundred and twelve thousand (812,000) drivers admit to using phones while driving, and one third of teenage drivers text while driving. Distracted drivers are responsible for the deaths of 5,870 people and injuries to 515,000 people (No Phone Zone Day, 2010). This growing ethical and safety issue compelled people to take a stand: on April 30, 2010, an awareness day was established to educate the public on the dangers of using phones while driving (No Phone Zone Day, 2010). The issue of driving while using a cell phone has both legal and ethical aspects, and many states have enacted laws banning the use of phones while driving.
Using ethical principles to address organizational issues
Ethical principles help to address organizational issues through awareness and support. They also provide the boundaries of acceptable behavior within the organization. Liberty Mutual (2011) identifies social responsibility as the main ethical principle in the movie. Although the issue of using a cell phone while driving has now taken a legal dimension, law enforcement officials cannot minimize this behavior on their own. Politicians and community members can help the law enforcement agencies curb this vice by creating awareness. Once people understand that the law prohibits a certain behavior, they tend to desist from engaging in it. Support to combat the use of phones while driving can take different forms. The enforcement agencies require financial and moral support: financial support helps in funding awareness activities, and moral support is necessary so that drivers know right from wrong.
Importance of the issues in the film
The issue of using phones while driving is important because the country loses many lives every year and many are injured (No Phone Zone Day, 2010). This issue can affect anyone regardless of race, age or gender. It also costs the American taxpayer millions of dollars every year. Most of the money is spent on treatment of accident injuries, compensation to injured parties and promoting awareness. Lastly, this issue is important because of its legal and ethical aspects. The video is crucial as it raises awareness about the issue of using phones while driving.
The function of external social pressures in changing organizational ethics
External social pressures can play a positive or negative role in influencing organizational ethics. The positive role is the creation of awareness and positive peer pressure. The negative roles include creating negative peer pressure and the promotion of the vice. Awareness may cause people to think of the possible consequences before they act. Positive peer pressure can minimize violations of the law if peers are doing the right thing. Peer pressure can also be negative. If peers are doing the wrong thing, they may influence one into breaking the law. Promotion may be negative if the media show a person saying that he or she can do two things at once. The same applies if a person disputes the link between use of phones and accidents.
Relevance to organizational and personal decisions
The use of a cell phone while driving has relevance to personal and organizational decisions. It is organizationally relevant because enforcement organizations have to make decisions on what punishments and control measures to put in place in their efforts to eliminate or minimize the issue. It is relevant to personal decisions because people have to decide whether they are willing to take the risk and if the potential consequences of breaking the law are worth the risk.
Relationship between ethical and legal issues in the film
The relationship between the legal and ethical aspect is the negative consequences associated with using a phone while driving. Legally, some of the consequences of driving while using a phone include fines, traffic tickets, and jail time depending on the severity of the offence. The ethical consequences of using a cell phone while driving are guilt, and embarrassment depending on the circumstances. If someone gets into an accident because of using a phone while driving, and someone else is hurt, he or she would have to deal with both the legal and ethical issues.
Community members have responsibilities to themselves and their community. Social responsibility is probably the most important because each of us can make a difference in life. Using a phone while driving is not only an ethical issue but also, it is also a legal one. This issue claims the lives of thousands and leaves thousands hurt and maimed every year. The American government spends millions trying to promote awareness and repair the damage caused by this issue. This issue affects our organizational and personal decisions. Social pressure can help to reduce the incidence of use of mobile phones by drivers significantly.
Liberty Mutual. (2011). The Responsibility Project. Retrieved from
No Phone Zone Day. (2010). [Video file]. Available from Liberty Mutual website: http://responsibility-project.libertymutual.com/films/no-phone-zone- day#fbid=YSS6BDTOk
Project Scope Management
There are two perspectives of scope management in general project management as a body of knowledge: project scope and product scope. Project scope entails all the activities that need to be done within a given timeline in order to deliver the final product or service with specified functions and features. Product scope, on the other hand, entails the functions and features that characterize a service or a product (Shenhar and Dvir 608). In general, scope in project management refers to the collection of all the required information, resources, and features of the final product as per the client's requirements, including quality standards, before starting the project (Pinto 156). The scope of a project includes the goals of the project, its constraints, and its limitations. Described in this paper is scope management as a general body of knowledge in project management and its six main perspectives: conceptual development, the scope statement, work authorization, scope reporting, control systems, and project closeout (Pinto 157). Also described in the report is the application of the above project activities in automobile and manufacturing engineering.
Scope management is the general project management function of controlling the project objectives and goals through the conceptual development, definition, execution, and termination processes. It entails creating systematic project development plans before project initiation and estimating resources in terms of time and materials to ensure that the undertaking is a success (Atkinson, Cryford, and Ward 688).
Project constraints require project managers to understand, before starting the project, all the restrictions that may affect it and lead to scope creep. Such constraints include time, the financial budget, and client demands that may lead to scope configuration (Atkinson, Cryford, and Ward 690). Turner and Cochrane (94-97) underscore that many projects fail due to ill-defined objectives arising from a partial understanding of the problem statement. This happens when the project client gives insufficient information about the problem or when the project organization team undertakes insufficient research on the problem and probable solutions. Figure 1 below shows the various elements of a project scope and their activities.
This is the first step in project management and deals with the clear definition of the goals and objectives of the project and finding the best way of achieving them. It starts with the collection of data by the project team and its processing into meaningful information on how to handle the project (Pinto 158). It helps in reducing the complexity of the project to a basic level. Key issues in conceptual development include the problem statement, information gathering, constraints, alternative analysis, project objectives, and the statement of work (Khan 13; Pinto 157).
Problem statement: This deals with identifying the existing problem and the need for a solution to it (Khan 12). For instance, the defense department of the United States of America may want to contract an automobile company like General Motors Incorporation to manufacture an Armored Personnel Carrier (APC) with the capability to transport an infantry squad of 12 soldiers, the ability to travel through water, minimal offensive power, and strong side armor to protect the crew inside it. The problem could be improving on an existing APC like the M-113 that lacks some of these capabilities.
Information gathering: This entails the collection of data and its processing into meaningful information regarding the existing problem and the required solution. Gathering information about the ineffectiveness of the M-113 Armored Personnel Carrier above is important in informing General Motors in finding a solution to the outlined requirements. According to Cripe (6-7), project managers should only commence project implementation after gaining a clear understanding of the problem from research information.
Constraints: These are restrictions that may affect the completion of the project. They include time limits, financial budget constraints, and client demands that may lead to scope creep and configuration. In the above example, General Motors Incorporation should assess all these possible constraints before initiating the project of manufacturing the required APC for the US Army.
Alternative analysis: Project managers are also expected to undertake an alternative analysis to identify an alternative solution to the problem in the event of failure of the first option (Pinto 158). According to Barros, Werner, and Travassos (23-24), alternative analysis in project management entails first understanding the nature of the problem at hand and its suitable solution. The derived or proposed solution is then used to generate an alternative strategy and solution to the same problem.
There are two main functions of alternative analysis: it gives the project team a comprehensive understanding of the project characteristics, and it offers an alternative choice on how a given project should be undertaken (Pinto 158). Alternative analysis prevents the project team from starting the implementation process without conducting sufficient analysis of more efficient and effective options.
Project objectives: This requires project managers to know the required outputs, resources, and timing of the project (Reh n.p.). All processes, steps, and procedures set out in a conceptual development plan should work jointly as a system to affect the outcome of the project. A vague objective leads to scope creep and failure to achieve the objective. For instance, a conceptual development objective in the automobile and manufacturing industry may be to develop an automobile engine with improved efficiency for Mercedes-Benz. In this case, higher engine efficiency implies less fuel consumption, and thus the design team has to work on how to improve the fuel efficiency of the engine. A vague objective related to the same output would be to improve the efficiency of a Mercedes-Benz. In this case, the objective is not specific, because the stated efficiency could apply to speed, the braking system, or adaptability to road surfaces. Such a vague objective often leads to scope creep due to client reconfigurations and scope changes (Shenhar and Dvir 608).
Statement of Work (SOW): This is a detailed breakdown of the work requirements and activities for the project. It contains a brief description of the project objectives, the general activities of the project, the expected outputs, and the budget. Depending on the project scope, the statement of work can be very detailed. For instance, a request by the department of defense of the United States of America for an automobile company like General Motors Incorporation to manufacture an Armored Personnel Carrier for battle may include the following details in the statement of work: the number of people it can carry, the type of material to use, the overall weight of the finished vehicle, and the project start and end dates. In the above example of the problem statement, an effective statement of work includes background information on the problem, a technical description of the problem and required solution, and the project timeline and milestones (Wysocki 54). According to Pinto (159), an effective statement of work should detail clearly the expectations of the client, the problems to be addressed, and the activities required to accomplish the project.
This is the documentation and approval of critical project specifications and parameters by the project team before initiating the project. The key stages in the scope statement include establishment of the objective criteria of the project, development of the project management plan, establishment of a work breakdown structure (WBS), and creation of the scope baseline (Pinto 159-161).
The project goals criteria consist of the project schedule, cost, performance, and deliverables. The last refers to any tangible, measurable, and verifiable outcomes or items that should be achieved at the completion of the project or a part of it.
The scope baseline of a project is the documentation of each component of the project objectives, including a provisional budget of required resources and schedule information for each project work package. Creating the scope baseline is the final step in systematic pre-work project planning, where every task is identified and allocated resources and control parameters.
The Work Breakdown Structure (WBS) is the division of the project scope into modules or component sub-steps so as to allow the establishment of inter-relationships among the various activities of the project. A WBS helps the project team manage the project effectively and accurately.
Another critical element of the scope statement is the development of the project management plan. The plan consists of the organizational framework of the project team, the policies and procedures that govern the project team members, job descriptions, and the reporting structure of each team member. It can be compared to a hierarchy of command in organizational management. Figure 2 below shows the components of scope management centered on the work breakdown structure.
In the above figure, using the hypothetical example of the project of General Motors to manufacture an Armored Personnel Carrier for the US Army, project initiation arises out of the need for an improved M-113 Armored Personnel Carrier for battle. Scope planning produces an intermediate Work Breakdown Structure detailing likely project management areas and general project activities. After a review, the scope plan is upgraded into a detailed Work Breakdown Structure (Khan 13). Scope definition is the addition of other relevant details to the already designed scope to produce a more meaningful work breakdown structure and achievable objectives. Scope verification involves checking all project engineering deliverables raised in the project scope planning and definition stages. Finally, scope change control entails measuring or monitoring the emerging scope of the project against the scope baseline and making appropriate adjustments to ensure that the overall project is completed within the budgeted time and financial resources.
The main purposes of the work breakdown structure include the following: One, it echoes the objectives of the project. Two, it serves as the organization chart for the project being undertaken. Three, it forms a logical platform for tracking the project's costs, schedules, and performance for each activity in the project. Four, it can be used in communicating the project status and improving the overall communication of the project. Five, it shows how the project will be managed before initiation (Pinto 162). Figure 3 below shows a typical work breakdown structure of a project.
In the above hypothetical project work breakdown structure, 1.0 is the main project problem. 1.2, 1.3, and 1.4 are major tasks which should be undertaken in the problem resolution process. 1.2.1, 1.2.2, and 1.2.3 are sub-tasks, otherwise deliverable work packages, under task 1.2, while 1.3.1 and 1.3.2 are sub-tasks of task 1.3. Lastly, 1.4.1 and 1.4.2 are sub-tasks of task 1.4 of the overall project scope. The project activities or tasks should be broken down to the lowest task level to permit a clear understanding of the project objectives.
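The hierarchical numbering above is just a tree, and a minimal sketch in Python shows how the deliverable work packages fall out as the leaves of that tree. The task names are placeholders, since the figure is itself hypothetical:

```python
# A minimal sketch of the hypothetical WBS in Figure 3: 1.0 is the project,
# 1.2-1.4 are major tasks, and the leaf codes are deliverable work packages.
# Names are placeholders; the figure does not label the tasks.
wbs = {
    "1.0": ["1.2", "1.3", "1.4"],
    "1.2": ["1.2.1", "1.2.2", "1.2.3"],
    "1.3": ["1.3.1", "1.3.2"],
    "1.4": ["1.4.1", "1.4.2"],
}

def work_packages(code, tree):
    """Return the lowest-level tasks (work packages) under a WBS code."""
    children = tree.get(code, [])
    if not children:          # a leaf node is itself a work package
        return [code]
    packages = []
    for child in children:
        packages.extend(work_packages(child, tree))
    return packages

print(work_packages("1.0", wbs))
# ['1.2.1', '1.2.2', '1.2.3', '1.3.1', '1.3.2', '1.4.1', '1.4.2']
```

Tracking cost, schedule, and performance per code then amounts to attaching those attributes to each node of the tree.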
The Responsibility Assignment Matrix (RAM): This is also called the linear responsibility chart. It is a description of the responsibilities of each project team member in the overall project and shows the personnel responsible for the various activities in the work breakdown structure (Cleland and Ireland 234).
Work authorization is the formal authorization of the project to proceed after scope statement, plan documentation, and preparation and approval of all contractual documents. In many project undertakings, work authorization entails formal signing of all project plan documents and contractual agreements by the parties involved. For the above example of the manufacture of the Armored Personnel Carrier for the US Army, work authorization entails sign-off of all the plan documents of the project scope by the representative of the department of defense of the United States of America and the project managers of General Motors Incorporation.
Valid considerations are voluntary promises in the project exchanged in return for a commitment by another party. In the above example, valid considerations may include the promise by the department of defense of the United States of America to pay General Motors Incorporation upon completion and delivery of the project objectives. In turn, General Motors must promise the US Army that it will deliver the project outcome as set out in the contract terms.
Contractual agreements serve as a codification of the relationship between the project client and the project organization. Turnkey contracts are contracts in which the project organization takes responsibility for all costs of the successful delivery of the project objectives. Cost-plus contracts are those in which the project organization fixes its entitled profit in advance for the completion of the project. A cost-plus contract implies that the client of the project takes liability for all costs that arise from project changes above the baseline scope (Pinto 171).
Project scope reporting is the progressive acquisition and communication of information regarding the project's progress by the project manager to the client. The task entails determining relevant project progress information and reporting it to the relevant parties in the project. Reporting can be task-based or time-based. The former refers to reporting information to the relevant parties after the completion of each project sub-task or activity, while the latter refers to communicating project progress information after a pre-set time interval. For instance, in the above example of the manufacture of an APC for the US Army, the project manager of General Motors can decide to report the project progress to the US department of defense after completion of each project activity or after a fixed time interval elapses, say regularly at an interval of two months. The main information in the scope report of many projects includes the cost status, schedule status, and technical status (Pinto 172).
Project control entails monitoring the developing project scope against the baseline scope to ensure that any notable changes that may affect cost and completion time are well taken care of. Project control is a function of project managers and involves the following:
Configuration control: This involves all procedures for monitoring the emerging project scope against the baseline or original contractual scope. A change in project scope is called scope configuration, and it arises when there was insufficient information and knowledge of the problem statement and project objectives, leading to the addition of requirements along the way by the client. It may also arise due to a lack of comprehensive understanding of the problem by the project organization, leading to failure to meet objectives. This failure may prompt the project team to change the project scope by executing alternatives or by adding changes to the existing scope, leading to timeline creep and inflation of the project's resources.
Design control relates to monitoring the project's scope and costs at the design stage. For example, Chrysler has a Platform Design Team (PDT) which comprises members from various functional departments. The team is charged with the responsibility of ensuring that new automobile designs are evaluated by experts in engineering, marketing, and production.
Trend monitoring entails tracking the estimated costs, schedules, and resources used in the project against the baseline budget. It shows any significant deviation of the project scope from the budgeted scope. Document control is a functional process of ensuring that project documents are compiled and disseminated to the right personnel when required during project implementation. Acquisition control is used to monitor the systems used in acquiring the project requirements such as materials, equipment, and the labor force. Specification control is used to ensure that all project specifications are clearly prepared, communicated to the concerned individuals, and changed or altered only with proper authorization (Pinto 173).
Configuration management can be defined as a system of set procedures used for monitoring the emerging scope of the project against the original contractual scope. It is a functional requirement for the documentation and approval of all changes to the baseline scope of the project. According to Tallent and LaGuarda (144), the baseline scope is the scope of a project fixed at a specific point in time; it could be at the beginning or during the project's progression. The scope baseline can therefore be described as a project scope configuration. It provides a summative description of the original content, end product, and budgetary cost of the project. Therefore, configuration management involves developing the various individual components of the project and assembling them into one functional entity. The main factors that may lead to a change of the project configuration are initial technical or human planning errors, additional project knowledge or environmental conditions, uncontrollable mandates, and client requests (Pinto 174).
Project closeout is the final step in project scope management. At this stage, the project manager is required to consider the records and reports that the organization and the client will require at the end of the project. Effective closeout records and reports are written earlier, during the progression stages of the project. Closeout information is useful where contractual disputes ensue after project completion, and it facilitates project auditing by indicating the expense flow in the various project accounts. Pertinent project closeout documentation includes:
Historical records that can be used to analyse feasibility, predict trends, and highlight problem areas. This information is helpful for the future handling of similar projects.
Post-project analysis, which is a formal reporting structure and may include analysis and recording of the project's performance in terms of adherence to schedules, cost management, and technical performance.
Financial closeout, which is the accounting of project expenses. This information is helpful in estimating the final cost of the project in the case of cost-plus contracts. For instance, if the contractual terms of the project of manufacturing an Armored Personnel Carrier for the US Army by General Motors above provide for a 20% profit for the latter irrespective of the project cost, then the General Motors project manager should provide all the financial closeout documents that will be used in estimating what the US Department of Defense should pay the automobile company.
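The cost-plus arithmetic in the example above is simple enough to show as a worked calculation. The $50 million documented cost below is a hypothetical figure invented for illustration; only the 20% profit rate comes from the example in the text.

```python
# Worked cost-plus example: under a cost-plus contract with a fixed 20%
# profit rate, the payable amount is the documented project cost plus
# 20% of that cost. The $50M cost figure is purely illustrative.

def cost_plus_payment(documented_cost, profit_rate=0.20):
    """Total payable = documented cost * (1 + profit rate)."""
    return documented_cost * (1 + profit_rate)

# If the financial closeout documents show $50 million in expenses,
# the client owes the contractor $60 million:
print(cost_plus_payment(50_000_000))  # 60000000.0
```

This is why accurate financial closeout records matter in cost-plus contracts: every documented dollar of cost directly changes the final payment.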
The above description of project scope management and its related activities shows the importance of scope management in any project undertaking. According to Chan (4), each project has a scope which must be managed judiciously in order to meet its objectives within the stipulated time frame and baseline budget. Khan (14) underscores that many automobile projects suffer scope creep and consequently fail to achieve their set objectives because of poor scope management. Some project organizations end up running projects at a loss due to budget overruns resulting from poor scope management, particularly in turnkey contracts.
The project of upgrading the safety features of the Concorde following the crash of Air France Flight 4590 in the year 2000 shows one of the cases in which poor scope management can lead to the collapse of an entire operation. Following the crash, the organization undertook a massive project of upgrading the safety of its planes. In the end, the cumulative costs of maintaining Concorde flights rose tremendously beyond feasible market rates. This led to the collapse of the service, and today Concorde planes are only displayed in French museums for viewing (Clark 1-3).
In summary, effective project scope management calls for project managers to understand the relationship between scope management and project success. Project scope planning must go through conceptual development, the scope statement, work authorization, project scope reporting, control systems, and project closeout to ensure successful delivery of the set objectives.
A work breakdown structure is an effective scope management tool that breaks down the project activities into deliverable elements. Closely related to the work breakdown structure is the organization breakdown structure, which helps project organizations define the work to be accomplished and assign it to work package owners.
A Responsibility Assignment Matrix (RAM) is also an essential project scope management tool: it ensures that each project work package is allocated to an owner tasked with the responsibility of ensuring that the activity or work package is completed within the stipulated time and budgeted resources.
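The WBS and RAM relationship described above can be sketched as a small data structure. The package names, numbering, and owner roles below are hypothetical examples, not taken from any real project plan.

```python
# Sketch of a work breakdown structure (WBS) with a responsibility
# assignment matrix (RAM): deliverable work packages, each with exactly
# one accountable owner. All names here are hypothetical.

wbs = {
    "1.0 Vehicle Design": {
        "1.1 Chassis design": "engineering lead",
        "1.2 Drivetrain design": "powertrain lead",
    },
    "2.0 Production": {
        "2.1 Tooling": "manufacturing lead",
        "2.2 Assembly line setup": "plant manager",
    },
}

def responsibility_matrix(wbs):
    """Flatten the WBS into {work_package: owner}, enforcing the RAM
    principle that every package has a single accountable owner."""
    return {pkg: owner
            for packages in wbs.values()
            for pkg, owner in packages.items()}

ram = responsibility_matrix(wbs)
print(ram["2.1 Tooling"])  # manufacturing lead
```

The key design point is the one-to-one mapping from work package to owner: if two rows of the matrix claimed the same package, accountability for its schedule and budget would be ambiguous.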
Chan, Julie. "A Project Has Scope, Budget." Administrative Assistant's Update Oct. 2007: 4.
Clark, Nicola. "Trial to Open in Concorde Disaster." The New York Times 2010: 1-3.
Cleland, David, and Lewis Ireland. Project Management: Strategic Design and Implementation. New York: McGraw-Hill Professional, 2006.
Turner, J. R., and R. A. Cochrane. "Goals-and-Methods Matrix: Coping with Projects with Ill Defined Goals and/or Methods of Achieving Them." International Journal of Project Management 11.2 (1993): 194-95.
Cripe, Edward. "A Blueprint for Success: The Skills, Knowledge, and Personal Characteristics of Superior Performers in This Job." Project Manager Job Competency Model in a High Technology Organization (2010): 1-15.
Shenhar, Aaron J., and Dov Dvir. "Toward a Typological Theory of Project Management." Research Policy 25 (1996): 607-632.
Reh, John. "Project Management: Basic Project Management Outline." About.com Management (2012): n.p.
Khan, Asadullah. "Project Scope Management." Cost Engineering 48.6 (2006): 12-16.
Pinto, Jeffrey. Project Management: Achieving Competitive Advantage. 2nd ed. New York: Prentice Hall, 2010.
Atkinson, Roger, Lynn Crawford, and Stephen Ward. "Fundamental Uncertainties in Projects and the Scope of Project Management." International Journal of Project Management 24.8 (2006): 687-698.
Barros, Marcio, Claudia Werner, and Guilherme Travassos. "Supporting Risks in Software Project Management." The Journal of Systems and Software 70 (2004): 21-35.
Tallent, Cheryl, and Ed LaGuardia. "Project Management Estimating: Scope, Timeline, Resources." Library Journal 123.16 (1998): 144.
Wysocki, Robert. Effective Project Management: Traditional, Agile, Extreme. New York: John Wiley & Sons, 2011.
Project Evaluation and Control
Project evaluation and control is a critical strategic concept in management, as it determines in a quantifiable manner the success rate of an initiated program and how closely it adheres to its pre-set objectives (Rossi, Lipsey & Freeman, 2004). This business plan defines the complete project plan for the new Finance System to be developed by XYZ to mitigate various operational challenges the firm currently faces. Given the critical role the plan plays in the firm, a proper evaluation reporting the ongoing status of the project to stakeholders and participants is imperative. A detailed description and presentation establishes confidence that the initiated project is executed effectively and with minimal waste; such a plan for performance measurement requires identifying the critical elements of performance, how they are measured, and when they are measured, in order to communicate whether the project is on track (Rossi, Lipsey & Freeman, 2004).
Aim of the Project
The project being evaluated is the installation of new Finance Systems at XYZ to replace the older standalone systems whose limited integration and inconsistency across the firm's departments created significant corporate, operational, and financial risk (Rossi, Lipsey & Freeman, 2004). The lack of core financial standards, such as a chart of accounts (COA), data definitions, KPIs, and allocation methods, limits management's capacity to reach the desired field net-income management goals (Zelkowitz, 1999). Owing to this, the key goal of the Financial Implementation System suggested in the plan is to create a single process standard and appropriate technology standards for financial management throughout the firm. It is intended to be a single application that provides consistent financial management processes throughout the firm while allowing the local modifications necessary for regulatory reporting (Rossi, Lipsey & Freeman, 2004).
Evaluation of the Project
The evaluation to assess the progress of this project is done under four key evaluation topics: partnership, project process, products, and impact on target groups. The project participants span management, the project committee, and the implementers, all of whom have a close working relationship, so all relevant levels of partnership are present (Zelkowitz, 1999). The outline of the project tasks has been well presented to all the target groups, and their work is well defined by institutional structures created to ease performance and avoid duplication of tasks (Rossi, Lipsey & Freeman, 2004). Communication among these stakeholders is also effectively established through clear channels where information and material are spread effectively (Zelkowitz, 1999).
Since the initiation of the project, the planning and management process has so far been sufficiently appropriate, ensuring proper oversight of operations. There are clear planning and management guidelines ensuring clarity of organizational procedures, roles, and responsibilities for the participants (Rossi, Lipsey & Freeman, 2004). With a robustly instituted project management scheme, the area in need of critical consideration is the project's management and evaluation plan: the management committee, as it stands, lacks proper record keeping, adherence to time scales, and arrangements for ongoing monitoring (Zelkowitz, 1999). These are critical elements of evaluating the progress of the project that must be carefully considered by the project committee to ensure the capacity to quantifiably account for the progress of the entire project (Zelkowitz, 1999).
Assessment of the products of the project considers the website created, the use of ICT, and the integration of the system with the existing systems in the firm. The expected website has been created and is operational and attractive, giving the right information on the project and its theme and thereby serving as an effective communication tool. It is also well optimized, being easy to find, popular, and frequently visited (Rossi, Lipsey & Freeman, 2004). The project has also been able to use ICT extensively to establish a technologically advanced finance management system for XYZ, which has been well integrated with the organization's existing systems.
As of now, therefore, the project can be said to be on the right course, with key objectives already achieved and the implementation of key strategies already underway. Although there are areas in need of more deliberate consideration regarding the evaluation and monitoring systems, as mentioned above, design processes, reports, workflows, integration, and implementation are among the key evaluation elements already operational in this project, giving a clear indication that it is headed in the right direction. Testing, training, and phased deployment are yet to begin, as the project is still in its early stages.
Rossi, P., Lipsey, M.W. & Freeman, H.E. (2004). Evaluation: A Systematic Approach (7th ed.). Thousand Oaks: Sage.
Zelkowitz, M. (1999). Advances in Computers. New York: Academic Press.
Defense Travel Management Office: Change Management
Change in an organization is an empirically observable variation in quality, state, or shape over time. The general motives of organizational change are environmental adaptation (Hughes, 2011) and improvement in organizational performance (Spicer, 2011). To design change typologies effectively, change is defined along a continuum ranging from evolutionary, low-scope changes to strategic, high-scope changes. Organizational changes involve making adjustments to the organization's structure, purpose, culture, and processes in reaction to anticipated environmental changes.
According to Hartley (2009), change can be defined as both a condition and a process. It describes environmental happenings that may profoundly affect an organization in one way or another. Thus, it is a process that should be incorporated in the planning and implementation of projects from the beginning. On many occasions, however, organizational changes are not taken into consideration during the development of project strategies (Hughes, 2011).
Where does the incentive for change come from? Effective strategic managers understand that change in strategic management is inevitable and is in fact a continuous process in every organization (Cruickshank and Collins, 2012). They need to identify when changes in the environment signal a need for change and when they do not. Change is completely necessary for the organization's survival; it is unavoidable. Over the long run, organizations have no choice unless they are willing to become irrelevant in their fields of operation (Richard 2006). Change results in radical transformation that enables an organization to completely change its essential framework in pursuit of competitive advantage while affecting its fundamental capabilities.
It is for these reasons that a change management program is needed to manage the commercial travel requirements within the Defense Travel Management Office. Under its purview, the Defense Travel Management Office serves as the single central point for travel policy, centrally managed commercial travel programs, functional oversight of the Defense Travel System, commercial travel office agreements, and strategic guidance for all such areas. This has been addressed as a vital and integral component of the project methodologies we will use. As soon as the change management strategy has been developed, it will be integrated with the project plans at any point after start-up.
The Defense Travel Management Office change management strategy seeks to provide guidelines for understanding the Defense Travel Management Office change management program. The plan outlines the responsibilities of the primary entities as well as the scope of the program. It will enable a continual dialog among primary Defense Travel Management Office personnel, Military Agency representatives, Travel Improvement Unit members, system developers, process owners, and industry partners. The Defense Travel Management Office will ensure that input from each of the defined groups is heard and well managed. The processes defined in this framework guarantee clear, structured management of Department of Defense requirements that relate to commercial travel.
Cruickshank and Collins (2012) argue that, to remain highly competitive, organizations need to implement a client-directed, responsive culture and a suitable framework, which requires the use of both strategic management and change management strategies.
The Defense Travel Management Office aims to consolidate travel requirements, re-create regulations and directives, and develop new technologies to rationalize travel business processes. The following strategies are required in response to the variety of challenges that face the Defense Travel Management Office's senior managers:
Establish an Improvement Unit for Defense Travel that will manage changes to travel requirements.
Establish a Steering Committee for Defense Travel (SCDT) to forecast the implementation of transformations and to provide opinions on enterprise-wide applications and any other application that exceeds a cost of $600,000.
Give direction on the use of the Defense Travel System (DTS) as well as other future travel systems.
Merge and manage commercial travel office services.
Manage the Travel Card Program.
Implement new and innovative processes to allow effective traveling throughout the Unit.
This Defense Travel Management Office Change Management plan outlines how transformations to the Department of Defense's travel requirements are defined, enhanced, managed, implemented, and analyzed. It may be used by Department of Defense personnel as a guide to the Defense Travel Management Office change management program.
Several templates for change management exist; for example, Hughes (2011) suggests that change leaders must practice five sets of activities while planning and implementing change:
Develop change readiness to overcome resistance to change.
Develop an organizational vision while articulating a convincing rationale for the transformation.
Develop political support for the transformation process.
Manage the transformation of the organization from its present state to the desired state.
Sustain momentum for the transformation so that it is carried to completion.
To illustrate these activities, Cruickshank and Collins (2012) suggest an eight-step model that can be used to deliver change in an organization. These eight steps begin with establishing a sense of urgency, developing the guiding coalition, and developing the change vision and strategy. However, the steps usually required to support the implementation of the training system are developmental changes (Spicer, 2011). These include the redesign of jobs as well as of organizational business operations. Every organization needs to choose an approach to managing change that fits its context, objectives, goals, purpose, and capabilities (Hughes, 2011).
To make the scope of change easier to apply, this plan describes both extremes of the range. It spans small changes that transform organizational aspects while focusing on improving the current situation and keeping the Department of Defense's working framework intact. However, it is restricted to the formal transformation management program introduced by the Defense Travel Management Office. As technical advances are introduced, the planned changes to the Unit's present services, applications, and protocols will be managed as defined in the plan.
As a result, a number of stakeholders related to implementation have been identified. The stakeholder groups are considered a component of this plan. Department of Defense activities will review change proposals submitted by Department of Defense customers before such proposals are approved by the Defense Travel Management Office. The Defense Travel Management Office has also established the Defense Travel Improvement Unit, consisting of the main Military Agencies. This will ensure proper and successful change management.
Military Agencies are considered part of the stakeholders and will participate actively in the Defense Travel Management Office change management program by forecasting the present primary requirements throughout the change life-cycle. The extent and complexity of altering such requirements is acknowledged by the Defense Travel Management Office. All changes will be prudently managed within this plan's scope, as defined in the subsequent parts. Any engagement with these groups of stakeholders can be regarded as part of the transformation management strategy.
Apart from activities whose key purpose is to bring about transformation, the scope of this plan also includes activities associated with:
User requirement collection
For proper change management, changes must be identified, classified, and clearly documented. Identifying changes is necessary for managing change throughout the requirement life-cycle, a process that involves maintaining the baseline of requirements. Changes may be classified by regulatory or non-regulatory processes, documentation, or technology such as software applications. The process includes: establishing and maintaining the functional requirements baseline; identifying each change, together with all supporting documentation and references; determining the documentation types needed for each change; implementing the functional requirements and the associated documentation; and reviewing the outcome of the impact analysis to determine continued application.
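The identification and classification step above can be sketched as a small data model. This is a hypothetical illustration inferred from the text: the field names, the `ChangeRecord` type, and the three category labels are assumptions, not the office's actual schema.

```python
# Sketch of change identification: each change is identified, classified
# (here: regulatory, documentation, or technology), and recorded against
# the functional requirements baseline. Field names are illustrative.

from dataclasses import dataclass, field

CATEGORIES = {"regulatory", "documentation", "technology"}

@dataclass
class ChangeRecord:
    change_id: str
    description: str
    category: str                                    # one of CATEGORIES
    references: list = field(default_factory=list)   # supporting documents

    def __post_init__(self):
        # Reject unclassified changes: classification is mandatory.
        if self.category not in CATEGORIES:
            raise ValueError(f"unknown category: {self.category}")

baseline = {}  # functional requirements baseline, keyed by change ID

def identify_change(record, baseline):
    """Record a classified change in the maintained baseline."""
    baseline[record.change_id] = record
    return record.change_id

cr = ChangeRecord("CR-001", "Update per-diem rates", "regulatory",
                  references=["travel regulation, ch. 2"])
print(identify_change(cr, baseline))  # CR-001
```

Keeping classification mandatory at record-creation time mirrors the requirement that changes be "identified, classified and clearly documented" before they enter the baseline.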
The fundamental purpose of controlling change is to manage changes against the functional requirements baseline. This process encompasses identifying the requirements as well as implementing the approved changes. It details the following:
Identifying the need for transformation
Evaluating Change Requests
Evaluating the Functional Requirements document
Steering Defense Travel Improvement Board meetings
Conducting post-Defense Travel Improvement Board meeting activities
Tracking change implementation.
Change requests are submitted to the appropriate Defense Travel Improvement Board for review and approval. Changes associated with commercial travel policy, services, programs, and business processes are to be reviewed by the appropriate Military Agencies and Defense Travel Improvement Board representatives for validity and substance. All proposals are to be submitted in the change request format.
On approval, the Defense Travel Improvement Board representatives will forward the Change Request to the Defense Travel Management Office change management email address for processing. The submitter receives an acknowledgement on receipt of the Change Request, detailing its review status. The Defense Travel Management Office change management team will enter the proposal into the Rational ClearQuest system, software for managing change and other associated activities. Once logged, the Change Request is assigned to a Defense Travel Management Office Requirements Analyst for action. An initial review of the proposed changes is conducted to determine the complexity and extent of the issue presented, which will determine whether any further review is necessary.
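The processing sequence described above (submission, acknowledgement, logging, assignment, initial review) can be modeled as a simple status machine. The status names and transitions below are assumptions inferred from the text, not the tracking tool's actual configuration.

```python
# Sketch of the Change Request review flow as a status machine. The
# states and allowed transitions are illustrative assumptions drawn
# from the narrative, not a real workflow definition.

VALID_TRANSITIONS = {
    "submitted": {"acknowledged"},
    "acknowledged": {"logged"},
    "logged": {"assigned"},
    "assigned": {"in_initial_review"},
    "in_initial_review": {"needs_further_review", "approved", "withdrawn"},
}

def advance(status, new_status):
    """Move a Change Request to a new status, enforcing the review order."""
    if new_status not in VALID_TRANSITIONS.get(status, set()):
        raise ValueError(f"cannot move from {status} to {new_status}")
    return new_status

status = "submitted"
for step in ("acknowledged", "logged", "assigned", "in_initial_review"):
    status = advance(status, step)
print(status)  # in_initial_review
```

Encoding the allowed transitions explicitly prevents a request from, say, jumping straight from submission to approval without being logged and assigned for review.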
Project Change Request Form
Change Information
Title:
Reference Number:
Unit Manager:

Part 1: Change Request
Requested By:
Email Address:
Date of Request:
Change Request ID:
Supplied by (PM):
Subject to be Changed:
Priority:
Change Description: (give a high-level description of the desired transformation; reference any existing policy guidance and regulations by title and section; provide as much detail as possible)
Reasons to Change: (identify the need for and benefits of the proposed transformation)
Impact of Non-incorporation: (describe the consequences or impact if the change request is not adopted)
Remarks and Recommendations: (provide any additional information that may be helpful in adopting the requested change)
Estimated Cost and Time: (to be completed by the PMO Defense Travel Service; an estimated cost and time for implementing the requested change will be identified by the PMO Defense Travel Service and presented during the Board meeting)
Estimate Basis: (to be completed by the PMO Defense Travel Service; an estimated implementation schedule will be identified by the PMO Defense Travel Service and presented to the Defense Travel Improvement Board during the Board meeting)

Part 2: Change Assessment
Assessed by:
Date of Assessment:
Activities Required:
What is Affected:
Impact to Cost, Schedule, Scope, Quality, and Risk:

Part 3: Resolution of Changes
Accepted / Withdrawn
Approved by (Print):

Part 4: Tracking Changes
Date of Completion:
Completed by (Print):
My signature above affirms that the Change Request documentation has been updated to soundly and comprehensively reflect the approved transformation.
In-depth Analysis on Travel Improvement
A change request often requires an in-depth analysis of the issues presented. This is done by the Travel Improvement Working Unit, which reviews the request for approval. The unit members are selected based on the areas of expertise and experience outlined in the change request. The team conducts the analysis and gives recommendations for acceptance or withdrawal to the Defense Travel Improvement Board representatives. The team also assesses available alternatives that could improve the proposed change and provide a more cost-effective and better-performing solution; in such situations the team may name those alternatives in its recommendation. The Defense Travel Management Office Requirements Analyst will revise the change request as required to integrate any improvement revisions. However, if a change request is not approved by the Travel Improvement Working Unit, it enters withdrawn status.
Change tracking is the process of maintaining a log of all change requests that have been approved, withdrawn, or rejected from the project. This gives clear traceability of all requested changes. The following sample language may be used:
The Defense Travel Improvement Board will maintain a master log of all proposed changes and resolution of each proposal. All proposed changes will be maintained in a Change Maintenance Log.
For approved requests, the Defense Travel Improvement Chief will complete Part 4 of the Change Request Form upon completion of the document updates. He will file the form with the other related project articles.
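The Change Maintenance Log described above amounts to an append-only record of every proposal and its resolution. The sketch below is an illustrative assumption about what such a log might look like; the entry fields and sample change titles are invented for the example.

```python
# Sketch of a Change Maintenance Log: a master log recording every
# proposed change and its resolution (approved, withdrawn, or rejected),
# giving full traceability. Entry fields and titles are illustrative.

change_log = []  # master log of all change requests and resolutions

def log_resolution(change_id, title, resolution):
    """Append a change request and its resolution to the master log."""
    assert resolution in {"approved", "withdrawn", "rejected"}
    change_log.append({"id": change_id, "title": title,
                       "resolution": resolution})

log_resolution("CR-001", "Update per-diem rates", "approved")
log_resolution("CR-002", "Retire legacy booking tool", "withdrawn")

# Traceability query: which requests were approved?
approved = [e["id"] for e in change_log if e["resolution"] == "approved"]
print(approved)  # ['CR-001']
```

Because entries are only ever appended, the log preserves the full history of proposals, including those withdrawn or rejected, which is exactly what traceability requires.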
The board shall convene at the Defense Travel Management Office on the first Tuesday of every month for a duration of two hours to review Change Requests. Each meeting will follow the published agenda and be carried out in presentation format. Such meetings have the following key components:
Agenda items review
Review of previously discussed matters
Review of matters approved at the last meeting
Updates on governance board by Defense Travel Management Office Requirement Branch office
Presentation of informational items as well as present subjects by each Defense Travel Management Office Division
Presentation of coming attractions.
The Steering Committee
The steering committee will consist of Military Agency representatives of O-8 rank or civilian equivalent. They will be responsible for forecasting activities of the Defense Travel Improvement Board, reviewing detailed Department of Defense changes before implementation by the Board, and serving as the key decision unit for Board appeals. Detailed information on the appeal process is outlined below. This committee will also be responsible for scheduling the Defense Travel Steering Committee meetings.
The goal of the Defense Travel Management Office transformation review process is to identify and discuss in detail all substantive subjects before presenting requested changes to a formal Defense Travel Improvement Board. These subjects are assessed and resolved whenever possible, except under unusual circumstances. However, an appeal should not be an approach for raising novel issues that should have been thought out during the initial review process. A member may appeal a Board decision through the appropriate Defense Travel Steering Committee representatives within five working days of the Defense Travel Improvement Board decision. Email will be the primary medium for the appeal statement. The member and the Defense Travel Improvement Board Chair should provide a history, with supporting documentation, of the reasoning behind the decision, as required. The Defense Travel Steering Committee will analyze the subject to reach its governing verdict. If the appeal process is not opened within the five-day window, the Board Chair's decision will stand.
This analysis examines both the physical and functional aspects of the approved change. The detailed review will take place within five months of implementation in order to assess the validity, cost effectiveness, and overall impact of the approved process on the organization. A report summarizing the entire process will be provided to all members by the Requirements Branch Chief upon closure of the detailed analysis.
Change Communication Plan
John (2011) argues that when developing a communication strategy, it is vital to create reporting protocols for the project. The Travel Management Improvement Committee will identify the team responsible for rolling out communications. They will ensure that all related stakeholders, such as the Military Agencies and the Travel Management Improvement Board, are included in the communication plan. They will address when each message should be communicated and what negative impacts follow from communicating too late or too soon.
The communication Plan will detail the following:
The target audience of the piece of message
The requirements, priorities and special interests of the targeted audience
How the message will be best framed in order to address the targeted audience’s interest.
How the targeted audience might respond to the message, and whether their responses are likely to be supportive or open to misinterpretation.
The underlying objectives of the communication, and what the audience is expected to do or say as a result of the intended communication.
Barriers to Change
Resistance is a natural and inevitable part of any change process and an essential aspect to be considered in any transformation. John (2011) says that effective resistance management is key to the success or failure of a transformation. Defining resistance to change helps in understanding the phenomenon that affects the process at its start-up or during its development while aiming to preserve the present situation. According to Hughes (2011), this is a survival approach within organizations. Hendry and Woodward (2004) emphasize that the failure of many change efforts originates in barriers to change. Change barriers introduce extra costs and delays into the entire process that are difficult to anticipate but have to be taken into account. However, barriers to change have also been seen as a source of information for learning how to implement a more successful change process. As a result, barriers to change are a crucial subject in change management and should be considered to enable the organization to realize the importance of the transformation.
Can change agents do anything to deal with resistance? There may be no general guideline for avoiding resistance to transformation. However, managers and change agents should pay close attention to subjects that might result in barriers to change. They should consider how the organization's culture fits the transformation objectives and what activities should be undertaken to improve it before the transformation process kicks off.
Cultural consideration would help bring managers' and employees' interests closer together and avoid organizational silence (Karriker, 2008). Implementation is the integral step between the decision to change and its routine use within the organization. Within the Defense Travel Management Office, two groups of resistance factors can be identified. The first deals with cultural and political barriers to change. Hendry and Woodward (2004) mention that this group consists of the implementation climate and the relationship between organizational values and change values: a negative relationship between these values results in barriers and opposition to change. Further, politics, incommensurable beliefs, and strong disagreement among employees within the organization's departments often hinder change implementation. Organizations also often suffer from leadership inaction: some managers are afraid of uncertainty and thus fear changing the status quo.
Thus, it is crucial to identify the root causes of staff resistance in order to lay down the organizational strategies for implementation. This can be done in different ways, such as employee feedback, project team issues, supervisor input, and compliance audits.
In relation to this, a number of subjects that contribute to the difficulty of meeting and managing the change need to be communicated. A recent study of the acceptance and adoption of innovations showed that the success and speed of diffusion yield the following lessons:
The positive support of senior management in adopting change increases the success of transformation.
Effective organizational management speeds transformation.
Information needed to start up, implement, and assess a change request must be credible and persuasive to the people who influence budget decisions.
The rate of transformation is influenced by the degree to which the innovation requires changes in the organization's culture. The transformation process is slowed when the effort entails coordination across departments, units, or disciplines.
Organizational behavior is an area replete with theories, models, and approaches for understanding it and for guiding efforts to change it. Taken together, these theories and models highlight the complexity of change behavior. They show that it is possible to characterize the different elements involved and to identify which attitudes, beliefs, values, practices, and behaviors are to be altered, and under what conditions.
Effecting internal organizational changes to accommodate external changes is responsive (Hendry & Woodward, 2004), whereas strategic managers should be proactive. A well-managed organizational vision, however, can enable a balance between responsive and proactive change. Effective change management allows employees to adopt a change so that the organization’s objectives and goals are realized. According to Spicer (2011), change management is the bridge between a solution and the realization of results; it is essentially about employees’ collective role in transforming change into a successful outcome for the organization. It is also important to acknowledge that training is a crucial approach to overcoming communication problems, and is thus a means of reducing the resistance that results from communication barriers.
Cruickshank, A., & Collins, D. (2012). Change Management: The Case of the Elite Sport Performance Team. Journal of Change Management, 12(2), 209-229.
Hartley, M. (2009). Leading Grassroots Change in the Academy: Strategic and Ideological Adaptation in the Civic Engagement Movement. Journal of Change Management, 9(3), 323-338.
Hendry, C., & Woodward, S. (2004). Leading and Coping with Change. Journal of Change Management, 4(2), 155-183.
Hughes, M. (2011). Do 70% of All Organizational Change Initiatives Really Fail? Journal of Change Management, 11(4), 451-464.
John, G. (2011). Reconsidering Communication and the Discursive Politics of Organizational Change. Journal of Change Management, 11(4), 465-480.
Karriker, J. (2008). Justice as Strategy: The Role of Procedural Justice in an Organizational Realignment. Journal of Change Management, 7(3-4), 329-342.
Richard, S. (2006). A Basis for Competitive Advantage in Firms. Organizational Behavior and Human Decision Processes, 82(10), 150-169.
Spicer, D. (2011). Changing Culture: A Case Study of a Merger Using Cognitive Mapping. Journal of Change Management, 11(2), 245-264.
Community Services Projects
Community service offers opportunities for a crucial self-society linkage in identity construction. In working to help other people in need, adolescents can begin to experience their own agency. They can also begin to ask why people in society live in such different conditions and do not possess the same basic resources. Community service may also lead people to question the political bases of these variations in conditions and the moral positions that would support either the status quo or the reasons for changing it. Most importantly, adolescents who start reflecting in this manner will necessarily consider how they, as individuals, want to take stands on existing ideologies, and so decide whether they will simply live through the present moment of history or take responsibility in the actual remaking of history. Moreover, community service was also developed to enhance cooperation and interaction among people globally. In this respect, this paper discusses a community service project in which I was previously involved and which has had a great impact on my personal life and my decision to attend college.
The community service project I was involved in was volunteering in a hospital for a period of two years. In the hospital, I worked in different areas. First, I helped around the pharmacy. Second, I spent time with young children, making their visits to the hospital a little better, packing small bags for them to take home, and helping move them from room to room. My community service project in the hospital was of great benefit; it allowed me to contribute to the hospital’s development in terms of service provision.
My community service project in the hospital has had a great impact on my life. To start with, the project taught me different ways of handling situations. For instance, a hospital setting brings together various people with disagreements, so the project helped me develop skills for handling disagreements with other people. Second, the community service project in the hospital instilled responsibility in me, as responsibility is one of the most important qualities required of people who deal with patients, such as physicians and nurses. Third, the project made a great impact on my social life, because I learned how to socialize with people, especially the patients and the young children.
Undoubtedly, my participation in a community service project in a hospital had a great impact on my decision to attend college. Before participating in the project, I had no dream whatsoever of attending to patients. However, participation in the project gave me an interest in medicine-related issues. As a result, I decided to attend college to pursue a medicine-related course. Moreover, the project at the hospital involved practical medical tests, which seemed very interesting; this influenced my decision to attend college, as college would act as a platform where more practical medical work could be done.
To sum up, community service projects have influenced individuals in various ways. In my case, the project in the hospital was very beneficial, as it improved my personal and social life. Moreover, community service projects have ended up influencing more people to attend college and to put into practice what they see during those projects.