Job Scheduling Strategies for Parallel Processing: 10th International Workshop, JSSPP 2004, New York, NY, USA, June 13, 2004, Revised Selected Papers
Springer Science & Business Media, Apr 18, 2005 - Computers - 315 pages
This volume contains the papers presented at the 10th Anniversary Workshop on Job Scheduling Strategies for Parallel Processing. The workshop was held in New York City, on June 13, 2004, at Columbia University, in conjunction with the SIGMETRICS 2004 conference. Although it is a workshop, the papers were conference-reviewed, with the full versions being read and evaluated by at least five and usually seven members of the Program Committee. We refer to it as a workshop because of the very fast turnaround time, the intimate nature of the actual presentations, and the ability of the authors to revise their papers after getting feedback from workshop attendees. On the other hand, it was actually a conference in that the papers were accepted solely on their merits as decided upon by the Program Committee. We would like to thank the Program Committee members, Su-Hui Chiang, Walfredo Cirne, Allen Downey, Eitan Frachtenberg, Wolfgang Gentzsch, Allan Gottlieb, Moe Jette, Richard Lagerstrom, Virginia Lo, Reagan Moore, Bill Nitzberg, Mark Squillante, and John Towns, for an excellent job. Thanks are also due to the authors for their submissions, presentations, and final revisions for this volume. Finally, we would like to thank the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), The Hebrew University, and Columbia University for the use of their facilities in the preparation of the workshop and these proceedings.
Parallel Job Scheduling - A Status Report
Scheduling on the Top 50 Machines
Parallel Computer Workload Modeling with Markov Chains
Enhancements to the Decision Process of the Self-Tuning dynP Scheduler
Reconfigurable Gang Scheduling Algorithm
Time-Critical Scheduling on a Well-Utilised HPC System at ECMWF Using LoadLeveler with Resource Reservation
Inferring the Topology and Traffic Load of Parallel Programs Running in a Virtual Machine Environment
Using Additional Communication Links to Improve Utilization of Parallel Computers
Workload Characteristics of a Multicluster Supercomputer
A Dynamic Co-allocation Service in Multicluster Systems
Exploiting Replication and Data Reuse to Efficiently Schedule Data-Intensive Applications on Grids
Performance Implications of Failures in Large-Scale Cluster Scheduling
Are User Runtime Estimates Inherently Inaccurate?
Improving Speedup and Response Times by Replicating Parallel Programs on a SNOW
LOMARC: Lookahead Matchmaking for Multi-resource Coscheduling
Costs and Benefits of Load Sharing in the Computational Grid