Work Queue User's Manual

Last Updated May 2012

Work Queue is Copyright (C) 2009 The University of Notre Dame. This software is distributed under the GNU General Public License. See the file COPYING for details.

Overview

Work Queue is a framework for building master/worker applications. In Work Queue, a Master process is a custom, application-specific program that uses the Work Queue API to define and submit a large number of small tasks. The tasks are executed by many Worker processes, which can run on any available machine. A single Master may direct hundreds to thousands of Workers, allowing users to easily construct highly scalable programs.

Work Queue is a stable framework that has been used to create highly scalable scientific applications in biometrics, bioinformatics, economics, and other fields. It can also be used as an execution engine for the Makeflow workflow system.

Work Queue is part of the Cooperating Computing Tools (CCTools). You can download the CCTools from this web page, follow the installation instructions, and you are ready to go. From the same website, or from within the CCTools distribution, you can view documentation for the full set of features of the Work Queue API.

Building and Running Work Queue

Let's begin by running a simple but complete example of a master and a worker. After trying it out, we will then show how to write a program from scratch.

We assume that you have already downloaded and installed the cctools in the directory ${CCTOOLS}. Next, download the example program for the language of your choice (C, Python, or Perl).

If you are using the C example, compile it like this:
gcc work_queue_example.c -o work_queue_example -I${CCTOOLS}/include/cctools -L${CCTOOLS}/lib -ldttools -lm
If you are using the Python example, set PYTHONPATH to include the Python modules in cctools (adjust python2.6 below to match the version of Python your cctools was built against):
export PYTHONPATH=${PYTHONPATH}:${CCTOOLS}/lib/python2.6/site-packages
If you are using the Perl example, set PERL5LIB to include the Perl modules in cctools:
export PERL5LIB=${PERL5LIB}:${CCTOOLS}/lib/perl5/site_perl
This example program simply compresses a bunch of files in parallel. List the files to be compressed on the command line. Each will be transmitted to a remote worker, compressed, and then sent back to the master. (This isn't necessarily faster than doing it locally, but it is easy to run.) For example, to compress files a, b, and c, run this:
./work_queue_example a b c
You will see this right away:
listening on port 9123...
submitted task: /usr/bin/gzip < a > a.gz
submitted task: /usr/bin/gzip < b > b.gz
submitted task: /usr/bin/gzip < c > c.gz
waiting for tasks to complete...
The master is now waiting for workers to connect and begin requesting work. (Without any workers, it will wait forever.) You can start one worker on the same machine by opening a new shell and running:
work_queue_worker MACHINENAME 9123
(Obviously, substitute the name of your machine for MACHINENAME.) If you have access to other machines, you can log in via ssh and run workers there as well. In general, the more workers you start, the faster the work gets done. If a worker should fail, the Work Queue infrastructure will retry the work elsewhere, so it is safe to submit many workers to an unreliable system.
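For example, to start a worker on another machine (here, othermachine is a hypothetical host you can log into, with cctools installed in your PATH there):

 ssh othermachine work_queue_worker MACHINENAME 9123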

If you have access to a Condor pool, you can use this shortcut to submit ten workers at once via Condor:

% condor_submit_workers MACHINENAME 9123 10 
Submitting job(s).......... 
Logging submit event(s).......... 
10 job(s) submitted to cluster 298.
Or, if you have access to an SGE cluster, do this:
% sge_submit_workers MACHINENAME 9123 10 
Your job 153083 ("worker.sh") has been submitted 
Your job 153084 ("worker.sh") has been submitted 
Your job 153085 ("worker.sh") has been submitted 
...

When the master completes, any workers that were not shut down by the master will still be running, so you can either run another master with the same workers, or remove the workers with kill, condor_rm, or qdel as appropriate. If you forget to remove them, they will exit automatically after fifteen minutes of idleness. (This timeout can be adjusted with the -t option to work_queue_worker.)
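For example, to start a worker that waits up to one hour (3600 seconds) for work before giving up and exiting:

 work_queue_worker -t 3600 MACHINENAME 9123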

Writing a Master Program

To write your own program using Work Queue, begin with the C, Python, or Perl example as a starting point. Here is a basic outline for a Work Queue master:
q = work_queue_create(port);

for(all tasks) {
    t = work_queue_task_create(command);
    /* add input and output files to the task */
    work_queue_submit(q,t);
}

while(!work_queue_empty(q)) {
    t = work_queue_wait(q,5);
    if(t) work_queue_task_delete(t);
}

work_queue_delete(q);
First create a queue that is listening on a particular TCP port:

C/Perl

 q = work_queue_create(port);

Python

 q = WorkQueue(port)
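In C, work_queue_create returns a null pointer if it cannot listen on the requested port (for example, if the port is already in use), so it is wise to check the result before continuing:

 q = work_queue_create(port);
 if(!q) {
     fprintf(stderr,"couldn't listen on port %d\n",port);
     return 1;
 }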
The master then creates tasks to submit to the queue. Each task consists of a command line to run and a statement of what data is needed and what data will be produced by the command. Input data can be provided in the form of a file or a local memory buffer. Output data can be provided in the form of a file or the standard output of the program. You must also specify whether each file, input or output, should be cached at the worker site for later use. In the example, we specify a command that takes a single input file, produces a single output file, and requires both files to be cached:

C/Perl

 t = work_queue_task_create(command);  
 work_queue_task_specify_file(t,infile,infile,WORK_QUEUE_INPUT,WORK_QUEUE_CACHE); 
 work_queue_task_specify_file(t,outfile,outfile,WORK_QUEUE_OUTPUT,WORK_QUEUE_CACHE);

Python

 t = Task(command) 
 t.specify_file(infile,infile,WORK_QUEUE_INPUT,cache=True)  
 t.specify_file(outfile,outfile,WORK_QUEUE_OUTPUT,cache=True) 
If a file does not need to be cached at the execution site (for example, to avoid wasting storage on the worker), specify it like this:

C/Perl

 work_queue_task_specify_file(t,outfile,outfile,WORK_QUEUE_OUTPUT,WORK_QUEUE_NOCACHE);

Python

 t.specify_file(outfile,outfile,WORK_QUEUE_OUTPUT,cache=False) 
You can also run a program that is not necessarily installed at the remote location by specifying it as an input file. If the program is installed on the local machine, then give its full local path and a plain remote path. For example:

C/Perl

 t = work_queue_task_create("./my_compress_program < a > a.gz");  
 work_queue_task_specify_file(t,"/usr/local/bin/my_compress_program","my_compress_program",WORK_QUEUE_INPUT,WORK_QUEUE_CACHE); 
 work_queue_task_specify_file(t,"a","a",WORK_QUEUE_INPUT,WORK_QUEUE_CACHE); 
 work_queue_task_specify_file(t,"a.gz","a.gz",WORK_QUEUE_OUTPUT,WORK_QUEUE_CACHE); 

Python

 t = Task("./my_compress_program < a > a.gz")  
 t.specify_file("/usr/local/bin/my_compress_program","my_compress_program",WORK_QUEUE_INPUT,cache=True)
 t.specify_file("a","a",WORK_QUEUE_INPUT,cache=True) 
 t.specify_file("a.gz","a.gz",WORK_QUEUE_OUTPUT,cache=True) 
Once a task has been fully specified, it can be submitted to the queue where it gets assigned a unique taskid:

C/Perl

 taskid = work_queue_submit(q,t);

Python

 taskid = q.submit(t)
Next, wait for a task to complete, stating how long you are willing to wait for a result, in seconds. (If no tasks have completed by the timeout, work_queue_wait will return null.)

C/Perl

 t = work_queue_wait(q,5);

Python

 t = q.wait(5)
A completed task will have its output files written to disk. You may examine the standard output of the task in t->output and the exit code in t->return_status. When you are done with the task, delete it:

C/Perl

 work_queue_task_delete(t);

Python

 Deleted automatically when task object goes out of scope
Continue submitting and waiting for tasks until all work is complete. You can verify that the queue is empty with work_queue_empty. When all work is done, delete the queue:

C/Perl

 work_queue_delete(q);

Python

 Deleted automatically when work_queue object goes out of scope
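Putting the pieces together, here is a minimal sketch of a complete C master, modeled on the compression example from earlier in this manual. (WORK_QUEUE_DEFAULT_PORT is the standard port, 9123; error handling is kept brief.)

 #include "work_queue.h"

 #include <stdio.h>

 int main(int argc, char *argv[])
 {
     struct work_queue *q;
     struct work_queue_task *t;
     int i;

     /* Create a queue listening on the default port (9123). */
     q = work_queue_create(WORK_QUEUE_DEFAULT_PORT);
     if(!q) {
         fprintf(stderr, "couldn't create queue\n");
         return 1;
     }

     /* Submit one gzip task per file named on the command line. */
     for(i = 1; i < argc; i++) {
         char command[1024], outfile[1024];
         snprintf(outfile, sizeof(outfile), "%s.gz", argv[i]);
         snprintf(command, sizeof(command), "/usr/bin/gzip < %s > %s", argv[i], outfile);

         t = work_queue_task_create(command);
         work_queue_task_specify_file(t, argv[i], argv[i], WORK_QUEUE_INPUT, WORK_QUEUE_CACHE);
         work_queue_task_specify_file(t, outfile, outfile, WORK_QUEUE_OUTPUT, WORK_QUEUE_CACHE);
         work_queue_submit(q, t);
     }

     /* Wait for tasks to return, reporting each result. */
     while(!work_queue_empty(q)) {
         t = work_queue_wait(q, 5);
         if(t) {
             printf("task %d exited with status %d\n", t->taskid, t->return_status);
             work_queue_task_delete(t);
         }
     }

     work_queue_delete(q);
     return 0;
 }

It can be compiled with the same gcc command shown earlier for work_queue_example.c.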
Full details of all of the Work Queue functions can be found in the Work Queue API.

Advanced Usage

The technique described above is suitable for distributed programs of tens to hundreds of workers. As you scale your program up to larger sizes, you may find a number of additional features helpful. All are described in the Work Queue API.

For More Information

For the latest information about Work Queue, please visit our web site and subscribe to our mailing list.