The Fifth DIMACS Challenge -- Priority Queue Tests
Note: This page describes the original set of core experiments (and links) for participants in the Priority Queue part of DIMACS Challenge 5. After the Challenge Workshop, and after much email discussion, we developed a final set of core experiments, with some slight changes and additions to the original set. The final set of experiments, which includes a new generator, is described in this postscript document.
This page describes the set of core experiments for participants in the
Priority Queue part of DIMACS Challenge 5.
We expect you to perform more extensive experiments when addressing your own research questions, and we encourage you to follow up on interesting questions that arise from the tests listed here.
Send Cathy a note if you have
questions or need help.
- The test driver. To run the tests, you need a little driver program that invokes procedures according to commands presented in an input file.
- The specifications document describing the driver and the input file formats is available in postscript, or you can get a LaTeX version together with a required macro file.
- If you are implementing in C, a driver program is available from DIMACS. You can get the compressed tar file. The documentation explains how to combine the driver with your priority queue implementation.
- With the compile-time switch #define DETAIL 1, the driver prints a trace of its procedure invocations. Each output line shows the command and its parameters, followed by what the procedure returned, with the two parts separated by a colon. Be sure to run some tests with output traces to check the correctness of the implementation and the interface. Remove the #define DETAIL line to turn off this output.
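As a rough illustration, here is a minimal sketch of this style of compile-time tracing. It is not the DIMACS driver's code: the command name "dmn", the pq_delete_min() interface, and the wrapper function are assumptions for illustration only.

    #include <stdio.h>

    #define DETAIL 1                 /* remove this line to turn tracing off */

    extern long pq_delete_min(void); /* your procedure, under an assumed name */

    static void do_delete_min(void)
    {
        long returned = pq_delete_min();
    #ifdef DETAIL
        /* one trace line per invocation: the command and its parameters,
         * then a colon, then whatever the procedure returned */
        printf("dmn : %ld\n", returned);
    #endif
    }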
- General testing procedures. Below we describe 8 test problems, each involving 3 (or sometimes 9) problem sizes. In addition to running these problems for each implementation you are testing, you should do the following.
- Find out whether your implementation's performance is sensitive
to the size of the info part in each item.
If it is, you should run two sets of tests, using
in_type = str4 and in_type = str128. If it isn't,
just use the smaller in_type. (You'll have to change the
driver for this as well).
- Choose a few combinatorial performance measures that are most closely associated with running times (such as the number of nodes touched), and instrument your data structure to report those counts for each test problem. (One way to compile such counters in and out is sketched after this list.)
- Also report user runtimes for each problem. Since you don't want the cost of counting included in your runtimes, recompile without the combinatorial counting code before timing. Use the same random seeds in the input, though, so that the timed runs are identical to the counted ones.
- When reporting runtimes, you may run the driver and data structure with all output suppressed. To make running times meaningful, also report the machine type, the operating system and version number, the compiler and version number, and the optimization level. We strongly urge you to get in touch with other participants to swap implementations for tests on different machines.
- For the middle problem size in each Random test,
run the demo priority queue provided with the
DIMACS driver program,
and report the runtime. We hope this will maximize comparability
across different environments.
- You are welcome to run extra random trials to improve your
estimate of mean cost. For each quantity reported, state the
number of random trials, the mean, the min,
and the max values found.
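The sketch below shows one way to handle the counting code with a compile-time switch, so the counters vanish entirely from the timed runs. The switch name COUNTS, the counter, and the routine are our own illustrative assumptions, not something the Challenge requires.

    #include <stdio.h>

    #ifdef COUNTS                    /* compile with -DCOUNTS for counted runs */
    static long nodes_touched = 0;
    #define TOUCH() (nodes_touched++)
    #else
    #define TOUCH() ((void) 0)       /* compiles away in the timed runs */
    #endif

    void heap_sift_down(int node)    /* hypothetical data structure routine */
    {
        (void) node;                 /* body elided in this sketch */
        TOUCH();                     /* count each node the operation visits */
    }

    void report_counts(void)
    {
    #ifdef COUNTS
        printf("nodes touched: %ld\n", nodes_touched);
    #endif
    }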
- Tests with Dijkstra's algorithm (3 problems x 3 problem sizes). These tests trace the priority queue calls made by Dijkstra's shortest-path algorithm, which is run on three kinds of graphs: one produces relatively short queues, one produces long queues, and one maximizes the number of decrease-key operations that Dijkstra's algorithm performs. (The sketch below shows the pattern of queue calls the algorithm generates.) These programs are a subset of SPLIB, provided for this use by Andrew Goldberg and modified by C. McGeoch.
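For reference, here is a sketch of Dijkstra's algorithm viewed as a priority queue client. The interface names (pq_insert, pq_decrease_key, pq_delete_min, pq_empty) and the graph representation are assumptions for illustration; this is not the SPLIB code.

    #include <limits.h>

    typedef struct edge { int to; long len; struct edge *next; } edge;

    extern edge *adj[];   /* adj[v]: list of edges out of vertex v */
    extern long  dist[];  /* tentative shortest-path distances */

    void pq_insert(int v, long key);
    void pq_decrease_key(int v, long key);
    int  pq_delete_min(void);   /* returns the vertex with the smallest key */
    int  pq_empty(void);

    /* With nonnegative edge lengths, a vertex's distance is final once it
     * leaves the queue, so decrease_key is never applied to a removed vertex. */
    void dijkstra(int source, int n)
    {
        for (int v = 0; v < n; v++)
            dist[v] = LONG_MAX;
        dist[source] = 0;
        pq_insert(source, 0);
        while (!pq_empty()) {
            int u = pq_delete_min();
            for (edge *e = adj[u]; e; e = e->next) {
                long d = dist[u] + e->len;
                if (d < dist[e->to]) {
                    if (dist[e->to] == LONG_MAX)
                        pq_insert(e->to, d);        /* vertex reached first time */
                    else
                        pq_decrease_key(e->to, d);  /* a shorter path was found */
                    dist[e->to] = d;
                }
            }
        }
    }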
- Tests using Nagamochi-Ibaraki (1 problem x 3 problem sizes). These tests generate priority queue traces from the Nagamochi-Ibaraki minimum-cut algorithm, which is run on one kind of random graph. This algorithm tends to produce a large number of decrease-key operations. Thanks to Andrew Goldberg et al., who provided the code for this use.
- Random Tests (4 problems x 9 problem sizes). These tests use a generator of 'random' priority queue calls. The generator takes several parameters that structure the operation sequence. One test is a sorting application; the other three exercise various combinations of priority queue operations. (A sketch of this style of generator appears below.) The generator was written by Ben Chang with modifications by C. McGeoch.
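To give a flavor of what such a generator does, here is a minimal sketch. It is not the Challenge's generator: its parameters and the command mnemonics it prints are placeholders, and the real input file format is the one defined in the specifications document.

    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        long ops  = (argc > 1) ? atol(argv[1]) : 1000; /* length of the sequence */
        int  pins = (argc > 2) ? atoi(argv[2]) : 60;   /* percentage of inserts */
        long seed = (argc > 3) ? atol(argv[3]) : 1;    /* same seed, identical run */
        long live = 0;                                 /* items now in the queue */

        srand((unsigned) seed);
        for (long i = 0; i < ops; i++) {
            if (live == 0 || rand() % 100 < pins) {
                printf("insert %d\n", rand());         /* placeholder syntax */
                live++;
            } else {
                printf("deletemin\n");
                live--;
            }
        }
        return 0;
    }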
- Random Monotone Tests. We are developing a generator of random monotone priority queue operations. In a monotone priority queue, every inserted priority is at least as large as the current minimum in the queue; Dijkstra's algorithm, for example, produces a monotone sequence. The generator will be available soon. (The sketch below shows the constraint such a generator must maintain.)
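Until it appears, the constraint itself is easy to state in code. In this sketch, pq_find_min() is a hypothetical query of the queue's current minimum; the only point is that each new priority is drawn at or above it.

    #include <stdlib.h>

    extern long pq_find_min(void);   /* hypothetical; assume 0 on an empty queue */

    /* choose an insertion priority that keeps the sequence monotone:
     * never below the current minimum in the queue */
    long monotone_priority(void)
    {
        return pq_find_min() + rand() % 1000;
    }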
- Other Tests. Participants are expected to perform
more extensive tests than those listed here. You can use these problem
generators with other parameter settings, or you can invent your own problems.
You can perform more random trials with these problems, to improve
the estimate of mean behavior. Go crazy.
This page is maintained by C.C. McGeoch, who can be reached at ccm@cs.amherst.edu.