I wrote these small programs mainly to compare the performance of OS/2
- eComStation running on a real machine and within a virtual machine
(twoOStwo and Virtual PC).
Since the time measurement of the benchmark tests is not accurate when
the programs are executed within a virtual machine, I provided some
options to make it easier to measure the time needed to complete the
individual benchmark tests with a real chronometer (i.e. not the PC timer).
The benchmark suite consists of two simple programs:
a command line program which checks the performance of
mathematical operations, file operations and text parsing;
a PM program which checks the performance of the most common graphic
primitives (bitmap, line, rectangle and text drawing)
and of dialog loading, moving and sizing.
All benchmark tests are executed at time-critical priority! You
will not be able to use the machine for any other purpose while a benchmark test is running!
On termination WORKBNCH.EXE completely removes the test path and all
its content, so do not specify a path containing data as the test path!
Just specify the name of a new directory; WORKBNCH.EXE will create it.
You are allowed to use this program as you find suitable.
You are allowed to modify the program source code to suit your needs.
The program, in its current version, may be installed on as many machines
as you like.
You are free to distribute the program provided that you include all
the files in the original archive without any modification.
You are not allowed to sell the program, but you may charge
a reasonable amount to cover the cost of the distribution media.
Under no circumstances will the author be liable for any loss
or damage that may be derived from the use of the program.
MMBS is the copyrighted property of Alessandro Felice Cantatore,
Bitonto, Bari, Italy.
is the path where the program puts all the files and the
directories created during the execution of the various tests.
The default value is the subdirectory TEST in
the current working path. Note:
- the program needs about 512 MB of free space on the drive containing the test path!
- the program takes care of creating the test directory if it doesn't exist.
- the program removes the test directory and all its content on termination!
output file containing a short description of the tests and the result.
The default value is results.txt in the working path.
is a value ranging from 0 to 9 and sets the time needed to complete each test.
With a speed of 0, on a slow machine such
as a Pentium 100 MHz, some tests may take a long time.
It is better to check with the highest speed first and then repeat
with a lower speed to get more accurate results.
The default value is 7.
U (run unattended)
will run all tests without prompting. This is useful when executing
the benchmark on a real machine. You should avoid
this when executing in a virtual machine, as the reported
times are not real, although in some cases they might not be far from
the real ones.
only display test ID, iterations and results (i.e. do not display
the test description).
to show a progress bar. This may be useful if you are running
the benchmark for the first time and are not sure how long it is
going to take.
do not check for free space on the test path. This is mainly
for debugging purposes; you should just ignore it!
run all tests creating the needed files and directories in the
TEST directory in the working path, and saving the results
in the results.txt file.
WORKBNCH T:f:\mydir l:c:\workbnch.log s:9 o:up
run all tests creating the test files and directories in
F:\MYDIR, the test results are logged in C:\WORKBNCH.LOG,
the tests are executed at the maximum speed (s:9), you are not prompted
before executing each test (o:u) and a progress bar is displayed on
the screen (o:p) to show the state of the current test.
Other subcommands are available for debugging purposes. These may
produce unpredictable results (i.e. the program will terminate with an
error message) if they are not used in the correct context.
These commands allow you to run a specific test a specified number of times.
is mandatory and is a number ranging from 1 to 23 (see the
test descriptions below
for more details about the test IDs).
specifies how many times the test operation must be repeated.
The other parameters have been described above.
Timer accuracy test (id: 1)
The purpose of this test is to compare the PC timer with a real world
timer to check the virtual machine's timer accuracy. By the way,
during my tests I found that the virtual machine timers usually lag
by a few tenths of a millisecond with WORKBNCH.EXE, while they may lag
by a few seconds with PMBENCH.EXE. When the 'U' option is entered
(unattended mode) this test is skipped.
Mathematical operations tests (id: 2-4)
The algorithms used by the mathematical tests are quite simple. I
adapted them from an article on the net reporting performance tests
of various programming languages. Unfortunately I do not remember the
original URL, but I do not think that this is a problem, as the code is
so basic that I doubt that anybody would ever claim a copyright
infringement (I just wonder when, in the US, somebody will try to
patent basic arithmetic).
I know that there are much more accurate tests, but I didn't care, as
these tests should be accurate enough for the current purposes (i.e.
comparing virtual and real PCs).
Integer operations (id: 2)
executes the basic integer operations (addition, subtraction,
multiplication and division).
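WORKBNCH.EXE itself is an OS/2 executable and its source is not reproduced here; purely as an illustration, a timed loop of the kind such a test might run could look like the following Python sketch (the names `integer_ops` and `time_test` are hypothetical):

```python
import time

def integer_ops(iterations):
    # One fixed mix of addition, subtraction, multiplication
    # and division per iteration; the accumulator keeps the
    # work from being optimized away.
    acc = 1
    for i in range(1, iterations + 1):
        acc = acc + i
        acc = acc - (i // 2)
        acc = acc * 2
        acc = acc // 2
    return acc

def time_test(func, iterations):
    # Time one benchmark run (the real program uses the PC timer
    # while running at time-critical priority).
    start = time.perf_counter()
    result = func(iterations)
    elapsed = time.perf_counter() - start
    return result, elapsed
```

The speed option (0-9) would simply scale `iterations` up or down.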
Floating point operations (id: 3)
This test works exactly like the previous one, but the operations
are executed on floating point operands.
Trigonometric operations (id: 4)
just executes a series of trigonometric and transcendental operations:
sine, cosine, tangent, logarithm and square root.
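Again as an illustration only (not the program's actual code), the trigonometric test amounts to a loop over the listed functions; a Python sketch might be:

```python
import math

def trig_ops(iterations):
    # Accumulate sine, cosine, tangent, logarithm and square root
    # results so the work cannot be skipped.
    acc = 0.0
    for i in range(1, iterations + 1):
        x = i * 0.001  # stay away from x = 0 for log
        acc += math.sin(x) + math.cos(x) + math.tan(x)
        acc += math.log(x) + math.sqrt(x)
    return acc
```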
File operations tests (id: 5-15)
These tests measure the performance in writing, reading, zipping, unzipping and deleting directories and files of various sizes.
All the files are text files generated by using a dictionary of
pseudo-text words (i.e. words created by randomly concatenating vowels
and consonants) separated by a set of randomly chosen separator strings
(like ", ", "! ", "\r\n", etc.).
All file operations go through the file system cache, as the purpose of
the tests is to compare real machines with virtual ones, not to check
the raw disk performance.
Each test is run multiple times, according to the speed set via the
program options.
Multiple file write (id: 5)
measures the performance by creating 100 directories, each directory
containing 2 subdirectories and each subdirectory containing 32 text files
of sizes ranging from 1 KB to 16 KB. For each iteration
100 directories, 200 subdirectories and 6400 text files are created.
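The tree-creation step can be sketched as follows (a hedged illustration only; the real program writes the 100 × 2 × 32 layout described above, while this sketch takes the tree dimensions as parameters):

```python
import os

def write_tree(root, n_dirs, n_subdirs, n_files, payload=b"x" * 1024):
    # Create n_dirs directories, each with n_subdirs subdirectories,
    # each holding n_files small files; return the number of files written.
    written = 0
    for d in range(n_dirs):
        for s in range(n_subdirs):
            sub = os.path.join(root, f"dir{d:03d}", f"sub{s}")
            os.makedirs(sub, exist_ok=True)
            for f in range(n_files):
                with open(os.path.join(sub, f"file{f:04d}.txt"), "wb") as fh:
                    fh.write(payload)
                written += 1
    return written
```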
Large file write (id: 6)
measures the performance by writing a large text file (128 MB).
The file is written multiple times according to the set speed.
Multiple file read (id: 7)
measures the file read performance by reading multiple files
(i.e. the files created during test 5).
Large file read (id: 8)
measures the file read performance by reading a 128 MB file (i.e.
the file created during test 6).
Multiple file copy (id: 9)
measures the file copy performance by copying a tree of files
containing 25 directories, 50 subdirectories and 1600 files with sizes
ranging from 1 KB to 16 KB.
Large file copy (id: 10)
measures the file copy performance by copying a 128 MB file
(i.e. the file created during test 6).
File tree zip (sequential - id: 11)
measures the file zipping performance by sequentially executing
ZIP.EXE (with the highest compression) for each file contained in a
tree of 2 directories, 4 subdirectories and 128 files.
File tree zip (in one run - id: 12)
measures the file zipping performance by compressing, with the highest
compression rate, a tree of files (-r option) containing 16 directories,
32 subdirectories and 1024 files.
Large file zip (id: 13)
measures the file zipping performance by compressing a 128 MB file
(i.e. the file created during test 6).
File tree unzip (id: 14)
measures the file unzipping performance by unzipping, with the
overwrite flag, the archive created during test 12.
File tree deletion (id: 15)
deletes all the files created during the previous tests, reporting
the elapsed time. The speed option has no effect on this test.
Text string operations (id: 16-23)
These tests measure the performance of various common text string
operations: character statistics, word extraction from a text file,
word sorting, case conversion and word search.
Most tests are performed on a 2 MB text file containing mixed-case
words generated by a special random text generator.
Character statistics test (id: 16)
counts the occurrences of the various characters contained in a
2 MB text file.
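In essence this is a single pass over the file building a per-character frequency table; a minimal Python sketch (the function name is hypothetical) could be:

```python
def char_stats(text):
    # Count the occurrences of every character in the text.
    counts = {}
    for ch in text:
        counts[ch] = counts.get(ch, 0) + 1
    return counts
```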
Word parsing test (id: 17)
extracts the words contained in a 2 MB text file.
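A plausible sketch of such a parser, assuming "word" means a maximal run of alphabetic characters (the real program's definition may differ):

```python
def parse_words(text):
    # Collect runs of letters, treating any non-letter as a separator.
    words, current = [], []
    for ch in text:
        if ch.isalpha():
            current.append(ch)
        elif current:
            words.append("".join(current))
            current = []
    if current:
        words.append("".join(current))
    return words
```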
Word sort test (case insensitive - id: 18)
sorts case-insensitively the words previously extracted from
a 2 MB text file.
Word sort test (case sensitive - id: 19)
sorts case-sensitively the words previously extracted from
a 2 MB text file.
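The difference between the two sort tests is only the comparison used; a sketch covering both (the function name and the use of a lowercasing key are illustrative assumptions, not the program's actual comparator):

```python
def sort_words(words, case_sensitive):
    # Case-sensitive: compare raw character codes.
    if case_sensitive:
        return sorted(words)
    # Case-insensitive: compare the lowercased form of each word.
    return sorted(words, key=str.lower)
```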
Case conversion test (uppercase - id: 20)
converts a 2 MB text file to upper case.
Case conversion test (lowercase - id: 21)
converts a 2 MB text file to lower case.
Word search test (case insensitive - id: 22)
performs a case-insensitive search of a word in a 2 MB text
file counting the occurrences.
Word search test (case sensitive - id: 23)
performs a case-sensitive search of a word in a 2 MB text file
counting the occurrences.
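Both search tests can be sketched with one routine that counts non-overlapping matches, folding case first for the insensitive variant (a hedged illustration; the name `count_occurrences` is hypothetical):

```python
def count_occurrences(text, word, case_sensitive):
    # Count non-overlapping occurrences of word in text.
    if not case_sensitive:
        text, word = text.lower(), word.lower()
    count = start = 0
    while True:
        pos = text.find(word, start)
        if pos < 0:
            return count
        count += 1
        start = pos + len(word)
```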
This program measures the performance of some of the most used graphic
operations: bitmap rendering; drawing lines, rectangles and text;
loading, moving and sizing a dialog.
The program interface consists of a standard PM window with a menu bar
which allows executing a specific test or all tests.
The menu items
Run all tests
executes all the tests, optionally asking the user for confirmation
via a message box.
displays a table with the results (elapsed times,
performed operations, operations per second) of all the executed tests.
sets a common iteration multiplier for all tests. Higher values
produce more accurate results but the tests take longer.
Prompt the user for the next test
is provided to run all the benchmarks in a virtual machine, since
the virtual machine timer usually yields inaccurate results. To properly
execute the tests in a virtual machine you have to use a real
chronometer and check how much time is needed to execute the various tests.
Save the results as:
allows specifying the name of a file where the benchmark
results will be written on program termination.
allows comparing the smoothness of an animation performed on a real
machine with that performed in a virtual machine. Usually the animation
performed in the virtual machine is not as fluid as the one performed
on the real machine.
measures the bitmap drawing performance.
measures the bitmap drawing performance by continuously enlarging
and reducing a bitmap.
draws vertical and horizontal lines of various colours and sizes.
draws filled rectangles of various colours and sizes.
draws a text string in various colours multiple times.
creates and destroys a dialog window containing various controls
(buttons, radiobuttons, multi line edit and listbox) multiple times.
moves a dialog window on the screen (Full window drag
should be enabled in order to produce meaningful results).
changes the size of a dialog window in 1 pixel steps. The dialog
procedure takes care of re-sizing and re-positioning the inner controls
so that a lot of calculations are performed for each size change.