Research Division EED/Controls Software

Design Note 128.0

Epicure Performance Benchmark Results

David M. Kline

Introduction

A series of benchmarks was written to evaluate particular aspects of the Epicure control system. The purpose of this paper is to describe the benchmarks, provide source code, and present the results in both tabular and graphical form. Furthermore, this paper is intended to give Epicure system programmers and management at the group and department level data that will facilitate efficient project development and help justify future hardware and software expenditures.

In general, the benchmarks evaluated the control system performance at three different points. The first benchmark measured the speed of memory accesses between the VAX and VMEbus address spaces. The second measured the speed of transaction processing using the Epicure system services which interface the VAX with the VMEbus processors. The final benchmark used the Epicure DA_ services to measure transaction processing; these routines are used by Epicure system programmers, operators, and experimenters who develop data acquisition applications. The data for the first benchmark was acquired and calculated manually. The second and third benchmarks wrote their information to a record-oriented data file. The benchmarks calculated and wrote a variety of statistics for both list execution speed and lists per second. The data written about each test includes the device list size and the number of tests run. The data written about the lists per second and list execution speed includes: deviation, variance, standard deviation, maximum and minimum values, median, mode, mean, data points, and point frequencies. The benchmarks were run for a variety of device list sizes in order to obtain a cross section of how the control system responds under different loads. The data file contains a record for each list size and can be accessed for further analysis when required. The graphs in the appendix were generated from statistics output from the data file and ported to Excel for presentation.

The first of the benchmarks measured the access speeds of the bus architecture presently implemented to communicate messages between VAX and VMEbus processors. In particular, an interface board set is used to bridge a VAXstation 3200 and VMEbus through the Q22-bus architecture. The board set was developed by the Research Division/EED Controls Group and consists of hardware registers used as mapping registers to allow VMS to view VMEbus as memory pages. The benchmark used the QVI_ system services to map VMEbus address space into process virtual address space. Accesses to the VMEbus common memory module were thus virtual block references from the application running on the VAX. The benchmark consisted of two tests which measured the time to perform Programmed I/O (PIO) reads and writes across the Q22-bus interface. One additional test was performed to measure the potential number of memory accesses on the VMEbus. A later section is devoted to presenting the results.

The second benchmark measured the time to process transactions between the VAX and VMEbus processors. A transaction consists of passing data acquisition requests or status messages between the VAX and VMEbus processors using queues residing in the VMEbus common memory. The benchmark used a combination of DB_ and QVI_ services to construct the device list and send and receive the request between the DAE input and output queues. The elapsed time between sending and receiving the message was measured for various device list sizes. From the execution speeds, the number of lists per second could be derived. The results are useful in representing the maximum throughput and minimum execution speed from the perspective of the DAS process; or any future data logging or alarm monitoring facilities. A later section is devoted to presenting the results from this benchmark.

The third benchmark is similar to the second; however, it measured the time to process transactions using the Epicure DA_ services. These services provide a simple and consistent interface for Epicure system programmers, operators, and experimenters who are writing applications for data acquisition. Therefore, this benchmark provides a reasonable representation of the performance that can be achieved from a user perspective. The benchmark used these services to construct a device list and send the request through the various interface services and processes to the VMEbus processors. The elapsed time to process the request and receive the data was measured for various device list sizes. Similarly to the second benchmark, the number of lists per second could be derived. A later section provides the results from the benchmark.

The remainder of this paper focuses on the results of the benchmarks, describing the conditions under which they were executed. A few additional considerations should be mentioned. The front end used to benchmark the control system is a VAXstation 3200 with 16Mb of memory. The interface between the VAX and the VMEbus processors is the QVI-PLUS. The CAMAC crate is located second in the daisy chain, and the device under test is a 007 24-bit input gate. The device template used to create the devices was the T.ADC2. Each CAMAC transfer consisted of 16 bits, and the data were not processed.

Benchmark I: Bus Architecture

Description:
Measure the Programmed I/O (PIO) read and write access times across the QVI-PLUS between the VAX and the VMEbus common memory module. Additionally, measure the potential number of memory transactions obtainable between the VAX and VMEbus.
Conditions:
The tests were completed using a typical CAMAC front end processor running VMS V5.5-1. The tests were run interactively at the default priority of 4. No other users were logged in and no Epicure products were running; only the network processes were running. The tests were implemented in the C language and the source code is provided in the appendix.
Results:
The results are presented in the table below:

Benchmark II: DAE Interface

Description:
Measure the elapsed time to execute device lists of various sizes on a typical CAMAC front end. Calculate statistics and place into a data file.
Conditions:
The test was completed using a typical CAMAC front end running VMS 5.5-1. The test was run interactively at the default priority of 4. No other users were logged in and the network processes were running. The test was implemented in the C language and the source code is provided in the appendix.
Results:
The results are presented in the table below. The data represent the maximum throughput and minimum execution time of a CAMAC front end. Furthermore, the data collected were intended to be compared with the data obtained in ``Benchmark III''; therefore no graphs are provided.

Benchmark III: DA Interface

Description:
Measure the elapsed time to execute device lists of various sizes on various VAX nodes throughout the control system. Calculate statistics and place into a data file.
Conditions:
The test was completed using a typical CAMAC front end and ran on various VAX hardware types running VMS 5.5-1. The test ran in batch mode at the default priority of 4. No other users were logged in. The test was implemented in the C language and the source code is provided in the appendix.
Results:
The results are presented in the tables below. Graphs of the results are displayed in figures 1-4, located in the appendix. Additionally, the source code is provided in the appendix.

Table 1:
The table below indicates the number of lists per second that can be processed for various device list sizes over a variety of VAX hardware types. Refer to figure 1 for a graphical presentation.

Table 2:
This table describes the same information as table 1; however, data from ``Benchmark II'' is included. Figure 2 in the appendix is a graphical representation of the data.

Table 3:
The execution time in milliseconds for various device list sizes on different VAX hardware types is indicated in the table below. Data from ``Benchmark II'' has been included as a reference point. Refer to figure 4 for a graph representing the data.

Table 4:
The table below describes the efficiency of the Epicure control system for various device list sizes over a range of VAX hardware types. Efficiency is defined as (m/p) x 100, where m represents the measured throughput and p represents the maximum throughput. The values in the table are in percent. Figure 3, located in the appendix, presents a graphical representation of the data.

Supplemental Data I: Datasource Logicals

One additional test, not part of the benchmarks proper, concerned the definition of the datasource logical; in particular, the case where the datasource logical is defined to be the same as the source front end. The logical may be defined in two ways, ``NODE::'' or ``0::''. The graphs in figures 5 & 6 show how the device lists per second and the execution times differ between the two definitions.

In analyzing the data, one would conclude that although there is a difference over some portion of the graph, it is not significant enough to mandate that all datasource logicals be defined as ``0::'' when running on the source node. In fact, the maximum difference was only about a 0.9% variance.

Supplemental Data II: Further Studies

This paper provided the information collected from a series of basic benchmarks that evaluated the performance of the Epicure control system. Additional studies are being prepared to examine how the control system performs under a normal load during a fixed-target run. The results from these studies will be included as they become available.

Appendix: Data File Data Structure Definition

struct STATISTICS {
    unsigned int lstsiz;                /* size of list */
    unsigned int runcnt;                /* number of tests executed */
    float dev_lps;                      /* deviation */
    float var_lps;                      /* variance */
    float sdv_lps;                      /* standard deviation */
    float max_lps;                      /* max */
    float min_lps;                      /* min */
    float median_lps;                   /* median */
    struct {                            /* mode */
        int cnt;
        float data[RUN_MAX];
        } mode_lps;
    float mean_lps;                     /* mean */
    struct {                            /* data points */
        char tag;
        float data;
        } pt_lps[RUN_MAX];
    struct {                            /* data points frequency */
        char cnt;
        float data;
        } frq_lps[RUN_MAX];
    float dev_let;                      /* deviation */
    float var_let;                      /* variance */
    float sdv_let;                      /* standard deviation */
    float max_let;                      /* max */
    float min_let;                      /* min */
    float median_let;                   /* median */
    struct {                            /* mode */
        int cnt;
        float data[RUN_MAX];
        } mode_let;
    float mean_let;                     /* mean */
    struct {                            /* data points */
        char tag;
        float data;
        } pt_let[RUN_MAX];
    struct {                            /* data points frequency */
        unsigned char cnt;
        float data;
        } frq_let[RUN_MAX];
};

Appendix: Benchmark I - Source Code

This page was intentionally left blank.

Appendix: Benchmark II - Source Code

This page was intentionally left blank.

Appendix: Benchmark III - Source Code

This page was intentionally left blank.

Appendix: Graphs - Figures 1-6

This page was intentionally left blank.

Appendix: Supplemental Graphs - Figures 7-10

The following pages are graphs showing the data collected from all nodes that were used to evaluate the Epicure control system.

Keywords: Epicure

Distribution: normal

P. Czarapata, MS-220

B. DeMaat, MS-220

