Validity Checking of SWIC Data
R. West, T. Watts, D. Baddorf
August 7, 1991
All data from all SWICs currently in the experimental beamline areas may be read by the EPICURE system and pooled for user access. The name of the EPICURE process which performs this data collection is SWICDP. A version of SWICDP executes on each of the CAMAC frontend VAX computers (HUEY, DEWEY, and LOUIE) and pools on that frontend all the SWIC data for the associated beamline. At this time, the RDCS$DATASOURCE logicals on the DISNEY cluster define the association between beamline and VAX node to be the following:
    Beamline    VAX Node
    --------    --------
    MESON       DEWEY
    NEUTRINO    LOUIE
    PROTON      HUEY

If the definitions are different on your system, ask your system administrator to redefine the RDCS$DATASOURCE logicals to match the ones on DISNEY. If you still have difficulties after this, contact us for further assistance.
A user-written program uses the normal EPICURE data acquisition calls to read this pooled data. The structure of the returned SWIC data is specified in the SWIC_USER and SWIC_CM header files, located in the EPICURE_INC and EPICURE_SYS_INC directories, respectively.
If data collection on a particular VAX frontend node fails for any reason, data for a simple CAMAC device such as a power supply can be transparently obtained by the EPICURE system from one of the other CAMAC frontend nodes. This transfer of data collection is referred to as failover. However, there is no failover for SWIC data: the SWICDP process executing on a given frontend only collects data for the associated beamline's SWICs. In fact, data collection failover by the upper levels of the EPICURE data acquisition system actually creates an error situation in the case of SWIC data. Assume the VAX node which was performing data collection for the MESON beamline is out of service due to a hardware problem. EPICURE then redirects the data collection for all MESON devices, including SWICs, to one of the other frontend machines, say the NEUTRINO frontend. The data returned for a requested MESON SWIC will then actually be the data of the NEUTRINO SWIC which has the same channel number as the requested MESON SWIC. To protect against this error situation, the user of the SWIC data must check the SWIC name which is returned as part of the data. Multiple names may be assigned to the same SWIC hardware module, so an exact name match cannot be expected. The first alphabetic character of the returned name and of the expected name should match, however, because the first letter of an EPICURE device name indicates the beamline (M, N, or P). A mismatch indicates that data is being obtained from the wrong frontend VAX.
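The beamline check described above can be sketched as follows. This fragment is illustrative only: the helper name is ours, and the actual layout of the name field in the returned data is defined by the SWIC_USER header file. The only fact assumed from this note is that the first letter of an EPICURE device name (M, N, or P) identifies the beamline.

```c
#include <ctype.h>
#include <stddef.h>

/* Illustrative sketch: compare the beamline letter of the SWIC name
   returned in the data against the name of the device the user
   actually requested.  Returns 1 if the beamline letters match
   (data came from the correct frontend), 0 otherwise. */
static int swic_beamline_ok(const char *returned_name,
                            const char *expected_name)
{
    if (returned_name == NULL || expected_name == NULL)
        return 0;

    /* Only the first letter is compared: an exact name match cannot
       be expected, since multiple names may be assigned to the same
       SWIC hardware module. */
    return toupper((unsigned char)returned_name[0]) ==
           toupper((unsigned char)expected_name[0]);
}
```

A mismatch here would mean the request was transparently failed over to a frontend serving a different beamline, and the data should be discarded.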
SWIC data returned to a user is read from the data pool, not directly from the SWIC module. The SWICDP process reads the SWIC hardware each cycle after T6 and updates the data pool before T1. If the SWICDP process stops executing for any reason, the data pool is not updated, and any data read from it will be ``old''. The SWIC data contains timestamps which the user must check to make sure the data is ``current'' and has been updated since the previous read. Due to various load factors, the SWICDP process occasionally fails to fully update the data pool by the start of the next cycle; on average, this occurs less than once per hour. By comparing the timestamps contained in the data, the user can detect that a read performed this cycle has returned the same data as a read from a previous cycle.
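The two timestamp checks above (has the pool been updated since my last read, and is the data recent at all) might be sketched as below. The structure and field name here are assumptions for illustration; the real timestamp fields and their encoding are defined in the SWIC_USER header file.

```c
#include <time.h>

/* Hypothetical stand-in for the timestamp portion of the returned
   SWIC data; consult SWIC_USER for the actual layout. */
typedef struct {
    time_t update_time;   /* assumed: when SWICDP last wrote the pool */
} swic_stamp;

/* Returns 1 if the data is both new (pool updated since the previous
   read) and current (updated within max_age_seconds of now); 0 if the
   data is a stale repeat or SWICDP appears to have stopped updating. */
static int swic_data_fresh(const swic_stamp *this_read,
                           const swic_stamp *prev_read,
                           time_t now,
                           long max_age_seconds)
{
    /* Pool not updated since the previous read: same cycle's data. */
    if (this_read->update_time == prev_read->update_time)
        return 0;

    /* Data too old: SWICDP may have stopped executing. */
    if ((long)(now - this_read->update_time) > max_age_seconds)
        return 0;

    return 1;
}
```

A reasonable policy is to log and discard any read that fails either test rather than processing it as beam data.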
The SWIC data also contains hardware and software status information which should be checked by the user to ensure that only valid data is processed.
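As a sketch, such a status check might look like the following. The status word and bit masks shown are purely illustrative assumptions; the real hardware and software status fields and their meanings are defined in the SWIC_USER and SWIC_CM header files.

```c
/* Assumed, illustrative status bits -- NOT the real definitions,
   which are given in the SWIC_USER / SWIC_CM header files. */
#define SWIC_STAT_HW_FAULT  0x0001u   /* assumed hardware-fault bit  */
#define SWIC_STAT_SW_ERROR  0x0002u   /* assumed software-error bit  */

/* Returns 1 if neither the (assumed) hardware nor software error
   bits are set in the status word, 0 otherwise. */
static int swic_status_ok(unsigned status)
{
    return (status & (SWIC_STAT_HW_FAULT | SWIC_STAT_SW_ERROR)) == 0;
}
```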
At this time, RDEE/Controls cannot guarantee SWIC data collection. We have agreed to make a best effort to collect the first five scans from all SWICs defined on each beamline. Currently, a maximum of 39 SWIC modules is in use on any one beamline, and all ten scans of data are being collected from each SWIC on each beamline. However, these numbers are subject to change if system load problems are observed with increased usage. In the future, we hope to alleviate the existing limitations, but doing so requires funding of special hardware. Until that time, the guidelines specified in this note must be followed to ensure proper interpretation of SWIC data.
Distribution: Normal
    Richard Ford     M.S. 219
    Sam Childress    M.S. 220