Chapter 5. SAFEGUARD SYSTEM EVALUATION

System evaluation attempts to estimate total system performance in order to provide confidence that the overall system objectives will be achieved. Executing this task in the SAFEGUARD program involved two major phases of activity.

The first phase focused attention on the performance and functional requirements specified by the tactical system designers and documented in the performance specification for the hardware and in the Data Processing System Performance Requirements (DPSPRs) 1-2 for the software. The goal of this phase was to verify the viability of the overall system design and to discover and correct any inadequacies in system operation as early as possible. Another objective was to identify critical aspects of system performance to assure that appropriate emphasis was reflected in system requirements and that appropriate test programs and associated test requirements were identified and planned.

The second phase broadened the scope of the evaluation effort to ensure that the design implementation satisfied the design requirements. During this period, system and subsystem models were modified and validated using data collected from field and laboratory tests. Corresponding adjustments were made to estimates of system capability.

Obviously, the SAFEGUARD System evaluation was an evolutionary activity, proceeding from system requirements to system implementation. The objective was to obtain adequate data to permit a confident assessment of system capability in the most cost-effective way. This was accomplished by identifying the appropriate function of each of the available sources of data and establishing how the various simulations, subsystem tests, and system tests would supplement each other.

APPROACH TO SYSTEM EVALUATION

SAFEGUARD System capability was evaluated primarily through simulation, with test data serving to validate the simulation modeling and add confidence to the evaluation results. The hierarchy of simulation tools developed ranged from a total system simulation capable of indicating the response of all system elements during a typical attack to much more detailed simulations of critical system functions such as battle planning, data gathering (target track), and intercept (guidance and missile performance).

Figure 5-1 diagrams the primary interfaces among the design, evaluation, and test activities.


Figure 5-1. Effectiveness Evaluation Procedure

Evaluation, primarily an analytic activity, relied heavily on simulation because of the complex nature of the SAFEGUARD System. Hence, in Figure 5-1, evaluation is represented by the major simulations developed. Three sources of test data were relevant to system evaluation activity and played a major role in simulation validation:

• Meck System Test Program
• Tactical Software Control Site (TSCS), Madison, New Jersey
• Tactical Sites — Grand Forks Air Force Base, North Dakota, and Colorado Springs, Colorado.

For tests at Meck Island, Kwajalein Atoll, "live" ICBM and SLBM targets were available and were tailored to the program requirements and engaged by defensive missiles. Meck data was particularly valuable with respect to the SAFEGUARD target-track and intercept functions and to performance of the Missile Site Radar (MSR) and the SPRINT and SPARTAN missiles. This test site, however, could not provide all required test data nor a complete test of SAFEGUARD System operation for the following reasons:

• Impracticality of providing threat traffic commensurate with the design threat — many aspects of system performance could not be stressed at low traffic levels.
• Absence of nuclear effects — intercept planning, for example, is designed to minimize the effects of self-blackout. The adequacy of this function could not be tested in a blackout-free environment.
• Lack of critical system elements — the Meck test facility did not include a Perimeter Acquisition Radar (PAR) and because of the lack of nuclear effects and high traffic, the data processing software did not include battle-planning functions.
• Expense of live testing — the cost associated with a single live test limited the number and variety of tests that could be conducted.

The Meck System Test Program is discussed more completely later in this chapter.

Tests conducted at TSCS were a major source of data for simulation validation, particularly in those areas not covered by the Meck System Test Program. Use of the System Exerciser (SYSEX) to provide high traffic and a simulated nuclear environment made data available for evaluating several system functions, including battle planning and target selection. Because TSCS included both the PAR and MSR data processing systems, it provided data in the netted-system environment not available from the Meck System Test Program.

The Meck test facility did not include a prototype PAR; hence, tests conducted at the tactical PAR site provided the necessary PAR performance data. In addition, system tests initially conducted at TSCS were repeated at the tactical sites, supplementing the system performance data gathered at TSCS and used in simulation validation.

The remainder of this chapter describes the major simulation tools developed during the evaluation program, the procedures by which these simulations were validated using test data, the key analyses and results obtained with the simulations during the evaluation program, and details of plans and results of system testing at Meck and at Grand Forks. Also included is a description of the system integration and demonstration tests conducted at TSCS and Grand Forks for system certification and customer acceptance.

MAJOR SIMULATIONS

Several simulations were developed to assist in evaluating the system. These simulations proved to be valuable tools in the overall evaluation program.

System Simulation

The complexity of the SAFEGUARD System and the complex interactions between functions made it difficult to predict the overall system response and effectiveness during a typical attack. The SAFEGUARD System Simulation (SAFSIM/TACSAF), which runs on the HIS 635 computer, was developed to evaluate system response and to determine system effectiveness over the spectrum of threat conditions. Two versions of the simulation were developed. The first, SAFSIM, was used early in the evaluation program to evaluate the DPSPRs. SAFSIM consists of models of each of the system functions designed to meet the performance requirements, unconstrained by any real-time data processing limitations. A second version of the simulation, TACSAF, has models representing the "as implemented" tactical capability and incorporates knowledge gained from the various test programs, in particular the Meck System Test Program. The major elements of TACSAF are listed in Table 5-1.3

Table 5-1
Major Elements of the SAFEGUARD System Simulation (TACSAF)

TRAJECTORY GENERATOR

• Target Reentry Vehicles (RVs) — Input RV ballistic coefficient history, launch point, impact point, reentry angle, launch or impact time
• Tank, Junk Dispersion — Input ΔVs relative to RV velocity

SURVEILLANCE/DETECTION MODEL

• Threat Characteristics — Radar Cross-Section (RCS) models, vehicle dynamic motions
• MSR Search Sectors

MSR TRACK

• MSR Weapon Process (MW) Data Gathering Logic — Data rates, track priorities, Automatic Gain Control (AGC), waveform switch
• Target Selection
• Discrimination

ENVIRONMENTAL MODELS

• Nuclear Blackout/Attenuation
• Wake Effects
• Tank-Breakup Effects

RESOURCE ALLOCATION/OVERLOAD RESPONSE

• Dynamic Accounting of MSR and Missile Site Data Processor (MSDP) Loads
• MW Logic for Radar and Data Processor (DP) Overload Response

INTERCEPT PLANNING (SPRINT Interceptor Response)

• Farm Selection Logic
• Constraint Resolution
   - Nuclear Models — Blackout, radiation, shock, dust
   - SPRINT Deadzone
   - Wake Avoidance
• SPRINT Capability (flyout, acceleration)

INTERCEPT ASSESSMENT

• Miss Distance Determination (True and Estimated)
• Interceptor Lethality Contours

PENETRATOR EFFECTS

• Facility Damage — Shock, dust
• Nuclear Blackout

The level of detail included in the various functional models varies over a wide range. For example, the MSR track function is not a closed loop as in the MSR Simulation (MSRSIM)4 discussed next, but uses a statistical error model to represent pulse-by-pulse variations in the track data provided to the track filter. The statistical model was developed using results from MSRSIM and includes Signal-to-Noise Ratio (S/N) effects and the effects of the wake and tank-breakup environments. Similarly, the SPRINT intercept model is greatly simplified because guidance algorithms are not included. SPRINT miss distance is estimated as a statistical combination of target-track and missile-track uncertainties amplified by a factor that depends on intercept geometry. This simplified model was based on extensive simulation of the guidance loop using SES/SIS (see SPRINT Engagement Simulation), which in turn was validated in connection with the Meck System Test Program.
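As an illustration of the simplified model just described, the sketch below combines hypothetical target-track and missile-track uncertainties root-sum-square and scales the result by a geometry-dependent factor. This is a minimal sketch in Python; the function name, the values, and the form of the geometry factor are assumptions for illustration, not the TACSAF model.

    import math

    def estimated_miss(sigma_target, sigma_missile, geometry_factor):
        # Root-sum-square of track uncertainties, amplified by a factor
        # that depends on intercept geometry (illustrative placeholder).
        return geometry_factor * math.sqrt(sigma_target**2 + sigma_missile**2)

    # Example: 5 m and 3 m (1-sigma) track uncertainties, benign geometry.
    print(estimated_miss(5.0, 3.0, geometry_factor=1.4))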

Conversely, the SPRINT Interceptor Response (IR) function essentially duplicates the tactical algorithms, as does the Target Selection Function (TSF). The SPRINT IR and TSF modules can both be exercised as part of the total simulation or can be exercised in a "stand-alone" package for separate response analyses.

To achieve reasonable fidelity, TACSAF is necessarily an extremely large and complex simulation. Some list-handling and dynamic-storage techniques not commonly found in FORTRAN simulations assisted greatly in program organization. Also, TACSAF is event-based to minimize computer resource usage. This means that time is not the independent variable; rather, a time-ordered list of significant events is processed. No computations are performed between significant events, and time is merely updated to correspond to the time of the event under consideration (a minimal sketch of this event-processing scheme follows the summary list below). As an aid to analysis, convenient summaries can be produced by processing the data recorded on magnetic tape during a TACSAF run. A partial list of the summaries currently available in TACSAF includes:

• Overview of Significant Events - Object-by-Object
• Detection Summary
• MSR Target Selection Function
• MSR Discrimination
• Clutter Effects Summary
• SPRINT IR Planning Summary
• SPRINT Engagement Summary
• Resource Allocation Summary - Radar and DP Loads versus Time.

In addition to tabulations of the summary data, plot routines are available for several of the summaries.
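The event-processing scheme referred to above can be pictured with a short sketch. This is a generic discrete-event loop in Python, not TACSAF code; the event names and times are invented for illustration.

    import heapq

    events = []  # min-heap of (time, event type, object), ordered by time
    heapq.heappush(events, (0.0, "detect", "RV-1"))
    heapq.heappush(events, (12.5, "track_update", "RV-1"))
    heapq.heappush(events, (40.0, "intercept", "RV-1"))

    clock = 0.0
    while events:
        # Time is not stepped uniformly; it jumps to the next significant event.
        clock, kind, obj = heapq.heappop(events)
        print(f"t={clock:6.1f}s  {kind:12s}  {obj}")
        # An event handler may schedule follow-on events, e.g.:
        # heapq.heappush(events, (clock + 5.0, "track_update", obj))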

TACSAF was used for a wide range of system evaluation activities, including evaluation of the system traffic-handling capability and overload response and a detailed evaluation of the battle-planning functions. TACSAF also provided an extensive data base of expected results for the set of System Integration and Demonstration tests conducted at both the TSCS and the tactical site. The performance criteria and bounds for those tests were developed from this data base.

MSR Simulation

The primary objective of the SAFEGUARD System was to detect, identify, and intercept threatening reentry vehicles. Data from the target-track function, of sufficient quality to support target selection and intercept, was critical to attaining the primary objective. The MSR Simulation (MSRSIM), which runs on the HIS 635 computer, was developed to permit a comprehensive evaluation of the SAFEGUARD target-track functional capability.4

The main features of the simulation are detailed models of target characteristics and models of the radar hardware and data processor software that are included in the tracking loop. MSRSIM consists of two modules of simulation programs: the BED programs, which model the radar hardware and threat environment, and the LOGIC programs, which model the data-gathering software.

Features of the threat environment modeled in the BED module are:

• RV radar cross section and sheathing effects
• RV dynamics (trajectory and tumbling)
• Wake parameter histories
• Ballistic coefficient histories
• Tank fragmentation.

The BED hardware models include:

• Main lobe antenna pattern
• Sum-channel video with correlated noise
• Range mark generator
• Monopulse phase detector
• Digital encoder granularities
• T1 and T3 compressed waveforms
• Dispersed pulse measurement.

Modeled features of the data processor algorithms in the LOGIC module are:

• Acquisition and track initiation
• Kalman track filter
• Crossing target logic
• AGC and track-mode selection logic
• Range gate setting and antenna beam pointing
• Tracking status assessment.

Detailed mathematical models of the radar signal processor and high-fidelity signal synthesizers are the heart of MSRSIM. For the MSR, the models digitally synthesize compressed-pulse video and monopulse-angle processor signals and dispersed-pulse integrator signals, including the effects of bandwidth-limited noise. Backscattered signals from waking and/or fragmenting objects, which may be off-axis in the phased-array main beam, are appropriately superimposed to create realistic radar responses. The model accurately represents the interference backscatter from multiple targets by considering their relative ranges from the radar and their off-beam positions. Gain states of the radar, waveforms transmitted, and other factors influencing the phase and amplitude of the interfering returns from targets are also represented.
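The interference effect described in this paragraph can be pictured as a coherent sum of complex returns, one per object, with phase set by range and amplitude weighted by off-axis position in the beam. The sketch below is a toy Python illustration of that superposition; the Gaussian beam taper, the wavelength, and all values are assumptions, not the MSRSIM models.

    import cmath, math

    WAVELENGTH = 0.10  # meters; order-of-magnitude S-band value (assumed)

    def composite_return(objects):
        # objects: list of (amplitude, range in m, off-axis position in sines)
        total = 0 + 0j
        for amp, rng, off_axis in objects:
            phase = 4 * math.pi * rng / WAVELENGTH           # two-way path phase
            beam_weight = math.exp(-(off_axis / 0.01) ** 2)  # crude beam taper
            total += amp * beam_weight * cmath.exp(1j * phase)
        return total

    # An RV plus a nearby tank fragment slightly off axis:
    v = composite_return([(1.0, 150_000.0, 0.0), (0.4, 150_004.2, 0.006)])
    print(abs(v))  # interference can raise or lower the apparent amplitude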

The time-based simulation is stepped ahead to the time that the next radar pulse is transmitted to the next object in track. After all objects are flown forward to the time of the next radar pulse, their radar cross sections are computed. The objects include reentry vehicles, tanks, and tank fragments, all of which may wake during reentry. Next, the sum-channel video signal is generated and range and amplitude measurements are taken. Simultaneously (to represent the monopulse-angle measurements), sin α and sin β difference-channel signals are generated and combined with predicted sum-channel signals to generate noisy, off-axis sin α and sin β measurements. The range, amplitude, and sin α and sin β measurements are digitally encoded and passed to the data processor software. If the amplitude S/N is above a threshold, the measurements are inserted into the Kalman tracking filter to update the estimated target-state vector and to estimate target position and velocity at the time of the next pulse transmission. Estimated position information is fed to the hardware model to point the antenna beam and set the range tracking gate for the next pulse transmission. Simultaneously, the estimated variances and peak amplitude S/N are sent to the track-logic software to determine the RF gain setting and the track mode.
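The loop described in this paragraph (fly the objects, measure with noise, gate on S/N, filter, and predict ahead to steer the radar) is sketched below in one range dimension. A simple alpha-beta tracker stands in for the Kalman filter, and every constant is an illustrative assumption.

    import random

    def track_loop(n_pulses=40, dt=0.05, sigma_meas=5.0, snr_threshold=3.0,
                   alpha=0.5, beta=0.2):
        true_r, true_v = 150_000.0, -3_000.0   # "truth" trajectory
        est_r, est_v = 150_100.0, -2_800.0     # imperfect initial track
        for _ in range(n_pulses):
            true_r += true_v * dt              # fly the object forward
            est_r += est_v * dt                # predict to pulse time
            meas = true_r + random.gauss(0.0, sigma_meas)
            snr = random.uniform(2.0, 20.0)    # stand-in for amplitude S/N
            if snr >= snr_threshold:           # below-threshold looks skipped
                resid = meas - est_r
                est_r += alpha * resid
                est_v += beta * resid / dt
            range_gate = beam_point = est_r    # prediction steers the radar
        return est_r - true_r                  # final track error

    print(track_loop())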

Different versions of the simulation are available. All use the same BED module, but different LOGIC modules. One version of the simulation, using a LOGIC package consistent with the Meck data processing, was used extensively in defining Meck test requirements, providing premission simulations, and evaluating mission results to validate the simulation. A second LOGIC package was used in developing the tactical tracking algorithms. This version of the simulation was labeled TACMSRSIM.

PAR/SPARTAN SAFSIM

A decision was made in 1972 to develop a separate version of SAFSIM to model the PAR/SPARTAN system response. The development of separate PAR/SPARTAN and MSR/SPRINT versions of SAFSIM was desirable, since the amount of computer storage required by a single simulation would have necessitated sophisticated and expensive disc-to-core program swapping techniques to fit the simulation into the available core storage on the HIS 635 or IBM 370 computers. No analysis capability was lost through this simpler and cheaper approach of developing two separate versions of SAFSIM, since only a low degree of interaction existed between the PAR/SPARTAN and MSR/SPRINT roles in SAFEGUARD. PAR/SPARTAN SAFSIM (P/Z SAFSIM) was developed on the IBM 370 because the execution speed and core storage advantages were needed for simulating the large number of objects visible to the PAR during a battle. For example, simulation of one of the full system integration test scenarios on P/Z SAFSIM requires 1200 kilobytes of storage and 1 to 1.5 hours of execution time on the IBM 370/165.

The attack environment portion of P/Z SAFSIM consists of models of the key attributes of each threat complex type (e.g., RV/tank separation velocities, RCS profiles), trajectory targeting programs, and exoatmospheric nuclear environment models. The satellite environment was not simulated since it was found to have an insignificant effect on system response during an attack.

The PAR surveillance function is approximated in P/Z SAFSIM. Rather than implement a detailed search-raster model, the time for the first surveillance-pulse hit on an object is determined by a random draw from a uniform distribution, with later attempts scheduled deterministically using the scan time of the subsector. The effects of radar resource availability on scan times and of nuclear burst attenuation on detection capability are modeled. The scan-while-track and search-in-range detection mechanisms used by the PAR Weapons Process (PW) are also modeled. PW rules for the traffic-level-dependent switch on maximum search listening range are implemented.
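A minimal sketch of this surveillance approximation: the first look at an object falls at a uniform random time within one scan of its subsector, and later looks follow deterministically at the subsector scan period. Function and parameter names are invented for illustration.

    import random

    def surveillance_look_times(entry_time, scan_period, n_looks):
        # First surveillance hit: uniform random draw over one scan period.
        first = entry_time + random.uniform(0.0, scan_period)
        # Later attempts: deterministic at the subsector scan period.
        return [first + k * scan_period for k in range(n_looks)]

    print(surveillance_look_times(entry_time=100.0, scan_period=2.0, n_looks=4))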

In the tracking area, accurate models are incorporated for track initiation and group clustering, automatic gain control, pulse scheduling, polynomial/Kalman filtering, known object recognition, and low S/N sample pulsing response. Due to excessive execution time and storage requirements, a signal processor model was not included in P/Z SAFSIM. Thus, in unresolved target-track situations, the SAFSIM tracking performance was not an accurate reflection of PAR tracking, and it was necessary to use the detailed radar evaluation simulation (PAR Testbed) in conjunction with P/Z SAFSIM.

The four key subfunctions of PW target selection are modeled in P/Z SAFSIM: determining value structure attacked, determining launch complex type, ranking objects according to estimated probability of being an RV (PRV), and rescaling PRV to support the SPARTAN Interceptor Response (ZIR) missile allocation function.

For battle planning, models were implemented for defense-mode-dependent SPARTAN allocation, battlespace determination, Minuteman-flyout-corridor avoidance, manual SPARTAN controls (SPARTAN Hold and Hold Fire), and intercept-point selection. Particular attention was given to intercept-point selection because of the complexity and importance of resolving nuclear-effects constraints, such as fratricide and blackout, among multiple intercept points. The ZIR models were also configured to run in a stand-alone mode outside the body of P/Z SAFSIM. This facilitated detailed high-fidelity analysis using PW inputs from the PAR Testbed Simulation or TSCS.

Finally, the SPARTAN intercept effectiveness model simulated both controlled and free-intercept guidance modes, using forced or unforced jethead ignition, as appropriate. The primary error sources modeled were atmospheric exit velocity errors, missile state estimation errors at third-stage ignition, third-stage velocity errors, and third-stage spin-stabilization errors.

PAR Testbed

PAR Testbed is a high-fidelity simulation of the threat environment, PW software, and PAR hardware. The threat environment models offer the same capabilities as those described for PAR/SPARTAN SAFSIM. In addition, the Testbed can be driven directly by SAFEGUARD Threat Action Generator (STAG) tapes (see System Integration Testing) for precise simulation of the threat environment encountered in TSCS tests. The satellite and meteor environment is also simulated.

The PW software portion of Testbed includes detailed models of the raster-scan search and verification processing. In the track area, models are provided for track initiation, including range rate histograms, correlation processing, and Known Object Recognition (KOR) checks; Continental United States (CONUS) impact and meteor tests; pulse scheduling rules, including scan-while-track, search-in-range, drop track, sample pulsing, and lost track algorithms; KOR checking throughout track processing; clustering and reclustering logic; and finally, Kalman and polynomial filtering for both null and off-axis tracks. In the target-selection area, models are provided for each of the subfunctions that lead up to determining value structure attacked and launch complex type, ranking objects according to estimated probability of being an RV (PRV), and rescaling PRV to support the SPARTAN Interceptor Response missile allocation function. Finally, the traffic-sensitive switch of search listening range and pulse allocation rules is modeled.

The PAR hardware models include interfering multiple-target effects in range and angle, and correlated noise effects in range. Range marks are generated from simulated noisy range video. Angle measurements are modeled by forming a sum-signal vector and two difference-signal vectors (including noise and jitter effects) and by precisely modeling the characteristics of the monopulse detectors. Simulated signal levels and noise levels are impacted by the pulse-by-pulse setting of digitally controlled attenuators. Finally, the characteristics of the T1 waveform (beamwidth, sidelobe levels, etc.) have been modeled in detail.

SPRINT Engagement Simulation

Two detailed simulations were used to evaluate SPRINT intercept capability in the SAFEGUARD System. The SPRINT Engagement Simulation (SES) was developed to evaluate the performance requirements applicable to SPRINT capability (primarily guidance) and to support the definition of requirements for the SPRINT intercept portion of the Meck System Test Program. All elements of the SAFEGUARD System necessary to the engagement of a single RV by a single SPRINT missile were modeled in SES. A detailed simulation of the SPRINT missile was provided, including statistical models of all significant performance variations such as pitchover errors, motor thrust and total impulse, autopilot biases, etc. The guidance program implemented in SES contained all of the guidance functions specified in the DPSPRs for SPRINT guidance but was non-real time in the sense that it was not constrained by real-time data processing considerations. Both the target-prediction and missile-prediction functions provided accuracies not achievable in real time to serve as a base for evaluating the tactical implementation.

Models of missile and target track were developed consistent with the MSR performance specifications and results obtained from MSRSIM. As indicated earlier, SES was used initially to identify stressing intercept conditions for use in defining the Meck test requirements.

As the tactical guidance design for SPRINT was developed, the guidance section of SES was replaced with tactical algorithms. Using performance data from the Meck System Test Program, the missile simulator was modified and the target-track model was extended to simulate target-wake effects and other clutter sources, such as tank breakup.

SES was used extensively throughout the SAFEGUARD evaluation program to explore SPRINT intercept capability throughout the field of fire and to investigate the sensitivity of intercept capability to guidance design alternatives and variations in environment.

The major studies conducted5-10 included:

• Determining the linear region of the SPRINT field of fire
• Determining the effect of wake-induced target-track biases on intercept capability
• Characterizing controlled guidance effectiveness in limiting intercept-point shift
• Determining the sensitivity of intercept capability to variations in target- and missile-track rates and guidance rate
• Determining sensitivity to variations in missile performance, air density, target beta histories, and other environment factors including blackout-coast intervals
• Evaluating the tactical algorithm design, including missile and target predictors, intercept-point bias, and capability bias
• Developing an approximate intercept capability model for use in the system simulation.

SPRINT Intercept Simulation

The SPRINT Intercept Simulation (SIS) is a Monte Carlo simulation containing all the elements of the subsystem, with the exception of the target-tracking and -state estimation functions. Target-state estimation information is supplied to SIS via magnetic tape generated by the MSR simulation, MSRSIM. The prelaunch guidance and missile data processing functions contained in SIS are FORTRAN equivalents of the real-time M-2 software code used for the live missions at Meck. The missile simulator and MSR missile-track error simulator are FORTRAN models of the SPRINT missile and MSR missile track hardware, which contain models of all error sources that significantly contribute to intercept effectiveness.

The overall approach to intercept subsystem evaluation developed during the Meck program consisted of three major phases:11

• Analysis and simulation
• Stressing tests
• Reconstruction of test results.

Simulations and analysis techniques were developed to verify system and subsystem requirements and to characterize the sensitivity of system performance to hardware, software, environment, and threat parameters. The parameters that drive system performance (e.g., radar noise, unpredictable target accelerations, variations in missile performance) are of a random statistical nature, and the properties of the system are highly interactive and nonlinear; therefore, it is not generally feasible to determine intercept effectiveness by analytical techniques. Consequently, Monte Carlo simulation techniques were required to adequately characterize system performance.
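The Monte Carlo approach can be illustrated with a toy sketch: draw each random error source, run the draws through a (here trivial) engagement model, and build up the miss-distance distribution by repetition. The error model and numbers below are placeholders, not SIS models.

    import math
    import random

    def one_trial():
        radar_noise = random.gauss(0.0, 4.0)    # meters, 1-sigma (assumed)
        target_accel = random.gauss(0.0, 6.0)   # unpredictable target motion
        missile_perf = random.gauss(0.0, 3.0)   # missile performance spread
        return math.hypot(radar_noise + target_accel, missile_perf)

    misses = sorted(one_trial() for _ in range(10_000))
    print("median miss:", misses[len(misses) // 2])
    print("90th percentile:", misses[int(0.9 * len(misses))])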

Since the cost of a Monte Carlo live-mission approach to validating system performance at Meck was obviously prohibitive, the approach was to use data gathered from a relatively small number of particularly stressing live missions to validate Monte Carlo simulations. These tests were specifically designed to stress some aspect of missile or software performance and to provide data to validate simulation models and techniques (see SPRINT Performance Characterization).

The live-mission results were reconstructed by simulation using the data gathered during the live mission. If reconstruction was successful, confidence was developed in the validity of the simulation and, consequently, in the accuracy of the system performance predicted by the simulation. If reconstruction was unsuccessful, deficiencies in modeling or simulation techniques were exposed and corrected.

To use simulation results to characterize SPRINT intercept subsystem effectiveness with a high degree of confidence, the simulation itself had to be a valid model of the subsystem. The SIS verification analysis used results from premission Monte Carlo simulations, live missions, and post-mission Monte Carlo simulations. Premission simulation analysis provided nominal guidance- and missile-parameter histories and predicted possible mission outcomes. The final post-mission simulation analysis consisted of reconstructing the flight and matching flight-test and simulation results.

The premission Monte Carlo simulations with SIS used MSRSIM tapes of target-state histories. In most cases, the MSRSIM tapes contained 25 Monte Carlo samples of estimated target-state histories representing different random target-track noise and variations in target ballistic coefficient histories. For the premission simulations, all other random system errors that influence intercept effectiveness (missile-track noise, missile performance variations, and atmospheric density) were fully simulated.

During the post-mission analysis, data gathered during the live mission were used to particularize the conditions actually present during the mission and to "tune" the simulation models and consequently reconstruct the flight. Three major classes of data were gathered during each flight:

• Recorded software parameter histories (E-Tape)
• Missile telemetry data
• Optical track data - Recording Automatic Digital Optical Tracker (RADOT).

The E-Tape provided recorded histories of all pertinent guidance, missile-track, and target-track parameters. During the post-mission analysis, the recorded target-state estimates were used to drive SIS. The missile telemetry data provided missile acceleration histories that were used primarily to validate missile simulator models. The RADOT data provided an accurate independent source of missile-state histories, particularly during the powered-flight phases. On some flights, RADOT provided a valuable independent measure of miss distance.

SPARTAN Intercept Simulation

The SPARTAN Intercept Simulation (SPASIM) is a six-degree-of-freedom simulation of the hardware and software making up the SPARTAN intercept subsystem. SPASIM was typically used in the Monte Carlo mode to predict SPARTAN miss distance and kill-probability performance for given intercept points and PAR target-track error distributions.

High-fidelity simulations of SPARTAN and the portions of MSR relevant to SPARTAN support were provided. The SPARTAN flight dynamics models were extensively validated using Meck test data.12 All of the significant missile error sources were modeled — specific impulse, burn time, and weight variations for all three missile powered-flight stages; attitude control errors during each phase of the missile flyout profile; and time reference and atmospheric density variations. Similarly, MSR missile tracking hardware models were thoroughly validated using Meck test data, and models were provided for the key error sources such as radar noise, atmospheric refraction, and SPARTAN exhaust-plume effects.

SPASIM incorporates high-fidelity models of the MSR Weapons Process (MW) guidance and missile data processing functions. Guidance function models are provided for both free- and controlled-intercept guidance modes, and for each of the seven major SPARTAN flight phases — boost, dive guidance, aerodynamic steering, exit attitude control, exoatmospheric coast, jethead burn, and post-jethead coast. The missile data processing models incorporate the missile-state estimation and radar beam-pointing command-generation functions. Finally, PAR target-track errors were modeled by three-dimensional Gaussian distributions, where the means and variances could be set to values drawn from PAR Testbed or TSCS analysis, or in the default case, to the DPSPR performance specification values.2
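The PAR target-track error model described in the last sentence reduces to independent Gaussian draws in three dimensions. A minimal sketch, with placeholder means and sigmas rather than the DPSPR values:

    import random

    def par_track_error(means=(0.0, 0.0, 0.0), sigmas=(20.0, 15.0, 15.0)):
        # One three-dimensional Gaussian error draw; the means and variances
        # would come from PAR Testbed/TSCS analysis or specification values.
        return tuple(random.gauss(m, s) for m, s in zip(means, sigmas))

    print(par_track_error())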

ANALYSIS AND KEY RESULTS

The total SAFEGUARD evaluation program consisted of a closely coordinated combination of analyses, simulation studies, and test programs conducted over a period of several years. The areas of investigation covered a wide spectrum, resulting in modifications to the design requirements and/or implementation of virtually every functional area in the SAFEGUARD System. In addition, many other results confirmed that the current design was effective in meeting the system objectives.

Even a brief summary of the total evaluation effort is beyond the scope of this document. The evaluation activities described here are typical of the total program and serve to illustrate the emphasis on simulation and the importance of the test programs to the evaluation effort. Extensive annual reports13-18 provide the details on most of the evaluation effort.

MSR/SPRINT Performance and Characterization

Many areas of MSR/SPRINT performance were investigated. The following discussions cover some of the more significant investigations.

SPRINT Battlespace Utilization

The SPRINT capability analysis conducted early in the evaluation program identified a sizable region of the SPRINT field of fire where successful SPRINT intercepts would be difficult to achieve and attainable miss distance would be highly sensitive to acceleration uncertainties associated primarily with target prediction. This region of the field of fire, the "poor-geometry" region, results from the crossing angle between the target- and missile-velocity vectors being such as to align the miss-distance direction (i.e., the normal to the relative velocity vector) along the missile centerline. When this geometry prevails, miss distance can be reduced only by speeding up or slowing down the missile. The poor-geometry region was labeled the "nonlinear" region in contrast to the remainder of the field of fire (the "linear" region) where achievable miss distance could be estimated as a linear function of the missile and target tracking errors. Subsequent simulations with the SPRINT Engagement Simulation (SES) generally confirmed the size and shape of the regions.

Investigations using the system simulation (SAFSIM) indicated that constraining intercepts to the linear region of the SPRINT field of fire in no sense degraded overall system effectiveness; on the contrary, effectiveness was somewhat enhanced. Because only intercepts with a priori high kill probability were planned, potential second shots based on kill-assessment decisions were eliminated, and the blackout environment (due to SPRINT bursts) was reduced.

As tactical guidance algorithms were developed and incorporated in SES, further exploration of the linear regions was conducted. As expected, due to necessary guidance approximations in the real-time program, the nonlinear region was slightly expanded. Data generated with SES during the study indicated that the boundary of the deadzone was closely coupled to the missile-target crossing angle. Zero degrees is the "worst case" geometry. Plots of miss distance versus crossing angle obtained from the SES results indicated that miss distance increased rapidly as the crossing angle decreased below a specified minimum. Based on these results, the boundary of the deadzone, as implemented in the tactical version of interceptor response, is defined by this specified minimum crossing angle.
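The deadzone test this paragraph describes amounts to comparing the missile-target crossing angle against a specified minimum. A hedged sketch, with a placeholder threshold rather than the tactical value:

    import math

    MIN_CROSSING_ANGLE_DEG = 15.0  # illustrative placeholder, not the tactical value

    def in_deadzone(missile_vel, target_vel):
        # Crossing angle between the missile- and target-velocity vectors;
        # near zero degrees, the miss direction aligns with the missile
        # centerline, the "worst case" geometry.
        dot = sum(m * t for m, t in zip(missile_vel, target_vel))
        mag = math.hypot(*missile_vel) * math.hypot(*target_vel)
        crossing = math.degrees(math.acos(max(-1.0, min(1.0, dot / mag))))
        return crossing < MIN_CROSSING_ANGLE_DEG

    print(in_deadzone((0.0, 1200.0), (50.0, -2400.0)))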

Target-track data gathered in the early portion of the Meck Test Program initially characterized the expected performance of the wake-track algorithms employed in the tactical system. That knowledge was used to develop (based on MSRSIM results) a single-pulse error model to simulate the effect of wake on track capability. A concurrent analysis conducted with SAFSIM indicated that most intercepts at Grand Forks would be influenced by wake. Hence, the single-pulse error model was incorporated in the SPRINT Engagement Simulation (SES) to determine the impact of the waking environment on intercept capability.

For each target type, the single-pulse error model used a defined region of heavy wake, where heavy wake is the expected region of unresolved-wake returns. These heavy-wake regions are defined in terms of altitude and look angle. Within the heavy-wake regions, appropriate noise, biases, and transient effects are added to the non-wake-error statistics before the track data is provided to the track filter.
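A minimal sketch of this error-injection scheme: check whether the pulse falls inside the vehicle's heavy-wake region (defined by altitude and look angle), and if so add wake noise and bias to the clean-target single-pulse error before it reaches the track filter. All regions and magnitudes below are invented for illustration.

    import random

    HEAVY_WAKE = {  # vehicle type -> (alt min m, alt max m, max look angle deg)
        "RV-A": (15_000.0, 45_000.0, 30.0),
    }

    def single_pulse_range_error(vehicle, altitude_m, look_deg, sigma_clean=3.0):
        err = random.gauss(0.0, sigma_clean)        # clean-target statistics
        lo, hi, max_look = HEAVY_WAKE.get(vehicle, (0.0, 0.0, 0.0))
        if lo <= altitude_m <= hi and look_deg <= max_look:
            err += random.gauss(0.0, 4.0)           # added wake noise (assumed)
            err += 6.0 + 0.3 * look_deg             # look-angle-dependent bias
        return err

    print(single_pulse_range_error("RV-A", 30_000.0, 20.0))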

The major effect of heavy wake is the introduction of biases in the filtered target-position data. The biases (both range and angle) can vary as a function of vehicle type and look angle (angle between target-velocity vector and radar line of sight).

The SES study of the effect of these biases on intercept capability led to the conclusion that miss distance was increased when intercepts were conducted within the heavy-wake regions. Therefore, requirements developed for the battle-planning function (SPRINT Interceptor Response) accommodated the wake environment. Investigations, using the system simulation (TACSAF) for a variety of attack conditions, confirmed that adequate battlespace was available for almost all intercepts and that the wake constraint produced no overall loss in system effectiveness.

SPRINT Performance Characterization

The SPRINT intercept simulations described earlier were developed to predict the results of the SPRINT intercept tests at Meck and to provide a tool to characterize intercept capability throughout the field of fire. The Meck flight tests served two objectives:

• Provided data for evaluating SAFEGUARD System performance to determine if the specifications and design produced the desired results
• Provided data for reconstructing the test with simulations. The reconstruction verified the validity of the simulation and/or indicated deficiencies in the modeling.

The measure of intercept capabilities is attainable miss distance. Miss distance, a statistical quantity, is caused by uncertainties that can be categorized as follows: (1) radar tracking inaccuracies, (2) differences between the available or achieved interceptor acceleration and that required to eliminate predicted miss distance, and (3) unpredictable target accelerations. The dependence of intercept effectiveness on subsystem errors made it desirable in the individual flight tests to subject the guidance-interceptor-radar subsystem to situations where the system uncertainties stress the intercept conditions. More can be learned about the effect of system errors from this type of testing, since excessive interceptor-acceleration capability will not overcome or mask subsystem errors. Consequently, the live SPRINT tests were structured to make demands on the interceptor capability and guidance function. The following intercept conditions were of interest:

• Inadequate acceleration capability in the miss-distance direction caused by low dynamic pressure or stressing engagement geometry
• Target flight path near the tangent to the SPRINT time-of-flight contours
• Intercept shortly after motor burnout
• Long-range, low-altitude intercepts
• Low intercept elevation angles
• Near the vertical missile trajectory.

SPRINT intercept effectiveness (miss distance and intercept-point motion) was demonstrated for a variety of intercept conditions during the M-2 series of flight tests at Meck Island. Data was generated for evaluating SPRINT intercept subsystem performance and validating the SPRINT Intercept Simulation. Integration of subsystem elements was proved, and operational characteristics of subsystem elements were determined.

The following major results and conclusions are based on mission events and SPRINT Intercept Simulation analyses conducted during the Meck Test Program:

• Intercept effectiveness was evaluated, and the SPRINT Intercept Simulation was verified for 17 collocated and remote launch SPRINT Meck test missions.
• Observed intercept-point motion for the controlled-intercept guidance-mode intercept points at Meck was always within the specified value for nonstressing engagement geometry.
• The SPRINT Intercept Simulation was shown to be a highly credible simulation of the SPRINT intercept subsystem.
• Most of the differences between premission-predicted and live-mission intercept effectiveness were due to differences between premission and live-mission target-track characteristics.
• The SPRINT Missile Simulation (MESSIM) used in SIS was significantly updated as a result of observed versus predicted mission performance. The key updates were missile drag, airframe response, and atmospheric models. The final SPRINT missile simulation was verified as an adequate model of SPRINT performance.
• An evaluation of the Missile Site Radar Missile Track Simulation (MTRACK) showed it to be a very good model of MSR tracking accuracy and performance.
• The utility of first-stage guidance was shown by premission simulation analysis of Mission M2-22. First-stage velocity dispersion is greater than the velocity estimation error at the time of command computation.
• The intercept-point bias guidance function was deleted from the tactical guidance design because of the guidance performance stress introduced by the aim-point shift during end-game for Mission M2-7 and the uncertainty in warhead shadowing.
• SIS parametric analysis of capability bias [*] performance during the M2-48 premission analysis showed that capability bias is effective only when an adequate increase in prelaunch time of flight is included to account for the increased missile flight time to intercept.

[* - A guidance technique to enhance system performance in poor geometry situations.]

• Intercepts occurring during heavy-wake track (M2-18, -38) produced miss distance at least on the order of the target position estimation bias associated with heavy-wake track.
• The sensitivity of miss distance to intercept time after wake quench was determined during the M2-25 SIS premission analysis. Missions M2-27 and -25 verified that intercepts occurring after wake quench are not significantly affected by the target-state estimation occurring at wake quench.

Target-Track Performance

One of the most, if not the most critical function in the SAFEGUARD System is that of target track. The performances of the target-classification (threatening or not threatening), target-discrimination (RV or non-RV), battle-planning, and intercept functions are all critically dependent upon the quality of data provided on a reentering object by the target-track function (both hardware and software). A major portion of the total evaluation effort was devoted to understanding the sources of target-track error in the MSR, the environmental effects on tracking, and the performance of the software tracking algorithms.

An extensive measurement program was conducted on the MSR at Meck to determine the single-pulse range and angle statistics, and to validate the modeling employed in the various simulations, particularly MSRSIM. Biases observed between various types of metric data were systematically investigated. These included offsets in the data at mode changes, face handovers, frequency-to-frequency, and AGC changes. A previously undetected amplitude-dependent bias was isolated and characterized, and the effect of the bias was minimized by adding a phase commutation technique to the MSR receivers. Position biases between the missile-track and target-track modes were identified by simultaneously tracking SPARTAN missiles in both the missile-track and target-track modes.

As a result of these investigations, bias effects in the MSR were minimized by a combination of hardware changes, software corrections, and radar alignment techniques.

The alignment procedures and measurement techniques developed during the Meck program were applied to the tactical radars at Grand Forks.

Cluttered Environment

Obtaining high-quality track data on an object during reentry is made more difficult by two potential sources of clutter within the reentry altitude regime — target wake and tank breakup.19-25 See the Classified Supplement for further discussion of these clutter sources.

A major part of the Meck test program was devoted to providing the information needed to characterize the clutter environment, develop track algorithms for operation in the clutter environment, and validate the design performance.

The design of wake track was accomplished using MSRSIM, as was the entry and exit logic, which uses a combination of a missed-look test, wake-to-body amplitude, and range-extent measurement. Wake track is basically the use of a dynamic threshold to provide a substitute tracking location on the leading edge of the target/wake range return whenever the clean-target zero-slope is missing. This produces a range measurement error of time-varying magnitude and direction. The target amplitude, wake amplitude, and wake length all change during reentry; each change is a function of RV type and trajectory parameters. Thus, the resulting range error is a complex function of threshold bias, threat type, and trajectory. The angle measurement is taken at the range of the zero-slope beyond the threshold crossing and is therefore at the range of the first wake peak. The measurement is subject to glint caused by the random multiscatter wake. The resulting increased angle-measurement standard deviation and angle bias in the direction of the wake grow as the trajectory look angle increases and more of the wake enters the affected range resolution cell. (The effects of these errors on intercept capability were investigated using SES and SAFSIM.) For further discussion, see SPRINT Battlespace Utilization.
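The measurement rule described above can be sketched as follows: find the leading-edge crossing of a dynamic threshold on the target/wake range return, take the range measurement there, and take the angle measurement at the first zero-slope (wake) peak beyond the crossing. The return profile and threshold rule below are illustrative only.

    def wake_track_measurements(video, gate_start_m, bin_m, threshold_frac=0.3):
        # video: amplitude samples across the range gate
        threshold = threshold_frac * max(video)    # dynamic threshold
        lead = next(i for i, a in enumerate(video) if a >= threshold)
        peak = lead                                # walk to first zero-slope peak
        while peak + 1 < len(video) and video[peak + 1] > video[peak]:
            peak += 1
        range_meas = gate_start_m + lead * bin_m   # range: leading edge
        angle_range = gate_start_m + peak * bin_m  # angle: first wake peak
        return range_meas, angle_range

    print(wake_track_measurements([0.1, 0.2, 0.9, 1.0, 0.8, 0.7, 0.4],
                                  gate_start_m=149_900.0, bin_m=15.0))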

The on-going Meck test program was modified to provide a series of targets to thoroughly explore the performance of the wake-track algorithms.

Prior to the availability of the wake-track design in the MSR at Meck, digital video recording data were obtained for waking targets and used with MSRSIM to verify the design. Later in the program, premission simulations using MSRSIM were carefully matched against the mission results at Meck to improve the design and to verify the modeling in MSRSIM.

MSR/MSDP Traffic Capacity

Extensive analysis of MSR/MSDP (Missile Site Data Processor) traffic capacity during 1971 culminated in major modifications to the overload response in the MSR Weapons Process (MW) early in 1972. (See the Classified Supplement for additional material on traffic response.) Although test program schedules did not permit full software and system testing of the MW overload response, it is implemented in the software as an untested capability. The key features of the MW overload response and the traffic capacity improvements achieved are summarized in the Classified Supplement. SAFSIM results on SPRINT utilization inefficiencies in the overload response are also discussed there.

Significant traffic performance improvements were achieved by functional modifications in four major areas:

• Reduction of track data rates at the onset of template overload
• Revision of overload coast-management rules to delay coast initiation until adequate data quality is achieved and to provide timely and efficient reacquisition
• Real-time limitation of template pulse transmissions to efficiently stay within the return generation capacity of a three-processor system exerciser
• Provision of an MSDP overload response that gives preference to support of MSR/ SPRINT processing over displays, communications, and SPARTAN processing.

PAR/SPARTAN Performance and Characterization

Many areas of PAR/SPARTAN performance were investigated. Some of the more important areas are discussed here.

PAR/SPARTAN — Performance Impact of Unresolved Target Track

Refer to the Classified Supplement for this discussion.

PAR Performance Characterization

Data gathered by the PAR while tracking satellites with the PAR Test Process (PT) software was analyzed to evaluate range and angle measurement accuracy, radar sensitivity, and PAR antenna patterns.

The radar sensitivity in the search mode was characterized as the S/N at a given range on a known target at a scan angle of 45 degrees. The results indicate that the PAR S/N is approximately 4 dB better than the hardware specification requires.
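Measurements of this kind are reduced to a common reference condition using the fact that, for a fixed target, received S/N falls off as the fourth power of range. A sketch of the adjustment, with invented numbers:

    import math

    def snr_at_reference(snr_db, range_km, ref_range_km):
        # S/N scales as 1/R^4, i.e., +40*log10(R/R_ref) dB when referred inward.
        return snr_db + 40.0 * math.log10(range_km / ref_range_km)

    # A 13 dB return measured at 1,500 km, referred to 1,000 km:
    print(snr_at_reference(13.0, 1_500.0, 1_000.0))  # about 20 dB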

The peak value, -3 dB widths, and first sidelobe levels of PAR antenna sum patterns were analyzed. The peak values exhibit a dependence on frequency that agrees with the dependence obtained from the radar sensitivity analysis. The search channel was found to be about 0.5 dB more sensitive than the track channel. Analysis of the -3 dB widths indicates the achievement of the desired beamwidth, which changes slightly (approximately ±1 millisine) as the transmission frequency and scan angle change. The null of the monopulse-difference pattern was analyzed and found to be positioned on the tracked target to within 0.1 millisine.

Analysis of PAR measurements of range and angle resulted in the identification of seven components of the total absolute PAR measurement error: (1) a random component, (2) a bias that depends on the off-axis position of a target, (3) a bias that depends on received-signal amplitude, (4) a bias that depends on transmission frequency, (5) a bias that depends on sine-space location of a target, (6) a bias that depends on face-orientation errors, and (7) a remaining unremovable bias. The random-error component was characterized as a function of S/N and was found to be within the system specifications. A bias in off-axis angle measurements was detected; it is a complicated function of off-axis position, amplitude, and frequency and was seen to change from mission to mission, which strongly implied alignment and calibration differences. Analysis of the alignment procedures used by site personnel led to an upgrading of off-axis track-bias performance to tolerable levels. A bias in range measurements that depends on the received-signal amplitude was detected and characterized, and could be removed from the data using an algorithm in the system software. No significant system performance impact arises from this bias; thus, the PW change was not made. Similarly, a bias in sin α angle measurements that depends on the location of the target in sine space was detected and characterized, and was removed from the data by a PW modification. A component of the total PAR measurement error was attributed to face-orientation errors. After correcting the data for all known biases, face-orientation angles were estimated so that the total PAR measurement error (compared to true satellite positions) was minimized. When this was done, the total PAR measurement errors were within the system specification.

SPARTAN Performance Characterization

This section summarizes an evaluation of the SPARTAN intercept subsystem performance and a validation of the SPARTAN Intercept Subsystem Simulation (SPASIM), based on ten SAFEGUARD System flight tests that used the SPARTAN interceptor.12 The missions were flown from Meck Island, Kwajalein Missile Range, during the time from August 1971 to June 1973.

The objectives of the study presented here were threefold:

1. Assess the ability of the SPARTAN intercept subsystem to successfully intercept a reentry vehicle
2. Verify that the SPARTAN Intercept Subsystem Simulation can predict and reconstruct the SPARTAN intercept function to a high degree of confidence
3. Using the validated simulation, assess the intercept effectiveness of the tactical system throughout the tactical field of fire.

Each of these objectives was successfully achieved. Significant conclusions and results obtained during the study included the following:

• Evaluation of intercept effectiveness and simulation verification was completed for ten missions.
• The miss distance observed on each mission was successfully reconstructed. The primary contributor to miss distance was shown to be the difference between the actual third-stage incremental velocity delivered on the mission and the average performance assumed by guidance in determining third-stage ignition time. Radar-tracking and telemetered-acceleration data gathered during the mission were used to determine actual delivered velocity.
• Missile performance models were significantly updated as a result of observed versus predicted results for the first six missions (i.e., through Mission M2-34). After update, the simulation (SPASIM) was shown to adequately predict mission results.
• A moderate degradation in kill probability for Mission M2-06 resulted from a 2.2-sigma aim-point miss distance. This was the largest miss distance on any mission and was attributed to the combination of a very long coast period after third-stage burnout, a large turnaround angle, and a missile-state estimation error. Mission M2-06 was a low-altitude, long-range mission — the longest range mission in the test series.
• With the exception of Mission M2-03, the actual exit velocity was significantly lower than that predicted for the first six flight tests (through M2-34), indicating significant inaccuracy in the true missile performance assumed for SPASIM and guidance development. As a result of these findings, substantial updates were made to propulsive, weight-flow, and aerodynamic characteristics of the first- and second-stage models.
• Predictions of third-stage orientation stability after spinup were verified using telemetered-attitude data gathered during several missions.
• The third-stage propulsion characteristics used for initial guidance development were shown to result in significantly lower performance than actually delivered. These findings resulted in an update to the third-stage model.
• Effectiveness of the forced third-stage ignition guidance mode is adequate over the range of incremental velocity requirements. The impact of a forced third-stage ignition on intercept miss is accurately characterized by the increased coast time over which the velocity error due to the initial turnaround maneuver is propagated.
• In addition to the performance model updates indicated above, substantial improvements were made to SPASIM during the study. These included additional Monte Carlo and third-stage incremental velocity printouts to facilitate analysis and model verification, and an exoatmospheric short-coast option to more accurately model turnaround angle for missions where third-stage ignition occurs shortly after exit and the vehicle has not tumbled.
• Characterization of the tactical system performance indicates that aim-point and target-miss distance achieved are well within the requirements for target kill throughout the tactical field of fire.

SYSTEM TESTING AT MECK ISLAND

The primary sources for system evaluation were the Meck System Test Program, the Tactical Software Control Site, and the tactical sites. This section discusses the test program as it was supported by the prototype Meck System, including the planning and structure of the test program and some of the mission results.

The Meck System was emplaced as a prototype installation; more importantly, it provided the data for concept verification and evaluated a number of system functions in a controlled environment. The program was intended to provide answers to critical design problems that could only be obtained in a live-target environment and to validate the software simulations that were essential to the success of the SAFEGUARD effort.

The need for a field test program was recognized in the early phases of development and as a result, the installation at Meck contributed to early decisions regarding the SAFEGUARD System implementation.

Objectives

The overall objectives of the Meck System Test Program were to:

• Provide a near-tactical environment for gathering system performance data to aid in development and evaluation of the SAFEGUARD deployment
• Gather S-band radar data for reentry physics research
• Gather data to evaluate concepts for future defense systems
• Gather data for evaluation of tactical and experimental target systems.

Test Requirements

The System Evaluation Program13-18 was the major source of test requirements for Meck testing. These test requirements26 were developed using the tactical threat and the tactical system capability as guidelines. Direct requirements from many other agencies, such as the following, also influenced the structure of the Meck System Test Program.

• SAFEGUARD System Command [now Ballistic Missile Defense Systems Command (BMDSCOM)] — primarily responsible for the development and testing of the SAFEGUARD System
• SAFEGUARD System Evaluation Agency (now TRADOC Systems Analysis Activity) — independent evaluation
• Army Air Defense Command — projected user
• Atomic Energy Commission and Army Materiel Command — test requirements related to warhead section evaluation and demonstration.

In addition, the following agencies indirectly influenced the test program, where possible, to support common goals:

• Advanced Ballistic Missile Defense Agency (ABMDA) [now BMD Advanced Technology Center (BMDATC)]
• Defense Nuclear Agency
• Site Defense Program Office
• Air Force Agencies such as Strategic Air Command (SAC) and Space and Missile Systems Organization (SAMSO).

Meck System versus Tactical System

The capabilities of the Meck System were as near tactical as practicable. To support a test program starting in early 1970, the Meck System designs had to be committed before all of the development work on the various tactical subsystems was completed. Many of these design decisions were made in 1967, during the SENTINEL time frame. At that time, area defense was the primary mission and a multisite deployment was contemplated. As a result, the test plan for Meck initially emphasized area defense. After the SAFEGUARD decision, testing shifted to the SAFEGUARD mission and was compatible with the tactical deployment schedules.

The Meck System ultimately evolved into a prototype SAFEGUARD Missile Site Radar System, which approximated essential features of the tactical system. However, it differed from the tactical system in several aspects.27-30

• A Perimeter Acquisition Radar was not installed at Meck because the benefits to be derived did not appear to warrant the expenditure. Other radars in the Kwajalein Atoll could fulfill the acquisition role, and prototype testing of the PAR could be performed at Grand Forks. The absence of the PAR required that the MSR be used at greater than tactical ranges.
• The targets used in system tests at Meck, to a large extent, emulated those specified in the tactical design threat. However, limited system resources (i.e., only eight precision target-track channels with three data processors installed) and the need to obtain well-defined data required that the test targets used be accompanied by a minimum of debris. The final sustainer stage was emplaced to ensure that the experiment was conducted in a controlled environment.
• The number of SPRINT and SPARTAN launch cells at Meck was limited to four and two, respectively, as compared to planned tactical complements an order of magnitude greater.
• Only three processors were used during Meck testing, compared to ten in the tactical system. Computer capacity was further restricted by the necessity to perform the nontactical function of real-time flight-safety calculations.
• Only two faces were implemented on the Meck MSR as compared to the four faces on the tactical MSR. The primary threat direction of approach at Grand Forks was essentially centered on the overlap region between two MSR faces. The orientation of the Meck faces was such that targets launched from Vandenberg Air Force Base approached at an angle of about 63 degrees from the bisector of the two faces.
• The Meck installation did not include a system similar to the Ballistic Missile Defense Center (BMDC); therefore, internetting concept testing was not possible.

Because of hardware, software, and threat environment restrictions, battle-strategy testing was not feasible at Meck. Such testing was left to simulation at the TSCS in Madison.

Test Targets versus Design Threat

One of the major influences on the test requirements was the threat against which the Meck System should be evaluated. This was dictated, in part, by the threats defined for the SAFEGUARD System. The Design Evaluation Threat, as outlined in the Data Processing System (DPS) Performance Requirements for the MW process,1 was a guide for determining the Meck threat parameters. Traffic-related characteristics (i.e., rate of delivery of RVs) were not included, since it was not considered meaningful to overload the Meck System capability with high-performance targets at the expense of obtaining useful, controlled data. Such system stressing was better evaluated by means of simulations.

A comparison of the reentry vehicle characteristics used in the Meck System Test Program with the design threat indicated an excellent match, even though the basic Meck targets were obtained from existing Air Force and Navy inventories. Although a Fractional Orbit Bomb System (FOBS) was considered part of the threat, no FOBS-like targets were presented to the Meck System, leaving this threat to be evaluated via simulations.

SAFEGUARD Hardware Facilities and Functional Capabilities

SAFEGUARD facilities were emplaced on two islands in the Kwajalein Atoll: Meck Island and Illeginni Island. Figures 5-2, 5-3, and 5-4 show the facilities available on these two islands. The Meck Island control building housed the two-faced MSR and the MSDP. The launch area on Meck consisted of four SPRINT and two SPARTAN launch cells, the respective missile assembly buildings, and related ground support equipment. Illeginni Island had two SPRINT remote launch cells, which were actively used, and two SPARTAN cells, which were built in anticipation of the possible inclusion of remote launch SPARTANs in the SAFEGUARD deployment. Additional details are contained in the Meck Island System Description.31


Figure 5-2. Meck Island Installation


Figure 5-3. Meck Island Facilities


Figure 5-4. Illeginni Island Location and Site Plan

The functional capability of the Meck System was developed incrementally, in step with the planned test program, which was in turn dictated by the deployment schedule and threat assumptions. During the final tests, the Meck System capability was as near tactical as necessary to satisfy the evaluation requirements. As the tactical system design evolved, the Meck System was modified to reflect this refined capability. Table 5-2 summarizes the SAFEGUARD System capability additions for Meck testing.

System testing in support of SAFEGUARD started in 1970. Prior to that time, subsystem checkout and integration was achieved using a software package known as M-0. The M-0 process continued to evolve and was used for subsystem test, calibration, alignment, and maintenance, independent of the M-1 and M-2 processes that followed.

The initial system software capability (M-1), which predated both the SAFEGUARD production decision and the evolution of the tactical designs, allowed relatively early verification of basic concepts and provided preliminary data to support basic evaluation objectives.32

The capability of the second version (M-2) reflected the selective refinement and augmentation of M-1 functions in accordance with the SAFEGUARD System tactical performance requirements. The capability additions planned for M-2 were implemented in six revision stages, defined as M2 Revision 17 (M2-R17) through M2 Revision 20 (Revisions R1 through R16 were implemented in M-1). The first of the M-2 revision stages was operational in August 1971 and the last in early 1974. The major functional capabilities introduced in M-2 are listed below.

• M2-R17 — An essentially complete SPARTAN intercept function containing all major tactical SPARTAN functional capabilities, including external-sensor and sequential-state vector inputs to exercise the SPARTAN interceptor capability in the PAR/SPARTAN mode over its complete field of fire. It also included track-function capabilities to provide tactically representative target tracking for SPARTAN engagements.
• M2-R18 — Initial SPRINT intercept, tracking, and target-selection function capabilities for evaluation of SPRINT (collocated or remote launch) engagements over the complete SPRINT free-intercept guidance mode field of fire. It also included tactical endoatmospheric target selection based on target slowdown characteristics.
• M2-R18E — Remaining SPRINT tactical functional capability including the controlled-intercept guidance mode.
• M2-R19 — Expansion of the track functional capability to include tactical track initiation, and resolved-target wake track for aspect angles less than 75 degrees. High-fidelity Digital Target Environment Simulation (DTES) was added to allow accurate simulation of heavily waking single and multiple targets.
• M2-R19E — Expansion of the waking-target functional capability to include resolved target wake track for all aspect angles. Also included target selection using the Gamma filter.
• M2-R20 — Surveillance modifications for tactical detection and acquisition performance in a dense target environment (Tactical TC-3 search). Also contained target-track capability to include unresolved-target wake track, cluster track, and track-through-tank breakup of multiple resolved and unresolved targets at all aspect angles.

Further explanation of M-2 functional capability is provided in the SAFEGUARD System, M-2 System Functional Requirements and Description.33

Data Recording and Reduction Facilities at Meck

Data gathered with the Meck System was recorded on one or more of the following devices: magnetic tape recorder, analog video recorder, digital video recorder, brush strip recorder, line printer, and teletypewriter.

Magnetic tape was used for most of the data including encoded-radar replies, MSR/MSDP interface data, intermediate results of calculations performed within the MSDP, and timing of significant system events. The recording subsystem had ten conventional tape units and two high-performance magnetic tape recorders. Data recorded on the latter was stripped and re-recorded on the conventional units before it could be reduced on the IBM 360 computer.


Table 5-2 Meck Capability Additions

The analog video recorders consisted of four 16-mm motion picture cameras mounted on four dual-sweep oscilloscopes. Each device could be assigned to display and film either sum-channel or angle-channel video from any of the eight target-track channels or two missile-track channels.

The digital video recorder encoded sum-channel A-scope video over a selectable 6-kft, 30-kft, or 60-kft range interval and recorded it on magnetic tape with digital identifiers. The data could be displayed on a Cathode Ray Tube (CRT) in the MSDP or reduced by the IBM 360 computer to produce various plots and listings. This video recorder could record data from all eight track channels, and recorded data could be reviewed immediately after a mission since no film had to be processed.

The two 120-channel brush strip recorders recorded (1) data gathered while testing interceptor missiles when they were still in the cells, and while sending orders to the in-flight missiles, and (2) the timing of significant system events. These data were available for immediate analysis.

Line printers printed summary data on MSDP performance and certain significant system events upon completion of a test. Hardware malfunctions detected by fault detection programs were printed on teletypewriters.

Since the MSDP was fully utilized in supporting software, hardware, and system testing (even on a three-shift basis), a second computer was used to support data reduction and other non-real-time software activities. Thus, most reduction of recorded data performed at Meck was done on an IBM 360 computer with the following output devices: three line printers, one Gould electrostatic plotter, one CALCOMP plotter, and one card punch.

Range Support Facilities

The Kwajalein Missile Range (KMR) range support facilities consisted of equipment and facilities used to obtain data and determine vehicle performance. Telemetry, metric, and signature data were acquired by electronic and optical tracking systems to evaluate environment, control functions, propulsion, airframe, and guidance systems on a vehicle. Figure 5-5 gives the locations of range instrumentation sites in the Kwajalein Atoll.

The following KMR facilities were used in the SAFEGUARD System Test Program:

• Telemetry systems
• Instrumentation transmitters and receivers
• Photo/optical systems
• Meteorological services
• Communications (general)
• Closed-loop television
• Timing
• Target Track Radar (TTR) at Kwajalein
• Radars at Roi-Namur.

The KMR radar facilities at Roi-Namur, operated by Lincoln Laboratory, are of special interest; the ALCOR, ALTAIR, and TRADEX radars contributed actively as data sources.

Program Coordination with Other Agencies

The test requirements received from various interested agencies were evaluated and incorporated as appropriate into the Meck System Test Program (MSTP). Figure 5-6 depicts the test planning cycle that followed and shows the various other activities that were included, such as target negotiations, resource procurement, and safety studies. It also shows the documentation, mission certification, and data analysis related to the overall Meck System Test Program.

The Meck System Test Program,34 prepared by Bell Laboratories, contains a broad summary of the entire test program and basic plans for each mission, and represents the coordinated program of all agencies. Periodic meetings of the System Test Working Group provided a forum for test-planning discussions among the interested agencies. Likewise, implementation details of each test were discussed at periodic Test Program Review meetings with those agencies specifically responsible for test and target procurement and implementation.


Figure 5-5. Range Instrumentation Sites in Kwajalein Atoll


Figure 5-6. Test Planning Cycle

The nominal target procurement cycle (see Figure 5-7) started when the Ballistic Missile Defense Systems Command (BMDSCOM) issued a Scope of Work to the Air Force, for ICBM Minuteman I and Titan II targets, and to the Navy for Intermediate Range Ballistic Missile (IRBM) Polaris missiles.35 The Air Force and Navy then contracted appropriate studies and the necessary target software/hardware. Initial lead times were two or even three years. Approximately 230 days before the target mission, Bell Laboratories, through either the Army (BMDSCOM) or Navy (SPO), issued a Targeting Requirements Confirmation Report, which officially finalized the desired targeting and missile configuration for a specific mission.

The SPARTAN and SPRINT interceptor missiles used in the Meck System Test Program constituted a potential hazard to lives and property within the Kwajalein Atoll and up to 350 nautical miles outside the Atoll. The Flight Safety System (FSS) held this hazard probability to an acceptable level. The FSS consisted of (1) a missile-borne Flight Termination System (FTS), commanded by a real-time computer program within the MSDP software system that detected a potentially hazardous trajectory and initiated missile destruction, and (2) the Flight Safety Console (FSC), which provided an operator with enough data to detect a potential hazard and manually initiate missile destruct. In addition, mission planning for the Meck System Test Program excluded interceptor missiles from unsafe portions of their capability volume.

As with interceptor flight safety, target flight safety was directly concerned with the problem of protecting inhabited land masses from target vehicles and/or target debris. Determining applicable constraints was always an in-line function of the test planner in designing system tests.


Figure 5-7. Target Procurement Cycle

Program Planning and Mission Design

As Bell Laboratories and other interested agencies supplied test requirements, planning was initiated to incorporate these requirements in the test program. A number of factors had to be considered in developing an efficient and integrated plan. Because the Equipment Readiness Date (ERD) and Initial Operational Capability (IOC) date were fixed, the program completion date was inflexible. As a result, the total number of individual missions that could be undertaken was clearly limited. Interceptor-evaluation requirements were matched to radar-evaluation requirements, and jointly dictated the target characteristics and trajectory geometry. In addition, the state of system development dictated the sequence in which requirements could be fulfilled. A feasible program also had to result in a minimum expenditure of target and interceptor resources.

As shown in Figure 5-6, individual mission designs were documented in the MSTP as early as four years prior to the planned mission date. This early documentation showed the broad objectives of the mission and provided descriptive information, including target type, planned trajectory, interceptor type, and intended intercept location.

As individual mission design progressed, proposals were submitted to the System Test Working Group for support of previously unfulfilled objectives. Until about eight months prior to the mission, more details were added in terms of secondary objectives and, in some cases, tertiary objectives. Concurrent with the assignment of objectives were flight safety considerations and special requirements concerning the geometry of each target complex. Target requirements also included special payload features, such as calibration-reentry spheres.

Extensive simulations (see System Simulation) established that the mission design was compatible with the Meck System and that reasonable assurance of success could be attained. During this detailed planning stage, a Manual Interaction Simulation (MIS) plan was established for the mission. The MIS plan permitted preplanned software control of complex sequences of display console operations in a mission environment, thus exercising the process software in a repeatable fashion. This technique, while developed initially to provide repeatability in the certification environment, was used later during the missions to reduce the number of operator actions that had to be performed in real time.
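The MIS concept amounts to replaying a fixed, time-tagged script of console actions so that every run issues the identical sequence at identical times. The following is a minimal modern sketch of that idea; the action names and timing values are invented for illustration and are not taken from any SAFEGUARD document.

```python
import time

# Hypothetical, time-tagged console-action script (names and times invented).
MIS_SCRIPT = [
    (1.0, "ASSIGN_TRACK_CHANNEL_1"),
    (2.5, "COMMIT_SPARTAN_SALVO"),
    (4.0, "ENABLE_SPRINT_SELECTION"),
]

def run_mis(script, issue_action):
    """Replay scripted console actions at fixed offsets from mission start,
    removing operator response-time variation from repeated runs."""
    start = time.monotonic()
    for t, action in sorted(script):
        delay = t - (time.monotonic() - start)
        if delay > 0:
            time.sleep(delay)   # wait until the scripted mission time
        issue_action(action)    # same action, same time, every run

if __name__ == "__main__":
    run_mis(MIS_SCRIPT, lambda a: print(f"action issued: {a}"))
```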

Finally, three months before the mission, the Mission Test Specification36 was issued for review and approval by BMDSCOM and all interested agencies. The Meck System Test Program34 was repeatedly updated to contain a current account of each of these missions as they developed in detail, as well as a current description of the total test program. One month before each mission, Bell Laboratories at Meck published a Mission Test Plan36 defining operational details.

Premission and post-mission documents are listed for each mission in Reference 36.

Test Requirements Memorandum

A mechanism, established in 1968, permitted the initiation of subsystem tests on Meck hardware, in addition to the more formally scheduled system test activity. To implement such a test, the requester issued a Test Requirements Memorandum (TRM) describing the test, including data requirements and, where possible, an estimate of the man-hours and equipment time required. Bell Laboratories at Whippany, New Jersey, and at Meck Island coordinated the test into the Meck activity schedule. Test results and data were delivered to the requester; in some cases, a formal report was also issued.37

Later in the program, this TRM procedure was expanded to include not only hardware-related tests, but also tests related to specific data-gathering tasks. The intent was to maintain control and coordination over the many tasks that Meck was asked to perform, so that proper scheduling and priorities could be established. Almost 150 TRMs were logged. Table 5-3 lists some of the more significant ones as examples of the type of activity performed under this heading.

Process Verification, Function Integration, and Mission Certification at Whippany and Meck

The complexity of Meck missions and the expense of conducting them (target, interceptor, and range-support costs) dictated that each mission be carefully planned and certified prior to its execution. A procedure established at Whippany and Meck provided reasonable assurance that the mission would be successful. Comprehensive dispersion simulations of both target and interceptor were performed in the Central Logic and Control (CLC) environment.

The Meck System software was verified, integrated, and certified prior to a live mission to establish a high level of confidence in the mission design and performance of the Meck System, exclusive of interceptor hardware. These tasks were undertaken to exercise all portions of the Meck System, both software and hardware, that would be used during SAFEGUARD missions. Live interceptors were not fired during this process.

Figure 5-8 depicts the software development cycle beginning with (tactical) system requirements and culminating with post-mission analysis. Briefly, separate software functional specifications were prepared for each of the major design areas, i.e., Surveillance, Track, SPRINT Guidance, Sensor Control, etc. From these individual specifications, each of the major design areas generated software code essentially independent of the other major design areas. (Inputs to and outputs from each major area, i.e., interfaces, were defined early in the design phase.) Each major area was responsible for testing its own code, i.e., unit testing. When all design areas successfully completed unit testing, the code was delivered to a process construction team that integrated the major blocks into a single software package (process) and placed it under configuration control. The process was then verified on the CLC, the computer for which the code was designed, by testing it against a software simulation of the target environment and major subsystems (radar, missiles, etc.). This verification was used to regression test all the capabilities that existed in previous versions of the process and verified proper implementation of any new capability. This procedure terminated with what was basically an acceptance test for the process — a simulation of the most difficult mission anticipated during the life of the process. The verification validated both the major functional blocks and the support software (operating system, data reduction program, etc.). Upon completion of the verification procedure, the process was delivered to Meck Island and design control shifted to the Whippany Certification Group.

Table 5-3
Examples of Significant TRMs

TRM-40A,B,C & D: High-power transmitter tests to determine the cause of waveguide breakdown under high duty cycle conditions
TRM-89: Track of TRANSIT satellite by MSR, TTR, and ALCOR radars to determine biases between data collected by each radar
TRM-93: Track targets of opportunity, HK-1 and HK-2, to obtain data on Minuteman I tank breakup and effect of Titan II fuel on breakup phenomenon (HK-2 was also a joint-use mission)
TRM-95: Gather data to determine MSR biases between the two faces37
TRM-97: Gather data on SPRINT firing of an aged motor
TRM-104 and TRM-105: Gather additional data on targets of opportunity involving Mark 11 and Mark 12 reentry vehicles
TRM-110: Gather Minuteman tank debris data to determine footprint for target safety consideration
TRM-133: Gather data on SPARTAN flight test of production motors from IMCO


Figure 5-8. Software Development Cycle

At Whippany, a series of "certification" tests tailored to each planned Meck mission was defined to further exercise the software process in a simulated-mission environment. The major difference between process verification and mission certification was that verification was used to check out the new "nominal" performance of the system, while certification was specific with respect to each mission and purposely attempted to stress the system at the most pessimistic extremes of the test in order to expose failure mechanisms or otherwise sensitive areas. To this end, various target and missile anomalies were simulated and the system responses were noted. The certification goal was to establish a high degree of confidence that the software subsystem and mission design would satisfy the mission objectives under a wide variety of adverse conditions.

When Meck received the new process, the process verification tests were repeated using the more complete facility. That is, it was no longer necessary to rely on software simulation of the MSR and its interfaces since an Analog Tactical Environment Simulator (ATES) allowed for actual MSR transmission and provided reply signals to the IF receiver of the radar on up to five discrete targets and two wakes in a beam. The Meck process verification was then followed by mission certification for each mission. This certification consisted of an ATES rerun of the tests performed at Whippany, plus additional tests designed by the Meck test planners. Again, the objective was to prepare for each mission in a manner that would give the highest degree of confidence in the ultimate success of the mission. Just prior to each mission, a performance-prediction estimate was prepared delineating the expected performance and outlining the potential problem areas uncovered during the mission-preparation period.

Finally, the mission was conducted, data reduction was accomplished, and the performance analysis was undertaken.

Throughout this verification and certification cycle, functional analysis determined the correct implementation of the system design specification. A comparative analysis technique was used. Independent FORTRAN simulations of the primary functions, tracking and guidance, were developed and used to generate time sequences, order streams, etc., which were then compared with similar parameters generated on the CLC. These parameters were checked for reasonableness and consistency between the two simulations. A large number of mission-reliability runs were performed under the control of the MIS plan (see Program Planning and Mission Design). Without the problem of human response time, the simulations were highly repeatable and performance discrepancies were easily detected. Discrepancies were thoroughly investigated and understood before each mission was conducted.
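As a concrete illustration of this comparative-analysis technique, the sketch below aligns two parameter histories by time tag and flags disagreements beyond a tolerance. The data values and the 5-meter tolerance are invented for the example; the actual reduction formats are not described here.

```python
# Minimal sketch: compare a parameter history from the CLC-style run against
# the independent FORTRAN-style model, flagging out-of-tolerance points.

def compare_runs(run_a, run_b, tol):
    """run_a, run_b: lists of (time, value) pairs at common time tags."""
    discrepancies = []
    for (ta, va), (tb, vb) in zip(run_a, run_b):
        assert abs(ta - tb) < 1e-9, "runs must share time tags"
        if abs(va - vb) > tol:
            discrepancies.append((ta, va, vb))   # record for investigation
    return discrepancies

# Example: track-filter range estimates (meters) with a 5-meter tolerance.
clc = [(0.0, 100000.0), (0.1, 99750.0), (0.2, 99500.0)]
ind = [(0.0, 100001.0), (0.1, 99752.0), (0.2, 99512.0)]
print(compare_runs(clc, ind, tol=5.0))   # -> [(0.2, 99500.0, 99512.0)]
```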

The verification and certification procedures exposed several design flaws that had to be corrected in the Meck System and in the tactical design. The depth of premission testing was a significant factor in the high degree of success achieved in the test program. Indeed, the actual mission sometimes seemed to be an anticlimax. Each mission, of course, was the vital validation of the preceding simulations.

Premission Radar and Missile Checkout

In addition to the routine day-to-day system checks to assure that the systems and subsystems were operating properly, a series of premission tests was performed to assure readiness to support each mission. In the case of the radar, built-in test equipment and a small computer capable of providing an order stream to the MSR were used for off-line testing. In addition, the M-0 process in the MSDP fed a set of specific orders to the radar hardware, causing the radar to react in a predefined manner. This routine checked out the radar, the MSDP interfaces, and data recording. Further, signals from an antenna on a far-test pole provided off-axis calibration of the radar and location of failed antenna cartridges. Tracking the test pole and calibration-quality spheres provided a calibration constant, which permitted direct readout of target-radar cross section and overall radar system performance analysis. The data collected was used in Post-Mission Data Analysis (POMDA).

A variety of physical radar targets deployed locally by small rockets, aircraft, or balloons was considered for premission and post-mission radar calibration. However, only balloon-lifted spheres were used extensively as SAFEGUARD targets. Most spheres were 12 inches in diameter and were of radar-calibration quality. They were also used to verify system performance and to furnish an accurate amplitude reference for analyzing recorded data. On the average, about 4 spheres were used per week, amounting to roughly 1000 over the entire SAFEGUARD Meck System Test Program.
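The calibration idea can be stated compactly with the monostatic radar equation: for a fixed radar configuration, received power varies as sigma/R^4, so tracking a sphere of known cross section fixes the proportionality constant, after which measured returns read out directly as cross sections. The sketch below illustrates this under simplifying assumptions (losses and gain variation lumped into one constant); all numerical values are invented.

```python
import math

# Simplified radar equation: P_r = K * sigma / R**4, with K lumping transmit
# power, antenna gains, wavelength, and losses for a fixed configuration.

def calibration_constant(p_r, r, sigma_known):
    """Solve P_r = K * sigma / R^4 for K using the calibration sphere."""
    return p_r * r**4 / sigma_known

def cross_section(p_r, r, k):
    """Direct readout of target cross section from a calibrated radar."""
    return p_r * r**4 / k

# 12-inch sphere: optical-region RCS is the projected area, pi * radius^2.
sigma_sphere = math.pi * (0.3048 / 2) ** 2            # ~0.073 m^2
k = calibration_constant(p_r=2.0e-13, r=30_000.0, sigma_known=sigma_sphere)
print(cross_section(p_r=5.0e-14, r=45_000.0, k=k))    # target RCS, m^2
```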

Premission missile checkout included sending steering and discrete commands to the in-cell missile and noting the response on the telemetry monitors. In addition, during the last few seconds before lift off, the missile underwent go/no-go tests to determine readiness to launch.

Mission Summary Information

SAFEGUARD missions were conducted for a variety of test purposes. Some of the missions were not part of the original evaluation plan but were in response to requests from other agencies. Much of this data was nevertheless useful to the system evaluation effort.

Post-Mission Data Analysis

Bell Laboratories and its subcontractor, Calspan Corporation, expended significant effort in performing timely analysis of mission data to confirm expected performance or, if possible, to permit the identification and correction of any system deficiencies prior to the next mission. The first step, handled at Meck, provided a preliminary statement of system performance for inclusion in 4-hour and 48-hour reports after each mission. In CONUS, the data was analyzed and presented at a Post-Mission Data Analysis (POMDA) meeting within six weeks after the mission. Representatives of all interested design areas attended the meeting. The representatives made note of unexpected performance and quickly set out to understand, explain, and, if necessary, correct the problem areas.

Bell Laboratories/Meck completed their summary of post-mission data by publishing a Final Mission Test Report36 within 90 days after the mission. Calspan Corporation also issued a series of post-mission reports:36

• Mission Data Summary Memorandum, which presented MSR data in a series of plots to be used as a tool for mission analysis and function evaluation
• Target Data Summary, which summarized the results of MSR data relating to the target-delivery system performance
• Mission Data Analysis Summary Report, which summarized the results of the primary analysis of MSR performance.

Mission Description and Results

The charts that follow summarize the major characteristics of the system test missions conducted at Meck since early 1970. Table 5-4 summarizes the concept verification phase of the test program during which major SAFEGUARD System firsts were achieved. Successful completion of these milestones permitted the timely commitment of manpower and funds to the development of the SAFEGUARD System as a viable concept.

Table 5-4
Concept Verification
(Target type: simulated target or live ICBM/IRBM)

SPARTAN
  M1-1, M1-1A*: First SPARTAN launch at Meck
  M1-4: First live-target intercept
  M1-30: First salvo
  M1-7, M1-7A*: First two-face IRBM intercept
  M2-1: First external data intercept
  M2-3: First in-flight redesignation

SPRINT
  M1-9, M1-9A*: First SPRINT launch at Meck
  M1-12: First live-target intercept
  M1-13: First salvo
  M1-8: First two-face IRBM intercept
  M2-7: First remote launch
  M2-7: First remote-launch intercept

* - Where two missions are shown, the first attempt was not completely successful.

Mission failures are referred to in several of the summary charts. Note that of the 70 mission operations conducted, 12 were listed as failures. Of these, four could be attributed to target problems, three to SPRINT missiles, three to SPARTAN missiles, one to data processing hardware, and one to a target-classification error due to a mission-peculiar attribute. Seven of the eight SAFEGUARD-related failures were followed by successful second attempts. This information is summarized in Table 5-5.

Figure 5-9 presents the broad aspects of each of the Meck missions showing the interceptor type, the target type, the date of the mission, and the software configuration in use at the time. Mission numbers enclosed in rectangles or ovals identify the twelve missions classified as failures.

Figures 5-10 and 5-11 summarize the location of each interceptor flight in the capability volume. Interceptors are classified according to target type, and the two pairs of salvos are identified.

Table 5-6 summarizes radar evaluation missions in M-2, highlighting target type versus trajectory geometry.

Table 5-5
Summary of Mission Results

                     M1 Series    M2 Series
Missions                 13           50
Successes                12           46
Failures                  5*           7***
Missions Repeated         4**          3****
Total Operations         17           53

* - M1-1, Data Processing; M1-9, SPRINT; M1-7, Target; M1-14 and M1-14A, SPARTAN W/H.
** - M1-1A, M1-9A, M1-7A, M1-14A. (M1-14A counted here, although conducted during M2 Series.)
*** - M2-42 and M2-14, SPRINT; M2-20, MSR tank-classification error; M2-44, Target; M2-28, no target deployment; M2-245, SPARTAN; M2-36, Target.
**** - M2-42A, M2-20A, M2-245A.



Table 5-6
Radar Evaluation with Waking Targets

RV      Radial                                  Offset          Fly By
MK-6    M2-1, M2-31*                            M2-10, M2-14    M2-27
SV-2B   M2-2, M2-7, M2-11, M2-35*, M2-135*      M2-15           M2-25, M2-46*, M2-146*
SV-3    M2-22                                   M2-18           M2-38
SV-4C   M2-24                                   M2-20A          M2-28**

* - Primary objective was to gather data to evaluate response to an RV flying through active tank breakup.
** - No data due to target failure.


Figure 5-9. Meck Mission Summary


Figure 5-10. Summary of SPARTAN Intercept Locations


Figure 5-11. Summary of SPRINT Intercept Locations

Targets of Opportunity

The extensive tracking resources of the MSR were attractive to many Air Force programs involving reentry-target complexes targeted into the Kwajalein Atoll area where reentry data could be obtained. These Targets of Opportunity (TOOs) were frequently used as dynamic targets to check out the SAFEGUARD System at Meck prior to a major system test. In addition, they provided targets to satisfy data requirements related to specific SAFEGUARD engineering tests requested by various design groups. Non-SAFEGUARD agencies (such as BMDATC and the Air Force) also requested MSR data on other targets of opportunity and were generally accommodated on a noninterference basis.

Using Western Test Range (WTR) numbers, Figure 5-12 tabulates the targets of opportunity (TOOs) that Meck tracked. The shading categorizes them according to their status. An "engineering test" status was given to TOOs where some interest was shown in the data and the operation could be conducted on a strict noninterference basis with no hold-status guarantees available on the target launch. "Required" status was reserved for specific TOOs where a strong interest in the data required participation even at the expense of rescheduling other activities. "Mandatory" status was given to only one TOO, which was upgraded to a joint-use target, where the SAFEGUARD project was able to influence payload and trajectory characteristics and obtained guarantees of hold status on the target launch. The TOOs identified with a "none" status were used by Meck personnel for system checkout with no data obligations. Participation in many of the TOOs resulted in useful MSR data packages, which were made available to the Air Force and other agencies. This figure also identifies the TOOs that were used to satisfy Test Requirements Memoranda (TRMs).

System Test Incentive Program Summary

Since 1971, the Bell Laboratories SAFEGUARD contract with BMDSCOM has included System Test Incentives as part of the Scope of Incentive Criteria and Evaluation. From the beginning, an important ground rule was adopted: no "out-of-line" additional effort could be expended on a mission because it was incentivized. Table 5-7 summarizes the major system test events that were incentivized for each fiscal year.

Miss distance, for incentive purposes, is defined as the scalar distance between the interceptor and the intended aimpoint at the time of warhead-command burst. This was considered to be the best yardstick for evaluating total system performance and, therefore, played a major role in the establishment of the System Test Incentive program. Although miss distance could be measured by independent sensors, it was decided that the SAFEGUARD System's calculation of miss distance using the MSDP would suffice for incentive purposes. A series of documents36 summarized the miss distance obtained on each mission, using independent sensors on most of the live-target intercepts, and tabulated these measurements with the MSDP self-scored miss distance.
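In vector terms, this definition is simply the Euclidean norm of the interceptor-to-aimpoint separation evaluated at burst-command time. A minimal sketch follows; the coordinate values are purely illustrative.

```python
import math

def miss_distance(interceptor_xyz, aimpoint_xyz):
    """Scalar separation between interceptor and intended aimpoint,
    both evaluated at the time of the warhead burst command."""
    return math.dist(interceptor_xyz, aimpoint_xyz)

# Illustrative burst-time positions in site coordinates (meters).
burst_interceptor = (1200.0, -450.0, 30_500.0)
burst_aimpoint    = (1195.0, -442.0, 30_512.0)
print(f"miss distance: {miss_distance(burst_interceptor, burst_aimpoint):.1f} m")
```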

Miss Distance Measurement Techniques

The most significant single indicator of the success level for an intercept-test mission is miss distance. The contract required that miss distance be measured by a means independent of the SAFEGUARD System to verify the quality of the internal measurement.

Significant effort was required to select acceptable methods for obtaining the independent real-time measurements. To be acceptable, a method had to be of reasonable cost, be available in time, and produce measurements of adequate accuracy.

Fortunately, a large background of experience was available on which to base the selection of methods. The single-path doppler system, successfully employed in the NIKE-ZEUS era, was initially favored because of its very high accuracy and demonstrated reliability. However, after completing the initial stages of adapting this method to the SAFEGUARD environment, the technique was abandoned because of cost. The lesser but acceptable accuracies of other recourses yielded higher cost effectiveness.

Three techniques for independently measuring miss distance38 were principally employed during the series of SAFEGUARD intercept-test missions.

The TTR/ALCOR [*] "two-radar solution" was the principal recourse for the relatively long-range SPARTAN intercept missions.

[* - TTR = Target Track Radar, ALCOR = a member of the three-radar complex of the PRESS Project at Roi-Namur. Each is a high-performance C-band tracking radar.]

In this technique, one of the radars continuously tracked the target while the other tracked the interceptor. Any biases between the radars were accurately determined by simultaneously tracking the target with both radars until it was necessary to switch one to tracking the interceptor. Backup was provided by processing the data obtained during the interval when both objects were contained in each radar beam.
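A sketch of the bias-correction arithmetic implied by this technique follows. The track samples and burst-time positions are invented, and the actual reduction certainly handled more error sources than a constant per-axis offset; this is only the core idea.

```python
import math

def mean_bias(track_a, track_b):
    """Per-axis mean of (A - B) over the simultaneous-track interval,
    estimating the constant bias between the two radar solutions."""
    n = len(track_a)
    return tuple(sum(a[i] - b[i] for a, b in zip(track_a, track_b)) / n
                 for i in range(3))

# Simultaneous track of the same target by both radars (meters).
a = [(100.0, 200.0, 300.0), (110.0, 210.0, 310.0), (120.0, 220.0, 320.0)]
b = [(103.0, 198.0, 305.0), (113.0, 208.0, 315.0), (123.0, 218.0, 325.0)]
bias = mean_bias(a, b)                               # (-3.0, 2.0, -5.0)

# At burst: target from radar A, interceptor from radar B (bias-corrected).
target = (150.0, 250.0, 350.0)
interceptor_b = (151.0, 247.0, 353.0)
interceptor = tuple(interceptor_b[i] + bias[i] for i in range(3))
print(f"miss: {math.dist(target, interceptor):.1f} m")
```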

The TTR or ALCOR "single-radar solution" was the principal recourse for the relatively short-range SPRINT intercept missions. In this technique, the radar tracked the preferred reference object (usually, but not necessarily, the target) throughout the mission, and recorded range data and angle-off-beam-center data for each radar pulse occurring during the interval when the other object was passing through the radar beam.
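Under a small-angle assumption, each such pulse yields the relative position of the passing object in beam coordinates: two cross-range components (range times angular offset) and a down-range component (range difference). A minimal sketch with invented numbers:

```python
import math

def separation(r_ref, r_obj, off_az_rad, off_el_rad):
    """Single-radar geometry: reference object held at beam center at range
    r_ref; other object seen at range r_obj, offset from beam center by the
    given azimuth/elevation angles (radians, small-angle approximation)."""
    cross_az = r_obj * off_az_rad        # cross-range offset, meters
    cross_el = r_obj * off_el_rad
    downrange = r_obj - r_ref            # range difference along the beam
    return math.sqrt(cross_az**2 + cross_el**2 + downrange**2)

# One pulse: target at beam center at 25.000 km; interceptor at 25.010 km,
# offset 0.4 mrad in azimuth and -0.2 mrad in elevation.
print(f"{separation(25_000.0, 25_010.0, 4e-4, -2e-4):.1f} m")
```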

The network of Recording Automatic Digital Optical Trackers (RADOTs) served as a major backup for SPRINT intercepts, furnishing photo-optical triangulation whenever at least three of the eight separately located instruments were not obscured by weather.
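Triangulation from several stations can be posed as a least-squares problem: find the point minimizing the summed squared distances to all sightlines. The sketch below shows this standard formulation with made-up station geometry; it is not the RADOT reduction algorithm itself.

```python
import numpy as np

def triangulate(stations, sightlines):
    """Least-squares intersection of sightlines: each station contributes
    a projector onto the plane normal to its (unit) sightline direction."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, d in zip(stations, sightlines):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # removes the along-line component
        A += P
        b += P @ p
    return np.linalg.solve(A, b)

# Invented station positions (meters) and perfect pointing for the demo.
stations = [np.array([0.0, 0.0, 0.0]),
            np.array([5000.0, 0.0, 0.0]),
            np.array([0.0, 7000.0, 0.0])]
target = np.array([2000.0, 3000.0, 12000.0])
sightlines = [target - p for p in stations]
print(triangulate(stations, sightlines))   # recovers ~[2000, 3000, 12000]
```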

For each intercept mission, all available results from primary and backup sources of independent miss-distance measurements were collected into a coordinated report. This facilitated applying the results to the evaluation of miss distance as determined in real time and recorded by the MSDP of the SAFEGUARD System. The independent measurements yielded target-to-missile separation versus time. Evaluation of the SAFEGUARD function required interpretation of success in computing the time-of-burst and in delivering the burst-command signal to the interceptor.


Figure 5-12. Targets of Opportunity Mission Summary

Table 5-7
Summary of System Test Incentives

Fiscal  Major System Incentive                Mission       Completion Date (GMT)               Contract  Points   % of
Year                                          Number                                            Points    Earned   Maximum

1971    First SPARTAN Intercept               M1-4          August 29, 1970                     10        10       100
        First SPRINT Firing at Meck           M1-9, M1-9A   October 29, 1970; December 5, 1970  10         6        60
        First SPRINT Intercept                M1-12         December 24, 1970                   10        10       100
        First SPARTAN Salvo                   M1-30         January 12, 1971                    10        10       100
        First SPRINT Salvo                    M1-13         March 17, 1971                      10        10       100
        Miss Distance                                       May 7, 1971*                        10         8.33     83

1972    First SPARTAN Intercept Using
          Tactical Guidance                   M2-1          August 28, 1971                     12        12       100
        First SPRINT Remote Launch            M2-7          March 17, 1972                      18        18       100
        Miss Distance                                       July 16, 1972*                      30        24.64     82

1973    Miss Distance                                       July 21, 1973*                      35        22.61     65

1974    Miss Distance                                       November 30, 1973*                  15        15       100

        Total                                                                                   170       146.58    86

*Date of last mission included in calculation.

INTEGRATION AND DEMONSTRATION TESTING AT TSCS AND SITE

Early in the SAFEGUARD development cycle, the problems associated with system testing of SAFEGUARD were recognized and planning was initiated for the extensive testing program that would be required following basic function-integration testing of the software processes [MW, PW, and BMDC Weapons Process (BW)] at TSCS and later at the tactical sites.

A SAFEGUARD System Integration and Evaluation Test Plan (SIETP)39 was developed and documented, which provided detailed plans for accomplishing the following objectives:

1. Integration of the three sites (PAR, MSR, and BMDC) into a smoothly operating system and verification of interfaces
2. Verification of the baseline system-performance capability against the design evaluation threat
3. Determination of the system-performance limits, the limiting subsystems, and functions using extensions of the design evaluation threat
4. Evaluation of the system-overload response and degraded-mode capability.

Objectives 1 and 2 were addressed by a series of approximately 12 basic tests. The initial tests of the series did not include threat traffic but were limited to satellite background, and were designed to thoroughly test the system interfaces and Command and Control (C&C) subsystem, including displays and manual actions. The remaining tests exercised the total system in the three basic system-operating modes (Accidental Launch, Pindown Response, and Minuteman Defense), utilizing nominal threat-traffic levels.

Objectives 3 and 4 were addressed by a series of 25 additional tests designed primarily for system evaluation, which provided increasingly severe system environments (nuclear and clutter) to probe the limits of system capability and provide increased traffic, well into the system-overload regions. These tests, as well as the initial series, were to provide an extensive data base for validating the system simulations, which could then be utilized to extend the evaluation in whatever direction was appropriate, including variations in the offensive threat.

In the spring of 1973, the SAFEGUARD program received significant redirection as a result of the Strategic Arms Limitation Treaty (SALT I) agreements. The limited defensive capability provided by a single site made an extensive evaluation of system capability less appropriate, and the program objective shifted toward obtaining operational experience with deployment, operation, and modification of a complex Antiballistic Missile (ABM) or similar system. These factors, as well as significant adjustments in schedules and funding, called for a System Test and Evaluation Program more limited in scope and designed to support additional basic objectives.40

In addition to supporting the objectives of initial system integration and verifying the baseline system-performance capability, the revised system test program40 was structured to meet the following objectives:

• Provide a convenient and satisfactory vehicle by which the installed netted system could be accepted by the Government
• Provide a series of major tests which would serve as a tool in the post-IOC time period for assessing system operability and readiness on a continuing basis [System Readiness Verification (SRV)]
• Provide a limited data base for system simulation validation.

The resulting test program consisted of eight basic tests. As in the previously described program, the first three provided for integrating the three sites and verifying the interfaces and Command and Control subsystem. The remaining tests exercised the netted system in the three basic operational modes at nominal traffic levels and at nominal threat parameters.

System Integration Testing

SAFEGUARD System testing made extensive use of the system exerciser subsystem (hardware and software), which is a part of the tactical configuration at each site, to facilitate assessing operability on a continuing basis. The exercise subsystem simulates the threat environment and responds to tactical-software engagement planning by simulating defensive missiles. An off-line facility, the SAFEGUARD Threat Action Generator (STAG), generates a Site Threat Trajectory File (STTF) tape, which provides offensive trajectories and threat parameters to the system exerciser. (STAG is a software facility that enables simulation of a threat for use by the system exerciser.) In addition to producing the threat tapes, STAG can produce Site Information Tapes (SIT), which simulate intersite communications.
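The data flow described here (an off-line generator writing a time-tagged trajectory file that the on-line exerciser later replays as the simulated environment) can be sketched as follows. The record layout is entirely hypothetical, since the actual STTF format is not described in this chapter.

```python
import json

def generate_sttf(path, objects, dt=1.0, steps=5):
    """Off-line (STAG-like) step: write time-tagged threat-object states,
    here simple straight-line motion, to a replay file."""
    with open(path, "w") as f:
        for k in range(steps):
            t = k * dt
            for obj_id, (pos, vel) in objects.items():
                state = [p + v * t for p, v in zip(pos, vel)]
                f.write(json.dumps({"t": t, "id": obj_id, "xyz": state}) + "\n")

def replay_sttf(path):
    """On-line (exerciser-like) step: stream the recorded states back."""
    with open(path) as f:
        for line in f:
            rec = json.loads(line)
            yield rec["t"], rec["id"], rec["xyz"]

# One invented threat object: position (m) and velocity (m/s).
objects = {"RV-1": ([0.0, 0.0, 300e3], [1200.0, 0.0, -2400.0])}
generate_sttf("sttf_demo.jsonl", objects)
for t, oid, xyz in replay_sttf("sttf_demo.jsonl"):
    print(t, oid, [round(c) for c in xyz])
```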

The initial integration tests were performed first at the TSCS with each software process operating in the "local" mode, i.e., unnetted with the rest of the system and with SIT simulating the other sites. Local-mode tests were conducted to establish software sanity and correct any software deficiencies that became visible. Local-mode tests were followed by "netted" tests with real-time intersite communications (no SIT) and STTF tapes synchronized so that each site observed the identical threat. Extensive recording during the test run, together with off-line data reduction, permitted detailed investigation of software problems and analyses of functional performance.

The local and netted tests were run repeatedly over a period of several weeks to establish the reliability of system response and to ensure that most, if not all, software problems had surfaced during the runs and had been corrected. Following satisfactory completion of a test sequence at TSCS, the identical sequence was repeated at the tactical site.

Acceptance Testing

Using the system test program as a vehicle for Government "acceptance" of the installed system imposed a number of specific requirements on the program. The means by which these requirements were implemented are illustrated in Figure 5-13 and described in the following paragraphs.

The planned system acceptance tests, as described in the System Integration and Evaluation Test Plan,40 were reviewed with BMDSCOM. The review was conducted to ensure that the tests adequately covered all basic system operational modes and threat variations. In a number of instances, additions and/or modifications to the tests were made at the request of BMDSCOM.


Figure 5-13. SAFEGUARD System Test Program

System performance criteria and specific performance bounds were established for each of the tests. In developing performance criteria, the following guidelines were used:

1. The number of pass/fail criteria was minimized by utilizing system-level criteria rather than detailed functional-performance criteria
2. Performance bounds were established such that a properly operating system should pass a given test on a single run
3. The set of criteria was sufficiently comprehensive that an improperly operating system could not go undetected
4. The pass/fail criteria for each test were to permit rapid assessment of test success.

The performance criteria41-45 [Acceptance Test Requirements (ATRs) and System Technical Verification Test Requirements (STVTRs)] were subject to Government concurrence and approval and were documented along with detailed test descriptions in System Test Specifications (STSs).46-50 The STS documents were placed under formal change control and required BMDSCOM concurrence prior to modification.

In line with the rapid-assessment guideline (4 above), a system data-reduction program was developed to extract, from the considerable quantity of data recorded, that data relevant to the system performance criteria. The program provided, in a very readable format, the information necessary to verify the pass/fail status.
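The rapid pass/fail assessment thus reduces to checking a handful of extracted system-level measures against pre-established bounds. A minimal sketch follows; the criterion names and bounds are invented and do not come from the STS documents.

```python
# Hypothetical system-level criteria: measure name -> (lower, upper) bound.
CRITERIA = {
    "rvs_engaged_fraction": (0.95, 1.00),
    "mean_track_init_time_s": (0.0, 4.0),
    "intercepts_committed": (8, 12),
}

def assess(measures):
    """Check each extracted measure against its bounds; the test passes
    only if every criterion is within bounds."""
    report = {}
    for name, (lo, hi) in CRITERIA.items():
        value = measures[name]
        report[name] = (value, lo <= value <= hi)
    passed = all(ok for _, ok in report.values())
    return passed, report

passed, report = assess({"rvs_engaged_fraction": 0.97,
                         "mean_track_init_time_s": 3.1,
                         "intercepts_committed": 10})
for name, (value, ok) in report.items():
    print(f"{name}: {value} {'PASS' if ok else 'FAIL'}")
print("TEST", "PASS" if passed else "FAIL")
```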

The acceptance test criteria were applied only to the tests conducted at the tactical site. As each of the tests was concluded at the TSCS, however, a report and data package was provided to BMDSCOM.

System Readiness Verification Tests

As indicated previously, the system exerciser subsystem was part of the tactical configuration at each site to assess system operability and readiness on a continuing basis. The expected variability in overall system response had to be well characterized for any tests to be used for readiness testing. The tests utilized for integration and system acceptance testing met this requirement and, therefore, were used for the System Readiness Verification exercises. The complexity of the SAFEGUARD System and the complex interactions between functions were such that, although the ultimate outcome of an exercise in terms of defense level was predictable, the path taken through the system to achieve that result was quite variable and repeatable to only a limited degree. Statistical variations, not only in the threat parameters but also in functional performance within the system, produced variations in performance that represented entirely proper system operation but were quite unpredictable.

The system tests were repeated many times at TSCS and at site, in addition to being simulated via the system simulation. Therefore, a large data base was available from which to draw functional performance bounds for SRV exercises.

To avoid the need for off-line data reduction to appraise the results of an exercise, facilities were provided in the tactical software to make the information readily available. During an exercise run, information concerning detection, tracking, discrimination, and engagement of a threatening object was output to a high-speed printer. At the conclusion of the exercise, information from the weapons process was combined with system exerciser data, and a "quick look" report on engagement results was provided via the printer.

Evaluation and Simulation Verification

System simulations were used extensively in connection with the system test program. Simulation results were used in the following ways:

1. Design of the test scenarios to ensure that the desired functional capability was being exercised
2. Selection of performance criteria for acceptance tests to be indicative of overall system response and capability
3. Fine tuning of the performance bounds to ensure that a properly operating system would pass and that system deficiencies would be detected with high confidence
4. Predictions of total system response to a particular scenario such that anomalous performance could be quickly identified for further analysis
5. Indications of the expected variability in system response due to relatively low probability occurrences and estimates of the probability of a particular response occurring.

Item 5 in particular was a major time saver with respect to test analysis. The stochastic nature of the system inputs (threat) and performance of system functions frequently produced results that initially appeared to indicate system deficiencies. However, analysis indicated that, in most cases, these results were normal system response to low probability events.

MAJOR CHALLENGES AND INNOVATIONS

Evaluating a system as complex as SAFEGUARD presented many problems. Compounding these was the tight time frame involved. Many of the innovative techniques developed in response proved to be more effective than anticipated.

Cost Effectiveness of System Evaluation

Considerable resources were spent on the system evaluation organization — a department-size effort in-house, augmented by a subcontractor simulation development and test data analysis effort of approximately equal size. This investment led to the development of an in-depth analytical capability that otherwise could not exist with the tight schedules in the design or test organizations. Applying this capability to the analysis of system performance led to identifying and resolving fundamental system problems that required simulation-based analysis owing to their complex and interactive nature (e.g., track and intercept of waking targets in a tank-breakup environment, traffic management, and overload response). Similarly, this detailed understanding of functional performance allowed significant contributions to the quality and efficiency of the key SAFEGUARD test programs — Meck, MSR site, PAR site, and TSCS. System evaluation inputs were particularly useful in defining a relatively small number of test scenarios that stressed all key aspects of functional performance, predicted system test response through simulation, and defined data-recording/data-reduction requirements tailored to expedite the analysis of known performance problems. In summary, the cost of the system evaluation program was well justified by the results — a significantly stronger system design and higher quality test programs.

Multisimulation Approach to System Evaluation

Developing a family of complementary simulations rather than a single, large simulation embodying both broad scope and extremely detailed models proved most effective. Separate simulations were developed for detailed analysis of radar and missile functional problems, in addition to the total system simulation. This approach allowed a great deal of flexibility in the approach to analyzing a particular problem. In particular, the long delays in obtaining analysis results, which are necessarily associated with the use of a single large simulation, were avoided.

Iterative Approach to Analysis of Complex System Problems

It is most efficient to first analyze a given system problem with the simplest model possible, then use the insight gained to build an intermediate-level simulation to probe key areas further, and finally, put the appropriate level of detail into the system simulation. This approach significantly improves the cost effectiveness of system simulation development effort by avoiding a uniform level of detail in the models — too detailed in some areas and not detailed enough in others. Most important, it maximizes the timeliness of the results by allowing insight based on analysis with simple models to be factored into project decision-making sooner than would be possible with the more detailed simulation models.

System Exerciser Evaluation

Insufficient resources were devoted to an evaluation of System Exerciser (STAG, EMX, and EPX) fidelity and flexibility. This led to the relatively late discovery that a few of the key elements of system response (such as unresolved-target track at the PAR) could not be accurately exercised with the automatic facility. The resultant crash program of exercise-facility modifications and "work arounds," while successful, was an extremely inefficient use of resources. Further information on the System Exerciser is given in Chapter 4.

Requirement for Intermediate Level System Design Documentation

When the focus of the system evaluation effort shifted from analysis of the system design implied in the DPSPRs to analysis of the actual design implementation in MW and PW, it became apparent that a level of documentation falling between the DPSPRs and the process workbooks/coding specifications was required. It was not possible to maintain useful simulation development and modeling schedules when working from coding specifications. The required intermediate level of documentation, which was provided in time to allow the effort to proceed, is best exemplified by the PW Functional Design Requirements.

Evolution of Meck System Test Program

Major challenges to the planning and execution of the Meck System Test Program related primarily to fulfilling the major objectives of the program within economic constraints, which tended to change with varying funding pressures. An adaptive process resulted in which test planners established a carefully considered priority-ordering of the test requirements so that implementation costs could be weighed against value. Examples of challenges and innovations that were necessary are included in this section.

At the time the SAFEGUARD Meck System Test Program was initiated in early 1970, the Meck System test resources (targets and interceptors) planned for use in the test program through June 1974 were somewhat greater than those actually expended. Table 5-8 compares the final program to the February 1970 program. A number of factors contributed to these differences.

The test program was continually scrutinized for changes that could yield economies. For example, increased emphasis on the use of simulated or taped targets for intercept purposes was justified by experience and subsequently resulted in the elimination of several targets.

Experience with azimuth diversity and exploration of the MSR face-overlap region using IRBM targets demonstrated that these factors were not as troublesome as originally feared. As a result, Bell Laboratories recommended the elimination of several IRBM targets.

The need for target-tracking tests of actual FOBS targets was weighed against development costs of FOBS. On the basis of our demonstrated ability to conduct realistic simulations, it was decided that TSCS exercises against simulated FOBS would be more cost effective.

Table 5-8
Resource Comparison Between Present and 1970 Version of Meck System Test Program

                    1970    1975
SPRINT
  Meck               24      26
  Remote Launch       6       7
  Warhead            (8)*   (11)

SPARTAN
  Meck               26      25
  Remote Launch       4       0
  Warhead            (6)    (11)

ICBM
  Minuteman          33      26
  Titan               7       7
  FOBS                4       0

IRBM
  Polaris A2          6       4
  Polaris A3          3       1

Note: Contingency resources were not included in this tabulation.
* - Parentheses indicate the warhead totals were included as part of the Meck or remote launch missions.

The development of the tactical system depended heavily on the evaluation of Meck test data. Timely analysis resulted in a continuing re-evaluation of the test program and provided additional emphasis on data for suspected problem areas. For example, additional target-tracking tests were added to better characterize tank breakup when it was found that a redesign of the system was necessary to perform RV data processing through tank breakup.

Additional economies in the execution of the test program were obtained using the operational concepts of mission pairing and mission coupling. It was found that, by observing reasonable constraints, a missile flight test could be paired with a target-tracking mission, yielding significant savings in the mission preparation and operation intervals. This reduced the total required man-hours and permitted maintaining, and even increasing, the pace of the test program.

The remote launch station on Illeginni Island supported a series of critical tests. The success there permitted it to be closed down after calendar year 1973, approximately 16 months before its planned closing. The cost savings justified this change even though the more tactical-like facilities at Illeginni were preferred for assembling and launching production SPARTAN missiles. The remaining production missiles were rescheduled for launch from Meck cells, with a minor compromise in shipping and handling and a significant saving in program costs.

An exhaustive set of highly repeatable simulations was made possible by the innovation of the Manual Interaction Simulation, which eliminated human response-time differences from repeated runs of the same mission. These mission-reliability runs permitted early detection of anomalous system performance prior to each mission and thereby gave high confidence that the mission itself would run as expected.