Phase II of the DFBW program tested a triplex redundant digital computer system, which meant replacing all of the flight control hardware and software installed in the F-8 Crusader. Phase II lasted about 12 years.
Exactly 2 years and 9 months elapsed between the last flight of Phase I and the first flight of Phase II. Among the hardware items replaced in NASA 802 were new mounts and a new cooling system for the computers, the new computers themselves (both primary and backup), a 3-channel pilot interface unit, larger hydraulic pumps, rate gyros and linear accelerometers in place of the inertial system, a sidestick (included in the primary system), more powerful actuators, a new generator, fuel tanks, and an upgraded engine. On the software side, there were new redundancy management functions and other new software for ground control of the airplane and remote telemetry. Nearly everything inside NASA 802 was new, but the only external evidence was a video camera and an antenna added to the vertical tail.
There was also a short-lived Phase IB that tested the various computer synchronization configurations that would be needed for the triplex computer system in Phase II.
For Phase II there was talk of replacing the F-8 with the F-4 Phantom the USAF had used to test an analog FBW system. A Lockheed JetStar, already owned by NASA, was also considered. However, the F-8 was retained because the greater availability of spare parts made it cheaper to operate.
It took NASA 2 years to find a suitable computer. Among the machines examined were a NOVA computer with 4,000 words of memory at a cost of $20,000 and a Honeywell 601 with the same memory at a weight of less than 30 pounds. The RCA 215 and the Control Data Corporation Alpha were on the list at a price of $35,000 each for a lot of 25 computers. All of these computers were inexpensive and light but were considered short of memory and processing power. The next group included the Honeywell 801 “Alert”, the Sperry 1819A and the General Electric CP-32A. These machines had a larger memory of 16,000 words at 18 bits but were more expensive, in the range of $70,000 per computer.
Software engineers wanted a machine that used floating-point arithmetic instead of the fixed-point arithmetic of the Apollo program computers, which was considered more error prone. A floating-point machine stores each number as a significand plus an exponent that records where the binary point sits, so the programmer no longer has to track scaling by hand. Floating-point arithmetic is therefore easier to program, but for a given word size it gives up some precision bits to the exponent; a larger word size restores that accuracy.
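To make the trade-off concrete, here is an illustrative sketch (not flight code) contrasting the two representations, emulating a fixed-point word with 8 fractional bits in plain Python:

```python
SCALE = 1 << 8  # implied binary point: 8 fractional bits

def to_fixed(x: float) -> int:
    """Encode a value as a scaled fixed-point integer (round to nearest)."""
    return round(x * SCALE)

def from_fixed(n: int) -> float:
    """Decode a fixed-point integer back to a real value."""
    return n / SCALE

def fixed_mul(a: int, b: int) -> int:
    """Multiply two fixed-point values; the programmer must rescale by hand."""
    return (a * b) // SCALE

# The programmer tracks the binary point manually -- the error-prone part.
a, b = to_fixed(1.5), to_fixed(2.25)
print(from_fixed(fixed_mul(a, b)))   # 3.375

# A floating-point word carries the exponent itself, so scaling is automatic.
print(1.5 * 2.25)                    # 3.375
```

The rescaling step inside `fixed_mul` is exactly the bookkeeping that made fixed-point flight code tedious and error prone; in floating point the hardware performs it implicitly.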
On 10 November 1972, NASA settled on IBM’s AP-101. The –101 had a 32,000-word memory, consumed 370 watts of power and weighed 47.7 pounds. The computer required no external cooling, and the internal IBM cooling system would be operable to 50,000 feet. The AP-101 was the same computer used in Rockwell’s Space Shuttle, and because of this, the compiler and other support software IBM was developing would essentially be provided for free. On 27 August 1973, IBM signed a contract to supply computers to the FBW program.
IBM’s AP-101 pallet mounted 3 computers and a single 3-channel interface unit. These computers could do both floating-point and fixed-point arithmetic. They could process 480,000 instructions per second (sixty times faster than the IBM computer used in the Gemini program less than 10 years earlier!). MTBF (Mean Time Between Failures) was projected to be 5,484 hours but proved to be much lower.
Fixing the AP-101 was one of the major hurdles of the program. The F-8 program received serial numbers 1 to 9 of the AP-101, which meant the computers from these early lots were the most likely to fail. The first bugs ranged from floating-point failures and an instruction that didn’t work to a 50% drop in the data transfer rate. By mid-1975, there were 19 faults in the first 7 computers, yielding an MTBF of 204 hours, less than 5% of the predicted rate. Case in point: some of the failures were due to the separation of the circuit boards, which were built in layers. It turned out that the board manufacturer was using a watered-down coating fluid between layers that expanded when heated while the computer was running. Fixing these issues and others ranged in cost from $87,000 (for the first computer) to $130,000 (for the last).
On delivery of the flight system, things were pretty much as expected: an MTBF of 675 hours at 7,000 hours of use. From February to July 1976, operations reached 1,200 hours per month with an MTBF of 500 hours. In September 1976 the MTBF reached a low of 375 hours, which delayed the first flight by 4 months due to a major hardware rework. Eventually the MTBF recovered to 500 hours at 18,000 hours of operation. The AP-101 never did meet IBM’s reliability projections.
The main software problem was how to get a multicomputer system to behave like a single computer for the control laws and like 3 independent computers for fault tolerance. The solution to synchronization was to schedule the software to execute cyclically within a 20-millisecond loop. Within each loop the computers would synchronize up to 3 times, sending each other discrete signals and sharing data, with the attempts totaling no more than 200 microseconds. If a computer failed to keep in step, the others would ignore it and move on. Built-in test software would detect the failure and restart the computer. The new software for Phase II also meant new pilot interface protocols.
The mode-and-gain panel still had the direct (DIR), stability-augmentation (SAS), and control-augmentation (CAS) modes, but added a maneuver-load-control button in CAS mode for predictive augmentation in pitch and expanded the 4-position gain switches to 5 positions. There was also a digital autopilot with Mach-hold, altitude-hold, and heading-hold selections.
The Phase II system also allowed the pilot to access the computer software from inside the cockpit via a “Computer Interface Panel” (CIP), which contained three seven-segment displays, 2 thumbwheels numbered 0 to 9, and “enter” and “clear” buttons.
The software used fixed-point arithmetic for sensor data and floating point for the control laws. The memory layout started with 2,000 words of data. The operating system and redundancy management software comprised the next 3,000 words. The control laws took 5,000 words, and the sensor redundancy and preflight testing software took 2,500 words each. The ground display and program load instructions took up the final 4,000 words, for a grand total of 19,000 of the initial 24,000 words of space (expandable to 32,000 words).
By early 1976, the software was mature enough for the pilots to be involved in verification and QA. Release 5 of the software arrived by 22 June for flight review on the “iron bird” but had so many problems that the review was cancelled. Release 6 arrived in July and was tested without problems. By late July, the Release 7 software was verified, and it was ready for flight qualification by 10 August.
Phase II also meant a name change for the BCS (Backup Control System) to the Computer Bypass System (CBS). More importantly, it meant new hardware in the form of triply redundant analog computers, the same analog computers used in the USAF’s F-4E FBW program.
On 18 and 19 September 1973, final approval of the cockpit panels was achieved in preparation for first flight. Additional testing delayed the first flight until late December 1974, and it was delayed again by more software and hardware problems. The final design review took place on 29 May 1975.
Lightning testing also took place in Phase II. It was found that magnetic fields leaked into the interior of the aircraft because of the proliferation of openings in the fuselage, so it was simply suggested to avoid flying through thunderstorms (something aircraft with conventional flight control systems already do). There were no lightning tests on the AP-101 computers themselves, only static discharge tests.
By January 1976 the program had lost 2 months because Release 2 of the software failed to synchronize the computers. On 5 April Gary Krier flew the simulator with the Release 2 software installed and gave it Cooper scale ratings of 1.5 and 3. In June, Krier flew the Release 6 and 7 software on the iron bird, with several anomalies noted and resolved. On 27 August 1976, with the software issues resolved, the F-8 took off from Edwards AFB for the first flight of Phase II.
The Phase II flight-test phase lasted from 27 August 1976 to 16 December 1985. The mission rules for the initial flights of Phase II came down to a 2-part procedure: try a reset of the indicator; then, if the problem persists, return to base in a configuration governed by the following table:
| Failure In: | If In: | Return In: |
| --- | --- | --- |
During preflight for the first flight, the CBS failed its self-test twice, so the preflight was restarted. A canopy latch also failed to seal correctly but, with the assistance of a ground crewman, this too was resolved. Krier made a 45-minute flight with the “best” of the nine AP-101 computers installed: serial number 3 with 2,135 hours of operation in Channel A, number 8 with 1,576 hours in Channel B, and number 4 with 2,951 hours in Channel C.
For the second flight on 17 September, which proved to be pivotal, computer number 4 moved to Channel B and number 7 replaced it in Channel C. (At some point during the test program all 3 computers failed, at least one of them in flight.) The second flight’s objective was “envelope expansion”: increasing the range of altitude, airspeed and g-forces 802’s new flight control system could withstand. The intention was to fly 802 to 40,000 feet to see how the computers’ cooling system worked in low-density air, then down to 20,000 feet in sustained supersonic flight to see how the cooling system handled moderate heating and g-loads of 4.
An account of that flight follows:
Krier used afterburner on takeoff because of the full load of fuel needed to achieve the high altitude and speed required for the research flight. After a normal climb to 20,000 feet, Krier did some small maneuvers to exercise all the control surfaces, then repeated those maneuvers at 50-knot speed increments, eventually using the afterburner again to nudge the F-8 past 500 knots toward supersonic speed. He did stability and control tests up to 527 knots, Mach 1.1, then began a supersonic climbing turn to 40,000 feet in afterburner. 23 minutes after takeoff, while trying to level off, Krier cut the afterburner at Mach 1.21, and within one second the Channel A fail light and its associated air-data light illuminated. The computer tried a restart, failed, then simply quit. Without hesitation, Krier began the return to base, following the rules and staying in primary mode since he was in it when the failure occurred. An uneventful landing on the 2 good computers followed.
The MTBF of the AP-101s was at its lowest point in the program at that time: 350 hours. Each computer had been modified to fix prior problems, and no two of them were internally alike, because a different component had failed in each one. Testing was halted until all 9 computers could be brought up to the same standard. In early January 1977 all nine were returned to NASA Dryden.
On 28 January 1977, computer serial number 3 was back in Channel A with the intention of completing the test objectives set out for the second flight. 38 minutes into the flight, at Mach 1.1 and 40,000 feet (almost the same conditions as the second flight), the Channel A failure lights lit up again. Once again Krier returned to base on the 2 good channels. Serial number 3’s self-test routine had detected an error in memory, and the program tried to restart the computer 19 times before giving up and declaring a “self-fail.” After the flight, engineers sent the computer back to IBM for another refurbishment.
Failures on consecutive flights were frustrating to the Dryden team, but they proved that the triplex system handled failures well. Still, IBM’s projected MTBF after the refurbishment was 1,030 hours; the actual figure for the first 5 machines was 354 hours, unchanged from when all 9 computers were sent to IBM the previous fall.
The next 2 flights occurred in February and early March 1977. These flights passed without incident and tested the autopilot and augmented modes. Modes were mixed in various regimes of flight, and confidence in the system (especially the computers) gradually increased.
The F-8 flights in support of the Space Shuttle program began in 1977 to test the Shuttle’s backup flight control system software. Originally, the Shuttle’s flight control system consisted of 5 computers running with no backup. The system was revised to include 4 computers with the 5th running as a backup, providing the functions needed to return to Earth if the other 4 failed. In the F-8 program, NASA contracted IBM to write the backup software, which was then tested running in parallel with the F-8 DFBW FCS. To run the software, the pilot would enter code “60” in the Computer Interface Panel. This would shut down the usual software downlink and switch to the piggybacked software’s downlink. He would then make a series of simulated Shuttle approaches. When finished, the pilot would switch back to the normal downlink by entering code “61” in the CIP.
After some initial problems with the software tapes, the first test flight in support of the Shuttle was on 18 March 1977. To simulate Shuttle approaches, the pilot, this time McMurtry, kept the power at idle and deployed the speedbrake, giving the aircraft a very high rate of descent. He flew the approach and descent profile 6 times, each roughly consistent with the others. Krier flew profile 2 on the morning of 21 March and profile 4 in the afternoon. The next morning McMurtry flew profile 5, with Krier going up and flying profile 4 in the afternoon. Over the next few days Krier and McMurtry flew numerous profiles. By 15 April the F-8 program had not only gained plenty of experience in supporting the Shuttle program but also increased the reliability of the AP-101. F-8 flights in support of the Shuttle program were then halted for 2 months as the aircraft was prepared for the next set of tests, the Remotely Augmented Vehicle experiment.
The Remotely Augmented Vehicle (RAV) experiment was an attempt to enable in-flight changes to an aircraft’s control laws. Telemetry downlinks would provide the vehicle state to a computer on the ground, and uplink commands would be sent back to the actuators on the vehicle as though the computer and software were on the airplane.
The ground-control software initially consisted of a simplified version of the roll-and-yaw stability augmentation and pitch-control augmentation modes. There was no autopilot or sidestick support. Structurally, the software had an executive routine that contained the interrupt structure and synchronization logic, plus 5 subroutines. 4 of them executed the control laws, one of which handled the trim commands in a faster inner loop, with general feedback in a slower outer loop. The fifth performed initialization and ran synchronization in the background. The telemetry downlink data went directly into the control-law routines, and the executive handled the uplink of the four 10-bit command words.
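A structural sketch of that layout might look like the following. All names, gains, and the 4:1 inner/outer rate ratio are assumptions for illustration; only the executive-plus-subroutines shape and the 10-bit uplink words come from the description above.

```python
class RavExecutive:
    """Hypothetical model of the ground software's executive routine."""
    def __init__(self):
        self.frame = 0
        self.initialized = False

    def init_and_sync(self):
        # The fifth subroutine: initialization, then background sync.
        self.initialized = True

    def inner_loop(self, state: dict) -> dict:
        # Fast inner loop: trim commands, run every frame (gain assumed).
        return {"trim": -0.1 * state["pitch_rate"]}

    def outer_loop(self, state: dict) -> dict:
        # Slower outer loop: general feedback (rate ratio assumed).
        return {"feedback": -0.5 * state["pitch"]}

    def on_downlink(self, state: dict) -> dict:
        # Telemetry downlink data feeds the control-law routines directly.
        if not self.initialized:
            self.init_and_sync()
        cmd = self.inner_loop(state)
        if self.frame % 4 == 0:          # outer loop at a quarter of the rate
            cmd.update(self.outer_loop(state))
        self.frame += 1
        return self.uplink(cmd)          # executive handles the uplink

    def uplink(self, cmd: dict) -> dict:
        # Pack each command into a 10-bit word (0..1023), as on the real uplink.
        return {k: max(0, min(1023, int((v + 1.0) * 511.5)))
                for k, v in cmd.items()}

exec_ = RavExecutive()
words = exec_.on_downlink({"pitch": 0.2, "pitch_rate": 0.05})
print(words)
```

The point of the structure is that the airborne computers never see the control laws at all; they only see the packed 10-bit words coming back up.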
The remote augmentation experiment began with a sample rate of 100 per second, adjustable in flight. The pilot could engage the ground system by entering code 21 for pitch, 22 for roll and 23 for yaw on the CIP. Shortcut codes combined control axes: 24 for both roll and yaw, and 25 for all three (pitch, roll and yaw).
The ground-based computer was a Varian V-73 minicomputer. The airborne AP-101s would check the downlink data against “reasonability” constraints, then pass it to the V-73. On the uplink, the –101s would do nothing with the data but check it and pass it along to the actuators. The system could only be used above about 15,000 feet to ensure the uplink signal was received without ground interference.
Flight number 16, on 15 June 1977, was the first RAV flight; it had been delayed a month so changes could be made to the mode control panel. This initial flight was flown with the RAV system in a monitor mode to verify that the links worked. A few software problems were observed and quickly corrected. Over the following weeks there were minor problems that caused flight aborts, leading to a major effort to redesign the software. The design review for this software, now called RAVEN (Remotely Augmented Vehicle Experimental Norm), occurred on 31 May 1977. The first RAVEN flight was on 8 September 1977. After a few flights the concept was proven viable. RAVEN actually cut costs, at $10-20 per word and one-day turnaround for changes to the remote augmentation software, versus $100-$300 per word for the onboard system. The RAVEN concept was eventually used in the AFTI/F-16 program.
Another round of Space Shuttle support flights began on 12 August 1977. At the end of Enterprise’s 5th flight, on 26 October 1977 (with Fred Haise flying), the orbiter suffered a major PIO. As Enterprise approached within 30 feet of the runway, it rolled slightly, seeming to search with the main gear for solid ground. Enterprise touched down hard and bounced, pulsed down in pitch and rolled right. The roll continued for a few cycles until enough energy was expended to make a landing unavoidable.
This oscillation happened because of transport delays in the control system. Between the time the pilot moved the control stick and the time something happened at the control surface, there was a gap on the order of 200-300 milliseconds, caused by analog-to-digital conversion, control law execution, and digital-to-analog conversion, as well as the length of the wires and lag in the hydraulics. Too long a delay causes the pilot to lose patience and deflect the control surface even more; by the time the first set of commands takes effect, the added input amplifies the response into an overshoot. The pilot reacts with an opposite command, which overshoots in the other direction.
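The mechanism can be seen in a toy discrete-time simulation: a pilot closing a simple proportional loop on pitch attitude through a transport delay. The model, gain, and loop rate are invented for illustration, not Shuttle or F-8 data, but the qualitative behavior (a short delay converges, a long delay diverges) is exactly the PIO effect described above.

```python
def simulate(delay_ms: float, steps: int = 200) -> float:
    """Return the peak pitch excursion for a given transport delay."""
    dt = 0.05                        # 20 Hz control loop (assumed)
    gain = 8.0                       # aggressive pilot gain (assumed)
    d = round(delay_ms / 1000 / dt)  # delay expressed in samples
    pitch = 1.0                      # initial pitch error
    pipeline = [0.0] * (d + 1)       # commands in transit through the delay
    peak = 0.0
    for _ in range(steps):
        pipeline.append(-gain * pitch)   # pilot commands on what he sees now
        pitch += pipeline.pop(0) * dt    # ...but it acts only after the delay
        peak = max(peak, abs(pitch))
    return peak

print(simulate(100))   # short delay: the loop stays bounded
print(simulate(300))   # long delay: commands chase stale state and diverge
```

With a 300 ms delay the pilot is always correcting a state that no longer exists, so each correction arrives too late and too large, and the oscillation grows each cycle.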
The task of the F-8 team was to help find the range of transport delays within which PIO can be avoided. RAV was tried in the first few support flights, flown by McMurtry and Krier. The landing gear doors were removed and the wing was kept in the down position to hold the approach speed at 200 knots, close to the Space Shuttle’s. These flights occurred on 24 and 25 March 1978. Unfortunately, both were aborted due to excessive vibration and blown fuses. The team would have to wait for an onboard version of the transport-delay software; in other words, using the RAV software wasn’t going to work.
By 7 April, the new software was ready, and 14 flights followed within the next 10 days. Enterprise would have taken over a year to do this because of the need to reattach it to the 747 for each flight. Again, the F-8 proved its value.
A notable PIO event occurred with the F-8 on the 18th. John Manke made an approach to the runway at 265 knots with 100 milliseconds of added transport delay. He pulled the nose a little high and compensated with a quick, excessive downward pulse, causing the F-8 to almost land nose first. It took 5 pulses to settle into a safe departure attitude.
This series of flights produced valuable data about handling characteristics with delays from 20 to 200 milliseconds. Flying the simulated Shuttle approaches enabled pilots to gather data that set reasonable limits on sample rate and control law execution time. These tests also resulted in the development of a PIO-suppression filter that was tested on the F-8.
New control laws, called adaptive control laws, were developed for portions of the Phase II program. Adaptive control laws adjust aircraft control based on constantly changing variables such as dynamic pressure, Mach number, angle of attack and a whole series of other factors, combining them with the results of previous commands to dynamically project the best solution. Honeywell developed software that used sophisticated mathematics to determine the best feedback gains and optimal control methods. The sheer volume of computation meant that this could only be done with a digital computer.
The first adaptive control law flight was on 24 October 1978, with the system flown in monitor mode. In November, 5 flights were made trying out various channel and sample-rate combinations. These flights proved the validity of adaptive control laws, and some of the software was used as a baseline for development of the F-16C fighter.
One experiment that saw no further development beyond the F-8 program was analytic sensor redundancy management, which explored how to get reliable data to the FCS in the event of a failure of the hardware feeding it. These flights ran between June 1975 and September 1982. The experiments were considered successful, but providing multiple physical sensors of each type proved the simpler route.
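The analytic-redundancy idea is that one sensor's reading can be cross-checked against a value computed from other, physically different sensors. A minimal sketch, with an invented geometry and threshold rather than the flight algorithms: a pitch-axis angular acceleration is estimated from two accelerometers a known distance apart and compared to the gyro-derived value.

```python
SEPARATION_M = 2.0   # assumed distance between the two accelerometers
THRESHOLD = 0.5      # residual beyond this declares the gyro suspect

def estimated_pitch_accel(accel_fwd: float, accel_aft: float) -> float:
    # Differential normal acceleration over a lever arm gives angular accel.
    return (accel_fwd - accel_aft) / SEPARATION_M

def gyro_consistent(gyro_rate_dot: float,
                    accel_fwd: float, accel_aft: float) -> bool:
    """Compare the gyro against the analytic estimate from other sensors."""
    residual = abs(gyro_rate_dot - estimated_pitch_accel(accel_fwd, accel_aft))
    return residual < THRESHOLD

print(gyro_consistent(1.0, 2.1, 0.2))   # measurements agree
print(gyro_consistent(5.0, 2.1, 0.2))   # gyro disagrees with the estimate
```

The appeal was fewer physical sensors; the drawback, as the program found, was that modeling and thresholds like these are harder to certify than simply fitting another gyro.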
The final experiment flown in NASA’s F-8 program was called REBUS (REsident Back-Up Software). All three channels of a flight control system run identical software; what happens if a fault in that software brings down the entire system? You could run dissimilar software in each channel, but that would triple development costs. NASA wanted to see if separate backup software could be removed from the design. The solution was a hardware device to monitor system performance and fault declarations. REBUS worked autonomously and could provide the FCS with an analog backup. The biggest problem was the initial switchover on failure of the primary system: as long as the switchover took less than 200 milliseconds, the data was still good to use.
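The switchover logic can be sketched as a simple monitor. The structure and names here are hypothetical; only the 200-millisecond budget comes from the description above:

```python
SWITCHOVER_BUDGET_MS = 200   # window within which actuator data is still valid

class BackupMonitor:
    """Hypothetical monitor watching the primary system's fault declarations."""
    def __init__(self):
        self.active = "primary"

    def tick(self, now_ms: float, primary_fault: bool,
             fault_time_ms: float) -> str:
        if self.active == "primary" and primary_fault:
            elapsed = now_ms - fault_time_ms
            if elapsed <= SWITCHOVER_BUDGET_MS:
                # Engage the backup while the last good data is fresh enough.
                self.active = "backup"
            else:
                # Past the window, the stale data can no longer be trusted.
                self.active = "backup-stale"
        return self.active

mon = BackupMonitor()
print(mon.tick(now_ms=1050, primary_fault=True, fault_time_ms=1000))
```

The design pressure is entirely on the `elapsed` term: the monitor must detect the fault and complete the transfer before the 200 ms window closes, which is why switchover latency dominated the REBUS testing.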
Software testing took place on the iron bird, and a final flight readiness review took place on 17 February 1984. By this time the original Phase II software’s reliability was so well established that, in order to test REBUS, the software had to be modified to generate faults. The timing of the fault generation could be controlled by the pilot via the CIP.
Edward Schneider made the first REBUS evaluation flight on 23 July 1984 and evaluated 81 items.
The plan was to arm REBUS at 0.6 Mach and 20,000 feet, check out the computer bypass system to be certain of a fallback, then transfer to REBUS. First Schneider would do pulses in all axes and some maneuvers. Then he would go back to the primary system and make sure it worked. He would then expand the envelope by going back to REBUS, doing some 2g maneuvers, downmoding to the computer bypass system, back up to REBUS, then back to primary again, repeating these cycles with different maneuvers a few more times. Finally he would make simulated approaches in REBUS, do some touch-and-goes and low approaches, and then land under the primary system.
This test was so successful that the second flight, on the 27th, ended with a landing under REBUS control. In total, REBUS was active for 3 hours and 54 minutes with 22 transfers between it and the primary FCS. REBUS testing made valuable contributions to the FCS of the B-2 Spirit “Stealth Bomber” and fueled a debate between redundant and dissimilar flight control software that led to differing approaches at Airbus and Boeing.
The 169th Phase II flight, and the 211th total flight for NASA’s F-8 aircraft, was on 16 December 1985. The F-8 is now preserved and on display at the Edwards AFB Flight Test Museum.