Got an interesting link today from my Internet friend of many years, a retired Air Force test pilot.
Just as we had a top secret program for many years involving captured Soviet fighters, the Soviets had a few of ours.
And the conclusions of one of Russia's top test pilots at the time, in evaluating "The Foreigner" (an F-5 that came from Vietnam after we left) versus a MiG-21, were objective, at times funny (I didn't know that Russian fighters did not use brakes integrated with the rudder pedals), and, most of all, surprising.
In simulated dogfights, the F-5 won every time.
Lex would have loved to have read this article. He had some flight time of his own in an F-5E, with some amusing stories.
The conclusion of the Soviet experts in confronting a Tiger after their tests?
Our “experts” suggested not to engage in a close dogfight, but to use the “hit-and-run” tactics instead.
Static test airframes, more commonly called "iron birds," are partially built, non-flying airframes or formerly flying airframes that are used by agencies and manufacturers to test the strength of an airframe, various design components, or aircraft subsystems (avionics, flight controls, engines, etc.).
The iron birds used for strength testing are typically full-scale representations of the aircraft that are rigged to giant gantry cranes with weights and strain gauges attached. See the pic:
Lockheed’s F-35 test airframe installed on gantry cranes with strain gauges.
Once installed on the cranes, the airframe is literally pulled and pushed to properly simulate all the aerodynamic forces that the aircraft will encounter throughout its flying career. Often the iron birds are tested to destruction.
This is a VC-10 undergoing wing fatigue testing. Note the bending wing.
Some iron birds are formerly flying airframes that have accumulated too many flying hours and are no longer considered safe to fly. These aircraft are typically stripped of most equipment (engines mostly) and used to test various aircraft subsystems in support of other programs.
This is NASA's F-8 Crusader iron bird, used to test software for NASA's Digital Fly-By-Wire program in the 1970s.
The latest example of a NASA iron bird: an F/A-18 Hornet used to support many of NASA's F/A-18 test programs.
Iron birds aren't limited to NASA. The US military also uses them for the same purposes.
This B-2 at the National Museum of the USAF was never an actual flying airframe. This "aircraft," appropriately named "Fire and Ice," was used for fatigue and climatic testing.
A close up of “Fire and Ice’s” nose gear door.
You can learn more about that particular aircraft here.
As an aside, old airframes are also typically used as maintenance trainers in the military. These are called ground instructional airframes:
Phase II of the DFBW program was to test a triplex, redundant digital computer system. This meant that all the flight control hardware and software installed in the F-8 Crusader would be replaced. Phase II lasted for about 12 years.
It took exactly 2 years and 9 months from the end of the last flight of Phase I to the first flight of Phase II. Among the hardware items replaced in NASA 802 were new mounts for the computers, a new cooling system for them, the new computers themselves (both primary and backup), a 3-channel interface unit for the pilot, larger pumps for the hydraulics, replacement of the inertial system with rate gyros and linear accelerometers, a sidestick (included in the primary system), more powerful actuators, a new generator, fuel tanks, and an upgraded engine. On the software side, there were new redundancy management functions and other new software for ground control of the airplane and remote telemetry. Everything inside NASA 802 was new, but the only external evidence consisted of a video camera and antenna added to the vertical tail.
There was a short-lived Phase IB that tested the various computer synchronization configurations that would be needed for the 3 computers of the triplex system in Phase II.
The USAF’s F-4E Analog FBW test-bed aircraft.
For Phase II there was talk of replacing the F-8 with the F-4 Phantom the USAF had used to test an analog FBW system. A Lockheed JetStar, already owned by NASA, was also considered. However, due to the greater availability of spare parts for the F-8, its costs of operation were lower.
NASA 814. NASA’s Lockheed JetStar test-bed aircraft.
It took NASA 2 years to find a suitable computer. Among the machines looked at was a NOVA computer with 4,000 words of memory at a cost of $20,000. A Honeywell 601 with the same memory, weighing less than 30 pounds, was also considered. The RCA 215 and the Control Data Corporation Alpha were on the list at a price of $35,000 each for a lot of 25 computers. All these computers were inexpensive and light but were considered short of memory and processing power. The next group included the Honeywell 801 "Alert", the Sperry 1819A and the General Electric CP-32A. These machines had larger memories of 16,000 words at 18 bits but were more expensive, in the range of $70,000 per computer.
Software engineers wanted a machine that used floating-point arithmetic instead of the fixed-decimal-point arithmetic of the Apollo program computers, which was considered more error prone. A floating-point machine stores both the value of a number and the location of its decimal point. Floating-point arithmetic is easier to program but requires a larger computer word size; for a given word size it is less precise than fixed point, since some bits are spent tracking the decimal point, although a large enough word restores the accuracy.
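To make the tradeoff concrete, here is a minimal sketch in Python (a hypothetical 16-bit format, not the Apollo or IBM arithmetic itself): a fixed-point machine stores a number as a scaled integer and the programmer must track the scale by hand, while floating point carries the scale along with the value at the cost of extra bits.

# Sketch: fixed-point vs floating-point handling of the same numbers.
# Hypothetical 16-bit fixed-point format with 8 fractional bits (Q8.8).
SCALE = 1 << 8  # 2**8 = 256

def to_fixed(x: float) -> int:
    # Store x as a scaled integer; the programmer must remember SCALE.
    return round(x * SCALE)

def fixed_mul(a: int, b: int) -> int:
    # Multiply two Q8.8 numbers; rescaling by hand is where errors creep in.
    return (a * b) // SCALE

gain = to_fixed(1.5)         # stored as the raw integer 384
deflection = to_fixed(2.25)  # stored as 576
print(fixed_mul(gain, deflection) / SCALE)  # 3.375, if scaled correctly

# Floating point tracks the "decimal point" itself, so the same product
# is ordinary arithmetic: easier to program, but more bits per value.
print(1.5 * 2.25)  # 3.375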
By 10 November 1972, NASA had settled on IBM's AP-101. The –101 had a 32,000-word memory, consumed 370 watts of power and weighed 47.7 pounds. The computer required no external cooling, and the internal IBM cooling system would be operable to 50,000 feet. The –101 was the same computer used in Rockwell's Space Shuttle, and because of this, the compiler and other support software IBM was developing would essentially be provided for free. On 27 August 1973, IBM signed a contract to supply computers to the FBW program.
IBM's AP-101 pallet mounted 3 computers and a single 3-channel interface unit. These computers could do both floating-point and fixed-decimal arithmetic. They could process 480,000 instructions per second (sixty times faster than the IBM computer used in the Gemini program less than 10 years earlier!). MTBF (Mean Time Between Failures) was projected to be 5,484 hours, but the actual figure turned out to be much lower.
Fixing the AP-101 was one of the major hurdles of the program. The F-8 program received serial numbers 1 to 9 of the AP-101, which meant the computers from these early lots were the most likely to fail. The first bugs ranged from floating-point failures and an instruction that didn't work to a 50% drop in the data transfer rate. By mid-1975, there were 19 faults in the first 7 computers, giving an MTBF of 204 hours, less than 5% of the predicted rate. Case in point: some of the failures were due to the separation of the circuit boards, which were built in layers. It turned out that the board manufacturer was using a watered-down coating fluid between layers that expanded when heated while the computer was running. Fixing these issues and others ranged in cost from $87,000 (for the first computer) to $130,000 (for the last).
On delivery of the flight system, things were pretty much as expected: 675 hours MTBF at 7,000 hours of use. From February to July 1976, operations reached 1,200 hours per month with an MTBF of 500 hours. In September 1976 the MTBF reached a low of 375 hours, which delayed the first flight by 4 months due to a major hardware rework. Eventually MTBF recovered to 500 hours at 18,000 hours of operation. The AP-101 never did meet IBM's reliability projections.
The main software problem was how to get a multicomputer system to behave like a single computer for the control laws and like 3 independent computers for fault tolerance. The software was scheduled to execute cyclically within a 20-millisecond loop, which solved the synchronization problem. The computers would synchronize up to 3 times per loop, sending each other discrete signals and sharing data, with attempts totaling no more than 200 microseconds. If a computer failed to keep in step, the others would ignore it and move on. Built-in test software would detect the failure and restart the computer. The new software for Phase II also meant that there had to be new pilot interface protocols.
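A toy sketch of that scheme in Python (the structure is assumed for illustration; it is not the Draper/IBM code): three channels run the same 20-millisecond frame, a channel that misses the sync handshake is dropped, and a mid-value select keeps any one wild output away from the actuators.

# Toy model of one triplex frame: channels that answered the sync
# handshake vote, and the middle value wins.
FRAME_MS = 20          # cyclic loop period from the Phase II design
SYNC_BUDGET_US = 200   # total time allowed for sync attempts per frame

def run_frame(channel_outputs, healthy):
    # channel_outputs: dict channel -> surface command this frame.
    # healthy: channels that kept in step this frame.
    voters = sorted(channel_outputs[c] for c in healthy)
    return voters[len(voters) // 2]  # mid-value select

outputs = {"A": 4.02, "B": 4.01, "C": 9.99}         # C has failed high
print(run_frame(outputs, healthy={"A", "B", "C"}))  # 4.02: C is outvoted
print(run_frame(outputs, healthy={"A", "B"}))       # C ignored entirely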
The mode-and-gain panel still had the direct (DIR), stability-augmentation (SAS), and control-augmentation (CAS) modes but added a maneuver-load-control button in CAS mode (predictive augmentation in pitch) and expanded the 4-position gain switches to 5 positions. There was also a digital autopilot with Mach-hold, altitude-hold, and heading-hold selections.
Schematic of the Mode Control Panel.
The Phase II system also allowed the pilot to access the computer software from inside the cockpit via a "Computer Interface Panel" (CIP), which contained three seven-segment displays, 2 thumbwheels numbered 0 to 9, and "enter" and "clear" buttons.
The software used fixed-point arithmetic for sensor data and floating point for the control laws. The memory layout started with 2,000 words of data. The operating system and redundancy management software comprised the next 3,000 words. The control laws took 5,000 words, and the sensor redundancy and preflight testing software took 2,500 words each. The ground display and program load instructions took up the final 4,000 words, for a grand total of 19,000 words of the initial 24,000 words of space (expandable to 32,000 words).
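Tallying those figures as a check: 2,000 + 3,000 + 5,000 + 2,500 + 2,500 + 4,000 = 19,000 words, leaving 5,000 words free in the initial 24,000-word space.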
By early 1976, the software was mature enough for the pilots to be involved in verification and QA. Release 5 of the software arrived by 22 June for flight review on the "iron bird" but had so many problems that the review was cancelled. Release 6 arrived in July and was tested without problems. By late July, the Release 7 software was verified, and it was ready for flight qualification by 10 August.
Phase II also meant a name change for the BCS (Backup Control System) to Computer Bypass System (CBS). More importantly, it meant new hardware in the form of triply redundant analog computers. The F-8 used the same analog computers as the USAF's F-4E FBW program.
Additionally, for Phase II, the actuators were modified for increased power, quicker response and greater reliability. The pistons in the actuators were enlarged, providing 20% more power.
On 18 and 19 September 1973, final approval of the cockpit panels was achieved in preparation for first flight. Additional testing delayed first flight until late December 1974, and it was then delayed again by more software and hardware problems. The final design review took place on 29 May 1975.
Lightning testing also took place in Phase II. It was found that magnetic fields leaked into the interior of the aircraft because of the proliferation of openings in the fuselage, so it was simply suggested to avoid flying through thunderstorms (something that aircraft with conventional flight control systems already do). There were no lightning tests on the AP-101 computers themselves, only static discharge tests.
By January 1976 the program had lost 2 months because Release 2 of the software failed to synchronize the computers. On 5 April Gary Krier flew the simulator with the Release 2 software installed and gave it Cooper-Harper ratings of 1.5 and 3. By June, Krier had flown the Release 6 and 7 software on the iron bird, with several anomalies noted and resolved. On 27 August 1976, with the software issues resolved, the F-8 took off from Edwards AFB for the first flight of Phase II.
The Phase II flight-test phase lasted from 27 August 1976 to 16 December 1985. The mission rules for the initial flights of Phase II came down to a 2-part procedure: try a reset of the indicator; then, if the problem persists, return to base in a configuration governed by the following table:
Failure In:   If In:      Return In:
Primary       Primary     Primary
Primary       CBS         CBS
CBS           Primary     Primary
CBS           CBS         Primary
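Read closely, the table collapses to a single rule: come home on the primary system unless you are already in the CBS because the primary failed. A minimal sketch of that logic (hypothetical encoding, not the actual mission-rules document):

def return_mode(failure_in: str, currently_in: str) -> str:
    # Early Phase II mission rule: fly home on the primary system unless
    # you are in the CBS and it was the primary that failed.
    if currently_in == "CBS" and failure_in == "PRIMARY":
        return "CBS"
    return "PRIMARY"

# Reproduces the four table rows:
for fail in ("PRIMARY", "CBS"):
    for mode in ("PRIMARY", "CBS"):
        print(fail, mode, "->", return_mode(fail, mode))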
During preflight for the first flight, the CBS failed its self-test twice, so the preflight was restarted. A canopy latch also failed to seal correctly but, with the assistance of a ground crewman, this too was resolved. Krier made a 45-minute flight with the "best" of the nine AP-101 computers installed: serial number 3, with 2,135 hours of operation, was Channel A; number 8, with 1,576 hours, was Channel B; and number 4, with 2,951 hours, was Channel C.
For the second flight on 17 September, which proved to be a pivotal flight, computer number 4 moved to Channel B and number 7 replaced it in Channel C. (At some point during the test program, all 3 computers failed, at least one of them in flight.) The second flight's objective was "envelope expansion," meaning increasing the range of altitude, airspeed and g-forces 802's new flight control system could withstand. The intention was to fly 802 to 40,000 feet to see how the computers' cooling system worked in low-density air and then down to 20,000 feet in sustained supersonic flight to see how the cooling system handled moderate heating and g-loads of 4.
An account of that flight follows:
Krier used afterburner on takeoff due to the full load of fuel needed to achieve the high altitude and speed required for the research flight. After a normal climb to 20,000 feet, Krier did some small maneuvers to exercise all the control surfaces. Then he repeated those maneuvers at 50-knot speed intervals, eventually using the afterburner again to nudge the F-8 past 500 knots and supersonic. He did stability and control tests up to 527 knots, Mach 1.1, then began a supersonic climbing turn to 40,000 feet in afterburner. 23 minutes after takeoff, while trying to level off, Krier cut the afterburner at Mach 1.21, and within one second the Channel A fail light and its associated air-data light illuminated. The computer tried a restart, failed, then just quit. Without hesitation, Krier began to return to base, following the rules and staying in primary mode since he was in it when the failure occurred. An uneventful landing on the 2 good computers followed.
The MTBF of the AP-101s was at its lowest point in the program at that time: 350 hours. Each computer had been modified to fix prior problems, and no two of them were internally identical because a different component had failed in each one. Testing was halted until all 9 computers could be brought up to the same standard. In early January 1977 all nine were returned to NASA Dryden.
On 28 January 1977 computer serial number 3 was back in Channel A with the intention of completing the test objectives set out for the second flight. 38 minutes into the flight, at Mach 1.1 and 40,000 feet (almost the same conditions as the second flight), the Channel A failure lights lit up again. Once again Krier returned to base using the 2 good channels. Serial number 3's self-test routine had detected an error in memory, and the program tried to restart the computer 19 times before giving up and declaring a "self-fail." After the flight, engineers sent the computer back to IBM for another refurbishment.
Failures on consecutive flights were frustrating to the Dryden team, but they proved that the triplex system handled failures well. Still, IBM's projection for MTBF after the refurbishment was 1,030 hours. The actual figure for the first 5 machines was 354 hours, unchanged from when all 9 computers were sent to IBM the previous fall.
The next 2 flights occurred in February and early March 1977. These flights occurred without incident and tested the autopilot and augmented modes. There was mixing of modes in various regimes of flight, and confidence in the system (especially the computers) gradually increased.
The F-8 flights in support of the Space Shuttle program began in 1977, to test the Shuttle's backup flight control system software. Originally, the Shuttle's flight control system consisted of 5 computers running with no backup. It was revised to 4 computers with the 5th running as a backup, offering enough functionality to provide for a return to Earth in the event the other 4 failed. NASA contracted IBM to write the backup software, which was then tested running in parallel with the F-8 DFBW FCS. To run the software, the pilot would enter code "60" on the Computer Interface Panel. This would shut down the usual software downlink and switch to the piggybacked software's downlink. He would then make a series of simulated Shuttle approaches. When finished, the pilot would switch back to the normal downlink by entering code "61" on the CIP.
After some initial problems with the software tapes, the first test flight in support of the Shuttle was on 18 March 1977. To simulate Shuttle approaches, the pilot, this time McMurtry, kept the power at idle and deployed the speedbrake, giving the aircraft a very high rate of descent. He flew the approach and descent profile 6 times, each roughly consistent with the others. Krier flew profile 2 on 21 March in the morning and profile 4 in the afternoon. The next morning McMurtry flew profile 5, with Krier going up and flying profile 4 in the afternoon. Over the next few days Krier and McMurtry flew numerous profiles. By 15 April the F-8 program had not only gained plenty of experience in supporting the Shuttle program but had also increased the reliability of the AP-101. F-8 flights in support of the Shuttle program were then halted for 2 months as the aircraft was prepared for the next set of tests, the Remotely Augmented Vehicle experiment.
The Remotely Augmented Vehicle (RAV) was an attempt to enable in-flight changes to flight control laws. Telemetry downlinks would provide the vehicle state to a computer on the ground. Uplink commands would be sent back to the actuators on the vehicle as though the computer and software were on the airplane.
The ground-control system initially consisted of a simplified version of the roll-and-yaw stability-augmentation and pitch-control-augmentation modes. There was no autopilot or sidestick support. Structurally, the software had an executive routine that contained the interrupt structure and synchronization logic, plus 5 subroutines. 4 of them executed the control laws, one handling the trim commands in a faster inner loop, with general feedback in a slower outer loop. The fifth performed initialization and ran synchronization in the background. The telemetry downlink data went directly into the routines, and the executive handled the uplink of the four 10-bit command words.
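A structural sketch of that arrangement in Python (routine names and gains are invented; only the shape, a fast trim loop inside a slower feedback loop under one executive, follows the text):

from typing import List, Dict

INNER_RATE = 100   # samples per second, per the text
OUTER_DIV = 5      # assumed divider for the slower outer loop

def trim_loop(state: Dict) -> float:            # fast inner loop
    return 0.1 * state["trim_error"]

def feedback_loop(state: Dict) -> float:        # slow outer loop
    return -0.5 * state["pitch_rate"]

def executive(downlink_frames: List[Dict]) -> List[int]:
    # Consume telemetry frames, run the control laws, and pack each
    # command into a 10-bit uplink word.
    words = []
    for i, state in enumerate(downlink_frames):
        cmd = trim_loop(state)
        if i % OUTER_DIV == 0:                  # outer loop every Nth frame
            cmd += feedback_loop(state)
        words.append(max(0, min(1023, int((cmd + 5.0) / 10.0 * 1023))))
    return words

frames = [{"trim_error": 0.2, "pitch_rate": 1.0} for _ in range(10)]
print(executive(frames))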
The remote augmentation experiment began with a sample rate of 100 per second, which could be adjusted in flight. The pilot could engage the ground system by entering code 21 for pitch, 22 for roll and 23 for yaw on the CIP. Shortcut combinations of control axes included 24 for both roll and yaw, and 25 for all three (pitch, roll and yaw).
The ground-based computer was a Varian V-73 minicomputer. The airborne AP-101s would check the downlink data against "reasonability" constraints, then pass that data to the V-73. On the uplink, the –101s would do nothing with the data but check it and pass it along to the actuators. The system could only be used above about 15,000 feet, to ensure the uplink signal was received without ground interference.
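The "reasonability" screen amounts to a bounds check before the data is trusted; something along these lines (the limits are assumptions for illustration):

def reasonable(sample: float, lo: float, hi: float) -> bool:
    # Airborne-side screen: accept a downlink value only if it is
    # physically plausible; otherwise the last good value is held.
    return lo <= sample <= hi

# e.g. a pitch rate assumed plausible between -60 and +60 deg/s
print(reasonable(12.0, -60, 60))   # True
print(reasonable(400.0, -60, 60))  # False: rejected before use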
Flight number 16 was the first RAV flight, on 15 June 1977. The flight had been delayed a month so changes could be made to the mode control panel. This initial flight was flown with the RAV system in a monitor mode to verify the links worked. A few software problems were observed and quickly corrected. Over the next few weeks there were some minor problems that did cause flight aborts, and there was a major effort to redesign the software. The design review for this software, now called RAVEN (Remotely Augmented Vehicle Experimental Norm), occurred on 31 May 1977. The first flight for RAVEN was on 8 September 1977. After a few flights the concept was proven viable. RAVEN actually cut costs, at $10-20 per word and one-day turnaround for changes to the remote augmentation software, versus $100-$300 per word for the onboard system. The RAVEN concept was eventually used in the AFTI/F-16 program.
The AFTI/F-16 in flight.
Another round of Space Shuttle support flights occurred on 12 August 1977. At the end of Enterprise's 5th flight on 26 October 1977, the orbiter (with Fred Haise flying) suffered a major PIO. As Enterprise approached within 30 feet of the runway, it rolled slightly, seeming to search with the main gear for solid ground. It touched down hard and bounced, pulsed down in pitch and rolled right. The roll continued for a few cycles until enough energy was expended to make a landing unavoidable. Here's a video of the landing:
This oscillation happened because of transport delays in the control system. Between the time the pilot moved the control stick and the time something happened at the control surface, there was a gap on the order of 200-300 milliseconds. The delay was caused by analog-to-digital conversion, control law execution, and digital-to-analog conversion, as well as the length of the wires and lag in the hydraulics. Too long a delay causes the pilot to lose patience and deflect the control surface even more, but by the time the first set of commands takes effect, it is amplified by the additional input, causing an overshoot. The pilot reacts by giving an opposite command, which results in an overshoot in the other direction. Here are some videos of that flight from different perspectives:
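The mechanism itself is easy to reproduce in a toy simulation (all gains and timings illustrative, not Shuttle or F-8 values): insert a delay between the pilot's command and the surface response, let the "pilot" chase the error, and the closed loop begins to overshoot and oscillate.

from collections import deque

def simulate(delay_steps: int, gain: float = 1.0, steps: int = 40):
    pitch, target = 0.0, 1.0
    pipeline = deque([0.0] * delay_steps)   # commands still in transit
    history = []
    for _ in range(steps):
        command = gain * (target - pitch)   # pilot chases the error
        pipeline.append(command)
        surface = pipeline.popleft()        # arrives delay_steps later
        pitch += 0.5 * surface              # crude plant response
        history.append(round(pitch, 2))
    return history

print(simulate(delay_steps=1)[:10])  # mild overshoot, damps out
print(simulate(delay_steps=4)[:10])  # large overshoot, oscillation builds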
The task of the F-8 team was to help find the range of transport delays within which PIO can be avoided. RAV was tried in the first few support flights, flown by McMurtry and Krier. The landing gear doors were removed and the wing was kept in the down position to hold the approach speed at 200 knots, close to the Space Shuttle's. These flights occurred on 24 and 25 March 1978. Unfortunately, both flights were aborted due to excessive vibration and blown fuses. They would have to wait for an onboard version of the transport-delay software; in other words, using the RAV software wasn't going to work.
By 7 April, the new software was ready, and 14 flights followed within the next 10 days. Enterprise would have taken over a year to do this because of the need to reattach it to the 747 for every flight. Again, the F-8 proved its value.
A notable PIO event occurred with the F-8 on the 18th. John Manke made an approach to the runway at 265 knots with 100 milliseconds built into the transport delay. He pulled the nose a little high and compensated with a quick and excessive downward pulse, causing the F-8 to almost land nose first. It took 5 pulses to settle into a safe departure attitude.
This series of flights produced valuable data about handling characteristics with delays from 20 to 200 milliseconds. Flying the simulated Shuttle approaches enabled pilots to gather data that set reasonable sample-rate and control-law-execution limits. These tests also resulted in the development of a PIO suppression filter that was tested in the F-8.
New control laws, called adaptive control laws, were developed in the software for portions of the Phase II program. Adaptive control laws adjust aircraft control based on constantly changing variables such as dynamic pressure, Mach number, angle of attack and a whole series of other factors. They combine these factors with the results of previous commands and dynamically project the best solution. Honeywell developed software that used sophisticated mathematical equations to determine the best feedback gains and optimal control methods. The high volume of mathematics meant that this could only be done with a digital computer.
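In spirit, the idea resembles gain scheduling; a toy Python version follows (the scaling law here is invented, where the real adaptive laws solved optimal-control equations in real time):

def pitch_gain(dynamic_pressure_psf: float, mach: float, aoa_deg: float) -> float:
    # Recompute a feedback gain from the current flight condition.
    gain = 2.0 * (300.0 / max(dynamic_pressure_psf, 50.0))  # soften at high q
    if mach > 1.0:
        gain *= 0.7   # control surface effectiveness changes supersonic
    return gain * (1.0 + 0.02 * aoa_deg)

print(pitch_gain(150, 0.6, 4))   # low speed: larger gain
print(pitch_gain(900, 1.2, 2))   # high-q supersonic: much smaller gain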
The first adaptive control law flight was on 24 October 1978, with the system flown in monitor mode. In November, 5 flights were made trying out various channel and sample-rate combinations. These flights proved the validity of adaptive control laws, and some of the software was used as a baseline for development in the F-16C fighter.
One of the experiments that didn't see further development beyond the F-8 program was sensor-analytic-redundancy management, which explored the question of getting reliable data to the FCS in the event of failure of the hardware that fed data to the FCS. These flights ran between June 1975 and September 1982. The experiments were considered successful, but providing multiple types of sensors proved a simpler route.
The final experiment flown in NASA's F-8 program was called REBUS (REsident Back-Up Software). All three channels of a flight control system (pitch, roll and yaw) have identical software. What happens if that software has a failure that brings down the entire system? You could have dissimilar software in each channel, but that would triple development costs. NASA wanted to see if backup software could be removed from the design. The solution was a hardware device to monitor system performance and fault declarations. REBUS worked autonomously and could provide the FCS with an analog backup. The biggest problem was the initial switchover on failure of the primary system. As long as this switchover took less than 200 milliseconds, the data was still good to use.
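A sketch of that watchdog idea (the update period and encoding are assumed; the 200-millisecond figure is from the text):

SWITCHOVER_LIMIT_MS = 200   # stale-data window from the text
UPDATE_PERIOD_MS = 20       # assumed primary-system update period

def monitor(updates):
    # updates: list of booleans, True if the primary produced output
    # that cycle. Returns the cycle at which the backup takes over.
    silent_ms = 0
    for cycle, ok in enumerate(updates):
        silent_ms = 0 if ok else silent_ms + UPDATE_PERIOD_MS
        if silent_ms >= SWITCHOVER_LIMIT_MS:
            return cycle    # backup engaged inside the window
    return None

print(monitor([True] * 50))                 # None: primary healthy
print(monitor([True] * 10 + [False] * 20))  # backup engages at cycle 19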
Software testing took place on the iron bird, and a final flight readiness review took place on 17 February 1984. By this time the original Phase II software's reliability had been well established, so in order to test REBUS the software had to be modified to generate faults. Timing of the fault generation could be controlled by the pilot via the CIP.
Edward Schneider made the first REBUS evaluation flight on 23 July 1984 and evaluated 81 items.
The plan was to arm REBUS at Mach 0.6 and 20,000 feet, check out the computer bypass system to be certain of a fallback, then transfer to REBUS. First Schneider would do pulses in all axes and some maneuvers. Then he would go back to the primary system and make sure it worked. He would then expand the envelope by going back to REBUS, doing some 2-g maneuvers, downmoding to the computer bypass system, back up to REBUS, back to primary again, and repeating these cycles with different maneuvers a few more times. Finally he would make simulated approaches in REBUS, do some touch-and-goes and low approaches, and then land under control of the primary system.
This test was so successful that the second flight, on the 27th, ended with a landing made with REBUS running. In total, REBUS was active for 3 hours and 54 minutes with 22 transfers between it and the primary FCS. REBUS testing made valuable FCS contributions to the B-2 Spirit "Stealth Bomber" and generated a debate between redundant and dissimilar flight control software, which led to differing interpretations between Airbus and Boeing.
The 169th Phase II flight, and the 211th total flight for NASA's F-8 aircraft, was on 16 December 1985. The F-8 is now preserved and on display at the Edwards AFB Flight Test Museum.
NASA 802 as displayed at Dryden Flight Research Center.
Head-on view of NASA's F-8 Crusader Digital Fly-By-Wire test-bed aircraft, coded NASA 802.
Phase I tests included first flight and validation of a single-channel flight control system and its software. In addition, these flights would also test the validity of the back-up control system (BCS).
Before the first flight, project engineers issued a document containing limits and emergency procedures for the F-8 flights. For instance, a 70-degree-per-second roll-rate limit was instituted due to the installed Apollo hardware's limitations; there was a 10-knot crosswind limit for flight test missions; and in the event of a loss of power to the IMU, the pilot had to fly straight and level for 2 minutes to allow the gyros to realign.
Most of the in-flight emergency procedures involved switching to the BCS. Triggers included: engine out, generator failure, battery voltage dropping to 27 volts or below, and any so-called "abnormal" digital flight control system behavior (which acted as a kind of catchall).
Pilots reported to engineers how well the FBW FCS was working via a scale built around 3 questions:
“Is it controllable?
Is adequate performance attainable with a tolerable pilot workload?
Is it satisfactory without improvement?”
Under the Cooper-Harper rating, if the answer to the first question was "no," the aircraft received a rating of 10 and improvement was mandatory. If the answer was "yes," the next question would be addressed. According to the Cooper-Harper rating system:
"Ratings of 7 to 9 indicate major deficiencies that require improvement. Ratings of 4 to 6 warrant improvement but could be lived with. Ratings of 3 to 1 ranged from mildly unpleasant to highly desirable performance."
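The three questions form a decision ladder, which can be rendered compactly (the band boundaries follow the quoted text; the exact number within a band comes from the pilot):

def cooper_harper_band(controllable: bool, adequate: bool, satisfactory: bool) -> str:
    # Decision ladder built from the three gate questions.
    if not controllable:
        return "10 (improvement mandatory)"
    if not adequate:
        return "7-9 (major deficiencies, improvement required)"
    if not satisfactory:
        return "4-6 (deficiencies warrant improvement)"
    return "1-3 (satisfactory)"

print(cooper_harper_band(True, True, False))  # 4-6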
The Apollo computer had only 2,000 words of erasable memory, and because the flight test program needed flexibility, all the adjustable data had to be placed in that erasable memory. This is where the KSTART software came in. KSTART was a way to store additional data in memory locations that could be repeatedly rewritten.
KSTART could store up to 105 variables, which could be adjusted for each flight. For example, gain modes could be adjusted using 3 in-cockpit switches to select a particular gain. Other data (and some programs) could also be stored in KSTART, and individual parameters for those programs could be adjusted in flight using the DSKY. As the program progressed, 3 executable programs were kept in erasable memory.
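In modern terms, KSTART behaved something like a small rewritable parameter table; a loose analogy (names and structure invented for illustration):

KSTART_CAPACITY = 105   # variable limit from the text

class KStart:
    def __init__(self):
        self.params = {}

    def load(self, name: str, value: float):
        # Refuse new entries once the erasable-memory budget is spent.
        if name not in self.params and len(self.params) >= KSTART_CAPACITY:
            raise MemoryError("erasable-memory budget exceeded")
        self.params[name] = value

table = KStart()
table.load("PITCH_GAIN_MODE_2", 1.5)   # set before flight
table.load("PITCH_GAIN_MODE_2", 1.8)   # adjusted later via the DSKY
print(table.params)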
Before taxiing the airplane for each flight, the flight control system needed to be checked. The servos in all 3 axes were engaged using switches on the cockpit control panel to the pilot's left, behind the throttle. The computer checkout procedure was to select the direct mode, then press the "fail" switch to test the warning lights. Then the pilot activated the switch to go to the BCS. After these basic modes, the SAS (stability augmentation system) and the CAS (control augmentation system) were tested. Finally, the aircraft's generator was turned on and the servos were reset.
802’s first flight accompanied by NASA’s F-104 Starfighter chase aircraft.
Gary E. Krier
Finally, on 25 May 1972 at 08:14:34, NASA test pilot Gary E. Krier took off from Edwards AFB runway 04. At 09:02:32 he touched down back at Edwards on runway 18. The flight test objective for Phase I was accomplished: digitally controlled flight. 2 more flights were made in June. One flight scheduled for 16 June slipped 3 days because of a BCS component failure. The 18 June flight was the first supersonic flight for the program. However, during this flight there was some minor "porpoising" at Mach 0.98, and as the flight progressed higher and faster this porpoising increased in frequency. There were no indications of problems with input quantization or tendencies toward pilot-induced oscillations (PIOs). As a result, between flights, pilots would test control system modes using the F-8 airframe configured as the "iron bird."
A close-up of the F-8C “iron bird.”
The first flight for the SAS was to be on 3 August, but some glitches were found after the KSTART tape was loaded via the DSKY. It was found that programming meant to improve handling characteristics actually caused such drastic deviations in output that handling would be adversely affected. As a result, changes to the software now required sign-off by the software engineers. Once changes were made, the flight was rescheduled for 4 August. However, problems with the oil system meant the flight was aborted during the takeoff roll. The mechanical problem gave the software engineers a chance to look at the DSKY again on 9 August. This revealed that the software glitches were not resolved, as they produced an "increasingly noticeable aileron oscillation." In fact, the oscillations were so bad the flight was cancelled.
By 18 August these glitches were fixed and the flight was set, but for a flat tire that occurred during taxi to the runway! The SAS flight finally did take place on 22 August and tested the SAS in various gain modes. Krier reported that overall handling in pitch and roll was much easier.
4 days later, Krier made another set of flights to test the SAS and new stick steering gains. Takeoff was with the SAS engaged at the low-gain setting of 1 inch. Krier reported a smoother ride at takeoff than in direct mode. At 20,000 feet and 300 knots, Krier tried the SAS in all control axes at different gain settings. The increased gain settings increased stability, with yaw and pitch damping best at a gain of 3 inches. Later, after some turns, Krier gave the pitch setting in the CAS a try at all gains. Then, attempting formation flight in SAS mode, Krier gave the F-8 a Cooper-Harper rating of 5 because of the increased pilot input needed to counter PIO in the roll axis. Increasing speed only exacerbated the PIOs, so Krier eased off at 350 knots. On final approach to runway 33 at Edwards, 802 encountered substantial crosswinds, which resulted in a large variation in airspeed for which 802's ailerons and rudder had to compensate. Krier said the touchdown took place in considerable side drift and under "marginal control." Privately, he said he was essentially out of control when the wheels reached the runway.
15 August began close-trail formation tests (CTFs) with NASA's F-104. The roll axis in SAS remained a problem. There was a lag between stick movement and aircraft movement of about 280 milliseconds, while the time delay attributable to the digital systems was 105 milliseconds. The difference between the 2 resulted in measurement output errors.
The 44th flight of 802 was the first flight flown by someone other than Krier: Thomas McMurtry, also the chief pilot on the Supercritical Wing project. His first flight was on 21 September 1972. McMurtry had some problems with 802's rudder input at takeoff and rotation, but this was because he had less time in the iron bird learning how the DFBW system worked. For comparison, Krier had, up to that time, over 200 hours in the iron bird; McMurtry had only 27 before his first flight.
Krier and McMurtry flew the next 6 flights from October 1972 to January 1973, testing the SAS, CAS, and BCS. In October 1972, 802 was grounded due to the BCS failing its self-test and some issues in one axis. An early November flight revealed a fuel leak.
On 30 January 1973, the F-8 DFBW began formation flights with the aircraft of NASA's other F-8 program, the F-8 Supercritical Wing. Performance differences between the 2 aircraft in various configurations were evaluated. Ground-controlled approaches (GCAs) were made down to 200 feet AGL (above ground level) at Edwards.
NASA 802 and the F-8 Supercritical Wing Test aircraft in formation.
On 6 April 1973 a 2-flight sequence revealed a problem with the DFBW FCS that took about 3 weeks to resolve: a problem with the logic in the software's code.
By 13 August the DFBW program began support for the YF-16 program by evaluating different models of side-stick control. The chief advantage of the side-stick is that it provides an unobstructed view of the cockpit instrumentation. There are 2 types of side-stick: a "force sensing" stick and a "displacement" stick. The force-sensing stick uses sensors to translate stick pressure into voltage that is sent to the computer; force-sensing sticks do not move. A displacement stick uses sensors that give the pilot some feedback; the stick can move, but its range of motion is limited to 1/8" to 1/4" in any direction. The F-8 program would test a displacement-type stick for the YF-16 program.
The side-stick equipped aircraft first flew on 19 September 1973. Initially, takeoff and landing were done with the center stick, and the minimum altitude for side-stick operations was 5,000 feet with an aircraft load limit of 4 g. CAS modes would not be used during these flights.
After takeoff and climb to 20,000 feet, Krier enabled the side-stick at 250 knots. He rated the initial pitch and roll maneuvers with the side-stick as a 2 on the Cooper-Harper rating scale. The side-stick actually behaved better in the roll axis than testing in the iron bird had predicted, though it did become more sensitive as speeds increased. Next, Krier tried various common maneuvers: IFR flying, approaches to minimums, turns and missed approaches. All of these were rated by Krier at 2 on the Cooper-Harper scale. Krier also reported no forearm fatigue after the 1-hour flight.
25 September was the next flight for the side-stick. This time the previous flight restrictions were lifted, and Krier conducted low approaches, landings and missed approaches to the Edwards AFB runways. In 6 flights, three by each pilot, NASA proved the YF-16 control scheme workable, and it was used in the new aircraft.
From 24 October to 27 November 1973, 4 new pilots flew Phase I of the program. By this time most of the problems with the software had been resolved, so relatively unfamiliar pilots could fly the airplane. Philip Oestricher (the chief test pilot on the YF-16), William Dana (a former X-15 pilot), Einar Enevoldson (NASA test pilot), and Ken Mattingly (former crew member of Apollo 13) all flew the F-8 DFBW aircraft.
As Phase I concluded, General Electric won a contract to study lightning strike effects on the FCS. The results of these tests showed that damage would occur but the system wouldn't fail because of lightning strikes. On 16 November 1972, the DFBW program received a NASA group achievement award. By March 1973 Gary Krier had testified to members of the House Committee on Science and Astronautics that FBW technology was viable and worth extra investment to attract commercial users.
Early computers, being the size of rooms and lacking reliability, were considered out of the question for use in a range of applications, not just aviation. But the amount of work (calculations) these early computers could do in a relatively short period of time, in the minds of scientists in the late 1940s, outweighed the abysmal reliability.
That reliability can be traced to the number of parts in a system. Analog computers had a large number of moving parts and were therefore not very reliable. Digital computers, on the other hand, had fewer moving parts and better reliability. That made a digital computer an easy early choice for NASA's F-8 program.
By the late 1950s digital computers were small enough (though not by today's standards) to be considered for installation in aircraft. Most of the output errors in these early digital computers were due to manufacturing defects in the vacuum tubes of the logic circuits. In 1952, John von Neumann suggested that all systems fail at some point and that the solution was to use triplicate logic circuits to "vote" on what was valid output. This solution really wasn't practicable for aviation until a few years later, when transistors started to replace vacuum tubes in digital computers.
The Voyager 1 spacecraft
In the 1960s a group of General Electric engineers stumbled across von Neumann's lectures on redundancy, as it came to be called, and did some projections showing that identical software feeding outputs from individual computers to majority logic voters made failures 300 times less likely in 100 hours of operation. As a matter of fact, the longest-lived spacecraft, Voyager 1, features a redundant 3-computer system...and it's STILL working! The Saturn V booster also used a similar "triple-redundant" computer. By that time, continued shrinkage of computer hardware meant that processor units could be made redundant.
The Saturn V booster rocket
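Von Neumann-style majority voting is simple enough to show in a few lines: each output bit is whatever at least 2 of the 3 machines say it is (an illustrative sketch, not any of the actual implementations):

def vote(a: int, b: int, c: int) -> int:
    # Bitwise majority: a bit is set iff at least two inputs agree on it.
    return (a & b) | (a & c) | (b & c)

good = 0b10110010
print(bin(vote(good, good, 0b00001111)))  # the one faulty word is outvoted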
All this paid dividends in the Apollo and F-8 programs. Draper Laboratory won the contract to develop the Apollo guidance and navigation computers. During hardware development, Draper Lab implemented very strict quality assurance (QA) controls on manufacture, so much so that "every piece of metal could be traced to the mine it came from." The strict QA paid off: there were 16 computer failures and 36 display-and-keyboard failures in the 42 computers and 64 DSKYs (Display and Keyboard units) built – all on the ground. With zero in-flight failures in 1,400 hours of operation, Draper Lab achieved 99.9% reliability (over the Apollo target of 99.8%).
Software QA and validation became a major cost driver in the Apollo program, and by 1967 NASA had approached Bellcomm, Inc. to study successful software development and management techniques. These processes formed a baseline for QA and development standards in the software industry for years into the future.
In the F-8 program, NASA's Dryden Flight Research Center would be responsible for overall vehicle integration. Draper Labs remained responsible for requirements analysis, software and interface design, simulator support and flight-test support.
No one had built an all-digital flight control system before, and Draper Labs ran into 2 issues initially: 1) the use of a digital system in an "all-analog" world and 2) how to integrate the computer system into an analog airplane. At the input end of the computer there was an analog-to-digital converter; at the output end, a digital-to-analog converter. When the pilot moved the stick, displacement translated to voltage. In the pitch axis, for instance, the physical limit of stick movement was 5.9 inches nose up and 4.35 inches nose down. The transformers were designed to generate a signal of plus or minus 3 volts. Input to the analog-to-digital converter was scaled to the longer aft movement, so the forward movement had a maximum value of about 2.4 volts, while the aft movement topped out at –3.0 volts. The voltage from the transformers would be converted into bits and then used as input to the software control laws.
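As a sketch of that scaling chain using the figures above (converter resolution and sign convention are assumptions):

FULL_SCALE_IN = 5.9   # the longer (aft) stick travel sets the scale
FULL_SCALE_V = 3.0    # transformer output at full aft deflection
ADC_BITS = 10         # assumed converter resolution

def stick_to_counts(inches_aft: float) -> int:
    # Inches of stick travel -> transformer volts -> signed ADC counts.
    volts = -FULL_SCALE_V * (inches_aft / FULL_SCALE_IN)  # aft is negative
    return round(volts / FULL_SCALE_V * (2 ** (ADC_BITS - 1) - 1))

print(stick_to_counts(5.9))    # full aft     -> -511 counts
print(stick_to_counts(-4.35))  # full forward -> +377 counts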
Control devices in each axis have a deadband region in which small movements of the stick have no result. In a mechanical system, the deadband is caused by stretching of the control cables from age and use; it can vary over the lifetime of the cables and the aircraft, and each axis has a unique deadband region. In a FBW system, small discrepancies are magnified. If the FBW designers ignored the deadband, the control surfaces would move with every tiny motion of the stick and rudder pedals. The airplane would become too sensitive to fly without pilot-induced oscillations (PIOs) resulting from constant attempts to dampen motion. Deadband had to be dealt with via good old-fashioned trial and error.
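A deadband is a few lines of code once the width is known; the width itself was the trial-and-error part (the value below is purely illustrative):

def apply_deadband(deflection_in: float, deadband_in: float = 0.1) -> float:
    # Ignore stick motion inside the deadband so noise and tiny hand
    # movements never reach the control surface.
    if abs(deflection_in) <= deadband_in:
        return 0.0
    # Re-reference past the edge so the output stays continuous.
    return deflection_in - deadband_in * (1 if deflection_in > 0 else -1)

print(apply_deadband(0.05))  # 0.0: inside the deadband
print(apply_deadband(0.50))  # 0.4: motion beyond the edge passes through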
On the output end, the gearing gains applied to the output signals had to be calibrated. Gearing was non-linear: movement of a control device was translated by the control laws into movement of the appropriate control surface, and this was done by adjusting a linear variable differential transformer to provide a corresponding response.
The digital flight control system for Phase 1 tests of NASA’s F-8 Crusader
Both the deadband and the gearing equations were at the center of control law development. The output to the actuators was the sum of the trim command from the electric trim button on the stick and the product of the stick gearing gain and the stick deflection.
The computer sampled the inputs every 30 milliseconds, with the control calculations themselves occurring within 8 to 15 milliseconds of each sample.
Control law equations were written into specifications arranged by axis and functional grouping, with no attention paid to how they were used by the flight-control computer. This made the equations more difficult to manipulate and use. Here's an example from Tomayko:
DEC1 = (KGE1)DEP1 + DET1
"DE" meant "delta" or "change," C is "command," K is "constant," GE is "gearing," P is "pilot" and T is "trim." The equation can be loosely translated as: "The command change equals the gearing gain times the pilot stick position plus the change in trim."
Names for each variable were different for each axis of flight.
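Rendered as code, the pitch-axis equation is a direct transcription (the numbers are hypothetical):

def pitch_command(kge1: float, dep1: float, det1: float) -> float:
    # DEC1 = (KGE1)(DEP1) + DET1: gearing gain times pilot stick
    # position, plus the trim change from the stick's trim button.
    return kge1 * dep1 + det1

print(pitch_command(kge1=0.8, dep1=1.2, det1=0.05))  # 1.01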
Changes to the software specification took place up to March 1973, when the final version of the flight-control software was published. In that time there were many changes, which were managed by a 4-layer system. The lowest-impact changes were "Assembly Control Board" requests: straightforward code changes that could be approved by the software manager at Draper. The next highest was an "Anomaly" – an error that needed to be repaired; both DFRC and Draper signed off on these. Next was a "Program Change Notice" – during development, something could not be implemented in the desired way, so the implementation had to be changed; both managers signed off on this. The highest level was a "Program Change Request" – a change to the specification. Both the software and project managers had to sign off on this, as there were schedule and budget impacts.
Rope memory from the Apollo Guidance Computer
The name of the flight control software was "DIGFLY," pronounced "digifly," and it was written in FORTRAN. There were 2 copies of DIGFLY in the computer's core rope memory (here's a video on how it's actually done). DIGFLY itself was divided into system and application components. The system software provided task management, a restart segment, and service routines to monitor the IMU and provide self-test modes. The application software contained the flight control and some miscellaneous components. 60% of the F-8 software was taken from the Apollo program.
Core rope memory test sample from the Apollo Program.
Converting the F-8 to DFBW:
The F-8 Crusader was selected because there were quite a few airframes being retired, and the F-8 itself had the internal space available for the necessary equipment. NASA received 4 airframes. F-8A Crusader Bureau Number (BuNo) 145385 was going to be the non-flying FBW test bed, the so-called "iron bird," and was given NASA tail number 816. F-8C BuNo 145546 (interestingly, the first C model built) was to be the actual flying test-bed aircraft. 546 was allocated NASA tail number 802.
F-8A Crusader NASA tail number 816 after the program.
F-8C 145546 (NASA tail number 802) photographed at the Naval Air Test Center in 1959.
Converting the F-8 to accommodate the DFBW hardware and software turned out to be rather straightforward, albeit with a lot of work. Problems ranged from a canopy that wouldn't fit to a sweatshirt found in 802's fuel tank. It took a year to convert 802, but the interesting thing was that the aircraft retained its fighter looks and, for the most part, its performance.
A port side view of the Apollo hardware as installed in 802.
Cooling problems were the first hurdles and persisted until almost first flight. After all the testing to resolve the cooling issue, which even included a redesign of the heat exchangers, it was found that someone had forgotten to turn on the external cart that cooled the computer during engine run-up. DOH!
The Apollo guidance computer and inertial system (left) on a pallet with the cooling tank and associated piping on the right.
The core ropes containing the flight control software arrived from Raytheon in January 1972, while the second version of the software, DIGFLY 2, was undergoing test and development work at Draper Labs. DIGFLY 2 would use what remained of Skylab's core rope, which turned out to be the last ropes made for an Apollo computer.
Control sticks originally tested for the Lunar Module were used in the iron bird, and a DSKY from the Apollo 15 Command Module (installed in the F-8's left gun bay) was used to replace one that had previously been blown out due to errors in power requirements.
The F-8 initially retained an analog backup flight control system as well as its stock APC (approach power compensator, an autopilot for the throttle).
In flight, once the pilot positioned the stick and rudder pedals, a completely electric system took over. The inertial measurement unit was an arrangement of accelerometers and gyros that could track attitude, velocity and position changes without depending on external devices for data. This reference was compared to the pilot's control inputs, which were expressed as voltages from transformers attached to the control stick and rudder pedals, called Linear Variable Differential Transformers (LVDTs). There were 2 installed at the base of the stick for each control axis (pitch, roll and yaw): one served the primary flight control system and the other the analog backup system.
The actuators had 2 systems, a primary and an analog backup. In primary mode the digital computer sent analog position signals to a single actuation cylinder. This cylinder was controlled by dual self-monitoring servovalves: one valve controlled the servo and the other was there for comparison. If the position values from the servovalves differed, the backup mode, 3 servocylinders in a 3-channel arrangement, would engage and control the flight surface.
There was an attempt to upgrade the power plant of 802 from the J57-P20A to a more powerful –P420, but there wasn't time, and the installed –P20 was sent to the Navy for refurbishment in October 1971. By February 1972, the software and hardware had been thoroughly tested and installed in 802. The engine, pilot's seat and tail were reinstalled in April.
17 December 1903: the Wright brothers make the first powered, sustained flight at Kitty Hawk, North Carolina. Cables and pulleys were the flight control system of the day.
A system of pulleys and cables enabled the Wright Brothers to become the first to take to the air in controllable flight on 17 December 1903. In the aircraft of World War 1, the methods used to control aircraft remained basically the same cable-and-pulley system: pilot control inputs through stick and rudder pedals were transmitted to the control surfaces via pulleys and cables.
Fokker DR1, representative of a typical World War 1 aircraft.
By the time World War 2 started, aircraft were more complex, faster and far more capable. Most flight control systems at the time remained cables and pulleys, but the problem of stability remained: there needed to be a method for reducing the constant need for pilot control input, especially during long flights.
Boeing’s B-29 Superfortress. Typical configuration for a World War 2 aircraft.
By the late 1940s a very primitive “assisted flight control system” had flown from Newfoundland to England aboard a C-54 entirely under the control of a flight program punched out on cards.
Douglas C-54 Skymaster
Wartime technological leaps enabled postwar aircraft designs to increase not only in speed but also in size. At 1,000 knots there simply isn't enough time for a human being to react. The larger size of aircraft also meant there was a great deal more inertia for a human to struggle against. Due to the increases in aircraft size, inertia and dynamic pressure, flying without some form of mechanical assistance would become too difficult for pilots because of the amount of force required to move the control surfaces. The solution was to connect the pilot's stick and rudder pedals to hydraulics, which were, in turn, connected to the control surfaces.
The development of hydraulic flight control systems meant there was no longer a direct connection between the stick/rudder pedals and the control surface. Pilots develop a sense of what an aircraft is doing not only from visual cues but also from seat-of-the-pants flying. Hydraulic systems therefore brought about the need for "artificial feel systems" that replicated force feedback from the control surfaces to the pilot through the stick.
Hydraulic flight controls are heavier than pulleys and cables, adding weight to the aircraft. That translates into less weight an aircraft can devote to a given task: less fuel for range in a fighter, less cargo or fewer passengers in an airliner. In spaceflight, weight is also a critical issue. The more a spacecraft weighs, the more thrust is required to bring that spacecraft to orbit. Controlling a spacecraft with hydraulics was going to be too heavy.
NASA used simple binary logic in the Mercury program's flight control system. The design consisted of a control signal that transmitted "on/off" commands for firing the maneuvering rockets. The pilot could change the attitude of the spacecraft by moving a hand controller, with the direction of the controller's movement indicating pitch, roll or yaw to the control system. The control system then sent the appropriate signals to fire the correct sets of rockets to achieve the desired effect. The Mercury flight control system was only capable of attitude control.
The Mercury capsule as displayed at the Udvar-Hazy Center of the National Air and Space Museum.
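That on/off logic is about as simple as a control law gets; a toy rendering (the thresholds are illustrative, not Mercury values):

def thruster_command(error_deg: float, deadband_deg: float = 1.0) -> str:
    # Fire the thruster pair that reduces the attitude error, with a
    # small deadband so the jets don't chatter around the setpoint.
    if error_deg > deadband_deg:
        return "FIRE_NEGATIVE_PAIR"
    if error_deg < -deadband_deg:
        return "FIRE_POSITIVE_PAIR"
    return "OFF"

print(thruster_command(+3.2))  # FIRE_NEGATIVE_PAIR
print(thruster_command(-0.4))  # OFF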
By 1968 all of NASA was focused on putting a human on the Moon. Grumman, designer of the Lunar Module, was tasked, along with NASA and MIT, with developing a flight control system capable of landing on the Moon. The flight control system for the Lunar Module was called PGNCS (Primary Guidance, Navigation and Control System, pronounced "pings"). Engineers who had worked on the Polaris SLBM and Atlas ICBM programs brought considerable experience to the development of PGNCS.
The Grumman Lunar Module on display at the National Air and Space Museum.
PGNCS had all the elements that would be needed to develop a digital flight control system. The most important element was the inertial measurement unit (IMU). The Lunar Module used 3 IMUs, 1 for each axis of flight (pitch, roll and yaw). The IMU generated analog signals that had to be read by one of the first digital computers.
Schematic cutaway of the IMU in the Lunar Module.
This video details the development of the IMU and integration with the Lunar Module’s flight control system (it’s a fascinating 3-part series).
The LLRV
The LLRV (Lunar Landing Research Vehicle) was developed to test flight control laws (programming code) for the Lunar Module here on Earth. The LLRV used reaction control jets because the Moon has 1/6th of the Earth's gravity. The LLRV was not an aerodynamic vehicle; it used engine thrust alone to get airborne. Once the flight control laws were developed, testing of the LLRV wound down, but a group at NASA thought that software developed and tested on the LLRV and the Apollo Lunar Module might be beneficial to aircraft control. After the LLRV, computers, sensors and actuators became advanced enough to start flight-testing.
Computers in flight control systems come in 2 distinct types: analog and digital. Mechanical analog computers operate by creating a mechanical analogy between the positions of numbers on various scales and the products, quotients, squares, cube roots, etc., that they are used to calculate. In flight control computers, the control laws are hard-wired via the circuitry in the computer. While an analog computer is resistant to power surges and viruses, it is very difficult to re-program; that requires a physical reconfiguration of the embedded circuits. Analog computers are also sensitive to heat, because data is carried as signal amplitudes and temperature effects modulate those amplitudes.
The first vehicular use of an analog computer was in the German A-4 [V-2] rocket of World War 2 fame. The A-4 used an electronic analog computer that modeled the differential equations of the control laws, accepted voltage values as input and generated voltages as output to an amplifier. The amplifier then sent those commands to the control surface actuators. This technology formed the basis of flight control computing for almost the next 40 years.
Digital computers, on the other hand, read data in binary "1"s and "0"s. Data needs to be converted to a binary string of bits before it can be used by the computer. The problem is that these bit streams, coming from multiple sources, can be too dense and rapid for the computer to process into usable data sets. After 1963, improvements in transistors and work on "sampling theory" made the use of digital computers more widespread, not only in aviation but in a whole range of applications.
NT-33 In-Flight Simulator
In 1954, the NT-33 In-flight Simulator was developed to test other equipment that would be needed in a digital flight control system. The NT-33 tested improvements in gyroscopes, actuators, effectors, stability augmentation and pitot-static systems. In 1957 the USAF also flew a modified B-47 (53-2280) with a fly-by-wire channel in the pitch axis.
JB-47E 53-2280 fly-by-wire test-bed aircraft.
By early 1971 the NASA Office of Advanced Research and Technology wanted to see more technology transferred from the Apollo program. Luckily for them, the Apollo flight control computer was, up to that time, one of the most reliable computers ever built. Soon the office approved a feasibility study on installing Apollo flight control system hardware in an F-8 Crusader.
A stock Vought F-8C Crusader BuNo 146993 from VF-191 “Red Lightnings.”
The F-8 Crusader is a single-seat, single-engine, carrier-borne fighter from the 1950s. The ‘sader, as it’s properly known, gained a fearsome reputation as a MiG killer in the skies over North Vietnam; by the 1970s, however, it was being phased out in favor of the newer F-4 Phantom II. NASA chose the ‘sader because it was readily available and cost-effective. The original intention was to modify the aircraft by removing the horizontal stabilizers and relocating them ahead of the wings as canards; the F-8’s centerline air inlet would have been unaffected by the change. However, this was considered too costly to be included in the program.
By 1970 NASA had acquired four F-8Cs on their way to the boneyard and sent them to the Dryden Flight Research Center. Funding for NASA’s Digital Fly-By-Wire program was $1 million for the first year; the entire program, which ran just over 10 years, would ultimately cost $12 million. The program itself would be conducted in three phases starting in early 1971. Phase I, scheduled to start in 1971, had two goals: proving the technology worked and developing the tools to move forward. Phase IB would introduce a second computer into the flight control system and begin testing and developing system redundancy. Phase II, scheduled to begin in Q2 of 1974, would concentrate on gaining knowledge and developing techniques for increasing computer reliability. The schedule ultimately wasn’t met, but the objective of each phase never changed. Over the next year Flight Research Center hardware and software engineers began modification work on the F-8 aircraft.
Vought F-8C NASA 802 Digital Fly By Wire Test-bed aircraft.
The next segments will cover modification of the F-8 aircraft, phases of the flight test program and benefits to future aircraft programs.
The McDonnell Douglas YC-15 was a prototype developed for the USAF’s AMST (Advanced Medium STOL Transport) program, begun in 1972. Its competitor was the Boeing YC-14.
McDonnell Douglas developed the YC-15 drawing on its experience with the Breguet 941, using extensive wind tunnel testing (to settle the optimum configuration) and Cornell Aeronautical Lab’s B-26B In-Flight Simulator (for flight control testing).
The aircraft itself is 124.25 ft long, with a wingspan of 110.36 ft and a height of 43.30 ft. Max gross weight is 216,680 lbs. The interior cargo box measures 47 x 11.8 x 11.4 ft.
Thrust for the YC-15 was provided by four JT8D turbofans (the DC-9’s powerplant), each producing 16,000 lbs of thrust. The engines were mounted on shallow pylons ahead of the wing’s leading edge, with thrust reversal accomplished by so-called “daisy nozzles.” During final approach, with the flaps fully extended into the engine exhaust, the engines provided 54% of the YC-15’s lift.
The straight wing carried ailerons, double-slotted flaps, leading-edge high-lift devices (Krueger flaps, etc.), and spoilers. The trailing-edge devices, flaps and ailerons, spanned 75% of the wing’s trailing edge. The flaps could extend as much as 46 degrees into the exhaust stream. The YC-15 was the first jet-powered aircraft to use externally blown flaps (EBF).
YC-15’s EBF
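A back-of-the-envelope calculation shows why powered lift matters at STOL speeds. Every number below is an assumption for illustration rather than YC-15 data, except the 54% powered-lift share quoted above:

```python
# Rough lift bookkeeping on a slow STOL approach. Assumed values, for
# illustration only; the 54% powered-lift share is from the article.

weight_lbf = 150000.0      # assumed approach weight
wing_area_sqft = 1740.0    # assumed wing area
v_approach_kt = 90.0       # assumed STOL approach speed
rho = 0.002377             # sea-level air density, slug/ft^3

v_fps = v_approach_kt * 1.688           # knots -> ft/s
q = 0.5 * rho * v_fps ** 2              # dynamic pressure, lbf/ft^2

cl_required = weight_lbf / (q * wing_area_sqft)
cl_aero_only = cl_required * (1.0 - 0.54)  # wing's share if engines give 54%

# The total CL comes out well above what conventional flaps deliver;
# blowing exhaust across the flaps closes the gap.
print(f"total CL required: {cl_required:.2f}")
print(f"CL the bare wing must supply with EBF: {cl_aero_only:.2f}")
```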
Flight controls consisted of a conventional hydraulic system plus a stability and control augmentation system (SCAS). The SCAS was dual-channel and three-axis, enabling hands-off flight for high-angle (tactical) approaches, with modes for attitude, altitude and heading.
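Conceptually, a stability augmentation system measures body rates and adds a damping term to the pilot’s command on each axis. A minimal sketch of that idea follows; the gains and structure are invented for illustration, and the real dual-channel SCAS was far more involved:

```python
# A toy three-axis rate damper in the spirit of a SCAS. Gains invented.

DAMPING_GAINS = {"pitch": 0.8, "roll": 0.5, "yaw": 1.2}  # per rad/s

def scas_command(pilot_cmd: dict, body_rates: dict) -> dict:
    """Surface command = pilot input minus a rate-feedback damping term."""
    return {
        axis: pilot_cmd[axis] - DAMPING_GAINS[axis] * body_rates[axis]
        for axis in ("pitch", "roll", "yaw")
    }

# Stick centered while a gust kicks up a nose-up pitch rate: the SCAS
# commands nose-down on its own to damp the disturbance.
cmd = scas_command(
    {"pitch": 0.0, "roll": 0.0, "yaw": 0.0},
    {"pitch": 0.10, "roll": 0.0, "yaw": 0.0},  # rad/s
)
print(cmd)  # pitch command goes slightly negative, other axes untouched
```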
The YC-15 saw an early use of a head-up display (HUD) on a transport, specifically the VAM (Visual Approach Monitor). Developed by Sundstrand, the VAM displayed the horizon, a flight path scale, an airspeed indexer and the touchdown point.
Sundstrand’s VAM display
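The core of any flight-path display is a small piece of geometry: the flight path angle follows from vertical speed and true airspeed, and the HUD draws the velocity-vector symbol that many degrees below the horizon line. A minimal sketch with assumed numbers:

```python
import math

# Where a HUD would draw the flight-path marker relative to the horizon.
# Inputs and numbers here are assumptions for this sketch.

def flight_path_angle_deg(vertical_speed_fpm: float, tas_kt: float) -> float:
    """Flight path angle: gamma = asin(vertical speed / true airspeed)."""
    vs_fps = vertical_speed_fpm / 60.0
    tas_fps = tas_kt * 1.688
    return math.degrees(math.asin(vs_fps / tas_fps))

# A steep STOL-style approach: ~1000 fpm down at 90 knots.
gamma = flight_path_angle_deg(-1000.0, 90.0)
print(f"flight path marker sits {abs(gamma):.1f} deg below the horizon")
```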
Being essentially a research airplane, the YC-15 did not need to fully conform to MILSPECs. As such it borrowed components from various aircraft: the DC-10’s cockpit enclosure, the F-15’s fuel pumps, the C-141’s stabilizing struts, the A-10’s UARRSI, the C-5’s cargo handling equipment, and parts from 9 other types of airplanes. Cockpit instrumentation alone used components from 10 different airplanes.
Here’s a cutaway of the YC-14 and YC-15 for comparison:
Part 2 will detail the YC-15’s flight test program.
Part 3 will detail the YC-15’s technological contributions to the C-17.
John Farley’s A View from the Hover is a long-awaited memoir of sorts from one of the UK’s most experienced test pilots.
John Farley is best known for his work on the Harrier: he first flew the P.1127 in 1964 while a test pilot at the Royal Aircraft Establishment, and spent 19 years contributing to the Harrier’s development, retiring as Chief Test Pilot at BAe Dunsfold. He then spent five years as Manager of Dunsfold and a further two as Special Operations Manager at BAe Kingston. In 1990 he became the first Western test pilot to fly the MiG-29 fighter. He is currently part of the Farnborough Aircraft team developing the F1 air taxi.
Like most pilots’ memoirs, A View from the Hover starts out with Farley’s first experiences with flying in general, then goes into his flying at RAE Farnborough and Bedford.
There’s also a chapter detailing QinetiQ’s Harrier VAAC programme, with some very detailed descriptions of the aircraft’s flight control system and the contributions the programme made to the JSF.
Harrier VAAC
A great deal of the book naturally details Farley’s work on the Harrier itself: from the P.1127, to AV-8A testing with the USMC, to the Sea Harrier FRS.1, to AV-8B Harrier II testing with McDonnell Douglas, and finally to the Sea Harrier FA.2. Overseas sales demonstrations to Spain, Italy, France and India are also discussed.
There’s some interesting discussion of the preparations and certifications needed for a demo flight. Speaking of demo flights, aside from the Fulcrum, the most interesting evaluation he flew was the IAI Lavi, and he offers an opinion on just how good an aircraft it was. The reader may gain some insight into China’s J-10.
Farley’s thoughts on simulation include this tidbit:
“A few years later, I was standing outside a Lightning (aircraft) simulator waiting my turn for an emergencies check. The game was the same as in the Hunter (aircraft). If you got the drills right you flew on. The pilot was about 3 miles out on a GCA to land and down to one engine having successfully put out a fire on the other. Then the instructor gave him a fire in the remaining engine. The pilot made a textbook Mayday call and said he was ejecting. When you pulled the handle at this point in that simulator, the canopy slid back on rails and the seat went up a foot or so. Job done. However, nothing happened and the canopy remained closed. Then we heard this awful scream – it was quite chilling. The pilot concerned had failed to remove the seat safety-pin during his strap in checks and found he could not pull the handle. He really thought he was going to die. A bad dose of AMD (awareness of mortal danger) as the psychologists term it.”
Intense and indicative of just how realistic early simulators were.
The later chapters of the book include Farley’s thoughts on general aviation and actually made me think of a discussion I had with a CFI years ago. Farley wondered why GA airplanes don’t have an AoA indexer in much the same manner as fixed wing naval aircraft.
I won’t go into a description of one but here’s what an AoA indexer looks like:
The conclusion of the book gives Farley’s interesting perspective on teaching the fundamentals of aerodynamics which CFIs out there may find useful.
All in all, this is a great book. A View from the Hover is a must-read for those interested in flight test; if you’re an airplane geek and/or a pilot, there’s a lot of great material here for you too. On Amazon it’s a bit pricey. I found the paperback for about $35, but it had been sitting on my wishlist for over a year.
*Make sure that you head over to xbradtc’s place and click on the Amazon link to purchase (you’re welcome…even if I was banned from the latest “name the plane”…lol).
In 1986 the BBC produced this interesting and informative documentary detailing the Empire Test Pilots’ School class 44 going through training at Boscombe Down in the UK. There are six parts, each lasting 30 minutes. “Test Pilots” gives the viewer a good idea of the hard work that goes into becoming a test pilot, as well as the flight test process itself.
There are a LOT of different airplanes making cameos here: Beavers, Tornadoes, Buccaneers, Vikings, Hornets, Blackhawks, way too many to name, so fellow airplane geeks should be well pleased 🙂
If you want to learn more, there are a few flight test blogs out there that I read:
Mark Jones Jr.’s Multiply Leadership. Mark is a graduate of the USAF Test Pilot School and a former test pilot in the C-17 Globemaster III program.
As I’ve related before in these pages, I am a casual flight-simulation enthusiast. In my youth I was a much more devoted aficionado (if not a particularly skilled one), and I spent many happy hours flying (mostly one-way) virtual sorties into scarily dense integrated air defense networks. But these days, I lack the time and proficiency to survive in that kind of unwelcoming environment, and so I satisfy myself with the prosaic tasks of practicing touch-and-goes, failing at in-flight refueling, and occasionally sparring with computer-controlled enemies at a skill level just low enough to make me the conquering hero. Another oak leaf cluster for my Distinguished Flying Cross? Why thank you, don’t mind if I do, you can paint the MiG silhouette right over there, that’s right Iceman, I am dangerous. (Hey, don’t judge me.)
There I was in the blue F-16, trying to jump a partially-anesthetized, unarmed A-4E in a level turn...and, er, I wound up botching the end game with way too much closure. What looks like a tidy little displacement roll on the tape was really a frenzied attempt to avoid a mid-air collision.
However, it’s interesting after all these years to see how current technology is employed by serious simulator fans who have stuck with the hobby. There are several different layers of simulation complexity. The first is mastering the control and management of your own aircraft, which is a nontrivial exercise in the era of 700+ page “game manuals.” (I am stuck at this level.) The second is basic combat against a computer-controlled aircraft, the so-called “1-v-1” engagement. This raises the complexity substantially, as weapons systems and tactics and all of the nastiness of an opponent come into play. The third level is multiplying the number of aircraft in play, which adds the element of multi-tasking under stress. And the fourth adds human players into the mix, which increases the chaos by an order of magnitude.
The “fourth-level” organized scenarios that are flown by serious devotees of the hobby are fascinating. While there is plenty to read in the open literature about Basic Fighter Maneuvers, there is not a lot about how a large air battle involving fourth-generation fighters equipped with missiles of the AMRAAM generation would play out tactically (the seminal open-source work on the subject, Robert Shaw’s Fighter Combat: Tactics and Maneuvering, was published in 1985). These are the intriguing parts of the mission that Lex could not show us on his helmet-cam — and for good reason, for there lie secrets upon which lives may one day depend.
But within the boundaries of what is publicly known, the experiences of the hardcore simulator crowd provide some fuel for thought. Not so much as a predictive device — but rather, to get the “feel” of the thing, and in particular how tomorrow’s fighter pilot (or UCAV operator) will need to quickly synthesize all kinds of fragmentary information in a very short time to detect, identify, engage, kill, and withdraw. If these games are any indication, it will be confusing, fast, violent, and curiously cerebral.
The YouTube video above is a recording of an air-to-air encounter during a Falcon 4.0 airfield strike, flown by a four-ship of F-16s over a simulated Korean peninsula. (Or, more properly, flown by four civilian hobbyists over the Internet, at least one of whom has melodramatic tastes in background music.) In this setup, they have no airborne radar controller with a God’s-eye view of the battlespace warning them of incoming threats. Instead, they have to rely on their own radar warning equipment, their awareness of each other’s position, and the onboard data networking equipment that allows them to construct a common picture of what’s going on. But they have to work to build that situational awareness, based on little half-second blips and buzzes that pulse on their threat receivers, and there is no time to spare, since modern missiles have the speed and range to kill virtually anything that can be detected, very quickly. Compared to the WWII experience, it seems strangely abstract.
This second video is taken from a mission analysis tape (mimicking the military’s TACTS/ACMI systems, most current flight simulators have some kind of “flight recorder” capability that allows the mission to be carefully dissected afterward — this is an attractive little presentation package called TacView). Here we see some of the consequences of chaos, when an F-15 takes a missile shot — and while his intended target exits the missile envelope, a friendly F-15 wanders into it. With modern fighter aircraft being as agile as they are, and modern missiles having the range, kinematics, and “semi-smart” acquisition mechanics that they do, and everything happening fast fast fast…stuff can happen.
I don’t assert that these commercial entertainment products will predict the outcome of future air battles. (Even assuming that flight simulations get the performance details within the ballpark, I think they still have a platform bias that undermodels operational-strategic capabilities which can change the battlefield fundamentals.) But I do think that they do a more creditable job at capturing the flavor of that battlefield than a lot of other media, which rely too heavily on accounts of past air wars that are receding in relevance.
And if nothing else, these mission accounts are very entertaining for an old computer game player to watch. They can be my wingmen anytime, no I can be theirs. Er. Well, something like that, anyway.