
UNIVERSITY OF MINNESOTA
GRADUATE SCHOOL

This is to certify that I have examined this copy of a master's thesis by

Prasad P. Kulkarni

and have found that it is complete and satisfactory in all respects, and that any and all revisions required by the final examining committee have been made.

Dr. Christopher G. Prince
Name of Faculty Adviser

Signature of Faculty Adviser

Date

Simulating Wing-Sensors on a Sailplane Airfoil To Evaluate Usefulness For Pilot Feedback

A THESIS SUBMITTED TO THE FACULTY OF THE GRADUATE SCHOOL OF THE UNIVERSITY OF MINNESOTA BY

Prasad P. Kulkarni

IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE

July 2009

© Prasad P. Kulkarni 2009

Acknowledgements

I would like to take this opportunity to thank my advisor, Dr. Christopher G. Prince, for his guidance throughout this thesis, and indeed my entire graduate career. With his assistance I gained a great deal of valuable knowledge about the aviation industry. I owe my heartfelt appreciation and thanks to Dr. Daniel Pope for being on my thesis committee, helping us better understand the basics of aerodynamics, and letting us use the license for the SolidWorks/FloWorks package. I also owe my heartfelt appreciation and thanks to Dr. Pete Willemsen for being a part of my thesis committee, and for helpful suggestions along the way. I would like to acknowledge Lori Lucia, Jim Luttinen and Linda Meek for their help with infrastructural issues. Finally, I would like to thank all my friends for their support and encouragement throughout the thesis.

Abstract

This thesis marks the start of a broader research programme to outfit the wing of a glider with an array of airflow sensors and feed that information back to an array of actuators contacting the skin of the pilot. Our broader research goals include improving the pilot's quality of experience of flight and assisting the pilot in detecting dangerous flight conditions such as stall and spin. Since the sensor and actuator hardware for this research programme is not yet ready, we developed software for simulating sensors and visually rendering the sensor readings for airflow conditions. The goal of this specific thesis was to simulate different airflow conditions, and to generate visual renderings of the pressure readings obtained from these simulations, in order to determine whether a glider pilot can visually discriminate between the renderings of different airflow conditions. To obtain simulations for different airflow conditions, we first designed a 3D wing model using 3D modeling software (SolidWorks), and then, using computational fluid dynamics software (FloWorks), we performed airflow analyses over the wing model by varying the angle of attack and airflow direction. We developed a technique for using the raw flow analysis data to extract virtual sensors that can be placed at any location on the simulated wing surface. We then mapped the virtual sensor readings to the RGB color model to obtain visual renderings of the sensor readings obtained for different airflow conditions. Finally, we

designed a test subject study involving 16 adult subjects for comparing the visual renderings obtained for different airflow conditions. The goal of the test study was to determine if adult subjects can visually discriminate between the renderings for different airflow conditions. The results of our test study were encouraging. Adults were able to discriminate reliably between the visual renderings for safe and unsafe flight conditions. We used the visual rendering system as a simulation of tactile (touch-based) feedback rendering with the assumption that if visual discrimination was possible then tactile discrimination would also be possible. The outcome of this thesis motivates us to take the research to the next level, and to use tactile hardware for rendering airflow conditions.


Table of Contents

1 Introduction .......................................................... 1
   1.1 Overview ......................................................... 1
   1.2 Literature review ................................................ 5
      1.2.1 Computational fluid dynamics (CFD) .......................... 5
      1.2.2 Sensors ..................................................... 6
      1.2.3 Feedback .................................................... 12
   1.3 Requirements and design issues ................................... 16
      1.3.1 Sensors ..................................................... 17
      1.3.2 Feedback .................................................... 18
      1.3.3 Mapping ..................................................... 18
   1.4 Research goals ................................................... 19
   1.5 Plan for rest of thesis .......................................... 22
2 Airflow Analysis Over Airfoil And Manipulation Of Data ................ 24
   2.1 Overview ......................................................... 24
   2.2 Introduction to SolidWorks ....................................... 24
   2.3 Designing 3D airfoil model in SolidWorks ......................... 25
   2.4 Introduction to CosmosFloWorks ................................... 26
   2.5 Flow analysis and simulation in FloWorks ......................... 27
      2.5.1 Evaluation of 2D flow analysis vs. 3D flow analysis ......... 28
      2.5.2 Flow analysis process ....................................... 31
      2.5.3 Mesh visualization .......................................... 32
   2.6 Calculating drag coefficient ..................................... 36
   2.7 Manipulation of data for simulated sensor values ................. 37
      2.7.1 Data storage format ......................................... 38
      2.7.2 Data lookup tables .......................................... 40
   2.8 Summary .......................................................... 42
3 Software For Visual Rendering Of Sensor Readings ...................... 43
   3.1 Overview ......................................................... 43
   3.2 External inputs to the software .................................. 46
   3.3 Selection of programming platform ................................ 49
   3.4 Design of the core software ...................................... 50
      3.4.1 Initialization of the core software ......................... 51
      3.4.2 Relevant software features .................................. 51
   3.5 Color mapping of sensor readings ................................. 58
   3.6 Output of the software ........................................... 61
      3.6.1 Visual rendering of sensor data ............................. 62
      3.6.2 Relative and absolute rendering ............................. 63
      3.6.3 Rendering of the wing surface ............................... 66
   3.7 Comparison of various specific outputs ........................... 67
   3.8 Summary .......................................................... 68
4 Evaluation Of Perceptual Discrimination Of Data Sets .................. 69
   4.1 Overview ......................................................... 69
   4.2 Airfoil selection and simulation parameters ...................... 72
      4.2.1 Selection of airfoil ........................................ 72
      4.2.2 Simulation parameters ....................................... 75
   4.3 Test subject study ............................................... 75
   4.4 Visual stimuli ................................................... 76
   4.5 Stimuli design ................................................... 78
   4.6 Design for comparison of various stimuli images .................. 80
   4.7 Test subject schedules ........................................... 83
   4.8 Balancing of various factors in schedule ......................... 84
   4.9 Hypotheses for test subject study ................................ 86
      4.9.1 Response validity ........................................... 87
      4.9.2 Safety ...................................................... 88
      4.9.3 Airflow direction ........................................... 89
      4.9.4 Absolute rendering vs. relative rendering ................... 90
      4.9.5 Exploratory hypothesis ...................................... 91
   4.10 Pre-study knowledge given to test subjects ...................... 92
   4.11 Program for presenting trials ................................... 94
   4.12 Analysis of subjects' responses ................................. 95
      4.12.1 Analysis of response validity hypotheses ................... 96
      4.12.2 Analysis of hypothesis related to safety ................... 100
      4.12.3 Analysis of hypotheses related to airflow direction ........ 103
      4.12.4 Analysis of hypotheses related to absolute vs. relative rendering trials ... 109
      4.12.5 Analysis of exploratory hypothesis related to response time ... 112
   4.13 Summary ......................................................... 116
5 Conclusion And Future Work ............................................ 119
   5.1 Conclusion ....................................................... 119
      5.1.1 Evaluation of hypotheses for test subject study ............. 120
      5.1.2 Visual rendering of airflow conditions ...................... 124
   5.2 Future work ...................................................... 125
Appendix 1: Procedures For Wing Design And Flow Analysis ................ 129
Appendix 2: Java Specifications Of Software Interface Layer ............. 148
Appendix 3: Balancing Tables For Test Schedules ......................... 152
Appendix 4: Test Subjects' Responses .................................... 154
Appendix 5: Visual Rendering For All Airflow Conditions ................. 170
Bibliography ............................................................ 183

List of Tables

Table 1.1: Miniature air pressure sensors ............................... 8
Table 1.2: Miniature aviation air pressure sensors ...................... 11
Table 1.3: Planned progression of research programme .................... 21
Table 3.1: Conditions to identify the partition of a specific sensor reading ... 61
Table 4.1: Factors for stimuli design ................................... 79
Table 4.2: Angle of attack pairings ..................................... 81
Table 4.3: Analysis of hypothesis H1: Responses for Expected-Y trials ... 97
Table 4.4: Analysis of hypothesis H2: Responses for Expected-N trials ... 97
Table 4.5: Analysis of hypothesis H3: Responses for Expected-Y trials compared with responses for Expected-N trials ... 98
Table 4.6: Analysis of hypothesis H4: Responses for non-stall vs. stall trials compared with responses for non-stall vs. non-stall trials ... 100
Table 4.7: Analysis of hypothesis H5: Responses for non-stall vs. stall trials compared with responses for stall vs. stall trials ... 101
Table 4.8: T-test results for means obtained from hypotheses H4 and H5 ... 102
Table 4.9: Analysis of hypothesis H6: Responses for horizontal airflow condition trials compared with responses for updraft airflow condition trials ... 103


Table 4.10: Analysis of hypothesis H7: Responses for horizontal airflow condition trials compared with responses for downdraft airflow condition trials ... 104
Table 4.11: Analysis of hypothesis H8: Responses for updraft airflow condition trials compared with responses for downdraft airflow condition trials ... 105
Table 4.12: T-test results for means obtained from hypotheses H6 and H8 ... 105
Table 4.13: New design for comparing updraft vs. downdraft airflow conditions ... 108
Table 4.14: Analysis of hypothesis H9: Responses for non-stall vs. stall trials of relative rendering compared with responses for non-stall vs. stall trials of absolute rendering ... 109
Table 4.15: Analysis of hypothesis H10: Responses for non-stall vs. non-stall trials of relative rendering compared with responses for non-stall vs. non-stall trials of absolute rendering ... 110
Table 4.16: Analysis of hypothesis H11: Responses for non-stall vs. stall trials of relative rendering compared with responses for non-stall vs. stall trials of absolute rendering ... 111
Table 4.17: T-test results for means obtained from hypotheses H9 to H11 ... 111
Table 4.18: T-test results for response time for trials with relative rendering compared with response time for trials with absolute rendering ... 114


List of Figures

Figure 1.1: Pressure belt installed on wing ............................. 9
Figure 2.1: Uniform chord wing model designed in SolidWorks ............. 26
Figure 2.2: Pressure readings for various span positions ................ 30
Figure 2.3: Preview of flow analysis .................................... 32
Figure 2.4: Mesh of fluid cells ......................................... 33
Figure 2.5: Partial cells for flow analysis ............................. 34
Figure 2.6: Partial cells refined ....................................... 35
Figure 2.7: Sample of wing data XML file ................................ 39
Figure 2.8: Example of a lookup table ................................... 41
Figure 3.1: System organization ......................................... 44
Figure 3.2: Sample sensor configuration XML format ...................... 47
Figure 3.3: Initial screen of the core software ......................... 49
Figure 3.4: Actions menu in core software ............................... 52
Figure 3.5: Sensor Data At dialog box ................................... 53
Figure 3.6: Sensor Data At user specified location ...................... 54
Figure 3.7: One possible arrangement of sensors ......................... 55

Figure 3.8: Example partial output of sensor readings ................... 56
Figure 3.9: Airfoil Info dialog box ..................................... 57
Figure 3.10: Color mapping chart (modified from FloWorks) ............... 58
Figure 3.11: Logical partitions of the range of sensor readings ......... 59
Figure 3.12: Rendering of sensor readings ............................... 62
Figure 3.13: Relative rendering example ................................. 65
Figure 3.14: Absolute rendering example ................................. 65
Figure 3.15: Rendering of pressure distribution across the wing surface ... 66
Figure 4.1: Airfoil and its components .................................. 70
Figure 4.2: Plot of lift coefficient versus angle of attack ............. 73
Figure 4.3: Top and bottom surface images combined to form a component stimulus ... 77
Figure 4.4: Example pair of component stimuli to be presented to a test subject ... 78
Figure 4.5: Example of one test schedule, presented to a test subject ... 84
Figure 4.6: Software interface to present a trial to subjects and collect the response ... 94
Figure 4.7: Comparison of mean response time for different trial types (relative rendering) ... 113
Figure 4.8: Comparison of mean response time for different trial types (absolute rendering) ... 113


Chapter 1: Introduction

1.1 Overview

Airflow simulation software is used in the aviation industry to design and develop aircraft. Computers running this simulation software are used to study the airflow conditions around models of the aircraft. In this thesis, we will make use of such simulation software to evaluate the ability of an array of wing sensors to give feedback to a glider pilot depending upon variations in the airflow around a glider. We are working towards a system for sensing the local airflow outside a glider and sending feedback regarding that airflow to the pilot. For example, we may use an array of MEMS (Micro Electro-Mechanical Systems) air pressure sensors on the glider wing, and feed the information from these sensors back to the pilot on their skin through tactile rendering technology. Providing this kind of feedback to the pilot may be practically useful, may enhance safety, and may also enhance the quality of the flying experience. Such feedback may be practically useful because it may help the pilot to know about lift conditions (updrafts or downdrafts) without having to look at the in-cockpit instruments. The feedback system may enhance safety by assisting the pilot to

detect stall conditions well in time so that he or she can take proper action to get the sailplane out of any arising danger. This thesis research marks the start of a broader research programme to outfit the wing of a glider with sensors and feed that information back to the pilot. This thesis research was concerned with a subset of the broader research programme (as discussed in Section 1.4). Some of the general problems which we will be addressing in this broader research programme are:

1. Will the feedback information provide interesting and useful perceptual discriminations to the pilot?

2. How can we give feedback to the pilot reflecting the local airflow conditions outside the glider?

3. How can we establish a mapping from the sensor array to the feedback mechanism?

4. Will the feedback information have an adverse impact by adding to the pilot's workload? In general, it seems important for such a system not to add to the pilot's workload.

5. Conversely, can the feedback information reduce the pilot's workload? For example, if we feed back airflow information from an array of sensors to a tactile sense, this could reduce the pilot's reliance on the in-cockpit indicators (e.g., stall warning), potentially reducing the pilot's workload.
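The mapping problem raised above can be made concrete with a minimal sketch. The class name, the linear normalization scheme, and the sample pressure values below are our own illustrative assumptions, not part of the thesis hardware or software:

```java
// Hypothetical sketch: rescale an array of wing pressure readings (Pa)
// into tactile actuator intensities in [0, 1].
public class SensorToTactileMap {

    // Linearly map each reading from [minP, maxP] to [0, 1], clamping
    // readings that fall outside the calibration range.
    public static double[] mapToIntensities(double[] readings, double minP, double maxP) {
        double[] out = new double[readings.length];
        for (int i = 0; i < readings.length; i++) {
            double t = (readings[i] - minP) / (maxP - minP);
            out[i] = Math.max(0.0, Math.min(1.0, t));
        }
        return out;
    }

    public static void main(String[] args) {
        // Example readings around standard sea-level pressure (101325 Pa).
        double[] readings = {101000.0, 101325.0, 101650.0};
        for (double v : mapToIntensities(readings, 101000.0, 101650.0)) {
            System.out.printf("%.2f%n", v);
        }
    }
}
```

Even a sketch this small exposes a design choice: whether the normalization range should be fixed in advance (absolute) or derived from the current flight condition (relative), a distinction that recurs for the visual rendering in Chapter 3.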

The broader impacts of the research programme include:

A. To potentially enhance the quality of glider flight. The airflow conditions around the glider can be measured by sensors mounted on the wing and, using a tactile feedback system, that airflow can be represented to the persons sitting in the cockpit. With this we can provide the feel of air flowing around the sailplane. This may enhance the quality of glider flight because the pilot may be able to sense variations in the airflow conditions around the sailplane without having to rely on in-cockpit instruments such as variometers, which indicate the rising/sinking speed of the surrounding air.

B. To potentially enhance the safety of the flight by helping the pilot to detect wing stall conditions. At times, the glider may be near a wing stall condition without the pilot being aware. If the glider stalls, its lift is destroyed and the glider loses altitude quickly. Such situations can be dangerous because the glider can crash and cause the death of the pilot. We may be able to use the tactile feedback system to alert the pilot to the proximity of a stall without the pilot having to look at the in-cockpit instruments. Hence such a system may enhance the overall safety of the flight.

C. To facilitate new ideas and provide insights regarding artificial sensation in the aviation field. Providing human beings with a sense that they did not have before is called an artificial sense. For example, an electron microscope produces highly magnified images of objects which are not visible to the naked eye and hence can be considered to provide an artificial sense. Similarly, infrared cameras with night vision provide humans with the ability to see objects in the dark and can also be considered to provide an artificial sense. So, if we can provide humans with the ability to sense the airflow outside the glider over their skin using tactile rendering hardware, it would provide them with an artificial sense of airflow conditions. The pilot's experience would be similar to that of a soaring bird, which can naturally sense variations in the airflow conditions.

D. To provide a new modality of experience to simulations of glider flight. Glider flight simulations (including games) may be enhanced by simulating the airflow conditions so that the simulator pilot can feel airflow around the simulated aircraft. Presently the information provided in such simulators is largely audio-visual. The effectiveness of the tactile feedback system can also be studied by such simulations.

E. To reduce the pilot's workload. The glider pilot has to rely on in-cockpit instruments for various flight related data. Too much reliance on the instruments can hamper the pilot's ability to fly the glider. We may be able to reduce the dependency on these instruments by providing some flight related information relevant to lift and stall through a tactile feedback system.

In the rest of this introduction chapter, we first review relevant literature, then cover requirements for hardware and software, and finally give the plan for the remaining chapters.

1.2 Literature review

In this section, we review the literature pertaining to the research programme. This includes Computational Fluid Dynamics (CFD), sensory hardware, and technology for providing feedback to the pilot.

1.2.1 Computational fluid dynamics (CFD)

Wind tunnel simulators (Benson, 1997) are used to simulate the airflow outside of an aircraft. These simulators are based on the equations of fluid dynamics (Shaughnessy,

Katz, & Schaffer, 2005). More generally, such simulations comprise computational fluid dynamics, or CFD (Moin & Kim, 1997). CFD-based wind tunnel simulators are often used in aircraft development to simulate new aircraft designs. For example, they are used in the early design stages, when engineers are establishing key dimensions of the aircraft. Examples of current computational fluid dynamics software packages include SolidWorks with Cosmos FloWorks (http://www.solidworks.com/pages/products/cosmos/cosmosfloworks.html), and Fluent (http://www.fluent.com/). These packages can be used to construct models of 3D physical structures (e.g., a wing airfoil) and simulate fluid (e.g., air) flow over these structures. They can also be used to simulate sensors on those simulated structures.
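As one hedged illustration of what simulating a sensor on a simulated structure can mean (this is our own minimal sketch, not the FloWorks mechanism nor the implementation described later in this thesis), a virtual pressure sensor can simply report the pressure computed at the CFD surface point nearest to a requested location:

```java
// Hypothetical sketch: a "virtual sensor" that returns the pressure (Pa)
// of the CFD surface point nearest to a requested (x, y, z) location.
public class VirtualSensor {

    // Index of the point (rows of {x, y, z}) closest to (x, y, z),
    // compared by squared Euclidean distance.
    public static int nearestIndex(double[][] points, double x, double y, double z) {
        int best = 0;
        double bestD = Double.MAX_VALUE;
        for (int i = 0; i < points.length; i++) {
            double dx = points[i][0] - x, dy = points[i][1] - y, dz = points[i][2] - z;
            double d = dx * dx + dy * dy + dz * dz;
            if (d < bestD) { bestD = d; best = i; }
        }
        return best;
    }

    // Pressure reported by a virtual sensor placed at (x, y, z).
    public static double pressureAt(double[][] points, double[] pressures,
                                    double x, double y, double z) {
        return pressures[nearestIndex(points, x, y, z)];
    }

    public static void main(String[] args) {
        // Three made-up surface points along a chord and their pressures.
        double[][] pts = {{0.0, 0.0, 0.0}, {0.5, 0.0, 0.1}, {1.0, 0.0, 0.0}};
        double[] p = {101400.0, 101250.0, 101320.0};
        // The point nearest to (0.45, 0.0, 0.05) is the second one.
        System.out.println(pressureAt(pts, p, 0.45, 0.0, 0.05));
    }
}
```

A real implementation would likely interpolate between neighboring cells rather than snap to the nearest point, but even the nearest-point version shows how sensors can be "placed" anywhere on a simulated surface after the flow analysis is complete.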

1.2.2 Sensors

The sensors that we will eventually use on the glider wing (see Table 1.1, Step 4) may sense various properties of the local airflow such as pressure, airspeed, direction and turbulence. We intend to use these sensors to provide airflow information to a human pilot. In this section we will discuss how airflow parameters are measured and some sensor technologies that may have application to this research. Airflow parameters such as wind speed or wind pressure can be measured by a device called an anemometer. A simple type of anemometer, the cup anemometer, consists of four hemispherical cups mounted on a vertical shaft. As air flows past the device, the

cups rotate around the shaft at a rate proportional to the wind speed (http://en.wikipedia.org/wiki/Anemometer). Another form of anemometer, the hot-wire anemometer, uses a very fine heated wire (e.g., several micrometers in diameter). The airflow causes a cooling effect on the hot wire, which is heated by a constant current. The heat lost due to airflow over the wire is then converted to wind velocity (http://www.efunda.com/designstandards/sensors/hot_wires/hot_wires_intro.cfm). A third type of anemometer, the plate anemometer, is used for measuring wind pressure. It consists of a flat plate suspended on a spring in a direction normal to the wind. The spring balances the pressure exerted on the plate by the wind. A suitable gauge or recorder measures the compression of the spring, which determines the actual wind pressure (http://en.wikipedia.org/wiki/Anemometer). Now that we have covered some general airflow measurement techniques, in the rest of the section we consider sensors that could have more direct application to our broader research programme. Researchers have constructed artificial sensors based on the study of insect haircell physiology. For example, Ozaki, Ohyama, Yasuda, and Shimoyama (2000) have constructed artificial insect haircell sensors using micro-mechanical structures (consisting of micro-cantilevers and micro-strain gauges) of size 3000 μm in length, 250 μm in width, and 8 μm in thickness, composed into arrays. These sensors were able to accurately measure airflow velocity in the range of tens of cm/s to 200 cm/s. While these artificial haircell sensors are interesting, it may prove more practical for us to rely on

commercially available technology. Table 1.1 lists some of the sensory hardware used for commercial purposes to measure air pressure (Hallberg et al., 2008).

Technology: FOP-F125, FISO Technologies Inc.
Description: 125 μm, manufactured at the tip of an optical fiber; range: +/- 300 mm Hg; accuracy: +/- 8 mm Hg; resolution: < 0.4 mm Hg. Signal conditioner interface. (In July 2008, a FISO representative indicated this sensor cannot survive contact with liquids.)

Technology: SCP1000 (www.vti.fi)
Description: Barometric pressure sensor; 30-120 kPa measuring range; sensor size: 6.1 mm diameter; SPI or I2C sensor interfaces.

Technology: Bosch BMP085
Description: 5 mm x 5 mm, height 1.2 mm. Absolute accuracy down to 0.03 hPa. Power consumption down to 3 μA. I2C interface.

Technology: Freescale Semiconductor MP3H6115A6U/T1, CASE 1317-04
Description: High-level analog output signal proportional to applied pressure; 15 to 115 kPa (2.2 to 16.7 psi); .42 x .30 x .165 inches.

Table 1.1: Miniature air pressure sensors

Khbeis, Tan, Metze, and Ghodssi (2003) developed a microfabrication approach that enabled them to successfully integrate a pressure sensor array on an airfoil to detect certain turbulent airflows. These researchers made backside electrical connections on the sensor array so that additional air turbulence is not generated by the presence of the array. Boeing, Endevco and Georgia Tech have been working on a joint project to develop a smart sensor device, called a pressure belt, for measuring the pressure distribution on the top and bottom surfaces of an airplane wing (Catlin, Eccles, & Malchodi, 2002). This device uses MEMS technology, with active components sealed in plastic, positioned within the boundary layer of air flowing over the wing. The pressure belt is a long, flexible circuit card that has smart sensors located at intervals along the belt. Figure 1.1 illustrates the pressure belt installed on a wing (Catlin et al., 2002).

Figure 1.1: Pressure belt installed on wing

A recent news report (Sensors, 2008) indicates that the pressure belt device being jointly developed by Boeing, Endevco and Georgia Tech will be commercially available. Previous methods of obtaining this data required drilling holes in the wing, extensive cabling, and signal conditioning and data acquisition products to achieve the same goal. The product allows Boeing, and now all Endevco customers, to make extremely accurate pressure measurements, including along new composite wings on aircraft such as the 787, without the need for wing modifications. Each Endevco pressure belt assembly comprises up to three ultra-miniaturized pressure sensor modules networked together and mounted on a flexible strip. The low profile of the pressure belt generates minimal aerodynamic impact (Sensors, 2008). Table 1.2 lists some of the commercially available sensors used in the aviation industry (Hallberg et al., 2008).


Technology: Kulite CCQ-093
Description: -320 °F to +250 °F; four insulated wire leads run from the sensor package; sensor dimensions 2.4 mm x 9.5 mm; absolute pressure.

Technology: Kulite LEH-1AC-250
Description: Miniature flatpack pressure transducer; absolute pressure; 12.7 x 7.6 x 2.46 mm.

Technology: Endevco Model 3239415
Description: MEMS silicon pressure sensor; 1.65 mm long by 1.2 mm wide by 0.4 mm tall; four connection pads for mounting to a circuit or substrate using conductive epoxy; absolute pressure; range of 0-15 psia, and a nominal 200 mV full scale output when powered using a sensor excitation of 5 VDC.

Technology: Endevco Model 8507C
Description: 2 to 15 psi; 2.42 x 12.7 mm; leads attached; needs signal conditioner.

Table 1.2: Miniature aviation air pressure sensors


1.2.3 Feedback

Methods of providing feedback to the pilot from information obtained via the sensor array include haptic, visual and auditory channels. Haptic relates to the sense of touch, and haptic rendering is the process of creating forces on the user's skin to produce tactile sensation, typically under computer control. We focus on tactile feedback because visual and auditory stimuli already exist in the pilot's environment, and providing more visual or auditory information may overload the pilot. Tactile displays have also been used as vision substitution for blind people (Bach-Y-Rita, Collins, Saunders, White, & Scadden, 1969). Vision substitution aims at providing some equivalent of vision via another modality, such as hearing or touch. In the display system of Bach-Y-Rita et al. (1969), four hundred solenoid stimulators were arranged in a 20x20 array built into a dental chair. The stimulators, spaced 12 mm apart, had 1 mm diameter Teflon tips which vibrated against the skin of the back. A television camera mounted on a tripod scanned the visual scene and presented stimuli corresponding to the visual pixels onto the skin of the back of the blind subjects. With training, the blind subjects were able to use the system in various visual tasks. Haptic rendering on a person's torso can also provide navigational cues to the operator (Jones, Nakamura, & Lockyerb, 2004). Jones et al. (2004) used a matrix of electromechanical stimulators mounted in a vest that were sequentially activated to provide information useful for navigation. In related research, Jones, Lockyerb, and Piateski (2006) designed and fabricated a wirelessly controlled tactile display to assist in navigation. This display comprised a 4x4 array of vibrating motors mounted on a waist band which stimulated the skin across the lower back. The peak frequency of these motors ranged from 100 to 200 Hz, within the range of optimal sensitivity to vibratory stimuli on the torso. The aim of designing this tactile display was to determine which tactor (a device that converts an electrical signal to rotary or linear motion for the purpose of tactile stimulation) array and which types of tactile patterns are most effective for providing directional cues. Simple activation patterns with intuitive meaning were selected to represent navigation commands. For example, a pattern in which the vibration frequency gradually decreased from the top row to the bottom row of the motor array represented the direction "up". Similarly, a pattern in which the vibration frequency increased across the columns of motors represented the direction "right". Before the start of the experiment, subjects were shown a visual representation of these vibrotactile patterns. The patterns were then activated in random order on the tactile display placed on the subject's lower back, and subjects were asked to identify the pattern presented. The results suggested that the vibrotactile patterns were easy to identify, and most subjects achieved 100% accuracy. In a further experiment, the same tactile patterns were evaluated while subjects navigated outdoors; again, most subjects attained 100% accuracy. The results from these experiments indicated that the vibrotactile patterns were easily identified and can be used for providing navigational cues.

Zlotnik (1988) studied various factors for applying electro-tactile technology in fighter aircraft. The objective of this study was to find alternative display media that could provide vital flight-related data to pilots. Such a display would reduce dependency on the auditory and visual feedback devices that are already overloaded in the cockpit of a fighter aircraft. Especially in air-to-air combat situations, tactile feedback can improve the pilot's situational awareness: the pilot can spend less time looking at cockpit devices and hence more time in visual contact with enemy aircraft. This system used a tactile sleeve for giving feedback to the pilot's arm. An array of stimulator points was embedded into the sleeve at specific locations. When the stimulator points on the sleeve were energized, airspeed was translated into tactile sensations. Subjects were trained to quantify the stimulus values in laboratory experiments. A series of stimulus signals was presented to subjects wearing the tactile sleeve, and they were asked to compare each stimulus intensity with the previous signal. The findings of these preliminary experiments were promising: subjects were able to detect the stimulus signal as the pulse duration was varied between 12 and 300 msec. The results suggested that if digitally displayed information is presented in tactile form, it can still be tracked effectively by the pilot.


Cardin, Vexo, and Thalmann (2006) developed a vibrotactile system to alert the pilot about the plane's attitude: pitch and roll. The aim of this system was to enable the pilot to take necessary corrective actions that cannot be taken by the autopilot. For example, certain external conditions, such as wind blowing at 120 km/hr, can destabilize an aircraft and cannot be automatically corrected. In such situations the vibrotactile feedback system can help the pilot take corrective action by alerting him before the plane goes out of control. The system also improves the pilot's awareness of the current situation in case of spatial disorientation. Their system placed four actuators on the pilot's torso and shoulders to render the spatial orientation of the aircraft on the skin: pitch information was rendered on the front and back of the torso, while roll information was rendered on the left and right shoulders. The system used feedback techniques such as stimulating the actuator points on the left when the aircraft rolls left, with vibration intensity directly proportional to the roll angle. The system had two main components: an embedded vibration system and a feedback engine. The embedded vibration system consisted of an 8-bit microcontroller that collected data from the output of the feedback engine and drove the vibration motors. The feedback engine collected pitch and roll information from a flight simulator, which was used to conduct all the experiments and to train the subjects. The user of this system wore a shirt equipped with vibration motors placed at specific points. Various experiments were conducted to test the usability of the system and the pilot's awareness in various situations such as night time, reversed reading, and awakening from sleep. The test results suggested that with vibrotactile feedback the response time of the pilot was reduced: the pilot required less time to stabilize the aircraft compared to the same situations without tactile feedback. Cardin, Vexo, and Thalmann (2007) developed a haptic device called Head Mounted Wind (HMW) to generate air flow around the user's head. The intention behind this system was to enhance the user's immersion in an aviation-related virtual world. The system used a set of eight fans, driven by a microcontroller and placed equidistant from each other on a rigid structure worn on the user's head. A flight simulator supplied the wind direction, wind force, and the direction of the simulated aircraft; the direction and force of the wind determined which fan actuators to activate and at what speed. Subjects were trained in the flight simulator along with the Head Mounted Wind, after which experiments were conducted. The results suggested that the feeling of immersion was improved and that the feedback gave reliable information about the simulated wind direction.

1.3 Requirements and design issues

In this section we will highlight requirements and design issues of the sensors, feedback technology and the mapping technique between them.


1.3.1 Sensors

Mounting the sensors on the glider wing should produce minimal interference with the airflow. If the sensors affect the air flow, they might also have an impact on the pressure, velocity, and other factors acting on the wing. For instance, the aircraft may not attain the desired lift at a certain angle of attack (the angle between the wing and the direction of the air flowing over the wing) if the air flow is obstructed.

The sensors mounted on the wing should connect to the tactile hardware device in the cockpit. We should consider connection aspects such as the length of the leads running from the wing back to the fuselage. For example, SPI (Serial Peripheral Interface) interfaced sensors may have limitations on lead length (Hallberg et al., 2008).

These sensors should be robust to temporary immersion in water (e.g., rain or cleaning a wing).

We may need multiple separate types of sensors to measure the different characteristics of airflow such as air pressure, and air velocity.

The sensors should consume little power because gliders generally don't have an engine or a generator. The various electronic or electrical devices mounted in


the glider typically obtain power from a battery. Hence the source of electrical power in a glider is restricted.

1.3.2 Feedback

A decision needs to be made about which parts of the body are appropriate for providing feedback. We may not want to provide feedback on body parts where it is less effective. For example, providing feedback on the pilot's back may not be effective, as the back is often in contact with the seat. A suitable location for feedback may be from the elbow to the shoulder of both arms.

If a vibration type of actuator is used, then the designers need to decide what range of frequencies for the vibration or pulses are suitable for the feedback system (van Erp, Jansen, Dobbins, & van Veen, 2004).

The actuator points used in the feedback system should not hurt the pilot. Glider flights can last for several hours, so it is important that sustained use of tactile feedback not cause irritation to the pilot's skin, and that the system not be aversive.

1.3.3 Mapping

The mapping system should relate the measurements from the sensor output to the actuators in the feedback system.

The mapping may need to perform some filtering on the sensor output before rendering tactile feedback. For example, it may be necessary to filter out noise present in the sensor measurements. The mapping system may need to set an upper and lower bound for the sensor output. Any values outside this band should not have any effect on the tactile feedback.
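The filtering and clamping described above can be sketched as follows. This is an illustrative sketch, not the thesis implementation: the class name, the use of a moving average as the noise filter, and the pressure band limits are all assumptions made for the example.

```java
// Hypothetical sketch of the filtering stage: readings are smoothed with a
// simple moving average to suppress noise, then clamped into a fixed band so
// that out-of-range values cannot drive the tactile feedback.
public class SensorFilter {
    private final double lowerBound;  // Pa; values below this are clipped
    private final double upperBound;  // Pa; values above this are clipped

    public SensorFilter(double lowerBound, double upperBound) {
        this.lowerBound = lowerBound;
        this.upperBound = upperBound;
    }

    /** Average the last 'window' samples to suppress sensor noise. */
    public double movingAverage(double[] samples, int window) {
        int n = Math.min(window, samples.length);
        double sum = 0;
        for (int i = samples.length - n; i < samples.length; i++) {
            sum += samples[i];
        }
        return sum / n;
    }

    /** Clamp a filtered reading into the [lowerBound, upperBound] band. */
    public double clamp(double value) {
        return Math.max(lowerBound, Math.min(upperBound, value));
    }
}
```

The band limits would in practice be chosen from the range of pressures the simulations actually produce.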

The intensity of the pressure, velocity or other airflow properties measured by the sensors should drive the intensity of the tactile feedback.

The number and shape of the sensory array should be mapped to the number and shape of actuators. For example, the air pressure around the wing might be measured by an array of 8x30 sensors mounted on the wing. The changes in the air pressure might be rendered on the pilots skin using a tactile feedback system that comprises 4x4 actuator points. The mapping needs to take into account the different numbers and shapes of sensor array versus actuator array.
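One simple way to map a dense sensor grid to a sparse actuator grid, as in the 8x30-to-4x4 example above, is block averaging: each actuator receives the mean reading of the rectangular block of sensors that falls in its region. The sketch below is illustrative only (the class name and the choice of averaging are assumptions, not the thesis design).

```java
// Illustrative sketch: reduce a sensor grid (e.g., 8x30) to an actuator grid
// (e.g., 4x4) by averaging rectangular blocks of sensor readings.
public class SensorToActuatorMap {
    public static double[][] downsample(double[][] sensors, int outRows, int outCols) {
        int inRows = sensors.length;
        int inCols = sensors[0].length;
        double[][] actuators = new double[outRows][outCols];
        for (int r = 0; r < outRows; r++) {
            for (int c = 0; c < outCols; c++) {
                // Bounds of the block of sensor cells feeding actuator (r, c).
                int r0 = r * inRows / outRows, r1 = (r + 1) * inRows / outRows;
                int c0 = c * inCols / outCols, c1 = (c + 1) * inCols / outCols;
                double sum = 0;
                int count = 0;
                for (int i = r0; i < r1; i++) {
                    for (int j = c0; j < c1; j++) {
                        sum += sensors[i][j];
                        count++;
                    }
                }
                actuators[r][c] = sum / count;
            }
        }
        return actuators;
    }
}
```

Integer division of the index bounds handles grids whose dimensions are not exact multiples of the actuator grid, such as 30 columns mapped onto 4.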

1.4 Research goals

The specific goal of this thesis is to determine whether airflow feedback information can provide interesting and useful visual perceptual discriminations to the pilot. We will do this using a wind tunnel simulator. We will use such a simulator,


because at this initial stage of the research programme, we have not yet developed a sensory array for a glider wing. The wind tunnel simulator will be used to simulate both a wing and sensors on the wing. Software will be written to visualize the results of this simulation and to provide an initial evaluation of whether the wing sensor information may provide interesting and useful perceptual discriminations to the pilot. That is, instead of using tactile feedback, we will evaluate the usefulness of visual feedback. As yet, we don't have a full-fledged feedback system in place. Until such a tactile feedback system is available, we will use a visualization tool for rendering the system data. We discuss the visualization (GUI) tool in detail in Chapter 3. Table 1.3 shows a roadmap of our planned research programme. The present thesis research comprises Steps 1 and 2 of the programme.


Step  Description
1     Simulation of a wing model and sensors in varied airflow conditions.
2     Evaluation of visual discrimination based on simulation.
3     Use simulated wing and simulated sensors with haptic (tactile) rendering hardware.
4     Develop sensory array hardware.
5     Using sensory hardware on a wing section, collect data from a real wind tunnel.
6     Using a small scale plane model with sensors, collect data from a wind tunnel, and in-flight with radio control.
7     Using an existing full scale glider wing with sensors, collect data.
8     Develop a new glider wing design to ensure no airflow interference with sensors, or develop sensors such that they don't interfere with the existing wing air flow.

Table 1.3: Planned progression of research programme


1.5 Plan for rest of thesis

In Chapter 2 we discuss the flow analysis over an airfoil using SolidWorks and FloWorks. In this chapter we show how wing sensors can be simulated by measuring simulated air pressure at specific points on the simulated wing. For example, we measure the pressure at fixed points on the wing for a particular angle of attack. In Chapter 3 we cover the requirements and implementation of a Graphical User Interface (GUI) to visually show the pressure values from the simulated wing. We also cover how the GUI provides abstractions for data sources. Beyond the specific goals of this thesis, we plan to use this GUI as a debugging facility for the research in general: even when the tactile feedback hardware is in place, we may still want some kind of visual rendering of the sensors to make sure they are working properly. In Chapter 4 we discuss an experiment in which a set of GUI displays for different simulations was shown to adult subjects to determine whether they could detect visual differences between simulations using varying angles of attack on the simulated wing. In these experiments, we also varied the updraft or downdraft conditions to see if our subjects could discriminate between the resulting GUI representations. This experiment provides a first evaluation of whether airflow feedback information can provide interesting and useful perceptual discriminations to the pilot. In Chapter 5 we conclude based on the results of Chapter 4 and consider prospects for future research.


Chapter 2: Airflow Analysis Over Airfoil And Manipulation Of Data

2.1 Overview

A goal of our research is to simulate an array of sensors on a glider wing and evaluate whether airflow feedback information can provide interesting and useful perceptual discriminations to the pilot. For this purpose, we designed 3D airfoil models in SolidWorks and, using these models, conducted FloWorks simulations of air flow over the airfoil. In this chapter, along with basic concepts and terminology, we discuss the procedures used for conducting airflow simulations over an airfoil. We consider the options of 2D vs. 3D airflow analysis, and we discuss how the simulator data was manipulated to generate simulated sensor values.

2.2 Introduction to SolidWorks

SolidWorks is 3D CAD (Computer-Aided Design) software used in industry (e.g., aerospace and automotive) and educational institutions to design mechanical parts and assemblies. SolidWorks also provides tools for visualization and manipulation of 3D parts.


2.3 Designing 3D airfoil model in SolidWorks

Using SolidWorks, an airfoil model was constructed to enable subsequent simulation of airflow and sensors. Designing a 3D airfoil model is a two-step process in SolidWorks. First, we sketch an airfoil curve in two dimensions (i.e., X and Y Cartesian coordinates). Second, we use tools in SolidWorks to model a 3D wing from that curve. To plot a two-dimensional airfoil curve we need the co-ordinates of the airfoil curve. In this thesis we used the airfoil of a Standard Cirrus glider for initial simulations; later simulations were conducted with a second airfoil, which we discuss in detail in Section 4.2. We obtained the airfoil curve points for the Standard Cirrus from Gooden, Schempp-Hirth, and Treiber (2009). For the purposes of this thesis, we used the airfoil curve co-ordinates for the Standard Cirrus Root Rib airfoil. We considered using a uniform chord wing model vs. a variable chord wing model. Modeling a uniform chord wing in SolidWorks is simpler than modeling a variable chord wing: we create a 2D airfoil curve in SolidWorks and then use the relevant tools to stretch the airfoil curve in the third dimension (along the span of the wing). Figure 2.1 shows a uniform chord wing model designed in SolidWorks. A detailed procedure for creating a uniform chord 3D wing model in SolidWorks is given in Appendix 1. To model a variable chord wing we need to obtain the complete specification of a 3D wing model. For more details


on an example specification, refer to Treiber et al. (2009). Procedures for modeling a variable chord wing in SolidWorks, based on these specifications, were not easily available. Also, the purpose of this research was not to utilize a high fidelity wing model, but rather to provide a proof of concept for perceptual discrimination. Hence we decided to use a uniform chord wing model for this thesis.

Figure 2.1: Uniform chord wing model designed in SolidWorks

2.4 Introduction to CosmosFloWorks

CosmosFloWorks is Computational Fluid Dynamics (CFD) software (a simulated wind tunnel) used for simulating fluid flow over 3D models designed in SolidWorks. It integrates with SolidWorks, so the user can switch between design and flow analysis. CosmosFloWorks solves the Navier-Stokes (NS) equations to determine the fluid flow characteristics (Shaughnessy et al., 2005). We are using CosmosFloWorks for airflow analysis because we want to simulate sensors on the airfoil model. Before the analysis starts, FloWorks constructs a mesh of fluid cells in the computational domain (the space in which the flow analysis is conducted). One or more computational mesh cells can be used to provide values for simulated sensors on the airfoil model. Later in this chapter we discuss the mapping of mesh cells to simulated sensors.

2.5 Flow analysis and simulation in FloWorks

As a part of this thesis research we are performing airflow analysis over a 3D airfoil and simulating the flow characteristics over the surface of the airfoil. The fluid flow characteristic of interest in this thesis is pressure. Traditionally, wind tunnels have been used to experimentally study the fluid characteristics over complex 3D parts. However, wind tunnels are not easily available, are expensive to maintain and are not convenient for at-will analysis. We will be using CosmosFloWorks for the fluid analysis since FloWorks simulates the flow conditions. We would like to obtain fluid flow characteristics over any set of points on the surface of an airfoil model. CosmosFloWorks gives us the flexibility to obtain these fluid flow characteristics.


A detailed procedure for performing fluid flow analysis over a 3D airfoil model in Cosmos FloWorks is given in Appendix 1. In the following sections we discuss airflow analysis over the wing model and collection of flow characteristics data.

2.5.1 Evaluation of 2D flow analysis vs. 3D flow analysis

We considered using 2D flow analysis vs. 3D flow analysis over the surface of the wing model designed in SolidWorks. In 3D flow analysis, the fluid dynamics computations are performed over the entire region around the solid, whereas in 2D flow analysis the computations are performed over a cross-section of the solid. The factors we considered in evaluating 2D vs. 3D flow analysis were: the time and memory requirements of 3D computational analysis, the practicalities involved in extrapolating 3D results from a series of 2D analyses, and the goal of developing a general method that could be used in subsequent research on other wing models. We designed a preliminary wing model in SolidWorks with a 1 m chord and 5 m span. Conducting a 3D flow analysis over the entire wing surface at this size takes considerable real time, and the memory requirement for the flow analysis process is quite high. One way to address this problem is to perform a series of 2D flow analyses at

different chord sections of the wing and then extrapolate the results. Extrapolating the results from a series of 2D analyses is possible only because of the uniform chord of the wing we designed. For example, we can conduct a 2D analysis at one chord section on the span of the wing and use the results to estimate results at adjoining sections, because the cross-sectional area of the wing in these sections is the same due to the uniform chord length along the span. However, there are tradeoffs with this approach. While a single 2D analysis reduces the memory requirement and the computational time, there are practical difficulties in extrapolating results from a series of 2D analyses. The extrapolation method has many manual steps, which might make the resulting data error prone: each 2D flow analysis has a separate results file, so we would have to manually analyze the results in each file and extrapolate the values. Due to the uniform chord of the wing, we assumed that we would obtain uniform results for 2D analyses performed at different span positions. To assess this hypothesis, we did a small experiment using FloWorks. We took a series of 2D flow analysis results for different positions along the span and plotted a pressure distribution graph (along the chord) for each span position. Figure 2.2 shows the graph of these results. Observe that the span position plots differ from each other; if we were to extrapolate one span position line from another, the graphs would need to be nearly identical.


Figure 2.2: Pressure readings for various span positions

Another issue with the extrapolation method is that it is applicable only to wing models of uniform chord. For our long-term research we would like to have a variable chord wing at some stage. We will not be able to apply the extrapolation method to variable chord wings, because extrapolation requires the cross-sectional area to be consistent at different sections so that the pressure distribution over the area is the same, which is clearly not the case with a variable chord wing. For obtaining the flow analysis results, we would like a general method, suitable for all wing models, that could be used in subsequent research. Therefore, we decided to use 3D flow analysis, both because of the practical difficulties in extrapolating results from a series of 2D analyses and because we want a generic method for obtaining results that works for uniform as well as variable chord wing models. The memory requirements for performing 3D flow

analysis were addressed by conducting the simulations on a Dell XPS 720 system (Intel Core 2 Duo 3.2 GHz, 8 GB RAM, 64-bit architecture) in the Mechanical and Industrial Engineering lab of Dr. Daniel Pope (a fluid dynamics expert at UMD).

2.5.2 Flow analysis process

Once all the requirements and parameters, such as flow velocity or air turbulence, are specified in FloWorks, the simulation process can be started to perform flow analysis over the wing model designed in SolidWorks. As the simulation proceeds, the user can see a visualization of the analysis process. Figure 2.3 shows one such snapshot of an in-progress analysis of airflow over the wing model. The user can observe the variations in parameters at various locations on the solid object. The spectrum of results shown in the figure may vary as the calculations proceed toward convergence of the engineering goals set by the user.


Figure 2.3: Preview of flow analysis

While performing flow analysis, CosmosFloWorks uses certain criteria to stop the analysis process. Users can also specify their own criteria, called Goals. Goals are physical parameters used to determine, from an engineering point of view, when the flow analysis has reached a steady state.

2.5.3 Mesh visualization

Prior to starting the flow analysis procedure, FloWorks divides the computational domain into a mesh of cells; the computation is performed in each of these cells. These smallest divisions of the computational domain are termed fluid cells. Figure 2.4 shows a cross-section view of the fluid cells for the airfoil model.


Figure 2.4: Mesh of fluid cells

Flow parameters such as temperature, velocity, pressure, and force are computed in each of the fluid cells. We are interested in flow parameters such as pressure and velocity measured at various points on the airfoil. In this thesis, we want to simulate the behavior of sensors that could be mounted on the wing surface of a glider aircraft. We discuss in a later section how we simulate sensors based on mesh cells. We also discuss how we can achieve a higher degree of accuracy in the results by performing mesh refinement/optimization as compared to the default refinement level of the mesh cells in FloWorks. Partial cells are a special type of fluid cell: they lie on the solid/fluid interface, partly in solid and partly in fluid. Partial cells are significant for surface analysis. We will be focusing on partial cells since we are interested in flow


parameters on the airfoil surface and partial cells will give us those values. Figure 2.5 illustrates partial cells for airflow analysis over the surface of the airfoil model.

Figure 2.5: Partial cells for flow analysis

2.5.3.1 Mesh refinement

The computational cells in the mesh have a default volume. However, it can be useful to specify a smaller cell volume, because the accuracy of the results increases as we reduce the size of the computational cells. This process of reducing the cell size is called mesh refinement. CosmosFloWorks


provides seven levels of refinement; the higher the level, the smaller the cell size. As we refine the cells, the total number of cells in the computational domain increases, and thus the analysis takes more computational time (and more memory) to complete. However, with our emphasis on surface analysis, we only need to refine the cells close to the surface of the airfoil, not all of the fluid cells. This technique is termed mesh optimization. With mesh optimization we can achieve a higher degree of accuracy close to the surface while still completing the calculations in reduced time (and with reduced memory) compared to refining all fluid cells. Figure 2.6 shows the cross-section view of an airfoil where only the partial cells are refined.

Figure 2.6: Partial cells refined


2.6 Calculating drag coefficient

As we refine the partial cells we obtain higher accuracy in our results. To terminate the refinement process, we need some parameter that tells us when to stop. Following the advice of fluid dynamics expert Dr. Daniel Pope (UMD Mechanical and Industrial Engineering), we continue refining the partial cells until the engineering goal(s) stabilize across mesh refinement levels. We use CosmosFloWorks to determine the aerodynamic drag coefficient on an airfoil subject to a 0.01% turbulent airflow over its surface and use this as an engineering goal. The drag coefficient is a dimensionless quantity quantifying the fluid flow resistance on an object under the flow conditions. The drag coefficient (CD) of an airfoil is given by:

CD = FD / (0.5 * ρ * V² * b * c)

where:

CD = airfoil drag coefficient
FD = drag force, determined by specifying the global goal as the X-component of force
ρ = fluid density
V = fluid velocity
b = length (span) of the airfoil
c = chord length of the airfoil

The drag coefficient equation can be specified as an equation goal in FloWorks.
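The drag coefficient formula above can be evaluated numerically as follows. The class name and the input values in the usage note are illustrative, not results from the thesis simulations.

```java
// Worked numeric sketch of the drag-coefficient formula:
//   CD = FD / (0.5 * rho * V^2 * b * c)
public class DragCoefficient {
    public static double dragCoefficient(double dragForce, double density,
                                         double velocity, double span, double chord) {
        // Denominator is the dynamic pressure (0.5 * rho * V^2) times the
        // reference area (span * chord).
        return dragForce / (0.5 * density * velocity * velocity * span * chord);
    }
}
```

For example, with a hypothetical drag force of 62 N, air density 1.225 kg/m³, airspeed 32 m/s, and the thesis wing's 5 m span and 1 m chord, `dragCoefficient(62.0, 1.225, 32.0, 5.0, 1.0)` gives a CD of about 0.02.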

2.7 Manipulation of data for simulated sensor values

When we perform an airflow simulation over the airfoil model, FloWorks divides the computational domain into tens of thousands of computational cells, and for each mesh cell it solves the Navier-Stokes equations to determine the flow characteristics (e.g., pressure) at that fluid cell. When the simulation is done, FloWorks generates data with various parameters for each of the mesh cells. For our research, we need to simulate sensors on the wing model. For practical purposes, there will likely be only a relatively limited number of sensors mounted on the wing surface at various locations. So we need a mapping from the raw mesh cell data to the simulated sensors on the wing surface. In this thesis, the mesh cells form the basis for our simulated sensors: each simulated sensor is represented by one or more mesh cells, depending upon the size of the sensor. In the following section we discuss how we map one or more mesh cells to a simulated sensor.


2.7.1 Data storage format

The raw mesh cell data we obtain from FloWorks is in a format specific to that software. However, for our research we need the data to be stored in a uniform, platform independent and standard format. A standard format will provide easy access to the data from high level software which we discuss in Chapter 3.

We have defined our own organization of the mesh cell data in an XML format, which we call the wing data format. We define various tags in the XML format in order to easily access and document the data in the file. I have written a Java program that uses the open-source Xerces library to generate an XML file from the raw data we obtain from FloWorks. A sample XML wing data file is shown in Figure 2.7.


<?xml version="1.0" encoding="UTF-8"?>
<wingData>
  <noOfFluidCells value="100"/>
  <noOfPartialCells value="10"/>
  <flowVelocity units="m/s" value="32"/>
  <airfoilType type="StandardCirrusRootRib"/>
  <wingDimensions chord="1" span="5" units="m"/>
  <dataSet number="1">
    <angleOfAttack value="10"/>
    <parameterType unit="Pa">Pressure</parameterType>
    <data>
      <point unit="m" x="0.0046656" y="-0.0148535" z="0.00268195">101206</point>
      <point unit="m" x="0.0046656" y="-0.00682895" z="0.00268195">101245</point>
      <point unit="m" x="0.0046656" y="0.00119565" z="0.00268195">101213</point>
    </data>
  </dataSet>
  <dataSet number="2">
  </dataSet>
</wingData>

Figure 2.7: Sample of wing data XML file

Tags such as noOfFluidCells (the total number of fluid cells from the original FloWorks simulation; recorded for reference only), noOfPartialCells (the total number of partial cells; this is the number of data points actually represented in the XML file), and flowVelocity (the air velocity in m/s) describe the simulation setup. The dataSet and data tags are generated by a program running on the raw text data of mesh cells exported from the FloWorks project. One dataSet tag holds all the values of the mesh cells for a particular simulation. The data tags give information such as


the X, Y, and Z coordinates of each mesh cell and the pressure measured at that point in space during the simulation.
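To illustrate how the wing data file can be consumed, here is a minimal sketch that extracts the pressure values from the point elements of a wing data XML string using the JDK's built-in JAXP/DOM parser (the class name WingDataReader is hypothetical; the tag names follow Figure 2.7):

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;
import java.io.ByteArrayInputStream;

public class WingDataReader {
    // Parses a wing data XML string and returns the pressure value stored
    // in each <point> element, in document order.
    public static double[] pressures(String xml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
        NodeList points = doc.getElementsByTagName("point");
        double[] out = new double[points.getLength()];
        for (int i = 0; i < points.getLength(); i++) {
            out[i] = Double.parseDouble(points.item(i).getTextContent().trim());
        }
        return out;
    }
}
```

The same approach generalizes to the other tags (noOfPartialCells, flowVelocity, and so on) by reading their attribute values.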

2.7.2 Data lookup tables

Our goal was to map tens of thousands of mesh cells to various simulated sensors mounted on the simulated wing surface. Each wing sensor measurement was taken from one or more mesh cells. Given the position of a simulated sensor on the simulated wing surface we have to identify the mesh cells that fall within the area of the sensor. For simplicity, we assume that our sensors are rectangular in shape. To locate the mesh cells for a particular simulated sensor we first create two lookup tables based on the wing data, one for the positions along the chord (X_lookup table) and one for the positions along the span (Z_lookup table). We then organize the mesh cells into rows and columns. The first lookup table holds the positions of the rows on the wing with respect to the leading edge (front to back of the simulated aircraft), while the second lookup table holds the positions of the columns with respect to the right side of the wing. An example of one such lookup table is shown in Figure 2.8.


X_lookup[1] = 0.0046656
X_lookup[2] = 0.0144273
X_lookup[3] = 0.024189
X_lookup[4] = 0.0339506
X_lookup[5] = 0.0437004
X_lookup[6] = 0.0534382
X_lookup[9] = 0.0826397
X_lookup[10] = 0.140864
. . .
X_lookup[97] = 0.926608
X_lookup[98] = 0.936321
X_lookup[99] = 0.946034
X_lookup[100] = 0.955747
X_lookup[101] = 0.965471
X_lookup[102] = 0.975206
X_lookup[103] = 0.98494
X_lookup[104] = 0.994675

Figure 2.8: Example of a lookup table
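A lookup table of the form shown in Figure 2.8 can be built by collecting the distinct coordinate values of the mesh cells and sorting them in ascending order; a minimal sketch (the class name LookupTables is hypothetical, and the coordinates are assumed to arrive as a plain array):

```java
import java.util.TreeSet;

public class LookupTables {
    // Builds a lookup table (as in Figure 2.8) from the chord-wise (or span-wise)
    // coordinates of the mesh cells: distinct values, sorted ascending.
    public static double[] build(double[] coords) {
        TreeSet<Double> distinct = new TreeSet<>();
        for (double c : coords) {
            distinct.add(c);
        }
        double[] table = new double[distinct.size()];
        int i = 0;
        for (double c : distinct) {
            table[i++] = c;
        }
        return table;
    }
}
```

The same routine serves for both the X_lookup and Z_lookup tables, fed with the X or Z coordinates of the mesh cells respectively.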

Once we have these lookup tables, then based on the given center position of a simulated sensor, we identify the mesh cell row and column of the center of the sensor. We then identify the cells adjacent to this center by the Java function (see also Appendix 2):
Double getSensorValue(x_center,      /* in metres */
                      y_center,      /* in metres */
                      surface,       /* either top or bottom */
                      sensor_width,  /* in metres */
                      sensor_length  /* in metres */);

The entries i and i+1 in the X_lookup table give us the distance between two consecutive mesh cells along the X-axis (along the chord). Similarly, the entries j and j+1

in the Z_lookup table give us the distance (in metres) between two consecutive mesh cells along the Z-axis (along the span). Based on this information, the implementation of the function getSensorValue identifies the mesh cells around the center of the simulated sensor which lie within the boundary of the sensor area and then returns the average value (Double) of the pressure readings in these cells.

In the function getSensorValue, for each mesh cell that falls within the sensor area boundary, we form a pair of chord and span positions obtained from the corresponding lookup tables. For this (x, z) pair we search through all of the mesh cell data points to find the matching values for the y-coordinate. Hence the complexity of getSensorValue is O(n), where n is the total number of mesh cells.
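The behaviour described above can be sketched as follows, assuming (hypothetically) that the mesh cells of one wing surface are held in parallel arrays of chord-wise positions, span-wise positions and pressures; the method scans all n cells once, consistent with the O(n) complexity noted above:

```java
public class SensorSampler {
    // Returns the average pressure of all mesh cells whose (x, z) position falls
    // within the rectangular footprint of a simulated sensor, or null if no cell does.
    public static Double getSensorValue(double[] x, double[] z, double[] p,
                                        double xCenter, double zCenter,
                                        double length, double width) {
        double halfL = length / 2.0; // half-extent along the chord (X-axis)
        double halfW = width / 2.0;  // half-extent along the span (Z-axis)
        double sum = 0.0;
        int count = 0;
        for (int i = 0; i < p.length; i++) {
            if (Math.abs(x[i] - xCenter) <= halfL && Math.abs(z[i] - zCenter) <= halfW) {
                sum += p[i];
                count++;
            }
        }
        return (count == 0) ? null : sum / count;
    }
}
```

The real driver additionally selects the top or bottom wing surface and uses the lookup tables to restrict the candidate cells; this sketch only shows the footprint test and averaging step.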

2.8 Summary

In this chapter, we first discussed the design of a uniform-chord 3D wing model in SolidWorks. We then discussed the airflow analysis over the wing surface and the various factors we evaluated that led us to 3D flow analysis. We also discussed in detail the procedures for mapping results from the FloWorks analysis to the virtual sensor values.


Chapter 3: Software For Visual Rendering Of Sensor Readings

3.1 Overview

In the previous chapter we discussed an airflow analysis over a simulated wing model. The airflow simulation was done using FloWorks to simulate the fluid dynamics, and SolidWorks was used to design the wing model. The raw data obtained from the airflow simulation was then formatted into a platform-independent XML file format. We also discussed the mapping of the raw XML data to the virtual (simulated) sensors.

In this chapter we will discuss the visual rendering tool for the virtual sensor readings and a color visualization of the wing and simulated pressure data. We introduce another XML file format that we use for configuring the positions of the simulated sensors on a wing surface. We also discuss various features of the visual rendering tool.

Before moving to the specifics of the rendering tool it is useful to consider the overall organization of the software and hardware system, as we plan it, to give the broader scope of our research. Figure 3.1 shows the planned system organization. The data sources can either be static or dynamic. Static sources provide complete collections of data stored on some persistent medium (e.g., a disk file). For example, the data from the FloWorks simulation is a static source where we first run the simulation on the


simulated wing model and then access the data offline. The software components of Figure 3.1 constitute the core software.

Figure 3.1: System organization. [Diagram: the data sources (SolidWorks/FloWorks simulation with virtual sensors; flight simulators, e.g., X-Plane; mechanical sensors on a wing model in a wind tunnel; electronic sensors on a real wing model) feed the software interface layer, which in turn feeds the rendering components (the GUI for visual rendering of data sources, and tactile feedback devices 1 and 2).]

Dynamic sources are where we process the data online as it comes from flight simulator software, wind tunnel runs, or real-time data coming from electronic sensors mounted on a physical wing. Unlike static sources, the data from dynamic sources are not available for offline access.

The software interface layer (Figure 3.1) provides platform-independent access to data sources. This interface layer will hide some of the details of the originating data. This is done by providing an API (Application Programming Interface) for accessing the data. The visual rendering tool for data sources and the tactile feedback system as shown in Figure 3.1 will not directly access the file or other representation of the data source; instead they will use the API defined in the software interface layer. This enables us to have unified access to the data sources. In order to plug a data source into the system we need to provide a driver for the data source which implements the specifications of the interface layer. For the purposes of this thesis, I have written the driver for the SolidWorks/FloWorks simulation data source. The implementation of the getSensorValue function we saw in Chapter 2 was a part of this driver for FloWorks. The complete Java specifications of the software interface layer are given in Appendix 2.

At some point in our research we would like to map the readings taken by the sensors to the tactile feedback system, which will provide a tactile rendering of the atmospheric conditions over the wing to the sailplane pilot. As of yet, we don't have a full-fledged tactile feedback system in place. Some prototyping work is being done in this


direction (Parrott, 2008; Sebesta, 2008; Wronski, 2008). These systems are still in a research phase and it will take some time to integrate the tactile feedback hardware. Until the feedback system is in place we can use the visual rendering tool for rendering the sensor readings from the data sources. In this chapter we will mainly focus on the various aspects of the core software relevant to this thesis and the visual rendering tool.

3.2 External inputs to the software

A main purpose of the software for this thesis is visually rendering the virtual sensor readings. The software needs to be provided with the data file from the FloWorks simulation. We have seen in Chapter 2 how we convert the raw data file from FloWorks into the platform-independent wing data XML file. This wing data (sensor data) XML file is one of the inputs provided to the software. Another input to the software is a sensor configuration XML file. This second XML file format defines the layout of the virtual sensors on the surface of the simulated wing and the various properties of these sensors. The XML file format has the flexibility to specify the sensors either at fixed locations or at variable locations on the surface of the wing. Through the sensor configuration XML file we can also specify whether we want to use readings from the upper or lower surface of the wing. A sample of the sensor configuration file is shown in Figure 3.2.


<sensorConfiguration>
  <totalSensors value="50"/>
  <sensorParam value="Pressure" unit="Pa"/>
  <sensorMake brand="AAA" sensitivity=""/>
  <sensorShape value="Rectangular"/>
  <sensorPositioning width="0.03" length="0.02" units="m" type="Uniform">
    <layout surface="Top">
      <rows>5</rows>
      <columns>10</columns>
      <top>0.2</top>
      <right>0.6</right>
      <bottom>0.7</bottom>
      <left>4.6</left>
    </layout>
  </sensorPositioning>
  <!-- positions of variable sensors if any -->
  <sensorLocations></sensorLocations>
  <sensorRenderingView value="rotated"></sensorRenderingView>
  <sensorSizeInPixels value="40"></sensorSizeInPixels>
</sensorConfiguration>

Figure 3.2: Sample sensor configuration XML format

The tag totalSensors specifies the total number of virtual sensors across which the readings will be taken. The tag sensorParam, which stands for sensor parameter, is used to specify the kind of reading we want to use for rendering, e.g., whether it is a pressure reading or a velocity reading. The tag sensorMake, which stands for the manufacturer of the sensors, specifies the company that makes the sensor, and may be useful when we move to physical, not simulated, sensors. We use the sensorPositioning tag, which stands for positions of the virtual sensors, to specify the positions of sensors in a grid of sensors


mounted on the wing surface. Inside this tag we can define the total rows, columns, and inter-row and inter-column distance of the sensors. We use the tag sensorLocations, which stands for locations of sensors, to define sensor positions at specific locations. The sensorRenderingView tag, which stands for the rendering view of the sensor readings, is used to determine the orientation of the rendering image (discussed in Section 3.5). The value of this tag is either normal or rotated. If rotated is specified, we rotate the rendering image by 90 degrees in the anti-clockwise direction. We specify the value as rotated when the number of sensor rows is far greater than the number of sensor columns. The tag sensorSizeInPixels, which stands for the size of a sensor in pixels, is used to adjust the size of the color-mapped images of the sensor readings.

In this thesis, we specify uniformly spaced sensors using the sensor configuration file. The configuration file specifies the locations of the reference points on the top-right and bottom-left portions of the wing surface, assuming it is the left wing of a glider. The sensor configuration file also provides the total rows, total columns, inter-row spacing and inter-column spacing for the simulated sensors. Based on this information the software, written in Java, prepares a list of sensor positions on the surface.

Both the sensor data file and the sensor configuration file can be provided from the File menu of the core software as shown in Figure 3.3. Selecting either option prompts the user with a dialog box for the location of the specific XML file. After providing the path to the file, the software uses the Xerces library to parse the contents of the XML file.

Figure 3.3: Initial screen of the core software

3.3 Selection of programming platform

For developing the software we used the Java programming language, and the development tool was the Netbeans IDE [1]. Using Java for programming ensures that the software is relatively platform independent. The core software has mainly been developed on the Windows platform but should be easily ported to Mac or Linux [2]. Also, at some point in our research we may provide the rendering facilities via a web browser. The data source driver and the inputs provided to the core software are either platform independent or may have a platform-independent wrapper over the native driver. For example, the TNGames Vest, a commercial tactile feedback vest, has a DLL [3] interface.

[1] Netbeans is an Integrated Development Environment from Sun Microsystems for developing applications in Java.
[2] My thesis advisor has been regularly using the software on a Mac.
[3] Dynamic Link Library (DLL) is a technique in Microsoft Windows for writing platform-specific drivers.


To integrate this tactile feedback device into our system we will need to write a Java-specific wrapper over the DLL in JNI (Java Native Interface). The Netbeans IDE was chosen in part because it offers rapid application development facilities. The Netbeans IDE is also available for Windows, Mac OS and Linux. This IDE has a GUI builder tool. For this thesis, I created a Java Desktop Application in the Netbeans IDE, which gives a default framework. The software for rendering the sensor data was built on top of this framework.

3.4 Design of the core software

The core software has a visual rendering tool for rendering the readings collected by the sensors. The readings can be taken from a static source such as virtual sensors, or in the future they can come from real-time sensors [4]. The rendering tool is planned to have sufficient flexibility to render the sensor data irrespective of the originating source. To render the sensor data the software needs to connect to the appropriate data source. The user selects a particular data source and then the core software uses the driver for that data source. This driver is the implementation of the APIs defined in the core software abstraction layer (see Appendix 2). Many of the actions that a user can take from the software are common across data sources. The core software is designed to integrate

[4] Not a part of the scope of this thesis.


with data coming from a continuous stream. For example, it is planned to utilize the readings from real sensors measured in real time (dynamic data source). In the remainder of this section we discuss the design aspects of the core software and the features relevant to this thesis. We also discuss the mapping from sensor readings to the graphical rendering.

3.4.1 Initialization of the core software

For the purpose of this thesis we are using FloWorks simulation as a data source for the core software. After the sensor data and the sensor configuration XML files are provided as input, the core software parses the XML files and loads the data into memory for quick reference. The sensor configuration XML file is parsed to create a list of sensor positions on the wing surface.

3.4.2 Relevant software features

The various features of the core software provided through the Actions menu are seen in Figure 3.4.


Figure 3.4: Actions menu in core software

3.4.2.1 Sensor Data At

This feature enables access to the getSensorValue method. This has been mainly used for debugging purposes and provides the user the ability to obtain a sensor reading from the SolidWorks/FloWorks mesh data by placing the sensor at any arbitrary position on the wing surface. Clicking on the menu item Sensor Data At opens an interactive dialog box for user input as shown in Figure 3.5.


Figure 3.5: Sensor Data At dialog box

In this dialog box the user enters the position of the simulated sensor on the wing surface, sensor dimensions and whether the sensor is to be placed on the top or bottom surface of the wing. The X Position and Y Position correspond to the distance along the span and chord of the wing respectively. After entering this information, when the user clicks the OK button, the core software calls the method:
Double getSensorValue(Double x_dist, Double z_dist, int surface, Double length, Double width);

to obtain the sensor reading at the user specified position. The output is shown in a message box as seen in Figure 3.6.


Figure 3.6: Sensor Data At user specified location

3.4.2.2 Load Sensor Grid

The locations of the sensors are provided via the sensor configuration XML file. Suppose we want to read the values from sensors that are arranged in a grid as specified in the sensor configuration file. Based on the reference points specified in the sensor configuration, the positions of each of the sensors on the wing surface are determined. The reference points are positions on the wing surface that identify the area covered by the sensors, assuming uniform spacing between the rows of sensors and uniform spacing between the columns of sensors. Figure 3.7 shows one such logical arrangement of sensors on the wing surface based on a sensor configuration XML file. The points (0.6, 0.2) and (4.6, 0.7) are the reference points.


Figure 3.7: One possible arrangement of sensors [5]

When a user clicks the Actions → Load Sensor Grid menu item, the core software obtains the reading for each of the sensors, retaining these readings in memory. For each sensor the core software calls the abstraction layer API getSensorValue(). The sensor readings are then displayed on the console. For the sensor arrangement shown in Figure 3.7, an example partial output of sensor readings can be seen in Figure 3.8.

[5] Note that this is not what is rendered from the software. Visual rendering output is discussed in Section 3.6.1.


Sensor(0.2, 0.6) = 100974.5625 Pa
Sensor(0.2, 1.04) = 100959.85 Pa
Sensor(0.2, 1.49) = 100956.3 Pa
Sensor(0.2, 1.93) = 100955.5 Pa
Sensor(0.2, 2.38) = 100955.25 Pa
Sensor(0.2, 2.82) = 100955.25 Pa
Sensor(0.2, 3.27) = 100955.75 Pa
Sensor(0.2, 3.71) = 100957.29166666667 Pa
Sensor(0.2, 4.16) = 100964.2 Pa
Sensor(0.2, 4.6) = 100992.95 Pa
Sensor(0.33, 0.6) = 100856.0625 Pa
. . .
Sensor(0.57, 4.6) = 101067.55 Pa
Sensor(0.7, 0.6) = 101195.0 Pa
Sensor(0.7, 1.04) = 101186.0 Pa
Sensor(0.7, 1.49) = 101184.0 Pa
Sensor(0.7, 1.93) = 101183.25 Pa
Sensor(0.7, 2.38) = 101183.0 Pa
Sensor(0.7, 2.82) = 101183.25 Pa
Sensor(0.7, 3.27) = 101183.75 Pa
Sensor(0.7, 3.71) = 101184.04166666667 Pa
Sensor(0.7, 4.16) = 101188.3 Pa
Sensor(0.7, 4.6) = 101207.05 Pa

Figure 3.8: Example partial output of sensor readings
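The sensor centres listed in Figure 3.8 can be reproduced from the reference points and the row/column counts by linear interpolation; a sketch (the class name SensorGrid is hypothetical), where each returned pair is (chord position, span position):

```java
import java.util.ArrayList;
import java.util.List;

public class SensorGrid {
    // Computes uniformly spaced sensor centres between the reference points:
    // rows run from 'top' to 'bottom' along the chord, and columns from
    // 'right' to 'left' along the span (the left wing of a glider).
    public static List<double[]> positions(double top, double bottom,
                                           double right, double left,
                                           int rows, int cols) {
        double rowStep = (rows > 1) ? (bottom - top) / (rows - 1) : 0.0;
        double colStep = (cols > 1) ? (left - right) / (cols - 1) : 0.0;
        List<double[]> centres = new ArrayList<>();
        for (int i = 0; i < rows; i++) {
            for (int j = 0; j < cols; j++) {
                centres.add(new double[]{top + i * rowStep, right + j * colStep});
            }
        }
        return centres;
    }
}
```

With top = 0.2, bottom = 0.7, right = 0.6, left = 4.6, 5 rows and 10 columns this yields the 50 positions of Figure 3.7, starting at (0.2, 0.6) and ending at (0.7, 4.6).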

3.4.2.3 Get Airfoil Info

This feature is used to query the specifics of the wing, such as the dimensions of the wing and the type of the airfoil. When the user clicks on the Actions → Get Airfoil Info menu item, a dialog box is shown as seen in Figure 3.9. The data in the dialog box is obtained from the sensor data source.

Figure 3.9: Airfoil Info dialog box

3.4.2.4 Render Sensor Data & Render Wing Surface

Actions → Render Sensor Data shows the mapping of the sensor readings to a visual array of colors and thus provides a visualization of the virtual sensors. Render Wing Surface shows the variations in the airflow conditions across the wing surface. We will discuss these two features in detail in Section 3.6. First, in Section 3.5, we present our technique for mapping sensor values to color information.


3.5 Color mapping of sensor readings

One of the goals of the core software is to provide a visualization of the sensor readings. The method we use for visualization is to map each sensor reading to the RGB (Red, Green, Blue) color model. In order to map a sensor reading to an RGB color we utilize a lower bound and an upper bound on the sensor readings. There are two ways we can set the bounds on the sensor readings, which we discuss in detail in Section 3.6.2. The lower bound on the sensor readings is mapped to blue and the upper bound is mapped to red. All the sensor readings within this range are mapped to the RGB color model depending upon the offset of the reading in this range. Figure 3.10 shows an example of the color mapping reference chart we used. The upper bound on pressure readings is shown in red and the lower bound on pressure readings is shown in blue.

Figure 3.10: Color mapping chart (modified from FloWorks)


The problem in mapping a sensor reading to a color is to determine the red, green and blue components for any particular sensor reading. To solve this problem we divide the color range into four logical partitions, P1, P2, P3 and P4 as can be seen in Figure 3.11 (shown from right to left). The extreme right of Figure 3.11 is the lowest sensor reading (lower bound) which corresponds to pure blue whose RGB value is (0, 0, 255) and the extreme left is the highest sensor reading (upper bound) which corresponds to pure red whose RGB value is (255, 0, 0). In Figure 3.11, l denotes the lower bound on sensor readings, u denotes the upper bound on sensor readings, r denotes the range from lower bound to upper bound on the sensor readings and s denotes the specific sensor reading.

Figure 3.11: Logical partitions of the range of sensor readings. [Diagram, read from right to left: partition P1 runs from the lower bound l (pure blue, B) to l + r/4 (cyan, C), with R = 0, B = 255 and G increasing; P2 runs from l + r/4 to l + r/2 (green, G), with R = 0, G = 255 and B decreasing; P3 runs from l + r/2 to l + 3r/4 (yellow, Y), with G = 255, B = 0 and R increasing; P4 runs from l + 3r/4 to the upper bound u (pure red, R), with R = 255, B = 0 and G decreasing. Upward and downward arrows in the figure mark the increasing and decreasing color component of each partition.]

The colors at the boundaries of the partitions shown in Figure 3.11 are:

R = Red (255, 0, 0)
Y = Yellow (255, 255, 0)
G = Green (0, 255, 0)
C = Cyan (0, 255, 255)
B = Blue (0, 0, 255)

In Figure 3.11, an upward arrow indicates that the value of a color component increases from 0 to 255 as we move from the start (right end) of a partition towards the end of the partition (left end), and a downward arrow indicates that the value of a color component decreases from 255 to 0 over the same interval.

In any partition only one color component varies, whereas the other two color components remain constant, as seen in Figure 3.11. So, how do we determine which color component varies and which are constant for a specific sensor reading? First, we use the relations shown in Table 3.1 to identify the partition in which the sensor reading lies. We then find the offset of the sensor reading within that partition and use the following relation to find the varying color component:

Color component = offset * 255 / (r/4)

where r/4 is the width of one partition.


Partition    Condition
P1           s >= l  and  s <= l + (1/4)r
P2           s > l + (1/4)r  and  s <= l + (1/2)r
P3           s > l + (1/2)r  and  s <= l + (3/4)r
P4           s > l + (3/4)r  and  s <= u

Table 3.1: Conditions to identify the partition of a specific sensor reading
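The partition test of Table 3.1 and the component formula above can be combined into a single mapping routine; a sketch (the class name ColorMap is hypothetical), returning the triple {R, G, B}:

```java
public class ColorMap {
    // Maps a sensor reading s in [l, u] to an RGB triple: the lower bound maps
    // to pure blue (0, 0, 255), the upper bound to pure red (255, 0, 0), and
    // readings in between pass through cyan, green and yellow (partitions P1..P4).
    public static int[] toRgb(double s, double l, double u) {
        double r = u - l;          // range of the sensor readings
        double q = r / 4.0;        // width of one partition
        double off = s - l;        // offset of the reading above the lower bound
        if (off <= q) {            // P1: blue -> cyan, G rises
            return new int[]{0, ramp(off, q), 255};
        } else if (off <= 2 * q) { // P2: cyan -> green, B falls
            return new int[]{0, 255, 255 - ramp(off - q, q)};
        } else if (off <= 3 * q) { // P3: green -> yellow, R rises
            return new int[]{ramp(off - 2 * q, q), 255, 0};
        } else {                   // P4: yellow -> red, G falls
            return new int[]{255, 255 - ramp(off - 3 * q, q), 0};
        }
    }

    // Scales an offset within a partition of width q to the range 0..255.
    private static int ramp(double offset, double q) {
        return (int) Math.round(255.0 * offset / q);
    }
}
```

For example, with bounds 0 and 100, a reading of 0 maps to blue, 25 to cyan, 50 to green, and 100 to red.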

3.6 Output of the software

Once we provide the wing data XML file and the sensor configuration XML file as inputs to the core software, it can produce renderings of the virtual sensor readings. The readings are mapped to the RGB color model as described in the previous section. The core software can also generate a rendering of the entire pressure distribution over the wing surface. The color rendering depends on the bounds we set on the pressure readings. We will discuss these topics in detail in this section.


3.6.1 Visual rendering of sensor data

The user can view a graphical rendering of the sensor readings over the wing surface. For each sensor reading we used the mapping technique described in Section 3.5 to get the RGB value corresponding to the reading. This visualization of the data gives an idea of the variation in the airflow measured by the virtual sensors. For example, if we are measuring the pressure of the airflow over the wing surface, then a color mapping close to a blue gradient indicates that there is relatively low pressure airflow around the portion of the wing where the sensor is mounted. Conversely, a color mapping close to a red gradient indicates relatively high pressure airflow. We render all the sensor readings together as shown in Figure 3.12.

Figure 3.12: Rendering of sensor readings


In Figure 3.12 we can see the graphical rendering of the sensor readings taken from 50 virtual sensors at zero angle of attack for the FloWorks simulation data source. While the sensors are spaced at some distance from each other on the wing, we show the color mappings of the sensors adjacent to each other to get a comparative view of the readings. The actual layout of these sensors is as shown in Figure 3.7. Showing the graphical renderings of the sensor readings side by side helps us to visualize the variations in the readings. In Figure 3.12 the blue colored cells in the top row show the rendering of the pressure readings from the sensors mounted in a row close to the leading edge of the wing, and the red colored cells in the bottom row show the rendering of the pressure readings from the sensors close to the trailing edge of the wing. This graphical rendering is for the sensors mounted on the top surface of the wing. As we can see from Figure 3.12, for the airflow over the top surface of the wing, the pressure at the leading edge is lower than the pressure at the trailing edge. Aircraft wings are lifted into the air due to the low pressure on their top surface, and that is exactly what we can observe from these renderings.

3.6.2 Relative and absolute rendering

To map the virtual sensor readings to the RGB color model we need to first define the bounds on the readings. We use one of two methods to set the bounds on the sensor

readings: a relative scale and an absolute scale. In the relative scale method, the bounds are determined by the readings measured by the virtual sensors for a particular airflow condition; that is, the bounds are taken from the data at one specific point in time. The lowest value measured by any sensor at that time point is set as the lower bound and the highest value measured by any sensor at that time point is set as the upper bound. The mapping is done relative to the lowest and highest values measured by a virtual sensor at the specific time point. If the airflow conditions change over time and the sensors measure different maximum and minimum readings, then the relative scale will also change accordingly.

The advantage of relative scaling is that small changes in the pressure readings can be seen properly in the color mapped images. The disadvantage of the relative scale is that we cannot compare readings taken across different airflow conditions (e.g., at different times) against a uniform scale. To address this problem we can use the absolute scale method for mapping the readings to colors. In the absolute scale method, we use the same upper and lower bounds for all the sensor readings taken across different airflow conditions or at different times. Typically, the upper and lower bounds for the absolute method must be set manually by the user. In Chapter 4, we discuss in detail the various airflow conditions across which we take virtual sensor readings.

We call the sensor rendering images obtained using the relative scale method relative rendering. Similarly, the sensor rendering images obtained for the absolute scale


method are called absolute rendering. Figures 3.13 and 3.14 show example relative rendering and absolute rendering images for the pressure readings measured by 100 virtual sensors for the same airflow conditions (i.e., the same wing data XML file). The sensor readings were obtained from virtual sensors arranged in 20 rows (along chord) and 5 columns (along span). To fit the rendering image completely on the screen the images were rotated 90 degrees anti-clockwise by specifying the rotated parameter to the
sensorRenderingView tag in the sensor configuration XML file format.

Figure 3.13: Relative rendering example

Figure 3.14: Absolute rendering example
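The difference between the two scales comes down to how the bounds are chosen. The relative-scale bounds are simply the minimum and maximum of the readings at one time point; a sketch (the class name RenderBounds is hypothetical):

```java
public class RenderBounds {
    // Relative scale: the bounds are the lowest and highest reading in the data set.
    public static double[] relative(double[] readings) {
        double lo = readings[0];
        double hi = readings[0];
        for (double v : readings) {
            if (v < lo) lo = v;
            if (v > hi) hi = v;
        }
        return new double[]{lo, hi};
    }
}
```

Under the absolute scale, the same pair of bounds would instead be fixed by the user and reused unchanged across data sets.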


3.6.3 Rendering of the wing surface

In the previous section we saw the rendering of virtual sensor data. Because the FloWorks simulation gives us data over the entire wing, we can also view the rendering of the airflow readings over the entire wing surface. Figure 3.15 shows a graphical rendering of pressure distribution over the full wing surface.

Figure 3.15: Rendering of pressure distribution across the wing surface

This kind of rendering is helpful for static data sources where we have data over the entire wing, as it helps us to strategically decide where to place the virtual sensors on the wing. Our goal with providing a visual rendering tool is to compare the renderings of the sensor data for various data sets and determine whether there are perceptual differences between those renderings. Thus, when we have data over the entire wing it is very important to know where to place the virtual sensors on the wing. We want a holistic view of the sensor data rather than an incomplete visualization where we will not see much

variation. The problem we are trying to address here is how to place the sensors so as to best sample the variations in the airflow conditions over the wing surface. The rendering of the wing surface can serve as a tool to make decisions about the placement of sensors. For example, looking at Figure 3.15 we can say that there is little need to distribute the sensors throughout the span of the wing. Rather, we mainly see the pressure variations along the chord. So, if we have to place the sensors in an array of rows by columns we can use a relatively low density of sensor columns along the span of wing and a relatively high density of sensor rows along the chord of wing as that is where the pressure variation is mostly seen.

3.7 Comparison of various specific outputs

Now that we are able to render the sensor data graphically we would like to perform experiments where a set of visual rendering images for different simulations are shown to adult subjects and used to determine if these people can detect any visual difference between the simulations. These experiments will provide a first evaluation of whether airflow feedback information can provide interesting and useful perceptual discriminations to the pilot. We provide more details about these experiments in Chapter 4.


3.8 Summary

In this chapter, we discussed the design of the core software for accessing the sensor data sources. We discussed the system architecture (see Figure 3.1) and its various components: data sources, software abstraction layer, and rendering components. We then discussed the sensor configuration XML file format to specify the locations of virtual sensors on the simulated wing model. We also discussed the visual rendering tool for rendering the virtual sensor readings including our technique for mapping the sensor readings to the RGB color model.


Chapter 4: Evaluation Of Perceptual Discrimination Of Data Sets

4.1 Overview

In the previous chapter we discussed a GUI-based tool used to access sensor data sources. We also discussed the graphical rendering of sensor readings. In this chapter we discuss experiments where a set of graphical displays for different collections of virtual sensors, representing the air pressure surrounding a wing, are shown to adult subjects in order to determine if these subjects can detect any visual difference between the various data sets. These experiments will provide a first evaluation of whether airflow feedback information can provide interesting and useful perceptual discriminations to the pilot. The specific goal of this thesis is to determine whether the feedback information from the simulated sensors mounted on the wing model can provide interesting and useful perceptual discriminations to a pilot. In this chapter, we will discuss in detail the design of visual stimuli, which consists of pairs of rendering (component) images from sensor readings. We then discuss the various design aspects for comparison of the images. We conducted a test subject study where a sequence of visual stimuli or trials was shown to adult subjects to evaluate for perceptual discrimination. The test subject study was based on a set of hypotheses


driven by the goal of this thesis. We prepared a pool of visual stimuli to be evaluated by different adult subjects, where each subject was asked to evaluate a subset of the trials. We prepared test subject schedules that assigned trials to each subject. In this chapter, we also discuss the various factors used for balancing the test schedules. I developed software to load the test schedule (an XML file format), present the trials to the test subjects and record their responses. We involved 16 test subjects in our study, eight males and eight females. We discuss the pre-study knowledge given to the test subjects, and towards the end of the chapter we discuss the analysis of the responses given by the test subjects and evaluate them against the set of hypotheses. First, we briefly discuss some of the terminology used in aerodynamics which we will be using throughout the chapter. Consider the airfoil as seen in Figure 4.1 (modified from http://en.wikipedia.org/wiki/File:Airfoil.svg).

Figure 4.1: Airfoil and its components

Chord: The distance between the leading edge and trailing edge of the wing.

Angle of Attack: The angle between the relative wind (airflow direction) and the chord line.

Lift Force: Component of the aerodynamic force perpendicular to the airflow direction.

Lift Coefficient: A dimensionless quantity that relates the lift force, fluid density, airspeed and the wing area.

Critical Angle of Attack: At a certain angle of attack, air no longer flows smoothly over the upper surface of the wing. At this point the lift produced by the wing is no longer sufficient to support the weight of the aircraft, and the aircraft is said to be in a stall condition; this angle is called the critical angle of attack. As the angle of attack increases beyond this critical point, there is a sudden drop in lift force (the wing is stalled). For a given airfoil, then, the maximum lift is produced at the critical angle of attack.

Reynolds Number: A dimensionless quantity giving the ratio of inertial forces (which depend on fluid density and airspeed) to viscous forces (which depend on fluid viscosity). This number is important because many wind tunnel tests are reported at a particular Reynolds number rather than a particular airspeed.
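Both dimensionless quantities above can be computed directly from their standard definitions, C_L = L / (0.5 ρ V² S) and R = ρVc/μ. A minimal sketch (function and variable names are ours, for illustration only):

```python
def lift_coefficient(lift, rho, v, wing_area):
    """C_L = L / (0.5 * rho * V^2 * S): relates lift force to
    fluid density, airspeed, and wing area."""
    return lift / (0.5 * rho * v**2 * wing_area)

def reynolds_number(rho, v, chord, mu):
    """R = rho * V * c / mu: ratio of inertial to viscous forces."""
    return rho * v * chord / mu

# Standard sea-level air and a 1 m chord (values used later in this chapter)
# give a Reynolds number close to the 1.9 * 10^6 of the wind tunnel tests.
R = reynolds_number(1.23, 27.75, 1.0, 1.789e-5)
```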


4.2 Airfoil selection and simulation parameters

In this section, we discuss in detail the specific airfoil we used to obtain simulation data for different airflow conditions, and the various parameters used for the simulations.

4.2.1 Selection of airfoil

A main goal of this thesis is to evaluate whether we can visually discriminate between different airflow conditions such as non-stall (safe), stall (unsafe), updraft (airflow in the vertical up direction), and downdraft (airflow in the vertical down direction) conditions. To conduct these visual discrimination tests, we needed to obtain simulation data for these airflow conditions. We can simulate the updraft and downdraft conditions by varying the direction of the airflow in the fluid dynamics software. However, to simulate the non-stall and stall conditions we need to know the critical angle of attack for the airfoil used for flow analysis. This information was not readily available for the airfoil discussed in Chapter 2 (Standard Cirrus), so we used a different airfoil for which we could obtain the critical angle of attack. McGhee and Beasley (1973) discuss the low-speed aerodynamic characteristics of an airfoil section designed for general aviation applications. The paper reports wind tunnel tests conducted on the NACA 0417 airfoil and gives the airfoil curve coordinates, which are useful for designing the airfoil model in SolidWorks. McGhee and


Beasley (1973) give a plot of lift coefficient against angle of attack for this airfoil. We reproduce the plot in Figure 4.2 for our discussion. The plot of lift coefficient vs. angle of attack is significant to us because it gives the critical angle of attack (the angle of attack at which lift coefficient is maximum) for the particular airfoil. For this particular airfoil, the critical angle of attack given in the paper was 16 degrees.

Figure 4.2: Plot of lift coefficient versus angle of attack (McGhee & Beasley, 1973)

Figure 4.2 corresponds to Reynolds number 1.9 × 10^6. For conducting flow analysis in FloWorks we need to know the airspeed. Reynolds number and airspeed are related by the following equation:

R = ρVc / μ

where
ρ = fluid density
V = horizontal component of airspeed
c = chord length
μ = fluid viscosity

At standard sea level,
ρ = 1.23 kg/m^3
μ = 1.789 × 10^-5 kg/(m·s)

Given R = 1.9 × 10^6 and chord length c = 1 m, solving the Reynolds number equation for airspeed gives V = 27.75 m/s (99.9 km/h). With the airfoil curve coordinates given in the paper, we designed a 3D wing model (uniform chord) in SolidWorks with 1 m chord and 1 m span (see also Appendix 1 for procedures for designing a wing model in SolidWorks). We then conducted preliminary FloWorks simulations on this wing model, varying the angle of attack (we tried 12 different angles of attack), to compare our simulation results with the wind tunnel results given in the McGhee and Beasley (1973) paper. While we did not get exactly the same results, the critical angle of attack we obtained from our


preliminary flow analysis was 22 degrees. So, to simulate non-stall (angle of attack below critical angle of attack) conditions we conducted flow analysis using two angles of attack below 22 degrees (0 and 12) and for stall (angle of attack above critical angle of attack) conditions we conducted flow analysis using two angles of attack above 22 degrees (30 and 50).
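Solving the Reynolds number relation R = ρVc/μ for airspeed is a one-line computation. A sketch (function and variable names are ours):

```python
def airspeed_from_reynolds(reynolds, rho, chord, mu):
    """Solve R = rho * V * c / mu for the airspeed V."""
    return reynolds * mu / (rho * chord)

# Standard sea-level values from the text: rho = 1.23 kg/m^3,
# mu = 1.789e-5 kg/(m*s), c = 1 m, R = 1.9e6.
v = airspeed_from_reynolds(1.9e6, 1.23, 1.0, 1.789e-5)
```

With the rounded constants above this yields roughly 27.6 m/s; the small difference from the 27.75 m/s quoted in the text comes from rounding in the constants.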

4.2.2 Simulation parameters

We conducted FloWorks analyses for these angles of attack at refinement level four for partial cells (see also Section 2.5.4.1 on mesh refinement). Due to constraints on memory and computational time we could not refine the partial cells beyond four levels of refinement. In absolute rendering, when mapping the sensor readings to the RGB color model, we set the lower bound on pressure values at 98744 Pa and the upper bound at 101825 Pa for all renderings. These are the lowest and highest pressure values obtained from the simulations for the different airflow conditions.
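Fixed bounds like these can be applied with a simple linear mapping from pressure to a color channel. A sketch (the blue-to-red ramp and all names here are our own illustration; the actual color scheme of the Chapter 3 software may differ):

```python
P_MIN, P_MAX = 98744.0, 101825.0  # Pa, bounds taken from the simulations

def pressure_to_rgb(p, p_min=P_MIN, p_max=P_MAX):
    """Linearly map a pressure reading onto a blue-to-red ramp.
    Readings at p_min render pure blue, readings at p_max pure red."""
    t = (p - p_min) / (p_max - p_min)
    t = min(1.0, max(0.0, t))                     # clamp out-of-range readings
    return (int(255 * t), 0, int(255 * (1.0 - t)))  # (R, G, B)
```

Relative rendering would use the same mapping but recompute `p_min` and `p_max` per data set.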

4.3 Test subject study

I conducted a test subject study involving 16 adult subjects, eight males and eight females. The test subjects were between 20 and 50 years of age with an average age of 25


years. The goal of this study was to have the subjects visually compare the virtual sensor renderings for different airflow conditions and determine whether they could perceive any difference. Since the study involved visual perceptual discrimination, I ensured that the test subjects did not suffer from color blindness by asking them verbally, before they participated, whether they were color blind. The sensor renderings were collected for different airflow conditions: (i) angles of attack above and below the critical angle of attack (22 degrees) for the airfoil discussed in the previous section, to simulate safe and unsafe flight conditions; and (ii) varying the airflow direction, to simulate updraft and downdraft airflow conditions.

4.4 Visual stimuli

Our experiment was based on visual stimuli presented to the test subjects. In our study, a stimulus component was a virtual-sensor rendering image generated by the virtual sensors software discussed in Chapter 3. For example, Figure 4.3 shows one such component stimulus. The left part of the figure shows two rendering images, one corresponding to the top surface of the wing and the other to the bottom surface. We combine these two images and present them as a single visual stimulus (a stimulus component) to the test subject, as shown in the right part of Figure 4.3. We combine the two images because we want the test subjects to compare the sensor readings across the entire wing rather than just the top or bottom surface.


[Figure panels: "Rendering for top surface of wing" and "Rendering for bottom surface of wing", combined into a single "Component stimulus".]

Figure 4.3: Top and bottom surface images combined to form a component stimulus

We present a pair of component stimulus images as a visual stimulus to the test subject and ask whether they can tell if one component stimulus differs from the other. Figure 4.4 shows an example pair of component stimuli shown to a test subject. Test subjects were asked to compare the left component stimulus with the right component stimulus and give their response on a scale of 0-9, where 0 indicates that the test subject couldn't perceive any difference between the left and right component stimuli and 9 indicates that they could perceive a drastic difference between the component


stimuli. We have generated a set of such stimuli to be presented to the subject based on the various conditions that we discuss in Section 4.8.

Figure 4.4: Example pair of component stimuli to be presented to a test subject

4.5 Stimuli design

The various factors involved in designing the visual stimuli are shown in Table 4.1. Each component stimulus was generated by combining exactly one level from each of the factors in Table 4.1; that is, each instance of Factor A is combined with every instance of Factor B and every instance of Factor C. The total number of unique component stimulus images that can be generated is thus 4 * 3 * 2 = 24.


Factor A: Angle of attack
  - 2 angles of attack below the stall condition: 0 degrees, 12 degrees
  - 2 angles of attack above the stall condition: 30 degrees, 50 degrees

Factor B: Airflow direction
  - Airflow from the horizontal direction only (27.75 m/s, or 62 mph)
  - Airflow from the horizontal and also in the vertical up direction (2.54 m/s, or 500 ft/min)
  - Airflow from the horizontal and also in the vertical down direction (-2.54 m/s, or -500 ft/min)

Factor C: Rendering type
  - Absolute rendering: the sensor readings across different angles of attack are rendered using a uniform scale, with the same low and high pressure bounds across all data sets
  - Relative rendering: the sensor readings for each angle of attack are rendered using a variable scale, decided per data set on the basis of the low and high values measured for that particular angle of attack

Table 4.1: Factors for stimuli design


A component stimulus is defined in terms of a tuple: angle of attack (0, 12, 30 or 50), flow velocity direction (horizontal only, horizontal plus vertical up, or horizontal plus vertical down), and rendering type (absolute or relative). Any variation in the tuple gives a new component stimulus image. Some example tuples that define a component stimulus image:

<30, horizontal, absolute>: flow analysis simulation with an AOA of 30 degrees, airspeed in the horizontal direction only, and an absolute scale used for rendering the sensor readings.

<12, horizontal and updraft, relative>: flow analysis simulation with an AOA of 12 degrees, airspeed in the horizontal and vertical up directions, and a relative scale used for rendering the sensor readings.
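The full set of component stimuli is simply the Cartesian product of the three factors. A sketch of enumerating it (names are ours; the tuple values follow the text):

```python
from itertools import product

ANGLES = (0, 12, 30, 50)                                               # Factor A
FLOWS = ("horizontal", "horizontal+updraft", "horizontal+downdraft")   # Factor B
RENDERINGS = ("absolute", "relative")                                  # Factor C

# One tuple per unique component stimulus image: 4 * 3 * 2 = 24.
component_stimuli = list(product(ANGLES, FLOWS, RENDERINGS))
```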

4.6 Design for comparison of various stimuli images

In this section we discuss the design for comparing the various sensor renderings. The design question we address here is: which pairs of component images do we want to present to the test subjects? We would like to know if people can discriminate between the visual representations of different airflow conditions. To compare the airflow simulations related to safe and unsafe flight conditions we considered comparisons of non-stall vs. non-stall (safe), non-stall vs. stall (unsafe), and

stall vs. stall (unsafe). We decided to have a set of pairings of component images as shown in Table 4.2.

Trial type                  AOA pairs
non-stall vs. non-stall     AOA0 vs. AOA12
non-stall vs. stall         AOA0 vs. AOA30
                            AOA0 vs. AOA50
                            AOA12 vs. AOA30
                            AOA12 vs. AOA50
stall vs. stall             AOA30 vs. AOA50

Table 4.2: Angle of attack pairings

Each of these six pairings was obtained with airflow in the horizontal direction only, with an added vertical up component (updraft), and with an added vertical down component (downdraft), simulating the change in airflow direction (Factor B in Table 4.1). For each pairing we obtained the images using both absolute and relative rendering (Factor C in Table 4.1). In total, we have 6 * 3 * 2 = 36 pairings to be compared. In addition, if we consider the left/right order of the angles of attack in each pairing (for example, treating AOA0 vs. AOA12 and AOA12 vs. AOA0 as separate stimulus pairings), we obtain 36 * 2 = 72 pairings. We refer to these 72 pairings as Expected-Ys


because when any of these pairings is shown to a test subject, the expected response is that the subject should be able to perceive at least some visual difference between the left and right stimuli. In addition to the 72 Expected-Y stimuli we also generated another 72 pairings, the Expected-Ns, in which the left and right component stimuli are the same. The purpose of these stimuli is to ensure that the simple strategy of responding 9 (or 0) on every trial results in only 50% correct. We combined equal numbers of Expected-Y and Expected-N stimuli in the set of stimuli shown to each subject; the Expected-N stimuli thus help to identify whether test subjects faked or gave blind responses. In our study we used four angles of attack, which give four stimulus pairings for the Expected-N set:

AOA0 vs. AOA0
AOA12 vs. AOA12
AOA30 vs. AOA30
AOA50 vs. AOA50

This set of four stimuli was generated for each of the three airflow categories described in Section 4.5. We then replicated each of these stimuli six times to form in total 4 * 3 * 6 = 72 stimuli in the Expected-N set. This gives us 144 trials (72 Expected-Y and 72


Expected-N) to be presented to the test subjects. We wanted exactly two responses per Expected-Y trial: a single response would have generated an insufficient amount of data, while more than two responses would have created too many trials, requiring at least 24 test subjects for the study. Since we had practical limits on the number of test subjects and on the time to conduct the study, we settled on exactly two responses per Expected-Y trial and duplicated the set of Expected-Y trials to form another 72 trials. Because each test subject received equal numbers of Expected-Y and Expected-N trials, we also duplicated the Expected-N trials to form another set of 72. In total we had 288 trials (144 Expected-Y + 144 Expected-N) to be presented to the test subjects.
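The trial bookkeeping above reduces to a few multiplications. A sketch verifying the counts (variable names are ours):

```python
aoa_pairings = 6        # the six AOA pairs from Table 4.2
flow_conditions = 3     # horizontal only, updraft, downdraft (Factor B)
renderings = 2          # absolute, relative (Factor C)

# Expected-Y: every pairing under every flow/rendering combination,
# doubled for left/right order.
expected_y = aoa_pairings * flow_conditions * renderings * 2

# Expected-N: four same-AOA pairs per flow condition, replicated six times.
expected_n = 4 * flow_conditions * 6

# Both sets duplicated to obtain two responses per trial.
total_trials = (expected_y + expected_n) * 2
```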

4.7 Test subject schedules

We involved 16 test subjects in our study to rate the 288 trials. Each subject rated 18 trials, consisting of nine Expected-N and nine Expected-Y trials. We designed an XML file format, named a schedule, that represents the list of trials to be presented to a particular subject. We prepared 16 test schedules, one per subject; each test subject received a unique list of trials to rate. Figure 4.5 shows an example of one such test schedule. Observe that there are three separate training trials, which we discuss in Section 4.11.

<schedule>
    <subject ID="1"/>
    <training>
        <ttrial aoa1="30" aoa2="12" set="C" type="A"/>
        <ttrial aoa1="50" aoa2="30" set="C" type="R"/>
        <ttrial aoa1="50" aoa2="50" set="A" type="R"/>
    </training>
    <trial aoa1="30" aoa2="30" set="B" type="A"/>
    <trial aoa1="30" aoa2="30" set="A" type="A"/>
    <trial aoa1="0" aoa2="12" set="A" type="R"/>
    <trial aoa1="30" aoa2="12" set="B" type="R"/>
    <trial aoa1="50" aoa2="50" set="B" type="R"/>
    <trial aoa1="50" aoa2="0" set="A" type="R"/>
    <trial aoa1="30" aoa2="50" set="B" type="R"/>
    <trial aoa1="12" aoa2="30" set="C" type="A"/>
    <trial aoa1="0" aoa2="50" set="B" type="A"/>
    <trial aoa1="0" aoa2="0" set="A" type="A"/>
    <trial aoa1="30" aoa2="30" set="A" type="R"/>
    <trial aoa1="0" aoa2="0" set="A" type="R"/>
    <trial aoa1="12" aoa2="12" set="B" type="A"/>
    <trial aoa1="50" aoa2="50" set="C" type="A"/>
    <trial aoa1="0" aoa2="30" set="A" type="A"/>
    <trial aoa1="50" aoa2="12" set="C" type="R"/>
    <trial aoa1="12" aoa2="12" set="B" type="R"/>
    <trial aoa1="50" aoa2="30" set="C" type="A"/>
</schedule>

Figure 4.5: Example of one test schedule, presented to a test subject
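A schedule in this format can be loaded with any standard XML parser. A sketch in Python (the thesis software itself was written in Visual C++; this only illustrates the file format, with element and attribute names taken from Figure 4.5):

```python
import xml.etree.ElementTree as ET

# Abbreviated sample in the schedule format of Figure 4.5.
SAMPLE = """<schedule>
  <subject ID="1"/>
  <training>
    <ttrial aoa1="30" aoa2="12" set="C" type="A"/>
  </training>
  <trial aoa1="30" aoa2="30" set="B" type="A"/>
  <trial aoa1="0" aoa2="12" set="A" type="R"/>
</schedule>"""

root = ET.fromstring(SAMPLE)
subject_id = root.find("subject").get("ID")
training = [t.attrib for t in root.find("training")]          # ttrial elements
trials = [(t.get("aoa1"), t.get("aoa2"), t.get("set"), t.get("type"))
          for t in root.findall("trial")]                     # rated trials
```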

4.8 Balancing of various factors in schedule

In Section 4.5 we discussed the categorization of the simulation data based on various factors such as airflow velocity direction, angles of attack above and below stall conditions, and absolute and relative rendering. In this section we discuss the balancing of the test schedules across these factors, with the goal of avoiding a bias for any of these

factors. We made our best attempt to balance the test schedules for the following factors:

Absolute and relative rendering trials: Each subject received 18 trials to rate, nine with absolute rendering and nine with relative rendering.

Probability of responses: We balanced for the probability of a chance correct response in the following way. Each test subject received 50% (nine) of their trials from the Expected-Y category and the remaining 50% from the Expected-N category. If a subject responds blindly, the chance probability of a correct response is 50%.

Balancing angles of attack: We have two angles of attack (0 and 12) below the critical angle of attack (22), also called non-stall angles of attack, and two angles of attack above the critical angle of attack (30 and 50), also called stall angles of attack. We balanced this factor so that each test schedule has comparable numbers of non-stall and stall angles of attack. For example, of the 36 angles of attack in a test schedule (each schedule has 18 trials and each trial has two angles of attack), we have either 20 stall and 16 non-stall angles of attack, or 16 stall and 20 non-stall angles of attack.

Balancing stall vs. non-stall and non-stall vs. stall trials: Of the nine Expected-Y trials in each test schedule, three trials are of type non-stall vs. stall and three trials are of type stall vs. non-stall. (For the remaining three trials in the Expected-Y category we have either two non-stall vs. non-stall trials and one stall vs. stall trial, or two stall vs. stall trials and one non-stall vs. non-stall trial.)

Randomizing the sequence of trials: We used the following procedure to randomize the 18 trials in each schedule. We randomized the Expected-Y trials (nine) and Expected-N trials (nine) separately, created the test schedule as three groups of six trials, each group containing three Expected-Y and three Expected-N trials, and then randomized the trials within each group.

Balancing tables used for creating the test schedules satisfying the above constraints are given in Appendix 3.
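The randomization procedure described above can be sketched as follows (assuming lists of nine Expected-Y and nine Expected-N trials; function and variable names are ours):

```python
import random

def build_schedule(expected_y, expected_n):
    """Interleave 9 Expected-Y and 9 Expected-N trials into three
    groups of six (3 Y + 3 N each), shuffling within each group."""
    ys, ns = list(expected_y), list(expected_n)
    random.shuffle(ys)   # randomize each category separately
    random.shuffle(ns)
    schedule = []
    for g in range(3):
        group = ys[3 * g:3 * g + 3] + ns[3 * g:3 * g + 3]
        random.shuffle(group)   # randomize within the group of six
        schedule.extend(group)
    return schedule
```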

4.9 Hypotheses for test subject study

We designed hypotheses concerning response validity, responses for safety-related simulation data, responses for simulation data obtained by varying airflow direction, and responses for trials with absolute and relative rendering. We also designed


an exploratory hypothesis concerning response time. The following are the hypotheses on which the test study was based:

4.9.1 Response validity

The purpose of this category of hypotheses is to assess whether subjects were responding at random.

H1: The expected response for Expected-Y trials is greater than zero. That is, we expect test subjects to perceive a difference between the left and right stimuli in trials designed to differ on the left and right.

H2: The expected response for Expected-N trials is zero. That is, we expect test subjects not to perceive any difference between the left and right stimuli in trials designed to be the same on the left and right.

H3: The average response value for Expected-Y trials should be greater than the average response value for Expected-N trials, because we expect the average response for Expected-Y trials to be greater than zero and the average response for Expected-N trials to be zero.


4.9.2 Safety

The hypotheses in this category were designed to compare safe flight conditions with unsafe flight conditions.

H4: The pressure distribution over the wing surface changes drastically beyond the critical angle of attack compared with the pressure distribution below the critical angle of attack. We expect to identify this pattern by comparing the response values for non-stall vs. stall trials with those for non-stall vs. non-stall trials. Hence we expect the average response value for non-stall vs. stall trials to be relatively high compared with the average response value for non-stall vs. non-stall trials.

H5: From the preliminary FloWorks analyses for different angles of attack we observed that beyond the critical angle of attack there is not much difference between the pressure distributions over the wing surface. We want to determine whether our subjects can visually perceive this. Hence we expected the average response value for stall vs. stall trials to be less than the average response value for non-stall vs. stall trials.


4.9.3 Airflow direction

The purpose of this category of hypotheses is to determine whether a change in airflow direction has any effect on the test subjects' responses.

H6: When a glider sails through an updraft, the wing surface experiences an upward pressure exerted by air flowing in the vertical up direction, in addition to the pressure exerted by air flowing in the horizontal direction. We therefore expect the average response for trials with airflow only in the horizontal direction to differ from the average response for trials with updraft airflow. That is, we expect our subjects to visually perceive the change in pressure due to updraft airflow.

H7: When a glider sails through a downdraft, the wing surface experiences pressure exerted by air flowing in the downward direction, in addition to the pressure exerted by air flowing in the horizontal direction. We therefore expect the average response for trials with airflow only in the horizontal direction to differ from the average response for trials with downdraft airflow. That is, we expect our subjects to perceive the change in pressure due to downdraft airflow.

H8: Since the glider experiences downward pressure on its wings due to downdraft airflow and upward pressure due to updraft airflow, we expect the average response for trials with updraft airflow to differ from the average response for trials with downdraft airflow. That is, we expect our subjects to perceive a difference between updraft and downdraft airflow conditions.

4.9.4 Absolute rendering vs. relative rendering

This category of hypotheses compares the responses for trials with absolute rendering to the responses for trials with relative rendering. The purpose is to determine whether the rendering scale has any effect on subjects' responses.

H9: In relative rendering, each image has its own scale for high and low pressure values, so even slight variations in pressure are magnified. In absolute rendering, on the other hand, all rendered images share the same bounds on low and high pressure values, so slight variations in pressure may not be depicted. Hence, we expect the average response value for non-stall vs. stall trials with relative rendering to be greater than that for non-stall vs. stall trials with absolute rendering.

H10: Along the same lines as hypothesis H9, we expect the average response value for non-stall vs. non-stall trials with relative rendering to be greater than that for non-stall vs. non-stall trials with absolute rendering.


H11: Along the same lines as hypothesis H9, we expect the average response value for stall vs. stall trials with relative rendering to be greater than that for stall vs. stall trials with absolute rendering.

4.9.5 Exploratory hypothesis

In this hypothesis we consider response time rather than response value. For the earlier hypotheses, the design of the trials drove our expectations for response value; however, we do not have a strong knowledge base about what the outcome should be for response times. We collected this data because it was relatively easy to do so and it might provide useful information.

H12: One possibility is that subjects quickly perceive that the images in Expected-N trials are the same. Similarly, when there is a drastic difference between the images in an Expected-Y trial (e.g., AOA0 vs. AOA50), subjects may quickly perceive that difference. On the other hand, if the images share similar patterns (e.g., AOA0 vs. AOA12 or AOA30 vs. AOA50), subjects may take relatively more time to analyze and rate the similarities. Thus, we do not have a solid basis for how subjects' response times might vary across trials, and H12 is an exploratory hypothesis.

4.10 Pre-study knowledge given to test subjects

Before we presented the trials to the test subjects, we briefed them on the test study. We took care not to reveal our hypotheses before the subjects rated the trials. We informed them about what they could expect from the study and their role in it. Since we were conducting a study of visual perceptual discrimination involving colors, we made sure that our test subjects did not suffer from color blindness. During the test subject study I was not present in the room where the study was conducted. The following is the script we read to each test subject before starting the presentation of trials:

We are doing a study of visual perception discrimination. We will present you a sequence of trials where each trial has two colored images. One image will be on the left part of the screen and the other will be on the right part of the screen. Your task is to analyze the two images in each trial and decide how different they are. You have to rate the difference on a scale of 0-9, where 0 means you are not able to perceive any difference between the two images and 9 means there is a huge difference between the two images. You have to carefully analyze the two images, possibly find any discriminating patterns, and then decide on your rating. For this study we have prepared a pool of trials and you will be presented a subset of these trials to rate. We have randomized the pool of trials and then selected n trials for you to rate. Since we have randomized the trials, I don't know which trials are being presented to you. We have completely automated the task of presenting the trials to you through our software, so once your test study is started you can complete it without my intervention. I won't see which trials are presented to you, nor the responses you give. You can analyze the images as long as you want. Once you come to a decision you can enter your response through the interface provided by our software. Before starting the actual study you will be presented some training samples so that you can get an idea of what kind of trials to expect and get acquainted with the interface for entering your responses. Help text on the screen will guide you on what to do next. You can do the training as many times as you want, and once you feel confident, start the study by pressing View menu, Start comparison. This will discard your training trials and present you a fresh set of trials to rate. These are the actual trials for which we want your feedback. Once your test study is finished you will be notified through the software.


4.11 Program for presenting trials

A total of 288 trials were rated across the test subjects (18 trials per subject). We involved 16 subjects in our study. The selection of trials for each test subject was done by balancing the various factors discussed in Section 4.8. I developed software in Microsoft Visual C++ to load a test subject schedule, read the trials, load the stimulus images according to the angles of attack for each trial, and present these stimulus images to the subject. Figure 4.6 shows the interface we designed to present a trial to a subject.

Figure 4.6: Software interface to present a trial to subjects and collect the response

As can be seen from Figure 4.6, the GUI window includes documentation of the reference scale for the rating to be given on a trial. The test subject enters the response in the textbox provided (which accepts only numeric input). When the test subject rates the current trial by entering a number in the range 0-9, the next trial is presented either via the Enter key or by clicking the Next button on the GUI (with the mouse). Before loading the next trial, the software records the response given and the time taken to give it. The time to rate a trial is measured by noting (in software) the time elapsed between the start of the trial's presentation and the moment the subject presses a number from 0-9. Because we wanted to record subjects' response times accurately, we wrote the program in VC++. Java could have been used, but with Java's garbage collection (outside the control of the programmer) we might not have obtained accurate timings. Each subject was presented three selected trials as training trials to get acquainted with the software and its interface. We made sure that these training trials did not match any of the 18 trials to be rated by the subject. The training trials familiarized the test subjects with what they could expect from the actual study. Subjects could repeat the training trials more than once.

4.12 Analysis of subjects' responses

The subject study was conducted with 16 adult subjects (see also Section 4.3). Two subjects' responses had to be discarded and replaced with new responses from different subjects. One subject used the reverse scale (9 = no difference; 0 = drastic difference) for rating the images; this subject asked me to restart the test trials so the correct scale could be used. For the second subject whose responses had to be discarded, I accidentally cleared the responses instead of logging them to file. The reason we discarded these two subjects' responses is that we wanted the conditions to be the same for every test subject; if we re-ran the test trials for only some subjects, the conditions would differ for those subjects. The subjects' responses and response times for each trial were logged to a file. All responses for the 16 test subjects are given in Appendix 4. In this section, we discuss the analysis of the subjects' responses.

4.12.1 Analysis of response validity hypotheses

Table 4.3 presents data for the analysis of hypothesis H1 (we expected average responses for Expected-Y trials to be greater than zero). Remarkably, all subjects were able to perceive that the stimulus pairs in the Expected-Y trials were different. This is fully in agreement with hypothesis H1. Table 4.4 presents data for the analysis of hypothesis H2 (we expected average responses for Expected-N trials to be zero). From the results we can observe that most of the Expected-N trials carried out by the subjects (85.4%) were correctly discriminated as being the same.

Total trials:                     144
Trials with non-zero responses:   144
Trials with response zero:        0
Average response:                 6.076

Table 4.3: Analysis of hypothesis H1: Responses for Expected-Y trials

Total trials:                     144
Trials with response zero:        123
Trials with non-zero response:    21
Mean response:                    0.2083
% correct responses:              85.4%

Table 4.4: Analysis of hypothesis H2: Responses for Expected-N trials

While a statistical analysis of this data may not be needed for hypothesis H1 (0/144 trials with a zero response is almost obviously statistically significant), we will use statistical analysis for hypothesis H2, and so we also apply it to H1 for consistency. For hypotheses H1 and H2, each trial had two possible outcomes (e.g., for H1, each trial had either a zero or a non-zero response). We assume a fixed (constant) probability of a correct response (correct = fitting our hypothesis) of 0.5 for hypotheses H1 and H2. We ensured that the trials were statistically independent by designing the test schedules in such a way that no subject repeated the same trial. In this case, a cumulative binomial analysis can be used to determine whether the results obtained for hypotheses H1 and H2 were due to chance; that is, if the test subjects had responded randomly, could we have observed the outcomes we obtained? Applying a cumulative binomial test to Table 4.3 gives p < 0.00001; similarly, applying a cumulative binomial test to Table 4.4 also gives p < 0.00001. Note that the conventional cut-off for statistical significance in the social sciences is p < 0.05, so our results are highly statistically significant. Table 4.5 presents results for hypothesis H3 (we expected the average response value for Expected-Y trials to be greater than that for Expected-N trials). From the means in Table 4.5 it is clearly evident that there was a large difference between the average responses for Expected-Y trials and Expected-N trials. This is fully in agreement with hypothesis H3.

Trial Type    Number of trials    Mean Response
Expected-Y    144                 6.076
Expected-N    144                 0.2083

Table 4.5: Analysis of hypothesis H3: Responses for Expected-Y trials compared with responses for Expected-N trials


It is reasonable to ask whether the difference between the means in Table 4.5 is statistically significant; potentially the difference could have occurred by chance, and we want to determine that likelihood. A Student's t-test can be used for this analysis, and for Table 4.5 it gives p < 0.00001 (t-value = 31.21; d.f. = 286; two-tailed). It seems clear that these means are statistically different.
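For readers unfamiliar with the mechanics, a pooled two-sample Student's t statistic and its degrees of freedom (n1 + n2 - 2) can be computed as sketched below. The response samples here are hypothetical stand-ins (the thesis's raw per-trial responses are not reproduced in the text); only the sample sizes of 144 and 144, which yield d.f. = 286 as in the analysis above, are taken from the source:

```python
import statistics

def pooled_t(sample_a, sample_b):
    """Student's two-sample t statistic with pooled variance, plus the
    degrees of freedom n1 + n2 - 2 used for the two-tailed p-value."""
    n1, n2 = len(sample_a), len(sample_b)
    m1, m2 = statistics.fmean(sample_a), statistics.fmean(sample_b)
    v1, v2 = statistics.variance(sample_a), statistics.variance(sample_b)
    # Pooled variance: variance estimates weighted by their d.f.
    sp2 = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
    t = (m1 - m2) / (sp2 * (1 / n1 + 1 / n2)) ** 0.5
    return t, n1 + n2 - 2

# Hypothetical per-trial responses standing in for the real data
# (144 Expected-Y and 144 Expected-N responses, as in Table 4.5).
expected_y = [6.0, 7.0, 5.5, 6.5, 6.2, 6.4] * 24   # 144 values
expected_n = [0.0, 0.0, 1.0, 0.0, 0.25, 0.0] * 24  # 144 values

t, df = pooled_t(expected_y, expected_n)
print(f"t = {t:.2f}, d.f. = {df}")  # d.f. = 286, matching the analysis
```

The degrees of freedom match the 286 reported above; the t value itself depends on the actual response spread, which the sketch only imitates.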

4.12.1.1 Interpretation of results

We designed hypotheses H1 to H3 to test the validity of the responses given by the subjects. In our test study, we divided the trials into two groups (Expected-Y trials and Expected-N trials). We intentionally inserted nine Expected-N trials into each test schedule to assess whether the subjects were responding by chance. Our expectation was that if subjects responded knowledgeably, then the response values for Expected-N trials should be near zero, the response values for Expected-Y trials should be non-zero, and overall the responses for Expected-N trials should be less than those for Expected-Y trials. It is evident from the results of hypothesis H2 (85.4% correct responses) that the subjects generally perceived the Expected-N trials correctly. From the results of hypothesis H1, all the Expected-Y trials were rated as non-zero. Also, from the results of hypothesis H3, the mean response value for Expected-Y trials is greater than that for Expected-N trials. We would not have observed these results for Expected-Y or Expected-N trials if the subjects had responded randomly. So, from these results, we can say that the responses given by the subjects did not occur by chance and are therefore valid.

4.12.2 Analysis of hypothesis related to safety

Table 4.6 presents results for hypothesis H4 (we expected the average response value for non-stall vs. stall trials to be relatively high compared to the average response value for non-stall vs. non-stall trials). From the means, it seems that the subjects perceived more difference in non-stall vs. stall trials than in non-stall vs. non-stall trials. This is in agreement with hypothesis H4.

Trial Type                 Number of trials    Mean Response
non-stall vs. stall        96                  6.552
non-stall vs. non-stall    24                  5.167

Table 4.6: Analysis of hypothesis H4: Responses for non-stall vs. stall trials compared with responses for non-stall vs. non-stall trials


Table 4.7 presents results for hypothesis H5 (we expected the average response value for stall vs. stall trials to be less than the average response value for non-stall vs. stall trials). From the means, it seems that the subjects perceived less difference in stall vs. stall trials than in non-stall vs. stall trials. This is in agreement with hypothesis H5.

Trial Type             Number of trials    Mean Response
non-stall vs. stall    96                  6.552
stall vs. stall        24                  5.083

Table 4.7: Analysis of hypothesis H5: Responses for non-stall vs. stall trials compared with responses for stall vs. stall trials

Are the results for hypotheses H4 and H5 statistically significant? We want to determine the likelihood that the differences between the means occurred by chance. A Student's t-test was applied to the means obtained for these hypotheses. Table 4.8 shows the results of the t-test (two-tailed); in each case, the differences between the means obtained for hypotheses H4 and H5 are statistically significant.


Hypothesis    p        t-value    d.f.
H4            0.005    2.856      118
H5            0.002    3.109      118

Table 4.8: T-test results for means obtained from hypotheses H4 and H5

4.12.2.1 Interpretation of results

We designed hypotheses H4 and H5 to determine whether our subjects were able to perceive differences related to safe and unsafe flight conditions. We expected that the subjects would perceive more difference between a safe and an unsafe flight condition than between two safe flight conditions or two unsafe flight conditions. From the results of hypothesis H4, we can say that the subjects perceived more difference between safe (non-stall) and unsafe (stall) conditions than between two safe flight conditions (non-stall vs. non-stall trials). In hypothesis H5, we hypothesized that there might be relatively less difference between two unsafe conditions than between one safe and one unsafe condition. This hypothesis was based on observations made during the preliminary FloWorks analyses that, after the critical angle of attack, the pressure distribution over the wing does not vary drastically. From the results of hypothesis H5, we can say that the hypothesis was supported, since subjects perceived less difference between two unsafe condition trials (stall vs. stall trials) than between trials pairing a safe and an unsafe condition.

4.12.3 Analysis of hypotheses related to airflow direction

Table 4.9 presents results for hypothesis H6 (we expected the average response value for trials with only horizontal airflow to differ from the average response value for trials with updraft airflow conditions). From the means, the average response for trials with horizontal airflow is only slightly different from that for trials with updraft airflow conditions. It looks like hypothesis H6 may not hold true.

Airflow Direction    Number of Trials    Mean Response
horizontal only      48                  6.041667
updraft              48                  6.145833

Table 4.9: Analysis of hypothesis H6: Responses for horizontal airflow condition trials compared with responses for updraft airflow condition trials


Table 4.10 presents results for hypothesis H7 (we expected the average response value for trials with only horizontal airflow to differ from the average response value for trials with downdraft airflow conditions). The average response for trials with horizontal airflow is the same as the average response for trials with downdraft airflow conditions (see footnote 6). So, it looks like hypothesis H7 does not hold true.

Airflow Direction    Number of Trials    Mean Response
horizontal only      48                  6.041667
downdraft            48                  6.041667

Table 4.10: Analysis of hypothesis H7: Responses for horizontal airflow condition trials compared with responses for downdraft airflow condition trials

Table 4.11 presents results for hypothesis H8 (we expected the average response value for trials with updraft airflow conditions to differ from the average response value for trials with downdraft airflow conditions). From the means, the average response for trials with updraft airflow is only slightly different from that for trials with downdraft airflow. It looks like hypothesis H8 may not hold true.
Footnote 6: We triple-checked that there was no error in calculating the mean responses given in Table 4.10; the mean responses coincidentally happened to be the same.


Airflow Direction    Number of Trials    Mean Response
updraft              48                  6.145833
downdraft            48                  6.041667

Table 4.11: Analysis of hypothesis H8: Responses for updraft airflow condition trials compared with responses for downdraft airflow condition trials

Are the results for hypotheses H6 and H8 statistically significant? (For hypothesis H7, where the means are identical, there is no need for statistical analysis.) We want to determine the likelihood that the differences between the means occurred by chance. A Student's t-test was applied to the means obtained for these hypotheses. Table 4.12 shows the results of the t-test (two-tailed, d.f. = 94). Observe from Table 4.12 that p > 0.05 for both hypotheses H6 and H8. From this evidence, we cannot infer that any of the pairs of mean response values within hypotheses H6, H7, and H8 differ from each other.

Hypothesis    p           t-value
H6            0.826707    0.219539
H8            0.808837    0.242609

Table 4.12: T-test results for means obtained from hypotheses H6 and H8

4.12.3.1 Interpretation of results

We designed hypotheses H6 to H8 to determine whether changes in airflow direction have any effect on the subjects' responses. When the glider sails through updraft airflow, its wings experience increased pressure in the upward direction, causing the glider to gain altitude. So, we designed hypothesis H6 to determine whether the subjects could detect these altitude-gaining conditions by discriminating between horizontal-only airflow condition trials and updraft airflow condition trials. The results indicate that the subjects could not reliably discriminate between the conditions that cause the glider to gain altitude due to updraft airflow and horizontal-only conditions. When the glider sails through downdraft airflow, its wings experience increased pressure in the downward direction, causing the glider to lose altitude. So, we designed hypothesis H7 to determine whether subjects could detect these altitude-losing conditions by discriminating between horizontal-only airflow condition trials and downdraft airflow condition trials. The results indicate that the subjects could not reliably make this discrimination either. We designed hypothesis H8 to determine whether the subjects could discriminate between updraft airflow condition trials and downdraft airflow condition trials. The results indicate that the subjects could not reliably discriminate between these two airflow conditions.

It seems that changes in airflow direction did not have any reliable effect on the subjects' responses. The subjects were not able to reliably detect differences between the conditions that cause the glider to lose or gain altitude due to changes in airflow direction. These results were unexpected and disappointing. We expected the test subjects to discriminate between updraft and downdraft airflow conditions because we know that the pressure distribution changes with airflow direction. So how do we resolve this problem? Let's analyze the approach we used for comparing updraft and downdraft airflow conditions. With the current design, we compared the mean response value for updraft conditions with the mean response value for downdraft conditions. Also, for each trial in the Expected-Y category, we compared the airflow conditions across two different angles of attack, keeping the airflow direction constant. So, when we compare the mean response value for updraft conditions with that for downdraft conditions, we cannot confidently say whether any visual differences perceived (or not perceived) by the subjects were due to the change in angle of attack or to the change in airflow direction. To address this problem, we have created another, better design to evaluate the differences between updraft and downdraft conditions. To compare trials for a change in airflow direction, it is better to keep the angle of attack and rendering type constant. Table 4.13 shows a new design of trials for comparing differences between updraft and downdraft airflow conditions. An example of a trial for this design is as follows:

<AOA0, updraft, Relative> vs. <AOA0, downdraft, Relative>: A component stimulus for angle of attack (AOA) zero with airspeed in vertical up direction (updraft) and relative scale for rendering is compared with the component stimulus for angle of attack zero with airspeed in vertical down direction (downdraft) and relative scale for rendering. Observe that the only varying component in each trial is the airflow direction. We can have a similar design of trials for comparing updraft vs. horizontal only airflow conditions, or downdraft vs. horizontal only airflow conditions.

Rendering    Updraft stimulus    Downdraft stimulus    Expected Response
Relative     AOA0                AOA0                  > 0
Relative     AOA12               AOA12                 > 0
Relative     AOA30               AOA30                 > 0
Relative     AOA50               AOA50                 > 0
Absolute     AOA0                AOA0                  > 0
Absolute     AOA12               AOA12                 > 0
Absolute     AOA30               AOA30                 > 0
Absolute     AOA50               AOA50                 > 0

Table 4.13: New design for comparing updraft vs. downdraft airflow conditions
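The proposed design above pairs each updraft stimulus with the downdraft stimulus at the same angle of attack and rendering type, so airflow direction is the only varying component. A sketch of generating this trial set programmatically (the tuple layout and field names are ours, for illustration):

```python
from itertools import product

# Design factors from the proposed Table 4.13 design.
renderings = ["Relative", "Absolute"]
aoas = ["AOA0", "AOA12", "AOA30", "AOA50"]

# One trial per (rendering, AOA) cell: compare the updraft stimulus with
# the matching downdraft stimulus, holding everything else constant.
trials = [
    {"left": (aoa, "updraft", rendering),
     "right": (aoa, "downdraft", rendering),
     "expected_response": "> 0"}
    for rendering, aoa in product(renderings, aoas)
]

for trial in trials:
    print(trial["left"], "vs.", trial["right"])
```

This yields the eight trials of the table (2 rendering types x 4 angles of attack), and in each pair the angle of attack and rendering type on the left match those on the right.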


4.12.4 Analysis of hypotheses related to absolute vs. relative rendering trials

Table 4.14 presents results for hypothesis H9 (we expected the average response value for non-stall vs. stall trials with relative rendering to be greater than the average response value for non-stall vs. stall trials with absolute rendering). From the means, it seems that the subjects perceived more difference in non-stall vs. stall trials with relative rendering than in non-stall vs. stall trials with absolute rendering. This is in agreement with hypothesis H9.

Trial Type                                  Number of trials    Mean Response
non-stall vs. stall (relative rendering)    48                  7.1875
non-stall vs. stall (absolute rendering)    48                  5.917

Table 4.14: Analysis of hypothesis H9: Responses for non-stall vs. stall trials of relative rendering compared with responses for non-stall vs. stall trials of absolute rendering

Table 4.15 presents results for hypothesis H10 (we expected the average response value for non-stall vs. non-stall trials with relative rendering to be greater than the average response value for non-stall vs. non-stall trials with absolute rendering). From the means, it seems that the subjects perceived more difference in non-stall vs. non-stall trials with relative rendering than in non-stall vs. non-stall trials with absolute rendering. This is in agreement with hypothesis H10.

Trial Type                                      Number of trials    Mean Response
non-stall vs. non-stall (relative rendering)    12                  6.75
non-stall vs. non-stall (absolute rendering)    12                  3.583

Table 4.15: Analysis of hypothesis H10: Responses for non-stall vs. non-stall trials of relative rendering compared with responses for non-stall vs. non-stall trials of absolute rendering

Table 4.16 presents results for hypothesis H11 (we expected the average response value for stall vs. stall trials with relative rendering to be greater than the average response value for stall vs. stall trials with absolute rendering). From the means, it seems that the subjects perceived more difference in stall vs. stall trials with relative rendering than in stall vs. stall trials with absolute rendering. This is in agreement with hypothesis H11.


Trial Type                              Number of trials    Mean Response
stall vs. stall (relative rendering)    12                  6
stall vs. stall (absolute rendering)    12                  4.167

Table 4.16: Analysis of hypothesis H11: Responses for stall vs. stall trials of relative rendering compared with responses for stall vs. stall trials of absolute rendering

Are the results for hypotheses H9 to H11 statistically significant? We want to determine the likelihood that the differences between the means occurred by chance. A Student's t-test was applied to the means obtained for these hypotheses. Table 4.17 shows the results of the t-test (two-tailed); in each case, the differences between the means obtained for hypotheses H9 to H11 are statistically significant.

Hypothesis    p             t-value    d.f.
H9            0.002         3.092      94
H10           < 0.000001    4.989      22
H11           0.016         2.599      22

Table 4.17: T-test results for means obtained from hypotheses H9 to H11

4.12.4.1 Interpretation of results

We designed hypotheses H9, H10, and H11 to determine whether the subjects could discriminate between trials with absolute rendering and trials with relative rendering. In relative rendering, each image has its own scale for high and low pressure values, so even the slightest variations in pressure are magnified. In absolute rendering, on the other hand, all the images share the same bounds on low and high pressure values. So, we expected the subjects to perceive more difference in trials with relative rendering than in trials with absolute rendering. From the results of hypotheses H9, H10, and H11, we can say that subjects could indeed perceive greater differences in the relative rendering trials than in the absolute rendering trials.
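The distinction between the two rendering types can be illustrated with a minimal normalization sketch. The pressure values, global bounds, and function name below are hypothetical, chosen only to show why a small pressure spread fills the whole color scale under relative rendering but only a sliver of it under absolute rendering:

```python
def normalize(pressures, lo, hi):
    """Map pressure readings into [0, 1] for color mapping,
    clamping to the [lo, hi] bounds."""
    span = (hi - lo) or 1.0
    return [min(max((p - lo) / span, 0.0), 1.0) for p in pressures]

# Hypothetical sensor readings (Pa) for one rendered image.
image = [101300.0, 101310.0, 101320.0, 101305.0]

# Relative rendering: bounds come from this image alone, so even a
# 20 Pa spread stretches across the whole color scale.
relative = normalize(image, min(image), max(image))

# Absolute rendering: fixed bounds shared by all images (assumed values),
# so the same 20 Pa spread occupies only a tiny fraction of the scale.
GLOBAL_LO, GLOBAL_HI = 100000.0, 103000.0
absolute = normalize(image, GLOBAL_LO, GLOBAL_HI)

print(max(relative) - min(relative))  # full contrast: 1.0
print(max(absolute) - min(absolute))  # low contrast: 20 / 3000
```

Under relative scaling, the normalized values span the full [0, 1] range; under absolute scaling, the same readings are compressed into a narrow band, which matches the subjects' weaker discrimination in absolute rendering trials.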

4.12.5 Analysis of exploratory hypothesis related to response time

Figures 4.7 and 4.8 present plots for the analysis of hypothesis H12. We can observe from these figures that the mean response time for trials with relative rendering is greater than the corresponding mean response time for trials with absolute rendering. For example, the mean response time for Expected-N trials with relative rendering is greater than the mean response time for Expected-N trials with absolute rendering.


Figure 4.7: Comparison of mean response time for different trial types (relative rendering)

Figure 4.8: Comparison of mean response time for different trial types (absolute rendering)


Are these pairwise mean response time differences seen in Figures 4.7 and 4.8 statistically significant? That is, what is the likelihood that the mean response time differences occurred by chance? A Student's t-test (two-tailed) was applied, and the results are given in Table 4.18. Observe from the t-test results that p > 0.05 for all the paired differences. Taken individually, then, none of the differences in mean response time is statistically significant; we cannot rule out that they occurred by chance.

Trial Type                       p          t-value     d.f.
Expected-Ns                      0.2154     1.244       142
non-stall vs. non-stall          0.09672    1.735085    22
non-stall vs. stall              0.52722    0.634621    94
stall vs. stall                  0.27685    1.115083    22
All Relative vs. All Absolute    0.0505     1.963234    286

Table 4.18: T-test results for response time for trials with relative rendering compared with response time for trials with absolute rendering

However, while this analysis of the pairs of means individually does not show statistical significance, there seems to be some information in the direction of the differences: all five differences are in the same direction. That is, the response time for the relative rendering trials is greater than the response time for the absolute rendering trials for all five differences across Figures 4.7 and 4.8. What if we statistically analyzed all five differences together, rather than individually? Consider a new event for analysis: the event that, for a pair of means across Figures 4.7 and 4.8, one is greater or smaller than the other. For example, the mean response time for Expected-N trials with relative rendering is greater than the mean response time for Expected-N trials with absolute rendering. What is the probability that such an event occurred by chance? We assumed this probability was 0.5; for example, there is an equal probability that the mean response time for Expected-N trials with relative rendering is either greater or smaller than that for Expected-N trials with absolute rendering. In Figures 4.7 and 4.8, we obtained five such events, and they all turned out in one direction (the response time for relative rendering trials was greater than the response time for absolute rendering trials). Applying a cumulative binomial to this direction event gives p < 0.05. From this evidence, when we consider all five differences together, the consistent direction is unlikely to have occurred by chance. This implies that the subjects took more time to respond in trials with relative rendering than in trials with absolute rendering.
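The five-events computation above reduces to a one-sided sign-test calculation: under chance, each paired difference goes in the observed direction with probability 0.5, so five out of five in the same direction has probability 0.5^5. A one-line sketch:

```python
# Sign-test style check for the five paired response-time differences:
# under chance, each difference is equally likely to go either way, so
# the probability that all five land in the one observed direction is:
n_pairs = 5
p_same_direction = 0.5 ** n_pairs
print(p_same_direction)  # 0.03125, below the 0.05 threshold
```

This 0.03125 is the p < 0.05 figure cited above (one-sided, matching the direction actually observed in Figures 4.7 and 4.8).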


4.12.5.1 Interpretation of results

We designed exploratory hypothesis H12 regarding response time to determine whether there is any relation between the response times and the type of trial. The mean response times in Figures 4.7 and 4.8 indicate that the response time for trials with relative rendering was greater than the response time for trials with absolute rendering. The t-test analysis suggests that, taken individually, the differences in mean response time between relative rendering trials and absolute rendering trials could have occurred by chance. However, applying a cumulative binomial to all five differences in Figures 4.7 and 4.8 together suggests that it was unlikely that, by chance, trials with relative rendering consistently had longer response times than trials with absolute rendering. Hence, the evidence indicates that the subjects took more time to visually perceive the differences between images in trials with relative rendering than in trials with absolute rendering.

4.13 Summary

In this chapter, we discussed the design of visual stimuli, which consist of pairs of component rendering images for sensor readings, and the design aspects for comparing the images in the visual stimuli.


We also discussed in detail the test subject study, in which a set of visual stimuli was shown to 16 adult subjects (eight males and eight females) to evaluate whether the test subjects were able to visually perceive differences in the visual stimuli obtained for different airflow conditions. We discussed the design aspects of the test subject schedules (XML file format) used to present the trials to the subjects, and the various factors used to balance the test schedules. We constructed a set of hypotheses driven by the goals of this thesis. We then discussed the software for loading a test schedule, presenting the trials to the subject, and recording their responses.

We designed hypotheses H1 to H3 to test the validity of responses given by the subjects. The results of these hypotheses indicate that the responses are valid; that is, the subjects were not responding randomly. We designed hypotheses H4 and H5 to determine whether subjects could effectively discriminate between airflow conditions related to safe and unsafe flight. The results indicate that subjects perceived more difference between safe vs. unsafe conditions than between two safe or two unsafe conditions, in agreement with our expectations.

We designed hypotheses H6 to H8 to determine whether the subjects could perceive differences between airflow conditions that cause the glider to gain or lose altitude due to updraft or downdraft airflow. The results indicate that the change in airflow direction did not have much effect on the responses given by the subjects. We also discussed another, better design for comparing updraft and downdraft airflow conditions (Section 4.12.3.1). If we conduct a new test subject study with trials based on this new design, we might obtain some interesting results for updraft and downdraft airflow conditions.

We designed hypotheses H9 to H11 to determine whether subjects could perceive more difference in trials with relative rendering than in trials with absolute rendering. The results indicate that the subjects were able to visually perceive differences better with relative rendering, as the average response for trials with relative rendering was greater than the average response for trials with absolute rendering. Finally, we designed an exploratory hypothesis (H12) to see whether there was any relation between response time and trial type. The results suggest that the subjects took longer to perceive differences between images with relative rendering than between images with absolute rendering.


Chapter 5: Conclusion and Future Work

5.1 Conclusion

This thesis marks the start of a research programme to outfit the wing of a glider with sensors and feed that information back to the skin of the pilot. We presented an outline of the planned research programme in Table 1.3. The main goals of this thesis were to simulate a wing model and sensors in varied airflow conditions: safe flight conditions (non-stall), unsafe flight conditions (stall), updraft airflow conditions, and downdraft airflow conditions. We wanted to know whether airflow feedback information can provide interesting and useful visual perceptual discriminations to the pilot. For this purpose, we were interested in obtaining simulation data for different airflow conditions because we did not yet have access to an array of sensors on an aircraft wing. We designed a 3D wing model in SolidWorks and conducted fluid dynamics simulations in FloWorks, varying the angle of attack and airflow direction to obtain simulation data for various flight conditions. We then designed a technique for mapping the raw flow analysis data to virtual sensors that can be placed at any location on the simulated wing surface, and provided the facility to specify an array of virtual sensors through an XML file format. We built a visual rendering tool for rendering the readings obtained from the virtual sensors, which included our technique for mapping the sensor readings to the RGB color model. We used visual rendering as opposed to tactile rendering because we did not yet have access to tactile rendering hardware.

The final task in this thesis was to compare the visual renderings for different airflow conditions to evaluate whether adult test subjects could find any discriminating patterns among them. To implement this comparison, we designed a test subject study in which a set of visual trials was shown to 16 adult subjects. Each trial consisted of a pair of visual component images obtained for different airflow conditions. The goal of the test study was to determine whether the test subjects could visually perceive any difference between the renderings obtained for these airflow conditions. We designed a set of hypotheses for our test study that drove our expectations for the responses given by the test subjects (see Section 4.9).

5.1.1 Evaluation of hypotheses for the test subject study

We designed hypotheses H1 to H3 to test the validity of responses given by the subjects. Our analysis of the subjects' responses showed conclusively that they did not respond randomly.

We designed hypotheses H4 and H5 to determine whether the subjects were able to perceive differences between safe and unsafe flight conditions. In this thesis, the unsafe conditions arise once the wing crosses the critical angle of attack, causing the aircraft to stall and lose altitude. These conditions are dangerous because the pilot can be killed if the glider crashes due to the sudden drop in altitude. The safe flight conditions are those where the wing maintains its flight. Our analysis of hypotheses H4 and H5 conclusively shows that the test subjects were able to perceive visual differences between renderings for safe flight conditions and renderings for unsafe flight conditions. Based on the experiment performed in this thesis, we have reason to believe that if we were to use a visual rendering system in the cockpit of a glider with real sensors mounted on the wing, then the glider pilot using the visual rendering system could effectively discriminate between safe and unsafe flight conditions. However, there are some differences between the experiment performed in this thesis and the real-life situation for safe and unsafe airflow conditions. Unlike the virtual sensors used in this thesis, the real-life situation would involve real sensors mounted on the wing of an aircraft. Also, the data coming from the sensors would vary with time, unlike the static data we used in this thesis. So, when we eventually mount a visual rendering system in a cockpit, we will need to consider these differences.

When the glider sails through updraft airflow, its wings experience increased pressure in the upward direction, causing the glider to gain altitude. Similarly, when the glider sails through downdraft airflow, its wings experience increased pressure in the downward direction, causing the glider to lose altitude. Glider pilots rely on updraft airflow to stay aloft, whereas very strong downdraft air can be dangerous, as the glider can crash due to a sudden drop in altitude. So, we designed hypotheses H6 to H8 to determine whether the subjects could visually perceive any differences in the trials conducted for updraft and downdraft airflow conditions. The results of the analyses of these hypotheses indicated that the subjects could not reliably visually discriminate between the airflow conditions that cause the glider to gain or lose altitude; the changes in airflow direction did not have any reliable effect on the subjects' responses. Hence, if we mount real sensors on the aircraft wing and use a visual rendering system in the cockpit of a glider, the glider pilot using the rendering system might not get reliable discriminations between visual renderings for updraft and downdraft airflow conditions. We also discussed a plan for another, better design for a second test subject study to evaluate perceptual differences between updraft and downdraft conditions (see also Section 4.12.3.1). In this design, we would compare trials for only a change in airflow direction, keeping the angle of attack (and rendering type) constant. For example, we could have a trial comparing the visual rendering of updraft airflow at zero angle of attack with the visual rendering of downdraft airflow at zero angle of attack, keeping the rendering type (absolute or relative) the same for both airflow conditions. Evaluating the updraft and downdraft airflow conditions using this new design might provide some interesting perceptual discriminations.

We designed hypotheses H9 to H11 to determine whether changes in the rendering type for sensor readings have any effect on the subjects' responses. In relative rendering, each image has its own scale for high and low pressure values, and even the slightest variations in pressure are magnified. In absolute rendering, on the other hand, all the images share the same bounds on low and high pressure values. So, we expected the subjects to perceive more differences in trials with relative rendering than in trials with absolute rendering. The results confirmed this expectation, and we can conclusively say that the subjects could visually perceive greater differences between relative rendering trials than between absolute rendering trials.

We designed an exploratory hypothesis (H12) regarding response time to determine whether there is any relation between the subjects' response times and the rendering type of the trials. From the results, we can conclusively say that the subjects took more time to perceive differences in relative rendering trials than in absolute rendering trials. Based on the response values and response times for absolute and relative rendering trials, it seems that the subjects perceived more differences in relative rendering trials, but it took them longer to spot those differences than in absolute rendering trials. This could help us decide when absolute or relative rendering would be useful in a cockpit visual rendering system: if the pilot needs quick discrimination, absolute rendering may be best; however, if the pilot needs really marked discrimination, relative rendering may be best.

5.1.2 Visual rendering of airflow conditions

This thesis marks the start of a research programme to outfit the wing of a glider with sensors and feed that information back to the pilot. We have built a system to enable visual discrimination between different airflow conditions and used it as a simulation of tactile feedback hardware. We assumed that if visual discrimination is possible between different airflow conditions, then we can also obtain discrimination using tactile feedback hardware. We may use the visual rendering system for the following purposes:

- For debugging and visualizing the airflow data obtained from sensor data sources, including XPlane (a flight simulator), real sensors mounted on a real wing, or airflow simulation data from SolidWorks/FloWorks or other computational fluid dynamics software. The visual rendering system can help us check whether the airflow data obtained from the sensors looks reasonable, or make sure that the sensor feedback system is working properly. For example, if the airflow conditions over the aircraft vary, then we expect the visual renderings to also vary.

- We will likely mount a visual display in an aircraft once we have a physical array of airflow sensors. The visual rendering system should assist the pilot in effectively discriminating between safe and unsafe flight conditions.

5.2 Future work

There are many possibilities for building on top of the work done in this thesis. Some of them are listed below:

- Table 1.3 shows the roadmap of our planned research programme. In this thesis, we planned and successfully completed the first two steps of the roadmap: simulating a wing model and sensors in varied airflow conditions, and evaluating visual discrimination based on the simulation. We used the visual display system as a simulation of tactile feedback hardware. Since the visual rendering system enables discrimination between safe and unsafe flight conditions, we can work on the next stage of the research roadmap shown in Table 1.3. That is, we can build the tactile feedback hardware to hopefully enable discrimination between various airflow conditions.

- The analysis of the test subject study indicated that the subjects could not reliably perceive visual differences between updraft and downdraft airflow conditions that cause the glider to gain or lose altitude. We would like to conduct a second test study, using another, better design, to evaluate perceptual differences between updraft and downdraft airflow conditions (see also Section 4.12.3.1).

- For purposes of this thesis, we designed a uniform chord wing model in SolidWorks. However, practical wings designed to standard specifications are often of variable chord length. Anderson et al. (2007) designed a variable chord wing model in SolidWorks as part of their senior year project at the University of Minnesota - Twin Cities. We can contact this team and try to obtain the procedures for designing a variable chord wing in SolidWorks. The procedures we established in this thesis for conducting flow analysis over a uniform chord wing model (see Appendix 1) should also apply to a variable chord wing model.

- In this thesis, conducting a computational flow analysis took a considerable amount of computational time, and the memory requirements were high. It would be useful to investigate methods that reduce the computational time and memory requirements.

- For placing the sensors on the wing surface, an important question we need to address is: how can we place sensors so as to best sample the variations in the airflow conditions over the wing surface? For purposes of this thesis, we used a visual rendering tool to render the complete flow analysis data and identify the regions where we see variations (see also Section 3.6.3). This rendering tool helped us to identify areas on the wing where we needed to put more sensors. Thus, this was a heuristic technique, not an analytic decision. There could be better ways of deciding where to place the sensors to sample the variations. Additionally, we can consider placing sensors near the regions where the wing interfaces with the glider and with the airflow. For example, we can place some sensors on the wing near the fuselage of the glider, some near the tip of the wing, some near the leading edge, and some on the trailing edge.

- The shape of the virtual sensors we used for this thesis was rectangular. However, practical sensors may not necessarily be rectangular; for example, they may be oval or circular. A rectangular sensor was easier to integrate with FloWorks data because the reading measured by a virtual sensor was an aggregate of the fluid cells over the area covered by the rectangle. To work with sensors of other shapes, we will have to design a method for collecting the readings of the fluid cells covered by the area of such a sensor.

- In Chapter 4, we discussed a technique for mapping sensor readings to the RGB color model. We mapped the sensor readings to colors because we wanted to develop a visual rendering system for the sensor readings. At some point in our research we would like to map the sensor readings to tactile feedback hardware. The technique we used for mapping sensor readings to color may be modified to work with tactile devices. In our mapping technique, we map a sensor reading to an RGB color scale with range 0 to 255. In the case of a tactile feedback device we will need a similar kind of mapping, from a sensor reading to a scale that drives the control parameters of the feedback device (e.g., rate of actuation or frequency).

- Evaluating other computational fluid dynamics engines (e.g., Fluent) to determine if the flow analysis over wings can be performed in less computational time and with lower memory requirements. We can also evaluate other computational fluid dynamics software to see if we can obtain results more similar to the wind tunnel test results of McGhee and Beasley (1973).

- Evaluating computational fluid dynamics software that can run on a supercomputer, so that we can obtain flow analysis data in real time, and utilizing parallel computing resources to divide the flow analysis task across multiple hardware resources and thus reduce the turnaround time for the flow analysis of the entire wing.

- In this thesis, to obtain readings from the virtual sensors, we take the flow analysis for the entire wing as input and map the pressure readings to virtual sensors. This was intentional, to have the flexibility to put the sensors at any location on the wing, because we did not know the best regions on the wing surface at which to locate the virtual sensors. However, once we determine the best sensor locations, it would be useful to explore methods for obtaining flow analysis data only from specific points on the wing and within the specific areas of the sensors, rather than from the entire flow analysis for the entire wing.
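To make the rectangular virtual-sensor aggregation mentioned above concrete, here is a minimal sketch. The class and parameter names are ours, and a flat array of cell records stands in for FloWorks output; the idea is simply to average the pressures of the fluid cells whose centers fall inside the sensor's rectangle on the wing surface.

```java
// Illustrative sketch (not the thesis code) of a rectangular virtual
// sensor: its reading is the mean pressure of all fluid cells whose
// centers lie inside the sensor's rectangle.
final class VirtualSensor {
    // Each cell is {x, z, pressure}; (x0, z0) is the sensor corner,
    // with extents `length` along X and `width` along Z.
    public static double read(double[][] cells,
                              double x0, double z0,
                              double length, double width) {
        double sum = 0.0;
        int n = 0;
        for (double[] c : cells) {
            boolean insideX = c[0] >= x0 && c[0] <= x0 + length;
            boolean insideZ = c[1] >= z0 && c[1] <= z0 + width;
            if (insideX && insideZ) { sum += c[2]; n++; }
        }
        return n == 0 ? Double.NaN : sum / n; // NaN if no cells covered
    }
}
```

Supporting oval or circular sensors would amount to replacing the two inside-rectangle tests with the corresponding point-in-shape test.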

Appendix 1: Procedures For Wing Design And Flow Analysis


In this Appendix we discuss the procedures for designing a 3D wing model in SolidWorks and the procedures for conducting flow analysis using FloWorks.

A1.1 Plotting an airfoil curve through XY points

Once the airfoil curve points are obtained, we can draw a spline curve in SolidWorks passing through them. For this purpose we have used the feature Insert curve through XYZ points in SolidWorks. This feature expects a text file containing a list of three-dimensional points for the curve. If the airfoil curve co-ordinates are two-dimensional, then the third co-ordinate (the Z co-ordinate in most cases) is manually specified as zero in the text file. Once the text file of co-ordinates is ready, the Insert curve through XYZ points feature will draw a smooth curve passing through those points. A sample of this can be seen in Figure A1.1.


Figure A1.1: 2D Airfoil Curve
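The points file the feature expects can be produced mechanically. The helper below is a hypothetical sketch (the exact delimiter and formatting rules SolidWorks accepts should be checked against its documentation); it shows the zero-padding step, appending a zero Z co-ordinate to each 2D airfoil point.

```java
// Build whitespace-separated XYZ text from 2D airfoil points by
// appending a zero Z co-ordinate to each point (illustrative helper).
import java.util.Locale;

final class AirfoilPoints {
    public static String toXyz(double[][] xy) {
        StringBuilder sb = new StringBuilder();
        for (double[] p : xy) {
            // Locale.US forces '.' as the decimal separator.
            sb.append(String.format(Locale.US,
                    "%.6f %.6f 0.000000%n", p[0], p[1]));
        }
        return sb.toString();
    }
}
```

Writing the returned string to a .txt file gives the input expected by the curve-through-points feature.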

A1.2 Creating a 3D airfoil model from a 2D representation

Once we plot the two-dimensional airfoil curve, we can specify the third dimension using the Extrude Boss/Base feature in SolidWorks. This feature simply elongates the two-dimensional drawing into a third dimension (in our case, the Z-direction). An example extruded airfoil model is depicted in Figure A1.2.


Figure A1.2: 3D airfoil model designed in SolidWorks

A1.3 Creating a COSMOSFloWorks project

The purpose of this section is to discuss a step-by-step procedure for creating a flow analysis project in FloWorks for the solid object model designed in SolidWorks.


1. Click FloWorks, Project, Wizard.

2. Once inside the Wizard, select Create new in order to create a new configuration, and name it "airflow analysis". Click Next.

3. For the Length entry, accept the default settings. Click Next.

4. Set the analysis type to External. Click Next.

5. Expand the Gases folder and double-click the Air row. Keep the default Flow Characteristics. Click Next.

6. Click Next, accepting the adiabatic default outer wall condition and the default zero roughness value for all model walls.

7. Set the Velocity in X direction, e.g., 32 m/s. Click Next.

8. Accept the default for the Result resolution and keep the automatic evaluation of the Minimum gap size and Minimum wall thickness. Click Finish. COSMOSFloWorks now creates a new configuration with the COSMOSFloWorks data attached.

After creating a project, COSMOSFloWorks shows a box around the solid model. This box is called the computational domain.


A1.4 Specifying 2D plane flow

In this section, we discuss the procedures for conducting a flow analysis over a cross-sectional area of a solid by specifying a 2D plane for the analysis process.

1. In the COSMOSFloWorks analysis tree, expand the Input Data icon.

2. Right-click the Computational Domain icon and select Edit Definition. The Computational Domain dialog box appears.

3. Click the Boundary Condition tab.

4. In the 2D plane flow list, select XY-Plane Flow. The Symmetry condition is automatically specified at the Z min and Z max boundaries of the Computational Domain. Click the Size tab. You can see that the Z min and Z max boundaries are set automatically based on the model dimensions.

For most cases, to study the flow field around an external body and to investigate the effects of design changes, it is recommended to use the default Computational Domain size as determined by COSMOSFloWorks. Accuracy can be increased, at the expense of CPU time and memory, by using a larger Computational Domain.

5. Specify the Z-coordinates of the Computational Domain boundaries as shown on the picture.

6. Click OK.

A1.5 Setting goals in FloWorks

Users can monitor the convergence towards goals during flow analysis and can manually halt the analysis process if there is no further need for calculations. The user can still view the results of the flow analysis for the partially completed solution process. In COSMOSFloWorks there are four types of goals: Global Goal, Surface Goal, Volume Goal and Equation Goal. As per the FloWorks help document, these goal types are defined as:

Global Goal is a physical parameter calculated within the entire computation domain.

The italicized text in quotes used throughout this Appendix is directly from the FloWorks tutorials.


Surface Goal is a physical parameter calculated on a user-specified face of the model.

Volume Goal is a physical parameter calculated within a user-specified space inside the Computational Domain, either in the fluid or solid (if Heat Conduction in Solids is taken into account).

Equation Goal is a goal defined by an equation (basic mathematical functions) with the specified goals or parameters of the specified projects input data features (global initial or ambient conditions, boundary conditions, fans, heat sources, local initial conditions, etc.) as variables

A1.6 Specifying a global goal

In this section, we discuss the step-by-step procedures for setting a global goal for the flow analysis process. For example, we show how to set the X-component of Force (aerodynamic drag force) as a goal in FloWorks. We can use similar procedures to specify the Y-component of Force (lift force) as a goal.

1. Click FloWorks, Insert, Global Goals.

2. In the Parameter table, select the first check box in the X Component of Force row.

3. Accept the selected Use for Conv. check box to use this goal for convergence control.

4. Click OK. The new GG X - Component of Force 1 item appears in the COSMOSFloWorks analysis tree.

A1.7 Define the engineering goal

In this section, we discuss the procedures for specifying an engineering goal that can be used as a criterion for stopping the flow analysis process.

1. Right-click the COSMOSFloWorks Analysis Tree Goals icon and select Insert Surface Goals.

2. Click on the airfoil. The upper surface is selected. To select the lower surface, right-click on the airfoil and click the Select Other option from the context menu. From the pop-up box, select the other face to select the lower face of the airfoil.

3. In the Parameter table, select the Av check box in the Total Pressure row. The previously selected Use for Conv. (Use for Convergence Control) check box means that the created goal will be used for convergence control.

If the Use for Conv. check box is not selected for a goal, it will not influence the task stopping criteria. Such goals can be used as monitoring parameters to give you additional information about processes occurring in your model without affecting the other results and the total calculation time.

4. Click OK. The new SG Av Total Pressure 1 item appears in the COSMOSFloWorks Analysis Tree.

Engineering goals are the parameters in which the user is interested. Setting goals is in essence a way of conveying to COSMOSFloWorks what you are trying to get out of the analysis, as well as a means of reducing the time COSMOSFloWorks takes to reach a solution. By selecting only the variables for which the user desires accurate values, COSMOSFloWorks knows which variables are important to converge upon (the variables selected as goals) and which can be less accurate (the variables not selected as goals) in the interest of time. Goals can be set throughout the entire domain (Global Goals), on a selected area (Surface Goals) or within a selected volume (Volume Goals). Furthermore, COSMOSFloWorks can consider the average value, the minimum value or the maximum value for goal settings.

5. Click File, Save.


A1.8 Solution

In this section, we discuss the procedures for starting the flow analysis process once all the goals are set.

1. Click FloWorks, Solve, Run. The already selected Load results check box means that the results will be automatically loaded after finishing the calculation.

2. Click Run.


A1.9 Monitor the solver

In FloWorks, the process which does the fluid dynamics computations is called the solver. In this section, we discuss the relevant user interfaces of this solver process.

This is the solution monitor dialog box. On the left is a log of each step taken in the solution process. On the right is an information dialog box with mesh information and any warnings concerning the analysis.

1. Click Insert Preview on the Solver toolbar.

2. This is the Preview Settings dialog box. Selecting any SolidWorks plane from the Plane name list and pressing OK will create a preview plot of the solution on that plane. For this model, Front Plane is a good choice to use as the preview plane.

The preview allows one to look at the results while the calculation is still running. This helps to determine if all the boundary conditions are correctly defined and gives the user an idea of how the solution will look even at this early stage. At the start of the run the results might look odd or change abruptly. However, as the run progresses these changes will lessen and the results will settle in on a converged solution. The result can be displayed in contour, isoline or vector representation.

3. When the solver is finished, close the monitor by clicking File, Close.


A1.10 Running the simulation

Once you have created a FloWorks project from an airfoil model designed in SolidWorks and specified the flow properties (e.g., external flow, fluid type as air, velocity in X-direction), you can start the simulation of airflow over the airfoil. To start a simulation, click FloWorks, Solve, Run. This opens a solver window which shows the details of the fluid flow analysis. The user can monitor the progress of the flow analysis using this solver window. The solution solver dialog box can be seen in Figure A1.3.

Figure A1.3: Solution solver

The solver displays some pre-calculation processing results such as creating an initial mesh of fluid cells (explained in Chapter 2), loading the flow properties, and setting up the engineering goals. Once the calculation starts, the user can monitor the results in the preview pane by clicking the Insert Preview button on the solver window toolbar. The user can specify which parameter they want to monitor.

A1.11 Viewing results

After the calculations are completed, the user can view the distribution of the parameters in different ways. The flow results are stored in a .fld file, so the user can view the results at any time by loading the flow analysis project in SolidWorks. The various ways in which the user can view the results, as described by the FloWorks help file, are:

- Cut plot: displays a section view of a parameter distribution. The parameter can be represented as a contour plot and as isolines.

- 3D profile plot: displays how a parameter is distributed at the section plane, but unlike the cut plot, which gives only color visualization, the 3D profile plot additionally offsets the plot points from the section plane by a distance proportional to the parameter value.

- Surface plot: displays the parameter distribution on the selected model faces or surfaces.

- Flow trajectories: allows the user to display the fluid parameters as trajectories of flow over the object.

- Particle study: allows you to display trajectories of physical particles and obtain various information about the particles' behavior, including their effect on the model walls such as erosion and accumulation. Physical particles are spherical particles of specified material (liquid or solid) and constant mass. Displaying trajectories of physical particles allows you to learn how extrinsic particles with mass (dust, droplets) are distributed in the flow.

- XY plot: allows you to see how a parameter changes along a specified direction. To define the direction, you can use curves, sketches (2D and 3D) and model edges. The data are exported into an Excel workbook, where parameter charts and values are displayed.

- Surface parameters: allows you to display parameter values (minimum, maximum, average and integral) calculated over the specified surface. The data can also be exported into an Excel workbook.

- Volume parameters: allows you to display parameter values (minimum, maximum, average, bulk average and integral) calculated within the specified volumes (part or subassembly components in assemblies, as well as bodies in multibody parts) within the Computational Domain.

- Point parameters: displays parameter values at specified points inside the Computational Domain. The point of interest can be specified by its coordinates or can be selected on a plane, sketch, edge, curve or surface. You can also define a grid so the points will be taken from the intersections of the grid lines. The point parameters can be exported into an Excel workbook.

- Goal plot: allows you to study goal changes in the course of the calculation. COSMOSFloWorks uses Microsoft Excel to display goal plot data. Each goal plot is displayed in a separate sheet. The Summary sheet displays goal values at the moment of finishing the calculation (or at the loaded time moment for time-dependent analyses).

- Mesh visualization: allows you to display the computational mesh cells at the calculation moment selected for getting the results.

For our research purposes we will use mesh visualization. We are interested in the pressure flow parameter measured at various points on the airfoil.


Appendix 2: Java Specifications Of Software Interface Layer

/*
 * Copyright 2008, 2009 Regents, University of Minnesota Duluth
 *
 * The Fly By Feel (FbF) software is free software: you can
 * redistribute it and/or modify it under the terms of the GNU General
 * Public License as published by the Free Software Foundation, either
 * version 3 of the License, or (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program. If not, see <http://www.gnu.org/licenses/>.
 *
 * The FbF Team
 * (Contact: chris@cprince.com)
 */
package sensorSource;

/**
 * @author Prasad Kulkarni
 *
 * This interface provides an abstraction to the sensor data source.
 * We can have multiple data sources for collecting sensor data, some
 * of which are noted below:
 *   - Simulated sensors from SolidWorks/FloWorks airflow simulation
 *     over an airfoil.
 *   - Flight simulators.
 *   - Mechanical sensors in a wind tunnel.
 *   - Electronic sensors on a real wing model.
 *
 * The interface SensorDataSource will hide the details of the
 * originating data source. This interface will provide access from a
 * high-level UI to the low-level data. The UI doesn't need to be
 * aware of the specifics of the originating data source. The UI will
 * make use of this interface to get the sensor data and render it on
 * the UI. As mentioned above, the sensor data can be from any of the
 * sources, but the UI will always render it in a consistent form.
 * Whatever data source we may use to gather the sensor data, the
 * developers need to implement this abstract interface for that
 * particular source.
 */
public interface SensorDataSource {

    public enum DataSourceType {
        SWFW,         // SolidWorks/FloWorks
        XPlane,       // The XPlane flight simulator
        XPlaneStatic  // XPlane data that has been logged to a file
    }

    /**
     * Connects to the data source. If the source is static, like the
     * airflow simulation data, this function reads the data file. The
     * data file will be specific to the driver of the data source
     * that implements this interface. If the source is dynamic, like
     * real-time data collected from sensors mounted on a wing, this
     * function would establish a network connection, perhaps through
     * TCP/IP. Inputs such as the data file or TCP/IP address will be
     * provided by the Graphical User Interface that uses this
     * interface.
     *
     * @return boolean indicates whether or not connection was
     * successful.
     */
    boolean connect();

    /**
     * Disconnect from the data source. This only really makes sense
     * for dynamic data sources presently. E.g., for the XPlane data
     * source, a disconnect is useful because the connection is
     * established using a socket.
     */
    void disconnect();

    /**
     * Reset the data source.
     */
    void reset();

    /**
     * Retrieves the name of the data source of the sensors. This name
     * is just for display or information purposes, and does not serve
     * a technical interfacing function.
     */
    String getDataSourceName();

    /**
     * Retrieve the data source type.
     * @return The type enum for the data source.
     */
    DataSourceType getDataSourceType();

    /**
     * The data from the source is read as a collection of data sets.
     * Each of the data sets describes the airflow over the wing. One
     * data set can have pressure information while another data set
     * from the same data source may have velocity information. It is
     * also possible to have multiple AOA (angle of attack) values in
     * the same data source, each data set having information about a
     * different AOA used for getting readings.
     *
     * A SensorConfig object is passed to this method because DataSets
     * may depend on the particular view or layout provided by the
     * sensor configuration.
     */
    DataSet getNextDataSet(SensorConfig sc);

    /**
     * The function getSensorValue has two versions. One version is
     * used for static sources while the other version is used for
     * dynamic sources. This is VERSION 1, for a static source such as
     * the FloWorks simulation. For the dynamic source version of this
     * API see getSensorValue(int iSensorID).
     *
     * Gets the sensor value at the specified position. The caller of
     * this function doesn't need to know the originating source,
     * i.e., whether the sensors are real sensors mounted on a wing or
     * simulated sensors from the airflow simulations.
     *
     * @param x_dist Distance on the X-axis of the wing with respect
     * to the upper rightmost corner.
     * @param z_dist Distance on the Z-axis of the wing with respect
     * to the upper rightmost corner.
     * @param surface Tells whether the sensor is mounted on the top
     * or bottom surface of the wing. 1 - top surface, 0 - bottom
     * surface.
     * @param length Length of the sensor (assuming it is rectangular).
     * @param width Width of the sensor (assuming it is rectangular).
     * @return SensorData Object that wraps the airflow properties
     * measured by the sensor.
     */
    SensorData getSensorValue(Double x_dist, Double z_dist,
                              int surface, Double length, Double width);

    /**
     * VERSION 2 of getSensorValue, for dynamic sources. For the
     * static source version of this API see
     * getSensorValue(Double x_dist, Double z_dist, int surface,
     * Double length, Double width).
     *
     * There can be multiple sensors mounted on the wing, where each
     * sensor has a unique identifier. Given the sensor ID, this
     * function gets the reading measured by that sensor. The caller
     * of this function doesn't need to know the originating source,
     * i.e., whether the sensors are real sensors mounted on a wing or
     * simulated sensors from the airflow simulations.
     *
     * @param iSensorID Unique sensor identifier.
     * @return SensorData Object that wraps the airflow properties
     * measured by the sensor.
     */
    SensorData getSensorValue(int iSensorID);

    /**
     * Get the airfoil information on which sensors are mounted.
     */
    AirfoilInfo getAirfoilInfo();

    /**
     * Get the airflow properties set when the sensor data is
     * collected.
     */
    AirflowProperties getAirflowProperties();
}
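A heavily simplified, hypothetical driver stub (our illustration, not part of the thesis software) suggests how a static data source might satisfy the connect/lookup portion of this contract. It omits the SensorData, DataSet, and configuration types and returns raw pressure values directly.

```java
// Hypothetical stub of a static data-source driver (illustrative only):
// connect() loads readings, and the ID-based lookup returns one.
import java.util.HashMap;
import java.util.Map;

class StaticSensorSourceStub {
    // Sensor ID -> pressure reading (Pa), standing in for data parsed
    // from an exported flow-analysis file.
    private final Map<Integer, Double> readings = new HashMap<>();
    private boolean connected = false;

    public boolean connect() {
        // A real driver would read and parse the simulation data file
        // here; we hard-code two illustrative readings instead.
        readings.put(1, 101325.0);
        readings.put(2, 100980.5);
        connected = true;
        return true;
    }

    public void disconnect() {
        readings.clear();
        connected = false;
    }

    public double getSensorValue(int sensorId) {
        // NaN signals "no such sensor", mirroring how a SensorData
        // wrapper might flag a missing reading.
        return readings.getOrDefault(sensorId, Double.NaN);
    }

    public boolean isConnected() { return connected; }
}
```

A UI built against the SensorDataSource abstraction could swap this stub for an XPlane-backed or hardware-backed driver without change.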


Appendix 3: Balancing Tables For Test Schedules


The balancing table for test schedules that satisfy the constraints of Section 4.8 for Expected-Ys:
TOTAL TOTAL R A TOTAL 5R 4R 5R 4R 5R 4R 5R 4R 4A 5A 4A 5A 4A 5A 4A 5A 9 9 9 9 9 9 9 9

SUBJECT 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16

NN 1R 2A 1R 2A 1R 2A 2R 1R 6R 6A 12

SS 1R 1A 1R 1R 1A 1R 1R 1A 1R 1A 2A

NS 3A 3R 3A 3R 3A 3R 3A 3R 3R 3R 3R 3R

SN

3A 3A 3A 3A

6R 6A 12R 12A 12R 12A 12 24 24

36

36

72

where:
NN = non-stall vs. non-stall
SS = stall vs. stall
NS = non-stall vs. stall
SN = stall vs. non-stall
A = absolute rendering image
R = relative rendering image

The balancing table for test schedules that satisfy the constraints of Section 4.8 for Expected-Ns:
Total R 4R 5R 4R 5R 4R 5R 4R 5R Total A 5A 4A 5A 4A 5A 4A 5A 4A

Subject 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16

NR 2 3 2 2 2 3 2 2

SR 2 2 2 3 2 2 2 3

NA 2 2 3 2 2 2 3 2

SA 3 2 2 2 3 2 2 2

TOTAL 9 9 9 9 9 9 9 9

18

18

18

18

36R

36A

72

where:
NR = non-stall angles of attack and relative rendering images
SR = stall angles of attack and relative rendering images
NA = non-stall angles of attack and absolute rendering images
SA = stall angles of attack and absolute rendering images


Appendix 4: Test Subjects' Responses


Responses for test subject 1:

Trial  Rendering  Air flow  AOA1  AOA2  Rating  Response time (ms)
1      A          B         30    30    3       7328
2      A          A         30    30    2       8109
3      R          A         0     12    7       6062
4      R          B         30    12    8       6031
5      R          B         50    50    0       9172
6      R          A         50    0     8       5844
7      R          B         30    50    6       5312
8      A          C         12    30    5       8765
9      A          B         0     50    7       6718
10     A          A         0     0     1       5907
11     R          A         30    30    1       5344
12     R          A         0     0     2       7359
13     A          B         12    12    2       7344
14     A          C         50    50    1       10922
15     A          A         0     30    7       7313
16     R          C         50    12    8       3954
17     R          B         12    12    3       7078
18     A          C         50    30    4       4750


Responses for test subject 2:

Trial  Rendering  Air flow  AOA1  AOA2  Rating  Response time (ms)
1      R          C         30    30    0       23719
2      A          A         30    0     6       20875
3      R          C         0     0     0       21609
4      A          C         30    12    4       25328
5      R          C         50    30    5       13985
6      A          C         0     0     0       11297
7      A          B         50    0     9       7078
8      R          A         12    12    0       10703
9      R          C         12    50    8       14468
10     R          B         12    30    5       9828
11     A          C         30    30    0       9563
12     R          B         0     0     0       7578
13     A          A         0     12    3       15906
14     A          A         50    50    0       9797
15     A          B         12    0     4       22140
16     R          A         0     50    8       12734
17     R          A         50    50    1       22110
18     A          A         12    12    0       12906


Responses for test subject 3:

Trial  Rendering  Air flow  AOA1  AOA2  Rating  Response time (ms)
1      R          C         30    12    5       24515
2      R          C         50    50    0       17531
3      R          A         30    0     7       37672
4      A          A         0     50    3       26515
5      R          A         0     0     0       14422
6      A          C         12    12    0       7735
7      R          B         30    30    0       13109
8      R          B         12    0     3       28531
9      R          C         30    50    6       18344
10     R          C         12    12    0       11171
11     A          A         50    30    1       5391
12     A          A         30    30    0       4046
13     A          B         12    30    3       11844
14     A          C         12    50    4       9234
15     R          B         50    0     7       14735
16     A          B         0     0     0       5203
17     A          A         0     0     0       3765
18     A          B         50    50    0       5203


Responses for test subject 4:

Trial  Rendering  Air flow  AOA1  AOA2  Rating  Response time (ms)
1      A          C         0     12    4       25188
2      R          B         30    30    0       18266
3      R          A         12    12    0       13625
4      R          B         50    30    4       14984
5      R          C         50    50    0       12094
6      R          A         0     30    5       16359
7      R          C         12    30    3       13562
8      A          C         50    12    2       14750
9      A          B         30    30    0       11953
10     A          A         12    0     2       14140
11     A          C         0     0     0       11734
12     R          C         0     0     0       8156
13     A          A         12    12    0       15094
14     R          B         0     50    8       10281
15     R          A         30    30    0       12797
16     A          B         30    12    3       9953
17     A          A         50    50    0       10656
18     A          A         50    0     4       5453


Responses for test subject 5:

Trial  Rendering  Air flow  AOA1  AOA2  Rating  Response time (ms)
1      R          A         50    12    8       15969
2      A          B         12    50    6       11547
3      R          A         50    50    0       13219
4      R          C         0     12    4       11656
5      R          B         0     0     0       9641
6      A          C         30    30    0       6843
7      R          C         30    30    0       13578
8      R          C         50    0     5       6704
9      A          B         0     0     0       5329
10     R          A         30    50    2       8579
11     A          A         12    30    2       6281
12     A          B         50    50    0       6657
13     A          A         50    50    0       5078
14     R          B         30    0     5       9453
15     A          C         0     30    6       10219
16     R          C         12    12    0       8641
17     A          C         12    12    0       6390
18     A          B         50    30    3       4000


Responses for test subject 6:

Trial  Rendering  Air flow  AOA1  AOA2  Rating  Response time (ms)
1      A          B         12    12    0       4406
2      R          B         0     30    5       19750
3      A          C         30    0     5       9969
4      A          C         50    50    0       2922
5      R          A         50    30    5       8828
6      R          A         12    12    0       4266
7      A          C         12    0     3       5657
8      R          C         0     50    7       7265
9      R          B         50    50    0       6078
10     A          B         50    12    5       6937
11     R          C         0     0     0       3891
12     R          B         12    12    0       5360
13     R          A         30    30    0       4860
14     R          A         12    50    7       6813
15     A          A         30    30    0       3907
16     A          A         30    12    2       6250
17     A          B         0     12    1       3891
18     A          A         0     0     0       2235


Responses for test subject 7:

Trial  Rendering  Air flow  AOA1  AOA2  Rating  Response time (ms)
1      A          C         50    50    0       23109
2      R          B         50    12    7       24922
3      R          C         30    0     7       32328
4      R          B         50    50    0       24796
5      A          C         0     50    8       40641
6      R          A         0     0     0       20407
7      A          A         12    50    6       34297
8      A          C         12    12    0       30828
9      R          C         12    0     7       33235
10     A          B         0     30    7       21875
11     A          B         0     0     0       12625
12     R          C         30    30    1       24922
13     A          A         12    12    0       16141
14     R          A         30    12    7       26828
15     A          B         30    30    0       17109
16     A          A         30    50    5       27422
17     R          B         0     12    7       30110
18     R          B         12    12    0       18031


Responses for test subject 8:

Trial  Rendering  Air flow  AOA1  AOA2  Rating  Response time (ms)
1      A          A         50    12    6       14984
2      A          C         30    50    5       8063
3      A          C         0     0     1       13375
4      R          C         50    50    0       24609
5      R          A         12    0     7       12547
6      R          C         12    12    0       21750
7      A          B         12    12    1       11547
8      R          C         0     30    6       11203
9      A          C         30    30    2       23422
10     A          B         30    50    6       9531
11     A          B         50    50    0       12672
12     A          C         50    0     7       6437
13     R          B         12    50    9       10625
14     R          B         30    30    1       13157
15     R          A         50    50    0       14078
16     R          B         0     0     1       12906
17     A          B         30    0     2       8812
18     R          A         12    30    6       5484


Responses for test subject 9:

Trial  Rendering  Air flow  AOA1  AOA2  Rating  Response time (ms)
1      A          A         0     50    9       27813
2      A          C         12    30    7       38468
3      A          C         50    50    0       35687
4      R          C         50    30    7       24125
5      A          A         12    12    0       51062
6      A          A         0     0     0       28984
7      R          B         0     12    8       30343
8      R          B         0     0     0       34672
9      R          A         30    0     1       27734
10     R          B         30    30    0       54062
11     A          B         12    50    9       18687
12     R          A         50    50    0       28485
13     A          B         30    30    1       9640
14     R          C         50    0     8       26906
15     A          A         50    50    0       19828
16     A          A         30    50    5       14360
17     R          C         12    12    0       15625
18     R          B         30    12    6       7141


Responses for test subject 10:

Trial  Rendering  Air flow  AOA1  AOA2  Rating  Response time (ms)
1      R          A         30    30    0       34094
2      A          B         0     0     1       30312
3      R          B         50    30    8       29469
4      R          A         0     0     0       55532
5      A          A         0     12    3       27328
6      R          C         12    30    8       40797
7      R          A         0     50    9       19438
8      A          C         12    0     4       22421
9      R          B         12    12    0       46500
10     A          B         50    50    0       47562
11     R          C         0     0     0       53375
12     A          B         50    0     9       22687
13     A          A         30    30    0       39922
14     A          C         50    12    8       35719
15     R          B         0     30    9       31219
16     A          A         12    12    0       41922
17     R          B         50    50    0       29781
18     A          A         30    12    8       30375


Responses for test subject 11:

Trial Number 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18

Rendering (A or R) A R R A R A R R R R A A R A A A A R

Air flow (A or B or C) B A C A A C C C A B C B C A C B A B

AOA1 (0, 12, 30 or 50) 50 50 30 0 12 30 0 0 50 50 12 12 30 0 0 12 50 50

AOA2 (0, 12, 30 or 50) 30 30 12 0 12 30 0 12 0 12 12 12 30 30 50 30 50 50

Rating (0-9) 6 8 7 0 1 1 0 8 9 9 0 0 0 7 8 6 1 0

Response Time (in ms) 25734 25359 28125 27812 44454 38891 42485 26250 22938 26438 27672 12687 25187 12016 6985 10578 24625 27797


Responses for test subject 12:

Trial Number 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18

Rendering (A or R) R A A A A A R R R R R A A A A R R R

Air flow (A or B or C) C B C C C C A B B C A B B A B B C A

AOA1 (0, 12, 30 or 50) 30 12 30 50 0 0 30 0 50 12 12 30 30 50 12 0 12 30

AOA2 (0, 12, 30 or 50) 30 0 12 50 0 12 50 0 50 12 30 0 30 0 12 50 50 30

Rating (0-9) 0 3 5 0 0 6 9 0 0 0 8 7 0 7 0 8 9 0

Response Time (in ms) 33500 11437 13859 13562 36266 15704 7235 36563 14297 23922 4735 4875 34735 11531 11968 7781 5500 2110


Responses for test subject 13:

Trial Number 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18

Rendering (A or R) A A R A A A R R R R A R R R R A A A

Air flow (A or B or C) B B B C C A A B A A B A C C B C A C

AOA1 (0, 12, 30 or 50) 0 0 12 0 12 12 12 30 30 30 50 0 50 50 30 50 30 30

AOA2 (0, 12, 30 or 50) 50 0 12 30 12 50 0 0 30 12 50 0 50 12 50 30 30 30

Rating (0-9) 8 0 0 8 0 9 7 5 0 4 0 0 0 8 5 3 0 0

Response Time (in ms) 47344 7875 10609 12750 7407 7985 37250 16422 3250 23454 3766 3829 4250 23703 11453 10297 2953 1875


Responses for test subject 14:

Trial Number 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18

Rendering (A or R) R A A A R R R R A A A R A A R R A R

Air flow (A or B or C) C B B B C A A C A A B C C C B B A B

AOA1 (0, 12, 30 or 50) 0 50 50 0 0 0 12 50 30 12 12 30 0 50 30 12 30 12

AOA2 (0, 12, 30 or 50) 0 12 50 12 30 0 50 50 30 0 12 50 0 0 30 12 0 30

Rating (0-9) 0 9 0 5 9 0 8 0 0 5 0 7 0 7 0 0 7 8

Response Time (in ms) 7234 4578 6391 7156 10718 12407 10438 8859 8187 12625 8484 12718 5109 11047 6656 10391 11312 13922


Responses for test subject 15:

Trial Number 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18

Rendering (A or R) R R A R A A A A R R A R A A R A R R

Air flow (A or B or C) A B B A A A B C C B C C C A C B A A

AOA1 (0, 12, 30 or 50) 50 50 0 50 12 50 0 12 30 0 12 30 30 0 12 30 0 12

AOA2 (0, 12, 30 or 50) 50 0 0 12 30 50 30 50 30 0 12 0 30 0 0 50 12 12

Rating (0-9) 0 9 0 9 8 1 5 4 0 1 0 9 0 0 6 4 9 0

Response Time (in ms) 16953 20782 28453 11344 18203 21032 8391 10828 16953 26656 24312 6094 22938 40453 11891 7390 8906 38093


Responses for test subject 16:

Trial Number 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18

Rendering (A or R) A R R A A A A R R A A R A R R A R R

Air flow (A or B or C) B C B B C C C C C A A A C B A A B A

AOA1 (0, 12, 30 or 50) 30 12 30 30 30 30 50 50 0 50 50 12 0 12 0 12 12 50

AOA2 (0, 12, 30 or 50) 30 12 30 12 0 50 50 50 50 30 12 12 0 0 30 12 50 50

Rating (0-9) 0 0 0 2 4 3 0 0 9 5 4 0 0 8 8 0 9 0

Response Time (in ms) 15734 15547 15297 15265 11359 11094 6594 8734 18000 10328 15859 11516 4016 26984 25765 4328 25750 5719
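The per-subject tables above all share the same row-oriented layout: one line per variable, one column per trial. As a hypothetical illustration of how data in this format could be loaded for analysis, the sketch below parses two trials into per-trial records; the field names and the parsing helper are assumptions, not part of the thesis software:

```python
# Sketch: parse one subject's row-oriented response table into per-trial records.
# The field names and the sample input lines are illustrative assumptions.

def parse_subject(lines):
    """Each line is '<label> <v1> <v2> ...'; columns correspond to trials."""
    fields = {
        "Trial Number": ("trial", int),
        "Rendering (A or R)": ("rendering", str),
        "Air flow (A or B or C)": ("airflow", str),
        "AOA1 (0, 12, 30 or 50)": ("aoa1", int),
        "AOA2 (0, 12, 30 or 50)": ("aoa2", int),
        "Rating (0-9)": ("rating", int),
        "Response Time (in ms)": ("rt_ms", int),
    }
    columns = {}
    for line in lines:
        for label, (name, cast) in fields.items():
            if line.startswith(label):
                # Strip the label, split the remainder into per-trial values.
                columns[name] = [cast(v) for v in line[len(label):].split()]
    n = len(columns["trial"])
    # Transpose the column lists into one dict per trial.
    return [{name: columns[name][i] for name in columns} for i in range(n)]

# First two trials of test subject 7, in the tables' own layout.
sample = [
    "Trial Number 1 2",
    "Rendering (A or R) A R",
    "Air flow (A or B or C) C B",
    "AOA1 (0, 12, 30 or 50) 50 50",
    "AOA2 (0, 12, 30 or 50) 50 12",
    "Rating (0-9) 0 7",
    "Response Time (in ms) 23109 24922",
]
trials = parse_subject(sample)
```

With records in this shape, per-condition summaries (for example, mean rating when AOA1 equals AOA2 versus when they differ) reduce to simple filters over the list.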


Appendix 5: Visual Rendering Images For All Airflow Conditions

In this Appendix we present the visual rendering images generated for all airflow conditions, using both relative and absolute rendering. This is the complete set of component images used in the test subject study.
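The images below come in paired relative and absolute renderings of the same pressure data. As a rough sketch of that distinction (not the thesis's actual rendering code), relative rendering can be read as normalizing each condition over its own pressure range before color mapping, while absolute rendering normalizes against one fixed range shared by all conditions; the function names, the blue-to-red ramp, and the fixed pressure range below are all assumptions:

```python
# Sketch of the relative vs. absolute color-mapping distinction.
# Function names, color ramp, and the fixed pressure range are assumptions.

def to_rgb(norm):
    """Map a normalized value in [0, 1] to a simple blue-to-red RGB triple."""
    norm = min(max(norm, 0.0), 1.0)
    return (int(255 * norm), 0, int(255 * (1.0 - norm)))

def render_relative(pressures):
    """Relative rendering: normalize over this condition's own min/max."""
    lo, hi = min(pressures), max(pressures)
    span = (hi - lo) or 1.0  # guard against a uniform pressure field
    return [to_rgb((p - lo) / span) for p in pressures]

def render_absolute(pressures, lo=95000.0, hi=105000.0):
    """Absolute rendering: normalize against one fixed range (Pa, assumed)."""
    return [to_rgb((p - lo) / (hi - lo)) for p in pressures]

readings = [100800.0, 101300.0, 99900.0]  # hypothetical sensor pressures (Pa)
rel = render_relative(readings)   # uses this frame's own extremes
abs_ = render_absolute(readings)  # comparable across conditions
```

Under this reading, a relative image always spans the full color range, which can exaggerate small pressure differences, while absolute images remain directly comparable across airflow conditions.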

A5.1 AOA0, H-Rel: Component stimulus for angle of attack 0 with airflow in the horizontal direction only (relative rendering)

A5.2 AOA0, H-Abs: Component stimulus for angle of attack 0 with airflow in the horizontal direction only (absolute rendering)

A5.3 AOA12, H-Rel: Component stimulus for angle of attack 12 with airflow in the horizontal direction only (relative rendering)

A5.4 AOA12, H-Abs: Component stimulus for angle of attack 12 with airflow in the horizontal direction only (absolute rendering)

A5.5 AOA30, H-Rel: Component stimulus for angle of attack 30 with airflow in the horizontal direction only (relative rendering)

A5.6 AOA30, H-Abs: Component stimulus for angle of attack 30 with airflow in the horizontal direction only (absolute rendering)

A5.7 AOA50, H-Rel: Component stimulus for angle of attack 50 with airflow in the horizontal direction only (relative rendering)

A5.8 AOA50, H-Abs: Component stimulus for angle of attack 50 with airflow in the horizontal direction only (absolute rendering)

A5.9 AOA0, Up-Rel: Component stimulus for angle of attack 0 with airflow in the vertically upward direction (relative rendering)

A5.10 AOA0, Up-Abs: Component stimulus for angle of attack 0 with airflow in the vertically upward direction (absolute rendering)

A5.11 AOA12, Up-Rel: Component stimulus for angle of attack 12 with airflow in the vertically upward direction (relative rendering)

A5.12 AOA12, Up-Abs: Component stimulus for angle of attack 12 with airflow in the vertically upward direction (absolute rendering)

A5.13 AOA30, Up-Rel: Component stimulus for angle of attack 30 with airflow in the vertically upward direction (relative rendering)

A5.14 AOA30, Up-Abs: Component stimulus for angle of attack 30 with airflow in the vertically upward direction (absolute rendering)

A5.15 AOA50, Up-Rel: Component stimulus for angle of attack 50 with airflow in the vertically upward direction (relative rendering)

A5.16 AOA50, Up-Abs: Component stimulus for angle of attack 50 with airflow in the vertically upward direction (absolute rendering)

A5.17 AOA0, Dn-Rel: Component stimulus for angle of attack 0 with airflow in the vertically downward direction (relative rendering)

A5.18 AOA0, Dn-Abs: Component stimulus for angle of attack 0 with airflow in the vertically downward direction (absolute rendering)

A5.19 AOA12, Dn-Rel: Component stimulus for angle of attack 12 with airflow in the vertically downward direction (relative rendering)

A5.20 AOA12, Dn-Abs: Component stimulus for angle of attack 12 with airflow in the vertically downward direction (absolute rendering)

A5.21 AOA30, Dn-Rel: Component stimulus for angle of attack 30 with airflow in the vertically downward direction (relative rendering)

A5.22 AOA30, Dn-Abs: Component stimulus for angle of attack 30 with airflow in the vertically downward direction (absolute rendering)

A5.23 AOA50, Dn-Rel: Component stimulus for angle of attack 50 with airflow in the vertically downward direction (relative rendering)

A5.24 AOA50, Dn-Abs: Component stimulus for angle of attack 50 with airflow in the vertically downward direction (absolute rendering)

Bibliography

Anderson, J., et al. (2007). Cirrus design. https://wiki.umn.edu/view/CirrusDesign/WebHome

Bach-y-Rita, P., Collins, C. C., Saunders, F. A., White, B., & Scadden, L. (1969). Vision substitution by tactile image projection. Nature, 221, 963-964.

Benson, T. J. (1997). Interactive educational tool for classical airfoil theory. AIAA Aerospace Sciences Meeting, January 1997, Reno, NV.

Cardin, S., Vexo, F., & Thalmann, D. (2006). Vibro-tactile interface for enhancing piloting abilities during long term flight. Journal of Robotics and Mechatronics, Vol. 18, No. 4, 381-391.

Cardin, S., Vexo, F., & Thalmann, D. (2007). Head mounted wind. Proceedings of the 20th Annual Conference on Computer Animation and Social Agents (CASA), pp. 101-108.

Catlin, W., Eccles, H., & Malchodi, L. (2002). Smart sensor project takes flight: Boeing 'pressure belt' to measure airplane wing stress. International Society of Automation. May 2002. http://www.isa.org

Gooden, J. H. M., Schempp-Hirth, & Treiber, H. (2009). Wing airfoils. http://www.standardcirrus.org/Airfoil.html

Jones, L. A., Lockyer, B., & Piateski, E. (2006). Tactile display and vibrotactile pattern recognition on the torso. Advanced Robotics, Vol. 20, No. 12, 1359-1374.

Jones, L. A., Nakamura, M., & Lockyer, B. (2004). Development of a tactile vest. 12th International Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems, 27-28 March 2004, pp. 82-89.

Khbeis, M., Tan, X., Metze, G. & Ghodssi, R. (2003). Microfabrication of a pressure sensor array using 3D integration technology (abstract). Oral presentation at the American Vacuum Society 50th International Symposium, Baltimore, MD. http://www.egr.msu.edu/~xbtan/Papers/avs03.pdf

McGhee, R. J., & Beasley, W. D. (1973). Low-speed aerodynamic characteristics of a 17-percent-thick airfoil section designed for general aviation applications. NASA Langley Research Center, Hampton, VA. December 1973. NASA Report NASA-TN-D-7428.

Moin, P., & Kim, J. (1997). Tackling turbulence with supercomputers. Scientific American, January 1997, Vol. 276, Issue 1, 62-68.

Ozaki, Y., Ohyama, T., Yasuda, T., & Shimoyama, I. (2000). An air flow sensor modeled on wind receptor hairs of insects. The Thirteenth Annual International Conference on MEMS, 23-27 January 2000, pp. 531-536.


Parrott, J. (2008). Tactile feedback for a sailplane pilot giving wing air conditions: Vibration motor sleeves. Undergraduate Research Opportunity Grant, Summer 2008. University of Minnesota Duluth, Department of Computer Science. http://www.d.umn.edu/~cprince/PubRes/FbF/proposals/Jordan.pdf

Hallberg, D., Kulkarni, P., Parrott, J., Pope, D., Prince, C. G., Sebesta, D., Weber, P., & Wronski, M. (2008). State-of-the-art review for glider sensor feedback project. Unpublished manuscript. http://www.d.umn.edu/~cprince/PubRes/FbF/ReviewV3.pdf

Sebesta, D. M. (2008). Tactile feedback for a sailplane pilot giving wing air conditions: Commercial vest. Undergraduate Research Opportunity Grant, Summer 2008. University of Minnesota Duluth, Department of Computer Science. http://www.d.umn.edu/~cprince/PubRes/FbF/proposals/David.pdf

Sensors. (2008). Endevco and Boeing sign license agreement. June 26, 2008. http://www.sensorsmag.com/sensors/article/articleDetail.jsp?id=526198

Shaughnessy, E. J., Katz, I. M., & Schaffer, J. P. (2005). Introduction to fluid mechanics. New York: Oxford University Press.

Treiber, H., et al. (2009). Standard cirrus specifications. http://www.standardcirrus.org/Specifications.html


van Erp, J. B. F., Jansen, C., Dobbins, T., & van Veen, H. A. H. C. (2004). Vibrotactile waypoint navigation at sea and in the air: Two case studies. Proceedings of EuroHaptics, pp. 166-173.

Wronski, M. L. (2008). Tactile feedback for a sailplane pilot giving wing air conditions: Micro fan sleeves. Undergraduate Research Opportunity Grant, Summer 2008. University of Minnesota Duluth, Department of Computer Science. http://www.d.umn.edu/~cprince/PubRes/FbF/proposals/Matt.pdf

Zlotnik, M. A. (1988). Applying electro-tactile display technology to fighter pilot - flying with feeling again. Proceedings of the IEEE 1988 National Aerospace and Electronics Conference (NAECON), 23-27 May 1988, pp. 191-197.

