Digital & IR Imaging MAE 3120 Lab 09 Ryan Rafati, Shawn Satterwhite Mechanical & Aerospace Engineering George Washington University Experiment Date: 04/18/2023
Table of Contents
1. Abstract
2. Introduction
3. Symbols
4. Equations
5. Equipment
6. Procedure
7. Results
8. Discussion
9. Conclusions
10. Appendix
1. Abstract
High-speed digital cameras come in two forms: CCD and CMOS. CMOS cameras support higher frame rates and are better suited to fast captures. The cameras connected to the data acquisition system have bit depths of 8 and 24 bits. A pixel's intensity is proportional to the exposure time. Transistor-transistor logic (TTL) is a signal standard used in many types of circuits; it classifies voltage levels into binary logic levels, which simplifies identifying signal states. In infrared imaging, the radiation emitted by an object can be used to estimate its temperature without contact.
2. Introduction
This lab is an introduction to high-speed imaging and infrared camera mechanisms. For the high-speed imaging section, engineers worked with motion blur and were instructed to minimize it while keeping the exposure time in consideration. The camera used in this lab can decrease the exposure time to as little as 1 microsecond. Engineers are also reminded of the benefits of external lighting and are introduced to its practical application, including pulsed LEDs. For the infrared imaging section, engineers worked with a FLIR A15 camera that records at 60 fps with a resolution of 160x128 pixels. The temperature range for this camera is -40 degrees Celsius to 160 degrees Celsius. The Stefan-Boltzmann equation was also introduced to relate radiation and emissivity.
3. Symbols
E - emissive power (W/m^2)
σ - Stefan-Boltzmann constant (5.67x10^-8 W/(m^2.K^4))
ε - emissivity (dimensionless)
τ - exposure time (s)
4. Equations
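(This section was left blank in the report; the relations below are reconstructed from the symbols defined above and the calculations later in the report, not copied from the lab handout.)

E = ε σ T^4 - Stefan-Boltzmann law: emissive power of a surface with emissivity ε at absolute temperature T (K).

τ_max = (pixel size in the object plane) / (object speed) - longest exposure time before motion blur occurs, i.e. the object moves less than one pixel's worth of scene during the exposure.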
5. Equipment
IDT NX3-S3 camera
IDT Constellation 120 LED strobe lights
FLIR A15 camera
Tripods
6. Procedure
Part I - Principles of high-speed imaging
In this section, groups were called by the TA to begin preparation for the high-speed imaging camera. Each group first needed to select a member to act as the catapult, launching a golf ball when the camera begins its imaging process. The other members selected the frame rate and exposure time. For the camera to focus, the system needs to be put in 'play' mode; the group member operating the camera then focuses it on the ball and the hands of the catapult person to ensure accurate pictures, and checks that the brightness settings are displayed correctly. Next, the record button on the camera is pressed, which readies the camera for when the trigger mark is clicked on the computer. Once data is taken, the results are reviewed for blurriness or other inconsistencies: for blurriness, decrease the exposure time; if the ball moves too far between frames, increase the frame rate. The experiment is then repeated with corrected parameters, and the data is saved to a USB drive for comment.
Part II - Instrument Synchronization
In this section, a strobe light was given to increase brightness with short exposure times. First, a BNC cable needs to be attached to the output of the camera and to the oscilloscope which monitors the signal. Then the 'sync out' of the camera is connected to the 'sync in' of the LED and the LED needs to be placed on pulsed mode. This allows the LED to be turned on whenever imaging is occurring.
Part III - Principles of IR imaging
In the last section of this lab, thermal imaging was introduced. The camera needs to be pointed at the intersection of the whiteboard and the wall to observe thermal effects and emissivity. The camera is then pointed at different objects and people to observe their thermal patterns.
7. Results
Part I
Preliminary Question
Exposure Time 0.001 s
Motion blur occurs when an object moves across more than one pixel of the image during a single exposure, smearing its image. To find the critical exposure time at which no motion blur occurs, the relationship between the speed of the object and the pixel size (projected into the object plane) must be determined, so that the object moves no more than one pixel's worth of scene during the exposure.
First, the pixel size should be determined from the size of the object in the frame and the number of pixels the object spans. If the object is 20 cm in diameter and, at a set distance, is captured by 200 pixels, then each pixel covers 0.1 cm, or 0.001 m, of the scene. Next, the optimal exposure time is the pixel size, 0.001 m, divided by the object's speed, 1 m/s. This results in an exposure time of 0.001 seconds in this scenario.
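The calculation above can be sketched in a short script (using a 20 cm object spanning 200 pixels, which gives the 0.001 m per-pixel scene coverage used in this section):

```python
# Critical exposure time to avoid motion blur: the object should move
# less than one pixel's worth of scene during the exposure.

def critical_exposure_time(object_size_m, pixels_spanned, speed_m_per_s):
    """Scene distance covered by one pixel, divided by the object speed."""
    pixel_size_m = object_size_m / pixels_spanned
    return pixel_size_m / speed_m_per_s

# Example from this section: 20 cm ball spanning 200 pixels, moving at 1 m/s.
tau = critical_exposure_time(0.20, 200, 1.0)
print(tau)  # 0.001 s
```

Any exposure time at or below this value keeps the blur under one pixel; shorter exposures then trade off against image brightness, which motivates the strobe lighting in Part II.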
Pictures:
No-strobe launch
Images=156
Frames=1000
TriggerTime=123
Brightness=32
Rate=600
Part II
Strobe launch
Images=147
Frames=1000
TriggerTime=556
Brightness=32
Rate=600
We observed a square-wave sync signal whose frequency matched the frame rate of the camera and strobe, 600 frames per second.
Ball Trajectory:
Figure I: Ball trajectory tracked across multiple images with Logger Pro software.
The trajectory of the ball was analyzed using the LoggerPro image-analysis software. Multiple frames were imported into the software, and using the image-annotation tools, the center point of the ball was recorded in each. Between the first recorded position and the second, the ball was displaced approximately 7.1 centimeters, as can be seen in Figure I. The frame number of the first position was 288, and that of the second position, shown in Figure I, was 304, so 16 frames elapsed between the two positions. At a frame rate of 600 fps, 16 frames corresponds to 16 x (1/600) seconds per frame, or 0.0267 seconds. Knowing that the ball traveled 0.071 meters over 0.0267 seconds, the launch velocity of the ball was found to be 2.66 m/s. The launch angle, from the annotated positions, was determined to be 5.85 degrees right of vertical, or 84.15 degrees up from horizontal.
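The launch-velocity arithmetic can be reproduced from the tracked positions as follows (the x/y split of the displacement is reconstructed here from the reported 84.15-degree angle, since only the total displacement appears in the report):

```python
import math

# Launch speed and angle from two tracked ball positions. Numbers follow
# the report: 7.1 cm displacement between frames 288 and 304 at 600 fps.
frame_rate = 600.0            # frames per second
frame_a, frame_b = 288, 304   # frame numbers of the two tracked positions

displacement = 0.071          # m, total displacement between the positions
angle_deg = 84.15             # degrees above horizontal (as reported)
dx = displacement * math.cos(math.radians(angle_deg))
dy = displacement * math.sin(math.radians(angle_deg))

dt = (frame_b - frame_a) / frame_rate     # 16 frames -> ~0.0267 s
speed = math.hypot(dx, dy) / dt           # launch speed in m/s
angle = math.degrees(math.atan2(dy, dx))  # launch angle from horizontal

print(round(dt, 4), round(speed, 2), round(angle, 2))  # 0.0267 2.66 84.15
```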
Part III
With another group (the TA will call you), you will now experiment with the fundamentals of thermal imaging. The camera can be operated by clicking on the “PvSimpleUISample” link on the desktop.
Test the effect of emissivity with the IR camera. For this, point the camera to the intersection of the whiteboard and the wall; i.e. you should have both in your field of view. Take a snapshot with the computer. What do you observe? Explain why.
Some materials reflect thermal radiation (similar to a mirror reflecting visible light). Move the camera around the lab till you find a material that reflects thermal radiation well. What is the material? Are you surprised? Explain why such materials are used in greenhouses.
To test the effect of emissivity with the IR camera, the camera was pointed at a concrete wall with no external heat source. Figure II below shows the wall rendered in dark blue when thermally imaged, meaning relatively little thermal radiation reached the camera from the wall, which sat at a lower apparent temperature than its surroundings. When the camera was pointed at the computer, as seen in Figure III, the warmer parts, such as the keyboard and the location of the battery, showed up in a yellow-orange color, meaning relatively large amounts of thermal radiation were being emitted and captured by the IR camera. A difference can also be seen between people wearing glasses and people not wearing glasses. Even though all the glasses worn by the people in Figure IV had visibly clear lenses, glass is opaque to long-wave IR: the lenses block the radiation from the warm eyes behind them, so their cooler surfaces appear very dark compared to the eye region of people without glasses.
Figure II: IR image of the concrete wall.
Figure III: IR image of a laptop computer.
Figure IV: IR image of people with and without glasses.
The glass wall of the classroom was imaged with the IR camera, and reflected IR could be seen even though the glass was visibly transparent. Figure V shows the IR radiation reflected off the visibly transparent glass wall. This was an unexpected result, as glass normally appears transparent to the naked eye. It implies that the glass used for windows and walls passes certain wavelengths, such as those around the visible range, while being much more reflective and opaque to others. Figure V also shows that the reflected radiation was stronger than the radiation coming from objects behind the glass wall, and was seen instead of what was behind the wall. Glass is used in greenhouses for the same reason: it admits visible sunlight but blocks the longer-wavelength IR re-emitted inside, trapping heat so that less energy is needed to keep the greenhouse at the right temperature.
Figure V: Reflected IR radiation off of glass wall.
8. Discussion
What is the frame rate typically used in movies?
The frame rate in movies is typically around 24 frames per second (fps).
Give the resolution in pixels (horizontal x vertical) of 1080p and 4K TV. At the movie frame rate you found above, to which transfer rate does this correspond? Assume there is no compression and each pixel has a 24-bit depth (8 bits/color). Express your values in bit/s. Compare these with the USB 2, USB 3, and Ethernet protocols.
The standard resolution at 1080p has a screen with 1920x1080 pixels, and at 4K has 3840x2160 pixels.
The transfer rate, in bits per second, that corresponds to the frame rate of 24 fps with a bit depth of 24 bit/pixel at a resolution of 1080p corresponds to the following:
1920 pixels * 1080 pixels * 24 bit/pixel * 24 Hz = 1.194*10^9 bit/s ≈ 1.19 Gbit/s. At 4K the rate is exactly four times higher: 3840 * 2160 * 24 * 24 = 4.78*10^9 bit/s ≈ 4.78 Gbit/s.
USB 2 has a transfer rate of around 480 Mbit/s or around 40% of the transfer rate found above.
USB 3 has a transfer rate of around 4.8 Gbit/s, around 400% of the transfer rate found above.
Classic Ethernet transfers up to 10 Mbit/s, only 0.84% of the 1080p rate found above; even Gigabit Ethernet (1 Gbit/s) falls just short of it.
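The rates and percentages above can be checked with a short script (link rates taken as quoted in this discussion):

```python
# Uncompressed-video transfer rates for the discussion question above
# (24 fps, 24 bits per pixel, no compression).

def bitrate(width, height, fps=24, bits_per_pixel=24):
    """Uncompressed video data rate in bit/s."""
    return width * height * bits_per_pixel * fps

rate_1080p = bitrate(1920, 1080)   # ~1.19e9 bit/s
rate_4k = bitrate(3840, 2160)      # ~4.78e9 bit/s, exactly 4x the 1080p rate

# Link rates as quoted in the discussion above, in bit/s
links = {"USB 2": 480e6, "USB 3": 4.8e9, "Ethernet (10 Mbit/s)": 10e6}
for name, capacity in links.items():
    print(f"{name}: {capacity / rate_1080p:.1%} of the 1080p rate")
```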
With the high-speed camera, if a pixel is at 1024, then it is “saturated”. Which term did we use when talking with digital signals in general? What should be the maximal intensity value you recommend using on each pixel?
The term we used with digital signals in general was clipping. Since the camera saturates at 1024 counts, the maximum intensity on any pixel should be kept a safe margin below that point, at around 900 counts (roughly 90% of full scale), so that the brightest parts of the image are not clipped.
You are using a thermal imager to record the temperature of a human body. By mistake, the emissivity has been set up at 0.15. What temperature would you read for a healthy person?
If the emissivity has been set to 0.15 while human skin has an emissivity of about 0.97, the camera's reading will be far from the true body temperature. Using the equation provided in the lab,
T_ind = T_H * (ε_actual / ε_assumed)^(1/4)
with:
T_H = 98 °F (human body temperature)
ε_assumed = 0.97 (human skin emissivity)
ε_actual = 0.15 (emissivity set on the camera)
we find T_ind = 98 * (0.15/0.97)^(1/4) ≈ 61.5 °F, or about 16 °C. (Strictly, radiative calculations should use absolute temperatures in kelvin; the numbers here follow the report's substitution.) This reading is far below any plausible human body temperature, which immediately shows that the emissivity was set incorrectly.
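The mis-set-emissivity reading can be reproduced with the report's numbers (the formula mirrors the relation used in the lab handout as applied in this report; strictly, radiometric formulas call for absolute temperature in kelvin):

```python
# Indicated temperature when the imager's emissivity is mis-set, following
# the relation used in this report: T_ind = T_H * (eps_set / eps_skin)**0.25.
# Numbers simply mirror the report's substitution (T_H on the Fahrenheit scale).

T_H = 98.0        # human body temperature as used in the report
eps_skin = 0.97   # emissivity of human skin
eps_set = 0.15    # emissivity mistakenly set on the imager

T_ind = T_H * (eps_set / eps_skin) ** 0.25
print(round(T_ind, 2))  # ~61.45, far below a healthy reading
```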
You saw that reflection on IR imagers can lead to spurious measurements. Explain how this could compromise your measurements if you are inspecting electrical systems for abnormal heat sources or hot spots, which would be an indication of faulty problems. How could you detect if a hot spot is a reflection or not?
Reflection on IR imagers can compromise measurements of electrical systems by falsely indicating hot spots or heat sources. This could mean failing to identify damaged parts, or not correctly sizing insulation for areas that genuinely run hot. It can also lead engineers down the wrong path when diagnosing an issue, because it gives misleading information about where heat is actually being generated.
A hot spot can be checked for reflection by viewing it from several angles: a genuine hot spot stays fixed on the component as the camera moves, while a reflection shifts position with the viewing angle. Examining the surroundings also helps; if there is no plausible heat source near an isolated bright spot, it may well be a reflection.
Describe a calibration procedure for the Thermal imager (to relate brightness to temperature).
To calibrate the imager, its brightness readings should be related to known temperatures in the range of interest. A reference of known temperature and emissivity, ideally a blackbody calibration source, or in a simple setup a surface such as a person's hand measured with a contact thermometer, is imaged at several temperatures spanning the expected range, and the recorded brightness values are fit against temperature. Calibrating against a target far outside the range of interest, such as a cold wall, would not capture the brightness-temperature relationship where it is needed.
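Such a calibration could be sketched as below; all numbers are made up for illustration, and the fit assumes the camera's brightness scales with emitted power, E = ε σ T^4, per the Stefan-Boltzmann law introduced earlier:

```python
# Hypothetical brightness-to-temperature calibration: image a reference of
# known temperature (blackbody source, or a surface checked with a contact
# thermometer) at several set points, then fit brightness against T^4.
# All data points below are invented for illustration.

# Calibration points: (reference temperature in K, measured brightness counts)
data = [(290.0, 402.0), (300.0, 460.0), (310.0, 524.0), (320.0, 595.0)]

# One-parameter least-squares fit of brightness = a * T^4; the constant a
# absorbs emissivity, the Stefan-Boltzmann constant, and the camera gain.
num = sum(b * t**4 for t, b in data)
den = sum((t**4) ** 2 for t, _ in data)
a = num / den

def temperature_from_brightness(brightness):
    """Invert the fitted curve: T = (brightness / a)**(1/4)."""
    return (brightness / a) ** 0.25

print(round(temperature_from_brightness(460.0), 1))  # close to 300 K
```

In practice more calibration points, a background-radiation offset term, and the target's emissivity setting would all enter the fit; this sketch only shows the core brightness-temperature relationship.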
9. Conclusions
This experiment provided hands-on experience with high-speed imaging cameras and thermal cameras. Engineers gained experience in setting parameters to optimize imaging of a catapult launch by adjusting the frame rate, exposure time, and other variables, and saw how a pulsed strobe light affects image quality through changes in brightness and trigger timing. Engineers also began working with thermal cameras, observing emission and reflection patterns when imaging different objects.