Remote Sensors for Precision Ag at West Hills College

In the previous article of this series, Unmanned Aerial Systems (UAS) were discussed. For all of the attention that UAS get, they are just one piece of the puzzle. First, unmanned aerial vehicles are only one type of platform for capturing imagery; there are also satellites, manned flights, and ground-based platforms. Second, platforms are not even the most important part of the puzzle. The imagery that shows how a crop is doing, or provides other information, is the important part. That leads us this month to the sensors, or cameras, that are used to capture the imagery.

Sensors in agriculture should be a bigger deal than UAS, but they aren't as cool, so they don't get the headlines. They are like the person in the background doing all the work but never getting the credit.

Sensors are a big topic, so this is actually going to be a two-part article. This month will cover "remote sensors"; next month will cover "contact sensors". Remote sensors do not actually touch or "contact" the object or material they are sensing; that is what makes them remote.

The difference between a sensor and a camera also needs to be established, though the terms are commonly used interchangeably. I once took a group of students to visit an aerial imagery service company that had just purchased a sensor for approximately $1 million. Before the tour, I cautioned the students that when this sensor was discussed, it should not be called a camera. Of course, upon arrival, the tour guide referred to the sensor… as a camera. He corrected himself when he saw the students' reaction.

A camera, strictly speaking, uses film and a shutter to allow light to briefly "expose" the film. A light-sensitive chemical on the film reacts to the various wavelengths of light and changes color. The resulting photograph is the product of a chemical process.

Instead of film, a sensor has a "detector" that captures the amount of light when a shutter is used to expose it. This is an electronic process and results in a digital value… thus the term digital camera. Digital cameras use sensors to create imagery. One problem with sensors in digital cameras is that the detectors capture only the amount or strength of the light, not its color. To solve this, most digital cameras separate the light (using a type of prism) and then use filters that allow only one color to expose each of three different sets of detectors: one for the "red", one for the "green", and one for the "blue". This is known as an RGB image, and when all three are combined and displayed in their proper colors, a natural color image is the result.
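To make the idea concrete, here is a minimal sketch, assuming nothing more than NumPy, of how three separately captured bands are stacked into one natural color image. The array names and sizes are made up for illustration; a real sensor would supply the actual band values.

```python
import numpy as np

# Three hypothetical detector readouts, one per filtered band, each a 2-D
# array of brightness values (0-255) covering the same scene.
red = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
green = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
blue = np.random.randint(0, 256, (480, 640), dtype=np.uint8)

# Stacking the bands in R, G, B order produces the natural color image.
natural_color = np.dstack([red, green, blue])
print(natural_color.shape)  # (480, 640, 3) -> rows, columns, bands
```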

Higher-quality sensors don't separate the light through filters and then recombine it. They have one set of detectors within the sensor, which allows the user to capture a very specific color. This allows more control and thus better analysis.

It is also important to understand what light and color are. There is a wide range of energy coming from our sun, and only a very small part of it is visible light. This light takes the form of waves of energy that are differentiated by the lengths of those waves. Each wavelength reacts differently when it strikes an object; it is reflected or absorbed by the object¹ to different degrees. When a wavelength is reflected by an object, our eyes see the reflectance as the color associated with that wavelength.

A piece of red cloth can be used as a further example. It absorbs most of the light that hits it, but because of the dye that was used, it reflects the red wavelength that we see. Variations of red are slightly different wavelengths, or small amounts of reflected blue and green mixed in with the red. A black shirt absorbs all of the light wavelengths and does not reflect anything (which is why a black shirt is hotter on a sunny day); a white shirt reflects all light wavelengths (which is why a white shirt is cooler).

A digital camera separates the colors into groups of wavelengths (also called "bands"), and a detector senses how much of each color has been reflected from the object; that is how an image is captured.

The resolution of an image determines how detailed that image is. Resolution in cameras is commonly measured in "megapixels", which is basically the number of detectors that the sensor has. Each detector captures the reflectance for one piece of the image (also known as a pixel). The resolution of higher-quality sensors is measured as "Ground Sample Distance" (GSD), which is the distance of ground represented by one pixel. A GSD of 1 ft means the detector has captured the reflectance from a 1 ft × 1 ft area of the ground, and that area is represented by one pixel.
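A common back-of-the-envelope approximation for GSD on a straight-down (nadir) view is pixel size × flying height ÷ focal length. The sketch below uses made-up numbers purely for illustration; actual values come from the sensor's spec sheet and the flight plan.

```python
# Rough GSD estimate for a nadir view; all numbers are illustrative
# assumptions, not the specifications of any particular sensor.
pixel_size_m = 4.8e-6    # physical size of one detector element (4.8 micrometers)
focal_length_m = 0.012   # lens focal length (12 mm)
altitude_m = 120.0       # flying height above the ground

gsd_m = pixel_size_m * altitude_m / focal_length_m
print(f"GSD: {gsd_m:.3f} m per pixel")  # about 0.048 m, roughly 2 inches
```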

Digital cameras used in the majority of smaller drones are RGB cameras that produce natural color imagery. This works well for many users who want aerial imagery of a house or property, construction site, accident scene, crime scene, or utility lines. West Hills has several UAS with RGB cameras that are used for documenting a crop, field, or irrigation on the Farm of the Future. The resolution of 1–2 inches allows us to see individual plants, irrigation lines, and branch structure within our orchard trees. This is valuable for documenting what is in the field and for some simple management decisions, but it doesn't give us detailed data about the health or vigor of a plant for further analysis.

Besides the visible light wavelengths, there are also light wavelengths that are not visible to the naked human eye. The most common of these invisible wavelengths are: NIR (Near Infrared, which is nearest to the red visible band); Mid-range IR (the region associated with thermal energy); and Far IR (infrared that is farthest from the red visible band). They have their own reflectance properties, which means that even if the human eye cannot see them, a digital sensor can!

Just as with the red, green, and blue bands, a filter can be used to separate out the infrared band so that it can be captured by the detector. The value of capturing NIR lies in how differently plant tissue reflects it.

 

  • Green wavelengths reflect at a relatively high percent from healthy plant tissue, which is why we see plants as green
  • Red wavelengths reflect at a relatively low percent and are absorbed at a high percent by healthy plant tissue, because photosynthesis relies heavily on red light
  • Infrared wavelengths reflect at a very high percent from healthy plant tissue

 

From this we can also say that red is reflected at a high percent from stressed or dead plant tissue (which is why tree leaves turn red or other colors in the fall). Infrared will be absorbed by stressed or unhealthy plant tissue, meaning that less of it will be reflected. Capturing infrared reflectance therefore allows the user to determine the stress level of plant tissue: the higher the reflectance, the less stress the plant is experiencing.

There is a small problem: if infrared is invisible, how do we see it? When we take an RGB image, each band is displayed using its respective color. How do we display a band that has no visible color? Most analysts will display the infrared band as red, which creates what is known as a false color image. All green vegetation is now displayed as red. And not just red: the differences in the shades of red are based on the stress level of the plant. Bright red is a healthy plant; pink or dirty red is a stressed plant.
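Below is a minimal sketch of how such a false color (color-infrared) composite might be assembled, again with NumPy and purely hypothetical reflectance arrays: the invisible NIR band is placed in the red display channel, red in green, and green in blue.

```python
import numpy as np

# Hypothetical co-registered single-band reflectance arrays (0.0-1.0) for one
# scene; a real workflow would read these from a multispectral sensor.
nir = np.random.rand(480, 640)
red = np.random.rand(480, 640)
green = np.random.rand(480, 640)

# Color-infrared composite: NIR is shown as red, red as green, green as blue,
# so healthy vegetation (high NIR reflectance) appears bright red on screen.
false_color = np.dstack([nir, red, green])
```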

Where do we get this multispectral imagery? If you have an unmanned aerial system with a multispectral sensor, you can capture your own. However, there are two other common and valuable platforms: satellite and manned flight. I taught a digital imagery course for years using Landsat satellite multispectral imagery. This imagery is FREE if you know how to find it, download it, and process it. (That is beyond the scope of this article; come to West Hills College and take my Digital Imagery class!)

A series of Landsat satellites has taken images of every part of the earth. The imagery is available for download in the form of separate bands: Blue, Green, Red, NIR, Mid-Range IR (Thermal), Far-Range IR, and, depending on the Landsat, another IR and/or a panchromatic band (higher resolution black and white). The sensors vary from one Landsat satellite to the next and have included the Multispectral Scanner (MSS), Thematic Mapper (TM), Enhanced Thematic Mapper Plus (ETM+), and Operational Land Imager (OLI). Each captures a slightly different set of wavelength bands, so users need to decide which bands are most appropriate for them.
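As a quick reference, here is a sketch of how the bands on the Landsat 8 OLI/TIRS sensor are commonly numbered. Earlier Landsat sensors (MSS, TM, ETM+) number their bands differently, so always check the metadata that comes with the scene you download.

```python
# Approximate Landsat 8 OLI/TIRS band layout, for orientation only; confirm
# against the scene metadata, since band numbering differs by Landsat mission.
LANDSAT8_BANDS = {
    2: "Blue",
    3: "Green",
    4: "Red",
    5: "Near Infrared (NIR)",
    6: "Shortwave Infrared 1",
    7: "Shortwave Infrared 2",
    8: "Panchromatic (higher resolution black and white)",
    10: "Thermal Infrared 1",
    11: "Thermal Infrared 2",
}

for number, name in sorted(LANDSAT8_BANDS.items()):
    print(f"Band {number}: {name}")
```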

These are all high quality sensors, but since they are 400–500 miles above the earth, the imagery is fairly low resolution. The GSD of most Landsat imagery is 30 meters, a much lower resolution than the two to three inches from a UAS. This provides wide coverage of a field, but does not allow images of individual trees or plants. For large fields or farms, though, satellite imagery provides good infrared data that can be converted to NDVI (Normalized Difference Vegetation Index) or other vegetative indices.
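NDVI itself is a simple ratio of the red and near-infrared bands: (NIR − Red) ÷ (NIR + Red). The sketch below computes it with NumPy over a tiny made-up pair of bands; real imagery from any of the platforms above would be processed the same way, pixel by pixel.

```python
import numpy as np

# Tiny made-up red and NIR reflectance arrays (0.0-1.0) standing in for two
# co-registered bands of a real scene.
red = np.array([[0.08, 0.10],
                [0.30, 0.25]])
nir = np.array([[0.50, 0.45],
                [0.32, 0.28]])

# NDVI = (NIR - Red) / (NIR + Red); the tiny epsilon avoids division by zero.
ndvi = (nir - red) / (nir + red + 1e-9)
print(ndvi)  # values near +1 suggest healthy vegetation; near 0, bare soil or stress
```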

Another problem with satellite imagery is that clouds are often covering the earth's surface. Most of the wavelengths we want do not pass through clouds, so the image is restricted. Radar (microwave) wavelengths will pass through clouds, but they provide limited useful data for this purpose.

These concerns are addressed by the use of manned aerial flights. Missions can be flown from 1,200 ft to 28,000 ft AGL (Above Ground Level) and at almost any time. Clouds can be avoided and a large area covered very quickly. Considering the area covered, the cost on a per-acre basis is very reasonable.

A professional aerial imagery service does not just stick a camera out the window. The sensor is a high-quality multispectral sensor mounted in an opening in the belly of the aircraft. The deliverables from the service can be the raw images, if the customer has the means to do the processing, or a final ortho-rectified and georeferenced image ready for interpretation. West Hills relies on GeoG2 Solutions, an aerial imagery service that provides us with monthly infrared images of our Farm of the Future for use in class.

Though I introduced this topic by saying that imagery is the important piece, there is actually a lot more to remote sensing. There is also image processing software, spectral signatures, hyperspectral imagery, radiance, and thermal bands, all of which have importance to agriculture. Much of it is still being researched. Hopefully the background in this article will be helpful for moving forward as that technology becomes available.

¹ Actually, there are four things that can happen to light when it hits an object: it can be transmitted, reflected, absorbed, or refracted. For simplicity, reflected and absorbed are the two we are most concerned with here.