10 Frequently Asked Questions Answered
Overwatch Imaging

With the 2023 trade show season well underway, Overwatch Imaging's marketing and business development team compiled a list of some of the most commonly asked questions we've heard so far this year. We're happy to share these details and insights into the technology and solutions we're delivering for important missions around the world.

1. What are Overwatch Imaging's core capabilities?

Overwatch Imaging designs and manufactures advanced AI-enabled intelligence and reconnaissance imaging payloads for crewed and uncrewed aircraft that automatically scan and map wide swaths of terrain under or in front of the aircraft.

The imagery captured during operations is processed at the edge using a variety of proprietary AI algorithms to provide relevant information to a user via a web-based GUI or to a mission management system. Current capabilities include wide area mapping, determination of wildfire perimeters, object detection in EO and IR bands in both terrestrial and maritime environments, and change detection, among others.


Overwatch Imaging's software suite can also be paired with a full motion video (FMV) gimbal to reduce task saturation and automate sensor operator functions; see question 2 below for details.



2. What is Overwatch AI?


Overwatch AI is collaborative, customizable, and mission-specific, and can be deployed on 3rd party gimbals to automate otherwise manually intensive tasks and improve wide-area search, mapping, and ISR capabilities. Overwatch Imaging sensors feed full bit-depth data to onboard AI-enabled, GPU-accelerated image processors, which run Overwatch Imaging's proprietary software to register, mosaic, align, geolocate, compress, and analyze imagery. The sensor's onboard computer runs Overwatch's proprietary neural net AI software modules, including automatic object detection, flight pass-to-pass (or day-to-day) change detection, fire perimeter mapping, mosaic outputs, and much more.
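
For readers curious what that kind of onboard pipeline looks like in general, here is a deliberately tiny sketch of two of the stages (registration and analysis) using generic techniques. It is illustrative only, not Overwatch's production code:

```python
# Minimal sketch in the spirit of an edge imaging pipeline:
# register a new frame against a reference, then flag anomalous
# pixels. Stage contents are generic placeholders.
import numpy as np

def register(frame, reference):
    """Estimate the (dy, dx) offset between frames via phase correlation."""
    cross = np.fft.fft2(reference) * np.conj(np.fft.fft2(frame))
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-9)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = frame.shape
    return (dy if dy <= h // 2 else dy - h,
            dx if dx <= w // 2 else dx - w)

def analyze(frame, sigma=4.0):
    """Flag pixels far above the frame's own background statistics."""
    hot = frame > frame.mean() + sigma * frame.std()
    return list(zip(*np.nonzero(hot)))

rng = np.random.default_rng(0)
ref = rng.normal(100.0, 5.0, (256, 256))
frame = np.roll(ref, (3, -7), axis=(0, 1))   # simulate aircraft motion
frame[40, 50] += 80.0                        # synthetic object of interest

print("estimated offset:", register(frame, ref))
print("detections:", analyze(frame)[:5])
```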


Overwatch AI can be paired with a full motion video (FMV) gimbal to reduce task saturation and/or automate sensor operator functions. This can include trained algorithms to identify targets of interest down to single-pixel targets, transmission of intelligence and data packages, automated cross-cueing, automated notifications, and more. This is accomplished by leveraging Overwatch's sensor control and AI software to automate mechanical sensor movements and to process captured imagery to provide only the most relevant data and/or objects of interest to the end-user or mission management system.



3. What is step-stare imaging?


Step-stare imaging captures high-resolution imagery of an area or subject of interest, often in multiple spectral bands simultaneously, in an optimized scan pattern as the system passes over or circles the area. The scan pattern is adjusted in real time to meet specific mission parameters, and in many cases the overlapping images are stitched together to create a single, high-resolution composite image of the area.

Step-stare imaging has a number of distinct advantages over other airborne ISR, mapping, and intelligence systems. Primarily, it allows for a much higher level of detail and accuracy over a wider area. The systems are compact, portable, and relatively low-SWaP, require lower bandwidth, and collaborate easily with networked FMV gimbals.

When combined with Overwatch Imaging's onboard processing and AI modules, step-stare imaging delivers an ideal combination of image detail and resolution, wide-area coverage, and automated capabilities in a versatile, collaborative form factor.
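
To make the scan-pattern idea concrete, here is a minimal, illustrative calculation of cross-track pointing angles for a hypothetical step-stare plan. All parameter values are made up for the example; real systems plan and adjust in real time:

```python
# Divide a desired cross-track swath into overlapping camera "stares".
import math

def step_stare_angles(cam_fov_deg, swath_fov_deg, overlap=0.2):
    """Return a centred list of cross-track pointing angles (degrees)."""
    step = cam_fov_deg * (1.0 - overlap)                 # advance per stare
    n = max(1, math.ceil((swath_fov_deg - cam_fov_deg) / step) + 1)
    start = -(n - 1) * step / 2.0                        # centre the pattern
    return [start + i * step for i in range(n)]

angles = step_stare_angles(cam_fov_deg=10.0, swath_fov_deg=60.0)
print(len(angles), "stares:", [round(a, 1) for a in angles])
# At a 10 deg camera FOV with 20% overlap, 8 stares cover the 60 deg swath.
```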



4. How is your imagery processed?


Images from the sensors are processed with an advanced embedded graphics processing unit (GPU) for real-time image analytics, onboard cross-cueing, and pre-transmission data reduction. The systems are designed to simultaneously operate multiple area scan cameras including visible band (RGB), near infrared (NIR), shortwave infrared (SWIR), mid-wave infrared (MWIR) and long-wave infrared (LWIR) in a co-boresighted system with a dual-antenna Global Positioning System (GPS) inertial navigation system.
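
As a simplified illustration of how a pose-aware payload can geolocate a pixel, the sketch below intersects a pinhole-camera ray with a flat ground plane. Real systems use full camera models, terrain data, and the GPS/INS solution described above; the geometry and values here are for illustration only:

```python
import numpy as np

def pixel_to_ground(px, py, width, height, fov_deg, cam_pos, R_cam_to_world):
    """Return the local-frame ground (x, y) hit by pixel (px, py)."""
    f = (width / 2.0) / np.tan(np.radians(fov_deg) / 2.0)  # focal length, px
    ray_cam = np.array([px - width / 2.0, py - height / 2.0, f])
    ray = R_cam_to_world @ (ray_cam / np.linalg.norm(ray_cam))
    t = -cam_pos[2] / ray[2]           # scale to reach the z = 0 ground plane
    return (cam_pos + t * ray)[:2]

# Nadir-pointing camera 1000 m above flat ground (z axis points down,
# so the aircraft sits at z = -1000); identity attitude = straight down.
pos = np.array([0.0, 0.0, -1000.0])
print(pixel_to_ground(640, 512, 1280, 1024, fov_deg=20.0,
                      cam_pos=pos, R_cam_to_world=np.eye(3)))
```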


The sensors feed full bit-depth data to onboard AI-enabled GPU-accelerated image processors, which run Overwatch Imaging proprietary software to register, mosaic, align, geolocate, compress, and analyze imagery.


5. What is a "Quick Mosaic"?



Quick Mosaic is a primary software module of TK series payloads that stitches and blends collected imagery into a single, high-resolution geo-referenced mosaic image. Quick Mosaic outputs consist of downscaled imagery that allows for faster processing, so they can be delivered in real time during flight. Outputs include palettized TIFF images that highlight wildfire activity, daytime and IR multi-spectral composite images, oil-on-water mapping products, infrastructure inspection imagery, and much more. Full-resolution raw imagery is stored on the payload and is available for further analysis or archival purposes.
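
The toy example below gives a feel for the downscale-and-place step of mosaicking. The grid sizes and ground sample distances (GSDs) are invented, and real mosaicking orthorectifies, georeferences, and blends far more carefully:

```python
import numpy as np

GSD = 2.0  # mosaic canvas metres per pixel (made up for the example)

def paste(canvas, frame, origin_xy, factor=4):
    """Downscale a frame and paste it at its ground position."""
    # 0.5 m native GSD x factor 4 = 2.0 m, matching the canvas GSD
    small = frame[::factor, ::factor]
    px, py = int(origin_xy[0] / GSD), int(origin_xy[1] / GSD)
    canvas[py:py + small.shape[0], px:px + small.shape[1]] = small
    return canvas

canvas = np.zeros((500, 500), dtype=np.uint8)
rng = np.random.default_rng(1)
for i, x in enumerate((0.0, 300.0, 600.0)):       # three stares heading east
    frame = rng.integers(0, 255, (512, 512), dtype=np.uint8)
    paste(canvas, frame, origin_xy=(x, 100.0 * i))
print("non-empty mosaic pixels:", int((canvas > 0).sum()))
```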



6. Do your systems support CoT messaging?

Yes. When an Overwatch Imaging system is installed in conjunction with an FMV gimbal, the system supports Cursor-on-Target (CoT) messaging to cross-cue the FMV gimbal to a detected object of interest, supporting faster positive identification of that object by the operator.
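
For context, CoT events are small XML messages. The sketch below builds and sends a representative detection event over UDP multicast; the uid, type code, accuracy figures, and addresses are illustrative examples, not Overwatch's actual message contents:

```python
import socket
from datetime import datetime, timedelta, timezone

def iso(t):
    return t.strftime("%Y-%m-%dT%H:%M:%S.%fZ")

def cot_event(uid, lat, lon, hae=0.0, stale_s=60):
    """Build a minimal CoT event string for a surface contact."""
    now = datetime.now(timezone.utc)
    return (
        f'<event version="2.0" uid="{uid}" type="a-u-S" how="m-p" '
        f'time="{iso(now)}" start="{iso(now)}" '
        f'stale="{iso(now + timedelta(seconds=stale_s))}">'
        f'<point lat="{lat}" lon="{lon}" hae="{hae}" ce="25.0" le="10.0"/>'
        '<detail><remarks>auto-detected object of interest</remarks></detail>'
        '</event>'
    )

msg = cot_event("overwatch-detection-001", 45.52, -122.68)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(msg.encode("utf-8"), ("239.2.3.1", 6969))  # common SA multicast
```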


7. What's the general difference between PT and TK series sensors?


Overwatch Imaging has two series of imaging payload sensors that are utilized for multi-spectral wide-area automated surveillance.

The PT Series utilizes a forward-looking pan-tilt gimbal that functions like an image-based radar for general search and object detection use cases.

The TK Series is a nadir-oriented, multi-camera, multi-spectral imaging system. These systems use a step-stare movement that steps across the flight track to support effective fields of view much larger than that of a single camera.
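
A quick back-of-the-envelope calculation shows why stepping across the track matters. All numbers below are invented for illustration:

```python
import math

alt_m, cam_fov_deg, steps, overlap = 3000.0, 12.0, 6, 0.15
# Each additional stare adds the non-overlapping part of the camera FOV.
eff_fov = cam_fov_deg * (1 + (steps - 1) * (1 - overlap))
swath_m = 2 * alt_m * math.tan(math.radians(eff_fov) / 2)
print(f"effective FOV {eff_fov:.1f} deg -> swath {swath_m:.0f} m")
# A 12 deg single camera grows to ~63 deg effective across track.
```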



8. How does Overwatch incorporate sensor and image fusion?


As the industry transitions to sensor interoperability and autonomy, Overwatch has positioned its products to support the increased data requirements of longer-duration, multi-mission applications that require a suite or pairing of sensors to exchange information and allow platforms to operate autonomously. Combining imagery intelligence about the same scene obtained by various sensors in different modes not only increases shared situational awareness but also supports command and control decisions when minutes matter.

Image fusion has been used by Overwatch Imaging to create mission-specific composite imagery since October 2016. Fused multi-band images are created using an image registration technique that is robust to differences in spectral band, image resolution, and lens distortion.
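
One generic way to register imagery across spectral bands is to align gradient-magnitude images, which are relatively insensitive to band-dependent contrast, using an ECC solver. The sketch below shows that common technique with OpenCV; it is not Overwatch's proprietary registration method:

```python
import cv2
import numpy as np

def grad_mag(img):
    """Band-robust representation: normalized gradient magnitude."""
    gx = cv2.Sobel(img, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(img, cv2.CV_32F, 0, 1)
    return cv2.normalize(cv2.magnitude(gx, gy), None, 0, 1, cv2.NORM_MINMAX)

def register_bands(ref, moving):
    """Estimate an affine warp mapping `moving` onto `ref`."""
    warp = np.eye(2, 3, dtype=np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 200, 1e-6)
    _, warp = cv2.findTransformECC(grad_mag(ref), grad_mag(moving), warp,
                                   cv2.MOTION_AFFINE, criteria, None, 5)
    return warp

# Synthetic demo: the "IR" band is an inverted, shifted copy of the "EO"
# band, so raw intensities disagree but gradients still align.
rng = np.random.default_rng(5)
base = cv2.GaussianBlur(rng.random((200, 200)).astype(np.float32), (0, 0), 5)
ir = 1.0 - np.roll(base, (3, 5), axis=(0, 1))
print(register_bands(base, ir))
```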


Overwatch Imaging's multi-modal fusion capabilities include the ability to integrate commercial off-the-shelf software-defined radios (SDRs). To date, the most common SDR application has been receiving Automatic Identification System (AIS) signals commonly used in the maritime domain, but an SDR's inherent ability to be quickly modified to receive other signals of interest allows for significant growth in this space, including the fusing of airborne or land-based signals.
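
As a simple illustration of what AIS-to-imagery fusion can add, the sketch below associates hypothetical decoded AIS contacts with geolocated image detections by distance. All positions, MMSIs, and thresholds are invented; a vessel seen in imagery with no nearby AIS report is often the interesting one:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

ais = {"367001234": (45.600, -123.900), "367009999": (45.610, -123.870)}
detections = [(45.6002, -123.8995), (45.650, -123.800)]

for lat, lon in detections:
    mmsi, dist = min(((m, haversine_m(lat, lon, *pos))
                      for m, pos in ais.items()), key=lambda t: t[1])
    tag = mmsi if dist < 500 else "no AIS match (dark target?)"
    print(f"detection at ({lat}, {lon}): {tag}, {dist:.0f} m")
```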



9. Are Overwatch systems and software interoperable?


Overwatch Imaging sensors and software are versatile, portable, and collaborative. The sensors are designed to be easily integrated and platform-agnostic: they are powered by available 28 VDC aircraft power and utilize a common Ethernet interface. The sensors provide substantial onboard computing and image processing power and can be operated via the Ethernet connection with a device as small as a tablet.


The sensors and software are currently deployed on a wide range of crewed and uncrewed aircraft and can be used in collaboration (i.e., cross-cueing) with 3rd party sensors to provide complementary or enhanced image data and intelligence. Overwatch sensors have helped transition the market from ad-hoc, single-purpose solutions to multi-mission, multi-function solutions.


Overwatch's proprietary software is interoperable with 3rd party sensor platforms to control sensor functions, process data at the edge, and automate sensor operator roles in search, detection, mapping, and ISR missions. Overwatch continuously leverages its software architecture to create custom solutions for specific end users and missions, processing high volumes of data to deliver only relevant information in a timely manner.



10. What image correction does Overwatch software perform?


Overwatch Imaging applies several real-time corrections to incoming imagery. There are two categories of corrections: pre-characterized and dynamic. Pre-characterized corrections include bad pixel replacement, lens distortion compensation, and anti-vignetting for reflected-band cameras (UV, RGB, NIR, SWIR). Bad pixel replacement replaces all pixels on a known list with the median of their neighbors. Lens distortion compensation improves real-time geolocation accuracy. Pre-characterized anti-vignetting results in a uniformly illuminated image from center to corner, despite differences in illumination on the focal plane caused by the lens. These corrections run in near real time, with corrected imagery processed within 3 frames of capture (4 Hz maximum capture rate).
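
The snippet below sketches two of these pre-characterized corrections, median-of-neighbors bad pixel replacement and flat-field anti-vignetting, in heavily simplified form (illustrative only, not production code):

```python
import numpy as np

def replace_bad_pixels(img, bad_list):
    """Replace each listed pixel with the median of its neighbours."""
    out = img.astype(np.float32).copy()
    h, w = img.shape
    for y, x in bad_list:
        ys = slice(max(0, y - 1), min(h, y + 2))
        xs = slice(max(0, x - 1), min(w, x + 2))
        patch = img[ys, xs].astype(np.float32).copy()
        patch[y - ys.start, x - xs.start] = np.nan   # exclude the bad pixel
        out[y, x] = np.nanmedian(patch)
    return out

def anti_vignette(img, gain_map):
    """Apply a pre-characterized per-pixel gain from a flat-field target."""
    return img.astype(np.float32) * gain_map

rng = np.random.default_rng(2)
img = rng.normal(1000.0, 10.0, (100, 100)).astype(np.float32)
img[10, 10] = 0.0                                    # dead pixel
print(round(float(replace_bad_pixels(img, [(10, 10)])[10, 10])))  # ~1000

yy, xx = np.mgrid[:100, :100]
falloff = 1.0 - 0.4 * ((yy - 50) ** 2 + (xx - 50) ** 2) / 50.0 ** 2
flat = anti_vignette(img * falloff, 1.0 / falloff)   # corners restored
```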


Overwatch Imaging also applies dynamic corrections to incoming imagery. These corrections are calculated based on analysis of the incoming image and previous images. Bad pixel identification is performed to find bad pixels that were not identified during manufacture. As cameras age, more bad pixels can appear. These are identified by finding pixels that are consistently much different from their neighbors across images. When bad pixels are identified, they are replaced in the same way as pre-characterized bad pixels.
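
A minimal version of that idea: flag pixels that sit far from their local median in a large fraction of frames. The thresholds below are invented for the example:

```python
import numpy as np
from scipy.ndimage import median_filter

def find_bad_pixels(frames, sigma=6.0, persistence=0.9):
    """Return coordinates of pixels that are outliers in most frames."""
    votes = np.zeros(frames[0].shape, dtype=np.int32)
    for f in frames:
        local = median_filter(f.astype(np.float32), size=3)
        resid = f - local
        votes += (np.abs(resid) > sigma * resid.std()).astype(np.int32)
    return np.argwhere(votes >= persistence * len(frames))

rng = np.random.default_rng(3)
frames = [rng.normal(500.0, 8.0, (64, 64)).astype(np.float32)
          for _ in range(20)]
for f in frames:
    f[12, 34] = 4095.0                  # stuck-high pixel in every frame
print(find_bad_pixels(frames))          # -> [[12 34]]
```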


For MWIR and LWIR cameras, Overwatch also applies a dynamic non-uniformity correction (NUC). This correction uses incoming imagery to estimate and remove non-uniformity, which can take the form of fixed-pattern noise, row/column noise, and low-spatial-frequency effects like vignetting.
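
One simple scene-based example of the idea is removing per-column offsets estimated from a temporal average; real NUC algorithms are considerably more sophisticated than this sketch:

```python
import numpy as np

def column_nuc(frames):
    """Estimate and subtract fixed per-column offsets (column noise)."""
    avg = np.mean(np.stack(frames), axis=0)        # temporal average
    col_offsets = avg.mean(axis=0) - avg.mean()    # per-column bias
    return [f - col_offsets[None, :] for f in frames]

rng = np.random.default_rng(4)
stripes = rng.normal(0.0, 15.0, 128)               # fixed column pattern
frames = [rng.normal(300.0, 5.0, (96, 128)) + stripes for _ in range(30)]
clean = column_nuc(frames)
print("column std before:",
      round(float(np.mean(frames, axis=(0, 1)).std()), 1))
print("column std after: ",
      round(float(np.mean(clean, axis=(0, 1)).std()), 1))
```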
