
Frequently Asked Questions

Key details about our hardware, software, processing power, and how we tie it all together

  • What are Overwatch Imaging's core capabilities?
    Overwatch Imaging designs and manufactures advanced AI-enabled intelligence and reconnaissance imaging payloads for crewed and uncrewed aircraft that automatically scan and map wide swaths of terrain under or in front of the aircraft. Imagery captured during operations is processed at the edge using a variety of proprietary AI algorithms to provide relevant information to a user via a web-based GUI or to a mission management system. Current capabilities include wide-area mapping, determination of wildfire perimeters, object detection in EO and IR bands in both terrestrial and maritime environments, and change detection, among others. Overwatch Imaging's software suite can be paired with a full motion video (FMV) gimbal to reduce task saturation and/or automate sensor operator functions. This can include trained algorithms to identify targets of interest down to single-pixel targets, transmission of intelligence and data packages, automated cross-cueing, automated notifications, and more. This is accomplished by leveraging Overwatch's sensor control and AI software to automate mechanical sensor movements and to process captured imagery so that only the most relevant data and objects of interest reach the end user or mission management system.
  • What is Overwatch AI?
    OVERWATCH AI is collaborative, customizable, mission-specific, and can be deployed on 3rd-party gimbals to automate otherwise manually intensive tasks and improve wide-area search, mapping, and ISR capabilities. Overwatch Imaging sensors feed full bit-depth data to onboard AI-enabled, GPU-accelerated image processors, which run Overwatch Imaging proprietary software to register, mosaic, align, geolocate, compress, and analyze imagery. The sensor's onboard computer runs Overwatch's proprietary neural-net AI software modules, including automatic object detection, flight pass-to-pass (or day-to-day) change detection, fire perimeter mapping, mosaic outputs, and much more. Overwatch AI can be paired with a full motion video (FMV) gimbal to reduce task saturation and/or automate sensor operator functions. This can include trained algorithms to identify targets of interest down to single-pixel targets, transmission of intelligence and data packages, automated cross-cueing, automated notifications, etc. This is accomplished by leveraging Overwatch's sensor control and AI software to automate mechanical sensor movements and to process captured imagery so that only the most relevant data and objects of interest reach the end user or mission management system.
    Overwatch AI functionality can be divided into three primary categories:
    Search: AUTOMATED SENSOR OPERATION enables high-efficacy wide-area search and ISR activities through smart, systematic, mission-specific controls. This increases efficiency and frees humans in the loop to focus on other tasks. Capabilities: sensor steering and control; geo-referenced image data; automated search patterns; customizable path following; 3rd-party gimbal control.
    Analyze: POWERFUL REAL-TIME ANALYSIS automates otherwise tedious tasks. Overwatch AI can be trained with unique parameters to meet mission-specific objectives and enhances human interpretation capabilities. Capabilities: customizable object detection; fire, flood, and oil spill mapping; change detection; automated maritime search; fire detection and mapping.
    Report: ACTIONABLE INTELLIGENCE DELIVERED in real time via edge processing and a simple user interface. Raw data is distilled down to the most important information, reduced for transmission, and delivered to operators or networked systems for review or action. Capabilities: interactive map display; internal and external sensor data fusion; mission-specific imagery outputs; simple network integration; data reduction for transfer.
  • What is step-stare imaging?
    Step-stare imaging captures high-resolution imagery of an area or subject of interest, often in multiple spectral bands simultaneously, in an optimized scan pattern as the system passes over or circles the area. The scan pattern is adjusted in real time to meet specific mission parameters, and in many cases the overlapping images are stitched together to create a single, high-resolution composite image of the area. Step-stare imaging has a number of distinct advantages over other airborne ISR, mapping, and intelligence systems. Primarily, it allows for a much higher level of detail and accuracy over a wider area. Systems are compact, portable, and relatively low SWaP, require lower bandwidth, and collaborate easily with networked FMV gimbals. When combined with Overwatch Imaging's onboard processing and AI modules, step-stare imaging delivers the ideal combination of image detail, wide-area coverage, and automated capability in a versatile, collaborative form factor.
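    For intuition, here is a minimal sketch of the cross-track tiling geometry (illustrative only; it assumes a simple flat-terrain model, and the parameter names and values are hypothetical, not Overwatch's actual scan planner):

```python
# Minimal sketch of step-stare cross-track tiling (illustrative only; not
# Overwatch Imaging's actual scan planner). Assumes a flat-terrain model
# and hypothetical parameter values.
import math

def step_stare_angles(camera_fov_deg, total_fov_deg, overlap=0.2):
    """Pointing angles (degrees from nadir) that tile a desired effective
    cross-track field of view with overlapping camera frames."""
    if total_fov_deg <= camera_fov_deg:
        return [0.0]  # a single nadir frame already covers the swath
    max_step = camera_fov_deg * (1.0 - overlap)         # largest step keeping overlap
    n = math.ceil((total_fov_deg - camera_fov_deg) / max_step) + 1
    step = (total_fov_deg - camera_fov_deg) / (n - 1)   # spread frames edge to edge
    start = -(total_fov_deg - camera_fov_deg) / 2.0     # leftmost frame center
    return [start + i * step for i in range(n)]

# Example: a 10-degree camera tiling a 60-degree effective FOV at 20% overlap.
print(step_stare_angles(10.0, 60.0))  # 8 frames from -25 to +25 degrees
```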
  • How is imagery processed?
    Images from the sensors are processed with an advanced embedded graphics processing unit (GPU) for real-time image analytics, onboard cross-cueing, and pre-transmission data reduction. The systems are designed to simultaneously operate multiple area scan cameras including visible band (RGB), near infrared (NIR), shortwave infrared (SWIR), mid-wave infrared (MWIR) and long-wave infrared (LWIR) in a co-boresighted system with a dual-antenna Global Positioning System (GPS) inertial navigation system. The sensors feed full bit-depth data to onboard AI-enabled GPU-accelerated image processors, which run Overwatch Imaging proprietary software to register, mosaic, align, geolocate, compress, and analyze imagery.
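    To see why onboard processing and pre-transmission data reduction matter, consider some back-of-the-envelope arithmetic (the camera count and specs below are hypothetical examples, not the specs of any particular Overwatch payload):

```python
# Hypothetical numbers chosen only to illustrate the raw data volume that
# full bit-depth, multi-camera capture produces before onboard reduction.
pixels     = 4096 * 3072   # one ~12.6 MP area-scan frame
bit_depth  = 14            # full bit-depth readout, bits per pixel
frame_rate = 4             # frames per second
cameras    = 3             # e.g., RGB + SWIR + MWIR, co-boresighted

bytes_per_sec = pixels * bit_depth * frame_rate * cameras / 8
print(f"~{bytes_per_sec / 1e6:.0f} MB/s raw")  # ~264 MB/s, far beyond typical datalinks
```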
  • What is a "Quick Mosaic"?
    Quick Mosaic is a primary software module of TK series payloads that stitches and blends collected imagery into a single, high-resolution, geo-referenced mosaic image. Quick Mosaic outputs can be delivered in real time during flight and include multiple paletted TIFF images that highlight wildfire activity, daytime and IR multi-spectral composite images, oil-on-water mapping, infrastructure inspection products, and much more.
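    Conceptually, the stitching step resembles merging geo-referenced rasters onto a common grid, as in this rough sketch using the open-source rasterio library (an analogy only; Quick Mosaic's real-time blending pipeline is proprietary, and the file names here are hypothetical):

```python
# Rough open-source analogy for stitching geo-referenced frames into one
# mosaic; Quick Mosaic itself is proprietary and blends seams in real time.
# File names below are hypothetical.
import rasterio
from rasterio.merge import merge

frames = [rasterio.open(p) for p in ["frame_001.tif", "frame_002.tif"]]
mosaic, transform = merge(frames)  # resample all frames onto one geo grid

meta = frames[0].meta.copy()
meta.update(height=mosaic.shape[1], width=mosaic.shape[2], transform=transform)
with rasterio.open("quick_mosaic.tif", "w", **meta) as dst:
    dst.write(mosaic)
```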
  • Do your systems support CoT messaging?
    Yes. When an Overwatch Imaging system is installed in conjunction with an FMV gimbal, the system supports CoT messaging to cross-cue the FMV gimbal to a detected object of interest, supporting faster positive identification of that object by the operator.
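    For reference, a CoT cue is a small XML event; a minimal sketch of generating one follows (the uid, type code, timestamps, and coordinates are hypothetical placeholders, not the exact message our systems emit):

```python
# Minimal sketch of a Cursor on Target (CoT) event that could cue an FMV
# gimbal toward a detected object. The uid, type code, timestamps, and
# coordinates are hypothetical placeholders.
from datetime import datetime, timedelta, timezone
import xml.etree.ElementTree as ET

now = datetime.now(timezone.utc)
stamp = lambda t: t.strftime("%Y-%m-%dT%H:%M:%S.%fZ")

event = ET.Element("event", version="2.0",
                   uid="overwatch-detect-0001",   # hypothetical detection id
                   type="a-u-G",                  # unknown ground track
                   how="m-g",                     # machine-generated, GPS-derived
                   time=stamp(now), start=stamp(now),
                   stale=stamp(now + timedelta(minutes=5)))
ET.SubElement(event, "point", lat="45.7054", lon="-121.5215",
              hae="120.0", ce="15.0", le="10.0")  # placeholder coordinates

print(ET.tostring(event, encoding="unicode"))
```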
  • What's the main difference between PT and TK series sensors?
    Overwatch Imaging has two series of imaging payload sensors that are utilized for multi-spectral, wide-area automated surveillance:
    PT Series: a forward-looking pan-tilt gimbal with image-based radar for general search and object detection use cases. More PT SERIES info here.
    TK Series: a nadir-oriented, multi-camera, multi-spectral imaging system. These systems use a step-stare movement that steps across the flight track to support effective fields of view much larger than the field of view of a single camera. More TK SERIES info here.
  • How does Overwatch incorporate sensor and image fusion?
    As the industry transitions to sensor interoperability and autonomy, Overwatch has positioned its products to support the increased data requirements of longer-duration, multi-mission applications that require a suite or pairing of sensors to exchange information and allow platforms to operate autonomously. Combining imagery intelligence about the same scene obtained by various sensors in different modes not only increases shared situational awareness but also supports command and control decisions when minutes matter. Overwatch Imaging has used image fusion to create mission-specific composite imagery since October 2016. Fused multi-band images are created using an image registration technique that is robust to differences in spectral band, image resolution, and lens distortion. Overwatch Imaging's multi-modal fusion capabilities include the ability to integrate commercial off-the-shelf software-defined radios (SDRs). To date, the most common SDR application has been receiving Automatic Identification System (AIS) signals commonly used in the maritime domain, but an SDR's inherent ability to be quickly modified to receive other signals of interest allows for significant growth in this space, including the fusion of airborne or land-based signals.
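    One common approach to cross-band registration (a generic textbook sketch, not necessarily the proprietary technique described above) is feature matching plus a robust homography fit, as in this OpenCV example with hypothetical file names:

```python
# Generic cross-band registration sketch with OpenCV: align an IR frame to
# an RGB frame via feature matching and a homography. A common textbook
# approach, not necessarily Overwatch's proprietary method; file names
# are hypothetical.
import cv2
import numpy as np

rgb = cv2.imread("rgb_frame.png", cv2.IMREAD_GRAYSCALE)
ir  = cv2.imread("ir_frame.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(4000)
kp1, des1 = orb.detectAndCompute(ir, None)
kp2, des2 = orb.detectAndCompute(rgb, None)

matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
matches = sorted(matches, key=lambda m: m.distance)[:200]  # keep best matches

src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)  # RANSAC rejects outliers

ir_aligned = cv2.warpPerspective(ir, H, (rgb.shape[1], rgb.shape[0]))
fused = cv2.addWeighted(rgb, 0.5, ir_aligned, 0.5, 0)  # simple overlay fusion
```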
  • Are Overwatch systems and software interoperable?
    Overwatch Imaging sensors and software are versatile, portable, and collaborative. The sensors are designed to be easily integrated and platform agnostic: they run on standard 28 VDC aircraft power and use a common Ethernet interface. The sensors carry powerful onboard computing and image processing and can be operated over the Ethernet connection with a device as small as a tablet. The sensors and software are currently deployed on a wide range of crewed and uncrewed aircraft and can be used in collaboration (i.e., cross-cueing) with 3rd-party sensors to provide complementary or enhanced image data and intelligence. Overwatch sensors have moved the market from ad-hoc solutions to multi-mission, multi-function solutions. Overwatch's proprietary software is interoperable with 3rd-party sensor platforms to control sensor functions, process data at the edge, and automate sensor operator roles in search, detection, mapping, and ISR missions. Overwatch continuously leverages its software architecture to create custom solutions for specific end users and missions, processing high volumes of data to deliver only relevant information in a timely manner.
  • What image correction does Overwatch software perform?
    Overwatch Imaging applies several real-time corrections to incoming imagery, in two categories: pre-characterized and dynamic.
    Pre-characterized corrections include bad pixel replacement, lens distortion compensation, and anti-vignetting for reflected-band cameras (UV, RGB, NIR, SWIR). Bad pixel replacement replaces every pixel on a pre-characterized list with the median of its neighbors. Lens distortion compensation improves real-time geolocation accuracy. Pre-characterized anti-vignetting yields a uniformly illuminated image from center to corner, despite differences in illumination on the focal plane caused by the lens. These corrections run in near real time, with corrected imagery processed within 3 frames of capture (4 Hz max capture rate).
    Overwatch Imaging also applies dynamic corrections, which are calculated from analysis of the incoming image and previous images. Dynamic bad pixel identification finds bad pixels that were not identified during manufacture; as cameras age, more bad pixels can appear, and these are identified as pixels that are consistently very different from their neighbors across images. Once identified, they are replaced in the same way as pre-characterized bad pixels. For MWIR and LWIR cameras, Overwatch also applies a dynamic non-uniformity correction (NUC), which uses incoming imagery to estimate and remove non-uniformity in the form of fixed-pattern noise, row/column noise, and low-spatial-frequency effects like vignetting.
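    The bad-pixel logic described above can be sketched in a few lines (a simplified illustration of the idea, not the flight code; the threshold value is a hypothetical example):

```python
# Simplified illustration of bad pixel handling as described above: listed
# bad pixels are replaced with the median of their neighborhood, and new
# bad pixels are flagged when they deviate consistently across frames.
# Not the actual flight code.
import numpy as np
from scipy.ndimage import median_filter

def replace_bad_pixels(frame, bad):
    """Replace each (row, col) in `bad` with the median of its 3x3 neighborhood."""
    medians = median_filter(frame, size=3)
    out = frame.copy()
    for r, c in bad:
        out[r, c] = medians[r, c]
    return out

def find_bad_pixels(frames, thresh=50.0):
    """Flag pixels that differ strongly from their neighbors in every frame."""
    devs = np.abs(frames - np.stack([median_filter(f, size=3) for f in frames]))
    consistent = (devs > thresh).all(axis=0)  # deviant across the whole stack
    return list(zip(*np.nonzero(consistent)))
```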