News

July 10, 2018

We have been notified by the Air Force that our BAARS SBIR effort has been selected for a Phase II follow-on. This is extremely exciting news: in this effort we will transition our successful low-latency video processing technique developed in the LLEVS Phase II STTR and the multispectral camera fusion architecture developed in the BAARS Phase I SBIR into a prototype digital augmented reality goggle for evaluation by AFSOC operators, for ultimate incorporation into the BAO kit. The BAARS capability will be a battery-powered day/night system combining multiple cameras and a see-through display for daytime operation, with integral head tracking for augmented reality overlay of data and symbology fed from the interfaced TAK system. The BAARS prototype developed in this Phase II will have a modular design and form the beginning of a roadmap of HMD variants to be evolved as camera, display, and processor technologies improve over time. The goal of BAARS is to increase the battlespace awareness of dismounted operators. As we develop the BAARS augmented reality HMD, we will also be actively identifying other CONOPS and applications that bring battlespace information into a first-person immersive view for the operator.
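As a purely illustrative sketch of the geometry behind such an overlay (our assumptions, not the BAARS design), the hypothetical Python below projects a geo-referenced point of interest, of the kind a TAK feed could supply, into display pixels using a head-tracker pose; every name and parameter here is made up for illustration.

    # Hypothetical illustration: project a geo-referenced point of interest
    # (e.g., from a TAK feed) into HMD display pixels using the head tracker's
    # pose. All names, frames, and parameters are assumptions, not BAARS design.
    import numpy as np

    def project_to_display(point_world, head_rot, head_pos, f_px, cx, cy):
        # world -> head frame, using the tracker's rotation and position
        p = head_rot @ (np.asarray(point_world, dtype=float) - head_pos)
        if p[2] <= 0.0:
            return None  # point is behind the viewer; draw no symbol
        # simple pinhole projection into display pixel coordinates
        return (cx + f_px * p[0] / p[2], cy + f_px * p[1] / p[2])

    # Example: a point 50 m straight ahead lands on the principal point.
    print(project_to_display([0.0, 0.0, 50.0], np.eye(3), np.zeros(3),
                             f_px=1000.0, cx=640.0, cy=400.0))

A fielded see-through HMD would additionally fold in per-eye display calibration and the kind of low-latency warping corrections described in the LLEVS work below.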

September 8, 2017

The Air Force has selected Perceptive Innovations for the AF171-030 Phase I SBIR program to develop a system design concept for BAARS, the Battlefield Airmen Augmented Reality System. In this effort, PI2 will develop a low-latency, multispectral, digital, helmet-mountable near-to-eye augmented reality system for Battlefield Airmen, capable of aiding vision in night, day, and all-weather operations. In this project, we will leverage our ongoing Phase II work with AFRL on the Low-Latency Embedded Vision Processor (LLEVS), which is developing and demonstrating a self-contained, extremely low-latency image processing system for AR/VR, hosted on a 16 nm low-power FPGA SoC that can also host all computing functions for BAARS. PI2 personnel have a long legacy in small-SWaP hardware systems design and development. In Phase I, our team will fully explore the BAARS trade space of multispectral imagers, head trackers, and microdisplays; develop multiple candidate HMD configurations in terms of optics, mechanical design, and ergonomics; and down-select to a BAARS configuration. In a potential Phase II of this effort, the PI2 team will develop a BAARS prototype and demonstrate it to the Air Force for future incorporation into the Battlefield Air Operations (BAO) Kit.

June 21, 2017

Perceptive Innovations has been selected by the Air Force Research Laboratory for a follow-on Phase II SBIR research program for Synergistic EO and RF SAR/SAL Processing. This two-year research project seeks synergy between RF (SAR) and optical (SAL) synthetic aperture imaging, identifying the advantages of sharing a common processor for both functions on a tactical, SWaP-constrained ISR platform. Traditionally, these very different functions would be implemented as completely independent, isolated "stovepipe" payloads. However, future tactical ISR platforms must be small and potentially expendable in contested environments, and there is significant opportunity to improve SWaP efficiency through shared SAR and SAL processing in a common processor. In this Phase II, the PI2 team seeks to identify and quantify such opportunities in an open COTS hardware architecture, creating a testbed to predict those advantages for future Air Force integration of SAL onto a tactical SAR platform with shared processing.
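One concrete flavor of that synergy, as a sketch of our own (not the program's actual architecture): SAR and SAL image formation both lean heavily on FFT-based matched filtering, so a pooled processor can route both sensor streams through one compression kernel. In the hypothetical NumPy below, the waveform, array shapes, and function name are toy assumptions.

    # Illustrative only: FFT-based matched filtering (pulse compression), an
    # operation SAR (RF echoes) and SAL (optical heterodyne returns) image
    # formation can share -- same math, different wavelengths and sample rates.
    import numpy as np

    def pulse_compress(echoes, reference):
        # correlate each received pulse (one per row) with the transmitted
        # reference waveform via multiplication in the frequency domain
        n = echoes.shape[1]
        ref_f = np.conj(np.fft.fft(reference, n))
        return np.fft.ifft(np.fft.fft(echoes, n, axis=1) * ref_f, axis=1)

    # Toy linear-FM chirp standing in for either modality's waveform.
    t = np.linspace(0.0, 1.0, 256)
    chirp = np.exp(1j * np.pi * 50.0 * t**2)
    echoes = np.tile(chirp, (4, 1)) + 0.05 * np.random.randn(4, 256)
    print(pulse_compress(echoes, chirp).shape)  # (4, 256) range-compressed rows

Amortizing kernels like this across modalities, rather than duplicating them in separate payloads, is where a pooled processor earns its SWaP savings.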

May 9, 2016

Perceptive Innovations is selected for an Air Force Phase I SBIR research project AF161-149 “Synergistic EO and RF SAR/SAL Processing”. The objective of this research is to "develop requirements for a modular, scalable, open architecture, and COTS pooled-processor for common high resolution imaging for SAR and SAL sensing modalities on the same platform, identify opportunities for synergistic multi-mode processing, and quantify expected performance metrics including SWaP".

May 3, 2016

Perceptive Innovations has been selected to continue our low-latency advanced image processing work for the Air Force in Phase II of the AF15-AT13 STTR research project. In this two-year development effort, we will develop a prototype of the LLEVS binocular image processor that performs typical image processing functions, including image warping and offset correction, with just a few milliseconds of latency. We will accomplish this using advanced FPGA image processing, creating a scalable pixel processor able to handle multiple image sources at high resolutions and frame rates while driving multiple displays. This work will enable the breakthrough capability of head-mounted displays without perceptible lag.
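The essential idea behind latency measured in milliseconds rather than frames is streaming: an output pixel leaves the processor as soon as the few input rows it depends on have arrived, rather than waiting on a full frame buffer. The toy Python below is our simplification of that principle for a small fixed offset correction, not the LLEVS architecture.

    # Simplified illustration (not the LLEVS design): a streaming offset
    # correction that emits each output row as soon as its source row exists,
    # so latency is about dy row times instead of a whole frame.
    import numpy as np

    def stream_offset_correct(rows, dx, dy):
        buf = []  # toy list; real hardware keeps a circular buffer of dy+1 rows
        for y, row in enumerate(rows):       # rows arrive in raster order
            buf.append(row)
            src = y - dy                     # vertical component of the offset
            if src >= 0:
                yield np.roll(buf[src], dx)  # horizontal shift, then emit

    frame = iter(np.arange(12).reshape(4, 3))  # toy 4-row camera stream
    for out in stream_offset_correct(frame, dx=1, dy=1):
        print(out)  # first output appears after only two input rows

Warping with a larger kernel works the same way: the buffered window grows to the kernel height, which is still far short of a full frame.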

August 17, 2015

Perceptive Innovations is selected for a USSOCOM Phase I SBIR research project SOCOM15-003 “Novel Approaches to Fully Digital Optical Solutions”. The objective of this research is to “Identify and develop novel approaches, designs, materials, and concepts to reduce latency in digital visual systems.” Specifically, the goal of this effort is to develop a TALOS Vision System (TVS) using all-digital processing to enable day/night vision that supports digital fusion and augmented reality overlays at extremely low latency, measured from light into the sensor to light out of the display (objective: 1.2 ms, threshold: 5 ms).
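To put those numbers in perspective (our arithmetic from the stated objectives): even a fast 120 Hz display refreshes only every

\[ T_{\mathrm{frame}} = \frac{1}{120\ \mathrm{Hz}} \approx 8.3\ \mathrm{ms}, \]

so the 1.2 ms objective is roughly a seventh of a single frame period. That rules out any architecture that buffers whole frames between sensor and display; pixels must stream through the processing chain.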

July 22, 2015

Perceptive Innovations is selected for an Air Force Phase I STTR research project AF15-AT13 “Low-Latency Embedded Vision Processor (LLEVS)”. In this effort we are partnered with Auburn University to meet the objective to “Develop architectures for an embedded processor capable of implementing the image processing algorithms required for a digital helmet-mounted display for dismounted soldiers.” In this research we focus specifically on maintaining extremely high-quality image processing at high resolution (objective: 5 megapixels) and high frame rates (objective: 96 Hz) in a binocular digital vision system, with less than one frame of latency measured from digital camera inputs to light in the user’s eyes.
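Those objectives pin down the processing budget; working the numbers (our arithmetic from the stated figures):

\[ T_{\mathrm{frame}} = \frac{1}{96\ \mathrm{Hz}} \approx 10.4\ \mathrm{ms}, \qquad 2\ \mathrm{eyes} \times 5\times 10^{6}\ \mathrm{px} \times 96\ \mathrm{Hz} \approx 0.96\ \mathrm{Gpx/s}, \]

so sub-frame latency means the entire camera-to-eye path must complete in under about 10.4 ms while sustaining nearly a gigapixel per second of throughput.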