12 May 2021
Visidon is exhibiting at the 2021 Embedded Vision Summit, the premier conference for innovators incorporating computer vision and AI into products. This virtual event takes place May 25-28.
By Jari Hannuksela, co-founder and Director of Research and Development at Visidon
Expert Bar session: Wednesday, May 26, 10:30 am – 11:00 am PT
Multi-frame image processing techniques such as high dynamic range (HDR), super-resolution and image denoising are commonly used in embedded systems to enhance image quality, extend dynamic range and enrich color tone. Typically, the implementation combines multiple shots, automatically handling moving objects to avoid ghosting and other artifacts and to make small details visible. Join our Expert Bar to get your questions answered about how to create top-quality images on embedded devices using multi-frame imaging. Ask us how to achieve a good trade-off between performance, energy efficiency and reuse through programmability on SoC computing units, especially accelerators such as DSPs and GPUs.
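As a rough illustration of the idea behind motion-aware multi-frame fusion, the Python/NumPy sketch below averages a burst of frames but keeps the reference pixel wherever another frame deviates strongly (likely motion), which is what suppresses ghosting. The function name, threshold and the assumption of pre-aligned grayscale frames are illustrative only, not Visidon's implementation.

```python
import numpy as np

def fuse_burst(frames, ref_index=0, motion_threshold=12.0):
    """Fuse a burst of pre-aligned grayscale frames (float32 HxW arrays).

    Pixels that agree with the reference frame are averaged to reduce noise;
    pixels that differ strongly (likely moving objects) keep the reference
    value to avoid ghosting. The threshold is in intensity units.
    """
    ref = frames[ref_index].astype(np.float32)
    accum = np.zeros_like(ref)
    weight = np.zeros_like(ref)

    for frame in frames:
        frame = frame.astype(np.float32)
        # Simple per-pixel motion mask: large deviation from the reference
        # frame is treated as motion and excluded from the average.
        static = np.abs(frame - ref) < motion_threshold
        accum += np.where(static, frame, 0.0)
        weight += static.astype(np.float32)

    # The reference frame always matches itself, so every pixel has at
    # least one contribution; the maximum is just a safety net.
    weight = np.maximum(weight, 1.0)
    return accum / weight
```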
1) The Visidon Depth SDK: Image Stylization Optimized for Mobile and Embedded Platforms
This technology demo of the Visidon Depth SDK shows both still photos and a live video feed on mobile platforms. Visidon’s Depth SDK includes feature sets for single and dual cameras and a gallery SDK for modifying results as a post-process. Results are demonstrated for single- and dual-camera configurations with real-world examples, from the input image through the computed depth map to the final stylized image. (Representative: Otso Suvilehto, Technology Lead)
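To give a concrete picture of what going from input and computed depth to a stylized image can look like, here is a small illustrative sketch of depth-driven background blur (synthetic bokeh). It is not the Depth SDK API; the function name, parameters and the assumption of an already-computed, normalized depth map are hypothetical.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def stylize_bokeh(image, depth, focus_depth, depth_tolerance=0.1, max_sigma=8.0):
    """Apply a depth-driven background blur to an HxWx3 float image.

    `depth` is an HxW map normalized to [0, 1]. Pixels whose depth lies
    within `depth_tolerance` of `focus_depth` stay sharp; pixels further
    away are blended toward a blurred copy of the image.
    """
    # Blur each color channel with a fixed Gaussian kernel.
    blurred = np.stack(
        [gaussian_filter(image[..., c], sigma=max_sigma) for c in range(3)],
        axis=-1,
    )
    # Blend weight grows from 0 (in focus) to 1 (far from the focus plane).
    alpha = np.clip(np.abs(depth - focus_depth) / depth_tolerance - 1.0, 0.0, 1.0)
    return image * (1.0 - alpha[..., None]) + blurred * alpha[..., None]
```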
2) Visidon’s Image Noise Reduction Solutions Optimized for Mobile and Embedded Platforms
This demonstration provides an overview of Visidon’s noise reduction technologies for embedded platforms. It covers various image and video noise reduction techniques and shows noisy input and denoised output image pairs. It also introduces noise reduction with several methods, including highly optimized multi-frame fusion and filtering technologies as well as efficient convolutional neural network (CNN) approaches. (Representatives: Valtteri Inkiläinen, AI Software Engineer & Saku Moilanen, AI Software Engineer)
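For readers unfamiliar with CNN-based denoising, the sketch below shows the common residual-learning pattern, in which the network predicts the noise component of the input and subtracts it. The architecture, layer counts and names are purely illustrative and are unrelated to Visidon's models.

```python
import torch
import torch.nn as nn

class TinyDenoiser(nn.Module):
    """A small residual CNN denoiser, for illustration only."""

    def __init__(self, channels=3, features=32, depth=5):
        super().__init__()
        layers = [nn.Conv2d(channels, features, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(features, features, 3, padding=1), nn.ReLU(inplace=True)]
        layers.append(nn.Conv2d(features, channels, 3, padding=1))
        self.body = nn.Sequential(*layers)

    def forward(self, noisy):
        # Residual learning: estimate the noise, then remove it.
        return noisy - self.body(noisy)

# Example: denoise a batch of 3-channel images with values in [0, 1].
model = TinyDenoiser()
noisy = torch.rand(1, 3, 128, 128)
clean_estimate = model(noisy)
```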
Our denoise demo was chosen as one of the best demos at this year's Embedded Vision Summit!
The demo video is already available as a sneak peek in the summit demo directory (see the link under “Get a flavor of what to expect”).
Technology demos run in parallel on every event day, Tuesday through Friday, during the demo hours listed below.
Summit demo hours:
Tuesday, May 25: 11:00 am – 12:00 pm PT
Wednesday, May 26: 1:00 pm – 2:00 pm PT
Thursday, May 27: 12:00 pm – 1:00 pm PT
Friday, May 28: 10:00 am – 11:00 am PT