July 14, 2010

360 + fisheye

by Stefan de Konink

A short update from the stitching team.

Jan Martin from diy-streetview.org gave the demo content a shot in Hugin. This resulted in the following beautiful stitch.

Next Gen Stitch

Bruno Postle also sent us his attempt at the new footage.

Bruno's attempt

Wim Koornneef (dmmdh productions) created an online, interactive panorama view.


4 responses to “360 + fisheye”

  1. Wim Koornneef says:

    Just like Jan Martin and Bruno Postle I gave the demo images a try, but instead of Hugin I used PTGui Pro to stitch the 9 source images.
    To reduce the hiccups at both sides of the street as much as possible I used PTGui’s viewpoint (VP) correction, and to even out the white balance differences between the images I used Exposure correction.
    I spiced up and sharpened the equirectangular output a bit (colors, brightness etc.) and made an interactive spherical Flash panorama of it that you can see here:

    http://www.dmmdh.nl/panos/demo_Elphel-Eyesis_16072010/output.html

    I hope you like the panorama.

    Wim Koornneef
    dmmdh productions
    http://www.dmmdh.nl

  2. DeadlyDad says:

    Those look amazing! Two things occurred to me:

    1. As the overlapping sections contain (relatively) the same thing, the average of the color differences would equal the amount of change needed to bring both to (relatively) the same white balance.

    2. If, instead of 45 degree lenses, you used 90 degree ones, you would lose resolution, but you would also be able to set 4 cameras to overexpose and 4 to underexpose, and use the resulting sets to generate one HDR set.

    2a. Using the subtle differences between pixels with the two sets, you can recover some of the lost resolution with Super Resolution techniques ( http://medlibrary.org/medwiki/Super-resolution )

    3. Adding GPS, accelerometer, & electronic compass to the unit adds enough information to create 5 (or more) chronologically separated stereo pairs per second. This not only allows 3D model generation, but the multiple, perspective-correct textures of each surface can be greatly enhanced through SR, and the SR-enhanced textures can be inserted back into the picture sets.

    Possible? Certainly. Practical? Well…that depends on the project. I can see the military drooling over the ability to have a UAV gather a complete 3D map of an area in a recon mission. Increase the frame rate sufficiently, and you have panoramic video – add in a VR headset, and you have a product that makes using a treadmill something to look forward to.
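    [Editor's note: the overlap-averaging idea in point 1 above can be sketched roughly as follows. This is a hypothetical illustration in NumPy, not part of any Elphel or PTGui pipeline; `overlap_gain` is an invented name, and it uses ratios of per-channel means, a common practical variant of comparing color differences in the shared region.]

```python
import numpy as np

def overlap_gain(img_a, img_b):
    """Estimate per-channel gains that bring two overlapping crops
    to a common white balance.

    img_a, img_b: float arrays of shape (H, W, 3) showing the *same*
    scene region as seen by two adjacent cameras. Returns the gains
    to apply to img_b so its channel means match those of img_a.
    """
    mean_a = img_a.reshape(-1, 3).mean(axis=0)
    mean_b = img_b.reshape(-1, 3).mean(axis=0)
    return mean_a / mean_b  # per-channel ratio of the overlap averages

# Toy example: img_b is img_a with an artificial blue cast.
rng = np.random.default_rng(0)
img_a = rng.uniform(0.2, 0.8, size=(64, 64, 3))
img_b = img_a * np.array([1.0, 1.0, 1.25])

g = overlap_gain(img_a, img_b)        # ≈ [1.0, 1.0, 0.8]
corrected = img_b * g                 # overlap now matches img_a
```

    Because adjacent cameras image (relatively) the same content in the overlap, the estimated gain can then be applied to the whole frame, not just the overlap region.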

  3. andrey says:

    We do not plan to increase the field of view of each lens, but we are looking to increase the dynamic range – it is somewhat possible with the sensors we currently use. We even have a flavor of the JP4 format dedicated to compressing such increased-dynamic-range images. It is not much, but we can gain approximately 3:1. Making HDR images from two cameras is rather difficult without losing resolution and/or DOF – we tried to make the parallax small, but it is not zero, of course.

    Andrey

  4. You seem to have mislabeled the pano images above. The 2nd is close to a ‘beautiful stitch’, the first could charitably be called an ‘attempt’. Whose is which?
