Creating a digital dolphin from reality – An introduction to 3D photogrammetry
The results of our interns’ first experiments with 3D photogrammetry.
Written by David ‘Ed’ Edwards, Aug 21 2018
How do you create an exact digital replica of a real creature?
That’s a broad version of the question we asked our Scientific Internship students via our experimental technologies module. The answer (spoilers) is 3D photogrammetry.
Stereoscopic photogrammetry (the use of photographs to take measurements) has seen frequent use in ecological research. Marine Dynamics Academy’s own Toby Rogers’ Master’s thesis (which can be enjoyed for free at ResearchGate) focused on its use.
By comparison, 3D photogrammetry (the use of photographs to generate digital 3D models) is still somewhat in its infancy, likely due to the numerous technical considerations involved. It is, however, seeing increased application in the videogame and visual effects industries as an alternative to artists creating digital assets from scratch.
Great for Call of Duty, pretty irrelevant to those of us obsessed with the sea.
But we do believe the science has educational and outreach potential, within the context of marine research and conservation. Through our Scientific Internship, we have started to experiment with possible pipelines and applications.
This update aims to provide some background on the use of 3D Photogrammetry at the Marine Dynamics Academy. We’ll be summarising our initial experiments and the processing pipeline that our interns were involved with. We’re not going to go into incredible amounts of detail, but by the end you’ll have a fundamental knowledge of how we achieved our results.
3D photogrammetry experiment one – common dolphin
Building on some basic concepts
Back in November 2017, our team performed image collection for photogrammetry tests on two subjects: a smooth-hound shark and a great white shark.
Neither animal had been killed for this express purpose. While the great white shark’s demise came under suspicious circumstances (it is believed to have been a fishing target, an illegal practice in South Africa), the smooth-hound was caught by and purchased from a local fisherman as part of their daily commercial activities.
Members of the Marine Volunteers program assisted with some of the photograph collection on both. The initial results were positive: we were able to generate highly detailed digital reconstructions of each animal from approximately eight hundred photographs apiece.
We’ve a couple of papers in progress on these and will share greater details about the processes used at a later date.
Photographing the common dolphin
We launched this module as part of the Scientific Internship in June 2018. Our very first group of interns were essentially ‘test subjects’ for both the learning materials and practical application of 3D photogrammetry. Emma Butterworth, Danielle Kelly and Gary Lyon comprised our first team, excited to be trying something completely new.
At least, that’s what they told me. They may have just wanted to humour me, but I’ll take what I can get.
I took them through a quick lecture on the goals and the approach to be taken when photographing our subject: in this case, a deceased common dolphin that was later to be dissected. I emphasised then, as I do now, that these are the earliest stages of 3D photogrammetry’s use at the Marine Dynamics Academy. This is all about testing different approaches and seeing what works and what doesn’t. That will allow us to gradually build a reliable pipeline from which future research can be undertaken.
With my ramblings concluded and the interns (miraculously) still awake, we went outside and proceeded to take the required photographs. I gave a quick practical demonstration of the angles and distances to be used, with the emphasis always on coverage.
We want as many photos as possible, ideally with incremental steps between positions. The software we use will look for patterns across the surface of the subject and use those to calculate where the various photographs match up. Therefore, the more photographs we have that share areas of the subject with each other, the easier this will be for the software. This photograph collection process was developed with Richard Harper of Staffordshire University, a specialist in visual media sciences.
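If you’re curious what ‘matching up’ looks like in practice, here’s a minimal sketch in Python using OpenCV. It’s purely illustrative (not part of our pipeline, and certainly not how Reality Capture works internally): it scores how well two neighbouring photos overlap by counting shared surface features. The filenames are hypothetical.

```python
import cv2

def overlap_score(path_a: str, path_b: str) -> int:
    """Rough count of surface features matched between two photos.

    A healthy capture sequence should score well between neighbouring
    shots; a near-zero score suggests a gap in coverage.
    """
    img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(nfeatures=2000)            # detect distinctive surface patterns
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    if des_a is None or des_b is None:
        return 0                                    # a blank or unreadable image

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    pairs = matcher.knnMatch(des_a, des_b, k=2)

    # Lowe's ratio test: keep only matches clearly better than the runner-up.
    good = [p[0] for p in pairs
            if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
    return len(good)

# e.g. overlap_score("dolphin_041.jpg", "dolphin_042.jpg")  # hypothetical files
```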
Once my demonstration was completed, the interns were left to capture the rest of the photographs themselves. This might seem like trial by fire, but we strongly believe in learning by doing when it comes to training. They’d taken notes, demonstrated a strong understanding of the theory and were more than capable of doing what was needed.
A little under seven hundred and fifty photographs later, we had our materials:
The dolphin was later used in an educational dissection, which, along with the interns’ experience of it, you can read about here.
Digital Reconstruction
Processing photogrammetry data
The software we use for reconstructing a 3D model from these images is Reality Capture. Alternatives do exist on the market, such as Agisoft’s PhotoScan, but for our use of photogrammetry we’ve found Reality Capture to be by far the superior product.
The 749 photos taken during our shoot are imported directly into the software, running on a 64-bit Windows machine boasting 48GB of RAM, dual Intel Xeon X5680 processors and an Nvidia Quadro K4000.
Such an amount of RAM might sound excessive, but when it comes to 3D photogrammetry, you want as much juice as you can get.
Previewing the results
Once the software has finished processing the images, it provides us with a series of components.
Think of components as essentially being groups of images that the software has automatically been able to align and thus produce a rough visualisation from. The best-case scenario would be that we’ve taken so many images at the correct angles that the program is able to create a single component comprising every photograph our interns have taken.
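For intuition only (Reality Capture’s real alignment is far more sophisticated and proprietary), a component behaves like a connected component in a graph: photos are the nodes, and each successful pairwise alignment adds an edge. The little Python sketch below groups photos that way.

```python
def find_components(num_photos: int, aligned_pairs: list[tuple[int, int]]) -> list[set[int]]:
    """Group photos into 'components': sets linked by successful alignments."""
    parent = list(range(num_photos))           # union-find over photo indices

    def find(i: int) -> int:
        while parent[i] != i:
            parent[i] = parent[parent[i]]      # path compression
            i = parent[i]
        return i

    for a, b in aligned_pairs:
        parent[find(a)] = find(b)              # merge the two groups

    groups: dict[int, set[int]] = {}
    for i in range(num_photos):
        groups.setdefault(find(i), set()).add(i)
    return list(groups.values())

# With 749 photos and no matches bridging the two flanks of the dolphin,
# the photos fall into many separate groups -- exactly what we saw (56 of them).
```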
Unfortunately, that’s not the case here. My bad.
You can see that the above component has roughly been able to reconstruct down the right-hand side of the dolphin’s body. It has automatically used 216 of the 749 photographs we imported. However, we are completely missing the other side.
Conversely, another of our components is almost the exact opposite. Made up of 148 photographs, it has allowed Reality Capture to reconstruct a portion of the dolphin’s left-hand side.
What this suggests is that although the photos featured in each of these components are of a high enough quality, the program is struggling to find where they line up with other components (of which we have 56). This usually means that we don’t have enough coverage between the images taken for surface points to be lined up. Ideally, we’d go back and take more photographs, but that’s not an option in this case.
Thankfully, we can address this to some extent manually.
We do this through the use of control points. These are markers we use to identify a point on the surface of the object which, though visible in two photographs, hasn’t been unified into a single component. By selecting photographs that we know share a degree of surface area, despite sitting in separate components, we’re basically telling Reality Capture “Hey, these two are looking at the same thing”, which it can then use to refine its reconstruction of the subject.
What’s really important is that we place these control points on images which are in separate components. This helps the program stitch together images it hasn’t been able to align automatically. It can be time-consuming, but by being careful and considered about which areas we use as control points, the process gradually speeds up.
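To make that concrete, here’s a rough sketch of what a control point boils down to as data. The class, labels and filenames are all mine, invented for illustration; this is not Reality Capture’s API.

```python
from dataclasses import dataclass, field

@dataclass
class ControlPoint:
    """One named surface feature, marked by hand in several photos."""
    label: str                                              # e.g. "dorsal_fin_tip"
    observations: dict[str, tuple[float, float]] = field(default_factory=dict)

    def mark(self, photo: str, x: float, y: float) -> None:
        """Record the pixel position of this feature in one photo."""
        self.observations[photo] = (x, y)

# Marking the same physical point in photos from two separate components
# tells the solver those components share surface area:
cp = ControlPoint("dorsal_fin_tip")
cp.mark("right_flank_112.jpg", 1032.5, 441.0)   # photo from component A (hypothetical)
cp.mark("left_flank_087.jpg", 988.0, 456.5)     # photo from component B (hypothetical)
```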
With those control points added, we process again and Reality Capture gives us a new reconstruction. What’s really useful is that it works from the existing components, rather than having to start from scratch every time.
The result is starting to look a little more like an actual dolphin:
This is a massive step forward, leaving far fewer components in our scene and proving that our approach to control points is an effective one.
There’s still much to be done, so I essentially repeat this process on images yet to be consolidated into the main component.
Rendering the photogrammetry reconstruction
A few hours later, I’ve stitched together the vast majority of the components and am happy with the rough reconstruction. If this were destined for a high-quality rendering, such as a video or Virtual Reality demonstration, I’d likely spend the time getting every last photograph I possibly could in there. Since the purpose here is essentially to test some of our practices and get a rough visualisation together, I’m happy to move forward.
I render the reconstruction at a normal level of detail, rather than high, which would take significantly longer. The result is a flat colour atop our 3D model, meaning we get to see all the small details and surface variations that the software was able to pick up.
The results are quite staggering:
Scratches, wear and tear, surface variations: photogrammetry gives us an incredible amount of detail to scrutinise. Much of this would be difficult to see even in the field, as the colour, specularity (how shiny it is) and other elements can make it hard to perceive the underlying, three-dimensional details.
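Why does an untextured render reveal so much? Because its shading depends on geometry alone. The textbook diffuse (Lambertian) model sketched below (not necessarily the exact shading Reality Capture’s viewport uses) shows the idea: a pixel’s brightness follows the angle between the surface normal and the light, so even a tiny dent tilts the local normal and stands out from its surroundings.

```python
import numpy as np

def lambert_shade(normal: np.ndarray, light_dir: np.ndarray) -> float:
    """Diffuse brightness of a surface point: brightest when facing the light."""
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    return max(0.0, float(n @ l))       # surfaces facing away receive no light

# A flat patch versus one tilted by a small dent, both lit from straight on:
print(lambert_shade(np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, 1.0])))  # 1.0
print(lambert_shade(np.array([0.3, 0.0, 1.0]), np.array([0.0, 0.0, 1.0])))  # ~0.96
```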
For the most part, our interns captured enough photographs to produce an accurate, impressive reconstruction of a common dolphin.
There are exceptions though, as can be seen here, where the right-hand side of the beak is missing a lot of detail:
It’s not great for this particular model, but it highlights the importance of ensuring image focus, quantity and coverage meet the required specification. We do have further images of this area, many of which were given manual control points, but nevertheless there just isn’t enough data for the program to produce a good reconstruction.
But that’s the whole point of the process at this stage – see what works, see what doesn’t and improve accordingly.
Beauty Renderings
The renders above look cool, but hardly realistic. What we still need to do is overlay the actual colour captured by the photographs onto the newly rendered 3D reconstruction.
Which sounds mega complicated, but is literally just a click of a button.
I love computers… when they behave.
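For the curious, what that click is doing conceptually is projecting each point on the mesh back into the photographs that saw it and sampling their colour. Here’s a bare-bones, per-vertex sketch assuming a simple pinhole camera model; Reality Capture’s actual texturing is far more involved (it blends many photos and handles occlusion).

```python
import numpy as np

def colour_vertex(vertex: np.ndarray, camera: np.ndarray, image: np.ndarray):
    """Project one 3D vertex into a photo and sample its colour.

    camera: 3x4 pinhole projection matrix (intrinsics @ [R | t]).
    image:  HxWx3 array of pixel colours.
    Returns None if the vertex isn't visible in this photo.
    """
    x, y, w = camera @ np.append(vertex, 1.0)   # homogeneous projection
    if w <= 0:
        return None                             # behind the camera
    u, v = int(round(x / w)), int(round(y / w))
    height, width = image.shape[:2]
    if 0 <= v < height and 0 <= u < width:
        return image[v, u]                      # colour assigned to this vertex
    return None
```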
Now this looks like a common dolphin!
Adding the texture takes the whole thing to another level. This was once a living, breathing creature, and we now have a (for the most part) physically accurate, photorealistic digital recreation of it. We can view it from various angles and, but for the aforementioned areas where there simply weren’t enough photos taken, it looks pretty great:
As far as missing photographs are concerned, it’s really the areas around the beak and under the head that suffer the most.
Areas which protrude from the main form are always tricky because, ideally, we need to get the camera right up close to some of those difficult-to-reach spaces. But again, the whole point of this exercise was to identify such problems so we can begin working on the best solutions.
Conclusion
Clearly, there are lessons to be learned and improvements to be made to our process, but as a practical exercise this has been a great way to introduce Marine Dynamics Academy interns to 3D photogrammetry. Before we reconstruct our next subject, our learning materials will be updated with greater clarity and more emphasis on some of the technical considerations.
What's the future of 3D photogrammetry at Marine Dynamics Academy?
Research through Virtual Reality
At present, our focus is purely on refining our processes for data capture. Defining clear rules and approaches will ensure we’re able to capture everything we need efficiently.
However, we have also begun making initial steps into potential research projects.
I’m currently working on a Virtual Reality solution with Richard Harper and Tom Vine of Staffordshire University that could potentially provide scientists with a suite of research tools. These are aimed at allowing greater flexibility in data collection than is possible in a live environment.
The aforementioned deceased great white shark, recovered in November 2017, has been our test subject for these developments. While we’re not quite ready to share further details or showcase any of the technical features just yet, we’re excited by the results so far, and Marine Dynamics Academy will be the first place to come for any news.
Credits
Thanks to Richard Harper of Staffordshire University for consulting on the data capture and reconstruction process, to the local fishermen for their help in retrieving the animal bodies and, of course, to Emma, Danielle and Gary for their great work on the photography.
We look forward to sharing future developments on 3D photogrammetry at Marine Dynamics Academy with everyone soon!