
PHOTOGRAMMETRY: 3D Models from Photos

(From Autodesk’s website:) What is photogrammetry?

Photogrammetry is the art and science of extracting 3D information from photographs. The process involves taking overlapping photographs of an object, structure, or space, and converting them into 2D or 3D digital models.

Photogrammetry is often used by surveyors, architects, engineers, and contractors to create topographic maps, meshes, point clouds, or drawings based on the real world.

I’ve written in past posts about 360° panorama photos (360° Panoramas!, More 360° Panoramas!, and 360° Panoramas (again)). In a 360° panorama, the camera (the viewer) is at a single location looking out on the world. Today, we will visit what seems to be the opposite situation.

3D models are created by taking a series of photos of an object from many different directions. The object could be something small, like a sculpture; something large, like a movie set; or something in between, like a building. For a small subject, the camera could be mounted on a tripod while the object is turned to different positions, or the camera could be moved around the object to capture many different views. For a much larger subject, the camera could be carried by a drone, for instance, and flown around a very large area to take many images.

I’ve played with 3D models a bit over the last few years. Once you have acquired images of your target, they must be processed in some way to create a 3D object, usually a “mesh” of many triangles that approximates the original subject. Much of the software to do this is relatively expensive (hundreds or thousands of dollars) or rented by the month. However, not all of it is. After looking at other options, I found OpenDroneMap (ODM). ODM was apparently created originally to build maps and/or models from photos taken by a drone, but the software doesn’t really care whether the camera was on a drone, handheld, or on a tripod.
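ODM is driven from the command line (I ran it inside WSL, more on that below). Just to give a feel for what a run looks like, here is a rough sketch that launches ODM’s Docker image from Python; the folder layout and project name are placeholders, and the exact invocation should be checked against the ODM documentation.

```python
# A rough sketch of launching an ODM run from Python, assuming Docker and the
# opendronemap/odm image are already installed. The dataset path and project
# name are placeholders; ODM looks for the photos in <dataset>/<project>/images.
import subprocess
from pathlib import Path

dataset_root = Path.home() / "odm-datasets"   # hypothetical folder holding projects
project_name = "manger"                       # hypothetical project; photos in manger/images

subprocess.run(
    [
        "docker", "run", "--rm",
        "-v", f"{dataset_root}:/datasets",    # mount the dataset folder into the container
        "opendronemap/odm",
        "--project-path", "/datasets",
        project_name,
    ],
    check=True,
)

# When it finishes, the textured mesh ends up back in the project folder
# (e.g. manger/odm_texturing/), ready for cleanup and viewing.
```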

Using ODM, I was able to successfully process several sets of photographs I have accumulated over the last few years. My smallest models were created from about 40 photos shot with my cell phone and the largest I’ve created so far used a couple hundred photos shot with a drone. People successfully use ODM with 5,000+ photos, although that may take days to process, even on a powerful computer.

Once you have created a 3D model, you need special software to view it. Surprisingly, current versions of Windows do come with a simple 3D viewer, but it doesn’t seem to be very robust. There are also websites where the 3D model can be uploaded and then viewed in a web browser.
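If you’d rather view a model locally without relying on the Windows viewer or a website, a few lines of Python will do it. This is just a sketch using the trimesh library (one of several mesh-viewing options); the file name is a placeholder.

```python
# A minimal sketch of viewing a mesh locally with the trimesh library
# (pip install "trimesh[easy]"). The file name is a placeholder.
import trimesh

mesh = trimesh.load("odm_textured_model_geo.obj", force="mesh")  # placeholder file name
print(len(mesh.vertices), "vertices,", len(mesh.faces), "triangles")
mesh.show()   # opens an interactive window: drag to rotate, scroll to zoom
```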

Below is one of the first models I created. It is a tabletop scene of a small wood manger. This model was created from 48 photos shot with my DSLR as I walked around the table, taking photos at different heights to be sure everything was visible. Click the “play” button, wait for it to load, then use your mouse left button to spin the model around on your screen, and your mouse scroll wheel to zoom in and out. To see the model full screen, press the “f” key. (I recommend trying that – press the “f” key again to exit full screen mode.)

The photo below is one of the 48 photos that make up the model above.

Another 3D model I created is an interesting rock at the entrance to the Arizona-Sonora Desert Museum (ASDM). This one was created from 40 photos I shot with my cell phone as I walked around it several times.

Several other programs also played a part in creating the models shown here. First is WSL – Windows Subsystem for Linux. The version of ODM I used runs on Linux, so WSL allowed me to run it in a Linux environment on my Windows computer. I used Blender to clean up (remove) the extraneous parts of the 3D meshes, which were then uploaded to Sketchfab for viewing. Other programs played more minor roles. Expect to see more about Blender in this blog in the future.
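Blender’s cleanup step can also be scripted. Below is a rough sketch, run from Blender’s scripting workspace, that assumes the ODM mesh has already been imported and is the active object; it simply deletes stray geometry that isn’t connected to the main surface (I did the rest of the cleanup by hand).

```python
# A rough sketch of basic cleanup inside Blender, assuming the ODM mesh has
# already been imported and is the active object. Deletes loose vertices,
# edges, and faces that aren't attached to anything.
import bpy

bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')
bpy.ops.mesh.delete_loose()              # remove disconnected stray geometry
bpy.ops.object.mode_set(mode='OBJECT')
```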

A 3D Printed Thermometer Sensor Holder

When camping, I frequently would like to know the temperature outside our 2020 T@B 320S Boondock Edge trailer as well as inside. I purchased a “ThermoPro TP60S Digital Hygrometer Indoor Outdoor Thermometer” through Amazon (if you purchase from this link I’ll earn a small commission at no additional cost to you) and mounted the indoor module on the wall next to the Alde control panel using Velcro.

Now, where to locate the outside sensor? I placed it in the propane tank / battery box, just setting it on the bottom. This seemed to work fine. The outside temperature reading was relatively accurate except when the sun was shining directly on the box. The problems I could see were that the sensor picked up a lot of dirt, and occasionally some moisture, from sitting on the bottom. I was also concerned about dropping something on it and damaging the unit.

I have finally gotten around to moving the sensor to a safer location. I figured I could mount it over the flange at the back of the propane tank / battery box and it would be safely out of the way. When the lid is closed, there is a small gap below the lid where the mount can sit without interfering with the lid closing. Using Fusion 360, I designed a holder for the sensor.

I first measured the width of the flange at the top of the box and eyeballed how I thought I would like the mount to sit on that flange. I measured the sensor and made a rough drawing of what I wanted. Then I created a test part in Fusion 360: just the end of the sensor mount and about 10mm of the body. That way I could test the fit in a reasonable amount of print time without using too much plastic filament. Here’s my first iteration:

Sensor Holder Test #1

I then tested this, and found that it didn’t hang the way I had hoped. It needed something to keep it from tilting.

So, on to iteration #2. I added a little leg to keep it from tilting.

This worked fine. Now that I had tested the hanger, and believed it to be correct, I added the rest of the structure in Fusion 360, and added holes in the bottom to improve air flow to the sensor, resulting in the completed sensor holder.

Available on Thingiverse at www.thingiverse.com/thing:4917124.

Night Drone Photography

With recent changes to the FAA rules, it is now possible for drone pilots to fly at night without jumping through as many hoops as before. To be eligible to fly at night, I had to take the update course for my “Part 107” certificate. I also must have an anti-collision light on my drone that is visible for 3 miles in any direction (that sucker is bright!).

When flying at night, you need to be very aware of your surroundings so you don’t hit something you can’t see. It’s best to check out the location during daylight hours to be sure there are no wires or other obstacles you might encounter.

I flew my first night flight a couple weeks ago just to try it out. I flew from my deck, which is surrounded (and partially covered) by trees. I know where they are, and where on my deck I am clear of overhead obstructions. Landing is the tricky part — making sure that I am not descending into the trees or onto my roof. I was successful in flying a short flight and taking a few photos.

Several days later I flew from the Edmonds waterfront. I walked up the beach until I was clear of other beach-goers and had a good place to take off and land (a flat, almost-level rock). My goal was to get some shots showing the Edmonds Ferry at or near the dock and the city lights of Edmonds. I was successful.

I flew my DJI Mini 2, which is very lightweight and has a 12-megapixel camera. I would like to try again with my DJI Phantom 4 Pro, which is heavier and has a better-quality 20-megapixel camera. I think the heavier drone will probably be a bit more stable, which should improve the sharpness of photos taken at the slow shutter speeds required. That said, looking at the photos, the sharpness is quite good considering the camera is “sitting” on a platform floating in the air, subject to wind and motor/propeller vibration. Shutter speeds were between one-third of a second and one second, with ISO varying from 1600 to 3200. With the small sensor on the DJI Mini 2, these high ISOs made for somewhat grainy photos.

The photo below was shot as a series of nine RAW photos. The drone was positioned at one point in the sky, and three photos were shot using exposure bracketing (each photo with a different exposure) to capture the wide brightness range. Then the drone was rotated and another three shots were taken; I did this for three different headings. Each set of three photos was merged in Adobe Lightroom Classic to form one HDR photo, resulting in three HDR photos, each with a slightly different view. Those three HDR photos were then merged into a single panorama, again using Lightroom, to create the final image.

Edmonds Waterfront
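Lightroom handles both the HDR merges and the panorama stitch through its GUI. Just to illustrate the same two-step idea in code, here is a rough sketch using OpenCV; the file names are placeholders, and Mertens exposure fusion is a simpler stand-in for Lightroom’s RAW-based HDR merge.

```python
# A rough sketch of the merge-then-stitch idea using OpenCV
# (pip install opencv-python numpy). File names are placeholders, and
# exposure fusion is a simpler stand-in for a true RAW HDR merge.
import cv2
import numpy as np

def fuse_bracket(paths):
    """Blend one three-shot exposure bracket into a single well-exposed image."""
    images = [cv2.imread(p) for p in paths]
    fused = cv2.createMergeMertens().process(images)   # float image, roughly 0..1
    return np.clip(fused * 255, 0, 255).astype("uint8")

# Three brackets of three exposures each, one bracket per drone heading.
brackets = [
    ["left_under.jpg", "left_normal.jpg", "left_over.jpg"],
    ["center_under.jpg", "center_normal.jpg", "center_over.jpg"],
    ["right_under.jpg", "right_normal.jpg", "right_over.jpg"],
]
fused_views = [fuse_bracket(b) for b in brackets]

# Stitch the three fused views into one panorama.
status, pano = cv2.Stitcher_create().stitch(fused_views)
if status == 0:                                        # 0 == Stitcher::OK
    cv2.imwrite("edmonds_waterfront_pano.jpg", pano)
```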

Edit 2/13/22: If you want to see the above photo in larger size, look at my Flickr album here.