Tag Archives: Photography

Drone Photos Without a Drone

Last year when we were in Tucson, staying at the Lazy Days KOA, I wanted to take a decent photo of our campsite. The best shot would have been from a drone, but I didn’t think the KOA would allow me to fly my drone there. Not to mention that campground is in the airspace of a commercial airport and two(!) military airbases. There really was no chance of getting permission to fly a drone there!

So, what to do? I did some searching and found photos that looked like drone shots but were taken without a drone. How did they do it?

One person who had some pretty interesting photos had taken them with a 10′ “selfie” stick. That’s right, 10 feet! That sounded great to me, so I purchased one. Click here to get your own from Amazon (this is an affiliate link).

OK, it’s not really 10 feet. It’s actually 3 meters, or about 9.8 feet. It’s a carbon fiber pole that extends in sections, so you don’t have to extend it to the full 3 meters. When collapsed, it’s only about 18″ long.

When I hold this pole up over my head, the camera is about 16′ above the ground. Perfect for many photos.

I attached my GoPro to the end of the pole and used my cell phone to control the camera. I could see what the camera was seeing through my phone, and snap the shot (or shoot a video). It takes a little practice to hold the pole steady and aim it where you want.

The pole seems to be made well. It locks into position and stays there until you want to collapse it (a slight twist at each section releases it).

Using this pole, I was able to get some pretty nice shots of our KOA campsite, as seen below.

Tucson Lazy Days KOA campsite
Tucson Lazy Days KOA campsite

3D Model of Ponderosa Pine Tree Bark

Here is my latest diversion. This is the bark of a Ponderosa pine tree near Camp Sherman, Oregon. I shot 26 photos with my Pixel 5a cell phone, then processed the photos with WebODM into a 3D model.

I did further processing in Blender to eliminate a few extraneous bits and then created an animation of the model.

Below is the animation I created using Blender. Click on the image to start the video, then press “f” to view it full screen; press “f” again to exit full screen mode (I suggest viewing it in full screen to really see it):

This model can also be seen in Sketchfab, a 3D viewer. Click on the image below, and after the model loads, click and drag to see it in 3D. Same as for the animation, press “f” to view it full screen and “f” again (or escape) to exit full screen mode:

Model of my Deck

I posted the following in the WebODM (Open Drone Map) forum. Some of it is a bit technical for this blog, but I thought it might be interesting to some people.

Just for fun, and to learn more about WebODM and Blender, I flew my DJI Mini 2 drone around my deck to create a model. My deck has trees on 3 sides of it and overhanging it. Flying between the tree branches to get some of the shots was a bit challenging. There was a bit of a pucker factor a few times when flying inches below a branch and the drone started drifting upwards! (The Mini 2 has only forward and downward sensors, which is good here – I could never have flown that close to the trees if there were active sensors in the other directions.)

I shot 142 images, processed them, and saw some areas that didn’t seem to have adequate coverage. So I shot another 49 images to fill in those areas. That improved the places I concentrated on, but some other areas seemed to decrease in quality. The glass railing and the adjacent sunroom windows and doors caused some oddities, as expected. One thing I found odd is that the deck and other items seem to be “reflected” in the undersides of the tree branches.

My processing system is Windows 10 Pro on a laptop with 64GB of RAM. I initially processed this using Docker/WebODM, but ran out of memory when I increased pc-quality to ultra. I then processed it with the native Windows version of WebODM, and it took 24+ hours. The WebODM timer showed 36 minutes, so I don’t have accurate timing information…

I postprocessed this with Blender to clean up some of the extraneous parts of the model, but purposefully left most of the trees. To get the upload files under the 100MB limit for the free Sketchfab account, I decimated the model in Blender to 70% and converted the PNG files to JPG.

Processing parameters:

debug: true, mesh-octree-depth: 13, mesh-size: 1000000, min-num-features: 30000, pc-quality: ultra, resize-to: -1, verbose: true
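For anyone who runs ODM from the command line instead of through the WebODM interface, the same options might look something like this. This is just a sketch: the Docker mount layout and the “deck” project name are my assumptions, not something from the forum post, so check the ODM documentation for the layout your version expects.

```shell
# Hypothetical command-line equivalent of the WebODM task options above.
# The volume mount and "deck" project name are assumptions; adjust for
# your own dataset layout.
docker run -ti --rm -v "$(pwd)/datasets:/datasets" opendronemap/odm \
  --project-path /datasets deck \
  --mesh-octree-depth 13 \
  --mesh-size 1000000 \
  --min-num-features 30000 \
  --pc-quality ultra \
  --resize-to -1 \
  --debug --verbose
```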

Click on the image below to activate the model. Use your mouse buttons to change the view, and your scroll wheel to zoom in and out. Type “F”, or click the double-ended arrow in the lower right, to open it in full screen. (I highly recommend viewing it in full screen.)

3D Printing from Photogrammetry

or

Blender

or

What Have I Gotten Myself Into Now?

This was going to be a blog post about using Blender to create 3D scenes. Sort of. I’m just barely starting to learn Blender, so it wasn’t going to be anything fancy or in-depth.

But, I went down a rabbit hole. Imagine that! I started with the ASDM Rock I photographed a few months ago (see my post PHOTOGRAMMETRY: 3D Models from Photos), and was going to try to add some sunshine, and animate the sun moving across the rock, and maybe in the future create some somewhat realistic looking grass around the rock. But, I got sidetracked and decided to try to 3D Print the rock. Not at full scale(!). Just a little plastic rock I could put on my desk.

Blender

OK, so what is Blender? From Wikipedia, “Blender is a free and open-source 3D computer graphics software toolset used for creating animated films, visual effects, art, 3D printed models, motion graphics, interactive 3D applications, virtual reality, and computer games.” Did you read all of that? Free. Open Source. 3D computer graphics software. Animation.

Blender is used to create everything from 2D and 3D still pictures to full length animated movies. Wow!

I’ve known about Blender for several years (at least). I’ve looked at it a few times, but every time the learning curve scared me off. But it can do so much. And FREE, so no big investment (except my time) to play with it. After playing a bit with Open Drone Map, creating 3D models from “just a bunch of photos,” I thought maybe I should look at Blender again. So for the last 4-6 months I’ve been watching tutorials on YouTube and LinkedIn Learning, being awed by what others have done with Blender, and wondering if I could accomplish anything significant with it.

Very brief recap of my blog on Photogrammetry: I shot 40 photos with my cell phone of this cool looking rock that is located in front of the Arizona-Sonora Desert Museum outside of Tucson, AZ. I then used Open Drone Map to process these 40 photos to create a model of the rock, and used Blender to do some very minor editing to eliminate the extraneous parts of the model. I uploaded the model to Sketchfab, where you can view it in all of its 3D-ness.

Starting with this same model, I used Blender to create a base and export it to an STL file, which can be used to print a 3D model. That sounds rather mundane, but I spent many hours trying to get the initial model ready for 3D printing. Several YouTube videos later, I managed to create something that would print nicely.

For comparison with the printed model, here is one of the photos in the sequence that was used to create the model.

Rock at Arizona-Sonora Desert Museum Entrance
Rock at Arizona-Sonora Desert Museum Entrance

Here is the ASDM Rock, as rendered in Blender. I added a little sunshine to the scene, just because I could :-).

ASDM Rock, from Open Drone Map model, rendered in Blender
ASDM Rock, from Open Drone Map model, rendered in Blender

The final result? Here is my printed “rock.” I think it rather accurately represents the original rock!

3D Print of Rock in Photo Above
3D Print of Rock in Photo Above

PHOTOGRAMMETRY: 3D Models from Photos

(From Autodesk’s website:) What is photogrammetry?

Photogrammetry is the art and science of extracting 3D information from photographs. The process involves taking overlapping photographs of an object, structure, or space, and converting them into 2D or 3D digital models.

Photogrammetry is often used by surveyors, architects, engineers, and contractors to create topographic maps, meshes, point clouds, or drawings based on the real world.

I’ve written in past posts about 360° panorama photos (360° Panoramas!, More 360° Panoramas!, and 360° Panoramas (again)). In a 360° panorama, the camera (the viewer) is at a single location looking out on the world. Today, we will visit what seems to be the opposite situation.

3D models are created by taking a series of photos of an object from many different directions. The object could be something small, like a sculpture. Or something large, like a movie set. Or something in between, like a building. For a small object, the camera could be mounted on a tripod and the object turned to different positions, or the camera could be moved around the object to capture many different views. For something much larger, the camera could be carried by a drone, for instance, and moved around a very large area to take many images.

I’ve played with 3D models a bit over the last few years. Once you have acquired images of your target, they must be processed in some way to create a 3D object, usually a “mesh” of many triangles that simulate the original model. Much of the software to do this is relatively expensive (hundreds or thousands of dollars), or rented by the month. However, not all software is expensive. After looking at other options, I found Open Drone Map (or ODM). The original purpose of ODM apparently was to create maps and/or models from photos taken from a drone. However, the software doesn’t really care whether the camera was on a drone, or handheld, or on a tripod.

Using ODM, I was able to successfully process several sets of photographs I have accumulated over the last few years. My smallest models were created from about 40 photos shot with my cell phone and the largest I’ve created so far used a couple hundred photos shot with a drone. People successfully use ODM with 5,000+ photos, although that may take days to process, even on a powerful computer.

Once you have created a 3D model you must use special software to view it. Surprisingly, current versions of Windows do come with a simple 3D viewer, but it doesn’t seem to be very robust. There are also websites where the 3D model can be uploaded, then you can view the model with a web browser.

Below is one of the first models I created. It is a tabletop scene of a small wood manger. This model was created from 48 photos shot with my DSLR as I walked around the table, taking photos at different heights to be sure everything was visible. Click the “play” button, wait for it to load, then use your left mouse button to spin the model around on your screen, and your mouse scroll wheel to zoom in and out. To see the model full screen, press the “f” key. (I recommend trying that – press the “f” key again to exit full screen mode.)

The photo below is one of the 48 photos that make up the model above.

Another 3D model I created is an interesting rock at the entrance to the Arizona-Sonora Desert Museum (ASDM). This one is created from 40 photos I shot with my cell phone as I walked around it several times.

I used several other programs to generate all of the models shown here. First is WSL – Windows Subsystem for Linux. The version of ODM I used runs on Linux, so this allowed me to run it in a Linux environment on my Windows computer. I used Blender to clean up (remove) the extraneous parts of the 3D images, which were then uploaded to Sketchfab. Other programs played more minor roles. Expect to see more about Blender in this blog in the future.

Milky Way Timelapse

[Originally published Nov. 14, 2020, minor update Nov. 4, 2022]

Something I’ve wanted to do for years is to create a timelapse video of the night sky star motion. I made it one of my goals for this year to accomplish that. I’ve been spending a lot of time in places that have terrible views of the night sky. Mostly, too much atmospheric haze and/or too much light pollution.

In July, when Comet Neowise was visible, we found a place a short drive away that had a pretty good night sky view, and was above much of the haze. We went there to try to get a good view, and maybe a photo or two, of the Comet.

Comet Neowise, July, 2020 (15 seconds at f/4.5, ISO 800, 135mm)

We found that this location was also good for viewing the Milky Way.

Milky Way (30 seconds at f/4.0, ISO 1600, 10mm)

It might have been a good time to try for a star timelapse with the Milky Way included, but it was late and I didn’t take the time to try it.

In September we camped at Red Bridge State Wayside in Oregon. The campground is a great place, but the sky is mostly blocked by beautiful Ponderosa pine trees. It does have a pretty good view of the sky from an area near the parking lot. I took my camera and tripod with the hope of getting some decent sky images.

Toward dark I set up on the grass looking over the parking lot and took several test exposures. I was shooting with my Pentax K-3 (crop-frame) camera with a Tamron 10-24mm lens. The exposure I settled on was 6 seconds at f/3.5, ISO 6400. I set the camera to shoot 500 photos, one every 20 seconds. I turned off in-camera noise reduction, thinking I could save battery and do it in Lightroom later.

The first photo was shot at about 9:20 pm, and the last photo just past midnight. I sat in a chair near the camera for the almost three hours it took, reading a book on my Kindle. Fortunately the night was relatively warm and getting cold wasn’t too much of a problem. I did get out of the chair a few times to do some jumping jacks to stay warm.

OK, now for what I did wrong.

  1. I judged the exposure by what the image looked like on the back of the camera. Remember, it was almost pitch black when I was doing this. The image looked great! The next morning I looked at the images, and I couldn’t believe that all of the frames were totally black. How could I have done that? Then I realized they were underexposed so badly that I couldn’t see anything in normal light, but, viewed in a darkened room, there was some image there. Don’t judge the image exposure by what your eye sees when it’s almost totally dark out! Lightroom to the rescue (sort of).
  2. Turning off in-camera noise reduction was a mistake. The Pentax K-3 does quite well at keeping the noise down, but at ISO 6400, I really needed to let the camera do what it could. Again, Lightroom noise reduction helped (but I wouldn’t say it rescued me).

Once I had 500 RAW images, I imported them all into Lightroom and did what I could to adjust exposure and reduce noise. Then I exported them all as JPEG files (a painfully slow process on my ancient laptop computer). Next I fired up Adobe After Effects, brought in all of the JPEG images, and created a 1080p video at 30 frames per second. 500 frames at 30 frames per second results in a video only 16-2/3 seconds long!
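The arithmetic behind those numbers is simple enough to sketch. Nothing here is specific to After Effects; the frame count, interval, and frame rate are the ones from the shoot above.

```python
# Timelapse arithmetic for the shoot described above:
# 500 exposures, one every 20 seconds, played back at 30 fps.
frames = 500
interval_s = 20   # seconds between exposures
fps = 30          # playback frame rate

capture_hours = frames * interval_s / 3600   # total time at the tripod
playback_s = frames / fps                    # length of the finished video

print(f"capture time: {capture_hours:.2f} hours")  # ~2.78 hours
print(f"video length: {playback_s:.1f} seconds")   # ~16.7 seconds
```

So nearly three hours of sitting in the dark compresses to under 17 seconds of video, which matches the 9:20 pm to just-past-midnight session described above.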

The resulting video has lots of noise and color changes due to the extreme exposure adjustments I made. But I think it’s acceptable for my first attempt. Next year (or maybe this winter) I’ll do this again and improve my results.

Here is my video for you to see:

Milky Way Timelapse

360° Panoramas!

[Originally published February 4, 2020. Minor updates November 5, 2022.]

I’ve been playing with 360° photography for over a year. I find this to be an enjoyable variation to my regular photography.

The simplified explanation of the process is to shoot enough images to capture all directions, then stitch these images together into a special format. I sometimes use a drone to capture the images, and sometimes a camera on a tripod. The result is a spherical image, viewed as though you are at the center of the sphere and can look in all directions.

On my tripod, I use a panorama attachment, shown here, that allows the camera to pivot horizontally and vertically around the optical center of the lens so that the resulting photos align properly.

Camera on 360° panorama mount

This panorama attachment has a pivoting base with detents that help position the camera at the right intervals. Using the 10-22mm lens shown above, I take eight shots in a horizontal circle to have enough overlap between shots to stitch them together properly.
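A quick sanity check shows why eight shots is enough at 10mm. This is a sketch assuming an APS-C sensor roughly 22.3 mm wide with the camera in landscape orientation; your sensor width and orientation may differ.

```python
import math

# Estimate the horizontal field of view and the frame-to-frame overlap
# for a panorama shot as 8 frames spaced 45 degrees apart in a circle.
sensor_width_mm = 22.3   # assumed APS-C sensor width, landscape
focal_mm = 10            # widest setting of the 10-22mm lens
shots = 8

fov_deg = math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_mm)))
spacing_deg = 360 / shots
overlap_deg = fov_deg - spacing_deg

print(f"FOV ~{fov_deg:.0f} deg, spacing {spacing_deg:.0f} deg, "
      f"overlap ~{overlap_deg:.0f} deg per pair")
```

With roughly a 96° field of view and 45° between shots, adjacent frames share about half their width, which gives the stitching software plenty of common detail to find matching points.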

When I create a panorama from a drone, I am currently using a Phantom 4 Pro, which has an automated panorama mode where it takes all of the images with one press of the “shutter.”

When the panorama is assembled from the individual images, it can be viewed with a special viewer on my computer. I publish some of my images on a website, Round.me, that specializes in displaying 360° panorama images. Unfortunately, I can’t display the images as panoramas on this website.

I am going to use an image that I shot at Gates Pass outside Tucson, AZ, this past November as an example. There is a small knoll just off the road that I climbed up to take the photos from. I set my tripod on the highest rocks on the top and took 37 images. The images were taken on manual exposure, all with the same exposure. Because there is a wide range of lighting, from shadows to full sunlight to shooting directly into the sun, I shot the images in RAW to best capture the shadows and highlights. I could also shoot bracketed exposures to capture the full tonal range, but I have found that shooting RAW in this situation yields images that I can work with to get the results I want, and it is a bit simpler than bracketing the exposures and post-processing those (although I have also done that).

Back home, I import all of the images into Adobe Lightroom. In this instance, I started by doing an “Auto” process on all of the RAW images, which lightens the shadows and tones down the brightest highlights (like the sun!), then changed a few other parameters (primarily Clarity, Vibrance, and Saturation) to make the image a bit more to my liking. I might also do some additional brightness adjustments if I feel that is necessary. I then export all of the images as JPEG files.

I use a program called PTGui (https://www.ptgui.com/) to create the panorama image. It is a very capable program, and can process HDR or RAW files directly, but I feel I have the control I want in a way that makes sense to me by doing the initial processing in Lightroom. Once I have the JPEG files, I import them all into PTGui. It will automatically (and magically!) align the images. Once they are aligned, if there are any problems, I can “assist” PTGui to find matching points in adjacent images. I usually only need to do this when some of the images include almost all sky or water.

If the panorama is shot from the drone, I can’t shoot straight up because the camera is mounted below the drone, and the drone blocks the camera’s view. That leaves a hole in the sky above. In this case, I use a special mode of PTGui to export an image of the “top” of the panorama. Then I use Photoshop to fill in the hole with “sky color” similar to what is around it, and then “reassemble” the panorama with PTGui.

When using a tripod, I can shoot straight up, but the tripod is in the way when I shoot down. So when I am finished shooting all of the images, I pick the tripod up and take a shot of the ground where it was. Many of the images from the tripod pointing downward contain parts of the tripod, such as the camera platform or the tripod legs. I can select these images and indicate to PTGui which parts of particular images should not be used. If I don’t do this, some of the tripod will show up in the resulting panorama.

Once I’ve done all of the point matching and/or masking, if needed, PTGui creates the panorama photo, which is a JPEG file in a format that panorama programs can interpret. If viewed with any “normal” photo program, the image looks quite distorted as shown below. Even though I took the tripod out of the panorama, you can still see its shadow. Here is the Gates Pass JPEG image. Click on the image to view it in Roundme. Once the image opens up, click and drag with your mouse to spin the image around, or zoom in and out with your scroll wheel.

Gates Pass 360° Panorama
Gates Pass 360° Panorama

You can also view this on a tablet or iPad. With the right browser, you will be able to look in different directions just by turning the tablet.

To see all of the panoramas I have published, check out my Panorama Page at roundme.com/@garystebbins. When on that page, click the “TOURS” button to see the panoramas I have published. Some of the images here have been shot with my camera on a tripod, while others have been shot from a drone. Keep watching this site, as I will be adding new panoramas from time to time. 🙂