Here is my latest diversion. This is the bark of a Ponderosa pine tree near Camp Sherman, Oregon. I shot 26 photos with my Pixel 5a cell phone, then processed the photos with WebODM into a 3D model.
I did further processing in Blender to eliminate a few extraneous bits and then created an animation of the model.
Below is the animation I created using Blender. Click on the image to start the video, then press “f” to view it full screen. Press “f” again to exit full screen mode (I suggest viewing it in full screen to really see it):
This model can be seen on Sketchfab. Click on the image below, and after the model opens, click and drag to see it in 3D. As with the animation, press “f” to view it full screen and “f” again (or Escape) to exit full screen mode:
I entered the following in the WebODM (Open Drone Map) forum. Some of it is a bit technical for this blog, but I thought it might be interesting to some people.
Just for fun, and to learn more about WebODM and Blender, I flew my DJI Mini 2 drone around my deck to create a model. My deck has trees on 3 sides of it and overhanging it. Flying between the tree branches to get some of the shots was a bit challenging. There was a bit of a pucker factor a few times when flying inches below a branch and the drone started drifting upwards! (The Mini 2 has only forward and downward sensors, which is good here – I could never have flown that close to the trees if there were active sensors the other directions.)
I shot 142 images, processed them, and saw some areas that didn’t seem to have adequate coverage. So I shot another 49 images to fill in those areas. That improved the places I concentrated on, but some other areas seemed to decrease in quality. The glass railing and adjacent sunroom windows and doors caused some oddities, as expected. One thing I found odd is that the deck and other items seem to be “reflected” in the undersides of the tree branches.
My processing system is Windows 10 Pro on a laptop with 64GB of RAM. I initially processed this using Docker/WebODM, but ran out of memory when I increased pc-quality to ultra. I then processed it in Windows native WebODM, and it processed in 24+ hours. The WebODM timer showed 36 minutes, so I don’t have accurate information…
I postprocessed this with Blender to clean up some of the extraneous parts of the model, but purposefully left most of the trees. To get the upload files under the 100MB limit for the free Sketchfab account, I decimated the model in Blender to 70% and converted the PNG files to JPG.
Click on the image below to activate the model. Use your mouse buttons to change the view, and your scroll wheel to zoom in and out. Type “F”, or click the double-ended arrow in the lower right, to open it in full screen. (I highly recommend viewing it in full screen.)
(From Autodesk’s website:) What is photogrammetry?
Photogrammetry is the art and science of extracting 3D information from photographs. The process involves taking overlapping photographs of an object, structure, or space, and converting them into 2D or 3D digital models.
Photogrammetry is often used by surveyors, architects, engineers, and contractors to create topographic maps, meshes, point clouds, or drawings based on the real-world.
3D models are created by taking a series of photos of an object from many different directions. The object could be something small, like a sculpture. Or something large, like a movie set. Or something in between, like a building. The camera could be mounted on a tripod and the small model turned to different positions, or the camera could be moved around the small model to take many different views. For an even larger model, the camera could be carried by a drone, for instance, and moved around a very large area to take many images.
I’ve played with 3D models a bit over the last few years. Once you have acquired images of your target, they must be processed in some way to create a 3D object, usually a “mesh” of many triangles that approximates the original surface. Much of the software to do this is relatively expensive (hundreds or thousands of dollars), or rented by the month. However, not all software is expensive. After looking at other options, I found Open Drone Map (or ODM). The original purpose of ODM apparently was to create maps and/or models from photos taken from a drone. However, the software doesn’t really care whether the camera was on a drone, or handheld, or on a tripod.
Using ODM, I was able to successfully process several sets of photographs I have accumulated over the last few years. My smallest models were created from about 40 photos shot with my cell phone and the largest I’ve created so far used a couple hundred photos shot with a drone. People successfully use ODM with 5,000+ photos, although that may take days to process, even on a powerful computer.
Once you have created a 3D model you must use special software to view it. Surprisingly, current versions of Windows do come with a simple 3D viewer, but it doesn’t seem to be very robust. There are also websites where the 3D model can be uploaded, then you can view the model with a web browser.
Below is one of the first models I created. It is a tabletop scene of a small wood manger. This model was created from 48 photos shot with my DSLR as I walked around the table, taking photos at different heights to be sure everything was visible. Click the “play” button, wait for it to load, then use your mouse left button to spin the model around on your screen, and your mouse scroll wheel to zoom in and out. To see the model full screen, press the “f” key. (I recommend trying that – press the “f” key again to exit full screen mode.)
The photo below is one of the 48 photos that make up the model above.
Another 3D model I created is an interesting rock at the entrance to the Arizona-Sonora Desert Museum (ASDM). This one is created from 40 photos I shot with my cell phone as I walked around it several times.
I used several other programs to generate all of the models shown here. First is WSL – Windows Subsystem for Linux. The version of ODM I used runs on Linux, so this allowed me to run it in a Linux environment on my Windows computer. I used Blender to clean up (remove) the extraneous parts of the 3D images, which were then uploaded to Sketchfab. Other programs played more minor roles. Expect to see more about Blender in this blog in the future.
With recent changes to the FAA rules, it is now possible for drone pilots to fly at night without jumping through as many hoops as before. To be eligible to fly at night, I had to take the update course for my “Part 107” certificate. I also must have an anti-collision light on my drone that is visible for 3 miles in any direction (that sucker is bright!).
When flying at night, you need to be very aware of your surroundings so as not to hit something you can’t see. It’s best to check out the location in daylight hours to be sure there are no wires or other such items that you might encounter.
I flew my first night flight a couple weeks ago just to try it out. I flew from my deck, which is surrounded (and partially covered) by trees. I know where they are, and where on my deck I am clear of overhead obstructions. Landing is the tricky part — making sure that I am not descending into the trees or onto my roof. I was successful in flying a short flight and taking a few photos.
Several days later I flew from the Edmonds waterfront. I walked up the beach until I was clear of other beach-goers and had a good place to take off and land (a flat, almost-level, rock). My goal was to get some shots showing the Edmonds Ferry at or near the dock and the city lights of Edmonds. I was successful.
I flew my DJI Mini 2, which is very light-weight and has a 12 megapixel camera. I would like to try again with my DJI Phantom 4 Pro, which is heavier and has a better-quality 20 megapixel camera. I think the heavier drone will probably be a bit more stable, which should improve the sharpness of photos taken at the slow shutter speeds required. Although, looking at the photos, the sharpness is quite good considering the camera is “sitting” on a platform floating in the air, subject to wind and motor/propeller vibration. Shutter speeds were between one-third of a second and one second, with ISO varying from 1600 to 3200. With the small sensor on the DJI Mini 2, these high ISOs made for somewhat grainy photos.
The photo below was shot as a series of nine RAW photos. The drone was positioned at one point in the sky, then three photos were shot using exposure bracketing (each photo with a different exposure) to capture the wide brightness range. Then the drone was rotated, and another three shots were taken. I did this three times. Each set of three photos was merged using Adobe Lightroom Classic to form one HDR photo, resulting in three HDR photos, each with a slightly different view. These resulting three photos were then merged into a single panorama photo, again using Lightroom, to create the final image.
Edit 2/13/22: If you want to see the above photo in larger size, look at my Flickr album here.
In my last post, already several months ago, I promised another 3D printer post. That is still coming. It’s half written. Make that a quarter written. I’ve been sidetracked, not to mention that my laptop computer bit the dust and I haven’t yet decided what to replace it with.
My first 360° panorama post was a little over a year ago, Feb. 4, 2020, in which I discussed how 360° panoramas were made and showed one from Gates Pass near Tucson, AZ. My second post on panoramas was written on March 9, 2020, noting that 360° panoramas could be displayed on YouTube.
So, what’s new with panoramas?
First, 360° panoramas can be displayed on Flickr (I knew that, but had never tried it). Here’s my first panorama on Flickr. Flickr isn’t as good at displaying these as it could be – maybe it will improve in the future. The first problem I noticed is at the very bottom of the photo – directly below the camera. There’s some distortion there that shouldn’t be. Also, it is more difficult to zoom in and out with the mouse scroll wheel, as it usually scrolls the page instead. And it was difficult to go into full-screen mode, and once there I wasn’t always able to pan around the image.
It is possible to display panoramas interactively on WordPress, but only if I pay for a “professional” level. Since I don’t make any money from this site, I can’t really justify doing that. If you wish to see my photo(s) in a better viewer, take a look at it (them) in Roundme. This photo was taken a few days ago while on a cross-country ski outing to the top of Amabilis Mountain. 11+ miles and 2000’+ elevation gain, but the views were totally worth it! What a gorgeous day we had. Here are the rest of the photos I shot that day.
All of the 360° panoramas I have posted in the past were shot by using my DSLR camera mounted to a tripod (or, in one case, handheld). The latest two were shot from a drone from tens of feet to several hundred feet above the ground.
I got my first drone 3+ years ago, but it’s a bit too big to take on a backpacking or cross-country ski trip. About a month ago I got a much smaller drone that I can take along with me. The drone itself weighs about 1/2 pound. I carried it in my backpack on my cross-country ski trip.
The larger drone in the photo above is a DJI Phantom 4 Pro, and the little guy is a DJI Mini 2. Both drones can automatically shoot a series of photos to be stitched into a 360° panorama photo. I then use the program PTGui to stitch the multiple images into a panorama image.
If you are curious, the panorama image is just a regular JPEG file, although it is stretched “a bit” at the top and bottom. As mentioned in my first post, it is exactly twice as wide as it is high – 360° wide and 180° high. The right and left edges join together in the panorama viewer, and the top and bottom edges are compressed to display as a single point – straight above the camera for the top edge and straight below for the bottom edge. Some additional metadata is added to the file so that the viewer program knows how to interpret the file. Here’s what the photo looks like when viewed without a panorama viewer.
There you have it – one more 360° blog post. Next (I hope) I’ll actually finish writing the 3D printer blog I promised a few months ago. Stay tuned!
Something I’ve wanted to do for years is to create a timelapse video of the night sky star motion. I made it one of my goals for this year to accomplish that. I’ve been spending a lot of time in places that have terrible views of the night sky. Mostly, too much atmospheric haze and/or too much light pollution.
In July, when Comet Neowise was visible, we found a place a short drive away that had a pretty good night sky view, and was above much of the haze. We went there to try to get a good view, and maybe a photo or two, of the Comet.
We found that this location also offered a good view of the Milky Way.
It might have been a good time to try for a star timelapse with the Milky Way included, but it was late and I didn’t take the time to try it.
In September we camped at Red Bridge State Wayside in Oregon. The campground is a great place, but the sky is mostly blocked by beautiful Ponderosa pine trees. It does have a pretty good view of the sky from an area near the parking lot. I took my camera and tripod with the hope of getting some decent sky images.
Toward dark I set up on the grass looking over the parking lot and took several test exposures. I was shooting with my Pentax K-3 (crop-frame) camera with a Tamron 10-24mm lens. The exposure I settled on was 6 seconds at f/3.5, ISO 6400. I set the camera to shoot 500 photos, one every 20 seconds. I turned off in-camera noise reduction, thinking I could save battery and do it in Lightroom later.
The first photo was shot at about 9:20 pm, and the last photo just past midnight. I sat in a chair near the camera for the almost three hours it took, reading a book on my Kindle. Fortunately the night was relatively warm and getting cold wasn’t too much of a problem. I did get out of the chair a few times to do some jumping jacks to stay warm.
OK, now for what I did wrong.
I judged the exposure by what the image looked like on the back of the camera. Remember, it was almost pitch black when I was doing this. The image looked great! The next morning I looked at the images. I couldn’t believe that all frames were totally black. How could I have done that? Then I realized they were underexposed so badly that I couldn’t see anything in normal light, but, viewed in a darkened room, there was some image there. Don’t judge the image exposure by what your eye sees when it’s almost totally dark out! Lightroom to the rescue (sort of).
Turning off in-camera noise reduction was a mistake. The Pentax K-3 does quite well at keeping the noise down, but at ISO 6400, I really needed to let the camera do what it could. Again, Lightroom noise reduction helped (but I wouldn’t say it rescued me).
Once I had 500 RAW images, I imported them all into Lightroom and did what I could to adjust exposure and reduce noise. Then I exported them all as JPEG files (a painfully slow process on my ancient laptop computer). Next I fired up Adobe After Effects, brought in all of the JPEG images, and created a 1080p video at 30 frames per second. 500 frames at 30 frames per second results in a video only 16-2/3 seconds long!
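A quick sketch of the timelapse arithmetic, using the numbers from my shoot (500 frames, one every 20 seconds, assembled at 30 frames per second):

```python
# Timelapse math: how long the shoot takes vs. how long the video plays.

def capture_seconds(frames: int, interval_s: float) -> float:
    """Total shooting time: one exposure every `interval_s` seconds."""
    return frames * interval_s

def playback_seconds(frames: int, fps: float) -> float:
    """Length of the finished video at the given frame rate."""
    return frames / fps

frames, interval, fps = 500, 20, 30
print(capture_seconds(frames, interval) / 3600)  # almost three hours of shooting
print(playback_seconds(frames, fps))             # about 16.67 seconds of video
```

Nearly three hours of sitting in the dark boils down to a 16-2/3 second clip – that’s the nature of timelapse.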
The resulting video has lots of noise and color changes due to the extreme exposure adjustments I made. But I think it’s acceptable for my first attempt. Next year (or maybe this winter) I’ll do this again and improve my results.
I just discovered this week that YouTube can display 360° panorama videos. So now I can show them in YouTube. And, added bonus, I can embed YouTube videos into this blog.
Click on the video below to see it in action. Once it starts playing, click and drag in the video with your mouse. Or, play it on a mobile device and move the device around. If you want to play with this in a browser in a larger window, here is the link: https://youtu.be/5dxS63p0qmI. This is a still photo displayed as a video. Note that it is only 30 seconds long. To play with it longer than that, pause the video and click and drag around as much as you want.
It took a little work to get it right. 360° panoramas are twice as wide as they are high. Exactly. Thinking about it a bit, that becomes obvious. The panorama is 360° wide (hence the name) and 180° high. But standard HD video is 1920 pixels wide and 1080 pixels high. 4K video (which is what I published this at) is twice that: 3840 x 2160. The ratio is 16:9, which obviously is not the same as the 2:1 panoramas.
When I created the first video, there was a black bar top and bottom, which when viewed as a 360° panorama created a round hole at the “top” and “bottom”. Oh, what to do?
The panorama starts out with a horizontal:vertical ratio of 2:1, which is the same as 16:8. If I were to stretch the panorama a bit vertically, maybe about 12.5%, it would then be 16:9 and have the same ratio as the HD video.
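That ratio math can be checked in a couple of lines of Python (just a sketch of the arithmetic above, nothing more):

```python
# A 360° panorama is 2:1 (think of it as 16:8); HD/4K video frames are 16:9.
# Stretching the panorama vertically by 9/8 = 112.5% makes the ratios match.

pano_ratio = 2 / 1    # 360° wide, 180° high
video_ratio = 16 / 9  # 1080p and 4K share this aspect ratio

vertical_stretch = pano_ratio / video_ratio  # 1.125, i.e. 112.5%

# Check against the 4K frame: a 3840-pixel-wide panorama is 1920 pixels
# high; stretched by 112.5% it fills the full 2160-pixel frame height.
assert 1920 * vertical_stretch == 2160
print(f"{vertical_stretch:.1%}")
```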
So, I created the panorama (I already did that, see https://garystebbins.com/2020/02/04/360-panoramas/), then dropped the file into Photoshop and expanded the image vertically to 112.5%, then dropped that file into Adobe Premiere Pro, added a little music, and there you have it.
Almost… there is a bit of embedded data that has to be added to the file to tell YouTube it is a 360° panorama. I found a little program online called “Spatial Media Metadata Injector,” which can be found here, that does this bit of magic.
Maybe I’ll discover some easier method than this, but now that I know the steps, this isn’t bad. I suspect I can do the ratio matching all in Premiere Pro, which would save one step.
If you want to see a wild 360° video check this out:
Be sure to spin it around and look around you. Enjoy!
I’ve been playing with 360° photography for over a year. I find this to be an enjoyable variation to my regular photography.
The simplified explanation of the process is to shoot enough images to capture all directions, then stitch these images together into a special format. I sometimes use a drone to capture the images, and sometimes a camera on a tripod. The result is a spherical image, viewed as though you are at the center of the sphere and can look in all directions.
On my tripod, I use a panorama attachment, shown here, that allows the camera to pivot horizontally and vertically around the optical center of the lens so that the resulting photos align properly.
This panorama attachment has a pivoting base with detents that help position the camera at the right intervals. Using the 10-22mm lens shown above, I take eight shots in a horizontal circle to have enough overlap between shots to stitch them together properly.
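As a rough sketch of why eight shots is enough, you can estimate the shot count from the lens’s horizontal field of view and the overlap you want between neighboring frames. The FOV and overlap figures below are illustrative assumptions, not measurements of my setup:

```python
import math

def shots_per_row(fov_deg: float, overlap: float) -> int:
    """Shots needed for a full 360° row, given the horizontal field of
    view (degrees) and the fraction of each frame overlapping the next."""
    step = fov_deg * (1 - overlap)  # degrees of new coverage per shot
    return math.ceil(360 / step)

# With roughly a 90° horizontal FOV (a wide lens at its short end) and
# 50% overlap between neighboring frames, eight shots cover the circle.
print(shots_per_row(90, 0.5))  # 8
```

More overlap means more shots but gives the stitching software more matching points to work with.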
When I create a panorama from a drone, I am currently using a Phantom 4 Pro, which has an automated panorama mode where it takes all of the images with one press of the “shutter.”
When the panorama is assembled from the individual images, it can be viewed with a special viewer on my computer. I publish some of my images on a website, Round.me, that specializes in displaying 360° panorama images. Unfortunately, I can’t display the images as panoramas on this website.
I am going to use an image that I shot at Gates Pass outside Tucson, AZ, this past November as an example. There is a small knoll just off the road that I climbed up to take the photos from. I set my tripod on the highest rocks on the top and took 37 images. The images were taken on manual exposure, all with the same exposure. Because there is a wide range of lighting, from shadows to full sunlight to shooting directly into the sun, I shot the images in RAW to best capture the shadows and highlights. I could also shoot bracketed exposures to capture the full tonal value, but I have found that shooting RAW in this situation yields images that I can work with to get the results I want, and it is a bit simpler than bracketing the exposures and post-processing those (although I have also done that).
Back home I import all of the images into Adobe Lightroom. In this instance, I started by doing an “Auto” process on all of the RAW images, which lightens the shadows and tones down the brightest highlights (like the sun!), then changed a few other parameters (primarily Clarity, Vibrance, and Saturation) to make the image a bit more to my liking. I might also do some additional brightness adjustments if I feel that is necessary. I then export all of the images as JPEG files.
I use a program called PTGui (https://www.ptgui.com/) to create the panorama image. It is a very capable program, and can process HDR or RAW files directly, but I feel I have the control I want in a way that makes sense to me by doing the initial processing in Lightroom. Once I have the JPEG files, I import them all into PTGui. It will automatically (and magically!) align the images. Once they are aligned, if there are any problems, I can “assist” PTGui to find matching points in adjacent images. I usually only need to do this when some of the images include almost all sky or water.
If the panorama is shot from the drone, I can’t shoot straight up, as the camera is mounted below the drone and the drone blocks the camera’s view. That leaves a hole in the sky above. In this instance, I use a special mode of PTGui to export an image of the “top” of the panorama. Then I use Photoshop to fill in the hole with “sky color” similar to what is around it, and then “reassemble” the panorama with PTGui.
When using a tripod, I can shoot straight up, but the tripod is in the way when I shoot down. So when I am finished shooting all of the images, I pick the tripod up and take a shot of the ground where it was. Many of the images from the tripod pointing downward contain parts of the tripod, such as the camera platform or the tripod legs. I can select these images and indicate to PTGui which parts of particular images should not be used. If I don’t do this, some of the tripod will show up in the resultant panorama.
Once I’ve done all of the point matching and/or masking, if needed, PTGui creates the panorama photo, which is a JPEG file in a format that panorama programs can interpret. If viewed with any “normal” photo program, the image looks quite distorted as shown below. Even though I took the tripod out of the panorama, you can still see its shadow. Here is the Gates Pass JPEG image.
You can see the resultant panorama at https://roundme.com/tour/523925/. Once at the site, use your mouse and scroll wheel to look in different directions and zoom in and out. Or view this on a tablet or iPad. With the right browser, you will be able to look in different directions just by turning the tablet.
To see all of the panoramas I have published, check out my Panorama Page at roundme.com/@garystebbins. When on that page, click the “TOURS” button to see the panoramas I have published. Some of the images here have been shot with my camera on a tripod, while others have been shot from a drone. Keep watching this site, as I will be adding new panoramas from time to time. 🙂