I entered the following in the WebODM (Open Drone Map) forum. Some of it is a bit technical for this blog, but I thought it might be interesting to some people.
Just for fun, and to learn more about WebODM and Blender, I flew my DJI Mini 2 drone around my deck to create a model. My deck has trees on 3 sides of it and overhanging it. Flying between the tree branches to get some of the shots was a bit challenging. There was a bit of a pucker factor a few times when flying inches below a branch and the drone started drifting upwards! (The Mini 2 has only forward and downward sensors, which is good here – I could never have flown that close to the trees if there were active sensors the other directions.)
I shot 142 images, processed them, and saw some areas that didn’t seem to have adequate coverage. So I shot another 49 images to fill in those areas. That improved the places I concentrated on, but some other areas seemed to decrease in quality. The glass railing and adjacent sunroom windows and doors caused some oddities, as expected. One thing I found odd is that the deck and other items seem to be “reflected” in the undersides of the tree branches.
My processing system is Windows 10 Pro on a laptop with 64GB of RAM. I initially processed this using Docker/WebODM, but ran out of memory when I increased pc-quality to ultra. I then processed it in Windows-native WebODM, which took 24+ hours. The WebODM timer showed 36 minutes, so I don’t have accurate timing information…
I postprocessed this with Blender to clean up some of the extraneous parts of the model, but purposefully left most of the trees. To get the upload files under the 100MB limit for the free Sketchfab account, I decimated the model in Blender to 70% and converted the PNG files to JPG.
Click on the image below to activate the model. Use your mouse buttons to change the view, and your scroll wheel to zoom in and out. Type “F”, or click the double-ended arrow in the lower right, to open it in full screen. (I highly recommend viewing it in full screen.)
Last fall we were traveling and spent some time in the Tucson / Lazydays KOA Resort. As part of the amenities, we were given free Wi-Fi access during our stay. The Internet there is managed by Tengo Internet, which we understand loosely means “we have Internet.” Ummm. Maybe. Sometimes. I think it can be interpreted similarly to “Yes, we have no bananas.”
Yes, we had Wi-Fi. We were right across a street from the Wi-Fi antenna, and we had a good Wi-Fi signal. Having a good Wi-Fi signal is not synonymous with having good Internet. Or any Internet at times.
I found that at about 4:00 am I had decent Internet. At 4:00 pm the Internet was so slow it was practically unusable. At 8:00 pm it was so slow that my phone said I had no Internet and it switched to cellular data, at $10/gigabyte. Ouch!!
Tengo Internet had a paid option that they guaranteed would provide 5 megabits/second speeds. I paid. It didn’t help. I still essentially had no Internet connectivity at the busy times of day, and I don’t think I ever saw 5 Mb/s speeds (except maybe at 4:00 am).
After a week of this, when we were depending on having Internet available, I decided it was time to find a solution. I looked into standard mobile hotspot providers, like Verizon and T-Mobile. The problem was that you had to pay a monthly subscription fee whether you were using the hotspot or not. And since we wanted it just when traveling, sometimes for a few days at a time, that didn’t seem like a good solution. And what if the provider I chose didn’t have good coverage in the area I needed it?
Eventually I stumbled across SolisWiFi.co, known at that time as SkyRoam. (Some areas of their website still identify it as SkyRoam.) I purchased a Solis Lite WiFi hotspot, about the size of a hockey puck.
This device has a built-in battery that lasts up to 16 hours. Add an app to your mobile phone to control the hotspot and purchase Internet access, and you’re ready to go. When powered on, the Solis Lite will find the best provider to connect to, and provide you with 4G Internet. It’s not 5G. But good 4G was plenty fast enough for us.
You can connect up to 10 devices to the hotspot. That easily covered both of our mobile phones, an iPad, a laptop PC, and an Echo Dot. Range was easily 20 feet or more.
The Solis Lite worked well when sitting around camp. When we went to the pool, we took it with us. Several times we went to a picnic area in Saguaro National Park, and took the hotspot with us there. We had no cell phone service – none at all. Yet the hotspot was able to find enough signal from some carrier that we were able to access the Internet without problem.
We also carried the Solis Lite in the car when we were traveling. We typically track our trip on maps and other applications on my iPad or our phones, and that map updating can use significant data. Using the Solis Lite hotspot in the car kept us from using our expensive cellular data.
Where does it work? They say in over 130 countries worldwide. USA coverage seems to be pretty good. I don’t recall encountering any place that it didn’t work for us.
How much? The hotspot was about $125. There are several purchase plans for Internet usage. You can purchase by the megabyte, by the day or by the month (with usage caps). If you can predict your usage, the monthly plans are probably the cheapest (your mileage may vary). As I write this, a USA monthly subscription that includes 10GB of data is $40 (there are several other options). That’s $4/GB, much cheaper than my phone data plan of $10/GB (I’ll be shopping around soon…). If you exceed your monthly plan, you can add data at any time. Global plans are a bit more expensive than USA plans.
If you choose to purchase by the gigabyte, prices start at $8 for 1GB, $35 for 5GB, $60 for 10GB, and $100 for 20GB.
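If you like comparing plans with a quick script, the per-gigabyte arithmetic is simple. Here is a Python sketch using the prices quoted above (prices change, so treat these numbers as a snapshot):

```python
def cost_per_gb(plan_price_usd, gigabytes):
    """Effective cost per gigabyte for a data plan or bundle."""
    return plan_price_usd / gigabytes

# Monthly plan: $40 for 10GB works out to $4/GB,
# versus the $10/GB my phone plan charges for overage.
print(cost_per_gb(40, 10))   # 4.0
print(cost_per_gb(100, 20))  # 5.0 (the 20GB pay-as-you-go bundle)
```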
The Global Unlimited Daypass is $9. Unlimited data. Anywhere. Great if you need data just for a day. You can buy these in advance (watch for sales) and activate them when you need.
Check the plans carefully. They seem to change from time to time, so don’t assume that a plan you had six months ago is identical to the plan you can get now.
Where can you get the Solis Lite? Last fall it was available from Amazon. I purchased it directly from SkyRoam, and was very disappointed in the shipping. They don’t seem to care that you might want it soon. It took several days for it to ship, and I think they paid extra to have USPS delay it for a few more days. Right now, Solis WiFi’s site says it’s out of stock. There is one left on Amazon (search for Skyroam Solis). I also found it at Target and eBay. It’s out of stock at several other places, which makes me wonder if there is a supply problem, or if they have stopped manufacturing that model and are coming out with something new.
Was the Solis Wi-Fi Hotspot without problems? No. Several times the hotspot locked up with an error message in the app, and I had to power it off and back on to get it working. When I tried to change the password through the app, I found that the display was white on white – not exactly readable. At one point I had a few GB of data left in the monthly plan, and I was metering it out to avoid buying extra data to make it through the month. The plan expired many hours before the app indicated it would and I lost the remaining data. I suspect a problem in the app having to do with the difference between UTC and local time caused that, but support was unable to tell me what had happened. Overall, though, it worked well.
Note that my analysis and purchase were about six months ago. Things change quickly, so check around for other options. That said, we have been happy with the Solis Lite, and I think we saved some money on our one trip with it. There is a convenience to being able to access multiple carriers. Keep an eye on the app or the Solis website for deals. They frequently have discounts on data (there is 30% off monthly plans right now). You can always buy data for use in the future.
This was going to be a blog about using Blender to create 3D scenes. Sort of. I’m just barely starting to learn Blender, so it wasn’t going to be anything fancy or in-depth.
But, I went down a rabbit hole. Imagine that! I started with the ASDM Rock I photographed a few months ago (see my post PHOTOGRAMMETRY: 3D Models from Photos), and was going to try to add some sunshine, and animate the sun moving across the rock, and maybe in the future create some somewhat realistic looking grass around the rock. But, I got sidetracked and decided to try to 3D Print the rock. Not at full scale(!). Just a little plastic rock I could put on my desk.
OK, so what is Blender? From Wikipedia, “Blender is a free and open-source 3D computer graphics software toolset used for creating animated films, visual effects, art, 3D printed models, motion graphics, interactive 3D applications, virtual reality, and computer games.” Did you read all of that? Free. Open Source. 3D computer graphics software. Animation.
Blender is used to create everything from 2D and 3D still pictures to full length animated movies. Wow!
I’ve known about Blender for several years (at least). I’ve looked at it a few times, but every time the learning curve scared me off. But it can do so much. And FREE, so no big investment (except my time) to play with it. After playing a bit with Open Drone Map, creating 3D models from “just a bunch of photos,” I thought maybe I should look at Blender again. So for the last 4-6 months I’ve been watching tutorials on YouTube and LinkedIn Learning, being awed by what others have done with Blender, and wondering if I could accomplish anything significant with it.
Very brief recap of my blog on Photogrammetry: I shot 40 photos with my cell phone of this cool looking rock that is located in front of the Arizona-Sonora Desert Museum outside of Tucson, AZ. I then used Open Drone Map to process these 40 photos to create a model of the rock, and used Blender to do some very minor editing to eliminate the extraneous parts of the model. I uploaded the model to Sketchfab, where you can view it in all of its 3D-ness.
Starting with this same model, I used Blender to create a base and export it to an STL file, which can be used to print a 3D model. That sounds rather mundane, but I spent many hours trying to get the initial model ready for 3D printing. Several YouTube videos later, I managed to create something that would print nicely. I also added a little sunshine to the scene, just because I could.
For comparison with the printed model, here is one of the photos in the sequence that was used to create the model.
Here is the ASDM Rock, as rendered in Blender. I added a little sunshine to the scene, just because I could :-).
The final result? Here is my printed “rock.” I think it rather accurately represents the original rock!
(From Autodesk’s website:) What is photogrammetry?
Photogrammetry is the art and science of extracting 3D information from photographs. The process involves taking overlapping photographs of an object, structure, or space, and converting them into 2D or 3D digital models.
Photogrammetry is often used by surveyors, architects, engineers, and contractors to create topographic maps, meshes, point clouds, or drawings based on the real world.
3D models are created by taking a series of photos of an object from many different directions. The object could be something small, like a sculpture. Or something large, like a movie set. Or something in between, like a building. The camera could be mounted on a tripod and the small model turned to different positions, or the camera could be moved around the small model to take many different views. For an even larger model, the camera could be carried by a drone, for instance, and moved around a very large area to take many images.
I’ve played with 3D models a bit over the last few years. Once you have acquired images of your target, they must be processed in some way to create a 3D object, usually a “mesh” of many triangles that simulate the original model. Much of the software to do this is relatively expensive (hundreds or thousands of dollars), or rented by the month. However, not all software is expensive. After looking at other options, I found Open Drone Map (or ODM). The original purpose of ODM apparently was to create maps and/or models from photos taken from a drone. However, the software doesn’t really care whether the camera was on a drone, or handheld, or on a tripod.
Using ODM, I was able to successfully process several sets of photographs I have accumulated over the last few years. My smallest models were created from about 40 photos shot with my cell phone and the largest I’ve created so far used a couple hundred photos shot with a drone. People successfully use ODM with 5,000+ photos, although that may take days to process, even on a powerful computer.
Once you have created a 3D model you must use special software to view it. Surprisingly, current versions of Windows do come with a simple 3D viewer, but it doesn’t seem to be very robust. There are also websites where the 3D model can be uploaded, then you can view the model with a web browser.
Below is one of the first models I created. It is a tabletop scene of a small wood manger. This model was created from 48 photos shot with my DSLR as I walked around the table, taking photos at different heights to be sure everything was visible. Click the “play” button, wait for it to load, then use your mouse left button to spin the model around on your screen, and your mouse scroll wheel to zoom in and out. To see the model full screen, press the “f” key. (I recommend trying that – press the “f” key again to exit full screen mode.)
The photo below is one of the 48 photos that make up the model above.
Another 3D model I created is an interesting rock at the entrance to the Arizona-Sonora Desert Museum (ASDM). This one is created from 40 photos I shot with my cell phone as I walked around it several times.
I used several other programs to generate all of the models shown here. First is WSL – Windows Subsystem for Linux. The version of ODM I used runs on Linux, so this allowed me to run it in a Linux environment on my Windows computer. I used Blender to clean up (remove) the extraneous parts of the 3D images, which were then uploaded to Sketchfab. Other programs played more minor roles. Expect to see more about Blender in this blog in the future.
When camping, I frequently would like to know the temperature outside our 2020 T@B 320S Boondock Edge trailer as well as inside. I purchased a “ThermoPro TP60S Digital Hygrometer Indoor Outdoor Thermometer” through Amazon (if you purchase from this link I’ll earn a small commission at no additional cost to you) and mounted the indoor module on the wall next to the Alde control panel using Velcro.
Now, where to locate the outside sensor? I placed it in the propane tank / battery box, just setting it on the bottom. This seemed to work fine. The outside temperature seems to be relatively accurate except when the sun is shining directly on the box. The only problem I could see was that the sensor picked up a lot of dirt, and occasionally some moisture from sitting on the bottom. I was also concerned about dropping something on it and damaging the unit.
I have finally gotten around to moving the sensor to a safer location. I figured I could mount it over the flange at the back of the propane tank / battery box and it would be safely out of the way. When the lid is closed, there is a small gap below the lid where the mount can sit without interfering with the lid closing. Using Fusion 360, I designed a holder for the sensor.
I first measured the width of the flange at the top of the box, and eyeballed how I thought I would like the mount to sit on that flange. I measured the sensor, and made a rough drawing of what I wanted. Then I created a test part in Fusion 360. I just made the end of the sensor mount and about 10mm of the body. That way I could print it in a reasonable amount of time, without using too much plastic filament, to test the fit. Here’s my first iteration:
I then tested this, and found that it didn’t hang the way I had hoped. It needed something to keep it from tilting.
So, on to iteration #2. I added a little leg to keep it from tilting.
This worked fine. Now that I had tested the hanger, and believed it to be correct, I added the rest of the structure in Fusion 360, and added holes in the bottom to improve air flow to the sensor, resulting in the completed sensor holder.
With recent changes to the FAA rules, it is now possible for drone pilots to fly at night without jumping through as many hoops as before. To be eligible to fly at night, I had to take the update course for my “Part 107” certificate. I also must have an anti-collision light on my drone that is visible for 3 miles in any direction (that sucker is bright!).
When flying at night, you need to be very aware of your surroundings so you don’t hit something you can’t see. It’s best to check out the location in daylight hours to be sure there are no wires or other such items that you might encounter.
I flew my first night flight a couple weeks ago just to try it out. I flew from my deck, which is surrounded (and partially covered) by trees. I know where they are, and where on my deck I am clear of overhead obstructions. Landing is the tricky part — making sure that I am not descending into the trees or onto my roof. I was successful in flying a short flight and taking a few photos.
Several days later I flew from the Edmonds waterfront. I walked up the beach until I was clear of other beach-goers and had a good place to take off and land (a flat, almost-level, rock). My goal was to get some shots showing the Edmonds Ferry at or near the dock and the city lights of Edmonds. I was successful.
I flew my DJI Mini 2, which is very lightweight and has a 12 megapixel camera. I would like to try again with my DJI Phantom 4 Pro, which is heavier and has a better-quality 20 megapixel camera. I think the heavier drone will probably be a bit more stable, which will improve the sharpness of the photos taken with the slow shutter speeds required. Although, looking at the photos, the sharpness is quite good considering the camera is “sitting” on a platform floating in the air, subject to wind and motor/propeller vibration. Shutter speeds were between one-third of a second and one second, with ISO varying from 1600 to 3200. With the small sensor on the DJI Mini 2, these high ISOs made for somewhat grainy photos.
The photo below was shot as a series of nine RAW photos. The drone was positioned at one point in the sky, then three photos were shot using exposure bracketing (each photo with a different exposure) to capture the wide brightness range. Then the drone was rotated, and another three shots were taken. I did this three times. Each set of three photos was merged using Adobe Lightroom Classic to form one HDR photo, resulting in three HDR photos, each with a slightly different view. These resulting three photos were then merged into a single panorama photo, again using Lightroom, to create the final image.
Edit 2/13/22: If you want to see the above photo in larger size, look at my Flickr album here.
I’ve wanted to build a custom sundial for my home for years. I have a book (actually, more than one) about designing sundials. The primary one I use is Sundials: Their Theory and Construction by Albert Waugh. This book has lots of information about various types of sundials and includes formulas for designing sundials. I have thought about designing and building a traditional sundial that would be mounted in my front yard, or maybe a vertical sundial on my garage door (which gets sunshine much of the day, but not late afternoon). But that hasn’t happened.
Now that I have a 3D printer, I decided I could make a small sundial (my printer’s print bed is only about 9″ across) using that. I searched for designs, but couldn’t find a sundial I liked in the normal places to find 3D objects to print. I found one that was OK on Thingiverse, but it wasn’t really what I was looking for. Time to design my own!
I wanted to be able to easily modify the sundial for different locations. After all, if I’m going to make a dial for myself, I’m sure I have friends that would like one. And I want to easily customize it to make different sizes.
The sundial I found on Thingiverse had the base and gnomon (that’s the piece that sticks up to cast the sun’s shadow) all in one piece, which made it rather bulky to send in the mail. I wanted something that could be made flat for shipping. So the gnomon needed to be separate from the base, but easily attached.
With all of these requirements, it seemed to me I needed something where I could specify parameters to make it easy to customize, and then based on these parameters do “a lot of math” (not actually so much, but sines and cosines, at least). This is a different way of designing than using Fusion 360 or some other similar CAD program, like I did for the protective feet in a previous blog post. I needed something that could calculate angles and create shapes based on these calculated angles. Is there such a thing? But, of course! There is OpenSCAD, “The Programmers Solid 3D CAD Modeller”. This tool is basically a programming language in which you describe shapes. You write a program, which can include parameters that are used in the calculations. Just what I needed for this project!
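For the curious, the heart of the “sines and cosines” for a horizontal sundial is the standard relation tan(θ) = sin(latitude) × tan(15° × h), where h is hours from solar noon and θ is the angle of that hour line from the noon line. Here’s that calculation sketched in Python rather than OpenSCAD, just to show the math:

```python
import math

def hour_line_angle(latitude_deg, hours_from_noon):
    """Angle of an hour line from the noon line on a horizontal dial.

    Standard relation: tan(theta) = sin(latitude) * tan(15 deg * h).
    """
    hour_angle = math.radians(15 * hours_from_noon)
    latitude = math.radians(latitude_deg)
    return math.degrees(math.atan(math.sin(latitude) * math.tan(hour_angle)))

# At 45 degrees north, the 3 pm hour line is about 35.3 degrees from noon.
print(round(hour_line_angle(45.0, 3), 1))  # 35.3
```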
The first thing I did was to determine what parameters I would need, i.e., the values I would want to be able to easily change. Obviously, the latitude and longitude of the location where the sundial would be “installed” would have to be easily changeable. What else? How about the size of the base so I could designate whether the sundial would be a 3″ dial, or a 6″ dial, or some other size. Here are the parameters I came up with (as shown in OpenSCAD):
In OpenSCAD, these are dimensionless parameters, but the sizes get interpreted in millimeters by my slicing program. So think of these as sizes in millimeters, except for the locationName, which is text, the latitude and longitude, which are degrees, and the timeZone, which is hours. So the dial described above is 120mm on a side, which is very close to 5 inches.
Here is a photo of the sundial base created by the above parameters:
Pretty simple, right? A Cuboid (a cube with unequal size sides) for the base, with another cuboid subtracted from it (the depression in the middle), a bunch of cuboids for lines added at various angles, and another cuboid subtracted from it where the gnomon will fit in, then some letters and numbers stuck to the top surface around the edges. Nothing to it! 🙂
And the gnomon is really simple. Just a cuboid the size of the slot it will fit into, and another cuboid to cut away the upper portion at the correct angle (the latitude of its location).
Once you print the base and the gnomon, the gnomon fits into the slot in the base:
With the base and gnomon apart, they can easily be mailed in an envelope. I have sent several to friends in padded envelopes, which can be sent inexpensively, with no problems.
Of course, this all seems simple now. I’ve already done it. Actually creating the sundial took me several days of work to get it just right. A lot of that time was learning OpenSCAD (I’m still just a novice), and also deciding how I wanted my sundial to look. Not to mention getting the formulas right for the basic dial. It took some time to get the hour numbers to print correctly on the dial border. Some of the logic was like, “if the hour line intersects the top border (not the left or right borders), print on the top border (centered vertically), otherwise print on the left or right border (centered horizontally), but don’t print on the bottom border (because the location text is there).” There are still some edge cases where the numbers print in the “wrong” location (which depends on your definition of wrong), but they haven’t occurred often enough yet for me to fix the logic.
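To give a flavor of that placement logic, here is a simplified Python sketch. The names and geometry are mine, for illustration only – the actual OpenSCAD code handles more cases. Note the function never returns the bottom border, since the location text lives there:

```python
import math

def number_border(angle_deg, half_width, height):
    """Pick the border an hour-line number goes on.

    angle_deg is the hour line's angle from the noon (straight-up)
    direction; half_width and height describe the dial face.
    """
    # Horizontal offset of the line where it crosses the top edge
    offset = abs(math.tan(math.radians(angle_deg))) * height
    if offset <= half_width:
        return "top"
    return "right" if angle_deg > 0 else "left"

print(number_border(0, 60, 100))    # top (the noon line)
print(number_border(55, 60, 100))   # right (a late-afternoon line)
```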
For the curious, the code for the Gnomon is:
translate([0,-gnomonDepth,0]) cube([gnomonBaseLength, gnomonBaseLength+gnomonDepth, gnomonWidth]); // Full gnomon
//subtract linear portion above gnomon
rotate([0,0,latitude]) translate([0,0,-1]) cube([gnomonBaseLength*4, gnomonBaseLength*4, gnomonWidth+2]);
You might recognize some of the parameters to the cube() function as input parameters above, for instance gnomonDepth and gnomonWidth. The other parameters to the cube function (like gnomonBaseLength) are calculated from the input parameters.
In my last post, already several months ago, I promised another 3D printer post. That is still coming. It’s half written. Make that a quarter written. I’ve been sidetracked, not to mention that my laptop computer bit the dust and I haven’t yet decided what to replace it with.
My first 360° panorama post was a little over a year ago, Feb. 4, 2020, where I discussed how 360° panoramas were made and showed one from Gates Pass near Tucson, AZ. My second post on panoramas was written on March 9, 2020, noting that 360° panoramas could be displayed on YouTube.
So, what’s new with panoramas?
First, 360° panoramas can be displayed on Flickr (I knew that, but had never tried it). Here’s my first panorama on Flickr. Flickr isn’t as good at displaying these as it could be – maybe it will improve in the future. The first problem I noticed is at the very bottom of the photo – directly below the camera. There’s some distortion there that shouldn’t be there. Also, it is more difficult to zoom in and out with the mouse scroll wheel, as it usually scrolls the page instead. And it was difficult to go into full-screen mode, and once there I wasn’t always able to pan around the image.
It is possible to display panoramas interactively on WordPress, but only if I pay for a “professional” level. Since I don’t make any money from this site, I can’t really justify doing that. If you wish to see my photo(s) in a better viewer, take a look at it (them) in Roundme. This photo was taken a few days ago while on a cross-country ski outing to the top of Amabilis Mountain. 11+ miles and 2000’+ elevation gain, but the views were totally worth it! What a gorgeous day we had. Here are the rest of the photos I shot that day.
All of the 360° panoramas I have posted in the past were shot by using my DSLR camera mounted to a tripod (or, in one case, handheld). The latest two were shot from a drone from tens of feet to several hundred feet above the ground.
I got my first drone 3+ years ago, but it’s a bit too big to take on a backpack or cross-country ski trip. About a month ago I got a much smaller drone that is something I can take along with me. The drone itself weighs about 1/2 pound. I carried it in my backpack on my cross-country ski trip.
The larger drone in the photo above is a DJI Phantom 4 Pro, and the little guy is a DJI Mini 2. Both drones can automatically shoot a series of photos to be stitched into a 360° panorama photo. I then use the program PTGui to stitch the multiple images into a panorama image.
If you are curious, the panorama image is just a regular JPEG file, although it is stretched “a bit” at the top and bottom. As mentioned in my first post, it is exactly twice as wide as it is high – 360° wide and 180° high. The right and left edges join together in the panorama viewer, and the top and bottom edges are compressed to display as a single point – straight above the camera for the top edge and straight below for the bottom edge. Some additional metadata is added to the file so that the viewer program knows how to interpret the file. Here’s what the photo looks like when viewed without a panorama viewer.
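Because the file is a plain 2:1 equirectangular image, converting a view direction to a pixel position is just proportional math. A minimal sketch (the function name and layout conventions are mine, not from any particular panorama library):

```python
def equirect_pixel(lon_deg, lat_deg, width):
    """Map a view direction to (x, y) in a 2:1 equirectangular panorama.

    lon_deg in [-180, 180): 0 is straight ahead; lat_deg in [-90, 90]:
    +90 is straight up. The image is exactly twice as wide as it is high.
    """
    height = width // 2
    x = (lon_deg + 180.0) / 360.0 * width   # full 360 degrees across
    y = (90.0 - lat_deg) / 180.0 * height   # 180 degrees top to bottom
    return x, y

# The center of the image is straight ahead, at the horizon.
print(equirect_pixel(0, 0, 7200))  # (3600.0, 1800.0)
```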
There you have it – one more 360° blog post. Next (I hope) I’ll actually finish writing the 3D printer blog I promised a few months ago. Stay tuned!
In a previous blog I told about my new 3D printer, and showed a few photos of it printing objects that I had found online. What else can you do with a 3D printer? You can design your own items to print, and those can be anything you can imagine. They might be purely decorative, or could be functional. Here’s something simple that is functional.
Julie has a basket that stands on four metal legs. Unfortunately, the little plastic feet that went on the legs have long since disappeared, and now they scratch the floor.
Simple solution: make some protective feet.
Fortunately, this really is a simple solution. I have some filament that I can print called TPU (thermoplastic polyurethane), which is flexible but also tough. It sounds like the perfect material for this.
I measured the leg, and it was just a little under 19mm across. So I needed to design a foot that would fit over this 19mm square leg, hold up to some use, and stay on the leg. I decided to make it 2mm thick, as that seemed like a good thickness to not be too flimsy, yet not be overkill.
I use Fusion 360 from Autodesk for most of my 3D designing. It’s free for hobbyists, and very capable. There are many other 3D design programs that would have worked, but that’s the one I’m most proficient with at this time. Maybe I’ll switch in the future. There is a great open source 3D design program I’m interested in playing with, but for this project I used Fusion 360.
How would you go about designing a foot for this? It seems like a simple object, and it is. Just a cube with a hole in it. Like this:
I won’t go into any great detail about designing this, but will give some basic steps.
The leg is 19mm square, and I want the walls to be 2mm thick. 19mm + 2mm on each side makes 23mm. Draw a 23mm square.
Extrude that up 23mm, making a 23mm cube.
On the top surface of that cube, draw a centered 19mm square, leaving 2mm of the original cube on each side.
Extrude that down 19mm, subtracting this 19mm cube from the 23mm cube. That gives us the basic shape shown above. Note that by doing this we are left with a 4mm bottom. I could have reduced the height of the 23mm cube so that the sides and bottom were uniformly 2mm, but I figured a little extra material on the bottom would just add to the wear resistance.
Export this object as a 3D mesh, slice it, print it, and test it. I found that it fit, but was a bit loose, and probably would fall off over time.
Go back to Fusion 360 and tweak the internal (subtracted) cube to 18.5mm. If you’re paying attention (you were, weren’t you?), you’ll notice that the walls are now 2.25mm thick. I didn’t see any reason to go back and adjust this, although it would have been easy to do. At this point I believed I probably had a workable foot. In Fusion 360, it looked like this:
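The arithmetic behind those wall thicknesses is trivial, but it’s handy to jot down when you’re iterating on a fit (a throwaway sketch):

```python
def wall_thickness(outer_mm, inner_mm):
    """Wall thickness of a square sleeve, given outer and inner widths."""
    return (outer_mm - inner_mm) / 2

print(wall_thickness(23, 19))    # 2.0 -- the original design
print(wall_thickness(23, 18.5))  # 2.25 -- after shrinking the inner cube
```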
Kind of square-ish with sharp edges. Fusion 360 to the rescue. I “filleted” (rounded) all edges, inside and out, with the exception of the bottom edges, which I “chamfered” (cut at an angle). Why treat the bottom differently? Because 3D printers like mine have a problem with steep overhangs, and a fillet starts out with almost a 90° overhang, whereas a chamfer has only a 45° overhang and can be printed by most printers. Because this object is so small, it probably wouldn’t have made any real difference, but it’s a good habit to form when designing objects for 3D printing. I now have this, which looks a lot like the photo at the top of the page:
I think it looks good! Export it, Slice it, Print it. Test the fit…
It looks like a winner to me. It fits snugly, won’t fall off, and will protect the floor. The color? Just happens to be the color of TPU filament I have and a color Julie likes. Which is probably why I have this filament. 🙂
Watch for a future blog on designing a more complex object using a totally different 3D design program.
Something I’ve wanted to do for years is to create a timelapse video of the night sky star motion. I made it one of my goals for this year to accomplish that. I’ve been spending a lot of time in places that have terrible views of the night sky. Mostly, too much atmospheric haze and/or too much light pollution.
In July, when Comet Neowise was visible, we found a place a short drive away that had a pretty good night sky view, and was above much of the haze. We went there to try to get a good view, and maybe a photo or two, of the Comet.
We found that this location was also good viewing of the Milky Way.
It might have been a good time to try for a star timelapse with the Milky Way included, but it was late and I didn’t take the time to try it.
In September we camped at Red Bridge State Wayside in Oregon. The campground is a great place, but the sky is mostly blocked by beautiful Ponderosa pine trees. It does have a pretty good view of the sky from an area near the parking lot. I took my camera and tripod with the hope of getting some decent sky images.
Toward dark I set up on the grass looking over the parking lot and took several test exposures. I was shooting with my Pentax K-3 (crop-frame) camera with a Tamron 10-24mm lens. The exposure I settled on was 6 seconds at f/3.5, ISO 6400. I set the camera to shoot 500 photos, one every 20 seconds. I turned off in-camera noise reduction, thinking I could save battery and do it in Lightroom later.
The first photo was shot at about 9:20 pm, and the last photo just past midnight. I sat in a chair near the camera for the almost three hours it took, reading a book on my Kindle. Fortunately the night was relatively warm and getting cold wasn’t too much of a problem. I did get out of the chair a few times to do some jumping jacks to stay warm.
OK, now for what I did wrong.
I judged the exposure by what the image looked like on the back of the camera. Remember, it was almost pitch black when I was doing this. The image looked great! The next morning I looked at the images. I couldn’t believe that all frames were totally black. How could I have done that? Then I realized they were underexposed so badly that I couldn’t see anything in normal light, but, viewed in a darkened room, there was some image there. Don’t judge the image exposure by what your eye sees when it’s almost totally dark out! Lightroom to the rescue (sort of).
Turning off in-camera noise reduction was a mistake. The Pentax K-3 does quite well at keeping the noise down, but at ISO 6400, I really needed to let the camera do what it could. Again, Lightroom noise reduction helped (but I wouldn’t say it rescued me).
Once I had 500 RAW images, I imported them all into Lightroom and did what I could to adjust exposure and reduce noise. Then exported them all as JPEG files (a painfully slow process on my ancient laptop computer). Next I fired up Adobe After Effects, brought in all of the JPEG images, and created a 1080p video at 30 frames per second. 500 frames at 30 frames per second results in a video only 16-2/3 seconds long!
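The shooting-versus-playback arithmetic explains why nearly three hours of sitting in the cold compress into seconds of video:

```python
def capture_duration_s(frames, interval_s):
    """Total shooting time for an intervalometer run, in seconds."""
    return frames * interval_s

def video_length_s(frames, fps):
    """Playback length when every captured frame becomes one video frame."""
    return frames / fps

print(capture_duration_s(500, 20) / 3600)  # ~2.78 hours of shooting
print(round(video_length_s(500, 30), 2))   # 16.67 seconds of video
```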
The resulting video has lots of noise and color changes due to the extreme exposure adjustments I made. But I think it’s acceptable for my first attempt. Next year (or maybe this winter) I’ll do this again and improve my results.