If you’re paying attention to the news or social media, you’ve almost certainly heard about ChatGPT. Do you know what it is?
I created an account to play with it and learn more. I asked ChatGPT to tell me about itself. Here are the unedited results. The headings below are my questions to ChatGPT, and the text is its reply.
If you would like to play with ChatGPT, go here: https://chat.openai.com/. I had to try several times to get signed up, as their servers are very busy.
What Is ChatGPT?
ChatGPT is a pre-trained language model developed by OpenAI. It is based on the transformer architecture and is trained on a massive amount of text data from the internet. It’s capable of understanding the context and generating coherent, fluent and human-like text.
ChatGPT can be fine-tuned for various natural language processing tasks such as text generation, language translation, text summarization, question answering, text completion and more. It’s mainly used in conversational context, such as in chatbots, virtual assistants, and customer service bots, where it can generate human-like text and respond to user input. It can also be used to generate content for websites, social media, and other platforms, such as product descriptions, articles, and blog posts.
It’s a powerful tool for natural language processing and has a wide range of applications in industries such as finance, healthcare, e-commerce, and more. It can help businesses to automate customer service, generate marketing content, improve search engine optimization and more
What is ChatGPT’s Purpose?
ChatGPT is a large language model developed by OpenAI. Its primary purpose is to generate human-like text. It can be used for a variety of natural language processing tasks such as language translation, text summarization, question answering, text generation and text completion.
ChatGPT is built on the transformer architecture, which is a neural network architecture designed for natural language processing tasks. It’s trained on a massive amount of text data from the internet and it’s able to understand the context and generate coherent and fluent text.
One of the main use cases for ChatGPT is to generate human-like text in a conversational context, such as in chatbots, virtual assistants, and customer service bots. It can also be used to generate content for websites, social media, and other platforms, such as product descriptions, articles, and blog posts.
It can also be used for other natural language processing tasks such as sentiment analysis, named entity recognition, and text classification.
Overall, ChatGPT’s purpose is to generate human-like text and make natural language processing tasks more efficient and accurate.
On September 8, 2022, Edmonds, Washington, had a low tide of -1.7 feet. That happens frequently, so it’s nothing too exciting. Except this day we had sunshine and I had my drone up in the air. Lots of people were out enjoying the beach, and it seemed like the perfect time and place for a few photos.
The ferry was at the dock. At these low tides, it is possible to walk under the ferry dock without getting your feet wet (at least not very wet – wear boots).
This photo is part of a 360° panorama. If you want to view the whole panorama, click on the photo, wait a few seconds for the panorama to load, then use your mouse to spin it around and look in all directions.
Last year when we were in Tucson, staying at the Lazy Days KOA, I wanted to take a decent photo of our campsite. The best shot would have been from a drone, but I didn’t think the KOA would allow me to fly my drone there. Not to mention that campground is in the airspace of a commercial airport and two(!) military airbases. There really was no chance of getting permission to fly a drone there!
So, what to do? I did some searching, and found photos that looked like photos from a drone that were taken without a drone. How did they do it?
One person who had some pretty interesting photos had taken them with a 10′ "selfie" stick. That's right, 10 feet! That sounded great to me, so I purchased one. Click here to get your own from Amazon (this is an affiliate link).
OK, it's not really 10 feet. It's actually 3 meters, or about 9.8 feet. It's a carbon fiber pole that extends in sections, so you don't have to extend it to the full 3 meters. When collapsed it's only about 18″ long.
When holding this pole up over my head, the camera is about 16′ above ground. Perfect for many photos.
I attached my GoPro to the end of the pole and used my cell phone to control the camera. I could see what the camera was seeing through my phone, and snap the shot (or shoot a video). It takes a little practice to hold the pole steady and aim it where you want.
The pole seems to be made well. It locks into position and stays there until you want to collapse it (a slight twist at each section releases it).
Using this pole, I was able to get some pretty nice shots of our KOA campsite, as seen below.
Here is my latest diversion. This is the bark of a Ponderosa pine tree near Camp Sherman, Oregon. I shot 26 photos with my Pixel 5a cell phone, then processed the photos with WebODM into a 3D model.
I did further processing in Blender to eliminate a few extraneous bits and then created an animation of the model.
Below is the animation I created using Blender. Click on the image to start the video, then press "f" to view it full screen. Press "f" again to exit full screen mode (I suggest viewing it in full screen to really see it):
This model can also be seen in Sketchfab, a 3D viewer. Click on the image below, and after the model loads, click and drag to see it in 3D. Same as for the animation, press “f” to view it full screen and “f” again (or escape) to exit full screen mode:
I entered the following in the WebODM (Open Drone Map) forum. Some of it is a bit technical for this blog, but I thought it might be interesting to some people.
Just for fun, and to learn more about WebODM and Blender, I flew my DJI Mini 2 drone around my deck to create a model. My deck has trees on 3 sides of it and overhanging it. Flying between the tree branches to get some of the shots was a bit challenging. There was a bit of a pucker factor a few times when flying inches below a branch and the drone started drifting upwards! (The Mini 2 has only forward and downward sensors, which is good here – I could never have flown that close to the trees if there were active sensors the other directions.)
I shot 142 images and processed those and saw some areas that didn’t seem to have adequate coverage. So I shot another 49 images to fill in some areas. That improved the places I concentrated on, but it seemed that some other areas decreased in quality. The glass railing and adjacent sunroom windows and doors caused some oddities, as expected. One thing I found odd is that the deck and other items seem to be “reflected” in the undersides of the tree branches.
My processing system is Windows 10 Pro on a laptop with 64GB of RAM. I initially processed this using Docker/WebODM, but ran out of memory when I increased pc-quality to ultra. I then processed it in Windows native WebODM, and it processed in 24+ hours. The WebODM timer showed 36 minutes, so I don’t have accurate information…
I postprocessed this with Blender to clean up some of the extraneous parts of the model, but purposefully left most of the trees. To get the upload files under the 100MB limit for the free Sketchfab account, I decimated the model in Blender to 70% and converted the PNG files to JPG.
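The file-size budget works out roughly like the sketch below. These are made-up numbers, not my model's actual sizes, and mesh file size only scales approximately with Blender's Decimate ratio, so treat this as a back-of-the-envelope check, not a guarantee:

```python
def projected_upload_mb(mesh_mb: float, texture_mb: float,
                        decimate_ratio: float, jpg_ratio: float) -> float:
    """Rough upload-size estimate after decimating the mesh and
    re-encoding PNG textures as JPG.

    decimate_ratio: fraction of faces kept (Blender Decimate 'Ratio')
    jpg_ratio: assumed JPG size as a fraction of the original PNG size
    """
    return mesh_mb * decimate_ratio + texture_mb * jpg_ratio

# Hypothetical example: a 110 MB mesh decimated to 70%, plus 60 MB of
# PNG textures that shrink to roughly 25% of their size as JPGs.
print(projected_upload_mb(110, 60, 0.7, 0.25))  # 92.0 MB, under the 100 MB cap
```

If the estimate lands over the cap, you can lower the decimate ratio further or compress the textures harder before re-exporting.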
Click on the image below to activate the model. Use your mouse buttons to change the view, and your scroll wheel to zoom in and out. Type “F”, or click the double-ended arrow in the lower right, to open it in full screen. (I highly recommend viewing it in full screen.)
Last fall we were traveling and spent some time in the Tucson / Lazydays KOA Resort. As part of the amenities, we were given free Wi-Fi access during our stay. The Internet there is managed by Tengo Internet, which we understand loosely means "we have Internet." Ummm. Maybe. Sometimes. I think it can be interpreted similarly to "Yes, we have no bananas."
Yes, we had Wi-Fi. We were right across a street from the Wi-Fi antenna, and we had a good Wi-Fi signal. Having a good Wi-Fi signal is not synonymous with having good Internet. Or any Internet at times.
I found that at about 4:00 am I had decent Internet. At 4:00 pm the Internet was so slow it was practically unusable. At 8:00 pm it was so slow that my phone said I had no Internet and it switched to cellular data, at $10/gigabyte. Ouch!!
Tengo Internet had a paid option that they guaranteed would provide 5 megabits/second speeds. I paid. It didn’t help. I still essentially had no Internet connectivity at the busy times of day, and I don’t think I ever saw 5 Mb/s speeds (except maybe at 4:00 am).
After a week of this, when we were depending on having Internet available, I decided it was time to find a solution. I looked into standard mobile hotspot providers, like Verizon and T-Mobile. The problem was that you had to pay a monthly subscription fee whether you were using the hotspot or not. And since we wanted it just when traveling, sometimes for a few days at a time, that didn’t seem like a good solution. And what if the provider I chose didn’t have good coverage in the area I needed it?
Eventually I stumbled across SolisWiFi.co, known at that time as SkyRoam. (Some areas of their website still identify it as SkyRoam.) I purchased a Solis Lite WiFi hotspot, about the size of a hockey puck.
This device has a built-in battery that lasts up to 16 hours. Install an app on your mobile phone to control the hotspot and purchase Internet access, and you're ready to go. When powered on, the Solis Lite finds the best provider to connect to and provides you with 4G Internet. It's not 5G, but good 4G was plenty fast enough for us.
You can connect up to 10 devices to the hotspot. That easily covered both of our mobile phones, an iPad, a laptop PC, and an Echo Dot. Range was easily 20 feet or more.
The Solis Lite worked well when sitting around camp. When we went to the pool, we took it with us. Several times we went to a picnic area in Saguaro National Park, and took the hotspot with us there. We had no cell phone service – none at all. Yet the hotspot was able to find enough signal from some carrier that we were able to access the Internet without problem.
We also carried the Solis Lite in the car when we were traveling. We typically track our trip on maps and other applications on my iPad or our phones, and that map updating can use significant data. Using the Solis Lite hotspot in the car kept us from using our expensive cellular data.
Where does it work? They say in over 130 countries worldwide. USA coverage seems to be pretty good. I don’t recall encountering any place that it didn’t work for us.
How much? The hotspot was about $125. There are several purchase plans for Internet usage. You can purchase by the gigabyte, by the day, or by the month (with usage caps). If you can predict your usage, the monthly plans are probably the cheapest (your mileage may vary). As I write this, a USA monthly subscription that includes 10GB of data is $40 (there are several other options). That's $4/GB, much cheaper than my phone data plan of $10/GB (I'll be shopping around soon…). If you exceed your monthly plan, you can add data at any time. Global plans are a bit more expensive than USA plans.
If you choose to purchase by the gigabyte, pricing starts at $8 for 1 GB, $35 for 5 GB, $60 for 10 GB, and $100 for 20 GB.
The Global Unlimited Daypass is $9. Unlimited data. Anywhere. Great if you need data just for a day. You can buy these in advance (watch for sales) and activate them when you need them.
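To see how these plans stack up against cellular overage rates, here's a quick cost-per-gigabyte calculation using the prices quoted above (prices change, so treat this as a snapshot, not current pricing):

```python
def cost_per_gb(price_usd: float, data_gb: float) -> float:
    """Return the effective cost per gigabyte for a data plan."""
    return price_usd / data_gb

# (price in USD, data in GB), from the plans quoted in this post
plans = {
    "Solis 1 GB": (8, 1),
    "Solis 5 GB": (35, 5),
    "Solis 10 GB": (60, 10),
    "Solis 20 GB": (100, 20),
    "Solis monthly 10 GB": (40, 10),
    "My phone plan": (10, 1),  # $10/GB cellular data
}

for name, (price, gb) in plans.items():
    print(f"{name}: ${cost_per_gb(price, gb):.2f}/GB")
```

The monthly plan's $4/GB is the clear winner over the $10/GB phone rate, as long as you actually use most of the allotment before it expires.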
Check the plans carefully. They seem to change from time to time, so don’t assume that a plan you had six months ago is identical to the plan you can get now.
Where can you get the Solis Lite? Last fall it was available from Amazon. I purchased it directly from SkyRoam, and was very disappointed in the shipping. They don't seem to care that you might want it soon. It took several days for it to ship, and I think they paid extra to have USPS delay it for a few more days. Right now, Solis WiFi's site says it's out of stock. There is one left on Amazon (search for Skyroam Solis). I also found it at Target and eBay. It's out of stock at several other places, which makes me wonder if there is a supply problem, or if they have stopped manufacturing that model and may be coming out with something new.
Was the Solis Wi-Fi Hotspot without problems? No. Several times the hotspot locked up with an error message in the app, and I had to power it off and back on to get it working. When I tried to change the password through the app, I found that the display was white on white – not exactly readable. At one point I had a few GB of data left in the monthly plan, and I was metering it out to avoid buying extra data to make it through the month. The plan expired many hours before the app indicated it would and I lost the remaining data. I suspect a problem in the app having to do with the difference between UTC and local time caused that, but support was unable to tell me what had happened. Overall, though, it worked well.
Note that my analysis and purchase was about six months ago. Things change quickly, so check around for other options. That said, we have been happy with the Solis Lite, and I think we saved some money on our one trip with it. There is a convenience to being able to access multiple carriers. Keep an eye on the app or the Solis website for deals. They frequently have discounts on data (there is a 30% off monthly plans right now). You can always buy data for use in the future.
This was going to be a blog post about using Blender to create 3D scenes. Sort of. I'm just barely starting to learn Blender, so it wasn't going to be anything fancy or in-depth.
But, I went down a rabbit hole. Imagine that! I started with the ASDM Rock I photographed a few months ago (see my post PHOTOGRAMMETRY: 3D Models from Photos), and was going to try to add some sunshine, and animate the sun moving across the rock, and maybe in the future create some somewhat realistic looking grass around the rock. But, I got sidetracked and decided to try to 3D Print the rock. Not at full scale(!). Just a little plastic rock I could put on my desk.
OK, so what is Blender? From Wikipedia, “Blender is a free and open-source 3D computer graphics software toolset used for creating animated films, visual effects, art, 3D printed models, motion graphics, interactive 3D applications, virtual reality, and computer games.” Did you read all of that? Free. Open Source. 3D computer graphics software. Animation.
Blender is used to create everything from 2D and 3D still pictures to full length animated movies. Wow!
I’ve known about Blender for several years (at least). I’ve looked at it a few times, but every time the learning curve scared me off. But it can do so much. And FREE, so no big investment (except my time) to play with it. After playing a bit with Open Drone Map, creating 3D models from “just a bunch of photos,” I thought maybe I should look at Blender again. So for the last 4-6 months I’ve been watching tutorials on YouTube and LinkedIn Learning, being awed by what others have done with Blender, and wondering if I could accomplish anything significant with it.
Very brief recap of my blog on Photogrammetry: I shot 40 photos with my cell phone of this cool looking rock that is located in front of the Arizona-Sonora Desert Museum outside of Tucson, AZ. I then used Open Drone Map to process these 40 photos to create a model of the rock, and used Blender to do some very minor editing to eliminate the extraneous parts of the model. I uploaded the model to Sketchfab, where you can view it in all of its 3D-ness.
Starting with this same model, I used Blender to create a base and export it to an STL file, which can be used to print a 3D model. That sounds rather mundane, but I spent many hours trying to get the initial model ready for 3D printing. Several YouTube videos later, I managed to create something that would print nicely. I also added a little sunshine to the scene, just because I could.
For comparison with the printed model, here is one of the photos in the sequence that was used to create the model.
Here is the ASDM Rock, as rendered in Blender. I added a little sunshine to the scene, just because I could :-).
The final result? Here is my printed “rock.” I think it rather accurately represents the original rock!
(From Autodesk’s website:) What is photogrammetry?
Photogrammetry is the art and science of extracting 3D information from photographs. The process involves taking overlapping photographs of an object, structure, or space, and converting them into 2D or 3D digital models.
Photogrammetry is often used by surveyors, architects, engineers, and contractors to create topographic maps, meshes, point clouds, or drawings based on the real world.
3D models are created by taking a series of photos of an object from many different directions. The object could be something small, like a sculpture. Or something large, like a movie set. Or something in between, like a building. The camera could be mounted on a tripod and the small model turned to different positions, or the camera could be moved around the small model to take many different views. For an even larger model, the camera could be carried by a drone, for instance, and moved around a very large area to take many images.
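As a rough rule of thumb (my own back-of-the-envelope estimate, not a formula from any photogrammetry package), you can estimate how many photos one full orbit of an object needs from the overlap you want between neighboring shots:

```python
import math

def photos_per_orbit(fov_deg: float, overlap: float) -> int:
    """Estimate photos needed for one full 360° orbit of an object.

    fov_deg: camera's horizontal field of view in degrees
    overlap: desired fractional overlap between neighboring shots (e.g. 0.7)
    """
    step = fov_deg * (1 - overlap)  # degrees of *new* coverage per photo
    return math.ceil(360 / step)

# A typical ~66° phone-camera field of view with 70% overlap:
print(photos_per_orbit(66, 0.7))  # 19 photos for one orbit
```

Shooting two or three orbits at different heights, as described above, multiplies that count accordingly, which is why a small object can easily end up needing 40+ photos.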
I’ve played with 3D models a bit over the last few years. Once you have acquired images of your target, they must be processed in some way to create a 3D object, usually a “mesh” of many triangles that simulate the original model. Much of the software to do this is relatively expensive (hundreds or thousands of dollars), or rented by the month. However, not all software is expensive. After looking at other options, I found Open Drone Map (or ODM). The original purpose of ODM apparently was to create maps and/or models from photos taken from a drone. However, the software doesn’t really care whether the camera was on a drone, or handheld, or on a tripod.
Using ODM, I was able to successfully process several sets of photographs I have accumulated over the last few years. My smallest models were created from about 40 photos shot with my cell phone and the largest I’ve created so far used a couple hundred photos shot with a drone. People successfully use ODM with 5,000+ photos, although that may take days to process, even on a powerful computer.
Once you have created a 3D model you must use special software to view it. Surprisingly, current versions of Windows do come with a simple 3D viewer, but it doesn’t seem to be very robust. There are also websites where the 3D model can be uploaded, then you can view the model with a web browser.
Below is one of the first models I created. It is a tabletop scene of a small wood manger. This model was created from 48 photos shot with my DSLR as I walked around the table, taking photos at different heights to be sure everything was visible. Click the "play" button, wait for it to load, then use your left mouse button to spin the model around on your screen, and your mouse scroll wheel to zoom in and out. To see the model full screen, press the "f" key. (I recommend trying that — press the "f" key again to exit full screen mode.)
The photo below is one of the 48 photos that make up the model above.
Another 3D model I created is an interesting rock at the entrance to the Arizona-Sonora Desert Museum (ASDM). This one is created from 40 photos I shot with my cell phone as I walked around it several times.
I used several other programs to generate all of the models shown here. First is WSL – Windows Subsystem for Linux. The version of ODM I used runs on Linux, so this allowed me to run it in a Linux environment on my Windows computer. I used Blender to clean up (remove) the extraneous parts of the 3D images, which were then uploaded to Sketchfab. Other programs played more minor roles. Expect to see more about Blender in this blog in the future.
When camping, I frequently would like to know the temperature outside our 2020 T@B 320S Boondock Edge trailer as well as inside. I purchased a “ThermoPro TP60S Digital Hygrometer Indoor Outdoor Thermometer” through Amazon (if you purchase from this link I’ll earn a small commission at no additional cost to you) and mounted the indoor module on the wall next to the Alde control panel using Velcro.
Now, where to locate the outside sensor? I placed it in the propane tank / battery box, just setting it on the bottom. This seemed to work fine. The outside temperature seems to be relatively accurate except when the sun is shining directly on the box. The only problem I could see was that the sensor picked up a lot of dirt, and occasionally some moisture from sitting on the bottom. I was also concerned about dropping something on it and damaging the unit.
I have finally gotten around to moving the sensor to a safer location. I figured I could mount it over the flange at the back of the propane tank / battery box and it would be safely out of the way. When the lid is closed, there is a small gap below the lid where the mount can sit without interfering with the lid closing. Using Fusion 360, I designed a holder for the sensor.
I first measured the width of the flange at the top of the box, and eyeballed how I thought I would like the mount to sit on that flange. I measured the sensor, and made a rough drawing of what I wanted. Then I created a test part in Fusion 360. I just made the end of the sensor mount and about 10mm of the body. That way I could print it in a reasonable amount of time without using too much plastic filament to test the fit. Here's my first iteration:
I then tested this, and found that it didn’t hang the way I had hoped. It needed something to keep it from tilting.
So, on to iteration #2. I added a little leg to keep it from tilting.
This worked fine. Now that I had tested the hanger, and believed it to be correct, I added the rest of the structure in Fusion 360, and added holes in the bottom to improve air flow to the sensor, resulting in the completed sensor holder.
With recent changes to the FAA rules, it is now possible for drone pilots to fly at night without jumping through as many hoops as before. To be eligible to fly at night, I had to take the update course for my “Part 107” certificate. I also must have an anti-collision light on my drone that is visible for 3 miles in any direction (that sucker is bright!).
When flying at night, you need to be very aware of your surroundings so as not to hit something you can't see. It's best to check out the location in daylight hours to be sure there are no wires or other such items that you might encounter.
I flew my first night flight a couple weeks ago just to try it out. I flew from my deck, which is surrounded (and partially covered) by trees. I know where they are, and where on my deck I am clear of overhead obstructions. Landing is the tricky part — making sure that I am not descending into the trees or onto my roof. I was successful in flying a short flight and taking a few photos.
Several days later I flew from the Edmonds waterfront. I walked up the beach until I was clear of other beach-goers and had a good place to take off and land (a flat, almost-level, rock). My goal was to get some shots showing the Edmonds Ferry at or near the dock and the city lights of Edmonds. I was successful.
I flew my DJI Mini 2, which is very lightweight and has a 12-megapixel camera. I would like to try again with my DJI Phantom 4 Pro, which is heavier and has a better-quality 20-megapixel camera. I think the heavier drone will be a bit more stable, which should improve the sharpness of photos taken at the slow shutter speeds required. Although, looking at the photos, the sharpness is quite good considering the camera is "sitting" on a platform floating in the air, subject to wind and motor/propeller vibration. Shutter speeds were between one-third of a second and one second, with ISO varying from 1600 to 3200. With the small sensor on the DJI Mini 2, these high ISOs made for somewhat grainy photos.
The photo below was shot as a series of nine RAW photos. The drone was positioned at one point in the sky, then three photos were shot using exposure bracketing (each photo with a different exposure) to capture the wide brightness range. Then the drone was rotated, and another three shots were taken. I did this three times. Each set of three photos was merged using Adobe Lightroom Classic to form one HDR photo, resulting in three HDR photos, each with a slightly different view. These resulting three photos were then merged into a single panorama photo, again using Lightroom, to create the final image.
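The shooting pattern above — three bracketed exposures at each of three headings — can be sketched as a simple grouping of the nine RAW frames. The filenames here are hypothetical, and the actual merging was done in Lightroom, not in code; this just illustrates how the frames pair up:

```python
# Group nine bracketed RAW frames into three HDR stacks, one per heading.
raw_files = [f"DJI_{n:04d}.DNG" for n in range(1, 10)]  # hypothetical names

bracket_size = 3  # exposures per heading (-EV, 0 EV, +EV)
hdr_stacks = [raw_files[i:i + bracket_size]
              for i in range(0, len(raw_files), bracket_size)]

for heading, stack in enumerate(hdr_stacks, start=1):
    print(f"HDR {heading}: merge {stack}")
# Each stack merges to one HDR image; the three HDR images are then
# stitched into the final panorama.
```

Merging the brackets first and stitching second keeps the exposure blending consistent within each view before the panorama software has to align them.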
Edit 2/13/22: If you want to see the above photo in larger size, look at my Flickr album here.