PHOTOGRAMMETRY: 3D Models from Photos

(From Autodesk’s website:) What is photogrammetry?

Photogrammetry is the art and science of extracting 3D information from photographs. The process involves taking overlapping photographs of an object, structure, or space, and converting them into 2D or 3D digital models.

Photogrammetry is often used by surveyors, architects, engineers, and contractors to create topographic maps, meshes, point clouds, or drawings based on the real-world.

I’ve written in past posts about 360° panorama photos (360° Panoramas!, More 360° Panoramas!, and 360° Panoramas (again)). In a 360° panorama, the camera (the viewer) is at a single location looking out on the world. Today, we will visit what seems to be the opposite situation.

3D models are created by taking a series of photos of an object from many different directions. The object could be something small, like a sculpture. Or something large, like a movie set. Or something in between, like a building. The camera could be mounted on a tripod and the small model turned to different positions, or the camera could be moved around the small model to take many different views. For an even larger model, the camera could be carried by a drone, for instance, and moved around a very large area to take many images.

I’ve played with 3D models a bit over the last few years. Once you have acquired images of your target, they must be processed in some way to create a 3D object, usually a “mesh” of many triangles that approximates the original subject. Much of the software to do this is relatively expensive (hundreds or thousands of dollars), or rented by the month. However, not all software is expensive. After looking at other options, I found OpenDroneMap (ODM). The original purpose of ODM apparently was to create maps and/or models from photos taken from a drone. However, the software doesn’t really care whether the camera was on a drone, or handheld, or on a tripod.

Using ODM, I was able to successfully process several sets of photographs I have accumulated over the last few years. My smallest models were created from about 40 photos shot with my cell phone and the largest I’ve created so far used a couple hundred photos shot with a drone. People successfully use ODM with 5,000+ photos, although that may take days to process, even on a powerful computer.

Once you have created a 3D model you must use special software to view it. Surprisingly, current versions of Windows do come with a simple 3D viewer, but it doesn’t seem to be very robust. There are also websites where the 3D model can be uploaded, then you can view the model with a web browser.

Below is one of the first models I created. It is a tabletop scene of a small wood manger. This model was created from 48 photos shot with my DSLR as I walked around the table, taking photos at different heights to be sure everything was visible. Click the “play” button, wait for it to load, then use your left mouse button to spin the model around on your screen, and your mouse scroll wheel to zoom in and out. To see the model full screen, press the “f” key. (I recommend trying that – press the “f” key again to exit full screen mode.)

The photo below is one of the 48 photos that make up the model above.

Another 3D model I created is an interesting rock at the entrance to the Arizona-Sonora Desert Museum (ASDM). This one is created from 40 photos I shot with my cell phone as I walked around it several times.

I used several other programs to generate all of the models shown here. First is WSL – Windows Subsystem for Linux. The version of ODM I used runs on Linux, so WSL allowed me to run it in a Linux environment on my Windows computer. I used Blender to clean up (remove) the extraneous parts of the 3D models, which were then uploaded to Sketchfab. Other programs played minor roles. Expect to see more about Blender in this blog in the future.

A 3D Printed Thermometer Sensor Holder

When camping, I frequently would like to know the temperature outside our 2020 T@B 320S Boondock Edge trailer as well as inside. I purchased a “ThermoPro TP60S Digital Hygrometer Indoor Outdoor Thermometer” through Amazon (if you purchase from this link I’ll earn a small commission at no additional cost to you) and mounted the indoor module on the wall next to the Alde control panel using Velcro.

Now, where to locate the outside sensor? I placed it in the propane tank / battery box, just setting it on the bottom. This seemed to work fine. The outside temperature seems to be relatively accurate except when the sun is shining directly on the box. The only problem I could see was that the sensor picked up a lot of dirt, and occasionally some moisture from sitting on the bottom. I was also concerned about dropping something on it and damaging the unit.

I have finally gotten around to moving the sensor to a safer location. I figured I could mount it over the flange at the back of the propane tank / battery box and it would be safely out of the way. When the lid is closed, there is a small gap below the lid where the mount can sit without interfering with the lid closing. Using Fusion 360, I designed a holder for the sensor.

I first measured the width of the flange at the top of the box, and eyeballed how I thought I would like the mount to sit on that flange. I measured the sensor, and made a rough drawing of what I wanted. Then I created a test part in Fusion 360. I just made the end of the sensor mount and about 10mm of the body. That way I could print it in a reasonable amount of time, without using too much plastic filament, to test the fit. Here’s my first iteration:

Sensor Holder Test #1

I then tested this, and found that it didn’t hang the way I had hoped. It needed something to keep it from tilting.

So, on to iteration #2. I added a little leg to keep it from tilting.

This worked fine. Now that I had tested the hanger, and believed it to be correct, I added the rest of the structure in Fusion 360, and added holes in the bottom to improve air flow to the sensor, resulting in the completed sensor holder.

Available on Thingiverse at www.thingiverse.com/thing:4917124.

Night Drone Photography

With recent changes to the FAA rules, it is now possible for drone pilots to fly at night without jumping through as many hoops as before. To be eligible to fly at night, I had to take the update course for my “Part 107” certificate. I also must have an anti-collision light on my drone that is visible for at least 3 statute miles in any direction (that sucker is bright!).

When flying at night, you need to be very aware of your surroundings so you don’t hit something you can’t see. It’s best to check out the location in daylight hours to be sure there are no wires or other obstacles that you might encounter.

I flew my first night flight a couple weeks ago just to try it out. I flew from my deck, which is surrounded (and partially covered) by trees. I know where they are, and where on my deck I am clear of overhead obstructions. Landing is the tricky part — making sure that I am not descending into the trees or onto my roof. I was successful in flying a short flight and taking a few photos.

Several days later I flew from the Edmonds waterfront. I walked up the beach until I was clear of other beach-goers and had a good place to take off and land (a flat, almost-level, rock). My goal was to get some shots showing the Edmonds Ferry at or near the dock and the city lights of Edmonds. I was successful.

I flew my DJI Mini 2, which is very lightweight and has a 12-megapixel camera. I would like to try again with my DJI Phantom 4 Pro, which is heavier and has a better-quality 20-megapixel camera. I think the heavier drone will probably be a bit more stable, which will improve the sharpness of the photos taken with the slow shutter speeds required. Although, looking at the photos, the sharpness is quite good considering the camera is “sitting” on a platform floating in the air, subject to wind and motor/propeller vibration. Shutter speeds were between one-third of a second and one second, with ISO varying from 1600 to 3200. With the small sensor of the DJI Mini 2, these high ISOs made for somewhat grainy photos.

The photo below was shot as a series of nine RAW photos. The drone was positioned at one point in the sky, then three photos were shot using exposure bracketing (each photo with a different exposure) to capture the wide brightness range. Then the drone was rotated, and another three shots were taken. I did this three times. Each set of three photos was merged using Adobe Lightroom Classic to form one HDR photo, resulting in three HDR photos, each with a slightly different view. These resulting three photos were then merged into a single panorama photo, again using Lightroom, to create the final image.

Edmonds Waterfront

3D Printed Sundial

A few weeks ago I posted about designing and printing a simple 3D part. This post will be about a more complex object, using a different design tool.

I’ve wanted to build a custom sundial for my home for years. I have a book (actually, more than one) about designing sundials. The primary one I use is Sundials: Their Theory and Construction by Albert Waugh. This book has lots of information about various types of sundials and includes formulas for designing sundials. I have thought about designing and building a traditional sundial that would be mounted in my front yard, or maybe a vertical sundial on my garage door (which gets sunshine much of the day, but not late afternoon). But that hasn’t happened.

Now that I have a 3D printer, I decided I could make a small sundial (my printer’s print bed is only about 9″ across) using that. I searched the usual repositories of printable 3D models, but couldn’t find a sundial I liked. I found one on Thingiverse that was OK, but it wasn’t really what I was looking for. Time to design my own!

I wanted to be able to easily modify the sundial for different locations. After all, if I’m going to make a dial for myself, I’m sure I have friends that would like one. And I want to easily customize it to make different sizes.

The sundial I found on Thingiverse had the base and gnomon (that’s the piece that sticks up to cast the sun’s shadow) all in one piece, which made it rather bulky to send in the mail. I wanted something that could be made flat for shipping. So the gnomon needed to be separate from the base, but easily attached.

With all of these requirements, it seemed to me I needed something where I could specify parameters to make it easy to customize, and then, based on these parameters, do “a lot of math” (not actually so much, but sines and cosines, at least). This is a different way of designing than using Fusion 360 or some other similar CAD program, like I did for the protective feet in a previous blog post. I needed something that could calculate angles and create shapes based on those calculated angles. Is there such a thing? But, of course! There is OpenSCAD, “The Programmers Solid 3D CAD Modeller”. This tool is basically a programming language in which you describe shapes. You write a program, which can include parameters that are used in the calculations. Just what I needed for this project!

The first thing I did was to determine what parameters I would need, i.e., the values I would want to be able to easily change. Obviously, the latitude and longitude of the location where the sundial would be “installed” would have to be easily changeable. What else? How about the size of the base so I could designate whether the sundial would be a 3″ dial, or a 6″ dial, or some other size. Here are the parameters I came up with (as shown in OpenSCAD):

Sundial Parameters as shown in OpenSCAD

In OpenSCAD, these are dimensionless parameters, but the sizes get interpreted in millimeters by my slicing program. So think of these as sizes in millimeters, except for the locationName, which is text, the latitude and longitude, which are degrees, and the timeZone, which is hours. The dial described above is thus 120mm on a side, which is about 4¾ inches.

Here is a photo of the sundial base created by the above parameters:

Sundial Base

Pretty simple, right? A cuboid (a box whose sides aren’t all equal) for the base, with another cuboid subtracted from it (the depression in the middle), a bunch of cuboids for lines added at various angles, and another cuboid subtracted where the gnomon will fit in, then some letters and numbers stuck to the top surface around the edges. Nothing to it! 🙂

And the gnomon is really simple. Just a cuboid the size of the slot it will fit into, and another cuboid to cut away the upper portion at the correct angle (the latitude of its location).

Sundial Gnomon

Once you print the base and the gnomon, the gnomon fits into the slot in the base:

Square Sundial Base and Gnomon

With the base and gnomon apart, they can easily be mailed in an envelope. I have sent several to friends in padded envelopes, which can be sent inexpensively, with no problems.

Of course, this all seems simple now. I’ve already done it. Actually creating the sundial took me several days of work to get it just right. A lot of that time was learning OpenSCAD (I’m still just a novice), and also deciding how I wanted my sundial to look. Not to mention getting the formulas right for the basic dial. It took some time to get the hour numbers to print correctly on the dial border. Some of the logic was like, “if the hour line intersects the top border (not the left or right borders), print on the top border (centered vertically), otherwise print on the left or right border (centered horizontally), but don’t print on the bottom border (because the location text is there).” There are still some edge cases where the numbers print in the “wrong” location (which depends on your definition of wrong), but they haven’t occurred often enough yet for me to fix the logic.
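For reference, the heart of that “lot of math” is the standard horizontal-dial formula (it appears in Waugh’s book): the angle θ of each hour line from the noon line satisfies tan θ = sin(latitude) × tan(15° × h), where h is hours from solar noon. Here is a quick sketch of the calculation in Python rather than OpenSCAD; the 47.5° latitude is just an example value, not one of my actual parameters:

```python
import math

def hour_line_angle(latitude_deg, hours_from_noon):
    """Angle (in degrees) of an hour line from the noon line on a
    horizontal sundial: tan(theta) = sin(latitude) * tan(15 * h)."""
    h = math.radians(15 * hours_from_noon)  # the sun moves 15 degrees per hour
    lat = math.radians(latitude_deg)
    return math.degrees(math.atan(math.sin(lat) * math.tan(h)))

# Hour lines for 1 through 5 hours after solar noon at latitude 47.5 N
for h in range(1, 6):
    print(f"{h} pm: {hour_line_angle(47.5, h):5.1f} deg from the noon line")
```

The same angles apply mirrored on the morning side, which is why the dial only needs one set of calculations.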

For the curious, the code for the Gnomon is:

difference() {
    // Full gnomon block before the angled cut
    translate([0,-gnomonDepth,0]) cube([gnomonBaseLength, gnomonBaseLength+gnomonDepth, gnomonWidth]);
    // Subtract the wedge above the gnomon's top edge, angled up at the latitude
    rotate([0,0,latitude]) translate([0,0,-1]) cube([gnomonBaseLength*4, gnomonBaseLength*4, gnomonWidth+2]);
}

You might recognize some of the parameters to the cube() function as input parameters above, for instance gnomonDepth and gnomonWidth. The other parameters to the cube function (like gnomonBaseLength) are calculated from the input parameters.

If you are curious about the code, or want to print your own Square Sundial, my “Square Sundial” can be found on Thingiverse at https://www.thingiverse.com/thing:4802077.

360° Panoramas (again)

In my last post, already several months ago, I promised another 3D printer post. That is still coming. It’s half written. Make that a quarter written. I’ve been sidetracked, not to mention that my laptop computer bit the dust and I haven’t yet decided what to replace it with.

My first 360° panorama post was a little over a year ago, Feb. 4, 2020, where I discussed how 360° panoramas were made and showed one from Gates Pass near Tucson, AZ. My second post on panoramas was written on March 9, 2020, noting that 360° panoramas could be displayed on YouTube.

So, what’s new with panoramas?

First, 360° panoramas can be displayed on Flickr (I knew that, but had never tried it). Here’s my first panorama on Flickr. Flickr isn’t as good at displaying these as it could be – maybe it will improve in the future. The first problem I noticed is at the very bottom of the photo – directly below the camera. There’s some distortion there that shouldn’t be. Also, it is more difficult to zoom in and out with the mouse scroll wheel, as it usually scrolls the page instead. And it was difficult to go into full-screen mode, and once there I wasn’t always able to pan around the image.

It is possible to display panoramas interactively on WordPress, but only if I pay for a “professional” level. Since I don’t make any money from this site, I can’t really justify doing that. If you wish to see my photo(s) in a better viewer, take a look at it (them) in Roundme. This photo was taken a few days ago while on a cross-country ski outing to the top of Amabilis Mountain. 11+ miles and 2000’+ elevation gain, but the views were totally worth it! What a gorgeous day we had. Here are the rest of the photos I shot that day.

You can see all of the photos I’ve uploaded to Roundme by going to https://roundme.com/@garystebbins/tours.

What else is new?

All of the 360° panoramas I have posted in the past were shot by using my DSLR camera mounted to a tripod (or, in one case, handheld). The latest two were shot from a drone from tens of feet to several hundred feet above the ground.

I got my first drone 3+ years ago, but it’s a bit too big to take on a backpacking or cross-country ski trip. About a month ago I got a much smaller drone that I can take along with me. The drone itself weighs about 1/2 pound. I carried it in my backpack on my cross-country ski trip.

Phantom 4 Pro and DJI Mini 2 drones

The larger drone in the photo above is a DJI Phantom 4 Pro, and the little guy is a DJI Mini 2. Both drones can automatically shoot a series of photos to be stitched into a 360° panorama photo. I then use the program PTGui to stitch the multiple images into a panorama image.

If you are curious, the panorama image is just a regular JPEG file, although it is stretched “a bit” at the top and bottom. As mentioned in my first post, it is exactly twice as wide as it is high – 360° wide and 180° high. The right and left edges join together in the panorama viewer, and the top and bottom edges are compressed to display as a single point – straight above the camera for the top edge and straight below for the bottom edge. Some additional metadata is added to the file so that the viewer program knows how to interpret the file. Here’s what the photo looks like when viewed without a panorama viewer.

Kachess Lake Overlook

There you have it – one more 360° blog post. Next (I hope) I’ll actually finish writing the 3D printer blog I promised a few months ago. Stay tuned!

Designing a Simple Part for 3D Printing

In a previous blog I told about my new 3D printer, and showed a few photos of it printing objects that I had found online. What else can you do with a 3D printer? You can design your own items to print, and those can be anything you can imagine. They might be purely decorative, or could be functional. Here’s something simple that is functional.

Julie has a basket that stands on four metal legs. Unfortunately, the little plastic feet that went on the legs have long since disappeared, and now they scratch the floor.

Simple solution: make some protective feet.

Fortunately, this really is a simple solution. I have some filament called TPU (thermoplastic polyurethane), which is flexible but also tough. It sounds like the perfect material for this.

I measured the leg, and it was just a little under 19mm across. So I needed to design a foot that would fit over this 19mm square leg, hold up to some use, and stay on the leg. I decided to make it 2mm thick, as that seemed like a good thickness to not be too flimsy, yet not be overkill.

I use Fusion 360 from Autodesk for most of my 3D designing. It’s free for hobbyists, and very capable. There are many other 3D design programs that would have worked, but that’s the one I’m most proficient with at this time. Maybe I’ll switch in the future. There is a great open-source 3D design program I’m interested in playing with, but for this project I used Fusion 360.

How would you go about designing a foot for this? It seems like a simple object, and it is. Just a cube with a hole in it. Like this:

I won’t go into any great detail about designing this, but will give some basic steps.

  • The leg is 19mm square, and I want the walls to be 2mm thick. 19mm + 2mm on each side makes 23mm. Draw a 23mm square.
  • Extrude that up 23mm, making a 23mm cube.
  • On the top surface of that cube, draw a centered 19mm square, leaving 2mm of the original cube on each side.
  • Extrude that down 19mm, subtracting this 19mm cube from the 23mm cube. That gives us the basic shape shown above. Note that by doing this we are left with a 4mm bottom. I could have reduced the height of the 23mm cube so that the sides and bottom were uniformly 2mm, but I figured a little extra material on the bottom would just add to the wear resistance.
  • Export this object as a 3D mesh, slice it, print it, and test it. I found that it fit, but was a bit loose, and probably would fall off over time.
  • Go back to Fusion 360 and tweak the internal (subtracted) cube to 18.5mm. If you’re paying attention (you were, weren’t you?), you’ll notice that the walls are now 2.25mm thick. I didn’t see any reason to go back and adjust this, although it would have been easy to do. At this point I believed I probably had a workable foot. In Fusion 360, it looked like this:
  • Kind of square-ish with sharp edges. Fusion 360 to the rescue. I “Filleted” (rounded) all edges, inside and out, with the exception of the bottom edges, which I “Chamfered” (cut at an angle). Why do the bottom different? Because 3D printers like mine have a problem with steep overhangs, and a fillet starts out with almost a 90° overhang, whereas a chamfer has only a 45° overhang and can be printed by most printers. Because this object is so small, it probably wouldn’t have made any real difference, but it’s a good habit to form when designing objects for 3D printing. I now have this, which looks a lot like the photo at the top of the page:
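The dimension bookkeeping in the steps above is simple enough to sanity-check in a few lines of Python (a throwaway sketch, not part of the Fusion 360 workflow):

```python
leg = 19.0    # measured leg width, mm
wall = 2.0    # desired wall thickness, mm

outer = leg + 2 * wall            # a wall on each side of the leg
print(f"outer cube: {outer} mm")  # 23.0 mm

inner = 18.5                      # pocket tightened after the first test fit
actual_wall = (outer - inner) / 2
print(f"walls after tweak: {actual_wall} mm")  # 2.25 mm
```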

I think it looks good! Export it, Slice it, Print it. Test the fit…

It looks like a winner to me. It fits snugly, won’t fall off, and will protect the floor. The color? Just happens to be the color of TPU filament I have and a color Julie likes. Which is probably why I have this filament. 🙂

Watch for a future blog on designing a more complex object using a totally different 3D design program.

Milky Way timelapse

Something I’ve wanted to do for years is to create a timelapse video of the night sky star motion. I made it one of my goals for this year to accomplish that. I’ve been spending a lot of time in places that have terrible views of the night sky. Mostly, too much atmospheric haze and/or too much light pollution.

In July, when Comet Neowise was visible, we found a place a short drive away that had a pretty good night sky view, and was above much of the haze. We went there to try to get a good view, and maybe a photo or two, of the Comet.

Comet Neowise, July, 2020 (15 seconds at f/4.5, ISO 800, 135mm)

We found that this location was also good viewing of the Milky Way.

Milky Way (30 seconds at f/4.0, ISO 1600, 10mm)

It might have been a good time to try for a star timelapse with the Milky Way included, but it was late and I didn’t take the time to try it.

In September we camped at Red Bridge State Wayside in Oregon. The campground is a great place, but the sky is mostly blocked by beautiful Ponderosa pine trees. It does have a pretty good view of the sky from an area near the parking lot. I took my camera and tripod with the hope of getting some decent sky images.

Toward dark I set up on the grass looking over the parking lot and took several test exposures. I was shooting with my Pentax K-3 (crop-frame) camera with a Tamron 10-24mm lens. The exposure I settled on was 6 seconds at f/3.5, ISO 6400. I set the camera to shoot 500 photos, one every 20 seconds. I turned off in-camera noise reduction, thinking I could save battery and do it in Lightroom later.

The first photo was shot at about 9:20 pm, and the last photo just past midnight. I sat in a chair near the camera for the almost three hours it took, reading a book on my Kindle. Fortunately the night was relatively warm and getting cold wasn’t too much of a problem. I did get out of the chair a few times to do some jumping jacks to stay warm.

OK, now for what I did wrong.

  1. I judged the exposure by what the image looked like on the back of the camera. Remember, it was almost pitch black when I was doing this. The image looked great! The next morning I looked at the images. I couldn’t believe that all frames were totally black. How could I have done that? Then I realized they were underexposed so badly that I couldn’t see anything in normal light, but, viewed in a darkened room, there was some image there. Don’t judge the image exposure by what your eye sees when it’s almost totally dark out! Lightroom to the rescue (sort of).
  2. Turning off in-camera noise reduction was a mistake. The Pentax K-3 does quite well at keeping the noise down, but at ISO 6400 I really needed to let the camera do what it could. Again, Lightroom noise reduction helped (but I wouldn’t say it rescued me).

Once I had 500 RAW images, I imported them all into Lightroom and did what I could to adjust exposure and reduce noise. Then I exported them all as JPEG files (a painfully slow process on my ancient laptop computer). Next I fired up Adobe After Effects, brought in all of the JPEG images, and created a 1080p video at 30 frames per second. 500 frames at 30 frames per second results in a video only 16-2/3 seconds long!
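The arithmetic behind the shoot and the resulting video is worth spelling out (the numbers come straight from the text above):

```python
shots = 500
interval_s = 20   # one exposure started every 20 seconds
fps = 30          # playback frame rate of the finished video

capture_span_s = (shots - 1) * interval_s  # first shutter press to last
video_s = shots / fps

print(f"capture span: {capture_span_s / 3600:.2f} hours")  # about 2.8 hours
print(f"video length: {video_s:.2f} seconds")              # about 16.7 seconds
```

That almost-three-hour sit in the chair compresses into a clip shorter than a commercial.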

The resulting video has lots of noise and color changes due to the extreme exposure adjustments I made. But I think it’s acceptable for my first attempt. Next year (or maybe this winter) I’ll do this again and improve my results.

Here is my video for you to see:

Another David Preston Book

It’s already been a couple weeks since I published another book for David Preston. I did the final editing for him, and then formatted the book for both Paperback and Kindle, contracted a cover design, and published it on Amazon.com.

The book, Victim Ride, is the fifth book in David’s “Harrison Thomas Mystery” series.

“Someone from Harrison Thomas’ past intends to kill him, evidently to avenge the death of a loved one slain by Harrison. But who? He has never killed anyone! He must search through his past to find his assailant. If he does not do it quickly, he will die.”

I again used Fiverr for the artwork. I used the same artist that did the cover for Marcia Marsha Marczia. I liked the previous cover this artist did for me, and she came through again with what I think is a great cover for David’s latest book.

Where to buy? Victim Ride is only available through Amazon, in both Paperback and Kindle formats.

You can find all of David’s Harrison Thomas Mystery books here.

Marcia, Marsha, Marczia

…is the title of the book by David Preston that I (re)published for him on March 14. I just wanted to give you an overview of what it takes to self-publish a book. And, this book has an interesting history.

The main audience for Marcia, Marsha, Marczia is probably junior high or high school girls. You might enjoy reading this book even if you aren’t in that group. I did. If you should happen to be interested for yourself or a friend, daughter, or granddaughter, you can order the paperback or Kindle book from Amazon, or other distributors.

David completed writing this book in 1993. He was teaching High School English at that time. It was a bit more difficult to publish a book back then than it is now. He submitted the manuscript to Dell, the publishing company, but they weren’t interested. The manuscript sat in a drawer (I presume) for many years, and David ran across it (I’m making this up…) in about 2011. David mentioned it to a few friends, and one of them offered to scan the manuscript and OCR it. (OCR – Optical Character Recognition. As a verb, to run the scanned image through a program that will convert it to text on a computer.) The book was published on Amazon as a Kindle book titled The Third Marcia in 2012.

Skip ahead a few years. I started bugging David last year to update some of his books, primarily to update the “About the Author” section, but also to add a list of his books at the end of each book, and update the covers to make the books a bit more eye-catching. I also wanted him to publish the e-books (and eventually the paperbacks) through additional channels for wider distribution. He finally broke down and let me work on this 🙂 .

The last thing to work on is the cover, but I’ll show that first and discuss it later. Here is the old Kindle cover followed by the new cover:

I’ve edited and published a number of books for David, both Kindle e-books and paperbacks. The process has changed a bit through the years. I have used Microsoft Word to format the books, except for one book where I used Adobe InDesign. It used to be that I formatted the e-book and the paperback differently, but now different tools are available, and I can use a single Word file to create both formats.

I first create the paperback format and get it fine-tuned. Then I use the converter software at Draft2Digital to create the file for Kindle.

As this book had been published before, I thought there would be very few changes to the text. However, when I read through the book, I found a few typos, and lots of places where the OCR process had left errors. Most of these were the numeral 1 in place of the lowercase letter l (el). The font specified by the OCR operation had made these look almost identical, but the font I used for the paperback (Garamond) shows a noticeable difference between them, so they needed to be corrected.

Once the word editing is complete, I search the book for any double (or more) spaces (which are converted to a single space), tabs, line breaks where they shouldn’t be, empty paragraphs, etc., and change those as appropriate. I also convert all prime (‘) and double prime (“) marks to left (‘, “) and right (’, ”) quotation marks, and correct any other punctuation that needs it. And, of course, there are the dashes and hyphens that need to be made correct and consistent. If this is a new book (David will have another book to publish this month or next), then the draft of the book goes back and forth between him and me multiple times during this process.
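My actual cleanup pass is done with Word’s Find & Replace, but the flavor of it can be sketched in a few lines of Python. This is a simplified illustration, not my real process – genuine typography has edge cases (apostrophes that start a word, nested quotes) that this naive opening/closing rule gets wrong:

```python
import re

def clean_text(s):
    # Collapse runs of two or more spaces into a single space
    s = re.sub(r" {2,}", " ", s)
    # A straight double quote after start-of-text or whitespace opens; the rest close
    s = re.sub(r'(^|(?<=\s))"', "\u201c", s)
    s = s.replace('"', "\u201d")
    # Same rule for straight single quotes; mid-word ones become apostrophes
    s = re.sub(r"(^|(?<=\s))'", "\u2018", s)
    s = s.replace("'", "\u2019")
    return s

print(clean_text('He said  "it\'s done."'))
# He said “it’s done.”
```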

Once the words are all correct, the document must be formatted for the size of the printed book (5.25″ x 8″ for David’s paperbacks) with appropriate margins. As this is printed two-sided and bound, the margin on the inside needs to be a bit wider than the outside margin so text isn’t printed too close to the binding, making it harder to read.

Every chapter heading in the book needs to be formatted with Word’s built-in “Heading 1” style so that it will be found by the Kindle conversion tool. All other paragraphs, with some exceptions, use a “Paragraph” style that I have defined: an indented paragraph with no vertical spacing between paragraphs. One exception is the first paragraph of each chapter, which has no indent.

A Table of Contents is included in most of the paperback books. This is auto-generated from the Chapter and other headings, like “Foreword” or “About the Author.” I need to make sure it is properly formatted and updated so the page numbers are correct. The TOC is stripped out by the converter when converted to Kindle format and then recreated by the converter in a special format for e-book readers.

Once everything above is complete, which usually takes me several passes, the formatted book interior is ready. I note how many pages long it is, and save it as a PDF (the format required for printing). Next I upload the MSWord file to Draft2Digital’s website and convert it to MOBI format, the file format for Kindle, and download that for later upload to Amazon publishing.

Now it’s time to design the cover. The cover is the first thing most potential buyers see, so it is really quite important. I’m not a graphic artist with cover design experience, so I contracted this out through Fiverr. Remember I mentioned that I noted the number of pages in the book? This is important because it affects the cover design. You must know the number of pages to determine the spine thickness. The front, back, and spine are created as a single image. A separate image is created for the e-books. Here is the image for the paperback cover of Marcia, Marsha, Marczia:

At this point I have all of the files needed to publish the book in both paperback and e-book formats. I published both through Amazon for David, and also published the e-book through Draft2Digital for distribution to other companies, like Apple Books, Barnes & Noble, Kobo, etc., and to libraries. Draft2Digital doesn’t do paperbacks yet, but hopefully will in the near future.

Nothing to it! Now you can publish your own book.

More 360° Panoramas!

I just discovered this week that YouTube can display 360° panorama videos. So now I can show them on YouTube. And, added bonus, I can embed YouTube videos into this blog.

Click on the video below to see it in action. Once it starts playing, click and drag in the video with your mouse. Or, play it on a mobile device and move the device around. If you want to play with this in a browser in a larger window, here is the link: https://youtu.be/5dxS63p0qmI. This is a still photo displayed as a video. Note that it is only 30 seconds long. To play with it longer than that, pause the video and click and drag around as much as you want.

It took a little work to get it right. 360° panoramas are twice as wide as they are high. Exactly. Thinking about it a bit, that becomes obvious. The panorama is 360° wide (hence the name) and 180° high. But standard HD video is 1920 pixels wide and 1080 pixels high. 4K video (which is what I published this at) is twice that: 3840 x 2160. The ratio is 16:9, which obviously is not the same as the panorama’s 2:1.

When I created the first video, there was a black bar top and bottom, which when viewed as a 360° panorama created a round hole at the “top” and “bottom”. Oh, what to do?

The panorama starts out with a horizontal:vertical ratio of 2:1, which is the same as 16:8. If I were to stretch the panorama vertically by 12.5%, it would then be 16:9 and have the same ratio as the video frame.
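The exact stretch factor falls straight out of the arithmetic:

```python
video_w, video_h = 3840, 2160   # 4K UHD frame, a 16:9 ratio
pano_ratio = 2                  # equirectangular panoramas are exactly 2:1

# Scaled to fill the video width, the 2:1 image is only this tall:
scaled_h = video_w / pano_ratio  # 1920 px, leaving black bars top and bottom
stretch = video_h / scaled_h     # vertical stretch needed to fill the frame

print(f"{stretch:.4f}  ({stretch:.1%})")  # 1.1250  (112.5%)
```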

So, I created the panorama (I already did that, see https://garystebbins.com/2020/02/04/360-panoramas/), then dropped the file into Photoshop and expanded the image vertically to 112.5%, then dropped that file into Adobe Premiere Pro, added a little music, and there you have it.

Almost… there is a bit of embedded data that has to be added to the file to tell YouTube it is a 360° panorama. I found a little program online called “Spatial Media Metadata Injector,” which can be found here, that does this bit of magic.

Maybe I’ll discover some easier method than this, but now that I know the steps, this isn’t bad. I suspect I can do the ratio matching all in Premiere Pro, which would save one step.

If you want to see a wild 360° video check this out:

Be sure to spin it around and look around you. Enjoy!