Imaging the Limb of an Almost-Full Moon – 13/02/2014
Last night was one of those nights where I would have loved to image another deep-sky target, but was prevented from doing so by the massive glare of the 98%-full moon. When the moon is in the sky it makes a huge difference to what you can capture: it washes out the background and destroys the small amount of contrast that most faint deep-sky objects have to begin with. It's better to save those for a dark, moonless night. In the meantime, the moon itself, and the planets, become the best targets for imaging.
Imaging the moon and planets is very different from imaging deep-sky objects. Firstly, a DSLR is generally not the best way to do it, especially when it comes to planets: the imaging sensor on a DSLR is physically large, so small objects like planets tend to get lost on it. Secondly, planets (and close-ups of the moon) are best imaged by taking video, not stills, and ideally uncompressed video, which DSLRs are apparently not very good at producing, or so I'm told! The turbulence of the air means that the surface of the moon seen through a telescope is constantly in motion. Taking thousands of frames of this and running them through stacking software allows the best bits of each frame to be extracted and combined into a single stable image.
I used my QHY5v planetary imaging camera (which is usually used as my guide camera) to capture the video for this close-up. The capture software, called 'QGVideo', is supplied with the camera. It's a pretty simple bit of software that records the uncompressed video stream from the camera (any compression makes stacking much less effective).
The files you end up with are massive (and when I say massive I mean it: usually several GB for a few minutes of mono footage at just 752 x 480 pixels). The initial footage always looks daunting, and you wonder how on earth you'd get any sort of smooth image out at the other end. Here's the footage that I used for this image…
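As a sanity check on those file sizes, here's a rough back-of-the-envelope calculation. The frame rate and duration are my assumptions for illustration (the actual figures depend on your capture settings), but 8-bit mono at the QHY5v's resolution is the right ballpark:

```python
# Rough size of uncompressed 8-bit mono video from a QHY5v-class sensor.
# fps and duration are assumed values, just for illustration.
width, height = 752, 480    # QHY5v sensor resolution
bytes_per_pixel = 1         # 8-bit mono
fps = 30                    # assumed capture rate
seconds = 180               # three minutes of footage

total_bytes = width * height * bytes_per_pixel * fps * seconds
print(f"{total_bytes / 1e9:.1f} GB")  # → 1.9 GB
```

So even a modest three-minute capture really does run to gigabytes, which is why uncompressed planetary imaging eats disk space so quickly.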
These are output as AVI files, so they can be loaded straight into the stacking program for processing. The program in question in this case is Autostakkert! (the exclamation mark is a must 😉 ). It's free and can be downloaded from:
Other stacking programs are available for planetary work, in particular 'Registax', but I find Autostakkert by far the easier of the two to use. I still use Registax to sharpen the image using its wavelets function, but for the stacking itself, Autostakkert is my choice.
You import the AVI into AS and define 'alignment points': features for the program to 'watch' during analysis so it can work out how to align the frames during stacking. I generally do this using the automatic function in AS, as I find it gives good results. For example, here's an image of the alignment points that AS automatically chose for me…
Once this has been done and you click the 'analyse' button, AS takes a minute or so to analyse all the frames and decide how good each one is. You then set what percentage of the frames you want to use; in this case, I set 70%. This keeps the 'best' 70% for stacking and dumps the rest. Including the really bad frames just drags down the overall quality of the stacked image, so this at least lets you weed some of them out mathematically. Once you've decided on the percentage and set the other options (the ones I use can be seen in the screenshot on the right), you click 'stack' and let AS do its thing. You end up with a pair of images at the end: one a 'plain' stack, and one that is auto-sharpened. The auto-sharpened image isn't bad at all, but I tend to use it just as a way to see how the process has gone, and leave the actual sharpening to Registax wavelets, as I'll describe in a minute.
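Under the hood, the frame-selection idea works along these lines: score each frame for sharpness, sort, keep the top fraction, and average what's left. This is a simplified pure-Python sketch, not Autostakkert's actual algorithm (its quality metric and alignment are far more sophisticated); I've used plain pixel variance as a crude sharpness proxy:

```python
def quality(frame):
    """Crude sharpness proxy: pixel variance (sharper frames show more contrast)."""
    mean = sum(frame) / len(frame)
    return sum((p - mean) ** 2 for p in frame) / len(frame)

def stack_best(frames, keep_fraction=0.7):
    """Keep the sharpest fraction of frames and average them pixel-by-pixel."""
    ranked = sorted(frames, key=quality, reverse=True)
    best = ranked[:max(1, int(len(ranked) * keep_fraction))]
    n = len(best)
    return [sum(pixels) / n for pixels in zip(*best)]

# Toy example: three 4-pixel "frames"; the flat (blurred) one is rejected at 70%.
frames = [[10, 200, 20, 190],    # high contrast -> kept
          [15, 195, 25, 185],    # high contrast -> kept
          [100, 105, 100, 105]]  # blurred/flat  -> dropped
print(stack_best(frames))  # → [12.5, 197.5, 22.5, 187.5]
```

The averaging step is why the stacked image ends up so much smoother than any single frame: random noise partially cancels, while the real detail (present in every good frame) survives.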
The images that you get out of AS (in TIFF format if you use the settings I do), look like this…
As you can see, the unsharpened image is still not that impressive; you have to trust Registax to work its magic! The next step is therefore to import the unsharpened image into Registax (I use version 5, as I find version 6 overcomplicated) and use the wavelets function to bring out as much detail as possible. This is essentially a very controllable sharpening process, the maths of which I cannot begin to understand. By the way, don't read much into the settings in the Registax screenshot; I took it just to show the immediate effect of the process. You need to play with the sliders until you get a pleasing result. Once this is done, I click the 'Do All' button and then save the image.
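For the curious, the core idea behind wavelet sharpening is multi-scale detail boosting: the image is split into detail layers at different spatial scales, and each slider amplifies one layer. Here's a toy one-dimensional sketch of that idea at a single scale, using a simple box blur to separate out the detail. To be clear, this is my own illustration of the general principle, not Registax's actual wavelet algorithm:

```python
def box_blur(signal, radius=1):
    """Simple box blur; window is clamped at the edges."""
    n = len(signal)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def sharpen(signal, gain=1.5, radius=1):
    """Boost the detail layer (original minus blur), like pushing one wavelet slider."""
    blurred = box_blur(signal, radius)
    return [s + gain * (s - b) for s, b in zip(signal, blurred)]

# A soft edge (e.g. a crater rim) becomes steeper after sharpening,
# with the characteristic overshoot either side of the transition.
edge = [10, 10, 10, 60, 110, 110, 110]
print(sharpen(edge))
```

Pushing the gain too far exaggerates that overshoot, which is exactly the ringing and halo effect you see when the wavelet sliders are overdone.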
The last step is to import into Lightroom and tweak the levels, contrast, sharpness, and noise reduction to end up with the best result I can. Again, there's no hard and fast way of doing this; I just play until I end up with a good result (or the best result I can get, anyway!). What I like about Lightroom is that it's totally non-destructive, meaning my original picture is always retained if I ever need to return to it. I also crop a little here, as there are usually some artefacts around the edges left over from the stacking process.
I then export a JPEG from Lightroom, and it's ready to go. The end result is the image at the top of the page. Considering what it started from, the end result is amazing. Again, kudos to those who produce this software, which lets us do at home what could only have been done in high-end facilities (or not at all) just a couple of decades ago.
Overall, I was really impressed with the little QHY5v as an imaging camera. I've never imaged in mono before, but it produces very smooth, low-noise images that the Toucam I usually use would struggle to match. No colour, but in the case of the moon, detail is king anyway.