Ha Solar Imaging Techniques - Astro

Transcription

H-a Solar Imaging Techniques
(Bringing Home the Gold!)
An introduction to the medium and general overview
including tips, tricks, and techniques
© Paul G Hyndman www.astro-nut.com
In hopes of sparing others the frustrations encountered along the way, I’d like to share
some of the information gleaned while on my own journey to solar Nirvana. Hopefully,
fellow travelers will find this material useful. If you learn something of value, share it with
others... and we all grow in the process!
This guide is intended to provide an overview of basic and advanced Solar H-a imaging
techniques, equipment foibles, processing tips, and a few tricks of the trade. With so
many facets to cover, this will be more of a summary than an in-depth tutorial. If the
interest is there, we can delve deeper into the individual topics at a later date.
The format is flexible and intended to be very informal, so if you have any questions or
comments along the way, feel free to jump in and fire away!
Images used in this presentation were captured via consumer-grade digital cameras
(Nikon CoolPix995, Canon D60, or Canon 10D), webcam (TouCam II 840 Pro), and a
dedicated astro-CCD (SBIG ST11K).
Before we dig in, let’s take a quick peek at what we will strive to accomplish: ultra-high
resolution imaging with repeatable results (consistency!)
While this image in itself may seem to be somewhat impressively detailed, consider this:
it is only a snippet that was clipped from a much larger single-image frame:
At 4008 x 2672 pixels, the un-cropped master image is a mind-blowing 55 by 37 inches
(at web-based 72ppi), with a 4X Powermate yielding a solar disk 36 inches in diameter!
This is a processed version from a single gray-scale full-disk image.
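The arithmetic behind those dimensions is easy to verify. A quick sketch (the pixel counts, 72ppi figure, and 2600-pixel disk come from the text; the rounding is mine):

```python
# Reproduce the print-size arithmetic for the full-frame master image:
# a 4008 x 2672 pixel array rendered at web-based 72 pixels per inch,
# with a ~2600 pixel solar disk.

def print_size_inches(width_px, height_px, ppi=72):
    """Physical size of an image when rendered at a given pixels-per-inch."""
    return width_px / ppi, height_px / ppi

w_in, h_in = print_size_inches(4008, 2672)
disk_in = 2600 / 72  # the solar disk alone

print(f"master image: {w_in:.1f} x {h_in:.1f} inches")  # ~55 x 37
print(f"solar disk:   {disk_in:.1f} inches across")     # ~36
```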
Yes, those are vestiges of Newton’s rings (see above), an anomaly we will discuss
later. But for now, let’s get down to business!
(note: SBIG ST11K w/9 micron pixels)
You have a solar H-a setup… now what?
Some of the most daunting experiences an aspiring astro-imager might ever suffer
through are those associated with extreme narrow band imaging. Such is the task of
capturing solar H-a images: what you see in the eyepiece is not necessarily what you'll
capture in the image. Indeed, I’m one of those who was vexed by the nuances of the
medium, moving between cameras, peripheral equipment, and processing techniques in
attempts to adequately capture what was readily seen in the eyepiece.
After much angst, and through the helpful support of others, I can happily report that it is
in fact possible to consistently capture far more H-a detail than the eyepiece reveals.
Calibrate those devices!!!
Before attempting to gather or process any images, you should check your monitor,
printer, and laptop calibration. This ensures that your images and their level of detail will
be faithfully interpreted and reproduced. Moreover, it guarantees that any other
calibrated device will also see precisely the same image… be it another computer you
own or one halfway around the world!
Minimally, you should ensure the contrast and brightness of your system allows you to
detect all 16 blocks in a standardized gray-scale test pattern. Adjust as needed, or you
may be tossing away crucial detail!
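Generating those 16 gray blocks yourself is a one-liner; a minimal sketch (the 16-block pattern is from the text, the 8-bit 0-255 range is my assumption):

```python
# Build the 16 evenly spaced gray levels of a standardized test strip,
# from full black (0) to full white (255). Render them as side-by-side
# blocks in any image viewer and confirm every block is distinguishable
# on your display.

def gray_steps(n=16, max_level=255):
    """Return n evenly spaced gray levels from 0 (black) to max_level (white)."""
    return [round(i * max_level / (n - 1)) for i in range(n)]

levels = gray_steps()
print(levels)  # 0, 17, 34, ... 255
```

If any adjacent pair of blocks looks identical on screen, adjust brightness and contrast until all 16 are distinct.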
What’s it all about, this Hydrogen Alfalfa stuff?
Spectra showing 656.28nm H-a line
It’s helpful to have a fundamental idea of what the target is, how it and your equipment
behave, and the proper setup procedures before setting out to bag those killer solar
H-a images. Proper preparation helps maximize the quality of the data you’ll be
gathering.
A good starting point might be for us to briefly discuss what a solar H-a signal is
(frequency and color-wise) and what it is not, as well as developing an assessment of
the camera’s abilities to handle it. This will determine in large part how one goes about
gathering and processing the data, and most certainly minimize the amount of trial and
error before satisfying results are consistently produced. Once those basic principles
are established, you’ll be bringing home the gold on a regular basis!
When the sun’s spectrum is scrutinized, dark lines can be found scattered about. Early
pioneers such as Fraunhofer began to realize that these lines represented elements
that absorbed the underlying spectrum; moreover, they could correlate these lines to
specific elements. Some fortuitously coincide with areas of intense activity, Hydrogen Alpha
being one. The name denotes it as being the first (Alpha) absorption band of Hydrogen,
occurring at 656.28nm, a point that lies inside the visible red band of the spectrum.
Using additional filters
No blue, green, yellow, orange, or bubble-gum pink colors pass through the H-a filter...
nothing but red!
This cannot be emphasized strongly enough, and if one grasps that concept, all else will
be easier to understand. Blue or green filters (if used) only serve to knock down the
signal level, as there are no blue or green components to be filtered. Neophytes often
mistakenly think that an IR blocking filter might be of help, or conversely, one
permanently mounted inside the camera may be degrading the image. Nope! A solar
H-a filter is a precision line-filter, with the IR band lying far beyond. An internal filter will
only attenuate an H-a signal if it extends to the 656nm frequency.
The detail we are targeting abounds primarily in the chromosphere and is normally
overwhelmed by the much brighter underlying photosphere... not unlike letters on the
face of a high-beam lamp becoming obscured when the headlights are switched on.
Precision H-a filters block the strong out-of-band background, isolating the band of
interest and thus allowing us, as in our headlight example, to "read the writing" despite
the intensely illuminated surface below.
Precision? Exactly how close are we talking here?
Solar H-a performance is determined in part by the frequency base (baseband) and how
wide of a frequency slice (bandwidth) we intend on viewing. Sounds simple enough, but
the baseband accuracy and incredibly thin slices place solar H-a filters in a league of
their own. Measurements are made in angstroms, a unit that is one hundred-millionth of
a centimeter... and our filter bandwidths are in tenths of that angstrom (gasp!) This
makes for a rarified field of contenders, as maintaining such levels of precision requires
an interesting mix of extreme engineering skill and magic. Even slight changes in
ambient temperature must be dealt with by the optical gurus creating these systems,
lest the filter drift "off-band". Fortunately, we are now blessed with solar H-a filters and
new technologies that have brought increased supplies and lowered costs... all the
while meeting or exceeding previous performance benchmarks.
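To put those numbers in perspective, a quick calculation (the 656.28nm line and sub-angstrom bandwidths are from the text; 1nm = 10 angstroms):

```python
# How narrow is a sub-angstrom bandpass relative to the H-alpha line itself?
H_ALPHA_NM = 656.28
H_ALPHA_ANG = H_ALPHA_NM * 10  # 1 nm = 10 angstroms -> 6562.8 ang

for bw_ang in (0.9, 0.7, 0.5):
    fraction = bw_ang / H_ALPHA_ANG
    print(f"{bw_ang} ang bandpass = {fraction:.2e} of the line's wavelength")
```

A 0.5 angstrom filter passes less than one ten-thousandth of its own center wavelength, which is why so few manufacturers can build one.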
So where does my filter fit into the mix, and what can I expect to see or image?
Surface H-a detail from 0.9, 0.7, and 0.5 ang bandwidth images
Prominences tend to dissipate into the bright background sky when the bandwidth is
above 1.0 ang (though some specialized “prominence scopes” examine a wider swath,
having been outfitted with an occulting cone). They look full between 0.7 and 0.5 ang,
becoming thinner and more structurally defined as bandwidth tightens further. Surface
detail cannot be seen above 1.0 ang, is coarse but detectable at 0.9 ang, looks good at
0.7 ang, and picks up substantial contrast and definition at 0.5ang.
The tighter the bandwidth, the more contrast the surface details exhibit. Prominences,
however, typically radiate perpendicularly from the solar limbs (edges), spreading
upwards and outwards while both coming towards us (blue-shifted) and going away
from us (red-shifted), and are thus best captured with a wider frequency range. That is
to say, wider only in the context previously mentioned... not more than 0.8 or 0.9
angstroms.
Features on the disk (filaments and CMEs) moving towards us will often look their best
when a small amount of blue-shift is incorporated into the mix. This does not mean
there is any blue in the image, but merely that the signal has been shifted a miniscule
amount to pick up data that would otherwise push out of band. Some filters include
provision to adjust that point and, when tweaked ever so slightly, will enhance the
contrast of the filaments dramatically (the relatively subdued signal levels of H-a can
use all the help they can get… the better the RAW data, the more juice you’ll be able to
squeeze out of it).
With proper equipment and techniques, an adept solar imager can extract considerably
more detail from the instrument than most eyeballs could ever hope to see!
So then, let’s recap…
Okay... Red signal, tight bandpass, and realizing that nothing other than an additional
solar H-a filter will be of help in improving things (called stacking, but that's another
story for another time).
What camera should I use?
That's an easy question to answer: use whatever camera is available! Of course you'll
find some far more up to the task than others, and even a few twists and turns from
otherwise identical models (yikes!). Having tried 35mm film-based, webcam, digicam,
Digital SLR, and dedicated CCD though, my vote is definitely for the dedicated
monochromatic CCD.
Though decent images might be captured with the other aforementioned devices, there
are foibles... er, "features" that present some unique challenges. In deference to those
dedicated to film-based astro-imaging, let me apologize now for my belligerent attitude:
it just isn't going to happen for Solar H-a, folks! Without the ability to study histograms
and test images on the fly, or the capacity to blow up and scrutinize a "live time"
focusing image (2.75x just doesn't get it), for this imager it is at best a crap shoot as to
what the results will be (are we feeling lucky today?)
How about a digicam or digital SLR?
GRGRGRGRGRGRGR
BGBGBGBGBGBGBG
GRGRGRGRGRGRGR
BGBGBGBGBGBGBG
GRGRGRGRGRGRGR
BGBGBGBGBGBGBG
GRGRGRGRGRGRGR
BGBGBGBGBGBGBG
GRGRGRGRGRGRGR
BGBGBGBGBGBGBG
GRGRGRGRGRGRGR
BGBGBGBGBGBGBG
RGB Bayer Masking
Ahhhh, now we're getting warmer! With a digicam or digital SLR, we can scrutinize a
zoomed section of the image to "nail down" focus, examine histograms on the fly to
maximize the dynamic range (we'll discuss these later), and capture a boatload of
images while bracketing an exposure range. A major limitation of digicams and digital
SLRs is the technique employed to produce color images. CCD/CMOS arrays
themselves are usually colorblind, so digicams/digital SLRs often use a system of
micro-lens filters over each pixel to produce a color image (Bayer masking is a popular
technique).
This results in a matrix that looks like this (R=Red, G=Green, B=Blue), where Green is
50% of the mix (to more closely mimic the center of our own visual acuity), with Red and
Blue being 25% each.
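That GR/BG layout and its 50/25/25 split can be sketched and verified directly; a minimal example, assuming the row pattern shown above:

```python
# Build the RGB Bayer pattern from the slide (GRGR / BGBG rows) and
# confirm the stated mix: 50% Green, 25% Red, 25% Blue.

def bayer_mask(rows, cols):
    """Return a rows x cols grid of 'R'/'G'/'B' in the GRGR / BGBG layout."""
    grid = []
    for r in range(rows):
        row = []
        for c in range(cols):
            if r % 2 == 0:
                row.append('G' if c % 2 == 0 else 'R')
            else:
                row.append('B' if c % 2 == 0 else 'G')
        grid.append(row)
    return grid

mask = bayer_mask(12, 14)
flat = [p for row in mask for p in row]
total = len(flat)
print('G:', flat.count('G') / total)  # 0.5
print('R:', flat.count('R') / total)  # 0.25
print('B:', flat.count('B') / total)  # 0.25
```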
What the camera sees
R…R…R…R…R…R…R…R...R
……………………………………
R…R…R…R…R…R…R…R…R
……………………………..........
R…R…R…R…R…R…R…R…R
……………………………..........
R…R…R…R…R…R…R…R...R
……………………………………
R…R…R…R…R…R…R…R...R
……………………………………
R…R…R…R…R…R…R…R…R
.............................................
Red components of Bayer Mask
Remember that with solar H-a, we are using what is essentially a precision line filter
(read: monochrome), with only the Red channel containing information; hence 75% of
the pixels in a Bayer configuration go unused. Any green or blue micro-lensed pixels add black
voids to the image... not unlike an old AP wire-photo in a nickel newspaper... with this
sparsely populated matrix as the result. Note how dark the resulting image is when
mixed with 75% black and converged (upper left square in image).
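Pulling only the Red-masked samples out of such a frame shows just how sparse that 25% population is. A sketch, assuming the GRGR/BGBG layout from the Bayer diagram (Red sits at even rows, odd columns):

```python
# Extract the Red-masked samples from a full Bayer frame: every other
# row, every other column -- one quarter of the array.

def extract_red(frame):
    """Pull the Red-masked samples (even rows, odd columns) out of a Bayer frame."""
    return [row[1::2] for row in frame[0::2]]

# Toy 4x4 "frame" of raw counts (large values mark the Red positions):
frame = [
    [10, 200, 11, 210],
    [12,  13, 14,  15],
    [16, 220, 17, 230],
    [18,  19, 20,  21],
]
red = extract_red(frame)
print(red)  # [[200, 210], [220, 230]] -- 4 of 16 pixels (25%)
```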
Leakage and interpolation errors
Over exposed CoolPix image showing RGB composite and its Red, Green,
and Blue components. Note the RGB composite appears correct (it isn’t)
while the Green channel here is richest in detail.
To further exacerbate things, many digital cameras have signal leakage or interpolation
errors... with H-a image data being recorded in other than the correct channels. Nikon
CoolPix arrays, for example, usually have such high leakage at 656nm (H-a) that one is
often better off saturating the image (over-exposing) and working with the Green
channel in lieu of the correct (Red) channel.
The leftmost image, a composite RGB, actually appears to be ideal... but in truth is
vastly overexposed. We know there can be no yellows or oranges in our H-a image, as
they are far outside the band. What has happened is that leakage or interpolation errors
from adjacent channels were so great that a signal was reported where none should be.
They are mixed as a composite by the camera, forming a pretty but very wrong image of
the sun.
Substituting channels
G...G...G...G...G...G...G...G...G...
...G...G...G...G...G...G...G...G...G
G...G...G...G...G...G...G...G...G…
...G...G...G...G...G...G...G...G...G
G...G...G...G...G...G...G...G...G…
...G...G...G...G...G...G...G...G...G
G...G...G...G...G...G...G...G...G…
...G...G...G...G...G...G...G...G...G
G...G...G...G...G...G...G...G...G…
...G...G...G...G...G...G...G...G...G
G...G...G...G...G...G...G...G...G…
...G...G...G...G...G...G...G...G...G
Green components of Bayer Mask
When such high leakage is encountered, though, we can use it to our advantage to
produce an array result as seen here. With 50% of the matrix now populated, the
resulting image is less encumbered by “dead” black pixels and converges to form a
brighter image with higher apparent resolution (upper left square in image).
This technique will often allow using a single image to capture both the brighter surface
detail (in the Green Channel) and the dimmer prominences from the now overexposed
Red channel, though better results can usually be obtained by shooting one set of
"Greens" optimized for surface features and another set of "Greens" using a stop or two
longer exposure for the proms. This is not an ideal solution, as 50% of the pixels are
discarded (the Red and Blue channels) and those that are used (Green) have
questionable linearity, as they are driven by leakage or interpolation errors rather than
signal.
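Extracting that Green checkerboard can be sketched as follows (the leakage behavior is camera-specific; the layout assumption is again the GRGR/BGBG pattern, which puts Green at even-row/even-column and odd-row/odd-column positions):

```python
# Collect the Green-masked samples from a Bayer frame. They occupy a
# checkerboard -- half the array -- so the result is far denser than the
# 25% Red grid.

def extract_green(frame):
    """Collect the Green-masked (checkerboard) samples from a Bayer frame."""
    samples = []
    for r, row in enumerate(frame):
        for c, value in enumerate(row):
            if (r + c) % 2 == 0:  # checkerboard positions
                samples.append(value)
    return samples

frame = [[1, 2, 3, 4],
         [5, 6, 7, 8],
         [9, 10, 11, 12],
         [13, 14, 15, 16]]
green = extract_green(frame)
print(len(green), '/', 16)  # 8 / 16 -> half the array
```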
RGB to grayscale conversion
Normal channel components
Stretched channel components
Converting our sample image to grayscale will make it easier to compare and work with,
as subtle tonal differences become easier to discern. The composite RGB image (on the left) still looks best, with
the Red lacking detail (it is over-exposed), the Green seemingly better, and the Blue
looking best.
But when we stretch the individual channels to max/min values, we begin to see the true
nature of the underlying components. The composite (left) still looks reasonable, the
Red channel shows some proms and surface features though it is overexposed here,
the Green channel has crisp unadulterated surface features, and the Blue channel
(right) has very coarse detail (as would the Red were it properly exposed, since only
25% of the pixels are allocated for each of those two channels). Remember though that
this is from a camera that exhibits signal leakage... one without leakage will usually
deliver the best H-a image from its Red channel, as the Green and Blue channels
remain almost completely unilluminated.
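The min/max stretch applied to those channels is a simple linear remap; a minimal sketch:

```python
# Linear min/max stretch: remap the darkest value in a channel to 0 and
# the brightest to 255 so weak structure becomes visible.

def stretch(pixels, out_max=255):
    """Linearly rescale a flat list of pixel values to the full 0..out_max range."""
    lo, hi = min(pixels), max(pixels)
    if hi == lo:
        return [0] * len(pixels)  # flat frame: nothing to stretch
    return [round((p - lo) * out_max / (hi - lo)) for p in pixels]

dim_channel = [40, 42, 45, 50, 41, 48]
print(stretch(dim_channel))  # spans the full 0..255 range after stretching
```

Note that stretching reveals what is in the data but cannot restore clipped or saturated values; it only redistributes what was captured.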
RGB to grayscale stretched conversion from a non-leakage camera
Stretched color channels from a camera without leakage or interpolation errors will
resemble the image set seen here.
Note that the Red channel (second from left) from a non-leakage camera contains H-a
signal data, though somewhat coarse in appearance as only 25% of the array's pixels
are allocated to Red. The Green and Blue channels from the master image were almost
black, so have very little detail when stretched. The RGB composite (far left) is washed
out when stretched, the result of the non-signal Green and Blue channels diluting the
mix (the unstretched version direct from the camera would appear very dark and
muddy).
Color schemes
Some cameras use a variation of the Bayer mask based upon a CMYK color scheme
rather than RGB. Such systems when used for H-a may be prone to interpolation errors,
mistakes in signal identity that occur since no direct sensors for Red exist in that
scheme.
Although only 25% of the pixels in the RGB Bayer mask are allocated to the Red
channel, they do directly intercept those signals. The CMYK variants measure relative
amounts of Red signal as it falls upon Yellow and Magenta micro-lensed detectors, with
results being interpolated. The situation can become even more confusing as we
attempt to split synthesized Red channel data from the matrix. For the most part though,
CMYK H-a interpolation errors can be handled in similar fashion to "leakage" from an
RGB system.
We'll later examine the histograms of unprocessed and stretched channels in detail.
What about a dedicated astro-CCD?
Since Solar H-a is a monochromatic signal (a narrow slice of precise frequency), it is
best captured via a monochromatic device that is sensitive to that frequency and has
sufficient quantum efficiency so as to get the pictures as fast as possible (minimizing
atmospheric smearing). A temperature controlled chamber helps assure consistency
and optimal QE, while a large array with suitably sized pixels helps to capture the most
detail within the constraints of established sampling criteria.
(Nyquist's Sampling Theorem is an accepted standard).
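As a back-of-envelope check on that sampling criterion, the standard small-angle formula gives the sky coverage per pixel (the 9-micron pixel figure appears earlier in the text; the 2000mm effective focal length is a hypothetical example of mine):

```python
# Sky angle covered by one pixel, and a Nyquist check: the smallest
# detail you hope to resolve should span at least two pixels.

def pixel_scale_arcsec(pixel_um, focal_length_mm):
    """Arcseconds of sky per pixel for a given pixel pitch and focal length."""
    return 206.265 * pixel_um / focal_length_mm

# e.g. 9-micron pixels behind a hypothetical 2000 mm effective focal
# length (telescope + Powermate):
scale = pixel_scale_arcsec(9, 2000)
print(f"{scale:.2f} arcsec/pixel")  # ~0.93

def nyquist_ok(scale_arcsec, smallest_detail_arcsec):
    """True if the detail spans at least two pixels (Nyquist criterion)."""
    return smallest_detail_arcsec >= 2 * scale_arcsec

print(nyquist_ok(scale, 2.0))  # 2-arcsec seeing-limited detail: adequately sampled
```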
This image has been cropped and reduced but is an otherwise unprocessed single shot
taken with a monochromatic dedicated astro-CCD (it has not been stretched or altered
in any other way). Signal intensity is easy to detect, as all pixels contribute to the mix:
what you see is what you get!
It should be noted that the unreduced master image produced a 2600 pixel diameter
disk... a 36 inch wide blazing ball of hi-res activity. The wide dynamic range allowed this
one single image to be used for proms, surface features, and limb detail, a process
requiring up to 3 separate exposures from traditional digicams/digital SLRs. We'll
discuss this in more detail later.
Monochromatic array
MMMMMMMMMMMMMMMMM
MMMMMMMMMMMMMMMMM
MMMMMMMMMMMMMMMMM
MMMMMMMMMMMMMMMMM
MMMMMMMMMMMMMMMMM
MMMMMMMMMMMMMMMMM
MMMMMMMMMMMMMMMMM
MMMMMMMMMMMMMMMMM
MMMMMMMMMMMMMMMMM
MMMMMMMMMMMMMMMMM
MMMMMMMMMMMMMMMMM
MMMMMMMMMMMMMMMMM
Grayscale components of monochromatic imaging device
A monochrome array is ideal for narrowband imaging, as it affords full grid utilization.
Unlike Bayer masking or other RGB/CMYK color array schemes, all pixels are available
for signal detection. The individual pixels are contiguous and directly adjacent to each
other, one being placed in each of the squares seen in the illustration.
Are there other pitfalls I need to be aware of?
There are many pitfalls, but one particularly nasty surprise that has confounded many
an aspiring H-a imager is a phenomenon known as Newton's rings. This anomaly is
gathering attention as more imagers begin to work with the extreme narrow-band signal
of solar H-a, where the problem is usually first noticed (though it can appear elsewhere).
The optical window/filter over the CCD or CMOS array is usually the culprit. The rings
are the result of interference patterns built up when anti-reflection coatings on wedged
surfaces (or those with varying radii) do not sufficiently suppress them. Technically,
there is nothing wrong with the camera, as such cameras are not designed, tested, or
warranted to work with extreme narrow-band signals such as H-a.
When a narrow-band signal is allowed to bounce unrestrained between wedged
surfaces, the strong primary signal not only comes through, but the harmonics
(derivative signals) team up to form strong signals and also come through. These
individual variations away from the primary signal are in themselves relatively weak, but
build up where they overlap each other, forming a moiré-like interference pattern.
A "normal" wide-spectrum image (what the cameras are intended to be used with)
usually does not vividly exhibit this anomaly, as the expanded range of primary signals
are much stronger than any adjacent harmonics, and completely bury them by brute
strength. (The pattern may also vary with changes in frequency. Narrow-band has no
adjacent primary frequencies to mask the rings... wideband does.)
The dreaded “Newton’s rings”
Here is a sample which shows the rings "anchored" in position with the camera. Note
that as the camera was rotated, the sunspots moved, but the rings stayed where they
were in reference to the camera base-plane.
Advice is invariably sought only after the problem has been discovered (Ooops!), but my
retrospective advice is to try the camera before you buy it, as two otherwise identical
cameras (same manufacturer, same model, even one serial number apart) can produce
vastly different results. Not even dedicated astro-CCDs are immune from this "feature";
indeed, my $9000 11-megapixel camera suffers from this malady (sigh!). High-end
cameras designed specifically with extreme narrow-band in mind can sometimes be
ordered without an optical window/filter over the array though, with the express purpose
being to avoid Newton's Rings.
The camera that produced the above image eventually had a CMOS/Window transplant
and, though somewhat improved... still had the rings but now in a different pattern and
location.
I’ve posted my most recent "discoveries" of some of their nuances on my site:
http://www.astro-nut.com/sun-ha-newtons-rings.html
That was a mouthful! How about a brief summation?
An ideal H-a image capturing device will properly confine the signal to a single channel
(preferably monochromatic) without introducing erroneous information, but reality
dictates we find the methods that work best with our own unique equipment.
Any camera can be used to capture solar H-a, but costs rise steeply as you seek to
improve the results. Sensitivity, capability, and array size = additional $. All this is
moot of course, when some kid with a webcam starts knockin' our socks off with SOHO-like images!
Setting up the filters
By now you're likely getting antsy, but hang in there... we're almost ready to begin
gathering some serious images! First though, we're going to set up the telescope and
filter assembly without the camera, as it is often far simpler to determine key filter
adjustments while at the eyepiece, so as to have a good starting point. This allows us to
set the tuner to pull in the filaments with the highest amount of contrast, while not so far
as to shift away from the surface features. If the etalon filter is squared perfectly to the
OTA, non-shifted surface features will usually be fairly sharp.
Start by setting the scope's focus to reveal a crisp disk. Slowly turning the T-max while
at the eyepiece will often yield a considerable increase in contrast of the eyebrow-like
filaments that dance across the surface. Altering the angle ever so slowly, we can often
find a small area of adjustment where surface features and proms are still sharp and
filaments jump out at you... this is where we want to be (and why you want a scope that
allows you to reach the tuner while at the eyepiece!)
Moving down to the eyepiece end of the scope, some etalon-based systems use an
adjustable blocking filter. An etalon works on an interference pattern principle,
generating multiple harmonics of the target frequency which accompany it to the
blocking filter. The BF is a wider passband H-a filter, needing only to be “tight” enough
to exclude the harmonics while allowing the target frequency to pass. Newer BFs are
extremely stable over a wide temperature range, but you can often tune out a tad more
red background “haze” by tweaking it. I find this easiest to do before plonking the
camera onto things, starting with the BF tuner fully CCW and slowly advancing it in a
CW direction until the background definition against faint proms seems best. The
newest generation BFs are so stable that their tuning wheels have been eliminated.
Setting up the filters
With our scope and filter now set up and tuned for the hunt, what remains is to
determine exactly how the camera will behave with this signal. We’ll take a series of
images and examine them back in the digital darkroom. Use conventional coupling
methods and whatever optical devices (PowerMates, Barlows, EPs) needed to provide
the image scale we seek to achieve.
If the camera supports a RAW mode or allows storing the data without camera
processing, those would be the preferable methods of storage. JPEGs and other
formats may attempt to irretrievably alter the data to what the camera’s firmware
interprets as correct, making accurate analysis difficult at best. JPEGs usually autostretch the images to what the camera thinks is correct. Unfortunately, the camera does
not realize it is dealing with a solar H-a image, and so is always wrong!
Block that unwanted light!
If you've not already done so, you may want to install a light shield on the front of the
scope to block out direct sunlight. This not only enhances viewing, but will allow greater
precision in focus. My own deflector is made from a simple piece of black/white Gator®
board (foam backing board sold in craft stores) with an opening on one end to slip
around the filter. The white side faces towards the sun, reflecting unwanted heat, while
the dark side ensures the sunlight is blocked from view (snazzy graphics, optional!)
A viewing hood is also recommended... something large enough to cover your head and
a laptop or viewing screen, if one is to be used (the signals you are about to scrutinize
can be quite weak, so every little bit of improvement helps). Having tried different
materials, my preference is for something that restricts most of the ambient light, allows
air to circulate (it can get quite hot under a midday sun!), and can be easily and
inexpensively fabricated. What filled the bill for me was a rectangular box shape that
was made from "Weed Block" landscaping fabric. The material is porous, allowing air to
circulate (Ahhhh!), dense enough to block out significant ambient light, can be cut to
shape with scissors, and basted together with duct tape.
Focusing techniques
Hartmann focusing mask
A focusing mask can help nail down results, creating multiple images away from critical
focus. Each opening performs as an off-axis telescope, producing an image that
traverses the FOV as focus is changed. The farther apart the openings, the more
profound the effect.
Openings nearer the center will result in lesser image movement even with large
changes in focus, while those nearer the edge will show greater movement even with
small changes in focus. Bigger openings mean brighter targets, but enlarging them
moves the bulk of their area closer together, diminishing the off-axis effects. It
is a balance: enough total (and individual) area to be bright enough to work with, but
not so large as to minimize the off-axis effect. Too small = not bright enough; too big =
hard to detect any changes. Working with only 40mm of aperture will be a tad more
challenging than with a larger workspace, but is still do-able, and gross focusing error
should be detectable. Generally, you want the bulk of the opening area as close to the
edges of the aperture as possible.
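The trade-off above follows from simple similar-triangle geometry; a rough sketch (the formula and the hole-offset numbers are my own simplification, not from the text):

```python
# A mask hole offset d from the optical axis projects an image that
# shifts laterally as focus moves. Small-angle approximation:
#   lateral shift at the sensor ~= d * defocus / focal_length
# so holes near the aperture edge (large d) move the most for a given
# focus error, which is why the openings should hug the edges.

def image_shift_mm(hole_offset_mm, defocus_mm, focal_length_mm):
    """Approximate lateral image shift produced by one mask hole."""
    return hole_offset_mm * defocus_mm / focal_length_mm

# Hypothetical 400 mm focal length scope, 1 mm of focus error:
edge_hole = image_shift_mm(hole_offset_mm=18, defocus_mm=1.0, focal_length_mm=400)
center_hole = image_shift_mm(hole_offset_mm=5, defocus_mm=1.0, focal_length_mm=400)
print(edge_hole > center_hole)  # edge holes reveal focus error sooner
```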
Focusing techniques
Out of focus
Critical focus achieved
Hartmann Mask
Scheiner Disk
Hartmann masks have three holes and are an ideal tool for point source targets (stars).
When used in conjunction with zoom or an enlarged laptop view, even the smallest of
focus deviations become quite obvious. The images will converge as critical focus is
approached. A two-holed mask (Scheiner disk) employs the same principles, but
presents only two images. This can be quite helpful for extended objects, where linear
features such as a limb can be used. The Scheiner is exquisitely well suited for solar
work.
It may seem difficult to accurately scrutinize a full disk or limb (especially with smaller
aperture setups), but you can use another of the mask’s traits to advantage: image
intensity. Images overlap when at critical focus, resulting in perceptibly increased
intensity. As you move directly into critical focus, look for a very weak feature... perhaps
the smallest sunspot in a group or a wisp of prominence activity. Now slowly rack the
focus in or out, and as the mask diverges/converges the ensuing images by
infinitesimally small amounts, you will see that weak target all but disappear when just a
hair out of focus, only to reappear when the images converge, popping into and out of
view as focus is shifted.
Don't be timid about boosting your camera's exposure time or sensitivity while
previewing the mask images, or the use of "Zoom" to enlarge the image scale (via your
camera or laptop control program if using one). Just set things back to normal when the
mask has been removed and fire away! Be aware though of the possibility of "chasing"
focus due to atmospheric changes.... tweaking and tweaking while Mother Nature plays
games with you! At some point you must say "good enough!"
Exposure guidelines
Optimizing available signal data
Dark image
Normal image
Bright image
Generally, astro-images are best captured using manual exposure settings. A simple
trial and error series can be taken and quickly analyzed, ensuring optimal results. Most
digital cameras and dedicated astro-CCDs not only have the ability to display images as
they are captured, but also provide for on-the-spot analysis.
Among the most useful of those tools is the histogram, a graphic representation of the
image’s brightness distribution. Histograms are relatively accurate and easy to interpret, as opposed to
trying to determine image quality based solely upon an LCD rendering of the image.
The vertical axis represents the number of pixels at each brightness level.
The horizontal axis indicates the brightness level (see above). From left to right,
brightness goes from dark to bright. The more pixels there are toward the left, the
darker the image. The more pixels toward the right, the brighter the image. If pixels are
bunched to either side, you can change the exposure setting and take the shot again,
so that the signal levels are better distributed.
The next two slides utilize lunar images, but the histogram concepts are valid also for
solar work.
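The Dark/Normal/Bright reading can be mimicked in code by looking at where the bulk of the pixel values sits; a sketch with illustrative thresholds of my own choosing:

```python
# Crude exposure verdict from the mean brightness of a frame: values
# bunched toward the left read as dark, toward the right as bright.
# The 25%/75% cut points are illustrative, not a standard.

def classify_exposure(pixels, max_level=255):
    """Return 'dark', 'normal', or 'bright' from a frame's mean brightness."""
    mean = sum(pixels) / len(pixels)
    if mean < 0.25 * max_level:
        return 'dark'
    if mean > 0.75 * max_level:
        return 'bright'
    return 'normal'

print(classify_exposure([10, 20, 30, 15]))      # dark
print(classify_exposure([120, 130, 140, 125]))  # normal
print(classify_exposure([240, 250, 230, 245]))  # bright
```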
Exposure guidelines
Dark image
Normal image
Bright image
Here are some images and their corresponding histograms which, as we can see,
closely follow our model for Dark, Normal, and Bright. (see above)
The darker image is weighted heavily to the left while the brightest image shows a shift
to the right. When the signal flattens against either wall, data is lost or “clipped”,
decreasing the dynamic range.
Exposure guidelines
Normal image
Compressed image
Ideally, pixel data should extend as widely as possible, tapering at each end.
The histogram for the normal image illustrates data spread from dark to light, neither
end clipped. Poor exposure settings such as shown to the right result in data
compression, limiting dynamic range.
The wider the signal, the greater the dynamic range and the more distinct levels of
detail the image will contain. Learning to interpret the histogram data will reward you
with richly detailed images, containing the maximum data your equipment is capable of
capturing!
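Detecting the "flattened against the wall" condition programmatically is straightforward; a minimal sketch (the 8-bit 0-255 range is my assumption):

```python
# Count pixels sitting hard at black (0) or white (255). Once a pixel
# clips, its information is gone, so even a small clipped fraction is
# worth a re-shoot at a different exposure.

def clipped_fraction(pixels, max_level=255):
    """Fraction of pixels saturated at either end of the range."""
    clipped = sum(1 for p in pixels if p == 0 or p == max_level)
    return clipped / len(pixels)

good = [12, 80, 150, 200, 240]
blown = [0, 0, 255, 255, 128]
print(clipped_fraction(good))   # 0.0
print(clipped_fraction(blown))  # 0.8
```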
Don’t thrash me for saying this, but…
Those who do not study histograms are doomed to repeat their mistakes! (Groan!)
Exposure guidelines
Nearly ideal (RAW) image, exhibiting full dynamic range
The near-ideal histogram from this RAW (monochromatic) solar H-a image indicates the
darkened background sky approached but did not pile against the zero mark (left side)
and the highest levels of the solar disk tapered prior to hitting the full-white or right side.
The white level was slightly restrained, so as to enhance plage detail… a technique we
may cover in more detail at a later time.
32
Exposure guidelines
Preserve that data!
The histograms we just viewed illustrate ideal scenarios. In reality, we often find we
must compromise to accommodate sky conditions or meet other constraints. What is
important to remember is a wider dynamic range equates to more recoverable detail in
our images while a compressed range reduces levels of distinction. We seek to avoid
“pushing” fully against either wall, as that indicates information is being lost, and once
the signal is clipped or saturated, that information is forever gone!
To preserve the integrity of data (for most astro-imaging applications), RAW data files
are preferable to JPEGs, as the latter manipulates, stretches (then compresses), and
forever alters the data the camera originally captured.
Anything the camera can do via internal firmware, you can usually do later during
processing… and more. You can also go back to the unaltered master to use different
processing techniques… no can do if you let the camera make those decisions for you!
33
Gather some data
Good images may occasionally come our way by accident, but the easiest way to
ensure success is by trying different scenarios and paying attention to what seems to
work. That would include taking a series of images spanning a range of exposure
settings then determining which settings worked best.
An image processing program (such as Images Plus) can be used to split RGB images
into their individual channels, tossing away the Green and Blue (unless you own a
camera with leakage or interpolation errors).
On subsequent imaging expeditions we'll have the benefit of prior experience and
histogram data to refer to, but for the first few runs we will bracket exposure ranges,
gathering more than we actually expect to use. The first few sessions may seem a tad
overwhelming as new concepts, devices, and procedures are encountered, but hang in
there... your efforts will be rewarded with consistently predictable results!
34
Gather some data
We’ll squeeze off one image each at a series of exposure times that show up on the
camera’s playback screen as ranging from bright red down to nearly black, noting
which Powermate, Barlow, EP, and/or camera Zoom settings were used (exposure
durations etc. are often stored with the images on the CF card). Trotting
back indoors, we now face the task of determining which channels actually hold data
and how wide the dynamic ranges are. For Digicams and Digital SLRs, the lowest
(slowest) ISO setting will usually provide the most noise-free images to work with, and
the exposures are still plenty fast enough to reduce atmospheric smearing.
Avoid using additional knock-down (darkening) filters when possible, relying instead
upon faster exposures to reduce exposure intensity. All other things being equal
(amplifier gain etc), the faster the image is captured, the less time there is for
atmospheric actions to smear the results.
A few points:
Don't trust your eyes here (remember: RGB images will be muddy and dark), use
histogram data.
Do not use auto-focus or auto-exposure.
In burst mode the array may have insufficient recovery time, blurring all but the first shot
in the series.
35
Dissecting the images
Referring back to our knowledge (assumption?) that our camera likely contains a Bayer-mask
RGB sensor, we need a program that will allow us to split the composite image into its
individual constituents. While Adobe PhotoShop or other programs can be used to
accomplish this, I prefer to use Images Plus, as it can be used to accomplish several
other crucial steps during the routine of normal processing.
Note: it is best to convert the images to TIFF or similar non-lossy format before ANY
processing is begun. Each and every time an image is opened and saved in a lossy
format (ie: JPEG) valuable data becomes forever lost! Subsequent references to saving
files mean doing so using non-lossy formats!
1) Using Color | Split Colors LRGB | three channels are extracted from the master.
2) Using Color | Interpret – Mix Colors | the images are converted to Gray or luminance
(our visual acuity is usually better at seeing contrast within a grayscale rather than the
colored channels, but the final call will be based on the histogram data)
3) Call up and examine the Histogram for each channel. To save time, you can skip over
ones that are obviously far too clipped or saturated. What we are looking for is
the Channel (R, G, or B… hopefully R) and exposure settings that display the widest
dynamic range before any further image manipulation takes place.
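Steps 1 through 3 boil down to something like the following sketch. The pixel tuples are hypothetical 16-bit values; a real program such as Images Plus naturally works on whole image files rather than toy lists:

```python
def split_channels(rgb_pixels):
    """Split a list of (R, G, B) tuples into three channel lists."""
    r, g, b = zip(*rgb_pixels)
    return {"R": list(r), "G": list(g), "B": list(b)}

def widest_channel(channels):
    """Pick the channel whose raw data spans the widest intensity range."""
    return max(channels, key=lambda name: max(channels[name]) - min(channels[name]))

# Hypothetical pixels from a low-leakage camera: Red carries the signal.
pixels = [(120, 4, 1), (30000, 900, 40), (61000, 2200, 680)]
best = widest_channel(split_channels(pixels))   # "R", as we'd hope
```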
36
Dissecting the images
Original RAW image
Processed image
So what channel do I use?
The wider the dynamic range, the finer the accuracy in revealing potential detail. As the
range becomes compressed, the image tends to become “posterized”, breaking down
into distinct lower-resolution sections. Note the Channel that had the most dynamic
range and the exposure time used (available in the EXIF data). This will be your baseline
for all future solar H-a images shot using that same Zoom, EP, or PowerMate,
bracketing the value to cover yourself. Surprisingly, I’ve found little deviation despite the
time of year or meridian elevation.
4) Prominences will be difficult to judge via the histogram, as the entire disk will be in
saturation when the proms are at their best. Usually an exposure that is about two stops
slower than the optimal surface shots will provide ample dynamic range (you can fine
tune your selection based later on the visual appearance of the stretched prom images).
The lower the dynamic range, the fewer details you will be able to pick out of the proms,
making for a solid splash of prominence rather than a structured flame-like appearance
(yeah, I know… but bold theatrical color and detail helps sell the look! :o)
To determine which channel of a color digicam/digital SLR to use, take a series of
exposures and break them into RGB components. If the camera has low leakage, the
Blue and Green channels will have very little signal level despite the exposure settings.
You will likely get the best results from that camera using the Red channel data.
37
Dissecting the images
Original RAW image
Processed image
So what channel do I use?
If however, you do have recognizable data in the Green channel (the Blue is almost
never salvageable for H-a), throw out those channel images that are obviously poor
quality, and convert the remaining ones to grayscale (our visual acuity is greater there,
making critical scrutiny easier). Search among them for those with the best contrast and
definition, noting which color channel they came from (Red or Green) and the exposure
range used. Chances are that if you do have significant leakage, the best images will be
derived from the Green channel data.
The image on the left is a RAW, unstretched image as delivered by the camera
(downloaded in 16/48-bit TIFF format via the Images Plus Canon conversion utility) and the
fully processed image is to the right. Surface detail is somewhat washed out, as I used
a lengthened exposure prominence shot for this sequence (to illustrate that even at
increased exposure levels, with a low-leakage camera the Blue and Green channels will
appear dark). RAW images are preferred over JPEGs, as most cameras automatically
stretch the latter, giving a false indication as to the true exposure levels.
RAW images appear darker and muddier than the data that has actually been recorded,
as the only channel we are interested in (ideally, Red) is mixed with an equal amount of
the Blue and twice as much from the Green channel signals (RGB digicam pixel
allocation commonly is R=25%, G= 50%, B=25%). Since the Blue and Green channels
should not have any signal and are essentially black, the resulting composite RGB
image is: R+G+B or Signal + Black + Black = mud!
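That "mud" arithmetic is easy to check. Using the common 25/50/25 Bayer weighting mentioned above (hypothetical 16-bit values):

```python
def bayer_luminance(r, g, b):
    """Composite luminance with the common R=25%, G=50%, B=25% weighting."""
    return 0.25 * r + 0.5 * g + 0.25 * b

# Only Red carries signal; Green and Blue are essentially black, so the
# composite comes out a quarter as bright as the Red data itself: mud!
mud = bayer_luminance(40000, 0, 0)   # 10000.0
```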
38
Dissecting the images
RAW Red
Stretched Red
The image has been split into its RGB components to analyze the contents of each
channel. The left image is the R component as directly extracted from the RAW image
and the right image has been auto-stretched. Note the histograms which show that the
RAW Red has captured almost the complete 64K intensity range without clipping
(intensity range = 689 to 61,301). The image requires very little stretching to fill the
range completely, meaning maximum detail has been captured and preserved.
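The stretch itself is just a linear rescale of whatever range was captured onto the full output range. A sketch, using the intensity range quoted above:

```python
def linear_stretch(pixels, out_max=65535):
    """Rescale pixel data so it spans the full 16-bit output range."""
    lo, hi = min(pixels), max(pixels)
    scale = out_max / (hi - lo)
    return [round((p - lo) * scale) for p in pixels]

raw = [689, 30000, 61301]          # endpoints match the range reported above
stretched = linear_stretch(raw)    # now runs from 0 up to 65535
```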
39
Dissecting the images
RAW Green
Stretched Green
In stark contrast, the Green channel here appears black, and the histogram attests to
the low signal level present (intensity range 31 to 2213). When expanded to 64K, the
stretched version is somewhat coarse, with mild clipping and posterization (areas of
distinct demarcation).
While the exposure level could be increased so as to exploit the higher pixel density of
the Green channel, the low level of leakage from this camera does not warrant it. The
requisite increase in exposure times would allow more time for atmospheric
disturbances to smear the details, as well as cause bleeding from the now-saturated
adjacent Red pixel wells.
40
Dissecting the images
RAW Blue
Stretched Blue
Also appearing black in the RAW version, the Blue channel has an extremely low signal
level (intensity range = 0 to 684), with the stretched version displaying severe clipping
and posterization.
We are now ready to get down to the serious task of collecting our images (Wahoo!) But
don’t bother etching these exposure numbers into your brain… write them down and
keep them handy with your H-a equipment. Use these to get close to the optimal
exposures, but check the actual histograms to really nail things down tight!
41
Processing techniques
Making it pretty!
A powerful facet of digital imaging lies in the ability to easily manipulate the work in
ways that formerly required a well-equipped darkroom and scads of experience. Some
techniques are impossible to perform in the darkroom, and can only be done via
computer. Indeed, the programs are so effective that many of the best film-based astrophotographers scan their images so as to allow digital processing.
Some of the digital techniques include:
Deconvolution to “tighten” images dispersed by atmospheric conditions
Dark frame subtraction to remove hot pixels etc.
Flat fielding to remove gradients, vignetting, etc.
Binning to increase signal levels
Stacking to remove random noise and/or improve S/N ratio
Adjustment of dynamic range and levels
Mask construction/application to combine various exposures
Smoothing and noise reduction
Sampling and iterative processing
Color interpretation/enhancement
Sizing to aesthetically pleasing scale
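As a taste of what's happening under the hood, here's a toy sketch of just one item from the list, stacking. The "frames" are simulated with random noise around a flat signal, not real solar data:

```python
import random

def stack(frames):
    """Average aligned frames pixel-by-pixel; random noise cancels out."""
    n = len(frames)
    return [sum(col) / n for col in zip(*frames)]

random.seed(1)  # repeatable demo
true_signal = [100.0] * 5
frames = [[p + random.gauss(0, 10) for p in true_signal] for _ in range(64)]
stacked = stack(frames)
# With 64 frames, random noise drops by roughly sqrt(64) = 8x.
```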
42
Processing techniques
A quick guide
Image processing techniques and results can vary considerably. Program differences,
methods used, or desired emphasis are a few of the many factors that might result in
vastly different outcomes. Software designed specifically for astro-imaging contains
features that will greatly simplify the tasks at hand, often seeming to perform feats of
magic.
Images Plus is a favorite: it is economical, powerful, easy to use (video tutorials guide
you through functions and tasks), and very well supported. It is the basis for processing
techniques shown here, though other programs may also offer worthwhile attributes and
be quite adept at performing similar functions.
The following instructions are intended to provide a “fast-track” for those seeking a path
to attain solar-imaging Nirvana. So gather up your image sets*, and let the fun begin!
*NOTE: For RGB cameras, I like to grab about 5 shots each at one exposure step above,
one at, and one step below my known optimal settings. I usually grab the surface shots first,
then quickly capture the prom shots.
43
The processing then proceeds as follows:
1) Open the images in Images Plus
2) Using Color | Split Colors LRGB | extract the channel you previously determined to
contain the best data.
44
3) Using Color | Interpret – Mix Colors | convert the color channel to Gray or luminance.
4) Compare it to its counterparts to see which is the most crisp and pristine-appearing
image. You can open a few or a lot, clicking off and closing the non-contenders as you
cull down the selections. Ultimately you will have one for the surface and one for the
proms. Until you get more adept at handling the images within the program, you may
want to work with only the surface or only the proms at one time, rather than
simultaneously.
45
Here’s where it starts to get interesting, and Images Plus really begins to shine!
5) Clicking on “Preview” generates a 25% sized copy of the original, allowing you to
test what effects a change will produce at lightning-fast speed, then apply those
parameters to the full-sized image.*
6) Open the Histogram option and examine the current data spread.
* Alternatively, you may use the mouse to select and copy any rectangular portion of the
image.
46
7) Click on Color | Brightness Levels and Curves | Brightness – Contrast – Stretch, then
click the “Auto” button, allowing the program to decide on suitable parameters. Click on
“Apply” and recheck the Histogram (hit the Histogram update button to check the new range).
8) In all likelihood, this will be pretty darned close to optimal, but you can click on
“Enable Sliders” and manually make corrections, with the changes occurring
immediately.
9) Like what you see? Click on the full-sized image to select it, then click on “Apply” in
the “Background – Contrast – Stretch” box to perform the same parameter changes on
it (voila!)
10) Click on File | Save Image with History | to keep a copy of the now partially
optimized image, along with a log of what changes you made to it along the way. This
will serve to guide you in future processing efforts, when trying to recall whatever it was
that made that one image so special!
47
11) Returning attention to the full-sized image, from the | Edit | pull-down menu, check
the “Copy A Portion of an Image” option. You can now drag the mouse to create a
rectangle around a subsection of the image. This will allow you to test out complex
deconvolution permutations at blazing speed, then apply the good ones to the large
image as we did earlier with the stretching routines. No need to wade through
numerous iterations only to find much later that they aren’t to your liking. Click and hold
down the mouse button to draw a box around a feature-laden area. Release the button
and a full-sized version (or scaled if really large) of that area pops up. Make two or three
copies of it, to compare restoration results in the next step.
12) Click on Restoration | Iterative Restoration | Adaptive Richardson – Lucy | leaving
the Noise Threshold set on 2.0, PSF size on 7x7 to start, and Iterations on 15. Hit the
“Apply” button and bam… the image is deconvolved. You can click repeatedly between
the forward/back buttons to see how much impact those parameters had. I usually start at
about those numbers and if there was any noticeable change, try it one step smaller on
the PSF scale (ie: try 5x5 if there was a change at 7x7). One secret to extracting the
most detail is to start at the smallest PSF size where change is detected. Going too low
though serves only to emphasize the granular “noise” texture of the underlying data
structure.
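The Adaptive Richardson – Lucy routine in Images Plus is the program's own refinement, but the classic Richardson-Lucy iteration it builds on can be sketched in miniature. This is a 1-D toy (made-up PSF and data), purely to show the flavor of what each iteration does:

```python
def convolve(signal, psf):
    """'Same'-size 1-D convolution with zero padding at the edges."""
    half = len(psf) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(psf):
            k = i + j - half
            if 0 <= k < len(signal):
                acc += w * signal[k]
        out.append(acc)
    return out

def richardson_lucy(observed, psf, iterations=15):
    """Classic Richardson-Lucy deconvolution (1-D, for illustration only)."""
    estimate = [1.0] * len(observed)
    psf_flipped = psf[::-1]
    for _ in range(iterations):
        blurred = convolve(estimate, psf)
        # Compare what we'd observe from the estimate against reality...
        ratio = [o / max(b, 1e-12) for o, b in zip(observed, blurred)]
        # ...and nudge the estimate toward agreement.
        correction = convolve(ratio, psf_flipped)
        estimate = [e * c for e, c in zip(estimate, correction)]
    return estimate

psf = [0.25, 0.5, 0.25]             # a small, made-up blur kernel
sharp = [0, 0, 0, 10, 0, 0, 0]      # a point-like feature
blurred = convolve(sharp, psf)      # what the atmosphere delivers
restored = richardson_lucy(blurred, psf, iterations=30)
# The restored peak re-concentrates toward the original point.
```

More iterations sharpen further, but (just as the text warns) pushing too far amplifies noise and ringing.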
48
When you’ve found the lowest PSF size that produces noticeable change, try it with
more and with fewer iterations on the other sample tiles you created in step 11. Try 20
iterations and 10. You want to find the lowest number of iterations that looks almost as
good as the higher-numbered sets. Going more than a tad too high will hurt us in our
next step, while going way too high produces ringing and other unwanted artifacts in the
current step.
You now know what PSF size looks best and how many iterations to apply. Click the
full-sized image to select it and then the “Apply” button to perform the same restoration
algorithms on it.
49
13) If step 5 was where it began to get interesting, this is where it becomes downright
intriguing! Repeating the process of creating three small samples (drag the mouse
around it… yada yada yada) we will now do (gasp!) multi-level iterative restoration!!!
The Adaptive Richardson – Lucy dialog box will still be open with the previously used
values. Bump the PSF up one notch (ie: if it was at 7x7 go to 9x9) but drop way down
on the iterations… maybe start at 5... the number of iterations needed on subsequent
processing runs diminishes greatly.
The magic that happens now is pulling together the next magnitude of convolved data
lying above the smaller PSF threshold. Multi-level restorative processing often can
provide a dramatic increase in resolution. When/if you’ve found results that please you,
“Apply” it as before to the full-sized image. Draw and copy three more sample boxes
and try yet one PSF size larger… continuing the process so long as resolution
continues to increase. DO NOT get overexuberant and go to the point that the image
develops a processed look. Strive to be conservative, as it is usually far preferable
to retain a natural “not-quite-over-the-top” appearance than one that looks canned!
14) Like where you are? Save the “Image with History” (save save save!) and hunker
down for the most dramatic step to this point: colorization!
50
15) Hit the “Duplicate Image at 100%” Button three times, creating three clones of the
current full sized image then select Color | Combine Color LRGB |. Click the Red button
in the dialog box then click on any of the four large images. Do the same for the Green,
Blue, and Luminance buttons, choosing a different one of the four for each selection. Set
the values in the boxes as follows: Red 2.0, Green leave at 1.0, Blue 0.4, Luminance
leave at 1.0, and click the Apply button. Voila… colorization! You can experiment with
different ratios, taking care to not go too far or the relevant color channel will saturate,
smearing or reducing the detail present in it.
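The colorization step itself amounts to nothing more than per-channel gain. A sketch using the ratios above (16-bit levels assumed; the clamping mirrors the saturation warning):

```python
def colorize(gray, r_gain=2.0, g_gain=1.0, b_gain=0.4, max_level=65535):
    """Tint a grayscale level into a warm H-alpha hue via channel gains."""
    return tuple(min(round(gray * gain), max_level)
                 for gain in (r_gain, g_gain, b_gain))

colorize(20000)   # (40000, 20000, 8000): a warm orange-red
colorize(50000)   # Red clamps at 65535... the saturation to watch out for
```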
When you like what you see, click on “Done” and “Save With History”
Close out all open images and open the image you chose for the prominence
components. Proceed in the same fashion as you did for the surface shots, but be
aware that the optimal PSF grid is usually about two sets larger… if you started at 7x7
for the surface, the proms will likely respond at 11x11. Colorize the proms the same
way, then Save with History and close out Images Plus… We now move over to Adobe
PhotoShop (mainly because I’ve not mastered these steps with IP yet! :o)
51
Adobe PS processing steps:
The Surface and Prominence images are now ready to be combined; a process easily
accomplished using Adobe PhotoShop. We will be opening both images, copying the
prom shot on top of the surface shot, assuring the images are in alignment and properly
scaled, making an additional copy of the surface image disk, placing it on the top of the
stack, and doing a bit of cutting, feathering, and final tweaks.
The routine might seem longish and tedious, but runs through very quickly and
effectively once you understand it (translation: it takes far longer to explain it than to do
it! :o)
1) Open both images in PS (three images, if processing “hairy limbs” as well)
2) You will be working with Layers, which is usually parked in the palette dock (upper
right hand corner). If it is not in the dock or already in the workspace, you may need to
open it via the “F7” key (or menu item Window | Layers |). If left in the dock, it collapses
when not being viewed, so click on it and drag it to a section of the work area where it
will remain expanded and visible, but not in the way.
52
3) Select the entire Prom image by clicking on it to make it the active image, then
hitting <control> “A” (or using the menu item Select | All |). A dashed line will appear
around the entire image. Now hit <control> “C” (or menu item Edit | Copy |) to copy the
entire image onto the clipboard.
53
4) Click on the Surface image to make it the active image, then hit <control> “V” (or Edit
| Paste | from the menu) to paste this image on top of the surface image. This is done
so that we might use the more evenly darkened sky from the Surface image as a
backdrop rather than having to do much more manipulation with the Prom image’s
usually brightly flared background. The Layers palette now shows two layers, with the
Prom image on top. If a dashed box surrounds the image at this time, hit <control> “D”
to deselect it.
54
5) We will now align the two images. From the small pull-down menu in the Layers
Palette (with layer 1 still selected), select Difference. This allows you to “see through”
the top image, with those areas having identical features darkening up as they come into
alignment. Since these images are vastly different from each other, we will be
concentrating on the perimeter and major prominence activity. Select the Move tool (or
use the cursor keys) to nudge the Prom layer so that it lines up with the underlying
Surface image.
Quite often the Prom shot will be slightly bloated, due to the longer exposure (this is
less prevalent with dedicated astro-CCDs). If that condition exists, you’ll be able to see
it here, and can use the Transform function to correct for it (typically about a 1- 1.5%
image size difference might exist).
Hit <control> “T” to bring up the Transform function, click on the “Maintain Aspect Ratio”
function, the chain link icon on the upper tool bar. This keeps the “W” (width) and “H”
(height) components in proper ratio to each other. Click on the number in the W field and
change it from 100.0% to 98.0 (don’t insert the percent sign). The Prom image will
shrink in both W and H… if only one portion shrank, re-check the “Maintain Aspect
Ratio” as it is probably not selected. 98.0% will likely be a tad too small, but you can
backspace over the .0 and change it to a .1 or .2… etc, backspacing and increasing it
by .1 increments until it has expanded to the proper dimension. Click on the Move Tool
to get out of the Transform mode. A pop-up box asks if you want to apply the
Transform. Click the Apply button, go to the Layers palette, and change the view from
Difference to Normal.
55
6) Using the Magic Wand tool, click on the dark outer portion of the prom image (adjust
the “Tolerance” as needed). A dashed outline will surround the rectangular outer
perimeter of the image while another will encircle slightly larger than the disk. Holding
down the “Shift” key while in the Wand Mode, allows adding more areas to the
selection, while using the “Alt” key allows removing sections. A “+” or “-” sign will
accompany the wand in those modes.
56
7) Choose Select | Feather | and enter a radius value between 0.2 and 250.0. The actual
amount of Feathering needed will depend upon several items (image size in pixels,
sharpness of the deviations, etc.), but we can get an approximate feel for how much is
required by viewing the dashed lines which will be displayed. Try a value of about 3 or 5
and check the results. If you don’t like the results, go back (<control><alt> “Z”) and try
another value, noting whether it needs more (a higher number) or less Feathering.
Get close to the tops of the proms, but don’t worry if it just skirts their tips, as it
will fade from that point. Trial and error works best here, using <control><alt> “Z” to undo
the change and <control><shift> “Z” to restore it. Click <control> “X” to cut the
feathered background away. Use <control><alt> “Z” and <control><shift> “Z” to toggle
between the changed version and the unchanged to see if it is what you want. This will
leave you with a bright Prom disk (with no surface features) that has crisp prominences
against a clean dark background sky.
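Under the hood, Feathering amounts to softening a hard-edged selection mask so cut edges fade rather than snap. A 1-D toy sketch, with a simple box average standing in for PhotoShop's more refined falloff:

```python
def feather(mask, radius):
    """Soften a hard 0/1 selection mask so edges ramp gradually."""
    out = []
    n = len(mask)
    for i in range(n):
        window = mask[max(0, i - radius): i + radius + 1]
        out.append(sum(window) / len(window))
    return out

hard = [0, 0, 0, 1, 1, 1]       # hard-edged selection boundary
soft = feather(hard, 1)         # the 0-to-1 transition now ramps
```

A larger radius widens the ramp, just as a larger Feather value does in PhotoShop.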
57
8) Click the Prom Layer “Eye” off (layer 1) to view the Surface layer (layer 0) and click
on Layer 0 in the palette to select it (Hit <control> “D” to turn off the dashed Select lines
if they are still on from step 6), then with the Wand Tool, click on the dark background
perimeter that surrounds the disk to select it. Now, use Select | Inverse | to select the
disk instead of the dark area. If the disk were selected directly, you’d only get very
rough irregular portions… selecting it via the Inverse method is a quick and clean
method of capturing the full disk itself.
Use the Feathering technique as described previously so that a smoother transition can
be made (try a very low value… 1 or 2 say), then copy the disk to the clipboard
(<control> “C”), click the layer 1 box in the Palette to turn it back on again (and establish
that as our building point), and hit <control> “V” to paste a copy of the disk cutout on top
of the prom shot. You may slide it around if needed via the Move tool or cursor keys, but
if copied and pasted directly using the above procedure, the alignment will often remain
intact.
HINT: Use the “Zoom Tool” (or <ctrl> “+”) to increase the view size of the image. The
increments used to shift the images are finer as the image becomes larger.
9) Tweaks can be made to the individual layers at this time… ie: minor color balance
shifts or intensity changes, but nothing too radical should be needed at this point. You
might try using a mild amount of Unsharp Masking (Filter | Sharpen | Unsharp Mask) on
the prom layer and individually on the surface layer to punch up the detail a tad… again,
don’t overdo it… less is usually best!
58
10) Flatten the image (Layer | Flatten Image |) then size it according to your needs.
When going to a lower size, you’ll likely find that an increase in Unsharp Masking can
restore some “punch” to the image. That in turn may require a slight re-adjustment in
the Contrast. Always perform the final tweaks based upon the final size at which the
image is intended to be displayed. An image that has killer resolution at its native
(larger) size can look quite mushy and “ho-hum” when reduced without additional sharpening.
59
RAW processed for Surface
RAW processed for Hairy Limbs
RAW processed for Proms
Composited results
When properly set up and exposed, a single RAW solar H-a image from a dedicated
astro-CCD will span a wide dynamic range. So broad in fact, that the same base image
may be duplicated then processed using multiple paths: Normal for the relatively strong
Surface features, Boosted for the fainter “Hairy Limbs”, and Maximum Boost for the
much dimmer Proms.
This greatly simplifies the task of gathering suitable images while also ensuring that
features from each magnitude realm (Surface to “Hairy Limbs” to Proms) are directly and
correctly linked. There are no lapses of time between the elements, as occurs when
multiple images must be used, nor are there convolution changes (atmosphere, optical
instability, tube currents, etc.) which make direct matching of separate images highly
unlikely. When all components are derived from a single image though, artistic blending
of boundary zone details is not necessary (averted imagination not required!)
60
Flat-fielding
+
Original Grayscale
=
Pseudo-flat
Final image
If the variation is not too great, you can adjust the intensity of various parts of the
image via a pseudo-flat. This is a good method of dealing with “hot spots” or excessive limb
darkening (a phenomenon that occurs due to the wide extremes of dynamic range
represented within the image).
The exposure should be limited so that the brightest area is not saturated (still contains
detail), with hopefully enough detail remaining in the darker areas for the flat to
selectively boost. While the example illustrates control of limb darkening, the same
technique can be used to control "hot spots". As always, it is best to minimize such
anomalies during the "capture" rather than the processing stages of imaging... the better
the basis image, the better the chances of more pleasing final results!
1) Make a copy of the image to be corrected (NOTE: this COPY of the image itself will
become your pseudo flat).
2) Use a "clone tool" on the copy image to remove areas much darker or brighter than
the rest.
3) Blur the copy image slightly (you can use "Gaussian blur") to further remove detail
while preserving the gradient deviations.
61
4) Apply the blurred copy image as a calibration image or mask to the original image...
Voilà, corrected!
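Steps 1 through 4 can be sketched numerically as well. This 1-D toy uses made-up "limb darkened" values, a box blur in place of the Gaussian, and simple division in place of the mask application (the gist, not the exact Images Plus mechanics):

```python
def box_blur(pixels, radius):
    """Crude blur: the pseudo-flat only needs the large-scale gradient."""
    out = []
    for i in range(len(pixels)):
        window = pixels[max(0, i - radius): i + radius + 1]
        out.append(sum(window) / len(window))
    return out

def apply_pseudo_flat(image, strength=1.0):
    """Divide the image by a blurred copy of itself to even out gradients."""
    flat = box_blur(image, radius=2)       # step 3: blurred copy of the image
    peak = max(flat)
    return [p * (peak / f) ** strength     # step 4: boost where the flat is dark
            for p, f in zip(image, flat)]

# A 1-D "scan" across a limb-darkened disk: bright center, darker limbs.
row = [40.0, 60.0, 95.0, 100.0, 95.0, 60.0, 40.0]
evened = apply_pseudo_flat(row, strength=0.5)   # a small amount of tweak
```

The `strength` parameter plays the role of "tweaking the calibration image intensity" described above.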
Tweaking the calibration image intensity will vary the level of effect. Try a few settings
until you get the balance you want. Only a small amount of tweak was used with this
image, as I wanted to lighten up the outlying limb just a tad. While the darkened limbs
are difficult to detect in the reduced-size original grayscale shown here, their
prominence (pun intended) is readily discernible in the large-scale version.
I call it a "pseudo-flat" because it isn't really a flat nor was it "collected" via flat-fielding.
Rather, it is constructed using the image itself, and is then applied as if it were a real
flat. I suppose it could be thought of as a hybridized auto-calibration image, but
"pseudo-flat" rolls off the ptongue easier, don't you pthink?!?
62
So what’s next?
The sky’s the limit!
Lacking time to prepare a “proper ending”, I’ll close with an animation sequence from
the “Astro-Imaging with a Digital Camera” shtick I did for my club. The mix includes a
few solar images, so I should be safe (heheheh!)
I apologize if the following animation sequence is jumpy… my laptop is “graphics-challenged”.
63