
Solar Processing & Acquisition Tutorials for HA | Jan 16th 2020


MalVeauX


Hey all,

I made an acquisition and processing tutorial a while back (3 years ago? Yikes!) and it is fairly dated in terms of what I'm doing these days. I've been asked for a long time to make a new one showing my current workflow: specifically, how I process a single-shot image for both the surface and prominences, and how to process them together to show the prominences and the surface at once. I've abandoned split images and composites and strictly work from one image using layers. Acquisition does not use gamma at all anymore. Nothing terribly fancy, but it's not exactly intuitive, so hopefully this new video will illustrate most of the fundamentals to get you started.

Instead of an hour, this time it's only 18 minutes, in real time from start to finish. I'm sorry for the long "waiting periods" where I'm just waiting for the software to finish its routine; they typically last a minute and a half at most. The first 4 minutes is literally just stacking & alignment in AS!3. I typically work faster than this, but I wanted to slow down enough to talk through what I'm doing as I do it. Hopefully you can see each action on the screen. I may have made a few mistakes or used a few incorrect terms; forgive me for that, this is not my day job. I really hope it helps folks get more into processing, as it's not difficult or intimidating once you see a simple process with only a few tools in use. The key is good data to begin with and a good exposure value.

Today's data came from a 100mm F10 achromatic refractor and an ASI290MM camera with an HA filter. I used FireCapture to acquire the data with a defocused flat frame. No gamma is used. I target anywhere from 65% to 72% histogram fill. That's it! The processing is fast and simple. I have a few presets that I use, but they are all defaults in Photoshop. A lot of the numbers I use for parameters are based on image scale, so keep that in mind and experiment with your own values.
The only non-default preset I use is my coloring scheme. I color with Levels in Photoshop, and my values are Red 1.6, Green 0.8, Blue 0.2 (these are mid-point values).
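For anyone who wants to try this coloring outside Photoshop: the Levels mid-point slider is a per-channel gamma adjustment, so the scheme above can be sketched in a few lines of NumPy. The `colorize` helper and the 0..1 normalization are my own, not from the tutorial:

```python
import numpy as np

def levels_midpoint(mono, gamma):
    # Photoshop's Levels mid-point slider applies output = input ** (1/gamma),
    # with pixel values normalized to the 0..1 range
    x = np.clip(mono, 0.0, 1.0)
    return x ** (1.0 / gamma)

def colorize(mono):
    # hypothetical helper: apply the Red 1.6 / Green 0.8 / Blue 0.2
    # mid-points from the post and stack the results into an RGB image
    r = levels_midpoint(mono, 1.6)
    g = levels_midpoint(mono, 0.8)
    b = levels_midpoint(mono, 0.2)
    return np.dstack([r, g, b])
```

A mid-point above 1 lifts a channel and one below 1 crushes it, which is why these three values push the mid-tones toward the classic orange H-alpha look.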

Processing Tutorial Video (18 minutes):

https://youtu.be/RJvJEoVS0oU

RAW (.TIF) files available here to practice on (the same images you will see below as RAW TIFs):

https://drive.google.com/open?id=1zjeoux7YPZpGjlRGtX6fH7CH2PhB-dzv

Video for Acquisition, Focus, Flat Calibration and Exposure (20 minutes):

(Please let me know if any links do not work)

++++++++++++++++++++++++++++++++++++++
++++++++++++++++++++++++++++++++++++++

Results from today using this workflow.

Colored:

[attached: four colored solar images]

B&W:

[attached: four B&W solar images]

SSM data (sampled during 1.5~2 arc-second seeing conditions):

[attached: SSM seeing-conditions chart for the afternoon of Jan 16th 2020]

Equipment for today:

100mm F10 Frac (Omni XLT 120mm F8.3 masked to 4")
Baader Red CCD-IR Block Filter (ERF)
PST etalon + BF10mm
ASI290MM
SSM (for fun, no automation)

[attached: photos of the SSM on the 100mm solar scope and the full solar setup]

Very best,
 


Great new tutorial. I based my workflow on your old tutorial and was never really happy with the junction between the surface and proms after processing them separately. I like your dodge-and-burn method on the proms and processing the whole image at once. I'll try it on some images I took last summer to see how they turn out. :smile:

Alan



Hey all,

It rained all day yesterday, but today a cold front came through and it's clear so far. I used the time to make a quick video tutorial on my workflow for acquiring the data that I later process into images. FireCapture is the software I'm using, specifically because it allows real-time flat calibration applied to the live feed from your camera, embedded into the source video upon recording. That is a huge help in making sure you've eliminated Newton's rings, gradients, dust, artifacts, etc., instead of finding out later. It also helps with exposure: seeing WYSIWYG with the flat applied, you can avoid clipping data. This will cover basic focusing (manual, by hand, nothing special), flat calibration (the big part of this, including defocus and diffuser methods for full-FOV and partial-disc-FOV flat calibration), and exposure values. I use gamma as a tool to expand and crush shadow tones and mid-tones, changing contrast so that it's easier to see the camera's output in real time for focusing and for spotting prominences in the data, without changing the actual exposure. This is a key element that I use a lot, but I do not use gamma when recording; it is always turned off when recording is happening. Exposure values are totally variable; there's no magic number. We merely look at the histogram and make adjustments based on it.

Please forgive any mistakes in words or terms, it's not my day job (wish it was!) to do this stuff.

Also, sorry for the wobbling and slewing around, it was very windy this morning and I was touching the scope and moving it quickly while trying to make this video.

Key elements to know perhaps before watching the tutorial video:

FireCapture: I'm using the FireCapture software for this entire process. Huge thanks to the author of this software, and for making it free!

Gamma: I use gamma a lot in FireCapture; it's purely software manipulation. I do not use gamma when recording, however (it's off or neutral; neutral is 50 in FireCapture, by the way). Gamma stretches values, which is useful when eyeballing the live feed from your camera. It's handy to stretch up the shadows and mid-tones (moving the slider to the left, towards 0) for less contrast, to see faint stuff like prominences; it's also handy to crush the shadows and mid-tones (moving the slider to the right, towards 100) so that perceived contrast is higher on things like spicules, plages, filaments, spots, etc. Several times in the tutorial I set exposure, use gamma to crush shadows and see surface detail to critically focus, then open the shadows up with gamma to see the prominences on the limb, again without changing exposure. The key is that exposure wasn't changed to see the surface or the prominences, just the software manipulation of gamma; the data is there either way, so turn gamma off when recording your video. You can lift the faint prominences and increase surface contrast in post-processing from the same single-exposure capture (see my previous tutorial, Rapid Workflow). You don't have to use this; I just find it handy for focusing and for confirming the prominences are in my data, then I turn it off to actually capture the data.
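To make the preview-only nature of this concrete, here is a toy sketch of a gamma stretch applied to a frame for display. The slider-to-exponent mapping (`slider / 50`) is my own approximation, not FireCapture's actual formula; the point is only that the recorded data would be untouched:

```python
import numpy as np

def preview_with_gamma(frame, slider, neutral=50):
    # Display-only stretch: slider < 50 lifts shadows (prominences become
    # visible), slider > 50 crushes them (more surface contrast).
    # The slider-to-exponent mapping is a hypothetical approximation.
    gamma = slider / neutral              # 50 -> 1.0, i.e. no change
    x = np.clip(frame.astype(np.float64) / 255.0, 0.0, 1.0)
    return np.round(x ** gamma * 255.0).astype(np.uint8)
```

Only the preview array is remapped; the source frame is never modified, which is exactly why gamma should be neutral when the video is actually recorded.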

Flat Calibration, Defocus Method: The defocus method is commonly used and easy when the solar disc fills the FOV of your camera, so that there's only sun in your FOV. Simply defocus the disc until features are gone, ideally somewhere near the center of the disc. I lower exposure values to achieve about 65% histogram fill; FireCapture recommends between 50~80% if you use the hover tool. Exposure time doesn't matter. I prefer not to use gain for this if possible, but you can if you need to. FireCapture has a flat-frame tool built in: you simply click it, tell it how many frames you want to capture, and it will capture them and apply the flat calibration to your real-time video stream from the camera.

Flat Calibration, Diffuser Method (Bag Flats): The diffuser method is an easy way to create a flat calibration frame when the solar disc does not fill the FOV of your camera sensor, and you can see the limb or the void of space around the full or partial disc. You need a translucent bag; I'm using a cereal bag. It should not be completely see-through, nor fully opaque. Not all bags are equal, so you have to experiment and find one that does what you need. The key is that it diffuses light, as in, it scatters the light. This illuminates the bag itself, so that when it's in front of your aperture, the light source is now larger and will fill the FOV on your sensor, letting you create a flat frame even though the solar disc isn't filling your FOV. This works for full-disc FOV with a short scope, and for partial-disc FOV.

The bag needs to cover the entire aperture of your scope, but it also needs to sit away from the front of the aperture, not directly touching it. I find it needs 2+ inches of space so that there's no hard edge and the diffuser material is illuminated farther out than where the solar disc would normally appear. A lens hood or lens shade is ideal for providing that space, ideally larger than your actual aperture. The key is not to have the bag flat up against the entrance of your solar scope (you may need to make a small hood or cardboard holder for dedicated solar scopes, which tend to lack lens hoods).

I again target 65% histogram fill. When performing this method, put the disc in or near the center of your FOV or sweet spot. Focus first to get critical focus, then don't touch the focuser. Then put the bag on and raise exposure to fill the histogram to 65% (or 50~80% per FireCapture's hover tool). Gamma off. Capture your flat frames with the FireCapture tool; it will auto-apply the flat frame. Now remove the bag. You will need to lower exposure back to your recording values. It will still be in focus, so there's no need to change it, though you can still fine-focus if you need to.
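Whichever way the flat frames are gathered, the calibration itself is just a division by the normalized master flat. A minimal sketch of that arithmetic (function names are mine; FireCapture's internals may differ):

```python
import numpy as np

def master_flat(flat_frames):
    # average the defocused/diffused frames into one master flat
    return np.mean(np.asarray(flat_frames, dtype=np.float64), axis=0)

def apply_flat(frame, flat):
    # normalize the flat to mean 1.0, then divide it out so that dust
    # shadows, Newton's rings and vignetting cancel in the result
    master = flat / np.mean(flat)
    return frame / np.maximum(master, 1e-6)   # guard against divide-by-zero
```

Because the flat is normalized before the division, its absolute brightness (and therefore the exposure used to shoot it) drops out; only its shape matters.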

Exposure Time: In general I recommend 10ms or shorter exposure times to freeze seeing. This depends on image scale. With fine image scales such as 0.3"/pixel, I try to keep it closer to 2ms, 3ms or 5ms to better freeze the seeing. With coarse image scales, like 2"/pixel, 1.5"/pixel, 1"/pixel or 0.8"/pixel, 10ms is likely fine as a maximum. Use whatever exposure time best fills your histogram without clipping the data to the right (whites), with exposures short enough to freeze seeing. You may need to use some gain to get the histogram filled if you reach the exposure-duration limit for freezing the seeing. Every imaging system and filter system is different, so there are no magic numbers other than the time needed to freeze seeing; the rest is just manipulation based on the histogram.

Histogram: Lower left (or wherever you moved it) is your histogram. You need this to understand your exposure. It shows the shape and spread of your data from the black point on the far left to the white point on the far right. When I refer to histogram fill, I'm referring to putting as much data between those two points as I can without pushing past the white point, or clipping to the right, which results in lost data (it's considered white after that point). There's a % value there to tell you how close you are to filling the entire histogram between the two points. This is what I'm referring to with the 65%, 80%, 90% ranges, etc. during the tutorial.
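In code terms, the fill percentage is just the brightest pixel relative to the white point. A sketch of the check for an 8-bit mono frame (FireCapture's own calculation may differ; function names are mine):

```python
import numpy as np

def histogram_fill_pct(frame, bit_depth=8):
    # brightest pixel as a percentage of the white point
    white = 2 ** bit_depth - 1
    return 100.0 * float(frame.max()) / white

def is_clipped(frame, bit_depth=8):
    # any pixel already at the white point has lost data to the right
    return bool((frame >= 2 ** bit_depth - 1).any())
```

A frame whose brightest pixel sits at 166 out of 255 is at about 65% fill, which is right in the target range described above.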

Critical Focus: Being in focus is not that easy sometimes. I focus manually; it's even easier if you have a controller and a motorized focuser, of course. When the seeing is poor, it can be hard to focus critically as you chase the seeing. Ideally, adjust to the highest contrast you can see and watch for moments of good seeing; if you see pencil-drawing-like features, you're close. When seeing is dreadful you may never achieve critical focus. When seeing is average with brief moments of good seeing, you can see high-contrast features, lines, etc., know whether you're close to focus, and adjust from there. I suggest you make adjustments, wait and watch, then make adjustments again. It's much easier when you have good seeing conditions. It's also much easier with coarse image scales from small-aperture scopes, as they are less affected by seeing conditions. It's much more challenging with fine image scales from very large apertures that require excellent seeing conditions.

Bit Depth & Container: Ideally you would capture at 16-bit depth (especially for prominences). I used 8 bit in these videos because it's faster and more accessible to most people and their systems, but if you can capture in 16 bit, I suggest you do. The container matters for the bit depth: some capture to AVI, but AVI has limits on bit depth. I suggest the SER container, which allows 16 bit if you want or are able to use it. Beyond that it doesn't matter which one; the data is RAW either way, and the container mainly changes what you'll use to preview it, as the stacking software is happy to open an AVI or SER container all the same. I'm using the SER container, and for preview I have a (free) SER player that lets me view the videos independently.

How Many Frames? You can preset any number of frames to capture. I capture in bursts of 1,000 frames at a fairly fast frame rate (it's slower in the video due to running all the software, recording the screen in real time, with the video feed, etc. on a laptop). You can capture less, and you can capture more. Just know that the more time you spend capturing frames, the more features can change on the sun, as it's super dynamic. I would not capture more than 2~3 minutes of video at coarse image scales, and at fine image scales I wouldn't go past a minute or so, usually less. The apparent movement of things like prominences, filaments, flares, etc. can occur within just minutes. So keep your recording bursts short in time and packed with as many frames as possible (i.e., you want fast FPS). This is why we recommend monochrome sensor cameras with fast data-rate potential (you can use a region of interest to speed up a slower, larger-pixel-array camera).
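A quick sanity check on burst length falls straight out of the frame count and frame rate. The numbers below are illustrative, not a recommendation beyond what the post already says:

```python
def burst_duration_s(n_frames, fps):
    # time on target for one burst; keep it short so solar features
    # don't evolve during the capture
    return n_frames / fps

def max_frames_for(seconds, fps):
    # largest burst that still fits a given time budget
    return int(seconds * fps)
```

A 1,000-frame burst at 100 FPS is only 10 seconds on target, comfortably inside the 2~3 minute budget suggested for coarse image scales.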

The scope, filter system, camera, etc, I'm using do not matter. Nothing will have the same values used above. The intent is just the idea of how to go through the process of this workflow and it will work for small 40mm PST solar scopes and up all the same.

++++++++++++++++++++++++++++++++++++++++++++++++

++++++++++++++++++++++++++++++++++++++++++++++++

++++++++++++++++++++++++++++++++++++++++++++++++

Video Tutorial:

 

(Please forgive any mistakes, wobbly stuff, incorrect terms or descriptions, this isn't my day job)

++++++++++++++++++++++++++++++++++++++++++++++++

++++++++++++++++++++++++++++++++++++++++++++++++

++++++++++++++++++++++++++++++++++++++++++++++++

Bag flats are always difficult to explain and show, so here are some images and a result from the above method, for the full- or partial-disc FOVs that are usually difficult to flat-calibrate:

[attached: bag-flat setups on the PST and on a 60mm mask, and a partial-disc result with the flat applied]

Very best,


  • 4 weeks later...

Hi MalVeauX.

Didn't realise you'd posted an extra tutorial until today. I like your opaque bag method for getting flats when the Sun doesn't fill the frame. I get the full disc with my ASI178 and Lunt 50 so hadn't taken flats up to now. I'll give your method a try. :smile:

I agree with your method of not using gamma when recording, just while focusing etc. I use FireCapture too. An extra bonus of not using gamma is a significant increase in frame rate, especially when recording full frame, as it takes a fair amount of real-time processing to apply gamma to every pixel. I was once wondering why my frame rate was so low when recording, until I realised I'd left gamma on after focusing. Turning gamma off (or just leaving it at 50) made the frame rate much higher again.

I've found recording video in 16 bit makes no difference compared to recording in 8 bit. I was wondering why, and found this article by Craig Stark, The effect of stacking on bit depth. As long as there is noise in each frame, stacking not only reduces noise but increases bit depth. He concludes:

Quote

In the presence of noise, stacking numerous 8-bit images has been shown to reduce the quantization error significantly. Stacks of 100 noisy images yield quantization accuracy on par with 12-bit noiseless quantization. Stacks of 50 images were nearly as good, loosing only half a bit. Further, given the inherent noise in the cameras, the need for 16 bits is not entirely clear. Only when the noise is very low (a standard deviation of 0.5 on a scale of 0-255 or 128 on a scale of 0-65,535) was the quantization noise remaining in a stack of 50 frames becoming the major component of the noise in 8-bit stacks. Until this point was reached, the total error in equal length stacks was virtually identical, regardless of bit depth.

CMOS cameras used for planetary/solar imaging are generally 12 bit. The camera output is just multiplied by 16 to give 16 bits. Stacking just 100 8-bit images which contain noise, (as astro images invariably have) will give you quantization noise equivalent to 12.1 bits. As we normally stack more than 100 frames we've already improved on the 12 bit quantization error on the 16 bit recording.
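Stark's result is easy to reproduce numerically. The toy simulation below (my own, with arbitrary noise figures) quantizes noisy frames to 8 bits and shows the stack average recovering a sub-ADU signal level that no single 8-bit frame can store:

```python
import numpy as np

rng = np.random.default_rng(42)

def stacked_8bit_mean(true_level, noise_sd, n_frames):
    # each frame: true signal plus Gaussian noise, quantized to 8-bit ADU
    frames = rng.normal(true_level, noise_sd, size=n_frames)
    quantized = np.clip(np.round(frames), 0, 255)
    return quantized.mean()

# a single 8-bit frame can only record 100 or 101, but the 100-frame
# stack lands near the true 100.4 thanks to the dithering effect of noise
estimate = stacked_8bit_mean(true_level=100.4, noise_sd=2.0, n_frames=100)
```

With no noise at all, every frame would quantize to the same value and the stack would stay stuck at it, which is exactly the low-noise caveat in the quoted conclusion.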

Also recording in 8-bit generally doubles your frame rate if recording full frame so another bonus. :smile:

Alan


14 hours ago, symmetal said:

Hi MalVeauX.

I've found recording video in 16 bit makes no difference to recording in 8 bit. I was wondering why, and found this article by Craig Stark The effect of stacking on bit depth. As long as there is noise in each frame, stacking not only reduces noise but increases bit depth. He concludes

CMOS cameras used for planetary/solar imaging are generally 12 bit. The camera output is just multiplied by 16 to give 16 bits. Stacking just 100 8-bit images which contain noise, (as astro images invariably have) will give you quantization noise equivalent to 12.1 bits. As we normally stack more than 100 frames we've already improved on the 12 bit quantization error on the 16 bit recording.

Also recording in 8-bit generally doubles your frame rate if recording full frame so another bonus. :smile:

Alan

This is an excellent paper, thanks! It helps loads when trying to figure out whether something matters here, especially regarding how large the 16-bit data is and the FPS impact (if applicable) that comes with it. Conventional wisdom has been that 16 bit helps with faint prominences, and anecdotal experience seems to lean that way too, but that may have been just n=1 or n=2 and not really better due to bit depth at all. Very interesting!

Very best,


  • 2 months later...

First time for me this morning with a 178 chip - Lunt LS60 Tha - 201 frames, almost 1GB of data (.ser) and my laptop (i7 8GB Ram) is almost frozen running AS3! on one video. Sheesh!

Looks like I'll be dropping to .avi tomorrow, or reducing the area of capture!

Even my 174 on white light froze on AS3!

Good fun this solar imaging lark.... :D


39 minutes ago, Altocumulus said:

First time for me this morning with a 178 chip - Lunt LS60 Tha - 201 frames, almost 1GB of data (.ser) and my laptop (i7 8GB Ram) is almost frozen running AS3! on one video. Sheesh!

Looks like I'll be dropping to .avi tomorrow, or reducing the area of capture!

Even my 174 on white light froze on AS3!

Good fun this solar imaging lark.... :D

What frame size were you using?

I usually run 800x600 on H-a. Mono16 bit .SER [ASI174]
100 stacked of 2k or 3k frames in AS!3.
My laptop has no problem with that.
I often capture, browse and process at the same time.
Not so much parallel processing as serial [image] killer. ;)


15 minutes ago, Freddie said:

There is no point in capturing using SER as once you have stacked there is no advantage over capturing using AVI.

There is no choice with SharpCap, which switches automatically to SER when you choose Mono16 (and to AVI with Mono8).
I read the 8 v 12 v 16 bit discussion on CN; some disagree over the choice.


Don't capture in 8 bit unless you know what you are doing. Use SER.

49 minutes ago, Freddie said:

I would, and do, go with AVI every time. (though in FireCapture)

https://pdfs.semanticscholar.org/657f/7ee24526277d5a9da3dbc6bdf0826c864a64.pdf
 

This paper discusses issues with stacking using fixed-precision mathematics. Avoid stacking in fixed precision; use floating point for the stacking result whenever possible.

It does not talk about what happens when you use 8bit capture format.

For example, the ASI174 has no native 8-bit ADC; it uses its 10-bit ADC when doing 8-bit capture. The 8 bits produced are the MSBs of those 10 bits. You need to adjust your gain so that there is no data loss due to this clipping of the LSBs.

In order for the 2 least significant bits not to matter, you need to adjust your gain to 0.25 e/ADU. Since the unity-gain value is 189 and each 61 steps halves e/ADU, you need to add 61 twice to reach 0.25 e/ADU. This means a gain of 311 in 0.1 dB units.

You also need to examine the bias at these settings, check for clipping to the left, and adjust the offset accordingly.
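The gain arithmetic above can be written out explicitly. The unity gain of 189 and the ~61 steps per halving are taken from the post; the formula is an approximation of the camera's published gain curve, not a measured value:

```python
def e_per_adu(gain_setting, unity=189, steps_per_halving=61):
    # ZWO gain settings are in 0.1 dB steps; roughly every 61 steps
    # (about 6 dB) halves the e/ADU conversion factor
    return 2.0 ** (-(gain_setting - unity) / steps_per_halving)
```

`e_per_adu(311)` gives 0.25 e/ADU, matching the 189 + 2 x 61 figure in the post.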


I don’t know if FC and SC capture in different ways when set to AVI mode but the overwhelming evidence from actual captures from many different imagers (at least when using FC) is that there is no observable benefit to capturing SER, it just takes up more storage and potentially slows achievable capture speed.


7 minutes ago, Freddie said:

I don’t know if FC and SC capture in different ways when set to AVI mode but the overwhelming evidence from actual captures from many different imagers (at least when using FC) is that there is no observable benefit to capturing SER, it just takes up more storage and potentially slows achievable capture speed.

I advocate using SER - because

a) It is a lighter-weight format: it uses just 178 additional bytes for the header, and the rest is raw frame data, with an optional block of timestamps appended at the end. AVI, on the other hand, is an audio/video interleaved container for both audio and video data, capable of storing a variety of pixel and compression formats.

From this it is obvious that SER will take less space for same pixel format.

b) Given that AVI is what it is, it can contain any number of video formats, including lossy video compression. Experienced planetary imagers will know to avoid that and use raw data even if it is recorded in AVI, but many other people won't, and will happily choose the AVI format. In doing so, they run the risk of using formats like YUV, which has chroma subsampling, or even compressed formats like MJPEG that introduce artifacts into the recorded data. For the benefit of the general public, instead of a convoluted discussion on pixel formats and raw vs. lossy-compressed images, I just advocate SER to everyone. That way people avoid the pitfalls associated with AVI.

Have a look at http://www.grischa-hahn.homepage.t-online.de/astro/ser/ for description of SER.

It is a lightweight format that is particularly suitable for planetary imaging. It adds no overhead that would slow down capture; on the contrary, it will be faster than AVI.
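For the curious, the fixed 178-byte header is simple enough to parse by hand. A sketch following the layout in the spec linked above (14-byte FileID, seven little-endian int32 fields, three 40-byte info strings, two int64 timestamps); raw frame data follows immediately after the header:

```python
import struct

# 14s + 7*4 + 3*40 + 2*8 = 178 bytes, the fixed SER header size
SER_HEADER = struct.Struct('<14s7i40s40s40s2q')

def parse_ser_header(raw):
    (file_id, lu_id, color_id, little_endian, width, height,
     pixel_depth, frame_count, observer, instrument, telescope,
     date_time, date_time_utc) = SER_HEADER.unpack(raw[:SER_HEADER.size])
    return {
        'file_id': file_id.rstrip(b'\x00').decode('latin-1'),
        'width': width,
        'height': height,
        'pixel_depth': pixel_depth,   # bits per pixel plane, e.g. 8 or 16
        'frame_count': frame_count,
    }
```

Each frame then occupies width x height x bytes-per-pixel, so the whole file size is trivially predictable, which is part of why the format is so light.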


12 minutes ago, Freddie said:

Sorry, badly worded on my part. We were obviously originally talking about 16 bit SER. There is no real world advantage to capturing in 16 bit SER Vs lower bit but I neglected to state 16 bit above.

That depends on how strong your source is (which in turn depends on numerous factors, one of them being sampling rate).

One should limit exposure time depending on the current seeing level (coherence time). In particularly good seeing, or when the sampling rate is lower than critical (full-disk imaging, for example, with a shorter focal length and large pixels), the signal level per single exposure can be higher than 8 bits can record. Why not exploit that with 12-bit capture?

I've never done Ha solar, so I can't be sure what sort of signal level per exposure is to be expected, but in other types of planetary imaging - you can sometimes saturate 8bit range with short exposures.

Then there is the matter of explaining the details of 8-bit capture to the general public. I often say: if you have 16 bit, use it; there's no harm in doing so (except perhaps a somewhat lower frame rate). If one uses 8 bit and there is clipping of values, much larger damage is done to the recording.


That’s been an interesting discussion, thanks for your input and comments. As someone who has done plenty of solar imaging, practical experience shows that 16 bit capture gives no observable quality improvement in the final processed image but can slow down capture speeds (which can cause issues in high res imaging of active areas such as proms) and certainly takes up more storage which may or may not be an issue depending on the individual computer being used. Some interesting theoretical points though.


In answer to the earlier question on size: as it was my first time with the 178, I was running with the full chip.

I've seen many a discussion on .SER vs .AVI that makes my head spin, and certainly some points above make it spin even more. .SER retains more data in the envelope than .AVI; that much is easy to understand.

For the time being I shall see what I can get away with whilst staying with .SER (as I tend to use Sharpcap).

Have downloaded that .pdf for a bit of light reading.

 

Thanks all....


  • 11 months later...

Hello Marty,

How are you? Thanks very much again. I will take a close look at this thread, which I hadn't seen before, and I will study your processing tutorial again.

I have ordered a Skywatcher Evostar 90mm scope at 900mm focal length (F10) and will start setting it up for straight-through imaging with a Quark Chromosphere filter. I have yet to decide on a camera; I will need to buy one.

Thanks for your files, which I will also work on later next week!

 

Magnus.

