
MalVeauX

Solar Processing & Acquisition Tutorials for HA | Jan 16th 2020


Hey all,

I made an acquisition and processing tutorial a while back (3 years ago? Yikes!) and it is fairly dated in terms of what I'm doing these days. I've been asked for a long time to make a new one showing my current workflow: specifically, how I process a single-shot image for both the surface and prominences, and how to process them together to show prominences and the surface at once. I've abandoned split images and composites and strictly work from one image using layers. Acquisition does not use gamma at all anymore. Nothing terribly fancy, but it's not exactly intuitive, so hopefully this new video illustrates most of the fundamentals to get you started.

Instead of an hour, this time it's only 18 minutes, in real time from start to finish. I'm sorry for the long "waiting periods" where I'm just waiting for the software to finish its routine; that typically lasts a minute and a half at most. The first 4 minutes is literally just stacking and alignment in AS!3. I typically go faster than this, but I wanted to slow down enough to talk through what I'm doing as I do it. Hopefully you can see each action on the screen. I may have made a few mistakes or used a few incorrect terms; forgive me for that, this is not my day job. I really hope it helps folks get more into processing, as it's not difficult or intimidating once you see a simple process with only a few tools. The key is good data to begin with and a good exposure value.

Today's data came from a 100mm F10 achromatic refractor and an ASI290MM camera with an HA filter. I used FireCapture to acquire the data with a defocused flat frame. No gamma is used. I target anywhere from 65% to 72% histogram fill. That's it! The processing is fast and simple. I have a few presets that I use, but they are all defaults in Photoshop. A lot of the numbers I use for parameters are based on image scale, so keep that in mind and experiment with your own values.
The only non-default preset I use is my coloring scheme. I color with Levels in Photoshop, and my values are Red: 1.6, Green: 0.8, Blue: 0.2 (these are mid-point values).
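For anyone who wants to experiment with those values outside Photoshop, here's a rough sketch of the same per-channel mid-point adjustment in Python/NumPy. The function names are my own, and the mapping (out = in^(1/mid) on normalized 0..1 data) is an assumption about how Photoshop's Levels mid-point slider behaves, not a guarantee:

```python
import numpy as np

def apply_levels_midpoint(channel, mid):
    """Photoshop-style Levels mid-point on normalized (0..1) data.

    Assumed mapping: out = in ** (1 / mid). mid > 1 lifts the channel,
    mid < 1 crushes it.
    """
    return np.clip(channel, 0.0, 1.0) ** (1.0 / mid)

def colorize_ha(mono):
    """Colorize a normalized mono HA image with the post's mid-points:
    Red 1.6 (lifted), Green 0.8 (slightly crushed), Blue 0.2 (heavily crushed)."""
    return np.dstack([
        apply_levels_midpoint(mono, 1.6),
        apply_levels_midpoint(mono, 0.8),
        apply_levels_midpoint(mono, 0.2),
    ])
```

For a mid-gray pixel this produces the expected orange cast: red well above green, green well above blue.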

Processing Tutorial Video (18 minutes):

https://youtu.be/RJvJEoVS0oU

RAW (.TIF) files available here to practice on (the same images you will see below as RAW TIFs):

https://drive.google.com/open?id=1zjeoux7YPZpGjlRGtX6fH7CH2PhB-dzv

Video for Acquisition, Focus, Flat Calibration and Exposure (20 minutes):

(Please let me know if any links do not work)

++++++++++++++++++++++++++++++++++++++
++++++++++++++++++++++++++++++++++++++

Results from today using this work flow method.

Colored:

[four attached images]

B&W:

[four attached images]

SSM data (sampled during 1.5~2 arc-second seeing conditions):

[seeing conditions chart attached]

Equipment for today:

100mm F10 Frac (Omni XLT 120mm F8.3 masked to 4")
Baader Red CCD-IR Block Filter (ERF)
PST etalon + BF10mm
ASI290MM
SSM (for fun, no automation)

[setup photos attached]

Very best,
 


Thank you for taking the time to do this Martin, much appreciated by all over here in the sunny UK; we need all the help we can get 😁

Dave


Great new tutorial. I based my workflow on your old tutorial and was never really happy with the junction between the surface and proms after processing them separately. I like your dodge-and-burn method on the proms now, and processing the whole image at once. I'll try it on some images I took last summer to see how they turn out. :smile:

Alan


Thank you Martin, for another excellent tutorial! :thumbsup:

 


Hey all,

It rained all day yesterday, but today a cold front came through and it's clear so far. I used the time to do a quick video tutorial on my workflow for acquiring data that I process into images later. FireCapture is the software I'm using, specifically because it allows real-time flat calibration applied to the live feed from your camera, embedded into the source video upon recording. That is a huge help for making sure you've eliminated Newton's rings, gradients, dust, artifacts, etc, instead of finding out later. It also helps with exposure: seeing WYSIWYG with the flat applied, you can avoid clipping data. This will cover basic focusing (manual, by hand, nothing special), flat calibration (a big part of this, including defocus and diffuser methods for full-FOV and partial-disc-FOV flat calibration), and exposure values. I use gamma as a tool to expand and crush shadow tones and mid-tones to change contrast, making it easier to see the camera's output in real time for focusing and for seeing prominences in the data, without changing the actual exposure. This is a key element that I use a lot, but I do not use gamma when recording; it will always be turned off when recording is happening. Exposure values are totally variable; there's no magic number, we merely look at the histogram and make adjustments based on it.

Please forgive any mistakes in words or terms, it's not my day job (wish it was!) to do this stuff.

Also, sorry for the wobbling and slewing around, it was very windy this morning and I was touching the scope and moving it quickly while trying to make this video.

Key elements to know perhaps before watching the tutorial video:

FireCapture: I'm using FireCapture software specifically for this entire process. Huge thanks to the author of this software and that it's free!

Gamma: I use gamma a lot in FireCapture; it's purely software manipulation. I do not use gamma when recording, however (it's off or neutral; neutral is 50 in FireCapture, by the way). Gamma stretches values in ways that are useful when eyeballing the live feed from your camera. It's handy to stretch up the shadows and mid-tones (moving the slider to the left, towards 0) for less contrast, to see faint stuff like prominences; it's also handy to crush the shadows and mid-tones (moving the slider to the right, towards 100) so that perceived contrast is higher on things like spicules, plages, filaments, spots, etc. Several times in the tutorial I will set exposure, use gamma to crush shadows and see surface detail to critically focus, then open up the shadows with gamma to see the prominences on the limb, again without changing exposure. The key is that exposure wasn't changed to see the surface or prominences, just the software manipulation of gamma; the data is there, so turn gamma off when recording your video. You can lift the faint prominences and increase surface contrast in post processing from the same single-exposure capture (see my previous tutorial, Rapid Workflow). You don't have to use this; I just find it handy to focus and to see the prominences so I know they're in my data, then turn it off to actually capture the data.

Flat Calibration, Defocus Method: The defocus method is commonly used and easy when the solar disc fills the FOV of your camera, so that there's only sun in your FOV. Simply defocus the disc until features are gone, ideally somewhere near the center of the disc. I lower exposure values to achieve about 65% histogram fill; FireCapture recommends between 50~80% if you use the hover tool. Exposure time doesn't matter. I prefer not to use gain doing this if possible, but you can if you need to. FireCapture has a flat frame tool built in: you simply click it, tell it how many frames you want to capture, and it will capture them and apply the flat calibration to your real-time video stream from the camera.

Flat Calibration, Diffuser Method (Bag Flats): The diffuser method is an easy way to create a flat calibration frame when the solar disc does not fill the FOV of your camera sensor, and you can see the limb or the void of space around the full or partial disc. You need a translucent bag; I'm using a cereal bag. It should not be completely see-through, but it must pass diffuse light. Not all bags are equal, so you have to experiment and find one that does what you need. The key is that it diffuses light, as in, it scatters it. This illuminates the bag itself, so that when it's in front of your aperture the light source is now larger and will fill the FOV on your sensor, letting you create a flat frame even though the solar disc isn't filling the FOV. This works for full-disc FOV with a short scope, and for partial-disc FOV.

The bag needs to cover the entire aperture of your scope, but it also needs to sit some distance from the front of the aperture, not directly touching it. I find it needs 2+ inches of space so that there's no hard edge and the diffuser material is illuminated farther out than where the solar disc would normally appear. A lens hood or lens shade is ideal for providing that space, ideally larger than your actual aperture. The key is not to have the bag flat up against the entrance of your solar scope (you may need to make a small hood or cardboard holder for dedicated solar scopes, which tend to completely lack lens hoods).

I again target 65% histogram fill. When performing this method, put the disc in or near the center of your FOV or sweet spot. Focus first to get critical focus, then don't touch the focuser. Then put the bag on and raise exposure to fill the histogram to 65% (or 50~80% per FireCapture's hover tool). Gamma off. Capture your flat frames with the FireCapture tool; it will auto-apply the flat frame. Now remove the bag. You will need to lower exposure back to your recording values. It will still be in focus, no need to change it; you can still fine-focus if you need to.
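To make the calibration itself concrete: a flat frame correction is just dividing each raw frame by the normalized master flat. FireCapture does the equivalent on the live feed; a minimal offline sketch in Python/NumPy (the function name is hypothetical, not FireCapture's code) would be:

```python
import numpy as np

def apply_flat(frame, flat):
    """Divide a raw frame by the master flat normalized to mean 1.0.

    Bright (over-illuminated) regions of the flat divide the frame down,
    dim regions (vignetting, dust shadows) divide it up, flattening the field.
    """
    master = flat.astype(np.float64)
    master /= master.mean()
    master[master <= 0] = 1.0  # guard against dead/zero pixels
    return frame.astype(np.float64) / master
```

A frame whose gradient matches the flat's gradient comes out uniform, which is exactly the point of the technique.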

Exposure Time: In general I recommend 10ms or shorter exposure times to freeze seeing. This depends on image scale. With fine image scales such as 0.3"/pixel, I tend to keep it closer to 2ms, 3ms or 5ms to better freeze the seeing. With coarse image scales, like 2"/pixel, 1.5"/pixel, 1"/pixel or 0.8"/pixel, 10ms is likely fine as a maximum. Use whatever exposure time best fills your histogram without clipping the data to the right (whites), while staying short enough to freeze seeing. You may need some gain to fill the histogram if you reach the limits of exposure duration. Every imaging system and filter system is different, so there are no magic numbers other than exposure times short enough to freeze seeing; the rest is just adjusting based on the histogram.
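As a sketch, the rule of thumb above could be written as a tiny helper. The thresholds here are my reading of the text, not hard numbers from the post; tune them for your own setup and seeing:

```python
def suggested_max_exposure_ms(image_scale_arcsec_per_px):
    """Rough seeing-freeze guideline (assumed thresholds):
    fine scales (~0.3"/px) want 2-5 ms, coarse scales ~10 ms max."""
    if image_scale_arcsec_per_px <= 0.5:
        return 5.0
    return 10.0
```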

Histogram: Lower left (or wherever you moved it) is your histogram. You need this to understand your exposure. It shows the shape and spread of your data from the black point on the far left to the white point on the far right. When I refer to histogram fill, I'm referring to putting as much data between those two points as I can without pushing it past the white point, i.e. clipping to the right, which results in lost data (it's considered white after that point). There's a % value there to tell you how close you are to filling the entire histogram between the two points. This is what I'm referring to with the 65%, 80%, 90% ranges, etc, during the tutorial.
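A rough stand-in for that readout: the fill percentage is just how close the brightest pixels sit to the white point. Here's a hedged NumPy sketch (hypothetical helpers, not FireCapture's actual code) with the 50~80% window mentioned above:

```python
import numpy as np

def histogram_fill_percent(frame, white_point=255):
    """Brightest pixel as a percentage of the white point - a rough
    stand-in for FireCapture's histogram-fill readout."""
    return 100.0 * float(frame.max()) / white_point

def exposure_ok(frame, low=50.0, high=80.0):
    """True when fill sits in the suggested window; a clipped frame
    (100% fill) always fails."""
    fill = histogram_fill_percent(frame)
    return low <= fill <= high
```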

Critical Focus: Being in focus is not that easy sometimes. I focus manually; it's even easier if you have a controller and motorized focuser, of course. When the seeing is poor, it can be hard to focus critically as you chase the seeing. Ideally, adjust to the highest contrast you can see and watch for moments of good seeing; if you see pencil-drawing-like features, you're close. When seeing is dreadful you may never achieve critical focus. When seeing is average with brief moments of good seeing, you can see high contrast features, lines, etc, know if you're close to focus, and adjust from there. I suggest you make adjustments, wait and watch, then make adjustments again. It's much easier when you have good seeing conditions. It's also much easier at coarse image scales from small aperture scopes, as they are less affected by seeing conditions. It's much more challenging at fine image scales from very large apertures that require excellent seeing conditions.

Bit Depth & Container: Ideally you would capture at 16 bit depth (especially for prominences). I used 8 bit in these videos because it's faster and more accessible to most people and their systems; if you can capture in 16 bit, however, I suggest you do that. The container matters based on the bit depth. Some capture to AVI, but AVI has limits with bit depth; I suggest the SER container, which allows 16 bit if you want or can use it. Otherwise it doesn't matter which one; the data is RAW either way, and the container mainly just changes what you'll use to preview it, as the stacking software will happily read an AVI or SER container all the same. I'm using the SER container, and for preview I have a (free) SER player that lets me view the videos independently.

How Many Frames? You can preset any number of frames to capture. I capture in bursts of 1,000 frames at a fairly fast frame rate (it's slower in the video due to running all the software, recording the video in real time with the video feed, etc, on a laptop). You can capture fewer, or more. Just know that the more time you spend capturing frames, the more features can change on the sun, as it's super dynamic. I would not capture more than 2~3 minutes of video at coarse image scales, and at fine image scales I wouldn't go past a minute or so, usually less. The apparent movement of things like prominences, filaments, flares, etc, can occur within just minutes. So keep your recording bursts short on time and packed with as many frames as possible (i.e., you want fast FPS). This is why we recommend monochrome sensor cameras with fast data rate potential (you can use a region of interest to speed up a slower, larger pixel array camera).
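The budget above reduces to simple arithmetic: frames per burst = FPS × time window. A trivial sketch (helper name is mine; the time limits are taken from the post):

```python
def frames_in_burst(fps, max_seconds):
    """Frames that fit in a capture window short enough that solar
    features don't visibly evolve (~60 s fine scale, ~180 s coarse)."""
    return int(fps * max_seconds)
```

For example, a camera running at 130 FPS fills a one-minute fine-scale window with 7,800 frames, far more than a 1,000-frame burst needs.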

The scope, filter system, camera, etc, I'm using do not matter. Nothing will have the same values used above. The intent is just the idea of how to go through the process of this workflow and it will work for small 40mm PST solar scopes and up all the same.

++++++++++++++++++++++++++++++++++++++++++++++++

++++++++++++++++++++++++++++++++++++++++++++++++

++++++++++++++++++++++++++++++++++++++++++++++++

Video Tutorial:

 

(Please forgive any mistakes, wobbly stuff, incorrect terms or descriptions, this isn't my day job)

++++++++++++++++++++++++++++++++++++++++++++++++

++++++++++++++++++++++++++++++++++++++++++++++++

++++++++++++++++++++++++++++++++++++++++++++++++

Bag flats are always difficult to explain and show, so here are some images and a result from the above method, for full or partial disc FOVs that are usually difficult to flat-calibrate:

[bag flat setup and result images attached]

Very best,


Hi MalVeauX.

Didn't realise you'd posted an extra tutorial until today. I like your opaque bag method for getting flats when the Sun doesn't fill the frame. I get the full disc with my ASI178 and Lunt 50 so hadn't taken flats up to now. I'll give your method a try. :smile:

I agree with your method about not using gamma when recording, just while focusing etc. I use FireCapture too. An extra bonus of not using gamma is a significant increase in frame rate, especially when recording full frame, as it takes a fair amount of real-time processing to apply gamma to every pixel. I was once wondering why my frame rate was so low when recording, until I realised I'd left gamma on after focusing. Turning gamma off (or just leaving it at 50) made the frame rate much higher again.

I've found recording video in 16 bit makes no difference compared to recording in 8 bit. I was wondering why, and found this article by Craig Stark, "The effect of stacking on bit depth". As long as there is noise in each frame, stacking not only reduces noise but increases bit depth. He concludes:

Quote

In the presence of noise, stacking numerous 8-bit images has been shown to reduce the quantization error significantly. Stacks of 100 noisy images yield quantization accuracy on par with 12-bit noiseless quantization. Stacks of 50 images were nearly as good, loosing only half a bit. Further, given the inherent noise in the cameras, the need for 16 bits is not entirely clear. Only when the noise is very low (a standard deviation of 0.5 on a scale of 0-255 or 128 on a scale of 0-65,535) was the quantization noise remaining in a stack of 50 frames becoming the major component of the noise in 8-bit stacks. Until this point was reached, the total error in equal length stacks was virtually identical, regardless of bit depth.

CMOS cameras used for planetary/solar imaging are generally 12 bit; the camera output is just multiplied by 16 to give 16 bits. Stacking just 100 8-bit images which contain noise (as astro images invariably do) will give you quantization noise equivalent to 12.1 bits. As we normally stack more than 100 frames, we've already improved on the 12-bit quantization error of the 16-bit recording.
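The effect is easy to demonstrate: averaging many noisy 8-bit frames recovers a signal level that falls between integer codes, which a single quantized frame cannot represent. A small NumPy simulation (assumed signal and noise levels, my own sketch, not from Stark's article):

```python
import numpy as np

rng = np.random.default_rng(0)
true_level = 100.37          # a signal that falls between 8-bit codes
n_frames, noise_sigma = 200, 2.0

# Each frame: signal + Gaussian read noise, quantized to 8-bit integers.
frames = np.clip(
    np.round(true_level + rng.normal(0.0, noise_sigma, n_frames)), 0, 255
)

# The noise dithers the quantization, so the stack mean recovers
# sub-LSB precision that no single 8-bit frame can hold.
stacked = frames.mean()
single_error = abs(round(true_level) - true_level)  # best one frame can do
stack_error = abs(stacked - true_level)
```

With a couple hundred frames the stack's error lands well under the half-code quantization limit of a single frame.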

Also recording in 8-bit generally doubles your frame rate if recording full frame so another bonus. :smile:

Alan

14 hours ago, symmetal said:

Hi MalVeauX.

I've found recording video in 16 bit makes no difference to recording in 8 bit. I was wondering why, and found this article by Craig Stark The effect of stacking on bit depth. As long as there is noise in each frame, stacking not only reduces noise but increases bit depth. He concludes

CMOS cameras used for planetary/solar imaging are generally 12 bit. The camera output is just multiplied by 16 to give 16 bits. Stacking just 100 8-bit images which contain noise, (as astro images invariably have) will give you quantization noise equivalent to 12.1 bits. As we normally stack more than 100 frames we've already improved on the 12 bit quantization error on the 16 bit recording.

Also recording in 8-bit generally doubles your frame rate if recording full frame so another bonus. :smile:

Alan

This is an excellent paper, thanks! It helps loads when trying to figure out whether something matters here, especially regarding how large the 16-bit data is and the FPS impact (if applicable) that comes with it. Conventional wisdom has been that 16 bit helps with faint prominences, and anecdotal experience seems to lean that way too, but that may have been just n=1 or n=2 and not really better due to bit depth at all. Very interesting!

Very best,



