Posts posted by vlaiv

  1. Or this - same image:

    [attached image: reprocessed version]

    The question is of course, like alacant asked already - what do you want from your image?

    There is a lot in your image if you know how to process it.

    I'm going to show you, but in order to get good images containing that level of detail - you will need to change some of your workflow.

    1) Flat calibration - that will be very important, because you have a gradient in your images that is really hard to remove. Flats will take care of that.

  2. 3 minutes ago, Adreneline said:

    I used flats created immediately after imaging and corresponding dark-flats with darks and a BPM - all in APP.

    Not sure what BPM is?

    3 minutes ago, Adreneline said:

    Not sure I understand this - I combined Ha and OIII using PixelMath with Ha assigned to R and OIII assigned to G and B.

    For the first image above I literally added synthetic luminance to the HOO image - no scale factors or anything - which resulted in it looking washed out.

    Ah ok, don't just add luminance - that is not how luminance should work.

    If you have an HOO image (and you see colors and all) and you have the luminance stretched - here is a simple luminance transfer method (you will need to figure out how to tell PI to do pixel math - I'm just going to tell you the math):

    final_r = lum * R / max(R,G,B)

    final_g = lum * G / max(R,G,B)

    final_b = lum * B / max(R,G,B)

    This simply means: for each pixel of the output image, take the R component of the HOO image, divide it by the maximum of the R, G and B values of that pixel, and multiply by the lum value of the corresponding pixel of the luminance.

    That will give you good saturation and good brightness. This is the so-called RGB ratio color/luminance combination method.
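
    If it helps, here is the same math as a minimal numpy sketch (the function name and the assumption that both images are floats scaled to 0-1 are mine - PI's pixel math will of course look different):

    ```python
    import numpy as np

    def rgb_ratio_transfer(lum, rgb):
        """lum: 2-D array, stretched luminance in [0, 1].
        rgb: (H, W, 3) array, the stretched HOO color image in [0, 1]."""
        # Per-pixel max of R, G, B; tiny floor avoids division by zero.
        m = np.maximum(rgb.max(axis=2), 1e-12)
        # final_c = lum * c / max(R, G, B), applied to all three channels at once
        return np.clip(rgb * (lum / m)[..., None], 0.0, 1.0)
    ```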

  3. What exactly do you find to be a problem with that image?

    The artifact on the right is due to stacking - select "intersection" mode in DSS to remove it, or crop it out.

    The black corners are a consequence of vignetting - use flat frames to correct for that.

    There is coma, but that is to be expected from a Newtonian scope without a coma corrector.

    I really don't see anything else that might be "wrong" with this image.

  4. 8 minutes ago, Adreneline said:

    I will have another go using Starnet++ before I stretch the OIII but that means using the PC rather than the MacBook (I just cannot fathom Starnet++ in Terminal on the MacBook).

    As ever I would value your comments - good and bad!

    If you try with StarNet++, here is a workflow that I found useful:

    Same as above, but:

    Once you have stretched Ha to act as a luminance layer, convert it to 16-bit and do star removal on it (as far as I remember, StarNet++ only works with 16-bit images). Next, blend the starless and original images with the starless layer set to subtract. This will create a "stars only" version of the Ha luminance - save that for later.

    Make starless versions of Ha and OIII and blend those for color (again, remove stars on the stretched 16-bit versions). Apply the starless luminance to that, and at the end layer the stars-only version (which should contain only tight, white stars) over the final image.

    This will result in tight stars from Ha and white stars without funny hues.
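
    Here is a rough numpy sketch of that subtract/recombine idea, just to make the blending explicit (function names are mine; in practice you do this with layer blend modes in Gimp or pixel math in PI, on the stretched 16-bit data):

    ```python
    import numpy as np

    def stars_only(original, starless):
        """'Subtract' blend: original minus its starless version leaves
        just the stars. Both inputs stretched, scaled to [0, 1]."""
        return np.clip(original - starless, 0.0, 1.0)

    def layer_stars(starless_result, stars):
        """Additive blend of the stars-only layer over the finished
        starless image - keeps the stars white and tight."""
        return np.clip(starless_result + stars, 0.0, 1.0)
    ```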

    The image above is rather good. Yes, saturation is lacking - what is your color combine step like? Could you paste it? PI with DBE removed the nasty gradients, so there is potential to make this image better than my version, which has that funny OIII gradient.

    How did you calibrate your images? It looks like the flats (if you used them) are slightly over-correcting the Ha image. Could it be that you did not take flat darks, or used bias frames as flat darks?

    First step - crop, bin and background & gradient removal - I do it in ImageJ. Here is the result for OIII:

    [screenshot: OIII after crop, bin and background removal]

    As you can see, even with x4 binning (which improves SNR by x4), the signal is barely visible above the noise level.
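
    In case you want to replicate that binning step outside ImageJ, here is a minimal sketch of software average binning (my own toy function - averaging 4x4 = 16 pixels cuts the noise by sqrt(16) = 4, hence the x4 SNR improvement):

    ```python
    import numpy as np

    def bin_average(img, n=4):
        """Software-bin a 2-D image n x n by averaging.
        Averaging n*n pixels leaves the mean signal unchanged but cuts
        the noise by sqrt(n*n), so SNR improves by a factor of n."""
        h = (img.shape[0] // n) * n
        w = (img.shape[1] // n) * n
        img = img[:h, :w]                       # crop to a multiple of n
        return img.reshape(h // n, n, w // n, n).mean(axis=(1, 3))
    ```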

    Ha is of course much better:

    [screenshot: Ha after crop, bin and background removal]

    Now that we have them wiped, we can continue to make an image out of this. The next step is loading those two still-linear subs into Gimp.

    The first step is to create a copy of Ha. That will be our luminance. There is no point in trying to add OIII into a synthetic luminance since it has such poor SNR - we would just make the Ha sub worse.

    We proceed to make a nicely stretched black and white version of the image - stretch, apply noise reduction and so on, to get a nice looking image in b&w.

    [screenshot: stretched b&w Ha luminance]

    Now we stretch the other Ha copy - but far less aggressively. We also stretch the OIII sub - that one rather aggressively, because the SNR is poor, with a lot of noise control to compensate.

    OIII:

    [screenshot: stretched OIII]

    Don't worry about the uneven background (probably a cloud or something in the stack) - we are using Ha luminance and it should not show in the final image.

    [screenshot: gently stretched Ha for the color layer]

    Notice how subtle the Ha stretch is this time - this is because the Ha signal has good SNR and we don't want to drown the color with it.

    Next we do the HOO channel composition.

    [screenshot: initial HOO composition]

    This does not look good and it shows the effect you feared - uneven background. But this is only because I had to boost OIII like crazy, since the SNR is so poor - there is hardly anything there. However, using Ha as luminance will solve part of that problem.

    And the end result after some minor tweaks is:

    [final image after some minor tweaks]

    There is still a lot of green in the background - it is hard to remove completely, as doing so kills any OIII signal in the nebula as well.

    Another thing that would improve the image is the use of StarNet++ or another star removal technique, to keep the stars white instead of having a hue.

  6. 2 hours ago, Adreneline said:

    Here are my calibrated/stacked and registered (but not cropped) fit files for Ha and OIII.

    I had a look at these, and the OIII SNR is very poor. From the stacks' titles I'm guessing you only did 960 seconds of OIII? That is less than the Ha, and Ha is the stronger signal anyway.

    Let's see what I can pull from these - I'll have to bin the data to try to improve the SNR of the OIII sub.

  7. 4 minutes ago, Adreneline said:

    In the tests I've carried out so far I find it really hard to distinguish between pre or post stretch but that may be due to other inadequacies in my processing regime.

    If you wish, you can post 32-bit FITS files of the linear Ha and OIII channels - without any processing, just stacked - and I can do the color composition for you with the different steps shown.

    Mind you, it won't be in PI (ImageJ + Gimp), but hopefully you'll be able to replicate the steps in PI yourself.

  8. 34 minutes ago, Adreneline said:

    Thanks vlaiv.

    Light Vortex also advocate using LinearFit for broadband imaging but recommend combining before stretching - the opposite of the advice given for narrowband. Would you recommend dropping LF for broadband as well?

    I've not tried synthetic luminance before - I will give it a go and see how I get on.

    Many thanks.

    Adrian

    I don't use PI. I'm assuming that Linear Fit does what it says, but do bear in mind that I have already made a mistake with PI by assuming that a certain operation "does what it says" - or rather, I had a different understanding of what a command might do based on its title.

    I would not use Linear Fit for any channel processing. It is useful when you have the same data and want to equalize the signal - for example, a target shot under two different conditions. As such it should be part of the stacking routine, not a later step in channel combination.

  9. I would skip linear fit as it does not make much sense.

    I would also do the histogram stretch before channel mixing. In narrowband imaging, Ha is often the dominant signal component - as in your example above. It can be x4 or more stronger than the other components.

    You might think that linear fit would deal with that - but it won't always do it properly, because of the different distribution of signal (for example, if you have Ha signal where there is no OIII signal and vice versa, linear fit will just leave the ratio of the two the same - it won't scale Ha to be "compatible" with OIII).

    Once you have wiped the background - effectively set it to 0 where there is no signal (the average will be 0 but the noise will "oscillate" around 0) - then stretching will keep the background at zero, provided you are careful with the histogram and don't apply any offsets. This means the background will be "black" for the color mixing.
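
    Here is a toy numpy illustration of that wipe-then-stretch idea (a flat median background and a simple gamma stretch are gross simplifications of my own - real gradient removal is more involved - but it shows why an offset-free stretch keeps the background black):

    ```python
    import numpy as np

    def wipe_and_stretch(img, gamma=0.4):
        """Subtract a (flat) background estimate so it averages 0, then apply
        an offset-free stretch: 0 maps to 0, so the background stays black."""
        wiped = img - np.median(img)              # noise now oscillates around 0
        pos = np.clip(wiped / wiped.max(), 0.0, 1.0)
        return pos ** gamma                       # gamma stretch adds no offset
    ```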

    You also might want to create a synthetic luminance layer for your image and do a color transfer onto it once you have done your color mixing.

  10. 17 minutes ago, sploo said:

    Yea, that gives me some hope too. My 12" scope should be about 1.8x "slower", but that still means that 2s exposures may be acceptable.

    Required total integration time is related to aperture, provided that you match resolution. The actual SNR formula is rather complicated. You can get the same SNR with a smaller aperture as with a larger one if you change the sampling rate - the arc seconds per pixel.

    For example - you will get the same SNR from 100mm at 2"/px as 200mm at 1"/px in the same time.

    You can still use 1s exposures - it just means you need, say, 3600 of them versus 2000 with the smaller scope (those are just example numbers; the actual count will depend on a host of factors - QE of the sensor, your sky background value, sampling rate, etc.).
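
    Quick back-of-envelope check of the 100mm vs 200mm claim above (toy numbers of my own - photons collected per pixel scale with aperture area times the sky area each pixel covers):

    ```python
    # Photons per pixel ~ aperture area x sky area covered by one pixel.
    rate_100mm = 100.0 ** 2 * 2.0 ** 2     # 100mm aperture at 2"/px
    rate_200mm = 200.0 ** 2 * 1.0 ** 2     # 200mm aperture at 1"/px
    print(rate_100mm, rate_200mm)          # equal -> same SNR in the same time
    ```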

  11. 23 minutes ago, sploo said:

    I experienced that (visually) for the first time last night (the first night since owning the 300P that I've seen clear skies). Being able to see Orion through the eyepiece is a pretty incredible experience. Unsurprisingly a 305mm aperture is collecting rather more light than the ~70mm of my longest camera lens.

    Interestingly, Orion appeared as a dark cloud (probably with a greenish hue) to the eye, but shakily holding my phone against the eyepiece and taking a snap resulted in the more familiar pink-with-blue centre I've seen when using the DSLR.

    I assume with a really wide aperture scope that colour might be visible visually?

    That depends on several factors. Surface brightness is the most important one, I think. It also depends on whether you are dark adapted or not: the more dark adapted you are, the more you lose the ability to see color.

    A prime candidate for seeing color in a telescope is M57, for example, as it has high surface brightness. Don't magnify your target too much, as you will spread the light over a large area and the photosensitive cells in your eye will receive a small number of photons - and fail to trigger a response. Don't get fully dark adapted (I know this sounds counter-intuitive, but if you want to see color in planets and DSOs, you don't want to lose your color sensitivity by going into full dark adaptation).

    And yes, it takes a rather large scope to see some color in DSOs.

    Btw - you won't see the red/blue color in the Orion nebula like in images - you will start by sensing a sort of greenish/teal color. This is because the eye is most sensitive in the green part of the spectrum - so OIII and, to some extent, Hb wavelengths. Eyes are fairly insensitive to the Ha wavelength.

    [chart: photopic vs scotopic eye sensitivity curves]

    This shows the two different vision regimes and their sensitivity to light - note that scotopic vision almost goes away for wavelengths above 600nm. Once you switch to night vision, you don't really see Ha wavelengths - photopic vision is responsible for that. This means that if you want to see any sign of red color - don't get dark adapted. Similarly, in low light conditions, when you start switching to scotopic vision, you are much more sensitive to the green/blue part at around 500nm - this is why you get a hint of a greenish hue in the Orion nebula.

    Best for viewing color/nebulosity would probably be mesopic vision - the crossover where the eye uses both cones and rods, but neither at its best:

    [chart: mesopic vision range between scotopic and photopic]

  12. 7 minutes ago, dan_adi said:

    I have seen very short exposures on bright objects like the planets. I haven’t seen good pics of faint nebula and galaxies with such short exposures. Vlaiv can explain better than me about signal to noise ratio and optimum exposure length for faint fuzzies. I will look for dso images with very short exposures as you mention and see how they compare with “classic” long exposure photography. Thanks for the info!

    Have a look here:

    https://www.astrokraai.nl/viewimages.php?t=y&category=7

    As far as I can tell, most of the images were taken with a 16" dob and 1s exposures. The detail is incredible for such short exposures and so few subs (most are less than half an hour of total imaging).

    The thing with short vs long exposures is just read noise. If we had a camera with no read noise, exposure length would not matter at all (as long as we can get stars to align the subs with).

    In fact, as long as read noise is not the dominant noise component, there is very little difference between short and long subs for equal total exposure. A large scope will suck in photons, making both the target and the sky background brighter in a shorter amount of time (this really depends on the sampling rate in a particular case, but I just want to make a point): the higher the values for target and background, the higher the shot noise from the target and the LP noise, and the less important read noise becomes - so there is less difference between shorter and longer subs for the same total imaging time.
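
    If you want to play with the numbers, here is a minimal stacking SNR model (idealized - shot noise and read noise only, and all the rates are made-up placeholders of mine):

    ```python
    import math

    def stack_snr(total_s, sub_s, target, sky, read_noise):
        """SNR of a stack: target/sky in e-/s per pixel, read_noise in e- per sub."""
        n_subs = total_s / sub_s
        signal = target * total_s
        noise = math.sqrt((target + sky) * total_s        # shot noise (target + sky)
                          + n_subs * read_noise ** 2)     # read noise, once per sub
        return signal / noise

    # One hour total, bright sky background - sub length barely matters:
    print(stack_snr(3600, 1, 0.5, 20.0, 1.5))      # 1s subs   -> ~6.3
    print(stack_snr(3600, 300, 0.5, 20.0, 1.5))    # 300s subs -> ~6.6
    ```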

    It looks like RA is acting pretty much the same. There is a small difference between the two - the first is 0.73", the second 0.86" - but the second is probably closer to the truth, since before the meridian flip you had a saturated guide star, which lowers centroid precision.

    But DEC is acting funny post-flip - I think you have DEC backlash. I also think you have an imbalance in DEC that was working against the polar alignment error and hence masking the backlash (similar to how you should make your mount east-heavy to minimize RA backlash), but once you switched sides of the meridian, the imbalance started working with the PA error, since the orientation changed, and the DEC backlash started to show.

  14. 34 minutes ago, michael.h.f.wilkinson said:

    I have used the ASI178MM for DSO imaging with decent results, using my APM 80mm F/6 on an EQ3-2 mount, and likewise with the bigger ASI183MC. Using even a cheap EQ mount you do not have to limit yourself to sub-second exposure times, and that makes life a lot easier

    That is just regular DSO imaging with shorter exposures. You won't achieve any additional sharpness over regular DSO imaging, except for sidestepping some issues with the mount. Seeing-related blur will be the same.

    Lucky imaging differs in its goal - to remove as much seeing-induced blur as possible.

  15. Depending on your budget - there might be something else that is very interesting / tempting to try out with C6.

    EEVA with C6 and ASI178 and this:

    https://www.teleskop-express.de/shop/product_info.php/info/p11425_Starizona-Night-Owl-2--0-4x-Focal-Reducer---Corrector-for-SC-Telescopes.html

    This will turn your C6 into an F/4 scope. It only works with smaller sensors, but the ASI178 is rather small, so there should not be any problems.

    The C6 with that reducer will give you 600mm of focal length, and with the ASI178, if you bin x2, you get 1.65"/px. That is a very nice resolution to be working at for EEVA.
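
    The arithmetic behind that figure, as a sketch (206.265 converts pixel size in um over focal length in mm into arc seconds; the 2.4um pixel size of the ASI178 is from its spec sheet):

    ```python
    def arcsec_per_px(pixel_um, focal_mm):
        # pixel scale in arc seconds per pixel
        return 206.265 * pixel_um / focal_mm

    focal = 1500.0 * 0.4                     # C6 (1500mm) with the 0.4x reducer
    print(arcsec_per_px(2.4 * 2, focal))     # 2.4um pixels binned x2 -> ~1.65
    ```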

  16. 4 minutes ago, sploo said:

    In 35mm (full frame) DSLR photography terms, the "500 rule" is often used; that is, an exposure time no longer than 500s divided by your focal length. I.e. for very wideangle shots (16mm lens) you can expose for 500/16=31 seconds before star trailing becomes an issue. A 1500mm telescope used for prime focus would, I assume, only allow 500/1500=1/3s exposures.

    If using a Canon APS-C body then it's 1.6x shorter (as you get a field of view on the crop sensor that's approximately the same as a lens with a 1.6x longer focal length).

    No, that 500 rule is a gross approximation. If you want to do a proper calculation for this case, use the sampling rate and the sidereal rate and see what you get from the two.

    Let's say you are using a modern DSLR sensor with a pixel size of about 4um, at 1500mm focal length. This gives you 0.55"/px - each pixel is 0.55 arc seconds "long". The sidereal rate is about 15"/s, which means that in a single second a star will streak across 30 or so pixels. If you have a FWHM of about 3" - that is roughly 6 pixels - and star elongation starts to show at about a 20% larger major radius, then the elongation can be at most about 1.2px (6 pixels * 20% = 1.2px). That is 0.55 * 1.2 = 0.66 arc seconds.

    With a sidereal rate of 15"/s, this gives you a 44ms exposure to stay under 0.66 arc seconds of elongation.

    As you see, this figure is about x7.5 less than you estimated using the 500 rule.
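
    Here is the same calculation as a sketch, so you can plug in your own numbers (the 3" FWHM and 20% elongation tolerance are the same assumptions as above):

    ```python
    # Max exposure before star trailing shows, from sampling rate + sidereal rate.
    SIDEREAL = 15.0                                   # arc seconds per second

    def max_exposure_s(pixel_um, focal_mm, fwhm_arcsec=3.0, tolerance=0.2):
        scale = 206.265 * pixel_um / focal_mm         # "/px, here ~0.55
        fwhm_px = fwhm_arcsec / scale                 # FWHM in pixels, ~5.5 here
        max_trail = fwhm_px * tolerance * scale       # allowed elongation, ~0.6"
        return max_trail / SIDEREAL                   # seconds

    print(max_exposure_s(4.0, 1500.0))                # ~0.04 s - the same ballpark
                                                      # as the 44 ms above
    ```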

    The point of lucky imaging is that you discard some of your subs and only keep the lucky ones that are sharp enough.

    Otherwise you can consider lucky DSO imaging to be just a regular form of DSO imaging, so you should treat the subs like regular DSO images - calibrate them properly and so on. One issue is that you will have a lot of data; another is that you don't limit yourself to a sampling rate adequate for long exposures, but rather aim for the critical sampling rate of your telescope.

    In the case of the C6 and ASI178, you just use the native pixel size, which is close to the critical sampling rate (the f/ratio for critical sampling with 2.4um pixels is about F/11, and the C6's F/10 is close enough).

    If you use DSS (which I would discourage in this case), simply stack subs that have star FWHM below a certain threshold. Otherwise use AS!2/3 and do a "scientific" stack to produce a 32-bit image. Then continue with a planetary-type workflow and apply wavelet sharpening in your processing. This will give you the best sharpness.

    Don't expect to capture much, but what you do capture should be sharp - much sharper than you can get with regular long exposure imaging (if you do lucky imaging properly). For this type of imaging to work well and provide "deep" images with good SNR, you really need a very large telescope that gathers a lot of light - 16" or above. A 6" is just going to give you a "taste" of it.

     

  18. It is for use with 2" eyepieces.

    In order to reach focus, you must place the eyepiece at a certain distance from the primary mirror. This is what the focuser is for, but it might not have enough travel, depending on the eyepiece design. This extension lets you use 2" eyepieces and puts them in a good position for you to reach focus with your focuser.

    You can put a 2" eyepiece directly in the focuser, but odds are you will need to rack the focuser out a long way, and you might not even reach focus in that configuration.


  19. In my view, software is worth paying for in two cases:

    1. It earns you money

    2. It saves you time in which you earn money

    In other cases it is really OK to use free software (it is also OK to use free software in the first two cases :D).

    For most of us, AP is a hobby, and as such it does not qualify under point 1.

    Sometimes it qualifies under point 2 - if you end up frustrated by your hobby rather than happy, it will impact your ability to work the next day :D - so sometimes it is OK to spend money on a piece of software if a free alternative is not available, or would take too much time to learn, or whatever.
