Posts posted by vlaiv

  1. 1 hour ago, Xiga said:

    Sorry Vlaiv, my fault for not being clear. I don't mean the sensor, what I meant was, does it take a certain amount of aperture in order to be able to do tight crops? I know that with my 80ED, tight crops don't work well. I think it's mostly just down to the FL, but maybe it's a combination of both FL and aperture? i.e 'Aperture at Resolution'.

    Ah, ok, I think I get it. Follow two simple rules and you will be able to crop any sensor "perfectly".

    Rule 1 - don't crop smaller than presentation size.

    If you are going to view the image here on SGL or anywhere else where you get a sense of the displayed size of the image (not the actual size of the image, but the size at which it is displayed) - don't crop smaller than that.

    Many viewing utilities have "fit to screen" or "fit to viewing area" enabled by default - if your image is larger than the viewing area, it will be sampled down to the pixel count of the viewing area. That is fine, you don't lose anything by sampling down for display, but you don't want the opposite - a small image being sampled up to fit the viewing area. When sampling up, the "missing pixels" have to be made up by the upsampling algorithm, and detail can't be made up, so things look blurry when upsampled - avoid having your images upsampled if you can.

    In most circumstances you want your crops not to end up below common computer screen widths - like 1280 or 1600 (or even 1920 nowadays) pixels - unless you are sure the image won't be upsampled for viewing.

    Rule 2 - make sure you have a proper sampling rate for conditions.

    This one is tricky because you don't know the proper sampling rate until you have finished capturing the data. Luckily you can get to the proper sampling rate algorithmically. The proper sampling rate is about FWHM/1.6 in arc seconds. Let's say you have your stack and you measure its FWHM to be 3.5" on average. What is the proper sampling rate for this? It is ~2.2"/px (3.5 / 1.6 = 2.1875). This is your target resolution. Imagine that you sampled at 1"/px - what then? First bin your data x2 to get to 2"/px and then downsample it to 2.2"/px (or leave it at 2"/px - if you are fairly close to the proper sampling rate you are fine). In any case, use regular binning, fractional binning or downsampling to get to the proper sampling rate, as in the small sketch below.
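
    To make Rule 2 concrete, here is a tiny sketch of the bookkeeping (my own illustrative helper names, not any particular program's functions):

        import math

        def target_rate(fwhm_arcsec):
            # proper sampling rate is roughly FWHM / 1.6, in "/px
            return fwhm_arcsec / 1.6

        def bin_factor(current_rate, fwhm_arcsec):
            # largest integer bin that keeps you at or below the target rate;
            # any remaining gap can be closed with fractional binning / downsampling
            return max(1, math.floor(target_rate(fwhm_arcsec) / current_rate))

        print(target_rate(3.5))      # ~2.19 "/px
        print(bin_factor(1.0, 3.5))  # 2 -> bin x2 to 2"/px, then optionally resample to ~2.2"/px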

    If you are already sampling coarser than the proper sampling rate - you are fine, no need to do anything special (and certainly don't upsample). Someone will say - you can do drizzle, but I don't think drizzle works in amateur setups, and in fact I would like to do an experiment with whoever is willing - I'll provide undersampled data, or we can use any existing data that can be made undersampled by binning - and then compare two approaches: drizzle and upsampled stacking. I claim that upsampled stacking will provide the same level of resolution but much better SNR than drizzle - but this is digressing.

    There you go - two simple rules.

     

    • Thanks 1
  2. 7 hours ago, jager945 said:

    Apologies for any confusion Adrian! The two datasets were used by StarTools to automatically:

    • Create a synthetic luminance master (e.g. making the proper blend of 920s of O-III and 2880s of Ha). You just tell ST the exposure times and it figures it out, but in this instance it would have calculated a signal precision of 1.77:1 (Ha:O-III), derived from sqrt(2880/920) for Ha vs sqrt(920/920). So I believe that would have yielded a 1/(1+1.77) * 100% = ~36% OIII vs 1.77/(1+1.77) * 100% = ~64% Ha blend.
    • Create a synthetic chrominance master at the same time (e.g. mapping Ha to red, O-III to green and also blue)

    In this particular case, this approach to creating synthetic luminance is quite wrong. You can try it out - do a 64% / 36% mix and then, for example, a 90% / 10% mix and look at the resulting SNR (take a piece of nebula and measure average signal after removing background / gradients and take pure background and measure standard deviation, or just take a rather uniform piece of nebula and measure both average signal and stddev).

    The above approach actually works only if you take the same signal under the same conditions and want to stack two different exposure lengths. In the general case you will not have that, even when stacking data from the same filter (conditions change over the course of the night - different LP, different extinction, etc ...).

    The best / correct way to create synthetic luminance would be to stack both channels (or three channels in the case of RGB) with both regular stacking and stddev stacking (whichever stacking method one might be using - they can also use the appropriate standard deviation calculation for the set of pixel values that go into the stack, weights adjusted, etc ...). You adjust the stddev to account for the SNR improvement from the number of stacked frames (in simple average stacking you divide by the square root of the number of stacked frames), and you wipe the background from the regularly stacked image. This will give you signal and noise for each pixel in each channel.

    Next thing you do is solve the following:

    ( (S1+S2*a) / sqrt(N1^2+(N2*a)^2) )' = 0

    for a (or, in case you are mixing more channels, a1, a2, ... - one less than the number of channels) for each pixel, and apply the per-pixel weights when mixing the two images.

    (The above equation says that the SNR of the result needs to be maximized for the chosen coefficient a, so we set the derivative of the total SNR with respect to a to zero.)
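
    Solving that derivative for the two-channel case gives a = S2*N1^2 / (S1*N2^2) per pixel. A minimal numpy sketch of the idea (illustrative function name only; S1, S2 are background-wiped signal stacks and N1, N2 the per-pixel noise estimates described above):

        import numpy as np

        def optimal_mix(S1, N1, S2, N2, eps=1e-12):
            """Combine two channels as S1 + a*S2, choosing a per pixel so that
            SNR = (S1 + a*S2) / sqrt(N1^2 + (a*N2)^2) is maximized."""
            a = (S2 * N1 ** 2) / (S1 * N2 ** 2 + eps)  # from setting d(SNR)/da = 0
            a = np.clip(a, 0, None)                    # negative weights make no physical sense here
            lum = S1 + a * S2
            snr = lum / np.sqrt(N1 ** 2 + (a * N2) ** 2 + eps)
            return lum, snr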

     

    • Like 1
  3. 1 hour ago, Adreneline said:

    I am currently producing starless versions in the hope of creating a synthetic luminance from both the Ha and OIII. Is there a correct way to produce the synthetic luminance image? Is it a simple addition process?

    Thanks.

    Adrian

    No, there is no simple process for doing that. In fact in your case, since you have very low SNR in OIII, using Ha as luminance is the correct approach.

    In general, when you want to create artificial luminance from multiple channels, you either do weighted adding (for example for RGB, I would use something like 1/2 G and 1/4 each of R and B) or you do some fancy SNR based combination - this requires that you calculate SNR for each pixel in each channel, which is done by stacking regularly and also stacking to standard deviation, and then using the two to calculate the SNR of each pixel.
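
    As a trivial illustration of the weighted-adding option (just the example weights from above):

        def weighted_luminance(R, G, B):
            # example weights only - tune them to the actual SNR of your channels
            return 0.25 * R + 0.5 * G + 0.25 * B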

    • Thanks 1
  4. I just realized that Irfanview does not have histogram stretch "out of the box" (there is an option, but it is not readily available next to the viewing window).

    An alternative would be Fitswork4 - not purely an image viewing program, it has a lot of options - a bit like FITS Liberator, but maybe a bit more user friendly - and it has a great feature: it can read DSLR raw files and save them as FITS.

    • Like 1
  5. Or this - same image:

    image.png.6e5d4e3c8b11bcd366ef50616e656c56.png

    The question is of course, like alacant asked already - what do you want from your image?

    There is a lot in your image if you know how to process it.

    I'm going to show you, but in order to get good images containing that level of detail - you will need to change some of your workflow.

    1) Flat calibration - that will be very important because you have a gradient in your images that is really hard to remove. Flats will take care of that.

    • Like 2
  6. 3 minutes ago, Adreneline said:

    I used flats created immediately after imaging and corresponding dark-flats with darks and a BPM - all in APP.

    Not sure what BPM is?

    3 minutes ago, Adreneline said:

    Not sure I understand this - I combined Ha and OIII using PixelMath with Ha assigned to R and OIII assigned to G and B.

    For the first image above I literally added synthetic luminance to the HOO image - no scale factors or anything - which resulted in it looking washed out.

    Ah ok, don't just add luminance - that is not how luminance should work.

    If you have the HOO image (where you can see the colors and all) and you have the luminance stretched, here is a simple luminance transfer method (you will need to figure out how to tell PI to do the pixel math - I'm just going to give you the math):

    final_r = lum * R / max(R,G,B)

    final_g = lum * G / max(R,G,B)

    final_b = lum * B / max(R,G,B)

    This simply means: for each pixel of the output image, take the R component of the HOO image, divide it by the maximum of R, G and B for that pixel, and multiply by the lum value of the corresponding pixel of the luminance.

    That will give you good saturation and good brightness. It is the so-called RGB ratio color / luminance combination method.
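
    The same math as a small numpy sketch, for anyone who wants to prototype it outside of PixelMath (hypothetical function name; lum, R, G, B are assumed to be stretched float images in the 0-1 range):

        import numpy as np

        def rgb_ratio_transfer(lum, R, G, B, eps=1e-12):
            # scale each pixel's RGB so that its maximum channel equals the luminance value
            m = np.maximum(np.maximum(R, G), B) + eps
            return lum * R / m, lum * G / m, lum * B / m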

    • Like 1
  7. What exactly do you find to be a problem with that image?

    The artifact on the right is due to stacking - select "intersection" mode in DSS to remove it, or crop it out.

    The black corners are a consequence of vignetting - use flat frames to correct for that.

    There is coma, but that is to be expected from a Newtonian scope without a coma corrector.

    I really don't see anything else that might be "wrong" with this image.

    • Like 1
  8. 8 minutes ago, Adreneline said:

    I will have another go using Starnet++ before I stretch the OIII but that means using the PC rather than the MacBook (I just cannot fathom Starnet++ in Terminal on the MacBook).

    As ever I would value your comments - good and bad!

    If you try with StarNet++, here is a workflow that I found useful:

    Same as above, but:

    Once you have stretched Ha to act as a luminance layer, turn it into 16 bit and do star removal on it (as far as I remember StarNet++ only works with 16 bit images). Next do a blend of the starless and the original image with the starless layer set to subtract. This should create a "stars only" version of the Ha luminance - save that for later.

    Do starless versions of Ha and OIII and blend those for color (again, remove stars on the stretched 16 bit versions). Apply the starless luminance to that, and at the end layer the stars-only version (should be tight, white stars) over the final image.

    This will result in tight stars from Ha, and white stars without funny hues.
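
    A rough numpy sketch of those two layer operations (the subtract step is from the workflow above; using a screen blend to put the stars back is my assumption - a plain additive blend works too):

        import numpy as np

        def extract_stars(stretched, starless):
            # "stars only" layer: stretched image minus its starless version
            return np.clip(stretched - starless, 0.0, 1.0)

        def screen_blend(base, stars):
            # recomposite tight, white stars over the processed starless result
            return 1.0 - (1.0 - base) * (1.0 - stars)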

    The above image is rather good. Yes, saturation is lacking - what is your color combine step like? Could you paste that one? PI with DBE removed the nasty gradients, so there is potential to make this image better than my version, which has that funny OIII gradient.

    How did you calibrate your images? It looks like the flats (if you used them) are slightly over-correcting the Ha image. Could it be that you did not take flat darks, or used bias frames as flat darks?

  9. First step - crop, bin and background & gradient removal - I do it in ImageJ; here is the result for OIII:

    Screenshot_1.jpg.eafce3452cd94f1b2c890ef903fb6e3e.jpg

    As you can see, even with x4 binning (which improves SNR by x4) the signal is barely visible above the noise level.
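
    For reference, software binning is just averaging pixel blocks - a minimal sketch (plain numpy, not ImageJ's implementation):

        import numpy as np

        def bin_image(img, factor=4):
            # average factor x factor blocks; for uncorrelated noise this improves SNR by `factor`
            h, w = img.shape
            h, w = h - h % factor, w - w % factor
            return img[:h, :w].reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))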

    Ha is of course much better:

    Screenshot_2.jpg.2e98fe2764e23eeeca4f9b4b1474c0c0.jpg

    Now that we have them wiped, we can continue to make an image out of this. The next step is loading those two still-linear stacks into Gimp.

    The first step there is to create a copy of Ha. That will be our luminance. There is no point in trying to add OIII into the synthetic luminance since it has such poor SNR - we would just make the Ha data worse.

    We proceed to make a nicely stretched black and white version of the image - stretch, noise reduction and everything needed to get a nice looking image in b&w.

    Screenshot_3.jpg.73d663af22d73cd6adb8910dca356f05.jpg

    Now we do another stretch, of the other Ha copy, but far less aggressively. We also stretch the OIII. We will stretch that one rather aggressively because the SNR is poor, and we will use a lot of noise control because of it.

    OIII:

    Screenshot_4.jpg.0dcbc7d1997c3395e1ce926b5d1e126b.jpg

    Don't worry about the uneven background (probably a cloud or something in the stack) - we are using Ha luminance and it should not show in the final image.

    Screenshot_6.jpg.3a302664ad79a2a223d5c67c91ba5a71.jpg

    Notice how subtle the Ha stretch is this time - this is because the Ha signal has good SNR and we don't want it to drown out the color.

    Next we do the HOO composing of the channels.

    Screenshot_5.jpg.aedc0b4a7de763eaeff994f5b0da0a12.jpg

    This does not look good and it shows the effect you feared - uneven background. But this is only because I had to boost OIII like crazy, since its SNR is so poor - there is hardly anything there. However, using Ha as luminance is going to solve part of that problem.

    And the end result after some minor tweaks is:

    GIMP-2_10.jpg.c634a4ba07c4ae2cc3eb67c45c3fa9f0.jpg

    There is still a lot of green in the background - it is hard to remove it completely, as doing so kills any OIII signal in the nebula as well.

    Another thing that would improve the image would be the use of StarNet++ or another star removal technique, to keep stars white instead of having them take on a hue.

    • Like 3
    • Thanks 1
  10. 2 hours ago, Adreneline said:

    Here are my calibrated/stacked and registered (but not cropped) fit files for Ha and OIII.

    I had a look at these, and the OIII SNR is very poor. From the stacks' titles I'm guessing you only did 960 seconds of OIII? That is less than the Ha, and Ha is the stronger signal anyway.

    Let's see what I can pull from these - I'll have to bin the data to try to improve the SNR of the OIII stack.

  11. 4 minutes ago, Adreneline said:

    In the tests I've carried out so far I find it really hard to distinguish between pre or post stretch but that may be due to other inadequacies in my processing regime.

    If you wish, you can post 32 bit FITS of the linear Ha and OIII channels - without any processing done, just stacked - and I can do the color composition for you with the different steps shown.

    Mind you, it won't be in PI (ImageJ + Gimp instead), but hopefully you'll be able to replicate the steps in PI yourself.

  12. 34 minutes ago, Adreneline said:

    Thanks vlaiv.

    Light Vortex also advocate using LinearFit for broadband imaging but recommend combining before stretching - the opposite of the advice given for narrowband. Would you recommend dropping LF for broadband as well?

    I've not tried synthetic luminance before - I will give it a go and see how I get on.

    Many thanks.

    Adrian

    I don't use PI. Now, I'm assuming that Linear Fit does what it says, but do bear in mind that I have already made the mistake with PI of assuming that a certain operation "does what it says" - or rather, I had a different understanding of what a command might do based on its title.

    I would not use Linear Fit for any channel processing. It is useful when you have the same data and you want to equalize the signal - for example the same target shot under two different conditions. As such it should be part of the stacking routine and not come later, at channel combination.

  13. I would skip linear fit as it does not make much sense.

    I would also do the histogram stretch before channel mixing. In narrowband imaging, Ha is often a very dominant signal component - like in your example above. It can be as much as x4 or more stronger than the other components.

    You might think that linear fit would deal with that - but it won't always do it properly, because of the different distribution of the signal (for example, if you have Ha signal where there is no OIII signal and vice versa, linear fit will just leave the same ratio of the two - it won't scale Ha to be "compatible" with OIII).

    Once you have wiped the background - effectively set it to 0 where there is no signal (the average will be 0 but the noise will "oscillate" around 0) - then stretching will keep the background at zero, provided you are careful with the histogram and don't apply any offsets. This means that the background will be "black" for the color mixing.

    You also might want to create a synthetic luminance layer for your image and transfer the color onto it once you have done your color mixing.

    • Thanks 1
  14. 17 minutes ago, sploo said:

    Yea, that gives me some hope too. My 12" scope should be about 1.8x "slower", but that still means that 2s exposures may be acceptable.

    Required total integration time scales with aperture, provided that you match resolution. The actual SNR formula is rather complicated. You can get the same SNR with a smaller aperture as with a larger one if you change the sampling rate - the arc seconds per pixel.

    For example - you will get the same SNR from 100mm at 2"/px as 200mm at 1"/px in the same time.

    You can still use 1s exposures - it just means you need more of them, say 3600 with the smaller scope vs 2000 with the larger (those are just example numbers; the actual count will depend on a host of factors - QE of the sensor, your sky background level, sampling rate, etc ...).

  15. 23 minutes ago, sploo said:

    I experienced that (visually) for the first time last night (the first night since owning the 300P that I've seen clear skies). Being able to see Orion through the eyepiece is a pretty incredible experience. Unsurprisingly a 305mm aperture is collecting rather more light than the ~70mm of my longest camera lens.

    Interestingly, Orion appeared as a dark cloud (probably with a greenish hue) to the eye, but shakily holding my phone against the eyepiece and taking a snap resulted in the more familiar pink-with-blue centre I've seen when using the DSLR.

    I assume with a really wide aperture scope that colour might be visible visually?

    That depends on several factors. Surface brightness is the most important one, I think. It also depends on whether you are dark adapted or not. The more dark adaptation you have, the more you lose the ability to see color.

    A prime candidate for seeing color in a telescope is M57, for example, as it has high surface brightness. Don't magnify your target too much, as you will spread the light over a large area and the photosensitive cells in your eye will receive too few photons and fail to trigger a response. Don't get fully dark adapted (I know this sounds counter-intuitive, but if you want to see color in planets and DSOs you don't want to lose your color sensitivity by going into full dark adaptation).

    And yes, it takes a rather large scope to see some color in DSOs.

    Btw - you won't see red / blue color in the Orion Nebula like in images - you will start by sensing a sort of greenish / teal color. This is because the eyes are most sensitive in the green part of the spectrum - so OIII and to some extent Hb wavelengths. Eyes are fairly insensitive to Ha wavelengths.

    image.png.5294268e2882b22f57efffc242a027e6.png

    This shows the two different regimes and their sensitivity to light - note that scotopic vision almost goes away for wavelengths above 600nm. Once you switch to night vision you don't really see Ha wavelengths - photopic vision is responsible for that. This means that if you want to see any sign of red color, don't get dark adapted. Similarly, in low light conditions, when you start switching to scotopic vision, you are much more sensitive to the green/blue part at around 500nm - this is why you get a hint of a greenish hue in the Orion Nebula.

    Best for viewing color / nebulosity would probably be mesopic vision - this is the crossover regime where the eye uses both cones and rods to see, but neither at its best:

    image.png.7d21f3bd18ab95586fe98cdc8eb7e967.png

    • Thanks 1
  16. 7 minutes ago, dan_adi said:

    I have seen very short exposures on bright objects like the planets. I haven’t seen good pics of faint nebula and galaxies with such short exposures. Vlaiv can explain better than me about signal to noise ratio and optimum exposure length for faint fuzzies. I will look for dso images with very short exposures as you mention and see how they compare with “classic” long exposure photography. Thanks for the info!

    Have a look here:

    https://www.astrokraai.nl/viewimages.php?t=y&category=7

    As far as I can tell, most of the images are taken with a 16" dob and 1s exposures. The detail is incredible for such short exposures and so few subs (most are less than half an hour of total imaging).

    The thing with short vs long exposures is just read noise. If we had a camera with no read noise, exposure length would not matter at all (as long as we could still detect stars to align the subs on).

    In fact, as long as read noise is not the dominant noise component, there is very little difference between short and long subs for the same total exposure time. A large scope will suck in photons, making both the target and the sky background brighter in a shorter amount of time (this really depends on the sampling rate for the particular case, but I just want to explain a certain point) - the higher the target and background values, the higher the shot noise from the target and the LP noise, the less important read noise becomes, and the smaller the difference between shorter and longer subs for the same total imaging time.
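
    A back-of-the-envelope sketch of that argument with made-up numbers (sky signal per sub well above the read noise squared), just to show how small the gap becomes:

        import math

        def stack_snr(signal_rate, sky_rate, read_noise, sub_length, total_time):
            """SNR of an average stack: rates in e-/s/pixel, read noise in e- per sub."""
            n_subs = total_time / sub_length
            signal = signal_rate * total_time
            noise = math.sqrt(signal + sky_rate * total_time + n_subs * read_noise ** 2)
            return signal / noise

        # 1 hour total, faint target (0.05 e-/s), sky 20 e-/s, 1.5 e- read noise:
        print(stack_snr(0.05, 20.0, 1.5, 1, 3600))    # 3600 x 1s subs  -> ~0.64
        print(stack_snr(0.05, 20.0, 1.5, 300, 3600))  # 12 x 300s subs  -> ~0.67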

  17. It looks like RA is behaving pretty much the same. There is a small difference between the two - the first one is 0.73", the second one 0.86" - but the second one is probably closer to the truth, since before the meridian flip you had a saturated guide star, which lowers centroid precision.

    But DEC is acting funny post flip - and I think you have DEC backlash. I also think that you have an imbalance in DEC that was working against the PA error and hence minimized the backlash (similar to how you make your mount east-heavy to minimize RA backlash), but once you switched sides of the meridian, the imbalance started working with the PA error since the orientation changed, and the DEC backlash started to show.

  18. 34 minutes ago, michael.h.f.wilkinson said:

    I have used the ASI178MM for DSO imaging with decent results, using my APM 80mm F/6 on an EQ3-2 mount, and likewise with the bigger ASI183MC. Using even a cheap EQ mount you do not have to limit yourself to sub-second exposure times, and that makes life a lot easier.

    That is just regular DSO imaging with shorter exposures. You won't achieve any additional sharpness over regular DSO imaging, apart from avoiding some mount issues. Seeing-related blur will be the same.

    Lucky imaging differs in its goal - to remove as much of the seeing-induced blur as possible.

    • Like 1
    • Thanks 1
  19. Depending on your budget, there might be something else that is very interesting / tempting to try out with the C6.

    EEVA with C6 and ASI178 and this:

    https://www.teleskop-express.de/shop/product_info.php/info/p11425_Starizona-Night-Owl-2--0-4x-Focal-Reducer---Corrector-for-SC-Telescopes.html

    This will turn your C6 into an F/4 scope. It only works with smaller sensors, but the ASI178 is rather small so it should not have any problems.

    The C6 with that reducer will give you 600mm of focal length, and with the ASI178, if you bin x2, you will get 1.65"/px. That is a very nice resolution to be working at for EEVA.
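
    A quick check of that figure (assuming the ASI178's 2.4um pixels and the C6's native 1500mm focal length):

        def sampling_rate(pixel_um, focal_mm):
            # arc seconds per pixel
            return 206.265 * pixel_um / focal_mm

        print(sampling_rate(2.4 * 2, 1500 * 0.4))  # x2 bin, 0.4x reducer -> ~1.65"/px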

    • Thanks 1
  20. 4 minutes ago, sploo said:

    In 35mm (full frame) DSLR photography terms, the "500 rule" is often used; that is, an exposure time no longer than 500s divided by your focal length. I.e. for very wideangle shots (16mm lens) you can expose for 500/16=31 seconds before star trailing becomes an issue. A 1500mm telescope used for prime focus would, I assume, only allow 500/1500=1/3s exposures.

    If using a Canon APS-C body then it's 1.6x shorter (as you get a field of view on the crop sensor that's approximately the same as a lens with a 1.6x longer focal length).

    No, that 500 rule is a gross approximation. If you want to do a proper calculation for this case, use the sampling rate and the sidereal rate and see what you get from the two.

    Let's say that you are using a modern DSLR sensor with a pixel size of about 4um. You are using 1500mm of focal length. This gives you 0.55"/px, or each pixel is 0.55 arc seconds "long". The sidereal rate is about 15"/s. This means that in a single second a star will streak across about 27 pixels. If you have a FWHM of about 3", that is roughly 6 pixels, and you start to see star elongation at about a 20% larger major axis. This means that elongation can be at most about 1.2px (6 pixels * 20% = 1.2px). That is 0.55 * 1.2 = 0.66 arc seconds.

    With a sidereal rate of 15"/s, this gives you a 44ms exposure limit so as not to exceed 0.66 arc seconds of elongation.

    As you can see, this figure is about x7.5 shorter than you estimated using the 500 rule.
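
    The same calculation as a small sketch, so you can plug in your own numbers (illustrative only, this is not another "rule"):

        def max_exposure_s(pixel_um, focal_mm, fwhm_arcsec, max_elong=0.2, sidereal=15.0):
            scale = 206.265 * pixel_um / focal_mm         # "/px
            fwhm_px = fwhm_arcsec / scale                 # FWHM in pixels
            allowed_streak = fwhm_px * max_elong * scale  # allowed trailing in arc seconds
            return allowed_streak / sidereal

        print(max_exposure_s(4, 1500, 3))  # ~0.04 s (44 ms above, with FWHM rounded up to 6 px)
        print(500 / 1500)                  # ~0.33 s from the 500 rule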
