Everything posted by vlaiv

  1. If you will be setting up each session - limit your choice to the CEM60. Otherwise I would advise the CEM120, regardless of the fact that you will be using a relatively light scope (less than 15 kg total) - but that mount head is 26 kg on its own... I'm sometimes fed up with setting up my HEQ5 each session. If you can, build a pier, provided you have the option to leave the mount on it when not in use (as opposed to storing it away) - in that case I would say the CEM120 is still an option.
  2. Non-guided use - encoders. Guided - no encoders.
  3. Indeed - the same thing applies to other "signal" sources - target signal (good signal) and thermal signal (like sky signal - bad signal, since you remove it through calibration but the random component remains). All of these are modeled as Poisson processes.
  4. Yes. The primary role of NB filters is to cut sky background and its associated noise. Reducing the background level on its own is not really important - you can control that in processing by setting the black point. It is the associated noise that is problematic - the stronger the signal, the stronger the shot noise that comes with it. Sky background does not contain important signal - it contributes only a constant offset, which is not useful - but the noise associated with it is bad for the image. Short vs long exposure: if the camera's read noise is the dominant noise source in a single exposure, you will benefit from longer exposures (up to the point where read noise is no longer the strongest noise source per sub). Reducing the noise from LP / sky background makes read noise dominant again - and that is why it is better to do longer rather than shorter exposures. If you limit yourself to short exposures only, then NB filter vs no filter is again the better choice - it will remove the sky-background noise. So even with short exposures, NB filters help - provided the target emits in the filter's NB lines, of course.
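A quick numeric illustration of that read-noise argument - a minimal sketch with made-up but plausible numbers (the 1.7 e- read noise and the sky background rates are assumptions, not measurements from the thread):

```python
import math

# Minimal sketch of the read-noise vs sky-noise argument above.
# All numbers are illustrative assumptions, not measured values.
READ_NOISE = 1.7  # e- RMS per sub (assumed camera value)

# Assumed sky background rates in e-/s/px: broadband vs narrowband filter.
for sky_rate in (5.0, 0.05):
    for exposure_s in (30, 300):
        sky_noise = math.sqrt(sky_rate * exposure_s)  # Poisson shot noise of sky
        dominant = "sky" if sky_noise > READ_NOISE else "read"
        print(f"sky {sky_rate} e-/s/px, {exposure_s}s sub: "
              f"sky noise {sky_noise:.2f} e- vs read {READ_NOISE} e- "
              f"-> {dominant}-noise limited")
```

With these numbers, the broadband sub is sky-noise limited even at 30s, while behind the NB filter a 30s sub is still read-noise limited - which is exactly the case where lengthening the exposure pays off.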
  5. A polarizing filter is selective in polarization, not in wavelength. This means it will pass some of the UV and some of the IR in the same way it passes some of the visible wavelengths - depending on their polarization.
  6. I think it has to do with UV rays. Although solar filters remove a lot of light, the Sun emits a lot of light and a large scope collects a lot of it too. In the end there might be enough light left to do some damage. Let's say we are using a 100mm scope and that the normal pupil in daylight is about 1-2mm. That is somewhere between x2500 and x10000 more light collected by the scope than by looking at the Sun directly. Now we have a Herschel wedge, which passes about 1/20th of the light (5% or so), and an ND3 filter, which passes 1/1000th - combined they pass 1/20000th of the light. That still leaves about 1/8 to 1/2 of what we would gather with the naked eye looking directly at the Sun. The image is magnified so it does not look as bright, but the same number of UV photons reach our eyes. There are a number of resources online about the harmful effects of UV on eyesight. As for the Baader Continuum - it appears that Baader changed something in their manufacturing process, as their earlier filter response curve had a leak in the UV part of the spectrum. So I guess it is all about avoiding cataracts and other nasty things that can result from UV exposure.
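The light-throughput arithmetic above is easy to verify; here is a minimal sketch using the same assumptions as the post (1-2mm daylight pupil, 5% wedge transmission, ND3):

```python
# Quick check of the light-gathering arithmetic above (illustrative numbers).
aperture_mm = 100.0
wedge = 1 / 20        # Herschel wedge passes ~5%
nd3 = 1 / 1000        # ND3 filter passes 0.1%
attenuation = wedge * nd3  # combined: 1/20000

for pupil_mm in (1.0, 2.0):  # assumed daylight pupil diameter range
    gathering = (aperture_mm / pupil_mm) ** 2  # area ratio, scope vs pupil
    relative = gathering * attenuation         # vs naked eye, no filter
    print(f"pupil {pupil_mm}mm: scope gathers x{gathering:.0f}, "
          f"after filters {relative:.3f} of naked-eye level")
# -> roughly 1/8 to 1/2 of the light of staring at the Sun unfiltered
```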
  7. (M82) Ah, ok, I think I get it. Follow two simple rules and you will be able to crop any sensor "perfectly".

     Rule 1 - don't crop smaller than presentation size. If you are going to view the image here on SGL, or anywhere else where the display size of the image matters (not the actual pixel size, but the size at which it is displayed), don't crop smaller than that. Many viewing utilities have "fit to screen" or "fit to viewing area" enabled by default - if your image is larger than the viewing area it will be sampled down to the pixel count of the viewing area, and that is fine; you don't lose anything by sampling down for display. But you don't want the opposite - a small image being sampled up to fit the viewing area. When sampling up, the "missing" pixels have to be made up by the upsampling algorithm, and detail can't be made up, so things look blurry - avoid having your images upsampled if you can. In most circumstances you want your crops not to end up below common screen widths - like 1280 or 1600 (or even 1920 nowadays) - unless you are sure the image won't be upsampled for viewing.

     Rule 2 - make sure you have a proper sampling rate for the conditions. This one is tricky, because you don't know the proper sampling rate until you finish sampling. Luckily you can get there with algorithms. The proper sampling rate is about FWHM/1.6 in arc seconds. Say you have your stack and you measure its average FWHM to be 3.5". What is the proper sampling rate for this? It is ~2.2"/px (3.5 / 1.6 = 2.1875) - that is your target resolution. Imagine you sampled at 1"/px - what then? First bin your data x2 to get to 2"/px, then downsample to 2.2"/px (or leave it at 2"/px - if you are fairly close to the proper sampling rate you are fine). In any case, use regular binning, fractional binning or downsampling to reach the proper sampling rate; see the sketch after this post. If you are already sampling coarser than the proper rate, you are fine - no need to do anything special (and certainly don't upsample). Someone will say you can drizzle, but I don't think drizzle works in amateur setups, and in fact I would like to run an experiment with whoever is willing: I'll provide undersampled data - or we can take any existing data and make it undersampled by binning - and then compare two approaches, drizzle and upsampled stacking. I claim that upsampled stacking will provide the same level of resolution but much better SNR than drizzle - but this is digressing. There you go - two simple rules.
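A minimal sketch of the Rule 2 arithmetic (function names are mine, not a standard API):

```python
import math

def target_sampling(fwhm_arcsec: float) -> float:
    """Proper sampling rate per the FWHM/1.6 rule of thumb above."""
    return fwhm_arcsec / 1.6

def bin_factor(current_rate: float, fwhm_arcsec: float) -> int:
    """Integer bin factor that brings the current '/px closest to the
    target without going coarser than it (1 means: don't bin)."""
    target = target_sampling(fwhm_arcsec)
    return max(1, math.floor(target / current_rate))

# Example from the post: stack measured at FWHM 3.5", captured at 1"/px.
print(target_sampling(3.5))  # ~2.19 "/px target
print(bin_factor(1.0, 3.5))  # -> bin x2, then optionally downsample 2"/px -> 2.19"/px
```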
  8. (M82) @Xiga I would love to try to help - but I've got no idea what you mean when you say that a certain sensor "crops better". Can you explain what you mean by that?
  9. In this particular case, this approach to creating synthetic luminance is quite wrong. You can try it out - do the 64% / 36% mix and, for example, a 90% / 10% mix, and compare the resulting SNR (take a piece of nebula and measure the average signal after removing background / gradients, and take pure background and measure the standard deviation; or just take a rather uniform piece of nebula and measure both average signal and stddev). The above approach only works if you take the same signal under the same conditions and want to stack two different exposure lengths. In the general case you won't have that, even when stacking same-filter data (conditions change over the course of the night - different LP, different extinction, etc.). The correct way to create synthetic luminance is to stack both channels (or all three in the case of RGB) with both regular stacking and stddev stacking (whichever stacking method one uses - with the appropriate standard deviation calculation for the set of pixel values going into the stack, weights adjusted, etc.). You adjust the stddev to account for the number of stacked frames - the SNR improvement (in simple average stacking you divide the stddev by the square root of the number of stacked frames) - and you wipe the background from the regularly stacked image. This gives you signal and noise for each pixel in each channel. Then, for each pixel, you solve the following for a (or, when mixing more channels, for a1, a2, ... - one fewer than the number of channels) and apply the per-pixel weights when mixing the two images:

     d/da [ (S1 + a*S2) / sqrt(N1^2 + a^2*N2^2) ] = 0

     (the equation says that the SNR of the result must be maximized with respect to the coefficient a, so we set the derivative of the total SNR to zero).
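For the two-channel case, that derivative condition has a closed-form solution, a = S2*N1^2 / (S1*N2^2). A minimal per-pixel numpy sketch, assuming S1, S2 are background-wiped signal maps and N1, N2 the matching noise maps (array and function names are mine):

```python
import numpy as np

def optimal_mix(S1, S2, N1, N2):
    """Per-pixel weight a maximizing the SNR of (S1 + a*S2),
    from solving the derivative condition in the post:
    a = S2*N1^2 / (S1*N2^2)."""
    eps = 1e-12                      # guard against division by zero
    a = (S2 * N1**2) / (S1 * N2**2 + eps)
    lum = S1 + a * S2                # mixed synthetic luminance
    snr = lum / np.sqrt(N1**2 + (a * N2)**2)
    return lum, snr
```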
  10. No, there is no simple process for doing that. In fact, in your case, since you have very low SNR in OIII, using Ha as luminance is the correct approach. In general, when you want to create artificial luminance from multiple channels, you either do weighted adding (for RGB, for example, I would use something like 1/2 G and 1/4 each of R and B) or you do some fancy SNR-based combination - this requires that you calculate the SNR for each pixel in each channel, which is done by stacking regularly and stacking to standard deviation, and then using the two to calculate the SNR of each pixel.
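The weighted-adding variant is plain pixel math; a minimal sketch of the 1/2 G + 1/4 R + 1/4 B mix mentioned above (the SNR-based combination is sketched under post 9):

```python
def weighted_luminance(R, G, B):
    """Weighted synthetic luminance using the 1/2 G + 1/4 R + 1/4 B
    mix suggested above (inputs: 2D float arrays of equal shape)."""
    return 0.25 * R + 0.5 * G + 0.25 * B
```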
  11. I just realized that IrfanView does not have histogram stretch "out of the box" (there is an option, but it is not readily available next to the viewing window). An alternative would be Fitswork4 - not purely an image viewing program, it has a lot of options - a bit like FITS Liberator but maybe a bit more user friendly - and it has one great feature: it can read DSLR raw files and save them as FITS.
  12. https://www.irfanview.com/ You'll need to install a plugin for it, but it is available on the same website under plugins (the plugin is "formats" - various formats, FITS included).
  13. Here is the green channel extracted, binned x4 and with vignetting removed (not ideal, but the best I could do): a lot of little stars hide in this image ...
  14. Or this - same image: The question is of course, as alacant has already asked - what do you want from your image? There is a lot in your image if you know how to process it. I'm going to show you, but in order to get good images with that level of detail you will need to change some of your workflow. 1) Flat calibration - that will be very important, because you have a gradient in your images that is really hard to remove. Flats will take care of that.
  15. How about this? That does not look bad, right? It is the same image from the Canon 450d, with Gimp processing.
  16. As others have pointed out - that is down to post processing.
  17. Not sure what BPM is? Ah ok, don't just add luminance - that is not how luminance should work. If you have an HOO image (and you see colors and all) and you have the luminance stretched, here is a simple luminance transfer method (you will need to figure out how to tell PI to do the pixel math - I'm just going to give you the math):

     final_r = lum * R / max(R,G,B)
     final_g = lum * G / max(R,G,B)
     final_b = lum * B / max(R,G,B)

     This simply means: for each pixel of the output image, take the R component of the HOO image, divide it by the max of the R, G and B values of that pixel, and multiply by the lum value of the corresponding pixel of the luminance (and likewise for G and B). That will give you good saturation and good brightness. It is the so-called RGB-ratio color / luminance combination method.
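A minimal numpy version of that pixel math, assuming lum is a 2D array and the HOO image an HxWx3 array, both normalized to [0, 1] (function name is mine):

```python
import numpy as np

def rgb_ratio_transfer(lum, rgb):
    """RGB-ratio luminance transfer as described above.
    lum: 2D float array; rgb: HxWx3 float array; both in [0, 1]."""
    eps = 1e-12                            # avoid division by zero
    peak = rgb.max(axis=2, keepdims=True)  # per-pixel max(R, G, B)
    return lum[..., None] * rgb / (peak + eps)
```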
  18. What exactly do you find to be a problem with that image? The artifact on the right is due to stacking - select "intersection" mode in DSS to remove it, or crop it out. The black corners are a consequence of vignetting - use flat frames to correct for that. There is coma, but that is to be expected from a Newtonian scope without a coma corrector. I really don't see anything else that might be "wrong" with this image.
  19. If you try StarNet++, here is a workflow that I found useful. Same as above, but: once you have stretched Ha to act as a luminance layer, convert it to 16 bit and do star removal on it (as far as I remember, StarNet++ only works with 16-bit images). Next, blend the starless and original images with the starless layer set to subtract - this should create a "stars only" version of the Ha luminance; save that for later. Make starless versions of Ha and OIII and blend those for color (again, removing stars on the stretched 16-bit versions). Apply the starless luminance to that and, at the end, layer the stars-only version (should be only white, tight stars) over the final image. This will give you tight stars from Ha and white stars without funny hues. The above image is rather good. Yes, saturation is lacking - what does your color combine step look like? Could you paste it? PI with DBE removed the nasty gradients, so there is potential to make this image better than my version, which has that funny OIII gradient. How did you calibrate your images? It looks like the flats (if you used them) are slightly over-correcting the Ha image. Could it be that you did not take flat darks, or used bias as flat darks?
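The subtract blend and the final star layering in that workflow are simple pixel math; a minimal numpy sketch, assuming all images are float arrays in [0, 1] (function names are mine):

```python
import numpy as np

def stars_only_layer(original, starless):
    """'Stars only' image via the subtract blend described above.
    Both inputs are stretched mono images as 2D float arrays in [0, 1]."""
    return np.clip(original - starless, 0.0, 1.0)

def add_stars_back(starless_color, stars):
    """Layer the stars-only image over the final starless color image
    (simple additive blend; starless_color is HxWx3, stars is 2D)."""
    return np.clip(starless_color + stars[..., None], 0.0, 1.0)
```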
  20. First step - crop, bin, and background & gradient removal - I do it in ImageJ; here is the result for OIII: As you can see, even with x4 binning (which improves SNR by x4), the signal is barely visible above the noise level. Ha is of course much better: Now that we have them wiped, we can continue to make an image out of this. Next step is loading those two still-linear subs into Gimp. First we create a copy of Ha - that will be our luminance. There is no point in trying to add OIII into the synthetic luminance since it has such poor SNR - we would just make the Ha sub worse. We proceed to make a nicely stretched black and white version - do the stretch, do noise reduction and all, to get a nice looking image in b&w. Then we do another stretch on the other Ha copy - but far less aggressively. We also stretch the OIII sub - that one rather aggressively, because the SNR is poor, and we use a lot of noise control because of it. OIII: Don't worry about the uneven background (probably a cloud or something in the stack) - we are using Ha luminance and it should not show in the final image. Notice how subtle the Ha stretch is this time - this is because the Ha signal has good SNR and we don't want it to drown the color. Next we do HOO composing of the channels. This does not look good and it shows the effect you feared - uneven background. But that is only because I had to boost OIII like crazy - the SNR is so poor there is hardly anything there. However, using Ha as luminance solves a good part of that problem. And the end result after some minor tweaks is: There is still a lot of green in the background - it is hard to remove it completely, as doing so kills the OIII signal in the nebula as well. Another thing that would improve the image would be StarNet++ or another star removal technique, to keep the stars white instead of them having a hue.
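The HOO composing step itself is just a channel mapping - Ha drives red, OIII drives both green and blue. A minimal sketch (function name is mine); feeding the result, together with the stretched Ha copy as lum, through the RGB-ratio transfer sketched under post 17 reproduces the "Ha as luminance" step:

```python
import numpy as np

def hoo_compose(ha, oiii):
    """HOO palette: Ha -> red, OIII -> green and blue.
    Inputs are stretched mono images as 2D float arrays in [0, 1]."""
    return np.dstack([ha, oiii, oiii])
```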
  21. I had a look at these, and the OIII SNR is very poor. From the stacks' titles I'm guessing you only did 960 seconds of OIII? That is less than Ha, and Ha is the stronger signal anyway. Let's see what I can pull from these - I'll have to bin the data to try to improve the SNR of the OIII sub.
  22. If you wish, you can post 32-bit FITS of the linear Ha and OIII channels - without any processing done, just stacked - and I can do the color composition for you with the different steps shown. Mind you, it won't be in PI (ImageJ + Gimp), but hopefully you'll be able to replicate the steps in PI yourself.
  23. I don't use PI. Now, I'm assuming that Linear Fit does what it says, but do bear in mind that I have already made the mistake with PI of assuming a certain operation "does what it says" - or rather, of having a different understanding of what a command might do based on its title. I would not use Linear Fit for any channel processing. It is useful when you have the same data and you want to equalize the signal - for example, a target shot under two different conditions. As such it should be part of the stacking routine, not of later channel combination.
  24. I would skip linear fit as it does not make much sense here. I would also do the histogram stretch before channel mixing. In narrowband imaging, Ha is often a very dominant signal component - like in your example above. It can be as much as x4 or more stronger than the other components. You might think that linear fit would deal with that - but it won't always do it properly, because of the different distribution of signal (for example, if you have Ha signal where there is no OIII signal and vice versa, linear fit will just leave the ratio of the two the same - it won't scale Ha to be "compatible" with OIII). Once you have wiped the background - effectively set it to 0 where there is no signal (the average will be 0 but the noise will "oscillate" around 0) - then stretching will keep the background at zero if you are careful with the histogram and don't apply any offsets. This means the background will be "black" for color mixing. You might also want to create a synthetic luminance layer for your image and do a color transfer onto it once you have done your color mixing.
  25. Required total integration time scales with aperture (area), provided that you match resolution. The actual SNR formula is rather complicated. You can get the same SNR with a smaller aperture as with a larger one if you change the sampling rate - the arc seconds per pixel. For example, you will get the same SNR from 100mm at 2"/px as from 200mm at 1"/px in the same time. You can still use 1s exposures - it just means you need to get more of them with the smaller scope, say 3600 vs 2000 (just example numbers; the actual count will depend on a host of factors - QE of the sensor, your sky background level, sampling rate, etc.).
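A quick check of that example: signal per pixel scales with aperture area times the sky area each pixel covers, so 100mm at 2"/px and 200mm at 1"/px collect the same photons per pixel per unit time (a sketch that ignores all the other factors mentioned):

```python
# Sketch of the aperture / sampling-rate trade-off above.
# Signal per pixel ~ aperture area x sky area covered by the pixel.
def relative_signal(aperture_mm: float, arcsec_per_px: float) -> float:
    return (aperture_mm ** 2) * (arcsec_per_px ** 2)

print(relative_signal(100, 2.0))  # 40000.0
print(relative_signal(200, 1.0))  # 40000.0 -> same photons per pixel per unit time
```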