Everything posted by vlaiv

  1. I think you are confused by the ambiguous use of the term "combine". Here is a quote from the LV article you just linked (I did a quick search on the term "noise" and this is pretty much where it appears in the text):

     "Though your images may differ, it is common to apply some noise reduction to images in their linear state. The choice of whether or not to do this, or how aggressively to do it, depends on the level of noise in your images. Please note that this is beyond the scope of this tutorial and is covered amply by another tutorial specially written on the subject of noise reduction. My personal noise reduction routine of choice for the images we are about to post-process was using MultiscaleLinearTransform (as described in the tutorial on noise reduction) with stretched clone copies of the images themselves acting as masks."

     So here it advises you to denoise at the linear stage. Let's quickly run through the stages of data reduction and processing (a minimal sketch of this ordering follows below):

     1. calibration
     2. stacking (combining)
     3. non-linear transform
     4. multichannel combining

     In some workflows 3 and 4 can swap places. What is important is that both articles place the denoise phase at the same point in the workflow. The first article, about MURE denoise on the PI forums, says that you should do it after phase 2 but prior to phase 3. What has confused you is that it uses the term "combine" instead of "stacking", and the warning about only doing it after "combining" (stacking) refers to the fact that you should do it after phase 2 and not after phase 1 (calibration). If you denoise prior to "combining" (meaning prior to stacking), you will skew the statistics of signal and noise, and stacking (here called "combining") will not produce the wanted result - further down the PI forum thread the OP says he tried exactly that and lost faint detail. This is why you should not denoise individual subs prior to stacking ("combining"), but rather after stacking, yet before phase 3, while the data is still linear.

     To reiterate - both articles say the same thing: denoise while the data is still linear. This has nothing to do with multichannel combining, or with making a colour (false or otherwise) image out of individual mono masters. Denoising should be done at the linear stage prior to that.
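     A minimal numpy sketch of that ordering, assuming phase 1 (calibration) is already done; a simple Gaussian blur stands in for the actual denoising algorithm (MURE itself is a PixInsight process) and an asinh stretch stands in for the non-linear transform:

        import numpy as np
        from scipy.ndimage import gaussian_filter

        rng = np.random.default_rng(0)
        # stand-in data: 10 already-calibrated subs of 100x100 px (phase 1 done)
        subs = rng.poisson(50.0, size=(10, 100, 100)).astype(np.float64)

        # phase 2: stacking - what the MURE post calls "combining"
        stack = subs.mean(axis=0)

        # denoise here, while the data is still linear
        denoised = gaussian_filter(stack, sigma=1.0)

        # phase 3: non-linear transform (asinh stretch as an example)
        stretched = np.arcsinh(denoised / denoised.max())

        # phase 4: multichannel combining of the per-filter masters would follow here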
  2. I'm not following. This thread / first post explicitly says that denoising should be done at the linear stage, after stacking. MURE denoise does not work on multichannel data like RGB / false colour images - it even states that you can't use it on OSC data (you can, but you need to split your OSC subs into separate channels without debayering and treat them as individual mono/filter subs that you stack and then denoise, just as you would regular mono subs - see the sketch below). You say there is an article that says you should apply some type of denoise algorithm after combining the individual masters into a multichannel image? Right? Which article would that be?
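     A sketch of that channel split, assuming an RGGB Bayer matrix (check your camera's actual pattern) and raw, non-debayered frames loaded as 2-D numpy arrays:

        import numpy as np

        def split_bayer_rggb(raw):
            """Split a raw (non-debayered) RGGB frame into its four colour planes.
            Each plane is half the resolution of the original and can be stacked
            and denoised like any mono sub."""
            r  = raw[0::2, 0::2]
            g1 = raw[0::2, 1::2]
            g2 = raw[1::2, 0::2]
            b  = raw[1::2, 1::2]
            return r, g1, g2, b

        # quick demo on a fake 8x8 raw frame
        demo = np.arange(64, dtype=np.float64).reshape(8, 8)
        r, g1, g2, b = split_bayer_rggb(demo)
        print(r.shape)   # (4, 4)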
  3. Can you give a link to the second one - the article recommending denoising after channel combination?
  4. From what I can see in the first post of the topic you linked to, MURE denoise works as the first type of algorithm I described. It assumes the image is a combination of Poisson and Gaussian noise distributions and does some clever math to remove the associated noise. In that context, the sentence:

     "Do not combine denoised images. Signal-to-noise ratio (SNR) will be enhanced by combining noisy images and denoising the result. Combined images must be equally exposed, have the same pixel resolution, and be registered by projective transformation with no distortion correction."

     refers to exactly what I mentioned - stacking of denoised images vs denoising the stack, not channel combination. It also says that during registering and combining you need to be careful not to apply any sort of projective corrections (like fixing lens distortion in wide-field images and such). Because this algorithm utilises noise distribution statistics, it needs linear data, so you should apply it after stacking and before any sort of non-linear transform (histogram stretch). A non-linear transform skews the relationship between signal strength and the associated noise in the Poisson distribution (a small demonstration follows below). It is explained in detail in this post: https://pixinsight.com/forum/index.php?topic=9206.msg59115#msg59115
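     A quick numerical illustration of that last point, using made-up signal levels: in linear data the Poisson noise grows as the square root of the signal, and an asinh stretch breaks that relationship:

        import numpy as np

        rng = np.random.default_rng(1)
        # Poisson (shot) noise in linear data: sigma grows as sqrt(signal)
        faint  = rng.poisson(10.0,   100_000).astype(float)
        bright = rng.poisson(1000.0, 100_000).astype(float)
        print(bright.std() / faint.std())   # ~10, i.e. sqrt(1000/10)

        # after a non-linear stretch (asinh here) that relationship is gone,
        # which is why noise-model-based denoisers want linear input
        sf, sb = np.arcsinh(faint), np.arcsinh(bright)
        print(sb.std() / sf.std())          # nowhere near 10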
  5. I would say that it depends on the denoising algorithm used. Some denoising algorithms are designed to work best on linear data of known noise statistics ("known" here meaning estimated from the data, but of a known distribution type). Others are designed around perceptual uniformity of the image, or general smoothness (like the TV family of algorithms, which assume smooth underlying data). I don't think this refers to channel combination. It looks like it was written in response to a question like: "Should I denoise subs prior to stacking, or denoise the resulting stack?" A simple SHO palette combination of the data does not influence the particular noise distribution per channel. Some other methods of combining data will influence the noise statistics (like a linear combination per channel), but again - denoising algorithms work per channel, so it will depend on the type of algorithm used (some even transform the RGB image into Lab and apply different levels of denoising to the luminance and chrominance data).
  6. According to this thread, there are in fact offset and gain values: What values do you have set on your camera, and more importantly did you change them at any point?
  7. Something is very odd with those dark subs. Here are my findings:

     The average background value of the first and second 300 s darks is very different in intensity - it should be roughly the same. If you take two darks, the only difference between them should be random noise and their average values should be about the same. I subtracted bias from both and got ~910.4 ADU in the first, but only ~494.8 ADU in the second dark.

     We can use the short dark (flat dark) as a cross check on which value, if any, might be right. We can also look at the specs for your camera and see what dark current to expect at this temperature and exposure. The short dark (8 s) after bias removal has an average value of ~6.65 ADU. Dark current should be linear in time, so if an 8 s dark produces 6.65 ADU on average, a 300 s one should produce 300/8 * 6.65 = 249.375 ADU (a quick check of those numbers follows below). Neither dark matches that value - the first has more than 3 times that amount, while the second has about twice that amount. This is very strange and should not happen!

     Looking around for QHY9 specs, I can't seem to find the system e/ADU (gain) value anywhere, nor the dark current at -15C - this is something camera manufacturers usually provide. But I did stumble upon something interesting - there is debate over gain and offset settings for that camera. I was under the impression that CCD cameras don't have this - system gain and offset should be factory set with no user intervention. Do you have these values, can you change gain and offset, and most importantly, did you change those values?
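     The scaling argument in a few lines, using the ADU values quoted above:

        # dark current should scale linearly with exposure time
        short_dark_adu = 6.65            # mean ADU of the 8 s dark, bias removed
        expected_300s = short_dark_adu * 300.0 / 8.0
        print(expected_300s)             # 249.375 ADU

        measured = {"dark 1": 910.4, "dark 2": 494.8}   # bias-removed 300 s darks
        for name, adu in measured.items():
            print(name, round(adu / expected_300s, 2), "x the expected value")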
  8. You can actually check other people's readings on that website as well - just turn on the SQM overlay to see recorded measurements and other info. You can add your own measurements too.
  9. I see a Canon 600D listed in your signature, so I'm guessing you take the odd picture now and then with that camera and the scope? If you do, there is a simple way to measure SQM yourself (approximately):

     Take an image of the sky near the zenith (where you want your SQM value measured) and another one of a bright reference star. Use the green channel from your image and measure the star intensity - a photometric measurement - you can use AstroImageJ for this. Then take an empty patch of sky in the first image and measure the average value there. Make sure you subtract a dark frame from your image (you can do flat calibration as well, but you don't need it for this rough measurement).

     Divide the average background value by the pixel scale squared (so if you image at 1.5"/pixel, divide by 1.5 * 1.5 = 2.25). Divide the total star brightness by this background value and calculate the magnitude difference from that number: -2.5 * log10(star_intensity / background_intensity) = magnitude difference between star and sky brightness. Subtract this value from the star's magnitude. This should give you the SQM reading (a worked sketch follows below).
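     Putting those steps into a short script - all of the measured values here are hypothetical placeholders, substitute your own photometry:

        import numpy as np

        star_flux    = 4.0e6   # total star intensity (green channel, dark-subtracted ADU)
        star_mag     = 5.0     # catalogue magnitude of the reference star
        sky_mean_adu = 25.0    # mean ADU per pixel of an empty background patch
        pixel_scale  = 1.5     # arcsec/pixel

        sky_per_arcsec2 = sky_mean_adu / pixel_scale**2
        delta_mag = -2.5 * np.log10(star_flux / sky_per_arcsec2)
        sqm = star_mag - delta_mag
        print(round(sqm, 2))   # approximate sky brightness in mag/arcsec^2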
  10. Do you have any software to inspect all the FITS files you've taken - to check FITS headers and pixel values? That can be an important clue if something was recorded wrongly - for example a silly truncation to 8-bit data or something like that. Maybe post one of each - FITS files I mean, just a single sub of each type - no need to post masters or the whole set. If it is an ED80, then a light leak is not a likely suspect - refractors don't have those issues (unless the leak is on the other side, after the focuser, but that is a long shot - one would need an opening in the optical train that gets lit up when taking flats).
  11. Ah, that is the opposite of what I assumed - this is under-correction in the flats. If we go back to the original formula, calibrated = lights / flats, and the calibrated value is lower than it should be, we have two possible cases:

     1. The lights are lower in value than they should be - this can happen from improper calibration (like multiple dark / bias subtractions), but I don't think that is the case here.
     2. The flats are "stronger" than they should be (higher in value) - this can happen if one is, for example, calibrating flats with bias only while there is significant dark current, but I don't think that's the case here either.

     Another reason why flats might be stronger, and this would be my main suspicion in this case, is the flocking and baffling of the telescope - some unfocused light is making its way to the sensor during flats, bypassing the "regular" optical train. Depending on the scope type, this can happen with:

     Newtonian scopes - if there is reflection from the tube wall opposite the focuser that ends up going down the focuser tube, or reflection off the secondary support that ends up in the tube. The focuser tube needs to be baffled as well; if it's not, it can "channel" light inside (multiple bounces coming directly from the flat source).

     Folded designs with a central obstruction - it depends on the central obstruction fully covering the aperture. If you look at the back of your scope you should not be able to see light coming straight from the aperture - it must bounce off the secondary.

     Refractors should be pretty immune to this - all the light that enters the telescope goes through the lens, so there is simply no chance of it being unfocused and reaching the sensor.
  12. http://www.astrotest.it/test-reports/filters-including-h-alpha/daystar-quark-vs-quark-combo/ It's in Italian, so google translate (or if you know Italian ... )
  13. If both the outer field and the dust shadows are brighter, that is over-correction.

     calibrated = light / flat

     If calibrated is larger in value, from the simple equation above we have two possibilities: either the light is larger than it should be, or the flat is lower than it should be.

     The light can be higher under these conditions:
     1. dark or bias (or both) is not removed, so there is residual signal besides the light signal
     2. there is a source of light pollution which is not present when doing darks / bias files - such as the scope having a light leak while you take the camera off the scope for dark / bias frames
     3. darks were "colder" than the light subs.

     The flat can be lower than it should be if:
     - it was not properly calibrated (which I doubt, since you took flat darks of the same duration under the same conditions)
     - flats were taken with the sensor cooled but the matching darks were taken with a hotter sensor
     - the sensor is in its non-linear region when doing flats - again I doubt this, since you went for ADU at about half way, and in general non-linearity would produce a host of artifacts in calibration.

     Here are some questions that could help in understanding what is happening:
     1. Was the temperature regulation the same in all corresponding subs?
     2. How did you acquire the subs (on scope / off scope / day / night - the particular conditions for each)?
     3. What exact calibration method did you use? Maybe try without bias - it's not needed with fully matching darks (a sketch of such a calibration follows below).
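     A minimal numpy sketch of that bias-free calibration, assuming master frames that fully match the lights and flats in temperature and exposure:

        import numpy as np

        def calibrate(light, master_dark, master_flat, master_flat_dark):
            """Calibrate a light frame with fully matching darks - no separate bias.
            master_dark matches the light exposure/temperature, master_flat_dark
            matches the flat exposure/temperature."""
            flat = master_flat - master_flat_dark
            flat = flat / np.mean(flat)   # normalise so the flat only reshapes the field
            return (light - master_dark) / flat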
  14. I'm also considering a Quark at some (distant?) future point - and yes, I have the same concerns about it as you do now. I've seen those negative reviews and issues as well, and that is quite a large sample of units that performed below standard. What it does not tell us is how many users out there are happy with their Quarks (units that really do work - or perhaps owners of substandard units who don't realise theirs should work better). We have no way of knowing whether QC is better nowadays, or what the chance of getting a poor one is. This is always a concern, as the internet remembers things for a long time, but suppliers tend to hold stock for a long time as well. If it is any consolation, combo Quarks were not around at that time, so those units are likely to be under the new QC - IF it has been improved recently.
  15. How about 4" F/10 scope with Quark combo? That one does not have telecentric barlow included, so with above scope you can get both high magnification views and full disk viewing. If you want to do full disk view at low magnification - just put aperture mask to make scope F/20-F/30 without having to use telecentric lens. For medium powers, 2x telecentric will turn that into 2000mm FL scope, that you can further stop down if you want F/25 or F/30, or you can use it like that at F/20. Another x3 telecentric lens will turn it into F/30 4" high magnification setup. Quark combo also has much larger blocking filter - so you can use it again with combinations of telecentric lens and aperture stops to image at various scales.
  16. Quite right! Expected delivery time on TS website changed to 1-3 weeks
  17. It looks like the first batch was seriously flawed (hence the recall). TS in Germany has them listed now - but not yet available, expected to be in stock within 3 days. I wonder if those will be "proper" ones - or the recalled ones.
  18. Found it - here is filter: https://www.cyclopsoptics.com/astronomy-filters/stc-astro-duo-narrowband-filter-48mm-2/ and here is the thread about it:
  19. Someone recently reviewed such a "double" NB filter - I can't remember the name of the filter or the manufacturer, but you can probably find it on SGL via search. If you want a broadband general-purpose LPS filter, then the IDAS P2 is a very good choice - I have it and use it on both OSC and mono cameras (for luminance). I'm pleased with it so far.
  20. It looks like a very good filter if it matches the response curve in the image - but it is certainly not an LPS filter, and the claimed galaxy improvement of 95% is just ridiculous. This is a very good UHC filter by the looks of the spectral response curve. It should pass the following wavelengths:

     486.1nm - H-beta
     495.9nm & 500.7nm - OIII

     And it has a peak that is probably passing:

     656.28nm - H-alpha, maybe even 671.7nm and 673.0nm, which are SII - but that peak looks rather narrow to be ~20nm wide (in the second image it looks like it has a FWHM of about 10nm, so probably not the SII lines).

     If you want a "comparison curve", take a look at other UHC filters and their curves. Such filters are best suited for imaging emission nebulae (or observing them). BTW, one can't judge a filter only by its transmission curve - surface accuracy, anti-reflective properties and durability all play a part in the overall usefulness of a filter.
  21. Don't know about the best, but from what I've heard Samyang 135mm F/2 is held in high regard for wide field with ASI1600 (and other sensors).
  22. I can't remember exactly, but your list does look right - there are three distinct types: small ones that go on the worm shafts, large ones that go on the main shafts (RA and DEC) and a conical one that goes on DEC where the CW shaft extends. I just took mine with me, went to a local shop, showed them to the guy and said "I want the best ones like these" - he gave me a bunch of SKFs.

     Further "mods" you could consider would be a better tripod (if not pier mounted) and the saddle plate.

     There is one "trick" you might also want to try that involves just settings. The DEC axis can have different amounts of backlash depending on where on the "circle" you check it - and if you remove backlash where there is plenty, you will get stiff motion on the opposite side. If you adjust where there is little backlash, you might still get some backlash on the other side. If this happens, you can work out the range of DEC motion you are actually likely to use with your mount - you are likely to go below the equator on the south side, but you are not likely to point anywhere close to the horizon on the north side, so the DEC axis probably won't do a full 360 degree rotation in use (it might do even less if part of the sky is not accessible from your location). Place the problematic part of the DEC circle where it will not be used, by turning it either by hand (the worm wheel, with the mount motor cover off) or in EQMod - by slewing to that position and then resetting the motor position to home (not parking to home but resetting to home - if you have done PEC, make sure you don't slew in RA, keep it as is in the home position). Return the scope to home by undoing the clutches.
  23. If lifting heavy things is a concern, then think again about the 8" dob. It's not the largest scope out there, but it is bulky. The basic version without goto is 26 kg, split into two parts: about 11 kg for the OTA (optical tube assembly - the scope's main tube) and about 15 kg for the base. The OTA is somewhat bulky but easily carried by one person. The base is bulkier and you carry it by its handle - it feels like lugging around a heavy, bulky travel suitcase. The goto version is a bit heavier, due to the motors in the base.
  24. Hi and welcome to SGL. If you want a scope that will do almost all things well - planets and the Moon, galaxies, star clusters, double stars, nebulae, etc. - and you don't have special requirements for travel or such, this is simply the best option: https://www.firstlightoptics.com/dobsonians/skywatcher-skyliner-200p-dobsonian.html

     It is a fairly large scope (maybe look up some videos on YouTube to get an idea of its size), and while it is an excellent "starter" scope, it is also a scope that many people consider "for life". It comes with all the basic accessories you need to start observing (eyepieces and such), and considering your budget you will have enough left over for books on astronomy and any eyepieces, filters, Barlows etc. that you might like to add at some point.

     You did mention that you would like a goto version (the above is manual) - there is a "goto" variant of this scope as well, but of course it is more expensive (and above your budget I'm afraid, so if goto is very important to you, there are other options): https://www.firstlightoptics.com/dobsonians/skywatcher-skyliner-200p-flextube-goto.html

     Something else to consider is that this scope is not suited for astrophotography; in general astrophotography requires dedicated equipment and is quite involved and expensive. One can certainly do astrophotography on a budget and even get decent results, but that usually means not using a telescope at all and just using a camera and lens on a simple (yet good) mount. Once you start thinking about telescopes and astrophotography, you need to start thinking in thousands for the budget.
  25. Some math is in order to answer that question. I don't know if I'm quoting the specs right, but I'll do the math so you can fill in the right values and redo the calculations if necessary. You'll be using a 50 mm focal length with a sensor that has 5.7um pixels and is 3888 pixels wide.

     First we need to establish the working resolution in arc seconds per pixel, using the formula: resolution = 206.3 * pixel_size / focal_length. Putting in the values, we get ~23.5"/pixel.

     Next we need the sidereal rate, which is ~15"/s - each second, the sky moves 15". If you align your camera so that its width is in the direction of RA, we can translate sky motion into a number of pixels. In 300 s at 15"/s, the sky moves 4500". At ~23.5"/pixel, that is ~191 pixels of movement over 300 s. Since your camera is 3888 pixels wide, the displacement between the first and last frame is a bit less than 5% of the frame width (100% * 191/3888). The whole calculation is sketched below.

     This is a very small movement between the first and last frame, and stacking software should easily pick up on it. You will not lose much of the image to the stacking crop either - only about 5%.
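     The same calculation as a short script, using the figures quoted above (swap in your actual specs):

        pixel_size_um   = 5.7
        focal_length_mm = 50.0
        width_px        = 3888
        exposure_s      = 300.0
        sidereal_rate   = 15.0    # arcsec per second, approximately

        resolution = 206.3 * pixel_size_um / focal_length_mm   # ~23.5 arcsec/pixel
        drift_arcsec = sidereal_rate * exposure_s               # 4500 arcsec
        drift_px = drift_arcsec / resolution                    # ~191 pixels
        drift_percent = 100.0 * drift_px / width_px             # ~4.9 %

        print(round(resolution, 1), round(drift_px), round(drift_percent, 1))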