
Posts posted by vlaiv

  1. 4 minutes ago, Tomatobro said:

    Ah...I see what you mean.  The error I am making is that PHD is "guiding" while creating the PEC file which is clearly wrong. Will do your method when the clouds clear....

    Thanks vlaiv

    Depends.

    I was describing the "old" manual way of creating a PEC curve. I think that one offers more flexibility: it lets you see the PE data and choose how the PEC curve is created, and you can do all sorts of analysis of mount behavior with PECPrep. For that you need "clean star motion" data in the PHD2 log - no guide corrections, only star position.

    There is a new way of doing PEC in EQMod - AutoPEC. This is where you just press record in the EQMod control panel. For this to work you need your guiding turned on: EQMod "listens" to the guide pulses and reconstructs the PEC curve from them, so actual corrections need to be sent. The process is "automatic" and it does not let you either see the data or analyze it.

    http://eq-mod.sourceforge.net/docs/eqmod_vs-pec.pdf

    Check page 9 - AutoPEC - for how to record with the new/simple procedure.

    • Like 1
  2. 2 minutes ago, Anthonyexmouth said:

    thanks,

    1. where do i disable guide output, PHD or EQMOD?

    2. are more periods better, would running it longer help or not?

    Guide output is an option in PHD2 - there is a checkbox for it (somewhere - I can't remember exactly where, but I'll look it up online now).

    [attached screenshot showing the PHD2 guide output checkbox]

    Uncheck that particular checkbox and guide output will not be sent from PHD2.

    It's generally better to capture as many worm period cycles as you can - it's like stacking subs: the more subs you stack, the better the image. Each captured worm cycle will suffer from all sorts of "noise" - wind, seeing, long-period error (error with a period of more than half an hour or an hour, which is not of particular concern, but you want to "cancel" it out by averaging multiple worm cycles).

    I usually go for 8-10 cycles; I don't think I've ever had the patience to do more than 10.
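    The averaging effect can be illustrated with a quick sketch (synthetic numbers, not real mount data - the ±8" periodic error, 1 s sampling and 1" noise level are all made up): random scatter in the recorded curve shrinks roughly as 1/sqrt(N) with the number of worm cycles averaged.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic periodic error: a +/-8" sine over one 638 s HEQ5 worm cycle,
# with 1" RMS of per-sample "noise" (seeing, wind) added to each recording.
t = np.arange(638)
true_pe = 8.0 * np.sin(2 * np.pi * t / 638)

def record_cycles(n):
    """Average n noisy worm cycles into one PEC curve."""
    cycles = true_pe + rng.normal(0.0, 1.0, size=(n, t.size))
    return cycles.mean(axis=0)

for n in (1, 4, 9):
    residual = record_cycles(n) - true_pe
    print(f"{n} cycles -> residual RMS ~ {residual.std():.2f}\"")
```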

    • Like 1
  3. I think it's worth doing.

    PEC works very well with guiding if you are using EQMod. There are a couple of benefits to it. First, you get a chance to understand how your mount behaves: when you load a PHD2 log into PECPrep you can analyze the mount's behavior. One of the more important things to note is the "residual" shown after you generate the PEC curve - note the max RA change rate, expressed in arc seconds per minute or per second. This tells you the maximum guide cycle you can use. Longer guide cycles are something you want, as they smooth out the seeing, but to use them (like 5-6 or more seconds) you need a smooth mount and a corrected periodic error that does not change rapidly. If the max RA rate is something like 0.06"/s and you use a 5 second guide cycle, at worst your drift between corrections will be 0.3".

    It will enable smoother guiding as well, because fewer corrections will be issued - and corrections are not ideal, so you want as few of them as possible to keep the mount on track.
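    As a quick sanity check of those numbers (a toy calculation, nothing PECPrep actually does for you):

```python
# Max usable guide cycle from the residual RA rate reported by PECPrep,
# given the worst drift you are willing to accept between corrections.
def max_guide_cycle(max_ra_rate: float, max_drift: float) -> float:
    """max_ra_rate in arcsec/s, max_drift in arcsec -> guide cycle in seconds."""
    return max_drift / max_ra_rate

# Numbers from the post: 0.06"/s residual rate, 0.3" acceptable drift.
print(max_guide_cycle(0.06, 0.3))  # ~5 s guide cycle
```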

  4. Here is how I do it:

    Disable any PEC if you already have it loaded in EQMod (there are controls for PEC in one of the tabs).

    Fire up PHD2 and do a calibration near the equator. Disable guide output in PHD2 and start a "guiding" session (the mount won't actually be guided at this point because you disabled guide command output). Make sure you have logging enabled in PHD2. Do about 1 hour and 30 minutes of recording. The HEQ5 has a worm period of 638s and you want as many full periods as you can get - if you want, for example, 8 full periods, that is going to be about 1:25. Make sure you record for at least 1 minute past the last complete cycle, just to be sure you got it. While you are "guiding", at some point press the PEC timestamp button in EQMod.

    Once you are finished, just stop guiding and close PHD2. Next, start PECPrep on the same computer, load the PHD2 log and analyze the PE curve. Choose only harmonics of the worm period in the main window - generate the PEC curve and save it.

    Load it into EQMod and make sure you have the PEC gain set to 1.

    From that point on, it should always be loaded when you start EQMod - and the default tracking rate should be Sidereal+PEC.

    That is it.
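    The session length from the steps above can be sketched like this (the 638 s worm period is from the post; the 60 s margin is the "at least 1 minute past" rule):

```python
# Recording length for the PEC data capture: N full worm cycles plus a
# one-minute safety margin, formatted as h:mm:ss.
WORM_PERIOD_S = 638                      # HEQ5 worm period

def recording_length(cycles: int, margin_s: int = 60) -> str:
    total = cycles * WORM_PERIOD_S + margin_s
    h, rem = divmod(total, 3600)
    m, s = divmod(rem, 60)
    return f"{h}:{m:02d}:{s:02d}"

print(recording_length(8))               # 8 full periods -> about an hour and a half
```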

    • Thanks 1
    I don't think you need to be concerned with this - it is something that flats can easily remove.

    I've found a similar thing with my ASI1600 - but not in the form of vertical stripes - more like a checkerboard pattern.

    Here is an H-alpha flat (not solar, but a regular Ha filter for night time imaging):

    [attached image: H-alpha flat frame showing the checkerboard pattern]

    The pattern is readily visible, but it calibrates out with the use of flats.

    I think such artifacts are a consequence of the manufacturing process - micro lenses or something else in creating the sensor on silicon can lead to this. Even the additional circuitry between pixels - maybe not every pixel is exactly the same size; some of them might have a small part of the pixel dedicated to the integrated circuits for the readout or amp stage.

    • Like 1
  6. I don't think you'll go wrong with this one:

    https://www.altairastro.com/Starwave-102ED-FPL53-Refractor.html

    I think it's the same optics as this:

    https://www.teleskop-express.de/shop/product_info.php/info/p9868_TS-Optics-PhotoLine-102mm-f-7-FPL-53-Doublet-Apo-with-2-5--Focuser.html

    Or this one:

    https://www.stellarvue.com/stellarvue-sv102-access-ed-apo-refractor-telescope/

    The old FPL-51 version was also branded by Altair Astro; now it is only sold by TS, as the scope you linked to in the first post. The FPL-51 one shows CA on brighter targets, and I remember reading one review where it showed pinched optics.

    Here is a review of it (AA version): https://www.altairastro.com/public/reviews/Starwave-102ED-Review-Sky-at-Night-Magazine-Issue-86-4_stars.pdf

    I think the new version with FPL-53 glass is the better choice if you can afford it. I've been interested in that scope for quite some time now (waiting for funds to become available to get one for myself). According to reviews, the optics on that model should be really good.

    I think the FPL-51 is still a good scope for the money. The 100ED will surely show a better image, but the 102 F/7 will be more portable, easier to mount, and will have a better focuser. It will also show a wider view. I would call it a compromise.

    The FPL-53 one I would not call a compromise - I believe it will show the same quality of image as the 100ED, with all the mentioned advantages: wider field, shorter physical length and a better focuser.

    • Like 1
  7. 2 minutes ago, astrorg said:

    I agree and I am no expert about the 35nm filter - I did read it somewhere [not sure].
    If you follow whatever they tell you on shop pages, you would spend about 4-500% more than you really need!

    So, you actually are saying I could buy a Quark Combo [use Barlow] + Baader 2" 35nm HA filter and I am OK?
    Love this then! 

    Except I would not use a Barlow, but rather a telecentric lens. It does a very similar thing to a Barlow, but the optics are a bit different.

    With a telecentric lens the magnification stays roughly the same as you move the eyepiece/sensor towards or away from the lens element - with a Barlow it changes: closer makes it less magnifying, further away makes it more magnifying. A Barlow also affects eye relief with eyepieces; a telecentric lens does not.

    Due to the different optical design, telecentric lenses are better suited for H-alpha applications - field uniformity and such.

    For a cost-effective telecentric, look at the Explore Scientific ones:

    https://www.firstlightoptics.com/barlows/explore-scientific-2x-3x-5x-barlow-focal-extender-125.html

    Also, consider which one to get depending on the native F/ratio of your intended scope. Do bear in mind that a telecentric lens is not the only way to change the F/ratio of your system - an aperture mask also changes it, and you can use either one, or a combination of both, to achieve the wanted F/ratio. However, max useful magnification is limited by aperture (about 2x the aperture diameter in mm), so an aperture mask reduces max magnification (good for full disk viewing without adding too much focal length, or in combination with a 2x tele extender to get medium magnifications).
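    A small sketch of that arithmetic (the 102 mm F/7 scope and the 60 mm mask below are hypothetical examples, not recommendations):

```python
# Effective F/ratio from a telecentric lens and/or an aperture mask, plus the
# "max useful magnification ~ 2x aperture in mm" rule of thumb from the post.
def effective_f_ratio(focal_length_mm, aperture_mm, telecentric=1.0):
    return focal_length_mm * telecentric / aperture_mm

def max_useful_magnification(aperture_mm):
    return 2.0 * aperture_mm

# Hypothetical 102 mm F/7 scope (714 mm FL) masked down to 60 mm with a
# 2x telecentric: the F/ratio goes up, but the magnification limit goes down.
print(effective_f_ratio(714, 60, telecentric=2.0))   # ~F/23.8
print(max_useful_magnification(60))                  # ~120x max
```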

  8. 3 minutes ago, Adreneline said:

    Hi vlaiv,

    The PI forum article (the link I've sent) says in Warning (Note) 2 : "Do not combine denoised images. Signal-to-noise ratio (SNR) will be enhanced by combining noisy images and denoising the result. Combined images must be equally exposed, have the same pixel resolution, and be registered by projective transformation with no distortion correction."

    The LV tutorial (  https://www.lightvortexastronomy.com/tutorial-narrowband-bicolour-palette-combinations.html ) advises the opposite - perform denoising before combining. That's why I'm confused.

    Sorry if I am confusing you as well :)

    Adrian

    I think you are confused by the ambiguous usage of the term "combine".

    Here is a quote from the LV article you just linked (I did a quick search on the term "noise" and this is pretty much where it appears in the text):

    "Though your images may differ, it is common to apply some noise reduction to images in their linear state. The choice of whether or not to do this, or how aggressively to do it, depends on the level of noise in your images. Please note that this is beyond the scope of this tutorial and is covered amply by another tutorial specially written on the subject of noise reduction. My personal noise reduction routine of choice for the images we are about to post-process was using MultiscaleLinearTransform (as described in the tutorial on noise reduction) with stretched clone copies of the images themselves acting as masks."

    Here it advises you to do denoising at the linear stage.

    Let's quickly "run thru" the stages of data reduction and processing:

    1. calibration

    2. stacking (combining)

    3. non linear transform

    4. multichannel combining

    In some workflows 3 and 4 can swap places. What is important is that both articles place the denoise phase at the same point in the workflow. The first article, about MURE denoise from the PI forums, says that you should do it after phase 2 but prior to phase 3. What has confused you is that it uses the term "combine" instead of "stacking": the warning about doing it after "combining" refers to the fact that you should do it after phase 2 and not after phase 1 (calibration). If you denoise "prior to combining" (meaning prior to stacking) you will skew the statistics of signal and noise, and stacking (here called "combining") will not produce the wanted results - down the thread on the PI forum the OP says he tried that and lost faint detail. This is why you should not denoise individual subs prior to stacking/"combining", but rather after stacking/"combining" and prior to phase 3, while the data is still linear.

    To reiterate - both articles say the same thing: denoise while the data is still linear. This has nothing to do with multichannel combining, or making a color (false or otherwise) image out of individual mono masters. Denoising should be done at the linear stage, prior to that.

    • Thanks 1
  9. 1 minute ago, Adreneline said:

    This is the link vlaiv:

    https://pixinsight.com/forum/index.php?topic=9206.0

    Thanks for looking.

    Adrian

    I'm not following. This thread / first post explicitly says that denoising should be done at the linear phase, after stacking. MURE denoise does not work on multichannel data like RGB / false color images - it even states that you can't use it on OSC data (you can, but you need to split your OSC subs into separate channels without debayering and treat them as individual mono/filter subs that you stack, and then apply denoise as you would with regular mono subs).

    30 minutes ago, Adreneline said:

    I agree entirely, but these two articles seem to be at odds - one says denoise before combining Ha, OIII, SII masters using PixelMath (or whatever) and the other says denoise after combining the individual masters.

    You say here that there is an article saying you should apply some type of denoise algorithm after combining the individual masters into a multichannel image? Right? Which article would that be?

  10. 11 minutes ago, Adreneline said:

    Hi Wim,

    I agree entirely, but these two articles seem to be at odds - one says denoise before combining Ha, OIII, SII masters using PixelMath (or whatever) and the other says denoise after combining the individual masters.

    I have a feeling it's all pretty marginal if the original data is good and you would be hard pressed to tell the difference as to when the noise reduction was performed. As you say, best to have good data and not to have to denoise and not to have to use deconvolution either!

    Thanks for your response.

    Adrian

    Can you give a link to the second one - the one recommending denoising after channel combine?

    From what I can see in the first post of the topic you linked to, MURE denoise works as the first type of algorithm I described. It assumes the image is a combination of Poisson and Gaussian noise distributions and does some clever math to remove the associated noise. In that context, the sentence:

    "Do not combine denoised images. Signal-to-noise ratio (SNR) will be enhanced by combining noisy images and denoising the result. Combined images must be equally exposed, have the same pixel resolution, and be registered by projective transformation with no distortion correction."

    refers to exactly what I mentioned - stacking of denoised images vs denoising the stack - and not channel combining.

    It also says that during registration and combining you need to be careful not to apply any sort of projective corrections (like fixing lens distortion in wide field images and such).

    Because this algorithm utilizes noise distribution statistics, it needs linear data, so you should apply it after stacking and before any non-linear transform (histogram stretch). A non-linear transform skews the relationship between signal strength and the associated noise in the Poisson distribution.
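    A toy demonstration of why that matters (synthetic data; this has nothing to do with MURE's actual implementation): for linear Poisson data the variance tracks the mean, and any non-linear stretch breaks that relationship.

```python
import numpy as np

rng = np.random.default_rng(1)

# For linear data with Poisson noise, variance tracks the mean (var/mean ~ 1).
# A non-linear stretch (a square root here, standing in for any histogram
# stretch) breaks that relationship - the ratio is no longer constant.
for mean in (100, 400, 1600):
    linear = rng.poisson(mean, 100_000).astype(float)
    stretched = linear ** 0.5
    print(mean,
          round(linear.var() / linear.mean(), 3),
          round(stretched.var() / stretched.mean(), 3))
```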

    It is explained in detail in this post:

    https://pixinsight.com/forum/index.php?topic=9206.msg59115#msg59115

    I would say that it depends on the denoising algorithm used.

    Some denoising algorithms are designed to work best with linear data of known noise statistics ("known" here meaning estimated from the data, but of a known distribution type). Others are designed around perceptual uniformity of the image, or general smoothness (like the TV family of algorithms, which assume smooth underlying data).

    1 hour ago, Adreneline said:

     "Do not combine denoised images. Signal-to-noise ratio (SNR) will be enhanced by combining noisy images and denoising the result".

    I don't think this refers to channel combination. It looks like it was written in response to a question like: "Should I denoise subs prior to stacking, or denoise the resulting stack?". A simple SHO palette combination of the data does not influence the particular noise distribution per channel. Some other methods of combining data will influence the noise statistics (like a linear combination per channel), but again - denoising algorithms work per channel, so it will depend on the type of algorithm used (some even transform the RGB image into Lab and apply different levels of denoising to the luminance and chrominance data).

  13. Something is very odd with those dark subs.

    Here are my findings:

    The average background value of the first and second 300s darks is very different - it should be roughly the same. If you take two darks, the only difference between them should be random noise, and their average values should be about the same.

    I subtracted the bias from both and got ~910.4 ADU in the first, but only ~494.8 ADU in the second dark.

    We can use the short dark (flat dark) to cross-check which one might be the right value, if any. We can also look at the specs for your camera to see the expected dark current at this temperature and exposure.

    The short dark (8s) after bias removal has an average value of ~6.65 ADU.

    Dark current should be linear in time, so if an 8s dark produces an average of 6.65 ADU, a 300s one should produce 300/8 * 6.65 = 249.375 ADU.

    Neither of the above has that value - the first one has more than 3 times that amount, while the second has about twice that amount - this is very strange and should not happen!
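    The cross-check above in code form (ADU values taken from the posted subs, after bias removal):

```python
# Dark current scales linearly with exposure, so the short dark predicts what
# the long darks should measure.
def predicted_dark_adu(short_exp_s, short_adu, long_exp_s):
    return long_exp_s / short_exp_s * short_adu

expected = predicted_dark_adu(8, 6.65, 300)
print(expected)                          # ~249.4 ADU expected at 300 s

for measured in (910.4, 494.8):          # the two 300 s darks
    print(round(measured / expected, 2), "x expected")
```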

    Looking around for QHY9 specs, I can't seem to find the system e/ADU (gain) value anywhere, nor the dark current at -15C - this is something camera manufacturers usually provide. But I did stumble upon something interesting - there is debate over gain and offset settings for that camera. I was under the impression that CCD cameras don't have this; system gain and offset should be factory set, with no user intervention.

    Do you have these values, can you change gain and offset, and - most importantly - did you change them?

  14. 50 minutes ago, bobmoss said:

    Would love to get hold of a proper SQM to see how accurate the website is. It is pretty dark here though; I must get around to properly estimating how dark it actually is.

    I see a Canon 600D listed in your kit in your signature, so I'm guessing you take the odd picture now and then with that camera and the scope?

    If you do, then there is a simple way to measure SQM yourself (approximately). Take an image of the sky near zenith (where you want your SQM value measured) and another one of a bright reference star. Use the green channel of your image and measure the star intensity - a photometric measurement - you can use AstroImageJ for this. Then take an empty patch of sky and measure the average value there (in the first image). Make sure you subtract a dark frame from your images (you can do flat calibration as well, but you don't need it for this rough measurement).

    Divide the average background value by the pixel scale squared (so if you image at 1.5"/pixel, divide by 1.5*1.5 = 2.25). Divide the total star brightness by this background value and calculate the magnitude difference from that number:

    -2.5*log10(star_intensity / background_intensity) = magnitude difference between the star and the sky brightness. Subtract this value from the star magnitude.

    This should give you your SQM reading.
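    Here is that calculation as a small sketch (the star magnitude, fluxes and pixel scale below are made-up example numbers):

```python
import math

# Rough SQM estimate via differential photometry, following the steps above:
# compare total star flux to the sky background per square arc second.
def sqm_estimate(star_mag, star_flux, sky_flux_per_pixel, pixel_scale):
    """pixel_scale in arcsec/pixel; both fluxes dark-subtracted, same units."""
    sky_per_arcsec2 = sky_flux_per_pixel / pixel_scale ** 2
    mag_diff = -2.5 * math.log10(star_flux / sky_per_arcsec2)
    return star_mag - mag_diff

# Made-up example numbers: mag 3.0 star with 1e7 counts, sky at 2 counts/pixel,
# 1.5"/pixel sampling.
print(round(sqm_estimate(3.0, 1e7, 2.0, 1.5), 2))   # ~20.6 mag/arcsec^2
```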

    • Like 1
    Do you have any software to check the FITS files you've taken - something to inspect FITS headers and pixel values? That can be an important clue if something was recorded wrongly.

    That can also be the case here - like a silly truncation to 8-bit data or something like that.

    Maybe post one of each - FITS files I mean, just a single sub of each kind - no need to post masters or the whole set.

    If it is an ED80, then a light leak is not a likely suspect - refractors don't have these issues (unless the light leak is on the other side, after the focuser, but that is a long shot - one would need an opening in the optical train that gets lit up when taking flats).

    Ah, that is the opposite of what I assumed - this is under-correction in flats.

    If we go back to the original formula: calibrated = lights / flats

    and the calibrated value is lower than it should be, we have two possible cases:

    1. The lights are lower in value than they should be - this can happen with improper calibration (like multiple dark/bias subtractions), but I don't think that is the case.

    2. The flats are "stronger" than they should be (higher in value) - this can happen if one is, for example, calibrating flats with bias only while there is significant dark current, but I don't think that's the case here.

    Another reason why flats might be stronger, and this would be my main suspect here: flocking and baffling of the telescope - some unfocused light is making its way to the sensor when taking flats, bypassing the "regular" optical train. Depending on the scope type, this can happen with:

    Newtonian scopes - if there is a reflection from the tube wall opposite the focuser that ends up going down the focuser tube, or a reflection off the secondary support that ends up in the tube. The focuser tube needs to be baffled as well; if it's not, it can "channel" light inside (multiple bounces coming directly from the flat source).

    For folded designs with a central obstruction, it depends on the central obstruction fully covering the aperture - looking at the back of your scope you should not be able to see light coming straight from the aperture; it needs to bounce off the secondary.

    Refractors should be pretty immune to this - all the light that enters the telescope goes thru the lens, so there is simply no chance of it being unfocused and reaching the sensor.

    If both the outer field and the dust shadows are brighter - that is over-correction.

    calibrated = light / flat

    If calibrated is larger in value, from the above simple equation we have two possibilities: either the light is larger than it should be, or the flat is lower than it should be.

    The light can be higher under these conditions:

    1. dark or bias (or both) was not removed, so there is residual signal besides the light signal

    2. there is a source of light pollution that is not present when taking darks/bias - such as the scope having a light leak while you take the camera off for dark/bias frames

    3. the darks were "colder" than the light subs.

    The flat can be lower than it should be if:

    - it was not properly calibrated (which I doubt, since you took flat darks of the same duration under the same conditions)

    - the flats were taken with the sensor cooled but the matching darks were taken with a hotter sensor

    - the sensor was in its non-linear region when taking flats - again I doubt this, since you aimed for an ADU value at about half way, and in general non-linearity would produce a host of artifacts in calibration.

    Here are some questions that could help in understanding what is happening:

    1. Was temperature regulation the same for all corresponding subs?

    2. How did you acquire the subs (on scope / off scope / day / night - the particular conditions for each)?

    3. What exact calibration method did you use? Maybe try without bias - it's not needed with fully matching darks.
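    A toy single-pixel example of the calibrated = light / flat logic and both failure modes (all values made up):

```python
# One "dust shadow" pixel: how proper, inflated, and offset-contaminated
# calibration behave under calibrated = light / flat.
true_signal = 100.0
dust_transmission = 0.8                    # dust spot passes 80% of the light

light = true_signal * dust_transmission    # what the sensor records
good_flat = dust_transmission              # normalized flat, spot value 0.8

print(light / good_flat)                   # proper correction: back to ~100

# Under-correction: stray light inflates the flat, the shadow stays dark.
inflated_flat = dust_transmission + 0.1
print(light / inflated_flat)               # below 100

# Over-correction: residual offset (un-subtracted bias/dark) in the light.
print((light + 20.0) / good_flat)          # above 100
```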

  18. 2 minutes ago, astrorg said:

    Ha ha hah a aaaaa

    As usual something comes up that makes you think! I mean those like me that cannot afford much and have to REALLY do thorough research to avoid wasting hard earned cash - and not much of it, either!

    But I do agree that Quark is 'ace' when it works.

    I'm also considering a Quark at some point in the (distant?) future - and yes, I have the same concerns about it as you do now :D

    I've seen those negative reviews and issues as well, and that is quite a large sample that behaved substandard. What it does not tell us is how many users out there are happy with their Quarks (units that really do work - or perhaps people with a substandard unit don't realize that it should work better). We have no way of knowing if QC is better nowadays, or what the chance of getting a poor one is. This is always a concern: the internet remembers things for a long time, but suppliers tend to hold stock for a long time as well.

    If it is any consolation - Combo Quarks were not around at that time, so those units are likely to be under the new QC - IF it was improved in recent times.

    How about a 4" F/10 scope with the Quark Combo?

    That one does not have a telecentric Barlow included, so with the above scope you can get both high magnification views and full disk viewing.

    If you want a full disk view at low magnification - just put on an aperture mask to make the scope F/20-F/30, without having to use a telecentric lens.

    For medium powers, a 2x telecentric will turn it into a 2000mm FL scope, which you can further stop down if you want F/25 or F/30, or use as-is at F/20.

    A 3x telecentric lens will turn it into an F/30 4" high magnification setup.

    The Quark Combo also has a much larger blocking filter - so you can use it with combinations of telecentric lenses and aperture stops to image at various scales.
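    The combinations above in one small sketch (taking a 4" scope as roughly 100 mm aperture and 1000 mm FL; the 40 mm mask is just an example value):

```python
# F/ratio combinations for a 4" F/10 scope (~100 mm aperture, 1000 mm FL)
# with telecentric lenses and/or an aperture mask, as described above.
APERTURE_MM = 100
FOCAL_LENGTH_MM = 1000

def config(telecentric=1.0, mask_mm=APERTURE_MM):
    f_ratio = FOCAL_LENGTH_MM * telecentric / mask_mm
    return f"F/{f_ratio:.1f} at {FOCAL_LENGTH_MM * telecentric:.0f} mm FL"

print(config(mask_mm=40))         # full disk, low power: aperture mask only
print(config(telecentric=2.0))    # medium power: 2x telecentric -> F/20, 2000 mm
print(config(telecentric=3.0))    # high power: 3x telecentric -> F/30, 3000 mm
```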

    • Like 1