
Posts posted by mgutierrez

  1. 7 minutes ago, vlaiv said:

    No idea of what is going on there.

    Did you have previous autopec / pec curve loaded? Maybe it just repeated earlier corrections?

    No. I'm an INDI user. This is almost the first time I've run EQMOD, and the first time I've used AutoPEC.

  2. @vlaiv, may I please ask you another question? To test and understand how EQMOD AutoPEC works under the hood, I recorded a PEC curve with EQMOD tracking only: no guiding (no capturing, no PHD2 or similar; only EQMOD tracking). As expected, the curve was completely flat while it was being generated. However, the corresponding .txt data contains non-zero values in the PE column, where I expected zeros. Do you know how AutoPEC calculates these values, and why they are non-zero even though the curve is flat?

    pec.txt:

    # EQMOD HEQ5/6 V2.00w
    !WormPeriod=479
    !StepsPerWorm=51200
    # time - motor - smoothed PE
    0 43740 -0.0275
    1 43848 -0.0287

    ...

    peccapture_EQMOD.txt:

    # EQMOD HEQ5/6 V2.00w
    # AUTO-PEC
    # RA  = 23.4058906013238
    # DEC = 90
    # PulseGuide, Rate=0.1
    !WormPeriod=479
    !StepsPerWorm=51200
    #Time MotorPosition PE
    1.000 8389308 -0.0058
    2.000 8389415 -0.0116

    ...
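For anyone reading along, the PE column can be approximately reproduced from the motor positions alone. The sketch below is my own reconstruction of that arithmetic, not EQMOD's actual code: it assumes the header values above (!WormPeriod=479, !StepsPerWorm=51200), a sidereal rate of ~15.041 arcsec/s, and defines PE as the deviation of the logged motor position from an ideal constant-rate position.

```python
# Sketch (assumption, not EQMOD source): motor-position log -> PE in arcsec.
SIDEREAL_RATE = 15.041    # arcsec of RA per second of time
WORM_PERIOD = 479         # seconds per worm revolution (from !WormPeriod)
STEPS_PER_WORM = 51200    # microsteps per worm revolution (from !StepsPerWorm)

ARCSEC_PER_STEP = SIDEREAL_RATE * WORM_PERIOD / STEPS_PER_WORM  # ~0.1407

def periodic_error(samples):
    """samples: list of (time_s, motor_position) rows from the capture file.
    Returns PE in arcsec: actual position minus the ideal constant-rate
    position, referenced to the first sample."""
    t0, p0 = samples[0]
    ideal_rate = STEPS_PER_WORM / WORM_PERIOD   # nominal steps per second
    return [(p - p0 - ideal_rate * (t - t0)) * ARCSEC_PER_STEP
            for t, p in samples]

rows = [(1.0, 8389308), (2.0, 8389415)]   # first two rows of the capture
print(periodic_error(rows))
```

On these two rows the deviation is a few hundredths of an arcsecond, the same order as the PE values in the file: the integer motor position quantises the ideal rate of ~106.9 steps/s, which may be one reason small non-zero PE values appear even for a flat curve.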

  3. On 11/10/2021 at 22:00, vlaiv said:

    You can record it via PHD2.

    Start by calibrating guide scope like you normally would - close to meridian and on equator (DEC 0).

    Then turn off guide corrections and start guide session. Make sure you have logging enabled in PHD2. In EQmod at some point press timestamp button (this is used for synchronization).

    You need to record about 1-1.5 h of data (several worm cycles). You don't need to image during this time, and the best time to do it is when the moon is full and it's clear outside. You probably won't be imaging then, so you can set aside a couple of hours for this.

    After you've finished your data-gathering run, stop guiding and close PHD2. Now run the PECPrep software (part of EQMOD - http://eq-mod.sourceforge.net/pecprep/). Load the PHD2 guide log into it and start your analysis and preparation of the PEC file.

    The alternative to all of this is to simply use EQMOD and record the PEC file while you image something. You again need to be guiding, but this time don't disable guide corrections; just have a regular guided imaging session. At some point hit the "record PE" button in EQMOD, and after several worm cycles (about an hour to an hour and a half) just start using that PEC. EQMOD will automatically do the same thing you would do manually in PECPrep.

    If you want to go PecPrep route - check youtube for tutorial on how to do it (there are several videos covering basic procedure).

    Check out this document as well:

    http://eq-mod.sourceforge.net/docs/eqmod_vs-pec.pdf

    It should give you brief overview of EQmod pec and how you can record it while imaging.

    While looking for information about PECPrep I found this thread. There's only one thing I don't fully get: why do we need to disable PHD2's guide output? If we enable the output, the corrections are also written to the log file, so the error could still be computed afterwards, no? Why can't PECPrep do that, while AutoPEC via EQMOD can?

  4. 2 minutes ago, vlaiv said:

    If you don't know ADUs have been altered after e/ADU is specified - you can never get correct e-values back

    Absolutely, you need to know that. It seems most DSLRs don't rescale (multiply by 2^16/2^ADCbits). In fact, an oversaturated flat from my Nikon shows a maximum value of 4095 in PixInsight's Statistics module in 16-bit format. That's why I need to set a 16-bit readout depth in BasicCCDParameters so that it returns a gain of ~1 e-/ADU, as some websites report.

  5. 8 minutes ago, vlaiv said:

    you capture 16000e and you end up with 1000ADU but this time - you multiply that number with 16 to scale it to 0-65535 range and you get 16000ADU. Now you calculate e/ADU conversion factor by 16000e / 16000ADU and you end up with 1e/ADU.

    And this is where I think I'm getting confused.

    The camera's ADC outputs, let's say, a number. That number is then multiplied, or not (I don't know whether by the camera itself, the driver, or something else). If the number is multiplied, for me it's no longer an ADU; other users call it a DN (digital number?). For me, an ADU is exactly the unaltered ADC output, and that's the number I've always understood the gain definition (e-/ADU) to refer to.

    In the second case, I can't say the gain is 16000 e / 16000 ADU, but rather 16000 e / (16000/16) ADU, that is, 16 e-/ADU. I could say that the "gain" is also 16000 e / 16000 DN = 1 e-/DN.
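To make the two conventions concrete, here is a tiny sketch of the numbers from vlaiv's example: the same 12-bit ADC output interpreted natively versus rescaled into a 16-bit container. The physical gain of the sensor is the same; only the stored numbers (and thus the apparent e-/unit) change.

```python
# Two bookkeeping conventions for the same exposure (numbers from the quote).
electrons = 16000      # photo-electrons collected
raw_adu = 1000         # 12-bit ADC output for those electrons

gain_native = electrons / raw_adu       # 16 e-/ADU against true ADC counts
scaled_dn = raw_adu * 16                # 2**16 / 2**12 = 16: fill 16-bit range
gain_scaled = electrons / scaled_dn     # 1 e-/DN against the rescaled numbers

print(gain_native, gain_scaled)         # 16.0 1.0
```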

    Thanks again vlaiv

  6. I completely agree and that makes sense, sure.

    But:

    Quote

    Now you take second camera - record the same thing, but by convention you decide to exploit whole 16bits of 16bit format you'll be using to record image, so you multiply value with 16 and from 4095 you get number 65520

    This is what I mean. If I run BasicCCDParameters correctly, I guess it should return 1 e-/ADU, because in the end both cameras have the same ADC and return the same values; the second one just rescales them afterwards. Sorry if I'm not explaining this well.

    I underlined "correctly" because, according to another post I'm reading, I need to set a 16-bit readout depth for cameras that do not rescale the result, as in this case.

  7. I'm confused, @vlaiv. Unity gain is 1 e = 1 ADU, independently of any rescaling that happens later. According to my test, ISO 800 gives 17 e-/ADU, while according to other websites ISO 800 is almost 1 e-/ADU. I don't take into account whether the ADU is rescaled or not; it shouldn't matter, no? I mean, gain is measured in e-/ADU (a true ADU).

     

    PS: I'm reading in another post that "readout depth" is quite a confusing name for this parameter. It seems to have the same meaning as the dropdown I see in, for example, the Statistics module. That is, in BasicCCDParameters I have to set "readout depth" to the same bit depth I'd select in Statistics (or another module) to see the true ADU values. For a DSLR that's always (always?) 16 bit, since they don't rescale. This would explain everything.

  8. Hi guys,

    I'm trying to do a sensor analysis of my Nikon D3300 dslr. BasicCCDParameters returns this output:

    [attached image: BasicCCDParameters output]

    The gain is unexpectedly high, since ISO 800 should be near unity gain. The only way I get a consistent result is by setting a readout depth of 16 bit. As far as I know, "readout depth" corresponds to the bit depth at which the camera writes the data, i.e. whether the values are rescaled or not. It seems they are not rescaled in this case: the maximum value in PixInsight's Statistics module (in 16-bit mode) is 4095, which makes me think the camera does not rescale any values.
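I can't speak for BasicCCDParameters' internals, but the conventional way to measure gain is a photon-transfer estimate from two flats and two bias frames. The simulation below, with a made-up true gain of 2 e-/ADU, shows the standard formula recovering that gain; it is a sketch of the technique, not PixInsight's code.

```python
# Photon-transfer gain sketch:
#   g = (mean(F1)+mean(F2)-mean(B1)-mean(B2)) / (var(F1-F2) - var(B1-B2))
import numpy as np

rng = np.random.default_rng(0)
true_gain = 2.0        # e-/ADU, simulation ground truth
signal_e = 6000.0      # mean electrons per pixel in the flats
bias_adu = 100.0       # pedestal in ADU
read_noise_adu = 2.0

def flat_frame():
    # Poisson photon noise in electrons, converted to ADU, plus pedestal.
    return rng.poisson(signal_e, 500_000) / true_gain + bias_adu

def bias_frame():
    return rng.normal(bias_adu, read_noise_adu, 500_000)

f1, f2, b1, b2 = flat_frame(), flat_frame(), bias_frame(), bias_frame()
gain = (f1.mean() + f2.mean() - b1.mean() - b2.mean()) / (
        np.var(f1 - f2) - np.var(b1 - b2))
print(round(gain, 2))   # ~2.0, recovering the simulated gain
```

Note that the recovered gain is expressed against whatever ADU scale the frames use, which is exactly why a rescaled 16-bit container changes the reported e-/ADU by the rescale factor.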

    I think I'm missing something but not sure what/where/why.

    Thanks in advance

  9. Thanks again, @vlaiv. My problem is definitely a lack of knowledge of some astro topics.

    I've been reading around, but honestly I'm still a bit confused by some of the terms. So, if I've understood properly, the data involved in the calculation are the sky background, the measured star flux, and the star's reported magnitude from a catalogue, right?

    Quote

    You calculate the ratio between star ADUs and background ADUs and convert that in magnitudes and you add that magnitude to magnitude of the star to get magnitude of the background sky.

    I can't grasp the details of that explanation. How can that calculation lead to the magnitude of the background sky?
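As I understand vlaiv's description, the arithmetic would look like the sketch below, with all numbers purely illustrative: the star is a flux reference, and the star-to-sky flux ratio, converted to magnitudes, is added to the star's catalogue magnitude.

```python
# Sky brightness from a reference star (all values illustrative).
import math

star_mag = 10.0          # catalogue V magnitude of the reference star
star_flux = 50_000.0     # background-subtracted star ADUs in an aperture
sky_adu_per_px = 12.0    # mean sky level per pixel, after calibration
arcsec2_per_px = 4.0     # e.g. 2 "/px image scale -> 4 arcsec^2 per pixel

sky_flux = sky_adu_per_px / arcsec2_per_px   # sky ADUs per arcsec^2

# The sky (per arcsec^2) is fainter than the star, so its magnitude is the
# star's magnitude plus the (positive) flux ratio expressed in magnitudes.
sky_mag = star_mag + 2.5 * math.log10(star_flux / sky_flux)
print(round(sky_mag, 2))   # mag/arcsec^2 for these made-up numbers
```

Because both fluxes come from the same image, the camera's gain and exposure cancel in the ratio; only the catalogue magnitude anchors the absolute scale.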

    Quote

    It does not matter if you've used some sort of filter to do this because your reference in catalog of stars will be V magnitude

    So, as I've read, the V filter's bandwidth is about 80 nm, centred on ~500 nm. Does that mean that taking this measurement through an Ha filter would fail? Just asking for my own understanding.

     

    Thanks for your patience, Vlaiv

  10. Han, I have another question. I don't know how ASTAP really makes this calculation; I thought it was based at least partially on the background level, but it seems there are more factors. In any case, I don't fully understand why (apparently) the focal ratio of the scope does not play a role. At the same exposure time, a faster (lower) f-ratio would normally give a higher background level than a slower one.

  11. On 19/04/2022 at 10:21, vlaiv said:

    Just be careful that you need to properly calibrate your sub. Use one of the channels for measurement - green is probably the best as it carries the most of luminance information (humans are most sensitive to noise in luminance data).

    Sorry, again, for resurrecting this thread.

    @vlaiv, I agree with the quoted statement. But, just to be sure we are swamping the faintest channel, wouldn't it be better to choose the channel with the lowest background signal?

  12. Thanks, @vlaiv, for your useful input.

    Some concepts are clear now, but I'm having trouble understanding something that should be basic. From your previous post:

    Quote

    Here is an example. Say that your mean background value is 900 electrons. Just from Poisson noise - we will have noise level of 30. Add some read noise to it - let's say that read noise is 2e, so total noise is sqrt(30*30+2*2) = 30.066 (see here how small read noise compared to large background noise barely affects total noise levels). Thus we have SNR of 900/ 30.066 = ~29.93.

    This really means that we can expect to see 900 +/- 30.066 as background value about 67% of the time (values between ~870 and 930).

    But if we take square that is only 100x100 pixels - that is 10000 pixels, and average those - we get SNR improvement that is equal to x100 (square root of number of averaged samples). This means that our SNR is no longer ~29.93 but rather ~2993 (increased by factor of x100)

    OK, so we have the whole background with mean = 900, and the SNR is ~29.93.

    I don't fully get the point of the SNR boost. If we measure the square patch the same way we measured the whole background, its mean should still be close to 900, and hence the noise should remain similar. I must be missing something. Thanks for your patience.
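A quick simulation of the quoted example makes the distinction visible: each pixel still scatters by about 30, but the mean of the 10,000-pixel patch scatters 100 times less, and that averaged value is what the quoted SNR of ~2993 refers to.

```python
# Per-pixel SNR vs. SNR of the patch average (example from the quote).
import numpy as np

rng = np.random.default_rng(1)
patch = rng.poisson(900, (100, 100)).astype(float)  # 100x100 background patch

pixel_snr = patch.mean() / patch.std()              # single-pixel SNR, ~30
# The scatter of the average of N samples is std/sqrt(N), so the SNR of the
# averaged background value grows by sqrt(N) = 100 for 10,000 pixels.
mean_snr = patch.mean() / (patch.std() / np.sqrt(patch.size))

print(round(pixel_snr, 1), round(mean_snr))         # ~30 and ~3000
```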

  13. 43 minutes ago, vlaiv said:

    total noise is sqrt(30*30+2*2) = 30.066 (see here how small read noise compared to large background noise barely affects total noise levels).

    That's actually my point. So I guess we should start with a sufficiently exposed light frame, right?

  14. 50 minutes ago, vlaiv said:

    Sky signal level will not depend on read noise

    Thanks for the quick reply.

    I see your point, but honestly not completely. Isn't the signal level (our measurement) affected by read noise? With a very low signal (from too short an exposure) and fairly high read noise, wouldn't our measurement be even more affected by the read noise? Even if the light pollution follows a Poisson distribution, why wouldn't that mean value be affected by the read noise, especially if the read noise is high compared to the signal?

  15. I had to rescue this thread from my bookmarks: yesterday a doubt came to mind.

    @vlaiv, when we measure (shot?) noise from the background, we are also picking up its read noise, not only shot noise, right? In other words, if my background patch has a stddev of 20 (for example), part of that 20 also comes from read noise (even if the sub is calibrated), no? So how can we be sure we are counting only the noise we actually want to swamp?
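One way to separate the two contributions, assuming the read noise is known from the sensor spec or measured from bias frames (the 5 e- below is illustrative): independent noise sources add in quadrature, so the shot-noise part can be recovered by subtracting in quadrature.

```python
# Removing a known read-noise contribution from a measured background stddev.
import math

measured_std = 20.0    # stddev of a background patch (calibrated sub)
read_noise = 5.0       # e-, illustrative; take from spec or bias frames

# total^2 = shot^2 + read^2  =>  shot = sqrt(total^2 - read^2)
shot_noise = math.sqrt(measured_std**2 - read_noise**2)
print(round(shot_noise, 2))   # ~19.36: read noise barely matters here
```

The example also illustrates vlaiv's broader point: once the background is well exposed, even a fairly large read noise changes the total only slightly.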

  16. 13 minutes ago, han59 said:

    Is this understandable?

    Pre-conditions
    1) Image is astrometrical solved. (for flux-calibration against the star database)
    2) The image background value has measurably increased above the pedestal value or mean dark value.
        If not, expose longer. This increase is caused by the sky glow.
    3) Apply on single unprocessed raw images only.
    4) Providing dark image(s) in tab darks (ctrl+A) or entering a pedestal value  (mean value of a dark)
         increases the accuracy. If possible provide also a flat(s) in tab flats. Calibrated images are also fine.
    5) DSLR/OSC raw images require 2x2 binning. For DSLR images this is done automatically.
    6) No very large bright nebula is visible. Most of the image shall contain empty sky with stars.
    7) The calculated altitude is correct. The altitude will be used for an atmospheric
        extinction correction of the star light. The altitude is calculated based on time, latitude,
        longitude. Note that poor transparency will result in lower values compared with
        handheld meters.

     

    Han

    For me, it is.

    But for those who wondered the same as me: maybe for point #2 it would be a good hint to include an example, as you did in the previous post (i.e. aiming for a sky background of 1100 given a pedestal of 1000 DN)? Maybe ASTAP could, for example, warn if the sky background is below pedestal + 10%? Or give some other hint so the user knows the image is correctly exposed.
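As a sketch of that suggested warning (the function name and the 10% threshold are my own illustration of the proposal, not ASTAP behaviour):

```python
# Proposed check: warn when the sky background is not measurably above the
# pedestal, i.e. the sub is too short for a reliable sky-brightness estimate.
def background_warning(sky_median, pedestal, margin=0.10):
    """Return a warning string if sky_median < pedestal * (1 + margin)."""
    if sky_median < pedestal * (1 + margin):
        return (f"Sky background {sky_median:.0f} is too close to the "
                f"pedestal {pedestal:.0f}; expose longer.")
    return None

print(background_warning(1050, 1000))   # warns: below the 1100 threshold
print(background_warning(1100, 1000))   # None: at/above threshold, OK
```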
