mgutierrez · Members · 56 posts
Everything posted by mgutierrez

  1. Hi all, I'm not very used to uploading images. This time I wanted to share with you this famous object (or rather, objects) from Orion. I finally managed to add it to my library. Hope you like it: https://www.astrobin.com/o5lc9o/
  2. No, I'm an INDI user. It's almost the first time I've run EQMOD, and the first time for autopec.
  3. @vlaiv, can I please ask you another question? In order to test and understand how EQMOD autopec works under the hood, I ran a test recording the PEC curve with EQMOD only tracking; no guiding (no capturing, no PHD2 or similar running; only tracking with EQMOD). While the curve was being generated it was, obviously, completely flat, as expected. However, the corresponding .txt data contains values != 0 in the PE column, where I expected zeros. Do you know how autopec calculates these values, and why they are != 0 even though the curve is flat? (See the sketch after this list for how I imagine the calculation.)

     pec.txt:
     # EQMOD HEQ5/6 V2.00w
     !WormPeriod=479
     !StepsPerWorm=51200
     # time - motor - smoothed PE
     0 43740 -0.0275
     1 43848 -0.0287
     ...

     peccapture_EQMOD.txt:
     # EQMOD HEQ5/6 V2.00w
     # AUTO-PEC
     # RA = 23.4058906013238
     # DEC = 90
     # PulseGuide, Rate=0.1
     !WormPeriod=479
     !StepsPerWorm=51200
     #Time MotorPosition PE
     1.000 8389308 -0.0058
     2.000 8389415 -0.0116
     ...
  4. Loud and clear, @vlaiv. Fully understood. Thanks for the detailed explanation. m
  5. Looking for info about PECPrep, I found this thread. There is only one thing I don't fully get: why do we need to disable guide output from PHD2? If we enable the output, the corrections are also written to the log file, so the error could be computed afterwards, no? Why is PECPrep not able to do that, while autopec via EQMOD can?
  6. I completely agree with your statements. Thanks, vlaiv; it's a pleasure to have such conversations with you.
  7. Absolutely, you need to know that. It seems most DSLRs won't rescale (multiply by 2^16 / 2^ADCbits). In fact, an oversaturated flat from my Nikon shows a maximum value of 4095 in PixInsight's Statistics module in 16-bit format. That's why I need to set 16 bit as the readout depth in BasicCCDParameters so that it returns a gain of ~1 e-/ADU, as some websites report.
  8. And this is where I think I'm getting confused. The camera's ADC outputs, let's say, numbers. Then that number is multiplied or not (I don't know whether it's the camera itself, the driver, or whatever). If the number is multiplied, for me that's not ADU any more; other users call these DN (digital numbers?). For me, ADU is exactly the ADC output without altering it in any way, and that's the number I've always thought the gain definition (e-/ADU) refers to. In the rescaled case, I cannot say that the gain is 16000 e / 16000 ADU, but rather 16000 e / (16000/16) ADU, that is, 16 e/ADU. I could say that the "gain" is also 16000 e / 16000 DN = 1 e/DN. Thanks again, vlaiv.
  9. I completely agree, and that makes sense, sure. But this is what I mean: if I run BasicCCDParameters correctly, I guess it should return 1 e-/ADU, because, in the end, both cameras with the same ADC return the same thing; the second one just rescales afterwards. Sorry if I'm not explaining it well. I underlined "correctly" because, according to another post I'm reading, I need to set a 16-bit readout depth with cameras that do not rescale the result, as in this case.
  10. I'm confused, @vlaiv. Unity gain is 1 e- = 1 ADU, independently of any rescaling that happens later. According to the test I did, ISO800 gives 17 e-/ADU; according to other websites, ISO800 is almost 1 e-/ADU. I'm not taking into account whether the ADU is rescaled or not; it should not matter, no? I mean, gain is measured in e-/ADU (a true ADU). PS: I'm reading in another post that the "readout depth" parameter is quite a confusing name. It seems it has the same meaning as the dropdown I see (for example) in the Statistics module; that is, I have to enter as "readout depth" in BasicCCDParameters the bit depth I need to select in Statistics (or another module) to see the true ADU. In the case of a DSLR, always (always?) 16 bit, since they don't rescale. This would explain everything.
  11. So you think, @vlaiv, that the analysis in the screenshot is correct, and that the ADC really gives 1 ADU per ~17 electrons at ISO800?
  12. You are right, Vlaiv; I actually meant a high raw number. Thanks for pointing it out. In any case, I think it should be near 1.
  13. Hi guys, I'm trying to do a sensor analysis of my Nikon D3300 DSLR. BasicCCDParameters returns this output: the gain is unexpectedly high, since ISO800 should be near unity gain. The only way I get a consistent result is by setting a readout depth of 16 bit. As far as I know, "readout depth" corresponds to the bit depth at which the camera writes the data, i.e. whether it is rescaled or not, and it seems it is not rescaled in this case: the maximum value from Statistics is 4095 (in PixInsight's Statistics module, read as 16 bit), which makes me think the camera does not rescale anything. I think I'm missing something, but I'm not sure what/where/why. Thanks in advance. (There is a small numeric sketch of the rescaling issue after this list.)
  14. Thanks a lot, @han59 and @vlaiv, for your replies. I need some time to digest the info and fully understand it. But basically the key point is to measure the difference in magnitudes between the sky background and the chosen star, and then compare it with (actually add it to) the reported star magnitude, if I understood it well. I will re-re-re-re-read it one more time. Thanks again! (A small sketch of the calculation as I understand it follows this list.)
  15. Thanks again, @vlaiv. Definitely, my problem is a lack of knowledge of some astro topics. I've been reading around, but honestly I'm still a bit confused by some terms. So, if I've understood properly, the data involved in the calculation are the sky background, the measured star flux and the reported star magnitude from a catalogue, right? I can't grasp the details of that explanation: how can that calculation lead to the magnitude of the background sky? Also, as I've read, the V filter bandwidth is about 80 nm centred on ~500 nm. Does that mean that doing this measurement with an Ha filter would fail? Just for my own understanding. Thanks for your patience, Vlaiv.
  16. Thanks for the reply, @vlaiv. Sorry, I don't fully get it; could you please elaborate further? The focal ratio is just an example; the same doubt applies to using filters. In the end, the problem I imagine is the lack of light compared with other setups. So, does it mean that it is enough if the image can be plate solved?
  17. Han, I have another question. I don't know how ASTAP really does the calculation; I thought it was based, at least partially, on the background level, but it seems there are more factors. In any case, I don't fully understand why (apparently) the focal ratio of the tube does not play a role. For the same exposure time, a faster (lower F) scope would normally have a higher background level than a slower one.
  18. Loud and clear, @vlaiv, thanks a lot. It absolutely makes sense.
  19. Sorry, again, for resurrecting this thread. @vlaiv, I agree with the quoted statement. But, just to be sure we are swamping the read noise even in the faintest channel, wouldn't it be better to choose the channel with the lowest background signal?
  20. I think I get it, @vlaiv. So this particular SNR improvement is not what I was describing, i.e. a signal divided by its noise. I guess I get it, and it makes sense. Thanks. Thanks also to @symmetal for adding further clarification.
  21. Thanks, @vlaiv, for your useful input. Some concepts are clear, but I'm having trouble understanding something that should be basic. From your previous post: OK, we have the whole background with mean = 900 and SNR of ~29.93. OK. I don't fully get the point of the SNR boost. If we measure the square patch in the same way as we did the whole background, its mean value should be similar to 900 and hence the noise should stay similar. I'm missing some point. Thanks for your patience. (There is a small sketch of this after the list.)
  22. That's actually my point. So I guess we should start with a sufficiently exposed light, right?
  23. Thanks for the quick reply. I see your point, but honestly not completely. Isn't the signal (our measurement) level affected by the read noise? With a very low signal (due to too short an exposure) and quite a high read noise, wouldn't our measurement be even more affected by the read noise? Even if the LP follows a Poisson distribution, why would that "mean value" not be affected by such read noise (especially if the read noise is quite high compared with the signal)?
  24. I had to rescue this thread from my bookmarks. Yesterday a doubt came to mind. @vlaiv, when we are measuring (shot?) noise from the background, we are also including its read noise, not only shot noise, right? In other words, if my background patch has a stddev of 20 (for example), part of that "20" also contains read noise (even if the sub is calibrated), no? So how can we be sure that we are counting only the noise we actually want to swamp? (There is a small numeric sketch of this after the list.)
  25. For me, it is. But for those who wondered the same as me: maybe, for point #2, it would be a good idea to include an example like the one you gave in the previous post (i.e. aiming for a sky background of 1100 given a pedestal of 1000 DN)? Maybe ASTAP could, for example, warn if the sky background is under pedestal + 10%? Or give any other hint so the user knows the image is correctly exposed. (A tiny sketch of such a check follows the list.)
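
For post 3: I don't know how EQMOD/autopec actually computes the PE column, but here is a minimal sketch, in Python, of how a periodic-error value could be estimated from the logged motor positions. It assumes PE is simply the offset between the recorded motor position and an ideal constant-sidereal-rate position, converted to arcseconds; the worm figures come from the file header, while the sidereal-rate model and the conversion are my assumptions, not EQMOD's documented behaviour.

    # Sketch only: a plausible PE estimate from peccapture_EQMOD.txt motor positions.
    # Worm values come from the file header; the "ideal sidereal rate" model is an
    # assumption, not necessarily what EQMOD/autopec does internally.
    SIDEREAL_DAY = 86164.0905            # seconds
    WORM_PERIOD = 479.0                  # s, from !WormPeriod
    STEPS_PER_WORM = 51200               # from !StepsPerWorm

    worm_turns_per_rev = SIDEREAL_DAY / WORM_PERIOD            # ~180 worm turns per RA revolution
    arcsec_per_step = (360 * 3600) / (worm_turns_per_rev * STEPS_PER_WORM)
    steps_per_second = STEPS_PER_WORM / WORM_PERIOD            # ideal sidereal stepping rate

    def estimate_pe(samples):
        """samples: list of (time_s, motor_position) pairs from the capture file."""
        t0, m0 = samples[0]
        return [(m - (m0 + (t - t0) * steps_per_second)) * arcsec_per_step
                for t, m in samples]

    # The two sample rows quoted in post 3:
    print(estimate_pe([(1.000, 8389308), (2.000, 8389415)]))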
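
For posts 7-13: a minimal numeric sketch of the rescaling issue, with made-up numbers. It assumes a sensor whose true gain is ~1 e-/ADU at ISO800, a camera that writes its 12-bit ADC output into a 16-bit file without rescaling (maximum pixel value stays at 4095, as my D3300 flats suggest), and my understanding that PixInsight keeps pixel data normalised to [0, 1] and that the readout-depth setting just scales the numbers back up. Under those assumptions, the photon-transfer gain estimate changes by roughly a factor of 2^(16-depth).

    import numpy as np

    rng = np.random.default_rng(0)
    true_gain = 1.0                        # e- per raw 12-bit count at ISO800 (assumed)
    signal_e = 2000.0                      # mean electrons per pixel in a flat patch (assumed)

    electrons = rng.poisson(signal_e, 100_000)     # shot-noise-limited flat pixels
    raw_adu = electrons / true_gain                # raw 12-bit counts (stay below 4096)

    def photon_transfer_gain(adu):
        # K [e-/ADU] ~ mean / variance for a shot-noise-dominated flat
        return adu.mean() / adu.var()

    # My understanding: the file stores raw_adu in a 16-bit container, PixInsight
    # normalises by 65535, and the readout-depth choice scales back by (2^depth - 1).
    normalised = raw_adu / 65535.0
    for depth in (12, 14, 16):
        adu = normalised * (2**depth - 1)
        print(f"{depth}-bit readout depth -> {photon_transfer_gain(adu):.2f} e-/ADU")
    # 16 bit recovers ~1 e-/ADU; 12 bit inflates the estimate to ~16 e-/ADU.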
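
For posts 14-15: a sketch of the sky-brightness calculation as I understand it from the replies, with hypothetical numbers: measure the sky background per square arcsecond and a star of known catalogue magnitude on the same calibrated frame, then apply Pogson's relation to the flux ratio.

    import math

    star_mag = 10.5           # catalogue V magnitude of the reference star (hypothetical)
    star_flux = 120_000.0     # background-subtracted star flux, ADU (hypothetical)
    sky_adu_per_pixel = 35.0  # calibrated sky background, ADU per pixel (hypothetical)
    pixel_scale = 1.4         # arcsec per pixel (hypothetical)

    sky_flux_per_arcsec2 = sky_adu_per_pixel / pixel_scale**2

    # Pogson's relation: m1 - m2 = -2.5 * log10(F1 / F2)
    sky_mag_per_arcsec2 = star_mag + 2.5 * math.log10(star_flux / sky_flux_per_arcsec2)
    print(f"sky brightness ~ {sky_mag_per_arcsec2:.2f} mag/arcsec^2")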
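
For post 21: a sketch, with illustrative numbers, of the SNR boost. A single shot-noise-limited background pixel with a mean of 900 has an SNR of about 900 / sqrt(900) = 30, but the mean over a patch of N pixels is a much better estimate of the background level, because the noise of the mean drops by sqrt(N).

    import numpy as np

    rng = np.random.default_rng(1)
    mean_level = 900
    patch = rng.poisson(mean_level, (32, 32))          # 32x32 background patch (simulated)

    per_pixel_snr = patch.mean() / patch.std()                 # ~ 900 / 30 = ~30
    snr_of_patch_mean = per_pixel_snr * np.sqrt(patch.size)    # mean over N pixels: x sqrt(N)

    print(f"per-pixel SNR ~ {per_pixel_snr:.1f}, SNR of the patch mean ~ {snr_of_patch_mean:.0f}")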
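
For post 24: a sketch, with an assumed read noise of 5 e-, of how the stddev measured on a calibrated background patch combines shot noise and read noise in quadrature, and why the read-noise contribution becomes small once the sky shot noise is several times larger, which is the point of swamping it.

    import math

    read_noise = 5.0                          # e-, assumed camera read noise
    for sky_e in (25, 100, 400, 1600):        # sky background, electrons per pixel (examples)
        shot = math.sqrt(sky_e)               # shot noise of the sky
        total = math.sqrt(shot**2 + read_noise**2)   # what the patch stddev actually shows
        excess = 100 * (total / shot - 1)
        print(f"sky={sky_e:5d} e-  shot={shot:5.1f}  measured={total:5.1f}  "
              f"read noise adds {excess:4.1f}%")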
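
For post 25: a tiny sketch of the kind of check I was suggesting; the 10% threshold and the variable names are just my guesses, not anything ASTAP actually implements.

    # Hypothetical check, not actual ASTAP behaviour: warn when the measured sky
    # background is not comfortably above the camera pedestal.
    pedestal = 1000          # DN, offset/pedestal (example value)
    sky_background = 1050    # DN, measured median background (example value)

    if sky_background < pedestal * 1.10:
        print("Warning: sky background is within 10% of the pedestal; "
              "the sub may be underexposed for a reliable measurement.")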