Everything posted by vlaiv

  1. I don't think you really need a barlow at all. A 2.9um pixel optimally samples 500nm light at F/11.6. Even if we go for the lowest wavelength of 400nm (although in reality it makes almost no difference, as shorter wavelengths are more affected by seeing) - we get F/14.5, which is still smaller than F/11 * 1.5 = F/16.5.
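
     As a quick sanity check, here is the same arithmetic as a short Python sketch (the F = 2 * pixel_size / wavelength relation and the values are the ones used above; the function name is just for illustration):

         # critical (optimum) F-ratio for a given pixel size and wavelength
         def critical_f_ratio(pixel_um, wavelength_nm):
             return 2.0 * pixel_um / (wavelength_nm / 1000.0)

         print(critical_f_ratio(2.9, 500))   # ~11.6
         print(critical_f_ratio(2.9, 400))   # ~14.5, still below F/11 * 1.5 = F/16.5
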
  2. I had a look and yes - it seems very well made. All formulae are ok. It would be good to have a general type calculator, but I'm afraid that read noise and e/ADU gain are not as easy to do in table form for many cameras (maybe just two fields instead?). On a side note - ZWO uses a 0.1db system for their gain. If we know that 139 is unity gain, then it is easy to calculate any other e/ADU value given the actual gain setting. For example - let's calculate e/ADU for a gain of 250 (it is present in the above spreadsheet). 250 - 139 = 111 (in units of 0.1db) = 11.1db. Ratio = 10^(11.1/20) = 10^(0.555) = ~3.59. Actual e/ADU is then 1 / 3.59 = ~0.2786 (not quite the 0.25 that is in the spreadsheet, but not far from it).
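
     A minimal Python sketch of that conversion, assuming the 0.1db gain units and the 139 unity-gain setting mentioned above (the function name is just for illustration):

         # e/ADU from a ZWO gain setting (gain expressed in 0.1 db units)
         def e_per_adu(gain_setting, unity_gain=139):
             db = (gain_setting - unity_gain) / 10.0   # 0.1 db units -> db
             ratio = 10.0 ** (db / 20.0)               # amplification relative to unity
             return 1.0 / ratio

         print(e_per_adu(250))   # ~0.279
         print(e_per_adu(139))   # 1.0 at unity gain
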
  3. Best to measure it, as you want darks that include any bias offset. Either take darks that you already have (from a previous session or ones you prepared for this session) - or take a single dark in the field matching the light you want to try out. Measure the mean ADU of that dark sub and use that value. Btw - electrons to ADU conversion is simple - there is a published e/ADU for the gain you are using (you can read that off the graph). Divide by that value to convert from electrons to ADU and multiply by that value to convert from ADU to electrons. Just be careful if your camera has a lower bit count than 16 bit. In that case there is an additional step between sub values and ADU (although we mean ADU when we say values measured directly from the sub). If the camera has fewer bits (like 12bit) - sub values are actually multiplied by 2^(16-bit_count). In the case of the ASI1600 - since it is a 12bit camera - all sub values are multiplied by 16 (2^(16-12) = 2^4 = 16). If your camera is 14 bit - this multiplier is 4 (2 to the power of 2).
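
     The same conversion as a small Python sketch, assuming a 16 bit container file and the bit-count padding described above (names and default values are just for illustration):

         # raw 16 bit sub value -> electrons, undoing the 2^(16 - bit_count) padding first
         def sub_value_to_electrons(value, e_per_adu, bit_count=12):
             multiplier = 2 ** (16 - bit_count)   # 16 for a 12 bit camera, 4 for 14 bit
             adu = value / multiplier
             return adu * e_per_adu               # multiply by e/ADU to get electrons
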
  4. You can do it on an uncalibrated sub, but you need to know the mean dark value:
     1. Read the mean background value in ADU
     2. Subtract the mean dark value in ADU
     3. Convert to electrons
     ... the rest is the same (it works with the "reverse" method as well: (read noise * 5)^2 = exposure_factor * (mean_background_adu - mean_dark_adu) * e/ADU)
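
     A small Python sketch of that "reverse" formula, assuming the 5x read noise criterion written above (names are just for illustration):

         # factor by which the current exposure would need to be scaled so that
         # background signal swamps read noise, measured on an uncalibrated sub
         def exposure_factor(read_noise_e, mean_background_adu, mean_dark_adu, e_per_adu):
             background_e = (mean_background_adu - mean_dark_adu) * e_per_adu
             return (read_noise_e * 5) ** 2 / background_e
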
  5. Ok, so there are definitely some strange gradients present in the image. Not sure what caused them, and it is not easy to get around them. I binned the image to improve SNR and the result is better sharpness and nicer looking stars at 100% (although not as much zoomed in as the original).
  6. I just gave it a quick look and the data looks very nice - I think a lot can be pulled out of it. I'll give it a go a little bit later and will post the results.
  7. M16 Ha

    Probably wrong term - I was thinking of taming their extent / size rather than brightness.
  8. M16 Ha

    You have tons of SNR there. Maybe some selective sharpening to tame those stars?
  9. Another thing that made a difference to the number of hits was where I was shooting the darks. If I shoot in the basement of my house - the number I get is significantly smaller. I guess the house itself and the surrounding earth (since the basement is below surface level) provide some shielding from background radiation. It also points to me not having Radon issues in my basement.
  10. I've found that the majority of impacts in darks are not in fact cosmic rays but of earthly origin. Once I created a set of darks near some tools used to clean a wood stove - a bucket and small shovel - and since those had some ash residue on them, I got a much higher incidence of hits. The conclusion was that it was due to radioactive decay in the ash (wood absorbs radioactive material) - quite possibly from the Chernobyl era. Btw - take any set of darks and stack them with the max operator instead of average and you will see in one image how many hits you had over the whole session of taking darks.
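
     A minimal sketch of that max-stack, assuming the darks are FITS files readable with astropy (the paths are placeholders):

         # max-stack a set of darks: every hit from the whole session
         # shows up in the single resulting image
         import glob
         import numpy as np
         from astropy.io import fits

         files = glob.glob("darks/*.fits")                        # placeholder path
         stack = np.max([fits.getdata(f) for f in files], axis=0)
         fits.writeto("darks_max.fits", stack, overwrite=True)
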
  11. I'm saying that eccentricity is affected in specific cases - namely different RMS in RA and DEC, correlation between errors in RA and DEC, and in general non-random behavior of the guide error. If none of those are the case - you have the same (or nearly the same) RMS in RA and DEC, and they are independent and random - then yes, you will just get round stars with increased FWHM. From the above you can see that round stars don't depend on total RMS - there are (quite often) cases where you have large RMS but still get round stars.
  12. Round stars don't depend on total RMS error. You can have RMS of 2" and still get round stars (at various image scales). In order to get round stars - RA and DEC error must be roughly the same, independent and random in nature.
  13. I haven't used DSS in quite a while, but if I remember correctly, here is what you can do. You will need to stack both stacks again. Start with the long stack, and manually select (or just note which one was used) the reference frame against which all others are registered. If I remember correctly - it is the sub in the list marked with a star, and you can use space to mark any other as the reference frame. Do the rest of the stack like you normally would. Now, when stacking the short stack - load all the short subs, but also load that long reference sub. Mark it as the reference sub (marked with a star) - but uncheck it when you come to the stacking phase - don't include it in the stack. The resulting stack should then be aligned with the first stack.
  14. Ok, so the first thing to do is to register both images against the same sub - they need to be aligned. Then load both images in ImageJ and do the following:
     1. Convert both images to 32bit precision and split into channels (do the following for each channel separately)
     2. Multiply the short image by 4 (Process / Math / Multiply) - we use 4 as that is the ratio of exposure lengths (120s / 30s = 4).
     3. Duplicate the long exposure image and run the following macro on it (Process / Math / Macro): if (v>55000) v=0; else v=1; This turns it into a "mask" - where it is zero the short exposure should be used, where it is 1 the long exposure should be used. We use ~55000 as the max is around 65000 and we want to replace all pixels that are over 90% of the max value (just to be sure - interpolation when aligning subs can make "transition" pixels that are less than max but still need to be replaced).
     4. Use Process / Image Expression Parser (macro) and enter an expression that takes the long exposure multiplied by the mask and adds the short exposure multiplied by the inverse mask.
     You can always perform the above operation - just mind that you need to choose the set of parameters for each case:
     1. The factor of multiplication for the shorter stack needs to be the ratio of exposures
     2. The mask needs to be created at about 90% or so of the max pixel value in the long exposure (and of course - the images need to be aligned the same - you can use ImageJ to align images as well, but that is another plugin and a bit more complicated procedure).
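
     For reference, here is a numpy sketch of the same combination, assuming both stacks are already registered and saved as FITS files (the file names are placeholders; the 55000 threshold and the factor of 4 are the values from this example):

         import numpy as np
         from astropy.io import fits

         # placeholder file names - both stacks must already be registered
         long_img = fits.getdata("long_stack.fits").astype(np.float32)
         short_img = fits.getdata("short_stack.fits").astype(np.float32)

         short_scaled = short_img * 4.0                  # 120s / 30s exposure ratio
         mask = np.where(long_img > 55000, 0.0, 1.0)     # 0 = saturated -> use short exposure
         combined = long_img * mask + short_scaled * (1.0 - mask)
         fits.writeto("combined.fits", combined, overwrite=True)
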
  15. I would do it in software like ImageJ with a bit of pixel math. It is as simple as loading both images and running the image expression parser / macro. You enter a single line expression and it combines the two images (the actual line depends on the data and saturation point, but if you post both stacks - I'll give you the expression and explain how I wrote it).
  16. Star cores don't come into the equation about optimum exposure time, as you can easily take just a few very short exposures at the end and use that data to replace any saturated pixels in the original longer exposure.
     Optimum exposure duration is defined by how much additional noise you are willing to accept in your image. If you, for example, image for an hour - you can split that hour in different ways. You can say: I'll just take one exposure that is one hour long. Or you might say: I'll do 12 exposures of 5 minutes each. Or perhaps 60 x 1 minute.
     What is the main difference between all of these? The level of read noise and how big it is compared to other noise sources. If you image 1 x 60 minutes - then your image will have only one dose of read noise, as you used only one sub and it was read from the sensor only one time. If you image 12 x 5 minutes - then your stack will gather 12 doses of read noise (12 reads - each read is an additional dose of read noise). 60 x 1 minute = 60 doses of read noise.
     As far as signal and noise go - everything else is the same regardless of how you decide to split the imaging time (there are other considerations like the amount of data or how likely it is that you'll throw away a sub, but we don't consider these at the moment). How much additional noise that is in the final image depends on the other noise sources and how big the read noise is compared to those. As soon as you make read noise sufficiently small compared to any of the other noise sources - you don't need to expose for longer, as any further gains are minimal.
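
     A small Python sketch of those "doses" of read noise, with assumed example values for read noise and sky rate (both numbers are placeholders for illustration only):

         import math

         read_noise_e = 1.7      # e RMS per read (assumed example value)
         sky_rate_e = 2.0        # sky electrons per pixel per second (assumed)
         total_s = 3600          # one hour of total imaging time

         for sub_s in (3600, 300, 60):
             n_subs = total_s // sub_s
             total_read = read_noise_e * math.sqrt(n_subs)   # one dose of read noise per sub
             total_sky = math.sqrt(sky_rate_e * total_s)     # same no matter how time is split
             print(sub_s, n_subs, round(total_read, 2), round(total_sky, 2))
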
  17. Don't see how practice relates to this. Just change offset or gain and you will move the histogram peak (considerably). Say your histogram peak is at 1/3 and you raise gain so that e/ADU is half of the original - now the histogram peak is suddenly at 2/3. Does this mean you need half the exposure time? Or if you change gain so that e/ADU is double the original - now the peak is at 1/6 - does the exposure have to be twice as long? Which one is it? Changing gain by a factor of 2 in either direction might change read noise by a minimal amount, and read noise is the only thing that dictates the needed sub length.
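
     To make the same point numerically, here is a tiny Python sketch: the same sky background in electrons lands at very different histogram positions depending on e/ADU (the background value is just an assumed example):

         background_e = 21800.0                      # assumed example sky background in electrons
         for e_per_adu in (1.0, 0.5, 2.0):
             peak_fraction = (background_e / e_per_adu) / 65535
             print(e_per_adu, round(peak_fraction, 2))   # ~0.33, ~0.67, ~0.17
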
  18. Could be due to that, but it can also be due to temperature and maybe even atmospheric pressure. Different atmospheric conditions come with different temperatures, and both the lens cell and telescope body are made of metal that shrinks / expands with temperature. I think that atmospheric pressure and temperature affect the refractive index (by changing air density) even more than the amount of moisture does.
  19. WR-134

     Because at some point you need to convert it to 16bit and you lose precision. You lose less precision where it matters if the image is stretched, but you still lose precision. Ok, for the sake of argument - let's do some math to see what is going on.
     Let's assume that the "fidelity" of the signal is about 0.1 (on a 0-65535 scale) at the low end of the signal - say around an absolute value of 5. So you have signal with good SNR that is 5.1 and you also have signal with good SNR that is 5.2, and those two can be distinguished. 16 bit data will put both of those at 5 after rounding - so they "posterize", or rather you can no longer distinguish them.
     Let's see what happens if you stretch your data. We will take a gamma of say 0.2 as the stretch factor (a very hard stretch).
     5.1 represented in the 0-1 range is 0.00007781982421875 (I'm copying from the calculator, so sorry for the number of digits - I don't want to round things up at this stage)
     5.2 represented in the 0-1 range is 0.000079345703125
     Now we can take these two numbers and apply gamma to them:
     5.1 after stretch is 0.15073636882582656930879380243356
     5.2 after stretch is 0.15132290938844934280215215782172
     Now all we need to do is convert those numbers to the 0-65535 range and see if we can distinguish them after rounding.
     5.1 is 9878.6586673693700462211106362855 after stretch, or 9879 after rounding
     5.2 is 9917.0981896814161298818438150042 after stretch, or 9917 after rounding
     This shows that by stretching we can save some precision - but that does not mean that you don't introduce error. Let's run things in reverse for the first number: 9879 converted to the 0-1 range is 0.1507415771484375, with inverse gamma that is 7.7833269506148273600361174641031e-5, and converted back to the 0-65535 range that is 5.1008811503549332586732699412746 = ~5.1009.
     So even with a hard stretch of gamma 5 (or 0.2, depending on which direction you look at it) - we still have some error due to rounding. It is much less than when working with linear data, but it is still there. A smaller stretch than that would raise the rounding error, and apparently the latest version of StarNet does not like overly stretched data.
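
     The same round-trip as a few lines of Python, using the same 0-65536 scaling as the numbers above (the function name is just for illustration):

         # stretch with gamma, quantise to 16 bit integers, then unstretch
         def roundtrip(v, gamma=0.2, scale=65536):
             stretched = (v / scale) ** gamma
             quantised = round(stretched * scale)          # the 16 bit rounding step
             return (quantised / scale) ** (1.0 / gamma) * scale

         print(roundtrip(5.1))               # ~5.1009 - small rounding error remains
         print(roundtrip(5.1, gamma=1.0))    # 5.0 - linear 16 bit collapses 5.1 to 5
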
  20. Sorry to say, but that is completely useless way to determine correct exposure length for astrophotography.
  21. WR-134

    That does help, but it is still not the same as 32bit.
  22. Depends what you expect. I would say that this scope is a much more refined version of a wide field scope like the ST102 (or ST120):
     - A bit more aperture (a bit less than 120mm)
     - Much better (but not perfect, even for visual) color correction
     - A bit slower and therefore easier on eyepieces, but still a wide field scope
     - A much better focuser
     If you had the idea of using an ST102 for imaging or EEVA - then this will be the better option. Far from color free, but that is not the point - the point is that it is an affordable wide field 4 - 4.5 inch refractor with good mechanics and decent optics.
  23. WR-134

     In my view - it is fairly simple. Use 32bit for all your processing and then you don't have to worry about any of that - whether you use short exposures, or medium exposures of 1-2 minutes with 15h of total imaging time (the same thing happens - as long as you have good SNR in low signal areas, 16bit will add rounding errors to it). In order to use StarNet V2 in 16 bit format - do a linear stretch so that the faint parts of the target become clearly visible (set the white point), but do it in such a way that it is reversible, so you can scale the result back and calculate a star-only version from it.
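
     A minimal numpy sketch of such a reversible linear stretch (file names and the white point value are placeholders, and the StarNet run itself is left out):

         import numpy as np
         from astropy.io import fits

         image = fits.getdata("stack_32bit.fits").astype(np.float32)   # placeholder name
         white_point = 0.05                                            # whatever makes faint parts visible
         fits.writeto("for_starnet.fits", np.clip(image / white_point, 0.0, 1.0), overwrite=True)

         # ... run StarNet on for_starnet.fits, producing starless.fits ...

         starless = fits.getdata("starless.fits").astype(np.float32) * white_point   # undo the stretch
         stars_only = image - starless
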
  24. Although I personally dislike star color and overall hue that StarTools is producing (no such thing as teal stars) - this rendition is very good.