
jimjam11


Everything posted by jimjam11

  1. Do you have a coma corrector? PDS or not you need one, and depending on which one you get this can shift the focus position even more. I wouldn't move anything until you have it all assembled. I can easily reach focus with my 150P, but the drawtube protrudes into the light path and eats a chunk out of the stars. The solution to this was chopping a bit off the drawtube.
  2. Agreed. Unless you cannot afford a mono setup, skip OSC and maximise your imaging efficiency.
  3. I have the Optolong CLS CCD for my 6D and it works OK for emission targets, but it shadows the sensor badly and adds a significant colour cast. If my target isn't an emission one I don't bother with a filter. Example of an image taken with the Optolong CLS CCD and an unmodified 6D: https://www.astrobin.com/395573/C/?_ga=2.240630720.2136164123.1596574169-711546074.1586286072&nc=user
  4. Yes it can. SGPro can plate-solve using a variety of solvers; ASTAP is the fastest (and easiest to set up) in my experience. This is how I use SGPro after my equipment has been dormant for a while:
     - Well in advance, plan my sequence of target(s), making sure rotation is consistent if I have multiple targets configured. I use either 0 or 90 degrees in almost all cases.
     - Open my mount control panel.
     - Polar align using PoleMaster.
     - Open SGPro and run connect all devices.
     - Slew to a bright star in the vicinity of my first target via my mount control panel (I normally use the same star for a month or so).
     - Take a frame and make sure focus is in the right ballpark.
     - Run solve and sync from SGPro. If the slew was miles off the first solve might fail, but SGPro defaults to blind solving when this happens.
     - Run center and rotate target for my first target.
     At that point I let SGPro do its thing and come back in the morning to put it away. If the weather has been reasonable and everything has been left set up (camera connected to the OTA) I skip the manual slew and the manual solve and sync: I run PoleMaster and then just start the sequence...
  5. I also think iTelescope is number 2. RC images seem to produce a donut-like effect around bright stars, which isn't visible in the second?
  6. Likewise. I have never seen anything suggesting the ASI cameras are vulnerable to this. I regularly turn cooling straight off at the end of a run, and the ASIAir doesn't even have a soft warm-up function!
  7. My 200PDS is terrible for light leaks as well. I resorted to curtain blackout material, which works well when double thickness...
  8. The ASIAir can also do power distribution and eliminates the need for a laptop plus cabling... Not with a QHY camera though. Doh!
  9. Previous data was Bicubic spline. Lanczos 3 was even worse. This is with Bicubic b-spline and is definitely better but still not as good as drizzled: integration_resampled_bicubic_b_spline.xisf
  10. I reran the experiment with some natively undersampled data. This was captured with an EF200/ASI1600 yielding 4.04"/px. The results are somewhat different:
     - Top left is 4.04"/px, in theory undersampled. In reality the stars don't look anything like as bad as the 3.48"/px binned data above, but they are slightly blocky. FWHM is 5.802".
     - Top right is the data resampled prior to integration, yielding 2.02"/px. FWHM is 6.145".
     - Bottom right is drizzled. FWHM is 6.190".
     - Bottom left is integrated as per the top left, but the integrated stack was then resampled. FWHM is 5.578". This is the lazy option!
     Some observations: Resampling a bunch of subs and then integrating them is time/space consuming compared to drizzle integration (in the land of PI at least); PI is very fast at drizzle integration despite it being a two-step process. Whilst my data suggests drizzle has some benefit over resampling prior to stacking (acknowledging I might/probably have done something wrong), I am struggling to see the benefit of drizzle at all unless you have a very strange equipment config or image somewhere exotic (like Mauna Kea) which can yield very small star FWHMs. Based on these results it is really hard to get undersampled data which properly benefits from drizzle, because you need the combination of large aperture, short focal length and large pixels; something like a RASA paired with an Atik 11000? I guess the HST is one such example :).
  11. I like experiments and I have never really dabbled with drizzle, so here goes! I started by using some data captured at 0.86"/px; the calibrated subs have a median FWHM of 2.34". To get these undersampled I used IntegerResample with a downsize factor of 4, which yielded 3.448"/px and visibly undersampled stars. The binned subs were then registered and integrated before being drizzle integrated (a PI workflow thing) with no weighting, no normalisation and linear fit clipping (dataset of 60 subs). The integration-only image looks badly undersampled; the drizzle (x2) integrated subs look better, yielding 1.725"/px. Next up is resampling, so I resampled the registered bin-4 subs before stacking them. I tested the Lanczos3 algorithm but seemed to get better results with Bicubic Spline, so this was chosen (screenshot shows auto; PI picked Bicubic Spline). These were then integrated, and the result doesn't look great. The resulting integrations are linked below for people to interrogate as they wish. drizzle_integration.fit integration_only.fit integration_resampled.fit
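The pixel-scale bookkeeping in the two experiments above can be checked with a few lines. The figures are taken from the posts; the relationships between binning, drizzling and image scale are general (small differences against the quoted values are just rounding of the 0.86"/px starting scale):

```python
# Image scale bookkeeping for the binning/drizzle experiment above.
# native_scale is the ~0.86"/px quoted in the post; the arithmetic is general.

native_scale = 0.862   # arcsec/px of the original data
bin_factor = 4         # IntegerResample downsize factor
drizzle_factor = 2     # drizzle scale (x2)

binned_scale = native_scale * bin_factor        # binning coarsens the sampling
drizzled_scale = binned_scale / drizzle_factor  # drizzle x2 halves the pixel scale

print(f"binned:   {binned_scale:.3f} arcsec/px")   # ~3.448, as in the post
print(f"drizzled: {drizzled_scale:.3f} arcsec/px") # ~1.724, vs 1.725 quoted
```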
  12. You can also use NINA (free), which has a built-in optimal exposure calculator.
  13. Another person attempting to build an all sky camera! I have already built the dew heater etc, but the bit I am stuck on is routing the USB cable into the sealed camera enclosure. My setup will consist of a typical weatherproof box containing the camera and a home-made dew heater + controller. I will run a 12V cable approx 6m from my AAG weather station laptop to the camera. I can use an active USB cable for the camera connection, which is tested and working without issue, but I am unsure how to get the cables into the enclosure. I think I have a few options:
     1. Cut the connector off the USB extension cable and pass it through a standard cable gland before wiring it to the camera via a terminal block?
     2. Buy some kind of fancy split cable gland, which means I can leave the cable intact. These seem very expensive and hard to source?
     3. Buy a much bigger cable gland capable of passing the intact USB connector? Not sure how I would then seal around the much smaller cable.
     4. Something else?
     Any advice on the best approach to take? TIA,
  14. When I speak to normal people who don't have an interest in astrophotography I find they ask a number of common questions, such as: Where is this object? How large is it? Are the colours real? And the most common: how much magnification is required to see it? I have therefore set about collating images into easily identifiable areas of the night sky, typically constellations. By presenting these against a widefield whole-constellation image, correctly rotated and scaled, it is possible to provide context to the objects and hopefully help answer some of these common questions. Video seems like a sensible way of presenting this data because it allows motion and the ability to see closer versions of the objects. So here are Ursa Major and Coma Berenices. Ursa Major is one of the most obvious constellations in the sky and it is littered with galaxies. It has taken me 18 months to amass the source images and there are many more to go...
  15. Excellent article. I don't have an SQM meter, but measuring from my images from last night the sky ceased darkening at approx 1am and started getting brighter from about 02:10. A whopping 1 hour of nautical (ish) darkness.
  16. You would be expected to use this with the flattener (or reducer) which exposes the right thread. You would typically use some of the spacers to reach the correct back focus for a flat field. https://www.firstlightoptics.com/reducersflatteners/william-optics-adjustable-flattener-for-zs73.html
  17. As an experiment it would be worth using the WBPP script with your data and then trying ABE/DBE again. I have always found manual preprocessing a nightmare, so I always use BPP/WBPP even if I run ImageIntegration manually afterwards (the main part where you want to tinker). If you get the same result it would prove your manual preprocessing steps were correct and probably indicate an issue with your calibration files? I use Lightroom/PS as the last step in processing, normally adjusting levels down to get the background where I want it.
  18. I guess another approach would be to test it and measure the result? Something like: focus on one filter and then take 10 minutes with each filter; then focus each filter manually and repeat. You could then measure the median FWHM for each set and compare the difference. With my ZS73 (f/5.9) it makes little difference, so I tend to manually focus with lum. With my 200PDS (f/4.5) it does, so I focus electronically and use offsets between the filters. I will probably add another EAF to my ZS73 in time to fully optimise for all filters/subs.
  19. The image looks superb by the way! Did you run through preprocessing manually or did you use BPP/WBPP?
  20. ABE (or DBE) would typically be done immediately after stacking and prior to decon, MLT etc? If edge stacking artefacts cause trouble you could crop prior to ABE/DBE. Do you have the Warren Keller book? It has some excellent processing workflows in it...
  21. 1156? How did you reach that value? Because I reckon that more closely matches where I end up with LRGB imaging! With LRGB I don't like going below 60s because the acquisition overhead starts to become troublesome (interframe delay). As a result I usually end up in the range of 1000DN above bias, so very close to you. Do you adopt that DN above bias for all gains?
  22. I wouldn't want to misquote, but I believe the model was based on a number of different methodologies for arriving at the optimum DN above bias. The post from 12/06/17 17:53 states: "However, the 'expected ADU above bias' results in the last columns are similar to those from the well established John Smith/Stan Moore formula (~10*RN*RN/gain)*16 and Jon Rista's approximation (20*RN/gain)*16. The exception seems to be that (as shown in the attached table) Jon's method tends to underestimate at low gain and overestimate at high gain, compared to the other methods." I guess its purpose is to find the minimum sub length where read noise is adequately swamped whilst preserving as much dynamic range as possible. This leads to much shorter subs than people expect. Beyond this swamping point the SNR gain from going longer per sub is minimal? Another good reference is the Dr Robin Glover presentation on YouTube:
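The two rules of thumb quoted above can be evaluated directly. The read noise and e-/ADU figures below are assumptions (roughly ASI1600 unity-gain values), not from the thread; the *16 scales 12-bit ADU up to the 16-bit values capture software reports:

```python
# The two target-background rules of thumb quoted above, in 16-bit ADU
# above bias. rn = read noise in electrons, gain = e-/ADU.
# rn=1.7, gain=1.0 are assumed approximate ASI1600 unity-gain figures.

def smith_moore_adu(rn, gain):
    """John Smith / Stan Moore formula: (~10*RN*RN/gain)*16."""
    return 10 * rn * rn / gain * 16

def rista_adu(rn, gain):
    """Jon Rista's approximation: (20*RN/gain)*16."""
    return 20 * rn / gain * 16

rn, gain = 1.7, 1.0
print(round(smith_moore_adu(rn, gain)))  # 462 ADU above bias
print(round(rista_adu(rn, gain)))        # 544 ADU above bias
```

The ~462 figure lands close to the ~450DN-above-bias optimum for gain 139 quoted from the CN thread, and the gap between the two formulas illustrates the over/underestimation the quoted post mentions.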
  23. Assuming this is at offset 50, which is typically about 800DN, your median background is 6384. This means your background skyfog is approx 5600DN above bias. Optimal at gain 139 (taken from the CN thread) is approx 450DN above bias. You are therefore overexposed to the tune of 5000DN, a lot. If you look at the unstretched sub you can see tons of stars; these are over-exposed and you lose colour in them. I ran the Binarize PI process on your sub and converted all pixels > 0.95 (> 62000 DN) to white. These are all overexposed and will clip to white. You could easily use much shorter subs, and as long as your total integration time is the same you will not get a noticeable drop in SNR. You will gain in terms of star clipping, lose fewer subs to clouds etc and probably see improved resolution. The downside is you end up with more subs, which take longer to process! As per the CN thread, an optimally exposed sub will look terrible, but this doesn't matter once stacked. This is one of my unstretched subs, 60s L, gain 0 @ f/4.5. Raw frame is top left, binarized is bottom right:
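The arithmetic above is worth making explicit: because sky background scales linearly with exposure time, the ratio of measured to optimal background tells you roughly how much shorter the subs could be. All figures are the ones quoted in the post:

```python
# Overexposure arithmetic from the post above, all values in 16-bit ADU.
median_adu = 6384         # measured median background of the sub
bias_adu = 800            # typical level at offset 50
optimal_above_bias = 450  # optimum for gain 139, per the CN thread

above_bias = median_adu - bias_adu
factor = above_bias / optimal_above_bias  # sky flux is linear in exposure time

print(f"background above bias: {above_bias} ADU")  # 5584, i.e. ~5600
print(f"overexposed by roughly {factor:.0f}x")     # ~12x
```

In other words, subs roughly a twelfth of the current length would still adequately swamp read noise while clipping far fewer stars.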
  24. I never use unity gain; 200 for narrowband, 0 for lum and 76 for RGB. 300s at unity gain is likely to be dramatically overexposed. I switch gain so I can use 60s subs for LRGB without massively overexposing. The tables linked (on Cloudy Nights) are a brilliant resource. Make sure your calibration frames match exactly. If you have the newer ASI1600 it defaults to offset 50 for all gains. It can essentially be ignored unless you go fiddling!
  25. If you use something like NINA (free) or SGPro ($99) the HFR calcs are based on the entire field, so you don't need to worry about bright or dim stars. It also gets you best focus across your entire frame as opposed to a single star at a single point of the frame.