Everything posted by vlaiv

  1. You are getting quite low FPS for some reason. Even with 12-bit capture, you should be able to pull 100 FPS. With 8-bit, you'll be limited by exposure length - 5.9ms corresponds to ~169.5 FPS, yet you are capturing only half of that. I don't think you need to go with gain 425. Try setting it to 379 next time. At such high gain settings the read noise difference is small, and it's better to avoid quantization effects if possible and keep gain in the form 135 + N*61. This is because unity gain is 135 and the gain doubles (or e/ADU halves) with every additional 61. This helps with rounding - in binary, multiplication / division by 2 is a simple bit shift. 5.9ms is quite a good exposure value.
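     Just to illustrate the arithmetic (a quick sketch in plain Python, using only the figures quoted above - unity gain 135, +61 per doubling, 5.9ms exposure; nothing camera-specific):

     # Gain ladder of the form 135 + N*61 - each step halves e/ADU.
     def gain_ladder(steps=5, unity=135, per_doubling=61):
         return [(n, unity + n * per_doubling, 1.0 / 2 ** n) for n in range(steps)]

     # Frame rate limit from exposure length alone (ignores USB / readout limits).
     def max_fps(exposure_ms):
         return 1000.0 / exposure_ms

     for n, gain, rel_e_per_adu in gain_ladder():
         print(f"N={n}: gain setting {gain}, relative e/ADU {rel_e_per_adu}")

     print(f"5.9 ms exposure -> at most {max_fps(5.9):.1f} FPS")
     # N=4 gives 135 + 4*61 = 379, the setting suggested above.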
  2. Why do you derotate a 3 minute video? What do you use to stack your recordings? AS!3 can easily handle any planetary rotation that happens in 3 minutes (and even more) on a 10" class aperture.
  3. Three things come into play here:
     1. Total imaging time
     2. Level of read noise and number of stacked frames
     3. Single exposure length vs seeing effects
     These three are tied together in some ways, so you can't optimize them independently.

     Take total imaging time. Say you image for 3 minutes - that means you gathered 180s of data, right? Not necessarily. Camera, computer and USB speed come into play here. If you used 5ms exposures, your camera is capable of 150FPS and you imaged for 3 minutes - total imaging time will be 180 x 150 x 5ms = 135s - so 2 minutes and 15 seconds of recording instead of 3 minutes. In the end, if you stack 5% of frames, you'll be stacking 5% of that 2:15 and not of 3:00.

     Level of read noise dictates the difference between one single long exposure and a stack of shorter exposures. If read noise was zero - non-existent - then, from an SNR point of view, it would not matter if you stacked 1000 x 1ms or 100 x 10ms or 10 x 100ms or 1 x 1s. Thermal signal grows linearly with time, LP signal grows linearly with time, target signal grows linearly with time - which means they all add up the same regardless of the number of subs in the total imaging time (and hence the same noise levels coming from them). Only read noise grows with the number of stacked frames - each frame has one "dose" of read noise. When read noise is non-zero, the more subs you stack (adding up to the same total time), the worse the resulting SNR will be. In planetary imaging we are forced to use short subs and lots of them. That is the reason why we want as low a read noise camera as possible. It is also the reason to set camera gain in such a way as to minimize read noise.

     Read noise comes in two parts - pre-amp and post-amp. Some of the read noise is added before signal amplification and some after. Here lies the reason why higher gain results in less read noise. Say A is the pre-amp read noise and B is the post-amp read noise. The resulting noise will be A * gain + B (where gain is in e/ADU units and a lower number means higher gain - you have fewer electrons per ADU, or more ADU units per electron). If gain is 1 e/ADU then total read noise will be A + B, but if gain is set to 0.5 e/ADU then total read noise will be A * 0.5 + B - a lower value of read noise.

     The third point has to do with how seeing works and the fact that we need to freeze it to avoid additional motion blur, so that only wavefront aberration blur remains, which we can assess for quality and accept / reject (AS!3 even does quality estimation around alignment points and accepts the parts of the sub that are good - if you turn that option on).

     From all of the above, we have a number of "rules" that we must take into account when choosing gain and exposure length:
     - Higher gain means lower read noise - good in every case
     - Shorter exposures mean a higher impact of read noise and can cause "dropped" frames or under-utilization of total imaging time, but they are essential for freezing the seeing
     - Using longer exposures can in fact be counterproductive. Yes, you minimize the impact of read noise, you ensure you have no "dropped" frames and you use the whole imaging run for frame selection - but you severely reduce the chance of a frame being sharp enough to be included in the resulting stack because of atmospheric motion blur.

     If seeing is exceptional - use longer exposures. If seeing is not good - use shorter exposures. Always keep gain high, as it reduces read noise.
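     A minimal sketch of the read noise part of the argument, with made-up example rates (the signal, sky and dark values below are placeholders, not measurements) - shot, thermal and LP noise depend only on total time, while read noise is added once per frame:

     import math

     def stack_snr(total_time_s, n_frames, signal_rate=100.0, sky_rate=50.0,
                   dark_rate=1.0, read_noise_e=2.0):
         # SNR of a stack of n_frames that add up to total_time_s (electron units).
         signal = signal_rate * total_time_s
         shot_variance = (signal_rate + sky_rate + dark_rate) * total_time_s
         read_variance = n_frames * read_noise_e ** 2   # one "dose" of read noise per frame
         return signal / math.sqrt(shot_variance + read_variance)

     for n in (1, 10, 100, 1000):
         print(f"{n:5d} subs of {180 / n:7.3f}s -> SNR {stack_snr(180, n):.1f}")
     # With read_noise_e = 0 all four lines would print the same SNR.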
  4. The Photon series also comes with a secondary optimized for imaging, but do check the type of focuser. The difference can be due to the single / dual speed focuser used on a particular model. The speed of the mirror also affects price - an F/4 primary mirror will often be more expensive than an F/5. Other than that, GSO (TS Optics Photon) based scopes always seem to be priced slightly lower than Synta (SkyWatcher) ones - because Synta was first to the market and was regarded as good by most, and GSO needed to compete with that reputation through a slightly lower price. Both make good mirrors and there won't be much to choose between them in that department (some will even say that GSO makes slightly better mirrors - but I guess that is more anecdotal than verified fact).
  5. Take any G or F class star and measure its intensity in a sub from each filter - green will have the higher ADU value if you used the same exposure length.
  6. You are quite right. I did not read the first part of the post carefully. That is more or less the norm nowadays. You run one capture application and one guide application (MetaGuide, PHD / PHD2 - I use PHD2). The two can usually talk to each other, and the imaging app will trigger a dither in the guiding app.
  7. I believe NINA can work with two cameras. You start two instances and then sync them for dithers, filter changes / refocusing and meridian flips https://nighttime-imaging.eu/
  8. Do you know how they perform compared to earlier models? I have 6.7mm and apart from ER - I quite like those (have 11mm also but that is not redesigned).
  9. Indeed, like @alacant already said - it is dust on the camera chamber window. If you remove the dust, you'll need another set of flats for lights taken after dust removal. The best way to remove it is to use one of these: (air blow bulb / air bulb / rubber dedusting ball / etc ...)
  10. Hi and welcome to SGL. The problem is possibly due to the start position. You should start in the north - level position. That means the scope pointing due north and level - parallel to the ground. Maybe a YouTube video that explains the mount operation and alignment would be of some help? Something like this: (the AZGti and AZGte are very similar and the difference is only minor).
  11. Did you try refocusing for OIII? What sort of subs / stars are you getting if you focus for that filter?
  12. Not sure that anyone would put a spherical mirror in an F/5 Newtonian. Although it won't make much difference when imaging (seeing will mask aberrations at that level), it would seriously impact visual performance on planets and the Moon. A spherical mirror won't suffer from coma - a parabolic mirror will. You should probably concentrate on collimating your scope first. The second thing to try to sort out would be the focuser. Focusers on such scopes are rather poor. I've seen people 3d print replacement focuser adapters because the originals are not quite square on the tube. The ASI224 sensor is too small to seriously notice coma. The diffraction limited field has radius F^3/90 (in millimeters), where F is the F/ratio of the scope - that means the diffraction limited field (as far as coma is concerned) has a radius of ~1.4mm. The ASI224 has a diagonal of 6mm - so for the central part of the sensor, coma would not be noticed even for planetary performance. DSO imaging is impacted by seeing. Here is an example with an F/6 telescope, x0.5 reducer and a sensor the size of the ASI224: At F/6, the diffraction limited field is 2.4mm, but with the x0.5 reducer that becomes 1.2mm on the sensor - less than the 1.4mm with your setup. No coma to be seen.
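     The rule of thumb used above, if you want to plug in your own numbers (just the F^3/90 formula, nothing more):

     # Coma-limited (diffraction limited) field radius: F^3 / 90 millimeters.
     def coma_limited_radius_mm(f_ratio):
         return f_ratio ** 3 / 90.0

     for f in (4, 5, 6):
         print(f"F/{f}: coma-limited radius ~{coma_limited_radius_mm(f):.1f} mm")
     # ASI224 diagonal is ~6mm (~3mm radius), so only the outer part of the
     # sensor falls outside the ~1.4mm coma-free zone at F/5.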
  13. You can really achieve the same or even better result by doing the following:
     1. Gather data at bin x1
     2. Separate it into starless and stars-only versions by using StarNet (create the starless version and then subtract it from the original - that creates the stars-only version)
     3. Bin the starless version x2 in software
     4. Resize the starless version back to the original size using some fancy resampling method (like Lanczos or Mitchell or similar)
     5. Add the stars back in.
     What you've done is basically wavelet-type denoising selectively applied to the nebulosity and background, and there is no need to acquire two sets of data.
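     If you want to script steps 2-5, here is a rough sketch (Python with numpy and OpenCV, mono image assumed; the file names are placeholders and the StarNet step itself is done outside this snippet):

     import numpy as np
     import cv2  # used only for Lanczos resampling

     original = cv2.imread("stack_bin1.tif", cv2.IMREAD_UNCHANGED).astype(np.float32)
     starless = cv2.imread("stack_bin1_starless.tif", cv2.IMREAD_UNCHANGED).astype(np.float32)

     stars_only = original - starless                     # step 2: stars = original - starless

     h, w = starless.shape
     h2, w2 = (h // 2) * 2, (w // 2) * 2                  # crop to even size before binning
     binned = starless[:h2, :w2].reshape(h2 // 2, 2, w2 // 2, 2).mean(axis=(1, 3))  # step 3: 2x2 average bin

     upsampled = cv2.resize(binned, (w, h), interpolation=cv2.INTER_LANCZOS4)       # step 4: resize back

     result = upsampled + stars_only                      # step 5: add stars back in
     cv2.imwrite("stack_bin1_denoised.tif", result)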
  14. If both subs are stretched the same, then it should not. A different level of stretch can impact the star separation a bit - it depends on where you put your white point in stretching and how much you stretch. In the image above, if you apply a weak stretch (the top line represents the white point level) you'll get two separate star profiles (orange bars), but if you do a strong stretch and set the white point low (bottom red line) you'll get a "joined" star profile - the bottom orange line. Even if you have the same exposure length, if binning is done by summing then ADU values will be higher in the binned image (sum of 4 pixels) - and that will make the two peaks higher in the image. If you set the white point to the same value, it can be lower than in the bin x1 image with respect to the above profile, and one image can end up having joined stars while the other has them separate. The above example is mainly due to bayer-type binning, which is slightly different from what I'm suggesting, and also because of the debayering method used. When you finish your capture and stack your bin x1 images, you can experiment with binning that stack while still linear, to see if there is a difference in resolution in that case. In PixInsight, binning is done when you select IntegerResample and set the mode to average.
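     A toy illustration of the sum vs average point (numpy, made-up numbers): the same data binned by summing sits at roughly 4x the ADU level of the average-binned version, so one fixed white point treats the two very differently:

     import numpy as np

     rng = np.random.default_rng(0)
     patch = rng.normal(1000, 50, size=(4, 4)).astype(np.float32)   # fake linear data

     summed = patch.reshape(2, 2, 2, 2).sum(axis=(1, 3))     # additive 2x2 bin
     averaged = patch.reshape(2, 2, 2, 2).mean(axis=(1, 3))  # average 2x2 bin

     print("mean ADU, sum bin:    ", summed.mean())    # ~4x original level
     print("mean ADU, average bin:", averaged.mean())  # ~original level

     white_point = 3000.0
     print("sum bin pixels at/above white point:    ", (summed >= white_point).sum())
     print("average bin pixels at/above white point:", (averaged >= white_point).sum())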
  15. It does improve overall shape of things, but you are quite right, with these settings - image on the left is clearly better.
  16. Since you are using PI - well, do two things to the binned frame: 1. Enlarge it x2 using Lanczos resampling 2. Check that the ADU values are the same between the two images (the binned version might have x4 as high ADU if it was binned additively instead of averaged) and stretch them the same, then compare them.
  17. Ah, ok, yes. Well - color camera. Binning in firmware needs to be done on the bayer matrix, so this is in effect a 4x4 bin - or no, it's not, but rather a 2x2 group of bayer matrix elements, which consists of 4x4 pixels, is binned into a single 2x2 bayer matrix element. That is another reason why you should bin in software - it avoids things like that and lets you control things. In reality things are not as bad as they might seem from the above images. After debayering, and when using a proper rescaling algorithm (not nearest neighbour like in the preview), things can and will be different. However, that is a complex topic that is not quite related to this thread (we can discuss it if you wish, as you started the thread and might be OK with that being discussed as well). Here is just a quick example of what I mean: I made a simple "two star" system - two dots blurred just a tiny bit by gaussian blur. The left image is "zoomed in" at 800% using nearest neighbour resampling. The right is an 800% zoom using cubic interpolation. The left is what happens when people say "my stars look blocky". They just look like they are made up of little squares - which people often confuse for pixels - but pixels (image pixels) are dimensionless points, they are not squares. The squares are just an artifact of the resampling method used. A similar thing happens when you zoom in on an image that is still a bayer matrix using nearest neighbour resampling (often used in previews because it is much faster computationally than other methods).
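     Not the exact images I used, but the same demo is easy to reproduce (Python with numpy and OpenCV - two slightly blurred dots, zoomed 8x with nearest neighbour vs cubic interpolation):

     import numpy as np
     import cv2

     img = np.zeros((32, 32), dtype=np.float32)
     img[15, 13] = 1.0
     img[15, 19] = 1.0
     img = cv2.GaussianBlur(img, (0, 0), sigmaX=1.2)   # blur the two dots a tiny bit

     zoom_nn = cv2.resize(img, None, fx=8, fy=8, interpolation=cv2.INTER_NEAREST)   # "blocky" stars
     zoom_cubic = cv2.resize(img, None, fx=8, fy=8, interpolation=cv2.INTER_CUBIC)  # smooth profiles

     for name, z in (("two_stars_nearest.png", zoom_nn), ("two_stars_cubic.png", zoom_cubic)):
         cv2.imwrite(name, (z / z.max() * 255).astype(np.uint8))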
  18. I asked because I think it might be a firmware binning issue more than anything. But don't worry about that for now - I advocate acquisition in bin 1 mode and binning later. I wanted to check if there is indeed such a resolution difference, as I don't think there should be if the data is binned normally.
  19. Do you have that bin x1 sub? I don't think that star separation will be affected by bin x1 / bin x2 on ASI2600 and RASA8 like that.
  20. This is excellent. I have only a few slight "objections". The core is slightly blown and the background is too smooth. I know that the second one is silly, but yes, somehow it is too smooth. It does look nice, and yet I had to check if there was clipping in the histogram - there is not, of course, it looks good. Everything looks good - almost too good, if you know what I mean (I'm talking about the background here). The third one is about stars - as you mentioned that you tried to keep a natural look - well, most stars out there are not blue: If you look in the background, most stars in your image have a teal color (and look at that smooth flat background - almost "plastic" like). Btw, don't read too much into the above "objections" - the image is very good indeed and most people won't mind or notice a single thing that I mentioned above.
  21. What is the focal length you are aiming at again? How about something like this: https://www.teleskop-express.de/shop/product_info.php/info/p10095_TS-Optics-PhotoLine-60-mm-f-6-FPL53-Apo---2--R-P-Focuser---RED-Line.html + https://www.teleskop-express.de/shop/product_info.php/info/p7943_Long-Perng-2--0-6x-Reducer-and-Corrector-for-APO-Refractor-Telescopes.html That will give you ~200mm FL at F/3.6. I've seen accounts that the x0.6 reducer by Long Perng can happily illuminate and correct an ASI183mm-sized sensor. The above combination won't break the bank like the Borg.
  22. Not sure which version he has, but here is a thread featuring Borg image in it:
  23. Should consult @alan potts on that one. I think he has Borg and has done some imaging with it at rather fast F/3.6 or something - and I think there is also CA present with that setup.
  24. With CMOS sensors, capturing in bin x2 format gives almost the same result as doing it in software. It is the exact same process - except for some edge cases, like when you have signal that is strong and clips. If you bin in firmware, it will be limited to 16bit (or whatever bit range the camera uses) - but in software you can use higher bit precision (like 32bit floating point). One advantage of binning in firmware is that you save smaller files - that is handy in some cases. On the other hand, if you bin in software, you have the flexibility to decide when and how to bin. You can bin at the end - the stacked image (probably the way to go), or you can do a different kind of binning - like split bin for example (which takes data from one frame and, instead of producing an "already stacked" binned version, produces 4 smaller split frames that you stack with the other frames in the main stacking - better for sub-pixel offset accuracy and pixel blur).
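     For anyone wondering what "split bin" looks like in practice, here is a minimal numpy sketch of the idea (my own illustration, not any particular software's implementation):

     import numpy as np

     def split_bin_2x2(frame):
         # Return four half-size frames made from the four pixel phases of each 2x2 block.
         h, w = frame.shape
         h2, w2 = (h // 2) * 2, (w // 2) * 2
         f = frame[:h2, :w2]
         return [f[0::2, 0::2], f[0::2, 1::2], f[1::2, 0::2], f[1::2, 1::2]]

     # Averaging the four split frames reproduces an ordinary 2x2 average bin:
     frame = np.arange(36, dtype=np.float32).reshape(6, 6)
     print(np.mean(split_bin_2x2(frame), axis=0))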
  25. Problem with wide field images is lens distortion. This does not usually happen with telescope images because the FOV is too small - a few degrees at most. As soon as you start doing FOVs larger than that, lens distortion kicks in. It is in fact not due to the lens but due to projection. We are trying to image a sphere (the celestial sphere, where coordinates are angles) onto the flat 2d surface of the chip. It is a bit like trying to show the globe on a 2d map - some distortion is necessary.

      Here is the FOV with a 4/3 camera and ~400mm FL scope (2.6° x 2°): Look at the background RA/DEC grid - the sides of the sensor are almost parallel to the equatorial grid in the background, although distortion is slowly creeping in (the bottom DEC line is not quite parallel to the bottom edge of the sensor - it is a bit bent). Same thing now with an APS-C sensor and photo lens: This is now a FOV spanning a dozen degrees in RA and DEC - look at the level of distortion - the RA lines are not parallel to the sides, and that is obvious - on the left side they are tilted inward to the right, and on the right side again inward, to the left this time.

      In any case, the above is an explanation of what is happening. I'm going to list software that can address this - and also caveats of using each:
      1. APP. This is paid software that is used for stacking - as far as I know, it can stitch panels into mosaics and deal with lens distortion. I haven't used it myself - that is something that I just saw (or think I saw) in the software description.
      2. Microsoft ICE. This works like a charm - but there are two issues. First, it is no longer available for download (at least it wasn't a few months ago - maybe they put it back online again). Second, it works only with 16bit images and you need your images pre-stretched for this to work - I prefer this done while the data is still linear. It also won't deal with gradients properly (it might correct them just enough to seamlessly join subs). You can find download links via the Wayback Machine, and I think people have posted saved versions of ICE here (3.1 if I'm not mistaken) - so do a search to see what comes up.
      3. Hugin. This should do the same as Microsoft ICE - except it is readily available for download as it is open source: http://hugin.sourceforge.net/ Haven't used it, but I think the same applies - you need to pre-stretch panels and deal with gradients independently of the software.
      4. ImageJ/Fiji has a number of plugins that stitch images together. This is scientific software and should deal with linear data properly. It won't deal with gradients, but it is likely that it will transform the data and let you combine it later. Not sure if it handles perspective issues. https://imagej.net/plugins/image-stitching
      I think that most of the above (except maybe APP) require the same focal length / pixel scale, so the first step would be to adjust the pixel scale on one of the images (but I'm not really sure).
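      To see why this is a projection effect rather than a lens fault, here is a small sketch (plain Python; the 400mm focal length is just an example value): a flat sensor records a gnomonic / tangent projection, so a star at angle theta off axis lands at r = f * tan(theta) rather than r = f * theta, and the mismatch only becomes noticeable over wide fields:

      import math

      focal_length_mm = 400.0   # example value only

      for theta_deg in (1, 2, 5, 10, 15, 20):
          theta = math.radians(theta_deg)
          r_tan = focal_length_mm * math.tan(theta)   # where the star actually lands
          r_lin = focal_length_mm * theta             # where an "undistorted" grid expects it
          print(f"{theta_deg:2d} deg off-axis: stretch {100 * (r_tan / r_lin - 1):5.2f} %")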