Everything posted by vlaiv

  1. I'm surprised, as I've used the ASI178 at 1300mm FL and needed only a dozen panels to get a full mosaic. With lunar imaging you don't need very high FPS, as you can image for a longer period of time - you are not restricted by planetary rotation like with Jupiter or Saturn. For that reason there is no need for a small ROI. I'm also guessing that there is not much field curvature with an F/9 scope, and the ASI178mm is only an 8.9mm diagonal sensor - again, no need for ROI to avoid aberrations.
  2. You need to stack the debayered version of your subs, where the colors are already split. Maybe you copy/pasted the above from some other website you are using, and by accident also copied their "social tools" box?
  3. Regarding the L-eXtreme recommendation - check out this thread: It appears that the L-eXtreme filter does not play well with the ASI294MC. There are flat issues, but the lights also have strange gradients in them.
  4. Good point. A 3D printed structure that integrates nicely with the camera mounting, plus some sort of support at the aperture edge, would do the trick. https://www.cloudynights.com/topic/594770-curved-spider-vanes-for-imaging/ In general, if you have a particular design, you can easily see what sort of PSF you'll get and hence the star profiles. All you need is software capable of doing FFT transforms (the power spectrum in particular). Here is an example in ImageJ with the FFTJ plugin: take any aperture image (white, value 1, is clear; black, value 0, is masked), do an FFT of it, look at the power spectrum, and blur it a bit with a Gaussian blur to simulate seeing. Stretch to see the star image.
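For anyone without ImageJ/FFTJ, here is a minimal NumPy/SciPy sketch of the same idea (my own illustrative stand-in, not the workflow from the post): build an aperture mask, take the squared magnitude of its FFT as the PSF, and blur it slightly to simulate seeing.

```python
# Sketch: PSF of an aperture mask via the power spectrum of its FFT.
import numpy as np
from scipy.ndimage import gaussian_filter

N = 256
y, x = np.mgrid[-N//2:N//2, -N//2:N//2]
r = np.hypot(x, y)

# Aperture mask: 1 = clear, 0 = blocked. Here a plain circular aperture
# with one straight spider vane across it, for illustration.
aperture = (r < N // 4).astype(float)
aperture[np.abs(y) < 1] = 0.0  # straight vane -> strong diffraction spike

# The star image (PSF) is the power spectrum of the aperture.
psf = np.abs(np.fft.fftshift(np.fft.fft2(aperture))) ** 2
psf /= psf.max()

# Gaussian blur to simulate seeing, then a log "stretch" to reveal
# the faint spike / halo structure around the core.
seeing_psf = gaussian_filter(psf, sigma=1.5)
stretched = np.log1p(1e6 * seeing_psf)
```

Swapping the straight vane for a curved one in the mask and re-running shows the spike spreading into a halo, which is the point being made in the next post.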
  5. If you bend the cables into a semicircle you'll achieve the same effect - there is no need for a special mask and extra blockage of the light. Diffraction happens at a straight edge, in the direction perpendicular to that edge. If you make the edge curved, each little section will diffract in the direction perpendicular to itself, and with a semicircle this spreads across 360°, effectively creating a halo rather than one strong spike. Just make sure you have a semicircle and not a semi-ellipse, or you'll end up with an elliptical halo around bright stars - see here for a discussion on that:
  6. I guess not everyone will want to work with AutoStakkert!3, so there is an option to get color data right away. Maybe someone just wants to shoot a movie of what the Moon or Jupiter looks like through their telescope and post it on YouTube, for example. They expect the video to be already debayered.
  7. You will probably need a barlow with that scope. With barlows, the magnification factor depends on the distance between the barlow and the sensor: increasing the distance increases magnification, and decreasing it decreases magnification. For that reason it is good to have a "barlow element" - a barlow lens you can unscrew from the barlow body and use on its own - so you can dial in the needed magnification with extensions. You'll need a x2 barlow with your scope and the ASI1600.

The optimum F/ratio is determined by pixel size and the wavelength you will image at. For full spectrum (like RGB images) we usually use 500nm as the reference wavelength (rather than the mid-spectrum 550nm or the short end of the visible spectrum at 400nm):

F/ratio = pixel_size * 2 / wavelength

For full spectrum and the ASI1600 that would be F/ratio = 3.8µm * 2 / 0.5µm (500nm) = F/15.2 ≈ F/15, or a x2 barlow used with an F/7.5 scope.

As far as ROI (region of interest) is concerned, it serves a dual purpose. With planetary imaging you want as high an FPS as possible. First you want a short exposure to freeze the seeing, which means an exposure of about 5ms (that alone gives you room for 200fps, as exposure = 1s / fps); then you want the USB connection to be able to transfer all that data, which is why you should use a USB 3.0 connection, and a fast drive (like an SSD) to record it all. In the end, if you transfer the whole image instead of just part of it, you'll be transferring much more data than you need, so it is better to use just the central part of the sensor. Most planets fit well inside a 640x480 ROI, with the exception of Jupiter if you plan to image the whole Jovian system with its moons - then you'll need a larger ROI. The Moon is often larger than the sensor, and we need a mosaic approach if we want the whole disk instead of just a certain feature.

We still use ROI here, as telescopes have diffraction limited performance only in the center of the field. Further out, depending on the telescope design, aberrations start to increase - like coma in a Newtonian or astigmatism in certain designs. Field curvature is present in APO triplets (which is why flatteners are used with large sensors). Although I'm not sure how large the aberration-free field of your scope is, I'd recommend using only the central <10mm - and that would mean keeping the ROI at up to ~1900x1600 for lunar work, even if that means more tiles for the mosaic. Do have a look around YouTube for planetary / lucky imaging tutorials to see how the capture is performed, and later, more importantly, how stacking and processing are done.
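The critical-sampling rule quoted above is easy to check numerically; this small sketch (my own illustration) applies it to the ASI1600's 3.8µm pixels and derives the barlow magnification needed on an F/7.5 scope.

```python
# F/ratio = 2 * pixel_size / wavelength (critical sampling rule from the post)
def optimal_f_ratio(pixel_size_um, wavelength_um=0.5):
    """Target F/ratio for a given pixel size and reference wavelength (both in µm)."""
    return 2 * pixel_size_um / wavelength_um

f_target = optimal_f_ratio(3.8)   # ASI1600: 3.8 µm pixels at 500nm -> F/15.2
barlow = f_target / 7.5           # magnification needed on an F/7.5 scope -> ~x2
print(f"target F/{f_target:.1f}, barlow ~x{barlow:.1f}")
```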
  8. I would advocate the use of undebayered SER as the format of choice, primarily because AS!3 uses bayer drizzle to debayer the data when stacking. This keeps the resolution corresponding to the pixel size. Other debayer methods blur the image a tiny bit, and you want the sharpest possible image to start with when processing after stacking. The speed difference can be due to storage speed. Regardless of what format is used for storage, data should be downloaded RAW from the camera - the only difference there is 8 vs 16 bit. After the data is transferred over USB, it is debayered (if you select RGB24) and written to file. If your storage disk can't support the required data throughput and there is no memory buffer to hold unprocessed subs, your capture speed will drop. In any case, using RAW SER format is the best option. As far as 8bit / 16bit goes, I'd need to run some tests to see the difference. I have a feeling that 16bit should be better - but by how much, and under which circumstances, I can't tell without checking. Using higher gain will lessen the difference between 8bit and 16bit, though.
  9. SER files can be smaller than AVI, depending on what format you are using. AVI usually uses an RGB 8bit format, which is 24 bits per pixel. You can roughly calculate the size of an AVI as height x width x number_of_subs x 3 bytes. If you record an 8bit RAW SER file, it will be x3 smaller than the AVI, as it records height x width x number_of_subs x 1 byte. If you record a 16bit RAW SER file, it will be 2/3 the size of the AVI, as it records height x width x number_of_subs x 2 bytes. SER can also record debayered data, and then it will be about the same size as the AVI - unless the AVI is compressed, in which case the AVI will be smaller. Here is a screenshot from the SER specification describing the format: It can record mono, OSC in different bayer matrix formats, or RGB / BGR debayered data.
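The size rules of thumb above can be put into a few lines of Python (an illustrative sketch for uncompressed video, ignoring header overhead):

```python
# Rough uncompressed video sizes: width x height x frames x bytes_per_pixel
def video_size_bytes(width, height, frames, bytes_per_pixel):
    return width * height * frames * bytes_per_pixel

w, h, n = 640, 480, 1000                    # example capture: 1000 frames of 640x480
avi_rgb   = video_size_bytes(w, h, n, 3)    # 8-bit RGB AVI: 3 bytes/pixel
ser_raw8  = video_size_bytes(w, h, n, 1)    # 8-bit RAW SER: 1 byte/pixel (1/3 of AVI)
ser_raw16 = video_size_bytes(w, h, n, 2)    # 16-bit RAW SER: 2 bytes/pixel (2/3 of AVI)
```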
  10. I use ImageJ - free / open source software that is Java based (so it will work on most platforms / operating systems), though there are other software packages that offer this functionality. It is as simple as loading the image, then Image / Transform / Bin and selecting the bin factor and method (average or sum).
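For those who prefer scripting, here is a NumPy sketch of the same binning operation (my own equivalent of ImageJ's Image / Transform / Bin, supporting the average and sum methods):

```python
# Software binning of a 2-D image by an integer factor.
import numpy as np

def bin_image(img, factor, method="average"):
    """Bin img by `factor` (dimensions must divide evenly by it)."""
    h, w = img.shape
    blocks = img.reshape(h // factor, factor, w // factor, factor)
    if method == "sum":
        return blocks.sum(axis=(1, 3))
    return blocks.mean(axis=(1, 3))

img = np.arange(16, dtype=float).reshape(4, 4)
avg = bin_image(img, 2)          # 2x2 average binning -> 2x2 result
tot = bin_image(img, 2, "sum")   # 2x2 sum binning
```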
  11. The Antares and Celestron F/6.3 reducers have different focal lengths - the first has 220mm and the second 240mm. There is also the question of how much of that distance is taken up by the reducer housing (8.8mm in the Antares case, as far as I can tell). You can try different spacings and calculate the effects here: http://www.wilmslowastro.com/software/formulae.htm#FR_b In the case of the Antares, enter 220mm as the focal length and keep in mind that 8.8mm is from the lens to the thread (according to this: https://agenaastro.com/antares-f-6-3-focal-reducer-schmidt-cassegrain-telescope.html )
  12. Good point, will have a look and post back my findings.
  13. @neil phillips May I propose a simple experiment? It consists of taking two flats with your scope and camera - either with a flat panel, if you own one, or just a white wall (but make sure lighting conditions don't change). Exact focus is not critical, as with flats we just need a means of getting two easy-to-measure recordings of light. Do one shot with low gain and one with high gain, making sure all other settings are the same - exposure time, etc. Scale the high gain image by the ratio of the mean ADUs of both images (so that they have the same mean ADU) and compare the standard deviations of both shots. The one with the higher standard deviation is noisier (the signal will be equal if the mean ADU is equal).
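The analysis step of that experiment is a few lines of NumPy. This sketch assumes the two flats have already been loaded as 2-D float arrays (e.g. from FITS files); the synthetic demo data below is purely illustrative.

```python
# Compare noise of two flats after scaling them to the same mean ADU.
import numpy as np

def compare_flat_noise(low_gain, high_gain):
    """Return (low_gain_std, scaled_high_gain_std); the larger is noisier."""
    scale = low_gain.mean() / high_gain.mean()
    return low_gain.std(), (high_gain * scale).std()

# Synthetic demo: same light signal, high-gain shot has lower read noise
rng = np.random.default_rng(0)
signal = 1000.0
low  = signal + rng.normal(0, 5.0, (512, 512))         # low gain, noisier
high = 4 * (signal + rng.normal(0, 2.0, (512, 512)))   # 4x gain, less noisy
lo_std, hi_std = compare_flat_noise(low, high)
```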
  14. Of course, I could be wrong like anyone else. I meant no disrespect by strongly advocating my point of view.
  15. Well yes. Stretching R, G and B individually will cause them to be stretched differently and that impacts colors in the image. Even regular combined stretch alters colors in a way (usually desaturates them a bit) - but separate stretch causes hue shift as well.
  16. That really depends on the seeing on a particular night. Start with 5% and, if the seeing is good, increase the number. AutoStakkert!3 gives a nice "seeing / sub quality" graph - in time order and sorted - and you can judge from that graph how many subs to keep. But even if you keep only 5%, that is 500 subs out of 10000, which improves SNR by a factor of ~x22. The Moon is rather bright and individual subs should already have decent SNR.
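The ~x22 figure follows from stacking improving SNR by the square root of the number of subs; a quick check of the arithmetic:

```python
# SNR gain from stacking N subs is sqrt(N)
import math

def stack_snr_gain(n_subs):
    return math.sqrt(n_subs)

total, keep_fraction = 10000, 0.05
kept = int(total * keep_fraction)            # 5% of 10000 -> 500 subs
gain = stack_snr_gain(kept)                  # sqrt(500) -> ~22.4
print(kept, round(gain, 1))
```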
  17. I'm saying that, once you dial in your exposure length - one that is short enough to freeze the seeing but not shorter than that - you can raise the gain as much as you want, until you hit saturation / clipping, and it won't change the SNR. In fact, if it were not for the gain/read noise relationship, it would not matter which gain was used, as gain is only a conversion factor. For any given S/N, applying a multiplicative constant doesn't change it, since S*c / N*c is equal to S/N - the constant cancels out.

We recommend high gain simply because of this: it is better to have 0.8e read noise than 1.5e read noise. In long exposure AP it does not really matter what the read noise is - we can lessen its impact on the final stack by choosing the sub exposure length. With planetary we don't have that luxury, as we are limited by the seeing - we can't use arbitrarily long subs. For this reason we want as low a read noise as possible. Just to reiterate: in terms of SNR, read noise is the only thing that makes a difference between a stack of short subs and a stack of long subs (or even a single very long sub) adding up to the same total time. If we had zero read noise, the SNR would be the same for the same total time, regardless of individual sub duration.

Eyes are not a very good judge of SNR - look at this: it might look like the right side is much worse as far as noise is concerned - much more grain - but in reality they have the exact same SNR, except the right side has been made brighter by multiplying the data by 3.
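A tiny numeric illustration of the "constant cancels out" point - multiplying a frame by any gain factor leaves its SNR unchanged (synthetic data, purely for demonstration):

```python
# SNR is invariant under a multiplicative gain: S*c / (N*c) == S / N
import numpy as np

rng = np.random.default_rng(1)
signal = 200.0
frame = signal + rng.normal(0, 10.0, 100_000)  # signal + noise samples

def snr(data):
    return data.mean() / data.std()

for gain in (1.0, 3.0, 10.0):
    print(gain, snr(frame * gain))  # same SNR at every gain setting
```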
  18. Hi and welcome to SGL 1. A 30mm F/4 guide scope is more than enough for a simple mount like the StarAdventurer. 2. For guiding there is zero benefit to using USB 3.0 over USB 2.0. The USB 3.0 version is good for planetary / lucky type imaging, but for guiding USB 2.0 is more than enough.
  19. These seem to be already stretched. Do you stretch your data prior to channel composition?
  20. It looks like you put too much distance between the reducer and the sensor. If I'm calculating things correctly, there is 8.8mm in the reducer itself + 50mm + 16.5 + 21 + 11 + 6.5 = 113.8mm. That makes this reducer work at F/5.5 instead of F/6.3 and requires 23.5cm of inward focus travel. Try placing the sensor 84mm away from the back of the reducer - just drop the 21mm extension and see what you get.
  21. I would like to give it a try, but I don't have software that works with XISF files. Maybe FITS for us poor souls?
  22. Love the "seagull feature" in the CaK image
  23. Yes, stars look better now. Background is a bit lighter in this version? I think I slightly prefer original background.
  24. Here is the result after background gradient removal and binning. The data is rather clean except for the issues with stars - Alnitak is wreaking havoc as usual, and some of the stars are a bit out of shape. There is also an issue with the flat calibration of the data - it did not work quite well, and the interesting thing is that it failed differently depending on the channel - it's different for red, green and blue, which you can see in the image: the red channel dominates, with issues in the corners (over-correction of vignetting). Here is a little animation I made that highlights how the flat calibration issue changes between channels: The red channel shows over-correction while green and blue show under-correction of vignetting - blue more than green. This causes the red rim and slight glow in the middle of the finished image.