Posts posted by vlaiv

  1. I would say that there is a bit of a connection between that image and the use of binning in astrophotography.

    One way binning is used on CCD sensors is to capture R, G and B information at lower resolution, for a similar reason to the one presented in the image above - the brain is more sensitive to variations in brightness than to variations in color. The image shows more than just that, though: it shows that the brain sees "in context" - in the same way it "reads" in context - Fqr exgmole, yiu shgud be abqe to rkad thws sedtjnce, although it is total rubbish :D - The brain sees the start and end of every word and its length, and based on context it is able to reconstruct what it is saying. In a similar way, in the B&W image above with the superimposed grid, the grid provides a hint of color and our brain just fills in the blanks (but it relies heavily on context - we know what color one's skin should be, what colors clothes are likely to be, and so on).

    On the other hand, binning in AP also has a different use, and I don't think the image above could be used to explain that.

  2. I guessed it was something like that - that is why I asked about drizzling. You say you feel it is better when you drizzle - can you tell me why? I'm interested because you should have better SNR without drizzling, and according to theory there is no extra detail to be gained from it.

  3. 5 hours ago, Cornelius Varley said:

    Orion (same telescopes, different brand name) give the length of the 150P tube as 26.5" (673mm) and the 150PL 45.5" (1156mm).

    This makes sense. I don't know the exact numbers, but we can do some math to arrive at an approximate figure.

    For the 150P, an F/5 scope, the FL is 750mm. The focal point of such a scope will be about 150mm outside the tube wall: SkyWatcher uses a focuser about 60mm high, it needs to be racked out a bit to reach focus, and there is at least a 50mm 2"-to-1.25" adapter.


    There is an additional 75mm between the secondary and the tube wall, so the total distance between the mirrors is about 525mm.

    From the front opening of the tube to the optical axis of the focuser is, I would say, about 120mm; the main mirror is about 15mm thick, and there is another 20mm for the main mirror cell and the back of the tube. If we add all these together - 120 + 525 + 15 + 20 = 680mm - that agrees with the quoted 673mm very well.

    A similar calculation can be done for the 150PL, and I think we would get close to its quoted length of 1156mm. It has a 1.25" focuser, so the focal point will sit closer to the tube, and the distance from the secondary to the top of the tube is probably a bit smaller too.
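    For anyone who wants to play with the numbers, here is the same arithmetic as a small Python sketch (all distances are the rough estimates from the text, not measured values):

    ```python
    # Rough Newtonian tube-length estimate using the figures quoted above.
    # All values in mm; these are estimates from the post, not measured data.

    focal_length = 750          # 150P at F/5
    focus_outside_tube = 150    # focuser height + rack-out + 2"-to-1.25" adapter
    secondary_to_tube_wall = 75

    # Remaining light path inside the tube, i.e. mirror-to-mirror distance:
    mirror_separation = focal_length - focus_outside_tube - secondary_to_tube_wall  # 525

    tube_front_to_focuser_axis = 120
    main_mirror_thickness = 15
    mirror_cell_and_back = 20

    tube_length = (tube_front_to_focuser_axis + mirror_separation
                   + main_mirror_thickness + mirror_cell_and_back)
    print(tube_length)  # 680 -> agrees well with the quoted 673mm
    ```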

  4. 2 minutes ago, emyliano2000 said:

    Yes, I stacked in DSS and I do my processing in photoshop. I only use Pi to sort them out with Blink and sometimes for stacking but I do it the long way and it takes me a while so if I only have data shot with one setup I tend to do it in DSS. I don't know much about Pi, I don't even know how to stack using the batch processing.

    I already had a go at stacking in Pi and something wasn't quite right so I gave up and did it in DSS.

    I think it would be faster to put them on a stick and post them to you. 🤣

    If you could sort out stacking in PI, that would be great - it has some advantages over DSS (at least I think so):

    - it will calibrate subs using 32-bit precision (I think DSS is limited to doing that in 16-bit - at least it was in version 3.3.2, the last one I used; there are later versions now and that might be fixed)

    - it can use Lanczos resampling when doing registration / alignment of frames - that is a big plus, I believe (DSS still uses bilinear / bicubic?)

    PI should give you a cleaner stack and tighter stars. Also, try to save your work in a 32-bit format, preferably FITS, as it is the standard for astronomy software.
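    If you are curious what the resampling kernel changes, here is a tiny sketch using Pillow - not how PI registers frames, just a way to compare Lanczos against bilinear on any image file (the filenames are hypothetical):

    ```python
    # Compare Lanczos vs bilinear resampling using Pillow (illustration only;
    # this is not PI's registration pipeline).
    from PIL import Image

    im = Image.open("sub.png")  # hypothetical input frame
    w, h = im.size

    # Upsample x2 with each kernel: Lanczos preserves fine detail and star
    # profiles noticeably better, bilinear softens them.
    im.resize((2 * w, 2 * h), Image.Resampling.LANCZOS).save("sub_lanczos.png")
    im.resize((2 * w, 2 * h), Image.Resampling.BILINEAR).save("sub_bilinear.png")
    ```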

  5. 23 minutes ago, iwols said:

    some great info there vlaiv, can't wait to try again, appreciated. when i get the ser files do i try using them in AS3 then

    It would be good to do at least dark subtraction (that includes bias).

    In order to do that, keep everything the same (don't change any settings) and just cover your scope. Shoot a short movie of about 256 to 512 frames (sorry about my binary OCD :D ). If you are using SharpCap for capture, you can limit the recording to either a total duration or a number of frames - use total duration when shooting the planet movie and number of frames for this dark movie.

    Once you have those two (a flat / flat dark would also be really good to throw into the mix, but those are taken differently - like in regular imaging, you don't need the same settings, and the exposure will depend on how strong your flat panel is), you need to open them in PIPP - Planetary Imaging PreProcessor - open source / free software for pre-processing planetary movies. There you want to do calibration / image stabilization, preserve the bayer matrix, and export as 16-bit SER.

    Once you have done that, open that SER in AS!3 and do the stacking. Tell AS!3 to debayer your SER (you can use bayer drizzle or some other debayer method).

    You can skip PIPP and the dark calibration, but you'll get better results if you do it, as it removes the bias signal (and a very, very tiny dark signal) and should give you a cleaner image in the end.
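    As a rough illustration of what that dark subtraction does under the hood (numpy, with random arrays standing in for frames decoded from the SER files):

    ```python
    # Dark subtraction sketch: average many dark frames into a master dark,
    # then subtract it from every light frame. The arrays below are
    # hypothetical stand-ins for frames decoded from SER files
    # (small frame sizes just to keep the demo light).
    import numpy as np

    rng = np.random.default_rng(0)
    darks = rng.normal(100, 5, size=(256, 128, 128))    # covered-scope frames
    lights = rng.normal(900, 30, size=(100, 128, 128))  # planet frames

    # Averaging the darks suppresses their random noise by sqrt(256) = 16,
    # leaving mostly the fixed bias (and tiny dark) signal.
    master_dark = darks.mean(axis=0)

    calibrated = lights - master_dark  # bias/dark removed from each frame
    ```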

  6. I don't know if anyone has imaged it, but I would guess there ought to be amateur images of it out "in the wild".

    At ~13.6 mag/arcsec² of surface brightness it is not that faint at all - it is a relatively uniformly lit galaxy (no prominent central core), and I would guess it is a similar challenge to, for example, M33 - which has a lower surface brightness of 14.1 mag/arcsec². M33 is readily imaged and not that difficult at all, therefore Barnard's Galaxy should be similarly easy/difficult (depends on how you look at it :D ).
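    For reference, that half-magnitude difference in surface brightness corresponds to a modest flux ratio via the usual Pogson relation - a quick check using the figures above:

    ```python
    # Flux ratio from the surface-brightness difference quoted above,
    # via the Pogson relation: ratio = 10 ** (-0.4 * (m1 - m2)).
    barnards = 13.6  # mag/arcsec^2
    m33 = 14.1       # mag/arcsec^2

    ratio = 10 ** (-0.4 * (barnards - m33))
    print(f"Barnard's Galaxy is ~{ratio:.2f}x brighter per arcsec^2 than M33")  # ~1.58x
    ```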

  7. 9 hours ago, iwols said:

    hi vlaiv the settings for the avi are

    Ok, there are a few settings that are "wrong" from what I can tell.

    - don't use AVI :D - use SER (AVI is in principle fine, but it supports compression, and one wants to avoid compression as it leads to compression artifacts; it's better to use a plain format like SER - it is also written faster, and since there is no compression option, you can't go wrong with it)

    - exposure is way too high in both (assuming units of seconds, which seems reasonable) - no wonder it looks out of focus / blurry. As I pointed out above, aim for about 5ms. The right exposure depends on something called atmospheric coherence time, which is not easy to judge, but it is the time over which the distortion of the image due to atmospheric seeing does not change - it depends on the coherence length and the speed at which the atmosphere is moving (this is why the jet stream is bad - it moves air along at great speed). It is in the range of a few ms up to 10ms or so. When the atmosphere is really steady you can go to 20ms or even 30ms. Expressed in units of seconds these numbers are 0.005-0.01, and occasionally 0.02-0.03 when you have excellent seeing. In contrast, your exposure times are 0.3s and 4.13s.

    - apply high gain - with higher gain you have lower read noise, and you want read noise as low as possible because it helps when you have short exposures (the difference in SNR between one long exposure and a few short exposures of the same total time comes down to read noise - the smaller it is, the smaller the difference, and you want SNR as high as possible). For the 178 sensor, a gain of about 270 will give you the lowest read noise (according to the ZWO graph).

    - I would not use auto white balance; set both red and blue to their default values - you can white balance in processing

    - if you have auto exposure and the like turned on, just turn them off and set the exposure manually to something like 5, 6 or 7ms (even 10ms if you have steady skies)

    - Pan/Tilt tells me that you were not at the center of the sensor. This is important, as the image is sharpest on the optical axis; as soon as you start moving off axis there will be off-axis aberrations - with an SCT there will be coma (same as with parabolic newtonians). This of course holds for a well collimated scope. For this reason you want to be on the optical axis when capturing.

    - turn on high speed mode - you want to capture as many frames as you can. With this camera, if you have an SSD and a good USB 3.0 controller, you should be able to do about 200fps. Do that for up to four minutes and you should gather somewhere around 40000-45000 subs. If you stack the top 5-10% of those, you should improve your SNR by a factor of about x50-x60 (see the sketch below) - good enough for serious wavelet sharpening without bringing too much noise into view.
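    A back-of-the-envelope version of that stacking estimate - stacked SNR improves roughly with the square root of the number of frames kept:

    ```python
    # SNR improvement from stacking ~ sqrt(number of frames kept).
    import math

    fps = 200
    minutes = 4
    total_frames = fps * 60 * minutes  # 48000

    for keep in (0.05, 0.10):
        kept = total_frames * keep
        print(f"keep {keep:.0%}: ~x{math.sqrt(kept):.0f} SNR improvement")
    # keep 5%: ~x49, keep 10%: ~x69
    ```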

    HTH

  8. 9 hours ago, emyliano2000 said:

    Maybe I should increase the exposures for the faint ones. I had a search online and found that I can go up to 600sec with the qhy183m and the AT106 in narrowband under my sky conditions. I remember a while ago I even did 900sec with a dslr on a 1000mm f5 newtonian and I got really good results on targets close to Zenith.

    Maybe I should even try some 900sec to see what I get, I'm pretty sure the mount can handle it, since I put it on a pier my guiding is great.

    The difference between shorter and longer subs (for the same total imaging time) depends on read noise. The qhy183m already has quite low read noise, so I doubt you will see much difference - there will be some, but not as much as one would hope.

    What you can do with this data set to improve things is to look at binning x2. The qhy183m has rather small pixels, and if I'm correct the AT106 has 690mm FL, which gives a 0.72"/px sampling rate - and that is almost certainly oversampling. Bin x2 will give you an x2 boost in SNR, which is equivalent to gathering another x3 subs on top of what you have for each channel.

    This would be very easy to try out - take your stacks in all three wavelengths, do an integer resample in PI (select the average method and reduce by x2), then see what the noise looks like and try processing that. A sketch of the idea is below.
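    A minimal sketch of the sampling-rate arithmetic and of what 2x2 average binning does (the 2.4µm pixel size for the qhy183m is assumed from its spec sheet):

    ```python
    # Sampling rate and 2x2 software binning (average method), as a sketch.
    import numpy as np

    pixel_um = 2.4         # qhy183m pixel size (assumed from spec)
    focal_length_mm = 690  # AT106

    arcsec_per_px = 206.265 * pixel_um / focal_length_mm
    print(f'{arcsec_per_px:.2f}"/px')  # ~0.72"/px

    def bin2x2(img: np.ndarray) -> np.ndarray:
        """Average 2x2 blocks: per-pixel noise halves, so SNR doubles."""
        h, w = img.shape
        return img[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

    stack = np.random.default_rng(1).normal(0, 1, (1024, 1024))
    print(stack.std(), bin2x2(stack).std())  # binned std is ~half the original
    ```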

     

  9. 4 minutes ago, emyliano2000 said:

    I have the same number of exposures for each of them, how come there is so much noise in the Sii and Oiii?

    There is no guarantee that the light emitted at SII and OIII wavelengths will be of the same intensity as Ha. Each target has its own "signature" - both in the shape of the gas distribution and in how bright it is at each particular wavelength.

    You don't need any particular settings to handle this - what you need is to get more data at the fainter wavelengths until you reach acceptable SNR. If you have a "limited" budget of imaging time, then since Ha in general tends to be the strongest signal, take more subs in OIII and SII to get a smoother image.

    Another trick you can use is to take Ha as a luminance layer and compose in a similar fashion to LRGB - in this case it would be H-SHO instead of L-RGB (a rough sketch is below). This depends on the target: you need to examine your stacks at each wavelength to see whether Ha really does cover everything.
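    A very rough sketch of that H-SHO idea, with numpy arrays standing in for already stretched, [0,1]-normalized stacks (real processing would of course involve careful stretching and color balancing first):

    ```python
    # H-SHO composition sketch: SHO mapped to RGB, with Ha reused as luminance.
    # The arrays are hypothetical stand-ins for stretched, normalized stacks.
    import numpy as np

    rng = np.random.default_rng(2)
    ha, sii, oiii = (rng.random((512, 512)) for _ in range(3))

    rgb = np.dstack([sii, ha, oiii])  # classic SHO -> RGB channel mapping

    # Swap in Ha as luminance while keeping the SHO chrominance ratios.
    lum = rgb.mean(axis=2, keepdims=True)
    hsho = np.clip(rgb * (ha[..., None] / (lum + 1e-6)), 0.0, 1.0)
    ```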

  10. I had a look at the Jupiter recording, and here are a few suggestions:

    1. use x2 - x2.5 barlow / telecentric for optimum sampling

    2. keep exposure around 5ms range

    3. shoot raw video in .ser format (skip avi if at all possible)

    4. use gain setting at around 270

    5. use ROI to maximize the captured frame rate (as in your Jupiter video - 640x480 is quite enough to capture a planet)

    6. Use 8-bit capture if the image is not over-exposed - another trick to maximize fps (at 5ms and 8-bit you should be able to hit around 200fps)

    7. Use up to 4 minutes of imaging run per planet. If everything is good you should end up with a couple of gigabytes of .ser per capture (yes, quite a lot of data)

    8. Shoot darks - at least 256 to 512 frames at the same settings with the scope covered (this should not take long - a few seconds at 200fps)

    9. If you can - do flats and flat darks also

    10. When shooting, make sure the planet is centered in the sensor FOV and the selected ROI is at the center of the sensor (as close to the optical axis as possible)

    11. Use PIPP to calibrate your movie and save it as 16-bit SER (even if you shot an 8-bit movie). Don't debayer in PIPP - keep the "protect bayer matrix" option turned on. Out of all the options you want only a few in principle: don't debayer / protect bayer matrix, calibration, image stabilization (optional) and save in 16-bit SER format

    12. Use AS!3 to stack your result

  11. 4 hours ago, emyliano2000 said:

    In terms of pointing the Sii in the image, to be honest I could only do it if I look at the Sii stack.

    That was a trick question :D , sort of.

    I've noticed that many people like this sort of "SHO palette", but in my view it misses the point. We use three colors to represent three different wavelengths of light. Once you start changing the hues of this color combination to be more pleasing to the eye, you lose the ability to differentiate the wavelengths, and hence the respective elements, in the image. Although you get a more pleasing result, you lose the "scientific", or rather, in our case, informative value.

    Green looks unnatural in astro images, and most people have decided to avoid it even in false-color narrowband. The predominant combination now seems to be blue for OIII, yellowish for Ha (rather than green) and red for SII. To make things more pleasing to the eye, the hues of these colors are also shifted a bit. Most people aim for "sky blue" rather than "sea blue" for OIII - much like in your second image - golden yellow for Ha, and red/brown for SII.

    I think you've got the OIII and Ha colors/shades really well in the second image, but I think you are missing the SII, or brown/red, component - hence the trick question.

  12. 4 minutes ago, Buzzard75 said:

    Thank you! This was my understanding on the purpose of the encoders as well and that it was probably the better choice for me. I don't want to say money isn't a concern, but I can afford it. If it's the only mount I will buy for a long time, I'm also willing to make the investment. The only thing I wasn't particularly clear on was how much of an improvement there would be on the length of the subs due to the improved tracking accuracy. I doubt that's something anyone would be able to give a detailed answer on without doing some testing. I know that at longer focal lengths the error of even a few arcseconds can make a huge difference in elongation of stars.

    You could try to estimate what sort of improvement you are going to get, if you have enough info on how the mount behaves.

    I'm not sure if anyone has published related info on the CEM40EC, but there are at least a few posts on how the CEM60EC behaves (and the CEM120EC, for that matter). I think it's safe to say that the CEM40EC will perform similarly.

    Start by calculating the sampling rate of your setup (arcseconds/px) and doing some basic measurements of FWHM. For elongation to start showing, you need something like a 10-20% difference between the two axes. I'll do a simple calculation as a guide, on pretty much made-up (but still plausible, based on real numbers) data.

    You are shooting at 300mm FL and using apertures below 70mm, so FWHM should be around 3.5" or so. A PE of about 0.7" over the duration of the exposure is going to be the upper limit (20% of FWHM).

    The CEM40 is listed as having up to 7" peak-to-peak PE over 400s. Roughly speaking, if we assume a perfect sine wave, those 7" will be covered in half that time, so we can say that for exposures of 20s or so you will always have perfect stars. If you relax what you perceive as a round star and allow about 50% (so it's obviously elongated, but still not "streaking"), you can do one-minute exposures with the non-EC version.

    In contrast to this, the EC version has its error listed as 0.25" RMS over 400s - so you can safely go with much longer exposures (half a dozen minutes or more) and you won't see any elongation due to PE. With this mount, PA error and atmospheric refraction will be the main sources of error.
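    Here is that estimate as a quick sketch. The sine-wave PE is an assumption, as above; the worst-case drift rate of A·sin(2πt/T) is 2πA/T, which is a bit more pessimistic than the average-rate figure used in the text:

    ```python
    # Unguided exposure estimate under an assumed sinusoidal periodic error.
    # Worst-case drift rate of A*sin(2*pi*t/T) is 2*pi*A/T.
    import math

    p2p_arcsec = 7.0  # CEM40 quoted peak-to-peak PE
    period_s = 400.0  # worm period
    fwhm = 3.5        # expected star FWHM at short FL / small aperture

    amplitude = p2p_arcsec / 2
    worst_rate = 2 * math.pi * amplitude / period_s  # ~0.055 "/s

    for tolerance in (0.2, 0.5):  # 20% of FWHM = tight, 50% = relaxed
        max_drift = tolerance * fwhm
        print(f"{tolerance:.0%} of FWHM -> ~{max_drift / worst_rate:.0f}s exposure")
    # 20% -> ~13s, 50% -> ~32s worst case; the rougher average-rate estimate
    # used in the text gives ~20s and ~60s.
    ```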

  13. Let me have a go at this, although I don't have experience with such mount.

    I think that encoders are a good idea in your case and that they are worth it. Are they worth the money difference? I can't be the judge of that - it depends on your budget and whether you can afford them. If guiding were an option, I would say get the plain version and guide, but since you don't want to get into that yet, encoders are a very good thing.

    They won't help you with PA, but PA errors are quite often much smaller than people believe. The main thing limiting your unguided exposure time with such mounts is periodic error, and that is exactly what the encoders handle.

  14. Not necessarily out of focus, but it might be.

    What sort of exposure was this? If it is anywhere over 10ms it is going to appear out of focus, but it will actually be motion blur due to seeing. If you want to do, for example, 20ms or 30ms exposures, you need really steady seeing, and even then you will get a handful of good frames out of a bunch that look blurry.

    If you want to stop the seeing, you need to go with short exposures - and don't worry if the image looks noisy; it is supposed to look noisy, and that is what the stacking is for.

    Don't worry about the histogram being here or there - those rules don't apply to lucky imaging.

  15. 22 minutes ago, Stub Mandrel said:

    Diffuse and rather low daylight.

    See if I'm getting this right?

    As (in principle) the only difference between the sensors is the bayer filter the QE should be the same and to a first approximation, the mono cam will collect about three times as many photons per pixel per unit time.

    Any other differences come from the electronics.

    So let's assume that at a particular level of white light you get 100e on the mono cam and 33e on the colour.

    At their maximum gains this gives 213 ADU and 305 ADU respectively, so the colour will appear over half a stop brighter if these are scaled directly to display levels, even though the mono image has three times the 'real' quantisation depth?

     

    Almost :D

    The thing with the Bayer matrix is that it does not divide the 400-700nm range into exact thirds. It's actually a bit more complicated, as this QE curve for the ASI120MC shows:

    [graph: ASI120MC QE curves for the R, G and B channels]

    From this graph you can see that all three colors have some sensitivity over the whole 400-700nm range.

    The Bayer matrix also lowers QE compared to mono by about 10% or so, as can be seen, for example, in this QE graph:

    [graph: QE comparison, mono vs color version of the sensor]

    But you are right about what the intensity will look like for the same exposure at max gain settings - converted to ADU, even if a green pixel collects only 1/3 of the electrons of the mono variant, applying the respective e/ADU values for max gain leaves you with a roughly 50% higher ADU value, and it will look brighter on screen if scaled to the same range.
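    Re-expressing the quoted example in code (the e/ADU values here are back-derived from the quoted 213 and 305 ADU figures, not taken from a datasheet):

    ```python
    # ADU comparison implied by the quoted example: same light, max gain.
    # The e/ADU values are back-derived from the quoted ADU figures.
    mono_e, colour_e = 100, 33      # electrons collected per pixel
    mono_e_per_adu = 100 / 213      # ~0.47 e/ADU (implied)
    colour_e_per_adu = 33 / 305     # ~0.108 e/ADU (implied)

    mono_adu = mono_e / mono_e_per_adu        # 213 ADU
    colour_adu = colour_e / colour_e_per_adu  # 305 ADU
    print(colour_adu / mono_adu)  # ~1.43x, i.e. roughly half a stop brighter
    ```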

    Of course, the point of camera sensitivity is not how bright things look on screen, since you can adjust brightness (apply a different range scaling, or a non-linear histogram transform). The point of sensitivity is the SNR achieved in a given time with a light source of set intensity (other things being equal - sampling rate, aperture and so on).

     

     

  16. To me - the second one, as far as color rendition goes.

    The first one in terms of denoise control - the second one is full of denoising artifacts. The first one has some of them too, but the second one has too many.

    Also, are the spikes artificial? They are way too tight to be natural diffraction spikes. If they are real, then they have been sharpened quite a bit and stand out from the rest of the image, which is not as sharp.

    My guess is that they are artificial, because some stars have them in the first image but not in the second. I personally don't like synthetic diffraction spikes.

    Although this seems to be a rather popular way of rendering SHO, I have a question for you - can you point out the regions of SII in your image?
