Everything posted by vlaiv

  1. This sort of thread makes me appreciate EP collections more and more
  2. If you have just one eyepiece, then I guess almost any will do. Once you take comparison out of the equation, the EP you have is the sharpest, the widest, and the one with the least aberrations ... it's simply the best of the "lot". A cheap zoom in the 8-27 range + barlow, for example.
  3. If you could sort out stacking in PI, that would be great - it has some advantages over DSS (at least I think so): it calibrates subs using 32-bit precision (I think DSS is limited to doing that in 16-bit - at least it was in version 3.3.2, the last one I used; there are later versions now and that might be fixed), and it can use Lanczos resampling when doing registration / alignment of frames, which is a big plus I believe (DSS still uses bilinear / bicubic?). PI should give you a cleaner stack and tighter stars. Also try to save your work in 32-bit format, preferably FITS, as it is the standard for astronomy software.
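     A minimal sketch of that last point - saving a stack as 32-bit floating point FITS with astropy (the array and file name below are just placeholders for illustration):
     ```python
     import numpy as np
     from astropy.io import fits

     # placeholder for an already calibrated / stacked image
     stack = np.random.rand(1000, 1500).astype(np.float32)

     # write as 32-bit float FITS so no precision gained during stacking is thrown away
     hdu = fits.PrimaryHDU(data=stack)
     hdu.header['COMMENT'] = 'Stack saved as 32-bit floating point'
     hdu.writeto('stack_32bit.fits', overwrite=True)
     ```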
  4. Lately I use VS Code (on both linux and windows) - it feels lightweight enough and powerful enough for my needs ...
  5. Such a tease! I at least expected to see that first line of your Python code
  6. It would be good to do at least dark subtraction (that includes bias). In order to do that, you need to keep everything the same (don't change any settings), just cover your scope and shoot a short movie of about 256 to 512 frames (sorry about my binary OCD). If you are using SharpCap for capture, you can limit the recording either to a total duration or to a number of frames - use total duration when shooting the planet movie and number of frames for this dark movie.
     Once you have those two (a flat / flat dark would also be really good to throw into the mix, but those are taken differently - as in regular imaging, you don't need the same settings and the exposure will depend on how strong your flat panel is), open them in PIPP - Planetary Imaging PreProcessor - open source / free software for pre-processing planetary movies. There you want to do calibration and image stabilization, preserve the bayer matrix, and export as 16-bit SER. Once you have done that, open that SER in AS!3 and do the stacking. Tell AS!3 to debayer your SER (you can use bayer drizzle or some other debayer method).
     You can skip PIPP and dark calibration, but you'll get better results if you do it, as it removes the bias signal (and a very, very tiny dark signal) and should give you a cleaner image in the end.
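     For illustration only, here is roughly what the dark calibration step amounts to, sketched with numpy on made-up frame stacks (PIPP handles the SER reading / writing and applies this per frame for you):
     ```python
     import numpy as np

     # made-up data: raw light frames and dark frames of shape (n_frames, height, width),
     # taken with identical settings, still un-debayered
     lights = np.random.poisson(200, size=(100, 480, 640)).astype(np.float32)
     darks = np.random.poisson(20, size=(256, 480, 640)).astype(np.float32)

     # master dark = per-pixel average of the dark movie (contains bias + tiny dark signal)
     master_dark = darks.mean(axis=0)

     # subtract it from every light frame to remove bias / dark signal
     calibrated = lights - master_dark
     ```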
  7. Don't know if anyone imaged it, but I guess there ought to be amateur images of it out "in the wild". At ~13.6 mag/arcsec2 of surface brightness it is not that faint at all - it is a relatively uniformly lit galaxy (no prominent central core), and I guess it should be a similar challenge to M33 for example, which has a lower surface brightness at 14.1 mag/arcsec2. M33 is readily imaged and not that difficult at all, so Barnard's Galaxy should be similarly easy/difficult (depending on how you look at it).
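     A quick check of that comparison using the standard magnitude-to-flux relation (only the two surface brightness figures quoted above are used):
     ```python
     # surface brightness in mag/arcsec^2 - a lower number means brighter
     barnards_galaxy = 13.6
     m33 = 14.1

     # flux ratio corresponding to the 0.5 mag difference
     ratio = 10 ** (-0.4 * (barnards_galaxy - m33))
     print(ratio)  # ~1.58, so Barnard's Galaxy is ~60% brighter per arcsec^2 than M33
     ```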
  8. Ok, there are a few settings that are "wrong" from what I can tell:
     - Don't use AVI - use SER. AVI is in principle fine, but it supports compression, and you want to avoid compression as it leads to compression artifacts; SER is a plain format that is also written faster, and since it has no compression option you can't go wrong with it.
     - Exposure is way too high in both (assuming units of seconds, which seems reasonable) - no wonder it looks out of focus / blurry. Like I pointed out above, aim for about 5ms. The actual exposure time depends on something called atmospheric coherence time, which is not easy to judge, but it is the time over which the distortion of the image due to atmospheric seeing does not change - it depends on the coherence length and the speed at which the atmosphere is moving (this is why the jet stream is bad - it moves air along at great speed). It is in the range of a few ms up to 10ms or so; when the atmosphere is really steady you can go to 20ms or even 30ms. Expressed in seconds these numbers are 0.005-0.01, and occasionally 0.02-0.03 when you have excellent seeing. In contrast, your exposure times are 0.3s and 4.13s.
     - Apply high gain. With higher gain you have lower read noise, and you need as low a read noise as possible because it helps when you have short exposures (the difference in SNR between one long exposure and a few short exposures of the same total time comes down to read noise - the smaller it is, the smaller the difference, and you want as high an SNR as possible). For the 178 sensor, a gain of about 270 is going to give you the lowest read noise (according to the ZWO graph).
     - I would not use auto white balance - set both red and blue to their default values; you can white balance in processing.
     - If you have auto exposure and such turned on, turn them off and set exposure manually to something like 5, 6 or 7ms (even 10ms if you have steady skies).
     - Pan/Tilt tells me that you were not in the center of the sensor. This is important, as the image is sharpest on the optical axis; as soon as you start moving off axis there will be off-axis aberrations - with an SCT there will be coma (same with parabolic newtonians). This of course holds for a well collimated scope. For this reason you want to be on the optical axis when doing the capture.
     - Turn on high speed mode - you want to capture as many frames as you can. With this camera, if you have an SSD and a good USB 3.0 controller, you should be able to do about 200fps. Do that for up to four minutes and you should gather somewhere around 40,000-45,000 subs. If you stack the top 5-10% of those you should improve your SNR by a factor of about x50-x60 (see the quick check below). That is good enough to do serious wavelet sharpening without bringing too much noise into view.
     HTH
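     Quick check of that last figure - stacking N frames improves SNR by roughly the square root of N:
     ```python
     import math

     frames_captured = 40000  # roughly four minutes of capture at the rates discussed above

     for keep_fraction in (0.05, 0.10):
         kept = frames_captured * keep_fraction
         print(keep_fraction, round(math.sqrt(kept)))
     # top 5% -> ~45x, top 10% -> ~63x, i.e. the quoted x50-x60 improvement
     ```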
  9. The difference between shorter and longer subs (for the same total imaging time) depends on read noise. The QHY183M already has quite low read noise, so I doubt that you will see much difference - there will be some, but not as much as one would hope. What you can do with this data set to improve things is to look at binning x2. The QHY183M has rather small pixels, and if I'm correct the AT106 has 690mm FL, which gives a 0.72"/px sampling rate - and that is almost certainly oversampling. Bin x2 will give you a x2 boost in SNR, which is equivalent to having x4 the number of subs (another x3 on top of what you have) for each channel. This would be very easy to try out - take your stacks in all three wavelengths and do integer resample in PI (select the average method and reduce by x2), then see what the noise looks like and try processing that.
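     If you want to try the bin x2 idea outside of PI, this is a minimal numpy sketch of 2x2 average (software) binning - the same idea as integer resample with the average method (image dimensions below are made up):
     ```python
     import numpy as np

     def bin2x2_average(img):
         """Average-bin a 2D image by a factor of 2 in each axis."""
         h, w = img.shape
         h, w = h - h % 2, w - w % 2           # drop an odd row / column if present
         img = img[:h, :w]
         return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

     # placeholder stack sampled at 0.72"/px; after binning it is at 1.44"/px
     stack = np.random.rand(3672, 5496).astype(np.float32)
     binned = bin2x2_average(stack)
     ```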
  10. There is no guarantee that light emitted at the SII and OIII wavelengths will be of the same intensity as Ha. Each target has its own "signature" - both in terms of the shape of the gas distribution and in how bright it is at a particular wavelength. You don't need any sort of particular settings to handle this - what you need is to get more data at the fainter wavelengths until you get acceptable SNR. If you have a "limited" budget, then in general Ha tends to be the strongest signal, so take more subs in OIII and SII to get a smoother image. Another trick you can use is to take Ha as a luminance layer and compose in a similar fashion to LRGB, but in this case it will be HSHO (as LRGB). This depends on the target, and you need to examine your stacks at each wavelength to see if Ha indeed covers everything.
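      A rough sketch of the Ha-as-luminance idea on made-up, normalised (0-1) stacks, using a simple HSV value replacement - just one of several ways to do the LRGB-style blend:
      ```python
      import numpy as np
      from matplotlib.colors import rgb_to_hsv, hsv_to_rgb

      # placeholder narrowband stacks, already registered and stretched to 0-1
      sii = np.random.rand(500, 500)
      ha = np.random.rand(500, 500)
      oiii = np.random.rand(500, 500)

      # SHO mapping: SII -> R, Ha -> G, OIII -> B
      rgb = np.dstack([sii, ha, oiii])

      # replace the brightness (value) channel with Ha while keeping the SHO hues
      hsv = rgb_to_hsv(rgb)
      hsv[..., 2] = ha
      hsho = hsv_to_rgb(hsv)
      ```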
  11. Had a look at the Jupiter recording, and here are a few suggestions:
      1. use a x2 - x2.5 barlow / telecentric for optimum sampling
      2. keep exposure around the 5ms range
      3. shoot raw video in .ser format (skip avi if at all possible)
      4. use a gain setting of around 270
      5. use ROI to maximize the captured frame rate (like in the Jupiter video - 640x480 is quite enough to capture a planet)
      6. use 8-bit capture if the image is not over-exposed - another trick to maximize fps (at 5ms and 8-bit you should be able to hit around 200fps)
      7. use up to 4 minutes of imaging run per planet - if everything is good you should end up with a couple of gigabytes of .ser per capture (yes, quite a lot of data)
      8. shoot darks - at least 256 to 512 frames at the same settings with the scope covered (this should not take long - a few seconds at 200fps)
      9. if you can, do flats and flat darks also
      10. when shooting, make sure the planet is centered in the sensor FOV and the selected FOV is in the center of the sensor (as close to the optical axis as possible)
      11. use PIPP to calibrate your movie and save it as 16-bit SER (even if you took an 8-bit movie). Don't debayer in PIPP - keep the "protect bayer matrix" option turned on. Out of all the options you want only three in principle: don't debayer / protect bayer matrix, calibration, image stabilization (this one is not strictly necessary), and save in 16-bit SER format.
      12. use AS!3 to stack your result
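      Quick numbers behind points 7 and 8, assuming a sustained 200fps (best case):
      ```python
      fps = 200
      run_seconds = 4 * 60       # a 4 minute imaging run
      dark_frames = 512

      print(fps * run_seconds)   # up to ~48,000 frames per planet capture
      print(dark_frames / fps)   # ~2.6 s to collect the dark movie
      ```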
  12. That was a trick question, sort of. I've noticed that many people like this sort of "SHO palette", but in my view it's missing the point. We use three colors to represent three different wavelengths of light. Once you start changing the hue of this color combination to be more pleasing to the eye, you lose the ability to differentiate the wavelengths and respective elements in the image. Although you get a more pleasing result, you lose the "scientific", or rather in our case informative, value. Green looks unnatural in astro images and most people have decided to avoid it even in false color narrowband. The predominant combination now seems to be blue for OIII, yellowish for Ha (rather than green) and red for SII. To make things more pleasing to the eye, the hues of these colors are also shifted a bit - most people aim for "sky blue" rather than "sea blue" for OIII (much like in your second image), golden yellow for Ha and red/brown for SII. I think you've got the OIII and Ha colors/shades really good in the second image, but I think that you are missing the SII, brown/red component - and hence the trick question.
  13. This is very interesting: there is a section on both guided and unguided exposures and star shapes, and there is also guiding assistant output from PHD2 showing that with this particular setup (PA error) drift rates were 0.7"/min in RA and 0.6"/min in DEC. Elongation started showing in 5 minute subs without guiding.
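      To put those drift rates in context, the drift accumulated over a single unguided 5 minute sub:
      ```python
      ra_drift_per_min = 0.7    # arcsec/min, from the PHD2 guiding assistant above
      dec_drift_per_min = 0.6   # arcsec/min
      sub_minutes = 5

      print(ra_drift_per_min * sub_minutes)    # 3.5" in RA
      print(dec_drift_per_min * sub_minutes)   # 3.0" in DEC
      # both are comparable to a typical star FWHM, hence the visible elongation
      ```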
  14. You could try to estimate what sort of improvement you are going to get, if you have enough info on how the mount behaves. Not sure if anyone has published related info on the CEM40EC, but there are at least a few posts on how the CEM60EC behaves (and the CEM120EC for that matter). I think it's safe to say that the CEM40EC will perform similarly. Start by calculating the sampling rate of your setup (arc seconds / px) and also doing some basic measurements of FWHM. For elongation to start showing you need something like 10-20% of difference between the two axes.
     I'll do a simple calculation as a guide, on pretty much made up (but still assumed based on real numbers) data. You are shooting at 300mm FL and using apertures below 70mm, so FWHM should be around 3.5" or so. PE of about 0.7" over the duration of the exposure is going to be the upper limit (20% of FWHM). The CEM40 is listed as having up to 7" peak-to-peak PE over 400s. Roughly speaking, if we assume a perfect sine wave, that 7" will be covered in half that time, so we can say that for exposures of 20s or so you will always have perfect stars. If you relax what you perceive as a round star, to about 50% (so it's obviously elongated but still not "streaking"), you can do one minute exposures with the non-EC version. In contrast to this, the EC version has its error listed as 0.25" RMS over 400s - so you can safely go with much longer exposures (half a dozen minutes or more) and you won't see any elongation due to PE. With this mount, PA error and atmospheric refraction will be the main sources of error.
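      Here is the same rough calculation written out, using the assumed numbers above and the same "perfect sine wave" simplification (average rate over half a period, not the worst-case instantaneous rate):
      ```python
      fwhm = 3.5        # arcsec, assumed star size at 300mm FL with < 70mm aperture
      p2p_pe = 7.0      # arcsec, listed peak-to-peak PE of the non-EC mount
      period = 400.0    # seconds

      # elongation limits: ~20% of FWHM for round stars, ~50% before "streaking"
      round_limit = 0.2 * fwhm          # 0.7"
      relaxed_limit = 0.5 * fwhm        # 1.75"

      # crude average drift rate: the whole 7" swing happens over half the period
      avg_rate = p2p_pe / (period / 2)  # 0.035 "/s

      print(round_limit / avg_rate)     # ~20 s exposures with round stars
      print(relaxed_limit / avg_rate)   # ~50 s, i.e. the "about one minute" ballpark
      ```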
  15. Let me have a go at this, although I don't have experience with such a mount. I think that encoders are a good idea in your case and they are worth it. Are they worth the money difference? I can't be the judge of that - it depends on your budget and whether you can afford them. If guiding were an option, then I would say get the plain version and do the guiding, but since you don't want to get into that yet, encoders are a very good thing. They won't help you with PA, but PA errors are quite often much smaller than people believe. The main thing limiting your unguided exposure time with such mounts is periodic error, and that one gets handled by the encoders.
  16. Not necessarily out of focus, but it might be. What sort of exposure was this? If it is anywhere over 10ms it is going to appear out of focus, but it will actually be motion blur due to seeing. If you want to do, for example, a 20ms or 30ms exposure, you need really steady seeing, and even then you will get a handful of good frames out of a bunch that look blurry. If you want to stop the seeing, you need to go with short exposures - don't worry if the image looks noisy, it is supposed to look noisy and that is what the stacking is for. Don't worry about the histogram being here or there - those rules don't apply to lucky imaging.
  17. Almost. The thing with the bayer matrix is that it does not divide the 400-700nm range into exact thirds. It's actually a bit more complicated, as this QE curve for the ASI120MC shows: from this graph you can see that all three colors have some sensitivity over the whole 400-700nm range. The bayer matrix lowers QE compared to mono by about 10% or so, as can be seen for example in this graph of QE. But you are right about what the intensity will look like if you take the same exposure at max gain settings - converted to ADU, even if the green pixel captures 1/3 of the electrons of the mono variant, if you apply the above e/ADU values for max gain you will end up with a roughly 50% higher ADU value, and it will look brighter on screen when scaled to the same range. Of course, the point of camera sensitivity is not how bright things look on screen, as you can adjust brightness (do a different range scaling, or apply a non-linear histogram transform). The point of sensitivity is the SNR achieved in a given time with a light source of set intensity (other things being equal - like sampling rate and aperture).
  18. To me - the second one as far as color rendition goes, the first one in terms of denoise control: the second one is full of denoising artifacts. The first one has some of them too, but in the second one it's too much. Also, are the spikes artificial? They are way too tight to be natural diffraction spikes. If they are natural, then they have been sharpened quite a bit and they stand out from the rest of the image, which is not as sharp. My guess is that they are artificial, because some stars have them in the first image but not in the second. I personally don't like synthetic diffraction spikes. Although this seems to be a rather popular way of rendering SHO, I have a question for you - can you point out the regions of SII in your image?
  19. I don't think you have anything in these values that would describe the sensitivity of the sensor. I'm not sure how the measurements were made, but the e/ADU part needs a flat panel to be established; if a flat panel was used then it is a measured value, if not, it was probably read from the driver instead. Let's examine the values and what they mean:
      1. Gain - usually an arbitrary number assigned in the drivers to represent the e/ADU value (actually its inverse, as people are used to the idea that higher gain means lower e/ADU, or higher ADU/e - which would be the reciprocal). ZWO drivers follow a rather good convention and use units of 0.1dB of gain, so gain 200 is actually a gain of 20dB. We will get back to that when examining relative gain and relative gain in dB.
      2. e/ADU - this one is straightforward. Light comes in photons and sensor pixels record photons by capturing electrons in a potential well. When a photon hits a pixel there is a certain probability that an electron will be captured (the photon ejects an electron from the photosensitive material and it is captured by the potential well). This probability is the quantum efficiency of the sensor, so a sensor with 75% QE has a 75% chance of converting a photon into an electron. After the exposure there will be a number of electrons in each pixel's potential well. When the value is read out it is converted to a number called ADU (short for Analog-Digital Unit), or what we know as the "pixel value" - an integer recorded in a fixed number of bits. It is calculated like this: the number of electrons is divided by e/ADU, which gives a numeric value in ADU that is then rounded and recorded in a fixed number of bits.
      3. Read noise is the standard deviation of very short exposure pixel values once bias is removed; in this case it is expressed in electrons. It is measured by taking bias subs and stacking them in such a way that the bias signal is removed (an easy way to do this is to stack the first half of the subs into one stack and the other half into a second stack and then subtract the two - what remains is pure read noise, scaled down by the square root of the total number of subs in both stacks). This gives a value in ADU, so you use e/ADU to convert back to electrons. Read noise in electrons can be compared between gain values - read noise in ADU cannot, because ADU intensity depends on the gain setting.
      4. Full well capacity is essentially how many electrons any one pixel can capture. In this case the true full well capacity is not given, but rather the effective full well capacity, which is a consequence of the number of bits used to record each ADU value. That is 12 bits in this case, or values in the range 0-4095. So this column actually holds the values you get when you multiply 4096 by e/ADU (converting from max ADU back to electron count), as that is the maximum number of electrons you can effectively record with these sensors.
      5. Relative gain and 6. Relative gain in dB are just a way to represent how e/ADU changes as you increase the "gain" setting. The first value of relative gain is 1 - that is the "starting point". Each next value is simply how many times the e/ADU is smaller than that base value (divide the reference e/ADU by any e/ADU and you get the relative gain). Relative gain in dB is just the previous column expressed on the dB (logarithmic) scale - for the dB scale look here: https://en.wikipedia.org/wiki/Decibel
      7. Dynamic range is calculated by dividing the full well capacity by the read noise and taking the base-two logarithm of that. It sort of represents the "number of bits in a binary system needed to record the useful signal level above read noise that a pixel is capable of recording" - note that this is my explanation of it and not some general definition. I don't really find any use for this number in AP, although it is often quoted. I think it is more useful in regular photography with a single exposure, as it gives you an idea of how much histogram manipulation you can do with a raw image (like adjusting exposure stops up/down and such - higher dynamic range, more adjustment possible).
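      A small sketch tying those definitions together - given e/ADU and read noise at a couple of gain settings, derive the other columns (the numbers below are placeholders, not values for any particular camera):
      ```python
      import math

      bit_depth = 12                # ADC bit depth from the discussion above
      max_adu = 2 ** bit_depth      # 4096 levels

      # placeholder values for two hypothetical gain settings
      e_per_adu = [4.0, 1.0]        # electrons per ADU
      read_noise = [3.2, 1.6]       # electrons RMS

      for e_adu, rn in zip(e_per_adu, read_noise):
          full_well = max_adu * e_adu              # effective full well capacity (e-)
          rel_gain = e_per_adu[0] / e_adu          # relative to the first (reference) setting
          rel_gain_db = 20 * math.log10(rel_gain)  # the same, in decibels
          dyn_range = math.log2(full_well / rn)    # dynamic range, in "bits" / stops
          print(full_well, rel_gain, rel_gain_db, dyn_range)
      ```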
  20. Gimp is also an option - I use it and it can do all of that (same as PS).
  21. No it won't. It is just a physical extension of the tube; it changes neither the distance between the mirrors nor their shape, and those are the things that define the focal length of a scope (actually just the shapes, but the mirrors need to be properly spaced so you can reach focus and get proper illumination at the eyepiece). It would be a concern only if you extended the physical tube between the primary and secondary mirror, but this extension is outside of that zone.
  22. I forgot flocking - a very important thing with some telescope designs (those that use internal baffling are less affected). With a newtonian design you also want to extend the tube (this can be a blackened cardboard / plastic extension, like a dew shield) so that the focuser is at least 1.5 times the tube diameter from the scope aperture. This again prevents stray light from reaching the tube wall opposite the focuser and improves contrast (even if you have flocking and light-absorbing paint, it is better if light does not reach those places at all).
  23. I can name a few that I'm aware of:
      - Observe the target when it is highest in the sky and in the region least affected by light pollution.
      - Use filters appropriate for the target you are observing, to lessen the effects of light pollution (there are several choices of filter but not all filters suit all targets - UHC tends to work best, but it's suited to emission nebulae, for example).
      - Transparency of the sky is related to LP levels - aim for transparent skies for best results. This has a dual effect: the target light is attenuated the least and the LP is scattered much less when the sky is very transparent. Avoid haze and too much moisture in the air. There are online services that give a transparency forecast, which is useful when planning a session.
      - When observing point sources like stars, atmospheric seeing also has an effect. A steady atmosphere lets the star light stay concentrated in a single point rather than being smeared, so it is at peak intensity and easier to see - both to detect and to detect color, as the threshold for both depends on the amount of light.
      - If looking for faint stuff, be sure that you are dark adapted. This means shielding yourself from local light sources and creating a dark environment. It can be as simple as putting a dark cloth over your head when you are at the eyepiece. Mind you, the point is not to cover your head but rather to shield yourself from surrounding light - so make the cloth long enough and possibly keep it wrapped from below with one hand or something.
      - Dark adaptation hurts your color vision, so if you plan to look at doubles trying to see their color, or when observing planets, don't get fully dark adapted - keep some soft light on nearby (if you are in the back yard, for example, soft light coming from your windows is enough - no need to turn on the porch light or anything like that). On nights of full moon, the moonlight will be enough to keep you from dark adapting.
  24. Yep, one way of creating a star mask would be:
      - take the RGB image and make a copy of it
      - do star removal on that copy (something like http://www.astro-photography.net/Star-Removal.html )
      - subtract the two images (in PS put them in layers and use difference or a similar blend mode)
      - flatten that image, make a selection on the background and fill it with black, then invert the selection and fill it with white
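      The same recipe sketched with numpy instead of PS layers (the starless image is assumed to already exist, produced by whatever star removal technique you used):
      ```python
      import numpy as np

      # placeholders: original RGB image and its star-removed copy, both floats in 0-1
      original = np.random.rand(1000, 1000, 3)
      starless = np.clip(original - 0.1 * np.random.rand(1000, 1000, 3), 0, 1)

      # the difference leaves (mostly) just the stars
      stars_only = np.clip(original - starless, 0, 1)

      # flatten to a single channel and threshold: background -> black, stars -> white
      gray = stars_only.mean(axis=2)
      star_mask = (gray > 0.05).astype(np.float32)   # the threshold is something to tune
      ```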