Everything posted by vlaiv

  1. I got a little guide scope for exactly that purpose - I can also get long enough exposures at F/2 with the lens, but I want to dither. Dithering is rather important for randomizing the noise and helps with noise reduction. That really depends on the software you are using, but a simple procedure would be (see the sketch below): stack the short and long exposures into their respective stacks, align the short stack with the long stack (register them without stacking), then use pixel math to replace all pixels in the long stack that have a value larger than 80% of max value with the scaled value from the short stack. Scaled value from the short stack simply means that if you took, say, 60s long subs and 2s short subs - you need to multiply the short stack by 30 to get compatible values between the stacks.
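Here is a minimal numpy sketch of that pixel-math step, assuming both stacks are already registered and loaded as float arrays (function and variable names are just illustrative):

```python
import numpy as np

def blend_hdr(long_stack, short_stack, exposure_ratio=30.0, threshold=0.8):
    """Replace near-saturated pixels in the long stack with scaled
    pixels from the short stack (60s / 2s subs -> ratio of 30)."""
    limit = threshold * long_stack.max()
    scaled_short = short_stack * exposure_ratio
    return np.where(long_stack > limit, scaled_short, long_stack)
```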
  2. It wasn't an argument against the 533mc as much as an argument for the ASI224. I don't mind you upgrading your camera - I personally value larger sensors as they are "faster" sensors (when paired with an appropriate scope). My only concern is whether you want to replace your camera for the right reasons. With most cameras you will run into the problem of clipping the brightest stars. The only way around that would be a camera with 0 read noise - and such a thing does not exist. Clipped stars are dealt with in a different way: you take long exposures for the faint signal and at the end you do a couple of short exposures for the star cores. You combine the two after stacking each, in the processing stage (best while still linear). I don't believe that you are saturating your targets or that you saturate the sensor due to LP. I'm also in rather heavy LP and I've just checked one of my subs taken with the ASI178mcc and a Samyang 85mm F/1.4 lens (at F/2). The core of M31 has an average ADU of about 2500 at gain higher than unity (the camera converts at about ~0.92e/ADU there) - that is about 2300e. This was with a 60s exposure. Granted, this camera has 2.4µm pixels - so the ASI224 in these conditions would collect about 5500e - but that is a 60s exposure and the core of M31 - it saturates that easily.
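A quick back-of-envelope check of those numbers (values taken from the paragraph above; scaling by pixel area is my assumption about how the comparison was made):

```python
# ASI178mcc: 2500 ADU at ~0.92 e/ADU in a 60 s sub
electrons_178 = 2500 * 0.92                 # ~2300 e
# Scale by pixel area to estimate what a 3.75 um ASI224 pixel would see
area_ratio = (3.75 / 2.4) ** 2              # ~2.44x larger pixel area
electrons_224 = electrons_178 * area_ratio  # ~5600 e - close to the ~5500 e quoted
print(round(electrons_178), round(electrons_224))
```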
  3. Well, use exposures of 5s or 2s - you lose nothing. For example, you quote the 533, with its 14-bit ADC and 16000e full well, as a camera that would suit you better. Pixel size of each is roughly the same, 3.75µm vs 3.76µm, and QE is about the same at 80%. Now take 4 exposures of 2.5s with the ASI224 and one exposure of 10s with the ASI533 - what difference will there be between the two? If you add the 4 ASI224 exposures you will get 14-bit range and 16k FW capacity, with 2.4e of read noise, since read noise at unity gain is around 1.2e (see the sketch below). The ASI533 will have 14-bit, 16k FW capacity and about 1.55e. So yes, the ASI533 has a slight edge in read noise - which is really not that important at F/2 if your subs are close to saturating - read noise will be drowned in LP noise and the difference is minimal. If bit count and full well capacity are your only concerns - then you don't need to spend your money - you only need to change your approach: simply take shorter exposures, for the same total imaging time. The results will be 99.99% the same and you won't have saturated star cores. A good reason to switch to a new camera would be sensor size - you can capture much more of the sky in a single go with a larger sensor.
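The sketch mentioned above - how four short ASI224 subs add up (read-noise and ADC figures as quoted in the post):

```python
import math

n_subs, rn_224, rn_533 = 4, 1.2, 1.55
stacked_rn = rn_224 * math.sqrt(n_subs)  # read noise adds in quadrature -> 2.4 e
levels = 4 * 2**12                       # four 12-bit subs -> 16384 levels, i.e. 14-bit range
print(stacked_rn, levels, rn_533)
```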
  4. How exactly do the 12-bit ADC and full well capacity limit you?
  5. Many people now process a starless version of their images (look up StarNet++ - software for removing stars from an image) and then put the stars back in later on. In fact - I was going to suggest that as a way of processing nebula images with these UHC/narrowband filters that alter color heavily. If you look at the example posted above by @drjolo, the far right color checker has only two colors that are almost as they should be - red and teal, which are the colors of Ha and OIII (or close enough). This means that these filters don't need color correction when shooting nebulosity. They do change star color very much, but you can take some exposures without the filter and then use StarNet++ to create a starless version of the image taken with the filter and a stars-only version of the image taken without the filter, and combine the two (see the sketch below). That would give you proper color in both the nebulosity and the stars.
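A minimal sketch of the recombination step, assuming the starless and stars-only images are aligned float arrays normalized to 0..1 (screen blending is one common way to do it; straight addition with clipping also works):

```python
import numpy as np

def screen_blend(starless, stars):
    # Screen mode: bright stars add back in without clipping the nebulosity
    return 1.0 - (1.0 - starless) * (1.0 - stars)
```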
  6. Yes, those duo / tri / quad band filters are a good solution for emission type targets. See this graph. The L-eXtreme filter will capture OIII and Ha - which are the two most prominent wavelengths in emission type nebulae. It will block almost everything else (there will be a small "leak" around these wavelengths as filters have some band pass). On the other hand, a UHC type filter passes more of the light: or the Optolong version: It passes both Hb and OIII but lets in more light. Such filters are better in moderate to strong light pollution but not as good in extreme light pollution. In general, Hb is about 1/4 of Ha in strength or less, and SII is often very faint. Those are worth including only if doing so does not let in much additional light pollution as well. This does not mean that you can't use your V4 - it will just have slightly worse performance than the L-eXtreme (and on some targets it might be better, if Hb is strong enough). No, they are different. They also block some of the light, but much less than UHC / NB type filters. Here, look at the Optolong version of this filter: It tries to cover most of the spectrum between 400-700nm with the exception of a few gaps - and these few gaps contain most of the light pollution (of a certain type - like those old yellow street lamps - marked with orange lines in the graph above). A good LPS filter will block that unwanted light but let all other light pass. The problem is of course that these days our light pollution is no longer concentrated in those few lines - it is also spread over the whole spectrum (due to LEDs replacing the old sodium lamps).
  7. The V4 is a UHC type filter. The L-eXtreme is a sort of two-band narrowband filter. Both are good choices for emission type nebulae but not for broadband targets like galaxies or reflection type nebulae. Btw, IDAS also has these duo band filters: https://www.firstlightoptics.com/idas-filters/idas-narrow-band-nebula-nb2-filter.html For broadband targets the best filter is a general LPS type - light pollution suppression filter: https://www.firstlightoptics.com/optolong-filters/optolong-l-pro-light-pollution-broadband-filter.html or the IDAS variant: https://www.firstlightoptics.com/idas-filters/idas-lps-p3-light-pollution-suppression-filter.html You won't be needing filters for planets - they are generally not affected by LP.
  8. Hi and welcome to SGL. If you want a very basic setup that will allow for AP, here is the list of things to get:
     - EQ5 mount
     - RA tracking motor for the EQ5. If you have DIY skills, you can save some money there by purchasing a stepper motor and making a controller to drive it at sidereal rate.
     - 150PDS
     - SW coma corrector
     According to FLO prices that will cost: £265 + £119 + £229 + £129 = £742 = ~$1313.77 AUD. I included the dual axis motor with hand controller, which costs a bit more, but I'm sure you can add a tracking motor in DIY for less money. This will be a very basic setup, but it will enable you to take somewhat longer exposures and, as you say, get to stage 2. You can easily add guiding to this setup on a budget as well. The dual axis motor that I linked already has an ST4 port, and you'll get a 6x30 finder which you can convert into a little guide scope - again some DIY - should be fairly easy if you have access to a 3D printer. A modified web cam can be used for guiding, so that should not set you back much. Probably the biggest cost will be a laptop, if you don't own one already. A lot can be saved if you are prepared to go second hand, DIY some things, or use a combination of gear that might not give you the best experience (like not getting the full goto version but just the regular mount with added tracking motors). In any case, here is the list of items I made the original list with - so you can check local prices and availability:
     https://www.firstlightoptics.com/reflectors/skywatcher-explorer-150p-ds-ota.html
     https://www.firstlightoptics.com/equatorial-astronomy-mounts/skywatcher-eq5-deluxe.html
     https://www.firstlightoptics.com/sky-watcher-mount-accessories/enhanced-dual-axis-dc-motor-drives-for-eq-5.html
     https://www.firstlightoptics.com/coma-correctors/skywatcher-coma-corrector.html
  9. If we can offer what we think is an improvement, here is my version: I'm just a sucker for proper sampling rate
  10. Here is a breakdown of what I did with the data. I first loaded it in Gimp and did RGB separation to channels, then saved each channel as a mono fits image. This step was needed because I think ImageJ won't properly load 32bit TIFF (might be just a habit from an old version of ImageJ - I did not check whether recent versions load TIFF properly). Then I loaded the fits files into ImageJ and cropped and binned the data. Binning increases SNR and makes the image smaller - so it is easier to work with. Next step is background removal - I ran some code that I've written for ImageJ that removes the background gradient (a sketch of one simple approach follows below). I did this on all three channels. Next I would normally do color calibration - but your data seemed OK as is, so I skipped this step. I just equalized min and max values so that Gimp won't mess up the color information when I load the channels separately (it scales data when loading from fits - and I don't want different scaling for each channel). I loaded all three channels into Gimp and did RGB combine. Next I decomposed that image into LAB components, discarded the A and B channels and worked on L (I kept the original RGB master as well). I stretched the luminance (L component) using levels until I was satisfied with how it looked. I added a copy of that layer, denoised it, and then used a mask on that layer - the inverted layer itself, with levels adjusted so that the mask only applies the denoised layer in dark areas. This smooths out the noise but prevents blurriness where the signal is strong. I flattened that, copied it, and pasted it as a new layer on top of the RGB image. With the RGB image (bottom layer) I did a levels stretch, entering 2.4 as the middle value - this simulates the gamma stretch of the sRGB color model. I set the luminance (top layer) to layer mode "luminance" and flattened the image. Next I did (and this is the part I do so that the image looks as most people would process it):
     - increase saturation to 200%
     - change the temperature of the image by about 2000K to fix atmospheric reddening
     - a little bit of curves to brighten things up
     And that is it.
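My actual ImageJ code is not shown here, but here is a hedged sketch of one simple equivalent for the gradient-removal step: fit a plane to a 2D float channel with least squares and subtract it.

```python
import numpy as np

def remove_gradient(img):
    h, w = img.shape
    y, x = np.mgrid[0:h, 0:w]
    # Least-squares fit of a plane a*x + b*y + c to the whole image
    A = np.column_stack([x.ravel(), y.ravel(), np.ones(h * w)])
    coeffs, *_ = np.linalg.lstsq(A, img.ravel(), rcond=None)
    plane = (A @ coeffs).reshape(h, w)
    return img - plane + plane.mean()   # subtract gradient, keep overall level
```

A real implementation would fit only to background pixels (excluding stars and nebulosity), but the idea is the same.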
  11. What do you think of the filter in terms of reducing the impact of seeing? It should give the best of both worlds - a shorter wavelength (than, say, Ha), which allows for greater telescope resolution, but maybe poorer performance with respect to seeing (longer wavelengths tend to bend less in the atmosphere).
  12. No filter is going to help with moonlight if you are trying to image broadband targets like galaxies. I'm fairly sure that your sensor is not degrading - you can easily check that with any lens you might have for it - just take a daytime photo and check the colors. Using an LPS filter is a sure way to disturb color balance and color accuracy, and special care must be taken when processing the data to get accurate colors back. It is a very complex topic and I'm not sure you would like to get into all of that - there is no simple way to go about it, it is indeed quite involved. In order to understand a bit more, you can do a little test - it is a fairly simple test, if you really want to understand color in astrophotography. Take your scope and camera and during daytime point it at something colorful nearby (but distant enough that you can reach focus without too much trouble) - maybe a billboard with a colorful advert or similar. First take a regular photo of that object. Now repeat the process, but this time act as you would when astro imaging:
      - use the L-Pro filter
      - shoot in raw mode without any color balance
      - use very short exposures so that the image is essentially very dark and the histogram is all to the left
      - shoot 20 or so of them - you'll be stacking those later
      - take bias (it will act as darks, since the exposure is likely to be very short) and flats as well
      Now do your regular astrophotography workflow - calibrate the data, stack it (tell the software not to align on stars - there are no stars, just stack the images as they are) and then try to process the image so you end up with the same colors as in the single image you took first. This will show you what sort of color we are capturing in astrophotography and how it relates to actual color - both as captured with a camera and as seen with our own eyes.
  13. There are a few issues with the image. First - you are pushing your data too much. You should stop as soon as you hit the noise. Sometimes things will be faint, and that is OK. Things are faint. If you want to exaggerate brightness - you really need much more exposure. Second - color is there, but again, like brightness, there is not much of it. If you want stronger color - you need to boost saturation. I'm against that, but people seem to both do it and like images like that. For that reason, I boosted saturation here as well. The third issue is with your scope - it is not very well corrected, and you really need a specific filter to deal with this - or a processing trick. Here is my take on your data: And straight away - you see the issue. Stars have blue halos - no way around it, since the scope is a relatively fast (F/7) ED doublet. You could try using an Astronomik L3 filter with this scope to reduce the impact, or use a processing trick. With Gimp it is fairly easy to minimize the blue halo - you can selectively desaturate the blue color (see the sketch below) and get something like this:
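Here is a rough sketch of that desaturation trick outside of Gimp, assuming an RGB float image in 0..1 (the mask construction and the strength value are just illustrative):

```python
import numpy as np

def desaturate_blue(rgb, strength=0.8):
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    lum = 0.2126 * r + 0.7152 * g + 0.0722 * b          # Rec.709 luminance
    halo = np.clip(b - np.maximum(r, g), 0.0, 1.0)      # where blue dominates
    mask = (strength * halo)[..., None]
    return rgb * (1.0 - mask) + lum[..., None] * mask   # pull halos toward gray
```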
  14. Here is my rendition of the data. I must confess that I processed it as per popular expectation more than how I believe it should look. Hope you like it.
  15. I think it is already a very good image as is. Yes, there are a few little things wrong with it - like that annoying gradient and the burned cores - but otherwise it is very nice. I'll see what I can pull out of the data.
  16. +1 for collimation. Stars at the bottom of the image show coma.
  17. This should not happen. When you tighten the clutch - there should be no play in RA, and the only way you should be able to move the scope is by putting tension into the system (the mount should resist). By play I mean motion of the scope without this resistance.
  18. Very nice. I'm surprised at the level of CA I'm seeing at the edges of the full resolution image. I would not expect it from a 70mm F/13 scope. It is almost up to the Conrady standard, with a CA index of ~4.71. Then again, that index is for visual use, not imaging.
  19. I have a couple of suggestions you can try - maybe some of them will help:
      1. Don't calibrate where you image - calibrate near the meridian with DEC close to 0. That way you'll get the most precision in RA calibration.
      2. What is your guide exposure? Don't keep it too short - set it to 2-3 seconds to minimize the seeing.
      3. Is your RA stiff in any way? When you are slewing, does it make a funny noise in one part of its travel? You say your clutches are a pain - what exactly is troubling you? Could it be that they loosen up after some time and the mount starts slipping? Is there any chance of a cable snag or anything like that?
      Sudden worsening of guiding can be due to seeing or local thermals. How high in the sky are you trying to image? Are there any surfaces nearby that give off heat - like paved roads or large bodies of water (pools, ponds, lakes, whatever)? It can also be due to the RA axis starting to seize up or slip or something.
  20. Hi and welcome to SGL. Nice moon image. I think that your focus is just fine. At these magnifications the atmosphere starts to show considerably and it will blur your shots. Either take a bunch of shots and select the clearest one (the atmosphere changes from instant to instant) - or consider the lucky imaging approach (look it up - it is a technique where you record video - or rather a large sequence of frames - and software selects the best among them and forms an image out of them; there is a bit of processing involved on your part - but nothing that can't be learned - see the sketch below).
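A rough sketch of the core idea, assuming the frames are already roughly aligned numpy arrays (real tools like AutoStakkert! also align frames and do much more; variance of the Laplacian is just one common sharpness metric):

```python
import numpy as np
from scipy.ndimage import laplace

def lucky_stack(frames, keep_fraction=0.05):
    # Score each frame by sharpness and average the best few percent
    scores = [np.var(laplace(f.astype(float))) for f in frames]
    n_keep = max(1, int(len(frames) * keep_fraction))
    best = np.argsort(scores)[-n_keep:]
    return np.mean([frames[i] for i in best], axis=0)
```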
  21. Yep, like @JamesF noted above - using a 2x barlow is like using x2 the focal length - the sampling gets finer (fewer arc seconds per pixel) and the target covers more pixels.
  22. Yes. We are talking about 20000mm of focal length here. Let's say you have a DSLR with 4.3µm pixel size - each pixel will cover only ~0.044" of the sky. In fact, each pixel at La Palma will capture 0.044" x 0.044" ≈ 0.002 arc seconds squared of sky (surface). A 200mm F/4 will sample with the same pixel size at 1.11"/px, so it will cover 1.11" x 1.11" = 1.2321 arc seconds squared of the sky. A single pixel of your F/4 thus covers x625 more sky than one at La Palma (the focal length ratio is 20000/800 = 25, and 25 squared is 625). This means that your single pixel receives all the photons that would fall on 625 pixels at La Palma.
      Now, 2000mm of aperture is x100 larger gathering area than 200mm of aperture (x10 by diameter, x100 by surface). This means the telescope at La Palma will gather x100 more photons than your scope, but it will spread those photons over 625 pixels. Let's say that a little patch of M1's surface shines 10 photons onto your telescope and all 10 of those photons end up on one pixel. The La Palma scope will gather 1000 photons in the same time - but they will be spread over 625 pixels, so each pixel will gather only 1.6 photons, or x6.25 less than your scope.
      There is a simple way to make a slow telescope faster than your telescope - change the pixel size. Say you are using the 200mm F/4 telescope again with 4.3µm pixels, and 10 photons fall on your aperture per single pixel per exposure - your pixel has 10e. I'm going to use a 100mm F/10 telescope. Now, my telescope has a smaller aperture and is slower than your scope - how could it possibly be faster? I'm going to use 12.9µm pixels with my telescope. If your telescope gathers 10e per exposure - that means that 1.2321"^2 of sky produces 10 photons on 200mm of aperture. 1"^2 will produce 10/1.2321 ≈ 8.116 photons on 200mm, or 8.116/4 ≈ 2.0291 photons per arc second squared on 100mm of aperture. What is my sampling resolution? I use 1000mm FL and 12.9µm pixels, which gives 2.66"/px, so 2.66" x 2.66" = 7.0756"^2 of sky is covered by each of my pixels. My 100mm of aperture gathers 2.0291 photons per exposure per 1"^2, and 7.0756 arc seconds squared end up on a single pixel in my setup - so my pixel gathers 2.0291 x 7.0756 ≈ 14.36e.
      There you go - your pixel gathered 10e and my pixel gathered ~14.36e - my setup is faster by some 44%, regardless of the fact that I have a quarter of the aperture area and my system is F/10 vs yours at F/4. How about that? (The numbers are reproduced in the sketch below.) Again, it is aperture at resolution that defines speed. If you decrease aperture and want to keep a fast system - decrease resolution more. In the above case the aperture was halved (thus x4 fewer photons were captured) but I increased pixel size by a factor of x3 to offset that and the slight change in focal length (from 800mm to 1000mm).
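The sketch mentioned above, reproducing the numbers directly (206.265 converts pixel size in µm over focal length in mm to arc seconds per pixel):

```python
def arcsec_per_px(pixel_um, fl_mm):
    return 206.265 * pixel_um / fl_mm

la_palma = arcsec_per_px(4.3, 20000) ** 2      # ~0.002 arcsec^2 per pixel
newt_200 = arcsec_per_px(4.3, 800) ** 2        # ~1.23 arcsec^2 per pixel
print(newt_200 / la_palma)                     # 625x more sky per pixel

photons_per_as2_200 = 10 / newt_200            # ~8.1 photons per arcsec^2 on 200mm
photons_per_as2_100 = photons_per_as2_200 / 4  # quarter of the aperture area
px_100 = arcsec_per_px(12.9, 1000) ** 2        # ~7.08 arcsec^2 per pixel
print(photons_per_as2_100 * px_100)            # ~14.4 e vs 10 e per pixel
```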
  23. Yes; however, that does not mean that F/4 is faster than F/5. Many people think F/4 is faster than F/5 per se - hence the term speed. F/4 is faster than F/5 only if you keep pixel size the same. Two things can happen when we switch from F/4 to F/5 while keeping pixel size the same:
      - If we switch from F/4 to F/5 by using an aperture stop - we are limiting the number of photons, and that is why F/5 is slower - fewer photons reach the sensor.
      - If we switch from F/4 to F/5 by increasing focal length - we are increasing the working resolution - we spread the light over more pixels (since the pixels are the same size).
      This also means that two F/5 scopes will have the same speed even if they have different apertures - if you keep pixel size the same. As aperture increases, so does focal length - and hence working resolution. The increase in photons is exactly matched by the increase in the spread of those photons, so a 4" F/5 scope will have the same speed as a 6" F/5 scope - if, of course, you keep pixel size the same. This is another thing that reinforces the F/ratio myth. The F/ratio "myth" is not a myth if you keep the camera the same (as is the case in daytime photography when talking about F/stops). On the other hand - this means that an F/5 scope can be faster than an F/4 scope - if you change pixel size, or in particular, if you set the working resolution by either changing the physical size of the pixels (using a different camera) or their logical size (using binning to increase it). For this reason it is best to compare the "aperture at resolution" of two systems rather than their F/numbers - the first takes the camera into account, while the second does not (see the sketch below).
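A minimal sketch of comparing "aperture at resolution": the photon rate per pixel scales with aperture area times the sky area each pixel covers (the scope and pixel values are illustrative):

```python
def relative_speed(aperture_mm, fl_mm, pixel_um):
    scale = 206.265 * pixel_um / fl_mm   # arcsec per pixel
    return aperture_mm**2 * scale**2     # ~ photons per pixel per unit time

# Two F/5 scopes, same pixel size -> same speed:
print(relative_speed(100, 500, 3.75), relative_speed(150, 750, 3.75))
# F/5 beating F/4 by using bigger (or binned) pixels:
print(relative_speed(100, 400, 3.75), relative_speed(100, 500, 7.5))
```

Note the last line: the F/5 configuration with double the pixel size gathers more per pixel than the F/4 one with the small pixels.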
  24. Different focal length - different amount of sky covered by a single pixel. Say there is a nebula that is a perfect square, 4" x 4" in size. It emits 16 photons each second that fall on the telescope aperture. If you have such a focal length that 4" x 4" of sky maps onto 4px x 4px - 16 px total (resolution of 1"/px) - then those 16 photons per second get divided among 16 pixels. Each pixel gets 1 photon per second. Integrate for 1 minute and each pixel gets 60 photons (let's assume QE of 100% and gain of 1e/ADU) - or 60 ADU.
      Now take the same nebula and the same aperture - we still get 16 photons per second from the nebula, the number of photons has not changed - but halve the focal length of the telescope. We now have 2px x 2px covering the nebula - 4 px total (resolution of 2"/px). Now 16 photons per second are divided among 4 px (not 16 px as above). This means that each pixel gets 4 photons per second, or, integrating for 1 minute, each pixel will now have 240 ADU instead of 60 ADU - the image is brighter.
      Note that we don't have to change pixel size for this to happen - nor do we have to change focal length when we change pixel size - it happens when we change either of the two (or both). It happens any time the sampling resolution changes. This is why only two things are important: aperture size - which determines how many photons are captured - and working resolution - which determines into how many "parts" those captured photons are divided. This is also the reason you can improve SNR by binning even after capture (software binning) - by taking the divided signal and putting it back together (see the sketch below).
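And a minimal sketch of software binning (2x2), assuming a float image with even dimensions - signal per binned pixel goes up x4 while independent noise grows only x2 (square root of 4), so SNR doubles:

```python
import numpy as np

def bin2x2(img):
    h, w = img.shape
    # Sum each 2x2 block of pixels back together
    return img.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))
```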