Everything posted by vlaiv

  1. As long as JWST is fulfilling its primary role - to make freaking beautiful images
  2. Not only that - they can be quite accurate as far as color matching goes. With a bit of science, very good reconstruction is possible. Stars shine with continuous spectra, and the curve largely depends on temperature. This can be used to reconstruct accurate color from any 3 filters (or even 2 filters). Emission lines also come in multiples, because electrons can jump between multiple energy states. This means that Ha, OIII, SII and other elements also shine in the IR part of the spectrum with very distinct signatures. http://astronomy.nmsu.edu/drewski/tableofemissionlines.html
  3. A monochrome chip like that offers even more flexibility for lunar. You can use NB filters to suppress seeing effects. Planetary/Lunar/Solar astrophotography is a quite different discipline, so it would be advisable to look up lucky/planetary imaging and how to capture, stack and process such images. I'd personally use a NB filter around 500-550nm (either OIII or a Baader solar continuum filter) and about F/9 (x1.5 barlow element). SharpCap, 5ms or less exposure, and try to achieve the best FPS you can. Use multiple panels that you will stitch later. That way you can achieve excellent lunar images (using multiple panels for the full lunar disk). Here is an image captured with a 102mm Maksutov and the ASI178 color model with the above advice applied: (you can right clicky / zoom viewy thingy).
  4. It would be helpful if you listed the software you used and the steps taken, as well as any other details.
  5. I'm almost certain that this is true. I've seen more with a 4" in Bortle 4 than with an 8" in Bortle 8.
  6. Yes, it does seem to contradict the original document - and also common sense. Drizzle takes input pixels and "drizzles" them across a larger output image, meaning that not all output pixels will receive data from any given sub. This in turn means that the number of averaged samples for the stacked output image will be less than the number of subs. SNR improvement is equal to the square root of the number of averaged samples, so fewer samples averaged leads to lower SNR improvement compared to the regular stacking method, where input and output samples are matched 1:1 and there is no reduction in the number of samples.
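The square-root argument above can be sketched with a few lines of Python. The coverage figure is purely illustrative - the actual fraction depends on pixfrac, dither pattern and upsample factor:

```python
import math

def snr_improvement(n_subs, coverage=1.0):
    """SNR gain over a single sub when averaging samples.

    coverage is the average fraction of subs that actually deposit
    data into a given output pixel: 1.0 for regular stacking, < 1.0
    for drizzle, where shrunken "drops" miss some output pixels.
    """
    return math.sqrt(n_subs * coverage)

# Regular stacking of 100 subs: every output pixel averages 100 samples.
print(snr_improvement(100))        # 10.0
# Drizzle where each output pixel receives drops from only ~25% of subs:
print(snr_improvement(100, 0.25))  # 5.0
```

So under these (assumed) numbers, drizzling would cost half of the SNR improvement that plain stacking delivers.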
  7. Really sorry for making this issue way more technical than it needed to be. In short, my answer to the question would be: don't bother drizzling if you are going to reduce the file afterwards. Drizzling will reduce the quality of the stack (more noise) in exchange for the possibility of improved detail. My view is that the detail won't be there to begin with in the majority of amateur setups, as we are mostly over sampled, so by drizzling you only lose SNR. In any case, reducing the image afterwards just throws away that improved detail (if any). Best to do regular integration, which keeps the file size smaller in the first place, so there is no need for reduction.
  8. Yes, that is a well known resource and the origin of the drizzle algorithm (NASA / Hubble team). Have you seen the last post in that discussion - the one without an answer? I'll quote: Indeed - it is very hard to imagine what drizzle is supposed to do if you don't increase resolution - you just "shrink" a pixel and place it in its original place - you effectively do nothing, as there is no change to the pixel. I want to address one more misconception. We have read previously in this thread - in one other quote from various posts on other forums - that drizzle does not perform interpolation. But look at the following quote from the above page: This is actually a quote from the original handbook - but it clearly shows that input pixel values are split over several output pixel values - thus creating pixel-to-pixel correlation. That, in effect, is interpolation. If you split a droplet over a few pixels, weighting by surface area, and there is no rotation, that is mathematically equivalent to linear interpolation.
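The "droplet splitting equals linear interpolation" claim is easy to verify numerically in 1D. This sketch assumes pixfrac = 1, a pure sub-pixel translation and no rotation - the simplest drizzle case:

```python
import numpy as np

rng = np.random.default_rng(0)
signal = rng.random(10)
dx = 0.3  # sub-pixel shift of the input grid relative to the output grid

# 1D "drizzle" deposit: input pixel i covers output interval
# [i + dx, i + 1 + dx], so it overlaps output pixel i with area (1 - dx)
# and output pixel i + 1 with area dx.
drizzled = np.zeros(11)
weights = np.zeros(11)
for i, v in enumerate(signal):
    drizzled[i] += (1 - dx) * v
    weights[i] += 1 - dx
    drizzled[i + 1] += dx * v
    weights[i + 1] += dx
drizzled /= weights

# Linear interpolation of the shifted signal, sampled at output pixel centres:
interp = np.interp(np.arange(1, 10) - dx, np.arange(10), signal)

print(np.allclose(drizzled[1:10], interp))  # True
```

Away from the edges, every drizzled output pixel is exactly (1 - dx) * in[j] + dx * in[j - 1], which is precisely what linear interpolation produces - hence the pixel-to-pixel correlation.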
  9. Out of interest - what does it mean to drizzle without upsampling? I have a few issues with the quoted text. This is a very interesting part. I wonder how on earth one would do sub pixel alignment without interpolation of sorts? We can choose not to perform interpolation, and instead just perform a simple "shift and add". However, such an approach will inevitably increase star FWHM significantly (we don't have the precision in our mounts to prepare subs in such a way that it does not impact FWHM - at the very least, whole pixel dithers aligned with the main imaging axis would be needed), and any field rotation due to slight polar misalignment would be visible, as no rotation of subs for alignment is performed.
  10. Bayer drizzle is regularly used with OSC planetary imaging, where sampling is right there at critical sampling - and it works well for that. But yes, for long exposure imaging we are more often over sampled than under sampled (it is actually hard to be under sampled with amateur setups because of the F/ratios of optics and pixel sizes that are now common. Only fast optics like F/2 lenses and such stand a chance of being under sampled - but then again, those are not diffraction limited at such high speeds and produce additional blur from the optics).
  11. Could you please share a comparison? I've never seen one, and I'd be interested to see the difference.
  12. I guess it is fairly easy to do normal integration and see if you lose anything (or gain something).
  13. For planets - it is fairly easy to do. One should use a phone camera attached to the eyepiece and do 20-30ms exposures (timing is important so that seeing disturbances average out and give the same blurring humans see). No stacking or sharpening should be performed. One should select the image most representative of what was seen at the eyepiece. I'd argue that taking a video is a better way to represent what is seen at the eyepiece (again - use 20-30ms exposures, or 30-50fps when recording). For faint objects, it is rather more difficult, and will involve some level of post processing and concentration while observing / recording. The person recording must also observe the object. After the image is made - brightness, contrast and saturation should be adjusted to match what was seen at the eyepiece. It is probably not a good idea to adjust the image immediately while observing (like with a phone/tablet app) - as that will ruin night vision. Rather, notes can be taken of what is seen - to help recreate the sight from memory while processing the image at the computer to match what was seen.
  14. Don't be afraid to add brightness to the image (and control the background with contrast). Here is what can be quickly done in IrfanView (it has a very basic set of color correction options):
  15. Why do you want that? You seem to have an ASI178mm. Use that to practice and to see what is possible with your scope. You already have a barlow that you can use with that camera. Just aim to get a bit below x2 and that should work fine. Much better than with a DSLR. Aim for 5ms exposure length, use a 640x480 ROI, and capture as many subs as you can. Use gain in the 260-300 range. As far as a recommended OSC for planetary imaging goes - go with the ASI224 or ASI385, depending on how much money you are willing to spend and whether you need a larger sensor (the ASI385 has a larger sensor, which is useful for the Moon). Although, for the Moon, you can simply use the ASI178. It is the largest of these sensors and you can capture lunar images in mono, so you don't even need filters for that (or you can just use a green or red filter to help with seeing a bit).
  16. The main reason to go for planetary with a dedicated camera is the ability to take a large number of exposures in sequence (frames) - in raw format. If you already have the ASI178 - then I'd try with that first, just to hone your technique. For planetary imaging you don't need to drive to a dark location - you can do it from your back yard. Light pollution is not an issue there. It would be best to get a regular barlow for that. You don't want to bin data if you don't need to, as that increases effective read noise with CMOS sensors. For planetary imaging, low read noise is essential. Both of your scopes will have a diffraction limited field over such a small sensor and you don't need any additional optics, but I would avoid using the RC for planetary. It has a very large central obstruction, and although one can image planets with it, results won't be as good as with scopes that have a smaller central obstruction (even SCTs). Anyway, for the ASI178 you'd need an ~F/10 scope. Get this barlow: https://www.firstlightoptics.com/barlows/astro-essentials-125-2x-barlow-with-t-thread.html It has a detachable barlow element that you can use to dial in magnification for your 200p. The magnification of a barlow element depends on its distance to the sensor, and you can use some sort of variable length extension to attach it directly to the camera body via T2 and then use a 2" camera nose piece to fit it inside the focuser. This way you can dial in the magnification to get F/10.
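The distance dependence follows the standard thin-lens Barlow relation M = 1 + d / f_B, where f_B is the magnitude of the element's (negative) focal length and d is the element-to-sensor distance. The f_B = 100mm figure below is a hypothetical round number, not the spec of any particular barlow:

```python
def barlow_magnification(d_mm, f_barlow_mm):
    """Magnification of a barlow element with (absolute) focal length
    f_barlow_mm when the sensor sits d_mm behind the element."""
    return 1 + d_mm / f_barlow_mm

def distance_for_magnification(m_target, f_barlow_mm):
    """Element-to-sensor distance needed to reach m_target."""
    return (m_target - 1) * f_barlow_mm

# Hypothetical x2 element with f_B = 100mm: it gives x2 at d = 100mm.
print(barlow_magnification(100, 100))        # 2.0
# To take an F/5 200p to F/10 we need x2, so keep d at 100mm:
print(distance_for_magnification(2.0, 100))  # 100.0
```

Shortening d lowers the magnification, lengthening it raises it - which is exactly why a variable extension lets you dial in the F/ratio.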
  17. If you really want to use the D800 for planets, then based on its pixel size of 4.88um, you'll need F/19.52, so yes, x3.25 is needed. However, I think that it would be better to invest in a planetary camera rather than a focal extender for the D800. I would also advise against using a focal extender with either the 200p or RC6 for DSO imaging. Both scopes have enough focal length and don't need one. For the last question - don't use a field flattener/reducer with a barlow / focal extender. There is not much point in doing so - you first reduce the focal length only to extend it again. Using just a field flattener with a barlow / focal extender is also not necessary. A field flattener is mostly needed for the outer part of the field - the central 1/2 of the field does not need it. With a barlow, you magnify the image and take only the central part of it (1/2 or less) and spread it over the whole sensor. That part of the image does not need flattening anyway.
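The F/19.52 and x3.25 figures follow from the "optimum F/ratio is 4x the pixel size in microns" rule of thumb used throughout these posts. A minimal sketch (the native F/6 is an assumption chosen to reproduce the x3.25 quoted above):

```python
def required_f_ratio(pixel_um):
    # Rule of thumb from these posts: optimum planetary F/ratio ~ 4 x pixel size (um)
    return 4 * pixel_um

d800_pixel_um = 4.88
f_needed = required_f_ratio(d800_pixel_um)
print(f_needed)                        # 19.52

native_f = 6.0  # assumed native F/ratio of the scope in question
print(round(f_needed / native_f, 2))   # 3.25 -> the extender magnification needed
```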
  18. Not sure what you want to image, but that scope is not the best choice for imaging. If you want to try deep sky astrophotography, then you would need something like an EQ6 to hold that scope stable. Besides the weight, which is at the limit (but doable with a compact scope), that scope is way too long for a mount like the HEQ5. For planetary astrophotography, the HEQ5 would be more than suitable, as in that case you don't need very precise tracking. Just keeping the planet in the FOV is enough, and that can be accomplished with even an EQ5 class mount.
  19. Have you seen any Jupiter image produced with a Tak that is better than one from, say, a mass produced 8" Newtonian? But they are not. You are (still) confusing two things: FOV with the image scaled to fit the display, and the 100% zoom 1:1 pixel view. This is the size of your Jupiter: It is the same size of Jupiter that you will get if you open the image you posted above in a new window (right click, open in new window) and then set zoom to 100% instead of the image being scaled to the display size. Whenever an image has more pixels than the display device, it will be scaled down to fit the screen and objects in the image will look smaller. You don't need that much FOV around planetary images. That is why people use an ROI of, say, 640x480px. It is more than enough for even a 14" telescope to fit Jupiter. Look at the image posted by Neil above - it is only ~450x450 px, yet Jupiter is large in it. Your image is full format 1936x1096. Compared to the image itself, Jupiter - which should be something like 136px or so - will be tiny: less than 10% in width and height. On the other hand, in an image that is 400x400, it will occupy about 1/3 in height and width. You should really try to understand: - FOV vs pixel count and pixel size - Scaling - especially "fit to screen" vs 100% zoom level
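The "fit to screen" effect above is just a ratio of pixel counts - the planet's pixel size is fixed by the optics, so its apparent size on screen depends only on the frame it sits in:

```python
def fraction_of_frame(object_px, frame_px):
    """Fraction of the frame width that an object occupies."""
    return object_px / frame_px

# Jupiter at ~136px on a full 1936px-wide frame vs inside a 400px ROI:
print(round(fraction_of_frame(136, 1936) * 100))  # ~7% of frame width -> looks tiny
print(round(fraction_of_frame(136, 400) * 100))   # ~34% -> looks large
```

Same 136 Jupiter pixels in both cases; only the surrounding FOV changes.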
  20. Your planets are already larger than they should be. I know this might sound strange, but just hear me out for a second. The maximum planet size that you can record that is not over sampled - meaning "just enlarged without any detail" (which is what you also get when enlarging in software) - is governed by aperture size. There is only so much detail that you can get with a given aperture - that is called the resolving capability of the telescope. It is down to the laws of physics and does not depend on the quality of the telescope. You can calculate this size for any given aperture and also calculate the needed F/ratio for a given pixel size. The ASI462 has a 2.9um pixel size. The optimum F/ratio for planetary imaging is x4 this number, i.e. F/11.6. You are already at F/7 with your Esprit, so you need only a x1.65 barlow (not x2.5 or higher - take a x1.5 or x2 barlow element and dial in the distance to the sensor to get x1.65 magnification; by the way, this only works with a barlow, not with a telecentric like a Powermate). At F/11.6, 150mm of aperture results in 1740mm of focal length. With a 2.9um pixel size, that is 0.3438"/px. Given that Jupiter is now 48" in diameter, that results in a 139px image of Jupiter's disk. That is the maximum you can get with full detail. With a planetary camera you need to consider only a few things: - QE of the sensor - Read noise level - How fast the read out is. The maximum frame rate of the ASI224 is over 300fps (640×480 at 299.4fps, 320×240 at 577.9fps in 8bit mode) and it is still one of the best planetary cameras. The ASI462 is not bad either. Both can produce excellent results. The only difference is in pixel size and the F/ratio needed. The ASI224 needs F/15 while the ASI462 needs F/11.6.
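The 0.3438"/px and 139px figures above can be reproduced with the usual plate-scale formula (206.265 converts between radians and arcseconds for um pixels over mm of focal length):

```python
def arcsec_per_px(pixel_um, focal_mm):
    """Sampling rate in arcseconds per pixel."""
    return 206.265 * pixel_um / focal_mm

def planet_px(diameter_arcsec, pixel_um, focal_mm):
    """Size of a planet's disk in pixels at a given sampling rate."""
    return diameter_arcsec / arcsec_per_px(pixel_um, focal_mm)

# ASI462 (2.9um) at F/11.6 on 150mm of aperture -> 1740mm focal length:
print(round(arcsec_per_px(2.9, 1740), 4))  # 0.3438 "/px
print(int(planet_px(48, 2.9, 1740)))       # 139 px for a 48" Jupiter
```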
  21. You should aim for F/15, so a x2.5 barlow would be ideal. Barlow magnification changes as you change the sensor to barlow element distance. This can help you get x2.5 from either of the two barlows that you own. For the x2 you need to increase the sensor-barlow distance, and for the x3 you need to reduce it. In any case, you can dial in x2.5 in daytime if you record a distant object and do some measurements and calculations. Find an object with a feature that is easily measured for size in pixels - like a tall building, church tower or bridge. Shoot it without the barlow and measure the size in pixels. Now use the barlow and adjust the distance (using spacers, an adjustable extension, or pulling the camera nose piece in/out of the barlow itself) and record/measure again until you get an image that is x2.5 as large (x2.5 more pixels measured across the feature). That is your target setup.
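The daytime calibration above boils down to one division. The pixel counts here are made-up example measurements, not from any real test shot:

```python
def measured_magnification(size_with_barlow_px, size_without_px):
    """Effective barlow magnification from two daytime shots of the
    same feature (e.g. a church tower), with and without the barlow."""
    return size_with_barlow_px / size_without_px

# Hypothetical measurements: feature spans 120px bare, 300px with barlow:
print(measured_magnification(300, 120))  # 2.5 -> target spacing found
```

Adjust the spacers and re-measure until this ratio lands on the magnification you are after.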
  22. I'm a bit confused by what you've written above and your data. First, here is what can be teased out of that tif: A bit noisy, but a decent amount of detail. But here is the problem. You say you use an XT8 and ASI120 without any barlow. That is 1200mm of focal length at 3.75um pixel size. The sampling rate is therefore ~0.645"/px. Jupiter is about 45-50 arc seconds in diameter at opposition (depending on the year), but let's take 50". With the above sampling rate and 50", the max size of Jupiter in pixels that you can record is about 77px. I've measured the diameter of Jupiter in your image above to be around 170px, give or take. My guess is that you drizzled your data x3 in AS!3. You already don't have a good frame count to combat noise, and on top of that you drizzled your data, which severely reduces SNR and does not improve detail. When you start denoising, and in particular applying deconvolution to such noisy, over sampled data, you get a bunch of ripples around noisy dots. That is what your large image represents. Btw - look at how nice the image looks at 50% of the size of that tiff: I know it is small, but it is sharp and virtually noise free.
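The ~77px sanity check above is the same plate-scale arithmetic, and comparing it to the measured diameter flags the oversampling:

```python
def max_planet_px(diameter_arcsec, pixel_um, focal_mm):
    """Largest non-oversampled planet size (px) for a given setup."""
    sampling = 206.265 * pixel_um / focal_mm  # arcsec per pixel
    return diameter_arcsec / sampling

# XT8 (1200mm focal length) + ASI120 (3.75um), Jupiter taken as 50":
expected = max_planet_px(50, 3.75, 1200)
print(int(expected))                   # 77 px is the most the setup can record

measured = 170  # diameter measured off the posted image
print(round(measured / expected, 1))   # ~2.2x larger -> empty magnification
```

Any factor well above 1 here means the image has been enlarged (by drizzle or resampling) without adding real detail.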
  23. If you are limited in the budget that you have, you can try the following: - You could try adding a DIY tracking motor to your EQ mount. - You could try to manually track. In fact, given that the Pleiades are quite a wide target, you could try to get a bit better finder scope than the one that comes with the telescope - for example this one: https://www.svbony.com/sv165-mini-guider-scope/ or perhaps something like this: https://www.svbony.com/sv28-spotting-scope/#F9308B-W2546A Then you mount that small scope on your main scope and attach a mobile phone to it. You will image with the small scope and use the Astromaster 130 as your guide scope. You put a high power eyepiece in the Astromaster 130, and when you start the exposure, you manually move the scope slowly so that stars stay in the same place in the eyepiece.