Everything posted by vlaiv

  1. Yes, but think for a moment about what happens: if the zero order image is not the same as the original source - and the first dispersion is in effect producing a number of "sources" for the second dispersion - then we don't have a recording of those "sources", only their zero orders.
  2. @robin_astro Follow-up question on the above - do we know for sure that the zero order image is unaffected by the grating response? The above can only work if the zero order image has the same spectrum as the source (only attenuated) - otherwise it won't.
  3. Trying to wrap my head around this, as it sounds like a rather interesting proposition for measuring the SA - regardless of the response of the front mounted grating (laser printed or 3d printed). I think I understand. The resulting spectrum would look something like this: Then I measure the diagonal and divide it by the vertical, since the vertical is: source * optics * first grating, while the diagonal is: source * optics * first grating * second grating. Do I have this right? If this is correct - then excellent: knowing the source and the SA allows me to measure the sensor response later on.
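     A minimal numpy sketch of that division, assuming both traces have already been extracted and resampled onto a common wavelength grid (the array contents below are placeholders, not real data):

         import numpy as np

         # Placeholder traces on a shared wavelength grid (nm); in practice
         # these come from the vertical and diagonal spectra in the image.
         wavelengths = np.linspace(400, 700, 301)
         vertical = np.ones_like(wavelengths)  # source * optics * 1st grating
         diagonal = np.ones_like(wavelengths)  # ... * 2nd grating (the SA)

         # The common factors cancel in the ratio, leaving only the SA response.
         sa_response = np.divide(diagonal, vertical,
                                 out=np.zeros_like(diagonal),
                                 where=vertical > 0)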
  4. M110

     Here is a bin x4 of the original stack. I did not remove the vignetting, and I cropped to try to minimize it - but once stretched, it really shows. However, the image is quite smooth now (I did some selective denoising) and there is a lot of structure in the galaxy:
  5. M110

     If you have the original linear version (the one from the first post), maybe post that as well, as it is easier to bin a single image in the end.
  6. I guess I do. It's quite a useful tool - spend a short while crunching numbers and it gives you a good starting point, and it lets you set your expectations as well.
  7. Sorry to hijack the thread, I'll try not to do it any more. I just need to ask one question, @robin_astro: if I do the following, can I extract the SA response (a sketch of the arithmetic follows below)?
     - take a transparency sheet for an overhead projector (a printable one) and measure its response (divide the artificial star spectrum taken with / without a front aperture cover made from the sheet)
     - print a full aperture grating with a laser printer
     - get a spectrum of the artificial star using the full aperture grating and correct it with the sheet response from step 1 (which leaves optics + camera response and the source spectrum)
     - get a spectrum with the SA and use the above to extract only the SA response
     What I'm trying to ask is: will adding the grating change the spectral response of the transparency sheet? A similar question - if I 3d print a front aperture grid (assuming I can print that fine), will the "air gaps" have a response of 1? Does the grid affect the response in any way, or is it just the optical properties of the material in the gaps?
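     The division chain, sketched in numpy - all spectra are placeholders on a common wavelength grid, and the names are mine, not anything standard:

         import numpy as np

         wl = np.linspace(400, 700, 301)
         star_with_sheet = np.ones_like(wl)            # step 1 measurement
         star_without_sheet = np.ones_like(wl)         # step 1 measurement
         printed_grating_spectrum = np.ones_like(wl)   # step 3 measurement
         sa_spectrum = np.ones_like(wl)                # step 4 measurement

         def ratio(a, b):
             """Divide two spectra, guarding against zero denominators."""
             return np.divide(a, b, out=np.zeros_like(a), where=b > 0)

         # Step 1: transparency sheet response.
         sheet_response = ratio(star_with_sheet, star_without_sheet)

         # Steps 2-3: correct the printed-grating spectrum for the sheet,
         # which leaves source * optics * camera.
         reference = ratio(printed_grating_spectrum, sheet_response)

         # Step 4: SA spectrum divided by that reference = SA response only.
         sa_response = ratio(sa_spectrum, reference)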
  8. Yes, there is a significant difference between Jupiter / Saturn and Neptune. Jupiter, for example, has a magnitude of about -2.8 (-2.88 this opposition), while Neptune sits at 7.82. That is 7.82 - (-2.88) = 7.82 + 2.88 = 10.7 magnitudes of difference in brightness - Neptune is about x19000 fainter than Jupiter.

     Next, Jupiter's apparent diameter at opposition is 49.12", while Neptune's is 2.36". With the 9.25 EdgeHD, if you sample at the critical sampling rate of 0.22"/px, each pixel covers 0.22" x 0.22" = 0.0484 square arc seconds of sky. Jupiter's area is 49.12"^2 * pi / 4 = ~1894.99 square arc seconds, or 1894.99 / 0.0484 = ~39152 px. Neptune's area is 2.36"^2 * pi / 4 = ~4.3744 square arc seconds, or 4.3744 / 0.0484 = ~90.4 px. Jupiter covers x433 more pixels than Neptune at critical resolution - which means that on average each Neptune pixel receives 19000 / 433 = x43.88 less light. Therefore, if you image Jupiter with 5ms exposures, you need to expose x43.88 longer per sub to get the same SNR - or about 220ms (if my calculation is right, of course - please do check it just to make sure).

     The problem with 220ms exposures is that you won't freeze the seeing; you need very good seeing conditions and a long video - many, many frames - in order to collect enough for a decent stack. With regular seeing, it can happen that you don't get a single good / unblurred sub at 220ms. In the end, the best you can hope for is a blue dot about 11px in diameter
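     The whole chain of arithmetic above, as a quick Python check (figures as quoted in the post):

         import math

         m_jup, m_nep = -2.88, 7.82
         flux_ratio = 10 ** ((m_nep - m_jup) / 2.5)   # ~x19000 fainter

         px_area = 0.22 ** 2                          # 0.0484 sq. arcsec
         px_jup = math.pi / 4 * 49.12 ** 2 / px_area  # ~39152 px
         px_nep = math.pi / 4 * 2.36 ** 2 / px_area   # ~90.4 px

         per_px = flux_ratio / (px_jup / px_nep)      # ~x44 less light per px
         exposure_ms = 5 * per_px                     # ~220 ms for same SNR
         print(round(flux_ratio), round(per_px, 1), round(exposure_ms))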
  9. M110

     For a start, maybe consider binning your data to the actual resolution achieved? The way it is, it is very oversampled: you can easily bin it by x4 to improve SNR x4.
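     For reference, a minimal average-bin in numpy (mono data assumed; for roughly uncorrelated noise, SNR improves by the bin factor):

         import numpy as np

         def bin_image(img, factor=4):
             """Average-bin a 2-D image by `factor` in each direction."""
             h = img.shape[0] - img.shape[0] % factor   # crop to a multiple
             w = img.shape[1] - img.shape[1] % factor
             return img[:h, :w].reshape(h // factor, factor,
                                        w // factor, factor).mean(axis=(1, 3))

         # Sanity check on pure noise: std drops ~x4 after x4 binning.
         noise = np.random.default_rng(0).normal(size=(1024, 1024))
         print(noise.std(), bin_image(noise).std())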
  10. Yes, that is the last step - removing the influence of the atmosphere. However, a G2V star is not really white - it is this color: here is white for reference:
  11. I want to be able to measure the camera response. It has to do with imaging and color reproduction. That way I can take any light source and produce an accurate spectrum of it - that is one thing. The other thing is that I can produce the QE response curve of the sensor. Without knowing the camera and SA response, I can still measure response curves of filters - and that is one part I'd like to do (simply record with / without the filter and divide). But I would also like to measure the sensor response curve. This dates back to when I discovered that Christian had measured the ASI1600 QE and it was significantly different from the published value. I have no idea which one, if either, is correct. In any case - these are all low resolution applications without need for high accuracy.
  12. Is there any way to separate the SA100/200 part of the response from the camera / optics response? I guess it can be done if one has a sensor with a known curve available - that way the optics + SA response can be extracted.
  13. How about obtaining the instrument response with a Tungsten-Halogen lamp acting as an artificial star (a pinhole and enough distance)? Are those precise enough? I don't mean the special ones classified as calibration sources (too expensive) - but regular, off-the-shelf ones?
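     One could at least sanity-check such a lamp against Planck's law for a ~3000 K filament - this treats the bulb as an ideal black body and ignores the envelope transmission, which is only an approximation:

         import numpy as np

         h, c, k = 6.626e-34, 2.998e8, 1.381e-23   # SI constants

         def planck(wl_m, T=3000.0):
             """Spectral radiance B(lambda, T) in W / (m^2 sr m)."""
             return (2 * h * c**2 / wl_m**5 /
                     (np.exp(h * c / (wl_m * k * T)) - 1))

         wl = np.linspace(400e-9, 700e-9, 301)   # visible range, meters
         reference = planck(wl)                  # assumed source spectrum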
  14. One with the largest sensor that the RASA8 corrected circle can cover, and the highest QE. Since the RASA8 can cover 30mm, that means APS-C sensor size. The ASI2600 seems like the logical choice, as it has higher QE than the ASI071MC. Any ZWO camera you choose will need binning, since they all have small pixels.
  15. I think it is standard 16:9, isn't it? Same as the 482MC, and similar to other models that have ~1920 x 1080 (actually models like the 462, 385 and 290 are 1936 x 1096, but that is just 16 extra lines for calibration that have been "unlocked"). I guess the best "small" EEVA sensor is still the ASI183 - but it is also more expensive than the other models (although by size it is quite close to these two new models: 1" vs 1/1.2").
  16. You will lose "resolution" but you won't lose detail. Resolution here stands for the number of megapixels and the level of "magnification" when you show the image at 100% of its size - meaning one image pixel per one screen pixel.

      Here is an example of what it means to properly sample and to oversample an image. I have an image that is sampled at 1"/px - which is oversampling, as the actual detail in the image needs only 2"/px: That is a crop of that image shown at 100%. If I now resample it to 2"/px and show it at 100% (same area crop) - this is what we will see: All the detail from the above image is still there. The image looks sharper and has more contrast - although all we did was change the sampling rate. Now, if you want to look at it enlarged - you can, and you will get the exact same image as above: Indeed, that last image is not the original one; it is the reduced image (the one at 2"/px) enlarged to 200% (resampled back to 1"/px) - yet it shows the same level of detail.

      This tells us that no detail is present in the image at sampling rates finer than 2"/px, and that is all you need to present the data you captured. It looks better when viewed at its respective 100% zoom level, and if you look at the full image (fit to screen) there will be zero difference. If you really want to "zoom in" on the reduced version, then do so - it will still show the same level of detail as the oversampled image (a round-trip sketch follows below). You can also try the same with a properly sampled image - reduce it x2 and then enlarge it - and you will see the difference, because the twice reduced version of a properly sampled image can't record all the detail. From an SNR point of view, it is better to have a properly sampled image than an oversampled one.

      So yes, I'd say that with the RASA8 you actually bin x3 rather than x2, as that gets you closer to the 6.7µm pixel size if you are working with a 2.4µm pixel camera: 2.4µm x 3 = 7.2µm.
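      The reduce-then-enlarge round trip is easy to reproduce, e.g. with Pillow 9.1 or later (the file name is hypothetical):

          from PIL import Image

          img = Image.open("oversampled.png")            # e.g. 1"/px data
          small = img.resize((img.width // 2, img.height // 2),
                             Image.Resampling.LANCZOS)   # down to 2"/px
          restored = small.resize(img.size,
                                  Image.Resampling.LANCZOS)  # back to 1"/px
          restored.save("restored.png")
          # If the data only holds 2"/px of detail, `restored` matches the
          # original; a properly sampled image visibly loses detail instead.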
  17. I think you can bin in the drivers - so you can choose a binned format from any software that uses the ASCOM drivers. Possibly the native drivers support binning as well.
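      For example, through ASCOM binning is just a camera property - a sketch using the ASCOM camera simulator on Windows (any ASCOM camera driver exposes the same BinX / BinY properties):

          import win32com.client  # pywin32 + ASCOM platform assumed

          cam = win32com.client.Dispatch("ASCOM.Simulator.Camera")
          cam.Connected = True
          cam.BinX = cam.BinY = 2   # request 2x2 binning from the driver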
  18. Maybe a tad too much of a stretch, and the background is definitely set too high.
  19. Actually - maybe still consider the 485. It appears that these two cameras are really the same except for pixel size, and it may well be a case similar to the ASI294, which had double-size pixels: the ASI485 has 2.9µm pixels and the ASI482 has exactly double that, 5.8µm. The ASI485 has a 13K full well and the ASI482 has 51K (13 * 4 = 52). The ASI485 is quoted at 0.7-6.4e read noise while the ASI482 is at 1.5-12.9e (0.7 * 2 = 1.4, 6.4 * 2 = 12.8). In other words - you get the ASI482 from the ASI485 if you bin your data in software. You can easily bin in software - but you can't really divide pixels in software.
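      The arithmetic, spelled out: software binning sums 4 pixels, so full well adds x4 and (roughly uncorrelated) read noise adds in quadrature, i.e. x2:

          pixel_um, full_well, read_noise = 2.9, 13_000, (0.7, 6.4)  # ASI485

          binned = (pixel_um * 2,                        # 5.8 um, ASI482 pixel
                    full_well * 4,                       # 52,000 e- (~51K quoted)
                    tuple(rn * 2 for rn in read_noise))  # (1.4, 12.8) e-
          print(binned)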
  20. I really gave it my best and it only possibly shows hints of both Heart and Soul (in reality it shows the dark features around them rather than the actual nebulosity - compare with Olly's image above). This has been binned x6 in an attempt to pull any sort of signal out of the background.
  21. I'm afraid that is not possible with this data - the faint signal has been lost to something, and I can't really tell what.
  22. Something is wrong with the data and I can't quite tell what it is. I guess it is down to the stacking process. Did you by any chance use median as the stacking method (that is just a wild guess - I have no idea what might actually cause this)? Here is what I mean by strange data - all channels have an odd histogram; here is red, for example: A normal stack should have a single histogram peak near the background value. This one has 6 humps - three of them quite prominent. A similar thing happens with the other channels. When zoomed in and linearly stretched, the background shows posterization instead of normal noise: Could it be that you shot 8bit images instead of 16bit, or used jpeg instead of RAW? Maybe in-camera noise reduction was turned on, so each sub ended up being filtered?
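      One way to make that check objective is to count the prominent humps in a channel histogram - a sketch, assuming the channel is already loaded as a numpy array (the random data below just stands in for a healthy stack):

          import numpy as np
          from scipy.signal import find_peaks

          def count_humps(channel, bins=256):
              """Count prominent peaks in a channel's histogram."""
              hist, _ = np.histogram(channel.ravel(), bins=bins)
              smooth = np.convolve(hist, np.ones(5) / 5, mode="same")
              peaks, _ = find_peaks(smooth, prominence=0.05 * smooth.max())
              return len(peaks)

          # A normal linear stack should report ~1 hump, not 6.
          red = np.random.default_rng(1).normal(1000, 10, (512, 512))
          print(count_humps(red))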
  23. Yes, the flats are not quite good. Here is what I managed to pull out: