Everything posted by vlaiv

  1. Yes it will, especially if you live under heavy light pollution. It's best to use super pixel mode to debayer your data when using such a filter.
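If you debayer in software, super pixel mode is easy to sketch - here is a minimal Python/numpy example, assuming an RGGB Bayer pattern (check your camera's actual pattern before using it). Each 2x2 Bayer cell becomes one RGB pixel at half resolution, with no interpolation:

```python
import numpy as np

def superpixel_debayer(raw: np.ndarray) -> np.ndarray:
    """Super pixel debayer for an RGGB sensor: each 2x2 Bayer cell
    becomes one RGB pixel, halving resolution but avoiding interpolation.
    Expects a raw mono frame with even width and height."""
    r  = raw[0::2, 0::2].astype(np.float32)
    g1 = raw[0::2, 1::2].astype(np.float32)
    g2 = raw[1::2, 0::2].astype(np.float32)
    b  = raw[1::2, 1::2].astype(np.float32)
    g = (g1 + g2) / 2.0          # average the two green samples per cell
    return np.dstack([r, g, b])  # (H/2, W/2, 3) RGB image
```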
  2. I can't answer the question about the optical performance of each (yet) - I would need some time to conduct the tests - but when I purchased my last scope, I simply added the 1.25" 99% dielectric diagonal you linked, because I know that the Mak102 comes with a simple 90% plastic diagonal. I now have both and can run some tests for you later this summer when I return to observing and the planets put on a show (maybe even sooner than that - on lunar).
  3. F/ratio or F/speed is mostly a term from daytime photography, and while it is somewhat useful in astrophotography, it does not have the same significance. I'll explain.

In daytime photography we operate in a light dominated regime - there is simply plenty of light, and signal to noise is well defined by the light level. In astrophotography things are not so simple, as we operate in a photon starved domain - many other noise sources become important and have an impact on SNR, so a simple rule like "this F/ratio change means that exposure change" no longer holds.

Another thing is that F/ratio in daytime photography is usually used in the context of the same camera. You swap an F/2 lens for an F/1.4 lens, or you stop your lens down from F/5.6 to F/4. Pixel size remains the same. In astrophotography, pixel size is an important variable and determines the speed of the system - you have a choice of camera to put on a telescope as well as a choice of telescope itself. We have seen how changing both focal length and pixel size can make an F/6 telescope the same speed as an F/12 telescope.

What we did not cover there is the FOV aspect of that gymnastics. If one camera has twice as large a pixel as another, you'll have to juggle sensor size as well: 3000 x 6µm is 18mm, while 4000 x 3µm is only 12mm. Once you set your working resolution, the field of view you get is determined by the number of pixels. You had better make sure that you have enough pixels to cover the target - that your sensor is large enough and that your scope can produce an image that large in the focal plane.

Here is a spot diagram of a Celestron RASA scope. It shows field definition - how an ideal star will look when imaged. In each of these boxes there is a small circle. That is the Airy disk. Even a perfect scope with no aberrations will not produce a pin point result - a star / pin point source will actually be a little circle / disk when magnified, with rings around it. The smaller the aperture of the scope, the larger that disk is. It is known as the Airy disk and it limits the resolution of the telescope - it is in part what blurs the image the telescope produces. The spot diagram above is a simulation of what the actual image of a pin point source looks like through the given telescope, with a reference circle representing an ideal telescope. Btw, the Airy disk depends on the wavelength of light, as can be seen in the diagram - the circles for red are larger than those for the blue part of the spectrum.

In any case, a diffraction limited telescope is one where all (or the majority) of the dots on the spot diagram land inside the Airy disk. The scope above is clearly diffraction limited only in green light and only in the central portion of the field - up to 5mm away from the optical axis (or 10mm diameter). Such a scope can only be used as a wide field imaging instrument, where the resolution wanted and achieved is significantly lower than the aperture of the telescope would suggest.
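To make the sensor-size arithmetic concrete, here is a small Python sketch using the numbers from the example above (the 1"/px working resolution is just an illustrative choice):

```python
# Sensor width and field of view for two pixel-size choices at a fixed
# working resolution. Numbers follow the example in the post.
def sensor_width_mm(n_pixels, pixel_um):
    return n_pixels * pixel_um / 1000.0

def fov_arcmin(n_pixels, arcsec_per_px):
    return n_pixels * arcsec_per_px / 60.0

print(sensor_width_mm(3000, 6.0))   # 18.0 mm
print(sensor_width_mm(4000, 3.0))   # 12.0 mm
# At a fixed 1.0"/px working resolution, FOV depends only on pixel count:
print(fov_arcmin(3000, 1.0))        # 50.0 arc minutes
print(fov_arcmin(4000, 1.0))        # ~66.7 arc minutes
```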
  4. An imaging flat box is an excellent tool for this, and yes - everything else is self explanatory. I still don't get this part - isn't the lens entrance pupil in fact its aperture for collimated / parallel rays (object at infinity)? If so, a large fast lens will have an aperture much larger than the exit pupil of the eyepiece. That is exactly my point - a fast phone lens will effectively become a slow (stopped down) phone lens if its aperture is smaller than the exit pupil of the eyepiece - it will stop down the exit beam simply because it is smaller than it. It is the same case as using eyepieces with very large exit pupils - our eye becomes the aperture stop if the exit pupil is larger than about 7mm. Similarly, if the aperture of the camera lens is smaller than the exit pupil, it will become the aperture stop.
  5. If you set your working resolution, as in your example, then as far as speed of the system is concerned - aperture wins. Speed can be defined as aperture at a given resolution. If both scopes are F/6 and both sample at the same 1.03"/px, the larger scope will be faster. In fact, in your proposed case it will reach the same SNR in 1/4 of the time. If you change the F/ratio of one scope and use an F/6 scope with smaller pixels and an F/12 scope with larger pixels, they will reach the same SNR in the same time - because they have the same aperture size.

Other than that, there is a plethora of things to consider when choosing a telescope - size and weight (and thus mounting requirements) are just part of the story. What about spot diagrams? Is the telescope diffraction limited, and over what field? How big is the fully illuminated field? Is the telescope mechanically sound / rigid? Is it well baffled against stray light? How well does it tolerate temperature changes?

Well, you answered the question yourself. As is, an F/4 classical Newtonian is going to have a very small diffraction limited field. It will require a coma corrector. A coma corrector, while it corrects coma, enlarges star sizes and, depending on the type of corrector, introduces spherical aberration on axis, and so on. Such a fast system is very sensitive to collimation and sensor tilt. Even if you account for everything, you'll get maybe a 24-25mm imaging circle.

Part of the achieved resolution depends on telescope size. You can take an 80mm apo scope that has a very good field and is mechanically sound and everything - but you can't image at 1"/px with it, as the Airy disk size of such a scope is 3.21". There is no way you are going to achieve 1"/px with so small an aperture - the best you can hope for is about 2"/px. A 150mm scope has a 1.71" Airy disk and can image in the 1.4-1.5"/px range at best (see the sketch below).
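The Airy disk figures quoted above follow from aperture alone. A quick Python sketch - the diameter formula is 2 x 1.22 x λ/D, and λ ≈ 510nm is my assumption here (it reproduces the quoted numbers):

```python
def airy_disk_arcsec(aperture_mm, wavelength_nm=510.0):
    """Angular diameter of the Airy disk (first minimum to first minimum)."""
    d = aperture_mm / 1000.0               # aperture in meters
    lam = wavelength_nm * 1e-9             # wavelength in meters
    return 2 * 1.22 * lam / d * 206265.0   # radians -> arc seconds

print(round(airy_disk_arcsec(80), 2))   # 3.21" for an 80mm apo
print(round(airy_disk_arcsec(150), 2))  # 1.71" for a 150mm scope
```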
  6. Not sure that I understand this. Say I have an 85mm F/1.4 lens - that lens has an aperture of about 60.7mm, regardless of where its iris is. The exit beam of light from the eyepiece is collimated and has a certain "pencil" diameter - no matter where you position that "pencil" within the lens aperture, it will be focused onto the camera sensor. It works the same as a telescope - if you put an aperture mask with a small opening on a refractor, it does not matter where you put the opening (it does matter as far as lens figure is concerned, but not as far as light throughput / vignetting and distortion are concerned). With a large lens it does not matter if you place it before or behind the exit pupil position - the pencils will still hit the aperture (left is the eyepiece and right is the large lens - it has aperture over the whole rectangle side). An additional benefit of a long focal length lens is that it magnifies the image from the eyepiece significantly, so lens aberrations are minimized in comparison to eyepiece aberrations. The only drawback is that you need to shoot multiple images to cover the whole FOV of the eyepiece, and then stitch those images together using clever software like Microsoft ICE. I'm not entirely sure - eyepiece aberrations will be "imprinted" onto the wavefront of the exiting light "pencil", so if you cut away part of that wavefront, you'll effectively throw away aberrations. It is like putting an aperture mask on a telescope to mask a turned down edge, for example.
  7. How does that work? What about the simple drift timing method?
  8. Ah yes, sorry, I did not pay attention to that detail. Your scope sits on an AltAz type mount, which is less than ideal for imaging. Do have a look at this thread: It is a very long running thread, but it should serve to encourage you. You'll just need to adopt a certain imaging technique - short exposures and lots of them - and learn how to process such data. Hopefully you'll find guidance in the comments in said thread.
  9. I'm sure it is fine for autoguiding, as it was built for that purpose. It uses a good sensor - the IMX290, the same sensor used in the ASI290 and other cameras. You'll also be able to image with said camera - just keep in mind that your success depends on the scope you'll be using. This is a small sensor with very small pixels, so it is best paired with a fast, very short focal length scope. Alternatively, consider getting one of these: https://www.firstlightoptics.com/reducersflatteners/astro-essentials-05x-1-25-focal-reducer.html I started out with a camera like that (QHY/ALCCD 5-IILc, the equivalent of the ASI120 color) and I used it for everything - planetary imaging, guiding and some DSO work.
  10. You won't see any difference in the amount of sky between the two eyepieces, provided that they are both 1.25" eyepieces. A 32mm 50° AFOV EP (a regular Plossl) has a field stop at the maximum for the 1.25" format. A 40mm will show you the same sky, only smaller - in only about 43° of AFOV, if I'm not mistaken. The only thing that will change, apart from image size, is the exit pupil. This is sometimes good if your scope is particularly slow (think F/12 and above) - otherwise the 32mm is simply the more sensible option.
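To see why both eyepieces show the same amount of sky, compute the true FOV from the field stop. A sketch, assuming a ~27mm field stop (about the practical maximum for the 1.25" format) and a hypothetical 1000mm FL scope:

```python
import math

def true_fov_deg(field_stop_mm, scope_fl_mm):
    """True field of view from the eyepiece field stop and scope focal length."""
    return math.degrees(field_stop_mm / scope_fl_mm)

# Both eyepieces at (roughly) the maximum ~27mm field stop of the
# 1.25" format show the same amount of sky on e.g. a 1000mm scope:
print(round(true_fov_deg(27.0, 1000.0), 2))  # ~1.55 degrees, for either eyepiece
```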
  11. No, but you can if you wish. The L-eNhance filter is a UHC type filter and is suitable for imaging emission type targets - which means Ha regions, planetary and emission nebulae and supernova remnants. These targets emit most of their light in a few narrow emission bands - Ha, Hb, OIII, SII, NII and so on. A UHC type filter will pass most if not all of these and cut most of the remaining wavelengths. This means that the color of these targets will be good regardless of the filter used. What won't be good is star color. You can't get good star color using such filters, because stars emit light over the whole spectrum and this is a very restrictive filter. You have a choice - you can put up with very strange star colors in your images, or you can shoot RGB data for use with the stars only. This requires special processing, but the good news is that you don't need as much data to capture star colors as you need for nebulosity. This means that you can dedicate just a small amount of time to capturing unfiltered star color data and most of your time to capturing the target with the filter. If you want to attempt to blend in proper star color, here is the workflow that I would recommend:
- shoot the target with the filter and create a starless version of the image - process it separately
- use the starless version of the image to get a stars-only version of the image
- use the unfiltered RGB data to color the stars-only version (color transfer)
- blend the two versions in the end - the starless version processed separately, and the stars-only version with true star color transferred.
Btw, don't use this filter on star clusters, galaxies or reflection nebulae - these are all broad band type targets.
  12. Light pollution is very difficult to predict and explain because it varies with many parameters. It is both a directional and an omnidirectional thing. You'll notice that most of the time the sky is better over the sea, because there are no light sources to be scattered in that area. Turning your back on street lighting, or having cliffs blocking direct light, is a good thing as far as your night vision is concerned, but it won't help with scattered light. Even if you don't see street lights directly, you'll notice that the sky is brighter in that general direction - ground light scatters off the atmosphere and creates a glow.

How much light is scattered in the atmosphere depends on several factors. The first, of course, is how much light there is in the first place - and I don't mean moving around and going further away from light sources; even if you stay put, the level of light changes during the night. Traffic changes in intensity and all those car lights contribute. People go to sleep at some point and turn off their lights - I've noticed that it usually gets darker after about 1am here where I am. Humidity in the air is a major factor in how much light is scattered. Transparent nights, which are good for observing faint stuff, also lower light pollution, as light does not scatter as much in the atmosphere. If there is haze, it deals you "double damage" - it attenuates the faint light of your target and helps scatter light pollution, making the sky look brighter.

If you spend some time at night shielded from direct sources of light and get dark adapted, you'll be able to judge which parts of the sky are darker than others. The best place will probably be near the zenith and a bit toward the side opposite those ground lights. You should wait for targets to be in that zone for the best observing.
  13. It is a bit more complicated. You need to create a sequence of a couple of tiles - as @iapa already said, capture software has features to help you create mosaics - you just say how you want to do it and it creates a sequence with coordinates and all. After that, stacking software should deal with stitching mosaics as well - at least I think APP does that for you. I made a couple of mosaics and stitched them "by hand". That is a bit more involved, but it can still be done. Other than taking more panels and stitching them afterwards (and any binning involved, if you want to image in the same time as you would capture the target with a smaller scope), it is pretty much the same as doing LRGB - there you also need to take multiple sets of subs with different filters; here you only reposition the scope between sets.
  14. Just a question: why do you need to do it in a single go? You can always do a mosaic. APP (not sure if you've managed to try it out, and whether you like it) will stitch them for you, if I'm not mistaken.
  15. Well, we have the stats above. Those are in ADU, and if I'm not mistaken, gain was set to around 0.25 e/ADU (from the fits header). Yep, it's about 0.25 (to round things up). Measured background values with filter:
R: 613.77
G1: 2222.72
G2: 2222.75
B: 1290.32
Even if we take the lowest one - ~600 ADU - and multiply by e/ADU to get the number of electrons, LP alone had a signal of ~150e, or about 12.2e of noise (square root of the signal). The ASI2600MC Pro has about 1.5e of read noise at gain 100, which was used (marked are the read noise and e/ADU values that we saw in the fits header). If LP noise is ~12e, that is about x8 larger than read noise. Since noise adds like linearly independent vectors (square root of sum of squares), 1.5e of read noise is going to make about 1% of difference to overall noise at most:
sqrt(1.5^2 + 12.2^2) = sqrt(2.25 + 148.84) = sqrt(151.09) ≈ 12.3
It really makes minimal difference to the already present noise - and this is for the filtered red channel, which had by far the least light pollution (all the others had more LP and hence higher LP noise, so read noise was even more inconsequential). In general, read noise is really not an issue with "normal" exposure lengths (a few minutes or more) in light polluted areas, since LP noise easily swamps read noise.
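Since independent noise sources add in quadrature, the read noise contribution is easy to check - a quick sketch:

```python
import math

def add_noise(*components):
    """Independent noise sources add in quadrature (root sum of squares)."""
    return math.sqrt(sum(c * c for c in components))

lp_noise = math.sqrt(150.0)   # shot noise of ~150e of LP signal
read_noise = 1.5              # e, ASI2600MC Pro at gain 100
print(round(lp_noise, 2))                         # ~12.25e
print(round(add_noise(lp_noise, read_noise), 2))  # ~12.34e, under 1% more
```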
  16. What sort of sensor-induced noise are you talking about? From your response you seem to distinguish between read noise and this other kind. Are you referring to dark current noise? (That is usually very small with cooled astro cameras - often lower than or equal to read noise at the exposure times normally used.)
  17. Up to a point. At some point, no matter how hard you try, the atmosphere is the limiting factor and you simply won't make use of smaller pixels. Here is a simple formula to help you calculate sampling resolution (see the sketch at the end of this post):

sampling rate ("/px) = 206.3 * pixel_size (µm) / focal_length (mm)

With modern cameras we now have pixel sizes around 4µm. If we put the 1"/px rule as a limit into the above equation, we get:

206.3 * 4 / focal_length >= 1"/px
focal_length <= ~825mm

There you go - you really don't need a focal length longer than about 800mm with modern sensors and small pixels. That does not mean you can't use a longer FL scope. I'm using a 1600mm FL scope, but then I have to be careful and process my data accordingly - I bin x2 or x3 with that scope, depending on the sky conditions on a particular night.

This is where you need to factor in that, for given conditions, you won't resolve more detail. Say you have an object that is 3 arc minutes in size. If you sample it at say 1"/px, it will be 180px large in the image. Can you make it 360px large? Sure - you can do it in two different ways:
- you can use a longer focal length and sample at 0.5"/px
- you can sample at said 1"/px and simply enlarge the image (resample it in software).

You will now say - well, what is the point of enlarging the image, I'll just get a larger image with no extra detail. And I'll say - the same thing happens if you image at 0.5"/px: you get a larger image with no extra detail (and more noise). This is very important to understand - there is a limit to what you can resolve, and if you push past that limit you'll get a larger blurry image, the same as if you enlarged the image on a computer, only with extra noise, because you are spreading the light over more pixels and the signal part of the SNR equation is getting lower (there is a limited amount of light that you capture with a given setup).

There is a good exercise that you can do - examine people's images posted here on SGL, note their sampling resolution, and compare them to a high resolution image of the same object by Hubble or such. I'll get you started with some examples - one of my captures:

This image is presented at 1"/px here. However, this image is not a 1"/px resolution image - it is a lower resolution image. You can't tell that just by looking at it, so let me show you with a couple of examples. First I'm going to take that same image and reduce it to 50% of its original size in software, thus making it 2"/px.

Now, to my eye, this version is properly sampled - you can see that by the size of the stars and the detail in the image (look at the dark features in the bridge between the galaxies). I'm now going to enlarge this small version back 200%, to be the same size as before:

Can you spot any difference between this and the original image (except for noise grain - I'm talking about detail in the image: is there anything that is more blurry in this second version, not as sharp)? Now compare that image to the same sampling rate from Hubble:

This is the resolution that 1"/px can display - it is much more detailed than my image above. In the end, I'd like to point out that there is a difference between the resolution needed to capture all the detail available in an image and all the detail that a particular resolution can show. This is a very complex topic - it goes into the Fourier domain, the sampling theorem, modulation transfer functions of telescopes and so on... I'm saying that so you don't get discouraged by comparing people's images with HST images.
HST images show what can be displayed at a certain resolution, and that is a bit more than the other case. By the way, look at this - I'm going to compare my image at 2"/px to the Hubble image at 2"/px: now they are getting very close in detail, and with a bit of sharpening (frequency restoration - the difference between the two cases above: the sampling rate needed to capture all available detail versus the level of detail available at a particular sampling rate) they would look even more similar. I added a bit of sharpening. Look at the bridge and the left galaxy - almost the same now. Stars are still much smaller in the Hubble image though. Ok, hope this helps a bit with understanding resolution / sampling rate and all of that.
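As promised, here is a small sketch of the sampling rate formula from the start of this post (the 1600mm / bin x2 line mirrors the setup mentioned above):

```python
def sampling_rate_arcsec_per_px(pixel_size_um, focal_length_mm):
    """Image scale in arc seconds per pixel."""
    return 206.3 * pixel_size_um / focal_length_mm

def max_focal_length_mm(pixel_size_um, limit_arcsec_per_px=1.0):
    """Longest focal length that does not oversample beyond the limit."""
    return 206.3 * pixel_size_um / limit_arcsec_per_px

print(max_focal_length_mm(4.0))                  # ~825mm for 4µm pixels
print(sampling_rate_arcsec_per_px(4.0, 1600.0))  # ~0.52"/px; bin x2 -> ~1.03"/px
```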
  18. Yes, it has a lot to do with the type of light pollution. I'm using an IDAS P2 for imaging. We also have quite a bit of yellow HPS lighting installed all over the city, and here the P2 does a good job. Here is an aerial photo of my city (I roughly marked my current location with an arrow):

SNR with/without filter will not depend on the exposure length used. The only difference exposure length makes is with respect to read noise. Other than that, there is no difference, as both signal and LP noise grow with exposure time in the same manner in the filtered and unfiltered scenarios. The math is really simple: there is target signal and its shot noise, there is light pollution signal / background and its associated noise, and a bit of thermal signal and its noise. Each signal grows linearly with time and each noise component is the square root of that signal - a filter just reduces each signal's strength by some percentage (except the thermal signal, which is very small anyway compared to the rest).
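To illustrate the claim that the filtered vs unfiltered SNR ratio does not depend on exposure length, here's a toy calculation - the electron rates and filter factors are made-up numbers, not measurements:

```python
import math

def snr(target_e_per_s, lp_e_per_s, t_s, read_noise_e=0.0):
    """SNR of a target over sky background for one exposure of t seconds."""
    signal = target_e_per_s * t_s
    noise = math.sqrt(signal + lp_e_per_s * t_s + read_noise_e ** 2)
    return signal / noise

# Hypothetical rates: the filter cuts LP by x5 and the target by x1.2.
for t in (60, 300, 900):
    unfiltered = snr(2.0, 50.0, t)
    filtered = snr(2.0 / 1.2, 50.0 / 5.0, t)
    print(t, round(filtered / unfiltered, 3))  # ratio is constant with t
```

Add a nonzero read_noise_e and the ratio will shift slightly for the shortest exposures - exactly the read noise caveat mentioned above.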
  19. The same logic holds - compare the green channel in the IMX571 response curve graph with the filter response graph. You will find that it cuts down light in a similar fashion, but not as drastically as with red - it cuts it down to say 55% or so (a factor of x1.9). If you are solely interested in visual, the best thing to do would be to actually try that filter on your scope. With imaging we are concerned with improving SNR per exposure (or for the total image). For visual you want a different thing - you want the threshold of JND - just noticeable difference. You want the nebulosity to be at least 10% of the LP brightness (some sources quote the visual JND as around 7%, but let's go with 10%). So it does not matter how much darker the target gets, as long as the sky gets darkened more in comparison to the unfiltered version. It's a bit like using higher magnification: if you double the magnification, everything gets darker by a factor of x4 in light intensity (the light is spread over a x4 larger surface). Cutting light to 50% is not that big of a deal. The actual benefit for you will also depend on the spectrum of your light pollution - how much darker does the background sky get with the filter?

Say you want to observe an sqm22 target in sqm18 skies. 10% is 0.1 as a ratio, and that translates into 2.5 magnitudes of difference. Now, you have 4 magnitudes of difference between target and sky. Will this filter help you or not? Say your LP is such that this filter cuts it down by a factor of x5, while it cuts down the green part of the spectrum (your night vision) by a factor of x2. This means you improve the contrast ratio by a factor of 2.5. The ratio no longer needs to be 0.1 unfiltered - it can be 0.04, as the filter will raise that to 0.1. Again, we can translate 0.04 into magnitudes - it is ~3.5 mags of difference. You still won't be able to see an sqm22 target in sqm18 skies, but you'll be able to see an sqm22 target in sqm18.6 skies, for example (just barely detectable). In any case, the filter acted as if the sky was improved by roughly 1 magnitude of darkness.
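The magnitude arithmetic above is just -2.5 * log10(ratio). A quick sketch with the numbers from this example:

```python
import math

def ratio_to_mag(ratio):
    """Convert a brightness ratio to a magnitude difference."""
    return -2.5 * math.log10(ratio)

print(ratio_to_mag(0.1))      # 2.5 mag  (the 10% JND threshold)
print(ratio_to_mag(0.04))     # ~3.49 mag (threshold after the x2.5 contrast gain)
# Contrast gain of the filter in the example: LP cut x5, night vision cut x2
print(ratio_to_mag(1 / 2.5))  # ~1.0 mag of effective sky darkening
```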
  20. Focal length is not really that important - you should really consider everything in arc seconds per pixel (sampling rate). Here are a couple of rules to help you out:
1. You won't manage to image at a resolution finer than about 1"/px without oversampling. That is just a fact of life - or rather a consequence of atmospheric seeing. Consider this the upper limit.
2. Oversampling is bad for you - it lowers the SNR that you can achieve in a set amount of time.
3. Your mount should be able to do at least half of your imaging resolution in terms of RMS - accurately measured. Say you want to image at 1.2"/px - you should really make sure your mount guides below 0.6" RMS (accurately measured). You won't be able to accurately measure 0.6" RMS with a finder/guider scope of say 160mm FL. If you want to go high res, you really need to think in terms of an OAG rather than a guide scope.
4. You can recover from oversampling by use of binning. However, you must be aware of read noise and its impact on single sub duration - slow scopes require longer individual exposures that are later stacked.
5. Maksutov scopes are OK, but RC scopes are better for imaging: larger corrected field, no dew issues, no moving primary mirror (although you can "lock" the mirror in a Mak), etc...
6. Achieved resolution in part depends on scope aperture. I would not consider going high res without at least 8" of aperture.
Here is an example of the Bubble nebula done at 1"/px in narrowband: This was taken with an 8" RC scope.
  21. Again, that depends. If you take, for example, the 600-700nm range to be the red part of the spectrum, as with interference filters, then you would expect a drop to about 50% (a factor of x2) if the observed spectrum is uniform: I outlined the 600-700nm range; the black curve is the filter response curve. The area under the curve is roughly equal to the area above the curve - this means that in the 600-700nm range the filter cuts a uniform spectrum roughly to half of its original strength. You can of course have other extremes - a signal can be filtered by a factor of x100 (to 1% of its original value), or by a factor of only x1.05 (to 95% of its original value) - but that would mean the signal itself is not very uniform. Here are examples of such cases: here we have an example of an "orange" signal that is all concentrated in the 620-630nm part. Such a signal would be completely obliterated by this filter. On the other hand, if that signal was in the 680-690nm range, like this, it would only lose about 5% of its value.

This all holds if we use very sharp cut-off filters for the colors - 600-700nm for red. The above data was taken with an OSC camera. Such a camera does not have sharp cut-off filters - its QE graph looks like this: red is sensitive all the way down to 470nm (although with small sensitivity) and even a bit below 430nm. The bulk of red sensitivity goes down to about 570nm. It is also worth noting that the part between 600-650nm is about 20-25% more sensitive than the part between 650-700nm. The D3 filter blocks the more sensitive part, meaning more signal is cut by the filter. The x2.5 stronger signal measured above without the filter fits nicely with this, if you take red as spanning 565nm to 700nm and consider its distribution versus the filter cut-off.
  22. Welcome to SGL. Another factor is that an OSC sensor was used. OSC sensors don't have sharp cut-offs for their color components, which means that the D3 gap between 550nm and 640nm will affect both the green and red channels (and even blue to some extent). Most of the light from galaxies is in fact stellar in nature, and as such has a Planckian / black body type spectrum that is more or less uniform in the visible range (there is just a "tilt" that determines whether a star is reddish, yellow or bluish, depending on temperature). If we cut into the spectrum with a filter, we will clip the target as well.
  23. I love these kinds of tests; however, I have to point out a couple of things. If you want to do this kind of test properly, you need a large, sharp, fast lens - and a DSLR is the best option. For example, in your test above you used an F/7.5 scope (80mm / 600mm) with a 42mm eyepiece. That gives you an exit pupil of 5.6mm. The problem is that your phone lens will not accept that large an exit pupil. Phone cameras, being very compact, use very small sensors and very short FL lenses. For example, the iPhone 11 has a 26mm equivalent lens at F/1.4. Sensor size is 1/2.55" (a crop factor of about x6), so the actual lens focal length is about 4.3mm. At F/1.4 it has an aperture of only about 3mm - it can't accept an exit pupil of 5.6mm; the lens is simply not wide enough.
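The two numbers doing the work in this example are simple ratios - a quick sketch:

```python
def exit_pupil_mm(eyepiece_fl_mm, scope_f_ratio):
    """Exit pupil of an eyepiece on a scope of a given F/ratio."""
    return eyepiece_fl_mm / scope_f_ratio

def lens_aperture_mm(focal_length_mm, f_number):
    """Clear aperture of a camera lens."""
    return focal_length_mm / f_number

print(exit_pupil_mm(42.0, 7.5))             # 5.6mm exit pupil from the scope
print(round(lens_aperture_mm(4.3, 1.4), 1)) # ~3.1mm phone lens -> stops it down
```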
  24. Ok, I have results for this D3 filter, and interestingly enough, it's actually hurting you to use it. Here are my findings.

First, background levels. Here the filter helps quite a bit. Without the filter, the ADU measurements of background values are as follows:
R: 2651.48
G1: 4433.10
G2: 4436.68
B: 2875.33
(I did not debayer - I just split the bayer components and measured each - that is why there are G1 and G2, the two green components of RGGB.)
Background values with the filter are:
R: 613.77
G1: 2222.72
G2: 2222.75
B: 1290.32
For R, the reduction in background level is x4.32, for green it is about x2 (almost exactly x2), and for the blue channel it is x2.23. There is a definite reduction in background. However, will the target signal remain the same, to justify the filter's use? These filters eat into broad band targets as well.

Here are the SNR measurements on the sub without the filter:
Red channel: signal 173.99, noise 102.75, SNR 1.6933
Green 1: signal 216.14, noise 134.17, SNR 1.6109
Green 2: signal 216.58, noise 133.44, SNR 1.6231
Blue channel: signal 108.16, noise 108.24, SNR 0.9993
I measured a small square of 10x10 pixels in M81 between two stars (so I can repeat the measurement between the two subs), and background noise was measured on a 100x100px patch far away from the galaxy.

Now the measurements of the sub with filtering (D3 filter):
Red channel: signal 68.83, noise 49.26, SNR 1.3973
Green 1: signal 129.07, noise 96.04, SNR 1.3439
Green 2: signal 113.59, noise 94.5, SNR 1.2020
Blue: signal 77.46, noise 72.29, SNR 1.0715

There is more variation between G1 and G2 than I would like - maybe a bit of star signal leaked into G1 due to pixel shift, or maybe it is just randomness in the data. In either case, both R and G suffered an SNR penalty from using the filter. Blue remained the same or improved slightly (it is hard to tell from a single sub - there could be variance due to randomness of the data; more data is really needed to form proper statistics).

In any case, the target suffers a reduction of about x2.5 for the red channel, x1.9 for green and x1.4 for blue, versus x4.32, x2 and x2.23 for the background. Not enough to result in an SNR improvement - in fact, it results in SNR degradation for this particular target (or for that matter, any target fainter than the LP). Quite surprising for Bortle 8 skies. Maybe it is down to the structure of the LP.
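The SNR figures above are just signal divided by noise - a tiny sketch that reproduces them from the measured values:

```python
# (signal_unfiltered, noise_unfiltered, signal_filtered, noise_filtered)
# per channel, taken from the measurements in this post.
measurements = {
    "R":  (173.99, 102.75, 68.83, 49.26),
    "G1": (216.14, 134.17, 129.07, 96.04),
    "G2": (216.58, 133.44, 113.59, 94.50),
    "B":  (108.16, 108.24, 77.46, 72.29),
}
for ch, (su, nu, sf, nf) in measurements.items():
    # unfiltered SNR vs filtered SNR - R and G drop, B stays roughly flat
    print(ch, round(su / nu, 4), round(sf / nf, 4))
```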