Everything posted by vlaiv

  1. There are already filters like that on the market (or very similar ones). How about green from RGB? It might be a tad restrictive - but this is Baader's offering - I've seen some other filters that are less restrictive (though maybe without such sharp edges). Astronomik, for example: maybe for a start try to see how these two fare?
  2. The more I look at these images, the more I'm convinced that this is a processing artifact and not the real thing. To reiterate - the image that I found online in the first post: versus a screen capture from the YouTube video: Just try to match the Ha patterns in the two images and you'll see that there are vast patches of what should be Ha missing from one or the other. How can this be?
  3. This really depends on the level of read noise. It is certainly true for old CCDs that had 7e, 8e or even 10e or more of read noise. Modern CMOS sensors have very low read noise, and if you have something like 1e or less of read noise and you image in a city centre (as is often the case - people in heavy LP usually go for NB to have any chance of imaging), then you might be surprised by the needed exposure length - it can be as low as a few minutes, especially for wider-band filters (some really narrow filters - like 3.5nm ones - will still need longer exposures even with small read noise). BTW - with narrow band it's worth examining dark current noise - it can become the most significant noise source, above LP noise, for some cameras, especially in summer months if one decides to cool down only to -5C or -10C rather than, for example, -20C.
  4. For the Samyang and similar lenses you don't have to use these expensive filters. You can use a regular one - as long as you mount it at the front of the lens and not in front of the sensor. This usually means the 2" version - and it will reduce the aperture to about 47mm, making it a 135/47 = ~F/2.87 lens instead, but people usually stop the lens down to F/2.8 even for NB to get good-looking stars. A front-mounted filter operates in a collimated beam rather than a converging one, and the angles involved are comparable to those of slow scopes (even for a wide field one captures only a few degrees off axis - much less than the ~14-degree ray angles of an F/2 converging beam).
  5. No. In general, diagonals are redundant for any type of imaging. There is, however, one special case and a special type of "diagonal" that you might have seen used for solar imaging. It is not a regular diagonal, although it looks just like one. It is a Herschel wedge - which acts as a solar filter (it must be combined with other filters) and is used on refractor telescopes. This image shows such a setup for imaging. This piece of kit is used for white light solar imaging. Another place where you are likely to see a diagonal used for solar imaging is with Ha scopes. This is not because the diagonal is needed - but because Ha scopes have a special filter (again, it's all about filters with solar) that is placed in a diagonal for convenience. It is there mainly for visual use - and there are straight-through blocking filters that can be purchased for imaging, but most people use their Ha scopes for visual and don't want to pay for an additional piece just for imaging - so they image with the diagonal. Here is an image of such a setup: The diagonal here is a true diagonal - but with the addition of a blocking filter (you must use one with a solar Ha scope). There is a straight-through version of the blocking filter that is suitable for imaging. Here it is, the Lunt B3400 - but it's very expensive and usually only dedicated solar Ha imagers get one.
  6. An OIII filter in an F/2 beam? Will it work? It seems that Baader has optimized filters for this - not cheap (but not overly expensive either - I guess people with RASA scopes have already thought of this and are using these fast filters?).
  7. I think the whole feature would fit in a single panel at 1000mm with APS-C, but it would not leave much surrounding sky for context. Bad for image making, good for "sciency" stuff. It would be best to dedicate the whole sensor to the object being imaged so we can try to detect it and confirm its existence rather than produce a nice image. From the image, it looks to be about 2 degrees in length, and 1000mm with APS-C is not going to cover that, but 600mm will, as it produces ~2.25 degrees in width. So a 130PDS or maybe a faster 150 F/4 Newtonian? I would even consider using a 127 F/5 achromat for this if only capturing OIII (it would probably have to be a scope on loan for the project, as not many people have one for imaging). A quick sanity check of these field-of-view numbers is sketched below.
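Here is a minimal field-of-view sketch in Python for checking those figures (the 23.5mm APS-C sensor width is an assumed value, and the function name is just for illustration):

    import math

    def fov_deg(sensor_width_mm, focal_length_mm):
        # Field of view along one sensor dimension, in degrees
        return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

    for fl in (600, 1000):
        # APS-C long side taken as ~23.5 mm (assumed, varies by camera)
        print(fl, "mm ->", round(fov_deg(23.5, fl), 2), "degrees across")

    # 600 mm gives ~2.24 degrees (enough for a ~2 degree feature),
    # 1000 mm gives ~1.35 degrees (too narrow).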
  8. Actually, scopes of up to 1000mm with an APS-C sized sensor can cover that feature - so an 8" F/5 would be an ideal instrument. At F/5 the filter should still work properly and there is no need for "fast filters". With proper binning, I think we could bring those 100+ hours down to a one-night session.
  9. Yeah - I'm not very fond of that concept. While sound in principle, it can create some issues. Continuum removal works if one has a uniform spectrum in blue - and one certainly does not. Just take two stars and you will see that, while they can have the same value captured through the OIII filter, they can differ very much in the amount of blue they produce (blue filter response). How do you then decide how much of the blue-filter signal to remove? There must be some criterion by which you choose - the most straightforward is to multiply the blue-filter value by a constant - and this constant method fails on something as simple as stars. You can easily remove too much blue and effectively kill off signal. Similarly, you can remove too little blue. If you happen to have a combination of the two - too much blue removed in one place and too little in another - what you are left with is a "feature in OIII". I'd like to have confirmation from a third-party source. Possibly someone with a Samyang F/2 and a front-mounted OIII filter in the 2" variety doing one night of imaging, and we will bin the **** out of the data to see if we can tease something out? @ollypenrice You seem to be fond of Samyang F/2 imaging lately - fancy a go at this once M31 is positioned right?
  10. That all looks good. Yes, you are right - I rounded up to 50 because I was not sure the read noise was exactly 1.4e - someone measured it to be just a bit higher than that value and ended up with 50-point-something, so I chose 50 as the figure - nice and round (it does not change the result very much whether you use 50, 49 or 51 - but if you can, go with the exact figure). Just a note - for bias removal, ideally just take one dark of the same exposure length as your test exposure (so 60 seconds) - but do make sure all the settings are the same (gain, offset, temperature - the lot) as when taking regular darks for your regular light exposures. Then measure its average and use that in the calculation.
  11. Compared to the night-to-night conditions that you don't control, a difference of 9.99% in light transmission is small. You can lose 40-50% of the light depending on how and when you choose to image a target - and most people never even consider this. The same principle applies to OSC as to mono when comparing two filters with different bandwidths. Both will pass the same signal (with differences of a few percent in transmission). Where they differ is in the amount of other light they pass. This is the value of an NB filter - cutting off unwanted light. The tighter the bandwidth, the less unwanted light is passed. This is especially important in light-polluted scenarios, or when you want to separate close lines - like Ha and SII (656nm and 672nm, so the gap between them is about 16nm, and a filter like a 30nm-bandwidth Ha might not separate the two, depending on CWL).
  12. Do you have any details on this? I haven't found an explanation of how this was taken. I've just read that it was confirmed - but I haven't found the other image that confirms the feature.
  13. This is one of my concerns - the use of "advanced" tools that rely on AI to produce the final result. I'd be much happier with a simple black-and-white OIII image, linearly stretched, noisy and all, that shows the feature than with this over-processed image.
  14. That is advice for properly exposing flats - not the optimum exposure for shooting lights. It is therefore completely wrong in this context. I'll try to explain in the simplest terms possible why we do this calculation and how it works, and then provide you with easy-to-follow steps. The only issue is - you can't perform these steps while you are shooting unless you have some software that will do it for you (or a mix of different software plus a spreadsheet for the calculation).
We determine the minimum exposure length such that it does not significantly impact the final result in terms of noise. The shorter we go, the noisier the final result becomes. This is because there is one noise source that does not behave like all the rest - read noise. Read noise comes per exposure. All the other important quantities - signal and the various other noises (thermal noise, shot noise, LP noise) - accumulate with time and depend only on total imaging time, not on the number of exposures. What makes the difference between many short subs and a few long subs is the only thing that does not depend on total imaging time - read noise. We expose for long enough that read noise per exposure becomes insignificant - i.e. increases total noise by an indistinguishable amount. This happens when read noise is at least 5 times smaller than the largest other noise source, which is most often sky background noise. That is why we compare the two.
Here is a step-by-step, easy-to-follow checklist for calculating optimum exposure length. It is tailored for the ASI533 at Gain 200 - but I will put the differences for other models in brackets.
1. Take a single exposure of the patch of sky where your target is. Expose for some amount of time - let's say 60 seconds.
2. Measure the median ADU value of an empty patch of that exposure (select background with as few stars as possible - ideally without any objects or stars).
3. From that ADU value, subtract the mean bias or dark-sub value for the same exposure length (you either need to take these in advance, or cover the scope, take one sub and measure it).
4. Divide this value by 4 (use a different number here for other models: 4 is for a 14-bit ADC, 16 for a 12-bit ADC and 1 for a 16-bit ADC) and multiply it by 0.3162 (use the actual e/ADU value for the camera and gain you are using).
5. Divide the number you got in the previous step by 50 (read noise for your camera and gain setting, multiplied by 5 and then squared).
6. Divide 60 seconds (or whichever exposure length you used in step 1) by the number you got in step 5, and that is the exposure length you should be using for your sky conditions and your setup.
That x19 is wrong - because I assumed you were familiar with bit counts and ADU values and had adjusted for the camera's 14 bits. I was wrong, you didn't know these concepts - and it was pointed out by @ONIKKINEN in one of his replies above. That, and the unknown offset value, make the final number wrong - but the procedure presented was correct.
I took a brief look at the calculator - and I think it is in essence correct - but that calculator is made so you can estimate exposure length without measuring anything. You don't need to take a test exposure with it - but you do need to know the SQM value of your sky. It is only an estimate to get you in the ballpark (which is probably just fine and can be used). The method above relies on actually measuring things for your location - taking a test exposure and measuring sky brightness that way rather than relying on SQM.
It also differs in that you don't need to know the details of your equipment (those are measured too) - some of which you normally know anyway (scope focal length, pixel size) - and some that you don't necessarily know (optical efficiency of your scope, QE of your sensor and so on ...). A minimal worked example of steps 1-6 is sketched below.
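A minimal sketch of steps 1-6 in Python, assuming the ASI533 at Gain 200 figures used above (e/ADU ~0.3162, read noise 1.4e, 14-bit ADC); the ADU values are placeholders, not measurements - substitute your own:

    test_exposure_s = 60      # step 1: length of the test exposure
    sky_median_adu = 3500     # step 2: median ADU of an empty background patch (placeholder)
    dark_mean_adu = 1500      # step 3: mean ADU of a matching bias/dark sub (placeholder)
    adc_scale = 4             # step 4: 4 for 14-bit ADC, 16 for 12-bit, 1 for 16-bit
    e_per_adu = 0.3162        # step 4: e/ADU for your camera and gain
    read_noise_e = 1.4        # step 5: read noise for your camera and gain

    sky_e = (sky_median_adu - dark_mean_adu) / adc_scale * e_per_adu  # steps 3 and 4
    target_e = (read_noise_e * 5) ** 2                                # step 5: ~49e (the post rounds to 50)
    optimum_s = test_exposure_s / (sky_e / target_e)                  # step 6

    print("sky:", round(sky_e), "e, target:", round(target_e), "e, optimum exposure:", round(optimum_s), "s")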
  15. (read_noise * 5)^2 / gain + offset_ADU (adjust to taste for bit count). It is better to compare achieved and target background in electrons than in the ADU units reported by imaging software - because then you can calculate the needed exposure without guessing: just divide the two and that gives you the number to multiply your current exposure by. For example, if you measured 50e and you calculated that you need a 75e background level, then increase the exposure by 75/50 = 1.5 times. You can't do this once you add the offset and convert to ADU units. A quick sketch of this is below.
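As a rough sketch of that formula and the exposure ratio (illustrative numbers only - the offset value and bit-count handling depend on your camera and capture software):

    read_noise_e = 1.4
    e_per_adu = 0.3162        # the "gain" in the formula above
    offset_adu = 70           # placeholder - use your camera's actual offset

    target_e = (read_noise_e * 5) ** 2                 # ~49e
    target_adu = target_e / e_per_adu + offset_adu     # adjust to taste for bit count
                                                       # (e.g. x4 if 14-bit data is reported on a 16-bit scale)

    # Working in electrons, scaling the exposure is a simple ratio:
    measured_e, needed_e = 50.0, 75.0
    exposure_multiplier = needed_e / measured_e        # 1.5x, as in the example above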
  16. I have an issue with this. The image does not seem to match other sources in some respects. This is the image showing the phenomenon: But compare the Ha structures around the Andromeda galaxy with this other reference image: Look at the Ha whirls below the supposed OIII feature. The right one matches the reference image - but the left one is completely different. So is the Ha region around the bright blue star to the right of the supposed OIII feature.
  17. Depends on the filters in question. You need to look at the curves of both filters and see where they overlap. If they overlap completely, you don't need to stack them - just use the one that is more restrictive. In the above case, the L-eNhance is completely within the band of the L3, so you don't need to use both. You can use both if you have the L3 "permanently" mounted in front of the flattener, for example (many people use the 2" version in front of the flattener so that it's always applied, they don't need to think about it, and they can use their filter wheel for other filters). If they partially overlap, then their combination will be the intersection of their bands.
  18. Yes, you can do that - just remove the bias (and if you have darks, it's maybe better to use the average dark ADU value for that exposure). 5000 - 2000 = 3000 ADU, 3000 * 0.3162 = ~948.6 = ~950e. Given that your exposure requires only a 50e background signal, you are exposing about x19 longer than you need to with your setup. I'm guessing you have fast optics and heavy LP? Probably just a quick and imprecise glance at the ZWO chart? It is hard to precisely determine the e/ADU value from a graph like that. It is a bit closer to 0.25 than it is to 0.5 (and it should be, since the value is ~0.3162). You should be able to access this data in the FITS header of your subs, so check the values stored there. I find this method much more complicated with a DSLR as there is no e/ADU value for a given ISO. How do you convert to electrons with a DSLR? Not sure what you find complicated - you take a calibrated sub (you should really calibrate your subs for stacking, so there is no reason not to have a calibrated value), you measure the median value on a patch of background in the image, multiply by a number and compare it to the reference value you calculated from the read noise.
  19. That is right. You need to use calibrated data because you want only the background signal. You don't want the offset / dark current signal when you measure the average background level, as that will give you a wrong reading. OK - here it is explained a bit better. Say that you have 1.4e of read noise at gain 200. Say that we set our ratio to 5 (you can use 3 here if you want, but let's go with 5). This means that background noise should be at least 1.4e * 5 = 7e. This is background noise, and to get background signal you need to square this value, because noise is equal to the square root of the signal it relates to. So you need the background value, expressed in electrons, to be at least 49 electrons. You've got too high a value of 2000e for your background signal in step 3 - and there could be two reasons for that: 1. you haven't calibrated your sub properly (haven't removed offset / dark current), or 2. you've been using the wrong e/ADU value. You say that you think e/ADU is 0.4 at gain 200? ASI has gain expressed in 0.1 dB steps. Unity gain is at 100, so gain 200 is actually 10 dB higher than gain 100. 10 dB is a factor of ~3.162, so the actual e/ADU value is 1 / 3.162 = 0.3162 e/ADU. 5000 * 0.3162 = 1581 electrons (but again - have you removed the offset?). A small sketch of this gain-to-e/ADU conversion is below.
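A small sketch of that conversion, assuming the ZWO convention of gain in 0.1 dB steps with unity gain (1 e/ADU) at a gain setting of 100 for this camera:

    def e_per_adu(gain_setting, unity_gain_setting=100):
        # Gain setting is in 0.1 dB steps; 20*log10 (amplitude) dB convention,
        # which gives the ~3.162x factor for 10 dB quoted above
        db_above_unity = (gain_setting - unity_gain_setting) / 10.0
        return 1.0 / (10 ** (db_above_unity / 20.0))

    print(round(e_per_adu(200), 4))   # ~0.3162 e/ADU at gain 200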
  20. Well, that is true - one can't talk about nothing without a good Haiku
  21. It turns out that things are not that simple. Colours like black and white (and red, purple, green, yellow and all the others) are a product of our mind. You don't need all wavelengths of light to induce the sensation of white - and the interesting thing is, you won't always see the same mix of wavelengths as being white. There is no universal white as far as the spectrum goes - there is only the "white adaptation" of our brains, or the mix of wavelengths that our brain adopts as white at that particular time. Similarly, black is not the absence of light - it is a matter of contrast. You need something that is white in order to perceive something else as black. If you have no reference (in the absence of all light), most people "see" a sort of dark gray (see the link I provided a few posts up). Then there are "impossible" colours - colours people report seeing that are physically impossible to produce with light - yet they are regularly reported and induced in people's minds. https://en.wikipedia.org/wiki/Impossible_color
  22. Now I get it - it is higher-level algebra: A number divided by zero is infinity. Bank balance minus Takahashi scopes is nothing.
  23. This is actually not true - we see some sort of gray shade and not black. There is a name for this effect that I can't remember, but I will look it up. Ah, here it is: https://en.wikipedia.org/wiki/Eigengrau
  24. Colour is technically a psychological sensation. It is not a physical thing - much like warmth or sound. There are corresponding physical stimuli that produce these sensations. Electromagnetic radiation in the visible part of the spectrum creates visual sensation (colour / brightness / visual image). Pressure waves in air cause the sensation of sound. Mechanical vibration of atoms / molecules causes the sensation of warmth. We can measure all these physical properties without needing our senses to tell us they are there, and our senses are not the most reliable instruments. We can hear things when there is nothing to be heard (quite a dangerous thing - people hearing voices often end up hospitalized), we can "see" things and colours that do not exist, and we can feel the same temperature as both cold and warm (depending on our surroundings or the previous state of our body - keep one hand in hot water and the other in cold, then touch the same object: it will feel like a different temperature depending on which hand you "sense" with). Nothingness can't be associated with either a psychological sensation or a physical phenomenon. It can't be measured. It is a purely philosophical concept - it does not exist in nature. Even a completely "empty" patch of space contains a multitude of things - it contains space-time with its curvature, it contains quantum fields, and on top of that it contains random fluctuations in those fields - pairs of particles/antiparticles popping in and out of existence. Quite a busy place for "nothing" or "empty" space. Nothingness is much like infinity - a concept we can discuss without an actual physical phenomenon.