Everything posted by Adam J

  1. Very nice, but I see you only managed to get the tip in frame. Adam
  2. Yes, exact spacing is not important with a flat-field APO like the Redcat; you just need to make sure the camera sits far enough back to be able to reach focus. So, given a DSLR reaches focus with its 55mm back spacing, I would think a T48-to-T2 adaptor and something like a 30mm T2 spacer will put you in about the right place, though depending on the Redcat's back focus you may not even need a spacer. If you purchased the Redcat from a good supplier they should be happy to help you. The sum (see the sketch below) is: 55mm - 18mm (camera) - 6.5mm (typical T48 to T2) = 30.5mm, so roughly a 30mm T2 extender. Can't link you to anything as I am not familiar with suppliers in the USA. Adam
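A quick sketch of that spacing sum, purely illustrative; the 55mm, 18mm and 6.5mm figures are the assumed values quoted in the post, not measured ones, so check your own hardware:

```python
# Spacer estimate for reaching focus at a typical 55 mm back spacing.
# All values in millimetres; the camera and adaptor depths are the
# assumptions from the post above, not specifications.
REQUIRED_BACKSPACING = 55.0  # distance the optics expect behind the T thread
CAMERA_DEPTH = 18.0          # sensor to front flange of the camera (assumed)
ADAPTOR_DEPTH = 6.5          # typical T48-to-T2 adaptor thickness (assumed)

spacer = REQUIRED_BACKSPACING - CAMERA_DEPTH - ADAPTOR_DEPTH
print(f"T2 spacer needed: {spacer:.1f} mm")  # 30.5 mm, so a 30 mm spacer is close
```

The sum actually comes to 30.5mm rather than a round 30mm, but as the post says, a flat-field APO is tolerant of small spacing errors and the focuser takes up the difference.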
  3. That is not a bad plan, but you would be surprised what you can get from home; I would get the equipment you are going to be able to use the most, and being able to image from home is a big positive. The new L-eXtreme filter is a good option for a mobile setup, as you will be able to collect two narrowband channels at the same time. But all in all I would not move away from mono except for some niche applications. Adam
  4. I see this on both my Esprit 100 and FMA180 refractors with the ASI1600MM Pro; it is probably some interaction with the pixel grid. I note that I only ever seem to see it in very long / deep integrations. Not something to worry over at any rate; I quite like the effect. Adam
  5. I am on the southern outskirts of Lincoln and have a small roll-off roof observatory in my back garden. I have poor views to the south, though, so I am currently putting a mobile rig together to let me image more targets and to take on holiday with me. For narrowband, the light pollution in Lincoln will not affect your imaging unless you're right up in the centre. Adam
  6. Forgive the numbering; I don't want to end up with multiple quotes.

     1) No, actually I am saying that information and SNR are not linked. Or at least that's what I meant to say; I can't be bothered to go back and look.

     2) Increased integration will resolve finer detail because it increases SNR, allowing faint signal (detail) to rise above the noise floor.

     3) Longer integration improves your estimate of the photon flux, but yes, it doesn't change the photon flux itself. Even so, longer integration absolutely does help resolve detail in the object being imaged, because the noise averages out. Otherwise, what are we all doing?

     4) Yes, I know what it is and what it stands for, lol.

     5) Yes.

     6) Yes, but while the two processes can lead to the same SNR, one (more subs) will always improve image quality and let you extract more information about the target, whereas the other results in the same SNR with no additional information about the target: all you have done is trade between two different measures of image quality, and the underlying data didn't get any better. So fundamentally they are two different processes in terms of the quality of the image produced. With one you are simply trading resolution for SNR; with the other you are improving SNR while keeping resolution constant.

     I would say stacking more images always produces a better quality image, but binning, in my experience, can be detrimental, especially when some areas have good signal and some have poor signal. So yes, you can improve SNR by binning, but the quality of image you can produce is still fundamentally limited by the SNR of the original stack; the SNR of a software-binned image is derivative of that, not a primary measure of the quality of your data prior to processing. If all you really want to do is occlude the noise, you might as well keep the resolution and apply a Gaussian blur to the entire image. That really will reduce the noise too, so by that measure you will have done pretty much the same thing; but you have not actually done the same thing, have you? So what is the point of SNR as a measurement at that stage? Hence the only meaningful place to measure SNR is directly after stacking, and talking about the SNR of a software-binned image is not necessarily helpful. The only place this falls over, as you say, is when you are heavily oversampled... but most people spend a lot of time choosing camera and scope combinations to avoid exactly that.

     In the end you are always limited by the SNR of the stacked image prior to processing: a high SNR means you have more headroom when processing the image, and anything you do to manipulate the image after that point becomes a fairly meaningless measure of image quality. So yes, there is only one definition and meaning of SNR, but you can increase it in a number of ways that lead to different outcomes, and in a way the means by which you increase the SNR matters more than the raw measure itself. We don't talk about how noise-reduction algorithms increase SNR, do we? Or how a Gaussian blur increases SNR. Yet that is all binning is: blunt-force noise reduction that comes at a cost. Hence I feel that the data you have to work with is the data you have to work with, and any improvement in SNR derived from software manipulation of the image is purely academic and almost certainly meaningless as a measure of the end product's quality. (The sketch below illustrates the trade.)

     I use SNR to tell me things like how much more integration I need before I will see a significant improvement in my data, or to compare the performance of different cameras, sky conditions, filters, gain settings, sub lengths and so on. All of those are valid uses, but once you are into processing, software binning will not improve the quality of your data; that is now set. So I fundamentally do not consider SNR at all when deciding whether to software bin, because it is not relevant: I just look at the image and decide whether binning made it better or worse, and often, with long integrations, the answer is worse. For me, SNR as applied to software binning is simply not the same thing, and you cannot view SNR during processing in the same way you view it during capture. I think you are looking at this in a very pure way (SNR is SNR is SNR), whereas I am also considering how I use that measurement within my workflow, and it means less as a measure once capture is finished and you are into processing, because from that point onwards improvements in SNR do not always equal improvements in my image. So for me an SNR improvement after stacking is not the same thing as improving SNR before and during stacking, and that is why I do not treat the SNR improvement from software binning as equivalent to one made during stacking or capture. If you don't get my perspective on this then you don't get it; that's OK. But as I keep pointing out, my point of view is not about the maths: I can do all those calculations myself, and I have modelled it all in Excel. So, forgetting my inadequate attempts to explain myself above, I hope this better explains my thinking. Perhaps we should move the discussion to another venue, though, as it is no longer relevant to the OP's thread, so this is my last post on it here. Adam
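As a minimal illustration of the trade being argued here (a purely synthetic flat patch of sky with Gaussian noise in numpy; the flux and noise figures are made up, not anyone's real data):

```python
import numpy as np

rng = np.random.default_rng(0)
signal, sigma = 100.0, 20.0  # arbitrary per-pixel flux and noise (assumed)
shape = (512, 512)

def snr(img):
    """Crude SNR estimate for a flat synthetic field."""
    return img.mean() / img.std()

def sub():
    """One simulated sub-exposure: constant signal plus Gaussian noise."""
    return signal + rng.normal(0.0, sigma, shape)

single = sub()
stack4 = np.mean([sub() for _ in range(4)], axis=0)        # 4x the integration
binned = single.reshape(256, 2, 256, 2).mean(axis=(1, 3))  # 2x2 software bin

print(f"single sub : SNR ~ {snr(single):.2f}")  # ~5
print(f"4-sub stack: SNR ~ {snr(stack4):.2f}")  # ~10, full resolution kept
print(f"2x2 binned : SNR ~ {snr(binned):.2f}")  # ~10, resolution halved
```

Both routes land on roughly the same SNR number, which is exactly the distinction being drawn above: the number alone does not tell you whether resolution was spent to get it.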
  7. From your response, you are not understanding my argument on a conceptual level; I will take that as my failure to articulate it properly. You will get more information from increased integration, but at any given integration you will not gain any information about the object you are imaging by binning the image in software. Detail that is resolved at the pixel level is information. By software binning you lose that information, it is just gone, but you gain other information as a result of the increase in "SNR" showing fainter objects at reduced detail, so the information within the image is, quite literally, conserved. Hence increasing SNR by getting 4x the number of subs is not the same as increasing SNR by binning 2x2 in terms of the information within the image: in the first case you end up with more information, in the second you do not, because in the first you have increased the true SNR of the image while in the second you have only increased the perceived image quality. Yet in your argument they end up with the same mathematically calculated SNR in both cases. Taking this to an extreme (sketched below), if you average every pixel in the image you will have a very accurate idea of the average illumination of the sensor, but no information at all about what you just took an image of; you have exchanged one type of information for another. But of course it is not a better image, so there is a balance between resolution and "SNR", and eventually lower resolution and better SNR will start to reduce image quality. Have you increased the SNR of a noisy picture printed and hung on your living-room wall by looking at it from twice the distance, so that you can no longer resolve the noise? Of course not, but that is the argument you are making. Now, does that image look better from further away? Yes. Hence I will say again: your maths is, as always, impeccable, but it is the way SNR is being used in this context that I have an issue with. You are trying to counter a perceptual interpretation of how people use the term SNR within astro imaging with a mathematical argument. Adam
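The "average every pixel" extreme is easy to demonstrate under the same toy assumptions (a tiny synthetic frame with one bright bar of real structure; all numbers are invented):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy frame: dark noisy field, plus one bright bar of real structure.
frame = rng.normal(0.0, 5.0, (8, 8))
frame[3:5, :] += 100.0

# Bin to the limit: one "pixel" for the whole sensor.
global_mean = frame.mean()
print(f"mean illumination ~ {global_mean:.2f}")  # a very precise flux estimate...
# ...but the bar, i.e. everything the image was actually *of*, is gone.
```

A very accurate measure of the average illumination, and no information left about what was imaged: one kind of information exchanged for another, as the post says.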
  8. Like I say, I am not debating the maths. The better way to think about it is that if you literally increase SNR you will increase the amount of extractable information within the image, and the best way to do that is, say, by stacking more images. If you bin a CCD 2x2 you achieve the same thing, because the base noise per unit area of the chip is reduced. But if you bin a CMOS you are not gaining any information about the object you are imaging, as you lose as much in reduced resolution as you gain in "SNR". Now, that might well produce a better-looking image, but it is a better-looking image that contains no additional information. (A sketch of the read-noise difference is below.) Adam
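A small sketch of the CCD-versus-CMOS point in read-noise terms (the 5 e- figure is an assumed illustrative value; shot noise is deliberately left out to isolate the term hardware binning actually helps with):

```python
import math

read_noise = 5.0  # e- RMS per pixel, an assumed illustrative figure
n = 4             # pixels combined in a 2x2 bin

# CCD hardware bin: charge from the 4 pixels is summed on-chip and
# read out once, so read noise is applied a single time.
hw_read_noise = read_noise

# CMOS software bin: each pixel already carries its own read noise
# before the combine, so the four terms add in quadrature.
sw_read_noise = read_noise * math.sqrt(n)

print(f"hardware 2x2 bin: {hw_read_noise:.1f} e- read noise")  # 5.0
print(f"software 2x2 bin: {sw_read_noise:.1f} e- read noise")  # 10.0
```

The halved read noise per combined pixel is the concrete advantage the on-chip CCD bin has that a software bin of already-read-out CMOS pixels cannot reproduce.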
  9. I am aware of the maths and agree with it 100%; I just disagree with your definition of SNR within this context. In my view the signal and the noise contained within the image are identical to what they always were. I would say that what is going on is more like some sort of processing gain / lateral integration. Adam
  10. I am sure Vlaiv will answer this better, but it is a tricky definition, as technically you have not changed the signal-to-noise ratio: the same number of photons are collected and the same amount of noise is present, so that is fixed. You have done something along the lines of decreasing the perceived noise at the expense of resolution, which, depending on the balance, will or will not result in an increase in image quality. Adam
  11. I would think the reducer might work with a similar focal length scope; you could try the F5 to F6 range. Much depends on how specific it is to the FMA180 in terms of any chromatic aberration correction characteristics, but it seems likely that it will work... I could try it on my Esprit 100, as it's F5.5 and has the back focus for that to work. You would need a flattener if not using a focal reducer for imaging, unless it was a very small sensor such as a guide-camera-sized one. I am not really sure what would work, though, as it would need an M42 thread to attach to the scope... It's possible the new flattener for the EVO GUIDE 50 might work... Adam
  12. I would not get an uncooled 294 either; it has really quite high dark current and horrible amp glow that you will not be able to calibrate out without set-point TEC cooling. Look out for a cooled camera, or you may as well stick with the DSLR, as the amp glow will just be the bane of your existence. Listen to what others are saying above: get TEC cooled or you will buy twice. Adam
  13. Yes, that's not normal; I have never seen anyone else complain of anything like that. It reminds me of bad columns on a CCD sensor, but this is a CMOS, so I don't understand how this would happen. Either way, if it is present across the board even with the camera cooled correctly, then I would say you need to contact your retailer and ask for an exchange. Has it been like this from new? How new is it? Adam
  14. Honestly, I think the 183C is a hard sell now in comparison to either an IMX294-based camera or the IMX533-based ASI533MC Pro. At the focal length and f-ratio you are looking at, larger pixels will help. Adam
  15. Not using one, but my thinking on it is that there are no advantages I am aware of over the ASI1600MM Pro and an 8-position EFW... and in fact there are some disadvantages.
  16. The real issue with all the ZWO wheels and taller filters is not that you can't fit them into the wheel; it's that the thread of your extenders etc. protrudes into the wheel and contacts the filters in many cases, and it's this that can be the limiting factor.
  17. Thanks for the kind comments all. I am really happy with this one. Adam
  18. Any chance there was condensation built up on the sensor at the time?
  19. I think the camera thread would be the place. I've never seen anything like that before; is it on every single frame or just the stacked image?
  20. Yeah, I don't see it happening unless you chip the edge of the filter or expose it to some sort of thermal shock. AD used to give an even longer warranty; I think you're OK for a decade or so, to be honest. Probably much longer than that.
  21. Wow, that did not last long; it has barely been out nine months.
  22. Nope, not with sputter deposition. It's hard to even imagine it delaminating without physical damage to the edge.
  23. I doubt that they would want to admit it if they are the same unit to be honest.
  24. I would love to know what mechanism you imagine might cause filter deterioration.
  25. Optically the same, no doubt, but I am not sure they don't have a different spacer incorporated. Wow, I just realised that this is a real zombie thread...