
Posts posted by andrew s

  1. 21 minutes ago, vlaiv said:

    I think that there are better methods - I do agree on the issue of telegraph noise if the camera is susceptible to it - but selective binning is a better way of dealing with it than using a median filter before binning.

    A median filter acts as a noise shaper and reshapes the noise in funny ways; if possible it is best avoided, as algorithms further down the pipeline often assume noise of a certain shape (most notably Gaussian + Poisson).

    In any case, telegraph noise can be dealt with in a manner similar to hot pixels - by using sigma clipping. If enough samples show that there should not be an anomalous value there, then the anomalous value should be excluded from the bin / stack (rather than all values being replaced with some median value).

    I am surprised at such a quick dismissal. Christian is a careful worker, and I respect his opinions as I do yours.

    However, given the speed of your response, I assume you have looked at his algorithm before and compared the results. If so, could you share them?

    Regards Andrew 
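
    For concreteness, here is a minimal sketch of the sigma-clipped binning described in the quote above, assuming NumPy and illustrative parameters (2x2 bins, a 3-sigma threshold); it is a sketch of the idea, not anyone's published algorithm:

    ```python
    import numpy as np

    def sigma_clipped_bin(img, k=2, sigma=3.0):
        # Bin k x k, excluding anomalous samples (telegraph noise, hot
        # pixels) from each bin rather than replacing them with a median.
        h = (img.shape[0] // k) * k
        w = (img.shape[1] // k) * k
        # Gather each bin's pixels along the last axis: shape (h//k, w//k, k*k)
        tiles = (img[:h, :w]
                 .reshape(h // k, k, w // k, k)
                 .transpose(0, 2, 1, 3)
                 .reshape(h // k, w // k, k * k))
        med = np.median(tiles, axis=-1, keepdims=True)
        std = np.std(tiles, axis=-1, keepdims=True)
        keep = np.abs(tiles - med) <= sigma * std    # flag outliers for exclusion
        # Average only the retained samples in each bin
        return (tiles * keep).sum(-1) / np.maximum(keep.sum(-1), 1)
    ```

    With only k*k samples per bin the statistics are crude; the idea really belongs to well-oversampled data, where each bin holds enough samples for the clipping to be meaningful.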

    One area of noise which has grown in importance with small-pixel CMOS cameras is random telegraph noise. If it's been covered above, please forgive my not reading the whole thread.

    I have not seen it discussed much, but it can be the dominant noise. C. Buil discusses it on his site and gives an algorithm for reducing it in oversampled images here, in section 6 (wrongly labelled as 5). It is in French, but Google Translate does a fair job.

    There is also some English discussion of it in the CMOS camera reviews. Here, for example.

    Regards Andrew 

    Note his data refer to the camera's native bit depth, not 16 bit, unless it is a 16-bit camera.
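
    A rough illustration of the median-filter-then-bin idea (my reading of it, with an assumed 3x3 median window and 2x2 software bin; see Buil's page for his actual parameters):

    ```python
    import numpy as np
    from scipy.ndimage import median_filter

    def median_then_bin(img, win=3, k=2):
        # Suppress isolated telegraph-noise spikes with a small median
        # filter, then software-bin the oversampled image k x k.
        filtered = median_filter(img, size=win)   # removes single-pixel outliers
        h = (filtered.shape[0] // k) * k
        w = (filtered.shape[1] // k) * k
        return filtered[:h, :w].reshape(h // k, k, w // k, k).mean(axis=(1, 3))
    ```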

  3. 32 minutes ago, Ags said:

    For example, as negative mass matter (NMM) would produce negative gravity, is dark energy potentially produced by a hazy intergalactic cloud of NMM?

    As it makes up some 68% of the mass/energy of the Universe, we would not see the gravitationally formed structures we do today if dark energy were your proposed negative-mass matter.

    Regards Andrew 

  4. 1 hour ago, wesdon1 said:

    @maw lod qan I was just commenting about the fact this thread has really got people going! I never realised people felt so strongly about GR and QM and Newtonian laws etc!? 

    I think this is one of the very few places on the web, or elsewhere come to that, where one can have a civilised conversation on science-related topics.

    It has a concentration of science-aware individuals with a broad range of knowledge and skills, and it's all the better for that.

    Regards Andrew 

    • Like 3
  5. 11 minutes ago, vlaiv said:

    Nyquist's sampling theorem clearly states what it applies to - any band-limited signal / function (which really means that the Fourier transform of that function is zero beyond some maximum frequency).

    Strictly, it also requires point sampling. This is not the case with area sensors like CMOS cameras (see the sketch below).

    Regards Andrew 
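
    As a rough illustration (my own, not from the thread): a square pixel of width p does not sample a point but averages the scene over its area, which multiplies each spatial frequency by a sinc factor before any point-sampling argument applies:

    ```python
    import numpy as np

    p = 1.0                          # pixel width (arbitrary units)
    f = np.array([0.1, 0.25, 0.5])   # spatial frequencies, cycles per pixel width
    # Averaging over a box of width p scales each frequency by sinc(f*p);
    # np.sinc is the normalised sinc, sin(pi*x)/(pi*x).
    print(np.sinc(f * p))            # ~[0.98, 0.90, 0.64]: fine detail is attenuated
    ```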

  6. On 31/08/2023 at 21:32, ollypenrice said:

    The problem is that we want to test assorted models against what seems reasonable to us - but what seems reasonable to us has been defined by very limited, and very local, experiences. The trick is to be willing to embrace what does not seem reasonable to us because, in all probability, that is where the truth will lie.

    Olly

    It is worth reminding ourselves just how far we have come.

    Simple observation suggests the Sun, Moon and stars go round the Earth, but we now know better.

    What causes an apple to fall to Earth is the same as what keeps the planets in orbit and the galaxy turning.

    Gravity is not a force but a curvature of spacetime. 

    Nuclear fusion powers the stars and we are literally star dust.

    Solid objects are 99.9% empty space.

    Obvious now?

    Regards Andrew 

    • Like 1
  7. 4 minutes ago, vlaiv said:

    I'm with you on that - if it were not for the working resolution. It is ~2"/px, and such differences should be visible at higher working resolutions, but not so much at 2"/px. In wide-field images the FWHM per channel is roughly the same, with very little difference.

    Ok, so why do the stars have blue halos? Regards Andrew

    Putting aside differences in perceived sharpness, I think there is possibly a technical reason for a difference between stars and hydrogen emission nebulae.

    Stars are wide-band and subject to the full force of atmospheric dispersion and chromatic aberration, while the nebula is predominantly narrow-band in the red and thus less affected by the atmosphere and chromatic effects.

    It's noticeable that @ollypenrice's original stars have blue halos.

    I doubt a reconciliation is possible though 😊.

    Regards Andrew 

    • Like 1
  9. 17 minutes ago, vlaiv said:

    As far as I understood Olly, he claims that star size doesn't correlate with background detail in RASA data - that somehow the stars are large / soft (a sign of a high level of blur - high FWHM) but that detail is still there in the background.

    I'll have to let Olly comment on that.

    However, what counts to the eye is the detail it can pick out. A bright star will have an obvious impact over many pixels, while a chain of dim stars (with the same FWHM as the bright star) might well be visible as a linear feature just a pixel wide.

    This difference between point and linear resolution was well known to the respected visual observers of the past.

    Looking at the blur of the bright star, it would be easy to assume that the pixel-wide feature would be impossible to see. (I am not saying you are doing this.)

    I feel, though, that this effect may be at the root of your different positions.

    Regards  Andrew 
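
    A toy model (my own illustration, with assumed numbers) makes the point: blur a single bright pixel and a one-pixel-wide line with the same Gaussian PSF and compare their peak brightness:

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    n, sigma = 65, 3.0                                    # frame size, PSF sigma
    star = np.zeros((n, n)); star[n // 2, n // 2] = 1.0   # bright point source
    line = np.zeros((n, n)); line[n // 2, :] = 1.0        # 1-px-wide chain of sources

    # A point spreads in two dimensions, a line in only one, so the line
    # retains far more of its peak surface brightness under the same blur:
    print(gaussian_filter(star, sigma).max())   # ~1/(2*pi*sigma^2) ~ 0.018
    print(gaussian_filter(line, sigma).max())   # ~1/(sqrt(2*pi)*sigma) ~ 0.13
    ```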

  10. 3 minutes ago, vlaiv said:

    I'm not sure diffraction spikes are a good comparison point.

    Their intensity is always a percentage of the brightness of the source creating them. They are as present on stars as they are on extended objects like planets. Take an image of Jupiter taken with a reflector with a spider, and stretch it very hard - you will get diffraction spikes from the planet as well.

    We just don't see them because the intensity needed to notice them is very large, and at normal stretch levels it is reached only by the brightest stars.

    Here is an old planetary capture of mine, extremely stretched:

    [attached image: heavily stretched planetary capture]

    Besides that strange circular feature - which I think is a reflection off the Barlow lens used, and shows the aperture of the telescope (some unfocused light) in the top corner - spikes are starting to show.

    But back to the issue of blur - the FWHM is the same for a star and for nebulosity, and nebulosity simply won't show features of that order of size or smaller, due to blur.

    The FWHM is the important bit. That, and how many high-frequency components there are (or are expected to be) in the signal itself.

    Take uniform light without variation - no matter how much you blur it, it will look the same. The sky in a daytime photo, for example: if it's clear and there are no clouds, it will look the same in a sharp and in a blurred image.

    If you expect smoothly varying nebulosity and that is what you get, you might think it was not affected by blur in the way the stars were - but you had better check other sources to verify whether the nebulosity really is smooth.

    I don't think what you say here contradicts what I said. It's a matter of contrast.

    Obviously, the FWHM gives an estimate of the maximum spatial frequency that can be seen (though it's complex: point vs edge vs gradient, etc.).

    Your examples seem to confirm what I was saying.

    I don't think @ollypenrice is disputing the resolution of the RASA.

    Regards Andrew 
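
    vlaiv's uniform-light example is easy to demonstrate numerically (a toy illustration of mine, not from the thread):

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    flat = np.full((64, 64), 0.5)       # structureless "nebulosity"
    noisy = flat + 0.1 * np.random.default_rng(0).standard_normal((64, 64))

    print(np.ptp(gaussian_filter(flat, 3)))    # 0.0: blur leaves a uniform field unchanged
    print(np.ptp(gaussian_filter(noisy, 3)))   # small: the fine structure is largely erased
    ```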

     

    @vlaiv and @ollypenrice you are both right. @vlaiv is correct that aberrations are universal and affect both point-like and extended sources.

    However, just as dimmer stars show less pronounced diffraction spikes, extended objects like planets tend not to show them. This is what @ollypenrice observes: due to differences in contrast, the spikes are for all practical purposes absent.

    A classic example is curved spider vanes compared with straight ones. Both diffract light, but the curved vanes spread it into a distributed, low-contrast pattern, compared with the high-contrast, focused spikes of the straight ones (see the sketch below).

    Regards Andrew 
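
    A toy aperture model (my own sketch, with assumed dimensions) shows the effect; in the Fraunhofer approximation the PSF is the squared magnitude of the Fourier transform of the pupil:

    ```python
    import numpy as np

    N = 512
    y, x = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]
    r = np.hypot(x, y)
    aperture = (r < N // 4).astype(float)           # open circular pupil

    straight = aperture * (np.abs(x) >= 2)          # straight vane: a thin bar
    curved = aperture * (np.abs(r - N // 8) >= 2)   # curved vane, idealised as a thin arc/ring

    def psf(pupil):
        # Fraunhofer diffraction: PSF = |FFT(pupil)|^2
        return np.abs(np.fft.fftshift(np.fft.fft2(pupil))) ** 2

    # The straight vane throws its diffracted light into two sharp,
    # high-contrast spikes; the curved one spreads the same energy
    # into a faint, low-contrast halo.
    psf_straight, psf_curved = psf(straight), psf(curved)
    ```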

    • Like 1
  12. 12 minutes ago, vlaiv said:

    only time as being cost to do so

    I think this is the key point. The RASA has a higher étendue than the others: its aperture is the same, but it collects light over a much wider field, so it captures more light in total. Great if you have limited clear skies (a back-of-envelope comparison follows below).

    In the end it depends what you want from your system.

    We are just lucky to have such a wide choice of telescopes and modern CMOS detectors to choose from.

    Regards Andrew 
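
    The back-of-envelope comparison (illustrative numbers I've assumed, not measurements): étendue scales as aperture area times the solid angle of the usable field.

    ```python
    import math

    def etendue(aperture_mm, field_deg):
        # A * Omega in mm^2·sr: aperture area times the field's solid angle
        # (small-angle approximation for a circular field)
        area = math.pi * (aperture_mm / 2) ** 2
        omega = math.pi * math.radians(field_deg / 2) ** 2
        return area * omega

    # Same 8" aperture, but a fast astrograph covering an assumed 4 deg field
    # versus a long-focus scope covering an assumed 0.7 deg field:
    print(etendue(203, 4.0) / etendue(203, 0.7))   # ~33x: the ratio goes as (field)^2
    ```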

     

    • Like 1
    Building on @Zermelo's post above: classical and quantum mechanics are the two simplest examples of what are known as General Probabilistic Theories, i.e. theories that describe correlations of detector clicks.

    Oversimplifying: in the first, classical probability theory, the possible outcome probabilities sum to 1, while in the second, QM, the squared magnitudes of the possible outcome amplitudes sum to 1 (written out below). More complex options follow, with accompanying new phenomena.

    Who knows - maybe the rip-it-all-down-and-start-again replacements for GR and QM will require the next level 😈

    Regards Andrew 
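
    In symbols (the standard normalisation rules, stated here for reference):

    ```latex
    % Classical probability theory: non-negative outcome probabilities sum to one
    \sum_i p_i = 1, \qquad p_i \ge 0
    % Quantum mechanics (Born rule): squared moduli of complex amplitudes sum to one
    \sum_i |a_i|^2 = 1, \qquad a_i \in \mathbb{C}
    ```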

     

  14. 30 minutes ago, vlaiv said:

    I've outlined where many worlds fails to reproduce experimental results.

    If you're correct, I don't understand why serious physicists still consider it a valid interpretation. Personally, I find it unattractive, but that's an aesthetic perspective.

    Regards Andrew 

    PS I found this

    "A popular criticism of the MWI in the past, see Belinfante 1975, which was repeated by Putnam 2005, is based on the naive derivation of the probability of an outcome of a quantum experiment as being proportional to the number of worlds with this outcome. Such a derivation leads to the wrong predictions, but accepting the idea of probability being proportional to the measure of existence of a world resolves this problem. Although this involves adding a postulate, we do not complicate the mathematical part (i) of the theory since we do not change the ontology, namely, the wave function. It is a postulate belonging to part (ii), the connection to our experience, and it is a very natural postulate: differences in the mathematical descriptions of worlds are manifest in our experience, see Saunders 1998."

    from here, which may be of interest.
