Posts posted by andrew s

  1. 14 minutes ago, Michael Kieth Adams said:

    We have recently discovered that apparently space can expand faster than the speed of light and that two linked particles can somehow communicate instantly. Just those two things must clear a lot of bets from the table.

    Well, it depends on your definition of recent. Hubble's law was established in 1929. Quantum entanglement was proposed by Einstein, Podolsky and Rosen in 1935.

    While pop science likes to sensationalise faster-than-light metric expansion and spooky action at a distance, these are well known to working scientists as part of their day-to-day work.

    What you may not realise is that our classical world emerges from entanglement with the environment at large. You and I have our classical form due to our interaction with, for example, the CMB photons, the molecules of the atmosphere and light from the Sun.

    Regards Andrew 

  2. 28 minutes ago, Gfamily said:

    Interestingly, it seems that any DM that falls directly into the Supermassive Black Hole at the centre of a galaxy will be captured and will not escape - as far as I can tell, this is the only way of constraining DM. This last point is not one I've seen referenced anywhere, so it is my personal contribution to cosmology. If I was a better astronomer I'd have a clue how to apply this hypothesis to see if it's useful.

    Yes, it has to directly "impact" the BH event horizon. As you point out, it can't join the accretion disc. Not sure it's a unique insight, though.

    Regards Andrew 

  3. 28 minutes ago, robin_astro said:

    Yes, that is what I understood, but I was then surprised to find that physics does not preclude the formation of supermassive dark matter stars, in the early universe at least.

    https://www.scientificamerican.com/article/jwst-might-have-spotted-the-first-dark-matter-stars/

    Interesting, with lots of maybes, ifs and buts. If dark matter is a particle, and if it is its own antiparticle, then in the much denser early universe it could have become dense enough for significant numbers of dark matter particles to collide and annihilate, powering a dark star.

    If such a process were going on now in our galaxy's dark matter halo, we should see a background glow. I am not aware of any such detection, so it seems to be too diffuse.

    Regards Andrew 

  4. You miss two key points.

    The average density of matter is very, very low. We only notice it when it's concentrated into stars, planets, clouds etc.

    Which brings me to my second point. While dark and normal matter are affected by gravity in the same way, dark matter can't concentrate as normal matter does. To concentrate, matter needs to lose angular momentum and kinetic energy. Normal matter does this by emitting electromagnetic radiation, giving us some of the most spectacular sights, e.g. accretion discs. Dark matter can't do this, so it remains diffuse.

    Regards Andrew 

  5. 1) Yes, lensing is used to compute the distribution of matter (normal and dark) in galaxies.

    2) If our measurements were that far out, I think our predictions of binary star motions would be way off, but they seem OK. Look here for a bigger example.

    3) The mass of dark matter in the solar system is estimated to be about 1/9 the mass of Ceres.

    I have no idea what your last sentence means. We currently don't know what dark matter is.

    Regards Andrew 

  6. 24 minutes ago, vlaiv said:

    Same math applies to both images, and math says - If there is something at high frequencies and you remove it - it will be obvious in the image.

    On the other hand - if there is no signal in high frequencies and you remove it - there will be no difference.

    Absolutely, agreed. By definition, if you undersample you are losing higher frequencies! If your combined optics/mount/seeing system is capable of delivering 1 arcsec and you undersample, you will lose resolution and information.

    A pure star field may look the same, as you're just looking at Gaussian images, which scale. Look at a planet, nebula etc. with 1 arcsec detail and they will be different.
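
    A minimal numpy sketch of that frequency-cut experiment (my own toy version, with a random test image standing in for real data - not vlaiv's actual code):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    img = rng.normal(size=(256, 256))        # stand-in for a real sub

    # Zero everything above half the Nyquist frequency and invert -
    # i.e. throw away what x2 coarser sampling could never have kept.
    f = np.fft.fftshift(np.fft.fft2(img))
    ny, nx = img.shape
    y, x = np.ogrid[:ny, :nx]
    r = np.hypot(y - ny / 2, x - nx / 2)
    f[r > ny / 4] = 0                        # cut above half-Nyquist

    img_lo = np.fft.ifft2(np.fft.ifftshift(f)).real
    print(np.abs(img - img_lo).max())        # non-zero: detail was lost
    ```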

    Regards Andrew 

  7. It's difficult due to the difference in stretch, but to me the whole lower image looks grainy. This is more obvious in the faint outer regions.

    It may be due to the noise and/or stretch, but how can you clearly delineate the data from the noise? You can't, especially in the areas where the signal approaches the noise floor.

    Regards Andrew 

    PS I assume you accept there are differences in the leaf picture given your earlier reply. The same maths applies to both. 

     

  8. 3 hours ago, vlaiv said:

    @Dan_Paris

    Have a look at this and tell me what you think:

    [attached image]

    The top image is the bin1 image and its Fourier transform.

    The bottom right image is that same Fourier transform with all the values above half the sampling frequency (the frequency that corresponds to x2 coarser sampling) set to zero. I effectively removed all the higher frequencies that would not be captured if you sampled at half the current rate.

    Then I did an inverse FT of that, which is the bottom left image. Can you tell the difference in resolution between the top and bottom images?

    And mind you - this was even done on processed data, not on 32-bit floating point with much higher precision, yet the results are evident.

    Well, I can see differences in these two images, so that seems to contradict what you just posted. However, I won't continue.

    Regards Andrew 

  9. 1 hour ago, vlaiv said:

    That is exactly my point.

    If you have an image that is properly sampled and you cut frequencies above a certain point, it will show in the image.

    Take a look at the post above that, where I did the same - can you see an equal loss of detail in that image? Can you spot any place where you can no longer resolve a feature, like the smallest fibers not being resolvable in the leaf example?

    Musing about all this, the artifacts in the under-sampled leaf, when stacked and processed, could easily give the impression of enhanced detail if you don't have a higher-resolution reference image.

    One way to look at this is that your seeing/guiding/optics need to be spatially band-limiting your system. This gives some good examples of the problems you can get in normal photography if you undersample. It says:

    "Many digital camera sensors— especially older cameras with interchangeable lenses— have anti-aliasing or optical lowpass filters (OLPFs) to reduce response above Nyquist. Anti-aliasing filters blur the image slightly, i.e., they reduce resolution. Sharp cutoff filters don’t exist in optics as they do in electronics, so some residual aliasing remains, especially with very sharp lenses. The design of anti-aliasing filters involves a tradeoff between sharpness and aliasing (with cost thrown in)."

    Astro imagers have the atmosphere and their mounts and optics to do the filtering! 

    My two pennyworth would be that it's better to slightly oversample than risk undersampling.
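
    As a rough worked example of checking this (the numbers are mine, purely illustrative), the standard plate-scale formula makes the comparison easy:

    ```python
    # Illustrative sampling check - example numbers, not a recommendation.
    pixel_um = 3.76        # pixel size in microns (assumed small-pixel CMOS)
    focal_mm = 1000.0      # focal length in mm (hypothetical scope)
    seeing_fwhm = 2.0      # arcsec, assumed seeing/guiding/optics FWHM

    # Standard plate scale: 206.265 * pixel size (um) / focal length (mm)
    scale = 206.265 * pixel_um / focal_mm
    print(f"{scale:.2f} arcsec/pixel")       # ~0.78 arcsec/pixel here

    # Nyquist-style rule of thumb: at least ~2 pixels across the FWHM,
    # i.e. undersampled if the pixel scale exceeds half the FWHM.
    print("undersampled" if scale > seeing_fwhm / 2 else "sampling ok")
    ```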

    Regards Andrew 

  10. 51 minutes ago, vlaiv said:

    Just for reference - this happens to properly sampled image when you do it:

    [attached image]

    Not sure exactly what you're trying to show, but on my device the top leaf is much sharper by eye than the lower one, which also shows artifacts.

    Regards Andrew 

  11. Lots of misunderstanding of each other. 

    For example, cameras are areal detectors, normally square, but they give a single "point" output per pixel.

    Another would be a signal varying in time compared with one varying in space, i.e. a 1/time (temporal) frequency compared with a 1/distance (spatial) frequency.

    Best to follow your own proposal and call it an unsatisfactory no score draw. 😊

    Regards Andrew 

  12. 47 minutes ago, vlaiv said:

    I'd be more concerned with gaussian filter in the workflow in light of above discussion to be honest.

    It is deliberate reduction in resolution without any real reason to do so.

    [attached image]

    Even if the data is oversampled, using a Gaussian filter on it will further reduce detail.

    Now I wonder if just using a Gaussian with FWHM of 2-4 pixels then binning the data x3 (similar to the above median x3, but better behaved) would yield the same noise reduction of x4.

    Alternatively, I do wonder how a stack of such subs with selective bin x4 would behave (also in terms of resolution).

    Maybe, as you said, it's too different a use case. In high-resolution spectroscopy you have one long-exposure image, so stacking is not normally an option. In the text he does discuss the issues you raise but obviously answers them in his context.
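
    On the resolution cost of the Gaussian filter itself, a back-of-envelope check is possible since Gaussian widths add in quadrature under convolution (the numbers below are my assumptions):

    ```python
    import math

    star_fwhm = 2.5   # arcsec delivered by seeing/optics (assumed)
    filt_fwhm = 1.5   # arcsec equivalent of a 2-4 px Gaussian filter (assumed)

    # Convolving two Gaussians gives FWHM = sqrt(fwhm1**2 + fwhm2**2)
    combined = math.hypot(star_fwhm, filt_fwhm)
    print(f"{combined:.2f} arcsec")   # ~2.92 arcsec - a measurable loss
    ```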

    Regards Andrew 

  13. 21 minutes ago, vlaiv said:

    I think that there are better methods - I do agree on the issue with telegraph noise if the camera is susceptible to it - but selective binning is a better way of dealing with it than using a median filter before binning.

    A median filter acts as a noise shaper and reshapes the noise in funny ways, and if possible it is best avoided, as algorithms further down the pipeline often assume noise of a certain shape (most notably Gaussian + Poisson).

    In any case - telegraph noise can be dealt with in a manner similar to hot pixels, by using sigma clipping. If enough samples show that there should not be an anomalous value there, then the anomalous value should be excluded from the bin / stack (rather than all values being replaced with some median value).

    I am surprised at such a quick dismissal. Christian is a careful worker and I respect his opinions as I do yours.

    However, given the speed of your response, I assume you have looked at his algorithm before and compared the results. If so, could you share them?
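
    For context, the sigma-clipped rejection you describe is, as I understand it, roughly the following (a minimal sketch over a stack of aligned subs, assumptions mine - not your implementation):

    ```python
    import numpy as np

    def sigma_clip_mean(stack, kappa=3.0):
        """Combine a (nsubs, ny, nx) stack, rejecting per-pixel outliers
        (e.g. telegraph-noise spikes) beyond kappa sigma of the median."""
        med = np.median(stack, axis=0)
        std = np.std(stack, axis=0)
        good = np.abs(stack - med) <= kappa * std
        clipped = np.where(good, stack, np.nan)   # drop outliers only
        return np.nanmean(clipped, axis=0)        # mean of survivors
    ```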

    Regards Andrew 

  14. One area of noise which has grown in importance with small-pixel CMOS cameras is random telegraph noise. If it's been covered above, please forgive my not reading the whole thread.

    I have not seen it discussed much, but it can be the dominant noise. C. Buil discusses it on his site and gives an algorithm for reducing it in oversampled images here, in section 6 (wrongly labelled as 5). It is in French, but Google Translate does a fair job.
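
    As I read it, the general idea is a median filter over small neighbourhoods followed by binning. In outline it is something like this (my sketch of the general approach, assuming scipy - not Buil's exact algorithm):

    ```python
    import numpy as np
    from scipy.ndimage import median_filter

    def reduce_telegraph(img, factor=3):
        """Median-filter oversampled data (suppressing isolated
        telegraph-noise pixels), then mean-bin by `factor`."""
        md = median_filter(img, size=factor)
        ny = md.shape[0] // factor * factor   # trim to a multiple of factor
        nx = md.shape[1] // factor * factor
        return (md[:ny, :nx]
                .reshape(ny // factor, factor, nx // factor, factor)
                .mean(axis=(1, 3)))
    ```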

    There are also some English discussions of it in the CMOS camera reviews. Here, for example.

    Regards Andrew 

    Note his data refers to the native camera bit depth, not 16 bit, unless it is a 16-bit camera.
