
Whirlwind

Members
  • Posts

    537
  • Joined

  • Last visited

Posts posted by Whirlwind

  1. 2 hours ago, Adam J said:

    2 images would double the number of levels, but who on earth stacks only two frames anyway? By the time you have taken 16 frames you're going to be OK.

    Yes lower read noise lets you take shorter exposures and get to that point faster. But as the KAF8300 has a 16-bit A/D and the ASI1600mm pro a 12-bit A/D it takes more frames for the ASI1600mm pro to overcome quantization noise at lower gains.

    Adam

    In the UK?  When you are trying ultra-narrowband, have set 30-minute sub-exposures and then the clouds roll in, maybe?

    I think I've been misunderstood.  I was hypothesising about why there would be a difference between a more sensitive Sony CCD and the 'less' sensitive Kodak (i.e. both 16-bit): why, even where similar images were taken, there was more noise in the Sony images even though they looked less washed out.  Agreed that a 12-bit camera will need a lot more frames (to offset the quantization noise), but I wasn't comparing those two.  I was conjecturing that the lower dynamic range of the Kodak was the cause.  It should be a relatively simple test, though, to take the ideal exposures for each camera (in reality using a fast system so that camera noise is overwhelmed earlier) and then combine them to see whether the noise in the Sony is still as high.
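    To make the "camera noise overwhelmed earlier" point concrete, here's a rough back-of-the-envelope sketch in Python. The read-noise and sky-rate figures are purely illustrative (not measurements from either camera), and the factor-of-3 swamping rule is a common rule of thumb, not a hard law:

```python
def min_sub_length(read_noise_e, sky_rate_e_per_s, swamp_factor=3.0):
    """Shortest sub-exposure (seconds) for which sky shot noise exceeds
    read noise by `swamp_factor`, i.e. the point beyond which read noise
    no longer dominates the sub.  Derived from:
        sqrt(sky_rate * t) >= swamp_factor * read_noise
    """
    return (swamp_factor * read_noise_e) ** 2 / sky_rate_e_per_s

# Illustrative figures only: a ~1.5 e- read-noise CMOS vs an ~8 e- CCD
# under a dark narrowband sky delivering ~0.1 e-/pixel/s.
cmos_sub = min_sub_length(1.5, 0.1)   # ~200 s
ccd_sub  = min_sub_length(8.0, 0.1)   # ~5800 s -- hence the long CCD subs
```

A fast optical system raises the sky rate per pixel, which is why it pushes both cameras past their read-noise floor sooner.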

  2. 3 hours ago, Adam J said:

    That effect is largely mitigated by stacking. When you stack you are averaging values so your final image is not restricted to the same number of "bands" as in the sub frames. 

    Yes, but there is a balance here: the more you stack, the more you mitigate the effect - it doesn't disappear when you stack just two images, for example.  In the test examples there were relatively few frames in each stack, so there is a possibility that, with the extra dynamic range the Sony CCD has, there simply weren't enough images in the stack to offset the additional bands.  This then becomes one of the benefits of a lower-noise camera: you can more quickly overcome the camera noise, so sub-exposures can be shorter, increasing the number of images in the stack and mitigating the potential issue associated with more 'bands' from the greater dynamic range.
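    As a rough illustration of the banding point: averaging N dithered frames knocks quantisation noise down by sqrt(N), which is equivalent to gaining about half a bit of effective depth per doubling of the stack (this assumes the per-frame noise spans at least ~1 ADU so the average can land between levels):

```python
import math

def effective_bits(adc_bits, n_frames):
    """Effective bit depth after averaging n dithered frames of a
    `adc_bits`-bit ADC: quantisation noise falls as sqrt(n), i.e.
    0.5 * log2(n) extra bits."""
    return adc_bits + 0.5 * math.log2(n_frames)

effective_bits(12, 2)    # 12.5 -- two frames barely helps
effective_bits(12, 256)  # 16.0 -- a 12-bit camera needs ~256 frames
                         # to match a 16-bit ADC's level count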

  3. 9 hours ago, Northernlight said:

    Adam, the QSI 683 doesn't need 30 mins subs to get good results - I live under bortle 6 skies and have managed some decent images.

    I took all of these with 600s subs on my QSI 683 - and they were all about 10-12 frames per channel.  I'm no expert at processing by any means but managed to do OK using 600s subs - well, except for the last image - I got a bit carried away with the processing and it's a bit overcooked and a bit in your face.  Nevertheless, the QSI 683 produces some nice images with relatively small amounts of data.

    All cameras can produce great images.  The benefit of the lower-noise cameras is that the optimum sub-lengths can be shorter, because the camera noise becomes overwhelmed by other sources of noise sooner - hence the appeal of the CMOS and Sony CCD sensors.  It's especially true for ultra-narrowband imaging.  As such, the burden on the mount is lessened.  This presentation gives the best information that I've found:-

     

  4. On 31/05/2020 at 09:13, Northernlight said:

    I also stumbled across a very interesting real-world comparison between the QSI 683 (KAF-8300) and the QSI 690 (ICX814) from Sara Wager - showing her direct comparisons of her own data, she found that the KAF-8300 actually produced cleaner images with less noise - despite the camera specs saying otherwise. This is very interesting coming from one of the world's most respected imagers.

    https://www.swagastro.com/real-world-comparison---kodak--sony-chip.html

    Rich.

    I've always wondered whether this comes from differences in the dynamic ranges of the cameras feeding into the final images.  When I look at the two comparisons, the Kodak chip seems more 'washed out' compared to the Sony chip.  By this I mean that the contrast between the areas with less signal and more signal is more pronounced in the Sony CCD, but noisier.  At a very broad level, the lower dynamic range has fewer 'bands' in the 16-bit system to drop the signal into.  As such, each band holds more information, but because there are fewer 'bands' there can be less contrast.  In comparison, in a higher dynamic range camera you have more bands, but each band will effectively hold less signal - hence greater contrast between regions but higher noise.  In principle, the more exposures you take, the more this issue is removed.  Just some thoughts as to why the Sony appears noisier even though the figures suggest otherwise.

  5. 17 hours ago, Northernlight said:

    Is there any specific time of the year when new camera are launched ?  or do manufacturers just release at different intervals ?

    I might just have to go with the ASI1600mm Pro - maybe see if i can pick one up second hand as an interim measure if there is nothing else around.  It's just a shame that Sony didn't release a mono version of the IMX571 found in the ASI2600, as it's a large chip at a good price point on the ZWO.

    Unfortunately I don't think the Astronomy community works to the Xmas schedule in the same way! :) 

    Realistically they will be tied to what the CMOS and CCD manufacturers are working to, so it depends on what they are bringing to the market.  The astronomy market is so small compared to the camera market that the sensor manufacturers won't be making any decisions based on it.

    I think you are going to have to make some form of sacrifice here.  There is no suggested replacement for the ASI1600MM that I've seen anyone mention (you should also consider the artefacts you can get around bright stars due to the specifics of this sensor).

    As for narrowband imaging with a colour camera, it is of course possible, but you lose some sensitivity and you will still get some contamination between pixels.  For example, the green pixels will still detect about 10% of the incoming H-alpha signal, so there will be some crosstalk - hence even with a multiband filter the green won't exactly correlate to OIII.  The red will also capture the SII regions, so you are going to lose some control.

    The only option that really covers all your requirements is the ASI6200 (or equivalent), but I assume this is out of the price range as it wasn't mentioned.  Otherwise you will have to look at going smaller, going colour, or an older design and continue managing the noise.

  7. 7 hours ago, Jkulin said:

    A planetary nebula is one of the last stages of life for most mid-sized stars. Once a star runs out of fuel, it begins to collapse in on itself, creating the clouds of red and blue in the photo.

    A beautiful image.  The one correction I'd point out is the above.  Planetary nebulae and white dwarfs aren't formed by a star collapsing in on itself - that process results in neutron stars, pulsars and black holes.

    A white dwarf is in effect the exposed core of the original star.  To summarise the process very briefly: any star with a mass of less than about 10 Suns will follow this route.  The star burns hydrogen in the core until it runs out of fuel there.  Eventually the helium 'ash' core grows too large and the pressure at its edge is no longer sufficient to burn hydrogen, so the layers above the core contract slightly onto it.  This raises the pressure and temperature at the core boundary again, instigating another period of hydrogen burning in a 'shell'.  As the burning is in the shell, not the core, there is less material above to 'shield' the outer layers from the pressure generated at the shell, and hence the star bloats into a red giant.

    The next stages depend on the mass of the star.  For stars of very roughly solar mass and above, the shell eventually drops enough helium ash into the core that the pressure and temperature rise high enough to start core burning of helium to carbon.  This shuts down shell burning, and the outer layers shrink again.  Eventually this too shuts down and the cycle repeats, except now you can get shells of hydrogen and helium burning at different stages.  These ever-expanding shells are less and less shielded from the outer layers (and helium burning also produces higher temperatures and pressures).  The higher pressures exert more force on the outer layers, ultimately blowing them off.  These ejected layers become the planetary nebula; with no material left to sustain shell burning, the burning shuts down and you are left with a white dwarf - the core of the now dead star.

    There's also thinking that planetary nebulae may only form in binary/multiple systems.  The ejection of the outer layers of a star is thought to be a relatively slow process, which raises questions as to whether enough material can be ejected quickly enough to form the planetary nebula.  In a binary system the shell can be ejected much more quickly as the expanding shell interacts with the companion: the companion spirals inwards and the orbital energy lost is transferred into the shell, helping eject it.

     

    • Thanks 2
  8. 2 hours ago, beka said:

    I am at about 2300m altitude and, judging from the discussions on this forum, the seeing at my location seems to be better than most folks in the UK have, so I am in favor of larger apertures and pushing my equipment to get the best resolution and to pull in the faintest objects. While by no means scientific, I tried to look at images I had taken with a Canon D700 (30 sec subs) on Celestron 102SLT and C11 scopes to compare the faintest stars captured, and it was 15.95 and 17.95 mag respectively - which also appears to align roughly with theory.

    At 2300m you are imaging at the same altitude as some of the professional observatories.  There is a lot less air to image through and, I assume, a lot less light pollution (which with the higher altitude also means a lot less scatter).  All in all it sounds like a wonderful place to be imaging - the sort of location that would be ideal for remote imaging setups (though I don't know what the weather is like).  In comparison, most areas in the UK are under 500m in height, there is a lot of light pollution, and we sit almost permanently under the jet stream (making seeing poor most of the time).  Hence the ability to make the most of a telescope's resolution is limited, and a larger aperture just means the ability to capture more light.
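    For what it's worth, the aperture comparison above can be sanity-checked with the usual light-grasp formula.  The apertures are my assumption (102mm for the 102SLT, ~280mm for the C11); the magnitudes come from the post:

```python
import math

def mag_gain(d1_mm, d2_mm):
    """Theoretical limiting-magnitude gain going from aperture d1 to d2.
    Light grasp scales with area, so gain = 2.5*log10((d2/d1)**2)
    = 5*log10(d2/d1)."""
    return 5.0 * math.log10(d2_mm / d1_mm)

# 102 mm refractor vs ~280 mm C11, as in the post above:
gain = mag_gain(102, 280)   # ~2.19 mag, close to the measured
                            # 17.95 - 15.95 = 2.0 mag difference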

    • Like 2
  9. I suppose there are a couple of options:-

    Astrodon NII filter
    A NIR filter (slightly different view of the dusty objects, though it depends on how well corrected the scope is at these wavelengths)
    Photometry filter of some form (exoplanet, variables, binaries)
    Star Analyser to do some basic spectroscopy

    • Thanks 1
  10. I'm currently using Prism as it effectively does everything in one package (no third party software, except perhaps some types of plate solving).

    I think it depends on how you want to control things.  On a night-by-night basis NINA looks excellent, although it does need a full version of Windows installed (so watch out for any mini-PCs with a lighter version of Windows).

    NINA doesn't seem to quite yet have all the automation stuff integrated into it (last time I looked) for a full observatory but that may or may not be needed depending on the setup.

  11. 11 hours ago, alan4908 said:

    On your blue processing point, I believe the blues are an accurate representation.  For instance, take a look at these images from three accomplished astrophotographers:

    Adam Block - http://www.caelumobservatory.com/obs/m109.html 

    Robert Gendler - http://www.robgendlerastropics.com/M109.html

    On the small blue stars and the blue tint of the small background galaxies, these also appear accurate - for instance,  have a look at this APOD from Bob Franke:   http://www.star.ucl.ac.uk/~apod/apod/ap130523.html 

    Hmm, I think they should be more pink in the main galaxy.  From the larger images they appear to be star-forming regions, and hence you'd have some red from the emission nebulae as well as the hotter stars.  Very intense blue would be representative of very hot stars, and relatively speaking they are rare.  As for the Franke image, I'd make the same argument: the smaller galaxies are too blue.  The cores of galaxies are generally old and hence cooler.  Such intense blue is generally only seen in galaxies that are interacting (at least nearby, anyway), and then in the tidal tails rather than the core.  The cores would be more akin to M109 - pinky blue at the edges and cooler towards the centre (and if it is an elliptical, then cool all the way through).  I've included a basic Photoshop image of the areas.

    Of course it is artistic licence and still an excellent picture but I just find the intense blue areas 'distracting'.

    Bluepoints.jpg

  12. It's a lovely image.

    Can I ask though what's happened with the blue element of the processing? It seems to make the image look a bit 'spotty' and there are areas that have a very blue tint (e.g. PGC37700 / PGC37621)

  13. 4 hours ago, Northernlight said:

    Gorr_77,

    I spoke to manufacturer directly about the E.Fric mount and the 30kg limit and there are some caveats to that 30kg.

    For imaging I was told that the 30kg limit was for a pier-mounted "compact" scope - but for other configurations such as big Newts the max imaging payload would be 24-25kg.  Not entirely sure what constitutes a compact scope, maybe RCs / SCTs etc.

    Rich

    Astro-physics have a diagram on their site showing how size affects the capability of a mount.

    https://www.astro-physics.com/mach2gto

    Although I have no evidence, I wouldn't be surprised if the profile is similar for all mounts (just scale it to 30kg rather than the 35kg of the Mach2).

  14. 22 hours ago, Datalord said:

    EDIT:

    Fornax 52. Formidable contender!

    Used Mesu 200. Sounds like a very possible option.

    TTS-160. Alt-Az with a rotator. I don't want that complication.

    Vixen AXJ. 22kg payload is too little for my liking.

    iOptron in general. I was woo'ed by them previously, but I worry a lot about the software side. They seem to have quite a few issues all around.

    GM1000HPS. Can use my existing tripod? Puts it back in the budget range.

    Crux 200HD. £8k, but serious reviews and performance.

    Just to note, the TTS-160 is also quoted at 22kg, so if that is the bare minimum for you then it is something to consider.

    As an aside if you are willing to look at used, then an Astro-physics AP900 may be worth considering (but no encoders). 

    Unfortunately everything is expensive at the moment because of the exchange rate so don't expect that to improve in the near term.

    The issues with iOptron seem to be more related to their encoders.  People seem happy with the non-encoder versions.

    • Like 1
    Another option would be the Vixen AXJ.  That has a quoted payload of 22kg and can be bought with encoders (or can be upgraded later if you decide they're necessary).  It should just about fit into the budget.

    Note that I think the distributor has recently changed, and there can be discrepancies between 'old' and 'new' prices because of this.

  16. 11 hours ago, oymd said:

    wow....gorgeous...

    Can you please stop posting photos like that....?? You are leading some of us to desperation...!!

    :)

     

    Lots of dark skies and low light pollution, good seeing and high transparency, with many hours unaffected by weather, will always produce pictures that outdo what you can do in the UK with even the largest of telescopes.  There are few places in the UK that can achieve even one of these.  It's why people consider remote imaging.

    • Like 2
  17. 22 hours ago, ollypenrice said:

    These are cogent arguments but are predicated upon using CCD technology. For better or worse CCD chips will probably go out of production to be replaced by CMOS. What are the binning advantages of CMOS compared with CCD?

    Olly

    There's no fundamental difference between binning on CMOS and CCDs.  CMOS reads out each pixel individually, whereas CCDs read out by line or column.  Generally you hardware-bin with CCDs: the readout doesn't occur at the pixel level, so the charge can easily be combined in the readout column/row before the read.  As CMOS is read out at the pixel level, you read each pixel individually and then combine the data (so it is more of a 'software' bin).  Both do the same thing, though, which is combine the data from multiple pixels into one 'super pixel'.  Each approach has its own read-noise characteristics, but binning is still beneficial overall compared to trying to image at too high a resolution.
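    The read-noise characteristics of the two binning styles can be sketched quickly (the noise figures below are illustrative, not from any particular camera):

```python
import math

def binned_read_noise(read_noise_e, n_pixels, hardware=True):
    """Read noise of one n-pixel 'super pixel'.  A CCD hardware bin sums
    charge before the single read, so read noise is paid once; a CMOS
    software bin reads every pixel, so the per-pixel noises add in
    quadrature (read_noise * sqrt(n))."""
    if hardware:
        return read_noise_e
    return read_noise_e * math.sqrt(n_pixels)

# Illustrative: a 2x2 bin of an 8 e- read-noise CCD vs a 1.5 e- CMOS.
ccd_hw  = binned_read_noise(8.0, 4, hardware=True)    # 8.0 e-
cmos_sw = binned_read_noise(1.5, 4, hardware=False)   # 3.0 e-
```

Which is why a low-read-noise CMOS can software-bin and still come out ahead of a hardware-binned CCD with higher read noise.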

    • Like 1
  18. 15 hours ago, ollypenrice said:

    Edit. One more consideration. Pixels are getting smaller. Do you want to end up with a focal length entirely inappropriate to these cameras? I can't read the future but my impression is that it is not going the way of the big amateur reflector.

    I don't think big reflectors have had their day; there just has to be a shift in thinking as to how they are used.  If we go back 10 or so years, most CCDs had large pixels and hence you sacrificed resolution for field of view.  With the new CMOS cameras that is no longer the case and, as noted, with smaller pixels even a moderate focal length instrument can get you to high resolution over a wide field.  The old CCDs also had higher read noise, so each sub-exposure had to be longer, requiring a much heftier mount for the larger telescopes as well as longer guiding, more risk from plane trails and so forth.  In comparison, new cameras have lower read noise, which means the sub-exposures can be much shorter even for narrowband, alleviating the need for that perfect mount.

    So what is the benefit of a large reflector then?  Well, let's consider a 16" 3250mm RC paired with a new 6200-class mono CMOS camera.  At native focal length you have 0.24" per pixel (and 62 Mpixel files, which is its own issue) - a resolution advantage you may get to use once in a lifetime in the UK (assuming no clouds).  But that isn't the only benefit, as it is still a larger aperture.  If you bin the camera 3x3, you now have a much more reasonable 0.71" per pixel and files down to about 7 Mpixels (which is still fine for almost everything you want to do with the image).  So what is the benefit in doing this?  Well, let's compare what telescope you would need to get the same resolution unbinned - a 16" at about 1080mm, equivalent to an F2.7 system with the same camera.  Now let's add a 0.8x reducer/flattener.  That takes the native unbinned setup to 0.3"/pixel at 2600mm, which is still oversampled.  Bin 3x3 again and we are at a more reasonable 0.9"; the equivalent unbinned focal length would be about 860mm at 16" - an F2.1 system with the same camera.
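    For anyone wanting to reproduce the numbers above, here is the standard image-scale arithmetic.  I'm assuming the 6200-class camera's 3.76 micron pixels and a 406mm aperture for the 16":

```python
def pixel_scale(pixel_um, focal_mm, binning=1):
    """Image scale in arcsec/pixel: 206.265 * pixel size (um) / FL (mm),
    with binning multiplying the effective pixel size."""
    return 206.265 * pixel_um * binning / focal_mm

pixel_scale(3.76, 3250)             # ~0.24"/px native at 3250 mm
pixel_scale(3.76, 3250, binning=3)  # ~0.72"/px binned 3x3
pixel_scale(3.76, 2600, binning=3)  # ~0.89"/px with the 0.8x reducer

# Focal length giving the same scale unbinned, and its f-ratio at 406 mm:
equiv_fl = 3250 / 3                 # ~1083 mm
f_ratio  = equiv_fl / 406           # ~f/2.7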

    So where does this get us?  We can't improve the seeing, unfortunately (not until we get real AO systems for the amateur market anyway), so the resolution we can achieve is still limited.  What, then, is the point of the large telescope?  It means that low S/N areas of an image accumulate much more data much quicker because of the larger light-collecting area.  As such, faint detail that would be processed out with smaller-aperture instruments (because there was too much noise) can be displayed in all its glory.  This gives the impression of higher resolution when in reality it is simply teasing out more detail at the lower resolution (so for smaller galaxies you would be able to pick out faint tidal tails etc.).  The only areas where it is unlikely to be of use are very bright ones such as the core of M42.

    As such, larger reflectors will return to what visual astronomers see them as... huge light buckets!  However, you do sacrifice field of view and accept extra maintenance, diffraction spikes and so forth.

    (*Note I've glossed over some read-noise issues when binning, so it won't be quite as good as this, but the principles stand.)

    As for the argument of a C14 Edge versus an RC16, it is going to be the same as any argument between the Edge series and RCs at equivalent sizes.  I'd wonder whether the weight of the mirror might be an issue in the Edge, with it moving whilst imaging (whereas the RC16's mirror should be more fixed, though collimation is more tricky).  From a personal perspective I'd go RC16, but that's because I like to do a bit of photometry/spectroscopy with a scope that size, and lenses are generally only corrected at optical wavelengths, whereas a mirror system can also image slightly into the near-infrared or blue/near-UV where lenses aren't well corrected.

     

    • Like 2
    • Thanks 2
    It's not usually a whole column that is defective but a single pixel that has been damaged, by radiation in some way.  The way the data is read out then means this error is added to every pixel in that column (hence the description 'defective column').  Most CCDs will pick up these defects over time - the older the CCD, the more likely you are to have them.  They can usually be calibrated out just fine, and in certain software (such as PixInsight) you can mark where the defect is so it can be managed better.

  20. 55 minutes ago, Dave Smith said:

    Thank you very much for your reply. My initial thought was that your possible explanation was not the reason because I had tested the signal strength at the beginning of measurements but then I examined the output file and found that a very large proportion of my readings were saturated but not at the beginning. It looks as if the sky became more transparent rather than high cloud appearing. It is a lesson learnt although rather severe to have two nights wasted due to the same error.  I am  rather relieved that the cause has been revealed. Without your post I would not have thought to check that out, so thanks again.

    Dave

    Cool, glad it helped solve the problem.  It could be an improvement in seeing as well.  You shouldn't worry too much, though... this impacts professional observations too, via the comparison stars.  You start with a 10-second exposure far below saturation (usually 40-50% of saturation is targeted) at 2.5" seeing, and if it improves to <1" then the signal in the comparison star explodes and saturates, prompting a lot of cursing and a rapid change of exposure time!
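    The size of that saturation jump is easy to estimate: for a fixed total star flux, the peak per-pixel signal goes roughly as 1/FWHM squared.  This is a crude top-of-the-PSF approximation that ignores pixel sampling, but it shows the scale of the effect:

```python
def peak_scaling(fwhm_before_arcsec, fwhm_after_arcsec):
    """Approximate factor by which the peak per-pixel signal of a star
    rises when seeing improves: the same total flux is concentrated
    into an area ~ FWHM**2 smaller."""
    return (fwhm_before_arcsec / fwhm_after_arcsec) ** 2

peak_scaling(2.5, 1.0)  # 6.25x brighter peak pixel -- easily enough to
                        # push a 40-50% exposed star into saturation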

    • Like 1
    This looks like high cloud that wasn't easily observable.  It's difficult to use the comparison stars as a metric in this case because we don't know what was being used or its magnitude.  A much brighter star will show less variability because the flux is so much higher, and it still gives a very strong signal-to-noise despite the cloud.  If you have a background of 10 and a flux of 1,000,000, losing 10% of the flux to cloud isn't that significant from a S/N perspective; if you have a flux of 100 and lose 10% of your signal variably, it is much more of an issue.  I would suspect that if you use increasingly faint comparison stars you will see the scatter increase similarly to the target star.  Indeed, clouds can be useful for photometry of very bright stars as they scale back the flux and reduce the risk of saturation (though generally defocussing is more consistent).
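    Putting rough numbers on that, shot noise only, using the illustrative fluxes from the post (real aperture photometry would also fold in read noise, dark current and the number of background pixels):

```python
import math

def snr(source_e, background_e):
    """Simple shot-noise S/N for aperture photometry: signal over the
    Poisson noise of signal plus background."""
    return source_e / math.sqrt(source_e + background_e)

# Background 10 e-; bright star 1e6 e-, faint star 100 e-:
snr(1_000_000, 10)        # ~1000
snr(0.9 * 1_000_000, 10)  # ~949 -- 10% cloud loss, still an excellent S/N
snr(100, 10)              # ~9.5
snr(0.9 * 100, 10)        # ~9.0 -- the same variable 10% loss now shows
                          # as visible scatter in the light curve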

  22. On 15/02/2020 at 20:42, Marvin Jenkins said:

    Now I am going to throw this out there... it seems there was a companion. Now one cannot be found. Betelgeuse has indigestion and is bloated like a python!

    Not scientific I know, but what a gutsy pig. Rennies anyone?

    Marvin

    It is known that stars in the RGB and AGB phases can expand and envelope a secondary star.  The evidence for these systems comes from white dwarfs with very short-period companions that couldn't have arisen from the original formation.  Although no planets have (yet) been shown to survive such a process, brown dwarfs can.  One of the shortest-period known systems is about 68 minutes (https://academic.oup.com/mnras/article/476/1/1405/4832497).  There are also eclipsing examples (https://academic.oup.com/mnras/article/471/1/976/3892366).  Such systems are known as post-common-envelope binaries (PCEBs).

    The general theory is that the companion is swallowed during the RGB or AGB stage and then spirals inwards, imparting angular momentum into the envelope of the star, which eventually ejects it.  It is thought that sub-dwarf stars are post-AGB/RGB stars where this has happened but which have not yet shrunk down to white dwarfs; about 50% of sub-dwarfs show close companions.  It is speculated that the other 50% could have had their envelopes ejected by massive planets that were swallowed but ultimately destroyed.

    As such it is possible that Betelgeuse has a companion, which in itself would be fascinating, as these stages are very short-lived (in astronomical terms) and difficult to identify.  What makes the above plots questionable is that this stage generally circularises orbits very rapidly rather than leaving them elliptical, so seeing a rare event in a very rare circumstance is unlikely (but not impossible).  It does potentially explain Betelgeuse's rapid rotation, though.  We can all do the experiment on a spinning chair showing how we spin faster when mass is closer to the centre of rotation; as a star expands, its rotation should therefore slow.  Fast rotation at this stage implies the progenitor was rotating unusually rapidly or that something is spinning it up - and that could be a common-envelope companion.  The drawback is that if there are companion(s), then our estimates of Betelgeuse's mass could be wrong and it may never go supernova.

     

    • Like 2