Whirlwind

Members
  • Content Count

    392
  • Joined

  • Last visited

Community Reputation

155 Excellent

About Whirlwind

  • Rank
    Star Forming

Profile Information

  • Location
    Leicester
  1. Also make sure you take into account any filter you might attach to the DSLR (e.g. light pollution filter/multipass narrowband filter) as the thickness of the filter will alter back focus slightly.
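     As a rough guide, a plane-parallel filter pushes the focal plane back by about a third of its thickness. A minimal sketch of that calculation (assuming a typical glass refractive index of ~1.5; the figures are illustrative):

        # Back-focus shift caused by adding a plane-parallel filter to the
        # imaging train: the focal plane moves back by t * (1 - 1/n).
        def backfocus_shift_mm(thickness_mm, n=1.5):
            return thickness_mm * (1 - 1 / n)

        print(backfocus_shift_mm(2.0))  # a 2 mm filter adds ~0.67 mm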
  2. I always find late August to late September to be the best time of year. There is a good amount of darkness but it can still be pleasantly warm. We can have decent spells of weather at this time of year and there's generally a good variety of targets to observe.
  3. In PixInsight there is an option in ImageIntegration for Pixel Rejection (2). In there you will find sigma low and sigma high values. These settings let you tailor how many outlier pixels are discarded. There is a balance (too much and real data gets discarded) but I have found it useful for getting rid of the remaining hot/warm pixels if used carefully.
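     As an illustration of what sigma rejection is doing under the hood, here is a simplified one-pixel sketch (not PixInsight's actual implementation; the stack values are made up):

        import numpy as np

        # Value of one pixel across a stack of subs; the last one is a hot pixel.
        stack = np.array([300.0, 301.0, 310.0, 281.0, 295.0, 2500.0])
        sigma_low, sigma_high = 4.0, 2.0   # analogous to PI's sigma low/high

        mean, sigma = stack.mean(), stack.std()
        keep = (stack > mean - sigma_low * sigma) & (stack < mean + sigma_high * sigma)
        print(stack[keep].mean())  # hot pixel rejected, clean average remains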
  4. It looks like you are using PixInsight. It is worth playing with the Pixel Rejection options as these can be quite effective at removing this sort of pixel - but this is a bit OT.

     The increase shouldn't be a surprise because of how errors propagate. For example, on a single pixel you take 5 x 30s darks and the values you get are: 300; 301; 310; 281; 295. Your average value is 297.4 and the standard deviation is 10.644, hence commonly written as 297.4 +/- 10.644. This means there is a ~68% chance that the 'true' result is between 286.756 and 308.044; a 95.4% chance that it lies between 276.112 and 318.688; and a 99.7% chance that it lies between 265.468 and 329.332. The last is commonly referenced as the 3 sigma value and is usually the minimum that would be considered a 'result' (which is also why figures splashed across news/papers/adverts are generally nonsense, because they don't provide the error - but that's another topic). So your dark signal is 297.4 and your random noise is 10.644.

     Back on topic. Let's suppose you now have your uncalibrated image and the same pixel on that has a value of 781 +/- 15.344 (i.e. 765.656 to 796.344). Now a dark is subtracted, so the result is the uncalibrated image minus the master dark. How is this undertaken with the errors? For simple subtraction you take the two largest extremes, so subtracting the most extreme values gives a range of 457.612 to 509.588. The average value is 483.6 (i.e. 781 - 297.4), so the error is 483.6 - 457.612 = 25.988 and your new calibrated result is 483.6 +/- 25.988. Notice the error is also (10.644 + 15.344) - a worst-case treatment (strictly, independent random errors add in quadrature, but the point stands). As such, although you've subtracted the dark you've added and compounded the errors. From a simple error calculation it should therefore not be a surprise to see your error increase (and you still have to add in your flat, which is a divide, and that makes the errors compound in different ways - see here for example: https://thefactfactor.com/facts/pure_science/physics/propagation-of-errors/9502/ )
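     A quick sketch of those numbers (reproducing the worst-case linear error sum used above, alongside the quadrature sum for comparison):

        import numpy as np

        darks = np.array([300.0, 301.0, 310.0, 281.0, 295.0])
        dark_mean = darks.mean()        # 297.4
        dark_err = darks.std(ddof=1)    # 10.644 (sample standard deviation)

        light_mean, light_err = 781.0, 15.344   # uncalibrated pixel from the example

        calibrated = light_mean - dark_mean           # 483.6
        worst_case = light_err + dark_err             # 25.988 (linear, extreme values)
        quadrature = np.hypot(light_err, dark_err)    # ~18.67 (independent errors)
        print(calibrated, worst_case, quadrature)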
  5. You've still got at least some hot pixels in the image, so it can't be fully calibrated? These pixels will skew your standard deviation as they don't represent random (Gaussian) noise. It's also unclear whether you've used the same size box, so the figures might not be directly comparable (especially if you are picking up more hot pixels). You may also be picking up actual data where before it was more dominated by noise (so you might be sampling signal, not noise - it is difficult to tell because the image is too small). Was this just a bias or a dark frame?
  6. There are other options you can consider as well that fall within this budget range. You could include the StellaMira 85mm or William Optics GT81, both with an associated flattener/reducer. The benefit of a triplet is colour correction (which would be helpful with a DSLR to avoid some bloat). My concern with a TS telescope is that I'd expect returns to become more difficult as we pass the end of the year, so that may be worth factoring in (returning things to the US is truly a work of the devil with all the paperwork you need), and I'd expect the same to apply to the EU when the transition period ends (both ways). It can be quite difficult to compare two setups identically. It's rare that people have duplicate scopes just to check certain combinations, and even if astrobin has some examples, the images will be dependent on seeing, image processing skills etc. None are likely to be bad (though Melon'y Lemons do occasionally get through), so suppliers that provide pre-checks can be a benefit. You will probably see the biggest change when you move to a CCD/CMOS with guiding, because of the reduction in noise compared to a DSLR.
  7. Sky flats should be fine for the telescope. Gradients only tend to become an issue on widefield setups, but I'm guessing the focal length we are looking at here is 1600mm+ and hence the field of view is small. Just make sure you point the telescope in the right direction (i.e. about 180 degrees from the sun - west at dawn, east at dusk - and at about 45-60 degrees altitude).

     The real trick is that exposure times need to be constantly adjusted. If you have fast downloads then you may be able to run a sequence of 10 or so and then change exposure; some programs will do this automatically. You want to aim for an exposure of about half the full depth (so a count of about 25000-35000 for 16-bit cameras; adjust accordingly for 12/14-bit CMOS etc). It doesn't matter that you are altering exposure length as long as the counts stay around these figures (though very short and very long exposures raise the risk of shutter effects or stars becoming visible, so should be avoided).

     There is also the risk of point sources (stars) appearing, and there are two ways to avoid this (noting that you should always avoid bright stars). Firstly, you can turn off tracking; stars will then trail and hence the area of sky being observed changes, so when you combine multiple flats these blurred areas are averaged away. The alternative is to take a sequence of say 10 images, then slew the telescope slightly and take another sequence (easier with faster downloads).

     Because conditions change quickly it can take time to build up an appropriate sequence of flats, especially when you factor in multiple filters as well. A lot of professional observatories cycle their flats, i.e. every day they will take dawn and dusk flats for their filters and then, for images, use those generated x weeks before. This way they have a continually relevant master flat. They have the same issue in that you can't have a white panel once you get above a certain size. Finally, don't use flats when there are any clouds - this will really mess up the flats as you no longer have consistent, even illumination.
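     A sketch of that exposure-adjustment loop (the camera object and its expose() call are hypothetical; the target count matches the 16-bit figures above):

        import numpy as np

        TARGET_ADU = 30000   # roughly half depth for a 16-bit camera

        def take_flat_sequence(camera, exposure_s, n=10):
            """Take n sky flats, rescaling exposure to hold the median near target."""
            flats = []
            for _ in range(n):
                frame = camera.expose(exposure_s)   # hypothetical camera API
                flats.append(frame)
                # Flat signal is roughly linear with exposure, and the sky
                # brightens/darkens quickly at dawn/dusk, so rescale each frame.
                exposure_s *= TARGET_ADU / np.median(frame)
            return flats, exposure_s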
  8. Dawes limit only really applies to point sources (so star clusters, doubles etc). It means less when you consider extended objects (planets, nebulae, galaxies etc); some planetary imagers (e.g. Damian Peach) image way past the Dawes limit of their telescopes. So Dawes limit applies in specific circumstances. However, it is generally correct that the finer the image scale, the less signal you are getting for the same noise per pixel, and there can be diminishing returns. As for the original post, I would suggest an ASI183 for sub-1000 as well - this has the benefit that you can use a widefield telescope and still get decent resolution. For 1k - 2k, probably something like the Atik 383L - long in the tooth but a good all-rounder camera.
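     For reference, the standard Dawes-limit and plate-scale arithmetic (the aperture, pixel and focal-length figures below are just examples):

        def dawes_limit_arcsec(aperture_mm):
            # Empirical Dawes limit for point sources: 116 / D, with D in mm.
            return 116.0 / aperture_mm

        def plate_scale_arcsec_per_px(pixel_um, focal_length_mm):
            # Image scale: 206.265 * pixel size (um) / focal length (mm).
            return 206.265 * pixel_um / focal_length_mm

        print(dawes_limit_arcsec(200))               # 0.58" for a 200 mm aperture
        print(plate_scale_arcsec_per_px(2.4, 960))   # ~0.52"/px (ASI183-sized pixels)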
  9. In the UK? When you are trying ultra-narrowband, have set 30-minute sub-exposures and then the clouds roll in, maybe? I think I've been misunderstood. I was hypothesising why there would be a difference between a more sensitive Sony CCD and the 'less' sensitive Kodak (i.e. both 16-bit) - why, even where similar images were taken, there was more noise in the Sony version yet it looked less washed out. Agreed that a 12-bit camera will need a lot more frames (to offset the quantisation noise), but I wasn't comparing these two. I was conjecturing that the lower dynamic range of the Kodak was the cause. It should be a relatively simple test, though, to take the ideal exposures for each camera (in reality using a fast system so that camera noise is overwhelmed earlier) and then combine to see whether the noise in the Sony is still as high.
  10. Yes, but there is a balance in this: the more you stack, the more you mitigate the effect - it doesn't disappear when you just stack two images, for example. In the test examples there were relatively few frames in each stack, so there is a possibility that, with the extra dynamic range the Sony CCD has, there simply weren't enough images in the stack to offset the additional bands. This then becomes one of the benefits of a lower noise camera: you can more quickly overcome the camera noise and hence sub-exposures can be shorter, increasing the number of images in the stack and mitigating the potential issue associated with more 'bands' from the greater dynamic range.
  11. All cameras can get great images. The benefit of the lower noise cameras is that the optimum sub-lengths can be shorter, because the camera noise becomes overwhelmed by other sources of noise sooner. Hence the benefit of the CMOS and Sony CCDs; it's especially true for ultra-narrowband imaging. As such the burden on the mount is lessened. This presentation gives the best information that I've found:-
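     The usual reasoning, sketched below, is to expose until sky shot noise swamps the read noise (the swamp factor and the read-noise/sky figures are illustrative assumptions, not from the presentation):

        def min_sub_exposure_s(read_noise_e, sky_e_per_s, swamp_factor=10):
            # Exposure at which the sky shot-noise variance reaches
            # swamp_factor times the read-noise variance:
            # sky_e_per_s * t >= swamp_factor * read_noise^2
            return swamp_factor * read_noise_e**2 / sky_e_per_s

        print(min_sub_exposure_s(9.0, 0.5))   # ~1620 s for an older 9 e- CCD
        print(min_sub_exposure_s(1.6, 0.5))   # ~51 s for a low-noise CMOS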
  12. I've always wondered whether this comes from differences in the dynamic ranges of the cameras that fed into the final images. When I look at the two comparisons, the Kodak chip seems more 'washed out' compared to the Sony chip. By this I mean that the contrast between the areas with less signal and more signal is more pronounced in the Sony CCD, but noisier. At a very broad level, the lower dynamic range has fewer 'bands' in the 16-bit system to drop the signal into. As such each band has more information, but because there are fewer 'bands' there can be less contrast. In comparison, a higher dynamic range camera has more bands but each band effectively holds less signal - hence greater contrast but higher noise. In principle, the more exposures you take, the more this issue is removed. Just some thoughts as to why the Sony appears noisier even though the figures suggest otherwise.
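     A rough sketch of the 'bands' idea: for a fixed 16-bit output, the deeper the well, the more electrons each ADU step has to represent (the full-well figures below are made up for illustration):

        def electrons_per_adu(full_well_e, bit_depth=16):
            # Electrons spanned by one output 'band' (ADU step).
            return full_well_e / 2**bit_depth

        print(electrons_per_adu(100000))   # ~1.53 e-/ADU (deep-well Kodak-style chip)
        print(electrons_per_adu(25000))    # ~0.38 e-/ADU (smaller-well Sony-style chip)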
  13. Unfortunately I don't think the astronomy community works to the Xmas schedule in the same way! Realistically they will be tied to what the CMOS and CCD manufacturers are working to, so it depends on what they are bringing to the market. The astronomy market is so small compared to the camera market that they won't be making any decisions based on the astro market.
  14. I think you are going to have to make some form of sacrifice here. There is no suggested replacement for the ASI1600MM that I've seen anyone mention (plus you should also consider the artefacts you can get around bright stars due to the specifics of this sensor). As for narrowband imaging with a colour camera, it is of course possible, but you lose some sensitivity and you will still get some contamination in pixels. For example, the green pixels will still detect about 10% of the incoming H-alpha signal, so there will still be some cross-talk - hence even with a multiband filter the green won't exactly correlate to OIII. The red will also capture the SII regions, so you are going to lose some control. The only option that really covers all your requirements is the ASI6200 (or equivalent), but I assume this is out of the price range as it wasn't mentioned. Otherwise you will have to look at going smaller, going colour, or an older design and continue managing the noise.
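     To illustrate the cross-talk point, a toy sketch (the ~10% leakage figure is from the post above; the signal levels are made up):

        HA_LEAK_INTO_GREEN = 0.10   # fraction of H-alpha reaching green pixels

        def green_channel(oiii_signal, ha_signal):
            # The 'OIII' (green) channel is contaminated by leaked H-alpha.
            return oiii_signal + HA_LEAK_INTO_GREEN * ha_signal

        # A strong H-alpha region with little OIII still shows up as 'OIII':
        print(green_channel(oiii_signal=50.0, ha_signal=1000.0))   # 150.0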
  15. A beautiful image. The one correction I'd point out is the above. Planetary nebulae and white dwarfs aren't formed by a star collapsing in on itself - that route results in neutron stars, pulsars and black holes. A white dwarf is in effect the exposed core of the original star.

     To very briefly summarise the process: any star with a mass of less than about 10 Suns will follow this route. The star will continue to burn hydrogen in the core until it runs out of fuel. The helium ash core then grows until the pressure at its edge is no longer sufficient to burn the hydrogen, and the layers above contract slightly onto the core. This raises the pressure/temperature again at the core boundary, instigating another period of hydrogen burning in a 'shell'. As the burning is in the shell rather than the core, there is less material in between to 'shield' the outer layers from the pressure generated at the shell, and hence the star bloats into a red giant.

     The next stages then depend on the mass of the star. For stars of roughly a solar mass and above, the shell eventually drops enough helium ash into the core that the pressure and temperature rise high enough to start core burning of helium to carbon. This shuts down shell burning and hence the outer layers shrink again. Eventually this shuts down too and the cycle repeats, except now you can get shells of hydrogen and helium burning at different stages. These ever-expanding shells are less and less shielded from the outer layers (and helium burning also produces higher temperatures/pressures). These higher pressures exert more force on the outer layers and ultimately blow them off. This becomes the planetary nebula, and as there is no longer any material for the shells to burn, the burning shuts down and you are left with a white dwarf - the core of the now dead star.

     There's also thinking that planetary nebulae only form in binary/multiple systems. The ejection of the outer layers of a star is thought to be a relatively slow process, which raises questions as to whether enough material can be ejected quickly enough to form the planetary nebula. In a binary system the shell can be ejected a lot quicker as the expanding shell interacts with the companion through conservation laws: the companion spirals inwards and this 'energy' is transferred into the shell, ejecting it.