Stargazers Lounge
sharkmelley

Members
  • Content Count

    1,161
  • Joined

  • Last visited

Community Reputation

1,132 Excellent

3 Followers

About sharkmelley

  • Rank
    Sub Dwarf

Profile Information

  • Gender
    Male
  • Location
    Tenterden, Kent
  1. Comet C/2020 F3 (NEOWISE), taken from home in Kent at around 3:30am. RedCat51 on Canon 600D. 10 exposures of 2sec at ISO 200 stacked, scaled by 2/3 and then cropped. Mark
  2. Use the camera manufacturer's software or, failing that, RawTherapee. Mark
  3. Your best bet is to take some additional exposures with the frame shifted. The reflection will then move position relative to the star field, allowing you to replace the existing artefact with clean data. Mark
  4. I use CFA Drizzle (a.k.a. Bayer Drizzle) as an important part of my DSLR workflow. It means Bayer interpolation no longer takes place, resulting in noise that is finer-grained and more pleasant-looking. Such noise is easier to remove from the image and gives a better-looking final result. Mark
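A minimal sketch of the idea, assuming an RGGB mosaic and whole-pixel dither offsets (real CFA drizzle also handles sub-pixel shifts and drop sizes); the function and array names here are illustrative, not taken from any particular stacking package:

```python
import numpy as np

# Each raw pixel is accumulated only into its own colour plane at its
# dithered position -- no Bayer interpolation ever happens. Dithering
# between frames fills in the gaps the mosaic leaves in each plane.
CFA = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 2}  # RGGB -> R, G, B plane

def accumulate_frame(raw, dy, dx, acc, cnt):
    """Add one dithered raw frame (whole-pixel offset dy, dx) to the stack."""
    h, w = raw.shape
    for y in range(h):
        for x in range(w):
            c = CFA[(y % 2, x % 2)]
            yy, xx = y + dy, x + dx
            if 0 <= yy < acc.shape[1] and 0 <= xx < acc.shape[2]:
                acc[c, yy, xx] += raw[y, x]
                cnt[c, yy, xx] += 1

# After accumulating all frames, the stacked RGB image is acc / cnt
# wherever cnt > 0.
```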
  5. I would not expect that to happen. After converting the 32-bit image to a 16-bit image, check the noise level (i.e. the standard deviation) in a small area of background in the 16-bit image. You should find the noise adequately dithers the quantisation. If so, the reduction to 16-bit is not the cause of your posterization issue: no amount of stretching will introduce posterization into an image whose quantisation is adequately dithered by noise. Mark
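The check described above can be sketched numerically; the noise level (3 ADU) and patch size are assumed purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# A synthetic 32-bit background patch whose noise (sigma = 3 ADU)
# comfortably exceeds the 16-bit quantisation step of 1 ADU.
bg32 = 1000.0 + rng.normal(0.0, 3.0, size=(200, 200))

bg16 = np.rint(bg32)   # the reduction to 16-bit integer levels

# The measured noise survives the conversion almost unchanged, so the
# quantisation is adequately dithered and stretching cannot posterize:
print(bg32.std(), bg16.std())
```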
  6. That's a clever solution and a good write-up! Mark
  7. Your "per pixel" metric is misleading you. A better metric is the total number of photons captured by the whole sensor. My advice is to decide whether you want to go OSC or mono, then choose a large sensor; otherwise you will lose the field of view of your Canon. Mark
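A toy comparison with assumed numbers shows why the per-pixel figure misleads:

```python
# The same telescope illuminates two sensors of identical area, one
# with 4 um pixels and one with 2 um pixels. All numbers are assumed.
flux = 100.0                   # photons per um^2 per exposure
area_um2 = 1000 * 1000         # sensor area in um^2, same for both

per_pixel_big = flux * 4**2    # 1600 photons per 4 um pixel
per_pixel_small = flux * 2**2  # 400 photons per 2 um pixel: looks 4x worse...

total_big = flux * area_um2
total_small = flux * area_um2  # ...but the total light captured is identical
print(per_pixel_big / per_pixel_small, total_big == total_small)
```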
  8. I just stumbled on a document that defines the gamut of a sensor (see section 2.1): https://corp.dxomark.com/wp-content/uploads/2017/11/EI-2008-Color-Sensitivity-6817-28.pdf So, contrary to what I thought, the concept does exist! Mark
  9. Can you clarify what you mean here? I agree that with set-point cooling, darks don't need to be scaled (if you match exposure times), but that doesn't mean dark scaling can't be done for CMOS. Mark
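For reference, dark scaling itself is straightforward on a linear sensor; this is a generic sketch with assumed frame names, not any particular package's implementation:

```python
import numpy as np

# Dark current grows linearly with exposure time at a fixed sensor
# temperature, so after removing the bias a master dark can be rescaled
# to a different exposure length.
def scale_dark(master_dark, master_bias, t_dark, t_light):
    """Rescale a master dark of exposure t_dark to exposure t_light."""
    return master_bias + (master_dark - master_bias) * (t_light / t_dark)
```

With set-point cooling the temperature (and hence the dark current rate) stays fixed, which is what makes the simple linear rescale valid.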
  10. I know exactly what you are trying to do here. Funnily enough, earlier this week I had a disagreement with someone on another astro forum who called the Bayer filters "sloppy" because their transmission bands overlap! It's obvious to most people (though not to that contributor) that the sharp cut-off RGB filters typically used for astro-imaging are inferior for colour reproduction. Trying to image a rainbow is a great example of this: sharp cut-off RGB filters cannot reproduce the continuous change of colour within the rainbow.

      But this is not a problem of gamut. Gamut applies to display devices. For instance, an LED display can reproduce all the colours within the colour triangle formed by its red, green and blue LEDs; this is its gamut. A camera with RGB filters, however, can be considered full gamut because it is able to record all those colours, i.e. there is no colour it is unable to record unless there are gaps between the filter transmission bands. The problem it has is the inability to distinguish between a wide range of colours, i.e. many different colours give exactly the same RGB pixel output values from the sensor.

      The concept you need is "metameric failure": the inability of the camera to distinguish between colours that the human eye sees as different. Those who test consumer cameras report a "sensitivity metamerism index" (SMI) for the camera, which is the standard way to measure its colour accuracy. Mark
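The camera-side failure can be illustrated numerically, assuming idealised 100 nm-wide sharp cut-off passbands (the filter edges and spectra are invented for the demonstration):

```python
import numpy as np

# Idealised sharp cut-off filters: B 400-500 nm, G 500-600 nm, R 600-700 nm.
wl = np.arange(400, 700)  # wavelength grid, nm

def sharp_rgb(spectrum):
    """Camera response through the idealised sharp cut-off RGB filters."""
    bands = [(600, 700), (500, 600), (400, 500)]   # R, G, B
    return np.array([spectrum[(wl >= lo) & (wl < hi)].sum() for lo, hi in bands])

# Two physically different spectra: monochromatic lines at 610 nm and 660 nm.
line_610 = (wl == 610).astype(float)
line_660 = (wl == 660).astype(float)

# Both fall entirely inside the red passband, so the camera records the
# same RGB triple for colours the eye sees as orange vs deep red --
# a metameric failure:
print(sharp_rgb(line_610), sharp_rgb(line_660))
```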
  11. It's a good way of telling if someone has added diffraction spikes to their colour image during post-processing. Such diffraction spikes are typically uniform in colour and won't have the colour banding effect. Mark
  12. If the fractionally binned image is split into 4 sub-images then you're right in saying that there is no correlation between them. I'm seeing the same standard deviation (on average) in each of the 4 sub-images as I do in the fractionally binned image. Mark
  13. I'm very late to this thread, but it looked interesting. Did you find your problem? In any case, I simulated this in a spreadsheet and found that 1.5x binning reduces the noise by a factor of 1.8, which is what you expected. I generated 10,000 groups of 9 random values, and each group of 9 was binned down to 4 values using your algorithm, giving 10,000 groups of 4 values. The standard deviation of the resulting 40,000 values was 1.8x lower than that of the original 90,000 values. Mark
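The spreadsheet experiment can be reproduced like this, assuming "1.5x binning" means averaging each 3x3 block down to 2x2 with fractional edge weights, i.e. each output pixel covers a 1.5 x 1.5 pixel footprint of the input:

```python
import numpy as np

rng = np.random.default_rng(42)

# 1-D fractional weights: output cell 0 takes all of pixel 0 plus half
# of pixel 1; output cell 1 takes the other half of pixel 1 plus all of
# pixel 2. A 2-D output pixel's weights are the outer product of these.
W = [np.array([1.0, 0.5]), np.array([0.5, 1.0])]
IDX = [slice(0, 2), slice(1, 3)]

def bin_1p5x(block):
    """Fractionally bin one 3x3 block down to 2x2."""
    out = np.empty((2, 2))
    for i in range(2):
        for j in range(2):
            wt = np.outer(W[i], W[j])                  # wt.sum() == 2.25
            out[i, j] = (block[IDX[i], IDX[j]] * wt).sum() / wt.sum()
    return out

blocks = rng.standard_normal((10000, 3, 3))            # 90,000 random values
binned = np.array([bin_1p5x(b) for b in blocks])       # 40,000 binned values

# Analytically the noise reduction is 2.25 / sqrt(1.5625) = 1.8:
print(blocks.std() / binned.std())
```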
  14. That's a very informative video and an interesting solution to the problem of using filters with the Nikon Z cameras. The approach probably works well for lenses but I think 1.25" filters will cause vignetting issues when attaching the camera to a scope. Mark