Everything posted by ollypenrice

  1. However you look at it, the debayering process does manipulate the OSC data by estimating what the red and blue channels would have found under the green-filtered pixels, and ditto for all three colours. This is very clever, and the best algorithms all but eliminate the loss in resolution created by the block of four RGGB. I never argue that this significantly impairs OSC resolution, because I don't think it does; it was a passing thought following on from tomato's point. However, the signal strength of OSC remains at 50% for green and 25% for red and blue, in my opinion. (Roughly, at least, since OSC filters have more colour overlap than astro RGB filters.)

     You are certainly right that one way to tackle this is to blur the colour to reduce colour noise and allow higher saturation. I do this myself, as do many people, when blending luminance with RGB: I add some of the luminance (that's to say all of it, but at partial opacity), blur the luminance, flatten onto the RGB and repeat. In the final iteration I don't blur the luminance and so restore all or most of the resolution. This is pretty much what you're trying, and it works up to a point but, like every step in processing, it shouldn't be overdone. I think you're pushing your luck in using DSLR colour under mono luminance. It's not the wrong step, it's just a step too far - in my view.

     Olly
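The iterative blend described above (blurred luminance at partial opacity, flattened and repeated, with an unblurred final pass) can be sketched in numpy. The opacity, number of passes and blur radii are all illustrative, and a box blur stands in for whatever blur a real image editor would apply; in practice this is done with layers in Photoshop or similar.

```python
import numpy as np

def box_blur(img, radius):
    """Simple separable box blur on a 2D array (a stand-in for the
    Gaussian blur a real processing package would use)."""
    if radius == 0:
        return img.astype(np.float64)
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    out = img.astype(np.float64)
    for axis in (0, 1):
        out = np.apply_along_axis(
            lambda row: np.convolve(np.pad(row, radius, mode='edge'),
                                    kernel, mode='valid'),
            axis, out)
    return out

def blend_lum_into_rgb(rgb, lum, opacity=0.3, blur_radii=(4, 2, 0)):
    """Blend a luminance layer into an RGB image in several passes:
    blurred passes carry luminance signal with suppressed noise, and
    a final unblurred pass (radius 0) restores resolution."""
    out = rgb.astype(np.float64)
    for r in blur_radii:
        layer = box_blur(lum, r)
        # "flatten" the partially opaque luminance layer onto the RGB
        out = (1.0 - opacity) * out + opacity * layer[..., np.newaxis]
    return np.clip(out, 0.0, 1.0)
```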
  2. Yes, that's my honest opinion. At one time I had the mono and OSC versions of the same camera and did sometimes combine their data into single images, but not by using one for lum and one for colour. That will not work well, in my view, in equivalent cameras. Why not? Because an hour of lum and an hour of OSC are not remotely compatible. What you really have after an hour with an OSC is 15 minutes of red, 15 minutes of blue and half an hour of green. An hour of lum will completely overwhelm that. Which begs the question, 'When will you capture enough colour to fill the lum?'

     I attacked this in two different ways: I shot NB in the mono to add to the OSC, and/or I shot LRGB in one camera and OSC in the other. But L in one and OSC in the other over the same time would not work.

     What if the cameras are not equivalent? For Brendan the problem will be even worse than it was for me, because his lum camera will be more sensitive and have better SNR than his colour. For tomato (I'll use that name) I don't think we know, because the OSC camera may have a higher performance than the mono. I still bet on the OSC being unable to support the luminance over the same exposure, though.

     Olly

     Edit. Another thought: how will you extract the colour from the OSC? It will have been through the algorithm which corrects the loss of resolution from the 4-pixel 'blocks' of the Bayer matrix. This, in essence, is a process which creates a synthetic luminance layer. Merely splitting the OSC channels will not remove this 'virtual luminance.' It will be present in each colour channel just as the luminance is present in each colour channel in an LRGB image.
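The per-channel bookkeeping above follows from the RGGB Bayer pattern (two green pixels out of every four). A minimal sketch of that accounting, ignoring the colour overlap of the filters:

```python
def osc_channel_minutes(total_minutes):
    """Split total one-shot-colour (OSC) integration time into rough
    effective per-channel time, based on the RGGB Bayer matrix:
    1 red, 2 green and 1 blue pixel in each block of four."""
    return {
        "R": total_minutes * 1 / 4,
        "G": total_minutes * 2 / 4,
        "B": total_minutes * 1 / 4,
    }

# One hour of OSC is roughly 15 min red, 30 min green, 15 min blue.
```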
  3. ^^ By which I meant, this is SGL, not Cloudy Nights. Olly
  4. Higher than that for me. (I spent some time trying to persuade Alacant of this, a while back!) 20-20-20 is my personal minimum and I prefer 23-23-23. One of my cameras simply refuses to get that high, though. It also depends on what's in there beside background, since the eye responds to contrast. And, again, if there is obscuring dust in an image a higher dust-free background will help accentuate it. I've been staring at Alacant's background in a state of bafflement, since it didn't look too red or too high to me, but it's now been edited? Looks good now, to me, in any event. This is SGL. Olly
  5. My starting point as well. Olly
  6. In order to comment on depth of field don't you need to know pixel size? A larger pixel can move forwards and backwards in the light cone by more than a small one without going out of focus. Olly
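The point above is pure geometry: a defocus shift of the sensor along the light cone produces a blur circle whose diameter is the shift divided by the F ratio, so the shift that keeps the blur within one pixel scales directly with pixel size. A sketch of that relation (figures illustrative; this ignores diffraction and seeing):

```python
def focus_tolerance_um(pixel_um, f_ratio):
    """Axial defocus (one side of best focus, in microns) that keeps
    the geometric blur circle within one pixel.  Blur diameter at
    defocus d is d / F, so the tolerance is pixel_size * F."""
    return pixel_um * f_ratio

# Larger pixels tolerate proportionally more focuser travel at the
# same F ratio:
small = focus_tolerance_um(3.8, 5)  # ~19 um for a 3.8 um pixel at F5
large = focus_tolerance_um(9.0, 5)  # ~45 um for a 9 um pixel at F5
```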
  7. I agree, the distortions are still present even though the focus is much improved. I can see no pressing reason to use an OAG with a refractor, especially a small one. I use guidescopes at 0.9"PP on a large refractor (140mm). It depends on how fussy you are about getting the corners right. This is the first time I've heard of a malfunctioning Bahtinov mask! Maybe it is incorrectly configured for your FL or incorrect in some other way. However, I use a B mask made for a Tak Baby Q on a Tak 106 with perfectly good results. And, yes, a nice image! Olly
  8. He sees more sky, certainly, but has a quarter of the light. Noooo!!! Shock horror. 😁

     Crop factor is the most misleading term ever introduced into photography. It's not entirely meaningless, because it allows subject framing (AKA field of view) to be compared across two focal lengths and chip sizes. However, it implies a relationship between resolution and chip size when no such relationship exists. It also, I've discovered since exploring daytime photography, adds confusion regarding depth of field. I've read numerous claims to the effect that full frame sensors give a greater depth of field. They may do, but if so it is not because they are larger sensors but because they have larger pixels.

     Jemima's Aperture Increaser has no effect whatever on her image scale or her system resolution in arcseconds per pixel. These are unchanged, though she does gain in optical resolution, assuming her Increaser is diffraction limited. Noah loses system resolution in arcseconds per pixel while his optical resolution, assuming his reducer is diffraction limited, is unchanged.

     True. Noah only wins if he needs the extra FOV. On all other counts Jemima wins, I would say.

     Olly
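The Noah-versus-Jemima comparison comes down to simple arithmetic, sketched here with round illustrative numbers (a 203 mm aperture, 2030 mm focal length and 5 µm pixels are assumptions, not anyone's actual kit):

```python
def f_ratio(aperture_mm, focal_length_mm):
    """F ratio = focal length / aperture."""
    return focal_length_mm / aperture_mm

def image_scale(pixel_um, focal_length_mm):
    """Image scale in arcsec/pixel (standard 206.265 * pixel / FL formula)."""
    return 206.265 * pixel_um / focal_length_mm

# Both start from the same 8-inch F10 scope.
ap, fl, px = 203.0, 2030.0, 5.0

# Noah fits a 0.5x focal reducer: same aperture, half the focal length.
noah_f = f_ratio(ap, fl * 0.5)          # F5
noah_scale = image_scale(px, fl * 0.5)  # arcsec/px doubles: coarser sampling

# Jemima fits her Aperture Increaser: double the aperture, same focal length.
jem_f = f_ratio(ap * 2.0, fl)           # also F5
jem_scale = image_scale(px, fl)         # image scale unchanged
jem_photon_gain = (ap * 2.0) ** 2 / ap ** 2  # 4x the object photons
```

Both end up at F5, but Jemima collects four times the object photons at an unchanged image scale, while Noah merely spreads the photons he already had over fewer pixels.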
  9. Whoa! Jemima has an Aperture Increaser, not an Aperture And Focal Length Increaser! So there is no change in Jemima's FOV and she puts four times as many object photons on the chip. Noah has no new object photons but puts the ones he has already onto fewer pixels. I have an aperture increaser in every camera lens in my bag. Olly
  10. OK, I'm a former teacher so 'how to teach things' is a bit of an obsession. How about this: Noah has an 8 inch F10 telescope and wants to speed up his image capture time. He buys a 0.5x focal reducer to bring his telescope to 8 inch F5. His sister Jemima has the same scope but she's opinionated and ignores the 0.5x focal reducer option in favour of an invention of her own, an Aperture Increaser, which also takes her scope to F5. Who wins? 👹lly
  11. A good result but what struck me was your phrase 'reduce green and magenta.' These colours are on the same colour axis so I'm not sure what you're reducing... Olly
  12. Remember that a focal reducer is not an aperture increaser so it doesn't mean 'shorter image time' without also meaning 'smaller image.' This is not equivalent to opening up the aperture on a camera lens, which really does mean shorter image time. Olly
  13. I'm open to PMs on this matter. I don't say this because I fear OO UK will threaten me with their lawyers, as they did threaten one of my own customers. I say it because it does not make for the kind of atmosphere we all enjoy on SGL. Olly
  14. This was not my experience at Astrofest. You only get one opportunity in life to speak to me as I was spoken to on that occasion. Olly
  15. As I've said on other threads, I do think the CMOS cameras have made a profound difference in the OSC-Mono debate. In the CCD era I really do think it was a no-brainer (or almost) in favour of mono. But I now take CMOS OSC cameras pretty seriously with or without the dual or tri-band filters. If using these new filters I would be aiming to blend pure OSC and filtered OSC into one final image on emission nebulae. This is an old-ish thread so I wanted to update my position. Processing these data captured by Yves Van den Broek played no small part in my thinking. https://www.astrobin.com/g82xf7/B/?nc=user Olly
  16. Yes, I think that the dust is one of the great things to photograph because this stuff was discovered by photography. Barnard agonized for years over the question of whether his dark patches were windows into truly empty space or patches of opaque matter. His final conclusion was the right one. I'm also intrigued by it because I'm not very good at photographing it! Olly
  17. The only thing I ever asked for (perfectly politely in my view) was a manual for a new telescope. I didn't get one though. Olly
  18. There you go. Whenever I look at an Iris image I look for this as the benchmark of a really good one, along with a fully resolved progenitor star. Super. Olly
  19. Tremendous contrast and colour variety in the dust. This is an all-time great bit of sky! What we don't see here are the faint pinks which sometimes appear among the blues around the Iris. I haven't processed the IKI data this time but, on looking through some excellent processing jobs in the thread, I don't see them there either. (I thought AnneS had the best hint of them.) I wonder what the IKI camera is and whether this could be a CMOS versus CCD issue. Sara has them bright and clear in this one: https://www.swagastro.com/uploads/2/3/3/7/23377322/iris.jpg Olly
  20. I think a lot depends on how much of the glow is from very local, immediate sources. If it is, then the more distant spot will be a benefit. If it isn't, I don't think it will have a huge effect at low elevations. But there really is only one way to be sure... Olly
  21. Sorry, is this right? Your problem blobs have now gone but you're left with vignetting, slightly offset? (Or have I missed a step in the narrative?) It isn't unusual for vignetting to be slightly off axis. I've never used a setup on which it was perfectly concentric.

      The best way to examine the vignetting is to take a linear flat (i.e. unstretched) with the histogram peak between a third and half way to the right, and then to measure the ADU on this flat in the corners and the centre. This will tell you the drop-off in illumination from the lens. My Tak FSQ106N with a full frame chip has considerable drop-off (about 19000 ADU in the corners and 23500 in the centre) but it corrects perfectly with flats.

      I think it would be very optimistic to expect a lens of such short FL and fast F ratio to deliver anything like even illumination on a 35mm chip, especially bearing in mind that astro images will end up very highly stretched.

      Olly
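The corner-versus-centre check described above is easy to automate once the linear flat is loaded as a 2D array. A minimal sketch (the 50-pixel patch size and single master-flat input are illustrative assumptions):

```python
import numpy as np

def vignetting_dropoff(flat, patch=50):
    """Measure illumination drop-off on a LINEAR (unstretched) flat:
    mean ADU of a central patch vs the mean of four corner patches,
    returning (centre, corners, percent drop-off)."""
    h, w = flat.shape
    s = patch
    centre = flat[h//2 - s//2 : h//2 + s//2,
                  w//2 - s//2 : w//2 + s//2].mean()
    corners = np.mean([
        flat[:s, :s].mean(),  flat[:s, -s:].mean(),
        flat[-s:, :s].mean(), flat[-s:, -s:].mean(),
    ])
    return centre, corners, 100.0 * (1.0 - corners / centre)

# With the figures quoted above (about 23500 ADU centre, 19000 ADU
# corners) the drop-off comes out at roughly 19%.
```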
  22. Like the others, I'd keep the magnification right down for viewing from a boat. The alternative would be image stabilized binoculars but clearly not on a budget since they're expensive. Olly
  23. Always do a test shot to see if you're on target. Besides, the plate solution won't necessarily give you the best composition in terms of stars around the edge of the picture. This is quite important when it comes to the visual appeal of the final result. You can also end up with flares from out-of-shot stars. If this happens, drive the mount towards the out-of-shot star and the flare may vanish. Olly
  24. I think that's gorgeous. A wonderful blend of dusty signal, reflection and Ha. I've never attempted this in a field wide enough to include Rigel (what a cop-out!) and if I were you I'd be thinking of ways to pin down that almighty star in as natural a way as possible. I know it's a fundamental problem but the rest is incredible, especially the variety of colours in the dust. That's my favourite image of the year so far. Olly
  25. No, because the filter is largely opaque to blue, if I'm not mistaken. What it gives you is an approximation to the HOO palette.

      The first thing to ask is: how much Ha does your camera pass at the moment? This depends on how recent it is and/or on whether it's modded. If its Bayer matrix is blocking Ha, it won't see the Ha passed by the filter anyway.

      Then it's important to understand what filters do and don't do. They do not let you capture more Ha or OIII (or whatever) in a given time. What they do is isolate those wavelengths and block the rest of the light, allowing you to expose for longer without letting other light sources (either from the object itself or from polluting sources) swamp the signal from gas emission. This means that we use them for three reasons:

      - To block LP, including lunar LP.
      - To block non-emission signal from emission nebulae so as to obtain higher contrasts which reveal structure in the gases.
      - To exclude LP and non-emission object signal so as to allow us to expose for long enough to capture very faint emission signal, which we then stretch above the level of the background sky. (In reality it is not above the level of the background sky even at a dark site but, in processing filtered data, we can stretch it to make it brighter.)

      The second two of these reasons make the use of the dual-band filter valid even from a dark site but, in order to replicate an HaLRGB or HaOIIILRGB image, you would need to shoot both filtered and unfiltered OSC and blend the two in the way that we do with conventional filters.

      Olly