Everything posted by ollypenrice

  1. You could try running Starnet on the finished LRGB and then pasting the RGB-only on top of the de-starred image, choosing blend mode Lighten. You ought to find that only the RGB stars will be visible in the output image because only the stars will be brighter than the base image now. If anything else is being applied from the RGB top layer you could manipulate it in Curves so that only the stars appeared. If you de-star the luminance and apply it in blend mode Luminosity it will make a mess of your stars by illuminating them from the starless Lum. On the other hand you could try applying your starless Lum in blend mode Lighten over the RGB. I've never tried that. Yet another way might be to paste the LRGB over the RGB and use Noel's actions, Select Brighter Stars, then expand/feather the selection, being careful to keep the selection within the circumference of the stars. You could then erase, wholly or partially, the selected stellar cores of the luminance. And yet another!! 🤣 I think I'd go for this first: make a copy of your Lum and de-star it. Paste this on top of your starry Lum in blend mode Darken so that the stars disappear, then lower the opacity of that layer till the stars re-appear to some extent. They will be considerably subdued, particularly in their cores. With the right blend of starry and starless Lum you should find they won't blow out your RGB cores. Olly
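The Lighten/Darken blend arithmetic described above can be sketched in NumPy. This is a hedged illustration, not Photoshop's implementation; the tiny 2x2 arrays and the values in them are invented for the example.

```python
import numpy as np

# Two hypothetical grayscale layers, values in 0..1 (illustrative only).
base = np.array([[0.2, 0.8],
                 [0.5, 0.1]])
top  = np.array([[0.6, 0.3],
                 [0.5, 0.9]])

# Blend Mode Lighten: keep whichever pixel is brighter, so only features
# brighter than the base (e.g. the stars) show through from the top layer.
lighten = np.maximum(base, top)

# Blend Mode Darken: keep whichever pixel is darker, so stars vanish
# where the starless top layer is darker than the starry base.
darken = np.minimum(base, top)

# Lowering the top layer's opacity mixes the blend back toward the base,
# which is how the subdued, partially-restored stars arise.
opacity = 0.5
darken_half = base + opacity * (np.minimum(base, top) - base)
```

The opacity line is the key to the last method in the post: at 0% opacity you have the starry base, at 100% the starless Darken result, and anywhere in between gives stars with subdued cores.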
  2. There is another way to speed up capture... the dual rig. That's what we do here. Olly
  3. You can increase effective pixel size if you choose mono and binning. There's an awful lot to be said for that. Olly
  4. Quite so, but the impressive size of their optics is easier to see from down here than the equally impressive size of their pixels! 😁 Olly
  5. No, a smaller chip just reduces the field of view. It has nothing whatever to do with increasing the resolution (also known as 'getting closer' or 'capturing more detail.') What you need are smaller pixels so that more of them lie under the projected image of the object. When shown at full size (1 camera pixel given 1 screen pixel) this will give you a larger object image on your screen. I would begin by thinking about resolution in arcseconds per pixel. Anything less than 1"PP is probably going to be unrealistic since it will be lost to seeing and guiding problems. Your ASI has much smaller pixels than your Canon. As things stand you're at about 1.3"PP in the ASI and about 2.2"PP in the Canon. If you binned the ASI 2x2 it would be close enough in effective pixel size for either camera to work in the same scope. If you don't bin the ASI there is no one scope which can work with both cameras at longer focal length. Your first decision would be which camera to choose. To be honest you're not at a bad resolution already with the ASI. There will be plenty of nights when the seeing won't support more resolution than you have already. However, a little more focal length and aperture might be nice. Just don't overdo the focal length. I'd be looking for something that would take the ASI to about an arcsec per pixel. There's a calculator on FLO's website. When less experienced imagers discuss mounts they jump straight to payload. That does have to be respected but, most of the time, accuracy is a bigger issue. Your guide RMS in arcsecs needs to be no more than half your image scale in "PP. (Give PHD your guide cam pixel size and guidescope focal length and it will automatically give you your RMS in arcseconds.) Olly
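The arcseconds-per-pixel and guiding arithmetic in the post above can be sketched as follows. The camera and scope numbers are illustrative assumptions chosen to reproduce the ~1.3"/pixel figure mentioned, not exact specifications.

```python
# Image scale in arcseconds per pixel: 206.265 * pixel size (um) / focal length (mm).
def image_scale(pixel_um, focal_length_mm, binning=1):
    """Return arcseconds per (binned) pixel."""
    return 206.265 * pixel_um * binning / focal_length_mm

# Illustrative values: ~3.8 um pixels on a ~600 mm focal length scope.
asi_native = image_scale(3.8, 600)             # about 1.3 "/pixel
asi_binned = image_scale(3.8, 600, binning=2)  # about 2.6 "/pixel

# Rule of thumb from the post: guide RMS (in arcseconds) should be
# no more than half the image scale.
def guiding_ok(rms_arcsec, scale_arcsec_pp):
    return rms_arcsec <= scale_arcsec_pp / 2
```

So at 1.3"/pixel you would want a guide RMS of about 0.65" or better, while the 2x2-binned scale tolerates roughly twice that.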
  6. I only keep the masters. I have enough darned files as it is! Olly
  7. My observations follow each quoted point.

     "I love the narrow field my C11 gives - especially for small galaxies. But having it out imaging in the open is reserved for clear nights without wind, as the long FL is hard to keep under control."

     There is no reason to love a narrow field of view. The tight framing of a target does not bring any more detail to the image. The detail you can theoretically resolve is measured in arcseconds per pixel and is not affected by how big your chip is or how much empty sky you have round a galaxy. You can always crop the image. The detail you can really resolve is limited by atmospheric seeing and guiding. Getting any real detail below about 1 arcsecond per pixel is difficult.

     "The long time it takes to gather light is also something that has been on my mind."

     The size of your aperture is the only thing which affects your telescope's light grasp. Reducers at the back can obviously have no effect on this. Light goes in at the front!

     "I've looked into ways to reduce integration time and the focal length, but both give me a smaller imaging field of view."

     No, reducing the focal length increases the field of view. With a given aperture you can reduce integration time by using a more sensitive camera, binning the capture or using larger pixels.

     "The Hyperstar would bring it to f/2, making it a whopping 26 times faster than at f/10."

     Wrong. Don't be fooled by the Hyperstar website. Exactly the same amount of light is entering the telescope with or without the Hyperstar. The Hyperstar just puts object photons onto fewer pixels, which 'fill' faster. But do you want a tiny galaxy without details and surrounded by empty sky? In truth you cannot compare exposure times with and without the Hyperstar because there is no way in the world that you would use them on the same targets.

     "But it would also reduce the FOV to about the size I get with my Esprit 120."

     No, this means the Hyperstar will greatly increase the field of view. However, the Hyperstar will be faster on the same FOV than the Esprit, but the Esprit will give you a much better result for several reasons: it is optically and mechanically much better, and f/2 is too fast to be practical.

     "The reducer makes it f/7, cutting the integration time in half, and keeping a decent 'small' FOV (but keeping the wind/guiding issues in play)."

     Again, no, F ratio does not work like that with reducers.

     "What would be the better upgrade? Are objects still good quality when cropped out of the 'large' FOV (and more: are they better than in the Esprit?) Or is it worth spending more time to gather light and just take the reducer?"

     Use the reducer to widen the field of view.

     "Anyone care to weigh in on this? Both solutions would be at the same cost, as with the reducer I would still need an extra focuser to keep the guiding (OAG) under control."

     What has been running through my comments is a thing called The F Ratio Myth. It causes a lot of noise and disagreement. Basically the camera F ratio rule (one stop down is twice as fast) works because in a camera lens you increase the area of aperture when you open up the diaphragm. Adding a focal reducer doesn't do that. It doesn't make the objective bigger. What you should do to speed up capture is use bigger pixels. It's the amount of light per pixel which matters. You can do this with a mono camera by binning 2x2, 3x3 etc. With colour cameras you're stuck in Bin1 so you would need a change of camera. The Sony S7 is a good match for the C11. Your 500D in the C11 is working at 0.35 arcsecs per pixel. It is totally impossible to capture real detail at this resolution, so you would do far better to have pixels at least twice as big, or to use a camera which, binned 2x2 or 3x3, would give you effectively bigger pixels. This would mean switching to mono. Mono is faster for other reasons as well. Olly
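The 'F Ratio Myth' arithmetic above can be made concrete with a short sketch: with a fixed aperture, a focal reducer adds no light; it only spreads the same object light over fewer pixels. All numbers below are illustrative assumptions (a C11-class aperture and a made-up object size), not measurements.

```python
# With a fixed aperture, total light grasp is set by aperture area alone;
# a reducer or Hyperstar cannot change it. What changes is how many pixels
# the object's light is spread over.

def pixels_on_object(object_arcsec, pixel_um, focal_mm):
    """Linear pixel count across an object at a given focal length."""
    scale = 206.265 * pixel_um / focal_mm   # arcseconds per pixel
    return object_arcsec / scale

# Same camera (4.3 um pixels, illustrative), same 120" object, two focal lengths:
n_f10 = pixels_on_object(120, 4.3, 2800)   # native f/10 on a 280 mm aperture
n_f2  = pixels_on_object(120, 4.3, 560)    # Hyperstar-like f/2, same aperture

# Light per pixel scales as 1 / (linear pixel count)^2: at f/2 the object
# covers 5x fewer pixels linearly, so each pixel fills (10/2)^2 = 25x faster,
# even though total object light is identical.
per_pixel_gain = (n_f10 / n_f2) ** 2
```

This is why the "25x faster" claim is true per pixel but misleading per object: you have paid for the speed with a 5x smaller object image, which is exactly what binning or bigger pixels would also buy you.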
  8. Joe, your camera is CCD rather than CMOS so you do not need to shoot darks for flats. Instead you can use a master bias as a dark for your flats and save yourself a lot of bother. At the short exposure times of flats there will be no significant difference between a dark of that short exposure time and a bias, so just use a master bias as a flat dark. It may help to make a copy of your master bias and re-name it 'Flat dark' or something, because some stacking software doesn't like seeing the same file name in two places. Olly Edit: CMOS cameras do need flat darks taken at the same settings as the flats.
  9. There is no need to re-do flat darks every time you do new flats provided the settings are the same. Flat exposure times are pretty sensitive, though, because the exposures are very short and the cameras very sensitive. A slight change in the brightness of your panel might need a different setting. If it doesn't you can re-use your flat darks. I think gain settings will affect any kind of dark but I'm not a CMOS user. Olly
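The CCD flat-calibration workflow described in the two posts above reduces to simple frame arithmetic. This is a minimal sketch; the tiny 2x2 arrays and ADU values stand in for real frames and are invented for illustration.

```python
import numpy as np

# For a CCD, at flat-length exposures a master bias is a good stand-in
# for a flat dark (a CMOS camera would need a real flat dark instead).
master_bias = np.full((2, 2), 100.0)        # camera offset, ADU (illustrative)
raw_flat    = np.array([[1100.0, 1060.0],
                        [1080.0, 1100.0]])  # raw flat frame, includes bias

# Calibrate the flat using the master bias as its 'flat dark':
flat = raw_flat - master_bias

# Normalise to unit mean so the master flat can divide light frames
# without changing their overall brightness:
master_flat = flat / flat.mean()
```

A light frame would then be corrected as `(light - dark) / master_flat`; the point of the post is simply that for CCD flats the subtracted term can be the bias rather than a dedicated dark.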
  10. Given that a scope with a flat panel on it can't see the stars anyway, it really doesn't matter where it's pointing! Most people using a panel point at the zenith to avoid covering the floor with bits of broken panel... 😁 Olly
  11. Very tight and crisp. Nice. I might have a look at the star colour, though. Looking a tad blue-green on this monitor. Olly
  12. You do have an additional option if you use a layers-based program like Photoshop. - Make two stacks, Version 1 for low noise and V2 for low trails. Give both an initial and identical stretch in levels (Just take the grey point slider to the same value in both, enough to make residual trails visible in V1) - Paste V1 low noise onto V2 low trails. - Run a small, feathered eraser over the visible trails on the top layer. Flatten and save. This will mean you have the low noise version everywhere except where the trails used to be. I've used this basic idea several times to save damaged subs from the dustbin. Olly Edit: I missed Dave's post above saying the same thing!!! Sorry Dave. 🤣
  13. Thanks. I didn't know the underlying maths but guessed at the principle. Olly
  14. I can't give you an authoritative answer but I presume that the level of deviation from the norm before the outliers are identified as outliers is a decision made by the software authors. In AA the sigma routine is adjustable though I haven't found an explanation of what the 1.8 default setting actually means. (It's a long manual!) I do remember that Tom and I were both imaging the Witch Head, a region plagued by geostationaries, some years ago. I'd just moved from AA4 to AA5 and the trails were disappearing for me. Tom was struggling with them in AA4 and upgraded to solve the problem. Olly
  15. Yes, a good sigma routine will identify the pixels with trails as outliers and ignore them, giving them instead the pixel value derived from the average of the rest of the stack. The more subs you have, the better it will work. More than a dozen is best. Not all sigma routines are equal. The one in AstroArt from V5 onwards is excellent. AstroArt also has a 'remove line' feature. For severe trails you click on both ends and apply the filter. The trail will be diminished or removed. If you clean up your worst affected subs first, before stacking, you can often keep the lot. Olly
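The sigma-rejection idea from the last two posts can be sketched in a few lines of NumPy. This is an illustrative implementation, not AstroArt's actual algorithm; the 1.8-sigma threshold simply mirrors the default mentioned above, and the 13-sub stack with one 'trail' pixel is invented test data.

```python
import numpy as np

def sigma_clip_stack(stack, kappa=1.8):
    """stack: (n_subs, H, W) array of registered subs.
    Pixels further than kappa * sigma from the per-pixel mean are
    treated as outliers (e.g. satellite trails) and excluded; each
    output pixel is the mean of the surviving subs."""
    mean = stack.mean(axis=0)
    std = stack.std(axis=0)
    outlier = np.abs(stack - mean) > kappa * std
    clipped = np.ma.masked_array(stack, mask=outlier)
    # Fall back to the plain mean anywhere every sub was rejected.
    return clipped.mean(axis=0).filled(mean)

# A dozen clean subs of value ~1.0, plus one sub carrying a bright trail:
subs = np.ones((13, 1, 1))
subs[0, 0, 0] = 50.0        # satellite trail in the first sub
result = sigma_clip_stack(subs)
# The trail is rejected and the pixel gets the clean-sub average of 1.0.
```

This also shows why "more than a dozen subs" helps: with few subs a single trail drags the mean and sigma enough that the outlier can survive the cut.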
  16. It's my right-hand tool any number of times during a processing job. You never have to worry about cropping one channel of an image because you can just register-crop-pad the rest to it at any time. I wouldn't be without it. Olly
  17. Let's start from the beginning. 1) Open your RGB image in Registar. (Make sure it has nothing 'funny' about it. You can sometimes end up with an invisible extra channel created in Ps, so check it in Ps channels to ensure it really is just RGB.) 2) Open your Ha. Go to the Register Images icon and register Ha to RGB. This will throw up a screen-only image like the one on the right of your post. It's a screen rendition. Don't save it and don't worry about its colour. 3) Go to the Crop and Pad icon and select 'Just this image' and 'Match.' This will crop off the registered Ha image where there is no RGB underneath it and fill in the Ha with black where the Ha fails to cover the RGB. Save this. It will be something like Ha_reg_crop. Now you can open Ha_reg_crop in Ps, split the channels of your RGB, paste Ha_reg_crop on top of the red channel and choose blend mode Lighten. The dark corners won't be visible or applied in this blend mode. Your main concerns will be twofold. Ensure that the background sky is not being lightened by the Ha. Adjust the black point or the bottom of the Curve if it is. Then ensure that the Ha is sufficiently effective in bringing out the nebulae. If it isn't you can pin the bottom of the Curve and lift it higher up to get it to do more. Flatten and merge channels in RGB. Olly
  18. Don't do that. That is not what Registar is for. How you choose to combine Ha and (L)RGB is a subtle business about which not all imagers agree but don't ask Registar to make this decision for you. It is a stunningly good program which I use repeatedly on every image I make but it is not your Artistic Director regarding how to combine Ha! No, just ask Registar to align and crop/pad your Ha to your (L)RGB image and save the result. Export this aligned and cropped image into whatever program you are using to construct your final image. In my case that would be Photoshop. I will then process the post-Registar Ha in a way which bears in mind how I'm going to use it to make an Ha(L)RGB image. This will not be quite the same processing as I'd use to make a nice standalone Ha image. Why not? Because I can push the 'Ha to be added to LRGB' far harder than I can push the 'Standalone Ha.' I'm going to add it to the red channel in Photoshop's Blend Mode Lighten which means it will only appear in the red channel where it is brighter than the red channel. If the dark parts of my Ha image are stretched beyond the noise limit (as they will be) it doesn't matter so long as they are still darker than the dark parts of the red channel. They won't be applied. In my post-processing workflow my software priorities are 1) Photoshop 2) Registar 3) Pixinsight. I don't doubt that those who can fathom the autistic mysteries of Pixinsight will say that it can do the lot. They may be right but it doesn't communicate with me well enough for us to come to an understanding... Olly
  19. Thanks for the links. I would say that the 24 inch takes resolution to new heights on this thread. Olly
  20. There are not many users of such scopes and many of those who do use them post mostly on their own websites rather than on sites like this or Astrobin. Olly
  21. I think we can all agree that the larger apertures should do this but the question is, do they? I've seen some excellent globular images from SCTs so maybe they do, but then why did Ole's refractor split doubles better than his SCT in the images he posted above? It's a difficult business, all this. Goran's results with his Sony camera and 14 inch SCT are extremely persuasive. A cooled version of this camera or its chip in an astro camera would clearly be just right for the big reflectors. One thing about a 16 inch Truss would be that just looking at it on cloudy nights might be deeply pleasing! 😁 Olly
  22. It also looks as if you might have had less of a fight with the stars from the TOA 150? I'm looking at the green noise around the bright star to the right of the galaxy in the C11 image. They're both good images but I'd choose the Tak one, personally. Olly
  23. Provided you don't kill the aperture advantage by over-sampling, you're perfectly right. Our adventures with a 14 inch ODK were hampered by oversampling because the SXVH36 we were using wouldn't bin properly. Not all CCD cameras do, be it said. We have two Atik 11000s here, one of which bins perfectly while the other just throws up artifacts. Remember, though, that with a very long focal length you can bin in search of a good pixel scale but your chip doesn't get any bigger so your FOV risks being limited. For all that, there are plenty of small targets. Olly
  24. I wouldn't put money on it either. The TEC was corrected primarily for visual and I did get some slight blue bloat occasionally, until I added the TEC flattener for use with my full frame camera. While TEC strenuously deny that it has any colour correction effects, I and plenty of other users insist from experience that it does. When I compare Alnitak shot in the TEC140 with the same shot in the Tak FSQ106N, the TEC blows the 106 out of the water. A set of 10 minute L subs from the TEC, given a basic log stretch, leaves the star cleanly split as a double. In the Tak it's a blob and needs short subs and layer masking to save it. To be fair, reflectors also split the star cleanly, though the diff spikes are prodigious. Olly