Posts posted by ollypenrice

  1. Very good, especially at the lower end of brightness. I think the bright end is a bit too bright but I've no idea how to fix that in PI. In a sensible program like Photoshop 👹😁 it would take a click, a slider and a second click to sort it out.

    Olly

  2. On 20/06/2024 at 14:40, Clarkey said:

    I have been using an RC8 as my main DSO scope for some time. It will be pushing the HEQ5 - but it will do it. I have used it on mine, but it is better on the AZ-EQ6. I used to use a reducer/flattener - but now I tend to bin the data instead as I have other options for wider FOV. Personally, I think they are underrated scopes - most people run a mile due to the potential collimation issues.

    Here are a couple of mine:

     

    [Image: NGC 6888 Crescent Nebula v2.jpg]

    [Image: Deer Lick and Stephans Quintet v2.jpg]

    [Image: M64 Final.jpg]

    These are certainly good images. No doubt about that.

    Olly

    • Like 1
  3. 10 hours ago, Ags said:

    At first I tried two stretches for the nebula merged with a layer mask, but I couldn't get that to work in GIMP (still something to learn, I suppose).

    Great result, especially for a one-layer stretch.

    It's worth getting to the bottom of layer masking, though. The principle is simple: we want to blend a partially saturated image (called S) with a partially underexposed one (called U).

    If we make U a top layer over S, we want the overexposed parts of S to be replaced by the correctly exposed parts of U. We can copy and paste S onto a layer mask for the top layer, so the top layer will only be allowed through where the layer mask is white, meaning overexposed. In practice some modification to the layer mask will make it work better. A big increase in the mask's contrast (using an S curve and clipping the black and white points to make them fully opaque and fully transparent respectively) will combine the best parts of both layers. A substantial blur of the mask also makes for a smoother transition and removes newly created noise (Gaussian blur of 1 to 3, but experiment).

    If you keep this principle in mind I'm sure GIMP will allow you to do it. It just works so sweetly!
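    In numpy terms, the whole trick amounts to a few lines. This is only a sketch of the arithmetic, with a hard clip standing in for the S curve; the function and parameter names are mine, not GIMP's:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def blend_exposures(S, U, low=0.25, high=0.75, blur_sigma=2.0):
    """Blend a partially saturated image S with an underexposed one U,
    using S itself as the layer mask (both are float arrays scaled 0..1)."""
    # Start the mask from S: white = overexposed regions of S.
    mask = S.mean(axis=-1) if S.ndim == 3 else S.copy()

    # Big contrast increase: clip the black and white points so the mask
    # is fully black below `low` and fully white above `high`.
    mask = np.clip((mask - low) / (high - low), 0.0, 1.0)

    # Blur the mask for a smooth transition and to suppress mask noise.
    mask = gaussian_filter(mask, blur_sigma)

    if S.ndim == 3:
        mask = mask[..., None]          # broadcast over colour channels

    # Where the mask is white the top layer U shows through;
    # where it is black the bottom layer S is kept.
    return mask * U + (1.0 - mask) * S
```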

    Olly

    • Like 2
  4. 16 minutes ago, Ags said:

     

    I know 5 hours of subs is not a lot for Bortle 8 skies (and relatively broad-banded ZWO Duo filter), but it's great to see confirmation there is something viable in the data after all!

    Exactly. This was one of the things I found captivating in your data. In the not very distant past those shells required hours and hours of Ha and the same again in OIII. And here, dammit, you have all this in five hours of OSC!!! 👌 Incredible. I think it comes from both new cameras and new processing tools.

    Olly

  5. I think that your image is as close to perfection as we are ever likely to see. It is absolutely exquisite on all counts. Bravo.

    It would be interesting to compare it with one shot with luminance at native FL (which is how my own galaxies are shot with my TEC 140). Most of the theory I can think of says it won't make much difference, but I once processed some data shot by Julian Shaw on M106 using an AP extender on a 6-inch refractor and it was quite outstanding.

    Olly

  6. Sorry but this one really got under my skin! It dawned on me to go back to a kind of aggressive opening stretch which I use on Ha.

    [Image: stretch.JPG]

    This gives the best separation between background and faint nebulosity because of the near-vertical curve at the bottom. You have to deal with the overexposed parts later. I found this let me get the outer shells out of the background. By the time this layer was done, the core was almost saturated out.
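    For anyone who wants to experiment numerically, a curve in the same spirit (near-vertical at the bottom, flattening towards the top) can be sketched as an asinh stretch. This is only an illustration, not the actual curve in the screenshot:

```python
import numpy as np

def aggressive_stretch(img, k=500.0):
    """An asinh stretch with a very steep toe, applied to a linear image
    scaled 0..1. Larger k lifts the faint nebulosity harder away from the
    background, at the cost of nearly saturating the core, which then has
    to be handled by a second, gentler stretch."""
    return np.arcsinh(k * img) / np.arcsinh(k)
```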

    I then did a regular stretch just for the core. In Ps you can paste the core image on top of the extensions image, add a layer mask, paste the overexposed image onto the mask, blur it, increase its contrast and get a seamless (?) blend of the two.

    I've toned down the green this time and replaced stars at the end.

    [Image: V3FINWEB.jpg]

    Olly

     

    • Like 6
    • Thanks 1
  7. 3 hours ago, wimvb said:

    It's there, but you need to do a very careful stretch. In a curves stretch (not the initial stretch), put a marker just right of the histogram peak at x = y (so this point won't get stretched), then put a marker to the right of this with y > x. Put markers halfway at x = y, and a marker at x = 0.9 of maximum value and y = 0.8 of maximum value, to tone the core down a bit. Adjust the markers until you get a pleasant result.

    Something like this (this is PixInsight, but any image processing software should have this function):

    [Image: curves.png]

    Yes. In processing, the stretch is everything. You're showing a single stretch (with the stars awaiting a different one) but sometimes different stretches in different layers may be necessary. But... the stretch remains everything. That's why I don't use ready-made stretches. They are guesses, sometimes clever ones, but still guesses.
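    For anyone working outside PixInsight, the same kind of anchored curve can be sketched in a few lines of Python. The control points below are only illustrative; they are not wimvb's exact values:

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

def curves_stretch(img, points):
    """Apply a curves adjustment defined by (input, output) control points
    to an image scaled 0..1. PCHIP interpolation keeps the curve monotonic,
    so there is no overshoot between markers."""
    xs, ys = zip(*sorted(points))
    curve = PchipInterpolator(xs, ys)
    return np.clip(curve(img), 0.0, 1.0)

# Example: anchor just right of the background peak, lift the faint stuff,
# hold the midtones, and pull the core down a touch.
example_points = [(0.0, 0.0),
                  (0.12, 0.12),   # just right of the histogram peak, x = y
                  (0.25, 0.35),   # y > x: lift the faint nebulosity
                  (0.50, 0.50),   # halfway marker, x = y
                  (0.90, 0.80),   # tone the core down a bit
                  (1.0, 1.0)]
```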

    Olly

     

  8. I think you'll find that the problem lies in your blue channel. Certainly on the screen grab, looking at each channel separately shows that red and green are admirably flat but that blue is much darker (and less regular) in the region of the globular.

    Before thinking about flats, can we clarify whether or not this has had any kind of gradient removal? Automatic gradient tools are prone to sampling the background sky too close to the bright target object and, in so doing, creating an over-darkened region round it. Since a globular is made of stars, and telescopes struggle to keep blue light in tight focus, a gradient removal tool might pick up too much background blue and erroneously pull it down.

    If this hasn't had any gradient work done to it then your flats could be over-correcting, especially in blue. Finding out why can be a nightmare. One thing to try would be to take your master flat, split its channels, and replace the blue with a second iteration of the green.
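    If the master flat exists as a debayered RGB image, the swap is trivial to test. A minimal sketch, assuming the channels are in R, G, B order:

```python
import numpy as np

def green_for_blue(master_flat):
    """Diagnostic only: return a copy of an RGB master flat with the
    suspect blue channel replaced by the green channel, to test whether
    blue is the one over-correcting. Expects shape (H, W, 3)."""
    r, g, b = np.moveaxis(master_flat, -1, 0)
    return np.stack([r, g, g], axis=-1)
```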

    Olly

  9. 3 hours ago, Mr Spock said:

    Whether it matters or not, not levelling your pier is just sloppy. And sloppiness is like a virus, it creeps into everything.

    What would you say to someone who asserted that only piers painted red could provide perfect polar alignment?

    Olly

  10. The difference in signal between outer extensions and core is very slight, so pushing those extensions is always going to be difficult. Using a layers program (Photoshop in my case) I blended two stretches, one for extensions and one for core. To stretch the extensions I made a copy layer, added a layer mask, pasted the image onto that, Equalized it in Ps, blurred it, and stretched through that. I think Carole has done a tutorial on how to do this.
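    For the curious, the Equalize-and-blur mask boils down to something like the sketch below. It is not the Photoshop recipe itself, just the arithmetic behind it, assuming a mono image scaled 0..1 and any stretch function you like:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def masked_stretch(img, stretch, blur_sigma=5.0):
    """Stretch an image through an equalised, blurred copy of itself.
    The equalised copy lifts the faint extensions well above the
    background, so they receive far more of the stretch than they would
    through a plain copy of the image used as the mask."""
    # Histogram-equalise a copy of the image to use as the mask.
    hist, bins = np.histogram(img, bins=1024, range=(0.0, 1.0))
    cdf = np.cumsum(hist).astype(float)
    cdf /= cdf[-1]
    mask = np.interp(img, bins[:-1], cdf)

    # Blur the mask so the blend stays smooth and edge-free.
    mask = gaussian_filter(mask, blur_sigma)

    # Blend the stretched layer with the original through the mask.
    return mask * stretch(img) + (1.0 - mask) * img

# e.g. out = masked_stretch(img, lambda x: np.arcsinh(300 * x) / np.arcsinh(300))
```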

    I thought the main nebula was beautiful in your data and the weakest part was the fainter stars, which seemed to lack tightness and contrast. For this reason I played them down more than I usually would. Was focus nailed? My guess would be that it wasn't, quite.

    In my view, processing any image like this without removing and then replacing stars is on a hiding to nothing. StarNet++ is free, I think, though I used StarXTerminator.

    Although I tried both DBE and SCNR Green on the data, I found they did too much damage to the precious extensions so I didn't apply either. Very unusual for me.

    [Image: M27SGLFINweb.jpg]

    Olly

    • Like 5
    • Thanks 1
  11. 8 hours ago, saac said:

    Well, as Olly has explained above, there is no real sense in which a wonky angle with respect to a pier matters; it has absolutely no impact at all. Beyond the aesthetic, then, there is no sense of "doing it correctly". To be honest, from a purely engineering perspective, doing it correctly would mean not wasting time and resources on setting a configuration that has no operational impact. That all said, this is something we do for fun and we each get our fun in different ways. So hats off to those who pursue a perpendicular and level pier. :)

    Jim  

    OK, but I'd rather say Hats Off to those, like Lucas Mesu, who thought about it afresh and, discarding slavish orthodoxy, realized that there was no need for a meridian flip.

    [Image: Lucas.JPG]

    Olly

    • Like 2
  12. Just now, VNA said:

     

     

    Hello, why not do it correctly, because it will be one less problem to deal with?

    It won't be one less problem to deal with. Pointing the pier directly at the centre of the Earth is 100% useless. Think it through. Once you have your wonderfully level pier you are going to crank your RA axis to the angle of your latitude so it points at the celestial north pole. That is what matters, not where your pier is pointing.

    Olly

    • Like 3
  13. I think you may be over-estimating the importance of PA for guiding. Get it right by all means, but I've never seen a strong correlation between better PA and better guiding. The periodic error is likely to be 20 arcsecs or more, which tends to swamp the corrections needed to fix PA drift. Good PA is important for unguided imaging, however.

    What I would do next is work out your sampling rate in arcsecs per pixel. The real-world best you're likely to get from the seeing will be around 1.3 arcsecs per pixel so, if you are sampling a long way below that, you might consider binning, stacking in super pixel mode or resampling the image downwards by an appropriate amount before post-processing. This will give you a cleaner image with no loss of real detail.

    If you use the ZWO ASI 224MC, for instance, it looks like you'll be sampling at just under 0.5 arcsecs per pixel. That is certainly oversampled and would require a guide RMS of about 0.25 arcsecs. In reality you'll be doing well to reach 0.5, so it would be hugely advantageous to accept this and bin accordingly.
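    The arithmetic, for anyone who wants to plug in their own numbers. The 1600 mm focal length below is only an assumption, chosen to reproduce the roughly 0.5 arcsec per pixel figure; the ASI224MC's pixels are 3.75 microns:

```python
# Image scale in arcseconds per pixel, using the usual approximation:
#   scale = 206.265 * pixel_size_um / focal_length_mm
def image_scale(pixel_size_um, focal_length_mm):
    return 206.265 * pixel_size_um / focal_length_mm

print(image_scale(3.75, 1600))       # ~0.48 "/pixel  -> oversampled
print(image_scale(3.75 * 2, 1600))   # ~0.97 "/pixel  -> after 2x2 binning
```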

    Matching camera to optics is so important in this game.

    Olly

    • Like 1
  14. If you consider the whole of the word 'travel' rather than just the bit about the aeroplane, I wonder how travel-friendly an F4 Newt is likely to be? An instrument which is very sensitive to fine tuning is going to need fine tuning at your destination. How much of your holiday time will that take? How much imaging time will be consumed? Does anybody know? I don't, but I think it's a big risk and not one that I, personally, would want to take.

    Olly

  15. 1 hour ago, WolfieGlos said:

    Is that from jumping up and down, or just you walking into the room? 

    Because I also get the same in both scenarios 😁

    No, when I walk into the room she just barks. No, I don't know yet what we're having for dinner this evening!

    😁lly

    • Like 1
    • Haha 1
  16. 7 minutes ago, TiffsAndAstro said:

    ty :) I have been wondering about how the Ha and the OIII could both be included with RGB to create a final image. I guess the green channel of OIII will look different to the blue channel of OIII? But it seems target/nebula dependent?

    Turns out I'm not keen on the Hubble palette and was wondering if there was a different way that I'd prefer. Just a personal preference thing. Cheers for the help again :)

    Tony, the OP, will have his own answer to this.

    In my case I create three Ps layers having added Ha to red already.

    R/Ha  G/OIII  B

    R/Ha  G         B/OIII

    R/Ha  G         B

    I can then weight the relative OIII contribution to G and B very easily, seeing the result in real time, using the opacity slider. I can also weight the overall contribution of the OIII in G and B  to the HaRGB at this stage or I can do that later.
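    In code terms, the opacity sliders are just doing a weighted blend, something like this sketch (array names and weights are illustrative):

```python
import numpy as np

def blend_oiii(r_ha, g, b, oiii, w_g=0.5, w_b=0.5):
    """Weight an OIII layer into the green and blue channels of an HaRGB
    image. All inputs are float arrays scaled 0..1; r_ha already has Ha
    blended into the red channel. w_g and w_b play the role of the layer
    opacities for the G/OIII and B/OIII layers."""
    g_out = (1.0 - w_g) * g + w_g * oiii
    b_out = (1.0 - w_b) * b + w_b * oiii
    return np.dstack([r_ha, g_out, b_out])
```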

    Olly

     

    • Like 2
  17. 1 minute ago, TiffsAndAstro said:

    Noob question, apologies: I know what the H and the first O in HOORGB are, but what is the second O for?

    OIII applied to the blue channel.  The OIII line lies on the blue-green border, sometimes called 'Teal Blue.' On emission nebulae this will give a result very similar to RGB, so it's Ha-OIII-OIII.  It will differ from RGB, however, when there is reflection nebulosity which a blue filter will pass and which an OIII filter will mostly block.

    Olly

    • Like 1
    • Thanks 1