Everything posted by ollypenrice

  1. I don't think it's very common to combine different sub lengths. M42, yes, every time. I use three different lengths on that. In thousands of hours of imaging I've probably done it on two other targets at most. Unless you know that a target's dynamic range exceeds that of your camera, don't do it at all. (Some mono users like myself do different sub lengths for luminance and colour but this doesn't apply to OSC users.) Alacant has shown that you have the full dynamic range in your capture. Nice! Could you post a TIFF with a large part of the background sky cropped off? 90 meg is defeating the rural French internet connection! Just keep the globular and some surrounding sky if you can. Olly
  2. Why take different sub lengths to put into the same stack? I wouldn't do that. It's best to find out the optimal sub length and take those. Very exceptionally it is necessary to take short subs to cover the bright parts of a target with high dynamic range - M42 being the classic example. But don't throw the lot onto one stack, make two and then look into blending techniques. In this case I would make two separate stacks and then combine those. Olly
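To make the "two stacks, then blend" idea concrete, here is a minimal numpy sketch (an editorial illustration, not Olly's actual workflow): saturated core pixels in the long-exposure stack are replaced by short-exposure pixels scaled up by the exposure ratio. The function name, threshold, and normalisation are all assumptions for the example.

```python
import numpy as np

def blend_hdr(long_stack, short_stack, exposure_ratio, sat_level=0.98):
    """Replace near-saturated pixels in the long-exposure stack with
    short-exposure values scaled by the exposure ratio.
    Both stacks are linear (unstretched), normalised to 0..1."""
    scaled_short = short_stack * exposure_ratio
    mask = long_stack >= sat_level          # pixels burned out in the long stack
    return np.where(mask, scaled_short, long_stack)

# Toy example: one core pixel saturated in the long stack,
# short subs are 10x shorter so carry 1/10 of the signal.
long_stack = np.array([0.2, 0.5, 1.0])      # 1.0 = saturated
short_stack = np.array([0.02, 0.05, 0.09])
result = blend_hdr(long_stack, short_stack, exposure_ratio=10)
```

Only the clipped pixel is touched; everywhere else the deeper long-exposure data is kept.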
  3. Alacant has answered over image history. The way to check for over-exposure is simply to look at the linear stack (ie the unstretched stack.) If it isn't saturated in the core and you can see some separation of stars then that can, with careful stretching, be preserved. As presented I'd say the core was slightly saturated. Looking at the linear stack will tell you whether it's happened at capture or in processing. There isn't much in the way of star colour. The cluster is made up of mostly whitish stars with some red and blue ones sprinkled about. This isn't yet showing. There may be two reasons for this: Firstly over-stretching tends to burn all colour to white so a softer stretch would be helpful. Secondly star colour is generally captured in the outer fringes of stars where they are not so bright (as just mentioned) so any black clipping will crop out this information. You could crop out a good part of the border to save file space and post a Dropbox link to the linear TIFF file if you'd like us to have a look at it from scratch. Olly
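The saturation check described above can also be done numerically rather than by eye; a small hedged sketch assuming 16-bit integer data (the function name and tolerance parameter are illustrative, not from any package):

```python
import numpy as np

def saturation_report(linear_stack, bit_depth=16, tolerance=0):
    """Count pixels at (or within `tolerance` of) full scale in an
    unstretched stack. A non-zero count in the core means detail was
    lost at capture rather than in processing."""
    full_scale = 2**bit_depth - 1
    return int(np.count_nonzero(linear_stack >= full_scale - tolerance))

# Toy 16-bit stack with two clipped pixels
stack = np.array([[1200, 65535], [65535, 30000]], dtype=np.uint16)
clipped_pixels = saturation_report(stack)
```

If this returns zero on the linear stack but the finished image looks burned out, the clipping happened during stretching.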
  4. Focus needs to be reasonably close to the point used for the capture but a small deviation won't matter. Olly
  5. Pixinsight will suit some people and not others. I'm one of the latter. I use both but find that PI is very hard work and I do most of my processing in Photoshop. I have CS3 on a legitimate disk, thank goodness, but would I give Adobe £120 a year for Ps? I possibly would, but it would make me spit to do so and I'd be looking at other layers-based programs as well. At one time Pixinsight tried to move to subscription but gave up. Was this on legal grounds, having started off with promises of free updates? The package the OP mentions looks like the right one. Olly
  6. I'll be up there this morning giving it a go, Mike! A perfect lockdown entertainment. Olly
  7. The problem with a straight blend of Ha-red is that the Ha stars are much smaller, so the final stellar colour balance will be affected. The background will be darker, too, so you'll need to rebalance that. Also not all the red signal comes from Ha so that which doesn't will be diminished by a blend. Blend mode lighten is 'add only,' hence its charm. Did you capture in Bin 1? Anyway it's a nice result. Olly
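For reference, blend mode lighten keeps whichever layer is brighter at each pixel, which is why it can only add signal and never subtract it. In numpy terms it is simply a per-pixel maximum (toy values, 0..1 range):

```python
import numpy as np

# Lighten blend: per-pixel maximum of the two layers.
# The Ha layer only contributes where it is brighter than the red layer.
red = np.array([0.30, 0.60, 0.10])
ha  = np.array([0.50, 0.40, 0.10])
blended = np.maximum(red, ha)
```

Note the second pixel: because red is already brighter there, the Ha data leaves it untouched, which is exactly the 'add only' behaviour described above.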
  8. I have to say that I omitted to sniff the one Takahashi I bought new. The rest, quite elderly, have always arrived with that faint odour of 'blokes in sheds.' Since I'm familiar with this odour it has never troubled me... 😃 Olly
  9. It doesn't matter which program you used to make the image. What is wrong with it is very easy to see: it is badly black clipped. Here is the histogram of a screen grab in Photoshop. (I've tested screen grabs of my own data against the original data and found no difference.) The histogram peak is jammed up against the left-hand edge of the graph when it should start with a flat line and then a somewhat progressive rise. (Not terribly progressive, however, in an image with just stars and background as opposed to stars, background and nebulosity.) Here are my 'stock' demo images. The first histogram is like yours, black clipped. The second is a healthy histogram from the same data; note the flat line on the left before the histogram peak. You need that. See how much faint signal is cropped out by bringing the black point slider too far to the right? Don't capture it and then throw it away! There is a big temptation to clip out light pollution and other gradients, but resist this temptation because you'll clip out the real stuff at the same time. Build up your skills in DBE to remove gradients and leave the black point slider out of it. Olly
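The black-clipping check can be done numerically as well as by eye; a hedged sketch (illustrative helper, not part of any package): a healthy stretch leaves essentially nothing piled on zero, while a clipped image shows a tall spike there.

```python
import numpy as np

def black_clip_fraction(img, floor=0):
    """Fraction of pixels jammed at (or below) the black point.
    Anything much above zero means faint signal has been thrown away."""
    return np.count_nonzero(img <= floor) / img.size

# Toy 8-bit histograms of the two demo cases described above
healthy = np.array([12, 15, 20, 40, 80])   # flat toe, nothing at 0
clipped = np.array([0, 0, 0, 5, 80])       # peak jammed against the left edge
```

Running it on the two arrays reports 0% clipping for the healthy stretch and 60% for the clipped one.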
  10. This applies if you're using a median stacking routine for these files instead of a simple average. In my opinion it's guff, in practical terms, whether you are or you're not. You'll never tell the difference... Olly
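For anyone curious about the mechanical difference between a median and an average stack, a toy sketch: the two only diverge when an outlier (a satellite trail or cosmic ray hit, say) lands in one sub. On clean data they give near-identical results, which is the point being made above.

```python
import numpy as np

# Five subs of the same pixel; one sub carries a satellite trail (outlier)
subs = np.array([100.0, 102.0, 98.0, 101.0, 4000.0])

mean_stack = subs.mean()        # dragged far upward by the outlier
median_stack = np.median(subs)  # rejects the outlier entirely
```

With a sigma-rejection average (as most stacking software uses) the outlier is discarded before averaging, so in practice the two approaches converge.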
  11. Very sound and the consequences of failure are entirely different. If you beg forgiveness and fail you still have the telescope. 👹 Olly
  12. Personally I would like to get my colours to be close to what we would see if we had eyes which had evolved to allow us to see colour in deep sky objects. (Perhaps if the females of our species greatly preferred males who could do this! That's not so implausible. Such males would waste less money on otherwise unproductive astrophotography. 🤣) We'll always have to accept that it's difficult to know what colour we would see if we had such eyes but the science does give us some clues and I think there has been a collective movement towards the intensification of blue in spiral arms. I'm going to get out of my blue period and accept less colourful galaxies for a while! Olly
  13. What you can do in AA is go to Colour - Split Channels and then mouse over a region of background sky in each channel. The ADU is shown at the bottom of the screen. This allows you to measure the background in each channel. Unfortunately this measures individual pixels rather than a 3x3 or 5x5 average as in Ps, so you get some variation. Still, it does give you an idea. It may be possible to draw a box and get the average within it, I don't know. It may also be possible that the free GIMP has an equivalent of the Ps sampler. I do advise finding some way of doing it, though. When the background is right the rest becomes a lot easier. Olly Edit: I ran a screen grab of your first image through the Ps Colour Sampler system I use and here's the result: You'll see the 4 sample points on the image and the RGB values in the lower four boxes of the Info window. Red and green are well balanced but very low, and blue is about twice as bright, or almost, depending on where you look. I aim for at least 20/20/20 though I'd prefer 23/23/23 when my data allow it. Sometimes they just don't. Only the colour balance is holding your image back. It's a real cracker in other respects.
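The 3x3 or 5x5 averaged sample that Photoshop's Colour Sampler takes is easy to reproduce on any RGB array; a hedged numpy sketch (the helper name and box size are illustrative, not an AA or Ps function):

```python
import numpy as np

def sample_background(img, x, y, box=5):
    """Mean value per channel of a box-by-box patch centred on (x, y).
    img is an H x W x 3 array; mimics a 5x5 eyedropper sample."""
    h = box // 2
    patch = img[y - h:y + h + 1, x - h:x + h + 1]
    return patch.reshape(-1, img.shape[2]).mean(axis=0)

# Toy 8-bit image with a blue-dominated background: R=20, G=20, B=40
img = np.stack([np.full((32, 32), 20.0),
                np.full((32, 32), 20.0),
                np.full((32, 32), 40.0)], axis=-1)
r, g, b = sample_background(img, 10, 10)
```

Here the sample immediately shows blue running at twice red and green, the same imbalance described above; parity across the three channels is the target.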
  14. Lovely. And, my, what a stylish pier you have! Olly PS Personally I find some microfocusers too slow for visual. They stop me getting that 'snap' effect of perfect focus.
  15. The best way to avoid orientation issues is to use the same orientation every time. Unless you really can't frame the object except at a particular angle, align your camera with RA and Dec, either in landscape or portrait mode (landscape being long side parallel with RA). Do this roughly by setting the camera parallel with the dovetail or counterweight bar and perfect it by taking a sub of a few seconds and slewing slowly during the exposure. You'll get a star trail showing your present camera angle. Adjust and repeat. It takes little time and means you can easily come back to a target to add more. The Rolls Royce of alignment/resizing software tools is Registar. Brilliant but not cheap. Olly
  16. Yes, like RBA I do tend to include pointers towards any extreme processing and its purpose. He has a particular style and is very skilled but he certainly doesn't play the shrinking violet! Olly
  17. No, I think more of both would help the image but it would need to be quite a lot more to make a big difference. I say this because I've just processed an M101 dataset captured by another member and that went a little deeper and would take more sharpening. The big difference was that his was shot from some altitude in the Alps under a dark sky. You're fighting skyglow and your only weapon is exposure time, as Craig Stark has discussed persuasively. I do think you're getting there with the image and I like your latest one. If you look at the third step in my processing method you'll find a very effective technique for reducing background sky noise. Because it involves no pixel-to-pixel communication it is not a blurring technique and does not produce the oily, 'vaseline on the lens' look. There are other ways of reducing noise with less pixel-to-pixel communication than the NR algorithms, too, but this one is easy and retains exactly the same 'grain pattern' as the original but just reduces by as much or as little as you like. I found it a big help in your luminance image. The entire image is blue-dominated in the first example and is still slightly so in the second, to my eye. I always measure my background sky colour using Photoshop's Colour Sampler eyedropper set to read 3x3 or 5x5 pixel samples. I'm aiming for parity in each colour. This really is one of the foundation stones of any image I process. I'm not sure about the second but I'll bet the first one is high in blue. Olly
  18. You need an optician! Or a sunny day!! 😁 The only way a face looks like the left hand side to my eye is in a thick fog. The right hand side is a little saturated but not by that much. We are making pictures, Rodd. They are not, and never will be, reality. What about writers? Take a writer who is trying to evoke a scene so that it comes to life for the reader. A writer who plods through all the objective details will never succeed in doing this. Good writing involves embracing the subjective experience, emphasizing the features which define that experience over those which are to be found everywhere. Exaggeration? Of course, it is necessary. Selection of features of interest? Obviously - or you'll drown in details about the character's bus ticket that day. (It was a slightly yellowish grey, just a tad less yellow and more grey than the lady next door's poodle's collar, except when it was wet. The collar's yellow became slightly more intense in the rain... zzzzzzzz!!!) I agree, though I think that exaggerating galaxy halos and outer arms, if they are genuinely buried in the data, is the whole point of a certain kind of imaging. If you don't want to see them, do visual observing and you won't. If you're interested in what's really there, as I am, you'll need to exaggerate them to reveal them. I don't do astrophotography to confirm what I can't see at the eyepiece, I do it to find what I can't see at the eyepiece. The discussion on colour is very interesting and confirms something I've long suspected - that galaxies are not, in fact, all that colourful and are probably a lot greener than we usually see them. Olly
  19. This set me thinking. How close can I get to this with my data and processing? Here I'm using my own M101, not Rodd's. I'm unsure how to de-blue any harder than this and I can't get that silvery-green, silvery-blue look, really. I think they might have me on resolution as well... 🤣 Thoughts, anyone? Olly
  20. What's the source of this one, Vlad? Is it Hubble? Olly
  21. I do things a little differently from most people. For one thing I don't use deconvolution. (I know this is odd!) This is how I did my version:
      1) DBE in Pixinsight, then Photoshop.
      2) Basic Log stretch and black point adjustment till the brighter background pixels reached 23.
      3) Background noise reduction in Curves. Pin the curve at 23 and place fixing points above that. Raise the curve below 23 to taste, so lightening the darker pixels to reduce speckle.
      4) Copy layer, working on the top one. In Curves, pin the background again at 23 and add one fixing point below that. Continue to stretch above that. (There is no point in further stretching the background sky.) This stretched the faint stuff considerably above its noise floor while the medium-bright stuff was still safe from noise. Erase the top layer's bright stars outside the galaxy and flatten.
      5) Copy layer, top layer invisible, bottom active. Heavy noise reduction. I used Noel's Actions, Reduce Space Noise. I want the faint outer arms and the regions between the spirals to be noise free. Too much NR is OK at this stage; we won't be applying all of it.
      6) Top layer visible and active. Use Colour Select (it works on greyscale) to select the faint and semi-faint stuff where the noise is a problem. Expand by 1 and feather by 1. Set the eraser to 50% and erase the top layer where selected. Too much? Go back and reduce the eraser percentage. Not enough? Repeat or increase it. If some small areas are still noisy, remove the selection and use a small eraser brush to take them down to the percentage you want. (SO much better than masks!!!)
      7) Sharpening. Copy layer, top layer invisible, bottom layer active. Select stars (Noel's Actions or MartinB's tutorial on SGL). Expand and feather the selection, then select inverse so the stars are excluded. I'll now use a mixture of Smart Sharpen and Unsharp Mask, mostly the latter. I don't look at the damage it does to the lower brightnesses, I look only at the parts it enhances. I'll only be keeping those. Most of the image will look horrible but a few bits around the core will look great.
      8) Top layer visible and active. Erase at high percentage those areas which will stand it. Reduce the percentage of the brush and erase those areas which will stand some sharpening, etc. Basically work down the brightnesses with an ever-reducing percentage. It is often helpful to do the sharpening with different values in several iterations. Flatten and that's it.
      Layers and eraser versus masks? If, by some dark magic, you can create the perfect mask which goes just where you want it at the opacity at which you want it, then use masks. If you prefer to see what you are doing, where you are doing it, while you are doing it, then use layers and eraser. I know which I prefer! Olly
      Edit: Some of these methods cannot be used on targets with darker-than-background dusty regions.
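The Curves move of pinning the background at 23 and lifting only the darker pixels can be sketched as an 8-bit lookup table. This is an editorial illustration of the shape of that curve, not a Photoshop feature; the `lift` strength parameter is an assumption for the example.

```python
import numpy as np

def lift_shadows_lut(pin=23, lift=0.4):
    """Build an 8-bit lookup table pinned at `pin`: values at or above
    the pin are untouched; darker values are raised part-way toward the
    pin, softening background speckle with no pixel-to-pixel blurring."""
    lut = np.arange(256, dtype=np.float64)
    below = lut < pin
    lut[below] = lut[below] + (pin - lut[below]) * lift
    return np.clip(lut, 0, 255).astype(np.uint8)

lut = lift_shadows_lut()
# lut[23] and everything above it are unchanged; lut[0] is raised toward 23
```

Because each pixel is remapped independently through the table, the grain pattern is preserved and merely compressed, which is why this approach avoids the 'vaseline on the lens' look of blurring noise reduction.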
  22. Well spotted, Dave! I've been busting my brains on that darned colour and actually had noticed the L-B similarity but didn't follow the thought through. I was wondering whether it was an effect of the LP filter since I don't use one. Doh! Olly
  23. Hi Rodd, I'm still downloading the colour files on our slow internet but I had a go with the lum. This is a slightly cropped JPEG at full resolution. I don't process for pixel peeping so this is intended to hold up - just about - at 100% and TIFF. The data is good. It differs from my own TEC140/Atik 460 data mostly in respect of the background sky. Yours is much lighter, not surprisingly, so that makes it harder to squeeze out those very faint outer arms and separate them from the background sky. Mine would also take a little more sharpening because, again, the dark site produces less noise. After DBE in PI the rest was done in Photoshop CS3. Olly Edit: It does look significantly better in PS and TIFF format.
  24. It's tricky visually even with a 20 inch Dob, at least with my eyesight. Olly
  25. We share a birthplace, Andy! I, too, was born in Billinge Maternity Hospital, Feb 12th 1953. Small world. Olly