Everything posted by ollypenrice

  1. There is no point in taking a 1-hour and a 4-hour image and applying the processing of the 1-hour image to the 4-hour one. The whole point of the 4-hour (or, ideally, the 20-hour!) image is that it will stand much harder processing. However, it would be instructive to apply the harder processing to the shorter image because all its shortcomings would show. For me the second image is clearly better, particularly in terms of colour intensity. I also think it has a lot more colour to give. It would probably stand more sharpening and local contrast enhancement as well, and a harder stretch of just the outer halo. You have to catch more data but you also have to exploit it. Olly
  2. I experimented with this in a previous discussion with Alacant over the histogram and evidence of black clipping. A screen grab of his image (not this one), opened in Ps, showed black clipping in the Levels histogram. But did this arise from the screen grab itself? I then took a screen grab of an image of my own, posted on here, opened it in Ps and looked at its histogram. There was no difference between the histogram of the screen grab and that of the original image. (I posted the evidence.) So for the purpose of checking the black point on the histogram I think a screen grab is roughly OK. It certainly won't work for all tests, though. Olly
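     A rough sketch of the same black point check done outside Ps - purely illustrative, assuming Python with numpy and Pillow installed, and with a placeholder file name:

        import numpy as np
        from PIL import Image

        # Load the screen grab (or the original image) and count how many
        # pixels sit hard against zero in each channel. A big pile-up at 0
        # is the signature of black clipping.
        img = np.asarray(Image.open("screen_grab.png").convert("RGB"))

        for i, channel in enumerate("RGB"):
            clipped = np.mean(img[..., i] == 0) * 100
            print(f"{channel}: {clipped:.2f}% of pixels at level 0")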
  3. You can certainly have too much mount. If lugging it about puts you off you won't want to do it. The EQ6, in my view, is significantly less portable than the HEQ5 and it is no more accurate - though it has a bigger payload. I image at about 0.9" per pixel with a premium mount at a premium site and the fact is that it is by no means always possible to beat the seeing, even so. Olly
  4. 🤣 I knew it! I knew I shouldn't have posted that screen grab with you on the case!!! I somehow managed to muck up my Ps profile settings when I opened it but rather than search for a solution then and there I pressed on with my reply to Alacant's post. The evidence regarding the black point remained valid so I left it. Ignore the colour profile. (Out of interest, I don't know what I altered, but a net search told me to close Ps then open it again while waiting a fraction of a second before hitting Shift/Control/Alt. Well, that might be easy for someone with the fingers of a concert pianist but it took me a good few attempts.) Now fixed. Olly
  5. It's deep, with the faint outer arms showing. It's very well resolved. The background sky is neither too dark nor too smooth, so the black point has been nicely judged. It does lack colour, as you say. There are two Ps techniques which may have GIMP equivalents - I don't know. 1) Make three layers. Set the top to blend mode Soft Light and flatten onto the second layer. Set the second layer to blend mode Colour and flatten. 2) Convert the image to Lab colour mode and greatly increase the contrast in the a and b channels by the same amount. These are less noise-inducing than using Saturation. It looks as if your flats have very slightly over-corrected. You could fix this in Curves. Put in anchor points at and above the brightest part of the background and slightly lift the curve below them. Olly
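     The Lab trick (2) can also be sketched outside Ps or GIMP. This is just an illustration, assuming Python with scikit-image installed; the file name and the boost factor are placeholders:

        import numpy as np
        from skimage import color, io

        img = io.imread("galaxy_rgb.png")[..., :3] / 255.0   # placeholder RGB file
        lab = color.rgb2lab(img)

        # Stretch a and b (the colour channels) by the same amount, leaving L alone.
        boost = 1.4
        lab[..., 1] *= boost
        lab[..., 2] *= boost

        out = np.clip(color.lab2rgb(lab), 0, 1)
        io.imsave("galaxy_colour_boost.png", (out * 255).astype(np.uint8))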
  6. Greetings, Harel, and welcome back. That's a lovely M33. I particularly like the muted colour because that's very much how I think it is. Olly
  7. At last our internet has allowed me to see the big one and it was worth the wait. This is just glorious and Zoomified is the only way to present it. Roll on Orion and the galactic centre. Great stuff, Tom. Olly
  8. BTW, if you want to take the camera off and replace it quickly, just lock up the mount with the OTA and counterweights horizontal, stick some masking tape on the camera back, hold a spirit level against it and rule a line on the tape. When you replace it you can use this line and the spirit level to find the original orientation. Saves some faffing. Olly
  9. Why I use mono: Mono allows you to shoot luminance (all colours at once), so that stage works about three times faster than OSC, where the chip is always trapped behind RGB filters. Over a whole session mono therefore works out faster - I'd say by at least 20%. Some targets are best shot in RGB plus narrowband, and narrowband is obviously best shot in mono because all the pixels receive light through the filter. Only the red-filtered pixels (1 in 4) receive it through an OSC Bayer matrix. Narrowband beats light pollution (though I don't have any). Ha can be shot in moonlight on all pixels, not 1 in 4. Processing mono images: does it take longer? I found it very hard to get to the standard I wanted using OSC - I don't know why exactly. At the stacking, calibrating and combining stage mono adds no more than ten minutes, probably less. Do you need to refocus between filters? If they are the same make and decent ones, no, but you can. In OSC you can't - and the need to refocus is not driven by the filters but by the optics. If you focus in luminance then RGB will be OK. Do you need flats per filter? Once in a blue moon you might, but 95% of my images use a luminance flat for all channels. In mono you can shoot just what you want to shoot based on the target. This might mean lots of L and little RGB, or no L and just RGB, or either of these combinations plus narrowband. You are not stuck with all those RGB filters blocking the light when not wanted and getting in your way. Olly Edit: Carole, below, mentions the ability to bin 2x2 for colour. I'd forgotten that. You can also bin 2x2 (or 3x3) to avoid being over-sampled in all filters. With CMOS cameras now having such tiny pixels this might be an advantage at even quite modest focal lengths - though the gain from binning with CMOS is not as great as with CCD.
  10. I had some very like this from Meindl, a reputable make, but the uppers separated from the rubber soles in far too short a time to represent acceptable value. I loved them, though. Until they fell apart they were perfect. What make are these? Olly
  11. Really? They weigh so little. I've never noticed any effect from their weight. Olly
  12. Could it just be the screen stretch setting on your capture programme? Download a sub and stretch it in your processing software to see what's in it. Olly
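     If you'd rather do that check outside your processing software, here's a quick sketch - assuming Python with astropy and matplotlib installed; the file name is a placeholder:

        import numpy as np
        from astropy.io import fits
        import matplotlib.pyplot as plt

        # Load one raw sub and give it a hard percentile stretch on screen.
        data = fits.getdata("Light_0001.fits").astype(float)
        lo, hi = np.percentile(data, (5, 99.8))
        stretched = np.clip((data - lo) / (hi - lo), 0, 1)

        plt.imshow(stretched, cmap="gray", origin="lower")
        plt.title("Screen stretch of a single sub")
        plt.show()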
  13. Do you have a router? If so you could cut a couple of thin plywood rings of the right diameter. I've done this in the past. Olly
  14. I would get the software out of the polar alignment entirely and do it using the classical drift method. There is nothing to go wrong in the old way. In an observatory you're not re-doing it night after night so any increase in time it takes won't matter. And because it is bound to work you may save time. As it is you have one bit of software telling you one thing and another telling you something different. No surprises there!! Olly (powered by steam.)
  15. Camping mat makes great dewshields, but I think you'll be lucky to get an edge-to-edge joint to stick reliably; I put tape round them. Alternatively you could make two cylinders bonded together, one inside the other with the joints opposite each other. Olly
  16. For those who like more 'pop' here's a version with higher contrast. Olly
  17. What a great image and great object! Superb. Our dual refractor is working away as I type. Nice system to use. Olly
  18. Nice one Dave. The outer structure grows and grows as you add data. Olly
  19. This was done with the productive dual TEC140 rig co-owned by myself, Tom O'Donoghue and Mr and Mrs Gnomus. (Atik 460 and Moravian 8300 on Mesu 200 mount.) RGB 3 hrs per colour, Ha 8.25 hrs, OIII 7.75 hrs. Stacked in Astro Art, components aligned and resized in Registar, post-processing initially in PI but mostly PsCS3. Ha to red, OIII to green and blue, all in blend mode Lighten, with the green/blue balance done in Ps Layers. An Ha-OIII monochrome blend was very lightly applied as luminance as well. Let's spare a thought for two people here: Fritz Zwicky, who had the intellectual courage to realize that nature could create neutron stars, and Heather Couper, with whom I overlapped at university without ever meeting, and who died recently. She was a great communicator of astronomical science. It's a crop, posted full size if you click on it. Olly
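     For anyone wondering what the Lighten channel mapping amounts to, here's a rough numpy sketch of the idea - illustrative only, since the work above was done in Ps layers; the array names and the green/blue weighting parameters are assumptions:

        import numpy as np

        def lighten(base, overlay):
            # Photoshop-style 'lighten' blend: keep the brighter of the two pixels.
            return np.maximum(base, overlay)

        # rgb, ha and oiii are assumed to be stretched float arrays in [0, 1],
        # already registered to the same pixel grid.
        def add_narrowband(rgb, ha, oiii, oiii_green=1.0, oiii_blue=1.0):
            out = rgb.copy()
            out[..., 0] = lighten(out[..., 0], ha)                 # Ha -> red
            out[..., 1] = lighten(out[..., 1], oiii * oiii_green)  # OIII -> green
            out[..., 2] = lighten(out[..., 2], oiii * oiii_blue)   # OIII -> blue
            return out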
  20. Check it out for supernovae. I had a weird blue-green star in an M101 image which turned out to be a supernova remnant (SNR)! Olly
  21. We overlapped at university but I didn't know her there. This is a terrible shock. The first proper astronomy book I read was co-authored by Heather and Nigel Henbest. It was an excellent start. Olly
  22. That's my starting point, too, though I do the trails test to tighten it up. I've just been helping to set up a CMOS camera in our robotic shed and the daft designers have given no external clues as to the chip orientation. GRRR. What you can still do, though, is look down the spout (sometimes known as the objective) and see the chip from the front. Put in the lum filter for this task. Won't work with a shutter, though! For setting up a dual rig I'm even more particular and set up a camera crosshair on the capture program, then slew between quick subs on one axis to make sure a central star stays on the line of the crosshair right across the chip. Plate solving? Hey, we run on steam here. Olly
  23. Hi Steve, for a long time now I've had my cameras set orthogonally with RA and Dec. If the long side of the chip lies along the lines of RA we call it 'landscape' and if the long side aligns with Dec we call it portrait, following the daytime convention. To get to one of these orientations simply set up a 5 second sub and during the exposure do a slew in either RA or Dec. This will produce long trails. When the trails align with the chip you're orthogonal in one of the two possibilities. The virtue of this is that it's repeatable. Any time you want to add data your camera will be at the same angle as last time - or 90 degrees out. If it's 90 degrees out you can soon turn it, do the trails test and be ready to add new data. Finding a random camera angle wastes hours. Also it's far easier for mosaics. I tried to make a mosaic with an angled camera orientation. Once! Olly
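     If you want to put a number on the trail angle rather than judge it by eye, here's a rough sketch - assuming Python with astropy and numpy installed; the file name is a placeholder:

        import numpy as np
        from astropy.io import fits

        # Load a short sub taken during a slew in RA or Dec.
        data = fits.getdata("trail_test.fits").astype(float)

        # Take the brightest ~0.1% of pixels - on a slewed sub these are the trails.
        thresh = np.percentile(data, 99.9)
        ys, xs = np.nonzero(data > thresh)

        # The principal axis of the bright pixels gives the trail direction.
        coords = np.stack([xs - xs.mean(), ys - ys.mean()])
        eigvals, eigvecs = np.linalg.eigh(np.cov(coords))
        vx, vy = eigvecs[:, np.argmax(eigvals)]

        angle = np.degrees(np.arctan2(vy, vx)) % 180.0
        print(f"Trail angle relative to the chip rows: {angle:.1f} deg")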
  24. Indeed so. I felt exactly the same. I'm about to shoot colour for our Ha-OIII and am thinking about leaving luminance out of the mix because it will fill the central part of the nebula with light and so reduce the striking fine filaments in the OIII. Olly