Everything posted by ollypenrice

  1. Ooooh, now that's a good 'un. Deep and clean. World class without a doubt. Olly
  2. While worrying about the blemishes on the corrector plate, take another look at that vast black thing smack in the middle of the light path and totally opaque. The secondary mirror. Think about it!! 😁 Olly
  3. I'm sorry, but I have never pretended that my images were taken from anywhere else but planet Earth, and my understanding of 'natural colour' is 'colour as seen from here.' What are we to make of, say, IC342, which is reddened not only by the Earth's atmosphere but by intervening galactic dust? In the link I've compared it with M101 for size but it can also be compared for colour, as can the stars and background. I would be very surprised if, intervening dust permitting, there were much difference between the two galaxies, but what I'm trying to do is photograph them from here. https://www.astrobin.com/full/327910/0/ Olly
  4. 'As for the color, obviously I didn't have enough data to get an accurate rendition of the color in the IFN, and what I've got wasn't deep at all. I knew beforehand I wouldn't have enough time to get deep and detailed color data, so I compromised with at least getting enough color for the stars and the field, hoping that the IFN would at least inherit some color from the background signal, which is most definitely the case here, and that's why the IFN has a brownish hue versus the more expected blueish cast - regardless, the IFN not only scatters blue light but is also fluorescing a broad red spectrum of light known as the Extended Red Emission (ERE), so the brownish hue acquired from my poor color data isn't completely off track.' That's from RBA's website: http://www.deepskycolors.com/archivo/2010/04/08/integrated-Flux-Nebula-Ifn-really-wide.html For what it's worth, my own IFN: Same as Rogelio's. So what's your conclusion? 'It isn't brown, it only looks brown?' Olly
  5. As an occasional writer of fiction (for fun) I penned a short story lampooning 'conceptual art.' A pretentious critic with a wealthy mother is looking at the Emin bed and muses, 'Although beyond his own means it would probably not be beyond those of his mother but, when he had once dropped a hint, she had replied that she had beds of her own and someone to keep them tidy.' I suspect that, for the purposes of the OP's question and this imaging debate in general, the answer would be 'yes.' Different systems will produce different results but not radically different ones, at least not radical enough to offend what most people mean by faithful colour. Consistency: a few years ago I posted up a Christmas dataset on here for folks to process as they wished. It was HaLRGB on the Cave Nebula, so a broadband target enhanced by Ha. Barry Wilson processed a rendition entirely in Pixinsight. My own was done in Photoshop. They were, to my eye, very much the same and, if forced to identify my own against Barry's in a blind test after a week, I'd have struggled. When Tom O'Donoghue and I decided to go after the IFN with our joint dual rig we were interested in the colour of this nebulosity. At that time it was usually rendered as fairly grey, but we threw a lot of RGB time at it and deliberately processed it independently. Our results were nearly identical. We found it to be brown, as do others who've gone after it. If we exclude colour-mapped images, are LRGB/OSC images really as varied as some suggest? Olly
  6. Very interesting on Affinity Photo. That may be good news for the OP and others. I bought PI a long time ago and also Warren's book. I even reviewed his CDs for Astronomy Now. There's no doubt that PI is a powerful tool but it's not its difficulty which makes me prefer Ps for most things, it's the very nature of the way the user operates within it. I cannot imagine processing without layers. (Someone quoted RBA earlier in the thread as agreeing that the layers method is a significant absence in PI.) In my workflow layers are, well, everything. I'll even import a PI modification into Ps and use it as a layer! (Bigamy!!🤣) I think any beginner should look at APP for sure. I initially joined the beta team but we were so busy here at the time that I just ran out of hours in the day. It's a very good program from a very good guy. Ironically, I've just finished a magazine article in which it gets an honourable mention in connection with this project: https://www.astrobin.com/g82xf7/B/?nc=user The original imager built up a mosaic in APP, automating the construction. The image geometry was sound but it was patchy and unevenly lit. I was asked to see if I could fix it, so I did so in my usual Registar-and-Photoshop workflow but using the APP image as a geometric template. It was a far from automated process, with each individual stitch checked by hand and fixed where necessary in Levels, using the Equalize adjustment to search for splicing errors. I don't doubt the rebuild could have been done in other programs but I just used the ones I know. (I very much doubt that any program can fully automate a 32-panel astrophoto, though.) This is a very good thread. Thanks to the OP. Olly
  7. That's fair enough. Actually I sometimes hanker for the old days when we didn't go so deep. The great thing is, you can still process your data as you like. The first time I saw the Elephant Trunk Nebula in an image, the only sign of nebulosity was in the glowing rim of the Trunk. I thought this was stunningly beautiful and eerie. As we all started to go deeper and deeper in H alpha, the whole area around the Trunk turned into a rather flat and featureless sea of red. The last time we did it here (this one was with Yves Van den Broek) I decided not to exploit the full strength of the Ha signal available but to emphasize its contrasts and hold it down so it really picked out the rim of the Trunk, as in that first picture I saw. It's also quite common for imagers to post multiple versions. I always do this if I'm posting an Ha version of a target not usually seen with the hydrogen structures visible. (The Double Cluster or the wide field around the Cocoon Nebula, for instance.) I photograph these Ha backgrounds simply because they are there. I don't think they make a better picture, just a different one. And I do like a faint Rosette seen as a diaphanous and discrete circular object. Deep images connect it in Ha all the way up to the Cone Nebula. We can have both. Olly
  8. Keep close to authentic look of object or create stunning, award winning and over processed photo? I'd still like to come back to the top of the thread, above. Could we have some links to images which are 'stunning, award winning and over processed' and to others which 'keep close to authentic look of object'? It's very easy to toss out these terms and present them as alternatives. It may not be quite so easy to justify them. Without specific examples of each we are (are we not?) just waffling? Olly
  9. The wider it is, the sooner you have to do a meridian flip... Olly
  10. I don't think PHD uses the FL information you give it for guiding. I can't be sure, but being a lazy so-and-so I never bother telling it! It calibrates perfectly happily though. Olly
  11. In truth it is hard for the camera to match the eyepiece on the stars and hard for the eyepiece to match the camera on the nebulosity. (But the camera will beat the eyepiece most of the time on stellar colour...) Olly
  12. Firstly we must distinguish false colour imaging from broadband imaging. In narrowband images, colours are ascribed to particular wavelengths, and so to particular gases, in the same way colours on a geology map might be ascribed to different rocks. (One common mapping is sketched below.) Your blue and brown Rosettes are in this category. Nobody claims that the Rosette really is this colour. This imaging has its origins in science but it has been taken up by amateurs, particularly those with no access to dark sites, because the narrowband filters used to isolate the gases also obstruct light pollution. That leaves broadband or 'natural colour' imaging, and here I think you're in danger of presenting us with a false dichotomy. I don't think it is true that highly exaggerated images are admired. Most experienced audiences want to see images in which there is a sense of a little bit being left in the data. An image should never look as if it has been pushed beyond what the data had to give. A good imager can wring every last drop out of the data while making it look perfectly relaxed. Nor should any processing technique leave visible traces. This applies particularly to noise reduction and sharpening. If you're an imager yourself you can spot the signature of processing excess a mile away. The holy grail is invisible processing. It is that which is most admired in the community. My processing aims are:
     - Invisible processing.
     - Colour close to what our eyes would see if they could see faint colours in the dark. (My Rosettes are always red and slightly bluer in the middle!) Star colour should agree with spectral class. (A good test.)
     - As much information to be found as possible: long exposures and careful stretching to find the very faintest signal, especially signal so faint as to be entirely imperceptible to the eye in any telescope. I see that as the whole point. So I will certainly exaggerate the contrasts between features if it helps distinguish structures in the target, but I won't invent any.
     - Colour saturation (as opposed to colour itself) is largely mood dependent!
     Vlad, if the Trapezium looks more like the lower image in your telescope you need to replace it!!! Olly
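For readers unfamiliar with how such a palette is built, here is a minimal sketch of the channel mapping, assuming three calibrated, stretched narrowband frames as 2-D numpy arrays. The SHO ('Hubble') palette used here is one common convention; the post above doesn't prescribe any particular mapping.

```python
import numpy as np

def map_sho(sii, ha, oiii):
    """One common false-colour convention: SII -> red, Ha -> green,
    OIII -> blue (the 'Hubble palette'). Inputs are 2-D arrays in 0..1."""
    rgb = np.stack([sii, ha, oiii], axis=-1)   # build an HxWx3 image
    return np.clip(rgb, 0.0, 1.0)              # keep values displayable
```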
  13. Probably nothing to worry about, Alan. Our OAG guide graph at 2.4 metres FL looked Himalayan when it was presenting the trace in pixels. In fact it was running at about 0.3 arcsecs RMS, which is brilliant. If you plug the guiding focal length (805mm) and the guide cam pixel size into PHD it will give you the guide value in arcseconds and you will probably be greatly reassured by this! (The arithmetic is sketched below.) It won't do any harm to use an OAG. It's probably a good idea, but I guide my TEC140 (1015mm/0.9"PP) with a guidescope. Olly
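The conversion PHD performs is simple arithmetic. A sketch follows; the 3.75 µm guide-camera pixel is an assumed figure for illustration, not from the post.

```python
# Image scale in arcsec/pixel = 206.265 * pixel_size_um / focal_length_mm.
def guide_error_arcsec(error_px, pixel_size_um, focal_length_mm):
    scale = 206.265 * pixel_size_um / focal_length_mm  # arcsec per pixel
    return error_px * scale

# With the 805 mm guiding focal length from the post and an assumed
# 3.75 um pixel, the scale is ~0.96 "/px: a scary-looking 1 px RMS
# trace is really only about an arcsecond on the sky.
print(guide_error_arcsec(1.0, 3.75, 805))  # ~0.96
```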
  14. That's exactly what I felt when I started. You get used to flitting back and forth, though. In reality I do just gradient removal and SCNR green to the linear data in PI and then all the rest is Ps. Gradient removal really is important, though. Gradient Xterminator is pretty good. As you get involved with processing you'll want to get as close to perfection as you can but, at first, a simple stretch and setting of the black point will give you a picture which you'll probably find wonderful just as it is. (There's a sketch of such a stretch below.) We all enjoy different aspects of AP to different extents. My favourite part is post-processing so I have a big box of tricks in that domain, but a basic picture isn't too hard to produce. Above all, have fun. Olly
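A minimal sketch of that first 'stretch and black point' step, assuming a linear image normalised to the 0..1 range; the black point and gamma values are illustrative only.

```python
import numpy as np

def simple_stretch(img, black_point=0.05, gamma=0.4):
    """Set a black point, then apply a basic gamma stretch.
    gamma < 1 brightens the faint signal relative to the highlights."""
    out = np.clip((img - black_point) / (1.0 - black_point), 0.0, 1.0)
    return out ** gamma
```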
  15. I use CS3. It's fine for AP. I'm not aware of any significant missing pieces, and the key plug-ins (notably Gradient Xterminator and Noel's Actions, AKA Pro Digital Astronomy Tools) work fine. I don't think CS2 is very different. At least one famous DS imager was still using it last time we were in contact and I have felt no need whatever to modernize my version. The key things I do outside Ps are:
     - Dynamic Background Extraction in Pixinsight. (Grad X for Ps is a good substitute.)
     - SCNR Green, also in PI. There is an alternative available for free or donation on Rogelio Bernal Andreo's Deep Sky Colors site. (Hasta La Vista Green.) A sketch of what SCNR's green clamp does appears below.
     - Resizing, composite imaging and mosaic making in Registar.
     - Starnet++ star removal. Mine operates within PI but I believe it can be run standalone. It is a powerful alternative to star masking for star reduction. (My star masking method appears in Steve's book. Fame! 😄)
     More seriously, the book is first class. Olly
     PS To answer Heather's question on Ps Elements, alas it is not remotely adequate for our purposes.
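SCNR's 'average neutral' mode has a simple definition, clamping green to the mean of red and blue. A sketch, assuming an HxWx3 float image in 0..1:

```python
import numpy as np

def scnr_green_average_neutral(rgb):
    """Suppress a green cast the way SCNR's average-neutral mode does:
    wherever G exceeds the mean of R and B, clamp it to that mean."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    g_clamped = np.minimum(g, (r + b) / 2.0)
    return np.stack([r, g_clamped, b], axis=-1)
```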
  16. I have an article in the pipeline on making megamosaics. The hard part is going to be generating the template which defines the image's geometry. Do you have a plan for how to do this? In using a rig like this for mosaics, one bonus will be losing the need for precise alignment of the optical systems. There will be no need for the edges of one channel's image to be aligned with the other channels'. They can overlap in different places, possibly with advantage. Most exciting! Olly
  17. I don't think it's quite as simple as that. The choice between Ps and Pixinsight is a complex one. While they are both graphics programs the user experience is radically different and someone who gets on with Ps may not get on with PI. I speak from experience! Ps became the world leader in its field because of its brilliant analogue interface, intended to get designers from the print and paper world comfortable in a digital environment. Its user interface is essentially a series of metaphors from the print and darkroom world. (Eg unsharp masking.) It is also very visual. Communication is Photoshop's genius and its absence is Pixinsight's worst failing. I would describe PI as autistic. Click on a help tab and you get, 'This parameter is expressed in Sigma units, with reference to the estimated mean background value,' or 'Sample simulations can be used to deal with sample irregularities that possess symmetric distributions...' Gee thanks, that was great. I'm sure the Ps developers are as geeky as the PI developers but someone at Adobe is translating them on behalf of the user. If I had to rely on PI exclusively I would give up imaging because I don't enjoy the PI environment at all. In Ps I can make a modification to a layer, see if I like it and then choose where and by how much to apply it, all the while looking at the result in real time. Both can do the job provided the user can master them. There's a big 'if' in there. My old and legit Ps CD has worked on every machine I've ever put it on, so Win 7, Win 8 and Win 10. I don't register it each time, I just run it. Olly
  18. Thanks for the post. It's one thing to know it, another to see it and feel it. I will try Moonshed's experiment to intensify the experience of feeling it. One of the first shocks I had in looking through a telescope came in 'seeing the Earth's rotation.' As a provider I sometimes have first-time observers here and it isn't unusual for them to say just the same thing. 'That's the Earth moving... Wow.' We have grown up with the idea but it took humans a long time to accept that the Earth is moving. As Einstein said, 'Common sense is the name given to that set of prejudices accumulated by age eighteen.' I wonder what other fragments of our present-day common sense will be discarded by future generations. Boy, would I ever love to know! Olly
  19. That's good. I would now use the Colour Sampler eyedropper to measure the background. I think you'll find the green channel a little high. (A numerical version of the same check is sketched below.) Olly
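A numerical version of the eyedropper check, assuming an HxWx3 float image; treating the dimmest 20% of pixels as background is an arbitrary illustrative choice.

```python
import numpy as np

def background_channel_medians(rgb, sample_frac=0.2):
    """Report the median R, G and B levels of the darkest pixels,
    much as you would probe an empty patch of sky with the eyedropper."""
    lum = rgb.mean(axis=-1)                      # rough luminance
    mask = lum <= np.quantile(lum, sample_frac)  # dimmest pixels only
    return [float(np.median(rgb[..., c][mask])) for c in range(3)]
```

If the green median sits above red and blue, the background has the cast described above.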
  20. Let me give some pointers on post-processing noise reduction in Photoshop. NEVER apply noise reduction globally. You have great selection tools, so exploit them. Use Select > Colour Range to pick regions of low signal (background sky and faint nebulosity) where the noise is at its worst and explore all the possibilities. Reduce colour saturation. Try Filter > Noise > Reduce Noise and experiment with the sliders. Zoom in to pixel scale and look for rogue pixels, some of which can be systematic. You might have a smattering of, say, rogue magenta pixels. Use Colour Range to select them and then experiment with Filter > Noise > Median. You can usually make them vanish. Try using layers: make a copy layer and use heavy noise reduction on the bottom layer. Back on the top layer use the selection tools to pick the noisy pixels and then use the eraser to remove them, revealing the smoothed layer beneath. You choose the ones to remove, so this is non-global. Global is always bad. I think that Photoshop offers the best noise reduction in the business precisely because it can be finely customised. (The principle is sketched below.) Olly
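A sketch of the non-global principle, with a thresholded mask standing in for Photoshop's selection tools; the threshold and filter size are illustrative values.

```python
import numpy as np
from scipy.ndimage import median_filter

def selective_denoise(img, signal_thresh=0.15, size=3):
    """Median-filter only the low-signal pixels of a 2-D float image,
    leaving bright stars and nebulosity untouched."""
    smoothed = median_filter(img, size=size)
    low_signal = img < signal_thresh        # the noisy background regions
    out = img.copy()
    out[low_signal] = smoothed[low_signal]  # apply NR only where selected
    return out
```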
  21. I'd say that it was well worthwhile. Sometimes (indeed usually) a lot of effort produces a small benefit but that's why amateur images go on getting better and better. Olly
  22. This is what you'll get, courtesy of Registar. Mine overlaid on a screengrab of your screengrab above. (!!!🤣) Olly
  23. Did you try Pixinsight's Dynamic Background Extraction? My own images are taken here under very dark skies but sometimes guests bring along data taken from heavily light polluted sites. A linear stack will open to reveal an almost uniform sheet of bright orange but, incredibly, DBE turns this into an astrophoto in a minute or two! (The underlying idea is sketched below.) Olly
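DBE proper fits a flexible surface through user-placed background samples; the simplest illustration of the idea is a least-squares plane fitted to the whole frame and subtracted. (Fitting to every pixel, stars included, is a crude simplification.)

```python
import numpy as np

def subtract_background_gradient(img):
    """Fit a first-order (planar) background to a 2-D image and remove it,
    preserving the overall brightness level."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    A = np.column_stack([xx.ravel(), yy.ravel(), np.ones(h * w)])
    coeffs, *_ = np.linalg.lstsq(A, img.ravel(), rcond=None)
    gradient = (A @ coeffs).reshape(h, w)
    return img - gradient + gradient.mean()
```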