
The last of my images from France.

Taken with an SW 150mm Esprit, QSI683 on a 10 micron GM2000HPS.

There were some unexplained issues with this image and I didn't get as much colour data as I would have liked as it was my last image on the last night.

M78 LRGB.jpg


Great capture. You will see that with careful processing, much colour and detail can be revealed. That's the challenge of AP. When viewed on my mobile device, the lower bright neb seems blown out, but the upper neb seems spot on. Beautiful.

Thanks for sharing


2 minutes ago, steppenwolf said:

This is a rather tricky object but I do like this image. What exposures and binning did you use?

Thanks steppenwolf. LRGB: 170:50:50:50, all unbinned. I never use binning; I might miss good seeing :) 

 


Nice work on this object - the dust is very nice, especially around the top feature.  Colour should not be an issue - even if you do not have enough, just give it a little nudge on the saturation and it will be there.  The detail is all in the lum, and that looks good.

I agree with Wim (I am viewing on a fully calibrated monitor) that it is just a tad bright.  I also agree with you that it is a bright area, so it comes down to taste.  But a little more work around the lower bulb will likely reveal some more detail, and possibly the offending star(s) hiding in there :) This is [part of] an old image of mine at lower resolution; I need to go back and improve it, but it does reveal a little texture and detail in that area.

2016-12-16_19-07-38.jpg

If I look at your image I can see there is some detail in there in terms of variation, but the lum has saturated it - with 170-second subs, though, you should be able to go back to the original data and pull it out.

2016-12-16_19-16-56.jpg

Not detracting from a nice shot though.  Hopefully just useful advice.  There is detail there which can be seen; you just need to process that zone differently from the rest of the image.

Cheers, Paddy


Yes, I think that layering might reduce the saturation as well. I strongly suspect that there is structure in that core somewhere. The noise levels in the hard part, the dusty bridge etc, are remarkably low. Really nice.

For the colour, a partial and iterative addition of the L to the RGB, with a boost in saturation at each iteration and a slight blur on the L, might boost the colour noiselessly. In the final iteration of L you wouldn't apply the blur, of course. (Rob Gendler's website for this one, I think.)
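In numpy terms, the idea might look something like this. It's a rough sketch only: the function name, blend fraction, saturation factor and the cheap box blur are all my stand-ins, not a transcription of Gendler's actual method:

```python
import numpy as np

def box_blur(img):
    """Cheap 3x3 box blur via shifted copies (stands in for a soft Gaussian)."""
    acc = sum(np.roll(np.roll(img, dy, 0), dx, 1)
              for dy in (-1, 0, 1) for dx in (-1, 0, 1))
    return acc / 9.0

def iterative_lrgb(lum, rgb, iterations=3, l_fraction=0.4, sat_boost=1.15):
    """Blend luminance into RGB in several passes, boosting saturation each pass.

    lum is a 2-D float array in [0, 1]; rgb is H x W x 3 in [0, 1].
    Every pass but the last uses a slightly blurred L, so saturation can be
    lifted without carrying luminance noise into the colour.
    """
    out = rgb.astype(float).copy()
    for i in range(iterations):
        l = lum if i == iterations - 1 else box_blur(lum)
        # Partial luminance replacement: nudge pixel brightness towards L.
        cur = np.clip(out.mean(axis=2, keepdims=True), 1e-6, None)
        out = out * ((1 - l_fraction) + l_fraction * l[..., None] / cur)
        # Saturation boost: push channels away from the per-pixel mean.
        mean = out.mean(axis=2, keepdims=True)
        out = np.clip(mean + (out - mean) * sat_boost, 0.0, 1.0)
    return out
```

The point of the blur on the intermediate passes is that chrominance tolerates low resolution far better than luminance does, so you can push the colour without the noise coming along for the ride.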

Nice view of the hardest darned target I've ever come across!

Olly


I have recently imaged this darned object with exactly the same telescope and camera as you (but with a Mesu Mount) hence my query about exposures and I have more detail in the 'cores' but at the expense of the quality of the dust regions. I'm with Olly on this, it is a very difficult object to image (Olly and I have discussed this one before!), the dynamic range is wide but the object doesn't seem to respond well to normal layer blending processing! I'm not giving up on it but this is such a challenging object that I'm keen to learn its secrets! Patrick has an excellent balance in his image.

This is my current attempt but to say that I am not happy with it is an understatement and it is not the gear that is at fault!

M78_LRGB.png


1 hour ago, steppenwolf said:

the object doesn't seem to respond well to normal layer blending processing!

 

1 hour ago, ollypenrice said:

I think that layering might reduce the saturation as well

If I recall correctly (it was a while ago), I created three versions of this area, each at a different stretch: low > medium > high (natural level).  Then I processed them mainly with HDRMT/LHE, but you could try DDP/Curves or similar. This allowed the equalisation of the range at a lower stretch, which I then layered in.  I think in your case, Steve, you are not too far away; it looks like it's just about refining what you have done.  All that said, if it is blown out then it's HDR time and you can't recover it.  Just a case of trying and finding out.  Paddy
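The three-stretch blend can be outlined in a few lines of numpy. This is a sketch of the general idea only - the asinh stretch, the k values and the weighting scheme are assumptions, not the exact HDRMT/LHE workflow described above:

```python
import numpy as np

def stretch(linear, k):
    """Simple asinh non-linear stretch of data in [0, 1]; larger k is harder."""
    return np.arcsinh(k * linear) / np.arcsinh(k)

def blend_three_stretches(linear, k_low=5, k_mid=30, k_high=200):
    """Combine low/medium/high stretches of the same linear frame.

    The hard stretch supplies the faint regions; the gentle stretch keeps
    the bright core below saturation. The medium stretch doubles as a
    smooth per-pixel weight between the two.
    """
    low = stretch(linear, k_low)
    mid = stretch(linear, k_mid)
    high = stretch(linear, k_high)
    return np.clip(high * (1 - mid) + low * mid, 0.0, 1.0)
```

Because the weight itself comes from the data, bright cores slide smoothly towards the gentle stretch with no hard seam to hide.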


Thanks for all the comments. Actually, I was less worried about the brightest core and more about something else. I can stretch the core to show more detail, but doing so makes that part as dark as some of the dust lanes, which in reality it is not. Originally I made the brightest part around 95% of saturation, but it seems you guys are keen to show every bit of detail at the expense of reality, so here you go :) 

Adrian

M78-RGB-LRGB2.jpg


9 hours ago, CCD Imager said:

it seems you guys are keen to show every bit of detail at the expense of reality, so here you go :)

I prefer that. Very nice indeed.  We could argue about 'reality', of course. :BangHead: Why consider a picture with a largely featureless white core to be more 'real' than one which shows the details contained within it? Reality (in terms of the recording of true brightness) is represented by the linear image. Once you stretch, you exchange that 'reality' for one which presents more information about the object.  The more effective the stretch, the more of this second, and to me more interesting, reality you'll reveal.

I really must get back to this target. It's so beautiful but so infernally tricky.

Olly


Very nice image - I prefer the final version you did showing the detail.  Very nice work.  I have been trying to image this target for a while now, but I have to wait for opportunities to get to a dark site, and it's normally a bit low whenever I do. Whilst I got some reasonable Lum, the RGB is proving somewhat difficult to get; yours looks much better than mine on this front.  

Carole 


Olly, it's a question of how far you go with the selective non-linear stretching and other enhancements - how much artistic licence is allowed? I do know of a well-known astroimager of many years' standing who made a large mosaic but missed one section due to bad weather, so he copied and pasted a section of stars to fill in the missing data - acceptable?

Have you read Sara Wager's article in the November issue of AN? Food for thought.

You do get the feeling that the better astrophotographs are from Photoshop-savvy people - a pre-requisite these days :) I'm at a slight advantage.....


I have been discussing this issue with CCD I for some time. And as with all things it is a continuum, with simple changes such as non-linear stretches being considered good and the pasting of details from someone else's image (professional observatory/Hubble) being not acceptable. Where do you draw the line, and at what point does the image become more Photoshop/PixInsight and less real data? It is obviously down to the person doing the processing. However, there are some images that people wow and clap at, but the colour saturation is way over the top and the image is made up of layer upon layer upon layer. What relationship is there between that and the object itself? I would say very little. I am not arguing for dull, boring images, but some of the super smartie-stars images are just excessive slider tweaking.

When you get brilliant data, as we did from France, then there is less need to overcook the images. It is more tempting, though, as there are no light-pollution backgrounds to limit the extent to which you can process. Ironic: better images need less processing but are easier to over-process....

I think I would prefer a mixture of the two: the naturalness of the first image, which tries to maintain the right relationships between the details, but with a bit more detail from the second. But each to their own.


If the day ever arrives when we can tune our bionic eyes to accept the huge dynamic range and wide gamut of light wavelengths then we won't need to adjust the data; until then we have no choice (if we want to 'see' what's actually in the data). :)

ChrisH


Just now, ChrisLX200 said:

until then we have no choice (if we want to 'see' what's actually in the data). :)

and photographers will go out of business :(


15 hours ago, CCD Imager said:

but it seems you guys are keen to show every bit of detail at the expense of reality, so here you go :)

 

6 hours ago, ollypenrice said:

 We could argue about 'reality' of course

Hi Adrian,

The old science vs art chestnut :) 

Before I get into this (and I have read @swag72 Sara's article, which I largely agree with, but not 100% - where's the fun in that?), there is a certain amount of responsibility we have to be accurate and realistic.  But that needs to be quantified: 25 years ago film-based AP was more the norm and you captured what was on the film (I've never done it personally).  The point is that 'realistic' is restricted/enhanced by the technology of the day.  Thus reality is relative.

I have had this conversation many times with people from both sides of the fence.  My view is: if you're a pretty-picture person, great; if you want science, equally good. I think the reality is somewhere in the middle - always has been, and will be for quite some time.  Personally I do not fool myself into thinking I am in either camp.  I work the data I have to provide the most information I can within acceptable boundaries and tolerances.  Therein lies the crux: acceptable boundaries and tolerances shift as technology and understanding do; we are constrained or enabled by our times!

There is far more to this, and minds greater than mine would need to elaborate on some of my points; I know enough to at least make them (though I may struggle to defend some in real depth :) ). A great example is actually in your posts - a later post to @ChrisLX200 talks about the CCD being the limiting factor, while the quoted one talks of 'at the expense of reality'. You're right that the CCD is limiting, and the range it captures needs to be extended out to emulate that perceivable by the human eye.  By how much?  I don't know, but I suspect given enough HDR adjustment time it would be closer to your second image than the first.  I believe the statement on reality (specific to this scenario) is less likely to be accurate.  Again, though, the truth may well lie somewhere between the two.

If science is your aim then you have different criteria to follow.  They would include, but not be limited to, adjustments for light extinction and atmospheric impact at one end of the scale, through to the other end which, subject to the target's distance, could start to account for redshift and possibly even Lyman-forest effects.  You could even argue (more so with galaxies than local nebulae) that redshift via look-back time should be accounted for; this look-back time would then advise you on how much to shift the entire palette to be 'non-shifted'.  The first scientific question to answer there is: do we present it as it is, or as we would see it once these criteria have been evaluated?  

When it comes to 'reality', is that not lost the moment you step outside any of the above criteria in the strictest sense?  Any sort of range compression (which is an inherent feature of current-day CCDs), non-linear stretching routines, wavelet transforms, star reduction, etc. could be argued to take you even further from being 'scientific' or accurate.  In short, if you are making a pretty picture then it could be argued it is not scientifically accurate.  If you flew towards IC434 it would not be red; we see a concentrated version of the colour condensed into a small area, and if we ever got there it would be dull and likely lacking any discernible colour up close!

But then again, we are working in the range of your camera to either appeal to or be recognisable by your eye.  An 8300 has 65k levels of sensitivity, far less than the human analogue eye.  Technology is again important here, though.  New-generation cameras will likely exceed this in time, taking the range into millions, and will have advanced HDR capabilities.  When this happens it may well be possible to see M42 from the outer fluffy detail to the Trapezium all in one shot.  Is this inaccurate, or just as accurate as the technology you are using?  You could even argue the data is actually 'corrupt' the moment you capture it into any HDR range different to that of the human eye.  The range of light is compressed into a small finite range far below what we can perceive; as processors we try to reverse that process.  How accurately can we do it?  No idea - never tried to measure.  It is, though, an inherent fact of the data we have.  Based on the logic used for 'reality', all shots of Orion should have the core totally blown out, shouldn't they?  It is brighter by many factors.  We would only see the Trapezium if conducting research-type work or specifically shooting it (in which case you would not see the nebula, due to the difference in apparent brightness).

Further on the science side, consider: 'Photography is not about the thing photographed. It is about how that thing looks photographed.'  Maybe a little profound, but consider that you are comparing two very different imaging systems.  The eye has a central resolution of 130 MP+ but varying degrees of sensitivity away from the centre.  Colour recognition is not dissimilar to an LRGB shot, with L dominating over C (124:6 MP), and the optical system also discards, or pays less attention to, what it does not need.  When we make a picture we are asking the eye to do something actually contrary to its design.  Ironic :).  We want it to perceive all the data in an image simultaneously and discard none.  This is just a point of interest - many good write-ups exist and I am not an ophthalmologist! 

On the art side, again the boundaries-and-tolerances issue dominates for me.  However, as per Sara's article, this is where our responsibility lies, though I think it is two-fold.  1. The old film shots were limited by technology; current CCD images are too - less so, but they are.  Our job is to try to overcome these limitations and constraints and produce a nice picture that is acceptable to the eye.  Is this art?  For me, yes: it should be pleasing, of interest and within certain tolerances (I'll park abstract art etc. as I am no art buff!).  The word 'art', though, extends boundaries beyond the technical and into the artistic.  You could argue NB is artistic the moment we select the palette.  Back on point, though, we have to work within currently acceptable boundaries. 2. Are boundaries there to be challenged? This is the purpose of science.  You discover by experimenting, and if the experiment is proven it is then accepted as the norm.  Pushing the boundaries this way results in many more failures than successes; history demonstrates this all too well.  I don't of course mean just challenging them with no reason or substance - it will need to stand up to scrutiny.  But if proven, slowly it becomes adopted as the norm and you are then adhering to this rule.  The Hubble palette itself is a great example of us trying to adhere to something 'made up' with little relevance to what the object looks like.  But there is science in that data still - lots of it!

I do not claim to be right in anything I have said (firmly on the fence :) ), but I feel that until such time as we have a camera pointing at the skies that really can emulate the capability of the human eye - maybe even with options such as AO to deal with atmospheric aberrations - we will find it very difficult to accurately (down to the last true few %) present what we have imaged.  Until then, we have a responsibility to work the data to the best of our abilities and produce images that are as accurate as we can, whilst ensuring we still look for new ways, either through technology or through our processing, to push these boundaries further and find more in the data.  Sometimes the art leads to the science - the Hubble palette, pleasing as it is, also provides science data through delineation of colour ranges etc.  It will always be a quandary.  I consider older film to likely be less accurate than the human eye due to its lack of any ability to capture HDR (I am not aware of any way to recover it through post-processing?); the eye takes less of the bright and more of the dim and adjusts, whereas a film or CCD camera on a long exposure has no such feature and will simply blow out once its well-depth equivalent is reached in any given area.  Given this fact, we are all really producing art that leads to science! The reviewing and standardisation of the art processes/technology we use leads inexorably to science.  Be responsible with your work whilst pushing the boundaries now and then; if it works, great, if not, undo. Repeat enough times and you will have nice pictures, you will realise more potential from your data, and slowly science will happen :) as a result.

Cheers

Paddy

 


I think the discussion is getting over-complicated. We are all agreed that nothing should be invented - that is to say drawn or painted in. So what we are talking about is how to reveal the maximum amount of information contained in the data.  If it is not contained in the data I don't want to see it in the image. If it is, I probably do. (Only 'probably' because it can be interesting to drag out the very faintest signal but we don't have to have lashings of Integrated Flux around M81 and M82 in every image we see of this target, or have an Ha background to every image of the Double Cluster or the Cocoon.)

My standard analogy is with archaeology, where the expert delicately tries to remove everything around the precious discovery. This is very unlike Michelangelo removing all the stone surrounding La Pietà!!!

:icon_biggrin:lly

 


Looking at this logically, I wouldn't have thought one person on here would image M42 without exposing the core with shorter exposures and combining, therefore I don't see any harm, when stretching an image, in stretching some of it more and other parts less.  The original data has not been altered; it is only the amount by which it has been stretched in various areas.
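That short-plus-long combine for a core like M42's can be sketched in a few lines (a minimal illustration; the clip threshold and names are my assumptions, not anyone's actual pipeline):

```python
import numpy as np

def hdr_combine(short, long_exp, exposure_ratio, clip=0.95):
    """Fill clipped highlights in a long exposure from a scaled short one.

    short and long_exp are linear frames of the same scene; exposure_ratio
    is long exposure time / short exposure time. Where the long frame has
    saturated, substitute the short frame scaled up to the same flux scale.
    """
    return np.where(long_exp >= clip, short * exposure_ratio, long_exp)
```

In practice a soft transition mask is used rather than a hard threshold, to avoid a visible seam, but the principle is the same: nothing is invented, the values simply come from whichever exposure still holds them.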

Carole 

 


17 minutes ago, ollypenrice said:

I think the discussion is getting over-complicated

Me, complicated? My counsellor says otherwise, I'll have you know :) 

Paraphrasing the above: sometimes what is in the data needs to be gently exposed, as you point out.  Sometimes the feature can be lost and needs alternative approaches (HDR) to be realised (e.g. the Trapezium in M42).  Where the line of reality lies is debatable, but creative ways to realise it lead to science - even if that is only finding a way to replicate the human eye's range. (Wish I had just typed this in the first place.)

I think.....:icon_albino:

Paddy

 


Paddy, I can't argue with anything you have said. One point about the human eye is that it can distinguish up to 14 stops of light, whereas the best cameras manage only up to 11 stops (there is some marketing hype around, though!). I can't see cameras matching the dynamic range of our eyes any time soon. But one area we do fail in is night vision: our rods are pretty useless, and the ability to discern colour at a truly dark site is gone. However, the objects we image emit light at certain wavelengths represented by a colour, which, as astrophotographers, we should try to replicate.

Currently astrophotography is all about artistic licence to do whatever you want: you can make faint detail bright and vice versa. I have watched the changes over the last 20 years, and given that an astrophotographer has done the basics of getting an image at the scope, the highly processed images done by Photoshop-knowledgeable people are the most admired around the globe. It is becoming an essential skill of the astrophotographer! I have three long-standing astrophotographer friends who frown upon this - 'bodged', 'slider tweaker', 'Xmas tree lights' and more; I've been called all names :) I've tried to explain, but to no avail; I feel they are being left behind. They have excellent kit, great technical knowledge of astronomy, years of experience, yet only do the minimum processing. I sometimes think they are using their lack of Photoshop skills as an excuse :) OTOH, there are images out there that are glaringly over-processed, like Oxford St at Christmas, that will put off these doubters. My own opinion is to selectively process to make an image look better but try to retain a sense of reality. You have now seen three I took from France, so you know where I am coming from :)
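For anyone wanting to put numbers on those stops figures, the conversion is just powers of two (a back-of-envelope helper, not a radiometric model; it treats quantisation levels as an upper bound on dynamic range):

```python
import math

def stops_from_levels(levels):
    """Each stop is a doubling of light, so N levels span at most log2(N) stops."""
    return math.log2(levels)

def contrast_ratio(stops):
    """Dynamic range in stops expressed as a scene contrast ratio."""
    return 2 ** stops

# A 16-bit sensor's 65536 ADU levels span 16 stops at best; an 11-stop
# camera resolves a 2048:1 scene, versus 16384:1 for a 14-stop eye.
```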

Adrian


Archived

This topic is now archived and is closed to further replies.
