
ultranova

M31 A different way of catching the light


Hi Everyone, I set my heart on capturing the Helix Nebula on Friday, but the atmospheric conditions were bad: all around me, up to about 30 degrees, the sky was bright orange, and I could see no stars above magnitude 2-3 in that 30-degree belt.

So I thought I would have a go at M31, an old favourite of mine, instead. Plus, I am trying to perfect a technique that will help a standard or astro-modified SLR capture better data once the subs are combined. That's the theory, anyway.

I have never tried it before but there might be something to it.

I put the SLR into monochrome mode at ISO 3200 to get the luminance channel. The idea is that not having the colour matrix active when the camera processes the data should (though I could be wrong) help with the chip noise at very high ISO, because the camera either doesn't process the colour data or discards it; I'm not sure how it works, to be honest.

Then I captured my colour subs at ISO 800, 400 and 200 by putting the camera back into normal colour capture mode.

Then I combined the monochrome images for the light subs, did the same for the colour, and combined the lot, tweaking along the way.

Without a direct comparison against the way I would normally do it, I don't know whether there would be much difference.

Has anyone else tried it like this? Do you think it's worth more trial and error to find out if it's a better way to capture?

The usual setup: Quattro 8-inch CF, modified Canon 1100D with CLS clip filter, NEQ6 Pro mount, MPCC coma corrector, QHY5 autoguider with PHD.

10 x 2-minute subs, ISO 3200, monochrome
10 x 4-minute subs, ISO 800, colour
10 x 2-minute subs, ISO 400, colour
10 x 2-minute subs, ISO 200, colour

No darks or flats (I will one day)

Processed in PI and PE

Total: 1 hour 40 minutes

The idea was to try to cut down on the noise and keep more colour.

All comments welcome.

Thanks for looking, Paul

[image attachment]

100% crop of the middle area:

[image attachment]

---

Thanks Tom, it does appear so; plus the added benefit is that it captures the light subs fairly quickly. I could have done with another 10 lights at ISO 3200; hopefully I will add them soon, weather permitting.

Paul

---

Nice image for sure, but I don't think switching your DSLR to mono mode is worthwhile.

The way I understand it is that it will only take what would be a colour image and convert it to B&W in camera. I think you would be better off using that time to capture more colour data and create a false luminance layer in PS, where you have more control.

I am, of course, happy to be proven wrong.

---

Thanks for the input, Lee.

The way I understand it is that you are better off letting the onboard camera processor control what it discards and rejects than letting another piece of software do it, especially in RAW mode, as the camera's algorithms work much better at producing the white/black tonal (monochrome) range than software like PS.

In other words, you are not letting PS or any other photo package manipulate what the camera's RAW processor is best at doing, if that makes sense.

Please let me know if this is not the case.

Thanks, Paul

---


I would suggest that is back to front. You are better off controlling the conversion in PS, where you have control over what is done, not in the camera, where there is no control. The other thing to remember is that RAW is just that, raw: unprocessed until you decide to process it, whether in PS, DSS, etc.

---

Thanks Darth,

Early days yet. As Lee pointed out, you can take a colour picture, strip it of its colour, then use the luminance layer like that.

The thing is, does having the colour data in with the luminance channel affect the false luminance channel that's created in a software package like PS? This I don't know.

All I know is that unless I try it and see whether it makes a difference, for whatever reason, I'll never know. You have to love this hobby.

---

Hi Lee,

I understand what you are saying. A RAW photo has all the colour data and luminance data combined, and inevitably the software will disregard the colour matrix to give you a pure monochrome image; combine these and you should get the same effect as shooting in monochrome in the first place.

The thing I am not sure about is the way DSS works on the RAW images directly from the camera. Even though DSS shows them as the RAW photo, in colour, because no software has manipulated the image to strip out the colour matrix, there does appear to be a slight difference. Don't ask me why; that's why I am experimenting.

Try it for yourself.

Don't expect huge differences; it's subtle, but as far as I can see there is a difference in the noise levels after combining in DSS.

---

Interesting thread :) As I understand it, the processing in the camera only affects the JPEG image. As said above, my understanding is that the RAW data comes straight from the sensor with just amplifier gain control (ISO). The RAW image is monochrome and is debayered in software. The Bayer mask on the image sensor cannot be disabled: it's a grid of coloured filters in front of the CMOS photosites that pick up the light. Various people have tried scraping or otherwise removing the Bayer layer, with very limited success unfortunately. A Bayer-less image sensor would make a great astro imager; it would be great to have all the pixels responding for NB imaging. I still think the Canon sensor with the DIGIC 4 processor makes a good OSC astro imager when cooled.
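Gina's point, that a RAW frame is a single monochrome plane until software debayers it, can be sketched in a few lines of Python. This is an illustration only (numpy, with made-up values and an assumed RGGB layout, not data from any real camera file):

```python
import numpy as np

# Illustrative 12-bit-style sensor values (not real camera data): one number
# per photosite, with no colour axis at all.
rng = np.random.default_rng(0)
raw = rng.integers(0, 4096, size=(4, 4)).astype(float)

def debayer_rggb(raw):
    """Crude nearest-neighbour debayer for an assumed RGGB pattern.

    Colour only appears here, in software: each sample is assigned to a
    channel according to which Bayer filter sat in front of it.
    """
    rgb = np.zeros((raw.shape[0] // 2, raw.shape[1] // 2, 3))
    rgb[..., 0] = raw[0::2, 0::2]                          # red photosites
    rgb[..., 1] = (raw[0::2, 1::2] + raw[1::2, 0::2]) / 2  # two greens, averaged
    rgb[..., 2] = raw[1::2, 1::2]                          # blue photosites
    return rgb

rgb = debayer_rggb(raw)
print(raw.shape, rgb.shape)  # (4, 4) (2, 2, 3)
```

The point of the sketch: `raw` has no colour dimension at all; the colour image only exists after the software step, which is why switching the camera's picture style cannot change what is in the RAW file.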

---

Hi Gina,

You are, as usual, correct; it would be nice, as you say, to have a Bayer-less Canon.

Unfortunately I am not as brave as you. I think you have taken the Canon mod to a different level.

Perhaps next time you're out shooting your next target with your Canon you can try what I have done, as you use DSS for stacking if I remember correctly. What have you got to lose?

In hindsight I should have done a side-by-side comparison; it's the only difference from how I normally take and process my images. I can't explain the very low noise levels, especially at ISO 3200: last year when I tried ISO 3200, and it was colder, the noise was extremely high, as you would expect.

Paul.

---

Quote (Paul): "I understand what you are saying. A RAW photo has all the colour data and luminance data combined, and inevitably the software will disregard the colour matrix to give you a pure monochrome image; combine these and you should get the same effect as shooting in monochrome in the first place."

A bit of a broad generalisation there, IMO.

The monochrome image you are getting: is this a JPEG? I suspect so, and thus you are letting the camera perform a process on the RAW data which would be better done in PS or another program; just think of the computing power of a DSLR compared to a PC.

I would be interested to see a side-by-side comparison next time you're out, to see the benefits of this, but I won't be trying it myself, as I would rather spend my time shooting all my subs in colour with my DSLR and creating a false luminance afterwards, ensuring I get the most sub-exposures in the time I have available.

---

That's the problem - we get so little imaging time that it's too precious for that sort of experiment. I agree that it would be interesting but I'm afraid it has lower priority than simply grabbing data. Unless we get a nice long run of clear nights, of course :D

---

I admire the image but, like Lee, am not convinced by the theory. It is impossible to stop the Bayer matrix from being active. It is active in the sense that it colour-filters the incoming light, and that's that. What lands on the red-filtered pixels is red light only. It doesn't matter what you call it; it is in fact red and has no green or blue light contributing to the signal. It cannot be a luminance layer. A luminance filter lets the full visible spectrum onto the chip, which is why it is faster and worthwhile. I don't agree, then, that you have used a 'new way of catching the light.' You have caught the same light as usual.

Again like Lee, I think you can do all this in post processing simply by extracting a luminance layer and processing that for contrast and detail. I would then process the normal image (the one shot colour RGB) without regard for detail, looking only for high colour saturation and low noise. I'd use noise reduction and not worry about loss of detail or the oily, polished look of too much NR. And then I'd apply the sharpened synthetic luminance as an L layer to that.
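The synthetic-luminance step described above can be sketched in numpy. This is a toy illustration, not Photoshop's or PixInsight's actual pipeline; the Rec. 709 weights are one common convention for extracting L from RGB, and the stretched L here is a stand-in for a separately processed luminance:

```python
import numpy as np

def synthetic_luminance(rgb):
    # Rec. 709 weights, one common convention for a luminance extraction
    return 0.2126 * rgb[..., 0] + 0.7152 * rgb[..., 1] + 0.0722 * rgb[..., 2]

def apply_luminance(rgb, lum, eps=1e-9):
    # Rescale each pixel so its luminance matches the separately processed
    # L layer while preserving the colour ratios: a crude LRGB-style combine.
    scale = lum / (synthetic_luminance(rgb) + eps)
    return rgb * scale[..., None]

rng = np.random.default_rng(1)
colour = 0.1 + 0.8 * rng.random((8, 8, 3))  # stand-in for the smoothed colour image
lum = synthetic_luminance(colour) ** 0.7    # stand-in for a stretched, sharpened L
result = apply_luminance(colour, lum)
```

The design mirrors the workflow in the post: noise-reduce the colour image for saturation, process the extracted L for detail, then put the L back on top.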

As far as I can see, switching off the colour after the light has gone through the Bayer filters gives you no new information at all, so I can't see how it would help. And yet again like Lee, I'm happy to be corrected if wrong.

I believe also that somewhere along the line the software, in camera or out, creates an extrapolated L layer. I think it is programmed to recognise boundaries which may be present in two colours but not a third, and will modify the third in order to preserve the continuity of the boundary. I know nothing much about this, though.

Olly

---

Hi Paul, that has come out really well; the colour looks fine, and the dust lanes show up very well too.

Peter

---

Hi Everyone,

Perhaps there is another way I can explain this; as I have already said, this is a work in progress.

The pictures below are of the corners and middle, using PI. Firstly, please excuse the coma, lol.

Both are of the same image: one converted from the RAW file using PI, the other with Canon's Digital Photo Professional. The only difference is that 255a was taken in RAW on the camera in monochrome; the other was processed from the colour image to get luminance. Both were stretched in PI.

Even the way different imaging software extrapolates the information can make an overall difference to the noise and quality of the finished article.

Be under no illusion: I am not saying there are huge differences, but maybe enough to make a little difference to the final picture. This is a work in progress, and I am trying to look into the very small variations in how camera and software work with each other to get the results we want.

Anyhow, this is the final result: not a lot of difference, but a difference nonetheless.

Paul

[image attachment]

[image attachment]

---

Very nice image. I don't know much about the technicalities, but the little I do know (and it is only a little) fits in with Gina's and Olly's explanations.

---

Paul, I don't think there is such a thing as a B&W RAW taken with a colour DSLR. My guess is that your camera flags the RAW image as taken in "B&W mode". Then your Canon software converts it to greyscale when importing the RAW data, as instructed by the attached EXIF data.

DSS also allows you to choose between debayering methods in the RAW settings. For example, you could choose "super-pixel" debayering, which combines all four pixels of each Bayer cell into a single colour pixel. Yes, this would reduce noise, but no more (AFAIK) than resizing your stacked image to 50% would.
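The super-pixel idea can be demonstrated on synthetic numbers (pure numpy, with Gaussian noise standing in for sensor noise; no real RAW data involved): averaging the two green samples in each 2x2 RGGB cell lowers the green channel's per-pixel noise by roughly 1/sqrt(2), at the cost of half the resolution.

```python
import numpy as np

# Synthetic flat frame: constant signal plus Gaussian noise as a stand-in
# for sensor noise (illustration only, not real camera data).
rng = np.random.default_rng(42)
raw = 1000.0 + rng.normal(0.0, 50.0, size=(200, 200))

def superpixel_rggb(raw):
    """'Super-pixel' debayer: each 2x2 RGGB cell becomes one colour pixel."""
    r = raw[0::2, 0::2]
    g = (raw[0::2, 1::2] + raw[1::2, 0::2]) / 2.0  # averaging halves green variance
    b = raw[1::2, 1::2]
    return np.stack([r, g, b], axis=-1)

rgb = superpixel_rggb(raw)
# Green noise drops to about 50 / sqrt(2) ~ 35; resolution halves to 100x100.
print(round(raw.std()), round(rgb[..., 1].std()), rgb.shape[:2])
```

This is the same trade as a 50% downsize, which is the comparison made above: fewer, cleaner pixels from the same captured light.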

Regardless of all this, it's a very nice image! :)

---

Hi Lewis, thanks.

I think people are misinterpreting what I am trying to say, and I am pants at trying to explain things, lol.

It's not about the fact that I took monochrome images on the camera in RAW; I already know it's the same shot and that it's the software that takes the colour out.

What I am so badly trying to explain is that which software you use to process the images from the camera, monochrome or colour, affects how much noise is present. So which software you use to get the RAW image off the camera to be worked on makes a difference to the noise levels before the frames are stacked.

What I found was that there was slightly more noise when the RAW was processed in Canon's own RAW converter than in PixInsight, and there are subtle differences in other astro software packages like DSS: once each reads and converts the image from the RAW format, there are slight differences.

I know stacking helps eliminate the noise, but surely it's better to start with a slightly better image in the first place; that's what I am trying to work on.
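One way to put numbers on a "converter A is noisier than converter B" comparison like this (a hypothetical measurement sketch, not anything actually used in the thread) is a robust noise estimate on the same star-free background patch of each converter's output, e.g. a MAD-based sigma:

```python
import numpy as np

def background_sigma(img):
    """Robust noise estimate for a star-free background patch.

    The median absolute deviation (scaled by 1.4826) approximates the
    Gaussian sigma while ignoring the odd hot pixel or faint star.
    """
    mad = np.median(np.abs(img - np.median(img)))
    return 1.4826 * mad

# Synthetic stand-ins for the same patch out of two different RAW converters
# (hypothetical numbers: "converter A" noisier than "converter B").
rng = np.random.default_rng(7)
patch_a = 100.0 + rng.normal(0.0, 12.0, size=(100, 100))
patch_b = 100.0 + rng.normal(0.0, 10.0, size=(100, 100))
print(background_sigma(patch_a), background_sigma(patch_b))
```

Measuring each sub this way before stacking would turn "it looks slightly noisier" into a repeatable number, which is exactly the kind of side-by-side evidence asked for earlier in the thread.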

It probably still doesn't make sense, and I am going to give up boring you good folks with this waffle. I think, to save the confusion, next time I'll just post the image and no writing, lol.

Thanks for your patience, everybody.

Best regards, Paul

---

Not boring at all and I can see what you are saying now, though I have no knowledge of how it works since I don't image with a DSLR. It's important to try new techniques. Good for you.

Olly

---

I can see what you are saying and trying to do, but here are the facts about the Canon cameras...

Gina had this right: whatever picture settings you have set, mono or otherwise, are ignored when shooting RAW. You can test this in Digital Photo Professional using the pull-down menus for shot settings and picture style; only JPEG images use the settings and are processed in camera (this is mentioned in the camera manual). The display on the back of the camera uses a JPEG converted from the RAW, for display purposes only.

Downloading the RAW file with different software again has no impact on the original file; only the software processing makes a difference, whether you use DPP, Lightroom, PS or PixInsight.

The so-called colour matrix is a layer of coloured filters in front of the sensor; it cannot be deactivated, as it is a physical thing. Here are Canon's web pages on the subject: http://cpn.canon-europe.com/content/education/infobank/capturing_the_image/photo_sensors.do

http://cpn.canon-europe.com/content/education/infobank/capturing_the_image/ccd_and_cmos_sensors.do

A good PDF on the subject: http://www.robgalbra...White_Paper.pdf

I hope that clears a few points up for people :grin:

---

Not sure I understand the technical detail, being a die-hard planetary imager, but I love the shot. Well done.

---

Thanks Olly,

Much appreciated. That's the problem when you're trying to explain to other people what you are trying to do and put it into some sort of perspective, and I am no good at that.

I'll keep trying different techniques with the various astro packages to see the different results, mainly around the RAW information and how the different processing pipelines convert the raw image.

This is of course for DSLRs; for the likes of FITS files, that's probably a whole different ball game and not one I want to get into. But then again, I would imagine that even FITS files might be interpreted slightly differently by the various software packages.

---

That's what I am trying to say, in a long-winded way admittedly.

I can see a difference in how PixInsight, DSS and Canon's Digital Photo Professional deal with the raw image from the Bayer matrix on the camera.

Just converting the RAW into a TIFF file and doing a simple stretch of the image in monochrome, without the colour data, I can see a difference in the noise levels between the images. As a previous post shows, no other manipulation of the image was used, just a simple stretch.

Now, if I were going to stack these images, I would rather start with frames from the software package that gives me the least noise per converted image.

And as I have already said, the differences are very subtle, but there is a difference.

---
