

Red glow Canon 700D


Maxrayne


Afternoon all

I've managed to get out on a couple of nights recently and put the 700D to work. I've noticed, though, that once I've stacked the images (both 1-minute and 90-second frames), the centre of the final image has some species of red "glow". This happens whether I use darks or not. Does anyone have any ideas? I've attached an image to show you what I mean. The only edit has been a crop for the edges and an exposure increase of 1.5 stops. This is at ISO 800, 1-minute subs, with darks included in the final image.

 

 

Red glow-1.jpg


Can't see the image, unfortunately, but it might be a combination of vignetting and light pollution.  Darks won't help, if so, but you may find flats will make a difference, if you are able to capture some, even now?


5 minutes ago, almcl said:

Can't see the image, unfortunately, but it might be a combination of vignetting and light pollution.  Darks won't help, if so, but you may find flats will make a difference, if you are able to capture some, even now?

Just reuploaded the image as it kept corrupting for some reason.

I've not taught myself flats yet, unfortunately, but it's definitely on my to-do list.


OK, yes, can see it now.

There's definitely vignetting, but there may be other things going on as well. 

Was the viewfinder covered with the little rubber cap that comes on the strap? Was live view turned off? And was the Canon noise reduction function disabled?

Setting up to do flats can take a bit of time - an evenly illuminated white(ish) surface is needed - but once organised, 30 flats can be rattled off in a couple of minutes or so, aiming for a roughly 50% histogram. They will make a difference.
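A minimal sketch of that 50% check, assuming the Python rawpy package is available and the flat was saved as a Canon CR2 (the filename here is just a placeholder):

```python
# Rough check that a test flat's median lands near 50% of full scale.
# Assumes rawpy is installed; "flat_test.CR2" is a made-up filename.
import numpy as np
import rawpy

with rawpy.imread("flat_test.CR2") as raw:
    data = raw.raw_image.astype(np.float64)
    fraction = np.median(data) / raw.white_level   # white_level = sensor full scale

print(f"Flat median sits at {fraction:.0%} of full scale")
# Roughly 40-60% is the usual target; adjust exposure or panel brightness otherwise.
```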

 


7 minutes ago, almcl said:

Was the viewfinder covered with the little rubber cap that comes on the strap? Was live view turned off? And was the Canon noise reduction function disabled?

Unfortunately I don't have the strap and hence no cap; however, I can try taping that off. As far as I know LV was off (the screen was rotated back to the "closed" position, which I assume kills it), but I'll have a look at the settings to make sure it's disabled anyway. As for NR, both high ISO NR and long exposure NR are set to off.

I'll have a proper look at doing flats though, as it's something I need to do anyway.


4 minutes ago, Alien 13 said:

Was this taken with a scope or camera lenses? If the latter, you can often get by just using the generic camera lens correction data instead of flats.

Alan

Hi Alan

Just a camera lens. I tried the correction stuff in Lightroom with the same results :(


I think I would explain this as follows:

It is a flats issue combined with light pollution and the read noise floor. On most DSLRs light pollution gives a red cast to the image. The central part of the image usually contains more signal than the rest due to vignetting, matching the image shown. So everything checks out so far, except the edges being less red than the center - and this is where the noise floor steps in. Read / dark current noise is random in nature and does not depend on the quantum efficiency of the sensor, so in a color sensor it will not have any sort of cast - it will be mostly gray (think snow on a TV with no signal, but in all three channels equally).

The light pollution signal is strong enough to overpower the read noise in the center of the image, but due to vignetting its intensity falls off towards the edges, and in this particular case it drops below the read / dark current noise floor - so the background becomes more grayish than red there.
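A toy numpy simulation of that idea (my own sketch with made-up numbers, just to illustrate the reasoning): a red-leaning light-pollution signal is scaled by a vignette, while the noise floor is colourless and uniform, so the red excess survives in the centre but not in the corners.

```python
import numpy as np

h, w = 200, 300
y, x = np.mgrid[0:h, 0:w]
r = np.hypot((y - h / 2) / (h / 2), (x - w / 2) / (w / 2))
vignette = np.clip(1.0 - 0.8 * r**2, 0.0, 1.0)        # bright centre, dark corners

lp_rgb = np.array([1.0, 0.55, 0.45])                  # assumed red-leaning sky glow
rng = np.random.default_rng(0)
noise = rng.normal(0.0, 0.15, size=(h, w, 3))         # "gray" noise: equal in all channels

frame = vignette[..., None] * lp_rgb + noise

centre = frame[h//2-5:h//2+5, w//2-5:w//2+5].mean(axis=(0, 1))
corner = frame[:10, :10].mean(axis=(0, 1))
print("red excess (R - G) at centre:", centre[0] - centre[1])  # clearly positive: red cast
print("red excess (R - G) at corner:", corner[0] - corner[1])  # near zero: looks gray
```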


1 hour ago, vlaiv said:

It is a flats issue combined with light pollution and the read noise floor. …

Is it a fair assumption to make that an LP filter would help some? Along with doing flats of course


Just had a thought (contrary to popular opinion, this happens occasionally) - could it also have anything to do with the dew shield? It's not a controllable one and plugs straight into USB and kicks out a reasonable amount of heat once it's warmed up


1 hour ago, Maxrayne said:

Is it a fair assumption to make that an LP filter would help some? Along with doing flats of course

I would first try flats. This should sort out the problem of uneven background.

LP is much tougher to fight. The color cast can be effectively removed in post-processing - just bring the black point (background) to an even, very dark gray value and then color balance. Remove any gradient present in the image. The problem with LP is that it creates a lot of noise, and the only way to fight that is to get more data (you can of course change location and shoot under darker skies - that is the best way to deal with LP).
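As a rough illustration of the "bring the background to an even dark gray" step (my own sketch, not a prescribed recipe), assuming the stack has been loaded as a linear float RGB array:

```python
import numpy as np

def neutralise_background(img, pedestal=0.02, percentile=10):
    """img: linear, background-dominated RGB stack, shape (H, W, 3), float."""
    out = np.empty_like(img)
    for c in range(3):
        sky = np.percentile(img[..., c], percentile)   # per-channel background estimate
        out[..., c] = img[..., c] - sky + pedestal     # small pedestal keeps values positive
    return np.clip(out, 0.0, None)

# After this the sky background is a neutral dark gray and color balance can follow;
# a proper gradient fit/removal is beyond this sketch.
```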

LP suppression filters work to some extent, but it depends on other circumstances, like the type of light pollution. You are also using a camera that is not cooled, so other sources of noise can be present as well, like thermal noise and of course read noise. If these are substantial, removing some of the LP can help, but the question is how much it will improve your results. If funds are not an issue, then get an LP filter for your camera, but this can be somewhat expensive, and I can't be certain the results would be worth it.

37 minutes ago, Maxrayne said:

Just had a thought (contrary to popular opinion, this happens occasionally) - could it also have anything to do with the dew shield? It's not a controllable one and plugs straight into USB and kicks out a reasonable amount of heat once it's warmed up

I doubt that this is of any significance, but you can easily try - just image one session without the dew shield and see if things improve.


So that I'm understanding this right (my apologies, fighting brain fog with meds atm), my usual workflow is:

Import RAW images

Select the ones for stacking (including darks), and stack in either Sequator (my usual), or DSS

Import the final image into Lightroom

Set the crop and exposure

Set the white balance

Set black point using the curves tool

Other adjustments, noise reduction, etc.

 

 

In fairness, I do need to be grabbing much more data. Typically I do about an hour per target each session. Clearly that needs to change, as I'm obviously spreading myself too thin and need to concentrate on one target at a time. Looking at the skies tonight I might get a chance. I've already disabled LV and taped up the red "write" LED. Once I'm on target I'm going to tape off the VF as well. Hopefully those will help eliminate any light leak at least.

Thank you, you've given me a lot to do :D (or rather I have a lot to look into lol)


That processing flow is pretty much it, but I would change

(including darks) -> (including darks, flats and flat darks (or bias for this))

and reorder a few things, namely:

Set black point

Exposure / stretch

White balance

Crop

This is the order of things I usually use. Exposure/stretch I do with levels and curves, and I set the black point in that process, so the two can be combined into one step - this is in GIMP 2.10.
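A minimal numpy equivalent of the "set black point, then stretch" stage (my own sketch, not the GIMP levels/curves steps themselves), assuming a linear stacked image scaled to [0, 1]:

```python
import numpy as np

def black_point_and_stretch(img, black_percentile=5, stretch=200.0):
    black = np.percentile(img, black_percentile)                     # clip background pedestal
    linear = np.clip(img - black, 0.0, None)
    stretched = np.arcsinh(stretch * linear) / np.arcsinh(stretch)   # lift faint detail
    return np.clip(stretched, 0.0, 1.0)
```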


14 hours ago, vlaiv said:

That processing flow is pretty much it, but I would change (including darks) -> (including darks, flats and flat darks (or bias for this)) and reorder a few things …

Well I've started taking flats and I'm already seeing a difference.  Believe it or not I've found an app on Android called white screen that seems to work well for them. If I remember right, bias frames are with the lens cap on and shot at the fastest speed, same ISO as the subs?
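For reference, a hedged sketch of how the calibration frames usually combine (the standard arithmetic, not anything specific to Sequator or DSS; all inputs assumed loaded as float arrays):

```python
import numpy as np

def calibrate(light, darks, flats, biases):
    """darks match the lights' exposure/ISO; biases (or flat darks) match the flats."""
    master_dark = np.mean(darks, axis=0)
    master_bias = np.mean(biases, axis=0)
    master_flat = np.mean(flats, axis=0) - master_bias
    master_flat /= np.mean(master_flat)          # normalise so the flat averages ~1.0
    return (light - master_dark) / master_flat   # remove dark signal, then vignetting
```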

Going to sit down today and reorder my workflow as you've suggested, although I'm confused as to why WB is near the end. Would that not be one of the first things to do in order to get a correct exposure/stretch? 


2 hours ago, Maxrayne said:

I'm confused as to why WB is near the end. Would that not be one of the first things to do in order to get a correct exposure/stretch?

You are correct about that one, but let me explain why I put white balance at the end.

There is a difference between "artistic" and "scientific" approaches to processing (let's call them that; there is no established terminology).

I tend to think "scientific" - that would mean the following workflow:

- Get the linear data from the stack and subtract the background level in each channel (at this point I look at each channel as mono intensity data rather than a color, and I'm interested in removing the background signal)

- Next would be a sensor response transform to the XYZ color space - this can be done by measuring the photometric response on various stars and forming a transform matrix. This probably does not mean a lot to you, so I'll explain it a bit more. There is a standard XYZ colorimetric space that represents the response of an "imaginary" sensor to light wavelengths. Real sensors usually have a different response to the various wavelengths of light, and we are trying to transform the real sensor response into this "standardized" sensor response. You can think of it as just channel mixing, but the coefficients for each channel (new_r = A*r + B*g + C*b) are derived from star measurements - spectra of known star types are used, and one can calculate expected XYZ values, compare them to measured values, and derive the transform coefficients

- The next step involves separating color information from luminosity information (XYZ allows this by transforming to CIE Lab, using a and b as color and L as luminosity)

- We stretch only L (if we are doing LRGB imaging, we take L from the luminance channel and discard the L from CIE Lab above, because the luminance data is of better quality)

- We apply a and b to the stretched L

So in essence we do color balancing at the linear stage, prior to the stretch. But this method involves quite a bit of math and can't be done with regular image processing software (Photoshop, GIMP and the like) - see the sketch below for the gist.
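A condensed sketch of those steps in numpy/scikit-image (my own illustration; the camera-to-XYZ matrix below is a made-up placeholder - in practice it would be fitted, for example by least squares, from measured versus expected star colors):

```python
import numpy as np
from skimage import color

# Placeholder camera->XYZ matrix; real coefficients come from star photometry
# (e.g. np.linalg.lstsq over many measured vs. catalogue star colors).
CAM_TO_XYZ = np.array([[0.41, 0.36, 0.18],
                       [0.21, 0.72, 0.07],
                       [0.02, 0.12, 0.95]])

def scientific_color(linear_rgb, stretch=150.0):
    """linear_rgb: background-subtracted linear stack, shape (H, W, 3), roughly in [0, 1]."""
    xyz = linear_rgb @ CAM_TO_XYZ.T                    # per-pixel channel mixing
    lab = color.xyz2lab(xyz)                           # L = luminosity, a/b = color
    L, a, b = lab[..., 0], lab[..., 1], lab[..., 2]
    L_str = 100.0 * np.arcsinh(stretch * L / 100.0) / np.arcsinh(stretch)  # stretch L only
    return color.lab2rgb(np.stack([L_str, a, b], axis=-1))                 # color untouched
```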

Then there is the artistic approach - where colors are balanced according to the person doing the processing. One can choose to impart a certain tone to the image in order to create a wanted "feel" in the presented image. If you are processing some faint group of galaxies or a reflection nebula, you might choose cooler tones to create a sense of distance and isolation. If you are imaging a star cluster embedded in nebulosity where stars are being born, you might choose a warmer color cast to emphasize the heat of star creation, and so on - you know what I mean. You can adjust saturation to match your artistic vision (in the scientific approach all of these are determined by the data rather than left to creative expression).

Now, before you stretch you can do a color balance, but you will miss out on important data in your image. There is a lot of faint stuff in images that you don't really see before you do the stretch, and doing white balance before you even see what is there limits you in choosing the color cast you see fit for a particular object.

The longer you image a single target and the more subs you stack, the greater the dynamic range of the resulting stack - and when imaging faint stuff, most things are in the "shadows". Let me give you an example from one of my images (this is the L channel only):

image.png.2555639839d6b7547de7f9589692f091.png

This image has very high dynamic range, stacked from over a hundred subs. Left is what it looks like in the linear phase - right is just a rough stretch to show you what is there. If I tried to color balance prior to the stretch I would have no idea what color I want the spiral arms to be, since I can't even see the spiral arms, right?

If you do the color balance after the stretch you sort of lose the ability to do the scientific approach, but most people don't do it like that (too involved) and use the artistic approach instead. Here the loss of true scientific color balance does not matter, as people choose how to present their image.

This is why I would recommend color balance after the stretch - once you can see everything that is there in the image, you can assign the color balance you want to it.

 


36 minutes ago, vlaiv said:

You are correct about that one, but let me explain why I put white balance at the end. …

Just...wow. That's possibly the best explanation of something I've read for a while :) 

Now that you've put it that way, I can see what you mean. The "presentation" is indeed what we're after, I think. Although the scientific aspect IS interesting, that tends to be where people (generally speaking) "close down" and rapidly lose interest. Since I've not been using this method, instead opting to WB before stretching the final image, I have a feeling that I've quite possibly missed a fair bit. I think I have some work to do again whilst we're clouded out (AGAIN!). It's also another reminder to concentrate more on actually obtaining data, as opposed to grabbing what I can and "putting it out there".


This may be totally irrelevant but I have had similar results with 2 canon cameras and finally discovered it was due to shooting straight from live view. 

I solved it by disconnecting the power to the camera for a few seconds  once I was focused and framed. 

I don’t know if that will help but I thought I’d add my pennies worth! 

Good luck in solving it! 

Bryan 


15 hours ago, assouptro said:

This may be totally irrelevant but I have had similar results with 2 canon cameras and finally discovered it was due to shooting straight from live view. …

Hi Bryan, thanks muchly. I don't honestly recall it on my 550D, but I'll give that another go on this one. It might be that when I close the screen, it doesn't actually shut down as such. I might be wrong there, but it's worth a go.

I've taken some flat frames and reprocessed some images with them, and there's a definite improvement, so I'm starting to think it may be a combination of things and it's just a case of working through them one by one, as with any new camera.

 

On another note, I've just seen you're getting rid of your modded 450? I'm torn between this and a WO Zenith 73mm I've seen for £270 complete with rings and finder. Why do these things crop up in pairs?

