
What is the meaning of the term "sub"?


cjdawson


@4th asked the above question "What is the meaning of the term "sub"?"

For those who still don't know the answer:

A "sub" is a "sub frame", or in layman's terms, a photo.

When producing astro photos, a number of sub frames will be captured. These will then be processed using various methods like alignment, stacking, histogram stretching and various other techniques to produce the final output image.


Sub frames are also called light frames. In addition to this, you get dark frames, bias and flat frames.

So here's a quick run down of all of these types of frame.

Sub/light frames - pictures of the target, say a 5 min exposure @ ISO 800 of M42.
Dark frames - pictures taken using the same settings as the light frames, only the camera is put into a dark box so no light can enter the camera.
Flat frames - a picture of an evenly illuminated plain colour, normally white. It's a well-exposed picture of that surface, and what it does is capture the imperfections in the camera optics.
Bias frames - pictures taken at the fastest shutter setting of the camera, again in a dark box.

note: The flats need to be taken with the camera attached to the telescope and the focusser in the same position as when the subs are taken.  Don't move anything or it won't work right.

In addition, it is best if all of the photos are taken with the camera at the same temperature. Again, this affects the image quality.


So, how do all these images work together?

There is some debate about this, but so far my understanding goes something like this.

1. Light frames - take lots of these. I like to take sequences of 20-30 images; beyond that, there appear to be quickly diminishing returns. Of those, I'll probably only end up using the 16 best images on visual comparison.
2. Dark frames - I'll take 20 of these and keep them in a library.  It's specific to the camera.
3. Flat frames - I'll take 20 of these after completing the light frames.
4. Bias frames - I'll take 20 of these.


Currently, I've decided not to use all of those images in processing, but I'll take them anyway, so that I have them in case I discover something in the future.


The theory goes something like this.
Combine the dark frames together, averaging each pixel across the 20 images. This will produce a Master Dark frame.
Do the same for the flats and the bias frames, to produce master versions of these.

I'm currently imaging with a DSLR and am of the understanding that the dark frames also include the bias (the bias being the minimum exposure), so for now I'm putting the bias frames aside.

This leaves the 20-30 sub frames, the Master Dark and the Master Flat frame to work with.
The next step is to divide each sub frame by the Master Flat frame.

    Doing this will get rid of dust bunnies and vignetting.

Next step is to subtract the Master Dark Frame from each of the 20-30 sub frames.

    This will remove the camera noise from the image.

Finally, it's time to align each of the 20-30 image frames, then average them together to produce a stacked image.

The theory goes that doing all of this will give you a final output image with much less noise and fewer artifacts, which will allow much more detail to be pulled out of the image in post processing.
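The steps above can be sketched in Python with NumPy. This is a toy sketch, not a real stacking program: frame loading and alignment are left out, a plain mean stands in for fancier rejection stacking, and the dark is subtracted before the flat division (the conventional order).

```python
import numpy as np

def make_master(frames):
    """Average a list of calibration frames pixel by pixel."""
    return np.mean(np.stack(frames), axis=0)

def calibrate(sub, master_dark, master_flat):
    """Dark-subtract, then flat-divide, a single sub frame."""
    corrected = sub - master_dark
    # Normalise the flat to mean 1 so the division preserves brightness.
    flat = master_flat / np.mean(master_flat)
    return corrected / flat

# darks, flats, subs would be lists of 2-D float arrays, one per exposure:
# master_dark = make_master(darks)
# master_flat = make_master(flats)
# calibrated  = [calibrate(s, master_dark, master_flat) for s in subs]
# After alignment (not shown), average the calibrated subs:
# stacked = np.mean(np.stack(calibrated), axis=0)
```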


You don't have to do all the steps above by hand; there are programs such as Deep Sky Stacker that will do it for you.

 

updated:  removed the "Flat bias frames" in light of @ollypenrice's comment below.



It is when you add 2 pixels across and 2 pixels down together to form 1 bigger pixel - 4 pixels in total - so in theory 4 times more sensitive and twice the size. If you had 5 micron pixels on your camera and you bin 2x2, you will have 10 micron pixels, so you will get more arc seconds of sky on each super pixel.

It helps with getting better pixel scales and sampling rates on certain camera and scope combinations when imaging.
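A minimal NumPy illustration of 2x2 software binning (hardware binning happens on the chip itself, but the arithmetic is the same idea):

```python
import numpy as np

def bin2x2(img):
    """Sum each 2x2 block of pixels into one 'super pixel'."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

# A 4x4 frame of ones becomes a 2x2 frame of fours:
# bin2x2(np.ones((4, 4)))  ->  [[4., 4.], [4., 4.]]
```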

hope that helps :)


Whoa.

Unfortunately your definition of flat bias frames is wrong. In fact there is really no such thing as a flat bias frame. There can be a bias used to calibrate flats, but it is just a normal bias and it most certainly doesn't have any light getting to it when it is captured.

A flat frame is, as you say, a picture of an even surface. Now these flats do, themselves, need to be calibrated or they will inaccurately calibrate your pictures. The textbook says that you should take 'darks for flats.' This means that you would take a set of dark frames at the same setting as your flats and subtract them from your flats. There is nothing wrong with this but it would be laborious and is, mercifully, entirely unnecessary. All you need to do is make a master bias (a set of darks at the shortest setting your camera can take with all light excluded) and treat it as a dark for your flats.
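Olly's "bias as a dark for flats" recipe can be sketched in NumPy. This is a hypothetical helper, not any particular program's routine; `master_bias` and the flats are assumed to be float arrays already loaded from the camera.

```python
import numpy as np

def calibrate_flats(flats, master_bias):
    """Subtract the master bias from each flat (the 'dark for flats'
    role), then combine them into a master flat normalised to mean 1."""
    calibrated = [f - master_bias for f in flats]
    master_flat = np.mean(np.stack(calibrated), axis=0)
    return master_flat / np.mean(master_flat)
```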

If you want to use classical darks (I don't and nor do lots of other people) you need a set of lights, a set of flats, a set of darks and a set of bias. That's it. If you give all this to a competent stacking routine it will do the right thing with them, but you may need to put the bias in twice, once as bias and once as 'darks for flats.' It depends how the software has been set up.

The bias signal is contained in the darks, and a sensible stacking programme won't double subtract it. In the 'classical darks' calibration scenario the bias has only two functions. 1) It is used to scale non-temperature-matched darks to lights. 2) It can be used as a dark for flats as I describe above. You may need to tell the software to do this by putting the bias where the 'darks for flats' would normally go.

The alternative to classical darks might best be left to another thread but I do far better without normal darks than with them, even with CCD. Tony Hallas makes the case for ditching them in DSLR imaging as well.

Olly

 


On 13/1/2017 at 14:31, cjdawson said:

Or in laymans terms a photo.

Brilliant. Calling a spade a spade, rather than a semi-mobile-solid-material-moving-device makes things so much more understandable. It reminds me of StarTools and PixInsight respectively;).


I found the words sub and frame somewhat confusing when I first started imaging.  I guessed what they meant from the way they were being used. But to me a frame was what a picture is displayed in - from a photographer's point of view a picture being an image, a photograph or an exposure.  Is "frame" an American expression for a photograph? 


7 minutes ago, Ouroboros said:

I found the words sub and frame somewhat confusing when I first started imaging.  I guessed what they meant from the way they were being used. But to me a frame was what a picture is displayed in - from a photographer's point of view a picture being an image, a photograph or an exposure.  Is "frame" an American expression for a photograph? 

I think the word "frame" only comes into play when you are stacking multiple images taken on a video cam/CCD etc. Each "frame" is essentially a single "image".

TV and movie cameras work on, I think, 24 frames (images) per second. So when you are watching TV, what you are seeing is nothing more than 24 individual images displayed on screen every single second.


Look at a roll of camera film from the good old days (back in the 80's). Each square on the negative is a frame. Put a roll of film in a cine camera and you have a "movie". I remember as a kid my dad owning a Browns cine camera (without sound), and he made us kids do stupid silly stuff just to get it on film and watch back on a projector. It was the height of technology back in the late 70's, early 80's.

Oh how far we have come since then.


12 minutes ago, wimvb said:

Frame refers to the "old" days of film photography. A roll of film had 12, 24 or 36 frames.

Oh, yes of course. :) I'd forgotten that. Also in cinematography in the old days of film - frame rate and so on, an expression still used in video imaging.   Oops I see I'm repeating what several people have said already. 


Ah yes that's right - applies to still photography back in the film days.  I did know that as I have been an amateur photographer most of my life and developed and printed my own films both monochrome and colour.  I wasn't able to afford cinematography in the film days - I had to wait for digital :D


4 minutes ago, alacant said:

Is it correct to refer to light frame, flat frame, bias frame  etc?

I don't  know about correct but they are certainly the terms used. Actually they're all just images really. 


Don't forget.........

Back in the day, when it was discovered/invented that you could take multiple images and string them together to make a "Movie", they still had no sound. When it was discovered/invented how to add sound to those movies, they became known as "Talkies".

 


7 minutes ago, LukeSkywatcher said:

Don't forget.........

Back in the day, when it was discovered/invented that you could take multiple images and string them together to make a "Movie", they still had no sound. When it was discovered/invented how to add sound to those movies, they became known as "Talkies".

 

I'm still looking forward to the "feelies". 


19 minutes ago, Gina said:

Ah yes that's right - applies to still photography back in the film days.  I did know that as I have been an amateur photographer most of my life and developed and printed my own films both monochrome and colour.  I wasn't able to afford cinematography in the film days - I had to wait for digital :D

Me too, used to have a dark room and do a lot of b/w. As a matter of fact, I sold my Hasselblad last year to cover the cost of a new AZ EQ6. "Times, they are a changing."

Never was interested in cinematic film, though.


Just to add a little more theory to @cjdawson's explanation (and to get this thread back on track again)

As far as I understand it:

When light is collected by the optics (scope + filters + camera), its intensity is MULTIPLIED by the transfer function of the optical system (a fancy phrase for light fall-off from vignetting and dust bunnies, mainly).

Second, in the camera, photons are absorbed and create charge in each pixel. However, there will also be charge created by the pixels themselves being at a certain temperature. This is the so-called dark current. The charge built up in each pixel is time dependent (= dark current x time). This charge ADDS to the charge created by photons.

Finally, when the charge in each pixel is collected, it is slightly changed by the electronics that "reads" the pixel content and does the in-camera processing. This means that the signal from the sensor becomes "biased". This change is independent of exposure time. But depending on how the sensor does the charge collection, whole rows or columns of pixels may be affected similarly. This gives rise to a read pattern. Charge is added or subtracted (negative addition).

Because of all that is happening in the imaging system, the image that is saved on a computer or in the camera (DSLR) is not a true representation of the photons that were collected during an exposure. By using calibration frames, the effects of the camera and optics can be reversed. But the order in which these calibration images are applied to the light frames is important: it must be the reverse of the order in which the light/charge was affected.

Dark frames have, ideally, the same amount of dark current (including hot pixels), as the light frames. They just lack the light signal. They are also affected by the camera's read process, so they also contain the read pattern. Dark frames have to be SUBTRACTED from the light frames. In this process, the bias signal is also subtracted. That is why you normally don't have to subtract bias frames and dark frames. Since dark frames (and bias frames) are taken without any light involved, they are unaffected by the optics used. So you don't have to take dark frames with the camera attached to the scope.

Flat frames (taken using a flat illumination that should be evenly distributed across the optical aperture) only contain the pattern (= transfer function) of the optical system. Since this pattern was multiplied by the light signal, it now has to be DIVIDED out.

Flat frames that have been recorded by the camera also contain dark current and a read pattern. So flat frames need to be calibrated, ideally with flat dark frames. But since the exposure time for flat frames is usually much shorter than for light frames, flat frames mainly contain read pattern and only a little dark current. So it may be enough to just use bias frames to calibrate flat frames.

The actual calibration process can vary, depending on which software is used. For example, PixInsight can use a single set of dark frames for calibration of both the light frames and the flat frames. Since flat frames are taken at a different exposure time than light frames, PixInsight will try to optimise the dark frames to the flat frame exposure time. But since dark frames contain both a time dependent part (dark current, hot pixels) and a time independent part (read pattern), they first have to be calibrated (i.e. have the read signal removed). If dark frames are not to be used in the calibration of flat frames, they do not need to be calibrated, since they do not need to be optimised. If dark frames are to be used for light frames that have received different exposures, they do need to be calibrated. Other calibration software can do this differently, so you may need to consult the manual.

To sum it up:

Data collection:

Light --> through optics (incl. vignetting, dust) [multiplication] --> converted to charge + temperature-created charge + electronics-created charge [addition] --> image saved

Calibration:

Saved image --> subtract the (temperature + electronics created) charge [subtraction] --> correct for optics [division] --> image containing the light signal
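That summary can be written out as arithmetic in a toy per-pixel model (all the numbers here are made up; the point is only that calibration applies the inverse operations in the reverse order):

```python
import numpy as np

rng = np.random.default_rng(0)
light = rng.uniform(50, 150, size=(4, 4))    # photon signal we want
optics = rng.uniform(0.7, 1.0, size=(4, 4))  # vignetting/dust (multiplies)
thermal = 12.0                               # dark current x time (adds)
bias = 300.0                                 # readout offset (adds)

saved = light * optics + thermal + bias      # what the camera records

# Calibration reverses the steps in the opposite order:
recovered = (saved - thermal - bias) / optics  # subtract first, then divide
assert np.allclose(recovered, light)
```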

Hope this doesn't confuse too much

(and please correct me if I'm wrong)


11 hours ago, alacant said:

Is it correct to refer to light frame, flat frame, bias frame  etc?

Yes, this is what people do.

I would rather call a sub (short for sub-exposure) a 'sub' than a 'photo' because, while a sub is a photo, it isn't the complete photo but only a part of it. The term 'sub' is not jargon for jargon's sake. It arises from the fact that we might make twenty hours of exposure, but not continuously. The twenty hour exposure will be assembled from sub-exposures totalling twenty hours. It's a good term and we should continue to use it.

(Being pedantic :D we could call flats 'photos' as well because they are graphs made from photons. By the same reasoning we could not call bias or darks photos because they are made without light. But who wants to be pedantic on a Sunday morning??)

So the definitions I use are:

Lights: Images of the target.

Sub exposures: individual lights to be averaged to produce a master image.

Flats: images of an even light source whose purpose is to record the uneven illumination of the system and to correct the master image by division.

Darks: captures of the camera noise at the same exposure setting as the sub frames to be subtracted from the master image.

Bias: the shortest darks your camera can take. They can be used instead of darks if a bad pixel map and/or dither is employed, they can be used as darks for flats and they can be used to scale non temperature matched darks to subs. If darks are being subtracted from the subs the bias should not be subtracted because the bias noise is present in the dark.
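The "scale non temperature matched darks" job of the bias can be sketched like this (a hypothetical helper; `s` is a scale factor that real calibration software would estimate from the data rather than take as an input):

```python
import numpy as np

def scale_dark(master_dark, master_bias, s):
    """Scale only the thermal part of a dark, leaving the bias
    (readout) part untouched, since bias does not grow with time."""
    thermal = master_dark - master_bias   # time/temperature dependent part
    return master_bias + s * thermal      # rescaled dark
```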

Olly

Edit...

It occurs to me that the statement 'a sub is a photo' can be formally incorrect because it falls foul of what philosophers call the law of the undistributed middle term. Woo Hoo!!! :BangHead:While all subs are photos, not all photos are subs. A photo is only a sub if it forms part of a set of other similar subs destined for combination. Don't worry, I know it doesn't matter!!!!!!!!!!!!!

 


2 hours ago, ollypenrice said:


Olly

Edit...

It occurs to me that the statement 'a sub is a photo' can be formally incorrect because it falls foul of what philosophers call the law of the undistributed middle term. Woo Hoo!!! :BangHead:While all subs are photos, not all photos are subs. A photo is only a sub if it forms part of a set of other similar subs destined for combination. Don't worry, I know it doesn't matter!!!!!!!!!!!!!

 

Which raises the question, are master darks and master bias(es?) images?

I have a vague feeling we are going off topic again. Otoh, the full moon prevents imaging anyway.:wink:

