
DSLR flats causing confusion



2 minutes ago, tooth_dr said:

One of the cameras was set to autorotate, so I don't know what this means to my final images?

If the artefact is on the sensor then it might not matter, as it will rotate with the camera. But if it is elsewhere in the optical train, then yes, it will appear on the sensor in a different location, so flats taken when the camera wasn't rotated won't work with lights taken with the rotated camera. To be safe, I'd say that rotated-camera images need to be calibrated with their own flats, the same as any time you move the camera within the optical train for whatever reason.


16 minutes ago, geoflewis said:

If the artefact is on the sensor then it might not matter, as it will rotate with the camera. But if it is elsewhere in the optical train, then yes, it will appear on the sensor in a different location, so flats taken when the camera wasn't rotated won't work with lights taken with the rotated camera. To be safe, I'd say that rotated-camera images need to be calibrated with their own flats, the same as any time you move the camera within the optical train for whatever reason.

Thanks Geof.  I've now turned off the auto rotate feature properly, so in future that will rule out that issue.  Now I can't even be sure if the flats were rotated or not, because the telescope was pointed directly upwards, so I don't know what the camera orientation may have been!  Lesson learnt for next time!


Quote

If the artefact is on the sensor then it might not matter, as it will rotate with the camera. But if it is elsewhere in the optical train, then yes, it will appear on the sensor in a different location, so flats taken when the camera wasn't rotated won't work with lights taken with the rotated camera. To be safe, I'd say that rotated-camera images need to be calibrated with their own flats, the same as any time you move the camera within the optical train for whatever reason.

I think that has hit the nail on the head. I did not know what autorotate meant, so the penny didn't drop.

Carole 


1 hour ago, tooth_dr said:

Hi Neil

 

I haven't posted any DSS results yet, but I am currently stacking the 1000D subs to see how they compare.  I can't use these both together in DSS, as you have suggested, because for some reason when DSS opens the files from the two cameras they have a pixel size that differs by 2 px in the length, so it refuses to stack them.  Yet APP sees them both as the same pixel size.

 

How are you making sure APP uses the right flats for the right lights? (I have no experience of using it).


59 minutes ago, Stub Mandrel said:

How are you making sure APP uses the right flats for the right lights? (I have no experience of using it).

I put lights, flats and bias from one camera only into APP. These are calibrated and saved as calibrated lights. I then clear the file list and repeat for the second camera. Finally I clear the list again and stack the two sets of calibrated lights.


21 hours ago, tooth_dr said:

One of the cameras was set to autorotate, so I don't know what this means to my final images?  Is the orientation of the sensor known/labelled, such that the software will always apply the flats and bias the right way, even if you rotate the images before stacking?  If that is the case then calibration will be applied correctly to all my frames, and the calibrated lights will simply be rotated to suit in the stacking/integration process.

 

21 hours ago, geoflewis said:

If the artefact is on the sensor then it might not matter, as it will rotate with the camera. But if it is elsewhere in the optical train, then yes, it will appear on the sensor in a different location, so flats taken when the camera wasn't rotated won't work with lights taken with the rotated camera. To be safe, I'd say that rotated-camera images need to be calibrated with their own flats, the same as any time you move the camera within the optical train for whatever reason.

 

19 hours ago, carastro said:

I think that has hit the nail on the head. I did not know what autorotate meant, so the penny didn't drop.

Carole 

Just to clarify this part of the thread.

If you manually rotate the camera by loosening the attachment between camera and optical train, or use a mechanical rotator to change the orientation of the camera with respect to the OTA, then yes, in theory you would need to take new flats. In some circumstances, though, if the OTA is well collimated, the vignetted field is symmetrical, and the only in-focus dust in the system is on optical components that rotate with the camera (flatteners, filters etc.), you may get away with using the same flats after a manual rotation of the camera.

The autorotate function in a DSLR does not change the physical orientation of the sensor in the optical path, nor does it change the scan readout direction of the pixels on the sensor. It merely adds a tag to the data recorded with the image to tell the image-processing application whether the user held the camera horizontally for a landscape shot or turned it vertically for a portrait shot.

There is no need to take separate flats in the case that the DSLR's autorotate function was triggered, for example by a telescope slew to a different orientation for the flats acquisition. As long as the camera maintains the same physical relationship to the OTA during both image and flats acquisition, the autorotate function can generally be ignored, though a caveat applies:

Purpose-written astrophotography image-processing applications should ignore the autorotate tag attached to the raw data and always read the sensor data in bit order, irrespective of the orientation of the camera. Lights, darks, bias and flats will then all be treated the same, whatever the camera's orientation at the time of acquisition. This will not be the case with general-purpose image-processing programs, which use the autorotate tag in the raw data to display the image on screen in landscape or portrait mode; even some astro-processing software may get this wrong, so it needs to be checked rather than assumed.
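A toy sketch of the distinction, using synthetic NumPy arrays rather than real camera data: a tag-only (metadata) rotation leaves the stored pixel grid untouched, so the flat still cancels the artefacts, whereas a physical rotation of the camera moves the dust shadow on the sensor and defeats the old flat.

```python
import numpy as np

# Synthetic flat field: smooth vignetting plus a dust mote at a fixed
# sensor location (made-up data, not from any real camera).
h, w = 100, 150
yy, xx = np.mgrid[0:h, 0:w]
vignette = 1.0 - 0.3 * (((yy - h / 2) / h) ** 2 + ((xx - w / 2) / w) ** 2)
flat = vignette.copy()
flat[20:25, 30:35] *= 0.7          # dust shadow at a fixed sensor spot

sky = np.full((h, w), 1000.0)      # idealised uniform target signal
light = sky * flat                 # the light frame shows the same artefacts

# Autorotate only sets a tag: the stored pixel grid is unchanged, so the
# flat still lines up with the light and calibration cancels the artefacts.
assert np.allclose(light / flat, sky)

# A *physical* 180-degree rotation moves the dust shadow on the sensor,
# so the old flat no longer corrects the new lights.
light_rotated = sky * np.rot90(flat, 2)
assert not np.allclose(light_rotated / flat, sky)
```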

When combining DSLR light frames from either side of a meridian flip, only one set of calibration files is needed. However, when registering and integrating the calibrated lights it will be necessary to sort them into before-flip and after-flip groups and manually rotate all the calibrated lights in one of the two groups by 180 degrees, so that both sets have the same orientation before registration and integration. (When using a dedicated astro camera integrated into a control program that reads the mount orientation, the before/after-flip reorientation of the captured lights is often done automatically; for a DSLR you need to do this yourself.)
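The before/after-flip bookkeeping can be sketched like this, with synthetic frames standing in for calibrated lights and `np.rot90(..., 2)` standing in for whatever 180-degree flip tool you use:

```python
import numpy as np

# Hypothetical calibrated lights: the post-flip frames arrive rotated
# 180 degrees relative to the pre-flip frames.
rng = np.random.default_rng(1)
scene = rng.random((50, 80))
pre_flip = [scene + rng.normal(0, 0.01, scene.shape) for _ in range(3)]
post_flip = [np.rot90(scene, 2) + rng.normal(0, 0.01, scene.shape)
             for _ in range(3)]

# Rotate the post-flip group 180 degrees so both groups share one
# orientation, then integrate the whole set together.
aligned = pre_flip + [np.rot90(f, 2) for f in post_flip]
stack = np.mean(aligned, axis=0)

# The integrated stack now matches the pre-flip orientation.
assert np.abs(stack - scene).max() < 0.05
```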

William.

 


20 hours ago, tooth_dr said:

Could my flats being underexposed cause an issue?

Hi Adam.

I did a little digging this morning and found that there has been a change to the internal file handling in PixInsight. Although I had been aware of its introduction, I had not appreciated the implications. The pixel ADU values displayed in the toolbar are scaled for the 32-bit XISF format, so when you import a 14-bit DSLR raw or a 16-bit FITS image the ADU values look ridiculously low. I found some of my old Nikon flats from a few years back that I knew were good, and they read even lower ADU scores in PixInsight than your Canon flats. When I look at your flats in Maxim they are fine, so I think you can ignore that path for now; it was a bit of a red herring.
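A rough sketch of the scaling effect described above, with illustrative arithmetic only (not PixInsight's exact internals):

```python
# A hypothetical well-exposed 14-bit Canon flat (14-bit maximum is 16383).
adu = 8000

# Stored in a 16-bit container and normalised to [0,1] against the
# 16-bit maximum, the readout looks alarmingly low...
shown = adu / 65535
assert round(shown, 3) == 0.122

# ...but measured against the true 14-bit ceiling it is about half scale,
# i.e. a perfectly healthy flat.
actual = adu / 16383
assert round(actual, 3) == 0.488
```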

I had a look at the APP website this morning and see that there were a few posts from people struggling with under-correcting flats, the same as you.

One reason given in their forum was a mismatch between the ISO used for the flats and that used for the biases and lights: you need matching-ISO bias frames for the flats and separate matching-ISO bias frames for the lights. If you took the lights, flats and biases into APP as raws direct from the camera, APP would not scale the flats correctly if it couldn't find any matching-ISO bias frames for the flats.

I read that they plan to remove this restriction in the next update to APP.

Does this apply to your situation?

Looking at your images it seems that the flats are under-correcting, and if you did not have matching-ISO biases for the flats and matching-ISO biases for the lights then this may be the source of the problem.

If so, try taking a set of bias frames at the same ISO as the flats and rerun the calibration procedure, loading both bias groups together so that they match the ISOs of the lights and the flats.

To save time, work on a reduced data set, say ten each of lights, flats and biases at matching ISOs, and see if that removes the dust bunnies and vignetting. The resulting image will be noisy, but at least it will let you see whether the flats work properly before committing time to the full data set.
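For anyone wanting to sanity-check the calibration outside APP, here is a minimal sketch of bias-matched flat correction. `make_master` and `calibrate_light` are hypothetical helper names for illustration, not APP or PixInsight functions:

```python
import numpy as np

def make_master(frames):
    """Median-combine a list of frames into a master frame."""
    return np.median(np.stack(frames), axis=0)

def calibrate_light(light, master_bias_light, master_flat, master_bias_flat):
    """Bias-subtract the light, then divide by the bias-subtracted,
    unit-mean-normalised master flat. Each master bias is assumed to
    match the ISO of the frames it calibrates, as described above."""
    flat = master_flat - master_bias_flat
    flat = flat / np.mean(flat)            # normalise flat to unit mean
    return (light - master_bias_light) / flat
```

With a synthetic vignetted light, dividing by a correctly bias-matched flat returns an even field; an unmatched bias level would leave residual vignetting, i.e. the under-correction seen in the thread.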

I don't know if APP is clever enough to sort the images from two different cameras in one operation. If not, you should only calibrate the images from one camera at a time (either side of the meridian flip is fine), but you need to separate the lights after calibration into two groups, before and after flip, then flip one of the groups so the orientation of both groups is the same before finally registering and integrating both groups into a single image. Then do the same for the other camera, and register and integrate the two cameras' calibrated output images into a final single frame.

PixInsight uses biases for calibration irrespective of ISO, so I can try a run-through of the calibration in PixInsight if you like and see if the problem disappears.

I would need a master flat made from non-calibrated, non-registered, integrated and combined flat frames; ditto for the master bias, at whatever ISO you have; and an uncalibrated, integrated stack of registered lights from one side of the meridian flip. Ten frames in each master would be enough to prove the point. FITS or TIFF format, 16-bit or 32-bit, doesn't matter.

William.

 


Thanks William for the comprehensive answer.

My flats, bias and lights were all taken at ISO 800. I always match the ISO of the calibration frames to the lights.

To clarify: I calibrate one camera's lights with the respective flats and bias in a single operation. I then repeat this for the other camera in a separate operation. Finally I stack all the calibrated files together.

I will try flipping them post-calibration. I guess I thought that the software would know to rotate them to line up the stars? DSS does that; maybe APP doesn't?


23 minutes ago, Stub Mandrel said:

Could you stack the images from the two cameras separately?

If only one has faults, that will tell us a lot.

Actually I've just done this for each camera in both DSS and APP. I'll post results later; it's my daughter's 4th birthday party today.


Just a thought, Adam.

It doesn't help that APP hasn't yet published an online manual, though I believe one is in the works.

I notice from the DSLR calibration walk-through videos that the program's author, Mabula Haverkamp, has provided, and from his comments in some forum posts, that he uses darks as an important part of the calibration procedure.

There is a link to a calibration principles page that he has published here:  

https://www.astropixelprocessor.com/community/tutorials-workflows/astronomical-data-calibration-priciples-must-read/ 

Where he states that "flats can and should be calibrated with a matching ISO dark frame" and "lights should be calibrated with a matching ISO dark frame".

Might be worth shooting half a dozen darks to match the lights and flats with one of the cameras and throwing them into the mix to see what happens?

EDIT: I also spotted on his calibration rules page that APP reads the entire sensor, including the normally invisible 'dark' pixel rows and columns, making the image size bigger than is seen in other applications that use DCRaw as the raw-conversion engine. While there may be a good reason he has done this in APP, it does make it very difficult to use data from other applications in a pick-and-mix operation.
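A trivial illustration of the resulting size mismatch, with made-up dimensions (the real masked-pixel margins vary by camera model), which would also explain DSS refusing to stack frames decoded by two different engines:

```python
import numpy as np

# Hypothetical sensor readout: the full area includes a 2-pixel band of
# optically masked "dark" pixels along one edge (illustrative numbers only).
full_readout = np.zeros((3468, 5202))    # what a full-sensor reader delivers
visible = full_readout[:-2, :]           # what a DCRaw-style crop delivers

# Frames differing by even 2 px in one dimension cannot be stacked together.
assert full_readout.shape[0] - visible.shape[0] == 2
```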

