vlaiv

Members · 13,263 posts · 12 days won

Everything posted by vlaiv

  1. By the way - I also noticed that flat exposures are 0.32938s and flat darks are 0.48406s. Is there any particular reason these two are mismatched?
  2. I think I found a solution to this, but we should really call the author of APT into this discussion, as I suspect there is a bug in APT's auto flat routine. @Yoddha

     This is not the first time I've come across this. Before, I suggested people ditch the auto flat routine and shoot flats as regular exposures, setting the exposure time manually. Now I have confirmation that something is probably wrong in APT. You mentioned that you had your auto flat routine set to the 25000 ADU mark, right?

     Look what happens to the calibration of a single sub when I add a DC offset to the master flat (making it larger, since we have established that it is too low and over-correcting because of that): Above is normal calibration - it is obviously over-correcting. Then I started adding a DC offset to the master flat. When I add 100 ADU, there is a slight improvement. When I add 10000 ADU, there is a visible improvement. When I add 25000 ADU, the flats look like they are working properly (this is a much harder stretch than the two above, yet the background appears flat). If I add 35000 ADU, the flats start to under-correct.

     To me it looks like the software is doing something wrong. The actual ADU of the flat is indeed ~25000 - but I suspect it is actually setting the exposure time so that the average ADU is 50000, twice the set target, and then subtracting 25000 from that. I might be completely wrong on this - but that is what it looks like.

     In any case - to salvage these images, add a 25000 ADU DC offset to the master flat, and in future don't use APT's auto flats feature; set your flat exposure manually, like when shooting lights. In fact, it is best if you don't mention flats to APT at all (just say you are shooting lights of a different target called "flat panel").
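The hypothesis above can be sketched numerically. This is a minimal toy model (made-up 1-D "frames", not the actual subs): if the flat was exposed to an average of 50000 ADU and then had 25000 ADU subtracted, calibration over-corrects, and adding the 25000 ADU DC offset back restores a flat background.

```python
import numpy as np

def calibrate(light, dark, flat, flat_dark, dc_offset=0.0):
    """Standard calibration; dc_offset is added to the master flat."""
    master_flat = (flat - flat_dark) + dc_offset
    master_flat = master_flat / np.mean(master_flat)  # normalize to ~1
    return (light - dark) / master_flat

# Toy 1-D frames: a vignette profile falling to 70% at the edge.
vignette = np.linspace(1.0, 0.7, 100)
light = 1000.0 * vignette + 500.0     # sky through the same vignette + dark level
dark = np.full(100, 500.0)
flat_dark = np.zeros(100)

# Suspected APT behaviour: expose to ~50000 ADU average, subtract 25000.
bad_flat = 50000.0 * vignette - 25000.0

cal_bad = calibrate(light, dark, bad_flat, flat_dark)                     # over-corrects
cal_fix = calibrate(light, dark, bad_flat, flat_dark, dc_offset=25000.0)  # level again

print(np.ptp(cal_bad) > 1.0, np.ptp(cal_fix) < 1e-6)  # True True
```

With the offset restored, the master flat is again proportional to the true vignette, so the calibrated background comes out level.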
  3. As for lunar with an IR filter - it is a tradeoff. Longer wavelengths are much less affected by the atmosphere than short wavelengths (blue light bends more than red in both glass and atmosphere), but longer wavelengths suffer more from diffraction effects - a larger Airy disk pattern. IR is probably better suited for large aperture scopes that are more affected by seeing, and shorter wavelengths for smaller scopes (like the Baader Solar Continuum filter, centered around 540nm). In bad seeing - use IR of course, but in excellent seeing it can actually produce a less sharp image than otherwise possible due to the Airy disk size.
  4. I happily use a 4/3 sensor with 1.25" filters at F/4.8. In any case, how about going a bit bigger - say 4"? There are very fine triplets out there in the 4" range for imaging.
  5. Those settings should be fine indeed. Unfocused light will act like a DC offset - I don't think it will create a significant pattern. In any case, I'll see if I can find something meaningful when you post your subs. I'll inspect them in ImageJ to see if I can find any clues as to what happened.
  6. ASCOM drivers are the way to go really, so I would not change that. Reflection off the filter could be the reason - but I think it would affect both flats and lights, so it would more or less cancel out. You should however make sure your light path is clear of any reflections, so do try that other attachment. What was your offset set to? It could be an issue with offset as well. Out of the four types of subs, only flat darks would be affected by low offset in a bad way, since all the others have additional signal (even darks at long exposure have some dark current, and flat darks tend to be short). If offset is set too low, that would make flat darks higher in value and in turn the master flat lower in value.
  7. The flat seems to be over-correcting. Since it is light / flat and the value is higher than it should be - either the light is higher than it should be or the flat is lower than it should be - maybe both.

     How do you take your darks? Both darks and flat darks? My guess is that you took flat darks while the camera was on the scope with the front of the scope covered, and regular darks with the camera off the scope. If that is the case, you might have a light leak. This happens at the back of the scope if there is light near the rear end. People deal with this in various ways, for example:

     In any case - lights were taken on the scope (obviously) and darks off the scope, so (lights - darks) will have a higher value because of the additional light from the back of the scope. Flats and flat darks were taken with the camera on the scope - here flat darks are higher in value (flats might be too - if so, they cancel out), so the resulting value will be smaller.

     Hope that makes sense - the moral of this story is: check if you have a light leak at the back of your scope. Another possibility is IR light that passes through the plastic cover on the front of the scope - but that is unlikely if you have a UV/IR cut filter.
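The light-leak arithmetic above can be shown with toy ADU numbers (purely illustrative, not taken from the actual subs): darks taken off the scope contain no leak signal, so the leak survives calibration and the lights come out higher than they should.

```python
# All values in ADU; illustrative assumptions only.
sky = 1000.0
leak = 200.0        # stray light entering through the back of the scope
dark_level = 500.0

# Darks were taken OFF the scope, so they miss the leak entirely,
# and dark subtraction fails to remove it:
light_on_scope = sky + leak + dark_level
calibrated = light_on_scope - dark_level
print(calibrated)   # 1200.0 - higher than the true sky signal of 1000.0
```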
  8. Oh, it makes perfect sense if you mean the shape of the FT of the box function - that part I'm fine with. It's just the way the FT calculates, and yes - box and sinc are a known dual pair under the FT. It is just its usability as a low pass filter that I was referring to. For some strange reason I expected a good low pass filter to be monotonically decreasing in the frequency domain - like, say, a Gaussian blur filter - but in reality, why should it be? I guess the only requirement for a filter to be called low pass is that it attenuates higher frequencies more than lower ones, right?
  9. Use the SER file format for capture.

     Every sensor is a monochromatic sensor. Color comes from smart handling of filters imprinted on individual pixels. This is called a Bayer matrix for OSC cameras - for details, look here: https://en.wikipedia.org/wiki/Bayer_filter

     You can use color filters with a color camera - but the results won't be what you expect - it will not behave like a mono sensor + filters. You are still capturing R, G and B data and you can separate them (see that Bayer filter link). Raw8 means that you are capturing 8 bits of information per pixel, Raw16 means 16 bits per pixel. Original data is data prior to debayering - while red, green and blue pixels are still separated. If you plan on stacking with AutoStakkert!3 - leave the data original, as this software knows how to deal with it and exploit it fully resolution-wise.

     As for filters - you'll need a UV/IR block filter. You can get an IR 685nm pass filter if you want to do IR astrophotography. Maybe a better solution would be the ZWO 850nm pass filter - camera response is uniform above 850nm, and the camera will behave like a true monochromatic sensor in this band. Don't get an LRGB set unless you are planning to get a mono camera later on.

     An ADC helps when planets are lower in the sky - it corrects for the fact that the atmosphere acts as a prism and slightly separates colors. It has nothing to do with how you capture and process your data.
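Separating the un-debayered channels is just strided slicing. A minimal sketch, assuming an RGGB pattern and a numpy array of raw counts (the function name and toy array are made up for illustration):

```python
import numpy as np

def split_rggb(raw):
    """Return R, G1, G2, B sub-frames from an RGGB-patterned raw frame."""
    return raw[0::2, 0::2], raw[0::2, 1::2], raw[1::2, 0::2], raw[1::2, 1::2]

raw = np.arange(16).reshape(4, 4)   # stand-in for a raw frame
r, g1, g2, b = split_rggb(raw)
print(r.shape)   # each colour plane is half-resolution: (2, 2)
```

Note how each plane is half the linear resolution of the raw frame - which is why software that understands the raw Bayer data (like AutoStakkert!3) can do better than a naive RGB conversion.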
  10. Yes, the background is much nicer. I think you might have pushed saturation a bit too far this time?
  11. I guess you are right. I found it strange due to my expectation that a low pass filter should be monotonically decreasing in the frequency domain.
  12. A moving average is a rather "strange" low pass filter. The Fourier transform of a box is the sinc function. The frequency response of a moving average looks like this: (colors show different moving-average sizes) More details here: https://ptolemy.berkeley.edu/eecs20/week12/freqResponseRA.html
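The sinc-shaped response is easy to reproduce. A short sketch using only numpy (the window length N is an arbitrary example): the magnitude response is 1 at DC, has periodic nulls, and the side lobes rise again, so it is a low pass filter that is not monotonically decreasing.

```python
import numpy as np

N = 8
kernel = np.ones(N) / N                 # N-point moving average = box kernel
H = np.abs(np.fft.rfft(kernel, 512))    # zero-padded magnitude response

# Unity gain at DC, but the response rises again after each null:
print(round(H[0], 6), bool(np.any(np.diff(H) > 1e-9)))
```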
  13. Well, the image is ~4400 x 3175px, so I assumed that you cropped about 100 or so pixels on each side due to aggressive dithering. According to the pixel scale (I measured the distance between two stars and compared it to Stellarium's angle measure), it looks like the reducer was working at ~x0.7165 (assuming an original FL of 1624mm, though that will vary by a few mm depending on the distance between mirrors due to collimation). This scope should cover an APS-C sized sensor without too many issues in the corners - around 27-28mm on the diagonal. The ASI1600 has a 22.2mm diagonal, and if we divide that by 0.7165 we get ~30.98mm, or about 31mm. That is probably a bit too much, although the TS website says sensors up to 30mm can be used without flattening. Maybe this reducer should be kept at x0.75 max - that would give 22.2 / 0.75 = 29.6mm. I also have this reducer, but when I tried it once I did not really like what I saw. Maybe I was trying too much reduction. I will need to give it another go with less reduction - up to x0.75.
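The arithmetic above as a quick sanity check (all lengths in mm, numbers taken from the post):

```python
native_fl = 1624.0
reduction = 0.7165      # measured from star distances vs Stellarium
sensor_diag = 22.2      # ASI1600 diagonal

print(round(native_fl * reduction, 1))    # effective FL: 1163.6 mm
print(round(sensor_diag / reduction, 2))  # field the reducer must correct: 30.98 mm
print(round(sensor_diag / 0.75, 2))       # at x0.75 instead: 29.6 mm
```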
  14. You should bin at the linear stage - either after calibration and before stacking, or once stacking is finished. What software are you using for stacking? CMOS and CCD binning provide almost the same level of benefit - the only difference being the level of resulting read noise. Here CCD has a slight edge, but CMOS sensors have lower read noise to start with, so that evens out the playing field. Your stars look rather decent. Here is a crop from the "worst" corner at a good sampling rate (x0.5, or about 1.32"/px): Does not look bad at all.
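The read-noise difference mentioned above can be sketched as follows. CCD hardware binning reads the summed charge once, while CMOS software binning reads each pixel and sums afterwards, so read noise adds in quadrature. The noise figures below are illustrative assumptions, not measurements of specific cameras.

```python
import math

def binned_read_noise(read_noise_e, n_pixels, hardware):
    """Effective read noise per binned super-pixel, in electrons."""
    if hardware:
        return read_noise_e                        # one read of summed charge
    return read_noise_e * math.sqrt(n_pixels)      # n independent reads

ccd = binned_read_noise(7.0, 4, hardware=True)     # e.g. a CCD at bin x2
cmos = binned_read_noise(1.7, 4, hardware=False)   # e.g. a CMOS at bin x2
print(ccd, round(cmos, 2))   # 7.0 vs 3.4 e- : low CMOS read noise evens things out
```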
  15. What did you use to reduce the FL on the RC? By the way - you should look into incorporating at least bin x2 (or optimally x3 for this image) into your workflow - it will give you an SNR boost and make processing much easier.
  16. There are a couple of threads on it - one in the Astro Lounge, one in the imaging section. Here is a link to the actual website: https://astroanarchy.blogspot.com/2021/03/gigapixel-mosaic-of-milky-way-1250.html
  17. In both equations the same three numbers figure: the focal length of the telescope, the focal length of the eyepiece and the size of the aperture. The first approach combines the focal lengths into a single entity - telescope_fl / ep_fl = magnification - and then adds aperture. The second approach combines telescope focal length and aperture into a single entity - F/ratio = telescope_fl / aperture - and adds eyepiece FL into the mix. That is why these two approaches are equal - the same three numbers are used to calculate the exit pupil. The actual relation is a simple proportion: aperture : exit_pupil = telescope_fl : eyepiece_fl (the ratio of input and output pupil is the same as the ratio of focal lengths), but the above two equations are much simpler to remember.
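Both formulas checked on example numbers (a 200mm f/5 scope with a 25mm eyepiece - the values are illustrative):

```python
aperture = 200.0
telescope_fl = 1000.0
eyepiece_fl = 25.0

# Approach 1: aperture / magnification
magnification = telescope_fl / eyepiece_fl
exit_pupil_1 = aperture / magnification

# Approach 2: eyepiece focal length / focal ratio
f_ratio = telescope_fl / aperture
exit_pupil_2 = eyepiece_fl / f_ratio

print(exit_pupil_1, exit_pupil_2)   # both 5.0 mm
```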
  18. This is probably the deepest Veil image that I could find: But it is narrowband - if the feature you are referring to is not narrowband in nature, it will likely not show in the image. By the way, the author of this image recently released a stunning mosaic of the Milky Way in NB - 12 years, 234 panels and 1.7 gigapixels.
  19. I can try to. I don't know much about IFN - but let's see what can be found online. The lowest estimate of JND that I've found is about 7% for visual. That is 0.07 as a ratio, or 2.77 magnitudes of difference. A dark natural sky is SQM 22 - this means that IFN needs to be brighter than SQM 24.7 to be visible against the background sky. I've now looked up the Herschel's Ghosts article on Mel's website (here is the link: http://www.bbastrodesigns.com/Herschels Ghosts.html) and we agree on this calculation; here is a quote:

     But I have a problem with this. We can easily hit 25-26 mag/arcsec^2 in our images. The outer parts of galaxies are that bright. Have a look at these: or for M110. Both of these have features at 25mag or fainter - we regularly capture them in images. So why don't we capture IFN regularly in our images? It is rather hard to capture. For example - an image of M31: Mel's sketch of M31:

     Is it really IFN, or some sort of transient sky effect that people are seeing? What is your experience - do you always see the same things? Looking around the internet - the features near M31 are not IFN but rather Ha nebulosity much closer to home, according to this website: http://www.deepskycolors.com/archive/2017/01/01/Clouds-Of-Andromeda.html It is still interesting that we can't normally find them in images (meaning they are very faint) and that Mel sees them.
  20. Just be careful - if you overdo it, you'll introduce diffraction off the prism edge. It will be a very small effect - but I've seen it in some of my images. Above is one of my images - look at the bright stars near the edge where the OAG sits: they have a single spike due to diffraction off the edge. As soon as you move away from that area, stars are normal again. Here is the shadow of the prism starting to show on a flat.
  21. Contrast in linear light is fixed, and for the most part our vision follows that. There is Weber's law - or Just Noticeable Difference - which is about 10% for human vision. This means that if the surface brightness of a nebula is 10% higher than the sky, we will be able to detect it. Except this does not work at very low illumination levels (or very high ones). (Above image taken from here: https://engineering.purdue.edu/~bouman/ece637/notes/pdf/Vision.pdf) By using filters and varying exit pupil or magnification, we are aiming for that sweet spot - where the sky is dark enough to be in the photon noise domain, where our brain will effectively filter it out, but the nebula is bright enough for us to see. Perceived contrast increases because of that.
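The ~10% JND criterion expressed as a simple check on linear surface brightness (arbitrary linear units; the function and the example values are made up for this sketch):

```python
def detectable(nebula_sb, sky_sb, jnd=0.10):
    """True if the nebula-over-sky patch exceeds the background by the JND."""
    return (nebula_sb + sky_sb) / sky_sb - 1.0 >= jnd

print(detectable(0.12, 1.0))   # True  - 12% above background
print(detectable(0.05, 1.0))   # False - below the ~10% threshold
```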
  22. It looks very much like the triplets that have been mentioned above, and there is a high chance it comes from the same factory. Maybe these lenses are a bit oversized so they can be stopped down in the lens cell to avoid edge effects on image quality? Maybe WO just checks those lenses and utilizes the full 81mm of aperture rather than just 80mm.
  23. I think the 290 is going to be a bit better than the QHY, but not by much. It will have lower read noise - which is good with short exposures - and just a bit better QE. You'll still need to bin it to get enough sensitivity and use longer exposures. The fact that you have quite a bit of coma does not help. Would refocusing a bit help? The thing with coma is that it moves light from the star into the coma tail, which lowers the brightness of the star - it is harder to detect then, almost as if defocused. You could also try to carefully position the pick-off prism. Place it on the longer side of the sensor ("below" or "above" the DSLR - not sideways) and move it to the point where you start getting a shadow on the sensor. This will minimize coma.
  24. Is it? My bad then. I was looking at the FLO website for the description and pricing: https://www.firstlightoptics.com/william-optics/william-optics-zenithstar-81-apo.html but I guess I got the scope wrong, didn't I? There is the Zenithstar 81 and the GT81 - two different scopes.
  25. Exit pupil size is determined by the magnification used - in this case they are the same thing, as exit pupil = aperture / magnification. The less magnification you use, the larger the exit pupil. Yes, it is magnification that helps here - or the lack of it. There is a limited number of cells in the back of our eyes and a limited number of photons coming from the nebula. If you spread the light from the nebula over more light-sensing cells, each cell gets to sense a smaller number of photons, and the image is fainter. You can easily see this effect when observing the Moon, for example - pump up the magnification and the image becomes fainter, as light is spread over more cells and each cell gets fewer photons. Using lower magnification does the opposite - it makes the image brighter. The problem is that it does the same with background light pollution - again you can see this: use a low mag eyepiece and the sky will look bright; use a high mag EP and it will turn black (or at least darker, depending on how much LP you have). A filter removes much of the LP, and once light falls below a certain threshold we no longer see it. This is the key - make the nebula brighter by using lower magnification and at the same time make the sky dark by use of a filter.
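The brightness argument above can be sketched numerically: for extended objects, image brightness at the eye scales with exit-pupil area, and nebula and sky both scale the same way (the aperture and magnifications below are illustrative).

```python
aperture = 200.0   # mm, example scope

def relative_brightness(magnification):
    """Relative surface brightness of an extended object at the eye."""
    exit_pupil = aperture / magnification
    return exit_pupil ** 2      # proportional to retinal illuminance

# 5 mm exit pupil (x40) vs 1 mm exit pupil (x200):
ratio = relative_brightness(40.0) / relative_brightness(200.0)
print(ratio)   # 25.0 - the low-power view is 25x brighter per unit area
```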