



Everything posted by narrowbandpaul

  1. Hi again, Have had the chance to practice a few V curves whilst dodging clouds and rain, so a full evaluation is yet to be conducted. Ran through the first light wizard and that was OK, then did some V curves. I was manually setting the start and end points and the step size. These ran fine, though some stars I'm sure (I hope) were saturated, as the V curve had a pretty flat minimum FWHM. On the next clear night I will ensure that I'm only at 2/3 full well when approximately focused. So I think I'm OK with the generation of V curves and selecting them to be included in the profile. The better V curves had a PID of about 5 steps (f/4.5 scope). This is a really stupid question, but after some satisfactory V curves have been produced, you then want to focus. Do you hit the focus button on the focus tab? This selects the brightest star in the FOV, but what if I wasn't using the brightest star? I could have used a different star. There are a few values to set in the setup tab (I think), such as the near focus HFD. I used a value of 10 with 5 exposures. When I hit focus it ran the focuser to an HFD of about 10 and took 5 exposures; once best focus had been calculated it moved there and it was done. It gave a good focus anyway. I'm not really sure what values to enter there. I have the manual, but as I say I was fighting rain so had to be very swift. I don't get how the focuser can move to an HFD of 10, take 5 exposures and somehow calculate focus from that! The next properly clear night should see the remaining questions cleared up. Although not fully evaluated yet, seeing your focuser automatically generate V curves and then focus is a thing of beauty. No plans to look back! Any further advice any of you have is gratefully received. Cheers Paul
  2. Hi all, I don't do a lot of RGB imaging as, funnily enough, it always seems to be full moon when it's clear, so normally it's emission line stuff. Recently I was playing with some data from a Cloudy Nights member and some of the RGB that I have taken in the past. I understand using a G2V for colour balance; however, I am puzzled by this choice. I guess it is chosen as it is a fairly typical star, and so if that is defined as white then there will be a good range of stars both bluer and redder than it, which will look nice. However, astronomically, white is not defined as G2; instead the colourless star is Vega, an A0V. Since this is the definition of white in astronomy, why don't people use this as the colour reference? I have tried using hotter stars than G2 in a bid to replicate the A0, picking the bluest unsaturated stars by matching stars in the UCAC4 catalogue. But even this doesn't give the colours I would normally see (i.e. striking blue spiral arms etc). Even the G2 star didn't give the normal colours. To calculate the colour weights I used the aperture tool in MaxIm, which adds up the total flux from the star and subtracts the background. Basically aperture photometry. I would think this is a fairly accurate way to calculate colour weights. I have tried eXcalibrator, which seems fairly easy to use, but hasn't given any results yet. What I did use yesterday was a variant of this. Plate solve your image, drop it into Aladin, and add the SDSS or UCAC or whatever catalogue you want. This overlays the catalogue data onto your image. UCAC has B and V magnitudes, but not B-V (colour index); however, creating a column of B-V in Aladin is dead easy. You can then sort through to find the B-V nearest 0 (for A0) or around 0.6 (for G2). Then go back to MaxIm and use the aperture method on that star to calculate the weights. In this way, any B-V can be used to calculate the weights.
By using a star in the image, surely this takes into account atmospheric extinction, to give accurate and faithful colour rendition. The results were very similar to previous attempts. Overlaying catalogue data on your image is really cool though, and really simple! So what other methods exist for accurate colour calibration? A piece of white paper? Another thing: if using the aperture tool and a star in the image is actually a good way of doing it, then does that mean that images showing very different colours are in some small part fictitious? I'm just a bit unsure really. I know how to balance colour using a G2 in the image, but it doesn't seem to give the correct result... Any thoughts? Cheers Paul
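The aperture-photometry approach above can be sketched in a few lines. This is a minimal illustration, not MaxIm's actual procedure: the function name is hypothetical and the flux numbers are made up; you would substitute the background-subtracted R, G and B fluxes that the aperture tool reports for your chosen reference star.

```python
def colour_weights(flux_r, flux_g, flux_b):
    """Return multiplicative (R, G, B) weights that make the reference
    star white, normalised so the green channel stays at 1.0."""
    return (flux_g / flux_r, 1.0, flux_g / flux_b)

# Hypothetical background-subtracted fluxes (ADU) measured on the
# reference star in each filter:
w_r, w_g, w_b = colour_weights(12000.0, 15000.0, 9000.0)
print(w_r, w_g, w_b)  # multiply each channel by its weight before combining
```

Any reference colour works the same way: pick a star of the B-V you want to define as white, measure it in each filter, and scale the channels so its three fluxes come out equal.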
  3. I'm hoping there will be no issues with the focuser; it's a 3.5" Feather Touch. Usually rock solid! A further question if I may: again reading what Neil Fleming had to say, he usually uses around a dozen V curves. What is the point of this many? Does FocusMax take an average? Also, when do you have a good enough V curve to achieve focus? Is it once both slopes agree on the position of best focus, i.e. the point of intersection is the same from both slopes of the V? Cheers Paul
  4. Hi All, Thanks for the feedback. Had the briefest of opportunities to test out the focuser last night, between rain and cloud. Used it initially in manual mode with a bahtinov mask. So far so good. Now have very fine control of focuser movements. Next it will be V Curve time! Cheers Paul
  5. Hi Ian, Great to see some testing going on, it's the only way to learn. I would like to throw my tuppence into this discussion if I may. In your discussion of noise being random, that may be true, but certainly with regard to image sensors there are two components to noise: the random bit we all know and love, and the fixed pattern component, which you regard as signal. Considering the 'bright' fixed pattern noise, you are certainly correct when you say that flat fielding removes all those effects, from vignetting to pixel-level sensitivity variations, and I don't think many realise that. However, I don't agree that it is a signal, at least not how I would define it. If you look at the SNR equation you will see that there is a term for fixed pattern noise in the noise part of the equation. It is defined by FPN = PRNU x S, where PRNU (photoresponse nonuniformity) is the scale of the fixed pattern variations and S is the signal. We see it scales linearly with signal and consequently limits the SNR to a value of 1/PRNU. Clearly flats are required to improve SNR. So the fixed pattern component of the total noise is very much a noise. Random noise comes from several sources; again you hit the nail on the head. There is random noise from the readout electronics, and a fixed pattern component (it's fun to take lots of bias frames to reduce the random part to the sub-electron level and really see the pattern). There is random shot noise from the object and the sky background, with an associated fixed pattern noise, as discussed above. There is a random noise generated through the emission of dark electrons: dark shot noise. There is a fixed component to this too; it is the conventional hot and cold pixels that we all hate. Again this is described by a similar formula, DFPN = DSNU x D, where DSNU is the dark signal nonuniformity and D is the number of dark electrons generated. So dark noise has a random component and a fixed component. What does the dark subtraction do? I find this is commonly misunderstood.
Phrases like "remove the dark current" are ambiguous at best, and "remove the random dark noise too" is a factually incorrect statement. The first thing to realise is that you cannot subtract out random noise: when you subtract two images with random noise, the resultant random noise will be higher. The only way to reduce random noise is to take many frames and average them. This is why we take many light frames, flat frames, bias frames and dark frames; it is solely to reduce the random noise in these images towards zero. Taking many bias frames leaves only the fixed component of the noise (the pixel-to-pixel variations), taking many flats leaves only the fixed component, and taking many darks leaves just the fixed component: hot and cold pixels. So when we do image calibration we do NOT subtract the random noise, we subtract the fixed pattern components. You can ONLY remove fixed stuff via subtraction. The SNR equation certainly treats all the things listed above as noise. The only thing that gets counted as signal is the signal from the object itself. It doesn't include the sky background, as that merely adds an offset value that is easily removed; however, the random noise and fixed pattern noise it injects are most certainly treated as noise. I think on the points where we differ it is perhaps just a question of definition: you regard noise as only the random stuff, and I see it as a pixel-to-pixel variation that can be both random and fixed (i.e. constant from frame to frame). Anyway, I just wanted to clarify a few things. It is good to see people running experiments; understanding how noise works really is essential for the imager and can only make your images better. The SNR equation that I am referring to can be written (with random and fixed components) as: SNR = So / sqrt(R^2 + So + Sb + (PRNU x S)^2 + D + (DSNU x D)^2), where So is the signal from the object, Sb is the signal from the background, R is the read noise,
PRNU is the photoresponse nonuniformity, D is the dark electrons generated (dark current x exposure time) and DSNU is the dark signal nonuniformity. All these properties can be deduced using just Excel; the method is called photon transfer analysis. Hope that was useful, Cheers Paul
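The SNR equation above drops straight into code. A minimal sketch, assuming S (the signal subject to PRNU) is the total photo-signal So + Sb; the numbers in the example are purely illustrative, not measured values.

```python
import math

def snr(So, Sb, R, PRNU, D, DSNU):
    """SNR = So / sqrt(R^2 + So + Sb + (PRNU*S)^2 + D + (DSNU*D)^2),
    taking S, the signal subject to PRNU, as So + Sb (an assumption)."""
    S = So + Sb
    noise = math.sqrt(R**2 + So + Sb + (PRNU * S)**2 + D + (DSNU * D)**2)
    return So / noise

# Illustrative values (electrons): object 5000, sky 2000, read noise 8 e-,
# PRNU 1%, dark signal 50 e-, DSNU 10%.
print(snr(5000, 2000, 8, 0.01, 50, 0.10))
```

Plugging in numbers like these makes the limiting behaviour visible: as So grows, the (PRNU x S)^2 term dominates and the SNR flattens out towards 1/PRNU, which is exactly why flats matter.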
  6. Hi Gordon, Don't see this imaged often in the tricolour palette, and it's easy to see why. If after 5 hours there is little evidence of OIII or SII then it really is faint. Still, you have got some reds and blues showing, so kudos. A good, well-processed image of a very tricky target. Is it me, or is there an arc of OIII between gamma Cas and the nebula itself? Paul
  7. Yes, of course there are, but this alone cannot explain why all stars are reddened in and around dark clouds. Several things can make stars red (carbon stars are another example), but in this case all the IR imagery shows dust between the star in question and Earth. Of course, a red star does not imply dust, but if the vast majority look red then either they are all very old or there's dust.
  8. IR image from WISE of M42 http://wise.ssl.berkeley.edu/gallery_OrionNebula.html IR image from WISE showing the flame and horsehead http://wise.ssl.berkeley.edu/gallery_FlameNebula2.html Clearly you can see the dust you were trying to image through below the Horsehead Paul
  9. Hubble has imaged the Horsehead in the NIR at 1.1um and 1.6um. Herschel joined in and imaged the whole of the Orion B molecular cloud in the far infrared. This image shows all the stuff that permeates the constellation. Dust is the same reason we can't see the centre of the Milky Way optically, or see galaxies through it. Yes, there are things like IC342 (I think) where the level of dust obscuration is low enough to let some visible light through; I think the Maffei group has many such objects. These are much brighter in the NIR than they are in the optical. See the Herschel/Hubble image here http://spaceinimages.esa.int/var/esa/storage/images/esa_multimedia/images/2013/04/zooming_in_on_the_horsehead/12631199-2-eng-GB/Zooming_in_on_the_Horsehead.jpg The NIR image bottom left is the one I referenced in the top line. I didn't say there was Ha all over it, just hydrogen; it is in the molecular form. Ha comes from the n = 3 to 2 transition in atomic hydrogen. Neutral hydrogen permeates the entire universe; its presence, however, is detected by a transition of the electron spin from aligned to anti-aligned, resulting in the famous 21cm emission. As this Wikipedia article points out, the dark stuff is caused by all sorts really: sub-micrometre dust particles covered in carbon monoxide and nitrogen. Also present is molecular hydrogen, atomic helium and some organic molecules. See more here: http://en.wikipedia.org/wiki/Dark_nebulae I'm not very well versed in the dark stuff; normally it's the emissive type of nebula I like to image. Hope that helps, Paul
  10. Also note that due to this reddening the stars become redder themselves (not surprising). It is the tell-tale sign of dust: if all the stars look red then there must be dust somewhere. In your image the star in question, although faint, is definitely red. Have a look at images of dusty objects. TV Davis images a lot of dark dusty objects, and you will see this effect in play. Here is an image of his of the Iris. Note how where there is dust there are yellow/red stars. More dust = more red. http://tvdavisastropics.com/astroimages-1_00000f.htm Paul
  11. Nice image. Surely M33 isn't too far from there if you are looking below Beta Andromedae. The Markarian Chain is part of Virgo, not the Perseus-Pisces cluster. Nice target!
  12. The reason the star has such wildly different magnitudes in the various bands is dust. Dust preferentially scatters blue light more than red, so blue light is scattered away from the line of sight, meaning little makes its way to your camera, whereas red and the NIR are less affected and will appear relatively brighter. This is called interstellar reddening. It's the reason why we have IR observatories: to look through the obscuring dust. The same effect (scattering) makes the sky blue. There are many wavelength-dependent phenomena at play though: star brightness varies with wavelength, as do scattering/absorption due to dust, seeing, the refractive index of your lens, and the quantum efficiency of your camera. The universe is full of things that depend on wavelength. We know it must be dust in this case, as the whole of Orion is covered in it; long exposures reveal hydrogen all over it. It is this, and possibly other material in the line of sight, that causes the star to be bright in the NIR but faint in the optical. Amazing, no? Paul
  13. Hi All, About to move into the big bad world of autofocus, so was wanting a few things clarified if possible. I understand the premise of FocusMax: run V curves to characterise your system, calculate the slopes and generate the position intercepts. Can I assume that your system is only characterised when the PID is zero or negligible, i.e. you need a good V curve for good focus? I read an article by Neil Fleming who had position intercepts that agree to the 3rd significant figure. Once this situation has been reached and you press focus, what actually happens? Does the focuser simply move to the calculated position and you're done, or is there more calculation involved? How do you refocus during an imaging session, e.g. for a temperature change (no temperature compensation available)? If focusing is just the position intercept from the V curve, then do you need to run another V curve analysis to allow for a change of temperature? Currently we use a Bahtinov mask and Bahtinov Grabber to assure focus as closely as possible. In terms of accuracy, how does using the handset to move the motorised focuser with the Bahtinov mask in place, using the software to tell you the pixel error, compare to the standard fully automatic focusing with FocusMax? The Bahtinov mask is quick and easy to use, and the motorised focuser should allow for high precision in aligning the diffraction spikes. Any helpful tips on motorised focusing would be great! Cheers Paul
  14. AP can also be (is more often) Astro-Physics. A company making things worth remortgaging for. CCD is Charge-Coupled Device CMOS is Complementary Metal Oxide Semiconductor DSS Deep Sky Stacker PHD Push Here Dummy guiding software NB Narrowband
  15. Could be from the filter; small blemishes could cause the spikes. When you say clean the sensor do you mean the cover glass? I really wouldn't touch the actual silicon sensor. Blowing dust off is one thing but touching the silicon itself I would avoid. Have a look at everything in your optical train. The answer lies in there. It has to. From the lens all the way to the sensor itself, check for marks, blemishes and dust. Those spikes are quite bad so hopefully the root cause shouldn't be too hard to pin down.
  16. Hi Olly, Nice to see it being controlled optically rather than with Photoshop jiggery-pokery. Too many seem to throw every Photoshop tool at their images, and I firmly believe that a good image has high S/N rather than high processing time in PS. Having had the fortune to image through the TEC140, though without the dedicated flattener, I can agree it's a fine scope. What, then, makes this combo better for controlling star flare than other high-quality imaging apos like the FSQs? I think what's obvious is that you can't beat high-quality optics... especially a nice big bit of glass. Paul
  17. Hi Olly, Just curious as to the low signal level used in your flats? Paul
  18. Arcsec/pix = 206 x p(um) / f(mm), where p is the pixel size in microns and f is the focal length in mm. E.g. a 5um pixel at 1000mm would be approx 1"/pix. Multiply the pixel guide error by your arcsec/pix and Bob's your uncle. The 206 comes from the 206265 arcseconds in one radian, divided by 1000 to reconcile the um and mm units. Paul
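The rule of thumb above is a one-liner (the function name is just for illustration):

```python
def plate_scale(pixel_um, focal_mm):
    """Image scale in arcsec/pixel: 206.265 * pixel size (um) / focal length (mm),
    where 206.265 is 206265 arcsec per radian scaled for the um/mm units."""
    return 206.265 * pixel_um / focal_mm

print(plate_scale(5.0, 1000.0))  # 5 um pixels at 1000 mm: ~1.03 "/pixel
```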
  19. Thanks Olly, It's always nice when theory and reality match up. I far prefer taking a few long images rather than short ones. Even just a handful of long exposures is still better to process than many short ones. For a more complete analysis including graphs you may want to consult http://www.narrowbandimaging.com/images/exposure_number_for_minimum_noise_rev.pdf As well as being a very prominent imager Richard is also very good with theory. Paul
  20. Bit of background info...http://stargazerslounge.com/topic/52031-choosing-a-ccd-camera/ If I were buying a camera now, the choice would be an ICX694 or the KAF8300 Filters and filter wheel will add to your budget so worth bearing in mind Hope that's useful
  21. Hi Olly. Cracking image there. I'm pleased to see agreement between theory and experiment. I too am convinced that long is better. Paul
  22. More than welcome. Always a pleasure to help people who want some advice. I am pretty confident that 20min subs on the Pleiades will give a nice-looking image, though there will be a lot of LP if you aren't using the LPR filter. There is a world of difference with one in place.
  23. I'm fairly sure that it's the filter thickness that really determines parfocality. I read an interesting analysis by Don Goldman, who showed that typical filter thickness tolerances can easily exceed the critical focus zone. So even if they are meant to be the same thickness, there still may be a small focus shift. I don't know if the Baader filters are all the same thickness; they are good quality though. I agree with Sara, the 3nm Astrodons are nice!
  24. Hi Rik, If you look at the SNR for multiple images combined then you see that the SNR is proportional to sqrt(N). This is precisely where the law of diminishing returns comes from. In fact the sqrt(N) also appears in the SNR for the long single sub. If you plot the function y = sqrt(x) you will see a curve that for large values of x becomes very flat. In simpler terms: if you have 4 images you double the SNR; if you have 9 images you treble the SNR; if you have 100 images the SNR goes up by a factor of 10. What about 101 images? Well, the SNR goes up by a factor of 10.05. Was the extra image worth it? That's the law of diminishing returns. Both equations have this feature built in. As for 5 min subs rather than 10 min when background limited, I suspect that there is more going on. Fixed pattern noise may play a part too, and may in some regard act like read noise (thinking about that in my head); this would also argue for longer subs. One other thing: no matter what (in the simple analysis), the equation for the single long sub will always give a higher SNR than many combined subs of the same total time. The difference between the two diminishes only with high signal; they would be identical only for infinite signal (and if that happens then we needn't argue about SNR!). What would I do? I would go for as long as is feasible while still gathering enough subs for good noise rejection. I don't think I would ever do, say, a single one-hour sub, although with a good dark subtraction and flats I wonder how well it would turn out. Interesting experiment. Gav, generally I would go for as long as possible (background and guiding etc permitting), as the SNR must always be higher with a few long subs rather than many short ones. If you don't have an LPR filter then I strongly recommend investing in one; cutting out the background noise will boost your SNR. Hope that clarifies a few things, Paul
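The sqrt(N) behaviour described above is easy to see numerically. A small sketch, illustrative only:

```python
import math

def stack_gain(n):
    """Relative SNR after averaging n equal subs: grows as sqrt(n),
    so each extra sub helps less than the one before."""
    return math.sqrt(n)

# 4 subs double the SNR, 9 treble it, 100 give a factor of 10,
# and the 101st sub only nudges it up to ~10.05.
for n in (4, 9, 100, 101):
    print(n, stack_gain(n))
```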
  25. Well, that "fog" as you put it is just background. I assume you live somewhere with at least a little light pollution; in that case, with an FSQ and a sensitive camera, you will pick up these photons. They will inject noise into the image and will almost certainly be the overwhelming source of noise. Even with the LPR filter in, it won't filter out all the LP. So actually the test I proposed could be worthwhile: if the noise is well above the read noise then it certainly indicates that you could do with better filtration. Narrowband imaging is one possibility; some who live in areas with horrendous LP have this as their only option. However, looking at your image it seems OK. In all probability you have some noise from the background and a bit of vignetting. I do recommend evaluating things numerically where possible; appearances can be deceptive. Also, having a good understanding of noise is pretty useful when imaging. After all, that's what it's all about!
