Posts posted by vlaiv

  1. I don't think you have anything in these values that would describe the sensitivity of the sensor.

    I'm not sure how the measurements were made, but the e/ADU part needs a flat panel to be established. If a flat panel was used, then it should be a measured value; if not, it was probably read from the driver instead.

    Let's examine the values and what they mean.

    1. Gain - this is usually an arbitrary number assigned in the drivers to represent the e/ADU value (actually its inverse, as people are used to the idea that higher gain means lower e/ADU, or higher ADU/e, which would be the reciprocal). ZWO drivers follow a rather good convention and use units of 0.1 dB of gain, so a gain setting of 200 is actually a gain of 20 dB. We will get back to that when examining relative gain and relative gain in dB.

    2. e/ADU - this one is straightforward. Light comes in photons, and sensor pixels record photons by capturing electrons in a potential well. When a photon hits a pixel there is a certain probability that an electron will be captured (the photon ejects an electron from the photosensitive material and the electron is captured by the potential well). This probability is the quantum efficiency of the sensor, so a sensor with 75% QE has a 75% chance of converting a photon to an electron. After the exposure there will be some number of electrons in each pixel's potential well. When the value is read out, it is converted to a number called ADU (short for Analog-to-Digital Unit), or what we know as the "pixel value" - an integer recorded in a fixed number of bits. This number is calculated like this: the number of electrons is divided by e/ADU, which gives a numeric value in ADU that is then rounded and recorded in a fixed number of bits.
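    As a rough sketch of that conversion (the e/ADU value and electron counts below are made up for illustration, not taken from the table):

    ```python
    # Electron count -> recorded pixel value (ADU), as described above.
    import numpy as np

    def electrons_to_adu(electrons, e_per_adu, bits=12):
        adu = np.round(electrons / e_per_adu)             # divide, then round
        return np.clip(adu, 0, 2**bits - 1).astype(int)   # clamp to bit depth

    # e.g. at 0.25 e/ADU: 1000 electrons -> 4000 ADU, near the 12-bit ceiling
    print(electrons_to_adu(np.array([100.0, 1000.0]), e_per_adu=0.25))
    ```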

    3. Read noise is the standard deviation of very short exposure pixel values once bias is removed. In this case it is expressed in electrons. It is measured by taking bias subs and stacking them in a way that removes the bias signal (an easy way is to average the first half of the subs into one stack and the other half into a second stack, then subtract the two stacks - what remains is pure read noise, scaled down by a known factor: for average stacks of N/2 subs each, the difference has standard deviation of 2 x read noise / SQRT(N), N being the total number of subs in both stacks). This gives a value in ADU, so you use e/ADU to convert back to electrons. Read noise in electrons can be compared between gain values - the value in ADU can't, because the intensity in ADU depends on the gain setting.
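    Here is a minimal sketch of that half-stack measurement, assuming `bias` is an (N, height, width) array of bias subs loaded elsewhere (the name is hypothetical):

    ```python
    import numpy as np

    def read_noise_adu(bias):
        n = bias.shape[0]
        half_a = bias[: n // 2].mean(axis=0)   # average of first half
        half_b = bias[n // 2 :].mean(axis=0)   # average of second half
        diff = half_a - half_b                 # bias signal cancels out
        # std of diff is 2 * sigma_read / sqrt(n), so undo that scaling:
        return np.std(diff) * np.sqrt(n) / 2

    # read_noise_e = read_noise_adu(bias) * e_per_adu  # convert to electrons
    ```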

    4. Full well capacity is essentially how many electrons any one pixel can capture. In this case the true full well capacity is not given, but rather the effective full well capacity, which is a consequence of the number of bits used to record each ADU value. That is 12 bits in this case, or values in the range 0-4095. So this column actually holds the values you get when you multiply 4096 by e/ADU (converting from the maximum ADU back to an electron count), as that is the maximum number of electrons you can effectively record with these sensors.

    5. Relative gain

    and

    6. Relative gain in dB

    are just a way to represent how e/ADU changes as you increase the "gain" setting. The first value, at the reference gain, is 1 - that is the "starting point". Each subsequent value is simply how many times smaller e/ADU is than at that base value (divide the reference e/ADU by any other e/ADU and you get the relative gain). Relative gain in dB is just the previous column expressed on the dB (logarithmic) scale. For the dB scale, look here:

    https://en.wikipedia.org/wiki/Decibel
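    A small sketch of how those two columns can be reproduced (the e/ADU values are hypothetical, not from the table above):

    ```python
    import math

    e_per_adu = [4.0, 2.0, 1.0, 0.4]   # one value per gain setting
    ref = e_per_adu[0]                 # reference (relative gain = 1)
    for e in e_per_adu:
        rel = ref / e                  # how many times smaller e/ADU got
        rel_db = 20 * math.log10(rel)  # same ratio on the dB scale
        print(f"e/ADU={e:4.1f}  relative gain={rel:5.2f}  ({rel_db:4.1f} dB)")
    ```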

    7. Dynamic range is calculated by dividing the full well capacity by the read noise and taking the base-two logarithm of that. It sort of represents "the number of bits in a binary system needed to record the useful signal range above read noise that a pixel is capable of recording" - note this is my explanation for it and not some general definition. I don't really find any use for this number in AP, although it is often quoted. I think it is more useful in regular, single-exposure photography, as it gives you an idea of how much histogram manipulation you can do on a raw image (adjusting exposure stops up/down and such - the higher the dynamic range, the more adjustment is possible).
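    A quick sketch tying points 4 and 7 together (the numbers are hypothetical):

    ```python
    import math

    e_per_adu = 0.8       # from the e/ADU column, at some gain setting
    read_noise_e = 1.6    # read noise in electrons at that same gain
    bits = 12             # ADC bit depth of this sensor

    effective_fwc = (2**bits) * e_per_adu                # point 4
    dr_stops = math.log2(effective_fwc / read_noise_e)   # point 7
    print(f"FWC = {effective_fwc:.0f} e-, dynamic range = {dr_stops:.1f} stops")
    ```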


  2. 2 minutes ago, PlanetGazer said:

    Sorry for this question, but won't the shield extend the focal length of the telescope? Which could be an advantage for fast telescopes.

    No, it won't. It is just a physical extension of the tube; it changes neither the distance between the mirrors nor their shapes, and those are the things that define the focal length of a scope (actually just the shapes, but the mirrors need to be properly spaced so you can reach focus and get proper illumination at the eyepiece). It would be a concern only if you extended the physical tube between the primary and secondary mirror, but this extension is outside of that zone.

  3. 2 minutes ago, Alan64 said:

    You want the entire interior of the telescope to be dead to all stray-light, to all reflections, for example...

    I forgot flocking - a very important thing with some telescope designs (those that use internal baffling are less affected).

    With the newtonian design, you want to extend the tube (it can be a blackened cardboard / plastic extension, like a dew shield) so that the focuser is at least 1.5 times the tube diameter away from the scope aperture. This again prevents stray light from reaching the tube walls opposite the focuser and improves contrast (even with flocking and light absorbing paint, it is better if light does not reach those places at all).

  4. 7 minutes ago, PlanetGazer said:

    Would love to hear about the arts and crafts that would reduce the effects of Light Pollution

    I can name a few that I'm aware of:

    - observe the target when it is highest in the sky and in the region least affected by light pollution.

    - use filters appropriate for the target you are observing to lessen the effects of light pollution (there are several choices of filter, but not all filters suit all targets - UHC tends to work best, but it is suited to emission nebulae, for example).

    - transparency of the sky is related to LP levels - aim for transparent skies for best results. This has a dual effect: target light is attenuated the least, and LP is scattered much less when the sky is very transparent. Avoid haze and too much moisture in the air. There are online services that give a transparency forecast - useful when planning a session.

    - when observing point sources like stars, atmospheric seeing also has an effect. A steady atmosphere lets star light stay concentrated in a single point instead of being smeared, so it is at peak intensity and easier to see - both to detect at all and to detect color, as the threshold for each depends on the amount of light.

    - if looking for faint stuff, be sure that you are dark adapted. This means shielding yourself from local light sources and creating a dark environment. It can be as simple as putting a dark cloth over your head while you are at the eyepiece. Mind you, the point is not to cover your head but to shield yourself from the surrounding light - so make the cloth long enough, and possibly keep it closed from below with one hand or something.

    - dark adaptation hurts your color vision, so if you plan to look at doubles trying to see their colors, or when observing planets, don't get fully dark adapted - keep some soft light on nearby (if you are in the back yard, for example, soft light coming from your windows is enough - no need to turn on the porch light or anything like that). On nights of the full moon, moonlight will be enough to keep you from dark adapting.

  5. 3 hours ago, Starwiz said:

    Can anyone direct me to a tutorial that shows how to replace narrowband stars with RGB?

    I've searched but can't find what I'm looking for.

    Thanks

    John

    I don't know of a tutorial on how to do it, but I can describe sensible steps to perform the operation.

    In principle, I would be able to do all the steps but one - and for that one I'm not sure I could do it without first searching the internet for some ideas, or even a small tutorial on that particular part.

    The problematic step is the star layer mask - you need to be able to isolate the stars in one image, and that is basically it. DSS for example has an option to create a star mask - that could be very helpful (I have not tried it, but I know it's there). Other astro software probably has something similar.

    Anyway, you take your LRGB/RGB image and create a nice looking star field. There are a couple of ways to do it; I prefer the RGB ratio method, since you don't have to worry about color being lost when doing the stretch.

    The next thing is to process your NB image as you normally would, and at the end take the RGB image, apply the star mask, and layer it on top of the NB image with 100% opacity where the stars are (the background should be transparent).

    Just to point out something - your NB and RGB images need to be aligned / registered to the same reference frame (that is sort of obvious, but just in case ...).
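    The compositing step itself is simple. A minimal numpy sketch, assuming `nb` and `rgb` are aligned (height, width, 3) float arrays and `star_mask` is a (height, width) array in [0, 1] that isolates the stars (all names hypothetical):

    ```python
    import numpy as np

    def replace_stars(nb, rgb, star_mask):
        m = star_mask[..., np.newaxis]   # broadcast mask over color channels
        return m * rgb + (1.0 - m) * nb  # stars from RGB, the rest from NB

    # composite = replace_stars(nb, rgb, star_mask)
    ```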

  6. I have a slightly different method of locating M31 and M33 than those given above (maybe it will be useful to someone):

    [attached image: star chart marking Cassiopeia, the star hop, and the positions of M31 and M33]

    My approach is to find Cassiopeia and then find the string of 4 stars underneath it (all bright, and visible even in quite a bit of LP). Going from left to right, identify the 3rd star and then start "raising" from it, by first finding a single star and then a pair of stars above it (all marked in the image). M31 is just slightly "above" the two stars. M33 is about the same distance away as the pair of stars, but in the other direction ("down").

  7. 31 minutes ago, Stub Mandrel said:

    ED 66 66mm x 400mm

    130PD-S 130mm x 590mm (with coma corrector)

    150PL 150mm x 1200mm

    Using the f/3.6 rule, I get optimum pixel sizes of  1.68, 1.26 and 2.2um.

    If I use a 0.5 focal reducer these pixel sizes halve.

    The ASI183 has 2.4um pixels and the ASI1600 3.8um; to me the ASI183 seems the better match?

    I know there are other ways of calculating the optimum pixel size and it depends on seeing & guiding as well. On all but the 150PL my RMS guiding on a good night is <=1 pixel with the ASI183.

    With 150PL I'd  experiment with binning in processing.

    I'll give you my view on this - it might be useful to you.

    At 400mm, in the case of the ED66, the ASI1600 is going to give you 1.96"/px. In my view this is an almost optimal sampling rate for a 66mm scope. You can expect more than 3" FWHM stars with such a scope (for example 3.2" FWHM in 1.5" seeing with 1" RMS guide error).

    We can say that FWHM / 1.6 is the optimal (or close to optimal) sampling rate, so 3.2" FWHM means a 2"/px sampling rate.
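    For reference, this is all the arithmetic involved:

    ```python
    def pixel_scale(pixel_um, focal_mm):
        """Sampling rate in arcsec/px (206.265 = arcsec per radian / 1000)."""
        return 206.265 * pixel_um / focal_mm

    print(pixel_scale(3.8, 400))   # ASI1600 on ED66 -> ~1.96 "/px
    print(3.2 / 1.6)               # optimal rate for 3.2" FWHM -> 2.0 "/px
    ```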

    Therefore the ED66 and the ASI1600 are a very good match for wide field shots. You can even add a flattener with a reduction factor and you won't be undersampling much, if at all (in just slightly poorer seeing you will be at the optimum again).

    With my 80mm at F/4.75 (~380mm FL; F/6 native with x0.79 reduction) I happily use the ASI1600 and I don't feel the images are undersampled at all - the results look very good.

    With the 130PDS at 590mm you will be at 1.33"/px. This will actually be closer to oversampling in most circumstances than to undersampling. You need ~2.13" or smaller FWHM in order not to oversample. In the same conditions as above - 1.5" seeing and 1" RMS guide error - at this aperture size you are likely to get around 2.9" FWHM stars. To utilize 1.33"/px to its full potential, you would need either 0.5" RMS guide error in 1.5" seeing, or perhaps 0.8" seeing with 0.8" guide RMS. With a 1" RMS guide error it is virtually impossible to reach 2.13" FWHM at 130mm of aperture, regardless of the seeing.
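    One way to reproduce FWHM estimates like these is to add the seeing FWHM, the guiding blur and the aperture (Airy) contribution in quadrature - note this quadrature model is my approximation, not a general definition:

    ```python
    import math

    def star_fwhm(seeing_fwhm, guide_rms, aperture_mm, wavelength_nm=550):
        guide_fwhm = 2.355 * guide_rms                 # Gaussian RMS -> FWHM
        airy_rad = 1.02 * wavelength_nm * 1e-9 / (aperture_mm * 1e-3)
        airy_fwhm = math.degrees(airy_rad) * 3600      # radians -> arcsec
        return math.sqrt(seeing_fwhm**2 + guide_fwhm**2 + airy_fwhm**2)

    print(star_fwhm(1.5, 1.0, 130))   # ~2.9" for the 130PDS example
    print(star_fwhm(1.5, 1.0, 66))    # ~3.3" for the ED66 example
    ```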

    With the 150PL and its 1200mm focal length, you will be at 0.65"/px. If you bin x2 you will be at 1.3"/px - just look above to see whether that is undersampling. I would argue that if you have worse guiding results with such a long scope, you will still be oversampling by quite a bit. With this scope I would rather go for x3 binning than x2.

    You might say - what would be the purpose, as you already have a setup that does 2"/px (the ED66)? But these two setups will be rather different. The ED66 is going to do wide field stuff, while the 150PL is going to be really fast in comparison (when binned x3), but with quite a narrow field - so suitable for small targets.

    This is a good example of how a slower scope can be "faster" than a fast scope. If you look at these two setups in a different way - the aperture at resolution way - you can see how much faster the 150PL will be. You have 150mm vs 66mm, both sampling at 2"/px. That is more than twice the aperture by diameter (more than x4 the light gathering area), and in terms of SNR the 150PL will be at least twice as fast.
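    In numbers (a quick sketch of the aperture-at-resolution comparison):

    ```python
    ed66, pl150 = 66.0, 150.0    # aperture diameters in mm
    area_ratio = (pl150 / ed66) ** 2
    print(area_ratio)            # ~5.2x the photons per pixel at the same 2"/px
    # in the target shot noise limited case, SNR improves ~sqrt(5.2) ~ 2.3x
    ```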

     

  8. 57 minutes ago, Jonk said:

    Exciting!

    Please keep this thread updated with your findings / experience as it will help people like me who are on the verge of ordering one of these mounts.

    Hopefully, you will be installed and producing results quickly.

    In the meantime, here is what I intend to do with my pier... anyone have any advice / comments?

    From this:

    [image: photo of the existing pier]

    To this (or something very close)...

    [images: 3D CAD renders of the modified pier, front and side views]

    No, it won't be floating; it's WIP 3D CAD to enable me to confirm what will work and to get local quotes.

    I've also emailed Lucas Mesu for some dimension details, so hopefully he'll reply quickly and I can get on with it.

     

    I think that, depending on your latitude, two cuts and welds should be enough to modify the pier.

    At least that is what I was thinking, as I'm at 45 degrees (give or take a few minutes) - I need one cut at 45 degrees and one at 90 - that should be an easy thing to do, I reckon.

  9. 1 hour ago, dph1nm said:

    Strictly, not quite true, as if you did this you would get more sky than necessary. There is an optimum sampling area for best S/N which is some fixed fraction of the FWHM of the star, but I can never remember exactly what it is.

    NigelM

    Would it depend on the difference between stellar magnitude and sky surface brightness? (Something tells me it should, but I have not given it much thought.)

  10. 18 minutes ago, Jessun said:

    It's just optics, and the 'sensor' should be thought of as a white postage stamp rather than something that is segmented.

    This is the one I'm not comfortable with - both due to the nature of light (it comes in photons) and due to the fact that there is no continuous measuring device that would give the exact x/y position of a photon hit (nor could there be - the uncertainty principle).

    For me it is much easier to consider finite pixel size (as one more variable), and it also leads to correct results in terms of calculated / obtained SNR in a given time.

  11. 1 hour ago, DrRobin said:

    A focal reducer doesn't change the F ratio of the scope, it changes the size of the sensing element. Your 200mm RC is still F/8, but the pixel size has effectively doubled in size (4 x the area) and this makes it more sensitive at the cost of resolution.

    I'm not sure there is a simple way to say it and still be 100% correct.

    You can say that it alters the F/ratio, as it creates a light beam that has the properties of the altered F/ratio - the system behaves as if it had a shorter focal length with the same aperture. The light beam converges as if it came from a shorter focal length and the same size aperture - in that sense it "changes the F/ratio": whatever you put after the focal reducer "experiences" a different F/ratio. On the other hand, saying that the F/ratio of the scope changed is not correct either - the scope still has a certain focal length and a certain aperture.

    Similarly, you don't change the size of the sensing element - the pixel is still the same physical size. We might say that the "mapping" between angular size and the physical size of the pixel changed. A similar thing happens with binning - the physical pixel size stays the same, but the "logical", or perhaps better termed "effective", pixel size is increased. In a similar way a reducer alters the "effective" focal length and "effective" F/ratio of the system.

    1 hour ago, lukebl said:

    Reading this thread reminds me that this subject crops up rather frequently, and always, I mean ALWAYS, ends up in a lengthy, conflicting and complicated debate. At the end of which I am none the wiser. And I'm still none the wiser!

    I have a personal issue related to this. I like to measure asteroid occultations, and use a RunCam Night Eagle camera which operates at a fixed frame rate of 25 per second. It's very sensitive, and with my 200mm RC with a 0.5x focal reducer making it effectively f/4 (I think) it will easily capture stars to about 13th magnitude even with such short exposures. Most occultations don't pass over my obs, so I've decided to have a mobile lightweight rig, using either my ST80 f/5 scope, or my 90mm f/13.9 Cass with focal reducer (making it f/7?).

    Anyway, my tests have shown that neither scope can capture stars fainter than about mag 10, and the 'slower' Cass performs better than the ST80, so I will have to stick to brighter occultations when operating away from home.

    The point being that this says to me that the aperture really is everything. I don't understand the F-ratio myth, and probably never will. It's all above my pay grade!

    The best thing to do, when you have this requirement of capturing a single star and doing measurements on it, is to make sure the star is covered by a single pixel. The star profile in the focal plane plays an important role here, and in principle the best results come when you have a single read noise "dose" per star intensity readout - which means placing the star on a single pixel. The second important thing is of course aperture - you want as many photons coming from the star in a given time as possible to be captured.

  12. I like the hint about choice of units :D

    The train is one bridge length away from the start of the bridge, and it is moving at 18 km/h (x3 the speed of the man).

    As for the F/ratio myth - whether it is in fact a myth or the truth depends on the formulation of that F/ratio myth.

    If we examine the statement:

    "Having two scopes of different F/ratio, there is a relationship in corresponding times for attaining given SNR which depends solely on F/ratio of said scopes", then I have to say - it is a myth.

  13. 3 minutes ago, Jessun said:

    or go down the route that astrophotography is somehow different from normal photography just because we have some very bright, small stars in the mix

    Oh, but it is a very different thing :D

    In principle they are the same thing, but normal / daytime photography operates "subsonic" while AP operates "supersonic" - two different regimes of the same thing (like air drag).

    With daytime / normal photography you have the following "restrictions":

    - Target shot noise is by far the most dominant type of noise - there is plenty of signal.

    - The dynamic range of the target is often quite small.

    - You deal with relatively short exposures.

    - You capture the image in a single exposure (there are of course exceptions - like depth of field stacking, ultra fast exposures and such, but let's not consider those "normal" photography - again, special regimes).

    With AP things are quite different:

    - Target shot noise is often behind other noise sources (if it's not, then one is very lucky to have pristine skies, almost no read and thermal noise, and a relatively bright target) - there is minimal signal per exposure (often less than one photon per pixel per exposure).

    - The dynamic range is a couple of orders of magnitude larger than in normal photography.

    - You deal with a larger number of long exposures.

    So you are quite right to say that in principle they are the same thing - sensor / lens / light - but due to the different regimes of operation, they are practically very different.

  14. 13 minutes ago, Jessun said:

    Things go funny when we include the mystical spot target or the (almost as confusing) baffling choice of sensors and pixel sizes.

    In essence F ratio is simply dealing with the theory of optics. It has only a number, and no frills to it. F5 is F5. Not F5 Special because of this or that.

    Draw a simple telescope on a paper: one lens, one tube, one focal plane with an image circle. No measurements needed. Imagine it's a small telescope, then the image circle is small. And vice versa for a big one. A simple division is all the maths needed. Point whatever sized telescope you envision at the mother of all flat panels and then put a photon counter the size of a Planck length (or half, lol), and for any sized version of your design the photons will come in at the same rate at the counter wherever you count, let's say smack in the middle or 10% off centre or anywhere, proportionally.

    There is no real mystery to this. Confusion kicks in when there is a large fixed-size pixel at the end.

    Aperture determines the max theoretical resolution and surely doesn't dictate the faintest object you can detect. A photon is a photon. It won't aim squarely at only large telescopes if they come from afar...

    A lower f-ratio is by definition always faster than a higher one. There is no way around this. One only has to do away with the idea of finite sampling points in the image circle. Imagine an infinite number instead to better understand F ratio.


    /Jessun

    There is no need to define the F/ratio in any terms other than how it is defined - as the ratio of focal length to aperture.

    If you put a physical constraint on the "photon counter" - in terms of its absolute size being always the same and finite - then yes, the F/ratio of two optical systems will indeed define their respective speed in terms of the average number of detected photons per unit time.

    I'm just saying that a special case can't be used as the basis for understanding the whole class of phenomena. This idea comes from daytime photography and the use of camera lenses. It works in that domain because of two important facts: target shot noise is the dominant source of noise (plenty of light bouncing off the object being photographed), and when working with a single camera and exchanging lenses you are working with fixed size pixels. Daytime photography does not have the concept of binning to change pixel size.

    When I first started thinking about AP, I quickly realized that the F/ratio and its use are very limited in determining the performance of an imaging system. I fiddled around with the math and concluded that aperture at resolution is a much more "natural" way of thinking about speed of acquisition in AP. I tried to formulate a "one number says it all" approach, but while doable, it is too complicated for quick mental comparison (square roots and such).

    After all of that, I concluded that aperture at resolution is the better approach because it lends itself to a more or less "natural" way of choosing a scope / camera combination. Instead of thinking of F/ratio - the speed of the scope - one should go the other way around and follow steps similar to these (sketched in code after the list):

    1. I've got a mount that is capable of X precision in tracking / guiding

    2. My average or good seeing (depending on what you base your decision on) is Y.

    3. From 1 and 2, I can conclude that my average FWHM will be Z, and therefore I need to sample at resolution S to capture W% of the available data (in terms of resolution).

    4. I have a choice of cameras ... (and then you think of QE, sensor size, technology, pixel size, ...) and a choice of scopes (regardless of F/ratio) - which combination (binning included) will offer me the most aperture at target resolution S and still fit the other criteria (mount capacity, budget, preferred optical design, ...)?
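    A rough sketch of steps 1-4 in code - the numbers and the candidate list are illustrative assumptions, and the FWHM estimate is a simplified quadrature guess (aperture term ignored):

    ```python
    import math

    guide_rms = 0.8   # step 1: mount guiding precision in arcsec RMS (X)
    seeing = 1.8      # step 2: typical seeing FWHM in arcsec (Y)

    # step 3: estimated star FWHM (Z) and target sampling rate (S)
    fwhm = math.sqrt(seeing**2 + (2.355 * guide_rms)**2)
    target_scale = fwhm / 1.6

    # step 4: compare candidate combinations at / near the target resolution
    combos = [  # (name, aperture mm, focal length mm, pixel um, binning)
        ("ED66 + ASI1600", 66, 400, 3.8, 1),
        ("150PL + ASI1600", 150, 1200, 3.8, 3),
    ]
    for name, ap, fl, px, b in combos:
        scale = 206.265 * px * b / fl
        print(f"{name}: {scale:.2f}\"/px (target {target_scale:.2f}\"/px), "
              f"aperture {ap} mm")
    ```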

     

  15. 7 minutes ago, DrRobin said:

    Binning doesn't help that much. If you use a 2x2 bin (4 times the area), the way the signal is read it is more like 2x the signal. To get back to 4x signal you have to use a 4x4 bin, roughly speaking.

    Smaller pixel sizes often have a lower ratio of light sensitive area to total area, due to the need for readout registers. It's so difficult to compare different CCDs.

    If you look at my post from 2013 you will see I differentiated between a DSO (the subject of this thread) and a star. This is important if you are considering point sources versus light spread out over an area.

    No, binning provides exactly the signal improvement that the number of pixels suggests. Add 4 pixels together and you will have x4 the signal strength (for a uniform signal over those four pixels). It is the SNR that doubles in that case - same as when stacking 4 images: SNR improves by a factor of SQRT(number of stacked samples).
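    A small simulation of that claim - summed 2x2 patches of a uniform, shot noise limited signal show x4 the signal and x2 the SNR:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    mean = 100.0                                     # photons per pixel
    pixels = rng.poisson(mean, size=(100000, 2, 2))  # many 2x2 patches
    binned = pixels.sum(axis=(1, 2))                 # bin by summation

    print(binned.mean())                  # ~400 -> x4 signal
    print(mean / np.sqrt(mean))           # single pixel SNR: 10
    print(binned.mean() / binned.std())   # binned SNR: ~20 -> doubled
    ```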

    The sensitive area of a pixel is "included" in the QE of the sensor, so in principle you don't have to worry about that - the total area of 2x2 pixels in relation to the total sensitive area is the same as for a single pixel, so binning does not change QE relative to a single pixel (it neither increases nor decreases it).

    All of this is of course for extended sources - surface brightness. With point sources like stars the same applies, except that you need to factor in the PSF of the optical system (there, larger aperture also has an edge, as it provides a tighter PSF than a smaller aperture in the same conditions).

    We don't need to compare different CCDs. Here is another example:

    You have two 6" scopes. One is F/8 and one is F/5. Use the same sensor on both of them.

    The only difference being that you bin pixels x2 on the F/8 and use no binning on the F/5 - which one will be faster? The F/8 one. In this case we have the same aperture but different resolution (pixel gathering area). The F/8 binned x2 will have the same speed as an F/4 scope of the same aperture.
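    You can check that with the same per-pixel photon rate argument (the 3.8um pixel size is just a placeholder - it cancels out in the ratio):

    ```python
    def speed(aperture_mm, focal_mm, pixel_um, binning=1):
        scale = 206.265 * pixel_um * binning / focal_mm   # "/px on the sky
        return (aperture_mm**2) * (scale**2)  # ~ photon rate per (binned) px

    f8_bin2 = speed(150, 1200, 3.8, binning=2)   # 6" F/8, binned x2
    f5 = speed(150, 750, 3.8)                    # 6" F/5, unbinned
    print(f8_bin2 / f5)   # 1.5625 = (5/4)^2 -> F/8 binned x2 acts like F/4
    ```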

  16. In that PDF that I linked, there are a couple of projects that are worth looking up to see what is involved.

    One of them is meteor detection. Another interesting one is the Itty-Bitty radio Telescope (IBT) - that one is just a satellite dish and a receiver (satellite detector).

    Maybe that second one could be an interesting starting point, as you can "upgrade" it bit by bit. It is just a dish connected to a satellite detector - one that "beeps" when you have a signal. So you point the dish at something and the detector beeps - you have a detection of a radio source! Most often you start by finding the Sun this way.

    Possible upgrade paths would be: mounting the dish on an EQ mount so you can use the handset or a computer to point it at a wanted location; or connecting the satellite detector to a computer via some sort of sampling device - like a simple external (USB) audio card that can sample the input - so instead of listening to the "beep" of the detector, you record it on the computer.
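    A minimal sketch of that last upgrade, assuming the detector output is wired into a USB audio input and the Python `sounddevice` library is available:

    ```python
    import numpy as np
    import sounddevice as sd

    RATE = 44100   # audio sample rate in Hz
    BLOCK = 1.0    # seconds per recorded block

    while True:
        block = sd.rec(int(RATE * BLOCK), samplerate=RATE, channels=1)
        sd.wait()                              # wait for the block to finish
        power = float(np.mean(block**2))       # mean square = signal power
        print(f"relative power: {power:.6f}")  # log instead of listening
    ```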

    Look here for resources:

    https://opensourceradiotelescopes.org/itty-bitty-radio-telescope/

  17. Do you plan to just get a basic understanding of the matter, or are you interested in doing some radio astronomy?

    Radio astronomy is one of those fields of astronomy where you can't simply have a "hands on" approach like visual observing. You need at least some level of technical background to understand what you are doing. The results of radio observations are often just measurements of sorts - graphs and charts - rather than anything you can instantly see or take a photo of.

    You can produce images, but it is rather complicated, and as an amateur - with any sort of gear an amateur can house and operate - it will be a very low resolution type of image, so in principle you can only do "wide field" images, like those of the Milky Way.

  18. 8 minutes ago, DrRobin said:

    I was making the assumption that it would be the same camera on both.  If you change the pixel size then everything changes, oh and you might as well change location to above the atmosphere where there is less loss.

    One does not need to change the camera - you can bin your pixels to get a larger collecting surface / lower sampling rate.

    8 minutes ago, DrRobin said:

    I doubt this is true either: both systems end up with the same amount of sky per pixel. If both cameras have the same sensitivity (difficult to achieve) then both will image in the same time, as they both have the same number of photons to play with.

    You don't have to be doubtful about it - do the math and you will see.

    Here is a very simple example that is enough to show this. Let's say we have two scopes - one with 8" and one with 6" of aperture. We match them with sensors / pixel sizes so that both give 1"/px. This means one pixel will gather all the photons from a 1"x1" patch of the sky that were collected by the respective aperture and focused onto that pixel (all the photons from that 1"x1" region falling on the aperture as parallel rays end up on that given pixel and accumulate as signal). The next step is fairly easy - 8" will collect more photons than 6" -> a pixel on the camera attached to the 8" scope will gather more photons and have a stronger signal than a pixel on the camera attached to the 6" scope.

    Better SNR in the same time = same SNR in a shorter time.
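    The same comparison in numbers:

    ```python
    ap8, ap6 = 8.0, 6.0      # apertures in inches (units cancel in the ratio)
    print((ap8 / ap6) ** 2)  # ~1.78x stronger per-pixel signal at matched 1"/px
    ```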

    This is why I always say - don't think in terms of F/ratio or the speed of the scope, but rather think in terms of "aperture at resolution". More aperture at the same resolution will be faster. That is the primary reason larger scopes are better for DSO imaging - provided you can pair them with a sensor that gives you the wanted / reasonable resolution (possible binning included in the consideration).
