

vlaiv


Posts posted by vlaiv

  1. 57 minutes ago, FMA said:

    I didn't understand it all, but I assume in your opinion it is less accurate than this...

    https://www.firstlightoptics.com/skywatcher-mounts/skywatcher-heq5-pro-synscan.html

    And it is 230 pounds less...

    Am I being stupid to think that having an equatorial and an alt-az in the same tripod is something desirable?

     

    I'm not sure who you are referring to, but if that response was directed at me, this is what I was saying:

    If you want a mount that you will use for both visual in alt-az mode and imaging in EQ mode, with both being equally important to you, and you want something more lightweight on a limited budget - then it is a good choice.

    If your budget is not limited and you don't mind the weight - the AZEQ6 is a better mount for both imaging and observing - well, primarily for imaging.

    If you are limited by your budget and imaging is very important to you, but visual not so much - then look at the HEQ5, and understand that the stock version is not going to perform to the best of its ability, but once you tune and mod it - it will perform better than the AZEQ5 for imaging.

    From what I gathered, I think you will be happy with the AZEQ5, as you want to both observe and do some casual imaging - this is why I linked that thread where an AZEQ5 mount has been tuned a bit. I also asked on that thread whether the owner could confirm what sort of guiding results could be expected from the AZEQ5 - and yes, I was right about that - about 1" RMS or a bit less. This is in the range of a stock HEQ5, so guide performance for casual imaging will be quite good (although that also means you will need to tweak your mount, as most SkyWatcher owners do at some point when imaging, to bring the best out of it).

  2. 2 minutes ago, Chefgage said:

    From a GIMP point of view, is this what it refers to as interpolation - using the linear / cubic methods of resizing?

    Indeed - when I say there are different methods of resizing - I mean interpolation. Binning is a sort of interpolation algorithm, but it works only with integer downsampling factors.

    In fact - binning 2x2 is mathematically equivalent to a 50% size reduction plus a (0.5px, 0.5px) translation using the bilinear interpolation method (the average of 4 adjacent pixels is the same as linearly interpolating at the point on the corner joining all 4 pixels - if pixels are considered to be squares).
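
    If you want to convince yourself of that equivalence, here is a minimal numpy sketch (not from the original post - just an illustration of the claim above):

    ```python
    # Minimal numpy sketch: 2x2 average binning gives the same values as
    # bilinear interpolation sampled at the corner shared by each group
    # of four pixels.
    import numpy as np

    img = np.arange(16, dtype=float).reshape(4, 4)

    # 2x2 binning: average each non-overlapping 2x2 block
    binned = img.reshape(2, 2, 2, 2).mean(axis=(1, 3))

    # bilinear interpolation at the shared-corner points is just the
    # average of the four surrounding pixels
    corners = (img[0::2, 0::2] + img[0::2, 1::2] +
               img[1::2, 0::2] + img[1::2, 1::2]) / 4

    assert np.allclose(binned, corners)  # identical results
    ```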

    I mentioned one advantage of binning - no correlation introduced. I don't know how much you know about interpolation, but different interpolation algorithms use more than just the surrounding pixels to calculate a value when resampling - bilinear uses the 4 nearest pixels, but bicubic uses 16 pixels around the wanted point (a 4x4 matrix - as you need 4 values to fix a cubic function, 3 values to fix a quadratic and only 2 to fix a linear one). There are of course other interpolation methods, and each of them will have different properties with respect to detail in the image and impact on noise.

  3. 9 minutes ago, FMA said:

    Vlaiv, your answers are always pure gold for beginners...

     

    What do you think about this one? It's the one I'm in love with.

     

    https://www.firstlightoptics.com/skywatcher-mounts/skywatcher-az-eq5-gt-geq-alt-az-mount.html

     

     

    I don't have much info on that mount. Most of the info on mounts that I have comes from reading the specs and, more importantly, first-hand experience. I don't remember reading any first-hand experience with that mount from people doing AP. That does not mean it is not a good mount.

    I know people owning the older brother of that mount - the AZEQ6 - who are perfectly happy with their mounts. AZEQ6 performance is similar to that of the EQ6 - maybe a bit better than the old EQ6 and in line with the EQ6-R, because both the AZEQ6 and EQ6-R have belt drive.

    The AZEQ5 also has belt drive, so that is a plus.

    I've read once that someone found the lead that connects the RA and DEC axes to be awkward. This one:

    [image: lead connecting the RA and DEC axes]

    The fact that this mount has both AZ and EQ operation does mean it is preferable in a rather specific situation - where you want to use it as both an imaging and a visual mount (I'll address this point later with some additional info).

    As for imaging performance, I can only sort of guess what it will be like based on the numbers. On this page, we have information on the internals of this mount:

    http://eq-mod.sourceforge.net/prerequisites.html

    One thing that slightly concerns me is the 0.25" stepper resolution - it is almost twice as coarse as an HEQ5 class mount, and just a bit more precise than an EQ5 class mount. With stepper motors this limits how well guided this mount can be. I think that realistically you can expect about 1" RMS guiding with it, maybe down to 0.8" RMS on a good night - which in turn means that the max resolution you can go to with such a mount is 1.5"/px - 2.0"/px. That is OK - it suits a smaller scope, up to about 700mm focal length with sensibly sized pixels (or some sort of binning to get you to that resolution).
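
    As a sanity check on those numbers, here is the usual plate-scale formula in a couple of lines of Python (the pixel size is illustrative - plug in your own camera's value):

    ```python
    # Plate scale in arcseconds per pixel:
    # 206.265 * pixel size (um) / focal length (mm)
    def sampling_rate(pixel_um, focal_mm, binning=1):
        return 206.265 * pixel_um * binning / focal_mm

    print(sampling_rate(3.75, 700))             # ~1.1 "/px - oversampled for 1" RMS guiding
    print(sampling_rate(3.75, 700, binning=2))  # ~2.2 "/px - near the 1.5-2.0 "/px range
    ```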

    It looks like this mount has PPEC, and that is good. My HEQ5 does not have this feature and I use VS-PEC in EQMod - which means that I can't really use my mount for both AP and visual, because I would like to be able to use the mount for visual with only the hand controller, without dragging a laptop outside. But that messes up my periodic error correction, since it is not permanent and I need to park to the exact position each time I finish using my mount, otherwise PEC will go out of sync (no encoders on the mount).

    The AZEQ5 has encoders and PPEC, so it is well suited to both roles at the same time (well, not at the exact same time - but you know what I mean - you don't have to do anything special if you image one night and observe the next in AZ configuration).

    If you want better precision in imaging then go with either the HEQ5, EQ6-R or AZEQ6 - all of which are heavier mounts but offer better precision for AP. Mind you - a stock HEQ5 / EQ6 is going to vary greatly sample to sample in its performance, and only once you tune and mod it will it deliver its best performance.

    I stripped my HEQ5 and changed all the bearings, did the belt mod, replaced the saddle plate and changed the tripod, and now it guides at 0.5" RMS. So if you are going to keep things stock - I'm not sure the price premium (of an EQ6 class mount) and the weight are worth it if you have your heart set on the AZEQ5.

    Hope this helps

     

  4. This is not an easy topic, and in fact many will point out (me included) that the mount is the most important piece of kit when it comes to imaging.

    In the budget category - and all the mounts you've listed are budget category mounts - we can roughly say there are two important things you need to pay attention to:

    - weight capacity

    - accuracy of tracking

    Btw, I said budget mounts - but that does not mean all of these are poor mounts - a lot can be accomplished with them. I have an HEQ5, for example, and it serves me very well within its limits.

    Weight capacity - you need at least 50% of headroom with respect to all the imaging gear placed on the mount (this is just a recommendation - the more stuff you put on the mount, the greater the chance something will not work as it should: the mount will not track properly, wind will be more of an issue, something will not work as expected). My HEQ5 is rated at 15-18kg (depending on the source) and I've put as much as 15kg on it for imaging and it worked, but I did not feel comfortable with that much weight on it. Nowadays I limit the weight on the mount to about 10-11kg (maybe 12kg, but no more), so keeping about 50% of headroom is very sound advice.

    Accuracy of tracking - most of these budget mounts will have issues unguided. Most suffer from periodic error and greatly benefit from guiding. The longer the focal length (or to be more specific - the higher the sampling rate you use) - the shorter your exposures will need to be to avoid star trailing.

    The exception to the above are mounts with encoders, and in this price range, as far as I know, only iOptron offers encoders. Encoders are really expensive and it is much cheaper to get a guiding kit. Guiding also solves some issues that encoders can't (at least not without very sophisticated software and building a sky model - something that is done in permanent setups, as it is too time consuming to do each session).

    The EQ3 / EQ35 are really very basic mounts that can hold a camera + lens and very small scopes. You will be very limited in exposure length with those mounts, and they will benefit greatly from guiding.

    The EQ5 is a step up from the above two in terms of weight capacity and performance, but the same holds - smaller / lighter scopes (up to say 6-7kg of total weight; btw, when I say total weight that does not include counterweights - just the scope, camera and any other gear attached to it) and, if possible, guide it.

    I would personally avoid the AVX mount, as I've read that it suffers from some issues that make it a less than desirable imaging platform. Take this with a grain of salt, as I've never even seen one live, let alone used one.

    iOptron mounts are said to be good, and I've read many excellent reports on their performance. I would personally probably go for an iOptron with encoders (EC model) if I didn't plan to guide. Since you are in Texas, you'll probably get better prices on iOptron than we get here in Europe, so that is a plus.

    If you can - get the iOptron CEM40EC. It is over your budget at about $3000 - but it is an HEQ5 class mount with an 18kg payload, it has encoders, and it is lighter.

    The CEM25EC is an EQ5 class mount - so keep the weight on it to 8-9kg (it has a 13kg load capacity) - and again it has encoders, which you want if you don't want to guide.

    That is about mounts; now about scopes and cameras.

    The ASI120MC is a very good planetary camera and a very good guide camera. You can use it for EAA/EEVA with the 130SLT, but that is about it. You can try imaging with it - I've done it and even managed some decent images - but that sensor is very small.

    If you are serious about imaging - you will need better imaging gear. Look into the Skywatcher ED80 + flattener/reducer, or the Skywatcher 130PDS Newtonian plus coma corrector, and a Canon DSLR. Something like a used Canon 450D or similar will be a very good option.

    Imaging is rather serious business if you want to do it right, and there are many aspects to it; the best thing you can do is a lot of research before you commit to particular gear. The book "Making Every Photon Count" is said to be very good for anyone planning to get into astro imaging. Of course, SGL is a place where you can read a lot about all the topics that interest you, ask questions, and hopefully get decent answers.

     

  5. I think that you have all the gear you need to do this - it just takes a bit of fiddling with the data you capture in order to turn it into something useful.

    You have an ASI120MC, and I presume you have the small lens that comes with it - the all-sky lens? If not - it is really not that expensive to get one. With it you can create images like this:

    [image: all-sky view]

    That gives you coverage of the whole sky from a certain vantage point. People sometimes use such images to create LP maps for their location - like this:

    [image: light pollution map]

    You could mount such a system on your car and gather similar images from different locations, and then it is down to interpreting that data - you could do some sort of 3D visualization, as for each location you have the total LP coming from each direction. Multiple such points would help build a model of LP in a certain volume of sky - and then you could try to simulate the ground lighting that produces such sky glow.

    I mean, this is advanced stuff, as you would need to model scattering in the atmosphere and the types of illumination from the ground. But even just a set of images like the one above, with a map and the directions of the main LP sources - and the coordinates where those directions intersect - could point to a particular LP source (a bit like triangulation of radio sources).

    For example, in the above image it is evident that the strongest LP comes from about 350° (which is not surprising, as that is the direction of my home town from the location where the image was taken, and it glows bright :( ) - in your images, each such direction would give one line of intersection on the map.

     

  6. 15 minutes ago, happy-kat said:

    Is the x1.6 significant to camera sensor size or a universal figure to use for any camera?

    That 1.6 is related to the Nyquist sampling theorem and is approximately the sampling rate that corresponds to a given FWHM. If FWHM is expressed in arc seconds, then dividing by 1.6 will give you the sampling resolution in arc seconds per pixel. If FWHM is expressed in pixels - then dividing by 1.6 will give you the "pixel size" - or the factor to reduce your image by. In the above example, if FWHM is 4px then the ideal pixel size is 2.5 - so you need to reduce the image by a factor of x2.5.

    5 minutes ago, Chefgage said:

    If by bin you mean cropping then no, I do not. But I have only just started doing astrophotography really, so I'm still learning the basics.

    Binning is a form of resizing the image down, and it has some advantages over regular resizing - it reduces noise better. Every resizing down of an image reduces the noise in it, but different resizing methods reduce noise by different amounts. Binning just adds adjacent pixels together and forms one large pixel out of a group of 2x2 or 3x3 pixels. For this reason it can only resize down by an integer factor - by x2, x3, x4, etc. (it can't reduce size by x2.3, for example). However, such reduction has a very good effect on the noise - it reduces noise by exactly x2, x3, x4 ... respectively, and it is mathematically predictable in the way it changes noise (no correlation between pixels, always an exact improvement in noise, and so on). It is therefore the preferred way to do things in astronomy (on the science side of things).
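
    Here is a small numpy sketch of that noise behavior (just an illustration, not from the original post) - averaging 2x2 blocks of independent noise halves the standard deviation:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    noise = rng.normal(0.0, 1.0, size=(1000, 1000))        # pure noise, sigma = 1

    # bin 2x2 by averaging each non-overlapping 2x2 block
    binned = noise.reshape(500, 2, 500, 2).mean(axis=(1, 3))

    print(noise.std())   # ~1.0
    print(binned.std())  # ~0.5 - exactly the x2 noise improvement
    ```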

  7. Hi, you need not worry about display size - it will always be "fit the screen" by default, regardless of the resolution of the underlying image.

    Some people, me included, like to view an image at the 1:1 setting, or 100% zoom, even if that means panning around. That is possible if you open the image by itself in the browser - I usually right-click and do "open in new tab"...

    At first the image will be scaled to fit the window again - but a simple click on it will expand it to full size and you can pan around.

    I mention the above because of what I'm about to say next. There is a proper resolution for an astronomy image, or rather a range of proper resolutions. Sometimes, with modern cameras and larger telescopes (longer focal lengths), people make an image that is just too zoomed in when viewed at 100%. Stars are no longer small dots but rather "balls" suspended in space, and everything starts to look blurry at 100% zoom. It will still be a nice image to look at when it is scaled to screen size.

    Such images are worth downsampling to an appropriate size, as it will make viewing at 100% more enjoyable. Luckily there is a simple technique to determine the proper sampling rate for the image - you need to measure star FWHM (Deep Sky Stacker gives you this information for each frame) and divide it by 1.6.

    If your star FWHM is 4px then 4/1.6 = 2.5 - you need to resize your image to a 2.5x smaller size. If you get a number that is less than one - just leave the image as is - don't enlarge it, as it will be blurry (enlarging won't bring missing detail back).
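
    In practice this is a one-liner in any image editor; here is a minimal Pillow sketch of the procedure (the file names and the FWHM value are placeholders):

    ```python
    from PIL import Image

    fwhm_px = 4.0                 # e.g. as reported by Deep Sky Stacker
    factor = fwhm_px / 1.6        # 2.5 -> shrink the image by x2.5

    img = Image.open("stack.png")                 # placeholder file name
    if factor > 1.0:
        size = (round(img.width / factor), round(img.height / factor))
        img.resize(size, Image.LANCZOS).save("stack_resampled.png")
    # if factor <= 1, leave the image as is - enlarging adds no detail
    ```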

  8. Why does this bother you? That is perfectly normal for a sensor that is not cooled - you take dark frames, and after dark calibration you should not have any hot pixels remaining; if you do - dithering and sigma reject will sort them out.

    Btw - here is a screenshot of a piece of one of my dark frames (in fact a master made out of 16 subs cooled to -20C):

    [image: dark frame crop]

    Plenty of hot pixels there as well. It is not about how many hot pixels you have - it is about how hot they are and whether you can calibrate them out.

    Can you post a single raw dark sub, or better - maybe a couple, so we can try dark/dark calibration and see if these hot pixels calibrate out?

  9. 2 minutes ago, sploo said:

    I thought about one of those (a sort of Poncet Platform I believe) but I really like the idea of making a fork mount. I'm probably making life hard for myself aren't I 😉

    Probably :D

    I mean - as DIY, sure, if that is a challenge for you and you like that sort of challenge. What I'm trying to say is that you should not go for it based solely on the expectation that it will provide good tracking for such a large scope because it is a better design than an EQ platform. Unless you are very, very skilled at building things - odds are that you will have large PE, and only planetary and lucky DSO imaging will be possible anyway.

  10. 9 minutes ago, sploo said:

    Interesting - thanks again.

    I've done lucky imaging of the moon (though only currently with a DSLR + lens + SkyWatcher Star Adventurer mount), and some limited DSO shooting with the same (30 second exposures max).

    The little Star Adventurer mount is obviously not going to carry the 300P (somewhere in the 20kg region I think), and commercial EQ mounts capable of tracking such a tube tend to be expensive.

    I have some basic engineering gear (lathe), plus a CNC machine (though the latter would not handle steel), so DIY is attractive.

    Then have a look at this:

    It is not fork mounted and needs to be rewound for each use - you get about 30 mins to 1h in one go, but it is much easier to make (and cheaper).

  11. 21 minutes ago, sploo said:

    Understood - many thanks. At ultra low rpm I was thinking that the stepper movement would not be sufficiently smooth, but with a 1012.5:1 reduction that still means (I think) somewhere in the region of 150 microsteps being done per second (with 200 steps per circle and 64 micro steps per step).

    The large diameter of the Dob style mount had occurred to me (from the point of view of being able to fit a big worm wheel). That should also result in plenty of tooth engagement, and therefore reduced risk of stripping the teeth.

    Photography: initial planetary, and hopefully some DSO. I currently only have DSLR gear, so exposure times would be limited by the sensors.

    PS A quick Google didn't provide the meaning of "EEVA", so I'm afraid that one's lost on me.

    An EEVA section exists here on SGL, but it is often called EAA - Electronically Assisted Astronomy. EEVA stands for Electronically Enhanced Visual Astronomy (or similar, I'm not 100% sure).

    EEVA is a bit broader term than EAA, as it includes night vision devices, whereas the original usage of the EAA term was the use of video cameras - and more recently CMOS cameras - viewing the recording on a monitor / computer screen.

    It is very close to planetary style / lucky DSO imaging. In planetary style lucky imaging, exposures are very short - on the order of 5 to 10 ms. For lucky DSO imaging - exposures are kept short at about 1-2s, while EEVA / EAA, or "live stacking" as it is sometimes called, uses exposures longer than that, but still shorter than regular DSO imaging - from a few seconds up to a dozen or so seconds (sometimes people use half-minute exposures).

    The point of EEVA is to watch the image of the target build up in (near) real time - so you observe for a few minutes (and stack up to 30-40 short exposures) and then move on to a different target. This usually requires goto and computer control to locate the next target, although in principle you can move the scope by hand.

    In any case - do search for EQ platforms, as that is going to be by far the easiest solution to either purchase or DIY. It will let you do most of the things mentioned here - planetary for certain, and lucky DSO imaging. Depending on the tracking accuracy of the EQ platform, you might even be able to do EEVA.

    Another solution that you might want to try is a friction drive instead of a worm. That one has both advantages and disadvantages compared to a worm.

     

  12. 1 minute ago, sharkmelley said:

    That should not happen. After converting the 32-bit image to a 16-bit image check the noise level (i.e. the standard deviation) in a small area of background of the 16-bit image.  You should find it adequately dithers the quantisation.  If so, then it means that the reduction to 16-bit is not the cause of the posterization issue.

    Mark

    Random noise is going to dither quantization provided it is of the proper magnitude with respect to the quantization step. That is one of the reasons why sensor designers leave a certain amount of read noise present, or rather "tune" read noise levels - to dither things.

    Here we are talking about a stacked image. Noise drops as a function of the number of stacked subs, and at some point the noise will be too small to dither quantization.

    We are also talking about signal, and the fact that signal needs enough bits of precision to be properly recorded. If a signal has, for example, 5-6 bits of dynamic range, then it should really have at least 5-6 bits of storage to be recorded. If you give it 2-3 bits it will be posterized due to rounding.

    In any case, here is what you've proposed:

    [image: noise measurements, 32-bit vs 16-bit]

    The first measurement is that of the 32-bit image - a small selection of the background. The standard deviation of that patch is ~0.142745.

    The next measurement is that of the whole image. The important thing to note is that here values go up to 12 bits or a bit less (<4096, because of the 12-bit camera, with offset removed and flat calibration performed) - but the format is 32-bit float. This will be needed later to do the "conversion" of the noise.

    The third measurement is the small selection after converting the image to 16-bit format, and the last one is the full image at 16 bit.

    We can now compare the noise levels in the two images. The converted noise level from the 32-bit image would be 0.142744802 * 65535 / (3401.327392578 + 1.790456772) = ~2.748885291,

    while the measured noise level is 2.76030548 - a difference of about 0.4%. Many will say that an increase in noise of less than one percent is not significant - but we don't know how the distribution of the noise changed - and this was due to the bit depth conversion alone.
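
    For anyone who wants to follow the arithmetic, here it is spelled out (numbers copied from the measurements above):

    ```python
    sigma_32 = 0.142744802      # std dev of the background patch, 32-bit image
    max_32 = 3401.327392578     # max pixel value of the 32-bit image
    min_32 = -1.790456772       # min pixel value of the 32-bit image

    scale = 65535 / (max_32 - min_32)   # factor mapping the 32-bit range onto 16 bits
    print(sigma_32 * scale)             # ~2.7489, vs 2.7603 measured -> ~0.4% difference
    ```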

    Now let's examine something else that is important - here is a screenshot of the section I'm examining now:

    [image: selected background region]

    I tried to select a part of the background where there are no stars but there is variation of brightness in the nebulosity.

    [image: measurement of the selection]

    That part of the image has something like 6-7 in terms of dynamic range (ratio of max to noise). It has a max value of 1.16 and noise of 0.167, so the dynamic range is ~6.95. This is of course in floating point, so there is plenty of precision to record all the information - but what will happen if we convert it to 16 bit?

    We have seen that the conversion multiplies by about x19 (65535/~3402), and here we have a total range of about 1.5 (from -0.37 to 1.16), so the converted range will be ~28.9 values. That is less than 5 bits total (not dynamic range) - we have lost at least 2 bits of that data, or a x4 reduction in the number of levels (128 vs 32 or fewer).
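
    Spelled out (same numbers as above):

    ```python
    from math import log2

    scale = 65535 / 3402        # ~19.26 - the 32-bit to 16-bit scaling factor
    span = 1.5                  # rounded total range of the patch (-0.37 to 1.16)
    levels = span * scale
    print(levels, log2(levels)) # ~28.9 levels -> under 5 bits after conversion
    ```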

    This is why my example above shows posterization in faint areas - because it is really there.

     

     

  13. Why would you have your stepper running at 200rpm?

    I think you are approaching this the wrong way. If you want to determine a good reduction ratio for a stepper motor driving RA - you need to think in terms of resolution rather than speed.

    Stepper motors have about 200 full steps (1.8 degrees per step), and each of those steps can be divided into a certain number of microsteps - let's say you will do 64 microsteps per step.

    You also want a good resolution of about 0.1 arc seconds per step.

    This means that a full circle will have 360 x 60 x 60 x 10 = 12,960,000 steps - that is, you need 12,960,000 steps of 0.1" each to make a full revolution.

    With 200 steps per revolution and 64 microsteps - one revolution of the stepper motor will have 200 x 64 = 12,800 steps.

    The reduction that you will need is therefore 12,960,000 / 12,800 = 1012.5:1.
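
    The same arithmetic, wrapped up so you can plug in your own motor parameters (a sketch, not any particular product's spec):

    ```python
    def required_reduction(step_arcsec=0.1, full_steps=200, microsteps=64):
        steps_per_motor_rev = full_steps * microsteps     # 12,800 per motor turn
        steps_per_circle = 360 * 60 * 60 / step_arcsec    # 12,960,000 for 0.1"
        return steps_per_circle / steps_per_motor_rev

    print(required_reduction())  # 1012.5
    ```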

    In fact, the HEQ5 equatorial mount has something like 0.14" per step, with a worm gear system having a 705:1 reduction.

    If you are going to use a Dob mount, the base that enables azimuth movement has a diameter of at least half a meter. That means its circumference will be at least a meter and a half.

    You will have no problem getting 1000 worm teeth on such a diameter, and using a simple screw with a 1mm pitch to drive it gives a reduction of about 1000:1.

    In fact - this is what is commercially available for Dob mounts in terms of EQ platforms, and I would recommend looking at one to see if it will satisfy your photographic needs. You did not mention what sort of photography you want to do.

    An EQ platform will be more than enough for EEVA and planetary, and even some short exposure, lucky type DSO imaging. Of course, if you are into DIY then this fork mount thing could be a nice project, but so would an EQ platform.

     

  14. 3 minutes ago, gorann said:

    So could I conclude that, as long as I am stuck with 16 bit and image at a dark site, I should go for long rather than short exposures with my CMOS cameras? Yes, I know we are getting a bit away from the topic of the thread, but not much seems to happen with the Astrobin situation right now, so we could see it as an intermission....

    It could help, if you can sacrifice the high part of the range.

    Let's put it in simple numbers to explain what will happen. Imagine you do 1 minute vs 10 minute subs.

    Say the signal level in the 1 minute sub is at 2% of full well capacity. Stacking a bunch of such subs with the average method will leave the signal level at 2% (take a bunch of 0.02 values and average them - you will get 0.02).

    Similarly in the 10 minute sub - the signal will reach 20% of full well capacity, and again stacking with average will leave it at 20%.

    Signal at 2% will have 10.3 bits of dynamic range, while signal at 20% will have 13.7 bits - clearly better, and there will be less posterization of the faint stuff. However, by using 10 minute subs you will blow out more star cores - in fact you will saturate with a signal x10 weaker than in the 1 minute case. This is what it means to lose the high part of the range.
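
    Where those bit figures come from - they are just base-2 logs of the used fraction of the range, assuming (for illustration) that full well is mapped onto a 16-bit scale:

    ```python
    from math import log2

    print(log2(0.02 * 65536))   # ~10.36 bits of dynamic range at 2% of full well
    print(log2(0.20 * 65536))   # ~13.68 bits at 20% of full well
    ```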

    If you try to mix in that high range at the linear stage - you will suppress the lower range again - the only way you can mix in burnt out features is via layers in PS - the way Olly does it and often suggests it should be done (because it works for him with this approach).

  15. 7 minutes ago, gorann said:

    Thanks Vlaiv, very enlightening. I am with you, and maybe slightly worried, but would we notice a significant difference processing in 16 bit vs 32 bit floating point, and should we put out a warning not to use old 16 bit versions of PS, like me and Olly @ollypenrice?

    I believe there should be a difference, and just how much depends on your imaging workflow. Using very long exposures and a smaller number of them will not put all the signal in the low range, and consequently it will be less posterized by the use of 16 bit.

    Here is an example of that happening - I used my H-alpha stack (4 minute subs, 4h total, binned x2 for a 1"/px sampling rate) in 32 bit, and the same image first converted to 16 bit. I used just one round of levels - the same on each:

    [image: 32-bit version after levels]

    and here is the same done on the 16-bit version:

    [image: 16-bit version after levels]

    See how posterized the faint regions become?

    This image is made out of - let's say 4 x 16 x 4 = 64 x 4 = 256 stacked samples (16 four-minute subs per hour over 4 hours makes 64 subs total, and binning x2 with the average method adds another x4 samples per pixel). That is enough data with a small signal to keep things at low values and show posterization.

    Maybe posterization won't be as bad with 30-40 ten minute subs.

  16. 2 minutes ago, gorann said:

    Vlaiv @vlaiv, that is a very striking demonstration that images saved as 8 bit should not be used for further processing. Would you be equally worried about 16 bit, since that is what probably the majority here are processing? 8 bit has only 4% of the theoretical dynamic range of 16 bit.

    By the way, yes, this thread started because Astrobin crashed and a lot of people have lost their posted images there, and are waiting to find out what can be done. Then it turned out that Rodd has been using Astrobin to save his data, rather than trusting a hard drive or some other storage device or cloud. But since his data is saved as 8 bit jpg on Astrobin, I have now tried to tell him not to use Astrobin as a way to save data, but only for posting his images, and that he would be better off saving it as something uncompressed, like tiff or fits. This may be especially true since he works with PI, which I think normally saves everything as 32 or even 64 bit, which you appear to prefer.

    For 16 bit - I recommend against it on principle - it is limited in dynamic range. It will not be much of a problem if you, for example, have a high dynamic range image and you stretch it to an extent while in 32-bit format and then save it as 16 bit. Stretching "compresses" dynamic range and you don't lose much.

    That is "a trick" I used when working with StarNet++. It requires both stretched data and 16bit format to remove stars and when processing NB data - I first do a stretch per channel - but only something like 1/4 of what I would normally stretch - since I want to later stretch more and denoise data after removing stars and also want to do channel mixing.

    The problem with 16-bit data comes when you use it on your linear data to start working on stretching. We have seen above how limited 8-bit data really is. Using short exposures, which is common with modern sensors (CMOS in particular), and the fact that more and more people image in LP and will not benefit from long exposures - makes stacked images very "compressed" into the left part of the histogram - low values.

    Imagine that all the truly interesting signal is in the lower 2-3% of the histogram (left part). That means that this signal occupies only 2-3% of the 16-bit range. In values, this would mean that this signal only has 65535/40 = ~1640 levels. Now we are down to 10.5 bits - very close to 8 bits - and you will soon start losing detail in the faint parts of the image. An average galaxy has something like 7-8 magnitudes of dynamic range or even more, and guess what? 8 mag is about x1600 between the brightest and faintest parts - or said differently, about 10.5 bits.
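
    Again, the arithmetic spelled out:

    ```python
    from math import log2

    levels = 65535 / 40           # signal squeezed into ~2.5% of the 16-bit range
    print(levels, log2(levels))   # ~1638 levels, ~10.7 bits

    print(2.512 ** 8)             # 8 magnitudes is ~x1585 in intensity - about the same span
    ```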

  17. 24 minutes ago, gorann said:

    Could feel like that until you compare it to 16 bit, or 32 bit as Vlaiv pointed out. However, I have a question for @vlaiv: Why go over to 32 bit if there never was 32 bit information in the image to start with? Yes, I know you will probably tell me that stacking many 16 bit images creates more than 16 bit, but it is still quite a big dynamic range. In any case I am stuck on my old 16 bit PS, and the new pay-per-view PS has apparently lost quite a few of the nice "bits" of the old one, I have been told.

    Simply put - a 16-bit image does not hold enough of the information you obtain by stacking and calibration; but the more important thing - it is a fixed point format. That means you not only limit the data per pixel to 16 bits, you also limit the total dynamics of the image to 16 bits.

    32-bit floating point does not have many more bits per pixel to hold information - it has only 24 bits of precision, so only 8 bits more than the 16-bit format - but it is floating point, which means that it has a huge dynamic range:

    from roughly 1.18 x 10^-38 up to roughly 3.4 x 10^38 (source: https://en.wikipedia.org/wiki/Single-precision_floating-point_format)

    What does this mean? Well, let's do a simple example - we stack by adding 4 subs from a 14-bit camera. We have a very bright star, and we have a background with no light pollution. The first is almost saturating the 14-bit range at 16384, and the latter is sitting around 0 (we have read noise so it is in some +/- read noise range, shifted by the offset, but let's ignore the details for now).

    The star will add up to 16 bits (4 x 16384 = 65536 = 16 bits), while the noise around 0 will add up to, again, noise around 0. Stacking increases the dynamic range of the whole image, besides needing more precision for individual pixels. This creates a problem with fixed point representation, because there is a fixed ratio between the strongest pixel and the weakest pixel - it is always only 16 bits in 16-bit format, or x65536. If you will, we can convert that into magnitudes, and it is about 12 mags. You simply put a firm limit on the dynamic range of your image at 12 mags. If you record a signal of some intensity - a signal that is 12 mags fainter will be a single number - a constant value - with no detail (no variation in that single value).

    In comparison, 32-bit floating point has 24 bits of precision per pixel (which means you can stack 256 subs of 16 bits each before you start to need more "space"), or in other words, the error due to precision will be 1 in 16,777,216. But more importantly, you can record much higher dynamics in your image - about 10^83, or in magnitudes - over 200 magnitudes of difference in intensity.
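
    A tiny numpy demonstration of the fixed-point vs floating-point difference described above (an illustration, not part of the original post):

    ```python
    import numpy as np

    # four 14-bit subs with a star at saturation (16384 each)
    subs = np.array([16384, 16384, 16384, 16384], dtype=np.uint16)

    print(subs.sum(dtype=np.uint16))    # 0 - the sum wrapped around the 16-bit limit
    print(subs.sum(dtype=np.float32))   # 65536.0 - float has the headroom

    # float32 normal range - the "huge dynamic range" mentioned above
    print(np.finfo(np.float32).tiny, np.finfo(np.float32).max)
    # ~1.18e-38 ... ~3.40e+38 (even more if subnormals are counted)
    ```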

  18. 12 minutes ago, Rodd said:

    Not really, since monitors only show the 256. I tweak JPEGs all the time and the world continues to turn. Some even got a thumbs up from you as being improved!

    Well, for starters, JPEG is a lossy format - which means that it alters your image (loses information). You can check that it does so by taking a regular 8-bit image saved as PNG and the same image saved as JPEG (even at the highest quality setting) and subtracting the two - you will not get a "blank" image.

    Here is an example:

    [image: Lena test image, PNG vs JPEG side by side]

    This is the famous Lena image (often used as a test image for algorithms) - on the left is the unaltered PNG, and on the right is the same PNG image saved as 100% quality JPEG (chroma subsampling 1:1 and such). And here is what you get if you subtract the two:

    [image: difference of the two images]

    There is clearly something done to the JPEG image that makes it different from the original.
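
    You can reproduce this experiment yourself; here is a sketch with Pillow and numpy (the file names are placeholders):

    ```python
    import numpy as np
    from PIL import Image

    img = Image.open("test.png").convert("RGB")            # any 8-bit PNG
    img.save("test_q100.jpg", quality=100, subsampling=0)  # max quality, 1:1 chroma

    a = np.asarray(img, dtype=np.int16)
    b = np.asarray(Image.open("test_q100.jpg").convert("RGB"), dtype=np.int16)

    diff = a - b
    print(np.abs(diff).max(), (diff != 0).mean())  # non-zero -> JPEG altered the pixels
    ```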

    Now, let's do another experiment to see how a higher bit count fares versus 8-bit format.

    This is a single frame (binned to oblivion to pull the data out and make it small and easy to copy/paste) of a 1 minute exposure in 32-bit format - prior to stretching:

    [image: 32-bit frame, unstretched]

    This is exactly the same image, except converted to 8-bit format:

    [image: 8-bit frame, unstretched]

    So far, so good - not much difference. But let's stretch that data a bit and see what happens.

    Here is the 32-bit version with a very basic stretch (btw, the stretch is saved as a preset, so it is identical in both cases):

    [image: 32-bit frame, stretched]

    Here is the same stuff in 8-bit format:

    [image: 8-bit frame, stretched]

    Look at that grain and noise - that stuff was not in the image above. Clearly an 8-bit image can't take the same level of manipulation as a 32-bit image.

  19. 1 minute ago, Rodd said:

    I process JPEGs all the time. I think people get hung up on this stuff.

    :D

    I'm probably one of those people. For me, this is the breakdown of bit formats and their usage:

    16 bit - good only for raw subs out of the camera, and its usage should ideally stop there (I know that some people use 16-bit format because of older versions of PS, but no excuse really :D )

    8 bit - good only for display after all processing has been finished

    32-bit floating point - all the rest.

  20. 14 minutes ago, gorann said:

    Yes, png may be OK, since it stores at 16 bit and not 8 bit like jpg, so you could have all your dynamic range there. I bet someone like Vlaiv @vlaiv could as usual help us out here. I took one of my 89.9 MB tif files into PS and saved it as png. The file size fell a little to 72.4 MB, but when I opened it in PS it was still 16 bit. When I save the same file as jpg in PS (at the maximum quality of 12) it becomes 10.4 MB, and when I open it, it is only 8 bit - so crap for further processing.

    I can try, but I have no idea what is being discussed here (sorry, I did not read the thread posts). I can see that it has something to do with Astrobin having issues, and you are mentioning file formats and file sizes?

  21. 1 hour ago, daemon said:

    Surely that is a matter of taste, as is the case with many imaging techniques. 

    Don't get me wrong - I like the spikes when the image is created with a reflector. I was just pointing out that artificially added spikes tend to look bad compared to "natural" ones (in terms of what they look like and how they behave - artificial ones usually don't follow the laws of physics and look different than "natural" spikes).

    But yes, it is a matter of taste - some people probably like such spikes.

    #matter of taste

    [image: diffraction spikes example]

    • Like 1
  22. 1 hour ago, Juicy6 said:

    Am I asking for trouble with internal reflection?

    Using any sort of interference filters? Then you certainly are asking for reflection trouble :D. The good/bad thing about it (depends on how you look at it) is that you don't really have much control over it, or rather you have no idea, for the most part, how your actions will affect the end result. One configuration might lead to very bad reflections; then change something by a very small amount and the reflections are gone. This is because light's interaction with itself is a complex thing and depends on very short distances - on the order of the wavelength of the light in question (it is due to interference of light with itself).

    It could be that you will get reflections in a certain combination, but probably the best attitude to have towards that fact is: "Cross that bridge when we come to it ...".

    1 hour ago, Juicy6 said:

    Is it good practice to always have an IR/UV cut filter mounted? I only use the scope for imaging.

    In general, no. Sometimes you need to have your IR/UV cut filter "permanently" mounted, but most of the time having double stacked filters hurts your efforts, unless you have very specific reasons to stack filters.

    In your case above - it would probably hurt more than help. If you look at the transmission curves of the filters you are using together, you will see that they are redundant. In fact, here is a good example both for and against having a stacked UV/IR cut filter:

    [image: CLS vs CLS-CCD transmission curves]

    This is a comparison between the CLS-CCD and CLS (plain or visual) transmission curves. The CCD version of the CLS filter does not pass any light below 400nm or above 700nm (same as a UV/IR cut filter would do) - so if you are using the CLS-CCD filter - a UV/IR cut filter is not needed. In the case of the plain CLS filter, used mainly for visual, things are different - that one does not filter out light above 700nm. This is the IR part of the spectrum; the human eye can't see it, but the sensor can detect it, and refracting telescopes are not well corrected in that part of the spectrum. In this case you need a UV/IR cut filter.

    I've shown you an example where you need to have a UV/IR cut filter combined (other cases include some RGB filters, and in general any filters that have "leaks" in the UV or IR part of the spectrum while you are using a refractor - then you need a stacked UV/IR cut filter), and an example where you don't need one - but does it hurt to have one?

    Well, it does. A bit - and again it will depend on the filters. First thing - more possibility of reflections. In your case this is minimized by the large spacing between the filters. Second thing - you can see from the graph above that filters don't have 100% transmission and cause some light loss. If you don't need filters stacked - why block more light than you need to? 90% * 90% = 81%, so a second filter can cost you another ~10% of the light. Third thing - filters are not optically ideal: they distort the light, and although that distortion is low (filters are usually 1/10 wave in wavefront error), such aberrations compound, just like the light loss - so why distort the wavefront more than you need to?

    I want to address one more thing at the end - the distance of filters from the sensor. That is a sort of battle between two things - you want your filter close enough to the sensor so as not to introduce vignetting (which depends on sensor size, filter size and the speed of the telescope's light cone), but you also want your filters far enough away to reduce the impact of reflections.

    Reflections are always there - it is just a matter of how much light gets reflected and how concentrated that light is on the chip. With the filter (or other source of reflected light) further away from the sensor - the reflected light reaching the sensor will be more out of focus and thus spread over a larger surface - which means each pixel will receive fewer photons, and if the level of photons from the reflection is below the noise floor - you will not see it in the image.

    Since you are using 2" filters - you can move your filter drawer away from camera without much fear of introducing vignetting because of that. This means that you have some "room for maneuvering" if you get reflections from your filter in the drawer - you can always swap filter drawer and extension tube positions in your diagram above - that moves filter further away from sensor and yet keeps total distance between FF/FR and sensor the same.
