Posts posted by vlaiv

  1. Yes, that seems more like what one would expect.

    Again, you have some seriously red stars - I'm not sure stars get that red in real life; even 2000K is not going to produce such a red color. Then again, RGB is not really a proper color space for color calibration. I wonder whether PI's color calibration uses a different color space.

    One thing that would help the result is not stretching the luminance as much. Another thing that can help is variable saturation. Done like this, the method keeps maximum saturation for each pixel, which means that if the color data is noisy compared to the lum data and you stretch the lum quite a bit, there will be color noise in the faint regions and it will be quite evident.

    Reducing saturation in noisy regions can help with this. In fact, you can use the stretched luminance to regulate saturation in the dark regions of the image. These are all tricks, but if you want to try it: take the stretched lum and increase its brightness by adding a constant offset to the image (not by further stretching, as that would amplify noise - you can even blur this copy to suppress noise). Don't worry if you blow out the bright parts; in fact, push to 100% all the areas where you want to keep 100% saturation. Use this modified luminance and multiply it with the normalized color - the rest of the processing stays the same. This effectively reduces the "normalization" in the darker parts, so a blue that was 1:0:0 will end up as 0.5:0:0, for example.
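    A minimal sketch of that variable-saturation trick in Python/numpy - the array names lum_stretched and rgb_norm are mine, and this is an illustration of the idea rather than a recipe for any particular software:

    ```python
    import numpy as np

    def reduce_dark_saturation(lum_stretched, rgb_norm, offset=0.3):
        """Scale the normalized color by a brightened copy of the stretched luminance,
        so faint (noisy) regions lose saturation while bright regions keep 100%."""
        # Lift the luminance copy by a constant offset and clip at 1 - bright parts
        # are deliberately blown out so they keep full saturation.
        mask = np.clip(lum_stretched + offset, 0.0, 1.0)
        # Feed the result into the rest of the workflow in place of the normalized color.
        return rgb_norm * mask[..., None]
    ```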

    Maybe the best thing at this point is to use the image above, the one with colorful stars, to blend the missing star color into the "normally" processed image - just select the star cores and copy the data from that image into your standard processing?

  2. 3 minutes ago, Datalord said:

    But, how do I do that? White balancing works on the entire colour range. I usually do PhotometricColorCalibration on the combined color image.

    What I could do is to process the colors combined, then split them and go through this procedure?

    Do you do photometric color calibration on the unstretched, linear color image?

    If so, then yes: combine R, G and B into a color image, use the screen transfer function to see what the image looks like while still working on the linear data (I think PI can do this, right?), wipe the background and make it black / uniform, and do photometric color calibration. When you are done, split the color image back into R, G and B and then proceed to create the normalized values.

     

  3. I think you have performed all the steps correctly, but you have an issue with your color data.

    - The first point is that you have star saturation/clipping in your color data on a few of the brighter stars - the ones that ended up white in the finished image.

    - The second thing, which you probably missed, is background neutralization - the wipe - and proper white balance.

    For this to work, your color channels need to be white balanced already and have an even background. With this image that is a bit hard to do, since most of the frame contains nebulosity, so you have to "pick out" the places where background space shows through and bring those down to an equal level near 0 - again, don't go to a full 0, as you don't want negative or zero-valued pixels.
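    As a rough illustration of that wipe step, here is a minimal sketch in Python/numpy, assuming a float channel and a hand-picked bg_mask marking the places where background space shows through (names are mine, and there is no gradient handling here):

    ```python
    import numpy as np

    def wipe_to_pedestal(channel, bg_mask, pedestal=0.001):
        """Shift the channel so the sampled background sits at a small positive
        pedestal instead of 0, avoiding negative or zero-valued pixels."""
        bg_level = np.median(channel[bg_mask])   # background estimate from sky patches
        return np.clip(channel - bg_level + pedestal, pedestal, None)
    ```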

    The white balance is off, as the color-only composite has a very yellow cast - that means the whole image has a yellow cast - and most stars in the image are red, in fact very red.

    Maybe post your FITS stacks and I'll run the procedure in another piece of software to see if I get the same results - just to rule out any issues in the procedure?

  4. 12 minutes ago, Datalord said:

    It really does. Now the problem is to figure out how to do this normalization in PI. The only part I can't just do in PixelMath is the "Maximum stacking method", which I have no clue where to find.

    There should be a maximum combine option in regular stacking - just create a stack of the three subs (R, G and B), turn off all the "advanced features" like normalization, pixel rejection and so on, and do a simple max stack.

    Let me see if I can find docs for that:

    [screenshot: image integration settings]

    So you want maximum combination, no normalization, no weights and no scale estimator - essentially all the other options "turned off".
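    If the integration tool proves awkward, the same result is a one-liner in, say, Python/numpy (a sketch, assuming the three subs are already registered and loaded as float arrays r, g and b):

    ```python
    import numpy as np

    max_stack = np.maximum(np.maximum(r, g), b)   # per-pixel maximum of R, G and B
    ```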

  5. I can explain a technique for never getting a blown star core, provided the core is not clipped in the color subs (the luminance can actually be clipped - it won't make a difference).

    The technique is quite simple and easily tried, but this is not a tutorial for a particular piece of software, so anyone trying it with their favorite software will need to figure out how to do each step (all of them are fairly easy).

    Have your color subs stacked and wiped (meaning background gradients removed and the background neutralized to a gray value). Also make sure you don't push your color subs negative - add an offset so that the background is not 0 and there are no negative pixels.

    Now turn this into normalized RGB ratios - here is the easiest way to do it:

    Stack those three subs with the maximum stacking method (each resulting pixel will be the maximum of R, G and B).

    Divide each of the R, G and B subs by the resulting maximum stack to obtain normalized R, G and B subs. These will look ugly, but don't worry - this is just the color information. Combine those three into a color image; again, don't worry that you get a color mess at this point.

    Now process your luminance layer as if it were a mono image: neutralize the background and do a histogram stretch. Don't worry that star cores clip at 100% at this point - this is luminosity information, and star cores are supposed to be 100% bright. Don't blow the DSO core, though, as you will lose detail.

    Now comes the mixing part. Take the normalized color image and split it into L*a*b* channels. Normally one would discard the L channel, but we won't do that in this method.

    Take the L from the color part and multiply it with the stretched luminance. The result is the new L that we will use. Now just recompose the final image with this new L and the a and b channels from the color part, and convert back to RGB.

    Now you have a "full color" image without blown star cores.
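    For anyone who prefers to see the whole recipe in one place, here is a minimal sketch in Python/numpy + scikit-image. The names r, g, b and lum_stretched are mine, and this is an illustration of the steps above, not a drop-in replacement for any particular software:

    ```python
    import numpy as np
    from skimage.color import rgb2lab, lab2rgb

    # r, g, b: wiped, white-balanced linear color subs (float, no negative pixels)
    # lum_stretched: separately stretched luminance in the 0-1 range, same shape

    eps = 1e-6
    max_stack = np.maximum(np.maximum(r, g), b)                      # maximum stacking
    norm_rgb = np.dstack([r, g, b]) / (max_stack[..., None] + eps)   # normalized RGB ratios

    lab = rgb2lab(np.clip(norm_rgb, 0, 1))                           # split the "color mess" into L*a*b*
    L_color, a_chan, b_chan = lab[..., 0], lab[..., 1], lab[..., 2]

    L_new = L_color * lum_stretched                                  # mix: intrinsic L x stretched lum

    result = lab2rgb(np.dstack([L_new, a_chan, b_chan]))             # recompose and convert back to RGB
    ```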

     

    For those who are interested in why star cores get blown "normally" and why this approach preserves them, here is a brief explanation.

    When you do a histogram stretch on a color image where each channel holds 0-1 values, you end up stretching all three components, R, G and B. When you stretch to your liking, the bright parts get compressed towards a value of 1. There is no way to avoid this, and you end up with R:1, G:1, B:1 - which is white.

    This is why you get white cores for all stars. In fact, the problem is quite big here. Imagine you have a blue star, a red star and a white star (for the purpose of discussion let's go with pure red and pure blue, although stars never have these colors).

    Imagine also that these three stars are equally bright. Any stretch will keep them equally bright, but if we want to keep the colors the way they are, there is a catch - colors don't have the same luminosity! In RGB mode the white point sets the maximum luminosity; any other color that can be displayed will have less luminosity than that. Here is a good example:

    [screenshot: luminosity values for pure colors]

    You can see that yellow comes close to the full luminosity of white, but pure red is at 54% and pure blue is even lower at 44%. This means the stretch needs to depend on color - it can't be applied identically to every pixel in the image.
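    The exact percentages depend on which lightness measure is used (the screenshot above may use a slightly different one), but the effect is easy to reproduce - for example with CIE L*, where white is 100:

    ```python
    import numpy as np
    from skimage.color import rgb2lab

    # How much lightness each pure sRGB color can carry (CIE L*, white = 100).
    for name, rgb in [("white", (1, 1, 1)), ("yellow", (1, 1, 0)),
                      ("red", (1, 0, 0)), ("blue", (0, 0, 1))]:
        L = rgb2lab(np.array([[rgb]], dtype=float))[0, 0, 0]
        print(f"{name:6s} L* = {L:5.1f}")
    # -> white 100.0, yellow ~97.1, red ~53.2, blue ~32.3
    ```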

    This is what we do above. First we get pure color in terms of RGB ratios by normalizing the color information - this just means we get the purest RGB ratio that can be shown in RGB color space without altering that ratio. One of the three components - the largest one (hence the max function) - will end up at 1. No component will be larger than one (no clipping), and we scale them so the ratio is preserved. Mind you, this is a simple method; if you want to be precise, it is not the best way to handle color - you should not be doing it in the RGB domain.

    Next we use luminance to bring out the detail. In the end we take our "pure color" information and look at how much luminance each pure color requires. For example, a pure red star will have pixels with an "intrinsic" lum level of 54%, while a pure blue star will have an "intrinsic" lum level of 44%.

    When we multiply the stretched true image luminance with this "intrinsic" luminance, we are in effect adjusting the lum so that it can represent the actual true color without clipping.

    Hope that all makes sense.

     

     

    • Like 5
  6. 2 minutes ago, Littleguy80 said:

    Thank you. I only have a 2" diagonal so wanted to know if I needed to factor the cost of 1.25" diagonal into the price.

    In principle there would be no problem with using a 2" diagonal, but there would be no point in using 2" EPs - you would get a sort of "vignetting" (not really, as you would not see anything in that region anyway). With this scope, though, I suspect you would run into a different issue when using a 2" diagonal.

    The focuser is helical and I suppose it does not have a lot of travel. 2" diagonals are usually much "longer" in terms of optical path, and I suspect you might not be able to reach focus with them. 1.25" is the way to go in this situation.

    • Like 1
    • Thanks 1
  7. 1 minute ago, Adam J said:

    Not sure about that but ZWO no longer list it on their own web page. 

    https://astronomy-imaging-camera.com/product-category/dso-cameras

    It could be that ZWO is no longer accepting customer orders because they have stopped production, and the batch above was ordered by TS some time ago.

    Anyway, if you want that camera, I guess it's worth checking again in 9 days whether it's in stock, or, as suggested above, look at other vendors' offerings.

  8. Well, there is a potential use even in LP.

    These devices have quite a broad spectral response, up to 1000nm wavelengths. They "convert" all wavelengths into one particular output - green for a standard display, or white light for the "broadband" type. This means you can observe Ha/Hb/OIII/SII emission nebulae without restriction. You can use the new multi-pass NB filters, like duoband or triband, without worrying about the human eye's sensitivity in a particular range - the device "converts" all those wavelengths into the part of the spectrum where our eyes are most sensitive.

    For stellar/galaxy type targets, you can use the NIR part of the spectrum with NIR filters, otherwise only used for imaging, since the eye can't detect wavelengths above 700nm on its own.

    The targets that will probably see the least benefit in LP are reflection-type nebulae. This agrees with Stu's recent report on using NV gear.

    But those prices you found are seriously steep.

    I managed to find a couple of sources of image intensifier tubes via Alibaba - a few Russian manufacturers, but again no prices were given. These are earlier-generation devices, like Gen 2+ or Gen 3, but they have good specs (~60lp/mm, SNR of about 20) and larger diameters such as 37.5mm (with even larger models available), which is good for astronomy use compared to the standard 16mm and 18mm. Again, though, no prices are listed and they are aimed at military use.

  9. Just had my first NV moment :D

    Researching further, it's safe to say that CMOS + AMOLED technology is not there yet. The display side of things is lacking - it turns out the smart watch market is an interesting one for small round displays, and many are available today, but unfortunately the resolution offered is too low.

    Most can be classified as 300-400dpi displays. Even the highest-resolution displays today are still below 1000dpi, let alone the 3600dpi needed.

    To test what the image would be like with current displays, I took a 32mm eyepiece, unscrewed the bottom and put it against a smartphone display. I even loaded an M51 mono image I took some time ago, to get a proper "feel" for it. It does indeed work - the image can be seen in sharp focus - but the pixels are obvious. I remember someone using this technique to judge EP quality: the actual pixels can be observed when the EP is used against the display.

    It does, however, feel quite "real" in terms of observing. Given proper resolution, I would be really happy with such an NV eyepiece - consisting of a CMOS sensor and a proper AMOLED or similar display.

    Back on the original topic, @alanjgreen brought to my attention that it has been covered before, and indeed, this post is rather informative:

    It seems that image intensifier tubes can be used as the starting point for constructing a true astronomical NV eyepiece / filter. These are the "hearts" of the NV devices currently in use. I wonder whether such items can be bought at retail, and what the price would be. The manufacturer mentioned for the EU market does indeed have these listed on their website, but currently only military applications are envisioned and no retail sales are made.

    For phosphor screens, adequate resolution is listed as 64lp/mm, which translates to ~3200dpi, so the calculation above is quite good - we need at least 3000dpi displays to be able to project a natural-looking image.
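    For reference, the conversion behind that figure (one line pair needs at least two "pixels"):

    64 lp/mm × 2 px/lp = 128 px/mm
    128 px/mm × 25.4 mm/inch ≈ 3250 dpi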

    • Like 1
  10. 1 hour ago, elpajare said:

    I am going to say one thing that many will not like but I think that time will give me the reason

    The traditional astrophotography will be reduced to a small group of selected practitioners while the EAA or NV will attract many other people interested in seeing what's up there.

    Electronics is the future

    I think you are quite right about the growing EAA/NV potential. I'm not sure the AP practitioner base will shrink - people do get hooked on it - but given the cost and involvement, over time many people wanting to start "taking pictures" of the night sky will turn to EAA (live stacking) rather than full-fledged AP.

    Both NV and live stacking need simplification / cost reduction and user friendliness to move forward at speed. At the moment I think NV is hampered by high prices as well as availability and legislation. Some countries prohibit the use of these devices because they are often associated with firearms, in either a military or a hunting role.

    A dedicated astro NV device would surely be exempt from such laws?

    • Like 1
  11. I see a couple of "problems" with the approach above that need to be addressed first.

    1) Should it be an eyepiece, or a more flexible device like a "filter"? Now that @Paul73 has mentioned Daystar - much like a Quark / Quark combo. The problem I'm seeing with this is that the phosphor screen is going to be a very "spherical" light source, so I'm not sure regular eyepieces will be able to cope with that - we all know that EPs have trouble handling fast optics. On the other hand, having integrated EP optics at the EP end would reduce the flexibility of such a device.

    2) This is in essence an analogue device, and I'm wondering what the QE of such a device would be, given the photoelectric element and of course the electron multiplier. If we look at the diagram above for one type of electron multiplier with holes, they seem to be rather sparse, so the overall "QE" can't be that high. I also wonder about the "read noise" of such a device, or rather the electron multiplier's thermal noise. Is it comparable to modern CMOS sensors?

    If we replace the photoelectric element with a CMOS sensor, the phosphor screen with some other type of monochromatic screen that gives off light in the green part of the spectrum, and the electron multiplier with an electronic processing unit that scans the CMOS sensor and produces output on the screen - would it still be the same device? In principle it would. Maybe even more sensitive than NV. Probably bulkier? I wonder if there are screens with pixels small enough to produce a natural-looking image after "EP amplification" - I guess we can calculate this from human angular resolution and apparent field of view.

    Let's say we have something like 60 degrees AFOV, and human angular resolution is about 1'. That means we need at least 3600 pixels across the diagonal for the image to look natural. Putting that into a one-inch field gives a 3600ppi display - I'm not sure such a thing exists at the moment.
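    Spelled out (taking the ~1' eye resolution figure as a round number):

    60° × 60 arcmin/° = 3600 arcmin across the apparent field
    3600 arcmin ÷ 1 arcmin per resolved element = 3600 pixels across the diagonal
    3600 pixels over a ~1 inch diagonal ≈ 3600 ppi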

     

  12. Due to recent participation in discussions on the status of NV, I have had an opportunity to get to know a bit better the principle of operation that NV devices use.

    The current state of affairs is to use a ready-made NV device together with an eyepiece. It is a form of the EP projection approach, because NV devices are made to be used standalone - much like a consumer point-and-shoot camera, they have an integrated lens.

    I deliberately used the comparison above because I needed a mental image of how NV devices work in terms of optics, and it is analogous to using the EP projection technique with a point-and-shoot camera (or smartphone camera).

    The diagram on the wiki explains this very well:

    [diagram: image intensifier / NV device optical path, from the wiki]

    The device needs a collimated beam at the "input" because it has an integrated focusing lens. The next stage is a photoelectric element that turns photons into electrons. These electrons hit an electron multiplication stage, which sends a stream of electrons onto a phosphor screen that emits light. The light is then sent through a number of lenses that represent another "scope - EP" combination.

    What I'm wondering is whether the device above could be turned into a simple eyepiece, or maybe an EP created using the same principle. This could potentially bring down the cost of NV equipment and, with the use of astro standards (like 1.25" / 2" barrels), improve ease of use as well.

    The focusing lens at the front is not needed, because the light beam from the scope is already focused. This would also imply that an eyepiece is not needed in front of an NV eyepiece of this design.

    Does anyone have an idea of the size of the photoelectric element used at the front? It would need to be matched to the fully illuminated field of the scope for best utilization - it's a bit like the field stop size of a regular EP.

    There is also the matter of the "resolution" of the image, which needs to be matched to the eye's angular resolution. The electron multiplication device actually works as if it has "pixels"; here is a diagram from the wiki:

    [diagram: electron multiplier plate with holes, from the wiki]

    This type of electron multiplier is used in Gen II and Gen III devices. The holes are 8um in size, but spaced at 10um. This effectively defines the "pixel QE" and the sampling resolution of the NV device when coupled with a given scope.
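    As a back-of-the-envelope example of what that 10um pitch would mean for sampling if such a tube sat directly at a scope's focal plane (hypothetical 1000mm focal length, standard plate-scale formula):

    ```python
    # arcsec per "channel" = 206.265 * pitch_um / focal_length_mm
    pitch_um = 10.0
    focal_length_mm = 1000.0                                 # hypothetical scope
    arcsec_per_channel = 206.265 * pitch_um / focal_length_mm
    print(f"{arcsec_per_channel:.2f} arcsec per channel")    # ~2.06"
    ```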

    Any ideas on this topic?

     

    • Like 5
  13. 3 minutes ago, Stu said:

    The eyepiece comes first (going into the focuser as normal), then the NV device connects to the top of the eyepiece so you are directly viewing the 'screen'

    Hm, that is interesting.

    It does, however, make me wonder how it actually works then. Since it amplifies light by conversion to electrons, and the light beam coming from an eyepiece is collimated and at an angle (an amplified angle), the NV device must preserve this angle in the amplified light - which means the surface emitting the light needs to know the direction in which to emit it.

    It can't just be a little screen with pixels - no human can focus at such a short distance.

    Does using the NV device change the focus position for a given eyepiece (vs the same EP without the NV device)?

  14. I think people have some misconceptions about EAA and "processing".

    NV is like having an analog sensor and an old-style cathode ray tube screen, small enough that an eyepiece is needed to view the image on it.

    EAA is the same thing in digital form - the sensor is digital, and the display is large and of TFT/LCD type instead of CRT.

    EAA offers better control. Processing is nothing more than enhancing what is already there - a bit like using filters in traditional observing. NV signal amplification is a type of processing - linear processing. It even contains a form of integration, as the phosphor display glows for a little while - it accumulates multiple electron hits into a single light output.

    There is nothing wrong with processing - after all, there is processing in traditional observing too; it's done by our brain. One never sees photon noise while observing, because our brain filters that noise out. Why do we see an object better the longer we observe it? Our brain also uses "stacking" - integration of the signal.

    • Like 8
  15. 3 minutes ago, Stu said:

    Ahh, that's where you are wrong vlaiv. NV does indeed amplify light, that is what is magic about it. The simple principle for nebulae observing is that you filter heavily on the frequencies of light you want, then put those into the NV which amplifies what is left so you can see it. Very clever.

    No, not magic, physics! :D

    I've been doing a bit of research on the whole NV topic, as well as trying to figure out the root of all the sore feelings about NV being moved together with EAA into EEVA.

    I understand the sentiment that NV reports should sit alongside other observation reports, and I agree about the exposure part - many more people will come into contact with this idea and technology that way. On the other hand, I do understand NV to be a part of EEVA, and in my view EEVA only gains from such diversity. After all, I feel that anyone interested in EEVA accepts it not as an alternative to "traditional" observation (if there is such a thing), but rather as another tool in the observation box - or in this case a bunch of tools, all relying on electronics to enhance the viewing/observational experience.

    Just to touch on the NV side of things, and in particular the comparison to live stacking / video astronomy: the same amount of light enters the objective of the scope, and no miracle is going to make it stronger - it's all about efficiently detecting that light and presenting it to the eye of the observer. Whether the amplifier is analogue or digital, I really see no difference.

    There is, however, a difference in how one perceives the observational experience, and in that sense NV is closer to traditional observing.

    • Like 1
  16. Just now, alanjgreen said:

    It appears that we NV users are supposed to join in the discussions on "live stacking" and other things that are of no interest to use whatsoever!

    I wont be bothering.

    I don't think this is necessarily so. There is a common theme to all branches of EAA - or EEVA, or whatever the majority decides to call it.

    It is, after all, Electronically Assisted Astronomy. We can share the experience of being able to see deeper than would be possible without the aid of electronic devices. It is still a live activity, requiring the observer to interact with the equipment in order to see objects (either at the eyepiece or at a computer screen), and it is visual, although photographic evidence can be taken with all of the diverse activities we call EAA - be that a picture with a smartphone, an image processed out of a live stacking session, or maybe a screenshot from a video feed.

    @PeterW Live stacking does not need to be "delayed" - it can be live as well; stacking video at 30fps would certainly qualify as a live feed, and it does not even need to be stacked - video astronomy just uses video cameras and a feed to a monitor for a live view.

    • Like 1
  17. I'm sorry to hear that some people feel NV is somehow being brushed aside. Maybe it even is - I have no idea if that is the case and personally never had that impression, but then again I did not participate in much of the NV discussion; I just read a couple of reviews and saw numerous images taken with a smartphone, which I found really interesting indeed.

    On the other hand, I would like to point out that EAA and NV are in essence the same thing - unless I'm again misunderstanding how NV devices work - you still need a battery to get an image out of one, right? It does not do any sort of magical amplification of light; it has to be some sort of sensor, some sort of signal amplification and some sort of projection screen.

    • Like 2
  18. I'm somewhat confused by all of this.

    Does "traditional EAA", either in form of video astronomy (old video cams at eyepiece) or live stacking type fall under EEVA definition?

    Also, @GavStar and @alanjgreen - this is a genuine question, not an attempt to provoke an argument or mock the subject - why do you feel that NV is more "visual astronomy" than, for example, live stacking?

    If I understand things correctly, both NV and live stacking use a light-sensing device, a form of signal amplification and a light-emitting device to enable the observer to look at the object with their own eyes. Granted, an NV device is a self-contained / compact unit, while a camera / cable / laptop with screen is not as compact, but both systems in essence provide the same thing. The experience will also differ, but does the principle of operation outlined above differ as well?

    • Like 2
  19. 4 hours ago, Starwiz said:

    If all goes to plan I'll be starting imaging with an ASI1600mm-Pro next month.  The exposure times seem incredibly short compared to the 10 - 15 minutes I was doing with my modded Canon 1200d.  Is that because the sensor is much more sensitive on the ASI?

    BTW, I've just emigrated to Malta, so looking forward to the cloudless skies.  Light pollution where I am is slightly worse than my UK location (Bortle 5 compared to Bortle 4), but I'll also be doing narrow-band as well as LRGB.

    No, sensitivity does not play a part in that. Here we are talking about splitting one very long exposure into smaller ones. You can look at it that way - instead of doing one grand exposure lasting multiple hours, you break it into shorter ones and add those up. Everything that matters for the final result adds up with time. Signal adds up - the more you image, the more there is of it. Light pollution behaves the same - the longer you expose, the more light pollution you accumulate. Thermal noise is also the same - the more you image, the more it builds up. Everything, that is, except the read noise - it is the same regardless of exposure length: a 10-minute sub has the same read noise as a 1-minute sub.

    And that is the difference - the more subs you take, the more times you add read noise. The level of read noise determines how much it impacts the final result - the lower the read noise, the less impact it has compared to the other sources. Most often read noise is compared to LP noise - see the right-triangle explanation in the next post - but other noise sources also participate. With narrowband imaging you eliminate most of the LP; this is why you need to go longer in NB - read noise becomes an important factor.

    The ASI1600 has very low read noise compared to other sensors. Most CCDs and DSLRs are in the 7-9e range, and some low-read-noise models sit at 4-5e, but that is still roughly 3x the ASI1600 (and other CMOS models). With DSLRs it's often recommended to use ISO800 or similar instead of ISO100 - this is because CMOS sensors tend to have lower read noise at higher gain settings.
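    To put some illustrative numbers on that (hypothetical values: 1 hour total integration, 1.7e read noise for an ASI1600-class CMOS vs 8e for a typical CCD/DSLR):

    ```python
    import math

    def stacked_read_noise(read_noise_e, sub_length_s, total_s=3600):
        """Read noise accumulated over a fixed total integration split into subs.
        It adds in quadrature per sub, so total = RN * sqrt(number of subs)."""
        n_subs = total_s / sub_length_s
        return read_noise_e * math.sqrt(n_subs)

    for rn in (1.7, 8.0):
        for sub in (60, 600):
            print(f"RN={rn}e, {sub}s subs -> ~{stacked_read_noise(rn, sub):.1f}e total read noise")
    # Low read noise tolerates many short subs; high read noise pushes you towards long subs.
    ```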

    • Like 1
  20. 1 minute ago, eshy76 said:

    The 20 x read noise can also be 3 x read noise squared or 10 x read noise squared....there's some discussion about that. 

    There is no definite value one should use here. It comes down to when you can consider the read noise contribution too small to matter. The best way to "visualize" this is with a right-angle triangle: if one side is much smaller than the other, the longer side approaches the hypotenuse in length. The total noise is the hypotenuse, while read noise and LP shot noise are the sides of the right-angle triangle. When the LP shot noise "side" becomes much larger than the read noise "side", the total noise (hypotenuse) comes close to that LP shot noise "side".

    [diagram: right-angle triangle with sides a = b and hypotenuse c]

    a = b implies c > b

    [diagram: right-angle triangle with short side O, long side A and hypotenuse H]

    O << A implies that H and A are almost equal in length (O, here representing read noise, has almost no impact on the total noise - it is dominated by the LP noise, A).
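    A numeric version of the same triangle argument, with hypothetical per-sub numbers (LP shot noise of 10e against read noise of 1.7e vs 8e):

    ```python
    import math

    lp_noise = 10.0                                          # LP shot noise per sub (hypothetical)
    for read_noise in (1.7, 8.0):
        total = math.sqrt(lp_noise**2 + read_noise**2)       # noises add in quadrature
        print(f"read noise {read_noise}e -> total {total:.2f}e "
              f"({100 * (total - lp_noise) / lp_noise:.1f}% above LP noise alone)")
    # 1.7e adds ~1.4% to the total, 8e adds ~28% - hence the "swamp the read noise" criteria.
    ```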

    • Like 2