
Posts posted by Adam J

  1. On 06/05/2024 at 18:54, cloudyweather said:

    Hi all,

    First let me say that I've read as much as I can find online and I'm looking for help.

    I'm fairly new at astronomy, retired and a  semi-professional photographer in the past (love HQ lenses).  Not sure about all the image processing/stacking that seems to be needed for astro but may give that a go (I have an APS-C Fuji). But I'm also looking for Visual and somewhat portable (car / easy setup).

    Consider the Starfield 102 + 0.8x reducer/flattener and the Askar 103mm + 0.8x reducer/flattener to be the same price and both in stock.

    I've read nothing but glowing reports of the Starfield everywhere. Praising the quality build, finish & optics for both Visual and Imaging.

    The Askar seems to have split views. Concern about cheaper glass than perhaps more expensive triplets. I did read of possible focuser issues on one site.

    Why am I asking?  The Starfield has been out of stock for a few months and I could get the Askar, but I'll wait a little longer if needed.

    Has anyone actually used both telescopes?  First hand experience of both? For Visual and Imaging?

    Many thanks

    You can get the TS Optics 115mm triplet for a similar price. It's well established as a good imaging scope.

    Adam

  2. Just now, vlaiv said:

    I would advise against this as stacking would not make sense after you alter noise distribution.

    In any case - why don't you simply try it out on existing data you have? Make comparison between two approaches.

    Maybe best way to do it would be to create "split screen" type of image. Register both stacks against the same sub so they are "compatible" - prepare the data one way and the other and compose final image prior to processing out of two halves - left and right copied from first and second method.

    That way final processing will treat both datasets the same as you'll be doing it on single image.

    Alternatively - if you can't make it work that way - just do regular comparison - do full process one way and then the other.

    I'd be very happy to see the results.

    I didn't say that though; I said I would apply it after stacking.

    I have tried it, and this thread is to ask whether others have observed the same thing.

    Adam

  3. 42 minutes ago, vlaiv said:

    We seem to be having different concepts of what the binning is for.

    I see it as a data gathering step - integral into data reduction stage, rather than final processing tool.

    In my view - if you choose to bin your data - there is no sense in up sampling it. You don't up sample your data from regular pixels either, right?

    You choose to bin your data because you'll be happy with final sampling rate that binned pixels give you - as if you were simply using larger pixels in the first place.

    If you leave your image binned - it won't be any different than shooting "natively" at that sampling rate. It can lose detail versus a properly sampled image - but that is not due to binning; the same would happen if you compared two images taken with regular pixels - one set bigger and undersampling and one set smaller and properly sampling. Undersampled data will not show the same level of detail - simply because it's undersampled (not because it was binned - because in this case it was not).

     

    I would say there are two reasons to bin:

    1) You want to improve SNR by effectively making the pixels bigger.

    2) You are oversampled, likely due to seeing, and you want to present your image at a scale at which it appears sharp to the eye as opposed to soft.

    In case two, I would argue that AI noise reduction will give a better result if used before you software-bin, so I would apply noise reduction as the first step after stacking and then resample. In case one, I would just let the AI noise reduction do its thing if the image is critically sampled or undersampled; if oversampled, see case two.

    Adam
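To put a number on reason 1: averaging uncorrelated noise over 2x2 blocks roughly doubles SNR. A minimal pure-Python sketch of this (my own illustration with made-up signal/noise values, not taken from any stacking package):

```python
import random

def bin2x2(img):
    """Average non-overlapping 2x2 blocks (software binning)."""
    out = []
    for y in range(0, len(img) - 1, 2):
        row = []
        for x in range(0, len(img[0]) - 1, 2):
            row.append((img[y][x] + img[y][x + 1]
                        + img[y + 1][x] + img[y + 1][x + 1]) / 4.0)
        out.append(row)
    return out

def snr(img, true_level):
    """SNR estimate: mean signal over RMS deviation from the true level."""
    vals = [v for row in img for v in row]
    mean = sum(vals) / len(vals)
    rms = (sum((v - true_level) ** 2 for v in vals) / len(vals)) ** 0.5
    return mean / rms

random.seed(1)
SIGNAL, NOISE = 100.0, 10.0  # hypothetical flat signal with Gaussian noise
img = [[random.gauss(SIGNAL, NOISE) for _ in range(200)] for _ in range(200)]

# Averaging 4 uncorrelated samples halves the noise, so SNR roughly doubles.
gain = snr(bin2x2(img), SIGNAL) / snr(img, SIGNAL)
```

The same logic is why hardware and software binning trade sampling for SNR at the same rate when noise is uncorrelated between pixels.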

     

  4. 29 minutes ago, vlaiv said:

    Can you explain this?

    When is binning detrimental to image quality?

    Yes: it's detrimental when you are undersampled, because you lose detail in bright areas to gain SNR in faint areas of the image. That happens quite a lot in wide-field imaging; the point being that in wide field you are almost never limited by seeing.

    My example: when I processed my M45 wide-field image at 180mm focal length, the surrounding dust looked better when I binned the image, as you might expect. But in binning the whole image you lose detail in the core of M45.

    Yes, it would be possible to bin an image, resample back up to the original scale and then mask the brighter areas of the unbinned image back in, but AI noise reduction seems to achieve this with better granularity and minimal effort.

  5. 19 minutes ago, vlaiv said:

    With some things I agree 100% and with others 0% :D. I'll explain.

    1. Yes, QE does matter, but only up to a point.

    Nowadays, most cameras have very similar QE. There is really not much difference in 81% vs 83%. Transparency on the night of imaging and position of the target can have greater impact on speed than this. So does the choice of stacking algorithm if one uses subs of different quality. My reasoning is to go with higher QE only if it fits other criteria below

    2. Read noise is in my view completely inconsequential for long exposure stacked imaging where we control sub duration at will.

    Since we can swamp the read noise with selected exposure length and again CMOS cameras have much lower read noise than CCDs used to have - I again don't see it as very important factor. I would not mind using 3e read noise camera over 1.4e read noise camera if it fits with other criteria

    3. Sensor size is very important for speed - because it lets us use larger aperture while having enough of FOV to capture our target. I guess this is self explanatory

    4. Binning is the key in achieving our target sampling rate with large scopes and large sensors.

    Speed is ultimately surface_of_the_sky_covered_by_sampling_element * QE * losses in telescope * aperture

    In above equation there are only two parameters that can be varied to a greater extent - one is aperture and the other is sampling rate or sky area. Latter determines the type of the image we are after - do we want wide field image with low sampling rate or we aim to be right there on the max detail possible for our conditions. If we aim for latter - well we don't have that much freedom in this parameter. This leaves aperture. We can choose to go with 50mm or 300mm scope, but in order to hit our target sampling rate we must have equal range of pixel sizes - that we don't have. Binning to the rescue.

    I simply don't like AI side of things. We have very good conventional algorithms that do wonders as well if applied correctly.

     

    I think it's better to say they have a similar peak QE; legacy cameras do have lower QE, and for OSC the main difference is in what the Bayer matrix does to that QE.

    In terms of read noise: at a dark site, my understanding is that narrowband will at least take significant time to bury the read noise under light pollution. It may be that even in that case it is so low that shot noise from a faint target is still the biggest factor.

    My current personal struggle is whether to replace my ASI1600MM Pro with a higher-QE camera or to buy a second one and use them in a dual rig. I am leaning towards the dual rig as the used cost has come down.

    All in all, to the OP I would say: get the largest sensor you can afford, with the possible exception of wanting to specialise in galaxy imaging.
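The read-noise point can be put into a few lines. This is a generic back-of-the-envelope sketch; the flux numbers are hypothetical, picked only to contrast narrowband at a dark site with broadband under light pollution:

```python
def swamp_exposure_s(read_noise_e, sky_flux_e_per_s, factor=10.0):
    """Sub-exposure length at which accumulated sky signal reaches
    factor * read_noise^2 electrons. Beyond that point sky shot noise
    dominates and read noise adds only a few percent per sub.
    A 'factor' of ~5-10 is a common rule of thumb, not a hard standard."""
    return factor * read_noise_e ** 2 / sky_flux_e_per_s

# Hypothetical fluxes: 1.5e read noise; 0.05 e/s/px narrowband sky at a
# dark site vs 2.0 e/s/px broadband sky under light pollution.
nb_sub = swamp_exposure_s(1.5, 0.05)  # 450.0 s - long subs needed
bb_sub = swamp_exposure_s(1.5, 2.0)   # 11.25 s - read noise buried quickly
```

This is why read noise still matters for dark-site narrowband even on low-read-noise CMOS: the sky flux in the denominator is tiny, so the required sub length grows quickly.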

  6. 12 hours ago, vlaiv said:

    Yes, provided it is used with the same optics and nothing special is done to the data.

    If you have the choice of telescopes you can use with your cameras, and you can bin your data - then it depends.

    If you want to get really faint stuff in "reasonable" amount of time, then this would be my advice:

    - figure out your working resolution in arc seconds per pixel (one that will capture all the detail you are after and will not over sample)

    - get the biggest aperture that will give you focal length in combination with pixel size you have and any binning factor that will provide you with wanted working resolution - and will have enough FOV to capture the target.

    (of course, consider all other variables like ability to mount the said largest aperture scope, costs involved, quality of optics and so on ...)

     

    I am starting to think that the only things that matter are QE, read noise per unit area and sensor size.

    And this is why, 

    I don't think software binning for increased SNR is the way to go anymore.

    Interested in your thoughts. 

    Adam

    My basic theory is that, when used purely for noise reduction, binning is made redundant by the advent of modern AI noise-reduction techniques and may in some cases be detrimental to image quality.

    In effect, AI noise reduction makes the best use of the information contained in the raw image, reducing noise dynamically across the image depending on local SNR. By binning the image you are simply depriving the AI algorithm of information, and potentially missing out on detail in areas of already-high SNR if whole-image binning is applied.

    In the case of an oversampled image, I would now always apply AI noise reduction first and then, as a final step, resample the image to present it at an appropriate image scale.

    Thoughts? 

    Adam
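As a toy illustration of the information argument (my own hypothetical numbers): a star core concentrated in a single pixel is averaged into its neighbours by 2x2 binning, which is exactly the detail a denoiser working at full resolution never has to give up:

```python
# Hypothetical values: a sharp single-pixel star core in an otherwise faint block.
block = [[1000.0, 10.0],
         [10.0, 10.0]]

# 2x2 average binning collapses the whole block to a single value - the peak
# is flattened to roughly a quarter of its height, and no later upsampling
# step can recover it.
binned = sum(v for row in block for v in row) / 4.0  # 257.5
```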

  8. 42 minutes ago, ollypenrice said:

    The only reason to use a reducer is to increase workable field of view. If it's not going to do that, it's not worth bothering.

    If you would like to swap more signal for less resolution you can resample the image downwards before processing. Beware the F ratio myth when it comes to reducers.

    Olly

    I disagree: you are exchanging resolution for both FOV and reduced time to reach a given SNR. As a RASA owner, it's exactly what your scope is doing, only it comes preconfigured to do it.

    Resampling is worse because you exchange resolution for speed without also gaining FOV. You can only do both with a reducer.

    Adam
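A quick sketch of what a 0.8x reducer actually changes (the example numbers are illustrative - a generic 102mm f/7 at 714mm with 3.76um pixels, not a measurement of any specific scope):

```python
def with_reducer(focal_mm, aperture_mm, pixel_um, factor=0.8):
    """Effect of a focal reducer on f-ratio, sampling and field of view.
    Image scale in arcsec/pixel is 206.265 * pixel_um / focal_mm."""
    reduced = focal_mm * factor
    return {
        "focal_mm": reduced,
        "f_ratio": reduced / aperture_mm,
        "scale_arcsec_px": 206.265 * pixel_um / reduced,
        "linear_fov_gain": 1.0 / factor,  # each axis spans 1/factor more sky
    }

# Illustrative: 102mm f/7 (714mm) with 3.76um pixels plus a 0.8x reducer.
r = with_reducer(714.0, 102.0, 3.76)
# f/7 -> f/5.6, coarser sampling (~1.36 "/px), 1.25x more sky per axis.
```

The same aperture now spreads each object's light over fewer pixels while also covering more sky, which is the combined trade described above.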

  9. 18 minutes ago, Giles_B said:

    Can you explain why - because of the small image circle? square stars? pixel size? Or just personal preference because of all of the above?

    It's a complex thing to put into words, but in the end what you are doing with a corrector is correcting the field curvature as a function of distance from the optical axis. The amount of correction needed changes with that distance, so a faster corrector has a steeper correction "gradient" (not the right term, but I am having a go here) and you need to compensate for that gradient more precisely. That means you need a better match between the original profile of the field to be corrected and the corrector's own profile, so you need more precise back focus, and any mismatch in that gradient becomes more significant. On top of this, a faster corrector also increases vignetting, and the larger the chip, the worse it all gets. It also becomes more sensitive to tilt of the corrector relative to the principal optical axis.

    That is the best I can explain it. In essence, 0.8x is the standard for a reason: it's a design balance.

    Adam

  10. 9 hours ago, Budgie1 said:

    So, my imaging rig to date has been using refractors (Evostar 100ED DS Pro, Evostar 80ED DS Pro & WO Zenith 73 III), which has been alright, but the number of clear nights seems to be getting fewer and the West Coast of Scotland isn't renowned for its temperate rain forests because it's very dry! :clouds2::clouds1:

    So that I can try to get the most out of the clear nights we do get, I would like to upgrade to a faster scope and I'm thinking about the RASA 8, which should suit my ASI294MC Pro but I'm not too sure about the weight on the HEQ5.

    My HEQ5 is about 3 years old, has the Rowan belt conversion fitted and is mounted on a solid concrete pier (see my Obsy build thread for details).  With the RASA 8, ASI294MC Pro, dew shield & guide scope & camera (I would rather have guiding than not) then it's getting very close to the maximum imaging payload for the HEQ5. I know it could really do with the EQ6-R or similar, but I can't justify changing the mount & scope, and there's no point just changing the mount and sticking with the scopes I already have.

    So, has anyone got or used a RASA 8 on the HEQ5 and how did it handle? Any issues balancing the rig and did you use a guide scope or not?

    Thanks for any advice. :D

    You may just about get away with it, due to the relatively low image scale.

    Adam 
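For reference, the image-scale arithmetic behind that comment (RASA 8: 400mm focal length; ASI294MC Pro: 4.63um pixels):

```python
def image_scale(pixel_um, focal_mm):
    """Arcsec per pixel: 206.265 * pixel size (um) / focal length (mm)."""
    return 206.265 * pixel_um / focal_mm

# RASA 8 is 400mm f/2 and the ASI294MC Pro has 4.63um pixels.
scale = image_scale(4.63, 400.0)  # ~2.39 "/px
# At ~2.4 "/px, a 1" tracking or guiding error shifts the image by well
# under half a pixel, which is why a coarse scale is forgiving of the mount.
```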

  11. On 18/04/2024 at 18:29, FLO said:

    asi2600mcair.jpg.0862081bdc5af011b54e24bab081c7ed.jpg

     

     

     

    I don't like this as an idea: you are basically paying for three things, and you need to upgrade all three if at any point you only need to change one of them. Also, I don't see a USB3 Type-B connector at the back... and that makes me worry that you will be forced to use the ASIAIR with this camera.

    Adam

  12. 16 hours ago, ollypenrice said:

    Don't look at a stretched version of your flat to decide. It tells you very little.  What you should do is read off the ADU value of the unstretched flat (ie the linear flat) in the corners and in the middle. I've found that it was perfectly possible to ask flats to correct a 25% drop-off in brightness between centre and corners, so you need to know what your light drop-off is. It may well be that your drop-off is greater than that but, before spending, it would be worth a check.

    Is there any way in which you might get your filterwheel closer to the chip? You don't have any spacers between F/W and camera?

    Olly

    I would say the main indicators that bad things are happening are the reflections and diffraction spikes, most likely caused by the edge of the filter mount interrupting the light cone.

    Adam

  13. 1 hour ago, Anne S said:

    I had a similar problem when I installed a focal reducer on my 102mm Wave some years ago. A smaller ccd, just a SX694. Completely solved with 36mm filters. When I removed the reducer everything was fine. My ccd was only 16mm diagonal though. I'd like a 26M camera some day and I'm concerned that I'd need to go up to 2 inch filters. My 694 backfocus is 17.5 approx.
     

    The reducer steepens the light cone which increases the risk of vignetting.

    There is just no way that even an f/3 would need more than 1.25-inch filters with an SX694; I strongly suspect something else was going on. As I understand it, 36mm is fine and used by many people on the 26M with no issues. The only thing I can think is that you must have had the filter a long, long way from the sensor.

    Adam

  14. 7 minutes ago, tico said:
    Hello,
    Currently I use an ED80 refractor, with which I observe visually from my backyard - especially the Moon, doubles and planets; the rest of the objects I do not observe due to light pollution...
    The truth is I would like to upgrade to a larger telescope, without it being huge, with more resolution than the ED80 for the planets and the Moon. To notice a difference over the ED80, in your opinion, what telescope could it be?
    I prefer a telescope that is not too bulky or heavy - in short, the mount I have is a Celestron NexStar SE. I had thought about a 4" or 5" Mak... maybe? Or an SCT - a C5, maybe a C6? Although I don't like collimating telescopes...
    thank you so much
    Tico

    The Skymax 150 would be my choice for planets. A C6 would be OK, but I think the Skymax would be better.

    Adam

  15.  

    So, I had not noticed this before, but it looks like Sony have put an OSC/colour sensor based on the IMX492 design on their website.

    Here:

    IMX492LQJ_Flyer.pdf (sony-semicon.com)

    not to be confused with:

    IMX294CJK_Flyer.pdf (sony-semicon.com)

    They are not the same sensor. 

    The IMX294CJK is currently used in the ASI294MC Pro, and it uses an odd Quad Bayer matrix to provide 4.6um effective pixels, with each of the RGGB sub-pixels itself being made up of a cluster of four... sub-sub-pixels...

    Below this is the IMX492LLJ, currently used in the ASI294MM Pro mono camera; it can be used in 1x1 or 2x2 binning.

    IMX492LLJ_Flyer.pdf (sony-semicon.com)

    Note that ZWO assumed the IMX492LLJ was in effect a mono version of the IMX294CJK... it seems that, as I predicted some time back, this is not quite the case: the reason for the IMX492 designation is that the pixels are read out differently, so a revision of the silicon has taken place.

    So this new IMX492LQJ is in fact an OSC version of the mono IMX492LLJ used in the ASI294MM Pro, using a conventional RGGB matrix on 2.3um pixels for a 12-bit, 47.08MP sensor.

    I expect two things:

    1) This is going to turn up in astro cameras at some point in the future.

    2) ZWO, having painted themselves into a corner by calling the IMX492LLJ-based camera the ASI294MM Pro, now have a headache as to what to call a camera based on this sensor... ASI492MC Pro????

     

    Adam

  16. 6 hours ago, AndrewRrrrrr said:

    Evening All,

    Looking for some advice:

    I recently upgraded my camera from a 183M to a 26M, but am using the same EFW with 1.25" filters. This is with a "wave 80" scope (80mm aperture, f6 with a 0.8x reducer, so 384mm focal length).

    Sensor to filter distance is 28.5mm.

    I thought I might get away with it: according to the "CCD Filter Size" on astronomy.tools website which specifies a minimum filter size of 32mm (1.25" = 31.75mm) 

    Attached is the integrated LUM channel and it's corresponding flat. (I might re-do the flats to see if anything has changed, I'm still in the "shake down" phase of the new camera I guess......)

    Or maybe it's something else altogether? I haven't attached the heater yet but it wasn't a cold or humid night

    .

    Opinions welcome, thanks in advance!
     

     

    M101_April_2024-Lum-session_1-St.jpg

    MF-IG_100.0-E_1.0s-AA26MTEC_USB2.0_-6224x4168--Lum-session_1-St.jpg

    Yes, 1.25-inch filters will not cover an APS-C-sized sensor; you will need 36mm filters for that. The best you can get from 1.25-inch is about f/4 with a 4/3 sensor like the ASI1600MM Pro or the ASI294MM Pro. You can always crop to about that size for now without any negative effects, but if you want to use the full sensor you are going to have to change your filters and filter wheel.

    Adam
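A simplified estimate of the minimum unvignetted filter size (my own back-of-the-envelope geometry, not the exact model the astronomy.tools calculator uses), with the numbers from the post above - an IMX571-based APS-C sensor (23.5 x 15.7 mm), filters 28.5mm from the sensor, and f/6 with the 0.8x reducer giving f/4.8:

```python
import math

def min_filter_mm(sensor_w_mm, sensor_h_mm, filter_dist_mm, f_ratio):
    """Lower bound on unvignetted filter diameter: sensor diagonal plus the
    growth of the f/N light cone over the filter-to-sensor distance.
    This ignores off-axis beam tilt, so real calculators give larger figures."""
    diagonal = math.hypot(sensor_w_mm, sensor_h_mm)
    return diagonal + filter_dist_mm / f_ratio

# APS-C (23.5 x 15.7 mm), filter 28.5mm from the sensor, f/4.8 light cone.
need = min_filter_mm(23.5, 15.7, 28.5, 4.8)  # ~34.2mm
# A 1.25" filter has at most ~31.75mm of glass (less clear aperture), so
# even this optimistic bound already exceeds it; 36mm clears it.
```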

  17. 18 minutes ago, Elp said:

    I charged the battery up, went up to 13.4V. Tried it on my rig, the voltage quickly drops again to mid or just above 11V. Think it has to have a regulator connected up.

    But for my purposes which is likely more power hungry than a "normal" setup I instead bought a Pegasus 12V 10A mains adaptor. Ran it for 30 minutes indoors with the peltiers running near 85pc and the rig didn't restart once, which is expected running off mains. So for me this is sufficient for my needs, I likely won't take this setup off site.

    No, that should not happen unless you are drawing more than 8 amps or so, as that would trigger the over-current protection.

  18. 9 minutes ago, fireballxl5 said:

    So you agree that it is (almost) a 100Wh capacity battery? Again,  my understanding is that the power capacity is fixed with the time that current can be drawn determined by the voltage used (and the current level of course).  This would mean that at 3.6V/1A the current rating is 27.65Ah,  whereas at 12V/1A (using a handy USB PD 12V trigger) it would be 8.3Ah.

    Regards,  Andy

    Yes, but it does not mention 3.6 volts at any point I can see; you basically have to reverse-engineer that. For me at least, that does not build confidence. The only voltage mentioned at the link is 28 volts, for example.
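The arithmetic behind that reverse engineering is simple enough (figures taken from the discussion above; DC-DC conversion losses inside the bank are ignored, so real run times will be somewhat shorter):

```python
def amp_hours(capacity_wh, voltage_v):
    """Ah deliverable at a given output voltage for a fixed Wh capacity
    (conversion losses in the bank's DC-DC stage are ignored)."""
    return capacity_wh / voltage_v

WH = 99.54  # capacity quoted for this power bank in the thread
at_cell = amp_hours(WH, 3.6)   # 27.65 Ah at the nominal Li-ion cell voltage
at_12v = amp_hours(WH, 12.0)   # ~8.3 Ah at a 12V (USB-PD trigger) output
```

The Wh figure is the invariant; the advertised Ah number is only meaningful alongside the voltage it was computed at.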

  19. 13 hours ago, Elp said:

    Of course not; here, due to moisture in the air, you need to use dew straps anyway, so it's part of the process, and allowing scopes to acclimatise is essential regardless of dew straps. It could be a manufacturing fault, but statistically more glass means more likelihood of issues, so other triplets, quads and Petzvals could all potentially suffer similarly; the venerable SW Esprit, one of the best refractors for imaging, suffers from it too.

     

    On 02/04/2024 at 17:23, Kyuss said:

    Searching for a small refractor and that seemed to check all the boxes. Then Cuiv kinda ruined it with the pinched optics:

     

     

    He is actually wrong: it's not pinched optics. Pinching shows that effect across the entire frame, including the centre of the field. What you see here is actually a sign that the corrector design is not working well at full frame.

    It's a common effect in the corners of large sensors as the reducer's effectiveness breaks down. I can't find a spot diagram for this scope, but as an example this is from an Askar scope.

    Note the spot shapes for the bottom-right star at 22mm off-axis; when pixel-sampled this ends up looking like a cross.

    image.png.a743e62764255f07c80bfdf5a05511bb.png

  20. 6 hours ago, fireballxl5 said:

    Your calculation is implying that current and power capacity are the same. I'm no electrical engineer, but my understanding is that the relationship between power and current capacity (Wh and Ah) depends on the voltage that is driving the current. So a stated power capacity of 99.54Wh (Anker product website for this power bank) implies a cell voltage of 3.6V, i.e. power / current = 99.54Wh / 27.65Ah = 3.6V.

    Happy to be corrected here🙂

    Yes, I actually noted 3.6 volts in my post, but for some reason it's been cut short.
