Posts posted by vlaiv

  1. 1 hour ago, oymd said:

    well, it would make sense that with higher end mounts the encoders would also be higher spec...no?

    i think I read somewhere that the encoders used by AP are the top end ones...

    I think they are called “Reninshaw” 

    pretty sure I got that name wrong...

    :)

     

    Could be. I don't know that much about encoders to assert things with any certainty, other than to do the basic math of what might be needed.

    In fact, I might be wrong in my calculations above as well - I assumed a simple model of operation, but that might not be the case.

    I said above that we need something like a 24-bit encoder to have precision of 0.1", and calculated a single division of the circle to be around 24nm. I also noted that such precision can't be achieved in an optical configuration.

    In the above post I also gave a screenshot of the ACURO AC58 absolute rotary encoder with 22-bit max resolution. That is single-turn resolution. The datasheet for this product says the following:

    [image: ACURO AC58 datasheet excerpt]

    Now, I would expect a 22-bit resolution encoder to have 4,194,304 ticks. Like we said, there are 60*60*360 = 1,296,000 arc seconds per full turn, so simple math says there are about 3 and a bit ticks per arc second - or, to be precise, a single tick is 0.308990... arc seconds.
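
    If it helps, here is that same arithmetic as a few lines of Python (nothing new, just the numbers quoted above):

        ticks = 2 ** 22                  # 22-bit single-turn encoder -> 4,194,304 positions
        arcsec_per_turn = 360 * 60 * 60  # 1,296,000 arc seconds in a full circle

        print(ticks)                     # 4194304
        print(arcsec_per_turn / ticks)   # ~0.30899 arc seconds per tick
        print(ticks / arcsec_per_turn)   # ~3.24 ticks per arc second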

    However - the absolute accuracy quoted is actually about x100 worse than that?

    There is probably a lot of engineering involved in selecting and using an absolute rotary encoder - knowledge and skill that I simply don't have.

    What I do "know" is that you need really precise encoders if you want them to work properly and make mount precise enough and those tend to be expensive.

    • Like 2
  2. 15 minutes ago, oymd said:

    If I got that right, are you suggesting that not all encoders are equal in their precision?

    that is, encoders on say an IOptron CEM25P EC would be less precise than the ones on a CEM120EC, which would be less precise than those on Astrophysics and Paramount mounts etc?

    the more expensive mounts would have a higher number of clicks/counts etc?

    The first part is correct - not all encoders are of equal precision. In fact, one of the main specs of an encoder is its resolution (the number of data bits it produces), and encoder precision is directly related to encoder price.

    [image]

    I can't say whether the encoders on the CEM25EC are less precise than those on the CEM120EC - it is up to the manufacturer to decide which ones they use, and whether to publish the relevant information.

     

  3. 59 minutes ago, almcl said:

    On individual subs it is just a point source but when stacked the line shows up.  If it's an artefact I can deal with it, but if it's genuine I feel it should be left there.

    How point-like is it?

    If it is a single pixel then it is obviously a hot pixel, but if it has a PSF (blurred, star-like appearance) then it is probably a genuine feature.

    • Like 1
  4. 52 minutes ago, Ags said:

    Yes, darks, lights and flats are all shot with the same Gain and Exposure Length. I'll try and get some FITS files for you (they are currently buried in SER files).

    Temperature should be the same. Flats are shot indoors immediately after the session, so I don't think the camera chip warms up in 7 minutes. Flats are shot by putting a sheet of paper over the front of the scope and pointing it at a room light. The histogram peaks at 50% with no clipping according to Sharpcap. In contrast my lights are so underexposed I could use them as darks. Could the big exposure difference between lights and flats be an issue in itself?

    Ah, now I see - you don't have set-point cooling - that is a regular, uncooled camera. I did not pay attention to that detail.

    Flat calibration can fail for a number of reasons - a mismatch in gain, offset or exposure length, a problem with the darks or the flat darks, or a light leak - but all of these boil down to a single thing: there is some signal present that should not be there. Flats work when the only signal present is the light that came "down the tube" - if there is unfocused light, it will cause problems.

    The issue with a non-cooled camera is that you can't guarantee the dark current will be the same between subs. In principle, once the camera reaches ambient it should be fairly stable - but any change in ambient is going to be reflected in the dark current.

    A change of only about 6 degrees C can double the dark current, and the camera body is made of aluminum - so it reaches ambient quickly. This goes both ways - it will cool faster, but it will also warm up faster. Powering the camera on is going to heat it quite fast, regardless of the fact that it was in the cold just a few minutes ago. Try shooting separate flat darks at the same ambient temperature as the lights (if you shoot flats indoors - run the camera for a few minutes so that it stabilizes in temperature, then record a set of flats and, right afterwards, a matching set of flat darks).
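
    As a rough illustration of that rule of thumb (the ~6C doubling figure above is an approximation, not a measured value for this particular camera):

        def relative_dark_current(delta_t_c, doubling_c=6.0):
            """Dark current relative to the reference temperature, per the '~6C doubles it' rule of thumb."""
            return 2.0 ** (delta_t_c / doubling_c)

        print(relative_dark_current(6))    # 2.0  - twice the dark current
        print(relative_dark_current(-6))   # 0.5  - half the dark current
        print(relative_dark_current(3))    # ~1.41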

     

    • Thanks 1
  5. 26 minutes ago, Ags said:

    Yes I use flat darks - these are the same as my ordinary darks because the darks, flats and lights are all shot with the same settings.

    Exposure length?

    Darks need to match lights in exposure length, and flat darks need to match flats in exposure length.

    Flats must not clip.

    If you want, and if it is not too much trouble - maybe post a single light, dark, flat and flat dark for inspection and calibration? Post them in FITS format if possible (or even whole .ser files if they are not too large, or use a file transfer service of some kind).

    I'd be happy to take a look and see if I can find any issues with the files for you.

    • Thanks 1
  6. +1 temperature shift.

    Let the gear cool to ambient before you start, then refocus roughly every 1C of ambient change (with a slow scope you can go for 2C, but with anything F/5 or faster the odds are you'll need to refocus at about every 1C).

    If you have a motor focuser - make your sequence check focus every hour and on every filter change, of course (even if you have parfocal filters).

    • Like 2
  7. 1 minute ago, Ags said:

    @vlaiv any idea why Autostakkert is over-correcting for my flats? I shoot the flats with the same settings as the lights, but the flats are 50 times brighter (based on histogram peak).

    Do you use flat darks?

    I think it would be a good idea to do the calibration yourself. That way you have more control over the process. Here is the workflow I would recommend.

    Shoot lights as a .ser sequence, as raw 16-bit data of course. Shoot darks, flats and flat darks the same way.

    Use PIPP to convert the .ser sequence into a set of .fits files. Make sure no other processing is done by PIPP - we just want fits files at this stage. In fact, you can skip this step if you use SharpCap and tell it to record the sequence as .fits files in the first place.

    The drawback of this method is that you end up with a bunch of files instead of one. The upside is that you control the calibration process and work in 32-bit mode.

    Then use ImageJ to create master calibration files and to calibrate your lights. If you don't know how to do it - I'll walk you through it step by step (although I already did a topic on that with a small tutorial - I think it can be found on SGL somewhere). The sketch below shows the arithmetic involved.
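
    (This is not the ImageJ route itself - just the same calibration arithmetic written out in Python/numpy so the steps are explicit. The folder names are placeholders.)

        import glob
        import os
        import numpy as np
        from astropy.io import fits

        def master(pattern):
            """Average-stack all frames matching the pattern into one master frame (32-bit float)."""
            frames = [fits.getdata(f).astype(np.float32) for f in sorted(glob.glob(pattern))]
            return np.mean(frames, axis=0)

        master_dark      = master("darks/*.fits")
        master_flat_dark = master("flat_darks/*.fits")
        master_flat      = master("flats/*.fits") - master_flat_dark
        master_flat     /= np.mean(master_flat)               # normalize flat to unity

        os.makedirs("calibrated", exist_ok=True)
        for i, path in enumerate(sorted(glob.glob("lights/*.fits"))):
            light = fits.getdata(path).astype(np.float32)
            calibrated = (light - master_dark) / master_flat  # dark subtract, then flat field
            fits.writeto(f"calibrated/light_{i:04d}.fits", calibrated, overwrite=True)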

    In the end, load the bunch of .fits files into AS!3 and tell it to create a 32-bit result (last time I checked that was a feature for "scientific use" - and worked only on mono subs, so you are fine there).

    Then process in Gimp...

     

  8. 49 minutes ago, Ships and Stars said:

    If anyone knows of a good detailed text on galaxies please let me know. A fascinating topic that I've not delved into a great deal.

    I find wiki to be a very good starting point. For example:

    https://en.wikipedia.org/wiki/Galaxy

    then follow up with:

    https://en.wikipedia.org/wiki/Galaxy_formation_and_evolution

    and of course - if at any time you want to go deeper into a particular subject - there is a list of references that you can follow.

    If you want to play a bit with dissecting our nearest neighbor - here is an excellent resource:

    https://www.spacetelescope.org/images/heic1502a/zoomable/

    I just love the way you can zoom in on a "bright star" and learn that it is in fact a globular cluster :D

    Here is a feature "in a distant galaxy, not really resolved by our telescope":

    [image: the feature, barely resolved]

    The same feature, a bit better resolved:

    [image: the same feature at higher zoom]

    But what are those? A star and a cloud?

    [image: close-up of the feature]

    A young open cluster in a bit of gas, and a globular cluster! Imagine that! :D

    • Like 1
    • Thanks 1
  9. 31 minutes ago, Ships and Stars said:

    Does anyone know if these areas are massive emissions/reflection nebulae or newly formed stars? Or a mix of both?

    Probably emission nebulae / young star clusters.

    Take for example the Orion molecular cloud complex here in the Milky Way. It has a size of about 5,000 ly (give or take). M101 has a diameter of about 170,000 ly.

    [image: M101 with one such feature marked]

    If we take one such feature in the image, measure its length, and measure the size of the galaxy on the image (or, if you want to be more precise - plate solve to get arc seconds per pixel and use the estimated distance to M101 to solve for size) - we get the following:

    37.7 ly/px and 130 px = 4,901 ly (within the error of a rough measurement, of course).
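
    For anyone who wants to reproduce the number, here is the calculation as a small sketch. The distance and plate scale are assumed values for illustration (only the resulting ly/px figure is quoted above):

        import math

        distance_ly   = 21e6      # rough distance to M101, assumed here
        arcsec_per_px = 0.37      # plate scale of the image, assumed here

        ly_per_px = distance_ly * math.radians(arcsec_per_px / 3600.0)
        print(ly_per_px)                  # ~37.7 ly per pixel

        feature_px = 130                  # measured length of the feature in pixels
        print(feature_px * ly_per_px)     # ~4900 ly - about Orion-complex sized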

    This just shows that the feature in the image above is about the size of a feature we know in our own galaxy - the Orion molecular cloud complex. Both "glow" with knots and features:

    [image: Orion_Head_to_Toe.jpg]

    credit: Rogelio Bernal Andreo, source Wiki page on Orion Molecular Cloud Complex: https://en.wikipedia.org/wiki/Orion_Molecular_Cloud_Complex

    42 minutes ago, Ships and Stars said:

    I see some HII regions internally in M101 listed on Stellarium, but they don't seem to correspond. What is the difference between an HII region and a nebula?

    A nebula is any kind of nebulosity. It can be a dark nebula (gas/dust that is not illuminated and obstructs the view of background stars/objects), a reflection nebula (gas/dust illuminated by a nearby star or stars), a planetary nebula (a shell of gas thrown off by a dying star), or an emission nebula (hot gas emitting light - similar to a neon light: excite it with electricity and it glows, except that gas in interstellar space is excited by temperature/atomic collisions or stellar wind). An HII region is the special case of an emission nebula where the dominant component of the gas is ionized hydrogen - often a birthplace of stars, since young stars form from hydrogen gas collapsing under gravity.

    47 minutes ago, Ships and Stars said:

    Can you have a smaller galaxy form inside a 'host galaxy'?

    No, not really - the closest thing to a small galaxy inside a host galaxy would be a globular cluster, and I think one of the hypotheses for their formation is that they are the cores of small galaxies absorbed into the host galaxy.

    HTH

    • Thanks 1
  10. 13 minutes ago, Anthonyexmouth said:

    Would encoders be a replacement for guiding? or is guiding superior? 

    Both have positives and negatives...

    Encoders can't compensate for dynamic effects or for other types of errors. Poor polar alignment can't be corrected with encoders. A sudden displacement of the telescope tube (wind, a kick or whatever) won't be compensated by encoders. Encoders can't correct for atmospheric refraction either - you need an elaborate sky model for that.

    Guiders have their own quirks - they need to be calibrated, are always dynamic, and depend on seeing and other factors. If not configured properly they can hurt more than help (chasing the seeing, over-correction causing oscillations, etc.).

    One thing is certain - guiding is the more cost-effective way of doing it (arguably - it depends on justifying some of the expenses against other tasks as well, such as the computer you already use for imaging), and more people opt to do it that way.

    I think encoders are a good thing - if done properly. One of the problems is that it is really hard to get good precision out of encoders. You want your encoders to have rather good resolution. For example, I would say that encoder resolution needs to be around 0.1-0.2" to get really good results (just imagine the DEC axis - you want your DEC error to be rather low, and that is dictated by the precision of the DEC encoder, because DEC is for the most part stationary).

    Now, if we want that much precision - we need encoders that divide the circle into 10 * 60 * 60 * 360 = 12,960,000 steps. That is 23.6 bits of precision, so you need something like a 24-bit encoder.

    You can also see how tricky this is if you imagine, for example, that your encoder disc has a 10cm diameter. Its circumference will be 31.4159... cm = 314.159... mm = 314,159.265... um = 314,159,265.358... nm - so a single "tick" on such an encoder needs to be about 24nm wide. You can't even use photons to read such a tick directly, because visible light has a wavelength about x20 larger.

    Now you can see why encoders of that resolution are very expensive, not to mention the level of craftsmanship needed to make a perfect circle and avoid what we combat all the time - imperfections in the manufacturing of round things (periodic error).

    In the end - I think the best design for a mount might be to accept the periodic error of the worm gear, minimize backlash by using a spring- or magnet-loaded automatic tensioning system on the worm, and put encoders on the worm shaft rather than the main shaft. That way all other imperfections would be minimized (motor imperfections, gear system imperfections, belt system imperfections) and the motion of the worm would be properly timed and smooth enough that guiding could take care of the worm period error with ease, since it changes smoothly. Because mounts have at least 120+ teeth on the worm gear - that is about 7-8 bits less precision needed, and 16-bit encoders are readily available at reasonable cost. The numbers are sketched below.
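
    Here are those numbers in one place (the 144-tooth worm wheel is just an example count, assumed for illustration):

        import math

        arcsec_per_turn = 360 * 60 * 60               # 1,296,000
        steps_needed    = arcsec_per_turn / 0.1       # 12,960,000 steps for 0.1" resolution
        print(math.log2(steps_needed))                # ~23.6 -> need a ~24-bit encoder on the axis

        circumference_nm = math.pi * 0.10 * 1e9       # 10 cm diameter disc, in nanometres
        print(circumference_nm / steps_needed)        # ~24 nm per "tick"

        worm_teeth = 144                              # example worm wheel tooth count (assumed)
        print(math.log2(steps_needed / worm_teeth))   # ~16.5 -> a 16-17 bit encoder on the worm shaft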

     

     

    • Like 6
  11. Hi,

    No, my profession is programming / system architecture - I do physics and astronomy as a hobby.

    I can do the above example and post the results here. In fact, I think I already did something similar once - to show people that drizzle is not doing what they believe it is doing. I just downloaded the latest version of DSS and I have some old data I can use. It is not undersampled, but it can easily be made so by software binning.

    In your example with the globular cluster - you enlarged the regular version by x3. What interpolation did you use for that? Much depends on the interpolation algorithm.

    Also - looking at stretched images can be misleading. We tend to stretch as far as the image looks pleasing. If two images have different SNR - they will end up with different levels of stretch as a result.

    These are just some of the reasons why you should not visually judge the actual resolution of an image. I'll quickly demonstrate both points.

    [image: undersampled star enlarged with two different interpolation algorithms]

    This shows an undersampled star enlarged by two different algorithms from the same base image - the left uses bilinear interpolation, the right cubic O-MOMS. The star in the right image looks "tighter" and better resolved than the one on the left.

    Here is an example of the other thing I mentioned above:

    [image: the same star at two different noise levels, stretched to matching backgrounds]

    Which star do you feel is "tighter", and which image has "better resolution" - the left one, right? The star looks smaller there. In fact - it is the same image: the left one is polluted with x3 higher noise than the right (so SNR is x3 higher in the right image - and drizzle x3 will produce x3 less SNR), and I applied a linear stretch to both until the background looked the same.

    Since we have point sources in the image - stars - we have the PSF of our linear images, and that is what we should measure, because it is the proper measure of image resolution. Visual cues can be misleading. See the sketch below for how I would measure it.
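
    For reference, this is roughly how to measure FWHM instead of judging by eye - fit a Gaussian to a small cutout around an unsaturated star in the linear data (just a sketch using scipy, not the exact tool I use):

        import numpy as np
        from scipy.optimize import curve_fit

        def gauss2d(coords, amp, x0, y0, sigma, offset):
            x, y = coords
            r2 = (x - x0) ** 2 + (y - y0) ** 2
            return (amp * np.exp(-r2 / (2 * sigma ** 2)) + offset).ravel()

        def star_fwhm(cutout):
            """Fit a symmetric 2D Gaussian to a star cutout and return FWHM in pixels."""
            y, x = np.mgrid[0:cutout.shape[0], 0:cutout.shape[1]]
            guess = (cutout.max() - cutout.min(),
                     cutout.shape[1] / 2, cutout.shape[0] / 2,
                     2.0, cutout.min())
            popt, _ = curve_fit(gauss2d, (x, y), cutout.ravel(), p0=guess)
            return 2.3548 * abs(popt[3])    # FWHM = 2*sqrt(2*ln2) * sigma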

    That is walking noise, and I'm not sure there is anything you can do about it in an already recorded image. For next time - you should be dithering if you are guiding; that reduces walking noise.

    I still don't quite understand why it happens - and I'm not sure anyone knows exactly. I suspect it has something to do with the application of bias and/or read noise, and the fact that there is a slight drift of the frames in that direction over the course of the session.

  13. I briefly looked at your example and have a few objections, if you don't mind.

    Putting aside whether the new workflow works better for you and you like the images made with it more - that is just fine, but it's not relevant to drizzle or to the question of whether drizzle works.

    My position is that in amateur setups drizzle does not work, and is in fact a waste of SNR.

    In order to test this, I advocate the following workflow - use the same software for alignment and stacking and, instead of inspecting images after processing, do analysis on what is actually relevant - compare star FWHM in the drizzled and non-drizzled image. In fact - throw in a control as well. Here is the complete workflow:

    Calibrate your images in AstroImageJ as you usually do and then do a split demosaic - extract the two green fields out of the image (the first field will have the odd rows/columns, the other the even ones - see the sketch below). Let's use monochromatic data for the test, it is easier that way. Both green fields will have half the original resolution - but don't resample or interpolate them - that is the actual sampling rate.
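
    Something like this is what I mean by the split (an RGGB Bayer pattern is assumed here - shift the offsets if your camera uses a different pattern):

        import numpy as np

        def split_green(raw):
            """Return the two green sub-fields of an RGGB Bayer frame, each at half resolution."""
            g1 = raw[0::2, 1::2]   # green pixels on the even rows
            g2 = raw[1::2, 0::2]   # green pixels on the odd rows
            return g1, g2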

    Stack the original data without modification into one stack - measure FWHM on a couple of stars that don't saturate but have good enough SNR.

    Do a x3 drizzle stack into another stack - measure FWHM on the same stars, but divide the result by 3 (because of the drizzle factor).

    For the control - use an advanced resampling method like Lanczos3 to enlarge each sub prior to stacking - scale them x3 larger, stack normally and again measure FWHM, dividing the result by 3.

    While you are at it - measure the signal of one star (AstroImageJ photometry tool) and the standard deviation of a patch of background without stars - divide the two to get an SNR figure for comparison.
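
    A sketch of that last measurement (the box coordinates are placeholders - pick a star and a star-free patch on your own stack; this simple ratio is only meant for comparing the three stacks, not as rigorous photometry):

        import numpy as np

        def simple_snr(image, star_box, background_box):
            """star_box / background_box are (y0, y1, x0, x1) regions of the image."""
            sy0, sy1, sx0, sx1 = star_box
            by0, by1, bx0, bx1 = background_box
            background = image[by0:by1, bx0:bx1]
            star = image[sy0:sy1, sx0:sx1] - np.median(background)   # background-subtracted star signal
            return star.sum() / background.std()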

    Ok, the data was poor. Now it is a bit better, but still lacking quite a bit.

    Here are the green and red channels of this new stack:

    [image: green and red channels of the new stack]

    Hot pixels are now visible because no sigma clip was used. Red also shows some vignetting now. There is evident structure in M1, unlike in the previous stack.

    I think there could have been dew on the scope, and you should be able to see it if you compare different subs. Subs from the beginning of the session won't have it, but later subs will show a reduction in brightness.

    Because of this difference between subs caused by the buildup of dew, the sigma clip algorithm (for which I believe you used quite aggressive settings) threw away quite a bit of data - treating regular data as outliers because of the differences between subs.

    I'm not 100% sure that this is what happened, but you can test it by looking at the subs themselves - if they get progressively worse towards the end of the session, it's likely a dew issue.

    HTH

    I think something went wrong with the calibration of this data.

    Maybe try to stack just the lights without any darks. Stack only the 26x90s subs, without the two additional longer ones. Save the result in 32-bit format rather than 16-bit, just to make sure.

    The blue and green channels look ok / similar:

    [images: blue and green channels]

    Same vignetting / same dust shadows; blue is noisier because the camera is less sensitive in blue, and there are also twice as many green pixels, so there is more data there.

    But look at red:

    [image: red channel]

    It is almost flat. It does show some signature of the central region being vignetted, but the corners are flat - as if something clipped the histogram, or maybe the sensitivity in red is by far the lowest of the three.

    To reiterate:

    Stack without darks and stack only the 90-second subs so you don't mix exposure lengths. Use super pixel debayering mode and the regular average stacking method (nothing fancy) - don't use background calibration. We want the simplest stacking method / options, to see whether the data is poor or something went wrong in the settings. Don't forget to save it as a 32-bit image rather than 16-bit.

    • Like 1
    Could you post a 32-bit version of the stacked data for inspection?

    Also, how many light and dark subs did you use? Did you dither?

    You are sampling at 0.48"/px, which is a very high sampling rate - you are oversampling quite a bit. This spreads the signal over more pixels, so signal per pixel goes down and SNR per exposure will be low.

    I can't tell what sort of conditions this was shot in, but low transparency could have contributed.

    It could also be due to processing, but I think you are a bit out of focus:

    [image: star crop suggesting slight defocus]

     

  17. 13 minutes ago, William Productions said:

    I guess I will just stick with the SA, I think that the more polar aligned it is and the less error it has the better the alignment?

    Yes indeed. If you are limited by your budget, then I guess you won't be guiding the mount? For that you need an additional budget for a guide scope, guide camera and a few accessories to tie all of that together.

    Depending on imaging resolution, regular polar alignment (using the polar scope) can be sufficient for a couple of minutes of imaging without trailing. If you won't be guiding, then I think your bigger concern is periodic error.

    According to some sources on the internet, the periodic error can be as much as 50 arc seconds peak-to-peak, and the worm period of the SA is 10 minutes. This means you are likely to see about 5"/minute of error at some points (sometimes a bit more, sometimes a bit less).

    If you use the ED72 and a DSLR camera (something with a pixel size of about 4um) - you will be imaging at about 2.31"/px, so you could expect around 2 pixels of elongation in some of the subs if you don't guide - that should not be much of an issue. The key is to keep exposures relatively short - about a minute or so. A rough version of these numbers is sketched below.
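
    (The ~357 mm focal length below is an assumption - it is roughly what gives 2.31"/px with 4um pixels, e.g. an ED72 with a reducer.)

        pixel_um = 4.0
        focal_mm = 357.0                                   # assumed - not stated above
        scale    = 206.265 * pixel_um / focal_mm           # ~2.31 "/px

        pe_p2p_arcsec   = 50.0                             # quoted worst-case periodic error
        worm_period_min = 10.0
        drift_per_min   = pe_p2p_arcsec / worm_period_min  # ~5 "/min, the rough figure used above

        exposure_min = 1.0
        print(scale)                                       # ~2.31
        print(drift_per_min * exposure_min / scale)        # ~2.2 px of elongation in a 1-minute sub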

  18. 5 minutes ago, William Productions said:

    Is this a good mount?

    It is called the Sky-Watcher EQ5 Mount and Tripod.

    That is a very decent mount, however that particular version is not suited for astrophotography - because it has no motors.

    If you want to look at the EQ5 mount - and that would be quite a decent option for you - you should be looking at this model:

    [image: Sky-Watcher EQ5 Pro SynScan GoTo]

    here https://www.firstlightoptics.com/skywatcher-mounts/skywatcher-eq5-pro-synscan-goto.html

    That version has goto (you select the object you want to image, and the mount computer points the mount in the right direction for you).

    If you want to use the basic, non-goto mount you asked about instead, you will need to purchase and fit a motor kit to it:

    https://www.firstlightoptics.com/sky-watcher-mount-accessories/enhanced-dual-axis-dc-motor-drives-for-eq-5.html

    That option is less expensive, but I think the goto mount has higher quality motors and, of course, goto capability.

  19. 7 minutes ago, William Productions said:

    Is there a way to find another RA and DEC axis mount that costs around 240 pounds.

    The only way I see that happening is to monitor the classifieds section and look for a second-hand mount like an EQ3 or an iOptron SmartEQ.

    I would strongly advise against using such a mount, though. The mount is the most important part of an astrophotography setup. You need to put at least x3-x4 that much budget towards the mount to begin with for anything serious (HEQ5 class mount). People can image with a mount such as an EQ3 or EQ5 if they understand the limits of their mounts and image accordingly.

  20. @William Productions

    If you are referring to guiding capability then yes - guiding in RA only does make a difference compared with guiding in both RA and DEC.

    There are two main issues that affect how well a mount tracks - one is polar alignment and the other is periodic error.

    Both of these cause drift over time - meaning that the mount is no longer pointing where it started, and in astrophotography terms - stars become small ellipses or even lines.

    Polar alignment error causes drift in the DEC axis, while periodic error causes drift in the RA axis. A mount that guides only in RA will still be susceptible to star trails if the polar alignment is not good and the polar alignment error is large enough. You can control polar alignment error, but you can't control periodic error (well, to some extent you can - there is periodic error correction that some mounts are capable of, but in principle you can't correct for periodic error completely).

    With a mount that guides in both RA and DEC you don't need to be extremely precise with polar alignment, but with a mount that guides only in RA - the maximum exposure length will be limited by how well you polar aligned your mount. A rough worst-case estimate of that drift is sketched below.
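
    As a very rough, worst-case estimate of how fast polar misalignment makes stars drift in DEC (small-angle approximation, just to get a feel for the numbers - the 5' error is an example value):

        import math

        pa_error_arcmin = 5.0                          # example polar alignment error
        sidereal_rate   = 15.041                       # arc seconds of rotation per second of time

        theta_rad = math.radians(pa_error_arcmin / 60.0)
        drift_max = sidereal_rate * theta_rad * 60     # arcsec per minute, worst case
        print(drift_max)                               # ~1.3 "/min for a 5' error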

  21. 2 minutes ago, Prolifics said:

    At least we know they are not parfocal - all same manufacturer.

     

    Blue was taken last and would have been coldest part of the night probably 3c less than when Lum was taken.

     

    Order Lum R G B

     

    David

    I don't think this shows that the filters are not parfocal - it certainly points to a temperature drop. It is also consistent with a 1C temperature drop shifting the focus out of the critical zone.

    Although R should be less defocused than L if the temperature was dropping steadily - a couple of things could have impacted this: dew heaters, and the fact that the scope is a refractor. I know it is a triplet lens and you should not see a significant shift in focal position between colors - but maybe there was still some impact.

    In any case - refocus on filter change and refocus on a temperature change of 1C or more (usually every hour or so if things are normal - on nights with a rapid temperature drop it could be down to half an hour).
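
    For a feel of why 1C matters: one commonly used approximation of the critical focus zone is 4.88 * lambda * N^2 (there are other conventions). The f-ratio below is just a placeholder - plug in your own scope:

        wavelength_um = 0.55        # green light
        f_ratio       = 7.0         # placeholder f-ratio

        cfz_um = 4.88 * wavelength_um * f_ratio ** 2
        print(cfz_um)               # ~130 um of focuser travel at f/7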

  22. 11 minutes ago, Prolifics said:

    I have the Pegasus power box so the dew heaters come on and off automatically when the dew point gets near the temperature. Would that make a difference - if the tube was getting slightly warmer by 0.5C, would that expand the tube?

    Maybe having the dew heaters on all the time would be better when it's cold outside?

    I have no idea - I've never considered dew heaters and their impact on tube temperature. Maybe the best way would be to examine your subs and, if you can, record the temperature (if not, use forecast data to get a rough idea of how the temperature changed during the night). If you see that your subs change FWHM/HFR at some point in the night - compare that to the temperature and figure out whether you need to refocus and under which conditions.

    • Like 1