Posts posted by ONIKKINEN

  1. 55 minutes ago, Iem1 said:

    Hey folks, another question.

    Had another outing a few days ago; long story short, it did not go as smoothly as the previous session and I had trouble doing the PA. The first attempt left me with a small amount of trailing in 60-second exposures, and the 30-second ones were a little inconsistent.

    I read online that the PA with the hand controller should be iterative: the more times you do it, the more accurate it will be. However, when I try to do a second PA to improve accuracy, I select the star, the mount does its GoTo, and normally you are asked to center the star with the hand controller before anything else. Sometimes the GoTo is very accurate and the star is already dead center, sometimes I need to shimmy across a little. Either way, upon advancing to the stage where the mount is supposed to 'drift' away and then ask you to adjust using the Alt/Az bolts in turn, it does not move?

    There is no drift at all, it simply says adjust the bolts, but of course, it is already dead center having just done the first step. The only way I can do my PA again at the moment is by switching the mount on and off, doing a fresh 3 star alignment and hoping the new PA is better.

    When attempting a second PA (without starting from scratch) should I be using the same star as the first PA or selecting a star further away or something? I can not see a way to improve upon my first PA.

    I will be investing in ASIair and guide cam next month, looking forward to another step up, but wanting to have the basics of GoTo nailed down first :D 

     

    Thanks, as always.

    Is the star trail drift in RA or DEC? This is very important to know: DEC drift points to a polar alignment error, while RA drift does not. Below is a graph of my AZ-EQ6 running the Guiding Assistant in PHD2 with guiding disabled, over roughly 15 minutes. Blue is RA, red is DEC. The jittery motion is due to mechanical issues within my telescope and the way my 60mm guidescope is mounted, rather than actual jittery motion of the mount itself (although some of it is probably that too).

    [Attached image: 15-minute periodic error graph from PHD2, blue = RA, red = DEC]

    The sharpest cliffs in blue happen over roughly 20 seconds and span several arcseconds, so if you expose during that part of the worm cycle, the exposure is a guaranteed failure. Other parts are less severe, and it's possible you just happen to land on a part of the period where you see no issues for 60s. The worm in my mount isn't particularly good because of these sharp features, but you could have something similar too; there is no way to know without measuring (there is a rough sketch of how drift adds up at the end of this post). None of this is an issue when guiding, however: all of these motions are "slow" and predictable enough that guiding takes care of them completely. Maybe the sharp cliff shown here adds an extra 0.1" of RMS error during an exposure, but that's not a deal breaker.

    As for the handset PA, I think you have it as good as it can get with that method. If the stars land centered each time and the mount sees no reason to adjust, it means it can't do any better than what you already have.
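    To put rough numbers on the drift idea above, here is a minimal sketch (example values only, not measurements from either of our mounts) of how a given RA drift rate turns into star trailing at different exposure lengths:

        # Rough sketch: how an RA drift rate turns into star trailing in an unguided sub.
        # The drift rate and pixel scale below are made-up example values.

        drift_arcsec_per_s = 4.0 / 20.0   # e.g. a "cliff" of ~4 arcsec over ~20 s
        pixel_scale = 2.0                 # assumed image scale in arcsec per pixel

        for exposure_s in (15, 30, 60):
            trail_arcsec = drift_arcsec_per_s * exposure_s
            trail_px = trail_arcsec / pixel_scale
            print(f"{exposure_s:>3} s exposure: ~{trail_arcsec:.1f} arcsec of trailing (~{trail_px:.1f} px)")

    Anything much beyond a pixel or two of trailing shows up as elongated stars, which is why the steep parts of the curve ruin long unguided subs.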

    • Like 1
  2. Is that a DIY cooler? Looks interesting.

    If you can make the DIY cooler run at a consistent temperature night after night, you can take one set of darks, biases and dark flats and use those for every session, so you don't need to take them each night. If your flat exposures are very short you can get away with skipping dark flats and just use biases to calibrate the flats. If the temperature is not so controllable, you could take several sets of calibration frames at different temperatures and use whichever set is closest to the actual imaging temperature that night.

    Flats have to be taken with the optics in exactly the state they were in when imaging, so take them before you remove the camera from the scope, each night with its own set. If you can leave the setup assembled and don't take anything apart, your flats may well work fine reused, but I would still take them every time. A light panel of some kind is the simplest method, but since you have 50mm of aperture you can use a decent-sized phone, tablet or laptop screen as the light source, so you probably don't need an extra tool for this. Flats are very quick to take, so there is not much excuse to skip them each time you're out.

    As for the multi-session question, each session needs to be calibrated with matching calibration frames. So if you can build a library of darks, biases (and dark flats, if you bother) at matching temperatures, those can be reused indefinitely and the only session-sensitive calibration frame is the flat. Once a sub has been properly calibrated it no longer matters which session it came from, and you can stack any number of them together without issue. If you don't want the hassle of storing calibrated frames, you can use DeepSkyStacker's "groups" function to put each session in its own group; it calibrates each group with its own calibration frames and then stacks everything together at the end.
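    For reference, this is roughly the calibration arithmetic the stacking software applies to every sub, sketched with NumPy; the function and variable names are mine for illustration, not from any particular program:

        import numpy as np

        # Sketch of standard per-sub calibration: dark-subtract the light, then divide
        # by a normalised flat. The master dark should match the light's exposure and
        # temperature; the flat is corrected with its own dark flat (or a bias, if the
        # flat exposure is very short).
        def calibrate(light, master_dark, master_flat, master_flat_dark):
            flat = master_flat.astype(np.float64) - master_flat_dark
            flat /= np.mean(flat)              # normalise so the flat only corrects shading
            return (light.astype(np.float64) - master_dark) / flat

    Once the subs are calibrated like this, they can be mixed from any number of sessions.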

    • Like 1
    • Thanks 1
  3. 2 minutes ago, Stuart1971 said:

    Hmmm, I used an L-eXtreme (7nm Ha and OIII) last week and got some good subs of M81 and M82, but each to their own…👍🏼

    M82 has a significant starburst going on that is bright in Ha, so the filter isn't a bad choice for that, or for the many emission regions within M81. But capturing the galaxy itself, especially M81 since it is face-on towards us, would be faster without the filter. Stars still radiate at the OIII and Ha wavelengths even though they do not "emit" in them the way nebulae do, so filter or not there will be a picture in the end.

    • Like 1
    • Thanks 1
    My mantra on galaxy imaging is that broadband light-pollution filters have too many negatives to be of any decent use, and I don't see how one would be useful except in awful conditions (Bortle 8, maybe).

    Firstly, real colour results are off the table, because a large part of the spectrum is blocked. Expect everything to look blue with most broadband filters. With CLS filters you get green results as most of the reds are gone.

    Second, blocking light pollution also blocks the galaxy itself. Starlight is mostly the same colour as sunlight, which is what artificial lighting tries to emulate, and light from galaxies is almost entirely starlight, with a speck of emission here and there in actively star-forming galaxies. So by blocking the part of the spectrum where light pollution is most intense, you also block the brightest* part of the galaxy.

    * This depends partly on the galaxy. Very active spirals like M33 are noticeably bluer than most other galaxies, so the negatives are less apparent, though still there.

    • Thanks 1
    +1 vote for ditching the Bahtinov mask.

    I started comparing Bahtinov mask focus with focusing purely on the HFR readout in NINA a while back, and found that star HFR can often be improved by as much as a whole pixel even when focus seemed perfect with the mask.

    Now that I'm used to it, I consistently get up to 30% better focus just by watching the reported star HFR values rather than using a mask.
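    For anyone wondering what that HFR number actually represents, one common way to compute it is the flux-weighted mean distance of a star's pixels from its centroid; below is a rough NumPy sketch of that idea (NINA's exact implementation may well differ):

        import numpy as np

        def half_flux_radius(cutout, background):
            """Rough HFR estimate on a background-subtracted star cutout."""
            flux = np.clip(cutout.astype(np.float64) - background, 0, None)
            ys, xs = np.indices(flux.shape)
            total = flux.sum()
            cy = (flux * ys).sum() / total     # centroid row
            cx = (flux * xs).sum() / total     # centroid column
            r = np.hypot(ys - cy, xs - cx)     # distance of each pixel from the centroid
            return (flux * r).sum() / total    # smaller value = tighter star = better focus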

    • Like 1
  6. Bird or insect maybe?

    Or perhaps some asymmetric piece of space junk/space rock that "skips" off the atmosphere and so changes direction.

    I really don't think it's a satellite doing an inclination-change burn, as a turn like this would need roughly 10 km/s of delta-v in low Earth orbit. If it were in a higher orbit, we would see only part of this trail, as it would move much more slowly across the sky. 10 km/s is not something any satellite, or any other spacecraft for that matter, has available to it. It's even worse when you consider the time this exposure spans, which is probably just a few minutes. So a very short burn and 10 km/s of delta-v = not happening. Very strange.
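    A rough sanity check of that number, assuming an instantaneous plane change at typical low-Earth-orbit speed; the turn angle here is only a guess for illustration:

        import math

        v_leo = 7.8        # km/s, typical low-Earth-orbit speed
        turn_deg = 80      # assumed apparent change of direction (illustrative guess)

        # Instantaneous plane change at constant speed: dv = 2 * v * sin(delta_i / 2)
        dv = 2 * v_leo * math.sin(math.radians(turn_deg) / 2)
        print(f"A {turn_deg} degree turn at {v_leo} km/s needs ~{dv:.1f} km/s of delta-v")
        # ~10 km/s, i.e. more than the rocket spent getting the satellite into orbit.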

    • Like 1
  7. 17 minutes ago, Steve143 said:

    Thanks. Would this periodic error cause the apparent movement of my field of view between each exposure? The movement is very small but noticeable when you move from one frame to the next.

    Steady drift in one direction is probably drift in declination due to imperfect polar alignment. It doesn't even have to be that imperfect if the session is long and you did not re-center. I found it necessary to babysit the mount and check the framing from time to time when I shot unguided.

    Periodic error is a back-and-forth motion where the tracking speed of your mount under- and over-performs over one worm period (480s), but it shouldn't result in steady drift for longer than one worm cycle. This motion is only in RA.

    By the way, the 3-star alignment reports how far off your polar alignment is at the end, so you could use that to check how accurate it was. This does rely on you being very precise when centering the alignment stars. It also makes GoTos more accurate, but won't help with tracking accuracy.

    • Like 1
  8. Looking great for first tries with short integration!

    Guiding is what will make your EQM-35 work much more reliably, although it still won't do miracles for you. My unit had a particularly nasty periodic error, so unguided imaging was not a good way to spend clear nights for me.

    You can test how much periodic error you have by polar aligning as well as you can, orienting the camera so that RA runs level left to right, and shooting an exposure of at least 480s (one full worm period) towards a low-declination part of the sky. The image will of course be ruined, but you can then measure the periodic error from the length of the star trails in RA. It won't work if it's windy, though, as the mount doesn't like that.

    Once you have this measurement you can make an educated guess at how long you could shoot unguided (there is a rough sketch of the arithmetic below). I would guess that 15s is already close to the longest exposure that is somewhat reliable, so you could just use that.
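    A minimal sketch of how that trail measurement could be turned into numbers; the values are made up, and it assumes a smooth sinusoidal periodic error, which real worm gears usually are not, so treat the result as optimistic:

        import math

        # Made-up example measurements from a trailed exposure of at least one worm period
        trail_px = 7           # length of the RA star trail in pixels
        pixel_scale = 2.2      # arcsec per pixel of the camera + scope
        dec_deg = 10           # declination of the field (apparent RA motion shrinks by cos(dec))
        worm_period_s = 480    # EQM-35 worm period

        pe_pk2pk = trail_px * pixel_scale / math.cos(math.radians(dec_deg))   # peak-to-peak PE, arcsec

        # For a pure sine wave, the worst-case drift rate is peak-to-peak * pi / period
        worst_rate = pe_pk2pk * math.pi / worm_period_s                        # arcsec per second

        allowed_trailing = 1.5   # arcsec of trailing you are willing to accept in a sub
        max_exposure = allowed_trailing / worst_rate
        print(f"Peak-to-peak PE ~{pe_pk2pk:.0f} arcsec, worst drift ~{worst_rate:.2f} arcsec/s,"
              f" max unguided exposure ~{max_exposure:.0f} s")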

    • Like 1
    I don't doubt that visual astronomy is in decline, with light pollution increasing globally and most people unable to easily access the quality skies needed for the best views. But I don't think visual is dying, and it probably never will, because it is enjoyable even under poor skies. It just takes a shift in attitude and expectations for the viewer to accept that some views are not going to happen under LP. I find visual very satisfying whenever I convince myself to leave the camera at home, even if the views are somewhat disappointing.

    I think online representation plays a part in this perception of visual being on the way out too. Most of the time I don't feel like writing up what I observed in a short session, but the images I take will one day end up online. The process of taking and handling the images also sparks discussion far more easily than observing does, IMO.

    I am maybe 90% imaging and 10% visual, but I don't think I'll ever drop the visual side to zero.

    +1 vote for the RisingCam IMX571. It's a bit more expensive than the Altair 294, but the newer tech is worth it IMO. If calibration issues worry you, you (mostly) don't have those problems with the IMX571-based cameras. The same goes for the IMX533, by the way.

    I would also urge you to forget about the higher resolution estimates given by astronomy.tools and aim closer to the lower estimates instead, so 2" per pixel or more. Don't forget that binning lets you change the shooting resolution at will (in integer multiples of the native pixel scale).

    Of the several thousand subs I have shot so far at 1.84" per pixel, most are oversampled, and I can safely say none of them come close to 1" resolution in real detail, and that is with a 200mm aperture.
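    For reference, the standard pixel-scale arithmetic is below; the pixel size is the IMX571's, but the focal length is just an example value:

        # Image scale in arcsec/pixel = 206.265 * pixel size (um) / focal length (mm)
        pixel_size_um = 3.76      # IMX571 pixel size
        focal_length_mm = 420     # example focal length, substitute your own

        for binning in (1, 2, 3):
            scale = 206.265 * pixel_size_um * binning / focal_length_mm
            print(f"bin {binning}x{binning}: {scale:.2f} arcsec per pixel")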

  11. The AZ-EQ6 guides fairly well, considering it was pointing my scope towards a tree 🤣:

    [Attached image: PHD2 guiding graph from the night the scope was pointing through a tree]

    I was really scratching my head over what went wrong and why the mount was suddenly acting up, especially in declination, which really had not moved much for the entire night (about 1 arcmin of polar alignment error). I checked for all kinds of cable snags and whether the tripod had sunk into the icy surface it was planted on, and only then saw that the scope was pointing at a forest. I had for some reason not thought about the target moving to the other side of the sky during the night, which of course it did, and on the west side this location faces a forest.

    Not sure what I learned from this, other than that PHD2 does not have forest-compensating features.

     

    • Haha 3
  12. On 16/03/2022 at 17:27, alacant said:

    The most expensive item by far is the dovetail plate which retails at €silly for what is simply a length of rectangular aluminium. A 500x100x15 lump and an angle grinder is all you need. Your local window frame supplier will sort you. Claim a 500mm length of profile for the top of the rings from his scrap bin. 1.6mm springs on AliExpress are around €3 per box of 10. Silicone sealant, even cheaper.

    Cheers

    I have attached some hose clamps, I think made for AC ducting, to tension the tube and hopefully improve rigidity, like I am pretty sure @Captain Scarlet did to his 8'' (can't remember whether it was the VX). I also drilled some holes through the existing tube clamps and fitted a Skywatcher Vixen dovetail I had lying around to act as a top rail tying the tube rings together. I lack the critical astronomy resource known as a local window frame supplier 🤣 so this will have to do for now, but these fixes, while ugly, seem to have improved the mechanics of the tube quite a bit. Squeezing or lightly punching the tube with a laser in the focuser shows that it doesn't move nearly as much as it used to, so you are correct, this operation did not in fact cost a fortune.

    The springs-and-silicone thing I'm not worried about (yet). It looks like the VX mirror cell already uses this method, seeing as there are 9 (or is it 6? Not sure) small blobs of silicone between the mirror and the cell itself. I don't have the cell out right now, but last I checked there was enough clearance between the clips and the mirror to slide something like a credit card in without much resistance. Not sure why the clips are still in place, though, if the mirror has already been glued. Precaution by design, perhaps?

    [Attached image: the mirror cell showing the silicone blobs between mirror and cell]

    Poking the silicone shows that it is bonded to both the mirror and the cell, not just sitting between the two. The springs themselves are also very strong, and it takes considerable force to turn the collimation nuts, so much so that it's actually a bit annoying. I'm sure the mirror can still be rattled around in transport, but during a session, I don't think so.

    • Like 1
  13. 2 hours ago, dazzystar said:

    I put the diagonal in and an EP then focused. I then replaced the EP with the camera and at least can see something now (pic below). I've had to wind the focuser almost all the way in to get it into some kind of focus. Will I be okay to remove the diagonal and just have the camera without this or the nose when I come to use it on stars?

    You will not be able to reach focus on anything without enough back-spacing, so you need either the diagonal or an extension piece in its place.

    It would be better to ditch the diagonal and get the extension, as a poor-quality diagonal is just an extra source of optical aberrations that you really don't need.

  14. 2 minutes ago, dazzystar said:

    Thanks again for the reply. The camera is in place of an eyepiece and there's no diagonal. Should I put an eyepiece in (without a diagonal) and focus (or try!) and then replace it with the camera to see?

    The eyepiece won't reach focus without the diagonal either. Refractors have a long back-focus distance behind the tube to leave room for a diagonal, so without one there is no hope of reaching focus without extensions. You will need an extension with the same optical path length as your diagonal; what that length is I don't know (since I don't have this scope), but you could measure it roughly and get the closest match. I think it should be somewhere around 100mm? But do check this yourself before buying anything.

  15. 21 minutes ago, dazzystar said:

    Thanks so much. I've adjusted those settings and here's what I'm getting now. I've tried rolling the focus tube in and out. The object (a small building) is visible when I remove the camera. I don't know what settings a proper exposure should be.

    [Attached screenshot of the capture software showing the new exposure and its histogram]

    Now that it's not saturated it matters much less; if there were detail, it would be visible here. You can see your exposure is usable now, as none of the histogram peaks are pushed against the right edge (saturated). A quick way to check this numerically is sketched at the end of this post.

    If you don't see any difference when racking the focuser in and out, you are probably so far from the focal plane that the change is hard to see. How is the camera attached? If it is attached in the same place as an eyepiece, it should reach focus at roughly the same position as that eyepiece. If you have taken the diagonal off, you will need some kind of extension between the focuser and the camera to replace the lost light path of the diagonal.
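    The saturation check mentioned above could look something like this; a sketch only, assuming the capture software saves 16-bit FITS files and that Astropy is available (the file name is just a placeholder):

        import numpy as np
        from astropy.io import fits    # assumes FITS output and an Astropy install

        data = fits.getdata("test_exposure.fits").astype(np.float64)
        full_scale = 65535             # assuming 16-bit data
        near_saturation = np.count_nonzero(data >= 0.98 * full_scale)

        print(f"max ADU: {data.max():.0f} / {full_scale}")
        print(f"pixels at or near saturation: {100 * near_saturation / data.size:.2f} %")
        # If more than a tiny fraction is saturated, shorten the exposure or lower the gain.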

    Looking at your settings it's difficult to tell whether you are in focus or not. You have only a tiny area of the sensor active as the ROI, so the field of view here is much tighter than you think. You are also over-exposing quite a lot: the green peak is saturated.

    Use the full sensor and drop your exposure time until you get a proper exposure, and it will be easier to tell what's going on.

    The green colour cast is to be expected, since your camera's Bayer matrix has twice as many green pixels as red or blue, so don't worry about it.

  17. I am doing some window shopping and saw this: https://eu.primalucelab.com/equatorial/skywatcher-star-adventurer-gti-mount.html . No price available right now.

    Quoted as: Available starting summer 2022.

    Looks like it might have the AZ-GTi-style tripod, based on the picture. Perhaps it's aimed at replacing both the Star Adventurer and the AZ-GTi, as it does what both do, but better? It probably won't replace them, though, as this one has to be pricier than either.

  18. 1 hour ago, vlaiv said:

    If you don't mind doing another go - try using Lanczos interpolation for registration - just to see what sort of difference you'll get in star FWHM.

    I suspect that pixel area relation functions pretty much as linear interpolation and will introduce significant blur and widen FWHM - more so when data is closer to proper sampling (with over sampling you already have blurring on pixel level so additional blurring does not make much of a difference, but if data is sharp - it will be noticed).

    Well, I did that, and now the FWHM is smaller in the split version compared to the RCD+BIN2 version. Curiously, the standard-deviation noise is lower in the RCD+BIN version, but the RMSE (whatever that means) goes the other way. Interestingly, both kinds of noise measurement are much higher than with pixel area relation, so it seems you are right and some kind of blurring (acting as denoising) was taking place when it wasn't wanted?

    RCD debayer + BIN2:

    Full Width Half Maximum:
            FWHMx=3.83"
            FWHMy=3.50"

    RMSE:
            RMSE=5.721e-04
    02:06:39: Background noise value (channel: #0): 2.896 (4.419e-05)
    02:06:39: Background noise value (channel: #1): 3.270 (4.990e-05)
    02:06:39: Background noise value (channel: #2): 2.845 (4.342e-05)

    ------------------------------------

    Bayer split:

    Full Width Half Maximum:
            FWHMx=3.71"
            FWHMy=3.50"

    RMSE:
            RMSE=5.690e-04
    01:57:23: Background noise value (channel: #0): 4.011 (6.120e-05)
    01:57:23: Background noise value (channel: #1): 3.725 (5.683e-05)
    01:57:23: Background noise value (channel: #2): 3.691 (5.633e-05)

    -------------------------------------

    This time AstroImageJ reports both as 2.60 pixels FWHM, which is close enough to the Siril measurements.

    I avoided Lanczos-4 before because it left some cold-pixel artifacts around some stars, but I'll just deal with that with cosmetic correction or something, since a drop of roughly 1" in FWHM is pretty significant.

    • Like 1
  19. 2 hours ago, vlaiv said:

    Worse star sizes in what respect FWHM or visual size of stars after stretching?

    When you split bayer matrix and then stack resulting frames - selection of alignment interpolation method is very important. Using say bilinear interpolation for sub alignment will have less impact on over sampled image, so high quality interpolation needs to be used for split data.

    In any case - compare star size with robust FWHM estimation method. Some FWHM algorithms have issue with actual FWHM measurement because they rely on background noise estimation. I think that AstroImageJ has good FWHM measurement tool - maybe use that one to compare star sizes.

    I measured FWHM using the Siril photometry tool. I have not tested whether I can spot the difference visually, so I'm not sure if it matters, and I'm not actually sure how to do an apples-to-apples comparison in terms of stretching both equally. Both images came from the same 120 calibrated subs, just via different routes to the final RGB image. I am pretty sure I used the default interpolation method for registration: pixel area relation.

    Here is what I found using the Siril photometry tool around a star and running the "noise estimation" tool in Siril (I think it is just standard deviation):

    RCD interpolated, then binned 2x2 in ASTAP:

    Full Width Half Maximum:
            FWHMx=3.87"
            FWHMy=3.68"

    RMSE:
            RMSE=5.577e-04
    17:06:50: Background noise value (channel: #0): 2.416 (3.687e-05)
    17:06:50: Background noise value (channel: #1): 2.764 (4.217e-05)
    17:06:50: Background noise value (channel: #2): 2.363 (3.605e-05)

    --------------------------------------

    Bayer split, then recomposited as RGB:

    Full Width Half Maximum:
            FWHMx=4.89"
            FWHMy=4.65"

    RMSE:
            RMSE=2.526e-04

    19:24:44: Background noise value (channel: #0): 1.351 (2.062e-05)
    19:24:44: Background noise value (channel: #1): 1.555 (2.372e-05)
    19:24:44: Background noise value (channel: #2): 2.441 (3.725e-05)

     

    Running the multi-aperture tool in AstroImageJ around the same star I used in Siril gives me:

    RCD and bin: FWHM 2.64px

    [Attached image: AstroImageJ seeing profile, RCD debayer + ASTAP 2x2 bin]

    Bayer split: FWHM 2.91px

    [Attached image: AstroImageJ seeing profile, Bayer split]

    Again there is a difference in star size, but this time it is less severe. There is also a difference in brightness, apparently. Wonder what went wrong 😬.

    I was going to make a thread investigating different binning and resampling methods for OSC and how they differ, but I realized I don't understand the data from my own tests. I understand some of the numbers, like star size and background noise, but I don't understand why they differ so much, so I just kind of forgot about it.
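    For anyone following along, the "bin the stack 2x2" step is conceptually just averaging each 2x2 block of pixels; a minimal NumPy sketch of the idea (not how ASTAP implements it) is below:

        import numpy as np

        def bin2x2(img):
            """Average each 2x2 block: halves the resolution and roughly halves the noise."""
            h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2    # crop to even dimensions
            img = img[:h, :w].astype(np.float64)
            return (img[0::2, 0::2] + img[0::2, 1::2] + img[1::2, 0::2] + img[1::2, 1::2]) / 4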

  20. 2 minutes ago, powerlord said:

    Thanks. Good to know. Have you any examples that show a direct comparison with RCD to show it's better and if so, how much ?

    stu

    I did run some tests on SNR and star FWHM but got some results I can't explain. The split-stacked image had the best SNR, but worse star sizes compared to stacking with RCD interpolation and then binning the stack 2x2. Not sure why that is; maybe someone can answer it, but it's not me. Hopefully I'm remembering this right, but I'll have to check the data when I get home.

    I'll post what I found in a few hours, if I ever get off work, that is.

    • Like 1
  21. 37 minutes ago, powerlord said:

    ah ok - I use Siril for debayering usually, which does not support super pixel (I assume that is drizzling?) or channel split, but I note they suggest that their RCD is pretty good and the default:

    https://siril.org/2020/09/whats-new-in-siril-0.99.4/

    You can use the "seqsplit_CFA" command to create split frames from the loaded sequence of subs. It only works if the channels are still intact, i.e. debayering has not taken place. From 100 subs you will get 400 output frames with the prefixes CFA_0 to CFA_3. Two of these channels are green, one is blue and one is red; you have to figure out which number is which yourself, but for me CFA_0 and CFA_3 are green, CFA_1 is red and CFA_2 is blue.

    I have a script for Siril that calibrates my data and then outputs the split monochrome frames. It takes a few minutes to run on a batch of a few hundred subs. Subsequent stacking is much faster since the data is now mono and a quarter of the size in megapixels per frame.
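    Conceptually, the split is just pulling every other pixel out of the raw frame; a minimal NumPy sketch is below, assuming a GRBG Bayer pattern and row-major numbering over the 2x2 cell (which matches the green/red/blue order I see, but this is not Siril's actual code):

        import numpy as np

        def split_cfa(raw):
            """Split an undebayered GRBG frame into four half-resolution CFA sub-frames."""
            return {
                "CFA_0": raw[0::2, 0::2],   # G1
                "CFA_1": raw[0::2, 1::2],   # R
                "CFA_2": raw[1::2, 0::2],   # B
                "CFA_3": raw[1::2, 1::2],   # G2
            }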

    • Like 1