Posts posted by ONIKKINEN

  1. Image looks pretty good to me, if a bit green. Try running SCNR green? It worked great when I tried it on just the JPEG. I tried processing your file and didn't get the green cast with Siril photometric colour calibration, so I'm not sure how to advise on how that side of things would go in PI. But SCNR rescues these anyway, so whatever the reason, there is an easy fix.
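    For reference, the common "average neutral" flavour of SCNR is simple enough to sketch. A minimal NumPy version of the idea (not Siril's or PixInsight's exact implementation):

```python
import numpy as np

def scnr_average_neutral(rgb):
    """Average-neutral SCNR: clamp green so it never exceeds the
    mean of red and blue. rgb: float array (H, W, 3) in [0, 1]."""
    out = rgb.copy()
    neutral = 0.5 * (rgb[..., 0] + rgb[..., 2])      # mean of R and B
    out[..., 1] = np.minimum(rgb[..., 1], neutral)   # only ever reduces green
    return out
```

    Because it only ever lowers green, it removes a green cast without touching pixels that were already neutral.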

    I can get plenty of star colour out of the image, even without much of a saturation boost, so I guess one of the processes in your workflow brought the issue in. Try without StarNet or any other layer trickery. Star removal AIs leave slightly fat stars behind, and the resulting star layer can look like you described, chroma noisy, after saturation attempts. If this happens to me I try a less stretched version of the image and see if the artifacts go away.

    Stars in your image do look a bit fat and soft to me anyway, and measuring FWHM gives concerning results of around 5-6'' (depending on the measurement tool). This points to focus, seeing or guiding issues; it could be any one of them or a mix of all three. You could try being stricter with the subframe selector tool and see if some of the data is better?
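    For anyone wanting to sanity-check FWHM numbers like these, the pixel-to-arcsecond conversion is just the plate scale. The setup numbers below are hypothetical; substitute your own:

```python
# plate scale (arcsec/px) = 206.265 * pixel size (um) / focal length (mm)
pixel_um, focal_mm = 3.76, 800.0
plate_scale = 206.265 * pixel_um / focal_mm   # ~0.97 arcsec/px here
fwhm_px = 5.5                                 # FWHM reported in pixels
print(fwhm_px * plate_scale, "arcsec FWHM")   # ~5.3 arcsec with these numbers
```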

    Part of the issue, and one that is easy to fix by restacking with colour channel alignment if that exists in PI (pretty sure it does), is that your colour channels are a bit misaligned. This happens when the target is low in the sky: below 50 degrees it gets very noticeable, and below maybe 40 degrees it starts becoming a big problem, which could also explain the big stars if you shot at those altitudes. It also limits how much saturation you can push into the stars, since the colour separation shows up quite early.

    Example below: reds and blues end up on opposite sides because they refract at different rates through the atmosphere.

    [attached image: example star with red and blue fringes on opposite sides]
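    If a stacker's per-channel alignment option isn't available, the idea can be approximated after the fact. A rough sketch using subpixel phase correlation (scikit-image and SciPy assumed installed; a stand-in for proper per-channel registration, not a replacement):

```python
import numpy as np
from scipy.ndimage import shift
from skimage.registration import phase_cross_correlation

def align_channels(rgb):
    """Register R and B onto G with subpixel phase correlation.
    rgb: float array (H, W, 3)."""
    g = rgb[..., 1]
    out = rgb.copy()
    for c in (0, 2):                               # red, then blue
        offset, _, _ = phase_cross_correlation(g, rgb[..., c],
                                               upsample_factor=20)
        out[..., c] = shift(rgb[..., c], offset)   # move channel onto green
    return out
```

    Note this only corrects a global shift; true atmospheric dispersion can vary slightly across a wide field.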

    • Thanks 1
  2. 2 hours ago, vlaiv said:

    That is not what was intended, nor what happened. I guess the software you are using can't deal with negative values.

    It puts the background at zero. Since there is some noise in the background, some values end up negative, but that is just a math thing: you can put the black point wherever you want, all the data is still there.

    Here it is stretched in Gimp:

    [attached image: the stack stretched in Gimp]

    Results are not much different from ASTAP and Siril, and I guess it's down to the actual data - the gradients are not due to simple light pollution, but maybe some issues with calibration, or perhaps something happened when stacking.

    If you want to investigate the possibility of a stacking issue, try a plain add/average stack and see if you can successfully wipe the background.

    Sigma reject algorithms can cause issues with linear gradients.

    Sum / average of images with linear gradients will produce a linear gradient - even if the gradients have different directions, the result will still be a linear gradient. Sigma reject can start to do funny things if subs are not normalized for gradient as well: in some places the gradient will be "to the left", in others "to the right", and in some subs it will be stronger on one side of the image than the other - and all those pixels will be seen as "deviant" because they don't follow the same distribution as the others (the signal actually is different, and it can't be handled with regular sub normalization).

    I have another algorithm, which I termed "Tip-Tilt sub normalization", that deals with this at the sub level and aligns the gradient to be the same in each sub, thus creating "stack compatible" subs that work well with sigma reject algorithms.
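    A rough illustration of the failure mode and the fix described above; this is not vlaiv's actual Tip-Tilt algorithm, just a crude plane-fit normalization ahead of a sigma-clipped average:

```python
import numpy as np

def fit_plane(img):
    """Least-squares fit of a*x + b*y + c to an image. Crude: fitting
    all pixels (stars included) is fine only for an illustration."""
    h, w = img.shape
    y, x = np.mgrid[0:h, 0:w]
    A = np.column_stack([x.ravel(), y.ravel(), np.ones(h * w)])
    coef, *_ = np.linalg.lstsq(A, img.ravel(), rcond=None)
    return (A @ coef).reshape(h, w)

def normalize_and_stack(subs, k=3.0):
    """Remove each sub's fitted plane so the gradients agree,
    then sigma-clip average the stack."""
    cube = np.stack([s - fit_plane(s) for s in subs])
    med, sig = np.median(cube, axis=0), np.std(cube, axis=0)
    keep = np.abs(cube - med) < k * sig            # reject deviant pixels
    return (cube * keep).sum(0) / np.maximum(keep.sum(0), 1)
```

    Without the plane removal, subs whose gradients tilt in different directions feed systematically different values into the clipping step, which is exactly the "deviant pixel" problem described above.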

    Well, how about that: now it looks more or less the same as the other tools, perhaps even a bit better? Siril reports that some pixel values are down to -44 in the previous stack, which would indeed seem to be the cause of the weird-looking blacks. This new version you posted is as I'd expect. I definitely would not have figured that one out myself.

    I chose this stack for this experiment because I know it to be flawed in many ways regarding the gradient and flats: there were light leaks, shots from 2 different focusers, different secondary spider vanes, pre- and post-flocking, 2 different methods of shooting flats, etc. So yes, flats have not worked perfectly for all of the subs, which is why the background is still uneven after the extraction attempts. How I deal with a fragile image and a strong gradient is to remove the gradient from the subs pre-stacking and then recompute normalization. It takes forever with thousands of subs, but it works great in the end, and there are no surprises with sigma clipping and normalization once most of the gradients are removed (the parts that can be; this doesn't apply where flats have failed, of course).

    But from what I can see now, your algorithm works great even for data that has some flaws.

  3. 4 hours ago, vlaiv said:

    It would be nice if you could do a small side-by-side comparison with the other methods you used - just a simple stretch after removing the background.

    The comparison is difficult in this case since your algorithm crushed the blacks to 0. Was that intended?

    [screenshot: result from your algorithm]

    SiriL background extraction:

    [screenshot: Siril background extraction result]

    ASTAP linear gradient tool:

    [screenshot: ASTAP linear gradient tool result]

    And what it looked like before any of this:

    [screenshot: before background removal]

    None of the methods resulted in what I would prefer to get out of background removal. Siril and ASTAP don't remove the background gradient completely, and your tool turns out very different from the rest, so I'm not sure how to compare the results. I would use GradientXTerminator later on in processing to deal with the corners the gradient removal tools have left too dark, and it would turn out OK.

    4 hours ago, vlaiv said:

    Not sure what you are asking, but here is how I classify things:

    1. Regular sub calibration (darks, flats, that lot)

    2. Stacking

    3. Color calibration - this step is meant to transform (L)RGB data (irrespective of whether it comes from mono or OSC) into the XYZ color space (which can be thought of as "standardized" sensor raw, but it is more than that, as it is aligned with our visual system - Y corresponds to our perception of luminance).

    4. Gradient removal. This step is actually "optional". In daytime photography, the closest thing to it is red-eye removal. It is an alteration of the data to account for a well-understood phenomenon in order to more faithfully render a target that has been tampered with by environmental factors (flash and retina reflection for red eye, light pollution for astrophotography).

    Step 4 does not really depend on step 3 - you can remove gradients both pre and post color calibration. This is because light adds linearly and a color space transform is also a linear operation. The result of 3 then 4 will be the same as 4 then 3.

    5. Stretch

    .... rest of it.
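    The order-independence claim in step 4 is just linearity of the operations; a quick numerical check (illustrative values only):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.random((3, 3))       # any linear colour transform, e.g. camera RGB -> XYZ
pix = rng.random((100, 3))   # linear pixel values
grad = rng.random((100, 3))  # additive light-pollution gradient at each pixel

a = (pix - grad) @ M.T       # remove gradient, then colour calibrate
b = pix @ M.T - grad @ M.T   # colour calibrate, then remove (transformed) gradient
print(np.allclose(a, b))     # True: M(v - g) = Mv - Mg
```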

     

    The way I understood the in-camera colour calibration thing is that it is done just once and the raw subs themselves would have a corrected colour balance. I guess it's the same as just leaving this step for the stacked image though? Not sure I understood the point you had, so not sure I knew how to ask the right questions 😃.

    But if it did go this way, and the image had the corrected colour balance when stacked (or it was applied to the stacked image), I would still have to do background neutralization and gradient removal to deal with the added brown from LP, and if this step is not perfect, the previously set in-camera colour balance would not result in good colours. That's what I thought: if one has to do colour calibration in one way or another anyway, why bother with the in-camera colour transformation? Maybe I just don't get it.

    Not sure any of what I typed above makes sense; I have a clear thought in my head that I want to type out, but I don't think I got it out 😅. I'll stop here so I don't get more confused.

  4. 1 hour ago, vlaiv said:

    @ONIKKINEN Here is the result of "automatic background removal" on an image that has already featured in this thread - M101 by @Pitch Black Skies

    [attached image: background removal result, foreground/background mask, and removed gradient]

    Left is the image after removal, second is what the algorithm identified as foreground and background (white and black), and third is the removed gradient.

    I guess extended nebulae filling the FOV would probably be an issue for the algorithm - but I don't have any test data to run it against (I did run it against data provided by the IKI observatory and it did fine despite not having a very well defined background in some cases).

     

    Is this different from the tools already available, like Siril background extraction, Photoshop GradientXTerminator, PixInsight this and that? This particular dataset was in my opinion very good and needed the bare minimum of work to get a decent result.

    So how about worse data - how does this algorithm handle that?

    Try with this one if you want to: Camelopardalis-IFN.fit

    That's a split green channel stack from a target I have in the works (for who knows how long). This particular stack has terrible SNR, terrible gradients, and generally just isn't good data. The IFN structures around NGC 2633 (the weird-looking barred spiral galaxy to the left) are very weak at this integration, and anything other than a perfect background removal process kills them completely.

    But back to the light pollution calibration thing. If one has the hypothetical in-camera colour calibration going on at capture/calibration and one still has to do a background calibration for the stack, then isn't this just normal colour calibration with extra steps?

  5. 16 hours ago, Graham Darke said:

    Here's my effort in StarTools. I always use Film Dev for my post-Wipe stretch.

    My process here was 1. AutoDev 2. Bin 3. Crop  4. Wipe 5. Film Dev to 95.34% with Skyglow set to 5%  6. HDR  7. Sharp   8. Decon   9. Colour - clicked on outer core to reduce green and reduced percentage to 143% on "constancy" colour setting. 10. Noise Reduction set to 2 pixels 

    [attached image: M101 19 hr 10 mn calibrated2.jpg]

    This is very nice; it doesn't look "startoolsy" and artificial at all to me. My own attempts with StarTools had convinced me that there is no way to get a good-looking result (judging from pictures posted, most have this issue), but this shows that there is, one just has to learn to use the software.

    • Like 1
  6. 8 minutes ago, vlaiv said:

    Best approach would be to do "in house" color calibration (needs to be done only once) and then use the color calibration tool in Siril and others to make fine adjustments to color (the atmosphere tends to shift the color of an object because blue light is scattered more than red - the sun, for example, appears red/orange near the horizon because of this effect, but that is not the true color of the sun).

    How would one subtract light pollution then? Most images are varying degrees of brown even if the camera white balance was sound during capture. I don't think this falls into the fine-adjustments category of colour processing.

    The degree of light pollution often varies within the night too, so a simple calibration image of sky colour to subtract from the images wouldn't work, unless a calibration image was taken uniquely for each sub.

    But I see what you mean about having limited data to produce all of the colours. The noisier my image is, the higher the likelihood that the result comes out as (what I think is) a bad colour balance. With maybe 6h+ images I don't see weird results at all, however; by that point the tool has several hundred stars to calibrate on, and I guess there are enough samples to get it right.

  7. 5 minutes ago, tomato said:

    I would love a processing package that would produce the best result from the input data by following a set of rules based on the science. However, the first and biggest hurdle is determining what the 'best' result is. The huge variation of results presented in image processing competitions is testament to how hard getting consensus on this would be. For example, I would agree that most galaxy images display too much colour saturation, but that's what most imagers (including myself) do, and I can't see that changing.

    Colour calibration tools based on photometry and a visual spectrum capture (no useless LP filters) are pretty close to processing with facts, are they not?

    Siril photometric colour calibration always produces good colours, as long as you have enough integration and suitable stars are found.

  8. Just now, vlaiv said:

    I'm not sure I agree with this part.

    How can a person, observing an image of an object for the first time, decide if something is real or not?

    Imagine someone looking at an image of a platypus for the first time, never having heard of the animal before.

    [attached image: a platypus]

    A hairy thing with a duck-like beak and interdigital webbing, come on, really? That's a thing and not photoshopped?

    They cannot know what is real and what is not, and I agree it's a big problem, especially with astrophotography presented to the masses who don't know what something should look like. The most common offender is a purple and blue M31. Just google it and you'll see the issue: it is presented in a million different ways.

    I try to keep realism in my images as far as it can be kept, and I wouldn't dream of paintbrushing something away or making something blue that isn't, but I also realize that some people will do that, and it's none of my business to tell them they can't, even if a part of me maybe wants to. I might say that I'm not a fan of it though, or just move on.

    • Like 3
  9. Astrophotographers are somewhere on a spectrum between astronomy as a science, trying to uncover facts from limited data, and photography as an art, trying to create the most pleasing image with whatever means are available.

    It IS up to the person doing the imaging and processing to decide what is right and what is not, but also up to the viewer to decide if a process went too far into fantasy from their perspective.

    • Like 1
  10. Damaging? Probably not, but your gear may not work properly.

    I am amazed you have not run into mount tracking issues with this setup. 10.4 V is definitely not enough to run 12 V motors at their intended power. 12 V appliances run best at 13-14 volts.
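    As a very rough feel for the numbers (a resistive-load approximation; mount electronics are smarter than this, so treat it as an order-of-magnitude check only):

```python
# P ~ V^2 / R for a fixed resistive load
v_regulated, v_sagged = 13.6, 10.4
print((v_sagged / v_regulated) ** 2)   # ~0.58 -> roughly 40% less power available
```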

    "Dumb" batteries like the skywatcher one are terrible for anything but very low power consumption gear just because of what you experienced, they are imcapable of producing steady power under load (especially if its cold).

    Smarter, regulated power supplies keep the output voltage high and adapt to the power draw. I have this one: https://eu.ecoflow.com/products/river-portable-power-station?variant=37254607863972

    (€€€, I know...)

    It supplies 13.6 V until the last drop of power has been drained.

    I would look at a regulated power supply, or wall power if that's possible. There is no way you won't run into trouble with your current battery!

  11. 1 hour ago, JonCarleton said:

    I like Siril for stacking, but I don't find it particularly "strict" when it comes to excluding marginal images.  I tend to use the grading in ASTAP first and exclude bad images, then stack in Siril.

    Still, gotta love M101!  

    You can use the plot tab to pick your subs!

    After registering your calibrated frames, the plot tab will have a graph of all your subs and their measured FWHM and roundness. Tick off the ones that are clear outliers and you will have a much sharper stack afterwards.

    You can also draw a selection around a star and run the PSF for sequence tool, which creates a plot of many more things, like SNR, background level, relative magnitude and more. With this you can deselect subs that had low SNR or high background levels (maybe a passing cloud or a neighbour's lights, etc).

    Automatic quality estimates are in my opinion not that good, and the extra effort of manually weeding out the bad subs is worth it.
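    The manual weeding can be semi-automated once the per-sub measurements are exported. A sketch with made-up thresholds and a hypothetical CSV (FWHM and roundness per sub, e.g. copied out of the plot tab):

```python
import numpy as np

stats = np.loadtxt("subs_fwhm_roundness.csv", delimiter=",")  # hypothetical file
fwhm, roundness = stats[:, 0], stats[:, 1]

# Flag clear outliers: FWHM above median + 2*MAD, or roundness below 0.8
mad = 1.4826 * np.median(np.abs(fwhm - np.median(fwhm)))
keep = (fwhm < np.median(fwhm) + 2 * mad) & (roundness > 0.8)
print(f"keeping {keep.sum()} of {len(keep)} subs")
```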

    • Like 1
  12. 50 minutes ago, alacant said:

    Nice.

    The main issue I have with Siril -and I admit to not having looked much into why- is the detail. Almost certainly my lack of patience though!

    [attached images: two comparison screenshots]

    This is one of StarTools' best parts: a one-stop shop for processing.

    Siril is just a linear processing tool that gets your data stretched and colour calibrated before you move on to the details. You could apply deconvolution and wavelet sharpening to the first picture, but it won't come out of Siril looking like the second no matter what.

  13. 5 minutes ago, Pitch Black Skies said:

    Ah I see, I was trying to move the midpoint before I had even pressed the auto-stretch button :blink:

    I see why the 32bit is so important now.

    My thinking about the gain is that if I increased it to 200 I would have lower read noise and shorter exposure times, possibly leading to tighter stars?

    You could actually keep the gain as is, lower your exposure to 1 or 2 minutes, and still swamp read noise by maybe 5x. If your mount has issues, it's definitely worth doing just that.
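    The usual rule of thumb behind that: pick a sub length where sky shot noise comfortably swamps read noise. All numbers below are hypothetical; substitute your own camera and sky:

```python
read_noise_e = 1.5      # e- RMS at the chosen gain
sky_rate_e = 0.2        # sky electrons per pixel per second
swamp = 5               # want sky signal >= 5 x read noise squared
t_min = swamp * read_noise_e**2 / sky_rate_e
print(t_min, "s minimum sub length")   # ~56 s here; darker skies push this up
```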

    If you're chasing tighter stars, then stacking with Siril and rejecting the softest subs would be effective too. DSS produces softer stars than Siril, I believe because it uses bilinear interpolation for frame alignment (which adds blur). You have 24 hours, of which you could reject half and still have a deep image left. The list of things to do in search of sharpness is endless, so I'm just throwing out ideas now.

     

    • Thanks 1
  14. The histogram does this weird dance that looks like a seismograph when you move the sliders; it's normal. If you see posterization that remains after moving the slider, you have stretched the image too far, or you are processing a 16-bit file (always save as 32-bit). I wouldn't bother with upping the gain; as you can see, there are diminishing returns from doing so. With your 5 min subs I don't see how there could be any noticeable benefit, but star cores would start to saturate earlier, so you would have fatter stars (the brightest ones).

    [screenshot: Siril histogram transformation, autostretch button]

    This button applies the same autostretch as the lower toolbar option that only visualizes the data for you. But like the autostretch preview it is very aggressive, so I recommend moving the midpoint slider back a bit towards the middle after pressing the button. You will probably need to zoom in on the histogram to see where the slider even is after the autostretch, by the way.
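    For the curious, autostretch-style tools are typically built on the midtones transfer function. A sketch of the function itself (Siril picks the midpoint from image statistics; the values here are only to show the shape):

```python
import numpy as np

def mtf(x, m):
    """Midtones transfer function: maps 0 -> 0, 1 -> 1 and the
    midpoint m -> 0.5. x is linear data scaled to [0, 1]."""
    return (m - 1.0) * x / ((2.0 * m - 1.0) * x - m)

x = np.linspace(0, 1, 5)
print(mtf(x, 0.05))   # aggressive: shadows lifted hard
print(mtf(x, 0.25))   # gentler, like walking the midpoint slider back
```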

    • Like 1
    • Thanks 1
  15. This is normal, and actually expected with skies as dark as yours and that camera.

    Your camera has a huge dynamic range, which makes saturating the signal very difficult. The result is that all of the faint signal usually sits in the dark parts of the shot. I have had images that contain on average just 25 photons per pixel per sub, and there was no issue.

    But like you said, stretching is no issue, so there is no problem here with the histogram.

    By the way, the Siril autostretch function is quite aggressive, especially for low-noise images like yours, so when actually applying the stretch to the histogram I recommend walking the midpoint slider back a bit from what the autostretch wants.

    • Like 2
    • Thanks 1
  16. 2 hours ago, Pitch Black Skies said:

    That's really nice, much more pleasing on the eye.

    'My tries with StarTools always end up looking like they came out of StarTools.'

    I feel the same but I don't understand why the results look artificial. Maybe it's the colour and mushy look like you mentioned.

    'One issue I found is that the image is 16-bit, which is problematic for the faintest detail.'

    I can restack from the start and save as 32 bit if you'd like to have another go? Maybe you could show the histogram of it like above, it would be interesting to see.

    'Why not try processing in SiriL though?'

    I'll try it. StarTools seemed the most user-friendly for beginners so I went with it. I've never used Gimp or Photoshop either; I wouldn't really know where to begin with them.

    'Data is great by the way. Really nice capture.'

    Thanks mate 👍

    StarTools is definitely the easiest software out there, I think. The module structure forces you into at least a workable workflow and guarantees that there is an image at the end. Other dedicated astrophotography processing software might as well be voodoo for a beginner; none of it makes any sense at all when starting out.

    Below is an image of what the histogram should look like after stretching, from one of my own projects:

    [screenshot: histogram of a 32-bit image after stretching]

    A nice curve with no defined "end" to the data; it just slowly diffuses towards the black and white ends.

    The effect is probably small, or non-existent if the person doing the processing isn't able to use all of the image, but you have worked hard on a long integration, so taking the small steps is only a good thing. I think I might have stretched the image just a bit further to bring out the wispy stuff more, and the core might come out a bit better. There are no negatives to using a 32-bit file for the linear part of the processing, other than a somewhat larger file size and maybe more load on your PC. After stretching, the added precision is not needed and you can save the image as 16-bit, if your further processing software doesn't like 32-bit (like Photoshop).

    • Thanks 1
  17. Quick process of the unstretched file in Siril and Photoshop:

    [attached image: M101 19hr10mn calibrated, bin 2x2, Siril]

    Just did photometric colour calibration, stretch, deconvolution, saturation, the usual stuff. In Photoshop, some layer-masked saturation/noise control/sharpness things. I didn't go too far on this one, but I think with sharpening and maybe an HDR-type process with layer masks there would be more in the image.

    I can't advise on how to get results out of StarTools, because so far I have not had success with it. My tries with StarTools always end up looking like they came out of StarTools, and I don't think that's a great thing. My real issue is that the colour module just doesn't seem to produce real colour results no matter what I do, and all the other modules slowly walk the image towards what you got out of it: a posterized and mushy-looking image. Some might like the look of getting every last bit out of an image, but I think it detracts from it. I see that some people get much better-looking results than I can, so much of this is user error on my end, but I find other software much easier to use for processing.

    One issue I found is that the image is 16-bit, which is problematic for the faintest detail. I believe that is possibly part of the issue with the posterized-looking edges in StarTools. Below is an image of the histogram in Siril:

    [screenshot: Siril histogram showing gaps]

    There are gaps in the histogram, meaning that the image has been stretched so far that it no longer fills all the values of a 16-bit image. This can lead to abrupt-looking edges in faint detail instead of an even fade into the background like there should be. It's not super obvious in this example, and this is something many people don't bother with, but there is a difference between 32-bit and 16-bit processing, and I think that's reason enough.
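    A quick demonstration of where the gaps come from: stretching data that was already quantized to 16 bits spreads a few input levels across many output codes.

```python
import numpy as np

x16 = np.arange(1000, dtype=np.uint16)      # faint end of a 16-bit image
stretched = (x16 / 65535.0) ** 0.25         # strong stretch
as16 = (stretched * 65535).astype(np.uint16)
print(len(np.unique(as16)), "levels spread over",
      int(as16.max()) - int(as16.min()), "output codes")  # ~1000 over ~23000 -> gaps
```

    The same stretch applied to 32-bit float data fills the in-between values, which is why doing the linear work in 32 bits avoids the comb-like histogram.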

    Why not try processing in Siril though? Photometric colour calibration and a histogram transformation, and you already have a real-colour stretched image that you could then fiddle with in Photoshop or Gimp until the end of time, until you get what you like.

    Data is great by the way. Really nice capture.

    • Like 5
    • Thanks 1
  18. 13 hours ago, Padraic M said:

    Hi @BrendanC, I know I'm coming late to this discussion and you've made great progress with the issues already, but just to add some thoughts on light leaking through the PDS OTAs. I recently took on a 150PDS and had issues with gradients and LP. I tested the OTA for leaks (similar to what Vlaiv suggested earlier) by putting the scope cover in place, looping 1-second or 2-second captures, and shining a bright torch all over the OTA. I found issues with the primary mirror end, obviously, but I also found significant issues around the focuser draw-tube. I know that you've covered the primary end with a shower cap, but you will also need to cover the focuser draw-tube somehow. I haven't quite found the best way to do this. I use tin foil when I'm taking flats, but when taking lights there will need to be some flexibility there to allow for autofocus movement of the draw-tube while still blocking the light.

    Bear in mind that there should be very little light leak at night when taking lights, but a nearby street light, or potentially an LED on some part of your rig, could cause problems over a multi-minute exposure.

    However, when your LED panel is lit up to take flats, you will definitely get lots of light leaking into the focuser, unless you have fully masked the LED panel around the scope aperture. This means that the lights could be fine but your flats are contaminated.

    Finally, any environmental lighting shining obliquely across the OTA opening may illuminate the inside of the tube slightly and cause problems. I now fit a sizeable dew shield to the top of the scope to protect the opening (actually the dew shield for my C8 SCT fits perfectly).

    Environmental light leaks are a pain because they move relative to the OTA/sensor as you track and flip, while sky light pollution and vignetting/reflections in the optical path stay in the same orientation as the image. Consider tracking a DSO through a full 90 degrees: the environmental light leak would illuminate your subs from two or even three sides over the duration of the session. No light pollution/gradient removal tool will be able to address this. You may also get LP gradients at different angles in your R, G and B channels when shooting mono, as the light ingress angle changes relative to the OTA as the night progresses.

    hth

    @BrendanC Most of the issues above are fixed by flocking the inside of the tube, but of course if there is significant local lighting you need to block it somehow. The focuser drawtube light leak is quite small, and I doubt it ruins your lights or flats unless you shine a light directly at the focuser, but it will definitely ruin your darks. The stock paint inside the tube is hardly black and will reflect some light here and there, which may ruin flats; especially the area just behind the secondary mirror will reflect unwanted light back into your camera. Uncovered shiny screw heads and such should also be blackened.

    You can buy flocking material from FLO to line the inside of the tube with. I think it was something like 5 pounds per roll (you will probably need 2), so not one of the more expensive investments in the hobby. I used to have problems with flats, but after many modifications, flocking the tube being one of them, I really have no issues at all with flats now.

    • Like 1
  19. This applies to imaging too, I have found. It's always worth travelling to darker skies, even if the extra travel time means you get half the imaging time in the end. I have started using a Bortle 4 / SQM 21.3 location instead of my usual SQM 19.5 spot, and have found that an hour under the better skies can be worth about the same as an entire night in the worse spot. It's difficult to tell myself to do the extra travel (1h extra both ways), but the results are obvious, so at least on moonless nights I will always travel further from now on. Getting just a couple of hours of decent subs from the darker location is more than the worse location could give me in an entire night, so I waste less clear sky too.

    Can't change the weather, but can choose to use the weather more efficiently!

    • Like 1
  20. Stellarium - the downloadable Windows version, that is - is what I mostly use to check framing on targets before planning further. You can put your scope and camera details in and it will show you how the target fits your particular setup. Stellarium's images are not very deep though, and outside the usual imaging suspects, the bright Messiers and such, the rest are quite lacklustre; to check a deeper image of what I'm planning to shoot I use the Legacy Survey viewer: https://www.legacysurvey.org/viewer

    It has many wavelengths and different catalogues, and some are exposed very deep. You can easily see if your target has IFN around it, for example.
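    The framing arithmetic these tools do is simple enough to check by hand; example numbers are hypothetical:

```python
import math

focal_mm = 750.0                        # hypothetical Newtonian
sensor_w_mm, sensor_h_mm = 23.5, 15.7   # APS-C sensor
fov = lambda s: math.degrees(2 * math.atan(s / (2 * focal_mm))) * 60  # arcmin
print(f"{fov(sensor_w_mm):.0f} x {fov(sensor_h_mm):.0f} arcmin")      # ~108 x 72
```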

    NINA's sky atlas and framing tool is what I use to create the sequence itself, but I have used it for discovery too. You can input parameters to the sky atlas browser to constrain what you get as a search result. I put in a declination of at least 60 and a galaxy or galaxy cluster at least 5 arcminutes in size. It turns out there were a number of galaxies I had not heard of at all that I could image - NGC 4236 as an example: a beautiful but very dim spiral galaxy, for some reason not often imaged, as images of it are hard to come by.

  21. 39 minutes ago, michael8554 said:

    Before moving to the target I focus my DSLR on a nearby bright star, using magnified LiveView, a Bahtinov Mask, and Bahtinov_Grabber software.

    Shouldn't you fix that first?

    Michael

    Ideally yes, of course. Bank account not in agreement on that though 😆.

    Over the summer break, which starts in 2 weeks, I'll do a number of upgrades, and a new corrector is at the top of the list for sure.

  22. 2 minutes ago, scotty38 said:

    I think an autofocuser is a definite quality of life thing even if you ignore all its other benefits 🙂 

    Just off to have a look but can you do anything manually with the Hocus Focus plugin??????

    Just looking at the values; I'm not using the plugin.

    Check the value reported over a few subs to get a rough estimate, then move the focuser slightly one way and check if it improves or not. Once it stops improving, it's set. With good seeing and a normal target, this takes no more than a minute.

  23. I'm having difficulty reaching ideal focus with this target. I have some ideas on how to improve, but tips are welcome!

    Now, why is it difficult: I focus manually by looking at the HFR values reported by NINA, and normally it's very straightforward. Exposures of 2 s or 4 s, depending on seeing, keep the measurement more or less independent of high-frequency seeing wobbles while staying short enough that it doesn't take forever. But the Coma Cluster contains hundreds of galaxies, some of which NINA picks up as stars; they obviously are not point sources, so they skew the HFR calculation drastically. They are also so faint that the measurement varies a lot between subs, so HFR-based focusing is difficult. I also think that as focus gets better, more and more of these faint galaxies get picked up as stars, so the HFR actually increases as focus improves. Judging by the reported HFR values my frames are junk (an HFR of more than 4 pixels would be go-back-home bad on other targets), so I really can't use this (I should note that the frames can look OK or even good despite the poor HFR, so it's just the measurement that is suspect).

    Having a Newtonian, I have handy diffraction spikes that can act as focus indicators quite easily, but here I run into another issue: my coma corrector produces uneven star images across my APS-C sized camera, so as a middle ground I need to use a star that is not in the centre, but not at the edges either (not that I would want to use the edge ones, since they are not good).

    Solutions I have:

    Spend longer on focusing, with longer focus exposures, to get better star images to visually gauge diffraction patterns from brighter stars. I used 15 s subs and was able to determine decent focus in a few minutes. But since my scope takes at least an hour to cool down, I have to do this a couple of times a night. These are painful minutes, as the night is not even 5 hours long at this time of year, but I must have good focus or the subs are a waste.

    Focus somewhere else and then slew back to the target. The problem is that even though my Diamond Steeltrack is a very good focuser, I find it can slip after slewing when the weather is cold, and the critical focus zone for good diffraction spikes and star images across the screen is ridiculously small with my Maxfield 0.95x coma reducer. Once the whole scope has equalized to the ambient temperature, this problem goes away by tensioning the focus drawtube with the tensioning screw; before that, it is an issue.

    Would an autofocuser fix this issue? Can the algorithm figure out ideal focus when the target is inherently throwing a spanner in the works? So far I have not considered autofocus because I am always scope-side when imaging and the setup is never left unattended.
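    On the HFR-pollution problem specifically: one way around a target that skews the statistics is to filter the detected sources before aggregating. A sketch with made-up thresholds (NINA's Hocus Focus plugin reportedly does something along these lines with its star-detection filters, though that is an assumption, not a description of its code):

```python
import numpy as np

def robust_focus_metric(hfr, roundness, sharpness):
    """Median HFR over star-like sources only, so faint fuzzy
    galaxies stop inflating the focus metric. All three inputs are
    per-source arrays from whatever detection step is in use;
    the roundness/sharpness conventions here are hypothetical."""
    starlike = (roundness > 0.8) & (sharpness > 0.5)
    return np.median(hfr[starlike]) if starlike.any() else np.nan
```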

     

  24. Try Siril, with Lanczos-4 interpolation as the registration method. It should produce noticeably better stars than DSS, which I believe uses bilinear interpolation for its registration.
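    For reference, the difference between the two methods is the resampling kernel: Lanczos-4 weights each output pixel with a windowed sinc over an 8x8 neighbourhood, where bilinear uses a 2x2 triangle. The kernel itself (standard definition, not DSS's or Siril's code):

```python
import numpy as np

def lanczos(x, a=4):
    """Lanczos-a kernel: sinc(x) windowed by sinc(x/a), zero outside |x| < a.
    Far sharper than the triangular kernel behind bilinear interpolation."""
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) < a, np.sinc(x) * np.sinc(x / a), 0.0)
```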

    As a Newtonian imager myself, I will also say that your collimation might be different on the other side of the meridian, especially if you have the stock focuser, stock tube, or really stock anything among the mechanical parts, which are not really up to the task of imaging on stock scopes. Guiding is also worse after a flip for me. Not sure how to fix that issue though; maybe recalibrate guiding after the flip? Still experimenting, so I won't comment more on that, as I'm just guessing myself.
