
Adam1234

Posts posted by Adam1234

  1. I usually dither every frame for narrowband, every 3 frames for R, G and B filters, and every 5 frames for luminance.

    The reason I dither less often for RGB and L is the larger number of frames I take compared to NB (owing to the shorter exposures), and wanting to save some imaging time.

  2. Nice effort considering 5s subs and untracked. Of the two lenses, I'd go with the Samyang 135mm so you get a narrower field of view. I haven't got one myself, but I've seen people get incredible images with this lens. There is a huge thread on imaging with it.

     

    I'd also recommend tracking so that you can get longer exposures and increase your signal-to-noise ratio. A simple star tracker such as the Sky-Watcher Star Adventurer or iOptron SkyGuider Pro should be more than sufficient for imaging with a camera and lens.

     

    • Like 1
  3. 46 minutes ago, Ratlet said:

    The autofocus pixel thing is horizontal lines. Not sure what that red banding is.

    Is it possible it's some effect of light pollution? It probably isn't, but I do have at least 3 streetlights facing into my garden.

    I'll try and investigate the raw images tomorrow and see if it's present in those.

  4. 11 hours ago, Uranium235 said:

    It's one reason why I've avoided the new generation of sensors that have dual pixels for focusing (which this camera has), as they are the cause of this (seemingly unfixable) banding.

    I'm a bit unsure what you mean by dual pixels, how does this work?

  5. 14 minutes ago, Uranium235 said:

    Just a question, have you tried removing that vertical banding in the image? (Does it calibrate out?) I had a bash at it with Noel's actions, but it won't shift.

    It's one reason why I've avoided the new generation of sensors that have dual pixels for focusing (which this camera has), as they are the cause of this (seemingly unfixable) banding.

    No I've not tried to remove it - I'm not entirely sure where it's come from, I need to investigate that. 

    I calibrated with bias and flats (the flats did an amazing job of getting rid of the heavy vignetting that was present without them).

  6. 20 minutes ago, symmetal said:

    For a full frame sensor I'd say those stars are very good indeed. Being a posted jpg it's hard to say, but the slight corner spikiness is most likely very slight coma. It's certainly better than my RedCat, which has noticeable coma in one corner on an APS-C sensor.

    As you mentioned, there is no required back focus distance with the RedCat; the quoted figure just makes the distance scale on the focus ring read correctly, but as long as you are able to go through focus you're fine. 😊

    Alan

    That's reassuring, thanks

  7. My first test run on the RedCat 51 (with Canon R6, Optolong L-Pro and EQ6-R Pro). Certainly nothing to write home about by a long shot. Only about half an hour of 60s images. I did have about 2 hours, but culled half an hour's worth because of tracking issues (no guiding), and another hour because of the local power lines through the image.

    At 250mm, I don't think I'll be using this in my garden: way too much light pollution (Bortle 8-ish plus street lights, as my garden faces out onto the road), and the short focal length means I can't escape the power lines like I can with my deep sky setup. With my usual deep sky gear, the pixel rejection during stacking sorts out the shadows from the power lines, but in this case they were in most of the images, so it simply didn't work well.

    This is one scope I shall be reserving as a travel scope for darker skies. That was always the plan anyway, but I had to test it out at home before going out in the field. Luckily I live close to the New Forest National Park anyway.

    With a 6.58µm pixel size, the Canon R6 probably isn't the ideal camera either, as I'm undersampling by quite a bit at about 5.43"/pixel according to astronomy.tools, so I drizzled 2x, which did seem to help with the blocky stars. I do plan on getting a dedicated astro cam with a smaller pixel size sometime in the future.
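As a quick sanity check on that sampling figure, here is a minimal sketch of the standard pixel-scale relation, using the numbers from this post (the 206.265 constant converts the small-angle ratio to arcseconds):

```python
def pixel_scale(pixel_size_um: float, focal_length_mm: float) -> float:
    """Image scale in arcsec/pixel: 206.265 * pixel size (um) / focal length (mm)."""
    return 206.265 * pixel_size_um / focal_length_mm

# Canon R6 (6.58 um pixels) on the 250 mm RedCat 51:
print(f'{pixel_scale(6.58, 250):.2f} arcsec/pixel')  # ~5.43, matching the astronomy.tools figure
```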

    I do think I need to nail the focus, as stars were more or less OK in the centre but looked a bit pointy around the corners. The direction of the stars, I think, indicates the back focus wasn't quite right, but I've read that due to the design of the optics, as long as you are in focus you have the correct back focus, which suggests my focus is slightly off. I find it reasonably difficult to get exact focus, with very small movements of the focuser making a huge difference.

    Anyway, here is said test image of the Sadr region, with a quick process in PI. At least I can make out a bit of nebulosity, and can even see the Crescent Nebula in the lower right.

     

    image.thumb.jpeg.4030c5728064af34e642afe60444b664.jpeg

     

    Adam

    • Like 4
  8. 2 hours ago, davies07 said:

    I think what you've done looks good to me.

    Your final test should be to check for coma on a centrally placed star. Zoom in on a central star at focus and then slightly defocus it, moving the focuser out. Turn off any auto-stretching and defocus only enough to reveal a tiny doughnut. Study the distribution of the light around the central hole. It should be symmetrical, but poor seeing makes it tricky to assess. If the annulus of light shows a soft edge on one side and a hard edge on the opposite side, you have some residual coma. Also, look for the Young spot (a tiny dot of light) in the centre of the dark area. It should be in the centre. Make any corrections to the primary only.

    You could use Metaguide, for example, for checking the collimation on a central star. I've found it to work well.

    Take a test image of a star field and use PixInsight to measure the FWHM and eccentricity of the stars (Script > Image Analysis > FWHMEccentricity). Click the support button to see the graphs. They should show the smallest and most circular stars symmetrical about the centre of the field.

    Thank you! I will try a star test when I get the chance. I tried a star test the other week after using the Cheshire, and made a few minor adjustments, but then it didn't agree with the Cheshire. By the sound of it, I defocused too much!

    Out of interest, why make corrections to the primary only when doing the star test? I've seen other people mention this, but others have said secondary only, and others have said both?

  9. I've been revisiting collimation of my Stella Lyra RC8 now that Jupiter is on its way back, and I think (hope) I have got it more or less there.

    I was really struggling at first, spending days doing research: reading all the websites, watching YouTube videos. I refuse to buy a Howie Glatter as they seem to cost nearly as much as the scope, and I'm also convinced that a lot of the instructions out there are either completely or partially wrong (or at least rely on various assumptions being correct for the method to work), or incomplete.

    For example, one video on YouTube instructed you to align the secondary using a Cheshire eyepiece - fine. It then had you align the primary using a standard laser collimator, adjusting the primary until the returning beam hit the centre of the target on the laser. Correct me if I'm wrong, but this method makes a massive assumption that the focuser is completely square... which was not mentioned at all in the video. As a result I took my primary WAY out of whack, to the point where one of the screws was completely loose.

    Anyway, I decided to settle on the good old Cheshire eyepiece method, i.e. 1st - align the secondary by getting the dot made by the Cheshire eyepiece hole into the donut on the secondary mirror; 2nd - align the primary by adjusting the mirror to get the thin strip of light (optical axis) even around the edge (some also say you should see concentric circles within the shadow of the secondary).

    Coll-768x719.jpg.e3f4ce3891bd24de1cf40ccc63077c6f.jpg

    Capture.PNG.908d6d409db8659814cce18dd5628a49.PNG

     

    Sounded easy enough, but I couldn't see the centre dot properly because the crosshair of the Cheshire was in the way, and I couldn't see the thin line of white around the edge or the concentric circles.

    I ended up removing the crosshair from the Cheshire and found it much easier to see the centre dot/donut. I also discovered that if I shone a torch directly onto the 45° reflective surface of the Cheshire I could indeed see the concentric circles, so here is what I did:

    1) Lined my camera up as best as possible to the Cheshire with 10x zoom, and used the adjustable circles in the collimation aid in Astrophotography Tool while adjusting the secondary to get the centre dot right in the middle of the donut

    2) Shone the torch onto the Cheshire to light up the concentric rings, and again used the collimation aid circles while adjusting the primary to get all rings concentric

    3) Repeated 1 & 2 until I was happy.

     

    Here are my final results of steps 1 and 2 after the final check (with and without the collimation aid active, for ease of viewing).

    How does my collimation look? Was this a good approach to take, and have I interpreted the views correctly?

     

    Centre dot/donut 

    1.thumb.png.68dd6186cd572af5a0d17865eb6de572.png

    1653516315_1nocrosshair.thumb.png.58fe3336947b5731f62e0f75dd8ba932.png

     

    Concentric circles

    2.thumb.png.1ebe87bfb5b0b0499c20020776424656.png

    681328770_2nocrosshair.thumb.png.372bbe18778c8e63a29c19fd6035e7a9.png

     

    Thanks

    Adam

  10. I'm thinking of buying a filter to pair with the RedCat 51 and the Canon R6 mirrorless to cut out the light pollution in my area. I was originally thinking of either the Optolong L-eNhance dual narrowband filter or the L-eXtreme, but then I wondered: would they work or provide any benefit with the R6, since it is not astromodded?

    Would I be better off with the Optolong L-Pro broadband filter (or similar)?

    Suggestions on what would work or what others use would be appreciated!! Thanks!

    Adam

  11. 12 minutes ago, vlaiv said:

    Best to measure it, as you want darks that include any bias offset.

    Either take darks that you already have (from a previous session, or ones you prepared for this session) - or take a single dark in the field matching the light you want to try out.

    Measure the mean ADU of that dark sub and use that value.

    Btw - electrons to ADU conversion is simple - there is a published e/ADU for the gain you are using (you can read it off the graph).

    Divide by that value to convert from electrons to ADU, and multiply by it to convert from ADU to electrons.

    Just be careful if your camera has a lower bit count than 16 bits. In that case there is an additional step between sub values and ADUs (although we mean ADU when we say values measured directly from the sub). If the camera has fewer bits (like 12-bit), sub values are actually multiplied by 2^(16 - bit_count).

    In the case of the ASI1600 - since it is a 12-bit camera - all sub values are multiplied by 16 (2^(16-12) = 2^4 = 16). If your camera is 14-bit, the multiplier is 4 (2^2).
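The bit-depth scaling described above can be sketched in a few lines of Python (the gain of 1 e/ADU in the example is an assumed, illustrative value, not a measured figure):

```python
def sub_value_to_electrons(sub_value: float, e_per_adu: float, bit_depth: int = 16) -> float:
    """Convert a raw sub value (stored on a 16-bit scale) to electrons.

    Cameras with fewer than 16 bits store values multiplied by
    2**(16 - bit_depth); divide that out to get true ADU, then
    multiply by the published e/ADU gain figure.
    """
    adu = sub_value / 2 ** (16 - bit_depth)
    return adu * e_per_adu

# 12-bit ASI1600 example with an assumed gain of 1 e/ADU:
# a sub value of 1600 is 1600 / 16 = 100 ADU, i.e. 100 electrons.
print(sub_value_to_electrons(1600, 1.0, bit_depth=12))  # 100.0
```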

    That's great thank you! 

  12. 23 minutes ago, vlaiv said:

    You can do it on an uncalibrated sub, but you need to know the mean dark value.

    1. Read the mean background value in ADU

    2. Subtract the mean dark current value in ADU

    3. Convert to electrons ...

    The rest is the same (it works in "reverse" as well: (read_noise * 5)^2 = exposure_factor * (mean_background_adu - mean_dark_adu) * e/ADU)

    Is the dark current a value you need to measure, or can it be taken from the graph of dark current vs sensor temperature displayed in the camera specs? For example, for the ASI1600MM Pro at -20°C the graph gives 0.0062 e/s/pix.

    If the former, how do you measure it? Or if the latter, how do you convert to adu?

  13. On 14/06/2022 at 20:11, vlaiv said:

    Offset is not important for sub exposure length. Use the gain setting that you will be using for imaging.

    If you want to determine what is best tradeoff for sub length - here are guidelines:

    1. How much data do you want to stack and process? Shorter subs mean more data. Some algorithms like more data, others like good SNR per sub

    2. How likely is it that you'll get a ruined sub (for whatever reason - wind, earthquake, an airplane flying through the FOV - whatever makes you discard the whole sub; satellite trails can be easily dealt with in stacking if you use some sort of sigma rejection)? Longer discarded subs mean more imaging time wasted

    3. Differences in setup - in general, you'll have a different sub length for each filter, but sometimes you will want to keep a single exposure length over a range of filters (like the same exposure for LRGB, and the same for NB filters) as this simplifies calibration - only one set of darks instead of darks for each filter

    4. What is the increase in noise that you are prepared to tolerate?

     

    The only difference between many short subs and a few long subs (including one long sub lasting the whole imaging time) - totalling the same imaging time - is in read noise. More specifically, the difference comes down to how small the read noise is compared to the other noise sources in the system.

    When using cooled cameras and shooting faint targets, LP noise is by far the most dominant noise source - that is why we base the decision on it - but it does not have to be (another thing to consider when calculating). If you have very dark skies and use NB filters, it can turn out that thermal noise is the highest component, so the calculation should be carried out against it instead.

    In fact, you want the "sum" of all time-dependent noise sources (which are target shot noise, LP noise and dark current or thermal noise - all depend on exposure length), and to compare that to read noise.

    Read noise is the only time-independent type.

    Noises add like linearly independent vectors - square root of the sum of squares. This is the important bit, because it means the total increase is small if the components differ significantly in magnitude. Here is an example:

    Let's calculate the percentage increase if we have LP noise that is the same as, twice as large as, 3 times as large as, and 5 times as large as the read noise.

    The "sum" of noises will be sqrt(read_noise^2 + lp_noise^2), so we have the following:

    1. sqrt(read_noise^2 + (1 x read_noise)^2) = sqrt( 2 * read_noise^2) = read_noise * sqrt(2) = read_noise * 1.4142 ... or 41.42% increase in total noise due to read noise

    2. sqrt(read_noise^2 + (2 x read_noise)^2) = sqrt(5 * read_noise^2) = read_noise * sqrt(5) = read_noise * 2.23607 = (2 * read_noise) * (2.23607/2) = (2*read_noise) * 1.118 or 11.8% increase (over LP noise which is 2*read_noise in this case)

    3. sqrt(read_noise^2 + (3 x read_noise)^2) = sqrt(10 * read_noise^2) = read_noise * sqrt(10) = read_noise * 3.162278 = (3 * read_noise) * 1.054093 = 5.4% increase over LP noise alone (which is 3*read_noise here)

    4. sqrt(read_noise^2 + (5 x read_noise)^2) = sqrt(26 * read_noise^2) = read_noise * sqrt(26) = read_noise * 5.09902 = (5 * read_noise) * 1.0198 = 1.98% increase over LP noise alone

    From this you can see that if you opt for read noise to be x3 smaller than LP noise - it will be the same as having only 5.4% larger LP noise and no read noise, and if you select x5 smaller read noise - it will be like you increased LP noise by only 1.98% (and no read noise).

    Most people choose either x3 or x5, but you can choose any multiplier you want, depending on how much you want it to impact the final result. The thing is, as you increase the multiplier, the gains get progressively smaller, so there is really not much point going above ~x5
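The four worked cases above can be verified with a few lines of Python - just the same quadrature arithmetic, nothing camera-specific:

```python
import math

def noise_increase(lp_over_read: float) -> float:
    """Fractional increase of sqrt(read^2 + LP^2) over LP noise alone,
    where LP noise = lp_over_read * read noise."""
    k = lp_over_read
    return math.sqrt(1 + k * k) / k - 1

for k in (1, 2, 3, 5):
    print(f"LP = {k}x read noise -> +{noise_increase(k) * 100:.2f}% total noise")
# Prints ~41.42%, 11.80%, 5.41%, 1.98% - the figures quoted above.
```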

    Ok, but how to measure it?

    That is fairly easy - take any of your calibrated subs and convert to electrons using the e/ADU for your camera. A CCD will have a fixed system gain, while on CMOS it depends on the selected gain setting. Pay attention when using CMOS cameras if your camera has a lower bit count than 16 bits. In that case you need to additionally divide by 2^(16 - number_of_bits) - i.e. divide by 4 for a 14-bit camera, by 16 for a 12-bit camera, and by 64 for a 10-bit camera.

    When you prepare your sub, just select empty background and measure the mean, or even better the median, electron value on it (the median is better in case you select an odd star or a very faint object that you didn't notice). This gives you the background value in electrons.

    The square root of this value is your LP noise. You need to increase the exposure until this LP noise is your chosen factor times larger than the read noise of your camera.

    Alternatively, if you want to get the exposure from a single frame: take your read noise, multiply by the selected factor and square it - this gives you the "target" LP level. You then need to expose "target" / "measured" times longer (or shorter, depending on the number you get).

    Makes sense?
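The recipe quoted above can be sketched end to end in Python. Every number in the usage example is made up for illustration - substitute your own camera's read noise, e/ADU and measured background levels:

```python
def exposure_multiplier(mean_bg_value: float, mean_dark_value: float,
                        e_per_adu: float, read_noise_e: float,
                        factor: float = 5, bit_depth: int = 16) -> float:
    """How many times longer (or shorter) to expose so that LP noise
    swamps read noise by the chosen factor.

    Works on an uncalibrated sub: subtract the mean dark level,
    convert to electrons, then compare against the target
    background of (factor * read_noise)^2 electrons.
    """
    scale = 2 ** (16 - bit_depth)  # 16-bit scaling of raw sub values
    measured_e = (mean_bg_value - mean_dark_value) / scale * e_per_adu
    target_e = (factor * read_noise_e) ** 2
    return target_e / measured_e

# Hypothetical 12-bit camera, 1.6 e- read noise, 1 e/ADU, on a 60 s test sub
# with a mean background value of 800 and a mean dark value of 320:
mult = exposure_multiplier(800, 320, 1.0, 1.6, factor=5, bit_depth=12)
print(f"expose {60 * mult:.0f} s instead of 60 s")
```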

     

    Is it possible to use this calculation for an uncalibrated sub? Or is there an alternative method for uncalibrated subs, so you can easily and quickly determine if you're swamping the RN while out in the field?

  14. 11 hours ago, teoria_del_big_bang said:

    Where's the sparrow images then? 🙂

    Still on the camera at the moment! I'll try and post them up soon

    11 hours ago, teoria_del_big_bang said:

    I keep wanting one of these for a widefield setup but would need to sell something I think to justify another major purchase at this stage.

    Think of it as an investment 😀  or think of it as buy now and sell something else to make up the money later 😀

    • Like 2
    • Haha 1
  15. I was having these error messages with PCC earlier today as well. I tried a different database, tried manually selecting my object from the search function, acquired metadata from the image, ticked force plate solve, checked the coordinates etc. were correct - nothing worked.

    Then I ran the plate solver script; it came up with the same metadata entered into PCC (coordinates, observation date etc.), and then all of a sudden PCC worked, so I'm not entirely sure what that was about.

    • Like 1