
Posts posted by Whirlwind

  1. 9 hours ago, Jkulin said:

    Please understand that you have asked for opinions, but it just isn’t for me as it goes against the grain of my personal feelings about remote imaging.

    For me there are a number of classifications: -

    1. Those who hire time, downloading images using commercial equipment not owned by them - for me this doesn't contain any merit.

    2. Those that get a company to setup their equipment and maintain it, again to me that contains very little merit.

    3. Those that install, setup, configure and maintain their own equipment remotely, this indeed is in my opinion a worthy capture method, although not for me.

    So although I appreciate you have still had to process the images, it is rather like going fishing where everything is done for you, so that you can pose with the fish after it has been landed.

    No insult inferred, it just really isn’t for me

    There are a lot of reasons, though, why people interested in the hobby might not be able to image effectively, and remote imaging can fill that void.

    For example:-

    You work night shifts, have very early mornings, or work abroad frequently
    Your home residence isn't conducive to imaging (you live in a flat in the city)
    You have a medical condition, a bad back etc., that makes it difficult to do these things yourself anyway
    You live in the UK and would prefer your equipment didn't spend more time as a towel rack than imaging
    You prefer more science-based work (e.g. observing transits) for which the UK weather is not conducive
    You live in north Scotland and would prefer to image some summer targets
    You can use it to top up data you might have missed in the UK (e.g. you managed to get the LRG but not the B!)

    I get the point that for some the challenge is to get everything just right, but then some just like to have images to show for it.

    I do find the cost for iTelescope expensive though.  For the 'lowest grade' (and I say that with some hesitancy) you are still looking at £30 per hour.  A whole night's imaging would be close to £250.

     

    • Like 1
  2. Seems that you are getting lots of lovely imaging done... do you have a cloud gun?

    My suggestion would be to process the two elements separately: mask off the galaxies and the background in turn, then recombine them.

  3. 34 minutes ago, newbie alert said:

    Hang on.. thought the comparison was with a 460 with a Sony chip, not a Panasonic? The KAF-8300 chip will do better with a longer sub, the CMOS will do better with shorter subs..

    Problem is with a variable gain with a cmos you will get a variable electron reading.. cmos will have a lower stat as it works in a different way...do a 15min exposure with a cmos and compare...

    Let's compare apples with apples..

    .. Everyone I know that uses the 1600 in whatever format uses darkframe calibration....read noise is bias?

     

    Yeah this test is probably not ideal in the way it is being shown.  The KAF-8300 CCD has higher read noise and the image appears to be narrowband.  To truly get the best out of the CCD you need exposures long enough that the other sources of noise overwhelm the read noise.  With the low read noise of the CMOS this doesn't take very long, and 60s will do it.  With CCDs, for narrowband you need much longer exposures of 20-30 minutes to overwhelm the read noise.  A fairer test of achievable results would be to compare 4 x 1200s exposures vs 80 x 60s.  If you compared a 20-minute CMOS exposure to a 20-minute CCD exposure the results would probably favour the CCD.
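    The trade-off above can be sketched with a simple stacking noise model. This is an illustrative calculation only - the target and sky rates are assumed round numbers, not measurements from either camera; the read-noise figures are typical quoted values for a KAF-8300-class CCD and a low-read-noise CMOS.

```python
import math

def stack_snr(n_subs, sub_s, target_rate, sky_rate, read_noise):
    """SNR of a stack of n_subs exposures: signal adds linearly,
    shot noise and per-sub read noise add in quadrature."""
    signal = n_subs * target_rate * sub_s
    variance = n_subs * (target_rate * sub_s + sky_rate * sub_s + read_noise ** 2)
    return signal / math.sqrt(variance)

# Assumed narrowband scenario: faint target, dark filtered sky (e-/pixel/s)
target, sky = 0.05, 0.02

print(f"CCD  4 x 1200s (8.0 e- read noise): SNR {stack_snr(4, 1200, target, sky, 8.0):.1f}")
print(f"CMOS 80 x 60s  (1.7 e- read noise): SNR {stack_snr(80, 60, target, sky, 1.7):.1f}")
print(f"CCD  80 x 60s  (8.0 e- read noise): SNR {stack_snr(80, 60, target, sky, 8.0):.1f}")
```

    With these assumed numbers the two strategies come out nearly identical for the same total time, while chopping the high-read-noise CCD into 60s subs costs roughly a factor of three in SNR - which is the point being made above.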

    What this really demonstrates is that the CMOS allows us to image with less accurate mounts as any lost time because of guiding/worm irregularities can easily be discarded.  In this case the CCDs need better mounts so that they can expose for longer (especially in narrowband).

    They both have their pros and cons.  CMOS also tends to require much more aggressive dithering from what I have read to combat fixed pattern noise.  In addition around bright stars the micro-lens can give strange diffraction patterns.  But they can both give excellent results if used in a way that utilises their strengths and mitigates their weaknesses.

    • Thanks 1
  4. 23 hours ago, han59 said:

    With respect to timing an object in a stacked image, if the stacking program keeps a record of the start exposure times and fills the final stack with the correct earliest start exposure time and correct summed exposure time, this "timing problem" should be no problem.  I assume an equivalent solution would be to draw a trend through an abundant amount of measurements. All according to the same principle: noise reduces with more samples.

    Just noted the time is geocentric. Is helio-centric not more used?

     

    If you are imaging an object that has repeatable cycles (e.g. a transit) it is best to observe multiple periods and then phase fold the data onto the identified period.  This way you combine the data in phase to get a better signal-to-noise.  However, that can be trickier in the UK because of the weather and when a transit might occur.
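    Phase folding is straightforward once the period is known; a minimal sketch (the 3.5-day period and the observation times below are made-up values for illustration):

```python
def phase_fold(times, period, epoch=0.0):
    """Fold observation times onto a known period: phase in [0, 1)."""
    return [((t - epoch) / period) % 1.0 for t in times]

# Hypothetical transit with a 3.5-day period, observed on several nights
period = 3.5
times = [0.1, 3.6, 7.1, 10.6]   # days
phases = phase_fold(times, period)
print([round(p, 4) for p in phases])  # every observation folds to the same phase
```

    Once folded, measurements from different nights can be binned or fitted together in phase, which is what improves the signal-to-noise.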

    Yes, heliocentric (HJD) is much better to use. Barycentric (BJD) is even better really (although there isn't yet a standard determination for BJD), though that is only really important for very time-sensitive observations such as pulsars.  I'm not sure why anyone would want data in JD for anything that might be periodic.  It means you can be up to about 15 minutes out depending on the time of year, your observing location and where the object is located. If you want to compare observations over long time frames (e.g. if there are subtle changes over time) then using JD will introduce erroneous timing offsets.
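    The size of that timing error follows directly from the Earth-Sun light travel time; a quick back-of-envelope check (ignoring the ~1.7% variation in Earth's orbital radius):

```python
AU = 1.495978707e11   # metres
C = 2.99792458e8      # m/s

one_way = AU / C      # light travel time over 1 AU
print(f"1 AU light travel time: {one_way:.0f} s = {one_way / 60:.1f} min")

# An object timed in plain JD can be early or late by up to ~8.3 min
# depending on where the Earth is in its orbit relative to the target,
# i.e. a peak-to-peak spread of roughly 16-17 minutes over the year.
print(f"Peak-to-peak JD timing spread: {2 * one_way / 60:.1f} min")
```

    That peak-to-peak spread is what the "up to about 15 minutes out" figure above refers to; the exact value for a given target depends on its ecliptic position.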

  5. It's a lovely image. 

    My only comment is that you may have clipped the black point a bit too aggressively, which makes the contrast look a bit too stark.  In comparison, your M51 looked better in this regard.

    On the other hand, I find M33 is always a pain in the proverbial to process correctly for some reason.

  6. On 02/06/2019 at 16:03, LordLoki said:

    Hi whirlwind,

    Thanks for your post lot of good feedback. I would not have a permanent setup but most likely would not strip down completely since I have a lot of space downstairs to store the mount. 

    As a developer I would say I am not technology averse and have no problem with mechanics. I tinker a lot with electronics and software.

    I really like that you posted some high end options too. But I think for the beginning I will stick to something in the 2k range since I don't want too big of an argument with the wife :)

    That's OK, it's never entirely clear when people say they have no concerns over budget.  

    However, based on your description I would be wary about an EQ6.  They are not light; you are looking at 23-24kg for the mount and tripod.  Unless you are a weightlifter you are likely to strain something if you try to carry that out along with the telescope/camera/counterweights.  This is before you consider that it is not packaged in a conveniently liftable manner.

    It may seem OK during the day on this forum, but try lugging about 25kg in the dark and cold at 3am.  Equipment that is too heavy is a big factor for those stripping down and setting up every night, especially if the weather is uncertain.

    Given what you have stated are your limits, I would suggest you look at the Vixen SXD2.  It is lightweight and very easy to move, with what appears to be decent tracking.  It easily accommodates your scope and should have the flexibility to go up to something similar in size to the C925 Edge, which would be more than enough focal length for most.  It is also within your budget limit.  You are likely to be able to carry out the whole setup in one go.

  7. If you want to get into astrophotography then the mount is the most important aspect of your setup.  With a good mount most telescopes/cameras can give you a good image once you have learnt the processing steps.  But there is a learning curve.

    However, they also depend on your circumstances.  Do you have a permanent setup or will you be setting up and stripping down each night?  If the latter, then something less weighty would be useful - the quicker you can set up, the more likely you are to use it.  Are you technology-averse or mechanics-averse? Some people like to tinker with the mechanics but don't like a lot of software gizmos, and vice versa.

    If money really is no object and you do want to really get into astrophotography then my suggestion would be to skip the above mounts.  Not that they are bad, but they can still sometimes be a challenge to get working right out of the box and may need refining.

    If you want a full long term mount and money really is no object then I'd also consider the following (they all have pros and cons):-

    Mach2GTO (Astrophysics) , or a Mach1GTO if you can find one used (US based so some issues if there ever is a problem).
    10Micron GM1000 HPS
    Avalon Instruments M-Uno
    Software Bisque MyT

    or if you are after something more lightweight then the Vixen SXP2 could also fit the bill.

    Really depends on how much you are willing to spend...

    • Like 2
  8. Cool, great to see that you have found a solution.

    I suspect what this is doing is reducing the dynamic range (so most of the walking noise sits within one or two levels) and the offset is then slightly clipping this out, leaving just the small remaining edges of the walking noise.  It may be worth playing with the parameters slightly, starting from this point, to see if changing the gain/offset gives any further improvement (for example, removing the remaining trace of the noise).
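    The clipping idea can be illustrated with a toy model. The pixel values and offset here are invented for the demonstration (real noise spans a distribution, so in practice only most of it clips away, leaving the faint residual edges mentioned above):

```python
# Toy model: walking noise occupying only the lowest couple of ADU levels
# is removed when an offset shift clips those levels to zero.
pixels = [0, 1, 2, 1, 0, 2, 1, 50, 0, 1]   # mostly 1-2 ADU noise, one real 50 ADU signal
offset = 2
clipped = [max(p - offset, 0) for p in pixels]
print(clipped)  # the low-level pattern vanishes; the real signal survives, reduced by the offset
```

    This is why tweaking gain (which sets how many ADU levels the noise spans) and offset (which sets the clip point) together can tidy up the last traces.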

  9. 58 minutes ago, TheMan said:

    I don't believe I have changed anything, but i'll definitely try resetting everything to default and making a new profile. Also is -10 on the camera too low/high for good results?

    I don't quite understand what you mean with that part.

    In terms of the video, a lot of software allows you to produce a playback of the blinked images, which would demonstrate how frequently this occurs and what is happening between each image.

    I'm unsure about colour CMOS cameras in terms of actual settings. I think noise in CMOS is meant to be generally quite low so excess cooling may be unnecessary?

    It is strange that it has gone from being not a problem to all of a sudden being so which would suggest to me some change in the setup that is causing the issue.

     

  10. I think it is better to confirm this first.  Have you blinked the sequence of images without a star correction to see whether there is any movement?

    I am unsure whether SGP can manage the dither to the above degree of requirements, but increasing the dithering pixel step size/duration enough to wind in the backlash should help.  Are you guiding at all or is it very short subs stacked?

    Sometimes placing a slight misalignment in the balance that works against the backlash can also help - but is dependent on where you are imaging.  Effectively the slight moment on one side acts to 'press against' the backlash.

  11. Fixed pattern noise is difficult to post-process out because it isn't random per se, making it hard for software to identify (it appears more like 'real' data).

    Dithering is meant to resolve this because it adds a random shift into each image, so the fixed pattern noise becomes more apparent to the software as the noise it is.

    The noise in the fixed pattern arises from the read noise, so it can be more prevalent in CMOS cameras because of the low read noise and the tendency to use very short exposures.  Increasing exposure durations so that thermal noise overwhelms the read noise would be one way to overcome this.  You could also image with the camera slightly warmer to increase the thermal noise a bit.
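    How long "long enough" is can be estimated from the usual rule of thumb that the background (sky plus dark current) shot-noise variance should exceed the read-noise variance by some factor. The background rate and the factor of 3 below are assumptions for illustration, not measured values:

```python
def swamp_time(read_noise, background_rate, factor=3.0):
    """Sub length (s) at which background (sky + dark current) shot-noise
    variance reaches `factor` times the read-noise variance:
    background_rate * t = factor * read_noise**2."""
    return factor * read_noise ** 2 / background_rate

rate = 0.13   # e-/pixel/s, assumed combined narrowband sky + dark current

print(f"CMOS, 1.7 e- read noise: {swamp_time(1.7, rate):.0f} s")
print(f"CCD,  8.0 e- read noise: {swamp_time(8.0, rate) / 60:.0f} min")
```

    With these assumed numbers the low-read-noise CMOS is swamped in about a minute, while an 8 e- CCD needs subs in the 20-30 minute range - consistent with the figures quoted earlier in the thread.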

    For this image it may be worth checking that your darks don't need updating if you are using them.

    Also try blinking your images to check that dithering is actually working as expected.  If you have a lot of backlash in the mount then you may think you are dithering, but in one direction the move may just be winding up the backlash.  The 'rain pattern' might suggest dithering isn't working in one direction.

  12. Is this just for imaging, or visual as well?  I ask because if you want a longer focal length of about 1600mm and it is just for imaging, then why not say the VXL12"?  This would be less burdensome mount-wise, and aperture is less important (compared to visual) when it comes to imaging.  It's also not as costly, so would allow more funds to be spent on the mount (which is always the critical part of an imaging setup).  You are saving about £2k.  That could then go towards upscaling the mount (which might open up things like the MESU) and/or a cooled colour CMOS camera, which would also likely improve your images.  If you are committed to going for this weight capacity, the only other mount with this capacity and price range that I know of is the JTW OGEM, but it is relatively untested (the base version is similar to the CEM120 in price).

    On the other hand if you want a visual telescope as well then a 16" will work wonders and that changes your mount choices.

    • Like 1
  13. 9 hours ago, PhotoGav said:

    I occasionally notice vertical banding on subs gathered with my QSI 683-WSG8. Scouring the net I have found an old thread on Cloudy Nights that goes into detail about this problem. It seems that I am not alone. My instance of the issue fits all the descriptions mentioned - it happens in very cold ambient temperatures and is most obvious down the left side of the image. It appears that there is a fix for the issue in a release of Firmware, but I cannot find the firmware for download anywhere... Has anybody else had this issue, have they fixed it and do they have the firmware available to share with me please?

    Have you tried the Wayback Machine?  You can get access to version 6.3.1 from here (not sure what version you are using?):-

    https://web.archive.org/web/20161224162028/http://qsimaging.com/software.html

    It appears that Jan 1st 2017 is the last time it took a snapshot where you could download the files directly.  After that it always said to contact customer support.  I did have a link to the beta testing site, but that seems to have been killed off since they updated the website.

    Thanks

    Ian

     

  14. On 18/12/2018 at 00:54, vlaiv said:

    This leads me to conclude that your description does not account 100% for what is going on - clearly the presence of the filter and its reflective surfaces in some way contributes to the interference pattern. It is also very indicative that different people get different results based on the type of filters they are using. Some even have this effect on R, G and B interference filters.

    Don't get me wrong - I'm not trying to say that your analysis is flawed and that you are wrong - I'm just questioning whether it's the complete picture of the issue. From the above, it seems to me that it might be the case that the type of filter, its position in the optical train and the F-ratio of the system all have "a say" in how pronounced the effect is, or whether it's there at all.

    I think you may be correct in that the filter has an effect but perhaps not in the way you are thinking.

    I would suggest the following reason for an enhanced effect in the narrowband filters.  It appears to be generally accepted that the diffraction pattern is off the microlens.  However, it is a lens that likely diffracts different wavelengths by differing amounts.  In a broadband image you have many different wavelengths that are all being diffracted by slightly different amounts.  As each wavelength is diffracted slightly differently, they all overlay at the sensor in slightly different ways.  As the pixels detect these wavelengths broadly equivalently (a broad assumption), the patterns merge and form one continuous 'blob' (highly technical term).  Hence you likely get a broadened star but with no pattern.

    As you place narrowband filters in the imaging train you now have one specific wavelength, which has one specific diffraction pattern.  As there are no other wavelengths affecting the CCD, you get to see that diffraction pattern directly.

    In effect, the broadband image is like taking thousands of individual narrowband images, each with a slightly altered pattern, so that when they overlay they merge.

    As such you can clearly see the effect in narrowband but not in the broadband images, but it is still there.

    This should be testable by using consecutively broader filters.  If you started with a 3nm Ha filter and then tested the same exposure using a 5nm, 6nm, 9nm, 12nm Ha and so forth then as you broadened the wavelength range you should see that the effect 'fades away'.  It might also explain why some people don't see the effect.  They are using wider narrowband filters compared to someone that easily sees it in a Chroma/Astrodon 3nm. 
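    The wash-out argument can be simulated with a toy fringe model: a cos² fringe whose spacing scales with wavelength (a stand-in for the real microlens diffraction, which is more complicated), averaged over the filter bandpass. All the numbers here are illustrative:

```python
import math

def averaged_fringe(x_nm, centre_nm, width_nm, samples=201):
    """Mean of a simple cos^2 interference fringe over a filter bandpass.
    Toy model: fringe phase = pi * x / lambda, so spacing scales with wavelength."""
    total = 0.0
    for i in range(samples):
        lam = centre_nm - width_nm / 2 + width_nm * i / (samples - 1)
        total += math.cos(math.pi * x_nm / lam) ** 2
    return total / samples

def contrast(width_nm, centre_nm=656.3, x0_nm=40_000.0, span_nm=700.0, steps=400):
    """Michelson contrast (max-min)/(max+min) of the averaged fringe
    over roughly one fringe period at path difference x0_nm."""
    vals = [averaged_fringe(x0_nm + span_nm * i / steps, centre_nm, width_nm)
            for i in range(steps + 1)]
    return (max(vals) - min(vals)) / (max(vals) + min(vals))

print(f" 3 nm Ha filter: fringe contrast {contrast(3):.2f}")   # pattern clearly visible
print(f"30 nm filter:    fringe contrast {contrast(30):.2f}")  # pattern largely washed out
```

    Broadening the bandpass smears the phases of the overlapping patterns, so the fringe contrast falls steadily - the "fades away" behaviour predicted above for the 3nm to 12nm filter sequence.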

    • Like 1
  15. I suppose you could put the larger CCD (I'm assuming the Atik?) on the 6 inch and then bin the 460 (I assume) until you get roughly the same "/pixel, then image the same object in 3nm narrowband.  That should give you the calibration for the CCD sensitivity.  You can then run the same exercise with the two telescopes, making the "/pixel roughly equivalent, compare a bright but non-saturated object's flux (a star is likely best) and divide out the response difference between the CCDs.

    Of course that means multiple stripping and rebuilding of setups and retesting, and imaging time is valuable.
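    Matching the "/pixel of the two setups uses the standard plate-scale formula; the pixel size and focal length below are placeholders for illustration, not the actual kit being discussed:

```python
def pixel_scale(pixel_um, focal_mm, binning=1):
    """Image scale in arcsec/pixel: 206.265 * pixel size (um) / focal length (mm),
    multiplied by the binning factor."""
    return 206.265 * pixel_um * binning / focal_mm

# Hypothetical pairing: 4.54 um pixels (460-style sensor) on a 750 mm scope
print(f"unbinned: {pixel_scale(4.54, 750):.2f} arcsec/pixel")
print(f"2x2 bin:  {pixel_scale(4.54, 750, binning=2):.2f} arcsec/pixel")
```

    Adjusting the binning factor on one camera until the two scales roughly agree is the equalisation step described above.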

  16. On ‎08‎/‎12‎/‎2018 at 12:31, ollypenrice said:

    Why are you keen to add resolution via focal length and then reduce it via binning? It really doesn't matter whether you fill the frame or not. What matters is how many pixels (single or 4 pixel superpixels) you put under the object's image as projected by the telescope. It is this which determines the object's image's final size at full size (1 camera pixel = 1 screen pixel.) *

    Earlier in the year I did a comparison in Astronomy Now between a 14 inch scope (2.4M FL) with large pixels and a 150mm scope (1M FL) with small pixels on galaxies. I found that the level of resolution and the size at which the images could be presented on screen was effectively the same. It seems that the considerably greater optical resolution of the larger scope was not translating into more final details. In both cases the mount was running with an RMS of less than half the image scale.

     

    I didn't read the article myself, but this in itself should not be a surprise.  I'm assuming the resolution of the test between the 14 inch and 6 inch was similar overall?

    The benefit of a larger scope comes from the fact that at the same arcsec/pixel you can get more flux per pixel in the same amount of time.  So if you have 0.8"/pixel on the 6", then on a 14" at the same resolution (and assuming the same sensitivity) you'll get more flux on each pixel.  As such, fainter details should show up earlier for less time observed. Usually, though, longer focal length instruments have much smaller "/pixel and spread the light out further, so you don't see this gain from aperture.
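    The per-pixel flux gain at matched sampling is just the ratio of collecting areas. Using the apertures from the comparison above (and ignoring central obstruction and transmission differences, which would reduce the real-world gain):

```python
def flux_gain(aperture_a_mm, aperture_b_mm):
    """At equal arcsec/pixel and equal efficiency, flux per pixel scales
    with collecting area, i.e. the square of the aperture ratio."""
    return (aperture_a_mm / aperture_b_mm) ** 2

# 14-inch (355.6 mm) vs 150 mm, as in the comparison discussed above
print(f"{flux_gain(355.6, 150):.1f}x more flux per pixel for the 14-inch")  # about 5.6x
```

    So at the same "/pixel the larger scope reaches a given faint surface brightness roughly 5-6 times faster, even though the resolved detail is sky-limited in both.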

    I think this is sometimes where the 'getting more resolution' idea comes from.  There is likely a limit in any sky as to how well resolved you can make an object (without fancy tech or 8000ft-high telescopes, anyway). However, you should get to the fainter stuff quicker *at the same "/pixel*, which might make it seem like you are getting higher resolution.

  17. 4 hours ago, ollypenrice said:

    I use both Petzval scopes (whose flatteners are on the objective side of the focuser) and flattened triplets (whose flatteners are on the camera side of the focuser) and have never found that flats were sensitive to fine focus. And think about all those robotic scope users. They can't shoot flats 'per focus.' No, I simply don't believe that flats are sensitive to fine focus at the level carried out during a night's imaging. It simply flies in the face of everyone's experience.

    Olly

    However, all of these are going to be dependent on the system being used.  The FSQs are well-corrected sealed units, so there is less likelihood that these will show some form of aberration.  There are likely a number of factors that come into play.  What we do know is that the size of the shadowing is dependent on the distance to the sensor.  The further away the dust, the larger the shadow becomes, but also the more diffuse, as the effect is spread out over a larger area of the CCD. The reason we don't see dust from an objective is because it is so far away that effectively it has been 'diffused' into invisibility.

    On that basis it then becomes a question of the percentage change where moving the sensor relative to the dust makes a noticeable difference.  If the back focus distance between CCD and dust was 50mm and by focusing you changed that by 1mm, then the difference in the dust shadow will be more significant compared to one where the back focus distance is much larger (let's say 150mm).  It will also be dependent on the size of the pixels, as you will be more sensitive to such changes as they get smaller.  Hence it may not be the case that these effects aren't there in remote setups, just that they aren't noticeable because of how the setup is designed.  However, more flexible setups may find they are more susceptible to this kind of fluctuation.
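    The geometry can be put in numbers: in a converging f/F beam, dust a distance d in front of the sensor casts a shadow of diameter roughly d/F, so the fractional size change from a 1 mm focus shift is simply 1/d. A sketch with an assumed f/5 beam and the 50 mm vs 150 mm distances from the paragraph above:

```python
def shadow_diameter_mm(distance_mm, f_ratio, dust_mm=0.0):
    """Out-of-focus dust shadow: a converging f/F beam has diameter
    distance / F at a plane `distance` in front of the sensor."""
    return distance_mm / f_ratio + dust_mm

f = 5.0   # assumed f-ratio for illustration
for d in (50.0, 150.0):
    before = shadow_diameter_mm(d, f)
    after = shadow_diameter_mm(d + 1.0, f)   # 1 mm focus shift
    print(f"dust at {d:.0f} mm: shadow {before:.1f} -> {after:.1f} mm "
          f"({100 * (after - before) / before:.1f}% change)")
```

    The same 1 mm shift gives a 2% shadow change at 50 mm but only ~0.7% at 150 mm, which is why dust close to the sensor is the sensitive case.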

     

    edit  - apologies for the typos, I hate typing on an iPad.

  18. 5 hours ago, ollypenrice said:

    And, although I follow Oddsocks' logic, I have never ever detected any issues arising from changes in focus between flats and capture. We refocus regularly but nobody shoots a set of flats prior to doing a refocus - do they? I've never heard of anyone doing so and they'd be pretty unpopular at star parties. ? If, inadvertently, you'd shot your flats at a very different point of focus then, yes, they would probably have an issue but the tiny adjustments we make to focus on an imaging run simply do not require replicating when shooting flats.  Is there any chance of a big accidental change in focus?

    It probably depends on which side of the focuser the flattener is.  If the flattener is on the telescope side, or there is a set of lenses much nearer the CCD (e.g. an Edge), then if you have any dust on the flattener you may need separate flats when you change the focus position, because you are so much closer to the dust.  However, I imagine that in most cases it probably isn't noticeable if you change focus position several times over one night, as the stacking will likely reject the variations. It only becomes obvious when the flats are done at one position and the lights at another.

    My general suspicion is that these effects arise when there is some slight motion because something isn't quite locked down as tightly as you realise resulting in a slight shift and a residual 'ring'.

  19. Have you definitely ruled out a guiding issue by seeing if the same thing happens when the Lodestar is observing but not sending guiding corrections?  Your explanation appears cyclical, which I would not expect to arise from a shift of the Lodestar - I would have thought that would be more 'jumpy' as the stress overcomes the resistance.  A cyclical effect would imply a cyclical cause, which would indicate a gear-related issue?  What is your guiding exposure and mount correction rate?
